
MuddyWater is an APT group whose targets have mainly been in the Middle East, including the Kingdom of Saudi Arabia, the United Arab Emirates, Jordan, and Iraq, with a focus on oil, military, telecommunications, and government entities.


The group uses spear-phishing as its initial attack vector. The email contains an attached Word document that tries to trick the user into enabling macros. The attachment's filename and content are usually tailored to the target, including the language used.


In the below example, we will look at the behavior of the following malware sample:

SHA-256: bfb4fc96c1ba657107c7c60845f6ab720634c8a9214943b5221378a37a8916cd

MD5: 16ac1a2c1e1c3b49e1a3a48fb71cc74f


Filetype: MS Word Document



Endpoint Behavior

This specific malware sample is aimed at an Arabic-speaking victim in Jordan; the filename "معلومات هامة.doc" translates to "important information.doc". Other variants contain content in other languages, such as Turkish, for targets in other countries.


The file displays blurred Arabic text, with a message telling the target to enable content (and therefore macros) to unlock the content of the document.


Once the user clicks "Enable Content", we're able to see the following behaviors in RSA NetWitness Endpoint.


1- The user opens the file. In this case, the file was opened from the Desktop folder; had it been opened from email, the parent process would have been "outlook.exe" instead of "explorer.exe"


2- The malware uses "rundll32.exe" to execute the dropped file (C:\ProgramData\EventManager.log), helping it evade detection


3- PowerShell is then used to decode the payload of another dropped file ("C:\ProgramData\WindowsDefenderService.ini") and execute it. Since the full arguments of the PowerShell command are captured, an analyst could use them to decode the content of the "WindowsDefenderService.ini" file for further analysis


4- PowerShell modifies the "Run" registry key to run the payload at startup
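For illustration only, a persistence entry of this kind would look something like the following .reg export; the value name and script path here are hypothetical, not taken from this sample:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run]
; hypothetical value name and payload path
"WindowsDefenderUpdater"="powershell.exe -w hidden -exec bypass -file C:\\ProgramData\\a.ps1"
```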


5- Scheduled tasks are also created 



After this, the malware only continues execution after a restart (possibly as a layer of protection against sandboxes).


6- The infected machine is restarted


7- An additional PowerShell script, "a.ps1", is dropped


8- Some Windows security settings are disabled (such as Windows Firewall and antivirus)




By looking at the network activity on the endpoint, we can see that PowerShell has generated a number of connections to multiple domains and IPs (possible C2 domains).



Network Behavior

To examine the network side in more detail, we can leverage the captured network traffic in RSA NetWitness Network.


In RSA NetWitness Network, we can see communication from the infected machine to multiple domains and IP addresses over HTTP, matching what originated from PowerShell in RSA NetWitness Endpoint.

We can also see that most of the traffic targets "db-config-ini.php". This suggests the attacker has compromised several different legitimate websites, and that the "db-config-ini.php" file is owned by the attacker.


Having the full payload of the session on RSA NetWitness network, we can reconstruct the session to confirm that it does in fact look like beaconing activity to a C2 server.



Even though the websites used might be legitimate (but compromised), we can still see suspicious indicators, such as:

  • POST request without a GET
  • Missing Headers
  • Suspicious / No User-Agent
  • High number of 404 Errors
  • ...





We can see how the attacker uses legitimate, trusted, and possibly whitelisted modules, such as PowerShell and rundll32, to evade detection. The attacker also uses innocuous file names for the dropped files and scripts, such as "EventManager" and "WindowsDefenderService", to avoid arousing analysts' suspicion.


As shown in the screenshot below, even though "WmiPrvSE.exe" is a legitimate Microsoft file (it has a valid Microsoft signature as well as a known, trusted hash value), its behavioral activity (shown in the Instant IOC section) allows us to assign it a high behavioral score of 386. It should also be noted that any of the suspicious IIOCs that were detected could trigger a real-time alert over syslog or e-mail for early detection, even though the attacker is using advanced techniques to avoid detection.




Similarly, on the network side, even though the attacker leverages (compromised) legitimate sites, standard protocols (HTTP), and encrypted payloads to avoid detection and suspicion, it is still possible to detect these suspicious behaviors using RSA NetWitness Network by looking for indicators such as POST with no GET, suspicious user agents, missing headers, or other anomalies.






The following IOCs can be used to check whether activity from this APT currently exists in your environment.

This list is not exhaustive and is only based on what has been seen during this test.


Malware Hash

SHA-256: bfb4fc96c1ba657107c7c60845f6ab720634c8a9214943b5221378a37a8916cd

MD5: 16ac1a2c1e1c3b49e1a3a48fb71cc74f



Domains

  • ipripak,org


IP Addresses


RSA NetWitness gives you the ability to use remote Virtual Log Collectors (VLCs) to reduce your footprint and the number of ports required. RSA NetWitness can leverage different mechanisms to retrieve (pull) or send (push) logs from or to a log collector.


Many customers and RSA partners use a VLC to send logs from a remote location to a cloud or centralized infrastructure behind one or more firewalls in an isolated network. In an isolated network, the VLC won't have any route to this central location, and the following article will help you configure your platform properly.


Before deploying your VLC, verify that the host configuration for your head unit is set to nw-node-zero :


When this is done, deploy your VLC in your virtual infrastructure and launch nwsetup-tui to continue the installation. When the setup asks you for the IP of Node Zero, enter the external IP of your head unit. For example, in an isolated network, a firewall will control any communication into the isolated network:


Corporate LAN (192.168.0.x) --> Firewall WAN interface --> Firewall LAN interface (isolated network) --> NetWitness head unit


NOTE: You need to open the required ports for this installation in your firewall. You can refer to the official documentation on network/port requirements at the following link: Deployment: Network Architecture and Ports


In this example, the Node Zero external IP is the firewall's WAN interface for this isolated network; when completing the setup, make sure you use this external Node Zero IP.


When this is done, launch the install process on the VLC and after several minutes the VLC will be up and running:


Next, we need to configure the VLC to send the logs to the log decoder behind the Firewall:


During this process, the operation will succeed, but the recorded IP will be the internal IP of the Log Decoder; we need to change this information to re-establish communication.


We need to modify the shovel.conf file so we can send our logs to the Log Decoder using the same process for this isolated network. To facilitate this, you can add another IP to your firewall and configure a one-to-one NAT for your Log Decoder. For this example, we have a one-to-one NAT for the Log Decoder using an additional IP on the external interface of the firewall.


The shovel.conf file is located on the VLC at the following path:



Connect to your VLC using SSH, edit the file, and change the IP to the external IP of your firewall for your isolated network:
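Since shovel.conf is a small text file, that edit boils down to a string substitution. A minimal sketch (the IP addresses are placeholders, and backing up the file first is just a precaution):

```python
from pathlib import Path

def retarget_shovel(conf, internal_ip, external_ip):
    """Replace the Log Decoder's internal IP in shovel.conf with the
    firewall's external (one-to-one NAT) IP, keeping a .bak copy."""
    path = Path(conf)
    backup = path.parent / (path.name + ".bak")
    backup.write_text(path.read_text())
    path.write_text(path.read_text().replace(internal_ip, external_ip))
```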


When this is completed, reboot your VLC; in the RSA NetWitness UI you will see the green dot confirming that communication is working:


Context menu actions have long been a part of the RSA NetWitness Platform. v11.2 brought a few nice touches to help manage the menu items, as well as extend these functions into more areas of the product.


See here for previous information on the External Lookup options:

Context Menus - OOTB Options 


And these for Custom Additions that are useful to Analysts:

Context Menu - Microsoft EventID 

Context Menu - VirusTotal Hash Lookup 

Context Menu - RSA NW to Splunk 

Context Menu - Investigate IP from DNS 



As always access to the administration location is located here:

Admin > System > Context Menu Actions


The first thing you will notice is a somewhat different look, since a good bit of cleanup has been done in the UI.


Before we start trimming the menu items, here is what it looks like before the changes:

Data Science/Scan for Malware/Live Lookup are all candidates for reduction.


When you open an existing action or create a new one you will also see some new improvements.

It is no longer just a large block of text that can only be edited if you know what to change and where, but a set of options you can adjust to implement your custom action (or tweak existing ones).


You can switch to the advanced view to get back to the old freeform world if you want to.


Clean up

To clean up the menu for your analysts, you might consider disabling these items if you don't have a warehouse from RSA installed:

Sort by Group Name, locate the Data Science group, and disable all of its rules (4)

Disable any of the External lookup items that are not used or not important for your analysts

Scan for Malware - are you logs-only? Then Malware Analysis is not needed. The same applies if you have packets or endpoint but don't use Malware Analysis.

Live Lookup - mostly doesn't provide value to analysts

Now you should have a nice clean right click action menu available to investigators to do their job better and faster.

The RSA NetWitness Platform has multiple new enhancements as to how it handles Lists and Feeds in v11.x.  One of the enhancements introduced in the v11.1 release was the ability to use Context Hub Lists as Blacklist and/or Whitelist enrichment sources in ESA alerts.  This feature allows analysts and administrators a much easier path to tuning and updating ESA alerts than was previously available.


In this post, I'll be explaining how you can take that one step further and create ESA alerts that automatically update Context Hub Lists that can in turn be used as blacklist/whitelist enrichment sources in other ESA alerts.  The capabilities you'll use to accomplish this will be the ESA's script notifications, the ESA's Enrichment Sources and the Context Hub's List Data Source.


Your first step is to determine what kind of data you want to put into the Context Hub List.  For my test case I chose source and destination IP addresses.  Your next step is to determine where this List should live so that the Context Hub can access it.  The Context Hub can pull Lists either via HTTP, HTTPS, or from its local file system on the ESA appliance - for my test case I chose the local filesystem.


With that decided, your next step is to create the file that will become the List - the Context Hub looks within the /var/netwitness/contexthub-server/data directory on the ESA, so you'll create a CSV file in this location and add headers to help you (and others) know what data the List contains:
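For example, the first line of the CSV might look like this (date_added and source_alert are the columns used later in this post; the IP column names are just the ones chosen for this test case):

```
ip.src,ip.dst,date_added,source_alert
```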


**NOTE** Be sure to make this CSV writeable for all users, e.g.:

# chmod 666 esaList.csv


Next, add this CSV to the Context Hub as a Data Source. In Admin / Services / Contexthub Server / Config --> Data Sources, choose List:


Select "Local File Store," then give your List a name and description and choose the CSV from the dropdown:


If you created headers in the CSV, select "With Column Headers" and then validate that the Context Hub can see and read your file.  After validation is successful, tell the Context Hub what types of meta are in each column, whether to Append to or Overwrite values in the List when it updates, and also whether to automatically expire (delete) values once they reach a certain age (maximum value here is 30 days):


For my test case, I chose not to map the date_added and source_alert columns from the CSV to any meta keys, because I only want them for my own awareness to know where each value came from (i.e.: what ESA alert) and when it was added.  Also, I chose to Append new values rather than Overwrite, because the Context Hub List has built in functionality that identifies new and unique values within the CSV and adds only those to the List.  Append will also enable the List Value Expiration feature to automatically remove old values.


Once you have selected your options, save your settings to close the wizard.  Before moving on, there are a few additional configuration options to point out which are accessible through the gear icon on the right side of the page.  These settings will allow you to modify the existing meta mapping or add new ones, adjust the Expiration, enable or disable whether the List's values are loaded into cache, and most importantly - the List's update schedule, or Recurrence:


**NOTE** At the time of this writing, the Schedule Recurrence has a bug that causes the Context Hub to ignore any user-defined schedule, which means it will revert to the default setting and only automatically update every 12 hours.


With the Context Hub List created, you can move on to the script and notification template that you will use to auto-update the CSV (both are attached to this blog - you can upload/import them as is, or feel free to modify them however you like for your use cases / environment).  You can refer to the documentation (System Configuration Guide for Version 11.x - Table of Contents) to add notification outputs, servers, and templates.


To test that all of this works and writes what you want to the CSV file (for my test case, IP source and destination values), create an ESA alert that will fire with the data points you want to capture, and then add the script notification, server, and template to the alert:


After deploying your alert and generating the traffic (or waiting) for it to fire, verify that your CSV auto-updates with the values from the alert by keeping an eye on the CSV file.  Additionally, you can force your Context Hub List to update by re-opening your List's settings (the gear icon mentioned above), re-saving your existing settings, and then checking its values within the Lists tab:



You'll notice that in my test case, my CSV file has 5 entries in it while my Context Hub List only has 3 - this is a result of the automatic de-duplication mentioned above; the List is only going to be Appending new and unique entries from the CSV.


Next up, add this List as an Enrichment Source to your ESA.  Navigate to Configure / ESA Rules --> Setting tab / Enrichment Sources, and add a new Context Hub source:


In the wizard, select the List you created at the start of this process and the columns that you will want to use within ESA alerts:


With that complete, save and exit the wizard, and then move on to the last step - creating or modifying an ESA alert to use this Context Hub List as a whitelist or blacklist.


Unless your ESA alert requires advanced logic and functionality, you can use the ESA Rule Builder to create the alert.  Within your alert statement, build out the alert logic and add a Meta Whitelist or Meta Blacklist Condition, depending on your use case:


Select the Context Hub List you just added as an Enrichment Source:


Select the column from the Context Hub List that you want to match against within your alert:


Lastly, select the NetWitness meta key that you want to match against it:


You can add additional Statements and additional blacklists or whitelists to your alert as your use case dictates.  Once complete, save and deploy your alert, and then verify that your alerts are firing as expected:


And finally, give yourself a pat on the back.

For those who are interested in becoming certified on the RSA NetWitness Platform - we have some great news for you!  This process just became a whole lot easier... you no longer have to travel to a Pearson VUE testing center to take the certification exams.  All four of the RSA NetWitness certifications can now be taken through online proctored testing!  That's right... 100% online!


You can find all of the details on the RSA Certification Program page.  There's also a page specifically for the RSA NetWitness Platform certifications where you can find details about the certifications, try out one of the practice exams, register to take a certification and much, much more.  


RSA NetWitness has 4 separate certifications available:

  1. RSA NetWitness Logs and Network Admin
  2. RSA NetWitness Logs and Network Analyst
  3. RSA NetWitness Endpoint Admin
  4. RSA NetWitness Endpoint Analyst


I wish you all the best of luck and encourage you to continue your professional development by becoming certified on our technology.  

The RSA NetWitness Platform has an integrated agent available that currently performs base Endpoint Detection and Response (EDR) functions but will shortly have more complete parity with ECAT (in v11.x).  One beneficial feature of the Insights agent (otherwise called the NWE Insights Agent) is Windows log collection and forwarding.


Here is the agent Install Guide for v11.2:


The Endpoint packager is built from the Endpoint Server (Admin > Services), where you can define your configuration options.  To enable Windows log collection, check the box at the bottom of the initial screen.


This expands the options for Windows log collection...

Define one or more Log Decoder/Collector services in the current RSA NetWitness deployment to send the endpoint logs to (define a primary and secondary destination)


Define your channels to collect from

The default list includes 4 channels (System, Security, Application and ForwardedEvents)

You can also add any channel you want, as long as you know the EXACT name of it

In the Enter Filter Option selection box, enter the channel name.

In this case, Windows PowerShell (again, make sure you match the exact event channel name or you will run into issues)

We could also choose to add some other useful event channels

  • Microsoft-Windows-Sysmon/Operational
  • Microsoft-Windows-TerminalServices-LocalSessionManager/Operational
  • Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational


You can choose to filter these channels to include or exclude certain events as well.


Finally, set the protocol to either UDP/TCP or TLS.


Generate Agent generates the download that includes the packager and the config files that define the agent settings.


From there you can build the agents for Windows, Linux, and Mac from a local Windows desktop.

Agents are installed as normal using local credentials or your package management tool of choice.


Now that you have Windows events forwarded to your Log Decoders, make sure you have the Windows parser downloaded from RSA Live and deployed to your Log Decoders to start parsing the events.

The Windows parser is slightly different from the other Windows log parsers (nic, snare, er) in that there are only 7 message sections (one each for the default channels, plus a TestEvent and Windows_Generic).


For the OOTB channels the Message section defines all the keys that could exist and then maps them to the table-map.xml values as well as the ec tags. 

Log Parser Customization 


Windows_Generic is the catch-all for this parser, and any custom-added channel will only parse through this section.  This catch-all needs some help to make use of the keys that come from the channels we have selected, which is where a windowsmsg-custom.xml (a custom addition to the Windows parser) comes in (an internal feature enhancement has been added to make these OOTB).


Get the windows-custom parser from here:

GitHub - epartington/rsa_nw_log_windows: rsa windows parser for nw endpoint windows logs 

Add it to the Windows parser folder on the Log Decoder(s) that you configured in the endpoint config



Reload your parsers.

Now you should have additional meta available for these additional event channels.




What happens if you want to change your logging configuration but don't want to re-roll an agent? The Log Collection Guide (here) shows how to add a new config file to the agent directory to update the channel information

(page 113)


Currently the free NW Endpoint Insights agent doesn't include agent config management, so this change must be made manually for the moment.  Future versions will include config management to make it easier.


Now you can accomplish things like this:

Logs - Collecting Windows Events with WEC 

without needing a WEC/WEF server - especially useful if you are deploying Sysmon and want to use the NWE agent to pull back the event channel.


While you are in the Log Collection frame of mind, why not create a Profile in Investigation for NWE logs. 

Pre-Query = device.type='windows'


In 11.2 you can create a profile (which isn't new) as well as meta and column groups that are flexible (new in 11.2).  This means the pre-query is locked, but you are able to switch meta groups within the profile (very handy).



Hopefully this helpful addition to our agent reduces the friction of collecting Windows events.  If there are specific event channels that are high on your priority list for collection, add them to the comments below and I'll get them added to the internal RFE.

We at RSA value your thoughts and feedback on our products. Please tell us what you think about RSA NetWitness by participating directly in our upcoming user research studies. 


What's in it for you?


You will get a chance to play around with new and exciting features and help us shape the future of our product through your direct feedback. After submitting your information in the survey below, if you are a match for an upcoming study, one of our researchers will work with you in facilitating a study session either in a lab setting or remote. There are no right or wrong answers in our studies - every single piece of your feedback will help us improve the RSA NetWitness experience. Please join us in this journey by completing the short survey below so that we can see if you are a match for one of our studies.


This survey should take less than a minute of your time.


Take the survey here.

I have found over the past several years that there is quite a lot of incredibly useful meta packed into the 'query' meta key.  The HTTP parser puts arguments and passed variables there when they are used in GETs and POSTs.  While examining some recent PCAPs from the Malware Traffic Analysis site, I found some common elements we can use to identify Trickbot infections.  This was not an exhaustive look at Trickbot, but simply a means to identify some common traits as meta values.  As Trickbot, or any malware campaign, changes, IOCs will need to be updated.


First things first, let's look at the index level for the 'query' meta key.  By default, the 'query' meta key is set to 'IndexKeys'.  This means you could perform a search for sessions where the key exists, but could not query the values stored within it.



There are pros and cons to setting the index level to 'IndexValues' in your 'index-concentrator-custom.xml' file on your Concentrators.  One pro is being able to search for those values during an investigation.  The cons are that such queries would likely involve 'contains', which taxes the query from a performance perspective.  Furthermore, 'query' is a Text-formatted meta key limited to 256 bytes; anything after 256 bytes is truncated, so you may not have the complete query string.
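If you do opt in, the entry in index-concentrator-custom.xml would look something like this (the valueMax figure is an assumption; size it for your environment):

```xml
<key description="Query" format="Text" level="IndexValues" name="query" valueMax="250000" />
```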


Whether 'query' is set to 'IndexKeys', 'IndexValues', or even 'IndexNone', we can take advantage of it in app rule creation.  In one Trickbot pcap, we can see an HTTP POST to an IP address on a non-standard port.



If we look at the meta created for this session, we can see the 'proclist' and 'sysinfo' as pieces in the 'query' meta.



Combining these with a service type (service = 80) and an action (action = 'post'), we can create an application rule to help find Trickbot infections in the environment.  For good measure, we can add additional meta from analysis.service to round it out.



Trickbot application rule
service = 80 && action = 'post' && query = 'name="sysinfo"' && query = 'name="proclist"' && analysis.service = 'windows cli admin commands'
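As a sanity check of the rule's logic (not how a Decoder actually evaluates app rules), the same conditions can be restated in Python against exported session meta; the dictionary layout here is an assumption for illustration:

```python
def matches_trickbot_rule(meta):
    """Pure-Python restatement of the app rule above, for testing its
    logic against exported session meta (dict of key -> value/list)."""
    queries = meta.get("query", [])
    return (meta.get("service") == 80
            and "post" in meta.get("action", [])
            and any('name="sysinfo"' in q for q in queries)
            and any('name="proclist"' in q for q in queries)
            and "windows cli admin commands" in meta.get("analysis.service", []))
```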



The flexibility of app rule creation allows analysts and threat hunters to take a handful of indicators (meta) and combine them to make detection easier.



App rules help make detection easier.  Once a threat is identified, we can use this method to find the traffic more easily going forward, so we can go find the next new bad thing.  If the app rule fires too often on normal traffic, we can adjust it to add or exclude other meta and ensure it fires correctly.


As always, good luck, and happy hunting.



Encrypted traffic has always posed more challenges to network defenders than plaintext traffic, but thanks to some recent enhancements in NetWitness 11.2 and a really useful feed from, defenders have a new tool in their toolbox.


11.2 added the ability to enable TLS certificate hashing via an optional parameter on your Decoders.

Decoder Configuration Guide for Version 11.2 

(search for TLS certificate hashing - page 164)

  • Explore > /decoder/parsers/config/parsers.options
  • add this after the entropy line (space delimited) HTTPS="cert.sha1=true"
  • Make sure the https native parser is enabled on the decoder


This new meta is the SHA1 hash of any DER-encoded certificates seen during the TLS handshake.  It is written to cert.checksum, the same key that NetWitness Endpoint writes its values to.


Now is a good time to revisit your application rules that might be truncating encrypted traffic.  Take advantage of the new parameters added in 11.1 related to truncation after the handshake.



 has a feed that tracks these blacklisted certificates, along with information to identify the potential attacker campaign - exactly the method we need to match known certificate checksums against the new cert.checksum field.


This is the feed (SSLBL Extended); you could also leverage the Dyre list as well.


Headers look like this

# Timestamp of Listing (UTC),Referencing Sample (MD5),Destination IP,Destination Port,SSL certificate SHA1 Fingerprint,Listing reason
Map the feed as follows

Configure > Custom Feeds > New Feed > Custom Feed


Add the URL as above and recur every hour (to get new data into the feed in a reasonable time)


Apply it to your Decoders (and you will notice that in 11.2 the feed is also added to your Context Hub, which means you can create a feed that is used both as a feed and as an ESA whitelist or blacklist)



Choose the Non-IP type, and map Column 5 to cert.checksum and Column 6 to IOC (if we get a match, we can be fairly confident this traffic is bad)
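Outside NetWitness, the same feed can be sanity-checked with a few lines of Python, using the column positions from the header shown earlier:

```python
import csv

def parse_sslbl(lines):
    """Return (sha1_fingerprint, listing_reason) pairs from SSLBL
    Extended CSV rows, skipping '#' comment lines."""
    rows = csv.reader(line for line in lines
                      if line.strip() and not line.startswith("#"))
    # column 5 (index 4) = SHA1 fingerprint, column 6 (index 5) = reason
    return [(row[4].lower(), row[5]) for row in rows if len(row) >= 6]
```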


And now you have an updated feed that will alert you to certificate usage that matches known lists of badness.


An example output looks like this (it always ends with <space>c&c in the IOC key)


(the client value is from another science project related to JA3 signatures ...  in this case double confirmation of gootkit)


The testing data used to play with this came from here - 2018-09-05 - Emotet infection with IcedID banking Trojan and AZORult


Great resource and challenges if you are looking for some live fire exercises.


To wrap this up, an ESA rule can be created in advanced mode with the following criteria to identify these communications and create an alert:

@Name("outbound_blacklisted_ssl_cert: {ioc}")
@Description('cert.checksum + ssl abuse blacklist all have ioc ends with <space>c&c')
/* Statement: outbound_ssl_crypto_cnc */
SELECT * FROM Event(
    direction.toLowerCase() IN ( 'outbound' ) AND
    service IN ( 443 ) AND
    matchLike(ioc, '% C&C')
    /* isOneOfIgnoreCase(ioc, { '%c&c' }) */
);

Advanced mode was needed because the IOC meta key had to be wildcarded to look for any match of <name><space>C&C, and I didn't want to enumerate all the potential names from the feed (the UI doesn't provide a means to do this in the basic rule builder for arrays, of which IOC is string[]).


Another thing to notice is that the @Name syntax creates a parameterized name that is only available in the alert details of the raw alert.

I was hoping to do more with that data but so far not having much luck.


You can also wrap this into a Respond alert to make sure you group all potential communications for a system together with these alerts (grouping by source IP).


If everything works correctly, you will get Respond alerts like this that you should investigate.

With all the recent blogs from Christopher Ahearn about creating custom lua parsers, some folks who try their hand at it may find themselves wondering how to easily and efficiently deploy their new, custom parsers across their RSA NetWitness environment.


Manually browsing to each Decoder's Config/Parsers tab to upload there will quickly become frustrating in larger or distributed environments with more than one Decoder.


Manually uploading to a single Decoder and then using the Config/Files tab's Push option would help eliminate the need to upload to every single Decoder, but you would still need to reload each Decoder's parsers.  While this could, of course, be scripted, I believe there is a simpler, easier, and more efficient option available.


Not coincidentally, that option is the title of this blog. We can leverage the Live module within the NetWitness UI to deploy custom parsers across entire environments and automatically reload each Decoder's parsers in the process.  To do this, we will need to create a custom resource bundle that mimics an OOTB Live resource.


First, let's take a look at one of the newer Lua parsers from Live to see how it's packaged.  We'll select one parser and then choose Package --> Create to generate a resource bundle.


In this ZIP's top-level directory, we see a LUAPARSER folder and a resourceBundleInfo.xml file.


Navigating down through the LUAPARSER folder, we eventually come to another ZIP file:


This contains an encrypted Lua parser and a token that allows NetWitness Decoders to decrypt it (FYI - you do not need to encrypt your custom Lua parsers).


The main takeaway from this is that when we create our custom resource bundle, we now know to create a directory structure like in the above screenshot, and that our custom lua parser will need to be packaged into a ZIP file at the bottom of this directory tree.


Next, let's take a look at the resourceBundleInfo.xml file in the top-level directory of the resource bundle.  This XML is the key to getting Live to properly identify and deploy our custom Lua parser.


Everything that we really need to know about this XML is in the <resourceInfo> section.


A common or friendly name for our parser:



The name of the ZIP file at the bottom of the directory tree:



The full path of this ZIP file:



The version number (which can really be anything you want, as long as it's reflected accurately in the filePath):



The resourceType line is the name of the top-level folder in the resource bundle (you shouldn't need to change this):



The typeTitle (which you also shouldn't change):

            <typeTitle>Lua Parser</typeTitle>


And lastly the uuid, which is how Live and the NetWitness platform identify Live resources:



Modifying everything in this file should be pretty straightforward - you'll simply want to update each line to reflect your parser's information.  And for the uuid, we can simply create our own - but don't worry, it doesn't need to be anywhere near as long or complex as a Live resource uuid.


Now that we know what the structure of the resource bundle should look like, and what information the XML needs to contain, we can go ahead and create our own custom resource bundle.
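The packaging steps above can also be automated. The sketch below builds the nested ZIP layout and the XML; the element names in the template are illustrative - copy the exact structure from a bundle you export from Live, as described above:

```python
"""Hypothetical helper to package a custom Lua parser as a Live-style
resource bundle. XML element names and the layout are modeled on an
exported bundle and should be verified against one from your system."""
import uuid
import zipfile
from pathlib import Path

XML_TEMPLATE = """<resourceBundleInfo>
  <resourceInfoList>
    <resourceInfo>
      <displayName>{name}</displayName>
      <fileName>{name}.zip</fileName>
      <filePath>LUAPARSER/{version}/{name}.zip</filePath>
      <productionVersion>{version}</productionVersion>
      <resourceType>LUAPARSER</resourceType>
      <typeTitle>Lua Parser</typeTitle>
      <uuid>{uuid}</uuid>
    </resourceInfo>
  </resourceInfoList>
</resourceBundleInfo>
"""

def build_bundle(parser, out_dir, version="1"):
    """Wrap a .lua file in an inner ZIP, then package it with the
    resourceBundleInfo.xml into an outer resource-bundle ZIP."""
    parser, out_dir = Path(parser), Path(out_dir)
    name = parser.stem
    inner = out_dir / f"{name}.zip"          # ZIP holding the .lua itself
    with zipfile.ZipFile(inner, "w") as z:
        z.write(parser, parser.name)
    bundle = out_dir / f"{name}_bundle.zip"
    with zipfile.ZipFile(bundle, "w") as z:
        z.write(inner, f"LUAPARSER/{version}/{name}.zip")
        z.writestr("resourceBundleInfo.xml", XML_TEMPLATE.format(
            name=name, version=version, uuid=uuid.uuid4()))
    return bundle
```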


Here's what a completed custom resource bundle looks like, using one of  Chris Ahearn's parsers as an example: What's on your wire: Detect Linux ELF files:





With the custom bundle packaged and ready to go, we can go into Live, select Package --> Deploy, browse to our bundle, and simply step through the process, deploying to as many or as few of our Decoders as we want:





For confirmation, we can browse to any of our Decoders at Admin --> Services and see our custom parser deployed and enabled in the Config/General tab:


Lastly, for those who might have multiple custom resources they want to deploy at once in a single resource bundle, it's just a matter of adjusting the resourceBundleInfo.xml file to reflect each resource's name, version, path, and making sure each uuid is unique within the resource bundle, e.g.: uuid1, uuid2, uuid3, etc:



You can find a resource bundle template attached to this blog.


Happy customizing, everybody!

Introduction to MITRE’s ATT&CK™


Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK™) for enterprise is a framework that describes adversarial actions or tactics from Initial Access (Exploit) through Command & Control (Maintain). ATT&CK™ Enterprise classifies post-compromise adversarial tactics and techniques against Windows™, Linux™ and macOS™.


Two companion frameworks have also been developed: PRE-ATT&CK™ and the ATT&CK™ Mobile Profile. PRE-ATT&CK™ categorizes pre-compromise tactics, techniques and procedures (TTPs) independent of platform/OS, covering the adversary's planning, information gathering, reconnaissance and setup before compromising the victim.


The ATT&CK™ Mobile Profile is specific to Android and iOS mobile environments and has three matrices that classify tactics and techniques. It covers not just post-compromise tactics and techniques but also pre-compromise TTPs in mobile environments.


This community-enriched model adds techniques used to realize each tactic. These techniques are not exhaustive and the community adds them as they are observed and verified.


This matrix is helpful for validating defenses already in place and for designing new security measures. It can be used in the following ways to improve and validate defenses:


  1. This framework can be used to create adversary emulation plans, which hunters and defenders can use to test and verify their defenses. These plans also ensure you are testing against an ever-evolving industry-standard framework.
  2. Adversary behavior can be mapped using the ATT&CK™ matrix and used for analytics to improve your Indicators of Compromise (IOCs) or Behaviors of Compromise (BOCs). This will enhance your detection capabilities with greater insight into threat-actor-specific information.
  3. Mapping your existing defenses to this matrix gives a visualization of the tactics and techniques detected, presenting an opportunity to assess gaps and prioritize your efforts to build new defenses.
  4. The ATT&CK™ framework can help build threat intelligence from the perspective of not just TTPs but also the threat groups and software in use. This enhances your defenses so that detection is not dependent on TTPs alone but also on their relationships to the threat groups and software in play.


Relationships between Threat-Group, Software, Tactic and Techniques

Figure 1: Relationships between Threat-Group, Software, Tactics and Techniques


This framework resolves the following problems:


  1. Existing Kill Chain concepts were too abstract to relate new techniques to new types of detection capabilities and defenses. ATT&CK could be called a Kill Chain on steroids.
  2. Techniques added or considered should be observed in a real environment and not just derived from theoretical concepts. The community adding techniques ensures that the techniques have been seen in the wild and thus are suitable for people using this model in real environments.
  3. This model gives a common language and terminology across different environments and threat actors. This is important in making the model an industry standard.
  4. Granular indicators like domain names, hashes, protocols et cetera do not provide enough information to see the bigger picture of how the threat actor is exploiting the system, and its relationship with the various sub-systems and tools used by the adversary. This model gives a good understanding of the relationship between the tactics and techniques used, which can then be used to drill down into only the important granular details.
  5. This model provides a common repository from which the information can be consumed via APIs and programming. It is available via a public TAXII 2.0 server that serves STIX 2.0 content.


ATT&CK Navigator


ATT&CK Navigator is a tool openly available through GitHub which uses the STIX 2.0 content to provide a layered visualization of ATT&CK model.


ATT&CK Navigator


Figure 2: ATT&CK Navigator


By default, this uses MITRE's TAXII server, but it can be changed to use any TAXII server of choice. Navigator uses JSON files to define layers, which can be created programmatically.


RSA NetWitness Event Stream Analysis (ESA)


ESA is one of the defense systems that is used to generate alerts. ESA Rules provide real-time, complex event processing of log, packet, and endpoint meta across sessions. ESA Rules can identify threats and risks by recognizing adversarial Tactics, Techniques and Procedures (TTPs).


The following are ESA Components:


  1. Alert - Output from a rule that matches data in the environment.
  2. Template - Converts the rule syntax into code (Esper) that ESA understands.
  3. Constituent Events - All of the events involved in an alert, including the trigger event.
  4. Rule Library - A list of all the ESA Rules that have been created.
  5. Deployments - A list of the ESA Rules that have been deployed to an ESA device.


The Rule Library contains all the ESA Rules and we can map these rules or detection capabilities to the tactics/techniques of ATT&CK matrix. The mapping shows how many tactics/techniques are detected by ESA.


In other words, the overlap between ESA Rules and the ATT&CK matrix not only shows how far our detection capabilities reach across the matrix but also quantifies the evolution of the product. We can measure how much we are improving and in which directions.


We have created a layer as a JSON file which has all the ESA Rules mapped to techniques. Then we have imported that layer on ATT&CK Navigator matrix to show the overlap. In the following image, we can see all the techniques highlighted that are detected by ESA Rules:
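A layer of this kind can be generated programmatically. The sketch below uses placeholder technique IDs and rule names (the real mapping comes from the Rule Library); the field names follow the Navigator layer JSON format.

```python
import json

# Map each ATT&CK technique ID to the rules that detect it
# (placeholder data -- the real mapping comes from the Rule Library).
coverage = {
    "T1086": ["Example PowerShell rule"],
    "T1110": ["Example brute-force rule"],
}

def make_layer(rule_map, name="ESA Rules Coverage"):
    # Each covered technique gets a highlight color and a comment
    # listing the rules that cover it.
    return {
        "name": name,
        "domain": "mitre-enterprise",
        "techniques": [
            {"techniqueID": tid, "color": "#66b1ff", "comment": ", ".join(rules)}
            for tid, rules in sorted(rule_map.items())
        ],
    }

layer_json = json.dumps(make_layer(coverage), indent=2)
```

Saving layer_json to a file and importing it in Navigator highlights the covered techniques, producing a view like the one shown below.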


ATT&CK Navigator ESA Rules Mapping


Figure 3: ATT&CK Navigator Mapping to ESA Rules


To quantify how far the ESA Rules spread across the matrix, we can refer to the following plot:


ATT&CK Navigator ESA Rules Mapping Plot


Figure 4: Plot for ATT&CK Matrix Mapping to ESA Rules


Moving forward, we can map our other detection capabilities to the ATT&CK matrix. This will give us a consolidated picture of our complete defense system, letting us quantify and monitor the evolution of our detection capabilities.










Thanks to Michael Sconzo and Raymond Carney for their valuable suggestions.

A recent advisory was sent out for firmware updates to a number of base components in NetWitness.


RSA NetWitness Availability of BIOS & iDRAC Firmware Updates


Three components were mentioned as potentially needing updates, and instructions to update them were provided in the advisory.


How do you gather the state of the environment quickly, with the fewest steps, so you can determine whether there is work to be done?


Salt to the rescue ...

You might need these tools installed on your appliances to run the later commands (perccli and ipmitool).


From the NW11 head server (node0)

salt '*' pkg.install "perccli,ipmitool" 2>/dev/null

Then you can query for the current versions of the PERC, BIOS and iDRAC firmware:

salt '*' cmd.run 'hostname; /opt/MegaRAID/perccli/perccli64 /c0 show | grep "Product Name"; /opt/MegaRAID/perccli/perccli64 /c0 show | grep "FW Package Build"; /opt/MegaRAID/perccli/perccli64 /c1 show | grep "Product Name"; /opt/MegaRAID/perccli/perccli64 /c1 show | grep "FW Package Build"' 2>/dev/null
salt '*' cmd.run 'hostname; ipmitool -I open bmc info | grep "Firmware Revision"' 2>/dev/null
salt '*' cmd.run 'hostname; ip address show dev eth0 | grep inet; dmidecode -s system-serial-number; dmidecode -s bios-version; dmidecode -s system-product-name' 2>/dev/null


The output will list each host and the version of the software it is running, which can be used to determine whether an update is required on your NetWitness appliances.


Ideally this information would be in Health and Wellness, where policies could be written against it with alerts (and an export-to-CSV function would be handy).

Wireshark has been around for a long time and the display filters that exist are good reference points to learn about network (packet) traffic as well as how to navigate around various parts of sessions or streams.


Below you will find a handy reference which allows you to cross-reference many of the common Wireshark filters with their respective RSA NetWitness queries. 


This is where I pulled the Wireshark display filters from:  DisplayFilters - The Wireshark Wiki 


Show only SMTP (port 25) and ICMP traffic:

Wireshark: tcp.port eq 25 or icmp
NetWitness: service=25 || ip.proto=1,58 (icmp or ipv6 icmp)
NetWitness: tcp.dstport=25 || ip.proto=1,58 (icmp or ipv6 icmp)


Show only traffic in the LAN (192.168.x.x), between workstations and servers -- no Internet:

Wireshark: ip.src== and ip.dst==
NetWitness: direction='lateral' (RFC1918 to RFC1918)


Filter on Windows -- Filter out noise, while watching Windows Client - DC exchanges

Wireshark: smb || nbns || dcerpc || nbss || dns
NetWitness: service=139,137,135,139,53


Match HTTP requests where the last characters in the uri are the characters "gl=se":

Wireshark: http.request.uri matches "gl=se$"
NetWitness: service=80 && query ends 'gl=se'


Filter by a protocol ( e.g. SIP ) and filter out unwanted IPs:

Wireshark: ip.src != && ip.dst != && sip
NetWitness: service=5060 && ip.src != && ip.dst !=


Wireshark: ip.addr == is equivalent to ip.src == or ip.dst ==
NetWitness: ip.src= || ip.dst=


Here's where I pulled some additional filters for mapping:  HTTP Packet Capturing to debug Apache 


View all http traffic

Wireshark: http
NetWitness: service=80

View all flash video stuff

Wireshark: http.request.uri contains "flv" or http.request.uri contains "swf" or http.content_type contains "flash" or http.content_type contains "video"
NetWitness: service=80 && (query contains 'flv' || query contains 'swf' || content contains 'flash' || content contains 'video')


Show only certain responses

Wireshark: http.response.code == 404
NetWitness: service=80 && error begins 404
NetWitness: service=80 && result.code ='404'

Wireshark: http.response.code == 200
NetWitness: service=80 && error !exists (200s are not explicitly captured)
NetWitness: service=80 && result.code !exists (200s are not explicitly captured)


Show only certain http methods

Wireshark: http.request.method == "POST" || http.request.method == "PUT"
NetWitness: service=80 && action='post','put'


Show only filetypes that begin with "text"

Wireshark: http.content_type[0:4] == "text"
NetWitness: service=80 && filetype begins 'text'
NetWitness: service=80 && filename begins 'text'


Show only javascript

Wireshark: http.content_type contains "javascript"
NetWitness: service=80 && content contains 'javascript'


Show all http with content-type="image/(gif|jpeg|png|etc)"

Wireshark: http.content_type[0:5] == "image"
NetWitness: service=80 && content ='image/gif','image/jpeg','image/png','image/etc'


Show all http with content-type="image/gif"

Wireshark: http.content_type == "image/gif"
NetWitness: service=80 && content ='image/gif'


Hope this is helpful for everyone and as always, Happy Hunting!

I was reviewing a packet capture file I had from a recent engagement. In it, the attacker had tried unsuccessfully to compress the System and SAM registry hives on the compromised web server. Instead, the attacker decided to copy the hives into a web accessible directory and give them a .jpg file extension. Given that the Windows Registry hives contain a well documented file structure, I decided to write a parser to detect them on the network.



If we see something on the wire, there is a pretty good chance we can create some content to detect it in the future. This is the premise behind most threat-hunting or content creation. Make it easier to detect the next time. This is the same approach I take when building Lua Parsers for the RSA NetWitness platform.


Here, we can see what appears to be the magic bytes for a registry file “regf”.



Let’s shift our view into View Hex and examine this file.



When creating a parser, we want to make it as consistent as possible to reduce false positives or errors. What I found was that immediately following the ‘regf’ signature the Primary Sequence Number (4 bytes) and Secondary Sequence Number (4 bytes) would be different. Then, there was the FileTime UTC (8 bytes) field which would most definitely be unique.


However, the Major and Minor versions were relatively consistent. Therefore, I could skip over those 16 bytes to land on the first byte of the Major Version immediately after my initial token matches.  Let’s create a token to start with.



   ["\114\101\103\102"] = fingerprint_reg.magic,   -- regf



If you notice, this token is in DECIMAL format, not HEX. Also, 4 bytes is quite small for a token. When a parser is loaded into the decoder, the tokens are stored in memory and compared as network traffic goes through the decoder. Once a token matches, the function(s) within the parser run. Too small a token means the parser may run quite frequently without matching the right traffic. Too large a token means the parser may only run on those specific bytes, and you could miss other relevant traffic. When creating a parser token, you may want to err on the side of caution and make it a little smaller, knowing that you will have to add additional checks to ensure it is the traffic you want.
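As a quick sanity check, the decimal escapes in the token really do spell out the registry magic value; a couple of lines of Python confirm it:

```python
# "\114\101\103\102" uses decimal character codes; converting each code
# back to a character reproduces the "regf" magic bytes.
token = "".join(chr(n) for n in (114, 101, 103, 102))
print(token)  # regf
```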


In Lua for parsers, you are always on a byte. Therefore, we need to know where we are and where we want to go. I like to set a variable called ‘current_position’ to denote where my pointer is in the stream of data. When the parser matches on a token, it will return 3 values. The three values are the token itself, the first position of the token in the data stream and the last position of the token in the data stream. This helps me as I want to find the ‘regf’ token and move forward 17 bytes to land on the Major version field.


function fingerprint_reg:magic(token, first, last)
    current_position = last + 17
    local payload = nw.getPayload(current_position, current_position + 7)
    if payload and #payload == 8 then
        local majorversion = payload:uint32(1, 4)
        if majorversion == 16777216 then
            local minorversion = payload:uint32(5, 8)
            if minorversion == 50331648 or minorversion == 67108864 or minorversion == 83886080 or minorversion == 100663296 then
                nw.createMeta(self.keys["filetype"], "registry hive")
            end
        end
    end
end





This will put the pointer on the first byte (0x01) of the Major Version field. Next what I want to do is extract only the payload I need to do my next set of checks, which will involve reading the bytes.








Here, I created a variable called ‘payload’ and used the built-in function ‘nw.getPayload’ to get the payload I wanted. Since I previously declared a variable called ‘current_position’, I use that as my starting point and tell it to go forward 7 bytes. This gives me a total of 8 bytes of payload. Next, I make sure that I have payload and that it is, in fact, 8 bytes in length (#payload == 8).









If the payload checks out, then in this parser, I want to read the first 4 bytes, since that should be the Major Version. In the research I did, I saw that the Major Version was typically ‘1’ and was represented as ‘0x01000000’. Since I want to read those 4 bytes, I use “payload:uint32(1,4)”. Since those bytes will be read in as one value, I pre-calculate what that should be and use it as a check. The value should be ‘16777216’. If it is, then it should move to the next check.








The Minor Version check winds up being the second and last check to make sure it is a Registry hive. For this to run, the Major version had to have been found and validated based on the IF statement. Here, we grab the next 4 bytes and store those in a variable called ‘minorversion’. There were four possible values that I found in my research. Those would be ‘0x03000000’, ‘0x04000000’, ‘0x05000000’, and ‘0x06000000’. Therefore, I pre-calculated those values in decimal form like I did with the Major Version and did a comparison (==). If the value matched, then the parser will write the text ‘registry hive’ as meta into the ‘filetype’ meta key.
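The pre-calculated constants above are simply the hex byte sequences read as big-endian 32-bit integers. Independent of the parser, a quick Python check reproduces them:

```python
# Each 4-byte version field is compared as a single big-endian integer.
major = int.from_bytes(bytes([0x01, 0x00, 0x00, 0x00]), "big")

minors = [int.from_bytes(bytes([v, 0x00, 0x00, 0x00]), "big")
          for v in (0x03, 0x04, 0x05, 0x06)]
print(major, minors)  # 16777216 [50331648, 67108864, 83886080, 100663296]
```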


The approach shown here was useful for examining a particular type of file observed in network traffic. The same approach could be used for protocol analysis, identifying new service types, and much more. If you would like expert assistance creating a custom parser for traffic that is unique to your environment, RSA commonly offers that as a service; feel free to contact your local sales rep.


The parser is attached, and I have also submitted it to RSA Live for future use.  I hope you find this parser breakdown helpful and as always, happy hunting.



Microsoft has been converting customers to O365 for a while; as a result, more and more traffic is being routed from on-premise out to Microsoft clouds, potentially putting it into the visibility of NetWitness. Being able to group that traffic into a bucket for potential whitelisting, or at the very least identification, could be useful.


Microsoft used to provide an XML file listing all the IPv4 addresses, IPv6 addresses and URLs required for accessing their O365 services. This is being deprecated in October of 2018 in favor of API access.


This page gives a great explainer on the data in the API and how to interact with it, as well as Python and PowerShell scripts to grab the data for use in firewalls etc.:


Managing Office 365 endpoints - Office 365 


The PowerShell script is where I started, so that a script could be run on a client workstation to determine if there were any updates and then apply the relevant data to the NetWitness environment. Eventually, hopefully, this gets into the generic whitelisting process being developed so that it is programmatically delivered to NetWitness environments.


GitHub - epartington/rsa_nw_lua_feed_o365_whitelist: whitelisting Office365 traffic using Lua and Feeds 


The script provided by Microsoft was modified to create 3 output files for use in NetWitness





The IP feeds are in a format that can be used as feeds in NetWitness; the GitHub link with the code provides the XML for them to map to the same keys as the lua parser, so there is alignment between the three.


The o365urlOut.txt is used in a lua parser to map against the key. The reason a lua parser was used is a limitation of the feeds engine which prevents wildcard matching. Matches in feeds need to be exact, and some of the hosts provided by the API are wildcarded (*). The lua parser attempts a direct exact match first, then falls back to subdomain matches to see if there are any hits.
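The exact-then-subdomain lookup can be sketched in Python (the hostnames here are made up; the real table is the one generated into the parser by the script):

```python
# Hypothetical whitelist table keyed by hostname, as in the lua parser.
whitelist = {
    "portal.example": "office365",
    "mail.portal.example": "office365",
}

def lookup(host, table):
    # Try the exact hostname first, then progressively strip leading
    # labels so that a wildcard-style entry like "portal.example" also
    # matches "deep.sub.portal.example". Stop before the bare TLD.
    if host in table:
        return table[host]
    parts = host.split(".")
    for i in range(1, len(parts) - 1):
        candidate = ".".join(parts[i:])
        if candidate in table:
            return table[candidate]
    return None
```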


The Lua parser has the host list current as of the published version; as Microsoft updates their API, the list needs to be changed. That's where the PS1 script comes in. It can be run from a client workstation; if there are changes, open the output txt file, copy the text into the Decoder > Config > Files tab, and replace the text in the parser to include the published changes. The decoder probably needs its parsers reloaded, which can be done from the REST interface or the Explore menu. You can also push the updated parser to all your other Log and Packet Decoders to keep them up to date.


The output of all the content is data in the filter metakey




Sample URL output

[""] = "office365",
[""] = "office365",
[""] = "office365",
[""] = "office365",
[""] = "office365",


sample IPv4 output,whitelist,office365,whitelist,office365,whitelist,office365,whitelist,office365


My knowledge of PowerShell was pretty close to 0 at the beginning of this exercise; now it's closer to 0.5.


To Do Items you can help with:

Ideally I would like the script to output the serviceArea of each URL or IP network so that you can tell which O365 service the content belongs to, giving you more granular data on what part of the suite is being used.

serviceArea = "Exchange","sway","proplus","yammer" ...

If you know how to modify the script to do this, I'm more than happy to update the script to include those changes.  Ideally 3-4 levels of filter would be perfect.




would be sufficient granularity, I think.
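For anyone wanting to attempt it, here's a sketch of the desired transformation (the author's script is PowerShell; this Python version uses made-up sample records shaped like the O365 endpoints API output, with serviceArea and ips fields):

```python
# Hypothetical endpoint-set records, shaped like the O365 API response.
sample = [
    {"serviceArea": "Exchange", "ips": [""], "urls": []},
    {"serviceArea": "Sway", "ips": [""], "urls": []},
]

def feed_lines(records):
    # One feed line per network, carrying the service area through:
    # <network>,whitelist,office365,<serviceArea>
    lines = []
    for rec in records:
        area = rec.get("serviceArea", "unknown").lower()
        for net in rec.get("ips", []):
            lines.append("%s,whitelist,office365,%s" % (net, area))
    return lines
```

The extra fourth column would then land in a second meta key alongside filter, giving the per-service granularity described above.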


Changes you might make:

The key the parser reads from can be changed: if you have logs that write values into domain.dst or host.dst that you want considered, and you are on NW11, you can change the key to host.all to include all of those at once in the filtering (just make sure that key is in your index-decoder-custom.xml).


Benefits of using this:

The ability to reduce noise on the network for known or trusted communications to Microsoft that can be treated as lower priority - especially when investigating outbound traffic, since you can remove known O365 traffic (powershell from endpoint to internet != microsoft).


As an FYI, so far all the test data that I have lists the outbound traffic as heading to org.dst='Microsoft Hosting'. I'm sure on a wider scale of data that isn't true, but so far the whitelist lines up 100% with that org.dst.
