
A question has come up a few times: how can someone easily exclude certain machines from triggering NetWitness Endpoint agent alerts?


This particular use case involved their "Gold Images", which are used for deploying machines.  As part of a bigger vision for other server roles and rules, a custom meta key called server.role was created to hold the various roles they have defined for servers in their environment.


A custom feed was created to associate "Gold Image" as a meta value for that meta key by matching against meta such as host.src. This example is just an ad hoc feed, but a recurring feed from a CMDB or other tooling could be leveraged to keep this list dynamic.
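The feed file itself is just a CSV that maps a host identifier to a role. A minimal sketch, with made-up hostnames and role values for illustration, might look like this:

```
win10-gold-01,Gold Image
web-prod-01,Web Server
sql-prod-01,Database
```

In the feed wizard, the first column is the index matched against host.src, and the second column is mapped to the server.role meta key.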

Note: my example includes roles other than gold simply to contrast the roles.


Now that the meta values are created, we can use these as whitelisting statements for the App rules.

From Admin>Services, select the Endpoint Log Decoder, click View>Config, then select the App Rules tab.


Filter by nwendpoint to find the endpoint rules.

Edit the rule you'd like and add server.role != 'gold image' && in front of the rule logic, as shown in the example below:

Click OK, then Apply the rules.

Repeat for any other rules you would need whitelisted.
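As an illustration only (the rule body below is hypothetical, not an actual OOTB endpoint rule), a rule whose original condition was device.type = 'nwendpoint' && category = 'process event' would become:

```
server.role != 'gold image' && device.type = 'nwendpoint' && category = 'process event'
```

Any session from a host tagged with the Gold Image role now fails the first clause and never triggers the rule.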


This is just a simple example, but you can use this approach for many other scenarios.


Several changes have been made to the Threat Detection Content in Live. For added detection, you need to deploy/download and subscribe to the content via Live; retired content must be removed manually.

Detailed configuration procedures for getting RSA NetWitness Platform set up - Content Quick Start Guide 



RSA NetWitness Lua Parsers:

  • WireGuard – A new Lua parser has been introduced to identify WireGuard VPN sessions. WireGuard is an open-source, security-focused virtual private network (VPN) known for its simplicity and ease of use.

Read more about Identifying WireGuard (VPN) Traffic Using RSA NetWitness Network 



More information about Packet Parsers 


RSA NetWitness Application Rules:

More information about NetWitness 11.4 New Features and Alerting: ESA Rule Types 



RSA NetWitness Lua Parsers:

  • SMB_lua – This parser is updated for significant detection improvements with named pipe parsing capabilities. Detection is expanded to track parent-child relationships to recognize operations performed on child named pipes.

Read more about SMB_lua in action -

Detecting Lateral Movement in RSA NetWitness: Winexe 

Around the Fire With Old Friends (CVE-2019-0604 and CVE-2017-0144)

Keeping an eye on your Hounds...  


  • DCERPC – This parser is updated for similar detection improvements with named pipe parsing capabilities.

Read more about Using the RSA NetWitness Platform to Detect Lateral Movement: SCShell (DCE/RPC) 


  • TLS_lua - New detections are added in the TLS parser to detect suspicious cipher suites for both client and server. This gives analysts added insight into TLS connections with a suspicious client/server setup, helping them detect and analyze malicious activity.

Read more about SSL and NetWitness 


  • rtmp_lua – rtmp parser is updated for accuracy and efficiency.
  • HTTP_lua – This parser has been updated with added detection and better accuracy.




We strive to provide timely and accurate detection of threats as well as traits that can help analysts hunt through network and log data. Occasionally this means retiring content that provides little-to-no value.

Discontinued Content 


For additional documentation, downloads, and more, visit the RSA NetWitness Platform page on RSA Link.


EOPS Policy:

RSA has a defined End of Primary Support policy associated with all major versions. Please refer to the Product Version Life Cycle for additional details.

Carrying on with the theme of Remote Access Tools (RATs), in this blog post we will be covering Void-RAT. This tool is still in development and currently at an alpha release, so it doesn't come with as many features as other RATs we've looked at; that said, it still works quite nicely for controlling a remote endpoint. As always, check out the C2 Matrix for more details on its functionality.


The Attack

On our victim endpoint, we drop our compiled binary, client.exe, into the C:\PerfLogs\ directory and execute it:



After execution, it attempts to connect back to the C2 server. If successful, it creates a slightly modified version of itself and stores it at C:\Windows\Firewall\Firewall.exe. It then executes this binary, which is the one that communicates back to the C2 server along with some information about the endpoint it is running on:


There are a number of options available to control the endpoint, but the most useful is the Remote CMD option. This allows us to execute commands remotely on the victim:



The Detection Using NetWitness Network

Void-RAT's communication is in cleartext, but it uses a custom TCP protocol that is not directly understood by NetWitness, so the traffic gets tagged as OTHER. Even though NetWitness does not understand the protocol, it will still analyse it. From the screenshot below, we can see that NetWitness has detected Windows CLI commands over some sessions using a suspect port:


Drilling into these sessions and reconstructing them, we can see the structure of the protocol used by Void-RAT, and the information that was sent to and from the victim:


Some more of the payload can be seen below. These commands are what NetWitness detected:


Void-RAT also reports back the public IP of the victim upon its initial check-in. It does this by making an HTTPS request to wtfismyip[.]com - this could also be used as a potential starting point for a hunt to find potentially compromised endpoints:

service = 443 && sld = 'wtfismyip'


These types of tools also require interaction from a remote operator, so at some point the attacker will perform actions that may supply additional indicators leading you to their presence. Here under the Indicators of Compromise meta key, we can see the meta value, hex encoded executable:



Drilling into this meta value and opening the events view to reconstruct the session, we can see that a hex encoded executable is being sent across the wire which uses the same proprietary protocol as Void-RAT, so even if we had not detected the RAT initially, we detected suspect behaviour, which led us to the RAT:



The Detection Using NetWitness Endpoint

Upon execution, Void-RAT sets up persistence for itself. It achieves this by creating a slightly modified version of itself at C:\Windows\Firewall\Firewall.exe and modifying the \CurrentVersion\Run key to execute it upon boot. This behaviour was detected by NetWitness Endpoint and is shown as the two meta values in the following screenshot:
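Building on the meta values above, a simple pivot to isolate this behaviour in the endpoint data could look like the following (the category value spelling is an assumption; verify it against your own data):

```
filename.src = 'firewall.exe' && category = 'registry event'
```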



Drilling into these two meta values we can see these two events in more detail:



Changing our pivot in the Navigate view to focus on the new binary, filename.src = 'Firewall.exe', we can see that it is executing suspect commands (as shown under the Source Parameter meta key) and making network connections (as shown under the Context meta key):


Drilling into the network connections made by Firewall.exe, we can see the lookup performed to get the public IP of the victim using wtfismyip[.]com:


A simple application rule that could be created to look for this behaviour is shown below:

domain.dst = ''


We can also see the connection back to the C2, which would have given us a nice indicator to search and see if other endpoints are infected:



Similarly, as stated in the network detection, the tool is operated remotely and will at some point have to perform actions to achieve its end goal. The attacker transferred a hex encoded binary across the wire, but this cannot be executed by the system, so they used certutil (a LOLBin) to hex decode the file into an executable, which was detected under the Behaviours of Compromise meta key as shown below:




While many RATs use custom TCP protocols to communicate, their behaviour is easily identifiable with NetWitness. When hunting in network traffic, make sure to spend some time on service = 0, and remember that a RAT has to do something in order to achieve its end goal. Those actions will be picked up by NetWitness, so look for executables performing suspicious actions and making network connections that you typically wouldn't expect for that endpoint. While this RAT does use a custom protocol, in a lot of cases attackers exploit security controls that allow direct internet access on well-known ports, like 80/HTTP, 443/HTTPS, 22/SSH, etc. In these cases, NetWitness will also flag the unknown service on these ports. More mature organizations, using NGFWs that perform a certain level of protocol inspection before allowing traffic for well-known services to flow through, would give RATs like this some difficulty surviving; attackers there are more prone to use tools that rely on standard protocols, which we have covered in some of our other posts.
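As a concrete starting point for the hunting advice above, a drill like the following isolates unknown-protocol sessions on ports where a well-known service would normally be expected:

```
service = 0 && tcp.dstport = 22,80,443
```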

This month we did a live demonstration of upgrading the firmware on iDRAC versions 8 and 9. Sadly, I wasn't able to make videos for this one, but here are Dell's official walkthrough videos (please keep in mind that RSA only supports certain firmware versions, listed in RSA NetWitness Availability of BIOS & iDRAC Firmware Updates):

iDRAC9 Firmware Upgrade | iDRAC8 Firmware Upgrade


Dell has multiple guides on IPMI-based interfacing with iDRACs, which can all be found on Dell's website depending on your firmware and hardware versions.


The recording of the May webinar is available here:

Webinar Recording

Access Password: 8V*6.vT@


PowerPoint is attached.


Several changes have been made to the Threat Detection Content in Live. For added detection you need to deploy/download and subscribe to the content via Live. For retired content, you must manually remove those items.


For detailed configuration procedures to set up RSA NetWitness Platform, see the Content Quick Start Guide 



RSA NetWitness Lua Parsers:

  • TLS_lua Options – Optional parameters to alter the behavior of the TLS_lua parser.

Available Options:

"Overwrite Service": default value false

Default behavior is that if another parser has identified a session with service other than SSL, then this parser will not overwrite the service meta.

If this option is enabled, the parser identifies all sessions containing SSL as SSL, even if a session has been identified by another parser as another service.


"Ports Only": default value false

Default behavior is port-agnostic: that is, the parser looks for all SSL/TLS sessions regardless of which ports a session uses.  This allows identification of encrypted sessions on unexpected and non-standard ports.

If this option is enabled, the parser only searches for SSL/TLS sessions on the configured ports. Sessions on other ports will not be identified as SSL/TLS. This may improve performance, at a cost of possibly decreased visibility.


Note that a session on a configured port that is not SSL/TLS will still not be identified as SSL/TLS.  In other words, the parser does not assume that all sessions on configured ports are SSL/TLS.

Read more about SSL and NetWitness 


More information about Packet Parsers:


RSA NetWitness Application Rules:

  • Creates Run Key – A new application rule is added to detect the creation of new run keys. Creating a new run key can be an indication of someone trying to use startup configuration locations to execute malware, such as remote access tools, to maintain persistence through system reboots.

This rule addresses MITRE’s ATT&CK™ tactic – Persistence; Technique - Registry Run Keys / Startup Folder


  • Execute DLL Through Rundll32 – A new application rule is introduced to detect DLL execution using the Rundll32 program. Rundll32 can be called to execute an arbitrary binary. Attackers may take advantage of this for proxy execution of code to avoid triggering security tools.

This rule addresses MITRE’s ATT&CK™ tactic – Execution, Defense Evasion; Technique - rundll32


  • Runs DNS Lookup Tool for TXT Record – A new application rule is added to detect possible covert command-and-control channels. Running nslookup.exe to query TXT records can be used to establish a covert command-and-control channel to exchange commands and other malicious information. These malicious commands can later be executed on the target system.

This rule addresses MITRE’s ATT&CK™ tactic – Discovery, Command and Control; Techniques - System Network Configuration Discovery, Commonly Used Port, Standard Application Layer Protocol
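To illustrate the kind of logic behind a rule like the TXT record one above (this is a sketch for illustration, not the actual OOTB rule content):

```
device.type = 'nwendpoint' && filename.src = 'nslookup.exe' && param.src contains 'txt'
```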


For more information about NetWitness 11.4 New Features and Alerting: ESA Rule Types 




RSA NetWitness Lua Parsers:

  • ethernet_oui - The list of registered OUI in the parser is updated for added detection.

Read more about Lua - Mapping MAC to Vendor (Logs/Netflow and Endpoint)  


More content has been tagged with MITRE ATT&CK™ metadata for better coverage and improved detection.

For detailed information about MITRE ATT&CK™:

RSA Threat Content mapping with MITRE ATT&CK™  

Manifesting MITRE ATT&CK™ Metadata in RSA NetWitness  




We strive to provide timely and accurate detection of threats as well as traits that can help analysts hunt through network and log data. Occasionally this means retiring content that provides little-to-no value.

List of Discontinued Content 


RSA NetWitness Application Rules:

  • Stealth Email Use - Marked discontinued due to performance-to-value tradeoff.


For additional documentation, downloads, and more, visit the RSA NetWitness Platform page on RSA Link.


EOPS Policy:

RSA has a defined End of Primary Support policy associated with all major versions. Please refer to the Product Version Life Cycle for additional details.

Delving back into the C2 Matrix to look for some more inspiration for blog posts, we noticed there are a number of Remote Administration Tools (RATs) listed. So we decided to start taking a look at these RATs and see how we can detect their usage in NetWitness. This post will cover QuasarRAT which is an open-source, remote access tool that is developed in C#. It has a large variety of features for controlling the victim endpoint and has been used by a number of APT groups.


The Attack

QuasarRAT can be compiled in two modes, debug and release - for this blog post we compiled QuasarRAT in debug mode, as it is the quickest and easiest way to get up and running. Once our agent had been compiled, we dropped it onto our victim endpoint in the C:\PerfLogs\ directory and executed it:


Shortly after execution we get a successful connection back to QuasarRAT from our victim endpoint:


QuasarRAT has a large feature set; here we are using the Remote Shell feature to execute some commands:


There is also a file explorer that allows us to easily navigate the file system, as well as upload and download files:


It even has a Remote Desktop feature to view and control the endpoint:



The Detection Using NetWitness Network

QuasarRAT does not have an option for insecure communication, and all traffic will be over SSL. It also uses a custom TCP protocol for its communication, so if intercepted, the protocol would be tagged as OTHER and you would have to look for indicators similar to those outlined in our CHAOS C2 post: Using RSA NetWitness to Detect Chaos C2.


Under the Service Analysis meta key, we get some interesting meta values regarding the certificate. Upon compilation, QuasarRAT generates a self-signed certificate; this means the certificate's age is low, as identified by the certificate issued within last week meta value, while the self-signed nature is flagged by the ssl certificate self-signed meta value. You'll also notice an ssl over non-standard port meta value; this is generated because the default port for QuasarRAT is 4782 (this is easily changed, however, and it would more commonly run over 443 to bypass firewall restrictions). With that being said, these are some great pivot points to start a hunt in SSL traffic for suspect SSL communication:
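These pivot points can be combined into a single drill; the value spellings below match the meta values called out in this post, but verify them against your own Service Analysis keys:

```
analysis.service = 'certificate issued within last week' && analysis.service = 'ssl certificate self-signed' && analysis.service = 'ssl over non-standard port'
```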


Looking into the parsed data from the certificate, we can see that the SSL CA and SSL Subject identify this as a Quasar Server, which are the default values given to the certificate created by QuasarRAT: = 'quasar server ca' || ssl.subject = 'quasar server ca'


Another interesting meta value is located under the Versions meta key, where we can see that QuasarRAT uses an outdated version of TLS, tls 1.0 - this could be another starting point to look for this tool, or other applications using outdated protocols for that matter:


The SSL JA3 hash for this comes back as fc54e0d16d9764783542f0146a98b300, which according to JA3 OSINT maps to PowerShell 5.1;Invoke-WebRequest. While there is often overlap with JA3 hashes, it would still be a good place to start a hunt from:
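Either indicator can seed a quick query (this assumes the ja3 meta key is enabled and indexed in your environment):

```
ja3 = 'fc54e0d16d9764783542f0146a98b300' || ssl.subject = 'quasar server ca'
```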


On initial execution, the RAT will also make an HTTP call to an external service to obtain the public IP address of the endpoint. It would be worth hunting through the network traffic for requests to this domain and others that provide the same function:


The Detection Using NetWitness Endpoint

When we were setting up QuasarRAT, we set the persistence option to true, and the following two meta values were generated as a result. This is because QuasarRAT copies itself to the \AppData\Roaming\ directory and uses the \CurrentVersion\Run key to start itself upon boot:


If you are using the new ATT&CK meta keys, we also see this persistence mechanism described there as well with the following meta values:


As stated in the network detection section, the RAT makes an HTTP connection to an external service to get the public IP of the victim; we can also see that in the endpoint network data as shown below:


We can also drill into the meta value console.remote, which is located under the Context meta key. This shows us commands executed by cmd.exe or powershell.exe as a result of inter-process communication through anonymous pipes, i.e., a reverse shell. Here we can see client.exe executing suspect commands:


It is important to triage all the commands executed in order to identify and follow the attacker's intentions. An interesting command seen above is in relation to esentutl.exe; this binary provides database utilities for the Extensible Storage Engine, but it can also be used to copy locked files. Drilling into this command, we can see it was used to copy the SAM hive (a locked file) to the C:\PerfLogs\ directory. It does this by using the Volume Shadow Copy Service (as noted by the /vss switch in the command below) to make a backup of the locked file, which can then be copied:


This is an interesting LOLBin (Living off the Land Binary), as it would allow an attacker to copy any locked file from the system. This activity should be monitored, and the following application rule logic detects the usage of this command to copy files using the Volume Shadow Copy Service:

(filename.src = 'esentutl.exe' || filename.dst = 'esentutl.exe') && (param.src contains '/vss' || param.dst contains '/vss')

NOTE: Not all usage of esentutl.exe will necessarily be malicious; it could be a legitimate technique used by backup software, for example. It is down to the defender to determine the legitimacy of the tool executing the command.




QuasarRAT has been around for some time and has been used in a number of targeted attacks against organizations, and it is easy to see why. Remote access tools such as this pose a real risk to organizations, and monitoring for their activity is paramount to ensuring the security of your network. It is also important, as a defender, that when these tools are found, all commands are triaged to gain a better understanding of the attacker's intentions and end goal.

To round out our series explaining how to use the indicators from ASD & NSA's report for detecting web shells (Detect and prevent web shell malware | ) with NetWitness, let's take a look at the endpoint-focused indicators. If you missed the other posts, you can find them here:


Signature-Based Detection

To start with, the guide provides some YARA rules for static, signature-based analysis. However, the guide then quickly moves on to say that this approach is unreliable, as attackers can easily modify the web shells to avoid this type of detection. We couldn't agree more – YARA scanning is unlikely to yield many effective detections.


Endpoint Detection and Response (EDR) Capabilities

The guide then goes on to describe the potential benefits of using EDR tools like NetWitness Endpoint. EDR tools can be of great benefit to provide visibility into abnormal behaviour at a system level. As the paper notes:

For instance, it is uncommon for most benign web servers to launch the ipconfig utility, but this is a common reconnaissance technique enabled by web shells.

Indeed - monitoring processes and commands invoked by web server processes is a good way to detect the presence of web shells. When a web shell is first accessed by an attacker, they will commonly run a few commands to figure out what sort of access they have. Appendix F of the guide includes a list of Windows executables to watch for being launched by web server processes like IIS w3wp.exe (reproduced below):

NetWitness Endpoint provides OOTB monitoring for many of these processes and produces metadata when execution is detected. The examples below show some of the meta generated for the execution of cmd.exe, ipconfig.exe and whoami.exe from a web shell - the Behaviors of Compromise key shows values of interest:

An important detail to be wary of is that in many cases a web server process like w3wp.exe may not invoke the target executable directly, so simply running a query looking for filename.src = 'w3wp.exe' && filename.dst = 'ipconfig.exe' won’t work. In the example below, we can see that the web server process actually invokes a script in memory, which then invokes cmd.exe to run the desired tool ipconfig.exe, and similarly for whoami.exe:
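Because of that indirection, a more reliable query targets the first hop, the web server process spawning a shell, rather than the final tool:

```
filename.src = 'w3wp.exe' && filename.dst = 'cmd.exe','powershell.exe'
```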

The event detail shows the chain of execution across the two events:

We can see the full meta data includes the command to run ipconfig.exe passed as a parameter between the two processes:


We can get a clearer picture of the relationship between these processes using the NetWitness Endpoint process analyser, which shows the links between the processes:


NetWitness Endpoint generates a lot of insightful metadata to describe actions on a host. It is well worth reviewing the metadata generated and which meta keys it is placed under. There is a great documentation page with all the details here: RSA NetWitness Endpoint Application Rules 

Not just IIS

Of course, web shells don't only run on IIS! The same principles can be used for detecting web shells installed on Apache Tomcat and other web servers. Application rules in NetWitness Endpoint also look for command execution by other web server processes. Make sure you check your environment for your web server daemons and add them to the rules as well:


That’s it for this series, where we’ve gone through the indicators published by ASD & NSA in their guide for detecting web shells and transcribed how to use them in NetWitness. While the indicators in the guide serve as a starting point, real-life detection can get very complicated very quickly. As we stated in a previous post:

Not all indicators are created equally, and this post should not be taken as an endorsement by this author or RSA on the effectiveness and fidelity of the indicators published by the ASD & NSA.

My colleague Hermes Bojaxhi recently posted about another example involving web shells from one of our cases. He goes into great detail showing the exploitation of Exchange and the installation of a web shell: Exchange Exploit Case Study – CVE-2020-0688 


Let me know in the comments below if you’ve used any of these techniques in your environment and what you've found - or let me know if there's anything else you'd like to see.


Happy Hunting!

Josh Randall

Postman for NetWitness

Posted by Josh Randall Employee May 17, 2020

If you've ever done any work testing against an API (or even just for fun), then you've likely come across a number of tools that aim to make this work (or fun) easier.


Postman is one of these tools, and one of its features is a method to import and export collections of API methods that enable individuals to begin using those APIs much more easily and quickly than if, say...they only have a bunch of docs to get them started.


As NetWitness is a platform with a number of APIs and a number of docs to go along with them, a Postman collection detailing the uses, requirements, options, etc. of these APIs should (I hope) be a useful tool that individuals and teams can leverage to enable more efficient and effective use of the NetWitness Platform, as well as of any other tool that you may want to integrate with NetWitness via its APIs.


With that in mind, I present a Postman Collection for NetWitness.  This includes all the Endpoint APIs, all the Respond APIs, and the more commonly used Events (A.K.A. SDK) APIs --> Query, Values, Content, and Packets. Simply import the attached JSON file into Postman, fill in the variables, and start API'ing.


A few notes, tips, and how-to's....

  • upon importing the collection, the first thing you should do is update the variables to match your environment

  • the rest_user / rest_pass variables are required for the Endpoint and Respond API calls
    • the NW account you use here can be created/managed in the NW UI in Admin/Security --> Users & Roles tabs
    • the role assigned to the account must have the integration-server.api.access permission, as well as any underlying permissions required to fulfill the request
    • e.g.: if you're querying Endpoint APIs, you'll need integration-server.api.access as well as endpoint-server permissions
  • the svc_user / svc_pass variables are required for the Events API calls
    • the NW account you use here can be created/managed in the NW UI in Admin/Services --> <core_service> --> Security --> Users & Roles tabs
    • the role assigned to the account must have the sdk.content, sdk.meta, and sdk.packets permissions, as well as any additional permissions necessary to account for Meta and Content Restriction settings you may be using
  • every Respond and Endpoint call will automatically create and update the accessToken and refreshToken used to authenticate its API call
    • so long as your rest_user and rest_pass variables are correct and the account has the appropriate permissions to call the Respond and/or Endpoint node, there is no need to manually generate these tokens
    • that said, the API calls to generate tokens are still included so you can see how they are being made
  • several of the Endpoint APIs, when called, will create and update variables used in other Endpoint APIs
    • the first of these is the Get Services call, which lists all of the endpoint-server hosts and creates variables that can be used in other Endpoint API calls
      • the names of these variables will depend on the names of each service as you have them configured in the NW UI
    • the second of these is the Get Hosts call, which lists all of the endpoint agents/hosts reporting to the queried endpoint-server and creates a variable of each hostname that can be used in other Endpoint API calls

      • this one may be a bit unwieldy for many orgs, though, because if you have 2,000 agents installed, this will create 2,000 variables, one for each host - likewise if you have 20,000 agents installed, or 200,000.... 
      • you may not want all those hostname variables, so you can adjust how many get created, or disable it altogether, by modifying, deleting, or commenting out the JavaScript code in the Tests section of the Get Hosts call

Any questions, comments, concerns, suggestions, etc...please let me know.

We are back again with another C2 framework called Chaos. CHAOS is a PoC written in Go and comes with a healthy number of features for controlling remote endpoints. It supports agents for Windows, Mac, and Linux; however, feature availability differs depending on the platform the agent is deployed on. This C2 only allows control of one agent, and all communication is over TCP sockets. More information surrounding this C2 can be found over at the C2 Matrix: C2Matrix - Google Sheets.


This C2 reminds us a lot of one we previously covered, HARS: Using RSA NetWitness to Detect HTTP Asynchronous Reverse Shell (HARS) - so check that post out as well if you haven't already.



The Attack

As always, we're keeping this super simple to place more of a focal point on the C2 traffic itself, rather than the delivery mechanism. So to deploy the agent, we simply copy the binary to the victim endpoint and execute it from the C:\PerfLogs\ directory:


After execution, we see our successful connection back to Chaos as is evident from the [+] Connected! message displayed:


Now that we have our connection, we can use one of the available built-in features to set up persistence for Chaos to ensure it starts up again should the system reboot:


From here, we can start to execute commands to get information regarding the endpoint we are controlling:




The Detection Using NetWitness Network

Chaos has no direct support for HTTP and all communication between the C2 and the agent is over TCP sockets. As there is no structure to the traffic being generated, it is not possible to classify it under a specific service, so NetWitness tags this traffic as service = 0 - otherwise known as OTHER. The service OTHER is often overlooked as an area for hunting but should still be analysed by defenders to look for malicious traffic using proprietary protocols, or TCP sockets like Chaos. From the below, we can see that there are some meta values of interest for the Chaos C2 traffic that would stand out during the hunting process:

NOTE: The unknown service over http port meta value is interesting here, as attackers often use typical ports for web browsing to get around firewall policies that block everything but web access for endpoints.
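A drill reflecting that note (assuming the value lands under the analysis.service key, as with the other OOTB traffic-analysis meta shown here) would be:

```
service = 0 && analysis.service = 'unknown service over http port'
```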


Drilling into the possible base64 windows shell meta value, we can see the structure of the Chaos C2 traffic. The commands are sent as typed to the agent, but the output from the command is Base64 encoded and sent back to the C2, hence why NetWitness generated the possible base64 windows shell meta value:


This gives us the ability to easily observe the commands being executed, and to Base64 decode the output of the commands directly within the UI:


For this type of C2 there is no need to create additional detections for NetWitness Network, the detection is already there and just requires that defenders triage traffic of type OTHER where interesting meta values are generated, such as the ones shown here.



The Detection Using NetWitness Endpoint

As always, when these C2 frameworks are deployed, they have to execute and do things in order to achieve their end goal, and with NetWitness Endpoint it is easy to detect these actions. Below are the meta values generated from the small number of commands that were executed through the C2:


  • chaos > whoami - gets current username
  • chaos > tasklist - enumerates processes on local system
  • chaos > ipconfig - enumerates ip configuration
  • chaos > hostname - gets hostname
  • chaos > persistence_enable - runs registry tool, runs xcopy.exe, modifies run key, modifies registry using command-line registry tool
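To pull back sessions carrying these behaviours in one query (this assumes the values above are registered under the Behaviors of Compromise key, boc):

```
boc = 'runs registry tool' || boc = 'modifies run key'
```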


Opening the Events view for the meta values of interest, we can get a better view of all the commands being executed:



The Detection Using NetWitness Logs

In order to better identify suspicious activities taking place on the endpoint, we have chosen to install Sysmon and to include the detections available through its logging. More information surrounding Sysmon can be found in Microsoft's Sysinternals documentation. The collection of these logs is performed via the NetWitness Endpoint agent itself; how that was set up is covered in a separate post.


There are multiple starting points for using Sysmon to find malicious activities, but for now we are going to start with the following logic, which detects whoami being executed on a system. This is normally evidence of attacker activity after successful exploitation or privilege escalation, and it is not common for most users to run it regularly:

(event.source = 'microsoft-windows-sysmon') && ( ends'whoami.exe') && ( = '1')

NOTE: The ='1' shown in the above query is for process creations.


Upon executing this query in NetWitness, we can see we get hits for the whoami command being executed from the C:\PerfLogs\ directory:


This is a suspicious directory for processes to be created from, so we can take a look at everything spawning out of it by slightly modifying our query to cover all process execution from the C:\PerfLogs\ directory:

(event.source = 'microsoft-windows-sysmon') && (directory = 'c:\\perflogs\\') && ( = '1')


Here we can see a suspect executable named chaos.exe running from the C:\PerfLogs\ directory, and we can also see that a number of other suspicious commands are being executed from this directory as well:




We could also create an application rule that identifies the persistence that was created by looking for edits being made to the \CurrentVersion\Run key using the following logic:

(event.source = 'microsoft-windows-sysmon') && ( = '1') && (param contains 'reg  add hkcu\\software\\microsoft\\windows\\currentversion\\run')


NOTE: While we covered Sysmon as a free alternative to EDR, our recommendation would still be to use one, as Sysmon may require a considerable amount of configuration and tweaking, and will not provide as many capabilities or as much visibility as an EDR solution would. We covered it here just to offer an alternative for those that don’t use EDR.




Chaos C2 is an easy-to-use framework that gives the attacker great control over the victim endpoint. It does not provide much in terms of obfuscation and does not attempt to blend in with normal traffic, so it should be an easy detection for defenders whether you have NetWitness Network, Endpoint, or Logs. Just remember not to shy away from the traffic type OTHER when hunting through those packets!


Octopus was presented at Black Hat London 2019 by Askar. The GitHub page is available here. It is a pre-operation C2 for Red Teamers, based on HTTP/S and written in Python. This blog post will show the detection of Octopus (over HTTP) with NetWitness Endpoint and Network.



The attacker sets up an HTTP listener in Octopus and generates an exe payload. He then builds a webpage where he embeds the payload, and spreads the webpage through social media and email spam campaigns. When the victim opens the webpage from his Windows 10 machine, a pop-up message is immediately shown in the browser stating that the current version of the Adobe Flash plugin is outdated and needs to be updated to install the latest security patches. The victim clicks on the pop-up and installs the update, which infects his machine.


Part 1 -  Attack phase

Once Octopus is started this is how the attacker creates a listener and generates the payload, in this case an exe payload (hta and powershell payloads are also an option):



In more detail, we have:

listen_http listen_ip port hostname interval page listener_name
generate_unmanaged_exe listener_name output_path

The attacker uses the popular ngrok tunneling service as a proxy; once the victim machine is infected, it will communicate with the ngrok address, which will in turn create a secure tunnel to the attacker box.


Next, the attacker uses a technique known as browser hooking to embed the exe file into a webpage. To achieve this the attacker used the BeEF framework. Explaining this whole process is out of the scope of this post, but if you are interested in knowing more you should have a look at the Autorun Rule Engine documentation on the BeEF GitHub page.


The victim, using a Windows 10 machine, sees an interesting website about organic food on social media and clicks on the webpage:



As shown above, once the webpage is loaded a message pops up warning the user to install a new version of the Adobe Flash plugin which includes new security updates. Interestingly, the message also warns to ignore the missing certificate signature, claiming it is a known issue which Adobe is working on.



The victim then clicks on Install missing Plugins and then on Run, ignoring the signature warning as advised. Windows Defender is active but does not detect the exe file.


On the other side of the wall the attacker receives a connection to the listener.


To interact with the victim the attacker runs the following command:

interact 1

where 1 is the number of the session.


The attacker also runs some other commands such as "whoami", "quser", and "report". The latter is a built-in Octopus command which provides some additional information about the victim machine. After a little browsing within the victim machine's folders, the attacker also finds a file containing potentially sensitive information (TopSecret.txt) and downloads it using the Octopus download command.


Part 2 - Detection phase with the RSA NetWitness Platform

NetWitness Endpoint

The analyst receives an email alert about a high priority incident generated in the NetWitness Respond module so he starts investigating:



The incident is generated by the NetWitness Endpoint incident rule "High Risk Alerts: NetWitness Endpoint".  However, the rule originates from an App Rule which is part of a bundle content pack available in RSA Live. More information about this bundle is available here.


The App Rule condition is the following:

device.type = 'nwendpoint' && category = 'network event' && context = 'network.outgoing' && direction = 'outbound' && context != 'network.nonroutable' && context.src = 'file.unsigned' && dir.path.src = 'appdatalocal','appdataroaming'

and it basically alerts if an unsigned file initiated from the Windows AppData/local or AppData/roaming directory has made an outbound network connection. The alert in turn generates an incident since it is marked as High Risk.


It is apparent from the incident that the file adobe_flash_update.exe made a connection to the ngrok server the attacker uses to tunnel the connection to his machine. The fact that the file is unsigned and makes a connection to a website that is not Adobe's makes things extremely suspicious.

Drilling down into the events with NetWitness Endpoint and analyzing them in detail, the analyst also notices this:



which clearly shows that adobe_flash_update.exe spawned a few other processes, among them whoami.exe and quser.exe, which are Windows utilities typically used by attackers for enumeration.


NetWitness Network

With the information retrieved from the incident, the analyst investigates further with NetWitness Network filtering by hostname:


The analyst notices some potentially malicious HTTP requests under the Service Analysis meta key. While analyzing these meta values, he finds the following event under the "http1.1 without user-agent header" meta value.



The above is the initial communication of the victim machine with the Octopus C2. Note that "home.php" in the GET request is the name the attacker used in the command to set up the listener, as we saw in the beginning. The response to the request contains a powershell payload intended to set up the communication with the C2. We can see an AES key and its Initialization Vector, used to encrypt the communication. This structure looks very similar to the Ninja C2, described by my colleague Lee Kirkpatrick in another blog post available here.


After the agent/C2 communication has been set up, the next request is "GET /login", where the encrypted communication is established:



Each further request is a beacon to the C2, and the analyst notices that the request includes the victim machine name "WINEP1" followed by a 5-character random string:



The below two requests show the command quser launched from the C2 in the previous steps, and its response (the response is contained in a separate GET request):


Note that when the C2 requests something we see "/bills" in the GET request.


The below figure shows the decryption of the above strings using the powershell decryption function seen in the very first request (GET /home.php):


With the same process the analyst was able to see other commands the attacker ran but more importantly was able to see the attacker exfiltrated a file named TopSecret.txt from the infected machine:



The beaconing pattern can also be observed, with 120-second intervals and a constant size:



It is important to note the different destination IP addresses in the above figure. This is because ngrok resolves to different IP addresses in round-robin fashion.
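The regular-interval pattern the analyst spotted by eye can also be checked programmatically; a minimal Python sketch, assuming you have exported the session timestamps (the values below are fabricated):

```python
from statistics import pstdev

def looks_like_beaconing(timestamps, max_jitter=5.0):
    """Flag a sorted series of epoch timestamps whose gaps are near-constant."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 3:
        return False, 0.0
    # Low standard deviation across gaps means a steady beacon interval
    return pstdev(gaps) <= max_jitter, sum(gaps) / len(gaps)

# Fabricated beacon times: one session roughly every 120 seconds
times = [0, 120, 241, 360, 480]
is_beacon, interval = looks_like_beaconing(times)
print(is_beacon, interval)  # -> True 120.0
```

Jittered C2s will need a larger `max_jitter` tolerance, but a constant-interval, constant-size pattern like this one stands out immediately.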


Another interesting thing to note is that the URL parameters we saw in the GET requests can be customized via the Octopus profile file:



# this is the web listener profile for Octopus C2
# you can customize your profile to handle a specific URLs to communicate with the agent
# TODO : add the ability to customize the request headers

# handling the file downloading
# Ex : /anything
# Ex : /anything.php
file_receiver_url = "/messages"

# handling the report generation
# Ex : /anything
# Ex : /anything.php
report_url = "/calls"

# command sending to agent (store the command will be executed on a host)
# leave <hostname> as it with the same format
# Ex : /profile/<hostname>
# Ex : /messages/<hostname>
# Ex : /bills/<hostname>
command_send_url = "/view/<hostname>"

# handling the executed command
# Ex : /anything
# Ex : /anything.php
command_receiver_url = "/bills"

# handling the first connection from the agent
# Ex : /anything
# Ex : /anything.php
first_ping_url = "/login"

# will return in every response as Server header
server_response_header = "nginx"

# will return white page that includes HTA script
mshta_url = "/hta"

# auto kill value after n tries
auto_kill = 10


Lastly, while inspecting the network for C2 traffic the analyst  finds the following:



These are HTTP beacons. Requests are sent on port 3000 which is the default port the BeEF framework uses.



Looking at one of the sessions, the analyst sees it contains several requests like the one in the above screenshot. In the Referer field we can see the address of the phishing website used by the attacker, and the GET request contains the hook to the BeEF C2. The victim will remain hooked to the C2 until he closes the browser. The attacker can leverage the hook to perform social engineering attacks, like the fake Adobe Flash update, among many others.



A client-side attack vector was used to gain an initial foothold on the victim machine. Once the victim opened the legitimate-looking webpage, his browser was "hooked" to the attacker's BeEF C2. The attacker had also set an automatic rule that pushed a fake pop-up message suggesting the victim install Adobe Flash security updates. Once the victim installed the fake update, the incident was created in the NetWitness Respond module because of the App Rule discussed earlier.

Threat actors usually use multiple techniques to distribute their malicious payloads. What would have happened if the user had downloaded the file onto his machine by different means? The same incident would probably not have been generated in NetWitness, because that specific app rule relied on the fact that an unsigned file was started from the appdatalocal directory in Windows. However, even without the incident, the analysts would have identified suspicious network activity with NetWitness Network, such as the beaconing to the C2, as well as indicators of compromise and suspicious activities in NetWitness Endpoint. For example, the Behaviors of Compromise meta key in NetWitness Endpoint would have shown the following values:


    queries users logged on local system (1) - related to the quser command
    gets current username (1) - related to the whoami command


The same applies if the attacker had set up an HTTPS listener instead of the HTTP one. In this case the analysts would not have been able to see the content of the communication between the C2 and the victim (unless there is an interceptor in place), but they would still have noticed the beaconing and the indicators of compromise in NetWitness Endpoint.



Octopus is quite new but shows similarities to other recent C2 frameworks. It is customizable and modular (external modules can be plugged in) and can run over both HTTP and HTTPS. This article showed that the NetWitness Suite can be of great use when it comes to C2 detection, with the combination of NetWitness Network and Endpoint providing a very granular level of visibility. In the case of HTTPS, an SSL/TLS interceptor would help provide more visibility, but even without it NetWitness can still identify C2 patterns and indicators of compromise that will help analysts detect potential malicious activities.

Following on from my last post, which focused on analysing web server logs (ASD & NSA's Guide to Detect and Prevent Web Shell Malware - Web Server Logs), this time we are going to look at the network-based indicators from the ASD & NSA guide, Detect and Prevent Web Shell Malware.

There are already some fantastic resources posted by my colleague from the IR team Lee Kirkpatrick and the NetWitness product Documentation team that provide great details on the different ways we can detect web shells using NetWitness for network visibility:

The focus of this post is taking the indicators published by the ASD & NSA in their guide, and showing how to use them in NetWitness.

Not all indicators are created equal, and this post should not be taken as an endorsement by this author or RSA of the effectiveness and fidelity of the indicators published by the ASD & NSA.

Now that’s out of the way, let’s take a look at the network indicators.

Web Traffic Anomaly Detection

This is really focused on the URIs being accessed on your servers and the user agents being used to access those pages. An easy way to detect new user agents, or new files being accessed on your website (depending on how dynamic your content is), is to use the show_whats_new report action. The show_whats_new action filters the results of a query to show only values that did not appear in the database prior to the timeframe of your report. Here’s an example from my lab – if I run a report to show all user agents seen in the last 6 hours, I get 20 user agents in my report:

Using show_whats_new in the THEN clause of the rule filters the results and shows me only 2 user agents (which makes sense as my chrome browser recently updated):

Obviously just because a user agent is new doesn’t automatically mean it is a web shell, as web browsers get updates all the time. But it is another method for highlighting anomalies and changes in your environment.

One of the common techniques we use in the IR team is to review the HTTP request methods used against a server – sessions that do not follow the pattern of normal user web browsing are a good indicator for web shells. Normal user-generated browsing consists of GET requests followed by POSTs. Sessions that have a POST action with no GET request and no referrer present are a good indicator, as Lee covers in his post mentioned above.
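One way to sketch that logic as an app rule, using the referer and action meta keys seen elsewhere in this post (treat this as a starting point, not a tuned detection):

```
service = 80 && action = 'post' && action != 'get' && referer !exists
```

Because app rules evaluate per session, requiring a 'post' action while excluding any 'get' in the same session approximates "POST with no GET"; you will likely want to add direction or subnet filters for your environment.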

Signature-Based Detection

As the ASD & NSA guide itself states, network signatures are an unreliable way to detect web shell traffic:

From the network perspective, signature-based detection of web shells is unreliable because web shell communications are frequently obfuscated or encrypted. Additionally, “hard-coded” values like variable names are easily modified to further evade detection. While unlikely to discover unknown web shells, signature-based network detection can help identify additional infections of a known web shell.

The guide nevertheless includes some Snort rules to detect network communication from common, unmodified web shells:

RSA NetWitness has always had the ability to use Snort rules on the Network Decoder, and that capability was recently enhanced in the 11.3 release, which added the ability to map meta data generated by the snort parser to the Unified Data Model. For the steps required to install and configure Snort rules on your Network Decoder, follow these guides for details and more information:

Here’s the short version:

  1. Create a new folder on your Network Decoder /etc/netwitness/ng/parsers/snort
  2. Create a snort.conf file in that directory. Here’s a simple configuration to get you started:
  3. Copy the rules from the ASD & NSA guide into a file called webshells.rules
    Mitigating-Web-Shells/network_signatures.snort.txt at master · nsacyber/Mitigating-Web-Shells · GitHub 

  4. Go to the Explore view for your Decoder, go to decoder > parsers > config, and add Snort="udm=true" to the parsers.options field

  5. While in Explore view, right click on decoder > parsers, select properties, then choose reload and hit Send to reload the parsers and activate your Snort rules.
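For step 2, the snort.conf can stay minimal; a sketch, assuming the standard Snort variable syntax (the variable values here are illustrative – adjust them to your environment):

```
# Minimal snort.conf for the Network Decoder snort parser (illustrative)
# Define the usual rule variables the rules may reference
var HOME_NET any
var EXTERNAL_NET any

# Rules files (e.g. webshells.rules) placed in this same directory
# are picked up when the parsers are reloaded
```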

Here we can see the Snort rules successfully loaded and available on the Network Decoder:

Unexpected Network Flows

The ASD & NSA guide suggests monitoring the network for unexpected web servers, and provides a Snort signature that simply alerts when a node in the targeted subnet responds to an HTTP(S) request, by looking for traffic sourced from port 80 or 443 by hosts in a given subnet:

alert tcp [443,80] -> any any (msg:"potential unexpected web server"; sid:4000921;)

Rather than updating this rule with the right subnet details for your environment (details that would only be usable by this one rule), we can do this natively in NetWitness using the Traffic Flow parser and its associated traffic_flow_options file to label subnets and IP addresses. Using the traffic_flow_options file for this labelling means the resulting meta can be used by other parsers, feeds, and app rules as well.

For more details on the Traffic Flow parser, go here: Traffic Flow Lua Parser 

To configure your traffic_flow_options file, start with the subnets or IP addresses of known web servers, add them as a block in the INTERNAL section of the file, and label them "web servers". When traffic is seen heading to those servers as a destination, the meta value 'web servers dst' will be registered under the Network Name (netname) meta key.

Once the traffic_flow_options file is configured, we can translate the Snort rule from the guide into an app rule that will detect any HTTP or HTTPS traffic, or traffic destined to port 80 or 443, to any system that has not been added to our definition for web servers:

(service = 80,443 || tcp.dstport = 80,443) && netname != 'web servers dst'


That covers the network based indicators included in the ASD & NSA guide. For more techniques to uncover web shell network traffic, check out the pages linked at the top of this blog, as well as the RSA IR Threat Hunting Guide for NetWitness: 

Stay tuned for the next part where we take a look at the endpoint based indicators from the guide, and see how to apply them using NetWitness Endpoint.


Happy Hunting!


The Australian Signals Directorate (ASD) & US National Security Agency (NSA) have jointly released a useful guide for detecting and preventing web shell malware. If you haven't seen it yet, you can find it here:

The guide includes some sample queries to run in Splunk to help detect potential web shell traffic by analysing IIS and Apache web logs. “That’s great, but how can we do the same search in NetWitness Logs?” I hear you ask! Let’s take a look.

Web Server Logging

If you are already collecting IIS and Apache logs – or any web server audit logs for that matter – you’ve probably already made some changes to your configuration to get the data that you want. To run the queries suggested by the guide, we need to change a default log parser setting for IIS & Apache logs. The default setting does not save the URI field as meta that we can query – it is parsed at capture time and available as transient meta for evaluation by feeds, parsers, & app rules, but it is not saved to disk as meta. To collect the data needed to run these queries, we are going to change the setting for this meta from “Transient” to “None”.

For more information on how RSA NetWitness generates and manages meta, go here: Customize the meta framework 

The IIS and Apache log parsers both parse the URI field from the logs into a meta key named webpage. The table-map.xml file on the Log Decoder shows that this meta value is set to “Transient”.

To change the way this meta is handled, take a copy of the line from the table-map.xml and paste it into the table-map-custom.xml, and change the flags=”Transient” setting to flags=”None”:

<mapping envisionName="webpage" nwName="" flags="None" format="Text"/>

Hit apply, then restart the log decoder service for the change to take effect. Remember to push the change to all Log Decoders in your environment.

Next, we want to tell the Concentrator how to handle this meta. Go to your index-concentrator-custom.xml file and add an entry for this new meta key:

<key description="URI" format="Text" level="IndexValues" name="" defaultAction="Closed" valueMax="10000" />

I set the display name for the key as URI – but you can set it to whatever makes sense for you. I also set a maximum value count of 10,000 for the key - you should use a value that makes sense for your website(s) and environment and review for any meta overflow errors.

Hit apply, then restart the concentrator service for the change to take effect. Remember to push the change to all Concentrators in your environment (Log & Network), especially if you use a Broker.

Now as you collect your web logs, the meta key will be populated:

You may also want to change the index level for the referer key. By default it is set to IndexKey, which means a query that tests if a referer exists or doesn’t exist will return quickly, but a search for a particular referer value will be slow. If you find yourself doing a lot of searches for specific referers you can change this setting to IndexValues as well.

Optionally, you can add the meta key to a meta group & column group so you can keep track of it in Navigate & Events views. I’ve attached a copy of my Web Logs Analysis meta group and column group to the end of this post.

Now we are ready for the queries themselves. While at first glance they seem pretty complicated, they really aren’t. Plus with the way NetWitness parses the data into a common taxonomy, you don’t need different queries for IIS & Apache – the same query will work for both!

Query 1 – Identify URIs accessed by few user agents and IP addresses

For this query, we need to use the countdistinct aggregation function to count how many different user agents and how many different IP addresses accessed the pages on our website.

For more information on NWDB query syntax, go here: Rule Syntax 
SELECT, countdistinct(user.agent), countdistinct(ip.src)
WHERE device.class = 'web logs' && result.code begins '2'
ORDER BY countdistinct(user.agent) Ascending
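To sanity-check what Query 1 computes, the same aggregation can be prototyped in plain Python over exported events (the records and field order here are fabricated for illustration):

```python
from collections import defaultdict

# Fabricated exported events: (uri, user_agent, ip_src)
events = [
    ("/index.html", "Mozilla/5.0", "10.0.0.5"),
    ("/index.html", "Chrome/83.0", "10.0.0.6"),
    ("/shell.aspx", "python-requests/2.23", "203.0.113.9"),
]

# countdistinct(user.agent) and countdistinct(ip.src) per URI
agents, sources = defaultdict(set), defaultdict(set)
for uri, ua, ip in events:
    agents[uri].add(ua)
    sources[uri].add(ip)

# ORDER BY countdistinct(user.agent) Ascending – rare user agents first
for uri in sorted(agents, key=lambda u: len(agents[u])):
    print(uri, len(agents[uri]), len(sources[uri]))
# -> /shell.aspx 1 1
#    /index.html 2 2
```

URIs touched by only one user agent from one source IP float to the top – exactly the anomaly profile of a web shell being driven by a single operator.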

Query 2 – Identify user agents uncommon for a target web server

This query simply shows the number of times each user agent accesses our web server. We can see this very easily by just using the Navigate interface and setting the result order to Ascending:

Here is the query to use in the report engine rule:

SELECT user.agent
WHERE device.class = 'web logs' && result.code begins '2'
GROUP BY user.agent
ORDER BY Total Ascending

Query 3 – Identify URIs with an uncommon HTTP referrer

This query is a bit more complicated – we want to show referrers that do not access many URIs, but also want to see how often they access each URI. This query could need some tuning if you have pages on your site that are typically only accessed by following a link from a previous page, or even an image file that is only loaded by a single page.

Our select statement will list the referer, followed by the number of URIs that the referer is used for (sorted ascending – we’re interested in uncommon referers), then the URIs where it is seen as the referer, followed by the number of hits (sorted descending):

SELECT referer, countdistinct(, distinct(, count(referer)
WHERE device.class = 'web logs' && result.code begins '2'
GROUP BY referer
ORDER BY countdistinct( Ascending, count(referer) Descending

Query 4 – Identify URIs missing an HTTP referrer

This is an easy one to finish off – we’re interested in events where there is no referer present. To refine the results we want to filter events that are hitting the base of the site ‘/’ as this could easily be someone typing the URL directly into their browser.

WHERE device.class = 'web logs' && (referer !exists || referer = '-') && != '/' && result.code begins '2'
ORDER BY Total Descending

These rules and a report that includes the rules can be found in the attached files.


Let me know in the comments below how these queries work in your environment, and if you have suggestions for improvements. The goal of this post was to quickly convert the queries included in the guide published by the ASD & NSA. Stay tuned for more posts that show how we can improve the fidelity of these queries, and how to utilise the endpoint and network indicators also found in the ASD & NSA guide.


Happy Hunting!

Shout out to @Casey Switzer, @Josh Randall & @Larry Hammond.  Without their help, the lab, configuration and operational considerations would not be possible.


Last year in RSA NetWitness 11.3, a new integration was introduced to allow NetWitness to integrate with RSA SecurID to populate high risk users from incidents in Respond.


@Josh Randall covered this in his blog post here: Examining Threat Aware Authentication in v11.3


At the time, SecurID could only add a user to the list based on an email address.  While this works for email-based alerts, the majority of Linux and Windows logs do not contain that value.


An easy workaround for this is to configure a recurring feed (see Decoder: Create a Custom Feed) that includes sAMAccountName & email address.  A simple PowerShell script to export sAMAccountName & email address should suffice. When you create an incident based on sAMAccountName, the email address is present in the session's meta data, allowing the Threat Aware Authentication integration to work.  I used several callback keys to ensure I covered the various conditions to capture the username.
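The feed itself is just a CSV mapping account names to addresses; a fabricated sketch (your PowerShell export would produce the real values – the first column is matched against the callback meta keys and the second is registered as the email meta):

```
bcline,bcline@lab.internal
jsmith,jsmith@lab.internal
svc-scanner,svc-scanner@lab.internal
```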


 AdUserEmailAddress Feed


Once this feed is live, you will see email.src & email.all metadata on any event containing any of the meta keys above.  In this case it was a failed logon:

Email Meta


As of April 2020, RSA SecurID accepts either an email address or a username for Threat Aware Authentication; to support this, NetWitness version 11.4.1 introduced a Respond configuration option to choose which field to send to SecurID.  See Respond Config: Configure Threat Aware Authentication for more information.


This makes ad_username a great option; however, when you choose that value, you will lose the email_address integration.  A way around this is to do the inverse of the feed created earlier to ensure you have the email address field in your sessions.  For this blog, we will continue to use the existing feed and send email_address to SecurID.  I set my synchronization to 1 minute, but the default setting is 15 minutes.


Threat Aware Authentication Settings

Within the RSA SecurID Cloud Authentication Service, you will need to configure your Assurance Levels and Risk-Based Authentication policies.  I set my Assurance Levels to require Device Biometrics for High assurance, Approve for Medium, and to allow access at the Low level.  I also set a simple policy which will be used for the SAML test.

Assurance Levels

Assurance Levels



 Threat Aware Policy


Rule set

Threat Aware Rules

We have a test user which will be used to demonstrate Threat Aware Authentication.  Currently as you can see, Brett Cline is synchronized from lab.internal and is currently a low risk user.

Low Risk User


When Brett navigates to an app, he is presented with a logon screen with his password:

Test App 

Since he is low risk, after a successful authentication with User ID and password, he is now logged in to the demo app.

App Success


We created a simple ESA rule that catches 3 failed logins to create an alert (ec_activity = 'Logon' and ec_outcome = 'Failure', 3 times within 3 minutes) and a corresponding Incident Rule to group these alerts and create a meaningful title.
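For reference, the 3-failures-in-3-minutes logic can be expressed as an ESA advanced rule in Esper EPL roughly as follows (a sketch, not the exact rule we deployed; the user_dst grouping key is an assumption and may need adjusting for your log sources):

```
@RSAAlert(oneInSeconds=0)
SELECT * FROM Event(
    ec_activity = 'Logon' AND
    ec_outcome = 'Failure'
).std:groupwin(user_dst).win:time_length_batch(3 min, 3)
GROUP BY user_dst
HAVING COUNT(*) = 3;
```

The time_length_batch window releases a batch per user after either 3 minutes or 3 matching events, so the HAVING clause only fires when all 3 failures land inside the window.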

Threat Aware Incident Rule


We simulated a few failed logins to create an incident:

Threat Aware Incident


Back in the SecurID Cloud Authentication Service, you can see that Brett has been added to the high risk users list:

Test User High Risk



Now when he logs into the app, he will be prompted for his userid/password

But because he is on the high risk users list, he will be required to approve via biometrics on his phone, as per the policy set above:


Which will then lead to the successful authentication.

*** Note: The user will remain on the high risk users list until the incident is closed. ***


Additional Information:


If you are collecting logs from the Cloud Authentication Service, you will see the following meta keys:

And here is the corresponding event: 

Operational Thoughts:

Thanks to @Larry Hammond for some insight into operational considerations.  He and I spoke about how NetWitness has traditionally been a passive device that cannot and should not interfere with your network or operations.  With the addition of Threat Aware Authentication, a poorly crafted rule could force step-up authentication on many users, which could result in a disruption to business.  Good rule-building practices should be followed, and rules should be tested before creating alerts.


This was the reasoning behind creating meaningful alerts in ESA to ensure the NetWitness admins have a view of the incidents which resulted in adding someone to the high risk users.

Although the RSA NetWitness platform gives administrators visibility into system metrics through the Health & Wellness Systems Stats Browser, we currently do not have a method to see all storage / retention across our deployment in a single instance or view.


Below you will find several scripts that will help us gain this visibility quickly and easily.


Update: Please grab the latest version of the script, some bugs were discovered that were fixed.


How It Works:


1. Dependency: (attached) both v10 and v11 versions for your particular environment. Please run this script before the main script, as the latter requires the 'all-systems' file which contains all of your appliances & services.

2. We then read through the all-systems file and look for services that have retention e.g. EndpointLogHybrid, EndpointHybrid, LogHybrid, LogDecoder, Decoder, Concentrator, Archiver.

3. Finally, we use the 'tlogin' functionality of NwConsole to allow cert-based authentication (no need to run this script with a username/password as input) to pull database statistics and output the retention (in days) for each service.




1. Run ./ (for 10.x systems) or ./ (for 11.x systems)

    NOTE: Make sure to grab the 11.4 version of the backup scripts if you are running NetWitness 11.4+

2. Run ./  (without any arguments). This MUST be run from Puppetmaster (v10) or Node0 (v11).


Sample Run: 


Please feel free to provide feedback, bug reports etc...


Several changes have been made to the Threat Detection Content in Live. For added detection you need to deploy/download and subscribe to the content via Live, for retired content you'll need to manually remove those.

Detailed configuration procedures for getting RSA NetWitness Platform setup - Content Quick Start Guide 



RSA NetWitness Lua Parsers:

  • fingerprint_certificate Options - Optional parameters are added to alter the behavior of the fingerprint_certificate parser.
  • fingerprint_minidump - Detects Windows Minidump files. Meta will be output as filetype - 'minidump'. This parser will also detect minidump files containing lsass memory and output meta as ioc - 'lsass minidump'.

Using RSA NetWitness to Detect Credential Harvesting: lsassy 


More information about Packet Parsers:


RSA NetWitness Application Rules:

The following app rules have been added to the Endpoint Content pack for RSA NetWitness 11.4 investigation and alerting:

  • Autorun Invalid Signature Windows Directory
  • Autorun Unsigned Hidden Only Executable In Directory
  • Autorun Unsigned winlogon helper DLL
  • Browser Runs Command Prompt
  • Command Line Writes Script Files
  • Command Prompt Obfuscation
  • Command Prompt Obfuscation Using Value Extraction
  • Command Shell Copy Items
  • Command Shell Runs Rundll32
  • Evasive Powershell Used Over Network
  • Explorer Public Folder DLL Load
  • Hidden and Hooking
  • Lateral Movement with Credentials Using Net Utility
  • OS Process Runs Command Shell
  • Outbound from Unsigned AppData Directory
  • Outbound from Windows Directory
  • Outbound Unsigned Temporary Directory
  • Potential Outlook Exploit
  • Powershell Double Base64
  • Process Redirects to STDOUT or STDERR
  • RDP Launching Loopback Address
  • Remote Directory Traversal
  • RPM Ownership Changed
  • RPM Permissions Changed
  • Unsigned Creates Remote Thread And File Hidden
  • Unsigned Library in Suspicious Daemon
  • Unsigned Opens LSASS
  • WMIC Remote Node Activity
  • Multiple Psexec Within Short Time


More information about NetWitness 11.4 New Features and Alerting: ESA Rule Types 




RSA NetWitness Lua Parsers:

  • china_chopper – Functionality has been added to detect new variants of China Chopper.
  • DCERPC – Parser now supports NTLM authentication along with Kerberos, and will extract authentication meta from both Kerberos and NTLM.

Using the RSA NetWitness Platform to Detect Lateral Movement: SCShell (DCE/RPC) 

  • DynDNS – Parser is updated with improved detection, adding new dynamic DNS domains identified by RSA Incident Response.

Read more about threat hunting/investigation using the DynDNS parser: What's updog? 

  • fingerprint_certificate - This parser is updated for efficiency improvements as well as added detection, with more customization available via the options file.
  • HTTP_lua – Updated for accuracy and efficiency.
  • SMB_lua – Functionality has been added to support SMBv3.
  • MAIL_lua – Updated for accuracy and efficiency.
  • TLS_lua - Added a new option to TLS_lua to limit examination of sessions to only the ports specified in the option. If enabled, ports not listed will not be parsed by TLS_lua and thus will not be identified as service 443. This will reduce the workload of TLS_lua by eliminating identification of SSL/TLS sessions on unknown ports.

Read more about SSL and NetWitness 

  • SSH_lua - The SSH_lua parser now includes SSH versions for both server and client, providing better insight during investigation.
  • windows_command_shell_lua – Updates are made to base64 encoded command detections along with new commands.
  • xor_executable_lua – Improved detection of XOR'd executables by adding detection of XOR'd MZ headers.


RSA NetWitness Application Rules:

The following app rules have been updated in the Endpoint Content pack for 11.4 investigation and alerting:

  • Office application injects remote process
  • Office Application Runs Scripting Engine
  • Creates Remote Service


RSA NetWitness Bundles:

The Endpoint Pack has been updated with new and updated content to support alerting for NetWitness Endpoint 11.4 and higher. 

Refer to Endpoint Content for detailed information about the content pack and its configuration. 


More content has been tagged with MITRE ATT&CK™ metadata for better coverage and improved detection.

For detailed information about MITRE ATT&CK™:

RSA Threat Content mapping with MITRE ATT&CK™ 

Manifesting MITRE ATT&CK™ Metadata in RSA NetWitness 




We strive to provide timely and accurate detection of threats as well as traits that can help analysts hunt through network and log data. Occasionally this means retiring content that provides little-to-no value.

List of Discontinued Content 


RSA NetWitness Application Rules:

  • php put with 40x error – Marked discontinued due to performance-to-value tradeoff.
  • php botnet beaconing w - Retiring this rule as it provides little-to-no value; PHP beaconing has evolved and uses different patterns.
  • Windows NTLM Network Logon Successful - Retiring as improved application rule for ‘Pass the Hash’ has been created.



For additional documentation, downloads, and more, visit the RSA NetWitness Platform page on RSA Link.


EOPS Policy:

RSA has a defined End of Primary Support policy associated with all major versions. Please refer to the Product Version Life Cycle for additional details.
