RSA NetWitness Hunting Guide

Document created by RSA Information Design and Development on Nov 10, 2016. Last modified by Scott Marcus on Nov 16, 2017.
Version 169


NetWitness is an evolution of the NetWitness NextGen security product, formerly known as Security Analytics. The platform ingests network traffic and logs, applies several layers of logic against the data, stores the values in a custom time-based database, and presents the metadata to the analyst in a unified view. When integrated with ECAT, a host-based memory forensics tool, metadata about host activities is generated and presented in the same view, giving the analyst an unparalleled view into the state of the network. In this guide, we discuss tactics and procedures for investigating the packet dataset for malicious activity.


NetWitness is not a typical network-traffic sensor; it is not an IDS/IPS or NetFlow device, although some of its more basic capabilities overlap with those tools. Metadata is generated to describe a technical aspect or behavior within a network session. A session is defined as one or two related streams of traffic with a requestor and, usually, a responder. These sessions are ordered by capture time, and as such, time is the first WHERE clause applied to the database when beginning an investigation. Knowing how the data is collected and ordered is integral to understanding how to hunt in NetWitness.


Metadata in NetWitness should be considered indicators of an activity, not signatures like those used by traditional IDS/IPS, and as such should be handled differently. The logic contained in the Security Analytics parsers is far more versatile than typical regex-based signatures. The parsers, feeds, and application rules that process traffic generate metadata about the structure of the data and extract values from individual sessions that can be searched efficiently. This differs from traditional IDS/IPS solutions in that it is possible to find new, unknown malicious activity rather than only previously identified malicious activity. Signature-like parsers are also included, but because the parser engine uses a common scripting language, Lua, more complex logic can be used to determine a match, giving a far lower false-positive rate when used in this manner. This guide focuses on hunting for new, unknown malicious activity using the content provided by the RSA Live content management system and generally does not include an overview of signature-like parsers.


Hunting within the NetWitness dataset is accomplished by analyzing intrusions, reverse engineering malware, analyzing traffic generated by malware and other attacks, then selecting metadata generated by NetWitness based on this type of behavior. The RSA IR team has conducted many investigations since being formed in 2012 and has created content and tactics for the platform that allow an analyst to quickly navigate the dataset by combining many aspects of behavior into a single piece of metadata. This cuts down on the number of drills needed to find the sessions with the desired behavior, enhancing performance of the platform and reducing the effort needed to find malicious behavior. This has allowed the IR team to discover incidents without any prior knowledge or notification that the organization was under a targeted attack. The IR team has also used these methodologies and content to discover many incidents where the attacker wasn’t even using malware, but authenticated access, also called Living off the “LANd”.


The unprecedented view into network traffic provided by NetWitness is most effective for Incident Response capabilities, but can also be used to validate the appropriate enforcement of your security policies and/or uncover areas where these policies and procedures may require improvement. This guide is intended for analysts who want to uncover new malicious activity and not simply react to alerts based on known threats.

Hunting Pack

The Hunting pack is designed to allow you to quickly hunt for indicators of compromise or anomalous network activity by dissecting packet traffic within the NetWitness Suite and populating specific meta keys with natural language values for investigation.

The Hunting pack consists of the following separate pieces:

  • A set of meta keys that are populated with the indicators
  • Imports of meta groups, which provide the analyst a view of relevant combinations of metadata
  • A set of Lua parsers to dissect the network sessions from common protocols used by an attacker
  • The Investigation Feed and the RSA FirstWatch SSL Blacklist feed.
  • Hunting-related RSA Security Analytics reports
  • Hunting-related RSA Security Analytics rules
  • Webshell Detected ESA rule: This rule indicates that 3 webshells have been detected through communication between the same IP source and destination pair within a 10 minute time window. More details are available in the RSA ESA Rules or Alerts topic.
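The windowed correlation behind the Webshell Detected rule can be approximated with a small sketch. This is not the actual ESA implementation; the class name and event fields are illustrative, and only the rule's stated parameters (3 detections, 10-minute window, same source/destination pair) come from the description above.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 10 * 60   # 10-minute window, per the rule description
THRESHOLD = 3              # webshell detections required to fire

class WebshellWindow:
    """Counts webshell detections per (src, dst) IP pair in a sliding window."""

    def __init__(self):
        self.events = defaultdict(deque)   # (src, dst) -> detection timestamps

    def observe(self, src, dst, ts):
        """Record one detection; return True if the pair crosses the threshold."""
        q = self.events[(src, dst)]
        q.append(ts)
        # Age out detections older than the window.
        while q and ts - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) >= THRESHOLD

w = WebshellWindow()
assert not w.observe("10.0.0.5", "203.0.113.7", 0)
assert not w.observe("10.0.0.5", "203.0.113.7", 120)
assert w.observe("10.0.0.5", "203.0.113.7", 300)   # third hit within 10 minutes
```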

Note: If you already have a version of the IR content pack previously distributed by the Incident Response team outside of Live, then it is recommended to remove this version before downloading the new pack. The separate topic, Removing the Original Incident Response (IR) Pack, provides instructions for how to remove this content.

Deploying the Hunting Pack

You can deploy all of the items in the Hunting Pack through Live.

Note the following:

  • For deployments prior to 10.6.2, you will also need to configure a set of new meta keys: netname, direction, ioc, boc, eoc, analysis.service, analysis.session, analysis.file. For details, see Meta Keys.
  • The traffic_flow Lua parser may be deployed to a Log Decoder, but this is not currently supported through Live. In the Traffic Flow Lua Parser documentation, see the section Deploy to Log Decoders.
  • If you are in an environment where you cannot Deploy, you should create a resource package (select > Create) to download a ZIP archive that you can use. Do not use the button, as this does not work for bundles.

To deploy the Hunting pack, depending on your version, see:

Meta Keys

The following meta keys are populated by the Lua parsers that make up the Hunting content pack. They are available without additional configuration in version 10.6.2 and higher of the NetWitness Suite. If you are deploying the content pack to an earlier version, see Appendix: Hunting Content Pack Meta Keys for instructions to enable them.


Display Name (Meta Key): Description

Network Name (netname): Networks and host descriptions tagged with source or destination values. This eliminates the need for multiple network and asset keys.

Traffic Flow Direction (direction): Flow-based information derived from source and destination lookups. The value may be outbound, lateral, or inbound.

Session Analysis (analysis.session): Client-server communication summations, deviations, conduct, and session attributes.

Service Analysis (analysis.service): Identification of core application protocols. An underlying powerhouse of service-based inspection.

File Analysis (analysis.file): A large inspection library that highlights file characteristics and anomalies.

Indicators of Compromise (ioc): Indicators of Compromise are now ubiquitous across the information security landscape. It is important to classify and store them accordingly.

Behaviors of Compromise (boc): Designated for suspect or nefarious behavior outside of standard signature-based detections.

Enablers of Compromise (eoc): Instances of poor information or operational security that could be tied back to root cause post-mortem.

Meta Groups

NetWitness offers the analyst a method to customize the metadata views and groups that are displayed while conducting an investigation. Before beginning to hunt, the first items to set up are metadata groups. RSA provides a ZIP of files that contain Meta groups for incident response hunting. These files are available as a ZIP archive in the Downloads space on RSA Link at the following URL:

For deployment of the meta groups, see Import a Meta Group under the topic Investigation: Manage User-Defined Meta Groups in the product documentation. By default, the meta keys are in the ‘Close’ state; you may change the default view state of each key to ‘Open’, depending on your needs and performance considerations.

Display Name: Description

  • Outbound HTTP: Configures the Investigation page to view meta key indicators related to the HTTP protocol.
  • Outbound SSL / TLS: Configures the Investigation page to view meta key indicators related to the SSL / TLS protocol.

Lua Parsers

You may deploy the Hunting pack Lua parsers from Live. Select the parsers listed below in the Live Search UI, then deploy or subscribe them to a Decoder.

List of Lua Parsers in the Hunting Pack


  • Detects possible APT WMI and Windows registry manipulation.

  • Detects cleartext China Chopper sessions.

  • Detects CustomTCP beaconing activity. Registers C2 domain and victim hostname as meta.

  • Identifies DNS sessions. Registers query and response records, including record type. Registers protocol error messages.

  • Detects dynamic DNS hosts and servers.

  • Detects Java JAR and CLASS files.

  • HTTP_lua: Extracts values from HTTP protocol request and response headers. Parses ICAP (HTTP) requests.

  • HTTP_lua options file: Use this file to influence the behavior of the HTTP_lua parser. See HTTP Lua Parser Options File for details.

  • Provides types and codes from ICMP packets.

  • Detects punycode-encoded internationalized domain names that use non-Latin Unicode code points whose glyphs resemble those of Latin Unicode code points. Registers the decoded homograph as analysis.service meta. Reference the RSA Link blog post from RSA Research for more details about this threat: Dissecting PunyCode - Not All Characters are Created Equal. Registered meta:
      • analysis.service: host as which the homograph is masquerading
      • ioc: indicators of compromise

  • Identifies JSON-RPC 2.0 streams. Will not identify JSON-RPC 1.0 streams, and may not identify JSON-RPC over transports such as HTTP.

  • Mail_lua: Extracts values from email messages, such as email addresses, subject, and client.

  • Mail_lua options file: Use this file to influence the behavior of the Mail_lua parser. For details, see Mail Lua Parser Options File.

  • Detects MSU RAT activity.

  • Detects PlugX malware.

  • Detects Poison Ivy RAT activity.

  • Detects PGV_PVID malware activity. PGV_PVID is a cookie string the actor puts into the malware's POST routine.

  • Identifies the Microsoft Remote Desktop Protocol.

  • Detects a variant of rekaf and derives the XOR key (crypto) and name of the infected host.

  • Detects RTF files.

  • Analyzes session characteristics such as bytes transmitted vs. bytes received, TCP flags seen, etc.

  • Parses the Microsoft SMB/CIFS protocol, versions 1 and 2.

  • Detects a possible remote code execution attack when the Struts REST plugin uses the XStream handler to handle XML payloads.

  • Detects SuperCMD Trojan beaconing. Reference the RSA Link blog post from RSA Research for more details about this threat: SUPERCMD RAT. Registered meta:
      • hostname of compromised host
      • alias.ip: address of compromised host
      • alias.mac: MAC address of compromised host
      • ioc: indicators of compromise

  • TLD_lua: Extracts the top-level domain and second-level domain portions from host names.

  • TLD_lua options file: Use this file to influence the behavior of the TLD_lua parser. See TLD Lua Parser Options File for details.

  • Identifies SSL 2.0, SSL 3.0, TLS 1.0, TLS 1.1, and TLS 1.2.

  • traffic_flow: Provides subnet names for internal networks, and directionality of the session (inbound, outbound, lateral).

  • traffic_flow options file: This optional file for the traffic_flow Lua parser provides a way for customers to configure internal subnets, as described within the full product documentation for this parser (Traffic Flow Lua Parser).

  • Identifies Microsoft Windows command shell sessions.

  • Identifies Windows executables and analyzes them for anomalies and other suspicious characteristics.

  • Detects executables that have been XOR or hex encoded.

Lua Parser Options Files

The following Lua Parsers currently have options files associated with them:

  • HTTP_lua
  • Mail_lua
  • TLD_lua
  • traffic_flow

Caution: RSA strongly suggests that you do not subscribe to the options file. Subsequent downloads of this file will overwrite all changes that you have made to the file.

Note the following:

  • If you deploy the options file, it can be found in the same directory as parsers: /etc/netwitness/ng/parsers/.
  • The parser is not dependent upon the options file. The parser will load and run even in the absence of the options file. The options file is only required if you need to change the default settings.
  • If you do not have an options file (or if your options file is invalid), the parser uses the default settings.

Note: The parser will never use both the defaults and customized options. If the options file exists and its contents can be loaded, then the defaults will not be used at all.
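This all-or-nothing behavior can be sketched as follows. The sketch is illustrative only: real options files are Lua tables, so JSON here merely stands in for the options format, and the default values are invented.

```python
import json

# Invented defaults, standing in for a parser's built-in settings.
DEFAULTS = {"decompress": True, "max_header_size": 8192}

def load_options(path):
    """All-or-nothing load: if the options file exists and is valid, use its
    contents exclusively; otherwise use the defaults. The two are never merged."""
    try:
        with open(path) as f:
            return json.load(f)        # valid file: defaults not consulted at all
    except (OSError, ValueError):
        return dict(DEFAULTS)          # missing or invalid file: defaults only

# A missing file falls back to the complete default set.
assert load_options("/nonexistent/options.lua") == DEFAULTS
```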

RSA Security Analytics Reports

RSA provides two reports as part of the Hunting Pack:

  • Hunting Summary Report: This report displays a summary of the events that have been categorized according to the meta keys listed below.

  • Hunting Detail Report: This report displays events categorized according to the meta keys listed below, with added contextual evidence to assist an analyst.


    Note: This should be run as a daily report. The number of meta values reported may be large depending on traffic volume, and running over longer time frames may result in a query timeout.


These reports are based on events that have been categorized according to the following meta keys:

  • Indicators of Compromise
  • Behaviors of Compromise
  • Enablers of Compromise
  • Service Analysis
  • Session Analysis
  • File Analysis

These keys are described in the Meta Keys section.

RSA Security Analytics Rules

The two Hunting Pack reports are dependent on the following rules.


Note: You do not need to download or deploy the individual rules: since these rules are dependencies of the Hunting reports, you receive them when you download or deploy the reports.


The Hunting Summary Report is dependent upon these rules:

  • Behaviors of Compromise: Designated for suspect or nefarious behavior outside the standard signature-based detection. This rule displays output when the meta key, Behaviors of Compromise, is populated.
  • Enablers of Compromise: Instances of poor information or operational security. Post-mortem often ties these to the root cause. This rule displays output when the meta key, Enablers of Compromise, is populated.
  • File Analysis: A large inspection library that highlights file characteristics and anomalies. This rule displays output when the meta key, File Analysis, is populated.
  • Indicators of Compromise: Possible intrusions into the network that can be identified through malware signatures or IPs and domains associated with command and control campaigns. This rule displays output when the meta key, Indicators of Compromise, is populated.
  • Service Analysis: Core application protocols identification and inspection. This rule displays output when the meta key, Service Analysis, is populated.
  • Session Analysis: Client-server communication summations, deviations, conduct, and session attributes. This rule displays output when the meta key, Session Analysis, is populated.


The Hunting Details Report is dependent on these rules:

  • Behaviors of Compromise Detail: Additional context (compared to Behaviors of Compromise rule) is provided to an analyst by grouping with additional meta keys of Service Type and Device Type.
  • Enablers of Compromise Detail: Additional context (compared to Enablers of Compromise rule) is provided to an analyst by grouping with additional meta keys of Service Type and Device Type.
  • File Analysis Detail: Additional context (compared to File Analysis rule) is provided to an analyst by grouping with the additional meta key of Filename.
  • Indicators of Compromise Detail: Additional context (compared to Indicators of Compromise rule) is provided to an analyst by grouping with additional meta keys of Service Type and Device Type.
  • Service Analysis Detail: Additional context (compared to Service Analysis rule) is provided to an analyst by grouping with additional meta keys of Service Type and Alias Host.
  • Session Analysis Detail: Additional context (compared to Session Analysis rule) is provided to an analyst by grouping with additional meta keys of Service Type and Alias Host.


Identifying Traffic Flows

It is important to understand how network traffic is processed by NetWitness and displayed to the user. Figure 1 shows how the Decoder service captures packets and copies them into memory in what are called ‘pages’. The first pool a frame lands in when it is captured is the packet capture pool. Here, sessions are either begun or packets are added to an existing session in the Assembler. NetWitness is IPv4 and IPv6 aware and will mark the first frame in a TCP session that contains the TCP SYN flag as the Request, and the other end as the Response. For non-TCP IP protocols and continuation traffic, directionality is determined by several criteria:

  • Client talks first
  • Server usually provides more data
  • Server usually has a lower port, if available
  • Server should be a non-RFC1918 IP
  • Organizations usually use lower IP octets for static IP addresses and servers


These considerations are weighted, and the weights can be adjusted by changing the values within the Explorer interface.
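As a rough illustration of how such weighted criteria combine, consider the sketch below. The weights and field names are invented (the real values live in the Decoder configuration); the four criteria themselves come from the list above.

```python
import ipaddress

# Invented weights; the real values are configured on the Decoder.
WEIGHTS = {"talked_first": 4, "more_data": 2, "lower_port": 2, "public_ip": 1}

def score_as_server(ep, peer):
    """Score one endpoint of a non-TCP or continuation session as the likely
    server. Each endpoint is a dict: {"ip", "port", "bytes", "talked_first"}."""
    score = 0
    if not ep["talked_first"]:                         # client talks first
        score += WEIGHTS["talked_first"]
    if ep["bytes"] > peer["bytes"]:                    # server usually sends more data
        score += WEIGHTS["more_data"]
    if ep["port"] < peer["port"]:                      # server usually has the lower port
        score += WEIGHTS["lower_port"]
    if not ipaddress.ip_address(ep["ip"]).is_private:  # server should be non-RFC1918
        score += WEIGHTS["public_ip"]
    return score

a = {"ip": "10.1.2.3", "port": 51543, "bytes": 900,  "talked_first": True}
b = {"ip": "8.8.4.4",  "port": 123,   "bytes": 4800, "talked_first": False}
server = b if score_as_server(b, a) >= score_as_server(a, b) else a
assert server is b   # b scores on every criterion, so it is tagged as the responder
```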


When a session is begun in the Assembler, two timers start. The first counts seconds since the session started; after 60 seconds (the SA default), the session is declared over, parsed, and written to disk. The second is a byte counter; after 32 MB (the SA default), the session is declared over, parsed, and written to disk. There are some edge cases where extremely low-bandwidth, long-lived sessions stay in the Assembler for their entire duration and are presented end to end with a lifetime value of over 60 seconds.


Figure 1. NetWitness Decoder Capture and Processing

Traffic Directionality

If you have used NetWitness for any length of time, you will have quickly realized that networks are noisy. There are retransmissions, single-sided sessions, zero-payload sessions, and peer-to-peer communications that make analyzing a dataset more difficult. When analyzing a dataset, you have to start with a direction. Do you want to view inside-to-outside, outside-to-inside, or inside-to-inside? The traffic_flow.lua parser makes this determination based on options set in the traffic_flow_options.lua file on the Decoder. For details, see the Traffic Flow Lua Parser topic on RSA Link.


The options file defines RFC1918 IP address space, as well as other non-routable blocks of IPs, used to determine direction. It is advised that an organization modify the provided options file with its internal networks and their names, as well as any non-RFC1918 IP space used by the organization, for example, interesting-traffic ACLs for LAN-to-LAN IPsec tunnels.

The following table shows metadata stored in Direction that is used for traffic flow by default without modifying the traffic_flow_options.lua file.

Direction Metadata: Description

lateral: RFC1918 Source IP to RFC1918 Destination IP

outbound: RFC1918 Source IP to Non-RFC1918 Destination IP

inbound: Non-RFC1918 Source IP to RFC1918 Destination IP
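The default classification can be sketched as below. This is a simplification of the real parser, assuming only RFC1918 membership matters; note that Python's `ipaddress` `is_private` check also treats some additional non-routable blocks (loopback, link-local, documentation ranges) as private.

```python
import ipaddress

def direction(src, dst):
    """Classify flow direction the way the default traffic_flow logic does,
    using only private-address membership (a simplification)."""
    src_private = ipaddress.ip_address(src).is_private
    dst_private = ipaddress.ip_address(dst).is_private
    if src_private and dst_private:
        return "lateral"
    if src_private:
        return "outbound"
    if dst_private:
        return "inbound"
    return None   # public-to-public: no direction meta in this sketch

assert direction("10.0.0.5", "192.168.1.9") == "lateral"
assert direction("172.16.4.2", "8.8.8.8") == "outbound"
assert direction("8.8.8.8", "10.0.0.5") == "inbound"
```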

Session Characteristics Meta Category

The Session Characteristics Meta Category extends this logic by examining technical aspects of the captured sessions. It checks the number of streams, if any payload was transmitted in those streams, the lifetime of the session, the size and ratio of transmitted vs. received data and also combines some of this logic to give the analyst a clearer view into their network. The table below describes the Session Characteristics meta category—these meta keys are populated by the session_analysis Lua parser.

Session Characteristics Metadata: Description

single sided tcp: IP Protocol 6 (TCP) with a single stream

single sided udp: IP Protocol 17 (UDP) with a single stream

zero payload: Any protocol with zero payload

first carve: Outbound traffic with two streams and payload > 0

first carve not dns: Outbound traffic with two streams, payload > 0, and not service type 53

first carve not top 20 dst: Outbound traffic with two streams, payload > 0, and an org.dst that is not one of the 20 most common destinations such as Apple or Microsoft

long connection: A connection with a lifetime > 50 seconds; the maximum lifetime in NetWitness is 60 seconds by default

session size 0-5k: Total session size (request + response payload) between 0KB and 5KB

session size 5-10k: Total session size (request + response payload) between 5KB and 10KB

session size 10-50k: Total session size (request + response payload) between 10KB and 50KB

session size 50-100k: Total session size (request + response payload) between 50KB and 100KB

session size 100-250k: Total session size (request + response payload) between 100KB and 250KB

medium transmitted outbound: Between 1MB and 4MB transmitted outbound during the session

high transmitted outbound: Greater than 4MB transmitted outbound during the session

ratio high transmitted: Between 75% and 100% of the session payload transmitted outbound

ratio medium transmitted: Between 26% and 74% of the session payload transmitted outbound

ratio low transmitted: Between 0% and 25% of the session payload transmitted outbound
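The size and ratio buckets in the table above can be sketched as a simple classifier. This is not the session_analysis parser itself: the function name is made up, and the handling of exact boundary values is a guess.

```python
def session_analysis_meta(req_payload, resp_payload):
    """Approximate the session_analysis size and ratio meta values.
    Thresholds mirror the table; boundary handling is illustrative."""
    meta = []
    total = req_payload + resp_payload

    # Total-session-size buckets (first matching upper bound wins).
    buckets = [(5_000, "session size 0-5k"), (10_000, "session size 5-10k"),
               (50_000, "session size 10-50k"), (100_000, "session size 50-100k"),
               (250_000, "session size 100-250k")]
    for limit, name in buckets:
        if total < limit:
            meta.append(name)
            break

    # Share of payload transmitted outbound (the request side).
    if total:
        ratio = req_payload / total
        if ratio >= 0.75:
            meta.append("ratio high transmitted")
        elif ratio > 0.25:
            meta.append("ratio medium transmitted")
        else:
            meta.append("ratio low transmitted")

    # Absolute outbound volume.
    if req_payload > 4_000_000:
        meta.append("high transmitted outbound")
    elif req_payload > 1_000_000:
        meta.append("medium transmitted outbound")
    return meta

assert "session size 0-5k" in session_analysis_meta(1200, 2800)
assert "ratio high transmitted" in session_analysis_meta(900_000, 100_000)
```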


Utilizing this basic logic, we can start to understand which direction our traffic is flowing and begin to segment the dataset so we can focus on behavior that is interesting to us. The NetWitness Decoder service does not attempt any sanity checking on session directionality. That means that if the first frame received in a session is a TCP FIN packet, the requesting IP/port combination will still be tagged as the Requestor. Sometimes such a session is the tail of another session that was previously closed out, but it could also represent another type of malicious activity.


NetWitness is a forensics tool: it will not attempt to correct what might be considered non-RFC-compliant use of a protocol, and will only present you with the data it has captured. For example, if we were interested in non-DNS sessions that had a payload, originated from within the organization, and whose destination was the Internet, we would simply click on first carve not dns under Session Characteristics as our first drill. This removes sessions, and therefore ‘noise’, that isn’t of current interest or relevance to our investigation of traffic originating from within our organization and going out to the Internet. This could be a user watching a YouTube video, checking Facebook, or a Trojan’s C2 protocol fetching orders.


Conversely, looking for connections from the Internet into the organization would require some specific knowledge and special placement of the NetWitness Decoder device. Some considerations that should be taken include:

  • Is there a load balancer?
  • Are the inbound web services segmented into different DMZs, such as Web, Application, and Database?
  • Are the DMZ servers using RFC1918 IP space with NAT/PAT, or are they addressed with routable IPs?
  • Can I see inter-DMZ communications, or just inbound/outbound types of communications?
  • Can the DMZ make connections to my Inside network?


A good place to start when mapping this out is to examine the dataset with the following drill: org.src exists && tcpflags = 'syn'. This ensures that the source IPs are Internet-routable and that we are seeing the beginning of the session with the TCP SYN flag, which removes the continuation sessions that might confuse the analyst. As a side note, these longer sessions will appear with the meta session.split, indicating the session was cut off by the Decoder during processing; the linked sessions can be pivoted into, similar to the way FTP is currently handled. Next, look under org.dst for your organization’s name, which could be resolved in several different ways depending on how the IP space was registered. With this base drill you can start answering some of the questions posed in the previous paragraph and analyze the different ways the Internet interacts with your DMZ servers, and how the DMZ servers interact with the Internet and, preferably not, your Inside network.


By analyzing the directionality and the services your organization exposes to the Internet, the analyst can create a single piece of metadata to begin their investigations into certain types of behavior while eliminating the other sessions that would be considered not interesting for the current investigation. The recommended segmentation is shown in the table below.

Recommended Classifications for Directionality Rules

  • Outbound Communication with the Internet
  • Inbound Web Application Communication
  • Intra and Inter DMZ Communications
  • DMZ to Inside Communications
  • Inside to Inside Communications
  • B2B or Partner Communications
  • Inbound SMTP Communications
  • Inbound Other Applications
  • Cleartext Side of Inbound VPN Connections

Protocol Analysis: HTTP

The Hypertext Transfer Protocol is one of the most widely used protocols on the Internet; even most SSL/TLS transmissions merely tunnel HTTP. Within any given dataset there will be an enormous number of HTTP sessions to analyze. The parsers and application rules in Live Content focus on the behavior and technical aspects of the protocol. By studying how HTTP communicates, as well as analyzing malware-generated and user-generated HTTP traffic, an analyst becomes able to quickly determine what is out of place in a dataset versus what seems normal. This is a common strategy among malware authors: they want to blend in with regular network communications and appear as innocuous as possible. But by their very nature, Trojans are programmatic and structured, and when examined it becomes clear their communications hold no business value.


Be aware that there are many harmless, custom-built applications that can resemble malware (stock ticker, weather, etc.) that beacon for updates every X seconds/minutes. They often have “faked” HTTP headers, in order to pass through network inspection devices (IDS/IPS) without alerting or blocking.

HTTP Structure

HTTP has many different versions still in common use, including 0.9, 1.0, 1.1, SPDY, and HTTP/2. Excluding SPDY and HTTP/2, the header request/response structure remains basically the same. The client begins with the request Method, such as GET, POST, or PUT; then a path and/or filename (with or without arguments if it is a web application); the HTTP version; and the first carriage return and line feed, which are 0x0D 0x0A in hexadecimal. Various HTTP headers follow: each header name is punctuated by a colon character (“:”), followed by optional spaces (0x20), a value, and another carriage return and line feed before the next header. The HTTP daemon knows the header section is finished when it parses the double carriage return and line feed that indicate the next bytes are the body, if in fact there is a body at all. If there is a body, its length must be indicated, typically by a correct Content-Length header (or a chunked Transfer-Encoding).
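The framing just described can be made concrete by assembling a request byte by byte. The host, path, and body below are made up for illustration; the structure (request line, headers, CRLF pairs, double CRLF, Content-Length body) follows the description above.

```python
# A minimal HTTP/1.1 POST assembled byte-by-byte to show the wire structure.
CRLF = b"\r\n"   # carriage return + line feed, 0x0D 0x0A

body = b"id=42&cmd=status"
request = (
    b"POST /update.php HTTP/1.1" + CRLF                    # method, path, version
    + b"Host: www.example.com" + CRLF                      # header name, colon, value
    + b"User-Agent: Mozilla/5.0" + CRLF
    + b"Content-Length: " + str(len(body)).encode() + CRLF
    + CRLF                                                 # double CRLF ends the headers
    + body                                                 # body, Content-Length bytes long
)

# The daemon splits headers from body at the first double CRLF:
head, _, parsed_body = request.partition(CRLF + CRLF)
assert parsed_body == body
assert head.split(CRLF)[0] == b"POST /update.php HTTP/1.1"
```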


Figure 2. HTTP GET Structure outlines the basic structure of a HTTP GET Request and Response while Figure 3. HTTP POST Structure outlines the basic structure of a HTTP POST Request and Response.


Figure 2. HTTP GET Structure

Figure 3. HTTP POST Structure

HTTP Methods

A Method, in the context of HTTP, is a verb. By definition, HTTP supports 9 Methods, with WebDav (Web Distributed Authoring and Versioning) adding an additional 7 Methods. The most common Method in use is GET, which is roughly ten times as common as the POST Method. This is an important observation we will utilize later. For an analyst to understand what they are looking at in NetWitness, the HTTP Methods must be understood as well as the RFC compliant structure of HTTP. The table below describes the common HTTP Methods.



GET: Retrieve specified resource

POST: Send a resource to the server in the body of the POST

PUT: Store a resource on the server, such as a file

HEAD: Retrieve specified resource but omit the body

DELETE: Delete a resource on the server

TRACE: Echoes the request back to the sender for proxy/MitM detection