RSA NetWitness Platform Blog
Ransomware is something that’s haunted businesses for well over a decade, and now more than ever, detection for these attacks is something that should be prioritized by organizations. While reports have noted a slight decline in the number of ransomware attacks (Sophos 2020), they have now become highly targeted, more sophisticated, and deadly due to the value of the assets being encrypted.


How is Ransomware Deployed?

For ransomware to be as effective as possible, it must infect as many endpoints as possible. This means that ransomware is commonly deployed using techniques that allow for quick and easy distribution. Deployment methods could involve the following:


  • Microsoft SysInternals PsExec Utility
  • Group Policy Objects (GPOs)
  • System Center Configuration Manager (SCCM)


If the attacker has reached the stage where they are ready to distribute the ransomware, your detection of it will most likely occur once it starts encrypting your files, which is far too late. Prior to deploying the ransomware, the attacker must infiltrate the network, set up backdoors, harvest credentials, move laterally, and exfiltrate data. The attacker has to make a lot of noise to reach their end goal, and it is at these key points where defenders need to detect the attack. The dwell time from the first signs of malicious activity to the deployment of ransomware can be as little as a few hours, so quick detection to prevent a successful attack is a must. The following figure shows an example flow of how a ransomware attack may play out:



Let's run through this and see how we can detect this with NetWitness.


Credential Harvesting

For an attacker to move laterally, they are going to need credentials. These are typically obtained by dumping the memory of LSASS and using Mimikatz to extract the cleartext credentials from the dump. There are several methods an attacker can use to dump the memory of LSASS:


  • Microsoft Sysinternals ProcDump
  • Using the MiniDump function from comsvcs.dll
  • Custom applications (such as Dumpert)


Understanding these methods and how they manifest themselves in NetWitness is important for defenders, so they can quickly identify if these actions are occurring on their network.



ProcDump

ProcDump is a command-line utility and, as such, will typically be executed via cmd.exe. The corresponding events would look similar to the below, where cmd.exe launches the ProcDump binary with the command-line arguments to dump LSASS memory and save it as a minidump:


We then see the ProcDump binary open lsass.exe in order to dump the memory:


This minidump would typically be exfiltrated from the network so the attacker can run Mimikatz against it to extract credentials. They do this activity offline as introducing Mimikatz into the network would most likely trigger antivirus and other detections. You should definitely monitor your AV logs for alerts of this type.


The activity above could be detected by adding the following application rule to your Endpoint Decoder(s):

procdump lsass dump: param.src contains '-ma lsass' || param.dst contains '-ma lsass'
sysinternals tool usage: param.src contains '-accepteula' || param.dst contains '-accepteula'
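The logic of these two rules can also be sketched outside NetWitness. Below is a minimal Python sketch of the same matching, where the simplified event dict and its param_src/param_dst keys are illustrative stand-ins for the param.src and param.dst meta keys, not an actual NetWitness structure:

```python
# Minimal sketch of the two application rules above. The event dict and its
# param_src / param_dst keys are illustrative stand-ins for the param.src
# and param.dst meta keys, not a NetWitness API.
def matches_procdump_lsass_dump(event: dict) -> bool:
    # procdump -ma lsass writes a full minidump of LSASS memory
    return any("-ma lsass" in event.get(k, "").lower()
               for k in ("param_src", "param_dst"))

def matches_sysinternals_usage(event: dict) -> bool:
    # Sysinternals tools accept -accepteula to suppress the EULA prompt
    return any("-accepteula" in event.get(k, "").lower()
               for k in ("param_src", "param_dst"))
```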


Microsoft Sysinternal tools could also be detected by utilising the following query, file.vendor = 'sysinternals -':


As a defender, it would then be possible to identify malicious intent by analyzing the locations and names of the binaries. For example, the screenshot below shows that the Sysinternals tool named pd.exe exists in the C:\PerfLogs\ directory; this should stand out as anomalous and be triaged:



MiniDump Function from comsvcs.dll

This method has been around for quite some time but is seldom observed being used by attackers; however, it is a method of dumping LSASS memory that should be monitored all the same. An example of how this may look is shown below, where a PowerShell command uses rundll32.exe to invoke the MiniDump function and create a minidump of LSASS:


We then see rundll32.exe open lsass.exe in order to dump the memory:


The activity above could be detected by adding the following application rule to your Endpoint Decoder(s):

comsvcs.dll lsass dump: param.src contains 'comsvcs.dll MiniDump' || param.dst contains 'comsvcs.dll MiniDump'
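For illustration, this technique is usually invoked via a command line similar to rundll32.exe ... comsvcs.dll, MiniDump <pid> <output path> full. A hedged Python sketch of a matcher for that pattern follows; the regex and sample command lines are illustrative, not an exhaustive signature:

```python
import re

# Illustrative pattern for rundll32 invoking the MiniDump export of
# comsvcs.dll; the comma is optional and some invocations prefix the export
# name with '#'. This is a sketch, not an exhaustive signature.
MINIDUMP_RE = re.compile(r"comsvcs(\.dll)?\s*,?\s*#?minidump", re.IGNORECASE)

def is_comsvcs_minidump(cmdline: str) -> bool:
    return bool(MINIDUMP_RE.search(cmdline))
```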


Custom Applications

Custom applications can be built to dump the memory of LSASS using direct system calls and API unhooking. An example of a tool that does just that is Dumpert. Tools such as this would commonly be executed via cmd.exe. From the below we can see that cmd.exe was used to run Outflank-Dumpert.exe, and subsequently Outflank-Dumpert.exe opens lsass.exe to dump the memory:


Activity from unsigned executables opening LSASS would be flagged by the meta value shown in the following figure. As a defender, all binaries flagged by this meta value should be investigated to confirm if they are legitimate or malicious:


If the LSASS minidump is transferred across the network via a cleartext protocol, and you have pushed the fingerprint_minidump Lua parser to your Packet Decoder(s), the following meta value would be created; which would be another great starting point for an investigation:


Lateral Movement

Once the attacker has credentials they can then begin to laterally move to endpoints in the network. There are a number of options an attacker has to move laterally, typically they are seen to use:


  • Remote Desktop Protocol (RDP)
  • Windows Management Instrumentation (WMI)
  • Server Message Block (SMB)


While all of the above are used legitimately within an environment, it is important for defenders to understand how and where they are utilized in order to identify anomalous usage.



Remote Desktop Protocol (RDP)

RDP is a great way for attackers to move laterally: it provides an interactive graphical view of the endpoint they connect to and can easily blend in with normal day-to-day operations, allowing it to go unnoticed by defenders. Typically, RDP logs are only examined once evidence of compromise is found. The attacker will be utilising one or more accounts, and this information can then be used as a pivot point to identify lateral activity:

In order for the RDP event logs to be parsed as shown above, I added two dynamic log parser rules: Log Parser Customize: Log Parser Rules Tab 


The best log for monitoring RDP activity is the Microsoft-Windows-TerminalServices-LocalSessionManager/Operational event log; an event ID of 21 indicates a successful RDP connection. A great read to get a better handle on the event IDs related to RDP can be found here: Windows RDP-Related Event Logs: Identification, Tracking, and Investigation | Ponder The Bits.
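As a rough illustration of how those logs could be triaged once parsed, the sketch below counts event ID 21 records per account; the record layout and field names are assumptions about the parsed log, not the NetWitness schema:

```python
from collections import Counter

# Sketch: tally successful RDP connections (LocalSessionManager event ID 21)
# per source user, to surface accounts that are suddenly active over RDP.
def rdp_logons_by_user(records):
    return Counter(
        r["user"] for r in records
        if r.get("channel", "").endswith("LocalSessionManager/Operational")
        and r.get("event_id") == 21)
```

Sorting the resulting counter highlights accounts with unusual RDP volumes to pivot on.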



Windows Management Instrumentation (WMI)

Moving laterally between endpoints using WMI is a common technique adopted by attackers. Typically, a tool named WMIExec is favoured. The following screenshot shows an example of how this tool's usage looks in NetWitness Endpoint. From the below we can see the WMI provider service, WmiPrvSE.exe, execute cmd.exe and pass the parameters along with it:


Adding the following application rules to your Endpoint Decoder(s) would assist with detecting potentially malicious WMI usage:

wmiexec: param.dst contains '\\admin$\\__1'
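This rule works because wmiexec redirects command output to a file under the ADMIN$ share named with an epoch timestamp, so the string \admin$\__1 appears in the parameters. A hedged Python equivalent, with the input shape illustrative:

```python
# Sketch of the wmiexec rule: the tool redirects output to
# \\127.0.0.1\ADMIN$\__<epoch timestamp>, so '\admin$\__1' shows up in the
# command line (until the epoch's leading digit changes).
def matches_wmiexec(param_dst: str) -> bool:
    return "\\admin$\\__1" in param_dst.lower()
```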


The NetWitness Endpoint Decoder also comes with out of the box content to detect potentially malicious WMI usage:


Pivoting on these meta values would be a great way to detect possible attacker lateral movement. As a defender, you would want to identify any atypical commands associated with the WMIC activity; an example of this is shown below, whereby the attacker could use WMI to remotely execute commands on an endpoint using "process call create":


Remote WMI activity is also flagged in NetWitness Packets with the meta value, remote wmi activity. When process call create is utilised (CAR-2016-03-002: Create Remote Process via WMIC | MITRE Cyber Analytics Repository), the execmethod meta value will be populated under the action meta key. Identifying endpoints where this is taking place but typically does not is another great starting point for identifying potentially malicious WMI usage:



Server Message Block (SMB)

Lateral movement via SMB is typically performed with the net use command, which allows attackers to access a shared resource on a remote computer. Their favoured resources are typically the administrative shares, commonly C$, ADMIN$, and D$. To identify whether this type of activity is occurring in your environment, keep an eye out for the following meta values:


A sample of the net use command to mount an administrative share is shown below:


As a defender, you would want to pivot on these events and see which endpoints this activity is occurring on; from there you can perform timeline analysis on the endpoint to see what other activity took place around that time.
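The net use pattern described above can also be expressed as a simple matcher. The following Python sketch flags mounts of administrative shares; the regex and input handling are illustrative, not complete:

```python
import re

# Sketch: detect `net use` commands that mount administrative shares such as
# C$, ADMIN$ or D$ on a remote host. The pattern is illustrative only.
ADMIN_SHARE_RE = re.compile(
    r"net\s+use\s+(?:[a-z]:\s+)?\\\\[\w.\-]+\\(?:[a-z]\$|admin\$)",
    re.IGNORECASE)

def is_admin_share_mount(cmdline: str) -> bool:
    return bool(ADMIN_SHARE_RE.search(cmdline))
```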




Persistence

Once an attacker has breached a network, they will need to maintain persistence. There are two primary ways that an attacker will do this:


  • Deploy a web shell to a public facing server
  • Deploy a Trojan to beacon back to a C2 server


A common method of detecting C2s is proactive hunting, which is something we have discussed in depth on many occasions as part of the Profiling Attacker Series. We highly recommend reading through those posts to grasp C2 and web shell detection, as they have been covered in depth across a number of posts.


Another great resource for identifying endpoints that are potentially infected with web shells or Trojans is the Microsoft-Windows-Windows Defender/Operational event log. Antivirus events are often overlooked but can be a great indicator of potential compromise, as shown below, where Defender identified two web shells in the C:\PerfLogs\ directory:


Account Creation

Attackers may choose to create an account in order to push their ransomware or to laterally move. A common way for an attacker to create an account is with the net command. If the following meta value appears, it should be investigated to confirm if the user account creation was legitimate or not:

Pivoting on this meta value would give us some context as to which user was created and how. From the below we can see that lsass.exe executed net.exe to create an account named helpdesk; this is behaviour indicative of the EternalBlue exploit:


If a user was added via the command line, it would look like the following. This is not to say that this is legitimate behaviour, but it demonstrates how a normal execution of net.exe would look:


For both of these events, the defender should perform analysis on the endpoint(s) in question and perform timeline analysis to look for further anomalous behaviour.


Some additional useful application rules that could be deployed to detect anomalous behaviour by LSASS:

lsass writes exe: filename.src = 'lsass.exe' && action = 'writetoexecutable'
lsass creates process: filename.src = 'lsass.exe' && action = 'createprocess'
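The intent of these two rules can be sketched as follows: LSASS should neither write executables nor spawn child processes, so either action from lsass.exe warrants an alert. The event shape below is illustrative:

```python
# Sketch of the two LSASS rules above: legitimate LSASS does not write to
# executables or create processes, so either action is anomalous.
SUSPICIOUS_LSASS_ACTIONS = {"writetoexecutable", "createprocess"}

def lsass_anomaly(event: dict) -> bool:
    return (event.get("filename_src", "").lower() == "lsass.exe"
            and event.get("action") in SUSPICIOUS_LSASS_ACTIONS)
```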


From the account creation perspective, the Security event log would record a 4720 event ID along with information about the user that was created:


As a defender, you could pivot on event ID 4720 to analyse which user accounts were being created and where.
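For triage, those 4720 records can be reduced to a simple who/what/where view, as in this sketch; the field names are assumptions about the parsed log, not the NetWitness schema:

```python
# Sketch: summarize Security-log 4720 events (user account created) into
# (host, creating subject, new account) tuples for quick review.
def summarize_account_creations(records):
    return [(r["host"], r["subject"], r["new_account"])
            for r in records if r.get("event_id") == 4720]
```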


Ransomware Deployment

Ransomware can be deployed via a number of methods. The one we will cover here is deployment via PsExec. This is a common choice for attackers as it is a legitimate Microsoft tool that can be easily scripted to copy and execute files. Based on the way PsExec works, we can easily spot its activity based off of the following meta value:


Drilling into these events, we can see that PsExec.exe was used to connect to a remote endpoint, transfer a binary, and execute it:


A useful application rule to further detect PsExec usage could be:

psexec usage: filename.dst = 'psexesvc.exe'


There are many clones of PsExec that work in a very similar fashion; the following application rules should be added to help identify their usage within your environment:

remcom usage: filename.dst = 'remcomsvc.exe'
csexec usage: filename.dst = 'csexecsvc.exe'
paexec usage: filename.dst begins 'paexec'
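Together, these rules key on the predictable service binary each tool drops on the target. A combined Python sketch, using the names from the rules above (the input shape is illustrative):

```python
# Sketch combining the PsExec-clone rules: each tool copies a service binary
# with a predictable name to the target before creating the service.
CLONE_SERVICE_BINARIES = {"psexesvc.exe", "remcomsvc.exe", "csexecsvc.exe"}

def is_psexec_like(filename_dst: str) -> bool:
    name = filename_dst.lower()
    return name in CLONE_SERVICE_BINARIES or name.startswith("paexec")
```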


From a Packets perspective, PsExec execution would be flagged under the Indicators of Compromise meta key. As a defender, you would then need to determine whether the PsExec activity is legitimate or not:


From a log perspective, the System event log records an event ID of 7045 (service creation) when PsExec is being used, as shown below:


This is because PsExec and similar tools utilise the Service Control Manager (SCM) in order to function. For a better understanding of PsExec and how it works, please refer to the following URL:




What has been outlined above is merely an example of how a ransomware attack may unfold. Of course, there are a myriad of tactics, techniques, and procedures (TTPs) an attacker will have in their arsenal that have not been outlined within this blog post, but this hopefully gives you a good starting point for using NetWitness to identify anomalous behaviours and prevent successful attacks. The further along the attack chain you are, the higher the probability that the attacker will succeed; if you are at the PsExec stage, it is already a bit too late. It should also be noted that the application rules listed in this blog may generate false positives; each environment is unique, and filtering should therefore be performed on an individual basis.

RSA NetWitness Platform 11.5 has expanded support for Snort rules (also known as signatures) that can be imported into the network Decoders. Some of the newly supported rule parameters are:

  • nocase
  • byte-extract
  • byte-jump
  • threshold
  • depth
  • offset

This additional coverage enables administrators to use more commonly available detection rules that were not previously supported. The ability to use further Snort rules arms administrators with another mechanism, in addition to application rules and Lua parsers, to extend the detection of known threats. 
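As a hedged illustration (the sid, msg, and content values below are made up, not shipped content), a rule exercising several of the newly supported parameters might look like this:

```
alert tcp any any -> any 80 (msg:"Example - suspicious URI"; flow:to_server,established; \
    content:"/evil"; nocase; offset:0; depth:16; \
    threshold:type limit, track by_src, count 1, seconds 60; \
    sid:1000001; rev:1;)
```

Here, nocase makes the content match case-insensitive, offset and depth restrict where in the payload the content is searched, and threshold limits alert volume per source.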


To expand your knowledge on what is and is not supported, along with a much more detailed initial setup guide, check out Decoder Snort Detection 


Once configured, to investigate the threats that Snort rules have triggered, examine the Events view, pivoting on the metadata populated from the rules themselves, or query for threat.source = "snort rule" to find all Snort events. The Signature Identifier corresponds to the sid attribute in the Snort rule, while the Signature Name corresponds to the msg attribute of the rule options.

Snort rules found

As always, we welcome your feedback!


Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own.

Zerologon (CVE-2020-1472) is a vulnerability with a perfect CVSS score of 10/10 being used in the wild by attackers, allowing them to gain admin access to a Windows Domain Controller.

As more public exploits for this vulnerability are published, including support within the widely used mimikatz, even more attacks leveraging this vulnerability are expected, and it is therefore crucial to be able to detect such attempts.


In this post we will see how this vulnerability can be exploited using mimikatz to gain administrative access to a Windows Domain Controller running on Windows Server 2019, and how the different stages of the attack can be identified by the RSA NetWitness Platform, leveraging Logs, Network and Endpoint data. This will include exploiting the Zerologon vulnerability, followed by the creation of golden tickets, and finally gaining admin access to the domain controller via a pass-the-hash attack.


We will assume that the attacker already has an initial foothold on one of the internal workstations, and now wants to move laterally to the domain controller.



Step 1


The attacker downloads “mimikatz” on the compromised system using the “bitsadmin” command.



RSA NetWitness Endpoint

The executed command is detected by RSA NetWitness Endpoint and tagged as remote file copy using BITS. The exact target parameters are also provided, allowing analysts to see where the file was downloaded from (identifying the attacker’s server) as well as the location of the downloaded file. In addition, as mimikatz is a known malicious file, we are able to tag the event accordingly.




RSA NetWitness Network

The resulting network session is captured by RSA NetWitness Network, identifying the client application as Microsoft BITS as well as the downloaded file (mimikatz.exe). If needed, the session can be reconstructed to extract the file for further forensics.






Step 2


The attacker launches mimikatz, and tests whether the domain controller is vulnerable to the Zerologon vulnerability.


As the domain controller is vulnerable, the attacker executes the exploit.



RSA NetWitness Network

We know that the exploit starts with a “NetrServerReqChallenge” and spoofs the “NetrServerAuthenticate” call with eight ‘0’s (as seen in the previous screenshot). We also know that it takes an average of 256 such attempts for the attack to be successful.

This consequently leads to the following:

  • We expect to see “NetrServerReqChallenge” and “NetrServerAuthenticate”
  • Due to the large number of attempts, we expect the size of the session to be larger than that of other similar connections
  • We expect the session to contain a large number of 0s


 In fact, by looking at the captured network session, we can see these indicators tagged by RSA NetWitness.


As seen in the above screenshot

  • The session is related to netlogon (as the vulnerability targets this service)
  • We can see both “NetrServerReqChallenge” and “NetrServerAuthenticate” within the session
  • The most common byte (MCB.REQ) is “0”
  • The size of the payload is around 200KB
  • As we also have the RSA NetWitness Endpoint agent installed on the workstation, we can link the captured network session to the process that generated this connection, in this case “mimikatz.exe”


Using this information, the use of this exploit could be identified with the follow Application Rule:

service=135 && filename='netlogon' && action begins 'NetrServerAuthenticate' && action='NetrServerReqChallenge' && mcb.req=0 && size>40000
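The same heuristic can be expressed over a single session's extracted metadata, as in this sketch; the dict layout and key names are illustrative, not the Decoder's internal representation:

```python
# Sketch of the Zerologon heuristic above, applied to one session's metadata:
# netlogon pipe, both RPC operations observed, most common byte 0, and an
# unusually large payload for this kind of connection.
def looks_like_zerologon(session: dict) -> bool:
    actions = {a.lower() for a in session.get("action", [])}
    return (session.get("filename") == "netlogon"
            and "netrserverreqchallenge" in actions
            and any(a.startswith("netrserverauthenticate") for a in actions)
            and session.get("mcb_req") == 0
            and session.get("size", 0) > 40000)
```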



RSA NetWitness Logs

A successful attack would lead to the domain controller’s password being changed. This can be identified within the Windows Logs based on the following criteria:

  • Event ID: 4742 (A computer account was changed)
  • Source User: Anonymous logon
  • Destination User: ends with “$” sign
  • Hostname: specify your domain controllers




The following Application Rule / Query could be used for this detection:

device.type='windows' &&'4742' && user.dst ends '$' && user.src='anonymous logon'
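The same criteria can be checked in a sketch over parsed Windows events; the field names are assumptions about the parsed log, and how the event ID is keyed varies by deployment:

```python
# Sketch of the detection criteria above: a computer account (name ends in $)
# was changed (event ID 4742) by an ANONYMOUS LOGON source.
def zerologon_password_reset(event: dict) -> bool:
    return (event.get("event_id") == 4742
            and event.get("user_dst", "").endswith("$")
            and event.get("user_src", "").lower() == "anonymous logon")
```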






Step 3


Once the attacker successfully exploits the domain controller, he now has access to it with replication rights. He can now use the “dcsync” feature of mimikatz to mimic the behavior of a domain controller and request the replication of specific users to get their password hashes. This can be done to get the password hash of the Administrator account as seen in the below screenshot.




RSA NetWitness Network

User Replication is requested using the “GetNCChanges” function, which would result in the domain controller providing the account hashes. This behavior can be seen based on the captured network traffic.



This behavior should be monitored and alerted on when initiated from an IP or subnet that is not expected to perform domain replication.


The following is a rule that can identify this behavior, it should be fine-tuned to exclude IP addresses that are expected to have this behavior:


action = 'drsgetncchanges' && ip.src != <include list of approved IP addresses>
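The allowlist logic of that rule can be sketched as follows; the approved IP set is a placeholder you would replace with your own domain controllers:

```python
# Sketch: flag DRSGetNCChanges (directory replication) requests that do not
# originate from hosts approved to replicate, i.e. the domain controllers.
APPROVED_REPLICATION_IPS = {"10.0.0.10", "10.0.0.11"}  # placeholder DC list

def suspicious_replication(session: dict) -> bool:
    return ("drsgetncchanges" in session.get("action", "").lower()
            and session.get("ip_src") not in APPROVED_REPLICATION_IPS)
```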



RSA NetWitness Logs

This would also generate Windows Logs with the event ID 4662, but by default this log doesn’t provide enough granularity to avoid having too many false positives and is therefore not recommended to be used on its own as a detection mechanism.






Step 4


The attacker then gets a golden ticket with a validity of 10 years for the Administrator account.


He is then able to use the ticket in a pass-the-hash attack.


He is now able to get shell access to the domain controller without the need for authentication, and executes a couple of commands to confirm he is connected to the Domain Controller (hostname, whoami ...).




RSA NetWitness Logs

The attacker gained shell access by using PsExec. This leads to the creation of a service named “psexesvc” on the domain controller that can be detected with Windows Logs and is tagged as a pass-the-hash attack by RSA NetWitness as seen below.



RSA NetWitness Network

Leveraging network data can uncover more details.

As seen in the below screenshot, we can identify:

  • The use of the “Administrator” account to login over SMB
  • The use of Windows admin shares
  • The transfer of an executable within one of the sessions (psexe)
  • The creation of a service (psexesvc)




RSA NetWitness Endpoint

The initial execution of “cmd.exe” by PsExec on the Domain Controller to gain the shell access can easily be identified by RSA NetWitness Endpoint.


Any other command executed by the attacker after he gets shell access would also be identified and logged by RSA NetWitness Endpoint, with the ability to track which commands have been executed, and by which processes they have been launched, providing a full picture of how and what the attacker is doing on the domain controller.





When dealing with such attacks and breaches, which often blend in within normal noise and behaviors, it becomes evident that the need for a rich data set based on a combination of Logs, Network and Endpoint is critical to both detect the breach as well as to identify the full scope of the breach from start to end, for each step done by the attacker.

Having visibility over East/West network traffic with rich metadata also brings a lot of value compared to relying on logs alone to detect and investigate this attack efficiently. With the release of RSA NetWitness Platform v11.5, it is now possible to set up policies defining for which network traffic to keep or drop the full payload in addition to the metadata, allowing east/west network capture to be done more efficiently.

RSA NetWitness has been supporting Structured Threat Information eXpression (STIX™) as it has been the industry standard for Open Source Cyber Threat Intelligence for quite some time. 



In NetWitness v11.5 we take the power of threat intelligence coming from STIX to the next level. When in the Investigate or Respond views, you will now see the context of the intel delivered by STIX right there next to the meta, like this:


To support this, the NetWitness Platform has enhanced its existing STIX integration to improve threat detection capabilities, delivering improved threat intel information to detect and respond to attacks in a timely manner. Now, when an analyst investigates threat intelligence information retrieved from a STIX data source, the context for each indicator is displayed. The context information includes viewing the adversary and the attack details directly from Context Hub, in both the Investigate and Respond views.


Note that for the analyst to use this capability, an administrator needs to configure the STIX data sources to retrieve the threat intelligence data from the specified STIX source as below.



  1. Add & Configure STIX/TAXII as a 'Data Source' (note that you can add TAXII server/REST server/STIX file): 
  2. Create Feeds: Set up a STIX feed from the Custom Feeds section. Note that you can now see all the existing STIX data sources (as added in the previous step) to create feeds from them. See Decoder: Create a STIX Custom Feed for more details.
  3. Context Lookup Summary
  4. Context Lookup Details:

Here are the links to detailed documentation around STIX: 


Check it out and let us know what you think!


We strongly believe in the power of feedback! Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own.

As of RSA NetWitness 11.5, configuring what network traffic your Decoders collect and to what degree it should collect it has become much easier. Administrators can now define a collection policy containing rules for many network protocols and choose whether to collect only metadata, collect all data (metadata and packets), or drop all data.


NW 11.5 Selective Collection Policy Creation


This is made simpler by out-of-the-box (OOTB) policies that cover most typical situations. These can also be cloned and turned into a custom policy that fits your environment best. 


NW 11.5 Initial Selective Collection Policies


The policies are managed out of a new central location that has the ability to publish these policies to multiple network Decoders at once. This allows an administrator to configure one collection policy for DMZ traffic and distribute that to all the DMZ Decoders while simultaneously using a separate policy for egress traffic and distribute that to all the egress Decoders.


NW 11.5 Selective Collection Policy Status


An administrator can view which policies are published, the Decoders they have been applied to, when the last update was made and by whom. The policies can also be created in draft form (unpublished) and not distributed to Decoders until a maintenance window is available.


Initially this capability focuses on network collection, but the long-term plan is to continue adding types of configurations and content to be administered using this centralized management approach. Please reference the RSA NetWitness Platform 11.5 documentation for further details at Decoder: (Optional) Configure Selective Network Data Collection


As always, we welcome your feedback!


Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own. 

RSA NetWitness 11.5 introduces the ability to interactively filter events using the metadata associated with all the events. This is seen as a new Filter button inside the Event screen that opens the Filter Events panel.


NW 11.5 Event Filter Button


This new capability functions in two modes.


NW 11.5 Event Filter Panel


The first presents a familiar search experience for analysts of all skill levels as many websites have a similar layout where filters (attributes or categories of the data) exist on the left side of the page and the matching results display on the right side. As an example in the below image, clicking the metadata (#1) in this integrated panel automatically builds the query (#2) and retrieves the resulting table (#3) of matching events.


NW 11.5 Event Filter Interactive Workflow


As analysts use this, it helps build the relationship between the metadata associated with the events and how to use those to structure a query.


NW 11.5 Full Screen Filter Events Panel


The second mode allows the panel to extend full screen, giving more real estate to show more metadata at once. This mode may seem very familiar to those who have used Navigate previously. As metadata values are clicked, they are added as filters to the query bar, and the filter list updates based on the events filtered out. What it does not do is execute the query to retrieve the resulting table of events. This allows the analyst to hunt through the data and then, when ready to see the results, minimize the Filter Events panel (highlighted in the above image) to reveal the results.


In both modes, the meta values associated with the meta keys can be organized by event count or event size and sorted by the count or value. This allows analysts to sort descending by event count to find outliers, such as a small, limited number of communications. The meta keys can also be shown in smaller meta groups to help analysts focus on the most specific values for certain use cases. Analysts can use query profiles to execute a query with a predefined query, meta group, and column group, allowing them to jump right into a specific subset of data. The right-click actions that provide additional query and lookup options are also available. For a deeper dive into this capability, check out the Investigate documentation: Investigate: Drill into Metadata in the Events View (Beta)


As always, we welcome your feedback!


Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own. 

A business what?  A Business Context Feed is a feed that provides context about systems or data that is present in NetWitness to aid the analyst in understanding more about the system or data they are examining.  The Business Context Feed should answer some basic questions that always come up during the analysis.

What is this system? - Web Server, Domain Controller, Proxy Server, etc...

What does it do? - Authentication, Database/Application Server, Customer Portal, etc...

Would it be considered a Critical Asset?

A classic scenario would be for an IP address.  If an analyst would like to know whether an IP address of interest is a Domain Controller, they would need to obtain or identify all of the IP addresses of the Domain Controllers.  Then a query must be constructed to determine if there is a match (ip.all=,,,,... you get the idea).  If any content such as reports or alerts is developed for this use case, the list of IP addresses would need to be in all of those as well.  It can get complicated very quickly once you start putting this list of IPs in content, especially when the addresses change periodically.  Creating a Business Context Feed simplifies this use case by maintaining a single feed that is centrally managed. Updating the feed can even be automated in most cases.  When the feed is applied to this use case, the query is simplified from (ip.all=,,,,... you get the idea) to a query using a custom metakey: hnl.asset.role='domain controller'.

It is not uncommon for an organization to create around a dozen custom metakeys in NetWitness for their own use to provide additional context for data that is collected in NetWitness.  But not everyone takes the time to create a taxonomy document to set the standard for how the custom content will be defined and populated, providing consistency for other content that will be developed around it.  Frankly, it is not advised to commingle custom meta values with the meta values that are created by NetWitness natively.  This can create confusion about what the values "are" versus what they "should be", and can adversely affect other content that uses these standard keys. There are reserved metakeys in which custom values do not belong; these can be identified in the Unified Data Model (UDM) as "Reserved" in the "Meta Class" column or in the "Notes" column (use "ctrl+f" in the browser).
When creating custom content it is important to set standards on how the content is created; this includes naming conventions, spelling, formatting, and values. This practice provides the necessary consistency for stable content development and performance.  Another common issue is that custom content becomes knowledge exclusive to the author, which can increase the time it takes to bring new people up to speed. Time is another factor: as the undocumented knowledge becomes stale, even the author often cannot recall the logic behind the naming, purpose, or value. The taxonomy document takes this burden off the content author and provides a reference for all parties involved in creating, updating, and consuming the content.  Below is an example use case of using the taxonomy to create custom metakeys and content to identify critical assets.
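As a sketch of how such a feed could be produced (the asset inventory, file layout, and the hnl.asset.role key are illustrative; the actual feed definition is done through the Custom Feeds wizard):

```python
import csv
import io

# Sketch: render an asset inventory as the CSV body of a Business Context
# Feed, one row per IP with the role to populate (e.g. hnl.asset.role).
def build_feed_csv(assets: dict) -> str:
    buf = io.StringIO()
    writer = csv.writer(buf)
    for ip, role in sorted(assets.items()):
        writer.writerow([ip, role])
    return buf.getvalue()
```

Regenerating this file from an asset database on a schedule is one way to automate feed updates.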


Creating Custom Metakeys - Things to Know

Name Length

Names are limited to 16 characters (including the "." dot delimiters). Use lowercase only for names and values.


Allowed Characters

Only alphanumeric characters are allowed, plus the "." delimiter.


Name Construction

Metakey names should follow the Unified Data Model (UDM) "3 Logical Parts" and should not conflict with any current RSA keys.

Metakey concept image

Value Format

You must decide what type of value your metakey will store and define it in the appropriate custom index files if needed. The most commonly used formats are "Text" and "Integer"; other formats exist, but these are the most commonly used.


Multivalued Field

You will have to properly identify whether or not your metakey may contain multiple values in the same session. If it will only ever hold a single value, mark it with singleton="true" in the Concentrator custom index files. This allows ESA to automatically identify the field correctly as either a multivalued field (array) or a single-valued field.
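The constraints above are easy to check programmatically before a bad name ends up in an index file. A minimal sketch in Python (validate_metakey_name is my own helper, not a NetWitness API; the three-part check follows the UDM naming convention described above):

```python
import re

def validate_metakey_name(name):
    """Check a custom metakey name against the constraints above.

    Returns a list of problems; an empty list means the name looks valid.
    """
    problems = []
    if len(name) > 16:
        problems.append("longer than 16 characters (the '.' delimiters count)")
    if not re.fullmatch(r"[a-z0-9.]+", name):
        problems.append("only lowercase alphanumerics and '.' are allowed")
    if name.count(".") != 2:
        problems.append("expected 3 logical parts: concept.context.sub-context")
    return problems

print(validate_metakey_name("hnl.asset.crit"))         # valid: []
print(validate_metakey_name("hnl.asset.criticality"))  # too long
```

Running a check like this against every key in your taxonomy document is a cheap way to keep the naming standard honest.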


Example Use Case:  Creating Critical Asset Metakeys


The concept is the least specific part of the metakey name, typically used to group the metakeys, or in this case to clearly distinguish the custom metakeys from the standard metakeys. The concept for these asset metakeys will be an abbreviation of my "Homenet Lab" ("hnl"); it is not uncommon to use an abbreviated company name here.



The context is more specific and typically defines the "classification" of the key. A context name of "asset" will be used here, as these keys identify the critical assets.



The sub-context is the most specific part; the sub-context values used here are shown below:

Sub Context - Abbreviation
criticality - crit
category - cat
role - role
hostname - host
date added - date
location - loc


General Description of the Metakeys

The table below contains the metakey names fully assembled with the "concept.context.sub-context" values applied, along with a general description of each custom metakey.

Metakey Name - Description
hnl.asset.crit - Numeric "Criticality" rating of the asset
hnl.asset.cat - "Category" of the asset
hnl.asset.role - "Role" of the asset
hnl.asset.host - "Hostname" of the asset
hnl.asset.date - "Date" the asset was added to the feed
hnl.asset.loc - "Location" of the asset


Metakey Value Format

Define whether each metakey value will be text or an integer, and whether it can store multiple values in a session.

Metakey - Value Format - Store Multiple Values
hnl.asset.crit - Integer (UInt8) - No
hnl.asset.cat - Text - Yes
hnl.asset.role - Text - Yes
hnl.asset.host - Text - No
hnl.asset.date - Integer (UInt32) - No
hnl.asset.loc - Text - No

Metakey Values


This metakey (hnl.asset.crit) identifies the criticality of the system. The table below lists the possible values and describes how to use them.

Metakey Value - Description
1 - Extremely Critical
2 - Highly Critical
3 - Moderately Critical


This metakey (hnl.asset.cat) identifies the category of the system. The table below lists the possible values and describes how to use them. Note the values are always lowercase.

Metakey Value - Description
authentication - Systems that provide authentication services, like domain controllers, LDAP servers, RADIUS, SecurID, TACACS, etc.
firewall - Systems that provide firewall services.
scanner - Systems that perform scanning activities, like a port/vulnerability scanner or pen test system.
network - Network infrastructure.



This metakey (hnl.asset.role) identifies the role of the system. The table below lists the possible values grouped by category, along with descriptions of how to use them. Note the values are always lowercase.

Category - Description - Metakey Value
authentication - Microsoft Active Directory - domain controller
authentication - RADIUS Server - radius server
authentication - SecurID Server - securid server
firewall - Firewall operating in the ecommerce DMZ - ecommerce dmz
firewall - Internal firewall for secure hosting - secure hosting
firewall - Internet perimeter firewall - internet perimeter
scanner - Vulnerability Scanner - vulnerability
scanner - Penetration testing - pentest
network - Core network router - core router
network - Core network switch - core switch

This metakey (hnl.asset.host) contains the short hostname, in lowercase.

This metakey (hnl.asset.date) contains the numeric date the system was added to the feed, in YYYYMMDD format. The date is used to determine the age of the entry, and also marks that no contextual meta was generated prior to that date.
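Because the value is a plain YYYYMMDD integer, computing the age of a feed entry is straightforward. A quick sketch in Python (feed_entry_age_days is my own helper name):

```python
from datetime import date, datetime

def feed_entry_age_days(yyyymmdd, today=None):
    """Days since the asset was added to the feed (hnl.asset.date is YYYYMMDD)."""
    added = datetime.strptime(str(yyyymmdd), "%Y%m%d").date()
    return ((today or date.today()) - added).days

print(feed_entry_age_days(20200708, today=date(2020, 7, 18)))  # 10
```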



This metakey (hnl.asset.loc) identifies the location of the system. The table below lists the possible values and describes how to use them. Note the values are always lowercase.

Metakey Value - Description
hqdc-01 - Headquarters Data Center 1
lvdc-02 - Leonardville Data Center 2
mscwdc-03 - Moscow Data Center 3
raddc-04 - Radium Data Center 4


Sample Business Context Feed Using Taxonomy

User Friendly Version (the #index column holds the value to match, such as the asset's IP address):

#index  hnl.asset.crit  hnl.asset.cat   hnl.asset.role     hnl.asset.host  hnl.asset.date  hnl.asset.loc
        1               firewall        secure hosting     hnlshfw-02      20200708        hqdc-01
        1               authentication  domain controller  hnraddc-01      20200708        raddc-04
        1               authentication  domain controller  hnlvdc-02       20200708        lvdc-02
        1               authentication  domain controller  hnmscwdc-03     20200708        mscwdc-03
        1               network         core switch        hnlcsw-01       20200708        hqdc-01


CSV File Format for Feed Consumption (the empty first field in each record is where the #index value, such as the asset's IP address, goes):

#index,hnl.asset.crit,hnl.asset.cat,hnl.asset.role,hnl.asset.host,hnl.asset.date,hnl.asset.loc
,1,firewall,internet perimeter,hnlhqfw-01,20200708,hqdc-01
,1,firewall,secure hosting,hnlshfw-02,20200708,hqdc-01
,1,authentication,domain controller,hnraddc-01,20200708,raddc-04
,1,authentication,domain controller,hnlvdc-02,20200708,lvdc-02
,1,authentication,domain controller,hnmscwdc-03,20200708,mscwdc-03
,1,network,core switch,hnlcsw-01,20200708,hqdc-01
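Since the feed is centrally managed, regenerating this CSV can be scripted from whatever asset inventory you maintain. A minimal sketch in Python (render_feed is my own helper, not a NetWitness API, and 192.0.2.10 is a hypothetical documentation address):

```python
import csv
import io

FIELDS = ["#index", "hnl.asset.crit", "hnl.asset.cat", "hnl.asset.role",
          "hnl.asset.host", "hnl.asset.date", "hnl.asset.loc"]

def render_feed(assets):
    """Render asset records as the feed CSV shown above (header + one row each)."""
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerow(FIELDS)
    for asset in assets:
        writer.writerow([asset[field] for field in FIELDS])
    return buf.getvalue()

# Hypothetical example record for a Domain Controller.
assets = [{
    "#index": "192.0.2.10",
    "hnl.asset.crit": "1",
    "hnl.asset.cat": "authentication",
    "hnl.asset.role": "domain controller",
    "hnl.asset.host": "hnraddc-01",
    "hnl.asset.date": "20200708",
    "hnl.asset.loc": "raddc-04",
}]
print(render_feed(assets))
```

A script like this, run on a schedule against your CMDB export, is one way to keep the feed current without hand-editing CSV files.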


Customizing Index

Now that the metakey names and values have been established they can be added to the necessary index custom files so that they are available to the analyst in Investigate.


Log/Network Decoders

There are two metakeys that are defined as integers, so we need to tell the Log or Network Decoder that these metakeys are to be formatted as integers.

The following custom index files need to be modified with the entries below:

index-logdecoder-custom.xml (Log Decoder)

<!-- *** Homenet Lab Custom Index 1.0 05/04/2020 *** -->
<!-- *** Homenet Lab Custom metakeys *** -->
<key description="HNL Asset Criticality" name="hnl.asset.crit" format="UInt8" level="IndexNone"/>
<key description="HNL Asset Date" name="hnl.asset.date" format="UInt32" level="IndexNone"/>

index-decoder-custom.xml (Network Decoder)

<!-- *** Homenet Lab Custom Index 1.0 05/04/2020 *** -->
<!-- *** Homenet Lab Custom metakeys *** -->
<key description="HNL Asset Criticality" name="hnl.asset.crit" singleton="true" format="UInt8" level="IndexNone"/>
<key description="HNL Asset Date" name="hnl.asset.date" singleton="true" format="UInt32" level="IndexNone"/>


All of the custom meta keys will need to be added to the Concentrator to be available in Investigate for the Analysts.

The following custom index file needs to be modified with the entries below.

index-concentrator-custom.xml (Concentrator)

<!-- *** Homenet Lab Custom Index 1.0 05/04/2020 *** -->
<!-- *** Homenet custom index keys added to provide additional information from feeds *** -->

<key description="HNL Asset Criticality" name="hnl.asset.crit" singleton="true" format="UInt8" level="IndexValues" valueMax="50"/>
<key description="HNL Asset Category" name="hnl.asset.cat" format="Text" level="IndexValues" valueMax="50"/>
<key description="HNL Asset Role" name="hnl.asset.role" format="Text" level="IndexValues" valueMax="50"/>
<key description="HNL Asset Hostname" name="hnl.asset.host" singleton="true" format="Text" level="IndexValues" valueMax="50"/>
<key description="HNL Asset Date Added" name="hnl.asset.date" singleton="true" format="UInt32" level="IndexValues" valueMax="100"/>
<key description="HNL Asset Location" name="hnl.asset.loc" singleton="true" format="Text" level="IndexValues" valueMax="50"/>


Now you have more information than just an IP address to look at thanks to the Taxonomy and a Business Context Feed.


As of RSA NetWitness Platform 11.5, analysts have a new landing page option to help them determine where to start upon login. We call this new landing page Springboard. In 11.5 it becomes the new default starting page upon login (adjustable) and can be accessed from any screen simply by clicking the RSA logo on the top left.


The Springboard is a specialized dashboard (independent of the existing "Dashboard" functionality) designed as a starting place where analysts can quickly see the variety of risks, threats, and most important events in their environment.  From the Springboard, analysts can drill into any of the leads presented in each panel and be taken directly to the appropriate product screen with the relevant filter pre-applied, saving time and streamlining the analysis process.  


As part of the 11.5 release, Springboard comes with five pre-configured (adjustable) panels that will be populated with the "Top 25" results in each category, depending on the components and data available:


  • Top Incidents - Sorted by descending priority. Requires the use of the Respond module.
  • Top Alerts - Sorted by descending severity, whether or not they are part of an Incident. Requires the use of the Respond module.
  • Top Risky Hosts - Sorted by descending risk score. Requires RSA NetWitness Endpoint.
  • Top Risky Users - Sorted by descending risk score. Requires RSA UEBA.
  • Top Risky Files - Sorted by descending risk score. Requires RSA NetWitness Endpoint.


Springboard administrators can also create custom panels, up to a total of ten, using a sixth panel type that aggregates "Events" based on any existing saved query profile from the Investigate module. This requires only the core RSA NetWitness Platform, with data sourced from the underlying NetWitness Database (NWDB). It enables organizations to add their own starting places for analysts beyond the defaults, and to customize the landing experience to match the RSA NetWitness Platform components they have deployed:


Example of custom Springboard Panel creation using Event data


For more details on management of the Springboard, please see: NW: Managing the Springboard 


And as always, if you have any feedback or ideas on how we can improve Springboard or anything else in the product, please submit your ideas via the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform  

RSA is pleased to announce the availability of the NetWitness Export Connector, which exports NetWitness Platform events and routes the data wherever you want, all in a continuous, streaming fashion, providing the flexibility to satisfy a variety of use cases.


This plugin is installed on Logstash and integrates with NetWitness Platform Decoders and Log Decoders. It aggregates metadata and raw logs from the Decoder or Log Decoder and converts them into Logstash JSON objects, which easily integrate with numerous consumers such as Kafka, AWS S3, TCP, Elastic, and others.


Workflow of the NetWitness Export Connector


  • The input plugin collects metadata and raw logs from the Log Decoder, and metadata from the Decoder. The data is then forwarded to the Filter plugin.
  • The Filter plugin adds, removes, or modifies the received data and forwards it to the Output plugin.
  • The Output plugin sends the processed event data to the consumer destinations. You can use the standard Logstash output plugins to forward the data.
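Put together, a Logstash pipeline using the connector follows the usual input/filter/output shape. The sketch below is illustrative only: the input stanza is a placeholder (consult the NetWitness Export Connector documentation for the actual plugin name and options), and the Kafka broker and topic are hypothetical:

```text
input {
  # Placeholder: the Export Connector input plugin goes here. Its exact
  # name and connection options (Decoder address, credentials, etc.)
  # come from the plugin documentation.
}

filter {
  # Optional: add, remove, or modify fields before output.
  mutate { add_field => { "exported_by" => "netwitness-export-connector" } }
}

output {
  # Any standard Logstash output works, e.g. Kafka:
  kafka {
    bootstrap_servers => "kafka.example.com:9092"   # hypothetical broker
    topic_id => "netwitness-events"                 # hypothetical topic
  }
}
```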


Check it out and let me know what you think!


Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own. 


Download and Documentation

We are excited to announce the release of the new RSA OSINT Indicator feed, powered by ThreatConnect!  


What is it?

There are two new feeds that have been introduced to RSA Live, built on Open Source Intelligence (OSINT) that has been curated and scored by our partners at ThreatConnect:

  • RSA OSINT IP Threat Intel Feed, including Tor Exit Nodes
  • RSA OSINT Non-IP Threat Intel Feed, which includes indicators of types:
    • Email Address
    • URLs
    • Hostnames
    • File Hashes

These feeds are automatically aggregated, de-duplicated, aged, and scored with ThreatConnect's ThreatAssess score. ThreatAssess is a metric combining both the severity and confidence of an indicator, giving analysts a simple indication of the potential impact when a matching indicator is observed. Higher ThreatAssess scores mean higher potential impact. The range is 0-1000, with RSA opting to focus on the highest-fidelity indicators with scores of 500 or greater (as of the 11.5 release; subject to change as needed).


Who gets it?

These feeds are included, at no charge, for any customer with any combination of RSA NetWitness Logs, RSA NetWitness Packets, or RSA NetWitness Endpoint under active maintenance. The feed will work on any version of RSA NetWitness, but please see the How do I deploy it? section for notes on version-specific considerations.


How do I deploy it?

These feeds will show up in RSA Live as follows:


To deploy and/or subscribe to the feed, please take a look at the detailed instructions here: Live: Manage Live Resources 


11.4 and earlier customers will want to add a new ioc.score meta key to their Concentrator(s) in order to query and take advantage of the ThreatAssess score of any matched indicator. Please see 000026912 - How to add custom meta keys in RSA NetWitness Platform for details on how to do this. Please note that this meta key should be of type UInt16 - inside the index file, the definition should look similar to this:
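As a rough guide, an entry modeled on the usual custom-key pattern might look like the following (the description text and valueMax are placeholder choices to size for your environment; the key name ioc.score and UInt16 format are per the note above):

```xml
<key description="IOC Score" name="ioc.score" format="UInt16" level="IndexValues" valueMax="1000"/>
```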


11.5 and greater customers do not need to add this key, as it's already included by default.



How do I use it?

Once the feeds are deployed, any events or sessions with matching indicators will be enriched with two additional meta values, ioc and ioc.score.  These values are available for use in all search, investigation, and reporting use cases assuming those keys have been enabled.



eg. Events filter view

eg. Event reconstruction view


What happens to the "RSA FirstWatch" and Tor Exit Node feeds?

If you are running these new feeds, you do not need to run the existing RSA FirstWatch & Tor Exit Node feeds in parallel as they are highly redundant and tend to be less informative when matches occur.  At some point in the near future once we believe impact will be minimal, we will officially deprecate the RSA FirstWatch & Standalone Tor Exit Node feeds.


Do you have ideas?

If you have ideas on how to make these feeds better, ideas for content creation leveraging these feeds, or anything else in the RSA NetWitness portfolio, please submit and vote on ideas in the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform 

Before I jump into explaining the relation between RSA NetWitness, as an evolved SIEM and Threat Defense platform, and Gartner's SOC Visibility Triad, I'm going to start by talking about Gartner for a minute. I expect everyone knows who Gartner is: a worldwide leading IT research and advisory organization, one of the most trusted and reputable, and active within the cyber security field for SOC threat detection and response tools such as SIEM, NTA, EDR, UEBA, SOAR, etc. The reason we are mentioning Gartner today is that last year they did a great piece of work that sought to simplify complex views of modern security toolset requirements into a single picture of what good looks like.


They called it the SOC Visibility Triad, and it calls out the three pillars of security: your traditional log-centric SIEM, plus network-oriented and endpoint-oriented security detection and response tools.

Combining these three technologies helps fill the gaps among them to provide full security visibility. That combined approach significantly reduces the chances of an internal or external bad actor evading your deployed systems for a prolonged period, which ultimately enables you to effectively meet the required SOC metrics in terms of MTTD/MTTR and cut down an attacker's dwell time.


The reason we like it is that Gartner, arguably the most respected of today’s analysts, has essentially drawn the core of RSA NetWitness.


RSA NetWitness brings together the breadth of coverage of log management solutions with the detailed intelligence and forensic worlds of endpoint and network, in a single, modular, and powerful security platform.


Cyber security has always been a battleground, so the tools used to attack and the tools used to defend have always evolved. More recently we've seen huge rises in the use of automation by attackers, massive ransomware campaigns, huge data breaches, and some pretty big fines handed out under regulation like GDPR. Most recently, of course, the Covid-19 pandemic has seen huge numbers of businesses suddenly alter the way they do business, and consequently their security posture, by rapidly allowing remote access to their corporate resources from anywhere.


All these cyber security pressures, combined with most businesses' thirst for technology adoption and digitization, created huge change. At the heart of that change are security teams trying to build or maintain adequate protections, trying to be business enablers and not blockers.


To succeed, security teams need to move from the conventional approach of multi-layered, disjointed security tooling that uses old detection methods like rules and signatures to something more valuable. Modern security tooling needs to consume all data sources, not just logs, and use the latest analysis techniques, like machine learning, to find important security insights and reduce the alert noise created by traditional approaches. Full visibility is important, and by that we don't just mean having visibility across the whole estate; we also mean combining intelligence from those data sources to uncover threats the individual tools wouldn't notice.


As you’d expect, Gartner name us as a leader in their MQ reporting for this very reason.



Using a mixed detection approach that combines a large library of out-of-the-box rule sets with the latest in machine learning, RSA NetWitness, as a modular, platform-anywhere solution, can automatically classify alerts based on their risk score across all data sources, fully aligned with the MITRE ATT&CK framework; Gartner agreed that as a single platform, RSA NetWitness shines.


For the traditional log-centric SIEM space, we have comprehensive integration coverage (see the RSA NetWitness Platform Integrations Catalog), an intuitive, interactive UI, and a toolset with advanced query and correlation capabilities. We can consume log data from 350+ log sources and have all of it filtered, normalized, and enriched at capture time, then apply real-time correlation-based analytics and reporting to provide alerts and dashboard visibility into any spotted threat. NetWitness also extends this with a fully unsupervised, multi-model, machine-learning UEBA (User and Entity Behavior Analytics) engine. This engine forms a picture of normal user and entity (endpoint, network) activity and finds anomalies automatically, for example a malicious insider, credential theft, brute force, or process injection (further details on UEBA use cases and indicators can be found here: UEBA: NetWitness UEBA Indicators).


The network detection space is really where RSA NetWitness was born, and it remains unbeaten there. RSA NetWitness can perform continuous full-packet capture while providing real-time network threat detection across the OSI stack, from layer 2 to layer 7. As with log data, this data is normalized and enriched alongside all other data sources. Specifically, with packet data we can reconstruct entire network sessions and extract malicious payloads, digital artefacts, and the like for further analysis.


At the endpoint, RSA NetWitness provides further security intelligence data by tracking system and user space processes and further enhancing the UEBA engine. With our lightweight agent we can directly perform remediation measures on endpoints from simple process shutdowns or protocol blocks to full endpoint isolation to stop compromise at the source (How to Isolate a Host from the Network ). Also, as with network detection, we can pull interesting assets such as malicious programs, MFT, system/process dump files from the endpoint for deeper analysis.


All of this analysed security data can be enriched by our threat intelligence engine, which provides yet more insight, confidence, and risk scoring for known threats like compromised IP addresses, malicious code, or actors. This provides huge amounts of insight for use in threat remediation or incident response activities. These threat responses can be tracked or automated through the main analyst interface (Respond: Responding to Incidents), or through our security orchestration and automation (SOAR) engine, NetWitness Orchestrator (Security Automation and Orchestration).


We describe RSA NetWitness as a reliable, evolved SIEM and threat defense SOC platform because of this ability to produce high-fidelity alerts across all data sources, lower false positives through the depth of its insight, and detect threats faster. It can also act as your storyteller, allowing you to go back in time and pick through an attack blow by blow. It goes beyond single indicator-of-compromise detection to detecting malicious log, network, endpoint, and user behaviors and TTPs (Tactics, Techniques and Procedures), getting you a step ahead of the threat and ultimately improving your overall digital immunity across your estate against known and unknown threats in a proactive manner.


Importantly, it gives you the best possible information to answer the burning questions during any attack:

When and how did it happen?

What systems were affected?

What’s the magnitude and impact of it?


Special thanks to Russel Ridgley, RSA's UKI CTO, who contributed to and helped me in writing this article. Please feel free to leave a comment if you have any questions or interest in understanding more about the RSA NetWitness solution. Thank you!

By default, NetWitness Endpoint 11.x creates a self-signed Certificate Authority during its initial installation, and uses this CA to generate certificates for the endpoint agent and the local reverse proxy that handles all incoming agent communications. Because all these certificates are generated from the same CA chain, they automatically trust each other and enable seamless, easy, and secure communications between agents and the endpoint server.


But what if this self-signed CA cannot be used within your organization? For a number of very valid reasons, many orgs might not allow software using a self-signed certificate, and may instead be required to use their own trusted CAs. If this is the case, we have a couple options - an easy way, and a hard way.


This blog covers the hard way.


Everything that we do in the hard way must occur after the Endpoint Log Hybrid host has been fully installed and provisioned. This means you'll need to complete the entire host installation before moving on to this process.


There are 2 primary requirements for the hard way:

  • you must be able to create a server certificate and private key capable of Server Authentication
  • you must be able to create a client certificate and private key capable of Client Authentication
    • this client certificate must have Common Name (CN) value of rsa-nw-endpoint-agent


I won't be going into details on how to generate these certificates and keys - your org should have some kind of process in place for this. And since the certificates and keys generated from that process can output in a number of different formats, I won't be going into details on how to convert or reformat them. There are numerous guides, documents, and instructions online to help with that.


Once we have our server and client certificates and keys, make sure to also grab the CA chain used to generate them (at the very least, both certs need to have a common Root or Intermediate CA to be part of the same trusted chain). This should hopefully be available through the same process used to create the certs and keys. If not, we can also export CA chains from websites - if you do this, make sure it is the same chain used to create your certificates and keys.


The endstate format that we'll need for everything will be PEM. The single server and/or client cert should look like this:



The private key should look like this:



And the Certificate Chain should look like this (one BEGIN-END block per CA certificate in the chain...also, it will help to simplify the rest of the process if this chain only includes CA certificates):



We want to make sure we have each of these PEM files for both the server and client certs and key we generated. Once we have these, we can proceed to the next set of steps.
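A quick way to sanity-check each file is to confirm it contains properly paired PEM BEGIN/END markers. A minimal sketch in Python (looks_like_pem is my own helper; this checks structure only, not certificate validity):

```python
import re

def looks_like_pem(text):
    """Rough structural check: paired BEGIN/END markers of matching types."""
    begins = re.findall(r"-----BEGIN ([A-Z0-9 ]+)-----", text)
    ends = re.findall(r"-----END ([A-Z0-9 ]+)-----", text)
    return bool(begins) and begins == ends

# A CA chain should yield one CERTIFICATE block per CA; a key file should
# yield a single PRIVATE KEY (or RSA/EC PRIVATE KEY) block.
sample = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
print(looks_like_pem(sample))  # True
```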


The rest of this process will assume that all of these certificates, keys, and chains are staged on the Endpoint Log Hybrid host.

Every command we run from this point forward occurs on the Endpoint Log Hybrid.

We end up replacing a number of different files on this host, so you should also consider backing up all the affected files before running the following commands.


For the server certificates:

  • # cp /path/to/server/certificate.pem /etc/pki/nw/web/endpoint-web-server-cert.pem
  • # cp /path/to/server/key.pem /etc/pki/nw/web/endpoint-web-server-key.pem
  • # cat /path/to/server/certificate.pem > /etc/pki/nw/web/endpoint-web-server-cert.chain
  • # cat /path/to/ca/chain.pem >> /etc/pki/nw/web/endpoint-web-server-cert.chain
  • # openssl crl2pkcs7 -nocrl -certfile /path/to/server/certificate.pem -certfile /path/to/ca/chain.pem -out /etc/pki/nw/web/endpoint-web-server-cert.p7b
  • # cp /path/to/ca/chain.pem /etc/pki/nw/nwe-trust/truststore.pem
  • # cp /path/to/ca/chain.pem /etc/pki/nw/nwe-ca/customrootca-cert.pem
  • # echo "/etc/pki/nw/nwe-ca/customrootca-cert.pem" > /etc/pki/nw/nwe-trust/truststore.p12.idx
  • # echo "/etc/pki/nw/nwe-ca/customrootca-cert.pem" > /etc/pki/nw/nwe-trust/truststore.pem.idx


The end results, with all the files we modified and replaced, should be:


Once we're confident we've completed these steps, run:

  • # systemctl restart nginx


We can verify that everything so far has worked by browsing to https://<endpoint_server_IP_or_FQDN> and checking the certificate presented by the server:


If this matches our server certificate and chain, then we can move on to the client certificates. If not, then we need to go back and figure out which step we did wrong.


For the client certificates:

  • openssl pkcs12 -export -out client.p12 -in /path/to/client/certificate.pem -inkey /path/to/client/key.pem -certfile /path/to/ca/chain.pem


...enter a password for the certificate bundle, and then SCP this client.p12 bundle onto a windows host. We'll come back to it in just a moment.


In the NetWitness UI, browse to Admin/Services --> Endpoint-Server --> Config --> Agent Packager tab. Change or validate any of the configurations you need, and then click "Generate Agent Packager." The Certificate Password field here is required to download the packager, but we won't be using the OOTB client certificate at all so don't stress about the password.


Unzip this packager onto the same Windows host that has the client.p12 bundle we generated previously. Next, browse to the AgentPackager\config directory, replace the OOTB client.p12 file with our custom-made client.p12 bundle, move back up one directory, and run AgentPackager.exe.


If our client.p12 bundle has been created correctly, then in the window that opens, we will be prompted for a password. This is the password we used when we ran the openssl pkcs12 command above, not the password we used in the UI to generate the packager. If they happen to be the same, fantastic....


We'll want to verify that the Client certificate and Root CA certificate thumbprints here match with our custom generated certificates.


With our newly generated agent installers, it is now time to test them. Pick a host in your org, run the appropriate agent installer, and then verify that you see the agent showing up in your UI at Investigate/Hosts.


If it does appear, congratulations! Make sure to record all these changes, and be ready to repeat them when certificates expire and agent installers need upgrading/updating.


If it doesn't, a couple things to check:

  • first, give it a couple of minutes...it's not going to show up instantly
  • go back through all these steps and double-check that everything is correct
  • check the c:\windows\temp directory for a log file with the same name as your endpoint agent; e.g.: NWEAgent.log....if there are communication errors between the agent/host and the endpoint server, this log will likely have relevant troubleshooting details
  • if the agent log file has entries showing both "AgentCert" and "KnownServerCert" values, check that these thumbprints match the Client and Root CA certificate thumbprints from the AgentPackager output

    • ...I was not able to consistently reproduce this issue, but it is related to how the certs and keys are bundled together in the client.p12
    • ...when this happened to me, I imported my custom p12 bundle into the Windows MMC Certificates snap-in, and then exported it (make sure that the private key gets both imported and exported, as well as all the CAs in the chain), then re-ran my AgentPackger with this exported client.p12, and it fixed the error
    • ... ¯\_(ツ)_/¯
  • from a cmd prompt on the host, run c:\windows\system32\<service name of the agent>.exe /testnet
  • check the NGINX access log on the Endpoint Log Hybrid; along with the agent log file on the endpoint, this can show whether the agent and/or server are communicating properly
    # tail -f /var/log/nginx/access.log

By default, NetWitness Endpoint 11.x creates a self-signed Certificate Authority during its initial installation, and uses this CA to generate certificates for the endpoint agent and the local reverse proxy that handles all incoming agent communications. Because all these certificates are generated from the same CA chain, they automatically trust each other and enable seamless, easy, and secure communications between agents and the endpoint server.


But what if this self-signed CA cannot be used within your organization? For a number of very valid reasons, many orgs might not allow software using a self-signed certificate, and may instead be required to use their own trusted CAs. If this is the case, we have a couple options - an easy way, and a hard way.


This blog covers the easy way.


The only real requirement for the easy way is that we are able to create an Intermediate CA certificate and its private key from our CA chain (or use an existing pair), and that this Intermediate CA is allowed to generate an additional, subordinate CA under it.


For my testing, "Root-ca" was my imaginary company's Root CA, and I created "My Company Intermediate CA" for use in my 11.4 Endpoint Log Hybrid.


(I'm no expert in certificates, but I can say that all the Intermediate CAs I created that had explicit extendedKeyUsage extensions failed. The only Intermediate CAs I could get to work included "All" of the Intended Purposes. If you know more about CAs and the specific extendedKeyUsage extensions needed for a CA to be able to create subordinate CAs, I'd be interested to know what they are.)


Once we have an Intermediate CA certificate and its private key, we need to make sure they are in PEM format. There are a number of ways to convert and check keys and certificates, and a whole bunch of resources online to help with this, so I won't cover any of the various conversion commands or methods here.


If the CA certificate looks like this, then it is most likely in the correct format:



And if the private key looks like this, then it is most likely in the correct format:



Our last step in this process has to occur at a very specific point during the endpoint log hybrid's installation - after we have run the nwsetup-tui command and the host has been enabled within the NetWitness UI, but before we install the Endpoint Log Hybrid services:

  • on the endpoint host, create directory /etc/pki/nw/nwe-ca
  • place the CA certificate and CA private key files in this directory and name them nwerootca-cert.pem and nwerootca-key.pem, respectively
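The two steps above can be sketched in shell as follows. On the real host the destination is /etc/pki/nw/nwe-ca and the source files are your Intermediate CA certificate and key; scratch directories and a generated stand-in pair are used here so the sketch runs without root:

```shell
set -e
SRC="$(mktemp -d)"
DEST="$(mktemp -d)/etc/pki/nw/nwe-ca"   # stand-in for /etc/pki/nw/nwe-ca

# Stand-ins for your real Intermediate CA certificate and key (PEM)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout "$SRC/my-ca-key.pem" -out "$SRC/my-ca-cert.pem" -subj "/CN=Example CA"

# Create the directory, then place the files under the exact names
# the installer expects
mkdir -p "$DEST"
cp "$SRC/my-ca-cert.pem" "$DEST/nwerootca-cert.pem"
cp "$SRC/my-ca-key.pem"  "$DEST/nwerootca-key.pem"

# Keep the private key readable by root only
chmod 600 "$DEST/nwerootca-key.pem"
```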


The basis for this process comes directly from the "Configure Multiple Endpoint Log Hybrid Hosts" step in the Post Installation Tasks guide, which provides more context and details on when this step should occur and how to perform it properly.


Once we've done this, we can now install the Endpoint Log Hybrid services on the host.


I suggest watching the installation log file on the endpoint server: if the Intermediate CA does not have all the necessary capabilities, the installation will fail, and this log file can help identify the failing step (if my own experience is any guide, it will most likely fail while attempting to create the subordinate Endpoint Intermediate CA --> /etc/pki/nw/nwe-ca/esca-cert.pem):

# tail -f /var/log/netwitness/config-management/chef-solo.log


If all goes well, we'll be able to check that our endpoint-server is using our Intermediate CA by browsing to https://<endpoint_server_IP_or_FQDN> and checking the certificate presented by the server:
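If a browser is not handy, openssl can perform the same check from the command line. The sketch below spins up a local TLS listener as a stand-in for the endpoint-server, so it is self-contained; the hostname, port, and certificate are all illustrative:

```shell
set -e
cd "$(mktemp -d)"

# Throwaway certificate standing in for the endpoint-server's certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout key.pem -out cert.pem -subj "/CN=endpoint-server.example.com"

# Local listener standing in for the endpoint-server's reverse proxy on 443
openssl s_server -accept 8443 -cert cert.pem -key key.pem -quiet &
SRV=$!
sleep 1

# The actual check: grab the presented certificate and print who issued it
PRESENTED=$(openssl s_client -connect 127.0.0.1:8443 </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject)
echo "$PRESENTED"

kill "$SRV"
```

Against a real endpoint server you would point s_client at <endpoint_server_IP_or_FQDN>:443 and expect the issuer to be your Intermediate CA.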


And our client.p12 certificate bundle within the agentPackager will be generated from the same chain:


And that's it!


Any agent packages we generate from this point forward will use the client.p12 certificates generated from our CA. Likewise, all agent-server communications will be encrypted with the certificates generated from our CA.

Thank you for joining us for the July 22nd NetWitness Webinar covering Data Carving using Logs as presented by Leonard Chvilicek. An edited recording is available below, with the Zoom link to the original webinar recording.

Password: V0.*h5#v

This article applies to hunting with NetWitness for Networks (packet-based). Before proceeding, make sure you are aware of GDPR or any other applicable data collection regulations, which are not covered here.


Hunting for plaintext credentials is an important and easy method of finding policy violations or other enablers of compromise. With more of the workforce remote or working from home, employees will increasingly transfer data over infrastructure not controlled by your organization, such as home WiFi, mobile hotspots, or coffee shop free WiFi.


Frequently, this hunting method will reveal misconfigured web servers, poor authentication handling, or applications using baked-in URLs and credentials. While NetWitness parses much of this by default, there are additional steps you can take to improve detection and parsing.


Key Takeaways

  • Ensure the Form_Data_lua parser is enabled and updated
  • Also hunt for sessions where passwords are not parsed



Most environments will have either the HTTP or HTTP_lua parser enabled, as it is one of the core network parsers. You can check this under your Decoder > Config tab in the Parsers Configuration pane. More details about system parsers and their Lua equivalents can be found here:



The Form_Data_lua parser looks at the body of HTTP content, whereas the HTTP/HTTP_lua parsers primarily extract credentials from the headers. Before enabling Form_Data_lua, it is important to understand that it can increase resource usage due to the amount of additional data being searched. You can find statistic monitoring instructions here, although monitoring itself can carry a performance impact as well:


For the purposes of this hunting method, you can disable the "query" meta key if there are resource concerns. In either case, be sure to monitor keys for index overflow. You can adjust the per-key valueMax if needed, per the Core Database Tuning Guide:
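For example, a custom index entry raising valueMax for the password key might look like the sketch below; the file path is the standard custom index location on a Concentrator, but the valueMax figure is illustrative and should come from your own sizing:

```xml
<!-- /etc/netwitness/ng/index-concentrator-custom.cfg (value is illustrative) -->
<key description="Password" format="Text" level="IndexValues" name="password" valueMax="250000"/>
```

The service must be restarted for custom index changes to take effect.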


Also, if you are not subscribed to and deploying the Form_Data_lua parser, be sure to deploy the latest copy from Live. Along with optimizations, recent changes expand the variables that the parser is searching for, as well as introduce parsing of JSON-based authentication.



Once the parsers are enabled, you can go to Investigate > Navigate and begin a new query. For ease of record keeping, I like to structure my hunt in these categories:

  • Inbound
    • Password exists
    • Password does not exist
  • Lateral
    • Password exists
    • Password does not exist
  • Outbound
    • Password exists
    • Password does not exist


The assumption here is that you’re using the Traffic_Flow_lua parser with updated network definitions to easily identify directionality. If not, you can use other keys such as ip.src and ip.dst. More info on the Traffic_Flow_lua parser here:


Querying where passwords exist is straightforward:

password exists && direction = "inbound" && service = 80
password exists && direction = "lateral" && service = 80
password exists && direction = "outbound" && service = 80


Querying where passwords do not exist requires a bit of creativity and assumptions. In many cases, authentication over HTTP will involve URLs similar to http[:]//host[.]com/admin/formLogin. This path is recorded in the directory and filename meta keys, where “/admin/” would be the directory and “formlogin” would be the filename.


I’ll often start with the below query (the exclamation point is used to negate “exists”):

password !exists && direction = "outbound" && service = 80 && filename contains "login","logon","auth"


You can follow this pattern for other directions, filenames, and directory names as you see fit. The comma-separated strings in the filename query act as a logical OR; it is equivalent to the following (note the parentheses):

password !exists && direction = "outbound" && service = 80 && (filename contains "login" || filename contains "logon" || filename contains "auth")


Many authentication sessions use the "POST" HTTP method. If you'd like, you can also append action = "post" to the above query.
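Putting the pieces together, one combined starting query for outbound HTTP POSTs with no parsed password might look like this (hedged; adjust the direction, filenames, and strings to fit your environment):

```
password !exists && direction = "outbound" && service = 80 && action = "post" && filename contains "login","logon","auth"
```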



After your query completes, you’ll be left with a dataset to review. (Hopefully) Not all of them will contain credentials, but this is where the human analysis begins. Choose a place to start, then open the Event Analysis view (now known simply as Event view in newer versions). My example here will be heavily censored for the purpose of this blog post.

Choose the “Decode Selected Text” option to make viewing this easier.

Now that you’ve found sessions of interest, you can begin appropriate follow-up action. Examples include advising the website developer to enable HTTPS or discussing app configuration with your mobile application team.



This hunting method will aid in analyzing security posture from outbound, inbound, and lateral angles. It also serves as an easy gateway for analysts to quickly make a positive security impact as well as become familiar with the intricacies of HTTP communication.


NetWitness parsers must balance performance considerations against detection fidelity. While they currently have good coverage, it’s beneficial to know how to search data that is malformed or structured in a way that is impractical for NetWitness to parse.


For more hunting ideas, see the NetWitness Hunting Guide:


If you have any comments, feel free to leave them below. If you’re finding recurring patterns in your environment that are not parsed, you can let us know and we’ll assess the feasibility of adding the detection to the parser.
