
RSA NetWitness Platform

11 Posts authored by: Sean Ennis

As of RSA NetWitness Platform 11.5, analysts have a new landing page option to help them determine where to start upon login.  We call this new landing page Springboard.  In 11.5 it becomes the new default starting page upon login (adjustable) and can be accessed from any screen simply by clicking the RSA logo on the top left.


The Springboard is a specialized dashboard (independent of the existing "Dashboard" functionality) designed as a starting place where analysts can quickly see the variety of risks, threats, and most important events in their environment.  From the Springboard, analysts can drill into any of the leads presented in each panel and be taken directly to the appropriate product screen with the relevant filter pre-applied, saving time and streamlining the analysis process.  


As part of the 11.5 release, Springboard comes with five pre-configured (adjustable) panels that will be populated with the "Top 25" results in each category, depending on the components and data available:


  • Top Incidents - Sorted by descending priority.  Requires the use of the Respond module.
  • Top Alerts - Sorted by descending severity, whether or not they are part of an Incident.  Requires the use of the Respond module.
  • Top Risky Hosts - Sorted by descending risk score.  Requires RSA NetWitness Endpoint.
  • Top Risky Users - Sorted by descending risk score.  Requires RSA UEBA.
  • Top Risky Files - Sorted by descending risk score.  Requires RSA NetWitness Endpoint.


Springboard administrators can also create custom panels (up to a total of ten) of a sixth type, "Events", which aggregate results from any existing saved query profile used in the Investigate module.  This requires only the core RSA NetWitness Platform, with data sourced from the underlying NetWitness Database (NWDB).  It enables organizations to add their own starting places for analysts beyond the defaults, and to tailor the landing experience to the RSA NetWitness Platform components they have deployed:


Example of custom Springboard Panel creation using Event data


For more details on management of the Springboard, please see: NW: Managing the Springboard 


And as always, if you have any feedback or ideas on how we can improve Springboard or anything else in the product, please submit your ideas via the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform  

We are excited to announce the release of the new RSA OSINT Indicator feed, powered by ThreatConnect!  


What is it?

There are two new feeds that have been introduced to RSA Live, built on Open Source Intelligence (OSINT) that has been curated and scored by our partners at ThreatConnect:

  • RSA OSINT IP Threat Intel Feed, including Tor Exit Nodes
  • RSA OSINT Non-IP Threat Intel Feed, which includes indicators of types:
    • Email Address
    • URLs
    • Hostnames
    • File Hashes

These feeds are automatically aggregated, de-duplicated, aged and scored with ThreatConnect's ThreatAssess score. ThreatAssess is a metric combining both the severity and confidence of an indicator, giving analysts a simple indication of the potential impact when a matching indicator is observed.  Higher ThreatAssess scores mean higher potential impact.  The range is 0-1000, with RSA opting to focus on the highest-fidelity indicators, those with scores of 500 or greater (as of the 11.5 release; subject to change as needed).
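As a simple illustration of the cutoff (hypothetical code, not an RSA API; the threatassess field name is assumed):

```python
# Illustrative sketch: filtering a batch of OSINT indicators by
# ThreatAssess score, mirroring the "500 or greater" cutoff above.
THREATASSESS_MIN = 500  # RSA's cutoff as of 11.5; scores range 0-1000

def high_fidelity(indicators, cutoff=THREATASSESS_MIN):
    """Keep only indicators whose ThreatAssess score meets the cutoff."""
    return [i for i in indicators if i.get("threatassess", 0) >= cutoff]

feed = [
    {"indicator": "198.51.100.7", "threatassess": 812},
    {"indicator": "example.test", "threatassess": 310},
]
print(high_fidelity(feed))  # only the 812-scored indicator survives
```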


Who gets it?

These feeds are included at no charge for any customer with any combination of RSA NetWitness Logs, RSA NetWitness Packets, or RSA NetWitness Endpoint under active maintenance. The feed will work on any version of RSA NetWitness, but please see the How do I deploy it? section for notes on version-specific considerations.


How do I deploy it?

These feeds will show up in RSA Live as follows:


To deploy and/or subscribe to the feed, please take a look at the detailed instructions here: Live: Manage Live Resources 


11.4 and earlier customers will want to add a new ioc.score meta key to their Concentrator(s) in order to query and take advantage of the ThreatAssess score of any matched indicator. Please see 000026912 - How to add custom meta keys in RSA NetWitness Platform for details on how to do this. Please note that this meta key should be of type UInt16 - inside the index file, the definition should look similar to this:
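For example, a custom index entry along these lines (illustrative; match it to your version's custom index file conventions):

```xml
<key description="IOC Score" level="IndexValues" format="UInt16" name="ioc.score"/>
```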


11.5 and greater customers do not need to add this key, as it's already included by default.



How do I use it?

Once the feeds are deployed, any events or sessions with matching indicators will be enriched with two additional meta values, ioc and ioc.score.  These values are available for use in all search, investigation, and reporting use cases assuming those keys have been enabled.
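For example, a hypothetical Events filter that surfaces only high-scoring matches (the 800 threshold is illustrative):

```
ioc exists && ioc.score >= 800
```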



eg. Events filter view

eg. Event reconstruction view


What happens to the "RSA FirstWatch" and Tor Exit Node feeds?

If you are running these new feeds, you do not need to run the existing RSA FirstWatch and Tor Exit Node feeds in parallel, as they are highly redundant and tend to be less informative when matches occur.  At some point in the near future, once we believe the impact will be minimal, we will officially deprecate the RSA FirstWatch and standalone Tor Exit Node feeds.


Do you have ideas?

If you have ideas on how to make these feeds better, ideas for content creation leveraging these feeds, or anything else in the RSA NetWitness portfolio, please submit and vote on ideas in the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform 

Visualization techniques can help an analyst make sense of a given data set by exposing scale, relationships, and features that would be almost impossible to derive by just looking at a list of individual data points.  As of RSA NetWitness Platform 11.4, we have added new physics and layout techniques to the nodal diagram in Respond to make better sense of the data, both when using Respond as an Incident/Case Management tool and when simply using Respond to group events and track non-case investigations (see Using Respond for Data Exploration for some ideas).




Clustering by Entity Type

Prior to 11.4, the nodal graph evenly distributed the nodes regardless of entity type (Host, IP, User, MAC, File). Improvements were made to introduce intelligent clustering such that entities of the same type not only retain their distinct color, but also have a higher chance of being clustered together.  This layout improvement makes it clearer to see relationships between different entity types, particularly when dealing with larger sets of data.


Variable Edge Forces Based on Relationship Type

Prior to 11.4, all edges between nodes were treated equally, resulting in lengths being rendered equally between all sets of connected nodes.  Improvements were made to adjust the relative attraction forces, helping to better distinguish attribute type relationships ("as", "is named", "belongs to", and "has file") from action type relationships ("calls", "communicates with", "uses").  Edges representing attributes will tend to be much shorter than those representing actions, which has the added benefit of reducing the number of overlapping edges, making relationships, scope, and sprawl much easier for an analyst to see at a glance.
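As a rough illustration of the idea (an assumption about the general technique, not RSA's actual rendering code), a force-directed layout can give attribute edges a shorter rest length so they render tighter than action edges:

```python
# Minimal force-directed sketch: the same spring rule, but attribute
# edges get a much shorter rest length than action edges, so they pull
# connected nodes closer together. Rest lengths are illustrative.
REST_LENGTH = {"attribute": 30.0, "action": 120.0}  # in layout units

def spring_force(distance, edge_kind, stiffness=1.0):
    """Hooke's law: positive pulls nodes together, negative pushes apart."""
    return stiffness * (distance - REST_LENGTH[edge_kind])

# At the same 100-unit separation, the attribute edge pulls hard while
# the action edge gently pushes its endpoints apart:
print(spring_force(100, "attribute"))  # 70.0
print(spring_force(100, "action"))     # -20.0
```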



Separation of Disconnected Clusters

Prior to 11.4, all nodes and edges were grouped into one large cluster, even if certain nodes in the data set did not have any relationship with others, requiring tedious manual dragging of nodes in order to distinguish the groupings.  Now, disjoint clusters of nodes are repelled from one another upon initial layout, making it extremely clear which sets of data are joined by some kind of relationship.  This is particularly helpful when using Respond for general data exploration of larger data sets (vs. visualizing a single incident) that do not necessarily have commonality, both drawing the analyst's eyes to potentially interesting outliers and once again reducing the number of overlapping edges that have historically made certain nodal graphs difficult to read, depending on the data set.
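The separation described above depends on first identifying the disjoint clusters; a minimal sketch of that step (hypothetical, not RSA's implementation) using union-find:

```python
# Sketch: group nodes into disjoint clusters with union-find, so each
# connected group can be laid out and repelled from the others.
def find_clusters(nodes, edges):
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path compression
            n = parent[n]
        return n

    for a, b in edges:
        parent[find(a)] = find(b)  # union the two components

    clusters = {}
    for n in nodes:
        clusters.setdefault(find(n), set()).add(n)
    return list(clusters.values())

nodes = ["hostA", "ip1", "userX", "hostB"]
edges = [("hostA", "ip1"), ("ip1", "userX")]
print(find_clusters(nodes, edges))  # hostB ends up in its own cluster
```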

Improved Nodal Interaction

In addition to the physics governing new layouts, improvements have been made to nodal interaction to help take advantage of them.  Given the potential size and complexity of data sets, despite the introduction of layout and force techniques, the layout may not always be optimal.  The goal was to improve interaction by minimizing the number of graph drags needed by an analyst to make sense of even the most tangled data sets.  When dragged, nodes with high connectivity will generally attract other nodes with which a relationship exists.  Also, once any node is manually dragged into position, manipulating the position of other nodes will no longer impart a moving force, meaning the original dragged node will stay in place.  To "unpin" dragged nodes and have them spring back into place, simply double click.



As always, if you have any feedback or suggestions on how to make this experience even better, please leave a comment or, better yet, search, submit, and vote on product enhancements on the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform 


Happy Hunting!

Did you know that you can use Respond for data exploration, even if you aren't using it for Incident Management?  While the naming convention certainly does not suggest it, Respond can be just as useful outside of incident response as a place for analysts to group events of interest during investigation and hunting efforts.  Using Respond as more of an analyst workspace can help teams collaborate better, track streams of thought, and take advantage of Respond's new and improved visualization capabilities as of 11.4 (see Visualization Enhancements in RSA NetWitness Platform 11.4 for details).




Step 1 - Create an "Incident" from Events view

Once you have a set of data that carries significance, you can select any set or subset of events contained in a data set and use it to create a new "Incident".  For our purposes here, you'll have to look past the current naming conventions of Alerts and Incidents and just think of it as a grouping of events (log, endpoint, or network sessions).


Which data sets to use is largely up to you, but this type of approach is particularly useful when following a methodology that requires systematically carving larger data sets into smaller, more manageable ones.  The example above is based on RSA's Network Hunting Guide, details of which can be found here: RSA NetWitness Hunting Guide


Step 2 - Open in Respond

Once opened, all of the capabilities available when using Respond for Incident Management are available.  That doesn't mean you have to use all of them, but you may find some of them to be a handy way to tag in other analysts (Tasks) and keep track of your analysis (Journal).  And if you do happen to find something malicious in the data set, all of the relevant information is already captured.


In the example above, we're seeing if anything interesting shows up in the data set for "All outbound HTTP sessions using the POST method".  The nodal diagram can be a useful way to see how the data is distributed between entities (larger bubbles meaning a larger number of events) and which sub data sets within the larger one deal with disjoint sets of entities (Files, Hosts, IPs, Users, MAC Addresses), and it can draw your eye toward groupings that warrant deeper inspection.


Step 3 - Use Respond Tools to Track, Pivot, and Collaborate

View Event Cards

In-line Event Reconstruction (eg. Network Reconstruction)

Entity Details - Pivot To Other Views 



Add New Events 

And don't forget that you can always add more events to the same Respond incident to expand investigation if more leads are uncovered. Simply start from the top, and "Add To Incident".


As always, if you have any feedback or suggestions on how to make this experience even better, please leave a comment or, better yet, search, submit, and vote on product enhancements on the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform 


Happy Hunting!

One of the most powerful features to make its way into RSA NetWitness Platform version 11.3 is also one of the most subtle in the interface.  11.3 now saves analysts one more step during incident response by integrating rich UEBA, Endpoint, Log, and Full Packet reconstruction directly into the incident panel.  This view is essentially the same as if you were looking at events directly in the Event Analysis part of the UI, or the Users (UEBA) part of the UI, just consolidated into the incident panel.  Prior to this improvement, the only way to view the raw event details was to open the event and click on "Investigate Original Event", pivoting into a new query window.  This option may still be appropriate for some, and still exists, but for those needing the fastest route possible to validating detection and event details, this feature is for you.


To use the new feature: for any individual event of interest that has been aggregated or added into an incident, you'll see a small hyperlink attached to each event on the left-hand side, labeled with one of "Network", "Endpoint", "Log", or "User Entity Behavior Analytics".  These labels correspond to the source of the event, and clicking one slides in the appropriate reconstruction view.


User Entity and Behavior Analytics (UEBA) view:

Network packet reconstruction view:

Endpoint reconstruction view:

Log reconstruction view:


Happy responding!

Starting in version 11.3, the RSA NetWitness Platform introduced the ability to analyze endpoint data captured by the RSA NetWitness Endpoint Agent (both the free "Insights" version and the full version). For more information on what RSA NetWitness Endpoint is all about, please start with the RSA NetWitness Endpoint Quick Start Guide for 11.3.


One of the helpful new features of the endpoint agent is the ability not only to focus the analyst on the "Hosts" context of their environment, but also to gain full visibility into process behaviors and relationships whenever suspicious behaviors have been detected by the RSA NetWitness Platform, or when investigating alerts from others.


The various pivot points bring an analyst into Process Analysis in the context of a specific process, including its parent and child process(es), based on the current analysis timeline (adjustable if needed).


Example Process Analysis view, drilling into all related events recorded by the NW Endpoint Agent


Example Process Analysis view, focused on process properties (powershell.exe) collected by the NW Endpoint Agent


The feature is simple to use when RSA NetWitness Endpoint agent data exists, and is accessible from a number of locations in the UI depending on where the analyst is in their workflow:


Investigate > Hosts > Details (if endpoint alerts exist):

Investigate > Hosts > Processes (regardless of alert/risk score): 


Investigate > Event Analysis:


Respond > Incident > Event List (card must be expanded):


Respond > Incident > Embedded Event Analysis (reconstruction view):


Happy Hunting!

This project is an attempt at building a method of orchestrating threat hunting queries and tasks within RSA NetWitness Orchestrator (NWO).  The concept is to start with a hunting model defining a set of hunting steps (represented in JSON), have NWO ingest the model and make all of the appropriate "look ahead" queries into NetWitness, organizing results, adding context and automatically refining large result sets before distributing hunting tasks among analysts.  The overall goal is to have NWO keep track of all hunting activities, provide a platform for threat hunting metrics, and (most importantly) to offload as much of the tedious and repetitive data querying, refining, and management that is typical of threat hunting from the analyst as possible.  


Please leave a comment if you'd like some help getting everything going, or have a specific question.  I'd love to hear all types of feedback and suggestions to make this thing useful.



The primary dashboard shows the results of the most recent Hunting playbook run: each hunting task on its own row in the table, a link to the dynamically generated Investigation task associated with that task, a count of the look-ahead query results, and multiple links to the data in the underlying toolset (in this case RSA NetWitness).  The analyst has not been involved up to this point, as NWO did all of this work in the background.


Primary Hunting Dashboard


Pre-built Pivot into Toolset (NetWitness here)


The automations will also try to add extra context if the result set is Rare, has Possible Beaconing, contains Indicators, and a few other such "influencers" along with extra links to just those subsets of the overall results for that task.  Influencers are special logic embedded within the automations to help extract additional insight so that the analyst doesn't have to.  There hasn't been too much thought put into the logic behind these pieces just yet, so please consider them all proofs of concept and/or placeholders and definitely share any ideas or improvements you may have.  The automations will also try to pare down large result sets if you have defined thresholds within the hunting model JSON. The entire result set will still be reachable, but you'll get secondary counts/links where the system has tried to aggregate the rarest N results based on the "Refine by" logic also defined in the model, eg:


If defined in huntingcontent.json, a specific query/task can be given a threshold, and results will be refined by rarity if the threshold is hit.  The example above shows a raw count of 6850 results, but a refined set of 35 results mapping to the rarest 20 {ip.dst, org.dst} tuples seen.
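The refine-by-rarity behavior described above can be sketched roughly like this (an illustrative reconstruction, not the actual NWO automation; field names follow the example tuple):

```python
# Sketch: given raw results and a tuple of "refine by" keys, keep only
# the events belonging to the N rarest key tuples.
from collections import Counter

def refine_by_rarity(events, keys, top_n):
    tuples = [tuple(e[k] for k in keys) for e in events]
    # most_common() sorts descending; the reversed slice takes the rarest N
    rarest = {t for t, _ in Counter(tuples).most_common()[:-top_n - 1:-1]}
    return [e for e, t in zip(events, tuples) if t in rarest]

events = [
    {"ip.dst": "10.0.0.1", "org.dst": "acme"},
    {"ip.dst": "10.0.0.1", "org.dst": "acme"},
    {"ip.dst": "10.0.0.2", "org.dst": "beta"},
]
# Keeps only the event(s) for the single rarest {ip.dst, org.dst} tuple:
print(refine_by_rarity(events, ("ip.dst", "org.dst"), top_n=1))
```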


For each task, the assigned owner can drill directly into the relevant NetWitness data, or can drill into the Investigation associated with the task.  Right now the investigation playbooks for each task contain no special logic themselves - they simply serve as a way to organize tasks, contain findings for each hunt task, and provide a place from which to spawn child incidents if anything is found:



From here it is currently just up to the analyst to create notes, complete the task, or generate child investigations.  Future versions will do more with these sub-investigation/hunt task playbooks to help the analyst.  For now it's just a generic "Perform the Hunt" manual task.  Note that when these hunt task investigations get closed, the Hunt Master will update the hunting table and mark that item as "complete", signified by a green dot and a cross-through as shown in the first screenshot.


How It Works

Playbook Logic

  1. Scheduled job or ad-hoc creation of "Hunt" incident that drives the primary logic and acts as the "Hunt Master"
  2. Retrieve hunting model JSON (content file and model definition file) from a web server somewhere
  3. Load hunting model, perform "look ahead", influencer, and refining queries
  4. Create hunting table based on query results, mark each task as "In Progress", "Complete", or "No Query Defined"
  5. Generate dynamic hunting investigations for each task that had at least 1 result from step 3
  6. Set a recurring task for the Hunt Master to continuously look for all related hunt tasks (they share a unique ID) and monitor their progress, updating the hunting table accordingly.
  7. [FUTURE] Continuously re-query the result sets in different ways to find outliers (eg. stacking different meta keys and adding new influencers/links to open hunting tasks)

(Both the "Hunt Master" and the generated "Hunt Tasks" are created as incidents, tied together with a unique ID - while they could certainly be interacted with inside of the Incidents panel, the design is to have hunters operate from the hunting dashboard)


The Hunting Model

Everything is driven off of the hunting model & content.  The idea is to be able to implement any model/set of hunting tasks along with the queries that would normally get an analyst to the corresponding subset of data.  The example and templates given here correspond with the RSA Network Hunting Labyrinth, modeled after the RSA NetWitness Hunting Guide: RSA NetWitness Hunting Guide


This file must sit on a web server somewhere, accessible by the NWO server.  You will later configure your huntingcontentmodel.json file to point to its location if you want to manage your own (instead of the default version found here).


This file defines hunting tasks in each full branch of the JSON file, along with queries and other information to help NWO discover the data and organize the results:



(snippet of hunting content json)


The JSON file can have branches of length N, but the last element in any given branch (which defines a single hunting technique/task) must have an element of the following structure.  Note that "threshold" and "refineby" are optional, but "query" and "description" are mandatory, even if the values are blank.
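A hypothetical leaf element might look like the following (field names come from the description above; the values, including the refineby format, are illustrative):

```json
{
  "query": "service = 80 && action = 'post'",
  "description": "Outbound HTTP POST sessions worth reviewing",
  "threshold": 1000,
  "refineby": "ip.dst,org.dst"
}
```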



The attached example huntingcontent.json (same as the GitHub link above as of the first release) is meant to be a template, as it is currently at the very beginning stages of being mapped to the RSA Network Hunting Labyrinth methodology.  It will be updated with higher-resolution queries over time.  Once we can see this operate in a real environment, the plan is to leverage a lot more of the ioc/eoc/boc and *.analysis keys in RSA NetWitness to take this beyond a simple proof of concept.  You may also choose to completely define your own to get started.



This file must sit on a web server somewhere, accessible by the NWO server.  A public version is available here, but you will have to clone and update it to include references to your NWO and NW servers before it will work.  This serves as the configuration file specific to your environment: it describes the structure of the huntingcontent.json file, display options, icon options, language, resource locations, and a few other configurations.  It was done this way to avoid hard-coding anything into the actual playbooks and automations:


model: Defines the heading for each level of huntingcontent.json.  A "-" in front means it will still be looked for programmatically but will not be displayed in the table.


language: This is a basic attempt at making the hunting tasks described by the model more human readable by connecting each level of the json with a connector word.  Again, a "-" in front of the value means it will not be displayed in the table.

This tells NWO how many levels to go when grouping the tasks into separate tables. Eg. a grouping level of "0" would contain one large table with each full JSON branch in a row. A grouping level of "3" will create a separate table for each group defined 3 levels into the JSON (this is what's shown in the dashboard table above)


verbosity: 0 or 1 - a value of 1 means that an additional column will be added to the table with the entire "description" value displayed. When 0, you can still see the description information by hovering over the "hover for info" link in the table.


queryurl: This defines the base URL where the specific queries (in the 'query' element of huntingcontent.json) will be appended in order to drill into the data.  Example above is from my lab, so be sure to adjust this for your environment.


influencers: The set of influencers above are the ones that have been built into the logic so far.  This isn't as modular under the hood as it should be, but I think this is where there is a big opportunity for collaboration and innovation, and where some of the smarter & continuous data exploration will be governed.

  • iocs, criticalassets, blacklist, and whitelist are just additional queries the system will do to gain more insight and add the appropriate icon to the hunt item row.
  • rarity is not well implemented yet and just adds the icon when there are < N (10 in this case) results.  This will eventually be updated to look for rarity in the data set against a specific entity (single IP, host, etc.) rather than the overall result count.
  • possiblebeacon is implemented to look for a 24-hour average of communication between two hosts signifying approximately 1 beacon per 1, 5, or 10 minutes, along with a tolerance percentage.  Just experimenting with it at this point.
  • Note that the "weight" element doesn't affect anything just yet.  The eventual concept is to build a scoring algorithm to help prioritize or add fidelity to the individual hunting tasks.
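The possiblebeacon check can be sketched as follows (a proof-of-concept reading of the logic described above, not the actual NWO automation; the tolerance default is illustrative):

```python
# Sketch: flag a host pair as a possible beacon if its 24-hour event
# count sits within a tolerance of an expected beacon cadence.
DAY_SECONDS = 24 * 60 * 60
BEACON_INTERVALS = (60, 300, 600)  # 1, 5, and 10 minute beacons

def possible_beacon(event_count, tolerance=0.10):
    """True if a 24h event count is within tolerance of a beacon cadence."""
    for interval in BEACON_INTERVALS:
        expected = DAY_SECONDS / interval  # e.g. 1440 events for 60s beacons
        if abs(event_count - expected) <= expected * tolerance:
            return True
    return False

print(possible_beacon(1400))  # True  (close to 1440 = one per minute)
print(possible_beacon(37))    # False (matches no cadence)
```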



Installation Instructions:

  1. Prerequisites:  the original (v1) RSA NetWitness query integration for Network (Packets) and Logs must be installed.  Note that there is currently a v2 NetWitness integration, but this will not work with that version at this time due to the change in how the commands work. I will try to update the automations for the v2 integration ASAP.
    1. The v1 NetWitness Integration is included in the zip.  Settings > Integrations > Import.
  2. Create a new incident type named "Hunt Item" (don't worry about mapping a playbook yet)
  3. Import Custom Fields (Settings > Advanced > Fields) - import incidentfields.json (ignore errors)
  4. Import Custom Layouts (Settings > Advanced > Layout Builder > Hunt)
    1. Incident Summary - import layout-details.json
    2. New/Edit - import layout-edit.json
    3. Incident Quick View - import layout-details.json
  5. Import Automations (Automations > Import - one by one, unfortunately)

       - GenerateHuntingIncidents

       - PopulateHuntingTable

       - GenerateHuntingIncidentNameID

       - LoadHuntingJSON

       - NetWitness LookAhead

       - ReturnRandomUser

       - UpdateHuntingStatus

  6. Import Dashboard Widget Automations (Automations > Import)

       - GetCurrentHuntMasterForWidget

       - GetHuntParticipants

       - GetHuntTableForWidget   

  7. Import Sub-Playbooks (Playbooks > Import)

       - Initialize Hunting Instance

       - Hunting Investigation Playbook

  8. Import Primary Playbook (Playbooks > Import)

    - 0105 Hunting

  9. Map "0105 Hunting" Playbook to "Hunt" Incident Type (Settings > Incident Types > Hunt) and set the playbook to automatically start

  10. Map "Hunting Investigation Playbook" to "Hunt Item" Incident Type and set playbook to automatically start

  11. Import Dashboard

  12. Place huntingcontent.json, huntingcontentmodel.json (within the www folder of the packaged zip), onto a web server somewhere, accessible by the NWO server. Note, by default the attached/downloadable huntingcontentmodel.json points at github for the huntingcontent.json file. You can leave this as is (and over time you'll get a more complete set of hunting queries) or create your own as you see fit and place it on your web server as well.


Before the first run, you'll have to make a few changes to point the logic at your own environment:

  1. Edit huntingcontentmodel.json and update all queryURL and icon URL fields to point at your NetWitness server and web server respectively.  You can also edit the "huntingContent" element of this file (not shown) to point at your own version of the huntingcontent.json file discussed above:
    (Top - huntingcontentmodel.json snippet, showing the references with respect to your standard NetWitness UI URL)
    (Top - huntingcontentmodel.json snippet, showing the references with respect to your standard NetWitness UI URL)
  2. Go into the "Initialize Hunting Instance" playbook, click on "Playbook Triggered", and enter the path to your huntingcontentmodel.json file (which includes the updated fields pointing to NetWitness). If you leave it as is, none of the look-ahead queries will work, since no configuration file will be loaded.
  3. Create your first hunting incident: from Incidents > New Incident, select type "Hunt" and give it a time range. Start with 1 day for testing.
  4. Note that the playbook will automatically name the incident "Hunt Master" prepended with a unique HuntID. Everything is working if, in the Incidents page, you see a single Hunt Master and associated Hunt Items all sharing the same HuntID.

Opening up the Hunt Master incident Summary page (or Hunting Dashboard) should show you the full hunting table:


Please add comments as you find bugs or have additional ideas and content to contribute.

One of the major new features found in RSA NetWitness Platform version 11.1 is RSA NetWitness Endpoint Insights.  RSA NetWitness Endpoint Insights is a free endpoint agent that provides a subset of the full RSA NetWitness Endpoint 4.4 functionality as well as the ability to perform Windows log collection.  Details of how to configure RSA NetWitness Endpoint Insights can be found here


Additionally, as of RSA NetWitness Platform version 11.0, those with both RSA NetWitness Log & full RSA NetWitness Endpoint components have the option to start bringing the two worlds together under a unified interface.  This integration strengthens in version 11.1, and will continue to do so through version 11.2 and beyond.   Details of this integration can be found here: Endpoint Integ: RSA Endpoint Integration


I created the content below to complement the endpoint scan data (RSA NW Endpoint and RSA NW Endpoint Insights) as well as the tracking data (RSA NW Endpoint + meta integration into 11.X).  As you leverage this content, please let me know if you have any questions, and please post improvements and iterations as well.


Note:  If using the RSA NW Endpoint Insights agent (vs the full RSA NW Endpoint 4.4 agent) full process tracking data is not available. The process-centric content below will still work, but keep in mind that the process data reported is only a snapshot in time based on endpoint scan schedules and will not capture any process events in between scans.  


Content Summary:

Autoruns -  Outliers Report & Dashboard
Autoruns & Scheduled Tasks launching from or arguments containing AppData\Local\Temp
Autoruns & Scheduled Tasks launching from root of \ProgramData
Autoruns & Scheduled Tasks invoking Command Shell (cmd.exe or powershell.exe)
Autoruns & Scheduled Tasks invoking wscript.exe or cscript.exe
Autoruns & Scheduled Tasks invoking .vbs, .bat, .hta, .ps1 scripts
Autoruns - Rarest HKCU.../Run and /RunOnce keys
Processes & Files - Outliers Report & Dashboard
Rarest Child Processes of Web Server Processes
Rarest Parent Processes of cmd.exe
Rarest Parent Processes of powershell.exe
Rarest Processes running from AppData\Local\ or AppData\Roaming
Rarest Executables in Root of ProgramData
Rarest Executables in Root of C:\
Rarest Executables in Root of Windows\System32
Rarest Company Headers in Files
Rarest Code Signing CN in Files
ESA Rules
Alert: Scheduled Task running out of AppData\Local\Temp
Alert: Scheduled Tasks running cmd.exe or powershell.exe (with Whitelist expectation)
Alert: Scheduled Tasks running cscript.exe or wscript.exe (with Whitelist expectation)
Alert: Windows Reserved Process Names Running From Suspicious Directory
Alert: Process Running from $RECYCLE.BIN
Meta & Column Groups
1 x Meta Group:  Scan and Log Data
7 x Column Groups:  NWEndpoint [Autorun/DLL/File/Machine/Process/Service/General] Analysis




Meta Group


Column Group (eg. Process Analysis)

Column Group (eg. Autoruns and Tasks)



The 05/16/2018 RSA Live update added 4 new reports to take advantage of the Endpoint Scan Data collected by either the free RSA NetWitness Endpoint Insights agent, or the full RSA NetWitness Endpoint 4.4 meta integration (search "Endpoint" in RSA Live):



Use these reports to gain summarized visibility into endpoints, and to prioritize hunting efforts through outlier/stack analysis.  Outliers are usually worth gaining visibility into and understanding, particularly those related to persistence techniques and post-exploit activities commonly used by adversaries.  While not every outlier implies something bad is happening, this type of analysis tends to be fruitful, particularly as you increase the accuracy of rules over time through additional whitelist logic.
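To make the outlier/stack analysis idea concrete, here is a minimal sketch (not NetWitness code; the record layout and field names are hypothetical) of what these reports do under the hood: count how often a value appears across the environment and surface the bottom-N rarest entries.

```python
from collections import Counter

def rarest(records, field, n=5):
    """Stack (count) a field across scan records and return the bottom-N rarest values."""
    counts = Counter(r[field] for r in records if field in r)
    return counts.most_common()[:-n - 1:-1]  # slice from the end: least common first

# Hypothetical scan records: parent processes of cmd.exe across an environment.
records = [
    {"parent": "explorer.exe"}, {"parent": "explorer.exe"},
    {"parent": "explorer.exe"}, {"parent": "winword.exe"},
]
print(rarest(records, "parent", n=2))
# [('winword.exe', 1), ('explorer.exe', 3)]
```

The rare value (winword.exe spawning cmd.exe) floats to the top of the list, which is exactly the lead an analyst would chase first.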


Report #1 Endpoint Scan Data Autorun and Scheduled Task Report (Outliers)

Outlier (bottom N) reporting of a subset of suspicious autoruns and scheduled tasks, containing the tables below.


Rarest Autoruns/Tasks in AppData/X and ProgramData root folders across environment (rarity among locations commonly used by malware)

Rarest Autorun registry keys across the environment

Enumerate all Autoruns/Tasks invoking shells or scripts (some software will do this legitimately, but it should be more or less consistent across an enterprise with common images - look specifically at the launch arguments for signs of bad behavior)


Eg. Rarest Autoruns invoking command shells table:


Report #2 Endpoint Scan Data File and Process Outliers Report

Predominantly outlier (bottom N) reporting of contextually interesting processes, containing the tables below.


Rarest parent processes of powershell.exe and cmd.exe (this should be fairly uniform across an organization based on common software distribution - outliers become worthy of a look)

Rarest child processes of web server processes (looking for anomalous process execution that could indicate presence of a webshell)

Rarest Code Signing Certificate CNs 

Windows Processes with Unexpected Parent Processes (looking for atypical mismatches of Windows parent/child process relationships)


Eg. Rarest child processes of web server processes table:


Report #3 Endpoint Scan Data Host Report 

This report takes an endpoint hostname as input and enumerates all scan data (eg. processes, autoruns, machine details, files, etc.) collected over a period of time.  NOTE:  This data is also available directly in the NW 11.1 UI under the "Hosts" section in a much nicer layout if you want it at a glance.


Eg. Report alternative in 11.1 - Hosts view:


Eg. Endpoint Scan Data Host Report:


Report #4 Endpoint Machine Summary Report

A summary of the Endpoint deployment in an environment, including OS breakdown and NW Endpoint version breakdown.  NOTE:  This data also lives directly in the NW 11.1 UI under the "Hosts" section if you want it at a glance:


Eg. Report alternative in 11.1 - Hosts view:


Eg. Endpoint Summary Report:

Here are a few column and meta groups to help get you started in NW 11.1 for either the free NW Endpoint Insights integration or the existing NW Endpoint 4.4 meta integration.  These are designed to help speed up analysis based on the category of endpoint data of interest.  It's also worth remembering that you have access to a lot of this data in a per-host context with the new 11.1 Investigate > Hosts view, which is a handy way to get a snapshot of what is going on at a given point in time for a specific host, without (or prior to) querying the NWDB, eg:



When hunting or analyzing endpoint data across an entire environment, or in context with network and other log data for a specific host, you would then want to pivot into the more traditional Investigate > Navigate/Hosts view which is where you would apply the appropriate meta and column groups.



Meta Group (1) 

Top down organization of keys:

   - Host Information

   - Data Category (+Action for event tracking)

   - File/Process Keys

   - IPv4 Keys

   - User Keys

   - Service, Autoruns, Tasks


[NWEndpoint] Event and Scan Summary:

Column Groups (5)

When using column groups for analysis of NW Endpoint data, I like having both a generic column group that can show all event and scan data categories on the same page without too much clutter, as well as specific column groups mapped to individual categories (eg. Process Analysis, File Analysis, Autorun Analysis, etc.).  The NW 11.1 platform lets you toggle between these at will.  Also note that these will apply to both Event view and Event Analysis view.


Eg. [NWEndpoint] Event and Scan Summary (same keys as the Meta Group)


Eg. [NWEndpoint] Process Analysis

(Note: 'Process Event' category is only available with the full NW Endpoint Agent)

Eg. [NWEndpoint] File & DLL Analysis


Eg. [NWEndpoint] Service Analysis


Eg. [NWEndpoint] Autorun & Task Analysis


Investigation: Manage Column Groups in the Events View 

Investigate: Use Meta Groups to Focus on Relevant Meta Keys 


** NOTE:  The attached groups use the meta key 'param' to display "Launch Arguments".  The 11.1 out-of-box configuration maps this to the 'query' key instead.  'Param' will be the default as of the patch, but in the meantime you can either update your table-map.xml/concentrator index manually, or switch the meta key referenced in the groups to 'query', the 11.1 out-of-box setting.


Process: Host GS: Maintain the Table Map Files  for the table-map.xml instructions, and Core Database Tuning Guide: Index Customization  for the concentrator index.

table-map-custom.xml addition:  <mapping envisionName="param" nwName="param" flags="None"/>

index-concentrator-custom addition:  <key description="Launch Arguments" level="IndexValues" name="param" format="Text" valueMax="100000" />

Mimikatz is an open source research project by @gentilkiwi, with its first commit back in 2014, that is now used extensively by pen testers and adversaries alike for various post-exploitation activities.  One of many write-ups on Mimikatz can be found here.


The intention of this post is to show how you might instrument your NetWitness environment to detect attempts to use Mimikatz for credential theft, arguably its most common application.  Designed to be as generic as possible to account for re-compilations, the primary detection mechanism, which fingerprints loaded DLLs, is based on some great research done by @Wardog, detailed here.


Here are the detection opportunities discussed in this post:


This post focuses primarily on the log-based detection via an ESA rule.  The choice to feed sysmon logs into NW is driven by the requirement to gather ImageLoaded events every time a DLL is loaded by a process.



Getting Sysmon logs into NetWitness

The general process for enabling sysmon capture is fairly straightforward and is detailed by Eric Partington in his post: Log - Sysmon 6 Windows Event Collection. The key events we care about for this scenario are Sysmon EventID 1 (process creation) and Sysmon EventID 7 (ImageLoaded).



Two recommendations:

  1. Although not required for detection, for stronger analysis you may consider adjusting table-map-custom.xml on your log decoder to add the parent.process meta key, toggling its flags from "Transient" to "None" to gain visibility into these values.  Sample addition to table-map-custom.xml (more details on this process here):

    <mapping envisionName="parent_pid" nwName="" flags="None" format="Int32"/>
    <mapping envisionName="parent_process_val" nwName="parent.process" flags="None"/>

  2. Sysmon can get very, very noisy if not properly filtered.  This part is up to you (and the capacity of your system) but I started with this sysmon config template: sysmon-config/sysmonconfig-export.xml at master · SwiftOnSecurity/sysmon-config · GitHub  and modified the EventID 7 config within the template shown below - only including ImageLoaded events necessary for the detection logic:

    <!--DATA: UtcTime, ProcessGuid, ProcessId, Image, ImageLoaded, Hashes, Signed, Signature, SignatureStatus-->
    <ImageLoad onmatch="include">
    <ImageLoaded condition="contains">cryptdll.dll</ImageLoaded>
    <ImageLoaded condition="contains">hid.dll</ImageLoaded>
    <ImageLoaded condition="contains">winscard.dll</ImageLoaded>
    <ImageLoaded condition="contains">logoncli.dll</ImageLoaded>
    <ImageLoaded condition="contains">netapi32.dll</ImageLoaded>
    <ImageLoaded condition="contains">samlib.dll</ImageLoaded>
    <ImageLoaded condition="contains">vaultcli.dll</ImageLoaded>
    <ImageLoaded condition="contains">wintrust.dll</ImageLoaded>
    </ImageLoad>

* full sample template attached to the bottom of this post.
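As a quick sanity check of the include filter above: Sysmon's condition="contains" is a case-insensitive substring match against the ImageLoaded path, so only loads of the eight fingerprint DLLs survive the filter. This small sketch (illustrative, not part of Sysmon) emulates that behavior:

```python
# The eight substrings from the ImageLoad include block above.
INCLUDED_SUBSTRINGS = [
    "cryptdll.dll", "hid.dll", "winscard.dll", "logoncli.dll",
    "netapi32.dll", "samlib.dll", "vaultcli.dll", "wintrust.dll",
]

def sysmon_would_log(image_loaded_path):
    """Emulate the include rule: keep the event if any substring matches, case-insensitively."""
    path = image_loaded_path.lower()
    return any(s in path for s in INCLUDED_SUBSTRINGS)

print(sysmon_would_log(r"C:\Windows\System32\samlib.dll"))    # True - kept
print(sysmon_would_log(r"C:\Windows\System32\kernel32.dll"))  # False - filtered out
```

Everything else is dropped at the endpoint, which is what keeps EventID 7 volume manageable.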


Important note:  When setting up sysmon on your source endpoints, you will need to add "-l" to the installation or run-time command line to enable capturing of ImageLoaded events.



NW Log Config - ESA Rule

The logic of this ESA rule is straightforward but is implemented as an Advanced EPL rule: look for any process loading the set of "fingerprint" DLLs indicative of Mimikatz, and alert.  The rule is attached to this post, and instructions on how to import it are located here: Alerting: Import or Export Rules. Two separate formats have been attached - one in native/exported format that can be directly imported as per the instructions, the other as a text file to modify/copy/paste into a new Advanced EPL ESA rule if you prefer to do it that way.
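The core correlation can be sketched in Python for readers who don't have the attached EPL handy (a simplified illustration only - the process GUID and field names are hypothetical, and the real rule runs as Esper EPL on the ESA): track which fingerprint DLLs each process has loaded and flag the process once all of them have been seen.

```python
from collections import defaultdict

FINGERPRINT_DLLS = {
    "cryptdll.dll", "hid.dll", "winscard.dll", "logoncli.dll",
    "netapi32.dll", "samlib.dll", "vaultcli.dll", "wintrust.dll",
}

loaded = defaultdict(set)  # process GUID -> fingerprint DLLs seen so far

def on_image_loaded(process_guid, image_loaded):
    """Feed Sysmon EventID 7 events; True once a process has loaded every fingerprint DLL."""
    dll = image_loaded.rsplit("\\", 1)[-1].lower()
    if dll in FINGERPRINT_DLLS:
        loaded[process_guid].add(dll)
    return loaded[process_guid] == FINGERPRINT_DLLS

alerted = False
for dll in sorted(FINGERPRINT_DLLS):
    alerted = on_image_loaded("{guid-1}", f"C:\\Windows\\System32\\{dll}")
print(alerted)  # True - all eight DLLs seen for the same process
```

Requiring the full set per process, rather than any single DLL, is what keeps this generic across recompiled Mimikatz binaries without firing on ordinary processes that load one or two of these libraries.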


Note:  This rule was tested in a lab environment, so please feel free to test and provide feedback on performance and false/true positive rates.  You'll notice a second ESA rule suffixed with "(aggressive)" that may catch more implementations of mimikatz but might be prone to false positives.  I wasn't able to get either to falsely trip in my lab, but your mileage may vary.



NW Endpoint Config - IIOCs

No additional config is required if you see the following InstantIOCs in your system already:


If we combine endpoint & log visibility for this use case, we get increased fidelity overall due to a combination of approaches to corroborate.  NWE automatically looks for various behavioral indicators involving lsass.exe for credential theft, which is a great complement to the DLL fingerprinting approach.   For an attacker, subverting either behavior individually will be easier than subverting both.   While not covered in this post, it's certainly possible to create a new alert for the NWE IIOC detection by itself OR add the IIOCs to the conditions of the DLL fingerprinting ESA rule to increase fidelity.  If your NWE IIOCs are configured to send a syslog event for every match, the output when triggered looks as follows (sample CEF logs attached):





The ESA rule triggered successfully after running mimikatz in a number of ways, including as a native binary, PowerShell Mafia via PS command line & interactive shell, PowerShell Empire via native powershell and injected process, Crackmapexec, and Metasploit.  I'd love some feedback from anyone willing to test alternate methods. 


ESA alert rolled up into an incident:



Incident details:


If you have NetWitness Endpoint, you can pivot into it to gain deeper context.  In the tracking data below, bottom up shows initial compromise (this example happened to be a simple PowerShell Empire stager), subsequent activity, and interaction with lsass.exe triggering a Level 1 IOC:


It should also be noted that as of NW 11.0 and NW Endpoint 4.4, an integration exists to bring the same NW Endpoint tracking data directly into the broader NW platform as log events.  As this integration matures, you can undoubtedly expect a closer tie between related log & endpoint behavior.




Sample Data

I've attached a set of sample logs that should trigger the rule if replayed into a log decoder.  These logs contain the suspect event as well as some benign logs from the same endpoint.  An easy way to replay these is by using the NWLogPlayer utility (How To Replay Logs in RSA NetWitness - note, if using version 11.0 the package name has changed and can be installed with yum install rsa-nw-logplayer).

The second attached log set shows 2 sample CEF syslog alerts sent directly from NW Endpoint when the discussed IIOCs trigger.

