
RSA NetWitness Platform

7 Posts authored by: Sean Ennis

One of the most powerful features to make its way into RSA NetWitness Platform version 11.3 is also one of the most subtle in the interface.  11.3 now saves analysts one more step during incident response by integrating rich UEBA, Endpoint, Log, and Full Packet reconstruction directly into the incident panel.  This view is essentially the same as viewing events in the Event Analysis or Users (UEBA) parts of the UI, just consolidated into the incident panel.  Prior to this improvement, the only way to view the raw event details was to open the event and click "Investigate Original Event", pivoting into a new query window.  That option still exists and may still be appropriate for some, but for those needing the fastest possible route to validating detection and event details, this feature is for you.

 

To use the new feature: for any individual event of interest that has been aggregated or added into an incident, you'll see a small hyperlink attached to each event on the left-hand side, labeled with one of "Network", "Endpoint", "Log", or "User Entity Behavior Analytics".  These labels correspond to the source of the event, and clicking one slides in the appropriate reconstruction view.

 

User and Entity Behavior Analytics (UEBA) view:

Network packet reconstruction view:

Endpoint reconstruction view:

Log reconstruction view:

 

Happy responding!

Starting in version 11.3, the RSA NetWitness Platform introduced the ability to analyze endpoint data captured by the RSA NetWitness Endpoint Agent (both the free "Insights" version and the full version). For more information on what RSA NetWitness Endpoint is all about, please start with the RSA NetWitness Endpoint Quick Start Guide for 11.3.

 

One of the helpful new features of the endpoint agent is that it not only focuses the analyst on the "Hosts" context of their environment, but also provides full visibility into process behaviors and relationships whenever suspicious behaviors have been detected by the RSA NetWitness Platform, or when investigating alerts from other sources.

 

The various pivot points bring an analyst into Process Analysis in the context of a specific process, including its parent and child process(es), based on the current analysis timeline, which is adjustable if needed.

 

Example Process Analysis view, drilling into all related events recorded by the NW Endpoint Agent

 

Example Process Analysis view, focused on process properties (powershell.exe) collected by the NW Endpoint Agent

 

The feature is simple to use when RSA NetWitness Endpoint agent data exists, and is accessible from a number of locations in the UI depending on where the analyst is in their workflow:

 

Investigate > Hosts > Details (if endpoint alerts exist):

Investigate > Hosts > Processes (regardless of alert/risk score): 

 

Investigate > Event Analysis:

 

Respond > Incident > Event List (card must be expanded):

 

Respond > Incident > Embedded Event Analysis (reconstruction view):

 

Happy Hunting!

This project is an attempt at building a method of orchestrating threat hunting queries and tasks within RSA NetWitness Orchestrator (NWO).  The concept is to start with a hunting model defining a set of hunting steps (represented in JSON), have NWO ingest the model and make all of the appropriate "look ahead" queries into NetWitness, organizing results, adding context and automatically refining large result sets before distributing hunting tasks among analysts.  The overall goal is to have NWO keep track of all hunting activities, provide a platform for threat hunting metrics, and (most importantly) to offload as much of the tedious and repetitive data querying, refining, and management that is typical of threat hunting from the analyst as possible.  

 

Please leave a comment if you'd like some help getting everything going, or have a specific question.  I'd love to hear all types of feedback and suggestions to make this thing useful.

 

Usage

The primary dashboard shows the results of the most recent Hunting playbook run: each hunting task on its own row in the table, a link to the dynamically generated Investigation associated with that task, a count of the look-ahead query results, and multiple links to the data in the source toolset (in this case, RSA NetWitness).  The analyst has not been involved up until now, as NWO did all of this work in the background.

 

Primary Hunting Dashboard

 

Pre-built Pivot into Toolset (NetWitness here)

 

The automations will also try to add extra context if the result set is Rare, has Possible Beaconing, contains Indicators, and a few other such "influencers", along with extra links to just those subsets of the overall results for that task.  Influencers are special logic embedded within the automations to help extract additional insight so that the analyst doesn't have to.  There hasn't been too much thought put into the logic behind these pieces just yet, so please consider them all proofs of concept and/or placeholders, and definitely share any ideas or improvements you may have.  The automations will also try to pare down large result sets if you have defined thresholds within the hunting model JSON.  The entire result set will still be reachable, but you'll get secondary counts/links where the system has tried to aggregate the rarest N results based on the "Refine by" logic also defined in the model, e.g.:

 

If defined in huntingcontent.json, a specific query/task can be given a threshold, and the automation will try to refine results by rarity if the threshold is hit.  The example above shows a raw count of 6850 results, but a refined set of 35 results mapping to the rarest 20 {ip.dst, org.dst} tuples seen.
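The refinement step described above can be sketched roughly as follows.  This is a hypothetical helper, not the actual NWO automation code: given raw results, a list of "refine by" fields, and a threshold, keep only the events belonging to the rarest N field tuples.

```python
from collections import Counter

def refine_by_rarity(results, refine_by, threshold, top_rarest=20):
    """If the raw result count exceeds the threshold, return only the
    events belonging to the `top_rarest` rarest {refine_by} tuples;
    otherwise return the results unchanged."""
    if len(results) <= threshold:
        return results
    tuples = [tuple(r[k] for k in refine_by) for r in results]
    counts = Counter(tuples)
    # rarest N distinct tuples (fewest occurrences first)
    rarest = {t for t, _ in sorted(counts.items(), key=lambda kv: kv[1])[:top_rarest]}
    return [r for r in results if tuple(r[k] for k in refine_by) in rarest]
```

In the example above, this would correspond to refining 6850 raw results down to the events for the rarest 20 {ip.dst, org.dst} tuples.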

 

For each task, the assigned owner can drill directly into the relevant NetWitness data, or into the Investigation associated with the task.  Right now the investigation playbooks for each task contain no special logic themselves - they simply serve as a way to organize tasks, hold findings for each hunt task, and provide a place from which to spawn child incidents if anything is found:

 

 

From here it is currently just up to the analyst to create notes, complete the task, or generate child investigations.  Future versions will do more with these sub-investigation/hunt task playbooks to help the analyst; for now it's just a generic "Perform the Hunt" manual task.  Note that when these hunt task investigations get closed, the Hunt Master will update the hunting table and mark that item as "complete", signified by a green dot and a cross-through as shown in the first screenshot.

 

How It Works

Playbook Logic

  1. Scheduled job or ad-hoc creation of a "Hunt" incident that drives the primary logic and acts as the "Hunt Master"
  2. Retrieve the hunting model JSON (content file and model definition file) from a web server
  3. Load the hunting model; perform "look ahead", influencer, and refining queries
  4. Create the hunting table based on query results, marking each task as "In Progress", "Complete", or "No Query Defined"
  5. Generate dynamic hunting investigations for each task that had at least 1 result from step 3
  6. Set a recurring task for the Hunt Master to continuously look for all related hunt tasks (they share a unique ID) and monitor their progress, updating the hunting table accordingly.
  7. [FUTURE] Continuously re-query the result sets in different ways to find outliers (e.g. stacking different meta keys and adding new influencers/links to open hunting tasks)

(Both the "Hunt Master" and the generated "Hunt Tasks" are created as incidents, tied together with a unique ID - while they could certainly be interacted with inside of the Incidents panel, the design is to have hunters operate from the hunting dashboard)

 

The Hunting Model

Everything is driven off of the hunting model & content.  The idea is to be able to implement any model/set of hunting tasks along with the queries that would normally get an analyst to the corresponding subset of data.  The example and templates given here correspond to the RSA Network Hunting Labyrinth, modeled after the RSA NetWitness Hunting Guide: RSA NetWitness Hunting Guide

huntingcontent.json

This file must sit on a web server somewhere, accessible by the NWO server.  You will later configure your huntingcontentmodel.json file to point to its location if you want to manage your own (instead of the default version found here: https://raw.githubusercontent.com/smennis/nwohunt/master/content/huntingcontent.json)

 

This file defines hunting tasks in each full branch of the JSON, along with queries and other information to help NWO discover the data and organize the results:

 

 

(snippet of hunting content json)

 

The JSON file can have branches of length N, but the last element in any given branch, which defines a single hunting technique/task, must contain an element with the following structure.  Note that "threshold" and "refineby" are optional, but "query" and "description" are mandatory, even if the values are blank.
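As an illustration of that structure, a minimal branch might look like the following.  The level names and values here are hypothetical placeholders; only the leaf fields (mandatory "query" and "description", optional "threshold" and "refineby") come from the description above:

```json
{
  "Network": {
    "Command and Control": {
      "Beaconing": {
        "query": "service=80 && direction='outbound'",
        "description": "Outbound HTTP sessions to review for beaconing behavior",
        "threshold": 1000,
        "refineby": ["ip.dst", "org.dst"]
      }
    }
  }
}
```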

 

 

The attached example huntingcontent.json (the same as the github link above as of the first release) is meant to be a template, as it is currently at the very beginning stages of being mapped to the RSA Network Hunting Labyrinth methodology.  It will be updated with higher resolution queries over time.  Once we can see this operate in a real environment, the plan is to leverage a lot more of the ioc/eoc/boc and *.analysis keys in RSA NetWitness to take this beyond a simple proof of concept.  You may also choose to completely define your own to get started.

 

huntingcontentmodel.json

This file must sit on a web server somewhere, accessible by the NWO server.  A public version is available here: https://raw.githubusercontent.com/smennis/nwohunt/master/content/huntingcontentmodel.json, but you will have to clone and update it to include references to your NWO and NW servers before it will work.  It serves as the configuration file specific to your environment that describes the structure of the huntingcontent.json file, display options, icon options, language, resource locations, and a few other configurations.  It was done this way to avoid hard-coding anything into the actual playbooks and automations:

 

model: Defines the heading for each level of huntingcontent.json.  A "-" in front means it will still be looked for programmatically but will not be displayed in the table.

 

language: This is a basic attempt at making the hunting tasks described by the model more human-readable by connecting each level of the JSON with a connector word.  Again, a "-" in front of the value means it will not be displayed in the table.


groupingdepth: This tells NWO how many levels deep to go when grouping the tasks into separate tables.  E.g. a grouping level of "0" would produce one large table with each full JSON branch in a row, while a grouping level of "3" will create a separate table for each group defined 3 levels into the JSON (this is what's shown in the dashboard table above).

 

verbosity: 0 or 1 - a value of 1 means that an additional column will be added to the table with the entire "description" value displayed. When 0, you can still see the description information by hovering over the "hover for info" link in the table.

 

queryurl: This defines the base URL where the specific queries (in the 'query' element of huntingcontent.json) will be appended in order to drill into the data.  Example above is from my lab, so be sure to adjust this for your environment.
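Putting the fields above together, a huntingcontentmodel.json skeleton might look roughly like this.  All values shown are hypothetical placeholders; only the field names and their meanings come from the descriptions above:

```json
{
  "huntingContent": "https://raw.githubusercontent.com/smennis/nwohunt/master/content/huntingcontent.json",
  "model": ["Medium", "-Category", "Technique"],
  "language": ["-", "via", "using"],
  "groupingdepth": 3,
  "verbosity": 0,
  "queryurl": "https://your-nw-server/investigation/4/navigate/query/"
}
```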

 

influencers: The set of influencers above are the ones that have been built into the logic so far.  This isn't as modular under the hood as it should be, but I think this is where there is a big opportunity for collaboration and innovation, and where some of the smarter & continuous data exploration will be governed.  iocs, criticalassets, blacklist, and whitelist are just additional queries the system will do to gain more insight and add the appropriate icon to the hunt item row.  rarity is not well implemented yet and just adds the icon when there are < N (10 in this case) results.  This will eventually be updated to look for rarity in the dataset against a specific entity (single IP, host, etc.) rather than the overall result count.  possiblebeacon is implemented to look for a 24 hour average of communication between two hosts signifying approximately 1 beacon per minute, 5 minutes, or 10 minutes, along with a tolerance percentage.  Just experimenting with it at this point.  Note that the "weight" element doesn't affect anything just yet.  The eventual concept is to build a scoring algorithm to help prioritize or add fidelity to the individual hunting tasks.
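The possiblebeacon check described above could be sketched like this.  This is hypothetical logic illustrating the idea, not the shipped automation: compare the average interval between sessions for a host pair over 24 hours against 1-, 5-, and 10-minute beacon periods, within a tolerance percentage.

```python
def looks_like_beacon(session_count, window_hours=24,
                      periods_sec=(60, 300, 600), tolerance=0.15):
    """Return True if the average inter-session interval over the window
    is within `tolerance` of any candidate beacon period."""
    if session_count < 2:
        return False
    avg_interval = (window_hours * 3600) / session_count
    return any(abs(avg_interval - p) / p <= tolerance for p in periods_sec)
```

For example, roughly 1440 sessions between two hosts in 24 hours averages one per minute, which this check would flag as a possible beacon.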

   

Instructions

Installation Instructions:

  1. Prerequisites:  RSA NetWitness Network (Packets) and Logs integration (the original version of the NetWitness query integration) installed.  Note that there is currently a v2 NetWitness integration, but this will not work with that version at this time due to the change in how the commands work.  I will try to update the automations for the v2 integration ASAP.
    1. The v1 NetWitness Integration is included in the zip.  Settings > Integrations > Import.
  2. Create a new incident type named "Hunt Item" (don't worry about mapping a playbook yet)
  3. Import Custom Fields (Settings > Advanced > Fields) - import incidentfields.json (ignore errors)
  4. Import Custom Layouts (Settings > Advanced > Layout Builder > Hunt)
    1. Incident Summary - import layout-details.json
    2. New/Edit - import layout-edit.json
    3. Incident Quick View - import layout-details.json
  5. Import Automations (Automations > Import - one by one, unfortunately)

       - GenerateHuntingIncidents

       - PopulateHuntingTable

       - GenerateHuntingIncidentNameID

       - LoadHuntingJSON

       - NetWitness LookAhead

       - ReturnRandomUser

       - UpdateHuntingStatus

  6. Import Dashboard Widget Automations (Automations > Import)

       - GetCurrentHuntMasterForWidget

       - GetHuntParticipants

       - GetHuntTableForWidget   

  7. Import Sub-Playbooks (Playbooks > Import)

       - Initialize Hunting Instance

       - Hunting Investigation Playbook

  8. Import Primary Playbook (Playbooks > Import)

    - 0105 Hunting

  9. Map "0105 Hunting" Playbook to "Hunt" Incident Type (Settings > Incident Types > Hunt) and set the playbook to automatically start

  10. Map "Hunting Investigation Playbook" to "Hunt Item" Incident Type and set playbook to automatically start

  11. Import Dashboard

  12. Place huntingcontent.json and huntingcontentmodel.json (found in the www folder of the packaged zip) onto a web server somewhere accessible by the NWO server. Note, by default the attached/downloadable huntingcontentmodel.json points at github for the huntingcontent.json file. You can leave this as is (and over time you'll get a more complete set of hunting queries) or create your own as you see fit and place it on your web server as well.

Configuration:

Before the first run, you'll have to make a few changes to point the logic at your own environment:

  1. Edit huntingcontentmodel.json and update all queryURL and icon URL fields to point at your NetWitness server and web server respectively.  You can also edit the "huntingContent" element of this file (not shown) to point at your own version of the huntingcontent.json file discussed above:
    (Top - huntingcontentmodel.json snippet, showing the references with respect to your standard NetWitness UI URL)
    (Top - huntingcontentmodel.json snippet, showing the references with respect to your standard NetWitness UI URL)
  2. Go into the "Initialize Hunting Instance" playbook, click on "Playbook Triggered", and enter the path to your huntingcontentmodel.json file (with the updated fields pointing to NetWitness). If you leave it as is, none of the look-ahead queries will work since no configuration file will be loaded.
  3. Create your first hunting incident: from Incidents > New Incident, select type "Hunt" and give it a time range. Start with 1 day for testing.
  4. Note that the playbook will automatically name the incident "Hunt Master" prepended with a unique HuntID. Everything is working if, in the Incidents page, you see a single Hunt Master and associated Hunt Items all sharing the same HuntID.

Opening up the Hunt Master incident Summary page (or Hunting Dashboard) should show you the full hunting table:

 

Please add comments as you find bugs or have additional ideas and content to contribute.

One of the major new features found in RSA NetWitness Platform version 11.1 is RSA NetWitness Endpoint Insights.  RSA NetWitness Endpoint Insights is a free endpoint agent that provides a subset of the full RSA NetWitness Endpoint 4.4 functionality as well as the ability to perform Windows log collection.  Details of how to configure RSA NetWitness Endpoint Insights can be found here: https://community.rsa.com/docs/DOC-86450

 

Additionally, as of RSA NetWitness Platform version 11.0, those with both RSA NetWitness Log & full RSA NetWitness Endpoint components have the option to start bringing the two worlds together under a unified interface.  This integration strengthens in version 11.1, and will continue to do so through version 11.2 and beyond.   Details of this integration can be found here: Endpoint Integ: RSA Endpoint Integration

 

I created the content below to complement the endpoint scan data (RSA NW Endpoint and RSA NW Endpoint Insights) as well as the tracking data (RSA NW Endpoint + meta integration into 11.X).  As you leverage this content, please let me know if you have any questions, and please post improvements and iterations as well.

 

Note:  If using the RSA NW Endpoint Insights agent (vs the full RSA NW Endpoint 4.4 agent) full process tracking data is not available. The process-centric content below will still work, but keep in mind that the process data reported is only a snapshot in time based on endpoint scan schedules and will not capture any process events in between scans.  

 

Content Summary:

Autoruns -  Outliers Report & Dashboard
Autoruns & Scheduled Tasks launching from or arguments containing AppData\Local\Temp
Autoruns & Scheduled Tasks launching from root of \ProgramData
Autoruns & Scheduled Tasks invoking Command Shell (cmd.exe or powershell.exe)
Autoruns & Scheduled Tasks invoking wscript.exe or cscript.exe
Autoruns & Scheduled Tasks invoking .vbs, .bat, .hta, .ps1 scripts
Autoruns - Rarest HKCU.../Run and /RunOnce keys
Processes & Files - Outliers Report & Dashboard
Rarest Child Processes of Web Server Processes
Rarest Parent Processes of cmd.exe
Rarest Parent Processes of powershell.exe
Rarest Processes running from AppData\Local\ or AppData\Roaming
Rarest Executables in Root of ProgramData
Rarest Executables in Root of C:\
Rarest Executables in Root of Windows\System32
Rarest Company Headers in Files
Rarest Code Signing CN in Files
ESA Rules
Alert: Scheduled Task running out of AppData\Local\Temp
Alert: Scheduled Tasks running cmd.exe or powershell.exe (with Whitelist expectation)
Alert: Scheduled Tasks running cscript.exe or wscript.exe (with Whitelist expectation)
Alert: Windows Reserved Process Names Running From Suspicious Directory
Alert: Process Running from $RECYCLE.BIN
Meta & Column Groups
1 x Meta Group:  Scan and Log Data
7 x Column Groups:  NWEndpoint [Autorun/DLL/File/Machine/Process/Service/General] Analysis

 

Screenshots

Dashboards

Meta Group

 

Column Group (eg. Process Analysis)

Column Group (eg. Autoruns and Tasks)

One of the major new features found in RSA NetWitness Platform version 11.1 is RSA NetWitness Endpoint Insights.  RSA NetWitness Endpoint Insights is a free endpoint agent that provides a subset of the full RSA NetWitness Endpoint 4.4 functionality as well as the ability to perform Windows log collection.  Details of how to configure RSA NetWitness Endpoint Insights can be found here: https://community.rsa.com/docs/DOC-86450

 

Additionally, as of RSA NetWitness Platform version 11.0, those with both RSA NetWitness Log & full RSA NetWitness Endpoint components have the option to start bringing the two worlds together under a unified interface.  This integration strengthens in version 11.1, and will continue to do so through version 11.2 and beyond.   Details of this integration can be found here: Endpoint Integ: RSA Endpoint Integration 

 

The 05/16/2018 RSA Live update added 4 new reports to take advantage of the Endpoint Scan Data collected by either the free RSA NetWitness Endpoint Insights agent, or the full RSA NetWitness Endpoint 4.4 meta integration (search "Endpoint" in RSA Live):

 

 

Use these reports to gain summarized visibility into endpoints, and to prioritize hunting efforts through outlier/stack analysis.  Outliers are usually worth gaining visibility into and understanding, particularly those related to persistence techniques and post-exploit activities commonly used by adversaries.  While not every outlier implies something bad is happening, this type of analysis tends to be fruitful, particularly as you increase the accuracy of rules over time through additional whitelist logic.
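Outlier/stack analysis of this kind boils down to counting and sorting: tally a field (e.g. an autorun path) across all endpoints and inspect the bottom N.  A minimal sketch of the idea, with hypothetical sample data:

```python
from collections import Counter

def bottom_n(values, n=10):
    """Stack (count) the values and return the n rarest with their counts."""
    counts = Counter(values)
    # sort by count, then value, and keep the rarest n
    return sorted(counts.items(), key=lambda kv: (kv[1], kv[0]))[:n]

# hypothetical stack: a common autorun across the fleet plus one outlier
autorun_paths = (["c:\\programdata\\updater.exe"] * 800
                 + ["c:\\programdata\\svchost.exe"])
```

Here `bottom_n(autorun_paths, n=1)` surfaces the one-off `svchost.exe` in ProgramData, exactly the kind of outlier these reports are built to highlight.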

 

Report #1 Endpoint Scan Data Autorun and Scheduled Task Report (Outliers)

Outlier (bottom N) reporting of a subset of suspicious autoruns and scheduled tasks, containing the tables below.

 

Rarest Autoruns/Tasks in AppData/X and ProgramData root folders across environment (rarity among locations commonly used by malware)

Rarest Autorun registry keys across the environment

Enumerate all Autoruns/Tasks Invoking shells or scripts  (some software will do this legitimately, but should be more or less consistent across an enterprise with common images - look specifically at the launch arguments for signs of bad behavior)

 

Eg. Rarest Autoruns invoking command shells table:

 

Report #2 Endpoint Scan Data File and Process Outliers Report

Predominantly outlier (bottom N) reporting of contextually interesting processes, containing the tables below.

 

Rarest parent processes of powershell.exe and cmd.exe (this should be fairly uniform across an organization based on common software distribution - outliers become worthy of a look)

Rarest child processes of web server processes (looking for anomalous process execution that could indicate presence of a webshell)

Rarest Code Signing Certificate CNs 

Windows Processes with Unexpected Parent Processes (based on https://digital-forensics.sans.org/media/poster_2014_find_evil.pdf), looking for non-typical mismatches of Windows child/parent processes

 

Eg. Rarest child processes of web server processes table:

 

Report #3 Endpoint Scan Data Host Report 

This report takes an endpoint hostname as input.  It will enumerate all scan data (e.g. processes, autoruns, machine details, files) collected over a period of time.  NOTE:  This data also lives directly in the NW 11.1 UI under the "Hosts" section in a much nicer layout if you want it at a glance.

 

Eg. Report alternative in 11.1 - Hosts view:

 

Eg. Endpoint Scan Data Host Report:

 

Report #4 Endpoint Machine Summary Report

A summary of the Endpoint deployment in an environment, including OS breakdown, and NW Endpoint version breakdown.  NOTE:  This data also lives directly in the NW 11.1 UI under the "Hosts" section if you want it at a glance:

 

Eg. Report alternative in 11.1 - Hosts view:

 

Eg. Endpoint Summary Report:

Here are a few column and meta groups to help get you started in NW 11.1 for either the free NW Endpoint Insights integration or the existing NW Endpoint 4.4 meta integration.  These are designed to help speed up analysis based on the category of endpoint data of interest.  It's also worth remembering that you have access to a lot of this data in a per-host context with the new 11.1 Investigate > Hosts view, which is a handy way to get a snapshot of what is going on at a given point in time for a specific host, without (or prior to) querying the NWDB, e.g.:

 

 

When hunting or analyzing endpoint data across an entire environment, or in context with network and other log data for a specific host, you would then want to pivot into the more traditional Investigate > Navigate/Hosts view which is where you would apply the appropriate meta and column groups.

 

 

Meta Group (1) 

Top down organization of keys:

   - Host Information

   - Data Category (+Action for event tracking)

   - File/Process Keys

   - IPv4 Keys

   - User Keys

   - Service, Autoruns, Tasks

 

[NWEndpoint] Event and Scan Summary:

Column Groups (5)

When using column groups for analysis of NW Endpoint data, I like having both a generic column group that can show all event and scan data categories on the same page without too much clutter, as well as specific column groups mapped to individual categories (eg. Process Analysis, File Analysis, Autorun Analysis, etc.).  The NW 11.1 platform lets you toggle between these at will.  Also note that these will apply to both Event view and Event Analysis view.

 

Eg. [NWEndpoint] Event and Scan Summary (same keys as the Meta Group)

 

Eg. [NWEndpoint] Process Analysis

(Note: 'Process Event' category is only available with the full NW Endpoint Agent)

Eg. [NWEndpoint] File & DLL Analysis

 

Eg. [NWEndpoint] Service Analysis

 

Eg. [NWEndpoint] Autorun & Task Analysis

Installation

Investigation: Manage Column Groups in the Events View 

Investigate: Manage Meta Groups 

 

** NOTE:  The attached groups use the meta key 'param' to display "Launch Arguments".  The 11.1 out-of-box configuration maps this to the 'query' key instead.  'param' will be the default as of the 11.1.0.1 patch, but in the meantime you can either update your table-map.xml/concentrator index manually, or switch the meta key referenced in the groups to 'query', which is the 11.1 out-of-box setting.

 

Process: see Host GS: Maintain the Table Map Files for the table-map.xml instructions, and Core Database Tuning Guide: Index Customization for the concentrator index.

table-map-custom.xml addition:  <mapping envisionName="param" nwName="param" flags="None"/>

index-concentrator-custom.xml addition:  <key description="Launch Arguments" level="IndexValues" name="param" format="Text" valueMax="100000" />

Mimikatz is an open source research project, with its first commit back in 2014 via @gentilkiwi, that is now used extensively by pen testers and adversaries alike for various post-exploitation activities.  One of many write-ups on Mimikatz can be found here.

 

The intention of this post is to show how you might instrument your NetWitness environment to detect attempts to use Mimikatz for credential theft, which is arguably its most common application.  Designed to be as generic as possible to account for re-compilations, the primary detection mechanism, focused on fingerprinting loaded DLLs, is based on some great research done by @Wardog, detailed here.

 

Here are the detection opportunities discussed in this post:

 

This post focuses primarily on the log-based detection via an ESA rule.  The choice to feed Sysmon logs into NW is predicated on the requirement to gather ImageLoaded events every time a DLL is loaded by a process.

 

 

Getting Sysmon logs into NetWitness

The general process for enabling Sysmon capture is fairly straightforward and detailed by Eric Partington in his post: Log - Sysmon 6 Windows Event Collection.  The key events we care about for this scenario are Sysmon EventID 1 (process creation) and Sysmon EventID 7 (ImageLoaded).

 

 

Two recommendations:

  1. Although not required for detection, for stronger analysis, you may consider adjusting table-map-custom.xml on your log decoder to add the parent.process meta key and toggle parent.pid from "Transient" to "None" to gain visibility into these values.  Sample addition to table-map-custom.xml (more details on this process here)

    <mapping envisionName="parent_pid" nwName="parent.pid" flags="None" format="Int32"/>
    <mapping envisionName="parent_process_val" nwName="parent.process" flags="None"/>


  2. Sysmon can get very, very noisy if not properly filtered.  This part is up to you (and the capacity of your system) but I started with this sysmon config template: sysmon-config/sysmonconfig-export.xml at master · SwiftOnSecurity/sysmon-config · GitHub  and modified the EventID 7 config within the template shown below - only including ImageLoaded events necessary for the detection logic:

    <!--SYSMON EVENT ID 7 : DLL (IMAGE) LOADED BY PROCESS-->
    <!--DATA: UtcTime, ProcessGuid, ProcessId, Image, ImageLoaded, Hashes, Signed, Signature, SignatureStatus-->
    <ImageLoad onmatch="include">
    <ImageLoaded condition="contains">cryptdll.dll</ImageLoaded>
    <ImageLoaded condition="contains">hid.dll</ImageLoaded>
    <ImageLoaded condition="contains">winscard.dll</ImageLoaded>
    <ImageLoaded condition="contains">logoncli.dll</ImageLoaded>
    <ImageLoaded condition="contains">netapi32.dll</ImageLoaded>
    <ImageLoaded condition="contains">samlib.dll</ImageLoaded>
    <ImageLoaded condition="contains">vaultcli.dll</ImageLoaded>
    <ImageLoaded condition="contains">wintrust.dll</ImageLoaded>
    </ImageLoad>

* full sample template attached to the bottom of this post.

 

Important note:  When setting up Sysmon on your source endpoints, to enable capturing of ImageLoaded events you will need to add "-l" to the installation or run-time command line.

 

 

NW Log Config - ESA Rule

The logic of this ESA rule is pretty straightforward but is implemented as an Advanced EPL rule: look for any process loading the set of "fingerprint" DLLs indicative of Mimikatz, and alert.  The rule is attached to this post, and instructions on how to import it are located here: Alerting: Import or Export Rules.  Two separate formats have been attached - one in native/exported format that can be directly imported as per the instructions, the other as a text file for you to modify/copy/paste into a new Advanced EPL ESA rule if you prefer to do it that way.
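The rule's core idea, independent of EPL syntax, can be sketched as follows.  This is a simplification for illustration, not the attached rule itself, and the `min_matches` threshold is an assumption: for each process, collect the distinct fingerprint DLLs seen in Sysmon ImageLoaded events and flag the process when enough of them co-occur.

```python
from collections import defaultdict

# DLL fingerprint set from the sysmon config earlier in this post
FINGERPRINT_DLLS = {"cryptdll.dll", "hid.dll", "winscard.dll", "logoncli.dll",
                    "netapi32.dll", "samlib.dll", "vaultcli.dll", "wintrust.dll"}

def find_suspect_processes(events, min_matches=5):
    """events: iterable of (process_id, image_loaded_path) pairs taken from
    Sysmon EventID 7.  Returns the process ids that loaded at least
    `min_matches` distinct fingerprint DLLs."""
    loaded = defaultdict(set)
    for pid, path in events:
        dll = path.rsplit("\\", 1)[-1].lower()  # strip directory, normalize case
        if dll in FINGERPRINT_DLLS:
            loaded[pid].add(dll)
    return {pid for pid, dlls in loaded.items() if len(dlls) >= min_matches}
```

In the real deployment this correlation runs continuously inside ESA over a time window; the sketch only shows the set-membership and counting logic.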

 

Note:  This rule was tested in a lab environment, so please feel free to test and provide feedback on performance and false/true positive rates.  You'll notice a second ESA rule suffixed with "(aggressive)" that may catch more implementations of mimikatz but might be prone to false positives.  I wasn't able to get either to falsely trip in my lab, but your mileage may vary.

 

 

NW Endpoint Config - IIOCs

No additional config is required if you see the following InstantIOCs in your system already:

 

If we combine endpoint & log visibility for this use case, we get increased fidelity overall due to a combination of approaches that corroborate one another.  NWE automatically looks for various behavioral indicators involving lsass.exe for credential theft, which is a great complement to the DLL fingerprinting approach.  For an attacker, subverting either behavior individually will be easier than subverting both.  While not covered in this post, it's certainly possible to create a new alert for the NWE IIOC detection by itself OR add it to the conditions of the DLL fingerprinting ESA rule to increase fidelity.  If your NWE IIOCs are configured to send a syslog event for every match, the output when triggered looks as follows (sample CEF logs attached):

 

 

 

Results

The ESA rule triggered successfully after running mimikatz in a number of ways, including as a native binary, PowerShell Mafia via PS command line & interactive shell, PowerShell Empire via native powershell and injected process, Crackmapexec, and Metasploit.  I'd love some feedback from anyone willing to test alternate methods. 

 

ESA alert rolled up into an incident:

 

 

Incident details:

 

If you have NetWitness Endpoint, you can pivot into it to gain deeper context.  In the tracking data below, reading bottom-up shows initial compromise (this example happened to be a simple PowerShell Empire stager), subsequent activity, and interaction with lsass.exe triggering a Level 1 IOC:

 

It should also be noted that as of NW 11.0 and NW Endpoint 4.4, an integration exists to bring the same NW Endpoint tracking data directly into the broader NW platform as log events.  As this integration matures, you can undoubtedly expect a closer tie between related log & endpoint behavior.

 

 

____________________________________________________________________

Sample Data

I've attached a set of sample logs that should trigger the rule if replayed into a log decoder.  These logs contain the suspect event as well as some benign logs from the same endpoint.  An easy way to replay these is with the NwLogPlayer utility (How To Replay Logs in RSA NetWitness - note, if using version 11.0 the package name has changed and it can be installed with yum install rsa-nw-logplayer).

The second attached log set shows 2 sample CEF syslog alerts sent directly from NW Endpoint when the discussed IIOCs trigger.

____________________________________________________________________
