
RSA NetWitness Platform

18 Posts authored by: Josh Randall

Easy-add Recurring Feeds

Posted by Josh Randall, Oct 15, 2019

In the past, I've seen a number of people ask how to enable a recurring feed from a hosting server that is using SSL/TLS, particularly when attempting to add a recurring feed hosted on the NetWitness Node0 server itself.  The issue presents itself as a "Failed to access file" error message, which can be especially frustrating when you know your CSV file is there on the server, and you've double- and triple-checked that you have the correct URL:

 

There are a number of blogs and KBs that cover this topic in varying degrees of detail:

 

 

Since all the steps required to enable a recurring feed from an SSL/TLS-protected server are done via CLI (apart from the Feed Wizard in the UI), I figured it would be a good idea to put together a script that just does everything automatically - minus a couple of requests for user input and (y/N) prompts.
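For anyone curious what a script like this is doing under the hood, the underlying issue is usually just that the feed client does not trust the certificate presented by the hosting server.  A rough sketch of that manual fix, assuming a CentOS 7 host where adding the certificate to the OS trust store is sufficient for your setup (the hostname and paths below are illustrative; the attached script handles the NetWitness-specific steps and prompts):

# grab the certificate presented by the server hosting the CSV
echo | openssl s_client -showcerts -connect feedhost.example.com:443 2>/dev/null \
  | openssl x509 -outform PEM > /tmp/feedhost.pem

# add it to the CentOS trust store and refresh
cp /tmp/feedhost.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust extract

# confirm the CSV is now reachable over HTTPS without certificate errors
curl -v https://feedhost.example.com/feeds/myfeed.csv -o /dev/null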

 

The goal here is that this will quickly and easily allow people to add new recurring feeds from any server that is hosting a CSV:

 

Success!

A couple of years ago, a few smart folks over at Salesforce came up with the idea of fingerprinting certain characteristics of the "Client Hello" in the SSL/TLS handshake, with the goal of more accurately identifying the client application initiating TLS-encrypted sessions.

 

This concept certainly has the potential to provide invaluable insight during incident response, though there are some significant operational limitations that (in my opinion) have so far prevented JA3 fingerprinting from gaining more widespread adoption and use.  Perhaps the biggest of these limitations is the need for some kind of known JA3 fingerprint library or repository, where the thousands (potentially millions?) of client applications that might initiate a TLS handshake can be reliably matched with their JA3 fingerprints. There are a couple of sites building out these repositories

 

...but their content is limited (after all, fingerprinting a client requires installing it, running it, capturing the PCAP, running a JA3 parser or script against the PCAP, and then adding that fingerprint to the library; that process simply does not scale), and the fidelity, accuracy, and timeliness of these libraries are a pretty large question mark.

 

However, with NetWitness 11.3.1, which has a native option to enable JA3 and JA3S fingerprinting, and NetWitness Endpoint 11.3, we can bridge this gap and create our own JA3 libraries.

 

The concept is fairly simple:

  • use NetWitness Endpoint to identify applications making outbound network connections
  • use NetWitness Network to identify outbound HTTPS traffic
  • link these events and sessions by their common characteristics
  • once we have that link
    • extract the filename and sha256 hash of the application from the NetWitness Endpoint event
    • along with the JA3 fingerprint from the network session
    • and then create a feed of that information that the NetWitness Platform can use for additional context

 

In order to ensure this process scales, we can make use of the ESA's rule engine to identify the sessions we want and its script output functionality to create the feed for us.  The ESA rule and python script output are attached to this blog.
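Just to give a sense of the end product, here's a purely hypothetical example of the kind of rows the script output appends to the CSV that becomes the feed (the column layout here is an assumption for illustration - check the attached ja3context.py for the actual format it writes):

cat ja3Context.csv
# ja3,filename,checksum
# <ja3_md5_fingerprint>,chrome.exe,<sha256_of_chrome.exe>
# <ja3_md5_fingerprint>,powershell.exe,<sha256_of_powershell.exe>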

 

Prior to enabling these, you'll want to make sure the "netwitness" user has either read/write access to the "/var/netwitness/common/repo" directory on the Admin Server (a.k.a. Node0), or at least read/write access to the "ja3Context.csv" file in that directory that the ja3context.py script will update.
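For example, ACL entries along these lines would cover either option (a sketch; scope it however tightly your environment requires):

# give the netwitness user read/write on just the CSV the script updates
setfacl -m u:netwitness:rw /var/netwitness/common/repo/ja3Context.csv

# or, more broadly, on the whole repo directory
setfacl -R -m u:netwitness:rwX /var/netwitness/common/repo

# verify the resulting ACL
getfacl /var/netwitness/common/repo/ja3Context.csv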

 

A good guide for setting ACLs in CentOS is here: https://www.tecmint.com/give-read-write-access-to-directory-in-linux/  and the result:

 

Once the appropriate permissions are set and you've enabled the ESA rule and its script output, your last step will be to turn that CSV output into a feed ("A list two ways - Feeds and Context Hub" - many thanks again to the SE formerly known as Eric Partington for this blog):

 

...and choose your meta keys:

 

And voila!  We have an automatically generated and constantly updating library of applications for our JA3 fingerprints:

It often happens to me that while I am testing new alerts and incident aggregation rules, I find that the aggregation condition(s) I chose in my Incident Rule are not what I want.  While I could re-create the raw alerts from scratch, I wanted an easier method to tell the Respond engine to re-apply its aggregation rule policies on the alerts that already exist in the database.

 

To be clear, the Respond engine is always attempting to apply all active and valid Incident Rules against un-aggregated and un-affiliated alerts in the database -- that is, any alert that has not been previously aggregated into any incident can be automatically aggregated into an incident if an incident rule with matching conditions is changed/created.  But for previously aggregated alerts whose incidents have been deleted (leaving the alerts un-aggregated but previously-affiliated), the Respond engine will not attempt to re-aggregate them.

 

So my goal, then, was to get the Respond engine to include these previously-affiliated alerts in its aggregation attempts.  To achieve this, the alerts simply needed to be updated to remove their previously-affiliated status.  And to make it easy to change dozens or even hundreds of alerts at once, I wrote a simple shell script (attached to this blog and pasted below) to do it all for me.

 

#!/bin/bash
#
#grab the deploy_admin password
DEPLOY_PW=$(security-cli-client --get-config-prop --prop-hierarchy nw.security-client --prop-name platform.deployment.password --quiet)

#set a desired time range to query for alerts
#examples: "24 hours ago" or "14 days ago" or "4 weeks ago"
timeRange=$(date +%s%N -d "30 days ago" | cut -b1-13)

#identify primaryESA host
primaryESA=$(echo -e "use orchestration-server\ndb.host.find({installedServices:\"ESAPrimary\"},{hostname:1})" | mongo admin -u deploy_admin -p $DEPLOY_PW --quiet | grep -Po "hostname.*\"" | sed -e "s/hostname.\{5\}\|\"//g")

#change status on all alerts that were part of a deleted incident
#within the timerange from "REMOVED_FROM_INCIDENT" to "NORMALIZED"
echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

A couple notes on the script:

  • I used one extremely generic parameter (timestamp within last 30 days) to limit the database query and update operation (line 15)
    • you should feel free to modify the timeRange (line 8) to suit your needs
    • you should also feel free to (carefully) modify the database query to focus on specific alerts in your environment
      • for example, given the following raw alert:

 

...you could change line 15 and add:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.moduleName\":\"Alert with source and destination IP values\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

...or:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.events.ip_src\":\"192.168.20.20\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

...or:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.moduleName\":\"Alert with source and destination IP values\"},{\"originalAlert.events.ip_src\":\"192.168.20.20\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

  • a successful run of the script will produce output like this, showing you how many alerts in the database were modified (3, in this case):

 

Of course, I recommend testing this (and most everything else) in a pre-prod or test NetWitness environment, if you have one.  And should you have any questions about what might be a good and/or valid database query, the Link community is always on hand to help (please have screenshots and/or specifics about your alerts ready...it's hard to help without knowing details).
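And if you'd like to gauge how many alerts a given query would touch before actually running the update, you can run a read-only count first.  A quick sketch, reusing the $timeRange, $DEPLOY_PW, and $primaryESA variables from the script above (adapt the query conditions to whatever you settled on):

#count the previously-affiliated alerts within the time range without modifying anything
echo -e "use respond-server\ndb.alert.count({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}}]})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet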

One of the changes introduced in 11.x (11.0, specifically) was the removal of the macros.ftl reference in notification templates.  These templates enable customized notifications (primarily syslog and email) using freemarker syntax. The 10.x templates relied on macros (which are basically just functions, in freemarker terminology) to build out and populate both the OOTB and (most likely) custom notifications.

 

If you upgraded from 10.x to 11.x and had any custom notifications, there's a very good chance you noticed that these notifications failed, and if you dug into the logs you'd probably have found an error like this:

The good news is there's a very easy fix for this, and it does not require re-writing any of your 10.x notifications.  The contents of the macros.ftl file that was previously used in 10.x simply need to be copy/pasted into your existing notification templates, replacing the <#include "macros.ftl"/> line, and they'll continue to work the same as they did in your 10.x environment (props to Eduardo Carbonell for the actual testing and verification of this solution).

 

Example:

 

...becomes:

 

I have attached a copy of the macros.ftl file to this blog, or if you prefer you can find the same on any 11.x ESA host in the "/var/netwitness/esa/freemarker" directory.
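If you'd rather pull the file straight off an ESA host than use the attachment, something like this works (a sketch; substitute your own ESA hostname):

# copy the shipped macros.ftl from an 11.x ESA host to wherever you're editing templates
scp root@<your_esa_host>:/var/netwitness/esa/freemarker/macros.ftl .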

In 11.3, the same NWE agent can operate in either Insights (free) or Advanced mode.  This change can be made by toggling a policy configuration in the UI and does not require an agent reinstall or reboot.

A single deployment can contain both Insights and Advanced agents.  Only agents operating in Advanced mode count toward licensing.

 

Feature comparison across Insights and Advanced modes, supported operating systems (Windows, MacOS, Linux), and release availability:

Feature | Comments | Release
Basic scans | Inventory | 11.3, 4.x
Tracking scans | Continuous file, network, process, and thread monitors; registry monitor (Windows only) | 11.3, 4.x
Anomaly detection | Inline hooks, kernel hooks, suspicious threads, registry discrepancies | 11.3, 4.x
Windows Log Collection | Collect Windows Event Logs | 11.3**
Threat Detection Content | Detection Rules / Reports | 11.3
Risk score | Based on Threat Content Pack | 11.3, 4.x
File Reputation Service | File Intel (3rd-party lookup) | 11.3, 4.x
Live Connect | Community Intel | 11.3, 4.x
Analyze module | Analysis of downloaded file | 11.3, 4.x
Blocking | Block an executable | 11.3, 4.x
Agent Protection | Driver Registry Protection / User Mode Kill Protection | 11.3**
Powershell, Command-line (input) | Report user interactions within a console session | 11.3**
Process Visualization | Unique identifier (VPID) for a process that uniquely identifies the entire process event chain | 11.3**
MFT Analysis | - | Future, 4.x
Process Memory Dump | - | Future, 4.x
System Memory Dump | - | Future, 4.x
Request File | - | Future, 4.x
Automatic File Downloads | - | Future, 4.x
Standalone Scans | - | Future, 4.x
RAR | - | Future, 4.x
Containment | - | Future, 4.x
API Support | - | Future, 4.x
Certificate CRL Validation | - | Future, 4.x

** New capabilities; these do not exist in 4.x

 

 

11.3 Key Endpoint Features 

Each feature below is listed with its value and details:

1. Advanced Endpoint Agent
   Value: Full and continuous endpoint visibility; advanced threat detection / threat hunting
   Details: Performs both kernel- and user-level analysis
   • Tracks behaviors such as process creation, remote thread creation, relevant registry key modifications, executable file creation, and processes that read documents (refer to the documentation for the detailed list)
   • Tracks network events
   • Tracks console events (commands typed into a console such as cmd)
   • Windows log collection
   • Detects anomalies such as image hooks, kernel hooks, suspicious threads, and registry discrepancies
   • Retrieves lists of drivers, processes, DLLs, files (executables), services, autoruns, host file entries, and scheduled tasks
   • Gathers security information such as network shares, patch level, Windows tasks, logged-in users, and bash history
   • Reports the hashes (SHA-256, SHA-1, MD5) and file size of all binaries (executables, libraries (DLL and .SO)) and scripts found on the system
   • Reports details on certificate, signer, file description, executable sections, imported libraries, etc.

2. Threat Content Packs
   Value: Detection of adversary tactics and techniques (MITRE ATT&CK matrix)
   Details: See the attached 11.3 Endpoint Rules spreadsheet

3. Risk Scoring
   Value: Prioritized list of risky hosts/files; automated incident creation for hosts/files when the risk threshold is exceeded; risk score backed by the context of its contributing factors; rapid/easy investigation workflow
   Details: Risk scores are computed based on a proprietary scoring algorithm developed by RSA's Data Sciences team.  The Scoring server takes multiple factors into consideration:
   • Critical, High, and Medium indicators generated by the endpoints, based on the deployed threat content packs
   • Reputation status of files - Malicious / Suspicious
   • Bias status of files - Blacklisted / Greylisted / Whitelisted

4. Process Visualizations
   Value: Provides a visualization of a process and its parent-child relationships; timeline of all activities related to a process

5. File Analysis / Reputation / Bias Status
   Value: Categorize files; save analysis time, filter out noise, focus on real threats
   Details: File hashes from the environment are sent to the RSA Threat Intel Cloud for reputation status updates; Live Connect lookup in Investigations

6. Response Actions - File Blocking
   Value: Accelerate response / prevent malware execution
   Details: Blocks a file hash across the environment

7. Response Actions - Retrieve Files
   Value: Download and analyze file contents for anomalies
   Details: Static analysis using 3rd-party tools

8. Centralized Group Policy Management
   Value: Agent configurations updated dynamically based on group membership
   Details: Groups can be created based on different criteria such as IP address, host names, operating system type, and operating system description.  Endpoint policies such as agent mode, scan schedule, server settings, and response actions can be automatically pushed based on group membership.  Agents can be migrated to different Endpoint Servers based on group/policy assignment.

9. Geo-Distributed Scalable Deployment
   Value: Consolidated view and management of endpoints/files and the associated risk across distributed deployments

One of the more common requests and "how do I" questions I've heard in recent months centers around the Emails that the Respond Module can send when an Incident is created or updated.  Enabling this configuration is simple (https://community.rsa.com/docs/DOC-86405), but unfortunately changing the templates that Respond uses when it sends one of these emails has not been an option.

 

Or rather...has not been an accessible option.  I aim to fix that with this blog post.

 

Before getting into the weeds, I should note that this guide does not cover how to include *any* alert data within incident notification emails. The fields I have found in my tests that can be included are limited to these using JSON dot notation (e.g. "incident.id", "incident.title", "incident.summary", etc.):

 

Now, this does not necessarily mean it isn't possible to include other data, just that I have not figured out how...yet.

 

The first thing we need to do is create a new Notification Template for Respond to use.  We do this within the UI at Admin / System / Global Notifications --> Templates tab.  I recommend using either of the existing Respond Notification templates as a base, and then modifying either/both of those as necessary. (I have attached these OOTB notification templates to this blog.)

 

For this guide, I'll use the "incident-created" template as my base, and copy that into a new Notification Template in the UI.  I give my template an easy-to-remember name, choose any of the default Template Types from the dropdown - it does not matter which I choose, as it won't have any bearing on the process, but it's a required field and I won't be able to save the template without selecting one - and write in a description:

 

Then I copy the contents of the "incident-created" template into the Template field.  The first ~60% of this template is just formatting and comments, so I scroll past all that until I find the start of the HTML <body> tag.  This is where I'll be making my changes.

 

One of the more useful changes that comes to mind here is to include a hyperlink in the email that will allow me to pivot directly from the email to the Incident in NetWitness.  I can also change any of the static text to whatever fits my needs.  Once I'm done making my changes, I save the template.

 

After this, I'm done in the UI (unless I decide to make additional changes to my template), and open an SSH session to the NetWitness Admin Server.  To make this next part as simple and straightforward as I can, I've written a script that will prompt me for the name of the Template I just created, use that to make a new Respond Notification template, and then prompt me one more time to choose which Respond Notification event (Created or Updated) I want to apply it to. (The script is attached to this blog.)

 

A couple notes on running the script:

  1. Must be run from the Admin Server
  2. Must be run as a superuser

 

Running the script:

 

...after a wall of text because my template is fairly long...I get a prompt to choose Created or Updated:

 

And that's it!  Now, when a new incident gets created (either manually or automatically) Respond sends me an email using my custom Notification Template:

 

And if I want to update or fix or modify it in any way, I simply make my changes to the template within the UI and then run this script again.

 

Happy customizing.

One of the features included in the RSA NetWitness 11.3 release is something called Threat Aware Authentication (Respond Config: Configure Threat Aware Authentication).  This feature is a direct integration between RSA NetWitness and RSA SecurID Access that enables NetWitness to populate and manage a list of potentially high-risk users that SecurID Access can then refer to when determining whether (and how) to require those users to authenticate.

 

The configuration guide above details the steps required to implement this feature in the RSA NetWitness Platform, and the relevant SecurID documentation for the corresponding capability is here: Determining Access Requirements for High-Risk Users in the Cloud Authentication Service.

 

On the NetWitness side, to enable this feature you must be at version 11.3 and have the Respond Module enabled (which requires an ESA), and on the SecurID Access side, you need to have Premium Edition (RSA SecurID Access Editions - check the Access Policy Attributes table at the bottom of that page).

 

At a high level, the flow goes like this:

  1. NetWitness creates an Incident
  2. If that Incident has an email address (one or more), the Respond module sends the email address(es) via HTTP PUT method to the SecurID Access API
  3. SecurID Access checks the domains of those email addresses against its Identity Sources (AD and/or LDAP servers)
  4. SecurID Access adds those email addresses with matching domains to its list of High Risk Users
  5. SecurID Access can apply authentication policies to users in that list
  6. When the NetWitness Incident is set to Closed or Closed-False Positive, the Respond module sends another HTTP PUT to the SecurID Access API removing the email addresses from the list

 

In trying out these capabilities, I ended up making a couple of tools to help report on some of the relevant information contained in NetWitness and SecurID Access.

 

The first of these is a script (sidHighRiskUsers.py; attached at the bottom of this blog) to query the SecurID Access API in the same way that NetWitness does.  This script is based on the admin_api_cli.py example in the SecurID Access REST API tool (https://community.rsa.com/docs/DOC-94122).  That download contains all the python dependencies and modules necessary to interact with the SecurID API, plus some helpful README files, so if you do intend to test out this capability I recommend giving that a look.

 

Some usage examples of this script (can be run with either python2 or python3 or both, depending on whether you've installed all the dependencies and modules in the REST API tool):

 

Show Users Currently on the High Risk List

# python highRiskUsers.py -f /path/to/SIDAccess/API.key -o getHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>"

 

 

Add  Users to the High Risk List

# python highRiskUsers.py -f /path/to/SIDAccess/API.key -o addHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>" -e <single_or_multiple_email_address>

 

**Note: my python-fu is not strong enough to capture/print the 404 response from the API if you send a partially successful PUT.  If your python-fu is strong, I'd love to know how to do that correctly.

Example - if you try to add multiple user emails and one or more of those emails are not in your Identity Sources, you should see this error for the invalid email(s):

 

Remove Users from the High Risk List

# python highRiskUsers.py -f /path/to/SIDAccess/API.key -o removeHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>" -e <single_or_multiple_email_address>

 

*Note: same as above about a partially successful PUT to the API

 

The second tool is another script (nwHighRiskUsersReport.sh; also attached at the bottom of this blog) to help report on the NetWitness-specific information about the users added to the High Risk list, the Incident(s) that resulted in them being added, and when they were added.  This script should be run on a recurring basis in order to capture any new additions to the list - the frequency of that recurrence will depend on your environment and how often new incidents are created or updated.

 

The script will create a CEF log for every non-Closed incident that has added an email to the High Risk list, and will send that log to the syslog receiver of your choice.  Some notes on the script's requirements:

  1. must be run as a superuser from the Admin Server
  2. the Admin Server must have the rsa-nw-logplayer RPM installed (# yum install rsa-nw-logplayer)
  3. add the IP address/hostname and port of your syslog receiver on lines 4 & 5 in the script
  4. If you are sending these logs back into NetWitness:
    1. add the attached cef-custom.xml to your log decoder or existing cef-custom.xml (details and instructions here: Custom CEF Parser)
    2. add the attached table-map-custom.xml entries to the table-map-custom.xml on all your Log Decoders
    3. add the attached index-concentrator-custom.xml entries to the index-concentrator-custom.xml on all your Concentrators (both Log and Packet)
    4. restart your Log Decoder and Concentrator services
    5. **Note: I am intentionally not using any existing email-related metakeys in these custom.xml files in order to avoid a potential feedback loop where these events might end up in other Incidents and the same email addresses get re-added to the High Risk list
  5. Or if you are sending them to a different SIEM, perform the equivalent measures in that platform to add custom CEF keys

 

Once everything is ready, running the script:

 

And the results:

------

Quite frequently when testing ESA alerts and output options / templates, I have wanted the ability to manually or repeatedly trigger alerts.  In order to help with this type of testing, I created a couple ESA Alert templates to generate both scheduled alerting and manual, one-time alerts.

 

Each of these can take a wide variety of time- or schedule-based inputs to generate alerts according to whatever kind of frequency you might want.  The descriptions in each alert have examples, requirements, and links to official Esper documentation with more detail.

 

I see the potential for quite a bit of usefulness with the Crontab alert, especially in 11.3 now that ESA Alert script outputs run from the admin server.

 

Lastly, I created these using freemarker templates (how the ESA Rules from Live are packaged) in order to ensure that the times and schedules used in the alerts adhere to proper syntax and formatting, but of course you should feel free to convert these to advanced rules if you like.

 

 

The complete overhaul of NW-Endpoint 4.4 into NW-Endpoint 11.3 includes (among many changes) a different method for creating your own, or tuning existing, endpoint alerts.  In the old version (4.4), everything was a SQL query, but since we have moved away from Windows and SQL Server in 11.3, I'd like to shed some light on how the new process works, as well as include some tooling intended to assist folks who want to do this themselves.

 

The RSA NetWitness Endpoint Configuration Guide (https://community.rsa.com/docs/DOC-100160) has a section starting on pg. 12 that covers everything here in greater detail.  If you'd like more information on this subject, I recommend taking a look at that document.

 

At a high level, the process for Endpoint 11.3 to generate alerts and calculate file and host risk scores goes like this:

 

Let's take a look at  a couple of the OOTB examples and see how these different pieces are interacting with each other by examining the process that turns the "runs powershell decoding base64 string" rule into a potential risk score.

 

If the App Rule's condition statement is met, it creates a meta value of "runs powershell decoding base64 string" in the "boc" meta key:

 

These are then used in the corresponding ESA Rule "Runs Powershell Decoding Base64 String" contained in the OOTB Endpoint Risk Scoring Rule Bundle (I've attached all of the OOTB ESA Rules contained in the bundle to this blog).

****Take note that the app_rule_meta_value is case sensitive.  If you use capital letters in the App Rule Name field, then the "value" field in its companion ESA Rule must also contain capital letters****

 

Last up in the process is the Risk Scoring Rule.  This takes the ESA Alert and produces a score (scaled from 0 - 100) for the host where the alert occurred, and if applicable the module involved in the alert.  This last part is where I expect the most potential confusion - determining the host where an alert occurred is straightforward, but the module might not be.

 

This is because there can potentially be both a source module (filename_src, checksum_src) and a destination module (filename_dst, checksum_dst), or just the module itself without a source or destination (filename, checksum), or for some alerts there might not be a module involved in the alert at all.  I've attached all of the OOTB Risk Scoring Rules to this blog, and I'd encourage you to take a look at these variations if you intend to create your own, or tune existing, rules and alerts.

 

Now then, back to the "Runs Powershell Decoding Base64 String" Rule.  This Risk Scoring Rule looks for the ESA Alert and creates a score for the source module (checksum_src, filename_src) in the event, as well as the host where it occurred.  Any risk scores that are generated for affected hosts and modules will appear in the Investigate/Hosts and Investigate/Files pages in the UI, and can also appear as Alerts and Incidents in the Respond UI.

 

And just to be thorough, here are a couple examples of rules with different Risk Scoring.

 

A rule without a source or destination module --> "Scripting Addition In Process"

 

A rule without any module and just the Host --> "Windows Firewall Disabled"

 

Now we have some examples under our belt, and know how the different inputs and options relate to one another and the outcome.  The process for adding your own rule is covered in the configuration guide linked above, and this next section aims to assist with some of the manual CLI aspects of that process.

 

After playing around with the Blocking capabilities in 11.3, I decided I wanted to add a couple custom alerts.

 

First, I wanted to know when a module I blocked was actively running on an endpoint at the time I blocked it and was subsequently killed.  My App Rule to trigger on this activity:

 

And second, I wanted to know when an attempt was made to access or run a module that I had previously blocked.  My App Rule for this activity:

 

With these App Rules created and Applied, the next steps are to create and apply the corresponding ESA Alert and Risk Scoring Rules from a terminal session in the Admin Server (Node0).  The script "endpointCustomRule.sh" attached to this blog can help walk you through these steps, if you choose.  It aims to eliminate errors that may occur when completing these steps manually.

 

Some notes on the script:

  • must be run on the Admin Server as root
  • must be run only after creating and applying your App Rule(s)
    • be sure to make your App Rules unique, otherwise the script might not find the correct one when it is checking for a valid Log Decoder App Rule
    • if you have multiple Endpoint Log Hybrids (ELHs), be sure to Push your App Rule(s) to the other ELHs in your environment
  • applies some error checking and input validation to ensure valid Rules are created and added to the respective databases successfully

 

If you find errors or gaps in the script please let me know.

 

Prompting user for input:

 

Adding and confirming the ESA and Risk Scoring Rules:

 

And finally, confirming that we are now successfully creating alerts and re-calculating Risk Scores when the events occur:

In RSA NetWitness 11.3, one of the behind-the-scenes changes to the platform was moving the script notification server from ESA onto the Admin Server.

 

This change opens up a number of possibilities for scripting and automating processes within the NetWitness environment, but also requires a few changes to existing, pre-11.3 scripts.

 

Prior to 11.3, the raw alert data would be passed to the ESA script server as a single argument which could then be read, written to disk, parsed, etc. e.g.:

 

#!/usr/bin/env python 
import json
import sys

def dispatch(alert):
   with open("/tmp/esa_alert.json", mode='w') as alert_file:
      alert_file.write(json.dumps(alert, indent=True))

def myFunction():
   esa_alert = json.loads(open('/tmp/esa_alert.json').read())
   .....etc.....
   .....etc.....

if __name__ == "__main__":
   dispatch(json.loads(sys.argv[1]))
   myFunction()
   sys.exit(0)

 

 

But in 11.3, the raw alert gets broken up into multiple arguments that need to be joined together.  One possible solution to this change could be something like this:

 

#!/usr/bin/env python
import sys
import json

def dispatch():
   with open("/tmp/esa_alert.json", mode='w') as alert_file:
      a = sys.argv
      del a[0]
      alert_file.write(' '.join(a))

def myFunction():
   esa_alert = json.loads(open("/tmp/esa_alert.json").read())
   .....etc.....
   .....etc.....

if __name__ == "__main__":
   dispatch()
   myFunction()
   sys.exit(0)

 

...or this:

 

#!/bin/bash
#rejoin the alert JSON that 11.3 passes in as multiple arguments
OUT=""
for a in "$@"
do
    OUT+="$a "
done
echo -e "$OUT" > /tmp/esa_alert.json

 

As I mentioned above, moving the script server onto the Admin Server opens up a number of possibilities for certain queries and tasks within the NW architecture.  Some that come to mind:

  • automating backups
  • pulling host stats and ingesting them as syslog events
  • better ESA Alert <--> Custom Feed <--> Context-Hub List <--> ESA Alert enrichment loops

 

However, one restriction I've been trying to find a good solution for is that the Admin Server will run these scripts as the "netwitness" user, and this user has fairly limited access.

 

I've been kicking around the possibility of adding this user to the sudoers group, possibly adding read/write/execute permissions for this user to specific directories and/or files depending on the use case, or sudo-ing to a different user within the script.
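To illustrate the sudo route concretely (a sketch only, not a recommendation - the script path is hypothetical, and you'd want to scope any rule as tightly as possible):

# allow the netwitness user to run one specific script as root, and nothing else
cat > /etc/sudoers.d/netwitness-esa-scripts <<'EOF'
netwitness ALL=(root) NOPASSWD: /usr/local/bin/esa_notification_task.sh
EOF
chmod 440 /etc/sudoers.d/netwitness-esa-scripts
visudo -cf /etc/sudoers.d/netwitness-esa-scripts

# the notification script would then call its elevated step as:
# sudo /usr/local/bin/esa_notification_task.sh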

 

Each of these options presents certain risks, so I'd be interested in hearing what other folks might think about these or other possible solutions for running scripts with elevated permissions in as secure a manner as possible.

There have been a few blogs recently (Gathering Stats with Salt - BIOS/iDRAC/PERC Edition and RSA NetWitness Storage Retention Script) that leverage new functionality in v11.x for querying data directly from RSA NetWitness hosts through the command line.

 

This functionality - SaltStack - is baked into v11.x (Chef pun ftw!) and enables PKI-based authentication between the salt master (AKA admin server; AKA node0) and any salt minion (everything that's not the salt master, plus itself).

 

During a recent POC, one of the customer's use cases was to gather, report, and alert against certain host information within the RSA NetWitness environment - kernel, firmware, BIOS, OS, and iDRAC versions, storage utilization (%), and some others.

 

In order for NetWitness to report and alert on this information, we needed to take these details about the physical hosts and feed them into the platform so that we could use the resulting meta.  Thankfully, others before me did all the hard work figuring out the commands to run against hosts to extract this information, so all I had to do was massage the results into a format that could be fed into NetWitness as a log event, and write a parser for it.
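For a sense of what those commands look like (a sketch run from the Admin Server / salt master; the targets and modules here are illustrative - the attached scripts contain the actual commands borrowed from the blogs above):

# kernel version from every NetWitness host
salt '*' cmd.run 'uname -r'

# storage utilization on a subset of hosts
salt 'decoder*' cmd.run 'df -h /var/netwitness'

# OS details straight from salt grains
salt '*' grains.item os osrelease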

 

The scripts, parser, and custom index entries are attached to this blog.  All the scripts are intended to be run from your 11.x Admin Server.  If you do choose to use these or modify them for your environment/requirements, be sure to change the IP address for the log replay command within the scripts

NwLogPlayer -r 3 -f $logEvent -s 192.168.10.14 -p 514

 

...to the IP of a Log Decoder in your environment.  

 

A custom meta and custom column group are also attached.

 

I helped one of my customers implement a use case last year that entailed sending email alerts to specific users when those users logged into legacy applications within their environment.

 

Creating the alerts for this activity with the ESA was rather trivial - we knew which event source would generate the logs and the meta to trigger against - but sending the alert via email to the specific user that was ID'd in the alert itself added a bit of complexity.

 

Fortunately, others have had similar-ish requirements in the past and there are guides on the community that cover how to generate custom emails for ESA alerts through the script notification option, such as Custom ESA email template with raw event payload and 000031690 - How to send customized subjects in an RSA Security Analytics ESA alert email.

 

This meant that all we had to do was map the usernames from the log events to the appropriate email addresses, enrich the events and/or alerts with those email addresses, and then customize the email notification using that information.  Mapping the usernames to email addresses and adding this information to events/alerts could have been accomplished in a couple of different ways - either a custom Feed (Live: Create a Custom Feed) or an In-Memory Table (Alerting: Configure In-Memory Table as Enrichment Source).  For this customer, the In-Memory Table was the preferred option because it would not create unnecessary meta in their environment.
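For reference, here's a minimal sketch of what that username-to-email CSV might look like (the column names are illustrative - the username column just needs to join against the meta key used in the alert, e.g. user_dst, and the email column against whatever your notification script references):

cat > user_emails.csv <<'EOF'
user_dst,email
jdoe,jdoe@example.com
asmith,asmith@example.com
EOF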

 

We added the CSV containing the usernames and email addresses as an enrichment source:

 

...then added that enrichment to the ESA alert:

 

With these steps done, we triggered a couple of alerts to see exactly what the raw output looked like, specifically how the enrichment data was included.  The easiest way to find raw alert output is within the Respond module by clicking into the alert and looking for the "Raw Alert" pane:

 

Armed with this information, we were then able to write the script (copy/pasting from the articles linked above and modifying the details) to extract the email address and use that as the "to_addr" for the email script (also attached at the bottom of this post):

#!/usr/bin/env python
from smtplib import SMTP
import datetime
import json
import sys

def dispatch(alert):
    """
    The default dispatch just prints the 'last' alert to /tmp/esa_alert.json. Alert details
    are available in the Python hash passed to this method e.g. alert['id'], alert['severity'],
    alert['module_name'], alert['events'][0], etc.
    These can be used to implement the external integration required.
    """

    with open("/tmp/esa_alert.json", mode='w') as alert_file:
        alert_file.write(json.dumps(alert, indent=True))

def read():
    #Parameter
    smtp_server = "<your_mail_relay_server>"
    smtp_port = "25"
    # "smtp_user" and "smtp_pass" are necessary
    # if your SMTP server requires authentication
    # used in "smtp.login()" below
    #smtp_user = "<your_smtp_user_name>"
    #smtp_pass = "<your_smtp_user_password>"
    from_addr = "<your_mail_sending_address>"
    missing_msg = ""
    to_addr = ""  #defined from enrichment table

    # Get data from JSON
    esa_alert = json.loads(open('/tmp/esa_alert.json').read())
    #Extract Variables (Add as required)
    try:
        module_name = esa_alert["module_name"]
    except KeyError:
        module_name = "null"
    try:
         to_addr = esa_alert["events"][0]["user_emails"][0]["email"]
    except KeyError:
         missing_msg = "ATTN:Unable to retrieve from enrich table"
         to_addr = "<address_to_send_to_when_enrichment_fails>"
    try:
        device_host = esa_alert["events"][0]["device_host"]
    except KeyError:
        device_host = "null"
    try:
        service_name = esa_alert["events"][0]["service_name"]
    except KeyError:
        service_name = "null"
    try:
        user_dst = esa_alert["events"][0]["user_dst"]
    except KeyError:
        user_dst = "null"
    # Sends Email
    smtp = SMTP()
    smtp.set_debuglevel(0)
    smtp.connect(smtp_server,smtp_port)

    date = datetime.datetime.now().strftime( "%m/%d/%Y %H:%M" ) + " GMT"
    subj = "Login Attempt on " + ( device_host )
    message_text = ("Alert Name: \t\t%s\n" % ( module_name ) +
        " \t\t%s\n" % ( missing_msg ) +
        "Date/Time : \t%s\n" % ( date  )  +
        "Host: \t%s\n" % ( device_host ) +
        "Service: \t%s\n" % ( service_name ) +
        "User: \t%s\n" % ( user_dst )
    )

    msg = "From: %s\nTo: %s\nSubject: %s\nDate: %s\n\n%s\n" % ( from_addr, to_addr, subj, date, message_text )
    # "smtp.login()" is necessary if your
    # SMTP server requires authentication
    #smtp.login(smtp_user,smtp_pass)
    smtp.sendmail(from_addr, to_addr, msg)
    smtp.quit()

if __name__ == "__main__":
    dispatch(json.loads(sys.argv[1]))
    read()
    sys.exit(0)

 

And the result, after adding the script as a notification option within the ESA alert:

-----------------------------

 

Of course, all of this can and should be modified to include whatever information you might want/need for your use case.

The RSA NetWitness Platform has multiple new enhancements as to how it handles Lists and Feeds in v11.x.  One of the enhancements introduced in the v11.1 release was the ability to use Context Hub Lists as Blacklist and/or Whitelist enrichment sources in ESA alerts.  This feature allows analysts and administrators a much easier path to tuning and updating ESA alerts than was previously available.

 

In this post, I'll be explaining how you can take that one step further and create ESA alerts that automatically update Context Hub Lists that can in turn be used as blacklist/whitelist enrichment sources in other ESA alerts.  The capabilities you'll use to accomplish this will be the ESA's script notifications, the ESA's Enrichment Sources and the Context Hub's List Data Source.

 

Your first step is to determine what kind of data you want to put into the Context Hub List.  For my test case I chose source and destination IP addresses.  Your next step is to determine where this List should live so that the Context Hub can access it.  The Context Hub can pull Lists either via HTTP, HTTPS, or from its local file system on the ESA appliance - for my test case I chose the local filesystem.

 

With that decided, your next step is to create the file that will become the List - the Context Hub looks within the /var/netwitness/contexthub-server/data directory on the ESA, so you'll create a CSV file in this location and add headers to help you (and others) know what data the List contains:
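For example (a sketch - the header names here are assumptions based on the columns discussed later in this post):

cat > /var/netwitness/contexthub-server/data/esaList.csv <<'EOF'
ip_src,ip_dst,date_added,source_alert
EOF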

 

**NOTE** Be sure to make this CSV writeable for all users, e.g.:

# chmod 666 esaList.csv

 

Next, add this CSV to the Context Hub as a Data Source.  In Admin / Services / Contexthub Server / Config --> Data Sources, choose List:

 

Select "Local File Store," then give your List a name and description and choose the CSV from the dropdown:

 

If you created headers in the CSV, select "With Column Headers" and then validate that the Context Hub can see and read your file.  After validation is successful, tell the Context Hub what types of meta are in each column, whether to Append to or Overwrite values in the List when it updates, and also whether to automatically expire (delete) values once they reach a certain age (maximum value here is 30 days):

 

For my test case, I chose not to map the date_added and source_alert columns from the CSV to any meta keys, because I only want them for my own awareness to know where each value came from (i.e.: what ESA alert) and when it was added.  Also, I chose to Append new values rather than Overwrite, because the Context Hub List has built in functionality that identifies new and unique values within the CSV and adds only those to the List.  Append will also enable the List Value Expiration feature to automatically remove old values.

 

Once you have selected your options, save your settings to close the wizard.  Before moving on, there are a few additional configuration options to point out which are accessible through the gear icon on the right side of the page.  These settings will allow you to modify the existing meta mapping or add new ones, adjust the Expiration, enable or disable whether the List's values are loaded into cache, and most importantly - the List's update schedule, or Recurrence:

 

**NOTE** At the time of this writing, the Schedule Recurrence has a bug that causes the Context Hub to ignore any user-defined schedule, which means it will revert to the default setting and only automatically update every 12 hours.

 

With the Context Hub List created, you can move on to the script and notification template that you will use to auto-update the CSV (both are attached to this blog - you can upload/import them as is, or feel free to modify them however you like for your use cases / environment).  You can refer to the documentation (System Configuration Guide for RSA NetWitness Platform 11.x - Table of Contents) to add notification outputs, servers, and templates.

 

To test that all of this works and writes what you want to the CSV file (for my test case, IP source and destination values), create an ESA alert that will fire with the data points you want to capture, and then add the script notification, server, and template to the alert:

 

After deploying your alert and generating the traffic (or waiting) for it to fire, verify that your CSV auto-updates with the values from the alert by keeping an eye on the CSV file.  Additionally, you can force your Context Hub List to update by re-opening your List's settings (the gear icon mentioned above), re-saving your existing settings, and then checking its values within the Lists tab:

 

 

You'll notice that in my test case, my CSV file has 5 entries in it while my Context Hub List only has 3 - this is a result of the automatic de-duplication mentioned above; the List is only going to be Appending new and unique entries from the CSV.

 

Next up, add this List as an Enrichment Source to your ESA.  Navigate to Configure / ESA Rules --> Settings tab / Enrichment Sources, and add a new Context Hub source:

 

In the wizard, select the List you created at the start of this process and the columns that you will want to use within ESA alerts:

 

With that complete, save and exit the wizard, and then move on to the last step - creating or modifying an ESA alert to use this Context Hub List as a whitelist or blacklist.

 

Unless your ESA alert requires advanced logic and functionality, you can use the ESA Rule Builder to create the alert.  Within your alert statement, build out the alert logic and add a Meta Whitelist or Meta Blacklist Condition, depending on your use case:

 

Select the Context Hub List you just added as an Enrichment Source:

 

Select the column from the Context Hub List that you want to match against within your alert:

 

Lastly, select the NetWitness meta key that you want to match against it:

 

You can add additional Statements and additional blacklists or whitelists to your alert as your use case dictates.  Once complete, save and deploy your alert, and then verify that your alerts are firing as expected:

 

And finally, give yourself a pat on the back.

With all the recent blogs from Christopher Ahearn about creating custom lua parsers, some folks who try their hand at it may find themselves wondering how to easily and efficiently deploy their new, custom parsers across their RSA NetWitness environment.

 

Manually browsing to each Decoder's Config/Parsers tab to upload there will quickly become frustrating in larger or distributed environments with more than one Decoder.

 

Manually uploading to a single Decoder and then using the Config/Files tab's Push option would help eliminate the need to upload to every single Decoder, but you would still need to reload each Decoder's parsers.  While this could, of course, be scripted, I believe there is a simpler, easier, and more efficient option available.

 

Not coincidentally, that option is the title of this blog. We can leverage the Live module within the NetWitness UI to deploy custom parsers across entire environments and automatically reload each Decoder's parsers in the process.  To do this, we will need to create a custom resource bundle that mimics an OOTB Live resource.

 

First, let's take a look at one of the newer lua parsers from Live to see how it's packaged.  We'll select one parser and then choose Package --> Create to generate a resource bundle.

 

In this ZIP's top-level directory, we see a LUAPARSER folder and a resourceBundleInfo.xml file.

 

Navigating down through the LUAPARSER folder, we eventually come to another ZIP file:

 

This teamviewer.zip contains an encrypted lua parser and a token to allow NetWitness Decoders to decrypt it (FYI - you do not need to encrypt your custom lua parsers).

 

The main takeaway from this is that when we create our custom resource bundle, we now know to create a directory structure like in the above screenshot, and that our custom lua parser will need to be packaged into a ZIP file at the bottom of this directory tree.

 

Next, let's take a look at the resourceBundleInfo.xml file in the top-level directory of the resource bundle.  This XML is the key to getting Live to properly identify and deploy our custom lua parser.

 

Everything that we really need to know about this XML is in the <resourceInfo> section.

 

A common or friendly name for our parser:

<displayName>teamviewer</displayName>

 

The name of the ZIP file at the bottom of the directory tree:

            <fileName>teamviewer.zip</fileName>

 

The full path of this ZIP file:

            <filePath>LUAPARSER/0.1/teamviewer.zip</filePath>

 

The version number (which can really be anything you want, as long as it's reflected accurately in the filePath):

            <productionVersion>0.1</productionVersion>

 

The resourceType line is the name of the top-level folder in the resource bundle (you shouldn't need to change this):

            <resourceType>LUAPARSER</resourceType>

 

The typeTitle (which you also shouldn't change):

            <typeTitle>Lua Parser</typeTitle>

 

And lastly the uuid, which is how Live and the NetWitness platform identify Live resources:

            <uuid>e1a06b9a-db6b-45fd-85a3-6074229d8e02</uuid>

 

Modifying everything in this file should be pretty straightforward - you'll simply want to modify each line to reflect your parser's information. And for the uuid, we can simply create our own - but don't worry, it doesn't need to be anywhere near as long or complex as a Live resource uuid.

 

Now that we know what the structure of the resource bundle should look like, and what information the XML needs to contain, we can go ahead and create our own custom resource bundle.
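From the command line, putting the bundle together might look something like this (a sketch - the parser name, version, and uuid are illustrative and need to match whatever you put in your resourceBundleInfo.xml):

# package the custom parser the same way Live does
mkdir -p myparser_bundle/LUAPARSER/0.1
zip -j myparser_bundle/LUAPARSER/0.1/ELF_File.zip ELF_File.lua

# drop the edited resourceBundleInfo.xml at the top level, then zip up the whole bundle
cp resourceBundleInfo.xml myparser_bundle/
cd myparser_bundle && zip -r ../ELF_File_resourceBundle.zip LUAPARSER resourceBundleInfo.xml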

 

Here's what a completed custom resource bundle looks like, using one of Chris Ahearn's parsers as an example: What's on your wire: Detect Linux ELF files:

 

---

---

 

With the custom bundle packaged and ready to go, we can go into Live, select Package --> Deploy, browse to our bundle, and simply step through the process, deploying to as many or as few of our Decoders as we want:

---

---

---

 

For confirmation, we can browse to any of our Decoders at Admin --> Services and see our custom parser deployed and enabled in the Config/General tab:

 

Lastly, for those who might have multiple custom resources they want to deploy at once in a single resource bundle, it's just a matter of adjusting the resourceBundleInfo.xml file to reflect each resource's name, version, path, and making sure each uuid is unique within the resource bundle, e.g.: uuid1, uuid2, uuid3, etc:

---

 

You can find a resource bundle template attached to this blog.

 

Happy customizing, everybody!

The Respond Engine in 11.x contains several useful pivot points and capabilities that allow analysts and responders to quickly navigate from incidents and alerts to the events that interest them.

 

In this blog post, I'll be discussing how to further enable and improve those pivot options within alert details to provide both more pivot links as well as more easily usable links.

 

During the incident aggregation process, the scripts that control the alert normalizations create several links (under Related Links) that appear within each alert's Event Details page.

 

These links allow analysts to copy/paste the URI into a browser and pivot directly to the events/session that caused the alert, or to an investigation query against the target host. 

 

What we'll be doing here is adding additional links to this Related Links section to allow for more pivot options, as well as adding the protocol and web server components to the existing URI in order to form a complete URL.

 

The files that we will be customizing for the first step are located on the Node0 (Admin) Server in the "/var/netwitness/respond-server/scripts" directory:

  • normalize_core_alerts.js
  • normalize_ma_alerts.js

 

(We will not be modifying the normalize_ecat_alerts.js or normalize_wtd_alerts.js scripts because the Related Links for those pivot you outside of the NetWitness UI.)

 

As always, back up these files before committing any changes and be sure to double-check your changes for any errors.

 

Within each of these files, there is an exports.normalizeAlert function:

 

At the end of this function, just above the "return normalized;" statement, you will add the following lines of code:

 

//copying additional links created by the utils.js script to the event's related_links
for (var j = 0; j < normalized.events.length; j++) {
    if (normalized.related_links) {
        normalized.events[j].related_links = normalized.events[j].related_links.concat([normalized.related_links]);
    }
}

 

 

So the end of the exports.normalizeAlert function now looks like this:

 

Once you have done this, you can now move on to the next step in this process.  This step will require modification of 3 files - the two we have already changed plus the utils.js script - all still located in the "/var/netwitness/respond-server/scripts" directory:

  • normalize_core_alerts.js
  • normalize_ma_alerts.js
  • utils.js

 

Within each of these files, search for "url:" to locate the statements that generate the URIs in Related Links.  You will be modifying these URIs into complete URLs by adding "https://<your_UI_IP_or_Hostname>/" to the beginning of the statement.

 

For example, this: 

 

...becomes this:

 

Do this for all of the "url:" statements, except this one in "normalize_core_alerts.js," as this pulls its URI / URL from a function in the script that we are already modifying:

 

Once you have finished modifying these files and double-checking your work for syntax (or other) errors, restart the Respond Server (systemctl restart rsa-nw-respond-server) and begin reaping your rewards:

 
