Eric Partington

Sigma for your SIEM

Posted by Eric Partington Employee Apr 8, 2019

Over the last year, a few trends have emerged in detection-ruleset sharing circles. Standards, or common formats, for sharing detection rulesets have emerged as the de facto way teams communicate rules that are then converted into local technologies:

 

  • Yara for file-based detections
  • Snort/Bro/Zeek rules for network-based detections
  • Sigma for SIEM-based detections

 

Along with MITRE ATT&CK, these appear to form a consistent, common foundation for sharing detection methodologies.

 

Given that, taking a shot at using Sigma to create RSA NetWitness rules based on the rulesets in the GitHub repo was the next logical step.  The hard work of creating the backend and the initial field mappings was done by @tuckner; my work was just adding a few additional field mappings and creating a wrapper script to make running the rules easier.

 

There are still some issues in the conversion script that I have noticed, and not all capabilities in Sigma have been ported over (or can be ported over programmatically), but this is enough of a start to get you on your way to developing additional rulesets with these capabilities.

 

*** <disclaimer>

Please note this is not an official RSA product; this is an attempt to start converting these rules into something NetWitness can understand. There will be mistakes and errors in this community-developed tool, so feel free to contribute fixes and enhancements to the Sigma project to make it better and more accurate.

</disclaimer> ***

 

You will need to install Python 3 to make the Sigmac tool run. NetWitness appliances don't have the right version of Python, so you will need somewhere else to install it. These are the instructions I fumbled through to make it work:

 

https://github.com/epartington/rsa_nw_sigma_wrapper/blob/master/install%20python3.txt

 

Once you have the tool running, take a look at the rules in the Sigma repo to see which ones you want to take a crack at converting.

 

Those rules exist here:

https://github.com/Neo23x0/sigma/tree/master/rules

 

The tool you will use to convert the rules is sigmac, which lives under tools/sigmac.

The backend you will refer to is netwitness, which lives under tools/sigma/backends.

The last item you need to know about is the template used by the backend to convert the rule, located at tools/config/netwitness.yml.

 

Running the command on a single file looks something like this:

python36 sigmac -t netwitness ../rules/network/net_mal_dns_cobaltstrike.yml
(query contains 'aaa\.stage\.', 'post\.1')

 

You can use this to run individual conversions, but what if you want to bulk convert all the rules in a folder?

This wrapper script will help you do that. Place it in the root folder and adjust the directory paths as needed; it outputs the name of each file as well as its conversion so that you know which file you are converting.

 

https://github.com/epartington/rsa_nw_sigma_wrapper/blob/master/sigma-wrapper.sh
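The linked wrapper is a small shell script; if you prefer Python, the same idea looks roughly like the sketch below (a sketch only — it assumes you run it from the sigma tools directory with the python36 binary, exactly like the single-file example above):

#!/usr/bin/env python3
# Minimal bulk-conversion sketch (the linked sigma-wrapper.sh is the real thing).
import subprocess
from pathlib import Path

RULES_DIR = Path("../rules")  # adjust to where your sigma rules checkout lives

for rule in sorted(RULES_DIR.rglob("*.yml")):
    print(rule)  # echo the file name so you know which rule produced which query
    subprocess.run(["python36", "sigmac", "-t", "netwitness", str(rule)])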

 

Which gets you something like this:

 

/root/sigma/sigma-master/rules/windows/builtin/win_susp_sdelete.yml
((device.class='windows hosts') && (event.source='microsoft-windows-security-auditing') && (reference.id = '4656', '4663', '4658') && (obj.name contains '.AAA', '.ZZZ'))
/root/sigma/sigma-master/rules/windows/builtin/win_susp_security_eventlog_cleared.yml
((device.class='windows hosts') && (event.source='microsoft-windows-security-auditing') && (reference.id = '517', '1102'))
/root/sigma/sigma-master/rules/windows/builtin/win_susp_svchost.yml

 

Some items to be aware of:

  • IP addresses appear to be quoted, which should not occur in current NetWitness query syntax
  • Keep an eye on regex usage
  • I haven't checked too far into the escaping of slashes for importing via the UI vs. the .nwr method.  Whichever method you use, be careful that the right number of slashes is respected.

 

So far this looks like a useful method to add a bunch of current SIEM detections to the RSA NetWitness Platform. Feel free to test and contribute to the converter, field mappings, or other functions if you find this useful.

RSA NetWitness has a number of integrations with threat intel data providers, but two that I have come across recently were not listed (MISP and Minemeld), so I figured it would be a good challenge to see if they could be made to provide data in a way that NetWitness understood.

 

Current RSA Ready Integrations

https://community.rsa.com/community/products/rsa-ready/rsa-ready-documentation/content?filterID=contentstatus%5Bpublished%5D~objecttype~objecttype%5Bdocument%5D&filterID=contentstatus%5Bpublished%5D~tag%5Bthreat+intel%5D&sortKey=contentstatus%5Bpublished%5D~subjectAsc&sortOrder=1

 

MISP

The MISP server can be installed in a few different ways:

https://www.misp-project.org/

 

VMware image, Docker image, or a native OS install are all available (the VMware image worked best for me):

https://www.circl.lu/misp-images/latest/

 

Authenticate and set up the initial data feeds into the platform.

Set the schedule to get them polling for new data.

 

Once the feeds are created and being pulled in, you can look at the attributes to make sure you have the data you expect.

 

Test the API calls using PyMISP via a Jupyter Notebook (a minimal sketch of the core query follows the list below):

https://github.com/epartington/rsa_nw_misp/blob/master/get-misp.ipynb

  • You can edit the notebook code to change the interval of data to pull back (last 30 days, all data, etc.) to limit the impact on the MISP server
  • You can change the indicator type (ip-dst, domain, etc.) to pull back the relevant columns of data
  • You can change the column data to make sure you have what you need as other feed data
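For reference, the core of that notebook boils down to something like the sketch below (the URL and key are placeholders, and the search call assumes the PyMISP attribute-search API of the version installed later in this post):

#!/usr/bin/env python3
# Minimal sketch of the notebook logic; URL and API key are placeholders.
from pymisp import PyMISP

misp = PyMISP("https://misp.local", "YOUR_API_KEY", ssl=False)

# Pull ip-dst attributes from the last 30 days to limit the impact on the MISP server
result = misp.search(controller="attributes", type_attribute="ip-dst", last="30d")

for attr in result.get("response", {}).get("Attribute", []):
    # event id, type and value map to the columns used in the feed CSV
    print(attr["event_id"], attr["type"], attr["value"])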

 

Once that checks out and the notebook produces the output data you want, you can add the python script to the head server of NetWitness.

 

Install PyMISP on the head server of the NetWitness system so that you can crontab the query.

  • Install PyMISP using PIP

(keep in mind that updating the code on the head server could break things so be careful and test early and often before committing this change in production)

yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm   # enable the EPEL repo
yum install python-pip
OWB_FORCE_FIPS_MODE_OFF=1 python                     # sanity-check that python runs with FIPS off
OWB_FORCE_FIPS_MODE_OFF=1 pip install pymisp
OWB_FORCE_FIPS_MODE_OFF=1 pip install --upgrade pip
OWB_FORCE_FIPS_MODE_OFF=1 ./get-misp.py              # test run of the exported script
yum repolist
vi /etc/yum.repos.d/epel.repo                        # change enabled from 1 to 0

Make sure you disable the EPEL repo after installing so that you don't create package-update issues later.

 

Now set up the query that is needed in a script (export the Jupyter notebook as a python script):

https://github.com/epartington/rsa_nw_misp/blob/master/get-misp.py

 

Crontab the query to schedule it (the OWB_FORCE_FIPS_MODE_OFF variable is required to work around FIPS restrictions that seem to break a number of script-related items in python):

23 3 * * * OWB_FORCE_FIPS_MODE_OFF=1 /root/rsa-misp/get-misp.py > /var/lib/netwitness/common/repo/misp-ip-dst.csv

 

Now set up the NetWitness recurring feed to pull from the local feed location.

Map the ip-dst values (for this script) to the 3rd column and the other columns as required.

 

 

Minemeld


Minemeld is another free intel aggregation tool from Palo Alto Networks and can be installed many ways (I tried a number of installs on different Ubuntu OSes and had difficulties); the one that worked best for me was a Docker image.

https://www.paloaltonetworks.com/products/secure-the-network/subscriptions/minemeld

https://github.com/PaloAltoNetworks/minemeld/wiki

 

Docker image that worked well for my testing

https://github.com/jtschichold/minemeld-docker

 

docker run -it --tmpfs /run -v /somewhere/minemeld/local:/opt/minemeld/local -p 9443:443 jtschichold/minemeld

To make it run as a daemon after testing, add the -d flag so it continues running after you exit the terminal.
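For example, the same command from above daemonized:

docker run -d -it --tmpfs /run -v /somewhere/minemeld/local:/opt/minemeld/local -p 9443:443 jtschichold/minemeld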

 

After installing (if you do this right, you can get a certificate included in the initial build of the container, which helps with certificate trust to NetWitness), you will log in and set up a new output action to take your feeds and map them to a format and output that RSA NetWitness can use.

 

This is the pipeline we will create, mapping a sample threat intel list to an output action that NetWitness can consume.

It is defined by editing the yml configuration file (specifically, this section creates the outboundfeedhcvalues node that NetWitness reads):

https://github.com/epartington/rsa_nw_minemeld/blob/master/minemeld-netwitness-hcvalues.yml

outboundfeedhcvalues:
    inputs:
        - aggregatorIPv4Outbound-1543370742868
    output: false
    prototype: stdlib.feedHCGreenWithValue

This is a good start for how to create custom miners

https://live.paloaltonetworks.com/t5/MineMeld-Articles/Using-MineMeld-to-Create-a-Custom-Miner/ta-p/227694

 

Once created and working you will have a second miner listed and the dashboard will update

 

You can test the feed output using a direct API call like this via the browser

https://192.168.x.y:9443/feeds/"$feed_name"?tr=1&v=csv&f=indicator&f=confidence&f=share_level&f=sources

The query parameters are explained here:

https://live.paloaltonetworks.com/t5/MineMeld-Articles/Parameters-for-the-output-feeds/ta-p/146170

 

In this case:

tr=1

Translates IP ranges into CIDRs. This can also be used with v=json and v=csv.

v=csv

Returns the indicator list in CSV format.

 

The list of attributes is specified by using the parameter f one or more times. The default name of the column is the name of the attribute; to specify a different column name, add |column_name to the f parameter value (e.g. f=indicator|ip renames the indicator column to ip).

 

The h parameter controls generation of the CSV header. When h=0, the header is not generated. Default: set.

 

Encoding is UTF-8. By default no UTF-8 BOM is generated. If ubom=1 is added to the parameter list, a UTF-8 BOM is generated for compatibility.

 

The f parameters are the column names pulled from the feed.

Testing this command drops a file in your browser so you can confirm you have the data and columns that you want.

 

Once you are confident in the process and the output format, you can script and crontab the output to drop into the local feed location on the head server (I did this because I couldn't figure out how to get the self-signed certificate from the Docker image accepted).

https://github.com/epartington/rsa_nw_minemeld/blob/master/script-rsa-minemeld.sh

# 22 3 * * * /root/rsa-minemeld/script-rsa-minemeld.sh
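If you prefer Python for this step, a rough equivalent of that shell script might look like the sketch below (hostname, feed name and output path are examples; verify=False is the blunt workaround for the container's self-signed certificate, assuming the requests library is available):

#!/usr/bin/env python3
# Rough Python equivalent of script-rsa-minemeld.sh (names and paths are examples).
import requests

FEED_URL = "https://192.168.x.y:9443/feeds/outboundfeedhcvalues"
PARAMS = {
    "tr": 1,                                                     # ranges -> CIDRs
    "v": "csv",                                                  # CSV output
    "f": ["indicator", "confidence", "share_level", "sources"],  # feed columns
}

resp = requests.get(FEED_URL, params=PARAMS, verify=False)  # self-signed cert
resp.raise_for_status()

with open("/var/lib/netwitness/common/repo/minemeld-ip.csv", "w") as out:
    out.write(resp.text)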

Now create the same kind of local recurring feed to pull the information in as feed data on your decoders.

Define column 1 as the index column for the IP in CIDR notation and map the other columns as required.

 

Done

 

Now we have a pipeline for two additional threat data aggregators that you may have a need for in your environment.

This is a collection of ESA rules that create persisted in-memory tables for various scenarios.  Hopefully they are useful in themselves and also serve as templates for future ideas.

 

GitHub - epartington/rsa_nw_esa_whatsnew: collection of ESA rules for whats new stuff 

 

  • New JA3 hash
  • New SSH user agent
  • New useragent
  • New src MAC family
  • New certificate CA
  • New certificate CA (Endpoint)

 

These are advanced ESA rules, so deploying them requires copying and pasting the rule text into the advanced editor.

 

These can also be tuned to learn more (a longer learning window) so that more data is added to the known window of the ESA rule.  Just be careful about potential performance issues if you make the window too long for your environment.

 

Recently, a question came from a customer who wanted to know if it was possible to alert when a new device.ip started logging to RSA NetWitness.  Thinking about it for a second, it seemed like a good test of a new ESA template I had been working with.

 

The rule, located here, does just that:

GitHub - epartington/rsa_nw_esa_whatsnewdeviceip: ESA rule to indicate when a new device type is seen 

 

Add this rule in ESA in the advanced editor to create the rule.

 

It works as follows:

A learning window is created with the timer in the rule (1 day by default).

During that learning window, new device.ip + device.type combinations are added to the window to build a known list of devices.

Once the learning window has expired, the system alerts on any new combination of device.ip and device.type seen after that.
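As a plain-Python illustration of that learn-then-alert behavior (a toy sketch only; the actual rule is Esper EPL using a named window, not this code):

# Toy illustration of the ESA rule's learn-then-alert logic.
import time

LEARN_SECONDS = 86400          # 1-day learning window, matching the rule's default
known = set()                  # stands in for the ESA named window
start = time.time()

def on_event(device_ip, device_type):
    key = (device_ip, device_type)
    if time.time() - start < LEARN_SECONDS:
        known.add(key)         # learning phase: silently record combinations
    elif key not in known:
        known.add(key)         # remember it so we only alert once per combination
        print("ALERT: new logging device", key)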

 

Customizations you might want to make include changing the learning window timer from 1 day to something longer (5 days, potentially).

 

The data is kept in a named window and persisted to a JSON file on the ESA disk system in case there are restarts or service changes.

 

Alerts are created in ESA/Respond and can then be assigned as work items to validate that the new system was on-boarded properly and configured appropriately before closing.

 

Josh Randall had a recent post that showed how to use the resourceBundle package to create a custom package for content deployment.

Leveraging RSA Live to Deploy Custom Parsers in Large Environments 

 

I took the idea from Josh a step further and created a script that lays down the structure of the resourceBundle for you, then a second script that zips up the appropriate content to create your resource bundle.  This should remove the need to hand-edit the XML files to create the proper linkages.

 

The script is hosted here:

GitHub - epartington/rsa_nw_script_resourcebundle: Script to create a resource bundle for netwitness content 

 

Run the script from the site above in the folder where you want the bundle created.

Follow the README.md to add supported content to the folder structure it creates.

Content is placed in the version folder
-------------------
## Currently working in this script:
APPLICATION RULES
LUAPARSERS
-------------------
## Not working/Not implemented:
All other folders

### APPLICATION RULES
--------------------
Requires clear-text .nwr files to be placed in the version folder.
If there is more than 1 line per .nwr file, the script will split the file into multiples (one line per file) and rename the original file to .multiline.

### LUAPARSERS
--------------------
Requires the lua parser in the version folder.
The script will zip the lua file up.
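As a simplified illustration of the .nwr splitting behavior described above (a sketch, not the actual script logic):

# Split a multi-line .nwr file into one rule per file and keep the original
# as <name>.nwr.multiline, as the README describes.
import os

def split_nwr(path):
    with open(path) as f:
        rules = [line for line in f.read().splitlines() if line.strip()]
    if len(rules) <= 1:
        return                              # single-rule files are left alone
    base, _ = os.path.splitext(path)
    for i, rule in enumerate(rules, 1):
        with open("{}-{}.nwr".format(base, i), "w") as out:
            out.write(rule + "\n")
    os.rename(path, path + ".multiline")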

 

Run the resourceBundleZipper.py script to create the XML and zip file for upload via the RSA NW UI > Configure > Deploy package.

 

Now you are able to upload content in one file to many locations in the NW environment, saving you time.

 

 

 

 

A couple of recent customer interactions sent me down the path of designing better whitelisting options for well-known services that generate a lot of benign traffic.  Many customers have standardized on Office365, Windows 10 and Chrome/Firefox in the enterprise.  As a result, the traffic that NetWitness captures includes a lot of data for these services, so enabling the ability to filter this data when needed is important.

 

The Parsers

The end result of this effort is 3 parsers that allow filtering of Office365 traffic, Windows 10 endpoint/telemetry traffic, and generic traffic (Chrome/Firefox, etc.) in the NetWitness platform.

 

The data for filtering (metavalues) is written by default to the filter key and looks like this:

 

With these metavalues, analysts are able to select and deselect meta for these services to reduce the benign signals for these services from investigations and charting to focus more on the outliers.

filter!='whitelist'

filters all data tagged as whitelist from the view

 

filter!='windows10_connection'

filters all traffic related to Windows 10 connection endpoints (telemetry, etc.) captured from these sites:

windows-itpro-docs/windows/privacy at master · MicrosoftDocs/windows-itpro-docs · GitHub 

 

filter !='office365'

filters traffic related to all of the Office365 endpoints published by this web service:

Office 365 IP Address and URL Web service | Microsoft Docs 

 

filter !='generic'

filters traffic related to generic endpoints, including Chrome updates from gvt1.com and gvt2.com as well as other miscellaneous endpoints related to Windows telemetry (V8/V7 etc. and others)

 

Automating the Parsers

To take this one step further and make the process of creating the lua parsers easier, a script was written for Office365 and Windows 10 to automate pulling the content down, altering the data, and outputting a parser ready to be used on log and packet decoders to flag traffic.
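To give a sense of the "pulling the content down" step, here is a rough sketch for Office365 using Microsoft's endpoints web service (the Microsoft Docs link above); the real script also massages the data and generates the parser, which is omitted here:

#!/usr/bin/env python3
# Rough sketch of fetching the Office365 endpoint list (parser generation omitted).
import json
import urllib.request
import uuid

URL = ("https://endpoints.office.com/endpoints/worldwide"
       "?clientrequestid=" + str(uuid.uuid4()))

data = json.load(urllib.request.urlopen(URL))

# Collect the URL/hostname entries; these are what end up in the lua parser.
hosts = sorted({u for entry in data for u in entry.get("urls", [])})
print("\n".join(hosts))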

 

Hopefully this can be rolled into a regular content pipeline to update the parsers periodically and pick up the latest endpoints as they are added (for instance, when a new Windows 10 build comes out there will most likely be an update to the endpoints).

 

Scripts, lua parsers and descriptions are listed below and will be updated as issues pop up.

 

For each parser that has an update mechanism, the python script can be run to generate a new parser for use in NetWitness (the parser version is set to the time the parser is built, so the UI tells you which version you have).

These parsers also serve as proofs of concept for other ideas that might need both exact and substring matches, for hostnames or other threat data.

 

Currently the parsers read from hostname related keys such as alias.host, fqdn, host.src, host.dst.

 

As always, this is POC code to validate ideas and potential solutions. Deploy code and test/watch for side effects such as blizzards, sandstorms, core dumps and other natural events.

 

GitHub - epartington/rsa_nw_lua_wl_O365: whitelist office365 traffic parser and script 

GitHub - epartington/rsa_nw_lua_wl_windows10: whitelist window 10 connection traffic parser and script 

GitHub - epartington/rsa_nw_lua_wl_generic: whitelist generic traffic parser 

 

There will also be a post shortly about using resourceBundles to generate a single zip file with this content to make uploading and management of this data easier.

There have been many improvements over the past several releases to the RSA NetWitness Platform on the log management side of the house to help reduce the number of unparsed or misparsed devices.  There are still instances where manual intervention is necessary, and a report such as the one provided in this blog could prove valuable to you.

 

This report provides visibility into 4 types of situations:

 

Device.IP with more than 1 device.type

Devices that have multiple parsers acting on them over this time period, sorted from most parsers per IP to least.

 

Unknown Devices

Unknown devices do not have a parser detected for them, or no parser is installed/enabled for them.

 

Device.types with word meta

Device types with word meta indicate that a parser has matched a header for that device but no payload (message body) has matched a parser entry.

 

Device.type with parseerror

Devices that are parsing meta for most fields but have parseerror meta for particular metakey data. This can indicate that the format of the data going into the key does not match the format of the key (an invalid MAC address into eth.src or eth.dst, which are MAC-formatted keys, or text into an IP key).

 

Some of these categories are legitimate but checking this report once a week should allow you to keep an eye on the logging function of your NetWitness system and make sure that it is performing at its best.

 

The code for the Report is kept here (in clear text format so you can look at the rule content without needing to import it into NetWitness):

GitHub - epartington/rsa_nw_re_logparser_health 

 

Here's a sample report output:

 

Most people don't remember the well-known port number for a particular network protocol. Sometimes we need to refer to an RFC to remember which port certain protocols normally run over.

 

In the RSA NetWitness UI, the well-known name for the protocol is presented, but when you drill on it you get the well-known port number.

 

This can be a little confusing at times if you aren't completely caffeinated.☕

 

Well, here's some good news: you can use the name of the service in your drills and reports with the following syntax:

 

Original method:

Service=123 

 

New method:

Service="NTP"

 

You may get an error about needing quotes around the word; however, the system still interprets the query correctly.

 

 

This also works in profiles:

 

And in the Reporting Engine as well:

 

Good luck using this new trick!

   

(P.S. you can also use AND instead of && and OR instead of ||)

RSA NetWitness v11.2 introduced a very useful improvement to the Investigation workflow with the Profile feature.  In previous versions a Profile could have a pre-query set for it along with meta and column groups, but you were locked into using only those two features unless you deactivated your profile.

 

With v11.2 you are able to keep the pre-query set from the profile and pivot to other meta and column groups.  This ability allows you to use Profiles as bookmarks or starting points for investigations or drills, along with the folders that can be set in the Profile section to help organize the various groups that frame investigations properly.

 

Below is a collection of the profiles as well as some meta and column groups to help collect various types of data or protocols together.

 

GitHub - epartington/rsa_nw_investigation_profiles 

 

Protocols

Medium

Log Device Classes

UEBA

 

Let me know if these work for you; I will be adding more to the GitHub site as they develop, so check back.

If you've ever wondered what levers you have available to pull when creating application rule logic, then this is your one-stop shop for an explanation.

 

There's a fully documented cheat sheet of the parameters you can use in application rules, located at the link below:

Application Rules Cheat Sheet 

 

There are some capabilities that I personally wasn't aware of, for example using ~ instead of not() to negate the contains/begins/ends functions; and I had forgotten about the ucount and unique operators that are available.

 

Also, v11.x introduced the ability to have metakeys on both the left and right side of operators (the table in that link explains which ones are available).

 

Overall, this is a good resource to bookmark if you are developing application rules in RSA NetWitness.

A recent customer question about alerting on Uptime values from the REST API got me digging into the Health and Wellness Policies for a better solution.

 

The request was to alert when the uptime value for specific device families was reset, indicating that something had occurred with the service to reset the uptime value.  Repeated resets of the uptime value could indicate an issue with the service that needs attention (core files created as a result of decoder service crashes were the root of this request).

 

Here is my solution:

  • Admin > Health and Wellness > Policies
  • Select the + and add a new policy for the service that you want to monitor
  • In this case the Archiver service is our example

  • Add a new Rule
  • The conditions:
    • Alarm = Regex match on .., .. seconds.*
    • Recovery = !Regex match on .., .. seconds.*

  • Save
  • Set your notification output at the bottom
  • Save and enable the policy at the top

 

Now you have a policy that alerts when the uptime is within the first minute of a restart (.. matches two digits, so 10-59 seconds) and recovers once the uptime no longer matches the pattern (when the seconds roll over into minutes and seconds).

 

Alarm

Recovery

 

 

Details on the pattern developed:

The uptime stat is the number of seconds followed by a comma, then the friendly breakdown of those seconds into years, months, weeks, days, hours, minutes and seconds.

.. = looks for 2 digits for the seconds (between 10-59 seconds after the service restarted)

, .. = looks for the same seconds value repeated after the comma

seconds.* = the word seconds and the trailing space in the value

When this pattern is matched (between 10-59 seconds after restart) there will be an alarm; it clears when the pattern is no longer matched (60+ seconds).
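To sanity-check the pattern, here is a quick illustration (the uptime strings are made-up samples of the "<seconds>, <friendly time>" format):

# Quick check of the Health & Wellness alarm pattern against sample uptime stats.
import re

alarm = re.compile(r".., .. seconds.*")

print(bool(alarm.match("42, 42 seconds ")))           # True  -> alarm raised
print(bool(alarm.match("61, 1 minute 1 second ")))    # False -> alarm recovers
print(bool(alarm.match("8, 8 seconds ")))             # False -> single digits slip by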

Eric Partington

Hunting in RDP Traffic

Posted by Eric Partington Employee Nov 12, 2018

I was just working in the NOC for HackFest 2018 in Quebec City (https://hackfest.ca/en/) and playing with RDP traffic to see who was potentially accessing remote systems on the network.  

 

This was inspired by this deck from BroCon (https://www.bro.org/brocon2015/slides/liburdi_hunting_rdp.pdf) and some recent enhancements to the RDP parser.

 

Recent enhancements to the RDP parser include extracting the screen resolution, username, hostname, certificate and other details.

 

With some simple charting language we can create a number of rules that look at various properties of RDP traffic based on direction (should you have RDP inbound from the internet? should you have RDP outbound to the internet?) as well as volume-based rules (which system has the most RDP session logins by unique username? which system connects to the most systems by distinct count of IP?).
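For example, the direction-based drills boil down to queries along these lines (illustrative only; service = 3389 identifies RDP, and the direction meta assumes the Traffic Flow lua parser is deployed):

service = 3389 && direction = 'outbound'

service = 3389 && direction = 'inbound'

The actual report rules are in the package linked below.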

 

The report language is hosted here; simply import it into your Reporting Engine and point it at your packet broker/concentrators.

GitHub - epartington/rsa_nw_re_rdp: RDP summary reports for hunting/identification 

 

Please let me know if there are modifications to the Report that make it more useful to you.

 

Rules included in the report:

  • most frequent RDP hostnames
  • most frequent RDP keyboard languages
  • least frequent RDP keyboard languages
  • Outbound/Inbound/Lateral RDP traffic
  • Most frequent RDP screen resolutions
  • Most frequent RDP Usernames
  • Usernames by distinct destination IP
  • RDP Hosts with more than 1 username from them

A couple of clients have asked about a generic ESA template that can be used to alert into ArcSight for correlation with other sources.  After some testing and configuration, this is the template that was created.  One thing that had us stuck for a short period of time was the timezone offset in the FreeMarker template needed to get ArcSight to read the time as UTC and apply the correct time offset.

 

Hopefully this helps others with this need.

 

<#include "macros.ftl"/>
CEF:0|RSA|NetWitness ESA|11.0|${moduleName}|${moduleName}|${severity}|<#list events as x>externalId=${x.sessionid!" "} proto=${x.ip_proto!" "} categoryOutcome=/Attempt categoryObject=Network categorySignificance=/Informational/Warning categoryBehavior=/Communicate host=<#if x.alias_host?has_content><@value_of x.alias_host /></#if> src=${x.ip_src!" "} spt=${x.tcp_srcport!" "} dhost=${x.host_dst!" "} dst=${x.ip_dst!" "} dpt=${x.tcp_dstport!" "} act=${x.action!" "} rt=${time?datetime?string("MMM dd yyyy HH:mm:ss z")} duser=${x.ad_username_dst!" "} suser=${x.ad_username_src!" "} filePath=${x.filename!" "} requestMethod=${x.action!" "} destinationDnsDomain=<#if x.alias_host?has_content><@value_of x.alias_host /></#if>  destinationServiceName=${x.service!" "}</#list> cs4=${moduleName} cs5=PROD cs6=MalwareCommunication

 

This CEF template is added to the Admin > System > Global Notifications > Templates tab and referenced in the ESA rules that need to alert out to Arcsight when they fire.

Background Information:

  • v10.6.x had a method in the UI to add a standalone NW head server for investigation purposes (and to help with DR scenarios) using legacy authentication (static local credentials).  
  • v11.x appeared to have removed that capability, which was blocking some of the larger upgrades; however, the capability actually still exists, it is just not presented in the UI as it was in v10.6.
  • Having a DR investigation server also provides analysts with continuous access to data during the major upgrade from v10.6.x to v11.2, which is incredibly beneficial.

 

Review the upgrade guide and the "Mixed Mode" notes at the link below for more details on the upgrade and running in mixed mode:

https://community.rsa.com/community/products/netwitness/blog/2018/10/18/running-rsa-netwitness-mixed-mode

 

If you spin up a DR v11.2 standalone NW server from the ISO/OVA, you can connect it to an existing set of concentrators using local credentials.  (Note: do NOT expect Live or ESA to function as they do on the actual node0 NW server.  This method gets you a window into the meta for investigation, reporting and dashboards only!)

 

Here's the steps you'll need to follow once you have your DR v11.2 NW server spun up:

 

Create local credentials to use for authentication with the concentrator(s) or broker(s) that you will connect to, under:

Admin > Services > <service> > Security

 

 

You will need to add some permissions to the aggregation role to allow the Event Analysis function to work:

Replicate the role and user to the other services that you will need to authenticate to.

 

Your 11.2 DR investigation head server can connect to a 10.6.6 Broker or Concentrator with the following:

 

Broker service > Explore

Select the broker

Right-click and select properties

Select add from the drop-down

Add the concentrators that need to be connected (as they were in 10.6).  Below are the ports required for the connection:

  • 50005 for Concentrators
  • 56005 for SSL to Concentrators
  • 50003 to Broker 
  • 56003 for SSL to Broker

 

device=<ip>:<port> username=<> password=<>
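For example, adding a concentrator over the non-SSL port might look like this (values are illustrative):

device=192.168.1.50:50005 username=aggregation password=Str0ngPassw0rd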

 

Click send.

 

You should get a successful connection and in the config section you will now see the aggregation connection setup:

 

Click Start aggregation and make sure Aggregate Autostart is checked:

 

Using this DR investigation server, you can follow the process below to help upgrade from v10.6.6 to v11.2+:

 

Initial State:

 

Upgrade the new Investigation Head:

 

Investigators now can use the 11.2 head to investigate without interruption during the production NW head server upgrade.

 

Upgrade the primary (node0) NW head server and ESA:

Upgrade the decoder/concentrator pairs:

Note: an outage will occur here for investigation as the stacks are upgraded

Now you'll be running v11.2 as you were in 10.6, with a DR investigation head server keeping your Investigate and Events views accessible.

Context menu actions have long been a part of the RSA NetWitness Platform. v11.2 brought a few nice touches to help manage the menu items as well as extend the functions into more areas of the product.

 

See here for previous information on the External Lookup options:

Context Menus - OOTB Options 

 

And these for Custom Additions that are useful to Analysts:

Context Menu - Microsoft EventID 

Context Menu - VirusTotal Hash Lookup 

Context Menu - RSA NW to Splunk 

Context Menu - Investigate IP from DNS 

Context Menu - Cymon.io 

 

As always access to the administration location is located here:

Admin > System > Context Menu Actions

 

The first thing you will notice is a bit of a different look, since a good deal of cleanup has been done in the UI.

 

Before we start trimming the menu items, here is what it looks like before the changes:

Data Science, Scan for Malware and Live Lookup are all candidates for reduction.

 

When you open an existing action or create a new one, you will also see some improvements.

It is no longer just a large block of text that can be edited if you know what and where to change, but a set of options to fill in to implement your custom action (or tweak existing ones).

 

You can switch to the advanced view to get back to the old freeform world if you want to.

 

Clean up

To clean up the menu for your analysts, consider disabling the following items.

If you don't have an RSA Warehouse installed, sort by Group Name, locate the Data Science group, and disable all four of its rules.

Disable any of the External Lookup items that are not used or not important for your analysts.

Scan for Malware - if you are logs-only, or have packets or endpoint but don't use Malware Analysis, this one is not needed.

Live Lookup - mostly doesn't provide value to analysts.

Now you should have a nice clean right click action menu available to investigators to do their job better and faster.
