All Places > Products > RSA NetWitness Platform > Blog > 2018 > October

Localized documents that were updated for Version 11.1 are posted in RSA Link for customers who speak Japanese, Spanish, German, and French. These are the locations.

I was recently working with Eric Partington, who asked if we could get Autonomous System Numbers from a recent update to GEOIP.  I believe at one point this was a feed, but it had been deprecated.  After a little research, I learned that an update had been made to the Lua libraries that exposed a new API function named geoipLookup, which gives us this information as well as some other details that might be of interest.  A few years ago, I painstakingly created a feed for my own use to map countries to continents.  I wish I had this function call back then.


The API call is as follows:



-- Examples:
-- local continent = self:geoipLookup(ip, "continent", "names", "en") -- string
-- local country = self:geoipLookup(ip, "country", "names", "en") -- string
-- local country_iso = self:geoipLookup(ip, "country", "iso_code") -- string "US"
-- local city = self:geoipLookup(ip, "city", "names", "en") -- string
-- local lat = self:geoipLookup(ip, "location", "latitude") -- number
-- local long = self:geoipLookup(ip, "location", "longitude") -- number
-- local tz = self:geoipLookup(ip, "location", "time_zone") -- string "America/Chicago"
-- local metro = self:geoipLookup(ip, "location", "metro_code") -- integer
-- local postal = self:geoipLookup(ip, "postal", "code") -- string "77478"
-- local reg_country = self:geoipLookup(ip, "registered_country", "names", "en") -- string "United States"
-- local subdivision = self:geoipLookup(ip, "subdivisions", "names", "en") -- string "Texas"
-- local isp = self:geoipLookup(ip, "isp") -- string ""
-- local org = self:geoipLookup(ip, "organization") -- string ""
-- local domain = self:geoipLookup(ip, "domain") -- string ""
-- local asn = self:geoipLookup(ip, "autonomous_system_number") -- uint32 16406
function parser:geoipLookup(ipValue, category, [name], [language]) end


As you know, we already get many of these fields.  Meta keys such as country.src, country.dst, org.src, and org.dst are probably well known to many analysts and used for various queries.  Eric had asked for 'asn' and, because I had tried it previously with a feed, I wanted to include 'continent' as well.  


So... I created a Lua parser to get this for me.  My tokens were meta callbacks for ip.src and ip.dst.


[nwlanguagekey.create("ip.src", nwtypes.IPv4)] = lua_geoip_extras.OnHostSrc,
[nwlanguagekey.create("ip.dst", nwtypes.IPv4)] = lua_geoip_extras.OnHostDst,


My intent is to build this parser to work on both packet and log decoders.  I had originally wanted to use another function call, but found this was not working properly on log decoders.  However, the meta callbacks of ip.src and ip.dst did work.  Now, with this in mind, I could leverage this parser on both packet and log decoders. :-)


The meta keys I was going to write into were as follows:


nwlanguagekey.create("asn.src", nwtypes.Text),
nwlanguagekey.create("asn.dst", nwtypes.Text),
nwlanguagekey.create("continent.src", nwtypes.Text),
nwlanguagekey.create("continent.dst", nwtypes.Text),


Since I was using ip.src and ip.dst meta, I wanted to apply the same source and destination meta for my asn and continent values.  


Then, I just wrote out my functions:


-- Get ASN and Continent information from ip.src and ip.dst
function lua_geoip_extras:OnHostSrc(index, src)
   local asnsrc = self:geoipLookup(src, "autonomous_system_number")
   local continentsrc = self:geoipLookup(src, "continent", "names", "en")

   if asnsrc then
      --nw.logInfo("*** ASN SOURCE: AS" .. asnsrc .. " ***")
      nw.createMeta(self.keys["asn.src"], "AS" .. asnsrc)
   end
   if continentsrc then
      --nw.logInfo("*** CONTINENT SOURCE: " .. continentsrc .. " ***")
      nw.createMeta(self.keys["continent.src"], continentsrc)
   end
end


function lua_geoip_extras:OnHostDst(index, dst)
   local asndst = self:geoipLookup(dst, "autonomous_system_number")
   local continentdst = self:geoipLookup(dst, "continent", "names", "en")

   if asndst then
      --nw.logInfo("*** ASN DESTINATION: AS" .. asndst .. " ***")
      nw.createMeta(self.keys["asn.dst"], "AS" .. asndst)
   end
   if continentdst then
      --nw.logInfo("*** CONTINENT DESTINATION: " .. continentdst .. " ***")
      nw.createMeta(self.keys["continent.dst"], continentdst)
   end
end


This was my first time using this new API call, and my mind was racing with ideas on how else I could use this capability.  The one that immediately came to mind was enriching meta when X-Forwarded-For or Client-IP meta exists.  If it does, it should be parsed into a meta key called "orig_ip" today, or "ip.orig" in the future.  The meta key "orig_ip" is formatted as Text, so I need to account for that by determining the correct HostType.  We don't want to pass a domain name when we are expecting to pass an IP address.  I can do that by importing the functions from 'nwll'.
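As a rough sketch of where that idea leads (nothing below is from the attached parser; the callback name, the choice of output key, and the exact return value checked from nwll's determineHostType are all assumptions for illustration):

```lua
local nwll = require('nwll')

-- Hypothetical callback for orig_ip (Text) meta.  Because orig_ip can hold a
-- hostname as well as an address, determine the host type first and only call
-- geoipLookup when the value is an IPv4 address.
function lua_geoip_extras:OnOrigIP(index, orig)
   -- confirm the exact return values of determineHostType against your nwll copy
   local hostType = nwll.determineHostType(orig)
   if hostType == "ipv4" then
      local asn = self:geoipLookup(orig, "autonomous_system_number")
      if asn then
         nw.createMeta(self.keys["asn.src"], "AS" .. asn)
      end
   end
end
```

The callback would then be registered in the token table alongside the others, e.g. [nwlanguagekey.create("orig_ip", nwtypes.Text)] = lua_geoip_extras.OnOrigIP.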


In the past, the only meta that could be enriched by GEOIP was ip.src and ip.dst (I have not tested ipv6.src or ipv6.dst).  Now, with this API call, I can apply the content of GEOIP to other IP-address-related meta keys.  I have attached the full parser to this post.  


Hope this helps others out there in the community and as always, happy hunting.



Background Information:

  • v10.6.x had a method in the UI to add a standalone NW head server for investigation purposes (and to help with DR scenarios) using legacy authentication (static local credentials).  
  • v11.x appeared to have removed that capability, which was blocking some of the larger upgrades; however, the capability actually still exists. It is just not presented in the UI as it was in v10.6.
  • Having a DR investigation server also gives analysts continuous access to data during the major upgrade from v10.6.x to v11.2, which is incredibly beneficial to have.


Review the upgrade guide and the "Mixed Mode" notes at the link below for more details on the upgrade and running in mixed mode:


If you spin up a DR v11.2 standalone NW server from the ISO/OVA, you can connect it to an existing set of concentrators using local credentials.  (Note: DO NOT expect that Live or ESA will function as they do on the actual node0 NW server.  This method gets you a window into the meta for investigation, reporting, and dashboards only!)


Here are the steps you'll need to follow once you have your DR v11.2 NW server spun up:


Create local credentials to use for authentication with the concentrator(s) or broker(s) that you will connect to under

Admin > Service > <service> > Security



You will need to add some permissions to the aggregation role to allow the Event Analysis function to work:

Replicate the role and user to the other services that you will need to authenticate to.


Your 11.2 DR investigation head server can connect to a 10.6.6 Broker or Concentrator with the following:


Broker service > Explore

Select the broker

Right-click and select Properties

Select "add" from the drop-down

Add the concentrators that need to be connected (as they were in 10.6).  Below are the ports that are required for the connection:

  • 50005 for Concentrators
  • 56005 for SSL to Concentrators
  • 50003 to Broker 
  • 56003 for SSL to Broker


device=<ip>:<port> username=<> password=<>
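For example, with hypothetical values (192.0.2.50 is a placeholder concentrator address, using the non-SSL concentrator port from the list above and the local account created earlier):

```
device=192.0.2.50:50005 username=aggregation_user password=<password>
```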


Click send.


You should get a successful connection and in the config section you will now see the aggregation connection setup:


Click Start aggregation and make sure Aggregate Autostart is checked:


Using this DR investigation server, you can follow the process below to help in upgrading from v10.6.6 to v11.2+:


Initial State:


Upgrade the new Investigation Head:


Investigators can now use the 11.2 head to investigate without interruption during the production NW head server upgrade.


Upgrade the primary (node0) NW head server and ESA:

Upgrade the decoder/concentrator pairs:

Note: an investigation outage will occur here as the stacks are upgraded

Now you'll be running v11.2 as you were in 10.6, with a DR investigation head server keeping your Investigation and Events views accessible.

MuddyWater is an APT group whose targets have mainly been in the Middle East, such as the Kingdom of Saudi Arabia, the United Arab Emirates, Jordan, and Iraq, with a focus on oil, military, telco, and government entities.


The group uses spear-phishing attacks as an initial vector. The email contains an attached Word document that tries to trick the user into enabling macros. The attachment's filename and content are usually tailored to the target, including the language used.


In the below example, we will look at the behavior of the following malware sample:

SHA-256: bfb4fc96c1ba657107c7c60845f6ab720634c8a9214943b5221378a37a8916cd

MD5: 16ac1a2c1e1c3b49e1a3a48fb71cc74f


Filetype: MS Word Document



Endpoint Behavior

This specific malware sample is for an Arabic-speaking victim in Jordan, where the filename "معلومات هامة.doc" translates to "important information.doc". Other variants contain content in other languages, such as Turkish and Urdu.


The file shows blurry text in Arabic, with a message telling the target to enable content (and therefore macros) to unlock the content of the document.


Once the user clicks on "Enable Content", we're able to see the following behaviors on RSA NetWitness Endpoint.


1- The user opens the file. In this case, the file was opened from the Desktop folder; if it had been opened from email, the parent process would have shown as "outlook.exe" instead of "explorer.exe"


2- The malware uses "rundll32.exe" to execute the dropped file (C:\ProgramData\EventManager.log), helping it evade detection


3- PowerShell is then used to decode the payload of another dropped file ("C:\ProgramData\WindowsDefenderService.ini") and execute it. Since the full arguments of the PowerShell command are captured, an analyst could use them to decode the content of the "WindowsDefenderService.ini" file for further analysis


4- PowerShell modifies the "Run" registry key to run the payload at startup


5- Scheduled tasks are also created 



After this, the malware only continues execution after a restart (this might be a layer of protection against sandboxes).


6- The infected machine is restarted


7- An additional PowerShell script, "a.ps1", is dropped


8- Some Windows security settings are disabled (such as Windows Firewall and antivirus)




By looking at the network activity on the endpoint, we can see that PowerShell has generated a number of connections to multiple domains and IPs (possible C2 domains).



Network Behavior

To look into the network part in more detail, we can leverage the captured network traffic on RSA NetWitness Network.


We can see, on RSA NetWitness Network, the communication from the infected machine ( to multiple domains and IP addresses over HTTP that match what has been originating from powershell on RSA NetWitness Endpoint.

We can also see that most of the traffic is targeting "db-config-ini.php". From this, it seems that the attacker has compromised different legitimate websites, and the "db-config-ini.php" file is owned by the attacker.


Having the full payload of the session on RSA NetWitness Network, we can reconstruct the session to confirm that it does in fact look like beaconing activity to a C2 server.



Even though the websites used might be legitimate (but compromised), we can still see suspicious indicators, such as:

  • POST request without a GET
  • Missing Headers
  • Suspicious / No User-Agent
  • High number of 404 Errors
  • ...





We can see how the attacker is using legitimate, trusted, and possibly white-listed modules, such as powershell and rundll32, to evade detection. The attacker is also using common file names for the dropped files and scripts, such as "EventManager" and "WindowsDefenderService" to avoid suspicion from analysts.


As shown in the below screenshot, even though "WmiPrvSE.exe" is a legitimate Microsoft file (it has a valid Microsoft signature, as well as a known, trusted hash value), due to its behavioral activity (as shown in the Instant IOC section) we're able to assign it a high behavioral score of 386.  It should also be noted that any of the suspicious IIOCs detected could trigger a real-time alert over syslog or e-mail for early detection, even though the attacker is using advanced techniques to avoid detection.




Similarly, on the network side, even though the attacker is leveraging legitimate (but compromised) sites and using standard, known protocols (HTTP) with encrypted payloads to avoid detection and suspicion, it is still possible to detect these suspicious behaviors using RSA NetWitness Network by looking for indicators such as POST with no GET, suspicious user agents, missing headers, or other anomalies.






The following are IOCs that can be used to look for activity from this APT in your environment.

This list is not exhaustive and is only based on what has been seen during this test.


Malware Hash

SHA-256: bfb4fc96c1ba657107c7c60845f6ab720634c8a9214943b5221378a37a8916cd

MD5: 16ac1a2c1e1c3b49e1a3a48fb71cc74f



  • ipripak,org


IP Addresses


RSA NetWitness gives you the ability to use remote Virtual Log Collectors (VLCs) to reduce your footprint and the number of ports required. RSA NetWitness can leverage different mechanisms to retrieve (pull) or send (push) logs from or to a log collector.


Multiple customers and RSA partners use the VLC to send logs from a remote location to a cloud or centralized infrastructure behind one or more firewalls in an isolated network. In an isolated network the VLC won't have any route to this central location, and the following article will help you configure your platform properly.


Before deploying your VLC, verify that the host configuration for your head unit is set to nw-node-zero:


When this is done, deploy your VLC in your virtual infrastructure and launch nwsetup-tui to continue the installation.  When the setup asks you for the IP of Node Zero, enter the external IP of your head unit. For example, in an isolated network a firewall will control any communication to the isolated network:


(192.168.0.x) LAN Corpo --> Firewall Wan Interface ( --> Firewall Lan interface (Isolated Network --> Netwitness Head unit (


NOTE: You need to open the required ports for this installation in your firewall. You can refer to the official documentation related to network/port requirements at the following link : Deployment Guide: Network Architecture and Ports 


In this example, the Node Zero external IP will be and when completing the setup, make sure you are using the external Node Zero IP (Firewall WAN Interface for this isolated network).


When this is done, launch the install process on the VLC and after several minutes the VLC will be up and running:


Next, we need to configure the VLC to send the logs to the log decoder behind the Firewall:


During this process the operation will work, but the recorded IP will be the internal IP of the log decoder, and we need to change this information to re-establish the communication. 


We need to modify the shovel.conf file to be able to send our logs to the log decoder using the same process for this isolated network. To facilitate the process you can add another IP to your firewall and configure a one-to-one NAT for your log decoder. For this example, we have a one-to-one NAT for the log decoder using the following IP ( on the external interface of the firewall.


The shovel.conf file is located on the VLC at the following path:



Connect to your VLC using SSH, edit the file, and change the IP to the external IP of your firewall for your isolated network:


When this is completed, reboot your VLC; in the RSA NetWitness UI you will see the green dot confirming that the communication is working:


Context menu actions have long been a part of the RSA NetWitness Platform. v11.2 brought a few nice touches to help manage the menu items as well as extend the functions into more areas of the product.


See here for previous information on the External Lookup options:

Context Menus - OOTB Options 


And these for Custom Additions that are useful to Analysts:

Context Menu - Microsoft EventID 

Context Menu - VirusTotal Hash Lookup 

Context Menu - RSA NW to Splunk 

Context Menu - Investigate IP from DNS 

Context Menu - 


As always access to the administration location is located here:

Admin > System > Context Menu Actions


The first thing you will notice is a somewhat different look, since a good bit of cleanup has been done in the UI.


Before we start trimming the menu items... here is what it looks like before the changes:

Data Science/Scan for Malware/Live Lookup are all candidates for reduction.


When you open an existing action or create a new one you will also see some new improvements.

It is no longer just a large block of text that can be edited only if you know what to change and where; it is now a set of options you can change to implement your custom action (or tweak existing ones)


You can switch to the advanced view to get back to the old freeform world if you want to.


Clean up

To clean up the menu for your analysts, you might consider disabling the following items.

If you don't have a Warehouse from RSA installed, sort by Group Name, locate the Data Science group, and disable all four of its rules

Disable any of the External Lookup items that are not used or not important for your analysts

Scan for Malware - if you are logs-only, Malware Analysis is not needed; likewise if you have packets or endpoint but don't use Malware Analysis

Live Lookup - mostly doesn't provide value to analysts

Now you should have a nice clean right click action menu available to investigators to do their job better and faster.

The RSA NetWitness Platform has multiple new enhancements as to how it handles Lists and Feeds in v11.x.  One of the enhancements introduced in the v11.1 release was the ability to use Context Hub Lists as Blacklist and/or Whitelist enrichment sources in ESA alerts.  This feature allows analysts and administrators a much easier path to tuning and updating ESA alerts than was previously available.


In this post, I'll be explaining how you can take that one step further and create ESA alerts that automatically update Context Hub Lists that can in turn be used as blacklist/whitelist enrichment sources in other ESA alerts.  The capabilities you'll use to accomplish this will be the ESA's script notifications, the ESA's Enrichment Sources and the Context Hub's List Data Source.


Your first step is to determine what kind of data you want to put into the Context Hub List.  For my test case I chose source and destination IP addresses.  Your next step is to determine where this List should live so that the Context Hub can access it.  The Context Hub can pull Lists either via HTTP, HTTPS, or from its local file system on the ESA appliance - for my test case I chose the local filesystem.


With that decided, your next step is to create the file that will become the List - the Context Hub looks within the /var/netwitness/contexthub-server/data directory on the ESA, so you'll create a CSV file in this location and add headers to help you (and others) know what data the List contains:
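For example, esaList.csv might start out as nothing but a header row. The column names below are illustrative (this post later refers to date_added and source_alert columns alongside the IP values; adjust them to whatever your alert will write):

```
ip_address,date_added,source_alert
```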


**NOTE** Be sure to make this CSV writeable for all users, e.g.:

# chmod 666 esaList.csv


Next, add this CSV to the Context Hub as a Data Source.  In Admin / Services / Contexthub Server / Config --> Data Sources, choose List:


Select "Local File Store," then give your List a name and description and choose the CSV from the dropdown:


If you created headers in the CSV, select "With Column Headers" and then validate that the Context Hub can see and read your file.  After validation is successful, tell the Context Hub what types of meta are in each column, whether to Append to or Overwrite values in the List when it updates, and also whether to automatically expire (delete) values once they reach a certain age (maximum value here is 30 days):


For my test case, I chose not to map the date_added and source_alert columns from the CSV to any meta keys, because I only want them for my own awareness, to know where each value came from (i.e., which ESA alert) and when it was added.  Also, I chose to Append new values rather than Overwrite, because the Context Hub List has built-in functionality that identifies new and unique values within the CSV and adds only those to the List.  Append will also enable the List Value Expiration feature to automatically remove old values.


Once you have selected your options, save your settings to close the wizard.  Before moving on, there are a few additional configuration options to point out which are accessible through the gear icon on the right side of the page.  These settings will allow you to modify the existing meta mapping or add new ones, adjust the Expiration, enable or disable whether the List's values are loaded into cache, and most importantly - the List's update schedule, or Recurrence:


**NOTE** At the time of this writing, the Schedule Recurrence has a bug that causes the Context Hub to ignore any user-defined schedule, which means it will revert to the default setting and only automatically update every 12 hours.


With the Context Hub List created, you can move on to the script and notification template that you will use to auto-update the CSV (both are attached to this blog - you can upload/import them as is, or feel free to modify them however you like for your use cases / environment).  You can refer to the documentation (System Configuration Guide for RSA NetWitness Platform 11.x - Table of Contents) to add notification outputs, servers, and templates.


To test that all of this works and writes what you want to the CSV file (for my test case, IP source and destination values), create an ESA alert that will fire with the data points you want to capture, and then add the script notification, server, and template to the alert:


After deploying your alert and generating the traffic (or waiting) for it to fire, verify that your CSV auto-updates with the values from the alert by keeping an eye on the CSV file.  Additionally, you can force your Context Hub List to update by re-opening your List's settings (the gear icon mentioned above), re-saving your existing settings, and then checking its values within the Lists tab:



You'll notice that in my test case, my CSV file has 5 entries in it while my Context Hub List only has 3 - this is a result of the automatic de-duplication mentioned above; the List is only going to be Appending new and unique entries from the CSV.


Next up, add this List as an Enrichment Source to your ESA.  Navigate to Configure / ESA Rules --> Settings tab / Enrichment Sources, and add a new Context Hub source:


In the wizard, select the List you created at the start of this process and the columns that you will want to use within ESA alerts:


With that complete, save and exit the wizard, and then move on to the last step - creating or modifying an ESA alert to use this Context Hub List as a whitelist or blacklist.


Unless your ESA alert requires advanced logic and functionality, you can use the ESA Rule Builder to create the alert.  Within your alert statement, build out the alert logic and add a Meta Whitelist or Meta Blacklist Condition, depending on your use case:


Select the Context Hub List you just added as an Enrichment Source:


Select the column from the Context Hub List that you want to match against within your alert:


Lastly, select the NetWitness meta key that you want to match against it:


You can add additional Statements and additional blacklists or whitelists to your alert as your use case dictates.  Once complete, save and deploy your alert, and then verify that your alerts are firing as expected:


And finally, give yourself a pat on the back.

For those who are interested in becoming certified on the RSA NetWitness Platform - we have some great news for you!  This process just became a whole lot easier... you no longer have to travel to a Pearson VUE testing center to take the certification exams.  All four of the RSA NetWitness certifications can now be taken through online proctored testing!  That's right... 100% online!


You can find all of the details on the RSA Certification Program page.  There's also a page specifically for the RSA NetWitness Platform certifications where you can find details about the certifications, try out one of the practice exams, register to take a certification and much, much more.  


RSA NetWitness has 4 separate certifications available:

  1. RSA NetWitness Logs and Network Admin
  2. RSA NetWitness Logs and Network Analyst
  3. RSA NetWitness Endpoint Admin
  4. RSA NetWitness Endpoint Analyst


I wish you all the best of luck and encourage you to continue your professional development by becoming certified on our technology.  

The RSA NetWitness Platform has an integrated agent available that currently performs base Endpoint Detection and Response (EDR) functions, but will shortly have more complete parity with ECAT (in v11.x).  One beneficial feature of the Insights agent (otherwise called the NWE Insights Agent) is Windows log collection and forwarding. 


Here is the agent Install Guide for v11.2:


The Endpoint packager is built from the Endpoint Server (Admin > Services), where you can define your configuration options.  To enable Windows log collection, check the box at the bottom of the initial screen


This expands the options for Windows log collection...

Define one or more Log Decoder/Collector services in the current RSA NetWitness deployment to send the endpoint logs to (define a primary and secondary destination)


Define your channels to collect from

The default list includes 4 channels (System, Security, Application and ForwardedEvents)

You can also add any channel you want as long as you know the EXACT name of it

In the Enter Filter option in the selection box, enter the channel name

In this case, Windows PowerShell (again, make sure you match the exact event channel name or you will run into issues)

We could also choose to add some other useful event channels

  • Microsoft-Windows-Sysmon/Operational
  • Microsoft-Windows-TerminalServices-LocalSessionManager/Operational
  • Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational


You can choose to filter these channels to include or exclude certain events as well.


Finally, set the protocol to UDP, TCP, or TLS.


Generate Agent produces the download that includes the packager and the config files that define the agent settings.


From there you can build the agents for Windows, Linux, and Mac from a local Windows desktop.

Agents are installed as normal using local credentials or your package management tool of choice.


Now that you have windows events forwarded to your log decoders, make sure you have the Windows parser downloaded from RSA Live and deployed to your log decoders to start parsing the events.

The Windows parser is slightly different from the other Windows log parsers (nic, snare, er) in that there are only 7 message sections (one each for the default channels, plus a TestEvent and a Windows_Generic).


For the OOTB channels the Message section defines all the keys that could exist and then maps them to the table-map.xml values as well as the ec tags. 

Log Parser Customization 


The Windows_Generic section is the catchall for this parser; any custom channel that is added will only parse through this section.  This catchall needs some help to make use of the keys that will come from the channels we selected, which is where windowsmsg-custom.xml (a custom addition to the Windows parser) comes in (an internal feature enhancement has been added to make these OOTB)


Get the windows-custom parser from here:

GitHub - epartington/rsa_nw_log_windows: rsa windows parser for nw endpoint windows logs 

Add it to the Windows parser folder on the log decoder(s) that you configured in the endpoint config



Reload your parsers.

Now you should have additional meta available for these additional event channels.




What happens if you want to change your logging configuration but don't want to re-roll an agent?  In the Log Collection Guide here, you can see how to add a new config file to the agent directory to update the channel information

(page 113)


Currently the free NW Endpoint Insights agent doesn't include agent config management, so this has to be done manually at the moment.  Future versions will include config management to make this change easier.


Now you can accomplish things like this:

Logs - Collecting Windows Events with WEC 

This works without needing a WEC/WEF server, which is especially useful if you are deploying Sysmon and want to use the NWE agent to pull back the event channel.


While you are in the log collection frame of mind, why not create a profile in Investigation for NWE logs? 

Pre-Query = device.type='windows'


In 11.2 you can create a profile (which isn't new) as well as flexible meta and column groups (new in 11.2).  This means the pre-query is locked, but you are able to switch meta groups within the profile (very handy)



Hopefully this helpful addition to our agent reduces the friction of collecting Windows events.  If there are specific event channels that are high on your priority list for collection, add them to the comments below and I'll get them added to the internal RFE.

By date: By tag: