APT33 is a state-sponsored group suspected to be linked to Iran. It has been active since 2013 and has targeted organizations in the aviation and energy sectors, mainly across the United States and the Middle East.

The group has recently been seen using private VPN networks with changing exit nodes to issue commands to, and collect data from, its C&C servers.

 

In this post we will look at one of the malware files used within those campaigns and identify ways to detect it using RSA NetWitness Network and Endpoint.

 

The following is the file used in this article:

Filename: MsdUpdate.exe
SHA256:   e954ff741baebb173ba45fbcfdea7499d00d8cfa2933b69f6cc0970b294f9ffd

 

This specific sample is rather basic in terms of behavior, but provides both persistence to the attacker, as well as the ability to deploy other malicious files.


Endpoint Visibility

By leveraging RSA NetWitness Endpoint, we can easily identify files and processes that have an elevated risk score due to their behavior. In the below screenshot, we can clearly see that the file “MsdUpdate.exe” stands out due to both its risk score and its reputation (identified as “Malicious”). In addition, we can see that the file is not signed by any valid or trusted certificate.


By drilling into the "MsdUpdate.exe" process, we can see in the next screenshot the different actions performed by the process:

  1. It modifies the registry
  2. It communicates over the network with the “simsoshop.com” domain
  3. It copies itself to “C:\Users\<user>\Roaming\MSDUpdate\MsdUpdate.exe”


If we look in more detail at the registry changes made by the file, as per the below screenshot, we can see that it modified the “Run” key to run itself at startup. This gives the attacker persistence, maintaining access after a reboot of the machine.
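For reference, persistence via the Run key typically means a value like the following was written (the value name is illustrative and HKCU is shown, though HKLM is also common; the path matches the copy operation observed above):

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
    MSDUpdate = "C:\Users\<user>\Roaming\MSDUpdate\MsdUpdate.exe"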


Network Visibility

As seen in the previous step, we were able to identify that the malicious file communicated with the “simsoshop.com” domain. By drilling into this on the Network component, we can look at more details regarding this network connection.

Based on the below screenshot we can see:

  • 4 different sessions separated by exactly 10 minutes each, which indicates programmatic behavior typical of beaconing activity
  • All sessions are posting data to a file named “update.php”, which also suggests beaconing
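As a starting point for detection, the second behavior can be expressed as a simple application rule condition; the rule below is a hypothetical example rather than shipped RSA content, and catching the 10-minute periodicity itself would require an ESA rule instead:

action = 'post' && filename = 'update.php'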


We can then reconstruct the payload of any of the above sessions to look at its content and confirm that this is indeed beaconing activity.

As seen below, we can confirm that the query is updating an entry with a hexadecimal payload (most likely encoded).


This shows how RSA NetWitness Network and Endpoint can help in quickly detecting, identifying and investigating such attacks based on activity on both the endpoint and the network.


Indicators of Compromise

The following are some additional indicators that can be used to detect the presence of this malware.

 

File Hashes

Filename        SHA256
MsdUpdate.exe   e954ff741baebb173ba45fbcfdea7499d00d8cfa2933b69f6cc0970b294f9ffd
MsdUpdate.exe   a67461a0c14fc1528ad83b9bd874f53b7616cfed99656442fb4d9cdd7d09e449
MsdUpdate.exe   c303454efb21c0bf0df6fb6c2a14e401efeb57c1c574f63cdae74ef74a3b01f2
MsdUpdate.exe   b58a2ef01af65d32ca4ba555bd72931dc68728e6d96d8808afca029b4c75d31e

 

 

Command & Control Domains

Domain

suncocity.com

service-explorer.com

zandelshop.com

service-norton.com

simsoshop.com

service-eset.com

zeverco.com

service-essential.com

qualitweb.com

update-symantec.com

 

 

IP Addresses

IP Address

5.135.120.57

137.74.80.220

5.135.199.25

137.74.157.84

31.7.62.48

185.122.56.232

51.77.11.46

185.125.204.57

54.36.73.108

185.175.138.173

54.37.48.172

188.165.119.138

54.38.124.150

193.70.71.112

88.150.221.107

195.154.41.72

91.134.203.59

213.32.113.159

109.169.89.103

216.244.93.137

109.200.24.114

 

In the past, I've seen a number of people ask how to enable a recurring feed from a hosting server that uses SSL/TLS, particularly when attempting to add a recurring feed hosted on the NetWitness Node0 server itself.  The issue presents itself as a "Failed to access file" error message, which can be especially frustrating when you know your CSV file is there on the server and you've double- and triple-checked that you have the correct URL:

 

There are a number of blogs and KBs that cover this topic in varying degrees of detail:

 

 

Since all the steps required to enable a recurring feed from an SSL/TLS-protected server are done via CLI (apart from the Feed Wizard in the UI), I figured it would be a good idea to put together a script that does everything - minus a couple requests for user input and (y/N) prompts - automatically.
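For reference, the heart of what the script automates is getting the hosting server's certificate into the certificate trust store consulted during the feed fetch. A minimal sketch of the manual steps (the hostname is a placeholder, and the paths assume the stock CentOS 7 trust store; your deployment's specifics may differ):

# grab the feed server's certificate and drop it into the trust anchors
openssl s_client -showcerts -connect feedhost.example.com:443 </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > /etc/pki/ca-trust/source/anchors/feedhost.pem

# rebuild the consolidated trust store
update-ca-trust extract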

 

The goal here is that this will quickly and easily allow people to add new recurring feeds from any server that is hosting a CSV:

 

Success!

A couple of years ago, a few smart folks over at Salesforce came up with the idea of fingerprinting certain characteristics of the "Client Hello" of the SSL/TLS handshake, with the goal of more accurately identifying the client application initiating TLS-encrypted sessions.

 

This concept certainly has the potential to provide invaluable insight during incident response, though there are some significant operational limitations that (in my opinion) have so far prevented JA3 fingerprinting from gaining more widespread adoption and use.  Perhaps the biggest of these limitations is the need for some kind of known JA3 fingerprint library or repository, where the thousands (potentially millions?) of client applications that might initiate a TLS handshake can be reliably matched with their JA3 fingerprints. There are a couple of sites building out these repositories...

 

...but their content is limited (after all, fingerprinting a client requires installing it, running it, capturing the PCAP, running a JA3 parser or script against the PCAP, and then adding that fingerprint to the library; that process simply does not scale) and the fidelity/accuracy/timeliness of these libraries is a pretty large question mark.

 

However, with NetWitness 11.3.1, which has a native option to enable JA3 and JA3S fingerprinting, and NetWitness Endpoint 11.3, we can bridge this gap and create our own JA3 libraries.

 

The concept is fairly simple:

  • use NetWitness Endpoint to identify applications making outbound network connections
  • use NetWitness Network to identify outbound HTTPS traffic
  • link these events and sessions by their common characteristics
  • once we have that link
    • extract the filename and sha256 hash of the application from the NetWitness Endpoint event
    • along with the JA3 fingerprint from the network session
    • and then create a feed of that information that the NetWitness Platform can use for additional context
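For illustration, a row in the resulting ja3Context.csv feed might look like the following; the column layout and values here are hypothetical, and the attached script defines the real format:

ja3_hash,filename,sha256
aabbccddeeff00112233445566778899,chrome.exe,9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08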

 

In order to ensure this process scales, we can make use of the ESA's rule engine to identify the sessions we want and its script output functionality to create the feed for us. The ESA rule and Python script output are attached to this blog.

 

Prior to enabling these, you'll want to make sure the "netwitness" user has either read/write access to the "/var/netwitness/common/repo" directory on the Admin Server (a.k.a. Node0), or at least read/write access to the "ja3Context.csv" file in that directory that the ja3context.py script will update.

 

A good guide for setting ACLs in CentOS is here: https://www.tecmint.com/give-read-write-access-to-directory-in-linux/  and the result:
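The tecmint article covers the details, but for this use case the command boils down to something like the following sketch (grant on the directory or just the file, per the paragraph above):

# grant the netwitness user read/write on the feed file
setfacl -m u:netwitness:rw /var/netwitness/common/repo/ja3Context.csv

# verify the ACL took effect
getfacl /var/netwitness/common/repo/ja3Context.csv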

 

Once the appropriate permissions are set and you've enabled the ESA rule and its script output, your last step will be to turn that CSV output into a feed (A list two ways - Feeds and Context Hub - many thanks again to the SE formerly known as Eric Partington for this blog):

 

...and choose your meta keys:

 

And voila!  We have an automatically generated and constantly updating library of applications for our JA3 fingerprints:

Today RSA Link implemented a new way of presenting documentation to help RSA NetWitness® Platform customers find the information they need quickly and easily. RSA NetWitness Platform 11.3 presents the documentation in a unified map of product documentation and videos, including software, hardware, and RSA content.

 

The new RSA NetWitness® Platform 11.3 Documentation page

 

The blocks represent a high-level workflow, each block a task for different RSA NetWitness® Platform activities. For example, an Incident Responder would click the “Investigate and Respond” block. Clicking a block opens a list of tasks for the selected category with quick links to product information.

 

Instead of searching through a list of document titles that may have the information you need, you can select one of the high-level tasks—Get Started, Install & Upgrade, Configure & Manage, Investigate & Respond, or Integrate & Develop—and see a list of the relevant information.

 

The widgets on the right provide direct links to:

  • The Master Table of Contents with quick links to every Version 11 document.
  • The Known Issues page with a sortable list of known issues.
  • The Troubleshooting page with information to help resolve issues from diverse RSA Link resources.
  • The Documentation Feedback email, which sends feedback and suggestions to the Information Design and Development team responsible for RSA NetWitness® Platform technical content.

 

Please click the Documentation Feedback link under Other Resources on the right to provide your comments. We hope you find this new page useful and welcome your feedback.

One of the biggest commitments we at RSA make to our customers is to provide best-in-class security products that help manage digital risk.  Our goal is to do so with maximum reliability while also requiring minimum effort on your part.  However, we know that even best-in-class products occasionally need help to install, use, and maintain.  While we are continuously focused on improving our support services to ensure that every interaction you, our customers, have with us is positive and quick, we realize that even the best support interaction still requires time and effort on your part.  And what’s more valuable than time?

 

With that in mind, today I am happy to officially launch our Engineering Request dashboard within the RSA Case Management portal, which will allow you to monitor the progress of Engineering Requests (ERs) opened on your behalf*.  Not only will you be able to see the progress of your ERs, but you will be able to do so on your own, without the need to call support for an update.

 

To access this information, navigate to the RSA Case Management portal by clicking on My Cases in the main menu on RSA Link.    Clicking on the Engineering Requests tab will display Engineering requests that have been opened on your behalf (linked to your support cases) since January 1, 2018.  For each of these, you will be able to see its Status to know when the issue has been addressed, and if a fix is included in a release, you’ll see the release number as well.

  

Click to enlarge

 

This is just another small improvement to your support experience.  Stay tuned for more exciting upcoming changes.

 

In the meantime, if you have any feedback on this enhancement or other ideas to continue to improve your experience, please share! 

 

* This functionality is currently only available for the RSA Archer Suite and the RSA NetWitness Platform. Additionally, you will only be able to monitor Engineering Requests that were opened directly on your behalf and that are not security issues potentially containing sensitive information.  We encourage you to use the RSA Ideas portal to manage and monitor enhancement requests.

Introduction to MITRE ATT&CK™

Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK™) for enterprise is a framework which describes adversarial actions or tactics, from Initial Access (Exploit) to Command & Control (Maintain). ATT&CK™ Enterprise deals with the classification of post-compromise adversarial tactics and techniques against Windows™, Linux™ and macOS™.

This community-enriched model adds techniques used to realize each tactic. These techniques are not exhaustive, and the community adds them as they are observed and verified.

To read more about how ATT&CK™ is helpful in resolving challenges and validating our defenses, please check this article.

Some Techniques are mapped to multiple Tactics. There are 244 unique Techniques in total, which results in 314 non-unique Techniques distributed over 12 Tactics.

 

RSA Threat Content Mapping with MITRE ATT&CK™

RSA has mainly three kinds of Threat Content: a. Application Rules, b. ESA Rules and c. LUA Parsers. These content types can be classified further by the 'Medium' of each piece of content. Medium depends upon the source of the meta that a particular content piece uses. For example, if an application rule uses meta populated by packet data, then its Medium will be packet. We can search LIVE content using the Medium criteria:

 

 

We will try to measure how much of the ATT&CK™ matrix is covered by RSA Threat Content, essentially mapping each piece of threat content to the one or more ATT&CK™ techniques it detects. This mapping needs to be saved in a file; in the case of ATT&CK™, the file type is JSON. For example, in the case of application rules, there will be mapping JSON files for each of the following:

  • Mapping of only RSA Application Rules with Medium = log
  • Mapping of only RSA Application Rules with Medium = packet
  • Mapping of only RSA Application Rules with Medium = endpoint
  • Mapping of only RSA Application Rules with Medium = log AND packet
  • Mapping of all RSA Application Rules (Without considering Medium)

The same pattern will follow for ESA Rules and LUA Parsers depending upon Medium value.

These JSONs are graphically viewable through the ATT&CK™ Navigator web GUI tool, which is described later in this post along with the process of viewing them.

 

a. Application Rules - The Rule Library contains all the Application Rules, and we can map these rules or detection capabilities to the tactics/techniques of the ATT&CK™ matrix. The mapping shows how many Tactics/Techniques are detected by RSA NetWitness Application Rules. We have generated JSON files for application rules which can be viewed in Navigator; they can be downloaded from the archive attached to this blog post. Following are the mappings for RSA Application Rules:

 

Content Type           | Medium                                  | Location of JSON in attached archive
RSA Application Rules  | log                                     | RSA_Threat_Content_ATTACK_JSON_Mapping\Application_Rules\Medium_log
RSA Application Rules  | packet                                  | RSA_Threat_Content_ATTACK_JSON_Mapping\Application_Rules\Medium_packet
RSA Application Rules  | endpoint                                | RSA_Threat_Content_ATTACK_JSON_Mapping\Application_Rules\Medium_endpoint
RSA Application Rules  | All Rules (without considering Medium)  | RSA_Threat_Content_ATTACK_JSON_Mapping\Application_Rules\All_RSA_Application_Rules

 

Following is the plot which reflects the number of techniques detected by all RSA Application Rules with respect to ATT&CK™:

 

b. ESA Rules - ESA is one of the defense systems that is used to generate alerts. ESA Rules provide real-time, complex event processing of log, packet, and endpoint meta across sessions. ESA Rules can identify threats and risks by recognizing adversarial Tactics, Techniques and Procedures (TTPs). We have generated JSON files for ESA rules which can be viewed in Navigator. This JSON can be downloaded from attached archive in this blog post. Following are the mappings for RSA ESA Rules:

 

Content Type   | Medium                                  | Location of JSON in attached archive
RSA ESA Rules  | log                                     | RSA_Threat_Content_ATTACK_JSON_Mapping\ESA_Rules\Medium_log
RSA ESA Rules  | packet                                  | RSA_Threat_Content_ATTACK_JSON_Mapping\ESA_Rules\Medium_packet
RSA ESA Rules  | log AND packet                          | RSA_Threat_Content_ATTACK_JSON_Mapping\ESA_Rules\Medium_log_AND_packet
RSA ESA Rules  | All Rules (without considering Medium)  | RSA_Threat_Content_ATTACK_JSON_Mapping\ESA_Rules\All_RSA_ESA_Rules

 

Following is the plot which reflects the number of techniques detected by all RSA ESA Rules with respect to ATT&CK™:

 

c. LUA Parsers - Packet parsers identify the application layer protocol of sessions seen by the Decoder and extract meta data from the packet payloads of the session. Every packet parser is able to extract meta from every session. Among these packet parsers are the LUA Parsers, which can be customized by customers. We have generated JSON files for LUA Parsers which can be viewed in Navigator; they can be downloaded from the archive attached to this blog post. Following are the mappings for RSA LUA Parsers:

 

Content Type     | Medium                                        | Location of JSON in attached archive
RSA LUA Parsers  | packet                                        | RSA_Threat_Content_ATTACK_JSON_Mapping\Lua_Parsers\Medium_packet
RSA LUA Parsers  | All LUA Parsers (without considering Medium)  | RSA_Threat_Content_ATTACK_JSON_Mapping\Lua_Parsers\All_RSA_Lua_Parsers

Note: The above two JSONs are identical, since packet is the only Medium for LUA Parsers.

 

Following is the plot which reflects the number of techniques detected by all RSA LUA Parsers with respect to ATT&CK™:

 

 

d. Complete RSA Threat Content (Application Rules + ESA Rules + Lua Parsers) - We have combined all three types of content and created a combined JSON file for the ATT&CK™ Navigator, which can be downloaded from this blog post.

 

Content Type        | Medium                  | Location of JSON in attached archive
RSA Threat Content  | All RSA Threat Content  | RSA_Threat_Content_ATTACK_JSON_Mapping\All_RSA_Threat_Content

 

Following is the plot which reflects the number of techniques detected by all three threat content types combined with respect to ATT&CK™ coverage:

These statistics are bound to change over time as new content is added or updated. Updating the ATT&CK™ coverage periodically will give us a consolidated picture of our complete defense system, so we can quantify and monitor the evolution of our detection capabilities.

 

In the above sections, we talked about using the JSON files (attached to this blog post) in ATT&CK™ Navigator. In the next section, we will discuss how to use and view these JSON files.

 

Introduction to MITRE ATT&CK™ Navigator

ATT&CK™ Navigator is a tool openly available through GitHub which uses STIX 2.0 content to provide a layered visualization of the ATT&CK™ model.

ATT&CK™ Navigator stores information in JSON files; each JSON file is a layer containing multiple techniques which can be opened in the Navigator web interface. The JSON contains content in STIX 2.0 format, which can be fetched from a TAXII 2.0 server of your choice. For example, we can fetch ATT&CK™ content from MITRE's TAXII 2.0 server through its APIs.
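For example, at the time of writing, MITRE's TAXII 2.0 server can be queried with nothing more than curl; the collection ID below is the published Enterprise ATT&CK collection, but verify it against MITRE's documentation before relying on it:

curl -s -H 'Accept: application/vnd.oasis.stix+json; version=2.0' \
  'https://cti-taxii.mitre.org/stix/collections/95ecc380-afe9-11e4-9b6c-751b66dd541e/objects/' | head -c 500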

The techniques in this visualization can be:

  • Highlighted with color coding.
  • Annotated with a numerical score to signal the severity/frequency of the technique.
  • Annotated with a comment describing that occurrence of the technique or any other meaningful information.

These layers can be exported in SVG and Excel formats.
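A minimal layer file illustrating those three annotations might look like the following sketch; field names vary slightly between Navigator versions, so treat the JSONs attached to this post as the authoritative examples:

{
  "name": "RSA Application Rules - packet",
  "version": "2.2",
  "domain": "mitre-enterprise",
  "techniques": [
    { "techniqueID": "T1071", "color": "#fc3b3b", "score": 1,
      "comment": "rule_one||rule_two" }
  ]
}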

 

How to View a JSON in ATT&CK™ Navigator?

  1. Open MITRE’s ATT&CK™ Navigator web application (https://mitre-attack.github.io/attack-navigator/enterprise/).
  2. In Navigator, open a new tab by clicking the '+' button.

    Navigator_Image
  3. Then click 'Open Existing Layer' and then 'Upload from Local', which will let you choose a JSON file from your local machine (or the one attached to this blog).

    Navigator_Image

  4. After uploading the JSON file, the layer will open in Navigator and look like this:

    Navigator_Image

 

        This visualization highlights the techniques covered in the JSON file with color and comments.

 

    5. While hovering the mouse over each colored technique, you can see three things:

  • Technique ID: The unique ID of each technique, as per the ATT&CK™ framework.
  • Score: The threat score given to each technique.
  • Comment: We can write anything relevant in the comment to put things in perspective. In this case, we have used pipe-delimited ('||') names of the content/rules/parsers which cover that technique. For example, if you have opened the application rule JSON, the comments will contain the pipe-delimited names of the application rules which detect the hovered technique.

 


Other blog posts written before regarding Threat Content coverage of ATT&CK™ can be found here and here.

Overview

The RSA NetWitness Platform is run by many of our customers on RSA's physical appliances, but the entire stack runs just fine in AWS, Azure, VMware, or Hyper-V. You can even mix and match between physical and virtual hosts however you prefer. Our Virtual Host Installation Guide does a great job outlining the steps for building a virtual RSA NetWitness Platform host.

 

However, there is frequently a need to build smaller hosts to gather data in smaller remote locations. Small issues that don't apply to larger hosts can cause RSA NetWitness Platform folders to overrun their allotments and cause NetWitness to stop capture or aggregation. This post will primarily cover the settings to pay attention to when building smaller virtual hosts. It will also include some tricks to monitor your NetWitness hosts to make sure they don't reach unhealthy levels of storage usage. Of course, many of these tips also apply to virtual hosts of all sizes, so hopefully you can benefit regardless of your particular virtual implementation.

 

To ISO or Not to ISO

RSA provides both an ISO and an OVA (and a Hyper-V VHD) to use to build your virtual hosts. Which should you use? If you are building a full RSA NetWitness Platform implementation virtually, you will have to use the ISO to build your Admin Server because the OVA does not come with all of the required RPMs. As for the other hosts, using the OVA isn't a bad idea. The OVA is a much smaller file to deal with (~450MB OVA vs ~6GB ISO) and it has already completed the bootstrap, which is one of the longest steps of the installation. However, the OVA has already provisioned the logical volumes for a 195GB host. That is the recommended size for the OS drive, but if you want to assign more than that, the ISO is the easiest option - and I say that as someone who rather enjoys partitioning Linux file systems! As for assigning less than the 195GB: rather than installing with less than what RSA recommends, I would suggest thin provisioning your host's OS drive instead.

 

Keep in mind that your log, network, and endpoint data stores will be separate from this. The OS drive is strictly for holding OS files, NetWitness internal service log entries, temporary data, and some other miscellaneous data. You will add disks to accommodate storing your log, network, and/or endpoint data in the next step.

 

Installing from the ISO is extremely simple: create your virtual host; give it the CPU, RAM, and HDD storage recommended in the installation guide or by your RSA engineer (requirements differ for different services and levels of throughput); attach the ISO; and power on the VM. It will boot to the blue installation screen where you will hit <Enter>. Once you get to the following screen...

...make sure you enter "y" or "Y" and hit <Enter>. Once the bootstrap is complete, the system will reboot to the login prompt. After logging in, you will run "nwsetup-tui" and you can refer to the installation guide for instructions on how to properly orchestrate a host from there.

 

VM Host Sizing

In the previous step, you installed the bootstrapped host via the ISO or the OVA and possibly orchestrated the services as well. For any host that will retain data - Decoders (network / log), Hybrids (endpoint / network / log), Concentrators, or Archivers - you will also need to provision storage for that data. Sizing that can be difficult, but I have a calculator that can help size most of those appropriately.

 

...except Archivers. Why not Archivers? Archivers are employed, generally, for regulatory purposes. You should engage your RSA Engineer to make sure you size them appropriately so that you don't run into issues with auditors. You might be logging especially large logging sources, while the calculator only uses a static 600 bytes per message. You can also retain more or fewer meta keys, which can drastically affect how much storage to assign. And after all, while the "[Small]" in the title of this post was in hard brackets, this guide is generally geared toward smaller deployments / hosts. The sole reason to use an Archiver is that the amount of storage required has grown significantly beyond any definition of the word "small".

 

To use the calculator, there are a number of things to understand:

  • The calculator is used to calculate Hybrid storage, because most "small" environments will use Hybrids rather than discrete Decoder and Concentrator pairs. If you are using separate Decoders and Concentrators, you can simply break up the calculated storage per service and split up the provisioning commands. NOTE: There is no such thing as a "discrete Endpoint Decoder". Endpoint servers only come as Hybrids, whether virtual or physical.
  • When you enter information to size up your storage, at the bottom of the calculator you will get provisioning commands to setup your hosts. If you have any Hosts entered in rows 6 or 7, you'll get commands to provision storage for an Endpoint Log Hybrid. If you don't have any Hosts, but you have Log Events >0 GB/day, you will get commands to provision storage for a Log Hybrid. If you have Log Events at 0 GB/day and you have 0 Hosts but your network traffic is >0 GB/day, you will get commands to provision storage for a Network Hybrid.
    • If you are sizing an Endpoint Log Hybrid, keep in mind that you cannot currently download modules automatically, download memory dumps, or download Master File Tables from hosts. Those features which were in ECAT 4.x will be back in the product as of 11.4, and I've included commands to provision them. However, the amount of storage you provision for those purposes is entirely up to you, so you will need to just type the numbers into that cell. They can both be relatively small (10 - 30GB) if you don't plan to auto-download unsigned, new modules. However, once the feature is back, we do highly recommend that you automatically have NetWitness Endpoint download any unsigned, unknown modules less than 5MB - 10MB, and estimate storage for your environment appropriately.
  • Once storage is provisioned for each of the given volumes, the last provisioning command is to give 100% of the remaining space to the MetaDB on the Concentrator. That is done on purpose because if I have any extra space left over, that is where I want it. However, you also must make sure (likely with df -h) that you have enough storage in that logical volume. If not, you likely didn't give the entire partition enough space.
    • For this same reason, if you end up using this calculator to build a discrete Decoder, you'll likely want to change the command that would provision your PacketDB to use the "100%FREE" version of the lvcreate command. The syntax would be the same as the one I use for the Concentrator's MetaDB.
  • When you enter the scale information for Network Traffic, you might wonder, "But I don't know how many GB/day of network traffic I plan to send to NetWitness!" The easiest rule of thumb is that if you expect to see 100Mbps on average for a 24-hour period (that would mean ~175Mbps over the peak hour and 10Mbps overnight), that is 1TB/day of traffic. If you expect to see 10Mbps because it's a small office or home environment, assume 100GB/day. If you have absolutely no idea, just throw a number in there.
  • For logs, in a small environment, if you had any log management system you can probably figure out how many GB/day of logs you were generating before. If you expect a certain number of Events per Second, I put a handy calculator on row 10 to turn that into GB/day (the conversions are also spelled out after this list). If you have no idea, then once again, I suggest you just throw something in there.
  • You can edit the calculator if you like. The password is just "rsa". I only password protect it to make sure that first-time users aren't editing cells they shouldn't and break it.
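To make the rules of thumb above concrete, the unit conversions work out roughly as follows:

100 Mbps ÷ 8 bits/byte = 12.5 MB/s
12.5 MB/s × 86,400 s/day ≈ 1.08 TB/day        (hence 100Mbps ≈ 1TB/day)

1,000 EPS × 600 bytes/event × 86,400 s/day ≈ 51.8 GB/day of logs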

 

The calculator is called NW Virtual Hybrid Sizing Calculator v1.0.xlsx. PLEASE, if you find any errors, leave a comment below or contact me somehow so that I can fix it for others.

 

Raw Event Data Storage

The Virtual Host Installation Guide covers how to add storage for the various RSA NetWitness Platform databases in Step 3. It also covers how to calculate the amount of storage you'll need to allocate to each database for any given host/service. For the Admin Server, Archiver, Broker, ESA, Log Collector, and UEBA hosts, all storage will get dumped into the /var/netwitness/ folder. The instructions for extending that volume group and logical volume are in the installation guide and generally involve: pvcreate, then vgextend, then lvextend, and finally xfs_growfs.
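As a sketch, extending /var/netwitness on one of those hosts looks something like this; the volume group and logical volume names below are assumptions, so confirm yours first with vgs and lvs:

pvcreate /dev/sdb                                   # initialize the newly added disk
vgextend netwitness_vg00 /dev/sdb                   # add it to the volume group
lvextend -l +100%FREE /dev/netwitness_vg00/nwhome   # grow the logical volume
xfs_growfs /var/netwitness                          # grow the XFS file system into the new space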

 

For Decoders, Concentrators, and Hybrids, I've put together the commands that you need in the attached *Commands.txt text files to set up the storage for those hosts. I recommend running all of these scripts to build the partitions, volume groups, and logical volumes after you run nwsetup-tui, but *BEFORE* you install the services on the hosts. A few things to note:

  • I name the volume group "vg01" for the sake of brevity. The name you assign does not matter at all.
  • In Step 5, I assign storage to the "root" folder for each respective service; /var/netwitness/decoder for Network Decoders, /var/netwitness/concentrator for Concentrators, and /var/netwitness/logdecoder for Log Decoders. This is not required, but I prefer to create these volumes so that I can monitor them in case they fill up. Note: they must have at least 5GB of storage assigned, but larger VMs can have as much as 30GB.
  • Also in Step 5, you will need to replace the lv sizes with the proper sizes based on the Installation Guide and/or your RSA NetWitness Platform engineer. In my scripts, I assign specific sizes to every volume except the last one, which I then assign whatever free space is left with the "100%FREE" command.
  • For Step 10, I wrote that so that you can copy and paste it directly into an SSH session into the /etc/fstab file on the host. You can paste that directly to the bottom of the existing file. Once that is done, before you install services, make sure to reboot the host to make sure there aren't any errors in that fstab file. The syntax is very particular and any errors will cause the system to fail to come up. If that happens, just open a Console window to the machine, hit CTRL+D to enter maintenance mode, and then fix the fstab file.
  • I want to say this again because it's very important: after adding your changes to the fstab file, reboot the machine and make sure your syntax was correct!

Just view the *Commands.txt file attached to this post that corresponds to the type of host you're trying to install.

 

Install Services

This step is straightforward. If you haven't already, go to Admin --> Hosts and enable the host. Then install the services just as outlined in the Installation Guide.

 

Validate Folder Sizes - RSA NetWitness Platform Databases

In order to properly roll off the oldest entries in NWDB (NetWitness Database, our proprietary database format), we have to make sure that the RSA NetWitness Platform knows how much storage each database has to fill. Navigate to Admin --> Services, and for any Concentrator or Decoder/Log Decoder service, go to the Explore page. Expand the "database" menu item on the left-hand side, and click on "config". Here I show the page for an RSA Log Decoder service on a physical Endpoint Log Hybrid:

The sizes you see there are 95% of the corresponding folders we built using the provisioning commands, measured in 1,073,741,824-byte blocks. If you want to be exact, you can run "df --block-size=G", multiply a folder's size by 95%, and round to two digits to get the value RSA NetWitness Platform will place in the corresponding line above. Once the data in one of these folders exceeds these limits, RSA NetWitness Platform rolls off data.
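For example, with a hypothetical 1000G packet volume:

df --block-size=G /var/netwitness/decoder/packetdb
# 1000 x 0.95 = 950, so expect the packet database line above to read ~950.00 GB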

 

If you followed this guide and the Virtual Host Installation Guide, you will see folder sizes here that match what you provisioned. But what if they don't match or you made a mistake? Well, you can reset those by right-clicking on the "database" menu item and clicking "Properties":

 

At the bottom-right of the window, the Properties pane will open up. Select "reconfig" from the drop-down and click the Send button:

You can see that these values match what we saw in the previous screen. If these values still don't look correct - usually, if they are all the same - then your folders aren't mounted to separate logical volumes. If these values do look correct, you can remove the "=xx.xxTB" or "=xx.xxGB" from the entries on the previous screen. Then, back in the Properties pane, in the Parameters box, type update=1 and click Send again. It will append those values to the appropriate entries at the top, though you'll have to refresh the screen to see the update.

 

The indexes for each of these services have a separate entry. On the Explore page, you will see a menu item called "Index", and the settings are under the "config" sub-menu. Just like above, if you need to reset the folder size for that, you can right-click on "Index" and run the reconfig commands like before.

 

Validate Thresholds - MongoDB

In addition to NWDB, NetWitness also stores Endpoint scan results (primarily, what you see in Navigate --> Hosts) in MongoDB on the Endpoint Log Hybrid in the /var/netwitness/mongo folder. NetWitness does not display the folder sizes in the Endpoint Server service's Explore page as it does for the services above. Instead, it just looks at the amount of storage in the /var/netwitness/mongo folder or, if that isn't separately partitioned, in the /var/netwitness folder. It then compares the current usage to the value in the "rollover-after" setting here:

Your system may not use this setting if your Data Retention policies (found at Admin --> Services --> Endpoint Server --> Config --> Data Retention Scheduler tab) don't already roll over data before the folder hits 80%. You should also be aware of the settings under endpoint/data-store-thresholds:

If the storage in the corresponding folder (/var/netwitness/mongo if it's partitioned separately, and /var/netwitness if it's not) crosses these thresholds, you will eventually receive Health & Wellness alerts that correspond to those thresholds.

 

Minimum Available Space - The Key to Reliability

The other settings you may have noticed in the previous screenshots, which we ignored, were the <database_name>.free.space.min settings. A given database can grow past the maximum size we set up above with no issues, but capture/aggregation will stop if there is less free space than what is specified in the free.space.min setting for the corresponding service. Just as the folder size above is set to 95% of the total volume size, the free.space.min is set to 0.865% of the total size by default. In both cases, the default setting can be replaced manually with whatever you would like to enter. For most large VMs, the default is fine. However, for smaller hosts capturing small amounts of data, this default may be a bit high and can be adjusted.

 

Please note: the indexes do not have a similar free.space.min setting, and capture/aggregation will continue to run, even if the index volumes are essentially full.

 

For Mongo, you should also be aware of the settings under Admin --> Services --> Endpoint Server --> Explore --> endpoint/data-store-thresholds:

If the storage in the corresponding folder (/var/netwitness/mongo if it's partitioned; /var/netwitness if it's not) crosses the warning-percent or fatal-percent levels, you will receive Health & Wellness alerts of the corresponding severity.

 

Monitoring Part 1: Folder Sizes

As I mentioned in the Overview, for small hosts (roughly <1TB of total storage), I recommend monitoring your volumes to make sure that they don't fill up. To do this, I modified a script I found here to monitor file system usage:
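The attached script isn't reproduced in full here, but its core logic amounts to something like the following sketch (the real script's switches and message format differ):

#!/bin/bash
# syslog any non-temp/non-boot mount at or above 90% usage
SYSLOG_HOST=10.10.10.10    # the real script takes this via the -n switch

df -hP | grep -vE 'tmpfs|/boot' | awk '0+$5 >= 90 {print $6, $5}' | \
while read mountpoint pct; do
    logger -n "$SYSLOG_HOST" -P 514 "diskAlert host=$(hostname) mount=$mountpoint usage=$pct"
done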


It pulls back every folder other than temp and boot folders, and if any is at 90% or higher, it will generate a syslog message, sent to the IP designated by the -n switch (10.10.10.10 in the image above). I've attached that script below as checkVolumeSizes.sh. (Remember, use chmod to make it executable!) If you run crontab -e from an SSH terminal, the RSA NetWitness Platform's underlying CentOS OS will open vi and allow you to set a schedule to run the script. I imagine most of you reading this are familiar with crontab syntax, but if you're not, or if you want to design something overly tricky, this site takes all the work out of it for you: https://crontab.guru/.

 

The messages generated will look like this:

You can ingest that into any system that can ingest syslog messages and alert on it as you see fit. Seeing as the RSA NetWitness Platform *IS* a SIEM, it seemed only right to go ahead and monitor this using the RSA NetWitness Platform itself. The first step involved in that is properly parsing the message, so I built a parser for it using the NetWitness Log Parser Tool (download here: https://community.rsa.com/docs/DOC-94172; learn how to use it here: RSA ESI Beta 3 - YouTube and Parser Development When No Message ID Exists - YouTube). It took maybe 5 minutes.

 

But there aren't any out-of-the-box meta keys meant to store the size of logical volumes, and I wanted to include that in the e-mail I send to myself, so I added a meta key to the RSA NetWitness Platform for that. If you use my parser you *MUST* create a custom meta key in your system in order for the parser to work properly. Add the custom meta key to the table-map-custom.xml file on the Log Decoder where you are directing these messages.
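For reference, an entry in table-map-custom.xml looks like the following; the key name disk.free is purely illustrative, so use whatever name your parser actually writes:

<mapping envisionName="disk.free" nwName="disk.free" flags="None" format="Text"/>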


You can find that attached as table-map-custom.txt. I didn't want to call it table-map-custom.xml because it needs to be added to the existing file, not pasted over the existing file in its entirety.

 

Now, download nwdiskalert.envision, navigate to Admin --> Log Decoder --> Config, click the Parsers tab, and upload that file. After uploading, if you want to make sure the Log Decoder reloaded its parsers, you can switch from Config to Explore:



Once the page loads, expand the "decoder" menu, right-click on "parsers", and choose "Properties".




In the Properties pane, select "reload" from the drop-down menu and then click Send. Now the parsers have been reloaded and you're all set to ingest these messages!

 

Monitoring Part 2: ESA Correlation Rules

I built three ESA rules to monitor my file system at home, one each for medium, high, and critical severity alerts. Here is how I classify each:

  • Medium Severity:
    • Goal:
      • Monitor folders that shouldn't ever fill up when they reach high levels of utilization, but won't cause any service issues. 
    • Rules:
      • Any of the following folders are at least 90% but no more than 94% disk usage:
        • /home
        • /var/log
        • /var/netwitness
        • /var/netwitness/concentrator
        • /var/netwitness/decoder
        • /var/netwitness/logdecoder
  • High Severity:
    • Goals:
      • Monitor folders that shouldn't ever fill up when they reach extremely high levels of utilization, but won't cause any service issues
      • Monitor folders that could cause service interruption once they pass 95% (which is where many of them will sit most of the time) but haven't yet reached a point where service interruption will occur
      • Monitor the mongodb folder if it reaches concerning levels
    • Rules:
      • Any of the following folders are at least 95% but no more than 97% disk usage:
        • /home
        • /var/log
        • /var/netwitness
        • /var/netwitness/concentrator
        • /var/netwitness/decoder
        • /var/netwitness/logdecoder
        • /var/netwitness/concentrator/index
        • /var/netwitness/decoder/index
      • Any of the following folders are at 96% or 97%:
        • /var/netwitness/concentrator/sessiondb
        • /var/netwitness/concentrator/metadb
        • /var/netwitness/decoder/sessiondb
        • /var/netwitness/decoder/metadb
        • /var/netwitness/decoder/packetdb
        • /var/netwitness/logdecoder/sessiondb
        • /var/netwitness/logdecoder/metadb
        • /var/netwitness/logdecoder/packetdb
    • The /var/netwitness/mongo folder is at least 90% and no more than 94%
  • Critical Severity:
    • Goals:
      • Monitor folders that shouldn't ever fill up when they reach critical levels of utilization
      • Monitor folders that could cause service interruption once they pass 97% and will soon - or are currently - causing service interruption
      • Monitor the mongodb folder if it reaches its "fatal-percent" setting
    • Rules:
      • Any of the folders in the High Severity list are at 98% or above
      • The /var/netwitness/mongo folder is at 95% or above

 

You can find those attached as nwDiskMonitoringESARules_<severity>_Basic.txt. You might ask yourself, "Why did he call them 'Basic'?" Well, that's because I actually built more detailed rules in my lab to monitor the free size returned from the event logs. It's absolutely overkill, and it causes the rules to look like this:

Do you really want to do that to yourself? You really shouldn't, but if you insist, feel free to reach out to me and I'll send you those rules as well.

 

Monitoring Part 3: Generating Notifications

When these rules detect something, of course you'll want to generate an e-mail notifying you of the current state. I use a single notification template for all three ESA rules; it's in the attached file nwDiskMonitoringNotificationTemplate.txt. The template breaks down like this:

  • Lines 1 - 20: Builds a banner at the top of the e-mail that is yellow for medium alerts, orange for high, and red for critical
  • Line 25: Prints the time the event was generated
  • Line 27: Prints the IP of the RSA NetWitness Platform host that generated the event log
  • Line 29: Prints the folder that the alert is related to
  • Line 31: Prints the % utilization of the folder
  • Line 33: Prints the amount of free space, in MB, left in that folder
  • Line 35: Generates a hyperlink to the raw event log in the RSA NetWitness Platform; make sure you edit both the <NW_URL_or_IP> and the device ID (mine is 6)

(Have questions about any other items in this notification template? Check out my other relevant blog post here: Building the Notifications of Your Dreams in the RSA NetWitness Platform.)

 

Once you've updated those items, place it under Admin --> System --> Global Notifications --> Template (tab), and make sure you select that template when adding your ESA Rules. You can also build an Incident Rule in the RSA NetWitness Platform if you want to generate incidents for these alerts. Here is mine, for reference:

 

Summary

I can't emphasize enough that the Virtual Host Installation Guide has very comprehensive instructions for setting up a virtual RSA NetWitness Platform host, and you should make sure you follow those instructions. However, following some of the additional steps included in this guide can give you peace of mind that your RSA NetWitness Platform environment is running smoothly and collecting your critical security forensic information.

 

Future note: I plan to build some Event Source Monitoring rules to make sure that my hosts are still sending logs. For example, the packetdb folder on your Decoders and Log Decoders should reach 95% eventually and then roll off data, while your Concentrators should reach 95% on their metadb folder. Those should continue to generate logs once they hit 90% utilization at every interval you specified in the cron job. If I ever get the free time to create those, I'll update this post with that information. If someone wants to build that on their own, be my guest!!

Introducing RSA NetWitness Platform's support for AWS VPC Traffic Mirroring!

 

By partnering with AWS and integrating with AWS VPC Traffic Mirroring, customers are able to access the right virtual traffic and network metadata from AWS environments. AWS VPC Traffic Mirroring allows users to capture and inspect network traffic and analyze packets without using any third-party packet forwarding agents. The solution provides insight into and access to network traffic across VPC infrastructure.

 

Packets can now be captured, retained, analyzed and stored in the AWS cloud, bringing additional visibility and security with the RSA NetWitness Platform.  With this agentless packet capture capability, we’re able to provide analysts the context they need to understand the threats they’re investigating.  By combining network visibility with other sources such as logs, endpoint, and NetFlow, we’re able to provide a single view to the analyst!

 

RSA NetWitness Platform enables customers to obtain the visibility needed to secure critical infrastructure, and empowers any analyst to identify, understand, and mitigate advanced threats.  RSA NetWitness Platform's integration with AWS enables customers to close the visibility gap created by workloads in the cloud.  This solution provides flexible AWS deployment options, allowing NetWitness components to be deployed in either a Full Stack (all cloud) or Hybrid (on premise & cloud) configuration.

 

Hybrid Deployment

RSA NetWitness - AWS VPC Traffic Mirroring

 

For technical implementation details, see our AWS Deployment Guide

It often happens to me that while I am testing new alerts and incident aggregation rules, I find that the aggregation condition(s) I chose in my Incident Rule are not what I want.  While I could re-create the raw alerts from scratch, I wanted an easier method to tell the Respond engine to re-apply its aggregation rule policies on the alerts that already exist in the database.

 

To be clear, the Respond engine is always attempting to apply all active and valid Incident Rules against un-aggregated and un-affiliated alerts in the database -- that is, any alert that has not been previously aggregated into any incident can be automatically aggregated into an incident if an incident rule with matching conditions is changed/created.  But for previously aggregated alerts whose incidents have been deleted (leaving the alerts un-aggregated but previously-affiliated), the Respond engine will not attempt to re-aggregate them.

 

So my goal, then, was to get the Respond engine to include these previously-affiliated alerts in its aggregation attempts.  To achieve this, the alerts simply needed to be updated to remove their previously-affiliated status.  And to make it easy to change dozens or even hundreds of alerts at once, I wrote a simple shell script (attached to this blog and pasted below) to do it all for me.

 

#!/bin/bash
#
#grab the deploy_admin password
DEPLOY_PW=$(security-cli-client --get-config-prop --prop-hierarchy nw.security-client --prop-name platform.deployment.password --quiet)

#set a desired time range to query for alerts
#examples: "24 hours ago" or "14 days ago" or "4 weeks ago"
timeRange=$(date +%s%N -d "30 days ago" | cut -b1-13)

#identify primaryESA host
primaryESA=$(echo -e "use orchestration-server\ndb.host.find({installedServices:\"ESAPrimary\"},{hostname:1})" | mongo admin -u deploy_admin -p $DEPLOY_PW --quiet | grep -Po "hostname.*\"" | sed -e "s/hostname.\{5\}\|\"//g")

#change status on all alerts that were part of a deleted incident
#within the timerange from "REMOVED_FROM_INCIDENT" to "NORMALIZED"
echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

A couple notes on the script:

  • I used one extremely generic parameter (timestamp within last 30 days) to limit the database query and update operation (line 15)
    • you should feel free to modify the timeRange (line 8) to suit your needs
    • you should also feel free to (carefully) modify the database query to focus on specific alerts in your environment
      • for example, given the following raw alert:

 

...you could change line 15 and add:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.moduleName\":\"Alert with source and destination IP values\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

...or:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.events.ip_src\":\"192.168.20.20\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

...or:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.moduleName\":\"Alert with source and destination IP values\"},{\"originalAlert.events.ip_src\":\"192.168.20.20\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

  • a successful run of the script will produce output like this, showing you how many alerts in the database were modified (3, in this case):

 

Of course, I recommend testing this (and most everything else) in a pre-prod or test NetWitness environment, if you have one.  And should you have any questions about what might be a good and/or valid database query, the Link community is always on hand to help (please have screenshots and/or specifics about your alerts ready...it's hard to help without knowing details...).

An administrator uploads custom YARA content to the RSA NetWitness Platform per the instructions in the documentation. It turns out they want to change or delete it, but the only options in the user interface are to disable or enable. (The naming of your custom YARA files will differ, reflecting the names given during upload.)

 

Can anything be done?

 

The answer is yes. The steps below explain how to manage custom YARA content via the command-line.

 

  1. Connect to the malware appliance via SSH and change to the YARA directory.
[root@malwareserver yara]# cd /var/netwitness/malware-analytics-server/spectrum/yara

 

  2. Find the custom files you want to delete.
    Rules are merged into a single file. It is unknown if you can modify that file to remove a single rule.
[root@malwareserver yara]# ll
total 492
drwxr-xr-x. 2 netwitness netwitness 6 Aug 20 15:23 error
drwxr-xr-x. 2 netwitness netwitness 4096 Aug 29 14:02 processed
-rw-r--r--. 1 netwitness netwitness 587 Jul 15 16:49 rsa_mw_pdf_artifacts.yara
-rw-r--r--. 1 netwitness netwitness 76289 Jul 15 16:49 rsa_mw_pe_artifacts.yara
-rw-r--r--. 1 netwitness netwitness 96334 Jul 15 16:49 rsa_mw_pe_packers.yara
drwxr-xr-x. 2 netwitness netwitness 6 Aug 20 16:03 watch
-rw-r--r--. 1 netwitness netwitness 317666 Aug 20 16:05 custom_merged_static_rules.yar

 

  3. Remove the file(s) or move it/them to a different directory.
[root@malwareserver yara]# rm -i custom_merged_static_rules.yar

 

  4. Change directory to the YARA processed folder, and remove (or move) the processed files.
[root@malwareserver yara]# cd /var/netwitness/malware-analytics-server/spectrum/yara/processed
[root@malwareserver processed]# rm -i custom_merged.yar

 

  5. Restart the Malware service.
systemctl restart rsa-nw-malware-analytics-server

 

After performing these steps, you can verify the removal in the RSA NetWitness Platform UI under Services > [name of malware server] > Config > Indicators of Compromise > YARA.

One of the changes introduced in 11.x (11.0, specifically) was the removal of the macros.ftl reference in notification templates.  These templates enable customized notifications (primarily syslog and email) using FreeMarker syntax. The 10.x templates relied on macros (which are basically just functions, in FreeMarker terminology) to build out and populate both the OOTB and (most likely) custom notifications.

 

If you upgraded from 10.x to 11.x and you had any custom notifications, there's a very good chance you noticed that these notifications failed, and if you dug into logs you'd have probably found an error like this:

The good news is there's a very easy fix for this, and it does not require re-writing any of your 10.x notifications.  The contents of the macros.ftl file that was previously used in 10.x simply need to be copied and pasted into your existing notification templates, replacing the <#include "macros.ftl"/> line, and they'll continue to work the same as they did in your 10.x environment (props to Eduardo Carbonell for the actual testing and verification of this solution).

 

Example:

 

...becomes:
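In plain text, the change looks something like this purely hypothetical illustration; the macro name is invented, and the real definitions live in the attached macros.ftl:

<#-- 10.x template -->
<#include "macros.ftl"/>
<@fmt_time alert.timestamp/>

<#-- 11.x template: the include line is replaced by the macro definitions themselves -->
<#macro fmt_time ts>${ts?number_to_datetime?string("yyyy-MM-dd HH:mm:ss")}</#macro>
<@fmt_time alert.timestamp/>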

 

I have attached a copy of the macros.ftl file to this blog, or if you prefer you can find the same on any 11.x ESA host in the "/var/netwitness/esa/freemarker" directory.

G Suite (formerly known as Google Business Suite or Google Apps for Business) is now supported for log collection using the RSA NetWitness Platform.  Collection is achieved via the G Suite Reports API (v1) and is enabled in RSA NetWitness via the plugin framework.

 

 

The G Suite API schema provides several types of events which can be monitored.  Below is the list of event types currently supported by this plugin:

 

  • access_transparency – The G Suite Access Transparency activity reports return information about different types of Access Transparency activity events.
  • admin – The Admin console application's activity reports return account information about different types of administrator activity events.
  • calendar – The G Suite Calendar application's activity reports return information about various Calendar activity events.
  • drive – The Google Drive application's activity reports return information about various Google Drive activity events. The Drive activity report is only available for G Suite Business customers.
  • groups – The Google Groups application's activity reports return information about various Groups activity events.
  • groups_enterprise – The Enterprise Groups activity reports return information about various Enterprise group activity events.
  • login – The G Suite Login application's activity reports return account information about different types of Login activity events.
  • mobile – The G Suite Mobile Audit activity reports return information about different types of Mobile Audit activity events.
  • rules – The G Suite Rules activity reports return information about different types of Rules activity events.
  • token – The G Suite Token application's activity reports return account information about different types of Token activity events.
  • user_accounts – The G Suite User Accounts application's activity reports return account information about different types of User Accounts activity events.

 

Suggested Use Cases

 

G Suite Admin Report:

 

  1. Top 5 Admin Actions: Depicts the top 5 actions by Admin
  2. Admin activity: Activities performed by admins
  3. App Token Actions: Displays details on app token actions in a pie chart
  4. Users Created and Deleted: Displays users created and deleted as a table chart including details on the user’s email, admin action, and admin email.
  5. Groups - Users Added or Removed: Displays information on Groups, with users added or removed as a table chart including details on the user email, admin action, group email, and admin email.

 

G Suite Activity Report:

 

  1. Activity by IP Address: Shows a table of actions with respect to IP addresses
  2. Login State Count: A pie chart that depicts the login states by count
  3. Logins from Multiple IPs: Shows logins from multiple IP addresses by user on a pie chart
  4. Most Active IPs: Shows a table with the most active IP addresses based on the number of events performed by that IP address
  5. Top 10 Apps by Count: Shows the top ten apps by count on a column graph
  6. Login Failures by User: Shows the login failures by user on a pie chart

 

Downloads and Documentation

 

Configuration Guide: Google G Suite 
Collector Package on RSA Live: Google Business Suite Log Collector Configuration
Parser on RSA Live: CEF (device.type='gsuite')

Overview

Sending a notification based on a critical or time-sensitive event seen in your environment is table stakes functionality for any detection platform. Alerting someone in a timely manner is important, but building a custom e-mail that includes relevant, concise information that an analyst can use to determine the appropriate response is just as important. As they work to juggle their daily priorities, they need to know whether an alert requires immediate attention or whether it's something they can filter as a false positive as time permits.

 

The RSA NetWitness Platform uses the Apache FreeMarker template engine to build its notifications, be they e-mail, syslog, or SNMP. For the purposes of this post, I'm going to focus on e-mail notifications, as the concepts apply to all notification types and e-mail is the most complex of the options.

 

Available Data

The first step is finding out what information you can include in your notification. All of that data can be seen in the Raw Alert section of an Alert in the Respond UI. That Raw Alert is formatted in JSON, and anything in there can be placed into a notification. To find that Raw Alert data, you can go to one of two places.

 

Location #1:

 

Location #2:

 

Example #1: Basic Email

Let's start with a basic example. I want to send an e-mail that includes the name, severity, and time of the Alert, as well as a link to the raw event (network or log) that generated the alert. Here is a snippet of the data from my Raw Alert (the full alert, with addresses changed to protect the innocent, is attached as raw_alert.json):
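Since the screenshot of that snippet isn't reproduced here, the fragment below is a hypothetical reconstruction of its shape; the field names (moduleName, severity, events, sessionid, time, analysis_service) follow the examples in this post, but every value is made up:

{
  "moduleName": "My Test Rule",
  "severity": 9,
  "events": [
    {
      "sessionid": 123456789,
      "time": 1554408000,
      "analysis_service": ["http no referer", "http six or less headers"]
    }
  ]
}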

 

Under Admin --> System --> Global Notifications, on the Template tab, I add a new template. Give it a name, choose the template type (we're going to select Event Stream Analysis for these), and then paste in the below code (also under example_1.html):
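The attached example_1.html is the authoritative version; the sketch below is only an approximation of the same ideas, assuming the hypothetical data model shown above (the row numbers discussed in a moment refer to the attached file, not to this sketch):

<html>
<body>
<#-- Color-coded banner: pick a background color based on severity -->
<#if severity gte 8>
  <#assign banner_color = "red">
<#elseif severity gte 5>
  <#assign banner_color = "orange">
<#else>
  <#assign banner_color = "green">
</#if>
<div style="background-color: ${banner_color}; color: white; padding: 10px;">Severity: ${severity}</div>
<#-- ?c prints the sessionid without comma grouping, so the URL stays clickable -->
<#assign sid = events[0].sessionid?c>
<p>Rule: ${moduleName}</p>
<#-- Convert epoch seconds to milliseconds, then format as a readable timestamp -->
<p>Time: ${(events[0].time * 1000)?number_to_datetime?string("yyyy-MM-dd HH:mm:ss")}</p>
<p><a href="https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/${sid}/AUTO">View Raw Event</a></p>
</body>
</html>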

 

Assuming a severity of 9, that gives an e-mail formatted like this (using Gmail):

 

Rows 1 - 20 give us a color-coded banner which highlights the severity of the incident. In rows 3 - 6, you can see that we're making a logical check for the severity to determine the background color of the banner. Row 22 (we'll come back to row 21) prints the rule name. Row 23 gives us the time and includes the field, the input format, and the output format. You can even take epoch time and adjust it for your local time zone, but that's another post. Row 25 builds a hyperlink to the raw event that generated the Alert. Keep in mind that by default, notifications will separate large numbers with commas, which is why row 21 is necessary. Without row 21, the notification link (which I highlighted in the e-mail screenshot) would include commas in the sessionid within the URL, which would obviously not work when clicked. Also, you will need to update two portions of the URL specific to your environment:

 

The [URL_or_IP] is self-explanatory. The [Device_ID] is different for every environment and for every service. If you log in to the RSA NetWitness Platform and navigate to the Investigate --> Navigate page and load values, the Device ID will be in the URL string in your browser, and it will correspond to the data source you've selected. In this example, my Broker has a Device ID of 6.

 

Above, we used https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/[Session_ID]/AUTO. This loads the "Default Session View" that each individual user defined in their Profile --> Preferences --> Investigation settings, which by default is "Best Reconstruction" view for network sessions and the "Raw Log" view for log events. Should you prefer to jump directly to other views, you can use these formats:

  • Meta View: https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/[Session_ID]/DETAILS
  • Text View: https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/[Session_ID]/TEXT
  • Hex View: https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/[Session_ID]/HEX
  • Packets View: https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/[Session_ID]/PACKETS
  • Web View: https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/[Session_ID]/WEB
  • Files View: https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/[Session_ID]/FILES

 

Great! Now we have a notification.

 

Example #2: Multiple Values

But what if we have an array of values like analysis_service here:

 

In order to print those multiple values out, we need to do some formatting with a FreeMarker macro. I'm pasting the following onto the bottom of my notification:
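Again, the authoritative code is in the attachment; this sketch shows the same approach, assuming the events structure from earlier (the line numbers in the next paragraph refer to the original code, not to this sketch):

<#-- Macro: print a single value as-is, or join an array of values with commas -->
<#macro value_of value>
  <#if value?is_sequence>
    <#list value as item>${item}<#if item_has_next>, </#if></#list>
  <#else>
    ${value}
  </#if>
</#macro>

<#-- Walk each event's meta keys and print only analysis_service -->
<#if events??>
  <#list events as event>
    <#list event?keys as key>
      <#if key == "analysis_service">
        Service Analysis: <@value_of event[key]/><br/>
      </#if>
    </#list>
  </#list>
</#if>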

 

Lines 1 - 11 iterate through any meta key that has more than one value and separate the values with a comma. Lines 13 - 22 print out Service Analysis with a comma-separated list of values. First, there is a logical test to see if there are any events in the first place. This was taken from the Default SMTP Template (Admin --> System --> Global Notifications --> Templates tab), which can be used to print out every meta key and all of its values. In my case, I altered it (or, well, Joshua Randall did and I borrowed it) to only apply to Service Analysis by adding a logical test (lines 16 and 19) and then only printing out that one meta key. Here is what that looks like:

 

 

If you would like to print out more than one key, you can add elseif statements like this:
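For instance, a sketch that also prints a hypothetical second key, analysis_session, reusing the value_of macro and the loop variables from the previous sketch:

<#if key == "analysis_service">
  Service Analysis: <@value_of event[key]/><br/>
<#elseif key == "analysis_session">
  Session Analysis: <@value_of event[key]/><br/>
</#if>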

 

Testing Your Syntax

So what if you want to use some FreeMarker concepts, but you want to see if they'll work before putting them into the RSA NetWitness Platform? Luckily, there is a tester put out by Apache here - https://try.freemarker.apache.org/.

 

In order to use it on your data, just copy that Raw Alert section from an Alert and paste it into the Data model box shown above. Then paste your FreeMarker code into the Template box and click Evaluate. Keep this in mind: this will not work the same as an RSA NetWitness Platform notification would. If I took the Raw Alert I used for my examples above along with the template I was using, I would not see the output I actually get from the RSA NetWitness Platform. This should ONLY be used to test some basic syntax concepts. For example, printing out UNIX Epoch Time in various formats, adjusted for different time zones, is something this helped me do.
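As a minimal sketch of that last point (the epoch value is made up; FreeMarker's time_zone setting handles the zone adjustment), paste something like this into the tester's two boxes:

Data model box:

epoch_ms=1554408000000

Template box:

<#-- Interpret the number as epoch milliseconds and render it in Eastern time -->
<#setting time_zone="US/Eastern">
${epoch_ms?number_to_datetime?string("yyyy-MM-dd HH:mm:ss z")}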

 

Summary

These concepts - along with some basic HTML formatting - give you the tools to build just about any notification you would want. I also recommend taking a peek at the Default SMTP Template I referenced above to use as a starting point for more advanced formatting. If you do some other interesting things or need help getting a notification to work, please post that in the comments below.

One of the most powerful features to make its way into RSA NetWitness Platform version 11.3 is also one of the most subtle in the interface.  11.3 now saves analysts one more step during incident response by integrating rich UEBA, Endpoint, Log, and Full Packet reconstruction directly into the incident panel.  This view is essentially the same as if you were looking at events directly in the Event Analysis part of the UI, or the Users (UEBA) part of the UI, just consolidated into the incident panel.  Prior to this improvement, the only way to view the raw event details was to open the event and click "Investigate Original Event", pivoting into a new query window.  That option still exists and may still be appropriate for some, but for those needing the fastest possible route to validating detection and event details, this feature is for you.

 

To use the new feature, for any individual event of interest that has been aggregated or added into an incident, you'll see a small hyperlink attached to the event on the left-hand side, labeled with one of "Network", "Endpoint", "Log", or "User Entity Behavior Analytics".  These labels correspond to the source of the event, and clicking one slides in the appropriate reconstruction view.

 

User Entity and Behavior Analytics (UEBA) view:

Network packet reconstruction view:

Endpoint reconstruction view:

Log reconstruction view:

 

Happy responding!

Starting in version 11.3, the RSA NetWitness Platform introduced the ability to analyze endpoint data captured by the RSA NetWitness Endpoint Agent (both the free "Insights" version and the full version). For more information on what RSA NetWitness Endpoint is all about, please start with the RSA NetWitness Endpoint Quick Start Guide for 11.3.

 

One of the helpful new features of the endpoint agent is that it not only focuses the analyst on the "Hosts" context of their environment, but also gives them full visibility into process behaviors and relationships whenever suspicious behaviors have been detected by the RSA NetWitness Platform, or when investigating alerts from others.

 

The various pivot points bring an analyst into Process Analysis in the context of a specific process, including its parent and child process(es), based on the current analysis timeline, which is adjustable if needed.

 

Example Process Analysis view, drilling into all related events recorded by the NW Endpoint Agent

 

Example Process Analysis view, focused on process properties (powershell.exe) collected by the NW Endpoint Agent

 

The feature is simple to use when RSA NetWitness Endpoint agent data exists, and is accessible from a number of locations in the UI depending on where the analyst is in their workflow:

 

Investigate > Hosts > Details (if endpoint alerts exist):

Investigate > Hosts > Processes (regardless of alert/risk score): 

 

Investigate > Event Analysis:

 

Respond > Incident > Event List (card must be expanded):

 

Respond > Incident > Embedded Event Analysis (reconstruction view):

 

Happy Hunting!
