RSA NetWitness Platform

Over the past year, I have posted multiple blogs in which I perform APT (Advanced Persistent Threat) emulation and analyse the forensic footprint left behind after the attack using the NetWitness Platform. In this post, I take a look at an adversary emulation framework from MITRE named CALDERA (Cyber Adversary Language and Decision Engine for Red Team Automation):

This framework allows you to automate adversary behaviour based on the MITRE ATT&CK framework (https://attack.mitre.org/matrices/enterprise/), and takes a lot of the preparation work out of setting up attack scenarios.

 

For the purposes of this post / demo, I used a service that exposes local servers behind NATs and firewalls to the public internet over secure tunnels - in this case, localhost.run (http://localhost.run/). I've covered similar services in the past, and all of them should be blocked in corporate environments. With this in mind, I did not blur the URL in the screenshots; however, I have since killed that connection, so the address may now belong to someone else if you try to reach it - for security reasons, I would suggest you don't.

 

This is not an attack framework like those covered in my other posts; it is more of an emulation framework. That said, it could be more suitable if you are just starting out in APT emulation and want to see what you can and can't detect. As with the other posts, I will not go into detail on how to set up CALDERA, as there is plenty of information on that already available.

 

Overview

CALDERA ships with an agent named Sandcat, also referred to as 54ndc47. This agent is written in Go for cross-platform compatibility, and it is the agent we will deploy on the endpoint(s) we want to execute our operations against. Navigating to the Sandcat plugin, we are presented with two options to deploy the agent:

  • Option one generates commands on the fly for the specific operating system selected
  • Option two supplies a URL you can visit from the endpoint to download and execute Sandcat manually

 

For this blog post, I opted for the PowerShell command to deploy the agent. I ran this on my endpoint, and you can see the connection was successful and the agent started to beacon:

 

I chose one of the default adversaries, hunter, for my operation; the output can be seen below. At a high level, this emulation searches for sensitive files, which it then collects, stages, and exfiltrates:

 

 

NetWitness Packets

Firstly, let's take a look into NetWitness Packets. Focusing on outbound traffic (direction='outbound') and the HTTP protocol (service=80), we can place a focal point on outbound HTTP communication. From here, we can view the characteristics of the HTTP traffic by opening the Service Analysis meta key. Drilling into http suspicious 4 headers, http post no get, and http suspicious no cookie, we are left with 20 events:

 

Next, we can start to view other metadata related to this traffic. Opening the Client Application meta key, we can see a user agent of go-http-client/1.1 - this is because the agent is written in Go and does not alter the default user agent. The server is Python/3.7 AIOHTTP/3.4.4, which is also worth noting. The filenames associated with this traffic are also interesting: instructions, results, and ping. These are very descriptive - the agent is receiving instructions, returning results, or simply checking in:

 

This traffic could easily be picked up by adding the following logic to an application rule:

(client begins 'go-http-client') && (directory = '/sand/') && (filename = 'instructions','ping','results') && (server contains 'aiohttp')

            

 

Delving into the Event Analysis view, we can see Base64-encoded data in the bodies of the HTTP POSTs. As of NetWitness 11 and upward, decoding Base64 can be done directly within the UI by simply highlighting the text and selecting Decode Selected Text from the popup:

 

The Sandcat agent Base64 encodes the whole instruction sent to the endpoint; this instruction is in JSON format. The actual commands to be executed are again Base64 encoded within the JSON record. To decode the commands within, I chose to run the additional Base64 through another tool called CyberChef (https://gchq.github.io/CyberChef/):
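If you prefer to script the same double decode, here is a minimal Python sketch; the "command" field name is an assumption for illustration, as the exact JSON layout depends on the CALDERA version:

import base64
import json

def decode_sandcat_instruction(raw_body):
    # First layer: the whole instruction is Base64-encoded JSON.
    instruction = json.loads(base64.b64decode(raw_body))
    print(json.dumps(instruction, indent=2))
    # Second layer: the embedded command is Base64-encoded again.
    inner = instruction.get("command")  # field name assumed for illustration
    if inner:
        print(base64.b64decode(inner).decode("utf-8", errors="replace"))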

 

The traffic for Sandcat is very easy to detect in NetWitness Packets. It does not attempt to hide itself or blend in with normal traffic, this is most likely by design as this is not an attack framework, but an emulation framework.

 

NetWitness Endpoint

Drilling into boc = 'runs powershell' and boc = 'in root of users directory', we can see a file called sandcat.exe executing out of C:\Users\Public with arguments to connect to the CALDERA server, and a large number of PowerShell commands being executed by it - these are the commands run by the Sandcat agent to perform the operation laid out at the beginning of this post. The metadata values writes executable to root of users directory and evasive powershell used over network under the BOC meta key would also have led to sandcat.exe and all of its associated commands:

 

 

Custom Attack

The out of the box adversaries are great for getting to grips with CALDERA, but I decided to crank it up a notch and make my own. This operation involved some discovery of systems, dumping of credentials, and lateral movement as can be seen below. These were all out of the box operations, I just added them to make my own adversary:

 

Analysis

Delving into NetWitness Endpoint, we can see that there is a large quantity of metadata under the BOC meta key that tags all of the actions CALDERA performed:

 

CALDERA Ability Executed | Technique | NetWitness Metadata Created | Description
Find system network connections | T1049 | enumerates network connections | Adversaries may attempt to get a listing of network connections to or from the compromised system they are currently accessing or from remote systems by querying for information over the network.
Find user processes | T1057 | queries users logged on local system | Adversaries may attempt to get information about running processes on a system. Information obtained could be used to gain an understanding of common software running on systems within the network.
Run PowerKatz | T1003 | runs powershell with http argument | The download of Mimikatz
Run PowerKatz | T1003 | runs powershell downloading content | The download of Mimikatz
Run PowerKatz | T1003 | evasive powershell used over network | The PowerShell command used to download Mimikatz
Run PowerKatz | T1003 | powershell opens lsass process | Credential dumping is the process of obtaining account login and password information, normally in the form of a hash or a clear text password, from the operating system and software. Credentials can then be used to perform Lateral Movement and access restricted information.
Net use | T1077 | maps administrative share | Adversaries may use this technique in conjunction with administrator-level Valid Accounts to remotely access a networked system over server message block (SMB) to interact with systems using remote procedure calls (RPCs), transfer files, and run transferred binaries through remote Execution.
Net use | T1077 | lateral movement with credentials using net utility | The lateral movement using net.exe and explicit credentials.

 

To get a better view of the commands that took place, I like to open the Events view; from here, I can see what was executed in an easier-to-read format:

 

Conclusion

Getting into APT emulation is not an easy task; CALDERA, however, makes it a whole lot easier. It is a great tool for testing your platform's abilities against the MITRE ATT&CK matrix and seeing what you can and can't detect, as well as for getting a better understanding of how some of those techniques are actually performed - this will massively improve you as an analyst and improve your organisation's defence posture. We only covered a subset of the available techniques, as the full content is too extensive to cover completely in a single post.

I was doing some hunting through our lab traffic today and came across some strange-looking traffic; it turned out to be Rui Ataide playing around with a new DNS C2 named WEASEL, which can be found here: GitHub - facebookincubator/WEASEL: DNS covert channel implant for Red Teams. From this, we decided to put together a quick blog post to go over how the traffic looks in NetWitness Packets, so it is in a slightly different format to my usual posts on this topic. We may at some point expand this into a full post if needed.

 

 

NetWitness Packets Analysis

As this tool uses DNS for its communication, we first need to place our focus on DNS traffic. We can do this with a simple query, service=53 - from here, I like to open the SLD (Second Level Domain) meta key and look for suspicious-sounding SLDs, or SLDs that are quite noisy. From the screenshot below, doh stands out as a good candidate:

 

It is also a good idea to do the same with the TLD meta key to see if anything suspect stands out. We blurred out the domain we were using in this instance, but the .ml TLD should stand out as suspect, as it is free to register and commonly used for malicious purposes:

 

Upon drilling into the suspect SLD (sld='doh'), we can open the Hostname Alias Record meta key to see how many unique values are associated with that SLD. This type of DNS activity requires uniqueness in the requests it makes so that the queries are not answered from cache, which is why you will see a large number of unique alias.host values for a single SLD when this type of activity is performed over the DNS protocol. This is depicted nicely by the below screenshot:

You would also typically see a large number of these DNS requests over a short period of time; however, this is entirely dependent on the C2 and the beaconing interval set. For the above, the time range was ~12 hours:
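The uniqueness heuristic itself is easy to prototype outside the product. A minimal Python sketch, assuming you have exported a list of queried hostnames (for example, the alias.host column of an Events export):

from collections import defaultdict

def unique_hosts_per_sld(hostnames):
    # Count unique fully-qualified names seen under each second-level domain;
    # unusually high counts are candidates for DNS tunnelling / C2.
    counts = defaultdict(set)
    for name in hostnames:
        labels = name.strip(".").split(".")
        if len(labels) >= 2:
            counts[labels[-2]].add(name)
    return sorted(((len(v), sld) for sld, v in counts.items()), reverse=True)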

 

 

This traffic could easily be detected by using the following logic in an application rule:

(alias.host regex '^[0-9a-vx-z-]{2,52}w{0,6}\.[0-9a-f]{2}\.[0-9a-f]{4}\.') && (dns.querytype = 'aaaa record')

 

For other tools, the following regex should work too, it may however need some adaptation for each specific tool:

^[0-9a-vx-z-]{2,52}w{0,6}\.[0-9a-f]{2}\.[0-9a-f]{4}\.
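Before deploying, the pattern can be sanity-checked against sample query names; the Python sketch below uses fabricated hostnames purely for illustration:

import re

pattern = re.compile(r"^[0-9a-vx-z-]{2,52}w{0,6}\.[0-9a-f]{2}\.[0-9a-f]{4}\.")

samples = [
    "0a1b2c3d4e5f.1a.2b3c.doh.example.ml",  # WEASEL-style name (fabricated)
    "www.example.com.",                     # benign
]
for name in samples:
    print(name, "->", bool(pattern.match(name)))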

 

With that being said, altering the behavior of C2 communications like WEASEL's can easily be done, which would mean this application rule / regex would no longer trigger - this is why it is always important to review the behaviors of protocols and not rely solely on signatures.

 

Conclusion

DNS C2s are becoming a more prevalent way to move information into and out of a network. DNS is a transmission medium that often gets forgotten, as most people associate malicious communications with protocols such as HTTP or SSL and tend to only block those. DNS is by nature very noisy - more so in this case, as there is a finite amount of information you can transfer in AAAA records, and by design WEASEL also opted for shorter query names to evade detection; together, these further increase the amount of DNS traffic generated. Additionally, DNS responses are cached for a period of time, so query names need to be unique and can't easily be reused if they are to make it back to the attacker's infrastructure rather than being answered from a cache. This inherently noisy behavior of DNS C2 makes it slightly easier to detect when the right tools are in place.

I have recently been posting a number of blogs regarding the use of the RSA NetWitness Platform to detect attackers within your environment. As the list of blogs grows, it is becoming increasingly difficult to navigate through them easily. To combat this, this blog post contains references to all the other blog posts in the Profiling Attackers Series, and it will be updated as new posts are made.

 

 

 

 

 

 

 

Special thanks to Rui Ataide for his support and guidance for these posts.

Amazon Detective is an Amazon Web Services (AWS) threat hunting platform (pre-release at the time of this writing) that offers a deep, cloud-native view of AWS resource data and history, optionally in the context of an Amazon GuardDuty alert. Amazon Detective augments threat detection systems like the RSA NetWitness Platform by providing details about the size and scope of AWS-specific security threats, and by helping reconstruct “security events” affecting cloud assets and infrastructure.

 

We are pleased to announce the upcoming release of a new RSA NetWitness Platform integration with Amazon Detective. This integration will allow an analyst to pivot from an RSA NetWitness investigation directly into Amazon Detective to view the related AWS resource as needed. In addition, any RSA NetWitness Logs customers who are consuming AWS GuardDuty alerts can also pivot directly to the related finding in Amazon Detective.

 

 

 

 

Typical use case scenario for this integration

 

 

This integration provides several benefits:

 

  • Reduced investigation time by eliminating the manual pivot (RSA NetWitness takes you right to the entry)
  • Added cloud-native visibility from Amazon Detective to dive deeper into an investigation
  • Analysts can use both tools for increased context around the incident, likely resulting in faster investigations

 

How does the integration work?

Customers can enable this integration via the built-in custom context menu actions feature within RSA NetWitness. These actions show up when you right-click on an appropriate meta key's value (e.g. an IP address, domain name, or GuardDuty finding ID) within the Investigate view and the Event Reconstruction view.

 

Configuring a custom right-click action using the UI wizard

 

Clicking one of these will open a new browser window directly into Amazon Detective and query the meta key value in the appropriate context.  From there the analyst can move around and investigate related data.

 

User pivoting on meta within the Events view

 

 

 

Landing page the user is directed to by the browser

 

 

What kind of things can I pivot on?

There are a number of pivot options. Most searchable data points within Amazon Detective that have an equivalent meta key within the RSA NetWitness Platform can be integrated. Below are the types of entities we have identified as candidates to start with:

 

AWS Concept | RSA NetWitness Meta Key
Finding (Id) | operation.id
Entity (IpAddress) | ip.src, ip.dst, alias.ip
Entity (AwsAccount) AccountId | reference.id1
Entity (AwsRole) PrincipalId | user.id
Entity (AwsUser) PrincipalId | user.id
Entity (UserAgent) | user.agent
Entity (InstanceId) | agent.id

  

Summary

Through tight UI integration, RSA NetWitness analysts gain a powerful addition to their threat hunting arsenal in Amazon Detective. The integration is straightforward to implement and customize, and it will save your analysts valuable investigation time.

 

Amazon Detective is still in preview; once AWS releases it for general availability, we will add links to the official integration guides and documentation in this post, as well as in the RSA Link Integrations Catalog. Please follow this post for updates. For more information on Amazon Detective, see Amazon Detective on the AWS Blog, or watch for it at AWS re:Invent 2019 along with the announcement of our collaboration on this integration.

 

Good hunting!

  

Command and Control platforms are constantly evolving. In one of my previous blog posts, I detailed how to detect PoshC2 v3.8:

 

 

Since then, Nettitude have revamped PoshC2 and released v5.0. This blog post takes a look at the new and improved version and goes into some detection mechanisms - this time, solely over SSL.

 

Review

Reviewing the configuration for PoshC2, it appears it still generates a certificate with the same default information as its predecessor. This is not to say that it is not easy to change - you could simply edit the Python file that generates the certificate - or that it will not change in the future, but it is worth noting:

 

Delving into NetWitness Packets, we can see this information is extracted and gets populated under the meta keys shown below:

 

This makes detecting the default certificates of PoshC2 with application rules a simple task: we need only look for one of the metadata values above being created, as they are highly distinctive:

alias.host = 'p18055077' || ssl.ca = 'pajfds' || ssl.subject = 'pajfds'

The certificate is also self-signed and generated when the PoshC2 server is started, so we also see some interesting metadata values populated under the analysis.service meta key:

 

The certificate issued within last day metadata value is relatively new and something to look out for within your environment. There are also other relatively new metadata values that will be populated based on the analysis NetWitness performs against the certificate; these are shown below:

analysis.service | Description | Reason
certificate long expiration | Certificate expires more than two years after it was issued. | Certificate validity is usually capped at two years. Longer-lived certificates may be suspicious.
certificate expired | Certificate was expired when presented. | Expired certificates are invalid and won't be presented by most legitimate hosts.
certificate expired within last week | Certificate was expired by less than a week when presented. | Expired certificates are not expected to be presented by most legitimate hosts.
certificate issued within last day | Certificate was presented less than a day after it was issued. | New certificates may be suspicious in combination with other characteristics of the session.
certificate issued within last week | Certificate was presented less than a week after it was issued. | New certificates may be suspicious in combination with other characteristics of the session.
certificate issued within last month | Certificate was presented less than a month after it was issued. | New certificates may be suspicious in combination with other characteristics of the session.
certificate anomalous issued date | Certificate issued date is malformed, nonsensical, or invalid. | Invalid or malformed certificates are suspicious.
certificate anomalous expiration date | Certificate expiration date is malformed, nonsensical, or invalid. | Invalid or malformed certificates are suspicious.

 

Looking further into the configuration, there are a few other interesting default settings. The User Agent string is hard-coded, but of course you would need SSL inspection to see it, or for the beacons to be over HTTP - with that being said, it is a very common User Agent string and not a great indicator anyway. The default sleep, or beacon, is set to 5 seconds, and the jitter to 0.20 seconds - this makes the beacons stand out in NetWitness Packets:

 

Looking into the Navigate view and pivoting on the suspiciously named certificate, ssl.ca = 'pajfds', it is possible to see a beacon-type pattern in this traffic:

 

Delving into the Event Analysis view, we can obtain a better view of the cadence of communication. From here, you can see the very obvious beacon pattern coupled with a payload size that does not vary greatly - two great indications of automated check-in type behaviour:

 

With 11.3.1.0, there is a new feature that provides the ability to generate JA3 hashes for SSL traffic. It is not enabled by default, but the following configuration guide details how to enable it:

 

For more details on what JA3 hashes are, and how they can be useful, there is a great explanation from the creators available on GitHub:
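In short, a JA3 fingerprint is the MD5 of a string built from fields the client offers in its TLS Client Hello. A minimal Python sketch of the computation - the numeric values below are fabricated purely for illustration:

import hashlib

def ja3_hash(version, ciphers, extensions, curves, point_formats):
    # JA3 = MD5 of 'Version,Ciphers,Extensions,Curves,PointFormats',
    # where each list is dash-joined in the order offered by the client.
    fields = [str(version)] + ["-".join(str(v) for v in part)
                               for part in (ciphers, extensions, curves, point_formats)]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Fabricated Client Hello values, purely for illustration:
print(ja3_hash(771, [49195, 49199], [0, 10, 11], [23, 24], [0]))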

 

In this instance, a PowerShell payload was dropped onto the endpoint, and therefore it is PowerShell making the web requests. The way PowerShell sets up its TLS sessions has a unique(ish) JA3 fingerprint:

 

Perusing the available open source JA3 hash lists, we can see that we do indeed have a match for this hash, and it is PowerShell (Miscellaneous/ja3_hashes.csv at master · marcusbakker/Miscellaneous · GitHub). While this is not an atomic indicator for PoshC2, it is a great way to detect PowerShell making web requests, and a great starting point for threat hunting that could lead you to C&C servers such as PoshC2 where a PowerShell payload was used:

 

The following screenshot shows the PowerShell payload created by PoshC2 and the one I used to infect the endpoint:

 

These JA3 hashes could be pulled in as a feed, so that the associated hash values generate metadata under a key of your choosing, or you could create a right-click context menu action (attached to this post):

 

While not much has changed in terms of endpoint indicators and analysis, in this post we opted to cover the (by default) encrypted traffic generated by this framework in a bit more detail, while also highlighting some of the new certificate analysis characteristics in the product. Endpoint analysis of this framework can be found in the previous PoshC2 post: Using RSA NetWitness to Detect Command and Control: PoshC2

 

Conclusion

With the new release of PoshC2 v5.0, it appears that not much has changed in the grand scheme of things. With that being said, it is a good idea to regularly revisit known attack frameworks, as they are constantly adapting and evolving to evade known detection mechanisms. It is also important to keep up to date with the latest features of NetWitness to ensure you have every chance of detecting the bad traffic in your network.

MuddyWater

MuddyWater is a state-sponsored threat group suspected to be linked to Iran. It has mainly been targeting organizations in the Telecommunications, Government and Oil sectors across the Middle East region.

The group relied on spear-phishing emails with macro-infected Word documents in the past (as seen in a previous post) and has recently been using similar techniques with Excel documents in a new wave of attacks during October-November 2019.

 

In this post we will look at one of those Excel files used in the latest campaign and identify ways to detect it using RSA NetWitness Network and Endpoint.

 

The following is the file used in this article:

Filename | SHA256
Report.xls | 905e3f74e5dcca58cf6bb3afaec888a3d6cb7529b6e4974e417b2c8392929148

 

 

 

Execution

In a real attack, the file would be delivered via email to its target. In our case, we will manually execute it.

This particular sample must be named “Report.xls” or it will fail to execute.

On opening the file, the user gets the following message telling them to enable editing and content. This is to trick the user into enabling macros.

 

 

Once content is enabled, the following 2 files are dropped in “C:\Users\<user>\AppData\Local\Temp”.

 

 

 

 

 

Endpoint Visibility

By leveraging RSA NetWitness Endpoint, we can quickly see that Excel, even though it is a known legitimate file, has an elevated risk score based on its behavior.

 

 

 

By tracking the events on the endpoint, we can see the below behaviors:

 

  1. Excel creates the “wucj.exe” file
  2. The “wucj.exe” file is executed
  3. “wucj.exe” loads the “zdrqgswu” file, which appears to be a VB script; this leads to 2 network connections over TCP/80 to the “ampacindustries.com” domain.

 

 

By looking at the registry changes made by Excel, we can also see that a key has been created to run at startup, giving the malware persistence across reboots.
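On a suspect host, this Run-key persistence can be triaged quickly. A minimal sketch using Python's standard winreg module (Windows only):

import winreg

def list_run_keys():
    # Print the values of the current user's Run key, a common persistence spot.
    path = r"Software\Microsoft\Windows\CurrentVersion\Run"
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, path) as key:
        i = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, i)
            except OSError:  # no more values
                break
            print(name, "->", value)
            i += 1

list_run_keys()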

 

 

 

If we look more closely at the “wucj.exe” file, we can see that it is a known and valid Microsoft file. We can confirm this by searching for the hash on VirusTotal. The file is actually “wscript.exe”, which is used to load VB scripts (in line with the behavior seen).

 

 

 

 

Network Visibility

In the previous steps, we have seen that the VB script has initiated a connection over TCP/80 to the “ampacindustries.com” domain.

If we look at the details of this network connection on RSA NetWitness Network, we can see that the domain is hitting one of the Threat Intelligence feeds.

 

 

If we then reconstruct the session to look at the raw data, we can identify that the malware is sending the following within the HTTP GET request:

  • The username: rsa
  • The hostname: DEMO-USER-1
  • The Operating System: Windows (32-bit) NT 6.01

 

 

 

 

 

 

Indicators of Compromise

The following are some additional indicators that can be used to detect the presence of a compromise.

 

File Hashes

Filename | Hash
Report.xls | 7ed6c5e8c3ec4f9499eb793d69a06758
Report.xls | b100c0cfbe59fa66cbb75de65c505ce2
Report.xls | b9ee416f2d9557be692abf448bf2f937
Report.xls | a9706c01de9364eab210ea73296bfe71
Report.xls | 1cd71f39ff9fb3bf269440b63c717195
Report.xls | 50ac74eb38d6fa07d9f5e788d61a92cd
Report.xls | 4022bbb9df5d86226bd9a89f361c94b9
Report.xls | 584479a1958a73720c4aebb52c59b21e
Report.xls | 269afae11cc9837e732019a03fa02fab
Report.xls | 32156247f900883d5106795ec103a624
Report.xls | e18228bee6f1cf12eaf1bb4d5be587bf
Report.xls | 5ef459908d5be0672b02cdfe4f606989
Report.xls | 66c783e41480e65e287081ff853cc737
Report.xls | 2c3a634953a9a2c227a51e8eeac9f137
Report.xls | 9d0bfb81f450de8364327a4aaa67d9b3
Report.xls | 46f911014f1202e17936f627f34e6165

 

 

Command & Control Domains

URLs

hxxp://graphixo.net/wp-includes/utf8.php

hxxp://ksahosting.net/wp-includes/utf8.php

hxxps://assignmenthelptoday.com/wp-includes/utf8.php

hxxps://annapolisfirstlimo.com/editob.nvd

hxxp://ampacindustries.com/css/utf8.php

APT33 is a state-sponsored group suspected to be linked to Iran. It has been active since 2013 and has targeted organizations in the aviation and energy sectors, mainly across the United States and the Middle East.

The group has recently been seen using private VPN networks with changing exit nodes to issue commands and collect data to and from their C&C servers.

 

In this post we will look at one of the malware files used within those campaigns and identify ways to detect it using RSA NetWitness Network and Endpoint.

 

The following is the file used in this article:

Filename | SHA256
MsdUpdate.exe | e954ff741baebb173ba45fbcfdea7499d00d8cfa2933b69f6cc0970b294f9ffd

 

This specific sample is rather basic in terms of behavior, but it provides the attacker with both persistence and the ability to deploy other malicious files.

 

 

 

Endpoint Visibility

By leveraging RSA NetWitness Endpoint, we can easily identify files and processes that have an elevated risk score due to their behavior. In the below screenshot, we can clearly see that the file “MsdUpdate.exe” stands out due to both its risk score and its reputation (identified as “Malicious”). In addition, we can see that the file is not signed by any valid or trusted certificate.

 

 

 

By drilling into the "MsdUpdate.exe" process, we can see in the next screenshot the different actions taken by the process:

  1. It modifies the registry
  2. It communicates over the network with the “simsoshop.com” domain
  3. It copies itself to “C:\Users\<user>\AppData\Roaming\MSDUpdate\MsdUpdate.exe”

 

 

 

 

If we look in more detail at the registry changes made by the file, as per the below screenshot, we can see that it modified the “Run” key to run itself at startup. This is done for persistence, allowing the attacker to maintain access after a reboot of the machine.

 

 

 

 

Network Visibility

As seen in the previous step, we were able to identify that the malicious file communicated with the “simsoshop.com” domain. By drilling into this on the Network component, we can look at more details regarding this network connection.

Based on the below screenshot we can see:

  • 4 different sessions spaced exactly 10 minutes apart, which indicates programmatic behavior typical of beaconing activity
  • All sessions post data to a file named “update.php”, which also looks suspiciously like beaconing (see the sketch below)
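Fixed-interval check-ins like these are easy to surface programmatically. A minimal Python sketch, assuming you have exported the session timestamps for a given source/destination pair:

from datetime import datetime

def beacon_intervals(timestamps):
    # Return gaps (in seconds) between consecutive sessions; a near-constant
    # gap with little jitter is a strong hint of automated beaconing.
    times = sorted(datetime.fromisoformat(t) for t in timestamps)
    return [(b - a).total_seconds() for a, b in zip(times, times[1:])]

# Fabricated example: four sessions exactly 10 minutes apart
print(beacon_intervals([
    "2019-11-01T10:00:00", "2019-11-01T10:10:00",
    "2019-11-01T10:20:00", "2019-11-01T10:30:00",
]))  # -> [600.0, 600.0, 600.0]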

 

 

 

 

We can then reconstruct the payload of any of the above sessions to look at its content and confirm that this is indeed beaconing activity.

As seen below, we can confirm that the query is updating an entry with a payload in hexadecimal (most likely encoded).

 

 

 

 

This shows how RSA NetWitness Network and Endpoint can help in quickly detecting, identifying, and investigating such attacks based on activity on both the endpoint and the network.

 

 

 

 

Indicators of Compromise

The following are some additional indicators that can be used to detect the presence of this malware.

 

File Hashes

Filename | SHA256
MsdUpdate.exe | e954ff741baebb173ba45fbcfdea7499d00d8cfa2933b69f6cc0970b294f9ffd
MsdUpdate.exe | a67461a0c14fc1528ad83b9bd874f53b7616cfed99656442fb4d9cdd7d09e449
MsdUpdate.exe | c303454efb21c0bf0df6fb6c2a14e401efeb57c1c574f63cdae74ef74a3b01f2
MsdUpdate.exe | b58a2ef01af65d32ca4ba555bd72931dc68728e6d96d8808afca029b4c75d31e

 

 

Command & Control Domains

Domain

suncocity.com

service-explorer.com

zandelshop.com

service-norton.com

simsoshop.com

service-eset.com

zeverco.com

service-essential.com

qualitweb.com

update-symantec.com

 

 

IP Addresses

IP Address

5.135.120.57

137.74.80.220

5.135.199.25

137.74.157.84

31.7.62.48

185.122.56.232

51.77.11.46

185.125.204.57

54.36.73.108

185.175.138.173

54.37.48.172

188.165.119.138

54.38.124.150

193.70.71.112

88.150.221.107

195.154.41.72

91.134.203.59

213.32.113.159

109.169.89.103

216.244.93.137

109.200.24.114

 

Easy-add Recurring Feeds

Posted by Josh Randall, Oct 15, 2019

In the past, I've seen a number of people ask how to enable a recurring feed from a hosting server that uses SSL/TLS, particularly when attempting to add a recurring feed hosted on the NetWitness Node0 server itself. The issue presents itself as a "Failed to access file" error message, which can be especially frustrating when you know your CSV file is there on the server and you've double- and triple-checked that you have the correct URL:
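One quick way to confirm that certificate trust (rather than the URL) is the culprit is to probe the feed URL yourself. A minimal Python sketch; the feed URL and CA-bundle path below are placeholders:

import requests

FEED_URL = "https://nw-node0.example.local/feeds/myfeed.csv"  # placeholder URL
CA_BUNDLE = "/path/to/trusted-ca.pem"                         # placeholder CA bundle

try:
    # If this raises SSLError, the server's certificate isn't trusted by the
    # given bundle - the same root cause behind the "Failed to access file" error.
    r = requests.get(FEED_URL, verify=CA_BUNDLE, timeout=10)
    print(r.status_code, len(r.content), "bytes")
except requests.exceptions.SSLError as e:
    print("Certificate trust problem:", e)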

 

There are a number of blogs and KBs that cover this topic in varying degrees of detail:

 

 

Since all the steps required to enable a recurring feed from an SSL/TLS-protected server are done via the CLI (apart from the Feed Wizard in the UI), I figured it would be a good idea to put together a script that just does everything - minus a couple of requests for user input and (y/N) prompts - automatically.

 

The goal here is that this will quickly and easily allow people to add new recurring feeds from any server that is hosting a CSV:

 

Success!

A couple of years ago, a few smart folks over at Salesforce came up with the idea of fingerprinting certain characteristics of the "Client Hello" of the SSL/TLS handshake, with the goal of more accurately identifying the client application initiating TLS-encrypted sessions.

 

This concept certainly has the potential to provide invaluable insight during incident response, though there are some significant operational limitations that (in my opinion) have so far prevented JA3 fingerprinting from gaining more widespread adoption and use. Perhaps the biggest of these limitations is the need for some kind of known JA3 fingerprint library or repository, where the thousands (potentially millions?) of client applications that might initiate a TLS handshake can be reliably matched with their JA3 fingerprints. There are a couple of sites building out these repositories...

 

...but their content is limited (after all, fingerprinting a client requires installing it, running it, capturing the PCAP, running a JA3 parser or script against the PCAP, and then adding that fingerprint to the library; that process simply does not scale), and the fidelity, accuracy, and timeliness of these libraries is a pretty large question mark.

 

However, with NetWitness 11.3.1 - which has a native option to enable JA3 and JA3S fingerprinting - and NetWitness Endpoint 11.3, we can bridge this gap and create our own JA3 libraries.

 

The concept is fairly simple (a simplified sketch follows the list):

  • use NetWitness Endpoint to identify applications making outbound network connections
  • use NetWitness Network to identify outbound HTTPS traffic
  • link these events and sessions by their common characteristics
  • once we have that link
    • extract the filename and sha256 hash of the application from the NetWitness Endpoint event
    • along with the JA3 fingerprint from the network session
    • and then create a feed of that information that the NetWitness Platform can use for additional context
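As a standalone illustration of the linking step (the attached ESA rule and script implement this properly within the platform), here is a minimal Python sketch; the event field names are assumptions for illustration:

def link_ja3_to_process(endpoint_events, network_sessions):
    # Join endpoint "process made outbound connection" events to network sessions
    # on (source IP, source port), yielding feed rows of ja3,filename,sha256.
    by_src = {(s["ip_src"], s["port_src"]): s["ja3"] for s in network_sessions}
    for e in endpoint_events:
        ja3 = by_src.get((e["ip_src"], e["port_src"]))
        if ja3:
            yield f'{ja3},{e["filename"]},{e["checksum"]}'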

 

In order to ensure this process scales, we can make use of the ESA's rule engine to identify the sessions we want, and its script output functionality to create the feed for us. The ESA rule and Python script output are attached to this blog.

 

Prior to enabling these, you'll want to make sure the "netwitness" user has either read/write access to the "/var/netwitness/common/repo" directory on the Admin Server (a.k.a. Node0), or at least read/write access to the "ja3Context.csv" file in that directory, which the ja3context.py script will update.

 

A good guide for setting ACLs in CentOS is here: https://www.tecmint.com/give-read-write-access-to-directory-in-linux/  and the result:

 

Once the appropriate permissions are set and you've enabled the ESA rule and its script output, your last step will be to turn that CSV output into a feed (A list two ways - Feeds and Context Hub - many thanks again to the SE formerly known as Eric Partington for this blog):

 

...and choose your meta keys:

 

And voila!  We have an automatically generated and constantly updating library of applications for our JA3 fingerprints:

Today RSA Link implemented a new way of presenting documentation to help RSA NetWitness® Platform customers find the information they need quickly and easily. RSA NetWitness Platform 11.3 presents the documentation in a unified map of product documentation and videos, including software, hardware, and RSA content.

 

The new RSA NetWitness® Platform 11.3 Documentation page

 

The blocks represent a high-level workflow, each block being a task category for different RSA NetWitness® Platform activities. For example, an Incident Responder would click the “Investigate and Respond” block. Clicking a block opens a list of tasks for the selected category with quick links to product information.

 

Instead of searching through a list of document titles that may have the information you need, you can select one of the high-level tasks—Get Started, Install & Upgrade, Configure & Manage, Investigate & Respond, or Integrate & Develop—and see a list of the relevant information.

 

The widgets on the right provide direct links to:

  • The Master Table of Contents with quick links to every Version 11 document.
  • The Known Issues page with a sortable list of known issues.
  • The Troubleshooting page with information to help resolve issues from diverse RSA Link resources.
  • The Documentation Feedback link, which sends feedback and suggestions to the Information Design and Development team responsible for RSA NetWitness® Platform technical content.

 

Please click the Documentation Feedback under Other Resources on the right to provide your comments. We hope you find this new page useful and appreciate your comments.

One of the biggest commitments we at RSA make to our customers is to provide best-in-class security products that help manage digital risk. Our goal is to do so with maximum reliability while also requiring minimum effort on your part. However, we know that even best-in-class products occasionally need help to install, use, and maintain. While we are continuously focused on improving our support services to ensure that every interaction you, our customers, have with us is positive and quick, we realize that even the best support interaction still requires time and effort on your part. And what’s more valuable than time?

 

With that in mind, today I am happy to officially launch our Engineering Request dashboard within the RSA Case Management portal, which will allow you to monitor the progress of Engineering Requests (ERs) opened on your behalf*. Not only will you be able to see the progress of your ERs, but you will be able to do so on your own, without the need to call support for an update.

 

To access this information, navigate to the RSA Case Management portal by clicking My Cases in the main menu on RSA Link. Clicking the Engineering Requests tab will display the engineering requests that have been opened on your behalf (linked to your support cases) since January 1, 2018. For each of these, you will be able to see its status and know when the issue has been addressed; if a fix is included in a release, you’ll see the release number as well.

  

Click to enlarge

 

This is just another small improvement to your support experience. Stay tuned for more exciting upcoming changes.

 

In the meantime, if you have any feedback on this enhancement or other ideas to continue to improve your experience, please share! 

 

* This functionality is currently only available for the RSA Archer Suite and the RSA NetWitness Platform. Additionally, you will only be able to monitor Engineering Requests that were opened directly on your behalf and that are not security issues potentially containing sensitive information. We encourage you to use the RSA Ideas portal to manage and monitor enhancement requests.

Introduction to MITRE ATT&CK™

Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK™) for Enterprise is a framework that describes adversarial actions or tactics from Initial Access (Exploit) to Command & Control (Maintain). ATT&CK™ Enterprise deals with the classification of post-compromise adversarial tactics and techniques against Windows™, Linux™ and MacOS™.

This community-enriched model adds the techniques used to realize each tactic. These techniques are not exhaustive; the community adds them as they are observed and verified.

To read more about how ATT&CK™ is helpful in resolving challenges and validating our defenses, please check this article.

Some techniques are mapped to multiple tactics. There are 244 unique techniques in total, which results in 314 non-unique technique entries distributed over the 12 tactics.

 

RSA Threat Content Mapping with MITRE ATT&CK™

RSA has mainly three kinds of threat content: a. Application Rules, b. ESA Rules, and c. LUA Parsers. These content types can be classified further by the 'Medium' of each piece of content. Medium depends upon the source of the meta that the particular content piece uses. For example, if an application rule uses meta populated from packet data, then its Medium will be packet. We can search Live content using the Medium criteria:

 

 

We will try to measure how much of the ATT&CK™ matrix is covered by RSA threat content, essentially mapping each piece of threat content to one or multiple ATT&CK™ techniques that it detects. This mapping needs to be saved in a file, and in the case of ATT&CK™ the file type is JSON. For example, in the case of application rules, there will be mapping JSON files for each of the following:

  • Mapping of only RSA Application Rules with Medium = log
  • Mapping of only RSA Application Rules with Medium = packet
  • Mapping of only RSA Application Rules with Medium = endpoint
  • Mapping of only RSA Application Rules with Medium = log AND packet
  • Mapping of all RSA Application Rules (Without considering Medium)

The same pattern will follow for ESA Rules and LUA Parsers depending upon Medium value.

This JSON is graphically viewable through the ATT&CK™ Navigator web GUI tool, which is described later in this post along with the process of viewing it.

 

a. Application Rules - The Rule Library contains all the Application Rules, and we can map these rules, or detection capabilities, to the tactics/techniques of the ATT&CK™ matrix. The mapping shows how many tactics/techniques are detected by RSA NetWitness Application Rules. We have generated JSON files for application rules which can be viewed in Navigator; these can be downloaded from the archive attached to this blog post. The following are the mappings for RSA Application Rules:

 

Content Type | Medium | Location of JSON in attached archive
RSA Application Rules | log | RSA_Threat_Content_ATTACK_JSON_Mapping\Application_Rules\Medium_log
RSA Application Rules | packet | RSA_Threat_Content_ATTACK_JSON_Mapping\Application_Rules\Medium_packet
RSA Application Rules | endpoint | RSA_Threat_Content_ATTACK_JSON_Mapping\Application_Rules\Medium_endpoint
RSA Application Rules | All rules (without considering Medium) | RSA_Threat_Content_ATTACK_JSON_Mapping\Application_Rules\All_RSA_Application_Rules

 

The following plot reflects the number of techniques detected by all RSA Application Rules with respect to ATT&CK™:

 

b. ESA Rules - ESA is one of the defense systems used to generate alerts. ESA Rules provide real-time, complex event processing of log, packet, and endpoint meta across sessions, and they can identify threats and risks by recognizing adversarial Tactics, Techniques and Procedures (TTPs). We have generated JSON files for ESA Rules which can be viewed in Navigator; these can be downloaded from the archive attached to this blog post. The following are the mappings for RSA ESA Rules:

 

Content Type | Medium | Location of JSON in attached archive
RSA ESA Rules | log | RSA_Threat_Content_ATTACK_JSON_Mapping\ESA_Rules\Medium_log
RSA ESA Rules | packet | RSA_Threat_Content_ATTACK_JSON_Mapping\ESA_Rules\Medium_packet
RSA ESA Rules | log AND packet | RSA_Threat_Content_ATTACK_JSON_Mapping\ESA_Rules\Medium_log_AND_packet
RSA ESA Rules | All rules (without considering Medium) | RSA_Threat_Content_ATTACK_JSON_Mapping\ESA_Rules\All_RSA_ESA_Rules

 

The following plot reflects the number of techniques detected by all RSA ESA Rules with respect to ATT&CK™:

 

c. LUA Parsers - Packet parsers identify the application layer protocol of sessions seen by the Decoder and extract meta data from the packet payloads of the session. Every packet parser is able to extract meta from every session. Among these packet parsers are the LUA Parsers, which can be customized by customers. We have generated JSON files for LUA Parsers which can be viewed in Navigator; these can be downloaded from the archive attached to this blog post. The following are the mappings for RSA LUA Parsers:

 

Content Type | Medium | Location of JSON in attached archive
RSA LUA Parsers | packet | RSA_Threat_Content_ATTACK_JSON_Mapping\Lua_Parsers\Medium_packet
RSA LUA Parsers | All LUA Parsers (without considering Medium) | RSA_Threat_Content_ATTACK_JSON_Mapping\Lua_Parsers\All_RSA_Lua_Parsers

Note: The above two JSONs will be the same, as the only Medium for LUA Parsers is packet.

 

The following plot reflects the number of techniques detected by all RSA LUA Parsers with respect to ATT&CK™:

 

 

d. Complete RSA Threat Content (Application Rules + ESA Rules + LUA Parsers) - We have combined all three content types and created a combined JSON file for ATT&CK™ Navigator, which can be downloaded from this blog post.

 

Content Type | Medium | Location of JSON in attached archive
RSA Threat Content | All RSA Threat Content | RSA_Threat_Content_ATTACK_JSON_Mapping\All_RSA_Threat_Content

 

The following plot reflects the number of techniques detected by all three threat content types combined, with respect to ATT&CK™ coverage:

These statistics are bound to change over time as new content is added or updated. We can update the ATT&CK™ coverage periodically, which will give us a consolidated picture of our complete defense system and allow us to quantify and monitor the evolution of our detection capabilities.

 

In the above sections, we have talked about using the JSON files (attached to this blog post) in ATT&CK™ Navigator. In the next section, we will discuss how to use and view these JSON files.

 

Introduction to MITRE ATT&CK™ Navigator

ATT&CK™ Navigator is a tool openly available through GitHub which uses the STIX 2.0 content to provide a layered visualization of ATT&CK™ model.

ATT&CK™ Navigator stores information in JSON files; each JSON file is a layer containing multiple techniques which can be opened in the Navigator web interface. The JSON contains content in STIX 2.0 format, which can be fetched from a TAXII 2.0 server of your choice. For example, we can fetch ATT&CK™ content from MITRE's TAXII 2.0 server through its APIs.

The techniques in this visualization can be:

  • Highlighted with color coding.
  • Given a numerical score to signal the severity/frequency of the technique.
  • Annotated with a comment to describe that occurrence of the technique or any other meaningful information.

These layers can be exported in SVG and Excel formats.
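A layer file can also be generated programmatically. A minimal Python sketch of the layer format, trimmed to the fields discussed here (consult the Navigator documentation for the full schema):

import json

layer = {
    "name": "RSA Threat Content coverage (example)",
    "domain": "enterprise-attack",
    "techniques": [
        {
            "techniqueID": "T1003",
            "score": 1,
            "comment": "rule one||rule two",  # pipe-delimited rule names, as in the attached JSONs
        },
    ],
}

with open("example_layer.json", "w") as f:
    json.dump(layer, f, indent=2)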

 

How to View a JSON in ATT&CK™ Navigator?

  1. Open MITRE’s ATT&CK™ Navigator web application (https://mitre-attack.github.io/attack-navigator/enterprise/).
  2. In Navigator, open a new tab by clicking the '+' button.
  3. Click 'Open Existing Layer' and then 'Upload from Local', which will let you choose a JSON file from your local machine (or one of the files attached to this blog post).
  4. After uploading the JSON file, the layer will open in Navigator. This visualization highlights the techniques covered in the JSON file with color and comments.
  5. While hovering the mouse over each colored technique, you can see three things:

  • Technique ID: The unique ID of the technique as per the ATT&CK™ framework.
  • Score: The threat score given to the technique.
  • Comment: We can write anything related in the comment to put things in perspective. In this case, we have commented the pipe ('||') delimited names of the content/rules/parsers which cover that technique. For example, if you have opened the application rules JSON, the comments will contain the pipe-delimited names of the application rules which detect the hovered technique.

 


Other blog posts written previously regarding threat content coverage of ATT&CK™ can be found here and here.

Overview

The RSA NetWitness Platform is run by many of our customers on RSA's physical appliances, but the entire stack can run in AWS, Azure, VMware, or Hyper-V just fine. You can even mix and match between physical and virtual hosts however you prefer. Our Virtual Host Installation Guide does a great job outlining the steps for building a virtual RSA NetWitness Platform host.

 

However, there is frequently a need to build smaller hosts to gather data in smaller remote locations. Small issues that don't apply to larger hosts can cause RSA NetWitness Platform folders to overrun their allotments and cause NetWitness to stop capture or aggregation. This post will primarily focus on the settings to watch when building smaller virtual hosts. It will also include some tricks to monitor your NetWitness hosts to make sure they don't reach unhealthy levels of storage use. Of course, many of these tips also apply to virtual hosts of all sizes, so hopefully you can benefit regardless of your particular virtual implementation.

 

To ISO or Not to ISO

RSA provides both an ISO and an OVA (and a Hyper-V VHD) to use to build your virtual hosts. Which should you use? If you are building a full RSA NetWitness Platform implementation virtually, you will have to use the ISO to build your Admin Server, because the OVA does not come with all of the required RPMs. As for the other hosts, using the OVA isn't a bad idea: the OVA is a much smaller file to deal with (~450MB OVA vs ~6GB ISO), and it has already completed the bootstrap, which is one of the longest steps of the installation. However, the OVA has already provisioned the logical volumes for a 195GB host. That is the recommended size for the OS drive, but if you want to give it more than that, the ISO is the easiest option - and I say that as someone who rather enjoys partitioning Linux file systems! As for assigning less than the 195GB, I would recommend thin provisioning your host's OS drive rather than installing with less than what RSA recommends.

 

Keep in mind that your log, network, and endpoint data stores will be separate from this. The OS drive is strictly for holding OS files, NetWitness internal service log entries, temporary data, and some other miscellaneous data. You will add disks to accommodate storing your log, network, and/or endpoint data in a later step.

 

Installing the ISO is extremely simple: create your virtual host; give it the CPU, RAM, and HDD storage recommended in the installation guide or by your RSA engineer (different requirements apply for different services and different levels of throughput); attach the ISO; and power on the VM. It will boot to the blue installation screen, where you will hit <Enter>. Once you get to the following screen...

...make sure you enter "y" or "Y" and hit <Enter>. Once the bootstrap is complete, the system will reboot to the login prompt. After logging in, run "nwsetup-tui" and refer to the installation guide for instructions on how to properly orchestrate a host from there.

 

VM Host Sizing

In the previous step, you installed the bootstrapped host via the ISO or the OVA, and possibly orchestrated the services as well. For any host that will retain data - Decoders (network / log), Hybrids (endpoint / network / log), Concentrators, or Archivers - you will also need to provision storage for that data. Sizing that storage can be difficult, but I have a calculator that can help size most of those hosts appropriately.

 

...except Archivers. Why not Archivers? Archivers are generally employed for regulatory purposes. You should engage your RSA engineer to make sure you size them appropriately so that you don't run into issues with auditors. You might be logging especially large logging sources, while the calculator only uses a static 600 bytes per message. You can also retain more or fewer meta keys, which can drastically affect how much storage to assign. And after all, while the "[Small]" in the title of this post was in hard brackets, this guide is generally geared towards smaller deployments / hosts; the sole reason to use an Archiver is that the amount of storage has grown significantly beyond any definition of the word "small".

 

To use the calculator, there are a number of things to understand:

  • The calculator is used to calculate Hybrid storage, because most "small" environments will use Hybrids rather than discrete Decoder and Concentrator pairs. If you are using separate Decoders and Concentrators, you can simply break up the calculated storage per service and split up the provisioning commands. NOTE: There is no such thing as a "discrete Endpoint Decoder". Endpoint servers only come as Hybrids, whether virtual or physical.
  • When you enter information to size up your storage, at the bottom of the calculator you will get provisioning commands to set up your hosts. If you have any Hosts entered in rows 6 or 7, you'll get commands to provision storage for an Endpoint Log Hybrid. If you don't have any Hosts but you have Log Events >0 GB/day, you will get commands to provision storage for a Log Hybrid. If you have Log Events at 0 GB/day and 0 Hosts but your network traffic is >0 GB/day, you will get commands to provision storage for a Network Hybrid.
    • If you are sizing an Endpoint Log Hybrid, keep in mind that you cannot currently download modules automatically, download memory dumps, or download Master File Tables from hosts. Those features, which were in ECAT 4.x, will be back in the product as of 11.4, and I've included commands to provision for them. However, the amount of storage you provision for those purposes is entirely up to you, so you will need to type the numbers into that cell yourself. They can both be relatively small (10 - 30GB) if you don't plan to auto-download unsigned, new modules. However, once the feature is back, we do highly recommend that you automatically have NetWitness Endpoint download any unsigned, unknown modules smaller than 5MB - 10MB, and estimate storage for your environment appropriately.
  • Once storage is provisioned for each of the given volumes, the last provisioning command gives 100% of the remaining space to the MetaDB on the Concentrator. That is done on purpose: if there is any extra space left over, that is where I want it. However, you must also make sure (likely with df -h) that you have enough storage in that logical volume. If not, you likely didn't give the entire partition enough space.
    • For this same reason, if you end up using this calculator to build a discrete Decoder, you'll likely want to change the command that provisions your PacketDB to use the "100%FREE" version of the lvcreate command. The syntax would be the same as the one I use for the Concentrator's MetaDB.
  • When you enter the scale information for Network Traffic, you might wonder, "But I don't know how many GB/day of network traffic I plan to send to NetWitness!" The easiest rule of thumb: if you expect to see 100Mbps on average over a 24-hour period (that would mean ~175Mbps over the peak hour and 10Mbps overnight), that is 1TB/day of traffic. If you expect to see 10Mbps because it's a small office or home environment, assume 100GB/day. If you have absolutely no idea, just throw a number in there.
  • For logs, in a small environment, if you had any log management system you can probably figure out how many GB/day of data you were generating before. If you expect a certain number of events per second, I put a handy calculator to turn that into GB/day on row 10. If you have no idea, then once again, I suggest you just throw something in there.
  • You can edit the calculator if you like. The password is just "rsa". I only password-protect it to make sure first-time users aren't editing cells they shouldn't and breaking it.

 

The calculator is called NW Virtual Hybrid Sizing Calculator v1.0.xlsx. PLEASE, if you find any errors, leave a comment below or contact me somehow so that I can fix it for others.

 

Raw Event Data Storage

The Virtual Host Installation Guide covers how to add storage for the various RSA NetWitness Platform databases in Step 3. It also covers how to calculate the amount of storage you'll need to allocate to each database for any given host/service. For the Admin Server, Archiver, Broker, ESA, Log Collector, and UEBA hosts, all storage will get dumped into the /var/netwitness/ folder. The instructions for extending that volume group and logical volume are in the installation guide and generally involve: pvcreate, then vgextend, then lvextend, and finally xfs_growfs.

 

For Decoders, Concentrators, and Hybrids, I've put together the commands that you need in the attached *Commands.txt text files to set up the storage for those hosts. I recommend running all of these scripts to build the partitions, volume groups, and logical volumes after you run nwsetup-tui, but *BEFORE* you install the services on the hosts. A few things to note:

  • I name the volume group "vg01" for the sake of brevity. The name you assign does not matter at all.
  • In Step 5, I assign storage to the "root" folder for each respective service; /var/netwitness/decoder for Network Decoders, /var/netwitness/concentrator for Concentrators, and /var/netwitness/logdecoder for Log Decoders. This is not required, but I prefer to create these volumes so that I can monitor them in case they fill up. Note: they must have at least 5GB of storage assigned, but larger VMs can have as much as 30GB.
  • Also in Step 5, you will need to replace the lv sizes with the proper sizes based on the Installation Guide and/or your RSA NetWitness Platform engineer. In my scripts, I assign specific sizes to every volume except the last one, which I then assign whatever free space is left with the "100%FREE" command.
  • For Step 10, I wrote it so that you can copy and paste it, via an SSH session, directly into the /etc/fstab file on the host. You can paste it directly at the bottom of the existing file. Once that is done, before you install services, make sure to reboot the host to confirm there aren't any errors in the fstab file. The syntax is very particular, and any errors will cause the system to fail to come up. If that happens, just open a Console window to the machine, hit CTRL+D to enter maintenance mode, and then fix the fstab file.
  • I want to say this again because it's very important: after adding your changes to the fstab file, reboot the machine and make sure your syntax was correct!
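
For reference, the Step 10 entries look roughly like this (the logical volume names and mount options here are illustrative, following the "vg01" naming above; the attached *Commands.txt files contain the real lines for each host type):

/dev/vg01/decoderroot   /var/netwitness/decoder            xfs   noatime,nosuid   1 2
/dev/vg01/index         /var/netwitness/decoder/index      xfs   noatime,nosuid   1 2
/dev/vg01/packetdb      /var/netwitness/decoder/packetdb   xfs   noatime,nosuid   1 2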

Just view the *Commands.txt file attached to this post that corresponds to the type of host you're trying to install.

 

Install Services

This step is straightforward. If you haven't already, go to Admin --> Hosts and enable the host. Then install the services just as outlined in the Installation Guide.

 

Validate Folder Sizes - RSA NetWitness Platform Databases

In order to properly roll off the oldest entries in NWDB (NetWitness Database, our proprietary database format), we have to make sure that the RSA NetWitness Platform knows how much storage each database has to fill. Navigate to Admin --> Services, and for any Concentrator or Decoder/Log Decoder service, go to the Explore page. Expand the "database" menu item on the left-hand side, and click on "config". Here I show the page for an RSA Log Decoder service on a physical Endpoint Log Hybrid:

The sizes you see there are 95% of the corresponding folders we built using the provisioning commands, measured in 1,073,741,824-byte (1 GiB) blocks. If you want to be exact, you can run "df --block-size=G", multiply a folder's size by 95%, and round to two decimal places to get the value RSA NetWitness Platform will place in the corresponding line above. Once the data in one of these folders exceeds these limits, RSA NetWitness Platform rolls off data.
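
For example (the volume size below is made up purely for illustration):

df --block-size=G /var/netwitness/decoder/packetdb
# suppose df reports a 2795G volume; 95% of that is the value you should
# see on the config page:
echo "scale=2; 2795 * 0.95" | bc     # => 2655.25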

 

If you followed this guide and the Virtual Host Installation Guide, you will see folder sizes here that match what you provisioned. But what if they don't match or you made a mistake? Well, you can reset those by right-clicking on the "database" menu item and clicking "Properties":

 

At the bottom-right of the window, the Properties pane will open up. Select "reconfig" from the drop-down and click the Send button:

You can see that these values match what we saw in the previous screen. If these values still don't look correct - usually because they are all the same - then your folders aren't mounted to separate logical volumes. If these values do look correct, you can remove the "=xx.xxTB" or "=xx.xxGB" portion from the entries on the previous screen. Then, back in the Properties pane, type update=1 in the Parameters box and click Send again. It will append those values to the appropriate entries at the top, though you'll have to refresh the screen to see the update.

 

The indexes for each of these services have a separate entry. On the Explore page, you will see a menu item called "Index", and the settings are under its "config" sub-menu. Just as above, if you need to reset the folder size, you can right-click on "Index" and run the reconfig commands like before.

 

Validate Thresholds - MongoDB

In addition to NWDB, NetWitness also stores Endpoint scan results (primarily, what you see in Navigate --> Hosts) in MongoDB on the Endpoint Log Hybrid, in the /var/netwitness/mongo folder. NetWitness does not display the folder sizes in the Endpoint Server service's Explore page as it does for the services above. Instead, it just looks at the amount of storage in the /var/netwitness/mongo folder, or, if that isn't separately partitioned, in the /var/netwitness folder. Then it compares the current usage to the value in the "rollover-after" setting here:

Your system may never actually use this setting if your Data Retention policies (found on the Admin --> Services --> Endpoint Server --> Config --> Data Retention Scheduler tab) already roll over data before the folder hits 80%. You should also be aware of the settings under endpoint/data-store-thresholds:

If the storage in the corresponding folder (/var/netwitness/mongo if it's partitioned separately, and /var/netwitness if it's not) crosses these thresholds, you will eventually receive Health & Wellness alerts that correspond to those thresholds.

 

Minimum Available Space - The Key to Reliability

The other settings you may have noticed in the previous screenshots, which we ignored, are the <database_name>.free.space.min settings. A given database can grow past the maximum size we set up above with no issues, but capture/aggregation will stop if there is less free space than what is specified in the free.space.min setting for the corresponding service. Just as the folder size above defaults to 95% of the total volume size, free.space.min defaults to 0.865% of the total size. In both cases, the default can be replaced manually with whatever value you would like to enter. For most large VMs, the default is fine. However, for smaller hosts capturing small amounts of data, this default may be a bit high and can be adjusted downward.

 

Please note: the indexes do not have a similar free.space.min setting, and capture/aggregation will continue to run, even if the index volumes are essentially full.

 

For Mongo, you should also be aware of the settings under Admin --> Services --> Endpoint Server --> Explore --> endpoint/data-store-thresholds (mentioned above):

If the storage in the corresponding folder (/var/netwitness/mongo if it's partitioned separately; /var/netwitness if it's not) crosses the warning-percent level, you will receive a warning-level Health & Wellness alert; if it crosses the fatal-percent level, you will receive a fatal-level alert.

 

Monitoring Part 1: Folder Sizes

As I mentioned in the Overview, for small hosts (roughly <1TB of total storage), I recommend monitoring your volumes to make sure that they don't fill up. To do this, I modified a script I found here to monitor file system usage:


It pulls back every folder other than the temp and boot folders, and if any is at 90% or higher, it generates a syslog message sent to the IP designated by the -n switch (10.10.10.10 in the image above). I've attached that script below as checkVolumeSizes.sh. (Remember to use chmod to make it executable!) If you run crontab -e from an SSH session, the RSA NetWitness Platform's underlying CentOS OS will open vi and let you set a schedule to run the script; see the example entry below. I imagine most of you reading this are familiar with crontab syntax, but if you're not, or if you want to design something more intricate, this site takes all the work out of it for you: https://crontab.guru/.
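
For example, an entry like this would run the check four times an hour (the every-15-minutes schedule and the script's location are just my choices; the -n switch and IP come from the script as described above):

*/15 * * * * /root/checkVolumeSizes.sh -n 10.10.10.10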

 

The messages generated will look like this:

You can ingest that into any system that can consume syslog messages and alert on it as you see fit. But seeing as the RSA NetWitness Platform *IS* a SIEM, it seemed only right to monitor these messages with the RSA NetWitness Platform itself. The first step is properly parsing the message, so I built a parser using the NetWitness Log Parser Tool (download here: https://community.rsa.com/docs/DOC-94172; learn how to use it here: RSA ESI Beta 3 - YouTube and Parser Development When No Message ID Exists - YouTube). It took maybe 5 minutes.

 

But there aren't any out-of-the-box meta keys meant to store the size of logical volumes, and I wanted to include that value in the e-mail I send to myself, so I added a custom meta key to the RSA NetWitness Platform. If you use my parser, you *MUST* create this custom meta key in your system for the parser to work properly. Add it to the table-map-custom.xml file on the Log Decoder where you are directing these messages.


You can find that attached as table-map-custom.txt. I didn't want to call it table-map-custom.xml because it needs to be added to the existing file, not pasted over the existing file in its entirety.
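
For illustration, a table-map-custom.xml mapping entry looks like this (the key name below is a stand-in; use the one from the attached file):

<!-- append inside the existing mappings element; do not replace the file -->
<mapping envisionName="disk_free_mb" nwName="disk.free.mb" flags="None" format="Int32"/>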

 

Now, download nwdiskalert.envision, navigate to Admin --> Log Decoder --> Config, click the Parsers tab, and upload that file. After uploading, if you want to make sure the Log Decoder reloaded its parsers, you can switch from Config to Explore:



Once the page loads, expand the "decoder" menu, right-click on "parsers", and choose "Properties".




In the Properties pane, select "reload" from the drop-down menu and then click Send. Now the parsers have been reloaded and you're all set to ingest these messages!

 

Monitoring Part 2: ESA Correlation Rules

I built three ESA rules to monitor my file system at home, one each for medium, high, and critical severity alerts. Here is how I classify each (a rough sketch of what such a rule looks like follows the list):

  • Medium Severity:
    • Goal:
      • Monitor folders that shouldn't ever fill up, but whose filling won't cause any service issues, for high levels of utilization
    • Rules:
      • Any of the following folders are at least 90% but no more than 94% disk usage:
        • /home
        • /var/log
        • /var/netwitness
        • /var/netwitness/concentrator
        • /var/netwitness/decoder
        • /var/netwitness/logdecoder
  • High Severity:
    • Goals:
      • Monitor folders that shouldn't ever fill up, but whose filling won't cause any service issues, for extremely high levels of utilization
      • Monitor folders that could cause service interruption once they pass 95% (which is where many of them will sit most of the time) but haven't yet reached a point where service interruption will occur
      • Monitor the mongodb folder if it reaches concerning levels
    • Rules:
      • Any of the following folders are at least 95% but no more than 97% disk usage:
        • /home
        • /var/log
        • /var/netwitness
        • /var/netwitness/concentrator
        • /var/netwitness/decoder
        • /var/netwitness/logdecoder
        • /var/netwitness/concentrator/index
        • /var/netwitness/decoder/index
      • Any of the following folders are at 96% or 97%:
        • /var/netwitness/concentrator/sessiondb
        • /var/netwitness/concentrator/metadb
        • /var/netwitness/decoder/sessiondb
        • /var/netwitness/decoder/metadb
        • /var/netwitness/decoder/packetdb
        • /var/netwitness/logdecoder/sessiondb
        • /var/netwitness/logdecoder/metadb
        • /var/netwitness/logdecoder/packetdb
      • The /var/netwitness/mongo folder is at least 90% and no more than 94%
  • Critical Severity:
    • Goals:
      • Monitor folders that shouldn't ever fill up for critical levels of utilization
      • Monitor folders that could cause service interruption once they pass 97% and will soon - or are currently - causing service interruption
      • Monitor the mongodb folder if it reaches its "fatal-percent" setting
    • Rules:
      • Any of the folders in the High Severity list are at 98% or above
      • The /var/netwitness/mongo folder is at 95% or above
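
To give you a feel for these, here is a minimal sketch of the Medium severity rule in ESA's EPL syntax. The meta key names (disk_mount, disk_used_percent) and the device_type value are stand-ins for whatever your parser and table-map-custom.xml actually produce; the attached files contain the real rules:

@RSAAlert
SELECT * FROM Event(
    device_type = 'nwdiskalert'
    AND disk_mount IN ('/home', '/var/log', '/var/netwitness',
                       '/var/netwitness/concentrator',
                       '/var/netwitness/decoder',
                       '/var/netwitness/logdecoder')
    AND disk_used_percent BETWEEN 90 AND 94
);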

 

You can find those attached as nwDiskMonitoringESARules_<severity>_Basic.txt. You might ask yourself, "Why did he call them 'Basic'?" Well, that's because I actually built more detailed rules in my lab that also monitor the free space reported in the event logs. It's absolutely overkill, and it causes the rules to look like this:

Do you really want to do that to yourself? You really shouldn't, but if you insist, feel free to reach out to me and I'll send you those rules as well.

 

Monitoring Part 3: Generating Notifications

When these rules detect something, you'll of course want to generate an e-mail notifying you of the current state. I use a single notification template for all three ESA rules; it's in the attached file nwDiskMonitoringNotificationTemplate.txt. The template breaks down like this (a minimal sketch of a few of these lines follows the list):

  • Lines 1 - 20: Builds a banner at the top of the e-mail that is yellow for medium alerts, orange for high, and red for critical
  • Line 25: Prints the time the event was generated
  • Line 27: Prints the IP of the RSA NetWitness Platform host that generated the event log
  • Line 29: Prints the folder that the alert is related to
  • Line 31: Prints the % utilization of the folder
  • Line 33: Prints the amount of free space, in MB, left in that folder
  • Line 35: Generates a hyperlink to the raw event log in the RSA NetWitness Platform; make sure you edit both the <NW_URL_or_IP> and the device ID (mine is 6)
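
As a taste of what lines 29 through 33 look like, here is a minimal FreeMarker sketch (the meta key names are stand-ins again; the attached template has the real ones):

<#-- one alert can carry multiple events; print values from the first -->
Folder:      ${events[0].disk_mount!"unknown"}
Utilization: ${events[0].disk_used_percent!"?"}%
Free (MB):   ${events[0].disk_free_mb!"?"}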

(Have questions about any other items in this notification template? Check out my other relevant blog post here: Building the Notifications of Your Dreams in the RSA NetWitness Platform.)

 

Once you've updated those items, place it under Admin --> System --> Global Notifications --> Template (tab), and make sure you select that template when adding your ESA Rules. You can also build an Incident Rule in the RSA NetWitness Platform if you want to generate incidents for these alerts. Here is mine, for reference:

 

Summary

I can't emphasize enough that the Virtual Host Installation Guide has very comprehensive instructions for setting up a virtual RSA NetWitness Platform host, and you should make sure you follow them. However, following the additional steps in this guide can give you peace of mind that your RSA NetWitness Platform environment is running smoothly and collecting your critical security and forensic information.

 

Future note: I plan to build some Event Source Monitoring rules to make sure that my hosts are still sending logs. For example, the packetdb folder on your Decoders and Log Decoders should eventually reach 95% and then start rolling off data, and your Concentrators should likewise sit at 95% on their metadb folder. Once those folders hit 90% utilization, they should generate a log at every interval you specified in the cron job. If I ever get the free time to create those rules, I'll update this post with that information. If someone wants to build them on their own, be my guest!!

Introducing RSA NetWitness Platform's support for AWS VPC Traffic Mirroring!

 

By partnering with AWS and integrating with AWS VPC Traffic Mirroring, customers are able to access the right virtual traffic and network metadata from AWS environments. AWS VPC Traffic Mirroring allows users to capture and inspect network traffic and analyze packets without using any third-party packet-forwarding agents. The solution provides insight into, and access to, network traffic across the VPC infrastructure.

 

Packets can now be captured, retained, analyzed, and stored in the AWS cloud, bringing additional visibility and security with the RSA NetWitness Platform. With this agent-less packet capture capability, we're able to provide analysts the context they need to understand the threats they're investigating. By combining network visibility with other sources such as Logs, Endpoint, and NetFlow, we're able to provide a single view to the analyst!

 

RSA NetWitness Platform enables customers to obtain the visibility needed to secure critical infrastructure, and empowers any analyst to identify, understand, and mitigate advanced threats. The RSA NetWitness Platform's integration with AWS enables customers to close the visibility gap created by workloads in the cloud. This solution provides flexible AWS deployment options, allowing NetWitness components to be deployed in either a Full Stack (all cloud) or a Hybrid (on premises & cloud) configuration.

 

Hybrid Deployment

RSA NetWitness - AWS VPC Traffic Mirroring

 

For technical implementation details, see our AWS Deployment Guide.

It often happens to me that while I am testing new alerts and incident aggregation rules, I find that the aggregation condition(s) I chose in my Incident Rule are not what I want.  While I could re-create the raw alerts from scratch, I wanted an easier method to tell the Respond engine to re-apply its aggregation rule policies on the alerts that already exist in the database.

 

To be clear, the Respond engine is always attempting to apply all active and valid Incident Rules against un-aggregated and un-affiliated alerts in the database -- that is, any alert that has not been previously aggregated into any incident can be automatically aggregated into an incident if an incident rule with matching conditions is changed/created.  But for previously aggregated alerts whose incidents have been deleted (leaving the alerts un-aggregated but previously-affiliated), the Respond engine will not attempt to re-aggregate them.

 

So my goal, then, was to get the Respond engine to include these previously-affiliated alerts in its aggregation attempts.  To achieve this, the alerts simply needed to be updated to remove their previously-affiliated status.  And to make it easy to change dozens or even hundreds of alerts at once, I wrote a simple shell script (attached to this blog and pasted below) to do it all for me.

 

#!/bin/bash
#
#grab the deploy_admin password
DEPLOY_PW=$(security-cli-client --get-config-prop --prop-hierarchy nw.security-client --prop-name platform.deployment.password --quiet)

#set a desired time range to query for alerts
#examples: "24 hours ago" or "14 days ago" or "4 weeks ago"
timeRange=$(date +%s%N -d "30 days ago" | cut -b1-13)

#identify primaryESA host
primaryESA=$(echo -e "use orchestration-server\ndb.host.find({installedServices:\"ESAPrimary\"},{hostname:1})" | mongo admin -u deploy_admin -p $DEPLOY_PW --quiet | grep -Po "hostname.*\"" | sed -e "s/hostname.\{5\}\|\"//g")

#change status on all alerts that were part of a deleted incident
#within the timerange from "REMOVED_FROM_INCIDENT" to "NORMALIZED"
echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

A couple notes on the script:

  • I used one extremely generic parameter (timestamp within last 30 days) to limit the database query and update operation (line 15)
    • you should feel free to modify the timeRange (line 8) to suit your needs
    • you should also feel free to (carefully) modify the database query to focus on specific alerts in your environment
      • for example, given the following raw alert:

 

...you could change line 15 and add:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.moduleName\":\"Alert with source and destination IP values\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

...or:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.events.ip_src\":\"192.168.20.20\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

...or:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.moduleName\":\"Alert with source and destination IP values\"},{\"originalAlert.events.ip_src\":\"192.168.20.20\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet
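
One extra precaution I'd suggest (my addition, not part of the original script): run a read-only count with the same conditions first, reusing the script's variables, to see how many alerts your query would touch before you actually update anything:

echo -e "use respond-server\ndb.alert.count({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}}]})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet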

  • a successful run of the script will produce output like this, showing you how many alerts in the database were modified (3, in this case):

 

Of course, I recommend testing this (and most everything else) in a pre-prod or test NetWitness environment, if you have one. And should you have any questions about what might be a good and/or valid database query, the RSA Link community is always on hand to help (please have screenshots and/or specifics about your alerts ready... it's hard to help without knowing the details).
