RSA NetWitness Platform
As of RSA NetWitness Platform 11.5, analysts have a new landing page option to help them determine where to start upon login.  We call this new landing page Springboard.  In 11.5 it becomes the new default starting page upon login (adjustable) and can be accessed from any screen simply by clicking the RSA logo on the top left.

 

The Springboard is a specialized dashboard (independent of the existing "Dashboard" functionality) designed as a starting place where analysts can quickly see the most important risks, threats, and events in their environment.  From the Springboard, analysts can drill into any of the leads presented in each panel and be taken directly to the appropriate product screen with the relevant filter pre-applied, saving time and streamlining the analysis process.

 

As part of the 11.5 release, Springboard comes with five pre-configured (adjustable) panels that will be populated with the "Top 25" results in each category, depending on the components and data available:

 

  • Top Incidents - Sorted by descending priority. Requires the use of the Respond module.
  • Top Alerts - Sorted by descending severity, whether or not they are part of an Incident. Requires the use of the Respond module.
  • Top Risky Hosts - Sorted by descending risk score. Requires RSA NetWitness Endpoint.
  • Top Risky Users - Sorted by descending risk score. Requires RSA UEBA.
  • Top Risky Files - Sorted by descending risk score. Requires RSA NetWitness Endpoint.

 

Springboard administrators can also create custom panels, up to a total of ten, of a sixth type that aggregates "Events" based on any existing saved query profile used in the Investigate module.  This requires only the core RSA NetWitness Platform, with data sourced from the underlying NetWitness Database (NWDB).  This enables organizations to add their own starting places for analysts that go beyond the defaults, and to customize the landing experience to match the RSA NetWitness Platform components they have deployed:

 

Example of custom Springboard Panel creation using Event data

 

For more details on management of the Springboard, please see: NW: Managing the Springboard 

 

And as always, if you have any feedback or ideas on how we can improve Springboard or anything else in the product, please submit your ideas via the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform  

RSA is pleased to announce the availability of the NetWitness Export Connector, which enables customers to export NetWitness Platform events and route the data wherever they want, all in a continuous, streaming fashion, providing the flexibility to satisfy a variety of use cases.

 

This plugin is installed on Logstash and integrates with NetWitness Platform Decoders and Log Decoders. It aggregates metadata and raw logs from the Decoder or Log Decoder and converts them to Logstash JSON objects, which can easily integrate with numerous consumers such as Kafka, AWS S3, TCP, Elastic, and others.

 

Workflow of the NetWitness Export Connector

 

  • The input plugin collects metadata and raw logs from the Log Decoder, and metadata from the Decoder. The data is then forwarded to the Filter plugin.
  • The Filter plugin adds, removes, or modifies the received data and forwards it to the Output plugin.
  • The Output plugin sends the processed event data to the consumer destinations. You can use the standard Logstash output plugins to forward the data.
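To make the flow above concrete, here is a minimal, illustrative Logstash pipeline. The input plugin name and its options are placeholders (check the Export Connector documentation for the actual settings); the filter and output stages use standard Logstash plugins:

input {
  netwitness_export {                          # hypothetical name for the Export Connector input plugin
    decoder_host => "10.10.10.50"              # Decoder or Log Decoder to aggregate from
  }
}
filter {
  mutate {
    add_field => { "environment" => "prod" }   # example: enrich events before forwarding
  }
}
output {
  kafka {                                      # any standard Logstash output works: kafka, s3, tcp, elasticsearch...
    bootstrap_servers => "kafka01:9092"
    topic_id => "netwitness-events"
  }
}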

 

Check it out and let me know what you think!

 

Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own. 

 

Download and Documentation

https://community.rsa.com/docs/DOC-114086

We are excited to announce the release of the new RSA OSINT Indicator feed, powered by ThreatConnect!  

 

What is it?

There are two new feeds that have been introduced to RSA Live, built on Open Source Intelligence (OSINT) that has been curated and scored by our partners at ThreatConnect:

  • RSA OSINT IP Threat Intel Feed, including Tor Exit Nodes
  • RSA OSINT Non-IP Threat Intel Feed, which includes indicators of the following types:
    • Email Addresses
    • URLs
    • Hostnames
    • File Hashes


These feeds are automatically aggregated, de-duplicated, aged, and scored with ThreatConnect's ThreatAssess score. ThreatAssess is a metric combining both the severity and confidence of an indicator, giving analysts a simple indication of the potential impact when a matching indicator is observed.  Higher ThreatAssess scores mean higher potential impact.  The range is 0-1000, with RSA opting to focus on the highest-fidelity indicators - those with scores of 500 or greater (as of the 11.5 release; subject to change as needed).

 

Who gets it?

These feeds are included at no charge for any customer with any combination of RSA NetWitness Logs, RSA NetWitness Packets, or RSA NetWitness Endpoint under active maintenance. The feed will work on any version of RSA NetWitness, but please see the How do I deploy it? section for notes on version-specific considerations.

 

How do I deploy it?

These feeds will show up in RSA Live as follows:

 

To deploy and/or subscribe to the feed, please take a look at the detailed instructions here: Live: Manage Live Resources 

 

Customers on 11.4 and earlier will want to add a new ioc.score meta key to their Concentrator(s) in order to query and take advantage of the ThreatAssess score of any matched indicator. Please see 000026912 - How to add custom meta keys in RSA NetWitness Platform for details on how to do this. Please note that this meta key should be of type UInt16 - inside the index file, the definition should look similar to this:
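A representative entry for index-concentrator-custom.xml (the description and valueMax here are illustrative; adjust to your environment):

<key description="IOC Score" format="UInt16" level="IndexValues" name="ioc.score" valueMax="1001"/>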

 

11.5 and greater customers do not need to add this key, as it's already included by default.

 

 

How do I use it?

Once the feeds are deployed, any events or sessions with matching indicators will be enriched with two additional meta values, ioc and ioc.score.  These values are available for use in all search, investigation, and reporting use cases assuming those keys have been enabled.
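For example, a quick triage query focusing on only the highest-impact matches might look like this (the threshold is chosen purely for illustration):

ioc exists && ioc.score >= 800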

 

 

eg. Events filter view

eg. Event reconstruction view

 

What happens to the "RSA FirstWatch" and Tor Exit Node feeds?

If you are running these new feeds, you do not need to run the existing RSA FirstWatch & Tor Exit Node feeds in parallel, as they are highly redundant and tend to be less informative when matches occur.  At some point in the near future, once we believe the impact will be minimal, we will officially deprecate the RSA FirstWatch & standalone Tor Exit Node feeds.

 

Do you have ideas?

If you have ideas on how to make these feeds better, ideas for content creation leveraging these feeds, or anything else in the RSA NetWitness portfolio, please submit and vote on ideas in the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform 

By default, NetWitness Endpoint 11.x creates a self-signed Certificate Authority during its initial installation, and uses this CA to generate certificates for the endpoint agent and the local reverse proxy that handles all incoming agent communications. Because all these certificates are generated from the same CA chain, they automatically trust each other and enable seamless, easy, and secure communications between agents and the endpoint server.

 

But what if this self-signed CA cannot be used within your organization? For a number of very valid reasons, many orgs might not allow software using a self-signed certificate, and may instead be required to use their own trusted CAs. If this is the case, we have a couple options - an easy way, and a hard way.

 

This blog covers the hard way.

 

Everything that we do in the hard way must occur after the Endpoint Log Hybrid host has been fully installed and provisioned. This means you'll need to complete the entire host installation before moving on to this process.

 

There are two primary requirements for the hard way:

  • you must be able to create a server certificate and private key capable of Server Authentication
  • you must be able to create a client certificate and private key capable of Client Authentication
    • this client certificate must have a Common Name (CN) value of rsa-nw-endpoint-agent

 

I won't be going into details on how to generate these certificates and keys - your org should have some kind of process in place for this. And since the certificates and keys generated from that process can output in a number of different formats, I won't be going into details on how to convert or reformat them. There are numerous guides, documents, and instructions online to help with that.
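That said, purely as an illustration of the client-side requirement (file names and CA inputs here are placeholders - your org's PKI process takes precedence), an openssl-based flow might look like:

# openssl req -new -newkey rsa:2048 -nodes -keyout client-key.pem -out client.csr -subj "/CN=rsa-nw-endpoint-agent"
# openssl x509 -req -in client.csr -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial -days 365 -extfile <(echo "extendedKeyUsage = clientAuth") -out client-cert.pem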

 

Once we have our server and client certificates and keys, make sure to also grab the CA chain used to generate them (at the very least, both certs need to have a common Root or Intermediate CA to be part of the same trusted chain). This should hopefully be available through the same process used to create the certs and keys. If not, we can also export CA chains from websites - if you do this, make sure it is the same chain used to create your certificates and keys.

 

The end-state format that we'll need for everything is PEM. The single server and/or client cert should look like this:

-----BEGIN CERTIFICATE-----
MIIFODCCAyCgAwIBAgICEAEwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UEAwwHUm9v
dC1jYTAeFw0yMDA4MDUyMDE0MTdaFw0zMDA4MDMyMDE0MTdaMCUxIzAhBgNVBAMM
....snip....
-----END CERTIFICATE-----

 

The private key should look like this:

-----BEGIN PRIVATE KEY-----
MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQCuUtxhFPb+FtWD
mQyIELpYVW7isU2KA7ur6ZhWDnKI6pD1POYHfyftO6MhxYsaRrwQ+XxhRJhyT/Ht
....snip....
-----END PRIVATE KEY-----

 

And the Certificate Chain should look like this (one BEGIN-END block per CA certificate in the chain...also, it will help to simplify the rest of the process if this chain only includes CA certificates):

-----BEGIN CERTIFICATE-----
MIIFODCCAyCgAwIBAgICEAEwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UEAwwHUm9v
dC1jYTAeFw0yMDA4MDUyMDE0MTdaFw0zMDA4MDMyMDE0MTdaMCUxIzAhBgNVBAMM
....snip....
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIFBzCCAu+gAwIBAgIJAK5iXOLV5WZQMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
BAMMB1Jvb3QtY2EwHhcNMjAwODA1MTk1MTMxWhcNMzAwODAzMTk1MTMxWjASMRAw
....snip....
-----END CERTIFICATE-----

 

We want to make sure we have each of these PEM files for both the server and client certs and keys we generated. Once we have these, we can proceed to the next set of steps.

 

The rest of this process will assume that all of these certificates, keys, and chains are staged on the Endpoint Log Hybrid host.

Every command we run from this point forward occurs on the Endpoint Log Hybrid.

We end up replacing a number of different files on this host, so you should also consider backing up all the affected files before running the following commands.

 

For the server certificates:

  • # cp /path/to/server/certificate.pem /etc/pki/nw/web/endpoint-web-server-cert.pem
  • # cp /path/to/server/key.pem /etc/pki/nw/web/endpoint-web-server-key.pem
  • # cat /path/to/server/certificate.pem > /etc/pki/nw/web/endpoint-web-server-cert.chain
  • # cat /path/to/ca/chain.pem >> /etc/pki/nw/web/endpoint-web-server-cert.chain
  • # openssl crl2pkcs7 -nocrl -certfile /path/to/server/certificate.pem -certfile /path/to/ca/chain.pem -out /etc/pki/nw/web/endpoint-web-server-cert.p7b
  • # cp /path/to/ca/chain.pem /etc/pki/nw/nwe-trust/truststore.pem
  • # cp /path/to/ca/chain.pem /etc/pki/nw/nwe-ca/customrootca-cert.pem
  • # echo "/etc/pki/nw/nwe-ca/customrootca-cert.pem" > /etc/pki/nw/nwe-trust/truststore.p12.idx
  • # echo "/etc/pki/nw/nwe-ca/customrootca-cert.pem" > /etc/pki/nw/nwe-trust/truststore.pem.idx

 

The end results, with all the files we modified and replaced, should be:
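Based on the commands above, those files are:

/etc/pki/nw/web/endpoint-web-server-cert.pem
/etc/pki/nw/web/endpoint-web-server-key.pem
/etc/pki/nw/web/endpoint-web-server-cert.chain
/etc/pki/nw/web/endpoint-web-server-cert.p7b
/etc/pki/nw/nwe-trust/truststore.pem
/etc/pki/nw/nwe-trust/truststore.p12.idx
/etc/pki/nw/nwe-trust/truststore.pem.idx
/etc/pki/nw/nwe-ca/customrootca-cert.pem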

 

Once we're confident we've completed these steps, run:

  • # systemctl restart nginx

 

We can verify that everything so far has worked by browsing to https://<endpoint_server_IP_or_FQDN> and checking the certificate presented by the server:
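If a browser isn't handy, the same check can be done from a shell with standard openssl (nothing NetWitness-specific here):

# openssl s_client -connect <endpoint_server_IP_or_FQDN>:443 -showcerts </dev/null | openssl x509 -noout -subject -issuer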

 

If this matches our server certificate and chain, then we can move on to the client certificates. If not, then we need to go back and figure out which step we did wrong.

 

For the client certificates:

  • openssl pkcs12 -export -out client.p12 -in /path/to/client/certificate.pem -inkey /path/to/client/key.pem -certfile /path/to/ca/chain.pem

 

...enter a password for the certificate bundle, and then SCP this client.p12 bundle onto a Windows host. We'll come back to it in just a moment.

 

In the NetWitness UI, browse to Admin/Services --> Endpoint-Server --> Config --> Agent Packager tab. Change or validate any of the configurations you need, and then click "Generate Agent Packager." The Certificate Password field here is required to download the packager, but we won't be using the OOTB client certificate at all so don't stress about the password.

 

Unzip this packager onto the same Windows host that has the client.p12 bundle we generated previously. Next, browse to the AgentPackager\config directory, replace the OOTB client.p12 file with our custom-made client.p12 bundle, move back up one directory, and run the AgentPackager.exe.

 

If our client.p12 bundle has been created correctly, then in the window that opens, we will be prompted for a password. This is the password we used when we ran the openssl pkcs12 command above, not the password we used in the UI to generate the packager. If they happen to be the same, fantastic....

 

We'll want to verify that the Client certificate and Root CA certificate thumbprints here match with our custom generated certificates.

 

With our newly generated agent installers, it is now time to test them. Pick a host in your org, run the appropriate agent installer, and then verify that you see the agent showing up in your UI at Investigate/Hosts.

 

If it does appear, congratulations! Make sure to record all these changes, and be ready to repeat them when certificates expire and agent installers need upgrading/updating.

 

If it doesn't, a couple things to check:

  • first, give it a couple minutes...it's not going to show up instantly
  • go back through all these steps and double-check that everything is correct
  • check the c:\windows\temp directory for a log file with the same name as your endpoint agent, e.g. NWEAgent.log - if there are communication errors between the agent/host and the endpoint server, this log will likely have relevant troubleshooting details
  • if the agent log file has entries showing both "AgentCert" and "KnownServerCert" values, check that these thumbprints match the Client and Root CA certificate thumbprints from the AgentPackager output

    • ...I was not able to consistently reproduce this issue, but it is related to how the certs and keys are bundled together in the client.p12
    • ...when this happened to me, I imported my custom p12 bundle into the Windows MMC Certificates snap-in, and then exported it (make sure that the private key gets both imported and exported, as well as all the CAs in the chain), then re-ran my AgentPackger with this exported client.p12, and it fixed the error
    • ... ¯\_(ツ)_/¯
  • from a cmd prompt on the host, run c:\windows\system32\<service name of the agent>.exe /testnet
  • check the NGINX access log on the Endpoint Log Hybrid; along with the agent log file on the endpoint, this can show whether the agent and server are communicating properly
    # tail -f /var/log/nginx/access.log

By default, NetWitness Endpoint 11.x creates a self-signed Certificate Authority during its initial installation, and uses this CA to generate certificates for the endpoint agent and the local reverse proxy that handles all incoming agent communications. Because all these certificates are generated from the same CA chain, they automatically trust each other and enable seamless, easy, and secure communications between agents and the endpoint server.

 

But what if this self-signed CA cannot be used within your organization? For a number of very valid reasons, many orgs might not allow software using a self-signed certificate, and may instead be required to use their own trusted CAs. If this is the case, we have a couple options - an easy way, and a hard way.

 

This blog covers the easy way.

 

The only real requirement for the easy way is that we be able to create an Intermediate CA certificate and its private key from our CA chain (or use an existing pair), and that this Intermediate CA be allowed to generate an additional, subordinate CA under it.

 

For my testing, "Root-ca" was my imaginary company's Root CA, and I created "My Company Intermediate CA" for use in my 11.4 Endpoint Log Hybrid.

 

(I'm no expert in certificates, but I can say that all the Intermediate CAs I created that had explicit extendedKeyUsage extensions failed. The only Intermediate CAs I could get to work included "All" of the Intended Purposes. If you know more about CAs and the specific extendedKeyUsage extensions needed for a CA to be able to create subordinate CAs, I'd be interested to know what they are.)
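For what it's worth, a minimal sketch of the v3 extensions that standard X.509 practice suggests for a CA able to sign subordinate CAs - an educated guess on my part, not something verified against this installer - would be:

[ v3_intermediate_ca ]
basicConstraints = critical, CA:true
keyUsage = critical, keyCertSign, cRLSign
# deliberately no extendedKeyUsage line - omitting it leaves the CA valid
# for "All" intended purposes, matching the behavior described above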

 

Once we have an Intermediate CA certificate and its private key, we need to make sure they are in PEM format. There are a number of ways to convert and check keys and certificates, and a whole bunch of resources online to help with this, so I won't cover any of the various conversion commands or methods here.

 

If the CA certificate looks like this, then it is most likely in the correct format:

-----BEGIN CERTIFICATE-----
MIIFODCCAyCgAwIBAgICEAEwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UEAwwHUm9v
dC1jYTAeFw0yMDA4MDUyMDE0MTdaFw0zMDA4MDMyMDE0MTdaMCUxIzAhBgNVBAMM
....snip....
-----END CERTIFICATE-----

 

And if the private key looks like this, then it is most likely in the correct format:

-----BEGIN PRIVATE KEY-----
MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQCuUtxhFPb+FtWD
mQyIELpYVW7isU2KA7ur6ZhWDnKI6pD1POYHfyftO6MhxYsaRrwQ+XxhRJhyT/Ht
....snip....
-----END PRIVATE KEY-----

 

Our last step in this process has to occur at a very specific point during the endpoint log hybrid's installation - after we have run the nwsetup-tui command and the host has been enabled within the NetWitness UI, but before we install the Endpoint Log Hybrid services:

  • on the endpoint host, create directory /etc/pki/nw/nwe-ca
  • place the CA certificate and CA private key files in this directory and name them nwerootca-cert.pem and nwerootca-key.pem, respectively
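In shell form, those two steps look like this (source paths are placeholders):

# mkdir -p /etc/pki/nw/nwe-ca
# cp /path/to/intermediate-ca-cert.pem /etc/pki/nw/nwe-ca/nwerootca-cert.pem
# cp /path/to/intermediate-ca-key.pem /etc/pki/nw/nwe-ca/nwerootca-key.pem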

 

The basis for this process comes directly from the "Configure Multiple Endpoint Log Hybrid Hosts" step in the Post Installation tasks guide (https://community.rsa.com/docs/DOC-101660#NetWitne), which offers a bit more context and detail on when this step should occur and how to do it properly.

 

Once we've done this, we can now install the Endpoint Log Hybrid services on the host.

 

I suggest you watch the installation log file on the endpoint server, because if the Intermediate CA does not have all the necessary capabilities, the installation will fail, and this log file can help us identify which step failed (if my own experience is any guide, it will most likely fail during the attempt to create the subordinate Endpoint Intermediate CA --> /etc/pki/nw/nwe-ca/esca-cert.pem):

# tail -f /var/log/netwitness/config-management/chef-solo.log

 

If all goes well, we'll be able to check that our endpoint-server is using our Intermediate CA by browsing to https://<endpoint_server_IP_or_FQDN> and checking the certificate presented by the server:

 

And our client.p12 certificate bundle within the agentPackager will be generated from the same chain:

 

And that's it!

 

Any agent packages we generate from this point forward will use the client.p12 certificates generated from our CA. Likewise, all agent-server communications will be encrypted with the certificates generated from our CA.

Thank you for joining us for the July 22nd NetWitness Webinar covering Data Carving using Logs, as presented by Leonard Chvilicek. An edited recording is available below, along with the Zoom link to the original webinar recording.

 

 

https://Dell.zoom.us/rec/share/9ddSC-v1qVxITbeS5hreSJY6AZnFeaa8hyEe-fYKxEvYejAP3hl67DCXZUjQGil6

Password: V0.*h5#v

This article applies to hunting with NetWitness for Networks (packet-based). Before proceeding, it is important that you are aware of GDPR or any other applicable data collection regulations, which will not be covered here.

 

Hunting for plaintext credentials is an important and easy method of finding policy violations or other enablers of compromise. Increasing numbers of the workforce in remote or work-from-home situations means that employees will be transferring data over infrastructure not controlled by your organization. This may include home WiFi, mobile hotspots, or coffee shop free WiFi.

 

Frequently, this hunting method will reveal misconfigured web servers, poor authentication handling, or applications using baked-in URLs and credentials. While NetWitness does a good job parsing this by default, there are additional steps that can be taken to increase detection and parsing.

 

Key Takeaways

  • Ensure the Form_Data_lua parser is enabled and updated
  • Also hunt for sessions where passwords are not parsed

 

Setup

Most environments will have either the HTTP or HTTP_lua parser currently enabled considering that it is one of the core network parsers. You can check this under your Decoder > Config tab in the Parsers Configuration pane. More details about system parsers and Lua equivalents can be found here: https://community.rsa.com/docs/DOC-79198

 

Form_Data_lua

This parser looks at the body of HTTP content whereas the HTTP/HTTP_lua parsers primarily extract credentials from the headers. Before enabling Form_Data_lua, it is important to understand that this can come with increased resource usage due to the amount of additional data being searched.  You can find statistic monitoring instructions here, although this itself can come with a performance impact as well: https://community.rsa.com/docs/DOC-80210

 

For the purpose of this hunting method, you can disable the “query” meta key if there are resource concerns. In either case, be sure to monitor keys for index overflow. You can adjust the per-key valueMax if needed per the Core Database Tuning Guide: https://community.rsa.com/docs/DOC-81117
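If you do need to raise a limit, a representative entry in index-concentrator-custom.xml might look like this (the key name and values are illustrative):

<key description="Query" format="Text" level="IndexValues" name="query" valueMax="250000"/>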

 

Also, if you are not subscribed to and deploying the Form_Data_lua parser, be sure to deploy the latest copy from Live. Along with optimizations, recent changes expand the variables that the parser is searching for, as well as introduce parsing of JSON-based authentication.

 

Hunting

Once the parsers are enabled, you can go to Investigate > Navigate and begin a new query. For ease of record keeping, I like to structure my hunt in these categories:

  • Inbound
    • Password exists
    • Password does not exist
  • Lateral
    • Password exists
    • Password does not exist
  • Outbound
    • Password exists
    • Password does not exist

 

The assumption here is that you’re using the Traffic_Flow_lua parser with updated network definitions to easily identify directionality. If not, you can use other keys such as ip.src and ip.dst. More info on the Traffic_Flow_lua parser here: https://community.rsa.com/docs/DOC-44948

 

Querying where passwords exist is straightforward:

password exists && direction = "inbound" && service = 80
password exists && direction = "lateral" && service = 80
password exists && direction = "outbound" && service = 80

 

Querying where passwords do not exist requires a bit of creativity and assumptions. In many cases, authentication over HTTP will involve URLs similar to http[:]//host[.]com/admin/formLogin. This path is recorded in the directory and filename meta keys, where “/admin/” would be the directory and “formlogin” would be the filename.

 

I’ll often start with the below query (the exclamation point is used to negate “exists”):

password !exists && direction = "outbound" && service = 80 && filename contains "login","logon","auth"

 

You can follow this pattern for other directions, filenames, and directory names as you see fit. The comma-separated strings in the filename query act as a logical OR. It would be equivalent to the following. Pay attention to the parentheses:

password !exists && direction = "outbound" && service = 80 && (filename contains "login" || filename contains "logon" || filename contains "auth")

 

Many authentication sessions will occur using the POST HTTP method. If you’d like, you can also append action = "post" to the above query, as shown below.
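The combined query would then read:

password !exists && direction = "outbound" && service = 80 && action = "post" && filename contains "login","logon","auth"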

 

Analysis

After your query completes, you’ll be left with a dataset to review. (Hopefully) Not all of them will contain credentials, but this is where the human analysis begins. Choose a place to start, then open the Event Analysis view (now known simply as Event view in newer versions). My example here will be heavily censored for the purpose of this blog post.

Choose the “Decode Selected Text” option to make viewing this easier.

Now that you’ve found sessions of interest, you can begin appropriate follow-up action. Examples may include advising the website developer to enable HTTPS or discussing app configuration with your mobile application team.

 

Conclusion

This hunting method will aid in analyzing security posture from outbound, inbound, and lateral angles. It also serves as an easy gateway for analysts to quickly make a positive security impact as well as become familiar with the intricacies of HTTP communication.

 

NetWitness parsers must balance performance considerations against detection fidelity. While they currently have good coverage, it’s beneficial to know how to search data that is malformed or formatted in a way that makes it impractical for NetWitness to parse.

 

For more hunting ideas, see the NetWitness Hunting Guide: https://community.rsa.com/docs/DOC-62341

 

If you have any comments, feel free to leave them below. If you’re finding recurring patterns in your environment that are not parsed, you can let us know and we’ll assess the feasibility of adding the detection to the parser.

A question has come up a few times on how someone could easily exclude certain machines from triggering NetWitness Endpoint Agent alerts.

 

This particular use case involved their "Gold Images", which are used for deploying machines.  As part of a bigger vision for other server roles & rules, a custom meta key called Server.Role was created to hold the various roles they have defined for servers in their environment.

 

A Custom Feed was created to associate "Gold Image" as a meta value for that meta key by matching against alias.host, device.host, or host.src. This example is just an ad-hoc feed, but a recurring feed from a CMDB or other tools could be leveraged to keep this list dynamic.

Note: my example includes non-gold roles just to contrast the values.

 

Now that the meta values are created, we can use these as whitelisting statements for the App rules.

From Admin > Services, select the Endpoint Log Decoder, click View > Config, then select the App Rules tab.

 

Filter by nwendpoint to find the endpoint rules.

Edit the rule you'd like and add server.role != 'gold image' && in front of the rule, as shown in the example below:
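For instance, if a rule's original condition were filename.src = 'psexec.exe' (a made-up condition purely for illustration), the whitelisted version would become:

server.role != 'gold image' && filename.src = 'psexec.exe'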

Click OK, then Apply the rules.


Repeat for any other rules you would need whitelisted.

 

This is just a simple example, but you can use this approach for many other scenarios.

Summary:

Several changes have been made to the Threat Detection Content in Live. For added detection, you need to deploy/download and subscribe to the content via Live; retired content must be removed manually.

For detailed configuration procedures to set up RSA NetWitness Platform, see the Content Quick Start Guide

 

Additions:

RSA NetWitness Lua Parsers:

  • WireGuard – A new Lua parser has been introduced to identify WireGuard VPN sessions. WireGuard is an open-source, security-focused virtual private network (VPN) known for its simplicity and ease of use.

Read more about Identifying WireGuard (VPN) Traffic Using RSA NetWitness Network 

 

 

More information about Packet Parsers 

 

RSA NetWitness Application Rules:

More information about NetWitness 11.4 New Features and Alerting: ESA Rule Types 

 

Changes:

RSA NetWitness Lua Parsers:

  • SMB_lua – This parser is updated for significant detection improvements with named pipe parsing capabilities. Detection is expanded to track parent-child relationships to recognize operations performed on child named pipes.

Read more about SMB_lua in action -

Detecting Lateral Movement in RSA NetWitness: Winexe 

Around the Fire With Old Friends (CVE-2019–0604, and CVE-2017-0144)

Keeping an eye on your Hounds...  

 

  • DCERPC – This parser is updated for similar detection improvements with named pipe parsing capabilities.

Read more about Using the RSA NetWitness Platform to Detect Lateral Movement: SCShell (DCE/RPC) 

 

  • TLS_lua – New detections have been added to the TLS parser to flag suspicious cipher suites for both client and server. This gives analysts added insight into TLS connections with suspicious client/server setups, which will help detect and analyze malicious activity.

Read more about SSL and NetWitness 

 

  • rtmp_lua – The rtmp parser has been updated for accuracy and efficiency.
  • HTTP_lua – This parser has been updated with added detection and better accuracy.

 

 

Discontinued:

We strive to provide timely and accurate detection of threats as well as traits that can help analysts hunt through network and log data. Occasionally this means retiring content that provides little-to-no value.

Discontinued Content 

 

For additional documentation, downloads, and more, visit the RSA NetWitness Platform page on RSA Link.

 

EOPS Policy:

RSA has a defined End of Primary Support policy associated with all major versions. Please refer to the Product Version Life Cycle for additional details.

Carrying on with the theme of Remote Access Tools (RATs), in this blog post we will be covering Void-RAT. This tool is still in development and currently at an alpha release, so it doesn't come with as many features as other RATs we've looked at; with that being said, it still works quite nicely for controlling a remote endpoint. As always, check out the C2 Matrix for more details on its functionality.

 

The Attack

On our victim endpoint, we drop our compiled binary, client.exe, into the C:\PerfLogs\ directory and execute it:

 

 

After execution, it attempts to connect back to the C2 server; if successful, it creates a slightly modified version of itself and stores it here: C:\Windows\Firewall\Firewall.exe - it then executes this binary, which is the one that communicates back to the C2 server along with some information about the endpoint it is running on:

 

There are a number of options available to control the endpoint, but the most useful is the Remote CMD option. This allows us to execute commands remotely on the victim:

 

 

The Detection Using NetWitness Network

Void-RAT's communication is in cleartext but uses a custom TCP protocol that is not directly understood by NetWitness. This means the traffic gets tagged as OTHER; even though NetWitness does not understand the protocol, it will still analyse it. From the below screenshot, we can see that NetWitness has detected Windows CLI commands over some sessions using a suspect port:

 

Drilling into these sessions and reconstructing them, we can see the structure of the protocol used by Void-RAT, and the information that was sent to and from the victim:

 

Some more of the payload can be seen below. These commands are what NetWitness detected:

 

Void-RAT also reports back the public IP of the victim upon its initial check-in. It does this by making an HTTPS request to wtfismyip[.]com - this could also be used as a potential starting point for a hunt to find potentially compromised endpoints:

service = 443 && sld = 'wtfismyip'

 

These types of tools also require interaction from a remote operator, so at some point the attacker will perform actions that may supply additional indicators leading you to their presence. Here under the Indicators of Compromise meta key, we can see the meta value, hex encoded executable:

 

 

Drilling into this meta value and opening the events view to reconstruct the session, we can see that a hex encoded executable is being sent across the wire which uses the same proprietary protocol as Void-RAT, so even if we had not detected the RAT initially, we detected suspect behaviour, which led us to the RAT:

 

 

The Detection Using NetWitness Endpoint

Upon execution, Void-RAT sets up persistence for itself. It achieves this by creating a slightly modified version of itself here: C:\Windows\Firewall\Firewall.exe, and modifies the \CurrentVersion\Run key to execute it upon boot. This behaviour was detected by NetWitness Endpoint and is shown as the two meta values in the following screenshot:

 

 

Drilling into these two meta values we can see these two events in more detail:

 

 

Changing our pivot in the Navigate view to focus on the new binary, filename.src = 'Firewall.exe', we can see that it is executing suspect commands (as shown under the Source Parameter meta key) and making network connections (as shown under the Context meta key):

 

Drilling into the network connections made by Firewall.exe, we can see the lookup performed to get the public IP of the victim using wtfismyip[.]com:

 

A simple application rule that could be created to look for this behaviour is shown below:

domain.dst = 'wtfismyip.com'

 

We can also see the connection back to the C2, which would have given us a nice indicator to search and see if other endpoints are infected:

 

 

Similarly, as stated in the network detection section, the tool is operated remotely and will at some point have to perform actions to achieve its end goal. The attacker transferred a hex-encoded binary across the wire, but this cannot be executed by the system, so they used certutil (a LOLBin) to hex-decode the file back into an executable, which was detected under the Behaviours of Compromise meta key as shown below:

 

 

Conclusion

While many RATs seem to use custom TCP protocols to communicate, their behaviour is easily identifiable with NetWitness. When hunting in network traffic, make sure to spend some time on service = 0 - and remember that a RAT has to do something in order to achieve its end goal, and those actions will be picked up by NetWitness, so make sure to look for executables performing suspicious actions and making network connections that you typically wouldn't expect for that endpoint. While this RAT does use a custom protocol, in a lot of cases attackers exploit security controls in organizations that allow direct internet access on well-known common ports, like 80/HTTP, 443/HTTPS, 22/SSH, etc. In these cases, NetWitness will also flag the unknown service on these ports. For more mature organizations using NGFWs that do a certain level of protocol inspection before allowing traffic for well-known services to flow through them, RATs like this would have some difficulty surviving, and therefore attackers are more prone to use tools that rely on standard protocols, which we have covered in some of the other posts.

This month we did a live demonstration of upgrading the firmware on iDRAC versions 8 and 9. Sadly, I wasn't able to make videos for this one, but here are Dell's official walkthrough videos. (Please keep in mind RSA only supports certain firmware versions, which can be found here: RSA NetWitness Availability of BIOS & iDRAC Firmware Updates)

iDRAC9 Firmware Upgrade | iDRAC8 Firmware Upgrade

 

Dell has multiple guides on IPMI-based interfacing with iDRACs, which can all be found on Dell's website depending on your firmware and hardware versions.

 

The recording of the May webinar is available here:

Webinar Recording

Access Password: 8V*6.vT@

 

PowerPoint is attached.

Summary:

Several changes have been made to the Threat Detection Content in Live. For added detection you need to deploy/download and subscribe to the content via Live. For retired content, you must manually remove those items.

 

For detailed configuration procedures to set up RSA NetWitness Platform, see the Content Quick Start Guide

 

Additions:

RSA NetWitness Lua Parsers:

  • TLS_lua Options – Optional parameters to alter the behavior of the TLS_lua parser.

Available Options:

"Overwrite Service": default value false

Default behavior is that if another parser has identified a session with service other than SSL, then this parser will not overwrite the service meta.

If this option is enabled, the parser identifies all sessions containing SSL as SSL, even if a session has been identified by another parser as another service.

 

"Ports Only": default value false

Default behavior is port-agnostic: that is, the parser looks for all SSL/TLS sessions regardless of which ports a session uses.  This allows identification of encrypted sessions on unexpected and non-standard ports.

If this option is enabled, the parser only searches for SSL/TLS sessions on the configured ports. Sessions on other ports will not be identified as SSL/TLS. This may improve performance, at a cost of possibly decreased visibility.

 

Note that a session on a configured port that is not SSL/TLS will still not be identified as SSL/TLS.  In other words, the parser does not assume that all sessions on configured ports are SSL/TLS.

Read more about SSL and NetWitness 

 

More information about Packet Parsers: https://community.rsa.com/docs/DOC-43422

 

RSA NetWitness Application Rules:

  • Creates Run Key – A new application rule has been added to detect the creation of new run keys. Creating a new run key can be an indication of someone trying to use startup configuration locations to execute malware, such as remote access tools, in order to maintain persistence through system reboots.

This rule addresses MITRE’s ATT&CK™ tactic – Persistence; Technique - Registry Run Keys / Startup Folder

 

  • Execute DLL Through Rundll32 – A new application rule has been introduced to detect DLL execution using the Rundll32 program. Rundll32 can be called to execute an arbitrary binary. Attackers may take advantage of this for proxy execution of code to avoid triggering security tools.

This rule addresses MITRE’s ATT&CK™ tactic – Execution, Defense Evasion; Technique - rundll32
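As a very rough approximation of what such logic can look like (illustrative only - deploy the actual rule from Live; this simply mirrors the parameter-matching style used in other endpoint rules):

(filename.src = 'rundll32.exe' || filename.dst = 'rundll32.exe') && (param.src contains '.dll' || param.dst contains '.dll')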

 

  • Runs DNS Lookup Tool for TXT Record – A new application rule has been added to detect possible covert command-and-control channels. Running nslookup.exe to query TXT records can be used to establish a covert Command & Control channel to exchange commands and other malicious information. These malicious commands can later be executed on the target system.

This rule addresses MITRE’s ATT&CK™ tactic – Discovery, Command and Control; Techniques - System Network Configuration Discovery, Commonly Used Port, Standard Application Layer Protocol

 

For more information about NetWitness 11.4 New Features and Alerting: ESA Rule Types 

 

 

Changes:

RSA NetWitness Lua Parsers:

  • ethernet_oui – The list of registered OUIs in the parser has been updated for added detection.

Read more about Lua - Mapping MAC to Vendor (Logs/Netflow and Endpoint)  

 

More content has been tagged with MITRE ATT&CK™ metadata for better coverage and improved detection.

For detailed information about MITRE ATT&CK™:

RSA Threat Content mapping with MITRE ATT&CK™  

Manifesting MITRE ATT&CK™ Metadata in RSA NetWitness  

 

 

Discontinued:

We strive to provide timely and accurate detection of threats as well as traits that can help analysts hunt through network and log data. Occasionally this means retiring content that provides little-to-no value.

List of Discontinued Content 

 

RSA NetWitness Application Rules:

  • Stealth Email Use - Marked discontinued due to performance-to-value tradeoff.

 

For additional documentation, downloads, and more, visit the RSA NetWitness Platform page on RSA Link.

 

EOPS Policy:

RSA has a defined End of Primary Support policy associated with all major versions. Please refer to the Product Version Life Cycle for additional details.

Delving back into the C2 Matrix to look for some more inspiration for blog posts, we noticed there are a number of Remote Administration Tools (RATs) listed. So we decided to start taking a look at these RATs and see how we can detect their usage in NetWitness. This post will cover QuasarRAT which is an open-source, remote access tool that is developed in C#. It has a large variety of features for controlling the victim endpoint and has been used by a number of APT groups.

 

The Attack

QuasarRAT can be compiled in two modes, debug and release - for this blog post we compiled QuasarRAT in debug mode as it is the quickest and easiest way to get up and running. Once our agent had been compiled, we dropped it onto our victim endpoint in the C:\PerfLogs\ directory and executed:

 

Shortly after execution we get a successful connection back to QuasarRAT from our victim endpoint:

 

QuasarRAT has a large feature set; here we are using the Remote Shell feature to execute some commands:

 

There is also a file explorer that allows us to easily navigate the file system, as well as upload and download files:

 

It even has a Remote Desktop feature to view and control the endpoint:

 

 

The Detection Using NetWitness Network

QuasarRAT does not have an option for insecure communication, so all traffic will be over SSL. It also uses a custom TCP protocol for its communication, so if intercepted the protocol would be tagged as OTHER, and you would have to look for indicators similar to those outlined in our CHAOS C2 post: Using RSA NetWitness to Detect Chaos C2.

 

Under the Service Analysis meta key, we get some interesting meta values generated regarding the certificate. QuasarRAT generates a self-signed cert upon compilation; this means the certificate's age is low, which is identified by the certificate issued within last week meta value, while the self-signed cert itself is flagged as ssl certificate self-signed. You'll also notice an ssl over non-standard port meta value, generated because the default port for QuasarRAT is 4782 (this is easily changed, however, and would more commonly be 443 to bypass firewall restrictions). With that being said, these are some great pivot points to start a hunt in SSL traffic to look for suspect SSL communication:

 

Looking into the parsed data from the certificate, we can see that the SSL CA and SSL Subject identify this as a Quasar Server, which are the default values given to the certificate created by QuasarRAT:

ssl.ca = 'quasar server ca' || ssl.subject = 'quasar server ca'

 

Another interesting meta value is located under the Versions meta key, where we can see that QuasarRAT uses an outdated version of TLS, tls 1.0 - this could be another starting point to look for this tool, or other applications using outdated protocols for that matter:

 

The SSL JA3 hash for this comes back as fc54e0d16d9764783542f0146a98b300, which according to JA3 OSINT maps to PowerShell 5.1;Invoke-WebRequest. While there is often overlap with JA3 hashes, it would still be a good place to start a hunt from:
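Assuming your TLS parser is populating the ja3 meta key, a simple pivot would be:

ja3 = 'fc54e0d16d9764783542f0146a98b300'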

 

On initial execution the RAT will also make an HTTP call to http://ip-api.com to obtain the public IP address of the endpoint. It would be worth hunting through the network traffic for requests to this domain and others that provide the same function:

 

The Detection Using NetWitness Endpoint

When we were setting up QuasarRAT, we set the persistence option to true, and the following two meta values were generated as a result. This is because QuasarRAT will copy itself to the \AppData\Roaming\ directory and use the \CurrentVersion\Run key to start itself upon boot:

 

If you are using the new ATT&CK meta keys, we also see this persistence mechanism described there as well with the following meta values:

 

As stated in the network detection section, the RAT will make an HTTP connection to http://ip-api.com to get the public IP of the victim; we can also see that in the endpoint network event data, as shown below:

 

We can also drill into the meta value console.remote, which is located under the Context meta key. This will show us commands executed by cmd.exe or powershell.exe as a result of inter-process communication through anonymous pipes, i.e. a reverse shell - here we can see client.exe executing suspect commands:

 

It is important to triage all the commands executed in order to identify and follow the attacker's intentions. An interesting command seen above relates to esentutl.exe; this binary provides database utilities for the Extensible Storage Engine, but it can also be used to copy locked files, for example. Drilling into this command, we can see it was used to copy the SAM hive (which is a locked file) to the C:\PerfLogs\ directory - it does this by using the Volume Shadow Copy service (as noted by the /vss switch in the command below) to make a backup of the locked file, which we are then able to copy:
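Reconstructed from esentutl's documented switches (the exact paths here are illustrative), the command would look similar to:

esentutl.exe /y /vss C:\Windows\System32\config\SAM /d C:\PerfLogs\SAM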

 

This is an interesting LOLBin (Living off the Land Binary), as it would allow an attacker to copy any locked file from the system. This is activity that should be monitored, and the following application rule logic would detect the usage of this command to copy files using the Volume Shadow Copy service:

(filename.src = 'esentutl.exe' || filename.dst = 'esentutl.exe') && (param.src contains '/vss' || param.dst contains '/vss')

NOTE: Not all usage of esentutl.exe will necessarily be malicious; this could be a legitimate technique used by backup software, for example. It is down to the defender to determine the legitimacy of the tool executing the command.

 

 

Conclusion

QuasarRAT has been around for some time and has been used in a number of targeted attacks against organizations, and it is easy to see why. Remote access tools such as this pose a real risk to organizations, and monitoring for their activity is paramount to ensuring the security of your network. It is also important as a defender that when these tools are found, all commands are triaged to gain a better understanding of the attacker's intentions and end goal.

To round out our series explaining how to use the indicators from the ASD & NSA's report for detecting web shells (Detect and prevent web shell malware | Cyber.gov.au) with NetWitness, let's take a look at the endpoint-focused indicators. If you missed the other posts, you can find them here:

 

Signature-Based Detection

To start with, the guide provides some YARA rules for static, signature-based analysis. However, the guide then quickly moves on to say that this approach is unreliable, as attackers can easily modify web shells to avoid this type of detection. We couldn't agree more – YARA scanning is unlikely to yield many effective detections.

 

Endpoint Detection and Response (EDR) Capabilities

The guide then goes on to describe the potential benefits of using EDR tools like NetWitness Endpoint. EDR tools can be of great benefit to provide visibility into abnormal behaviour at a system level. As the paper notes:

For instance, it is uncommon for most benign web servers to launch the ipconfig utility, but this is a common reconnaissance technique enabled by web shells.

Indeed - monitoring processes and commands invoked by web server processes is a good way to detect the presence of web shells. When a web shell is first accessed by an attacker, they will commonly run a few commands to figure out what sort of access they have. Appendix F of the guide includes a list of Windows executables to watch for being launched by web server processes like IIS's w3wp.exe.

NetWitness Endpoint provides OOTB monitoring for many of these processes, and produces metadata when execution is detected. The examples below show some of the meta generated for the execution of cmd.exe, ipconfig.exe, and whoami.exe from a web shell - the Behaviors of Compromise key shows values of interest:

An important detail to be wary of is that in many cases the web server process, like w3wp.exe, may not invoke the target executable directly. So simply running a query looking for filename.src = 'w3wp.exe' && filename.dst = 'ipconfig.exe' won’t work. In the example below, we can see that the web server process actually invokes a script in memory, which then invokes cmd.exe to run the desired tool ipconfig.exe, and similarly for whoami.exe:

The event detail shows the chain of execution across the two events:

We can see the full meta data includes the command to run ipconfig.exe passed as a parameter between the two processes:

 

We can get a clearer picture of the relationship between these processes using the NetWitness Endpoint process analyser, which shows the links between the processes:

 

NetWitness Endpoint generates a lot of insightful metadata to describe actions on a host. It is well worth reviewing the metadata generated and which meta keys it is placed under. There is a great documentation page with all the details here: RSA NetWitness Endpoint Application Rules 

Not just IIS

Of course, web shells don't only run on IIS! The same principles can be used for detecting web shells installed on Apache Tomcat and other web servers. Application rules in NetWitness Endpoint also look for command execution by other web server processes. Make sure you check your environment for your web server daemons and add them to the rules as well:
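As a rough sketch, an adapted rule condition might look like this (the daemon names are illustrative - Tomcat, for instance, typically shows up as java.exe):

filename.src = 'w3wp.exe','httpd.exe','nginx.exe','java.exe' && filename.dst = 'cmd.exe','ipconfig.exe','whoami.exe'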


Endnote

That’s it for this series, where we’ve gone through the indicators published by the ASD & NSA in their guide for detecting web shells and transcribed how to use them in NetWitness. While the indicators in the guide serve as a starting point, real-life detection can get very complicated very quickly. As we stated in a previous post:

Not all indicators are created equally, and this post should not be taken as an endorsement by this author or RSA on the effectiveness and fidelity of the indicators published by the ASD & NSA.

My colleague Hermes Bojaxhi recently posted about another example involving web shells from one of our cases. He goes into great detail showing the exploitation of Exchange and the installation of a web shell: Exchange Exploit Case Study – CVE-2020-0688 

 

Let me know in the comments below if you’ve used any of these techniques in your environment and what you've found - or let me know if there's anything else you'd like to see.

 

Happy Hunting!


Postman for NetWitness

Posted by Josh Randall, May 17, 2020

If you've ever done any work testing against an API (or even just for fun), then you've likely come across a number of tools that aim to make this work (or fun) easier.

 

Postman is one of these tools, and one of its features is a method to import and export collections of API methods that enable individuals to begin using those APIs much more easily and quickly than if, say...they only have a bunch of docs to get them started.

 

As NetWitness is a platform with a number of APIs and a number of docs to go along with them, a Postman collection detailing the uses, requirements, options, etc. of these APIs should (I hope) be a useful tool that individuals and teams can leverage to enable more efficient and effective use of the NetWitness platform....as well as with any other tool that you may want to integrate with NetWitness via its APIs.

 

With that in mind, I present a Postman Collection for NetWitness.  This includes all the Endpoint APIs, all the Respond APIs, and the more commonly used Events (A.K.A. SDK) APIs --> Query, Values, Content, and Packets. Simply import the attached JSON file into Postman, fill in the variables, and start API'ing.

 

A few notes, tips, and how-to's....

  • upon importing the collection, the first thing you should do is update the variables to match your environment

  • the rest_user / rest_pass variables are required for the Endpoint and Respond API calls
    • the NW account you use here can be created/managed in the NW UI in Admin/Security --> Users & Roles tabs
    • the role assigned to the account must have the integration-server.api.access permission, as well as any underlying permissions required to fulfill the request
    • e.g.: if you're querying Endpoint APIs, you'll need integration-server.api.access as well as endpoint-server permissions
  • the svc_user / svc_pass variables are required for the Events API calls
    • the NW account you use here can be created/managed in the NW UI in Admin/Services --> <core_service> --> Security --> Users & Roles tabs
    • the role assigned to the account must have the sdk.content, sdk.meta, and sdk.packets permissions, as well as any additional permissions necessary to account for Meta and Content Restriction settings you may be using
  • every Respond and Endpoint call will automatically create and update the accessToken and refreshToken used to authenticate its API call
    • so long as your rest_user and rest_pass variables are correct and the account has the appropriate permissions to call the Respond and/or Endpoint node, there is no need to manually generate these tokens
    • that said, the API calls to generate tokens are still included so you can see how they are being made
  • several of the Endpoint APIs, when called, will create and update variables used in other Endpoint APIs
    • the first of these is the Get Services call, which lists all of the endpoint-server hosts and creates variables that can be used in other Endpoint API calls
      • the names of these variables will depend on the names of each service as you have them configured in the NW UI
    • the second of these is the Get Hosts call, which lists all of the endpoint agents/hosts reporting to the queried endpoint-server and creates a variable of each hostname that can be used in other Endpoint API calls

      • this one may be a bit unwieldy for many orgs, though, because if you have 2000 agents installed, this will create 2000 variables - one for each host - and the same goes if you have 20,000 agents installed, or 200,000....
      • you may not want all those hostname variables, so you can adjust how many get created, or disable it altogether, by modifying or deleting or commenting out the javascript code in the Tests section of the Get Hosts call

Any questions, comments, concerns, suggestions, etc...please let me know.
