
RSA NetWitness Platform

24 Posts authored by: Josh Randall Employee

By default, NetWitness Endpoint 11.x creates a self-signed Certificate Authority during its initial installation, and uses this CA to generate certificates for the endpoint agent and the local reverse proxy that handles all incoming agent communications. Because all these certificates are generated from the same CA chain, they automatically trust each other and enable seamless, easy, and secure communications between agents and the endpoint server.

 

But what if this self-signed CA cannot be used within your organization? For a number of very valid reasons, many orgs might not allow software using a self-signed certificate, and may instead be required to use their own trusted CAs. If this is the case, we have a couple options - an easy way, and a hard way.

 

This blog covers the hard way.

 

Everything that we do in the hard way must occur after the Endpoint Log Hybrid host has been fully installed and provisioned. This means you'll need to complete the entire host installation before moving on to this process.

 

There are 2 primary requirements for the hard way:

  • you must be able to create a server certificate and private key capable of Server Authentication
  • you must be able to create a client certificate and private key capable of Client Authentication
    • this client certificate must have a Common Name (CN) value of rsa-nw-endpoint-agent

 

I won't be going into details on how to generate these certificates and keys - your org should have some kind of process in place for this. And since the certificates and keys generated from that process can output in a number of different formats, I won't be going into details on how to convert or reformat them. There are numerous guides, documents, and instructions online to help with that.

 

Once we have our server and client certificates and keys, make sure to also grab the CA chain used to generate them (at the very least, both certs need to have a common Root or Intermediate CA to be part of the same trusted chain). This should hopefully be available through the same process used to create the certs and keys. If not, we can also export CA chains from websites - if you do this, make sure it is the same chain used to create your certificates and keys.
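If you do end up exporting the chain from a web server rather than getting it from your PKI team, here is a minimal sketch of that export and a sanity check - the hostname and file paths are placeholders, so adjust them to your own environment:

# pull every certificate the server presents; trim the leaf certificate out afterward so the
# chain file holds only CA certificates, as recommended below
echo | openssl s_client -connect pki.mycompany.local:443 -showcerts 2>/dev/null \
  | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/' > /path/to/ca/chain.pem

# confirm our server certificate actually chains up to what we just exported
# (the chain file needs to reach the Root CA for this to return OK)
openssl verify -CAfile /path/to/ca/chain.pem /path/to/server/certificate.pem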

 

The endstate format that we'll need for everything will be PEM. The single server and/or client cert should look like this:

-----BEGIN CERTIFICATE-----
MIIFODCCAyCgAwIBAgICEAEwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UEAwwHUm9v
dC1jYTAeFw0yMDA4MDUyMDE0MTdaFw0zMDA4MDMyMDE0MTdaMCUxIzAhBgNVBAMM
....snip....
-----END CERTIFICATE-----

 

The private key should look like this:

-----BEGIN PRIVATE KEY-----
MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQCuUtxhFPb+FtWD
mQyIELpYVW7isU2KA7ur6ZhWDnKI6pD1POYHfyftO6MhxYsaRrwQ+XxhRJhyT/Ht
....snip....
-----END PRIVATE KEY-----

 

And the Certificate Chain should look like this (one BEGIN-END block per CA certificate in the chain...also, it will help to simplify the rest of the process if this chain only includes CA certificates):

-----BEGIN CERTIFICATE-----
MIIFODCCAyCgAwIBAgICEAEwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UEAwwHUm9v
dC1jYTAeFw0yMDA4MDUyMDE0MTdaFw0zMDA4MDMyMDE0MTdaMCUxIzAhBgNVBAMM
....snip....
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIIFBzCCAu+gAwIBAgIJAK5iXOLV5WZQMA0GCSqGSIb3DQEBCwUAMBIxEDAOBgNV
BAMMB1Jvb3QtY2EwHhcNMjAwODA1MTk1MTMxWhcNMzAwODAzMTk1MTMxWjASMRAw
....snip....
-----END CERTIFICATE-----

 

We want to make sure we have each of these PEM files for both the server and client certs and key we generated. Once we have these, we can proceed to the next set of steps.

 

The rest of this process will assume that all of these certificates, keys, and chains are staged on the Endpoint Log Hybrid host.

Every command we run from this point forward occurs on the Endpoint Log Hybrid.

We end up replacing a number of different files on this host, so you should also consider backing up all the affected files before running the following commands.

 

For the server certificates:

  • # cp /path/to/server/certificate.pem /etc/pki/nw/web/endpoint-web-server-cert.pem
  • # cp /path/to/server/key.pem /etc/pki/nw/web/endpoint-web-server-key.pem
  • # cat /path/to/server/certificate.pem > /etc/pki/nw/web/endpoint-web-server-cert.chain
  • # cat /path/to/ca/chain.pem >> /etc/pki/nw/web/endpoint-web-server-cert.chain
  • # openssl crl2pkcs7 -nocrl -certfile /path/to/server/certificate.pem -certfile /path/to/ca/chain.pem -out /etc/pki/nw/web/endpoint-web-server-cert.p7b
  • # cp /path/to/ca/chain.pem /etc/pki/nw/nwe-trust/truststore.pem
  • # cp /path/to/ca/chain.pem /etc/pki/nw/nwe-ca/customrootca-cert.pem
  • # echo "/etc/pki/nw/nwe-ca/customrootca-cert.pem" > /etc/pki/nw/nwe-trust/truststore.p12.idx
  • # echo "/etc/pki/nw/nwe-ca/customrootca-cert.pem" > /etc/pki/nw/nwe-trust/truststore.pem.idx

 

The end results, with all the files we modified and replaced, should be:

 

Once we're confident we've completed these steps, run:

  • # systemctl restart nginx

 

We can verify that everything so far has worked by browsing to https://<endpoint_server_IP_or_FQDN> and checking the certificate presented by the server:
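If a browser isn't handy, the same check can be run from the Endpoint Log Hybrid itself; a quick sketch, assuming the endpoint server is listening on the default HTTPS port 443:

echo | openssl s_client -connect <endpoint_server_IP_or_FQDN>:443 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates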

 

If this matches our server certificate and chain, then we can move on to the client certificates. If not, then we need to go back and figure out which step we did wrong.

 

For the client certificates:

  • openssl pkcs12 -export -out client.p12 -in /path/to/client/certificate.pem -inkey /path/to/client/key.pem -certfile /path/to/ca/chain.pem

 

...enter a password for the certificate bundle, and then SCP this client.p12 bundle onto a Windows host. We'll come back to it in just a moment.
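Optionally, we can confirm the bundle contains what we expect before copying it over; both commands will prompt for the export password we just set:

# summary of the bundle contents (certificates, key, MAC) without writing anything to disk
openssl pkcs12 -in client.p12 -info -noout

# the client certificate's subject CN should be rsa-nw-endpoint-agent
openssl pkcs12 -in client.p12 -clcerts -nokeys | openssl x509 -noout -subject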

 

In the NetWitness UI, browse to Admin/Services --> Endpoint-Server --> Config --> Agent Packager tab. Change or validate any of the configurations you need, and then click "Generate Agent Packager." The Certificate Password field here is required to download the packager, but we won't be using the OOTB client certificate at all so don't stress about the password.

 

Unzip this packager onto the same Windows host that has the client.p12 bundle we generated previously. Next, browse to the AgentPackager\config directory, replace the OOTB client.p12 file with our custom-made client.p12 bundle, move back up one directory, and run the AgentPackager.exe.

 

If our client.p12 bundle has been created correctly, then in the window that opens, we will be prompted for a password. This is the password we used when we ran the openssl pkcs12 command above, not the password we used in the UI to generate the packager. If they happen to be the same, fantastic....

 

We'll want to verify that the Client certificate and Root CA certificate thumbprints here match with our custom generated certificates.

 

With our newly generated agent installers, it is now time to test them. Pick a host in your org, run the appropriate agent installer, and then verify that you see the agent showing up in your UI at Investigate/Hosts.

 

If it does appear, congratulations! Make sure to record all these changes, and be ready to repeat them when certificates expire and agent installers need upgrading/updating.

 

If it doesn't, a couple things to check:

  • first, give it a couple minutes...it's not going to show up instantly
  • go back through all these steps and double-check that everything is correct
  • check the c:\windows\temp directory for a log file with the same name as your endpoint agent; e.g.: NWEAgent.log....if there are communication errors between the agent/host and the endpoint server, this log will likely have relevant troubleshooting details
  • if the agent log file has entries showing both "AgentCert" and "KnownServerCert" values, check that these thumbprints match the Client and Root CA certificate thumbprints from the AgentPackager output

    • ...I was not able to consistently reproduce this issue, but it is related to how the certs and keys are bundled together in the client.p12
    • ...when this happened to me, I imported my custom p12 bundle into the Windows MMC Certificates snap-in, and then exported it (make sure that the private key gets both imported and exported, as well as all the CAs in the chain), then re-ran my AgentPackger with this exported client.p12, and it fixed the error
    • ... ¯\_(ツ)_/¯
  • from a cmd prompt on the host, run c:\windows\system32\<service name of the agent>.exe /testnet
  • check the NGINX access log on the Endpoint Log Hybrid; along with the agent log file on the endpoint, this can show whether the agent and/or server are communicating properly
    # tail -f /var/log/nginx/access.log

By default, NetWitness Endpoint 11.x creates a self-signed Certificate Authority during its initial installation, and uses this CA to generate certificates for the endpoint agent and the local reverse proxy that handles all incoming agent communications. Because all these certificates are generated from the same CA chain, they automatically trust each other and enable seamless, easy, and secure communications between agents and the endpoint server.

 

But what if this self-signed CA cannot be used within your organization? For a number of very valid reasons, many orgs might not allow software using a self-signed certificate, and may instead be required to use their own trusted CAs. If this is the case, we have a couple options - an easy way, and a hard way.

 

This blog covers the easy way.

 

The only real requirement for the easy way is that we are able to create an Intermediate CA certificate and its private key from our CA chain (or use an existing pair), and that this Intermediate CA is allowed to generate an additional, subordinate CA under it.

 

For my testing, "Root-ca" was my imaginary company's Root CA, and I created "My Company Intermediate CA" for use in my 11.4 Endpoint Log Hybrid.

 

(I'm no expert in certificates, but I can say that all the Intermediate CAs I created that had explicit extendedKeyUsage extensions failed. The only Intermediate CAs I could get to work included "All" of the Intended Purposes. If you know more about CAs and the specific extendedKeyUsage extensions needed for a CA to be able to create subordinate CAs, I'd be interested to know what they are.)

 

Once we have an Intermediate CA certificate and its private key, we need to make sure they are in PEM format. There are a number of ways to convert and check keys and certificates, and a whole bunch of resources online to help with this, so I won't cover any of the various conversion commands or methods here.

 

If the CA certificate looks like this, then it is most likely in the correct format:

-----BEGIN CERTIFICATE-----
MIIFODCCAyCgAwIBAgICEAEwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UEAwwHUm9v
dC1jYTAeFw0yMDA4MDUyMDE0MTdaFw0zMDA4MDMyMDE0MTdaMCUxIzAhBgNVBAMM
....snip....
-----END CERTIFICATE-----

 

And if the private key looks like this, then it is most likely in the correct format:

-----BEGIN PRIVATE KEY-----
MIIJQwIBADANBgkqhkiG9w0BAQEFAASCCS0wggkpAgEAAoICAQCuUtxhFPb+FtWD
mQyIELpYVW7isU2KA7ur6ZhWDnKI6pD1POYHfyftO6MhxYsaRrwQ+XxhRJhyT/Ht
....snip....
-----END PRIVATE KEY-----
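A couple of quick checks on the pair can save a failed install later (and tie back to the "Intended Purposes" observation above): at a minimum, the certificate needs the CA:TRUE basic constraint, and any Key Usage extension it carries must allow certificate signing. A minimal sketch, with hypothetical file names:

# should show "CA:TRUE" and, if present, a Key Usage that includes Certificate Sign
openssl x509 -noout -text -in myIntermediateCA-cert.pem | grep -A1 -E "Basic Constraints|Key Usage"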

 

Our last step in this process has to occur at a very specific point during the endpoint log hybrid's installation - after we have run the nwsetup-tui command and the host has been enabled within the NetWitness UI, but before we install the Endpoint Log Hybrid services:

  • on the endpoint host, create directory /etc/pki/nw/nwe-ca
  • place the CA certificate and CA private key files in this directory and name them nwerootca-cert.pem and nwerootca-key.pem, respectively
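For example, a minimal sketch of those two steps, assuming the Intermediate CA certificate and key were staged under /root (adjust the source paths and file names to wherever yours actually live):

mkdir -p /etc/pki/nw/nwe-ca
cp /root/myIntermediateCA-cert.pem /etc/pki/nw/nwe-ca/nwerootca-cert.pem
cp /root/myIntermediateCA-key.pem /etc/pki/nw/nwe-ca/nwerootca-key.pem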

 

The basis for this process comes directly from the "Configure Multiple Endpoint Log Hybrid Hosts" step in the Post Installation tasks guide (https://community.rsa.com/docs/DOC-101660#NetWitne) - refer to that guide if you want a bit more context or detail on when this step should occur and how to do it properly.

 

Once we've done this, we can now install the Endpoint Log Hybrid services on the host.

 

I suggest watching the installation log file on the endpoint server, because if the Intermediate CA does not have all the necessary capabilities the installation will fail, and this log file can help us identify which step failed (if my own experience is any guide, it will most likely be the attempt to create the subordinate Endpoint Intermediate CA --> /etc/pki/nw/nwe-ca/esca-cert.pem):

# tail -f /var/log/netwitness/config-management/chef-solo.log

 

If all goes well, we'll be able to check that our endpoint-server is using our Intermediate CA by browsing to https://<endpoint_server_IP_or_FQDN> and checking the certificate presented by the server:

 

And our client.p12 certificate bundle within the agentPackager will be generated from the same chain:

 

And that's it!

 

Any agent packages we generate from this point forward will use the client.p12 certificates generated from our CA. Likewise, all agent-server communications will be encrypted with the certificates generated from our CA.

Josh Randall

Postman for NetWitness

Posted by Josh Randall Employee May 17, 2020

If you've ever done any work testing against an API (or even just for fun), then you've likely come across a number of tools that aim to make this work (or fun) easier.

 

Postman is one of these tools, and one of its features is a method to import and export collections of API methods that enable individuals to begin using those APIs much more easily and quickly than if, say...they only have a bunch of docs to get them started.

 

As NetWitness is a platform with a number of APIs and a number of docs to go along with them, a Postman collection detailing the uses, requirements, options, etc. of these APIs should (I hope) be a useful tool that individuals and teams can leverage to enable more efficient and effective use of the NetWitness platform....as well as of any other tool that you may want to integrate with NetWitness via its APIs.

 

With that in mind, I present a Postman Collection for NetWitness.  This includes all the Endpoint APIs, all the Respond APIs, and the more commonly used Events (A.K.A. SDK) APIs --> Query, Values, Content, and Packets. Simply import the attached JSON file into Postman, fill in the variables, and start API'ing.

 

A few notes, tips, and how-to's....

  • upon importing the collection, the first thing you should do is update the variables to match your environment

  • the rest_user / rest_pass variables are required for the Endpoint and Respond API calls
    • the NW account you use here can be created/managed in the NW UI in Admin/Security --> Users & Roles tabs
    • the role assigned to the account must have the integration-server.api.access permission, as well as any underlying permissions required to fulfill the request
    • e.g.: if you're querying Endpoint APIs, you'll need integration-server.api.access as well as endpoint-server permissions
  • the svc_user / svc_pass variables are required for the Events API calls
    • the NW account you use here can be created/managed in the NW UI in Admin/Services --> <core_service> --> Security --> Users & Roles tabs
    • the role assigned to the account must have the sdk.content, sdk.meta, and sdk.packets permissions, as well as any additional permissions necessary to account for Meta and Content Restriction settings you may be using
  • every Respond and Endpoint call will automatically create and update the accessToken and refreshToken used to authenticate its API call
    • so long as your rest_user and rest_pass variables are correct and the account has the appropriate permissions to call the Respond and/or Endpoint node, there is no need to manually generate these tokens
    • that said, the API calls to generate tokens are still included so you can see how they are being made
  • several of the Endpoint APIs, when called, will create and update variables used in other Endpoint APIs
    • the first of these is the Get Services call, which lists all of the endpoint-server hosts and creates variables that can be used in other Endpoint API calls
      • the names of these variables will depend on the names of each service as you have them configured in the NW UI
    • the second of these is the Get Hosts call, which lists all of the endpoint agents/hosts reporting to the queried endpoint-server and creates a variable of each hostname that can be used in other Endpoint API calls

      • this one may be a bit unwieldy for many orgs, though, because if you have 2000 agents installed, this will create 2000 variables - 1 for each host - and the same goes if you have 20,000 agents installed, or 200,000....
      • you may not want all those hostname variables, so you can adjust how many get created, or disable it altogether, by modifying or deleting or commenting out the javascript code in the Tests section of the Get Hosts call

Any questions, comments, concerns, suggestions, etc...please let me know.

22APR2020 - UPDATE: Naushad Kasu has posted a video blog of this process and I have posted the template.xml and NweAgentPolicyDetails_x64.exe files from his blog here.

 

08APR2020 - UPDATE: adding a couple notes and example typespecs after some additional experimenting over the past week

  • You may find the process easier to simply copy an existing 11.4 typespec in the /var/netwitness/source-server/content/collection/file directory on the Admin Server and modify it for the custom collection source you need
  • example using IIS typespec:
    • comparison of the XML from the Log Collector/Log Decoder to the version I created on the Admin Server


  • another example using a custom typespec to collect Endpoint v4.4 (A.K.A. legacy ECAT) server logs
    • by using two different typespecs to collect the exact same set of logs, we can see exactly how the values in the typespec affect the raw log that ultimately gets ingested by NetWitness


**END UPDATE**

 

 

The NetWitness 11.4 release included a number of features and enhancements for NetWitness Endpoint, one of which was the ability to collect flat file logs (https://community.rsa.com/docs/DOC-110149#Endpoint_Configuration), with the intent that this collection method would allow organizations to replace existing SFTP agents with the Endpoint Agent.

 

Flat file collection via the 11.4 Endpoint agent allows for much easier management compared to the SFTP agent, in addition to the multitude of additional investigative and forensic benefits available with both the free version of the Endpoint agent and the advanced version (NetWitness Endpoint User Guide for NetWitness Platform 11.x - Table of Contents).

 

The 11.4 release included a number of OOTB, supported Flat File collection sources, with support for additional OOTB, as well as custom, sources planned for future releases.  However, because I am both impatient and willing to experiment in my lab where there are zero consequences if I break something, I decided to see whether I could port my existing, custom SFTP-based flat file collections to the new 11.4 Endpoint collection.

 

The process ended up being quite simple and easy.  Assuming you already have your Endpoint Server installed and configured, as well as custom flat file typespecs and parsers that you are using, all you need to do is:

  1. install an 11.4+ endpoint agent onto the host(s) that have the flat file logs
  2. ...then copy the custom typespec from the Log Decoder/Log Collector filesystem (/etc/netwitness/ng/logcollection/content/collection/file)
  3. ...to the Node0/Admin Server filesystem (/var/netwitness/source-server/content/collection/file)
    1. ...if your typespec does not already include a <defaults.filePath> element in the XML, go ahead and add one (you can modify the path later in the UI)
    2. ...for example: 
  4. ...after your typespec is copied (and modified as necessary), restart the source-server on the Node0/Admin Server (see the sketch after this list)
  5. ...now open the NetWitness UI and navigate to Admin/Endpoint Sources and create a new (or modify an existing) Agent File Logs policy (more details and instructions on that here: Endpoint Config: About Endpoint Sources)
    1. ...find your custom Flat File log source in the dropdown and add it to the Endpoint Policy
    2. ...modify the Log File Path, if necessary:
    3. ...then simply publish your newly modified policy
  6. ...and once you have confirmed Collection via the Endpoint Agent, you can stop the SFTP agent on the log source (https://community.rsa.com/docs/DOC-101743#Replace)
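Steps 2 through 4 boil down to a copy and a service restart; here's a rough sketch, assuming SSH access from the Admin Server to the Log Decoder, a hypothetical typespec file name, and that the source-server runs as the rsa-nw-source-server systemd unit:

# copy the custom typespec from the Log Decoder/Log Collector to the Node0/Admin Server
scp root@<log_decoder>:/etc/netwitness/ng/logcollection/content/collection/file/my_custom_source.xml \
  /var/netwitness/source-server/content/collection/file/

# restart the source-server so it picks up the new typespec
systemctl restart rsa-nw-source-server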

 

 

And that's it.  Happy logging.

Josh Randall

Easy-add Recurring Feeds

Posted by Josh Randall Employee Apr 16, 2020

16APR2020 Update

  • adding a modified script for NetWitness environments at-or-above version 11.4.1 (due to JDK 11)
  • renaming the original script to indicate recommended use in pre-11.4.1 NetWitness environments

 

19DEC2019 Update (with props to Leonard Chvilicek for pointing out several issues with the original script)

  • implemented more accurate java version & path detection for JDK variable
  • implemented 30 second timeout on s_client command
  • implemented additional check on address of hosting server
  • implemented more accurate keystore import error check
  • script will show additional URLs for certs with Subject Alternate Names

 

In the past, I've seen a number of people ask how to enable a recurring feed from a hosting server that is using SSL/TLS, particularly when attempting to add a recurring feed hosted on the NetWitness Node0 server itself.  The issue presents itself as a "Failed to access file" error message, which can be especially frustrating when you know your CSV file is there on the server, and you've double- and triple-checked that you have the correct URL:

 

There are a number of blogs and KBs that cover this topic in varying degrees of detail:

 

 

Since all the steps required to enable a recurring feed from a SSL/TLS-protected server are done via CLI (apart from the Feed Wizard in the UI), I figured it would be a good idea to put together a script that would just do everything - minus a couple requests for user input and (y/N) prompts - automatically.
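For anyone curious what the script is actually doing under the hood, the manual version boils down to grabbing the certificate presented by the hosting server and importing it into the Java truststore used when the feed is fetched. A rough sketch - the keystore path and password here are the common JDK defaults and are assumptions (the attached script handles the detection for you):

HOST=nw-admin.mycompany.local   # placeholder for the server hosting the CSV
PORT=443

# grab the certificate presented by the hosting server
echo | openssl s_client -connect ${HOST}:${PORT} 2>/dev/null \
  | openssl x509 -outform PEM > /tmp/${HOST}.pem

# import it into the JDK truststore
JDK_HOME=$(dirname "$(dirname "$(readlink -f "$(which java)")")")
keytool -importcert -noprompt -trustcacerts -alias "${HOST}" \
  -file /tmp/${HOST}.pem -keystore "${JDK_HOME}/lib/security/cacerts" -storepass changeit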

 

The goal here is that this will quickly and easily allow people to add new recurring feeds from any server that is hosting a CSV:

 

Success!

The concept of multi-valued meta keys - those which can appear multiple times within single sessions - is not a new one, but has become more important and relevant in recent releases due to how other parts of the RSA NetWitness Platform handle them.

 

The most notable of these other parts is the Correlation Server service, previously known as the ESA service.  In order to enable complex, efficient, and accurate event correlation and alerting, it is necessary for us to tell the Correlation Server service exactly which meta keys it should expect to be multi-valued.

 

Every release notes PDF for each RSA NetWitness Platform version contains instructions for how to update or modify these keys to tune the platform to your organization's environment. But the question I have each time I read these instructions is this: How do I identify ALL the multi-valued keys in my RSA NetWitness Platform instance?

 

After all, my lab environment is a fraction of the size of any organization's production environment, and if it's an impossible task for me to manually identify all, or even most, of these keys, then it's downright laughable to expect any organization to even attempt to do the same.

 

Enter....automation and scripting to the rescue!

superhero entrance

 

The script attached to this blog attempts to meet that need.  I want to stress "attempts to" here for 2 reasons:

  1. Not every metakey identified by this script necessarily should be added to the Correlation Server's multi-valued configuration. This will depend on your environment and any tuning or customizations you've made to parsers, feeds, and/or app rules.
    1. For example, this script identified 'user.dst' in my environment.
    2. However, I don't want that key to be multi-valued, so I'm not going to add it.
    3. Which leaves me with the choice of leaving it as-is, or undoing the parser, feed, and/or app rule change I made that caused it to happen.
  2. In order to be as complete in our identification of multi-valued metas as we can, we need a large enough sample size of sessions and metas to be representative of most, if not all, of an organization's data.  And that means we need sample sizes in the hundreds-of-thousands to millions range.

 

But therein lies the rub.  Processing data at that scale requires us to first query the RSA NetWitness Platform databases for all that data, pull it back, and then process it....without flooding the RSA NetWitness Platform with thousands or millions of queries (after all, the analysts still need to do their responding and hunting), without consuming so many resources that the script freezes or crashes the system, and while still producing an accurate result...because otherwise what's the point?

 

I made a number of changes to the initial version of this script in order to limit its potential impact.  The result of these changes was that the script will process batches of sessions and their metas in chunks of 10000.  In my lab environment, my testing with this batch size resulted in roughly 60 seconds between each process iteration.

 

The overall workflow within the script is:

  1. Query the RSA NetWitness Platform for a time range and grab all the resulting sessionids.
  2. Query the RSA NetWitness Platform for 10000 sessions and all their metas at a time.
  3. Receive the results of the query.
  4. Process all the metas to identify those that are multi-valued.
  5. Store the result of #3 for later.
  6. Repeat steps 2-5 until all sessions within the time range have been processed.
  7. Evaluate and deduplicate all the metas from #4/5 (our end result).

 

This is the best middle ground I could find among the various factors.

  • A 10000 session batch size will still result in potentially hundreds or thousands of queries to your RSA NetWitness Platform environment
    • The actual time your RSA NetWitness Platform service (Broker or Concentrator) spends responding to each of these should be no more than ~10-15 seconds each.
  • The time required for the script to process each batch of results will end up spacing out each new batch request to about 60 seconds in between.
    • I saw this time drop to as low as 30 seconds during periods of minimal overall activity and utilization on my admin server.
  • The max memory I saw the script utilize in my lab never exceeded 2500MB.
  • The max CPU I saw the script utilize in my lab was 100% of a single CPU.
  • The absolute maximum number of sessions the script will ever process in a single run is 1,677,721. This is a hardcoded limit in the RSA NetWitness SDK API, and I'm not inclined to try and work around that.

 

The output of the script is formatted so you can copy/paste directly from the terminal into the Correlation Server's multi-valued configuration.  Now with all that out of the way, some usage screenshots:

 

 

 

 

Any comments, questions, concerns or issues with the script, please don't hesitate to comment or reach out.

HA is a common need in many enterprise architectures, so NetWitness Endpoint has some built-in capabilities that allow organizations to achieve an HA setup with fairly minimal configuration and implementation effort.

 

An overview of the setup:

  1. Install Primary and Alternate Endpoint Log Hybrids (ELH)
  2. Create a DNS CNAME record pointing to your Primary ELH
  3. Create Endpoint Policies with the CNAME record's alias value

 

The failover procedure:

  1. Change the CNAME record to point to the Alternate ELH

 

And the failback procedure:

  1. Change the CNAME record back to the Primary ELH

 

To start, you'll need to have an ELH already installed and orchestrated within your environment.  We'll assume this is your Primary ELH, where your endpoint agents will ordinarily be checking into, and what you need to have an alternate for in the event of a failure (hardware breaks, datacenter loses power, region-wide catastrophic event...whatever).

 

To install your Alternate ELH (where your endpoint agents should failover to) you'll need to follow the instructions here: https://community.rsa.com/docs/DOC-101660#NetWitne  under "Task 3 - Configuring Multiple Endpoint Log Hybrid".

**Make sure that you follow these instructions exactly...I did not the first time I set this up in my lab, and so of course my Alternate ELH did not function properly in my NetWitness environment...**

 

Once you have your Alternate ELH fully installed and orchestrated, your next step will be to create a DNS CNAME record.  This alias will be the key to the entire HA implementation.  You'll want to point the record to your Primary ELH; e.g.: 

 

**Be aware that Windows DNS defaults to a 60 minute TTL...this will directly impact how quickly your endpoint agents will point to the Target Host FQDN in the CNAME record, so if 60 minutes is too long to wait for endpoints to be available during a failover you might want to consider setting the TTL lower...** (props to John Snider for helping me identify this in my lab during my testing)

 

And your last step in the initial setup will be to create Endpoint Policies that use this alias value.  In the NetWitness UI, navigate to Admin/Endpoint Sources/Policies and either modify an existing EDR policy (the Default, for instance) or create a new EDR policy.  The relevant configuration option in the EDR policy setting is the "Endpoint Server" option within the "Endpoint Server Settings" section:

 

When editing this option in the Policy, you'll want to choose your Primary ELH in the "Endpoint Server" dropdown and (the most important part) enter the CNAME record's alias as the "Server Alias":

 

Add and/or modify any additional policy settings as required for your organization and NetWitness environment, and when complete be sure to Publish your changes. (Guide to configuring Endpoint Groups and Policies here: NetWitness Endpoint Configuration Guide for RSA NetWitness Platform 11.x - Table of Contents)

 

You can test your setup and environment by running some nslookup commands from endpoints within your environment to check that your DNS CNAME is working properly and endpoints are resolving the alias to the correct target (Primary ELH at this point), as well as creating an Endpoint EDR Policy to point some active endpoint agents to the Alternate ELH (**this check is rather important, as it will help you confirm that your Alternate ELH is installed, orchestrated, and configured correctly**).
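For example, from one of the endpoints (the alias name here is hypothetical):

nslookup nwe-alias.mycompany.local

# or, to see the CNAME target and the remaining TTL explicitly:
dig +noall +answer nwe-alias.mycompany.local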

 

Prior to moving on to the next step, ensure that all your agents have received the "Updated" Policy - if any show in the UI with a "Pending" status after you've made these changes, then that means they have not yet updated:

 

Assuming all your tests come back positive and all your agents' policies are showing "Updated", you can now simulate a failure to validate that your setup is, indeed, HA-capable. This can be quite simple, and the failover process similarly straightforward.

 

  1. Shutdown the Primary ELH
  2. Modify the DNS CNAME record to point to the Secondary ELH
    1. This is where the TTL value will become important...the longer the TTL the longer it may take your endpoints to change over to the Alternate ELH
    2. Also...there is no need to change the existing Endpoint Policy, as the Server Alias you already entered will ensure your endpoints follow the CNAME record to whichever ELH it currently points to
  3. Confirm that you see endpoints talking to the Alternate ELH
    1. Run tcpdump on the Alternate ELH to check for incoming UDP and TCP packets from endpoints
    2. See hosts showing up in the UI
    3. Investigate hosts

 

After Steps 1 and 2 are complete, you can confirm that agents are communicating with the Alternate ELH by running tcpdump on that Alternate ELH to look for the UDP check-ins as well as the TCP tracking/scan data uploads:
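A minimal tcpdump sketch for that check - the ports shown are the agent defaults (443/TCP for HTTPS and 444/UDP for the beacon) and should be adjusted to whatever your EDR policy actually uses:

tcpdump -nn -i any 'tcp port 443 or udp port 444'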

 

...as well as confirm in the UI that you can see and investigate hosts:

 

 

Once your validation and testing is complete, you can simply revert the procedure by powering on the Primary ELH, powering off the Alternate ELH, and modifying the CNAME record to point back to the Primary ELH.

 

Of course, during an actual failure event, your only action to ensure HA of NetWitness Endpoint would be to change the CNAME record, so be sure to have a procedure in place for an emergency change control, if necessary in your organization.

19DEC2019 Update:

Modified the original ESA rule (ja3context.txt) with additional/better logic to match destination port numbers between the Endpoint and Network sessions. Additionally, I recommend disabling the "Alert" option in the rule, as this can be a noisy alert and might unnecessarily clutter your incident and alert queues; please note that disabling this will NOT prevent the script notification output, it will only stop an alert from being sent to Respond.

 

Also added a different ESA rule (ja3window.txt) that does the same thing, but instead of relying on a script to produce a finalized list, the results with the linked JA3 fingerprint and application information are held in a Named Window - ja3curated; this approach might be better if a Feed is not a desired outcome, but you still want to be able to use the JA3/Application context for other ESA alerting.

 

 

 

A couple years ago, a few smart folks over at salesforce came up with the idea of fingerprinting certain characteristics of the "Client Hello" of the SSL/TLS handshake, with the goal to more accurately identify the client application initiating TLS-encrypted sessions.

 

This concept certainly has potential to provide invaluable insight during incident response, though there are some significant operational limitations that (in my opinion) have so far prevented JA3 fingerprinting from gaining more widespread adoption and use.  Perhaps the biggest of these limitations is the need for some kind of known JA3 fingerprint library or repository, where the thousands (potentially millions?) of client applications that might initiate a TLS handshake can be reliably matched with their JA3 fingerprint. There are a couple of sites building out these repositories...

 

...but their content is limited (after all, fingerprinting a client requires installing it, running it, capturing the PCAP, running a JA3 parser or script against the PCAP, and then adding that fingerprint to the library; that process simply does not scale) and the fidelity/accuracy/timeliness of these libraries is a pretty large question mark.

 

However, with NetWitness 11.3.1, which has a native option to enable JA3 and JA3S fingerprinting, and NetWitness Endpoint 11.3 we can bridge this gap and create our own JA3 libraries.

 

The concept is fairly simple

  • use NetWitness Endpoint to identify applications making outbound network connections
  • use NetWitness Network to identify outbound HTTPS traffic
  • link these events and sessions by their common characteristics
  • once we have that link
    • extract the filename and sha256 hash of the application from the NetWitness Endpoint event
    • along with the JA3 fingerprint from the network session
    • and then create a feed of that information that the NetWitness Platform can use for additional context

 

In order to ensure this process scales, we can make use of the ESA's rule engine to identify the sessions we want and its script output functionality to create the feed for us. The ESA rule and python script output are attached to this blog.

 

Prior to enabling these, you'll want to make sure the "netwitness" user has either read/write access to the "/var/netwitness/common/repo" directory on the Admin Server, a.k.a Node0, or at least read/write access to the "ja3Context.csv" file in that directory that the ja3context.py script will update.
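A minimal sketch of one way to grant that access with a POSIX ACL (the guide linked below covers this in more depth; whether you scope it to the whole directory or just the CSV is up to you):

# give the netwitness user read/write (plus traverse) on the repo directory...
setfacl -m u:netwitness:rwx /var/netwitness/common/repo

# ...or limit it to just the feed file, once it exists
setfacl -m u:netwitness:rw /var/netwitness/common/repo/ja3Context.csv

# confirm the resulting ACL
getfacl /var/netwitness/common/repo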

 

A good guide for setting ACLs in CentOS is here: https://www.tecmint.com/give-read-write-access-to-directory-in-linux/  and the result:

 

Once the appropriate permissions are set and you've enabled the ESA rule and its script output, your last step will be to turn that CSV output into a feed (A list two ways - Feeds and Context Hub - many thanks again to the SE formerly known as Eric Partington for this blog):

 

...and choose your meta keys:

 

And voila!  We have an automatically generated and constantly updating library of applications for our JA3 fingerprints:

It often happens to me that while I am testing new alerts and incident aggregation rules, I find that the aggregation condition(s) I chose in my Incident Rule are not what I want.  While I could re-create the raw alerts from scratch, I wanted an easier method to tell the Respond engine to re-apply its aggregation rule policies on the alerts that already exist in the database.

 

To be clear, the Respond engine is always attempting to apply all active and valid Incident Rules against un-aggregated and un-affiliated alerts in the database -- that is, any alert that has not been previously aggregated into any incident can be automatically aggregated into an incident if an incident rule with matching conditions is changed/created.  But for previously aggregated alerts whose incidents have been deleted (leaving the alerts un-aggregated but previously-affiliated), the Respond engine will not attempt to re-aggregate them.

 

So my goal, then, was to get the Respond engine to include these previously-affiliated alerts in its aggregation attempts.  To achieve this, the alerts simply needed to be updated to remove their previously-affiliated status.  And to make it easy to change dozens or even hundreds of alerts at once, I wrote a simple shell script (attached to this blog and pasted below) to do it all for me.

 

#!/bin/bash
#
#grab the deploy_admin password
DEPLOY_PW=$(security-cli-client --get-config-prop --prop-hierarchy nw.security-client --prop-name platform.deployment.password --quiet)

#set a desired time range to query for alerts
#examples: "24 hours ago" or "14 days ago" or "4 weeks ago"
timeRange=$(date +%s%N -d "30 days ago" | cut -b1-13)

#identify primaryESA host
primaryESA=$(echo -e "use orchestration-server\ndb.host.find({installedServices:\"ESAPrimary\"},{hostname:1})" | mongo admin -u deploy_admin -p $DEPLOY_PW --quiet | grep -Po "hostname.*\"" | sed -e "s/hostname.\{5\}\|\"//g")

#change status on all alerts that were part of a deleted incident
#within the timerange from "REMOVED_FROM_INCIDENT" to "NORMALIZED"
echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

A couple notes on the script:

  • I used one extremely generic parameter (timestamp within last 30 days) to limit the database query and update operation (line 15)
    • you should feel free to modify the timeRange (line 8) to suit your needs
    • you should also feel free to (carefully) modify the database query to focus on specific alerts in your environment
      • for example, given the following raw alert:

 

...you could change line 15 and add:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.moduleName\":\"Alert with source and destination IP values\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

...or:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.events.ip_src\":\"192.168.20.20\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

...or:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.moduleName\":\"Alert with source and destination IP values\"},{\"originalAlert.events.ip_src\":\"192.168.20.20\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

  • a successful run of the script will produce output like this, showing you how many alerts in the database were modified (3, in this case):

 

Of course, I recommend testing this (and most everything else) in a pre-prod or test NetWitness environment, if you have one.  And should you have any questions about what might be a good and/or valid database query, the Link community is always on hand to help (please have screenshots and/or specifics about your alerts ready...it's hard to help without knowing details...).

One of the changes introduced in 11.x (11.0, specifically) was the removal of the macros.ftl reference in notification templates.  These templates enable customized notifications (primarily syslog and email) using freemarker syntax. The 10.x templates relied on macros (which are basically just functions, but using freemarker terminology) to build out and populate both the OOTB and (most likely) custom notifications.

 

If you upgraded from 10.x to 11.x and you had any custom notifications, there's a very good chance you noticed that these notifications failed, and if you dug into logs you'd have probably found an error like this:

The good news is there's a very easy fix for this, and it does not require re-writing any of your 10.x notifications.  The contents of macros.ftl file that was previously used in 10.x simply need to be copy/pasted into your existing notification templates, replacing the <#include "macros.ftl"/> line, and they'll continue to work the same as they did in your 10.x environment (props to Eduardo Carbonell for the actual testing and verification of this solution).

 

Example:

 

...becomes:

 

I have attached a copy of the macros.ftl file to this blog, or if you prefer you can find the same on any 11.x ESA host in the "/var/netwitness/esa/freemarker" directory.

In 11.3 the same NWE agent can operate in either Insights (free) or Advanced mode. This change can be made by toggling a policy configuration in the UI and does not require an agent reinstall or reboot.

There can be both Insights and Advanced agents in a single deployment. Only agents operating in Advanced mode count toward licensing.

 

Feature | Comments | NW Endpoint 11.x | NW Endpoint 4.x
Basic scans | Inventory | 11.3 | 4.x
Tracking scans | Continuous file, network, process, thread monitors; Registry monitor (specific to Windows) | 11.3 | 4.x
Anomaly detection | Inline hooks, kernel hooks, suspicious threads, registry discrepancies | 11.3 | 4.x
Windows Log Collection | Collect Windows Event Logs | 11.3** | -
Threat Detection Content | Detection Rules / Reports | 11.3 | -
Risk score | Based on Threat Content Pack | 11.3 | 4.x
File Reputation Service | File Intel (3rd Party Lookup) | 11.3 | 4.x
Live Connect | Community Intel | 11.3 | 4.x
Analyze module | Analysis of downloaded file | 11.3 | 4.x
Blocking | Block an executable | 11.3 | 4.x
Agent Protection | Driver Registry Protection / User Mode Kill Protection | 11.3** | -
Powershell, Command-line (input) | Report user interactions within a console session | 11.3** | -
Process Visualization | Unique identifier (VPID) for a process that uniquely identifies the entire process event chain | 11.3** | -
MFT Analysis | | Future | 4.x
Process Memory Dump | | Future | 4.x
System Memory Dump | | Future | 4.x
Request File | | Future | 4.x
Automatic File Downloads | | Future | 4.x
Standalone Scans | | Future | 4.x
RAR | | Future | 4.x
Containment | | Future | 4.x
API Support | | Future | 4.x
Certificate CRL Validation | | Future | 4.x

** - New capabilities; these do not exist in 4.x

 

 

11.3 Key Endpoint Features

  1. Advanced Endpoint Agent
    • Value: Full and Continuous Endpoint Visibility; Advanced Threat Detection / Threat Hunting
    • Details: Performs both kernel and user level analysis
      • Tracks behaviors such as process creation, remote thread creation, relevant registry key modifications, executable file creation, and processes that read documents (refer to the doc for the detailed list)
      • Tracks network events
      • Tracks console events (commands typed into a console such as cmd)
      • Windows log collection
      • Detects anomalies such as image hooks, kernel hooks, suspicious threads, registry discrepancies
      • Retrieves lists of drivers, processes, DLLs, files (executables), services, autoruns, host file entries, scheduled tasks
      • Gathers security information such as network shares, patch level, Windows tasks, logged-in users, bash history
      • Reports the hashes (SHA-256, SHA-1, MD5) and file size of all binaries (executables, libraries (DLL and .SO), and scripts) found on the system
      • Reports details on certificate, signer, file description, executable sections, imported libraries, etc.
  2. Threat Content Packs
    • Value: Detection of adversary tactics and techniques (MITRE ATT&CK matrix)
    • Details: See attached 11.3 Endpoint Rules spreadsheet
  3. Risk Scoring
    • Value: Prioritized list of risky hosts/files; automated Incident creation for hosts/files when the risk threshold is exceeded; risk score backed up with the context of contributing factors; rapid/easy investigation workflow
    • Details: Risk scores are computed based on a proprietary scoring algorithm developed by RSA's Data Sciences team. The scoring server takes multiple factors into consideration:
      • Critical, high, and medium indicators generated by the endpoints based on the threat content packs deployed
      • Reputation status of files - Malicious / Suspicious
      • Bias status of files - Blacklisted / Greylisted / Whitelisted
  4. Process Visualizations
    • Value: Provides a visualization of a process and its parent-child relationships
    • Details: Timeline of all activities related to a process
  5. File Analysis/Reputation/Bias Status
    • Value: Categorize files; saves analysis time, filters out noise, focuses on real threats
    • Details: File hashes from the environment are sent to RSA Threat Intel Cloud for reputation status updates; Live Connect lookup in Investigations
  6. Response Actions - File Blocking
    • Value: Accelerate response / prevent malware execution
    • Details: Blocks a file hash across the environment
  7. Response Actions - Retrieve Files
    • Value: Download and analyze file contents for anomalies
    • Details: Static analysis using 3rd party tools
  8. Centralized Group Policy Management
    • Value: Agent configurations updated dynamically based on group membership
    • Details:
      • Groups can be created based on different criteria such as IP address, host names, operating system type, operating system description
      • Endpoint policies such as Agent Mode, Scan Schedule, Server Settings, and Response Actions can be automatically pushed based on group membership
      • Agents can be migrated to different Endpoint Servers based on group/policy assignment
  9. Geo Distributed Scalable Deployment
    • Value: Consolidated view & management of endpoints/files and the associated risk across distributed deployments

One of the more common requests and "how do I" questions I've heard in recent months centers around the Emails that the Respond Module can send when an Incident is created or updated.  Enabling this configuration is simple (https://community.rsa.com/docs/DOC-86405), but unfortunately changing the templates that Respond uses when it sends one of these emails has not been an option.

 

Or rather...has not been an accessible option.  I aim to fix that with this blog post.

 

Before getting into the weeds, I should note that this guide does not cover how to include *any* alert data within incident notification emails. The fields I have found in my tests that can be included are limited to the following, referenced using JSON dot notation (e.g. "incident.id", "incident.title", "incident.summary", etc.):

 

Now, this does not necessarily mean it isn't possible to include other data, just that I have not figured out how...yet.

 

The first thing we need to do is create a new Notification Template for Respond to use.  We do this within the UI at Admin / System / Global Notifications --> Templates tab.  I recommend using either of the existing Respond Notification templates as a base, and then modifying either/both of those as necessary. (I have attached these OOTB notification templates to this blog.)

 

For this guide, I'll use the "incident-created" template as my base, and copy that into a new Notification Template in the UI.  I give my template an easy-to-remember name, choose any of the default Template Types from the dropdown - it does not matter which I choose, as it won't have any bearing on the process, but it's a required field and I won't be able to save the template without selecting one - and write in a description:

 

Then I copy the contents of the "incident-created" template into the Template field.  The first ~60% of this template is just formatting and comments, so I scroll past all that until I find the start of the HTML <body> tag.  This is where I'll be making my changes.

 

One of the more useful changes that comes to mind here is to include a hyperlink in the email that will allow me to pivot directly from the email to the Incident in NetWitness.  I can also change any of the static text to whatever fits my needs.  Once I'm done making my changes, I save the template.

 

After this, I'm done in the UI (unless I decide to make additional changes to my template), and open a SSH session to the NetWitness Admin Server.  To make this next part as simple and straightforward as I can, I've written a script that will prompt me for the name of the Template I just created, use that to make a new Respond Notification template, and then prompt me one more time to choose which Respond Notification event (Created or Updated) I want to apply it to. (The script is attached to this blog.)

 

A couple notes on running the script:

  1. Must be run from the Admin Server
  2. Must be run as a superuser

 

Running the script:

 

...after a wall of text because my template is fairly long...I get a prompt to choose Created or Updated:

 

And that's it!  Now, when a new incident gets created (either manually or automatically) Respond sends me an email using my custom Notification Template:

 

And if I want to update or fix or modify it in any way, I simply make my changes to the template within the UI and then run this script again.

 

Happy customizing.

One of the features included in the RSA NetWitness 11.3 release is something called Threat Aware Authentication (Respond Config: Configure Threat Aware Authentication).  This feature is a direct integration between RSA NetWitness and RSA SecurID Access that enables NetWitness to populate and manage a list of potentially high-risk users that SecurID Access can then refer to when determining whether (and how) to require those users to authenticate.

 

The configuration guide above details the steps required to implement this feature in the RSA NetWitness Platform, and the relevant SecurID documentation for the corresponding capability is here: Determining Access Requirements for High-Risk Users in the Cloud Authentication Service.

 

On the NetWitness side, to enable this feature you must be at version 11.3 and have the Respond Module enabled (which requires an ESA), and on the SecurID Access side, you need to have Premium Edition (RSA SecurID Access Editions - check the Access Policy Attributes table at the bottom of that page).

 

At a high level, the flow goes like this:

  1. NetWitness creates an Incident
  2. If that Incident has an email address (one or more), the Respond module sends the email address(es) via HTTP PUT method to the SecurID Access API
  3. SecurID Access checks the domains of those email addresses against its Identity Sources (AD and/or LDAP servers)
  4. SecurID Access adds those email addresses with matching domains to its list of High Risk Users
  5. SecurID Access can apply authentication policies to users in that list
  6. When the NetWitness Incident is set to Closed or Closed-False Positive, the Respond module sends another HTTP PUT to the SecurID Access API removing the email addresses from the list

 

In trying out these capabilities, I ended up making a couple tools to help report on some of the relevant information contained in NetWitness and SecurID Access.

 

The first of these is a script (sidHighRiskUsers.py; attached at the bottom of this blog) to query the SecurID Access API in the same way that NetWitness does.  This script is based on the admin_api_cli.py example in the SecurID Access REST API tool (https://community.rsa.com/docs/DOC-94122).  That download contains all the python dependencies and modules necessary to interact with the SecurID API, plus some helpful README files, so if you do intend to test out this capability I recommend giving that a look.

 

Some usage examples of this script (can be run with either python2 or python3 or both, depending on whether you've installed all the dependencies and modules in the REST API tool):

 

Show Users Currently on the High Risk List

# python highRiskUsers.py -f /path/to/SIDAccess/API.key -o getHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>"

 

 

Add  Users to the High Risk List

# python highRiskUsers.py -f /path/to/SIDAccess/API.key -o addHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>" -e <single_or_multiple_email_address>

 

**Note: my python-fu is not strong enough to capture/print the 404 response from the API if you send a partially successful PUT.  If your python-fu is strong, I'd love to know how to do that correctly.

Example - if you try to add multiple user emails and one or more of those emails are not in your Identity Sources, you should see this error for the invalid email(s):

 

Remove Users from the High Risk List

# python highRiskUsers.py -f /path/to/SIDAccess/API.key -o removeHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>" -e <single_or_multiple_email_address>

 

*Note: same as above about a partially successful PUT to the API

 

The second tool is another script (nwHighRiskUsersReport.sh; also attached at the bottom of this blog) to help report on the NetWitness-specific information about the users added to the High Risk list, the Incident(s) that resulted in them being added, and when they were added.  This script should be run on a recurring basis in order to capture any new additions to the list - the frequency of that recurrence will depend on your environment and how often new incidents are created or updated.

 

The script will create a CEF log for every non-Closed incident that has added an email to the High Risk list, and will send that log to the syslog receiver of your choice.  Some notes on the script's requirements:

  1. must be run as a superuser from the Admin Server
  2. the Admin Server must have the rsa-nw-logplayer RPM installed (# yum install rsa-nw-logplayer)
  3. add the IP address/hostname and port of your syslog receiver on lines 4 & 5 in the script
  4. If you are sending these logs back into NetWitness:
    1. add the attached cef-custom.xml to your log decoder or existing cef-custom.xml (details and instructions here: Custom CEF Parser)
    2. add the attached table-map-custom.xml entries to the table-map-custom.xml on all your Log Decoders
    3. add the attached index-concentrator-custom.xml entries to the index-concentrator-custom.xml on all your Concentrators (both Log and Packet)
    4. restart your Log Decoder and Concentrator services
    5. **Note: I am intentionally not using any existing email-related metakeys in these custom.xml files in order to avoid a potential feedback loop where these events might end up in other Incidents and the same email addresses get re-added to the High Risk list
  5. Or if you are sending them to a different SIEM, perform the equivalent measures in that platform to add custom CEF keys

 

Once everything is ready, running the script:

 

And the results:

------

Quite frequently when testing ESA alerts and output options / templates, I have wanted the ability to manually or repeatedly trigger alerts.  In order to help with this type of testing, I created a couple ESA Alert templates to generate both scheduled alerting and manual, one-time alerts.

 

Each of these can take a wide variety of time- or schedule-based inputs to generate alerts according to whatever kind of frequency you might want.  The descriptions in each alert have examples, requirements, and links to official Esper documentation with more detail.

 

I see the potential for quite a bit of usefulness with the Crontab alert, especially in 11.3 now that ESA Alert script outputs run from the admin server.

 

Lastly, I created these using freemarker templates (how the ESA Rules from Live are packaged) in order to ensure that the times and schedules used in the alerts adhere to proper syntax and formatting, but of course you should feel free to convert these to advanced rules if you like.

 

 

The complete overhaul of NW-Endpoint 4.4 into NW-Endpoint 11.3 includes (among many changes) a different method for creating your own, or tuning existing, endpoint alerts.  In the old version (4.4), everything was a SQL query, but since we have moved away from Windows and SQL Server in 11.3, I'd like to shed some light on how the new process works, as well as include some tooling intended to assist folks who want to do this themselves.

 

The RSA NetWitness Endpoint Configuration Guide (https://community.rsa.com/docs/DOC-100160) has a section starting on pg. 12 that covers everything here in greater detail.  If you'd like more information on this subject, I recommend taking a look at that document.

 

At a high level, the process for Endpoint 11.3 to generate alerts and calculate file and host risk scores goes like this:

 

Let's take a look at  a couple of the OOTB examples and see how these different pieces are interacting with each other by examining the process that turns the "runs powershell decoding base64 string" rule into a potential risk score.

 

If the App Rule's condition statement is met, it creates a meta value of "runs powershell decoding base64 string" in the "boc" meta key:

 

These are then used in the corresponding ESA Rule "Runs Powershell Decoding Base64 String" contained in the OOTB Endpoint Risk Scoring Rule Bundle (I've attached all of the OOTB ESA Rules contained in the bundle to this blog).

****Take note that the app_rule_meta_value is case sensitive.  If you use capital letters in the App Rule Name field, then the "value" field in its companion ESA Rule must also contain capital letters****

 

Last up in the process is the Risk Scoring Rule.  This takes the ESA Alert and produces a score (scaled from 0 - 100) for the host where the alert occurred, and if applicable the module involved in the alert.  This last part is where I expect the most potential confusion - determining the host where an alert occurred is straightforward, but the module might not be.

 

This is because there can potentially be both a source module (filename_src, checksum_src) and a destination module (filename_dst, checksum_dst), or just the module itself without a source or destination (filename, checksum), or for some alerts there might not be a module involved in the alert at all.  I've attached all of the OOTB Risk Scoring Rules to this blog, and I'd encourage you to take a look at these variations if you intend to create your own, or tune existing, rules and alerts.

 

Now then, back to the "Runs Powershell Decoding Base64 String" Rule.  This Risk Scoring Rule looks for the ESA Alert and creates a score for the source module (checksum_src, filename_src) in the event, as well as the host where it occurred.  Any risk scores that are generated for affected hosts and modules will appear in the Investigate/Hosts and Investigate/Files pages in the UI, and can also appear as Alerts and Incidents in the Respond UI.

 

And just to be thorough, here are a couple examples of rules with different Risk Scoring.

 

A rule without a source or destination module --> "Scripting Addition In Process"

 

A rule without any module and just the Host --> "Windows Firewall Disabled"

 

Now we have some examples under our belt, and know how the different inputs and options relate to one another and the outcome.  The process for adding your own rule is covered in the configuration guide linked above, and this next section aims to assist with some of the manual CLI aspects of that process.

 

After playing around with the Blocking capabilities in 11.3, I decided I wanted to add a couple custom alerts.

 

First, I wanted to know when a module I blocked was actively running on an endpoint at the time I blocked it and was subsequently killed.  My App Rule to trigger on this activity:

 

And second, I wanted to know when an attempt was made to access or run a module that I had previously blocked.  My App Rule for this activity:

 

With these App Rules created and Applied, the next steps are to create and apply the corresponding ESA Alert and Risk Scoring Rules from a terminal session in the Admin Server (Node0).  The script "endpointCustomRule.sh" attached to this blog can help walk you through these steps, if you choose.  It aims to eliminate errors that may occur when completing these steps manually.

 

Some notes on the script:

  • must be run on the Admin Server as root
  • must be run only after creating and applying your App Rule(s)
    • be sure to make your App Rules unique, otherwise the script might not find the correct one when it is checking for a valid Log Decoder App Rule
    • if you have multiple Endpoint Log Hybrids (ELHs), be sure to Push your App Rule(s) to the other ELHs in your environment
  • applies some error checking and input validation to ensure valid Rules are created and added to the respective databases successfully

 

If you find errors or gaps in the script please let me know.

 

Prompting user for input:

 

Adding and confirming the ESA and Risk Scoring Rules:

 

And finally, confirming that we are now successfully creating alerts and re-calculating Risk Scores when the events occur:
