
There are many reasons I enjoy working with the RSA NetWitness Platform, but things get really exciting when our customers turn their attention to threat hunting. In one case, a customer needed a way to take new threat intelligence or research and apply it to their RSA NetWitness stack. This wasn't about threat intelligence delivered via feeds, but rather about how attackers could deliver malicious content. Since there is never a shortage of threat research, one customer asked about detecting zero-width spaces.

 

In a recent article in The Hacker News, research was presented showing that zero-width spaces embedded within URLs could bypass URL scanners in Office 365. The question now is: how do we go about detecting this in NetWitness?
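To see why the technique works, consider that a hostname containing zero-width characters can render identically to the legitimate one while being a different string entirely. A quick illustration (the hostname and the Python snippet are mine, not taken from the research):

legit = "www.example.com"
spoofed = "www.exam\u200bple.com"   # same name with an embedded zero-width space (U+200B)
print(legit, spoofed)               # the two typically display identically
print(legit == spoofed)             # False - they are different strings
print(len(legit), len(spoofed))     # 15 16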

 

We begin with a search of our network using the following query:

 

Query
alias.host contains "​","‌","‍","﻿","０"

(The quoted values are the literal characters U+200B, U+200C, U+200D, U+FEFF, and the fullwidth digit zero U+FF10; all but the last are invisible here.)

 

This gave us several DNS sessions and a single SMTP session.

 

If we pivot into the SMTP session, we can get a slightly better view of the meta for that session.

 

That last hostname in ‘alias.host’ does look interesting, but we can't be sure yet. We need to examine the session more closely.

 

We rendered the session as an email and it bore the signs of a classic phishing email.

 

However, the malicious indicator only presents itself when we examine the raw contents of the session.

 

The bytes highlighted in red (E2 80 8C) represent the zero-width non-joiner character (U+200C). This appears to be the attacker's use of zero-width spaces as a bypass attempt in a phishing email. Next, we look at the metadata for the session.

 

Above, we can see our suspicious hostname, but how did it get there? It turns out that the ‘phishing_lua’ parser examines URLs within SMTP traffic and extracts the hostnames it finds into the ‘alias.host’ meta key. Fortunately for us, it included the zero-width space as part of the metadata…we just can't see it. Or can we?

 

I copied the meta value for the hostname and pasted it into my text editor. Sadly, I did not notice any strange characters. However, I then pasted it into my good friend ‘vi’.

 

This proved that the zero-width spaces were in the metadata, which is what allowed our query to work. The malicious site leads to a credential-stealing webpage. It appears the website was a compromised WordPress site.
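For reference, the UTF-8 byte sequences for these characters (the same sequences the query above matches and the Lua parser below checks for) can be confirmed with a quick snippet. This is just illustrative Python, not NetWitness content:

# Print the UTF-8 byte sequences of the zero-width (and lookalike) characters
zero_width = {
    "zero width space (U+200B)": "\u200b",
    "zero width non-joiner (U+200C)": "\u200c",
    "zero width joiner (U+200D)": "\u200d",
    "zero width no-break space (U+FEFF)": "\ufeff",
    "fullwidth digit zero (U+FF10)": "\uff10",
}
for name, ch in zero_width.items():
    print(name, " ".join("%02X" % b for b in ch.encode("utf-8")))
# e.g. "zero width non-joiner (U+200C) E2 80 8C" - the bytes highlighted above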

 

Next, I wanted to get a bigger data set. I took some of the metadata, such as the sender's email address, and used it to find additional messages. As it turned out, this helped identify an active phishing campaign.

 

Next up was to put together some kind of detection going forward. My first thought was to use an application rule, but I was not successful. I think it was the way the Unicode was being interpreted or how it was entered; I need to do more research on that. Since the app rule syntax was not working properly, I decided to build a Lua parser instead. The parser registers a meta callback on the "alias.host" meta key, just as an app rule would. It then checks the returned meta value against a predefined list of zero-width character byte sequences. If a match is made, it writes meta into the ‘ioc’ meta key.

 

lua_zws_check.lua

-- Step 1 Name the parser
local lua_zws_check = nw.createParser("lua_zws_check", "Check alias.host meta for zero-width spaces")

--[[

DESCRIPTION

Check alias.host meta for zero-width spaces


VERSION

2019-01-24 - Initial development

AUTHOR

christopher.ahearn@rsa.com


DEPENDENCIES

None

META KEYS



NOTES

https://thehackernews.com/2019/01/phishing-zero-width-spaces.html?m=1
http://www.amp-what.com/unicode/search/zero%20width

--]]

 

-- Step 3 Define where your meta will be written
-- These are the meta keys that we will write meta into
lua_zws_check:setKeys({

nwlanguagekey.create("ioc", nwtypes.Text),

})

 

-- Step 4 DO SOMETHING
local zws = ({

 

["\226\128\139"] = true, -- ​ &NegativeMediumSpac zero width space
["\226\128\140"] = true, -- ‌ ‌ zero width non-joiner
["\226\128\141"] = true, -- ‍ ‍ zero width joiner
["\239\187\191"] = true, --  zero width no-break space
["\239\188\144"] = true, -- 0 fullwidth digit zero

})

 

-- This is our function. What we want to do when we match a token...or in this case, when
-- the alias.host meta callback returns a value.
function lua_zws_check:hostMeta(index, meta)

if meta then

for i,j in pairs(zws) do

local check = string.find(meta, i)
if check then

--nw.logInfo("*** BAD HOSTNAME CHECK: " .. meta .. " ***")

nw.createMeta(self.keys["ioc"], "hostname_zero-width_space")
break

end

end 

end

end

 

-- Step 2 Define your tokens
lua_zws_check:setCallbacks({

[nwlanguagekey.create("alias.host")] = lua_zws_check.hostMeta, -- this is the meta callback key

})

 

 

After deploying the parser, I re-imported the new pcap file into my virtual packet decoder. The results came back quickly. I now had reliable detection for these zero-width space hostnames.

 

Since meta is displayed in the order in which it was written, we can get a sense of which hostname triggered this indicator.

 

Now that we had validated that the parser was working correctly in the lab environment, it was time to test some other capabilities of the NetWitness Platform.

 

As we stated in the beginning, the query (as well as the parser) was also flagging on DNS name resolutions that involved Unicode characters. Therefore, we wanted to alert only when the ‘zero-width’ meta appeared in SMTP traffic. We then created an ESA rule in the lab environment.

 

To begin this alert, I went to the Configure / ESA Rules section in NetWitness and created a new rule using the Rule Builder wizard.

 

 

We gave the rule a name, which will be important in the next phase. Next, we created the condition by giving it a name and then populating the fields.

The first line looks for the meta key and meta value (the ‘ioc’ value written by the parser), and the second looks at the service type (SMTP). Once everything looks good, we hit Save. We then hit Save again and close out the rule.

 

NOTE: In the first line, you see the “Array?” box checked. Some meta keys are defined as arrays, meaning they can contain multiple values in a single session. The meta key ‘ioc’ is one such key. You may encounter a situation where a meta key should be set as an array but is not. If that is the case, it is a simple change in the ESA configuration.

 

Next, we want to deploy the rule to our ESA appliance. To do so, we clicked the ESA appliance in our deployments table.

Next, we add the rule we want to deploy. Then, we deploy it.

 

We then imported the PCAP again to see if our ESA rule fired successfully, which it did.

 

The last piece before production is to create an Incident rule based on the ESA alerts. We move to Configure / Incident Rules and create a new rule.

I created the Incident rule in the lab and used the parameters shown below.

 

I then enabled the rule and saved it.

 

Now, when the incidents are examined in the Respond module, we can see our incidents being created.

 

To summarize this activity: we started from some new(ish) research and wanted to find a way to detect it in NetWitness. We found traffic we were interested in and then built a Lua parser to improve our detection going forward. Next, I wanted to alert on this activity only when it appeared in SMTP traffic and, because I wanted to work on some automation, created an Incident rule to put a bow on it. We now have actionable alerting after a small bit of research on our end. My intent is to get the content of the parser added to one already in Live. Until that time, it will be here to serve as a reference.

 

What are your use cases? What are some things you are trying to find on the network that NetWitness can help with? Let us know.

 

Good luck and happy hunting.

There have been a few blogs recently (Gathering Stats with Salt - BIOS/iDRAC/PERC Edition, RSA NetWitness Storage Retention Script) that leverage new functionality in v11.x for querying data directly from RSA NetWitness hosts through the command line.

 

This functionality - SaltStack - is baked into v11.x (Chef pun ftw!) and enables PKI-based authentication between the salt master (AKA admin server; AKA node0) and any salt minion (everything that's not the salt master, plus itself).

 

During a recent POC, one of the customer's use cases was to gather, report, and alert against certain host information within the RSA NetWitness environment - kernel, firmware, BIOS, OS, and iDRAC versions, storage utilization (%), and some others.

 

In order for NetWitness to report and alert on this information, we needed to take these details about the physical hosts and feed them into the platform so that we could use the resulting meta.  Thankfully, others before me did all the hard work of figuring out the commands to run against the hosts to extract this information, so all I had to do was massage the results into a format that could be fed into NetWitness as a log event and write a parser for it.

 

The scripts, parser, and custom index entries are attached to this blog.  All the scripts are intended to be run from your 11.x Admin Server.  If you do choose to use these or modify them for your environment/requirements, be sure to change the IP address for the log replay command within the scripts

NwLogPlayer -r 3 -f $logEvent -s 192.168.10.14 -p 514

 

...to the IP of a Log Decoder in your environment.  
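For context, the general approach in those scripts is to wrap each host's details in a single syslog-style line and then replay that file with NwLogPlayer. A minimal sketch of the idea in Python (the field names, message tag, and output path are illustrative assumptions, not taken from the attached scripts):

import socket
import time

# Hypothetical host details gathered via salt (values are made up)
host_stats = {"host": "nw-host-01", "bios": "2.8.0", "idrac": "2.63.60.61", "storage_pct": "72"}

# Build a simple key=value, syslog-style event
msg = "NWHOSTSTATS: " + " ".join("%s=%s" % (k, v) for k, v in host_stats.items())
event = "<13>%s %s %s" % (time.strftime("%b %d %H:%M:%S"), socket.gethostname(), msg)

with open("/tmp/host_stats.log", "a") as f:
    f.write(event + "\n")

# The resulting file can then be replayed with NwLogPlayer as shown above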

 

A custom meta and custom column group are also attached.

 

Although the RSA NetWitness platform gives administrators visibility into system metrics through the Health & Wellness Systems Stats Browser, we currently do not have a method to see all storage / retention across our deployment in a single instance or view.

 

Below you will find several scripts that will help us gain this visibility quickly and easily.

 

Update: Please grab the latest version of the script; some bugs were discovered and have been fixed.

 

How It Works:

 

1. Dependency: get-all-systems.sh (attached; there are v10 and v11 versions, so use the one for your environment). Please run this script prior to running get-retention.py, as it requires the 'all-systems' file, which contains all of your appliances and services.

2. We then read through the all-systems file and look for services that have retention, e.g. EndpointLogHybrid, EndpointHybrid, LogHybrid, LogDecoder, Decoder, Concentrator, Archiver (a rough sketch of this step appears after the list).

3. Finally, we use the 'tlogin' functionality of NwConsole, which allows cert-based authentication (so there is no need to run the script with a username/password), to pull database statistics and output the retention (in days) for each service.
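A rough sketch of step 2 in Python (illustrative only; the attached get-retention.py and the actual all-systems file format may differ, and this assumes one service entry per line):

# Filter the all-systems file down to services that have retention
RETENTION_SERVICES = ("EndpointLogHybrid", "EndpointHybrid", "LogHybrid",
                      "LogDecoder", "Decoder", "Concentrator", "Archiver")

with open("all-systems") as f:
    for line in f:
        if any(svc in line for svc in RETENTION_SERVICES):
            print(line.strip())   # candidate service to query for retention stats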

 

Instructions:

 

1. Run ./get-all-systems_v10.sh (for 10.x systems) or ./get-all-systems_v11.sh (for 11.x systems)

2. Run ./get-retention.py  (without any arguments). This MUST be run from Puppetmaster (v10) or Node0 (v11).

 

Sample Run: 

 

Please feel free to provide feedback, bug reports, etc.

There have been many improvements to the RSA NetWitness product over the past several releases on the log management side of the house to help reduce the number of unparsed or misparsed devices.  There are still instances where manual intervention is necessary, and a report such as the one provided in this blog could prove valuable for you.

 

This report provides visibility into 4 types of situations:

 

Device.IP with more than 1 device.type

Devices that have multiple parsers acting on them over this time period, sorted from most parsers per IP to least

 

Unknown Devices

Unknown devices are those for which no parser was detected, or for which no parser is installed/enabled.

 

Device.types with word meta

Device types with word meta indicate that a parser has matched a header for that device but no payload (message body) has matched a parser entry.

 

Device.type with parseerror

Devices that parse meta for most fields but have parseerror meta for particular meta key data. This can indicate that the format of the data written to the key does not match the format of the key (for example, an invalid MAC address written to eth.src or eth.dst, which are MAC-formatted keys, or text written to an IP key).

 

Some of these categories are legitimate, but checking this report once a week should allow you to keep an eye on the logging function of your NetWitness system and make sure that it is performing at its best.

 

The code for the Report is kept here (in clear text format so you can look at the rule content without needing to import it into NetWitness):

GitHub - epartington/rsa_nw_re_logparser_health 

 

Here's a sample report output:

 

Most people don't remember the well-known port number for a particular network protocol. Sometimes we need to refer to an RFC to remember which port certain protocols normally run over.

 

In the RSA NetWitness UI, the well-known name for the protocol is presented, but when you drill on it you get the well-known port number.

 

This can be a little confusing at times if you aren't completely caffeinated.☕

 

Well, here's some good news: you can use the name of the service in your drills and reports with the following syntax:

 

Original method:

Service=123 

 

New method:

Service="NTP"

 

You may get an error about needing quotes around the word; however, the system still interprets the query correctly.

 

 

This also works in profiles:

 

And in the Reporting Engine as well:

 

Good luck using this new trick!

   

(P.S. you can also use AND instead of && and OR instead of ||)

RSA NetWitness v11.2 introduced a very useful improvement to the Profile feature in the Investigation workflow.  In previous versions, a Profile could have a pre-query set for it along with meta and column groups, but you were locked into using only those unless you deactivated the profile.

 

With v11.2, you are able to keep the pre-query set by the profile and pivot to other meta and column groups.  This allows you to use Profiles as bookmarks or starting points for investigations or drills, along with the folders that can be set in the Profile section to help organize the various groups that frame investigations properly.

 

Below is a collection of the profiles as well as some meta and column groups to help collect various types of data or protocols together.

 

GitHub - epartington/rsa_nw_investigation_profiles 

 

Protocols

Medium

Log Device Classes

UEBA

 

Let me know if these work for you. I will be adding more to the GitHub site as they develop, so check back.

Oftentimes, RSA NetWitness Packet Decoders are configured to monitor not only ingress and egress traffic, but internal LAN traffic as well.  On a recent engagement, we identified a significant amount of traffic going to TCP port 9997.  It did not take long to realize this traffic was from internal servers configured to forward their logs to Splunk.

 

The parser adds the value '9997' to the 'service' meta key.  After running the parser for several hours, we also found other ports used by the Splunk forwarders.

 

While there wasn't anything malicious or suspicious about the traffic, it was a significant amount of traffic taking up disk space.  By identifying it, we can make it a filtering candidate.  Ideally, the traffic would be filtered further upstream at a TAP, but sometimes that isn't possible.

 

If you are running this parser, you could also update the index-concentrator-custom.xml and add an alias to the service types.  

 

 

...

 

 

If you have traffic on your network that you want better ways to identify, let your RSA account team know.  

 

Good luck, and happy hunting.

I helped one of my customers implement a use case last year that entailed sending email alerts to specific users when those users logged into legacy applications within their environment.

 

Creating the alerts for this activity with the ESA was rather trivial - we knew which event source would generate the logs and the meta to trigger against - but sending the alert via email to the specific user that was ID'd in the alert itself added a bit of complexity.

 

Fortunately, others have had similar-ish requirements in the past and there are guides on the community that cover how to generate custom emails for ESA alerts through the script notification option, such as Custom ESA email template with raw event payload and 000031690 - How to send customized subjects in an RSA Security Analytics ESA alert email.

 

This meant that all we had to do was map the usernames from the log events to the appropriate email addresses, enrich the events and/or alerts with those email addresses, and then customize the email notification using that information.  Mapping the usernames to email addresses and adding this information to events/alerts could have been accomplished in a couple of different ways - either a custom Feed (Live: Create a Custom Feed) or an In-Memory Table (Alerting: Configure In-Memory Table as Enrichment Source).  For this customer, the In-Memory Table was the preferred option because it would not create unnecessary meta in their environment.

 

We added the CSV containing the usernames and email addresses as an enrichment source:
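As a purely hypothetical example, such a CSV could be as simple as a username column to join on and an email column to return (the column names here are illustrative; your environment's meta keys and table layout may differ):

user_dst,email
jdoe,jdoe@example.com
asmith,asmith@example.com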

 

....then added that enrichment to the ESA alert:

 

With these steps done, we triggered a couple of alerts to see exactly what the raw output looked like, specifically how the enrichment data was included.  The easiest way to find raw alert output is within the Respond module by clicking into the alert and looking for the "Raw Alert" pane:

 

Armed with this information, we were then able to write the script (copy/pasting from the articles linked above and modifying the details) to extract the email address and use that as the "to_addr" for the email script (also attached at the bottom of this post):

#!/usr/bin/env python
from smtplib import SMTP
import datetime
import json
import sys

def dispatch(alert):
    """
    The default dispatch just prints the 'last' alert to /tmp/esa_alert.json. Alert details
    are available in the Python hash passed to this method e.g. alert['id'], alert['severity'],
    alert['module_name'], alert['events'][0], etc.
    These can be used to implement the external integration required.
    """

    with open("/tmp/esa_alert.json", mode='w') as alert_file:
        alert_file.write(json.dumps(alert, indent=True))

def read():
    #Parameter
    smtp_server = "<your_mail_relay_server>"
    smtp_port = "25"
    # "smtp_user" and "smtp_pass" are necessary
    # if your SMTP server requires authentication
    # used in "smtp.login()" below
    #smtp_user = "<your_smtp_user_name>"
    #smtp_pass = "<your_smtp_user_password>"
    from_addr = "<your_mail_sending_address>"
    missing_msg = ""
    to_addr = ""  #defined from enrichment table

    # Get data from JSON
    esa_alert = json.loads(open('/tmp/esa_alert.json').read())
    #Extract Variables (Add as required)
    try:
        module_name = esa_alert["module_name"]
    except KeyError:
        module_name = "null"
    try:
         to_addr = esa_alert["events"][0]["user_emails"][0]["email"]
    except KeyError:
         missing_msg = "ATTN:Unable to retrieve from enrich table"
         to_addr = "<address_to_send_to_when_enrichment_fails>"
    try:
        device_host = esa_alert["events"][0]["device_host"]
    except KeyError:
        device_host = "null"
    try:
        service_name = esa_alert["events"][0]["service_name"]
    except KeyError:
        host_dst = "null"
    try:
        user_dst = esa_alert["events"][0]["user_dst"]
    except KeyError:
        user_dst = "null"
    # Sends Email
    smtp = SMTP()
    smtp.set_debuglevel(0)
    smtp.connect(smtp_server,smtp_port)

    date = datetime.datetime.now().strftime( "%m/%d/%Y %H:%M" ) + " GMT"
    subj = "Login Attempt on " + ( device_host )
    message_text = ("Alert Name: \t\t%s\n" % ( module_name ) +
        " \t\t%s\n" % ( missing_msg ) +
        "Date/Time : \t%s\n" % ( date  )  +
        "Host: \t%s\n" % ( device_host ) +
        "Service: \t%s\n" % ( service_name ) +
        "User: \t%s\n" % ( user_dst )
    )

    msg = "From: %s\nTo: %s\nSubject: %s\nDate: %s\n\n%s\n" % ( from_addr, to_addr, subj, date, message_text )
    # "smtp.login()" is necessary if your
    # SMTP server requires authentication
    #smtp.login(smtp_user,smtp_pass)
    smtp.sendmail(from_addr, to_addr, msg)
    smtp.quit()

if __name__ == "__main__":
    dispatch(json.loads(sys.argv[1]))
    read()
    sys.exit(0)

 

And the result, after adding the script as a notification option within the ESA alert:

-----------------------------

 

Of course, all of this can and should be modified to include whatever information you might want/need for your use case.

Amazon Virtual Private Clouds (VPCs) are used in hybrid cloud enterprise environments to securely host certain workloads, and customers need to enable their SOC to identify potential threats to these components of their infrastructure.  The RSA NetWitness Platform supports ingest of many third-party sources, including Amazon CloudTrail, GuardDuty, and now VPC Flow Logs.

 

The RSA NetWitness Platform has reporting content for analysts to leverage in assessing VPC security and overall health.  In https://community.rsa.com/docs/DOC-97451 we illustrate out-of-the-box reporting content that gives an analyst quick visibility into potential operational issues, such as the highest and lowest accepted/rejected connections and traffic patterns on each VPC.

 

VPC Flow Logs is an AWS monitoring feature that captures information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs and Amazon S3. After you've created a flow log, you can retrieve and view its data in the chosen destination. 

 

Logs from Amazon VPCs can be exported to CloudWatch. The RSA NetWitness Platform AWS VPC plugin uses the CloudWatch API to capture the logs.
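Conceptually, the plugin's collection can be thought of as polling a CloudWatch Logs group for flow log records. A minimal sketch of that idea using boto3 (the log group name and region are assumptions, and this is not the plugin's actual code):

import boto3

# Illustrative only: read recent VPC Flow Log events from a CloudWatch Logs group
logs = boto3.client("logs", region_name="us-east-1")
resp = logs.filter_log_events(logGroupName="/aws/vpc/flow-logs", limit=10)
for event in resp.get("events", []):
    print(event["message"])   # each message is a single flow log record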

 

 

 

 

                                                                                                                                   
