
RSA NetWitness Platform

9 Posts authored by: Chris Thomas

To round out our series explaining how to use the indicators from ASD & NSA's report for detecting web shells (Detect and prevent web shell malware | Cyber.gov.au) with NetWitness, let's take a look at the endpoint-focused indicators. If you missed the other posts, you can find them here:

 

Signature-Based Detection

To start with, the guide provides some YARA rules for static, signature-based analysis. However, the guide then quickly moves on to say that this approach is unreliable, as attackers can easily modify web shells to avoid this type of detection. We couldn't agree more – YARA scanning is unlikely to yield many effective detections.

 

Endpoint Detection and Response (EDR) Capabilities

The guide then goes on to describe the potential benefits of using EDR tools like NetWitness Endpoint. EDR tools can be of great benefit to provide visibility into abnormal behaviour at a system level. As the paper notes:

For instance, it is uncommon for most benign web servers to launch the ipconfig utility, but this is a common reconnaissance technique enabled by web shells.

Indeed – monitoring the processes and commands invoked by web server processes is a good way to detect the presence of web shells. When a web shell is first accessed by an attacker, they will commonly run a few commands to figure out what sort of access they have. Appendix F of the guide includes a list of Windows executables to watch for being launched by web server processes like IIS w3wp.exe (reproduced below):

NetWitness Endpoint provides out-of-the-box monitoring for many of these processes, and produces metadata when execution is detected. The examples below show some of the meta generated for the execution of cmd.exe, ipconfig.exe and whoami.exe from a web shell – the Behaviors of Compromise key shows values of interest:

An important detail to be wary of is that in many cases the web server process like w3wp.exe may not invoke the target executable directly. So simply running a query looking for filename.src = 'w3wp.exe' && filename.dst = 'ipconfig.exe' won’t work. In the example below, we can see that the web server process actually invokes a script in memory, which then invokes cmd.exe to run the desired tool ipconfig.exe, similarly for whoami.exe:
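Since the chain typically runs web server → script host → tool, a sketch of querying each hop separately can help (the process and tool names here are examples only – extend the lists for your environment):

```text
filename.src = 'w3wp.exe' && filename.dst = 'cmd.exe','powershell.exe'
filename.src = 'cmd.exe','powershell.exe' && filename.dst = 'ipconfig.exe','whoami.exe','net.exe'
```

Running the two queries separately and pivoting on the common host and time window links the chain back to the web server process.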

The event detail shows the chain of execution across the two events:

We can see the full meta data includes the command to run ipconfig.exe passed as a parameter between the two processes:

 

We can get a clearer picture of the relationship between these processes using the NetWitness Endpoint process analyser, which shows the links between the processes:

 

NetWitness Endpoint generates a lot of insightful metadata to describe actions on a host. It is well worth reviewing the metadata generated and which meta keys it is placed under. There is a great documentation page with all the details here: RSA NetWitness Endpoint Application Rules 

Not just IIS

Of course, web shells don't only run on IIS! The same principles can be used for detecting web shells installed on Apache Tomcat and other web servers. Application rules in NetWitness Endpoint also look for command execution by other web server processes. Make sure you check your environment for your web server daemons and add them to the rules as well:
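As a sketch, the same condition can be widened beyond IIS – the daemon names below are examples only; match them to the web server processes that actually run in your environment:

```text
filename.src = 'w3wp.exe','httpd','nginx','tomcat8.exe','java','php-cgi.exe' && filename.dst = 'cmd.exe','ipconfig.exe','whoami.exe'
```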


Endnote

That’s it for this series, where we’ve gone through the indicators published by ASD & NSA in their guide for detecting web shells and shown how to apply them in NetWitness. While the indicators in the guide serve as a starting point, real-life detection can get very complicated very quickly. As we stated in a previous post:

Not all indicators are created equally, and this post should not be taken as an endorsement by this author or RSA on the effectiveness and fidelity of the indicators published by the ASD & NSA.

My colleague Hermes Bojaxhi recently posted about another example involving web shells from one of our cases. He goes into great detail showing the exploitation of Exchange and the installation of a web shell: Exchange Exploit Case Study – CVE-2020-0688 

 

Let me know in the comments below if you’ve used any of these techniques in your environment and what you've found - or let me know if there's anything else you'd like to see.

 

Happy Hunting!

Following on from my last post, which focused on analysing web server logs (ASD & NSA's Guide to Detect and Prevent Web Shell Malware - Web Server Logs), this time we are going to look at the network-based indicators from the ASD & NSA guide (Detect and prevent web shell malware | Cyber.gov.au).

There are already some fantastic resources posted by my colleague from the IR team Lee Kirkpatrick and the NetWitness product Documentation team that provide great details on the different ways we can detect web shells using NetWitness for network visibility:

The focus of this post is taking the indicators published by the ASD & NSA in their guide, and showing how to use them in NetWitness.

Not all indicators are created equally, and this post should not be taken as an endorsement by this author or RSA on the effectiveness and fidelity of the indicators published by the ASD & NSA.

Now that’s out of the way, let’s take a look at the network indicators.

Web Traffic Anomaly Detection

This is really focused on the URIs being accessed on your servers and the user agents that are being used to access those pages. An easy way to detect new user agents, or new files being accessed on your website (depending on how dynamic your content is) is to use the show_whats_new report action. The show_whats_new action will filter your results from a query to only show new values that did not appear in the database prior to the timeframe of your report. Here’s an example from my lab – if I run a report to show all user agents seen in the last 6 hours I get 20 user agents in my report:

Using show_whats_new in the THEN clause of the rule filters the results and shows me only 2 user agents (which makes sense, as my Chrome browser recently updated):

Obviously just because a user agent is new doesn’t automatically mean it is a web shell, as web browsers get updates all the time. But it is another method for highlighting anomalies and changes in your environment.

One of the common techniques we use in the IR team is to review the HTTP request methods used against a server – sessions that do not follow the pattern of normal user web browsing are a good indicator for web shells. Normal user-generated browsing will consist of GET requests followed by POSTs. Sessions that have a POST action with no GET request and no referrer present are a good indicator, as Lee covers in his post mentioned above.
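As a sketch, that condition can be expressed as an app rule along these lines (the key names assume the default HTTP parser output):

```text
service = 80 && action = 'post' && referer !exists
```

Tune it against your own traffic first – some legitimate API clients also POST without a referer.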

Signature-Based Detection

As the ASD & NSA guide states itself, network signatures are an unreliable way to detect web shell traffic:

From the network perspective, signature-based detection of web shells is unreliable because web shell communications are frequently obfuscated or encrypted. Additionally, “hard-coded” values like variable names are easily modified to further evade detection. While unlikely to discover unknown web shells, signature-based network detection can help identify additional infections of a known web shell.

The guide nevertheless includes some Snort rules to detect network communication from common, unmodified web shells:

RSA NetWitness has always had the ability to use Snort rules on the Network Decoder, and that capability was recently enhanced in the 11.3 release, which added the ability to map the metadata generated by the Snort parser to the Unified Data Model. For the steps required to install and configure Snort rules on your Network Decoder, follow these guides for details and more information:

Here’s the short version:

  1. Create a new folder on your Network Decoder: /etc/netwitness/ng/parsers/snort
  2. Create a snort.conf file in that directory. Here’s a simple configuration to get you started:
  3. Copy the rules from the ASD & NSA guide into a file called webshells.rules
    Mitigating-Web-Shells/network_signatures.snort.txt at master · nsacyber/Mitigating-Web-Shells · GitHub 

  4. Go to the Explore view for your Decoder, go to decoder > parsers > config, and add Snort="udm=true" to the parsers.options field

  5. While in Explore view, right click on decoder > parsers, select properties, then choose reload and hit Send to reload the parsers and activate your Snort rules.
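For step 2, a minimal snort.conf only needs to define the standard Snort variables that the rules reference – treat this as a starting sketch and adjust the values for your network:

```text
var HOME_NET any
var EXTERNAL_NET any
portvar HTTP_PORTS [80,8080]
```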

Here we can see the Snort rules successfully loaded and available on the Network Decoder:

Unexpected Network Flows

The ASD & NSA guide suggests monitoring the network for unexpected web servers, and provides a snort signature that simply alerts when a node in the targeted subnet responds to an HTTP(s) request by looking for traffic on port 80 or 443 with a destination IP address in a given subnet:

alert tcp 192.168.1.0/24 [443,80] -> any any (msg:"potential unexpected web server"; sid:4000921;)

Rather than updating this rule with the right subnet details for your environment (that will only be available to be used by this rule), we can do this natively in NetWitness utilising the Traffic Flow parser and its associated traffic_flow_options file to label subnets and IP addresses. Using the traffic_flow_options file to do this labelling means the resulting meta can be used by other parsers, feeds, and app rules as well.

For more details on the Traffic Flow parser, go here: Traffic Flow Lua Parser 

To configure your traffic_flow_options file, start with the subnet or IP addresses of known web servers and add them as a block in the INTERNAL section of the file, and label them “web servers”. When traffic is seen heading to those servers as a destination, the meta ‘web servers dst’ will be registered under the Network Name (netname) meta key.

Once the traffic_flow_options file is configured, we can translate the Snort rule from the guide into an app rule that will detect any HTTP or HTTPS traffic, or traffic destined to port 80 or 443, to any system that has not been added to our definition for web servers:

(service = 80,443 || tcp.dstport = 80,443) && netname != 'web servers dst'

Conclusion

That covers the network-based indicators included in the ASD & NSA guide. For more techniques to uncover web shell network traffic, check out the pages linked at the top of this blog, as well as the RSA IR Threat Hunting Guide for NetWitness:

Stay tuned for the next part, where we take a look at the endpoint-based indicators from the guide and see how to apply them using NetWitness Endpoint.

 

Happy Hunting!

Introduction

The Australian Signals Directorate (ASD) & US National Security Agency (NSA) have jointly released a useful guide for detecting and preventing web shell malware. If you haven't seen it yet, you can find it here:

The guide includes some sample queries to run in Splunk to help detect potential web shell traffic by analysing IIS and Apache web logs. “That’s great, but how can we do the same search in NetWitness Logs?” I hear you ask! Let’s take a look.

Web Server Logging

If you are already collecting IIS and Apache logs – or any web server audit logs for that matter – you’ve probably already made some changes to your configuration to suit your needs to get the data that you want. To run the queries suggested by the guide, we need to make a change to the default log parser settings for IIS & Apache logs. The default log parser setting for IIS & Apache does not save the URI field as meta that we can query – it is parsed at the time of capture and available as transient meta for evaluation by feeds, parsers, & app rules, but it is not saved to disk as meta. To collect the data needed to run these queries, we are going to change the setting for the meta from “Transient” to “None”.

For more information on how RSA NetWitness generates and manages meta, go here: Customize the meta framework 

The IIS and Apache log parsers both parse the URI field from the logs into a meta key named webpage. The table-map.xml file on the Log Decoder shows that this meta value is set to “Transient”.

To change the way this meta is handled, take a copy of the line from the table-map.xml and paste it into the table-map-custom.xml, and change the flags=”Transient” setting to flags=”None”:

<mapping envisionName="webpage" nwName="web.page" flags="None" format="Text"/>

Hit apply, then restart the log decoder service for the change to take effect. Remember to push the change to all Log Decoders in your environment.

Next, we want to tell the Concentrator how to handle this meta. Go to your index-concentrator-custom.xml file and add an entry for this new web.page meta key:

<key description="URI" format="Text" level="IndexValues" name="web.page" defaultAction="Closed" valueMax="10000" />

I set the display name for the key as URI – but you can set it to whatever makes sense for you. I also set a maximum value count of 10,000 for the key - you should use a value that makes sense for your website(s) and environment and review for any meta overflow errors.

Hit apply, then restart the concentrator service for the change to take effect. Remember to push the change to all Concentrators in your environment (Log & Network), especially if you use a Broker.

Now as you collect your web logs, the web.page meta key will be populated:

You may also want to change the index level for the referer key. By default it is set to IndexKey, which means a query that tests if a referer exists or doesn’t exist will return quickly, but a search for a particular referer value will be slow. If you find yourself doing a lot of searches for specific referers you can change this setting to IndexValues as well.
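If you do change it, the entry follows the same pattern as the web.page key – a sketch for index-concentrator-custom.xml (the valueMax here is an assumption; size it for your traffic):

```xml
<key description="Referer" format="Text" level="IndexValues" name="referer" defaultAction="Closed" valueMax="100000" />
```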

Optionally, you can add the web.page meta key to a meta group & column group so you can keep track of it in Navigate & Events views. I’ve attached a copy of my Web Logs Analysis meta group and column group to the end of this post.

Now we are ready for the queries themselves. While at first glance they seem pretty complicated, they really aren’t. Plus with the way NetWitness parses the data into a common taxonomy, you don’t need different queries for IIS & Apache – the same query will work for both!

Query 1 – Identify URIs accessed by few user agents and IP addresses

For this query, we need to use the countdistinct aggregation function to count how many different user agents and how many different IP addresses accessed the pages on our website.

For more information on NWDB query syntax, go here: Rule Syntax 
SELECT web.page, countdistinct(user.agent), countdistinct(ip.src)
WHERE device.class = 'web logs' && result.code begins '2'
GROUP BY web.page
ORDER BY countdistinct(user.agent) ASCENDING
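To make the aggregation concrete, here is the same logic as a small Python sketch over invented log records (page, user agent, source IP, result code – the data is made up for illustration):

```python
from collections import defaultdict

# Invented records: (web.page, user.agent, ip.src, result.code)
events = [
    ("/index.html", "Mozilla/5.0", "10.0.0.1", "200"),
    ("/index.html", "Chrome/79",   "10.0.0.2", "200"),
    ("/shell.aspx", "python-requests/2.23", "203.0.113.9", "200"),
    ("/shell.aspx", "python-requests/2.23", "203.0.113.9", "200"),
    ("/login",      "Mozilla/5.0", "10.0.0.1", "302"),  # non-2xx: filtered out
]

agents, ips = defaultdict(set), defaultdict(set)
for page, ua, ip, code in events:
    if code.startswith("2"):        # WHERE result.code begins '2'
        agents[page].add(ua)        # countdistinct(user.agent)
        ips[page].add(ip)           # countdistinct(ip.src)

# ORDER BY countdistinct(user.agent) ASCENDING - rare pages float to the top
for page in sorted(agents, key=lambda p: len(agents[p])):
    print(page, len(agents[page]), len(ips[page]))
```

The web shell page stands out at the top of the list with a single user agent and a single source IP.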

Query 2 – Identify user agents uncommon for a target web server

This query simply shows the number of times each user agent accesses our web server. We can see this very easily by just using the Navigate interface and setting the result order to Ascending:

Here is the query to use in the report engine rule:

SELECT user.agent
WHERE device.class = 'web logs' && result.code begins '2'
GROUP BY user.agent
ORDER BY Total Ascending

Query 3 – Identify URIs with an uncommon HTTP referrer

This query is a bit more complicated – we want to show referrers that do not access many URIs, but also want to see how often they access each URI. This query could need some tuning if you have pages on your site that are typically only accessed by following a link from a previous page, or even an image file that is only loaded by a single page.

Our select statement will list the referer, followed by the number of URIs that the referer is used for (sorted ascending – we’re interested in uncommon referers), then the URIs where it is seen as the referer, followed by the number of hits (sorted descending) – a URI that is accessed often via an uncommon referer stands out.

SELECT referer, countdistinct(web.page), distinct(web.page), count(referer)
WHERE device.class = 'web logs' && result.code begins '2'
GROUP BY referer
ORDER BY countdistinct(web.page) Ascending, count(referer) Descending

Query 4 – Identify URIs missing an HTTP referrer

This is an easy one to finish off – we’re interested in events where there is no referer present. To refine the results we want to filter events that are hitting the base of the site ‘/’ as this could easily be someone typing the URL directly into their browser.

SELECT web.page
WHERE device.class = 'web logs' && (referer !exists || referer = '-') && web.page != '/' && result.code begins '2'
GROUP BY web.page
ORDER BY Total Descending

These rules and a report that includes the rules can be found in the attached files.

Conclusion

Let me know in the comments below how these queries work in your environment, and if you have suggestions for improvements. The goal of this post was to quickly convert the queries included in the guide published by ASD & NSA. Stay tuned for more posts that show how we can improve the fidelity of these queries, and also how to utilise the endpoint and network indicators also found in the ASD & NSA guide.

 

Happy Hunting!

Introduction

Having recently moved into the IR team – where I now have to actually do stuff as opposed to just talking about stuff in technical sales – I have found that the best way to get up to speed with detecting attacker behaviours is to run the tools they are likely to use in my lab so I can get familiar with how they work. Reading blogs like this and the others in Lee Kirkpatrick's excellent Profiling Attackers Series is great, but I find I learn much faster by doing things and interacting with systems myself.


Covenant

Covenant is an open-source C2 framework (https://github.com/cobbr/Covenant) that can be viewed as a replacement for PowerShell Empire since the latter's retirement.

In this blog series, Lee Kirkpatrick has already covered some examples of how to get the payload delivered and installed on the target, so we’re going to dive straight in to how our Hunting Methodology can be used to detect the activity. We are going to hunt for activity using data generated by both NetWitness Network and NetWitness Endpoint.

For the purpose of this exercise, we have used the default HTTP settings for the Listener profile in Covenant, and only changed the default beacon setting from 5 seconds to 120 seconds to represent a more realistic use of the tool. The settings can be easily changed (such as the user-agent, directory and files used for the callback, etc.), but quite often the defaults are used by attackers too! We have also used the PowerShell method for creating our Launcher.


NetWitness Network Analysis

Covenant uses an HTTP connection for its communication (which can optionally be configured to run over SSL with user-provided certs). By using our regular methodology of starting with outbound HTTP traffic (direction = 'outbound' && service = 80), we can review the Analysis meta keys for any interesting indicators:

 

 

Reviewing the Service Analysis keys (analysis.service) we can see some interesting values:

 

 

Check the RSA NetWitness Hunting Guide for more information on these values in Service Analysis

 

By drilling into these 6 values we reduce our dataset from over 4,000 sessions to 69 sessions – this means that these 69 sessions all share the same “interesting” characteristics that suggest that they are not normal user initiated web browsing.

 

 

With 69 sessions we can use Event Analysis to view those sessions in more detail, which reveals the bulk of traffic belongs to the same Source & Destination IP address pair:

 

 

This appears to be our Covenant C2 communications. Opening the session reconstruction, we can see more details. One observation that could be used to enhance detection of this traffic is the strange-looking User-Agent string:

 

 

The User-Agent string is strange as it appears to be old. It resolves to Chrome version 41 on Windows 7 – the victim in this case is a Windows 10 system, and the version of Chrome installed on the host is version 79. If you attempt to connect to the Listener with a different User-Agent, it returns a 500 Error:

 

 

Don't poke the Bear (or Panda, Kitten, Tiger etc) - if you find these indicators in your environment, don't try to establish a connection back to the attacker's system as you will give them a tip-off that you are investigating them.

Also, the HTTP Request Header “cookies” appears in all sessions:

 

 

The HTTP Request Header “cookie” also appears in all sessions after the initial callback … so sessions with both “cookies” and “cookie” request headers appear unique to this traffic:

 

 

The following query (which could be used as an App rule) identifies the Covenant traffic in our dataset:

client = 'mozilla/5.0 (windows nt 6.1) applewebkit/537.36 (khtml, like gecko) chrome/41.0.2228.0 safari/537.36' && http.request = 'cookies' && http.request = 'cookie'

Another indicator we could use is the Request Header value SESSIONID=1552332971750, as this also appears to be a static string in the default HTTP profile for Covenant - as shown in this sample that has been submitted to hybrid-analysis.com https://www.hybrid-analysis.com/sample/aed68c3667e803b1c7af7e8e10cb2ebb9098f6d150cfa584e2c8736aaf863eec?environmentId=10… 

 

 

NetWitness Endpoint Analysis

When hunting with NetWitness Endpoint, I always start with my *Compromise keys – Behaviours of Compromise, Indicators of Compromise, and Enablers of Compromise, as well as reviewing the Category of endpoint events.

 

 

Here we can see 4 meta values related to running PowerShell – which we know is the method used for creating our Covenant Launcher.

Upon viewing these events in Event Analysis, we can see the encoded PowerShell script being launched:

 

 

Analysis shows that we have a very large encoded parameter being passed. It’s too large for us to decode and manage in the NetWitness GUI, so we can paste the command into CyberChef and decode it from there.

 

 

We can further decode the string to reveal the command:

 

 

The output here appears to be compressed, so we can add an Inflate operation to our recipe to reveal the contents:
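The whole CyberChef recipe can be reproduced in a few lines of Python – a sketch with invented sample data (real launchers vary in how they wrap the stage):

```python
import base64
import zlib

def decode_encoded_command(b64: str) -> str:
    """PowerShell -EncodedCommand is Base64 over UTF-16LE text."""
    return base64.b64decode(b64).decode("utf-16-le")

def inflate(b64: str) -> bytes:
    """Base64-decode then raw-DEFLATE-decompress an embedded stage."""
    return zlib.decompress(base64.b64decode(b64), -15)  # -15 = headerless deflate

# Round trip with invented data to show the recipe working
inner = b"example stage contents"
co = zlib.compressobj(wbits=-15)
packed = base64.b64encode(co.compress(inner) + co.flush()).decode()

outer = base64.b64encode("Write-Output hello".encode("utf-16-le")).decode()
print(decode_encoded_command(outer))  # the readable PowerShell command
print(inflate(packed))                # the decompressed stage
```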

 

 

Looks like we have some executable code. A quick search for recognisable strings yields a URL which matches our network traffic for the callback to the Covenant server, as well as a template for the HTML page that should match what is served by the Covenant Listener:

 

 

Also the block of text can be Base64 decoded to reveal the Request Headers to be sent by the Grunt when communicating with the Listener:

 

 

This also matches what we observed in our network analysis for a Grunt check-in:

 

 

And the command being sent to the Grunt via the response from the Listener:

 

Decoding the &data= section of the above POST shows the encrypted data being returned to the Listener – known as the GruntEncryptedMessage:

 

 

 

Happy Hunting!

CT

I'm sure you know that RSA NetWitness for Logs and Packets includes the ability to register for a Cisco AMP ThreatGrid API key through RSA's partnership with Cisco AMP ThreatGrid. You can use this API key to enable sandbox analysis with the RSA NetWitness Malware Analysis service. If you haven't done so already, check out the documentation here MA: (Optional) Register for a ThreatGrid API Key for details on how to register.

 

What you may not know is that you can also use that API key to download Cisco AMP ThreatGrid's Intelligence Feeds. Every hour or so, Cisco AMP ThreatGrid takes the artefacts from their sandbox analysis and creates 15 Intelligence Feeds – we can use 12 of them directly in RSA NetWitness for Logs and Packets. It's easy to set these up as feeds using the Custom Feed Wizard in RSA NetWitness Logs and Packets.

 

Once you have your Cisco AMP ThreatGrid API key and login details, log in to the portal and click on the Help icon to access the Feeds Documentation. It will be in the middle of the page:

 

 

Follow the Cisco AMP ThreatGrid documentation to see which feeds make sense for your environment. At the time of writing, there are 15 feeds available. The feeds that end with -dns are feeds that match on a DNS lookup for a host - these are the feeds that we will integrate with RSA NetWitness for Logs and Packets:

 

 

The format for the URL to retrieve the feed is quite simple:

https://panacea.threatgrid.com/api/v3/feeds/feed_name.format?api_key=1234567890
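If you want to script the downloads rather than type each URL by hand, a small Python sketch that just assembles them (the feed names come from the table above; the key is a placeholder):

```python
from urllib.parse import urlencode

BASE = "https://panacea.threatgrid.com/api/v3/feeds"

def feed_url(name: str, api_key: str, fmt: str = "csv") -> str:
    # Build the recurring-feed URL used in the Custom Feed Wizard
    return f"{BASE}/{name}.{fmt}?{urlencode({'api_key': api_key})}"

# A couple of the -dns feeds, with a placeholder key
for name in ["banking-dns", "ransomware-dns"]:
    print(feed_url(name, "1234567890"))
```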

Once you have your API key ready, and the list of feeds you want to integrate, head to the RSA NetWitness Custom Feed Wizard under Live --> Feeds, where you will see any existing custom feeds:

 

Click on the + to create a new custom feed:

Then enter the details for your feed. Here is a list of the URLs for all the feeds – just put your key in at the end instead of 1234567890 ...

 

https://panacea.threatgrid.com/api/v3/feeds/banking-dns.csv?api_key=1234567890
https://panacea.threatgrid.com/api/v3/feeds/dll-hijacking-dns.csv?api_key=1234567890
https://panacea.threatgrid.com/api/v3/feeds/doc-net-com-dns.csv?api_key=1234567890
https://panacea.threatgrid.com/api/v3/feeds/downloaded-pe-dns.csv?api_key=1234567890
https://panacea.threatgrid.com/api/v3/feeds/dynamic-dns.csv?api_key=1234567890
https://panacea.threatgrid.com/api/v3/feeds/irc-dns.csv?api_key=1234567890
https://panacea.threatgrid.com/api/v3/feeds/modified-hosts-dns.csv?api_key=1234567890
https://panacea.threatgrid.com/api/v3/feeds/parked-dns.csv?api_key=1234567890
https://panacea.threatgrid.com/api/v3/feeds/public-ip-check-dns.csv?api_key=1234567890
https://panacea.threatgrid.com/api/v3/feeds/ransomware-dns.csv?api_key=1234567890
https://panacea.threatgrid.com/api/v3/feeds/rat-dns.csv?api_key=1234567890
https://panacea.threatgrid.com/api/v3/feeds/sinkholed-ip-dns.csv?api_key=1234567890
https://panacea.threatgrid.com/api/v3/feeds/stolen-cert-dns.csv?api_key=1234567890

 

Make sure you select Recurring as the "Feed Task Type" - this will let you download the feed directly from Cisco AMP ThreatGrid - and set the "Recur Every" variable to 1 hour for fresh feeds:

 

 

Click the Verify button to make sure RSA NetWitness can connect to the URL and get the green tick:

Next, choose which of your Decoders to apply this feed to. It will work for both Packet and Log Decoders (but it's always a good idea to test first before rolling into production!):

 

 

Next, we get to define how to use the data in the feed. This will be a Non-IP feed (we want to match on the hostname in the feed), the Index will be in column 2 (the hostname), and the Callback Key (the key we want to match against) will be alias.host.

 

 

The other columns can be mapped to whatever meta keys you want to use in your environment. For my example, I used:

  • threat.desc - Threat Description for the first column as I often use the Threat Keys (threat.source, threat.desc, threat.cat) for reviewing data
  • alias.host - the Index column (column 2, the hostname), already defined as the Callback Key
  • alias.ip - this is the IP address that the hostname resolved to when the feed was created. For a more advanced implementation of this feed you may want to investigate how to create a feed with multiple indexes
  • tg.date - the date of the feed
  • tg.analysis - a link to the Cisco AMP ThreatGrid portal for analysis of the hostname
  • tg.sample - a link to the Cisco AMP ThreatGrid portal for a malware sample
  • tg.md5 - MD5 hash
  • tg.sha256 - SHA256 hash
  • tg.sha1 - SHA1 hash

(None of these new keys need to be indexed (unless you want to) so there is no need to modify the index-concentrator-custom.xml files).
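To sanity-check the column mapping before deploying, you can parse a sample row locally – a sketch with an invented record that follows the column order described above (the real feed layout may differ):

```python
import csv
import io

# Invented sample row in the column order described above
sample = (
    '"Some banking trojan","evil.example.com","203.0.113.7","2017-05-01",'
    '"https://panacea.threatgrid.com/analysis","https://panacea.threatgrid.com/sample",'
    '"md5placeholder","sha256placeholder","sha1placeholder"'
)

FIELDS = ["threat.desc", "alias.host", "alias.ip", "tg.date",
          "tg.analysis", "tg.sample", "tg.md5", "tg.sha256", "tg.sha1"]

row = next(csv.reader(io.StringIO(sample)))
meta = dict(zip(FIELDS, row))
print(meta["alias.host"])  # the Index / Callback Key column
```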

Next, review your settings:

 

When finished, confirm that your feed ran:

 

Repeat this process for each of the feeds that you want to integrate:

 

The last (optional) step, is to create an Application Rule that will label the Threat Source that this feed comes from. We can simply check for the tg.analysis key to see if any of our feeds have triggered:

 

Rule Name - Cisco AMP ThreatGrid

Condition - tg.analysis exists

Alert on - threat.source

 

Now we can simply search for threat.source = 'cisco amp threatgrid' to find any hits.

 

Happy Hunting!

There has been a lot of great information published about Sysmon since Mark Russinovich's presentation at RSA Conference. Eric Partington posted a great blog showing how to use Sysmon data with RSA NetWitness for Logs: Log - Sysmon 6 Windows Event Collection. This prompted RSA’s IR Team to publish details on how to get the rich tracking information generated by RSA NetWitness Endpoint that they use everyday for their incident investigations into a SIEM Here.

 

The aim of this blog is to show you how to collect this tracking data from RSA NetWitness Endpoint with RSA NetWitness for Logs. The collection is done via the Log Collector using a custom ODBC typespec.

 

*** DISCLAIMER - this is a field developed Proof of Concept, shared with the Community. It is not endorsed by RSA Engineering. The database structure used by NWE may change at any time. No testing has been done to measure the impact on performance for a production NWE Server. This has been developed and tested using RSA NetWitness Endpoint v4.3.0.1 and RSA NetWitness for Logs v10.6.2.1. /DISCLAIMER ***

 

***DISCLAIMER 2 - for this Proof of Concept, we have disabled the requirement on the NWE SQL Server to Force Encryption.  /DISCLAIMER 2 ***

 

The objective of this integration is to get the tracking data from NWE as it is being collected into NWL, so we can index it and use it for Investigations. Tracking data in NWE can only be viewed on a per machine basis - this integration allows us to get a global view of tracking data across all of our endpoints. Here's the high level summary of what we need to do (if you want to skip to the end, all files are attached as a zip):

  1. Create a new ODBC typespec definition (XML file) to query the NWE Database and get the data we want,
  2. Create a new Log Parser to map the results of the SQL query into metadata,
  3. Add the meta we are using to the table-map-custom.xml so it is persistent,
  4. Add the meta we want to index to the index-concentrator-custom.xml file,
  5. Configure a new ODBC DSN definition,
  6. Configure a new ODBC Event Collector,
  7. Configure a new Meta Group to show our data for investigations,
  8. Configure a new Column Group to show the data we want in Events view,
  9. Configure some Report Rules and Charts to visualise the data,
  10. Configure a new RSA NetWitness Endpoint Dashboard to keep track of our environment.

Here we go!

 

1. Create ODBC Definition

Thanks to Andreas Funk and his blog Integrating a MySQL (community) database with NetWitness for Logs for giving us a primer on how to create a new ODBC connection. We need to create a new typespec to tell the ODBC collector how to query the NWE database and get the data we want.

On the Log Collector (either the one on the Log Decoder, or a separate VLC - whichever you are going to use to collect these logs) the ODBC collection definitions are stored here: 

/etc/netwitness/ng/logcollection/content/collection/odbc/

 

We need to add a new file for our NWE tracking data - 

vi /etc/netwitness/ng/logcollection/content/collection/odbc/nwe_tracking.xml

 

Here is the query from Rui Ataide's blog, modified to work for NWL, included in our definition:

<?xml version="1.0" encoding="UTF-8"?>
<typespec>
   <name>nwe_tracking</name>
   <type>odbc</type>
   <prettyName>NetWitness Endpoint Tracking</prettyName>
   <version>2.0</version>
   <author>Chris Thomas</author>
   <description>Import NWE Tracking data</description>
   <device>
      <name>nwe_tracking</name>
   </device>
   <configuration>
   </configuration>
   <collection>
      <odbc>
         <query>
            <tag>nwe_tracking</tag>
            <outputDelimiter>||</outputDelimiter>
            <interval>30</interval>
            <dataQuery>              
           
(SELECT
      SE.PK_WinTrackingEvents,
      SE.EventUTCTime,
      MA.MacAddress as src_mac,
      MA.LocalIp as src_ip,
      MA.MachineName,
      LOWER(PA.Path),
      LOWER(FN.FileName),
      LOWER(PA.Path + FN.FileName) AS Source,
      MO.HashSHA256,
      LA.LaunchArguments AS SLA,
      CASE      
            WHEN SE.BehaviorFileOpenPhysicalDrive = 1 THEN 'OpenPhysicalDrive'
            WHEN SE.BehaviorFileReadDocument = 1 THEN 'ReadDocument'
            WHEN SE.BehaviorFileWriteExecutable = 1 THEN 'WriteExecutable'
            WHEN SE.BehaviorFileRenameToExecutable = 1 THEN 'RenameExecutable'
            WHEN SE.BehaviorProcessCreateProcess = 1 THEN 'CreateProcess'
            WHEN SE.BehaviorProcessCreateRemoteThread = 1 THEN 'CreateRemoteThread'
            WHEN SE.BehaviorProcessOpenOSProcess = 1 THEN 'OpenOSProcess'
            WHEN SE.BehaviorProcessOpenProcess = 1 THEN 'OpenProcess'
            WHEN SE.BehaviorFileSelfDeleteExecutable = 1 THEN 'SelfDelete'
            WHEN SE.BehaviorFileDeleteExecutable = 1 THEN 'DeleteExecutable'
            WHEN SE.BehaviorRegistryModifyBadCertificateWarningSetting = 1 THEN 'ModifyBadCertificateWarningSetting'
            WHEN SE.BehaviorRegistryModifyFirewallPolicy = 1 THEN 'ModifyFirewallPolicy'
            WHEN SE.BehaviorRegistryModifyInternetZoneSettings = 1 THEN 'ModifyInternetZoneSettings'
            WHEN SE.BehaviorRegistryModifyIntranetZoneBrowsingNotificationSetting = 1 THEN 'ModifyIntranetZoneBrowsingNotificationSetting'
            WHEN SE.BehaviorRegistryModifyLUASetting = 1 THEN 'ModifyLUASetting'
            WHEN SE.BehaviorRegistryModifyRegistryEditorSetting = 1 THEN 'ModifyRegistryEditorSetting'
            WHEN SE.BehaviorRegistryModifyRunKey = 1 THEN 'ModifyRunKey'
            WHEN SE.BehaviorRegistryModifySecurityCenterConfiguration = 1 THEN 'ModifySecurityCenterConfiguration'
            WHEN SE.BehaviorRegistryModifyServicesImagePath = 1 THEN 'ModifyServicesImagePath'
            WHEN SE.BehaviorRegistryModifyTaskManagerSetting = 1 THEN 'ModifyTaskManagerSetting'
            WHEN SE.BehaviorRegistryModifyWindowsSystemPolicy = 1 THEN 'ModifyWindowsSystemPolicy'
            WHEN SE.BehaviorRegistryModifyZoneCrossingWarningSetting = 1 THEN 'ModifyZoneCrossingWarningSetting'
      END AS Action,
      LOWER(SE.Path_Target),
      LOWER(SE.FileName_Target),
      LOWER(SE.Path_Target + SE.FileName_Target) AS Destination,
      SE.LaunchArguments_Target AS TLA,
      SE.HashSHA256_Target
FROM
      dbo.WinTrackingEvents_P1 AS SE WITH(NOLOCK)
      INNER JOIN dbo.Machines AS MA WITH(NOLOCK) ON MA.PK_Machines = SE.FK_Machines
      INNER JOIN dbo.MachineModulePaths AS MP WITH(NOLOCK) ON MP.PK_MachineModulePaths = SE.FK_MachineModulePaths
      INNER JOIN dbo.Modules AS MO WITH(NOLOCK) ON MO.PK_Modules = MP.FK_Modules
      INNER JOIN dbo.FileNames AS FN WITH(NOLOCK) ON FN.PK_FileNames = MP.FK_FileNames
      INNER JOIN dbo.Paths AS PA WITH(NOLOCK) ON PA.PK_Paths = MP.FK_Paths
      INNER JOIN dbo.LaunchArguments AS LA WITH(NOLOCK) ON LA.PK_LaunchArguments = SE.FK_LaunchArguments__SourceCommandLine
WHERE PK_WinTrackingEvents > '%TRACKING%'
UNION
SELECT
      SE.PK_WinTrackingEvents,
      SE.EventUTCTime,
      MA.MacAddress as src_mac,
      MA.LocalIp as src_ip,
      MA.MachineName,
      LOWER(PA.Path),
      LOWER(FN.FileName),
      LOWER(PA.Path + FN.FileName) AS Source,
      MO.HashSHA256,
      LA.LaunchArguments AS SLA,
      CASE      
            WHEN SE.BehaviorFileOpenPhysicalDrive = 1 THEN 'OpenPhysicalDrive'
            WHEN SE.BehaviorFileReadDocument = 1 THEN 'ReadDocument'
            WHEN SE.BehaviorFileWriteExecutable = 1 THEN 'WriteExecutable'
            WHEN SE.BehaviorFileRenameToExecutable = 1 THEN 'RenameExecutable'
            WHEN SE.BehaviorProcessCreateProcess = 1 THEN 'CreateProcess'
            WHEN SE.BehaviorProcessCreateRemoteThread = 1 THEN 'CreateRemoteThread'
            WHEN SE.BehaviorProcessOpenOSProcess = 1 THEN 'OpenOSProcess'
            WHEN SE.BehaviorProcessOpenProcess = 1 THEN 'OpenProcess'
            WHEN SE.BehaviorFileSelfDeleteExecutable = 1 THEN 'SelfDelete'
            WHEN SE.BehaviorFileDeleteExecutable = 1 THEN 'DeleteExecutable'
            WHEN SE.BehaviorRegistryModifyBadCertificateWarningSetting = 1 THEN 'ModifyBadCertificateWarningSetting'
            WHEN SE.BehaviorRegistryModifyFirewallPolicy = 1 THEN 'ModifyFirewallPolicy'
            WHEN SE.BehaviorRegistryModifyInternetZoneSettings = 1 THEN 'ModifyInternetZoneSettings'
            WHEN SE.BehaviorRegistryModifyIntranetZoneBrowsingNotificationSetting = 1 THEN 'ModifyIntranetZoneBrowsingNotificationSetting'
            WHEN SE.BehaviorRegistryModifyLUASetting = 1 THEN 'ModifyLUASetting'
            WHEN SE.BehaviorRegistryModifyRegistryEditorSetting = 1 THEN 'ModifyRegistryEditorSetting'
            WHEN SE.BehaviorRegistryModifyRunKey = 1 THEN 'ModifyRunKey'
            WHEN SE.BehaviorRegistryModifySecurityCenterConfiguration = 1 THEN 'ModifySecurityCenterConfiguration'
            WHEN SE.BehaviorRegistryModifyServicesImagePath = 1 THEN 'ModifyServicesImagePath'
            WHEN SE.BehaviorRegistryModifyTaskManagerSetting = 1 THEN 'ModifyTaskManagerSetting'
            WHEN SE.BehaviorRegistryModifyWindowsSystemPolicy = 1 THEN 'ModifyWindowsSystemPolicy'
            WHEN SE.BehaviorRegistryModifyZoneCrossingWarningSetting = 1 THEN 'ModifyZoneCrossingWarningSetting'
      END AS Action,
      LOWER(SE.Path_Target),
      LOWER(SE.FileName_Target),
      LOWER(SE.Path_Target + SE.FileName_Target) AS Destination,
      SE.LaunchArguments_Target AS TLA,
      SE.HashSHA256_Target
FROM
      dbo.WinTrackingEvents_P0 AS SE WITH(NOLOCK)
      INNER JOIN dbo.Machines AS MA WITH(NOLOCK) ON MA.PK_Machines = SE.FK_Machines
      INNER JOIN dbo.MachineModulePaths AS MP WITH(NOLOCK) ON MP.PK_MachineModulePaths = SE.FK_MachineModulePaths
      INNER JOIN dbo.Modules AS MO WITH(NOLOCK) ON MO.PK_Modules = MP.FK_Modules
      INNER JOIN dbo.FileNames AS FN WITH(NOLOCK) ON FN.PK_FileNames = MP.FK_FileNames
      INNER JOIN dbo.Paths AS PA WITH(NOLOCK) ON PA.PK_Paths = MP.FK_Paths
      INNER JOIN dbo.LaunchArguments AS LA WITH(NOLOCK) ON LA.PK_LaunchArguments = SE.FK_LaunchArguments__SourceCommandLine
WHERE PK_WinTrackingEvents > '%TRACKING%' )

ORDER BY PK_WinTrackingEvents ASC
            </dataQuery>

            <trackingColumn>PK_WinTrackingEvents</trackingColumn>
     <maxTrackingQuery> SELECT MAX(PK_WinTrackingEvents) FROM dbo.WinTrackingEvents_P0</maxTrackingQuery>
         </query>
      </odbc>
   </collection>
</typespec>
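One detail worth noting: with standard NetWitness ODBC collection behaviour, the collector substitutes the last-seen value of the trackingColumn (initialised via maxTrackingQuery) for the %TRACKING% placeholder on each poll, so only new rows are returned. As a sketch, if the last collected event had a key of 123456 (value purely illustrative), the effective filter on the next poll becomes:

```
-- %TRACKING% is replaced at runtime with the last-seen tracking value,
-- so each poll returns only rows newer than the previous collection:
WHERE PK_WinTrackingEvents > '123456'
```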

This creates a log entry with a static format, delimited by a double pipe (||):

This makes it easy for us to create a new log parser.
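To see how such a record breaks apart, here is a minimal sketch that splits a ||-delimited line into named fields. The field labels simply follow the SELECT order of the query above and the sample line is fabricated for illustration:

```python
# Sketch: split a ||-delimited NWE tracking record into named fields.
# Field labels follow the SELECT order of the query above; the sample
# record below is made up for illustration.
FIELDS = [
    "trans_id", "event_time", "src_mac", "src_ip", "machine_name",
    "src_path", "src_filename", "source", "src_sha256", "src_args",
    "action", "dst_path", "dst_filename", "destination", "dst_args",
    "dst_sha256",
]

def parse_record(line: str) -> dict:
    """Map one double-pipe-delimited record to a field dict."""
    return dict(zip(FIELDS, line.split("||")))

sample = ("42||2020-05-01 10:00:00||00:0c:29:aa:bb:cc||10.0.0.5||WEB01||"
          "c:\\windows\\system32\\||cmd.exe||c:\\windows\\system32\\cmd.exe||"
          "abc123||/c ipconfig||CreateProcess||c:\\windows\\system32\\||"
          "ipconfig.exe||c:\\windows\\system32\\ipconfig.exe|| ||def456")
rec = parse_record(sample)
print(rec["action"])        # CreateProcess
print(rec["dst_filename"])  # ipconfig.exe
```

A fixed delimiter and a fixed column order are exactly what a static log parser header and message definition need, which is what we build next.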

 

2. Create a new Log Parser

For information on how to create a new log parser using the new Log Parser Tool, head over to the Log Parser Tool documentation. We need to create a new directory where the Log Decoder parsers are kept, and add our ini and xml parser files:

mkdir /etc/netwitness/ng/envision/etc/devices/nwe_tracking/

 

Here is the ini file that describes our parser: nwe_tracking.ini

DatabaseName=nwe_tracking

DisplayName=NetWitness Endpoint Tracking

DeviceGroup=

DeviceType=7104

 

And here is the Log Parser: v20_nwe_trackingmsg.xml - the meta keys were chosen to line up with where the data from Sysmon gets mapped, as shown here: Log - Sysmon 6 Windows Event Collection

<?xml version="1.0" encoding="UTF-8"?>
<DEVICEMESSAGES
        name="nwe_tracking"
        displayname="NetWitness Endpoint Tracking"
        group=""
        type="7104">

<VERSION
      xml="1"
      revision="1"
        device="2.0"/>

<HEADER
        id1="HDR1"
        id2="HDR1"
        messageid="STRCAT('NWEPMSG')"
        content="%nwe_tracking:&lt;trans_id&gt;||&lt;event_time&gt;||&lt;!payload:trans_id&gt;"/>

<MESSAGE
        id1="NWEPMSG"
        id2="NWEPMSG"
        eventcategory="1612000000"      content="&lt;trans_id&gt;||&lt;event_time&gt;||&lt;smacaddr&gt;||&lt;saddr&gt;||&lt;event_computer&gt;||&lt;directory&gt;||&lt;filename&gt;||&lt;parent_process&gt;||&lt;checksum&gt;||&lt;parent_params&gt;||&lt;category&gt;||&lt;directory&gt;||&lt;filename&gt;||&lt;process&gt;||&lt;params&gt;||&lt;checksum&gt;"/>

</DEVICEMESSAGES>

There should be 2 files in the new directory:

[root@RSAANZSCSA nwe_tracking]# pwd

/etc/netwitness/ng/envision/etc/devices/nwe_tracking

[root@RSAANZSCSA nwe_tracking]# ls -l

total 8

-rw-r--r--. 1 root root  96 Mar  9 10:01 nwe_tracking.ini

-rw-r--r--. 1 root root 761 Mar 10 02:59 v20_nwe_trackingmsg.xml

[root@RSAANZSCSA nwe_tracking]#

 

3. Add meta to table-map-custom.xml

This step can be done using the Web GUI, but since we're already on the command line we'll do it there. It's always a good idea to make a backup copy of the file first!

cp /etc/netwitness/ng/envision/etc/table-map-custom.xml /etc/netwitness/ng/envision/etc/table-map-custom.xml.old

 

Then edit the table-map-custom.xml file:

vi /etc/netwitness/ng/envision/etc/table-map-custom.xml

 

We can add the meta we are using that is not already set as persistent (flags="None") at the end of the file:

        <!-- NWE Tracking Data -->

<mapping envisionName="smacaddr" nwName="eth.src" flags="None" format="MAC" envisionDisplayName="SourceMacAddress" nullTokens="Unknown"/>
<mapping envisionName="checksum" nwName="checksum" flags="None"/>
<mapping envisionName="parent_params" nwName="parent.params" flags="None"/>
<mapping envisionName="process" nwName="process" flags="None"/>
<mapping envisionName="parent_process" nwName="parent.process" flags="None"/>
<mapping envisionName="params" nwName="params" flags="None"/>
<mapping envisionName="directory" nwName="directory" flags="None"/>
<mapping envisionName="category" nwName="category" flags="None"/>

Now that we've finished the modifications for the Log Collector and Log Decoder, restart those services so that the changes get loaded.

 

4. Add meta for indexing to index-concentrator-custom.xml

Again, you can do this in the GUI, but since we're on the command line already we'll do it there. Just make sure you switch to your Concentrator first! (I'm on a hybrid.) Again - make a backup first:

cp /etc/netwitness/ng/index-concentrator-custom.xml /etc/netwitness/ng/index-concentrator-custom.xml.old

 

Then edit the file:

vi /etc/netwitness/ng/index-concentrator-custom.xml

 

Add the new meta to index at the end of the file - you may need to add more keys depending on your existing index settings:

<!-- NWE Tracking Data -->
<key description="Checksum" format="Text" level="IndexValues" name="checksum" valueMax="1000000" defaultAction="Open"/>
<key description="Parent Process" format="Text" level="IndexValues" name="parent.process" valueMax="1000000" defaultAction="Open"/>
<key description="Parent Process Parameters" format="Text" level="IndexValues" name="parent.params" valueMax="1000000" defaultAction="Open"/>
<key description="Process Parameters" format="Text" level="IndexValues" name="params" valueMax="1000000" defaultAction="Open"/>
<key description="Category" format="Text" level="IndexValues" name="category" valueMax="1000000" defaultAction="Open"/>

Restart the Concentrator service so that the changes get loaded.

 

5. Configure a new ODBC DSN Definition

Now we can switch to the GUI for our configuration. Go to your Log Collector Config page, and create a new DSN. 

 

Enter the details to connect to your NWE Database (do not use a template) and click Save:

 

6. Configure a new ODBC Event Collector

On the Log Collector Config page, create a new ODBC Event Category by selecting our new nwe_tracking source from the list:

 

Now add a new Event Source and enter the details for your NWE SQL Database:

 

Click Test Connection to see that it all works ...

 

If it's not turned on already, start the ODBC collection method (and set it to auto-start). Now you should be collecting NWE Tracking events! Run a query for device.type = 'nwe_tracking' to see:

 

 

The remaining steps go through ways to use the NWE Tracking data.

 

7. Configure a Meta Group to show the NWE Meta in Investigations

If you have a favourite Meta Group you use, just add these Meta keys to it. Otherwise, create a new Meta Group called NW Endpoint Tracking. Here's what I have in mine:

 

Note - I have all my Meta Keys set to open for testing purposes. Best Practice is to set did to open, and all other keys to closed. This gives better performance with large datasets by not sending 22 queries to the Concentrator at the same time.

Here's what you should be able to see:

 

By mapping the process name into the filename meta key, the data will trigger any feeds that are looking for matches on filename. The Investigation and Hunting feeds match this data:

8. Configure a new Column Group

The default view for reviewing logs in the Event Viewer is very simple:

 

We can change this view to show the meta extracted from our NWE Tracking logs. Create a new Column Group:

 

Note that you can change the "Display Name" to something you like - this will be used for the column heading:

 

9. Configure Report Engine Rules and Charts

Use the new device.type = 'nwe_tracking' meta to create rules to use for reports and charts. Here's a rule to query on the Source Process (parent.process):
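As a sketch, the rule body amounts to a simple summarisation over parent.process (the rule name and exact clause layout are up to you):

```
Select:   parent.process, count(parent.process)
Where:    device.type = 'nwe_tracking'
Group By: parent.process
```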

 

Results:

 

We can then use the rule as a basis for a chart:

 

 

10. Create an RSA NetWitness Endpoint Dashboard

Once you create the charts that you want, you can create a new Dashboard to keep track of your environment. Simply create a new Dashboard, and add your charts as Dashlets using "Reports Realtime Chart".

 

All the files mentioned in this post are available for download in the zip below.

 

Happy Hunting!

 

Thanks to Rui Ataide & Eric Partington for their contributions to this integration.

The new Investigation Data Model (community.rsa.com/docs/DOC-62313) and Hunting Pack (community.rsa.com/docs/DOC-62301) with the associated Hunting Guide (community.rsa.com/docs/DOC-62341) provide a new way for analysts to interact with their data and hunt for threats. The attached PDF provides a summary of the key points, and what changes you need to make to your RSA NetWitness deployment to make the most of the new content. Happy Hunting!

EDIT 20161214: Fixed a typo on page 21. Thanks Jim!

If you haven't yet deployed the content behind the new Hunting Pack and Investigation Model, go here first and follow the steps:

The new Investigation Model provides a fantastic way to organise the indicators and metadata produced by NetWitness into a way for analysts to easily interact with their data. The four Investigation Categories - Threats, Assurance, Operations, & Identity - provide the basis for defining Investigation Context for indicators.

 

The Hunting Guide and its associated Hunting Pack provide new Analysis meta keys that give Threat Hunters an operational workflow based on Session Analysis, Service Analysis, and File Analysis. They also introduce new Compromise meta keys for organising indicators into Indicators of Compromise, Behaviors of Compromise, and Enablers of Compromise. These new meta keys should be added to your favourite metagroups for Investigations. They can also be used for Charts and Dashboards.

 

The attached zip file contains Rules and Charts that can be used to build Hunting and Investigation Dashboards. Simply import the zip file into the Charts section of the Report Engine, enable each chart and make sure it is pointing at the right Data Source (your Concentrator or Broker), then create some dashboards. Here's a suggestion:

 

Investigation Dashboard that uses the Investigation Category and Investigation Context meta keys:

 

Hunting Analysis Dashboard that uses the File Analysis, Service Analysis and Session Analysis meta keys:

 

And Hunting Compromise Dashboard that uses the Indicators of Compromise, Enablers of Compromise and Behaviors of Compromise meta keys:

 

Happy Hunting!

There are many techniques for hunting for advanced threats. One of my favourites is reviewing outbound traffic to countries where you would not expect to see normal business traffic. On a recent engagement with a customer, I was examining traffic to the Russian Federation, where I pivoted on traffic that had a POST action:


Looking through the hostnames associated with this traffic, I saw an interesting hostname: aus-post.info.

This hostname appears to be an attempt to look like the legitimate site of Australia Post - the national postal service of Australia.

I thought it would be strange for Australia Post (auspost.com.au) to outsource their parcel tracking system to a site in Russia, so I did some further digging. Viewing the session details I could see a zip file being transferred as part of the session:


This piqued my interest – why would there be a download of a zip file from what looked to be a parcel tracking website?

To find out more about this website and what appeared to be a malware dropper, I loaded the URL into the ThreatGrid portal to do some dynamic analysis in a safe environment using the ThreatGrid Glovebox.

A fairly legitimate-looking site using a CAPTCHA test (albeit a very weak one) loaded into the browser, waiting for input.


Looking at the sessions in my live customer environment I could confirm that the user did in fact enter the code on the website:


After I replicated the CAPTCHA entry within the ThreatGrid system, my download began.


Firefox checks the file for viruses


All good!


Opening the zip revealed a single file: Information.exe


On the glovebox system within ThreatGrid, the file had a regular application icon; on my desktop, however, it had a different-looking icon:

As per usual, the exe does nothing exciting when it executes … just the hourglass.


According to the ThreatGrid report, the malware installs in the background, and then downloads images and other files from a remote website.  In addition, the IP address 178.89.191.130 is used for probable command and control over SSL.


Looking at this traffic in Security Analytics, we can see it is using a self-signed certificate for 'Mojolicious':


And here is the traffic pattern of the C2 traffic observed in the Security Analytics Timeline:


When we reached out to Australia Post they informed us they had been tracking similar hostnames to the one used by this threat. Australia Post has published their own updated information on this scam:

Email scam alert Feb 2014 - Australia Post

Current scams, phishing attacks and frauds - Australia Post

It has also been reported that similar, earlier versions of this scam have resulted in the download and installation of CryptoLocker:

Australia Post Parcel Emails Pack Deadly CryptoLocker Virus - Channel News


To hunt for instances of this in your environment look for:

 

User entered CAPTCHA details on Downloader site:

     alias.host = 'aus-post.info' && action = 'post','put'

 

Command & Control hostname:

     alias.host='save-img-serv.ru'

 

SSL C2 traffic:

     risk.suspicious = 'ssl certificate self-signed' && ssl.ca = 'mojolicious'

 

Destination IP addresses for downloader:

     ip.dst = '194.58.42.11'

 

Destination IP address for C2:

     ip.dst = '178.89.191.130'
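If you prefer a single pivot covering all four network indicators, the hostnames and addresses can be combined in one query (NetWitness treats a comma-separated value list as an OR within a key, and || as an OR across clauses):

```
alias.host = 'aus-post.info','save-img-serv.ru' || ip.dst = '194.58.42.11','178.89.191.130'
```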

 

As @Fielder would say - Happy Hunting!
