
Postman for NetWitness

Posted by Josh Randall, May 17, 2020

If you've ever done any work testing against an API (or even just for fun), then you've likely come across a number of tools that aim to make this work (or fun) easier.

 

Postman is one of these tools, and one of its features is the ability to import and export collections of API methods, enabling individuals to begin using those APIs much more easily and quickly than if, say, they only had a bunch of docs to get them started.

 

As NetWitness is a platform with a number of APIs and a number of docs to go along with them, a Postman collection detailing the uses, requirements, and options of these APIs should (I hope) be a useful tool that individuals and teams can leverage to enable more efficient and effective use of the NetWitness platform, as well as of any other tool you may want to integrate with NetWitness via its APIs.

 

With that in mind, I present a Postman Collection for NetWitness.  This includes all the Endpoint APIs, all the Respond APIs, and the more commonly used Events (A.K.A. SDK) APIs --> Query, Values, Content, and Packets. Simply import the attached JSON file into Postman, fill in the variables, and start API'ing.

 

A few notes, tips, and how-to's....

  • upon importing the collection, the first thing you should do is update the variables to match your environment

  • the rest_user / rest_pass variables are required for the Endpoint and Respond API calls
    • the NW account you use here can be created/managed in the NW UI in Admin/Security --> Users & Roles tabs
    • the role assigned to the account must have the integration-server.api.access permission, as well as any underlying permissions required to fulfill the request
    • e.g.: if you're querying Endpoint APIs, you'll need integration-server.api.access as well as endpoint-server permissions
  • the svc_user / svc_pass variables are required for the Events API calls
    • the NW account you use here can be created/managed in the NW UI in Admin/Services --> <core_service> --> Security --> Users & Roles tabs
    • the role assigned to the account must have the sdk.content, sdk.meta, and sdk.packets permissions, as well as any additional permissions necessary to account for Meta and Content Restriction settings you may be using
  • every Respond and Endpoint call will automatically create and update the accessToken and refreshToken used to authenticate its API call
    • so long as your rest_user and rest_pass variables are correct and the account has the appropriate permissions to call the Respond and/or Endpoint node, there is no need to manually generate these tokens
    • that said, the API calls to generate tokens are still included so you can see how they are being made
  • several of the Endpoint APIs, when called, will create and update variables used in other Endpoint APIs
    • the first of these is the Get Services call, which lists all of the endpoint-server hosts and creates variables that can be used in other Endpoint API calls
      • the names of these variables will depend on the names of each service as you have them configured in the NW UI
    • the second of these is the Get Hosts call, which lists all of the endpoint agents/hosts reporting to the queried endpoint-server and creates a variable of each hostname that can be used in other Endpoint API calls

      • this one may be a bit unwieldy for many orgs, though: if you have 2,000 agents installed, this will create 2,000 variables (one per host), and the same goes for 20,000 or 200,000
      • you may not want all those hostname variables, so you can adjust how many get created, or disable the behavior altogether, by modifying, commenting out, or deleting the JavaScript code in the Tests section of the Get Hosts call
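If you do want to cap the number of hostname variables rather than remove the logic entirely, the idea is simple enough to sketch. The collection's actual logic lives in JavaScript in the Tests tab; the Python below is only an illustration of the capping approach, and the variable names are hypothetical:

```python
# Hypothetical sketch of the capping logic; the collection's real code is
# JavaScript in the Tests tab of the Get Hosts call.
MAX_HOST_VARS = 100  # assumed cap - tune for your environment

def build_host_variables(hosts, cap=MAX_HOST_VARS):
    """Return one variable per hostname, truncated to `cap` entries."""
    variables = {}
    for host in hosts[:cap]:
        variables[f"host_{host}"] = host  # mirrors the one-variable-per-host pattern
    return variables

host_vars = build_host_variables(["WIN10-01", "WIN10-02", "UBUNTU-01"], cap=2)
```

Applying the same cap in the collection is a one-line change to the loop in the Tests script.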

Any questions, comments, concerns, suggestions, etc...please let me know.

We are back again with another C2 framework called Chaos: https://github.com/tiagorlampert/CHAOS. CHAOS is a PoC written in Go and comes with a healthy number of features for controlling remote endpoints. It supports agents for Windows, Mac, and Linux; however, feature availability differs depending on the platform the agent is deployed on. This C2 only allows control of one agent, and all communication is over TCP sockets. More information on this C2 can be found over at the C2 Matrix: C2Matrix - Google Sheets.

 

This C2 reminds me a lot of one we previously covered called HARS: Using RSA NetWitness to Detect HTTP Asynchronous Reverse Shell (HARS) - so check that post out as well if you haven't already.

 

 

The Attack

As always, we're keeping this super simple to place more of a focal point on the C2 traffic itself, rather than the delivery mechanism. So to deploy the agent, we simply copy the binary to the victim endpoint and execute it from the C:\PerfLogs\ directory:

 

After execution, we see our successful connection back to Chaos as is evident from the [+] Connected! message displayed:

 

Now that we have our connection, we can use one of the available built-in features to set up persistence for Chaos, ensuring it starts up again should the system reboot:

 

From here, we can start to execute commands to get information regarding the endpoint we are controlling:

 

 

 

The Detection Using NetWitness Network

Chaos has no direct support for HTTP and all communication between the C2 and the agent is over TCP sockets. As there is no structure to the traffic being generated, it is not possible to classify it under a specific service, so NetWitness tags this traffic as service = 0 - otherwise known as OTHER. The service OTHER is often overlooked as an area for hunting but should still be analysed by defenders to look for malicious traffic using proprietary protocols, or TCP sockets like Chaos. From the below, we can see that there are some meta values of interest for the Chaos C2 traffic that would stand out during the hunting process:

NOTE: The unknown service over http port meta value is interesting here, as attackers often use typical ports for web browsing to get around firewall policies that block everything but web access for endpoints.

 

Drilling into the possible base64 windows shell meta value, we can see the structure of the Chaos C2 traffic. The commands are sent as typed to the agent, but the output from the command is Base64 encoded and sent back to the C2, hence why NetWitness generated the possible base64 windows shell meta value:

 

This gives us the ability to easily observe the commands being executed, and to Base64 decode the output of the commands directly within the UI:
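The decoding the UI performs here is plain Base64, so it can be reproduced anywhere. A minimal sketch (the sample string is illustrative, not taken from this capture):

```python
import base64

# Chaos sends commands in the clear but Base64-encodes the command output;
# decoding the response body recovers the original text.
encoded_output = base64.b64encode(b"DESKTOP-1\\victim").decode()

decoded = base64.b64decode(encoded_output).decode()
```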

 

For this type of C2 there is no need to create additional detections for NetWitness Network; the detection is already there and just requires that defenders triage traffic of type OTHER where interesting meta values are generated, such as the ones shown here.

 

 

The Detection Using NetWitness Endpoint

As always, when these C2 frameworks are deployed, they have to execute and do things in order to achieve their end goal, and with NetWitness Endpoint it is easy to detect these actions. Below are the meta values generated from the small number of commands that were executed through the C2:

 

  • chaos > whoami - gets current username
  • chaos > tasklist - enumerates processes on local system
  • chaos > ipconfig - enumerates ip configuration
  • chaos > hostname - gets hostname
  • chaos > persistence_enable - runs registry tool, runs xcopy.exe, modifies run key, modifies registry using command-line registry tool

 

Opening the Events view for the meta values of interest, we can get a better view of all the commands being executed:

 

 

The Detection Using NetWitness Logs

In order to better identify suspicious activities taking place on the endpoint, we have chosen to install Sysmon, and to include the detections available through its logging. More information surrounding Sysmon can be found at the following link: https://docs.microsoft.com/en-us/sysinternals/downloads/sysmon. The collection of these logs is performed via the NetWitness Endpoint agent itself and more detail on how that was set up can be found here: https://community.rsa.com/docs/DOC-101743#createWinLog.

 

There are multiple starting points for using Sysmon to find malicious activities, but for now we are going to start with the following logic, which detects whoami being executed on a system. This is normally evidence of attacker activity after successful exploitation or privilege escalation, as it is not common for most users to execute it regularly:

(event.source = 'microsoft-windows-sysmon') && (service.name ends 'whoami.exe') && (reference.id = '1')

NOTE: The reference.id = '1' shown in the above query is for process creations.

 

Upon executing this query in NetWitness, we can see we get hits for the whoami command being executed from the C:\PerfLogs\ directory:

 

This is a suspicious directory for processes to be created from, so we can slightly modify our query to look for all process execution out of the C:\PerfLogs\ directory:

(event.source = 'microsoft-windows-sysmon') && (directory = 'c:\\perflogs\\') && (reference.id = '1')

 

Here we can see a suspect executable named chaos.exe running from the C:\PerfLogs\ directory, and we can also see that there are a number of other suspicious commands being executed from this directory as well:

 

 

 

We could also create an application rule that identifies the persistence that was created by looking for edits being made to the \CurrentVersion\Run key using the following logic:

(event.source = 'microsoft-windows-sysmon') && (reference.id = '1') && (param contains 'reg  add hkcu\\software\\microsoft\\windows\\currentversion\\run')

 

NOTE: While we covered Sysmon as a free alternative to EDR, our recommendation would still be to use one, as Sysmon may require a considerable amount of configuration and tweaking, and will not provide as much capability or visibility as an EDR solution would. We covered it here just to offer an alternative for those that don't use EDR.

 

 

Conclusion

Chaos C2 is an easy-to-use framework that gives the attacker great control over the victim endpoint. It does not provide much in terms of obfuscation and does not attempt to blend in with normal traffic, so it should be an easy detection for defenders, whether you have NetWitness Network, Endpoint, or Logs. Just remember not to shy away from the traffic type OTHER when hunting through those packets!

Intro

Octopus was presented at Black Hat London 2019 by Askar. The GitHub page is available here. It is a pre-operation C2 for Red Teamers, based on HTTP/S and written in Python. This blog post will show the detection of Octopus (over HTTP) with NetWitness Endpoint and Network.

 

Scenario

The attacker sets up an HTTP listener in Octopus and generates an exe payload. He then builds a webpage in which he embeds the payload, and spreads the webpage through social media and email spam campaigns. The victim opens the webpage on his Windows 10 machine, and a pop-up message is immediately shown in the browser stating that the current version of the Adobe Flash plugin is outdated and needs to be updated to install the latest security patches. The victim clicks on the pop-up and installs the update, which infects his machine.

 

Part 1 -  Attack phase

Once Octopus is started, this is how the attacker creates a listener and generates the payload, in this case an exe payload (hta and powershell payloads are also an option):

 

 

In more detail, we have:

listen_http listen_ip port hostname interval page listener_name
generate_unmanaged_exe listener_name output_path

The attacker uses the popular ngrok tunneling service as a proxy; that is, once the victim machine is infected it will communicate with the address 4dcd8c6d.ngrok.io, which in turn creates a secure tunnel to the attacker's box.

 

Next, the attacker uses a technique known as browser hooking to embed the exe file into a webpage. To achieve this, the attacker used the BeEF framework. Explaining this whole process is out of the scope of this post, but if you are interested in knowing more about it you should have a look at the Autorun Rule Engine page on the BeEF GitHub.

 

The victim, using a Windows 10 machine, sees an interesting website about organic food on social media and clicks on the webpage:

 

 

As shown above, once the webpage is loaded a message pops up warning the user to install a new version of the Adobe Flash plugin which includes new security updates. Interestingly, the message also warns to ignore the missing certificate signature, claiming it is a known issue which Adobe is working on.

 

 

The victim then clicks on Install missing Plugins and then on Run, ignoring the signature warning as advised. Windows Defender is active but does not detect the exe file.

 

On the other side of the wall the attacker receives a connection to the listener.

 

To interact with the victim the attacker runs the following command:

interact 1

where 1 is the number of the session.

 

The attacker also runs some other commands such as "whoami", "quser", and "report". The latter is a built-in Octopus command which provides some additional information about the victim machine. After a little browsing within the victim machine's folders, the attacker also finds a file containing potentially sensitive information (TopSecret.txt) and downloads it using Octopus's download command.

 

Part 2 - Detection phase with the RSA NetWitness Platform

NetWitness Endpoint

The analyst receives an email alert about a high priority incident generated in the NetWitness Respond module so he starts investigating:

 

 

The incident is generated by the NetWitness Endpoint incident rule "High Risk Alerts: NetWitness Endpoint".  However, the rule originates from an App Rule which is part of a bundle content pack available in RSA Live. More information about this bundle is available here.

 

The App Rule condition is the following:

device.type = 'nwendpoint' && category = 'network event' && context = 'network.outgoing' && direction = 'outbound' && context != 'network.nonroutable' && context.src = 'file.unsigned' && dir.path.src = 'appdatalocal','appdataroaming'

and it basically alerts if an unsigned file initiated from the Windows AppData\Local or AppData\Roaming directory has made an outbound network connection. The alert in turn generates an incident, since it is marked as High Risk.
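For readers who want to reason through the rule's conditions, here is a rough offline equivalent evaluated against a single event record. This is an illustration only (field names follow the rule, values are made up), not how the Decoder actually evaluates app rules:

```python
# Illustration only: evaluating the app rule's conditions against one
# endpoint event record. Field names follow the rule; values are made up.
def unsigned_appdata_outbound(event):
    return (
        event.get("category") == "network event"
        and event.get("direction") == "outbound"
        and "network.nonroutable" not in event.get("context", [])
        and "file.unsigned" in event.get("context.src", [])
        and any(p in ("appdatalocal", "appdataroaming")
                for p in event.get("dir.path.src", []))
    )

alert = unsigned_appdata_outbound({
    "category": "network event",
    "direction": "outbound",
    "context": ["network.outgoing"],
    "context.src": ["file.unsigned"],
    "dir.path.src": ["appdatalocal"],
})
```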

 

It is apparent from the incident that the file adobe_flash_update.exe made a connection to 4dcd8c6d.ngrok.io, which is the name of the ngrok server the attacker uses to tunnel the connection to his machine. The fact that the file is unsigned and makes a connection to a website that does not belong to Adobe makes things extremely suspicious.

Drilling down into the events with NetWitness Endpoint and analyzing them in detail, the analyst also notices this:

 

 

which clearly shows that adobe_flash_update.exe spawned a few other processes, among which are whoami.exe and quser.exe - Windows utilities typically used by attackers for enumeration.

 

NetWitness Network

With the information retrieved from the incident, the analyst investigates further with NetWitness Network filtering by hostname:

 

The analyst notices some potentially malicious HTTP requests under the Service Analysis meta key. While analyzing these meta keys he finds the following event under the "http1.1 without user-agent header" meta value.

 

 

The above is the initial communication of the victim machine with the Octopus C2. Note that "home.php" in the GET request is the name the attacker used in the command to set up the listener we saw in the beginning. The response to the request contains a powershell payload that sets up the communication with the C2. We can see an AES key and its Initialization Vector, used to encrypt the communication. This structure looks very similar to the Ninja C2, described by my colleague Lee Kirkpatrick in another blog post available here.

 

After the agent/C2 communication has been set up, the next request is the "GET /login", where the encrypted communication is established:

 

 

Each further request is a beacon to the C2, and the analyst notices that the request includes the victim machine name "WINEP1" followed by a random 5-character name:

 

 

The below two requests show the command quser launched from the C2 in the previous steps and its response (the response is contained in a separate GET request):

 

Note that when the C2 requests something we see "/bills" in the GET request.

 

The below figure shows the decryption of the above strings using the powershell decryption function seen in the very first request (GET /home.php):

 

With the same process, the analyst was able to see other commands the attacker ran, but more importantly was able to see that the attacker exfiltrated a file named TopSecret.txt from the infected machine:

 

 

The beaconing pattern can also be observed, with 120-second intervals and the same size:

 

 

It is important to note the different destination IP addresses in the above figure. This is because ngrok resolves to different IP addresses in round-robin fashion.
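That fixed-interval pattern is exactly what makes beacons stand out. As a rough illustration (not a NetWitness feature), near-constant inter-arrival times between sessions to the same destination can be flagged like this, with timestamps in seconds and a made-up jitter tolerance:

```python
from statistics import pstdev

# Rough beacon heuristic: near-constant inter-arrival times between sessions
# to the same destination suggest automated check-ins.
def looks_like_beacon(timestamps, jitter_tolerance=5.0):
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # need a few intervals before the spread is meaningful
    return len(deltas) >= 3 and pstdev(deltas) <= jitter_tolerance

is_beacon = looks_like_beacon([0, 120, 240, 361, 480])  # roughly 120s apart
```

A real job would pull the timestamps from session meta and group them per source/destination pair before applying the check.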

 

Another interesting thing to note is that the URL parameters we saw in the GET requests can be customized via the Octopus profile.py file:

 

#!/usr/bin/python3

# this is the web listener profile for Octopus C2
# you can customize your profile to handle a specific URLs to communicate with the agent
# TODO : add the ability to customize the request headers

# handling the file downloading
# Ex : /anything
# Ex : /anything.php
file_receiver_url = "/messages"

# handling the report generation
# Ex : /anything
# Ex : /anything.php
report_url = "/calls"

# command sending to agent (store the command will be executed on a host)
# leave <hostname> as it with the same format
# Ex : /profile/<hostname>
# Ex : /messages/<hostname>
# Ex : /bills/<hostname>
command_send_url = "/view/<hostname>"

# handling the executed command
# Ex : /anything
# Ex : /anything.php
command_receiver_url = "/bills"

# handling the first connection from the agent
# Ex : /anything
# Ex : /anything.php
first_ping_url = "/login"

# will return in every response as Server header
server_response_header = "nginx"

# will return white page that includes HTA script
mshta_url = "/hta"

# auto kill value after n tries
auto_kill = 10

 

Lastly, while inspecting the network for C2 traffic, the analyst finds the following:

 

 

These are HTTP beacons. Requests are sent on port 3000 which is the default port the BeEF framework uses.

 

 

Looking at one of the sessions, the analyst sees it contains several requests like the one in the above screenshot. In the Referer field we can see the address of the phishing website used by the attacker, and the GET request contains the hook to the BeEF C2. The victim will be hooked to the C2 until he closes the browser. The attacker can leverage the hook to perform social engineering attacks like the fake Adobe Flash update, among many others.

 

Observations

A client-side attack vector was used to gain an initial foothold on the victim machine. Once the victim opened the legitimate-looking webpage, his browser was "hooked" to the attacker's BeEF C2. The attacker had also set an automatic rule that pushed a fake pop-up message suggesting the victim install Adobe Flash security updates. Once the victim installed the fake Adobe Flash update, the incident was created in the NetWitness Respond module because of the App Rule discussed earlier.

Threat actors usually use multiple techniques to distribute their malicious payloads. What would have happened if the user had downloaded the file onto his machine by a different means? The same incident would probably not have been generated in NetWitness, because that specific app rule relied on the fact that an unsigned file was started from the appdatalocal directory in Windows. However, even without the incident, the analysts would have identified suspicious network activity with NetWitness Network, such as the beaconing to the C2, as well as indicators of compromise and suspicious activities in NetWitness Endpoint. For example, the Behavior of Compromise meta key of NetWitness Endpoint would have shown the following values:

 

  • gets current username (1) - related to the whoami command
  • queries users logged on local system (1) - related to the quser command

 

The same applies if the attacker had set up an HTTPS listener instead of the HTTP one. In this case the analysts would not have been able to see the content of the communication between the C2 and the victim (unless there is an interceptor in place), but they would have noticed the beaconing and the indicators of compromise in NetWitness Endpoint.

 

Conclusions

Octopus is quite new but shows similarities to other recent C2 frameworks. It is customizable and modular (external modules can be plugged in) and can run over both HTTP and HTTPS. This article showed that the NetWitness Suite can be of great use when it comes to C2 detection, with the combination of NetWitness Network and Endpoint providing a very granular level of visibility. In the case of HTTPS, an SSL/TLS interceptor would help provide more visibility, but without it NetWitness can still identify C2 patterns and indicators of compromise that will help analysts detect potentially malicious activities.

Following on from my last post, which focused on analysing web server logs (ASD & NSA's Guide to Detect and Prevent Web Shell Malware - Web Server Logs), this time we are going to look at the network-based indicators from the ASD & NSA guide: Detect and prevent web shell malware | Cyber.gov.au.

There are already some fantastic resources posted by my colleague from the IR team Lee Kirkpatrick and the NetWitness product Documentation team that provide great details on the different ways we can detect web shells using NetWitness for network visibility:

The focus of this post is taking the indicators published by the ASD & NSA in their guide, and showing how to use them in NetWitness.

Not all indicators are created equally, and this post should not be taken as an endorsement by this author or RSA on the effectiveness and fidelity of the indicators published by the ASD & NSA.

Now that’s out of the way, let’s take a look at the network indicators.

Web Traffic Anomaly Detection

This is really focused on the URIs being accessed on your servers and the user agents that are being used to access those pages. An easy way to detect new user agents, or new files being accessed on your website (depending on how dynamic your content is) is to use the show_whats_new report action. The show_whats_new action will filter your results from a query to only show new values that did not appear in the database prior to the timeframe of your report. Here’s an example from my lab – if I run a report to show all user agents seen in the last 6 hours I get 20 user agents in my report:

Using show_whats_new in the THEN clause of the rule filters the results and shows me only 2 user agents (which makes sense as my chrome browser recently updated):

Obviously just because a user agent is new doesn’t automatically mean it is a web shell, as web browsers get updates all the time. But it is another method for highlighting anomalies and changes in your environment.
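Conceptually, show_whats_new is a set difference between the report window and the prior baseline. A quick offline equivalent over two exported value lists (the user agent strings are illustrative):

```python
# show_whats_new, conceptually: keep only values absent from the baseline
# period. User agent strings here are illustrative samples.
baseline = {"Mozilla/5.0 (Windows NT 10.0) Chrome/80.0", "curl/7.64.1"}
current = {"Mozilla/5.0 (Windows NT 10.0) Chrome/81.0", "curl/7.64.1"}

new_values = sorted(current - baseline)
```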

One of the common techniques we use in the IR team is to review the HTTP request methods used against a server - sessions that do not follow the pattern of normal user web browsing are a good indicator for web shells. Normal user-generated browsing consists of GET requests followed by POSTs. Sessions that have a POST action with no GET request and no referrer present are a good indicator, as Lee covers in his post mentioned above.
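That pattern is easy to prototype offline as well. The sketch below flags sessions whose only method is POST with no referer, over simplified session records (the field names are illustrative, not NetWitness meta keys):

```python
# Flag sessions whose only request method is POST and which carry no referer -
# the web-shell-like pattern described above. Records are simplified samples.
def suspicious_post_sessions(sessions):
    flagged = []
    for session in sessions:
        if session["methods"] == ["POST"] and not session.get("referer"):
            flagged.append(session["id"])
    return flagged

hits = suspicious_post_sessions([
    {"id": 1, "methods": ["GET", "POST"], "referer": "http://site/"},
    {"id": 2, "methods": ["POST"], "referer": None},
])
```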

Signature-Based Detection

As the ASD & NSA guide states itself, network signatures are an unreliable way to detect web shell traffic:

From the network perspective, signature-based detection of web shells is unreliable because web shell communications are frequently obfuscated or encrypted. Additionally, “hard-coded” values like variable names are easily modified to further evade detection. While unlikely to discover unknown web shells, signature-based network detection can help identify additional infections of a known web shell.

The guide nevertheless includes some Snort rules to detect network communication from common, unmodified web shells:

RSA NetWitness has always had the ability to use Snort rules on the Network Decoder, and that capability was recently enhanced in the 11.3 release, which added the ability to map meta data generated by the Snort parser to the Unified Data Model. For the steps required to install and configure Snort rules on your Network Decoder, follow these guides for details and more information:

Here’s the short version:

  1. Create a new folder on your Network Decoder /etc/netwitness/ng/parsers/snort
  2. Create a snort.conf file in that directory. Here’s a simple configuration to get you started:
  3. Copy the rules from the ASD & NSA guide into a file called webshells.rules
    Mitigating-Web-Shells/network_signatures.snort.txt at master · nsacyber/Mitigating-Web-Shells · GitHub 

  4. Go to the Explore view for your Decoder, go to decoder > parsers > config, and add Snort="udm=true" to the parsers.options field

  5. While in Explore view, right click on decoder > parsers, select properties, then choose reload and hit Send to reload the parsers and activate your Snort rules.

Here we can see the Snort rules successfully loaded and available on the Network Decoder:

Unexpected Network Flows

The ASD & NSA guide suggests monitoring the network for unexpected web servers, and provides a Snort signature that simply alerts when a node in the targeted subnet responds to an HTTP(S) request, by looking for traffic from port 80 or 443 with a source IP address in the given subnet:

alert tcp 192.168.1.0/24 [443,80] -> any any (msg:"potential unexpected web server"; sid:4000921;)

Rather than updating this rule with the right subnet details for your environment (details that would only be usable by this rule), we can do this natively in NetWitness using the Traffic Flow parser and its associated traffic_flow_options file to label subnets and IP addresses. Using the traffic_flow_options file to do this labelling means the resulting meta can be used by other parsers, feeds, and app rules as well.

For more details on the Traffic Flow parser, go here: Traffic Flow Lua Parser 

To configure your traffic_flow_options file, start with the subnet or IP addresses of known web servers and add them as a block in the INTERNAL section of the file, and label them “web servers”. When traffic is seen heading to those servers as a destination, the meta ‘web servers dst’ will be registered under the Network Name (netname) meta key.

Once the traffic_flow_options file is configured, we can translate the Snort rule from the guide into an app rule that will detect any HTTP or HTTPS traffic, or traffic destined to port 80 or 443, to any system that has not been added to our definition for web servers:

(service = 80,443 || tcp.dstport = 80,443) && netname != 'web servers dst'

Conclusion

That covers the network based indicators included in the ASD & NSA guide. For more techniques to uncover web shell network traffic, check out the pages linked at the top of this blog, as well as the RSA IR Threat Hunting Guide for NetWitness: 

Stay tuned for the next part where we take a look at the endpoint based indicators from the guide, and see how to apply them using NetWitness Endpoint.

 

Happy Hunting!

Introduction

The Australian Signals Directorate (ASD) & US National Security Agency (NSA) have jointly released a useful guide for detecting and preventing web shell malware. If you haven't seen it yet, you can find it here:

The guide includes some sample queries to run in Splunk to help detect potential web shell traffic by analysing IIS and Apache web logs. “That’s great, but how can we do the same search in NetWitness Logs?” I hear you ask! Let’s take a look.

Web Server Logging

If you are already collecting IIS and Apache logs – or any web server audit logs for that matter – you’ve probably already made some changes to your configuration to suit your needs to get the data that you want. To run the queries suggested by the guide, we need to make a change to the default log parser settings for IIS & Apache logs. The default log parser setting for IIS & Apache does not save the URI field as meta that we can query – it is parsed at the time of capture and available as transient meta for evaluation by feeds, parsers, & app rules, but it is not saved to disk as meta. To collect the data needed to run these queries, we are going to change the setting for the meta from “Transient” to “None”.

For more information on how RSA NetWitness generates and manages meta, go here: Customize the meta framework 

The IIS and Apache log parsers both parse the URI field from the logs into a meta key named webpage. The table-map.xml file on the Log Decoder shows that this meta value is set to “Transient”.

To change the way this meta is handled, take a copy of the line from the table-map.xml, paste it into the table-map-custom.xml, and change the flags="Transient" setting to flags="None":

<mapping envisionName="webpage" nwName="web.page" flags="None" format="Text"/>

Hit apply, then restart the log decoder service for the change to take effect. Remember to push the change to all Log Decoders in your environment.

Next, we want to tell the Concentrator how to handle this meta. Go to your index-concentrator-custom.xml file and add an entry for this new web.page meta key:

<key description="URI" format="Text" level="IndexValues" name="web.page" defaultAction="Closed" valueMax="10000" />

I set the display name for the key as URI – but you can set it to whatever makes sense for you. I also set a maximum value count of 10,000 for the key - you should use a value that makes sense for your website(s) and environment and review for any meta overflow errors.

Hit apply, then restart the concentrator service for the change to take effect. Remember to push the change to all Concentrators in your environment (Log & Network), especially if you use a Broker.

Now as you collect your web logs, the web.page meta key will be populated:

You may also want to change the index level for the referer key. By default it is set to IndexKey, which means a query that tests if a referer exists or doesn’t exist will return quickly, but a search for a particular referer value will be slow. If you find yourself doing a lot of searches for specific referers you can change this setting to IndexValues as well.

Optionally, you can add the web.page meta key to a meta group & column group so you can keep track of it in Navigate & Events views. I’ve attached a copy of my Web Logs Analysis meta group and column group to the end of this post.

Now we are ready for the queries themselves. While at first glance they seem pretty complicated, they really aren’t. Plus with the way NetWitness parses the data into a common taxonomy, you don’t need different queries for IIS & Apache – the same query will work for both!

Query 1 – Identify URIs accessed by few user agents and IP addresses

For this query, we need to use the countdistinct aggregation function to count how many different user agents and how many different IP addresses accessed the pages on our website.

For more information on NWDB query syntax, go here: Rule Syntax 
SELECT web.page, countdistinct(user.agent), countdistinct(ip.src)
WHERE device.class = 'web logs' && result.code begins '2'
GROUP BY web.page
ORDER BY countdistinct(user.agent) ASCENDING

Query 2 – Identify user agents uncommon for a target web server

This query simply shows the number of times each user agent accesses our web server. We can see this very easily by just using the Navigate interface and setting the result order to Ascending:

Here is the query to use in the report engine rule:

SELECT user.agent
WHERE device.class = 'web logs' && result.code begins '2'
GROUP BY user.agent
ORDER BY Total Ascending

Query 3 – Identify URIs with an uncommon HTTP referrer

This query is a bit more complicated – we want to show referrers that do not access many URIs, but also want to see how often they access each URI. This query could need some tuning if you have pages on your site that are typically only accessed by following a link from a previous page, or even an image file that is only loaded by a single page.

Our select statement will list the referer followed by the number of distinct URIs that referer is seen for (sorted ascending – we’re interested in uncommon referers), then it will list those URIs where it is seen as the referer, followed by the number of hits for each (sorted descending):

SELECT referer, countdistinct(web.page), distinct(web.page), count(referer)
WHERE device.class = 'web logs' && result.code begins '2'
GROUP BY referer
ORDER BY countdistinct(web.page) Ascending, count(referer) Descending

Query 4 – Identify URIs missing an HTTP referrer

This is an easy one to finish off – we’re interested in events where there is no referer present. To refine the results we want to filter events that are hitting the base of the site ‘/’ as this could easily be someone typing the URL directly into their browser.

SELECT web.page
WHERE device.class = 'web logs' && (referer !exists || referer = '-') && web.page != '/' && result.code begins '2'
GROUP BY web.page
ORDER BY Total Descending

These rules and a report that includes the rules can be found in the attached files.

Conclusion

Let me know in the comments below how these queries work in your environment, and if you have suggestions for improvements. The goal of this post was to quickly convert the queries included in the guide published by ASD & NSA. Stay tuned for more posts that show how we can improve the fidelity of these queries, and also how to utilise the endpoint and network indicators also found in the ASD & NSA guide.

 

Happy Hunting!

Shout out to @Casey Switzer, @Josh Randall & @Larry Hammond.  Without their help, the lab, configuration and operational considerations would not be possible.

 

Last year in RSA NetWitness 11.3, a new integration was introduced to allow NetWitness to integrate with RSA SecurID to populate high risk users from incidents in Respond.

 

@Josh Randall covered this in his blog post here: Examining Threat Aware Authentication in v11.3

 

At the time, SecurID could only add a user to the list based on an email address.  While this is good for email based alerts, the majority of Linux and Windows logs do not contain that value.

 

An easy workaround for this is to configure a recurring feed (see Decoder: Create a Custom Feed) that maps sAMAccountName to email address. A simple PowerShell script to export sAMAccountName and email address from Active Directory should suffice. When you create an incident based on sAMAccountName, the email address is present in the session's metadata, allowing the Threat Aware Authentication integration to work. I used several callback keys to ensure I covered the various conditions to capture the username.
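As a rough sketch of what building such a feed file might look like (the attribute names, sample users, and output filename here are hypothetical, not the actual script used in this lab):

```python
import csv

# Hypothetical rows as they might come out of an AD export
# (e.g. a PowerShell Get-ADUser dump or an LDAP query).
ad_users = [
    {"sAMAccountName": "bcline", "mail": "brett.cline@lab.internal"},
    {"sAMAccountName": "jdoe",   "mail": "jane.doe@lab.internal"},
]

# Write a two-column CSV suitable for a recurring feed: column 1 is
# the callback value (the username seen in logs), column 2 is the
# email address to write into matching sessions as meta.
with open("ad_user_email_feed.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for user in ad_users:
        writer.writerow([user["sAMAccountName"], user["mail"]])
```

The resulting CSV would then be published for the recurring feed to pick up, with the username column as the callback key and the email column mapped to an email meta key.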

 

 AdUserEmailAddress Feed

 

Once this feed is live, you will see email.src and email.all metadata on any event containing one of the meta keys above.  In this case it was a failed logon:

Email Meta

 

As of April 2020, RSA SecurID accepts either an email address or a username for Threat Aware Authentication. To support this, NetWitness 11.4.1 introduced a Respond configuration option to choose which field to send to SecurID.  See Respond Config: Configure Threat Aware Authentication for more information.

 

This makes ad_username a great option; however, when you choose that value, you will lose the email_address integration.  A way around this is to build the inverse of the earlier feed to ensure the email address field is in your sessions.  For this blog, we will continue to use the existing feed and send email_address to SecurID.  I set my synchronization interval to 1 minute, but the default is 15 minutes.

 

Threat Aware Authentication Settings

Within the RSA SecurID Cloud Access Service, you will need to configure your Assurance Levels and Risk-Based Authentication policies.  I set my Assurance Levels to require Device Biometrics for High assurance, Approve for Medium, and to allow access at Low.  I set a simple policy which will be used for the SAML test.

Assurance Levels

Assurance Levels

 

Policy

 Threat Aware Policy

 

Rule set

Threat Aware Rules

We have a test user which will be used to demonstrate Threat Aware Authentication.  As you can see, Brett Cline is synchronized from lab.internal and is currently a low-risk user.

Low Risk User

 

When Brett navigates to an app, he is presented with a logon screen with his password:

Test App 

Since he is low risk, after a successful authentication with User ID and password, he is now logged in to the demo app.

App Success

 

We created a simple ESA rule to catch three failed logins and create an alert (ec_activity = 'Logon' and ec_outcome = 'Failure' three times within 3 minutes), and a corresponding Incident Rule to group these alerts and create a meaningful title.
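The thresholding logic behind that rule (three logon failures from the same user inside a sliding 3-minute window) can be sketched as follows. This is an illustrative Python model, not actual ESA/EPL syntax, and the event tuples are assumptions:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 180   # 3 minutes
THRESHOLD = 3          # number of failures that triggers an alert

def detect_failed_logons(events):
    """Yield (user, timestamp) whenever a user accumulates three logon
    failures within a sliding 3-minute window. Events are
    (time, user, activity, outcome) tuples, assumed ordered by time."""
    windows = defaultdict(deque)
    for ts, user, activity, outcome in events:
        if activity != "Logon" or outcome != "Failure":
            continue
        win = windows[user]
        win.append(ts)
        # Drop failures that have aged out of the window
        while win and ts - win[0] > WINDOW_SECONDS:
            win.popleft()
        if len(win) >= THRESHOLD:
            yield (user, ts)
            win.clear()   # reset so one burst fires a single alert

events = [
    (0,   "bcline", "Logon", "Failure"),
    (60,  "bcline", "Logon", "Failure"),
    (100, "bcline", "Logon", "Success"),
    (120, "bcline", "Logon", "Failure"),   # third failure inside 180s
]
print(list(detect_failed_logons(events)))  # [('bcline', 120)]
```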

Threat Aware Incident Rule

 

We simulated a few failed logins to create an incident:

Threat Aware Incident

 

Back in the SecurID Cloud Authentication Service, you can see that Brett has been added to the high risk users list:

Test User High Risk

 

 

Now when he logs into the app, he will be prompted for his user ID and password.

But due to being on the high risk users list, he will be required to approve via biometrics on his phone as per the policy set above:

 

Which will then lead to the successful authentication.

*** Note: The user will remain on the high risk users list until the incident is closed. ***

 

Additional Information:

 

If you are collecting logs from the Cloud Authentication Service, you will see the following meta keys:

And here is the corresponding event: 

Operational Thoughts:

Thanks to @Larry Hammond for some insight into operational considerations.  He and I spoke about how NetWitness has traditionally been a passive device that cannot and should not interfere with your network or operations.  With the addition of Threat Aware Authentication, a poorly crafted rule could force step-up authentication for many users, resulting in a disruption to business.  Follow good rule-building practices and test your rules before creating alerts.

 

This was the reasoning behind creating meaningful alerts in ESA: to ensure the NetWitness admins have a view of the incidents which resulted in adding someone to the high risk users list.

Although the RSA NetWitness platform gives administrators visibility into system metrics through the Health & Wellness Systems Stats Browser, we currently do not have a method to see all storage / retention across our deployment in a single instance or view.

 

Below you will find several scripts that will help us gain this visibility quickly and easily.

 

Update: Please grab the latest version of the script, some bugs were discovered that were fixed.

 

How It Works:

 

1. Dependency: get-all-systems.sh (attached, in both v10 and v11 versions for your particular environment). Run this script prior to running get-retention.py, as it requires the 'all-systems' file, which contains all of your appliances and services.

2. We then read through the all-systems file and look for services that have retention, e.g. EndpointLogHybrid, EndpointHybrid, LogHybrid, LogDecoder, Decoder, Concentrator, Archiver.

3. Finally, we use the 'tlogin' functionality of NwConsole for cert-based authentication (so there is no need to pass a username/password to the script), pull database statistics, and output the retention (in days) for each service.
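Conceptually, the retention reported for each service is just the span between the oldest and newest data in that service's database. A minimal sketch of the calculation (the timestamp format and values are illustrative; the real script pulls these stats via NwConsole):

```python
from datetime import datetime

def retention_days(oldest, newest):
    """Return retention in whole days given the oldest and newest
    session/packet times reported by a service's database stats."""
    fmt = "%Y-%m-%d %H:%M:%S"
    delta = datetime.strptime(newest, fmt) - datetime.strptime(oldest, fmt)
    return delta.days

# Example: a Concentrator whose meta database spans ~45 days
print(retention_days("2020-03-01 00:00:00", "2020-04-15 12:30:00"))  # 45
```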

 

Instructions:

 

1. Run ./get-all-systems_v10.sh (for 10.x systems) or ./get-all-systems_v11.sh (for 11.x systems)

    NOTE: Make sure to grab the 11.4 version of the backup scripts if you are running NetWitness 11.4+

2. Run ./get-retention.py  (without any arguments). This MUST be run from Puppetmaster (v10) or Node0 (v11).

 

Sample Run: 

 

Please feel free to provide feedback, bug reports etc...

Summary:

Several changes have been made to the Threat Detection Content in Live. For added detection you need to deploy/download and subscribe to the content via Live; retired content must be removed manually.

Detailed configuration procedures for getting RSA NetWitness Platform setup - Content Quick Start Guide 

 

Additions:

RSA NetWitness Lua Parsers:

  • fingerprint_certificate Options - Optional parameters are added to alter the behavior of the fingerprint_certificate parser.
  • fingerprint_minidump - Detects Windows Minidump files. Meta will be output as filetype - 'minidump'. This parser will also detect minidump files containing lsass memory and output meta as ioc - 'lsass minidump'.

Using RSA NetWitness to Detect Credential Harvesting: lsassy 

 

More information about Packet Parsers: https://community.rsa.com/docs/DOC-43422

 

RSA NetWitness Application Rules:

The following app rules have been added to the Endpoint Content pack for RSA NetWitness 11.4 Investigation and Alerting –

  • Autorun Invalid Signature Windows Directory
  • Autorun Unsigned Hidden Only Executable In Directory
  • Autorun Unsigned winlogon helper DLL
  • Browser Runs Command Prompt
  • Command Line Writes Script Files
  • Command Prompt Obfuscation
  • Command Prompt Obfuscation Using Value Extraction
  • Command Shell Copy Items
  • Command Shell Runs Rundll32
  • Evasive Powershell Used Over Network
  • Explorer Public Folder DLL Load
  • Hidden and Hooking
  • Lateral Movement with Credentials Using Net Utility
  • OS Process Runs Command Shell
  • Outbound from Unsigned AppData Directory
  • Outbound from Windows Directory
  • Outbound Unsigned Temporary Directory
  • Potential Outlook Exploit
  • Powershell Double Base64
  • Process Redirects to STDOUT or STDERR
  • RDP Launching Loopback Address
  • Remote Directory Traversal
  • RPM Ownership Changed
  • RPM Permissions Changed
  • Unsigned Creates Remote Thread And File Hidden
  • Unsigned Library in Suspicious Daemon
  • Unsigned Opens LSASS
  • WMIC Remote Node Activity
  • Multiple Psexec Within Short Time

 

More information about NetWitness 11.4 New Features and Alerting: ESA Rule Types 

 

 

Changes:

RSA NetWitness Lua Parsers:

  • china_chopper – Functionality has been added to detect new variants of China Chopper.
  • DCERPC – The parser now supports NTLM authentication along with Kerberos, and will extract authentication meta from both Kerberos and NTLM.

Using the RSA NetWitness Platform to Detect Lateral Movement: SCShell (DCE/RPC) 

  • DynDNS – The parser is updated with improved detection, adding new dynamic DNS domains identified by RSA Incident Response.

Read more about threat hunting/investigation using DynDNS parser What's updog? 

  • fingerprint_certificate - This parser is updated for efficiency improvements as well as added detection with more customization using options file.
  • HTTP_lua – Updated for accuracy and efficiency.
  • SMB_lua – Functionality has been added to support SMBv3.
  • MAIL_lua – Updated for accuracy and efficiency.
  • TLS_lua - Added a new option to TLS_lua to limit examination of sessions to only the ports specified in the option. If enabled, ports not listed will not be parsed by TLS_lua and thus will not be identified as service 443. This will reduce the workload of TLS_lua by eliminating identification of SSL/TLS sessions on unknown ports.

Read more about SSL and NetWitness 

  • SSH_lua - The SSH_lua parser now includes SSH versions for both server and client, providing better insight during investigation.
  • windows_command_shell_lua – Updates have been made to base64-encoded command detection, along with new commands.
  • xor_executable_lua – Improved detection of XOR'd executables by adding detection of XOR'd MZ headers.
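To illustrate the idea behind XOR'd MZ header detection (a toy Python sketch; the actual parser is Lua and considerably more robust against false positives):

```python
def find_xored_mz(data: bytes):
    """Return (offset, key) pairs where adjacent bytes look like an
    'MZ' executable header XOR-encoded with a single-byte key."""
    hits = []
    for i in range(len(data) - 1):
        # Candidate key that would turn this byte into 'M'
        key = data[i] ^ ord("M")
        # Skip key 0 (that's just a plain-text MZ header)
        if key != 0 and data[i + 1] ^ key == ord("Z"):
            hits.append((i, key))
    return hits

# 'MZ...' header XOR-encoded with key 0x42, preceded by some noise
payload = bytes(b ^ 0x42 for b in b"MZ\x90\x00\x03")
print(find_xored_mz(b"noise" + payload))  # [(5, 66)] -> offset 5, key 0x42
```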

 

RSA NetWitness Application Rules:

The following app rules have been updated in the Endpoint Content pack for 11.4 Investigation and Alerting –

  • Office application injects remote process
  • Office Application Runs Scripting Engine
  • Creates Remote Service

 

RSA NetWitness Bundles:

Endpoint Pack has been updated with new and updated content to support Alerting for NetWitness Endpoint 11.4 and higher.

Refer to Endpoint Content for detailed information about the content pack and its configuration.

 

More content has been tagged with MITRE ATT&CK™ metadata for better coverage and improved detection.

For detailed information about MITRE ATT&CK™:

RSA Threat Content mapping with MITRE ATT&CK™ 

Manifesting MITRE ATT&CK™ Metadata in RSA NetWitness 

 

 

Discontinued:

We strive to provide timely and accurate detection of threats as well as traits that can help analysts hunt through network and log data. Occasionally this means retiring content that provides little-to-no value.

List of Discontinued Content 

 

RSA NetWitness Application Rules:

  • php put with 40x error – Marked discontinued due to performance-to-value tradeoff.
  • php botnet beaconing w - Retiring this rule as it provides little-to-no value; PHP beaconing has evolved and uses different patterns.
  • Windows NTLM Network Logon Successful - Retiring as improved application rule for ‘Pass the Hash’ has been created.

 

 

For additional documentation, downloads, and more, visit the RSA NetWitness Platform page on RSA Link.

 

EOPS Policy:

RSA has a defined End of Primary Support policy associated with all major versions. Please refer to the Product Version Life Cycle for additional details.

22APR2020 - UPDATE: Naushad Kasu has posted a video blog of this process and I have posted the template.xml and NweAgentPolicyDetails_x64.exe files from his blog here.

 

08APR2020 - UPDATE: adding a couple notes and example typespecs after some additional experimenting over the past week

  • You may find the process easier to simply copy an existing 11.4 typespec in the /var/netwitness/source-server/content/collection/file directory on the Admin Server and modify it for the custom collection source you need
  • example using IIS typespec:
    • comparison of the XML from the Log Collector/Log Decoder to the version I created on the Admin Server


  • another example using a custom typespec to collect Endpoint v4.4 (A.K.A. legacy ECAT) server logs
    • with two different typespecs collecting the exact same set of logs, we can see exactly how the values in the typespec affect the raw log that ultimately gets ingested by NetWitness


**END UPDATE**

 

 

The NetWitness 11.4 release included a number of features and enhancements for NetWitness Endpoint, one of which was the ability to collect flat file logs (https://community.rsa.com/docs/DOC-110149#Endpoint_Configuration), with the intent that this collection method would allow organizations to replace existing SFTP agents with the Endpoint Agent.

 

Flat file collection via the 11.4 Endpoint agent allows for much easier management compared to the SFTP agent, in addition to the multitude of additional investigative and forensic benefits available with both the free version of the Endpoint agent and the advanced version (NetWitness Endpoint User Guide for NetWitness Platform 11.x - Table of Contents).

 

The 11.4 release included a number of OOTB, supported Flat File collection sources, with support for additional OOTB, as well as custom, sources planned for future releases.  However, because I am both impatient and willing to experiment in my lab where there are zero consequences if I break something, I decided to see whether I could port my existing, custom SFTP-based flat file collections to the new 11.4 Endpoint collection.

 

The process ended up being quite simple and easy.  Assuming you already have your Endpoint Server installed and configured, as well as custom flat file typespecs and parsers that you are using, all you need to do is:

  1. install an 11.4+ endpoint agent onto the host(s) that have the flat file logs
  2. ...then copy the custom typespec from the Log Decoder/Log Collector filesystem (/etc/netwitness/ng/logcollection/content/collection/file)
  3. ...to the Node0/Admin Server filesystem (/var/netwitness/source-server/content/collection/file)
    1. ...if your typespec does not already include a <defaults.filePath> element in the XML, go ahead and add one (you can modify the path later in the UI)
    2. ...for example: 
  4. ...after your typespec is copied (and modified as necessary), restart the source-server on the Node0/Admin Server
  5. ...now open the NetWitness UI and navigate to Admin/Endpoint Sources and create a new (or modify an existing) Agent File Logs policy (more details and instructions on that here: Endpoint Config: About Endpoint Sources)
    1. ...find your custom Flat File log source in the dropdown and add it to the Endpoint Policy
    2. ...modify the Log File Path, if necessary:
    3. ...then simply publish your newly modified policy
  6. ...and once you have confirmed Collection via the Endpoint Agent, you can stop the SFTP agent on the log source (https://community.rsa.com/docs/DOC-101743#Replace)

 

 

And that's it.  Happy logging.

The Maze ransomware has recently been making the news due to some high-profile infections. In addition to requesting, in some instances, ransoms of 6+ million USD to regain access to the files, the group behind the malware has also leaked some of these files if the ransom was not paid.

 

In this post, we will look at the detected behaviors and IOCs from the Maze ransomware as identified by RSA NetWitness Endpoint and Network.

 

The following is the malware sample tested within this post.

SHA256: fc611f9d09f645f31c4a77a27b6e6b1aec74db916d0712bef5bce052d12c971f

 

 

 

Execution of Maze

When the victim gets infected, he will first notice that some of his open applications, such as Word and Excel, get closed. After some time, once the execution of the ransomware is complete, the user’s background is changed as seen in the screenshot below, instructing the victim to pay the ransom.

 

 

 

The victim will also notice a new text file in his folder (which gets automatically opened at reboot). The file provides detailed instructions on how to make the payment.

 

 

 

 

 

RSA NetWitness Endpoint

 

By leveraging RSA NetWitness Endpoint, we can look at the behavior of the malware on the victim’s machine.

If we first look at the overall details for that specific workstation, we can see:

 

  • An elevated overall risk score (93)
  • Some specific suspicious behaviors, such as
    • Deletes Shadow Volume Copies: this is a typical ransomware technique to stop the victim from restoring his files
    • Run/Writes Malicious File by Reputation Service: the ransomware itself has a known malicious hash value
    • Floating Module: might be loading DLLs in memory

 

 

By going to the list of processes, we can see the “maze.exe” file (the filename could be different) with a risk score of 76 based on its behavior on the system, and with a known reputation of “Malicious” based on the file hash value.

 

 

If we then look at the loaded libraries, we can see that in fact, the ransomware has loaded a DLL in memory:

 

 

If we then look at the files to run at startup, we can see that the text files have been added to the startup folders, to get automatically opened at startup and display the payment instructions for the user:

 

 

If we finally look at the overall behavior of the ransomware on the system:

  1. The ransomware is executed
  2. It closes Excel
  3. It loads the DLL in memory
  4. It communicates over the network with multiple public IP addresses (more details in the RSA NetWitness Network section)
  5. It deletes the shadow copies
  6. The multiple readDocument actions show the ransomware encrypting all of the user’s documents

 

 

 

 

RSA NetWitness Network

By leveraging RSA NetWitness Network, we can then look at the behaviors the ransomware has done from the network’s perspective. In addition, from the Endpoint side, we already know and have confirmed that the ransomware has initiated connections to the Internet.

 

By filtering on outbound traffic over HTTP, we can identify multiple suspicious behaviors.

 

 

  • Based on the user agent, the tool used to generate those sessions advertises itself as being IE 11 on Windows 7 (this doesn’t HAVE to be true). Being from IE11 would indicate that we should expect these connections to be from a human/browser, and not from a tool/script/application…
  • Direct to IP connections, without a hostname. Even though this can be normal (especially when done to private IP addresses within the local network), it is more suspicious when done over the Internet, as it is unlikely for a user to remember public addresses and directly input them in the browser’s address bar (which would be what the tool wants us to believe, as it advertised itself as IE11).

  • The lack of a referrer header. This header usually includes the previous page that linked to this one. Especially when dealing with direct-to-IP requests, a referrer would be expected: a user making such a request because he followed a link is more plausible than a user directly typing public IP addresses.

  • HTTP Post methods without Gets. This is also a suspicious behavior when dealing with HTTP sessions initiated by a human/browser. Typically, for a user to “POST” data to a website, he first needs to request and “GET” a webpage that includes a form. Directly posting data is unusual for a human and is usually expected only from tools/applications/APIs …

 

 

We can then go to the session reconstruction view to look in more detail at one of those sessions.

 

By reconstructing the session, we can:

  • Identify again the user-agent, which can be used as an IOC to identify other infected machines
  • The “Host” field having an IP address instead of the expected hostname
  • Missing expected headers, such as a referrer
  • The Entropy meta (between 0-10,000) showing a high entropy level for the request. Entropy allows us to do statistical analysis on the payload and assess how randomized it is. Low entropy would indicate clear-text content, while high-entropy would indicate encrypted content. An encoded payload would be somewhere in between. When using HTTP, which is a clear-text protocol, we would expect either clear-text, or in some cases encoded payloads. A user, through a (supposed) browser connection, wouldn’t be expected to post highly random/encrypted payloads.

 

A combination of these different indicators does lead to identifying these suspicious network sessions initiated by the ransomware, including:

  • Direct to IP requests
  • Missing headers (referrer)
  • Post without Get HTTP methods
  • High entropy for a clear-text protocol
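The last indicator, entropy, can be approximated with Shannon entropy scaled to a 0-10,000 range (a rough Python sketch; NetWitness's exact calculation may differ):

```python
import math
from collections import Counter

def scaled_entropy(payload: bytes) -> int:
    """Shannon entropy of the payload in bits per byte (0..8),
    scaled to a 0-10,000 range like the NetWitness entropy meta."""
    if not payload:
        return 0
    counts = Counter(payload)
    total = len(payload)
    bits = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return round(bits / 8 * 10000)

# Clear-text HTTP looks low-entropy; encrypted data looks high-entropy.
print(scaled_entropy(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"))
print(scaled_entropy(bytes(range(256)) * 16))  # uniform bytes -> 10000
```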

 

 

 

 

Indicators of Compromise

Below are some IOCs that could be used in RSA NetWitness Network and Endpoint to identify potential Maze infections in your environment. Note that these are based on the specific variant tested as part of this post and could vary for other variants. It is usually recommended to leverage behaviors and techniques, such as the ones discussed in the RSA NetWitness Network and Endpoint sections of this post, instead of specific signatures, which helps overcome changes in those signatures.

 

File Hash

MD5: e69a8eb94f65480980deaf1ff5a431a6

SHA-1: dcd2ab4540bde88f58dec8e8c243e303ec4bdd87

SHA-256: fc611f9d09f645f31c4a77a27b6e6b1aec74db916d0712bef5bce052d12c971f

 

IP Addresses

91.218.114.4

91.218.114.11

91.218.114.25

91.218.114.26

91.218.114.31

91.218.114.32

91.218.114.37

91.218.114.38

91.218.114.77

91.218.114.79

 

Domain Names (the malware doesn’t initiate connections there, but this is where the victim needs to go to for the payment/more info)

mazedecrypt[.]top

 

User-Agent

Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; AS; rv:11.0) like Gecko

 

 

Josh Randall

Easy-add Recurring Feeds

Posted by Josh Randall Employee Apr 16, 2020

16APR2020 Update

  • adding a modified script for NetWitness environments at-or-above version 11.4.1 (due to JDK 11)
  • renaming the original script to indicate recommended use in pre-11.4.1 NetWitness environments

 

19DEC2019 Update (with props to Leonard Chvilicek for pointing out several issues with the original script)

  • implemented more accurate java version & path detection for JDK variable
  • implemented 30 second timeout on s_client command
  • implemented additional check on address of hosting server
  • implemented more accurate keystore import error check
  • script will show additional URLs for certs with Subject Alternate Names

 

In the past, I've seen a number of people ask how to enable a recurring feed from a hosting server that is using SSL/TLS, particularly when attempting to add a recurring feed hosted on the NetWitness Node0 server itself.  The issue presents itself as a "Failed to access file" error message, which can be especially frustrating when you know your CSV file is there on the server, and you've double- and triple-checked that you have the correct URL:

 

There are a number of blogs and KBs that cover this topic in varying degrees of detail:

 

 

Since all the steps required to enable a recurring feed from a SSL/TLS-protected server are done via CLI (apart from the Feed Wizard in the UI), I figured it would be a good idea to put together a script that would just do everything - minus a couple requests for user input and (y/N) prompts - automatically.

 

The goal here is that this will quickly and easily allow people to add new recurring feeds from any server that is hosting a CSV:

 

Success!

Interested in a central, single-pane-of-glass view across your cloud, on-prem, and virtual infrastructure? Then the RSA NetWitness real-time dashboards and charts are exactly what you need.

 

The attached dashboards, charts, and Reporting Engine rules will help you get real-time monitoring of what really matters across these technologies and log sources.

 

The snapshots below show what you would ultimately see after importing the attached content into your NetWitness 11.3+ Reporting Engine and dashboards:

(This assumes you have successfully integrated those log sources and parsed their logs into the meta keys that allow the dashboards below to be populated with the relevant information.)

 

 

Qualys Vulnerability Scanner Dashboard

A new C2 framework called Ninja was recently added to the C2 Matrix. It was built on top of the leaked MuddyC3 framework used by an Iranian APT group called MuddyWater. It can be run on Windows, Linux, macOS, and FreeBSD; the platform is built for speed, is highly malleable, and is feature rich. As usual, this blog post will cover detecting its usage with NetWitness Network and NetWitness Endpoint.

 

The Attack

Ninja creates a variety of payloads for you upon execution. In this instance, we just chose one of the PowerShell payloads and executed it on the victim endpoint:

 

A few seconds later we see our successful connection back to Ninja, whereby a second stage payload is sent along, as well as information about the victim endpoint:

 

We can see the information sent back from one of the initial HTTP POSTs by listing the agents:

 

Now we can change our focus to the agent, and start to execute commands against the endpoint:

 

The Detection Using NetWitness Network

Ninja C2 works over HTTP and currently has no direct support for SSL. This is in an attempt to blend in with the large quantities of HTTP traffic typically already present in an environment: the best place to hide a leaf is in the forest.

 

Ninja exhibits a fairly large number of anomalies in the HTTP requests it makes to and from the C2; a few of these are highlighted below:

NOTE: While plenty of applications (ab)use the HTTP protocol, focusing on characteristics of more mechanical types of behaviour can lead us to sessions that are more worthy of investigation.

 

Another interesting element to Ninja is that for each agent a unique five-character ID is generated. All requests from or to that agent are then in the form "AgentId-img.jpeg" - so from the below, we can tell that two agents are communicating with Ninja. You'll also notice that the requests it is making are for JPEG images, but none are actually returned. We can tell this as the File Type meta key is populated by a parser looking for the magic bytes of files, and it found no evidence of a JPEG in these sessions:

 

 

Another interesting artefact from Ninja is that it also returns encrypted commands in GET requests, and the associated encrypted response in POST requests; these can be seen under the Querystring meta key - the initial HTTP POST however, contains information about the system and is sent in the clear delimited by **:

 

Drilling into the Events view for the Ninja traffic, we can also see a defined beaconing pattern (we set this to two minutes upon setting up Ninja), as well as the fact that the beacons typically all have the same payload size:

 

Reconstructing the sessions from the beginning, we can see the initial communication with the Ninja C2, whereby it returns a second stage PowerShell payload:

 

This payload is somewhat large: it sets up the agent itself, the communication with the C2, and the encrypt and decrypt functions, as well as dynamically generating the AES key that will be used. Payloads such as this should be studied in depth, as they allow you to better understand the C2's function and, in this case, will allow us to decrypt the communication:

 

The next two pieces of information directly after the second stage payload are important: they contain the agent ID, details of the infected endpoint, and the encryption key that will be used; this is not a static key and is dynamically created for each agent:

 

Continuing the reconstruction of the sessions, we can see some Base64-encoded data; these are the AES-encrypted commands and their associated responses:

 

If you remember from earlier, we managed to identify the key that was used to encrypt this data. We also identified the second stage payload that was sent, this payload contained the PowerShell code for the agent which included the encryption and decryption functions for this data. We can simply use this to our advantage and create a simple decoder for this data:

#Ninja AES Key Returned From First HTTP POST to C2
$key = 'VU9XU0VIQUpaSldVU0JET1pXUVRaTVFMRUpZVU1ZUFQ='
#User passed data to decrypt
$enc = $args[0]

function CAM ($key, $IV) {
    try   { $a = New-Object "System.Security.Cryptography.RijndaelManaged" }
    catch { $a = New-Object "System.Security.Cryptography.AesCryptoServiceProvider" }
    $a.Mode = [System.Security.Cryptography.CipherMode]::CBC
    $a.Padding = [System.Security.Cryptography.PaddingMode]::Zeros
    $a.BlockSize = 128
    $a.KeySize = 256
    if ($IV) {
        if ($IV.getType().Name -eq "String") { $a.IV = [System.Convert]::FromBase64String($IV) }
        else { $a.IV = $IV }
    }
    if ($key) {
        if ($key.getType().Name -eq "String") { $a.Key = [System.Convert]::FromBase64String($key) }
        else { $a.Key = $key }
    }
    $a
}

# The first 16 bytes of the Base64-decoded payload are the IV
$b = [System.Convert]::FromBase64String($enc)
$IV = $b[0..15]
$a = CAM $key $IV
$d = $a.CreateDecryptor()
$u = $d.TransformFinalBlock($b, 16, $b.Length - 16)
[System.Text.Encoding]::UTF8.GetString($u)

 

Executing the script and passing it the encrypted Base64 will decrypt the encrypted commands and associated responses allowing us to see what the attacker executed:
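The same decryption can also be reproduced outside PowerShell. The following sketch uses openssl to round-trip a sample string with the recovered key; the sample plaintext, IV, and blob are fabrications for illustration (we are not reusing real Ninja traffic here), and openssl's default PKCS7 padding stands in for the agent's zero padding, which is close enough for a round-trip demo:

```shell
# Ninja's AES key is the Base64 encoding of an ASCII string; openssl wants hex
KEY_HEX=$(echo 'VU9XU0VIQUpaSldVU0JET1pXUVRaTVFMRUpZVU1ZUFQ=' | base64 -d | xxd -p -c 64)

# Build a sample blob the way the agent would: AES-256-CBC ciphertext with
# the 16-byte IV prepended, all Base64 encoded
IV_HEX=000102030405060708090a0b0c0d0e0f
CT_HEX=$(printf 'tasklist' | openssl enc -aes-256-cbc -K "$KEY_HEX" -iv "$IV_HEX" | xxd -p -c 256)
BLOB=$(printf '%s%s' "$IV_HEX" "$CT_HEX" | xxd -r -p | base64 | tr -d '\n')

# Decrypt the way the PowerShell decoder does: split off the first 16 bytes
# as the IV, then AES-decrypt the remainder with the recovered key
RAW_HEX=$(echo "$BLOB" | base64 -d | xxd -p | tr -d '\n')
DEC_IV=$(printf '%s' "$RAW_HEX" | cut -c1-32)
DEC_CT=$(printf '%s' "$RAW_HEX" | cut -c33-)
PLAINTEXT=$(printf '%s' "$DEC_CT" | xxd -r -p | openssl enc -d -aes-256-cbc -K "$KEY_HEX" -iv "$DEC_IV")
echo "$PLAINTEXT"
```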

 

Because Ninja communicates over HTTP by default and the initial communication is in the clear, an application rule to pick up on this would look like the following:

(service = 80) && (action = 'post') && (query contains '**')

 

To detect further potential communication to and from Ninja C2 we could use the following application rule logic:

(service = 80) && (filename regex '^[a-z]{5}-img.jpeg') && (filetype != 'jpeg')
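As a quick sanity check of that filename pattern, the regex can be exercised against a few hypothetical beacon filenames (note the dot is escaped here, which the rule above leaves as a wildcard):

```shell
# Five lowercase letters, then "-img.jpeg": only the first name should match
matches=""
for f in qwert-img.jpeg abc-img.jpeg TRYME-img.jpeg qwert-img.jpg; do
  if printf '%s\n' "$f" | grep -qE '^[a-z]{5}-img\.jpeg'; then
    matches="$matches $f"
    echo "match: $f"
  fi
done
```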

 

The Detection Using NetWitness Endpoint

Upon deploying Ninja, NetWitness Endpoint generates four Behaviours of Compromise, runs powershell, runs powershell decoding base64 string, and runs powershell with long arguments:

 

NetWitness Endpoint also generated meta values for the reconnaissance commands that were executed by the Ninja PowerShell agent:

  • C:\>whoami: gets current username
  • C:\>quser: queries users logged on local system
  • C:\>tasklist: enumerates processes on local system

 

This is an important point to note: even if you miss the initial execution, the malicious process still has to do something to achieve its end goal, and as a defender you only need to pick up on one of those activities to pull the thread back to the beginning.

 

Drilling into the Events view for the meta value runs powershell decoding base64 string, we can see the Base64 encoded PowerShell command that initiates the connection to Ninja. We can also Base64 decode this within the UI to obtain other information, such as the C2 IP:

 

Drilling into the Events view for the other meta values identified, we can see that a FILELESS_SCRIPT, spawned from the initial PowerShell command, is executing the reconnaissance command, tasklist:

 

 

Conclusion

New C2 frameworks are constantly being developed, but they all fall prey to the same detection mechanisms. It comes down to you, as a defender, to triage the data the system presents and look for anomalies in processes doing things they shouldn't.

Every SOC analyst should spend at least part of their day reading blog posts and white papers on attacker profiles, tools, and techniques. Attackers often repeat at least certain aspects of their activity across targets, giving analysts an opportunity to incorporate such indicators into their toolset (hopefully) before being targeted by those attackers.

 

In addition, other sites provide continuous indicators of both advanced and opportunistic attackers, which can also be incorporated into the toolset for automatic detection.

 

Here I will provide a guide on how to format such publicly available indicators for use in NetWitness Network and NetWitness Endpoint.

 

Let us briefly describe what an Indicator of Compromise (IOC) is. An IOC is an indicator of something that has already been observed on a compromised system, or a behavior that was part of an attack. There are multiple types of IOCs, because you can track something in many different ways: for example, IP addresses, filenames, file sizes, URLs, a particular endpoint behavior, etc.

 

Sometimes lists of hashes such as MD5/SHA1/SHA256 are enough to quickly identify compromised machines. For this purpose, there are multiple sites where you can find good lists of MD5/SHA1/SHA256-based IOCs; here are some examples:

 

 

At this point, if you don't have your own list of IOCs based on MD5 / SHA1 / SHA256, you can use some of these lists, created by other analysts. However, such information is not necessarily in a suitable format for incorporating into the NetWitness toolset. One way to normalize the data is by following this process:

 

  1. Install Cmder (https://cmder.net/), a good console emulator for Windows that has the functionality needed for the rest of the steps.
  2. Let’s say you want to generate the MD5 IOC list for FIN7, found at https://github.com/RedDrip7/APT_Digital_Weapon/tree/master/FIN7

 

After you download file FIN7_hash.md, you are ready to start.

  1. Open Cmder and go to the folder where the downloaded file is located.
  2. Now run the following commands, as shown in the following figure.

 

grep -e "[0-9a-f]\{32\}" FIN7_hash.md | cut -c 3- | cut -c -32 | uniq -u > FIN7_tmp.txt && sed -e 's/$/,FIN7,blacklisted\ file/' FIN7_tmp.txt > FIN7_md5.txt

 

Let me explain the commands in more detail for those not familiar with these tools/commands:

 

  • grep -e "[0-9a-f]\{32\}" FIN7_hash.md : extracts the MD5 hashes from FIN7_hash.md
  • cut -c 3- | cut -c -32 : removes the unneeded surrounding characters
  • uniq -u > FIN7_tmp.txt : de-duplicates the hashes and saves the output to FIN7_tmp.txt
  • sed -e 's/$/,FIN7,blacklisted\ file/' FIN7_tmp.txt > FIN7_md5.txt : appends the description and marker to each line, creating the final file

 

 

The above steps are specific to this particular file; each set of IOCs will need its own set of conversion steps. The sed command appends “,FIN7,blacklisted file” to each line and writes the output to FIN7_md5.txt. Here “FIN7” is the description of your APT, which we will map to the ioc meta key in NetWitness, and “blacklisted file” is the value we will map to the analysis.file key. This step is critical if you want the module and machine scores automatically set to 100 for these matches.
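To sanity-check the output format, the same idea can be exercised on a couple of hypothetical hashes. Note this sketch uses grep -o and sort -u in place of the cut/uniq steps, since these fabricated sample lines don't share the real FIN7_hash.md layout:

```shell
# Hypothetical markdown rows standing in for FIN7_hash.md content
cat > sample_hash.md <<'EOF'
| 0f23a5c8e1b2d3f4a5b6c7d8e9f0a1b2 | loader.exe |
| 0f23a5c8e1b2d3f4a5b6c7d8e9f0a1b2 | loader.exe |
| 9e8d7c6b5a4f3e2d1c0b9a8f7e6d5c4b | dropper.dll |
EOF

# Extract the 32-hex-char hashes, de-duplicate, then append the
# description and the "blacklisted file" marker to every line
grep -oe "[0-9a-f]\{32\}" sample_hash.md | sort -u > sample_tmp.txt
sed -e 's/$/,FIN7,blacklisted file/' sample_tmp.txt > sample_md5.txt
cat sample_md5.txt
```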

 

Figure 1

 

If you want to use your own toolset to format the data, then please ensure you follow these steps in order to generate a good IOC list:

 

  1. Retrieve the file (it can be plain text, a PDF, a Word document, or HTML; the file type is not important)
  2. Extract the IOCs from the file
  3. Remove unneeded characters (so that only the useful strings remain)
  4. Make each IOC unique (removing duplicate entries is an important step)
  5. You must have a value of “blacklisted file” in your resulting file if you want machine and module scores to be affected by your feed

 

At this point we have the source CSV file with the data necessary to create a Feed for NetWitness Endpoint.
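For reference, the resulting CSV (hypothetical hashes shown) has the hash in the first column, the description in the second, and the literal value “blacklisted file” in the third:

```
0f23a5c8e1b2d3f4a5b6c7d8e9f0a1b2,FIN7,blacklisted file
9e8d7c6b5a4f3e2d1c0b9a8f7e6d5c4b,FIN7,blacklisted file
```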

 

To create your feed, follow these steps:

  1. Go to Configure->Custom Feed and create a custom feed.
  2. Click the + icon, select Custom Feed, and configure the custom feed by giving it a name and selecting the CSV file you created above, as shown in the following figure

 

Figure 2

 

 

In this case CustomAPTFeed.csv is your FIN7_md5.txt created above, which we renamed to CustomAPTFeed.csv.

Apply the feed to the Log Decoder (second tab), and define the columns as shown in the following figure. Define the Callback Key as “checksum.src”, select the Index Column to be the first one (which will grey it out in the grid below), select the “ioc” key for Column 2, and finally select the “analysis.file” meta key for Column 3. Again, this step is critical if you want the risk scores to update automatically; it will only work for this combination of key and value.

 

Figure 3

 

 

Finish the import, and make sure there are no errors and the task completed successfully. Now you can go to Investigate in the UI and validate your data.

 

Every time the meta key “checksum.src” contains a value defined in your custom feed, the “ioc” meta key will be populated with the value provided in Column 2 of the CSV file, and the “analysis.file” meta key will have the “blacklisted file” value, as shown in the following figure.

 

Figure 4

 

In this case, the endpoint risk score for that system will automatically be increased to 100 (the highest possible risk score), and under Critical Alert you will see the relevant indicator, in our case “Blacklisted File”.

 

Figure 5

 

The same will happen to the specific module that was Blacklisted, as shown in the following figure:

 

Figure 6

 

Multiple types of IOCs can be loaded into NetWitness Endpoint, following the steps presented in this blog post. Always remember that IOCs are static, so the resource has to match exactly to trigger an alert. In the case of MD5 hashes of files, also remember that if the file is changed even by just one byte or for example recompiled, the MD5 hash will be different and your IOC will no longer match. This is the reason why we recommend that analysts focus instead on other possible characteristics of a file (such as the file description if it is unique) or its behavior (such as any parameters that need to be passed for it to work).
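The brittleness of hash-based IOCs is easy to demonstrate: changing a single byte of a file produces a completely different MD5, so the IOC no longer matches. A quick illustration with fabricated file contents:

```shell
# Two files differing by exactly one byte
printf 'this is the original file' > original.bin
printf 'this is the originaX file' > modified.bin

h1=$(md5sum original.bin | cut -d' ' -f1)
h2=$(md5sum modified.bin | cut -d' ' -f1)
echo "$h1"
echo "$h2"
```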

 

I hope this blog post can help in importing simple and fast IOCs into the NetWitness endpoint for automatic detection of known malicious files.

 

A special thank you goes out to Lee Kirkpatrick for his assistance and support.

NetWitness already has the Health & Wellness service, which provides a full overview of the health of all NetWitness services and hosts. Still, I also created a health-check script that performs a quick analysis of disk usage, memory utilization, the existence of core files, and any failed services on each NetWitness host.

 

It also lists all your hosts' Salt Minion IDs, hostnames, and IPs, and provides a Salt reachability check.

 

 

How It Works:

 

The procedure actually consists of two scripts:

 

health-check.sh: This script runs on the SA and performs a simple health check of your environment. It copies health-check-host.sh to all hosts, makes it executable, runs it on each host, and provides the output and recommendations. It also lists all your hosts' UUIDs ("Salt Minion IDs"), hostnames, and IPs, and performs a reachability test.

 

health-check-host.sh: This script is copied to all NetWitness hosts when you run health-check.sh on the SA. It analyzes each host's disk usage, memory utilization, existence of core files, and any failed services.

This script (health-check-host.sh) is not run manually; it runs automatically when you run health-check.sh on the SA.
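The attached scripts are the authoritative versions, but the disk portion of such a per-host check boils down to parsing `df` output and flagging filesystems over a usage threshold. A hypothetical sketch (the threshold, function name, and sample mounts are assumptions for illustration):

```shell
# Flag any filesystem above a usage threshold, given `df -P` style output
check_disk() {
  # $1 = df -P output, $2 = threshold percent
  echo "$1" | awk -v t="$2" 'NR > 1 { gsub(/%/, "", $5); if ($5 + 0 > t) print "WARN", $6, $5"%" }'
}

sample='Filesystem 1024-blocks Used Available Capacity Mounted-on
/dev/sda1 1000000 900000 100000 90% /var/netwitness
/dev/sda2 1000000 100000 900000 10% /home'

check_disk "$sample" 80
```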

 

 

Instructions:

 

All the steps below are performed in an SSH session to the NetWitness Admin Server (SA).

 

1) Under /root on the SA:

#vi health-check.sh

2) Copy the content of health-check.sh (attached) into the file you created in step 1.

3) Under /root on the SA:

#vi health-check-host.sh

4) Copy the content of health-check-host.sh (attached) into the file you created in step 3.

5) Make only health-check.sh executable (not health-check-host.sh):

#chmod +x health-check.sh

6) Run health-check.sh:

#./health-check.sh

 

 

Sample Run: 


 

Note:

 

A "Minion did not return. [Not connected]" or "No Response" result could point to one of the reasons below:

 

1) If you are experiencing network slowness and the Salt Master (SA) cannot reach the Salt Minions (hosts) within a specific time limit while fetching their IPs and hostnames, the first part of the script's output may show "No Response". Don't panic: this does not mean the SA is totally unable to reach the host(s), only that it could not reach them within the time limit, so Salt temporarily reports "No Response".

If you run the script again when there is no network slowness, it should provide the expected output.

 

2) If the host has no free memory left and has exhausted its swap, its Salt Minion may not reply to the Salt Master's request for the IP, hostname, and reachability test, giving "Minion did not return. [Not connected]". If you run the script again, it may return a normal result; otherwise, check the memory utilization of that host once you have ruled out network slowness (point 1) and a retired or powered-off host (point 3).

 

3) "Minion did not return. [Not connected]" could also point to a retired host (one that was removed from the environment but whose Salt Minion UUID was not deleted from the Salt Master), or to a host that is currently powered off.

 

 

 

 

Please feel free to provide feedback, bug reports, etc.
