What's updog?

Posted by Lee Kirkpatrick, Mar 16, 2020

Updog is a replacement for Python's SimpleHTTPServer. It allows uploading and downloading via HTTP/S, can generate ad hoc SSL certificates, and supports HTTP basic auth. It was created by sc0tfree and can be found on his GitHub page. In this blog post we will use updog to exfiltrate information and show you the network indicators left behind by its usage.
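For context, the baseline behaviour updog builds on (serving a directory over HTTP) can be reproduced with Python's standard library in a few lines - this is only a sketch of the SimpleHTTPServer-style baseline, not updog itself:

import http.server
import socketserver

PORT = 9090  # the same default port updog listens on
handler = http.server.SimpleHTTPRequestHandler  # serves the current working directory over HTTP

with socketserver.TCPServer(("", PORT), handler) as httpd:
    httpd.serve_forever()

updog layers uploads, HTTP basic auth, and ad hoc SSL on top of this simple model, which is why its traffic still presents an ordinary werkzeug HTTP server header to NetWitness, as we will see below.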

 

The Attack

We are starting updog with all the default settings on the attacker machine; this means it will expose the directory we are currently running it from over HTTP on port 9090:

 

In order to quickly make updog publicly accessible over the internet, we will use a service called Ngrok. This service exposes local servers behind NATs and firewalls to the public internet over secure tunnels - the free version of Ngrok creates a randomised URL, and the tunnel has a lifetime of 8 hours if you have not registered for a free account:

 

This now means that we can access our updog server over the internet using the randomly generated Ngrok URL, and upload a file from the victim's machine:

 

The Detection using NetWitness Network

An item of interest for defenders should be the use of services such as Ngrok. They are commonly utilised in phishing campaigns as the generated URLs are randomised and short lived. With a recent update to the DynDNS parser from William Motley, we now tag many of these services in NetWitness under the Service Analysis meta key with the meta value, tunnel service:

 

 

Pivoting into this meta value, we can see there is some HTTP traffic to an Ngrok URL, an upload of a file called supersecret.txt, a suspicious sounding Server Application called werkzeug/1.0.0 python/3.8.1, and a Filename with a PNG image named, updog.png:

 

 

Reconstructing the sessions for this traffic, we can see the updog page as the attacker saw it, and we can also see the file that was uploaded by them:

 

 

NetWitness also gives us the ability to extract the file that was transferred to the updog server, so we can see exactly what was exfiltrated:

 

Detection Rules

The following table lists an application rule you can deploy to help with identifying these tools and behaviours:

 

Appliance | Description | Logic | Fidelity
Packet Decoder | Detects the usage of Updog | server begins 'werkzeug' && filename = 'updog.png' | High

 

 

Conclusion

As a defender, it is important to monitor traffic to services such as Ngrok, as they can pose a significant security risk to your organisation. There are also multiple alternatives to Ngrok, and traffic to those should be monitored as well. In order for the new meta value, tunnel service, to start tagging these services, make sure to update your DynDNS Lua parser.

A zero-day RCE (Remote Code Execution) exploit against ManageEngine Desktop Central was recently released by ϻг_ϻε (@steventseeley). The description of how this works in full and the code can be found on his website, https://srcincite.io/advisories/src-2020-0011/. We thought we would have a quick run of this through the lab to see what indicators it leaves behind.

 

The Attack

Here we simply run the script and pass two parameters, the target, and the command - which in this case is using cmd.exe to execute whoami and output the result to a file named si.txt:

 

We can then access the output via a browser and see that the command was executed as SYSTEM:

 

Here we execute ipconfig:

 

And grab the output:

 

The Detection in NetWitness Packets

The script sends an HTTP POST to the ManageEngine server as seen below. It targets the MDMLogUploaderServlet over its default port of 8383 to upload a file with controlled content for the deserialization vulnerability to work; in this instance the file is named logger.zip. The command to be executed can also be seen in the body of the POST:

The traffic by default for this exploit is over HTTPS, so you would need SSL interception to see what is shown here.

 

This is followed by a GET request to the file that was uploaded via the POST for the deserialization to take place, which is what executes the command passed in the first place:

 

This activity could be detected by using the following logic in an application rule:

((service = 80) && (action = 'post') && (filename = 'mdmloguploader') && (query begins 'udid=')) || ((service = 80) && (action = 'get') && (directory = '/cewolf/'))

 

The Detection Using NetWitness Endpoint

To detect this RCE in NetWitness Endpoint, we have to look for Java doing something it normally shouldn't, as Java is what ManageEngine runs on. It is not uncommon for Java to execute cmd, so the analyst has to look into the commands to understand whether it is normal behaviour or not - from the below we can see java.exe spawning cmd.exe and running reconnaissance type commands, such as whoami and ipconfig - this should stand out as odd:

 

The following application rule logic could be used to pick up on this activity. Here we are looking for Java being the source of execution, as well as looking for the string "tomcat" to narrow it down to the Apache Tomcat web servers that work as the backend for the ManageEngine application; the final part identifies fileless scripts or cmd.exe being executed by it:

(filename.src = 'java.exe') && (param.src contains 'tomcat') && (filename.dst begins '[fileless','cmd.exe')

Other Java-based web servers will likely show a similar pattern of behaviour when being exploited.

 

 

Conclusion

As an analyst it is important to stay up to date with the latest security news to understand if your organisation could potentially be at risk of compromise. Remote code execution vulnerabilities such as the one outlined here can be an easy gateway into your network, and any devices reachable from the internet should be monitored for anomalous behaviour such as this. Applications should always be kept up to date and patches applied, where available, as soon as possible to avoid becoming a potential victim.

I have recently been posting a number of blogs regarding the usage of the RSA NetWitness Platform to detect attackers within your environment. As the list of blogs grows, it is becoming increasingly difficult to navigate through them easily. To combat this, this blog post will contain references to all other blog posts in the Profiling Attackers Series, and will be updated when new posts are made.

 

 

 

 

 

 

 

Special thanks to Rui Ataide for his support and guidance for these posts.

This post is going to cover a slightly older C2 framework from Silent Break Security called Throwback C2. As per usual, we will cover the network and endpoint detections for this C2, but we will also delve a little deeper into the threat hunting process with NetWitness.

 

The Attack

After installing Throwback and compiling the executable for infection - which in this case we will just drop and execute manually - we shortly see the successful connection back to the Throwback server:

 

Now that we have our endpoint communicating back with our server, we can execute typical reconnaissance type commands against it, such as whoami:

 

Or tasklist to get a list of running processes:

 

This C2 has a somewhat slow beacon that by default is set to ~10 minutes, so we have to wait that amount of time for our commands to be picked up and executed:

 

 

Detection Using NetWitness Network

To begin hunting, the analyst needs to prepare a hypothesis of what it is they believe is currently taking place in their network. This process would typically involve the analyst creating multiple hypotheses, and then using NetWitness to prove, or disprove them; for this post, our hypothesis is going to be that there is C2 traffic - these can be as specific or as broad as you like, and if you struggle to create them, the MITRE ATT&CK Matrix can help with inspiration.

 

Now that we have our hypothesis, we can start to hunt through the data. The below flow is an example of how we do exactly that with HTTP:

  1. What we are looking for defines the direction. In this case, we are looking for C2 communication, which means our direction will be outbound (direction = 'outbound')
  2. Secondly, you want to focus on a single protocol at a time. So for our hypothesis, we could start with SSL; if we have no findings, we can move on to another protocol such as HTTP. The idea is to navigate through them one by one to separate the data into smaller, more manageable buckets without getting distracted (service = 80)
  3. Now we want to hone in on the characteristics of the protocol and pull it apart. As we are looking for C2 communication, we would want to pull apart the protocol to look for more mechanical type behaviour - one meta key that helps with this is Service Analysis - the below figure shows some examples of meta values created based on HTTP

 

A great place to get more detail on using NetWitness for hunting can be found in the RSA NetWitness Hunting Guide: RSA NetWitness Hunting Guide PDF.

 

From the Investigation view, we can start with our initial query looking for outbound traffic over HTTP, and open the Service Analysis meta key. There are a fair number of meta values generated, and all of them are great places to start pivoting on; you can choose to pivot on an individual meta value, or multiple. We are going to start by pivoting on three, which are outlined below:

  • http six or less headers: Modern day browsers typically have seven or more headers. This could indicate a more mechanical type HTTP connection
  • http single response: Typical user browsing behaviour would result in multiple requests and responses in a single TCP connection. A single request and response can indicate more mechanical type behaviour
  • http post no get no referer: HTTP connections with no referer or GET requests can be indicative of machine like behaviour. Typically the user would have requested one or more pages prior to posting data to the server, and would have been referred from somewhere

 

After pivoting into the meta values above, we reduce the number of sessions to investigate to a more manageable volume:

 

Now we can start to open other meta keys and look for values of interest without being overwhelmed by the enormous amount of data. This could involve looking at meta keys such as Filename, Directory, File Type, Hostname Alias, TLD, SLD, etc. Based off the meta values below, the domain de11-rs4[.]com stands out as interesting and something we should take a look at; as an analyst, you should investigate all domains you deem of interest:

 

Opening the Events view for these sessions, we can see a beacon pattern of ~10 minutes, the filename is the same every time, and the payload size is consistent apart from the initial communication, which could be a download of a second stage module to further entrench - this could also be legitimate traffic and software simply checking in for updates, sending some usage data, etc.:

 

Reconstructing the events, we can see the body of the POST contains what looks like Base64 encoded data, and in the response we see a 200 OK but with a 404 Not Found message and a hidden attribute which references cmd.exe and whoami:

The Base64 data in the POST is encrypted, so decoding it at this point would not reveal anything useful. We may, however, be able to obtain the key and encryption mechanism if we had the executable, keep reading on to see!

 

Similarly we see another session which is the same but the hidden attribute references tasklist.exe:

The following application rule logic would detect default Throwback C2 communication:
service = 80 && analysis.service = 'http six or less headers' && analysis.service = 'http post no get no referer' && filename = 'index.php' && directory = '/' && query begins 'pd='

This definitely stands out as C2 traffic and would warrant further investigation into the endpoint. This could involve directly analysing all network traffic for this machine, or switching over to NetWitness Endpoint to analyse what it is doing, or both.

 

NOTE: The network traffic as seen here would be post proxy, or traffic in a network with no explicit proxy settings (https://www.educba.com/types-of-proxy-servers/).

 

Detection Using NetWitness Endpoint

As per usual, I start by opening the compromise keys. Under Behaviours of Compromise (BOC), there are multiple meta values of interest, but let's start with outbound from unsigned appdata directory:

 

Opening the Events view for this meta value, we can see that an executable named, dwmss.exe, is making a network connection to de11-rs4[.]com:

 

Coming back to the Investigation view, we can run a query to see what other activity this executable is performing. To do this, we execute the following query, filename.src = 'dwmss.exe' - here we can see the executable is running reconnaissance type commands:

 

From here we decide to download the executable directly from the machine itself and perform some analysis on it. In this case, we ran strings and analysed the output and saw there were a large number of references to API calls of interest:

 

There is also a string that references RC4, which is an encryption algorithm. This could be of potential interest to decrypt the Base64 text we saw in the network traffic:

 

RC4 requires a key, so while analysing the strings we should also look for potential candidates for said key. Not far from the RC4 string is something that looks like it could be what we are after:

 

Navigating back to the packets and copying some of the Base64 from one of the POSTs, we can run it through the RC4 recipe on CyberChef with our proposed key; in the output we can see the data decoded successfully and contains information about the infected endpoint:
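If you would rather script the decryption than use CyberChef, RC4 is easy to reproduce. The snippet below is a minimal sketch - the key and ciphertext are stand-ins, so substitute the key recovered from the binary's strings and the Base64 body copied from the POST:

import base64

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA) - RC4 is symmetric, so this both encrypts and decrypts
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = b"recovered-key"                                 # stand-in for the key found near the RC4 string
ciphertext = rc4(key, b"hostname=VICTIM01;user=bob")   # stand-in for base64.b64decode(<POST body>)
print(rc4(key, ciphertext).decode())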

 

Now we have confirmed this is malware, we should go back and look at all the activity generated by this process. This could be any file that it has created, files dropped around the same time, folders it may be using, etc.

 

Conclusion

C2 frameworks are constantly being developed and improved upon, but as you can see from this C2 which is ~6 years old, their operation is fairly consistent with what we see today, and with the way NetWitness allows you to pull apart the characteristics of the protocol, they can easily be identified.

In this post we will cover CVE-2019-0604 (https://nvd.nist.gov/vuln/detail/CVE-2019-0604), which, albeit a somewhat older vulnerability, is one that is still being exploited. This post will also go a little further than just the initial exploitation of the Sharepoint server, and use EternalBlue to create a user on a remote endpoint to allow lateral movement with PAExec; again, this is an old, well-known vulnerability, but something still being used. We will then utilise Dumpert to dump the memory of LSASS (https://github.com/outflanknl/Dumpert) and obtain credentials, and then employ atexec from Impacket (https://github.com/SecureAuthCorp/impacket) to further laterally move.

 

The Attack

The initial foothold on the network is obtained via the Sharepoint vulnerability (CVE-2019-0604). We will use the PoC code developed by Voulnet (https://github.com/Voulnet/desharialize) to drop a web shell:

 

In the above command we are targeting the vulnerable Picker.aspx page, and using cmd to echo a Base64 encoded web shell to a file in C:\ProgramData\ - we then use certutil to Base64 decode the file into a publicly accessible directory on the Sharepoint server and name it, bitreview.aspx.

 

To access the web shell we just dropped onto the Sharepoint server, we are going to use the administrative tool, AntSword (https://github.com/AntSwordProject/antSword). Here we add the URL to the web shell we dropped, and supply the associated password:

 

Now we can open a terminal and begin to execute commands on the server to get a lay of the land and find other endpoints to laterally move to:

 

The AntSword tool has a nice explorer view which allows us to easily upload additional tools to the server. In this instance we upload a scanner to check if an endpoint is vulnerable to EternalBlue:

 

Now we can iterate through some of the endpoints we uncovered earlier to see if any of them are vulnerable to EternalBlue:

 

Now that we have uncovered a vulnerable endpoint, we can use this to create a user that will allow us to laterally move to it. Using the PoC code created by Worawit (https://github.com/worawit/MS17-010), we can exploit the endpoint and execute custom shellcode of our choosing. For this I compiled some shellcode to create a local administrative user called, helpdesk. I uploaded my shellcode and EternalBlue exploit executable and ran it against the vulnerable machine:

 

Now that we have created a local administrative user on the endpoint, we can laterally move to it using those credentials. In this instance, we upload PAExec and Dumpert, so we can laterally move to the endpoint and dump the memory of LSASS. The following command copies over and executes Outflank-Dumpert.exe using PAExec and the helpdesk user we created via EternalBlue:

 

This tool will locally dump the memory of LSASS to C:\Windows\Temp - so we will mount one of the administrative shares on the endpoint, and confirm if our dump was successful:

 

Using AntSword's explorer, we can easily navigate to the file and download it locally:

 

We can then use Mimikatz on the attacker's local machine to dump the credentials, which may help us laterally move to other endpoints:

 

We decide to upload the atexec tool from Impacket to execute commands on the remote endpoint to see if there are other machines we can laterally move to. Using some reconnaissance commands, we find an RDP session using the username we pulled from the LSASS dump:

 

From here, we could continue to laterally move, dump credentials, and further own the network.

 

Detection using NetWitness Network

NetWitness doesn't always have to be used for threat hunting, it can also be used to search for things you know about, or have been currently researching. Taking the Sharepoint RCE as an example, we can easily search using NetWitness to see if any exploits have taken place. Given that this is a well documented CVE, we can start our searching by looking for requests to picker.aspx (filename = 'picker.aspx'), which is the vulnerable page - from the below we can see two GET requests, and a POST for Picker.aspx (inbound requests directly to this page are uncommon):

Next we can reconstruct the events to see if there is any useful information we can ascertain. Looking into the HTTP POST, we can see the URI matches what we would expect for this vulnerability, we also see that the POST parameter, ctl00$PlaceHolderDialogBodySection$ctl05$hiddenSpanData, contains the hex encoded and serialized .Net XML payload. The payload parameter also starts with two underscores which will ensure the payload reaches the XML deserialization function as is documented:

 

Seeing this would already warrant investigation on the Sharepoint server, but let's use some Python so we can take the hex encoded payload and deserialize it to see what was executed:
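A minimal sketch of that idea is below - hex-decode the parameter value and pull out the printable runs, which is normally enough to expose the embedded command without fully deserializing the .NET gadget. The payload string here is a stand-in; paste in the value copied from the hiddenSpanData parameter:

import binascii
import re

# Stand-in: hex for "cmd.exe /c whoami" - replace with the full hex string from the POST parameter
hex_payload = "636d642e657865202f632077686f616d69"

decoded = binascii.unhexlify(hex_payload)

# The real payload is a serialized .NET XML gadget; the command line is embedded as readable text,
# so extracting printable runs is usually enough to see what was executed
for run in re.findall(rb"[ -~]{6,}", decoded):
    print(run.decode())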

 

From this, we have the exact command that was run on the Sharepoint server that dropped a web shell. This also means we now know the name of the web shell and where it is located, making the next steps of investigation easier:

As analysts, it sometimes pays to do these things to find additional breadcrumbs to pull from, although be careful, as this can be time consuming and other methods can make it a lot easier, like MFT analysis that is described later on in the blog post.

 

This means we could search for this filename in NetWitness to see if we have any hits (filename = 'bitreview.aspx'):

 

As you can see from the above highlighted indicators, even without knowing the name of the web shell we would have still uncovered it as NetWitness created numerous meta values surrounding its usage. A fairly recent addition to the Lua parsers available on RSA Live is, fingerprint_minidump.lua - this parser creates the meta value, minidump, under the Filetype meta key, and also creates the meta value, lsass minidump, under the Indicators of Compromise meta key. This parser is a fantastic addition as it tags LSASS memory dumps traversing the network, which is uncommon behaviour.

 

Reconstructing the events, we can see the web shell traffic which looks strikingly similar to China Chopper. The User-Agent is also a great indicator for this traffic, which is the name of the tool used to connect to the web shell:

 

We can Base64 decode the commands from the HTTP POSTs and get an insight into what was being executed. The below shows the initial POST, which returns the current path, operating system, and username:

 

The following shows a directory listing of C:\ProgramData\ being executed, which is where the initial Base64 encoded bitreview.aspx web shell was dropped:

 

We should continue to Base64 decode all of these commands to gain a better understanding of what the attacker did, but for this blog post I will focus on the important pieces of the traffic. Pivoting into the Events view for the meta value, hex encoded executable, we can see the magic bytes for an executable that have been hex encoded:

 

Extracting all the hex starting from 4D5A (MZ header in hexadecimal) and decoding it with CyberChef, we can clearly see this is an executable. From here, we could save this file and perform further analysis on it:
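The same carve can be scripted rather than done in CyberChef: find the first occurrence of 4D5A, decode from there, sanity-check the magic bytes, and write the result out for analysis. The hex blob below is a stand-in for the hex copied from the reconstructed session:

import binascii

hex_blob = "0d0a0d0a" + "4d5a9000" + "00" * 58   # stand-in: paste the hex copied out of the session here
start = hex_blob.lower().find("4d5a")            # locate the MZ (DOS) header
data = binascii.unhexlify(hex_blob[start:])

assert data[:2] == b"MZ"                         # confirm we really carved an executable
with open("carved_tool.bin", "wb") as fh:
    fh.write(data)
print(f"wrote {len(data)} bytes starting at hex offset {start}")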

 

Continuing on with Base64 decoding the commands, we come across something interesting: the usage of a tool called eternalblue_exploit7.exe against an endpoint in the same network. This gives the defender additional information surrounding other endpoints of interest to the attacker, and therefore endpoints to focus on:

If you only have packet visibility, you should always decode every command. This will help you as a defender better understand the attacker's actions and uncover additional breadcrumbs. But if you have NetWitness Endpoint, it may be easier to see the commands there, as we will see later.

 

Knowing EternalBlue uses SMB, we can pivot on all SMB traffic from the Sharepoint server to the targeted endpoint. Opening the Enablers of Compromise meta key, we can see two meta values indicating the use of SMBv1; this is required for EternalBlue to work. There is also a meta value of not implemented under the Error meta key; this is fairly uncommon and can help detect potential EternalBlue exploitation:

 

Reconstructing the events for the SMBv1 traffic, we come across a session that contains a large sequence of NULLs; this is the beginning of the EternalBlue exploit, and these NULLs essentially move the SMB server state to a point where the vulnerability exists:

 

With most intrusions, there is typically some form of lateral movement that takes place using SMB. As a defender we should iterate through all possible lateral movement techniques, but for this example I want to see if PAExec has been used. To do this I use the following query (service = 139 && filename = 'svcctl' && filename contains 'paexe'). From the below, we can see that there is indeed some PAExec activity. By default, PAExec also includes the hostname of where the activity originated, so from the below filenames, we can tell that this activity came from SP2016, the Sharepoint server. We can also see that a file was transferred, as is indicated by the paexec_move0.dat meta value - this is the Outflank-Dumpert.exe tool:

 

Back in the Investigation view, under the Indicators of Compromise meta key, we see a meta value of lsass minidump. Pivoting on this value, we see the dumpert.dmp file in the temp\ directory for the endpoint that was accessed over the ADMIN$ share - this is our LSASS minidump created using the Outflank-Dumpert.exe tool:

 

Navigating back to view all the SMB traffic, and focusing on named pipes (service = 139 && analysis.service = 'named pipe'), we can see a named pipe being used called atsvc. This named pipe gives access to the AT-Scheduler Service on an endpoint and can be used to schedule tasks remotely. We can also see some .tmp files being created in the temp\ directory on this endpoint with what look like randomly generated names, and windows cli admin commands associated with one of them:

 

Reconstructing the events for this traffic, we can see the scheduled tasks being created. In the below screenshot, we can see the XML and the associated parameters passed, in this instance using CMD to run netstat looking for RDP connections and output the results to %windir%\Temp\rWLePJvp.tmp:

 

This is lateral movement behaviour via Impacket's tool, atexec. It writes the output of the command to a file so it can read it and display it back to the attacker - further analysing the payload, we can see the output that was read from the file and gain insight into what the attacker was after, and subsequently endpoints to investigate:

 

 

Detection using NetWitness Endpoint

As always with NetWitness Endpoint, I like to start my hunting by opening the three compromise keys (IOC, BOC, and EOC). In this case, I only had meta values under the Behaviours of Compromise meta key. I have highlighted a few I deem more interesting with regards to this blog, but you should really investigate all of them:

 

Let's start with the runs certutil with decode arguments meta value. Opening this in the Events view, we can see the parameter that was executed, which is a Base64 encoded value being echoed to the C:\ProgramData\ directory, and then certutil being used to decode it and push it to a directory on the server:

 

From here, we could download the MFT of the endpoint:

 

Locate the file that was decoded, download it locally, and see what the contents are:

 

From the contents of the file, we can see that this is a web shell:

 

We also observed the attacker initially drop a file in the C:\ProgramData\ directory, so this is also a directory of interest and somewhere we should browse to within the MFT - here we uncover the attacker's tools, which we could download and analyse:

 

Navigating back to the Investigate view and opening the meta value http daemon runs command prompt in the Events view, we can see the HTTP daemon, w3wp.exe, executing reconnaissance commands on the Sharepoint server:

This is a classic indicator for a web shell, whereby we have an HTTP daemon spawning CMD to execute commands.

 

Further analysis of the commands executed by the attacker shows EternalBlue executables being run against an endpoint, after this, the attacker uses PAExec with a user called helpdesk to connect to the endpoint - implying that the EternalBlue exploit created a user called helpdesk that allowed them to laterally move (NOTE: we will see how the user creation via this exploit looks a little later on):

 

Navigating back to Investigate, and this time opening the Events view for creates local user account, we see lsass.exe running net.exe to create a user account called, helpdesk; this is the EternalBlue exploit. LSASS should never create a user; this is a high fidelity indicator of malicious activity:

 

Another common attacker action is to dump credentials. Due to the popularity of Mimikatz, attackers are looking for other methods of dumping credentials; this typically involves creating a memory dump of the LSASS process. We can therefore use the following query (action = 'openosprocess' && filename.dst = 'lsass.exe'), and open the Filename Source meta key to look for something opening LSASS that stands out as anomalous. Here we can see a suspect executable opening LSASS named, Outflank-Dumpert.exe:

 

As defenders, we should continue to triage all of the meta values observed. But for this blog post, I feel we have demonstrated NetWitness' ability to detect these threats.

 

Detection Rules

The following table lists some application rules you can deploy to help with identifying these tools and behaviours:

Appliance | Description | Logic | Fidelity
Packet Decoder | Requests of interest to Picker.aspx | service = 80 && action = 'post' && filename = 'picker.aspx' && directory contains '_layout' | Medium
Packet Decoder | AntSword tool usage | client begins 'antsword' | High
Packet Decoder | Possible Impacket atexec usage | service = 139 && analysis.service = 'named pipe' && filename = 'atsvc' && filename ends 'tmp' | Medium
Packet Decoder | Dumpert LSASS dump | filename = 'dumpert.dmp' | High
Packet Decoder | Possible EternalBlue exploit | service = 139 && error = 'not implemented' && eoc = 'smb v1 response' | Low
Packet Decoder | PAExec activity | service = 139 && filename = 'svcctl' && filename contains 'paexe' | High
Endpoint Log Hybrid | Opens OS process LSASS | action = 'openosprocess' && filename.dst = 'lsass.exe' | Low
Endpoint Log Hybrid | LSASS creates user | filename.src = 'lsass.exe' && filename.dst = 'net.exe' && param.dst contains ' /add' | High

 

Conclusion

Threat actors are consistently evolving and developing new attack methods, but they are also using tried and tested methods as well - there is no need for them to use a new exploit when a well-known one works just fine on an unpatched system. Defenders should not only be keeping up to date with the new, but also retroactively searching their data sets for the old.

A couple of months ago, Mr-Un1k0d3r released a lateral movement tool that solely relies on DCE/RPC (https://github.com/Mr-Un1k0d3r/SCShell). This tool does not create a service and drop a file like PsExec or similar tools would do, but instead uses the ChangeServiceConfigA function (and others) to edit an existing service and have it execute commands; making this a fileless lateral movement tool.

 

SCShell is not a tool designed to provide a remote semi-interactive shell. It is designed to allow the remote execution of commands on an endpoint utilising only DCE/RPC in an attempt to evade common detection mechanisms; while this tool is slightly stealthier than most in this category, it’s also a bit more limited in what an attacker can do with it.

 

When we first looked at this, we didn't have much in terms of detection, but thanks to a prompt response from William Motley on our content team, an update was made to the DCERPC parser, and that update is the basis of this post.

 

The Attack

In the example screenshot below, I run the SCShell binary against an endpoint to launch calc.exe; while this is of no use to an attacker, it is an example we can use to visually confirm the success of the attack on the victim machine:

 

 

It could also be used to launch a Metasploit reverse shell, for example, as shown in the screenshot below. We will cover some of the interesting artifacts left over from this execution in a separate post. Obviously, this is not something an attacker would do; in their case, something like launching an additional remote access trojan or tool would be more likely:

 

SCShell edits an existing service on the target endpoint; it does not create a new one. Therefore the service needs to already exist on the target. In the above example, I use defragsvc as it is a common service on Windows endpoints.

 

RSA NetWitness Network Analysis

There was a recent update to the DCERPC parser that is available via RSA NetWitness Live (thanks to Bill Motley); this parser now extracts the API calls made over DCE/RPC, which can be useful in detecting suspect activity over this protocol. If you have set up your subscriptions correctly for this parser (which you should have), it will be updated automatically; otherwise you will have to push it manually.

 

So, to start my investigation (as per usual) I take a look at my compromise meta keys and notice a meta value of, remote service control, under the Indicators of Compromise [ioc] meta key. This is an area that should be regularly explored to look for anomalous activity:

 

Pivoting on this meta value and opening up the action and filename meta keys, we can see the interaction with the svcctl interface that is being used to call API functions to query, change, and start an existing service:

 

 

  • StartServiceA - Starts a service
  • QueryServiceConfigA - Retrieves the configuration parameters of the specified service
  • OpenServiceA - Opens an existing service
  • OpenSCManagerW - Establishes a connection to the service control manager on the specified computer and opens the specified service control manager database
  • ChangeServiceConfigA - Changes the configuration parameters of a service

 

The traffic sent over DCE/RPC is encrypted, so reconstructing the sessions will not help here, but given that we have full visibility we can quickly pivot to endpoint data to get the answers we need. The following logic would allow you to identify this remote service modification behaviour taking place in your environment and subsequently the endpoints of interest for investigation:

service = 139 && filename = 'svcctl' && action = 'openservicea' && action = 'changeserviceconfiga' && action = 'startservicea'

 

RSA NetWitness Endpoint Analysis

A great way to perform threat hunting on a dataset is by performing frequency analysis; this allows us to bubble up outliers and locate suspect behaviour with respect to your environment - an anomaly in one environment can be common in another. In this instance, this could be done by looking for less common executables being spawned by services.exe - the following query would be a good place to start, device.type='nwendpoint' && filename.src='services.exe' && action='createprocess' - we would then open up the Filename Destination meta key and see a large number of results returned:

 

 

Typically, we tend to view the results from our queries in descending order; in this instance, we want to see the least common, so we switch the sorting to ascending to bubble up the anomalous executables. Now we can analyse the results and, as shown in the screenshot below, we see a couple of interesting outliers: calc.exe and cmd.exe:
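The same frequency analysis can also be done outside the UI if you export the matching events - a minimal sketch, assuming a CSV export with a filename.dst column (the file name and column layout are assumptions):

import csv
from collections import Counter

counts = Counter()
with open("services_createprocess_export.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        counts[row["filename.dst"].lower()] += 1

# Ascending order bubbles up the rare child processes - the outliers worth investigating first
for name, count in sorted(counts.items(), key=lambda kv: kv[1]):
    print(f"{count:6d}  {name}")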

 

 

Pivoting into the Events view for cmd.exe, we can see it using mshta to pull a .hta file; clearly this is not good:

 

 

This activity whereby services.exe spawns a command shell is out of the box content, and can be found under the Behaviors of Compromise [boc] meta key, so this would also be a great way to start an investigation as well:

 

 

Now that we have suspect binaries of interest, we have files and endpoints we could perform further analysis on to get our investigation fully underway, but for this post I will leave it here.

 

Conclusion

It is important to ensure that all your content in NetWitness is kept up to date - automating your subscriptions to Lua parsers for example, is a great start. It ensures that you have all the latest metadata being created from the protocols, and improves your ability as a defender to hunt and find malicious behaviours.

 

It is also important to remember that while sometimes there may not be a lot of activity from the initial execution of say a binary, at some point, it will have to perform some activity in order to achieve its end goal. Picking up on said activity will allow defenders to pull the thread back to the originating malicious event.

DNS over HTTPS (DoH) was introduced to increase privacy and help prevent the manipulation of DNS data by utilising HTTPS to encrypt it. Mozilla and Google have been testing versions of DoH since June 2018, and have already begun to roll it out to end-users via their browsers, Firefox and Chrome. With the adoption rates of DoH increasing, and the fact that C2 frameworks using DoH have been available since October 2018, DoH has become an area of interest for defenders; one C2 that stands out is goDoH by SensePost (https://github.com/sensepost/goDoH).

 

goDoH is a proof of concept Command and Control framework written in Golang that uses DNS-over-HTTPS as a transport medium. Currently supported providers include Google and Cloudflare, but it also contains the ability to use traditional DNS.

 

The Attack

With goDoH, the same binary is used for the C2 server, and the agent that will connect back to it. In the screenshot below, I am setting up the C2 on a Windows endpoint - I specify the domain I will be using, the provider to use for DoH, and that this is the C2:

 

 

On the victim endpoint I do the same, but instead specify that this is the agent:

 

 

After a short period of time, our successful connection is made, and we can begin to execute our reconnaissance commands:

 

 

RSA NetWitness Platform Network Analysis: SSL Traffic

Given its default implementation using SSL, there is not a vast amount of information we can extract, however, that does not mean that we cannot locate DoH in our networks. A great starting point is to look at who currently provides DoH - after some Googling I came across a list of DoH providers on the following GitHub page:

 

 

These providers could be converted into an application rule (or Feed) to tag them in your environment, or utilised in a query to retroactively view DoH usage in your environment. This would help defenders to initially pinpoint DoH usage:

alias.host ends 'dns.adguard.com','dns.google','cloudflare-dns.com','dns.quad9.net','doh.opendns.com','doh.cleanbrowsing.org','doh.xfinity.com','dohdot.coxlab.net','dns.nextdns.io','dns.dnsoverhttps.net','doh.crypto.sx','doh.powerdns.org','doh-fi.blahdns.com','dns.dns-over-https.com','doh.securedns.eu','dns.rubyfish.cn','doh-2.seby.io','doh.captnemo.in','doh.tiar.app','doh.dns.sb','rdns.faelix.net','doh.li','doh.armadillodns.net','jp.tiar.app','doh.42l.fr','dns.hostux.net','dns.aa.net.uk','jcdns.fun','dns.twnic.tw','example.doh.blockerdns.com','dns.digitale-gesellschaft.ch'

NOTE: This is by no means a definitive list of DoH providers. You can use the above as a base, but should collate your own.
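If you keep the provider list as data rather than editing the rule by hand, a few lines of Python can regenerate the query (or a simple CSV that could seed a feed) whenever your list changes. A minimal sketch - the provider list and the feed column layout are assumptions:

# Stand-in list - collate and maintain your own set of DoH resolver hostnames
providers = [
    "dns.google",
    "cloudflare-dns.com",
    "dns.quad9.net",
    "doh.opendns.com",
]

# Rebuild the application rule / query shown above
query = "alias.host ends " + ",".join(f"'{p}'" for p in providers)
print(query)

# Or write a simple two-column CSV that could be used as the basis of a feed
with open("doh_providers.csv", "w") as fh:
    for p in providers:
        fh.write(f"{p},doh provider\n")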

 

Running this query through my lab, I can see there is indeed some DoH activity for the Cloudflare provider:

As this traffic is encrypted, it is difficult to ascertain whether or not it is malicious, but there are a couple of factors that may help us. Firstly, we could reduce the data to a more manageable amount by filtering on long connection, which is under the Session Analysis meta key; this is because C2 communications over DoH would typically be long lived:

 

 

We could then run the JA3 hash values through a lookup tool to identify any outliers (in this instance I am left with one due to my lab not capturing a lot of data):

 


For details on how to enable JA3 hashes in the RSA NetWitness Platform, take a look at one of our previous posts: Using the RSA NetWitness Platform to Detect Command and Control: PoshC2 v5.0 

Running the JA3 hash (706ea0b1920182287146b195ad4279a6) through OSINT (https://ja3er.com/form), we get results back for this being Go-http-client/1.1; this is because the goDoH application is written in Golang. This stands out as an outlier, and the source of this traffic would be a machine to perform further analysis on:

 

 

 

RSA NetWitness Platform Network Analysis: SSL Intercepted Traffic

Detecting DoH when SSL interception is in place becomes far easier. DoH requests for Cloudflare, for example, supply a Content-Type header that allows us to easily identify it (besides the alias.host value):

 

Determining whether the DoH connections are malicious also becomes far easier when SSL interception is in place, as it allows defenders to analyse the payload that would typically be encrypted. The following screenshot shows the decrypted DoH session between the client and Cloudflare - here we are able to see the DNS request and response in the clear, which divulges the C2 domain being used, go.doh.dns-cloud.net. We can also see that the JA3 hash we previously reported was correct, as the User-Agent is Go-http-client/1.1:

 

 

The session for this DoH C2 traffic is quite large, so I am unable to show it all - this is due to the limited amount of information that can be transmitted via each DNS query. An example of data being transmitted via an A record can be seen below - the data is encrypted so won't make sense by merely viewing it:

 

 

Within this session there are hundreds of requests for the go.doh.dns-cloud.net domain with a very high variability in the FQDN seen in the name parameter of the query; this is indicative behaviour of C2 communication over DNS. Below I have merged five of the requests together in order to help demonstrate this variability:
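That variability can also be quantified if you export the queried names: a high ratio of unique leftmost labels, combined with high-entropy labels, is consistent with data being tunnelled in the hostnames. A minimal sketch, with the sample names below as stand-ins for the values pulled from the capture:

import math
from collections import Counter

# Stand-in sample of the "name" values seen in the DoH queries for the suspect domain
names = [
    "a1b2c3d4e5.go.doh.dns-cloud.net",
    "9f8e7d6c5b.go.doh.dns-cloud.net",
    "00aa11bb22.go.doh.dns-cloud.net",
]

def entropy(s: str) -> float:
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

labels = [n.split(".")[0] for n in names]
unique_ratio = len(set(labels)) / len(labels)
avg_entropy = sum(entropy(label) for label in labels) / len(labels)
print(f"unique label ratio: {unique_ratio:.2f}, average label entropy: {avg_entropy:.2f}")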

 

 

Given the use of TCP for HTTPS vs the common use of UDP for DNS, the traffic shows as a single session in the RSA NetWitness Platform due to TCP session/port reuse, normally this type of activity would present itself over a larger number of RSA NetWitness Platform sessions when using native DNS.

 

RSA NetWitness Endpoint Analysis

Looking at my compromise keys, I decide to start my triage by pivoting into the Events view for the meta value, runs powershell with http argument as shown below.

 

 

From the following screenshot, we can see an executable named, googupdater.exe, running out of the user's AppData directory and executing a PowerShell command to get the public IP of the endpoint. We also get to see the parameter that was passed to the googupdater.exe binary, which reveals the domain being contacted:

 

NOTE: googupdater.exe is the goDoH binary and was renamed for dramatic effect.

We could have also pivoted on the outbound from unsigned appdata directory meta value which would have led us to this suspect binary, as well. While from an Endpoint perspective this is just another compiled tool communicating over HTTPS, the fact that it will need to spawn external processes to execute activity would lead us to an odd parent process:

 

 

Given this scenario in terms of Endpoint, this would lead us back to common hunting techniques, but in the interest of brevity, I won't dig deeper for this tool. The key items would be an uncommon parent process for some unusual activity, and the outbound connections from an unsigned tool. While both can at times be noisy, in conjunction with other drills, they can be narrowed down to cases of interest.

 

Conclusion

This post further substantiates the requirement for SSL interception, as it vastly improves the defender's capability to investigate and triage potentially malicious communications. While it is still possible to identify suspect DoH traffic without SSL interception, it can be incredibly difficult to ascertain its intentions. DNS is also a treasure trove for defenders, and the introduction and use of DoH could vastly reduce their ability to protect the network effectively.

A couple of days ago on GitHub, Hackndo released a tool (https://github.com/Hackndo/lsassy) that is capable of dumping the memory of LSASS using LOLBins (Living off the Land Binaries) - typically we would see attackers utilising the Sysinternals ProcDump utility to do this. Lsassy uses the MiniDump function from comsvcs.dll in order to dump the memory of the LSASS process; this action can only be performed as SYSTEM, so it therefore creates a scheduled task as SYSTEM, runs it, and deletes it.

 

We decided to take this tool for a spin in our lab and see how we would detect this with NetWitness.

 

The Attack

To further entrench themselves and find assets of interest, an attacker will need to move laterally to other endpoints in the network. Reaching this goal often involves pivoting through multiple systems, as well as dumping LSASS to extract credentials. In the screenshot below, we use the lsassy tool to dump credentials from a remote host that we currently have access to:

 

The output of this command shows us the credentials for an account we are already aware of, but also shows us credentials for an account we previously did not know about, tomcat-svc.

 

NetWitness Network Analysis

I like to start my investigation every morning by taking a look at the Indicators of Compromise meta key, this way I can identify any new meta values of interest. Highlighted below is one that I rarely see (of course in some environments this can be a common activity, but anomalies of what endpoints this takes place on can be identified):

 

Reconstructing the session, we can see the remote scheduled task that was created and analyse what it is doing. From the below screenshot, we can see the task created will use CMD to launch a command to locate LSASS, and subsequently dump it to \Windows\Temp\tmp.dmp using the MiniDump function within the comsvcs.dll:

 

cmd.exe /C for /f "tokens=1,2 delims= " ^%A in ('"tasklist /fi "Imagename eq lsass.exe" | find "lsass""') do C:\Windows\System32\rundll32.exe C:\windows\System32\comsvcs.dll, MiniDump ^%B \Windows\Temp\tmp.dmp full

This task also leaves other artifacts of interest behind. From the screenshot below, we can see the tmp.dmp LSASS dump being created and read:

 

This makes the default usage of lsassy easy to detect with simple application rule logic such as the below. Of course the names and location of the dump can be altered, but attackers typically tend to leave the defaults for these types of tools:

service = 139 && directory = 'windows\\temp\\' && filename = 'tmp.dmp'

 

NetWitness Endpoint Analysis

Similarly with Endpoint, I like to start my investigations by opening up the Compromise meta keys - IOC, BOC, and EOC. From here I can view any meta values that stand out, or iteratively triage through them. One of the meta values of interest from the below is, enumerates processes on local system:

Pivoting into the Events view for this meta value, we can see cmd.exe launching tasklist to look for lsass.exe - to get proper access, the command is also executing with SYSTEM level privileges - this is something you should monitor regularly:

 

After seeing this command, it would be a good idea to look at all activity targeted toward LSASS for this endpoint. To do that, I can use the query filename.dst = 'lsass.exe' and start to investigate by opening up meta keys like the ones below. Something that stands out as interesting is the usage of rundll32.exe to load a function called minidump from the comsvcs.dll:

Pivoting into the Events view, we can see the full command more easily. Here we can see that rundll32.exe is loading the MiniDump function from comsvcs.dll and passing some parameters, such as the process ID to dump (which was found by the initial process enumeration), the location and name for the dump, and the keyword full:

 

This activity could be picked up by using the following logic in an application rule. This will be released via RSA Live soon, but you can go ahead and implement/check your environment now:

device.type = 'nwendpoint' && category = 'process event' && (filename.all = 'rundll32.exe') && ((param.src contains 'comsvcs.dll' && param.src contains 'minidump') || (param.dst contains 'comsvcs.dll' && param.dst contains 'minidump'))

 

Conclusion

It is important to consistently monitor network and endpoint behaviour for abnormal actions taking place, and not solely rely on out of the box detections. New attack methods/tools are consistently being developed, but the actions these tools take always leave footprints behind; it is down to the defender(s) to spot the anomalies and triage accordingly. With that being said, RSA are consistently updating detections for attacks such as the one laid out in this post - we have been working with the content team to have this tool usage detected with out of the box content.

In this blog post, I am going to cover a C&C framework called ReverseTCP Shell, which was recently posted to GitHub by ZHacker:

 

With this framework, a single PowerShell script is used, and PowerShell is the server component of the C2. This is also a little different from other C2s, as it doesn't use a common protocol such as HTTP; this is why we thought it would be a good one to cover, as it allows us to demonstrate the power of NetWitness with proprietary or unknown protocols.

 

The Attack

Upon execution of the ReverseTCP Shell PowerShell script, it will prompt for a couple of parameters, such as the host and port to listen for connections:

 

It will then supply options to generate a payload. I chose the Base64 option and opted to deploy the CMD payload on my endpoint. At this point, the C2 also starts to listen for new connections:

 

After executing the payload on my endpoint, I receive a successful connection back:

 

Now that I have my successful connection, I can begin to execute reconnaissance commands on the endpoint, or any commands of my choosing:

 

The C2 also allows me to take screenshots of the infected endpoint, so let's do that as well:

 

NetWitness Packets Analysis

NetWitness Packets does a fantastic job at detecting protocols and has a large range of parsers to do so. In some cases, NetWitness Packets cannot classify the traffic it is analysing; this could be because it is a proprietary protocol, or just a protocol there is not a parser for yet. In these instances, the data gets classified as OTHER.

 

This traffic will still be analysed by the parsers in NetWitness, and should therefore be analysed by you as well. So to start the investigation, we would focus on traffic of type OTHER, using the following query, service=0 - from here, we can open other meta keys to see what information NetWitness parsed out. One that instantly stands out as a great place to start investigating is the windows cli admin commands metadata under the Service Analysis meta key:

 

Reconstructing the sessions, it is possible to see raw data being transferred back and forth; there is no structure to the data, which is why NetWitness classified it as OTHER, but because NetWitness saw CLI commands being executed, it still created a piece of metadata to tell us about it:

NOTE: You may notice that the request and response in the above screenshot are reversed, this can happen for a number of reasons and an explanation as to why this occurs can be found in this KB article: 000012891 - When investigating sessions in RSA NetWitness, the source and destination IP addresses appear reversed.

 

The following query could be used to find suspect traffic such as this:

service = 0 && analysis.service = 'windows cli admin commands'

 

Further perusing the traffic for this C2, we can also see the screenshot taking place:

 

Which returns a decimal encoded PNG image:

 

We can take these decimal values from the network traffic and run them through a recipe in CyberChef (https://gchq.github.io/CyberChef) to render the image, and see what the attacker saw:
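The CyberChef step can also be scripted: convert each decimal value to a byte, check the PNG signature, and write the result to disk. The short decimal string below is a stand-in for the full dump copied from the session:

# Stand-in: the first eight bytes of a PNG - paste the full space-separated decimal dump here
decimal_dump = "137 80 78 71 13 10 26 10"

data = bytes(int(v) for v in decimal_dump.split())
assert data[:8] == b"\x89PNG\r\n\x1a\n"   # sanity-check the PNG signature before saving
with open("recovered_screenshot.png", "wb") as fh:
    fh.write(data)
print(f"wrote {len(data)} bytes")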

 

NetWitness Endpoint Analysis

In NetWitness Endpoint, I always like to start my investigation by opening the IOC, BOC, and EOC meta keys. All of the metadata below should be fully investigated, but for this blog post, I will start with runs powershell decoding base64 string:

Pivoting into the Events view, and analysing all of the sessions, I come across the command I used to infect the endpoint. This event should stand out as odd due to the random capitalisation of the characters, which is an attempt to evade case-sensitive detection mechanisms, as well as the randomised Base64 encoded string, which is there to hide the logic of the command:

 

Due to the obfuscation put in place by the creator, we cannot directly decode the Base64 in the UI; this is because the Base64 encoded string has been shuffled. For instances like this, where large amounts of obfuscation are put in place, I like to let PowerShell decode it for me by replacing IEX (Invoke-Expression) with Write-Host - so rather than executing the decoded command, it outputs it to the terminal:

Always perform any malware analysis in a safe, locked down environment. The method of deobfuscation used above does not necessarily mean you will not be infected when performing the same on other scripts.

After decoding the initial command, it appears there is more obfuscation put in place, so I do the same as before, replacing IEX with Write-Host to get the decoded command. This final deobfuscation is a PowerShell command to open a socket to a specified address and port - I now have my C2, and can use this information to pivot to the data in NetWitness Packets (if I had not found it before):

 

The above PowerShell was a subset of the first decoded command; the final piece of the PowerShell is a while loop that waits for data from the socket it opens, which is why the IEX alteration would not work here - it is just obfuscated with multiple poorly named variables to make it hard to understand:

 

Flipping back over to the Investigation UI, and looking at other metadata under the BOC meta key, it is possible to see a range of values being created for the reconnaissance commands that were executed over the C2:

 

As of 11.3, there is a new Analyze Process view (Endpoint: Investigating a Process), it allows us to visually understand the entire process event chain. Drilling into one of the events, and then using the Analyze Process function, it is possible to see all of the additional processes spawned by the malicious PowerShell process:

 

Conclusion

Analysing all traffic and protocols is important. It is true that some protocols will be (ab)used more than others, but excluding the analysis of traffic classified as OTHER can allow malicious communications such as the one detailed in this blog post to go under the radar. Looking for items such as files transferred, CLI commands, long connections, etc. can all help with whittling down the data set in the OTHER bucket to potentially more interesting traffic.

Over the past year, I have posted multiple blogs whereby I perform APT (Advanced Persistent Threat) emulation and analyse the forensic footprint left behind after the attack using the NetWitness platform. In this post, I take a look at an adversary emulation framework from MITRE named CALDERA - Cyber Adversary Language and Decision Engine for Red Team Automation:

This framework allows you to automate the adversary based around the MITRE ATT&CK framework (https://attack.mitre.org/matrices/enterprise/), and takes out a lot of the preparation work required to setup the attack scenarios.

 

For the purposes of this post / demo, I used a service that exposes local servers behind NATs and firewalls to the public internet over secure tunnels. In this case, the service used is localhost.run (http://localhost.run/) - I've covered others in the past and all of these should be blocked for corporate environments. With this in mind I did not blur the URL in the screenshots, but I've since killed that connection so this address may now belong to someone else if you try to reach it, for security reasons I would suggest you don't.

 

This is not an attack framework like the ones other posts have covered; it is more of an emulation framework. However, this could be more suitable if you are just starting out in APT emulation and want to see what you can and can't detect. As with the other posts, I will not go into detail on how to set up CALDERA, as there is plenty of information regarding that already available.

 

Overview

CALDERA ships with an agent named Sandcat, also referred to as 54ndc47. This agent is written in GoLang for cross platform compatibility, and is the agent we will deploy on the endpoint(s) we want to execute our operations against. Navigating to the Sandcat plugin, we are presented with two options to deploy the agent.

  • Option one will generate commands on the fly for the specific operating system selected
  • Option two supplies a URL in which you can visit from the endpoint to download and execute Sandcat manually

 

For this blog post, I opted for the PowerShell command to deploy the agent. I ran this on my endpoint and you can see the connection was successful and it starts to beacon:

 

I chose one of the default adversaries, hunter, for my operation, the output of which can be seen below. A high level overview of this emulation is the search for sensitive files, which it collects, stages, and exfiltrates:

 

 

NetWitness Packets

Firstly, let's take a look into NetWitness Packets. Focusing on outbound traffic (direction='outbound') and the HTTP protocol (service=80), we can place a focal point on outbound HTTP communication. From here, we can view the characteristics of the HTTP traffic by opening up the Service Analysis meta key. Drilling into http suspicious 4 headers, http post no get, and http suspicious no cookie we are left with 20 events:

 

Next, we can start to view other metadata related to this traffic. Opening the Client Application meta key, we can see a user agent of go-http-client/1.1 - this is because the agent is written in Go and the default user agent has not been altered. The server is Python/3.7 AIOHTTP/3.4.4, which is also worth noting. The filenames associated with this traffic are also interesting: instructions, results, and ping. These are very descriptive and are essentially the agent receiving instructions, returning results, or simply checking in:

 

This traffic could easily be picked up by adding the following logic to an application rule:

(client begins 'go-http-client') && (directory = '/sand/') && (filename = 'instructions','ping','results') && (server contains 'aiohttp')

            

 

Delving into the Event Analysis view, we can see Base64 encoded data in the body of the HTTP POSTs. As of NetWitness 11 and upward, decoding Base64 can be done directly from within the UI by simply highlighting the text and selecting Decode Selected Text from the popup:

 

The Sandcat agent Base64 encodes the whole instruction being sent to the endpoint; this instruction is in JSON format. The actual commands to be executed are again Base64 encoded within the JSON record. To decode the commands within, I chose to run the additional Base64 through another tool called CyberChef (https://gchq.github.io/CyberChef/):
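If you prefer to script the decode rather than use CyberChef, the same double unwrap can be done in a few lines of Python. This is a minimal sketch only - the field name ("command") and the sample blob below are illustrative assumptions, not the exact structure CALDERA uses:

import base64
import json

# Illustrative only: a stand-in for a captured HTTP POST body as seen in the packet data.
captured_body = base64.b64encode(
    json.dumps({"id": 1, "command": base64.b64encode(b"whoami /all").decode()}).encode()
).decode()

# First pass: the whole instruction is Base64-encoded JSON.
instruction = json.loads(base64.b64decode(captured_body))

# Second pass: the command inside the JSON record is Base64 encoded again.
command = base64.b64decode(instruction["command"]).decode()
print(command)  # -> whoami /all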

 

The traffic for Sandcat is very easy to detect in NetWitness Packets. It does not attempt to hide itself or blend in with normal traffic; this is most likely by design, as this is not an attack framework but an emulation framework.

 

NetWitness Endpoint

Drilling into boc = 'runs powershell' and boc = 'in root of users directory', we can see a file called sandcat.exe executing out of C:\Users\Public with arguments to connect to the CALDERA server, and we can see a large number of PowerShell commands being executed by it - these are the commands executed by the Sandcat agent to perform the operation laid out at the beginning of this post. The metadata writes executable to root of users directory, or evasive powershell used over network, under the BOC meta key would also have led to sandcat.exe and all of its associated commands:

 

 

Custom Attack

The out-of-the-box adversaries are great for getting to grips with CALDERA, but I decided to crank it up a notch and make my own. This operation involved some discovery of systems, dumping of credentials, and lateral movement, as can be seen below. These were all out-of-the-box abilities; I just combined them to make my own adversary:

 

Analysis

Delving into NetWitness Endpoint, we can see that there is a large quantity of metadata under the BOC meta key that tags all of the actions CALDERA performed:

 

CALDERA Ability Executed | Technique | NetWitness Metadata Created | Description
Find system network connections | T1049 | enumerates network connections | Adversaries may attempt to get a listing of network connections to or from the compromised system they are currently accessing or from remote systems by querying for information over the network.
Find user processes | T1057 | queries users logged on local system | Adversaries may attempt to get information about running processes on a system. Information obtained could be used to gain an understanding of common software running on systems within the network.
Run PowerKatz | T1003 | runs powershell with http argument | The download of Mimikatz
Run PowerKatz | T1003 | runs powershell downloading content | The download of Mimikatz
Run PowerKatz | T1003 | evasive powershell used over network | The PowerShell command used to download Mimikatz
Run PowerKatz | T1003 | powershell opens lsass process | Credential dumping is the process of obtaining account login and password information, normally in the form of a hash or a clear text password, from the operating system and software. Credentials can then be used to perform Lateral Movement and access restricted information.
Net use | T1077 | maps administrative share | Adversaries may use this technique in conjunction with administrator-level Valid Accounts to remotely access a networked system over server message block (SMB) to interact with systems using remote procedure calls (RPCs), transfer files, and run transferred binaries through remote Execution.
Net use | T1077 | lateral movement with credentials using net utility | The lateral movement using net.exe and explicit credentials

 

To get a better view of the commands that took place, I like to open the Events view; from here I can see what was executed in an easier-to-read format:

 

Conclusion

Getting into APT emulation is not an easy task; CALDERA, however, makes this a whole bunch easier. It is a great tool for testing your platform's abilities against the MITRE ATT&CK matrix and seeing what you can and can't detect, as well as getting a better understanding of how some of those techniques are actually performed - which will massively improve you as an analyst and improve your organisation's defensive posture. We only covered a subset of the available techniques, as the full content is too extensive to cover completely in a single post.

I was doing some hunting through our lab traffic today and came across some strange-looking traffic; it turned out to be Rui Ataide playing around with a new DNS C2. It is named WEASEL and can be found here: GitHub - facebookincubator/WEASEL: DNS covert channel implant for Red Teams. From this, we decided to put together this quick blog post to go over how the traffic looks in NetWitness Packets, which is why it is in a slightly different format to my usual posts on this topic. We may at some point update this to a full post if needed.

 

 

NetWitness Packets Analysis

As this tool uses DNS for its communication, we first need to place our focus on DNS traffic; we can do this with a simple query such as service=53. From here, I like to open the SLD (Second Level Domain) meta key and look for suspicious-sounding SLDs, or SLDs that are quite noisy. From the screenshot below, doh stands out as a good candidate:

 

It is also a good idea to do the same with TLD to see if anything suspect stands out. We blurred out the domain we were using in this instance, but the .ml TLD should stand out as suspect as it is free to register and commonly used for malicious purposes:

 

Upon drilling into the suspect SLD (sld='doh'), we can then open the Hostname Alias Record meta key to see how many unique values are associated with that SLD. This type of DNS activity requires uniqueness in the requests it makes so that the queries are not answered from cache, which is why you see a large number of unique alias.host values against a single SLD. This is depicted nicely by the below screenshot:

You would also typically see a large number of these DNS requests over a small period of time; however, this is entirely dependent on the C2 and the beaconing interval set. For the above, the time range was roughly 12 hours:
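The same hunt can be approximated outside the product by counting unique query names per second-level domain from exported DNS logs. A minimal sketch, where the sample query names and log source are assumptions purely for illustration:

from collections import defaultdict

# Illustrative sample of DNS query names; in practice these would be read
# from an exported DNS log or a PCAP-derived CSV.
queries = [
    "a1b2c3w.0f.12ab.doh.ml",
    "d4e5f6w.0f.12ab.doh.ml",
    "9a8b7cw.0f.12ab.doh.ml",
    "www.google.com",
]

unique_per_sld = defaultdict(set)
for name in queries:
    labels = name.rstrip(".").split(".")
    if len(labels) >= 2:
        sld = labels[-2]          # second-level domain, e.g. 'doh'
        unique_per_sld[sld].add(name)

# A single SLD with a very large number of unique query names is a good
# candidate for DNS tunnelling / C2 style behaviour.
for sld, names in sorted(unique_per_sld.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(f"{sld}: {len(names)} unique query names")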

 

 

This traffic could easily be detected by using the following logic in an application rule:

(alias.host regex '^[0-9a-vx-z-]{2,52}w{0,6}\.[0-9a-f]{2}\.[0-9a-f]{4}\.') && (dns.querytype = 'aaaa record')

 

For other tools, the following regex should work too; it may, however, need some adaptation for each specific tool:

^[0-9a-vx-z-]{2,52}w{0,6}\.[0-9a-f]{2}\.[0-9a-f]{4}\.
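Before deploying the regex, it can be sanity-checked against sample query names with a few lines of Python - the hostnames below are fabricated purely for illustration and are not real WEASEL traffic:

import re

pattern = re.compile(r"^[0-9a-vx-z-]{2,52}w{0,6}\.[0-9a-f]{2}\.[0-9a-f]{4}\.")

# Fabricated examples: the first mimics the encoded-label structure,
# the second is a benign-looking name that should not match.
samples = [
    "a1b2c3ww.0f.12ab.doh.ml",
    "mail.corporate-domain.com",
]

for name in samples:
    print(name, "->", bool(pattern.match(name)))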

 

With that being said, the behaviour of C2 communications like WEASEL can easily be altered, which would mean this application rule / regex would no longer trigger - which is why it is always important to review the behaviours of protocols and not rely solely on signatures.

 

Conclusion

DNS C2s are becoming a more prevalent way to exfiltrate information in and out of a network. It is a transmission medium that often gets forgotten about, as most would associate malicious communications with protocols such as HTTP or SSL and tend to only block those. DNS by nature can be very noisy - more so in this case, as there is a finite amount of information you can transfer in AAAA records, and by design WEASEL also opted for shorter query names to evade detection; both of these in conjunction further increase the amount of DNS traffic being generated. Additionally, DNS requests are cached for a period of time, so query names need to be unique and cannot easily be reused if they are to make it back to the attacker's infrastructure rather than being resolved from a cache. This inherently noisy behaviour of DNS C2 makes it slightly easier to detect when the right tools are in place.

Command and Control platforms are constantly evolving. In one of my previous blog posts, I detailed how to detect PoshC2 v3.8:

 

 

Since then, Nettitude have revamped PoshC2 and released v5.0. This blog post takes a look at the new and improved version, and goes into some detection mechanisms, but this time, solely over SSL.

 

Review

Reviewing the configuration for PoshC2, it appears it still generates a certificate with the same default information as its predecessor. This is not to say that it cannot easily be changed - you could simply edit the Python file that generates the certificate - or that it will not change in the future, but it is worth noting:

 

Delving into NetWitness Packets, we can see this information is extracted and gets populated under the meta keys shown below:

 

This would make detecting the default certificates of PoshC2 with application rules a simple task. We would only need to look for one of the metadata values above being created, as they are highly distinctive:

alias.host = 'p18055077' || ssl.ca = 'pajfds' || ssl.subject = 'pajfds'

The certificate is also self-signed and generated when the PoshC2 server is started, so we also see some interesting metadata values populated under the analysis.service meta key:

 

The certificate issued within last day meta value is relatively new and something to look out for within your environment. There are also other relatively new metadata values that will be populated based upon the analysis NetWitness performs against the certificate; these are shown below:

analysis.service | Description | Reason
certificate long expiration | Certificate expires more than two years since issued. | Certificate validity is usually capped at two years. Longer-lived certificates may be suspicious.
certificate expired | Certificate was expired when presented. | Expired certificates are invalid and won't be presented by most legitimate hosts.
certificate expired within last week | Certificate was expired by less than a week when presented. | Expired certificates are not expected to be presented by most legitimate hosts.
certificate issued within last day | Certificate was presented less than a day since issued. | New certificates may be suspicious in combination with other characteristics of the session.
certificate issued within last week | Certificate was presented less than a week since issued. | New certificates may be suspicious in combination with other characteristics of the session.
certificate issued within last month | Certificate was presented less than a month since issued. | New certificates may be suspicious in combination with other characteristics of the session.
certificate anomalous issued date | Certificate issued date is malformed, nonsensical, or invalid. | Invalid or malformed certificates are suspicious.
certificate anomalous expiration date | Certificate expiration date is malformed, nonsensical, or invalid. | Invalid or malformed certificates are suspicious.

 

Looking further into the configuration, there are a few other interesting default settings. The User Agent string is hard-coded, but of course you would need SSL inspection to see this, or for the beacons to be over HTTP - with that being said, it is a very common User Agent string and not a great indicator anyway. The default sleep, or beacon, is set to 5 seconds, and the jitter to 0.20 - this would make the beacons stand out in NetWitness Packets:

 

Looking into the Navigate view and pivoting on the suspiciously named certificate, ssl.ca = 'pajfds', it is possible to see a beacon-type pattern for this traffic:

 

Delving into the Event Analysis view for this, we can obtain a better view of the cadence of communication. From here you can see the very obvious beacon pattern coupled with a payload size that does not vary greatly. Two great indications of automated check-in type behaviour:
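This cadence analysis can also be reproduced programmatically if you export session times and payload sizes - a minimal sketch with made-up numbers, simply to illustrate the "low jitter, low payload variance" idea:

from statistics import mean, pstdev

# Illustrative (timestamp_in_seconds, payload_bytes) pairs for one src/dst pair;
# in practice these would come from exported session metadata.
sessions = [(0, 412), (5, 410), (10, 415), (15, 411), (20, 414), (25, 413)]

times = [t for t, _ in sessions]
sizes = [s for _, s in sessions]
intervals = [b - a for a, b in zip(times, times[1:])]

# Automated check-ins tend to have near-constant intervals and payload sizes.
print("mean interval:", mean(intervals), "stdev:", pstdev(intervals))
print("mean payload :", mean(sizes), "stdev:", pstdev(sizes))

if pstdev(intervals) < 1 and pstdev(sizes) < 10:   # illustrative thresholds
    print("looks like automated beaconing - worth a closer look")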

 

With 11.3.1.0, there is a new feature available that provides the ability to generate JA3 hashes for SSL traffic. They are not enabled by default, but the following configuration guide details how to enable them:

 

For more details on what JA3 hashes are, and how they can be useful, there is a great explanation of them from the creators available on Github:

 

In this instance, a PowerShell payload was dropped onto the endpoint, and therefore it is PowerShell making the web requests. The way PowerShell sets up its TLS sessions has a unique(ish) JA3 fingerprint:

 

Perusing the available open source JA3 hash lists, we can see that we do indeed have a match for this hash, and it is PowerShell (Miscellaneous/ja3_hashes.csv at master · marcusbakker/Miscellaneous · GitHub). While this is not an atomic indicator for PoshC2, it is a great way to detect PowerShell making web requests, and a great starting point for your threat hunting that could lead you to find C&C servers such as PoshC2 where a PowerShell payload was used:
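As a reminder of what is actually being hashed, a JA3 value is just an MD5 of five comma-separated fields taken from the Client Hello: SSL version, ciphers, extensions, elliptic curves, and point formats, with the values inside each field joined by dashes. A quick sketch, using made-up Client Hello values purely for illustration (these are not the real PowerShell values):

import hashlib

# Illustrative Client Hello fields - not taken from a real capture.
ssl_version = "771"                       # TLS 1.2
ciphers = [49195, 49199, 52393]
extensions = [0, 10, 11, 13, 35]
curves = [29, 23, 24]
point_formats = [0]

ja3_string = ",".join([
    ssl_version,
    "-".join(str(c) for c in ciphers),
    "-".join(str(e) for e in extensions),
    "-".join(str(c) for c in curves),
    "-".join(str(p) for p in point_formats),
])

ja3_hash = hashlib.md5(ja3_string.encode()).hexdigest()
print(ja3_string)
print(ja3_hash)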

 

The following screenshot shows the PowerShell payload created by PoshC2 and the one I used to infect the endpoint:

 

These JA3 hashes could be pulled in as a Feed, so the associated hash values get generated under a meta key of your choosing, or you could also create a right-click context menu action (attached to this post):

 

While not much has changed in terms of endpoint indicators and analysis, in this post we opted to cover the encrypted (by default) traffic generated by this framework in a bit more detail, while also highlighting some of the new certificate analysis characteristics in the product. Endpoint analysis of this framework can be found in the previous post regarding PoshC2: Using RSA NetWitness to Detect Command and Control: PoshC2

 

Conclusion

With the new release of PoshC2 v5.0, it appears that not much has changed in the grand scheme of things. With that being said, it is a good idea to regularly revisit known attack frameworks as they are constantly adapting and evolving to evade known detection mechanisms. It is also important to keep up to date with the latest features of NetWitness to ensure you have every chance to detect the bad traffic in your network.

Introduction

Cobalt Strike is a threat emulation tool used by red teams and advanced persistent threats for gaining and maintaining a foothold on networks. This blog post will cover the detection of Cobalt Strike based on a piece of malware identified from VirusTotal:

 

NOTE: The malware sample was downloaded and executed in a malware VM under the analyst's constant supervision, as this was/is live malware.

The Detection in NetWitness Packets

NetWitness Packets pulls apart characteristics of the traffic it sees. It does this via a number of Lua parsers that reside on the Packet Decoder itself. Some of the Lua parsers have options files associated with them that parse out additional metadata for analysis. One of these is the HTTP Lua parser, which has an associated HTTP Lua options file; you can view this by navigating to Admin ⮞ Services ⮞ Decoder ⮞ Config ⮞ Files and selecting HTTP_lua_options.lua from the drop-down. The option we are interested in for this blog post is headerCatalog() - making this return true will register the HTTP headers in the request and response under the meta keys:

  • http.request
  • http.response

 

And the associated values for the headers will be registered under:

  • req.uniq
  • resp.uniq

 

NOTE: This feature is not enabled in the default options file due to the potential performance impact it may have on the Decoder. This feature is experimental and may be deprecated at any time, so please use it with caution, and monitor the health of all components if enabling it. Also, please look into the customHeaders() function prior to enabling this, as that is a less intensive substitute that could fit your use cases.

 

There are a variety of options that can be enabled here. For more details, it is suggested to read the Hunting Guide - https://community.rsa.com/docs/DOC-62341.

 

These keys will need to be indexed on the Concentrator, and the following addition to the index-concentrator-custom.xml file is suggested:

<key description="HTTP Request Header" format="Text" level="IndexValues" name="http.request" defaultAction="Closed" valueMax="5000" />
<key description="HTTP Response Header" format="Text" level="IndexValues" name="http.response" defaultAction="Closed" valueMax="5000" />
<key description="Unique HTTP Request Header" level="IndexKeys" name="req.uniq" format="Text" defaultAction="Closed"/>
<key description="Unique HTTP Response Header" level="IndexKeys" name="resp.uniq" format="Text" defaultAction="Closed"/>

 

 

One reason for doing this, amongst others, is that the trial version of Cobalt Strike has a distinctive HTTP header that we, as analysts, would like to see: https://blog.cobaltstrike.com/2015/10/14/the-cobalt-strike-trials-evil-bit/. This HTTP header is X-Malware - and with our new option enabled, this header is easy to spot:

NOTE: While this is one use case to demonstrate the value of extracting the HTTP headers, this metadata proves incredibly valuable across the board, as looking for uncommon headers can help lead analysts to uncover and track malicious activity. Another example where this was useful can be seen in one of the previous posts regarding PoshC2, in which an application rule was created to look for the incorrectly supplied cachecontrol HTTP response header: https://community.rsa.com/community/products/netwitness/blog/2019/03/04/command-and-control-poshc2
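With the request and response headers registered as metadata, hunting for rare headers becomes a simple frequency exercise. Outside of NetWitness the same idea can be expressed in a few lines of Python - a sketch only, where the header list is a stand-in for whatever you export from the http.response / resp.uniq meta keys:

from collections import Counter

# Illustrative list of response header names seen across many sessions;
# in practice this would be an export of the http.response meta values.
observed_headers = (
    ["server", "content-type", "content-length", "date"] * 500
    + ["x-malware"]   # the rare one we care about
)

counts = Counter(observed_headers)

# Headers seen in only a handful of sessions are worth a manual look.
rare = [header for header, count in counts.items() if count <= 5]
print("rare headers:", rare)   # -> ['x-malware']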

 

Pivoting off this header and opening the Event Analysis view, we can see an HTTP GET request for KHSw, which was direct-to-IP over port 666 and had a low header count with no referrer - this should stand out as suspicious even without the initial indicator we used for analysis:

 

If we had decided to look for traffic using the Service Analysis key, which pulls apart the characteristics of the traffic, we would have been able to pivot off of these metadata values to whittle down our traffic to this as well:

 

Looking into the response for the GET request, we can see the X-Malware header we pivoted off of, and the stager being downloaded. Also, take notice of the EICAR test string in the X-Malware header - this too is indicative of a trial version of Cobalt Strike:

 

NetWitness Packets also has a parser to detect this string, and will populate the meta value eicar test string under the Session Analysis meta key (if the Eicar Lua parser is pushed from RSA Live) - this could be another great pivot point to detect this type of traffic:

 

Looking further into the Cobalt Strike traffic, we can start to uncover more details surrounding its behaviour. Upon analysis, we can see that there are multiple HTTP GET requests with no error (i.e. 200) and a content-length of zero, which stands out as suspicious behaviour. As well as this, there is a cookie that looks like a Base64 encoded string (equals signs at the end for padding) with no name/value pairs; cookies normally consist of name/value pairs, so these two observations make the cookie anomalous:

 

Based on this behaviour, we can start to think about how to build content to detect it. Heading back to our HTTP Lua options file on the Decoder, we can see another option named customHeaders() - this allows us to extract the values of specific HTTP headers into meta keys of our choosing. This means we can extract the cookie into a meta key named cookie, and content-length into a key named http.respsize - mapping a specific HTTP header value to a key so we can create content based on the behaviours we previously observed:

 

After making the above change, we need to add the following keys to our index-concentrator-custom.xml file as well - these are set to an index level of IndexKeys, as the values that can be returned are unbounded and we don't want to bloat the index:

<key description="Cookie" format="Text" level="IndexKeys" name="cookie" defaultAction="Closed"  />
<key description="HTTP Response Size" format="Text" level="IndexKeys" name="http.respsize" defaultAction="Closed" />

 

Now we can work on creating our application rules. Firstly, we wanted to alert on the suspicious GET requests we were seeing:

service = 80 && action = 'get' && error !exists && http.respsize = '0' && content='application/octet-stream'

And for the anomalous cookie, we can use the following logic. This will look for no name/value pairs being present and the use of equals signs at the end of the string which can indicate padding for Base64 encoded strings:

service = 80 && cookie regex '^[^=]+=*$' && content='application/octet-stream'

These will be two separate application rules that will be pushed to the Decoders:
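As a quick way to sanity-check the same two behaviours outside the product, for example against exported proxy logs, the logic can be approximated in a few lines of Python. This is a minimal sketch only - the record fields are assumptions for illustration, not an exported NetWitness format:

import re

# Illustrative proxy-log style records; field names are assumptions.
records = [
    {"method": "GET", "status": 200, "content_length": 0,
     "content_type": "application/octet-stream",
     "cookie": "kQvta2V0Fm1w9vYdBQkvXg=="},
    {"method": "GET", "status": 200, "content_length": 5120,
     "content_type": "text/html",
     "cookie": "sessionid=abc123; theme=dark"},
]

# Matches a cookie with no name/value pairs and optional '=' padding at the end,
# mirroring the application rule regex '^[^=]+=*$'.
anomalous_cookie = re.compile(r"^[^=]+=*$")

for record in records:
    empty_get = (record["method"] == "GET" and record["status"] == 200
                 and record["content_length"] == 0
                 and record["content_type"] == "application/octet-stream")
    odd_cookie = bool(anomalous_cookie.match(record["cookie"]))
    if empty_get or odd_cookie:
        print("suspicious session, cookie:", record["cookie"])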

 

Now we can start to track the activity of Cobalt Strike easily in the Investigate view. This could also potentially alert the analyst to other infected hosts in their environment. This is why it is important to analyse the malicious traffic and create content to track:

 

Conclusion

Cobalt Strike is a very malleable tool. This means that the indicators we have used here will not detect all instances of Cobalt Strike; with that being said, this is known, common Cobalt Strike behaviour. This blog post was intended to showcase how the HTTP Lua options file can be imperative in identifying anomalous traffic in your environment, using real-world live malware. The extraction of the HTTP headers, whilst a trivial piece of information, can be vital in detecting advanced tools used by attackers. This, coupled with the extraction of the header values themselves, can help your analysts create more advanced, higher-fidelity content.

Preface
In order to prevent confusion, I wanted to add a little snippet before we jump into the analysis. The blog post first goes over how the server became infected with Metasploit: a remote code execution CVE against an Apache Tomcat Web Server was used, the details of which can be found here: https://nvd.nist.gov/vuln/detail/CVE-2019-0232. Further into the blog post, the details of the Metasploit traffic can be seen.

 

This CVE requires that the CGI Servlet in Apache Tomcat is enabled. This is not an abnormal servlet to be enabled and merely requires the Administrator to uncomment a few lines in the Tomcat web.xml. This is a normal administrative action to have taken on the Web Server:

  


Now, if the administrator has a .bat or .cmd file in the cgi-bin directory on the Apache Tomcat Server, the attacker can remotely execute commands, as Tomcat will call cmd.exe to execute the .bat or .cmd file and incorrectly handle the parameters passed; the file can contain anything, as long as it executes. So, as an example, we place a simple .bat file in the cgi-bin directory:

 

From a browser, the attacker can call the .bat file and pass a command to execute, due to the way the CGI Servlet handles this request and passes the arguments:
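To make the request shape concrete, the following is a rough sketch of what that browser request could look like if scripted against a lab box - the hostname and the presence of hello.bat are assumptions, and the exact URL encoding needed can vary between Tomcat versions:

import requests

# Hypothetical vulnerable lab server with the CGI servlet enabled and a
# hello.bat placed in the cgi-bin directory.
base = "http://tomcat.lab.local:8080/cgi-bin/hello.bat"

# Everything after '?&' is passed through to cmd.exe as an argument,
# so a benign 'dir' is appended here as a proof of concept.
resp = requests.get(base + "?&dir")

print(resp.status_code)
print(resp.text[:500])   # command output if the server is vulnerable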

 

From here, the attacker can create a payload using msfvenom and instruct the web server to download the Metasploit payload they had created:

 

The Detection in NetWitness Packets

RCE Exploit
NetWitness Packets does a fantastic job of pulling apart the behaviour of network traffic. This allows analysts to detect attacks even with no prior knowledge of them. A fantastic meta value for analysts to look at is windows cli admin commands; this metadata is created when CLI commands are detected. Grouping this metadata with inbound traffic to your web servers is a great pivot point to start looking for malicious traffic:

 

NOTE: Taking advantage of the traffic_flow_options.lua parser would be highly beneficial for your SOC. This parser allows you to define your subnets and tag them with friendly names. Editing this to contain your web servers address space for example, would be a great idea.

 

Taking the above note into account, your analysts could then construct a query like the following:
(analysis.service = 'windows cli admin commands') && (direction = 'inbound') && (netname.dst = 'webservers')

Filtering on this metadata reduces the traffic quite significantly. From here, we can open up other meta keys to get a better understanding of what traffic is related to these windows cli commands. From the below screenshot, we can see that this is HTTP traffic with a GET request to a hello.bat file in the /cgi-bin/ directory; there are also some suspicious-looking queries associated with it that appear to reference command line arguments:

 

At this point, we decide to reconstruct the raw sessions themselves, as we have some suspicions surrounding this traffic and want to see exactly what these HTTP sessions are. Upon doing so, we can see a GET request with the dir command, and we can also see the dir output in the response - this is what the windows cli admin commands metadata was picking up on:

 

This traffic instantly stands out as something of interest that requires further investigation. In order to get a holistic view of all data toward this server, we need to reconstruct our query: the windows cli admin commands metadata would only have been created for the sessions where CLI commands were seen, and we are interested in seeing everything. So we look at the metadata available for this session and build a new query. This now allows us to see other interesting metadata and get a better idea of what the attacker was doing. Looking at the Query meta key, we can see all of the attacker's commands:

 

Navigating to the Event Analysis view, we can see the commands in the order they took place and reconstruct what the attacker was doing. From here we can see a sequence of events whereby the attacker makes a directory, C:\temp, downloads an executable called 2.exe to said directory, and subsequently executes it:

 

MSF File and Traffic

As we can see the attacker's commands, we can also see the download they performed for an executable, a.exe. This means we can run a query and extract that file from the packet data as well. We run a simple query looking for a.exe and find our session. Also, take note of the user agent - why is certutil being used to download a.exe? This is also a great indicator of something suspicious:

 

We can also choose to switch to the File Analysis view and download our file(s). This would allow us to perform additional analysis on the file(s) in question:

 

Merely running strings on one of these files reveals a domain this executable may potentially connect to:

 

As we also have another hostname to add to our analysis, we can now perform a query on just this hostname to see if there is any other interesting metadata associated with it. Opening the session analysis meta key, we can see a myriad of interesting pivot points. We can group these pivot points together, or make combinations of them, to whittle down the traffic to something more manageable:

NOTE: See the RSA IR Hunting guide for more details on these metadata values: https://community.rsa.com/docs/DOC-62341

 

Once we have pivoted down using some of the metadata above, we get to a more manageable number of sessions. Continuing to look at the service analysis meta key, we also observe some additional pieces of metadata of interest, listed below, which we can use to start reconstructing the sessions and get a better understanding of what this traffic is:

 

  • long connection
  • http no referer
  • http six or less headers
  • http post missing content-type
  • http no user-agent
  • watchlist file fingerprint

 

 

Opening these sessions up in the Event Analysis view, we can see an HTTP POST with binary data and a 200 OK from the supposed Apache server; we can also see that the directory is the same as the one we saw in our strings analysis:

 

Continuing to browse through these sessions yields more of the same:

 

Navigating back to the Investigate view, it is also possible to see that the directory is always the same - the one we saw in our strings analysis:

 

NOTE: During the analysis, no beaconing pattern was observed. This can make the C2 harder to detect and requires continued threat hunting from your analysts to understand your environment and pick up on these types of anomalies.

 

Web Shell

Now that we know the Apache Tomcat Web Server is infected, we can look at all other traffic associated with it and continue to monitor whether anything else takes place - attackers like to keep multiple entry points if possible. Focusing on our Web Server, we can also see an oddly named JSP page being accessed, error2.jsp, and observe some additional queries:

 

Pivoting into the Event Analysis view and reconstructing the sessions, we can see a tasklist command being executed:

 

And the subsequent response containing the tasklist output. This is a Web Shell that has been placed on the server, which the attacker is also using to execute commands:

 

NOTE: For more information on Web Shells, see the following series: https://community.rsa.com/community/products/netwitness/blog/2019/02/12/web-shells-and-netwitness

 

It is important to note that just because you have identified one method of remote access, it does not mean it is the only one; it is important to ascertain whether or not other access methods were made available by the attacker.

 

The Detection in NetWitness Endpoint
As I preach in every blog post, the analyst should always log in every morning and check the following three meta keys as a priority: IOC (Indicators of Compromise), BOC (Behaviours of Compromise), and EOC (Enablers of Compromise). Looking at these keys, a myriad of metadata values stand out as great places to start the investigation, but let's place a focus on these three for now:

 

Let's start with downloads binary using certutil, and pivot into the Event Analysis view. Here we can see the certutil binary being used to download a variety of the executables we saw in the packet data:

 

Looking into one of the other behaviours of compromise, http daemon runs command shell, we can also see evidence of the .bat file being requested and the associated commands, as well as the use of the Web Shell, error2.jsp. It is also important to note that there is a request for hello.bat prior to the remote code execution vulnerability being exploited; this would be seen as legitimate traffic, given that the server is working as designed for the CGI-BIN scripts. It is down to the analyst to review the traffic and decipher whether something malicious is happening, or whether this is by design of the server:

 

NOTE: Due to the nature of how the Tomcat server handles the vulnerable cgi-bin application and "legitimate" JSP files, you can see hello.bat as part of the tracking event, as it is an argument passed to cmd.exe. With error2.jsp, however, the code is executed inside the Tomcat process, and you will only see cmd.exe being executed when the web shell spawns a new command shell to run certain commands, not every time error2.jsp is used. Having said that, the advantage for the defender is that even if not all of the activity is tracked or leaves a visible footprint, at some point something will - and this could be the starting thread needed to detect the intrusion.

 

Coming back to the Investigate view, we can see another piece of metadata that would be of interest, creates remote service - let's pivot on this and see what took place:

Here we can see that cmd was used to create a service on our Web Server that would run a malicious binary dropped by the attacker in the c:\temp directory:

 

It is important to remember that, as a defender, you only need to pick up on one of these artifacts left behind by the attacker in order to start unraveling their activity.

 

Conclusion
With today's ever-changing landscape, it is becoming increasingly inefficient to create signatures for known vulnerabilities and attacks. It is therefore far more important to pick up on behaviours of traffic that stand out as abnormal than to generate signatures. As shown in this blog post, a fairly recent remote code execution CVE was exploited (https://nvd.nist.gov/vuln/detail/CVE-2019-0232) - no signatures were required to pick up on this, as NetWitness pulls apart the behaviours; we just had to follow the path. Similarly, with Metasploit it is also very difficult to generate effective, long-lived signatures that could detect this C2; performing threat hunting through the data based on a foundation of analysing behaviours will ensure that all knowns and unknowns are effectively analysed.

 

It is also important to note that the packet traffic would typically be encrypted, but we kept it in the clear for the purposes of this post. With that being said, the RCE exploit and Web Shell are easily detectable when NetWitness Endpoint tracking data is being ingested, which gives the defender the necessary visibility if SSL decryption is not in place.

Attackers love to use readily available red team tools for various stages of their attacks. They do so as this removes the labour required to create their own custom tools. This is not to say that the more innovative APTs are going down this route, but it is something that appears to be becoming more prevalent and that your analysts should be aware of. This blog post covers a readily available red team tool hosted on GitHub.

 

Tools

In this blog post, the Koadic C2 will be used. Koadic, or COM Command & Control, is a Windows post-exploitation rootkit similar to other penetration testing tools such as Meterpreter and Powershell Empire. 

 

The Attack

The attacker sets up their Koadic listener and builds a malicious email to send to their victim. The attacker wants the victim to run their malicious code, and in order to do this, they try to make the email look more legitimate by supplying a Dropbox link and a password for the file:

 

The user downloads the ZIP, decompresses it using the password in the email, and is presented with a JavaScript file that has a .doc extension. Here the attacker is relying on the victim not being well versed in computers, and not noticing the obvious problems with this file (extension, icon, etc.):

 

 

 

Fortunately for the attacker, the victim double-clicks the file to open it, and the attacker gets a callback to their C2:

 

From here, the attacker can start to execute commands:

 

 

The Detection in NetWitness Packets

The analyst begins their investigation by placing a focus on looking for C2 traffic over HTTP. From here, the analyst can start to pull apart the protocol and look for anomalies within its behaviour; the analyst opens the Service Analysis meta key to do this and observes two pieces of metadata of interest:

 

  • http post missing content-type

  • http post no get

 

 

 

These two queries have now reduced the data set for the analyst from 2,538 sessions to 67:

 

NOTE: This is not to say that the other sessions do not contain malicious traffic, nor that the analyst will ignore them, just that at this point in time this is the analyst's focal point. If, after analysis, this traffic turned out to be clean, they could exclude it from their search and pick apart other anomalous HTTP traffic in the same manner as before. This allows the analyst to go through the data in a more comprehensive and approachable manner.

 

Now that the data set has been reduced, the analyst can start to open other meta keys to understand the context of the traffic. The analyst wants to see if any files are being transferred and what user agents are involved; to do so, they open the Extension, Filename, and Client Application meta keys. Here they observe an extension they do not typically see during their daily hunting, WSF. They also see what appears to be a random filename, and a user agent they are not overly familiar with:

 

There are only eight sessions for this traffic, so the analyst is now at a point where they could start to reconstruct the raw sessions and see if they can better understand what this traffic is for. Opening the Event Analysis view, the analyst first looks to see if they can observe any pattern in the connection times, and at how much the payload varies in size:

NOTE: Low variation in payload size and connections that take place every x minutes are indicative of automated behaviour. Whether that behaviour is malicious or not is up to the analyst to decipher - it could be a simple weather update, for example - but this sort of automated traffic is exactly what the analyst should be looking for when it comes to C2 communication; weeding out the user-generated traffic to get to the automated communications.

 

Reconstructing the sessions, the analyst stumbles across a session that contains a tasklist output. This immediately stands out as suspicious to the analyst:

 

From here, the analyst can build a query to focus on this communication between these two hosts and find out when this activity started happening:

 

Looking into the first sessions of this activity, the analyst can see a GET request for the oddly named WSF file, and that BITS was used to download it:

 

The response for this file contains the malicious JavaScript that infected the endpoint:

 

Further perusing the sessions, it is also possible to see the commands being executed by the attacker:

 

The analyst is now extremely confident this is malicious traffic and needs to be able to track it. The best way to do this is with an application rule. The analyst looks through the traffic and decides upon the following two pieces of logic to detect this behaviour:

 

To detect the initial infection:

extension = 'wsf' && client contains 'bits'

To detect the beacons:

extension = 'wsf' && query contains 'csrf='
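Beyond static rules, the "random-looking filename" observation can also be approximated programmatically when reviewing exported metadata, for example with a crude character-mix heuristic. The file names and the heuristic itself are illustrative assumptions only:

import re

def looks_random(stem: str) -> bool:
    """Crude heuristic: mixed upper/lower case plus digits, with no word-style separators."""
    return (bool(re.search(r"[A-Z]", stem))
            and bool(re.search(r"[a-z]", stem))
            and bool(re.search(r"[0-9]", stem))
            and not re.search(r"[ _\-]", stem))

# Illustrative filenames: one mimicking a randomly named WSF stager, one ordinary download.
for name in ["UHEzKv3g.wsf", "quarterly_report.docx"]:
    stem = name.rsplit(".", 1)[0]
    print(name, "->", "suspicious" if looks_random(stem) else "ok")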

 

NOTE: The activity observed was only possible to analyse because the communication happened over HTTP. If this had been SSL, the detection via packets would have been much more difficult. This is why introducing SSL Decryption/Interception/Offloading is highly recommended. SSL inspection devices are nothing more than a well-designed man-in-the-middle attack that breaks the encryption into two separate encrypted streams. Therefore, they still provide an adequate level of protection to end-users while allowing security analysts and devices to properly monitor and alert when malicious or unwanted activity takes place, such as the C2 traffic shown here. In summary, if you are responsible for protecting your organization's assets, you should definitely consider the pros and cons of using this technology.

 

The Detection in NetWitness Endpoint

Every day the analyst should review the IOC, BOC, and EOC meta keys, paying particular attention to the high-risk indicators first. Here the analyst can see a high-risk meta value, transfers file using bits:

 

Here the analyst can see cmd.exe spawning bitsadmin.exe and downloading a suspiciously named file into the \AppData\Local\Temp\ directory. This stands out as suspicious to the analyst:

 

From here, the analyst places an analytical lens on this specific host and begins to look through what other actions took place around the same time. The analyst observes commands being executed against this endpoint and now knows it is infected:

 

Conclusion

Understanding the differences between user-based behaviour and mechanical behaviour gives an advantage to the analyst who is performing threat hunting. If the analyst understands what "normal" should look like within their environment, they can easily discern it from abnormal behaviour.

 

Analysts should also be aware that not all attackers will use proprietary tools, or even alter the readily available ones to evade detection. An attacker only needs to make one mistake and you can unravel their whole operation. So don't always ignore the low-hanging fruit.
