
I have recently been posting a number of blogs regarding the usage of the RSA NetWitness Platform to detect attackers within your environment. As the list of blogs grows, it is becoming increasingly difficult to navigate through them easily. To combat this, this blog post will contain references to all other blog posts in the Profiling Attackers Series, and will be updated when new posts are made.

 

 

 

Introduction

Lateral movement is a technique that enables an adversary to access and control remote systems on a network. It is a critical phase in any attack, and understanding the methods that can be used to perform lateral movement, along with how those protocols present themselves in NetWitness, is paramount in detecting attackers moving laterally in your environment. It is also important to understand that many of the mechanisms used by attackers for lateral movement are also used by administrators for legitimate reasons, which is why it is important to monitor these mechanisms and understand what is typical behaviour and what is not.

 

Tools

In this blog post, the Impacket implementation of Smbexec will be used. This sets up a semi-interactive shell for the attacker.

 

The Attack

The attacker has successfully gained access to your network and dumped credentials, all without any detection from your Security Operations Center (SOC). The attacker decides to move laterally using Smbexec: they connect to one of the hosts they previously identified and begin to execute commands:

 

The Detection in NetWitness Packets

Smbexec works a little differently from some of the more common lateral movement tools such as PsExec. Instead of transferring a binary to the target endpoint, using the svcctl interface to remotely create a service based on the transferred binary, and starting that service, Smbexec calls a binary that already lives on the endpoint, cmd.exe, to execute its commands.

 

NetWitness Packets does a great job of pulling apart packet data and pointing you in directions of interest. One of the meta values we can pivot on to focus on traffic of interest for lateral movement is remote service control:

 

NetWitness also creates metadata when it observes Windows CLI commands being run; this metadata is under the Service Analysis meta key and is displayed as windows cli admin commands. This would be another interesting pivot point to look into to see what type of commands are being executed:

 

NOTE: Just because an endpoint is being remotely controlled, and there are commands being executed on it, this does not mean that your network is compromised. It is up to the analyst to review the sessions of interest, as we are doing in this blog post, and determine whether something is out of the ordinary for their environment.

 

Looking into the other metadata available, we can see a connection to the C$ share, and that a file named __output was created:

 

This does not give us much to go on and say that this is suspicious, so it is necessary to reconstruct the raw session itself to get a better idea of what is happening. Opening the Event Analysis view for the session we reduced our data set to, and analysing the payload, a suspicious string stands out as shown below:

 

Tidying up the command a little, it ends up looking like this:

%COMSPEC% /Q /c echo dir > \\127.0.0.1\C$\__output 2>&1 > %TEMP%\execute.bat & %COMSPEC% /Q /c %TEMP%\execute.bat & del %TEMP%\execute.bat

  • %COMSPEC% - Environment variable that points to cmd.exe
  • /Q - Turns echo off
  • /C - Carries out the command specified by string and then terminates
  • %TEMP% - Environment variable that points to C:\Users\username\AppData\Local\Temp

 

We can see that the string above will echo the command we want to execute (dir) and redirect its output to a file named "__output" on the C$ share of the local machine. The command we want to execute is also placed into execute.bat in the %TEMP% directory, which is subsequently executed and then deleted.
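As a purely illustrative aside (not RSA content), the default construct above is regular enough that it can be matched with a simple pattern over any command-line data exported for review; the pattern and sample below are hypothetical:

import re

# Default Smbexec-style construct: cmd.exe echoing a command, redirecting output to
# the __output file on the C$ share, and staging it via execute.bat.
SMBEXEC_PATTERN = re.compile(
    r"%COMSPEC%\s+/Q\s+/c\s+echo\s+.+\\\\127\.0\.0\.1\\C\$\\__output.+execute\.bat",
    re.IGNORECASE,
)

def looks_like_smbexec(command_line):
    """Return True if a captured command line matches the default Smbexec construct."""
    return bool(SMBEXEC_PATTERN.search(command_line))

sample = (r"%COMSPEC% /Q /c echo dir > \\127.0.0.1\C$\__output 2>&1 > %TEMP%\execute.bat"
          r" & %COMSPEC% /Q /c %TEMP%\execute.bat & del %TEMP%\execute.bat")
print(looks_like_smbexec(sample))  # True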

 

Analysing the payload further, we can also see the data that is returned from the command that was executed by the attacker:

 

Now that suspicious traffic has been observed, we can filter on this type of traffic, and see other commands being executed, such as whoami:

 

Smbexec is quite malleable: the vast majority of its indicators can easily be edited to evade signature-type detection for this behaviour. However, using NetWitness Packets' ability to carve out behaviours, the following application rule logic should be suitable for picking up suspicious traffic over SMB that an analyst should investigate:

(ioc = 'remote service control') && (analysis.service = 'windows cli admin commands') && (service = 139) && (directory = '\\c$\\','\\ADMIN$\\') 

 

The Detection in NetWitness Endpoint

NetWitness Endpoint does a great job at picking up on this activity, looking at the Behaviours of Compromise meta key, two pieces of metadata point the analyst toward this activity, services runs command shell and runs chained command shell:

 

Opening the Event Analysis view for these sessions, we can see that services.exe is spawning cmd.exe, and we can also see the command that is being executed by the attacker:

 

The default behaviour of Smbexec could easily be detected with application rule logic like the following:

param.dst contains '\\127.0.0.1\C$\__output'

Conclusion

Understanding the Tools, Techniques, and Procedures (TTPs) used by attackers, coupled with understanding how NetWitness interprets those TTPs, is imperative in being able to identify them within your network. The NetWitness suite has great capabilities to pull apart network traffic and pick up on anomalies, which makes it easier for analysts to hunt down and detect these threats.

Introduction

Lateral movement is a technique that enables an adversary to access and control remote systems on a network. It is a critical phase in any attack, and understanding the methods that can be used to perform lateral movement, along with how those protocols present themselves in NetWitness, is paramount in detecting attackers moving laterally in your environment. It is also important to understand that many of the mechanisms used by attackers for lateral movement are also used by administrators for legitimate reasons, which is why it is important to monitor these mechanisms and understand what is typical behaviour and what is not.

 

Tools

In this blog post, Winexe will be used. Winexe is a GNU/Linux-based application that allows users to execute commands remotely on Windows NT/2000/XP/2003/Vista/7/8 systems. It installs a service on the remote system, executes the command, and uninstalls the service. Winexe allows execution of most Windows shell commands.

 

The Attack

The attacker has successfully gained access to your network and dumped credentials, all without any detection from your Security Operations Center (SOC). The attacker decides to move laterally using Winexe: they connect to one of the hosts they previously identified and begin to execute commands:

 

The Detection in NetWitness Packets

The use of Winexe is not overly stealthy. Its use creates a large amount of noise that is easily detectable. Searching for winexesvc.exe within the filename metadata returns the SMB transfer of the executable to the ADMIN$ share:

 

Using the time the file transfer took place as the pivot point to continue investigation, it is also possible to see the use of the Windows Service Control Manager (SCM) directly afterward to create and start a service on the remote endpoint. SCM acts as a remote procedure call (RPC) server so that services on remote endpoints can be controlled:

 

Reconstructing the raw session as text, it is possible to see the service name being created, winexesvc, and the previously transferred executable being used as the service binary, winexesvc.exe:

 

Continuing to analyse the SMB traffic around the same time frame, it is also possible to see another named pipe, ahexec, being used. This is the named pipe that Winexe uses:

 

Reconstructing these raw sessions as text, it is possible to see the commands that were executed:

 

As well as the output that was returned to the attacker:

 

Based on the artefacts we have seen leftover from Winexe's execution over the network, there are multiple pieces of logic we could use for our application rule to detect this type of traffic. The following application rule logic would pick up on the initial transfer of the winexesvc.exe executable, and the subsequent use of the named pipe, ahexec:

(filename = 'ahexec','winexesvc.exe') && (service = 139)

The Detection in NetWitness Endpoint

Searching for winexesvc.exe as the filename source shows the usage of Winexe on the endpoints; this is because this is the executable that handles the commands sent over the ahexec named pipe. The filename destination meta key shows the executables invoked via the use of Winexe:

 

A simple application rule could be created for this activity by looking for winexesvc.exe as the filename source:

(filename.src = 'winexesvc.exe')

 

Additional Analysis

Analysing the endpoint, you can see the winexesvc.exe process running in Task Manager:

 

As well as the service that was installed via SCM over the network:

 

The service creation also generates an entry in the System event log with event ID 7045:

 

This means if you were ingesting logs into NetWitness, you could create an application rule to trigger on Winexe usage with the following logic:

(reference.id = '7045') && (service.name = 'winexesvc')

We can also see the named pipe that Winexe uses by executing the Sysinternals pipelist tool:

Introduction

Lateral movement is a technique that enables an adversary to access and control remote systems on a network. It is a critical phase in any attack, and understanding the methods that can be used to perform lateral movement, along with how those protocols present themselves in NetWitness, is paramount in detecting attackers moving laterally in your environment. It is also important to understand that many of the mechanisms used by attackers for lateral movement are also used by administrators for legitimate reasons, which is why it is important to monitor these mechanisms and understand what is typical behaviour and what is not.

 

What is WMI?

At a high level, Windows Management Instrumentation (WMI) provides the ability to manage servers and workstations running Windows, locally or remotely, by allowing data collection, administration, and remote execution. WMI is Microsoft's implementation of the open standards Web-Based Enterprise Management (WBEM) and Common Information Model (CIM), and comes preinstalled on Windows 2000 and newer Microsoft operating systems.

 

Tools

In this blog post, the Impacket implementation of WMIExec will be used. This sets up a semi-interactive shell for the attacker. WMI can be used for reconnaissance, privilege escalation (by looking for well-known misconfigurations), and lateral movement.

 

The Attack

The attacker has successfully gained access to your network and dumped credentials, all without any detection from your Security Operations Center (SOC). The attacker decides to move laterally using WMIExec: they connect to one of the hosts they previously identified and begin to execute commands:

 

The Detection in NetWitness Packets

NetWitness Packets can easily identify WMI remote execution. All the analyst needs to do is open the Indicators of Compromise (IOC) meta key and look for wmi command:

 

Pivoting on the wmi command metadata, and opening the Action meta key, the analyst can observe the commands that were executed, as these are sent in clear text:

 

NOTE: Not all WMI commands are malicious. It is up to the analyst to understand what is normal behaviour within their environment and what is not. The commands seen above are, however, typical of WMIExec and should raise concern for the analyst.

 

The following screenshot is of the raw data itself. Here it is possible to see the parameter that was passed and subsequently registered under the action meta key:

 

Looking at the parameter passed, it is possible to see that WMIExec uses CMD to execute its command and output the result to a file (which is named the timestamp of execution) on the ADMIN$ share of the local system. The following screenshot shows an example of whoami being run, and the associated output file and contents on the remote host:
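To make that behaviour concrete, the sketch below is a rough reconstruction (in Python, not the tool's actual source) of the style of command string described above; the exact formatting used by the tool may differ:

import time

def build_wmiexec_style_command(command):
    """Illustrative only: run a command via cmd.exe and redirect its output to a file
    named after the execution timestamp on the ADMIN$ share of the target, as
    described above. Not the tool's actual code."""
    output_file = "\\\\127.0.0.1\\ADMIN$\\__" + str(time.time())
    return "cmd.exe /Q /c " + command + " 1> " + output_file + " 2>&1"

print(build_wmiexec_style_command("whoami"))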

 

NOTE: This file is removed after it has been successfully read and displayed back to the attacker. Evidence of this file only exists on the system for a small amount of time.

 

We can get a better understanding of WMIExec's function from viewing the source code:

 

To detect WMIExec activity in NetWitness Packets, the following application rule logic could be used:

action contains '127.0.0.1\\admin$\\__1'

Lateral traffic is seldom captured by NetWitness Packets. More often than not, the focus of packet capture is placed on the ingress and egress points of the network, normally due to the high volumes of core traffic that significantly increase the cost of monitoring. This is why it is important to also have an endpoint detection product, such as NetWitness Endpoint, to detect lateral movement.

 

The Detection in NetWitness Endpoint

A daily activity for the analyst should be to check the Indicators of Compromise (IOC), Behaviours of Compromise (BOC), and Enablers of Compromise (EOC) meta keys. Upon doing so, the analyst would observe the following metadata, wmiprvse runs command shell:

 

Drilling into this metadata, and opening the Event Analysis view, it is possible to see the WMI Provider Service spawning CMD and executing commands:

 

To detect WMIExec activity in NetWitness Endpoint, the following application rule logic could be used:

param.dst contains '127.0.0.1\\admin$\\__1'

Conclusion

Understanding the Tools, Techniques, and Procedures (TTPs) used by attackers, coupled with understanding how NetWitness interprets those TTPs, is imperative in being able to identify them within your network. The NetWitness suite has great capabilities to pull apart network traffic and pick up on anomalies, which makes it easier for analysts to hunt down and detect these threats.

 

WMI is a legitimate Microsoft tool used within environments by administrators, as well as by third-party products; it can therefore be difficult to differentiate normal from malicious use, which is also why it is a popular tool for attackers. Performing threat hunting daily is an important activity that helps your analysts build baselines and distinguish anomalous usage from normal activity.

There are a myriad of post-exploitation frameworks that can be deployed and utilized by anyone. These frameworks are great to stand up as a defender to get insight into what C&C (command and control) traffic can look like, and how to differentiate it from normal user behavior. The following blog post demonstrates an endpoint becoming infected, and the subsequent analysis in RSA NetWitness of the traffic from PowerShell Empire.

 

The Attack

The attacker sets up a malicious page which contains their payload. The attacker can then use a phishing email to lure the victim into visiting the page. Upon the user opening the page, a PowerShell command is executed that infects the endpoint and is invisible to the end user:

 

 

The endpoint then starts communicating back to the attacker's C2. From here, the attacker can execute commands such as tasklist and whoami, and run other tools:

 

From here onward, the command and control would continue to beacon at a designated interval to check back for commands. This is typically what the analyst will need to look for to determine which of their endpoints are infected.

 

The Detection Using RSA NetWitness Network/Packet Data

The activity observed was only possible due to the communication happening over HTTP. If this had been SSL, the detection via packets would be much more difficult. This is why introducing SSL Decryption/Interception/Offloading is highly recommended. SSL inspection devices are nothing more than a well-designed man-in-the-middle attack that breaks the encryption into two separate encrypted streams. They still provide an adequate level of protection to end-users while allowing security analysts and devices to properly monitor and alert when malicious or unwanted activity takes place, such as the C2 traffic shown here. In summary, if you are responsible for protecting your organization's assets, you should definitely consider the pros and cons of using this technology.

 

The analyst begins their investigation by placing a focus on looking for C2 traffic over HTTP. The analyst can then look into pulling apart the characteristics of the protocol by using the Service Analysis meta key. From here they notice a couple of interesting meta values to pivot on, http with binary and http post no get no referer directtoip:

 

Upon reducing the number of sessions to a more manageable number, the analyst can then look into other meta keys to see if there are any interesting artifacts. The analyst looks under the Filename, Directory, Client Application, and Server Application meta keys, and observes that the communication is always towards a microsoft-iis/7.5 server, from the same user agent, and toward a subset of PHP files:

 

The analyst decides to use this as a pivot point, and removes some of the other more refined queries, to focus on all communication toward those PHP files, from that user agent, and toward that IIS server version. The analyst now observes additional communication:

 

Opening up the visualization, the analyst can view the cadence of the communication and observes there to be a beacon type pattern:

 

Pivoting into the Event Analysis view, the analyst can look into a few more details to see if their suspicions of this being malicious are true. The analyst observes a low variance in payload, and a connection taking place ~every 4 minutes:
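The same two observations (a steady interval and a low variance in payload size) can be expressed as a rough heuristic. The sketch below is illustrative only; the timestamps, sizes, and thresholds are hypothetical rather than tuned values:

from statistics import mean, pstdev

def beacon_indicators(timestamps, payload_sizes):
    """Return the relative jitter of inter-arrival times and payload sizes.
    Low values for both suggest mechanical check-in behaviour."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    interval_jitter = pstdev(intervals) / mean(intervals)
    size_jitter = pstdev(payload_sizes) / mean(payload_sizes)
    return interval_jitter, size_jitter

# Hypothetical sessions checking in roughly every 4 minutes with similar payload sizes.
times = [0, 242, 481, 723, 961, 1204]   # seconds since first session
sizes = [418, 425, 417, 430, 421, 419]  # payload bytes per session
print(beacon_indicators(times, sizes))  # both ratios close to zero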

 

The analyst reconstructs some of the sessions to see the type of data being transferred, and observes a variety of suspicious GETs and POSTs with varying data:

 

The analyst confirms this traffic is highly suspicious based on the analysis they have performed, and subsequently decides to track the activity with an application rule. To do this, the analyst looks through the metadata associated with this traffic, and finds a unique combination of metadata that identifies this type of traffic:

 

(service = 80) && (analysis.service = 'http1.0 unsupported cache header') && (analysis.service = 'http post missing content-type')

 

IMPORTANT NOTE: Application rules are very useful for tracking activity. They are, however, very environment specific; an application rule that is high fidelity in one environment could be incredibly noisy in another. Care should be taken when creating or using application rules to make sure they work well within your environment.

 

The Detection Using RSA NetWitness Endpoint Tracking Data

The analyst, as they should on a daily basis, is perusing the IOC, BOC, and EOC meta keys for suspicious activity. Upon doing so, they observe the metadata, browser runs powershell and begin to investigate:

 

Pivoting into the Event Analysis view, the analyst can see that Internet Explorer spawned PowerShell, and subsequently the PowerShell that was executed:

 

The analyst decides to decode the base64 to get a better idea as to what the PowerShell is executing. The analyst observes the PowerShell is setting up a web request, and can see the parameters it would be supplying for said request. From here, the analyst could leverage this information and start looking for indicators of this in their packet data (this demonstrates the power behind having both Endpoint, and Packet solutions):

 

Pivoting in on the PowerShell that was launched, it is also possible to see the whoami and tasklist commands that were executed as well. This would help the analyst paint a picture of what the attacker was doing:

 

Conclusion

The traffic outlined in this blog post is of a default configuration for PowerShell Empire; it is therefore possible for the indicators to be different depending upon who sets up the instance of PowerShell Empire. With that being said, C2s still need to check in, C2s still need to deploy their payload, and C2s will still perform suspicious tasks on the endpoint. The analyst only needs to pick up on one of these activities to start pulling on a thread and unwinding the attacker's activity.

 

It is also important to note that PowerShell Empire network traffic is cumbersome to decrypt. It is therefore important to have an endpoint solution, such as NetWitness Endpoint, that tracks the activities performed on the endpoint for you.

 

Further Work

Rui Ataide has been working on a script to scrape Censys.io data looking for instances of PowerShell Empire. The attached Python script queries the Censys.io API looking for specific body request hashes, then subsequently gathers information surrounding the C2, including:

 

  • Hosting Server Information
  • The PS1 Script
  • C2 Information

 

Also attached is a sample output from this script with the PowerShell Empire metadata that has currently been collected.

Understanding how attackers may gain a foothold on your network is an important part of being an analyst. If attackers want to get into your environment, they typically will find a way. It is up to you to detect and respond to these threats as effectively and efficiently as possible. This blog post will demonstrate how a host became infected with PoshC2, and subsequently how the C&C (Command and Control) communication looks from the perspective of the defender.

 

The Attack

The attacker crafts a malicious Microsoft Word Document that contains a macro with their payload. This document is sent to an individual from the organisation they want to attack, in the hopes the user will open the document and subsequently execute the macro within. The Word document attempts to trick the user into enabling macros by containing content like the below:

 

The user enables the content and doesn't see any additional content, but in the background, the malicious macro executes and the computer is now part of the PoshC2 framework:

 

From here, the attacker can start to execute commands, such as tasklist, to view all currently running processes:

 

The attacker may also choose to setup persistence by creating a local service:

 

Preamble to Hunting

Prior to performing threat hunting, the analyst needs to assume a compromise and generate a hypothesis as to what they are looking for. In this case, the analyst is going to focus on hunting for C2 traffic over HTTP. Now that the analyst has decided upon the hypothesis, this will dictate where they will look for that traffic, and what meta keys are of use to them to achieve the desired result. Refining the analyst's approach toward threat hunting will yield far greater results in detection: if analysts have a path to walk down, and can exhaust all possible avenues of that path before taking another route, the data set will be sifted through in a more methodical manner, as there will be fewer distractions for the analyst.

 

The Detection Using Packet Data

Understanding how HTTP works is vital in detecting malicious C2 over HTTP. To become familiar with this, analysts should analyse HTTP traffic generated by malware and HTTP traffic generated by users; this allows the analyst to quickly determine what is out of place in a data set versus what seems to be normal. Blending in is a common strategy among malware authors: they want their traffic to look like regular network communications and appear as innocuous as possible, but by their very nature, Trojans are programmatic and structured, and when examined, it becomes clear the communications hold no business value.

 

Taking the information above into account, the analyst begins their investigation by focusing on the protocol of interest at this point in time, HTTP. This one simple query quickly removes a large amount of the data set, and allows the analyst to place an analytical lens on just the protocol of interest. This is not to say that the analyst will not look at other protocols, but at this point in time, and for this hunt, their focus is on HTTP:

 

Now the data set has been significantly reduced, but that reduction needs to continue. A great way of reducing the data set to interesting sessions is to use the Service Analysis meta key. This meta key contains metadata that pulls apart the protocol and details information about the session that can help the analyst distinguish between user behavior and automated malicious behavior. The analyst opens the meta key and focuses on a few characteristics of the HTTP session that they think make the traffic more likely to be of interest:

 

Let's delve into these a little and find out why they were picked; a short sketch applying these checks follows the list:

 

  • http no referer: An interactive session from a typical user browsing the web would mean the HTTP request should contain a referrer header with the address of where that user came from. More mechanical HTTP traffic typically will not have a referrer header.
  • http four or less headers: Typical HTTP requests from users browsing the web have seven or more HTTP headers; therefore, looking for sessions that have a lower HTTP header count could yield more mechanical HTTP requests.
  • http single request/response: A single TCP session can be used for multiple HTTP transactions, so if a typical user is browsing the web, you would expect to see multiple GETs and potentially POSTs within a single session. Placing a focus on HTTP sessions that only have a single request and response could lead us to more mechanical behavior.
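As a small illustration of the checks described above, the function below applies the same three tests to a parsed HTTP request; the header names, threshold, and sample request are assumptions for the sake of the example:

def is_mechanical_http(headers, requests_in_session):
    """Apply the three characteristics above: no Referer header, four or fewer
    headers, and a single request/response per TCP session."""
    no_referer = "referer" not in {name.lower() for name in headers}
    few_headers = len(headers) <= 4
    single_transaction = requests_in_session == 1
    return no_referer and few_headers and single_transaction

# Hypothetical check-in style request.
request_headers = {"Host": "203.0.113.10", "User-Agent": "Mozilla/5.0", "Cache-Control": "no-cache"}
print(is_mechanical_http(request_headers, requests_in_session=1))  # True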

 

There are a variety of other drills that could have been performed by the analyst, but for now, this will be sufficient for the path they want to take, as they have reduced the data set to a more manageable amount. The analyst, while perusing the other available metadata, observes an IP communicating directly to another IP, with requests for a diverse range of resources:

 

Opening the visualization to analyse the cadence of the communication, the analyst observes there to be some beaconing type behavior:

 

 

Reducing the time frame down, the beaconing is easier to see and appears to be ~every 5 minutes:

 

Upon opening the Event Analysis view, the analyst can see the beacon pattern, which is happening roughly every 5 minutes; the analyst also observes a low variance in the payload size. This is indicative of some mechanical check-in type behavior, which is exactly what the analyst was looking for:

 

Now the analyst has found some interesting sessions, they can reconstruct the raw payload to see if there are further anomalies of interest. Browsing through the sessions, the analyst sees that the requests do not return any data, and are to random-sounding resources. This seems like some sort of check-in type behavior:

 

The analyst comes back to the events view to see if there are any larger sessions toward this IP, to get a better sense of whether any data is being sent back and forth. The analyst notices a few sessions that are larger than the others and decides to investigate those sessions:

 

Reconstructing one of the larger sessions, the analyst can see a large chunk of base64 is being returned:

 

As well as POSTs with a suspicious base64-encoded Cookie header that does not conform to the RFC:

 

This seems to be the only type of data transferred between the two IPs and stands out as very suspicious. This should alert the analyst that this is most likely some form of C2 activity:

 

The base64-encoded data is encrypted, and therefore the analyst cannot simply decode it to find out what information is being transferred.

 

THOUGHT: Maybe there is another way for us to get the key to decode this? Keep reading on!

 

The analyst has now found some suspicious activity; the next stage is to track this activity and see if it is happening elsewhere. This can easily be done by using an application rule. The analyst identifies somewhat unique criteria for this traffic using the Investigation view and converts that into an application rule; the following example would pick up on this activity and make it far easier for the analyst to track:

 

(service = 80) && (server = 'microsoft-httpapi/2.0') && (filename !exists) && (http.response = 'cachecontrol') && (resp.uniq = 'no-cache, no-store, must-revalidate') && query length 14-16

 

IMPORTANT NOTE: Before adding this application rule to the environment, it is important that the analyst thoroughly checks how many hits this logic would create in their environment before deploying it. Application rules can work well in one environment, but can be very noisy in others.

 

It is also important to note that this application rule was generated specifically for this environment and the traffic that was seen; not all PoshC2 traffic would look this way. It is up to the analyst to create application rules that suit their environment. It is also important to note that the http.response and resp.uniq meta keys need to be enabled in the http_lua_options file, as they are not enabled by default.

 

The analyst creates the application rule, and pushes this to all available Decoders:

 

Upon doing so, the analyst sees the application rule creating metadata as expected, but also notices that there is another C2, and another host in their network infected by PoshC2:

 

This demonstrates the necessity of tracking activity on your network as and when it is found; it can uncover newly infected endpoints and allows you to track that activity easily.

 

From here, the analyst has multiple routes that they could take:

 

  • Perform OSINT (Open Source Intelligence) on the IP/activity in question; or
  • Investigate if there is a business need for this communication; or
  • Investigate the endpoint to see what is making the communication, and if there are any other suspicious indicators.

 

The Detection Using Endpoint Tracking

The analyst, while performing their daily activity of perusing IOCs (Indicators of Compromise), BOCs (Behaviors of Compromise), and EOCs (Enablers of Compromise), observes a BOC that stands out as being of interest to them, office application runs powershell:

 

Opening the Event Analysis view for this BOC, the analyst can better understand what document the user opened for this activity to happen. There are three events for this because the user opened the document three times, probably as they weren't seeing any content after enabling the macros within it:

 

Opening the session itself, the analyst can see the whole raw payload of the PowerShell that was invoked from the Word document:

 

Running this through base64 decoding, the analyst can see that it is double base64 encoded, and the PowerShell has also been deflated, meaning more obfuscation was put in place:

 

Decoding the second set of base64 and inflating, the actual PowerShell that was executed can now be seen:
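The first layer is a straightforward base64 decode; the second layer, as described above, is base64 followed by DEFLATE decompression. Assuming the PowerShell uses a raw DeflateStream (which is what the wbits value of -15 below corresponds to), the same step could be reproduced outside the usual tooling with a couple of lines, for example:

import base64
import zlib

def inflate_inner_payload(inner_b64):
    """Base64-decode the embedded string found inside the first decoded layer and
    decompress it as raw DEFLATE data to recover the final PowerShell."""
    compressed = base64.b64decode(inner_b64)
    return zlib.decompress(compressed, -15).decode("utf-8", errors="replace")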

 

Perusing the PowerShell, the analyst observes that there is a decode function within. This function requires an IV and KEY to successfully decrypt. This could be useful to decrypt the information that we saw in the packet data: 

 

The analyst calculates the IV from the key; according to the PowerShell, the IV is the key itself minus the first 15 bytes. This is then converted to hex for ease of use:

 

Now the analyst has the key and the IV, they can decrypt the information they previously saw in the packets. The analyst navigates back to the packets and finds a session that contains some base64:

 

Using the newly identified information retrieved via the endpoint tracking, the analyst can now start to decode the information and see exactly what commands and data was sent to the C2:
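As a very rough sketch of that workflow, and assuming the decode function recovered from the PowerShell implements AES-CBC (the recovered code itself is authoritative for the real algorithm, padding, and key handling), the decryption could be reproduced along these lines using the third-party pycryptodome package:

import base64
from Crypto.Cipher import AES  # pycryptodome

def decrypt_c2_blob(blob_b64, key, iv):
    """Illustrative only: base64-decode a captured blob and decrypt it with the
    recovered key and IV, assuming AES-CBC. Padding removal depends on the scheme
    actually used by the recovered decode function."""
    ciphertext = base64.b64decode(blob_b64)
    return AES.new(key, AES.MODE_CBC, iv).decrypt(ciphertext)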

 

Some of this information can be incredibly beneficial, such as the below, which lists all the URLs this C2 will use:

 

The analyst also wants to find out if any other interesting activity was taking place on the endpoint; upon perusing the BOC meta key, the analyst spots the metadata, creates suspicious service running command shell:

 

The analyst opens the sessions in the Event Analysis view, and can see that PowerShell was spawning sc.exe to create a suspicious looking service called, CPUpdater:

 

This is the persistence mechanism that was chosen by the attacker. The analyst now has the full PowerShell command and can base64 decode it to confirm the assumptions:

 

 

 

Conclusion

Understanding the nuances between user-based behavior and mechanical behavior gives an advantage to the analyst who is performing threat hunting. If the analyst understands what "normal" should look like within their environment, they can easily discern it from abnormal behaviors.

 

It is also important to note the advantages of having endpoint tracking data in this scenario as well. Without the endpoint tracking data, the original document with the malicious PowerShell may not have been recoverable, and therefore the decryption of the information between the C2 and the endpoint would not have been possible; both tools heavily complement one another in creating the full analytical picture.

Attackers are continuously evolving in order to evade detection.  A popular method often utilized is encoding. An attacker may choose to, for example, encode their malicious binaries in order to evade detection; attackers can use a diverse range of techniques to achieve this, but in this post, we are focusing on an example of a hex encoded executable. The executable chosen for this example was not malicious, but a legitimate signed Microsoft binary.

 

This method of evading detection was observed in the wild by the RSA Incident Response team. Due to the close relationship between the Incident Response Team and RSA's Content Team, a request for this content was submitted by IR, and was published to RSA Live yesterday. The following post demonstrates the Incident Response team testing the newly developed content.

 

The Microsoft binary was converted to hexadecimal and uploaded onto Pastebin, which is an example of what attackers are often seen doing:  

 

A simple PowerShell script was written to download and decode the hexadecimal encoded executable and save it to the Temp directory:
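The script itself is shown in the screenshot that follows. Purely as an illustration of the decode step (written here in Python rather than PowerShell, with a hypothetical output path), the core logic amounts to hex-decoding the downloaded text, writing the bytes to disk, and sanity-checking for the MZ executable header:

import binascii
from pathlib import Path

def decode_hex_payload(hex_text, out_path):
    """Convert a hex-encoded blob (e.g. text pulled from a paste site) back to bytes,
    write it out, and check for the 'MZ' executable header. Paths are illustrative."""
    data = binascii.unhexlify("".join(hex_text.split()))  # tolerate whitespace/newlines
    Path(out_path).write_bytes(data)
    return data[:2] == b"MZ"

print(decode_hex_payload("4d5a90000300", "dllhost_decoded.bin"))  # True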

 

Typically, the above PowerShell would be Base64 encoded and the IR team would normally see something like the below:

 

After executing the PowerShell script, it is possible to see that dllhost.exe was successfully decoded and saved into the Temp directory:

 

Upon perusing the packet metadata, the analyst would be able to easily spot the download of this hex encoded executable by looking under the Indicator of Compromise key:

 

Conclusion

It is important to always keep the RSA NetWitness platform up to date with the latest content. RSA Live allows analysts to subscribe to content, as well as receive updates on when newly developed content is available. For more information on setting up RSA Live, please see: Live: Create Live Account 

This blog post is a follow-on from the following two blog posts:

 

 

 

The Attack

The attacker is not happy with executing commands via the Web Shell, so she decides to upload a new Web Shell called reGeorg (https://sensepost.com/discover/tools/reGeorg/). This Web Shell allows the attacker to tunnel other protocols over HTTP, allowing her, for example, to RDP directly onto the Web Server, even though RDP isn’t directly accessible from the internet.

 

The attacker can upload the Web Shell via one of the previously utilized Web Shells:

 

The attacker can now check the upload was successful by navigating to the uploaded JSP page. If all is okay, the Web Shell returns the message shown in the below screenshot:

 

The attacker can now connect to the reGeorg Web Shell:

 

The attacker now has remote access to anything accessible from the Web Server where the Web Shell is located. This means the attacker could choose to RDP to a previously identified machine, for example:

 

Attackers also like to keep other access methods to endpoints; one way of doing this is to set up an accessibility backdoor. This involves the attacker altering a registry key to load CMD when another application executes, in this case sethc.exe – the accessibility feature you typically see when pressing the SHIFT key five times. This means that anyone who can RDP to that machine can receive a system-level command prompt with no credentials required; this is because sethc.exe can be invoked at the login screen by pressing the SHIFT key five times, and with the registry key altered, it will spawn CMD as well.
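The registry alteration described above is typically the Image File Execution Options "Debugger" value for sethc.exe (which is also why the image hijacking metadata appears later in this post). A defender-side sketch to check a host for it, using only the Python standard library on Windows, might look like this; the key path reflects that common technique rather than anything specific to this attack:

import winreg  # Windows only

IFEO = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options"

def sethc_debugger():
    """Return the Debugger value registered for sethc.exe, if any. A value pointing
    at cmd.exe is the classic accessibility backdoor described above."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, IFEO + r"\sethc.exe") as key:
            value, _ = winreg.QueryValueEx(key, "Debugger")
            return value
    except FileNotFoundError:
        return None

print(sethc_debugger())  # e.g. C:\Windows\System32\cmd.exe on a backdoored host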

 

To set this up, the attacker can use the Web Shell, and perform this over WMI using REG ADD:

 

Now the attacker can RDP back to the host they just set up the accessibility backdoor on, press the SHIFT key five times to initiate sethc.exe, and be given a command prompt as SYSTEM without having to use credentials:

 

 

The Analysis in RSA NetWitness

The analyst, while perusing Behaviors of Compromise, observes some suspicious indicators, runs wmi command-line tool, creates remote process using wmi command-line tool, and http daemon runs command shell just to name a few:

 

Drilling into the WMI related metadata, it is possible to see the WMI lateral movement that was used to setup the accessibility backdoor from the Web Shell:

 

The analyst also observes some interesting hits under the Indicators of Compromise meta key, enables login bypass and configures image hijacking:

 

Drilling into these sessions, we can see they relate to the WMI lateral movement performed, but these events are from the endpoint the backdoor was set up on:

 

The analyst, further perusing the metadata, drills into the Behavior of Compromise metadata, gets current username, and can see the sticky key backdoor being used (represented by the sethc.exe 211) to execute whoami:

 

The analyst, also perusing HTTP network traffic, observes HTTP headers that they typically do not see, x-cmd, x-target, and x-port:

 

Drilling into the RAW sessions for these suspicious headers, it is possible to see the command sent to the Web Shell to initiate the RDP connection:

 

Further perusing the HTTP traffic toward tunnel.jsp, we can see the RDP traffic being tunnelled over HTTP requests. The reason this shows as HTTP and not RDP is that, with the RDP traffic tunnelled inside HTTP, there are more characteristics defining the session as HTTP than as RDP:

 

Conclusion

Attackers will leverage a diverse range of tools and techniques to ensure they keep access to the environment they are interested in. The tools and techniques used here are freely available online and are often seen utilized by advanced attackers; performing proactive threat hunting will ensure that these types of events do not go unnoticed within your environment.

Following up from the previous blog, Web Shells and RSA NetWitness, the attacker has since moved laterally. Using one of the previously uploaded Web Shells, the attacker confirms permissions by running, whoami, and checks the running processes using, tasklist. Attackers, like most individuals, are creatures of habit:

 

The attacker also executes a quser command to see if any users are currently logged in, and notices that an RDP session is currently active:

 

The attacker executes a netstat command to see where the RDP session has been initiated from and finds the associated connection:

 

The attacker pivots into her Kali Linux machine and sets up a DNS Shell. This DNS Shell will allow the attacker to set up C&C on the new machine she has just discovered:

 

The attacker moves laterally using WMI, and executes the encoded PowerShell command to setup the DNS C&C:

 

The DNS Shell is now set up and the attacker can begin to execute commands, such as whoami, on the new machine through the DNS Shell:

 

Subsequently, as the attacker likes to do, she also runs a tasklist through the DNS Shell:

 

Finally, the attacker confirms if the host has internet access by pinging, www.google.com:

 

As the attacker has confirmed internet access, she decides to download Mimikatz using a PowerShell command:

 

The attacker then performs a dir command to check if Mimikatz was successfully downloaded:

 

From here, the attacker can dump credentials from this machine, and continue to move laterally around the organisation, as well as pull down new tools to achieve their task(s). The attacker has also setup a failover (DNS Shell) in case the Web Shells are discovered and subsequently removed.

 

 

 

Analysis

Since the previous post, the analyst has upgraded their system to NetWitness 11.3 and deployed the new agents to their endpoints. The tracking data now appears in the NetWitness UI, and subsequently the analysis will take place solely in the 11.3 UI.

 

Tracking Data

The analyst, upon perusing the metadata, uncovers some reconnaissance commands, whoami.exe and tasklist.exe, being executed on two of their endpoints:

 

Refocusing their investigation on those two endpoints, and exposing the Behaviours of Compromise (BOC) meta key, the analyst uncovers some suspect indicators that relate to a potential compromise: creates remote process using wmi command-line tool, http daemon runs command shell, and runs powershell using encoded command, just to name a few:

 

Pivoting into the sessions related to, creates remote process using wmi command-line tool, the analyst observes the Tomcat Web Server performing WMI lateral movement on a remote machine:

 

The new 11.3 version stores the entire Encoded PowerShell command and performs no truncation:

 

This allows the analyst to perform Base64 decoding directly within the UI using the new Base64 decode function (NOTE: the squares in between each character are due to double byte encoding and not a byproduct of NetWitness decoding):
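As an aside, the same decode can be reproduced outside the UI: PowerShell -EncodedCommand payloads are base64 over UTF-16LE text, and the null bytes of that double-byte encoding are the "squares" mentioned above. A minimal sketch, with a hypothetical sample string:

import base64

def decode_encoded_command(b64):
    """Decode a PowerShell -EncodedCommand string: base64 over UTF-16LE text."""
    return base64.b64decode(b64).decode("utf-16-le")

sample = base64.b64encode("whoami".encode("utf-16-le")).decode()
print(decode_encoded_command(sample))  # whoami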

  

 

Navigating back to the metadata view, the analyst opens the Indicators of Compromise (IOC) meta key and observes the metadata, drops credential dumping library:

 

Pivoting into those sessions, the analyst sees that Mimikatz was dropped onto the machine that was previously involved in the WMI lateral movement:

 

Packet Data

The analyst is also looking into the packet data; they are searching through DNS as they have seen an increase over the amount of traffic they typically see. Upon opening the SLD (Second Level Domain) meta key, the culprit of the increase is shown:

 

Focusing the search on the offending SLD, and expanding the Hostname Alias Record (alias.host) meta key, the analyst observes a large number of suspicious unique FQDNs:

 

This is behaviour indicative of a DNS tunnel. Focusing on the DNS Response Text meta key, it is also possible to see the commands that were being executed:

 

We can further substantiate that this is a DNS tunnel by using a tool such as CyberChef: taking the characters after cmd in the FQDN and hex decoding them reveals that data is being sent hex encoded as part of the FQDN itself, in chunks that are reconstructed on the attacker side, due to the restriction on how much data can be sent via DNS:
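The same reassembly CyberChef performs can be sketched in a few lines; the marker, label layout, and sample FQDNs below are illustrative of the pattern described above rather than the exact format used by this tool:

import binascii

def decode_dns_chunks(fqdns, marker="cmd"):
    """Take the hex characters after the marker in each query's first label,
    concatenate the chunks in order, and hex-decode the result."""
    hex_chunks = []
    for fqdn in fqdns:
        label = fqdn.split(".", 1)[0]
        if marker in label:
            hex_chunks.append(label.split(marker, 1)[1])
    return binascii.unhexlify("".join(hex_chunks))

# Hypothetical queries carrying hex-encoded output one chunk at a time.
queries = ["cmd6465736b746f70.tunnel.example", "cmd5c75736572.tunnel.example"]
print(decode_dns_chunks(queries))  # b'desktop\\user'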

 

 

ESA Rule

DNS-based C&C is noisy; this is because there is only a finite amount of information that can be sent with each DNS packet. Therefore, returning information from the infected endpoint requires a large amount of DNS traffic. Subsequently, the DNS requests that are made need to be unique, so as not to be resolved by the local DNS cache or internal DNS servers. Due to this high level of noise from the DNS C&C communication, and the variance in the FQDN, it is possible to create an ESA rule that looks for DNS C&C with a high rate of fidelity.

The ESA rule attached to this blog post calculates a ratio of how many unique alias host values there are toward a single Second Level Domain (SLD): we count the number of sessions toward the SLD and divide that by the number of unique alias hosts for that SLD to give us a ratio:

 

  • SLD Session Count ÷ Unique Alias Host Count = ratio

 

The lower the ratio, the more likely this is to be a DNS tunnel, due to the high connection count and the variance in FQDNs toward a single SLD. The below screenshot shows the output of this rule, which triggered on the SLD shown in the analysis section of this blog post:
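The calculation itself is simple enough to prototype outside ESA. The sketch below computes the same ratio from a list of (SLD, alias.host) pairs; the sample data and domain names are hypothetical:

from collections import defaultdict

def sld_ratios(dns_sessions):
    """For each SLD, divide the total session count by the number of unique
    alias.host values; low ratios suggest a DNS tunnel."""
    counts, uniques = defaultdict(int), defaultdict(set)
    for sld, alias_host in dns_sessions:
        counts[sld] += 1
        uniques[sld].add(alias_host)
    return {sld: counts[sld] / len(uniques[sld]) for sld in counts}

sessions = [("tunnel.example", "cmd%04x.tunnel.example" % i) for i in range(500)]
sessions += [("cdn.example", "static.cdn.example")] * 500
print(sld_ratios(sessions))  # tunnel.example ~= 1.0 (suspicious), cdn.example = 500.0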

 

 

NOTE: Legitimate products perform DNS tunnelling, such as McAfee, ESET, TrendMicro, etc. These domains would need to be filtered out based on what you observe in your environment. The filtering option for domains is at the top of the ESA rule.

 

The rule for import and pure EPL code in a text file are attached to this blog. 

IMPORTANT: SLD needs to be set as an array for the rule to work.

 

 

Conclusion

This blog post was to further demonstrate the TTPs (Tools, Techniques, and Procedures) attackers may utilise in a compromise to achieve their end goal(s). It demonstrates the necessity for proactive threat hunting, as well as the necessity for both Packet and Endpoint visibility to succeed in said hunting. It also demonstrates that certain aspects of hunting can be automated, but only after fully understanding the attack itself; this is not to say that all threat hunting can be automated, as a human element is always needed to confirm whether something is definitely malicious, but automation can be used to minimise some of the work the analyst needs to do.

This blog also focused on the new 11.3 UI. This allows analysts to easily switch between packet data and endpoint data in a single pane of glass; increasing efficiency and detection capabilities of the analysts and the platform itself.

Introduction

This blog post demonstrates a common method by which organisations can get compromised. Initially, the viewpoint will be from the attacker’s perspective; it will then move on to show what artifacts are left over within the RSA NetWitness Packets and RSA NetWitness Endpoint solutions that analysts could use to detect this type of activity.

 

Scenario

An Apache Tomcat server exposed to the internet with weak credentials for the Tomcat Manager App gets exploited by an attacker. The attacker uploads three Web Shells, confirms access to all of them, and then uploads Mimikatz to dump credentials.

 

Definitions

Web Shells

A web shell is a script that can be uploaded to a web server to enable remote administration of the machine. Infected web servers can be either internet-facing or internal to the network, where the web shell is used to pivot further to internal hosts.

A web shell can be written in any language that the target web server supports. The most commonly observed web shells are written in widely supported languages such as JSP, PHP, and ASP; Perl, Ruby, Python, and Unix shell scripts are also used.

 

Mimikatz

Mimikatz is an open source credential dumping program that is used to obtain account login and password information, normally in the form of a hash or a clear text password from an operating system.

 

THC Hydra

When you need to brute force crack a remote authentication service, Hydra is often the tool of choice. It can perform rapid dictionary attacks against more than 50 protocols, including telnet, ftp, http, https, smb, several databases, and much more.

 

WAR File

In software engineering, a WAR file (Web Application Resource or Web Application Archive) is a file used to distribute a collection of JAR files, JavaServer Pages, Java Servlets, Java classes, XML files, tag libraries, static web pages (HTML and related files), and other resources that together constitute a web application.


 

The Attack

The attacker finds an exposed Apache Tomcat Server for the organisation. This can be achieved in many ways, such as a simple Google search to show default configured Apache Servers:

                                      

The attacker browses to the associated Apache Tomcat server and sees it is running up-to-date software and appears to have been mostly left at its default configuration:

                 

 

The attacker attempts to access the Manager App; it requires a username and password, and therefore the attacker cannot log in to make changes. Typically, however, these servers are set up with weak credentials:

               

 

Based on this assumption, the attacker uses an application called THC Hydra to brute force the Tomcat Manager App using a list of passwords. After a short while, Hydra returns a successful set of credentials:

                


The attacker can now login to the Manager App using the brute forced credentials:

               

 

From here, the attacker can upload a WAR (Web application ARchive) file which contains their Web Shells:

                

 

The WAR file is nothing more than a ZIP file with the JSP Web Shells inside. In this case, three Web Shells were uploaded:

                  

 

After the upload, it is possible to see that a new application called admin (named after the WAR file, admin.war) has been created:

              


The attacker has now successfully uploaded three Web Shells onto the server and can begin to use them. One of the Web Shells, named resetpassword.jsp, requires authentication to help protect against direct access by other individuals; this page could also be adapted to confuse analysts when visited:

            

 

The attacker enters the password and can begin browsing the web server’s file system and executing commands; typical commands, such as whoami, are often used by attackers:

           

 

The attacker may also choose to see what processes are running to see if there are any applications that could hinder their progression by running, tasklist: 

           

 

From the previous command, the attacker notices a lack of Anti-Virus so decides to upload Mimikatz via the WebShell: 

          

 

The ZIP file has now been uploaded. This Web Shell also has an UnPack feature to decompress the ZIP file: 

         

 

Now the ZIP file is decompressed:

         

 

The attacker can now use the Shell OnLine functionality within this Web Shell which emulates CMD in order to navigate to the Mimi directory and see their uploaded tools: 

         

 

The attacker can then execute Mimikatz to dump all passwords in memory:

        

 

The attacker now has credentials from the Web Server:

         

 

The attacker could then use these credentials to laterally move onto other machines.

 

The attacker also dropped two other Web Shells, potentially as backups in case some get flagged. Let’s access those to see what they look like. This Web Shell is the JSP file called error2.jsp; it has similar characteristics to the resetpassword.jsp Web Shell:

       

 

We can browse the file system and execute commands:

        

 

The final Web Shell uploaded, login.jsp, exhibits odd behavior when accessed:

       

 

It appears to perform a redirect to a default Tomcat page named examples; this appears to be a trick to confuse anyone who potentially browses to that JSP page. Examining the code for this Web Shell, it is possible to see it performs a redirect if the correct password is not supplied:

     

           <SNIP>

     

 

Passing the password to this Web Shell as a parameter, which is defined at the top of this Web Shell’s code, we get the default response from the Web Shell:

    

 

Analyzing the code further, you can identify additional parameters to pass in order to make the Web Shell perform certain actions, such as a directory listing:

   

 

This Web Shell is known as Cknife, and interacting with it in this way is not efficient or easy, so Cknife comes with a Java-based client in order to control the Web Shell. We can launch this using the command shown below:

  

 

The client is then displayed which would typically be used:

  

Note:  This web shell is listed in this blog post as it is something the RSA Incident Response team consistently sees in some of the more advanced attacks.

 

The Analysis

Understanding the attack is important, which is why it comes prior to the analysis section. Understanding how an attacker may operate, and the steps they may take to compromise a Web Server, will significantly increase your ability to detect these types of threats, as well as better understand the viewpoint of the analysis while triage is performed.

 

RSA NetWitness Packets

While perusing the network traffic, a large number of 401 authentication errors towards one of the Web Servers were observed; there is also a large variety of what look like randomly generated passwords:

        

 

Focusing on the 401 errors, and browsing other metadata available, we can see the authentication errors are toward the Manager App of Tomcat over port 8080. Also take note of the Client Application being used; this is the default from THC Hydra and has not been altered:

       

 

Removing the 401 errors, and opening the Filename and Directory meta keys, we can see the Web Shells that were being accessed and the tools that were uploaded:

       

 

NOTE: In an actual environment, a large number of directories and filenames would exist, it is up to the analyst to search for the filenames of interest that sound out of the norm or are in suspicious directories, are newly being accessed, and not accessed as frequently as other pages on the web server. For a more in-depth explanation to hunting using NetWitness Packets, take a look at the hunting guide available here: https://community.rsa.com/docs/DOC-79618

 

The analyst could also use other meta keys to look for suspicious or odd behavior. Inbound HTTP traffic with windows cli admin commands would be worth investigating, as well as sessions with only POSTs for POST-based Web Shells, http post no get or http post no get no referer, to give a couple of examples:

     

 

Investigating the sessions with windows cli admin commands yields the following two sessions; you’ll notice one of them is one of the Web Shells, resetpassword.jsp:

    

 

Double clicking on the session will reconstruct the packets and display the session in the Best Reconstruction view, in this case, web. Here we can see the Web Shell as the browser would have rendered it; this should instantly stand out as something suspicious:

    

 

This HTTP session also contains the error2.jsp Web Shell. From the RSA NetWitness rendering, it is possible to see the returned results that the attacker saw. Again, this should stand out as suspicious:

    

 

Coming back to the investigate view, and this time drilling into the sessions for post no get no referer, we can see one of the other Web Shells, login.jsp:

   

 

Double clicking on one of these sessions shows the results from the Cknife Web Shell, login.jsp:

    

 

As this was not a nicely formatted web based Web Shell, the output is not as attractive, but this still stands out as suspicious traffic: why would a JSP page on a Web Server return a tasklist?

 

Sometimes changing the view can also help to see additional data. Changing the reconstruction to text view shows the HTTP POST sent, this is where you can see the tasklist being executed and the associated response:

   

 

Further perusing the network traffic, it is also possible to see that Mimikatz was executed:

  

This is an example of what the traffic may look like in RSA NetWitness Packets. The analyst would only need to pick up on one of these sessions to know their organization has been compromised. Pro-active threat hunting and looking for anomalies in traffic toward your web servers will significantly reduce the attacker dwell time.

 

The activity observed was only possible due to the communication happening over HTTP. If this had been SSL, the detection via packets would be much more difficult. This is why introducing SSL Decryption/Interception/Offloading is highly recommended. SSL inspection devices are nothing more than a well-designed man-in-the-middle attack that breaks the encryption into two separate encrypted streams. Therefore, they still provide an adequate level of protection to end-users while allowing security analysts and devices to properly monitor and alert when malicious or unwanted activity takes place, such as the web shells shown here. In summary, if you are responsible for protecting your organization’s assets, you should definitely consider the pros and cons of using this technology.

 

 

RSA NetWitness Endpoint

On a daily basis, analysts should be perusing the IIOCs within NWE, paying particular attention to the Level 1 IIOCs. Upon logging into NWE, we can see an interesting Level 1 IIOC has hit, HTTP Daemon Runs Command Shell. This IIOC looks for an HTTP daemon, such as Tomcat, spawning cmd.exe:

 

      

 

If we double click on the machine name in the Machines window, we can then navigate to the tracking data for this machine to see what actually happened. Here we can see that Tomcat9.exe is spawning cmd.exe and running commands such as whoami and tasklist; this is not normal functionality and should raise concern for the analyst. We can also see the Mimikatz execution and the associated command that was executed:

 

    

 

Another IIOC that would have led us to this behavior is HTTP Daemon Creates Executable:

 

   

 

Again, coming back into the tracking data, we can see the Tomcat9.exe web daemon writing files. This would be noteworthy and something that should be investigated further, as web daemons can also perform this activity legitimately. In this instance, the presence of Mimikatz is enough for us to determine this is malicious activity:

  

 

The analyst also has the capability to request files from the endpoint currently under analysis by right-clicking on the machine name, selecting Forensics and then Request File(s):

 

The analyst can specify the files they want to collect for their analysis (NOTE: wildcards can be used for the filenames but not directories). In this case, the analyst wants to look into the Tomcat Access files, and requests that five of them be returned:

 

Once the files have been downloaded, the analyst can save them locally by navigating to the Download tab, right-clicking the files of interest and selecting Save Local Copy:

 

Perusing the access files, the analyst can also see a large number of 401 authentication errors to the Tomcat Web Server, which would have been from the THC Hydra brute force:

 

There is also evidence of the Web Shells themselves. Some of the commands the attacker executed can be seen in the GET requests; the data in the body of the POSTs, however, does not show in the log file. This is why it is important to have both Packets and Endpoint visibility to understand the full interaction with the Web Shell:

 

Conclusion

Understanding the potential areas of compromise within your organisation vastly increases your chances of early detection. This post was designed to show one of those areas of importance for attackers, and how they may go about a compromise, while also showing how that attack may look as captured by the RSA NetWitness Platform. It is also important to understand the benefits of proactively monitoring the RSA NetWitness products for malicious activity; simply waiting for an alert is not enough to capture attacks in their early stages.

 

It is also important for defenders to understand how these types of attacks look within their own environment, so that they can better understand and subsequently protect against them.

 

This is something our RSA Incident Response practice does on a daily basis. If your organization needs help, or you're interested in learning more, please contact your account manager.

 

As always, happy hunting!  

After reading through a few SANS resources, I came across some interesting topics regarding the detection of rare processes to help pinpoint malicious applications running on a host. From this, I decided to create an EPL rule to baseline processes on Windows hosts and alert if any processes deviate from the norm.

 

The principle behind this rule is to profile every Windows host in the estate and keep track of the processes that run on those hosts. Should a process diverge from the average, it is declared rare and an alert is generated for analysts to investigate. The rule is written in a way that learns what is normal within a specific environment and baselines accordingly.

 

 

Dependencies

The following meta keys need to be indexed for the below rule to work:-

 

  • event_computer
  • process

 

Other than that, deploy the rule and you're good to go!

 

The EPL Rule

@Name('Create Window')
CREATE WINDOW winProcess.win:time(31 days) (theDay int, event_computer string, process string, counter int);

@Name('Insert into Window')
on Event(process IS NOT NULL AND event_computer IS NOT NULL)
merge winProcess
WHERE Event.process = winProcess.process AND Event.event_computer = winProcess.event_computer AND current_timestamp.getDayOfWeek() = winProcess.theDay
when matched
then update set counter = counter + 1
when not matched then INSERT
SELECT current_timestamp.getDayOfWeek() as theDay, event_computer, process, 1 as counter;

@Name('Alert')
@RSAAlert
SELECT * FROM winProcess as original
WHERE counter <= 0.2 * (
SELECT avg(counter) FROM winProcess as recent
WHERE original.theDay = recent.theDay and original.event_computer = recent.event_computer);
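
To make the alert condition concrete, here is a quick worked example using purely hypothetical numbers: if the average counter across the processes recorded for a given host and weekday is 50, the alert threshold is 0.2 * 50 = 10; a process seen only twice on that host (counter = 2) satisfies 2 <= 10 and is flagged as rare, while a process seen 40 times is not.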

Lee Kirkpatrick

DGA Detection

Posted by Lee Kirkpatrick Employee Feb 1, 2017

In one of my previous posts (Shannon. Have you seen my Entropy?) I touched on using a custom Java entropy calculator within the ESA to calculate entropy values for domains to assist with detecting Domain Generation Algorithms (DGAs). That post was more theory than practice, so I decided to implement and test it in my lab and share the implementation with you all.

 

The basic principle behind this form of DGA detection is to calculate an entropy value for each domain seen and store this value in an ESA window. We can then use the values in the ESA window to calculate an average entropy for the domains seen within an environment, which subsequently allows an alert to be generated if any domain's entropy exceeds 1.3x that average.

 

As an example, let's take the following four domains from the Alexa top 100 (this is what we will use as an example baseline; the rule attached to this post would actually monitor your network for what is normal):-

 

  • google.com
  • youtube.com
  • facebook.com
  • baidu.com

 

Running each of these through the entropy calculator, we receive the following values:-

 

Domain          Entropy
google.com      2.6464393446710157
youtube.com     3.095795255000934
facebook.com    3.0220552088742
baidu.com       3.169925001442312
Average         2.983553702497115

 

Using this average as our baseline, we can then say that any domain whose entropy is greater than 1.3x this average (3.87861981324625) should be flagged, as this is a high entropy value.
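
As a rough sanity check (this is not the attached calculator itself), the values in the table above are consistent with standard character-frequency Shannon entropy measured in bits. The short Java sketch below, with illustrative class and method names, reproduces the google.com value and derives the 1.3x alert threshold from the four-domain average:

import java.util.HashMap;
import java.util.Map;

public class EntropyBaselineDemo {

    // Shannon entropy (in bits) over the character frequencies of a string.
    static double entropy(String s) {
        Map<Character, Integer> counts = new HashMap<>();
        for (char c : s.toCharArray()) {
            counts.merge(c, 1, Integer::sum);
        }
        double h = 0.0;
        for (int count : counts.values()) {
            double p = (double) count / s.length();
            h -= p * (Math.log(p) / Math.log(2)); // log base 2
        }
        return h;
    }

    public static void main(String[] args) {
        // The four baseline domains used in the example above.
        String[] baseline = {"google.com", "youtube.com", "facebook.com", "baidu.com"};
        double sum = 0.0;
        for (String domain : baseline) {
            double h = entropy(domain);
            System.out.println(domain + " -> " + h); // google.com -> ~2.6464
            sum += h;
        }
        double average = sum / baseline.length; // ~2.9836
        double threshold = 1.3 * average;       // ~3.8786, the alert threshold
        System.out.println("average = " + average + ", alert threshold = " + threshold);
    }
}

In the actual rule below, this calculation is performed by the calcEntropy plugin and the average is maintained in the ESA window rather than being hard-coded.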

 

Taking the following domains from Zeus tracker and calculating their entropy values, we can see the results:-

 

Domain                                        Entropy              Status
circleread-view.com.mocha2003.mochahost.com   3.952216429463629    Alert
cynthialemos1225.ddns.net                     3.952216429463629    Alert
moviepaidinfullsexy.kz                        4.061482186720775    Alert
039b1ee.netsolhost.com                        3.754441845713345    No alert

 

Example of the alert output below:-

 

 

Rule Logic

@Name('Learning Phase Variable')
//Change the learningPhaseMinutes variable to the number of minutes for the rule to learn
CREATE VARIABLE INTEGER learningPhaseMinutes = 1440;

 

@Name('Calculate Learning Phase')
on pattern[Every(timer:at(*, *, *, *, *))] set learningPhaseMinutes = learningPhaseMinutes - 1;

 

@Name('Create Entropy Window')
CREATE WINDOW aliasHostEntropy.win:length(999999).std:unique(entropy) (entropy double);

 

@Name('Insert entropy into Window')
INSERT INTO aliasHostEntropy
SELECT calcEntropy(alias_host) as entropy FROM Event(alias_host IS NOT NULL AND learningPhaseMinutes > 1);

 

@Name('Alert')
@RSAAlert
SELECT *, (SELECT avg(entropy) FROM aliasHostEntropy as Average), (SELECT calcEntropy(alias_host) FROM Event.win:length(1) as Entropy) FROM Event(learningPhaseMinutes <= 1 AND calcEntropy(alias_host) > 1.3* (SELECT avg(entropy) FROM aliasHostEntropy));

 

 

If you are interested in implementing this DGA Detection rule, I wrote up a little guide on how to do so. Everything you need is attached to this post.

 

DISCLAIMER: The information within this blog post is here to show the capabilities of the NetWitness product and avenues of exploration to help thwart the adversary. This content is provided as-is with no RSA direct support, use it at your own risk. Additionally, you should always confirm architecture state before running content that could impact the performance of your NetWitness architecture. 

Lee Kirkpatrick

Everything is PossiEPL

Posted by Lee Kirkpatrick Employee Oct 12, 2016

Event Processing Language (EPL) is utilised within the NetWitness Event Stream Analysis (ESA) component. This language is what allows us to write advanced correlation rules to detect and thwart the advanced threats we face on a constant basis; it allows us to make sense of, organise, and sift through the copious amounts of metadata produced on a daily basis.

 

EPL can seem a little daunting at first glance, but understanding a few basic principles will allow you to create a plethora of use cases. I have created a document to help explain those principles, to extend my knowledge, and hopefully yours as well:-

 

 

Enjoy!

Entropy is a term I am sure most of us are familiar with. In layman’s terms, it refers to randomness and uncertainty of data; it is in this randomness that we can detect potential malicious traffic.

 

A gentleman named George Zipf led the way in the study of character frequency in the early 1930s, and his work was further expanded upon by Claude Shannon to examine the entropy of language. These two forms of analysis have become ingrained in the computer security domain and are often used for cryptography, but what if we used their ideas to help detect malicious traffic?

 

Some malicious actors utilise domain generation algorithms (DGAs) to produce pseudo-random domain names for their C2 communications. If we apply Shannon's entropy to these domains, we can calculate a score for their randomness and possibly distinguish these maliciously formed domains from the norm:-

 

 

Using the RSA Event Stream Analysis (ESA) component and a customised Java-based Shannon Calculator, we can generate these entropy scores on the fly for any given metadata and, should a score exceed a given threshold, create an alert.

 

NOTE: Java plugins can be added to the ESA component as described by Nikolay Klender in his post - Extending ESA rules with custom function.

 

Once the Java plugin is implemented, we can then create our ESA correlation rule to utilise the new function and calculate the entropy. In this example, we will use the plugin to calculate entropy for DNS domains using the following EPL:-

 

@RSAAlert
SELECT * FROM Event(service = 53 AND calcEntropy(alias_host) > 4);

 

The entropy threshold here is set to anything greater than '4', but it can be adjusted depending on the results observed.
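
To put that threshold in context using the values from the DGA Detection post above: google.com scored roughly 2.65 while moviepaidinfullsexy.kz scored roughly 4.06, so a threshold of 4 would flag the latter but not the former; the right value will depend on the domains normally seen in your own environment.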

 

I have attached the Java used for calculating Shannon's entropy, should anyone be interested.

 

DISCLAIMER: This is by no means a foolproof detection method for malicious traffic. The information is here to show the capabilities of the product and avenues of exploration to help thwart the adversary. This content is provided as-is with no RSA direct support; use it at your own risk. Additionally, you should always confirm architecture state before running content that could impact the performance of your SA architecture.
