Lee Kirkpatrick

Web Shells and NetWitness Part 2

Blog Post created by Lee Kirkpatrick on Feb 13, 2019

Following on from the previous blog, Web Shells and RSA NetWitness, the attacker has since moved laterally. Using one of the previously uploaded Web Shells, the attacker confirms permissions by running whoami, and checks the running processes using tasklist. Attackers, like most individuals, are creatures of habit:


The attacker also executes a quser command to see if any users are currently logged in, and notices that an RDP session is active:


The attacker executes a netstat command to see where the RDP session has been initiated from and finds the associated connection:


The attacker pivots to her Kali Linux machine and sets up a DNS Shell, which will allow her to establish C&C on the new machine she has just discovered:


The attacker moves laterally using WMI, and executes the encoded PowerShell command to set up the DNS C&C:


The DNS Shell is now set up and the attacker can begin to execute commands, such as whoami, on the new machine through the DNS Shell:


Subsequently, as the attacker likes to do, she also runs a tasklist through the DNS Shell:


Finally, the attacker checks whether the host has internet access by pinging www.google.com:


As the attacker has confirmed internet access, she decides to download Mimikatz using a PowerShell command:


The attacker then performs a dir command to check if Mimikatz was successfully downloaded:


From here, the attacker can dump credentials from this machine, continue to move laterally around the organisation, and pull down new tools to achieve her goal(s). She has also set up a failover (the DNS Shell) in case the Web Shells are discovered and subsequently removed.





Since the previous post, the analyst has upgraded their system to NetWitness 11.3 and deployed the new agents to their endpoints. The tracking data now appears in the NetWitness UI, so the following analysis takes place solely in the 11.3 UI.


Tracking Data

Upon perusing the metadata, the analyst uncovers reconnaissance commands, whoami.exe and tasklist.exe, being executed on two of their endpoints:


Refocusing their investigation on those two endpoints, and exposing the Behaviours of Compromise (BOC) meta key, the analyst uncovers several suspect indicators of a potential compromise: creates remote process using wmi command-line tool, http daemon runs command shell, and runs powershell using encoded command, to name a few:


Pivoting into the sessions related to creates remote process using wmi command-line tool, the analyst observes the Tomcat Web Server performing WMI lateral movement against a remote machine:


The new 11.3 version stores the entire Encoded PowerShell command and performs no truncation:


This allows the analyst to perform Base64 decoding directly within the UI using the new Base64 decode function (NOTE: the squares in between each character are due to double byte encoding and not a byproduct of NetWitness decoding):
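Those squares can be reproduced outside the UI. PowerShell's -EncodedCommand flag expects Base64 over UTF-16LE text, so every ASCII character is followed by a NUL byte, which renders as a square when the raw bytes are displayed. A minimal sketch (the whoami command here is only an illustration, not the actual payload from this attack):

```python
import base64

# PowerShell's -EncodedCommand expects Base64-encoded UTF-16LE text.
# "whoami" is an illustrative command, not the payload from this attack.
command = "whoami"
encoded = base64.b64encode(command.encode("utf-16-le")).decode("ascii")
print(encoded)  # dwBoAG8AYQBtAGkA

# Decoding the Base64 yields interleaved NUL bytes (the "squares"):
raw = base64.b64decode(encoded)
print(raw)                       # b'w\x00h\x00o\x00a\x00m\x00i\x00'
print(raw.decode("utf-16-le"))   # whoami
```

Decoding with a UTF-16LE-aware tool, as the NetWitness UI effectively does, collapses those byte pairs back into the original command.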



Navigating back to the metadata view, the analyst opens the Indicators of Compromise (IOC) meta key and observes the metadata value drops credential dumping library:


Pivoting into those sessions, the analyst sees that Mimikatz was dropped onto the machine previously involved in the WMI lateral movement:


Packet Data

The analyst is also looking into the packet data, searching through DNS after noticing an increase over the volume of traffic they typically see. Upon opening the SLD (Second Level Domain) meta key, the culprit of the increase is shown:


Focusing the search on the offending SLD, and expanding the Hostname Alias Record (alias.host) meta key, the analyst observes a large number of suspicious unique FQDNs:


This behaviour is indicative of a DNS tunnel. Focusing on the DNS Response Text meta key, it is also possible to see the commands that were being executed:


We can further substantiate that this is a DNS tunnel by using a tool such as CyberChef. Taking the characters after cmd in the FQDN and hex decoding them reveals that data is being hex encoded into the FQDN itself; because of the restriction on how much data a single DNS query can carry, the data is sent in chunks and reconstructed on the attacker's side:



ESA Rule

DNS-based C&C is noisy because only a finite amount of information can be sent in each DNS packet, so returning information from the infected endpoint requires a large amount of DNS traffic. The DNS requests also need to be unique, so that they are not answered from the local DNS cache or by internal DNS servers. Due to this high level of noise, and the variance in the FQDNs, it is possible to create an ESA rule that looks for DNS C&C with a high degree of fidelity.

The ESA rule attached to this blog post calculates the ratio of sessions to unique alias host values for a single Second Level Domain (SLD): we count the number of sessions toward the SLD and divide that by the number of unique alias hosts for that SLD:


  • SLD Session Count ÷ Unique Alias Host Count = ratio
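As a rough sketch of this logic (the sample sessions below are invented, and the real rule operates on NetWitness meta in EPL, not Python):

```python
from collections import defaultdict

# Invented sample sessions: (alias.host, sld). A tunnel generates a unique
# FQDN per query; legitimate traffic repeats the same few FQDNs.
sessions = [
    ("0a1b.cmd.tunnel-sld.com", "tunnel-sld.com"),  # every FQDN unique
    ("2c3d.cmd.tunnel-sld.com", "tunnel-sld.com"),
    ("4e5f.cmd.tunnel-sld.com", "tunnel-sld.com"),
    ("www.example.com", "example.com"),             # same FQDN repeated
    ("www.example.com", "example.com"),
]

stats = defaultdict(lambda: {"sessions": 0, "hosts": set()})
for alias_host, sld in sessions:
    stats[sld]["sessions"] += 1
    stats[sld]["hosts"].add(alias_host)

for sld, s in stats.items():
    ratio = s["sessions"] / len(s["hosts"])
    print(sld, ratio)  # tunnel-sld.com 1.0, example.com 2.0
```

A tunnel trends toward a ratio of 1.0 (every session carries a new FQDN), while normal traffic toward a domain reuses hostnames and so scores higher.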


The lower the ratio, the more likely this is to be a DNS tunnel, due to the high connection count and the variance in the FQDNs toward a single SLD. The below screenshot shows the output of this rule, which triggered on the SLD shown in the analysis section of this blog post:



NOTE: Legitimate products, such as McAfee, ESET, and TrendMicro, also perform DNS tunnelling. These domains would need to be filtered out based on what you observe in your environment. The filtering option for domains is at the top of the ESA rule.


The rule for import, and the pure EPL code in a text file, are attached to this blog post.

IMPORTANT: SLD needs to be set as an array for the rule to work.




This blog post further demonstrates the TTPs (Tactics, Techniques, and Procedures) attackers may utilise in a compromise to achieve their end goal(s). It demonstrates the necessity for proactive threat hunting, as well as the need for both Packet and Endpoint visibility to succeed in said hunting. It also shows that certain aspects of hunting can be automated, but only after the attack itself is fully understood; this is not to say that all threat hunting can be automated, as a human element is always needed to confirm whether something is definitely malicious, but automation can minimise some of the work the analyst needs to do.

This blog also focused on the new 11.3 UI, which allows analysts to easily switch between packet data and endpoint data in a single pane of glass, increasing the efficiency and detection capabilities of both the analysts and the platform itself.