

The Australian Signals Directorate (ASD) & US National Security Agency (NSA) have jointly released a useful guide for detecting and preventing web shell malware. If you haven't seen it yet, you can find it here:

The guide includes some sample queries to run in Splunk to help detect potential web shell traffic by analysing IIS and Apache web logs. “That’s great, but how can we do the same search in NetWitness Logs?” I hear you ask! Let’s take a look.

Web Server Logging

If you are already collecting IIS and Apache logs – or any web server audit logs, for that matter – you have probably already adjusted your configuration to capture the data you want. To run the queries suggested by the guide, we need to change the default log parser settings for IIS and Apache. By default, the URI field is not saved as meta that we can query: it is parsed at capture time and available as transient meta for evaluation by feeds, parsers, and app rules, but it is not written to disk as meta. To collect the data needed to run these queries, we are going to change the setting for the meta from “Transient” to “None”.

For more information on how RSA NetWitness generates and manages meta, go here: Customize the meta framework 

The IIS and Apache log parsers both parse the URI field from the logs into a meta key named webpage. The table-map.xml file on the Log Decoder shows that this meta value is set to “Transient”.

To change the way this meta is handled, copy the line from table-map.xml, paste it into table-map-custom.xml, and change the flags="Transient" setting to flags="None":

<mapping envisionName="webpage" nwName="" flags="None" format="Text"/>

Hit apply, then restart the log decoder service for the change to take effect. Remember to push the change to all Log Decoders in your environment.

Next, we want to tell the Concentrator how to handle this meta. Go to your index-concentrator-custom.xml file and add an entry for this new meta key:

<key description="URI" format="Text" level="IndexValues" name="" defaultAction="Closed" valueMax="10000" />

I set the display name for the key as URI – but you can set it to whatever makes sense for you. I also set a maximum value count of 10,000 for the key - you should use a value that makes sense for your website(s) and environment and review for any meta overflow errors.

Hit apply, then restart the concentrator service for the change to take effect. Remember to push the change to all Concentrators in your environment (Log & Network), especially if you use a Broker.

Now as you collect your web logs, the meta key will be populated:

You may also want to change the index level for the referer key. By default it is set to IndexKey, which means a query that tests if a referer exists or doesn’t exist will return quickly, but a search for a particular referer value will be slow. If you find yourself doing a lot of searches for specific referers you can change this setting to IndexValues as well.
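If you do make that change, the entry in index-concentrator-custom.xml would look something like the following (a sketch only – the valueMax here is a placeholder; size it for your environment and watch for meta overflow errors):

```xml
<key description="Referer" format="Text" level="IndexValues" name="referer" defaultAction="Closed" valueMax="100000" />
```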

Optionally, you can add the meta key to a meta group & column group so you can keep track of it in Navigate & Events views. I’ve attached a copy of my Web Logs Analysis meta group and column group to the end of this post.

Now we are ready for the queries themselves. While at first glance they seem pretty complicated, they really aren’t. Plus with the way NetWitness parses the data into a common taxonomy, you don’t need different queries for IIS & Apache – the same query will work for both!

Query 1 – Identify URIs accessed by few user agents and IP addresses

For this query, we need to use the countdistinct aggregation function to count how many different user agents and how many different IP addresses accessed the pages on our website.

For more information on NWDB query syntax, go here: Rule Syntax 
SELECT, countdistinct(user.agent), countdistinct(ip.src)
WHERE device.class = 'web logs' && result.code begins '2'
ORDER BY countdistinct(user.agent) Ascending
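To make the aggregation concrete, here is a hedged Python sketch of what the countdistinct grouping does over a handful of hypothetical web-log events (the records and values below are invented for illustration):

```python
from collections import defaultdict

# Hypothetical parsed web-log events: (uri, user_agent, src_ip, result_code)
events = [
    ("/index.html", "Mozilla/5.0", "10.0.0.1", "200"),
    ("/index.html", "Chrome/80", "10.0.0.2", "200"),
    ("/shell.aspx", "python-requests/2.22", "198.51.100.7", "200"),
    ("/login", "Mozilla/5.0", "10.0.0.3", "302"),  # non-2xx, excluded by the WHERE clause
]

agents = defaultdict(set)   # uri -> distinct user agents
ips = defaultdict(set)      # uri -> distinct source IPs
for uri, ua, ip, code in events:
    if code.startswith("2"):            # result.code begins '2'
        agents[uri].add(ua)
        ips[uri].add(ip)

# Sort ascending by distinct user-agent count: rarely-accessed URIs surface first
for uri in sorted(agents, key=lambda u: len(agents[u])):
    print(uri, len(agents[uri]), len(ips[uri]))
```

A web shell tends to be touched by very few user agents and source IPs, so it floats to the top of this ordering.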

Query 2 – Identify user agents uncommon for a target web server

This query simply shows the number of times each user agent accesses our web server. We can see this very easily by just using the Navigate interface and setting the result order to Ascending:

Here is the query to use in the report engine rule:

SELECT user.agent
WHERE device.class = 'web logs' && result.code begins '2'
GROUP BY user.agent
ORDER BY Total Ascending

Query 3 – Identify URIs with an uncommon HTTP referrer

This query is a bit more complicated – we want to show referrers that do not access many URIs, but also want to see how often they access each URI. This query could need some tuning if you have pages on your site that are typically only accessed by following a link from a previous page, or even an image file that is only loaded by a single page.

Our select statement will list the referer, followed by the number of URIs that referer is used for (sorted ascending – we’re interested in uncommon referers), then the URIs where it is seen as the referer, followed by the number of hits (sorted descending) – a URI that is accessed many times via an uncommon referer stands out for investigation.

SELECT referer, countdistinct(, distinct(, count(referer)
WHERE device.class = 'web logs' && result.code begins '2'
GROUP BY referer
ORDER BY countdistinct( Ascending, count(referer) Descending

Query 4 – Identify URIs missing an HTTP referrer

This is an easy one to finish off – we’re interested in events where there is no referer present. To refine the results, we want to filter out events that hit the base of the site ‘/’, as this could easily be someone typing the URL directly into their browser.

WHERE device.class = 'web logs' && (referer !exists || referer = '-') && != '/' && result.code begins '2'
ORDER BY Total Descending

These rules and a report that includes the rules can be found in the attached files.


Let me know in the comments below how these queries work in your environment, and if you have suggestions for improvements. The goal of this post was to quickly convert the queries included in the guide published by ASD & NSA. Stay tuned for more posts that show how we can improve the fidelity of these queries, and also how to utilise the endpoint and network indicators also found in the ASD & NSA guide.


Happy Hunting!

Shout out to @Casey Switzer, @Josh Randall & @Larry Hammond.  Without their help, the lab, configuration and operational considerations would not be possible.


Last year in RSA NetWitness 11.3, a new integration was introduced to allow NetWitness to integrate with RSA SecurID to populate high risk users from incidents in Respond.


@Josh Randall covered this in his blog post here: Examining Threat Aware Authentication in v11.3


At the time, SecurID could only add a user to the list based on an email address.  While this is good for email based alerts, the majority of Linux and Windows logs do not contain that value.


An easy workaround for this is to configure a recurring feed (see Decoder: Create a Custom Feed) that maps sAMAccountName to email address. A simple PowerShell script to export sAMAccountName & email address should suffice. When you create an incident based on sAMAccountName, the email address is then present in the session's metadata, allowing the Threat Aware Authentication integration to work. I used several callback keys to ensure I covered the various conditions to capture the username.
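The post suggests PowerShell for the export; purely as an illustration of the feed file's shape, here is a hedged Python sketch that writes the two-column CSV (the account names, addresses, and filename are invented):

```python
import csv

# Invented AD export: one row per user (sAMAccountName, mail)
ad_users = [
    ("bcline", "brett.cline@lab.internal"),
    ("jsmith", "john.smith@lab.internal"),
]

# First column is the lookup value the feed matches on (the username meta);
# the second supplies the email address meta the Respond integration needs.
with open("ad_user_email.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerows(ad_users)
```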


 AdUserEmailAddress Feed


Once this feed is live, you will see email.src & email.all metadata on any event containing one of the meta keys above. In this case it was a failed logon:

Email Meta


As of April 2020, RSA SecurID accepts either an email address or a username for Threat Aware Authentication. To support this, NetWitness 11.4.1 introduced a Respond configuration option to choose which field is sent to SecurID. See Respond Config: Configure Threat Aware Authentication for more information.


This makes using ad_username a great option; however, when you choose that value, you lose the email_address integration. A way around this is to build the inverse of the earlier feed to ensure the email address field is present in your sessions. For this blog, we will continue to use the existing feed and send email_address to SecurID. I set my synchronization to 1 minute, but the default setting is 15 minutes.


Threat Aware Authentication Settings

Within the RSA SecurID Cloud Access Service, you will need to configure your Assurance Levels and  Risk-Based Authentication policies.  I set my Assurance Levels to require Device Biometrics for High Assurance, Approve for Medium and allow at a Low level.  I set a simple policy which will be used for the SAML test.

Assurance Levels

Assurance Levels



 Threat Aware Policy


Rule set

Threat Aware Rules

We have a test user which will be used to demonstrate Threat Aware Authentication. As you can see, Brett Cline is synchronized from lab.internal and is currently a low-risk user.

Low Risk User


When Brett navigates to an app, he is presented with a logon screen with his password:

Test App 

Since he is low risk, after a successful authentication with User ID and password, he is now logged in to the demo app.

App Success


We created a simple ESA rule to alert on three failed logins (ec_activity = 'Logon' and ec_outcome = 'Failure', 3x within 3 minutes) and a corresponding Incident Rule to group these alerts and create a meaningful title.
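The windowing logic behind that rule (a sketch of the concept, not ESA's actual EPL syntax) looks like this – the threshold and window mirror the 3x-within-3-minutes condition:

```python
from collections import defaultdict, deque

WINDOW = 180    # seconds (3 minutes)
THRESHOLD = 3   # failed logons

failures = defaultdict(deque)  # user -> timestamps of recent failures

def failed_logon(user, ts):
    """Record a failure; return True once THRESHOLD failures land within WINDOW."""
    q = failures[user]
    q.append(ts)
    while q and ts - q[0] > WINDOW:   # drop failures older than the window
        q.popleft()
    return len(q) >= THRESHOLD

assert failed_logon("bcline", 0) is False
assert failed_logon("bcline", 60) is False
assert failed_logon("bcline", 120) is True  # third failure within 3 minutes
```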

Threat Aware Incident Rule


We simulated a few failed logins to create an incident:

Threat Aware Incident


Back in the SecurID Cloud Authentication Service, you can see that Brett has been added to the high risk users list.

Test User High Risk



Now when he logs into the app, he will be prompted for his userid/password.

But due to being on the high risk users list, he will be required to approve via biometrics on his phone as per the policy set above:


Which will then lead to the successful authentication.

*** Note: The user will remain on the high risk users list until the incident is closed. ***


Additional Information:


If you are collecting logs from the Cloud Authentication Service, you will see the following meta keys:

And here is the corresponding event: 

Operational Thoughts:

Thanks to @Larry Hammond for some insight into operational considerations. He and I spoke about how NetWitness has traditionally been a passive device that cannot (and should not) interfere with your network or operations. With the addition of Threat Aware Authentication, a poorly crafted rule could force step-up authentication on many users, which could disrupt the business. Good rule-building practices should be followed, and rules should be tested before alerts are enabled.


This was the reasoning behind creating meaningful alerts in ESA to ensure the NetWitness admins have a view of the incidents which resulted in adding someone to the high risk users.

Although the RSA NetWitness platform gives administrators visibility into system metrics through the Health & Wellness Systems Stats Browser, we currently do not have a method to see all storage / retention across our deployment in a single instance or view.


Below you will find several scripts that will help us gain this visibility quickly and easily.


Update: Please grab the latest version of the script, some bugs were discovered that were fixed.


How It Works:


1. Dependency (attached): both v10 and v11 versions are provided; use the one for your environment. Run this script prior to running the retention script, as the latter requires the 'all-systems' file, which contains all of your appliances & services.

2. We then read through the all-systems file and look for services that have retention e.g. EndpointLogHybrid, EndpointHybrid, LogHybrid, LogDecoder, Decoder, Concentrator, Archiver.

3. Finally, we use the 'tlogin' functionality of NwConsole for cert-based authentication – so there is no need to run the script with a username/password as input – to pull database statistics and output the retention (in days) for each service.
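The retention figure itself is simple arithmetic over a service's database stats – a sketch, with invented timestamps, of how days of retention can be derived from the oldest and newest session times:

```python
from datetime import datetime

# Invented oldest/newest session times, as a service's database stats might report
oldest = datetime(2020, 1, 1, 0, 0, 0)
newest = datetime(2020, 4, 10, 12, 0, 0)

# Whole days between the oldest and newest data held by the service
retention_days = (newest - oldest).days
print(retention_days)  # 100
```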




1. Run ./ (for 10.x systems) or ./ (for 11.x systems)

    NOTE: Make sure to grab the 11.4 version of the backup scripts if you are running NetWitness 11.4+

2. Run ./  (without any arguments). This MUST be run from Puppetmaster (v10) or Node0 (v11).


Sample Run: 


Please feel free to provide feedback, bug reports etc...


Several changes have been made to the Threat Detection Content in Live. For added detection, you need to deploy/download and subscribe to the content via Live; retired content must be removed manually.

Detailed configuration procedures for getting RSA NetWitness Platform setup - Content Quick Start Guide 



RSA NetWitness Lua Parsers:

  • fingerprint_certificate Options - Optional parameters are added to alter the behavior of the fingerprint_certificate parser.
  • fingerprint_minidump - Detects Windows Minidump files. Meta will be output as filetype - 'minidump'. This parser will also detect minidump files containing lsass memory and output meta as ioc – 'lsass minidump'

Using RSA NetWitness to Detect Credential Harvesting: lsassy 


More information about Packet Parsers:


RSA NetWitness Application Rules:

The following app rules have been added to the Endpoint Content pack for RSA NetWitness 11.4 investigation and alerting –

  • Autorun Invalid Signature Windows Directory
  • Autorun Unsigned Hidden Only Executable In Directory
  • Autorun Unsigned winlogon helper DLL
  • Browser Runs Command Prompt
  • Command Line Writes Script Files
  • Command Prompt Obfuscation
  • Command Prompt Obfuscation Using Value Extraction
  • Command Shell Copy Items
  • Command Shell Runs Rundll32
  • Evasive Powershell Used Over Network
  • Explorer Public Folder DLL Load
  • Hidden and Hooking
  • Lateral Movement with Credentials Using Net Utility
  • OS Process Runs Command Shell
  • Outbound from Unsigned AppData Directory
  • Outbound from Windows Directory
  • Outbound Unsigned Temporary Directory
  • Potential Outlook Exploit
  • Powershell Double Base64
  • Process Redirects to STDOUT or STDERR
  • RDP Launching Loopback Address
  • Remote Directory Traversal
  • RPM Ownership Changed
  • RPM Permissions Changed
  • Unsigned Creates Remote Thread And File Hidden
  • Unsigned Library in Suspicious Daemon
  • Unsigned Opens LSASS
  • WMIC Remote Node Activity
  • Multiple Psexec Within Short Time


More information about NetWitness 11.4 New Features and Alerting: ESA Rule Types 




RSA NetWitness Lua Parsers:

  • china_chopper – Functionality has been added to detect new variants of China Chopper.
  • DCERPC – The parser now supports NTLM authentication along with Kerberos, and will now extract authentication meta from both Kerberos and NTLM

Using the RSA NetWitness Platform to Detect Lateral Movement: SCShell (DCE/RPC) 

  • DynDNS – Parser is updated with improved detection with addition of new dynamic DNS domains detected by RSA Incident Response. 

Read more about threat hunting/investigation using DynDNS parser What's updog? 

  • fingerprint_certificate - This parser is updated for efficiency improvements as well as added detection with more customization using options file.
  • HTTP_lua – Updated for accuracy and efficiency.
  • SMB_lua – Functionality has been added to support SMBv3.
  • MAIL_lua – Updated for accuracy and efficiency.
  • TLS_lua - Added a new option to TLS_lua to limit examination of sessions to only the ports specified in the option. If enabled, ports not listed will not be parsed by TLS_lua and thus will not be identified as service 443. This will reduce the workload of TLS_lua by eliminating identification of SSL/TLS sessions on unknown ports.

Read more about SSL and NetWitness 

  • SSH_lua - The SSH_lua parser now includes SSH versions for both server and client, providing better insight during investigation.
  • windows_command_shell_lua – Updates are made to base64 encoded command detections along with new commands.
  • xor_executable_lua – Improved detection of more XOR'd executables by adding detection of XOR'd MZ headers.


RSA NetWitness Application Rules:

The following app rules have been updated in the Endpoint Content pack for 11.4 investigation and alerting –

  • Office application injects remote process
  • Office Application Runs Scripting Engine
  • Creates Remote Service


RSA NetWitness Bundles:

The Endpoint Pack has been updated with new and updated content to support alerting for NetWitness Endpoint 11.4 and higher.

Refer to Endpoint Content for detailed information about the content pack and its configuration.


More content has been tagged with MITRE ATT&CK™ metadata for better coverage and improved detection.

For detailed information about MITRE ATT&CK™:

RSA Threat Content mapping with MITRE ATT&CK™ 

Manifesting MITRE ATT&CK™ Metadata in RSA NetWitness 




We strive to provide timely and accurate detection of threats as well as traits that can help analysts hunt through network and log data. Occasionally this means retiring content that provides little-to-no value.

List of Discontinued Content 


RSA NetWitness Application Rules:

  • php put with 40x error – Marked discontinued due to performance-to-value tradeoff.
  • php botnet beaconing w - Retiring this rule as it provides little-to-no value; PHP beaconing has evolved and uses different patterns.
  • Windows NTLM Network Logon Successful - Retiring as improved application rule for ‘Pass the Hash’ has been created.



For additional documentation, downloads, and more, visit the RSA NetWitness Platform page on RSA Link.


EOPS Policy:

RSA has a defined End of Primary Support policy associated with all major versions. Please refer to the Product Version Life Cycle for additional details.

22APR2020 - UPDATE: Naushad Kasu has posted a video blog of this process and I have posted the template.xml and NweAgentPolicyDetails_x64.exe files from his blog here.


08APR2020 - UPDATE: adding a couple notes and example typespecs after some additional experimenting over the past week

  • You may find the process easier to simply copy an existing 11.4 typespec in the /var/netwitness/source-server/content/collection/file directory on the Admin Server and modify it for the custom collection source you need
  • example using IIS typespec:
    • comparison of the XML from the Log Collector/Log Decoder to the version I created on the Admin Server

  • another example using a custom typespec to collect Endpoint v4.4 (A.K.A. legacy ECAT) server logs
    • two different typespecs to collect the exact same set of logs, we can see exactly how the values in the typespec affect the raw log that ultimately gets ingested by NetWitness




The NetWitness 11.4 release included a number of features and enhancements for NetWitness Endpoint, one of which was the ability to collect flat file logs, with the intent that this collection method would allow organizations to replace existing SFTP agents with the Endpoint Agent.


Flat file collection via the 11.4 Endpoint agent allows for much easier management compared to the SFTP agent, in addition to the multitude of investigative and forensic benefits available with both the free and advanced versions of the Endpoint agent (NetWitness Endpoint User Guide for NetWitness Platform 11.x - Table of Contents).


The 11.4 release included a number of OOTB, supported Flat File collection sources, with support for additional OOTB, as well as custom, sources planned for future releases.  However, because I am both impatient and willing to experiment in my lab where there are zero consequences if I break something, I decided to see whether I could port my existing, custom SFTP-based flat file collections to the new 11.4 Endpoint collection.


The process ended up being quite simple and easy.  Assuming you already have your Endpoint Server installed and configured, as well as custom flat file typespecs and parsers that you are using, all you need to do is:

  1. install an 11.4+ endpoint agent onto the host(s) that have the flat file logs
  2. ...then copy the custom typespec from the Log Decoder/Log Collector filesystem (/etc/netwitness/ng/logcollection/content/collection/file)
  3. the Node0/Admin Server filesystem (/var/netwitness/source-server/content/collection/file)
    1. ...if your typespec does not already include a <defaults.filePath> element in the XML, go ahead and add one (you can modify the path later in the UI)
    2. ...for example: 
  4. ...after your typespec is copied (and modified as necessary), restart the source-server on the Node0/Admin Server
  5. open the NetWitness UI and navigate to Admin/Endpoint Sources and create a new (or modify an existing) Agent File Logs policy (more details and instructions on that here: Endpoint Config: About Endpoint Sources)
    1. ...find your custom Flat File log source in the dropdown and add it to the Endpoint Policy
    2. ...modify the Log File Path, if necessary:
    3. ...then simply publish your newly modified policy
  6. ...and once you have confirmed collection via the Endpoint Agent, you can stop the SFTP agent on the log source.



And that's it.  Happy logging.

The Maze ransomware has recently been making the news due to some high-profile infections. In addition to requesting, in some instances, ransoms of 6+ million USD to regain access to the files, the group behind the malware has also leaked some of these files if the ransom was not paid.


In this post, we will look at the detected behaviors and IOCs from the Maze ransomware as identified by RSA NetWitness Endpoint and Network.


The following is the malware sample tested within this post.

SHA256: fc611f9d09f645f31c4a77a27b6e6b1aec74db916d0712bef5bce052d12c971f




Execution of Maze

When the victim gets infected, he will first notice that some of his open applications, such as Word and Excel, are closed. After some time, once the execution of the ransomware has completed, the user’s background is changed as seen in the screenshot below, instructing the victim to pay the ransom.




The victim will also notice a new text file in his folder (which is opened automatically at reboot). The file provides detailed instructions on how to make the payment.






RSA NetWitness Endpoint


By leveraging RSA NetWitness Endpoint, we can look at the behavior of the malware on the victim’s machine.

If we first look at the overall details for that specific workstation, we can see:


  • An elevated overall risk score (93)
  • Some specific suspicious behaviors, such as
    • Deletes Shadow Volume Copies: this is a typical ransomware technique to stop the victim from restoring his files
    • Run/Writes Malicious File by Reputation Service: the ransomware itself has a known malicious hash value
    • Floating Module: might be loading DLLs in memory



By going to the list of processes, we can see the “maze.exe” file (the filename could be different) with a risk score of 76 based on its behavior on the system, and with a known reputation of “Malicious” based on the file hash value.



If we then look at the loaded libraries, we can see that in fact, the ransomware has loaded a DLL in memory:



If we then look at the files to run at startup, we can see that the text files have been added to the startup folders, to get automatically opened at startup and display the payment instructions for the user:



If we finally look at the overall behavior of the ransomware on the system:

  1. The ransomware is executed
  2. It closes Excel
  3. It loads the DLL in memory
  4. It communicates over the network with multiple public IP addresses (more details in the RSA NetWitness Network part)
  5. It deletes the shadow copies
  6. The multiple readDocument actions show the ransomware encrypting all of the user’s documents





RSA NetWitness Network

By leveraging RSA NetWitness Network, we can then look at the ransomware’s behavior from the network’s perspective. From the Endpoint side, we have already confirmed that the ransomware initiated connections to the Internet.


By filtering on outbound traffic over HTTP, we can identify multiple suspicious behaviors.



  • Based on the user agent, the tool used to generate those sessions advertises itself as IE 11 on Windows 7 (this doesn’t HAVE to be true). Being IE11 would suggest these connections come from a human/browser, and not from a tool/script/application…
  • Direct to IP connections, without a hostname. Even though this can be normal (specially when done to private IP addresses within the local network), it is more suspicious when done over the Internet as it is unlikely for a user to remember public addresses and directly input them in the browser’s address bar (which would be what the tool wants us to believe as it advertised itself as IE11).

  • The lack of a referrer header. This header usually contains the previous page that linked to the current one. Especially when dealing with direct-to-IP requests, a referrer would be expected: a user making such a request by following a link is far more plausible than a user typing public IP addresses directly.

  • HTTP POST methods without GETs. This is also suspicious behavior for HTTP sessions initiated by a human/browser. Typically, for a user to “POST” data to a website, he first needs to request and “GET” a webpage that includes a form. Directly posting data is unusual for a human and is usually expected only from tools/applications/APIs…



We can then go to the session reconstruction view to look in more details at one of those sessions.


By reconstructing the session, we can:

  • Identify again the user-agent, which can be used as an IOC to identify other infected machines
  • The “Host” field containing an IP address instead of the expected hostname
  • Missing expected headers, such as a referrer
  • The Entropy meta (between 0-10,000) showing a high entropy level for the request. Entropy allows us to do statistical analysis on the payload and assess how randomized it is. Low entropy would indicate clear-text content, while high-entropy would indicate encrypted content. An encoded payload would be somewhere in between. When using HTTP, which is a clear-text protocol, we would expect either clear-text, or in some cases encoded payloads. A user, through a (supposed) browser connection, wouldn’t be expected to post highly random/encrypted payloads.
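As a rough illustration of the entropy measure, here is a hedged sketch of how a 0–10,000 value can be derived from a payload's byte distribution (the scaling from Shannon entropy's 0–8 bits per byte is an assumption for illustration, not necessarily NetWitness's exact formula):

```python
import math
from collections import Counter

def payload_entropy(data: bytes) -> int:
    """Shannon entropy of a payload, scaled to 0-10,000 (8 bits/byte -> 10,000).
    This scaling is an assumption, not NetWitness's exact formula."""
    if not data:
        return 0
    n = len(data)
    h = -sum(c / n * math.log2(c / n) for c in Counter(data).values())
    return round(h / 8 * 10000)

print(payload_entropy(b"A" * 1000))             # 0: a single repeated byte
print(payload_entropy(bytes(range(256)) * 10))  # 10000: uniformly distributed bytes
```

Clear text lands low, encoded payloads somewhere in between, and encrypted content near the top of the range.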


A combination of these different indicators does lead to identifying these suspicious network sessions initiated by the ransomware, including:

  • Direct to IP requests
  • Missing headers (referrer)
  • Post without Get HTTP methods
  • High entropy for a clear-text protocol





Indicators of Compromise

Below are some IOCs that could be used in RSA NetWitness Network and Endpoint to identify potential Maze infections in your environment. It should be noted that these are based on the specific variant tested as part of this post and may differ across variants. It is usually recommended to leverage behaviors and techniques instead of specific signatures, such as those discussed in the RSA NetWitness Network and Endpoint sections of this post, which helps withstand changes in specific signatures.


File Hash

MD5: e69a8eb94f65480980deaf1ff5a431a6

SHA-1: dcd2ab4540bde88f58dec8e8c243e303ec4bdd87

SHA-256: fc611f9d09f645f31c4a77a27b6e6b1aec74db916d0712bef5bce052d12c971f


IP Addresses


Domain Names (the malware doesn’t initiate connections there, but this is where the victim needs to go to for the payment/more info)




Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; AS; rv:11.0) like Gecko



Josh Randall

Easy-add Recurring Feeds

Posted by Josh Randall Employee Apr 16, 2020

16APR2020 Update

  • adding a modified script for NetWitness environments at-or-above version 11.4.1 (due to JDK 11)
  • renaming the original script to indicate recommended use in pre-11.4.1 NetWitness environments


19DEC2019 Update (with props to Leonard Chvilicek for pointing out several issues with the original script)

  • implemented more accurate java version & path detection for JDK variable
  • implemented 30 second timeout on s_client command
  • implemented additional check on address of hosting server
  • implemented more accurate keystore import error check
  • script will show additional URLs for certs with Subject Alternate Names


In the past, I've seen a number of people ask how to enable a recurring feed from a hosting server that is using SSL/TLS, particularly when attempting to add a recurring feed hosted on the NetWitness Node0 server itself.  The issue presents itself as a "Failed to access file" error message, which can be especially frustrating when you know your CSV file is there on the server, and you've double- and triple-checked that you have the correct URL:


There are a number of blogs and KBs that cover this topic in varying degrees of detail:



Since all the steps required to enable a recurring feed from a SSL/TLS-protected server are done via CLI (apart from the Feed Wizard in the UI), I figured it would be a good idea to put together a script that would just do everything - minus a couple requests for user input and (y/N) prompts - automatically.


The goal here is that this will quickly and easily allow people to add new recurring feeds from any server that is hosting a CSV:



Interested in having a central, single-pane-of-glass view across your cloud, on-prem and virtual infrastructure? Then, without a shadow of a doubt, the RSA NetWitness real-time dashboards and charts will come into play.


The attached dashboards, charts and RE rules will help you get real-time monitoring of what really matters across the mentioned technologies and log sources.


The snapshots below show what you would ultimately see after importing the attached content into your NW 11.3+ reporting engine and dashboards:

(assuming you have successfully integrated those log sources and parsed their logs into the meta keys that allow the dashboards below to be populated with the relevant information)



Qualys Vulnerability Scanner Dashboard

A new C2 framework called Ninja was recently added to the C2 Matrix. It was built on top of the leaked MuddyC3 framework used by an Iranian APT group called MuddyWater. It can be run on Windows, Linux, macOS, and FreeBSD; the platform is built for speed, is highly malleable, and is feature rich. As usual, this blog post will cover detecting its usage with NetWitness Network and NetWitness Endpoint.


The Attack

Ninja creates a variety of payloads for you upon execution. In this instance, we just chose one of the PowerShell payloads and executed it on the victim endpoint:


A few seconds later, we see our successful connection back to Ninja, whereby a second stage payload is sent along, as well as information about the victim endpoint:


We can see the information sent back from one of the initial HTTP POSTs by listing the agents:


Now we can change our focus to the agent, and start to execute commands against the endpoint:


The Detection Using NetWitness Network

Ninja C2 works over HTTP and currently has no direct support for SSL. This is in an attempt to blend in with the large quantities of HTTP traffic typically already present in an environment: the best place to hide a leaf is in the forest.


Ninja exhibits a somewhat large number of anomalies in the HTTP requests it makes to and from the C2; a few of these are highlighted below:

NOTE: While plenty of applications (ab)use the HTTP protocol, focusing on characteristics of more mechanical-type behaviour can lead us to sessions that are more worthy of investigation.


Another interesting element of Ninja is that a unique five-character ID is generated for each agent. All requests to or from that agent then take the form "AgentId-img.jpeg", so from the below we can tell that two agents are communicating with Ninja. You'll also notice that the requests are for JPEG images, but none are actually returned. We can tell this because the File Type meta key is populated by a parser looking for the magic bytes of files, and it found no evidence of a JPEG in these sessions:
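The magic-bytes check the parser performs can be illustrated with a short sketch (the function name is ours; the decoder's actual file-type parser is more involved):

```python
def looks_like_jpeg(body: bytes) -> bool:
    # A real JPEG body starts with the magic bytes FF D8 FF;
    # Ninja's "*-img.jpeg" responses carry no such marker.
    return body[:3] == b"\xff\xd8\xff"

print(looks_like_jpeg(b"\xff\xd8\xff\xe0rest-of-image"))  # → True
print(looks_like_jpeg(b"<html>not an image</html>"))      # → False
```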



Another interesting artefact of Ninja is that it returns encrypted commands in GET requests, and the associated encrypted responses in POST requests; these can be seen under the Querystring meta key. The initial HTTP POST, however, contains information about the system and is sent in the clear, delimited by **:


Drilling into the Events view for the Ninja traffic, we can also see a defined beaconing pattern (we set this to two minutes upon setting up Ninja), as well as the fact that the beacons typically all have the same payload size:
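Beacon regularity like this can also be scored offline. A minimal sketch (our own heuristic, not a NetWitness feature) computes the coefficient of variation of session inter-arrival times for one source/destination pair; values near zero indicate machine-like beaconing:

```python
from statistics import mean, pstdev

def beacon_score(timestamps):
    """Coefficient of variation of the gaps between session start
    times; near 0 means highly regular (beacon-like) traffic."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return None
    return pstdev(gaps) / mean(gaps)

# A strict two-minute beacon, like the one configured above, scores 0.0:
print(beacon_score([0, 120, 240, 360, 480]))  # → 0.0
```

Human browsing produces irregular gaps and a much higher score, which is what makes this simple measure useful for triage.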


Reconstructing the sessions from the beginning, we can see the initial communication with the Ninja C2, whereby it returns a second stage PowerShell payload:


This payload is somewhat large: it sets up the agent itself, the communication with the C2, and the encrypt and decrypt functions, and it dynamically generates the AES key to be used. Payloads such as this should be studied in depth, as they allow you to better understand the C2's function and, in this case, will allow us to decrypt the communication:


The next two pieces of information directly after the second-stage payload are important: they contain the agent ID, details of the infected endpoint, and the encryption key that will be used. This is not a static key; it is dynamically created for each agent:


Continuing the reconstruction of the sessions, we can see some Base64-encoded data; these are the AES-encrypted commands and their associated responses:


If you remember from earlier, we managed to identify the key used to encrypt this data. We also identified the second-stage payload, which contained the PowerShell code for the agent, including its encryption and decryption functions. We can use this to our advantage and create a simple decoder for this data:

# Ninja AES key returned from the first HTTP POST to the C2 (placeholder value)
$key = "<Base64 key captured from the initial POST>"
# User-passed Base64 data to decrypt
$enc = Read-Host "Encrypted Base64"

function CAM ($key, $IV) {
    try { $a = New-Object "System.Security.Cryptography.RijndaelManaged" }
    catch { $a = New-Object "System.Security.Cryptography.AesCryptoServiceProvider" }
    $a.Mode = [System.Security.Cryptography.CipherMode]::CBC
    $a.Padding = [System.Security.Cryptography.PaddingMode]::Zeros
    $a.BlockSize = 128
    $a.KeySize = 256
    if ($IV) {
        if ($IV.getType().Name -eq "String") { $a.IV = [System.Convert]::FromBase64String($IV) }
        else { $a.IV = $IV }
    }
    if ($key) {
        if ($key.getType().Name -eq "String") { $a.Key = [System.Convert]::FromBase64String($key) }
        else { $a.Key = $key }
    }
    $a
}

# The first 16 bytes of the decoded blob are the IV; the rest is ciphertext
$b = [System.Convert]::FromBase64String($enc)
$IV = $b[0..15]
$a = CAM $key $IV
$d = $a.CreateDecryptor()
$u = $d.TransformFinalBlock($b, 16, $b.Length - 16)
[System.Text.Encoding]::UTF8.GetString($u)


Executing the script and passing it the encrypted Base64 will decrypt the commands and their associated responses, allowing us to see what the attacker executed:


Because Ninja uses HTTP by default and its initial communication is in the clear, an application rule to pick up on this would look like the following:

(service = 80) && (action = 'post') && (query contains '**')


To detect further potential communication to and from Ninja C2 we could use the following application rule logic:

(service = 80) && (filename regex '^[a-z]{5}-img.jpeg') && (filetype != 'jpeg')
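The same filename pattern can be tested offline against extracted filenames, assuming (as the rule above does) that the five-character agent ID is lowercase alphabetic:

```python
import re

AGENT_FILE = re.compile(r"^[a-z]{5}-img\.jpeg$")

print(bool(AGENT_FILE.match("kvzmd-img.jpeg")))   # five-char agent ID → True
print(bool(AGENT_FILE.match("banner-img.jpeg")))  # six characters → False
```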


The Detection Using NetWitness Endpoint

Upon deploying Ninja, NetWitness Endpoint generates several Behaviours of Compromise, including runs powershell, runs powershell decoding base64 string, and runs powershell with long arguments:


NetWitness Endpoint also generated meta values for the reconnaissance commands that were executed by the Ninja PowerShell agent:

  • C:\>whoami: gets current username
  • C:\>quser: queries users logged on local system
  • C:\>tasklist: enumerates processes on local system


This is an important point: even if you miss the initial execution, the malicious process will still have to do something in order to achieve its end goal, and as a defender you only need to pick up on one of those activities to pull the thread back to the beginning.


Drilling into the Events view for the meta value runs powershell decoding base64 string, we can see the Base64-encoded PowerShell command that initiates the connection to Ninja. We can also Base64 decode this within the UI to obtain other information, such as the C2 IP:
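The same decoding can be done outside the UI: PowerShell's -EncodedCommand argument is Base64 over UTF-16LE text, so a few lines of Python recover it (the sample command below is our own, for illustration):

```python
import base64

def decode_powershell(enc: str) -> str:
    # -EncodedCommand payloads are Base64-encoded UTF-16LE strings
    return base64.b64decode(enc).decode("utf-16-le")

sample = base64.b64encode("IEX (New-Object Net.WebClient)".encode("utf-16-le")).decode()
print(decode_powershell(sample))  # → IEX (New-Object Net.WebClient)
```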


Drilling into the Events view for the other meta values identified, we can see that a FILELESS_SCRIPT, spawned from the initial PowerShell command, is executing the reconnaissance command, tasklist:




New C2 frameworks are constantly being developed, but they all fall prey to the same detection mechanisms. It comes down to you, as a defender, to triage the data the system presents and look for anomalies: processes doing things they shouldn't.

Every SOC analyst should spend at least part of his/her day reading various blog posts and white papers on attacker profiles and their tools and techniques. Attackers often repeat at least certain aspects of their activity on various targets, and thus provide the analysts with an opportunity to incorporate such indicators into their toolset (hopefully) prior to being targeted by such attackers.


In addition, other sites provide continuous indicators of both advanced and opportunistic attackers, which can also be incorporated into the toolset for automatic detection.


Here I will provide a guide on how to format such publicly available indicators for use in NetWitness Network and NetWitness Endpoint.


Let us briefly describe what an Indicator of Compromise (IOC) is. An IOC is an indicator of something that has already been observed on a compromised system, or a behavior that was part of an attack. There are multiple types of IOCs, because you can track something in many different ways: for example, IP addresses, filenames, file sizes, URLs, a particular endpoint behavior, etc.


Sometimes lists of MD5/SHA1/SHA256 hashes are enough to quickly identify compromised machines. There are multiple sites where you can find good lists of hash-based IOCs; here are some examples:



At this point, if you don't have your own list of MD5/SHA1/SHA256-based IOCs, you can use some of these lists created by other analysts. However, such information is not necessarily in a suitable format for incorporation into the NetWitness toolset. One way to normalize the data is by following this process:


  1. Install Cmder, a good console emulator for Windows that has the functionality needed for the rest of the steps.
  2. Let’s say you want to generate the MD5 list of the FIN7 IOCs found at:


After you download the file, you are ready to start.

  3. Open Cmder and go to the folder where the downloaded file is located.
  4. Now run the following commands, as shown in the following figure.


grep -e "[0-9a-f]\{32\}" FIN7_hash.md | cut -c 3- | cut -c -32 | uniq -u > FIN7_tmp.txt && sed -e 's/$/,FIN7,blacklisted file/' FIN7_tmp.txt > FIN7_md5.txt


Let me explain the commands in more detail for those not familiar with these tools/commands:


grep -e "[0-9a-f]\{32\}" FIN7_hash.md                                 Extract the MD5 hashes from the file
cut -c 3- | cut -c -32                                                Remove all the unneeded characters
uniq -u > FIN7_tmp.txt                                                Deduplicate and save the output to FIN7_tmp.txt
sed -e 's/$/,FIN7,blacklisted file/' FIN7_tmp.txt > FIN7_md5.txt      Create the final file



The above steps are specific to this particular file; each set of IOCs will need its own set of conversion steps. The sed command appends “,FIN7,blacklisted file” to each line and writes the output to FIN7_md5.txt, where “FIN7” is the description of your APT (which we will map to the ioc key in NetWitness) and “blacklisted file” is the value we will map to the analysis.file key. This step is critical if you want the module and machine scores automatically set to 100 for these matches.


Figure 1


If you want to use your own toolset to format the data, then please ensure you follow these steps in order to generate a good list of IOCs:


  1. Retrieve the file (plain text, PDF, Word document, or HTML; the file type is not important)
  2. Extract the IOCs from the file
  3. Remove unneeded characters (so that only the useful strings remain)
  4. Make each IOC unique (ensure you remove any duplicate entries, as this is an important step)
  5. You must have a value of “blacklisted file” in your resulting file if you want machine and module scores to be affected by your feed.
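The steps above can be sketched in Python (the function name and FIN7 tag are illustrative):

```python
import re

def extract_md5_feed(report_text: str, tag: str = "FIN7") -> list:
    """Extract unique MD5 hashes from arbitrary report text and emit
    CSV rows in the checksum,ioc,analysis.file layout used here."""
    seen, rows = set(), []
    for md5 in re.findall(r"\b[0-9a-f]{32}\b", report_text.lower()):
        if md5 not in seen:          # step 4: keep each IOC unique
            seen.add(md5)
            rows.append(f"{md5},{tag},blacklisted file")  # step 5
    return rows

sample = ("seen: 0123456789abcdef0123456789abcdef and again "
          "0123456789abcdef0123456789abcdef plus "
          "ffffffffffffffffffffffffffffffff")
print(extract_md5_feed(sample))
```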


At this point we have the source CSV file with the data necessary to create a Feed for NetWitness Endpoint.


To create your feed, follow these steps:

  1. Go to Configure->Custom Feed and create a custom feed.
  2. Click on the + icon, select Custom Feed, and configure the custom feed by giving it a name and selecting the CSV file you created above, as shown in the following figure


Figure 2



In this case, CustomAPTFeed.csv is the FIN7_md5.txt created above, renamed to CustomAPTFeed.csv.

Apply the feed to the Log Decoder (second tab) and define the columns as shown in the following figure. Define the Callback Key as “checksum.src”, select the Index Column to be the first one (which will grey it out in the grid below), select the “ioc” key for Column 2, and finally select the “analysis.file” meta key for Column 3. Again, this step is critical if you want the risk scores to automatically update; it will only work for this combination of key and value.


Figure 3



Finish the import and make sure there are no errors and the task completed successfully. Now you can go to Investigate in the UI and validate your data.


Every time the meta key “checksum.src” contains a value defined in your custom feed, the “ioc” meta key will be populated with the value provided in Column 2 of the CSV file, and the “analysis.file” meta key will have the “blacklisted file” value, as shown in the following figure.


Figure 4


In this case, the endpoint risk score for that system will automatically be increased to 100 (the highest possible risk score), and under Critical Alert you will see the relevant indicator, in our case “Blacklisted File”.


Figure 5


The same will happen to the specific module that was Blacklisted, as shown in the following figure:


Figure 6


Multiple types of IOCs can be loaded into NetWitness Endpoint following the steps presented in this blog post. Always remember that IOCs are static, so the resource has to match exactly to trigger an alert. In the case of MD5 hashes of files, also remember that if the file is changed by even one byte, or is recompiled, the MD5 hash will be different and your IOC will no longer match. This is why we recommend that analysts also focus on other characteristics of a file (such as the file description, if it is unique) or its behavior (such as any parameters that need to be passed for it to work).


I hope this blog post helps you import simple and fast IOCs into NetWitness Endpoint for automatic detection of known malicious files.


A special thank you goes out to Lee Kirkpatrick for his assistance and support.

NetWitness already has the Health & Wellness service, which provides a full overview of the health of all NetWitness services and hosts. Nevertheless, I also created a health-check script that performs a quick analysis of disk usage, memory utilization, the existence of core files, and any failed services on each NetWitness host.


It also lists all your hosts with their Salt Minion IDs, hostnames, and IPs, and provides a Salt reachability check.



How It Works:


The procedure actually consists of two scripts. The first script runs on the SA and performs a simple health check on your environment: it copies the second script to all hosts, makes it executable, runs it on each host, and provides the output and recommendations. It also lists all your hosts' UUIDs ("Salt Minion IDs"), hostnames, and IPs, and performs a reachability test. The second script is copied to all NetWitness hosts when you run the first script on the SA; it analyzes each host's disk usage, memory utilization, and existence of core files, and checks whether any services have failed on that host.

The second script will not run manually; it runs automatically once you run the first script on the SA.
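As an illustration of the kind of checks the per-host script performs, here is a minimal disk-usage check (the function name and the 80% warning threshold are our own; the attached script is more complete):

```python
import shutil

def disk_check(path: str = "/", warn_pct: float = 80.0):
    """Return the percentage of the filesystem used at `path` and
    whether it crosses the warning threshold."""
    usage = shutil.disk_usage(path)
    used_pct = round(usage.used * 100 / usage.total, 1)
    return used_pct, used_pct >= warn_pct

pct, warn = disk_check("/")
print(f"/ is {pct}% used{' - WARNING' if warn else ''}")
```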





All the steps below are performed over an SSH session to the NetWitness Admin Server (SA).


1) Create a file for the first script under /root on the SA.


2) Copy the content of the first script (attached) into the file you created in step 1.

3) Create a file for the second script under /root on the SA.


4) Copy the content of the second script (attached) into the file you created in step 3.

5) Make only the first script executable (not the second one):

#chmod +x

6) Run the first script.




Sample Run: 






Minion did not return. [Not connected] or No Response can point to one of the reasons below:


1) If you are facing network slowness and the Salt Master (SA) is unable to reach the Salt Minions (hosts) within a specific time limit while fetching their IPs and hostnames, the first part of the script's output can show "No Response". Don't panic: this does not mean that the SA is totally unable to reach the host(s), only that it could not reach them within the time limit, so Salt temporarily reports "No Response".

If you run the script again when there is no network slowness, it should provide the expected output.


2) If a host has no free memory left and has exhausted its swap, its Salt minion may not reply to the Salt master's request for IP, hostname, and the reachability test, giving "Minion did not return. [Not connected]". If you run the script again, it may return a normal result; otherwise, please check the memory utilization of that host once you have ruled out network slowness (point 1) and a retired or powered-off host (point 3).


3) "Minion did not return. [Not connected]" can also point to a retired host (one that was removed from the environment but whose Salt minion UUID was not deleted from the Salt master) or to a host that is currently powered off.





Please feel free to provide feedback, bug reports etc...

With the sudden surge in popularity of Zoom meetings, increased interest has been seen from white/grey/black hats in identifying potential vulnerabilities and weaknesses.

One of the recently popularized security weaknesses is the way a UNC path sent over Zoom chat gets interpreted as a hyperlink. If a member of the Zoom session clicks on the displayed link, this can lead to multiple risks.


If we also consider zoom-bombing scenarios, where an attacker can join a public session (which is the case for most Zoom sessions by default), either by obtaining the invitation link or by guessing personal meeting room names without having to be logged in, scenarios where an attacker convinces users to click on carefully crafted links become more probable.





Scenario 1: Interception of NTLM Password Hashes


The user receives a clickable link in the chat window. This could lead to an IP address or domain controlled by the attacker. The example shows a private IP address, but it could have been a public IP as well.


When the user clicks on the hyperlink, Windows will try to connect to the mentioned destination over SMB, sending the user’s username and NTLM password hash in the process.

As seen in the below screenshot, because the attacker has control over the destination host, he is able to capture the user's NTLM password hash.


The attacker can then use the NTLM hash value to crack the user’s password, as seen below. 






Scenario 2: Remote execution of local files


The same technique can be used to execute a local file on the victim’s machine.

In the below screenshot, the user clicks on the hyperlink, which executes the calculator. In a real scenario, something malicious could have been executed.






Scenario 3: Remote execution of a remote file/script


An attacker can use the same technique to remotely execute malware hosted on his machine. Instead of pointing to a local executable, the link can point to an executable on a machine controlled by the attacker. This machine can be on the internet and doesn't have to be on the same network. In the below example, the attacker is hosting a file named “YGH.exe” on his SMB server.



In this scenario, the victim would get a warning message, but it’s safe to assume that some users would click on “Run”, and the YGH.exe file would get executed without any additional actions needed from the user's end.



In this example, the malware only shows a popup message. In a real attack, actual malware would have been executed.






Detection of these behaviors

It's possible to easily identify these behaviors with the right visibility into network traffic and endpoint data. Below are ways to identify such instances using RSA NetWitness Network and RSA NetWitness Endpoint.



RSA NetWitness Network



  • Filter on direction = ‘outbound’ && service = 139 to look for outbound SMB sessions leaking to the internet. Do not base your filter on the TCP destination port, as the attacker doesn’t have to use the standard SMB port. Leveraging “service”, which identifies the service based on the content of the network payload instead of the advertised port number, allows us to account for these scenarios.
  • Monitor where the traffic is going to and if this is expected or not (typically, no SMB traffic should go out)
  • Identify any potentially risky files that might have been downloaded over SMB. In this case we can identify that an executable (ygh.exe) has been downloaded. We could then check if this file has been executed and what it did using RSA NetWitness Endpoint.





RSA NetWitness Endpoint

By looking at the process analysis details for the zoom.exe process, we can easily identify any executable that has been launched by Zoom, in this case YGH.exe. In a real scenario where the malware had taken some actions, those actions and commands would have been shown in this view as well.


We can also easily filter on files that have been executed from a remote path. Filters can be created to exclude expected trusted paths based on the organization’s naming convention. In this case, it shows the remote domain used by the attacker.






Some additional recommendations

  • Follow standard best practices, and don’t click on any link without verifying its authenticity
  • Block SMB traffic from going out to the Internet
    • This should be the norm in any environment
    • With the increase in people working from home where corporate firewall policies might not be enforced, this can be done on the local Windows Firewall
  • To avoid Zoom Bombing
    • Don’t use easily guessable Zoom room names
    • When possible, set a password to join meetings

The ability to capture network events while keeping only the header portion and truncating the payload has been available for quite some time. This has always been a great option when the lack of analytical value of the raw data (e.g. the session payload) does not justify the storage cost incurred to keep it. Typical examples include database backup transfers, or encrypted data that you are unable to decipher into clear text.


In RSA NetWitness Platform 11.1 we added some additional options to increase the flexibility of when the truncation is applied to an event.


  • The first new option allows for the headers along with any Secure Sockets Layer (SSL) certificate exchange to be captured prior to truncating the remaining portion of the payload. This allows for analysis like TLS certificate hashing and JA3 & JA3S fingerprints to be generated while still removing the remaining payload to save on storage space.
  • The second option allows for the administrator to choose a custom boundary, based on how many bytes into the event raw data, before truncating the payload. Any bytes prior to the boundary are saved as part of the event and anything after that boundary is not stored.
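For reference, a JA3 fingerprint (mentioned in the first option above) is just an MD5 digest over five comma-separated ClientHello fields, each field a dash-joined list of decimal values; a minimal sketch (field values below are illustrative):

```python
import hashlib

def ja3_digest(version, ciphers, extensions, curves, point_formats):
    """MD5 over "version,ciphers,extensions,curves,point_formats"
    with each list dash-joined, per the JA3 convention."""
    fields = [str(version)] + [
        "-".join(str(v) for v in part)
        for part in (ciphers, extensions, curves, point_formats)
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

print(ja3_digest(771, [4865, 4866], [0, 10, 11], [29, 23], [0]))
```

Because the fingerprint depends only on the handshake, capturing the certificate exchange before truncation is enough to compute it; the encrypted application data that follows adds nothing here.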


The administrative interface shown below is where an admin can modify the truncation options on application rules per network decoder.


Administration of network decoder application rule truncation options

1       Introduction

The efforts of people around the globe have suddenly forced many workers to stay at home. For a significant portion of these workers that also means working remotely either for the first time, or at least more often than their normal telecommuting schedule. As a result of this necessity, many organizations may be forced to implement new remote technologies or significantly expand their current capacities for remote users. This added capability can present a significant security risk if not implemented correctly. Furthermore, malicious actors never pass up the opportunity to capitalize on current affairs. The RSA Incident Response Team has years of experience responding to Targeted Attacks and Advanced Threat Actors while assisting our clients with improving their overall security posture. The members of our team are either working with our customers on-site or supporting them from home. Our team has frequently assisted clients remotely, providing us with extensive experience in operating a secure remote team. Given the increasing threat landscape,  we are sharing some essential tips and suggestions on how organizations can improve their security posture, as well as how their remote workforce can keep themselves secure by following some best practices.

2       Tips for Users that are Working from Home

During this time many workers will be shifting from the office life to a work from home life that is unfamiliar to most of them. Many workers will be experiencing this reality for the first time, while for others it will be the first time this has been an everyday occurrence. In addition to the recommendations provided on the RSA blog (, the RSA IR team is providing some additional details and best practices that users can utilize to help keep themselves secure while working from home.  Additionally, the RSA IR team has published a blog with tips that organizations can use to help improve their security posture (RSA IR - Best Practices for Organizations (A Starting Point)).

2.1      Use Provided Corporate Hardware

Now that you have shifted to working from home, you still need to ensure all work-related tasks are completed using your organization's provided laptop, if available. Using the work laptop means the user remains covered by the organization's security protections. It also helps the user avoid accidental disclosure of sensitive work data that would otherwise be stored on a personally owned machine. Some organizations have a bring-your-own-device (BYOD) policy; in those cases, RSA recommends following your company's normal policy for remote computing.

2.2      Passwords

The passwords used for all corporate logins should comply with your organization's password policy. However, RSA recommends the use of a password manager to increase your security. Password managers (such as LastPass, Password Safe, Dashlane, 1Password, and Apple Keychain, among other reputable options) allow you to randomly generate a secure and unique password for each login and store them within a database. This allows you to comply with corporate security policies without having to remember each individual password (or, worse, reusing the same password). The implication of reusing passwords is that if an account's password is compromised in one location, then all other accounts that share that password are also compromised. We will discuss multi-factor authentication next; suffice it to say that we recommend enabling multi-factor authentication for access to your password manager for increased security.

Several password managers can be found at the below link:

NOTE: Password managers require users to remember a single master password in order to access the others. It should be complex and not easily guessable. We recommend that you adopt the concept of passphrases rather than passwords. A passphrase can be a sentence or a combination of words that has some meaning to you. For example, a passphrase could be: “I need to be on vacation now!” or “Correct Horse Battery Staple” (reference: xkcd: Password Strength ). One example of a passphrase generator is
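A diceware-style passphrase generator is only a few lines; the wordlist here is a tiny illustrative sample (a real generator would use a large list, such as the EFF diceware words):

```python
import secrets

WORDS = ["correct", "horse", "battery", "staple", "vacation", "window",
         "purple", "coffee", "rocket", "silent", "maple", "orbit"]

def passphrase(n: int = 4, wordlist=WORDS) -> str:
    """Join n words chosen with a cryptographically secure RNG."""
    return " ".join(secrets.choice(wordlist) for _ in range(n))

print(passphrase())
```

The strength comes from the size of the wordlist and the number of words, not from the obscurity of any single word.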

2.2.1    Default Passwords

Many devices require a username and password to log in for initial or further configuration.  Often these devices (such as home routers, WIFI access points, cable modems and other Internet devices) come equipped with default passwords (such as admin or password).  RSA recommends that all default passwords be changed to secure unique passwords, especially for devices that connect directly to the Internet.

2.3        Multi Factor Authentication (Also Known as Two Factor Authentication)

Using multi-factor authentication (MFA) for all remote access, for systems hosting sensitive data, and for systems performing administrative functions within the organization is strongly recommended. Multi-factor authentication (an evolution of two-factor authentication (2FA)) enhances security by requiring that a user present multiple pieces of information to authenticate themselves. Credentials typically fall into one of three categories: something you know (like a password or PIN), something you have (like a smart card or token), or something you are (like your fingerprint or iris scan). Credentials must come from two different categories in order to be classified as multi-factor. Applications that are sensitive to the organization, such as your password manager, customer databases, and administrative tools, should all have multi-factor authentication enabled on them.


2.4      Follow Your Company's IT and Security Policies

Organizations have established IT and security policies to protect all employees as well as the organization itself. Just because you are not in the office does not mean these policies no longer apply. Security policies covering how you handle data, communications, installed applications, and what you can do on your laptop should all be followed. Company-provided computers should not be treated the same as personal devices; this may include disallowing your family from using the company-provided computer.

2.5      Allow Updates and Patching to Take Place.

If your organization has a patch management program in place, users should allow these processes to function as they normally would in the office. These update procedures will at times require a reboot, so ensure your machine is online, connected to the corporate VPN (if available), and allowed to reboot when it asks. Do not skip patches as they are released by your organization's IT department, so that your machine is not put at risk of being compromised.

2.5.1    Update Personal Devices

In addition to allowing your corporate system to update, personal assets should be updated as well. It is easy to ignore security updates for your systems, devices, or applications by simply clicking “update later”. However, repeatedly delaying these updates can lead to serious vulnerability issues. Updates should be performed for your personal operating systems (such as Windows or MacOS for example),  web browsers (such as Chrome, Firefox, Internet Explorer or Edge), tablets (such as iPad, Kindle, or Android), smartphones (such as iPhone or Android), and any other device that requires updates.

2.6      Phishing / Scams / Link Safety

Phishing is an attempt to trick a user into believing that the email message is something that they need, want, or are interested in. Phishing scams typically revolve around current events of the world or common life events (such as shipping related to online orders, among others). The attackers know that the subject and content of the email will trigger either fear or intrigue on the recipient. This emotion will most likely cause the recipient to click a link within the email or open its attachment. The link will likely download a malicious application or present the user with a fake login page that attempts to harvest credentials for sites such as your bank, email, social media, online shopping, gaming or other important credentials.  This can result in the loss of access, fraud, or abuse of these accounts if the user proceeds to divulge this information.

If you are unfamiliar with what phishing looks like or some of the common tactics used for social engineering, we highly recommend taking the quiz linked below to improve your skills for spotting phishing attempts:

2.7      WIFI Security

RSA recommends encrypting home wireless networks with WiFi Protected Access (WPA). There are several versions (WPA, WPA2 & WPA3), with WPA3 being the current strongest. RSA does not recommend using Wired Equivalent Privacy (WEP) or unsecured wireless Internet.

2.8      Security Training

If your company offers security training, RSA recommends that you take (or retake if it has been a while) the offered training as you are potentially at a higher risk now that you are outside the office. We understand that these trainings are not always the most exciting learning experiences, however they do help to reinforce good security behavior and can act as a refresher for things you may already know. One good resource to start is the SANS Security Awareness Work-from-Home Kit (

2.9      Improve Your Household's Internet Safety

All the devices on your local network are linked to each other in one way or another. It is therefore important to ensure that all members of your household are kept safe and do not infect you by proxy. A great way of ensuring your family's safety on the internet is by using Microsoft Family:

2.10  Non-Security Tips for Working from Home

2.10.1  A Second Monitor

A second monitor can increase your productivity, improve workflow and generally provide an improved experience while working.  Many organizations are offering to let employees borrow work resources such as monitors for use during this period of working from home. Check if your company is providing something similar.

2.10.2  A Comfortable and Supportive Chair

Since you will no doubt be spending an increased amount of time in front of your computer working, you will also likely be spending an increased amount of time in your chair.  Having a comfortable and supportive chair can help with posture and ergonomics while working from home.

2.10.3  Consider a Standing Desk or Standing Desk Converter

For many people sitting all day is not ideal. To help combat this consider using a standing desk or a standing desk converter that allows a home user to decide if they want to sit or stand at will. If you’re not able to utilize a standing desk, then be sure to take breaks where you are able to stand up and stretch.

3      Conclusion

In these uncertain times, we hope that this advice will help organizations and users stay connected and stay secure. Watch out for more posts and advice from across the RSA organization, and let us know what you're doing in the comments below.


2       Tips for Organizations (A Starting Point)

While there are many steps organizations can take to better protect themselves and their users, the RSA IR team is sharing some essential tips and suggestions that we consider a good starting point. This is by no means a complete list; each organization should adjust the recommendations below according to its security posture, risk profile, and risk acceptance.

Many vendors are offering emergency capacity extensions or trials of their products in this time of unprecedented social change.  Check with your vendors to see if they have any such offers in place for technology that your organization does not already have implemented as it pertains to the recommendations listed below. For a strategic approach, take a look at the post from our colleagues on the Advanced Cyber Defense (ACD) team Work From Home - The Paradigm Shift in Cyber Defense.

2.1      What Organizations Can Do for Their Users

2.1.1    VPN

While it may be tempting and seem like an easy option to simply make resources available online via services like RDP, this is generally not recommended. Threat actors love searching for vulnerable servers that are connected to the internet, regardless of the port used. Search engines like Shodan show an increase in the number of servers exposing RDP directly to the internet. Open RDP servers are regularly used to infect organizations with ransomware and other malware (Two weeks after Microsoft warned of Windows RDP worms, a million internet-facing boxes still vulnerable • The Register). RSA strongly discourages organizations from exposing RDP services directly to the internet.

Organizations should utilize VPN (or VPN alternative) technologies for employee remote access. RSA IR has the following tips regarding VPN usage.

  • Ensure licensing counts can support the increased number of remote workers.
  • Ensure that the VPN devices can handle the increased number of simultaneous connections and the added throughput.
  • For strong security, RSA recommends that the VPN be Always-On if possible. An Always-On VPN requires the system to be connected to the VPN whenever an authorized client is connected to the internet. If bandwidth, simultaneous connection count, or bring-your-own-device (BYOD) is a concern, this suggestion can be re-prioritized.
  • All traffic should be tunneled over the VPN (no split tunneling), enabling the same network visibility and controls as if users were in the office. If bandwidth availability or BYOD is a concern to the organization, this recommendation can be re-prioritized.
  • Investigate VPN alternatives for certain users. Alternative remote access solutions exist, such as Virtual Desktop Infrastructure (VDI), cloud infrastructure, Software as a Service (SaaS), and others.

2.1.2    Multi Factor Authentication (Also Known as Two Factor Authentication) For All Remote Access

All remote access (including VPN, VDI, Cloud, Office365, SaaS, etc.) should be required to utilize Multi Factor Authentication. Multi Factor Authentication, which is an evolution of Two Factor Authentication (2FA), enhances security by requiring that a user present multiple pieces of information for authentication. Credentials typically fall into one of three categories: something you know (like a password or PIN), something you have (like a smart card or token), or something you are (like your fingerprint). Credentials must come from two different categories in order to be classified as multi-factor. As mentioned, check with your vendors to see if they are offering any assistance with surge capacity or new solutions.
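As a rough illustration of the "something you have" factor, the sketch below implements the RFC 6238 time-based one-time password (TOTP) algorithm that most software tokens use. This is a generic, educational example; the secret and parameters are illustrative and not tied to any specific vendor product.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = struct.pack(">Q", for_time // step)  # 8-byte big-endian time-step counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T=59
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
```

Because the server and the token compute the same code independently from a shared secret and the current time, possession of the token is proven without ever transmitting the secret itself.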

2.1.3    User Education

RSA generally recommends that all staff using computer resources within a company complete annual security training. However, during this time when more users are working remotely, RSA recommends that organizations hold a special organization-wide user education session on password safety, phishing attacks, IT security policies, as well as covering how to report issues to the IT and Security Teams. If you’re looking for a place to start, see our other blog post for tips for users that are working from home (RSA IR - Recommendations for Users Working from Home).

2.2      What Organizations Can Do for Themselves

2.2.1    Updates and Patching

RSA consistently finds out-of-date and out-of-support Operating Systems and software running in client environments. Older software often has public vulnerabilities and exploits that are freely available online and are often targeted by commodity malware as well as targeted attackers. RSA strongly recommends that any core software be aggressively updated on a regular basis, especially if a vulnerability for a particular application is publicly announced. Exploiting vulnerable software is one of the easiest ways for an attacker to find their way into the enterprise. At a minimum organizations should look to:

  • Update and Patch all external facing systems, servers and applications (including web applications or frameworks).
  • Update and Patch all Critical Systems internal or external.

2.2.2    Web Application Firewall

If not already deployed, RSA recommends implementing a Web Application Firewall (WAF) to better protect internet-facing web applications. A WAF solution can reduce the attack surface of web applications and, in some cases, of the operating system itself. It is important to note that simply installing a WAF will not immediately secure all web applications: all WAF solutions, regardless of vendor, need to be tuned for the specific applications and environments they protect.

If a WAF is already deployed, RSA recommends that organizations verify that it is in front of not just the business-critical web applications, but also all other external web-facing assets.

2.2.3    Leverage Freely Available Threat Intelligence Feeds

As notices have been released about increased attacker activity related to recent attacks and fraud, many threat intelligence vendors are offering free intelligence on current threats and scams. Here are some of the companies offering related intelligence feeds for free, as well as additional tools for analysts.

2.2.4    Endpoint Detection and Response (EDR)

Endpoint Detection and Response (EDR) is especially important to organizations that, for various reasons, are unable to enable an Always-On VPN. 

If your organization already has an Endpoint Detection and Response (EDR) solution, ensure that it is deployed to all remote users.  Since endpoints may not be sending all their traffic internally to allow for network visibility, EDR tools can help gain visibility of endpoints operating outside the internal network environment. Organizations need to ensure that data collected by the EDR tool can be transmitted to the central EDR server either continuously or while connected to the VPN. Organizations must also ensure that their licensing limits, as well as server capacity, support a potential increase in the number of endpoints.  Speak to your security vendors to see if they provide surge or Business Continuity increases during this time.

If your organization does not currently have an EDR tool, then consider deploying one. EDR solutions now offer more than just detection and blacklisting of malware; they also have built-in forensic capabilities such as acquiring remote system files, memory images, behavior analysis, and false positive management via whitelisting. This means that organizations can detect, respond to, and block malicious activity much more quickly, without the need to create a full host forensic image for investigation. Additionally, once a Behavior of Compromise (BOC) is identified, the EDR solution should be able to detect where else in the enterprise that indicator has been observed. Speak to your trusted security vendors and see if they are offering any on a trial basis.

2.2.5    Remote Collaboration

If your organization does not already have a policy for remote collaboration tools (such as screen share), consider adopting one for remote users. At the very least, RSA suggests having a recommendation for users so that they do not seek out their own solutions.  Some examples include Zoom, WebEx, GoToMeeting, Microsoft Teams, as well as others.

3      Conclusion

In these uncertain times, we hope that this advice will help organizations and users stay connected and stay secure. Watch out for more posts and advice from across the RSA organization, and let us know what you're doing in the comments below.

I have recently been posting a number of blogs on using the RSA NetWitness Platform to detect attackers within your environment. As the list of blogs grows, it is becoming increasingly difficult to navigate through them easily. To combat this, this blog post will contain references to all other blog posts in the Profiling Attackers Series, and will be updated when new posts are made.









Special thanks to Rui Ataide for his support and guidance for these posts.

I recently reviewed HTTP Asynchronous Reverse Shell (HARS) for The C2 Matrix, which should be posted soon! They also have a Google Docs spreadsheet here: C2Matrix - Google Sheets. I’ve been following them for a while and have tried to map as many of the frameworks as possible from a defensive perspective. This blog post will therefore cover just that: how to use RSA NetWitness to detect HARS.


The Attack

After editing the configuration files, we can compile the executable to run on the victim endpoint. After executing the binary we get the default error message, which is configurable, but we left it with default settings:


The error message is a ruse and the connection is still made back to the C2 server where we see the successful connection from our victim endpoint:


It drops us by default into a prompt where we can begin to execute our commands, such as whoami, quser, etc.:


Detection Using NetWitness Network

By default HARS uses SSL, so to see the underlying HTTP traffic, we used a MITM proxy to intercept the communication; it is highly advisable to introduce SSL interception into your own environment. Within this post, we will also cover the anomalies in the communication over SSL.



An interesting meta value generated for the HARS traffic is http invalid cookie; this meta value flags HTTP cookies that do not follow RFC 6265:


Drilling into the Events view for these sessions before reconstructing them, we can observe a beacon-type pattern to the connections with some jitter, as well as low variance in the payload of each request; this indicates a more mechanical type of check-in behaviour:
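The two properties described above, regular check-in intervals with some jitter and near-constant payload sizes, can be approximated with a simple statistic. This is a generic, hedged sketch (not a NetWitness feature) using made-up session timestamps and sizes:

```python
import statistics

def beacon_score(timestamps, payload_sizes):
    """Crude beacon heuristic: compute the coefficient of variation of the
    inter-arrival gaps and of the payload sizes. Values near zero for both
    suggest a mechanical, scripted check-in rather than human browsing."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    gap_cv = statistics.pstdev(gaps) / statistics.mean(gaps)
    size_cv = statistics.pstdev(payload_sizes) / statistics.mean(payload_sizes)
    return gap_cv, size_cv

# Hypothetical sessions: ~60s check-ins with a little jitter, near-constant payloads
times = [0, 61, 118, 182, 240, 299]
sizes = [312, 310, 314, 311, 313, 312]
gap_cv, size_cv = beacon_score(times, sizes)
assert gap_cv < 0.2 and size_cv < 0.1  # both near zero, i.e. beacon-like
```

Human-driven browsing tends to produce highly irregular gaps and payload sizes, so both values would be much larger.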


Reconstructing the events and looking at the cookie for the requests, we can see what looks like Base64 data:



Using the built-in Base64 decoding, we can see that this decodes to HELLO. While this is not indicative of malicious activity, this is still a malformed cookie and a rather strange value:


From here, we can continue to go through the traffic and decode the values supplied within the cookie header. The next few cookies contain the text QVNL, which returns ASK when Base64 decoded:
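The decoding step can be reproduced with a couple of lines of Python; the cookie values below are the ones observed in the HARS traffic:

```python
import base64

# Cookie values observed in the HARS check-in traffic
print(base64.b64decode("SEVMTE8="))  # → b'HELLO' (initial check-in)
print(base64.b64decode("QVNL"))      # → b'ASK'   (client polling for tasking)
```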


Eventually we come across a cookie with a Base64 encoded version of what looks like the output of a whoami command:


As well as one that contains the output of a quser command. Both of these look rather suspicious; this is information that normally shouldn't be sent to a remote host, especially in this manner as a cookie value:


Looking at the request prior to the one that returns the output of quser, and sifting through the payload, we find a Base64 encoded quser command within it:


This C2 framework disguises its commands within legitimate looking pages in an attempt to evade detection by analysts, but is easily detected with NetWitness using a single meta value, http invalid cookie.

NOTE: It is important to remember that many applications abuse the HTTP protocol and do not follow RFCs, so it is possible for legitimate traffic to have invalid cookies. It is down to the defender to determine whether the activity is malicious or not, but NetWitness points you to these anomalies and makes it easier to focus on traffic of interest.
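As a rough illustration of what an "invalid cookie" check involves, the sketch below validates a Cookie header against the RFC 6265 name=value grammar. This is a simplified approximation for illustration, not the actual NetWitness parser logic:

```python
import re

# Per RFC 6265 a Cookie header carries name=value pairs: the cookie-name is an
# HTTP token, and the cookie-value excludes whitespace, commas, semicolons, etc.
TOKEN = r"[!#$%&'*+\-.^_`|~0-9A-Za-z]+"
VALUE = r'"?[\x21\x23-\x2B\x2D-\x3A\x3C-\x5B\x5D-\x7E]*"?'
PAIR = re.compile(rf"^{TOKEN}={VALUE}$")

def cookie_header_is_valid(header: str) -> bool:
    """True if every semicolon-separated part is a well-formed name=value pair."""
    return all(PAIR.match(part.strip()) for part in header.split(";"))

assert cookie_header_is_valid("session=abc123; theme=dark")
assert not cookie_header_is_valid("QVNL")  # bare Base64 blob, no name=value pair
```

A bare Base64 blob like the HARS "QVNL" cookie has no name=value structure at all, which is exactly the kind of anomaly the http invalid cookie meta value surfaces.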


This C2 is highly malleable, so the following application rule would only pick up on its default configuration. However, attackers tend to be lazy and leave many of the default settings for these tools in place, which allows us to create an application rule to detect this behaviour:

cookie = 'QVNL','SEVMTE8='


In order for the application rule to work, you would need to register the cookie HTTP header. This involves using the customHeaders() function within the HTTP_lua_options file as described on the community:


One of our previous posts also covered registering the cookie HTTP header into a meta key and can be found on the community:




As previously stated, HARS uses SSL to communicate by default. When HARS initially connects back to the C2 from the victim endpoint, it attempts to blend in with typical traffic to www[.]bing[.]com. The below screenshot shows the malicious traffic (on the left), and the legitimate traffic to Bing (on the right). Playing spot the difference, we can see a few anomalies as highlighted below:


This allows us to create logic to detect possible HARS usage with the following application rule:

service = 443 && alias.host = 'www.bing.com' && ssl.ca = 'microsoft corporation' && ssl.subject = 'microsoft corporation'


And we can also create an application rule to look for anomalous Bing certificates, this would, however, be lower fidelity in order to detect a broader range of suspicious cases to aid in threat hunting:

service = 443 && alias.host = 'www.bing.com' && not(ssl.ca = 'microsoft corporation','baltimore' && ssl.subject = 'www.bing.com')


Detection Using NetWitness Endpoint

HARS uses PowerShell to execute commands on the victim endpoint, but does not use any form of obfuscation. Therefore, in NetWitness Endpoint, we can see multiple hits under the Behaviours of Compromise meta key for the reconnaissance commands executed: quser, whoami, and tasklist:


Drilling into those meta values, we can see an executable named hars.exe running out of a suspect directory and executing reconnaissance-type commands:


Pivoting on the filename hars.exe (filename.src = 'hars.exe'), which could really be any other name but would still be launching these commands, we can see all the events from this suspect executable, such as the commands it executed under the Source Parameter meta key:


After every command executed, HARS appends echo flag_end. We can use this to our advantage to create an application rule to detect its behaviour:

category = 'console event' && param.src ends 'echo flag_end'


Another neat indicator comes under the Context meta key. Here we can see four interesting meta values associated with hars.exe: console.remote, network.ipv4, network.nonroutable, and network.outgoing. These meta values tell us that this executable is making an outbound network connection and running console commands:


Drilling into the Events view for the network meta values, we can see where the executable is connecting to:


And drilling into the console.remote meta value, we can see the commands that were executed:


So from a defender's perspective, it can be a good idea to use the filter context = 'console.remote' and look for suspicious executables:



Not all C2 frameworks use advanced methods of obfuscation or encryption; some rely on confusing analysts by mimicking legitimate websites in an attempt to blend in with normal traffic. It is important as a defender to spot these anomalies and fully analyse the traffic, even if at first glance it appears to be normal. And remember, the attacker probably assumes that none of this matters because the attack is over SSL and the data would not be visible to analysts; this is where SSL interception is a great advantage, as it really catches attackers out.


By now, you may have already started to work from home instead of your usual workplace, like many of your co-workers and peers. As the situation continues to evolve, there is a rapidly increasing trend of organisations shifting their employees from the office to working from home. In addition to the recommendations provided in the following RSA blogs: Cyber Resiliency Begins at Home, RSA IR - Best Practices for Organizations (A Starting Point), and RSA IR - Recommendations for Users Working from Home, this post examines in further detail the challenges that cybersecurity professionals are contending with as organizations around the globe transition more employees from offices to work-from-home arrangements and conduct meetings through virtual means. This transformation in how we work and conduct our businesses will inevitably have an impact on our threat environment. In the subsequent paragraphs, we will discuss the paradigm shift in our threat landscape and what we should do to continue to stay effective in safeguarding our assets from emerging cyber threats.



There are two key problems that we see here, which we will break down in the following paragraphs:


Problem #1

The cyber defense architectures of many organizations today are designed on the assumption that most daily BAU activities are performed on-premise. With the sudden need to allow a large number of employees to work from home, many of these activities must now be performed remotely. Setting aside the challenges of provisioning or scaling the necessary IT infrastructure to support these sudden changes, this also gives rise to a shift in the threat landscape, where the existing cyber defense measures that have worked in the past may no longer be effective.


Problem #2

There is an increasing trend of attackers preying on human psychology, crafting new attacks around the latest trending news topics or specifically targeting work-from-home employees through the remote meeting applications they use, for example:

  • Phishing Emails and Malware Attachments disguised as legitimate meeting invites and installers from popular remote meeting applications.
  • Malicious mobile applications promising to be the most up-to-date outlet for tracking the latest breaking news and developments.
  • Domain names that are similar to popular remote meeting platforms.


Combining both of the above-mentioned problems, and coupled with the tendency for humans to naturally feel more comfortable at home than in the office, there is an increased likelihood that some of us may be letting our guard down when it comes to spotting phishing emails, malicious attachments and applications, and malicious websites that come knocking on our door at the least expected time. All of this can lead to an exponential increase in the level of cybersecurity risk faced by your organization; when there is a sudden surge in the number of cybersecurity breaches, does your organization have the capacity to handle them?



Here, we look at what you, as part of the Cybersecurity Team in your organization, can explore from the perspectives of People, Process and Technology to address the above-mentioned issues.



Virtual Cyber Awareness Briefings. With increasingly more employees working from home, you can no longer conduct the usual quarterly cyber awareness briefings in traditional classroom settings. Instead of halting these briefings, why not take them virtual in the form of webinars for all employees who are working remotely? There are many platforms that allow you to do so, such as WebEx, Zoom, Adobe Connect, etc. You can also record the sessions and make them available offline for employees who are not able to join the live sessions.


EDMs. Apart from virtual awareness briefings, you should also look to increase the frequency of Electronic Direct Mails (EDMs) to remind employees of the cyber hygiene they should continue to practice even when working from home.


Reward-based Quizzes. Besides briefings and EDMs, you can also take one step further to implement regular reward-based quizzes related to different cyber hygiene topics, in order to encourage and engage your employees in an interactive manner.  


Phishing Tests. Lastly, the best way to assess whether the above initiatives are effective is to run a phishing campaign against your internal employees. This could include regular phishing tests to assess their alertness in spotting such threats. You should also send out these emails in batches and at random across different departments and regions, so that employees are not able to “cheat” the test by sharing information with their peers about ongoing tests.

For the above initiatives, you could potentially include phishing topics that are related to the latest trending news or emails disguised as coming from legitimate remote meeting applications (e.g. meeting invites) in order to mimic the latest threats that the organization is facing.



There are a couple of key processes which would require review and revision, to ensure that they are relevant to the work-from-home model. For example: 


Access Control. With the increasing number of employees working from home, you need to review the existing access control processes, such as the requirements for an employee to qualify for remote access. For example, your Access Control List (ACL) for remote access may previously have been role-based, but this may no longer apply if practically all employees across different roles now require remote access. With this sudden growth in remote access employees, are the existing access control provisioning and review processes still practical and relevant? Of course, there are many other issues to consider in this area, too numerous to discuss in this post.


Incident Reporting. With the work-from-home model, you need to ensure that all employees working remotely are familiar with the incident reporting mechanisms in the event of anything suspicious. For example, they need to know the reporting hotline and email address they can reach on a 24/7 basis, as well as other automated reporting mechanisms, such as a tool for reporting phishing emails from within their Outlook application.


Cybersecurity Champions. Apart from the regular incident reporting mechanisms, you should also consider appointing representatives across different departments or teams as “Cybersecurity Champions”: regular employees (i.e. not part of the Cybersecurity Team) who are more proficient in the organization's relevant security processes. This initiative allows employees to reach out to someone they are familiar with if they are unsure about anything suspicious, or if they would like a quick refresher on cyber hygiene best practices.


Incident Response (IR). Are your existing IR processes robust enough and tailored to include the remote working model practiced by most of your employees right now? You should look to review your existing processes covering the following phases and ensure that they remain relevant to the latest Business and Operating models of your organization:

  • Triage
  • Investigation
  • Containment
  • Eradication
  • Remediation
  • After Action Review



Access Control. In terms of access control provisioning for remote working, you should consider the best approach to implementing multi-factor authentication in a manner that lets you scale the infrastructure up or down quickly and cost-effectively. The options could include the following, depending on your existing set-up, requirements and budget:

  • Hardware token
  • Software token
  • SMS/ Email OTP


For operations on critical servers that need to be performed remotely, there may be a need to differentiate them from the regular 2FA that is provisioned for normal remote access, by having a further step-up in the authentication process.


Monitoring and Detection. With the shift to the remote working model, there is a need to put more focus on the SIEM Use Cases related to VPN and remote access so that you can pick up such threats early. These are some examples of the Use Cases that may be relevant to the remote working model:

  • Detecting VPN access from suspicious locations
  • Simultaneous VPN Geo login from a single user
  • Suspicious remote logon hours from critical admin accounts
  • Remote admin session reconnected from a different workstation
  • Mass phishing attempts targeting your organization
  • and many more…
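As a sketch of how one of these use cases, the "simultaneous VPN geo login from a single user", might be expressed outside a SIEM, the snippet below flags users who authenticate from two different countries within a short window. The field names and the one-hour window are illustrative assumptions:

```python
from collections import defaultdict

def simultaneous_geo_logins(events, window=3600):
    """Flag users with VPN logins from different countries within `window` seconds.
    events: (timestamp, user, country) tuples, e.g. drawn from VPN auth logs."""
    by_user = defaultdict(list)
    for ts, user, country in sorted(events):
        by_user[user].append((ts, country))
    flagged = set()
    for user, logins in by_user.items():
        for (t1, c1), (t2, c2) in zip(logins, logins[1:]):
            if c1 != c2 and t2 - t1 < window:
                flagged.add(user)  # two countries too close together in time
    return flagged

events = [
    (1000, "alice", "US"), (1500, "alice", "RU"),  # two countries, 500s apart
    (1000, "bob", "US"),   (9000, "bob", "US"),    # same country, benign
]
assert simultaneous_geo_logins(events) == {"alice"}
```

A production rule would also account for VPN egress points, mobile carriers, and travel, but the core correlation logic is the same.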


Endpoint. There are many different layers of endpoint controls which become especially important for the work-from-home model, such as the following:

  • Hard Disk Encryption for all PCs, so that the corporate data remains protected even if they are misplaced
  • Mobile Device Management, which allows the IT department to manage corporate information stored on mobile devices and to securely remove that information remotely if the devices are misplaced.
  • Endpoint Detection and Response to detect advanced threats in your endpoint devices, which may not have been picked up by traditional Anti-Malware solutions.
  • Data Labelling Enforcement and Data Loss Prevention (DLP) – Enforce data labeling for all documents and emails created or modified, and implement DLP to detect or prevent unauthorized movement of sensitive data.
  • Application Whitelisting as a second layer of defense against unauthorized installation of malicious applications masqueraded as genuine ones into the corporate PC.  


Network and Servers. To ensure that you are not opening up the attack surface of your network and assets given the increased number of remote connections, you should consider the following:

  • VPN provisioning for all remote connections.
  • Network Access Control to disallow remote connections from PCs to the corporate network if the Anti-Virus definitions or patching status of the PCs are not up-to-date.
  • Jump Server. Consider placing a Jump Server in front of critical servers to serve as an added layer of defense. This is especially important if the servers are critical but need to be accessed remotely.


Email. For corporate email, you could implement a phishing email reporting tool with which your employees can easily report a phishing email to the Cybersecurity Team without having to manually write an email or call the reporting hotline. You should also implement a labelling mechanism to automatically label all emails received from external domains as “External”, as this has proven effective in raising the alertness of employees when they receive any external email, which could potentially be a phishing email or contain malicious artefacts.
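The external-domain labelling rule can be sketched in a few lines. The internal domain list and tag format below are assumptions for illustration; in practice this would typically be a mail-gateway rule (e.g. an Exchange transport rule):

```python
INTERNAL_DOMAINS = {"example.com"}  # assumption: your corporate domains

def label_subject(sender: str, subject: str) -> str:
    """Prepend an [External] tag when the sender's domain is not internal."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in INTERNAL_DOMAINS and not subject.startswith("[External]"):
        return "[External] " + subject
    return subject

assert label_subject("partner@vendor.net", "Invoice") == "[External] Invoice"
assert label_subject("it@example.com", "Patch window") == "Patch window"
```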


Threat Intel and Hunting. A common saying goes, “Know thyself and thy adversary to win a hundred battles”; this holds true in the realm of cyber defense as well. Having timely intel that is relevant to your threat landscape helps you perform sense-making and correlation of threats in your environment more effectively, and allows you to put the necessary measures in place early to look out for such threats. You should also conduct regular proactive threat hunting sessions by trained specialists (i.e. Threat Hunters) to discover low-lying and advanced attacks that might otherwise not be picked up by your regular controls.



Given the need to transition quickly, securely and efficiently to a remote working model, you will need to make the relevant changes to your existing Cyber Defense Architecture (in the areas of People, Process and Technology) within a short amount of time, to ensure that the level of cybersecurity risk your organization is potentially exposed to remains within an acceptable level. As such, it may be worthwhile to consider engaging external professionals for tasks that can be performed remotely, for example:

  • Perform a gap analysis on your existing processes (e.g. Incident Response and Reporting Processes, Access Provisioning Processes) through documents review and remote workshops that are focused on the remote working model and provide practical recommendations on what you can quickly implement to close the gaps.
  • Develop Use Cases that are tailored to the remote working model to ensure that the detection remains effective against the latest threat landscape.
  • Subscribe to a temporary Managed Security Service to outsource your Level 1 monitoring to an external party if you anticipate a surge in the number of alerts in the SOC during a particular period, so that you can free up the time of your internal SOC team to focus on investigation and incident response.
  • Subscribe to an IR Retainer service to implement a surge resourcing model, ensuring that you have sufficiently trained expert resources when they are needed most, to assist the internal IR Team during major incidents that may require highly specialized work such as malware analysis and digital forensics.
  • Conduct threat hunting sessions to discover any low-lying threats which may have been present for some time in your environment.



To conclude, there is no one-size-fits-all solution, but we hope that the above provides you with some useful insights for planning your Cyber Defense Architecture.
