All Places > Products > RSA NetWitness Platform > Blog > 2020 > January

Visualization techniques can help an analyst make sense of a given data set by exposing scale, relationships, and features that would be almost impossible to derive by just looking at a list of individual data points.  As of RSA NetWitness Platform 11.4, we have added new physics and layout techniques to the nodal diagram in Respond to make better sense of the data, both when using Respond as an Incident/Case Management tool and when simply using Respond to group events and track non-case investigations (see Using Respond for Data Exploration for some ideas).

 

 

 

Clustering by Entity Type

Prior to 11.4, the nodal graph evenly distributed the nodes regardless of entity type (Host, IP, User, MAC, File). Improvements were made to introduce intelligent clustering, such that entities of the same type not only retain their distinct color but also have a higher chance of being clustered together.  This layout improvement makes it easier to see relationships between different entity types, particularly when dealing with larger sets of data.

 

Variable Edge Forces Based on Relationship Type

Prior to 11.4, all edges between nodes were treated equally, resulting in lengths being rendered equally between all sets of connected nodes.  Improvements were made to adjust the relative attraction forces, helping to better distinguish attribute type relationships ("as", "is named", "belongs to", and "has file") from action type relationships ("calls", "communicates with", "uses").  Edges representing attributes will tend to be much shorter than those representing actions, which has the added benefit of reducing the number of overlapping edges, making relationships, scope, and sprawl much easier for an analyst to see at a glance.
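The effect of variable edge forces can be pictured with a toy spring model. This is a hypothetical sketch of the general technique, not NetWitness's actual layout engine: attribute edges are given a short rest length and action edges a long one, so connected nodes settle at different distances.

```python
# Hypothetical illustration: per-relationship rest lengths in a force-directed layout.
# The relationship names come from the post; the numbers and physics are invented.

REST_LENGTH = {
    # attribute-type relationships render short...
    "as": 30, "is named": 30, "belongs to": 30, "has file": 30,
    # ...action-type relationships render long
    "calls": 120, "communicates with": 120, "uses": 120,
}

def relax(distance, relationship, steps=200, rate=0.1):
    """Iteratively move a pair of connected nodes toward the rest length
    of the spring joining them (1-D for simplicity)."""
    target = REST_LENGTH[relationship]
    for _ in range(steps):
        distance += rate * (target - distance)  # spring force pulls toward rest length
    return distance

# Both node pairs start 80px apart; the attribute edge contracts, the action edge stretches.
print(round(relax(80, "is named")))           # settles near 30
print(round(relax(80, "communicates with")))  # settles near 120
```

Because attribute edges settle much shorter than action edges, attributes visually "hug" their entity while actions span across the graph, which is what reduces edge overlap.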

 

 

Separation of Disconnected Clusters

Prior to 11.4, all nodes and edges were grouped into one large cluster, even if certain nodes in the data set did not have any relationship with others, requiring tedious manual dragging of nodes in order to distinguish the groupings.  Now, disjoint clusters of nodes are repelled from one another upon initial layout, making it extremely clear which sets of data are joined by some kind of relationship.  This is particularly helpful when using Respond for general data exploration of larger data sets (vs. visualizing a single incident) that do not necessarily have commonality: it draws the analyst's eyes to potentially interesting outliers and once again reduces the number of overlapping edges that have historically made certain nodal graphs difficult to read, depending on the data set.


Improved Nodal Interaction

In addition to the physics governing new layouts, improvements have been made to nodal interaction to help take advantage of them.  Given the potential size and complexity of data sets, despite the introduction of layout and force techniques, the layout may not always be optimal.  The goal was to improve interaction by minimizing the number of graph drags needed by an analyst to make sense of even the most tangled data sets.  When dragged, nodes with high connectivity will generally attract other nodes with which a relationship exists.  Also, once any node is manually dragged into position, manipulating the position of other nodes will no longer impart a moving force, meaning the original dragged node will stay in place.  To "unpin" dragged nodes and have them spring back into place, simply double click.

 

 

As always, if you have any feedback or suggestions on how to make this experience even better, please leave a comment or, better yet, search, submit, and vote on product enhancements on the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform 

 

Happy Hunting!

Did you know that you can use Respond for data exploration, even if you aren't using it for Incident Management?  While the naming convention certainly does not suggest it, Respond can be just as useful outside of incident response as a place for analysts to group events of interest during investigation and hunting efforts.  Using Respond as more of an analyst workspace can help teams collaborate better, track streams of thought, and take advantage of Respond's new and improved visualization capabilities as of 11.4 (see Visualization Enhancements in RSA NetWitness Platform 11.4 for details).

 

 

 

Step 1 - Create an "Incident" from Events view

Once you have a set of data that carries significance, you can select any set or subset of events contained in a data set and use it to create a new "Incident".  For our purposes here, you'll have to look past the current naming conventions of Alerts and Incidents and just think of it as a grouping of events (log, endpoint, or network sessions).

 

What data sets to use is largely up to you, but this type of approach is particularly useful when following a methodology that requires systematically carving larger data sets into smaller, more manageable ones.  The example above is based on RSA's Network Hunting Guide, details of which can be found here: RSA NetWitness Hunting Guide 

 

Step 2 - Open in Respond

Once opened, all of the capabilities available when using Respond for Incident Management are available.  It doesn't mean you have to use all of them, but you may find some of them to be a handy way to tag in other analysts (Tasks) and keep track of your analysis (Journal).  And if you do happen to find something malicious in the data set, all of the relevant information is already in one place.

 

In the example above, we're seeing if anything interesting shows up in the data set for "All outbound HTTP sessions using the POST method".  The nodal diagram can be a useful way to see how the data is distributed between entities (larger bubbles meaning a larger number of events), which subsets of the larger data set deal with disjoint sets of entities (Files, Hosts, IPs, Users, MAC Addresses), and can key your eye towards groupings that warrant deeper levels of inspection.

 

Step 3 - Use Respond Tools to Track, Pivot, and Collaborate


View Event Cards


In-line Event Reconstruction (e.g. Network Reconstruction)


Entity Details - Pivot To Other Views 

 

Collaboration

Add New Events 

And don't forget that you can always add more events to the same Respond incident to expand investigation if more leads are uncovered. Simply start from the top, and "Add To Incident".

 

As always, if you have any feedback or suggestions on how to make this experience even better, please leave a comment or, better yet, search, submit, and vote on product enhancements on the RSA Ideas portal: RSA Ideas for the RSA NetWitness Platform 

 

Happy Hunting!

The newest version of the RSA NetWitness Platform is almost here!

 

We’re excited to release the 11.4 version of the RSA NetWitness Platform very soon. We’ve worked hard on many new features and enhancements that will help users detect, investigate, and understand threats in their organizations.

 

This version introduces new features in analyst investigation, UEBA, Respond, administrative functions, and RSA NetWitness Endpoint that collectively make security teams more efficient and arm them with the most relevant and actionable security data. Some of the more noteworthy 11.4 features include:

 

  • Enhanced Investigation Capabilities: Fully integrated free-text search, auto-suggestion, & search profiles
  • Smarter Network Threat & Anomaly Detection: UEBA expanded to analyze packet data with 24 new indicators
  • Improved Visualization of Incidents: Respond visualizations are clearer with enhanced relationship mapping
  • Expanded Functions for Endpoint Response: Powerful Host forensic actions and dynamic analysis directly from the RSA NetWitness Platform
  • Simplified File Collection from Endpoint Agents
  • Single Sign-on Capability
  • Distributed Analyst User Interfaces
  • New Health and Wellness (BETA)


This is not an exhaustive list of all the changes; please see the Release Documentation for the nitty-gritty details and the full list of everything we’ve changed in this release.

 

In the coming days and weeks, we’ll be publishing additional blog entries that demonstrate how this new functionality operates, and the benefits customers can expect to realize in 11.4.

 

The RSA Product team is excited for you to try this new release!

High availability (HA) is a common need in many enterprise architectures, so NetWitness Endpoint has some built-in capabilities that allow organizations to achieve an HA setup with fairly minimal configuration and implementation effort.

 

An overview of the setup:

  1. Install Primary and Alternate Endpoint Log Hybrids (ELH)
  2. Create a DNS CNAME record pointing to your Primary ELH
  3. Create Endpoint Policies with the CNAME record's alias value

 

The failover procedure:

  1. Change the CNAME record to point to the Alternate ELH

 

And the failback procedure:

  1. Change the CNAME record back to the Primary ELH
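Conceptually, the CNAME alias acts as a level of indirection: agents always resolve the same name, and failover or failback is a single record change. A toy Python resolver (hypothetical names and addresses, not your actual DNS server) illustrates why no agent or policy change is ever needed:

```python
# Toy illustration of CNAME-based failover. The zone data below is invented;
# in production these records live in your DNS server.

dns_zone = {
    "elh.example.com": "elh-primary.example.com",   # CNAME alias -> Primary ELH
    "elh-primary.example.com": "10.0.0.10",
    "elh-alternate.example.com": "10.0.0.20",
}

def resolve(name):
    """Follow CNAME chains until an address is reached (toy resolver)."""
    while not dns_zone[name][0].isdigit():
        name = dns_zone[name]
    return dns_zone[name]

print(resolve("elh.example.com"))  # 10.0.0.10 -- agents check in to the Primary

# Failover: point the alias at the Alternate ELH; the agents' configured
# alias never changes, so no policy update is required.
dns_zone["elh.example.com"] = "elh-alternate.example.com"
print(resolve("elh.example.com"))  # 10.0.0.20 -- agents now follow the alias
```

The agents only ever know the alias, which is exactly why the Server Alias field in the EDR policy (covered below) is the most important part of the setup.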

 

To start, you'll need to have an ELH already installed and orchestrated within your environment.  We'll assume this is your Primary ELH: the hybrid your endpoint agents ordinarily check into, and the one you need an alternate for in the event of a failure (hardware breaks, datacenter loses power, region-wide catastrophic event...whatever).

 

To install your Alternate ELH (where your endpoint agents should failover to) you'll need to follow the instructions here: https://community.rsa.com/docs/DOC-101660#NetWitne  under "Task 3 - Configuring Multiple Endpoint Log Hybrid".

**Make sure that you follow these instructions exactly...I did not the first time I set this up in my lab, and so of course my Alternate ELH did not function properly in my NetWitness environment...**

 

Once you have your Alternate ELH fully installed and orchestrated, your next step will be to create a DNS CNAME record.  This alias will be the key to the entire HA implementation.  You'll want to point the record to your Primary ELH, e.g.:

 

**Be aware that Windows DNS defaults to a 60-minute TTL...this will directly impact how quickly your endpoint agents will point to the Target Host FQDN in the CNAME record, so if 60 minutes is too long to wait for endpoints to be available during a failover, you might want to consider setting the TTL lower...** (props to John Snider for helping me identify this in my lab during my testing)

 

And your last step in the initial setup will be to create Endpoint Policies that use this alias value.  In the NetWitness UI, navigate to Admin/Endpoint Sources/Policies and either modify an existing EDR policy (the Default, for instance) or create a new EDR policy.  The relevant configuration option in the EDR policy setting is the "Endpoint Server" option within the "Endpoint Server Settings" section:

 

When editing this option in the Policy, you'll want to choose your Primary ELH in the "Endpoint Server" dropdown and (the most important part) enter the CNAME record's alias as the "Server Alias":

 

Add and/or modify any additional policy settings as required for your organization and NetWitness environment, and when complete be sure to Publish your changes. (Guide to configuring Endpoint Groups and Policies here: NetWitness Endpoint Configuration Guide for RSA NetWitness Platform 11.x - Table of Contents)

 

You can test your setup and environment by running some nslookup commands from endpoints within your environment to check that your DNS CNAME is working properly and endpoints are resolving the alias to the correct target (the Primary ELH at this point).  You can also create an Endpoint EDR Policy to point some active endpoint agents to the Alternate ELH (**this check is rather important, as it will help you confirm that your Alternate ELH is installed, orchestrated, and configured correctly**).

 

Prior to moving on to the next step, ensure that all your agents have received the "Updated" Policy - if any show in the UI with a "Pending" status after you've made these changes, then that means they have not yet updated:

 

Assuming all your tests come back positive and all your agents' policies are showing "Updated", you can now simulate a failure to validate that your setup is, indeed, HA-capable. This can be quite simple, and the failover process similarly straightforward.

 

  1. Shutdown the Primary ELH
  2. Modify the DNS CNAME record to point to the Secondary ELH
    1. This is where the TTL value will become important...the longer the TTL the longer it may take your endpoints to change over to the Alternate ELH
    2. Also...there is no need to change the existing Endpoint Policy; the Server Alias you already entered ensures your endpoints will follow the CNAME record to its new target
  3. Confirm that you see endpoints talking to the Alternate ELH
    1. Run tcpdump on the Alternate ELH to check for incoming UDP and TCP packets from endpoints
    2. See hosts showing up in the UI
    3. Investigate hosts

 

After Steps 1 and 2 are complete, you can confirm that agents are communicating with the Alternate ELH by running tcpdump on that Alternate ELH to look for the UDP check-ins as well as the TCP tracking/scan data uploads:

 

...as well as confirm in the UI that you can see and investigate hosts:

 

 

Once your validation and testing is complete, you can simply reverse the procedure by powering on the Primary ELH, powering off the Alternate ELH, and modifying the CNAME record to point back to the Primary ELH.

 

Of course, during an actual failure event, your only action to ensure HA of NetWitness Endpoint would be to change the CNAME record, so be sure to have a procedure in place for an emergency change control, if necessary in your organization.

A couple of months ago, Mr-Un1k0d3r released a lateral movement tool that relies solely on DCE/RPC (https://github.com/Mr-Un1k0d3r/SCShell). This tool does not create a service and drop a file like PsExec or similar tools would do, but instead uses the ChangeServiceConfigA function (and others) to edit an existing service and have it execute commands, making this a fileless lateral movement tool.

 

SCShell is not a tool designed to provide a remote semi-interactive shell. It is designed to allow the remote execution of commands on an endpoint utilising only DCE/RPC in an attempt to evade common detection mechanisms; while this tool is slightly stealthier than most in this category, it’s also a bit more limited in what an attacker can do with it.

 

When we first looked at this, we didn't have much in terms of detection, but William Motley from our content team promptly produced an update to the DCERPC parser, which is the basis of this post.

 

The Attack

In the example screenshot below, I run the SCShell binary against an endpoint to launch calc.exe. While this is of no use to an attacker, it is an example that we can use to visually confirm the success of the attack on the victim machine:

 

 

It could also be used to launch a Metasploit reverse shell, for example, as shown in the screenshot below. We will cover some of the interesting artifacts left over from this execution in a separate post. Obviously, this is not exactly what an attacker would do; in their case, something like launching an additional remote access trojan or tool would be more likely:

 

SCShell edits an existing service on the target endpoint; it does not create a new one. The service therefore needs to already exist on the target. In the example above, I use defragsvc, as it is a common service on Windows endpoints.

 

RSA NetWitness Network Analysis

There was a recent update to the DCERPC parser, available via RSA NetWitness Live (thanks to Bill Motley). This parser now extracts the API calls made over DCE/RPC, which can be useful in detecting suspect activity over this protocol. If you have set up your subscriptions correctly for this parser (which you should have), it will be updated automatically; otherwise you will have to push it manually.

 

So, to start my investigation (as per usual) I take a look at my compromise meta keys and notice a meta value of "remote service control" under the Indicators of Compromise [ioc] meta key. This is an area that should be regularly explored to look for anomalous activity:

 

Pivoting on this meta value and opening up the action and filename meta keys, we can see the interaction with the svcctl interface that is being used to call API functions to query, change, and start an existing service:

 

 

  • StartServiceA - Starts a service
  • QueryServiceConfigA - Retrieves the configuration parameters of the specified service
  • OpenServiceA - Opens an existing service
  • OpenSCManagerW - Establishes a connection to the service control manager on the specified computer and opens the specified service control manager database
  • ChangeServiceConfigA - Changes the configuration parameters of a service

 

The traffic sent over DCE/RPC is encrypted, so reconstructing the sessions will not help here, but given that we have full visibility we can quickly pivot to endpoint data to get the answers we need. The following logic would allow you to identify this remote service modification behaviour taking place in your environment and subsequently the endpoints of interest for investigation:

service = 139 && filename = 'svcctl' && action = 'openservicea' && action = 'changeserviceconfiga' && action = 'startservicea'
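The query above can also be read as code. Here is a hedged Python sketch of the same boolean logic applied to mock session metadata (the session records are invented for illustration, not the actual NetWitness data model):

```python
# Flag sessions that ride SMB (service 139), touch the svcctl named pipe, and
# call the open/change/start service APIs in combination -- the SCShell pattern.

REQUIRED_ACTIONS = {"openservicea", "changeserviceconfiga", "startservicea"}

def is_scshell_like(session):
    return (
        session.get("service") == 139
        and "svcctl" in session.get("filename", [])
        and REQUIRED_ACTIONS <= set(session.get("action", []))
    )

sessions = [
    {"service": 139, "filename": ["svcctl"],
     "action": ["openscmanagerw", "openservicea", "queryserviceconfiga",
                "changeserviceconfiga", "startservicea"]},           # SCShell-style
    {"service": 139, "filename": ["srvsvc"], "action": ["netshareenuma"]},  # benign SMB
]

print([is_scshell_like(s) for s in sessions])  # [True, False]
```

Note that all three actions must appear in the same session; a lone StartServiceA call is routine administration, while the open/change/start combination is what makes the behaviour suspicious.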

 

RSA NetWitness Endpoint Analysis

A great way to perform threat hunting on a dataset is frequency analysis: it allows us to bubble up outliers and locate suspect behaviour with respect to your environment - an anomaly in one environment can be common in another. In this instance, this could be done by looking for less common executables being spawned by services.exe - the following query would be a good place to start: device.type='nwendpoint' && filename.src='services.exe' && action='createprocess'. We would then open up the Filename Destination meta key and see a large number of results returned:

 

 

Typically, we tend to view the results from our queries in descending order. In this instance, we want to see the least common, so we switch the sorting to ascending to bubble up the anomalous executables. Analysing the results, as shown in the screenshot below, we see a couple of interesting outliers: calc.exe and cmd.exe:
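The ascending-sort idea can be sketched in a few lines of Python; the process names and counts below are made up for illustration:

```python
# Frequency analysis on Filename Destination values: count how often each child
# process of services.exe appears, then sort ascending so outliers surface first.
from collections import Counter

# Hypothetical filename.dst values returned by the query above.
children = (["svchost.exe"] * 500 + ["msiexec.exe"] * 40
            + ["cmd.exe"] * 2 + ["calc.exe"] * 1)

counts = Counter(children)
for name, count in sorted(counts.items(), key=lambda kv: kv[1]):
    print(f"{count:>4}  {name}")
# calc.exe and cmd.exe bubble to the top of the ascending list
```

The same idea scales to any meta key: sort descending to profile what is normal, ascending to hunt for what is not.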

 

 

Pivoting into the Events view for cmd.exe, we can see it using mshta to pull a .hta file; clearly this is not good:

 

 

This activity, whereby services.exe spawns a command shell, is out-of-the-box content and can be found under the Behaviors of Compromise [boc] meta key, so this would also be a great way to start an investigation:

 

 

Now that we have suspect binaries of interest, we have files and endpoints we could perform further analysis on to get our investigation fully underway, but for this post I will leave it here.

 

Conclusion

It is important to ensure that all your content in NetWitness is kept up to date - automating your subscriptions to Lua parsers, for example, is a great start. It ensures that you have all the latest metadata being created from the protocols, and improves your ability as a defender to hunt and find malicious behaviours.

 

It is also important to remember that while there may not be a lot of activity from the initial execution of, say, a binary, at some point it will have to perform some activity in order to achieve its end goal. Picking up on that activity will allow defenders to pull the thread back to the originating malicious event.

DNS over HTTPS (DoH) was introduced to increase privacy and help prevent the manipulation of DNS data by utilising HTTPS to encrypt it. Mozilla and Google have been testing versions of DoH since June 2018, and have already begun to roll it out to end-users via their browsers, Firefox and Chrome. With the adoption rates of DoH increasing, and the fact that C2 frameworks using DoH have been available since October 2018, DoH has become an area of interest for defenders; one C2 that stands out is goDoH by SensePost (https://github.com/sensepost/goDoH).

 

goDoH is a proof of concept Command and Control framework written in Golang that uses DNS-over-HTTPS as a transport medium. Currently supported providers include Google and Cloudflare, but it also contains the ability to use traditional DNS.

 

The Attack

With goDoH, the same binary is used for the C2 server and the agent that will connect back to it. In the screenshot below, I am setting up the C2 on a Windows endpoint - I specify the domain I will be using, the provider to use for DoH, and that this is the C2:

 

 

On the victim endpoint I do the same, but instead specify that this is the agent:

 

 

After a short period of time, our successful connection is made, and we can begin to execute our reconnaissance commands:

 

 

RSA NetWitness Platform Network Analysis: SSL Traffic

Given its default implementation using SSL, there is not a vast amount of information we can extract, however, that does not mean that we cannot locate DoH in our networks. A great starting point is to look at who currently provides DoH - after some Googling I came across a list of DoH providers on the following GitHub page:

 

 

These providers could be converted into an application rule (or Feed) to tag them in your environment, or utilised in a query to retroactively view DoH usage in your environment. This would help defenders to initially pinpoint DoH usage:

alias.host ends 'dns.adguard.com','dns.google','cloudflare-dns.com','dns.quad9.net','doh.opendns.com','doh.cleanbrowsing.org','doh.xfinity.com','dohdot.coxlab.net','dns.nextdns.io','dns.dnsoverhttps.net','doh.crypto.sx','doh.powerdns.org','doh-fi.blahdns.com','dns.dns-over-https.com','doh.securedns.eu','dns.rubyfish.cn','doh-2.seby.io','doh.captnemo.in','doh.tiar.app','doh.dns.sb','rdns.faelix.net','doh.li','doh.armadillodns.net','jp.tiar.app','doh.42l.fr','dns.hostux.net','dns.aa.net.uk','jcdns.fun','dns.twnic.tw','example.doh.blockerdns.com','dns.digitale-gesellschaft.ch'

NOTE: This is by no means a definitive list of DoH providers. You can use the above as a base, but should collate your own.
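The `alias.host ends ...` logic amounts to suffix matching, which is easy to reproduce outside the product. A small Python sketch, using a trimmed, illustrative subset of the provider list:

```python
# Flag sessions whose hostname ends with a known DoH provider domain.
# This provider tuple is a sample for illustration, not a definitive list.

DOH_PROVIDERS = ("dns.google", "cloudflare-dns.com", "dns.quad9.net",
                 "doh.opendns.com", "dns.adguard.com")

def is_doh_host(alias_host):
    # str.endswith accepts a tuple, mirroring the multi-value "ends" query
    return alias_host.endswith(DOH_PROVIDERS)

print(is_doh_host("cloudflare-dns.com"))          # True
print(is_doh_host("mozilla.cloudflare-dns.com"))  # True -- subdomains still match
print(is_doh_host("www.example.com"))             # False
```

Suffix matching (rather than exact matching) is deliberate: providers often serve DoH from subdomains, such as the Mozilla-specific Cloudflare endpoint.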

 

Running this query through my lab, I can see there is indeed some DoH activity for the Cloudflare provider:

As this traffic is encrypted, it is difficult to ascertain whether or not it is malicious, but there are a couple of factors that may help us. Firstly, we could reduce the meta values to a more manageable amount by filtering on "long connection", under the Session Analysis meta key; this is because C2 communications over DoH would typically be long-lived:

 

 

We could then run the JA3 hash values through a lookup tool to identify any outliers (in this instance I am left with one due to my lab not capturing a lot of data):

 


For details on how to enable JA3 hashes in the RSA NetWitness Platform, take a look at one of our previous posts: Using the RSA NetWitness Platform to Detect Command and Control: PoshC2 v5.0 

Running the JA3 hash (706ea0b1920182287146b195ad4279a6) through OSINT (https://ja3er.com/form), we get results back for this being Go-http-client/1.1; this is because the goDoH application is written in Golang. This stands out as an outlier, and the source of this traffic would be a machine to perform further analysis on:
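For reference, a JA3 hash is simply the MD5 digest of a delimited summary of TLS Client Hello fields (SSLVersion,Ciphers,Extensions,EllipticCurves,EllipticCurvePointFormats). The field values below are illustrative, not goDoH's actual fingerprint:

```python
# Compute a JA3-style hash from a fingerprint string. The string's field values
# here are made up; a real one is built from a captured TLS Client Hello.
import hashlib

ja3_string = "771,4865-4866-4867,0-11-10,29-23-24,0"
ja3_hash = hashlib.md5(ja3_string.encode("ascii")).hexdigest()
print(ja3_hash)  # a 32-character hex digest to search on ja3er.com or similar
```

Because the hash covers only Client Hello parameters, every client built on the same TLS stack (e.g. Go's net/http) shares a fingerprint, which is exactly why a Go-http-client hash stands out among browser traffic.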

 

 

 

RSA NetWitness Platform Network Analysis: SSL Intercepted Traffic

Detecting DoH when SSL interception is in place becomes far easier. DoH requests for Cloudflare, for example, supply a Content-Type header that allows us to easily identify it (besides the alias.host value):

 

Determining whether the DoH connections are malicious also becomes far easier when SSL interception is in place, because it allows defenders to analyse the payload that would typically be encrypted. The following screenshot shows the decrypted DoH session between the client and Cloudflare - here we are able to see the DNS request and response in the clear, which divulges the C2 domain being used, go.doh.dns-cloud.net. We can also see that the JA3 hash we previously reported was correct, as the User-Agent is Go-http-client/1.1:

 

 

The session for this DoH C2 traffic is quite large, so I am unable to show it all - this is due to the limited amount of information that can be transmitted via each DNS query. An example of data being transmitted via an A record can be seen below - the data is encrypted so won't make sense by merely viewing it:

 

 

Within this session there are hundreds of requests for the go.doh.dns-cloud.net domain with a very high variability in the FQDN seen in the name parameter of the query; this is indicative behaviour of C2 communication over DNS. Below I have merged five of the requests together in order to help demonstrate this variability:

 

 

Given the use of TCP for HTTPS vs. the common use of UDP for DNS, the traffic shows as a single session in the RSA NetWitness Platform due to TCP session/port reuse; normally this type of activity would present itself over a larger number of RSA NetWitness Platform sessions when using native DNS.

 

RSA NetWitness Endpoint Analysis

Looking at my compromise keys, I decide to start my triage by pivoting into the Events view for the meta value "runs powershell with http argument", as shown below.

 

 

From the following screenshot, we can see an executable named googupdater.exe, running out of the user's AppData directory, executing a PowerShell command to get the public IP of the endpoint. We also get to see the parameter passed to the googupdater.exe binary, which reveals the domain being contacted:

 

NOTE: googupdater.exe is the goDoH binary and was renamed for dramatic effect.

We could have also pivoted on the "outbound from unsigned appdata directory" meta value, which would have led us to this suspect binary as well. While from an Endpoint perspective this is just another compiled tool communicating over HTTPS, the fact that it needs to spawn external processes to execute activity would lead us to an odd parent process:

 

 

Given this scenario in terms of Endpoint, this would lead us back to common hunting techniques, but in the interest of brevity, I won't dig deeper for this tool. The key items would be an uncommon parent process for some unusual activity, and the outbound connections from an unsigned tool. While both can at times be noisy, in conjunction with other drills, they can be narrowed down to cases of interest.

 

Conclusion

This post further substantiates the requirement for SSL interception, as it vastly improves the defender's capability to investigate and triage potentially malicious communications. While it is still possible to identify suspect DoH traffic without SSL interception, it can be incredibly difficult to ascertain its intentions. DNS is also a treasure trove for defenders, and the introduction and use of DoH could vastly deplete their ability to protect the network effectively.

When performing network forensics, all protocols should be analysed; however, some tend to be more commonly abused than others, one of these being DNS. While not as flexible as, say, HTTP, it flows through, and out of, networks much more easily due to how it is typically configured. This means that DNS can be utilised to encapsulate data that is routed outside the network to an attacker-controlled name server, allowing data exfiltration or the download of tools. In this post, I will cover how we can use NetWitness Network to analyse the DNS protocol effectively. To do so, we will use a tool called DNS2TCP (https://github.com/alex-sector/dns2tcp) in our lab to generate some sample traffic.

 

DNS record types

The DNS standard defines more than 80 record types, but many of these are seldom used. The most common are:

  • A Record - used to map a host and domain name to the IP address (forward lookup)
  • PTR Record - used to map an IP address to host and domain name (reverse lookup)
  • MX Record - to return host and domain mapping for mail servers
  • CNAME Record - used to return an alias to other A or CNAME records
  • TXT Record - used to provide the ability to associate arbitrary text with a host or other name

 

In this post, we will focus on TXT records being used to encapsulate data. TXT records are typically utilised because the size of this field allows larger amounts of data to be transferred in a single request compared to A or AAAA records. This field is also legitimately used to provide SPF records, specify the domain owner, return the full name of the organization, as well as other similar uses.
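The mechanics can be sketched in Python: encode a payload and split it into TXT-sized strings. Each character-string within a TXT record carries at most 255 bytes, so tunnelling tools encode their data and chunk it (the payload below is invented):

```python
# Why TXT records suit tunnelling: encode arbitrary bytes as printable text,
# then split into <=255-character strings that fit the TXT record format.
import base64

payload = b"secret-bytes-to-exfiltrate" * 30   # 780 bytes of pretend data
encoded = base64.b64encode(payload).decode("ascii")

CHUNK = 255
txt_strings = [encoded[i:i + CHUNK] for i in range(0, len(encoded), CHUNK)]

print(len(encoded), "encoded characters in", len(txt_strings), "TXT strings")
```

An A record, by contrast, returns only 4 bytes of usable data per answer, which is why TXT (and its larger capacity per request) is the record type of choice for tools like DNS2TCP.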

 

How Does NetWitness Network Analyse DNS?

The DNS_verbose_lua parser is available from RSA Live; it extracts metadata of interest from the DNS protocol that can help a defender identify anomalous DNS traffic. We suggest that you subscribe to this parser (and others), as they are updated regularly.

 

 

Which metadata can help with the analysis?

The following meta keys are particularly useful for identifying, and subsequently analysing the DNS protocol:

Meta Key          | Description                                                                         | Indexed by default
service           | The protocol as identified by NetWitness                                            | Yes
alias.host        | The hostnames being resolved                                                        | Yes
alias.ip          | The IP address that the hostname resolves to                                        | Yes
service.analysis  | Meta values surrounding the characteristics of the protocol                         | Yes
tld               | The top-level domain extracted from the hostname                                    | Yes
sld               | The part of the hostname directly below the top-level domain (second-level domain)  | No
dns.resptext      | The response for the DNS TXT record                                                 | No
dns.querytype     | The human-readable value for the DNS query type performed in the request            | No
dns.responsetype  | The human-readable value for the DNS query type returned by the response            | No

 

You will notice from the table above that some of the meta keys are not indexed by default. The following entries would therefore need to be added to the Concentrator's index file so that they can be used in investigations:

<key description="SLD" format="Text" level="IndexValues" name="sld" defaultAction="Closed" valueMax="500000" />
<key description="DNS Query Type" format="Text" level="IndexValues" name="dns.querytype" valueMax="100" />
<key description="DNS Response Type" format="Text" level="IndexValues" name="dns.responsetype" valueMax="100" />
<key description="DNS Response TXT" format="Text" level="IndexValues" name="dns.resptext" valueMax="500000" />

NOTE: Details regarding the Concentrator index, such as how it works, ensuring optimal performance, and how to add entries can be found here: https://community.rsa.com/docs/DOC-100556

 

DNS2TCP

DNS2TCP is a tool that encapsulates TCP sessions within the DNS protocol. On the server side, we configure the tool to listen on UDP port 53, as per the DNS standard. We also specify our domain, "dns2tcp.slbwyfqzxn.cloud", and the resources. Resources are local or remote services listening for TCP connections - in the example below, I specify a resource named SSH for connections to port 22 on 127.0.0.1:

The client will act as a relay for a specific resource, SSH in our example, and will listen on the specified port (2222) and forward traffic from the local machine to the remote server via DNS TXT records:

Once the communication between the client and server has been established, we can then connect to the server using SSH that will be encapsulated in DNS:

 

NetWitness Network Analysis

Homing in on DNS traffic is incredibly easy with NetWitness: we merely need to look for DNS under the Service meta key, or execute the query "service = 53". To place a focal point on possibly encoded DNS TXT records, we can pivot on the meta values "dns base36 txt record" and "dns base64 txt record", located under the "Session Analysis" meta key. Tunnelling tools encode the data in the TXT record due to the limitations placed on the record type, such as only allowing printable ASCII symbols and a maximum length of 255 characters.

From the screenshot below, we can see a suspicious sounding SLD with a large number of NetWitness sessions that would be worth investigating.

From here, I like to open the "SLD", "Hostname Alias", and "DNS Response Text" meta keys. What you can see from the screenshot below is a large number of unique "alias.host" or "dns.resptext" values associated with a single SLD, which is indicative of possible DNS tunnelling. The requests are made highly unique so that they are not resolved from the local DNS cache or the cache on the internal DNS servers, ensuring every query reaches the attacker-controlled name server.
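This hunting observation can be expressed as a simple heuristic: count the distinct hostnames seen per SLD and flag the outliers. A minimal Python sketch follows; the threshold and the (sld, hostname) data model are illustrative assumptions to tune for your environment:

```python
from collections import defaultdict

def suspicious_slds(sessions, threshold=100):
    """sessions: iterable of (sld, hostname) pairs extracted from DNS meta.
    Returns SLDs with an unusually high number of distinct hostnames,
    a hallmark of DNS tunnelling."""
    hosts_per_sld = defaultdict(set)
    for sld, hostname in sessions:
        hosts_per_sld[sld].add(hostname)
    return {sld: len(hosts) for sld, hosts in hosts_per_sld.items()
            if len(hosts) >= threshold}
```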

The screenshot below shows the elevated number of distinct TXT records associated with the single SLD "slbwyfqzxn".

NOTE: Some commercial software packages, such as antivirus and antispam tools, show similar behaviour and exchange data over DNS TXT records for their own security checks.

 

Reconstructing the sessions, we can see the TXT records and use the in-built Base64 decoding capability to see what data was encapsulated. In the screenshot below, we can see the initialisation of an SSH session:
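The same Base64 decoding that NetWitness performs in the reconstruction view can be reproduced offline. The record value below is a fabricated example, not taken from the capture, showing how an SSH banner would surface from a decoded TXT record:

```python
import base64

# Fabricated TXT record value; a real one would come from the
# session reconstruction in NetWitness.
txt_record = "U1NILTIuMC1PcGVuU1NIXzcuOQ=="

decoded = base64.b64decode(txt_record)
print(decoded.decode("ascii"))  # SSH-2.0-OpenSSH_7.9 - an SSH banner inside DNS
```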

 

Conclusion

DNS is commonly overlooked and is an area that defenders should pay more attention to: it is a great way to exfiltrate data out of an otherwise “secured” network. DNS2TCP is just one of many tools that allow data to be encapsulated within DNS, but they all share similar behaviour and can be identified using techniques like those shown in this post.

In order to defend their network effectively, analysts need to understand the threat landscape, and more specifically how individual threats present themselves in their tools. With that in mind, I started researching common Remote Access Trojans/Tools (RATs) that are publicly available for anyone to use. This post will walk you through Gh0st RAT (https://attack.mitre.org/software/S0032/), its footprint, and how the RSA tools help you detect its presence from both the endpoint and packet perspective.

 

Just like any malware, a Gh0st infection will consist of some sort of delivery mechanism. Most mature SOCs with mature tools should get an alert on either its delivery (using common methods such as phishing, drive-by download, etc) or subsequent presence on an endpoint. However, let’s assume that it does not get detected, and as an analyst you are proactively hunting in your environment.  How would you go about detecting the presence of such a Trojan?

 

Gh0st Overview and Infection

Gh0st is a very well-documented RAT, but below you'll find a quick overview of some of its functionality and the way it was configured for testing purposes. I will also show you how our tools can help identify Gh0st. The Gh0st server component is a standalone portable executable (PE) file that presents a simple interface when executed. Once executed, the server component is used not only to control infected systems, but also to configure the client component that is delivered to victims.

 

                    

Figure 1: Gh0st Interface

 

The Build tab, which is used to configure the client executable, had some default HTTP settings that I changed to use “gh0st[.]com” for simplicity.  I also created an entry in DNS for this domain to point to our command and control (C2) server.

 

                    

Figure 2: Gh0st HTTP and Service Options

 

You can also see that the Build tab contains options for the service display name and description.  After I created the malicious client component sample, I crafted an email using Outlook Web Access on Exchange and sent it to the victim.

 

                      

Figure 3: Phishing Email

 

The victim machine had Windows 10 installed with all default settings.  Once the user received the email, I was surprised to learn that this wasn't flagged by the local antivirus or any other tools.  Even after I executed the PE file, it was not flagged as malicious.  It installed the service, as seen below, but there was no initial identification and no “alarms” were raised.

                                             

Figure 4: Service Installed

 

 The malware executed fine, and I could see the connection through the Gh0st Dashboard on the server component.

 

                   

Figure 5: Successful Client Connection

 

From here, I used some of the built-in features of Gh0st to control and interact with the endpoint.  Here are some of those features:

 

                                                

Figure 6: Options Once Connected

 

I first opened a “Remote Shell”, which essentially gives you a command prompt with SYSTEM-level permissions.

 

               

Figure 7: Remote Shell

 

From there, I executed net user commands through the remote shell.  The net user commands are used for reconnaissance to identify which users are on the machine, and, in conjunction with a username, which groups that user belongs to.

 

                     

Figure 8: Running Commands Using Remote Shell

 

Next, I modified the registry to allow cleartext credentials to be stored in memory. I then copied procdump over to the machine using the “File Manager” feature and used it to dump the lsass.exe memory into a .dmp file.  Finally, I copied that .dmp file back to my C2 server.  This is a common technique for getting credentials out of memory in cleartext, which an attacker can then use to access other parts of the network.

 

                        

Figure 9: File Manager

 

Figure 10: Procdump on LSASS

 

RSA NetWitness Endpoint 4.4 Detection

Prior to performing any of the aforementioned steps, or any additional interaction with the hosts, I installed RSA NetWitness Endpoint on the victims and created a baseline of the IIOCs.  These IIOCs came from multiple machines, since I had both Windows 10 and Windows 7 victims with agents installed.  There was only one Level 1 IIOC and not much else going on.

Once the email was opened and the file was clicked, there were some additional IIOCs that fired including:

  • Unsigned writes executable
  • Unsigned writes executable to appdata local directory
  • Unsigned writes executable to Windows directory
  • Renames files to executable

 

Figure 11: IIOCs Fired in RSA NetWitness Endpoint

 

 

These are all great indicators for hunting that should be checked daily, with the results triaged and appropriate measures taken for each hit.

Then I pivoted from those IIOCs and looked for the “invoice.exe” module in the tracking data.  This showed me the “FastUserSwitchingCapability” service being created.

 

Figure 12: Service Created

 

Also, this is where the net user commands were found. 

 

Figure 13: Net User Commands Found in RSA NetWitness Endpoint

 

Once the registry change for cleartext credential storage was executed, the IIOC for that fired as seen here.

 

Figure 15: More IIOCs

 

Going back to the service, we can see that it created an Autorun entry, which was given a high IIOC score.

 

Figure 16: Autoruns in RSA NetWitness Endpoint

 

It was also listed in the modules.

 

Figure 17: Modules Listed

 

RSA NetWitness Endpoint 11.3 Detection

Here’s how things look in RSA NetWitness Endpoint 11.3.  First, here are the baseline IOC, BOC, EOC, and File Analysis meta fields after the agent was installed.

 

Figure 18: RSA NetWitness Endpoint 11.3 Baseline

 

This is the risk score for the host, which is based on the Windows firewall being disabled.

 

Figure 19: Initial RSA NetWitness Endpoint 11.3 Risk Score

 

After I executed the dropper, some additional meta was generated, including:

  • Unsigned writes executable to appdata local directory
  • Runs service control tools
  • Starts local service
  • Auto unsigned hidden
  • Auto unsigned servicedll

 

Figure 20: Additional Meta after Dropper Execution

Figure 21: Invoice.exe Creating Files in AppData Folder

 

We can also see this alert for “Autorun Unsigned Servicedlls” which is related to the “autorun unsigned servicedll” meta in the Navigate view.

 

Figure 22: Risk Score Increase

Figure 23: Autoruns with Four DLLs

Figure 24: Autorun Unsigned Servicedll Meta

 

Next, I opened up a remote shell using the Gh0st dashboard and executed some basic reconnaissance commands (whoami, net user /domain, etc) just like in RSA NetWitness Endpoint 4.4.

 

Figure 25: Reconnaissance Commands

 

Again, I executed the registry command to enable cleartext credential storage and ran procdump on the lsass.exe process.  That triggered a critical alert in RSA NetWitness Endpoint 11.3, which gave a risk score of 100, just as in RSA NetWitness Endpoint 4.4.

 

Figure 26: Registry Command to Enable Cleartext Credential Storage

 

Going back to the Navigate view, there was some additional meta generated for these commands:

  • Enumerates domain users
  • Modifies registry using command-line registry tool
  • Runs registry tool
  • Enables cleartext credential storage
  • Gets current user as system
  • Gets current user

 

Figure 27: Additional Meta

 

RSA NetWitness Network Detection

While there are various detection capabilities for identifying delivery of the Gh0st executable, the purpose of this post is to discuss the presence of the Gh0st RAT once a system is infected.  The RSA NetWitness Packets (NWP) Gh0st parser detected the presence of the Gh0st trojan based on the communications between the Gh0st server and client. Simply by looking under the “Indicators of Compromise” meta key, the Gh0st traffic is listed there.  With just one click, I was able to find the C2 activity using RSA NetWitness Platform packet data.

 

Figure 28: IOC Meta in the RSA NetWitness Platform

 

As mentioned earlier, one of the benefits of this kind of research is that when we identify gaps, we work with people like Bill Motley on our content team to create appropriate content.  Initially, the parser wasn’t detecting this specific Gh0st activity, but that has been fixed, and an updated parser is now available in RSA NetWitness Live.
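The parser keys on Gh0st's well-documented wire format: a five-byte "Gh0st" magic, two little-endian length fields, and a zlib-compressed body. A minimal Python sketch of that check follows; it is a simplification of the real parser, and Gh0st variants frequently change the magic string:

```python
import struct
import zlib

MAGIC = b"Gh0st"

def parse_gh0st_packet(data: bytes):
    """Return the decompressed payload if data looks like a classic
    Gh0st C2 packet, else None.  Layout (as commonly documented):
    5-byte magic, 4-byte total length, 4-byte uncompressed length,
    then a zlib-compressed body."""
    if len(data) < 13 or not data.startswith(MAGIC):
        return None
    total_len, plain_len = struct.unpack_from("<II", data, 5)
    if total_len != len(data):
        return None
    try:
        payload = zlib.decompress(data[13:])
    except zlib.error:
        return None
    return payload if len(payload) == plain_len else None
```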

 

Here we can see some of the Gh0st C2 traffic that generates the IOC meta mentioned before.

 

Figure 29: Gh0st C2 Seen in the RSA NetWitness Platform

 

Here is the HTTP traffic, which is a heartbeat callout to check that the client is still connected.  It issues this HTTP GET request about every two minutes, and only the string “Gh0st” is returned.
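Regular heartbeats like this are themselves detectable: requests arriving at near-constant intervals with tiny responses look nothing like human browsing. A rough Python sketch that flags low-jitter beaconing from a list of request timestamps follows; the jitter tolerance and minimum request count are illustrative assumptions:

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter_ratio=0.1, min_requests=5):
    """timestamps: sorted request times (seconds) for one src/dst pair.
    Flags traffic whose inter-request intervals are nearly constant,
    e.g. Gh0st's ~2-minute heartbeat GETs."""
    if len(timestamps) < min_requests:
        return False
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    # Low relative spread in the intervals suggests automated check-ins.
    return avg > 0 and pstdev(intervals) / avg <= max_jitter_ratio
```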

 

Figure 30: Heartbeat Traffic

 

Even without this parser, Gh0st C2 traffic can be found with as few as three pieces of metadata.  First, service metadata labeled as ‘OTHER’ can be a good place to start hunting, because it represents network traffic that doesn’t have a known parser and/or doesn’t follow the RFCs for known protocols.  Then, the ‘binary indicator’ value under the IOC meta key can help limit the dataset.  Finally, in the analysis.service metadata, the ‘unknown service over http port’ value stuck out.  Performing these three pivots against the full dataset found all of the Gh0st traffic, including some additional traffic not seen previously, which can be seen in the screenshot below under the Indicators of Compromise column.  Some IOCs show gh0st, but others only show the traffic as binary.
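Those three pivots amount to a simple conjunctive filter over session metadata. A sketch in Python over a list of session dicts; the field names mirror the NetWitness meta keys, but the data model here is purely illustrative:

```python
def hunt_unknown_c2(sessions):
    """Apply the three pivots described above: unparsed service,
    binary payload indicator, and an unknown service on an HTTP port."""
    return [
        s for s in sessions
        if s.get("service") == "OTHER"
        and "binary indicator" in s.get("ioc", [])
        and "unknown service over http port" in s.get("analysis.service", [])
    ]
```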

 

Figure 31: Additional Traffic Not Seen Previously

 

Summary

Gh0st is one of the simplest and easiest RATs to use, and the RSA NetWitness Platform had no trouble finding this activity.  Though this is a publicly available and commonly used RAT, it frequently goes unidentified by AV and other technologies, as shown in my example.  This is where the power of regular threat hunting comes in: it helps you detect unknown threats that your regular tools don’t necessarily pick up on. Some of this detection can be automated, as we did with the parser changes.

 

This means that, in the future, you no longer need to look for this specific threat but can instead follow this process, which will hopefully lead you to newer, unknown threats. Using the right tools coupled with the right methodology will help you better protect your network and organization. Unfortunately, not all of this can be fully automated, and even automated detections will still require appropriate human triage.

A couple of days ago on GitHub, Hackndo released a tool (https://github.com/Hackndo/lsassy) that is capable of dumping the memory of LSASS using LOLBins (Living off the Land Binaries); typically, we would see attackers use the SysInternals ProcDump utility to do this. Lsassy uses the MiniDump function from comsvcs.dll to dump the memory of the LSASS process. This action can only be performed as SYSTEM, so the tool creates a scheduled task as SYSTEM, runs it, and then deletes it.

 

We decided to take this tool for a spin in our lab and see how we would detect this with NetWitness.

 

The Attack

To further entrench themselves and find assets of interest, an attacker will need to move laterally to other endpoints in the network. Reaching this goal often involves pivoting through multiple systems, as well as dumping LSASS to extract credentials. In the screenshot below, we use the lsassy tool to dump credentials from a remote host that we currently have access to:

 

The output of this command shows us the credentials for an account we were already aware of, but also for an account we previously did not know about, tomcat-svc.

 

NetWitness Network Analysis

I like to start my investigation every morning by taking a look at the Indicators of Compromise meta key; this way, I can identify any new meta values of interest. Highlighted below is one that I rarely see (in some environments this can be common activity, but anomalies in which endpoints it takes place on can still be identified):

 

Reconstructing the session, we can see the remote scheduled task that was created and analyse what it is doing. From the screenshot below, we can see that the created task uses CMD to launch a command that locates LSASS and subsequently dumps it to \Windows\Temp\tmp.dmp using the MiniDump function within comsvcs.dll:

 

cmd.exe /C for /f "tokens=1,2 delims= " ^%A in ('"tasklist /fi "Imagename eq lsass.exe" | find "lsass""') do C:\Windows\System32\rundll32.exe C:\windows\System32\comsvcs.dll, MiniDump ^%B \Windows\Temp\tmp.dmp full

This task also leaves other artifacts of interest behind. From the screenshot below, we can see the tmp.dmp LSASS dump being created and read:

 

This makes the default usage of lsassy easy to detect with simple application rule logic such as the following. Of course, the name and location of the dump can be altered, but attackers typically leave the defaults for these types of tools:

service = 139 && directory = 'windows\\temp\\' && filename = 'tmp.dmp'

 

NetWitness Endpoint Analysis

Similarly, with Endpoint I like to start my investigations by opening up the compromise meta keys - IOC, BOC, and EOC. From here I can view any meta values that stand out, or iteratively triage through them. One of the meta values of interest below is enumerates processes on local system:

Pivoting into the Events view for this meta value, we can see cmd.exe launching tasklist to look for lsass.exe. To get proper access, the command also executes with SYSTEM-level privileges; this is something you should monitor regularly:

 

After seeing this command, it would be a good idea to look at all activity targeted toward LSASS for this endpoint. To do that, I can use the query filename.dst = 'lsass.exe' and start to investigate by opening up meta keys like the ones below. Something that stands out as interesting is the usage of rundll32.exe to load a function called minidump from the comsvcs.dll:

Pivoting into the Events view, we can see the full command much more easily. Here, rundll32.exe loads the MiniDump function from comsvcs.dll and passes parameters such as the process ID to dump (which was found by the initial process enumeration), the location and name for the dump, and the keyword full:

 

This activity could be picked up by using the following logic in an application rule. This will be released via RSA Live soon, but you can go ahead and implement/check your environment now:

device.type = 'nwendpoint' && category = 'process event' && (filename.all = 'rundll32.exe') && ((param.src contains 'comsvcs.dll' && param.src contains 'minidump') || (param.dst contains 'comsvcs.dll' && param.dst contains 'minidump'))

 

Conclusion

It is important to consistently monitor network and endpoint behaviour for abnormal actions taking place, and not to rely solely on out-of-the-box detections. New attack methods and tools are constantly being developed, but the actions these tools take always leave footprints behind; it is down to the defenders to spot the anomalies and triage accordingly. With that being said, RSA is constantly updating detections for attacks such as the one laid out in this post - we have been working with the content team to have this tool usage detected with out-of-the-box content.
