
I have recently been posting a number of blogs regarding the use of the RSA NetWitness Platform to detect attackers within your environment. As the list of blogs grows, it is becoming increasingly difficult to navigate through them easily. To combat this, this blog post will contain references to all other blog posts in the Profiling Attackers Series, and will be updated when new posts are made.


Special thanks to Rui Ataide for his support and guidance for these posts.

Strides have been made in RSA NetWitness Platform v11.2 to give administrators an alternative to the standard proprietary NetWitness database format. An admin can now choose to have the raw packet database files written in PcapNg format, allowing them to be accessed directly with third-party tools like Wireshark.

 

To enable storing the raw packet data as PcapNg files, change the packet.file.type setting in the Network Decoder's database configuration node from netwitness to pcapng. After making this change, a restart of the service is not required unless you are too impatient to wait for the existing database file (default size is 4 GB) to roll over.
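As a side note, the same setting can also be flipped over the Decoder's REST interface. This is a hedged sketch only: it assumes the default Network Decoder REST port (50104) and the standard msg=set/msg=get convention, so adjust the scheme, host, and credentials for your environment:

# curl -u admin 'http://<decoder>:50104/database/config/packet.file.type?msg=set&value=pcapng'
# curl -u admin 'http://<decoder>:50104/database/config/packet.file.type?msg=get'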

 

PcapNg configuration

 

Once the change is applied, any new PCAPs uploaded or network traffic ingested into the Decoder will be stored as PcapNg files. As the database files age, they remain readily usable, both while on the Decoder and when backed up off the system. In the image below you can see a mixture of the two formats commingling in the packet database folder. The database write format can be switched between the two options without any loss of standard functionality.

 

pcapng files

 

There are some considerations before switching from the default nwpdb format to PcapNg:

  • The PcapNg format requires approximately 5% more storage than the nwpdb format.
  • The PcapNg format is not recommended when ingest rates are greater than 8 Gbps on a single decoder, as it can introduce approximately 5% packet drops compared to nwpdb.
  • PcapNg files cannot be compressed while nwpdb files can, although raw network data typically does not compress well compared to raw logs.
  • PcapNg is an open format while nwpdb is proprietary, so as accessibility improves, privacy concerns may arise when storing packets as PcapNg. That said, I am not suggesting security through obscurity is the right answer when measuring your GDPR compliance.

 

Hopefully this, along with the already available SDK and APIs, makes NetWitness data more accessible.
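As a quick illustration of that accessibility, the snippet below reads one of the rolled-over files with scapy, which parses PcapNg as well as classic pcap. The file path and name here are hypothetical placeholders, so point it at a real file from your Decoder's packet database directory:

from scapy.all import rdpcap

# Hypothetical path/filename; rdpcap() handles PcapNg as well as classic pcap
packets = rdpcap("/var/netwitness/decoder/packetdb/packet-000000001.pcapng")

print(len(packets), "packets read")
for pkt in packets[:5]:
    print(pkt.summary())  # one-line summary per packet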

One of the more common requests and "how do I" questions I've heard in recent months centers around the emails that the Respond Module can send when an Incident is created or updated. Enabling this configuration is simple (https://community.rsa.com/docs/DOC-86405), but unfortunately changing the templates that Respond uses when it sends one of these emails has not been an option.

 

Or rather...has not been an accessible option.  I aim to fix that with this blog post.

 

Before getting into the weeds, I should note that this guide does not cover how to include *any* alert data within incident notification emails. In my tests, the fields that can be included are limited to the following, referenced using JSON dot notation (e.g. "incident.id", "incident.title", "incident.summary", etc.):

 

Now, this does not necessarily mean it isn't possible to include other data, just that I have not figured out how...yet.

 

The first thing we need to do is create a new Notification Template for Respond to use.  We do this within the UI at Admin / System / Global Notifications --> Templates tab.  I recommend using either of the existing Respond Notification templates as a base, and then modifying either/both of those as necessary. (I have attached these OOTB notification templates to this blog.)

 

For this guide, I'll use the "incident-created" template as my base and copy that into a new Notification Template in the UI. I give my template an easy-to-remember name, choose any of the default Template Types from the dropdown (it does not matter which I choose, as it has no bearing on the process, but it is a required field and the template cannot be saved without one), and write in a description:

 

Then I copy the contents of the "incident-created" template into the Template field. The first ~60% of this template is just formatting and comments, so I scroll past all that until I find the start of the HTML <body> tag. This is where I'll be making my changes.

 

One of the more useful changes that comes to mind here is to include a hyperlink in the email that will allow me to pivot directly from the email to the Incident in NetWitness.  I can also change any of the static text to whatever fits my needs.  Once I'm done making my changes, I save the template.
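As a hedged sketch of that hyperlink change, the snippet below assumes the template substitutes the JSON dot-notation fields noted earlier via ${...} placeholders; the UI hostname and the /respond/incident/ path are illustrative, so verify both against your own deployment before using them:

<!-- Hypothetical pivot link built from the incident.id and incident.title fields -->
<a href="https://netwitness.example.com/respond/incident/${incident.id}">
  Open Incident ${incident.id}: ${incident.title}
</a>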

 

After this, I'm done in the UI (unless I decide to make additional changes to my template), and open an SSH session to the NetWitness Admin Server. To make this next part as simple and straightforward as I can, I've written a script that will prompt me for the name of the Template I just created, use that to make a new Respond Notification template, and then prompt me one more time to choose which Respond Notification event (Created or Updated) I want to apply it to. (The script is attached to this blog.)

 

A couple notes on running the script:

  1. Must be run from the Admin Server
  2. Must be run as a superuser

 

Running the script:

 

...after a wall of text because my template is fairly long...I get a prompt to choose Created or Updated:

 

And that's it!  Now, when a new incident gets created (either manually or automatically) Respond sends me an email using my custom Notification Template:

 

And if I want to update or fix or modify it in any way, I simply make my changes to the template within the UI and then run this script again.

 

Happy customizing.

Hi Everyone,

We're excited to share our second issue of the RSA NetWitness Platform newsletter with you.  As a friendly reminder, the goal for this newsletter is to share more information about what is happening and what key things you should be aware of regarding our products and services.

 

This is a monthly newsletter, so you can expect the next newsletter in early June.  If you have any specific topics you would like to see in a future newsletter, please let us know!

 

Previous Newsletters:

 

 

Note: You can hit "preview" to view the pdf newsletter in your browser window, without needing to download it. 

One of the features included in the RSA NetWitness 11.3 release is something called Threat Aware Authentication (Respond Config: Configure Threat Aware Authentication).  This feature is a direct integration between RSA NetWitness and RSA SecurID Access that enables NetWitness to populate and manage a list of potentially high-risk users that SecurID Access can then refer to when determining whether (and how) to require those users to authenticate.

 

The configuration guide above details the steps required to implement this feature in the RSA NetWitness Platform, and the relevant SecurID documentation for the corresponding capability is here: Determining Access Requirements for High-Risk Users in the Cloud Authentication Service.

 

On the NetWitness side, to enable this feature you must be at version 11.3 and have the Respond Module enabled (which requires an ESA), and on the SecurID Access side, you need to have Premium Edition (RSA SecurID Access Editions - check the Access Policy Attributes table at the bottom of that page).

 

At a high level, the flow goes like this:

  1. NetWitness creates an Incident
  2. If that Incident has an email address (one or more), the Respond module sends the email address(es) via HTTP PUT method to the SecurID Access API
  3. SecurID Access checks the domains of those email addresses against its Identity Sources (AD and/or LDAP servers)
  4. SecurID Access adds those email addresses with matching domains to its list of High Risk Users
  5. SecurID Access can apply authentication policies to users in that list
  6. When the NetWitness Incident is set to Closed or Closed-False Positive, the Respond module sends another HTTP PUT to the SecurID Access API removing the email addresses from the list
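For illustration only, here is a minimal Python sketch of step 2 of that flow. The endpoint path, authentication scheme, and payload shape are all assumptions made for the example; the real details live in the SecurID Access API documentation linked above:

import requests

def put_high_risk_users(base_url, api_key, emails):
    # Hypothetical path, header, and body; consult the SecurID Access
    # API documentation for the actual endpoint and authentication
    resp = requests.put(
        base_url + "/v1/users/highrisk",
        json={"emails": emails},
        headers={"Authorization": "Bearer " + api_key},
        timeout=10,
    )
    resp.raise_for_status()
    return resp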

 

In trying out these capabilities, I ended up making a couple tools to help report on some of the relevant information contained in NetWitness and SecurID Access.

 

The first of these is a script (sidHighRiskUsers.py; attached at the bottom of this blog) to query the SecurID Access API in the same way that NetWitness does.  This script is based on the admin_api_cli.py example in the SecurID Access REST API tool (https://community.rsa.com/docs/DOC-94122).  That download contains all the python dependencies and modules necessary to interact with the SecurID API, plus some helpful README files, so if you do intend to test out this capability I recommend giving that a look.

 

Some usage examples of this script (it can be run with either python2 or python3, depending on whether you've installed all the dependencies and modules in the REST API tool):

 

Show Users Currently on the High Risk List

# python sidHighRiskUsers.py -f /path/to/SIDAccess/API.key -o getHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>"

 

 

Add Users to the High Risk List

# python sidHighRiskUsers.py -f /path/to/SIDAccess/API.key -o addHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>" -e <single_or_multiple_email_address>

 

**Note: my python-fu is not strong enough to capture/print the 404 response from the API if you send a partially successful PUT.  If your python-fu is strong, I'd love to know how to do that correctly.

Example - if you try to add multiple user emails and one or more of those emails are not in your Identity Sources, you should see this error for the invalid email(s):
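In case it helps anyone experimenting with this, here is a minimal, hedged sketch of one way to surface that body, assuming the underlying call is made with the requests library (url, payload, and headers stand in for the script's own values):

import requests

try:
    resp = requests.put(url, json=payload, headers=headers, timeout=10)
    resp.raise_for_status()
except requests.exceptions.HTTPError as err:
    # On a partially successful PUT, the 404 body should name the
    # email addresses that were not found in any Identity Source
    print("API returned", err.response.status_code, err.response.text)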

 

Remove Users from the High Risk List

# python sidHighRiskUsers.py -f /path/to/SIDAccess/API.key -o removeHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>" -e <single_or_multiple_email_address>

 

*Note: same as above about a partially successful PUT to the API

 

The second tool is another script (nwHighRiskUsersReport.sh; also attached at the bottom of this blog) to help report on the NetWitness-specific information about the users added to the High Risk list, the Incident(s) that resulted in them being added, and when they were added.  This script should be run on a recurring basis in order to capture any new additions to the list - the frequency of that recurrence will depend on your environment and how often new incidents are created or updated.

 

The script will create a CEF log for every non-Closed incident that has added an email to the High Risk list, and will send that log to the syslog receiver of your choice (a hedged sketch of such a log line follows the list below). Some notes on the script's requirements:

  1. must be run as a superuser from the Admin Server
  2. the Admin Server must have the rsa-nw-logplayer RPM installed (# yum install rsa-nw-logplayer)
  3. add the IP address/hostname and port of your syslog receiver on lines 4 & 5 in the script
  4. If you are sending these logs back into NetWitness:
    1. add the attached cef-custom.xml to your log decoder or existing cef-custom.xml (details and instructions here: Custom CEF Parser)
    2. add the attached table-map-custom.xml entries to the table-map-custom.xml on all your Log Decoders
    3. add the attached index-concentrator-custom.xml entries to the index-concentrator-custom.xml on all your Concentrators (both Log and Packet)
    4. restart your Log Decoder and Concentrator services
    5. **Note: I am intentionally not using any existing email-related metakeys in these custom.xml files in order to avoid a potential feedback loop where these events might end up in other Incidents and the same email addresses get re-added to the High Risk list
  5. Or if you are sending them to a different SIEM, perform the equivalent measures in that platform to add custom CEF keys
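For context, here is a minimal Python sketch of what emitting one such syslog/CEF line looks like. The CEF extension keys and values are illustrative placeholders only, not the exact keys the attached script uses:

import socket

# CEF header: CEF:Version|Vendor|Product|DeviceVersion|SignatureID|Name|Severity|Extensions
cef = ("CEF:0|RSA|NetWitness|11.3|highRiskUser|User added to High Risk list|5|"
       "suser=user@corp.example cs1=INC-123 cs1Label=incidentId")

# Send over UDP to a hypothetical syslog receiver; the attached script
# reads its receiver host and port from lines 4 & 5, as noted above
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(cef.encode("utf-8"), ("syslog.example.com", 514))
sock.close()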

 

Once everything is ready, running the script:

 

And the results:

------

With the recent news about ScreenConnect being used in data breaches, I had the opportunity to examine some of the network traffic. This was traffic that was originally in OTHER, but as you know, that just means it's an opportunity to learn about some new aspect of our networks.

 

Initially, this traffic was over TCP destination port 443; however, it was not SSL traffic. A custom parser was written to identify this traffic and register the service type as 7310. I did not find a document that explained how the application used this custom protocol, so I built this parser with some educated guesswork.

 

 

We start with an 18-byte token and match on it within the first 10 bytes of the payload. If we see that, we are in the right traffic. Next, I moved forward 1 byte and then extracted the next 64 bytes of payload. I checked the first byte using the "payload:uint8(1,1)" method, looking for either a "4" or a "6". In researching this traffic, it appeared that different versions of ScreenConnect would have one of those values. That value was important, as it led me to determine where the hostname (or IP address) started and its terminator.

 

 

If the value was "4", then my hostname started 7 bytes away.  If the value was "6", the hostname started 9 bytes away.  It also helped me identify the terminator.  If the initial value was "4" my terminator appeared to be "0x01".  If the initial value was "6" then the terminator appeared to be "0x02".  

 

Now that I was able to identify the start and end positions, I could extract the hostname.  However, it could be either an IP address or a fully qualified domain name.  This is where I referenced an outside function in the 'nwll' file called "determineHostType".  This way, if the extracted value was an IP address, it would be placed in 'alias.ip' and if it was a hostname, it would go in 'alias.host'.
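To make that offset logic concrete, here is a hedged Python sketch of the extraction (the actual parser is written in Lua and was built from the same educated guesswork). It assumes the version byte holds the numeric value 4 or 6 and that the offsets are relative to that byte:

def extract_screenconnect_host(payload):
    # payload: bytes beginning at the version byte described above
    version = payload[0]
    if version == 4:
        start, terminator = 7, b"\x01"   # hostname starts 7 bytes away, ends at 0x01
    elif version == 6:
        start, terminator = 9, b"\x02"   # hostname starts 9 bytes away, ends at 0x02
    else:
        return None
    end = payload.find(terminator, start)
    if end == -1:
        return None
    # Could be an IP address or an FQDN; the Lua parser hands the value
    # to nwll's determineHostType to choose alias.ip vs alias.host
    return payload[start:end].decode("ascii", errors="replace")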

 

Attached is the parser and PCAP.  This parser was submitted to LIVE, however I wanted you to have it while that process is underway.

 

Good luck and happy hunting.

 

Chris

Attackers love to use readily available red team tools for various stages of their attacks, as this removes the labour required to create their own custom tools. This is not to say that the more innovative APTs are going down this route, but it is something that appears to be becoming more prevalent, and your analysts should be aware of it. This blog post covers a readily available red team tool from GitHub.

 

Tools

In this blog post, the Koadic C2 will be used. Koadic, or COM Command & Control, is a Windows post-exploitation rootkit similar to other penetration testing tools such as Meterpreter and Powershell Empire. 

 

The Attack

The attacker sets up their Koadic listener and builds a malicious email to send to their victim. The attacker wants the victim to run their malicious code, and to make the email look more legitimate, they supply a Dropbox link and a password for the file:

 

The user downloads the ZIP, decompresses using the password in the email, and is presented with a Javascript file that has a .doc extension. Here the attacker is relying on the victim not being well versed with computers, and not noticing the obvious problems with this file (extension, icon, etc.):

 

 

 

Fortunately for the attacker, the victim double clicks the file to open it and they get a call back to their C2:

 

From here, the attacker can start to execute commands:

 

 

The Detection in NetWitness Packets

The analyst begins their investigation by focusing on C2 traffic over HTTP. From here, the analyst can start to pull apart the protocol and look for anomalies in its behaviour; the analyst opens the Service Analysis meta key to do this and observes two pieces of metadata of interest:

 

  • http post missing content-type

  • http post no get

 

 

 

These two queries have now reduced the data set for the analyst from 2,538 sessions to 67:

 

NOTE: This is not to say that the other sessions do not contain malicious traffic, nor that the analyst will ignore them, just that at this point in time this is the analyst's focal point. If this traffic turned out to be clean after analysis, they could exclude it from their search and pick apart other anomalous HTTP traffic in the same manner as before. This allows the analyst to go through the data in a more comprehensive and approachable manner.

 

Now that the data set has been reduced, the analyst can start to open other meta keys to understand the context of the traffic. The analyst wants to see whether any files are being transferred and what user agents are involved; to do so, they open the Extension, Filename, and Client Application meta keys. Here they observe an extension they do not typically see during their daily hunting, WSF. They also see what appears to be a random filename, and a user agent they are not overly familiar with:

 

There are only eight sessions for this traffic, so the analyst is now at a point where they can start to reconstruct the raw sessions and see if they can better understand what this traffic is for. Opening the Event Analysis view, the analyst first looks for any pattern in the connection times, and at how much the payload varies in size:

NOTE: Low variation in payload size, and connections that take place every x minutes, are indicative of automated behaviour. Whether that behaviour is malicious or not is up to the analyst to decipher; it could be a simple weather update, for example, but this sort of automated traffic is exactly what the analyst should be looking for when it comes to C2 communication: weeding out the user-generated traffic to get to the automated communications.
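As a hedged illustration of that reasoning, the sketch below scores a set of sessions between one source and destination for that kind of mechanical regularity; the 10% jitter threshold is an arbitrary value chosen for the example, not a NetWitness setting:

from statistics import mean, pstdev

def looks_automated(events, jitter=0.10):
    # events: list of (epoch_seconds, payload_bytes) tuples for one src/dst pair
    times = sorted(t for t, _ in events)
    deltas = [b - a for a, b in zip(times, times[1:])]
    sizes = [s for _, s in events]
    if len(deltas) < 2:
        return False
    # Low variation in both inter-arrival time and payload size is what
    # separates beaconing from user-driven browsing
    regular_timing = pstdev(deltas) <= jitter * mean(deltas)
    regular_size = pstdev(sizes) <= jitter * mean(sizes)
    return regular_timing and regular_size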

 

Reconstructing the sessions, the analyst stumbles across a session that contains a tasklist output. This immediately stands out as suspicious to the analyst:

 

From here, the analyst can build a query to focus on this communication between these two hosts and find out when this activity started happening:

 

Looking into the first sessions of this activity, the analyst can see a GET request for the oddly named WSF file, and that BITS was used to download it:

 

The response for this file contains the malicious javascript that infected the endpoint:

 

Further perusing the sessions, it is also possible to see the commands being executed by the attacker:

 

The analyst is now extremely confident this is malicious traffic and needs to be able to track it. The best way to do this is with an application rule. The analyst looks through the traffic and decides upon the following two pieces of logic to detect this behaviour:

 

To detect the initial infection:

extension = 'wsf' && client contains 'bits'

To detect the beacons:

extension = 'wsf' && query contains 'csrf='

 

NOTE: The activity observed was only visible because the communication happened over HTTP. If this had been SSL, detection via packets would be much more difficult. This is why introducing SSL Decryption/Interception/Offloading is highly recommended. SSL inspection devices are nothing more than a well-designed man-in-the-middle attack that breaks the encryption into two separate encrypted streams. They still provide an adequate level of protection to end users while allowing security analysts and devices to properly monitor and alert when malicious or unwanted activity takes place, such as the C2 traffic shown here. In summary, if you are responsible for protecting your organization's assets, you should definitely consider the pros and cons of using this technology.

 

The Detection in NetWitness Endpoint

Every day the analyst should review the IOC, BOC, and EOC meta keys, paying particular attention to the high-risk indicators first. Here the analyst can see a high-risk meta value, transfers file using bits:

 

Here the analyst can see cmd.exe spawning bitsadmin.exe and downloading a suspiciously named file into the \AppData\Local\Temp\ directory. This stands out as suspicious to the analyst:

 

From here, the analyst places an analytical lens on this specific host and begins to look through what other actions took place around the same time. The analyst observes commands being executed against this endpoint and now knows it is infected:

 

Conclusion

Understanding the nuances between user-based behaviour and mechanical behaviour gives an advantage to the analyst performing threat hunting. If analysts understand what "normal" looks like within their environment, they can easily discern it from abnormal behaviours.

 

Analysts should also be aware that not all attackers will use proprietary tools, or even alter the readily available ones to evade detection. An attacker only needs to make one mistake and you can unravel their whole operation. So don't ignore the low-hanging fruit.

There is a new space available on RSA Link: Troubleshooting the RSA NetWitness® Platform

The purpose of this space is to consolidate the available troubleshooting information for RSA NetWitness into a single space.

Information is separated into several "widgets" that are used to categorize the types of troubleshooting items:

  • Installation information
  • Knowledge base articles that contain troubleshooting information
  • Blog posts that discuss troubleshooting the RSA NetWitness platform
  • Videos and tutorials
  • Troubleshooting topics from the user guides

The goal is for this space to be the one place you can come to for a wide variety of troubleshooting information. While the information is also available elsewhere in RSA Link, it may be mixed in with other types of information. In this space, all the information you see is targeted toward helping you solve problems that you encounter while using RSA NetWitness.

Quite frequently when testing ESA alerts and output options / templates, I have wanted the ability to manually or repeatedly trigger alerts.  In order to help with this type of testing, I created a couple ESA Alert templates to generate both scheduled alerting and manual, one-time alerts.

 

Each of these can take a wide variety of time- or schedule-based inputs to generate alerts according to whatever kind of frequency you might want.  The descriptions in each alert have examples, requirements, and links to official Esper documentation with more detail.

 

I see the potential for quite a bit of usefulness with the Crontab alert, especially in 11.3 now that ESA Alert script outputs run from the admin server.

 

Lastly, I created these using freemarker templates (how the ESA Rules from Live are packaged) in order to ensure that the times and schedules used in the alerts adhere to proper syntax and formatting, but of course you should feel free to convert these to advanced rules if you like.

 

 

Introduction

There are many, many ways to exfiltrate data from a network, but one common way to do it is using DNS Exfiltration.

With these specific techniques, attackers use the already-open DNS port as a door for uploading and downloading data between the compromised host and their own external server.

 

Obviously this is not possible with a normal DNS resolution daemon, but with the right software and the right configuration it is possible to set up a DNS server within any infrastructure and use it to exfiltrate data without needing any special permissions.

 

But how is this possible?

 

There are many packages already built and ready to be used for this purpose; the most common are Dnscat2, Iodine, and Powercat+Dnscat2.

 

Just a quick tip: don't imagine an attacker using a specific version built only for you. Attackers are lazy and need to be sure their attack is efficient, so most of the time you will end up fighting one of these three DNS exfiltration tools, either renamed or exactly the same as you can find them on GitHub.

 

Remember to also check the second-level domain to avoid any confusion with legitimate software that uses DNS tunneling.

 

So, let's take a look at these three tools. All of them serve the same purpose and use the same TCP/UDP port, but there are some differences in how they send data outside the network:

 

Dnscat2 (https://github.com/iagox86/dnscat2): it connects to a server component, using TXT queries to move all data up and down, to and from, the external server, encrypted or not depending on your choice. This tool is widely used, partly because it has been ported to multiple programming languages, like Ruby, Perl, PowerShell, etc. This makes it easier to implement, and it will work on essentially any network.

 

Iodine (https://code.kryo.se/iodine/): same basic functionality as the previous one, tunneling through DNS, but with small differences such as a password for accessing the tunnel, and it uses the NULL record type, which allows the downstream data to be sent without encoding. Each DNS reply can contain over a kilobyte of compressed payload data. There is also an Android version, so most of the work is already done to implement it, for example, on IoT devices running Android.

 

Powercat (https://github.com/besimorhino/powercat): this tool alone does not work as a DNS tunnel, but if the server side is Dnscat2, you get an interactive PowerShell session over legitimate-looking DNS traffic, and you can increase your capability by adding other PowerShell attack frameworks like Nishang, PowerShell Empire, etc.

 

The purpose of this article is not "how to exfiltrate data from a network". Instead, let's take a look at how our products can help you identify and track any usage of these techniques in your network; for that, I've chosen the common approach used every day by me and my colleagues in Incident Response Services.

 

For that I’ve choosen RSA NetWitness Network. Let’s take a look at the essential steps.


 

Preparation

In RSA NetWitness, click Configure, select Bundle as the Resource Type, click Search, choose Hunting Pack, and click Deploy to deploy the hunting pack, as shown in Figure 1, to the appropriate components of your infrastructure.

Figure 1

 

Now choose Lua Parser as the Resource Type, click Search, choose the nwll and DNS_verbose_lua parsers, and click Deploy to deploy them, as shown in Figure 2.

Figure 2

 

Now that your NetWitness Packets (network) environment is ready and has everything it needs to parse and properly identify DNS traffic, you can start your analysis.

 

Now let's see how to find bad DNS traffic, or rather, traffic that crosses the DNS port but is not real DNS traffic.

 

 

Dnscat2 traffic

With the right package and parsers deployed, if there is traffic generated by Dnscat2 in your network, many indicators will jump out and help you identify it quickly.

Figure 3

 

As shown in Figure 3, for Service Type = DNS (service = 53), Service Analysis shows the presence of "dns base36 txt record" and "hostname consecutive consonants", plus "dns large answer" under Risk Information; Hostname Aliases like the ones shown in Figure 3 are a good sign of the presence of Dnscat2 traffic.

Scrolling to the other meta keys, as shown in Figure 4, you find DNS Query Type with the value "txt record" and DNS Response Text with values of many characters that make no apparent sense. Now you have sufficient alerts!

 

 

Figure 4

 

So, in my network there are queries for TXT records, with base36 encoding, large answers, apparently random response text, and random-character host aliases? To be sure this is not normal DNS traffic, you can click on one of these events and see what is inside, as shown in Figure 5.

 

Figure 5

 

Now it is clear that you are looking at Dnscat2 traffic, and you have to do further analysis on the source IP addresses that generate it.

 

A quick query I apply every day to find its presence in the preferred time span: service = 53 && dns.querytype = 'txt record' && analysis.service = 'hostname consecutive consonants' && analysis.service = 'dns base36 txt record'


 

Iodine traffic 

Let’s check some interesting meta who give me the ability to find Iodine traffic.

 

Figure 6

 

As shown in Figure 6, Service Analysis shows "hostname invalid" and "hostname consecutive consonants", Risk Suspicious shows "dns extremely low ttl", and Host Aliases contains a lot of "strange" hostnames. But let's check whether there is something more...

 

 

Figure 7

 

As shown in Figure 7, there is also another interesting field: DNS Query Type says "experimental null record". Job done: all of these meta values are related to Iodine activity, and the packet shown in Figure 8 confirms the traffic.

 

Figure 8

 

Now it is clear that you are looking at Iodine traffic, and you have to do further analysis on the source IP addresses that generate it.

 

A quick query I apply every day to find its presence in the preferred time span: service = 53 && dns.querytype = 'experimental null record'

 


 

Powercat + Dnscat2

Looking into Powercat + Dnscat2 is different from the previous cases; let's check why.

As shown in Figure 9, Session Analysis says "single sided udp", Service Analysis says "hostname consecutive consonants", "dns base36 txt record", and "dns single request response", and Hostname Aliases contains a lot of hostnames with strange names.

 

Figure 9

 

Looking at more meta values, as shown in Figure 10, there is DNS Query Type with "txt record" and DNS Response Text with a lot of strange text starting with the same character.

 

 

Figure 10

 

This looks similar to Dnscat2 traffic, but not exactly the same: the specific differences from standard Dnscat2 traffic are the "single sided udp" and "dns single request response" indicators.

 

Figure 11

 

As shown in Figure 11, there is only one request and one response, and that is the main difference between standard Dnscat2 and Powercat with Dnscat2.

Now it is clear that you are looking at Powercat with Dnscat2 traffic, and you have to do further analysis on the source IP addresses that generate it.

 

A quick query I apply every day to find its presence in the preferred time span: service = 53 && analysis.service = 'hostname consecutive consonants' && analysis.service = 'dns base36 txt record' && analysis.service = 'dns single request response'


 

Addon

Most of the time when you look into DNS traffic, you may encounter something like a client working not only over UDP but also over TCP.

If the information about the source is right, with this hunting methodology you can also achieve some goals around finding network misconfigurations.

That’s because DNS infrastructure need to be managed and there are a lot of guide on "How to secure your DNS infrastructure", so in a normal situation a client try dns resolution through one internal server, most of the time a domain controller, who talk with a DNS forwarder allowed to go outside of the network for resolution ( if both have nothing into cache).

So if you see a client in the client network asking for resolution directly on the internet, especially over TCP, you had better take a closer look at your DNS infrastructure: a backdoor on a client machine using port 53 probably has direct access to the internet, and an attacker can exfiltrate everything without using any DNS tunnel, just the allowed port.

 

A quick query I apply every day to find its presence in the preferred time span: direction = 'outbound' && service = 53 && ip.proto = 6. If your source IP addresses fill up with a lot of IPs coming from the client network, you have a possible misconfiguration in the network and/or a possible hole.


 

Finally

There are many ways to hunt and dig into a system, but with the right product and the right methodology you can achieve success much faster. This article is meant to be a quick help in doing exactly that, because we do it every day with our products!

 

Hope this helps.

 

Thank you.

 

Max

 

 

The complete overhaul of NW-Endpoint 4.4 into NW-Endpoint 11.3 includes (among many changes) a different method for creating your own, or tuning existing, endpoint alerts.  In the old version (4.4), everything was a SQL query, but since we have moved away from Windows and SQL Server in 11.3, I'd like to shed some light on how the new process works, as well as include some tooling intended to assist folks who want to do this themselves.

 

The RSA NetWitness Endpoint Configuration Guide (https://community.rsa.com/docs/DOC-100160) has a section starting on pg. 12 that covers everything here in greater detail.  If you'd like more information on this subject, I recommend taking a look at that document.

 

At a high level, the process for Endpoint 11.3 to generate alerts and calculate file and host risk scores goes like this:

 

Let's take a look at a couple of the OOTB examples and see how these different pieces are interacting with each other by examining the process that turns the "runs powershell decoding base64 string" rule into a potential risk score.

 

If the App Rule's condition statement is met, it creates a meta value of "runs powershell decoding base64 string" in the "boc" meta key:

 

These are then used in the corresponding ESA Rule "Runs Powershell Decoding Base64 String" contained in the OOTB Endpoint Risk Scoring Rule Bundle (I've attached all of the OOTB ESA Rules contained in the bundle to this blog).

****Take note that the app_rule_meta_value is case sensitive.  If you use capital letters in the App Rule Name field, then the "value" field in its companion ESA Rule must also contain capital letters****

 

Last up in the process is the Risk Scoring Rule.  This takes the ESA Alert and produces a score (scaled from 0 - 100) for the host where the alert occurred, and if applicable the module involved in the alert.  This last part is where I expect the most potential confusion - determining the host where an alert occurred is straightforward, but the module might not be.

 

This is because there can potentially be both a source module (filename_src, checksum_src) and a destination module (filename_dst, checksum_dst), or just the module itself without a source or destination (filename, checksum), or for some alerts there might not be a module involved in the alert at all.  I've attached all of the OOTB Risk Scoring Rules to this blog, and I'd encourage you to take a look at these variations if you intend to create your own, or tune existing, rules and alerts.

 

Now then, back to the "Runs Powershell Decoding Base64 String" Rule.  This Risk Scoring Rule looks for the ESA Alert and creates a score for the source module (checksum_src, filename_src) in the event, as well as the host where it occurred.  Any risk scores that are generated for affected hosts and modules will appear in the Investigate/Hosts and Investigate/Files pages in the UI, and can also appear as Alerts and Incidents in the Respond UI.

 

And just to be thorough, here are a couple examples of rules with different Risk Scoring.

 

A rule without a source or destination module --> "Scripting Addition In Process"

 

A rule without any module and just the Host --> "Windows Firewall Disabled"

 

Now we have some examples under our belt, and know how the different inputs and options relate to one another and the outcome.  The process for adding your own rule is covered in the configuration guide linked above, and this next section aims to assist with some of the manual CLI aspects of that process.

 

After playing around with the Blocking capabilities in 11.3, I decided I wanted to add a couple custom alerts.

 

First, I wanted to know when a module I blocked was actively running on an endpoint at the time I blocked it and was subsequently killed.  My App Rule to trigger on this activity:

 

And second, I wanted to know when an attempt was made to access or run a module that I had previously blocked.  My App Rule for this activity:

 

With these App Rules created and Applied, the next steps are to create and apply the corresponding ESA Alert and Risk Scoring Rules from a terminal session in the Admin Server (Node0).  The script "endpointCustomRule.sh" attached to this blog can help walk you through these steps, if you choose.  It aims to eliminate errors that may occur when completing these steps manually.

 

Some notes on the script:

  • must be run on the Admin Server as root
  • must be run only after creating and applying your App Rule(s)
    • be sure to make your App Rules unique, otherwise the script might not find the correct one when it is checking for a valid Log Decoder App Rule
    • if you have multiple Endpoint Log Hybrids (ELHs), be sure to Push your App Rule(s) to the other ELHs in your environment
  • applies some error checking and input validation to ensure valid Rules are created and added to the respective databases successfully

 

If you find errors or gaps in the script please let me know.

 

Prompting user for input:

 

Adding and confirming the ESA and Risk Scoring Rules:

 

And finally, confirming that we are now successfully creating alerts and re-calculating Risk Scores when the events occur:

WireGuard is a new open-source VPN protocol used to create point-to-point tunnels. It uses modern cryptographic protocols and works at the network layer for both IPv4 and IPv6.
One of the advantages of the WireGuard implementation is the size of its code: only about 4,000 lines, which is much smaller than OpenVPN or IPsec implementations. Initially released for the Linux kernel, it is now cross-platform and widely deployable. All these aspects make WireGuard a perfect fit for anyone who needs to create a secure channel.

 

Considering its easy implementation and wide availability, I tried to create a parser to help identify this type of traffic.

 

For our purpose we can ignore the header of the packet and concentrate on the payload data only.

first packet

From the payload analysis we see that the first packet starts with 01 00 00 00. A similar pattern appears on the following packets: 02 00 00 00 on the response and 04 00 00 00 on the rest of the traffic.

second packet

third packet


If we cross-reference this with WireGuard's documented protocol, we can confirm that the data begins with a one-byte message type (0x01, 0x02, or 0x04) followed by three reserved zero bytes, so we can assume that these patterns will help to identify this type of traffic.

 

With some suggestions from Christopher Ahearn, I started to create my Lua parser.

 

The entry point for my parser is the token 01 00 00 00

WireGuard:setCallbacks({
    [nwevents.OnSessionBegin] = WireGuard.sessionBegin,
    ["\001\000\000\000"] = WireGuard.tokenMATCH, -- find the token 1 0 0 0
})

Once the token is identified inside a session, it is necessary to check its position, as we are looking for payloads that have the token in their first 4 bytes.

function WireGuard:tokenMATCH(token, first, last)
    -- check that the token sits in the first 4 bytes of the payload
    if first == 1 and last == 4 then
        -- ... (stream and packet checks shown below)
    end
end

If this is the case, I define the request stream and the response stream, then I look for the patterns identified before: 02 00 00 00 on the first packet of the response stream and 04 00 00 00 on all the other packets.

if requestStream and responseStream then
    -- the first 4 bytes of the first response packet are 2 0 0 0
    if nwstream.getPayload(responseStream,1,4):find("\002\000\000\000",1,4) then
        -- the first 4 bytes of the other packets are 4 0 0 0
        if (nwpacket.getPayload(requestPacket,1,4):find("\004\000\000\000",1,4) and
            nwpacket.getPayload(responsePacket,1,4):find("\004\000\000\000",1,4)) then

If the first 4 packets all respect the pattern, it means the analyzed session is a WireGuard session, and the parser sets the service type to 51820. WireGuard doesn't use a specific port for communication, so I decided to use 51820 because it is the value used in the configuration sample available on the WireGuard website.

nw.setAppType(51820)
nw.createMeta(self.keys.ioc, "WireGuard VPN")

 

Once deployed on the Network Decoder, the parser allows us to quickly identify whether WireGuard VPN traffic is present on the network.

NetWitness

 

WireGuard is evolving and it is possible that something will change, but this parser can help to identify VPN traffic that otherwise would not have been classified properly by RSA NetWitness.

Introduction

Lateral movement is a technique that enables an adversary to access and control remote systems on a network. It is a critical phase in any attack, and understanding the methods that can be used to perform lateral movement, along with how those protocols display themselves in NetWitness, is paramount in detecting attackers moving laterally in your environment. It is also important to understand that many of the mechanisms used by attackers for lateral movement are also used by administrators for legitimate reasons, which is why it is important to monitor these mechanisms and understand what is typical behaviour, and what is not.

 

Tools

In this blog post, the Impacket implementation of Smbexec will be used. This sets up a semi-interactive shell for the attacker.

 

The Attack

The attacker has successfully gained access to your network and dumped credentials, all without any detection from your Security Operations Center (SOC). The attacker decides to move laterally using Smbexec: they connect to one of the hosts they had previously identified and begin to execute commands:

 

The Detection in NetWitness Packets

Smbexec works a little differently from some of the more common lateral movement tools such as PsExec. Instead of transferring a binary to the target endpoint, using the svcctl interface to remotely create a service from the transferred binary, and then starting that service, Smbexec executes its commands through a binary that already lives on that endpoint: cmd.exe.

 

NetWitness Packets does a great job of pulling apart packet data and pointing you in directions of interest. One of the meta values we can pivot on to focus on traffic of interest for lateral movement is remote service control:

 

NetWitness also creates metadata when it observes Windows CLI commands being run; this metadata is under the Service Analysis meta key and is displayed as windows cli admin commands. This is another interesting pivot point for looking into the types of commands being executed:

 

NOTE: Just because an endpoint is being remotely controlled, and there are commands being executed on the endpoint, this does not mean that your network is compromised. It is up to the analyst to review the sessions of interest like we are in this blog post, and determine if something is out of the ordinary for your environment.

 

Looking into the other metadata available, we can see a connection to the C$ share, and that a filename called __output was created:

 

This does not give us much to go on and say that this is suspicious, so it is necessary to reconstruct the raw session itself to get a better idea of what is happening. Opening the Event Analysis view for the session we reduced our data set to, and analysing the payload, a suspicious string stands out as shown below:

 

Tidying up the command a little, it ends up looking like this:

%COMSPEC% /Q /c echo dir > \\127.0.0.1\C$\__output 2>&1 > %TEMP%\execute.bat & %COMSPEC% /Q /c %TEMP%\execute.bat & del %TEMP%\execute.bat

  • %COMSPEC% - Environment variable that points to cmd.exe
  • /Q - Turns echo off
  • /C - Carries out the command specified by string and then terminates
  • %TEMP% - Environment variable that points to C:\Users\username\AppData\Local\Temp

 

We can see that the string above will echo the command we want to execute (dir) into a file named "__output" on the C$ share of the local machine. The command we want to execute also gets placed into execute.bat in the %TEMP% directory, which is subsequently executed and then deleted.

 

Analysing the payload further, we can also see the data that is returned from the command that was executed by the attacker:

 

Now that suspicious traffic has been observed, we can filter on this type of traffic, and see other commands being executed, such as whoami:

 

Smbexec is quite malleable; a vast majority of the indicators can easily be edited to evade signature-type detection for this behaviour. However, using NetWitness Packets' ability to carve out behaviours, the following application rule logic should be suitable to pick up on suspicious traffic over SMB that an analyst should investigate in order to detect this type of behaviour:

(ioc = 'remote service control') && (analysis.service = 'windows cli admin commands') && (service = 139) && (directory = '\\c$\\','\\ADMIN$\\') 

 

The Detection in NetWitness Endpoint

NetWitness Endpoint does a great job of picking up on this activity. Looking at the Behaviours of Compromise meta key, two meta values, services runs command shell and runs chained command shell, point the analyst toward this activity:

 

Opening the Event Analysis view for these sessions, we can see that services.exe is spawning cmd.exe, and we can also see the command that is being executed by the attacker:

 

The default behaviour of Smbexec could easily be detected with application rule logic like the following:

param.dst contains '\\127.0.0.1\C$\__output'

Conclusion

Understanding the Tools, Techniques, and Procedures (TTPs) used by attackers, coupled with understanding how NetWitness interprets those TTPs, is imperative in being able to identify them within your network. The NetWitness suite has great capabilities to pull apart network traffic and pick up on anomalies, which makes it easier for analysts to hunt down and detect these threats.

In RSA NetWitness 11.3, one of the behind-the-scenes changes to the platform was moving the script notification server from ESA onto the Admin Server.

 

This change opens up a number of possibilities for scripting and automating processes within the NetWitness environment, but also requires a few changes to existing, pre-11.3 scripts.

 

Prior to 11.3, the raw alert data would be passed to the ESA script server as a single argument, which could then be read, written to disk, parsed, etc. For example:

 

#!/usr/bin/env python 
import json
import sys

def dispatch(alert):
   with open("/tmp/esa_alert.json", mode='w') as alert_file:
      alert_file.write(json.dumps(alert, indent=True))

def myFunction():
   esa_alert = json.loads(open('/tmp/esa_alert.json').read())
   .....etc.....
   .....etc.....

if __name__ == "__main__":
   dispatch(json.loads(sys.argv[1]))
   myFunction()
   sys.exit(0)

 

 

But in 11.3, the raw alert gets broken up into multiple arguments that need to be joined together.  One possible solution to this change could be something like this:

 

#!/usr/bin/env python
import sys
import json

def dispatch():
   with open("/tmp/esa_alert.json", mode='w') as alert_file:
      a = sys.argv
      del a[0]
      alert_file.write(' '.join(a))

def myFunction():
   esa_alert = json.loads(open("/tmp/esa_alert.json").read())
   .....etc.....
   .....etc.....

if __name__ == "__main__":
   dispatch()
   myFunction()
   sys.exit(0)

 

As I mentioned above, moving the script server onto the Admin Server opens up a number of possibilities for certain queries and tasks within the NW architecture.  Some that come to mind:

  • automating backups
  • pulling host stats and ingesting them as syslog events
  • better ESA Alert <--> Custom Feed <--> Context-Hub List <-- > ESA Alert enrichment loops

 

However, one restriction I've been trying to figure out a good solution for is that the Admin Server will run these scripts as the "netwitness" user, and this user has fairly limited access.

 

I've been kicking around the possibility of adding this user to the sudoers group, possibly adding read/write/execute permissions for this user to specific directories and/or files depending on the use case, or sudo-ing to a different user within the script.

 

Each of these options presents certain risks, so I'd be interested in hearing what other folks might think about these or other possible solutions to run scripts with elevated permissions in as secure a manner as possible.

Introduction

Lateral movement is a technique that enables an adversary to access and control remote systems on a network. It is a critical phase in any attack, and understanding the methods that can be used to perform lateral movement, along with how those protocols display themselves in NetWitness, is paramount in detecting attackers moving laterally in your environment. It is also important to understand that many of the mechanisms used by attackers for lateral movement are also used by administrators for legitimate reasons, which is why it is important to monitor these mechanisms and understand what is typical behaviour, and what is not.

 

Tools

In this blog post, Winexe will be used. Winexe is a GNU/Linux-based application that allows users to execute commands remotely on Windows NT/2000/XP/2003/Vista/7/8 systems. It installs a service on the remote system, executes the command, and uninstalls the service. Winexe allows execution of most Windows shell commands.

 

The Attack

The attacker has successfully gained access to your network and dumped credentials, all without any detection from your Security Operations Center (SOC). The attacker decides to move laterally using Winexe: they connect to one of the hosts they had previously identified and begin to execute commands:

 

The Detection in NetWitness Packets

The use of Winexe is not overly stealthy. Its use creates a large amount of noise that is easily detectable. Searching for winexesvc.exe within the filename metadata returns the SMB transfer of the executable to the ADMIN$ share:

 

Using the time the file transfer took place as the pivot point to continue investigation, it is also possible to see the use of the Windows Service Control Manager (SCM) directly afterward to create and start a service on the remote endpoint. SCM acts as a remote procedure call (RPC) server so that services on remote endpoints can be controlled:

 

Reconstructing the raw session as text, it is possible to see the service name being created, winexesvc, and the associated executable that was previously transferred being used as the service base, winexesvc.exe:

 

Continuing to analyse the SMB traffic around the same time frame, it is also possible to see another named pipe, ahexec, being used. This is the named pipe that Winexe uses:

 

Reconstructing these raw sessions as text, it is possible to see the commands that were executed:

 

As well as the output that was returned to the attacker:

 

Based on the artefacts we have seen leftover from Winexe's execution over the network, there are multiple pieces of logic we could use for our application rule to detect this type of traffic. The following application rule logic would pick up on the initial transfer of the winexesvc.exe executable, and the subsequent use of the named pipe, ahexec:

(filename = 'ahexec','winexesvc.exe') && (service = 139)

The Detection in NetWitness Endpoint

Searching for winexesvc.exe as the filename source shows the usage of Winexe on the endpoints; this is the executable that handles the commands sent over the ahexec named pipe. The filename destination meta key shows the executables invoked via Winexe:

 

A simple application rule could be created for this activity by looking for winexesvc.exe as the filename source:

(filename.src = 'winexesvc.exe')

 

Additional Analysis

Analysing the endpoint, you can see the winexesvc.exe process running from task manager:

 

As well as the service that was installed via SCM over the network:

 

This service creation also creates a log entry in the System event log as event ID 7045:

 

This means if you were ingesting logs into NetWitness, you could create an application rule to trigger on Winexe usage with the following logic:

(reference.id = '7045') && (service.name = 'winexesvc')

We can also see the named pipe that Winexe uses by running the Sysinternals pipelist tool:
