
One of the features included in the RSA NetWitness 11.3 release is something called Threat Aware Authentication (Respond Config: Configure Threat Aware Authentication).  This feature is a direct integration between RSA NetWitness and RSA SecurID Access that enables NetWitness to populate and manage a list of potentially high-risk users that SecurID Access can then refer to when determining whether (and how) to require those users to authenticate.

 

The configuration guide above details the steps required to implement this feature in the RSA NetWitness Platform, and the relevant SecurID documentation for the corresponding capability is here: Determining Access Requirements for High-Risk Users in the Cloud Authentication Service.

 

On the NetWitness side, to enable this feature you must be at version 11.3 and have the Respond Module enabled (which requires an ESA), and on the SecurID Access side, you need to have Premium Edition (RSA SecurID Access Editions - check the Access Policy Attributes table at the bottom of that page).

 

At a high level, the flow goes like this:

  1. NetWitness creates an Incident
  2. If that Incident has an email address (one or more), the Respond module sends the email address(es) via HTTP PUT method to the SecurID Access API
  3. SecurID Access checks the domains of those email addresses against its Identity Sources (AD and/or LDAP servers)
  4. SecurID Access adds those email addresses with matching domains to its list of High Risk Users
  5. SecurID Access can apply authentication policies to users in that list
  6. When the NetWitness Incident is set to Closed or Closed-False Positive, the Respond module sends another HTTP PUT to the SecurID Access API removing the email addresses from the list

 

In trying out these capabilities, I ended up making a couple tools to help report on some of the relevant information contained in NetWitness and SecurID Access.

 

The first of these is a script (sidHighRiskUsers.py; attached at the bottom of this blog) to query the SecurID Access API in the same way that NetWitness does.  This script is based on the admin_api_cli.py example in the SecurID Access REST API tool (https://community.rsa.com/docs/DOC-94122).  That download contains all the python dependencies and modules necessary to interact with the SecurID API, plus some helpful README files, so if you do intend to test out this capability I recommend giving that a look.

 

Some usage examples of this script (it can be run with either Python 2 or Python 3, depending on whether you've installed the dependencies and modules from the REST API tool):

 

Show Users Currently on the High Risk List

# python sidHighRiskUsers.py -f /path/to/SIDAccess/API.key -o getHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>"

 

 

Add Users to the High Risk List

# python sidHighRiskUsers.py -f /path/to/SIDAccess/API.key -o addHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>" -e <single_or_multiple_email_address>

 

**Note: my python-fu is not strong enough to capture/print the 404 response from the API if you send a partially successful PUT.  If your python-fu is strong, I'd love to know how to do that correctly.
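For anyone tinkering, one hedged way to surface the API's error body, assuming the HTTP calls go through the requests library (the endpoint path, header, and payload shape below are placeholders rather than the real API contract; see the attached script and the SecurID Access docs for those):

import requests

# Placeholders -- substitute the real Cloud Administration API URL, auth
# header, and email list used by the attached script.
url = 'https://sid-access.example.com/high-risk-users'   # hypothetical path
headers = {'Authorization': 'Bearer <token>'}
emails = ['alice@example.com', 'bob@not-in-identity-source.com']

resp = requests.put(url, headers=headers, json={'emails': emails})
if not resp.ok:
    # For a partially successful PUT, the body of the 404 names the email
    # addresses that did not match any Identity Source.
    print('API returned {}: {}'.format(resp.status_code, resp.text))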

Example - if you try to add multiple user emails and one or more of those emails are not in your Identity Sources, you should see this error for the invalid email(s):

 

Remove Users from the High Risk List

# python sidHighRiskUsers.py -f /path/to/SIDAccess/API.key -o removeHighRiskUsers -u "https://<URL_of_your_SID_Access_Cloud_Console_API>" -e <single_or_multiple_email_address>

 

*Note: same as above about a partially successful PUT to the API

 

The second tool is another script (nwHighRiskUsersReport.sh; also attached at the bottom of this blog) to help report on the NetWitness-specific information about the users added to the High Risk list, the Incident(s) that resulted in them being added, and when they were added.  This script should be run on a recurring basis in order to capture any new additions to the list - the frequency of that recurrence will depend on your environment and how often new incidents are created or updated.

 

The script will create a CEF log for every non-Closed incident that has added an email to the High Risk list, and will send that log to the syslog receiver of your choice.  Some notes on the script's requirements:

  1. must be run as a superuser from the Admin Server
  2. the Admin Server must have the rsa-nw-logplayer RPM installed (# yum install rsa-nw-logplayer)
  3. add the IP address/hostname and port of your syslog receiver on lines 4 & 5 in the script
  4. If you are sending these logs back into NetWitness:
    1. add the attached cef-custom.xml to your log decoder or existing cef-custom.xml (details and instructions here: Custom CEF Parser)
    2. add the attached table-map-custom.xml entries to the table-map-custom.xml on all your Log Decoders
    3. add the attached index-concentrator-custom.xml entries to the index-concentrator-custom.xml on all your Concentrators (both Log and Packet)
    4. restart your Log Decoder and Concentrator services
    5. **Note: I am intentionally not using any existing email-related metakeys in these custom.xml files in order to avoid a potential feedback loop where these events might end up in other Incidents and the same email addresses get re-added to the High Risk list
  5. Or if you are sending them to a different SIEM, perform the equivalent measures in that platform to add custom CEF keys
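The actual reporting script is bash and uses NwLogPlayer, but the gist of what it emits can be sketched in a few lines of Python (the CEF extension keys here are illustrative; the attached table-map-custom.xml defines the real mappings):

import socket

SYSLOG_HOST, SYSLOG_PORT = '192.0.2.10', 514   # your receiver (lines 4 & 5 in the script)

def send_cef(incident_id, email):
    # CEF header: Version|Vendor|Product|Device Version|Signature ID|Name|Severity
    msg = ('CEF:0|RSA|NetWitness|11.3|HighRiskUser|High Risk User Added|5|'
           'cs1=%s cs2=%s' % (incident_id, email))
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg.encode(), (SYSLOG_HOST, SYSLOG_PORT))
    sock.close()

send_cef('INC-123', 'user@example.com')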

 

Once everything is ready, running the script:

 

And the results:

------

With the recent news about ScreenConnect being used in data breaches, I had the opportunity to examine some of the network traffic.  This traffic was originally classified as OTHER, but as you know, that just means it's an opportunity to learn about some new aspect of our networks.

 

Initially, this traffic was over TCP destination port 443; however, it was not SSL traffic.  A custom parser was written to identify this traffic and register the service type as 7310.  I did not find a document that explained how the application used this custom protocol, so I built this parser with some educated guesswork.

 

 

We start with an 18-byte token and match on it within the first 10 bytes of the payload.  If we see that, we are in the right traffic.  Next, I moved forward 1 byte and then extracted the next 64 bytes of payload.  I checked the first byte using the "payload:uint8(1,1)" method, looking for either a "4" or a "6".  In researching this traffic, it appeared that different versions of ScreenConnect would have one of those values.  That value was important, as it led me to determine where the hostname (or IP address) started and its terminator.

 

 

If the value was "4", then my hostname started 7 bytes away.  If the value was "6", the hostname started 9 bytes away.  It also helped me identify the terminator.  If the initial value was "4" my terminator appeared to be "0x01".  If the initial value was "6" then the terminator appeared to be "0x02".  

 

Now that I was able to identify the start and end positions, I could extract the hostname.  However, it could be either an IP address or a fully qualified domain name.  This is where I referenced an outside function in the 'nwll' file called "determineHostType".  This way, if the extracted value was an IP address, it would be placed in 'alias.ip' and if it was a hostname, it would go in 'alias.host'.
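To make that extraction logic concrete, here is a rough Python rendering of what the Lua parser does (the assumption that the offsets are measured from the version byte is mine; the real logic lives in the attached parser):

def extract_host(payload: bytes):
    # version byte 4 -> hostname starts 7 bytes on, terminated by 0x01
    # version byte 6 -> hostname starts 9 bytes on, terminated by 0x02
    offsets = {4: (7, 0x01), 6: (9, 0x02)}
    if not payload or payload[0] not in offsets:
        return None
    start, terminator = offsets[payload[0]]
    end = payload.find(bytes([terminator]), start)
    if end == -1:
        return None
    # could be an IP address or an FQDN -- the parser hands this to
    # nwll's determineHostType to pick alias.ip vs alias.host
    return payload[start:end].decode('ascii', errors='replace')

print(extract_host(b'\x04' + b'\x00' * 6 + b'relay.example.com\x01'))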

 

Attached is the parser and PCAP.  This parser was submitted to LIVE, however I wanted you to have it while that process is underway.

 

Good luck and happy hunting.

 

Chris

Attackers love to use readily available red team tools for various stages of their attacks, as this removes the labour required to create their own custom tools. This is not to say that the more innovative APTs are going down this route; it is just something that appears to be becoming more prevalent, and your analysts should be aware of it. This blog post covers a readily available red team tool from GitHub.

 

Tools

In this blog post, the Koadic C2 will be used. Koadic, or COM Command & Control, is a Windows post-exploitation rootkit similar to other penetration testing tools such as Meterpreter and Powershell Empire. 

 

The Attack

The attacker sets up their Koadic listener and builds a malicious email to send to their victim. The attacker wants the victim to run their malicious code, and in order to do this, they try to make the email look more legitimate by supplying a Dropbox link, and a password for the file:

 

The user downloads the ZIP, decompresses it using the password from the email, and is presented with a JavaScript file that has a .doc extension. Here the attacker is relying on the victim not being well versed with computers, and not noticing the obvious problems with this file (extension, icon, etc.):

 

 

 

Fortunately for the attacker, the victim double-clicks the file to open it, and the attacker gets a callback to their C2:

 

From here, the attacker can start to execute commands:

 

 

The Detection in NetWitness Packets

The analyst begins their investigation by placing a focus on looking for C2 traffic over HTTP. From here, the analyst can start to pull apart the protocol and look for anomalies within its behaviour; the analyst opens the Service Analysis meta key to do this and observes two pieces of metadata of interest:

 

  • http post missing content-type

  • http post no get

 

 

 

These two queries have now reduced the data set for the analyst from 2,538 sessions to 67:

 

NOTE: This is not to say that the other sessions do not have malicious traffic, nor that the analyst will ignore them; it is just that, at this point in time, this is the analyst's focal point. If, after analysis, this traffic turned out to be clean, they could exclude it from their search and pick apart other anomalous HTTP traffic in the same manner as before. This allows the analyst to go through the data in a more comprehensive and approachable manner.

 

Now that the data set has been reduced, the analyst can start to open other meta keys to understand the context of the traffic. The analyst wants to see if any files are being transferred, and what user agents are involved; to do so, they open the Extension, Filename, and Client Application meta keys. Here they observe an extension they do not typically see during their daily hunting, WSF. They see what appears to be a random filename, and a user agent they are not overly familiar with:

 

There are only eight sessions for this traffic, so the analyst is now at a point where they could start to reconstruct the raw sessions and see if they can better understand what this traffic is for. Opening the Event Analysis view, the analyst first looks to see if they can observe any pattern in the connection times, and at how much the payload varies in size:

NOTE: Low variation in payload size, and connections that take place every x minutes, are indicative of automated behaviour. Whether that behaviour is malicious or not is up to the analyst to decipher; it could be a simple weather update, for example, but this sort of automated traffic is exactly what the analyst should be looking for when it comes to C2 communication: weeding out the user-generated traffic to get to the automated communications.
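As a toy illustration of that note, the regularity can be quantified: low standard deviation across both the connection intervals and the payload sizes hints at automation (the thresholds here are arbitrary placeholders, not product logic):

from statistics import pstdev

def looks_automated(timestamps, sizes, max_jitter=5.0, max_size_spread=50.0):
    # beacon-like traffic: near-constant gap between connections and
    # near-constant payload size
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(intervals) <= max_jitter and pstdev(sizes) <= max_size_spread

beacon_times = list(range(0, 3600, 300))      # one connection every 5 minutes
print(looks_automated(beacon_times, [412] * len(beacon_times)))  # True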

 

Reconstructing the sessions, the analyst stumbles across a session that contains a tasklist output. This immediately stands out as suspicious to the analyst:

 

From here, the analyst can build a query to focus on this communication between these two hosts and find out when this activity started happening:

 

Looking into the first sessions of this activity, the analyst can see a GET request for the oddly named WSF file, and that BITS was used to download it:

 

The response for this file contains the malicious javascript that infected the endpoint:

 

Further perusing the sessions, it is also possible to see the commands being executed by the attacker:

 

The analyst is now extremely confident this is malicious traffic and needs to be able to track it. The best way to do this is with an application rule. The analyst looks through the traffic and decides upon the following two pieces of logic to detect this behaviour:

 

To detect the initial infection:

extension = 'wsf' && client contains 'bits'

To detect the beacons:

extension = 'wsf' && query contains 'csrf='

 

NOTE: The activity observed was only possible due to the communication happening over HTTP. If this had been SSL, the detection via packets would be much more difficult. This is why introducing SSL Decryption/Interception/Offloading is highly recommended. SSL inspection devices are nothing more than a well-designed man-in-the-middle attack that breaks the encryption into two separate encrypted streams. Therefore, they still provide an adequate level of protection to end-users while allowing security analysts and devices to properly monitor and alert when malicious or unwanted activity takes place, such as the C2 traffic shown here. In summary, if you are responsible for protecting your organization’s assets, you should definitely consider the pros and cons of using this technology.

 

The Detection in NetWitness Endpoint

Every day the analyst should review the IOC, BOC, and EOC meta keys, paying particular attention to the high-risk indicators first. Here the analyst can see a high-risk meta value, transfers file using bits:

 

Here the analyst can see cmd.exe spawning bitsadmin.exe and downloading a suspiciously named file into the \AppData\Local\Temp\ directory. This stands out as suspicious to the analyst:

 

From here, the analyst places an analytical lens on this specific host and begins to look through what other actions took place around the same time. The analyst observes commands being executed against this endpoint and now knows it is infected:

 

Conclusion

Understanding the nuances between user based behavior and mechanical behavior gives an advantage to the analyst who is performing threat hunting. If the analyst understands what "normal" should look like within their environment, they can easily discern it from abnormal behaviors.

 

Analysts should also be aware that not all attackers will use proprietary tools, or even alter the readily available ones to evade detection. An attacker only needs to make one mistake for you to unravel their whole operation, so don't ignore the low-hanging fruit.

There is a new space available on RSA Link: Troubleshooting the RSA NetWitness® Platform

The purpose of this space is to consolidate the available troubleshooting information for RSA NetWitness into a single space.

Information is separated into several "widgets" that are used to categorize the types of troubleshooting items:

  • Installation information
  • Knowledge base articles that contain troubleshooting information
  • Blog posts that discuss troubleshooting the RSA NetWitness platform
  • Videos and tutorials
  • Troubleshooting topics from the user guides

The goal of this space is to be the single place you can come to for a wide variety of troubleshooting information. While the information is also available elsewhere in RSA Link, it may be mixed in with other types of information. In this space, all the information you see is targeted toward helping you solve problems that you encounter while using RSA NetWitness.

Quite frequently when testing ESA alerts and output options / templates, I have wanted the ability to manually or repeatedly trigger alerts.  In order to help with this type of testing, I created a couple ESA Alert templates to generate both scheduled alerting and manual, one-time alerts.

 

Each of these can take a wide variety of time- or schedule-based inputs to generate alerts according to whatever kind of frequency you might want.  The descriptions in each alert have examples, requirements, and links to official Esper documentation with more detail.
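As a point of reference, Esper expresses crontab-style schedules with the timer:at observer; a pattern like the one below fires every five minutes (the exact expressions the templates emit may differ, so treat the Esper docs linked in the alert descriptions as authoritative):

select * from pattern [every timer:at(*/5, *, *, *, *)]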

 

I see the potential for quite a bit of usefulness with the Crontab alert, especially in 11.3 now that ESA Alert script outputs run from the admin server.

 

Lastly, I created these using freemarker templates (how the ESA Rules from Live are packaged) in order to ensure that the times and schedules used in the alerts adhere to proper syntax and formatting, but of course you should feel free to convert these to advanced rules if you like.

 

 

Introduction

There are many, many ways to exfiltrate data from a network, but one common way to do it is using DNS Exfiltration.

With this technique, attackers use the already-open DNS port as the door for uploading and downloading data between the compromised host and their own external server.

 

Obviously that’s not possible with a normal DNS resolution daemon, but with the right software and the right configuration it is possible to set up any DNS server within any infrastructure to exfiltrate data without needing any special permissions.

 

But how is that possible?

 

There are many ready-made packages built for this purpose; the most common are Dnscat2, Iodine, and Powercat+Dnscat2.

 

A quick tip: don’t imagine an attacker using a custom version built only for you.  Attackers are lazy and need their attacks to be efficient, so most of the time you will end up fighting one of these three DNS exfiltration tools, either renamed or exactly as you can find them on GitHub.

 

Remember to also check the second-level domain to avoid any confusion with legitimate software that uses DNS tunneling.
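A naive helper for that second-level domain check might look like this (note that a simple split does not handle multi-part public suffixes like co.uk; a real implementation would use a public-suffix list):

def second_level_domain(qname):
    labels = qname.rstrip('.').split('.')
    return '.'.join(labels[-2:]) if len(labels) >= 2 else qname

print(second_level_domain('t35qz.x9k2.tunnel-server.net'))  # tunnel-server.net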

 

So, let’s take a look at these three tools.  All of them serve the same purpose and use the same TCP/UDP port, but differ in how they send data out of the network:

 

Dnscat2 (https://github.com/iagox86/dnscat2): it connects to a server component by issuing TXT queries, with all data flowing to and from the external server either encrypted or not, depending on your choice. This tool is widely used, partly because it has been ported to multiple programming languages (Ruby, Perl, PowerShell, etc.), which makes it easier to deploy, and it will work on essentially any network.

 

Iodine (https://code.kryo.se/iodine/): the same basic functionality as the previous tool, tunneling through DNS, but with small differences, such as a password for accessing the tunnel, and its use of the NULL record type, which allows downstream data to be sent without encoding. Each DNS reply can contain over a kilobyte of compressed payload data. There’s also an Android version, so most of the work is already done if, for example, you want to run it on an IoT device running Android.

 

Powercat (https://github.com/besimorhino/powercat): this tool alone doesn’t work as a DNS tunnel, but if the server side is Dnscat2, you get an interactive PowerShell over legitimate-looking DNS traffic, and you can increase your capability by adding other PowerShell attack frameworks like Nishang, PowerShell Empire, etc.

 

The purpose of this article is not “how to exfiltrate data from a network”; instead, let’s take a look at how our products can help you identify and track any usage of these techniques in your network. For that, I’ve chosen the common approach used every day by me and my colleagues in Incident Response Services.

 

For that, I’ve chosen RSA NetWitness Network. Let’s take a look at the essential steps.


 

Preparation

In NetWitness, click Configure, select Bundle as the Resource Type, click Search, choose Hunting Pack, and click Deploy to deploy the hunting pack, as shown in Figure 1, to the appropriate components of your infrastructure.

Figure 1

 

Now choose Lua Parser as the Resource Type, click Search, choose the nwll and DNS_verbose_lua parsers, and click Deploy to deploy them as shown in Figure 2.

Figure 2

 

Now that your NetWitness Packets (network) environment is ready and has everything needed to parse and properly identify DNS traffic, you can start your analysis.

 

Now let’s see how to find bad DNS traffic, or rather, traffic that crosses the DNS port but is not real DNS traffic.

 

 

Dnscat2 traffic

With the right package and parsers deployed, if there is traffic generated by Dnscat2 on your network, many indicators will jump out and help you identify it quickly.

Figure 3

 

As shown in Figure 3, for Service Type = DNS (service = 53), Service Analysis shows the presence of “dns base36 TXT record” and “hostname consecutive consonants”, plus “dns large answer” under Risk Information; Hostname Aliases like the one shown in Figure 3 are a good sign of Dnscat2 traffic.

Scrolling to the other meta keys, as shown in Figure 4, you find DNS Query Type with the value “txt record” and DNS Response Text with values of many characters with no apparent meaning. Now you have sufficient alerts!

 

 

Figure 4

 

So, in my network there are queries for TXT records, with base36 encoding, large answers, apparently random response text, and random-character host aliases? To be sure this is not normal DNS traffic, you can click on one of these events and see what’s inside, as shown in Figure 5.

 

Figure 5

 

Now it is clear that you are looking at Dnscat2 traffic, and you should do further analysis on the source IP address that generated it.

 

A quick query I apply every day to find its presence in the preferred time span: service = 53 && dns.querytype = 'txt record' && analysis.service = 'hostname consecutive consonants' && analysis.service = 'dns base36 txt record'
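As an aside, here is roughly what an indicator like "hostname consecutive consonants" keys on; the run length of four is my guess for illustration, not the parser's actual threshold:

import re

CONSONANT_RUN = re.compile(r'[bcdfghjklmnpqrstvwxz]{4,}', re.IGNORECASE)

def suspicious_label(hostname):
    # flag hostnames containing a long run of consonants
    return bool(CONSONANT_RUN.search(hostname))

print(suspicious_label('mail.example.com'))             # False
print(suspicious_label('dnscat.9zkqvw3f.example.com'))  # True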


 

Iodine traffic 

Let’s check some interesting meta that gives us the ability to find Iodine traffic.

 

Figure 6

 

As shown in Figure 6, there’s Service Analysis with “hostname invalid” and “hostname consecutive consonants”, Risk Suspicious with “dns extremely low ttl”, and Host Aliases with a lot of “strange” hostnames. But let’s check whether there’s something more…

 

 

Figure 7

 

As shown in Figure 7, there’s also another interesting field: DNS Query Type says “experimental null record”. Job done: all of this meta is related to Iodine activity, and the packet shown in Figure 8 confirms the traffic.

 

Figure 8

 

Now it is clear that you are looking at Iodine traffic, and you should do further analysis on the source IP address that generated it.

 

A quick query I apply every day to find its presence in the preferred time span: service = 53 && dns.querytype = 'experimental null record'

 


 

Powercat + Dnscat2

Looking into Powercat + Dnscat2 is different from the previous cases; let’s check why.

As shown in Figure 9, Session Analysis says “single sided udp”, Service Analysis says “hostname consecutive consonants”, “dns base36 txt record”, and “dns single request response”, and Hostname Aliases contains a lot of hostnames with strange names.

 

Figure 9

 

Looking at more meta, as shown in Figure 10, there’s DNS Query Type as “txt record” and DNS Response Text with a lot of strange text starting with the same character.

 

 

Figure 10

 

This looks similar to Dnscat2 traffic, but not exactly the same; the specific differences from standard Dnscat2 traffic are the “single sided udp” and “dns single request response” indicators.

 

Figure 11

 

As shown in Figure 11, there’s only one request and one response, and that’s the main difference between standard Dnscat2 and Powercat with Dnscat2.

Now it is clear that you are looking at Powercat-with-Dnscat2 traffic, and you should do further analysis on the source IP address that generated it.

 

A quick query I apply every day to find its presence in the preferred time span: service = 53 && analysis.service = 'hostname consecutive consonants' && analysis.service = 'dns base36 txt record' && analysis.service = 'dns single request response'


 

Addon

Most of the time when you look into DNS traffic, you may encounter something like a client working not only over UDP but also over TCP.

If the information about the source is right, with this hunting methodology you can also achieve some goals around finding network misconfiguration.

That’s because DNS infrastructure needs to be managed, and there are plenty of guides on "How to secure your DNS infrastructure". In a normal situation, a client attempts DNS resolution through an internal server, most of the time a domain controller, which talks to a DNS forwarder that is allowed to go outside the network for resolution (if neither has the answer cached).

So if you see a client asking for resolution directly from the client network to the internet, even over TCP, you had better take a closer look at your DNS infrastructure: a backdoor on a client machine using port 53 probably has direct access to the internet, and an attacker can exfiltrate everything without any DNS tunnel at all, simply by using the allowed port.

 

A quick query I apply every day to find its presence in the preferred time span: direction = 'outbound' && service = 53 && ip.proto = 6. If your source IP addresses are filled with a lot of IPs coming from the client network, you have a possible misconfiguration in the network and/or a possible hole.


 

Finally

There are many ways to hunt and dig into a system, but with the right product and the right methodology you can achieve success much faster. This article is intended as a quick help in doing that, because we do it every day with our products!

 

Hope this helps.

 

Thank you.

 

Max

 

 

The complete overhaul of NW-Endpoint 4.4 into NW-Endpoint 11.3 includes (among many changes) a different method for creating your own, or tuning existing, endpoint alerts.  In the old version (4.4), everything was a SQL query, but since we have moved away from Windows and SQL Server in 11.3, I'd like to shed some light on how the new process works, as well as include some tooling intended to assist folks who want to do this themselves.

 

The RSA NetWitness Endpoint Configuration Guide (https://community.rsa.com/docs/DOC-100160) has a section starting on pg. 12 that covers everything here in greater detail.  If you'd like more information on this subject, I recommend taking a look at that document.

 

At a high level, the process for Endpoint 11.3 to generate alerts and calculate file and host risk scores goes like this:

 

Let's take a look at a couple of the OOTB examples and see how these different pieces are interacting with each other by examining the process that turns the "runs powershell decoding base64 string" rule into a potential risk score.

 

If the App Rule's condition statement is met, it creates a meta value of "runs powershell decoding base64 string" in the "boc" meta key:

 

These are then used in the corresponding ESA Rule "Runs Powershell Decoding Base64 String" contained in the OOTB Endpoint Risk Scoring Rule Bundle (I've attached all of the OOTB ESA Rules contained in the bundle to this blog).

****Take note that the app_rule_meta_value is case sensitive.  If you use capital letters in the App Rule Name field, then the "value" field in its companion ESA Rule must also contain capital letters****

 

Last up in the process is the Risk Scoring Rule.  This takes the ESA Alert and produces a score (scaled from 0 - 100) for the host where the alert occurred, and if applicable the module involved in the alert.  This last part is where I expect the most potential confusion - determining the host where an alert occurred is straightforward, but the module might not be.

 

This is because there can potentially be both a source module (filename_src, checksum_src) and a destination module (filename_dst, checksum_dst), or just the module itself without a source or destination (filename, checksum), or for some alerts there might not be a module involved in the alert at all.  I've attached all of the OOTB Risk Scoring Rules to this blog, and I'd encourage you to take a look at these variations if you intend to create your own, or tune existing, rules and alerts.

 

Now then, back to the "Runs Powershell Decoding Base64 String" Rule.  This Risk Scoring Rule looks for the ESA Alert and creates a score for the source module (checksum_src, filename_src) in the event, as well as the host where it occurred.  Any risk scores that are generated for affected hosts and modules will appear in the Investigate/Hosts and Investigate/Files pages in the UI, and can also appear as Alerts and Incidents in the Respond UI.

 

And just to be thorough, here are a couple examples of rules with different Risk Scoring.

 

A rule without a source or destination module --> "Scripting Addition In Process"

 

A rule without any module and just the Host --> "Windows Firewall Disabled"

 

Now we have some examples under our belt, and know how the different inputs and options relate to one another and the outcome.  The process for adding your own rule is covered in the configuration guide linked above, and this next section aims to assist with some of the manual CLI aspects of that process.

 

After playing around with the Blocking capabilities in 11.3, I decided I wanted to add a couple custom alerts.

 

First, I wanted to know when a module I blocked was actively running on an endpoint at the time I blocked it and was subsequently killed.  My App Rule to trigger on this activity:

 

And second, I wanted to know when an attempt was made to access or run a module that I had previously blocked.  My App Rule for this activity:

 

With these App Rules created and Applied, the next steps are to create and apply the corresponding ESA Alert and Risk Scoring Rules from a terminal session in the Admin Server (Node0).  The script "endpointCustomRule.sh" attached to this blog can help walk you through these steps, if you choose.  It aims to eliminate errors that may occur when completing these steps manually.

 

Some notes on the script:

  • must be run on the Admin Server as root
  • must be run only after creating and applying your App Rule(s)
    • be sure to make your App Rules unique, otherwise the script might not find the correct one when it is checking for a valid Log Decoder App Rule
    • if you have multiple Endpoint Log Hybrids (ELHs), be sure to Push your App Rule(s) to the other ELHs in your environment
  • applies some error checking and input validation to ensure valid Rules are created and added to the respective databases successfully

 

If you find errors or gaps in the script please let me know.

 

Prompting user for input:

 

Adding and confirming the ESA and Risk Scoring Rules:

 

And finally, confirming that we are now successfully creating alerts and re-calculating Risk Scores when the events occur:

WireGuard is a new open-source VPN protocol used to create point-to-point tunnels. It uses the most modern cryptographic protocols, and it works on the network layer for both IPv4 and IPv6.
One of the advantages of the WireGuard implementation is the size of its code: roughly 4,000 lines, which is much smaller than OpenVPN or IPsec implementations.  Initially released for the Linux kernel, it is now cross-platform and widely deployable.  All these aspects make WireGuard a perfect choice for those who need to create a secure channel.

 

Considering its easy implementation and wide availability, I tried to create a parser to help identify this type of traffic.

 

For our purpose we can ignore the header of the packet and concentrate on the payload data only.

first packet

From the payload analysis we see that the first packet starts with 01 00 00 00. A similar pattern appears in the following packets: 02 00 00 00 in the response and 04 00 00 00 in the rest of the traffic.

second packet

third packet


If we cross-reference this with WireGuard’s documented protocol, we can confirm that each message begins with a 1-byte message type (0x01 for handshake initiation, 0x02 for handshake response, 0x04 for transport data) followed by three reserved zero bytes, so we can assume these patterns will help to identify this type of traffic.
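A quick Python sanity check of that pattern (the message-type names come from the WireGuard spec; this sketch is mine, not the parser):

WG_TYPES = {1: 'handshake initiation', 2: 'handshake response', 4: 'transport data'}

def classify_wireguard(payload: bytes):
    # one type byte followed by three reserved zero bytes
    if len(payload) >= 4 and payload[1:4] == b'\x00\x00\x00':
        return WG_TYPES.get(payload[0])
    return None

print(classify_wireguard(b'\x01\x00\x00\x00' + b'\xee' * 16))  # handshake initiation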

 

With some suggestions from Christopher Ahearn, I started to create my Lua parser.

 

The entry point for my parser is the token 01 00 00 00

WireGuard:setCallbacks({
    [nwevents.OnSessionBegin] = WireGuard.sessionBegin,
    ["\001\000\000\000"] = WireGuard.tokenMATCH, -- find the token 1 0 0 0
})

Once the token is identified inside a session, it's necessary to check its position, as we are looking for payloads that have that token in the first 4 bytes.

function WireGuard:tokenMATCH(token, first, last)
    -- check if the token is on the first 4 bytes
    if first == 1 and last == 4 then

If this is the case, then I identify the request stream and the response stream, and then I look for the patterns identified earlier: 02 00 00 00 in the first packet of the response stream, and 04 00 00 00 in all the other packets.

if requestStream and responseStream then
    -- the first 4 bytes of the first response packet are 2 0 0 0
    if nwstream.getPayload(responseStream,1,4):find("\002\000\000\000",1,4) then
        -- the first 4 bytes of the other packets are 4 0 0 0
        if (nwpacket.getPayload(requestPacket,1,4):find("\004\000\000\000",1,4) and
            nwpacket.getPayload(responsePacket,1,4):find("\004\000\000\000",1,4)) then

If the first 4 packets all respect the pattern, it means the analyzed session is a WireGuard session, and the parser sets the service type to 51820. WireGuard doesn't use a specific port for communication, so I decided to use 51820 because it's the value used in the configuration samples available on the WireGuard website.

nw.setAppType(51820)
nw.createMeta(self.keys.ioc, "WireGuard VPN")

 

Once deployed on the Network Decoder, the parser allows us to quickly identify whether WireGuard VPN traffic is established on the network.


 

WireGuard is evolving and it is possible that something will change, but this parser can help to identify some VPN traffic that otherwise would not have been defined properly using RSA NetWitness.  

Introduction

Lateral movement is a technique that enables an adversary to access and control remote systems on a network. It is a critical phase in any attack, and understanding the methods that can be used to perform lateral movement, along with how those protocols display themselves in NetWitness, is paramount in detecting attackers moving laterally in your environment. It is also important to understand that many of the mechanisms used by attackers for lateral movement, are also used by administrators for legitimate reasons, and thus why it is important to monitor these mechanisms to understand what is typical behaviour, and what is not.

 

Tools

In this blog post, the Impacket implementation of Smbexec will be used. This sets up a semi-interactive shell for the attacker.
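For context, kicking off that semi-interactive shell with Impacket looks something like this (the target and credentials are made up for illustration):

# smbexec.py CORP/jsmith:Password1@192.0.2.25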

 

The Attack

The attacker has successfully gained access to your network and dumped credentials, all without any detection from your Security Operations Center (SOC). The attacker decides to move laterally using Smbexec, they connect to one of the hosts they had previously identified and begin to execute commands:

 

The Detection in NetWitness Packets

Smbexec works a little differently to some of the more common lateral movement tools such as PsExec. Instead of transferring a binary to the target endpoint and using the svcctl interface to remotely create a service using the transferred binary and start the service, Smbexec makes a call to an existing binary that already lives on that endpoint to execute its commands, cmd.exe.

 

NetWitness Packets does a great job at pulling apart packet data and pointing you in directions of interest. One of the meta values we can pivot on to focus on traffic of interest for lateral movement is remote service control:

 

NetWitness also creates metadata when it observes Windows CLI commands being run; this metadata is under the Service Analysis meta key and is displayed as windows cli admin commands. This would be another interesting pivot point to look into and see what type of commands are being executed:

 

NOTE: Just because an endpoint is being remotely controlled, and there are commands being executed on the endpoint, this does not mean that your network is compromised. It is up to the analyst to review the sessions of interest like we are in this blog post, and determine if something is out of the ordinary for your environment.

 

Looking into the other metadata available, we can see a connection to the C$ share, and that a filename called __output was created:

 

This does not give us much to go on and say that this is suspicious, so it is necessary to reconstruct the raw session itself to get a better idea of what is happening. Opening the Event Analysis view for the session we reduced our data set to, and analysing the payload, a suspicious string stands out as shown below:

 

Tidying up the command a little, it ends up looking like this:

%COMSPEC% /Q /c echo dir > \\127.0.0.1\C$\__output 2>&1 > %TEMP%\execute.bat & %COMSPEC% /Q /c %TEMP%\execute.bat & del %TEMP%\execute.bat

  • %COMSPEC% - Environment variable that points to cmd.exe
  • /Q - Turns echo off
  • /C - Carries out the command specified by string and then terminates
  • %TEMP% - Environment variable that points to C:\Users\username\AppData\Local\Temp

 

We can see that the string above will echo the command we want to execute (dir) into a file named "__output" on the C$ share of the local machine. The command we want to execute also gets placed into execute.bat in the %TEMP% directory, which is subsequently executed, and then deleted.

 

Analysing the payload further, we can also see the data that is returned from the command that was executed by the attacker:

 

Now that suspicious traffic has been observed, we can filter on this type of traffic, and see other commands being executed, such as whoami:

 

Smbexec is quite malleable; the vast majority of its indicators can easily be edited to evade signature-type detection for this behaviour. However, using NetWitness Packets' ability to carve out behaviours, the following application rule logic should be suitable to pick up on suspicious traffic over SMB that an analyst should investigate to detect this type of behaviour:

(ioc = 'remote service control') && (analysis.service = 'windows cli admin commands') && (service = 139) && (directory = '\\c$\\','\\ADMIN$\\') 

 

The Detection in NetWitness Endpoint

NetWitness Endpoint does a great job at picking up on this activity, looking at the Behaviours of Compromise meta key, two pieces of metadata point the analyst toward this activity, services runs command shell and runs chained command shell:

 

Opening the Event Analysis view for these sessions, we can see that services.exe is spawning cmd.exe, and we can also see the command that is being executed by the attacker:

 

The default behaviour of Smbexec could easily be detected with application rule logic like the following:

param.dst contains '\\127.0.0.1\C$\__output'

Conclusion

Understanding the Tools, Techniques, and Procedures (TTP's) used by attackers, coupled with understanding how NetWitness interprets those TTP's, is imperative in being able to identify them within your network. The NetWitness suite has great capabilities to pull apart network traffic and pick up on anomalies, which makes it easier for the analysts to hunt down and detect these threats.

In RSA NetWitness 11.3, one of the behind-the-scenes changes to the platform was moving the script notification server from ESA onto the Admin Server.

 

This change opens up a number of possibilities for scripting and automating processes within the NetWitness environment, but also requires a few changes to existing, pre-11.3 scripts.

 

Prior to 11.3, the raw alert data would be passed to the ESA script server as a single argument which could then be read, written to disk, parsed, etc. e.g.:

 

#!/usr/bin/env python
import json
import sys

def dispatch(alert):
   # pre-11.3: the entire raw alert arrives as a single JSON argument
   with open("/tmp/esa_alert.json", mode='w') as alert_file:
      alert_file.write(json.dumps(alert, indent=True))

def myFunction():
   esa_alert = json.loads(open('/tmp/esa_alert.json').read())
   .....etc.....
   .....etc.....

if __name__ == "__main__":
   dispatch(json.loads(sys.argv[1]))
   myFunction()
   sys.exit(0)

 

 

But in 11.3, the raw alert gets broken up into multiple arguments that need to be joined together.  One possible solution to this change could be something like this:

 

#!/usr/bin/env python
import sys
import json

def dispatch():
   with open("/tmp/esa_alert.json", mode='w') as alert_file:
      # 11.3: the raw alert arrives split across multiple arguments;
      # drop the script name and rejoin the pieces with spaces
      a = sys.argv
      del a[0]
      alert_file.write(' '.join(a))

def myFunction():
   esa_alert = json.loads(open("/tmp/esa_alert.json").read())
   .....etc.....
   .....etc.....

if __name__ == "__main__":
   dispatch()
   myFunction()
   sys.exit(0)

 

As I mentioned above, moving the script server onto the Admin Server opens up a number of possibilities for certain queries and tasks within the NW architecture.  Some that come to mind:

  • automating backups
  • pulling host stats and ingesting them as syslog events
  • better ESA Alert <--> Custom Feed <--> Context-Hub List <--> ESA Alert enrichment loops

 

However, one restriction I've been trying to figure out a good solution for is that the Admin Server will run these scripts as the "netwitness" user, and this user has fairly limited access.

 

I've been kicking around the possibility of adding this user to the sudoers group, possibly adding read/write/execute permissions for this user to specific directories and/or files depending on the use case, or sudo-ing to a different user within the script.

 

Each of these options presents certain risks, so I'd be interested in hearing what other folks might think about these or other possible solutions for running scripts with elevated permissions in as secure a manner as possible.

Introduction

Lateral movement is a technique that enables an adversary to access and control remote systems on a network. It is a critical phase in any attack, and understanding the methods that can be used to perform lateral movement, along with how those protocols display themselves in NetWitness, is paramount in detecting attackers moving laterally in your environment. It is also important to understand that many of the mechanisms used by attackers for lateral movement, are also used by administrators for legitimate reasons, and thus why it is important to monitor these mechanisms to understand what is typical behaviour, and what is not.

 

Tools

In this blog post, Winexe will be used. Winexe is a GNU/Linux-based application that allows users to execute commands remotely on Windows NT/2000/XP/2003/Vista/7/8 systems. It installs a service on the remote system, executes the command, and uninstalls the service. Winexe allows execution of most Windows shell commands.
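For context, a typical Winexe invocation from the attacker's Linux host looks something like this (the target and credentials are made up for illustration):

# winexe -U 'CORP/jsmith%Password1' //192.0.2.25 "cmd.exe"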

 

The Attack

The attacker has successfully gained access to your network and dumped credentials, all without any detection from your Security Operations Center (SOC). The attacker decides to move laterally using Winexe, they connect to one of the hosts they had previously identified and begin to execute commands:

 

The Detection in NetWitness Packets

The use of Winexe is not overly stealthy. Its use creates a large amount of noise that is easily detectable. Searching for winexesvc.exe within the filename metadata returns the SMB transfer of the executable to the ADMIN$ share:

 

Using the time the file transfer took place as the pivot point to continue investigation, it is also possible to see the use of the Windows Service Control Manager (SCM) directly afterward to create and start a service on the remote endpoint. SCM acts as a remote procedure call (RPC) server so that services on remote endpoints can be controlled:

 

Reconstructing the raw session as text, it is possible to see the service name being created, winexesvc, and the associated executable that was previously transferred being used as the service base, winexesvc.exe:

 

Continuing to analyse the SMB traffic around the same time frame, it is also possible to see another named pipe, ahexec, being used. This is the named pipe that Winexe uses:

 

Reconstructing these raw sessions as text, it is possible to see the commands that were executed:

 

As well as the output that was returned to the attacker:

 

Based on the artefacts we have seen leftover from Winexe's execution over the network, there are multiple pieces of logic we could use for our application rule to detect this type of traffic. The following application rule logic would pick up on the initial transfer of the winexesvc.exe executable, and the subsequent use of the named pipe, ahexec:

(filename = 'ahexec','winexesvc.exe') && (service = 139)

The Detection in NetWitness Endpoint

Searching for winexesvc.exe as the filename source shows the usage of Winexe on the endpoints; this is the executable that handles the commands sent over the ahexec named pipe. The filename destination meta key shows the executables invoked via the use of Winexe:

 

A simple application rule could be created for this activity by looking for winexesvc.exe as the filename source:

(filename.src = 'winexesvc.exe')

 

Additional Analysis

Analysing the endpoint, you can see the winexesvc.exe process running from task manager:

 

As well as the service that was installed via SCM over the network:

 

This service creation also creates a log entry in the System event log as event ID 7045:

 

This means if you were ingesting logs into NetWitness, you could create an application rule to trigger on Winexe usage with the following logic:

(reference.id = '7045') && (service.name = 'winexesvc')

We can also see the named pipe which Winexe uses by executing the Sysinternals pipelist tool:

Introduction

Lateral movement is a technique that enables an adversary to access and control remote systems on a network. It is a critical phase in any attack, and understanding the methods that can be used to perform lateral movement, along with how those protocols display themselves in NetWitness, is paramount in detecting attackers moving laterally in your environment. It is also important to understand that many of the mechanisms used by attackers for lateral movement, are also used by administrators for legitimate reasons, and thus why it is important to monitor these mechanisms to understand what is typical behaviour, and what is not.

 

What is WMI?

At a high level, Windows Management instrumentation (WMI) provides the ability to, locally or remotely, manage servers and workstations running Windows by allowing data collection, administration, and remote execution. WMI is Microsoft's implementation of the open standard, Web-Based Enterprise Management (WBEM) and Common Information Model (CIM), and comes preinstalled in Windows 2000 and newer Microsoft Operating Systems.

 

Tools

In this blog post, the Impacket implementation of WMIExec will be used. This sets up a semi-interactive shell for the attacker. WMI can be used for reconnaissance, privilege escalation (by looking for well-known misconfigurations), and lateral movement.
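For context, launching that shell with Impacket looks something like this (the target and credentials are made up for illustration):

# wmiexec.py CORP/jsmith:Password1@192.0.2.25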

 

The Attack

The attacker has successfully gained access to your network and dumped credentials, all without any detection from your Security Operations Center (SOC). The attacker decides to move laterally using WMIExec, they connect to one of the hosts they had previously identified and begin to execute commands:

 

The Detection in NetWitness Packets

NetWitness Packets can easily identify WMI remote execution. All the analyst needs to do is open the Indicators of Compromise (IOC) meta key and look for wmi command:

 

Pivoting on the wmi command metadata, and opening the Action meta key, the analyst can observe the commands that were executed, as these are sent in clear text:

 

NOTE: Not all WMI commands are malicious. It is up to the analyst to understand what is normal behaviour within their environment, and what is not. The commands seen above are typical of WMIExec however, and should raise concern for the analyst.

 

The following screenshot is of the raw data itself. Here it is possible to see the parameter that was passed and subsequently registered under the action meta key:

 

Looking at the parameter passed, it is possible to see that WMIExec uses CMD to execute its command and output the result to a file (which is named the timestamp of execution) on the ADMIN$ share of the local system. The following screenshot shows an example of whoami being run, and the associated output file and contents on the remote host:

 

NOTE: This file is removed after it has been successfully read and displayed back to the attacker. Evidence of this file only exists on the system for a small amount of time.

 

We can get a better understanding of WMIExec's function from viewing the source code:

 

To detect WMIExec activity in NetWitness Packets, the following application rule logic could be created to detect it:

action contains '127.0.0.1\\admin$\\__1'

Lateral traffic is seldom captured by NetWitness Packets. More often than not, the focus of packet capture is placed on the ingress and egress points of the network, normally due to high volumes of core traffic that significantly increase costs for monitoring. This is why it is important to also have an endpoint detection product, such as NetWitness Endpoint to detect lateral movement.

 

The Detection in NetWitness Endpoint

A daily activity for the analyst should be to check the Indicators of Compromise (IOC), Behaviours of Compromise (BOC), and Enablers of Compromise (EOC) meta keys. Upon doing so, the analyst would observe the following metadata, wmiprvse runs command shell:

 

Drilling into this metadata, and opening the Event Analysis view, it is possible to see the WMI Provider Service spawning CMD and executing commands:

 

To detect WMIExec activity in NetWitness Endpoint, the following application rule logic could be created to detect it:

param.dst contains '127.0.0.1\\admin$\\__1'

Conclusion

Understanding the Tools, Techniques, and Procedures (TTP's) used by attackers, coupled with understanding how NetWitness interprets those TTP's, is imperative in being able to identify them within your network. The NetWitness suite has great capabilities to pull apart network traffic and pick up on anomalies, which makes it easier for the analysts to hunt down and detect these threats.

 

WMI is a legitimate Microsoft tool used within environments by administrators, as well as by 3rd party products; it can therefore be difficult to differentiate normal from malicious, which is why it is a popular tool for attackers. Performing threat hunting daily is an important activity for your analysts, to build baselines and discern anomalous usage from normal activity.

Sigma for your SIEM

Over the last year, a few trends have emerged in detection ruleset sharing circles.  Standards, or common formats, for sharing detection rulesets have emerged as the de facto way teams communicate rulesets that are then converted into local technologies.

 

  • Yara for file based detections
  • Snort/Bro/Zeek rules for network based detections
  • Sigma for SIEM based detections

 

Along with MITRE ATT&CK these appear to be a consistent common foundation for sharing methodologies.

 

Given that, taking a shot at using Sigma to create RSA NetWitness rules based on the rulesets in the GitHub repo was the next logical step.  The hard work of creating the backend and the initial field mappings was done by @tuckner; my work was just adding a few additional field mappings and creating a wrapper script to help make the process of running the rules easier.

 

There are still some issues in the conversion script that I have noticed, and not all capabilities in Sigma have been ported over (or can be ported over programmatically), but this is enough of a start to get you on your way to developing additional rulesets with these capabilities.

 

*** <disclaimer>

Please note this is not an official RSA product, this is an attempt to start the conversion process of these rules to something NetWitness can begin to understand. There will be mistakes and errors in this community developed tool, feel free to contribute fixes and enhancements to the Sigma project to make it better and more accurate

</disclaimer> ***

 

You will need to install Python 3 to make the sigmac tool run. NetWitness appliances don't have the right version of Python, so you will need somewhere else to install it; these are the instructions I fumbled through to make it work...

 

https://github.com/epartington/rsa_nw_sigma_wrapper/blob/master/install%20python3.txt

 

Once you have the tool running you should take a look at the rules that exist in the Sigma repo to see which ones you want to take a crack at converting.

 

Those rules exist here:

https://github.com/Neo23x0/sigma/tree/master/rules

 

The tool you will use to convert the rules is sigmac and lives under tools/sigmac

The backend you will refer to is netwitness and lives under tools/sigma/backends

The last item you need to know about is the template used to convert the rule with the backend, which is located at tools/config/netwitness.yml

 

Running the command on a single file looks something like this:

python36 sigmac -t netwitness ../rules/network/net_mal_dns_cobaltstrike.yml
(query contains 'aaa\.stage\.', 'post\.1')

 

You can use this to run individual conversions, but what if you want to bulk convert all the rules in a folder?

This wrapper script will help you do that. Place it in the root folder and adjust the directory paths as needed; it will output the name of each file as well as its conversion, so that you know which file you are converting.

 

https://github.com/epartington/rsa_nw_sigma_wrapper/blob/master/sigma-wrapper.sh

 

Which gets you something like this:

 

/root/sigma/sigma-master/rules/windows/builtin/win_susp_sdelete.yml
((device.class='windows hosts') && (event.source='microsoft-windows-security-auditing') && (reference.id = '4656', '4663', '4658') && (obj.name contains '.AAA', '.ZZZ'))
/root/sigma/sigma-master/rules/windows/builtin/win_susp_security_eventlog_cleared.yml
((device.class='windows hosts') && (event.source='microsoft-windows-security-auditing') && (reference.id = '517', '1102'))
/root/sigma/sigma-master/rules/windows/builtin/win_susp_svchost.yml
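If you'd rather drive the bulk conversion from Python instead of bash, a rough equivalent of the wrapper could look like this (the paths assume the same layout as in the output above):

#!/usr/bin/env python3
import subprocess
from pathlib import Path

SIGMA = Path('/root/sigma/sigma-master')

# walk every Sigma rule, print its path, then print sigmac's conversion
for rule in sorted((SIGMA / 'rules').rglob('*.yml')):
    print(rule)
    subprocess.run(['python36', str(SIGMA / 'tools' / 'sigmac'),
                    '-t', 'netwitness', str(rule)])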

 

Some items to be aware of:

  • IP addresses appear to be quoted which should not occur for our latest requirements
  • Keep an eye on regex usage
  • I haven't checked too far into the escaping of slashes for importing via the UI vs. the .nwr method.  Be careful that the right number of slashes is respected for whichever method you use.

 

So far this looks like a useful method to add a bunch of current SIEM detections to the RSA NetWitness Platform, feel free to test and contribute to the converter, fieldmappings or other functions if you find this useful.

RSA NetWitness has a number of integrations with threat intel data providers but two that I have come across recently were not listed (MISP and Minemeld) so I figured that it would be a good challenge to see if they could be made to provide data in a way that NetWitness understood.

 

Current RSA Ready Integrations

https://community.rsa.com/community/products/rsa-ready/rsa-ready-documentation/content?filterID=contentstatus%5Bpublished%5D~objecttype~objecttype%5Bdocument%5D&filterID=contentstatus%5Bpublished%5D~tag%5Bthreat+intel%5D&sortKey=contentstatus%5Bpublished%5D~subjectAsc&sortOrder=1

 

MISP

Install the MISP server in a few different ways

https://www.misp-project.org/

 

A VMware image, a Docker image, or a native OS install are all available (the VMware image worked best for me):

https://www.circl.lu/misp-images/latest/

 

Authenticate and set up the initial data feeds in the platform

Set the schedule to get them polling for new data

 

Once the feeds are created and being pulled in, you can look at the attributes to make sure you have the data you expect

 

Test the API calls using PyMISP via a Jupyter Notebook (a minimal script sketch follows the list below):

https://github.com/epartington/rsa_nw_misp/blob/master/get-misp.ipynb

  • You can edit the notebook code to change the interval of data to pull back (last 30 days, all data, etc.) to limit the impact on the MISP server
  • You can change the indicator type (ip-dst, domain, etc.) to pull back the relevant columns of data
  • You can change the column data to make sure you have everything you need from the feed data
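As a rough illustration of what the notebook does, here is a minimal sketch, assuming the older PyMISP search interface available at the time of writing; the URL and key are placeholders:

#!/usr/bin/env python
# Pull ip-dst attributes added to MISP in the last 30 days and print
# them as CSV rows (event id, category, value).
from pymisp import PyMISP

MISP_URL = 'https://your-misp-server'  # hypothetical
MISP_KEY = 'YOUR_API_KEY'              # hypothetical

misp = PyMISP(MISP_URL, MISP_KEY, ssl=False)
result = misp.search(controller='attributes', type_attribute='ip-dst', last='30d')
for attr in result['response']['Attribute']:
    print('%s,%s,%s' % (attr['event_id'], attr['category'], attr['value']))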

 

Once that checks out and the notebook produces the output data you want, you can add the Python script to the NetWitness head server

 

Install PyMISP on the head server of the NetWitness system so that you can crontab the query.

  • Install PyMISP using PIP

(Keep in mind that changing packages on the head server could break things, so be careful and test early and often before committing this change in production)

# Add the EPEL repo and install pip
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install python-pip
# OWB_FORCE_FIPS_MODE_OFF=1 turns off FIPS enforcement, which otherwise breaks python and pip on the head server
OWB_FORCE_FIPS_MODE_OFF=1 python        # optional: verify python starts with the workaround (exit with Ctrl-D)
OWB_FORCE_FIPS_MODE_OFF=1 pip install pymisp
OWB_FORCE_FIPS_MODE_OFF=1 pip install --upgrade pip
# Test the exported script
OWB_FORCE_FIPS_MODE_OFF=1 ./get-misp.py
# Disable the EPEL repo again when done
yum repolist
vi /etc/yum.repos.d/epel.repo           # change enabled from 1 to 0

Make sure you disable the EPEL repo after installing so that you don't create package update issues later

 

Now set up the query in a script (export the Jupyter notebook as a Python script):

https://github.com/epartington/rsa_nw_misp/blob/master/get-misp.py

 

Crontab the query to schedule it (the OWB_FORCE_FIPS_MODE_OFF variable is required to work around FIPS restrictions that seem to break a number of script-related items in Python):

23 3 * * * OWB_FORCE_FIPS_MODE_OFF=1 /root/rsa-misp/get-misp.py > /var/lib/netwitness/common/repo/misp-ip-dst.csv

 

Now set up the NetWitness recurring feed to pull from the local feed location.

Map the ip-dst values (the third column for this script) to the appropriate meta key and map the other columns as required.
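To illustrate the mapping (hypothetical rows, not real feed data), the CSV produced by the script looks roughly like this, with the ip-dst value in the third column:

1234,Network activity,203.0.113.10
1235,Network activity,198.51.100.7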

 

 

Minemeld


Minemeld is another free intel aggregation tool from Palo Alto Networks and can be installed in many ways (I tried a number of installs on different Ubuntu OSes and had difficulties); the one that worked best for me was a Docker image.

https://www.paloaltonetworks.com/products/secure-the-network/subscriptions/minemeld

https://github.com/PaloAltoNetworks/minemeld/wiki

 

The Docker image that worked well for my testing:

https://github.com/jtschichold/minemeld-docker

 

docker run -it --tmpfs /run -v /somewhere/minemeld/local:/opt/minemeld/local -p 9443:443 jtschichold/minemeld

To keep it running as a daemon after testing, add the -d flag so the container continues running after you exit the terminal, as shown below.
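For example, the same command daemonized (with -it dropped, since no interactive terminal is needed):

docker run -d --tmpfs /run -v /somewhere/minemeld/local:/opt/minemeld/local -p 9443:443 jtschichold/minemeld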

 

After installing (if you do this right, you can include a certificate in the initial build of the container, which helps with certificate trust to NetWitness), you log in and set up a new output action to take your feeds and map them to a format and output that RSA NetWitness can use.

 

This is the pipeline we will create; it maps a sample threat intel list to an output action so that NetWitness can consume the information.

It is defined by editing the YAML configuration file (specifically, this section creates the outboundfeedhcvalues output that NetWitness reads):

https://github.com/epartington/rsa_nw_minemeld/blob/master/minemeld-netwitness-hcvalues.yml

outboundfeedhcvalues:
    inputs:
        - aggregatorIPv4Outbound-1543370742868
    output: false
    prototype: stdlib.feedHCGreenWithValue

This is a good starting point for creating custom miners:

https://live.paloaltonetworks.com/t5/MineMeld-Articles/Using-MineMeld-to-Create-a-Custom-Miner/ta-p/227694

 

Once it is created and working, you will have a second miner listed and the dashboard will update.

 

You can test the feed output with a direct API call from the browser, like this (substitute your feed name for $feed_name):

https://192.168.x.y:9443/feeds/"$feed_name"?tr=1&v=csv&f=indicator&f=confidence&f=share_level&f=sources

The query parameters are explained here:

https://live.paloaltonetworks.com/t5/MineMeld-Articles/Parameters-for-the-output-feeds/ta-p/146170

 

In this case:

  • tr=1 translates IP ranges into CIDRs; this can be used with both v=json and v=csv.
  • v=csv returns the indicator list in CSV format.

 

The list of attributes is specified by using the f parameter one or more times. The default column name is the attribute name; to specify a different column name, append |column_name to the f parameter value.
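For example, f=indicator|ip.addr (a hypothetical mapping) would output the indicator attribute under a column named ip.addr.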

 

The h parameter controls generation of the CSV header: with h=0 the header is not generated; by default it is.

 

Encoding is utf-8. By default no UTF-8 BOM is generated. If ubom=1 is added to the parameter list, a UTF-8 BOM is generated for compatibility.

 

The f values become the column names in the feed output.

Testing this call drops a file in your browser that you can inspect to make sure it contains the data and columns you want.

 

Once you are confident in the process and the output format, you can script and crontab the output to drop into the local feed location on the head server (I did this because I couldn't figure out how to get NetWitness to accept the self-signed certificate from the Docker image).

https://github.com/epartington/rsa_nw_minemeld/blob/master/script-rsa-minemeld.sh

22 3 * * * /root/rsa-minemeld/script-rsa-minemeld.sh
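For reference, here is a minimal Python equivalent of that shell script (a sketch under assumptions, not the linked script itself; the host, feed name, and output path are placeholders to adjust):

#!/usr/bin/env python
# Fetch the Minemeld CSV feed, ignoring the container's self-signed
# certificate, and write it to the NetWitness local feed repository.
import requests

FEED_URL = ('https://192.168.x.y:9443/feeds/outboundfeedhcvalues'
            '?tr=1&v=csv&f=indicator&f=confidence&f=share_level&f=sources')
OUT_FILE = '/var/lib/netwitness/common/repo/minemeld-hcvalues.csv'

resp = requests.get(FEED_URL, verify=False)  # self-signed certificate
resp.raise_for_status()
with open(OUT_FILE, 'w') as f:
    f.write(resp.text)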

Now create the same kind of local recurring feed to pull the information in as feed data on your decoders.

Define column 1, the IP in CIDR notation, as the index and map the other columns as required.

 

Done

 

We now have pipelines for two additional threat data aggregators that you may need in your environment.

There are a myriad of post-exploitation frameworks that can be deployed and utilized by anyone. These frameworks are great to stand up as a defender to gain insight into what C&C (command and control) traffic can look like and how to differentiate it from normal user behavior. The following blog post demonstrates an endpoint becoming infected and the subsequent analysis in RSA NetWitness of the traffic from PowerShell Empire.

 

The Attack

The attacker sets up a malicious page which contains their payload. The attacker can then use a phishing email to lure the victim into visiting the page. Upon the user opening the page, a PowerShell command is executed that infects the endpoint and is invisible to the end user:

 

 

The endpoint then starts communicating back to the attacker's C2. From here, the attacker can execute commands such as tasklist, whoami, and other tools:

 

From here onward, the command and control would continue to beacon at a designated interval to check back for commands. This is typically what the analyst will need to look for to determine which of their endpoints are infected.

 

The Detection Using RSA NetWitness Network/Packet Data

The analysis shown here was only possible because the communication happened over HTTP. If this had been SSL, the detection via packets would be much more difficult. This is why introducing SSL Decryption/Interception/Offloading is highly recommended. SSL inspection devices are nothing more than a well-designed man-in-the-middle that breaks the encryption into two separate encrypted streams. They still provide an adequate level of protection to end users while allowing security analysts and devices to properly monitor and alert when malicious or unwanted activity takes place, such as the C2 traffic shown here. In summary, if you are responsible for protecting your organization's assets, you should definitely weigh the pros and cons of using this technology.

 

The analyst begins their investigation by focusing on C2 traffic over HTTP. The analyst can then pull apart the characteristics of the protocol using the Service Analysis meta key. From here they notice a couple of interesting meta values to pivot on, http with binary and http post no get no referer directtoip:

 

Upon reducing the number of sessions to a more manageable number, the analyst can then look into other meta keys to see if there are any interesting artifacts. Looking under the Filename, Directory, Client Application, and Server Application meta keys, the analyst observes that the communication is always toward a microsoft-iis/7.5 server, from the same user agent, and toward a subset of PHP files:

 

The analyst decides to use this as a pivot point and removes some of the other more refined queries, to focus on all communication toward those PHP files, from that user agent, and toward that IIS server version. The analyst now observes additional communication:

 

Opening up the visualization, the analyst can view the cadence of the communication and observes there to be a beacon type pattern:

 

Pivoting into the Event Analysis view, the analyst can look into a few more details to see if their suspicions that this is malicious are true. The analyst observes a low variance in payload size, and a connection taking place roughly every 4 minutes:

 

The analyst reconstructs some of the sessions to see the type of data being transferred and observes a variety of suspicious GETs and POSTs with varying payloads:

 

Based on the analysis performed, the analyst confirms this traffic is highly suspicious and decides to track the activity with an application rule. To do this, the analyst looks through the metadata associated with this traffic and finds a unique combination that identifies it:

 

(service = 80) && (analysis.service = 'http1.0 unsupported cache header') && (analysis.service = 'http post missing content-type')

 

IMPORTANT NOTE: Application rules are very useful for tracking activity. They are, however, very environment-specific; an application rule that is high fidelity in one environment could be incredibly noisy in another. Take care when creating or using application rules to make sure they work well within your environment.

 

The Detection Using RSA NetWitness Endpoint Tracking Data

The analyst, as they should on a daily basis, is perusing the IOC, BOC, and EOC meta keys for suspicious activity. Upon doing so, they observe the meta value browser runs powershell and begin to investigate:

 

Pivoting into the Event Analysis view, the analyst can see that Internet Explorer spawned PowerShell, and subsequently the PowerShell that was executed:

 

The analyst decides to decode the Base64 to get a better idea of what the PowerShell is executing. The analyst observes that the PowerShell is setting up a web request and can see the parameters it would supply for that request. From here, the analyst could leverage this information to start looking for indicators in their packet data (this demonstrates the power of having both endpoint and packet solutions):
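As a side note, PowerShell's -EncodedCommand value is Base64 over UTF-16LE, so both decoding steps are needed. A minimal sketch (the encoded string here is a stand-in that decodes to whoami, not the payload from this incident):

#!/usr/bin/env python
# Decode a PowerShell -EncodedCommand value: Base64 first, then UTF-16LE.
import base64

encoded = 'dwBoAG8AYQBtAGkA'  # stand-in value; paste the real one here
print(base64.b64decode(encoded).decode('utf-16-le'))  # prints: whoami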

 

Pivoting in on the PowerShell that was launched, it is also possible to see the whoami and tasklist commands that were executed. This helps the analyst paint a picture of what the attacker was doing:

 

Conclusion

The traffic outlined in this blog post is from a default configuration of PowerShell Empire; it is therefore possible for the indicators to differ depending upon who sets up the instance of PowerShell Empire. With that being said, C2s still need to check in, C2s still need to deploy their payload, and C2s will still perform suspicious tasks on the endpoint. The analyst only needs to pick up on one of these activities to start pulling on a thread and unwinding the attacker's activity.

 

It is also worth noting that PowerShell Empire network traffic is cumbersome to decrypt. It is therefore important to have an endpoint solution, such as NetWitness Endpoint, that tracks the activities performed on the endpoint for you.

 

Further Work

Rui Ataide has been working on a script to scrape Censys.io data looking for instances of PowerShell Empire. The attached Python script queries the Censys.io API for specific body request hashes, then gathers information surrounding the C2 (a rough illustration of the query follows the list), including:

 

  • Hosting Server Information
  • The PS1 Script
  • C2 Information
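As a rough illustration only (not the attached script), a body-hash query against the legacy Censys v1 search API might look like the sketch below; the hash value and credentials are placeholders, and the field name is an assumption based on the v1 IPv4 dataset:

#!/usr/bin/env python
# Search the legacy Censys v1 IPv4 dataset for servers returning a
# specific HTTP body hash and print the matching IPs.
import requests

API_URL = 'https://censys.io/api/v1/search/ipv4'
UID, SECRET = 'YOUR_API_ID', 'YOUR_API_SECRET'  # hypothetical credentials

query = {'query': '80.http.get.body_sha256: <body_hash_of_interest>'}  # placeholder hash
resp = requests.post(API_URL, json=query, auth=(UID, SECRET))
resp.raise_for_status()
for result in resp.json().get('results', []):
    print(result['ip'])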

 

Also attached is a sample output from this script with the PowerShell Empire metadata that has currently been collected.
