
RSA NetWitness Platform

20 Posts authored by: Christopher Ahearn Employee

With the recent news about ScreenConnect being used in data breaches, I had the opportunity to examine some of the network traffic.  The traffic was originally classified as OTHER, but as you know, that just means it's an opportunity to learn about some new aspect of our networks.


Initially, this traffic was observed over TCP destination port 443; however, it was not SSL traffic.  A custom parser was written to identify this traffic and register the service type as 7310.  I did not find a document explaining how the application uses this custom protocol, so I built the parser with some educated guesswork.



We start with an 18-byte token and match on it within the first 10 bytes of the payload.  If we see that token, we are in the right traffic.  Next, I moved forward 1 byte and then extracted the next 64 bytes of payload.  I checked the first byte using the "payload:uint8(1,1)" method, looking for either a "4" or a "6".  In researching this traffic, it appeared that different versions of ScreenConnect would have one of those values.  That value was important, as it led me to determine where the hostname (or IP address) started and what its terminator was.



If the value was "4", the hostname started 7 bytes away.  If the value was "6", the hostname started 9 bytes away.  The value also helped me identify the terminator: if the initial value was "4", the terminator appeared to be "0x01"; if it was "6", the terminator appeared to be "0x02".


Now that I was able to identify the start and end positions, I could extract the hostname.  However, it could be either an IP address or a fully qualified domain name.  This is where I referenced an outside function called "determineHostType" in the 'nwll' file.  That way, if the extracted value was an IP address, it would be placed in 'alias.ip', and if it was a hostname, it would go in ''.
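The offset-and-terminator logic described above can be sketched in Lua along these lines. This is an illustration only, not the attached parser: the parser name is made up, the 18-byte token registration is omitted (those bytes come from observed traffic), and the exact handling of determineHostType's result is an assumption.

```lua
-- Sketch of the version/offset logic described above (token registration omitted).
local screenconnect = nw.createParser("screenconnect_sketch", "Identify ScreenConnect relay traffic")

function screenconnect:onToken(token, first, last)
    local payload = nw.getPayload(last + 2, last + 65)  -- skip 1 byte, take the next 64
    if not payload then return end
    local version = payload:uint8(1, 1)
    local start, terminator
    if version == 4 then
        start, terminator = 7, "\001"   -- hostname 7 bytes in, 0x01-terminated
    elseif version == 6 then
        start, terminator = 9, "\002"   -- hostname 9 bytes in, 0x02-terminated
    else
        return
    end
    local stop = payload:find(terminator, start)
    if stop then
        local host = payload:tostring(start, stop - 1)
        if host then
            nw.setAppType(7310)
            -- nwll's determineHostType then decides whether the value
            -- belongs in alias.ip or the hostname meta key
        end
    end
end
```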


Attached is the parser and PCAP.  This parser was submitted to LIVE, however I wanted you to have it while that process is underway.


Good luck and happy hunting.



I've come across ICMP tunneling only a handful of times, but this was the first time I had seen it used as part of a VPN client.  The VPN client was SoftEther VPN and, in addition to SSL VPN, it can also perform ICMP and DNS tunneling.  During a recent hunting engagement, I had the opportunity to identify and create content to detect this activity.


Let's have a look.



I drilled into ICMP traffic (ip.proto = 1) and then looked at Session Characteristics (analysis.session).  This led me to the meta 'icmp large session'.  Previously, I had created meta to describe when sessions had 'session.split' meta.  Session.split occurs when a session is either very large or very long.  You can find more about session.split in a previous post I wrote here.  However, one simple way to identify when session.split exists is to write an application rule.



As we look at the sessions associated with this activity, we see the following:



The data called out above centers on the bytes transmitted (requestpayload) and received (responsepayload), as well as the payload size and overall size of the session.  If you've ever looked at ICMP traffic before, it's typically small.  This is not small.


Furthermore, this does not look like typical ICMP traffic as shown below:



With RSA NetWitness Packets, we have an opportunity to describe our network traffic quite effectively.  When I saw this traffic, I wanted to improve some of the meta I had available to describe session size.  The rule below will let me know when a session is greater than 1 MB.



Now, circling back to the ICMP tunneling, we can do this with another application rule.



By taking these steps, I am now better equipped to identify ICMP tunneling when I observe it.


App Rules
name="session split" rule="session.split exists" alert=analysis.session type=application
name="session size greater than 1mb" rule="streams=2 && size=1024000 -u" alert=analysis.session type=application
name=possible_icmp_tunneling rule="ip.proto=1 && session.split exists && analysis.session = 'icmp large session' && analysis.session = 'session size greater than 1mb'" alert=ioc type=application


Note that 'session split' and 'session size greater than 1mb' should be before the 'possible_icmp_tunneling' rule.  Order is important.


I'll try to get some DNS tunneling created with this SoftEther VPN client soon.


Good luck and happy hunting.

There are many reasons I enjoy working with the RSA NetWitness Platform, but things get really exciting when our customers turn their attention to threat hunting. In one case, a customer needed to be able to take new threat intelligence or research and apply it to their RSA NetWitness stack. This wasn't directed at threat intelligence via feeds, but more at how attackers could deliver malicious content. Since there is never a shortage of threat research, one customer asked about detection of zero-width spaces.


In a recent article in The Hacker News, research was presented showing that zero-width spaces embedded within URLs were able to bypass URL scanners in Office 365. The question now is how to go about detecting this in NetWitness.


We begin with a search of our network using the following query:


Query contains "​","‌","‍","","0"


This gave us several DNS sessions and a single SMTP session.


If we pivot into the SMTP session, we get a slightly better view of the meta for that session.


That last hostname in ‘’ does look interesting, but we cannot be sure. We need to examine the session more closely.


We rendered the session as an email and it bore the signs of a classic phishing email.


However, only when we examine the raw contents of it, does the malicious indicator present itself.


The bytes highlighted in red (E2 80 8C) represent the zero-width non-joiner character (&#8204;). This appears to be the attacker's use of zero-width spaces as a bypass attempt in a phishing email. Next, we look at the meta data for the session.


Above, we can see our suspicious hostname, but how did it get there? It turns out that the ‘phishing_lua’ parser will examine URLs within SMTP traffic and extract the hostnames it finds into the ‘’ meta key. Fortunately for us, it included the zero-width space as part of the meta data…we just can’t see it. Or can we?


I copied the meta value for the hostname and pasted it into my text editor. Sadly, I did not notice any strange characters. So I pasted it into my good friend ‘vi’.


This proved that the zero-width spaces were in the meta data, which is what allowed our query to work.  The malicious site actually leads to a credential-stealing webpage.  It appears the website was a compromised WordPress site.


Next, I wanted a bigger data set. I took some of the meta data, such as the sender's email address, and used it to find additional messages. It turned out this helped identify an active phishing campaign.


Next up is to put together some kind of detection going forward. My first thought was to use an application rule, but I was not successful. I think it was the way the Unicode was being interpreted or how it was input; I need to do more research on that. Since the app rule syntax was not working properly, I decided to build a Lua parser instead. This parser performs a meta callback on the “” meta key, just like an app rule would. The parser then loops through a predefined list of zero-width space byte sequences against the returned meta value. If a match is made, it writes meta into the ‘ioc’ meta key.



-- Step 1 Name the parser
local lua_zws_check = nw.createParser("lua_zws_check", "Check meta for zero-width spaces")



--[[
Check meta for zero-width spaces
2019-01-24 - Initial development
--]]








-- Step 3 Define where your meta will be written
-- These are the meta keys that we will write meta into
lua_zws_check:setKeys({
    nwlanguagekey.create("ioc", nwtypes.Text),
})



-- Table of zero-width space byte sequences (UTF-8)
local zws = {
    ["\226\128\139"] = true, -- U+200B zero width space
    ["\226\128\140"] = true, -- U+200C zero width non-joiner
    ["\226\128\141"] = true, -- U+200D zero width joiner
    ["\239\187\191"] = true, -- U+FEFF zero width no-break space
    ["\239\188\144"] = true, -- U+FF10 fullwidth digit zero
}



-- This is our function. What we want to do when we match a token...or in this
-- case, the hostname meta callback.
function lua_zws_check:hostMeta(index, meta)
    if meta then
        for zwsbytes, _ in pairs(zws) do
            local check = string.find(meta, zwsbytes, 1, true)  -- plain find, no patterns
            if check then
                --nw.logInfo("*** BAD HOSTNAME CHECK: " .. meta .. " ***")
                nw.createMeta(self.keys["ioc"], "hostname_zero-width_space")
                break
            end
        end
    end
end






-- Step 2 Define your tokens
lua_zws_check:setCallbacks({
    [nwlanguagekey.create("")] = lua_zws_check.hostMeta, -- the meta callback key (key name elided in the original post)
})




After deploying the parser, I re-imported the new pcap file into my virtual packet decoder. The results came back quickly. I now had reliable detection for these zero-width space hostnames.


Since meta is displayed in the order in which it was written, we can get a sense as to the hostname that triggered this indicator.


Now that we have validated that the parser is working correctly in the lab environment, it was time to test some other capabilities of the Netwitness platform.


As we stated in the beginning, the query (as well as the parser) was also flagging DNS name resolutions that involved Unicode characters. Therefore, we wanted to alert only when the ‘zero-width’ meta was seen in SMTP traffic. We then created an ESA rule in the lab environment.


To begin this alert, I went to the Configure / ESA Rules section in Netwitness and created a new rule using the Rule Builder wizard.



We gave the rule a name, which will be important in the next phase. Next, we created the condition by giving the condition a name and then populating the fields.

The first line is looking for the meta key and the meta value. The second is looking at the service type. Once it looks good, we hit save. We then hit save again and close out the rule.


NOTE: In the first line, you see the “Array?” box checked. Some meta keys are defined as arrays meaning they could contain multiple values in a session. The meta key ‘ioc’ is one such meta key. You may encounter a situation where a meta key should be set as an Array but is not. If that is the case, it is a simple change on the ESA configuration.


Next, we want to deploy the rule to our ESA appliance. To do so, we clicked the ESA appliance in our deployments table.

Next, we add the rule we want to deploy. Then, we deploy it.


We then imported the PCAP again to see if our ESA rule fired successfully, which it did.


The last piece before production is to create an Incident rule, based on the ESA alerts. We move to Configure / Incident Rules and create a new rule.

I created the Incident rule in the lab and used the parameters shown below.


I then enabled the rule and saved it.


Now, when the incidents are examined in the Respond module, we can see our incidents being created.


To summarize this activity, we started from some new(ish) research and wanted to find a way to detect it in NetWitness. We found traffic we were interested in and then built a Lua parser to improve our detection going forward. Next, I wanted to alert on this traffic only when it was in SMTP traffic and, because I wanted to work on some automation, created an Incident rule to put a bow on this. We now have actionable alerting after a small bit of research on our end.  My intent is to get the content of the parser added to one already in Live.  Until that time, it will be here to serve as a reference.


What are your use cases? What are some things you are trying to find on the network that NetWitness can help with? Let us know.


Good luck and happy hunting.

Oftentimes, RSA NetWitness Packet decoders are configured to monitor not only ingress and egress traffic, but to receive internal LAN traffic as well.  On a recent engagement, we identified a significant amount of traffic going to TCP port 9997.  It did not take long to realize this traffic was from internal servers configured to forward their logs to Splunk.


The parser will add to the 'service' meta key and write the value '9997'.  After running the parser for several hours, we also found other ports that were used by the Splunk forwarders.  
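A minimal version of such a parser might look like the following sketch. This is not the attached parser: the parser name is made up, and the token assumes the "--splunk-cooked-mode" signature that Splunk forwarders send at the start of a cooked data stream (the trailing version string varies by release).

```lua
local splunk = nw.createParser("splunk_forwarder_sketch", "Identify Splunk forwarder traffic")

function splunk:signature(token, first, last)
    -- Only classify when the signature appears at the start of the stream
    if first <= 10 then
        nw.setAppType(9997)   -- registers 'service' as 9997
    end
end

splunk:setCallbacks({
    -- Signature at the start of Splunk "cooked" forwarder streams
    -- (assumption: the version suffix, e.g. "-v3--", differs between releases)
    ["--splunk-cooked-mode"] = splunk.signature,
})
```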


While there wasn't anything malicious or suspicious with the traffic, it was a significant amount of traffic that was taking up disk space.  By identifying the traffic, we can make it a filtering candidate.  Ideally, the traffic would be filtered further upstream at a TAP, but sometimes that isn't possible.  


If you are running this parser, you could also update the index-concentrator-custom.xml and add an alias to the service types.  






If you have traffic on your network that you want better ways to identify, let your RSA account team know.  


Good luck, and happy hunting.

I was recently working with Eric Partington, who asked if we could get the Autonomous System Numbers from a recent update to GeoIP.  I believe at one point this was a feed, but it had been deprecated.  After a little research, I learned that an update had been made to the Lua libraries allowing a new API function named geoipLookup to be called, which would give us this information as well as some other details of interest.  A few years ago, I painstakingly created a feed for my own use to map countries to continents.  I wish I had had this function call back then.


The api call is as follows:



-- Examples:
-- local continent = self:geoipLookup(ip, "continent", "names", "en") -- string
-- local country = self:geoipLookup(ip, "country", "names", "en") -- string
-- local country_iso = self:geoipLookup(ip, "country", "iso_code") -- string "US"
-- local city = self:geoipLookup(ip, "city", "names", "en") -- string
-- local lat = self:geoipLookup(ip, "location", "latitude") -- number
-- local long = self:geoipLookup(ip, "location", "longitude") -- number
-- local tz = self:geoipLookup(ip, "location", "time_zone") -- string "America/Chicago"
-- local metro = self:geoipLookup(ip, "location", "metro_code") -- integer
-- local postal = self:geoipLookup(ip, "postal", "code") -- string "77478"
-- local reg_country = self:geoipLookup(ip, "registered_country", "names", "en") -- string "United States"
-- local subdivision = self:geoipLookup(ip, "subdivisions", "names", "en") -- string "Texas"
-- local isp = self:geoipLookup(ip, "isp") -- string ""
-- local org = self:geoipLookup(ip, "organization") -- string ""
-- local domain = self:geoipLookup(ip, "domain") -- string ""
-- local asn = self:geoipLookup(ip, "autonomous_system_number") -- uint32 16406
function parser:geoipLookup(ipValue, category, [name], [language]) end


As you know, we already get many of these fields.  Meta keys such as country.src, country.dst, org.src, and org.dst are probably well known to many analysts and used for various queries.  Eric had asked for 'asn', and because I had tried it previously with a feed, I wanted to include 'continent' as well.


So... I created a Lua parser to get this for me.  My tokens were meta callbacks for ip.src and ip.dst.


[nwlanguagekey.create("ip.src", nwtypes.IPv4)] = lua_geoip_extras.OnHostSrc,
[nwlanguagekey.create("ip.dst", nwtypes.IPv4)] = lua_geoip_extras.OnHostDst,


My intent is to build this parser to work on both packet and log decoders.  I had originally wanted to use another function call, but found this was not working properly on log decoders.  However, the meta callbacks of ip.src and ip.dst did work.  Now, with this in mind, I could leverage this parser on both packet and log decoders. :-)


The meta keys I was going to write into were as follows:


nwlanguagekey.create("asn.src", nwtypes.Text),
nwlanguagekey.create("asn.dst", nwtypes.Text),
nwlanguagekey.create("continent.src", nwtypes.Text),
nwlanguagekey.create("continent.dst", nwtypes.Text),


Since I was using ip.src and ip.dst meta, I wanted to apply the same source and destination meta for my asn and continent values.  


Then, I just wrote out my functions:


-- Get ASN and Continent information from ip.src and ip.dst
function lua_geoip_extras:OnHostSrc(index, src)
   local asnsrc = self:geoipLookup(src, "autonomous_system_number")
   local continentsrc = self:geoipLookup(src, "continent", "names", "en")

   if asnsrc then
      --nw.logInfo("*** ASN SOURCE: AS" .. asnsrc .. " ***")
      nw.createMeta(self.keys["asn.src"], "AS" .. asnsrc)
   end
   if continentsrc then
      --nw.logInfo("*** CONTINENT SOURCE: " .. continentsrc .. " ***")
      nw.createMeta(self.keys["continent.src"], continentsrc)
   end
end

function lua_geoip_extras:OnHostDst(index, dst)
   local asndst = self:geoipLookup(dst, "autonomous_system_number")
   local continentdst = self:geoipLookup(dst, "continent", "names", "en")

   if asndst then
      --nw.logInfo("*** ASN DESTINATION: AS" .. asndst .. " ***")
      nw.createMeta(self.keys["asn.dst"], "AS" .. asndst)
   end
   if continentdst then
      --nw.logInfo("*** CONTINENT DESTINATION " .. continentdst .. " ***")
      nw.createMeta(self.keys["continent.dst"], continentdst)
   end
end


This was my first time using this new API call, and my mind was racing with ideas on how else I could use this capability.  The one that immediately came to mind was enriching meta when X-Forwarded-For or Client-IP headers existed.  If present, the value should be parsed into a meta key called "orig_ip" today, or "ip.orig" in the future.  The meta key "orig_ip" is formatted as Text, so I need to account for that by determining the correct host type.  We don't want to pass a domain name when we are expecting an IP address.  I can do that by importing the functions from 'nwll'.
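A sketch of that enrichment idea follows. Two assumptions to note: a meta callback registered on 'orig_ip', and that nwll's determineHostType returns the alias key name appropriate for the value (e.g. 'alias.ip' for IPv4 addresses).

```lua
local nwll = require('nwll')

function lua_geoip_extras:OnOrigIP(index, origip)
    -- 'orig_ip' is Text-formatted, so it may hold a hostname instead of an IP
    local hosttype = nwll.determineHostType(origip)
    if hosttype == "alias.ip" then
        local asn = self:geoipLookup(origip, "autonomous_system_number")
        local continent = self:geoipLookup(origip, "continent", "names", "en")
        if asn then
            nw.createMeta(self.keys["asn.src"], "AS" .. asn)
        end
        if continent then
            nw.createMeta(self.keys["continent.src"], continent)
        end
    end
end
```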


In the past, the only meta that could be enriched by GEOIP was ip.src and ip.dst (I have not tested ipv6.src or ipv6.dst).  Now with this API call, I can apply the content of GEOIP to other IP address related meta keys.  I have attached the full parser to this post.  


Hope this helps others out there in the community and as always, happy hunting.



Over the past several years, I have found quite a lot of incredibly useful meta packed into the 'query' meta key.  The HTTP parser puts arguments and passed variables in there when they are used in GETs and POSTs.  While examining some recent PCAPs from the Malware Traffic Analysis site, I found some common elements we can use to identify Trickbot infections.  This was not an exhaustive look at Trickbot, but simply a means to identify some common traits as meta values.  As Trickbot, or any malware campaign, changes, IOCs will need to be updated.


First things first, let's look at the index level for the 'query' meta key.  By default, the 'query' meta key is set to 'IndexKeys'.  This means that you could perform a search where the key existed in a session, but could not query for the values stored within that key.



There are pros and cons to setting the index level to 'IndexValues' in your 'index-concentrator-custom.xml' file on your concentrators.  The pros include being able to search for values there during an investigation.  The cons are that these searches would likely involve 'contains', which taxes the query from a performance perspective.  Furthermore, 'query' is a Text-formatted meta key limited to 256 bytes, so anything after 256 bytes is truncated and you may not have the complete query string.
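If you do choose 'IndexValues', the entry in 'index-concentrator-custom.xml' would look something like this (the valueMax shown is only an illustrative tuning choice; size it for your environment):

```xml
<key description="Query" format="Text" level="IndexValues" name="query" valueMax="250000" />
```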


Whether 'query' is set to 'IndexKeys' or 'IndexValues' or even 'IndexNone', we can take advantage of it in App rule creation.  In one Trickbot pcap, we can see an HTTP POST to an IP address on a non-standard port.



If we look at the meta created for this session, we can see the 'proclist' and 'sysinfo' as pieces in the 'query' meta.



Combining these with a service type (service = 80) and an action (action = 'post'), we can create an application rule that can help find Trickbot infections in the environment.  For good measure, we can add additional meta from analysis.service to help round it out.



Trickbot application rule
service = 80 && action = 'post' && query = 'name="sysinfo"' && query = 'name="proclist"' && analysis.service = 'windows cli admin commands'



The flexibility of app rule creation allows analysts and threat hunters to take a handful of indicators (meta) and combine them to make detection easier.



Once a threat is identified, we can use this method to find the traffic more easily moving forward, so that we can go find the next new bad thing.  If the app rule fires too often on normal traffic, we can adjust it to add or exclude other meta to ensure it fires correctly.


As always, good luck, and happy hunting.



I was reviewing a packet capture file I had from a recent engagement. In it, the attacker had tried unsuccessfully to compress the System and SAM registry hives on the compromised web server. Instead, the attacker decided to copy the hives into a web accessible directory and give them a .jpg file extension. Given that the Windows Registry hives contain a well documented file structure, I decided to write a parser to detect them on the network.



If we see something on the wire, there is a pretty good chance we can create some content to detect it in the future. This is the premise behind most threat-hunting or content creation. Make it easier to detect the next time. This is the same approach I take when building Lua Parsers for the RSA NetWitness platform.


Here, we can see what appears to be the magic bytes for a registry file “regf”.



Let’s shift our view into View Hex and examine this file.



When creating a parser, we want to make it as consistent as possible to reduce false positives or errors. What I found was that immediately following the ‘regf’ signature the Primary Sequence Number (4 bytes) and Secondary Sequence Number (4 bytes) would be different. Then, there was the FileTime UTC (8 bytes) field which would most definitely be unique.


However, the Major and Minor versions were relatively consistent. Therefore, I could skip over those 16 bytes to land on the first byte of the Major Version immediately after my initial token matches.  Let’s create a token to start with.



   ["\114\101\103\102"] = fingerprint_reg.magic,   -- regf



If you notice, this token is in DECIMAL format, not HEX. Also, 4 bytes is quite small for a token. When a parser is loaded into the decoder, the tokens are stored in memory and compared as network traffic flows through the decoder. Once a token matches, the function(s) within the parser are run. Too small a token means the parser may run quite frequently without matching the right traffic. Too large a token means the parser may only run on those specific bytes, and you could miss other relevant traffic. When creating a parser token, you may want to err on the side of caution and make it a little smaller, knowing that you will have to add additional checks to ensure it is the traffic you want.


In Lua for parsers, you are always on a byte. Therefore, we need to know where we are and where we want to go. I like to set a variable called ‘current_position’ to denote where my pointer is in the stream of data. When the parser matches on a token, it will return 3 values. The three values are the token itself, the first position of the token in the data stream and the last position of the token in the data stream. This helps me as I want to find the ‘regf’ token and move forward 17 bytes to land on the Major version field.


function fingerprint_reg:magic(token, first, last)
    current_position = last + 17
    local payload = nw.getPayload(current_position, current_position + 7)
    if payload and #payload == 8 then
        local majorversion = payload:uint32(1, 4)
        if majorversion == 16777216 then
            local minorversion = payload:uint32(5, 8)
            if minorversion == 50331648 or minorversion == 67108864 or minorversion == 83886080 or minorversion == 100663296 then
                nw.createMeta(self.keys["filetype"], "registry hive")
            end
        end
    end
end





This will put the pointer on the first byte (0x01) of the Major Version field. Next what I want to do is extract only the payload I need to do my next set of checks, which will involve reading the bytes.








Here, I created a variable called ‘payload’ and used the built-in function ‘nw.getPayload’ to get the payload I wanted. Since I previously declared a variable called ‘current_position’, I use that as my starting point and tell it to go forward 7 bytes. This gives me a total of 8 bytes of payload. Next, I make sure that I have payload and that it is, in fact, 8 bytes in length (#payload == 8).









If the payload checks out, then in this parser, I want to read the first 4 bytes, since that should be the Major Version. In the research I did, I saw that the Major Version was typically ‘1’ and was represented as ‘0x01000000’. Since I want to read those 4 bytes, I use “payload:uint32(1,4)”. Since those bytes will be read in as one value, I pre-calculate what that should be and use it as a check. The value should be ‘16777216’. If it is, then it should move to the next check.
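Those pre-calculated decimal values are simply the 4-byte fields read as a single unsigned integer; you can verify them in a standalone Lua interpreter:

```lua
-- The bytes 01 00 00 00 read as one uint32 become 0x01000000
print(0x01000000)  -- 16777216  (Major Version 1)
print(0x03000000)  -- 50331648  (Minor Version 3)
print(0x04000000)  -- 67108864  (Minor Version 4)
print(0x05000000)  -- 83886080  (Minor Version 5)
print(0x06000000)  -- 100663296 (Minor Version 6)
```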








The Minor Version check winds up being the second and last check to make sure it is a Registry hive. For this to run, the Major version had to have been found and validated based on the IF statement. Here, we grab the next 4 bytes and store those in a variable called ‘minorversion’. There were four possible values that I found in my research. Those would be ‘0x03000000’, ‘0x04000000’, ‘0x05000000’, and ‘0x06000000’. Therefore, I pre-calculated those values in decimal form like I did with the Major Version and did a comparison (==). If the value matched, then the parser will write the text ‘registry hive’ as meta into the ‘filetype’ meta key.


The approach shown here was useful in examining a particular type of file as it was observed in network traffic. The same approach could be used for protocol analysis, identifying new service types, and much more.  If you would like expert assistance with creating a custom parser for traffic that is unique to your environment, that is a common service offering provided by RSA.  If you're interested, please feel free to contact your local sales rep.


The parser is attached, and I have also submitted it to RSA Live for future use.  I hope you find this parser breakdown helpful and as always, happy hunting.



Servers are attacked every day and sometimes those attacks are successful.  A lot of attention is paid to Windows executables coming down the wire, but I also wanted to know when my systems were downloading ELF files, typically used by Linux systems.  With some recent exploits targeting Linux web servers and delivering crypto-mining software, I wrote a parser that attempts to identify Linux ELF files and places that meta in the 'filetype' meta key.




This isn't limited to crypto-mining ELF files and has detected many others in testing.  The parser is attached below.
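The core of such a parser is small: ELF files begin with the four magic bytes 0x7F 'E' 'L' 'F'. A minimal sketch is shown below. This is not the attached parser; the one-byte follow-up check on the EI_CLASS field is just an example of the kind of extra validation a short token needs.

```lua
local elf = nw.createParser("fingerprint_elf_sketch", "Identify ELF files on the wire")

elf:setKeys({
    nwlanguagekey.create("filetype", nwtypes.Text),
})

function elf:magic(token, first, last)
    -- The byte after the magic is EI_CLASS: 1 = 32-bit, 2 = 64-bit
    local payload = nw.getPayload(last + 1, last + 1)
    if payload then
        local class = payload:uint8(1, 1)
        if class == 1 or class == 2 then
            nw.createMeta(self.keys["filetype"], "elf")
        end
    end
end

elf:setCallbacks({
    ["\127\069\076\070"] = elf.magic,  -- 0x7F 'E' 'L' 'F'
})
```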


I hope you find this parser useful, and as always, happy hunting.



Whenever I am on an engagement that involves the analysis of network traffic, my preferred tool of choice is the RSA NetWitness Network (Packets) solution.  It provides full packet capture and allows analysts to "go back to the video tape" to see what happened on the wire.  When the decoder examines the traffic, it tries to identify the service type associated with it: HTTP, DNS, SSL, and many others.  However, there are times when there is no defined service, which results in 'service = 0'.


When time allows, I like to go in there, but as you may notice, there can be quite a lot of data to go through.  Therefore, I like to focus on small slices of time and attributes about those sessions that makes sense.  For example, I might choose the following query over the last 3 hours.


   service = 0 && ip.proto = 6 && direction = 'outbound' && tcpflags = 'syn' && tcpflags = 'ack' && tcpflags = 'psh'


This query will get to the sessions where:

   service = 0 [OTHER traffic not associated with a service type]

   ip.proto = 6 [TCP traffic]

   direction = 'outbound' [traffic that starts internally and destined for public IP space]

   tcpflags = [Focus on SYN, ACK, and PSH because those TCP flags would have to be present for the starting of a session and the sending of data]


Next, I look at associated TCP ports (tcp.srcport and tcp.dstport) as well as some IPs and org.dst meta.  What we recently found was a pipe-delimited medical record in clear text.  After some additional research, we came across a fantastic blog post from Tripwire discussing Health Level 7 (HL7).  In it, the author, Dallas Haselhorst, even showed the pipe-delimited format that the HL7 protocol uses to transfer this data.  It was this format that was observed on the wire.


While the idea of medical records being transmitted on the wire in clear text was alarming at first, it was determined that this was, in fact, standard practice.  When crossing the Internet, VPN tunnels would be used.


To get a sense of how much of this traffic I could see, I created a parser to identify it as 'service = 6046'.  I chose '6046' because that was the first port I observed, though in truth we eventually saw the traffic on numerous tcp.dstport values.  This parser just identifies the session as HL7 and does not parse out the information contained in the fields.  Some of that data will likely contain Protected Health Information (PHI), and that is not something I wanted as meta.  But knowing it is on the wire in the clear was important to me and my client.


If you work in an organization that handles this kind of data, this parser might help identify and validate where it's going.  


Good luck, and happy hunting.  Also..special thanks to one of my new team-mates, Jeremy Warren, who helped find this traffic.



If you attended my sessions on Lua Parsing in NetWitness, we referenced some materials as well as a parser template I use when starting to write a Lua parser.  I wanted to share that material here.  Be sure to check out the examples as well as the nw-api.lua as references when building your own.


As always, if you have questions, please reach out.





By now the InfoSec community has had a chance to digest the recent findings around the popular "CCleaner" software and a compromised version of it.  Great research was provided by the Talos Intelligence group here and here.  The question on the minds of senior leadership becomes: what could the impact be to the organization?  The ability to query the systems in the enterprise for such threats is essential to answering that business impact question.  Avast posted additional findings in their own blog, and this is where our post begins.


Avast provided several indicators of compromise (IOC's) that would allow security teams to quickly scan their environment to identify known or suspicious files or communications.  Let's start with the first stage indicators.


There were twenty (20) SHA256 hashes of files in the list.  Since the list was not in a particular format (STIX, TAXII, CSV, etc.), we can scrape them from the page and paste them into our old friend "vi".



Essentially what we need to do is get the provided indicator into a form that our tools can use.  Our first attempt is to just show the hash itself.


      awk -F' - ' '{print $1}' ccleaner



We can then go over to NetWitness Endpoint looking for these hashes.  One approach is to look for all instances of 'ccleaner' in the Global Modules view and examine the SHA256 hash values.  Sometimes looking at Compile Time is also helpful.


You can also go into the Filter Editor and enter the hashes here as well.  


Another option is performing the query directly against the SQL database.  Similar to using the Filter Editor method above, we simply need to get the query built in a way that works.  Since it will be a large OR statement, we just need the right syntax and the location where the values are stored.  The hashes are stored in the database in dbo.Modules.HashSHA256.  Knowing this, we can get the necessary syntax with our other good friend 'awk'.


      awk -F" - " '{print "OR mo.HashSHA256 = 0x"$1}' ccleaner


NOTE:  "OR mo.HashSHA256 = 0x" was prepended to query that column.  0x was also prepended to the hash as the data is stored in that way.
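For anyone more comfortable in Python than awk, the same transformation can be sketched as follows (the input lines are hypothetical stand-ins for the scraped 'hash - note' pairs):

```python
# Hypothetical stand-ins for scraped "hash - note" lines in the ccleaner file.
lines = [
    "04bed8e35483d50a25ad8cf203e6f157e0f2fe39a762f5fbacd672a3495d6a11 - note1",
    "0564718b3778d91efd7a9972e11852e29f88103a10cb8862c285b924bc412013 - note2",
]

# Equivalent of: awk -F" - " '{print "OR mo.HashSHA256 = 0x"$1}' ccleaner
clauses = ["OR mo.HashSHA256 = 0x" + line.split(" - ")[0] for line in lines]
print("\n".join(clauses))
```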


This returns the values in a form that I can easily query.  Now, I just need the query.



--Search for a machinename based on the hash of a module

select mn.machinename, mo.HashSHA256


FROM
    [dbo].[MachineModulePaths] AS mp

    INNER JOIN [dbo].[Machines] AS [mn] WITH(NOLOCK) ON ([mn].[PK_Machines] = [mp].[FK_Machines])

    INNER JOIN [dbo].[Modules] AS [mo] WITH(NOLOCK) ON ([mo].[PK_Modules] = [mp].[FK_Modules])
WHERE


    --mo.HashMD5 = 0xCEDC22719DE1B1316BDC556FED989335

    --mo.HashSHA256 = 0x069F24378A0A6EEA078D30D971542741D0F51E1F933EEEB23FDB559763FF0ACD

    --mo.HashSHA1 = 0x39E0F0F2F64B50FB9783A49B7940BF326D7B6B65


-- First Stage

mo.HashSHA256 = 0x04bed8e35483d50a25ad8cf203e6f157e0f2fe39a762f5fbacd672a3495d6a11

OR mo.HashSHA256 = 0x0564718b3778d91efd7a9972e11852e29f88103a10cb8862c285b924bc412013

OR mo.HashSHA256 = 0x1a4a5123d7b2c534cb3e3168f7032cf9ebf38b9a2a97226d0fdb7933cf6030ff

OR mo.HashSHA256 = 0x276936c38bd8ae2f26aab14abff115ea04f33f262a04609d77b0874965ef7012

OR mo.HashSHA256 = 0x2fe8cfeeb601f779209925f83c6248fb4f3bfb3113ac43a3b2633ec9494dcee0

OR mo.HashSHA256 = 0x3c0bc541ec149e29afb24720abc4916906f6a0fa89a83f5cb23aed8f7f1146c3

OR mo.HashSHA256 = 0x4f8f49e4fc71142036f5788219595308266f06a6a737ac942048b15d8880364a

OR mo.HashSHA256 = 0x7bc0eaf33627b1a9e4ff9f6dd1fa9ca655a98363b69441efd3d4ed503317804d

OR mo.HashSHA256 = 0xa013538e96cd5d71dd5642d7fdce053bb63d3134962e2305f47ce4932a0e54af

OR mo.HashSHA256 = 0xbd1c9d48c3d8a199a33d0b11795ff7346edf9d0305a666caa5323d7f43bdcfe9

OR mo.HashSHA256 = 0xc92acb88d618c55e865ab29caafb991e0a131a676773ef2da71dc03cc6b8953e

OR mo.HashSHA256 = 0xe338c420d9edc219b45a81fe0ccf077ef8d62a4ba8330a327c183e4069954ce1

OR mo.HashSHA256 = 0x36b36ee9515e0a60629d2c722b006b33e543dce1c8c2611053e0651a0bfdb2e9

OR mo.HashSHA256 = 0x6f7840c77f99049d788155c1351e1560b62b8ad18ad0e9adda8218b9f432f0a9

OR mo.HashSHA256 = 0xa3e619cd619ab8e557c7d1c18fc7ea56ec3dfd13889e3a9919345b78336efdb2

OR mo.HashSHA256 = 0x0d4f12f4790d2dfef2d6f3b3be74062aad3214cb619071306e98a813a334d7b8

OR mo.HashSHA256 = 0x9c205ec7da1ff84d5aa0a96a0a77b092239c2bb94bcb05db41680a9a718a01eb

OR mo.HashSHA256 = 0xbea487b2b0370189677850a9d3f41ba308d0dbd2504ced1e8957308c43ae4913

OR mo.HashSHA256 = 0x3a34207ba2368e41c051a9c075465b1966118058f9b8cdedd80c19ef1b5709fe

OR mo.HashSHA256 = 0x19865df98aba6838dcc192fbb85e5e0d705ade04a371f2ac4853460456a02ee3


-- Second Stage


OR mo.HashSHA256 = 0xdc9b5e8aa6ec86db8af0a7aa897ca61db3e5f3d2e0942e319074db1aaccfdc83

OR mo.HashSHA256 = 0xa414815b5898ee1aa67e5b2487a11c11378948fcd3c099198e0f9c6203120b15

OR mo.HashSHA256 = 0x7ac3c87e27b16f85618da876926b3b23151975af569c2c5e4b0ee13619ab2538

OR mo.HashSHA256 = 0x4ae8f4b41dcc5e8e931c432aa603eae3b39e9df36bf71c767edb630406566b17

OR mo.HashSHA256 = 0xb3badc7f2b89fe08fdee9b1ea78b3906c89338ed5f4033f21f7406e60b98709e

OR mo.HashSHA256 = 0xa6c36335e764b5aae0e56a79f5d438ca5c42421cae49672b79dbd111f884ecb5


I added the second stage hashes as well.  This query returns some results that would need additional checking.  



Next, we can move over to NetWitness for Packets and Logs and see if we have any hits.





No hits here, thankfully.  


There were also some domains from domain generation algorithms (DGAs) provided in the listing of IOCs.  Using "vi" again, we copied the contents into a file like before.



Then, using a similar "awk" statement we generate the query for use in the NetWitness suite.


      awk -F" - " '{print "\x27"$1"\x27,"}' c2 | sed 's/ //g' | tr -d '\n'


NOTE: \x27 prints a single quote

sed 's/ //g' removes some trailing whitespace as a result of the copy/paste.

tr -d '\n' removes the new line so they all appear on the same line.
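The same pipeline can be sketched in Python without the sed/tr cleanup steps (the domains here are hypothetical placeholders, not the actual published indicators):

```python
# Hypothetical placeholder lines standing in for the scraped C2 domain list.
lines = [
    "ab6d54340c1a.example - note1 ",
    "aba9a949bc1d.example - note2 ",
]

# Equivalent of: awk -F" - " '{print "\x27"$1"\x27,"}' c2 | sed 's/ //g' | tr -d '\n'
# str.join() also avoids the trailing comma the awk version leaves behind.
query_values = ",".join("'" + line.split(" - ")[0].strip() + "'" for line in lines)
print(query_values)
```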


Armed with this syntax, I can copy and paste into NetWitness.  Since we are querying the same key for multiple values, we can separate them with commas.  However, since we are using "", which is a Text-formatted meta key, we need to ensure the values are enclosed in quotes for our query.



Again, no findings.


The presence of compromised files might mean the declaration of an incident and the launch of a larger forensic investigation, depending on the organization.  In that case, we would know the files were here, but we might not have been a target based on currently available research.


In summary, searching for indicators of compromise using the NetWitness suite is a great first step in identifying potential problems in your environment.  Sometimes the data isn't provided in an easy-to-use format, but with some quick command line techniques, you can massage that data into a format ready to query.  This whole exercise took only a few moments to complete, and we can begin to answer what the impact is to the business.


As always, know your data and happy hunting.




If you did identify the presence of these or other suspicious or compromised files in your organization, our RSA Incident Response team is here to assist with the triage.  If you have an IR Retainer in place with RSA then you already have rapid access to our analysts who can get engaged and rapidly identify the scope of the incident.  If you don’t have an IR Retainer or are interested in learning more about our Incident Response services, please visit our Incident Response Services page on

Recently, I was using NetWitness Endpoint (ECAT) to help triage a large environment.  During this time, I identified a few systems that were exploited by a malicious HTML file.  It was part of a phishing campaign that came in via email.  Unfortunately, I was unable to find the file because it was no longer in the Outlook Temporary Internet Files folder.  However, since we had tracking data coming in from the agents, I was able to recreate the scene even without the initial malicious code.


The original compromise showed tracking data like the one below:



Here we can see that Outlook starts up Chrome to open a file in the Outlook Temporary Internet Files directory.  From there we see regsvr32.exe kicked off with a URL in its launch arguments.  The regsvr32.exe is a legitimate and signed Microsoft file used to register DLLs and other controls in the Windows Registry.  Last year, researcher Casey Smith described how this component could take a URL to a remote file as an argument to bypass various security controls.  The URL could be over HTTP or HTTPS and would point to an SCT file.  This SCT file is really just an XML file with instructions on what regsvr32.exe should do.


With the tracking data showing us step by step what occurred on the system, we can use these commands on a different system and attempt to recreate the infection.  


On my analysis system, I opened a command prompt and ran the command that started this off.


This URL took us to the malicious script on a Google API site over SSL.  The contents of that SCT file can be seen below:



In there we see the syntax that JSRat is going to execute, leveraging mshta as well as another URL.  This new 'terra' URL sends another XML scriptlet to download and install a malicious DLL called 'rubyonrais.dll'.




Tracking data in our analysis system looks very similar to what our original host showed along with the registering of the DLL.



If we take a look at the network traffic associated with this, we can get insight into what was happening as well.  



We can see the request to '' over SSL and then the connection to 'meubackup.terra[.]com[.]br'.  This downloaded a 1.4 MB file based on the network traffic.  Even though this is an SSL connection, we can still see the metadata about that session.  I can now go and find the file where the script told us it would be: in the C:\Users\Public\Administrator folder.



Currently this could be picked up with the IIOC "Runs mshta with javascript arguments".


Another IIOC we could create is slightly different from one out of the box, covering both HTTP and HTTPS connections.



--Runs_REGSVR32_HTTP.sql Runs REGSVR32.EXE with HTTP argument 

/* DB Query


SELECT mn.MachineName, se.EventUTCTime, sfn.Filename, se.FileName_Target, se.Path_Target, se.LaunchArguments_Target, sla.LaunchArguments



FROM [dbo].[WinTrackingEvents_P0] AS [se] WITH(NOLOCK) -- Also try P1
INNER JOIN [dbo].[MachineModulePaths] AS [mp] WITH(NOLOCK) ON ([mp].[PK_MachineModulePaths] = [se].[FK_MachineModulePaths])
INNER JOIN [dbo].[FileNames] AS [sfn] WITH(NOLOCK) ON ([sfn].[PK_FileNames] = [mp].[FK_FileNames])
INNER JOIN [dbo].[machines] AS [mn] WITH(NOLOCK) ON [mn].[PK_Machines] = [se].[FK_Machines]
INNER JOIN [dbo].[LaunchArguments] AS [sla] WITH(NOLOCK) ON [sla].[PK_LaunchArguments] = [se].[FK_LaunchArguments__SourceCommandLine]



WHERE
[se].[BehaviorProcessCreateProcess] = 1 AND
[se].FileName_Target = N'regsvr32.exe' AND
[se].LaunchArguments_Target LIKE N'%/i:http%'


--ORDER BY se.EventUTCTime desc
ORDER BY mn.MachineName desc




--,[se].[PK_WinTrackingEvents] AS [FK_mocSentinelEvents]  

-- If you are using, remove the comment dash above


FROM [dbo].[WinTrackingEventsCache] AS [se] WITH(NOLOCK)



WHERE
[se].[BehaviorProcessCreateProcess] = 1 AND
[se].FileName_Target = N'regsvr32.exe' AND
[se].LaunchArguments_Target LIKE N'%/i:http%'





I hope you find this useful and as always, happy hunting.



If you have been using RSA NetWitness Packets for any length of time, you might have noticed that many large sessions are capped at approximately 32 MB.  Furthermore, there may be multiple 32 MB sessions between the same two hosts.



Beginning in 10.5, a new meta key was added called 'session.split' to track follow-on sessions that are related.  While the decoder settings may draw the line at 32 MB per session (the default setting, which I don't recommend changing), network traffic is not bound by such restraints; it can be as large as it has to be.  All of this traffic is still captured, but previously there wasn't anything tying all the sessions together.  With session.split, we can see that there is more network data to be found.  In the 'List View' screenshot above, you can see the numbers on the far right of the session.  You can right-click on that number and find the session fragments in a new tab.


If that view doesn't work for you, you can build your own custom view like the one below.



Recently, I was having a discussion with some colleagues about how to find uploads greater than 1 GB in size that are going outbound.  This was to identify some potential exfiltration use cases.  One thing that came to mind was using meta in 'session.split'.  In a few short minutes, I had an application rule built by using some of the content from the Hunting Pack content (RSA Announces the Availability of the Hunting Pack in Live  ).  Let's break it down.


First, we know that it would be outbound network traffic.  Therefore, we could start our application rule with:


(medium = 1 && direction = 'outbound')


If you don't have directionality setup in your decoders, you could substitute "direction = 'outbound'" with "org.dst exists".


Next, we look at the new meta key from the hunting pack called 'analysis.session' (aka Session Characteristics).  The purpose of this meta key is to tell the analyst things that were observed about the network session.  In our case, we are looking for 'ratio high transmitted'.


The meta 'ratio high transmitted' is a reference to a calculation of the transmitted bytes (requestpayload) vs the received bytes (responsepayload) in a network session.  It provides a ratio score of 0 - 100 showing which side sent more data.  A score of 0 means more bytes were received than transmitted.  A score of 100 means more bytes were transmitted than received.  Since we are looking for uploads, that would typically have more data being transmitted than received in a network session.  Therefore, we can add this meta to our app rule.


(medium = 1 && direction = 'outbound' && analysis.session = 'ratio high transmitted')
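As a rough illustration of the idea behind that meta value, the ratio can be thought of as the transmitted share of total session payload bytes.  This is my approximation for intuition only, not the parser's actual implementation:

```python
def transmit_ratio(requestpayload: int, responsepayload: int) -> int:
    """Approximate a 0-100 score of how one-sided a session was.

    Near 100 means almost all payload bytes were transmitted (an upload);
    near 0 means almost all were received (a download).  This mirrors the
    idea behind 'ratio high transmitted', not the exact NetWitness math.
    """
    total = requestpayload + responsepayload
    if total == 0:
        return 0
    return round(100 * requestpayload / total)

print(transmit_ratio(950_000_000, 4_000_000))  # a large upload scores near 100
print(transmit_ratio(4_000_000, 950_000_000))  # a large download scores near 0
```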


However, we aren't done yet.  How do we tell if it is around or over 1 GB?  This is where session.split comes in.  Since the sessions max out at 32 MB per the default decoder configuration, we can do some math to find out how many sessions it would take to reach approximately 1 GB.


1024 MB / 32 MB = 32 sessions.


Since there could be retransmitted data or some other anomalies in the traffic, let's give ourselves an approximate session count of 30.  This means that if session.split reached 30 splits (really 31, since it starts from 0), then we have a large session and may want to have a closer look.
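That arithmetic generalizes if you want a different target size or run a non-default session limit.  A quick sketch, using the same slack of 2 sessions as the 1 GB example:

```python
def splits_needed(target_mb: int, session_mb: int = 32, slack: int = 2) -> int:
    """Rough session.split threshold for flagging ~target_mb of data.

    slack backs the threshold off a little to allow for retransmissions
    and other anomalies, as in the 1 GB example (32 sessions - 2 = 30).
    """
    return target_mb // session_mb - slack

print(splits_needed(1024))  # the ~1 GB example from this post -> 30
```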


Therefore, our application rule looks like:


(medium = 1 && direction = 'outbound' && analysis.session = 'ratio high transmitted' && session.split >= 30)



I called mine 'large_outbound_transmit', but you can call it whatever you like.  This will tag any of those follow-on sessions that matched the criteria we set in the app rule, starting at session.split 30.  To find all the session fragments, go back into the Investigation Events view, select the List view, right-click on the session.split number (not the little icon to the left of it), and select 'Refocus' or 'Refocus New Tab'.




What's nice about this rule is that it works whether the content is encrypted or unencrypted.  It is simply working against meta we've already collected.  Now, I can tell if I have large network sessions leaving my network.  If you regularly have large sessions, perhaps creating a filtering application rule or feed may help reduce some of that noise.


More information about session.split and understanding its configuration can be found here: Investigation: Combine Events from Split Sessions 


I hope you found this post helpful, and as always, happy hunting.



UPDATE - March 21, 2017 

Due to continued interest in this event and continued public exploitation, we’ve added detection to the HTTP_lua parser.   Customers will get this update automatically via the LIVE update process if they are subscribed to this content.    The following meta is created if the parser is triggered:


ioc="apache struts exploit attempt"

analysis.service="content-disposition filename contains null character"





Since the release of CVE-2017-5638, the RSA Incident Response team has fielded several questions about how to detect this activity.  Proof-of-concept code is already available and being used to identify vulnerable servers.  Fortunately, detection with NetWitness Packets is quite easy.


The packet decoder contains HTTP parsers that will parse out much of the HTTP headers.  Since this exploit appears to be using a malformed Content-Type entry, we can detect this by examining meta already coming in.


One thing to note is that this traffic still appears as valid HTTP traffic.  It appears like a typical POST and has valid HTTP headers with the exception of the malformed Content-Type.



Another included an HTTP GET.



One way to find this traffic is combining existing meta to make new meta, which we can turn into an app rule.  Let's examine the meta.



We already have a service type of 80 (HTTP) and long content meta.  We can pick out interesting pieces from the content meta key, such as "_member".


Therefore an application rule might look like:


service=80 && content contains '_member'
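Outside of an app rule, the same idea is just a header check.  The sketch below flags a Content-Type value carrying OGNL-style injection markers; the 'suspicious' sample is a truncated, defanged stand-in I wrote for illustration, not captured exploit traffic:

```python
# Truncated, defanged stand-in for a CVE-2017-5638 style Content-Type header.
suspicious = "%{(#_='multipart/form-data').(#_memberAccess=...)}"
benign = "multipart/form-data; boundary=----WebKitFormBoundary"

def looks_like_struts_ognl(content_type: str) -> bool:
    # '_member' mirrors the app rule's content match; '%{' is another
    # common OGNL expression marker seen in exploit attempts.
    return "_member" in content_type or content_type.lstrip().startswith("%{")

print(looks_like_struts_ognl(suspicious), looks_like_struts_ognl(benign))
```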


If we turn that into a query to double-check our work, we should get the session(s) of interest:




I've attached a sample PCAP of the POST if you need to replay.  Another, showing an HTTP GET was found on the SANS Internet Storm Center site here:  Critical Apache Struts 2 Vulnerability (Patch Now!) - SANS Internet Storm Center 


Happy Hunting,



This came out of a separate discussion but I thought it could be helpful for others.


A customer was looking to write an ESA rule that was essentially doing an 'ends' match against meta.  For example, '' or '' could be looked for by ''.  Things like this can actually be done on the decoder and created as meta for easy searching.


You could create application rules on your decoders that specifically look for the domain of interest.


name=maliciousdomain rule=" ends ''" alert=alert type=application


Then, just have ESA look for    alert = 'maliciousdomain'  since it will already be meta at that point.


You could also extract the root host for any and all sessions where it is populated.  I wrote a parser to help with that, the purpose being that if I wanted to exclude any domain, I could.  It uses a custom meta key called '', so an index change on the concentrators would be needed if you wanted to query against it.  If you want to change the meta key, feel free to do so.



The parser works by performing a meta callback against '' and then examining the location of all the dots in the hostname.  It then compares the last position against the TLD's listed in a table and then moves to the left if found.  
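The core of that dot-walking logic can be sketched in Python as a simplified stand-in for the Lua parser.  The tiny TLD table here is hypothetical; a real implementation needs a much larger suffix list:

```python
# Simplified sketch: split on dots, check the last label against a TLD
# table, and step one label further left for two-level suffixes (com.br).
TLDS = {"com", "net", "org", "br"}

def root_host(hostname: str) -> str:
    labels = hostname.lower().split(".")
    if len(labels) < 2:
        return hostname
    i = len(labels) - 1
    if labels[i] in TLDS:
        i -= 1
        if i > 0 and labels[i] in TLDS:  # e.g. 'com.br'
            i -= 1
    return ".".join(labels[i:])

print(root_host("meubackup.terra.com.br"))  # -> terra.com.br
print(root_host("www.example.com"))         # -> example.com
```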


Since this is just performing a meta callback, it can work on both packet and log decoders.  Just remember that on log decoders, you would need to add the nwll.lua file.  You can download it from Live and deploy it manually.


Happy hunting.


