All Places > Products > RSA NetWitness Platform > Blog > 2018 > September

We at RSA value your thoughts and feedback on our products. Please tell us what you think about RSA NetWitness by participating directly in our upcoming user research studies. 


What's in it for you?


You will get a chance to play around with new and exciting features and help us shape the future of our product through your direct feedback. After you submit your information in the survey below, if you are a match for an upcoming study, one of our researchers will work with you to facilitate a study session, either in a lab setting or remotely. There are no right or wrong answers in our studies - every single piece of your feedback will help us improve the RSA NetWitness experience. Please join us in this journey by completing the short survey below so that we can see if you are a match for one of our studies.


This survey should take less than a minute of your time.


Take the survey here.

I have found over the past several years that there is quite a lot of incredibly useful meta packed into the 'query' meta key.  The HTTP parser puts arguments and passed variables in there when they are used in GETs and POSTs.  While examining some recent PCAPs from the Malware Traffic Analysis site, I found some common elements we can use to identify Trickbot infections.  This was not an exhaustive look at Trickbot, but simply a means to identify some common traits as meta values.  As Trickbot, or any malware campaign, changes, IOCs will need to be updated.


First things first, let's look at the index level for the 'query' meta key.  By default, the 'query' meta key is set to 'IndexKeys'.  This means that you could search for sessions where the key exists, but could not query for the values stored within that key.



There are pros and cons to setting the index level to 'IndexValues' in your 'index-concentrator-custom.xml' file on your Concentrators.  One pro is being able to search for those values during an investigation.  The con is that these queries would likely involve 'contains', which taxes the query from a performance perspective.  Furthermore, 'query' is a Text-formatted meta key limited to 256 bytes.  Anything after 256 bytes is truncated, so you may not have the complete query string.
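If you do decide to index the values, a minimal sketch of the entry you would add to 'index-concentrator-custom.xml' might look like the following (the attribute names mirror the OOTB index entries; the valueMax shown is an illustrative assumption, not a recommendation):

```xml
<!-- index-concentrator-custom.xml: enable value indexing for 'query' -->
<!-- valueMax here is illustrative; tune it for your environment -->
<key description="Query" format="Text" level="IndexValues" name="query" valueMax="100000"/>
```

Remember that the Concentrator service needs a restart (or an index reload) before a change like this takes effect.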


Whether 'query' is set to 'IndexKeys' or 'IndexValues' or even 'IndexNone', we can take advantage of it in App rule creation.  In one Trickbot pcap, we can see an HTTP POST to an IP address on a non-standard port.



If we look at the meta created for this session, we can see the 'proclist' and 'sysinfo' as pieces in the 'query' meta.



Combining these with a service type (service = 80) and an action (action = 'post'), we can create an application rule that can help find Trickbot infections in the environment.  For good measure, we can add additional meta from analysis.service to round it out.



Trickbot application rule
service = 80 && action = 'post' && query = 'name="sysinfo"' && query = 'name="proclist"' && analysis.service = 'windows cli admin commands'



The flexibility of app rule creation allows analysts and threat hunters to take a handful of indicators (meta) and combine them to make detection easier.



Once a threat is identified, we can use this method to find the traffic more easily moving forward, freeing us to go find the next new bad thing.  If the app rule fires too often on normal traffic, we can adjust the rule to add or exclude other meta to ensure it is firing correctly.


As always, good luck, and happy hunting.



Encrypted traffic has always posed more challenges to network defenders than plaintext traffic, but thanks to some recent enhancements in NetWitness 11.2 and a really useful feed, defenders have a new tool in their toolbox.


NetWitness 11.2 added the ability to enable TLS certificate hashing via an optional parameter on your Decoders.

Decoder Configuration Guide for Version 11.2 

(search for TLS certificate hashing - page 164)

  • Explore > /decoder/parsers/config/parsers.options
  • Add this after the entropy line (space delimited): HTTPS="cert.sha1=true"
  • Make sure the HTTPS native parser is enabled on the Decoder
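For reference, a sketch of what the edited parsers.options value might look like after the change (your existing entropy entry will differ and is elided here):

```
Entropy="..." HTTPS="cert.sha1=true"
```

The value is a single space-delimited string, so append the HTTPS token rather than replacing what is already there.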


This new meta is the SHA1 hash of any DER-encoded certificates seen during the TLS handshake.  It is written to cert.checksum, the same key that NetWitness Endpoint writes its values to.


Now is a good time to revisit your application rules that might be truncating encrypted traffic.  Take advantage of the new parameters that were added in 11.1 related to truncation after the handshake.



Now that we have a field for the certificate hash, we need a method to track known certificate checksums to match against. The SSLBL project has a feed that tracks these blacklisted certificates, along with information to identify the potential attacker campaign.


This is the feed (SSLBL Extended); you could also leverage the Dyre list as well.


The headers look like this:

# Timestamp of Listing (UTC),Referencing Sample (MD5),Destination IP,Destination Port,SSL certificate SHA1 Fingerprint,Listing reason
Map the feed as follows

Configure > Custom Feeds > New Feed > Custom Feed


Add the URL as above and set it to recur every hour (to get new data into the feed in a reasonable time).


Apply it to your Decoders.  (In 11.2 you will notice that the feed is also added to your Context Hub, which means you can create a feed that is used both as a feed and as an ESA whitelist or blacklist.)



Use the Non-IP type, and map column 5 to cert.checksum and column 6 to IOC (if we get a match, we can be pretty confident this traffic is bad).
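To make the column mapping concrete, here is a small Python sketch that maps one row of the CSV layout shown above onto the two meta keys; the sample row and its values are hypothetical:

```python
import csv
import io

# Hypothetical row in the SSLBL Extended layout described above:
# timestamp, sample MD5, dest IP, dest port, cert SHA1, listing reason
SAMPLE = (
    '2018-09-05 12:00:00,0123456789abcdef0123456789abcdef,'
    '203.0.113.10,443,'
    '1111222233334444555566667777888899990000,Gootkit C&C'
)

def map_feed_row(line):
    """Mirror the feed wizard mapping: column 5 (1-based) -> cert.checksum,
    column 6 -> ioc."""
    row = next(csv.reader(io.StringIO(line)))
    return {
        'cert.checksum': row[4].lower(),  # SHA1 fingerprint
        'ioc': row[5],                    # listing reason, e.g. "<name> C&C"
    }

meta = map_feed_row(SAMPLE)
print(meta['cert.checksum'], meta['ioc'])
```

This is only a model of what the feed engine does for you; in the product, the mapping is configured entirely in the Custom Feed wizard.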


And now you have an updated feed that will alert you to certificate usage that matches known lists of badness.


An example output looks like this (it always ends with <space>c&c in the IOC key):


(The client value is from another science project related to JA3 signatures... in this case, double confirmation of Gootkit.)


The testing data used to play with this came from here - 2018-09-05 - Emotet infection with IcedID banking Trojan and AZORult.


Great resource and challenges if you are looking for some live fire exercises.


To wrap this up, an ESA rule can be created with the following criteria to identify these communications and create an alert:

@Name("outbound_blacklisted_ssl_cert: {ioc}")
@Description('cert.checksum + ssl abuse blacklist all have ioc ends with <space>c&c')
SELECT * FROM Event(
/* Statement: outbound_ssl_crypto_cnc */
direction.toLowerCase() IN ( 'outbound' ) AND
service IN ( 443 ) AND
matchLike(ioc,'% C&C' )
/*isOneOfIgnoreCase(ioc,{ '%c&c' })*/
) ;

The reason advanced mode was needed is that the IOC meta key had to be wildcarded to look for any match of <name><space>C&C, and I didn't want to enumerate all the potential names from the feed (the UI doesn't provide a means to do this in the basic rule builder for arrays, of which ioc is a string[]).
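For illustration, the wildcard condition behaves roughly like this Python sketch (modeled on the case-insensitive variant from the commented-out isOneOfIgnoreCase line; ioc is treated as an array of strings, and the sample values are hypothetical):

```python
def matches_blacklist_ioc(ioc_values):
    """Approximate matchLike(ioc, '% C&C') against a string[] meta key:
    true if any value ends with ' c&c', compared case-insensitively."""
    return any(v.lower().endswith(' c&c') for v in ioc_values)

print(matches_blacklist_ioc(['Gootkit C&C']))               # matches the feed pattern
print(matches_blacklist_ioc(['windows cli admin commands']))  # unrelated meta, no match
```

The leading '%' in the EPL pattern is what frees you from enumerating every campaign name the feed might emit.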


Another thing to notice is that the @Name syntax creates a parameterized name that is only available in the alert details of the raw alert.

I was hoping to do more with that data but so far not having much luck.


You can also wrap this into a Respond alert to make sure you group all potential communications for a system together with these alerts (grouping by source IP).


If everything works correctly, you will get Respond alerts like this that you should investigate:

With all the recent blogs from Christopher Ahearn about creating custom lua parsers, some folks who try their hand at it may find themselves wondering how to easily and efficiently deploy their new, custom parsers across their RSA NetWitness environment.


Manually browsing to each Decoder's Config/Parsers tab to upload there will quickly become frustrating in larger or distributed environments with more than one Decoder.


Manually uploading to a single Decoder and then using the Config/Files tab's Push option would help eliminate the need to upload to every single Decoder, but you would still need to reload each Decoder's parsers.  While this could, of course, be scripted, I believe there is a simpler, easier, and more efficient option available.


Not coincidentally, that option is the title of this blog. We can leverage the Live module within the NetWitness UI to deploy custom parsers across entire environments and automatically reload each Decoder's parsers in the process.  To do this, we will need to create a custom resource bundle that mimics an OOTB Live resource.


First, let's take a look at one of the newer lua parsers from Live to see how it's being packaged.  We'll select one parser and then choose Package --> Create to generate a resource bundle.


In this ZIP's top-level directory, we see a LUAPARSER folder and a resourceBundleInfo.xml file.


Navigating down through the LUAPARSER folder, we eventually come to another ZIP file:


This contains an encrypted lua parser and a token to allow NetWitness Decoders to decrypt it (FYI - you do not need to encrypt your custom lua parsers).


The main takeaway from this is that when we create our custom resource bundle, we now know to create a directory structure like in the above screenshot, and that our custom lua parser will need to be packaged into a ZIP file at the bottom of this directory tree.


Next, let's take a look at the resourceBundleInfo.xml file in the top-level directory of the resource bundle.  This XML is the key to getting Live to properly identify and deploy our custom lua parser.


Everything that we really need to know about this XML is in the <resourceInfo> section.


A common or friendly name for our parser:



The name of the ZIP file at the bottom of the directory tree:



The full path of this ZIP file:



The version number (which can really be anything you want, as long as it's reflected accurately in the filePath):



The resourceType line is the name of the top-level folder in the resource bundle (you shouldn't need to change this):



The typeTitle (which you also shouldn't change):

            <typeTitle>Lua Parser</typeTitle>


And lastly the uuid, which is how Live and the NetWitness platform identify Live resources:



Modifying everything in this file should be pretty straightforward - you'll simply want to update each line to reflect your parser's information. And for the uuid, we can simply create our own - but don't worry, it doesn't need to be anywhere near as long or complex as a Live resource uuid.


Now that we know what the structure of the resource bundle should look like, and what information the XML needs to contain, we can go ahead and create our own custom resource bundle.
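As a rough sketch, a single-parser resourceBundleInfo.xml could look like the following. The element names here mirror the fields walked through above, but verify them against a bundle you export yourself; the parser name, path, and version are illustrative:

```xml
<resourceBundleInfo>
    <resourceInfos>
        <resourceInfo>
            <displayName>Linux ELF File Detection</displayName>
            <fileName>ELF_detection.zip</fileName>
            <filePath>LUAPARSER/ELF_detection/1.0/ELF_detection.zip</filePath>
            <version>1.0</version>
            <resourceType>LUAPARSER</resourceType>
            <typeTitle>Lua Parser</typeTitle>
            <uuid>uuid1</uuid>
        </resourceInfo>
    </resourceInfos>
</resourceBundleInfo>
```

The filePath must agree with where the inner ZIP actually sits inside the bundle, including the version folder.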


Here's what a completed custom resource bundle looks like, using one of Chris Ahearn's parsers as an example: What's on your wire: Detect Linux ELF files:
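If you prefer to script the packaging, here is a minimal Python sketch that assembles the nested ZIP-inside-a-ZIP layout described above; the parser name, version, and placeholder contents are all illustrative, and the intermediate folder names must match the filePath in your resourceBundleInfo.xml:

```python
import io
import zipfile

PARSER_NAME = 'ELF_detection'  # illustrative parser name
VERSION = '1.0'                # illustrative version

# Inner ZIP: holds the lua parser itself (no encryption needed for custom parsers).
inner = io.BytesIO()
with zipfile.ZipFile(inner, 'w', zipfile.ZIP_DEFLATED) as zf:
    zf.writestr(PARSER_NAME + '.lua', '-- parser source goes here\n')

# Outer resource bundle: LUAPARSER/<name>/<version>/<name>.zip plus the XML metadata.
with zipfile.ZipFile(PARSER_NAME + 'Bundle.zip', 'w', zipfile.ZIP_DEFLATED) as bundle:
    bundle.writestr(
        'LUAPARSER/{0}/{1}/{0}.zip'.format(PARSER_NAME, VERSION),
        inner.getvalue())
    bundle.writestr('resourceBundleInfo.xml',
                    '<!-- metadata as described above -->')
```

The resulting ELF_detectionBundle.zip is what you would hand to Live's Package --> Deploy dialog.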





With the custom bundle packaged and ready to go, we can go into Live, select Package --> Deploy, browse to our bundle, and simply step through the process, deploying to as many or as few of our Decoders as we want:





For confirmation, we can browse to any of our Decoders at Admin --> Services and see our custom parser deployed and enabled in the Config/General tab:


Lastly, for those who might have multiple custom resources they want to deploy at once in a single resource bundle, it's just a matter of adjusting the resourceBundleInfo.xml file to reflect each resource's name, version, and path, and making sure each uuid is unique within the resource bundle (e.g., uuid1, uuid2, uuid3, etc.):



You can find a resource bundle template attached to this blog.


Happy customizing, everybody!
