All Places > Products > RSA NetWitness Platform > Blog > 2016 > June

Hackers are people too.


Sometimes it is difficult to remember that fact, given the Hollywood-style attack stories we hear on the news, in presentations, and in ghost stories. But it is true: hackers are people. They know how they want things done, and they reuse the same tools. One of the shortcuts they may take is leaving the HTTP Accept-Language header unchanged.


HTTP is an application protocol that browsers use to communicate with web servers. A fundamental part of that communication is the use of HTTP headers, which are traditionally sent as text with each header on a new line. The HTTP Accept-Language header is used by a browser to tell the web server the language(s) in which to present the reply. Hackers may be lazy, not thorough, or simply sloppy, and forget to change these settings. The header can also give an idea of the country in which an attacker lives, though it is not a very high-fidelity indicator. Security Analytics has insight into the Accept-Language header and can extract this potential indicator.


Here is a link to a parser that takes the HTTP Accept-Language header, extracts the language keys, and places them in the language meta key in a human-readable format. This allows an analyst to query for certain languages, or to create reports looking for abnormal languages in their environment. Further documentation on the parser is available here.
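To illustrate the kind of extraction the parser performs, here is a rough Python sketch (not the Lua parser itself, and the function name and output format are illustrative assumptions): it splits an Accept-Language header into individual language tags with their quality values, highest preference first.

```python
def parse_accept_language(header):
    """Parse an Accept-Language header into (language, q) pairs, highest q first."""
    langs = []
    for part in header.split(","):
        piece = part.strip()
        if not piece:
            continue
        if ";" in piece:
            tag, _, params = piece.partition(";")
            q = 1.0
            for param in params.split(";"):
                name, _, value = param.strip().partition("=")
                if name == "q":
                    try:
                        q = float(value)
                    except ValueError:
                        q = 0.0
            langs.append((tag.strip(), q))
        else:
            # No parameters means the default quality value of 1.0
            langs.append((piece, 1.0))
    return sorted(langs, key=lambda pair: -pair[1])

print(parse_accept_language("en-US,en;q=0.9,ru;q=0.8"))
# [('en-US', 1.0), ('en', 0.9), ('ru', 0.8)]
```

A browser configured for Russian that is attacking an English-speaking organization would stand out quickly when the extracted tags are indexed and queried.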


Thanks for taking the time to read, and hopefully you find this useful in your environment.


UPDATE: here is an updated version of the Lua parser to improve performance.




The RSA Live Content team has released the Traffic Flow Lua parser and its associated options parser, which exist as part of the IR content pack. The traffic flow parser brings directionality information and netblock identification into the product. Directionality (direction meta) provides the context of whether a session was initiated from an internal host to an external host (outbound), from an external host to an internal host (inbound), or between two internal hosts (lateral). The netblock name (netname meta) provides the context of where on your network a host resides. By default, netblocks are defined for private, broadcast, loopback, link-local, multicast, and reserved traffic. The screenshot below shows the Investigation view with these two pieces of meta populated.
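The directionality logic can be sketched roughly as follows. This is an illustrative Python approximation, not the Lua parser's implementation, and the RFC 1918 netblocks shown are just the private ranges defined by default; your options file would define your real internal ranges.

```python
import ipaddress

# Illustrative internal netblocks (the private ranges the parser defines by default).
INTERNAL = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_internal(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL)

def direction(src, dst):
    """Classify a session by where each endpoint sits relative to the internal netblocks."""
    src_in, dst_in = is_internal(src), is_internal(dst)
    if src_in and dst_in:
        return "lateral"
    if src_in:
        return "outbound"
    if dst_in:
        return "inbound"
    return "external"  # neither endpoint is on a defined netblock

print(direction("192.168.1.5", "8.8.8.8"))  # outbound
```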



You download the parser from Live and deploy it to a packet decoder, in the same manner as you download and deploy all of the RSA parsers. In addition to the parser, there is an options file; you only need it if the default settings are not sufficient for your use case.




At this time, only manual deployment to a Log Decoder is supported. You can find detailed information about the current implementation in the SA docs, at Traffic Flow Lua Parser.


What to expect going forward?

Our goal going forward is to make the parser easier to deploy and configure. Expect the following in upcoming releases:

  • Ease of customization and deployment across multiple devices
  • Support for Log Decoders via Live

Sometimes, you just want the number. The HTTP response code number, that is.


HTTP traffic represents one of the largest traffic types that our decoders see. We parse quite a lot of it into meta, which is then indexed for future searching. While we parse certain error codes into the 'error' meta key, sometimes analysts just want the code number. No description. Just the number, like 200 or 404 or 302.


I wrote a quick Lua parser that does this and puts the data into 'result.code'. The reason for using 'result.code' is that it is already populated by log decoders when parsing web proxy logs. Having meta from both packets and logs in the same place seemed an ideal choice in this case.
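The core of the extraction is simple enough to sketch. This is a hypothetical Python illustration of the idea, not the attached Lua parser: match the HTTP status line and keep only the three-digit code, dropping the reason phrase.

```python
import re

# Match the numeric status code at the start of an HTTP response line,
# e.g. "HTTP/1.1 404 Not Found" -> "404", ignoring the reason phrase.
STATUS_LINE = re.compile(r"^HTTP/\d\.\d\s+(\d{3})")

def response_code(status_line):
    match = STATUS_LINE.match(status_line)
    return match.group(1) if match else None

print(response_code("HTTP/1.1 404 Not Found"))  # 404
```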



A copy of the parser is attached.  It would only be deployed to packet decoders.  This functionality may be added to one of the existing parsers (http_lua most likely) in the future.


I hope you find this parser useful.  Happy hunting.



Last year, I was on an incident response engagement where we were investigating several drive-by attacks. Packet decoders were deployed and picking up the sessions rather easily, but we were trying to identify the path of redirection the malware compromises were taking. We had 'referer' meta, but it was not indexed and would come in as a URL. While valuable, it was not something we could easily query against. However, I could extract some key elements from that 'referer' meta to help tell the story. And in case anyone is wondering why it is spelled 'referer' instead of 'referrer', please consult the Google.


The result was a Lua parser that extracts the path information from the referer meta. Essentially, it performs a meta callback against the 'referer' meta key; a meta callback is simply a lookup of meta already created in the session. The parser then breaks the key elements out into individual meta keys.


Because I really only wanted the host that it came from, I created a custom key called '' and indexed that. You could remove the comments around the other elements, such as directory, filename, and extension, but I did not find a lot of value in using those in an investigation.
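The breakdown of a referer URL into those elements can be sketched in Python (an illustrative assumption of the split, not the Lua parser's code; the dictionary keys here are hypothetical names, not the actual meta keys):

```python
from urllib.parse import urlsplit

def split_referer(referer):
    """Break a referer URL into host, directory, filename, and extension."""
    parts = urlsplit(referer)
    directory, _, filename = parts.path.rpartition("/")
    # Extension is whatever follows the last dot in the filename, if any.
    _, _, extension = filename.rpartition(".") if "." in filename else (filename, "", "")
    return {
        "host": parts.hostname,
        "directory": directory + "/",
        "filename": filename,
        "extension": extension,
    }

print(split_referer("http://example.com/ads/click.php?id=1"))
```

In an investigation, indexing just the host element is usually enough to chain together the redirect hops from site to site.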




The result helped tell the story and looked a little like this:



As I stated above, this was done on a packet decoder; however, you can run it on a log decoder too. Several web proxies will log the referer information, which gets parsed into the referer meta key. Since this parser performs a meta callback, it can call back the meta from log sessions just as easily as from packets. The only difference is that log decoders need the nwll.lua (NetWitness Lua Library) file, which they do not come with by default. You can download it as a package from Live and deploy it to your log decoders. The parser requires it on packet decoders too, but it is already present there.


Since this is going to use a custom meta key, you will want to add this to your index-concentrator-custom.xml on your concentrators.


<key description="Referer Host" level="IndexValues" name="" format="Text" valueMax="2500000" />


Edit that file and add the entry above.  Then, save it, and restart the concentrator service to start indexing the meta.


I hope you find this useful to your investigations.  Happy hunting.



We are pleased to announce the addition of threat indicators directly from RSA's world-class Incident Response team. These indicators include domains and IPs that are sourced from Incident Response activities in a variety of ways:


- Direct observation during RSA Incident Response engagements

- Related indicators developed via malware, DNS, and whois analysis

- 3rd Party Indicators with connections to RSA IR activity


These indicators can be loaded into Security Analytics by subscribing to the following feeds in RSA Live:


RSA FirstWatch Command and Control Domains

RSA FirstWatch Command and Control IPs


The following pivot can be used to locate hits on these indicators in the Security Analytics UI:


threat.source = "rsa ir indicators"


Thanks and Happy Hunting!


RSA FirstWatch

You asked and we listened! 


Ransomware continues to be a significant threat to our customers, so this is a very timely addition. A ransomware tracker has been added which tracks the following families of ransomware:











We’ve added these indicators to the following feeds in LIVE:


Third Party IOC Domains

Third Party IOC IPs


They can be located with the following pivot in the Security Analytics UI:


threat.category = "ransomware"


Happy Hunting!



The RSA Live Content team recently released a new Content Categorization Model for ALL content available via the RSA Live service. This means more than 1,000 pieces of content, such as app and ESA rules, reports, and parsers, are now tagged with one or more categories. This complements previous categorization models provided in Security Analytics (SA).


The first two levels of the new four-level-deep categorization model are currently exposed as Live Search Tags in the SA UI. You can find detailed information about the current implementation in the SA docs, under Live Content Search Tags.





Our analyst practitioners crafted the new categorization model around use scenarios, to closely replicate an Incident Response service-based approach.


For example, the attack phase category is designed to assist incident response practitioners with the escalation, remediation, and classification of observed indicator-of-compromise activity. The malware category ties together content that looks for malicious behaviors attributable to remote access trojans, crimeware, web shells, and keyloggers. The risk category ties in intelligence SA may discover about the enterprise, such as vulnerabilities, organizational hazards, or business context provided through integrations with risk management systems like RSA Archer.


Our goal in Phase-I (which is now available for consumption) was to make it easier for users on all versions of SA to find and deploy Live content by Use Scenarios.


What to expect going forward?

Our goal in Phase-II is to let SA users leverage the new categories in rules and screens throughout SA, making it easier to detect, filter, and drill down using these categories (e.g. category=malware AND attack_phase=delivery), in addition to providing valuable context for all events and alerts.


Users can also expect a re-imagined user experience when working with Live Content in our upcoming major version of SA.  It will not only allow for more intuitive browsing & searching of Live content, but it will also include content versioning and reporting to determine at a glance what’s deployed in your environment and how it compares with the latest content on the RSA Live portal.
