
Email Parsing Options

Posted by William Hart, Jul 1, 2016

We have received several requests to update some of the parsers we distribute in Live. One approach we have taken to make these optional changes available to customers is to have the original parser read an external configuration file. This allows some customization to suit environment requirements without requiring customers to write a parser themselves. These changes are optional because some of them may generate more meta (requiring additional storage), may consume additional parsing resources, or may simply not be appropriate for every environment.

 

As an example, the default email parser, MAIL_Lua, reads email messages regardless of transport protocol (e.g. SMTP, IMAP, POP3) and registers all email addresses into the email meta key. In the options file, a user can enable an option to register the email sender to email.src and the recipient to email.dst instead of registering them all to email. There are additional options in the attached MAIL_lua_options.lua file that can be enabled or disabled as well.

 

To have these options take effect, the following steps are required.

 

1) Upload the MAIL_lua_options.lua file into the /etc/netwitness/ng/parsers folder on the appropriate decoder where the MAIL_Lua parser is applied.

 

2) To enable the source and destination email change mentioned above, change the word false to true on line 24 of the options file. This is the last line before the end of the function named registerEmailSrcDst.
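For illustration only, the relevant portion of the options file might look something like the sketch below. This is an assumption about the file's layout based on the function name above; line numbers and surrounding code may differ in the copy you download.

-- sketch of the toggle in MAIL_lua_options.lua (layout assumed, not verbatim)
function registerEmailSrcDst()
    -- true: register the sender to email.src and the recipient to email.dst
    -- false (the default): register all addresses to the email meta key
    return true
end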

 

3) Validate that the email source and destination meta keys are included in the appropriate concentrator index file (e.g. index-concentrator.xml) located in /etc/netwitness/ng on the concentrator. This file can be viewed from the command line after logging in over secure shell, or in the administration section of the user interface at Administration > Services > <concentrator name> > Config > Files tab. The lines that should be there, and that will be populated by this change, are:

 

<key description="Source E-mail Address" level="IndexValues" name="email.src" format="Text" valueMax="2500000" />

<key description="Destination E-mail Address" level="IndexValues" name="email.dst" format="Text" valueMax="2500000" />
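If the keys are missing, the usual practice is to add them to index-concentrator-custom.xml rather than editing the stock file, since the custom file is preserved across upgrades. A minimal sketch, assuming the standard custom-file wrapper:

<?xml version="1.0" encoding="utf-8"?>
<language level="IndexNone" defaultAction="auto">
    <!-- index sender and recipient addresses at the value level -->
    <key description="Source E-mail Address" level="IndexValues" name="email.src" format="Text" valueMax="2500000" />
    <key description="Destination E-mail Address" level="IndexValues" name="email.dst" format="Text" valueMax="2500000" />
</language>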

 

4) For this to take effect, a decoder service restart is required. This causes a service interruption, so we recommend making this change during a maintenance window.
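If you have shell access to the decoder appliance, the restart can be done from the command line. The exact commands depend on your release; on upstart-based 10.x appliances they are along these lines (treat this as an assumption and verify for your version):

# on the decoder host, via SSH; service name assumed to be nwdecoder
stop nwdecoder
start nwdecoder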

 

I welcome any suggestions on the importance of these types of options, for this scenario as well as others, and on whether it would be better to have these options available in the Security Analytics user interface.

 

Note: I am just a conduit for this information and have to give credit for the creation of these parser options to RSA Content Engineers.

Are you looking for a way to trigger those PCAP downloads so they automatically open in a third-party tool? There is a way to do this in Security Analytics 10.4 and above. It does require enabling some settings that may not be on by default, depending on which version of Security Analytics you are running.

 

To make your PCAP extractions more efficient, follow these steps.

 

1) Make sure the Download Completed PCAPs setting is enabled. This is available in the Security Analytics interface through the Investigate > Navigate > Settings widget, as shown below. The download will still be tracked in the download job queue on the SA server, but after the download completes it will be saved to your client machine in your browser's designated download folder.

 

[Screenshot: PCAP_Download_Setting_hl.png]

 

 

2) Optionally, set up file associations on your operating system so that files with a .pcap extension open in your tool of choice, say Wireshark for example.
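As a hedged example on Windows, the built-in assoc and ftype commands in an elevated command prompt can do this; the ProgID name and install path below are illustrative, not authoritative:

rem associate .pcap with an illustrative ProgID, then point it at Wireshark
assoc .pcap=Wireshark.capture
ftype Wireshark.capture="C:\Program Files\Wireshark\Wireshark.exe" "%1"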

 

Note: Although you can configure this method for downloading files other than PCAPs, I do not suggest it unless the system running your browser is one you are allowed to download and execute malware on. By that I mean the machine is on a segmented network, is a sandboxed virtual machine, or has other endpoint software in place to limit the effect of malware. The reason I bring up this warning is that the files downloaded from Security Analytics are typically ones suspected of being malware or related to malware, and if they are automatically opened by their native applications you could end up infecting your own system.

It is a surprise to me how many people do not know all the operators available to them in the query language for investigations. Hence it made sense to run through some of the lesser-known ones here.

 

To start, the group NOT statement, which effectively does the same thing as ! when negating an entire statement. For example, you can easily execute the query username != 'monkey', but attempting !(username='monkey') does not work. Instead, the proper syntax is ~(username='monkey') or, alternatively, NOT(username='monkey'). This works for all functions, such as NOT(username contains 'monkey') or ~(username ends 'monkey').

 

Another useful one is <=, which along with >= can be used on numerical values. For example, if you wanted to find all sessions with TCP destination ports less than or equal to 1024, you can execute: tcp.dstport <= '1024'
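Putting the pieces together, here are a few combined examples in the same syntax; the meta key names assume the default index, and the values are purely illustrative:

~(username = 'monkey') && tcp.dstport <= '1024'
NOT(username contains 'monkey') || tcp.dstport >= '49152'

The && and || operators combine conditions just as you would expect.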

 

These operators can be used in the where clause of a report as well as in investigation queries.

 

Further details on these as well as other syntax specifics can be found in the online Security Analytics documentation:

Queries - RSA Security Analytics Documentation


How to Filter Feeds

Posted by William Hart, Mar 31, 2016

Do you have a feed that is valuable but occasionally causes false positives? Then this post is for you! The capability exists to filter out specific values from feeds without modifying the original data set. Why would you want such a thing? Well, in some cases the feed is generated by RSA or another entity in your organization, which limits your ability to manipulate it before it is ingested by the Security Analytics decoders. In general the intelligence makes the feed worth keeping, but there are some values that you wish could be ignored.

 

How do we achieve this? Simple, follow the steps below:

 

1) Determine the feed that is generating the errant value(s).

 

In Security Analytics investigation, focus on the threat source meta key to determine which feed(s) generated the alert or meta for the IP, domain, or other value you want to whitelist or filter out.

 

[Screenshot: feed_source2_hl.png]

In the example above (old sample data), the domain was listed as suspicious by the highlighted threat sources. If that determination were deemed incorrect, a filter file containing the hostname alias could be added for each feed listed in threat source.

 

2) Create a filter file and add the errant value(s) to it.

 

To pick one as an example, the malwaredomainlist-domain.feed is generating meta indicating that the purehtc.com domain is malicious. In a filter file named malwaredomainlist-domain.filter, add the purehtc.com domain on a single line. If additional values generated by this feed are incorrect, you can add those domains as subsequent rows in the same file, as in the example below.
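The resulting filter file is just one value per line. A sketch of malwaredomainlist-domain.filter, where the second domain is a made-up placeholder for any further false positive:

purehtc.com
another-false-positive.example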

 

3) Deploy the filter file to the same directory on the decoder(s) to which the feed has been applied.

 

To get the filter file into the appropriate location on the decoder (e.g. /etc/netwitness/ng/feeds), either secure copy the file to the system or use the feed upload option on the Feeds tab of the decoder configuration page.
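If you go the secure copy route, a one-liner like the following does the job (the decoder hostname is a placeholder):

scp malwaredomainlist-domain.filter root@<decoder>:/etc/netwitness/ng/feeds/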

 

[Screenshot: feed-filter_upload.png]

Unfortunately, that page does not allow you to view the filter files once they have been uploaded. You can, however, view the filter file in the Files tab on the decoder configuration page.

 

[Screenshot: feed-filter_view.png]

 

 

4) Disable and re-enable the feed so the decoder recognizes that the filter file exists.

 

5) That is it. Enjoy!

 

Please let me know if there are any additional questions or improvement suggestions in this area.

There is no denying the power that Security Analytics (SA) brings to the table. However, knowing where to start, or providing the tools to get an analyst started down the path of finding the nasty bits, is an area we would like to improve. In SA 10.4, Incident Management definitely helps facilitate this once issues or combinations are known, but without that information, how does an analyst know where to start hunting?

 

What follows is a basic primer on how to investigate certain situations in SA 10.4. The focus is on determining which meta values are of importance for each use case and the different ways each can be tackled using the attached profiles and groups.


First example use case: file analysis.

Most files have extensions, and those typically indicate what type of file it is. That is, if you believe everything you see, which from a security standpoint SA does not. SA looks at the extension (if there is one) and tracks it in the meta key called extension. However, to determine the actual file type, SA performs much deeper analysis. The main reason is that if a malicious actor is trying to get a piece of malware through the network and past security controls, one simple way is to modify the extension to make it look like another, less harmful file type. Malware in general is some form of executable, meaning it would, if not trying to be inconspicuous, have an extension of EXE, DLL, etc. Of course there are many other ways to get around security defenses (like using JavaScript to manipulate a file after it has traversed the network) than just changing a file extension, but for now we will focus on this simple case.

If you want to see a file as it is represented by the user or application, you can examine the filename and extension meta keys, but if you really want to validate what type of file is being transmitted as that filename, the filetype meta key should be used. There are a number of complex parsers, which I will not go into here, that compare the officially documented structures of different file formats (portable executables, PDFs, Office docs, etc.) to what is actually in the file being transmitted. This additional parsing can find such items as file magic numbers, whether objects are at the appropriate offsets, whether the file is encoded a certain way, or whether there is JavaScript embedded in your PDF file. Having these meta keys, along with some pertinent contextual meta like source and destination IP addresses or countries, provides a good picture of what types of files are being transmitted through an environment. A query like the one sketched below puts this to work.
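As a quick sketch of the technique, a query along these lines surfaces files whose claimed extension disagrees with what the parsers actually observed. The exact meta values depend on the content deployed in your environment, and legitimate variants (dll, sys, and so on) would need excluding too; treat this as illustrative:

filetype = 'windows executable' && extension != 'exe'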

To make this a little easier, you can either enable Malware Analysis (MA) in your Security Analytics deployment, since it does all this file analysis for you, or you can upload the attached column groups, meta groups, and profiles, which contain the file analysis templates. If you choose the latter, which I will add is still useful in file analysis cases even if you have MA deployed, here are some screenshots of what the outcome can look like.

The file analysis meta group shown here provides a view of the four main meta keys to focus on while doing file analysis: extension, filename, forensic fingerprint, and destination country. These can of course be added to if you prefer additional information, like the source/destination IP addresses or other meta keys you think relevant. Below, I have enabled the file analysis meta group.


[Screenshot: file analysis meta group enabled in the Navigate view]


 

I have then drilled into the filetype (or forensic fingerprint) windows executable to see all the executables traversing the environment; since malware needs to execute, it usually arrives in some form of executable, unless it is heavily obfuscated, that is.

 

 

[Screenshot: drill into the windows executable filetype]

 

Below is the same example, but using the file analysis column group in the Events view to compare values against the Navigate view.

 

[Screenshot: file analysis column group in the Events view]

 

 

Second example use case: web analysis.

Most malicious traffic is either overt, hoping to blend in with the sea of network traffic, or covert, attempting to stay undetected by using techniques like encryption and obfuscation. In either case, there are rules of engagement for networking protocols just like there are for file formats. This handy trinket of information can be very useful to an analyst, and it is, in general, how a lot of our network parsers view the world. The RFCs for HTTP (RFC 2616 and 7230-7235, HTTP/1.1) dictate that a Host field is required in HTTP/1.1 (it was optional in HTTP/1.0); if it is missing, or an IP address appears there instead of an FQDN, that can lead to something interesting. Now, "interesting" could mean the header was crafted by a lazy programmer at a commercial vendor, or by a malware author. If you have applied content from Live, there are parsers that will generate meta for this example, such as direct to ip http request, http direct to ip request, or http1.0 unsupported host header. These meta values appear in the risk.informational, risk.suspicious, and risk.warning meta keys that are included in the web analysis meta and column groups.

The web analysis profile utilizes these as well, and limits the traffic to service=80, which finds HTTP traffic on any port. This is because decoder parsing is port agnostic: it will identify HTTP whether it runs on a non-standard port or on tcp-80 and tcp-8080 (the primary and secondary ports for HTTP). This very specific indicator of abnormality can lead to other indicators of compromise and help find those malicious sessions that are more challenging to find directly because their payload is XOR encoded or obfuscated inside JavaScript inside a PDF file. A sample drill of this kind is sketched below.
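A minimal sketch of that kind of drill, assuming the default index and deployed Live content:

service = 80 && risk.warning exists

This reads as: sessions identified as HTTP by the parsers, on any port, that also carry a warning-level risk indicator.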

 

[Screenshot: web analysis meta group in the Navigate view]

 

This is an example of why the risk meta keys were chosen, and it is also why the client application meta key was included in this meta group: to find indicators of abnormality, such as "internet explorer" spelled out versus MSIE 10.0 or Trident/6.0 (some normal representations of the IE web browser in an HTTP header).

 

[Screenshot: client application meta key values]

 


Third and final example use case: querying for IP addresses.

The idea behind the meta groups is not only to use them to limit your view into the data but also to query against them. This allows an inexperienced analyst, or someone who has better things to do than learn our rule syntax, to execute a query against all relevant meta keys. The meta groups are exposed for this use in the investigation query pulldown menu, provided right before the list of all available meta keys, as shown in the figure below.

 

[Screenshot: meta groups in the investigation query pulldown menu]

 

Now, if I wanted to query for an IP address, it is much simpler to select Query IPs than to know all the different possible meta keys that could contain an IP address and search each one of them. All the possible meta key candidates for IP addresses are shown under Query IPs in the figure below.

 

[Screenshot: meta key candidates covered by Query IPs]

 

There are several groups listed as Query <a value>, and these have been modified to allow querying against them with all operators by default. What this means is that Query Files will not actually query for a file in the filename meta key, where it would obviously be provided. The reason is that, by default, the filename meta key is set to indexKeys in the index database (this can be modified in index-concentrator-custom.xml; see the sketch below), which limits a query to using only exists or !exists. Query Files does, however, search through some additional meta keys whose relation to files is not as obvious.

The concept here is to simplify querying without the user having to know all the meta keys, so you might ask: how could they know if the key index has to change? A new user would not know this, which is why we have these Query groups alongside the other groups like File Analysis, which includes filename because that group is more focused on limiting the user's view into the data than on providing an easy query capability. The filename key will still show up in that case, but in a closed state, because it is by default indexed at the key level, and showing all the values requires the index to be built for that key on the fly, which is obviously slower. I must also mention that the Query groups are not the best when it comes to performance either, since they effectively execute a single query against multiple meta keys.
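For completeness, raising filename to value-level indexing would be a one-line addition to index-concentrator-custom.xml; the valueMax below is an illustrative number, not a recommendation:

<key description="Filename" level="IndexValues" name="filename" format="Text" valueMax="250000" />

Keep in mind that value indexing a high-cardinality key like filename consumes index space, which is why it ships at the key level.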

 

Remember, these are all attempts to make it easier for an analyst who is first getting familiar with the product, and possibly with long-term use as well. If you have any suggestions or comments around these groups, please let me know.

 

Logistics:

 

Now, that was all fine and dandy, you say, but how do I get these groups into my SA system? Well, you're in luck, because they are attached to this blog, and there is a way to import these JSON files into SA in the areas where they apply. Start by importing the column groups in the investigation Events view area. Then, in the investigation Navigate area, import the meta groups followed by the profiles. If you do not follow this order, the imports will not work, because the profiles depend on the other groups to function. I have tested these (in general terms) in SA 10.4 and above without issue, aside from the aforementioned import order requirement and not being able to import if an existing group or profile has the same name.

 

Attachments:

CustomColumnGroups_ootb.jsn

MetaGroups_ootb_w_query.jsn

Profiles_ootb.jsn
