
RSA NetWitness Platform

11 Posts authored by: William Hart

RSA NetWitness Platform 11.5 has expanded support for Snort rules (also known as signatures) that can be imported into the network Decoders. Some of the newly supported rule parameters are:

  • nocase
  • byte-extract
  • byte-jump
  • threshold
  • depth
  • offset

This additional coverage enables administrators to use more of the commonly available detection rules that were not previously supported. The ability to use further Snort rules arms administrators with another mechanism, in addition to application rules and Lua parsers, to extend the detection of known threats.
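For illustration, a single hypothetical rule exercising several of the newly supported parameters could look like the following. The sid, ports, and content are made up for this post, not shipped content, so tune them to your environment:

```
alert tcp any any -> any 80 (msg:"HYPOTHETICAL suspicious admin URI"; content:"/admin"; nocase; offset:0; depth:64; threshold:type limit, track by_src, count 1, seconds 60; sid:1000001; rev:1;)
```

Here nocase makes the content match case-insensitive, offset and depth bound where in the payload the match may occur, and threshold limits how often the rule fires per source.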


To expand your knowledge on what is and is not supported, along with a much more detailed initial setup guide, check out Decoder Snort Detection.


Once configured, to investigate the threats that Snort rules have triggered, examine the Events view, pivoting on the metadata populated from the rules themselves, or query for threat.source = "snort rule" to find all Snort events. The Signature Identifier corresponds to the sid attribute in the Snort rule, while the Signature Name corresponds to the msg attribute in the rule options.
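For example, a query for all Snort-triggered events, and a narrowed pivot on a source address, might look like this (the IP value is just a placeholder):

```
threat.source = 'snort rule'
threat.source = 'snort rule' && ip.src = 10.1.1.1
```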

Snort rules found

As always, we welcome your feedback!


Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own.

As of RSA NetWitness 11.5, configuring what network traffic your Decoders collect and to what degree it should collect it has become much easier. Administrators can now define a collection policy containing rules for many network protocols and choose whether to collect only metadata, collect all data (metadata and packets), or drop all data.


NW 11.5 Selective Collection Policy Creation


This is made simpler by out-of-the-box (OOTB) policies that cover most typical situations. These can also be cloned and turned into a custom policy that fits your environment best. 


NW 11.5 Initial Selective Collection Policies


The policies are managed from a new central location that can publish them to multiple network Decoders at once. This allows an administrator to configure one collection policy for DMZ traffic and distribute it to all the DMZ Decoders, while simultaneously using a separate policy for egress traffic distributed to all the egress Decoders.


NW 11.5 Selective Collection Policy Status


An administrator can view which policies are published, the Decoders they have been applied to, when the last update was made and by whom. The policies can also be created in draft form (unpublished) and not distributed to Decoders until a maintenance window is available.


Initially this capability focuses on network collection, but the long-term plan is to continue adding types of configurations and content administered using this centralized management approach. Please reference the RSA NetWitness Platform 11.5 documentation for further details at Decoder: (Optional) Configure Selective Network Data Collection.


As always, we welcome your feedback!


Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own. 

RSA NetWitness 11.5 introduces the ability to interactively filter events using the metadata associated with all the events. This is seen as a new Filter button inside the Event screen that opens the Filter Events panel.


NW 11.5 Event Filter Button


This new capability functions in two modes.


NW 11.5 Event Filter Panel


The first presents a familiar search experience for analysts of all skill levels as many websites have a similar layout where filters (attributes or categories of the data) exist on the left side of the page and the matching results display on the right side. As an example in the below image, clicking the metadata (#1) in this integrated panel automatically builds the query (#2) and retrieves the resulting table (#3) of matching events.


NW 11.5 Event Filter Interactive Workflow


As analysts use this, it helps build the relationship between the metadata associated with the events and how to use those to structure a query.


NW 11.5 Full Screen Filter Events Panel


The second mode allows the panel to extend full screen, giving more real estate to show more metadata at once. This mode may seem very familiar to those who have used Navigate previously. As metadata values are clicked, they are added as filters to the query bar, and the filter list updates based on the events that remain. What it does not do is execute the query to retrieve the resulting table of events. This allows the analyst to hunt through the data and then, when ready to see the results, minimize the Filter Events panel (highlighted in the above image) to reveal the results.


In both modes, the meta values associated with the meta keys can be organized by event count or event size and sorted by count or value. This allows analysts to sort descending by event count to find outliers, such as a small number of communications. The meta keys can also be shown in smaller meta groups to help analysts focus on the most specific values for certain use cases. Analysts can use query profiles to execute a search with a predefined query, meta group, and column group, allowing them to jump right into a specific subset of data. The right-click actions that provide additional query and lookup options are also available. For a deeper dive into the capability, check out the Investigate documentation: Investigate: Drill into Metadata in the Events View (Beta)


As always, we welcome your feedback!


Please leave any feedback or suggestions on how to make this experience even better. To see what else may be in store for future releases, go to the RSA Ideas portal for the RSA NetWitness Platform to see enhancements that have been suggested, vote on them, and submit your own. 

The ability to capture network events while keeping only the header portion and truncating the payload has been available for quite some time. This has always been a great option when the lack of analytical value of the raw data (e.g. the session payload) does not justify the storage cost incurred to keep it. Typical examples include database transfers of backup files, or encrypted data that you are unable to decipher into clear text.


In RSA NetWitness Platform 11.1 we added some additional options to increase the flexibility of when the truncation is applied to an event.


  • The first new option allows for the headers along with any Secure Sockets Layer (SSL) certificate exchange to be captured prior to truncating the remaining portion of the payload. This allows for analysis like TLS certificate hashing and JA3 & JA3S fingerprints to be generated while still removing the remaining payload to save on storage space.
  • The second option allows for the administrator to choose a custom boundary, based on how many bytes into the event raw data, before truncating the payload. Any bytes prior to the boundary are saved as part of the event and anything after that boundary is not stored.


The administrative interface shown below is where an admin can modify the truncation options on application rules per network decoder.


Administration of network decoder application rule truncation options

Unfortunately, sensitive data can sometimes find its way where it is not wanted. It should not, but it happens. Perhaps your IT person decided connecting the high-side network to the low side was a good idea. Maybe someone accidentally uploaded the wrong PCAP (packet capture) to the system. However it happened, there are options to remove that data. If a large amount of data needs to be purged, you probably want to start with the storage component (e.g. SAN) to see what capabilities are available. In terms of RSA NetWitness Platform software, one option is to utilize the wipe utility that allows the administrator to strategically overwrite events.


  1. The first step is to find the data in question. This can be done via a query in the RSA NetWitness Investigate user interface, the REST API interface, or the NwConsole. If you use the first option, additional steps will be required later to clear the user interface cache on the admin server. This is an example of an event found using the Investigate user interface. The PCAP used in this example has one event and was tagged by name during import to make it easier to query.

  2. After you execute the query, make note of the session ID (sid) and remote ID (rid), which can be seen here using a custom column group. They are both in the above view as well, but you have to scroll down the list of meta to find the remote ID.

  3. Starting with the concentrator, use the wipe command against those session IDs to overwrite them with a pattern.
    • There are multiple options to the wipe command.
      • session - <uint64> The session id whose packets will be wiped
      • payloadOnly - <bool, optional> If true (default), will only overwrite the packet payload
      • pattern - <string, optional> The pattern to use, by default it uses all zeros
      • metaList - <string, optional> Comma separated list of meta to wipe, default (empty) is all meta
      • source - <string, optional, {enum-any:m|p}> The types of data to wipe, meta and/or packets, default is just packets
    • Note that if you use a string as your pattern, it will not overwrite any meta values that are not a string type. Therefore it is best to keep the pattern as a numerical value.
    • Initially go to the concentrator that was found to have those session IDs (sids) and use the wipe command to overwrite the session meta data on disk.
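As a rough sketch, wiping the meta for a single session from NwConsole could look like the following. The host, port, credentials, and session ID are placeholders, and the exact command node can differ between versions, so verify the syntax against the Data Privacy Management Guide before running it:

```
login concentrator.example.local:50005 admin
send /sdk wipe session=198712 pattern=0 source=m
```

The parameters map directly to the list above: pattern=0 overwrites with zeros, and source=m targets the meta only.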

  4. Rinse and repeat this on the upstream service (e.g. decoder, log decoder) in the path of the query. This time use the remote session IDs (rids) to overwrite the raw sessions on disk.

  5. To ensure that the indexed meta values that were stored on the Concentrator are removed, rebuild the index. This can take a long time but is necessary because the wipe command does not remove any data from the Concentrator index. Refer to the Core Database Tuning Guide for instructions.
  6. Now that you have overwritten the data on the decoder, where it was ingested, and the concentrator, where meta related to it was created, you're done, right? Well, it depends on how you discovered the data in the first place. If you know for sure no one found the data by way of the RSA NetWitness Platform user interface, you should be done. If the user interface was used, or you just want to be on the safe side, continue to the next step. Otherwise you might still see the raw event data being rendered from cache like below.

    • If Investigate > Event Analysis was used to find the data, the cache for the event reconstruction should be cleared by restarting the Investigate service.

    • If Investigate > Events was used to find the data, the event reconstruction cache should be cleared by removing the contents of the service folders on the admin server as shown below.

    • The cache for the concentrator and the decoder can also be cleared by executing the delCache command in Admin > Services > sdk > properties for each as shown below.

    • After clearing the cache, attempting to view the same session that was wiped will show the event is unavailable for viewing.


To gain further knowledge on protecting the data stored within your RSA NetWitness system take a look at the Data Privacy Management Guide.

Strides have been made in RSA NetWitness Platform v11.2 to provide administrators with alternatives to the standard proprietary NW database format. Now an admin can choose to have the raw packet database files written in PcapNg format, allowing them to be directly accessible by third-party tools like Wireshark.


To enable storing the raw packet data as PcapNg files, the setting packet.file.type in the network decoder database configuration node has to be changed from netwitness to pcapng. After making this change, a restart of the service is not required, unless you are too impatient to wait for the existing database file (default size is 4 GB) to roll over.
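For reference, the relevant node in the decoder's explore view looks roughly like this (an illustrative sketch of the path and value, not an exact screen capture):

```
/database/config
    packet.file.type = pcapng    (default: netwitness)
```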


PcapNg configuration


Once the change is applied, any new PCAPs uploaded or network traffic ingested into the decoder will be stored as PcapNg files. As the database files age, they are more readily available both on the decoder and when backed up off the system. In the below image you can see a mixture of the formats commingling in the packet database folder. The written database format can be changed between the two options without any loss of standard functionality.


pcapng files


There are some considerations before making the switch to the PcapNg format over the default nwpdb format. The PcapNg format requires approximately 5% more storage compared to the nwpdb format. The PcapNg format is not recommended when ingest rates are greater than 8 Gbps on a single decoder, as it can introduce approximately 5% packet drops compared to when nwpdb is in use. PcapNg files cannot be compressed while nwpdb files can, although raw network data typically does not compress well compared to raw logs. The PcapNg format is an open format while nwpdb is proprietary, so as accessibility improves, privacy concerns may arise when storing as PcapNg files. However, I am not suggesting security through obscurity is the right answer when measuring your GDPR compliance.


Hopefully this along with the already available SDK and APIs make NetWitness data more accessible.

William Hart

Email Parsing Options

Posted by William Hart, Jul 1, 2016

We have received several requests to make additional updates to some of the parsers we distribute in Live. One approach we have taken to make these optional changes available to customers is to allow an external configuration file to be read by the original parser. This provides some customization, depending on environment requirements, while eliminating the need for customers to write a parser themselves. These changes are optional because some may generate more meta (requiring further storage), may require additional parsing resources, or may not be appropriate for a given environment.


As an example, the default email parser, MAIL_Lua, reads email messages regardless of transport protocol (e.g. SMTP, IMAP, POP3) and registers all email addresses into the email meta key. In the options file, a user can instead choose to register the email sender to email.src and the recipient to email.dst. There are additional options available in the attached MAIL_lua_options.lua file that can be enabled or disabled as well.


To have these options take effect the following steps are required.


1) Upload the MAIL_lua_options.lua file into the /etc/netwitness/ng/parsers folder on the appropriate decoder where the MAIL_Lua parser is applied.


2) To enable the source and destination email change mentioned above, modify the word false to true on line 24. This is the last line before the end of the function named registerEmailSrcDst.
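The toggle being described might look something like the excerpt below. This is an illustrative reconstruction, not the exact contents of the attached file, so rely on the comments inside MAIL_lua_options.lua itself:

```lua
-- MAIL_lua_options.lua (illustrative excerpt)
function registerEmailSrcDst()
    -- Return true to register the sender to email.src and the
    -- recipients to email.dst instead of registering all to email.
    return false
end
```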


3) Validate that the default email source and destination meta keys are included in the appropriate concentrator default index file (e.g. index-concentrator.xml) located in /etc/netwitness/ng on the concentrator. This file can be viewed from the command line after logging in using secure shell, or in the user interface at Administration > Services > <concentrator name> > config > Files tab. The lines that should be there, and that will be populated by this change, are:


<key description="Source E-mail Address" level="IndexValues" name="email.src" format="Text" valueMax="2500000" />

<key description="Destination E-mail Address" level="IndexValues" name="email.dst" format="Text" valueMax="2500000" />


4) For this to take effect, a decoder service restart is required. This will cause a service interruption, so we recommend making this change during a maintenance window.


I welcome any suggestions on the importance of these types of options, for this scenario as well as others, and on whether it would be better to have these options available in the Security Analytics user interface.


Note: I am just a conduit for this information and have to give credit for the creation of these parser options to RSA Content Engineers.

Are you looking for a way to trigger those PCAP downloads so they automatically open in a third party tool? There is a way to do this in Security Analytics 10.4 and above. It does require enabling some settings that may not be enabled by default depending on which version of Security Analytics you are running.


To make your PCAP extractions more efficient do the following steps.


1) Make sure the Download Completed PCAPs setting is enabled. This is available in the Security Analytics interface through the Investigate > Navigate > Settings widget as shown below. The download will still be tracked in the download job queue on the SA server, but after completion it will be saved to your client machine in your browser's designated download folder.





2) Optionally setup file associations on your operating system so that files with a .pcap extension open in your tool of choice, say Wireshark for example.


Note: Although you can configure this method for downloading files other than PCAPs, I do not suggest it unless the system running your browser is a machine you are allowed to download and execute malware on. By that I mean the machine is on a segmented network, is a sandboxed virtual machine, or has endpoint software in place to limit the effect of malware. The reason I bring up this warning is that the files downloaded from Security Analytics are typically suspected of being malware or related to malware, and if they are automatically opened by their native application you could end up infecting your own system.

It is a surprise to me how many people do not know all the operators available to them in the query language for investigations. Hence it made sense to run through some of the lesser-known ones here.


To start, the group NOT statement effectively does the same thing as ! when negating an entire statement. For example, you can easily execute the query username != 'monkey', but !(username='monkey') does not work. Instead the proper syntax is ~(username='monkey') or alternatively NOT(username='monkey'). This works for all functions, such as NOT(username contains 'monkey') or ~(username ends 'monkey').


Another useful pair is <= and >=, which can be used on numerical values. For example, to find all sessions with TCP destination ports less than or equal to 1024, you can execute: tcp.dstport <= '1024'
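Putting the operators above together, some example queries (the values are placeholders):

```
~(username = 'monkey')
NOT(username contains 'monkey')
tcp.dstport <= '1024'
tcp.dstport >= '1024' && tcp.dstport <= '4096'
```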


These can be utilized in the where clause when executing a report, as well as in investigation queries.


Further details on these as well as other syntax specifics can be found in the online Security Analytics documentation:

Queries - RSA Security Analytics Documentation

William Hart

How to Filter Feeds

Posted by William Hart, Mar 31, 2016

Do you have a feed that is valuable but occasionally causes false positives? Then this post is for you! The capability exists to filter out specific values from feeds without modifying the original data set. Why would you want such a thing? In some cases the feed is generated by RSA or another entity in your organization, which limits your ability to manipulate it before it is ingested by the Security Analytics decoders. In general the intelligence makes the feed worth keeping, but there are some values that you wish could be ignored.


How do we achieve this? Simple, follow the steps below:


1) Determine the feed that is generating the errant value(s).


In Security Analytics investigation, focus on the threat source meta key to determine which feed(s) generated the alert or meta for the IP, domain, or other value you want to whitelist or filter out.



In the example above, at the time of this old sample data, the domain was listed as suspicious by the various threat sources highlighted. If it was decided that this determination was incorrect, a filter file with the hostname alias could be added for each feed listed in the threat source.


2) Create a filter file and add the errant value(s) to it.


To pick one as an example, the malwaredomainlist-domain.feed is generating meta indicating that the domain is malicious. In a filter file named malwaredomainlist-domain.filter, add the domain on a single line. If additional values generated by this feed are incorrect, you can add subsequent rows with those domains to the same file.
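The filter file itself is just a plain text list, one value per line. For example (these domains are placeholders, not real intelligence):

```
falsely-flagged.example.com
another-benign.example.net
```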


3) Deploy the filter file to the same directory on the decoder(s) where the feed has been applied.


To get the filter file into the appropriate location on the decoder (e.g. /etc/netwitness/ng/feeds), either secure copy the file to the system or use the feed upload option on the feed tab located on the decoder configuration page.



Unfortunately, that page does not allow you to view the filter files once they have been uploaded. You can, however, view the feed filter in the file tab located on the decoder configuration page.





4) Disable and re-enable the feed so the decoder realizes the filter file exists.


5) That is it. Enjoy!


Please let me know if there are any additional questions or improvement suggestions in this area.

There is no denying the power that Security Analytics (SA) brings to the table. However, knowing where to start, or providing the tools to get an analyst started down the path of finding the nasty bits, is an area we would like to improve. In SA 10.4, Incident Management definitely helps facilitate this once issues or combinations are known, but without that information how does an analyst know where to start hunting?


What follows is a basic primer on how to investigate certain situations in SA 10.4. The focus is on determining which meta values are of importance for each use case and the different ways each can be tackled using the attached profiles and groups.

First example use case: file analysis.

Most files have extensions, and those typically indicate what type of file it is. That is, if you believe everything you see, which from a security standpoint SA does not. SA looks at the extension (if there is one) and tracks it in the meta key called extension. However, to determine the actual file type, SA uses much deeper analysis. The main reason is that if a malicious actor is trying to get a piece of malware through the network and past security controls, one simple way is to modify the extension to make it seem like a less harmful file type. Malware in general is some form of an executable, meaning it would, if not trying to be inconspicuous, have an extension of EXE, DLL, etc. Of course there are many other ways to get around security defenses (like using JavaScript to manipulate a file after it has traversed the network) than just changing a file extension, but for now we will focus on this simple case.

If you want to see a file as it is represented by the user or application, you can examine the meta keys filename and extension, but if you really want to validate what type of file is being transmitted as that filename, then the filetype meta key should be used. There are a lot of complex parsers, which I will not go into here, that compare the officially documented structures of different file formats (portable executables, PDFs, Office docs, etc.) to what is actually in the file being transmitted. This additional parsing allows for finding such items as file magic numbers, whether objects are at the appropriate offsets, whether the file is encoded a certain way, or whether there is JavaScript embedded in your PDF file. Therefore, having these meta keys along with some pertinent contextual meta, like source and destination IP addresses or countries, will provide a good picture of what types of files are being transmitted through an environment.

To make this a little easier, you can either enable Malware Analysis (MA) in your Security Analytics deployment, since it does all this file analysis for you, or you can upload the attached column groups, meta groups, and profiles, which contain the file analysis templates. If you choose the latter, which I will add is still useful in file analysis cases even if you have MA deployed, here are some screenshots of what the outcome can look like.

The file analysis meta group shown here provides a view of the four main meta keys to focus on during file analysis: extension, filename, forensic fingerprint, and destination country. Of course these can be added to if you prefer additional information, like the source/destination IP addresses or other meta keys you think relevant. Below, I have enabled the file analysis meta group.



I have then drilled into the filetype (or forensic fingerprint) windows executable to see all the executables traversing the environment; since malware needs to execute, it is usually in some form of an executable, unless it is heavily obfuscated.





Below is the same example but using the file analysis column group in the Events view to compare the values against the Navigate view.





Second example use case: web analysis.

Most malicious traffic is either overt, hoping to blend in with the sea of network traffic, or covert, attempting to stay undetected by using techniques like encryption and obfuscation. In either case, for the most part there are rules of engagement for networking protocols just as there are for file formats. This handy trinket of information can be very useful to an analyst, and in general it is how a lot of our network parsers view the world. The RFCs for HTTP (RFC 2616, 7230-7235 - HTTP/1.1) dictate that a Host field is required in HTTP/1.1 (it was optional in HTTP/1.0), and if it is missing, or an IP address is present instead of an FQDN, then it can possibly lead to something interesting. That could be interesting in that the header was crafted by a lazy programmer at a commercial vendor, or by a malware author. If you have applied content from Live, there are parsers that will generate meta for this example, such as direct to ip http request, http direct to ip request, or http1.0 unsupported host header. These meta values appear in the risk informational, suspicious, or warning meta keys that are included in the web analysis meta and column groups. The web analysis profile utilizes these and also limits the traffic to service=80, which will find HTTP traffic on any port: the decoder parsing is port agnostic and will identify HTTP whether on a non-standard port or on tcp-80 and tcp-8080, the primary and secondary ports for HTTP. This very specific indicator of abnormality can lead to other indicators of compromise and help find those malicious sessions that are more challenging to find directly because their payload is XOR encoded or obfuscated inside JavaScript inside a PDF file.




This is an example of why the risk meta keys were chosen, just as the client application meta key was included in this meta group to find indicators of abnormality, such as "internet explorer" spelled out versus MSIE 10.0 or Trident/6.0 (some normal representations of the IE web browser in an HTTP header).




Third and final example use case: querying for IP addresses.

The idea behind the meta groups is not only to use them to limit your view into the data but also to use them to query against. This allows an inexperienced analyst, or someone who has better things to do than learn our rule syntax, to execute a query against all relevant meta keys. The meta groups are exposed for use in this way in the investigation query pulldown menu, provided right before the list of all available meta keys, as shown in the figure below.




Now if I wanted to query for an IP address, it is much simpler to select Query IPs than to know all the different possible meta keys that could contain an IP address and search each one of them. All the possible meta key candidates for IP addresses are shown as Query IPs in the below figure.




There are several groups listed as Query <a value>, and these have been modified to allow querying against their keys using all operators by default. Note that Query Files will not actually query for a file in the filename meta key, where it would obviously be found. The reason is that by default the filename meta key is set to indexKeys in the index database (this can be modified in index-concentrator-custom.xml), which limits a query to using only exists or !exists. Query Files does search through some additional meta keys whose relation to files is not as obvious. The concept here is to simplify the ability to query without the user knowing all the meta keys, so you might ask: how could they know the key index has to change? A new user would not know this, which is why we have these Query groups in addition to groups like File Analysis, which does contain filename, because that group is focused more on limiting the user's view into the data than on providing an easy query capability. The filename key will still show up in that case, but in a closed state, because it is by default indexed at the key level, and showing all the values requires the index to be built for that key on the fly, which is obviously slower. I must also mention that the Query groups are not the best when it comes to performance either, since they effectively execute a single query against multiple meta keys.
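As an aside, if you did want value-level queries against filename, the index level could be raised with an entry like the one below in index-concentrator-custom.xml. This is a sketch: the valueMax shown is an assumed value, and raising an index level increases index storage and load on the concentrator:

```
<key description="Filename" level="IndexValues" name="filename" format="Text" valueMax="1000000" />
```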


Remember these are all attempts to make it easier for an analyst when first getting familiar with the product and possibly with long-term use. If you have any suggestions or comments around these groups please let me know.




Now, that is all fine and dandy you say, but how do I get these groups into my SA system? Well, you're in luck, because they are attached to this blog, and there is a way to import/export these JSON files into SA in the areas where they apply. Start by importing the column groups in the investigation Events view area. Then, in the investigation Navigate area, import the meta groups followed by the profiles. If you do not follow this order the imports will not work, because the profiles are dependent on the other groups to function. These have been tested (in general terms) by me in SA 10.4 and above without issue, minus the aforementioned import order limitation and not being able to import if an existing group or profile has the same name.





