All Places > Products > RSA NetWitness Platform > Blog > Author: Davide Veneziano

In my previous post, Trend Analysis with the NetWitness Suite, I presented an approach to developing a baseline and performing a trend analysis with ESA. As mentioned many times, every threat is different, and detection techniques not only can but must vary to effectively protect the business of our organizations.


There are situations in which threat patterns can be identified simply by reporting on new values of a given meta key, without performing any complicated statistical analysis: for example, when a new browser or a TLD never seen before shows up in our environment.


The NetWitness reporting engine has a very handy function called show_whats_new() which does this job for you. However, if you want to leverage the power of ESA to achieve the same result, it is more challenging, since it requires working with large timeframes, which must be handled with care within ESA.


By using the same approach detailed in my previous post, the attached EPL can safely look at the last 30 days of every meta key you want to monitor and alert whenever a new value appears. Events are aggregated every minute, hour and day to limit the impact on ESA performance and to keep in memory only the information required to achieve the use case.


Multiple meta keys can be monitored by replicating and customizing the last statement.


From an implementation standpoint, the model creates a history of meta key/value pairs which is checked on a daily basis, alerting for each new value found. To set up a learning phase, the model also internally stores the current date, so that no alerts are raised until the warm-up period is over.
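The attached EPL is the actual implementation; purely as a language-neutral illustration of the same idea, here is a minimal Python sketch of a rolling value history with a warm-up period. All names and parameters here are hypothetical, not taken from the attached model:

```python
from datetime import date, timedelta

class NewValueDetector:
    """Illustrative sketch: keep a rolling per-meta-key history of values
    and alert on values not seen in the last `history_days`, but only
    after a warm-up (learning) period has elapsed."""

    def __init__(self, history_days=30, warmup_days=30):
        self.history_days = history_days
        self.warmup_days = warmup_days
        self.start_date = None           # set on first event, like the stored current date
        self.history = {}                # meta_key -> {value: last_seen_date}

    def observe(self, meta_key, value, today=None):
        """Record one value; return True if it should raise an alert."""
        today = today or date.today()
        if self.start_date is None:
            self.start_date = today
        seen = self.history.setdefault(meta_key, {})
        is_new = value not in seen
        seen[value] = today
        # Expire values not seen within the rolling history window.
        cutoff = today - timedelta(days=self.history_days)
        for stale in [v for v, d in seen.items() if d < cutoff]:
            del seen[stale]
        warming_up = (today - self.start_date).days < self.warmup_days
        return is_new and not warming_up
```

During the warm-up every value is simply learned; afterwards, only genuinely unseen values trigger an alert, mirroring what the model does with its stored date.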


Please note this is not RSA official/supported content so use it at your own risk!

The NetWitness Suite provides a number of tools out of the box to analyze your data. But there is a capability hidden under the hood which, if implemented correctly, may be precious for identifying additional suspicious patterns: the development of a baseline to perform a trend analysis.


This approach can help whenever a significant change in the rate of a given value could imply a security issue. Of course, not all threats can be identified this way!


To perform any statistical analysis, numbers are an obvious requirement, and these first have to be derived from the collected events. The attached (unofficial) model, inspired by the new 10.6 Event Source Automatic Monitoring functionality, offers a solid way to count the number of occurrences without buffering all the events in memory for a long timeframe.


For each value of a given meta key, the number of occurrences is counted every minute and then aggregated every five minutes, hour and day to minimize the impact on ESA performance. Then, for each hour (and for each day), a baseline is created.



If there is a significant deviation in the rate of any meta value, an alert is generated.

The duration of the learning phase, the magnitude of the deviation and the duration of the baseline are all configurable parameters.
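To make the idea concrete, here is a small Python sketch of such a baseline, assuming a simple mean/standard-deviation test on hourly counts. The attached EPL model's exact math may differ; the class name, parameters and threshold logic below are illustrative assumptions only:

```python
from collections import defaultdict, deque
from statistics import mean, stdev

class HourlyBaseline:
    """Illustrative sketch: per meta value, keep a rolling baseline of
    hourly counts and flag hours whose count deviates from the baseline
    mean by more than `threshold` standard deviations."""

    def __init__(self, baseline_hours=168, learning_hours=24, threshold=3.0):
        # One bounded deque of past hourly counts per meta value.
        self.baseline = defaultdict(lambda: deque(maxlen=baseline_hours))
        self.learning_hours = learning_hours
        self.threshold = threshold

    def close_hour(self, value, count):
        """Feed the count for one finished hour; return True on a deviation alert."""
        history = self.baseline[value]
        alert = False
        if len(history) >= max(self.learning_hours, 2):
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(count - mu) > self.threshold * sigma:
                alert = True
        history.append(count)        # the new hour becomes part of the baseline
        return alert
```

The three tunables map directly to the configurable parameters mentioned above: `learning_hours` (learning phase), `threshold` (size of the deviation) and `baseline_hours` (duration of the baseline).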


As an implementation best practice, do not use meta keys with too many unique values (e.g. ip.src), since they would generate too many false positives. Start by focusing on those with a few but significant unique values, such as:

  • Browsers - an uncommon client may be associated with malicious code
  • Country source/destination - can help identify attacks or potential data exfiltration
  • TLDs - uncommon TLDs can be an indicator of something strange happening


All the details regarding the model, how it works and how to implement it, can be found in the attached presentation, together with the full EPL code.


For a different but complementary approach, I'd suggest reading the excellent post by Nikolay Klender.


Please note this is not RSA official/supported content so use it at your own risk!

When working in a Security Operations Center, it is not uncommon to continuously adapt people, processes and technologies to objectives that evolve over time, because business requirements are dynamic and the threat landscape out there is never the same. Every environment is somehow unique, and each organization has peculiar needs which are eventually reflected in the way the SOC operates and achieves its goals.


While adapting to new situations is inherent to human nature, each piece of technology instead embeds a logic that is not always easy to subvert. This is why relying on products that allow a high degree of customization can become a key element in an organization's SOC strategy, leading to easier integration with the enterprise environment, increased quality of service and, eventually, a better return on investment.


Flexibility has always been a central element in Security Analytics, and easily adapting the platform to handle custom use cases is a key factor. But, you would say, let's prove it then!


During the last few weeks, I have posted a few articles here about customizing the platform, intended to demonstrate how to get more value out of it or how to achieve complex use cases.


In my first post, I shared some simple rules intended to promote a standard naming convention and approach to "tag" inbound/outbound connections, as well as to name our networks.

Understanding whether a connection is going into or out of our network is key to better focusing our investigations, running our reports and configuring our alerts. Tagging our networks is, on the other hand, relevant to better determine which service is impacted, evaluate the risk and prioritize our follow-up actions accordingly.
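The tagging rules themselves live in the platform, but the underlying logic is simple enough to sketch in a few lines of Python. The network ranges and labels below are hypothetical examples, not the actual rule set from that post:

```python
import ipaddress

# Hypothetical internal ranges; in practice these come from your own environment.
INTERNAL_NETS = [ipaddress.ip_network(n) for n in
                 ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_internal(ip):
    """True if the address falls inside one of our tagged networks."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL_NETS)

def tag_direction(src, dst):
    """Label a session based on where source and destination sit
    relative to our networks: lateral, outbound, inbound or external."""
    src_in, dst_in = is_internal(src), is_internal(dst)
    if src_in and dst_in:
        return "lateral"
    if src_in:
        return "outbound"
    if dst_in:
        return "inbound"
    return "external"
```

Once every session carries such a tag, reports and alerts can filter on it directly instead of repeating the address-range logic everywhere.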


In my second article, I focused on how to enhance the log parsing mechanism by leveraging the parsers commonly used to analyze a network stream, which are more flexible and powerful. I demonstrated a specific use case by providing a sample parser that generates a hash of the entire log message and stores it in a meta key. This is, for example, a common scenario when a compliance requirement mandates a per-event integrity check.
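The sample parser in that article does the work inside the platform; conceptually, the per-event integrity check boils down to something like the following Python sketch (the digest algorithm and the meta key name are illustrative choices, not necessarily what the parser uses):

```python
import hashlib

def log_hash(raw_log: str) -> str:
    """Compute a digest of the entire raw log message, to be stored as a
    meta value (e.g. in a hypothetical 'checksum' key) for later
    integrity verification. SHA-256 is used here for illustration."""
    return hashlib.sha256(raw_log.encode("utf-8")).hexdigest()
```

Recomputing the digest over the stored raw event and comparing it with the meta value is then enough to detect any tampering with the message.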


In my third post, I discussed a simple but interesting scenario. The Event Stream Analysis module, responsible in Security Analytics for correlating log and packet meta to identify potentially malicious or anomalous activities, is often the last link of the chain, transferring its output outside the platform (to the analyst, to a ticketing system, etc.). There are, however, many relevant use cases that can be accomplished by feeding this information back into Security Analytics: for example, providing additional context during an investigation by making available all the alerts triggered by a specific user/endpoint, or implementing a sort of multi-step correlation scenario. Sample parsers have been provided in this case as well.


In my last post, I wanted to recall a capability already mentioned a few times in the discussions here but never emphasized enough: leveraging parsers to post-process the meta values created by a log parser in order to generate new pieces of meta. A typical example is splitting a URL identified by the parsers into domain, TLD, directory, page and extension. Applying this logic in every log parser that generates URLs may be possible, but it does not scale very well. A single parser can instead do the job easily and effectively.
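The URL-splitting example can be sketched as follows. This is a naive illustration of the post-processing idea, not the actual parser: in particular, taking the last dot-separated label as the TLD is a simplification (real implementations may consult a public suffix list):

```python
from urllib.parse import urlsplit
import posixpath

def split_url(url):
    """Post-process a URL meta value into the pieces mentioned above:
    domain, TLD, directory, page and extension."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    # Naive TLD extraction: the last dot-separated label of the host.
    tld = host.rsplit(".", 1)[-1] if "." in host else ""
    directory, page = posixpath.split(parts.path)
    extension = posixpath.splitext(page)[1].lstrip(".")
    return {"domain": host, "tld": tld, "directory": directory,
            "page": page, "extension": extension}
```

Running this logic once, over whatever URL meta any parser has produced, is exactly why a single post-processing parser scales better than duplicating the split in every log parser.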


All of these examples are intended to prove how a technology, when designed to be flexible, can easily adapt to specific situations, thereby supporting the achievement of complex use cases or ad-hoc requirements.
