Applies To
RSA Product Set: RSA NetWitness Logs & Network
RSA Product/Service Type: Decoder, Concentrator, Broker, Admin Server, Archiver, Event Stream Analysis (ESA), Malware Analysis, Warehouse, Warehouse Connector
RSA Version/Condition: 10.x, 11.x
Tasks
This article presents basic concepts related to the RSA NetWitness Platform, specifically the RSA NetWitness Logs & Network product.
The mission of the RSA NetWitness Platform is to discover, investigate and remediate advanced security threats.
The core components of the RSA NetWitness Platform are:
Brokers are used in analysis. They aggregate and combine meta from Archivers, Concentrators, and other Brokers, drawing upon data from both short-term and long-term storage and making it available to the Admin Server and the RSA NetWitness Platform UI.
The Admin Server, also commonly referred to as the Security Analytics (SA) Server or the NetWitness head unit, is a web server that handles all user interaction with NetWitness.
Its functionality includes:
The extended components for the RSA NetWitness Platform include:
As part of the Investigation module, Malware Analysis allows RSA NetWitness users to investigate network sessions flagged by the Concentrator based on a risk score. Malware Analysis operates on packet data only.
Event Stream Analysis (ESA) correlates multiple events into a single alert based on a set of defined rules. It differs from the Reporting Engine in that it can correlate events across multiple sessions rather than within a single session. ESA draws upon meta from both Log and Packet Concentrators and feeds its alerts to the Incident Management database on the Admin Server. ESA requires no additional storage.
The Archiver indexes and compresses logs and log meta for long-term storage; it provides additional capacity because log meta requires far more storage than packet meta. The Archiver is the source for investigations and Reporting Engine reports. It uses tiered storage, with the highest (default) tier holding the log data in active use as part of the business process, accessible by the Reporting Engine.
The Archiver storage tiers are defined as follows:
The Data Warehouse is used for very long-term storage, allowing reporting that spans date ranges of months or even years. It stores data as Avro files. (Avro is a data serialization framework for Hadoop that stores data in a compact binary format.) For the Data Warehouse the Avro files are generated by the Warehouse Connector and consist of compressed and serialized raw logs, log meta, and packet meta.
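As an illustration, an Avro schema for one serialized log record might look like the following. The record and field names here are hypothetical; the actual schemas generated by the Warehouse Connector may differ:

```json
{
  "type": "record",
  "name": "LogRecord",
  "fields": [
    {"name": "time",    "type": "long"},
    {"name": "raw_log", "type": "string"},
    {"name": "meta",    "type": {"type": "map", "values": "string"}}
  ]
}
```

A record schema like this lets each row carry the raw log plus its extracted meta in one compact binary structure.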
One additional way that the Data Warehouse differs from the Archiver is that it has analytical capabilities, while the Archiver is for long-term storage only. Big Data analysis is provided for the Data Warehouse by Pivotal or MapR.
NOTE: RSA no longer sells the Data Warehouse.
The Warehouse Connector can run as a service on a Log Decoder or as a virtual appliance. It takes aggregated data from Log and Packet Decoders and compresses and serializes it into Avro files that ultimately consist of raw logs, log meta, and packet meta.
There are five basic ways to deploy the RSA NetWitness Platform in a business-security environment:
Decoders and Concentrators create meta using parsers, feeds, and application rules.
Feeds act upon existing meta to enrich it and create additional meta. There are different types of feeds. For example:
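One common pattern is a custom watchlist feed that matches an existing meta value (such as a source IP) against an indicator list and writes new meta (such as a threat category) on a match. The feed contents, meta keys, and key/value layout below are hypothetical; this is only a conceptual sketch of the enrichment step, not the Decoder's actual implementation:

```python
import csv
import io

# Hypothetical feed: each row maps an indicator (an IP here) to one or
# more meta key=value pairs to be written when the indicator matches.
# Real NetWitness feeds are CSV files paired with an XML feed definition.
FEED_CSV = """\
203.0.113.7,threat.source=blocklist-A,threat.category=botnet
198.51.100.22,threat.source=blocklist-B,threat.category=scanner
"""

def load_feed(feed_text):
    """Parse the feed into {indicator: [(meta_key, meta_value), ...]}."""
    feed = {}
    for row in csv.reader(io.StringIO(feed_text)):
        indicator, *pairs = row
        feed[indicator] = [tuple(p.split("=", 1)) for p in pairs]
    return feed

def enrich(session_meta, feed):
    """Return the new meta created by matching ip.src values against the feed."""
    new_meta = []
    for value in session_meta.get("ip.src", []):
        new_meta.extend(feed.get(value, []))
    return new_meta

feed = load_feed(FEED_CSV)
session = {"ip.src": ["203.0.113.7"], "ip.dst": ["10.0.0.5"]}
print(enrich(session, feed))
# [('threat.source', 'blocklist-A'), ('threat.category', 'botnet')]
```

The enrichment adds meta alongside what the parsers already produced, so analysts can query on `threat.category` instead of maintaining IP lists in every query.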
Application rules dynamically generate new meta based on existing meta and are applied at the single-session level. They not only help create custom alert meta values but can also filter out traffic that adds no value to analysis.
For example, a “Failed Logon” application rule would detect an activity such as:
When a rule evaluates to true, it can be used to generate an alert.
Application rules also simplify querying. For example, when looking for failed logons, instead of searching on the entire string given above, an analyst can simply search for “Failed Logon”.
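The logic of an application rule, evaluating a condition against a single session's meta and writing an alert meta value on a match, can be sketched in Python. The meta keys and rule name here are hypothetical, not NetWitness's actual rule syntax:

```python
# Hypothetical single-session application rule: if the session's meta
# matches the condition, return a new alert meta key/value pair.
def failed_logon_rule(session_meta):
    """Return alert meta to add, or None if the rule does not match."""
    activity = session_meta.get("ec.activity")
    outcome = session_meta.get("ec.outcome")
    if activity == "Logon" and outcome == "Failure":
        return ("alert", "failed_logon")
    return None

session = {"ec.activity": "Logon", "ec.outcome": "Failure", "user.dst": "admin"}
print(failed_logon_rule(session))  # ('alert', 'failed_logon')
```

Because the rule writes a single alert meta value at capture time, later queries need only match that value rather than re-evaluating the full condition.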
In a manner similar to application rules, correlation rules also dynamically generate new information based on existing meta, but they do so across multiple sessions over a sliding time window. When a match is found, the service creates a new “super session” that identifies the other sessions matching the rule.
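The sliding-window behavior can be sketched in Python: sessions that already matched the base condition are collected, and when enough of them fall inside the window, they are grouped into one “super session.” The window size and threshold are illustrative only:

```python
from collections import deque

WINDOW_SECONDS = 300   # illustrative sliding window (5 minutes)
THRESHOLD = 5          # illustrative number of matches required

def correlate(events, window=WINDOW_SECONDS, threshold=THRESHOLD):
    """events: iterable of (timestamp, session_id) pairs for sessions that
    already matched the base condition, in time order. Yields a 'super
    session' (a list of session ids) whenever `threshold` matches fall
    inside the sliding window."""
    recent = deque()
    for ts, session_id in events:
        recent.append((ts, session_id))
        # Drop matches that have aged out of the window.
        while recent and ts - recent[0][0] > window:
            recent.popleft()
        if len(recent) >= threshold:
            yield [sid for _, sid in recent]
            recent.clear()  # start a fresh window after firing

events = [(0, "s1"), (60, "s2"), (90, "s3"), (120, "s4"), (150, "s5"), (900, "s6")]
print(list(correlate(events)))  # [['s1', 's2', 's3', 's4', 's5']]
```

The lone match at t=900 never reaches the threshold, so no second super session is produced.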