000036489 - Basic concepts relating to the RSA NetWitness Platform

Document created by RSA Customer Support Employee on Jun 28, 2018. Last modified by RSA Customer Support Employee on Jun 29, 2018.
Version 3

Article Content

Article Number: 000036489
Applies To: RSA Product Set: RSA NetWitness Logs & Network
RSA Product/Service Type: Decoder, Concentrator, Broker, Admin Server, Archiver, Event Stream Analysis (ESA), Malware Analysis, Warehouse, Warehouse Connector
RSA Version/Condition: 10.x, 11.x
Platform: CentOS
Tasks: This article presents basic concepts related to the RSA NetWitness Platform, specifically the RSA NetWitness Logs & Network product.
 

1.0 Mission



The mission of the RSA NetWitness Platform is to discover, investigate and remediate advanced security threats.

The RSA NetWitness Platform is more than a SIEM (Security Information and Event Management) solution, as SIEMs only correlate and connect network events. The RSA NetWitness Platform additionally includes capture-time processing and data enrichment, enabling analysts to carry out both real-time analysis and batch processing in response to security-related events and incidents.
 



2.0 Architecture



2.1 Core Components



The core components of the RSA NetWitness Platform are:



  • Decoder
  • Concentrator
  • Broker
  • Admin Server (also known as the Security Analytics Server or NetWitness Server/head unit)
 

Decoders (Log Decoders and Packet Decoders)



  • Decoders generate meta. There are two sources of meta: logs and packets.
  • Log Decoders collect raw logs, then use RSA log parsers and custom log parsers to generate meta.
  • Packet Decoders collect raw network packets for use in reconstructing network sessions. They use RSA packet parsers and custom packet parsers to generate meta (see the illustrative sketch after this list).
  • Storage on Decoders is short-term, using Direct-Attached Capacity (DAC) for logs, packets, meta, and indexes. (Long-term meta and raw log storage is provided by the Archiver.)
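
To make capture-time meta generation concrete, here is a minimal, hypothetical Python sketch (not RSA parser code; the meta key names are illustrative) of how a packet parser might derive meta such as service, action, and filename from the first line of an HTTP request payload:

    # Hypothetical illustration only -- not RSA NetWitness parser code.
    # A packet parser inspects raw payload bytes and emits meta key/value pairs.
    def parse_http_request(payload):
        meta = {}
        try:
            request_line = payload.split(b"\r\n", 1)[0].decode("ascii")
            method, path, _version = request_line.split(" ")
        except (UnicodeDecodeError, ValueError):
            return meta                      # not HTTP; other parsers may still match
        meta["service"] = 80                 # identified protocol: HTTP
        meta["action"] = method.lower()      # e.g. 'get', 'post'
        meta["filename"] = path.rsplit("/", 1)[-1] or "/"
        return meta

    print(parse_http_request(b"GET /downloads/update.exe HTTP/1.1\r\nHost: example.com\r\n"))
    # {'service': 80, 'action': 'get', 'filename': 'update.exe'}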
 

Concentrators (Log Concentrators and Packet Concentrators)



  • Concentrators aggregate meta from both Log and Packet Decoders, indexing it for reporting and alerting purposes and making it searchable in queries.
  • Storage on Concentrators is short-term, using Direct-Attached Capacity (DAC). Indexes are stored on SSD DACs, while meta is retained on traditional drives.
  • Older data is deleted on an ongoing basis to make room for newer data: First In, First Out (FIFO), as illustrated in the sketch after this list.
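
The sketch below is a minimal, hypothetical Python illustration (not NetWitness code) of this rolling FIFO behavior: a fixed-capacity store drops its oldest sessions as new ones arrive.

    from collections import deque

    # Hypothetical illustration only: a fixed-capacity, first-in-first-out store.
    # When capacity is reached, the oldest entry rolls out to make room for the
    # newest, mirroring how a Concentrator ages out its oldest meta and index data.
    class FifoStore:
        def __init__(self, capacity):
            self.capacity = capacity
            self.sessions = deque()

        def add(self, session_id):
            if len(self.sessions) >= self.capacity:
                dropped = self.sessions.popleft()   # oldest data is removed first
                print("rolled out session", dropped)
            self.sessions.append(session_id)

    store = FifoStore(capacity=3)
    for sid in range(1, 6):
        store.add(sid)
    print(list(store.sessions))   # [3, 4, 5] -- only the newest sessions remain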
 

Brokers


Brokers are used in analysis. They aggregate and combine meta from Archivers, Concentrators, and other Brokers, drawing upon data from both short-term and long-term storage and making it available to the Admin Server and the RSA NetWitness Platform UI.
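
As a hedged sketch of this aggregation idea (hypothetical Python, not NetWitness code; the service names are invented), a Broker can be thought of as fanning a query out to the services below it and merging the results into a single answer for the UI:

    # Hypothetical illustration only: a Broker stores no meta itself; it fans a
    # query out to downstream services (Concentrators, Archivers, other Brokers)
    # and merges their results for the Admin Server / UI.
    def query_service(sessions, condition):
        """Stand-in for querying one downstream Concentrator or Archiver."""
        return [s for s in sessions if condition(s)]

    def broker_query(downstream, condition):
        merged = []
        for sessions in downstream.values():
            merged.extend(query_service(sessions, condition))
        return sorted(merged, key=lambda s: s["time"])   # one unified, ordered answer

    downstream = {
        "log-concentrator":    [{"time": 1, "user": "jdoe"}, {"time": 4, "user": "asmith"}],
        "packet-concentrator": [{"time": 2, "user": "jdoe"}],
    }
    print(broker_query(downstream, lambda s: s["user"] == "jdoe"))
    # [{'time': 1, 'user': 'jdoe'}, {'time': 2, 'user': 'jdoe'}]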
 

Admin Server



The Admin Server, also commonly referred to as the Security Analytics (SA) Server or NetWitness head unit, is a web server that handles all user interaction with NetWitness.

Its functionality includes:


  • Administration
  • Investigation (queries)
  • Interaction with RSA Live for parsers
  • Incident Management
  • Malware Analysis (via the Investigation module, packets only)
  • Reporting Engine (reports and alerts, but for a single session only; for correlation of multiple sessions Event Stream Analysis [ESA] is required)
 

2.2 Extended Components



The extended components for the RSA NetWitness Platform include:


  • Malware Analysis
  • Event Stream Analysis (ESA)
  • Archiver
  • Data Warehouse (no longer sold)
  • Warehouse Connector
 

Malware Analysis



As part of the Investigation module, Malware Analysis allows RSA NetWitness users to investigate network sessions flagged by the Concentrator based on a risk score.  Malware Analysis is packets-only.
 

Event Stream Analysis (ESA)



Event Stream Analysis (ESA) correlates multiple events into a single alert based on a set of defined rules. It differs from the Reporting Engine in that it can correlate multiple sessions rather than just a single session. ESA draws upon meta from both Log and Packet Concentrators and feeds the resulting alerts to the Incident Management database on the Admin Server. ESA utilizes no additional storage.
 

Archiver



The Archiver indexes and compresses logs and log meta for long-term storage, providing extra capacity because log meta requires much more storage than packet meta. The Archiver is the source for investigations and Reporting Engine reports. It uses tiered storage, with the highest (default) tier holding the log data that is in active use as part of the business process and accessible to the Reporting Engine.

The Archiver storage tiers are defined as follows (a small illustrative sketch follows the list):


  • Hot Tier - As the Archiver’s default mode of storage, the Hot Tier stores readily accessible logs for reporting and other tasks. Hot storage is typically on Direct-Attached Capacity (DAC) or Storage Area Network (SAN) storage.
  • Warm Tier (Optional) - Logs stored in the Warm Tier are older than those in the Hot Tier but remain accessible for reporting and other tasks; data access is slower than in the Hot Tier. Warm storage is usually Network Attached Storage (NAS).
  • Cold Tier (Optional) - Logs in the Cold Tier are offline, so this tier is used for retaining data that is needed for regulatory or other long-term purposes. Data in the Cold Tier is no longer managed by the Archiver and must be restored to the Hot or Warm Tier if access is required.
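
Purely to illustrate the tiering concept, the hypothetical Python sketch below routes a log to a tier by age; the 30-day and 180-day thresholds are invented for this example and are not product defaults:

    from datetime import datetime, timedelta

    # Hypothetical illustration only: route a log to a storage tier by age.
    # The 30-day and 180-day thresholds are invented for this example; actual
    # retention is configured per deployment.
    def storage_tier(log_time, now):
        age = now - log_time
        if age <= timedelta(days=30):
            return "hot"      # online, fastest access (DAC or SAN)
        if age <= timedelta(days=180):
            return "warm"     # online but slower access (NAS)
        return "cold"         # offline; must be restored before it can be queried

    now = datetime(2018, 6, 28)
    print(storage_tier(datetime(2018, 6, 1), now))    # hot
    print(storage_tier(datetime(2018, 1, 1), now))    # warm
    print(storage_tier(datetime(2016, 1, 1), now))    # cold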
 

Data Warehouse (no longer sold by RSA)



The Data Warehouse is used for very long-term storage, allowing reporting that spans date ranges of months or even years. It stores data as Avro files. (Avro is a data serialization framework for Hadoop that stores data in a compact binary format.) For the Data Warehouse the Avro files are generated by the Warehouse Connector and consist of compressed and serialized raw logs, log meta, and packet meta.
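
As a hedged illustration of the Avro format itself (this uses the open-source fastavro Python library, which is not part of NetWitness, and a made-up record schema), writing compressed, serialized log records looks roughly like this:

    # Hypothetical illustration only: writing log records to a compact Avro file.
    # Uses the open-source fastavro library (pip install fastavro); the schema and
    # field names below are invented for this example.
    from fastavro import parse_schema, writer

    schema = parse_schema({
        "type": "record",
        "name": "LogEvent",
        "fields": [
            {"name": "time",    "type": "long"},
            {"name": "device",  "type": "string"},
            {"name": "raw_log", "type": "string"},
        ],
    })

    records = [
        {"time": 1530144000, "device": "firewall01", "raw_log": "deny tcp 10.1.2.3 -> 10.9.8.7"},
        {"time": 1530144060, "device": "winevt02",   "raw_log": "4625 An account failed to log on"},
    ]

    with open("logs.avro", "wb") as out:
        writer(out, schema, records, codec="deflate")   # compressed binary container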

One additional way that the Data Warehouse differs from the Archiver is that it has analytical capabilities, while the Archiver is for long-term storage only.  Big Data analysis is provided for the Data Warehouse by Pivotal or MapR.

NOTE: RSA no longer sells the Data Warehouse.


 

Warehouse Connector


The Warehouse Connector can be run as a service on a Log Decoder or as a virtual appliance. It takes aggregated data from Log and Packet Decoders and compresses and serializes it into Avro files that ultimately consist of raw logs, log meta, and packet meta.
 

3.0 Deployments


There are five basic ways to deploy the RSA NetWitness Platform in a business-security environment:


  1. Incident Detection and Compliance (logs only) – For compliance purposes, the Archiver is used to store logs and log meta. ESA is used for detection.
  2. Network Security Monitoring and Investigation (packets only) – Utilizes the reports and alerts from the Reporting Engine on the Admin Server. ESA can also be used for alerting purposes.
  3. Advanced Analysis (logs and packets) – Provides for research into long-term trends and patterns, including visualization and statistical analysis. Requires ESA and a Data Warehouse for long-term storage.
  4. Archiving and Advanced Analysis (logs and packets) – Adds the Archiver to the Advanced Analysis deployment.
  5. As part of a SIEM – NetWitness feeds packets to the SIEM. ESA is used for advanced analytics and alerts.
 

4.0 Meta Creation


Decoders and Concentrators create meta by the use of parsers, feeds, and application rules.
 

4.1 Parsers



  • Parsers act upon the raw data in Decoders.
  • Network parsers identify protocols, file types, and data from network sessions; together, this information allows network sessions to be reconstructed.
  • Log parsers map raw logs to an event taxonomy; for example, syslog information is parsed to obtain login/logoff usernames (a small illustrative sketch follows this list).
  • Custom parsers are written by customers for purposes specific to their business. For example, a custom network parser might look for specific tokens or the use of particular applications.
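
The following is a minimal, hypothetical Python sketch (not an RSA log parser; the meta key names are illustrative) of mapping a raw sshd syslog line onto normalized taxonomy meta:

    import re

    # Hypothetical illustration only: map a raw sshd syslog line onto normalized
    # taxonomy meta (activity, outcome, user), as a log parser would.
    PATTERN = re.compile(r"sshd\[\d+\]: (Accepted|Failed) password for (\S+) from (\S+)")

    def parse_sshd(raw_log):
        match = PATTERN.search(raw_log)
        if not match:
            return {}                        # not an sshd authentication message
        outcome, user, source_ip = match.groups()
        return {
            "activity": "logon",
            "ec.outcome": "success" if outcome == "Accepted" else "failure",
            "user": user,
            "ip.src": source_ip,
        }

    raw = "Jun 28 10:15:01 host1 sshd[2211]: Failed password for jdoe from 10.1.2.3 port 51122 ssh2"
    print(parse_sshd(raw))
    # {'activity': 'logon', 'ec.outcome': 'failure', 'user': 'jdoe', 'ip.src': '10.1.2.3'}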
 

4.2 Feeds


Feeds act upon existing meta to enrich it and create additional meta. There are several types of feeds, for example:

  • Threat intelligence feeds – subscriptions to RSA FirstWatch, SpyEye trackers, etc.
  • Identity feeds – mapping of Active Directory usernames to actions within the environment (see the illustrative sketch after this list).
  • Custom feeds – feeds that give context that is meaningful within a particular business unit, facility, etc.
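
As a hedged example of feed enrichment (the lookup data below is invented), an identity feed can be thought of as a lookup applied to meta that already exists in a session, producing additional meta:

    # Hypothetical illustration only: an identity feed maps a value already present
    # in the session meta (the username) to additional enrichment meta.
    IDENTITY_FEED = {
        "jdoe":   {"department": "finance", "title": "analyst"},
        "asmith": {"department": "it",      "title": "admin"},
    }

    def enrich(session_meta):
        extra = IDENTITY_FEED.get(session_meta.get("user"), {})
        return {**session_meta, **extra}   # enrichment adds meta; it never removes any

    session = {"user": "jdoe", "activity": "logon", "ec.outcome": "failure"}
    print(enrich(session))
    # {'user': 'jdoe', 'activity': 'logon', 'ec.outcome': 'failure',
    #  'department': 'finance', 'title': 'analyst'}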
 

4.3 Application Rules


Application rules dynamically generate new information based on existing meta. They are applied at the single-session level. They not only aid in creating custom alert meta values, but can also filter out traffic that does not add value when analyzing the data.

For example, a “Failed Logon” application rule would detect an activity such as:

activity=logon && ec.outcome=failure && user="jdoe"

When a rule evaluates as true, it can be used to generate an alert.

Application rules also simplify querying. For example, when looking for failed logins, instead of having to search on the entire string given above, the search can simply be for "Failed Logon".
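
A minimal sketch of the idea in hypothetical Python (this is not NetWitness rule syntax): evaluate a condition against a single session's meta and, on a match, write an alert meta value that can later be queried directly.

    # Hypothetical illustration only: an application rule is a per-session
    # condition that, when true, adds an alert meta value to that session.
    def failed_logon_rule(meta):
        if meta.get("activity") == "logon" and meta.get("ec.outcome") == "failure":
            meta.setdefault("alert", []).append("failed_logon")
        return meta

    session = {"activity": "logon", "ec.outcome": "failure", "user": "jdoe"}
    print(failed_logon_rule(session))
    # {'activity': 'logon', 'ec.outcome': 'failure', 'user': 'jdoe',
    #  'alert': ['failed_logon']}

    # Later, instead of repeating the full condition, an analyst can simply
    # query for sessions whose alert meta contains 'failed_logon'.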
 

4.4 Correlation Rules


In a manner similar to application rules, correlation rules also dynamically generate new information based on existing meta, but they do so across multiple sessions over a sliding time window. When a match is found, the service creates a new "super session" that identifies the other sessions that match the rule.
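
A hedged Python sketch of the sliding-window idea (not NetWitness rule syntax; the window size and threshold are invented): collect matching sessions inside a time window and, once a threshold is reached, group them into a single super session.

    # Hypothetical illustration only: correlate matching events inside a sliding
    # time window; when enough occur, emit one "super session" that points back
    # at the individual sessions involved.
    WINDOW_SECONDS = 300   # invented 5-minute window
    THRESHOLD = 3          # invented count threshold

    def correlate(events):
        super_sessions = []
        matches = sorted((e for e in events if e["alert"] == "failed_logon"),
                         key=lambda e: e["time"])
        start = 0
        for end, event in enumerate(matches):
            while event["time"] - matches[start]["time"] > WINDOW_SECONDS:
                start += 1
            window = matches[start:end + 1]
            if len(window) >= THRESHOLD:
                super_sessions.append({"rule": "repeated_failed_logon",
                                       "sessions": [e["id"] for e in window]})
                start = end + 1     # begin a fresh window once an alert fires
        return super_sessions

    events = [{"id": i, "time": 60 * i, "alert": "failed_logon"} for i in range(5)]
    print(correlate(events))
    # [{'rule': 'repeated_failed_logon', 'sessions': [0, 1, 2]}]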
 

    Figure 1: A basic RSA NetWitness deployment

