Content Quick Start Guide

Document created by RSA Information Design and Development on Jul 14, 2017. Last modified by RSA Information Design and Development on Nov 15, 2018.
Version 83
 

This topic discusses configuration procedures for getting RSA NetWitness Platform set up initially in your environment.

Configuring Services

Throughout this document, you may come across the following:

  • Services that your installation does not have or use. For example, if you only use RSA NetWitness Platform to capture packet data, you may not have any Log Decoders. In this case, skip the sections that do not apply to you.
  • Services for which your installation has multiple instances. For example, you may have several Log Decoders. In this case, repeat the instructions so that you have set up each of your individual services.

If you have all of the following services, this is the preferred order for configuring your system:

  1. Decoder(s)
  2. Log Decoder(s)
  3. Concentrator(s)
  4. Broker(s)
  5. Reporting Engine
  6. ESA

Decoder

The Decoder service captures network data in packet form. RSA recommends that you begin setup with your Decoder.

  1. Assign a capture interface. For details, see Assign Capture Interface in the Appendix.
  2. Enable Capture Autostart. For details, see Capture Autostart in the Appendix.

For more details, see the "Configure Capture Settings" topic in the Decoder and Log Decoder Configuration Guide.

Log Decoder

The Log Decoder service captures log data as events. Setup for your Log Decoder is similar to setting up your Decoder:

  1. Assign a capture interface. For details, see Assign Capture Interface in the Appendix.
  2. Enable Capture Autostart. For details, see Capture Autostart in the Appendix.

For more details, see the "Configure Capture Settings" topic in the Decoder and Log Decoder Configuration Guide.

Concentrator

Concentrators aggregate data captured by Decoders and Log Decoders, which allows you to investigate, query, and alert on both log and packet metadata in real time. You need to add your Decoder and Log Decoder services to the Concentrator to begin the aggregation process.

Note: If you are capturing both log and packet data, RSA recommends that you dedicate a separate Concentrator service to each, and then use a Broker service to aggregate data across the two Concentrators.

  1. Depending on your version:

    • For NetWitness 11.x: Navigate to ADMIN > Services.
    • For Security Analytics 10.x: In the Security Analytics menu, select Administration > Services.
  2. In the Administration Services view, select a Concentrator, and select View > Config.

    Note: If you have both Decoder and Log Decoder services, you can add them in any order.

  3. To add a service, perform the following steps:

    1. In the Aggregate Services toolbar, click + (the add icon) to add a service.
    2. Select and add your service (for example, a Decoder).
    3. Enter the administrator credentials for the Decoder service.
  4. Repeat step 3 until you have added all of the Decoder and Log Decoder services from which you want to aggregate.

    Note: Optionally, you can configure your Concentrator to aggregate from both your Log Decoders and Decoders. For details, see the "Configure Aggregate Services" topic in the Broker and Concentrator Configuration Guide.

  5. In the Aggregation Configuration panel, under Aggregation Settings, select Aggregate Autostart.

    When the Concentrator starts up, it automatically begins aggregating data if Aggregate Autostart is enabled. You can always start and stop aggregation manually.

  6. Click Apply, then click Start Aggregation.
  7. The Aggregate Autostart takes effect on the next service restart. To restart the service:

    1. From the toolbar, open the View menu (currently labeled Config) and select System.
    2. From the toolbar, select Reboot.
    3. The system displays a message asking you to confirm the reboot. Click Yes; the service restarts.

For more details, see the "Broker and Concentrator Configuration" topic in the Broker and Concentrator Configuration Guide.

Broker

The Broker service aggregates metadata from configured Concentrators, which allows you to investigate and monitor data from multiple Concentrators. You need to add your Concentrator service to your Broker.

  1. Depending on your version:

    • For NetWitness 11.x: Navigate to ADMIN > Services.
    • For Security Analytics 10.x: In the Security Analytics menu, select Administration > Services.
  2. In the Administration Services view, select a Broker, and select View > Config.
  3. Click in the Aggregate Services toolbar to add a service.
  4. Select and add your Concentrator.
  5. Enter the administrator credentials for the Concentrator service.
  6. In the Aggregation Configuration panel, under Aggregation Settings, select Aggregate Autostart.

    Note: This option determines whether aggregation starts automatically each time the Broker is started. Checked means yes, unchecked means no.

  7. Click Apply, then click Start Aggregation.

    Note: Changes take effect immediately.

For more details, see the "Broker and Concentrator Configuration" topic in the Broker and Concentrator Configuration Guide.

Reporting Engine

A Reporting Engine runs reports and alerts based on the data drawn from a data source, so you must associate one or more data sources with a Reporting Engine. There are three types of data sources:

  • NWDB Data Sources—The NetWitness Database (NWDB) data sources are Decoders, Log Decoders, Brokers, Concentrators, Archiver, and Collection.
  • IPDB Data Sources—The Internet Protocol Database (IPDB) data source contains both normalized and raw event messages. It stores all collected messages in a file system organized by event source (device), IP address, and time (year/month/day) with index files to facilitate searches (report and queries).
  • Warehouse Data Sources—The Warehouse data sources are Pivotal and MapR.

To associate a data source with a Reporting Engine:

  1. Depending on your version:

    • For NetWitness 11.x: Navigate to ADMIN > Services.
    • For Security Analytics 10.x: In the Security Analytics menu, select Administration > Services.
  2. In the Administration Services view, select a Reporting Engine, and select View > Config.
  3. Select the Sources tab.
  4. Click + (the add icon) and select Available Services to display the list of available services.

    Note: The UI presents a list of all services that have already been configured and that can be used as a source for the Reporting Engine. This may include any of the following services (depending on your NetWitness installation): Archivers, Brokers, Concentrators, Log Decoders, Malware Analysis, Network Decoders, Incident Management, or IPDB Extractor.

  5. Select a Concentrator or Broker service and click OK.

  6. Enter the administrator credentials for the service and click OK.

For more details, see the "Configure Data Sources" topic in the Reporting Engine Configuration Guide.

Event Stream Analysis (ESA)

The RSA NetWitness Platform Event Stream Analysis (ESA) service provides advanced stream analytics such as correlation and complex event processing of disparate event data from Concentrators, Decoders, and Log Decoders, which results in incident detection and alerting.

To associate a data source with the ESA service:

  1. Depending on your version:

    • For NetWitness 11.x: Navigate to ADMIN > Services.
    • For Security Analytics 10.x: In the Security Analytics menu, select Administration > Services.
  2. In the Administration Services view, select an ESA service, and select View > Config.
  3. Select the Data Sources tab.
  4. Click + (the add icon) to display the list of available services.
  5. Select the Concentrator (or Broker, if it is being used) and click OK.

    Note: RSA recommends Concentrators as the data source for ESA. For more details, see the "Add a Data Source to an ESA Service" topic in the Event Stream Analysis Configuration Guide.

  6. Click the edit icon.

    The Edit Service dialog is displayed.

  7. Enter the administrator credentials for the service and click Save.
  8. Click Enable.
  9. Click Apply for your changes to take effect.

For more details, see the "Configure ESA" and "Add a Data Source to an ESA Service" topics in the Event Stream Analysis Configuration Guide.

Deploying Content

Content developed by the RSA team may be found in RSA Live within the RSA NetWitness Platform. See the Live Services Guide for deploying the content from Live. Content may also be created and deployed through a Professional Services engagement or directly by RSA customers.

The following table lists the types of content, and the guide and topic where you can find more information for that content type.

                                                
Resource Type | Guide | Topic
RSA Log Collector | Log Collector Configuration Guide | Configure Event Source Types
RSA Log Device (i.e., Log Parser) | Log Parser Tool User Guide | Parser Structure
RSA Lua Parser | Decoder and Log Decoder Configuration Guide | Use Custom Parsers
RSA Feeds | Decoder and Log Decoder Configuration Guide | Create and Deploy Custom Feed Using Wizard
RSA Application Rules | Decoder and Log Decoder Configuration Guide | Configure Decoder Rules
RSA Event Stream Analysis Rule | Alerting Using ESA Guide | Alerting: Add Rules to the Rule Library
RSA Security Analytics Reports, Charts, Alerts and Lists | Reporting Guide | Working with Reports in the Reporting Module

Developing Use Cases

RSA recommends that you begin with a conversation about your required use cases and the desired outcomes. Once you and your team have determined what you want to achieve, review the components of RSA NetWitness Platform that you have purchased and how they can help meet your use cases. Review the content in RSA Live to understand what each resource type does, how it is used, and what its output is. If existing content does not meet your use case requirements, you may need to develop custom content. For details, see Developing Content.

Inventory Customer System

RSA recommends that you begin with an inventory of the customer system. The following information may be helpful when addressing customer use cases:

  • What are their critical assets?
  • What parsing capabilities does the customer have configured? Are they a packet or log customer or do they have both?
  • What are their alerting capabilities? Will their resources and product licensing support reporting alerts and ESA?
  • What protocols are configured within the environment?
  • What event sources does the customer forward to RSA NetWitness Platform?
  • Do they have an Endpoint product that will forward logs to RSA NetWitness Platform?
  • What are their vulnerabilities and needs related to business-driven security?

Gather Use Cases

Once you have an inventory of the customer environment, gather the use cases the customer needs to address. Addressing the use cases may follow this general flow:

  1. Look at existing bundled content to determine if the content within it aligns with the use cases.
  2. Review Tags and Medium to search for other content.

    Note: Tags catalog existing content according to an incident response approach, for example attack phase or authentication. Medium categorizes content based on whether it applies to log or packet customers (or both).

  3. Read descriptions and requirements for the rules to determine if they match the environment, for example the Windows log being collected.
  4. Consider custom content if you do not find any existing content that matches the use cases.

For details of RSA Live Content, see RSA Live Content. For details on developing custom content, see Developing Content.

Example Use Case: Detection of Malware

Use Case: I want to detect malware within my network.

Environment: Packet and log customer. Full alerting capabilities including ESA. Log sources include an IDS, Firewall, Anti-virus and web logs.

Potential Implementation:

With a search of RSA Live Content for Bundle types, you can see that the Known Threats Pack and Hunting Pack exist and are tagged with a Medium of packet.

The customer environment supports this, and the pack descriptions match the use case. Additionally, the customer has ESA and can enable the Automated Threat Detection module (see the "Configure Automated Threat Detection" topic in the Alerting Using ESA guide) to be alerted to command-and-control traffic within their environment using their packet capture and web logs. This is a good start for content for the detection of malware. Between the parsers, application rules, feeds, and the command-and-control Automated Threat Detection module, this content provides coverage for multiple malware delivery and infection vectors, as well as beaconing and command-and-control behavior. This enables you to catch a potential infection, as well as existing incidents, before they worsen.

Since the bundle is geared mostly towards packet content, you search within RSA Live for a Medium of log, log and packet, and with a Category (Tag in Security Analytics 10.x) of malware.

This search returns ESA rules that the bundles did not deploy by default, such as Backdoor Activity Detected or Windows Worm Activity Detected Logs. Since the customer has ESA, you read the descriptions, determine that their environment and use case match these rules, and deploy them to the service.

Finally, you review the logs for events that may indicate malware signatures and decide you want them to appear in the malware report. At this point, you may decide to customize the log parser to generate the metadata that aligns with the Malware Activity report (that is, add static tags so that matching messages produce inv.category = 'threat' && inv.context = 'malware').

RSA Live Content

The following is a brief overview of the content types available within RSA NetWitness Platform and links to existing RSA Live content.

                                                     
Resource Type: Bundle
Supported Medium: Log, Packet, Log and Packet
Description: A container for a themed or related set of content. Each piece of content is specified as a dependency within the bundle.
List of supported content: Content Bundles or Packs.
Note: Content Bundles do not support subscription. You can view the list of content in each bundle in the documentation (RSA Link Content Space). You can then periodically redeploy these pieces of content.

Resource Type: RSA Log Collector
Supported Medium: Log
Description: Event sources are the assets on the network, such as servers, switches, routers, storage arrays, operating systems, and firewalls. In most cases, your Information Technology (IT) team configures event sources to send their logs to the Log Collector, and the Security Analytics administrator configures the Log Collector to poll event sources and retrieve their logs.
List of supported content:

Resource Type: RSA Log Device
Supported Medium: Log
Description: Defines how a NetWitness Log Decoder identifies, parses, and extracts information from the events of a specific event source.
List of supported content: https://community.rsa.com/community/products/netwitness/parser-network/event-sources

Resource Type: RSA Lua Parser
Supported Medium: Packet (with a few exceptions for Log)
Description: Packet parsers identify the application layer protocol of sessions seen by the Decoder, and extract metadata from the packet payloads of the session.
List of supported content: Packet Parsers

Resource Type: RSA Feeds
Supported Medium: Log, Packet
Description: A feed is a list of data that is compared to sessions as they are captured or processed. For each match, additional metadata is created. This data could identify and classify malicious IPs, or incorporate additional information such as department and location based on internal network assignments.
List of supported content: In Depth Feeds Information

Resource Type: RSA Application Rule
Supported Medium: Log, Packet
Description: Application layer rules are applied at the session level on a Log Decoder or a Decoder to output a single meta key and value once the rule logic has been matched.
List of supported content: RSA Application Rules

Resource Type: RSA Event Stream Analysis Rule
Supported Medium: Log, Packet, Log and Packet
Description: ESA's advanced Event Processing Language allows you to express filtering, aggregation, joins, pattern recognition, and correlation across multiple disparate event streams. Event Stream Analysis helps you perform powerful incident detection and alerting.
List of supported content: RSA ESA Rules

Resource Type: RSA Security Analytics Report
Supported Medium: Log, Packet, Log and Packet
Description: A container for RSA Security Analytics rules. A rule represents a unique query that detects and summarizes the requested information within a collection of network data.
List of supported content:

Using Bundles

Bundles are a grouping of related content around a theme that are easily deployed at one time instead of needing to individually select and deploy each piece of content within the set. For the list of available bundles and dependencies, see Content Bundles or Packs.

Note: Be sure to use the medium attribute assigned to the bundle to ensure it matches your RSA NetWitness Platform deployment: packet (Decoder), log (Log Decoder), or log and packet (Decoder + Log Decoder).

Filtering Content with Live Search

You can filter the content for deployment by Resource Type, Medium and Tags.

  • Resource Type is valuable for deploying only the content types supported by your environment. For example, if you are not using Event Stream Analysis (ESA), there is no reason to attempt to deploy that content type.
  • Medium is helpful to filter on just log (applied to content that uses meta derived from log data), packet (applied to content that uses meta derived from network packets) or log and packet (applied to content that correlates meta derived across log and packet data).
  • Depending on your version, you can filter by tag or category:

    • In Security Analytics 10.x, you can select Tags to further narrow results if one or more align with your use case topics. For details, see Live Content Search Tags.
    • In NetWitness 11.x, you can use Categories to narrow results. Categories offer a richer set of items than was available as Tags. This is a hierarchical model, four levels deep. Each category catalogs content with an Incident Response service-based approach. For details, see the NetWitness Investigation Model.

For more information, see the topic "Search Criteria Panel" in the Live Services Guide.

Removing Discontinued Content

Content that has become irrelevant or outdated is discontinued. For a list of discontinued content, see Discontinued Content. To remove the discontinued content from your system, use the Discontinued Resources tab in the Live > Configure view. See the "Live: Discontinued Resources Tab" topic in the Live Services Guide for more details.

Note: This workflow applies to both Logs and Packet content data.

The workflow for removing discontinued content from your system is as follows:

  1. RSA FirstWatch/Research is the group that monitors current threats. On an ongoing basis, they identify content that is no longer relevant or useful.
  2. RSA developers mark the content as discontinued in RSA Live. Users can view discontinued content or hide it from view in Live.
  3. Discontinued content is added to the Discontinued Content list in the documentation.
  4. Users should disable Discontinued Parsers (see below).

Network Decoder Deployments

The content deprecation described in this section applies only to customers with Network Decoders deployed.

Disable Discontinued System Parsers

There are some parsers that have become obsolete over time. Many of these will be removed in future versions of RSA NetWitness Platform. Depending on your version, you may still see these parsers.

To remove out-of-date parsers:

  1. Depending on your version:

    • For NetWitness 11.x: Navigate to ADMIN > Services.
    • For Security Analytics 10.x: In the Security Analytics menu, select Administration > Services.
  2. In the Admin Services view, select a Decoder service, and select View > Config.

    The General tab of the Decoder service view is displayed.

  3. In the Parsers Configuration section, disable the following parsers:

    • AIM
    • LotusNotes
    • MSN
    • Net2Phone
    • SAMETIME
    • WEBMAIL
    • YCHAT
    • YMSG
    1. If the parser is enabled, click Enabled in the Config Value column for the parser.
    2. From the pull-down menu, select Disabled.
    3. Repeat for each of the obsolete parsers.
  4. Click Apply for your changes to take effect.

Disable System Parsers for Lua Parser Equivalent

In general, the system parsers have a Lua parser equivalent that is more comprehensive in terms of metadata output. If deploying a Lua parser, RSA recommends that you disable the corresponding system parser. For details on this mapping, see Mapping of System to Lua Parsers.

To disable system parsers:

  1. Depending on your version:

    • For NetWitness 11.x: Navigate to ADMIN > Services.
    • For Security Analytics 10.x: In the Security Analytics menu, select Administration > Services.
  2. In the Administration Services view, select a Decoder service, and select View > Config.

    The General tab is displayed, and the system parsers are listed in the Parsers Configuration section of the screen.

  3. To disable a parser, select Enabled in the Config Value column for the parser, and select Disabled from the drop-down menu.

Caution: Do not disable the following system parsers. The Decoder service uses them as described below:

  • ALERTS: This parser enables or disables the application rules. If you disable it entirely, the rules are not evaluated at all. If you disable individual keys, the rules are evaluated, but the keys are not registered (meaning these keys are not indexed, and thus are not visible in investigations).
  • FeedParser: This parser enables or disables the feeds. If you disable it entirely, feeds are not evaluated at all. If you disable a key, feeds are evaluated, but meta going to that key from a feed is not registered (meaning the key is not indexed, and thus is not visible in investigations).
  • GeoIP: Generates geographic data based on source and destination information (ip.src, ip.dst, country.src, country.dst, city.src, city.dst) that may be helpful during investigations and when writing content for alerting.
  • NETWORK: The Network Layer parser is required to extract basic information about the session such as the service, IPs, ports and payload.

Delete Flex Parsers

Flex parsers have been discontinued in favor of Lua parsers for meta extraction from packets. Identify any Flex parsers you are currently using. After you have identified equivalent Lua parsers and deployed them through Live, delete any parsers that have the .flex extension. For mappings, see Mapping of Flex to Lua Parsers.

To remove Flex parsers:

  1. Depending on your version:

    • For NetWitness 11.x: Navigate to ADMIN > Services.
    • For Security Analytics 10.x: In the Security Analytics menu, select Administration > Services.
  2. In the Administration Services view, select a Decoder service, and select View > Config.
  3. Click Parsers to display the list of deployed Flex parsers.

  4. To delete parsers, select their checkboxes and click the delete icon.

    A confirmation message is displayed.

  5. You can select No to cancel the deletion or select Yes to delete the selected parsers.

Developing Content

When writing content, you want to get the most accurate results with the best performance. Accuracy can be achieved by adhering to the data model and careful query construction. Before you begin developing content, however, it is important to understand which type of content fits your use case.

This flowchart may help you determine the appropriate content type.

If you do not have metadata available to the RSA NetWitness Platform, you can deploy either a log parser or a packet Lua parser, depending on the source of the data. Use feeds and application rules to examine the data parsed from the event sources and to generate additional metadata when conditions are met. You can use this additional metadata during the investigation process, for alerting, or for integration with incident management tools.

Alerting may be achieved through the creation of a report alert or an ESA rule. If metadata exists and you want to be notified when a single event occurs, you should create a reporting engine alert. If you want to be notified about multiple different events across disparate sources within a timeframe, you should use ESA.

If you do not need notification when an event occurs, you can simply create a report. If you want a report that has more enhanced aggregation criteria, create a report against the warehouse using the criteria listed on the screen. Also, if you want to use a timeframe, you can create a report based on the last day or week against warehouse data, last month up to a year on Archiver data and the last hour up to a day using Concentrator or Broker (NWDB) data.

Understanding the Data Model

Metadata standardization makes it easier to develop content across disparate sources such as logs, packets and the endpoint. RSA recommends that you keep within this model as much as possible when you create custom content, so the existing rules may apply to it as well. As a best practice, document any custom meta keys and possible values, if applicable, with clear definitions for future content developers and researchers.

The Unified Data Model is described in detail on RSA Link here: Unified Data Model for the RSA NetWitness® Platform.

Updating the Data Model

If RSA Live or customer-developed content uses nonstandard meta keys, configuration files for the Log Decoder and Concentrator may need to be updated. (The Decoder does not need manual updates to the meta model, as new meta keys are automatically stored.) See Customize the Meta Framework for information about updating the meta model within the product.

The data model is the set of meta keys, aliases and data types that are used across the different content types. The individual services control and manage this data model differently:

  • With Network Decoders, any new meta keys generated by the packet parsers or feeds are dynamically added to the data model. The only time the data model needs to be updated is if aliases are needed for a particular key.
  • With Log Decoders, you need to explicitly set the meta keys to store for use with content analysis.
  • Concentrator services need to be explicitly configured with the keys to store in the index and make available on the Investigation page for an analyst.
  • Brokers automatically sync with the Concentrator’s index file and do not need to be manually adjusted.
  • ESA updates its data model dynamically based on a poll of the Concentrator(s) set as its data source.
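As an illustration of the two explicit steps above, the fragments below sketch how a hypothetical custom key might be declared. The file names (table-map-custom.xml on the Log Decoder, index-concentrator-custom.xml on the Concentrator) are the custom files discussed in this section, but the key name cs.custom, its description, and its index level are placeholder values; check the stock table-map.xml and index-concentrator.xml on your own services for the exact attribute set in your version.

```xml
<!-- table-map-custom.xml (Log Decoder): map a parsed value to the
     hypothetical meta key "cs.custom" so it is stored with the event. -->
<mapping envisionName="cs_custom" nwName="cs.custom" flags="None" format="Text"/>

<!-- index-concentrator-custom.xml (Concentrator): index the same key so
     analysts can query and pivot on it in Investigation. -->
<key description="Custom Example Key" level="IndexValues" name="cs.custom"
     format="Text" valueMax="10000"/>
```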

To see the current meta model for Packets, you can view the index-decoder.xml file within the product. From the NetWitness UI, navigate to Administration > Services > Decoder service > File, and select the file from the drop-down menu.

Note: You no longer need to manually update this file. The Decoder service automatically updates its language as parsers and feeds are uploaded to the system. The only time this file should be modified is if you need to add <aliases/> for a particular key. Add any customer-specific aliases to the index-decoder-custom.xml file.
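As a sketch of what such an addition might look like, the fragment below shows a key entry carrying an <aliases/> block in index-decoder-custom.xml. The attribute names and values here are illustrative only; copy the exact structure from the <aliases/> entries already present in your version's index-decoder.xml.

```xml
<!-- Hypothetical example: display a friendly label for a raw numeric value. -->
<key description="Medium" format="UInt8" level="IndexNone" name="medium">
  <aliases>
    <!-- show "Ethernet" in place of the raw value 1 -->
    <alias name="1" value="Ethernet"/>
  </aliases>
</key>
```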

To see the current meta model for Logs, you can view the table-map.xml file within the product. From the NetWitness UI, navigate to Administration > Services > Log Decoder service > File, and select the file from the drop-down menu.

To see the current meta model for the Concentrator, you can view the index-concentrator.xml file within the product. From the NetWitness UI, navigate to Administration > Services > Concentrator service > File, and select the file from the drop-down menu.

Investigation and Hunting Meta Keys

For the Lua parser and application rule content types, additional metadata is generated using the Investigation Feed to describe the logic being detected. This additional metadata assists analysts during the investigation process and helps with accuracy of content development across disparate data sources.

The Investigation meta keys allow an analyst to quickly drill down to a filtered set of events based on the NetWitness Investigation Model. There are two keys:

  • The Investigation Category key (inv.category) pinpoints the purpose of a log's or session's escalation. These investigation categories help dictate one’s analysis approach. There are four Investigation Categories:

    • Threat
    • Identity
    • Assurance
    • Operations
  • The Investigation Context key (inv.context) expands on the aforementioned category key, but also describes the literal intent or functional objective of the resource itself. This key may contain any of the subcategories listed within the Investigation Model except the highest level reserved for use in the Investigation Category key.

The Hunting meta keys allow an analyst to see suspicious or anomalous events through the population of six meta keys. See the Hunting Guide’s Appendix: Static Meta Values for a list of keys and values populated by parsers through deployment of the Hunting Pack bundle.

                                                   

Display Name: Session Analysis
Meta Key: analysis.session
Format: Text
Description: Client-server communication summations, deviations, conduct, and session attributes.

Display Name: Service Analysis
Meta Key: analysis.service
Format: Text
Description: Core application protocol identification. An underlying powerhouse of service-based inspection.

Display Name: File Analysis
Meta Key: analysis.file
Format: Text
Description: A large inspection library that highlights file characteristics and anomalies.

Display Name: Indicators of Compromise
Meta Key: ioc
Format: Text
Description: Datatypes used in threat indicator portals, or known signature-type resources, should be pushed here. Anything worthy of analysis which denotes high confidence.

Display Name: Behaviors of Compromise
Meta Key: boc
Format: Text
Description: Tactics or techniques employed by malware and/or adversaries. Sometimes this observed behavior could be an anomaly, just poorly written code, or simply administrator activity. It should be used only when there is no datatype indicator present, but it signifies potential cause for concern if high-value hosts or parties are involved.

Display Name: Enablers of Compromise
Meta Key: eoc
Format: Text
Description: This key should be reserved for activities or policies that may contribute to an incident, such as servers running default credentials or shared access among admins.

Writing Log Collectors

See the following topics for creating custom definition files in your RSA NetWitness Platform environment:

Writing Log Parsers

A standalone graphical tool, the Event Source Integrator (ESI) tool, has been developed to create or update log parsers. The blog post RSA NetWitness ESI 1.0 Beta 3 explains how to download the tool and provides videos on using it, detailed usage information, and release notes. Look for other recent blog posts about the tool and its updates in the RSA NetWitness Platform blog space.

Writing Lua Parsers

To write your own custom parsers, see the Parsers Book, on RSA Link. The book gives details on the development and debugging of parsers based on the Lua programming language. There are also sample parsers available for review.
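The minimal sketch below shows the general shape such a parser takes; it is not runnable on its own, since it depends on the Decoder's embedded Lua runtime (the nw and nwlanguagekey modules), and the token and meta value are made up for illustration. The Parsers Book remains the authoritative reference for the API.

```lua
-- Sketch of a token-based parser: registers the hypothetical token
-- "EXAMPLE-TOKEN" and writes a static value to the alert meta key
-- whenever the token is seen in a session payload.
local exampleParser = nw.createParser("example_token",
    "Flags sessions containing EXAMPLE-TOKEN")

-- Declare the meta keys this parser may write
exampleParser:setKeys({
    nwlanguagekey.create("alert")
})

-- Callback invoked when the token matches in the session payload
function exampleParser:onToken(token, first, last)
    nw.createMeta(self.keys["alert"], "example-token-seen")
end

-- Map the token to its callback
exampleParser:setCallbacks({
    ["EXAMPLE-TOKEN"] = exampleParser.onToken
})
```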

Creating Feeds

See the topic "Manage Custom Feeds" in the Live Services Guide, which explains how to manage and create a custom feed.
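For a sense of what the wizard consumes, a custom feed is typically backed by a CSV file: the index column is matched against a meta key you choose in the wizard (such as ip.src or ip.dst), and the remaining columns are written to the meta keys you map there. All values below are made up for illustration.

```
# hypothetical feed data: indicator IP, description, threat category
192.0.2.10,known scanner,recon
192.0.2.99,botnet controller,c2
```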

Writing Application Rules

The Decoder and Log Decoder Configuration Guide contains details about how to write application rules:

  • See the topic "Configure Application Rules" for samples of rule creation.
  • To see the accepted syntax and performance expectation of the rules, see the topic "Configure Decoder Rules." The Capture Rule Syntax section of this topic describes the rule syntax.

For syntax and examples for application rules, see Application Rules Cheat Sheet.
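To illustrate the pieces those topics cover, an application rule combines a rule name, a condition written in the capture rule syntax, and a meta key to alert on when the condition matches. The rule below is a made-up example, not RSA Live content:

```
Rule Name: example_exe_over_http
Condition: service = 80 && extension = 'exe'
Alert on:  alert (writes the rule name as alert meta when a session matches)
```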

Writing Report Rules

Report rules contain the conditions that return the desired result set within a report. One or more report rules must be assigned to a report for it to be scheduled to run. The Reporting Guide explains how to configure a report with rules. For best practices on report rule performance, see the topic "Reporting Guidelines" in the Reporting Guide.

Report Performance

This section provides some guidelines concerning reporting performance. For more details on report performance considerations, see the "Core DB: Optimization Techniques" topic in the Core Database Tuning Guide.

LookupAndAdd Rule

The LookupAndAdd Rule action is very performance intensive and should be used only when another solution does not meet the use case. Instead of using this function, select the Custom report type and select up to 6 meta keys within the Select statement.

Choosing Meta Keys

Avoid using meta keys that have many unique values in the select statement as it could affect performance of the report rule. For example, an event description for log messages may be different for each log message within each parser.

Query Operator Cost

Keep usage of operators such as regex and contains to a minimum, and use these operators only when required. Commonly used regex patterns can instead be created as application rules on the respective Log Decoders.

Writing ESA Rules

Inefficiently written EPL rules can have a detrimental impact on how the ESA appliance functions; therefore, it is important to write efficient EPL rules.

See the following topics in the Alerting Using ESA guide for details:

  • See "Alerting: Best Practices" for best practices while working with ESA rules.
  • See "Alerting: ESA Rule Types" for details on the available rule types.

When working with RSA Live ESA or the ESA Rule Builder, you should not need to know the EPL syntax used within the rules. However, if your use case exceeds the capabilities of either of these, you should become familiar with at least the basics of the EsperTech EPL language used with ESA.

Event Processing Language (EPL) Syntax

This section describes some aspects of how RSA NetWitness Platform uses Esper. For a more complete guide, download EPL Essentials from RSA Link here: https://community.rsa.com/docs/DOC-59978.

The Basics

Esper is what allows RSA NetWitness Platform to perform advanced correlation of metadata. Said metadata is consumed by the ESA service from one or more Concentrator services. This data is all fed through a stream, which is just a sequence of events. Within NetWitness, this stream is called the Event stream. EPL looks similar to SQL.

This is an example of an EPL rule that would alert on everything:

SELECT * FROM Event;

You can see that we are selecting everything (*) from the stream mentioned earlier, Event.

We can also specify to filter for specific Meta from our stream:

SELECT * FROM Event(user_dst = 'Lee')

Or multiple pieces of Meta:

SELECT * FROM Event(user_dst = 'Lee' AND event_cat_name = 'User.Activity.Failed Logins');

Esper also supports aggregation, which lets NetWitness alert on patterns across multiple events. For example, to alert when more than four matching events are seen for the same user:

SELECT * FROM Event(user_dst = 'Lee' AND event_cat_name = 'User.Activity.Failed Logins') GROUP BY user_dst HAVING COUNT(*) > 4

We can also add a time window (events must occur within a specified time window):

SELECT * FROM Event(user_dst = 'Lee' AND event_cat_name = 'User.Activity.Failed Logins').win:time(5 min) GROUP BY user_dst HAVING COUNT(*) > 4

Note: The time window we specified above is based upon when the Esper engine sees the events, rather than the time within the event itself. There are a variety of these data window types available to use.

Case Sensitivity

EPL is case sensitive and matches exactly how the metadata is written. Metadata in the Investigation view is displayed in all lowercase, so you need to navigate to the Events view to see the true case of the metadata in question. If the case of the metadata is unknown or inconsistent, EPL provides the following functions to manage case:

  • .toLowerCase()
  • .equalsIgnoreCase()

Examples:

SELECT * FROM Event(event_cat_name.equalsIgnoreCase('user.activity.failed logins'))

SELECT * FROM Event(event_cat_name.toLowerCase() = 'user.activity.failed logins')

Rule Order

EPL rules are loaded into the Esper engine based on the time that they have been deployed—first deployed means first loaded into the Esper engine.

There are some scenarios which are based on multiple rules, and it is important to define the loading order if there are dependencies between them. You can use the EPL statement “uses <module_name>” to force the pre-loading of the required rules. For example:

Rule 1

uses createcontext;
Look for logins during non-working hours

Rule 2

module createcontext;
Create context workinghours

Alerting

Alerts are not generated by default within the ESA appliance. To generate alerts, you must add the RSA-specific annotation, @RSAAlert, before the EPL statements upon which you want to alert. For example:

@RSAAlert

SELECT * FROM Event(user_dst IS NOT NULL)

Boundaries

All EPL rules that contain windows or grouping should be bounded, either by a time window or event count (unless they are only matching on a single event). Boundaries ensure that the EPL rules do not consume more and more memory over time and clean up old data that is no longer required. This can be achieved by using EPL views:

.win:time(30 min)

.win:time_length_batch(30 min, 10)

Grouped Windows

The @Hint("reclaim_group_aged=age_in_seconds") hint instructs the engine to discard aggregation state that has not been updated for age_in_seconds seconds. The age_in_seconds value should match the time window added to the statement.
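For illustration, the following is a sketch of the hint applied to a grouped 30-minute window. The medium = 1 filter and the count threshold are hypothetical; the reclaim age of 1800 seconds matches the 30-minute window:

```
// Discard per-group aggregation state not updated for 1800 seconds (= the 30 min window)
@Hint("reclaim_group_aged=1800")
SELECT ip_src, count(*) FROM Event(medium = 1).win:time(30 min)
GROUP BY ip_src HAVING count(*) > 10;
```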

Pattern Matching

When using pattern matching, a new thread of evaluation is created for every 'a' event in the statement below. This means that multiple 'a' events can match the same 'b' event, which could result in an unexpected and undesirable number of alerts for the same user during the time window. RSA recommends that you use the hint @SuppressOverlappingMatches with the PATTERN syntax.

       

SELECT * FROM PATTERN [

   every a = Event(device_class='Web Logs'
   AND host_dst = 'icanhazip.com')

   -> b = Event(category LIKE '%Botnet%' AND device_class='Web Logs'
   AND user_dst=a.user_dst)
   where timer:within(300 seconds)
]

Arrays

The following examples detail some of the common ways of working with arrays within EPL.

To check whether all of the array variables equal 'value':

'deny' = ALL( action )

To check that none of the array variables equals 'value':

'deny' != ALL( action )

To compare multiple values against array variables, ignoring case:

SELECT * FROM Event(isOneOfIgnoreCase(action, { 'monitor', 'session' }))

Regular Expressions

The regexp function matches the entire region against the pattern via the java.util.regex.Matcher.matches() method. Consult the Java API documentation for more information, or refer to the Regular Expressions Reference Table of Contents.

Esper Reference:

http://espertech.com/esper/release-5.3.0/esper-reference/html_single/index.html#epl-operator-ref-keyword-regexp
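Because regexp matches the entire region, a pattern must account for any surrounding characters. The following sketch (assuming an alias_host meta key) illustrates the difference:

```
// Does NOT match 'files.example.com': the pattern must cover the whole value
SELECT * FROM Event(alias_host regexp 'example')

// Matches 'files.example.com': the leading and trailing '.*' cover the rest of the region
SELECT * FROM Event(alias_host regexp '.*example.*')
```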

Count vs. Time Length Batch

In the following rule, when the 1-minute time window is reached, the engine outputs everything within it, per ip_src. The HAVING count clause instructs the engine to output after the time window only if the count of events is greater than 10. The std:groupwin(ip_src) view with GROUP BY ip_src applies the count to a single ip_src rather than across all ip_src values that match the filter criteria.

       

SELECT * FROM Event(filter_criteria)

.std:groupwin(ip_src)

.win:time_batch(1 minute)

GROUP BY ip_src

HAVING count(*) > 10;

ESA Use Cases

This section discusses use cases.

Use Case: A Number of Events Occur Within a Time Interval, as Long as a Specific Event Is Absent

Generate an alert after receiving 10 different IDS events from the same source within 10 minutes, but only if within those 10 minutes, we do not see a TCP RST sent from the destination IP. This is an example of correlating packet and log data. Our F5s do a TCP RST on the inbound web requests for unknown paths, so in this instance, we only want to be alerted when a source receives 10 unique attacks to a single destination, and that destination has not responded to the web requests.

Solution:

SELECT * FROM pattern @SuppressOverlappingMatches

The pattern looks for an Intrushield event followed by 9 others, each with a unique policy_name and the same ip_src and ip_dst. The unique policy_name is enforced by the clause

where b.distinctOf(i => i.policy_name).countOf() = 9

The 10 minute time window following the first event is expressed by

timer:interval(600 seconds)

The statement for event b must match, and event c must not occur, for the pattern to fire. In other words, no TCP RST can be seen for the pattern to match:

AND NOT c=Event (medium=1 AND tcp_flags_seen ='rst' AND ip_dst=a.ip_dst)

The complete rule syntax is listed below.

       

@RSAAlert

SELECT * FROM pattern @SuppressOverlappingMatches

[

   every a=Event (
   device_type IN ( 'intrushield' )
   AND ip_src is not null
   AND ip_dst is not null
   AND policy_name is not null
   AND policy_name NOT LIKE '%P2P%'
   )

   -> (timer:interval(600 seconds)

   AND

   [9] b= Event (
   device_type IN ( 'intrushield' )
   AND ip_src = a.ip_src
   AND ip_dst = a.ip_dst
   AND policy_name is not null
   AND policy_name NOT LIKE '%P2P%'
   AND policy_name != a.policy_name
   )

   AND NOT

   c=Event (medium=1 AND tcp_flags_seen ='rst' AND ip_dst=a.ip_dst)
   )

] where b.distinctOf(i => i.policy_name).countOf() = 9;

Use Case: How to Correlate Events That Arrive Out of Order

This example correlates 3 events that populate the same ip_dst and occur within 15 minutes of each other, in any order.

       

/*
Intrusion Detection with Nonstandard HTTPS Traffic and ECAT Alert
Single host generates IPS alert on destination IP on port TCP/443
accompanied by traffic to TCP/443 that is not HTTPS with the target
host generating an ECAT alert within 5 minutes.
*/

 

/*
Create a window to store the IPS, nonstandard traffic and ECAT alerts
*/

@Name('create')

Create Window HttpsJoinedWindow.win:time(15 minutes)(device_class string, ip_dstport integer, service integer , tcp_dstport integer, device_type string, ip_dst string);

 

/*
Insert into the window the IPS, nonstandard traffic and ECAT alerts
*/

@Name('insert')

INSERT INTO HttpsJoinedWindow

SELECT * FROM

Event

(

(ip_dst IS NOT NULL and device_class IN ('IPS', 'IDS', 'Firewall') AND ip_dstport=443)

OR

(ip_dst IS NOT NULL and service!=443 and tcp_dstport=443)

OR

(ip_dst IS NOT NULL and device_type='rsaecat')

);

 

/*
Alert to the combination of all three events: IPS, nonstandard traffic and ECAT alerts
*/

@RSAAlert

INSERT INTO HttpsIntrusionTrigger

SELECT * FROM

HttpsJoinedWindow(ip_dst IS NOT NULL and device_class IN ('IPS', 'IDS', 'Firewall') AND ip_dstport=443) as s1,

HttpsJoinedWindow(ip_dst IS NOT NULL and service!=443 and tcp_dstport=443) as s2,

HttpsJoinedWindow(ip_dst IS NOT NULL and device_type='rsaecat') as s3

where s1.ip_dst = s2.ip_dst and s1.ip_dst = s3.ip_dst;

 

/*
Delete all events from the joined window that caused the alert so they won't be reused
*/

@Name('delete')

on HttpsIntrusionTrigger delete from HttpsJoinedWindow as j where s1.ip_dst=j.ip_dst;

Esper Reference:

http://espertech.com/esper/release-5.3.0/esper-reference/html_single/index.html#epl-join-inner

Use Case: Only Fire Rules Outside Business Hours

Define the non-working hours for this use case:

  • Set the working hours as '09:00' – '18:00'
  • Any event matching event_cat_name LIKE 'system.config%' outside the working hours triggers an alert.
       

create context NotWorkingHours start (0, 18, *, *, *) end (0, 9, *, *, *);

@RSAAlert

context NotWorkingHours select * from Event(event_cat_name LIKE 'system.config%');

Esper Reference:

http://espertech.com/esper/release-5.3.0/esper-reference/html_single/index.html#context_def_nonoverlapping

Use Case: Leverage Referencing Lists via Databases or Files from ESA Rules

Create a named window to store IPs, and update the window based on matching filter criteria. Trigger only if a second event occurs while the IP is on the watchlist. An IP is kept on the watchlist for only 15 minutes. You may also delete entries from a named window based on a triggering event.

       

create window WatchListIPs.win:time(15 min) (ip_src string);

 

insert into WatchListIPs select ip_src from Event(category LIKE '%scan%');

 

@RSAAlert

select * from Event(category LIKE '%malicious%') WHERE ip_src in (SELECT ip_src from WatchListIPs );

You can also check events against a reference window populated from a database or file, such as a GeoIpLookup window, and alert when no match exists:

SELECT * FROM Event(

(ip_dst IS NOT NULL) AND NOT EXISTS (SELECT * FROM GeoIpLookup WHERE ipv4 = Event.ip_dst)

);

Esper Reference:

http://espertech.com/esper/release-5.3.0/esper-reference/html_single/index.html#named_delete

Use Case: Computations Within a Time Window

Perform calculations, such as percentages, ratios, averages, counts, and minimum and maximum values, within a given time window.

Note: Computations over a large number of events and time periods are performance and memory intensive. Use caution when deploying the rules. For details, see the topic "Alerting: ESA Enablement Guide" in the Alerting Using ESA guide.

One way is to use named windows. However, this stores events in memory, which may cause issues if storing over a long period or large number of events.

       

CREATE WINDOW SizePerIP.win:length(100) (ip_src string,size long);

INSERT INTO SizePerIP SELECT ip_src AS ip_src, sum(size) AS size FROM Event.win:time_batch(1 minute) GROUP BY ip_src;

@RSAAlert(oneInSeconds=0)

SELECT ip_src FROM SizePerIP GROUP BY ip_src HAVING size > avg(size)*2;

Using a non-overlapping context does not retain events in memory and should be the preferred solution.

/*
Create a non-overlapping context to store data by second
*/

create context PerSecond start @now end after 1 second;

/*
Sum session size per second
*/

context PerSecond

insert into OneSecondBucket

select ip_src, sum(size) as size

from Event group by ip_src output snapshot when terminated;

 

/*
Alert if the total size for one second within an hour is two times greater than average
*/

@RSAAlert

select ip_src from OneSecondBucket.win:time(1 hour) group by ip_src HAVING size > avg(size)*2;

Testing Rules

Always test rules on the Esper EPL try-out website prior to using them in a production environment. Note that there are some differences between the syntax used within ESA and the EPL referenced by EsperTech, as described in the EPL Syntax section.

The Esper practice site is here: http://esper-epl-tryout.appspot.com/epltryout/mainform.html

Subsequently, put rules into trial mode on the ESA service prior to enabling in production. See the topic "Work with Trial Rules" in the Alerting Using ESA guide.

Specify a Multi-valued Meta key in the EPL Tryout Tool

Within the schema definition for the action meta key:

       

action string[]

Within the event definition:

       

Event={user_dst ='adminuser', device_class='firewall', action=eval("{'GET','POST'}") }
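Putting the schema and event definitions together, a minimal try-out module might look like the following sketch. The statement and values are illustrative assumptions, and RSA-specific annotations such as @RSAAlert are not part of standard EPL, so they may need to be removed before testing on the try-out site:

```
create schema Event(user_dst string, device_class string, action string[]);

select * from Event(device_class = 'firewall' and 'deny' = ALL(action));
```

With this schema, the time-ordered event sequence can then include:

Event={user_dst='adminuser', device_class='firewall', action=eval("{'deny','deny'}") }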

Esper References

The following links provide details on Esper.

Maintaining Content

Threats and the corporate landscape change over time. RSA periodically reviews existing content to determine whether it needs to be updated based upon current campaigns, or has become irrelevant due to changes in technology or attack techniques and tools.

You can discover new content by using the What’s New dashlet within the Default Dashboard, or by searching through RSA Live by date range since last deployed. Be sure to subscribe to any content for which you want to receive update notifications. See the RSA Security Analytics Live Services guide for more information about subscriptions.

Note: Content Bundles do not support subscription. You can view the list of content in each bundle in the documentation (RSA Link Content Space). You can then periodically redeploy these pieces of content.

Investigating Metadata

The RSA NetWitness Hunting Guide provides a recommendation of content to deploy in order to generate metadata specific to finding threats. This guide also details the investigation process within the product that an analyst can use to track down suspicious events.

Appendix

This section contains the Assign Capture Interface and Capture Autostart procedures that were referenced earlier in this topic.

Assign Capture Interface

For Decoders and Log Decoders, select the capture interface.

To select the Capture Interface:

  1. Depending on your version:

    • For NetWitness 11.x: Navigate to ADMIN > Services.
    • For Security Analytics 10.x: In the Security Analytics menu, select Administration > Services.
  2. In the Admin Services view, select a Decoder or Log Decoder service, and select View > Config.

    The General tab of the Services view is displayed.

  3. In the Decoder/Log Decoder Configuration section, select Capture Interface Selected.
  4. Select an adapter through which the Decoder or Log Decoder captures packets.

    • For Log Decoders, there is a single choice: log_events (Log Events).
    • For Decoders, the available choices depend upon your environment.

    This screen shows an example of the available choices for a Log Decoder service:

  5. Click Apply for your changes to take effect.

Capture Autostart

When a Decoder or Log Decoder starts up, it automatically begins capturing data if Capture Autostart is enabled. You can always start and stop data capture manually; however, if the service goes down or is restarted and Capture Autostart is enabled, capture restarts automatically. You still need to start the capture manually the first time, though.

To enable Capture Autostart:

  1. Depending on your version:

    • For NetWitness 11.x: Navigate to ADMIN > Services.
    • For Security Analytics 10.x: In the Security Analytics menu, select Administration > Services.
  2. In the Admin Services view, select a Decoder or Log Decoder service, and select View > Config.

    The General tab of the Services view is displayed.

  3. In the Decoder/Log Decoder Configuration section, select Capture Autostart, as shown in the screen below (showing a Decoder service).

  4. Click Apply to save your changes.
  5. The change takes effect on service restart. To restart the service:

    1. From the toolbar, change the view from Config to System by opening the View menu and selecting System.

    2. From the toolbar, select Reboot.

    3. The system displays a message asking you to confirm the reboot. Click Yes; the service then restarts.
