Configure Logstash Event Sources in NetWitness Platform

You can configure the Logstash collection protocol.

Note: Make sure you start Syslog collection so that logs are sent from Logstash to the destination host.

IMPORTANT:
- Do not change the logstash.yml file, as doing so breaks functionality.
- Do not change the sincedb_path input configuration. If you change sincedb_path, the backup and restore functionality breaks.

To configure a Logstash Event Source:

  1. Go to netwitness_adminicon_25x22.png (Admin) > Services from the NetWitness Platform menu.
  2. Select a Log Collection service.
  3. Under Actions, select actions menu > View > Config to display the Log Collection configuration parameter tabs.
  4. Click the Event Sources tab.

  5. In the Event Sources tab, select Logstash/Config from the drop-down menu.
  6. In the Event Categories panel toolbar, click add icon.

    The Available Event Source Types dialog is displayed. Currently, Custom is the only available entry.

  7. Select Custom and click OK.

    The newly added event source type is displayed in the Event Categories panel.

  8. Select the new type in the Event Categories panel and click add icon in the Sources toolbar.

    The Add Source dialog is displayed.

  9. Fill in the fields, based on the Logstash event source you are adding. General details about the available parameters are described below in Logstash Collection Parameters.

  10. Click OK.

Note: Once you configure Logstash, you must restart Syslog collection for the changes to take effect.

Logstash Collection Parameters

The following tables provide descriptions of the Logstash collection source parameters.

Note: Items that are followed by an asterisk (*) are required.

Basic Parameters

Custom Event Source

This table lists the common parameters for all plugin event sources; specific event source types have additional parameters.

Name Description

Name *

Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.

Enabled

Select the checkbox to enable the event source configuration to start collection. The checkbox is selected by default.

Description

Enter a text description for the event source.

Input Configuration *

An input plugin enables Logstash to read events from a specific source. Paste in the text for your input plugin.
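A minimal sketch of a file input is shown below. The path and tag are illustrative, not values the product requires; do not set sincedb_path yourself, as noted in the IMPORTANT callout above.

    input {
      file {
        path => "/var/log/sample/*.log"    # illustrative path; point at your own source
        tags => ["sample_app"]             # illustrative tag
      }
    }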

Filter Configuration *

A filter plugin performs intermediary processing on an event. Paste in any filter details for your Logstash event source.
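For example, a minimal filter that stamps each event with an extra field might look like the following. The field name and value are illustrative assumptions, not required by the product:

    filter {
      mutate {
        add_field => { "device_type" => "sample_app" }    # illustrative field and value
      }
    }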

Event Destination *

The NetWitness Platform consumes Syslog from Log Collectors and Log Decoders; specify the Log Collector or Log Decoder to which you want to send the information.

Test Connection

Checks the configuration parameters specified in this dialog to make sure they are correct.

Beats Event Source

The following table lists the beats event source parameters.

Name Description

Name *

Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.

Enabled

Select the check box to enable the event source configuration to start collection. The check box is selected by default.

Description

Enter a text description for the event source.

Port Number*

Enter the port number (for example, 5044) that you configured for your event sources.
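This port must match the port configured in your Beats agents. For example, a Filebeat output section pointing at this event source might look like the following sketch (the host is an illustrative placeholder):

    output.logstash:
      hosts: ["<log-collector-host>:5044"]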

Linux-Audit

Select the checkbox to enable processing for Linux audit.

Linux-System

Select the checkbox to enable processing for Linux system.

Nginx

Select the checkbox to enable processing for Nginx.

Event Destination *

Select the NetWitness Log Collector or Log Decoder to which events need to be sent from the drop-down list.

Test Configuration

Checks the configuration parameters specified in this dialog to make sure they are correct.

Export Connector Event Source

The following table lists the custom export connector event source parameters.

Name Description

Name *

Enter an alpha-numeric, descriptive name for the source. This value is only used for displaying the name on this screen.

Enabled

Select the check box to enable the event source configuration to start collection. The check box is selected by default.

Description

Enter a text description for the event source.

Host*

Select the hostname of the Decoder or Log Decoder for data aggregation from the drop-down list.

Username *

Enter the username used to access the Decoder or Log Decoder for data aggregation.

Authentication

Note: If you upgrade from NetWitness Platform 11.6.0.0 to 11.6.1.0, a key is automatically generated and stored in keystore management for the password set in the Logstash pipeline configuration. The key is displayed instead of the password in the Authentication field.

Select the authentication type used for data aggregation. If you select trusted authentication, the SSL field is enabled by default.

Note: For trusted authentication, make sure you add the PEM file at /etc/pki/nw/node/node-cert.pem to the source Decoder's REST APIs (/sys/trustpeer and /sys/caupload).

SSL

Select the check box to communicate using SSL. The security of data transmission is managed by encrypting information and providing authentication with SSL certificates. If you select the trusted authentication type in the Authentication field, the SSL option is enabled by default.

Decoder Type *

Decoder Type is a read-only field that is auto-populated when you select the Host.

Output Configuration *

The Logstash pipeline output configuration that sends the input events.

Test Configuration

Checks the configuration parameters specified in this dialog to make sure they are correct.

Advanced Parameters

Click expand icon next to Advanced to view and edit the advanced parameters, if necessary.

Name Description

Debug

Caution: Only enable debugging (set this parameter to On or Verbose) if you have a problem with an event source and you need to investigate this problem. Enabling debugging will adversely affect the performance of the Log Collector.

Enables or disables debug logging for the event source. Valid values are:

  • Off = (default) disabled
  • On = enabled
  • Verbose = enabled in verbose mode - adds thread information and source context information to the messages.

This parameter is designed to debug and monitor isolated event source collection issues. If you change this value, the change takes effect immediately (no restart required). The debug logging is verbose, so limit the number of event sources to minimize performance impact.

Beats SSL

(This option is applicable only for beats typespec)

Select this checkbox to communicate using beats SSL. The security of data transmission is managed by encrypting information and providing authentication with SSL certificates. This check box is not selected by default.

Note: Ensure that you copy the server SSL certificate and the key (generated in your system) to /etc/logstash/pki on Log Collector, which is used during SSL connection. /etc/logstash/pki is a path in the Log Collector node.

Certificate

(This option is applicable only for beats typespec)

Select the name of a server SSL certificate located at /etc/logstash/pki.

Key

(This option is applicable only for beats typespec)

Select the name of a server SSL key located at /etc/logstash/pki.

SSL Enabled

  • For custom and beats typespecs - Select the check box to communicate using SSL. The security of data transmission is managed by encrypting information and providing authentication with SSL certificates. This check box is selected by default.

  • For export connector typespec - Select the check box for Logstash to communicate with the Packet Decoder or Log Decoder in SSL or non-SSL mode. This check box is not selected by default.

Export Logs

(This option is applicable only for export_connector typespec)

Select the check box to export logs from Log Decoders.

Starting Session

(This option is not applicable for export_connector typespec)

Specify the session ID from which you want to start aggregation, instead of the default.

Include Metas

(This option is not applicable for export_connector typespec)

Specify a comma-separated list of metas that you want to aggregate. Default metas such as time, did, and sessionid are collected in addition to the metas you add for aggregation.

Exclude Metas

(This option is not applicable for export_connector typespec)

Specify a comma-separated list of metas that you want to exclude from aggregation.

Query

(This option is not applicable for export_connector typespec)

Specify a query so that only data matching the query is aggregated. For example: select * where user.dst = 'john'.

Enable Metrics Collection

(This option is applicable only for export_connector typespec)

Enables metrics collection in Elastic.

IMPORTANT: If you enable the metrics collection, you must provide the Elastic host, username, and password.

Elastic Host

(This option is applicable only for export_connector typespec)

Specify the Elastic host to forward metrics.

Elastic Port

(This option is applicable only for export_connector typespec)

Port number through which metrics are forwarded.

Elastic Username

(This option is applicable only for export_connector typespec)

Specify the Elasticsearch username.

Elastic Password

(This option is applicable only for export_connector typespec)

Note: If you upgrade from NetWitness Platform 11.6.0.0 to 11.6.1.0, a key is automatically generated and stored in keystore management for the password set in the Logstash pipeline configuration. The key is displayed instead of the password in the Elastic Password field.

Specify the Elasticsearch password.

Minutes Back

(This option is applicable only for export_connector typespec)

Starts collecting data from the last xx minutes.

For example, if the value is set to 30 minutes, the Log Collector starts collecting logs and metas from the last 30 minutes.

Persistent Queue

(This option is applicable only for export_connector typespec)

The persistent queue stores the message queue on disk to avoid data loss. By default, this option is not enabled.

Additional Custom Configuration

Use this text box for any additional configuration, in case you have multiple inputs or another set of outputs to send somewhere in addition to a NetWitness Log Collector or Log Decoder. For example, you can configure the data to be sent to Elasticsearch. In this case, each event that is sent to NetWitness Platform is also sent to Elasticsearch.
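A sketch of such an additional Elasticsearch output is shown below; the host and index name are illustrative placeholders, not values the product supplies:

    output {
      elasticsearch {
        hosts => ["http://<elastic-host>:9200"]    # illustrative host
        index => "netwitness-events"               # illustrative index name
      }
    }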

Required Plugins

Specify the required plugins in a comma separated list.

Note: Backup and restore is not supported for custom plugins.

Note: If the test connection fails because a required plugin is not installed, you must install the required plugin. For more information, see Install or Manage Logstash Plugin.

Pipeline Workers

The number of pipeline worker threads allocated for the Logstash pipeline.

Ports

(This option is not applicable for export_connector typespec)

Enter a port number (for example, 5000, or UDP:5000, TCP:5000), and ensure the Enabled box is checked. This allows the plugins to collect logs over the network (for example, via UDP or TCP).

IMPORTANT: If you are configuring a beats event source, make sure you provide the beats event source port (for example, 5044) in the advanced configuration even if you have updated the port in the basic parameters.

Note: You can monitor the health and throughput for Logstash pipeline using Logstash Input Plugin Overview dashboard. For more information, see "New Health and Wellness Dashboards" topic in the System Maintenance Guide.

View Logstash Collection in the Investigation > Events View

To view Logstash collection in the Events view:

  1. Go to the Investigation > Events view.

  2. From the drop-down list, select a Log Decoder service (for example, LD1) that collects Logstash events.

  3. Select the time range.

  4. Click netwitness_qryiconblue.png.

  5. Look for events with a device.type value that matches the one defined in the Logstash pipeline.

    Note: Though some metas are parsed by default, a custom parser or Log Parser Rules are required for full parsing.

Install or Manage Logstash Plugin

By default, Logstash related plugins are installed when Logstash is installed. In addition, you can add or customize the plugins based on your requirement.

List All Logstash Plugins

You can list all the Logstash plugins available in your environment using the following command:

/usr/share/logstash/bin/logstash-plugin list --verbose

Install New Logstash Plugin

You can install a new plugin using the following command:

/usr/share/logstash/bin/logstash-plugin install <plugin-name>

For example:

/usr/share/logstash/bin/logstash-plugin install logstash-input-github

Manage Logstash Plugin

You can manage the existing plugins using the following commands:

  • To update all Logstash plugins:

    /usr/share/logstash/bin/logstash-plugin update

  • To update a specific Logstash plugin:

    /usr/share/logstash/bin/logstash-plugin update <plugin-name>
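
  • To remove a Logstash plugin that is no longer needed:

    /usr/share/logstash/bin/logstash-plugin remove <plugin-name>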

Configure Keystore Management

Keystore management allows you to securely store secret values (keys and passwords). These keys are used as placeholders for authentication credentials within a Logstash pipeline configuration.

To configure keystore management:

  1. Go to netwitness_adminicon_32x27.png (ADMIN) > Services from the NetWitness Platform menu.

  2. Select a Log Collection service.

  3. Under Actions, select netwitness_actions_icon.png > View > Config to display the Log Collection configuration parameter tabs.

  4. Click the Event Sources tab.

  5. In the Event Sources tab, select Logstash > Keystore management from the drop-down menu.

  6. Click netwitness_add.png.

  7. In the key field, enter the name of the key.

  8. In the password field, enter the password.

  9. Click Save.