
Logstash: Linux Event Source Example


This section shows sample input, filter, and output configurations to collect system and audit events from CentOS.

Input Plugin

An input plugin enables a specific source of events to be read by Logstash. The following code represents an example input plugin.

input-beats.conf

# The input block below collects events sent by Beats agents (e.g., Filebeat, Auditbeat)
# Skip this block if it's already defined in another pipeline.
input {
    beats {
        port => 5044
    }
}

Make sure that port 5044 is open on the Logstash machine. As an example, if Logstash is on a CentOS system, run the following commands to open port 5044:

firewall-cmd --add-port=5044/tcp
firewall-cmd --add-port=5044/tcp --permanent
firewall-cmd --reload
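
To confirm that the port is now open, list the ports currently allowed by firewalld; the output should include 5044/tcp:

firewall-cmd --list-ports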

Output Plugin

An output plugin sends event data to a particular destination. Outputs are the final stage in the event pipeline.

output-netwitness-tcp.conf

# Below is a tcp output plugin with the netwitness codec to transform events into syslog format and send them to the LogDecoder
# Only one of these configurations can be within the same pipeline.
output {
  #if [@metadata][nw_type] { # Only targeted Netwitness items
    tcp {
      id => "netwitness-tcp-output-conf-output"
      host => "10.10.100.100"  ## LogDecoder IP
      port => 514
      ssl_enable => false
      #ssl_verify => true
      #ssl_cacert => "/path/to/certs/nw-truststore.pem"
      #ssl_key => "/path/to/certs/privkey.pem"
      #ssl_cert => "/path/to/certs/cert.pem"
 
      codec => netwitness {
        # Payload format mapping by nw_type.
        # If nw_type is absent or formatting fails, JSON event is used as the payload
        payload_format => {
          "apache" => "%APACHE-4-%{verb}: %{message}"
        }
        # Failover format, used if the format above fails
        # If this formatting also fails, the JSON event is used as the payload
        payload_format_failover => {
          "apache" => "%APACHE-4: %{message}"  # When verb is missing
        }
      }
    }
  #}
}
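
Before starting collection, you may want to verify that the Logstash host can reach the LogDecoder on the configured port. The check below is a minimal sketch using only bash built-ins; it assumes the example LogDecoder IP 10.10.100.100 and port 514 from the configuration above:

# Bash-only TCP connectivity check to the LogDecoder syslog port (gives up after 3 seconds)
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/10.10.100.100/514' && echo "port 514 reachable"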

Filter Plugin

A filter plugin performs intermediary processing on an event. Below is a filter plugin configuration for system events collected from Linux using the Filebeat agent.

linux-system.conf

# Filters are often applied conditionally depending on the characteristics of the events.
# Requires these additional configurations within the same pipeline: 
#   input-beats.conf
#   output-netwitness-tcp.conf
 
filter {
  if ![@metadata][nw_type] {
    if [ecs][version] and [host][hostname] and [agent][type] == "filebeat" {
      if [event][module] == "system" {
        mutate {
          add_field => {
            "[@metadata][nw_type]" => "linux"
            "[@metadata][nw_msgid]" => "LOGSTASH001"
            "[@metadata][nw_source_host]" => "%{[host][hostname]}"
          }
        }
      }
    }
  }
}

Below is the filter plugin configuration for audit events collected from Linux using the Auditbeat agent.

linux-audit.conf

filter {
  if ![@metadata][nw_type] { # Set NetWitness metadata only if not already set
    if [ecs][version] and [host][hostname] and [agent][type] == "auditbeat" {
      if [event][module] == "audit" {
        mutate {
          add_field => {
            "[@metadata][nw_type]" => "linux"
            "[@metadata][nw_msgid]" => "LOGSTASH002"
            "[@metadata][nw_source_host]" => "%{[host][hostname]}"
          }
        }
      }
    }
  }
}

Create a Pipeline

It is recommended to have one pipeline for each input type. For example, all Beats collection should be in the same pipeline. To run collection as a separate pipeline, create a directory and add the above input, filter, and output configuration files to it, as shown below.

Example Pipeline for Beats

 /etc/logstash/pipeline1/
 /etc/logstash/pipeline1/input-beats.conf
 /etc/logstash/pipeline1/output-netwitness-tcp.conf
 /etc/logstash/pipeline1/linux-system.conf
 /etc/logstash/pipeline1/linux-audit.conf
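
The commands below are one way to create this layout; they assume the four .conf files shown earlier are saved in the current working directory:

mkdir -p /etc/logstash/pipeline1
cp input-beats.conf output-netwitness-tcp.conf linux-system.conf linux-audit.conf /etc/logstash/pipeline1/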

Modify /etc/logstash/pipelines.yml and add the following entries:

Add to pipelines.yml

  - pipeline.id: my-sample-pipeline-1
    path.config: "/etc/logstash/pipeline1/*.conf"
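
After saving pipelines.yml, validate the pipeline configuration and restart Logstash so that the new pipeline is loaded. The commands below assume a package-based install with settings in /etc/logstash:

# Check the pipeline configuration for syntax errors before restarting
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/pipeline1/ --config.test_and_exit

# Restart the Logstash service to load the new pipeline
systemctl restart logstash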
