SA Cfg: Troubleshoot Global Audit Logging

Document created by RSA Information Design and Development on Mar 22, 2017. Last modified by RSA Information Design and Development on Sep 26, 2017.

This topic describes issues that users may encounter when implementing Global Audit Logging in Security Analytics, along with explanations and solutions.

After you configure Global Audit Logging, test your audit logs to ensure that they show the audit events defined in your audit logging template. If you cannot view the audit logs on your third-party syslog server or Log Decoder, or the audit logs do not appear as expected, start with the basic troubleshooting suggestions below. If the issue persists, move on to the advanced troubleshooting suggestions.

Basic Troubleshooting

If you cannot view audit logs on a third-party syslog server or Log Decoder:

  • Verify that Puppet and RabbitMQ are up and running.
  • Verify the syslog notification server configuration and make sure it is enabled.
    (This configuration is located at Administration > System > Global Notifications. Do not select Legacy Notifications.)
  • Check the Global Audit Logging configuration.

Configure Global Audit Logging and Verify Global Audit Logs provide instructions. If you are sending audit logs to a Log Decoder:

  • Ensure that the Log Decoder is aggregating on the Concentrator on the same host (Administration > Services > (Select Concentrator) > View > Config).
  • Verify that the latest CEF parser is deployed and enabled.
  • Check the audit logging notification template. All audit logs sent to the Log Decoder must use a CEF template.

If you are sending audit logs to a third-party syslog server:

  • Ensure that the destination port configured for the third-party syslog server is not blocked by a firewall. 

Advanced Troubleshooting

In order to use Global Audit Logging on your network, Puppet and RabbitMQ must be functioning. The following Global Audit Logging architectural diagram shows the necessary audit logging components and the flow of audit logs from the individual services to Logstash on the Security Analytics Server host, and then to the configured third-party syslog server or Log Decoder.

For centralized audit logging, each of the Security Analytics services writes audit logs to rsyslog listening on port 50514 using UDP on the local host. The rsyslog plugin provided in the audit logging package adds additional information and uploads these logs to RabbitMQ. Logstash running on the Security Analytics Server host aggregates audit logs from all of the Security Analytics services, converts them to the required format, and sends them to a third-party syslog server or Log Decoder for investigation. You configure the format of the global audit logs and the destination used by Logstash through the Security Analytics user interface.
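As a quick sanity check of the first hop in this pipeline, you can emit a test datagram to the local rsyslog listener yourself. The sketch below is illustrative and assumes a bash shell on the Security Analytics host; the message text and priority value are arbitrary test data, not real audit output. Watch for the datagram with tcpdump in a second terminal.

```shell
# Illustrative sketch (bash): send one syslog-style test datagram to the
# local rsyslog audit listener on 50514/udp, using bash's /dev/udp
# pseudo-device. "<14>" is a syslog priority (facility=user, severity=info);
# the rest of the message is arbitrary test text.
msg="<14>$(date '+%b %d %H:%M:%S') $(hostname) audit-test: global audit logging probe"
if printf '%s' "$msg" > /dev/udp/127.0.0.1/50514; then
  sent=yes
else
  sent=no
fi
echo "sent=$sent"
```

To watch the datagram arrive, run sudo tcpdump -i lo -A udp and port 50514 in another terminal first. If tcpdump never shows the datagram, the problem is local to the host rather than in RabbitMQ or Logstash.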

Define a Global Audit Logging Configuration provides instructions.  

Verify the Packages and Services on the Hosts

Security Analytics Host

The following packages or services must be present on the Security Analytics Server host:

  • rsyslog-8.4.1
  • rsa-audit-rt
  • logstash-1.5.4-1
  • rsa-audit-plugins
  • rabbitmq server
  • puppet master
  • puppet agent
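One quick way to spot a missing package is to diff the list above against the installed RPMs. The sketch below is a hedged illustration: the installed list is simulated, and exact package names vary by Security Analytics release; on a real host you would substitute the output of rpm -qa --qf '%{NAME}\n'.

```shell
# Illustrative sketch (bash): report required audit-logging packages that
# are absent from an installed-package list. Names are abbreviated from the
# list above; real package names may carry version suffixes.
required='logstash
rabbitmq-server
rsa-audit-plugins
rsa-audit-rt
rsyslog'

# Simulated installed list; on a real host use: rpm -qa --qf '%{NAME}\n'
installed='logstash
rabbitmq-server
rsyslog'

# comm -23 prints lines present only in the first (required) list.
missing=$(comm -23 <(printf '%s\n' "$required" | sort) \
                   <(printf '%s\n' "$installed" | sort))
echo "missing packages:"
echo "$missing"
```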

Services on a Host other than the Security Analytics Host

The following packages or services must be present on each of the Security Analytics hosts other than the Security Analytics Server host:

  • rsyslog-8.4.1
  • rsa-audit-rt
  • rabbitmq server
  • puppet agent

Log Decoder

If you forward global audit logs to a Log Decoder, the following parser should be present and enabled:

  • CEF
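You can verify the parser through the UI (Administration > Services > (select the Log Decoder) > View > Config), or check the parser directory on the appliance. In the sketch below both the directory path and the parser file name are assumptions; the demonstration uses a temporary stand-in directory so it is self-contained.

```shell
# Illustrative sketch: report whether a directory contains a CEF parser
# file. The real parser directory on a Log Decoder is deployment-specific
# (it may be under /etc/netwitness/ng/parsers, for example); confirm
# through the UI if in doubt.
has_cef_parser() {
  ls "$1" 2>/dev/null | grep -iq '^cef'
}

# Self-contained demonstration using a temporary stand-in directory.
demo_dir=$(mktemp -d)
touch "$demo_dir/cef.parser"          # hypothetical parser file name
if has_cef_parser "$demo_dir"; then
  result="CEF parser found"
else
  result="CEF parser missing"
fi
echo "$result"
rm -rf "$demo_dir"
```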

Possible Issues

What if I perform an action on a service but audit logs do not reach the configured third-party syslog server or Log Decoder? 

One or more of the following could be the cause:

  • A service is not logging to the local syslog server.
  • Audit logs are not getting uploaded to RabbitMQ from the local syslog. 
  • Audit logs are not aggregated on the Security Analytics Server host.
  • Aggregated logs on the Security Analytics Server host are not being forwarded to the configured third-party syslog server or Log Decoder.
  • The Log Decoder is not configured to receive global audit logs in CEF format:
    • Log Decoder capture is not turned on
    • CEF Parser is not present
    • CEF Parser is not enabled

Possible Solutions

The following table pairs each issue with possible solutions.

A service is not logging to the local syslog server.
  • Ensure that rsyslog is up and running.
    You could use the following command:
    service rsyslog status
  • Ensure that rsyslog is listening on port 50514 using UDP.
    You could use the following command:
    netstat -tulnp|grep rsyslog
  • Ensure the application or component is sending audit logs to port 50514. Run the tcpdump utility on the local interface for port 50514.
    You could use the following command:
    sudo tcpdump -i lo -A udp and port 50514

See "Solution Examples" below to view the command outputs.
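The netstat check above can also be scripted. The helper below is a sketch that decides, from netstat-style text, whether any socket is bound to a given port; the sample line is illustrative rather than captured from a live host.

```shell
# Illustrative sketch: decide from `netstat -tulnp`-style text whether any
# socket is bound to the given port.
listening_on() {
  # Matches ":<port>" followed by whitespace in the local-address column.
  grep -Eq "[:.]$1[[:space:]]"
}

# Sample line in the shape produced by: netstat -tulnp | grep rsyslog
sample='udp        0      0 127.0.0.1:50514         0.0.0.0:*               1234/rsyslogd'

if printf '%s\n' "$sample" | listening_on 50514; then
  status="listening on 50514"
else
  status="NOT listening on 50514"
fi
echo "$status"
```

On a live host you would feed real output instead of the sample, for example: netstat -tulnp | listening_on 50514 && echo ok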

Audit logs are not getting uploaded to RabbitMQ from the local syslog.
  • Ensure that the rsyslog plugin is up and running.
    You could use the following command:
    ps -ef|grep rsa_audit_onramp
  • Ensure the RabbitMQ server is up and running.
    You could use the following command:
    service rabbitmq-server status

See "Solution Examples" to view the command outputs.

Audit logs are not aggregated on the Security Analytics Server host.

  • Ensure Logstash is up and running.
    You could use the following commands:
    ps -ef|grep logstash
    service logstash status
  • Ensure the RabbitMQ server is up and running.
    You could use the following command:
    service rabbitmq-server status
  • Ensure the RabbitMQ server is listening on port 5672.
    You could use the following command:
    netstat -tulnp|grep 5672
  • Check for any errors generated at the Logstash level.
    You could use the following command to locate the log files:
    ls -l /var/log/logstash/logstash.*

See "Solution Examples" to view the command outputs.

Aggregated logs on the Security Analytics Server host are not being forwarded to the configured third-party syslog server or Log Decoder.

  • Ensure Logstash is up and running.
    You could use the following commands:
    ps -ef|grep logstash
    service logstash status
  • Check for any errors generated at the Logstash level.
    You could use the following command to locate the log files:
    ls -l /var/log/logstash/logstash.*
  • Ensure that the destination service is up and running.
  • Ensure that the destination service is listening on the correct port using the correct protocol.
  • Ensure that the configured port on the destination host is not blocked.

See "Solution Examples" below to view the command outputs.

Audit logs forwarded from Logstash fail to parse at the Log Decoder.

  • Ensure that you are using an appropriate notification template.
    Audit logs parsed by a Log Decoder must be in CEF format, so any destination from which audit logs reach the Log Decoder, directly or indirectly, must also use a CEF template.
  • Ensure that the notification template follows the CEF standard.
    Follow the steps in this guide to either use the default CEF template or create a custom CEF template following strict guidelines. Define a Template for Global Audit Logging provides additional information.
  • Verify the Logstash configuration.
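A quick structural check can tell you whether a rendered notification line even resembles CEF before you chase parser issues. The sketch below verifies only the header shape (CEF:Version plus seven pipe-delimited fields); the sample line is invented for illustration and is not actual product output, and real syslog transport typically prepends a header before the CEF: token.

```shell
# Illustrative sketch: check that a line starts with "CEF:<version>" and
# contains at least the seven pipe separators of a CEF header
# (CEF:Version|Vendor|Product|Version|SignatureID|Name|Severity|Extension).
is_cef() {
  case "$1" in
    CEF:*) [ "$(printf '%s' "$1" | tr -cd '|' | wc -c)" -ge 7 ] ;;
    *)     return 1 ;;
  esac
}

# Invented sample line for illustration only.
sample='CEF:0|RSA|Security Analytics Audit|10.5|AUTHENTICATION|User login|6|suser=admin outcome=success'

if is_cef "$sample"; then cef_ok=yes; else cef_ok=no; fi
echo "cef_ok=$cef_ok"
```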

Why can't I see custom meta data in Investigation?

Usually, if a meta key is not visible in Investigation, it is not being indexed. If you need to use custom meta keys for Investigation and Reporting, ensure that the meta keys that you select are indexed in the table-map-custom.xml file on the Log Decoder. Follow the "Maintain the Table Map Files" procedure to modify the table-map-custom.xml file on the Log Decoder.

Ensure that the custom meta keys are also indexed in the index-concentrator-custom.xml on the Concentrator. "Edit a Service Index File" provides additional information.

The following code sample from an example table-map-custom.xml file in Security Analytics (Administration > Services > (select the Log Decoder) > View > Config) includes the custom meta url example:

<mapping envisionName="url" nwName="url" flags="None" envisionDisplayName="Url"/>
<mapping envisionName="protocol" nwName="protocol" flags="None" envisionDisplayName="Protocol"/>
<mapping envisionName="cs_devservice" nwName="cs.devservice" flags="None" envisionDisplayName="DeviceService" />
<mapping envisionName="cs_paramkey" nwName="cs.paramkey" flags="None" envisionDisplayName="ParamKey" />
<mapping envisionName="cs_paramvalue" nwName="cs.paramvalue" flags="None" envisionDisplayName="ParamValue" />
<mapping envisionName="cs_operation" nwName="cs.operation" flags="None" envisionDisplayName="Operation" />
<mapping envisionName="sessionid" nwName="log.session.id" flags="None" envisionDisplayName="sessionid" />
<mapping envisionName="group" nwName="group" flags="None" envisionDisplayName="group" />
<mapping envisionName="process" nwName="process" flags="None" envisionDisplayName="process" />
<mapping envisionName="user_agent" nwName="user.agent" flags="None"/>
<mapping envisionName="info" nwName="index" flags="None"/>

The following code sample from an example index-concentrator-custom.xml file in Security Analytics (Administration > Services > (select the Concentrator) > View > Config) includes the custom meta url example:

<key description="Severity" level="IndexValues" name="severity" valueMax="10000" format="Text"/>
<key description="Result" level="IndexValues" name="result" format="Text"/>
<key level="IndexValues" name="ip.srcport" format="UInt16" description="SourcePort"/>
<key description="Process" level="IndexValues" name="process" format="Text"/>
<key description="Process ID" level="IndexValues" name="process_id" format="Text"/>
<key description="Protocol" level="IndexValues" name="protocol" format="Text"/>
<key description="UserAgent" level="IndexValues" name="user_agent" format="Text"/>
<key description="DestinationAddress" level="IndexValues" name="ip.dst" format="IPv4"/>
<key description="SourceProcessName" level="IndexValues" name="process.src" format="Text"/>
<key description="Username" level="IndexValues" name="username" format="Text"/>
<key description="Info" level="IndexValues" name="index" format="Text"/>
<key description="customdevservice" level="IndexValues" name="cs.devservice" format="Text"/>
<key description="url" level="IndexValues" name="url" format="Text"/>
<key description="Custom Key" level="IndexValues" name="cs.paramkey" format="Text"/>
<key description="Custom Value" level="IndexValues" name="cs.paramvalue" format="Text"/>
<key description="Operation" level="IndexValues" name="cs.operation" format="Text"/>
<key description="CS Device Service" level="IndexValues" name="cs.device" format="Text" valueMax="10000" defaultAction="Closed"/>

Solution Examples

The following solution examples show the commands for each check. See the table above for the complete listing of possible solutions.

Ensure that rsyslog is up and running

You can use the following command:
service rsyslog status

Ensure that rsyslog is listening on port 50514 using UDP

You can use the following command:
netstat -tulnp|grep rsyslog

Ensure that the application or component is sending audit logs to port 50514

You can use the following command to run the tcpdump utility on the local interface for port 50514:
sudo tcpdump -i lo -A udp and port 50514

Ensure that the rsyslog plugin is up and running

You can use the following command:
ps -ef|grep rsa_audit_onramp

Ensure the RabbitMQ server is up and running

You can use the following command:
service rabbitmq-server status

Ensure Logstash is up and running

You can use the following commands:
ps -ef|grep logstash
service logstash status

Ensure the RabbitMQ server is listening on port 5672

You can use the following command:
netstat -tulnp|grep 5672

Check for any errors generated at the Logstash level

You can use the following command to locate the log files:
ls -l /var/log/logstash/logstash.*

See the Possible Solutions table above for the complete listing of issues and possible solutions.
