Security Analytics System Maintenance Checklist

Document created by RSA Information Design and Development on Jun 26, 2017. Last modified by RSA Information Design and Development on Jul 27, 2017.
Version 2
  

This checklist describes tasks to be performed daily, weekly, and monthly for maintaining the health of your Security Analytics systems. If you need assistance with these tasks, contact Customer Support. For information about how to contact Customer Support, go to the "Contact Customer Support" page in RSA Link (Contact RSA Customer Support: https://community.rsa.com/docs/DOC-1294).

Introduction

It is important to perform regular maintenance checks on the Security Analytics Server (also known as the SA Head Unit) and hosts to keep them running smoothly. This checklist is divided into sections for daily, weekly, and monthly tasks to be performed on specific types of services.

Audience

The primary audience for this guide is members of the Administration team who are responsible for maintaining Security Analytics.

All Host Types Health Checks

In this section, we describe the most common health checks that apply across all the Security Analytics platforms. You perform these tasks using both the Security Analytics user interface and SSH-Session/ CLI.

Checks for all Host Types Using the Security Analytics UI

All Host Types Daily Tasks

                       
Task Title | Description
Check services
  1. Go to Administration > Hosts and ensure that all the boxes in the Services column are green.
  2. Go to Administration > Services and ensure that all the listed services show green circles.
 
Check alarms

In the Security Analytics UI, go to Administration > Health & Wellness and click the Alarms tab. For information about interpreting the alarms, see Monitor Alarms.

 

Checks for All Host Types Using SSH-Session/ CLI

All Host Types Daily Tasks

                                                          
Task Title | Description

Check memory usage

Run the following command:
free -g ; top

 

Check CPU usage

Run the following command:
iostat

 

Check for any Security Analytics configuration changes

Run the following command:
puppet agent -t

 

Check the status of nwappliance, mcollective and collectd services

Run the following commands:
status nwappliance
service mcollective status
service collectd status

 

Log maintenance

It is a best practice to monitor service and system logs for content and physical size on a daily basis. It is important to verify that logs are being rolled over to keep disk partitions from getting full. (A log is rotated after it reaches a certain size, for example, 50 MB, and a log control tool such as logrotate creates a new file in its place for logging purposes.) Some of the services might not function properly if the root partition runs over 80%. Follow the steps in Log Maintenance to address problems that can arise if the root partition runs over 80%.
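The 80% check above can be automated with a small script. This is an illustrative sketch, not part of the product; the mount point and threshold are assumptions to adjust per deployment.

```shell
# Sketch: warn when a partition exceeds the 80% usage threshold noted above.
# The mount point and threshold below are assumptions, not product defaults.
check_partition() {
  # $1 = mount point, $2 = threshold percent
  used=$(df -P "$1" | awk 'NR==2 { gsub(/%/, "", $5); print $5 }')
  if [ "$used" -gt "$2" ]; then
    echo "WARNING: $1 is at ${used}% (threshold ${2}%)"
  else
    echo "OK: $1 is at ${used}%"
  fi
}

check_partition / 80
```

A check like this can run from cron and feed its WARNING lines into whatever alerting you already monitor daily.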

 

Monitor Reporting Engine

Monitor the Reporting Engine to ensure that it does not fill up the /home/rsasoc/ partition. For information about how to monitor Reporting Engine, see Monitor Reporting Engine.

 

Monitor Malware Co-Located service

The Malware Analysis colo service may fail if the spectrum.h2.db database size is over 10 GB. Avoid running the Malware Analysis colo service for continuous scans and check the size of the database frequently. This service is located on all Security Analytics servers. Do not confuse it with the stand-alone Malware Analysis appliance or virtual machine. If the service fails due to unavailable disk space, follow the steps described in Malware Analysis Colo Service Failure.
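To track the spectrum.h2.db size from the command line, a quick sketch (the search root below is an assumption; adjust it to wherever the database lives on your server):

```shell
# Sketch: locate spectrum.h2.db and report its size, to watch for the
# 10 GB limit described above. The search root is an assumption.
report_db_size() {  # $1 = directory to search
  find "$1" -name 'spectrum.h2.db' -exec du -h {} \; 2>/dev/null
}

report_db_size /var/lib
```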

 

Monitor RabbitMQ server

Security Analytics servers use the RabbitMQ service for features such as federation, Health and Wellness,
and Incident Management. Ensure that the RabbitMQ service is in a healthy state by running a report and looking for alarms, memory usage, and sockets used. To run this report, follow the steps described in RabbitMQ Service Report.

 

Back up host systems and services

Scheduled daily backups of all essential Security Analytics configurations should be taken for each of the following components:

  • Log Decoder
  • Archiver
  • Concentrator
  • Broker
  • ESA
  • Remote Log Collectors (VLC)
  • Reporting Engine
  • Security Analytics server

For information about backing up these components, see Back Up and Restore Data for Hosts and Services.

 

 

All Host Types Weekly Tasks

                                           
Task Title | Description

Check Storage Usage

Run the following command:
df -h

 

Sort the files consuming the largest amount of disk space

Run the following command:
du -sh * | sort -rh

 

 

Check for core service dump files

In the Security Analytics console, run the following command:
find /var/netwitness/ -iname "core*"

 

Check for any process claiming space for a deleted file

Run the following command:
lsof | grep -i deleted

 

Check for date and time to make sure they are synchronized

In the Security Analytics console, run the following command:
date

 

Check size of H2 database

Security Analytics uses an in-memory H2 database. Check the size of the H2 database on a weekly basis. The H2 database is located in /var/lib/netwitness/uax/db. Notifications and recurring jobs can increase the database size to over 10 GB. Delete old notifications and unwanted recurring jobs from the Security Analytics UI. Recovery steps are documented in Monitor Size of H2 Database.
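A quick CLI check of the H2 database size can complement the UI cleanup. The path comes from this checklist; the threshold logic is an illustrative sketch.

```shell
# Sketch: report the H2 database size and flag it when it passes the
# 10 GB mark discussed above.
h2_size_check() {  # $1 = database directory
  du -sm "$1" 2>/dev/null | awk '{
    if ($1 > 10240) print "ALERT: H2 db is " $1 " MB"
    else print "OK: H2 db is " $1 " MB"
  }'
}

h2_size_check /var/lib/netwitness/uax/db
```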

 

 

All Host Types Monthly Tasks

                                                
Task Title | Description

Check for Security Analytics current version

Run the following command:
rpm -qa | grep -i nwappliance

 

Check for all attached storage disks status and RAID configurations

Run the following command:
/usr/sbin/nwraidutil.pl

 

 

Check for NTP operations

Run the following command:
ntpstat

 

Check for kernel version

Run the following command:
uname -a

 

Verify Custom Index File Configurations

Validate that the following files are consistent across all of the same type of host, for example, all Concentrators have a consistent index-concentrator-custom.xml file.

  • Decoders:/etc/netwitness/ng/index-decoder-custom.xml
  • Concentrators:/etc/netwitness/ng/index-concentrator-custom.xml
  • Brokers:/etc/netwitness/ng/index-broker-custom.xml

If there are any file discrepancies, verify which host has the correct version and push that version to the other hosts. For instructions, see Push Correct Versions of Custom Index Files to Hosts.

 

Verify Custom Feeds

Verify that custom feeds are correctly deployed to hosts. For instructions, see Verify Custom Feeds.

 

Back Up Feeds, Rules and Parsers

Backing up feeds, correlation rules, parsers, and application rules regularly ensures that your configuration is correct if recovery is necessary and makes the recovery procedure easier and faster. If these items are not changed often, they can be backed up less frequently, but you should back them up regularly.

 

Security Analytics Head Server Health Checks

In this section, we describe regular health checks to perform on the Security Analytics Head Server. You perform these tasks using both the Security Analytics user interface and SSH-Session/ CLI.

Head Server checks using the Security Analytics UI

Daily Tasks for Security Analytics Head Server

                            
Task Title | Description

Verify that all hosts are reachable

Go to Administration > Hosts and ensure that all the boxes in the Services column are green.

 

Verify that all services are functional

Go to Administration > Services and ensure that none of the services are red.

 

List Health and Wellness alarms for false and true positive

Create a list of Health and Wellness alarms and filter for false-positive and true-positive alarms, so that you can address them. Go to Administration > Health & Wellness and select the Alarms tab.

 

 

Weekly Tasks for Security Analytics Head Server

                                           
Task Title | Description

Check connectivity with external auth servers

Go to Administration > Security > Settings. Scroll down to External Authentication and check the connectivity status of the servers.

 

Check current deployed license status

Go to System > Licensing and check the license status on the Overview tab.

 

Test connectivity with external notification servers

  1. Go to System > Global Notifications, select the Servers tab and ensure that Enable is selected.
  2. In the left panel, select Email and click Test Connection.
 

Check job status

Go to System > Jobs and check the Progress column.

 

Check Live configurations

Go to System > Live Services and ensure that the live-status is green.

 

Check Context Hub config status

Go to System > Live Services and ensure that both Threat Insights and Analyst Behaviors are enabled.

 

Head Server Checks Using SSH-Session/ CLI

Daily Tasks for Security Analytics Head Server

                                      
Task Title | Description

Check storage usage

Run the command:
df -h

 

Check memory usage

Run the command:
free -g

 

Restart Puppet Master service

Run the command:
service puppetmaster restart

 

Restart Rabbitmq service

Run the command:
service rabbitmq-server restart

 

Check for MCollective Federation and Incident-Management RabbitMQ RDQ backlog files

Run the following command:
rabbitmqadmin --ssl --port=15671 list vhosts
Then check for backlog files:
ls -lh /var/lib/rabbitmq/mnesia/sa@localhost/msg_store_persistent/0.rdq /var/lib/rabbitmq/mnesia/sa@localhost/msg_store_transient/0.rdq

 

 

Weekly Tasks for Security Analytics Head Server

                                 
Task Title | Description

Test Puppet provisioning script execution

Run the command:
puppet agent -t

 

Test connectivity of SA Head Server with other hosts

Run the command:
mco ping

 

Check for Reporting Engine critical errors

Run the command:
tailf /home/rsasoc/rsa/soc/reporting-engine/logs/reporting-engine.log | grep -i error

 

Check for Mcollective errors

Run the command:
tailf /var/log/mcollective.log | grep -i error

 

 

Monthly Tasks for Security Analytics Head Server

                                      
Task Title | Description

Check SA certificates keystore contents

Run the command:
keytool -list -keystore /etc/pki/java/cacerts -storetype JKS -storepass changeit

 

Check for any SA Jetty server critical errors

Run the command:
more /var/lib/netwitness/uax/logs/sa.log | grep -i error

 

Verify all attached storage disks

Run the command: nwraidutil.pl

 

Verify network connectivity

  • Run the command: curl host_IP:port

  • With outgoing SMTP servers, run the command:
    curl smtp_server_IP:25

 

Verify required Security Analytics ports are open

Run the command:
netstat -alnp | grep "port_no"

 

Concentrator Health Checks

Indexes

By default, Security Analytics hosts create index slices every eight hours. Within each eight-hour window, there is a set number of values that can be indexed, defined by the valueMax setting in the index-concentrator-custom.xml file. If this setting is not present or is set to 0, the Concentrator treats the number of values for that meta key as unlimited, which means that the index can become bloated and degrade performance. Therefore, the valueMax value must always be set.
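One way to spot keys with no effective valueMax is to grep the custom index file for entries whose valueMax attribute is missing or "0". A minimal sketch, assuming the standard key-entry format and the file path used elsewhere in this checklist:

```shell
# Sketch: list <key> entries in the custom index file whose valueMax is
# absent or "0" (effectively unlimited, as described above).
find_unbounded_keys() {  # $1 = custom index file
  grep -n '<key ' "$1" | grep -Ev 'valueMax="[1-9]'
}

f=/etc/netwitness/ng/index-concentrator-custom.xml
if [ -r "$f" ]; then
  find_unbounded_keys "$f"
fi
```

Any line this prints identifies a meta key whose index can grow without bound.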

It is also important to monitor the number of slices created on a Log Hybrid or Log Concentrator. If they reach a certain number, performance can degrade. When hosts reach the following number of index slices, an index reset is required:

  • Log Hybrid: 250 index slices
  • Log Concentrator: 500 index slices

NW Database Configuration Verification

Professional Services usually configures the Core NW database parameters and handles related issues. The information below is quoted from an internal support document, "The Core Database Tuning Guide," and provides information about the syntax that can be used and configuration best practices for the core NW database.

Syntax used

The following example shows the syntax for NW database configuration: /var/netwitness/decoder/packetdb=10tb;/var/netwitness/decoder0/packetdb==20.5tb

The size values are optional. If set, they indicate the maximum total size of files stored before databases roll over. If the size is not present, the database does not automatically roll over, but its size can be managed using other mechanisms.

The use of = or == is significant. The default behavior of the databases is to automatically create directories specified when the Core service starts. However, this behavior can be overridden by using the == syntax. If == is used, the service does not create any directories. If the directories do not exist when the service starts, the service does not successfully start processing. This gives the service resilience against file systems that are missing or unmounted when the host boots.

Verification of the 95% threshold

To ensure that the NW database directory sizes are configured with the correct 95% threshold, in the Security Analytics UI:

  1. Go to the Security Analytics service Explore view, right-click on Properties and select reconfig.
  2. In the parameters field, type Update=0 and click Send. The response output will check the host storage and attached storage, and automatically calculates what the 95% threshold is.
  3. When you type Update=1 and click Send, the response output displays the same response as in the previous step, but when you refresh the Explore view, you will see that the session, meta, and packet database directories size have been updated to 95% of the current available storage.
  4. Restart the Concentrator or Decoder service for the changes to take effect.

Concentrator Health Checks using the Security Analytics UI

Daily Tasks for Concentrator Health Checks

                            
Task Title | Description

Check Health and Wellness for any errors related to hosts

Go to Administration > Health & Wellness and click the Alarms tab.

 

Check aggregation status, rate and auto start

  1. Go to Administration > Services and select a Concentrator service.

  2. Click View > Config. From the Config drop-down menu at the top of the page, select Stats.

  3. In Key Stats, check the values Rate, Behind, and Status, and make sure the sessions-behind value is 0.

 

Confirm metadata at the Concentrator is available for investigation

Go to Investigation > Navigate and select Load Values. 

 

Weekly Tasks for Concentrator Health Checks

                       
Task Title | Description

Set query.parse to strict

  1. Go to Administration > Services and select a Concentrator service.

  2. In the Actions menu, click View > Explore.

  3. In the left pane, expand sdk and select config. Ensure that query.parse is set to strict.

 

Review configured storage for NWDB

  1. Go to Administration > Services and select a Concentrator service.

  2. In the Actions menu, click View > Explore and in the left pane, select database > config.

  3. Look in the configured storage for NWDB (packet.dir, meta.dir, session.dir, index.dir), which should be using up to 95% of available storage (local storage and DAC).

 

 

Monthly Tasks for Concentrator Health Checks

                            
Task Title | Description

Ensure NWDB storage configuration is correct

  1. Go to Administration > Services and select a Concentrator service.

  2. In the Actions menu, click View > Explore.

  3. In the left panel, right-click on database and select Properties.

  4. From the drop down menu, select reconfig, and in Parameters, type update=0, and click Send. This calculates what the NWDB size-configuration should be for all available storage to the server.

  5. If this configuration does not match the current configuration, in Parameters, type update=1 and then restart the nwconcentrator service to implement the correct NWDB storage configuration.

 

Verify all meta keys are configured with correct format and valueMax entries

  1. Go to Administration > Services and select a Concentrator service.

  2. In the Actions menu, click View > Config.

  3. Select the Files tab, and from the drop down list, select the index-concentrator-custom.xml file and verify that all the meta keys are configured with the correct format and valueMax entries.

 

Check /index/config/save.session.count

  1. Go to Administration > Services and select a Concentrator service

  2. In the Actions menu, click View > Config.

  3. From the Config menu at the top of the page, select Explore.

  4. In the left pane, select index > config. save.session.count is displayed in the right pane. save.session.count is 600000000 by default (in 10.5.X and later). If save.session.count=0, then index slice creation is still controlled by the service scheduler.

  5. In View > Config, select the Files tab and from the drop down list, select scheduler. The scheduler entry should look similar to: /sys/config/scheduler/351 = hours=8 pathname=/index msg=save

  6. If /index/config/save.session.count=0 and the index save schedule is every 8 hours, there are at least 21 index slices created every week. Assuming that the majority of queries are seven days or less + 1 for the current index slice, update the index slice to:
    /index/config/index.slices.open (Index Open Slice Count) = 0 => 22

This change should reduce the maximum amount of memory that the Concentrator service can use for queries.

Note: The change is immediate and does not require a service restart.
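The value 22 in the step above comes from simple arithmetic, which can be sanity-checked as follows:

```shell
# One index save every 8 hours = 3 slices per day; a 7-day query window
# gives 21 slices, plus 1 for the currently open slice.
slices_per_day=$((24 / 8))
open_slices=$((slices_per_day * 7 + 1))
echo "$open_slices"   # prints 22
```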

 

 

Concentrator Checks Using SSH-Session/ CLI

Daily Tasks for Concentrator

                                      
Task Title | Description

Check storage usage

Run the command: df -h

 

Check memory usage

Run the command: free -g

 

Check for meta keys exceeding their valueMax per slice

Run the command:
cat /var/log/messages | grep -i index | grep -i max

 

Test execution of Puppet provisioning script

Run the command: puppet agent -t

 

Verify required ports are open

Run the command: netstat -alnp | grep "port_no"

 

 

Weekly Tasks for Concentrator

                       
Task Title | Description

Index-check: Check the number of slices

The number of slices should be 400 or less. Check the stat node:
/index/stats/slices.total
(Index Slice Total)

 

Index check: Check size of index slices

Run the command:
cd /var/netwitness/concentrator ; du -h index

Note: Index slices on hybrid systems should be less than or equal to 10 GB.

 

Event Stream Analysis (ESA) Health Checks

In this section, we describe regular health checks to perform for ESA. You perform these tasks using both the Security Analytics user interface and SSH-Session/ CLI.

ESA checks using the Security Analytics UI

Daily Tasks for ESA

                            
Task Title | Description

Check the Events per Second (EPS) rate

ESA hosts are capable of ingesting 90,000 EPS as a maximum. Monitoring EPS ensures this limit is not breached and allows for appropriate planning. Monitor EPS for an ESA host at the following location:

Alerts > Configure > Services, select an ESA host and check Offered Rate.

 

Check Mongo database status

The Mongo database on the ESA host is responsible for storing the alerts and incident management information. After a period of time, it is possible for this database to grow large and cause performance issues. RSA recommends that the Mongo database does not exceed 5 GB in size. Ensure that you set up database maintenance to prevent it from exceeding 5 GB at the following location:

Administration > Services, select an ESA service. From the Actions menu, select View > Explore > Alert > Storage > Maintenance.

 

Ensure that all data source connections are enabled

  1. Go to Services and select an ESA service.

  2. From the Actions menu, select View > Config.

 

 

Weekly Tasks for ESA

                                      
Task Title | Description

Ensure that ESA rules resource-usage monitoring is enabled

  1. Go to Administration > Services, select an ESA service. From the Actions menu, select View > Explore.

  2. In the left pane, expand CEP and go to Metrics > configuration, and ensure that EnabledMemoryMetric, EnabledCaptureSnapshot and EnableStats are set to true.

  3. Restart the ESA service.

 

Monitor ESA rules memory usage

  1. Go to Administration > Health & Wellness > System Stats Browser.

  2. Enter the following options in the fields at the top of the page:
    Host = ESA
    Component = Event Stream Analytics
    Category = esa-metrics

  3. Click Apply.

 

Ensure that the correct Concentrators are added to the ESA service as data sources

  1. Go to Administration > Services and select an ESA service.

  2. From the Actions menu, select View > Config and ensure that the list of Concentrators is correct.

  3. Ensure that all Concentrators are enabled and that the default port is set to 56005.

 

Ensure that the number of enabled rules meets requirements

Go to Alerts > Configure > Services > Rule Stats.

 

Ensure all ESA rules are deployed after updates

  1. Go to Alerts > Configure > Rules.

  2. Ensure that there is no exclamation mark beside the deployment.

 

ESA Checks Using SSH-Session/ CLI

Weekly Tasks for ESA

                  
Task Title | Description

Make sure that there are no sessions behind between ESA and its data sources (Concentrators, Decoders)

SSH to the ESA host and run the following commands.

Note: Lines beginning with a prompt (for example, carlos:...>) are user inputs; the other lines are system outputs.

root@ESA]# /opt/rsa/esa/client/bin/esa-client --profiles carlos

carlos:offline||jmx:localhost:com.rsa.netwitness.esa:/>carlos-connect

RemoteJmsDirectEndpoint { jms://localhost:50030?carlos.useSSL=true } ; running = true

carlos:localhost||jmx:localhost:com.rsa.netwitness.esa:/>cd nextgen

/Workflow/Source/nextgenAggregationSource

carlos:localhost||jmx:localhost:com.rsa.netwitness.esa:/Workflow/Source/nextgenAggregationSource> get .

"name" : "10.xx.xx.xx:56005",
"note" : "",
"sessionId" : 24462390949,
"sessionsBehind" : 58501036,
"state" : "IDLE_QUEUED",
"status" : "Streaming",
"time" : 1459508373000
}, {

 

Log Collector Health Checks

In this section, we describe regular health checks to perform for Log Collector. You perform these tasks using both the Security Analytics user interface and SSH-Session/ CLI.

Log Collector checks using the Security Analytics UI

Daily Tasks for Log Collector

                       
Task Title | Description

Ensure all subcollections are started

  1. Go to Administration > Services, and select a Log Collector service.

  2. From the Actions menu, select View > System and check the collection status to ensure that the relevant collections have been started.

 

Check the Start collection on service startup status

  1. Go to Administration > Services, and select a Log Collector service.

  2. From the Actions menu, select View > Config > Collector Configuration.

 

 

Weekly Tasks for Log Collector

                                 
Task Title | Description

Ensure Remote Log Collectors (VLCs) are configured

If VLCs are deployed and configured to use the Pull model, ensure that the VLCs are included in the Log Collector configuration.

  1. Go to Administration > Services, and select a Log Collector service.

  2. In the Actions menu, select View > Config > Remote Collectors.

 

Ensure the Decoder is defined for the Log Collector in Event Destinations

  1. Go to Administration > Services, and select a Log Collector service.

  2. In the Actions menu, select View > Config > Event Destinations and make sure the status is "started".

 

Ensure ports are set to default 50006 and 56006 for SSL

  1. Go to Administration > Services, and select a Log Collector service.

  2. In the Actions menu, select View > Config. Select the Appliance Service Configuration tab and ensure that:
    Port is set to 50006
    SSL Port is set to 56006

 

Ensure that tcpconnector communication is non-SSL

  1. Go to Administration > Services, and select a Log Collector service.

  2. In the Actions menu, select View > Explore.

  3. In the left panel, expand event-processors as follows: event-processors > logdecoder > destinations > logdecoder > consumer > processors > tcpconnector > config > connector > channel > tcp and change ssl from true to false.

 

Log Collector Checks Using SSH-Session/ CLI

Daily Tasks for Log Collector

                       
Task Title | Description

Ensure rabbitmq-server is started

From logcollector SSH, run the command:
service rabbitmq-server status

 

Ensure nwlogcollector service is up and running

From logcollector SSH, run the command:
status nwlogcollector
 

 

Weekly Tasks for Log Collector

                       
Task Title | Description

Make sure all queues have at least one consumer

Run the command:
rabbitmqctl list_queues -p logcollection messages_ready name consumers

 

Ensure that there are no stuck rdq files

Navigate to the following location and ensure that msg_store_persistent has no more than one file:
/var/lib/rabbitmq/mnesia/sa@localhost/msg_store_persistent
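The file-count check above can be scripted; a sketch (the path is from this checklist, and the one-file rule is the guideline stated above):

```shell
# Sketch: count .rdq files in the persistent message store. More than one
# file suggests a message backlog, per the check above.
count_rdq() {  # $1 = msg_store directory
  find "$1" -maxdepth 1 -name '*.rdq' | wc -l
}

d=/var/lib/rabbitmq/mnesia/sa@localhost/msg_store_persistent
if [ -d "$d" ]; then
  count_rdq "$d"
fi
```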

 

 

Monthly Tasks for Log Collector

                  
Task Title | Description

If the host is a VLC, verify that logCollectionType is set to RC

Run the command:
cat /etc/netwitness/ng/logcollection/{logCollectionType}

 

Log Decoder Health Checks

In this section, we describe regular health checks to perform for Log Decoder. You perform these tasks using both the Security Analytics user interface and SSH-Session/ CLI.

Log Decoder checks using the Security Analytics UI and Explore/REST

Daily Tasks for Log Decoder

                       
Task Title | Description

Ensure that capture has been started.

  1. Go to Administration > Services, and select a Log Decoder service.

  2. From the Actions menu, select View > System and check the capture status.

 

Ensure that the capture rate is within the EPS range

  1. Go to Administration > Services, and select a Log Decoder service.

  2. From the Actions menu, select View > Config.

  3. From the Config dropdown menu, select Stats.

 

 

Weekly Tasks for Log Decoder

                            
Task Title | Description

Ensure that parsers are enabled

  1. Go to Administration > Services, and select a Log Decoder service.

  2. From the Actions menu, select View > Config and on the General tab, check the Parsers Configuration section.

 

Ensure that the ports are set to default 50002 and 56002 for SSL

  1. Go to Administration > Services, and select a Log Decoder service.

  2. From the Actions menu, select View > Config and on the General tab in the System Configuration section, ensure that:
    Port is set to 50002
    SSL Port is set to 56002

 

Ensure that the correct capture interface is selected

  1. Go to Administration > Services, and select a Log Decoder service.

  2. From the Actions menu, select View > Config.

  3. On the General tab, check the Decoder Configuration section.

 

 

Monthly Tasks for Log Decoder

                       
Task Title | Description

Ensure that databases are correctly configured

  1. Go to Administration > Services, and select a Log Decoder service. From the Actions menu, select View > Explore.

  2. In the left pane, right-click on database and select Properties, and from the drop-down menu, select reconfig.

  3. In Parameters, type update=0, and then click Send.

  4. Compare the response output with the current configuration.

 

Ensure Decoders are Synchronized with Enabled Parsers

To ensure accurate analysis and that forensic data is available to analysts, Decoders must be in sync with enabled parsers. Because the list of enabled parsers can grow in a typical installation, the easiest way to validate the list of parsers that are enabled on Decoders is to check which parsers are currently disabled by using the parsers.disabled attribute in the Explore/REST interface. Perform the steps in Validate the List of Enabled Parsers on Decoders.

 

Log Decoder Checks Using SSH-Session/ CLI

Daily Tasks for Log Decoder

                  
Task Title | Description

Ensure that Log Decoder is listening on port 514

Run the command: netstat -anp | grep 514

Archiver Health Checks

In this section, we describe regular health checks to perform for Archiver. You perform these tasks using both the Security Analytics user interface and SSH-Session/ CLI.

Archiver checks using the Security Analytics UI

Daily Tasks for Archiver

                            
Task Title | Description

Ensure that aggregation has started

  1. Go to Administration > Services, and select an Archiver service.

  2. From the Actions menu, select View > System and check the aggregation status.

 

Ensure that Log Decoder aggregation is correct and status is consuming

  1. Go to Administration > Services, and select an Archiver service.

  2. From the Actions menu, select View > Config and check the General tab.

 

Ensure that aggregation is automatically started

  1. Go to Administration > Services, and select an Archiver service.

  2. From the Actions menu, select View > Config.

  3. On the General tab, under Aggregation Configuration > Aggregation Settings, ensure that Aggregate Autostart is selected.

 

 

Weekly Tasks for Archiver

                       
Task Title | Description

Ensure that all metas are added to Meta Include

  1. Go to Administration > Services, and select an Archiver service.

  2. From the Actions menu, select View > Config and on the General tab, check the Meta Include column.

 

Ensure that ports are set to default 50006 and SSL 56006

  1. Go to Administration > Services, and select an Archiver service.

  2. In the Actions menu, select View > Config. Select the Appliance Service Configuration tab and ensure that:
    Port is set to 50006
    SSL Port is set to 56006

 

 

Monthly Tasks for Archiver

                       
Task Title | Description

Ensure that all database directories are set to 0 B

  1. Go to Administration > Services, and select an Archiver service. From the Actions menu, select View > Explore.

  2. In the left pane, expand archiver > collections > default > database > config.

  3. In the right pane, check meta.dir, packet.dir, and session.dir and ensure that they are set to 0 B.

 

Ensure that 95% of usable storage is set

  1. Go to Administration > Services, and select an Archiver service. In the Actions menu, select View > Config.

  2. Select the Data Retention tab, and under Collections, check the values for Hot Storage, Warm Storage, and Cold Storage.

 

Packet Decoder Health Checks

In this section, we describe regular health checks to perform for Packet Decoders. You perform these tasks using both the Security Analytics user interface and SSH-Session/ CLI.

Packet Decoder checks using the Security Analytics UI

Daily Tasks for Packet Decoder

                                 
Task Title | Description

Ensure that capture has been started.

  1. Go to Administration > Services, and select a Packet Decoder service.

  2. From the Actions menu, select View > System and check the capture status.

 

Ensure that the correct capture interface is selected

  1. Go to Administration > Services, and select a Packet Decoder service.

  2. From the Actions menu, select View > Config and on the General tab, check the Decoder Configuration section.

 

Ensure that parsers are enabled

  1. Go to Administration > Services, and select a Packet Decoder service.

  2. From the Actions menu, select View > Config and on the General tab, check the Parsers Configuration section.

 

Ensure that the capture rate is within the EPS range

  1. Go to Administration > Services, and select a Packet Decoder service.

  2. From the Actions menu, select View > Config.

  3. From the Config dropdown menu, select Stats.

 

 

Weekly Tasks for Packet Decoder

                       

Ensure that the port is set to the default (50004) and the SSL port to 56004

  1. Go to Administration > Services, and select a Packet Decoder service.

  2. In the Actions menu, select View > Config.

  3. Select the Appliance Service Configuration tab and ensure that:
    • Port is set to 50004
    • SSL Port is set to 56004

 

Ensure that there are no Flex parsers or duplicate parsers enabled

  1. Go to Administration > Services, and select a Packet Decoder service.

  2. From the Actions menu, select View > Config and on the General tab, check the Parsers Configuration section.

 

Packet Decoder Checks Using SSH-Session/ CLI

Daily Tasks for Packet Decoder

                  

Monitor the current transmission rate, packet drops, and errors on all interfaces

Run the command: netstat -i 
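
The netstat -i check can be scripted so that only interfaces reporting problems are listed. A minimal sketch, assuming the column layout of net-tools netstat (Iface, MTU, RX-OK, RX-ERR, RX-DRP, RX-OVR, TX-OK, TX-ERR, TX-DRP, TX-OVR); the flag_drops helper name is ours, and the sample output is illustrative only:

```shell
# Hypothetical helper: read `netstat -i` output on stdin and print only
# interfaces with nonzero error or drop counters. Header lines are skipped
# because their RX-OK column ($3) is not numeric.
flag_drops() {
  awk '$3 ~ /^[0-9]+$/ && ($4 > 0 || $5 > 0 || $8 > 0 || $9 > 0) {
    printf "%s RX-ERR=%s RX-DRP=%s TX-ERR=%s TX-DRP=%s\n", $1, $4, $5, $8, $9 }'
}

# Demo on captured sample output; on a live host run:  netstat -i | flag_drops
sample='Iface      MTU    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0      1500   992710      0   1203      0   881425      0      0      0 BMRU
lo       65536    53210      0      0      0    53210      0      0      0 LRU'
printf '%s\n' "$sample" | flag_drops
```

A steadily growing RX-DRP count on a capture interface usually means the incoming rate is exceeding what the host can process.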

Log Locations

If issues arise with any component of the Security Analytics platform, use the following table to locate the component log files to assist with troubleshooting.

                                                           
Component | Log Location
SA Server UI | /var/lib/netwitness/uax/logs/sa.log
SA Server Jetty | /var/lib/netwitness/uax/logs/sa.log and /opt/rsa/jetty9/logs/<YYYY>_<MM>_<DD>.stderrorout.log
RabbitMQ | /var/log/rabbitmq/sa@localhost.log and /var/log/rabbitmq/startup_log
Reporting Engine | /home/rsasoc/rsa/soc/reporting-engine/logs/reporting-engine.log
Upgrading | /var/log/yum.log
CollectD | /var/log/messages
Puppet | /var/log/messages
Puppet Master | /var/log/puppet/masterhttp.log
MCollective | /var/log/mcollective.log
ESA | /opt/rsa/esa/logs/esa.log
Log Collector | /var/netwitness/logcollector/rabbitmq/logcollector@localhost.log
General | /var/log/messages
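
The locations above can be swept in one pass when troubleshooting. A hedged sketch (the paths are copied from the table; the scan_log helper and the ERROR|FATAL pattern are our choices, not an RSA tool):

```shell
# Hypothetical helper: print the last few ERROR/FATAL lines from a log file,
# or note when the file is absent (e.g., on hosts without that component).
scan_log() {
  [ -r "$1" ] || { echo "skip (not readable): $1"; return 0; }
  grep -E 'ERROR|FATAL' "$1" | tail -n 5
}

# Paths from the Log Locations table; trim the list to the host's role.
for f in /var/lib/netwitness/uax/logs/sa.log \
         /home/rsasoc/rsa/soc/reporting-engine/logs/reporting-engine.log \
         /opt/rsa/esa/logs/esa.log \
         /var/log/messages; do
  echo "== $f =="
  scan_log "$f"
done
```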

Capacity and Support Matrix

The following sections describe the supported capacity for Archiver storage and Concentrators.

Archiver Storage

                   
Host | Number of DACs
Series 4 Log Hybrid | Up to 5 for the hot tier; NAS for the warm tier (no limit); cold tier offline backup (no limit)
Series 5 Log Hybrid | Up to 8 for the hot tier; NAS for the warm tier (no limit); cold tier offline backup (no limit)

EPS Rates

Concentrators

                           
EPS Rate | Evaluation
20,000 | Anticipate procurement of new hosts if the EPS rate is expected to increase.
25,000 | If the EPS rate is intended to be increased beyond 25,000, procure a new Concentrator and keep a 1:1 ratio of Log Decoders to Concentrators.
30,000 | Reduce the EPS rate to the Concentrator and monitor performance.
> 30,000 | Sustained rates above 30,000 can cause the Concentrator to fall behind in aggregation.
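
For monitoring scripts, the evaluation column can be encoded directly. This is a sketch only: the function name is ours, and the boundaries between rows are our reading of the table above:

```shell
# Hypothetical helper: map a Concentrator EPS reading to the evaluation rows.
concentrator_eps_advice() {
  eps=$1
  if   [ "$eps" -le 20000 ]; then echo "OK: anticipate new hosts if EPS is expected to increase"
  elif [ "$eps" -le 25000 ]; then echo "WARN: procure a new Concentrator; keep Log Decoders and Concentrators 1:1"
  elif [ "$eps" -le 30000 ]; then echo "ACTION: reduce EPS to this Concentrator and monitor performance"
  else                            echo "CRITICAL: sustained rates above 30,000 EPS; aggregation may fall behind"
  fi
}

concentrator_eps_advice 18000
concentrator_eps_advice 32000
```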

 

Log Hybrid

                           
EPS Rate | Evaluation
6,000 | Anticipate procurement of new hosts if the EPS rate is expected to increase.
8,000 | If the EPS rate is intended to be increased beyond 8,000, procure a Log Decoder and Concentrator pair with respective DACs, or a Log Hybrid with a DAC (if required).
10,000 | Reduce the EPS rate to the Log Decoder and monitor performance.
> 10,000 | Sustained rates above 10,000 can cause log drops. The EPS rate should be reduced.

 

Log Decoder

                           
EPS Rate | Evaluation
20,000 | Anticipate procurement of new hosts if the EPS rate is expected to increase.
25,000 | If the EPS rate is intended to be increased beyond 25,000, procure a Log Decoder and Concentrator pair with respective DACs, or a Log Hybrid with a DAC (if required).
30,000 | Reduce the EPS rate to the Log Decoder and monitor performance.
> 30,000 | Sustained rates above 30,000 can cause log drops. The EPS rate should be reduced.

Number of DACs per Host

                   
Host | Number of DACs
Series 4 Log Decoder | Up to 5, or one UltraDAC.
Series 5 Log Decoder | Up to 8, or one UltraDAC.

VLC

                                             
EPS Rate | Quantity of CPUs | CPU Specifications | RAM (GB) | Disk (GB)
Up to 1,000 | 2 | Intel Xeon CPU @2.00 GHz | 2 | 150
Up to 2,500 | 2 | Intel Xeon CPU @2.00 GHz | 2.5 | 150
Up to 5,000 | 3 | Intel Xeon CPU @2.00 GHz | 3 | 150
Up to 20,000 | 8 | Intel Xeon CPU @2.00 GHz | 8 | 150

Supported Browsers

When using the Security Analytics user interface, RSA recommends that you use the following browsers:

  • Google Chrome
  • Firefox

List of Ports

The following table describes the port numbers that need to be opened for each service.

Important: Maintain this list of ports on your firewalls, and check these ports every month.
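
These ports can be spot-checked from any administration workstation. A sketch using bash's /dev/tcp redirection (no extra tooling required); sa-server.example.com is a placeholder hostname, and check_port is our helper name:

```shell
# Hypothetical helper: report whether a TCP port answers within 3 seconds.
# A firewall that silently drops packets shows up as CLOSED via the timeout.
check_port() {
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "OPEN   $1:$2"
  else
    echo "CLOSED $1:$2"
  fi
}

# Example pairs from the table below (replace the host with your SA Server):
check_port sa-server.example.com 80
check_port sa-server.example.com 8140
```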

                                                                                                                                                                                                                                                                     

From Host | To Host | To Ports (Protocol) | Comments
Any host | Security Analytics Server | 80 (TCP) | Yum/HTTP: All SA hosts receive RPM package updates from the Yum repository located on the Security Analytics Server over HTTP. This is two-way communication between any host and the Security Analytics Server.
Any host | Security Analytics Server | 8140 (TCP) | Puppet master/HTTPS: All communication from any host to the Security Analytics Server (Puppet master) is over HTTPS.
Any host | Security Analytics Server | 61614 (STOMP/TCP) | rabbitmq-server (MCollective/STOMP): All communication from any host to the Security Analytics Server (rabbitmq-server) is over MCollective/STOMP.
Security Analytics Server | Any host | 5671 (AMQPS/TCP) | rabbitmq-server (RabbitMQ/AMQPS): All communication from the Security Analytics Server (rabbitmq-server) to any host is over RabbitMQ/AMQPS. Note: Port 5671 must be open both ways between the SA Server and the other hosts for version updates.
Security Analytics Server | Log Decoder | 56002 (SSL/TCP), 50002 (non-SSL/TCP), 50102 (REST/TCP; Security Analytics 10.3 and earlier only) |
Security Analytics Server | Broker | 56003 (SSL/TCP), 50003 (non-SSL/TCP), 50103 (REST/TCP; Security Analytics 10.3 and earlier only) |
Security Analytics Server | Concentrator | 56005 (SSL/TCP), 50005 (non-SSL/TCP), 50105 (REST/TCP; Security Analytics 10.3 and earlier only) |
Security Analytics Server | Packet Decoder | 56004 (SSL/TCP), 50004 (non-SSL/TCP), 50104 (REST/TCP) |
Security Analytics Server | Log Collector (Local, Remote, and Windows Legacy) | 56001 (SSL/TCP), 50001 (non-SSL/TCP), 50101 (REST/TCP; Security Analytics 10.3 and earlier only) |
Security Analytics Server | Archiver | 56008 (SSL/TCP), 50008 (non-SSL/TCP), 50108 (REST/TCP) |
Security Analytics Server | ESA | 50030 (SSL/TCP), 27017 (SSL/TCP) (default) | 27017 is for one ESA host only.
Security Analytics Server | ESA - Context Hub | 50022 (SSL/TCP) |
Security Analytics Server | Malware (co-located on Security Analytics Server) | 60007 (TCP) |
Security Analytics Server | Reporting Engine (rsa-re on Security Analytics Server) | 51113 (SSL/TCP) |
Security Analytics Server | Incident Management (rsa-im on Security Analytics Server) | 50040 (TCP) |
Security Analytics Server (IPDB Extractor) | enVision IPDB | 135, 138, 139, 445 (TCP/UDP) |
Security Analytics Server | IPDB Extractor | 56025 (SSL/TCP), 50025 (non-SSL/TCP), 50125 (REST/TCP) |
Security Analytics Server | Warehouse Connector (on Packet Decoder/Log Decoder) | 56020 (SSL), 50020 (non-SSL) |
Security Analytics Server | ECAT | 443 (TCP) |
Security Analytics Server | Host Service (all hosts) | 56006 (SSL/TCP), 50006 (non-SSL/TCP) |
Security Analytics Server | Workbench (Archiver) | 56007 (SSL/TCP), 50007 (non-SSL/TCP), 50107 (REST/TCP) |
Security Analytics Server | Audit Log Syslog Receiver (a third-party syslog receiver or a Log Decoder) | 514 (TCP/UDP) | Required only if SA audit logs are sent to a Log Decoder or third-party syslog receiver to be parsed.
Concentrator | Packet Decoder | 56004 |

 

Concentrator