Backing up your configuration data for all your hosts on 10.6.6.x is the first step in upgrading from Security Analytics 10.6.6.x releases to NetWitness Platform 11.x. Go to the Master Table of Contents to find all NetWitness Platform Logs & Network 11.x documents.
The following types of hosts can be backed up and are automatically restored during the upgrade process:
- Security Analytics Admin Server
- Standalone Malware Analysis
- Event Stream Analysis (including Context Hub and Incident Management database)
- Log Decoder (including Local Log Collector and Warehouse Connector, if installed)
- Log Hybrid
- Network Decoder (including Warehouse Connector, if installed)
- Network Hybrid
- Virtual Log Collector
The following types of files are automatically backed up but must be restored manually after the upgrade process:
PAM configuration files: For information about restoring the PAM configuration files, refer to "Task 8 - Reconfigure Pluggable Authentication Module (PAM) in 11.x", in the "General" section of the Post Upgrade Tasks.
/etc/pfring/mtu.conf and /etc/init.d/pf_ring: To restore these files, you must retrieve them manually. The mtu.conf files are located in /var/netwitness/database/nw-backup/restore/etc/pfring/mtu.conf, and the pf_ring files are located in /var/netwitness/database/nw-backup/restore/etc/init.d/pf_ring. Complete the following steps to restore the files.
- Restore the pf_ring file to the /etc/init.d/ directory in 11.x.
- Restore the mtu.conf file to the /etc/pf_ring/ directory in 11.x.
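The restore steps above can be sketched as a small shell script. The backup paths are the defaults quoted in this section, and the script simply skips any file it cannot find:

```shell
#!/bin/sh
# Copy a backed-up file into place, preserving permissions; skip if missing.
restore_file() {
    src=$1 dst=$2
    if [ -f "$src" ]; then
        mkdir -p "$(dirname "$dst")"
        cp -p "$src" "$dst"
        echo "restored: $dst"
    else
        echo "skipped (not found): $src"
    fi
}

RESTORE=/var/netwitness/database/nw-backup/restore
restore_file "$RESTORE/etc/init.d/pf_ring"  /etc/init.d/pf_ring
restore_file "$RESTORE/etc/pfring/mtu.conf" /etc/pf_ring/mtu.conf
```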
The following diagram shows the high-level task flow of the steps you perform to back up your hosts.
The following sections describe each of these tasks:
- Task 1 - Set up an External Host for Backing up Files
- Task 2 - Create a List of Hosts to Back up
- Task 3 - Set up Authentication Between Backup and Target Hosts
- Task 4 - Check for Backup Requirements for Specific Types of Hosts
- Task 5 - Check for Adequate Space for the Backup
- Task 6 - Back up Your Host Systems
- Post Backup Tasks
You must set up an external host to use for backing up files. The host must be running CentOS 6 (including the "openssh-clients" package) with connectivity through SSH to the Security Analytics stack of hosts. CentOS 6 Minimal does not include the "openssh-clients" package.
Ensure that the host names for the systems to be backed up are resolvable on the backup host machine, either by DNS or listed in the /etc/hosts file.
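One way to verify resolvability before starting is a quick loop over the host names; the names below are placeholders for your own, and getent consults the same sources ssh uses (DNS and /etc/hosts):

```shell
#!/bin/sh
# Check that each host to be backed up resolves from the backup host.
# Substitute your own host names for these placeholder values.
for h in sa-server.example.com log-decoder.example.com; do
    if getent hosts "$h" > /dev/null; then
        echo "resolves: $h"
    else
        echo "UNRESOLVED: $h (add a DNS record or an /etc/hosts entry)"
    fi
done
```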
There are several scripts that you run during the backup process. You must download the zip file that contains the scripts (nw-backup-v4.3.zip or later) from RSA Link at this location: https://community.rsa.com/docs/DOC-81514 and copy it over to your CentOS 6 backup system. Extract the zip file to access the scripts. The scripts are:
- get-all-systems.sh: Creates the all-systems file, which contains a list of all your Security Analytics Servers and host systems to be backed up.
- ssh-propagate.sh: Automates sharing keys between the systems you are backing up and the backup host system so that you are not prompted for passwords multiple times.
- nw-backup.sh: Performs the backup of your hosts.
- azure-mac-retention.ps1: Applies only if you are using Azure. See the Azure Installation Guide for more information.
The script that you use to back up your files depends on the all-systems and all-systems-master-copy files, which contain a list of the hosts that you want to back up. The all-systems-master-copy file contains a list of all your hosts. The all-systems file is used for each backup session, and contains only those hosts which are being backed up for a particular session. You run the get-all-systems.sh script to generate these files. RSA recommends that you back up your hosts in groups, and not all at once. The recommended order and grouping of hosts for backup sessions is shown in the following diagram:
Limit each backup session to five hosts to ensure that you do not run out of space for the backup files. You create all-systems files for your backup sessions by using the all-systems-master-copy file as a reference and then manually editing the all-systems file to contain specific hosts.
To generate the all-systems and the all-systems-master-copy files:
- From the host on which you are running the backup process, make the get-all-systems.sh script executable by running the following command:
chmod u+x get-all-systems.sh
- At the root level, run the get-all-systems.sh script:
./get-all-systems.sh
You will be prompted for the password for each host system once per host.
This script saves the all-systems file and the all-systems-master-copy file to /var/netwitness/database/nw-backup/.
- Validate that the all-systems and all-systems-master-copy files were generated and that they contain the right hosts.
- Edit the all-systems file to contain only the systems you are backing up. You can do this by using the all-systems-master-copy file as a reference, and then opening the all-systems file in an editor (such as vi) and modifying it to include only the systems you want to back up. RSA recommends that you comment out the hosts that you do not want to back up (add the number sign (#) to the beginning of the line that contains the host that will not be backed up).
The following example shows how to comment out the 10.6.6 Security Analytics Server:
Here is an example of an all-systems-master-copy file:
And here is an example of an all-systems file that could be used in the first backup session, where only the Security Analytics Server, ESA host, and Malware Analysis host are backed up:
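As a sketch of the pattern (the placeholder entries below are illustrative only; copy the real lines from your own all-systems-master-copy file), an all-systems file for that first session might look like:

```text
<Security Analytics Server entry, copied from all-systems-master-copy>
<ESA host entry, copied from all-systems-master-copy>
<Malware Analysis host entry, copied from all-systems-master-copy>
#<Log Decoder entry - commented out, not backed up in this session>
#<Concentrator entry - commented out, not backed up in this session>
```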
Be sure to save copies of the all-systems and all-systems-master-copy files in a safe location. Follow these recommendations.
- Do not edit the all-systems-master-copy file.
- If you create several different versions of the all-systems file (for example, for several backup sessions), be sure that each version of the file lists only those hosts that are currently being backed up, and the other hosts are commented out. For more information, see Post Backup Tasks.
If any host systems are down while you are running the get-all-systems.sh script, the script creates a list of hosts for which it cannot find information. After the script completes and the all-systems file is created, you must edit the all-systems file manually and add the missing information for these hosts.
The get-all-systems.sh script generates a list of hosts that were defined in the Security Analytics user interface. Ensure that all hosts and services are provisioned properly. If any hosts or services are not provisioned properly, they will not be backed up. RSA recommends that when you add hosts and services to Security Analytics, you use the Security Analytics user interface to ensure that they are provisioned properly. However, if there are any hosts or services that were not defined in the user interface, you must add them to the all-systems file manually.
At the end of the get-all-systems.sh script run, the script checks for any differences between the systems that the Security Analytics Server has listed and the ones for which the script was able to find all the required information. If any Node IDs or system names are listed as missing, verify that those systems exist, that their services are all running, and that they are communicating properly with the Security Analytics Server. (Any Windows Legacy Collectors or AWS Cloud Collectors will not be added to the all-systems file and may account for discrepancies. DO NOT add these items to the all-systems file manually.)
If the syntax in the all-systems file is incorrect, the script will fail. For example, if there is an extra space at the beginning or the end of a host entry, the script will fail.
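You can catch such whitespace problems before running the backup with a quick grep check; the all-systems path shown is the default from this guide:

```shell
#!/bin/sh
# Print any all-systems lines with leading or trailing whitespace,
# which would make nw-backup.sh fail.
check_whitespace() {
    if grep -nE '^[[:space:]]|[[:space:]]$' "$1" 2>/dev/null; then
        echo "Fix the lines above before running nw-backup.sh."
    else
        echo "OK: no leading/trailing whitespace found (or file not present)."
    fi
}
check_whitespace /var/netwitness/database/nw-backup/all-systems
```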
RSA recommends that you run the ssh-propagate.sh script to automate sharing keys between the backup host and the host systems.
Complete the following task to set up authentication between backup and target hosts.
- On the external backup host system, make the ssh-propagate.sh script executable by running the following command:
chmod u+x ssh-propagate.sh
- At the root directory, run the following command, where <path-to-all-systems-file> is the path to the directory where the all-systems file is stored:
- You are prompted for a password once on every host, but you will not need to enter it repeatedly later during the backup process.
After you create the all-systems file to use for backup, you must check to see if any of the hosts listed in the file have requirements that must be met before you run the backup process.
Perform the following steps for all host types.
- On the Security Analytics Server, place custom certificate files and any other certificate authority (CA) files in the /root/customcerts folder to ensure that these certificate files are backed up. Custom certificate files placed in this directory are restored automatically during the upgrade process. After upgrading to 11.x, your custom certificate files are located in /etc/pki/nw/trust/import.
You can convert CA certificates and keys to different formats to make them compatible with specific types of servers or software using OpenSSL. For example, you can convert a normal PEM file that would work with Apache to a PFX (PKCS#12) file and use it with Tomcat or IIS. To convert the files, SSH to the Security Analytics Server and run the following command strings to perform the conversions listed.
Convert a DER file (.crt .cer .der) to PEM
openssl x509 -inform der -in certificate.cer -out certificate.pem
Convert a PEM file to DER
openssl x509 -outform der -in certificate.pem -out certificate.der
Convert a PEM Certificate File and a Private Key to PKCS#12 (.pfx .p12)
openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key -in certificate.crt -certfile CACert.crt
Convert a PKCS#12 File (.pfx .p12) Containing a Private Key and Certificates to PEM
openssl pkcs12 -in keyStore.pfx -out keyStore.pem -nodes
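To confirm that a conversion preserved the certificate, you can compare fingerprints before and after. The sketch below round-trips a throwaway self-signed certificate through DER and back; the file names and subject are arbitrary:

```shell
#!/bin/sh
# Create a throwaway self-signed certificate, convert PEM -> DER -> PEM,
# and verify that the SHA-256 fingerprint is unchanged.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=test" \
    -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null
openssl x509 -outform der -in "$dir/cert.pem" -out "$dir/cert.der"
openssl x509 -inform der -in "$dir/cert.der" -out "$dir/cert2.pem"
before=$(openssl x509 -noout -fingerprint -sha256 -in "$dir/cert.pem")
after=$(openssl x509 -noout -fingerprint -sha256 -in "$dir/cert2.pem")
[ "$before" = "$after" ] && echo "fingerprints match"
```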
- Manually record any custom configurations made to CentOS 6 (for example, driver customizations) for restoration after you update to CentOS 7. Custom configurations to CentOS 6 are not automatically backed up and restored.
For ESA Hosts with Mongo Databases
The default 10.6.x Mongo database password is netwitness. If you have customized this password, you could encounter an error while running the backup script. You can either use your custom Mongo database password during the backup, or you could change that password back to netwitness before running the nw-backup.sh script.
- Find out if the Mongo database password is netwitness or if it has been modified.
- If it has been modified, either change it back to netwitness, or be sure you know what the customized password is so that you can enter it during the backup.
For Decoder, Concentrator, or Broker Hosts: Stop Data Capture and Aggregation
In addition to the tasks described in For All Host Types, for Decoder, Concentrator, or Broker hosts, stop data capture and aggregation on all the systems that you are backing up. For instructions, refer to "Appendix B. Stopping and Restarting Data Capture and Aggregation."
You need the following information before you prepare Log Collectors (LCs) and Virtual Log Collectors (VLCs) for upgrade.
- If Lockbox was initialized on the LC and VLC, you must know the Lockbox password. It is required to reconfigure the Lockbox after the upgrade.
- If you set the password for logcollector user for RabbitMQ, you must know the password so you can set it again after the upgrade.
- Read For File Collection Event Sources later in this chapter. You may need to restore some SSHD configuration settings from the 10.6.x LC/VLC.
- Read For Bluecoat Event Sources later in this chapter. You may need to backup VSFTPD key material manually.
Prepare LCs and VLCs for Upgrade
Complete the following task to prepare Log Collectors and Virtual Log Collectors for the upgrade.
- SSH to the Log Collector.
- Submit the following command string.
# /opt/rsa/nwlogcollector/nwtools/prepare-for-migrate.sh --prepare
This command does the following:
- Stops the Puppet Agent service.
- Disables the file collection accounts ("sftp" and all users in the group "upload") used for uploading log files to the Log Collector. The log files accumulate on the event sources until the Log Collector has been upgraded to 11.x.
- Stops all the collection protocols in the Log Collector service.
- Saves the list of Plugin accounts and RabbitMQ accounts.
- Configures the RabbitMQ server so that new events can no longer be published to it. Consumers of events in the queues, such as shovels and Log Decoder Event Processors, continue to run.
- Waits until the Log Collector queues are empty.
- Stops the Log Collector service.
- Creates a marker file indicating that the Log Collector has been successfully prepared for upgrade.
The prepare-for-migrate.sh script:
- Sends informational, warning, and error messages to the console.
- Saves a session log in the /var/log/backup/ directory.
You must fix any of the following errors and resume the preparation. Contact RSA Customer Support (https://community.rsa.com/docs/DOC-1294) for assistance.
- Log Collector queues with events but without consumers are found.
- Unable to stop the Puppet Agent service.
- Unable to stop a collection protocol in the Log Collector service.
- Unable to block event publishers to the RabbitMQ server.
- Queue events are not consumed, or are taking too long to be consumed. The script makes 30 attempts while waiting for the events to be consumed, sleeping for 30 seconds after each attempt.
- Unable to stop the Log Collector service.
For more information about troubleshooting, see Appendix A. Troubleshooting.
If you are using older Windows and UNIX SFTP agents to upload log files, they may not be able to connect to the SSHD on the upgraded 11.x LC/VLC if they use older Ciphers, MACs, and Key Exchange Algorithms. If possible, upgrade to the latest Windows and UNIX SFTP agents. If you cannot, note the Ciphers, MACs, and Key Exchange Algorithms, if any, from the /etc/ssh/sshd_config file on the 10.6.x LC/VLC so that they can be added back to the file after the upgrade.
Bluecoat ProxySG event sources use FTPS protocol to upload log files to the Log Collector (LC) and Virtual Log Collector (VLC). The Event Source Documentation on RSA Link contains the steps to configure VSFTPD service on the LC and VLC.
- If key material exists in the /root/vsftpd/ directory in 10.6.6.x, it is backed up and restored automatically. If the material is in another location, you must back it up and restore it manually.
- If the /etc/vsftpd/vsftpd.conf file exists in 10.6.6.x, it is backed up and restored.
For Integrations with Web Threat Detection, Archer Cyber Incident & Breach Response, or NetWitness Endpoint - List RabbitMQ Usernames and Passwords
On the 10.6.6.x Security Analytics Server host, you must get a list of all RabbitMQ usernames and passwords so that you can restore the RabbitMQ user accounts after the 11.x upgrade.
To get a list of RabbitMQ usernames and passwords, run the following command:
rabbitmqctl list_users >> /root/rabbitmq_users.txt
To restore RabbitMQ user accounts, refer to "Task 42 - For Integrations with Web Threat Detection, RSA Archer Cyber Incident & Breach Response or NetWitness Endpoint" in the "NetWitness Platform Integrations" section of the Post Upgrade Tasks.
You can run the backup script with the -t option (described in Test Options) to check the amount of disk space that is required for the backup. The script runs without actually backing up files or stopping any services. RSA recommends that you perform this step to ensure that you provide adequate space for the backup so that the backup captures all your data.
Complete the following task to check for adequate disk space.
- Make the backup script executable by running the following command:
chmod u+x nw-backup.sh
- Run the following command at the root directory level:
The output displays the amount of disk space that is required for the backup.
The following figure shows an example of the output from using the -t option.
Before you run the backup script to do the actual backup, be sure that you have sufficient space. To back up your hosts, you run the nw-backup.sh script with the -u option. This option is required for upgrading to 11.x.
When you run the backup script, you can choose from several options that are described in the following sections.
./nw-backup.sh [-u -t -d -D -l -x -e <external-mnt> -b <backup file path>]
-u : Required for upgrading to 11.x. Enables the upgrade flag to run backup for upgrading to 11.x. It also enables the disk space check (-d), backs up Reporting Engine reports (-r), and stores backup content locally (-l). Default: (no)
-d : enables disk space check in 'fast' mode (quick estimate of space using uncompressed data). Default: (no)
-D : enables disk space check in 'full' mode (estimate of space using compressed data, ~10X slower). Default: (no)
-l : stores backup content locally on each host (automatically set if -u is used). Default: (no)
-e <path to mount point> : copies backup files of all devices onto an external mount point. Default: (/mnt/external_backup)
-x : move all backup files to an external mount point. Default: (no) - COPY
-b <path to write backups> : path to the location for storing backup files on a backup server. For upgrading to 18.104.22.168, please use the default location! Default: (/var/netwitness/database/nw-backup)
Advanced Content Selection Options
-c : back up Colocated Malware Analysis on SA servers. Default: (no)
-i : back up IPDB data (/var/netwitness/ipdbextractor). Default: (no)
-m : back up Malware Analysis File Repository. Default: (no)
-r : back up Reporting Engine Report Repository (automatically set if -u is used). Default: (no)
-v : back up system logs (/var/log). Default: (no)
-y : back up YUM Web Server & RPM Repository. Default: (no)
-S : If set: DISABLES back up of SMS RRD files. Default: (not-set)
-C : If set: DISABLES back up of Context-Hub configuration and database. Default: (not-set)
-E : If set: DISABLES back up of ESA Mongo database. Default: (not-set)
-t : performs a script test run for the disk space check only. Services are not stopped and the backup itself is not executed. Can be combined with (-d) or (-D) and other flags. Default: (no)
For example, the command:
./nw-backup.sh
would run the backup with the options as set in the header of the script itself.
OR, the command:
./nw-backup.sh -ue /mnt/external_backup
would run a normal backup using the backup path defined in the script, with the following options:
-u : enables the upgrade flag to run backup for upgrading to 11.x. It also enables the disk space check (-d), backs up Reporting Engine reports (-r), and stores backup content locally (-l). Default: (no)
-e : copies the backup files to the external mount point, mounted on /mnt/external_backup
For Help: ./nw-backup.sh -h
When you run the script, the following text is displayed at the top of the script:
Complete the following task to back up your hosts.
- Ensure that the all-systems file contains only the hosts to back up. For information, see Task 2 - Create a List of Hosts to Back up.
- Make the backup script executable by running the following command:
chmod u+x nw-backup.sh
Begin the backup process by running the following command at the root directory level:
When the text "Backup completed with no errors" is displayed, the backup has completed successfully.
A log file, which provides information about the files being backed up, is created in the backup directory with a name similar to the following example:
- When the backup has completed, to ensure that the intended files were backed up, you can run the following command to see a list of all the files that were backed up:
tar -tzvf hostname-ip-address-backup.tar.gz
The following archive files are created:
For all hosts:
tar checksum files
For Security Analytics Servers:
<All ESA hostname-IPaddress>-mongodb.tar.gz
tar checksum files
For ESA Hosts:
tar checksum files
The archived files are located in the /var/netwitness/database/nw-backup directory. If any of the tar files appear smaller than expected, open them to be sure that the files were properly backed up.
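Beyond eyeballing file sizes, you can verify each archive directly against its checksum file and confirm that tar can read it. The sketch below assumes the .sha256 files use standard sha256sum format (hash, two spaces, file name), which is an assumption about the backup script's output:

```shell
#!/bin/sh
# Verify the stored checksum and confirm the archive is readable by tar.
verify_archive() {
    sha256sum -c "$1.sha256" && tar -tzf "$1" > /dev/null \
        && echo "verified: $1"
}

BACKUP_DIR=/var/netwitness/database/nw-backup
if [ -d "$BACKUP_DIR" ]; then
    cd "$BACKUP_DIR"
    for f in *-backup.tar.gz; do
        [ -f "$f" ] && verify_archive "$f"
    done
fi
```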
Task 1 - Save a Copy of the all-systems File and the Backup Tar files
Make copies of the all-systems file, the all-systems-master-copy file, and the backup tar files, and put the copies in a secure location. You cannot regenerate these files after you upgrade the Security Analytics Server (specifically the Admin service) to 11.x.
Task 2 - Ensure Required Backup Files Were Generated
After you run the backup scripts, several files are generated. These files are required for the 11.x upgrade process. Before you begin the upgrade process, you must ensure that the required backup files are on the hosts that you are upgrading, and that you perform the following tasks.
The following files are generated on all hosts by the backup scripts:
In addition to the files listed above, the following files will be generated on the Security Analytics Server and ESA hosts:
- <hostname>-<host IP address>-mongodb.tar.gz
- <hostname>-<host IP address>-mongodb.tar.gz.sha256
The backup script will also generate the following controldata-mongodb.tar.gz files.
- <esa hostname>-<esa hostip>-controldata-mongodb.tar.gz
- <esa hostname>-<esa hostip>-controldata-mongodb.tar.gz.sha256
Task 3 - (Conditional) For Multiple ESA Hosts, Copy mongodb tar files to Primary ESA Host
If your environment has multiple ESA appliances, the nw-backup.sh script should have moved all the ESA files to the appropriate folders when it ran. Make sure that the ESA host where the Context Hub service is running is designated as primary, and that the controldata-mongodb.tar.gz.* files were copied from the secondary ESA hosts to the designated primary ESA host's default backup path.
Task 4 - Ensure All Required Backup Files are on Each Host
Before you upgrade to 11.x, ensure that the appropriate files exist on the hosts that you are upgrading, as described in the following lists.
Required Files for NetWitness Servers
Required Files for ESA Hosts
Required Files for All Other Hosts