NOTE: Updated to support 184.108.40.206
You need to remotely back up your NetWitness hosts to a central location, whether to satisfy disaster recovery requirements, perform a tech refresh, or be prepared for RMA replacement of a device.
Solution – A Wrapper script for NRT
Building on the framework of the original nw-backup scripts written for 10.x backup/restore and migration to 11.x, a new set of version 11 scripts has been written as a "wrapper" around the built-in NRT functionality of NetWitness version 11.2+.
The solution consists of 3 scripts (all run from the NW Admin server (node-zero)) and an example "nw-base.nrt" file:
- get-all-systems11.sh – generates an inventory file of all hosts in the NW server deployment.
- ssh-propagate11.sh – Generates an ECDSA key pair on the NW Server and propagates the public key to all hosts.
- nw-base.nrt – An NRT script to add any custom files to the backup directory as part of the backup process.
- nw-backup11.sh – The main backup script.
Copy the attached zip file (containing the above files) to the NW Admin Server host (node-zero) and unzip:
unzip nw-backup11.zip -d /root/scripts
chmod +x /root/scripts/*.sh
Step 1: Run get-all-systems11.sh
This runs on the NW Admin Server; no options are necessary. It will create the /var/netwitness/nw-backup directory on the NW Admin Server. Using a combination of mongo and salt commands, it creates the "all-systems" file in that directory. The all-systems file is used by the other scripts (and by many other scripts I've developed), with entries that contain the following fields in comma-separated format:
- DeviceType = NRT category type (i.e. AdminServer, Broker, Concentrator, Decoder, ESAPrimary, etc.)
- Hostname = short hostname (not FQDN)
- IPAddress = management interface IP address of the host
- MinionID = unique Salt minion ID of the host
- SerialNumber = device serial number, for reference and support
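For illustration, entries in the all-systems file would look something like this (the hostnames, IP addresses, minion IDs, and serial numbers below are hypothetical):

```shell
AdminServer,nw-admin,192.168.1.50,nw-admin.example.local,ABC1234DEF
Decoder,nw-decoder01,192.168.1.60,nw-decoder01.example.local,XYZ9876JKL
```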
The script is designed to run from cron on a regular basis (useful if you are in a dynamic environment where systems are added/removed regularly); it has a 30-second timeout on the one question it asks.
One new feature is the generation of "new-systems" and "old-systems" files if it finds new hosts or missing hosts compared to the last time run. If new systems have been added to the environment, the new-systems file can be used by other scripts for running specific targeted actions against the newer hosts. If a system is “offline” the old-systems file will have the entries for those hosts, so they are not lost and can be added back into the main “all-systems” file if needed.
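Since the script is cron-friendly (the 30-second prompt timeout lets it proceed unattended), a crontab entry such as the following could refresh the inventory nightly. The schedule and log path here are hypothetical, and the path assumes the scripts were unzipped to /root/scripts as shown above:

```shell
# refresh the all-systems inventory every night at 02:00
0 2 * * * /root/scripts/get-all-systems11.sh >> /var/netwitness/nw-backup/get-all-systems.cron.log 2>&1
```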
Step 2: Run ssh-propagate11.sh
Propagates the NW Admin Server root user's 521-bit ECDSA public SSH key to target hosts:
- Run with no options, it uses the /var/netwitness/nw-backup/all-systems file and runs against all hosts listed there (note: if the key already exists on a host, it is just verified).
- Run with argument of "new-systems" will just run against the hosts found in /var/netwitness/nw-backup/new-systems.
- Run with an argument of <hostname> (a specific system hostname, or part of one), it will grep the matching host(s) out of the all-systems file and run against those hosts.
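Based on the three modes described above, invocation would look like this (the "decoder" hostname fragment is a hypothetical example):

```shell
./ssh-propagate11.sh                # key all hosts in all-systems
./ssh-propagate11.sh new-systems    # key only hosts listed in new-systems
./ssh-propagate11.sh decoder        # key only hosts matching "decoder"
```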
Step 3: Modify the nw-base.nrt file
The default /etc/netwitness/recoverytool/nw-base.nrt file, as distributed with 11.2 and 11.3 systems, contains only the following entries:
# unmanaged files
# for azure
The “STASH” entries are NOT restored during an NRT import action, but are available for reference in the /var/netwitness/backup/unmanaged folder after the restore. I have included an expanded list of stash files and directories, covering modifications I have seen at several customers, that would be needed to restore complete functionality after a restore. Edit the file to include any additional locations (files or directories) that contain customizations in your deployment, and save it in the same directory as the nw-backup11.sh script (keep them together if you move the script). The backup script verifies that the file on each system matches your modified copy; if it does not, the script automatically copies the modified file to each host before running NRT on that host.
Step 4: Run nw-backup11.sh
./nw-backup11.sh -b <NRTBackupPath> -m <RemoteTransferMode> -l <NFSMountPoint> -p <NWBackupPath> -s <RemoteServerIP> -d <RemotePath> -t <DeviceType>
Note: All command-line options are optional. With no options selected, the script uses the options set within the file itself and backs up ALL devices listed in the all-systems file (except any that are commented out with #).
-b <NRTBackupPath>: Path where NRT writes its backup files on each server. Default:(/var/netwitness/backup)
-m <RemoteTransferMode>: Remote transfer mode (scp or nfs). Default:(scp)
-p <NWBackupPath>: NW Admin Server path for logs and all-systems file(s). Default:(/var/netwitness/nw-backup)
-d <RemotePath>: Path on remote server to store backup files, via nfs or scp. Default:(/var/netwitness/nw-backup)
-s <RemoteServerIP>: Server IP address to store backup files, via nfs or scp. Default:(IP of NW Admin Server)
-l <NFSMountPoint>: Local mount point for NFS share. Default:(/mnt/backup)
-t <DeviceType>: Backup ONLY targeted device type or device(s). Default:(all)
- core (Broker, Concentrator, (Log)Decoder, Archiver, LogCollector(vlc), NetworkHybrid, LogHybrid)
- nonw (all devices excluding AdminServer)
- nwonly (AdminServer only)
- esaonly (all ESAPrimary & ESASecondary devices only)
- endpoint (Gateway, EndpointHybrid, EndpointLogHybrid)
Example1: ./nw-backup11.sh
Would run the backup with options as set in the header of the script.
Example2: ./nw-backup11.sh -b /var/netwitness/backup -m nfs -l /mnt/backup
-d /var/netwitness/nw-backup -s 192.168.1.129 -t NetworkHybrid
Would run a backup with the following options:
-b: store NRT output in /var/netwitness/backup on each server
-m: use NFS to move the backup files to the remote backup store
-l: mount the remote nfs share to /mnt/backup.
-s: use 192.168.1.129 as the nfs server IP address.
-d: nfs mount /var/netwitness/nw-backup from the remote server.
-t: target only Network Hybrids on this backup run
- If using SCP to a server other than the NW Admin Server, the copy uses scp's "-3" option and transfers the backup file via the NW Admin Server, so ONLY the NW Admin Server needs SSH key authentication configured to all hosts.
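The "-3" behavior can be illustrated with a standalone scp command (the host names and file name below are hypothetical). The copy between the two remote hosts is relayed through the machine running scp, so only that machine needs keys to both ends:

```shell
# run from the NW Admin Server: pull from the decoder, push to the backup
# server, with the data passing through this host
scp -3 root@nw-decoder01:/var/netwitness/backup/nw-decoder01-backup.tar \
    root@backup-server:/var/netwitness/nw-backup/
```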
- By Default the /etc/netwitness/recovery-tool/category.sequence file does not include the mongo databases in the backup of the ESAPrimary server. This script checks that file and ADDS the line for backing up the mongo databases automatically.
- Make sure the modified nw-base.nrt file is in the same directory you run the nw-backup11.sh script from. The script hashes that file and verifies the hash against the file on each server; if they do not match, it copies the file from the script directory to the remote host before triggering the NRT backup on that host. If you do not want to make any modifications to the default file, don't include the file in the script run directory.
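As a rough sketch of that verification logic (assumed and simplified; the real script fetches the remote hash over SSH rather than comparing two local paths, and the sample file contents below are placeholders):

```shell
#!/bin/sh
# Simplified illustration: decide whether nw-base.nrt must be pushed to a
# host by comparing checksums of the master copy and the host's copy.
# In the real workflow the second hash would come from `ssh <host> md5sum ...`.
needs_copy() {
    master_hash=$(md5sum "$1" | awk '{print $1}')
    host_hash=$(md5sum "$2" | awk '{print $1}')
    # succeed (true) when the hashes differ, i.e. a copy is needed
    [ "$master_hash" != "$host_hash" ]
}

# demo with two local files standing in for the master and remote copies
printf 'STASH /etc/example-custom.conf\n' > /tmp/nrt_master
printf '# unmanaged files\n' > /tmp/nrt_host
if needs_copy /tmp/nrt_master /tmp/nrt_host; then
    echo "nw-base.nrt differs: would scp the master copy to the host"
fi
```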
- The deploy_admin password is retrieved programmatically, so the password is never exposed in the scripts.
- The /var/netwitness/nw-backup/all-systems file can be used for a myriad of other scripting calls, or for targeting a specific type of host, especially when using “salt” commands.
- If backups fail with "0" sized files, this is due to improper error checking of the tar command in the nw-recovery-tool script (when files being backed up change during the tar run). Please refer to the nw-recovery-tool fix.txt file for a temporary workaround until version 11.5, which should have the fix applied.
RESTORE - Follow the published backup/restore document for the version of NW you are running. The only change is that the backup is archived in a tar file: after copying the file to the /var/netwitness/backup directory on a NEW host, run tar -xvf <backup-file-name>.tar to extract it there. Then follow the remaining instructions for running nwsetup-tui and nw-recovery-tool --import.
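Putting the restore steps above together, the sequence on the replacement host looks roughly like this (the backup file name is a placeholder, and the exact nw-recovery-tool --import arguments should be taken from the published restore document for your version):

```shell
# on the NEW host, after the backup tar has been copied over
cd /var/netwitness/backup
tar -xvf <backup-file-name>.tar

# re-run setup, then import per the published restore document
nwsetup-tui
nw-recovery-tool --import
```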