Consolidating your backups and maximizing NRT (NetWitness Recovery Tool)


Changes are inevitable, and no one knows when a restore will be needed.  Today, backup and restore processes are standard, required, and part of nearly all basic deployment strategies.  With that in mind, I was tasked with backing up an environment of 32 hosts, and running NRT manually was not an option, so I had to automate it.

 

Scenario -

Take regular backups from all the NetWitness hosts throughout the environment in case of a problem, an emergency, or an upgrade.

 

Problem -

NRT is a manual process out of the box and takes a significant amount of time per host (running the tool and then relocating the backup files).

 


Resolution -

Through some trial and error, I built a script to automate the process that can be driven by a cron job.  The issue was not NRT per se; it was pulling the backup files to a central location.  I initially thought Chef might be able to pull these files back, but pulling is not something Chef does well.  I experimented with a few Chef solutions, but none worked the way I wanted.  So I went back to old-school public/private keys (to get around scp's interactive password prompt) and moved the backups to a secondary ESA, used for storage only.  The next issue I ran into with NRT was that it is device specific (for example, --category Decoder), which complicated the automation.

 

Solution - 

  1. Get the keys situated (ssh and scp password prompts make this step necessary)
    1. On the backup host (where all the backups will reside)
      1. cd ~/.ssh
      2. ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/nwbackup_key
      3. Copy the public key file to the SA Server
    2. On the SA Server
      1. Put the public nwbackup_key.pub in /tmp
      2. salt '*' cmd.run 'mkdir ~/.ssh' (creates the ssh directory on all netwitness hosts)
      3. salt '*' cmd.run 'chmod 700 ~/.ssh' (changes the permission of the ssh directory on all the netwitness hosts)
      4. salt-cp '*' /tmp/nwbackup_key.pub /tmp/nwbackup_key.pub (copies the public key from the SA Server to all the netwitness hosts)
      5. salt '*' cmd.run 'chmod 755 /tmp/nwbackup_key.pub' (makes the copied key readable on all the netwitness hosts)
      6. salt '*' cmd.run 'cat /tmp/nwbackup_key.pub >> ~/.ssh/authorized_keys' (appends the public key to authorized_keys on all the netwitness hosts)
      7. salt '*' cmd.run 'rm -f /tmp/nwbackup_key.pub' (removes the temporary copy from all the netwitness hosts)
      8. Check to make sure the public key has been saved in the authorized_keys file on the hosts (a one-line salt check follows this list)
    3. Backup server
      1. Test the key by running ssh -i ~/.ssh/nwbackup_key root@<host> from the backup server to one of the hosts being backed up; it should connect without prompting for a password.
    4. Command for SA Server (Backup)

      nw-recovery-tool --export --dump-dir /var/netwitness/nw-recovery-tool_backup --category AdminServer
    5. Commands for the other host types (Backup)

      nw-recovery-tool --export --dump-dir /var/netwitness/nw-recovery-tool_backup --category NetworkHybrid
      nw-recovery-tool --export --dump-dir /var/netwitness/nw-recovery-tool_backup --category Decoder
      nw-recovery-tool --export --dump-dir /var/netwitness/nw-recovery-tool_backup --category Concentrator
      nw-recovery-tool --export --dump-dir /var/netwitness/nw-recovery-tool_backup --category Broker
    6. Other host types: Archiver, Broker, Concentrator, Decoder, EndpointHybrid, EndpointLogHybrid, ESAPrimary, ESASecondary, LogCollector, LogDecoder, LogHybrid, Malware, NetworkHybrid, UEBA
    7. Create a script for each host type in your environment
      1. Put the scripts on the SA Server and distribute them to the environment using salt-cp, the same way the public key was distributed.
        1. Example script for a broker (a sketch follows this list)
        2. Notice the --category option; this is what changes from script to script.  The zip file that NRT creates contains the host name and date.
    8. Create a script for the backup server (in my case it is a secondary ESA)
      1. Example of the manual backup (a sketch follows this list)
        1. Iterate through the hosts with your script.
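
For the authorized_keys check in the key-setup steps above, one salt command from the SA Server prints the last appended key on every host, so you can eyeball that it matches nwbackup_key.pub:

      salt '*' cmd.run 'tail -1 ~/.ssh/authorized_keys'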
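
Here is a minimal sketch of what a per-host script can look like, using the broker as an example.  The file name and the cleanup step are my own choices; only the nw-recovery-tool line comes straight from the commands above:

      #!/bin/bash
      # Hypothetical per-host backup script (e.g. nw-backup-broker.sh).
      # Only the --category value changes for the other host types.
      BACKUP_DIR=/var/netwitness/nw-recovery-tool_backup

      # Start from an empty dump directory so only the latest backup gets pulled
      rm -rf "$BACKUP_DIR"
      mkdir -p "$BACKUP_DIR"

      # NRT writes a zip whose name contains the host name and the date
      nw-recovery-tool --export --dump-dir "$BACKUP_DIR" --category Broker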
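
And a minimal sketch of the backup-server script: it iterates through the hosts, runs the per-host script over ssh with the nwbackup_key, and pulls the resulting zips back with scp.  The host names, script names, and destination directory are all illustrative:

      #!/bin/bash
      # Hypothetical collector script run on the backup server (my secondary ESA).
      KEY=~/.ssh/nwbackup_key
      DEST=/var/netwitness/backups                  # central drop point
      # host:script pairs, one per host in the environment
      HOSTS="sa1:nw-backup-adminserver.sh broker1:nw-backup-broker.sh decoder1:nw-backup-decoder.sh"

      mkdir -p "$DEST"
      for PAIR in $HOSTS; do
          HOST=${PAIR%%:*}
          SCRIPT=${PAIR##*:}
          # Run the per-host NRT script that was distributed with salt-cp
          ssh -i "$KEY" root@"$HOST" "bash /root/$SCRIPT"
          # Pull the resulting zip back to the central backup directory
          scp -i "$KEY" "root@$HOST:/var/netwitness/nw-recovery-tool_backup/*.zip" "$DEST/"
      done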

Summary:

  • Generated public/private key for the backup host
  • Distributed the public key via salt on the SA Server
  • Validated non-interactive ssh connectivity from the backup server to the hosts
  • Created a backup script for each host type (broker, decoder, etc.)
  • Distributed the host scripts via salt on the SA Server
  • Created a script on the backup server to run the host scripts and then scp the backup files back to the backup server

 

In the end, my solution came to 8 scripts (the same script with edits to the device type by services and category).  The goal was to get all the backup files into one place so I could copy them somewhere safe.  Please keep in mind, this is just one of many possible ways to centralize backups; it is the solution I chose based on the environment I was working in.
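
To schedule the whole thing, a single crontab entry on the backup server is enough.  The schedule and script path below are only examples:

      # Run the collector script every Sunday at 02:00 (path is illustrative)
      0 2 * * 0 /root/nw-backup-collect.sh >> /var/log/nw-backup.log 2>&1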

 

Let me know your thoughts.

Tom J
