
NwLogPlayer is a log replay utility available for RSA NetWitness Logs. It reads a log event text file that you have created by exporting logs from Investigation. The first question that comes to mind is, "Why would I want to do that?" There are three typical reasons I use it. First, when you are developing ESA rules and need a specific set of crafted events to reproduce the conditions for your alert. Second, when you are developing a custom parser for those "unknown" device types. Third, when you have a lab or development system that does not have an event source, or does not have the specific event source you need. I actually prefer to do my development work on an isolated lab/development system with no log sources other than what I replay. That way I can accurately track my replayed events versus my parsed events: 100 replayed events should equal 100 parsed events.

 

To use the utility, all you need to do is install it on the system you want to run it from. This can be any system in the NetWitness Logs stack. I typically use the Log Decoder, as it is the system I work with the most. If the total size of the log sample files is not very large (less than 100 MB), I just create a "/root/logsamples" directory, put them there, and delete them when I am finished. If I am working with large log sample files, I create a "/var/netwitness/warehouseconnector/logsamples" directory instead, as the warehouseconnector is not typically used on most Log Decoders unless you're exporting data to a Hadoop environment.
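For example, staging a small sample directory and cleaning it up afterwards might look like this (the large-sample case just swaps in the warehouseconnector path):

mkdir -p /root/logsamples
# ...copy your log sample files into /root/logsamples...
rm -rf /root/logsamples     # remove the samples when you are finished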

 

Installation

NetWitness 10.x

To Install NwLogPlayer:

  1. SSH into the system you wish to run it from (typically a Log Decoder or Main Broker).
  2. Type "yum install nwlogplayer".
  3. Type "y" to confirm the install.
  4. Press "Enter".

 

NetWitness 11.x

To Install NwLogPlayer:

  1. SSH into the system you wish to run it from (typically a Log Decoder or Main Broker).
  2. Type "yum install rsa-nw-logplayer".
  3. Type "y" to confirm the install.
  4. Press "Enter".
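If you prefer a non-interactive install, yum's -y flag answers the confirmation prompt for you, for example on 11.x:

yum install -y rsa-nw-logplayer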

 

To use NwLogPlayer:

  1. Upload your log sample text files to your sample directory on the system where you installed NwLogPlayer (see the scp sketch below).
  2. SSH into the system where you installed NwLogPlayer.
  3. Type "NwLogPlayer --file <Your Sample Log Text file> --server <Log Decoder IP or FQDN>".
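As a sketch of the upload in step 1, using the sample file from the examples below and a hypothetical Log Decoder hostname of logdecoder.local, scp from your workstation would look like:

scp ESA-Alert-Firing-Sample.txt root@logdecoder.local:/root/logsamples/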

 

Examples 

Source and Destination:

Path = "/root/logsamples"
Log Sample File = "ESA-Alert-Firing-Sample.txt"
Virtual Log Collector = "VLC60.local"

NwLogPlayer --file /root/logsamples/ESA-Alert-Firing-Sample.txt --server VLC60.local

The above example sends logs to the destination with the device IP being that of the system you ran the command from.

NwLogPlayer --file /root/logsamples/ESA-Alert-Firing-Sample.txt --server VLC60.local --ip 10.1.1.1 -r4

The above example sends logs to the destination with a device IP of 10.1.1.1, using raw mode 4 (envision stream).

NwLogPlayer --file /root/logsamples/ESA-Alert-Firing-Sample.txt --server VLC60.local --rate 100 --ip 10.1.1.1 -r4

The above example sends logs to the destination with a device IP of 10.1.1.1 at a rate of 100 EPS.
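To track replayed versus parsed events, as mentioned in the introduction, you can count the lines in the sample file (assuming one event per line) and compare that number against the parsed events you see in Investigation:

wc -l /root/logsamples/ESA-Alert-Firing-Sample.txt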

 

NwLogPlayer command line syntax:

--priority arg                      set log priority value
-h [ --help ]                       show this message
-f [ --file ] arg (=stdin)          input file
-d [ --dir ] arg                    input directory
-s [ --server ] arg (=localhost)    remote server
-p [ --port ] arg (=514)            remote port
-r [ --raw ] arg (=0)               determines raw mode: 0 = add priority mark, 1 = file contents copied line by line to the server, 3 = auto detect, 4 = envision stream, 5 = binary object, 6 = protobuf stream
-m [ --memory ] arg                 speed test mode; reads up to the given MB of messages from the file contents and replays them
--rate arg                          number of events per second; no effect if the rate exceeds the EPS the program can achieve in continuous mode
--maxcnt arg                        max number of messages to be sent
-c [ --multiconn ]                  use multiple connections
-t [ --time ] arg                   simulate the timestamp; format: yyyy-m-d-hh:mm:ss
-v [ --verbose ]                    verbose output
--ip arg                            simulate the IP tag
--devicetype arg                    simulate the device type; applies only to envision headers (raw=4)
--cid arg                           simulate the collector ID; applies only to envision headers (raw=4) (NetWitness 11.x versions)
--ssl                               connect with SSL
--certdir arg                       OpenSSL certificate authority directory
--clientcert arg                    use this PEM-encoded SSL client certificate
--clientkey arg                     use this PEM-encoded private key file; if not specified, the clientcert path is used
--udp                               send in UDP
-g [ --gzip ]                       treat the input stream as compressed gzip
--version                           output the version of this program
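As one more example built only from the options above, this would replay the same sample over UDP, stop after 50 messages, and simulate a timestamp (format yyyy-m-d-hh:mm:ss):

NwLogPlayer --file /root/logsamples/ESA-Alert-Firing-Sample.txt --server VLC60.local --udp --maxcnt 50 --time 2018-3-1-12:00:00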

Feeds have been part of the core RSA NetWitness Platform for a long time and form one of the basic logic blocks in the product for capture-time enrichment and meta creation. Context Hub lists were recently added to the RSA NetWitness product and provide a search-time list capability that overlays information on top of meta already created, giving additional context to the analyst. At the moment those lists live in separate locations, but with a small amount of effort they can be pointed at the same source, so that as a feed is updated the context list is also updated, providing context on events that occurred before the feed data was created.

 

Here are some architecture points to note, as this is focused on the RSA NW 11.x codebase:

 

The directory that feeds are read from in RSA NW 11.x is different from the one in RSA NW 10.6. The idea behind using this directory, mentioned below, is to have a data feed pulled from an external source into this local web directory, which both the native RSA NetWitness feed wizard and the native Context Hub wizard can then pull from.

 

This is the new location of the directory where feeds can be placed so that the feed wizard can read them from the local RSA NW head server. Note: you might need to do this if your data requires pre-processing to remove stray commas or to remove/add columns of data.

 

Your CSV feed file, or a directory containing it, goes here:

 

NW Head Server (node0)

/var/lib/netwitness/common/repo
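For example, assuming a pre-processed feed file named my_feed.csv (a placeholder name), copying it into place is just:

cp my_feed.csv /var/lib/netwitness/common/repo/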

 

You will also need to make a slight modification to the /etc/hosts file on the head server so that the Context Hub can read this location. This is because we now force HTTPS connections, and this helps get past the certificate checking process.

 

Hosts File (node0)

/etc/hosts

127.0.0.1   netwitness
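To confirm the entry took effect, getent should now resolve the name to loopback:

getent hosts netwitness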

 

Create your feed as usual, but use the recurring feed option (even if the feed doesn't get updated, it provides us the URL option).

 

 

Next, set the URL to https://netwitness/nwrpmrepo/<feed_name.csv>

(netwitness is used here to point to localhost via the /etc/hosts change we made above. Use whatever name you added to your /etc/hosts file.)
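Before walking through the wizard, you can sanity-check that the file is reachable over HTTPS from the head server; curl's -k flag skips certificate verification, and my_feed.csv is the placeholder name from above:

curl -k https://netwitness/nwrpmrepo/my_feed.csv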

 

Next, set the other items as normal: apply the feed to your decoder service group, select the columns as you would for a normal feed, and select Apply.

 

Now you have a capture-time feed ready to go, just as you normally would.

 

On to the Context Hub

  1. Right-click on the Context Hub service > configure.
  2. Create your list > enable it.
  3. Set your connection to http(s).
  4. URL: https://<nw11headip>/nwrpmrepo/<feed_name.csv> (keep in mind that the Context Hub runs on ESA, so the address needs to be the NW head server IP and not localhost).
  5. Description:
  6. Enable "with headers" if you have #headers for the columns in your feed.

 

I chose to overwrite the list rather than append values, to keep the list fresh with just what is needed from the source.

I then selected the column to match as IP.

 

*** One thing to note: if you use headers to remind yourself which data in each column goes into which meta key, make sure you do not use PERIODS. In the feed wizard this doesn't matter, but in Context Hub this data is stored in Mongo, where periods have a different meaning, so replace the dots in your headers with underscores. ***

For instance,

#alias.ip

should be 

#alias_ip

 

The first option will error; the second is good to go.
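As a minimal illustration, a feed file that both wizards can read might look like this (hypothetical values; the ioc column produces the 'sinkhole' entry used in the next section):

#ip,ioc
10.1.1.1,sinkhole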

 

The Reveal

Now, once the feed and context list are set up, you can look for hits on the feed and the additional meta created from it, and then see the grey meta overlay for context, which you can open with the context fly-out panel.

 

Based on our configured feed, we should have an entry in the IOC key (ioc contains 'sinkhole').

 

Next, you can look at the IP addresses to see if there are grey indicators: right-click and select Live Connect Lookup, which opens the right panel.

 

Now you can see the lists that this IP lives in.

 

You can see the lists that match (in this case, two lists).

 

This will also work if an IP was investigated before the feed was created, as the Context Hub is a search-time overlay, not an index-time one.

 

Now, as the feed is updated, both areas get updated together.

 

If you would like to perform these steps but are uncomfortable doing so yourself, please contact your account manager, who can help you arrange a very affordable Professional Services engagement to assist. Otherwise, good luck and happy hunting!

 

- Eric  
