Jim Hollar

Mass Data Extraction in NetWitness Suite

Discussion created by Jim Hollar Employee on Feb 27, 2018

There are times when pcaps and logs must be extracted in bulk from the NetWitness Suite: because of an incident, because storage is limited and there is a fear of rollover, or for some other reason. The purpose of this guide is to lay out the best options for extracting data in different scenarios.

The UI in NetWitness has limitations that make it poorly suited to large file extractions of multiple gigabytes of pcaps or logs. There are a number of different timeout settings to consider, multiple services in play, and Java issues that can make the UI the least effective method. The remaining options are the NetWitness “fat” (Windows) client, the REST API, and the NwConsole utility.

Step 1 – Set Up Timeout Settings and Session Limits

Once you’ve accepted that the UI is not the best approach, the user timeouts you must be aware of become simpler. The first to consider are the “service” timeouts that get pushed to the devices for each user, which are defined for each role in the UI (under Security – Roles) here:



Make sure the role session threshold is set to 0 (unlimited sessions), and increase the query timeout to 99 minutes.

The next user to look at is the service account used when creating the bridge between the concentrator and decoder services (notice I didn’t mention a broker here).

We will be discussing extraction via the decoder and concentrator only. For bulk extraction, avoid the broker: going straight to the concentrator or decoder decreases the amount of data being dragged across the network during the extraction.

The user tied to the service account must have the role settings above (the user can be found here):

Keep in mind that if you have made a change in the “Security – Roles” screen, the change WILL NOT AUTOMATICALLY TAKE EFFECT here without toggling the service offline and online. The default timeout may have been 20 minutes when your server was installed, and as a result the file extraction will be killed off after 20 minutes for the user in question. This timeout setting works the same way for a decoder service (the other service you MAY want to use for pcap/log extraction). The setting can also be found within the security tab of the service:

There is one more setting to modify if we’re going to do mass extraction of logs/pcaps, and that’s “max.where.clause.sessions”; it applies to decoders as well. Decoders, you ask? Yes, you can use a time range in a where clause for a decoder extraction/query. The setting can be found in the explore view on a log decoder, packet decoder, or concentrator (under sdk/config):

Change this value to 0 for unlimited sessions in a where clause.
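If you prefer to make this change from the command line, the service REST interface can set config values. This is a hedged sketch, not a confirmed procedure: the REST port (50105 for a concentrator and 50104 for a packet decoder on a default install), the credentials, and the msg=set convention are assumptions to verify against your own environment.

```shell
# Sketch: build the REST call that sets max.where.clause.sessions to 0.
# Host, port, and credentials below are placeholders (assumptions), not
# values taken from any particular deployment.
HOST="localhost"
PORT="50105"             # concentrator REST port on a default install
CRED="admin:netwitness"  # replace with your service account
URL="http://${HOST}:${PORT}/sdk/config/max.where.clause.sessions?msg=set&value=0"
# Inspect the command before running it against a production service:
echo "curl -s -u ${CRED} '${URL}'"
```

Printing the command first lets you sanity-check the host, port, and credentials before touching a live service.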

Step 2 – Decide Which Method to Use

The most important first step (especially in a large environment) is to narrow down the scope of the extraction. Depending on the incident, you may ONLY be able to filter based on time, e.g. March 17th. Ideally, you will already have enough information to narrow it down to a Class A or B network block, which is better. Better still is a list of IPs, or some other metadata. Let’s take a look at the options.

Since performance and timing are key, the best method is to use NwConsole from the command line. There are two ways to efficiently execute commands within NwConsole:

  • Use the -f <filename> option to pass login parameters and commands from a file
  • Use a single command line, escaping every special character (quotes, &&, etc.) with the Unix escape character “\”

The best option is to use a file. The guide for this can be found here:  https://community.rsa.com/docs/DOC-83142
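As a minimal sketch of the file-based approach (the host, port, credentials, time range, and output path below are placeholders, not values from any real deployment), a quoted heredoc keeps the command file’s quoting intact:

```shell
# Sketch: generate an NwConsole command file with a quoted heredoc ('EOF')
# so the shell leaves the embedded double quotes alone. All values below
# are placeholders -- substitute your own host, credentials, and paths.
cat > /tmp/extract.txt <<'EOF'
login localhost:56005:ssl admin netwitness
cd /sdk
packets time1="2018-01-01 00:00:00" time2="2018-01-02 00:00:00" pathname="/var/netwitness/pcaps/slice.pcap"
EOF

# Then run it (commented out here; requires NwConsole on the host):
# NwConsole -f /tmp/extract.txt
```

Generating the file this way avoids the escaping headaches of the single-command form entirely.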

There are three options for pcap/log extraction within NwConsole:

  • makepcap – most efficient (time-wise), but requires the service to be stopped (not capturing) and works on the local file system
  • sdk packets – next most efficient (time-wise) and can execute while capturing (usually the best option)
  • sdk content – least efficient, but flexible; can extract NWD files (pcap plus meta)

reference: https://community.rsa.com/docs/DOC-83095

Step 3 – Perform the Extraction

makepcap extracts from the local file system to the local file system and operates on a stopped decoder.

makepcap usage and examples:

Usage: makepcap {source=<pathname>} [dest=<pathname>] [filenum=#[-#]]
                [packetid=#[-#]] [time1=<time>] [time2=<time>]
                [ip=<IP Address>] [single=<0,1>] [gzip=<0,1>]
                [fileType=<pcap,pcapng,log,json>] [delimiter=<string>]


Convert packet database files to pcap or log files


    source    - Required, the directory where the packet db files reside

    dest      - The destination directory for pcap files, uses source dir by default

    filenum   - The packet db file numbers to operate on or range of file numbers, use 999999999 for no limit on upper range

    packetid  - Only extract the packet IDs within the specified range

    time1     - Start time (UTC) for all packets to be extracted from packet db (e.g., time1="2009-Dec-17 08:00:00")

    time2     - Stop time (UTC) for all packets to be extracted from packet db (e.g., time2="2009-Dec-17 08:05:00")

    ip        - Limit to a single IP address, will match both source and destination

    single    - Create a single output file if 1, 0 is default

    gzip      - Compress output files if 1, 0 is default

    fileType  - pcap, pcapng, log or json, default is pcap. If json, it will write out single json objects, one per line, with packet metadata in them.

    delimiter - Log output line ending, default is \n (e.g., for windows you can use \r\n)

    delete    - If true, deletes the db file after processing, default is false


makepcap source=/var/netwitness/decoder/packetdb time1="2015-03-01 14:00:00" time2="2015-03-02 07:30:00" fileType=pcapng

makepcap source=/var/netwitness/decoder/packetdb dest=/media/usb/sde1
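Because makepcap requires a stopped decoder, it can help to wrap the call in a small script. This is a dry-run sketch under stated assumptions: the nwdecoder service name and the paths are assumptions for an 11.x appliance, and the commands are only printed until you set DRY_RUN=0.

```shell
# Sketch: stop the decoder, run makepcap, then restart the decoder.
# DRY_RUN=1 prints the commands instead of executing them. The service
# name "nwdecoder" and the source/dest paths are assumptions -- verify
# them on your appliance before setting DRY_RUN=0.
DRY_RUN=1
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "DRY-RUN: $*"
  else
    "$@"
  fi
}

run systemctl stop nwdecoder
run NwConsole -c 'makepcap source=/var/netwitness/decoder/packetdb dest=/media/usb/sde1 time1="2015-03-01 14:00:00" time2="2015-03-02 07:30:00" gzip=1'
run systemctl start nwdecoder
```

Reviewing the printed commands first is cheap insurance before taking a capture device offline.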

sdk packets usage and examples:


sdk packets can open a remote service (e.g., a concentrator from a decoder), but doesn’t have to. Services can be running; it first compiles a list of session IDs, then performs the extraction.

Usage: packets [sessions=<#,#-#,#>] [time1=<"YYYY-MM-DD HH:MM:SS">]
               [time2=<"YYYY-MM-DD HH:MM:SS">] {pathname=<output pathname>}
               [where=<where clause>] [packets=<#,#-#,#>] [append=<0|1>]
               [verbose=<0|1>] [random=<#>] [--output-pathname=<pathname>]



Retrieve packets/logs from the logged in service.  The current node must be

/sdk ("cd /sdk"), use extension ".pcap.gz" to compress output. If you would

like stats (like total packet payload/data size) to be output at the end of

the command, see the parameters starting with --output-*.


    sessions                 - Comma separated list of session IDs or session ID ranges whose packets will be downloaded (e.g., "1,3-10,98-120,1001")

    time1                    - Start time (UTC) for all packets to be extracted from packet db (e.g., time1="2009-Dec-17 08:00:00")

    time2                    - Stop time (UTC) for all packets to be extracted from packet db (e.g., time2="2009-Dec-17 08:05:00")


    pathname                 - The pcap or log file that the packets will be written to.  Use extension .log for a plaintext log file. Use /dev/null to drop all packets received (if you just want stats).

    where                    - Where clause to determine which sessions to download (must be submitted to a service with an index)

    packets                  - Comma separated list of packet IDs or packet ID ranges whose packets will be downloaded (e.g., "1,3-10,98-120,1001")

    append                   - If 1, will append to an existing file, otherwise  it overwrites, default is overwrite

    verbose                  - If 1, the command will be extra chatty about messages it receives and will provide more  output

    random                   - Pick N random session IDs between the range entered for the sessions parameter, or the full range of the device if sessions is not found. Mainly useful to generate random data for a pcap or log file for troubleshooting/testing purposes

    --output-pathname        - Writes packet stats to the given pathname, overwriting any existing file

    --output-append-pathname - Writes packet stats to the given pathname, will append output to an existing file

    --output-format          - Writes packet stats in one of the given formats,  the default is text



Create a pcap file on the concentrator (because we’re using an IP block to filter meta):

Sample file.txt on the concentrator:

login localhost:56005:ssl admin netwitness

cd /sdk

packets where="(ip.src= && time='2018-01-1 00:00:00'-'2018-01-1 12:00:00')" pathname="/var/netwitness/pcaps/test.pcap"   (note: the packets command must all be on a single line)

NwConsole -f /path/to/file.txt

Decoder example without file (with escaping and filtering time only):

NwConsole -c login localhost:56004:ssl admin netwitness -c sdk open nws://admin:netwitness@localhost:56004:ssl -c packets time1=\"2018-01-01 00:00:00\" time2=\"2018-01-24 00:00:00\" pathname=\"/root\"

Concentrator example without file (with escaping):

NwConsole -c login nwconcentrator:56005:ssl admin netwitness -c cd sdk -c packets where=\"ip.src= \&\& time=\'2018-01-23 00:00:00\'-\'2018-01-23 12:00:00\'\" pathname=\"/var/netwitness/pcaps/test.pcap\"


Escaping information:  https://unix.stackexchange.com/questions/46987/passing-arguments-with-quotes-and-doublequotes-to-bash-script

This command writes all packets between the two times (UTC) to a GZIP compressed file at /media/sdd1/packets.pcap.gz:

packets time1=\"2015-04-01 12:30:00\" time2=\"2015-04-01 12:35:00\" pathname=\"/media/sdd1/packets.pcap.gz\"

This command writes the logs for the same time range to a plaintext log file at /media/sdd1/mylogs.log:

packets time1=\"2015-04-01 12:30:00\" time2=\"2015-04-01 12:35:00\" pathname=\"/media/sdd1/mylogs.log\"


sdk content usage and examples:


You must first log into the service and open the remote (or local) service for extraction using the open command.

Usage: content [input=<pathname>] [session=<sessionid>] [maxSize=#]
               [--output-format={text,json,xml,html}] [pcap=<pathname>]
               [log=<pathname>] [type={s|m|p}] [outputDir=<directory>]


Reads or downloads (via /sdk content) a NWD file and provides stats on the contents. You must first 'cd' to the sdk node to use.


    input                    - The pathname to a NWD file to read in

    session                  - The session ID from which to download a NWD file

    maxSize                  - The max number of bytes to return (for the 'session' parameter), zero means no limit

    --output-pathname        - Writes the response output to the given pathname, overwriting any existing file

    --output-append-pathname - Writes the response output to the given pathname, will append output to an existing file

    --output-format          - Writes the response in one of the given formats, the default is text

    pcap                     - Writes all packets in the NWD to the specified pcap file

    log                      - Writes all logs in the NWD to the specified log file (text only)

    type                     - Indicates which parts of the NWD to output (session, meta or packets).  Default is all.

    outputDir                - If provided, any files that are received will be written to this directory

    verbose                  - If true, adds additional info about each object to the output, default is false



Create NWD files on the concentrator (because we’re using an IP block to filter meta):

Sample file.txt on the concentrator:

login localhost:56005:ssl admin netwitness

cd /sdk

sdk open nws://admin:netwitness@localhost:56005:ssl

sdk content sessions=1-now render=nwd dir=/pcaps where="(ip.src= && time='2018-01-01 00:00:00'-'2018-02-24 12:00:00')" fileExt=.nwd append=testnwd3.nwd maxFileSize=5mb maxDirSize=10mb

NwConsole -f /path/to/file.txt

This extracts NWD files from the concentrator service, similar to sdk packets, but it will take much longer, as it has to extract and build the NWD files, which include metadata as well. sessions=1-now will extract all sessions from session ID 1 on the concentrator until now, subject to the specified time frame and filtering clause.

For sdk packets and sdk content, the session ID list must be built first. Once it is built, the extraction will begin. Occasionally the extraction will get to 99% and the file size will continue to increase gradually for a long time. It is OK to stop the process once this happens, as the data should be completely extracted.

Additional Considerations:

In most cases you will want a filtering query, which means you will need to access the concentrator. The data will typically be too large to extract “everything” from a date range. The decoder usually has the most spare drive space, so extracting from the concentrator to the decoder’s drive space is the typical usage. From there, you can scp the data to a SAN/NAS for longer-term storage, freeing the disk space so you can extract the next time slice.
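The slice-and-offload workflow above can be sketched as a loop. The dates, paths, credentials, and the one-day slice size are all assumptions for illustration; the NwConsole and scp lines are commented out so you can review the generated command files before running anything.

```shell
# Sketch: write one NwConsole command file per one-day slice, so each
# extraction stays small enough to scp off-box before the next slice runs.
# All dates, hosts, credentials, and paths are placeholders.
START="2018-01-01"
DAYS=3
for i in $(seq 0 $((DAYS - 1))); do
  # GNU date handles relative strings like "2018-01-01 + 1 day"
  T1=$(date -u -d "$START + $i day" '+%Y-%m-%d 00:00:00')
  T2=$(date -u -d "$START + $((i + 1)) day" '+%Y-%m-%d 00:00:00')
  OUT="/var/netwitness/pcaps/slice-$i.pcap"
  cat > "/tmp/slice-$i.txt" <<EOF
login localhost:56005:ssl admin netwitness
cd /sdk
packets time1="$T1" time2="$T2" pathname="$OUT"
EOF
  # NwConsole -f "/tmp/slice-$i.txt"
  # scp "$OUT" storage:/archive/ && rm -f "$OUT"
done
```

Each command file covers a non-overlapping UTC day, so a failed slice can be re-run independently without re-extracting the rest.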

It is also possible to mount a USB drive on a decoder/concentrator and write data directly to the device, if there isn’t a lot of disk space required.