The first NwConsole SDK content command example below is simple and shows all of the commands that you need to enter. After that, the examples show only the
sdk content commands. The first example creates a log file and grabs the first 1000 logs out of a Concentrator aggregating from a Log Decoder:
sdk open nw://admin:email@example.com:50005
sdk output /tmp
sdk content sessions=1-1000 render=logs append=mylogs.log fileExt=.log
This script outputs 1000 logs (assuming sessions 1 through 1000 exist on the service) to the file /tmp/mylogs.log. The logs are in plain text format. The parameter fileExt=.log is necessary to indicate to the command that we want to output a log file.
sdk content sessions=1-1000 render=logs append=mylogs.log fileExt=.log includeHeaders=true separator=","
This command grabs the same 1000 logs as above, but it parses the log header and extracts the log timestamp, forwarder, and other information, and puts them in a CSV formatted file.
The timestamp is in Epoch time. The separator parameter can only be used on NetWitness Platform installations 10.4.0.2 and later.
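Because the timestamp column is in Epoch time, you may want to convert it when post-processing the CSV. Here is a minimal Python sketch; the sample row and the column order are assumptions for illustration, so check the headers your version actually emits:

```python
import csv
import datetime
import io

# Hypothetical CSV row from the export; the real column layout depends
# on your NetWitness version and log source.
sample = '1578236400,logserver1,"%ASA-6-302013: Built inbound TCP connection"\n'

for row in csv.reader(io.StringIO(sample)):
    epoch = int(row[0])  # assumed: first column is the epoch timestamp
    ts = datetime.datetime.fromtimestamp(epoch, datetime.timezone.utc)
    print(ts.isoformat(), row[1], row[2])
```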
sdk content sessions=1-now render=logs append=mylogs.log fileExt=.log includeHeaders=true separator="," where="risk.info='nw35120'"
This command writes a log file across the current session range, but only logs that match risk.info='nw35120'. Keep in mind that when you add a where clause, it performs a query in the background to gather the session IDs for export. The query should be run on a service with the proper fields indexed (typically a Broker or Concentrator). In this case, since you are querying the field risk.info, double-check the service where you run the command to make sure the field is indexed at the value level (IndexValues; see index-concentrator.xml for examples). By default, most Decoders only have time indexed. If you use any field but time in the where clause, you need to move the query from the Decoder to a Concentrator, Broker, or Archiver with the proper index levels for the query. You can find more information on indexing and writing queries in the NetWitness Platform Core Database Tuning Guide.
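For example, a custom index entry that indexes risk.info at the value level might look like the following, typically placed in index-concentrator-custom.xml (the description and valueMax values here are assumptions; tune them for your environment):
<key description="Risk Information" format="Text" level="IndexValues" name="risk.info" valueMax="10000"/>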
sdk content sessions=1-now render=logs append=mylogs.log fileExt=.log includeHeaders=true separator="," where="threat.category exists && time='2020-01-05 15:00:00'-'2020-01-05 16:00:00'"
This command is the same as above, but it only searches for matching logs between 3 PM and 4 PM (UTC) on Jan 5, 2020 that have the meta key threat.category. Again, because this query uses a field other than time in the where clause (threat.category), it should be run on a service with threat.category indexed at least at the IndexKeys level (the operators exists and !exists only require an index at the key level, although values work fine, too).
sdk content sessions=1-now render=logs append=mylogs fileExt=.log where="event.source begins 'microsoft'" maxFileSize=1gb
This command creates multiple log files, each one no larger than 1 GB in size. It prepends the filenames with mylogs and appends them with the date-time of the first packet/log timestamp in the file. Some example filenames: mylogs-1-2020-Jan-28T11_08_14.log, mylogs-2-2020-Jan-28T11_40_08.log and mylogs-3-2020-Jan-28T12_05_47.log. On versions older than 10.5, the T separator between date and time is a space.
sdk content sessions=1-now render=pcap append=mypackets where="service=80,21 && time='2020-01-28 10:00:00'-'2020-01-28 15:00:00'" splitMinutes=5 fileExt=.pcap
This command grabs all the packets within the five-hour time period for service types 80 and 21 and writes them to a PCAP file. Every five minutes, it starts a new PCAP file.
sdk content time1="2020-01-28 14:00:00" time2="2020-01-28 14:15:00" render=pcap append=mydecoder fileExt=.pcap maxFileSize=512mb sessions=1-now
Pay attention to this command. Why? It works for both packets and logs and is extremely fast. The downside is that you get everything between the two time ranges and you cannot use a where clause. It starts streaming everything back almost immediately and does not require a query to run first on the backend. Because everything is read using sequential I/O, it can completely saturate the network link between the server and client. It creates files prepended with mydecoder and splits to a new file once a file reaches 512 MB in size.
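For comparison, exporting the same 15-minute window with the slower query-based approach might look like the following; this is a sketch built from the time syntax in the earlier examples, not a documented equivalent, and it must run a time query on the backend first:
sdk content sessions=1-now render=pcap append=mydecoder fileExt=.pcap maxFileSize=512mb where="time='2020-01-28 14:00:00'-'2020-01-28 14:15:00'"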
sdk tailLogs
or (the equivalent command):
sdk content render=logs console=true sessions=now-u
This is a fun little command. It actually uses sdk content behind the scenes. The purpose of this command is to view all incoming logs on a Log Decoder. As logs come into the Log Decoder (you can also run it on a Broker or Concentrator), they are output to the console screen. It is a great way to see whether the Log Decoder is capturing and what exactly is coming in. This command runs in continuous mode. Do not use it if the Log Decoder is capturing at a high ingest rate (this command cannot keep up). However, it is helpful for verification or troubleshooting purposes.
sdk tailLogs where="device.id='ciscoasa'" pathname=/mydir/anotherdir/mylogs
This command is the same as above, except it only outputs logs that match the where clause and, instead of outputting to the console, it writes them to a set of log files under /mydir/anotherdir that do not grow larger than 1 GB. Obviously, you can accomplish this with the sdk content command as well, but this command requires a little less typing if you like the default behavior.
sdk content sessions=now-u render=pcap where="service=80" append=web-traffic fileExt=.pcap maxFileSize=2gb maxDirSize=100gb
This command starts writing PCAPs of all web traffic from the most recent session and all new incoming sessions that match service=80. It writes out PCAPs no larger than 2 GB, and if all the PCAPs in the directory grow larger than 100 GB, it deletes the oldest PCAPs until the directory is 10% smaller than the maximum size. Keep in mind that the directory size checking is not exact and it only checks every 15 minutes by default. You can adjust the number of minutes between checks by passing cacheMinutes as a parameter, but this only works with versions 10.5 and later.
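For example, assuming a 10.5 or later install, checking the directory size every 5 minutes instead of the default 15 might look like this (a sketch combining parameters already shown above):
sdk content sessions=now-u render=pcap where="service=80" append=web-traffic fileExt=.pcap maxFileSize=2gb maxDirSize=100gb cacheMinutes=5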
sdk content sessions=79000-79999 render=nwd append=content-%1%.nwd metaFormatFilename=did
This is a basic backup command. It grabs 1000 sessions and outputs the full content (sessions, meta, packets, or logs) in NWD (NetWitness Data) format. NWD is a special format that can be re-imported into a Packet or Log Decoder without reparsing. So essentially, the original parsed session imports without changes. The timestamp is retained as well, so if the session was originally parsed six months ago, it will still show that timestamp after import.
The metaFormatFilename parameter is very helpful in this command. If this command is run on a Concentrator with more than one service, the NWD filenames will be created with the did meta for each session (the %1% in the append parameter is substituted with the value of did). Each filename will indicate exactly which Decoder the data came from.
sdk content sessions=1-u where="service=80,139,25,110" render=files cacheMinutes=10 dir=/tmp/content-files maxDirSize=200mb
This is another fun little command. It works very similarly to our old Visualize product if you pair the output directory with something like Windows Explorer in Icon mode. It extracts files from all web, email, and SMB traffic. This includes all kinds of files, such as images, zip files, videos, PDFs, office documents, text files, executables, and audio files. If it extracts malware, your virus scanner will flag it. Nothing is executed by the command, so it does not infect the machine (unless you try to execute it yourself). In fact, this can be useful: if you do find malware, the filename indicates the session ID where it was extracted. You can then query that session ID to see which host the malware possibly infected and take action. You can filter what gets extracted with parameters such as excludeFileTypes (see the command help). For example, adding excludeFileTypes=".exe;.dmg;.msi" prevents executables and installers from being extracted. This command runs nonstop, extracting files from all existing sessions and any new ones. Once the directory holds more than 200 MB of files, it automatically starts cleaning up the oldest files every 10 minutes.
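For instance, a sketch of the same command with that exclusion applied (parameters combined from the examples above):
sdk content sessions=1-u where="service=80,139,25,110" render=files cacheMinutes=10 dir=/tmp/content-files maxDirSize=200mb excludeFileTypes=".exe;.dmg;.msi"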
sdk content sessions=1-now render=files where="time='2015-01-27 12:00:00'-'2015-01-27 13:00:00' && (service=25,110,80)" subdirFileTypes="audio=.wav;.mp3;.aac; video=.wmv;.flv;.mp4;.mpg;.swf; documents=.doc;.xls;.pdf;.txt;.htm;.html; images=.png;.gif;.jpg;.jpeg;.bmp;.tif;.tiff; archive=.zip;.rar; other=*" renameFileTypes=".download|.octet-stream|.program|.exe;.jpeg|.jpg" maxDirSize=500mb
This command extracts files from HTTP and email sessions within a one-hour period and then groups the extracted files into directories specified by the subdirFileTypes parameter. For example, any extracted audio file with the extension .aac will be placed into the subdirectory audio, which will be created under the specified output directory. The same goes for all the other groups specified in that parameter. Some files will also be automatically renamed based on their file extension, which is handled by renameFileTypes. Any file with the extension .download, .octet-stream, or .program will be renamed to .exe. Files with the extension .jpeg will be renamed to .jpg. Once the top-level directory exceeds 500 MB, the oldest files are cleaned up. This command stops at the last session that existed when the command started.
sdk content sessions=1-u render=files where="filename exists" maxDirSize=20mb fileHash=md5,sha1,sha256 linkMeta=sessionid,time,did,service_uuid deviceType=filehash logDecoder="<hostname>:<port>[:ssl]" sessionPersist=/tmp/last-session.txt
If you want to extract file hashes from network protocols like HTTP, SMB, POP3, and SMTP, this is the command you want to use. Not only does it give you the file hash, it also sends an event to a Log Decoder with this information, which is processed into a session for querying. For example, use sdk open to talk to a Broker or Concentrator that is aggregating from one or more Network Decoders. Set the where clause to find sessions of interest that have embedded files in the protocols we support for file extraction (use Live to install Malware Analysis content, for example). Next, set up a Log Decoder to receive the file hash information. This information goes into several meta keys, including link. The meta key link is used to indicate the session (with the file that was hashed) on the device used by sdk open. When hashes of interest are discovered on the Log Decoder, the link meta can be used to find the original packet session that contains the file. Parameters such as deviceType can be used to customize the log that is generated and sent to the Log Decoder. The ports for sending syslog to a Log Decoder are usually 514 or 6514 (for SSL). You may need to add a table-map-custom.xml file on the Log Decoder to properly save the meta keys sent in the file hash log. For the linkMeta parameter, service_uuid is a special meta key that inserts the UUID of the service the session ID belongs to. This can be used to link back to the original session from the Log Decoder. sessionPersist writes out the last session ID processed so subsequent runs pick up where the previous run left off.
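For example, if the file hash log carries the hash in a meta key named checksum (a hypothetical key name; check the actual log output from your version), a table-map-custom.xml entry for it might look like this:
<mapping envisionName="checksum" nwName="checksum" flags="None" format="Text"/>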
sdk search session=1-now where="service=80,25,110" search="keyword='party' sp ci"
This command searches all packets and logs (the sp parameter) for the keyword party. If party is found anywhere in the packets or logs, it outputs the session ID along with the text it found and the surrounding text for context. The where clause indicates that it only searches web and email traffic. The ci parameter means the search is case-insensitive. You can substitute regex for keyword and it performs a regex search.
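For example, assuming your version supports the regex form described above, a hypothetical pattern matching words that start with "part" might look like this:
sdk search session=1-now where="service=80,25,110" search="regex='part[a-z]+' sp ci"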
sdk search session=1-now search="keyword='checkpoint' sp ci" render=logs append=checkpoint-logs.log fileExt=.log
This is an interesting command example. It searches all logs (or it could be packets) for the keyword checkpoint, and if that keyword is seen, it extracts the log to the file checkpoint-logs.log. There are all kinds of possibilities with this command. Essentially, when a hit is detected, it hands off the session to the content call, so any parameters you pass to sdk search that it does not recognize are just passed along to the content call. This allows the full capabilities of the sdk content call, but they only operate on those sessions with content search hits.