The first NwConsole sdk content command example below is simple and shows all of the commands that you need to enter. After that, the examples show only the
sdk content commands. The first example creates a log file and grabs the first 1000 logs out of a Concentrator aggregating from a Log Decoder:
sdk open nw://admin:email@example.com:50005
sdk output /tmp
sdk content sessions=1-1000 render=logs append=mylogs.log fileExt=.log
This script outputs 1000 logs (assuming sessions 1 through 1000 exist on the service) to the file /tmp/mylogs.log. The logs are in a plain text format. The parameter
fileExt=.log is necessary to tell the command that we want to output a log file.
sdk content sessions=1-1000 render=logs append=mylogs.log fileExt=.log includeHeaders=true separator=","
This command grabs the same 1000 logs as above, but it parses each log header, extracts the log timestamp, forwarder, and other information, and writes them to a CSV formatted file.
Example CSV: 1422401778,10.250.142.64,10.25.50.66,hop04b-LC1,%MSISA-4: 220.127.116.11...
The timestamp is in Epoch time. The
separator parameter can only be used on NetWitness Platform installs 10.4.0.2 and later.
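Because the first CSV column is an Epoch timestamp, downstream tooling usually has to convert it back to a readable time. A minimal Python sketch using the example row above (the meanings of the columns after the timestamp are assumptions based on that example):

```python
import csv
import io
from datetime import datetime, timezone

# One CSV row as produced by includeHeaders=true separator=","
# (the fields after the timestamp are illustrative).
row_data = "1422401778,10.250.142.64,10.25.50.66,hop04b-LC1"

for row in csv.reader(io.StringIO(row_data)):
    epoch = int(row[0])
    # Epoch seconds -> UTC datetime
    ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
    print(ts.strftime("%Y-%m-%d %H:%M:%S"), row[1:])
```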
sdk content sessions=l-now render=logs append=mylogs.log fileExt=.log includeHeaders=true separator="," where="risk.info='nw35120'"
This command writes a log file across the current session range, but only logs that match
risk.info='nw35120'. Keep in mind that when you add a where clause, it performs a query in the background to gather the session ids for export. The query should be run on a service with the proper fields indexed (which is typically a Broker or Concentrator). In this case, since you are querying the field
risk.info, double-check the service where you run the command to make sure it is indexed at the value level (IndexValues, see index-concentrator.xml for examples). By default, most Decoders only have time indexed. If you use any field but time in the where clause, you need to move the query from the Decoder to a Concentrator, Broker, or Archiver with the proper index levels for the query. You can find more information on indexing and writing queries in the NetWitness Platform Core Database Tuning Guide.
sdk content sessions=l-now render=logs append=mylogs.log fileExt=.log includeHeaders=true separator="," where="threat.category exists && time='2015-01-05 15:00:00'-'2015-01-05 16:00:00'"
This command is the same as above, but it only searches for matching logs between 3 PM and 4 PM (UTC) on Jan 5, 2015 that have the meta key
threat.category. Again, because this query has a field other than time in the where clause (
threat.category), it should be run on a service with
threat.category indexed at the
IndexKeys level (the operators exists and
!exists only require an index at the key level, although values work fine, too).
sdk content sessions=l-now render=logs append=mylogs fileExt=.log where="event.source begins 'microsoft'" maxFileSize=1gb
This command creates multiple log files, each one no larger than 1 GiB in size. It prepends the filenames with mylogs and appends them with the date-time of the first packet/log timestamp in the file. Some example filenames: mylogs-1-2015-Jan-28T11_08_14.log, mylogs-2-2015-Jan-28T11_40_08.log and mylogs-3-2015-Jan-28T12_05_47.log. On versions older than Security Analytics 10.5, the T separator between date and time is a space.
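If you need to sort or select these rollover files by time, the embedded timestamp can be parsed back out of the filename. A small Python sketch (parse_rollover_name is a hypothetical helper written against the filename pattern shown above; it also normalizes the pre-10.5 space separator):

```python
from datetime import datetime

def parse_rollover_name(name, prefix="mylogs"):
    """Extract the sequence number and first-log timestamp from a
    rolled-over filename such as mylogs-1-2015-Jan-28T11_08_14.log.
    On versions older than 10.5 the 'T' is a space instead."""
    stem = name.rsplit(".", 1)[0]        # drop the .log extension
    rest = stem[len(prefix) + 1:]        # drop "mylogs-"
    seq, _, stamp = rest.partition("-")  # split off the sequence number
    stamp = stamp.replace(" ", "T")      # normalize pre-10.5 names
    ts = datetime.strptime(stamp, "%Y-%b-%dT%H_%M_%S")
    return int(seq), ts

seq, ts = parse_rollover_name("mylogs-1-2015-Jan-28T11_08_14.log")
```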
sdk content sessions=l-now render=pcap append=mypackets where="service=80,21 && time='2015-01-28 10:00:00'-'2015-01-28 15:00:00'" splitMinutes=5 fileExt=.pcap
This command grabs all packets within the five-hour time period for service types 80 and 21 and writes a PCAP file. Every 5 minutes, it starts a new PCAP file.
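The splitMinutes rollover can be modeled as bucketing each packet timestamp into a fixed window. A simplified Python sketch (it assumes windows are counted from the start of the export range, which matches the where clause above; the real tool tracks this internally):

```python
from datetime import datetime

def split_bucket(ts, start, minutes=5):
    """Index of the splitMinutes window that a packet timestamp falls
    into, counting from the start of the export. A simplified model of
    the rollover behavior, not NwConsole's own logic."""
    return int((ts - start).total_seconds() // (minutes * 60))

start = datetime(2015, 1, 28, 10, 0, 0)
# Packets at 10:04:59 and 10:05:00 land in different PCAP files.
first = split_bucket(datetime(2015, 1, 28, 10, 4, 59), start)
second = split_bucket(datetime(2015, 1, 28, 10, 5, 0), start)
```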
sdk content time1="2015-01-28 14:00:00" time2="2015-01-28 14:15:00" render=pcap append=mydecoder fileExt=.pcap maxFileSize=512mb sessions=l-now
Pay attention to this command. It works for both packets and logs and is extremely fast. The downside is that you get everything between the two times and you cannot use a where clause. It starts streaming everything back almost immediately and does not require a query to run first on the backend. Because everything is read using sequential I/O, it can completely saturate the network link between the server and client. It creates files prepended with mydecoder and splits to a new file once the current one reaches 512 MiB in size.
sdk tailLogs
or (the equivalent command):
sdk content render=logs console=true sessions=now-u
This is a fun little command. It actually uses
sdk content behind the scenes. The purpose of this command is to view all incoming logs on a Log Decoder (you can also run it on a Broker or Concentrator). As logs come in, they are output on the console screen. It is a great way to see whether the Log Decoder is capturing and exactly what is coming in. This command runs in continuous mode. Do not use it if the Log Decoder is ingesting at a high rate (this command cannot keep up). However, it is helpful for verification or troubleshooting purposes.
sdk tailLogs where="device.id='ciscoasa'" pathname=/mydir/anotherdir/mylogs
This command is the same as above, except that it only outputs logs matching the where clause and, instead of writing to the console, writes them to a set of log files under
/mydir/anotherdir that do not grow larger than 1 GiB. Obviously, you can accomplish this with the
sdk content command as well, but it is a little less typing with this command if you like the default behavior.
sdk content sessions=now-u render=pcap where="service=80" append=web-traffic fileExt=.pcap maxFileSize=2gb maxDirSize=100gb
This command starts writing PCAPs of all web traffic from the most recent session and all new incoming sessions that match service=80. It writes out PCAPs no larger than 2 GiB each, and if all the PCAPs in the directory grow larger than 100 GiB, it deletes the oldest PCAPs until the directory is 10% smaller than the maximum size. Keep in mind that the directory size check is not exact and it only runs every 15 minutes by default. You can adjust the number of minutes between checks by passing
cacheMinutes as a parameter, but this only works with Security Analytics 10.5 and later.
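The pruning rule (delete oldest until the directory is 10% under the cap) can be sketched in a few lines of Python. This is a model of the documented behavior, assuming "oldest" means earliest modification time; it is not NwConsole's actual implementation:

```python
def trim_directory(files, max_bytes):
    """Model of the maxDirSize cleanup: when the total size of the
    output directory exceeds the cap, the oldest files are deleted
    until the directory is 10% below the cap. `files` is a list of
    (mtime, size) pairs; returns the files that survive."""
    total = sum(size for _, size in files)
    target = max_bytes * 0.9
    survivors = sorted(files)          # oldest first
    if total <= max_bytes:
        return survivors
    while survivors and total > target:
        _, size = survivors.pop(0)     # drop the oldest file
        total -= size
    return survivors
```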
sdk content sessions=79000-79999 render=nwd append=content-%1%.nwd metaFormatFilename=did
This is a poor person’s backup command. It grabs 1000 sessions and outputs the full content (sessions, meta, packets, or logs) in NWD (NetWitness Data) format. NWD is a special format that can be re-imported to a Packet or Log Decoder without reparsing. So essentially, the original parsed session imports without changes. The timestamp is also preserved, so if the data was originally parsed 6 months ago, it retains that timestamp upon import. The
metaFormatFilename parameter is very helpful in this command. If this command is run on a Concentrator with more than one service, the NWD filenames will be created with the
did meta for each session (the %1% in the append parameter is substituted with the value of
did). Each filename will indicate exactly which Decoder the data came from.
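The %1% substitution can be modeled directly. A minimal Python sketch (expand_meta_filename is a hypothetical helper mirroring the documented behavior, where each %N% placeholder takes the Nth meta value listed in metaFormatFilename):

```python
def expand_meta_filename(template, meta_values):
    """Substitute %1%, %2%, ... in an append template with the
    session's meta values (here, just did). A simplified sketch of
    the substitution described above, not NwConsole's own code."""
    name = template
    for i, value in enumerate(meta_values, start=1):
        name = name.replace(f"%{i}%", str(value))
    return name

# A session whose did meta is "mydecoder" produces content-mydecoder.nwd
name = expand_meta_filename("content-%1%.nwd", ["mydecoder"])
```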
sdk content sessions=l-u where="service=80,139,25,110" render=files maxDirSize=200mb cacheMinutes=10
This is another fun little command. It works very similarly to our old Visualize product if you pair the output directory with something like Windows Explorer in Icon mode. It extracts files from all web, email, and SMB traffic. This includes all kinds of files, such as images, zip files, videos, PDFs, office documents, text files, executables, and audio files. If it extracts malware, your virus scanner will flag it. Nothing is executed by the command, so it does not infect the machine (unless you try to execute it yourself). This can be useful: if you do find malware, the filename indicates the session id it was extracted from. You can then query that session id, see what host the malware possibly infected, and take action. You can filter what gets extracted with the parameter
excludeFileTypes (see the command help). For instance, adding
excludeFileTypes=".exe;.dmg;.msi" prevents executables and installers from being extracted. This command runs nonstop, extracting files from all existing and any new sessions. After the directory accumulates more than 200 MiB of files, it automatically starts cleaning up the oldest files every 10 minutes.
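The extension filter is easy to model. A Python sketch of the filtering behavior described above (is_excluded is a hypothetical helper; the semicolon-separated list format matches the example parameter):

```python
def is_excluded(filename, exclude_file_types=".exe;.dmg;.msi"):
    """Return True if an extracted file would be skipped by the
    excludeFileTypes list (semicolon-separated extensions, as in the
    example above). A simplified model of the documented filtering."""
    excluded = {ext.lower() for ext in exclude_file_types.split(";")}
    dot = filename.rfind(".")
    ext = filename[dot:].lower() if dot != -1 else ""
    return ext in excluded
```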
sdk content sessions=1-now where="time='2015-01-27 12:00:00'-'2015-01-27 13:00:00' && (service=25,110,80)" subdirFileTypes="audio=.wav;.mp3;.aac; video=.wmv;.flv;.mp4;.mpg;.swf; documents=.doc;.xls;.pdf;.txt;.htm;.html; images=.png;.gif;.jpg;.jpeg;.bmp;.tif;.tiff; archive=.zip;.rar; other=*" renameFileTypes=".download|.octet-stream|.program|.exe;.jpeg|.jpg" render=files maxDirSize=500mb
This command extracts files from HTTP and email sessions from a one-hour period and then groups the extracted files into directories specified by the
subdirFileTypes parameter. For instance, any extracted audio file with the extension .wav, .mp3, or .aac will be placed into the subdirectory audio, which will be created under the specified output directory. The same goes for all the other groups specified in that parameter. Some files will also be automatically renamed based on their file extension, which is handled by
renameFileTypes. Any file with the extension .download, .octet-stream, or .program will be renamed to .exe. Files with the extension .jpeg will be renamed to .jpg. Once the top-level directory exceeds 500 MiB, the oldest files are cleaned up. This command stops at the last session that existed when the command was started.
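The grouping and renaming rules can be modeled in a few lines. A hedged Python sketch (route_file is a hypothetical helper mirroring the groups and rename map from the command above — rename first, then group, with "other" as the catch-all for other=*):

```python
def route_file(filename):
    """Sketch of the subdirFileTypes/renameFileTypes routing described
    above. This models the documented behavior; it is not NwConsole's
    own implementation."""
    rename = {".download": ".exe", ".octet-stream": ".exe",
              ".program": ".exe", ".jpeg": ".jpg"}
    groups = {
        "audio": {".wav", ".mp3", ".aac"},
        "video": {".wmv", ".flv", ".mp4", ".mpg", ".swf"},
        "documents": {".doc", ".xls", ".pdf", ".txt", ".htm", ".html"},
        "images": {".png", ".gif", ".jpg", ".jpeg", ".bmp", ".tif", ".tiff"},
        "archive": {".zip", ".rar"},
    }
    if "." in filename:
        stem, ext = filename.rsplit(".", 1)
        ext = "." + ext.lower()
    else:
        stem, ext = filename, ""
    ext = rename.get(ext, ext)  # e.g. .jpeg -> .jpg
    subdir = next((g for g, exts in groups.items() if ext in exts), "other")
    return subdir, stem + ext
```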
sdk search session=l-now where="service=80,25,110" search="keyword='party' sp ci"
This command searches all packets and logs (the
sp parameter) for the keyword
party. If party is found anywhere in the packets or logs, it outputs the session id along with the text it found and the surrounding text for context. The where clause indicates that it only searches web and email traffic. The
ci parameter means the search is case insensitive. You can substitute
regex for keyword and it performs a regex search.
sdk search session=l-now search="keyword='checkpoint' sp ci" render=log append=checkpoint-logs.log fileExt=.log
This is an interesting command example. It searches all logs (or it could be packets) for the keyword
checkpoint and if that keyword is seen, it extracts the log to a file checkpoint-logs.log. There are all kinds of possibilities with this command. Essentially, when a hit is detected, it hands off the session to the content call. So any parameters you pass to
sdk search that it does not recognize are passed along to the content call. This gives you the full capabilities of the
sdk content call, but applied only to those sessions with content search hits.