
A couple of recent interactions with customers sent me down the path of designing better whitelisting options for well-known services that generate a lot of benign traffic.  Many customers have standardized on Office365, Windows 10 and Chrome/Firefox in the enterprise.  As a result, the traffic that NetWitness captures includes a lot of data for these services, so being able to filter that data when needed is important.

 

The Parsers

The end result of this effort is 3 parsers that allow the filtering of Office365 traffic, Windows 10 Endpoint/Telemetry traffic, and generic traffic (Chrome/Firefox etc.) in the NetWitness platform.

 

The data for filtering (meta values) is written by default to the filter key and looks like this:

 

With these metavalues, analysts are able to select and deselect meta for these services to reduce the benign signals for these services from investigations and charting to focus more on the outliers.

filter!='whitelist'

filters all data tagged as whitelist from the view

 

filter!='windows10_connection'

filters all traffic related to Windows 10 connection endpoints (telemetry etc.) compiled from this site:

windows-itpro-docs/windows/privacy at master · MicrosoftDocs/windows-itpro-docs · GitHub 

 

filter!='office365'

filters all traffic related to Office365 endpoints, as published via:

Office 365 IP Address and URL Web service | Microsoft Docs 

 

filter!='generic'

filters traffic related to generic endpoints, including Chrome updates from gvt1.com and gvt2.com as well as other miscellaneous endpoints related to Windows telemetry (V8/V7 etc. and others)

 

Automating the Parsers

To take this one step further, and to make creating the Lua parsers easier, a script was written for Office365 and Windows 10 to automate the process of pulling the content down, altering the data, building a parser from the content, and outputting a parser ready to be used on log and packet decoders to flag traffic.
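As a rough sketch of what that automation looks like, here is a minimal Python version of the idea (the endpoint URL and JSON fields come from Microsoft's documented Office 365 web service; treat this as an illustration of the approach, not the actual script in the repos below):

import json
import urllib.request
import uuid

# Microsoft's Office 365 endpoints web service (see the O365 post below)
url = ("https://endpoints.office.com/endpoints/worldwide"
       "?clientrequestid=" + str(uuid.uuid4()))

with urllib.request.urlopen(url) as resp:
    endpoints = json.load(resp)

# Collect every published URL; wildcard entries (*.domain.com) are
# reduced to their parent domain so a subdomain fallback can match them
hosts = set()
for entry in endpoints:
    for host in entry.get("urls", []):
        hosts.add(host.lstrip("*."))

# Emit a Lua table fragment that can be dropped into the parser
for host in sorted(hosts):
    print('["%s"] = "office365",' % host)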

 

Hopefully this can be rolled into a regular content pipeline to update the parsers periodically and pick up the latest endpoints as they are added (for instance, a new Windows 10 build will most likely come with updated endpoints).

 

Scripts, lua parsers and descriptions are listed here and will get updated as issues pop up.

 

For each parser that has an update mechanism, the Python script can be run to generate the data and output a new parser for use in NetWitness (the parser version is set to the build time, so the UI shows you which version you are running).

These parsers also serve as proofs of concept for other ideas that might need both exact and substring matches for, say, hostnames or other threat data.

 

Currently the parsers read from hostname related keys such as alias.host, fqdn, host.src, host.dst.

 

As always, this is POC code to validate ideas and potential solutions. Deploy code and test/watch for side effects such as blizzards, sandstorms, core dumps and other natural events.

 

GitHub - epartington/rsa_nw_lua_wl_O365: whitelist office365 traffic parser and script 

GitHub - epartington/rsa_nw_lua_wl_windows10: whitelist window 10 connection traffic parser and script 

GitHub - epartington/rsa_nw_lua_wl_generic: whitelist generic traffic parser 

 

There will also be a post shortly about using resourceBundles to generate a single zip file with this content to make uploading and management of this data easier.

There have been many improvements over the past several releases to the RSA NetWitness product on the log management side of the house to help reduce the number of unparsed or misparsed devices.  There are still instances where manual intervention is necessary, and a report such as the one provided in this blog could prove valuable for you.

 

This report provides visibility into 4 types of situations:

 

Device.IP with more than 1 device.type

Devices that have multiple parsers acting on them over the reporting period, sorted from most parsers per IP to least

 

Unknown Devices

Unknown devices have no parser detected for them, or no parser is installed/enabled for them.

 

Device.types with word meta

Device types with word meta indicate that a parser has matched a header for that device, but no parser entry has matched the payload (message body).

 

Device.type with parseerror

Devices that are parsing meta for most fields but have parseerror meta for particular metakey data. This can indicate that the format of the data does not match the format of the key (for example, an invalid MAC address written to the MAC-formatted keys eth.src or eth.dst, or text written to an IP key).
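If you want to spot-check these conditions in Investigation before scheduling the report, the last three are simple drills (assuming the default metakey names; the multiple-device.type condition needs the report's aggregation to surface):

device.type = 'unknown'

word exists

parseerror exists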

 

Some of these categories are legitimate but checking this report once a week should allow you to keep an eye on the logging function of your NetWitness system and make sure that it is performing at its best.

 

The code for the Report is kept here (in clear text format so you can look at the rule content without needing to import it into NetWitness):

GitHub - epartington/rsa_nw_re_logparser_health 

 

Here's a sample report output:

 

Most people don't remember the well-known port number for a particular network protocol. Sometimes we need to refer to an RFC to remember which port certain protocols normally run over.

 

In the RSA NetWitness UI the well-known name for the protocol is presented, but when you drill on it you get the well-known port number.

 

This can be a little confusing at times if you aren't completely caffeinated.☕

 

Well, here's some good news: you can use the name of the service in your drills and reports with the following syntax:

 

Original method:

Service=123 

 

New method:

Service="NTP"

 

You may get an error about needing quotes around the word however the system still interprets the query correctly.

 

 

This also works in profiles:

 

And in the Reporting Engine as well:

 

Good luck using this new trick!

   

(P.S. you can also use AND instead of && and OR instead of ||)

RSA NetWitness v11.2 introduced a very useful feature to the Investigation workflow with the improvement of the Profile feature.  In previous versions a Profile could have a pre-query set for it along with meta and column groups, but you were locked into using only those unless you deactivated your profile.

 

With v11.2 you are able to keep the pre-query set from the profile and pivot to other meta and column groups.  This ability allows you to use Profiles as bookmarks or starting points for investigations or drills, along with the folders that can be set in the Profile section to organize the various groups and help frame investigations properly.

 

Below is a collection of the profiles as well as some meta and column groups to help collect various types of data or protocols together.

 

GitHub - epartington/rsa_nw_investigation_profiles 

 

Protocols

Medium

Log Device Classes

UEBA

 

Let me know if these work for you. I will be adding more to the GitHub site as they develop, so check back.

If you've ever wondered what levers you have available to pull for creating application rule logic then this is your one stop shop for an explanation.

 

There's a fully documented cheat sheet of the parameters you can use in application rules, located at the link below:

Application Rules Cheat Sheet 

 

There were some operators that I personally wasn't aware of, for example using ~ instead of not() to negate the contains/begins/ends functions, and I had forgotten about the ucount and unique operators that are available.

 

Also, v11.x introduced the ability to have metakeys on both the left and right side of operators (the table in that link explains which ones are available).

 

Overall, this is a good resource to bookmark if you are developing application rules in RSA NetWitness.

A recent customer question about alerting on Uptime values from the REST API got me digging into the Health and Wellness Policies for a better solution.

 

The request was to alert when the uptime value for specific device families was reset, indicating that something had occurred with the service and reset the uptime value.  Repeated resets of the uptime value could indicate an issue with the service that needs attention (core files created as a result of decoder service crashes were at the root of this request).

 

Here is my solution:

  • Admin > Health and Wellness > Policies
  • Select the + and add a new policy for the service that you want to monitor
  • In this case the Archiver service is our example

  • Add a new Rule
  • The conditions:
    • Alarm = Regex match on .., .. seconds.*
    • Recovery = !Regex match on .., .. seconds.*

  • Save
  • Set your notification output at the bottom
  • Save and enable the policy at the top

 

Now you have a policy that alerts when the uptime is within the first minute of restarting (.. is two digits, so 10-59 seconds) and recovers once the uptime no longer matches the pattern (when the value switches from seconds to minutes and seconds, at 61 seconds +).

 

Alarm

Recovery

 

 

Details on the pattern developed:

The uptime value is the number of seconds, followed by a comma, then the friendly time breakdown of those seconds in years, months, weeks, days, hours, minutes and seconds.

.. = matches 2 digits for the seconds (between 10-59 seconds after the service restarted)

, .. = matches the same seconds value repeated after the comma

seconds.* = the word seconds and the trailing space in the value

When this pattern is matched (between 10-59 seconds after restart) there will be an alarm; it clears when the pattern is no longer matched (60 seconds +).
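If you want to sanity-check the pattern outside of Health and Wellness, a quick test in Python helps (the sample uptime strings are my assumption of the format, based on the description above):

import re

# Alarm condition: two digits, comma, space, two digits, " seconds", anything
pattern = re.compile(r'.., .. seconds.*')

print(bool(pattern.match("45, 45 seconds ")))         # True  - 45s after restart
print(bool(pattern.match("61, 1 minute 1 second ")))  # False - past the first minute
print(bool(pattern.match("8, 8 seconds ")))           # False - single digit, under 10s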

Hunting in RDP Traffic

Posted by Eric Partington, Nov 12, 2018

I was just working in the NOC for HackFest 2018 in Quebec City (https://hackfest.ca/en/) and playing with RDP traffic to see who was potentially accessing remote systems on the network.  

 

This was inspired by this deck from BroCon (https://www.bro.org/brocon2015/slides/liburdi_hunting_rdp.pdf) and some recent enhancements to the RDP parser.

 

Recent enhancements to the RDP parser include extracting the screen resolution, the username, the hostname, the certificate and other details.

 

With some simple reporting language we can create a number of rules that look for various properties of RDP traffic based on direction (should you have RDP inbound from the internet? should you have RDP outbound to the internet?) as well as volume-based rules (which system has the most RDP session logins by unique username? which system connects to the most systems by distinct count of IP?).
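For example, a quick drill for RDP coming in from the internet (assuming the traffic_flow Lua parser is deployed so the direction meta is populated) looks like:

service=3389 && direction='inbound'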

 

The report language is hosted here; simply import it into your Reporting Engine and point it at your packet broker/concentrators.

GitHub - epartington/rsa_nw_re_rdp: RDP summary reports for hunting/identification 

 

Please let me know if there are modifications to the Report that make it more useful to you.

 

Rules included in the report:

  • most frequent RDP hostnames
  • most frequent RDP keyboard languages
  • least frequent RDP keyboard languages
  • Outbound/Inbound/Lateral RDP traffic
  • Most frequent RDP screen resolutions
  • Most frequent RDP Usernames
  • Usernames by distinct destination IP
  • RDP Hosts with more than 1 username from them

A couple of clients have asked about a generic ESA template that can be used to alert into ArcSight for correlation with other sources.  After some testing and configuration this was the template that was created.  One thing that had us stuck for a short period of time was the timezone offset in the FreeMarker template to get ArcSight to read the time as UTC and apply the correct time offset.

 

Hopefully this helps others with this need.

 

<#include "macros.ftl"/>
CEF:0|RSA|NetWitness ESA|11.0|${moduleName}|${moduleName}|${severity}|<#list events as x>externalId=${x.sessionid!" "} proto=${x.ip_proto!" "} categoryOutcome=/Attempt categoryObject=Network categorySignificance=/Informational/Warning categoryBehavior=/Communicate host=<#if x.alias_host?has_content><@value_of x.alias_host /></#if> src=${x.ip_src!" "} spt=${x.tcp_srcport!" "} dhost=${x.host_dst!" "} dst=${x.ip_dst!" "} dpt=${x.tcp_dstport!" "} act=${x.action!" "} rt=${time?datetime?string("MMM dd yyyy HH:mm:ss z")} duser=${x.ad_username_dst!" "} suser=${x.ad_username_src!" "} filePath=${x.filename!" "} requestMethod=${x.action!" "} destinationDnsDomain=<#if x.alias_host?has_content><@value_of x.alias_host /></#if>  destinationServiceName=${x.service!" "}</#list> cs4=${moduleName} cs5=PROD cs6=MalwareCommunication

 

This CEF template is added under the Admin > System > Global Notifications > Templates tab and referenced in the ESA rules that need to alert out to ArcSight when they fire.

Background Information:

  • v10.6.x had a method in the UI to add a standalone NW head server for investigation purposes (and to help with DR scenarios) using legacy authentication (static local credentials).
  • v11.x appeared to have removed that capability, which was blocking some of the larger upgrades; however, the capability still exists, it is just not presented in the UI as it was in v10.6.
  • Having a DR investigation server also helps provide analysts with continuous access to data during the major upgrade from v10.6.x to v11.2, which is incredibly beneficial.

 

Review the upgrade guide and the "Mixed Mode" notes at the link below for more details on the upgrade and running in mixed mode:

https://community.rsa.com/community/products/netwitness/blog/2018/10/18/running-rsa-netwitness-mixed-mode

 

If you spin up a DR v11.2 standalone NW server from the ISO/OVA you can connect it to an existing set of concentrators using local credentials (Note: DO NOT expect that Live or ESA will function as they do on the actual node0 NW server.  This method gets you a window into the meta for investigation, reporting and Dashboards only!)

 

Here's the steps you'll need to follow once you have your DR v11.2 NW server spun up:

 

Create local credentials to use for authentication with the concentrator(s) or broker(s) that you will connect to under

Admin > Service > <service> > Security

 

 

You will need to add some permissions to the aggregation role to allow the Event Analysis function to work:

Replicate the role and user to the other services that you will need to authenticate to.

 

Your 11.2 DR investigation head server can connect to a 10.6.6 Broker or Concentrator with the following:

 

  • Broker service > Explore
  • Select the broker
  • Right click and select properties
  • Select add from the drop-down
  • Add the concentrators that need to be connected (as they were in 10.6). Below are the ports that are required for the connection:

  • 50005 for Concentrators
  • 56005 for SSL to Concentrators
  • 50003 to Broker 
  • 56003 for SSL to Broker

 

device=<ip>:<port> username=<> password=<>

 

Click send.

 

You should get a successful connection and in the config section you will now see the aggregation connection setup:

 

Click Start aggregation and make sure Aggregate Autostart is checked:

 

Using this DR investigation server, the following process can help in upgrading from v10.6.6 to v11.2+:

 

Initial State:

 

Upgrade the new Investigation Head:

 

Investigators now can use the 11.2 head to investigate without interruption during the production NW head server upgrade.

 

Upgrade the primary (node0) NW head server and ESA:

Upgrade the decoder/concentrator pairs:

Note: an outage will occur here for investigation as the stacks are upgraded

Now you'll be running v11.2 as you were in 10.6, with a DR investigation head server keeping your Investigation and Events views accessible.

Context menu actions have long been a part of the RSA NetWitness Platform. v11.2 brought a few nice touches to help manage the menu items as well as extend the functions into more areas of the product.

 

See here for previous information on the External Lookup options:

Context Menus - OOTB Options 

 

And these for Custom Additions that are useful to Analysts:

Context Menu - Microsoft EventID 

Context Menu - VirusTotal Hash Lookup 

Context Menu - RSA NW to Splunk 

Context Menu - Investigate IP from DNS 

Context Menu - Cymon.io 

 

As always access to the administration location is located here:

Admin > System > Context Menu Actions

 

The first thing you will notice is a bit of a different look, since a good bit of cleanup has been done in the UI.

 

Before we start trimming the menu items... here is what it looks like before the changes:

Data Science/Scan for Malware/Live Lookup are all candidates for reduction.

 

When you open an existing action or create a new one you will also see some new improvements.

It is no longer just a large block of text that can be edited if you know what and where to change, but a set of options you can adjust to implement your custom action (or tweak existing ones).

 

You can switch to the advanced view to get back to the old freeform world if you want to.

 

Clean up

To clean up the menu for your analysts, consider disabling the following:

  • Data Science - if you don't have an RSA warehouse installed, sort by Group Name, locate the Data Science group and disable all of its rules (4)
  • External Lookup - disable any items that are not used or not important for your analysts
  • Scan for Malware - not needed if you are logs only, or if you have packets or endpoint but don't use Malware Analysis
  • Live Lookup - mostly doesn't provide value to analysts

Now you should have a nice clean right click action menu available to investigators to do their job better and faster.

The RSA NetWitness Platform has an integrated agent that currently provides base Endpoint Detection and Response (EDR) functions but will shortly have more complete parity with ECAT (in v11.x).  One beneficial feature of the Insights agent (otherwise called the NWE Insights Agent) is Windows log collection and forwarding.

 

Here is the agent Install Guide for v11.2:

https://community.rsa.com/docs/DOC-96206

 

The Endpoint packager is built from the Endpoint Server (Admin > Services), where you can define your configuration options.  To enable Windows log collection, check the box at the bottom of the initial screen.

 

This expands the options for Windows log collection...

Define one or more Log Decoder/Collector services in the current RSA NetWitness deployment to send the endpoint logs to (define a primary and secondary destination)

 

Define your channels to collect from.

The default list includes 4 channels (System, Security, Application and ForwardedEvents).

You can also add any channel you want, as long as you know the EXACT name of it:

In the Enter Filter Option selection box, enter the channel name.

In this case Windows PowerShell (again, make sure you match the exact event channel name or you will run into issues).

We could also choose to add some other useful event channels:

  • Microsoft-Windows-Sysmon/Operational
  • Microsoft-Windows-TerminalServices-LocalSessionManager/Operational
  • Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational

 

You can choose to filter these channels to include or exclude certain events as well.

 

Finally, set the protocol to either UDP/TCP or TLS.

 

Generate Agent generates the download that includes the packager and the config files that define the agent settings.

 

From there you can build the agents for Windows, Linux and Mac from a local Windows desktop.

Agents are installed as normal using local credentials or your package management tool of choice.

 

Now that you have Windows events forwarded to your Log Decoders, make sure you have the Windows parser downloaded from RSA Live and deployed to your Log Decoders to start parsing the events.

The Windows parser is slightly different from the other Windows log parsers (nic, snare, er) in that there are only 7 message sections (one each for the default channels, plus a TestEvent and Windows_Generic).

 

For the OOTB channels the Message section defines all the keys that could exist and then maps them to the table-map.xml values as well as the ec tags. 

Log Parser Customization 

 

The Windows_Generic section is the catchall for this parser, and any custom channel that is added will only parse through this section.  This catchall needs some help to make use of the keys that will come from the channels we selected, which is where windowsmsg-custom.xml (a custom addition to the Windows parser) comes in (an internal feature enhancement has been filed to make these OOTB).

 

Get the windows-custom parser from here:

GitHub - epartington/rsa_nw_log_windows: rsa windows parser for nw endpoint windows logs 

Add to your windows parser folder on the log decoder(s) that you configured in the endpoint config

/etc/netwitness/ng/envision/etc/devices/windows/

 

Reload your parsers.

Now you should have additional meta available for these additional event channels.

 

 

 

What happens if you want to change your logging configuration but don't want to re-roll an agent? The Log Collection Guide below shows how to add a new config file to the agent directory to update the channel information:

https://community.rsa.com/docs/DOC-95781

(page 113)

 

Currently the free NW Endpoint Insights agent doesn't include agent config management, so this change needs to be made manually at the moment.  Future versions will include config management to make this easier.

 

Now you can accomplish things like this:

Logs - Collecting Windows Events with WEC 

Without needing a WEC/WEF server - especially useful if you are deploying Sysmon and want to use the NWE agent to pull back the event channel.

 

While you are in the Log Collection frame of mind, why not create a Profile in Investigation for NWE logs. 

Pre-Query = device.type='windows'

 

In 11.2 you can create a profile (which isn't new) as well as meta and column groups that are flexible (new in 11.2).  This means the pre-query is locked but you are able to switch meta groups within the profile (very handy).

 

 

Hopefully this helpful addition to our agent reduces the friction of collecting Windows events.  If there are specific event channels that are high on the priority list for collection, add them to the comments below and I'll get them added to the internal RFE.

Encrypted traffic has always posed more challenges to network defenders than plaintext traffic, but thanks to some recent enhancements in NetWitness 11.2 and a really useful feed from abuse.ch, defenders have a new tool in their toolbox.

 

11.2 added the ability to enable TLS certificate hashing via an optional parameter on your decoders:

Decoder Configuration Guide for Version 11.2 

(search for TLS certificate hashing - page 164)

  • Explore > /decoder/parsers/config/parsers.options
  • Add this after the entropy line (space delimited): HTTPS="cert.sha1=true"
  • Make sure the HTTPS native parser is enabled on the decoder

 

This new meta is the SHA1 hash of any DER encoded certificates seen during the TLS handshake. It is written to cert.checksum, which is the same key that NetWitness Endpoint writes its values to.
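If you want to verify what the decoder writes, the same SHA1-over-DER calculation is easy to reproduce against a live server (a Python sketch; example.com is just a placeholder):

import hashlib
import ssl

# Fetch the server certificate (PEM), convert it to DER, and hash it -
# the result should line up with the cert.checksum meta on the decoder
pem = ssl.get_server_certificate(("example.com", 443))
der = ssl.PEM_cert_to_DER_cert(pem)
print(hashlib.sha1(der).hexdigest())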

 

Now is also a good time to revisit any application rules that might be truncating encrypted traffic, and take advantage of the new parameters added in 11.1 related to truncation after the handshake.

 

 

Now that we have a field for the certificate hash, we need a method to track known certificate checksums to match against.

sslbl.abuse.ch has a feed that tracks these blacklisted certificates, along with information to identify the potential attacker campaign.

 

This is the feed (SSLBL Extended); you could also leverage the Dyre list as well.

https://sslbl.abuse.ch/downloads/ssl_extended.csv 

 

The headers look like this:

# Timestamp of Listing (UTC),Referencing Sample (MD5),Destination IP,Destination Port,SSL certificate SHA1 Fingerprint,Listing reason

Map the feed as follows:

Configure > Custom Feeds > New Feed > Custom Feed

 

Add the URL as above and set it to recur every hour (to get new data into the feed in a reasonable time).

 

Apply it to your decoders (you will notice that the feed is also added to your Context Hub in 11.2, which means you can create a feed that is used both as a feed and as an ESA whitelist or blacklist).

 

 

Select the Non-IP type, then map column 5 to cert.checksum and column 6 to IOC (since a match here gives us pretty high confidence that the traffic is bad).
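If you want to double-check the column positions before saving the feed definition, a few lines of Python against the same CSV will print them (1-based, the way the feed wizard counts them):

import csv
import urllib.request

url = "https://sslbl.abuse.ch/downloads/ssl_extended.csv"
with urllib.request.urlopen(url) as resp:
    lines = resp.read().decode("utf-8").splitlines()

# Skip the '#' comment lines, then print the first data row
rows = csv.reader(line for line in lines if not line.startswith("#"))
for row in rows:
    for i, col in enumerate(row, start=1):
        print(i, col)  # column 5 = SHA1 fingerprint, column 6 = listing reason
    break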

 

And now you have an updated feed that will alert you to certificate usage that matches known lists of badness.

 

An example output looks like this (the IOC value always ends with <space>C&C):

 

(the client value is from another science project related to JA3 signatures... in this case a double confirmation of Gootkit)

 

The testing data used to play with this came from here:

Malware-Traffic-Analysis.net - 2018-09-05 - Emotet infection with IcedID banking Trojan and AZORult 

 

Great resource and challenges if you are looking for some live fire exercises.

 

To wrap this up, an ESA rule can be created with the following criteria to identify these communications and create an alert:

/*
Module debug section. If this is empty then debugging is off.
*/
@Name("outbound_blacklisted_ssl_cert: {ioc}")
@Description('cert.checksum + ssl abuse blacklist all have ioc ends with <space>c&c')
@RSA
SELECT * FROM Event(
/* Statement: outbound_ssl_crypto_cnc */
(
direction.toLowerCase() IN ( 'outbound' ) AND
service IN ( 443 ) AND
ioc IS NOT NULL AND
matchLike(ioc,'% C&C' )
/*isOneOfIgnoreCase(ioc,{ '%c&c' })*/
)
) ;

The reason advanced mode was needed is that the ioc metakey had to be wildcarded to look for any match of <name><space>C&C, and I didn't want to enumerate all the potential names from the feed (the UI doesn't provide a means to do this in the basic rule builder for arrays, of which ioc is string[]).

 

Another thing to notice is that the @Name syntax creates a parameterized name that is only available in the alert details of the raw alert.

I was hoping to do more with that data but so far not having much luck.

 

You can also wrap this into a Respond alert to make sure you group all potential communications for a system together with these alerts (grouping by source IP).

 

If everything works correctly then you get Respond alerts like this that you should investigate:

A recent advisory was sent out for firmware updates to a number of base components in NetWitness.

 

RSA NetWitness Availability of BIOS & iDRAC Firmware Updates

 

There were three components that were mentioned that needed potential updates and instructions to update them were provided in the Advisory.

 

How do you gather the state of the environment quickly, with the least number of steps, so that you can determine if there is work that needs to be done?

 

Salt to the rescue ...

You might need these tools installed on your appliances to run the commands below (install perccli and ipmitool).

 

From the NW11 head server (node0)

salt '*' pkg.install "perccli,ipmitool" 2>/dev/null

Then you can query for the current versions of the software for PERC, BIOS and iDRAC:

salt '*' cmd.run 'hostname; /opt/MegaRAID/perccli/perccli64 /c0 show | grep "Product Name"; /opt/MegaRAID/perccli/perccli64 /c0 show | grep "FW Package Build"; /opt/MegaRAID/perccli/perccli64 /c1 show | grep "Product Name"; /opt/MegaRAID/perccli/perccli64 /c1 show | grep "FW Package Build"' 2>/dev/null
salt '*' cmd.run 'hostname; ipmitool -I open bmc info | grep "Firmware Revision"' 2>/dev/null
salt '*' cmd.run 'hostname; ip address show dev eth0 | grep inet; dmidecode -s system-serial-number; dmidecode -s bios-version; dmidecode -s system-product-name;' 2>&-

 

The output will list the host and the version of the software, which can be used to determine if an update is required on your NetWitness appliances.

 

Ideally this would be in Health and Wellness, where policies could be written against it with alerts (and an export-to-CSV function would be handy).

Wireshark has been around for a long time, and its display filters are good reference points for learning about network (packet) traffic as well as how to navigate around various parts of sessions or streams.

 

Below you will find a handy reference which allows you to cross-reference many of the common Wireshark filters with their respective RSA NetWitness queries. 

 

This is where I pulled the Wireshark display filters from:  DisplayFilters - The Wireshark Wiki 

 

Show only SMTP (port 25) and ICMP traffic:

Wireshark: tcp.port eq 25 or icmp
NetWitness: service=25 || ip.proto=1,58 -> (icmp or ipv6 icmp)
NetWitness: tcp.dstport=25 || ip.proto=1,58 -> (icmp or ipv6 icmp)

 

Show only traffic in the LAN (192.168.x.x), between workstations and servers -- no Internet:

Wireshark: ip.src==192.168.0.0/16 and ip.dst==192.168.0.0/16
NetWitness: ip.src=192.168.0.0/16 && ip.dst=192.168.0.0/16
NetWitness: direction='lateral' (RFC1918 to RFC1918)

 

Filter on Windows -- Filter out noise, while watching Windows Client - DC exchanges

Wireshark: smb || nbns || dcerpc || nbss || dns
NetWitness: service=139,137,135,139,53

 

Match HTTP requests where the last characters in the uri are the characters "gl=se":

Wireshark: http.request.uri matches "gl=se$"
NetWitness: service=80 && query ends 'gl=se'

 

Filter by a protocol ( e.g. SIP ) and filter out unwanted IPs:

Wireshark: ip.src != xxx.xxx.xxx.xxx && ip.dst != xxx.xxx.xxx.xxx && sip
NetWitness: service=5060 && ip.src!=xxx.xxx.xxx.xxx && ip.dst!=xxx.xxx.xxx.xxx

 

ip.addr == 10.43.54.65 is equivalent to:

Wireshark: ip.src == 10.43.54.65 or ip.dst == 10.43.54.65
NetWitness: ip.all=10.43.54.65
NetWitness: ip.src=10.43.54.65 || ip.dst=10.43.54.65

 

Here's where I pulled some additional filters for mapping:  HTTP Packet Capturing to debug Apache 

 

View all http traffic

Wireshark: http
NetWitness: service=80

 

View all flash video stuff

Wireshark: http.request.uri contains "flv" or http.request.uri contains "swf" or http.content_type contains "flash" or http.content_type contains "video"
NetWitness: service=80 && (query contains 'flv' || query contains 'swf' || content contains 'flash' || content contains 'video')

 

Show only certain responses

Wireshark: http.response.code == 404
NetWitness: service=80 && error begins 404
NetWitness: service=80 && result.code='404'

Wireshark: http.response.code == 200
NetWitness: service=80 && error !exists (200s are not explicitly captured)
NetWitness: service=80 && result.code !exists (200s are not explicitly captured)

 

Show only certain http methods

Wireshark: http.request.method == "POST" || http.request.method == "PUT"
NetWitness: service=80 && action='post','put'

 

Show only filetypes that begin with "text"

Wireshark: http.content_type[0:4] == "text"
NetWitness: service=80 && filetype begins 'text'
NetWitness: service=80 && filename begins 'text'

 

Show only javascript

Wireshark: http.content_type contains "javascript"
NetWitness: service=80 && content contains 'javascript'

 

Show all http with content-type="image/(gif|jpeg|png|etc)" §

WiresharkNetWitness
http.content_type[0:5] == "image"service=80 && content ='image/gif','image/jpeg','image/png','image/etc'

 

Show all http with content-type="image/gif" §

WiresharkNetWitness
http.content_type == "image/gif"service=80 && content ='image/gif'

 

Hope this is helpful for everyone and as always, Happy Hunting!

Microsoft has been converting customers to O365 for a while; as a result, more and more traffic is being routed from on-premise out to Microsoft clouds, potentially putting it into NetWitness visibility.  Being able to group that traffic into a bucket for potential whitelisting, or at the very least identification, could be a useful ability.

 

Microsoft used to provide an XML file of all the IPv4 addresses, IPv6 addresses and URLs required for accessing their O365 services.  This is being deprecated in October of 2018 in favor of API access.

 

This page gives a great explainer on the data in the API and how to interact with it, as well as a Python and PowerShell script to grab data for use in firewalls etc.:

 

Managing Office 365 endpoints - Office 365 

 

The PowerShell script is where I started, so that a script could be run on a client workstation to determine if there were any updates and then apply the relevant data to the NW environment.  Eventually, hopefully, this gets into the generic whitelisting process that is being developed so that it is programmatically delivered to NW environments.

 

GitHub - epartington/rsa_nw_lua_feed_o365_whitelist: whitelisting Office365 traffic using Lua and Feeds 

 

The script provided by Microsoft was modified to create 3 output files for use in NetWitness:

o365ipv4out.txt

o365ipv6out.txt

o365urlOut.txt

 

The IP feeds are in a format that can be used as feeds in NetWitness; the GitHub link with the code provides the XML for them to map to the same keys as the Lua parser, so there is alignment between the three.

 

The o365urlOut.txt file is used in a Lua parser to map against the alias.host key.  A Lua parser was used because of a limitation in the feeds engine that prevents wildcard matching: matches in feeds need to be exact, and some of the hosts provided by the API are *.domain.com.  The Lua parser attempts a direct exact match first, then falls back to subdomain matches to see if there are any hits there.
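To make that matching behaviour concrete, here is the lookup logic as a minimal Python sketch (the real implementation lives in the Lua parser linked above; this just illustrates exact-match-first with a parent-domain fallback):

# A few entries from the generated host table
o365_hosts = {
    "aadrm.com": "office365",
    "adl.windows.com": "office365",
}

def lookup(hostname):
    # 1) try a direct exact match
    if hostname in o365_hosts:
        return o365_hosts[hostname]
    # 2) fall back to each parent domain (a.b.c.com -> b.c.com -> c.com)
    parts = hostname.split(".")
    for i in range(1, len(parts) - 1):
        parent = ".".join(parts[i:])
        if parent in o365_hosts:
            return o365_hosts[parent]
    return None

print(lookup("adl.windows.com"))  # exact hit
print(lookup("foo.aadrm.com"))    # hit via parent-domain fallback
print(lookup("example.org"))      # no hit -> None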

 

The Lua parser has the updated host list as of the published version; as Microsoft updates their API, the list needs to be changed.  That's where the PS1 script comes in.  It can be run from a client workstation; if there are changes, open the output txt file, copy the text to the decoder > Config > Files tab, and replace the text in the parser to include the published changes.  The decoder then needs its parsers reloaded, which can be done from the REST or Explore menu, to load the content into the decoder.  You can also push the updated parser to all your other Log and Packet Decoders to keep them up to date as well.

 

The output of all the content is data in the filter metakey:

filter='office365'

filter='whitelist'

 

Sample URL output

["aadrm.com"] = "office365",
["acompli.net"] = "office365",
["adhybridhealth.azure.com"] = "office365",
["adl.windows.com"] = "office365",
["api.microsoftstream.com"] = "office365",

 

Sample IPv4 output

104.146.0.0/19,whitelist,office365
104.146.128.0/17,whitelist,office365
104.209.144.16/29,whitelist,office365
104.209.35.177/32,whitelist,office365

 

My knowledge of PowerShell was pretty close to 0 at the beginning of this exercise; now it's closer to 0.5.

 

To-do items you can help with:

Ideally I would like the script to output the serviceArea of each URL or IP network, so that you can tell which O365 service the content belongs to and get more granular data on what part of the suite is being used.

serviceArea = "Exchange","sway","proplus","yammer" ...

If you know how to modify the script to do this, I am more than happy to update the script to include those changes.  Ideally 3-4 levels of filter would be perfect.

 

whitelist,office365,yammer

 

would be sufficient granularity, I think.
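For anyone inclined to take a run at this, the web service already returns a serviceArea field per entry, so the grouping could look something like this in Python (a sketch against the documented JSON; porting it into the PowerShell script is the part that still needs doing):

import json
import urllib.request
import uuid

url = ("https://endpoints.office.com/endpoints/worldwide"
       "?clientrequestid=" + str(uuid.uuid4()))
with urllib.request.urlopen(url) as resp:
    endpoints = json.load(resp)

# Tag each URL with its serviceArea (Exchange, SharePoint, Skype, Common)
for entry in endpoints:
    area = entry.get("serviceArea", "Common").lower()
    for host in entry.get("urls", []):
        print("%s,whitelist,office365,%s" % (host.lstrip("*."), area))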

 

Changes you might make:

The key read from is alias.host.  If you have logs that write values into domain.dst or host.dst that you want considered, and you are on NW11, you can change the key to host.all to include all of those at once in the filtering (just make sure that key is in your index-decoder-custom.xml).

 

Benefits of using this:

The ability to reduce the noise on the network for known or trusted communications to Microsoft that can be treated as lower priority - especially when investigating outbound traffic, where you can remove known O365 traffic (PowerShell from an endpoint to the internet != Microsoft).

 

As an FYI, so far all the test data that I have lists the outbound traffic as heading to org.dst='Microsoft Hosting'.  I'm sure on a wider scale of data that isn't true, but so far the whitelist lines up 100% with that org.dst.
