Microsoft has been converting customers to O365 for a while; as a result, more and more traffic is being routed from on-premises networks out to Microsoft's clouds, potentially putting it within visibility of NetWitness.  Being able to group that traffic into a bucket for potential whitelisting, or at the very least identification, is a useful ability.

 

Microsoft used to provide an XML file listing all the IPv4 addresses, IPv6 addresses and URLs required for accessing their O365 services.  That file is being deprecated in October 2018 in favor of API access.

 

This page gives a great explainer on the data in the API and how to interact with it, as well as Python and PowerShell scripts to grab the data for use in firewalls, etc.

 

Managing Office 365 endpoints - Office 365 
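If you just want to eyeball what the API returns before committing to the full PowerShell script, the web-service call is simple enough to script directly.  Below is a rough Python sketch; the URL and the JSON field names (ips, urls, serviceArea) are taken from Microsoft's documentation of the endpoints web service, so treat them as assumptions and verify against the page above.

# Minimal sketch: pull the current worldwide O365 endpoint data from the web service.
# Not the Microsoft-provided script; URL and field names are assumptions per their docs.
import json
import uuid
import urllib.request

request_id = str(uuid.uuid4())  # the service expects a client GUID on each request
url = "https://endpoints.office.com/endpoints/worldwide?clientrequestid=" + request_id

with urllib.request.urlopen(url) as resp:
    endpoint_sets = json.load(resp)

ipv4, ipv6, hosts = set(), set(), set()
for es in endpoint_sets:
    for net in es.get("ips", []):
        (ipv6 if ":" in net else ipv4).add(net)
    hosts.update(es.get("urls", []))

print(len(ipv4), "IPv4 networks,", len(ipv6), "IPv6 networks,", len(hosts), "hosts")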

 

The PowerShell script is where I started, so that a script could be run on a client workstation to determine whether there were any updates and then apply the relevant data to the NetWitness environment.  Eventually, hopefully, this gets folded into the generic whitelisting process that is being developed so that it is programmatically delivered to NetWitness environments.

 

GitHub - epartington/rsa_nw_lua_feed_o365_whitelist: whitelisting Office365 traffic using Lua and Feeds 

 

The script provided by Microsoft was modified to create three output files for use in NetWitness:

o365ipv4out.txt

o365ipv6out.txt

o365urlOut.txt

 

The IP files are in a format that can be used as feeds in NetWitness; the GitHub link above also provides the feed definition XML so they map to the same keys as the Lua parser, giving alignment between the three.

 

The o365urlOut.txt file is used in a Lua parser to map against the alias.host key.  A Lua parser is used because of a limitation in the feeds engine that prevents wildcard matching: matches in feeds must be exact, and some of the hosts provided by the API are of the form *.domain.com.  The Lua parser attempts an exact match first, then falls back to subdomain matching to see if there are any hits.
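As an illustration of that match order, here is a small Python sketch of the logic described above (not the Lua parser itself):

# Exact match on the host first, then walk up the parent domains so that
# "outlook.office365.com" can still match a "*.office365.com"-style entry.
o365_hosts = {
    "aadrm.com": "office365",
    "office365.com": "office365",   # stands in for a *.office365.com wildcard entry
}

def match_host(host):
    if host in o365_hosts:                      # direct exact match
        return o365_hosts[host]
    labels = host.split(".")
    for i in range(1, len(labels) - 1):         # subdomain fallback
        parent = ".".join(labels[i:])
        if parent in o365_hosts:
            return o365_hosts[parent]
    return None

print(match_host("outlook.office365.com"))      # office365
print(match_host("example.com"))                # None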

 

The Lua parser ships with the host list that was current as of the published version; as Microsoft updates the API, the list needs to be updated.  That's where the PS1 script comes in.  It can be run from a client workstation; if there are changes, open the output txt file, copy the text into the Decoder > Config > Files tab, and replace the host list in the parser to include the changes.  The decoder then needs its parsers reloaded, which can be done from the REST interface or the Explore menu, to load the updated content.  You can also push the updated parser to all your other Log and Packet Decoders to keep them up to date.

 

All of this content writes data into the filter metakey:

filter='office365'

filter='whitelist'

 

Sample URL output

["aadrm.com"] = "office365",
["acompli.net"] = "office365",
["adhybridhealth.azure.com"] = "office365",
["adl.windows.com"] = "office365",
["api.microsoftstream.com"] = "office365",

 

Sample IPv4 output

104.146.0.0/19,whitelist,office365
104.146.128.0/17,whitelist,office365
104.209.144.16/29,whitelist,office365
104.209.35.177/32,whitelist,office365

 

My knowledge of PowerShell was pretty close to 0 at the beginning of this exercise; now it's closer to 0.5.

 

To Do Items you can help with:

Ideally I would like the script to output the serviceArea of each URL or IP network so that you can tell which O365 service the content belongs to, giving you more granular data on what part of the suite is being used.

serviceArea = "Exchange","sway","proplus","yammer" ...

If you know how to modify the script to do this, I'm more than happy to update it to include those changes.  Ideally 3-4 levels of filter would be perfect.

 

whitelist,office365,yammer

 

would be sufficient granularity, I think.
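As a starting point for that to-do item, here is a rough Python sketch that emits a whitelist,office365,<serviceArea> line per IPv4 network, assuming the endpoints web service returns a serviceArea field on each endpoint set (as Microsoft's documentation describes).  The output format is hypothetical; it would need to match whatever the feed definition ends up expecting.

import json, uuid, urllib.request

url = ("https://endpoints.office.com/endpoints/worldwide?clientrequestid="
       + str(uuid.uuid4()))
with urllib.request.urlopen(url) as resp:
    endpoint_sets = json.load(resp)

for es in endpoint_sets:
    area = es.get("serviceArea", "unknown").lower()   # e.g. exchange, sharepoint, skype
    for net in es.get("ips", []):
        if ":" not in net:                            # IPv4 only for this sketch
            print("%s,whitelist,office365,%s" % (net, area))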

 

Changes you might make:

The key read from is alias.host.  If you have logs that write values into domain.dst or host.dst that you want considered, and you are on NW 11, you can change the key to host.all to include all of those at once in the filtering (just make sure that key is in your index-decoder-custom.xml).

 

Benefits of using this:

Ability to reduce the noise on the network for known or trusted communications to Microsoft that could be treated as lower priority, especially when investigating outbound traffic, where you can remove known O365 traffic from view (powershell from endpoint to internet != microsoft).

 

As an FYI, so far all the test data that I have lists the outbound traffic as heading to org.dst='Microsoft Hosting'.  I'm sure that won't hold true on a wider scale of data, but so far the whitelist lines up 100% with that org.dst.

The Respond Engine in 11.x contains several useful pivot points and capabilities that allow analysts and responders to quickly navigate from incidents and alerts to the events that interest them.

 

In this blog post, I'll be discussing how to further enable and improve those pivot options within alert details to provide both more pivot links and more easily usable ones.

 

During the incident aggregation process, the scripts that control the alert normalizations create several links (under Related Links) that appear within each alert's Event Details page.

 

These links allow analysts to copy/paste the URI into a browser and pivot directly to the events/session that caused the alert, or to an investigation query against the target host. 

 

What we'll be doing here is adding additional links to this Related Links section to allow for more pivot options, as well as adding the protocol and web server components to the existing URI in order to form a complete URL.

 

The files that we will be customizing for the first step are located on the Node0 (Admin) Server in the "/var/netwitness/respond-server/scripts" directory:

  • normalize_core_alerts.js
  • normalize_ma_alerts.js

 

(We will not be modifying the normalize_ecat_alerts.js or normalize_wtd_alerts.js scripts because the Related Links for those pivot to destinations outside of the NetWitness UI.)

 

As always, back up these files before committing any changes and be sure to double-check your changes for any errors.

 

Within each of these files, there is an exports.normalizeAlert function:

 

At the end of this function, just above the "return normalized;" statement, you will add the following lines of code:

 

//copying additional links created by the utils.js script to the event's related_links
for (var j = 0; j < normalized.events.length; j++) {
    if (normalized.related_links) {
        normalized.events[j].related_links = normalized.events[j].related_links.concat([normalized.related_links]);
    }
}

 

 

So the end of the exports.normalizeAlert function now looks like this:

 

Once you have done this, you can now move on to the next step in this process.  This step will require modification of 3 files - the two we have already changed plus the utils.js script - all still located in the "/var/netwitness/respond-server/scripts" directory:

  • normalize_core_alerts.js
  • normalize_ma_alerts.js
  • utils.js

 

Within each of these files search for "url:" to locate the statements that generate the URIs in Related Links.  You will be modifying these URIs into complete URLs by adding "https://<your_UI_IP_or_Hostname>/" to the beginning of the statement.

 

For example, this: 

 

...becomes this:

 

Do this for all of the "url:" statements, except this one in "normalize_core_alerts.js," as this pulls its URI / URL from a function in the script that we are already modifying:

 

Once you have finished modifying these files and double-checking your work for syntax (or other) errors, restart the Respond Server (systemctl restart rsa-nw-respond-server) and begin reaping your rewards:

 

RSA SecurID Access (Cloud Authentication Service) is an access and authentication platform with a hybrid on-premise and cloud-based service architecture. The Cloud Authentication Service helps secure access to SaaS and on-premise web applications for users, with a variety of authentication methods that provide multi-factor identity assurance. The Cloud Authentication Service can also accept authentication requests from a third-party SSO solution or web application that has been configured to use RSA SecurID Access as the identity provider (IdP) for authentication.

 

For More details:

RSA SecurID Access Overview 

Cloud Authentication Service Overview 

 

 

The RSA NetWitness Platform uses the Plugin Framework to connect to the RSA SecurID Access (Cloud Authentication Service) RESTful API and periodically query for admin activity.  This provides visibility into administrative activities such as Policy, Cluster, User, RADIUS Server and various other configuration changes.

 

Here is a detailed list of all the administrative activity that can be monitored via this integration:

Administration Log Messages for the Cloud Authentication Service 

 

Downloads and Documentation:

 

Configuration Guide: RSA SecurID Access Event Source Configuration Guide

(Note: This is currently only supported on RSA NetWitness 10.6.6.  It will also be included in 11.2, coming soon.)

Collector Package on RSA Live:  "RSA SecurID"

Parser on RSA Live: "CEF". (device.type=rsasecuridaccess) 

Servers are attacked every day, and sometimes those attacks are successful.  A lot of attention is paid to Windows executables that come down on the wire, but I also wanted to know when my systems were downloading ELF files, typically used by Linux systems.  With some recent exploits targeting Linux web servers and delivering crypto-mining software, I wrote a parser that attempts to identify Linux ELF files and places that meta in the 'filetype' meta key.

 

 

 

This isn't limited to crypto-mining ELF files and has detected many others in testing.  The parser is attached below.
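The detection itself hinges on the ELF magic bytes at the start of the file.  The attached parser is what actually runs on the decoder; purely as an illustration of the idea, here is a Python sketch of the same check:

ELF_MAGIC = b"\x7fELF"   # every ELF file starts with 0x7f 'E' 'L' 'F'

def looks_like_elf(payload: bytes) -> bool:
    return payload[:4] == ELF_MAGIC

with open("/bin/ls", "rb") as f:         # any Linux binary works as a test
    print(looks_like_elf(f.read(4)))     # True on a Linux system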

 

I hope you find this parser useful, and as always, happy hunting.

 

Chris

Whenever I am on an engagement that involves the analysis of network traffic, my preferred tool of choice is the RSA NetWitness Network (Packets) solution.  This provides full packet capture and allows for analysts to "go back to the video tape" to see what happened on the wire.  When the decoder examines the traffic, it tries to identify the service type associated with it.  HTTP, DNS, SSL and many others are some examples.  However, there are times when there is no defined service.  This results in 'service = 0'.  

 

When time allows, I like to go in there, but as you may notice, there can be quite a lot of data to go through.  Therefore, I like to focus on small slices of time and attributes about those sessions that make sense.  For example, I might choose the following query over the last 3 hours.

 

   service = 0 && ip.proto = 6 && direction = 'outbound' && tcpflags = 'syn' && tcpflags = 'ack' && tcpflags = 'psh'

 

This query will get to the sessions where:

   service = 0 [OTHER traffic not associated with a service type]

   ip.proto = 6 [TCP traffic]

   direction = 'outbound' [traffic that starts internally and destined for public IP space]

   tcpflags = [Focus on SYN, ACK, and PSH because those TCP flags would have to be present for the starting of a session and the sending of data]

 

Next, I look at associated TCP ports (tcp.srcport and tcp.dstport) as well as some IPs and org.dst meta.  What we recently found was a pipe-delimited medical record in clear text.  After some additional research, we came across this fantastic blog post from Tripwire discussing Health Level 7 (HL7).  In it, the author, Dallas Haselhorst, even showed the pipe-delimited format that the HL7 protocol uses to transfer this data.  It was this format that was observed on the wire.

 

While the idea of medical records being transmitted on the wire in clear text was alarming at first, it was determined that this was, in fact, standard practice.  When crossing the Internet, VPN tunnels would be used.

 

To get a sense of how much of this traffic I could see, I created a parser to identify it as 'service = 6046'.  I chose '6046' because that was the first port I observed, though in truth we eventually saw it on numerous tcp.dstport values.  This parser only identifies the traffic as HL7 and does not parse out the information contained in the fields.  Some of that data will likely contain Personal Health Information, and that is not something I wanted as meta.  But knowing it is on the wire in the clear was important to me and my client.
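For anyone who has never seen HL7 v2 on the wire: messages start with an MSH segment and use '|' as the field separator, which is the kind of marker a parser can key on.  A hedged Python illustration follows (the sample message is fabricated, and this is not the attached NetWitness parser):

SAMPLE = (b"MSH|^~\\&|SENDING_APP|SENDING_FAC|RECEIVING_APP|RECEIVING_FAC|"
          b"20180601120000||ADT^A01|MSG00001|P|2.3\r")

def looks_like_hl7(payload: bytes) -> bool:
    # MSH segment at the start of the stream, immediately followed by the field separator
    return payload.startswith(b"MSH|")

print(looks_like_hl7(SAMPLE))   # True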

 

If you work in an organization that handles this kind of data, this parser might help identify and validate where it's going.  

 

Good luck, and happy hunting.  Also..special thanks to one of my new team-mates, Jeremy Warren, who helped find this traffic.

 

Chris

If you haven’t seen the new RSA NetWitness Platform, you are missing out. Over the past 12 months, we have released new innovative capabilities, redesigned the user experience and invested in our core functionality to ultimately increase the speed of detection and response to threats.  We believe that we not only have to enable organizations to detect incidents earlier – before there is business impact, but that we must focus on the precious time of the human analysts – no matter what their skill level is.

 

That is why the RSA NetWitness Platform evolved SIEM provides security monitoring, detection and investigation tools under a single unified platform – across logs, network and endpoint data, with our new orchestration and automation capabilities to aggregate, standardize and normalize alerts from your entire stack of security technologies. And, we are excited to announce we are now offering user and entity behavior analytics as part of the RSA NetWitness Platform. In addition, because we believe it is absolutely critical to have end to end visibility, we are offering free endpoint insights to RSA NetWitness Platform customers.

 

I’ve only shared 4 of the 11 reasons so far (UEBA, Free Endpoint Insights, Orchestration & Automation and a redesigned and intuitive UI) – but there is so much more! Read more about the significant functionality the RSA NetWitness Platform 11.x provides to enable rapid detection and response.

Here are the steps you'll need to follow to initiate a fork of the RSA NetWitness Log Parsers Repository:

 

  • Create a fork (your copy of the full repo) from the link in the top right corner of the page: https://github.com/netwitness/nwlogparsers
  • Create a new branch in your repo for your work and add your new parser work under the community folder
  • Each new parser should be kept in a new folder with its name
    • Only add the parser .xml file (not the .zip or .envision file)
  • Create a new folder for your parser by clicking the new file button; when the box shows up, add the folder name, then a slash, then the file name (this creates a folder for your file, which isn't obvious from the UI)
  • Copy and paste the text of your parser into the editor
  • Only include the .xml and .ini files and nothing else (no .envision or .zip)
  • Add a commit description at the bottom and click Commit new file
  • Raise a pull request to merge your changes into the RSA NetWitness repo
    • Open your repo page on github.com
    • Click Create pull request
    • Name the pull request
    • The request will go to the RSA content team for review and merging into the parser(s)

How to update your forked log-parsers repository to get the latest version

  • Log into your github account
  • Locate the forked nw-logparsers repository in your account

  • Click on compare (right side)

You will get a notification like this if it's the first time you have compared:

There isn't anything to compare.
someone:master is up to date with all commits from me:master. Try switching the base for your comparison.

Click on switching the base

Or you will see this if you have compared before:

 

*** important  ***

GitHub defaults to syncing your changes to the upstream fork; in this case we want the opposite.

Change the base fork (left option) to be your fork (not netwitness/nw-logparsers).

Now you will see a different comparing changes screen and a note about comparing the same two things:

 

Click the compare across forks:

Click the head fork and change to the netwitness/ fork:

Now you see the commits since the repository was forked:

Click on Create pull request:

Give it a title and if required a description

 

On the next page click Create pull request

Click confirm merge:

Your copy of the RSA NetWitness nw-logparsers repo is now updated.

You can review the latest code and also submit new parsers or updates to your already submitted parsers using the above process.

 

The resource that helped me along with this was the following very helpful GitHub wiki page:

https://github.com/KirstieJane/STEMMRoleModels/wiki/Syncing-your-fork-to-the-original-repository-via-the-browser

The Google Cloud Platform provides Infrastructure as a Service, Platform as a Service and serverless computing environments.

 

The Google Cloud Platform services deliver audit logging to help answer the question of "who did what, where and when?"  Google Cloud Audit Logs are captured by Google StackDriver, which provides powerful monitoring, logging, and diagnostics, equipping users with insight into the health, performance, and availability of cloud-powered applications.  These insights enable users to find and fix issues faster, and StackDriver is natively integrated with the Google Cloud Platform.  For more information please visit the following links:

GCP: https://cloud.google.com/

Stackdriver: https://cloud.google.com/stackdriver/

Cloud Audit Logs: https://cloud.google.com/logging/docs/audit/

 

The logs from StackDriver can be imported into the RSA NetWitness Platform using the RSA NetWitness Google Cloud plugin. This plugin pulls logs from StackDriver via a Google Cloud Pub/Sub subscription.
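Under the hood this is the standard Pub/Sub pull model: StackDriver exports matching log entries to a topic, and a subscriber (here the plugin) pulls and acknowledges messages from a subscription.  As a rough illustration of that model only (not the plugin's code; the project and subscription names are made up, and the call style shown is the google-cloud-pubsub 2.x request-dict form, while older 1.x clients take positional arguments):

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path("my-gcp-project", "netwitness-audit-sub")

# Synchronously pull a small batch of exported StackDriver log entries
response = subscriber.pull(request={"subscription": sub_path, "max_messages": 10})
for received in response.received_messages:
    print(received.message.data)          # the audit log entry payload (JSON)

ack_ids = [m.ack_id for m in response.received_messages]
if ack_ids:
    subscriber.acknowledge(request={"subscription": sub_path, "ack_ids": ack_ids})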

 

Below is a basic flow diagram that outlines how the logs flow into the RSA NetWitness Platform:

 

 

Here are a few example use-cases that can provide insights into the capabilities of the Google Cloud Platform, using the Google Cloud Audit Logs:

 

  1. Resource creation, update or deletion.
  2. Addition of a user to a new IAM role.
  3. Access to sensitive Data and Resources.

 

To take advantage of this new capability within RSA NetWitness, please visit the link below and search for the terms below in RSA Live.

 

 

Configuration Guide:  Google Cloud Platform Event Source Configuration Guide

Collector Package on RSA Live: "Google Cloud Log Collector Configuration"

Parser on RSA Live: CEF

One of the useful features that was released with RSA NetWitness 11.1 was the NetWitness API which provides access to the Incidents and Alerts from the Respond Engine.

 

Documentation is located at the link below which is very useful from a schema perspective.

 

NetWitness Suite API User Guide for Version 11.1 

 

Using that Guide and a helpful internal training video, I found a very useful Google Chrome plugin to help test integrations with the API.

 

Restlet Client - REST API Testing - Chrome Web Store 

 

Using this plugin you can simulate RSA NetWitness Orchestrator web calls or anything that is calling the API to validate what to expect and test.

 

The first thing to do is follow general security best practice and create a dedicated role and user in RSA NetWitness, with permissions reduced to just what is required.  I am still testing to see if I can reduce the role further, but the current permissions are much less than the default 'admin' account.

 

Create a new Role (I called it Orchestration)

  • Admin > Security > Roles
  • Add the following rights
  • Alerting - access alerting module, view alerts, view rules
  • Incidents - access incident module, delete alerts and incidents, manage alert handling rules, view and manage incidents
  • Integration server - integration-server.api.access (this is the required permission according to the API doc)
  • Respond Server - respond-server.alert.delete, respond-server.alert.manage, respond-server.alert.read, respond-server.incident.delete, respond-server.incident.manage, respond-server.incident.read, respond-server.journal.manage, respond-server.journal.read, respond-server.notifications.manage, respond-server.notifications.read, respond-server.process.manage, respond-server.remediation.manage, respond-server.remediation.read, respond-server.security.manage, respond-server.security.read

 

Create a new User (I called it Orchestrator)

  • Add it to the Role: Orchestration

 

Now there is an account to use for testing with the API and integrating with RSA NetWitness Orchestrator.

 

Using Restlet-client import the three 'requests' from the github link below:

GitHub - epartington/rsa_nw_netwitnessapi 

 

This will give you nw-getauth, nw-get-incident and nw-get-alert.

 

Use nw-getauth to request a security token from the RSA NetWitness API (update for your RSA NetWitness interface)

 

 

Hit send and you should get back a 200 OK result with the security tokens to use in the next submissions.

 

Now you have the accessToken value to use to authenticate your next commands (copy the accessToken value)

Use the nw-get-incident request to get the details for a specific incident (INC-XXX)

 

Insert the value for the accessToken into the RSA NetWitness-Token field and hit send.

 

If everything works well you should get back another 200 OK with the JSON dump of the values for that specific incident.

 

 

You can click download to grab a JSON export of this incident to work with offline, investigate, or upload to a demo RSA NetWitness Orchestrator system... A sample one is included in the GitHub link.

 

To grab the alert details from this incident, use the third 'request', nw-get-alert.

 

Again you should get a 200 OK with the details of the Alerts for the incident requested

 

 

 

Again, you can download the JSON file to get the full details of the alert and know what you can work with in RSA NetWitness Orchestrator/Crystal Reports.
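The same three Restlet requests can also be scripted.  Below is a rough Python sketch; the endpoint paths, form fields and NetWitness-Token header name are assumptions pieced together from the flow above, so confirm them against the NetWitness Suite API User Guide before relying on it.

import requests

NW = "https://<your_UI_IP_or_Hostname>"

# 1. nw-getauth: trade the Orchestrator user's credentials for an accessToken
auth = requests.post(NW + "/rest/api/auth/userpass",
                     data={"username": "Orchestrator", "password": "********"},
                     verify=False).json()       # verify=False only for a lab self-signed cert
headers = {"NetWitness-Token": auth["accessToken"], "Accept": "application/json"}

# 2. nw-get-incident: details for a specific incident
incident = requests.get(NW + "/rest/api/incidents/INC-XXX", headers=headers, verify=False).json()

# 3. nw-get-alert: the alerts behind that incident
alerts = requests.get(NW + "/rest/api/incidents/INC-XXX/alerts", headers=headers, verify=False).json()

print(incident)
print(alerts)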

 

This is the equivalent output from the Respond Incident window (the alerts are missing the same items); the areas in the red box don't appear to be available in the API.  An internal Jira has been opened to enhance or resolve this (I can't figure out whether it's a bug or a feature request).

 

Version 11.0 NetWitness Logs and Network documentation is now available in French, Japanese, German, and Spanish.

 

RSA NetWitness Logs & Packets 11.0 (French) 

RSA NetWitness Logs & Packets 11.0 (Japanese)

RSA NetWitness Logs & Packets 11.0 (German) 

RSA NetWitness Logs & Packets 11.0 (Spanish) 

I've seen and heard a fair bit of discussion recently about whether it's possible to create custom matchCondition and groupBy fields within the new 11.x Respond Server.  "We have the capability within 10.x," the question goes, "but can we do this in 11.x?"

 

The answer is "Yes," but the process is slightly different, hence the reason for this blog post.

 

First, I think it will be useful to lay some groundwork and establish a common understanding of the incident creation process within Respond.

 

When the Respond Server consumes alerts off the message bus, those original, raw alerts can have many different meta fields.  The Respond Server needs to create a common schema for these alerts so that it knows where and how to store each piece of incoming data.  To do this, the Respond Server relies on a group of scripts to extract, normalize, and group meta.

 

With this common schema formed, the Respond Server can then begin to aggregate these alerts into incidents.  The initial aggregation process relies on matchCondition values within the Incident Rule.  For example, the OOTB User Behavior incident aggregation rule:

 

After aggregating incoming alerts based on these matchCondition values, the Respond Server then attempts to group them into separate Incidents (or suppress them) according to the groupBy values:

 

The common use case that we will be discussing here is in response to the need for aggregation and grouping using non-default options.  For instance, if we want to group incidents according to email subject, or threat description, or any other arbitrary or custom metakey, how do we add those so that they appear as options within the UI, AND so that the aggregation and grouping work?

 

****Before getting into any of the details, I strongly recommend that you try these procedures on a lab or test system first, both to familiarize yourself with the process and to ensure it works, before making any changes to a production system.****

 

Now then, on to the good stuff.

 

First thing we need to do is identify the locations and names of the specific files that we will be modifying.  This is one area where the process in 11.x is slightly different compared to 10.x, as these files are in a different location.

 

To modify the available groupBy and matchCondition fields, we need these two files on the Node0 Server (aka Head Server; aka Admin Server):

  • /var/netwitness/respond-server/scripts/normalize_alerts.js
  • /var/netwitness/respond-server/data/aggregation_rule_schema.json

 

AND, depending on the source(s) of the alert(s), we will ALSO need to modify (at least) one of the following:

  • /var/netwitness/respond-server/scripts/normalize_core_alerts.js
    • for alert sources:
      • ESA
      • Reporting Engine
      • NetWitness Investigate
  • /var/netwitness/respond-server/scripts/normalize_ecat_alerts.js
    • for alert source NetWitness Endpoint (aka ECAT)
  • /var/netwitness/respond-server/scripts/normalize_ma_alerts.js
    • for alert source Malware Analysis
  • /var/netwitness/respond-server/scripts/normalize_wtd_alerts.js
    • for alert source Web Threat Detection

 

Once we know the source of the incoming alerts, we will then need to identify the key(s) and/or value(s) within those raw alerts that we want to match and group against.  At this point, you will most likely need to examine the raw alert within the Respond Server.

 

Browse to Respond --> Alerts and select your specific alert.  The Raw Alert will be visible in the window on the right (or, if you clicked on the hyperlink Alert Name, it will be on the left in the newly-opened window), allowing you to scroll through the raw data and identify the key or value specific to your use case.

 

For my test case, I generated alerts from the ESA with different "event_type" values:

 

...which meant I first needed to modify the "/var/netwitness/respond-server/scripts/normalize_core_alerts.js" file and add the "event_type" key.

 

Within "normalize_core_alerts.js" there is a "generateEventInfo" function, which is where we can define additional keys to be normalized by the Respond Server, and where I added my "event_type" key.

****NOTE: it is VERY important that you pay close attention to the formatting and syntax within this file when you add a new key, especially where trailing commas are needed/not needed.****

 

Next I modified the "/var/netwitness/respond-server/scripts/normalize_alerts.js" file and added a new line for my "event_type" key to the "normalizeAlert" function.

****Again, it is very important that you pay close attention to the formatting and syntax when you add your keys to this function block.****

 

Then I modified the "/var/netwitness/respond-server/data/aggregation_rule_schema.json" file and added a new schema for the "event_type" key.

****And yet again, pay very close attention to formatting and syntax when modifying this file, especially where commas are needed/not needed.****

 

****I recommend saving copies of each of these modified files, as they get over-written during the upgrade process.****

 

And finally, restart the Respond Server service, either from within the UI:

 

...or via command line from the Node0 Server:

# systemctl restart rsa-nw-respond-server

 

Give it a minute or two for the service to fully restart, refresh your browser, and you can now select your custom matchCondition and groupBy keys from the drop down menus:

 

...and view the fruits of your labor:

 

**Bonus note for anyone still reading: if you're like me and it bugs you if something is not in alphabetical order, you can adjust where your custom keys appear within the dropdown menus by inserting your custom schema within the "aggregation_rule_schema.json" file in a different location.

 

In the example shown above, I added my custom schema to the very end of the file, which is why my key appeared at the very bottom of each dropdown menu.

 

But if I place my custom schema in alphabetical order within the JSON file, it will appear within the dropdown menu in its new location:

 

 

 

Happy customizing, everybody!

I leverage many sources to get ideas around spotting anomalies in an environment. One of the sources I leverage comes from the following Twitter account: Jack Crook (@jackcr).  @Jackcr provides many ideas around methods and approaches to separate known from unknown or common from rare.

 

This post inspired me to see if something similar could be implemented using RSA NetWitness Platform.

 

https://twitter.com/jackcr/status/993561834375598080

 

The basis for the report was to look for outbound communications where a domain only has one useragent accessing it (over a period of time) and that useragent contains 'mozilla'.

 

After a few tests in the lab, this was the rule that was developed:

 

name: "DomainsWithOneUserAgent(1)"
description: ""
path_for_export: "rsa-custom/rareUaDomain/DomainsWithOneUserAgent(1)"
query {
data_source_type: NWDB
netwitness_query {
select: "alias.host,countdistinct(client),distinct(client),org.dst,countdistinct(ip.src)"
where: "alias.host exists && client exists && direction = \'outbound\' && client contains \'mozilla\'"
group_by_keys: "AGGREGATE"
order_by_keys {
column_name: "countdistinct(client)"
sort_order: ASCENDING
}
order_by_keys {
column_name: "alias.host"
sort_order: ASCENDING
}
limit_results_count: 100
then_clause: "max_threshold(1,countdistinct(client))"
agg_session_threshold: 0
group_by: "alias.host"
group_by: "org.dst"
alias_names: ""
}
data_source_name: ""
}

We limit the returned results to the top 100 and apply a max threshold of 1 on countdistinct(client), to limit the output to domains that have only one unique user agent accessing them over the reporting time frame.

 

Results look like this (lab results)

The report is included at the GitHub link below.  As always, I'm curious to see how this tests on a larger network, to check validity and whether tweaks are necessary.  If you have any feedback, please let me know.

 

GitHub - epartington/rsa_nw_re_useragent_domain_rare 

 

Feedback always appreciated

 

Happy Hunting

Security Monitoring is no longer in its infancy and most organizations have some level of monitoring in place today.  So the question begs: if this is in place, then why do we continue to see organizations failing to secure their networks and protect what matters most to their business?

 

In reality there is no single reason for these breaches nor is there a silver bullet for curing the problem. If you had a chance to listen or watch the keynote from this year’s RSA Conference, delivered by RSA's President Rohit Ghai, you’ll recall that he said we have to look to the silver linings and see where small changes made across the Security Monitoring arena can add up to make significant overall improvements to our security.

 

There is sometimes a perception that deploying multiple security technologies will protect an organization.  In several recent discussions it's apparent that organizations continue to experience major breaches even with technology in place.  Sometimes they simply have the wrong technology.  Other times they have the right technology, but they're not actively using it or using it to its full potential.  The point is that it is less about what technology you have in place and more about what you actually do with it.  We've seen a number of examples where smaller security teams excel purely by knowing their own environment and having a thorough understanding of their tools, capabilities and making the most of what they have been able to invest in.  

 

This is indicative of another issue: skill shortages and finding the right security staff. It’s not necessarily about having the perfect team from day 1, but it’s about growing their skills in-house to make sure they know what they are defending (and why).  This involves having a development path to increase the organization’s Security Operations Maturity.

Knowing your own threat landscape and what gaps you have in threat detection are crucial in a modern Intelligence-led Security Operations Center.  The fact is that understanding your own network landscape is going to be crucial when you are defending it against the most sophisticated attackers.

 

In short, what we are saying here is that it is incredibly difficult to develop a SOC or any other Security Monitoring capability which is going to be effective from day 1. It is all about the journey. SOC Managers, CISO’s, CIO’s and others have to identify what is important to them and develop a plan which will provide the enhancements in capabilities (Tools, Technologies & Procedures) and ensure that these are supported both financially and by metrics.  This includes having a roadmap of where you want your Security Monitoring program to grow to and being able to test how well the team is performing via Red Team engagements as well as Controlled Attack and Response Exercises.

 

Join us on our upcoming webinar next month on June 12th to learn more.  We will discuss this journey with one of our customers who has taken this exact approach in building and developing their team into one of the most skilled Security Operations Centers that we’ve seen to date.   

 

Click here to register.

 

I wanted to give a special thanks to Azeem Aleem, Gareth Pritchard and David Gray for their contributions to this blog and upcoming webinar. 

The http_lua_options file has many functions that can be enabled and disabled to ensure proper parsing of your traffic.  One of the functions that is not listed in the OOTB options file is the browserprint function.  This is often deployed during RSA Incident Response engagements to give a bit more detail about the headers in HTTP sessions and their order/occurrence.

 

To enable browserprint, do the following:

 

Edit the http_lua_options file (on the decoder):

 

function browserprint()
--[=[
"Browserprint" : default FALSE
Whether to register a "fingerprint" of the browser based upon the specific headers
seen in the request. The format of this browserprint value is:
Position 1 is HTTP version: 0 = HTTP/1.0, 1 = HTTP/1.1
Remaining positions are in order that the header was seen. Only
the below listed headers are included in the fingerprint:
1 = accept / 2 = accept-encoding
3 = accept-language / 4 = connection
5 = host / 6 = user-agent
Example "15613":
HTTP/1.1
HOST:
USER-AGENT:
ACCEPT:
ACCEPT-LANGUAGE:
(other headers may have appeared between those headers)
The usefulness of this meta is not necessarily in determining "good" or "bad"
browser fingerprints. Rather, it is more useful to look for outliers. For
example if the majority of values are 15613 with just a few being 15361, then
the sessions with 15361 may be worth investigation.
--]=]
return true
end

 

Next, restart the decoder service.

 

On your index-concentrator-custom.xml file (on the concentrator), add the following key:

<key description="Browser Print" format="Text" level="IndexValues" name="browserprint" defaultAction="Closed" valueMax="500000" />

 

Next, restart the concentrator service.

 

Update the ESA Explore setting below to make sure ESA sets the values as string[] (array) and not string.

 

Admin > ESA > Explore >
/workflow/source/netGenAggregationSource/ArrayFieldNames

Add browserprint to the end of the line

 

Now you have the ability to generate the browserprint numbers as new sessions arrive at the decoder.

 

Next question... what do those numbers actually mean in the browserprint key?

 

From the code above this is the explanation:

Registers a "fingerprint" of the browser based upon the specific headers seen in the request. The format of this browserprint value is:

 

Position 1 is HTTP version: 0 = HTTP/1.0, 1 = HTTP/1.1

 

Remaining positions are in the order that the headers were seen. Only the headers listed below are included in the fingerprint:

1 = accept
2 = accept-encoding
3 = accept-language
4 = connection
5 = host
6 = user-agent

Example "15613":

1 - HTTP/1.1
5 - HOST:
6 - USER-AGENT:
1 - ACCEPT:
3 - ACCEPT-LANGUAGE:
(other headers may have appeared between those headers)


The usefulness of this meta is not necessarily in determining "good" or "bad" browser fingerprints. Rather, it is more useful to look for outliers. For example if the majority of values are 15613 with just a few being 15361, then the sessions with 15361 may be worth investigation.
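To make the encoding concrete, here is a small Python sketch of the same logic, as an illustration of the description above rather than the Lua implementation:

TRACKED = {"accept": "1", "accept-encoding": "2", "accept-language": "3",
           "connection": "4", "host": "5", "user-agent": "6"}

def browserprint(http_version, headers_in_order):
    digits = ["1" if http_version == "HTTP/1.1" else "0"]
    for h in headers_in_order:
        digit = TRACKED.get(h.lower())
        if digit:                       # headers outside the list are ignored
            digits.append(digit)
    return "".join(digits)

# The "15613" example: HTTP/1.1, then Host, User-Agent, Accept, Accept-Language
print(browserprint("HTTP/1.1", ["Host", "User-Agent", "Accept", "Accept-Language"]))   # 15613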

 

What does this look like with more traffic?

 

Simply by looking over a large set of data you can start to see that certain patterns are uncommon and unusual; those could be investigated to see what they are and whether they are interesting.

 

Let's take this a step further: can we use this browserprint data point in combination with other metakeys to find more specific unusual communication patterns?  Browserprint allows us to look at the 'crowd' of common HTTP headers and orders and determine outliers.  What if that was combined with user agent information, destination hosts, outbound traffic direction and the number of source IP addresses?

 

This could give us a method to see the rare combinations of source IP to destination host, outbound, with the same rare Browserprint number (along with the destination org, and the de-duped list of unique UA that match the HTTP communications).

 

The Report and Rule syntax

name: "browserprintUA-Rare-Mozilla"
description: ""
path_for_export: "rsa-custom/browserprint/browserprintUA-Rare-Mozilla"
query {
data_source_type: NWDB
netwitness_query {
select: "browserprint,count(browserprint),count(ip.src),distinct(client),distinct(alias.host),distinct(org.dst)"
where: "browserprint exists && client exists && direction=\'outbound\' && client contains \'mozilla\'"
group_by_keys: "AGGREGATE"
order_by_keys {
column_name: "count(browserprint)"
sort_order: ASCENDING
}
order_by_keys {
column_name: "count(ip.src)"
sort_order: ASCENDING
}
limit_results_count: 100
then_clause: "max_threshold(50,count(ip.src))"
agg_session_threshold: 0
group_by: "browserprint"
alias_names: ""
}
data_source_name: ""
}

 

name: "browserprintUA-Rare-NonMozilla"
description: ""
path_for_export: "rsa-custom/browserprint/browserprintUA-Rare-NonMozilla"
query {
data_source_type: NWDB
netwitness_query {
select: "browserprint,count(browserprint),count(ip.src),distinct(client),distinct(alias.host),distinct(org.dst)"
where: "browserprint exists && client exists && direction=\'outbound\' && not(client contains \'mozilla\')"
group_by_keys: "AGGREGATE"
order_by_keys {
column_name: "count(browserprint)"
sort_order: ASCENDING
}
order_by_keys {
column_name: "count(ip.src)"
sort_order: ASCENDING
}
limit_results_count: 100
then_clause: "max_threshold(50,count(ip.src))"
agg_session_threshold: 0
group_by: "browserprint"
alias_names: ""
}
data_source_name: ""
}

 

When put in a report and run in a lab environment, you get something like this, broken down between Mozilla and non-Mozilla user agents, sorted by least occurrence and with a max threshold of 50 to make sure we focus on the rare browserprint combinations.

 

The report and rules are listed here to test; please provide feedback.

GitHub - epartington/rsa_nw_re_browserprint-rare: Browserprint http_lua_options for rarity 

 

Happy hunting

Overview

This version will now parse over 1,400 events from the devices; however, the parser does not parse audit events that are generated in the "Administration-->Security" user interface.  Those events are handled by the Global Audit/Global Notification settings and parsed by the CEF parser.  However, if you make modifications to the "Security" settings on an individual device, that event will be parsed by this parser.

This version was developed and tested on 10.6.2.0 using available log samples from 10.4.x through 10.6.2.0.

 

Improvements

New Headers have been added to accommodate the log format change in 10.5.1 and above.

Logs from the Virtual Log Collector are now parsed, particularly Windows Collection Errors.

Error/Failure Logs are consolidated under the Event Category Name of "System.Errors"

Puppet Logs are parsed

Collectd Logs are parsed

Added "maxValues" kb 00031300 modification

Custom Index reduction in size and maxValues adjusted accordingly

Overall cleanup of some variable/index clutter

Improved accuracy for parsing of Query and Queue Times

Duration added for Query Times; they are now converted to seconds under the "duration.time" metakey

 

Contents

This package includes:

   Custom Log parser

   Custom Index for Concentrator*

   Custom Table Map*
   Event Categories Spreadsheet

  

*I have revised the custom index and table map to reflect the new changes in the default settings for 10.6.2.  If you are using a version prior to 10.6, you may need to add some additional index keys to the custom index.

 

Parser Content

Content, such as reports and dashboards, written by me for this parser will be published separately and links will be added here.  Content for index operations, queries, cancelled queries, system errors, configuration changes, security changes, service restarts, and content updates for feeds/parsers is being tested on an enterprise system at the time of this writing.  These will start appearing in the next few days.

 

Report:  ValueMax Has Been Reached 

 

Installation

Log Decoder

Remove the prior version of the parser

  1. SSH into each log decoder as "root" that has the prior version.
  2. Remove the old parser directory
    rm -r /etc/netwitness/ng/envision/etc/devices/rsasecurityanalytics/
    You should see the prompts like below:
    [root@logdecoder60 SA_Logs]# rm -r /etc/netwitness/ng/envision/etc/devices/rsasecurityanalytics/
    rm: descend into directory `/etc/netwitness/ng/envision/etc/devices/rsasecurityanalytics'? y
    rm: remove regular file `/etc/netwitness/ng/envision/etc/devices/rsasecurityanalytics/rsasecurityanalytics.ini'? y
    rm: remove regular file `/etc/netwitness/ng/envision/etc/devices/rsasecurityanalytics/rsasecurityanalyticsmsg.xml'? y
    rm: remove directory `/etc/netwitness/ng/envision/etc/devices/rsasecurityanalytics'? y

Download and unzip parser

  1. Download the parser file "rsasecurityanalytics_2.3.99.zip" from the bottom of this page.
  2. Unzip the file using Winzip, or 7zip.
    The unzipped parser file name will be "rsasecurityanalytics.envision"

Upload the parser on the Log Decoder

  1. Login to the Web Interface as "admin" or user who is a member of the "Administrators" Role.
  2. Choose "Administration-->Services" from the navigation menu in the upper left corner of the screen.
  3. Locate the Log Decoder and click on the gear icon, located at the far right of the screen.
  4. Hover over "View", then click "Config".
  5. Click on the "Parsers" Tab.
  6. Click on the "Upload" icon in the upper left portion of the window.
  7. Click on the "+" in the upper left of the "Upload Parsers" dialog box.
  8. Navigate to the folder where the "rsasecurityanalytics.envision" is located and select it.  Click "Open"
  9. Click on "Upload"
  10. Click on the "X" in the upper right corner of the dialog box or click "Cancel"

Remove prior version custom table map entries

  1. On the same screen as above, Click on the "Files" Tab
  2. On the left side of the screen click on the dropdown and select "table-map-custom.xml".
  3. Locate the section related to the custom table entries for the log parser typically labelled
    RSA Security Analytics Log Parser Revision 2.1.63 xx/xx/xx
  4. Remove that section.
  5. Replace with new table map entries from the table-map-custom.xml file.
  6. Click "Apply"

Load the new log parser and custom table map.

  1. On the same screen as above, click on "Config" just above the "App Rules" Tab.
  2. Click on "System"
  3. Click on "Stop Capture" at the top left of the screen.
  4. Wait for capture to stop.
  5. Click on "Shutdown Service" at the top center of the screen.
  6. On the "Confirm Shutdown" dialog, type "RSA Security Analytics Parser update"
  7. Click "OK"

Concentrator

Update The Concentrator Custom Index

  1. Login to the Web Interface as "admin" or user who is a member of the "Administrators" Role.
  2. Choose "Administration-->Services" from the navigation menu in the upper left corner of the screen.
  3. Locate the Concentrator and click on the gear icon, located at the far right of the screen.
  4. Hover over "View", then click "Config".
  5. Click on the "Files" Tab
  6. On the left side of the screen click on the dropdown and select "index-concentrator-custom.xml".
  7. Locate the section related to the custom table entries for the log parser typically labelled
    RSA Security Analytics Log Parser Revision 2.1.63 xx/xx/xx
  8. Remove that section.
  9. Replace with new custom index entries from the index-concentrator-custom.xml file.
  10. Click "Apply"

Load The New Custom Index.

  1. On the same screen as above, click on "Config" just above the "Correlation Rules" Tab.
  2. Click on "System"
  3. Click on "Stop Aggregation" at the top left of the screen.
  4. Wait for aggregation to stop.
  5. Click on "Shutdown Service" at the top center of the screen.
  6. On the "Confirm Shutdown" dialog, type "RSA Security Analytics Parser update"
  7. Click "OK"

ALL Appliances

Configure Rsyslog to Forward Logs

  1. SSH into each NetWitness Appliance.
  2. Modify the /etc/rsyslog.conf file.  
    vi /etc/rsyslog.conf
  3. Press the letter "i" or the "Insert" key.  You should see "-- INSERT --" at the bottom left of your screen.
  4. Scroll to the bottom of the file and look for the following line:
    #*.* @@remote-host:514
  5. Remove the "#" and change "remote-host" to the destination Log Decoder or Virtual Log Collector (VLC).
    *.* @@<Log Decoder or VLC IP Address Here>:514
  6. Press the  "ESC" key
  7. You should see a colon ":" in the lower left of the screen.
  8. Save the file by typing ":wq"
    :wq
  9. Restart the Rsyslog service.
    service rsyslog restart
  10. Rsyslog is now forwarding logs to the Log Decoder or VLC.
