
The RSA NetWitness Platform has multiple new enhancements to how it handles Lists and Feeds in v11.x.  One of the enhancements introduced in the v11.1 release was the ability to use Context Hub Lists as Blacklist and/or Whitelist enrichment sources in ESA alerts.  This feature gives analysts and administrators a much easier path to tuning and updating ESA alerts than was previously available.

 

In this post, I'll explain how you can take that one step further and create ESA alerts that automatically update Context Hub Lists, which can in turn be used as blacklist/whitelist enrichment sources in other ESA alerts.  The capabilities you'll use to accomplish this are the ESA's script notifications, the ESA's Enrichment Sources, and the Context Hub's List Data Source.

 

Your first step is to determine what kind of data you want to put into the Context Hub List.  For my test case I chose source and destination IP addresses.  Your next step is to determine where this List should live so that the Context Hub can access it.  The Context Hub can pull Lists either via HTTP, HTTPS, or from its local file system on the ESA appliance - for my test case I chose the local filesystem.

 

With that decided, your next step is to create the file that will become the List - the Context Hub looks within the /var/netwitness/contexthub-server/data directory on the ESA, so you'll create a CSV file in this location and add headers to help you (and others) know what data the List contains:
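For example, a starter file might look like the two lines below (the column names are just an illustration for this test case - ip_src and ip_dst for the address values, plus the date_added and source_alert columns mentioned later - so adjust them to whatever data you decide to capture):

ip_src,ip_dst,date_added,source_alert
192.168.1.50,203.0.113.10,2018-09-01,Example ESA Alert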

 

**NOTE** Be sure to make this CSV writeable for all users, e.g.:

# chmod 666 esaList.csv

 

Next, add this CSV to the Context Hub as a Data Source.  In Admin / Services / Contexthub Server / Config --> Data Sources, choose List:

 

Select "Local File Store," then give your List a name and description and choose the CSV from the dropdown:

 

If you created headers in the CSV, select "With Column Headers" and then validate that the Context Hub can see and read your file.  After validation is successful, tell the Context Hub what types of meta are in each column, whether to Append to or Overwrite values in the List when it updates, and also whether to automatically expire (delete) values once they reach a certain age (maximum value here is 30 days):

 

For my test case, I chose not to map the date_added and source_alert columns from the CSV to any meta keys, because I only want them for my own awareness of where each value came from (i.e., which ESA alert) and when it was added.  Also, I chose to Append new values rather than Overwrite, because the Context Hub List has built-in functionality that identifies new and unique values within the CSV and adds only those to the List.  Append also enables the List Value Expiration feature to automatically remove old values.

 

Once you have selected your options, save your settings to close the wizard.  Before moving on, there are a few additional configuration options to point out, accessible through the gear icon on the right side of the page.  These settings allow you to modify the existing meta mappings or add new ones, adjust the Expiration, enable or disable loading the List's values into cache, and - most importantly - set the List's update schedule, or Recurrence:

 

**NOTE** At the time of this writing, the Schedule Recurrence has a bug that causes the Context Hub to ignore any user-defined schedule, which means it will revert to the default setting and only automatically update every 12 hours.

 

With the Context Hub List created, you can move on to the script and notification template that you will use to auto-update the CSV (both are attached to this blog - you can upload/import them as is, or feel free to modify them however you like for your use cases / environment).  You can refer to the documentation (System Configuration Guide for Version 11.x - Table of Contents) to add notification outputs, servers, and templates.
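If you would rather roll your own script than use the attachment, the core of it can be very small.  Below is a minimal bash sketch, assuming the ESA script notification invokes your script with the rendered template output as its first argument and that your template renders one CSV row per alert - verify how your version actually passes the data before relying on it:

#!/bin/bash
# Minimal sketch (not the attached script): append the rendered notification
# output - assumed here to arrive as $1, already formatted as a CSV row -
# to the List file that the Context Hub reads.
echo "$1" >> /var/netwitness/contexthub-server/data/esaList.csv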

 

To test that all of this works and writes what you want to the CSV file (for my test case, IP source and destination values), create an ESA alert that will fire with the data points you want to capture, and then add the script notification, server, and template to the alert:

 

After deploying your alert and generating the traffic (or waiting) for it to fire, verify that your CSV auto-updates with the values from the alert by keeping an eye on the CSV file.  Additionally, you can force your Context Hub List to update by re-opening your List's settings (the gear icon mentioned above), re-saving your existing settings, and then checking its values within the Lists tab:

 

 

You'll notice that in my test case, my CSV file has 5 entries while my Context Hub List only has 3 - this is a result of the automatic de-duplication mentioned above; the List only appends new and unique entries from the CSV.

 

Next up, add this List as an Enrichment Source to your ESA.  Navigate to Configure / ESA Rules --> Settings tab / Enrichment Sources, and add a new Context Hub source:

 

In the wizard, select the List you created at the start of this process and the columns that you will want to use within ESA alerts:

 

With that complete, save and exit the wizard, and then move on to the last step - creating or modifying an ESA alert to use this Context Hub List as a whitelist or blacklist.

 

Unless your ESA alert requires advanced logic and functionality, you can use the ESA Rule Builder to create the alert.  Within your alert statement, build out the alert logic and add a Meta Whitelist or Meta Blacklist Condition, depending on your use case:

 

Select the Context Hub List you just added as an Enrichment Source:

 

Select the column from the Context Hub List that you want to match against within your alert:

 

Lastly, select the NetWitness meta key that you want to match against it:

 

You can add additional Statements and additional blacklists or whitelists to your alert as your use case dictates.  Once complete, save and deploy your alert, and then verify that your alerts are firing as expected:

 

And finally, give yourself a pat on the back.

For those who are interested in becoming certified on the RSA NetWitness Platform - we have some great news for you!  This process just became a whole lot easier... you no longer have to travel to a Pearson VUE testing center to take the certification exams.  All four of the RSA NetWitness certifications can now be taken through online proctored testing!  That's right... 100% online!

 

You can find all of the details on the RSA Certification Program page.  There's also a page specifically for the RSA NetWitness Platform certifications where you can find details about the certifications, try out one of the practice exams, register to take a certification and much, much more.  

 

RSA NetWitness has 4 separate certifications available:

  1. RSA NetWitness Logs and Network Admin
  2. RSA NetWitness Logs and Network Analyst
  3. RSA NetWitness Endpoint Admin
  4. RSA NetWitness Endpoint Analyst

 

I wish you all the best of luck and encourage you to continue your professional development by becoming certified on our technology.  

The RSA NetWitness Platform has an integrated agent available that currently provides base Endpoint Detection and Response (EDR) functions but will shortly have more complete parity with ECAT (in v11.x).  One beneficial feature of the Insights agent (otherwise called the NWE Insights Agent) is Windows log collection and forwarding.

 

Here is the agent Install Guide for v11.2:

https://community.rsa.com/docs/DOC-96206

 

The Endpoint packager is built from the Endpoint Server (Admin > Services), where you can define your configuration options.  To enable Windows log collection, check the box at the bottom of the initial screen.

 

This expands the options for Windows log collection...

Define one or more Log Decoder/Collector services in the current RSA NetWitness deployment to send the endpoint logs to (define a primary and a secondary destination).

 

Define your channels to collect from.

The default list includes 4 channels (System, Security, Application and ForwardedEvents).

You can also add any channel you want, as long as you know the EXACT name of it.

In the Enter Filter Option in the selection box, enter the channel name.

In this case, Windows PowerShell (again, make sure you match the exact Event Channel name or you will run into issues).

We could also choose to add some other useful event channels

  • Microsoft-Windows-Sysmon/Operational
  • Microsoft-Windows-TerminalServices-LocalSessionManager/Operational
  • Microsoft-Windows-TerminalServices-RemoteConnectionManager/Operational

 

You can choose to filter these channels to include or exclude certain events as well.

 

Finally, set the protocol to either UDP/TCP or TLS.

 

Clicking Generate Agent creates the download, which includes the packager and the config files that define the agent settings.

 

From there you can build the agents for Windows, Linux, and Mac from a local Windows desktop.

Agents are installed as normal, using local credentials or your package management tool of choice.

 

Now that you have Windows events forwarded to your Log Decoders, make sure you have the Windows parser downloaded from RSA Live and deployed to your Log Decoders to start parsing the events.

The Windows parser is slightly different from the other Windows log parsers (nic, snare, er) in that there are only 7 message sections (one each for the default channels, plus a TestEvent and Windows_Generic).

 

For the OOTB channels, the Message section defines all the keys that could exist and then maps them to the table-map.xml values as well as the ec tags.

Log Parser Customization 

 

Windows_Generic is the catchall for this parser, and any custom channel that is added will only be parsed by this section.  This catchall needs some help to make use of the keys that will come from the channels we have selected, which is where a windowsmsg-custom.xml (a custom addition to the Windows parser) comes in (an internal feature enhancement has been added to make these OOTB).

 

Get the windows-custom parser from here:

GitHub - epartington/rsa_nw_log_windows: rsa windows parser for nw endpoint windows logs 

Add it to your windows parser folder on the Log Decoder(s) that you configured in the endpoint config:

/etc/netwitness/ng/envision/etc/devices/windows/

 

Reload your parsers.

Now you should have additional meta available for these additional event channels.

 

 

 

What happens if you want to change your logging configuration but don't want to re-roll an agent?  The Log Collection Guide (linked below) shows how to add a new config file to the agent directory to update the channel information:

https://community.rsa.com/docs/DOC-95781

(page 113)

 

Currently the free NW Endpoint Insights agent doesn't include agent configuration management, so this change needs to be made manually at the moment.  Future versions will include config management to make this easier.

 

Now you can accomplish things like this:

Logs - Collecting Windows Events with WEC 

...without needing a WEC/WEF server, which is especially useful if you are deploying Sysmon and want to use the NWE agent to pull back the event channel.

 

While you are in the Log Collection frame of mind, why not create a Profile in Investigation for NWE logs?

Pre-Query = device.type='windows'

 

In 11.2 you can create a profile (which isn't new) as well as meta and column groups that are flexible (new in 11.2).  This means the pre-query is locked, but you are able to switch meta groups within the profile (very handy).

 

 

Hopefully this helpful addition to our agent reduces the friction of collecting Windows events.  If there are specific event channels that are high on your priority list for collection, add them to the comments below and I'll get them added to the internal RFE.

We at RSA value your thoughts and feedback on our products. Please tell us what you think about RSA NetWitness by participating directly in our upcoming user research studies. 

 

What's in it for you?

 

You will get a chance to play around with new and exciting features and help us shape the future of our product through your direct feedback. After submitting your information in the survey below, if you are a match for an upcoming study, one of our researchers will work with you to facilitate a study session, either in a lab setting or remotely. There are no right or wrong answers in our studies - every single piece of your feedback will help us improve the RSA NetWitness experience. Please join us in this journey by completing the short survey below so that we can see if you are a match for one of our studies.

 

This survey should take less than a minute of your time.

 

Take the survey here.

Over the past several years, I have found quite a lot of incredibly useful meta packed into the 'query' meta key.  The HTTP parser puts arguments and passed variables in there when they are used in GETs and POSTs.  While examining some recent PCAPs from the Malware Traffic Analysis site, I found some common elements we can use to identify Trickbot infections.  This was not an exhaustive look at Trickbot, but simply a means to identify some common traits as meta values.  As Trickbot, or any malware campaign, changes, IOCs will need to be updated.

 

First things first, let's look at the index level for the 'query' meta key.  By default, the 'query' meta key is set to 'IndexKeys'.  This means that you could perform a search for sessions where the key exists, but could not query for the values stored within that key.

 

 

There are pros and cons to setting the index level to 'IndexValues' in your 'index-concentrator-custom.xml' file on your concentrators.  The pros include being able to search for values in the key during an investigation.  The cons are that these queries would likely involve 'contains', which taxes the query from a performance perspective.  Furthermore, 'query' is a Text-formatted meta key limited to 256 bytes, so anything after 256 bytes would be truncated and you may not have the complete query string.
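If you do decide to index the values, the entry in 'index-concentrator-custom.xml' would look something like the sketch below (a hedged example - double-check the attributes, particularly valueMax, against your own sizing guidance before using it):

<key description="Query" format="Text" level="IndexValues" name="query" valueMax="250000"/>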

 

Whether 'query' is set to 'IndexKeys' or 'IndexValues' or even 'IndexNone', we can take advantage of it in App rule creation.  In one Trickbot pcap, we can see an HTTP POST to an IP address on a non-standard port.

 

 

If we look at the meta created for this session, we can see the 'proclist' and 'sysinfo' as pieces in the 'query' meta.

 

 

Combining these with a service type (service = 80) and an action (action = 'post'), we can create an application rule that can help find Trickbot infections in the environment.  For good measure, we can add additional meta from analysis.service to help round it out.

 

 

Trickbot application rule
service = 80 && action = 'post' && query = 'name="sysinfo"' && query = 'name="proclist"' && analysis.service = 'windows cli admin commands'

 

 

The flexibility of app rule creation allows analysts and threat hunters to take a handful of indicators (meta) and combine them to make detection easier.

 

 

App rules help make detection easier.  Once a threat is identified, we can use this method to find the traffic more easily moving forward so that we can go find the next new bad thing.  If the app rule fires too often on normal traffic, then we can adjust the rule to add or exclude other meta to ensure it is firing correctly.

 

As always, good luck, and happy hunting.

 

Chris

Encrypted traffic has always posed more challenges to network defenders than plaintext traffic, but thanks to some recent enhancements in NetWitness 11.2 and a really useful feed from Abuse.ch, defenders have a new tool in their toolbox.

 

11.2 added the ability to enable TLS certificate hashing via an optional parameter on your decoders:

Decoder Configuration Guide for Version 11.2 

(search for TLS certificate hashing - page 164)

  • Explore > /decoder/parsers/config/parsers.options
  • Add this after the entropy line (space delimited): HTTPS="cert.sha1=true"
  • Make sure the HTTPS native parser is enabled on the decoder

 

This new meta is the SHA1 hash of any DER-encoded certificates seen during the TLS handshake.  It is written to cert.checksum, the same key that NetWitness Endpoint writes its values to.

 

Now is a good time to revisit any application rules that might be truncating encrypted traffic.  Take advantage of the new parameters that were added in 11.1 related to truncation after the handshake.

 

 

Now that we have a field for the certificate hash, we need a method to track known certificate checksums to match against.

sslbl.abuse.ch has a feed that tracks these blacklisted certificates, along with information to identify the potential attacker campaign.

 

This is the feed (SSLBL Extended); you could also leverage the Dyre list as well.

https://sslbl.abuse.ch/downloads/ssl_extended.csv 

 

The headers look like this:

# Timestamp of Listing (UTC),Referencing Sample (MD5),Destination IP,Destination Port,SSL certificate SHA1 Fingerprint,Listing reason

Map the feed as follows:

Configure > Custom Feeds > New Feed > Custom Feed

 

Add the URL as above and set it to recur every hour (to get new data into the feed in a reasonable time).

 

Apply it to your decoders (you will notice that in 11.2 the feed is also added to your Context Hub, which means you can create a feed that is used both as a feed and as an ESA whitelist or blacklist).

 

 

Choose the Non-IP type, and map column 5 to cert.checksum and column 6 to IOC (if we have a match, we can be pretty confident that this traffic is bad).

 

And now you have an updated feed that will alert you to certificate usage that matches known lists of badness.

 

An example output looks like this (the value in the IOC key always ends with <space>c&c):

 

(The client value is from another science project related to JA3 signatures... in this case, a double confirmation of Gootkit.)

 

The testing data used to play with this came from here:

Malware-Traffic-Analysis.net - 2018-09-05 - Emotet infection with IcedID banking Trojan and AZORult 

 

Great resource and challenges if you are looking for some live fire exercises.

 

To wrap this up, an ESA rule can be created with the following criteria to identify these communications and create an alert:

/*
Module debug section. If this is empty then debugging is off.
*/
@Name("outbound_blacklisted_ssl_cert: {ioc}")
@Description('cert.checksum + ssl abuse blacklist all have ioc ends with <space>c&c')
@RSA
SELECT * FROM Event(
/* Statement: outound_ssl_crypto_cnc */
(
direction.toLowerCase() IN ( 'outbound' ) AND
service IN ( 443 ) AND
ioc IS NOT NULL AND
matchLike(ioc,'% C&C' )
/*isOneOfIgnoreCase(ioc,{ '%c&c' })*/
)
) ;

The reason advanced mode was needed is that the ioc meta key needed to be wildcarded to look for any match of <name><space>C&C, and I didn't want to enumerate all the potential names from the feed (the UI doesn't provide a means to do this in the basic rule builder for arrays, of which ioc is string[]).

 

Another thing to notice is that the @Name syntax creates a parameterized name that is only available in the alert details of the raw alert.

I was hoping to do more with that data, but so far I have not had much luck.

 

You can also wrap this into a Respond alert to make sure you group together all potential communications for a system (grouping these alerts by source IP).

 

If everything works correctly, you get Respond alerts like this that you should investigate:

With all the recent blogs from Christopher Ahearn about creating custom lua parsers, some folks who try their hand at it may find themselves wondering how to easily and efficiently deploy their new, custom parsers across their RSA NetWitness environment.

 

Manually browsing to each Decoder's Config/Parsers tab to upload there will quickly become frustrating in larger or distributed environments with more than one Decoder.

 

Manually uploading to a single Decoder and then using the Config/Files tab's Push option would help eliminate the need to upload to every single Decoder, but you would still need to reload each Decoder's parsers.  While this could, of course, be scripted, I believe there is a simpler, easier, and more efficient option available.

 

Not coincidentally, that option is the title of this blog. We can leverage the Live module within the NetWitness UI to deploy custom parsers across entire environments and automatically reload each Decoder's parsers in the process.  To do this, we will need to create a custom resource bundle that mimics an OOTB Live resource.

 

First, let's take a look at one of the newer Lua parsers from Live to see how it's packaged.  We'll select one parser and then choose Package --> Create to generate a resource bundle.

 

In this ZIP's top-level directory, we see a LUAPARSER folder and a resourceBundleInfo.xml file.

 

Navigating down through the LUAPARSER folder, we eventually come to another ZIP file:

 

This teamviewer.zip contains an encrypted lua parser and a token to allow NetWitness Decoders to decrypt it (FYI - you do not need to encrypt your custom lua parsers).

 

The main takeaway from this is that when we create our custom resource bundle, we now know to create a directory structure like in the above screenshot, and that our custom lua parser will need to be packaged into a ZIP file at the bottom of this directory tree.

 

Next, let's take a look at the resourceBundleInfo.xml file in the top-level directory of the resource bundle.  This XML is the key to getting Live to properly identify and deploy our custom Lua parser.

 

Everything that we really need to know about this XML is in the <resourceInfo> section.

 

A common or friendly name for our parser:

<displayName>teamviewer</displayName>

 

The name of the ZIP file at the bottom of the directory tree:

            <fileName>teamviewer.zip</fileName>

 

The full path of this ZIP file:

            <filePath>LUAPARSER/0.1/teamviewer.zip</filePath>

 

The version number (which can really be anything you want, as long as it's reflected accurately in the filePath):

            <productionVersion>0.1</productionVersion>

 

The resourceType line is the name of the top-level folder in the resource bundle (you shouldn't need to change this):

            <resourceType>LUAPARSER</resourceType>

 

The typeTitle (which you also shouldn't change):

            <typeTitle>Lua Parser</typeTitle>

 

And lastly the uuid, which is how Live and the NetWitness platform identify Live resources:

            <uuid>e1a06b9a-db6b-45fd-85a3-6074229d8e02</uuid>

 

Modifying everything in this file should be pretty straightforward - you'll simply want to modify each line to reflect your parser's information. And for the uuid, we can simply create our own - but don't worry, it doesn't need to be anywhere near as long or complex as a Live resource uuid.

 

Now that we know what the structure of the resource bundle should look like, and what information the XML needs to contain, we can go ahead and create our own custom resource bundle.

 

Here's what a completed custom resource bundle looks like, using one of  Chris Ahearn's parsers as an example: What's on your wire: Detect Linux ELF files:
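In case the screenshots don't come through, a minimal sketch of the <resourceInfo> section for a parser like that might look as follows - the display name, file name, and uuid below are hypothetical placeholders, and you should keep the rest of the XML exactly as it appears in the bundle you exported from Live:

<resourceInfo>
    <displayName>linux_elf</displayName>
    <fileName>linux_elf.zip</fileName>
    <filePath>LUAPARSER/0.1/linux_elf.zip</filePath>
    <productionVersion>0.1</productionVersion>
    <resourceType>LUAPARSER</resourceType>
    <typeTitle>Lua Parser</typeTitle>
    <uuid>customparser1</uuid>
</resourceInfo>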

 


 

With the custom bundle packaged and ready to go, we can go into Live, select Package --> Deploy, browse to our bundle, and simply step through the process, deploying to as many or as few of our Decoders as we want:


 

For confirmation, we can browse to any of our Decoders at Admin --> Services and see our custom parser deployed and enabled in the Config/General tab:

 

Lastly, for those who might have multiple custom resources they want to deploy at once in a single resource bundle, it's just a matter of adjusting the resourceBundleInfo.xml file to reflect each resource's name, version, and path, and making sure each uuid is unique within the resource bundle, e.g.: uuid1, uuid2, uuid3, etc.:


 

You can find a resource bundle template attached to this blog.

 

Happy customizing, everybody!

Introduction to MITRE’s ATT&CK™

 

Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK™) for enterprise is a framework that describes adversarial actions or tactics from Initial Access (Exploit) through Command & Control (Maintain).  ATT&CK™ Enterprise deals with the classification of post-compromise adversarial tactics and techniques against Windows™, Linux™ and MacOS™.

 

Subsequently, two other frameworks were also developed, namely PRE-ATT&CK™ and the ATT&CK™ Mobile Profile.  PRE-ATT&CK™ was developed to categorize pre-compromise tactics, techniques and procedures (TTPs) independent of platform/OS.  This framework categorizes the adversary's planning, information gathering, reconnaissance and setup before compromising the victim.

 

The ATT&CK™ Mobile Profile is specific to Android and iOS mobile environments and has three matrices that classify tactics and techniques.  It does not just include post-compromise tactics and techniques but also deals with pre-compromise TTPs in mobile environments.

 

This community-enriched model adds techniques used to realize each tactic. These techniques are not exhaustive and the community adds them as they are observed and verified.

 

This matrix is helpful for validating defenses already in place and for designing new security measures.  It can be used in the following ways to improve and validate defenses:

 

  1. This framework can be used to create adversary emulation plans, which hunters and defenders can use to test and verify their defenses.  These plans also make sure you are testing against an ever-evolving, industry-standard framework.
  2. Adversary behavior can be mapped using the ATT&CK™ matrix and used for analytics purposes to improve your Indicators of Compromise (IOCs) or Behaviors of Compromise (BOCs).  This will enhance your detection capabilities with greater insight into threat-actor-specific information.
  3. Mapping your existing defenses against this matrix gives a visualization of the tactics and techniques detected, and thus presents an opportunity to assess gaps and prioritize your efforts to build new defenses.
  4. The ATT&CK™ framework can help build threat intelligence from the perspective of not just TTPs but also the threat groups and software that are being used.  This approach enhances your defenses so that detection does not depend on TTPs alone but also on their relationships with the threat groups and software in play.

 


Figure 1: Relationships between Threat-Group, Software, Tactics and Techniques

 

This framework resolves the following problems:

 

  1. Existing kill chain concepts were too abstract to relate new techniques to new types of detection capabilities and defenses.  ATT&CK can be called a kill chain on steroids.
  2. Techniques added or considered should be observed in a real environment and not just derived from theoretical concepts.  The community adding techniques ensures that the techniques have been seen in the wild and thus are suitable for people using this model in real environments.
  3. This model gives a common language and terminology across different environments and threat actors.  This factor is important in making this model an industry standard.
  4. Granular indicators like domain names, hashes, protocols, et cetera do not provide enough information to see the bigger picture of how the threat actor is exploiting the system, and its relationship with the various sub-systems and tools used by the adversary.  This model gives a good understanding of the relationship between the tactics and techniques used, which can then be used to drill down into only the important granular details.
  5. This model helps with making a common repository from which this information can be used with APIs and programming.  The model is available via a public TAXII 2.0 server and is served as STIX 2.0 content.

 

ATT&CK Navigator

 

ATT&CK Navigator is a tool openly available through GitHub which uses the STIX 2.0 content to provide a layered visualization of ATT&CK model.

 


 

Figure 2: ATT&CK Navigator

 

By default, this uses MITRE's TAXII server, but it can be changed to use any TAXII server of choice.  Navigator uses JSON files to create layers, which can be created programmatically.

 

RSA NetWitness Event Stream Analysis (ESA)

 

ESA is one of the defense systems that is used to generate alerts. ESA Rules provide real-time, complex event processing of log, packet, and endpoint meta across sessions. ESA Rules can identify threats and risks by recognizing adversarial Tactics, Techniques and Procedures (TTPs).

 

The following are ESA Components:

 

  1. Alert - The output from a rule that matches data in the environment.
  2. Template - Converts the rule syntax into code (Esper) that ESA understands.
  3. Constituent Events - All of the events involved in an alert, including the trigger event.
  4. Rule Library - A list of all the ESA Rules that have been created.
  5. Deployments - A list of the ESA Rules that have been deployed to an ESA device.

 

The Rule Library contains all the ESA Rules, and we can map these rules, or detection capabilities, to the tactics/techniques of the ATT&CK matrix.  The mapping shows how many tactics/techniques are detected by ESA.  Attached to this blog post is an Excel workbook with the mapping between ESA Rules and ATT&CK Tactics/Techniques.

 

In other words, the overlap between ESA Rules and the ATT&CK matrix can not only show us how far our detection capabilities reach across the matrix but also quantify the evolution of the product.  We can measure how much we are improving and in which directions.

 

We have created a layer as a JSON file which has all the ESA Rules mapped to techniques. Then we have imported that layer on ATT&CK Navigator matrix to show the overlap. In the following image, we can see all the techniques highlighted that are detected by ESA Rules:

 


 

Figure 3: ATT&CK Navigator Mapping to ESA Rules
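For reference, the layer mentioned above is a plain JSON file.  A heavily simplified sketch is shown below - the exact field names and version values vary between ATT&CK Navigator releases, and the technique ID, color, and comment here are placeholders rather than entries taken from the attached workbook:

{
  "name": "ESA Rules",
  "version": "2.1",
  "domain": "mitre-enterprise",
  "description": "ESA Rules mapped to ATT&CK techniques",
  "techniques": [
    { "techniqueID": "T1110", "color": "#fd8d3c", "comment": "Example: brute force detection rule" }
  ]
}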

 

To quantify how far the ESA Rules spread across the matrix, we can refer to the following plot:

 


 

Figure 4: Plot for ATT&CK Matrix Mapping to ESA Rules

 

Moving forward, we can map our other detection capabilities to the ATT&CK matrix.  This will help give us a consolidated picture of our complete defense system, so we can quantify and monitor the evolution of our detection capabilities.

 

References:

[1] https://www.mitre.org/sites/default/files/publications/pr-18-0944-11-mitre-attack-design-and-philosophy.pdf

[2] https://attack.mitre.org/wiki/Main_Page

[3] https://attack.mitre.org/pre-attack/index.php/Main_Page

[4] https://attack.mitre.org/mobile/index.php/Main_Page

[5] https://www.mitre.org/capabilities/cybersecurity/overview/cybersecurity-blog/using-attck-to-advance-cyber-threat

[6] https://www.mitre.org/capabilities/cybersecurity/overview/cybersecurity-blog/using-attck-to-advance-cyber-threat-0

 

Thanks to Michael Sconzo and Raymond Carney for their valuable suggestions.

A recent advisory was sent out for firmware updates to a number of base components in NetWitness.

 

RSA NetWitness Availability of BIOS & iDRAC Firmware Updates

 

Three components were mentioned that potentially needed updates, and instructions for updating them were provided in the advisory.

 

How do you gather the state of the environment quickly, with the fewest steps, so that you can determine whether there is work that needs to be done?

 

Chef to the rescue ...

You might need a couple of tools installed on your appliances to run the later commands (install perccli and ipmitool).

 

From the NW11 head server (node0):

salt '*' pkg.install "perccli,ipmitool" 2>/dev/null

Then you can query for the current versions of the PERC, BIOS, and iDRAC software:

salt '*' cmd.run 'hostname; /opt/MegaRAID/perccli/perccli64 /c0 show | grep "Product Name"; /opt/MegaRAID/perccli/perccli64 /c0 show | grep "FW Package Build"; /opt/MegaRAID/perccli/perccli64 /c1 show | grep "Product Name"; /opt/MegaRAID/perccli/perccli64 /c1 show | grep "FW Package Build"' 2>/dev/null
salt '*' cmd.run 'hostname; ipmitool -I open bmc info | grep "Firmware Revision"' 2>/dev/null
salt '*' cmd.run 'hostname; ip address show dev eth0 | grep inet; dmidecode -s system-serial-number; dmidecode -s bios-version; dmidecode -s system-product-name;' 2>&-

 

The output will list each host and the version of the software that exists, which can be used to determine whether an update is required for your NetWitness appliances.

 

Ideally this would be in Health and Wellness, where policies could be written against it with alerts (and an export-to-CSV function would be handy).

Wireshark has been around for a long time, and its display filters are good reference points for learning about network (packet) traffic as well as how to navigate around various parts of sessions or streams.

 

Below you will find a handy reference which allows you to cross-reference many of the common Wireshark filters with their respective RSA NetWitness queries. 

 

This is where I pulled the Wireshark display filters from:  DisplayFilters - The Wireshark Wiki 

 

Show only SMTP (port 25) and ICMP traffic:

Wireshark:  tcp.port eq 25 or icmp
NetWitness: service=25 || ip.proto=1,58 -> (icmp or ipv6 icmp)
NetWitness: tcp.dstport=25 || ip.proto=1,58 -> (icmp or ipv6 icmp)

 

Show only traffic in the LAN (192.168.x.x), between workstations and servers -- no Internet:

Wireshark:  ip.src==192.168.0.0/16 and ip.dst==192.168.0.0/16
NetWitness: ip.src=192.168.0.0/16 && ip.dst=192.168.0.0/16
NetWitness: direction='lateral' (RFC1918 to RFC1918)

 

Filter on Windows -- Filter out noise, while watching Windows Client - DC exchanges

Wireshark:  smb || nbns || dcerpc || nbss || dns
NetWitness: service=139,137,135,139,53

 

Match HTTP requests where the last characters in the uri are the characters "gl=se":

Wireshark:  http.request.uri matches "gl=se$"
NetWitness: service=80 && query ends 'gl=se'

 

Filter by a protocol ( e.g. SIP ) and filter out unwanted IPs:

Wireshark:  ip.src != xxx.xxx.xxx.xxx && ip.dst != xxx.xxx.xxx.xxx && sip
NetWitness: service=5060 && ip.src!=xxx.xxx.xxx.xxx && ip.dst != xxx.xxx.xxx.xxx

 

ip.addr == 10.43.54.65 equivalent to

Wireshark:  ip.src == 10.43.54.65 or ip.dst == 10.43.54.65
NetWitness: ip.all=10.43.54.65
NetWitness: ip.src=10.43.54.65 || ip.dst=10.43.54.65

 

Here's where I pulled some additional filters for mapping:  HTTP Packet Capturing to debug Apache 

 

View all http traffic

Wireshark:  http
NetWitness: service=80

 

View all flash video stuff

Wireshark:  http.request.uri contains "flv" or http.request.uri contains "swf" or http.content_type contains "flash" or http.content_type contains "video"
NetWitness: service=80 && ( query contains 'flv' || query contains 'swf' || content contains 'flash' || content contains 'video')

 

Show only certain responses

Wireshark:  http.response.code == 404
NetWitness: service=80 && error begins 404
NetWitness: service=80 && result.code ='404'

Wireshark:  http.response.code==200
NetWitness: service=80 && error !exists (200 are not explicitly captured)
NetWitness: service=80 && result.code !exists (200 are not explicitly captured)

 

Show only certain http methods

Wireshark:  http.request.method == "POST" || http.request.method == "PUT"
NetWitness: service=80 && action='post','put'

 

Show only filetypes that begin with "text"

Wireshark:  http.content_type[0:4] == "text"
NetWitness: service=80 && filetype begins 'text'
NetWitness: service=80 && filename begins 'text'

 

Show only javascript

Wireshark:  http.content_type contains "javascript"
NetWitness: service=80 && content contains 'javascript'

 

Show all http with content-type="image/(gif|jpeg|png|etc)" §

Wireshark:  http.content_type[0:5] == "image"
NetWitness: service=80 && content ='image/gif','image/jpeg','image/png','image/etc'

 

Show all http with content-type="image/gif" §

Wireshark:  http.content_type == "image/gif"
NetWitness: service=80 && content ='image/gif'

 

Hope this is helpful for everyone and as always, Happy Hunting!

I was reviewing a packet capture file I had from a recent engagement.  In it, the attacker had tried unsuccessfully to compress the System and SAM registry hives on the compromised web server.  Instead, the attacker decided to copy the hives into a web-accessible directory and give them a .jpg file extension.  Given that the Windows Registry hives have a well-documented file structure, I decided to write a parser to detect them on the network.

 

 

If we see something on the wire, there is a pretty good chance we can create some content to detect it in the future.  This is the premise behind most threat hunting and content creation: make it easier to detect the next time.  This is the same approach I take when building Lua parsers for the RSA NetWitness platform.

 

Here, we can see what appears to be the magic bytes for a registry file “regf”.

 

 

Let’s shift our view into View Hex and examine this file.

 

 

When creating a parser, we want to make it as consistent as possible to reduce false positives and errors.  What I found was that, immediately following the 'regf' signature, the Primary Sequence Number (4 bytes) and Secondary Sequence Number (4 bytes) would be different.  Then there was the FileTime UTC (8 bytes) field, which would most definitely be unique.

 

However, the Major and Minor versions were relatively consistent. Therefore, I could skip over those 16 bytes to land on the first byte of the Major Version immediately after my initial token matches.  Let’s create a token to start with.

 

fingerprint_reg:setCallbacks({

   ["\114\101\103\102"] = fingerprint_reg.magic,   -- regf

}) 
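(For clarity, those decimal escapes are simply the ASCII codes for the string we care about: 114 = 'r', 101 = 'e', 103 = 'g', 102 = 'f', i.e. "regf".)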

 

If you notice, this token is in DECIMAL format, not HEX.  Also, 4 bytes is quite small for a token.  When a parser is loaded onto the decoder, the tokens are stored in memory and compared as network traffic goes through the decoder.  Once a token matches, the function(s) within the parser are run.  Too small a token means the parser may run quite frequently, with or without matching the right traffic.  Too large a token means the parser may only run on those specific bytes, and you could miss other relevant traffic.  When creating a parser token, you may want to err on the side of caution and make it a little smaller, but know that you will have to add additional checks to ensure it is the traffic you want.

 

In Lua parsers, you are always on a byte.  Therefore, we need to know where we are and where we want to go.  I like to set a variable called 'current_position' to denote where my pointer is in the stream of data.  When the parser matches on a token, it returns three values: the token itself, the position of the first byte of the token in the data stream, and the position of the last byte of the token.  This helps me because I want to find the 'regf' token and move forward 17 bytes to land on the Major Version field.

 

function fingerprint_reg:magic(token, first, last)
    current_position = last + 17
    local payload = nw.getPayload(current_position, current_position + 7)
    if payload and #payload == 8 then
        local majorversion = payload:uint32(1, 4)
        if majorversion == 16777216 then
            local minorversion = payload:uint32(5, 8)
            if minorversion == 50331648 or minorversion == 67108864 or minorversion == 83886080 or minorversion == 100663296 then
                nw.createMeta(self.keys["filetype"], "registry hive")
            end
        end
    end
end

This will put the pointer on the first byte (0x01) of the Major Version field. Next what I want to do is extract only the payload I need to do my next set of checks, which will involve reading the bytes.

 


 

Here, I created a variable called ‘payload’ and used the built-in function ‘nw.getPayload’ to get the payload I wanted. Since I previously declared a variable called ‘current_position’, I use that as my starting point and tell it to go forward 7 bytes. This gives me a total of 8 bytes of payload. Next, I make sure that I have payload and that it is, in fact, 8 bytes in length (#payload == 8).

 


 

 

If the payload checks out, then I want to read the first 4 bytes, since that should be the Major Version.  In the research I did, I saw that the Major Version was typically '1' and was represented as '0x01000000'.  Since I want to read those 4 bytes, I use "payload:uint32(1, 4)".  Since those bytes will be read as one value, I pre-calculate what that should be and use it as a check.  The value should be '16777216'.  If it is, then we move on to the next check.
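(As a quick sanity check on that pre-calculated constant - assuming uint32 reads the 4 bytes in the order they appear in the payload, the on-disk little-endian value 1, stored as the bytes 01 00 00 00, comes back as 0x01000000:

0x01000000 = 1 * 2^24 = 16777216    -- major version 1
0x03000000 = 3 * 2^24 = 50331648    -- minor version 3, the first value used in the Minor Version check

The same arithmetic produces the other three minor-version constants.)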

 


 

The Minor Version check winds up being the second and last check to make sure it is a registry hive.  For this code to run, the Major Version had to have been found and validated by the IF statement.  Here, we grab the next 4 bytes and store them in a variable called 'minorversion'.  There were four possible values that I found in my research: '0x03000000', '0x04000000', '0x05000000', and '0x06000000'.  Therefore, I pre-calculated those values in decimal form, as I did with the Major Version, and did a comparison (==).  If a value matches, the parser writes the text 'registry hive' as meta into the 'filetype' meta key.

 

The approach shown here was useful for examining a particular type of file as it was observed in network traffic.  The same approach could be used for protocol analysis, identifying new service types, and many other cases as well.  If you would like expert assistance with creating a custom parser for traffic that is unique in your environment, that is a common service offering provided by RSA.  If you're interested in this type of service offering, please feel free to contact your local sales rep.

 

The parser is attached, and I have also submitted it to RSA Live for future use.  I hope you find this parser breakdown helpful and as always, happy hunting.

 

Chris

Microsoft has been converting customers to O365 for a while; as a result, more and more traffic is being routed from on-premises networks out to Microsoft clouds, potentially putting it into the visibility of NetWitness.  Being able to group that traffic into a bucket for potential whitelisting, or at the very least identification, could be useful.

 

Microsoft used to provide an XML file listing all the IPv4 addresses, IPv6 addresses, and URLs required for accessing their O365 services.  This is being deprecated in October 2018 in favor of API access.

 

This page gives a great explainer on the data in the API and how to interact with it, as well as Python and PowerShell scripts to grab the data for use in firewalls, etc.

 

Managing Office 365 endpoints - Office 365 

 

The PowerShell script is where I started, so that a script could be run on a client workstation to determine whether there were any updates and then apply the relevant data to the NW environment.  Eventually, hopefully, this gets into the generic whitelisting process that is being developed, so that it is programmatically delivered to NW environments.

 

GitHub - epartington/rsa_nw_lua_feed_o365_whitelist: whitelisting Office365 traffic using Lua and Feeds 

 

The script provided by Microsoft was modified to create 3 output files for use in NetWitness:

o365ipv4out.txt

o365ipv6out.txt

o365urlOut.txt

 

The IP feeds are in a format that can be used as feeds in NetWitness; the GitHub link with the code provides the XML for them to map to the same keys as the Lua parser, so there is alignment between the three.

 

The o365urlOut.txt file is used in a Lua parser to map against the alias.host key.  The reason a Lua parser was used is a limitation of the feeds engine that prevents wildcard matching: matches in feeds need to be exact, and some of the hosts provided by the API are *.domain.com.  The Lua parser attempts a direct exact match first, then falls back to subdomain matches to see if there are any hits.
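The real matching code lives in the parser on GitHub, but the fallback idea is simple enough to sketch in isolation (this is only an illustration of the logic, not the published parser - the table contents and function name below are made up, reusing a couple of hosts from the sample output further down):

-- illustrative sketch: exact match first, then walk up the parent domains
local o365_hosts = {
    ["adl.windows.com"] = "office365",
    ["aadrm.com"] = "office365",
}

local function lookupHost(host)
    if o365_hosts[host] then
        return o365_hosts[host]      -- direct exact match
    end
    -- fall back: a.b.aadrm.com -> b.aadrm.com -> aadrm.com
    local parent = string.match(host, "%.(.+)$")
    while parent do
        if o365_hosts[parent] then
            return o365_hosts[parent]
        end
        parent = string.match(parent, "%.(.+)$")
    end
    return nil
end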

 

The Lua parser has the current host list as of the published version; as Microsoft updates their API, the list needs to be changed.  That's where the PS1 script comes in.  It can be run from a client workstation; if there are changes, open the output txt file, copy the text into the Decoder > Config > Files tab, and replace the text in the parser to include the published changes.  The decoder then needs to have its parsers reloaded, which can be done from the REST interface or the explore menu.  You can also push the updated parser to all your other Log and Packet Decoders to keep them up to date as well.

 

The output of all of this content is data in the filter meta key:

filter='office365'

filter='whitelist'

 

Sample URL output

["aadrm.com"] = "office365",
["acompli.net"] = "office365",
["adhybridhealth.azure.com"] = "office365",
["adl.windows.com"] = "office365",
["api.microsoftstream.com"] = "office365",

 

sample IPv4 output

104.146.0.0/19,whitelist,office365
104.146.128.0/17,whitelist,office365
104.209.144.16/29,whitelist,office365
104.209.35.177/32,whitelist,office365

 

My knowledge of PowerShell was pretty close to 0 at the beginning of this exercise; now it's closer to 0.5.

 

To Do Items you can help with:

Ideally I would like the script to output the serviceArea of each URL or IP network, so that you can tell which O365 service the content belongs to and get more granular data on which part of the suite is being used.

serviceArea = "Exchange","sway","proplus","yammer" ...

If you know how to modify the script to do this, I'm more than happy to update it to include those changes.  Ideally, 3-4 levels of filter would be perfect.

 

whitelist,office365,yammer

 

would be sufficient granularity, I think.

 

Changes you might make:

The key to read from is alias.host.  If you have logs that write values into domain.dst or host.dst that you want considered, and you are on NW11, you can change the key to host.all to include all of those at once in the filtering (just make sure that key is in your index-decoder-custom.xml).

 

Benefits of using this:

The ability to reduce the noise on the network from known or trusted communications to Microsoft that could be treated as lower priority - especially when investigating outbound traffic, where you can remove known O365 traffic (PowerShell from an endpoint to the internet != Microsoft).

 

As an FYI, so far all the test data that I have lists the outbound traffic as heading to org.dst='Microsoft Hosting'.  I'm sure on a wider scale of data that isn't true, but so far the whitelist lines up 100% with that org.dst.

The Respond Engine in 11.x contains several useful pivot points and capabilities that allow analysts and responders to quickly navigate from incidents and alerts to the events that interest them.

 

In this blog post, I'll be discussing how to further enable and improve those pivot options within alert details, to provide both more pivot links and more easily usable links.

 

During the incident aggregation process, the scripts that control the alert normalizations create several links (under Related Links) that appear within each alert's Event Details page.

 

These links allow analysts to copy/paste the URI into a browser and pivot directly to the events/session that caused the alert, or to an investigation query against the target host. 

 

What we'll be doing here is adding additional links to this Related Links section to allow for more pivot options, as well as adding the protocol and web server components to the existing URI in order to form a complete URL.

 

The files that we will be customizing for the first step are located on the Node0 (Admin) Server in the "/var/netwitness/respond-server/scripts" directory:

  • normalize_core_alerts.js
  • normalize_ma_alerts.js

 

(We will not be modifying the normalize_ecat_alerts.js or normalize_wtd_alerts.js scripts because the Related Links for those pivot you outside of the NetWitness UI.)

 

As always, back up these files before committing any changes and be sure to double-check your changes for any errors.

 

Within each of these files, there is an exports.normalizeAlert function:

 

At the end of this function, just above the "return normalized;" statement, you will add the following lines of code:

 

//copying additional links created by the utils.js script to the event's related_links
for (var j = 0; j < normalized.events.length; j++) {
    if (normalized.related_links) {
        normalized.events[j].related_links = normalized.events[j].related_links.concat([normalized.related_links]);
    }
}

 

 

So the end of the exports.normalizeAlert function now looks like this:

 

Once you have done this, you can now move on to the next step in this process.  This step will require modification of 3 files - the two we have already changed plus the utils.js script - all still located in the "/var/netwitness/respond-server/scripts" directory:

  • normalize_core_alerts.js
  • normalize_ma_alerts.js
  • utils.js

 

Within each of these files, search for "url:" to locate the statements that generate the URIs in Related Links.  You will be modifying these URIs into complete URLs by adding "https://<your_UI_IP_or_Hostname>/" to the beginning of each statement.

 

For example, this: 

 

...becomes this:
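In case the screenshots don't render, the shape of the change is roughly the following (a hypothetical illustration only - someUri stands in for whatever expression your script actually uses to build the relative URI):

// before: the script builds a relative URI
url: someUri
// after: prepend the protocol and UI host to form a complete URL
url: 'https://<your_UI_IP_or_Hostname>/' + someUri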

 

Do this for all of the "url:" statements, except this one in "normalize_core_alerts.js," as this pulls its URI / URL from a function in the script that we are already modifying:

 

Once you have finished modifying these files and double-checking your work for syntax (or other) errors, restart the Respond Server (systemctl restart rsa-nw-respond-server) and begin reaping your rewards:

 

RSA SecurID Access (Cloud Authentication Service) is an access and authentication platform with a hybrid on-premise and cloud-based service architecture. The Cloud Authentication Service helps secure access to SaaS and on-premise web applications for users, with a variety of authentication methods that provide multi-factor identity assurance. The Cloud Authentication Service can also accept authentication requests from a third-party SSO solution or web application that has been configured to use RSA SecurID Access as the identity provider (IdP) for authentication.

 

For More details:

RSA SecurID Access Overview 

Cloud Authentication Service Overview 

 

 

The RSA NetWitness Platform uses the Plugin Framework to connect to the RSA SecurID Access (Cloud Authentication Service) RESTful API and periodically query for admin activity.  This provides visibility into all administrative activities, such as Policy, Cluster, User, RADIUS Server, and various other configuration changes.

 

Here is a detailed list of all the administrative activity that can be monitored via this integration:

Administration Log Messages for the Cloud Authentication Service 

 

Downloads and Documentation:

 

Configuration Guide: RSA SecurID Access Event Source Configuration Guide

(Note: This is currently only supported on RSA NetWitness 10.6.6.  It will also be available in 11.2, coming soon.)

Collector Package on RSA Live:  "RSA SecurID"

Parser on RSA Live: "CEF". (device.type=rsasecuridaccess) 

Servers are attacked every day, and sometimes those attacks are successful.  A lot of attention is paid to Windows executables that come down the wire, but I also wanted to know when my systems were downloading ELF files, which are typically used by Linux systems.  With some recent exploits targeting Linux web servers and delivering crypto-mining software, I wrote a parser that attempts to identify Linux ELF files and places that meta in the 'filetype' meta key.

 

 

 

This isn't limited to crypto-mining ELF files and has detected many others in testing.  The parser is attached below.

 

I hope you find this parser useful, and as always, happy hunting.

 

Chris
