
I helped one of my customers implement a use case last year that entailed sending email alerts to specific users when those users logged into legacy applications within their environment.

 

Creating the alerts for this activity with the ESA was rather trivial - we knew which event source would generate the logs and the meta to trigger against - but sending the alert via email to the specific user that was ID'd in the alert itself added a bit of complexity.

 

Fortunately, others have had similar-ish requirements in the past and there are guides on the community that cover how to generate custom emails for ESA alerts through the script notification option, such as Custom ESA email template with raw event payload and 000031690 - How to send customized subjects in an RSA Security Analytics ESA alert email.

 

This meant that all we had to do was map the usernames from the log events to the appropriate email addresses, enrich the events and/or alerts with those email addresses, and then customize the email notification using that information.  Mapping the usernames to email addresses and adding this information to events/alerts could have been accomplished in a couple different ways - either a custom Feed (Live: Create a Custom Feed) or an In-Memory Table (Alerting: Configure In-Memory Table as Enrichment Source) - for this customer the In-Memory Table was the preferred option because it would not create unnecessary meta in their environment.

 

We added the CSV containing the usernames and email addresses as an enrichment source:
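 

For reference, the enrichment source was just a simple CSV mapping usernames to email addresses, along the lines of the entirely hypothetical example below (the column names are illustrative - use whatever column names you defined for your in-memory table):

username,email
jsmith,jsmith@example.com
mjones,mjones@example.com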

 

...then added that enrichment to the ESA alert:

 

With these steps done, we triggered a couple alerts to see exactly what the raw output looked like, specifically how the enrichment data was included.  The easiest way to find raw alert output is within the Respond module by clicking into the alert and looking for the "Raw Alert" pane:
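 

For reference, a stripped-down (and entirely hypothetical) raw alert for this use case would look something like the structure below - the key detail is that the enrichment shows up as a nested array under the event (named "user_emails" here, after the enrichment table), which is what the script further down walks into:

{
  "module_name": "Legacy Application Login",
  "severity": 5,
  "events": [
    {
      "device_host": "legacyapp01",
      "service_name": "LegacyApp",
      "user_dst": "jsmith",
      "user_emails": [
        { "email": "jsmith@example.com" }
      ]
    }
  ]
}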

 

Armed with this information, we were then able to write the script (copy/pasting from the articles linked above and modifying the details) to extract the email address and use that as the "to_addr" for the email script (also attached at the bottom of this post):

#!/usr/bin/env python
from smtplib import SMTP
import datetime
import json
import sys

def dispatch(alert):
    """
    The default dispatch just prints the 'last' alert to /tmp/esa_alert.json. Alert details
    are available in the Python hash passed to this method e.g. alert['id'], alert['severity'],
    alert['module_name'], alert['events'][0], etc.
    These can be used to implement the external integration required.
    """

    with open("/tmp/esa_alert.json", mode='w') as alert_file:
        alert_file.write(json.dumps(alert, indent=True))

def read():
    #Parameter
    smtp_server = "<your_mail_relay_server>"
    smtp_port = "25"
    # "smtp_user" and "smtp_pass" are necessary
    # if your SMTP server requires authentication
    # used in "smtp.login()" below
    #smtp_user = "<your_smtp_user_name>"
    #smtp_pass = "<your_smtp_user_password>"
    from_addr = "<your_mail_sending_address>"
    missing_msg = ""
    to_addr = ""  #defined from enrichment table

    # Get data from JSON
    esa_alert = json.loads(open('/tmp/esa_alert.json').read())
    #Extract Variables (Add as required)
    try:
        module_name = esa_alert["module_name"]
    except KeyError:
        module_name = "null"
    try:
        to_addr = esa_alert["events"][0]["user_emails"][0]["email"]
    except KeyError:
        missing_msg = "ATTN:Unable to retrieve from enrich table"
        to_addr = "<address_to_send_to_when_enrichment_fails>"
    try:
        device_host = esa_alert["events"][0]["device_host"]
    except KeyError:
        device_host = "null"
    try:
        service_name = esa_alert["events"][0]["service_name"]
    except KeyError:
        service_name = "null"
    try:
        user_dst = esa_alert["events"][0]["user_dst"]
    except KeyError:
        user_dst = "null"
    # Sends Email
    smtp = SMTP()
    smtp.set_debuglevel(0)
    smtp.connect(smtp_server,smtp_port)

    date = datetime.datetime.now().strftime( "%m/%d/%Y %H:%M" ) + " GMT"
    subj = "Login Attempt on " + ( device_host )
    message_text = ("Alert Name: \t\t%s\n" % ( module_name ) +
        " \t\t%s\n" % ( missing_msg ) +
        "Date/Time : \t%s\n" % ( date  )  +
        "Host: \t%s\n" % ( device_host ) +
        "Service: \t%s\n" % ( service_name ) +
        "User: \t%s\n" % ( user_dst )
    )

    msg = "From: %s\nTo: %s\nSubject: %s\nDate: %s\n\n%s\n" % ( from_addr, to_addr, subj, date, message_text )
    # "smtp.login()" is necessary if your
    # SMTP server requires authentication
    #smtp.login(smtp_user,smtp_pass)
    smtp.sendmail(from_addr, to_addr, msg)
    smtp.quit()

if __name__ == "__main__":
    dispatch(json.loads(sys.argv[1]))
    read()
    sys.exit(0)

 

And the result, after adding the script as a notification option within the ESA alert:

-----------------------------

 

Of course, all of this can and should be modified to include whatever information you might want/need for your use case.

Amazon Virtual Private Clouds (VPC) are used in hybrid cloud enterprise environments to securely host certain workloads, and customers need to enable their SOC to identify potential threats to these components of their infrastructure.  The RSA NetWitness Platform supports ingest of many third-party sources, including Amazon CloudTrail, GuardDuty, and now VPC Flow Logs.

 

The RSA NetWitness Platform has reporting content for analysts to leverage in assessing VPC security and overall health.  In https://community.rsa.com/docs/DOC-97451 we illustrate out-of-the-box reporting content that allows an analyst to get quick visibility into potential operational issues, such as highest and lowest accepted/rejected connections and traffic patterns on each VPC.

 

VPC Flow Logs is an AWS monitoring feature that captures information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs and Amazon S3. After you've created a flow log, you can retrieve and view its data in the chosen destination. 

 

Logs from Amazon VPCs can be exported to CloudWatch. The RSA NetWitness Platform AWS VPC plugin uses the CloudWatch API to capture the logs.
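 

Independent of the plugin, you can sanity-check that flow log events are actually arriving in CloudWatch Logs with a few lines of boto3. This is only an illustrative sketch - the region and log group name below are assumptions, so substitute whatever you configured for your flow logs:

#!/usr/bin/env python
# Quick check that VPC Flow Log records are landing in CloudWatch Logs (illustrative only).
import boto3

logs = boto3.client("logs", region_name="us-east-1")  # region is an assumption

# "vpc-flow-logs" is a hypothetical log group name - use the one your flow log publishes to
response = logs.filter_log_events(logGroupName="vpc-flow-logs", limit=10)

for event in response.get("events", []):
    # Each message is a space-delimited flow log record (version, account, interface,
    # src/dst addresses and ports, protocol, packets, bytes, action, status, ...)
    print(event["message"])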

 

 

 

 

                                                                                                                                   

This project is an attempt at building a method of orchestrating threat hunting queries and tasks within RSA NetWitness Orchestrator (NWO).  The concept is to start with a hunting model defining a set of hunting steps (represented in JSON), have NWO ingest the model and make all of the appropriate "look ahead" queries into NetWitness, organizing results, adding context and automatically refining large result sets before distributing hunting tasks among analysts.  The overall goal is to have NWO keep track of all hunting activities, provide a platform for threat hunting metrics, and (most importantly) offload from the analyst as much as possible of the tedious and repetitive data querying, refining, and management that is typical of threat hunting.

 

Please leave a comment if you'd like some help getting everything going, or have a specific question.  I'd love to hear all types of feedback and suggestions to make this thing useful.

 

Usage

The primary dashboard shows the results of the most recent Hunting playbook run, which essentially contains each hunting task on its own row in the table, a link to the dynamically generated Investigation task associated with that task, a count of the look ahead query results, and multiple links to the data in the underlying toolset (in this case, RSA NetWitness).  The analyst has not been involved up until now, as NWO did all of this work in the background.

 

Primary Hunting Dashboard

 

Pre-built Pivot into Toolset (NetWitness here)

 

The automations will also try to add extra context if the result set is Rare, has Possible Beaconing, contains Indicators, and a few other such "influencers", along with extra links to just those subsets of the overall results for that task.  Influencers are special logic embedded within the automations to help extract additional insight so that the analyst doesn't have to.  There hasn't been too much thought put into the logic behind these pieces just yet, so please consider them all proofs of concept and/or placeholders and definitely share any ideas or improvements you may have.  The automations will also try to pare down large result sets if you have defined thresholds within the hunting model JSON. The entire result set will still be reachable, but you'll get secondary counts/links where the system has tried to aggregate the rarest N results based on the "Refine by" logic also defined in the model, e.g.:

 

If defined in huntingcontent.json, a specific query/task can be given a threshold, and the automation will try to refine results by rarity if that threshold is hit.  The example above shows a raw count of 6850 results, but a refined set of 35 results mapping to the rarest 20 {ip.dst, org.dst} tuples seen.
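 

To make the "refine by" idea concrete, below is a rough Python sketch of the kind of aggregation involved when a threshold is hit. This is not the actual NWO automation code, just the concept: count how often each {ip.dst, org.dst} tuple appears in the raw results and keep only the events belonging to the N rarest tuples:

from collections import Counter

def refine_by_rarity(events, refine_by=("ip.dst", "org.dst"), keep_rarest=20):
    """Keep only events whose refine-by tuple is among the N least common tuples."""
    tuples = [tuple(event.get(key) for key in refine_by) for event in events]
    counts = Counter(tuples)
    # most_common() sorts from most to least common, so the tail holds the rarest tuples
    rarest = {t for t, _ in counts.most_common()[-keep_rarest:]}
    return [event for event, t in zip(events, tuples) if t in rarest]

# e.g. 6850 raw results might reduce to a few dozen events covering
# only the 20 rarest {ip.dst, org.dst} combinations.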

 

For each task, the assigned owner can drill directly into the relevant NetWitness data, or can drill into the Investigation associated with the task.  Right now the investigations for each task don't contain any special playbooks themselves - they simply serve as a way to organize tasks, hold findings for each hunt task, and provide a place from which to spawn child incidents if anything is found:

 

 

From here it is currently just up to the analyst to create notes, complete the task, or generate child investigations.  Future versions will do more with these sub-investigation/hunt task playbooks to help the analyst; for now it's just a generic "Perform the Hunt" manual task.  Note that when these hunt task investigations get closed, the Hunt Master will update the hunting table and mark that item as "complete", signified by a green dot and a cross-through as shown in the first screenshot.

 

How It Works

Playbook Logic

  1. Scheduled job or ad-hoc creation of "Hunt" incident that drives the primary logic and acts as the "Hunt Master"
  2. Retrieve hunting model JSON (content file and model definition file) from a web server somewhere
  3. Load hunting model, perform "look ahead", influencer, and refining queries
  4. Create hunting table based on query results, mark each task as "In Progress", "Complete", or "No Query Defined"
  5. Generate dynamic hunting investigations for each task that had at least 1 result from step 3
  6. Set a recurring task for the Hunt Master to continuously look for all related hunt tasks (they share a unique ID) and monitor their progress, updating the hunting table accordingly.
  7. [FUTURE] Continuously re-query the result sets in different ways to find outliers (e.g. stacking different meta keys and adding new influencers/links to open hunting tasks)

(Both the "Hunt Master" and the generated "Hunt Tasks" are created as incidents, tied together with a unique ID - while they could certainly be interacted with inside of the Incidents panel, the design is to have hunters operate from the hunting dashboard)

 

The Hunting Model

Everything is driven off of the hunting model & content.  The idea is to be able to implement any model/set of hunting tasks along with the queries that would normally get an analyst to the corresponding subset of data.  The example and templates given here correspond with the RSA Network Hunting Labyrinth, modeled after the RSA NetWitness Hunting Guide: RSA NetWitness Hunting Guide 

huntingcontent.json

This file must sit on a web server somewhere, accessible by the NWO server. You will later configure your huntingcontentmodel.json file to point to its location if you want to manage your own (instead of the default version found here: https://raw.githubusercontent.com/smennis/nwohunt/master/content/huntingcontent.json).

 

This file defines hunting tasks in each full branch of the JSON file, along with queries and other information to help NWO discover the data and organize the results:

 

 

(snippet of hunting content json)

 

The JSON file can have branches of length N, but the last element in any given branch, which defines a single hunting technique/task, must have an element of the following structure. Note that "threshold" and "refineby" are optional, but "query" and "description" are mandatory, even if the values are blank.
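 

As a purely illustrative sketch of that shape (the query string is a placeholder, and the exact value types for "threshold" and "refineby" should be taken from the published example content rather than from here), a leaf element might look like:

{
  "description": "What this hunting task looks for and why it matters",
  "query": "<NetWitness query that gets the analyst to this subset of data>",
  "threshold": 5000,
  "refineby": "<meta key(s) used to aggregate the rarest results>"
}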

 

 

The attached example huntingcontent.json (the same as the GitHub link above as of the first release) is meant to be a template, as it is currently at the very beginning stages of being mapped to the RSA Network Hunting Labyrinth methodology.  It will be updated with higher resolution queries over time. Once we can see this operate in a real environment, the plan is to leverage a lot more of the ioc/eoc/boc and *.analysis keys in RSA NetWitness to take this beyond a simple proof of concept. You may also choose to completely define your own to get started.

 

huntingcontentmodel.json

This file must sit on a web server somewhere, accessible by the NWO server. A public version is available here: https://raw.githubusercontent.com/smennis/nwohunt/master/content/huntingcontentmodel.json, but you will have to clone and update this to include references to your NWO and NW servers before it will work.  This serves as the configuration file, specific to your environment, that describes the structure of the huntingcontent.json file, display options, icon options, language, resource locations, and a few other configurations. It was done this way to avoid hard-coding anything into the actual playbooks and automations:

 

model: Defines the heading for each level of huntingcontent.json.  A "-" in front means it will still be looked for programmatically but will not be displayed in the table.

 

language: This is a basic attempt at making the hunting tasks described by the model more human-readable by connecting each level of the JSON with a connector word.  Again, a "-" in front of the value means it will not be displayed in the table.


groupingdepth:
This tells NWO how many levels deep to go when grouping the tasks into separate tables. E.g., a grouping level of "0" would produce one large table with each full JSON branch in a row, while a grouping level of "3" will create a separate table for each group defined 3 levels into the JSON (this is what's shown in the dashboard table above).

 

verbosity: 0 or 1 - a value of 1 means that an additional column will be added to the table with the entire "description" value displayed. When 0, you can still see the description information by hovering over the "hover for info" link in the table.

 

queryurl: This defines the base URL where the specific queries (in the 'query' element of huntingcontent.json) will be appended in order to drill into the data.  Example above is from my lab, so be sure to adjust this for your environment.

 

influencers: The set of influencers above is what has been built into the logic so far.  This isn't as modular under the hood as it should be, but I think this is where there is a big opportunity for collaboration and innovation, and where some of the smarter & continuous data exploration will be governed.  iocs, criticalassets, blacklist, and whitelist are just additional queries the system will do to gain more insight and add the appropriate icon to the hunt item row.  rarity is not well implemented yet and just adds the icon when there are < N (10 in this case) results.  This will eventually be updated to look for rarity in the dataset against a specific entity (single IP, host, etc.) rather than the overall result count.  possiblebeacon is implemented to look for a 24-hour average of communication between two hosts signifying approximately 1 beacon per minute, per 5 minutes, or per 10 minutes, along with a tolerance percentage.  I'm just experimenting with it at this point.  Note that the "weight" element doesn't affect anything just yet; the eventual concept is to build a scoring algorithm to help prioritize or add fidelity to the individual hunting tasks.
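 

Pulling those settings together, a heavily trimmed skeleton of huntingcontentmodel.json might look like the sketch below. Treat this purely as an orientation aid - the field names come from the descriptions above, but the exact structure (the influencer sub-fields in particular) should be copied from the published file on GitHub rather than from this sketch:

{
  "huntingContent": "https://<your_web_server>/huntingcontent.json",
  "queryurl": "https://<your_netwitness_server>/investigation/...",
  "groupingdepth": "3",
  "verbosity": "1",
  "model": ["<level 1 heading>", "<level 2 heading>", "-<hidden level>"],
  "language": ["<connector word>", "-<hidden connector>"],
  "influencers": {
    "iocs": "<query/config>",
    "criticalassets": "<query/config>",
    "blacklist": "<query/config>",
    "whitelist": "<query/config>",
    "rarity": "<threshold config>",
    "possiblebeacon": "<interval/tolerance config>"
  }
}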

   

Instructions

Installation Instructions:

  1. Prerequisites:  RSA NetWitness Network (Packets) and Logs integration installed (the original version of the NetWitness query integration).  Note that there is currently a v2 NetWitness integration, but this content will not work with that version at this time due to the change in how the commands work. I will try to update the automations for the v2 integration ASAP.
    1. The v1 NetWitness Integration is included in the zip.  Settings > Integrations > Import.
  2. Create a new incident type named "Hunt Item" (don't worry about mapping a playbook yet)
  3. Import Custom Fields (Settings > Advanced > Fields) - import incidentfields.json (ignore errors)
  4. Import Custom Layouts (Settings > Advanced > Layout Builder > Hunt)
    1. Incident Summary - import layout-details.json
    2. New/Edit - import layout-edit.json
    3. Incident Quick View - import layout-details.json
  5. Import Automations (Automations > Import - one by one, unfortunately)

       - GenerateHuntingIncidents

       - PopulateHuntingTable

       - GenerateHuntingIncidentNameID

       - LoadHuntingJSON

       - NetWitness LookAhead

       - ReturnRandomUser

       - UpdateHuntingStatus

  6. Import Dashboard Widget Automations (Automations > Import)

       - GetCurrentHuntMasterForWidget

       - GetHuntParticipants

       - GetHuntTableForWidget   

  7. Import Sub-Playbooks (Playbooks > Import)

       - Initialize Hunting Instance

       - Hunting Investigation Playbook

  8. Import Primary Playbook (Playbooks > Import)

    - 0105 Hunting

  9. Map "0105 Hunting" Playbook to "Hunt" Incident Type (Settings > Incident Types > Hunt) and set the playbook to automatically start

  10. Map "Hunting Investigation Playbook" to "Hunt Item" Incident Type and set playbook to automatically start

  11. Import Dashboard

  12. Place huntingcontent.json and huntingcontentmodel.json (within the www folder of the packaged zip) onto a web server somewhere, accessible by the NWO server (a quick way to host them for testing is shown below). Note, by default the attached/downloadable huntingcontentmodel.json points at GitHub for the huntingcontent.json file. You can leave this as is (and over time you'll get a more complete set of hunting queries) or create your own as you see fit and place it on your web server as well.
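 

If you don't have a web server handy for testing, any static file server reachable from the NWO host will do - for example, from the directory containing the two JSON files (Python 2 shown since that is what was typically available at the time; on Python 3 the equivalent module is http.server):

# serve the current directory over HTTP on port 8000 (testing only)
python -m SimpleHTTPServer 8000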

Configuration:

Before the first run, you'll have to make a few changes to point the logic at your own environment:

  1. Edit huntingcontentmodel.json and update all queryURL and icon URL fields to point at your NetWitness server and web server respectively.  You can also edit the "huntingContent" element of this file (not shown) to point at your own version of the huntingcontent.json file discussed above:
    (Top - huntingcontentmodel.json snippet, showing the references with respect to your standard NetWitness UI URL)
  2. Go into the "Initialize Hunting Instance" playbook, click on "Playbook Triggered" and enter the path to your huntingcontentmodel.json file (that includes updated fields pointing to NetWitness). If you leave it as is, none of the look ahead queries will work since no configuration file will be loaded.
  3. Create your first hunting incident: from Incidents > New Incident, select type "Hunt" and give it a time range. Start with 1 day for testing.
  4. Note that the playbook will automatically name the incident "Hunt Master" prepended with a unique HuntID. Everything is working if, in the Incidents page, you see a single Hunt Master and associated Hunt Items all sharing the same HuntID.

Opening up the Hunt Master incident Summary page (or Hunting Dashboard) should show you the full hunting table:

 

Please add comments as you find bugs or have additional ideas and content to contribute.

We are extremely proud to announce that RSA has been positioned as a “Leader” by Gartner®, Inc. in the 2018 Magic Quadrant for Security Information and Event Management research report for its RSA NetWitness® Platform.

 

The RSA NetWitness Platform pulls together SIEM, network monitoring and analysis, endpoint threat detection, UEBA and orchestrated response capabilities into a single, evolved SIEM solution. Our significant investments in our platform over the past 18 months make us the go-to platform for security teams to rapidly detect and respond to threats across their entire environment.

 

The 2018 Gartner Magic Quadrant for SIEM evaluates 17 vendors on the basis of the completeness of their vision and ability to execute. The report provides an overview of each vendor’s SIEM offering, along with what Gartner sees as strengths and cautions for each vendor. The report also includes vendor selection tips, guidance on how to define requirements for SIEM deployments, and details on its rigorous inclusion, exclusion and evaluation criteria. Download the report and learn more about RSA NetWitness Platform.

If you've ever wondered what levers you have available to pull for creating application rule logic then this is your one stop shop for an explanation.

 

There's a fully documented cheat sheet of the parameters you can use in application rules, located at the link below:

Application Rules Cheat Sheet 

 

There were some operators that I personally wasn't aware of - for example, using ~ instead of not() to negate the contains/begins/ends functions - and I had forgotten about the ucount and unique operators that are available.

 

Also, v11.x introduced the ability to have metakeys on both the left and right side of operators (the table in that link explains which ones are available).

 

Overall, this is a good resource to bookmark if you are developing application rules in RSA NetWitness.

A recent customer question about alerting on Uptime values from the REST API got me digging into the Health and Wellness Policies for a better solution.

 

The request was to alert when the uptime value for specific device families was reset, indicating that something had occurred with the service and reset the uptime value.  Repeated resets of the uptime value could indicate an issue with the service that needed attention (core files created as a result of decoder service crashes were the root of this request).

 

Here is my solution:

  • Admin > Health and wellness > Policies
  • Select the + and add a new policy for the service that you want to monitor
  • In this case the Archiver service is our example

  • Add a new Rule
  • The conditions
    • Alarm = Regex match on .., .. seconds.*
    • Recovery = !Regex match on .., .. seconds.*

  • Save
  • Set your notification output at the bottom
  • save and enable the policy at the top

 

Now you have a policy that alerts when the uptime is within the first 60 seconds of restarting (".." is two digits, so 10-59 seconds) and recovers once the uptime doesn't match the pattern (when 60 seconds switches to minutes and seconds, i.e. 61 seconds +).

 

Alarm

Recovery

 

 

Details on the pattern developed:

The number of seconds, followed by a comma, then the friendly time breakdown of the seconds in years, months, weeks, days, hours, minutes and seconds.

.. = looked for 2 digits for the seconds (between 10-59 seconds after service restarted)

, .. = looked for the same seconds value after the comma

seconds.* = the word seconds and anything after it (including the trailing space in the value)

When this pattern is matched (between 10-59 seconds after restart) there will be an alarm; it will then clear when the pattern is no longer matched (60 seconds +).
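 

If you want to sanity-check the pattern outside of Health & Wellness, the behavior is easy to reproduce with a quick regex search - this assumes the uptime stat is rendered as a string like "45, 45 seconds ", per the breakdown above:

import re

pattern = r".., .. seconds.*"

print(bool(re.search(pattern, "45, 45 seconds ")))           # True  - ~45s after restart, alarm raised
print(bool(re.search(pattern, "99, 1 minute 39 seconds ")))  # False - past 60s, alarm recovers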


Hunting in RDP Traffic

Posted by Eric Partington, Nov 12, 2018

I was just working in the NOC for HackFest 2018 in Quebec City (https://hackfest.ca/en/) and playing with RDP traffic to see who was potentially accessing remote systems on the network.  

 

This was inspired by this deck from BroCon (https://www.bro.org/brocon2015/slides/liburdi_hunting_rdp.pdf) and some recent enhancements to the RDP parser.

 

Recent enhancements to the RDP parser include extracting the screen resolution, the username, the hostname, the certificate and other details.

 

With some simple charting language we can create a number of rules that look for various properties of RDP traffic based on direction (should you have RDP inbound from the internet? should you have RDP outbound to the internet?) as well as volume-based rules (which system has the most RDP session logins by unique username? which system connects to the most systems by distinct count of IP?).

 

The report language is hosted here; simply import it into your Reporting Engine and point it at your packet broker/concentrators.

GitHub - epartington/rsa_nw_re_rdp: RDP summary reports for hunting/identification 

 

Please let me know if there are modifications to the Report that make it more useful to you.

 

Rules included in the report:

  • most frequent RDP hostnames
  • most frequent RDP keyboard languages
  • least frequent RDP keyboard languages
  • Outbound/Inbound/Lateral RDP traffic
  • Most frequent RDP screen resolutions
  • Most frequent RDP Usernames
  • Usernames by distinct destination IP
  • RDP Hosts with more than 1 username from them

A couple of clients have asked about a generic ESA template that can be used to alert into ArcSight for correlation with other sources.  After some testing and configuration this was the template that was created.  One thing that had us stuck for a short period of time was the timezone offset in the FreeMarker template to get ArcSight to read the time as UTC and apply the correct time offset.

 

Hopefully this helps others with this need.

 

<#include "macros.ftl"/>
CEF:0|RSA|NetWitness ESA|11.0|${moduleName}|${moduleName}|${severity}|<#list events as x>externalId=${x.sessionid!" "} proto=${x.ip_proto!" "} categoryOutcome=/Attempt categoryObject=Network categorySignificance=/Informational/Warning categoryBehavior=/Communicate host=<#if x.alias_host?has_content><@value_of x.alias_host /></#if> src=${x.ip_src!" "} spt=${x.tcp_srcport!" "} dhost=${x.host_dst!" "} dst=${x.ip_dst!" "} dpt=${x.tcp_dstport!" "} act=${x.action!" "} rt=${time?datetime?string("MMM dd yyyy HH:mm:ss z")} duser=${x.ad_username_dst!" "} suser=${x.ad_username_src!" "} filePath=${x.filename!" "} requestMethod=${x.action!" "} destinationDnsDomain=<#if x.alias_host?has_content><@value_of x.alias_host /></#if>  destinationServiceName=${x.service!" "}</#list> cs4=${moduleName} cs5=PROD cs6=MalwareCommunication

 

This CEF template is added to the Admin > System > Global Notifications > Templates tab and referenced in the ESA rules that need to alert out to ArcSight when they fire.

As cloud deployments continue to gain popularity you may find the need for running the RSA NetWitness Platform in Google Cloud.  The RSA NetWitness Platform is already available for AWS and Azure, however it is not "officially" available in Google Cloud as of 11/2018.

 

In this blog post I will walk through how to get the RSA NetWitness Platform running in Google Cloud.  This is NOT officially supported, however it does work and has been deployed in the field.

 

The rough steps are:

 

  1. Install NetWitness to a local virtual machine using the DVD ISO (Use single file for vmdk rather than split)
  2. After startup edit /etc/default/grub
  3. Install ca-certificates via yum
  4. Add repo for Google and install a few more RPMs (https://cloud.google.com/compute/docs/instances/linux-guest-environment)
  5. Copy ISO to the VM (You can also use a Google storage bucket and gcsfuse in place of this step)
  6. Install Google SDK on your local machine (https://cloud.google.com/compute/docs/gcloud-compute/)
  7. Upload vmdk from deployed machine to Google Cloud Storage bucket
  8. Run import tool (Importing Virtual Disks  |  Compute Engine Documentation  |  Google Cloud )
  9. (Skip this step if you copied the ISO in step 5) Add gcsfuse
  10. (Skip this step if you copied the ISO in step 5) Use gcsfuse to mount the ISO
  11. Make a directory to mount the ISO
  12. Mount the ISO
  13. Remove existing ntp rpm (Skipping this step will cause bootstrap to fail)

 

  1. Use VMWare Workstation or vSphere to create a new virtual machine.  Follow sizing instructions here: Virtual Host Setup: Basic Deployment 
    1. Choose to install Operating System Later
    2. Adjust the VM to sizes needed
    3. Ensure you are using one file for the vmdk rather than splitting into multiple disks.  Converting split disks is not in scope for this blog
    4. For the CD/DVD ensure the option "Connected" is checked
    5. Select use ISO image and browse to the path of your 11.x DVD ISO.  Please note there are both DVD and USB ISOs.  The instructions provided here used the DVD ISO.
    6. Finish and power on the Virtual Machine
    7. Follow the prompts to install NetWitness
  2. Google has very specific instructions on what kernel arguments are allowed for imported, bootable images.  More details here: Importing Boot Disk Images to Compute Engine  |  Compute Engine Documentation  |  Google Cloud 
    1. You'll want to change your Grub command line arguments to exclude any references to splash screens or quiet 
    2. For the NetWitness 11.1 ISO I used the following for /etc/default/grub:
    3. GRUB_TIMEOUT=5

      GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"

      GRUB_DEFAULT=saved

      GRUB_DISABLE_SUBMENU=true

      GRUB_TERMINAL_OUTPUT="console"

      GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=netwitness_vg00/root rd.lvm.lv=netwitness_vg00/swap biosdevname=1 net.ifnames=0 rd.shell=0 console=ttys0,38400n8d"

      GRUB_DISABLE_RECOVERY="true"

  3. If DHCP did not automatically assign all network settings, assign gateway, ip and subnet in ifcfg file for the interface and ensure the machine has connectivity to the CentOS repos (https://www.cyberciti.biz/faq/howto-setting-rhel7-centos-7-static-ip-configuration/ )
  4. Run the following and accept any gpg keys if prompted.  The latest version of ca-certificates is required or the daisy converter service will fail when you run the import.
    1. yum install ca-certificates

  5. Add the Google yum repo
    1. vi  /etc/yum.repos.d/google-cloud.repo

    2. Paste contents below

      [google-cloud-compute]
      name=Google Cloud Compute
      baseurl=https://packages.cloud.google.com/yum/repos/google-cloud-compute-el7-x86_64
      enabled=1
      gpgcheck=1
      repo_gpgcheck=1
      gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
      https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

    3. Run command to clean up yum repos

      yum clean all

  6. Install the Google Cloud helper RPMs.  Permanently accept any GPG keys so they are stored.  Also install any prerequisite RPMs.  This will prevent errors during the conversion.
    1. yum install python-google-compute-engine

      yum install google-compute-engine-oslogin

      yum install google-compute-engine

  7. Copy the 11.x ISO (the same ISO you used to build) into /tmp via scp.  This will be used for mounting the local yum repo for bootstrap.  You can also use gcsfuse in place of this step, however we will not cover that here.
  8. Shutdown the VM and copy the vmdk to Google Cloud Storage bucket accessible to account used with the Google Cloud SDK.  Instructions can be found here: https://cloud.google.com/compute/docs/gcloud-compute/
  9. Run the import tool (Importing Virtual Disks  |  Compute Engine Documentation  |  Google Cloud )
    1. If your vmdk was named nw11.vmdk and your storage bucket is called netwitness the import command would be:

      gcloud compute images import nw11 --source-file gs://netwitness/nw11.vmdk --os centos-7

    2. This process can take up to a few hours
    3. Once the conversion is complete you will now have an image you can use to make NetWitness VMs
  10. Start the VM, switch to user root and mount the ISO that was copied to the VM before the conversion. The ISO I copied was 11.2, named rsa-11.2.0.0.3274.el7-dvd.iso
    1. su root

      mkdir /mnt/nw11gce

      mount -t iso9660 -o loop /tmp/rsa-11.2.0.0.3274.el7-dvd.iso /mnt/nw11gce

  11. Uninstall ntp and install the version on the NetWitness ISO so bootstrap will successfully complete.  Google installs a newer version of the ntp RPM.  The version NetWitness uses can be reinstalled from the ISO you just mounted in step 10.
    1. yum remove ntp

      rpm -e ntpdate

      rpm -Uvh /mnt/nw11gce/Packages/11.2.0.0/OS/ntpdate*.rpm

  12. Run nwsetup-tui to complete the install

 

You should now have a working NetWitness image you can build from.  One thing I have noticed is that during some kernel upgrades (kernels are included in service packs, patches and major versions of NetWitness software updates), additional arguments are added that can cause the instance to lose ssh connectivity and the software to not function correctly.  After any upgrade and BEFORE reboot I recommend checking to ensure additional kernel arguments have not been added.  I'd also recommend upgrading in a lab or small instance first, as well as taking a snapshot prior to upgrade, so you can return to a known good state if needed.

Hi Everyone,

The PDF compilations for RSA NetWitness Platform (Logs & Network) Version 11.2 are now available at the following link: RSA NetWitness Logs & Network 11.2.  This page is also accessible by navigating to the main RSA NetWitness Community and choosing Version 11.2 on the right hand side of the page.  

 

Once on that page, the links to the documents look like this:

Localized documents that were updated for Version 11.1 are posted in RSA Link for customers who speak Japanese, Spanish, German, and French. These are the locations.

I was recently working with Eric Partington, who asked if we could get the Autonomous System Numbers from a recent update to GEOIP.  I believe at one point this was a feed, but it had been deprecated.  After a little bit of research, I learned that an update had been made to the Lua libraries that allowed for the calling of a new API function named geoipLookup that would give us this information as well as some other information that might be of interest.  A few years ago, I painstakingly created a feed for my own use to map countries to continents.  I wish I had this function call back then.

 

The API call is as follows:

 

geoipLookup

-- Examples:
-- local continent = self:geoipLookup(ip, "continent", "names", "en") -- string
-- local country = self:geoipLookup(ip, "country", "names", "en") -- string
-- local country_iso = self:geoipLookup(ip, "country", "iso_code") -- string "US"
-- local city = self:geoipLookup(ip, "city", "names", "en") -- string
-- local lat = self:geoipLookup(ip, "location", "latitude") -- number
-- local long = self:geoipLookup(ip, "location", "longitude") -- number
-- local tz = self:geoipLookup(ip, "location", "time_zone") -- string "America/Chicago"
-- local metro = self:geoipLookup(ip, "location", "metro_code") -- integer
-- local postal = self:geoipLookup(ip, "postal", "code") -- string "77478"
-- local reg_country = self:geoipLookup(ip, "registered_country", "names", "en") -- string "United States"
-- local subdivision = self:geoipLookup(ip, "subdivisions", "names", "en") -- string "Texas"
-- local isp = self:geoipLookup(ip, "isp") -- string "Intermedia.net"
-- local org = self:geoipLookup(ip, "organization") -- string "Intermedia.net"
-- local domain = self:geoipLookup(ip, "domain") -- string "intermedia.net"
-- local asn = self:geoipLookup(ip, "autonomous_system_number") -- uint32 16406
function parser:geoipLookup(ipValue, category, [name], [language]) end

 

As you know, we already get many of these fields.  Meta keys such as country.src, country.dst, org.src, and org.dst are probably well known to many analysts and used for various queries.  Eric had asked for 'asn' and, because I had tried it previously with a feed, I wanted to include 'continent' as well.

 

So....I created a Lua parser to get this for me.  My tokens were meta callbacks for ip.src and ip.dst.

 

[nwlanguagekey.create("ip.src", nwtypes.IPv4)] = lua_geoip_extras.OnHostSrc,
[nwlanguagekey.create("ip.dst", nwtypes.IPv4)] = lua_geoip_extras.OnHostDst,

 

My intent is to build this parser to work on both packet and log decoders.  I had originally wanted to use another function call, but found this was not working properly on log decoders.  However, the meta callbacks of ip.src and ip.dst did work.  Now, with this in mind, I could leverage this parser on both packet and log decoders. :-)

 

The meta keys I was going to write into were as follows:

 

nwlanguagekey.create("asn.src", nwtypes.Text),
nwlanguagekey.create("asn.dst", nwtypes.Text),
nwlanguagekey.create("continent.src", nwtypes.Text),
nwlanguagekey.create("continent.dst", nwtypes.Text),

 

Since I was using ip.src and ip.dst meta, I wanted to apply the same source and destination meta for my asn and continent values.  

 

Then, I just wrote out my functions:

 

-- Get ASN and Continent information from ip.src and ip.dst
function lua_geoip_extras:OnHostSrc(index, src)
   local asnsrc = self:geoipLookup(src, "autonomous_system_number")
   local continentsrc = self:geoipLookup(src, "continent", "names", "en")

   if asnsrc then
      --nw.logInfo("*** ASN SOURCE: AS" .. asnsrc .. " ***")
      nw.createMeta(self.keys["asn.src"], "AS" .. asnsrc)
   end
   if continentsrc then
      --nw.logInfo("*** CONTINENT SOURCE: " .. continentsrc .. " ***")
      nw.createMeta(self.keys["continent.src"], continentsrc)
   end
end

 

function lua_geoip_extras:OnHostDst(index, dst)
   local asndst = self:geoipLookup(dst, "autonomous_system_number")
   local continentdst = self:geoipLookup(dst, "continent", "names", "en")

 

   if asndst then
      --nw.logInfo("*** ASN DESTINATION: AS" .. asndst .. " ***")
      nw.createMeta(self.keys["asn.dst"], "AS" .. asndst)
   end
   if continentdst then
      --nw.logInfo("*** CONTINENT DESTINATION " .. continentdst.. " ***")
      nw.createMeta(self.keys["continent.dst"], continentdst)
   end
end

 

This was my first time using this new API call and my mind was racing with ideas on how else I could use this capability.  The one that immediately came to mind was enriching meta when X-Forwarded-For or Client-IP meta existed.  If it did exist, it should be parsed into a meta key called "orig_ip" today or "ip.orig" in the future.  The meta key "orig_ip" is formatted as Text, so I need to account for that by determining the correct HostType.  We don't want to pass a domain name when we are expecting to pass an IP address.  I can do that by importing the functions from 'nwll'.

 

In the past, the only meta that could be enriched by GEOIP was ip.src and ip.dst (I have not tested ipv6.src or ipv6.dst).  Now, with this API call, I can apply the content of GEOIP to other IP-address-related meta keys.  I have attached the full parser to this post.

 

Hope this helps others out there in the community and as always, happy hunting.

 

Chris

Background Information:

  • v10.6.x had a method in the UI to add a standalone NW head server for investigation purposes (and to help with DR scenarios) using legacy authentication (static local credentials).  
  • v11.x appeared to have removed that capability, which was blocking some of the larger upgrades; however, the capability actually still exists - it is just not presented in the UI as it was in v10.6.
  • Having a DR investigation server also helps to provide continuous access to data for analysts during the major upgrade from v10.6.x to v11.2, which is incredibly beneficial.

 

Review the upgrade guide and the "Mixed Mode" notes at the link below for more details on the upgrade and running in mixed mode:

https://community.rsa.com/community/products/netwitness/blog/2018/10/18/running-rsa-netwitness-mixed-mode

 

If you spin up a DR v11.2 standalone NW server from the ISO/OVA you can connect it to an existing set of concentrators using local credentials (Note: DO NOT expect that Live or ESA will function as they do on the actual node0 NW server.  This method gets you a window into the meta for investigation, reporting and Dashboards only!)

 

Here are the steps you'll need to follow once you have your DR v11.2 NW server spun up:

 

Create local credentials to use for authentication with the concentrator(s) or broker(s) that you will connect to under

Admin > Service > <service> > Security

 

 

You will need to add some permissions to the aggregation role to allow the Event Analysis function to work:

Replicate the role and user to the other services that you will need to authenticate to.

 

Your 11.2 DR investigation head server can connect to a 10.6.6 Broker or Concentrator with the following:

 

Broker service > Explore

Select broker

Right-click and select Properties

Select add from the drop-down

Add the concentrators that need to be connected (as they were in 10.6).  Below are the ports that are required for the connection:

  • 50005 for Concentrators
  • 56005 for SSL to Concentrators
  • 50003 to Broker 
  • 56003 for SSL to Broker

 

device=<ip>:<port> username=<> password=<>

 

Click send.

 

You should get a successful connection and in the config section you will now see the aggregation connection setup:

 

Click Start aggregation and make sure Aggregate Autostart is checked:

 

With this DR investigation server in place, you can use the following process to help in upgrading from v10.6.6 to v11.2+:

 

Initial State:

 

Upgrade the new Investigation Head:

 

Investigators now can use the 11.2 head to investigate without interruption during the production NW head server upgrade.

 

Upgrade the primary (node0) NW head server and ESA:

Upgrade the decoder/concentrator pairs:

Note: an outage will occur here for investigation as the stacks are upgraded

Now you'll be running in v11.2 as you were in 10.6, with a DR investigation head server so that your Investigation and Events views remain accessible.

This post details some of the implications of running in a mixed-mode environment. For the purposes of this post, a mixed-mode environment is one in which some services are running on Security Analytics 10.6.x, and others are running on NetWitness 11.x.

 

Note: RSA strongly suggests upgrading your 10.x services to 11.x to match your NetWitness server version, but running in Mixed-Mode allows you to stage your upgrade, especially for larger environments.

 

If you run in a mixed-mode environment for an extended time, you may see or experience some or all of the following behaviors:

Overall Administration and Management Functionality

  • If you add any 10.6.x hosts, you must add them manually to the v11.x architecture.
    • There is no automatic discovery or trust establishment via certificates.
    • You need to manually add them through username and password.
  • In 11.x, a secondary or alternate NetWitness (NW) Server is not currently supported, though this may change for future NetWitness versions.
    • Only the Primary NW Server could be upgraded (which would become "Node0").
    • Secondary NW Servers could be re-purposed to other host types.
  • The Event Analysis View is not available at all in mixed mode, and will not work until ALL devices are upgraded to 11.x.

Mixed Brokers

If you do not upgrade all of your Brokers, the existing Navigate and Event Grid view will still be available.

Implications for ESA

If you follow the recommended upgrade procedure for ESA services, note the following:

  • During the ESA upgrade, the following mongo collections are moved from the ESA mongodb to the NW Server mongodb:
    • im/aggregation_rule.*
    • im/categories
    • im/tracking_id_sequence
    • context-wds/* // all collections
    • datascience/* // all collections
  • The upgrade process performs some reformatting of the data, so make sure to follow those procedures as described in the Physical Host Upgrade Guide and Physical Host Upgrade Checklist documents, available on RSA Link. One way to find these documents is to open the Master Table of Contents, where links are listed in the Installation and Upgrade section.

 

IMPORTANT! You MUST upgrade your ESA services at the same time you upgrade the NetWitness Server. If you do not, you will have to re-image all of the ESA services as new, and thus lose all of your data. Also, if you do not plan on updating your ESA services, you need to REMOVE them from the 10.6.x Security Analytics Server before you start your upgrade.

Hosts/Services that Remain on 10.6.x

  • If you add a 10.6.x host after you upgrade to 11.x, no configuration management is available through the NetWitness UI. You must use the REST API for this. Existing 10.6.x devices will be connected and manageable via 11.x -- as long as you do not remove any aggregation links.
  • You need to aggregate from 10.6.x hosts to 11.x hosts manually.
    • For example, for a Decoder on 10.6.x and a Concentrator on 11.x:
    • Same applies for any other 11.x service that is aggregating from a 10.6.x host.
  • If you have a secondary Security Analytics Server, RSA recommends that you keep it online to manage any hosts or services that still are running 10.6.x, until you have upgraded them all to 11.x. 

Hybrids

If you are doing an upgrade on a system that has hybrids, the communication with the hybrids will still be functional. The Puppet CA cert is used as the cert for the upgraded 11.x system, so the trust is still in place.

For example, if you have a system with a Security Analytics or NetWitness Server, an ESA service, and several hybrids, you can upgrade the NW Server and the ESA service, and communications with the hybrids will still work.

Recommended Path Away from Mixed-Mode

For large installations, you can upgrade services in phases. RSA recommends working "downstream." For example:

  1. For the initial phase (phase 1), upgrade the NW Server, ESA and Malware services. Also, upgrade at least the top-level Broker. If you have multiple Brokers, the suggestion is to upgrade all of them in phase 1.
  2. For phase 2, upgrade your concentrators, decoders, and so forth. The suggestion is to upgrade the concentrators and decoders in pairs, so they can continue communicating correctly with each other.

MuddyWater is an APT group whose targets have mainly been in the Middle East, such as the Kingdom of Saudi Arabia, the United Arab Emirates, Jordan and Iraq, with a focus on oil, military, telco and government entities.

 

The group is using spear phishing attacks as an initial vector. The email contains an attached Word document which tries to trick the user into enabling macros. The attachment's filename and its content are usually tailored towards the target, such as the language used.

 

In the below example, we will look at the behavior of the following malware sample:

SHA-256: bfb4fc96c1ba657107c7c60845f6ab720634c8a9214943b5221378a37a8916cd

MD5: 16ac1a2c1e1c3b49e1a3a48fb71cc74f

 

Filetype: MS Word Document

 

 

Endpoint Behavior

This specific malware sample is for an Arabic-speaking victim targeted in Jordan, where the filename "معلومات هامة.doc" translates to "important information.doc". Other variants contain content in Turkish, Pakistani ...

 

The file shows blurry text in Arabic, with a message telling the target to enable content (and therefore macros) to unlock the content of the document.

 

Once the user clicks on "Enable Content", we're able to see the following behaviors on RSA NetWitness Endpoint.

 

1- The user opens the file. In this case, the file was opened from the Desktop folder, but if it had come from their email, it would have shown "outlook.exe" instead of "explorer.exe"

 

2- The malware uses "rundll32.exe" to execute the dropped file (C:\ProgramData\EventManager.log), allowing it to evade detection

 

3- PowerShell is then used to decode the payload of another dropped file ("C:\ProgramData\WindowsDefenderService.ini") and execute it. Since the full arguments of the PowerShell command are captured, an analyst could use them to decode the content of the "WindowsDefenderService.ini" file for further analysis

 

4- PowerShell modifies the "Run" Registry key to run the payload at startup

 

5- Scheduled tasks are also created 

 

 

After this, the malware will only continue execution after a restart (this might be a layer of protection against sandboxes).

 

6- The infected machine is restarted

 

7- An additional PowerShell script "a.ps1" is dropped

 

8- Some of the Windows security settings are disabled (such as Windows Firewall, Antivirus, ...)

 

 

 

By looking at the network activity on the endpoint, we can see that powershell has generated a number of connections to multiple domains and IPs (possible C2 domains).

 

 

Network Behavior

To look into the network part in more details, we can leverage the captured network traffic on RSA NetWitness Network.

 

We can see, on RSA NetWitness Network, the communication from the infected machine (192.168.1.128) to multiple domains and IP addresses over HTTP that match what has been originating from powershell on RSA NetWitness Endpoint.

We can also see that most of the traffic is targeting "db-config-ini.php". From this, it seems that the attacker has compromised different legitimate websites, and the "db-config-ini.php" file is owned by the attacker.

 

Having the full payload of the session on RSA NetWitness network, we can reconstruct the session to confirm that it does in fact look like beaconing activity to a C2 server.

 

 

Even though the websites used might be legitimate (but compromised), we can still see suspicious indicators, such as:

  • POST request without a GET
  • Missing Headers
  • Suspicious / No User-Agent
  • High number of 404 Errors
  • ...

 

 

 

Conclusions

We can see how the attacker is using legitimate, trusted, and possibly white-listed modules, such as powershell and rundll32, to evade detection. The attacker is also using common file names for the dropped files and scripts, such as "EventManager" and "WindowsDefenderService" to avoid suspicion from analysts.

 

As shown in the below screenshot, even though "WmiPrvSE.exe" is a legitimate Microsoft file (it has a valid Microsoft signature, as well as a known trusted hash value), due to its behavioral activity (as shown in the Instant IOC section) we're able to assign it a high behavioral score of 386. It should also be noted that any of the suspicious IIOCs that have been detected could trigger a real-time alert over Syslog or E-Mail for early detection, even though the attacker is using advanced techniques to avoid detection.

 

 

 

Similarly, on the network, even though the attacker is leveraging (compromised) legitimate sites and using standard known protocols (HTTP) and encrypted payloads to avoid detection and suspicion, it is still possible to detect those suspicious behaviors using RSA NetWitness Network and look for indicators such as POST with no GET, suspicious user agents, missing headers, or other anomalies.

 

 

 

 

Indicators

The following are IOCs that can be used to check whether activity from this APT currently exists in your environment.

This list is not exhaustive and is only based on what has been seen during this test.

 

Malware Hash

SHA-256: bfb4fc96c1ba657107c7c60845f6ab720634c8a9214943b5221378a37a8916cd

MD5: 16ac1a2c1e1c3b49e1a3a48fb71cc74f

 

Domains

  • wegallop.com
  • apidubai.ae
  • hmholdings360.co.za
  • alaqaba.com
  • triconfabrication.com
  • themotoringcalendar.co.za
  • nakoserum.com
  • mediaology.com.pk
  • goolineb2b.com
  • addorg.org
  • mumtazandbrohi.com
  • pmdpk.com
  • buy4you.pk
  • gcmbdin.edu.pk
  • mycogentrading.com
  • ipripak.org
  • botanikbahcesi.com
  • dailysportsgossips.com
  • ambiances-toiles.fr
  • britishofficefitout.com
  • canbeginsaat.com

 

IP Addresses

  • 195.229.192.139
  • 185.56.88.14
  • 196.40.100.202
  • 45.33.114.180
  • 173.212.229.48
  • 54.243.123.39
  • 196.41.137.185
  • 209.99.40.223
  • 192.185.166.227
  • 89.107.58.132
  • 86.107.58.132
  • 192.185.166.225
  • 192.185.75.15
  • 94.130.116.248
  • 192.169.82.62
  • 86.96.202.165
  • 196.40.100.204
  • 192.185.166.22
  • 5.250.241.18
  • 104.18.54.26
  • 217.160.0.2
  • 192.185.24.71
  • 185.82.222.239
