RSA NetWitness Platform

I leverage many sources to get ideas for spotting anomalies in an environment. One of those sources is the following Twitter account: Jack Crook (@jackcr). @jackcr provides many ideas around methods and approaches to separate known from unknown, or common from rare.

 

This post inspired me to see if something similar could be implemented using RSA NetWitness Platform.

 

https://twitter.com/jackcr/status/993561834375598080

 

The basis for the report was to look for outbound communications where a domain has only one user agent accessing it (over a period of time) and that user agent contains 'mozilla'.

 

After a few tests in the lab, this is the rule that was developed.

 

name: "DomainsWithOneUserAgent(1)"
description: ""
path_for_export: "rsa-custom/rareUaDomain/DomainsWithOneUserAgent(1)"
query {
data_source_type: NWDB
netwitness_query {
select: "alias.host,countdistinct(client),distinct(client),org.dst,countdistinct(ip.src)"
where: "alias.host exists && client exists && direction = \'outbound\' && client contains \'mozilla\'"
group_by_keys: "AGGREGATE"
order_by_keys {
column_name: "countdistinct(client)"
sort_order: ASCENDING
}
order_by_keys {
column_name: "alias.host"
sort_order: ASCENDING
}
limit_results_count: 100
then_clause: "max_threshold(1,countdistinct(client))"
agg_session_threshold: 0
group_by: "alias.host"
group_by: "org.dst"
alias_names: ""
}
data_source_name: ""
}

We limit the returned results to the top 100 and apply a max threshold of 1 on countdistinct(client), restricting the output to domains that have only one unique user agent accessing them over the reporting time frame.
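Once the report flags a domain, a quick way to validate the hit is a drill in Investigation; a sketch of such a query, using a hypothetical domain name:

alias.host = 'rare-domain.example.com' && direction = 'outbound' && client exists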

 

Results look like this (lab results)

The report is included at the GitHub link below. As always, I'm curious to see how this performs on a larger network, to test its validity and whether tweaks are necessary. If you have any feedback, please let me know.

 

GitHub - epartington/rsa_nw_re_useragent_domain_rare 

 

Feedback always appreciated

 

Happy Hunting

Security Monitoring is no longer in its infancy, and most organizations have some level of monitoring in place today. So the question arises: if this is in place, why do we continue to see organizations failing to secure their networks and protect what matters most to their business?

 

In reality, there is no single reason for these breaches, nor is there a silver bullet for curing the problem. If you had a chance to listen to or watch the keynote from this year's RSA Conference, delivered by RSA's President Rohit Ghai, you'll recall that he said we have to look to the silver linings and see where small changes made across the Security Monitoring arena can add up to significant overall improvements to our security.

 

There is sometimes a perception that deploying multiple security technologies will protect an organization.  In several recent discussions it's apparent that organizations continue to experience major breaches even with technology in place.  Sometimes they simply have the wrong technology.  Other times they have the right technology, but they're not actively using it or using it to its full potential.  The point is that it is less about what technology you have in place and more about what you actually do with it.  We've seen a number of examples where smaller security teams excel purely by knowing their own environment and having a thorough understanding of their tools, capabilities and making the most of what they have been able to invest in.  

 

This is indicative of another issue: skill shortages and finding the right security staff. It’s not necessarily about having the perfect team from day 1, but it’s about growing their skills in-house to make sure they know what they are defending (and why).  This involves having a development path to increase the organization’s Security Operations Maturity.

Knowing your own threat landscape and what gaps you have in threat detection are crucial in a modern Intelligence-led Security Operations Center.  The fact is that understanding your own network landscape is going to be crucial when you are defending it against the most sophisticated attackers.

 

In short, what we are saying here is that it is incredibly difficult to develop a SOC or any other Security Monitoring capability that will be effective from day 1. It is all about the journey. SOC Managers, CISOs, CIOs and others have to identify what is important to them and develop a plan which will provide the enhancements in capabilities (tools, technologies and procedures) and ensure that these are supported both financially and by metrics.  This includes having a roadmap of where you want your Security Monitoring program to grow and being able to test how well the team is performing via Red Team engagements as well as Controlled Attack and Response Exercises.

 

Join us on our upcoming webinar next month on June 12th to learn more.  We will discuss this journey with one of our customers who has taken this exact approach in building and developing their team into one of the most skilled Security Operations Centers that we’ve seen to date.   

 

Click here to register.

 

I wanted to give a special thanks to Azeem Aleem, Gareth Pritchard and David Gray for their contributions to this blog and upcoming webinar. 

The http_lua_options file has many functions that can be enabled and disabled to ensure proper parsing of your traffic.  One of the functions that is not listed in the OOTB options file is the browserprint function.  This is often deployed during RSA Incident Response engagements to give a bit more detail about the headers in HTTP sessions and their order/occurrence.

 

To enable browserprint, do the following:

 

Edit the http_lua_options file (on the decoder):

 

function browserprint()
--[=[
"Browserprint" : default FALSE
Whether to register a "fingerprint" of the browser based upon the specific headers
seen in the request. The format of this browserprint value is:
Position 1 is HTTP version: 0 = HTTP/1.0, 1 = HTTP/1.1
Remaining positions are in order that the header was seen. Only
the below listed headers are included in the fingerprint:
1 = accept / 2 = accept-encoding
3 = accept-language / 4 = connection
5 = host / 6 = user-agent
Example "15613":
HTTP/1.1
HOST:
USER-AGENT:
ACCEPT:
ACCEPT-LANGUAGE:
(other headers may have appeared between those headers)
The usefulness of this meta is not necessarily in determining "good" or "bad"
browser fingerprints. Rather, it is more useful to look for outliers. For
example if the majority of values are 15613 with just a few being 15361, then
the sessions with 15361 may be worth investigation.
--]=]
return true
end

 

Next, restart the decoder service.

 

On your index-concentrator-custom.xml file (on the concentrator), add the following key:

<key description="Browser Print" format="Text" level="IndexValues" name="browserprint" defaultAction="Closed" valueMax="500000" />

 

Next, restart the concentrator service.

 

Update the ESA Explore setting below to make sure ESA sets the values as string[] (array) and not string.

 

Admin > ESA > Explore >
/workflow/source/netGenAggregationSource/ArrayFieldNames

Add browserprint to the end of the line
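The resulting value is simply the existing comma-separated list of meta key names with browserprint appended; a minimal sketch (the existing entries are shown only as a placeholder):

/workflow/source/netGenAggregationSource/ArrayFieldNames = <existing keys>,browserprint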

 

Now you have the ability to generate the browserprint numbers as new sessions arrive at the decoder.

 

Next question... what do those numbers actually mean in the browserprint key?

 

From the code above this is the explanation:

Registers a "fingerprint" of the browser based upon the specific headers seen in the request. The format of this browserprint value is:

 

Position 1 is HTTP version: 0 = HTTP/1.0, 1 = HTTP/1.1

 

Remaining positions are in the order that the header was seen. Only the headers listed below are included in the fingerprint:
1 = accept
2 = accept-encoding
3 = accept-language
4 = connection
5 = host
6 = user-agent


Example "15613":


1 - HTTP/1.1
5 - HOST:
6 - USER-AGENT:
1 - ACCEPT:
3 - ACCEPT-LANGUAGE:
(other headers may have appeared between those headers)


The usefulness of this meta is not necessarily in determining "good" or "bad" browser fingerprints. Rather, it is more useful to look for outliers. For example if the majority of values are 15613 with just a few being 15361, then the sessions with 15361 may be worth investigation.
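For example, pivoting into one of the rare values from Investigation could look like this (the value is illustrative):

browserprint = '15361' && direction = 'outbound'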

 

What does this look like with more traffic?

 

Simply by looking over a large set of data, you can start to see that certain patterns are uncommon and unusual. Those could be investigated to see what they are and whether they are interesting.

 

Let's take this a step further: can we use this browserprint data point in combination with other meta keys to find more specific, unusual communication patterns?  Browserprint allows us to look at the 'crowd' of common HTTP headers and their order, and determine outliers.  What if that was combined with user agent information, destination hosts, outbound traffic direction, and the number of source IP addresses?

 

This could give us a method to see rare combinations of source IP to destination host, outbound, with the same rare browserprint number (along with the destination org and the de-duplicated list of unique user agents that match the HTTP communications).

 

The Report and Rule syntax

name: "browserprintUA-Rare-Mozilla"
description: ""
path_for_export: "rsa-custom/browserprint/browserprintUA-Rare-Mozilla"
query {
data_source_type: NWDB
netwitness_query {
select: "browserprint,count(browserprint),count(ip.src),distinct(client),distinct(alias.host),distinct(org.dst)"
where: "browserprint exists && client exists && direction=\'outbound\' && client contains \'mozilla\'"
group_by_keys: "AGGREGATE"
order_by_keys {
column_name: "count(browserprint)"
sort_order: ASCENDING
}
order_by_keys {
column_name: "count(ip.src)"
sort_order: ASCENDING
}
limit_results_count: 100
then_clause: "max_threshold(50,count(ip.src))"
agg_session_threshold: 0
group_by: "browserprint"
alias_names: ""
}
data_source_name: ""
}

 

name: "browserprintUA-Rare-NonMozilla"
description: ""
path_for_export: "rsa-custom/browserprint/browserprintUA-Rare-NonMozilla"
query {
data_source_type: NWDB
netwitness_query {
select: "browserprint,count(browserprint),count(ip.src),distinct(client),distinct(alias.host),distinct(org.dst)"
where: "browserprint exists && client exists && direction=\'outbound\' && not(client contains \'mozilla\')"
group_by_keys: "AGGREGATE"
order_by_keys {
column_name: "count(browserprint)"
sort_order: ASCENDING
}
order_by_keys {
column_name: "count(ip.src)"
sort_order: ASCENDING
}
limit_results_count: 100
then_clause: "max_threshold(50,count(ip.src))"
agg_session_threshold: 0
group_by: "browserprint"
alias_names: ""
}
data_source_name: ""
}

 

When put in a report and run in a lab environment, you get something like this when broken down between mozilla and non-mozilla user agents.  The results are sorted by least occurrence, with a max threshold of 50 to make sure we focus on the rare browserprint combinations.

 

Report and rules are listed here to test and provide feedback.

GitHub - epartington/rsa_nw_re_browserprint-rare: Browserprint http_lua_options for rarity 

 

Happy hunting

Overview

This version will now parse over 1,400 events from the devices; however, the parser does not parse audit events that are generated in the "Administration-->Security" user interface.  Those events are handled by the Global Audit/Global Notification settings and parsed by the CEF parser.  However, if you make modifications to the "Security" settings on an individual device, that event will be parsed by this parser.

This version was developed and tested on 10.6.2.0 using available log samples from 10.4.x thru 10.6.2.0.

 

Improvements

New Headers have been added to accommodate the log format change in 10.5.1 and above.

Logs from the Virtual Log Collector are now parsed, particularly Windows Collection Errors.

Error/Failure Logs are consolidated under the Event Category Name of "System.Errors"

Puppet Logs are parsed

Collectd Logs are parsed

Added "maxValues" kb 00031300 modification

Custom Index reduction in size and maxValues adjusted accordingly

Overall cleanup of some variable/index clutter

Improved accuracy for parsing of Query and Queue Times

Duration added for Query Times; they are now converted to seconds under the "duration.time" metakey

 

Contents

This package includes:

   Custom Log parser

   Custom Index for Concentrator*

   Custom Table Map*
   Event Categories Spreadsheet

  

*I have revised the custom index and table map to reflect the new changes in the default settings for 10.6.2.  If you are using a version prior to 10.6, you may need to add some additional index keys to the custom index.

 

Parser Content

Content, such as reports and dashboards, written by me for this parser will be published separately and links will be added here.  At the time of this writing, content for index operations, queries, cancelled queries, system errors, configuration changes, security changes, service restarts, and content updates for feeds/parsers is being tested on an enterprise system.  These will start appearing in the next few days.

 

Report:  ValueMax Has Been Reached 

 

Installation

Log Decoder

Remove the prior version of the parser

  1. SSH into each log decoder as "root" that has the prior version.
  2. Remove the old parser directory
    rm -r /etc/netwitness/ng/envision/etc/devices/rsasecurityanalytics/
    You should see the prompts like below:
    [root@logdecoder60 SA_Logs]# rm -r /etc/netwitness/ng/envision/etc/devices/rsasecurityanalytics/
    rm: descend into directory `/etc/netwitness/ng/envision/etc/devices/rsasecurityanalytics'? y
    rm: remove regular file `/etc/netwitness/ng/envision/etc/devices/rsasecurityanalytics/rsasecurityanalytics.ini'? y
    rm: remove regular file `/etc/netwitness/ng/envision/etc/devices/rsasecurityanalytics/rsasecurityanalyticsmsg.xml'? y
    rm: remove directory `/etc/netwitness/ng/envision/etc/devices/rsasecurityanalytics'? y

Download and unzip parser

  1. Download the parser file "rsasecurityanalytics_2.3.99.zip" from the bottom of this page.
  2. Unzip the file using Winzip, or 7zip.
    The unzipped parser file name will be "rsasecurityanalytics.envision"

Upload the parser on the Log Decoder

  1. Login to the Web Interface as "admin" or user who is a member of the "Administrators" Role.
  2. Choose "Administration-->Services" from the navigation menu in the upper left corner of the screen.
  3. Locate the Log Decoder and click on the gear icon, located at the far right of the screen.
  4. Hover over "View", then click "Config".
  5. Click on the "Parsers" Tab.
  6. Click on the "Upload" icon in the upper left portion of the window.
  7. Click on the "+" in the upper left of the "Upload Parsers" dialog box.
  8. Navigate to the folder where the "rsasecurityanalytics.envision" is located and select it.  Click "Open"
  9. Click on "Upload"
  10. Click on the "X" in the upper right corner of the dialog box or click "Cancel"

Remove prior version custom table map entries

  1. On the same screen as above, Click on the "Files" Tab
  2. On the left side of the screen click on the dropdown and select "table-map-custom.xml".
  3. Locate the section related to the custom table entries for the log parser typically labelled
    RSA Security Analytics Log Parser Revision 2.1.63 xx/xx/xx
  4. Remove that section.
  5. Replace with new table map entries from the table-map-custom.xml file.
  6. Click "Apply"

Load the new log parser and custom table map.

  1. On the same screen as above, click on "Config" just above the "App Rules" Tab.
  2. Click on "System"
  3. Click on "Stop Capture" at the top left of the screen.
  4. Wait for capture to stop.
  5. Click on "Shutdown Service" at the top center of the screen.
  6. On the "Confirm Shutdown" dialog, type "RSA Security Analytics Parser update"
  7. Click "OK"

Concentrator

Update The Concentrator Custom Index

  1. Login to the Web Interface as "admin" or user who is a member of the "Administrators" Role.
  2. Choose "Administration-->Services" from the navigation menu in the upper left corner of the screen.
  3. Locate the Concentrator and click on the gear icon, located at the far right of the screen.
  4. Hover over "View", then click "Config".
  5. Click on the "Files" Tab
  6. On the left side of the screen click on the dropdown and select "index-concentrator-custom.xml".
  7. Locate the section related to the custom table entries for the log parser typically labelled
    RSA Security Analytics Log Parser Revision 2.1.63 xx/xx/xx
  8. Remove that section.
  9. Replace with new custom index entries from the index-concentrator-custom.xml file.
  10. Click "Apply"

Load The New Custom Index.

  1. On the same screen as above, click on "Config" just above the "Correlation Rules" Tab.
  2. Click on "System"
  3. Click on "Stop Aggregation" at the top left of the screen.
  4. Wait for aggregation to stop.
  5. Click on "Shutdown Service" at the top center of the screen.
  6. On the "Confirm Shutdown" dialog, type "RSA Security Analytics Parser update"
  7. Click "OK"

ALL Appliances

Configure Rsyslog to Forward Logs

  1. SSH into each NetWitness Appliance.
  2. Modify the /etc/rsyslog.conf file.  
    vi /etc/rsyslog.conf
  3. Press the letter "i" or the "Insert" key.  You should see "-- INSERT --" at the bottom left of your screen.
  4. Scroll to the bottom of the file and look for the following line:
    #*.* @@remote-host:514
  5. Remove the "#" and change "remote-host" to the destination Log Decoder or Virtual Log Collector (VLC).
    *.* @@<Log Decoder or VLC IP Address Here>:514
  6. Press the  "ESC" key
  7. You should see a colon ":" in the lower left of the screen.
  8. Save the file by typing ":wq"
    :wq
  9. Restart the Rsyslog service.
    service rsyslog restart
  10. Rsyslog is now forwarding logs to the Log Decoder or VLC.

The options file for the HTTP_lua parser has been updated recently.  The latest addition is an interesting function found at the bottom of the file, called customHeaders().

 

The current version of the file is noted at the top of the options file:

 

-- 2018.05.02.1

 

function customHeaders()
--[=[
"Custom Headers" : default NONE

Beware of excessive duplication, which will impact performance and retention. Meta
registered will be in addition to, not replacement of, standard meta registration.
In other words, if you specify "user-agent" headers be registered to key "foo", it
will still also be registered to alias.host (or alias.ip / alias.ipv6 if appropriate).

Syntax is,

["header"] = "key",

Where,

"header" is the desired HTTP header in lowercase. Do not included spaces, colons, etc.

"key" is the desired meta key with which to register the value of that header

Key names must be 16 characters or less, and consist only of alphanumeric, dots, and
hyphens. Keys specified that do not meet these requirements will be modified in order
to conform.

Keys listed here are registered as format="Text". Don't use keys indexed in other formats.

--]=]
return {
--["origin"] = "referer",
}
end

 

That addition allows you to capture a specific header by name and write its value to a meta key.

 

Maybe your use case was to grab Pragma or Proxy-Connection or X-Cache or a custom header that specific malware was using (maybe cookie?). 
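A minimal sketch of what that could look like in the function's return table (the meta key names here are purely illustrative; remember they must be 16 characters or less and will be registered as Text):

return {
["pragma"] = "http.pragma",           -- register Pragma header values
["proxy-connection"] = "proxy.conn",  -- register Proxy-Connection header values
["x-cache"] = "x.cache",              -- register X-Cache header values
}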

 

These values may already be extracted into another set of advanced keys from the options file (such as http response/request header or unique http request/response header), but this gives you a targeted method to grab specific headers and put them into specific keys, reducing meta bloat and capturing just the data you are looking for.

 

As always, don't subscribe to any of the options files from Live; deploy them directly from Live instead.  Otherwise, once an update like this is pushed out, it will overwrite your custom version and changes.  Download the new version, diff it offline, and make the changes that are required until the product UI catches up with these options features.

One of the major new features found in RSA NetWitness Platform version 11.1 is RSA NetWitness Endpoint Insights.  RSA NetWitness Endpoint Insights is a free endpoint agent that provides a subset of the full RSA NetWitness Endpoint 4.4 functionality as well as the ability to perform Windows log collection.  Details of how to configure RSA NetWitness Endpoint Insights can be found here: https://community.rsa.com/docs/DOC-86450

 

Additionally, as of RSA NetWitness Platform version 11.0, those with both RSA NetWitness Log & full RSA NetWitness Endpoint components have the option to start bringing the two worlds together under a unified interface.  This integration strengthens in version 11.1, and will continue to do so through version 11.2 and beyond.   Details of this integration can be found here: Endpoint Integ: RSA Endpoint Integration

 

I created the content below to complement the endpoint scan data (RSA NW Endpoint and RSA NW Endpoint Insights) as well as the tracking data (RSA NW Endpoint + meta integration into 11.X).  As you leverage this content, please let me know if you have any questions, and please post improvements and iterations as well.

 

Note:  If using the RSA NW Endpoint Insights agent (vs the full RSA NW Endpoint 4.4 agent) full process tracking data is not available. The process-centric content below will still work, but keep in mind that the process data reported is only a snapshot in time based on endpoint scan schedules and will not capture any process events in between scans.  

 

Content Summary:

Autoruns -  Outliers Report & Dashboard
Autoruns & Scheduled Tasks launching from or arguments containing AppData\Local\Temp
Autoruns & Scheduled Tasks launching from root of \ProgramData
Autoruns & Scheduled Tasks invoking Command Shell (cmd.exe or powershell.exe)
Autoruns & Scheduled Tasks invoking wscript.exe or cscript.exe
Autoruns & Scheduled Tasks invoking .vbs, .bat, .hta, .ps1 scripts
Autoruns - Rarest HKCU.../Run and /RunOnce keys
Processes & Files - Outliers Report & Dashboard
Rarest Child Processes of Web Server Processes
Rarest Parent Processes of cmd.exe
Rarest Parent Processes of powershell.exe
Rarest Processes running from AppData\Local\ or AppData\Roaming
Rarest Executables in Root of ProgramData
Rarest Executables in Root of C:\
Rarest Executables in Root of Windows\System32
Rarest Company Headers in Files
Rarest Code Signing CN in Files
ESA Rules
Alert: Scheduled Task running out of AppData\Local\Temp
Alert: Scheduled Tasks running cmd.exe or powershell.exe (with Whitelist expectation)
Alert: Scheduled Tasks running cscript.exe or wscript.exe (with Whitelist expectation)
Alert: Windows Reserved Process Names Running From Suspicious Directory
Alert: Process Running from $RECYCLE.BIN
Meta & Column Groups
1 x Meta Group:  Scan and Log Data
7 x Column Groups:  NWEndpoint [Autorun/DLL/File/Machine/Process/Service/General] Analysis

 

Screenshots

Dashboards

Meta Group

 

Column Group (eg. Process Analysis)

Column Group (eg. Autoruns and Tasks)

One of the major new features found in RSA NetWitness Platform version 11.1 is RSA NetWitness Endpoint Insights.  RSA NetWitness Endpoint Insights is a free endpoint agent that provides a subset of the full RSA NetWitness Endpoint 4.4 functionality as well as the ability to perform Windows log collection.  Details of how to configure RSA NetWitness Endpoint Insights can be found here: https://community.rsa.com/docs/DOC-86450

 

Additionally, as of RSA NetWitness Platform version 11.0, those with both RSA NetWitness Log & full RSA NetWitness Endpoint components have the option to start bringing the two worlds together under a unified interface.  This integration strengthens in version 11.1, and will continue to do so through version 11.2 and beyond.   Details of this integration can be found here: Endpoint Integ: RSA Endpoint Integration 

 

The 05/16/2018 RSA Live update added 4 new reports to take advantage of the Endpoint Scan Data collected by either the free RSA NetWitness Endpoint Insights agent, or the full RSA NetWitness Endpoint 4.4 meta integration (search "Endpoint" in RSA Live):

 

 

Use these reports to gain summarized visibility into endpoints, and to prioritize hunting efforts through outlier/stack analysis.  Outliers are usually worth gaining visibility into and understanding, particularly those related to persistence techniques and post-exploit activities commonly used by adversaries.  While not every outlier implies something bad is happening, this type of analysis tends to be fruitful, particularly as you increase the accuracy of rules over time through additional whitelist logic.

 

Report #1 Endpoint Scan Data Autorun and Scheduled Task Report (Outliers)

Outlier (bottom N) reporting of a subset of suspicious autoruns and scheduled tasks, containing the tables below.

 

Rarest Autoruns/Tasks in AppData/X and ProgramData root folders across environment (rarity among locations commonly used by malware)

Rarest Autorun registry keys across the environment

Enumerate all Autoruns/Tasks Invoking shells or scripts  (some software will do this legitimately, but should be more or less consistent across an enterprise with common images - look specifically at the launch arguments for signs of bad behavior)

 

Eg. Rarest Autoruns invoking command shells table:

 

Report #2 Endpoint Scan Data File and Process Outliers Report

Predominantly outlier (bottom N) reporting of contextually interesting processes, containing the tables below.

 

Rarest parent processes of powershell.exe and cmd.exe (this should be fairly uniform across an organization based on common software distribution - outliers become worthy of a look)

Rarest child processes of web server processes (looking for anomalous process execution that could indicate presence of a webshell)

Rarest Code Signing Certificate CNs 

Windows Processes with Unexpected Parent Processes (based on https://digital-forensics.sans.org/media/poster_2014_find_evil.pdf), looking for non-typical mismatches of Windows child/parent processes

 

Eg. Rarest child processes of web server processes table:

 

Report #3 Endpoint Scan Data Host Report 

This report takes an endpoint hostname as input.  It will enumerate all scan data (eg. processes, autoruns, machine details, files, etc. collected over a period of time).  NOTE:  This data also lives directly in the NW 11.1 UI under the "Hosts" section in a much nicer layout if you want it at-a-glance.

 

Eg. Report alternative in 11.1 - Hosts view:

 

Eg. Endpoint Scan Data Host Report:

 

Report #4 Endpoint Machine Summary Report

A summary of the Endpoint deployment in an environment, including OS breakdown, and NW Endpoint version breakdown.  NOTE:  This data also lives directly in the NW 11.1 UI under the "Hosts" section if you want it at a glance:

 

Eg. Report alternative in 11.1 - Hosts view:

 

Eg. Endpoint Summary Report:

A question came from a customer about a recent 0-day Doublekill (Byte Nibble Obfuscation) yara rule that they were trying to implement with RSA NetWitness. 

 

Challenge accepted !

 

First thing was to locate the yara signature in question:

c0d3inj3cT on Twitter: "Very interesting collection of Yara hunting rules to discover some of the latest techniques here… 

 

Specifically this signature:

yara-rules/RTF_Byte_Nibble_Obfuscation.rule at master · InQuest/yara-rules · GitHub 

 

Which looks like this:

rule RTF_Byte_Nibble_Obfuscation_method1
{
strings:
$magic = {7b 5c 72}
$update = "\\objupdate" nocase
$data = "\\objdata" nocase
$nibble = /([A-Fa-f0-9]\\'[A-Fa-f0-9]{4}){4}/
condition:
$magic in (0..30) and all of them and #nibble > 10
}

rule RTF_Byte_Nibble_Obfuscation_method2
{
strings:
$magic = {7b 5c 72}
$nibble = /\\objupdate.{0,1024}\\objdata.{0,1024}([A-Fa-f0-9]\\'[A-Fa-f0-9]{4}){2}/
condition:
$magic in (0..30) and all of them
}

How can this be applied to RSA NetWitness?
The Malware Analysis service with 10.6.x, or standalone with 11.x, can leverage custom Yara signatures following this configuration:
Investigation and Malware Analysis User Guide for Version 11.0 

Start at page 172 for working with custom Yara content.

The current Yara version on the MA service is 3.7, which is being updated in the docs (the reference to 1.7 is incorrect):
[root@nw11malware ~]# yara -v
3.7.0

Now we need to format the Yara rule so that the MA (Malware Analysis) service loads the signature into the Yara library and runs it against files seen by the appliance.
These are the additional items to be added to each Yara signature section (examples):
meta:
iocName = "FW.ecodedGenericCLSID"
fileType = "WINDOWS_PE"
score = 25
ceiling = 100
highConfidence = false

The end result is below. (The rules are doubled, as I wasn't sure whether the file would be presented to the engine as a PE or an MS Office document.)
rule RTF_Byte_Nibble_Obfuscation_method1
{
meta:
iocName = "RTF_Byte_Nibble_Obfuscation_method1"
fileType = "MS_OFFICE"
score = 85
ceiling = 100
highConfidence = true

strings:
$magic = {7b 5c 72}
$update = "\\objupdate" nocase
$data = "\\objdata" nocase
$nibble = /([A-Fa-f0-9]\\'[A-Fa-f0-9]{4}){4}/
condition:
$magic in (0..30) and all of them and #nibble > 10
}

rule RTF_Byte_Nibble_Obfuscation_method2
{
meta:
iocName = "RTF_Byte_Nibble_Obfuscation_method2"
fileType = "MS_OFFICE"
score = 85
ceiling = 100
highConfidence = true

strings:
$magic = {7b 5c 72}
$nibble = /\\objupdate.{0,1024}\\objdata.{0,1024}([A-Fa-f0-9]\\'[A-Fa-f0-9]{4}){2}/
condition:
$magic in (0..30) and all of them
}

rule RTF_Byte_Nibble_Obfuscation_method1_PE
{
meta:
iocName = "RTF_Byte_Nibble_Obfuscation_method1_PE"
fileType = "WINDOWS_PE"
score = 80
ceiling = 100
highConfidence = true

strings:
$magic = {7b 5c 72}
$update = "\\objupdate" nocase
$data = "\\objdata" nocase
$nibble = /([A-Fa-f0-9]\\'[A-Fa-f0-9]{4}){4}/
condition:
$magic in (0..30) and all of them and #nibble > 10
}

rule RTF_Byte_Nibble_Obfuscation_method2_PE
{
meta:
iocName = "RTF_Byte_Nibble_Obfuscation_method2_PE"
fileType = "WINDOWS_PE"
score = 80
ceiling = 100
highConfidence = true

strings:
$magic = {7b 5c 72}
$nibble = /\\objupdate.{0,1024}\\objdata.{0,1024}([A-Fa-f0-9]\\'[A-Fa-f0-9]{4}){2}/
condition:
$magic in (0..30) and all of them
}

Save that in a file named something like RTF_Byte_Nibble_Obfuscation.yara

Follow the instructions in the doc to put the file in the correct directory so that it gets added to the Yara section
(again, the path is being updated for 11.x as it changed from the 10.6 paths - doc update coming)

[root@TESTHOST yara]# pwd
/var/netwitness/malware-analytics-server/spectrum/yara/
[root@TESTHOST yara]# ls *.yara
rsa_mw_pdf_artifacts.yara rsa_mw_pe_artifacts.yara rsa_mw_pe_packers.yara

This is where you can drop the yara signature to do any more work on it, then move it to the watch/ folder to import it

Once the import is successful the rule will show like this

[root@nw11malware yara]# ls
error rsa_mw_pdf_artifacts.yara rsa_mw_pe_packers.yara watch
processed rsa_mw_pe_artifacts.yara RTF_Byte_Nibble_Obfuscation.yara

If there are errors then the rule ends up in error/

The rules should be available in MA service UI (Admin > Service > MA > config > IOCs > Yara)


You can see your custom yara rules listed along with the score we assigned and the type of file it will match on


Find a sample to test ... like this one
https://www.hybrid-analysis.com/sample/10ceb5916cd90e75f8789881af40287c655831c5086ae1575b327556b63cdb24 

After a sign up and vetting process you can download the sample as bin.gz

I transferred it as-is to the MA service so that I could uncompress it, rename it to .rtf, zip it, and add the password 'infected' so that it can be picked up for analysis.

scp it over to the MA service (if you don't have the file upload directory exposed via NFS)

move the file to this directory
cd /var/netwitness/malware-analytics-server/spectrum/infectedZipWatch

Install zip
yum install zip

Ungzip the sample
gunzip 10ceb5916cd90e75f8789881af40287c655831c5086ae1575b327556b63cdb24.bin.gz

Rename from .bin to .rtf
mv 10ceb5916cd90e75f8789881af40287c655831c5086ae1575b327556b63cdb24.bin 10ceb5916cd90e75f8789881af40287c655831c5086ae1575b327556b63cdb24.rtf

Zip the renamed sample (note that the archive name comes first)
zip -e 10ceb5916cd90e75f8789881af40287c655831c5086ae1575b327556b63cdb24.rtf.zip 10ceb5916cd90e75f8789881af40287c655831c5086ae1575b327556b63cdb24.rtf

Use password of 'infected'

Move to watch/
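For example (assuming the watch folder sits directly under the infectedZipWatch directory used above):

mv 10ceb5916cd90e75f8789881af40287c655831c5086ae1575b327556b63cdb24.rtf.zip watch/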

This will now get processed by the file watcher and show up in the MA UI when processed like this


The user is fileshare as that was where it was picked up from.

You can open up the report and see the details

Which looks like this


Opening that up looks like this

At the top are the sandbox related items from Threatgrid


The Yara results are shown in the static analysis section further down


Which shows us the signature that fired on this sample (the MS_OFFICE one, not the PE version of the Yara sig)

You can see the potential IOC listed in the IOC Summary tab


If this type of file came across the wire and matched the criteria to pull it into MA, and you had the license to enable automatic analysis, then files like this would be automatically analyzed in MA.

Output:
If you have created the syslog output from MA to RSA NetWitness Logs or another SIEM, then you would get output like this:

May 16 15:51:59 nw11malware CEF:0|RSA|Netwitness for Malware Audit logging|11.1.0.0-8295.5.0|Suspicious Event|Detected suspicious network event|2|static=100.0 community=0.0 sandbox=95.0 malware.nextgen.source=http://localhost event.type=FILE_SHARE event.id=36569 high.confidence.ioc.hit=com.netwitness.malware.rules.sandbox.autostart.registry.currentcontrolset.services USER=Unknown identity
May 16 15:51:59 nw11malware CEF:0|RSA|Netwitness for Malware Audit logging|11.1.0.0-8295.5.0|Suspicious File|Detected suspicious file|2|static=100.0 community=0.0 sandbox=95.0 fname=10ceb5916cd90e75f8789881af40287c655831c5086ae1575b327556b63cdb24.rtf fsize=85584 fileHash=b48ddad351dd16e4b24f3909c53c8901 file.sha1.hash=a3424a3593b6d7aaefa23f8076b141205cdbf5c0 file.sha256.hash=10ceb5916cd90e75f8789881af40287c655831c5086ae1575b327556b63cdb24 event.id=36569 high.confidence.ioc.hit=com.netwitness.malware.rules.sandbox.autostart.registry.currentcontrolset.services USER=Unknown identity

By default, only one of the three hashes is indexed, but that is being changed to include all three versions of the hash so that we can match on any version if we have a known hash list in NetWitness, whether it comes from endpoint logs or malware output (an internal change is being made to add these).

 

cef-custom.xml

<DEVICEMESSAGES>
<VendorProducts>
<Vendor2Device vendor="RSA" product="rsa_netwitness_for_malware_audit_logging" device="rsa_netwitness_for_malware_audit_logging" group="Anti Virus"/>
</VendorProducts>
<ExtensionKeys>
<ExtensionKey cefName="file.sha1.hash" metaName="checksum"/>
<ExtensionKey cefName="file.sha256.hash" metaName="checksum"/>
<ExtensionKey cefName="USER" metaName="username"/>
</ExtensionKeys>
</DEVICEMESSAGES>

 

Devices look like this

 

device.type = 'rsa_netwitness_for_malware_audit_logging'

 

And in the Event analysis view you get this type of meta

 

We have the filename from the submission (matches up with filename.all and every other filename that the system might capture from logs/packets/endpoint/malware/netflow), the checksums from the submission and the threat.category.

 

Now you can hook into the Reporting Engine service to report on these occurrences, or into ESA for immediate correlation across sessions.

A new variant of the SynAck ransomware has been seen in the wild using Process Doppelgänging to evade detection. The malware has been seen in multiple geographies, including the USA, Europe, and the Middle East.

 

The blog below shows how RSA NetWitness Endpoint is able to detect the malicious behavior of SynAck even when the malware is using evasion techniques.

 

After the machine was infected with the malware, RSA NetWitness Endpoint, based on the detected behaviors of the malware, assigned a high risk score to the infected machine (in this case, a score of 835 out of a maximum of 1024).

 

 

If we then look at the modules that are part of the malware, we can see:

- synack.exe with a high IIOC score, high Risk Score and a hash reputation tagged as "Malicious"

- Memory DLLs with high risk IIOC and Risk scores, which are the code loaded in memory to evade detection

- The text file that shows up to the victim once infected, also with a high IIOC score due to its behavior (set to be opened at startup)

 

The triggered behaviors by these processes can be seen below:

 

From this list we can point out a few, such as:

- "Suspected thread & Floating module", which as mentioned earlier refers to the DLLs loaded in memory to evade detection (but detected by RSA NetWitness Endpoint)

- "Autorun", this behavior is due to the readme file to display the directions to the victim on how to pay the ransom, as well as a copy of the msiexec.exe file with a valid Microsoft signature and hash stored in the App Data directory and set to run at startup

 

By looking at more details about the autorun settings in scanned data, we can see exactly what is configured to run at startup.

 

 

As for the Memory DLLs loaded by msiexec.exe showing in the Suspicious Threads:

 

 

 

If we now look at the information we have around the msiexec.exe module, we can see that even though it has a valid signature from Microsoft, its score has been increased by RSA NetWitness Endpoint due to multiple suspicious behaviors, such as:

- Its location in an unusual folder

- It modifies the registry key to run at startup

- It accesses a large number of documents in a short period of time (which is typical of ransomware, due to the encryption of all the files)

 

 

By checking the path of msiexec.exe, we can see that it is located in two locations, one of which is unusual ("\AppData\Roaming\").

 

 

If we look at the tracking data we have for the malware, we can see the following behaviors.

1. The malware is manually executed.

2. It then checks for running processes.

3. It copies "msiexec.exe" to the "\AppData\Roaming\" folder.

4. It kills excel.exe (one of the processes it watches for and kills, among a longer list of 100+ processes).

5. It deletes the original dropper.

6. It starts encrypting the documents.

7. It modifies the Run registry key to open a text file with instructions on how to pay the ransom every time the workstation starts.

8. It continues encrypting the documents.

9. It opens the text file with the instructions on how to pay the ransom.

 

The following is the message displayed to the user once the infection is completed.

 

 

 

This shows how RSA NetWitness Endpoint can detect an infection and track the behaviors of that malware, even when it uses advanced techniques to evade detection.

logon.type has been a numeric value for Windows logs in RSA NetWitness for a while, but it is not normally indexed.  Now, with RSA NetWitness Endpoint Insights and the built-in Windows log parser (device.type='windows'), the logon.type meta key is indexed OOTB.

 

Having a feed that matches all potential sources of values for that meta key and maps them to useful, analyst-friendly names can significantly help illustrate what logon.type=2 means and why you should or should not care.

 

This feed was built from a Microsoft KB article and appears in a new meta key: logon.type.desc

 

It looks like this and currently flags on device.type='windows','nwendpoint','winevent_nic'
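For reference, the mapping follows the standard Windows logon type meanings; a sketch of the CSV such a feed could be built from (the column layout is illustrative: logon type, description):

2,Interactive
3,Network
4,Batch
5,Service
7,Unlock
8,NetworkCleartext
9,NewCredentials
10,RemoteInteractive
11,CachedInteractive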

 

 

 

Here's my GitHub link for this feed, which will reflect any changes made in the future.

 

GitHub - epartington/rsa_nw_feed_microsoftlogontype 


Size Index Bucketing

Posted by Kevin Arunski, May 4, 2018

One of the more challenging things to accomplish in the RSA NetWitness core database is querying and filtering using meta items that represent byte sizes.  At face value it may seem simple: sizes are just numbers, so why would it be difficult to compare the size values in each session with the search criteria?  

The traditional RSA NetWitness index does not handle the values in "size" particularly well.  The RSA NetWitness index tries to keep track of the sessions in which every unique value appears.  That means it could be required to maintain a separate list of sessions for every single possible value of size, from 0 all the way up to the maximum possible size.  Since the number of sessions is so large (in the billions), the number of size values to track quickly reaches into the millions.  What's more, each of those values is associated with only a few sessions on average, and they tend to be spread out all over the data.  Those small lists of sessions don't compress well, wasting disk space and RAM.

Enter Size Bucketing

To make size indexes work well, we have introduced a new indexing mode that can be used on size keys.  It is called "bucket" mode, and it works like this:  instead of indexing every possible size, we round down the sizes to their nearest "bucket."  The buckets are whole-number values of kilobytes, megabytes, gigabytes, and so on.  This drastically reduces the number of index entries and solves the performance issues.  Fortunately, using this type of index does not really lead to a loss of functionality.  You can still use size indexes to perform queries, so expressions like: 

size > 1024 

or 

size == 1234456

are valid and are evaluated accurately, even if a bucketed index is used.  The bucketed index narrows down the query enough that the exact expression can be evaluated using the data in the meta database.

There is a subtle difference in index behavior, however.  If your query criteria specifies an exact bucket value, then the results returned will be all the sessions that have matching values in the bucket.  If you ask for size = 2048, the index engine identifies this as exactly 2 kilobytes, and will return sessions with sizes that are greater than or equal to 2 KB, but less than 3 KB.  If your query criteria does not match an exact bucket size, the query engine narrows down the results to those sessions that match the value exactly.  The reason for this behavior is to support the Navigate view in a logical way, while still allowing for more specific cases of the index to be utilized.

Using Size Bucketing

Size bucketing can be enabled on custom indexes with the following requirements:

  • The index format must be uint32 or uint64.
  • The index must be indexed by value.

To enable the size buckets, just add the bucket parameter to your custom index entry.  For example:

<key name="size" description="size" format="UInt32" bucket="true" level="IndexValues" />

After the index is saved or reloaded, the meta will be indexed with buckets.  Notice that 11.1 also removes an explicit restriction on indexing "size":  it is now acceptable to index this meta type.

If using size bucketing, it is not necessary to specify a valueMax parameter.  The size buckets prevent value max from reaching a large value.

Size Buckets in Navigate View

One immediate effect of size bucketed indexes is that they are useful in the Navigate view.  The Navigate view will render the size buckets in their human-readable form with the appropriate digital unit displayed.  So sizes are shown as "1 MB", "11 MB", "1 TB", and so on.  The navigation report will give you totals for the sessions in the buckets, so you can see useful information about the most frequently encountered sizes in the collection.  In addition, these buckets are maintained when you click into them and pivot to the Events view.  There, you will see a listing of sessions that are in the bucket.

Session Size Meta Can Be Added to Navigate

The labels used for sizes are also supported as part of the raw query syntax, so you may specify a query using human-readable aliases such as:

size > "1 KB"

or 

size < "10 GB"

Note that you have to put the value in quotes, because it's really a text label on the bucket.

RSA NetWitness v11.1 introduces powerful new text indexing features to the RSA NetWitness core database.  However, they are disabled by default, since using these features imposes a cost in terms of storage retention, and potentially throughput.  This article explains what these features are, and how to enable them.

 

This article refers to search features exposed by 'msearch', which is the RSA NetWitness core API that implements text search on collections.

 

Text Pattern Searching 

RSA NetWitness allows for searching within the text of meta items that are indexed.  It does this by utilizing all the indexes on the collection, and attempting to match the search input to values stored in any of the indexes.  The default behavior of this search requires that the search input match some value that the indexer has seen.  So text stored in meta items has to match in one of the ways that the indexer knows about.  Prior to v11.1 we could find these kinds of matches:

  • Exact match:  The search string matches a meta value exactly.  Matches are found if they exactly match a value stored in a value-level index.
  • Regex match:  The search string is treated as a regex, and the regex is tested against every value in every index.  This is very slow, but still much faster than a raw data scan, since only the indexed values are checked for matches.
  • Exact match on truncated token:  The search string can match an index value if it is truncated to some predetermined length that the index knows about.  The search engine uses this type of match to fetch results based on less-accurate indexes.  It uses the full search string to refine the results so that false matches due to truncation are not returned as results.  Therefore, this type of match is transparent to the user and it looks like an exact match.  However, truncating indexes is an important optimization to be aware of from a performance perspective.

 

One significant limitation of v11.0 and earlier is that it's not easy to find a subset of the text within a meta value.

Some meta values are phrases rather than single words, such as:  msg="Authentication Error". 

If I wanted to search for the word "error", my index on msg does not yield a result because it's at the end of the value.

Other meta values embed important text within non-text data.  A classic example of this is a text token inside a URL:

referrer="http://localhost:9999/authnetwitnessserv"

If I have an index on referrer, I can't easily use it to find "netwitness" in that value.

One way around this is to feed the raw log messages to the "word" indexer.  This works around the first case, because both of the words are also copied to the word meta items.  However, it does not solve the second scenario, and it incurs some overhead in terms of having some of the same text in both the "msg" and "word" meta items.

 

New Indexing Mode:  N-grams

An N-Gram is a sequence of N characters in length extracted from any position within the text.  We now have the ability to generate indexes from these sequences on text meta items.   The index engine will extract all the subsequences out of the text value and store them in the index.  Note that it only stores them in the index, rather than generating meta items.  Therefore, it increases index storage usage, but not meta usage.

Consider the case of our referrer meta item I used above:  if I turn on N-grams on my referrer index, then we will get index hits for any substring within our meta value.  Embedded words like "local", "auth", "net", "netwitness", "ser", and so on will all be indexed.  Therefore, simple substring searches will return useful matches.

You can turn on N-gram indexing on any index that meets these requirements:

  1. It must be a text meta type.
  2. It must be Value Indexed.

To turn on the N-Gram indexer, add these parameters to the key entry in your custom index configuration.

  • ngrams="all" 
  • minLength="3" 
  • maxLength="5"

Here's an example:

<key description="Text Token" level="IndexValues" name="word" minLength="3" maxLength="5" format="Text" ngrams="all" />

The ngrams parameter turns on the N-gram indexing mode.  The min and max length parameters determine the range of size of N-grams that will be generated.  The minimum length is important because it defines the minimum number of characters that a search term must contain in order for it to match anything.  The maximum length is a performance optimization to limit the number of unique n-grams present in the index.  Due to the exact match on truncated tokens logic mentioned earlier, the max length does not actually restrict the maximum length of search patterns.  Instead, max length controls how accurate the index is and provides a performance tradeoff:  longer maximum lengths will more accurately identify the correct search matches, but they impose more index storage and RAM cost.  Shorter maximum lengths provide a less accurate index that takes longer to resolve search results, but use less storage and RAM.  I recommend using the default minimum of 3 and the default maximum of 5.

Activating any N-gram indexes has a significant cost.   Using the parameters above, an N-gram index entry consumes approximately 5 times the index space as a non N-gram index.  One rule of thumb would be: the cost of turning on an N-gram index is about the same as turning on 5 regular value indexes.

The valueMax for your index should correspond to however many tokens might be generated by your maxLength parameter.   If you use the default maxLength of 5 or less, it is acceptable to not specify a valueMax.

 

How to search against N-grams

To utilize the n-gram matches within the event view search box, just type the substrings that you are looking for as your search terms.   The N-Grams indexes will automatically be utilized where they are enabled.  Don't worry about adding regexes or wildcards:  just put whatever text you are looking for into the search box.    

Finding text at the beginning of a word:

Or the middle of a word:

Or the end of a word:

The default search options for searches are suitable for N-Gram searches:  specifically they require that "search indexes" is enabled.   

There are additional search options that can help with more certain cases.  For example, consider if we want to look for text specifically at the start of the meta or the end of the meta item.  For that, the 11.1 search API does support glob character matching in the pattern.  If I have a host name index and I want to match "*.com" in a meta value but want to exclude "*.com*", I can do that with glob patterns.  They are primarily useful for filtering results to search patterns that match at the beginning or end of a meta item.  Glob patterns have  important caveats:

  • Turn off raw packet searching when using glob patterns that match the beginning or end of a string.  The glob pattern will attempt to apply the prefix or suffix wildcard at the beginning or end of the entire raw payload, which is rarely the desired result.
  • Glob pattern matches are for meta value matches only.
  • Glob and regex patterns have conflicting, incompatible syntax.  Do not try to use glob and regex at the same time.
  • For substring matches anywhere in the text, don't use glob characters.  Just specify the text you are looking for and let the N-Gram index find the substring.
  • Glob patterns support the "*" and "?" characters similar to file-name matching in unix.  The asterisk (*) matches zero or more characters, while the question mark (?) matches any single character.  You can specify multiple wildcard characters in the search pattern.

Glob search example:
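For instance, two illustrative patterns against a value-indexed host name key:

*.netwitness.com - matches meta values ending in .netwitness.com
netwitness.* - matches meta values beginning with "netwitness."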

Word Indexing with N-Grams

The word indexer works with a special log parser that attempts to synthesize word meta items.  This word generating parser works completely independently of the N-Gram indexing feature.  You can use N-Gram indexes on any text key, not just the word meta key.  Conversely, you could continue to use the word indexer without turning on N-Gram indexes at all.  Using the N-Gram indexer in conjunction with the word indexer is a powerful combination, however.  Logs that are fed to the word indexer, and then subsequently indexed with N-Gram indexes, yield a full-text indexed searchable database.  The cost for implementing a search this way is high, but it may be manageable if it reduces the number of indexes needed elsewhere in your custom index.  If you do decide to enable N-grams on the word index, turn off truncation in the word tokenizer parser on the Log Decoder.  This will ensure that all the possible substrings make their way into the index.

Text Searching in Network Meta

N-gram indexes can be used with packet collections, not just logs.  You can add an N-gram index to any of the text meta items, as long as they are indexed at the value level.   This can be very useful for searching within meta values that are long or complicated, and would otherwise require using the contains or ends query operation.

Ethernet_oui.lua is a parser that has existed on the packet side for a while to map the MAC addresses in network events to vendor information.  The ethernet_oui parser has recently been extended to work with log events that write to eth.src/eth.dst and alias.mac, as well as Netflow and NW Endpoint events.

Now, when an event occurs that has a MAC address parsed out into one of the three meta keys mentioned above, you get the matching vendor information for that NIC.

 

If you have Netflow records with MAC addresses in the events, and the ethernet_oui parser is deployed to your log decoders/Netflow decoder, you will now get eth.src.vendor and eth.dst.vendor meta registered (it will not be indexed by default, but you can add it to index-concentrator-custom.xml).
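If you do want these indexed, entries along the following lines in index-concentrator-custom.xml should work (the valueMax shown is illustrative):

<key description="Ethernet Source Vendor" format="Text" level="IndexValues" name="eth.src.vendor" valueMax="100000" />
<key description="Ethernet Destination Vendor" format="Text" level="IndexValues" name="eth.dst.vendor" valueMax="100000" />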

 

The same goes for RSA NetWitness Endpoint Insights which provides information in alias.mac:

 

Health and Wellness leverages RabbitMQ to collect the current status of the components of the RSA NetWitness Platform. After changing the IP of a component, Health and Wellness keeps communicating with the previous IP. To resolve this issue, do the following:

 

Open your browser and log in to the RabbitMQ Management interface: https://IP_of_your_head_unit:15671

Log in using the deploy_admin account 

 

When logged in, go to the Admin Tab

 

In the Admin tab, select Federation Upstreams on the right.

 

 

Identify the wrong upstream and take note of its Virtual Host, URI, Expires, and Name.

 

Create a new upstream and enter the correct information for the URI (with the new IP), the Name, the Virtual Host, and the Expires value:

 

 

When adding this new upstream, it will match the upstream name and automatically replace the one with the wrong information.

 

The device is now in a ready state, and the health status has changed from RED to GREEN.

Virtualization is now an industry standard, and RSA NetWitness offers a 100% virtual deployment. The RSA NetWitness Archiver module offers the option of using multiple virtual hard disks to increase the retention of the platform. To increase the available space, you will need to do the following:

 

The first step is to add another VMDK to your virtual RSA NetWitness Archiver:

 

 

Change the size of the Virtual Hard Disk to meet your requirement:

We recommend using a different SCSI controller per VMDK. In this case, SCSI (0:1) is used by our operating system; for the second VMDK, we will use SCSI (1:1):

Press Finish to complete the process:

When the virtual hard disk has been added to our virtual Archiver, we need to add it to our LVM. We first need to identify the new hard disk using the fdisk -l command. In our case, the new virtual hard disk is /dev/sdb

Create the new partition on the /dev/sdb disk with the following command fdisk /dev/sdb

Press n to create a new partition and p for a primary partition

Type w to write the configuration to the partition table

 

We need to create a Physical Volume for our new partition using the following command pvcreate /dev/sdb1 

 

We need to create a Volume Group for our new partition using the following command vgcreate vg_customer /dev/sdb1. The name of the Volume Group can be changed to meet your requirement

 

We need to create a Logical Volume for our new partition using the following command lvcreate --name customer1_lvm -l 100%FREE vg_customer. The name of the Logical Volume can be changed to meet your requirement

 

RSA NetWitness leverages XFS for best performance. Our new logical volume needs to be formatted as XFS using the following command: mkfs.xfs /dev/mapper/vg_customer-customer1_lvm. The LVM name can differ based on your use case.

Create your folder for the mount point

Mount your LVM in your folder created earlier

Validate your mount point with the df command
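A sketch of those three steps, assuming an example mount point of /var/netwitness/archiver/customer1:

mkdir -p /var/netwitness/archiver/customer1
mount /dev/mapper/vg_customer-customer1_lvm /var/netwitness/archiver/customer1
df -h /var/netwitness/archiver/customer1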

 

Edit your /etc/fstab file with your mount point information
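An example entry, matching the illustrative mount point above:

/dev/mapper/vg_customer-customer1_lvm  /var/netwitness/archiver/customer1  xfs  defaults  0 0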

 

When your LVM is created and available to the operating system, we need to add this storage to your RSA NetWitness Archiver. In our case, we are adding 500 GB to the hot storage. Press the gear button for the hot storage.

 

Add your mount point to the hot storage and press save

 

Our hot storage now has 639.89 GB.

 

We will create a new Collection with 450 GB for our Customer1.  

 

Once the Collection is created, RSA NetWitness will automatically create the following directories for each type of data.
