All Places > Products > RSA NetWitness Platform > Blog > 2018 > April

Health and Wellness leverages RabbitMQ to collect the current status of all components of the RSA NetWitness Platform. After you change the IP address of a component, Health and Wellness may keep communicating with the previous IP. To resolve this issue, do the following:


Open your browser and log in to the RabbitMQ Management interface: https://IP_of_your_head_unit:15671

Log in using the deploy_admin account 


When logged in, go to the Admin tab.


In the Admin tab, select Federation Upstreams on the right.



Identify the incorrect upstream and take note of its Virtual Host, URI, Expires, and Name values.


Create a new upstream and enter the correct URI (with the new IP), along with the same Name, Virtual Host, and Expires values:



Because the new upstream has the same name, adding it automatically replaces the one with the outdated information.


The device is now in a ready state, and the health status changes from RED to GREEN.
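If you prefer to script this fix instead of clicking through the UI, RabbitMQ's management HTTP API exposes federation upstreams as runtime parameters (PUT /api/parameters/federation-upstream/&lt;vhost&gt;/&lt;name&gt;). The sketch below only assembles the request path and body; the vhost, upstream name, URI, and expires values are placeholders to be replaced with the values noted from the existing upstream:

```python
# Hedged sketch: build the PUT path and JSON body for RabbitMQ's management
# HTTP API. All values below are placeholders, not real deployment values.
import json
from urllib.parse import quote

def federation_upstream_request(vhost, name, uri, expires_ms):
    """Return the API path and JSON body that recreate an upstream."""
    path = "/api/parameters/federation-upstream/{}/{}".format(
        quote(vhost, safe=""), quote(name, safe=""))
    body = json.dumps({"value": {"uri": uri, "expires": expires_ms}})
    return path, body

path, body = federation_upstream_request(
    "/rsa/system", "logcollection", "amqps://", 360000)
print(path)
print(body)
```

Sent with curl as deploy_admin against https://IP_of_your_head_unit:15671, such a PUT would replace the upstream the same way the UI does, since the name matches.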

Virtualization is now an industry standard, and RSA NetWitness offers a 100% virtual deployment. The RSA NetWitness Archiver module can use multiple virtual hard disks to increase the retention of the platform. To increase the available space, do the following:


The first step is to add another VMDK to your virtual RSA NetWitness Archiver:



Change the size of the Virtual Hard Disk to meet your requirement:

We recommend using a different SCSI controller for each VMDK. In this case, SCSI (0:1) is used by our operating system, so for the second VMDK we will use SCSI (1:1):

Press Finish to complete the process:

When the virtual hard disk has been added to our virtual Archiver, we need to add it to our LVM. Identify the new hard disk using the fdisk -l command. In our case, the new virtual hard disk is /dev/sdb.

Create the new partition on the /dev/sdb disk with the following command: fdisk /dev/sdb

Press n to create a new partition and p for a primary partition

Type w to write the configuration to the partition table


We need to create a Physical Volume for our new partition using the following command: pvcreate /dev/sdb1


We need to create a Volume Group for our new partition using the following command: vgcreate vg_customer /dev/sdb1. The name of the Volume Group can be changed to meet your requirements.


We need to create a Logical Volume for our new partition using the following command: lvcreate --name customer1_lvm -l 100%FREE vg_customer. The name of the Logical Volume can be changed to meet your requirements.


RSA NetWitness leverages XFS for best performance. Our new partition needs to be formatted as XFS using the following command: mkfs.xfs /dev/mapper/vg_customer-customer1_lvm. The LVM name can differ based on your use case.

Create your folder for the mount point

Mount your LVM in your folder created earlier

Validate your mount point with the df command


Edit your /etc/fstab file with your mount point information
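Taken together, the storage steps above form one short command sequence. This Python sketch only assembles and prints the commands and the fstab line for review, using the device and VG/LV names from this post; the mount point is a hypothetical example, and nothing is executed:

```python
# Assemble the LVM workflow from this post for review before running it.
# Device and VG/LV names match the examples above; the mount point is hypothetical.
disk = "/dev/sdb"
vg, lv = "vg_customer", "customer1_lvm"
mount_point = "/var/netwitness/customer1"   # hypothetical mount point
lv_path = "/dev/mapper/{}-{}".format(vg, lv)

commands = [
    "pvcreate {}1".format(disk),                         # physical volume
    "vgcreate {} {}1".format(vg, disk),                  # volume group
    "lvcreate --name {} -l 100%FREE {}".format(lv, vg),  # logical volume
    "mkfs.xfs {}".format(lv_path),                       # format as XFS
    "mkdir -p {}".format(mount_point),                   # mount point folder
    "mount {} {}".format(lv_path, mount_point),
]
fstab_line = "{} {} xfs defaults 0 0".format(lv_path, mount_point)

print("\n".join(commands))
print(fstab_line)
```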


When your LVM is created and available to the operating system, we need to add this storage to your RSA NetWitness Archiver. In our case, we are adding 500 GB to the hot storage. Press the gear button for the hot storage.


Add your mount point to the hot storage and press Save.


Our hot storage now has 639.89 GB.


We will create a new Collection with 450 GB for our Customer1.  


Once the Collection is created, RSA NetWitness automatically creates the following directories for each type of data.

Here are a few column and meta groups to help get you started in NW 11.1 with either the free NW Endpoint Insights integration or the existing NW Endpoint 4.4 meta integration.  These are designed to help speed up analysis based on the category of endpoint data of interest.  It's also worth remembering that you have access to a lot of this data in a per-host context with the new 11.1 Investigate > Hosts view, which is a handy way to get a snapshot of what is going on at a given point in time for a specific host, without (or prior to) querying the NWDB, e.g.:



When hunting or analyzing endpoint data across an entire environment, or in context with network and other log data for a specific host, you would then want to pivot into the more traditional Investigate > Navigate/Hosts view which is where you would apply the appropriate meta and column groups.



Meta Group (1) 

Top down organization of keys:

   - Host Information

   - Data Category (+Action for event tracking)

   - File/Process Keys

   - IPv4 Keys

   - User Keys

   - Service, Autoruns, Tasks


[NWEndpoint] Event and Scan Summary:

Column Groups (5)

When using column groups for analysis of NW Endpoint data, I like having both a generic column group that can show all event and scan data categories on the same page without too much clutter, as well as specific column groups mapped to individual categories (e.g. Process Analysis, File Analysis, Autorun Analysis, etc.).  The NW 11.1 platform lets you toggle between these at will.  Also note that these apply to both the Event view and the Event Analysis view.


Eg. [NWEndpoint] Event and Scan Summary (same keys as the Meta Group)


Eg. [NWEndpoint] Process Analysis

(Note: 'Process Event' category is only available with the full NW Endpoint Agent)

Eg. [NWEndpoint] File & DLL Analysis


Eg. [NWEndpoint] Service Analysis


Eg. [NWEndpoint] Autorun & Task Analysis


Investigation: Manage Column Groups in the Events View 

Investigate: Use Meta Groups to Focus on Relevant Meta Keys 


** NOTE:  The attached groups use the meta key 'param' to display "Launch Arguments".  The 11.1 out-of-the-box configuration maps this to the 'query' key instead.  'param' will be the default as of the patch, but in the meantime you can either update your table-map.xml/concentrator index manually, or switch the meta key referenced in the groups to 'query', which is the 11.1 out-of-the-box setting.


Process: Host GS: Maintain the Table Map Files  for the table-map.xml instructions, and Core Database Tuning Guide: Index Customization  for the concentrator index.

table-map-custom.xml addition:  <mapping envisionName="param" nwName="param" flags="None"/>

index-concentrator-custom addition:  <key description="Launch Arguments" level="IndexValues" name="param" format="Text" valueMax="100000" />

If you have done anything on an iDRAC that requires the mounting of an ISO file or some remote/virtual media, it is painfully slow.  What I have discovered is that the iDRACs on the appliances are initially configured to operate at only 100Mb Full, but they are 1000Mb/1Gb capable; you just have to turn on "Auto Negotiation".  I have seen this on every appliance I have installed or encountered.  A quick way to tell whether your iDRAC is running at 100Mb or 1000Mb is to look at the link light.  See the picture below.


iDRAC Link Lights


Below is a screenshot of a freshly installed appliance with the default iDRAC settings.

Default Settings (100Mb) - Orange Link Light


Below is the screenshot of the iDRAC after I turned on "Auto Negotiation".

New Settings (1Gb) - Green Link Light



You can set "Auto Negotiation" by using the "Lifecycle Manager" (F10) at boot and by using the iDRAC web interface.  When using the web interface you will have to reconnect to the iDRAC as it will reset the network interface you are using to access the UI.  I have not been able to find a way to do this using the ipmitool.

The TLD parser has been updated to now deploy on Log Decoders.  


The parser looks for the following keys from log devices to parse out the same information as it does from packets:

  • Host.src
  • Host.dst
  • Domain.dst
  • Domain.src
  • FQDN


Which writes out information into:

* - mapped to risk meta
* analysis.service - hostname characteristics
* cctld - (nonstandard) (optional) country-code top level domain, e.g., ->
* sld - (nonstandard) (optional) second level domain, e.g. -> amazon
* tld - top level domain, e.g. -> com


When searching for Lua and Log in the RSA Live deployment screen you will see the following:


And linked dependencies:


So this is a really simple method of getting nwll.lua deployed to a log decoder if your custom parser requires that library (PaloAlto URL.raw parser for instance).

Amazon GuardDuty is a managed threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. The service analyzes Amazon CloudTrail, AWS VPC Flow Log data and other services to look for issues such as inbound port scans, possible backdoor access to your systems, unauthorized use of your account, and many other potential problems. GuardDuty can be used to monitor a group of AWS accounts and have their findings routed to another AWS account—the master account—that is owned by a security team. Amazon GuardDuty starts to generate customized threat intelligence for you.


GuardDuty is a regional service. So, when GuardDuty is enabled for a particular AWS Region, findings are generated and delivered for that region only. Each region needs to be configured individually.




The RSA NetWitness Plugin framework uses the AWS Python SDK to access the GuardDuty logs.


This plugin supports the different finding types alerted by GuardDuty; all types are explained here:

The following are Amazon GuardDuty limits per AWS account per region:


RSA NetWitness can already collect native CloudTrail logs, and the GuardDuty integration further expands its visibility with the advanced threat detection provided by Amazon, which monitors not only CloudTrail logs but also VPC flow logs. Combined with the complete visibility that the RSA NetWitness Platform delivers for threat detection and response across logs, network, and endpoints for both private and public cloud environments, securing the cloud is simplified.


Downloads and Documentation:


Configuration Guide: Amazon GuardDuty Event Source Configuration Guide 

Collector Package on RSA Live: "Amazon GuardDuty"

Parser on RSA Live: CEF (device.type="amazonguardduty")

Dropbox is a file hosting service that offers cloud storage, file synchronization and personal cloud services. Dropbox allows its users access to files and folders anytime from desktop, web and mobile clients, or even through applications connected to Dropbox. This presents a huge challenge for enterprises that must closely monitor daily activities and look for malicious file activity, exfiltration of data, unauthorized file access, sharing, etc.




The RSA NetWitness Plugin framework can be used to connect to Dropbox via API v2 to collect all user activity. Here are some of the common scenarios that can be monitored using this integration:


  • Monitoring Sharing Policy.  Statistics around number of shares, number of shares with users outside of the organization (as indicated by the corresponding flag on the event in the sharing category), domains being shared with, etc.
  • Aggregate information on content being added & deleted (file operations category), and logins (login category). Reporting bursts of file deletes/renames, large number of attempted/failed logins, etc.
  • App linkages & behaviors around apps (apps are noted as an actor in actions they perform)


For more details on what can be collected please refer to this link:


Here are some of the use-cases that can be built on NetWitness Platform:



1. Content Sharing Activity (Internal vs External)

2. Login Activity from various localities

3. Top 10 File Uploaded/Downloaded

4. Third-Party App activity.

5. Summary of File activity per user

6. Top User Activities



1. Login from suspicious Locality 

2. Rapid Renames of Files 

3. Sharing of file with more than the allowed number of users

4. External Sharing of Business sensitive files


Combined with the complete visibility that the RSA NetWitness Platform delivers for threat detection and response across logs, network, and endpoints for both private and public cloud environments – securing the cloud is simplified.


Downloads and Documentation:


Configuration Guide: Dropbox 

Collector Package on RSA Live: "Dropbox"

Parser on RSA Live: CEF (device.type="dropbox")

VMware AppDefense is a data center endpoint security product that protects applications running in virtualized environments. AppDefense leverages the unique context provided by its position in the vSphere hypervisor to understand what applications are supposed to look like, and then monitors the applications for unauthorized changes to their intended state. When AppDefense detects anomalies representative of malicious activity, it can automatically remediate them using vSphere and NSX. 


There are four main behaviors that AppDefense monitors:

  • Inbound Communications
  • Outbound Communications
  • Guest OS Integrity
  • Host Module Integrity


For more details please refer to this link:   




The RSA NetWitness Platform uses the Plugin Framework to connect to the AppDefense RESTful API and periodically query for alarms. The alarms provide deep visibility and context for malicious activity in the vSphere environment, which can be correlated with events collected from multiple data sources via the RSA NetWitness Platform.  Combined with the complete visibility that the RSA NetWitness Platform delivers for threat detection and response across logs, network, and endpoints for both private and public cloud environments, securing the cloud is simplified.


Downloads and Documentation:


Configuration Guide: VMware AppDefense 

Collector Package on RSA Live:  "VMware AppDefense"

Parser on RSA Live: "CEF". (device.type=vmwareappdefense) 

For those who have requested a downloadable and searchable PDF of the Logs and Network documentation, the IDD team added downloadable PDFs for the entire 11.0 and 11.1 documentation sets. An RSA Link login is required to download these files.

The IDD team will update these PDFs periodically, so please remember that the most up-to-date documentation for RSA NetWitness Logs and Network Version 11.x can be found here: RSA NetWitness Suite 11.x Master Table of Contents.

While the release of the Unified Data Model (UDM) has given us a unified meta key foundation on which to build moving forward (awesome!), it has also opened an administrative can of worms (not so awesome...).


With these new and/or modified meta keys comes the challenge of combing through your NetWitness architecture to find all the places that the discontinued meta exist, identifying the discontinued keys that you want to change, and then actually changing them. We can’t automate this entire process yet, but we can still automate some to make our lives easier.


One of the primary places that meta keys live within NetWitness is the custom XML file that allows for tuning and adding to the default out-of-the-box meta. In the UI, these files are accessible at Admin (or Administration) → Services → <serviceName> → Config → Files:

Custom XMLs in the UI


And on disk at /etc/netwitness/ng/index-<serviceName>-custom.xml, (Log Decoders have an additional custom XML at /etc/netwitness/ng/envision/etc/table-map-custom.xml):

Custom XMLs in the Filesystem

Custom XMLs in the Filesystem (Table Map)


We could search through and update these files manually for every discontinued meta key...but frankly, that would be an enormous headache and a waste of time, which is why I put together this script to do it instead.


Before running the script, go to the UDM page on RSA Link and check out the table of Discontinued Meta. Copy the contents of this table (with or without the header – the script will omit that line if you do include it) into a text file. No modification of this copied table is necessary – again, the script will take care of that for us.

Discontinued Meta - UDM


Any discontinued meta keys from this table that do not have a specific 1-to-1 replacement meta key, such as orig_ip or any of the risk.* keys, will also be omitted when the script runs.


Next, copy this text file and the script to the filesystem of the appliance that you want to run it on (Log/Decoder, Log/Concentrator, Log/Hybrid, Archiver, or Broker), and make the script executable.


The script will require two arguments – the name of the text file that you copied the Discontinued Meta table into, and the name of the custom XML that you want to modify:


# python <> <text_file_with_copied_table.txt> <target_custom_file.xml>


For example:


# python filename.txt index-concentrator-custom.xml

# python filename.txt index-archiver-custom.xml

# python filename.txt table-map-custom.xml


The script will ask whether to perform a dry run replacement or to do it for real. If run as a dry-run, you will get an output of all the discontinued meta keys that were identified within the target custom XML, as well as the new meta key that replaces it in the UDM.
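The core of that dry-run pass can be sketched in a few lines. The key pairs below are hypothetical placeholders, and the attached script (not this sketch) handles the real Discontinued Meta table format:

```python
# Minimal dry-run sketch: given "discontinued replacement" pairs, report which
# discontinued keys appear in a custom XML string. Key names are hypothetical.
pairs_text = """old.key.a new.key.a
old.key.b new.key.b"""

custom_xml = '<key description="Example" level="IndexValues" name="old.key.a"/>'

replacements = []
for line in pairs_text.splitlines():
    old, new = line.split()
    if 'name="{}"'.format(old) in custom_xml:
        replacements.append((old, new))

for old, new in replacements:
    print("{} -> {}".format(old, new))
```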


If you do not choose the dry-run option, the script will give you the option to view each discontinued meta key and the corresponding new meta key and accept or deny its replacement, or to simply replace everything without any further prompts.

Script Options

Script Options 2

If the actual replacement(s) are accepted, the script will back up the original custom XML before making any changes.


Once complete, I recommend that you compare the new and original files using your diff tool or utility of choice to verify that everything proceeded without error. And as a reminder, you will need to restart the service for these changes to take effect.


Happy UDM'ing!

This idea started out as a POC to see if there was a way to implement substring matches for a large number of items related to useragent strings.


There are a number of locations that have published known bad UA strings over the years but they have always had to be limited to exact matches in RSA NetWitness (at least for scale reasons).


Here are a few:


The Feed function only works with exact matches; application rules could be used for substring matches, but for a large number of elements that option wasn't really scalable.


What about Lua?

Could Lua be used to create a single package of content, applied to both Log and Packet Decoders, that could search for exact matches and substrings as efficiently as possible and write out something useful to indicate that further investigation is necessary?


With some help from internal RSA resources, and after reviewing some of our parsers, we decided to build this function in a Lua parser.  The basic idea was two sections: one that enables exact matches (replacing the feed requirement for half of the solution) and a substring match section.  Optimizing performance as much as possible led to a two-table structure for the substring section (make sure you update both related tables if you add or remove items).


As usual this is POC code.  It works in a small lab - your performance may vary in a large corporate environment.  Use at your own risk!


Attached is the Lua parser, which can be loaded on both Log and Packet Decoders; it reads from the client meta key and writes to the IOC and analysis.session keys when there is a match.


The output looks like this:


There are two potential ways of displaying a match in analysis.session:


Exact matches are written like this:

ua_match_metasploit old rare



Substring matches are written like this:

ua_substring_sql injection >> ^.*sqlmap.*$

ua_substring_badly scripted >> ^mozilla/3%.0 .*$


This way I was able to understand from the meta written what was matched and why (exact or substring), as well as the potential threat it was related to.  The ^ and $ lock the match to the beginning and end of the string, and .* are your wildcards, which are translated from the * entry in the tables to keep things simple on input.


For instance, the two entries for sqlmap look like this:

["*sqlmap*"] = "SQL Injection",


The Lua parser is listed here, and any updates made will be placed here.  Ideally this will have the tables contained in an *options file so that the configuration and updates are separate.

At the moment, changes can be made from the Decoder > Config > Files section, as this is a cleartext Lua parser.  You can change/add from there and push the file to all your other decoders.



GitHub - epartington/rsa_nw_lua_useragents: Lua Parser for user agents searches (exact and substring) 


** Keep in mind that most log parsers write the user agent string to user.agent and not client on Log Decoders.  You can either change the Lua parser to read from there, or build a Lua utility parser to move user.agent to client; then this parser will work evenly across both log and packet decoders. **

by Mike Adler, VP Product, RSA NetWitness


Empowering intelligent SOCs by providing them with the visibility, insights and actions they need—as quickly as possible—is key to a company’s ability to manage digital risk. However, as the number of users, endpoints, and networks accessing company data grows, so does the risk of cyberattacks to a company’s critical assets.


This can often leave SOC analysts overwhelmed with data and alerts, increasing the potential dwell time of a threat and leaving less time to find the threats that matter.  Ironically (and unfortunately), in their attempt to improve enterprise security by deploying more solutions, security professionals create silos of disconnected security information, which can open the organization up to more vulnerabilities as these silos add complexity and deliver a very poor user experience for analysts.


This is why I am pleased to announce RSA is adding Fortscale’s pioneering UEBA technologies to the RSA NetWitness® Platform.  Adding these capabilities natively to the Platform will give our customers an integrated approach that simplifies SOC management and security by correlating data to accurately detect and respond to advanced threats using analytics. RSA NetWitness UEBA seamlessly integrates with the Platform’s meta-data model, allowing intelligent processing of data in a single platform with a reduced storage footprint.  By building on the existing data store and analytical capabilities of the Platform, Fortscale’s technology enables RSA NetWitness customers to see anomalies in user behavior alongside other security alerts in the RSA NetWitness Respond module.


The Fortscale UEBA engine identifies deviations from normal user behaviors and uncovers risky and previously hard-to-detect threats. By understanding behavior, Fortscale highlights potential risks such as shared user credentials, privileged user account abuse, geolocation and remote access anomalies. This allows organizations to find unknown threats hiding among the huge volume of security data found in today’s complex IT environments without heavy installation, maintenance or analyst oversight. The Fortscale UEBA engine is designed to:

  • Provide fully automatic, unsupervised machine learning;
  • Reduce the need for organizations to have big data experts on their analyst team;
  • Detect unknown threats (compromised credentials, insider threats, data exfiltration);
  • Address malicious behavior in which exploits have received elevated permissions;
  • Be dynamic, automatically learning behavior specific to the environment; and,
  • Require no customization, rule authoring or ongoing care, tuning, rule creation/adjustment.


The Fortscale UEBA engine strengthens the RSA NetWitness Platform evolved SIEM allowing our customers to have more capability at their fingertips without stitching together multiple security platforms or tools.  We expect customers will quickly come to value the additional alerts and information detected by the Fortscale UEBA engine and extend their adoption of the RSA NetWitness Platform as the centerpiece of an intelligent SOC.  I am excited to welcome the Fortscale team to RSA and look forward to sharing more details about the integration in the future. 

In a recent RSA NetWitness Platform release, a new windows parser was introduced. This parser helps parse logs that are collected from Windows event sources via the RSA NetWitness Endpoint Agent.


The agent acts as a threat detection solution that detects malware, highlights suspicious activity for investigation, and instantly determines the scope of compromise to help security teams stop advanced threats faster.


Supported Windows OS Versions:

The Endpoint Agent can be deployed on Windows laptops, workstations, servers, or any system, physical or virtual. The supported operating systems are:

  • Windows 7, 8, 8.1, 10
  • Windows Server 2008, 2012, 2016


Structure of Endpoint Agent Log:

The RSA NetWitness Endpoint Agent generates syslog-formatted logs. The format and structure of the logs is displayed in the image below:

Log Format

Every Windows log collected through the NetWitness Endpoint Agent has multiple tags, with a space as the delimiter. Every log has a header part and a payload part.


Header definition:



Payload definition:

Agent=NWE AgentIP= AgentComputer=Srv01 AgentTime=2018-01-16T18:08:01.5144951Z TimeCreatedSystemTime=2018-01-16T18:06:56.0309840Z EventID=4672 Provider="Microsoft Windows security auditing." Channel=Security Level=Information Task="Special Logon" OpCode=Info Version=0 Keyword="Audit Success" ProcessID=460 Computer=Srv01 RecordId=34819 SubjectUser="NT AUTHORITY\SYSTEM" SubjectUserName=SYSTEM SubjectDomainName="NT AUTHORITY" SubjectLogonId=0x3e7 PrivilegeList="SeAssignPrimaryTokenPrivilege     SeTcbPrivilege     SeSecurityPrivilege" Message="Special privileges assigned to new logon.    Subject:   Security ID:  S-1-5-18   Account Name:  SYSTEM   Account Domain:  NT AUTHORITY   Logon ID:  0x3E7    Privileges:  SeAssignPrimaryTokenPrivilege     SeTcbPrivilege    SeSecurityPrivilege"


The payload contains all the tags that Microsoft Windows generates when an event occurs. The Message tag carries the complete raw information for that particular event.
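Because values containing spaces are quoted, the payload can be tokenized with standard shell-style splitting. A quick sketch of the idea, on a simplified sample line:

```python
# Sketch: tokenize a space-delimited key=value payload where multi-word values
# are quoted, as in the sample above. shlex honors the quoting for us.
import shlex

sample = ('EventID=4672 Channel=Security Level=Information '
          'Task="Special Logon" Keyword="Audit Success"')

fields = dict(token.split("=", 1) for token in shlex.split(sample))
print(fields["EventID"], fields["Task"])
```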


The logs generated from supported Windows machines via the NetWitness Endpoint Agent are parsed by the latest NetWitness windows parser, which supports parsing of every log from every Microsoft Windows channel.


This blog is intended to help a user understand the various meta keys designed and used in the latest NetWitness windows parser. Specifically, it highlights meta key usage for the major Microsoft Windows channel types: System, Security, and Application.


NetWitness Meta Key usage for Microsoft Windows tags:

We have collected many varieties of tags from Microsoft Windows; the tags important from a security perspective are listed below. The tags are mapped strictly to NetWitness-defined meta keys.


Meta data used in windows parser for Security channels are:

Microsoft Windows Security Channel Tags → NetWitness Meta Key
The Meta data used in windows parser for System channels are as below: 

Microsoft Windows System Channel Tags → NetWitness Meta Key
The Meta data used in windows parser for Application channels are as below:

Microsoft Windows Application Channel Tags → NetWitness Meta Key
Apart from the keys listed above, RSA NetWitness lets customers collect values from logs into their own custom meta keys using a custom parser. A custom parser lets RSA NetWitness customers define their own meta keys and map log values to them.


Comparison of usage of NetWitness meta keys between winevent_nic and windows parser


The NetWitness windows parser provides the following additional advantages compared with the winevent_nic parser.

  • No unknowns: none of the Windows logs collected using the NetWitness Endpoint Agent goes unknown.
  • Lower parsing time: based on our performance tests, the parsing time of the windows parser is lower than that of the winevent_nic parser.

Below is a comparison of meta key usage for Windows Security Event ID 4672. The screenshot on the left shows windows logs parsed the old way; the screenshot on the right shows windows logs parsed via the NetWitness Endpoint Agent.



  As assisted by

10.6.5.x and 11.1 now have the ability to apply -custom.xml log parser files, reducing the need to fork a parser to customize log parsing for a particular device.  This means that you no longer have to remove a parser from the auto-update RSA Live flow just to add a custom entry or modify one event ID to suit a specific use case.


Documentation on how this is done can be seen here: Log Parser Customization 


Here is how it was implemented to provide enhanced functions to LOGBinder events without breaking the existing log parsing provided by RSA.


LOGBinder is available from here:  LOGbinder


I also noticed this application for Splunk, with some interesting events to pay attention to, which was the basis for the additional parsing created in this example:  LOGbinder Solutions - Active Directory Change Auditing 


Sample events were gathered and replayed against the stock RSA Live msexchange parser in NetWitness.


Locate the events in investigation (device.type='msexchange')

Reviewing the Splunk app's savedsearches.conf and macros.conf, I could see that many of the rules were driven, however there were a few that were more complicated and might require more parsing work to get the needed values.


Those events included ones found from this drill:

device.type='msexchange' && category='exchange' && ='25001','25002','25003','25004','25005','25006','25007','25008','25009','25010','25011'


An Application rule helped locate these in my testing:

Looking at the event.description fields, we can see that some of the events appear to have more data in them than they should, and the values we want to extract are not parsed out.


We are looking to extract the following values: logonType, client, client IP, and process name, as well as reduce the event description to something shorter.


Steps to solve:

  • Do this for the other event IDs that we need to modify (25008 and 25403 so far)
  • Save the updated log parser xml
  • Follow the instructions in the RSA Link post to create the skeleton -custom.xml file, referenced above.
  • Open the saved Log parser file and locate the three modified message lines, copy them and paste them in the -custom.xml file
  • Add the following to each message entry to indicate that you want to add the modified message above the default - insertBefore="LOGbndEX_25008_LOGbndEX" (add this below the eventcategory line on each message)
  • Save and copy the -custom.xml to the log decoder folder for msexchange and reload the parsers from the explore menu (decoder > parsers > reload - submit)
  • Replay the events and see the extra parsing goodness
  • Now we have the events extracted 
  • The of this matches the name (:01) in the -custom.xml file - 
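For reference, the copied message in the -custom.xml ends up shaped roughly like this. The id values follow the naming seen above and the insertBefore value is the one from this example, but the eventcategory and content attributes here are illustrative placeholders; copy the real attributes verbatim from the saved parser:

```xml
<!-- Illustrative shape only: a MESSAGE copied from the stock msexchange parser,
     with insertBefore added (below eventcategory) so it is evaluated first. -->
<MESSAGE
    id1="LOGbndEX_25008:01"
    id2="LOGbndEX_25008"
    eventcategory="1401000000"
    insertBefore="LOGbndEX_25008_LOGbndEX"
    content="..." />
```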


The custom xml file is attached which you can use in your environment.

GitHub - epartington/rsa_nw_log_LOGBinder: LOGBinder custom parser and application rule content 


The benefit of this is that the RSA Live parser stays updated while the custom entries are maintained; if the modifications are eventually rolled into the RSA parser, the -custom file can be removed in the future to use only the OOTB parser.


Look out for a future blog post with content for RSA NetWitness LOGBinder events.
