
Introduction to MITRE ATT&CK™

Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK™) for Enterprise is a framework that describes adversarial tactics from Initial Access (Exploit) through Command & Control (Maintain). ATT&CK™ Enterprise classifies post-compromise adversarial tactics and techniques against Windows™, Linux™, and macOS™.

This community-enriched model adds techniques used to realize each tactic. These techniques are not exhaustive, and the community adds them as they are observed and verified.

To read more about how ATT&CK™ helps resolve challenges and validate our defenses, please check this article.

Some Techniques are mapped to multiple Tactics. There are a total of 244 unique Techniques, which results in 314 non-unique Technique entries distributed across the 12 Tactics.

 

RSA Threat Content Mapping with MITRE ATT&CK™

RSA has three main kinds of Threat Content: a. Application Rules, b. ESA Rules and c. LUA Parsers. These content types can be classified further by the 'Medium' of each piece of content. Medium depends upon the source of the meta that a particular content piece uses; for example, if an application rule uses meta populated by packet data, its Medium will be packet. We can search Live content using the Medium criteria:

 

 

We will try to measure how much of the ATT&CK™ matrix is covered by RSA Threat Content, essentially mapping each piece of threat content to the one or more ATT&CK™ techniques it detects. This mapping needs to be saved in a file, and in the case of ATT&CK™ the file type will be JSON. For example, for application rules there will be mapping JSON files for each of the following:

  • Mapping of only RSA Application Rules with Medium = log
  • Mapping of only RSA Application Rules with Medium = packet
  • Mapping of only RSA Application Rules with Medium = endpoint
  • Mapping of only RSA Application Rules with Medium = log AND packet
  • Mapping of all RSA Application Rules (Without considering Medium)

The same pattern follows for ESA Rules and LUA Parsers, depending upon the Medium value.

These JSON files can be viewed graphically through the ATT&CK™ Navigator web GUI, which is described later in this post along with the steps for loading and reading the layers.

 

a. Application Rules - The Rule Library contains all the Application Rules, and we can map these rules (detection capabilities) to the tactics/techniques of the ATT&CK™ matrix. The mapping shows how many Tactics/Techniques are detected by RSA NetWitness Application Rules. We have generated JSON files for application rules which can be viewed in Navigator; they can be downloaded from the archive attached to this blog post. Following are the mappings for RSA Application Rules:

 

Content Type | Medium | Location of JSON in attached archive
RSA Application Rules | log | RSA_Threat_Content_ATTACK_JSON_Mapping\Application_Rules\Medium_log
RSA Application Rules | packet | RSA_Threat_Content_ATTACK_JSON_Mapping\Application_Rules\Medium_packet
RSA Application Rules | endpoint | RSA_Threat_Content_ATTACK_JSON_Mapping\Application_Rules\Medium_endpoint
RSA Application Rules | All Rules (without considering Medium) | RSA_Threat_Content_ATTACK_JSON_Mapping\Application_Rules\All_RSA_Application_Rules

 

Following is the plot which reflects the number of techniques detected by all RSA Application Rules with respect to ATT&CK™:

 

b. ESA Rules - ESA (Event Stream Analysis) is one of the defense systems used to generate alerts. ESA Rules provide real-time, complex event processing of log, packet, and endpoint meta across sessions. ESA Rules can identify threats and risks by recognizing adversarial Tactics, Techniques and Procedures (TTPs). We have generated JSON files for ESA rules which can be viewed in Navigator; they can be downloaded from the archive attached to this blog post. Following are the mappings for RSA ESA Rules:

 

Content Type | Medium | Location of JSON in attached archive
RSA ESA Rules | log | RSA_Threat_Content_ATTACK_JSON_Mapping\ESA_Rules\Medium_log
RSA ESA Rules | packet | RSA_Threat_Content_ATTACK_JSON_Mapping\ESA_Rules\Medium_packet
RSA ESA Rules | log AND packet | RSA_Threat_Content_ATTACK_JSON_Mapping\ESA_Rules\Medium_log_AND_packet
RSA ESA Rules | All Rules (without considering Medium) | RSA_Threat_Content_ATTACK_JSON_Mapping\ESA_Rules\All_RSA_ESA_Rules

 

Following is the plot which reflects the number of techniques detected by all RSA ESA Rules with respect to ATT&CK™:

 

c. LUA Parsers - Packet parsers identify the application layer protocol of sessions seen by the Decoder and extract metadata from the packet payloads of those sessions. Every packet parser is able to extract meta from every session. Among these packet parsers are LUA Parsers, which can be customized by customers. We have generated JSON files for LUA Parsers which can be viewed in Navigator; they can be downloaded from the archive attached to this blog post. Following are the mappings for RSA LUA Parsers:

 

Content Type | Medium | Location of JSON in attached archive
RSA LUA Parsers | packet | RSA_Threat_Content_ATTACK_JSON_Mapping\Lua_Parsers\Medium_packet
RSA LUA Parsers | All LUA Parsers (without considering Medium) | RSA_Threat_Content_ATTACK_JSON_Mapping\Lua_Parsers\All_RSA_Lua_Parsers

Note: The above two JSONs will be the same, since for LUA Parsers the only Medium is packet.

 

Following is the plot which reflects the number of techniques detected by all RSA LUA Parsers with respect to ATT&CK™:

 

 

d. Complete RSA Threat Content (Application Rules + ESA Rules + Lua Parsers) - We have combined all three content types into a single JSON file for ATT&CK™ Navigator, which can be downloaded from this blog post.

 

Content Type | Medium | Location of JSON in attached archive
RSA Threat Content | All RSA Threat Content | RSA_Threat_Content_ATTACK_JSON_Mapping\All_RSA_Threat_Content

 

Following is the plot which reflects the number of techniques detected by all three threat content types combined with respect to ATT&CK™ coverage:

These statistics are bound to change over time as new content is added or updated. We can update the ATT&CK™ coverage periodically, which gives us a consolidated picture of our complete defense system and lets us quantify and monitor the evolution of our detection capabilities.

 

In the above sections, we have talked about using the JSON files (attached to this blog post) in ATT&CK™ Navigator. In the next section, we will discuss how to load and view those JSON files.

 

Introduction to MITRE ATT&CK™ Navigator

ATT&CK™ Navigator is a tool openly available through GitHub which uses STIX 2.0 content to provide a layered visualization of the ATT&CK™ model.

ATT&CK™ Navigator stores information in JSON files, and each JSON file is a layer containing multiple techniques which can be opened in the Navigator web interface. The Navigator's underlying ATT&CK™ content is in STIX 2.0 format and can be fetched from a TAXII 2.0 server of your choice; for example, we can fetch ATT&CK™ content from MITRE's TAXII 2.0 server through its APIs.
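For readers who want to pull that STIX content themselves, here is a minimal sketch using curl. The discovery URL, collection path, and Accept headers reflect MITRE's publicly documented TAXII 2.0 server and may change over time; the collection ID placeholder must be filled in from the collection listing.

# Minimal sketch: pulling ATT&CK Enterprise STIX 2.0 content from MITRE's public
# TAXII 2.0 server with curl. List the collections first to find the ID you want.
curl -s -H 'Accept: application/vnd.oasis.taxii+json; version=2.0' https://cti-taxii.mitre.org/taxii/
curl -s -H 'Accept: application/vnd.oasis.taxii+json; version=2.0' https://cti-taxii.mitre.org/stix/collections/
curl -s -H 'Accept: application/vnd.oasis.stix+json; version=2.0' https://cti-taxii.mitre.org/stix/collections/<collection-id>/objects/ -o enterprise-attack-stix.json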

The techniques in this visualization can be:

  • Highlighted with color coding.
  • Added with a numerical score to signal severity/frequency of the technique.
  • Added with a comment to describe that occurrence of technique or any other meaningful information.

These layers can be exported in SVG and Excel formats.
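For reference, a heavily trimmed, illustrative fragment of what a Navigator layer JSON looks like is shown below. The field names (techniqueID, score, color, comment, enabled) come from the public Navigator layer format; the layer name, technique, and rule names here are made up for illustration and will differ from the attached files.

{
  "name": "RSA Application Rules - Medium packet (example)",
  "domain": "mitre-enterprise",
  "description": "Example layer fragment",
  "techniques": [
    {
      "techniqueID": "T1071",
      "score": 1,
      "color": "#fd8d3c",
      "comment": "example_app_rule_1||example_app_rule_2",
      "enabled": true
    }
  ]
}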

 

How to View a JSON in ATT&CK™ Navigator?

  1. Open MITRE’s ATT&CK™ Navigator web application. (https://mitre-attack.github.io/attack-navigator/enterprise/).
  2. In Navigator, open a new tab by clicking the '+' button.

    Navigator_Image
  3. Then click on 'Open Existing Layer' and then 'Upload from Local', which will let you choose a JSON file from your local machine (or the one attached to this blog).

    Navigator_Image

  4. After uploading the JSON file, the layer will open in Navigator and will look like this:

    Navigator_Image

 

        This visualization highlights the techniques covered in the JSON file with color and comments.

 

    5. While hovering the mouse over each colored technique, you can see three things:

  • Technique ID: The unique ID of each technique as per the ATT&CK™ framework.
  • Score: The threat score given to each technique.
  • Comment: We can write anything relevant in the comment to put things in perspective. In this case, the comment contains pipe ('||') delimited names of the content/rules/parsers which cover that technique. For example, if you have opened the application rule JSON, the comments will contain the pipe-delimited names of those application rules which detect the hovered technique.

 


Earlier blog posts regarding Threat Content coverage of ATT&CK™ can be found here and here.

Overview

The RSA NetWitness Platform is run by many of our customers on RSA's physical appliances, but the entire stack can run in AWS, Azure, VMware, or Hyper-V just fine. You can even mix and match between physical and virtual hosts however you prefer. Our Virtual Host Installation Guide does a great job outlining the steps to build a virtual RSA NetWitness Platform host.

 

However, there is frequently a need to build smaller hosts to gather data in smaller remote locations. Small issues that don't apply to larger hosts can cause RSA NetWitness Platform folders to overrun their allotments and cause NetWitness to stop capture or aggregation. This post will primarily cover the settings to focus on when building smaller virtual hosts. It will also include some tricks to monitor your NetWitness hosts to make sure they don't reach unhealthy levels of storage. Of course, many of these tips will also apply to virtual hosts of all sizes, so hopefully you can benefit regardless of your particular virtual implementation.

 

To ISO or Not to ISO

RSA provides both an ISO and an OVA (and a Hyper-V VHD) to use to build your virtual hosts. Which should you use? If you are building a full RSA NetWitness Platform implementation virtually, you will have to use the ISO to build your Admin Server because the OVA does not come with all of the required RPMs. As for the other hosts, using the OVA isn't a bad idea. The OVA is a much smaller file to deal with (~450MB OVA vs ~6GB ISO) and it has already completed the bootstrap, which is one of the longest steps of the installation. However, the OVA has already provisioned the logical volumes for a 195GB host. That is the recommended size for the OS drive, but if you want to give it more than that, the ISO is the easiest option - and I say that as someone who rather enjoys partitioning Linux file systems! As for assigning less than 195GB, I would recommend thin provisioning your host's OS drive rather than installing with less than what RSA recommends.

 

Keep in mind that your log, network, and endpoint data stores will be separate from this. The OS drive is strictly for holding OS files, NetWitness internal service log entries, temporary data, and some other miscellaneous data. You will add disks to accommodate storing your log, network, and/or endpoint data in a later step.

 

Installing the ISO is extremely simple: create your virtual host, give it the CPU, RAM, and HDD storage as recommended in the installation guide or by your RSA engineer (different requirements for different services and different levels of throughput), attach the ISO, and turn on the VM. It will boot to the blue installation screen where you will hit <Enter>. Once you get to the following screen...

...make sure you enter "y" or "Y" and hit <Enter>. Once the bootstrap is complete, the system will reboot to the login prompt. After logging in, you will run "nwsetup-tui" and you can refer to the installation guide for instructions on how to properly orchestrate a host from there.

 

VM Host Sizing

In the previous step, you installed the bootstrapped host via the ISO or the OVA and possibly orchestrated the services as well. In the case of any host that will retain data - Decoders (network / log), Hybrids (endpoint / network / log), Concentrators, or Archivers - you will need to also provision storage for that data. Sizing that can be difficult, but I have a calculator that can help size most of those appropriately.

 

...except Archivers. Why not Archivers? Archivers are employed, generally, for regulatory purposes. You should engage your RSA Engineer to make sure you size them appropriately so that you don't run into issues with auditors. You might be logging especially large log sources, while the calculator only uses a static 600 bytes per message. You can also retain more or fewer meta keys, which can drastically affect how much storage to assign. And after all, while the "[Small]" in the title of this post was in hard brackets, this guide is generally geared towards smaller deployments / hosts. The sole reason to use an Archiver is that the amount of storage has grown significantly beyond any definition of the word "small".

 

To use the calculator, there are a number of things to understand:

  • The calculator is used to calculate Hybrid storage, because most "small" environments will use Hybrids rather than discrete Decoder and Concentrator pairs. If you are using separate Decoders and Concentrators, you can simply break up the calculated storage per service and split up the provisioning commands. NOTE: There is no such thing as a "discrete Endpoint Decoder". Endpoint servers only come as Hybrids, whether virtual or physical.
  • When you enter information to size up your storage, at the bottom of the calculator you will get provisioning commands to set up your hosts. If you have any Hosts entered in rows 6 or 7, you'll get commands to provision storage for an Endpoint Log Hybrid. If you don't have any Hosts, but you have Log Events >0 GB/day, you will get commands to provision storage for a Log Hybrid. If you have Log Events at 0 GB/day and you have 0 Hosts but your network traffic is >0 GB/day, you will get commands to provision storage for a Network Hybrid.
    • If you are sizing an Endpoint Log Hybrid, keep in mind that you cannot currently download modules automatically, download memory dumps, or download Master File Tables from hosts. Those features which were in ECAT 4.x will be back in the product as of 11.4, and I've included commands to provision them. However, the amount of storage you provision for those purposes is entirely up to you, so you will need to just type the numbers into that cell. They can both be relatively small (10 - 30GB) if you don't plan to auto-download unsigned, new modules. However, once the feature is back, we do highly recommend that you automatically have NetWitness Endpoint download any unsigned, unknown modules less than 5MB - 10MB, and estimate storage for your environment appropriately.
  • Once storage is provisioned for each of the given volumes, the last provisioning command is to give 100% of the remaining space to the MetaDB on the Concentrator. That is done on purpose because if I have any extra space left over, that is where I want it. However, you also must make sure (likely with df -h) that you have enough storage in that logical volume. If not, you likely didn't give the entire partition enough space.
    • For this same reason, if you end up using this calculator to build a discrete Decoder, you'll likely want to change the command that would provision your PacketDB to use the "100%FREE" version of the lvcreate command. The syntax would be the same as the one I use for the Concentrator's MetaDB.
  • When you enter the scale information for Network Traffic, you might wonder, "But I don't know how many GB/day of network traffic I plan to send to NetWitness!" The easiest rule of thumb is that if you expect to see 100Mbps on average for a 24-hour period (that would mean ~175Mbps over the peak hour and 10Mbps overnight), that is 1TB/day of traffic (see the quick arithmetic after this list). If you expect to see 10Mbps because it's a small office or home environment, assume 100GB/day. If you have absolutely no idea, just throw a number in there.
  • For logs, in a small environment, if you had any log management system you can probably figure out how many GB/day of data you were generating before. If you expect a certain number of Events per Second, I put a handy calculator to turn that into GB/day on row 10. If you have no idea, then once again, I suggest you just throw something in there.
  • You can edit the calculator if you like. The password is just "rsa". I only password protect it to make sure that first-time users aren't editing cells they shouldn't and breaking it.
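As promised in the Network Traffic bullet above, here is the back-of-the-envelope arithmetic behind the 100Mbps ≈ 1TB/day rule of thumb (a sanity check only, not a sizing guarantee):

# 100 Mbps sustained over a day, in round decimal units:
#   100 Mbps / 8 bits-per-byte     = 12.5 MB/s
#   12.5 MB/s * 86,400 seconds/day = 1,080,000 MB  ~= 1.08 TB/day  ~= 1 TB/day
echo "100 / 8 * 86400 / 1000 / 1000" | bc -l    # prints ~1.08 (TB/day)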

 

The calculator is called NW Virtual Hybrid Sizing Calculator v1.0.xlsx. PLEASE, if you find any errors, leave a comment below or contact me somehow so that I can fix it for others.

 

Raw Event Data Storage

The Virtual Host Installation Guide covers how to add storage for the various RSA NetWitness Platform databases in Step 3. It also covers how to calculate the amount of storage you'll need to allocate to each database for any given host/service. For the Admin Server, Archiver, Broker, ESA, Log Collector, and UEBA hosts, all storage will get dumped into the /var/netwitness/ folder. The instructions for extending that volume group and logical volume are in the installation guide and generally involve: pvcreate, then vgextend, then lvextend, and finally xfs_growfs.
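As a rough sketch of that sequence for one of those hosts, the commands look something like the following. The device name, volume group, and logical volume names are assumptions for illustration only; check vgs/lvs on your host and follow the installation guide's exact names and sizes.

# Example only: grow /var/netwitness after adding a new virtual disk (/dev/sdb assumed).
pvcreate /dev/sdb                                    # initialize the new disk for LVM
vgextend netwitness_vg00 /dev/sdb                    # add it to the existing volume group (name assumed)
lvextend -l +100%FREE /dev/netwitness_vg00/nwhome    # grow the LV mounted at /var/netwitness (name assumed)
xfs_growfs /var/netwitness                           # grow the XFS file system to use the new space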

 

For Decoders, Concentrators, and Hybrids, I've put together the commands that you need in the attached *Commands.txt text files to set up the storage for those hosts. I recommend running all of these scripts to build the partitions, volume groups, and logical volumes after you run nwsetup-tui, but *BEFORE* you install the services on the hosts. A few things to note:

  • I name the volume group "vg01" for the sake of brevity. The name you assign does not matter at all.
  • In Step 5, I assign storage to the "root" folder for each respective service; /var/netwitness/decoder for Network Decoders, /var/netwitness/concentrator for Concentrators, and /var/netwitness/logdecoder for Log Decoders. This is not required, but I prefer to create these volumes so that I can monitor them in case they fill up. Note: they must have at least 5GB of storage assigned, but larger VMs can have as much as 30GB.
  • Also in Step 5, you will need to replace the lv sizes with the proper sizes based on the Installation Guide and/or your RSA NetWitness Platform engineer. In my scripts, I assign specific sizes to every volume except the last one, which I then assign whatever free space is left with the "100%FREE" command.
  • For Step 10, I wrote the entries so that you can copy and paste them directly, in an SSH session, into the /etc/fstab file on the host. You can paste them directly at the bottom of the existing file (an illustrative entry is shown after this list). Once that is done, before you install services, make sure to reboot the host to confirm there aren't any errors in that fstab file. The syntax is very particular and any errors will cause the system to fail to come up. If that happens, just open a Console window to the machine, hit CTRL+D to enter maintenance mode, and then fix the fstab file.
  • I want to say this again because it's very important: after adding your changes to the fstab file, reboot the machine and make sure your syntax was correct!
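For illustration, the fstab entries that Step 10 produces look roughly like the lines below. The logical volume names and mount options here are assumptions, so use exactly what your *Commands.txt script generated.

# Example /etc/fstab entries only -- match the LV names and options from your *Commands.txt.
/dev/vg01/decoder      /var/netwitness/decoder            xfs  defaults  0 0
/dev/vg01/packetdb     /var/netwitness/decoder/packetdb   xfs  defaults  0 0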

Just view the *Commands.txt file attached to this post that corresponds to the type of host you're trying to install.

 

Install Services

This step is straightforward. If you haven't already, go to Admin --> Hosts and enable the host. Then install the services just as outlined in the Installation Guide.

 

Validate Folder Sizes - RSA NetWitness Platform Databases

In order to properly roll off the oldest entries in NWDB (NetWitness Database, our proprietary database format), we have to make sure that the RSA NetWitness Platform knows how much storage each database has to fill. Navigate to Admin --> Services, and for any Concentrator or Decoder/Log Decoder service, go to the Explore page. Expand the "database" menu item on the left-hand side, and click on "config". Here I show the page for an RSA Log Decoder service on a physical Endpoint Log Hybrid:

The sizes you see there are 95% of the corresponding folders we built using the provisioning commands, measured in 1,073,741,824-byte blocks. If you want to be exact, you can run "df --block-size=G", multiply a folder's size by 95%, and round to two decimal places to get the value RSA NetWitness Platform will place in the corresponding line above. Once the data in one of these folders exceeds these limits, RSA NetWitness Platform rolls off data.
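As a hypothetical example of that math (the volume size below is made up):

# Suppose df reports the Log Decoder's packetdb volume as 1100G:
df --block-size=G /var/netwitness/logdecoder/packetdb
# 1100 * 0.95 = 1045 -> the corresponding database size shown in Explore should read roughly 1045 GB.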

 

If you followed this guide and the Virtual Host Installation Guide, you will see folder sizes here that match what you provisioned. But what if they don't match or you made a mistake? Well, you can reset those by right-clicking on the "database" menu item and clicking "Properties":

 

At the bottom-right of the window, the Properties pane will open up. Select "reconfig" from the drop-down and click the Send button:

You can see that these values match what we saw in the previous screen. If these values still don't look correct - usually, if they are all the same - then your folders aren't mounted to separate logical volumes. If these values do look correct, you can remove the "=xx.xxTB" or "=xx.xxGB" from the entries on the previous screen. Then, back in the Properties pane, in the Parameters box, type update=1 and click Send again. It will append those values to the appropriate entries at the top, though you'll have to refresh the screen to see the update.

 

The indexes for each of these services have a separate entry. On the Explore page, you will see a menu item called "Index", and the settings are under the "config" sub-menu. Just like above, if you need to reset the folder size for that, you can right-click on "Index" and run the reconfig commands like before.

 

Validate Thresholds - MongoDB

In addition to NWDB, NetWitness also stores Endpoint scan results (primarily, what you see in Navigate --> Hosts) in mongoDB on the Endpoint Log Hybrid in the /var/netwitness/mongo folder. NetWitness does not display the folder sizes in the Endpoint Server service's Explore page as it does for those services above. Instead, it just looks at the amount of storage in the /var/netwitness/mongo folder, or, if that isn't separately partitioned, in the /var/netwitness folder. Then it compares the current usage to the value in the "rollover-after" setting here:

Your system may not use this setting if your Data Retention policies (found at Admin --> Services --> Endpoint Server --> Config --> Data Retention Scheduler tab) don't already roll over data before the folder hits 80%. You should also be aware of the settings under endpoint/data-store-thresholds:

If the storage in the corresponding folder (/var/netwitness/mongo if it's partitioned separately, and /var/netwitness if it's not) crosses these thresholds, you will eventually receive Health & Wellness alerts that correspond to those thresholds.

 

Minimum Available Space - The Key to Reliability

The other settings you may have noticed in the previous screenshots, which we ignored, are the <database_name>.free.space.min settings. A given database can grow past the maximum size we've set up above with no issues, but capture/aggregation will stop if there is less free space than what is specified in the free.space.min setting for the corresponding service. Just as the folder size above is set to 95% of the total volume size, free.space.min is set to 0.865% of the total size by default (roughly 17GB on a 2TB packetdb volume, for example). In both cases, the default setting can be replaced manually with whatever you would like to enter. For most large VMs, the default is fine. However, for smaller hosts capturing small amounts of data, this default may be a bit high and can be adjusted.

 

Please note: the indexes do not have a similar free.space.min setting, and capture/aggregation will continue to run, even if the index volumes are essentially full.

 

For Mongo, you should also be aware of the settings under Admin --> Services --> Endpoint Server --> Explore --> endpoint/data-store-thresholds:

If the storage in the corresponding folder (/var/netwitness/mongo if it's partitioned; /var/netwitness if it's not) crosses the warning-percent level, <this will happen>. If it crosses the fatal-percent level, <this will happen>.

 

Monitoring Part 1: Folder Sizes

As I mentioned in the Overview, for small hosts (roughly <1TB of total storage), I recommend monitoring your volumes to make sure that they don't fill up. To do this, I modified a script I found here to monitor file system usage:


It pulls back every folder other than temp and boot folders, and if any are at 90% or higher, it will generate a syslog message sent to the IP designated by the -n switch (10.10.10.10 in the image above). I've attached that script below as checkVolumeSizes.sh. (Remember, use chmod to make it executable!) If you run crontab -e from an SSH terminal, the RSA NetWitness Platform's underlying CentOS OS will open vi and allow you to set a schedule to run the script. I imagine most of you reading this are familiar with crontab syntax, but if you're not, or if you want to design something overly tricky, this site takes all the work out of it for you: https://crontab.guru/.
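For example, a crontab entry like the one below runs the check on a schedule; the script path and the 15-minute interval are just illustrative, and the -n destination is the syslog receiver mentioned above.

# Run the volume check every 15 minutes and syslog any 90%+ volumes to 10.10.10.10.
*/15 * * * * /root/checkVolumeSizes.sh -n 10.10.10.10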

 

The messages generated will look like this:

You can ingest that into any system that can ingest syslog messages and alert on it as you see fit. Seeing as the RSA NetWitness Platform *IS* a SIEM, it seemed only right to go ahead and monitor that using the RSA NetWitness Platform. The first step involved in that is properly parsing the message, so I built a parser for that using the NetWitness Log Parser Tool (download here: https://community.rsa.com/docs/DOC-94172, learn how to use it here: RSA ESI Beta 3 - YouTube and Parser Development When No Message ID Exists - YouTube). It took maybe 5 minutes.

 

But there aren't any out-of-the-box keys meant to store the size of logical volumes, and I wanted to include that in the e-mail I send to myself, so I added a meta key to the RSA NetWitness Platform for that. If you use my parser you *MUST* create a custom meta key in your system in order for the parser to work properly. Add the custom meta key to the table-map-custom.xml file on the Log Decoder where you are directing these messages.


You can find that attached as table-map-custom.txt. I didn't want to call it table-map-custom.xml because it needs to be added to the existing file, not pasted over the existing file in its entirety.
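To give a sense of what that addition looks like, a table-map-custom.xml entry is a single <mapping> element like the hypothetical one below. The real key name, format, and values are in the attached table-map-custom.txt; the file itself normally lives under /etc/netwitness/ng/envision/etc/ on the Log Decoder.

<!-- Hypothetical illustration only; use the entry from the attached table-map-custom.txt -->
<mapping envisionName="disk_free_mb" nwName="disk.free.mb" flags="None" format="Int32"/>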

 

Now, download nwdiskalert.envision, navigate to Admin --> Log Decoder --> Config, click the Parsers tab, and upload that file. After uploading, if you want to make sure the Log Decoder reloaded its parsers, you can switch from Config to Explore:



Once the page loads, expand the "decoder" menu, right-click on "parsers", and choose "Properties".




In the Properties pane, select "reload" from the drop-down menu and then click Send. Now the parsers have been reloaded and you're all set to ingest these messages!

 

Monitoring Part 2: ESA Correlation Rules

I built three ESA rules to monitor my file system at home, one each for medium, high, and critical severity alerts. Here is what I classify as each:

  • Medium Severity:
    • Goal:
      • Monitor folders that shouldn't ever fill up when they reach high levels of utilization, but won't cause any service issues. 
    • Rules:
      • Any of the following folders are at least 90% but no more than 94% disk usage:
        • /home
        • /var/log
        • /var/netwitness
        • /var/netwitness/concentrator
        • /var/netwitness/decoder
        • /var/netwitness/logdecoder
  • High Severity:
    • Goals:
      • Monitor folders that shouldn't ever fill up when they reach extremely high levels of utilization, but won't cause any service issues
      • Monitor folders that could cause service interruption once they pass 95% (which is where many of them will sit most of the time) but haven't yet reached a point where service interruption will occur
      • Monitor the mongodb folder if it reaches concerning levels
    • Rules:
      • Any of the following folders are at least 95% but no more than 97% disk usage:
        • /home
        • /var/log
        • /var/netwitness
        • /var/netwitness/concentrator
        • /var/netwitness/decoder
        • /var/netwitness/logdecoder
        • /var/netwitness/concentrator/index
        • /var/netwitness/decoder/index
      • Any of the following folders are at 96% or 97%:
        • /var/netwitness/concentrator/sessiondb
        • /var/netwitness/concentrator/metadb
        • /var/netwitness/decoder/sessiondb
        • /var/netwitness/decoder/metadb
        • /var/netwitness/decoder/packetdb
        • /var/netwitness/logdecoder/sessiondb
        • /var/netwitness/logdecoder/metadb
        • /var/netwitness/logdecoder/packetdb
      • The /var/netwitness/mongo folder is at least 90% and no more than 94%
  • Critical Severity:
    • Goals:
      • Monitor folders that shouldn't ever fill up when they reach critical levels of utilization
      • Monitor folders that could cause service interruption once they pass 97% and will soon - or are currently - causing service interruption
      • Monitor the mongodb folder if it reaches its "fatal-percent" setting
    • Rules:
      • Any of the folders in the High Severity list are at 98% or above
      • The /var/netwitness/mongo folder is at 95% or above

 

You can find those attached as nwDiskMonitoringESARules_<severity>_Basic.txt. You might ask yourself, "Why did he call them 'Basic'?" Well, that's because I actually built more detailed rules in my lab to monitor the free-space values returned in the event logs. It's absolutely overkill, and it causes the rules to look like this:

Do you really want to do that to yourself? You really shouldn't, but if you insist, feel free to reach out to me and I'll send you those rules as well.

 

Monitoring Part 3: Generating Notifications

When these rules detect something, of course you'll want to generate an e-mail to notify you of their current state. I use a single notification template for all three ESA Rules. I put my notification template in the attached file nwDiskMonitoringNotificationTemplate.txt. The template breaks down like this:

  • Lines 1 - 20: Builds a banner at the top of the e-mail that is yellow for medium alerts, orange for high, and red for critical
  • Line 25: Prints the time the event was generated
  • Line 27: Prints the IP of the RSA NetWitness Platform host that generated the event log
  • Line 29: Prints the folder that the alert is related to
  • Line 31: Prints the % utilization of the folder
  • Line 33: Prints the amount of free space, in MB, left in that folder
  • Line 35: Generates a hyperlink to the raw event log in the RSA NetWitness Platform; make sure you edit both the <NW_URL_or_IP> and the device ID (mine is 6)

(Have questions about any other items in this notification template? Check out my other relevant blog post here: Building the Notifications of Your Dreams in the RSA NetWitness Platform.)

 

Once you've updated those items, place it under Admin --> System --> Global Notifications --> Template (tab), and make sure you select that template when adding your ESA Rules. You can also build an Incident Rule in the RSA NetWitness Platform if you want to generate incidents for these alerts. Here is mine, for reference:

 

Summary

I can't emphasize enough that the Virtual Host Installation Guide has very comprehensive instructions for setting up a virtual RSA NetWitness Platform host, and you should make sure you follow those instructions. However, following some of the additional steps included in this guide can give you peace of mind that your RSA NetWitness Platform environment is running smoothly and collecting your critical security forensic information.

 

Future note: I plan to build some Event Source Monitoring rules to make sure that my hosts are still sending logs. For example, the packetdb folder on your Decoders and Log Decoders should reach 95% eventually and then roll off data, while your Concentrators should reach 95% on their metadb folder. Those should continue to generate logs once they hit 90% utilization at every interval you specified in the cron job. If I ever get the free time to create those, I'll update this post with that information. If someone wants to build that on their own, be my guest!!

Introducing RSA NetWitness Platform's support for AWS VPC Traffic Mirroring!

 

By partnering with AWS and integrating with AWS VPC Traffic Mirroring, customers are able to access the right virtual traffic and network metadata from AWS environments. AWS VPC Traffic Mirroring allows users to capture and inspect network traffic to analyze packets without using any third-party packet forwarding agents. The solution provides insight and access to network traffic across VPC infrastructure.

 

Packets can now be captured, retained, analyzed and stored in the AWS cloud, bringing additional visibility and security with the RSA NetWitness Platform. With this agent-less packet capture capability, we’re able to provide analysts the context they need to understand the threats they’re investigating. Combining network visibility with other sources such as Logs, Endpoint and Netflow, we’re able to provide a single view to the analyst!

 

RSA NetWitness Platform enables customers to obtain the visibility needed to secure critical infrastructure, and empowers any analyst to identify, understand, and mitigate advanced threats. RSA NetWitness Platform's integration with AWS enables customers to close the visibility gap created by workloads in the cloud. This solution provides flexible AWS deployment options, which allow NetWitness components to be deployed in either a Full Stack (all cloud) or Hybrid (on-premise & cloud) configuration.

 

Hybrid Deployment

RSA NetWitness - AWS VPC Traffic Mirroring

 

For technical implementation details, see our AWS Deployment Guide

It often happens to me that while I am testing new alerts and incident aggregation rules, I find that the aggregation condition(s) I chose in my Incident Rule are not what I want.  While I could re-create the raw alerts from scratch, I wanted an easier method to tell the Respond engine to re-apply its aggregation rule policies on the alerts that already exist in the database.

 

To be clear, the Respond engine is always attempting to apply all active and valid Incident Rules against un-aggregated and un-affiliated alerts in the database -- that is, any alert that has not been previously aggregated into any incident can be automatically aggregated into an incident if an incident rule with matching conditions is changed/created.  But for previously aggregated alerts whose incidents have been deleted (leaving the alerts un-aggregated but previously-affiliated), the Respond engine will not attempt to re-aggregate them.

 

So my goal, then, was to get the Respond engine to include these previously-affiliated alerts in its aggregation attempts.  To achieve this, the alerts simply needed to be updated to remove their previously-affiliated status.  And to make it easy to change dozens or even hundreds of alerts at once, I wrote a simple shell script (attached to this blog and pasted below) to do it all for me.

 

#!/bin/bash
#
#grab the deploy_admin password
DEPLOY_PW=$(security-cli-client --get-config-prop --prop-hierarchy nw.security-client --prop-name platform.deployment.password --quiet)

#set a desired time range to query for alerts
#examples: "24 hours ago" or "14 days ago" or "4 weeks ago"
timeRange=$(date +%s%N -d "30 days ago" | cut -b1-13)

#identify primaryESA host
primaryESA=$(echo -e "use orchestration-server\ndb.host.find({installedServices:\"ESAPrimary\"},{hostname:1})" | mongo admin -u deploy_admin -p $DEPLOY_PW --quiet | grep -Po "hostname.*\"" | sed -e "s/hostname.\{5\}\|\"//g")

#change status on all alerts that were part of a deleted incident
#within the timerange from "REMOVED_FROM_INCIDENT" to "NORMALIZED"
echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

A couple notes on the script:

  • I used one extremely generic parameter (timestamp within last 30 days) to limit the database query and update operation (line 15)
    • you should feel free to modify the timeRange (line 8) to suit your needs
    • you should also feel free to (carefully) modify the database query to focus on specific alerts in your environment
      • for example, given the following raw alert:

 

...you could change line 15 and add:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.moduleName\":\"Alert with source and destination IP values\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

...or:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.events.ip_src\":\"192.168.20.20\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

 

...or:

echo -e "use respond-server\ndb.alert.update({\$and:[{status:\"REMOVED_FROM_INCIDENT\"},{\"originalHeaders.timestamp\":{\$gte:$timeRange}},{\"originalAlert.moduleName\":\"Alert with source and destination IP values\"},{\"originalAlert.events.ip_src\":\"192.168.20.20\"}]},{\$set:{status:\"NORMALIZED\"}},{multi:true})" | mongo admin -u deploy_admin -p $DEPLOY_PW --host $primaryESA --quiet

  • a successful run of the script will produce output like this, showing you how many alerts in the database were modified (3, in this case):
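The screenshot isn't reproduced here, but the acknowledgement printed by the mongo shell for a multi-document update looks roughly like the line below, with nModified carrying the count of changed alerts (3 in this example):

WriteResult({ "nMatched" : 3, "nUpserted" : 0, "nModified" : 3 })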

 

Of course, I recommend testing this (and most everything else) in a pre-prod or test NetWitness environment, if you have one.  And should you have any questions about what might be a good and/or valid database query, the Link community is always on hand to help (please have screenshots and/or specifics about your alerts ready...it's hard to help without knowing details...  ).
