
Introduction to MITRE’s ATT&CK™

 

Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK™) for Enterprise is a framework that describes adversarial actions and tactics from Initial Access (Exploit) through Command & Control (Maintain). ATT&CK™ Enterprise classifies post-compromise adversarial tactics and techniques against Windows™, Linux™ and macOS™.

 

MITRE has also developed two complementary frameworks: PRE-ATT&CK™ and the ATT&CK™ Mobile Profile. PRE-ATT&CK™ categorizes pre-compromise tactics, techniques and procedures (TTPs) independent of platform/OS: the adversary's planning, information gathering, reconnaissance and setup before the victim is compromised.

 

The ATT&CK™ Mobile Profile is specific to Android and iOS mobile environments and has three matrices that classify tactics and techniques. It covers not just post-compromise tactics and techniques but also pre-compromise TTPs in mobile environments.

 

This community-enriched model lists the techniques used to realize each tactic. The list is not exhaustive; the community adds techniques as they are observed and verified.

 

This matrix is helpful for validating defenses already in place and for designing new security measures. It can be used in the following ways to improve and validate defenses:

 

  1. This framework can be used to create adversary emulation plans that hunters and defenders can use to test and verify their defenses. These plans also ensure you are testing against an ever-evolving, industry-standard framework.
  2. Adversary behavior can be mapped to the ATT&CK™ matrix and used for analytics to improve your Indicators of Compromise (IOCs) or Behaviors of Compromise (BOCs). This enhances your detection capabilities with greater insight into threat-actor-specific information.
  3. Mapping your existing defenses to this matrix gives a visualization of the tactics and techniques you detect, presenting an opportunity to assess gaps and prioritize your efforts to build new defenses.
  4. The ATT&CK™ framework can help build threat intelligence that covers not just TTPs but also the threat groups and software in use. With this approach, detection does not depend on TTPs alone but also on their relationships with the threat groups and software in play.

 


Figure 1: Relationships between Threat-Group, Software, Tactics and Techniques

 

This framework resolves the following problems:

 

  1. Existing Kill Chain concepts were too abstract to relate new techniques to new types of detection capabilities and defenses. ATT&CK can be thought of as a Kill Chain on steroids.
  2. Techniques added or considered should be observed in a real environment and not just derived from theoretical concepts. Community contribution ensures that the techniques have been seen in the wild and are therefore suitable for people using this model in real environments.
  3. This model gives a common language and terminology across different environments and threat actors, which is important in making it an industry standard.
  4. Granular indicators like domain names, hashes and protocols do not provide enough information to see the bigger picture of how the threat actor is exploiting the system and how it relates to the various sub-systems and tools used by the adversary. This model gives a good understanding of the relationship between the tactics and techniques used, which can then be used to drill down into only the important granular details.
  5. This model provides a common repository from which this information can be consumed via APIs and programming. It is available via a public TAXII 2.0 server and served as STIX 2.0 content (see the sketch after this list).
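
As an illustration of that last point, here is a minimal sketch of pulling Enterprise ATT&CK techniques from MITRE's public TAXII 2.0 server. It assumes the taxii2-client (with its v20 module) and stix2 Python packages, and the commonly documented Enterprise collection URL; verify both before relying on them.

from taxii2client.v20 import Collection
from stix2 import TAXIICollectionSource, Filter

# Enterprise ATT&CK collection on MITRE's public TAXII 2.0 server (ID as commonly documented)
ENTERPRISE = "https://cti-taxii.mitre.org/stix/collections/95ecc380-afe9-11e4-9b6c-751b66dd541e/"

src = TAXIICollectionSource(Collection(ENTERPRISE))

# Each ATT&CK technique is a STIX "attack-pattern" object; print its ATT&CK ID and name
for technique in src.query([Filter("type", "=", "attack-pattern")]):
    refs = technique.get("external_references", [])
    ext_id = next((r["external_id"] for r in refs if r.get("source_name") == "mitre-attack"), "?")
    print(ext_id, technique["name"])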

 

ATT&CK Navigator

 

ATT&CK Navigator is a tool openly available on GitHub that uses the STIX 2.0 content to provide a layered visualization of the ATT&CK model.

 


 

Figure 2: ATT&CK Navigator

 

By default, Navigator uses MITRE's TAXII server, but it can be changed to use any TAXII server of choice. Navigator builds its visualizations from JSON layer files, which can be created programmatically.

 

RSA NetWitness Event Stream Analysis (ESA)

 

ESA is one of the defense systems that is used to generate alerts. ESA Rules provide real-time, complex event processing of log, packet, and endpoint meta across sessions. ESA Rules can identify threats and risks by recognizing adversarial Tactics, Techniques and Procedures (TTPs).

 

The following are ESA Components:

 

  1. Alert - Output from a rule that matches data in the environment.
  2. Template - Convert the rule syntax into code (Esper) that ESA understands.
  3. Constituent Events - All of the events involved in an alert, including the trigger event.
  4. Rule Library - A list of all the ESA Rules that have been created.
  5. Deployments - A list of the ESA Rules that have been deployed to an ESA device.

 

The Rule Library contains all the ESA Rules, and we can map these rules (our detection capabilities) to the tactics/techniques of the ATT&CK matrix. The mapping shows how many tactics/techniques are detected by ESA. Attached to this blog post is an Excel workbook with the mapping between ESA Rules and ATT&CK Tactics/Techniques.

 

In other words, the overlap between ESA Rules and the ATT&CK matrix not only shows how far our detection capabilities reach across the matrix but also quantifies the evolution of the product. We can measure how much we are improving and in which directions.

 

We created a layer as a JSON file with all the ESA Rules mapped to techniques, then imported that layer into the ATT&CK Navigator matrix to show the overlap. In the following image, we can see all the techniques highlighted that are detected by ESA Rules:

 


 

Figure 3: ATT&CK Navigator Mapping to ESA Rules
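
For reference, a layer like the one imported above is plain JSON and can be generated programmatically. The sketch below uses a hypothetical rule-to-technique mapping and the commonly used Navigator layer fields; check the layer-format version against your Navigator release.

import json

# Hypothetical mapping of ATT&CK technique IDs to the ESA Rules that detect them
esa_mapping = {
    "T1110": ["Multiple Failed Logons Followed by Success"],
    "T1021": ["Lateral Movement Suspected Windows"],
}

layer = {
    "name": "ESA Rules",
    "version": "2.1",                      # Navigator layer format version (assumption)
    "domain": "mitre-enterprise",
    "description": "ESA Rules mapped to ATT&CK techniques",
    "techniques": [
        {"techniqueID": tid, "score": len(rules), "comment": "; ".join(rules)}
        for tid, rules in esa_mapping.items()
    ],
}

with open("esa_rules_layer.json", "w") as f:
    json.dump(layer, f, indent=2)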

 

To quantify how far the ESA Rules spread across the matrix, we can refer to the following plot:

 


 

Figure 4: Plot for ATT&CK Matrix Mapping to ESA Rules

 

Moving forward, we can map our other detection capabilities to the ATT&CK matrix. This will give us a consolidated picture of our complete defense system so we can quantify and monitor the evolution of our detection capabilities.

 

References:

[1] https://www.mitre.org/sites/default/files/publications/pr-18-0944-11-mitre-attack-design-and-philosophy.pdf

[2] https://attack.mitre.org/wiki/Main_Page

[3] https://attack.mitre.org/pre-attack/index.php/Main_Page

[4] https://attack.mitre.org/mobile/index.php/Main_Page

[5] https://www.mitre.org/capabilities/cybersecurity/overview/cybersecurity-blog/using-attck-to-advance-cyber-threat

[6] https://www.mitre.org/capabilities/cybersecurity/overview/cybersecurity-blog/using-attck-to-advance-cyber-threat-0

 

Thanks to Michael Sconzo and Raymond Carney for their valuable suggestions.

A recent advisory was sent out for firmware updates to a number of base components in NetWitness.

 

RSA NetWitness Availability of BIOS & iDRAC Firmware Updates

 

The advisory mentioned three components that potentially needed updates and provided instructions for updating them.

 

How do you gather the state of the environment quickly, with the fewest steps, so that you can determine whether there is work to be done?

 

Chef to the rescue ...

You might need these tools installed on your appliances to run the commands that follow (install perccli and ipmitool).

 

From the NW11 head server (node0)

salt '*' pkg.install "perccli,ipmitool" 2>/dev/null

Then you can query for the current versions of the PERC firmware, BIOS and iDRAC:

salt '*' cmd.run 'hostname; /opt/MegaRAID/perccli/perccli64 /c0 show | grep "Product Name"; /opt/MegaRAID/perccli/perccli64 /c0 show | grep "FW Package Build"; /opt/MegaRAID/perccli/perccli64 /c1 show | grep "Product Name"; /opt/MegaRAID/perccli/perccli64 /c1 show | grep "FW Package Build"' 2>/dev/null
salt '*' cmd.run 'hostname; ipmitool -I open bmc info | grep "Firmware Revision"' 2>/dev/null
salt '*' cmd.run 'hostname; ip address show dev eth0 | grep inet; dmidecode -s system-serial-number; dmidecode -s bios-version; dmidecode -s system-product-name;' 2>&-

 

The output will list the host and the version of the software that exists, which can be used to determine whether an update is required on your NetWitness appliances.
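
If you want a quick pass/fail summary instead of eyeballing the output, a small sketch along these lines could work. It assumes you re-run the BIOS query with salt's --out=json --static options and pipe the result to the script; the minimum version shown is a placeholder, not the advisory's actual value.

import json
import sys

# Placeholder: substitute the minimum BIOS version from the advisory for your appliance model
MIN_BIOS = "2.8.0"

def version_tuple(v):
    # "2.4.3" -> (2, 4, 3) for a simple numeric comparison
    return tuple(int(x) for x in v.strip().split(".") if x.isdigit())

# Expects the output of:  salt '*' cmd.run 'dmidecode -s bios-version' --out=json --static
results = json.load(sys.stdin)
for host, bios in sorted(results.items()):
    status = "OK" if version_tuple(bios) >= version_tuple(MIN_BIOS) else "UPDATE NEEDED"
    print("{0}: BIOS {1} -> {2}".format(host, bios.strip(), status))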

 

Ideally this would be in Health and Wellness, where policies could be written against it with alerts (and an export-to-CSV function would be handy).

Wireshark has been around for a long time, and its display filters are good reference points for learning about network (packet) traffic as well as how to navigate around various parts of sessions or streams.

 

Below you will find a handy reference which allows you to cross-reference many of the common Wireshark filters with their respective RSA NetWitness queries. 

 

This is where I pulled the Wireshark display filters from:  DisplayFilters - The Wireshark Wiki 

 

Show only SMTP (port 25) and ICMP traffic:

Wireshark: tcp.port eq 25 or icmp
NetWitness: service=25 || ip.proto=1,58 -> (icmp or ipv6 icmp)
NetWitness: tcp.dstport=25 || ip.proto=1,58 -> (icmp or ipv6 icmp)

 

Show only traffic in the LAN (192.168.x.x), between workstations and servers -- no Internet:

Wireshark: ip.src==192.168.0.0/16 and ip.dst==192.168.0.0/16
NetWitness: ip.src=192.168.0.0/16 && ip.dst=192.168.0.0/16
NetWitness: direction='lateral' (RFC1918 to RFC1918)

 

Filter on Windows -- Filter out noise, while watching Windows Client - DC exchanges

Wireshark: smb || nbns || dcerpc || nbss || dns
NetWitness: service=139,137,135,139,53

 

Match HTTP requests where the last characters in the uri are the characters "gl=se":

Wireshark: http.request.uri matches "gl=se$"
NetWitness: service=80 && query ends 'gl=se'

 

Filter by a protocol ( e.g. SIP ) and filter out unwanted IPs:

Wireshark: ip.src != xxx.xxx.xxx.xxx && ip.dst != xxx.xxx.xxx.xxx && sip
NetWitness: service=5060 && ip.src!=xxx.xxx.xxx.xxx && ip.dst != xxx.xxx.xxx.xxx

 

ip.addr == 10.43.54.65 is equivalent to:

Wireshark: ip.src == 10.43.54.65 or ip.dst == 10.43.54.65
NetWitness: ip.all=10.43.54.65
NetWitness: ip.src=10.43.54.65 || ip.dst=10.43.54.65

 

Here's where I pulled some additional filters for mapping:  HTTP Packet Capturing to debug Apache 

 

View all http traffic

Wireshark: http
NetWitness: service=80

 

View all flash video stuff

Wireshark: http.request.uri contains "flv" or http.request.uri contains "swf" or http.content_type contains "flash" or http.content_type contains "video"
NetWitness: service=80 && (query contains 'flv' || query contains 'swf' || content contains 'flash' || content contains 'video')

 

Show only certain responses

Wireshark: http.response.code == 404
NetWitness: service=80 && error begins 404
NetWitness: service=80 && result.code='404'
Wireshark: http.response.code == 200
NetWitness: service=80 && error !exists (200 responses are not explicitly captured)
NetWitness: service=80 && result.code !exists (200 responses are not explicitly captured)

 

Show only certain http methods

Wireshark: http.request.method == "POST" || http.request.method == "PUT"
NetWitness: service=80 && action='post','put'

 

Show only filetypes that begin with "text"

Wireshark: http.content_type[0:4] == "text"
NetWitness: service=80 && filetype begins 'text'
NetWitness: service=80 && filename begins 'text'

 

Show only javascript

Wireshark: http.content_type contains "javascript"
NetWitness: service=80 && content contains 'javascript'

 

Show all http with content-type="image/(gif|jpeg|png|etc)"

Wireshark: http.content_type[0:5] == "image"
NetWitness: service=80 && content='image/gif','image/jpeg','image/png','image/etc'

 

Show all http with content-type="image/gif"

Wireshark: http.content_type == "image/gif"
NetWitness: service=80 && content='image/gif'

 

Hope this is helpful for everyone and as always, Happy Hunting!

I was reviewing a packet capture file I had from a recent engagement. In it, the attacker had tried unsuccessfully to compress the System and SAM registry hives on the compromised web server. Instead, the attacker decided to copy the hives into a web accessible directory and give them a .jpg file extension. Given that the Windows Registry hives contain a well documented file structure, I decided to write a parser to detect them on the network.

 

 

If we see something on the wire, there is a pretty good chance we can create some content to detect it in the future. This is the premise behind most threat-hunting or content creation. Make it easier to detect the next time. This is the same approach I take when building Lua Parsers for the RSA NetWitness platform.

 

Here, we can see what appears to be the magic bytes for a registry file “regf”.

 

 

Let’s shift our view into View Hex and examine this file.

 

 

When creating a parser, we want to make it as consistent as possible to reduce false positives or errors. What I found was that immediately following the ‘regf’ signature the Primary Sequence Number (4 bytes) and Secondary Sequence Number (4 bytes) would be different. Then, there was the FileTime UTC (8 bytes) field which would most definitely be unique.
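
To see those fields laid out, here is a minimal offline sketch in Python (the filename is hypothetical; the on-disk hive header stores these fields little-endian):

import struct

# regf header layout: "regf" (4), primary seq (4), secondary seq (4),
# FileTime UTC (8), Major Version (4), Minor Version (4)
with open("SAM.jpg", "rb") as f:      # hypothetical copy of a hive given a .jpg extension
    header = f.read(28)

signature = header[0:4]                                            # b"regf"
primary_seq, secondary_seq = struct.unpack_from("<II", header, 4)
filetime, = struct.unpack_from("<Q", header, 12)
major, minor = struct.unpack_from("<II", header, 20)
print(signature, primary_seq, secondary_seq, major, minor)         # major is 1, minor 3-6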

 

However, the Major and Minor versions were relatively consistent. Therefore, I could skip over those 16 bytes to land on the first byte of the Major Version immediately after my initial token matches.  Let’s create a token to start with.

 

fingerprint_reg:setCallbacks({
    ["\114\101\103\102"] = fingerprint_reg.magic,   -- regf
})

 

If you notice, this token is in DECIMAL format, not HEX. Also, 4 bytes is quite small for a token. When a parser is loaded into the decoder, the tokens are stored in memory and compared as network traffic goes through the decoder. Once a token matches, the function(s) within the parser are run. Too small a token means the parser may run quite frequently, with or without matching the right traffic. Too large a token means the parser may only run on those specific bytes, and you could miss other relevant traffic. When creating a parser token, you may want to err on the side of caution and make it a little smaller, but know that you will have to add additional checks to ensure it is the correct traffic you want.

 

In Lua parsers, you are always on a byte. Therefore, we need to know where we are and where we want to go. I like to set a variable called 'current_position' to denote where my pointer is in the stream of data. When the parser matches on a token, it returns three values: the token itself, the first position of the token in the data stream, and the last position of the token in the data stream. This helps me because I want to find the 'regf' token and move forward 17 bytes to land on the Major Version field.

 

function fingerprint_reg:magic(token, first, last)
    -- 'last' is the final byte of "regf"; skip the two sequence numbers (8 bytes)
    -- and the FileTime (8 bytes) to land on the Major Version field
    current_position = last + 17
    -- grab 8 bytes: Major Version (4) and Minor Version (4)
    local payload = nw.getPayload(current_position, current_position + 7)
    if payload and #payload == 8 then
        local majorversion = payload:uint32(1, 4)
        -- 0x01000000 read as a 32-bit integer
        if majorversion == 16777216 then
            local minorversion = payload:uint32(5, 8)
            -- 0x03000000, 0x04000000, 0x05000000 or 0x06000000
            if minorversion == 50331648 or minorversion == 67108864 or minorversion == 83886080 or minorversion == 100663296 then
                nw.createMeta(self.keys["filetype"], "registry hive")
            end
        end
    end
end

This will put the pointer on the first byte (0x01) of the Major Version field. Next what I want to do is extract only the payload I need to do my next set of checks, which will involve reading the bytes.

 

    local payload = nw.getPayload(current_position, current_position + 7)
    if payload and #payload == 8 then

 

Here, I created a variable called ‘payload’ and used the built-in function ‘nw.getPayload’ to get the payload I wanted. Since I previously declared a variable called ‘current_position’, I use that as my starting point and tell it to go forward 7 bytes. This gives me a total of 8 bytes of payload. Next, I make sure that I have payload and that it is, in fact, 8 bytes in length (#payload == 8).

 

    local majorversion = payload:uint32(1, 4)
    if majorversion == 16777216 then

 

 

If the payload checks out, then in this parser, I want to read the first 4 bytes, since that should be the Major Version. In the research I did, I saw that the Major Version was typically ‘1’ and was represented as ‘0x01000000’. Since I want to read those 4 bytes, I use “payload:uint32(1,4)”. Since those bytes will be read in as one value, I pre-calculate what that should be and use it as a check. The value should be ‘16777216’. If it is, then it should move to the next check.

 

    local minorversion = payload:uint32(5, 8)
    if minorversion == 50331648 or minorversion == 67108864 or minorversion == 83886080 or minorversion == 100663296 then
        nw.createMeta(self.keys["filetype"], "registry hive")
    end

 

The Minor Version check winds up being the second and last check to make sure it is a Registry hive. For this to run, the Major version had to have been found and validated based on the IF statement. Here, we grab the next 4 bytes and store those in a variable called ‘minorversion’. There were four possible values that I found in my research. Those would be ‘0x03000000’, ‘0x04000000’, ‘0x05000000’, and ‘0x06000000’. Therefore, I pre-calculated those values in decimal form like I did with the Major Version and did a comparison (==). If the value matched, then the parser will write the text ‘registry hive’ as meta into the ‘filetype’ meta key.
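
As a sanity check on those pre-calculated constants, a quick calculation outside the parser (plain Python, shown only to verify the big-endian reads) reproduces the same decimal values:

# 0x01000000 read as a big-endian 32-bit integer (the Major Version check)
print(int.from_bytes(bytes([0x01, 0x00, 0x00, 0x00]), "big"))   # 16777216

# 0x03000000 .. 0x06000000 (the four accepted Minor Version values)
for minor in (0x03, 0x04, 0x05, 0x06):
    print(int.from_bytes(bytes([minor, 0x00, 0x00, 0x00]), "big"))
# 50331648, 67108864, 83886080, 100663296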

 

The approach shown here was useful in examining a particular type of file as it was observed in network traffic. The same approach could be used for protocol analysis, identifying new service types, and much more. If you would like expert assistance with creating a custom parser for traffic that is unique to your environment, that is a common service offering provided by RSA. If you're interested in this type of service offering, please feel free to contact your local sales rep.

 

The parser is attached, and I have also submitted it to RSA Live for future use.  I hope you find this parser breakdown helpful and as always, happy hunting.

 

Chris

Microsoft has been converting customers to O365 for a while, and as a result more and more traffic is being routed from on-premise out to Microsoft clouds, potentially putting it within the visibility of NetWitness. Being able to group that traffic into a bucket for potential whitelisting, or at the very least identification, could be useful.

 

Microsoft used to provide an XML file listing all the IPv4 addresses, IPv6 addresses and URLs required for accessing their O365 services. This is being deprecated in October of 2018 in favor of API access.

 

This page gives a great explainer on the data in the API and how to interact with it, as well as Python and PowerShell scripts to grab data for use in firewalls, etc.

 

Managing Office 365 endpoints - Office 365 

 

The PowerShell script is where I started, so that a script could be run on a client workstation to determine if there were any updates and then apply the relevant data to the NW environment. Eventually, hopefully, this gets into the generic whitelisting process that is being developed so that it is programmatically delivered to NW environments.

 

GitHub - epartington/rsa_nw_lua_feed_o365_whitelist: whitelisting Office365 traffic using Lua and Feeds 

 

The script provided by Microsoft was modified to create 3 output files for use in NetWitness:

o365ipv4out.txt

o365ipv6out.txt

o365urlOut.txt

 

The IP feeds are in a format that can be used as feeds in NetWitness. The GitHub link with the code provides the XML for them to map to the same keys as the Lua parser, so there is alignment between the three.

 

The o365urlOut.txt file is used in a Lua parser to map against the alias.host key. A Lua parser was used because of a limitation in the feeds engine that prevents wildcard matching: matches in feeds need to be exact, and some of the hosts provided are *.domain.com. The Lua parser attempts an exact match first and then falls back to subdomain matches to see if there are any hits.
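
The matching logic is roughly as follows, sketched here in Python for clarity rather than Lua; the lookup table mirrors the sample URL output shown below, and the hostname is just an example:

# Illustrative lookup table in the same shape as the parser's URL output
o365_hosts = {
    "aadrm.com": "office365",
    "acompli.net": "office365",
    "adhybridhealth.azure.com": "office365",
}

def match_host(hostname):
    # Exact match first
    if hostname in o365_hosts:
        return o365_hosts[hostname]
    # Fall back to parent domains, so "foo.bar.aadrm.com" still matches "aadrm.com"
    parts = hostname.split(".")
    for i in range(1, len(parts) - 1):
        parent = ".".join(parts[i:])
        if parent in o365_hosts:
            return o365_hosts[parent]
    return None

print(match_host("api.aadrm.com"))   # -> "office365"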

 

The Lua parser includes the host list that was current as of the published version; as Microsoft updates their API, the list needs to be changed. That's where the PS1 script comes in. It can be run from a client workstation; if there are changes, open the output .txt file and copy the text into the Decoder > Config > Files tab, replacing the text in the parser to include any changes published. The decoder then needs to have its parsers reloaded, which can be done from the REST interface or the Explore menu, to load the updated content. You can also push the updated parser to all your other Log and Packet Decoders to keep them up to date as well.

 

The output of all this content is data in the filter meta key:

filter='office365'

filter='whitelist'

 

Sample URL output

["aadrm.com"] = "office365",
["acompli.net"] = "office365",
["adhybridhealth.azure.com"] = "office365",
["adl.windows.com"] = "office365",
["api.microsoftstream.com"] = "office365",

 

Sample IPv4 output

104.146.0.0/19,whitelist,office365
104.146.128.0/17,whitelist,office365
104.209.144.16/29,whitelist,office365
104.209.35.177/32,whitelist,office365

 

My knowledge of PowerShell was pretty close to 0 at the beginning of this exercise; now it's closer to 0.5.

 

To Do Items you can help with:

Ideally, I would like the script to output the serviceArea of each URL or IP network so that you can tell which O365 service the content belongs to, giving you more granular data on what part of the suite is being used.

serviceArea = "Exchange","sway","proplus","yammer" ...

If you know how to modify the script to do this, I'd be more than happy to update the script to include those changes. Ideally 3-4 levels of filter would be perfect.

 

whitelist,office365,yammer

 

would be sufficient granularity, I think.

 

Changes you might make:

The key to read from is alias.host. If you have logs that write values into domain.dst or host.dst that you want considered, and you are on NW11, you can change the key to host.all to include all of those at once in the filtering (just make sure that key is in your index-decoder-custom.xml).

 

Benefits of using this:

The ability to reduce the noise on the network from known or trusted communications to Microsoft that can be treated as lower priority. This is especially useful when investigating outbound traffic, since you can remove known O365 traffic (PowerShell from an endpoint to the internet != Microsoft).

 

As an FYI, so far all the test data that I have lists the outbound traffic as heading to org.dst='Microsoft Hosting'. I'm sure that won't hold on a wider scale of data, but so far the whitelist lines up 100% with that org.dst.

The Respond Engine in 11.x contains several useful pivot points and capabilities that allow analysts and responders to quickly navigate from incidents and alerts to the events that interest them.

 

In this blog post, I'll be discussing how to further enable and improve those pivot options within alert details to provide both more pivot links as well as more easily usable links.

 

During the incident aggregation process, the scripts that control the alert normalizations create several links (under Related Links) that appear within each alert's Event Details page.

 

These links allow analysts to copy/paste the URI into a browser and pivot directly to the events/session that caused the alert, or to an investigation query against the target host. 

 

What we'll be doing here is adding additional links to this Related Links section to allow for more pivot options, as well as adding the protocol and web server components to the existing URI in order to form a complete URL.

 

The files that we will be customizing for the first step are located on the Node0 (Admin) Server in the "/var/netwitness/respond-server/scripts" directory:

  • normalize_core_alerts.js
  • normalize_ma_alerts.js

 

(We will not be modifying the normalize_ecat_alerts.js or normalize_wtd_alerts.js scripts because the Related Links in those pivot outside of the NetWitness UI.)

 

As always, back up these files before committing any changes and be sure to double-check your changes for any errors.

 

Within each of these files, there is an exports.normalizeAlert function:

 

At the end of this function, just above the "return normalized;" statement, you will add the following lines of code:

 

//copying additional links created by the utils.js script to the event's related_links
for (var j = 0; j < normalized.events.length; j++) {
    if (normalized.related_links) {
        normalized.events[j].related_links = normalized.events[j].related_links.concat([normalized.related_links]);
    }
}

 

 

So the end of the exports.normalizeAlert function now looks like this:

 

Once you have done this, you can now move on to the next step in this process.  This step will require modification of 3 files - the two we have already changed plus the utils.js script - all still located in the "/var/netwitness/respond-server/scripts" directory:

  • normalize_core_alerts.js
  • normalize_ma_alerts.js
  • utils.js

 

Within each of these files search for "url:" to locate the statements that generate the URIs in Related Links.  You will be modifying these URIs into complete URLs by adding "https://<your_UI_IP_or_Hostname>/" to the beginning of the statement.

 

For example, this: 

 

...becomes this:

 

Do this for all of the "url:" statements, except this one in "normalize_core_alerts.js," as this pulls its URI / URL from a function in the script that we are already modifying:

 

Once you have finished modifying these files and double-checking your work for syntax (or other) errors, restart the Respond Server (systemctl restart rsa-nw-respond-server) and begin reaping your rewards:

 

RSA SecurID Access (Cloud Authentication Service) is an access and authentication platform with a hybrid on-premise and cloud-based service architecture. The Cloud Authentication Service helps secure access to SaaS and on-premise web applications for users, with a variety of authentication methods that provide multi-factor identity assurance. The Cloud Authentication Service can also accept authentication requests from a third-party SSO solution or web application that has been configured to use RSA SecurID Access as the identity provider (IdP) for authentication.

 

For More details:

RSA SecurID Access Overview 

Cloud Authentication Service Overview 

 

 

The RSA NetWitness Platform uses the Plugin Framework to connect with the RSA SecurID Access (Cloud Authentication Service) RESTful API and periodically query for admin activity. This provides visibility into all administrative activities, such as Policy, Cluster, User, Radius Server and various other configuration changes.

 

Here is a detailed list of all the administrative activity that can be monitored via this integration:

Administration Log Messages for the Cloud Authentication Service 

 

Downloads and Documentation:

 

Configuration Guide: RSA SecurID Access Event Source Configuration Guide

(Note: This is only supported on RSA NetWitness 10.6.6 currently. It will also be included in 11.2, coming soon.)

Collector Package on RSA Live:  "RSA SecurID"

Parser on RSA Live: "CEF". (device.type=rsasecuridaccess) 
