
RSA NetWitness Platform


Following up from the previous blog, Web Shells and RSA NetWitness, the attacker has since moved laterally. Using one of the previously uploaded Web Shells, the attacker confirms permissions by running whoami, and checks the running processes using tasklist. Attackers, like most individuals, are creatures of habit:

 

The attacker also executes a quser command to see if any users are currently logged in, and notices that an RDP session is currently active:

 

The attacker executes a netstat command to see where the RDP session has been initiated from and finds the associated connection:

 

The attacker pivots into their Kali Linux machine and sets up a DNS Shell. This DNS Shell will allow the attacker to set up C&C on the machine they have just discovered:

 

The attacker moves laterally using WMI, and executes the encoded PowerShell command to set up the DNS C&C:

 

The DNS Shell is now set up, and the attacker can begin to execute commands, such as whoami, on the new machine through the DNS Shell:

 

Subsequently, as the attacker likes to do, they also run a tasklist through the DNS Shell:

 

Finally, the attacker confirms that the host has internet access by pinging www.google.com:

 

As the attacker has confirmed internet access, they decide to download Mimikatz using a PowerShell command:

 

The attacker then performs a dir command to check if Mimikatz was successfully downloaded:

 

From here, the attacker can dump credentials from this machine and continue to move laterally around the organisation, as well as pull down new tools to achieve their task(s). The attacker has also set up a failover (the DNS Shell) in case the Web Shells are discovered and subsequently removed.

 

 

 

Analysis

Since the previous post, the analyst has upgraded their system to NetWitness 11.3 and deployed the new agents to their endpoints. The tracking data now appears in the NetWitness UI, and subsequently the analysis will take place solely in the 11.3 UI.

 

Tracking Data

The analyst, upon perusing the metadata, uncovers reconnaissance commands, whoami.exe and tasklist.exe, being executed on two of their endpoints:

 

Refocusing their investigation on those two endpoints, and exposing the Behaviours of Compromise (BOC) meta key, the analyst uncovers some suspect indicators that relate to a potential compromise: creates remote process using wmi command-line tool, http daemon runs command shell, and runs powershell using encoded command, just to name a few:

 

Pivoting into the sessions related to creates remote process using wmi command-line tool, the analyst observes the Tomcat Web Server performing WMI lateral movement on a remote machine:

 

The new 11.3 version stores the entire Encoded PowerShell command and performs no truncation:

 

This allows the analyst to perform Base64 decoding directly within the UI using the new Base64 decode function (NOTE: the squares in between each character are due to double-byte encoding and are not a byproduct of NetWitness decoding):

  

 

Navigating back to the metadata view, the analyst opens the Indicators of Compromise (IOC) meta key and observes the metadata drops credential dumping library:

 

Pivoting into those sessions, the analyst sees that Mimikatz was dropped onto the machine that was previously involved in the WMI lateral movement:

 

Packet Data

The analyst is also looking into the packet data, searching through DNS after noticing an increase in the amount of traffic compared to what they typically see. Upon opening the SLD (Second Level Domain) meta key, the culprit of the increase is shown:

 

Focusing the search on the offending SLD, and expanding the Hostname Alias Record (alias.host) meta key, the analyst observes a large number of suspicious unique FQDNs:

 

This behaviour is indicative of a DNS tunnel. Focusing on the DNS Response Text meta key, it is also possible to see the commands that were being executed:

 

We can further substantiate that this is a DNS tunnel by using a tool such as CyberChef: taking the characters after cmd in the FQDN and hex decoding them reveals that data is being sent hex-encoded as part of the FQDN itself, split into chunks and reconstructed on the attacker's side, due to the restriction on how much data can be sent via DNS:
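For those who prefer to script the decoding, here is a minimal Python sketch of the same step; it assumes the hypothetical FQDN layout described above, where the first label carries "cmd" followed by hex-encoded data:

import binascii

# Decode one hex-encoded chunk from a tunnel FQDN of the assumed form
# "cmd<hexdata>.<random>.somedomain.com" (layout is illustrative).
def decode_chunk(fqdn):
    label = fqdn.split(".")[0]
    if not label.startswith("cmd"):
        return None
    return binascii.unhexlify(label[len("cmd"):]).decode("utf-8", errors="replace")

# Chunks arrive across many unique FQDNs and are reassembled on the
# attacker's side; here we simply concatenate two example chunks.
fqdns = ["cmd7461736b.a1b2.example.com", "cmd6c697374.c3d4.example.com"]
print("".join(decode_chunk(f) for f in fqdns))  # -> "tasklist"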

 

 

ESA Rule

DNS-based C&C is noisy because there is only a finite amount of information that can be sent with each DNS packet; returning information from the infected endpoint therefore requires a large amount of DNS traffic. Additionally, the DNS requests that are made need to be unique, so as not to be resolved by the local DNS cache or internal DNS servers. Due to this high level of noise from the DNS C&C communication, and the variance in the FQDNs, it is possible to create an ESA rule that looks for DNS C&C with a high degree of fidelity.

The ESA rule attached to this blog post calculates a ratio of how many unique alias host values there are toward a single Second Level Domain (SLD): we count the number of sessions toward the SLD and divide that by the number of unique alias hosts for that SLD, to give us a ratio:

 

  • SLD Session Count ÷ Unique Alias Host Count = ratio
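For illustration only, here is the same calculation in Python over a batch of DNS sessions; the detection itself is the attached EPL rule, and the field names and thresholds below are hypothetical:

from collections import defaultdict

def dns_tunnel_candidates(sessions, max_ratio=1.5, min_sessions=100):
    # sessions: iterable of {"sld": ..., "alias_host": ...} records
    stats = defaultdict(lambda: {"sessions": 0, "hosts": set()})
    for s in sessions:
        stats[s["sld"]]["sessions"] += 1
        stats[s["sld"]]["hosts"].add(s["alias_host"])
    candidates = {}
    for sld, c in stats.items():
        ratio = c["sessions"] / len(c["hosts"])  # session count / unique alias hosts
        if c["sessions"] >= min_sessions and ratio <= max_ratio:
            candidates[sld] = ratio  # a low ratio suggests a DNS tunnel
    return candidates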

 

The lower the ratio, the more likely this is to be a DNS tunnel, due to the high connection count and the variance in the FQDNs toward a single SLD. The below screenshot shows the output of this rule, which triggered on the SLD shown in the analysis section of this blog post:

 

 

NOTE: Some legitimate products, such as McAfee, ESET, and TrendMicro, perform DNS tunnelling. These domains would need to be filtered out based on what you observe in your environment. The filtering option for domains is at the top of the ESA rule.

 

The rule for import, and the pure EPL code in a text file, are attached to this blog.

IMPORTANT: SLD needs to be set as an array for the rule to work.

 

 

Conclusion

This blog post further demonstrates the TTPs (Tactics, Techniques, and Procedures) attackers may utilise in a compromise to achieve their end goal(s). It demonstrates the necessity for proactive threat hunting, as well as the necessity for both Packet and Endpoint visibility, to succeed in said hunting. It also demonstrates that certain aspects of hunting can be automated, but only after fully understanding the attack itself. This is not to say that all threat hunting can be automated; a human element is always needed to confirm whether something is definitely malicious or not, but automation can be used to minimise some of the work the analyst needs to do.

This blog also focused on the new 11.3 UI, which allows analysts to easily switch between packet data and endpoint data in a single pane of glass, increasing the efficiency and detection capabilities of the analysts and the platform itself.

A couple of recent interactions with customers sent me down the path of designing better whitelisting options around well-known services that generate a lot of benign traffic.  Many customers have adopted Office365, Windows 10, and Chrome/Firefox as standard software in the enterprise.  As a result, the traffic that NetWitness captures includes a lot of data for these services, so the ability to filter this data when needed is important.

 

The Parsers

The end result of this effort is three parsers that allow the filtering of Office365 traffic, Windows 10 endpoint/telemetry traffic, and generic traffic (Chrome/Firefox etc.) in the NetWitness platform.

 

The data for filtering (meta values) is written by default to the Filter key and looks like this:

 

With these metavalues, analysts are able to select and deselect meta for these services to reduce the benign signals for these services from investigations and charting to focus more on the outliers.

filter!='whitelist'

filters all data tagged as whitelist from the view

 

filter!='windows10_connection'

filters all traffic that is related to Windows 10 connection endpoints (telemetry etc.) captured from these sites

windows-itpro-docs/windows/privacy at master · MicrosoftDocs/windows-itpro-docs · GitHub 

 

filter !='office365'

filters traffic related to all Office365 endpoints listed by this web service

Office 365 IP Address and URL Web service | Microsoft Docs 

 

filter !='generic'

filters traffic related to generic endpoints, including Chrome updates from gvt1.com and gvt2.com, as well as other misc endpoints related to Windows telemetry (V8/V7 etc. and others)

 

Automating the Parsers

To take this one step further and make the process of creating the Lua parsers easier, a script was written for Office365 and Windows 10 to automate the process of pulling the content down, altering the data, and creating a parser with that content, outputting a parser ready to be used on Log and Packet Decoders to flag traffic.
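As a rough sketch of the approach (the attached scripts are the real implementation; the endpoint URL below is Microsoft's published Office 365 web service, linked above, while the Lua output format shown is purely illustrative):

import json
import urllib.request
import uuid

# Microsoft's Office 365 IP Address and URL web service.
URL = ("https://endpoints.office.com/endpoints/worldwide"
       "?clientrequestid=" + str(uuid.uuid4()))

def fetch_o365_hosts():
    with urllib.request.urlopen(URL) as resp:
        data = json.load(resp)
    hosts = set()
    for entry in data:
        for url in entry.get("urls", []):
            hosts.add(url.lstrip("*."))  # normalize wildcard entries
    return sorted(hosts)

def lua_token_lines(hosts):
    # Hypothetical token format; the real parser template differs.
    return ['["%s"] = "office365",' % h for h in hosts]

if __name__ == "__main__":
    print("\n".join(lua_token_lines(fetch_o365_hosts())))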

 

Hopefully this can be rolled into a regular content pipeline to update the parsers periodically and pick up the latest endpoints as they are added (for instance, a new Windows 10 build will most likely come with updated endpoints).

 

Scripts, Lua parsers, and descriptions are listed here and will get updated as issues pop up.

 

For each parser that has an update mechanism, the Python script can be run to generate a new parser for use in NetWitness (the parser version is set to the time the parser is built, to let you know in the UI what the version is).

These parsers also serve as proofs of concept for other ideas that might need both exact and substring matches for, say, hostnames or other threat data.

 

Currently the parsers read from hostname related keys such as alias.host, fqdn, host.src, host.dst.

 

As always, this is POC code to validate ideas and potential solutions. Deploy code and test/watch for side effects such as blizzards, sandstorms, core dumps and other natural events.

 

GitHub - epartington/rsa_nw_lua_wl_O365: whitelist office365 traffic parser and script 

GitHub - epartington/rsa_nw_lua_wl_windows10: whitelist window 10 connection traffic parser and script 

GitHub - epartington/rsa_nw_lua_wl_generic: whitelist generic traffic parser 

 

There will also be a post shortly about using resourceBundles to generate a single zip file with this content to make uploading and management of this data easier.

Introduction

This blog post demonstrates a common method by which organisations can get compromised. Initially, the viewpoint will be from the attacker's perspective; it will then move on to show what artifacts are left over within the RSA NetWitness Packets and RSA NetWitness Endpoint solutions that analysts could use to detect this type of activity.

 

Scenario

An Apache Tomcat server exposed to the internet with weak credentials for the Tomcat Manager App gets exploited by an attacker. The attacker uploads three Web Shells, confirms access to all of them, and then uploads Mimikatz to dump credentials.

 

Definitions

Web Shells

A web shell is a script that can be uploaded to a web server to enable remote administration of the machine. Infected web servers can be either internet-facing or internal to the network, where the web shell is used to pivot further to internal hosts.

A web shell can be written in any language that the target web server supports. The most commonly observed web shells are written in widely supported languages such as JSP, PHP, and ASP; Perl, Ruby, Python, and Unix shell scripts are also used.

 

Mimikatz

Mimikatz is an open source credential dumping program that is used to obtain account login and password information, normally in the form of a hash or a clear text password from an operating system.

 

THC Hydra

When you need to brute-force a remote authentication service, Hydra is often the tool of choice. It can perform rapid dictionary attacks against more than 50 protocols, including telnet, ftp, http, https, smb, several databases, and much more.

 

WAR File

In software engineering, a WAR file (Web Application Resource or Web Application Archive) is a file used to distribute a collection of JAR files, JavaServer Pages, Java servlets, Java classes, XML files, tag libraries, static web pages (HTML and related files), and other resources that together constitute a web application.


 

The Attack

The attacker finds an exposed Apache Tomcat server for the organisation. This can be achieved in many ways, such as a simple Google search to show default-configured Apache servers:

                                      

The attacker browses to the associated Apache Tomcat server and sees it is running up-to-date software and appears to be mainly left at its default configuration:

                 

 

The attacker attempts to access the Manager App; it requires a username and password, and therefore the attacker cannot log in to make changes. Typically, however, these servers are set up with weak credentials:

               

 

Based on this assumption, the attacker uses an application called THC Hydra to brute force the Tomcat Manager App using a list of passwords. After a short while, Hydra returns a successful set of credentials:

                


The attacker can now log in to the Manager App using the brute-forced credentials:

               

 

From here, the attacker can upload a WAR (Web application ARchive) file which contains their Web Shells:

                

 

The WAR file is nothing more than a ZIP file with the JSP Web Shells inside. In this case, three Web Shells were uploaded:

                  

 

After the upload, it is possible to see that a new application called admin (named after the WAR file, admin.war) has been created:

              


The attacker has now successfully uploaded three Web Shells onto the server and can begin to use them. One of the Web Shells, resetpassword.jsp, requires authentication to protect against direct access by other individuals; this page could also be adapted to confuse analysts when visited:

            

 

The attacker enters the password and can begin browsing the web server's file system and executing commands; typical commands such as whoami are often used by attackers:

           

 

The attacker may also choose to see what processes are running, to check whether there are any applications that could hinder their progression, by running tasklist:

           

 

From the previous command, the attacker notices a lack of anti-virus, so decides to upload Mimikatz via the Web Shell:

          

 

The ZIP file has now been uploaded. This Web Shell also has an UnPack feature to decompress the ZIP file: 

         

 

Now the ZIP file is decompressed:

         

 

The attacker can now use the Shell OnLine functionality within this Web Shell, which emulates CMD, in order to navigate to the Mimi directory and see their uploaded tools:

         

 

The attacker can then execute Mimikatz to dump all passwords in memory:

        

 

The attacker now has credentials from the Web Server:

         

 

The attacker could then use these credentials to move laterally onto other machines.

 

The attacker also dropped two other Web Shells, potentially as backups in case some get flagged. Let's access those to see what they look like. The first is the JSP file called error2.jsp; it has similar characteristics to the resetpassword.jsp Web Shell:

       

 

We can browse the file system and execute commands:

        

 

The final Web Shell uploaded, login.jsp, exhibits odd behavior when accessed:

       

 

It appears to perform a redirect to a default Tomcat page named examples; this seems to be a trick to confuse anyone who potentially browses to that JSP page. Examining the code for this Web Shell, it is possible to see that it performs a redirect if the correct password is not supplied:

     

           <SNIP>

     

 

Passing the password to this Web Shell as a parameter, which is defined at the top of this Web Shell’s code, we get the default response from the Web Shell:

    

 

Analyzing the code further, you can identify other parameters to pass in order to make the Web Shell perform certain actions, such as a directory listing:

   

 

This Web Shell is known as Cknife, and interacting with it in this way is neither efficient nor easy, so Cknife comes with a Java-based client to control the Web Shell. We can launch the client using the command shown below:

  

 

The client, which would typically be used, is then displayed:

  

Note:  This web shell is listed in this blog post as it is something the RSA Incident Response team consistently sees in some of the more advanced attacks.

 

The Analysis

Understanding the attack is important, which is why it comes before the analysis section. Understanding how an attacker may operate, and the steps they may take to compromise a Web Server, will significantly increase your ability to detect these types of threats, as well as help you better understand the viewpoint of the analysis while triage is performed.

 

RSA NetWitness Packets

While perusing the network traffic, a large number of 401 authentication errors toward one of the Web Servers was observed; there is also a large variety of what look like randomly generated passwords:

        

 

Focusing on the 401 errors, and browsing other metadata available, we can see the authentication errors are toward the Manager App of Tomcat over port 8080. Also take note of the Client Application being used; this is the default from THC Hydra and has not been altered:

       

 

Removing the 401 errors, and opening the Filename and Directory meta keys, we can see the Web Shells that were being accessed and the tools that were uploaded:

       

 

NOTE: In an actual environment, a large number of directories and filenames would exist. It is up to the analyst to search for filenames of interest that sound out of the norm, are in suspicious directories, are newly being accessed, or are not accessed as frequently as other pages on the web server. For a more in-depth explanation of hunting using NetWitness Packets, take a look at the hunting guide available here: https://community.rsa.com/docs/DOC-79618

 

The analyst could also use other meta keys to look for suspicious/odd behavior. Inbound HTTP traffic with windows cli admin commands would be worth investigating, as well as sessions with only POSTs (for POST-based Web Shells), such as http post no get or http post no get no referer, to give a couple of examples:

     

 

Investigating the sessions with windows cli admin commands yields the following two sessions; you'll notice one of them involves one of the Web Shells, resetpassword.jsp:

    

 

Double clicking on the session will reconstruct the packets and display the session in Best Reconstruction view, in this case web. Here we can see the Web Shell as the browser would have rendered it; this should instantly stand out as something suspicious:

    

 

This HTTP session also contains the error2.jsp Web Shell; from the RSA NetWitness rendering, it is possible to see the returned results that the attacker saw. Again, this should stand out as suspicious:

    

 

Coming back to the investigate view, and this time drilling into the sessions for post no get no referer, we can see one of the other Web Shells, login.jsp:

   

 

Double clicking on one of these sessions shows the results from the Cknife Web Shell, login.jsp:

    

 

As this was not a nicely formatted web based Web Shell, the output is not as attractive, but this still stands out as suspicious traffic: why would a JSP page on a Web Server return a tasklist?

 

Sometimes changing the view can also help to see additional data. Changing the reconstruction to text view shows the HTTP POST sent; this is where you can see the tasklist being executed and the associated response:

   

 

Further perusing the network traffic, it is also possible to see that Mimikatz was executed:

  

This is an example of what the traffic may look like in RSA NetWitness Packets. The analyst would only need to pick up on one of these sessions to know their organization has been compromised. Pro-active threat hunting and looking for anomalies in traffic toward your web servers will significantly reduce the attacker dwell time.

 

The activity observed was only possible due to the communication happening over HTTP. If this had been SSL, the detection via packets would be much more difficult. This is why introducing SSL Decryption/Interception/Offloading is highly recommended. SSL inspection devices are nothing more than a well-designed man-in-the-middle attack that breaks the encryption into two separate encrypted streams. Therefore, they still provide an adequate level of protection to end-users while allowing security analysts and devices to properly monitor and alert when malicious or unwanted activity takes place, such as the web shells shown here. In summary, if you are responsible for protecting your organization’s assets, you should definitely consider the pros and cons of using this technology.

 

 

RSA NetWitness Endpoint

On a daily basis, analysts should be perusing the IIOCs within NWE, paying particular attention to the Level 1 IIOCs. Upon logging into NWE, we can see an interesting Level 1 IIOC has hit: HTTP Daemon Runs Command Shell. This IIOC is looking for an HTTP daemon, such as Tomcat, spawning cmd.exe:

 

      

 

If we double click on the machine name in the Machines window, we can then navigate to the tracking data for this machine to see what actually happened. Here we can see that Tomcat9.exe is spawning cmd.exe and running commands such as whoami and tasklist; this is not normal functionality and should raise concern for the analyst. We can also see the Mimikatz execution and the associated command that was executed:

 

    

 

Another IIOC that would have led us to this behavior is HTTP Daemon Creates Executable:

 

   

 

Again, coming back into the tracking data, we can see the Tomcat9.exe web daemon writing files. This would be noteworthy and something that should be investigated further, although web daemons can perform this activity legitimately; in this instance, the presence of Mimikatz is enough for us to determine this is malicious activity:

  

 

The analyst also has the capability to request files from the endpoint currently under analysis by right-clicking on the machine name, selecting Forensics and then Request File(s):

 

The analyst can specify the files they want to collect for their analysis (NOTE: wildcards can be used for the filenames but not directories). In this case, the analyst wants to look into the Tomcat Access files, and requests that five of them be returned:

 

Once the files have been downloaded, the analyst can save them locally by navigating to the Download tab, right-clicking the files of interest and selecting Save Local Copy:

 

Perusing the access files, the analyst can also see a large number of 401 authentication errors to the Tomcat Web Server, which would have come from the THC Hydra brute force:

 

And also evidence of the Web Shells themselves. Some of the commands the attacker executed can be seen in the GET requests; the data in the body of the POSTs, however, does not show in the log file, demonstrating why it is important to have both Packet and Endpoint visibility to understand the interaction with the Web Shell:

 

Conclusion

Understanding the potential areas of compromise within your organisation vastly increases your chances of early detection. This post was designed to show one of those areas of importance for attackers, and how they may go about a compromise, while also showing how that attack may look as captured by the RSA NetWitness Platform. It is also important to understand the benefits of proactively monitoring the RSA NetWitness products for malicious activity; simply waiting for an alert is not enough to capture attacks in their early stages.

 

It is also important for defenders to understand how these types of attacks look within their own environment, allowing them to better understand, and subsequently protect against, them.

 

This is something our RSA Incident Response practice does on a daily basis.  If your organization needs help or you're interested to learn more, please contact your account manager.

 

As always, happy hunting!  

RSA CHARGE 2019 CALL FOR SPEAKERS OPEN FOR SUBMISSIONS

It's official - time to get your creative juices flowing as the RSA Charge 2019 'Call for Speakers' (C4S) is now open and awaiting your submissions!

 

As you are aware, the RSA Charge events represent all RSA products, and an increasing number of customers across solutions attend this one-of-a-kind event each year. RSA Charge 2019 promises to be the biggest event in the history of the RSA Charge and RSA Summit conferences.

 

The RSA Charge event is successful in no small part because of the stellar customer submissions we receive each year. We invite you to submit your presentation brief(s) for consideration. (That's right, you may submit more than one submission brief!)

 

This year, for the first time, the eight tracks for RSA Charge 2019 are identical across all products and represent all RSA solutions. We are pleased to present them to you:

 

Transforming Your Cyber Risk Strategy - Cyber-attacks are at the top of the list of risks for many companies today.  Tell us how you are reducing this risk using RSA products.

 

Beyond the Checkbox: Modernizing Your Compliance Program - The regulatory landscape is always shifting.  How are you keeping up and what steps are you taking towards a sustainable, agile compliance program?

 

Aligning Third Party Risk for the Digital Transformation - Inherited risk from your business partners is a top of mind issue.  Third party risk must be attacked from multiple angles.  Share your strategy.

 

Managing Operational Risk for Impact - Enterprise risk, operational risk, all things risk management.  Share your experience and strategy on how you identify, assess and treat risk across your business operations.

 

View from Above: Securing the Cloud - From security visibility to managing organizational mandates, what is your risk and security strategy to answer the "go to cloud" call?

 

Under the RSA Hood: Managing Risk in the Dynamic Workforce - The workforce has become a dynamic variable for many organizations - from remote users to BYOD to contractors and seasonal workers.  How are you addressing this shift?

 

Business Resiliency for the 'Always On' Enterprise - The world expects connectivity.  When the lights are off, the business suffers.  Tell us how you are ensuring your business is 'always on' - business continuity, recovery, crisis management and the resilient infrastructure.

 

Performance Optimization: RSA Product Learning Lab - Share your technical insights of how you use RSA products to meet your business objectives.  Extra points for cool 'insider' tips and tricks.

 

We know you have great stories to share with your peers: best practices, teachings, and how-to's. We hope you consider submitting a brief, and thank you in advance for your consideration. More information can be found on the RSA Charge 2019 website (scroll to the bottom of the page), including the RSA Charge 2019 Call for Speakers Submission Form. Submissions should be sent to: rsa.events@rsa.com.

 

Call for Speakers 'closes' April 19. 

 

There are many reasons I enjoy working with the RSA NetWitness Platform, but it's when our customers turn their attention to threat hunting that things really get exciting. In one case, a customer needed to take new threat intelligence or research and apply it to their RSA NetWitness stack. This wasn't directed at threat intelligence via feeds, but more at how attackers could deliver malicious content. Since there is never a shortage of threat research, one customer asked about detection of zero-width spaces.

 

In a recent article in The Hacker News, research was presented showing that zero-width spaces embedded within URLs were able to bypass URL scanners in Office 365. The question now is how to go about detecting this in NetWitness.

 

We begin with a search of our network using the following query:

 

Query
alias.host contains "&#8203;","&#8204;","&#8205;","&#65279;","&#65296;"

 

This gave us several DNS sessions and a single SMTP session.

 

If we pivot into the SMTP session, we can get a slightly better view of the meta for that session.

 

That last hostname in 'alias.host' does look interesting, but we cannot be sure yet. We need to examine the session more closely.

 

We rendered the session as an email and it bore the signs of a classic phishing email.

 

However, only when we examine its raw contents does the malicious indicator present itself.

 

The bytes highlighted in red (E2 80 8C) represent the zero-width non-joiner character (&#8204;). This appears to be the attacker's use of zero-width spaces as a bypass attempt in a phishing email. Next, we look at the meta data for the session.

 

Above, we can see our suspicious hostname, but how did it get there? It turns out that the 'phishing_lua' parser examines URLs within SMTP traffic and extracts the hostnames it finds into the 'alias.host' meta key. Fortunately for us, it included the zero-width space as part of the meta data... we just can't see it. Or can we?

 

I copied the meta value for the hostname and then pasted it into my text editor. Sadly, I did not notice any strange characters. However, I did paste it into my good friend ‘vi’.

 

This proved that the zero-width spaces were in the meta data, which allowed our query to work successfully. The malicious site actually leads to a credential-stealing webpage; it appears that the website was a compromised WordPress site.
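Another quick way to confirm what vi showed is to print the UTF-8 encodings of the zero-width characters directly, for example in Python (3.8+ for the separator argument):

# UTF-8 byte sequences for the zero-width characters discussed here.
for name, ch in [
        ("zero width space (&#8203;)", "\u200b"),
        ("zero width non-joiner (&#8204;)", "\u200c"),
        ("zero width joiner (&#8205;)", "\u200d"),
        ("zero width no-break space (&#65279;)", "\ufeff"),
]:
    print(name, ch.encode("utf-8").hex(" ").upper())

The non-joiner line prints E2 80 8C, the same bytes highlighted in the raw message above.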

 

Next, I wanted to get a bigger data set. I took some of the meta data, such as the email address of the sender, and used it to find additional messages. As it turned out, this helped identify an active phishing campaign.

 

Next up was to put together some kind of detection going forward. My first thought was to use an application rule, but that was not successful; I think it was the way the Unicode was being interpreted, or how it was inputted. I need to do more research on that. Since the app rule syntax was not working properly, I decided to build a Lua parser instead. This parser performs a meta callback on the "alias.host" meta key, just as an app rule would. The parser then checks the returned meta value against a predefined list of zero-width space byte sequences. If a match is made, it writes meta into the 'ioc' meta key.

 

lua_zws_check.lua

-- Step 1 Name the parser
local lua_zws_check = nw.createParser("lua_zws_check", "Check alias.host meta for zero-width spaces")

--[[
    DESCRIPTION
        Check alias.host meta for zero-width spaces

    VERSION
        2019-01-24 - Initial development

    AUTHOR
        christopher.ahearn@rsa.com

    DEPENDENCIES
        None

    NOTES
        https://thehackernews.com/2019/01/phishing-zero-width-spaces.html?m=1
        http://www.amp-what.com/unicode/search/zero%20width
--]]

-- Step 3 Define where your meta will be written
-- These are the meta keys that we will write meta into
lua_zws_check:setKeys({
    nwlanguagekey.create("ioc", nwtypes.Text),
})

-- Step 4 DO SOMETHING
-- UTF-8 byte sequences for the zero-width characters we want to catch
local zws = ({
    ["\226\128\139"] = true, -- &#8203; &NegativeMediumSpace; zero width space
    ["\226\128\140"] = true, -- &#8204; &zwnj; zero width non-joiner
    ["\226\128\141"] = true, -- &#8205; &zwj; zero width joiner
    ["\239\187\191"] = true, -- &#65279; zero width no-break space
    ["\239\188\144"] = true, -- &#65296; fullwidth digit zero
})

-- This is our function: what we want to do when the alias.host meta callback fires.
function lua_zws_check:hostMeta(index, meta)
    if meta then
        for token in pairs(zws) do
            if string.find(meta, token, 1, true) then
                --nw.logInfo("*** BAD HOSTNAME CHECK: " .. meta .. " ***")
                nw.createMeta(self.keys["ioc"], "hostname_zero-width_space")
                break
            end
        end
    end
end

-- Step 2 Define your tokens
lua_zws_check:setCallbacks({
    [nwlanguagekey.create("alias.host")] = lua_zws_check.hostMeta, -- this is the meta callback key
})

 

 

After deploying the parser, I re-imported the new pcap file into my virtual packet decoder. The results came back quickly. I now had reliable detection for these zero-width space hostnames.

 

Since meta is displayed in the order in which it was written, we can get a sense as to the hostname that triggered this indicator.

 

Now that we have validated that the parser is working correctly in the lab environment, it was time to test some other capabilities of the Netwitness platform.

 

As we stated in the beginning, the query (as well as the parser) was flagging on DNS name resolutions that involved Unicode characters. Therefore, we wanted to create an alert when we saw the 'zero-width' meta in SMTP traffic. We then created an ESA rule in the lab environment.

 

To begin this alert, I went to the Configure / ESA Rules section in Netwitness and created a new rule using the Rule Builder wizard.

 

 

We gave the rule a name, which will be important in the next phase. Next, we created the condition by giving the condition a name and then populating the fields.

The first line is looking for the meta key and the meta value. The second is looking at the service type. Once it looks good, we hit save. We then hit save again and close out the rule.
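In our lab, the condition amounted to something like the following, expressed here as the equivalent query (service 25 being how NetWitness tags SMTP):

ioc = 'hostname_zero-width_space' && service = 25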

 

NOTE: In the first line, you see the "Array?" box checked. Some meta keys are defined as arrays, meaning they can contain multiple values in a session; the meta key 'ioc' is one such key. You may encounter a situation where a meta key should be set as an array but is not. If that is the case, it is a simple change in the ESA configuration.

 

Next, we want to deploy the rule to our ESA appliance. To do so, we clicked the ESA appliance in our deployments table.

Next, we add the rule we want to deploy. Then, we deploy it.

 

We then imported the PCAP again to see if our ESA rule fired successfully, which it did.

 

The last piece before production is to create an Incident rule, based on the ESA alerts. We move to Configure / Incident Rules and create a new rule.

I created the Incident rule in the lab and used the parameters shown below.

 

I then enabled the rule and saved it.

 

Now, when the incidents are examined in the Respond module, we can see our incidents being created.

 

To summarize this activity: we started from some new(ish) research and wanted to find a way to detect it in NetWitness. We found traffic that we were interested in and then built a Lua parser to improve our detection going forward. Next, I wanted to alert on this traffic only when it was in SMTP traffic and, because I wanted to work on some automation, created an Incident rule to put a bow on this. We now have actionable alerting after a small bit of research on our end.  My intent is to get the content of the parser added to one already in Live; until that time, it will be here to serve as a reference.

 

What are your use cases? What are some things you are trying to find on the network that Netwitness can help with? Let us know.

 

Good luck and happy hunting.

There have been a few blogs recently (Gathering Stats with Salt - BIOS/iDRAC/PERC Edition, RSA NetWitness Storage Retention Script) that leverage new functionality in v11.x for querying data directly from RSA NetWitness hosts through the command line.

 

This functionality - SaltStack - is baked into v11.x (Chef pun ftw!) and enables PKI-based authentication between the salt master (AKA admin server; AKA node0) and any salt minion (everything that's not the salt master, plus itself).

 

During a recent POC, one of the customer's use cases was to gather, report, and alert against certain host information within the RSA NetWitness environment - kernel, firmware, BIOS, OS, and iDRAC versions, storage utilization (%), and some others.

 

In order for NetWitness to report and alert on this information, we needed to take these details about the physical hosts and feed them into the platform so that we could use the resulting meta.  Thankfully, others before me did all the hard work figuring out the commands to run against hosts to extract this information, so all I had to do was massage the results into a format that could be fed into NetWitness as a log event, and write a parser for it.

 

The scripts, parser, and custom index entries are attached to this blog.  All the scripts are intended to be run from your 11.x Admin Server.  If you do choose to use these or modify them for your environment/requirements, be sure to change the IP address for the log replay command within the scripts

NwLogPlayer -r 3 -f $logEvent -s 192.168.10.14 -p 514

 

...to the IP of a Log Decoder in your environment.  
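To illustrate the massaging step, here is a minimal Python sketch of flattening gathered host details into a syslog-style event; the field names and format are hypothetical, as the attached parser defines the real format:

#!/usr/bin/env python
import socket
import time

def build_log_event(host, details):
    # Flatten host details into a single syslog-style line.
    fields = " ".join("%s=%s" % (k, v) for k, v in sorted(details.items()))
    return "%s nwhostinfo: host=%s %s" % (
        time.strftime("%b %d %H:%M:%S"), host, fields)

event = build_log_event("nw-decoder-01", {
    "bios": "2.8.0", "idrac": "2.61.60.60",
    "kernel": "3.10.0-957", "storage_used_pct": "72"})

# The attached scripts replay events with NwLogPlayer; plain syslog over
# UDP 514 to a Log Decoder also works for a quick test.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(("<13>" + event).encode(), ("192.168.10.14", 514))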

 

A custom meta and custom column group are also attached.

 

Although the RSA NetWitness platform gives administrators visibility into system metrics through the Health & Wellness Systems Stats Browser, we currently do not have a method to see all storage / retention across our deployment in a single instance or view.

 

Below you will find several scripts that will help us gain this visibility quickly and easily.

 

Update: Please grab the latest version of the script; some bugs were discovered that have since been fixed.

 

How It Works:

 

1. Dependency: get-all-systems.sh (attached; both v10 and v11 versions are available for your particular environment). Please run this script prior to running get-retention.py, as it requires the 'all-systems' file, which contains all of your appliances and services.

2. We then read through the all-systems file and look for services that have retention e.g. EndpointLogHybrid, EndpointHybrid, LogHybrid, LogDecoder, Decoder, Concentrator, Archiver.

3. Finally, we use the 'tlogin' functionality of NwConsole to allow cert-based authentication (thus no need to run this script with a username/password as input) to pull database statistics and output the retention (in days) for that particular service.

 

Instructions:

 

1. Run ./get-all-systems_v10.sh (for 10.x systems) or ./get-all-systems_v11.sh (for 11.x systems)

2. Run ./get-retention.py  (without any arguments). This MUST be run from Puppetmaster (v10) or Node0 (v11).

 

Sample Run: 

 

Please feel free to provide feedback, bug reports etc...

There have been many improvements to the RSA NetWitness product over the past several releases on the log management side of the house to help reduce the number of unparsed or misparsed devices.  There are still instances where manual intervention is necessary, and a report such as the one provided in this blog could prove valuable for you.

 

This report provides visibility into 4 types of situations:

 

Device.IP with more than 1 device.type

Devices that have multiple parsers acting on them over this time period, sorted from most parsers per IP to least

 

Unknown Devices

Unknown devices either have no parser detected for them or no parser installed/enabled for them.

 

Device.types with word meta

Device types with word meta indicate that a parser has matched a header for that device but no payload (message body) has matched a parser entry.

 

Device.type with parseerror

Devices that are parsing meta for most fields but have parseerror meta for particular meta key data. This can indicate that the format of the data going into the key does not match the format of the key (e.g., an invalid MAC address into eth.src or eth.dst, which are MAC-formatted keys, or text into an IP key).

 

Some of these categories are legitimate but checking this report once a week should allow you to keep an eye on the logging function of your NetWitness system and make sure that it is performing at its best.

 

The code for the Report is kept here (in clear text format so you can look at the rule content without needing to import it into NetWitness):

GitHub - epartington/rsa_nw_re_logparser_health 

 

Here's a sample report output:

 

Most people don't remember the well-known port number for a particular network protocol. Sometimes we need to refer to an RFC to remember which port certain protocols normally run over.

 

In the RSA NetWitness UI, the well-known name for the protocol is presented, but when you drill on it you get the well-known port number.

 

This can be a little confusing at times if you aren't completely caffeinated.☕

 

Well, here's some good news: you can use the name of the service in your drills and reports with the following syntax:

 

Original method:

Service=123 

 

New method:

Service="NTP"

 

You may get an error about needing quotes around the word; however, the system still interprets the query correctly.

 

 

This also works in profiles:

 

And in the Reporting Engine as well:

 

Good luck using this new trick!

   

(P.S. you can also use AND instead of && and OR instead of ||.)

RSA NetWitness v11.2 introduced a very useful improvement to the Investigation workflow with the Profile feature.  In previous versions, a Profile could have a pre-query set for it along with meta and column groups, but you were locked into using only those two features unless you de-activated your profile.

 

With v11.2 you are able to keep the pre-query set from the profile and pivot to other meta and column groups.  This ability allows you to use Profiles as bookmarks or starting points for investigations or drills, along with the folders that can be set in the Profile section to help organize the various groups that frame investigations properly.

 

Below is a collection of the profiles as well as some meta and column groups to help collect various types of data or protocols together.

 

GitHub - epartington/rsa_nw_investigation_profiles 

 

Protocols

Medium

Log Device Classes

UEBA

 

Let me know if these work for you; I will be adding more to the GitHub site as they develop, so check back.

Oftentimes, RSA NetWitness Packet decoders are configured to monitor not only ingress and egress traffic, but internal LAN traffic as well.  On a recent engagement, we identified a significant amount of traffic going to TCP port 9997.  It did not take long to realize this traffic was from internal servers configured to forward their logs to Splunk.

 

The parser will add to the 'service' meta key and write the value '9997'.  After running the parser for several hours, we also found other ports that were used by the Splunk forwarders.  

 

While there wasn't anything malicious or suspicious about the traffic, it was a significant amount of traffic that was taking up disk space.  By identifying the traffic, we can make it a filtering candidate.  Ideally, the traffic would be filtered further upstream at a TAP, but sometimes that isn't possible.

 

If you are running this parser, you could also update the index-concentrator-custom.xml and add an alias to the service types.  

 

 

...

 

 

If you have traffic on your network that you want better ways to identify, let your RSA account team know.  

 

Good luck, and happy hunting.

I helped one of my customers implement a use case last year that entailed sending email alerts to specific users when those users logged into legacy applications within their environment.

 

Creating the alerts for this activity with the ESA was rather trivial - we knew which event source would generate the logs and the meta to trigger against - but sending the alert via email to the specific user that was ID'd in the alert itself added a bit of complexity.

 

Fortunately, others have had similar-ish requirements in the past and there are guides on the community that cover how to generate custom emails for ESA alerts through the script notification option, such as Custom ESA email template with raw event payload and 000031690 - How to send customized subjects in an RSA Security Analytics ESA alert email.

 

This meant that all we had to do was map the usernames from the log events to the appropriate email addresses, enrich the events and/or alerts with those email addresses, and then customize the email notification using that information.  Mapping the usernames to email addresses and adding this information to events/alerts could have been accomplished in a couple different ways: either a custom Feed (Live: Create a Custom Feed) or an In-Memory Table (Alerting: Configure In-Memory Table as Enrichment Source). For this customer, the In-Memory Table was the preferred option because it would not create unnecessary meta in their environment.
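The CSV itself is simple; here is a hypothetical example of the mapping format (the enrichment was named user_emails with an email column, which is what the notification script below reads):

username,email
jsmith,jsmith@example.com
mjones,mjones@example.com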

 

We added the CSV containing the usernames and email addresses as an enrichment source:

 

....then added that enrichment to the ESA alert:

 

With these steps done, we triggered a couple alerts to see exactly what the raw output looked like, specifically how the enrichment data was included.  The easiest way to find raw alert output is within the respond module by clicking into the alert and looking for  the "Raw Alert" pane:

 

Armed with this information, we were then able to write the script (copy/pasting from the articles linked above and modifying the details) to extract the email address and use that as the "to_addr" for the email script (also attached at the bottom of this post):

#!/usr/bin/env python
from smtplib import SMTP
import datetime
import json
import sys

def dispatch(alert):
    """
    The default dispatch just prints the 'last' alert to /tmp/esa_alert.json. Alert details
    are available in the Python hash passed to this method e.g. alert['id'], alert['severity'],
    alert['module_name'], alert['events'][0], etc.
    These can be used to implement the external integration required.
    """

    with open("/tmp/esa_alert.json", mode='w') as alert_file:
        alert_file.write(json.dumps(alert, indent=True))

def read():
    #Parameter
    smtp_server = "<your_mail_relay_server>"
    smtp_port = "25"
    # "smtp_user" and "smtp_pass" are necessary
    # if your SMTP server requires authentication
    # used in "smtp.login()" below
    #smtp_user = "<your_smtp_user_name>"
    #smtp_pass = "<your_smtp_user_password>"
    from_addr = "<your_mail_sending_address>"
    missing_msg = ""
    to_addr = ""  #defined from enrichment table

    # Get data from JSON
    esa_alert = json.loads(open('/tmp/esa_alert.json').read())
    #Extract Variables (Add as required)
    try:
        module_name = esa_alert["module_name"]
    except KeyError:
        module_name = "null"
    try:
         to_addr = esa_alert["events"][0]["user_emails"][0]["email"]
    except KeyError:
         missing_msg = "ATTN:Unable to retrieve from enrich table"
         to_addr = "<address_to_send_to_when_enrichment_fails>"
    try:
        device_host = esa_alert["events"][0]["device_host"]
    except KeyError:
        device_host = "null"
    try:
        service_name = esa_alert["events"][0]["service_name"]
    except KeyError:
        host_dst = "null"
    try:
        user_dst = esa_alert["events"][0]["user_dst"]
    except KeyError:
        user_dst = "null"
    # Sends Email
    smtp = SMTP()
    smtp.set_debuglevel(0)
    smtp.connect(smtp_server,smtp_port)

    date = datetime.datetime.now().strftime( "%m/%d/%Y %H:%M" ) + " GMT"
    subj = "Login Attempt on " + ( device_host )
    message_text = ("Alert Name: \t\t%s\n" % ( module_name ) +
        " \t\t%s\n" % ( missing_msg ) +
        "Date/Time : \t%s\n" % ( date  )  +
        "Host: \t%s\n" % ( device_host ) +
        "Service: \t%s\n" % ( service_name ) +
        "User: \t%s\n" % ( user_dst )
    )

    msg = "From: %s\nTo: %s\nSubject: %s\nDate: %s\n\n%s\n" % ( from_addr, to_addr, subj, date, message_text )
    # "smtp.login()" is necessary if your
    # SMTP server requires authentication
    #smtp.login(smtp_user,smtp_pass)
    smtp.sendmail(from_addr, to_addr, msg)
    smtp.quit()

if __name__ == "__main__":
    dispatch(json.loads(sys.argv[1]))
    read()
    sys.exit(0)

 

And the result, after adding the script as a notification option within the ESA alert:

-----------------------------

 

Of course, all of this can and should be modified to include whatever information you might want/need for your use case.

Amazon Virtual Private Clouds (VPC) are used in hybrid cloud enterprise environments to securely host certain workloads, and customers need to enable their SOC to identify potential threats within these components of their infrastructure.  The RSA NetWitness Platform supports ingest of many 3rd party sources, including Amazon CloudTrail, GuardDuty, and now VPC Flow Logs.

 

The RSA NetWitness Platform has reporting content for Analysts to leverage in assessing the VPC security and overall health.  In https://community.rsa.com/docs/DOC-97451 we illustrate out-of-the-box reporting content to allow an analyst to get quick visibility into potential operational issues, such as highest and lowest accepted/rejected connections and traffic patterns on each VPC. 

 

VPC Flow Logs is an AWS monitoring feature that captures information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs and Amazon S3. After you've created a flow log, you can retrieve and view its data in the chosen destination. 

 

Logs from Amazon VPCs can be exported to CloudWatch. The RSA NetWitness Platform AWS VPC plugin uses the CloudWatch API to capture the logs.

 

 

 

 

                                                                                                                                   

This project is an attempt at building a method of orchestrating threat hunting queries and tasks within RSA NetWitness Orchestrator (NWO).  The concept is to start with a hunting model defining a set of hunting steps (represented in JSON), have NWO ingest the model and make all of the appropriate "look ahead" queries into NetWitness, organizing results, adding context, and automatically refining large result sets before distributing hunting tasks among analysts.  The overall goal is to have NWO keep track of all hunting activities, provide a platform for threat hunting metrics, and (most importantly) offload from the analyst as much as possible of the tedious and repetitive data querying, refining, and management that is typical of threat hunting.

 

Please leave a comment if you'd like some help getting everything going, or have a specific question.  I'd love to hear all types of feedback and suggestions to make this thing useful.

 

Usage

The primary dashboard shows the results of the most recent Hunting playbook run, which essentially contains each hunting task on its own row in the table, a link to the dynamically generated Investigation task associated with that task, a count of the look ahead query results, and multiple links to the data in (in this case) RSA NetWitness.  The analyst has not been involved up until now, as NWO did all of this work in the background.

 

Primary Hunting Dashboard

 

Pre-built Pivot into Toolset (NetWitness here)

 

The automations will also try to add extra context if the result set is Rare, has Possible Beaconing, contains Indicators, and a few other such "influencers" along with extra links to just those subsets of the overall results for that task.  Influencers are special logic embedded within the automations to help extract additional insight so that the analyst doesn't have to.  There hasn't been too much thought put into the logic behind these pieces just yet, so please consider them all proofs of concept and/or placeholders and definitely share any ideas or improvements you may have.  The automations will also try to pare down large result sets if you have defined thresholds within the hunting model JSON. The entire result set will still be reachable, but you'll get secondary counts/links where the system has tried to aggregate the rarest N results based on the "Refine by" logic also defined in the model, eg:

 

If defined in huntingcontent.json, a specific query/task can be given a threshold, and the system will try to refine results by rarity if the threshold is hit. The example above shows a raw count of 6850 results, but a refined set of 35 results mapping to the rarest 20 {ip.dst, org.dst} tuples seen.

 

For each task, the assigned owner can drill directly into the relevant NetWitness data, or can drill into the Investigation associated with the task.  Right now the investigations for each task are void of any special playbooks themselves - they simply serve as a way to organize tasks, contain findings for each hunt task, and provide a place from which to spawn child incidents if anything is found:

 

 

From here it is currently just up to the analyst to create notes, complete the task, or generate child investigations.  Future versions will do more with these sub-investigation/hunt task playbooks to help the analyst; for now it's just a generic "Perform the Hunt" manual task.  Note that when these hunt task investigations get closed, the Hunt Master will update the hunting table and mark that item as "complete", signified by a green dot and a cross-through as shown in the first screenshot.

 

How It Works

Playbook Logic

  1. Scheduled job or ad-hoc creation of "Hunt" incident that drives the primary logic and acts as the "Hunt Master"
  2. Retrieve hunting model JSON (content file and model definition file) from a web server somewhere
  3. Load hunting model, perform "look ahead", influencer, and refining queries (a condensed sketch of this step follows below)
  4. Create hunting table based on query results, mark each task as "In Progress", "Complete", or "No Query Defined"
  5. Generate dynamic hunting investigations for each task that had at least 1 result from step 3
  6. Set a recurring task for the Hunt Master to continuously look for all related hunt tasks (they share a unique ID) and monitor their progress, updating the hunting table accordingly.
  7. [FUTURE] Continuously re-query the result sets in different ways to find outliers (e.g. stacking different meta keys and adding new influencers/links to open hunting tasks)

(Both the "Hunt Master" and the generated "Hunt Tasks" are created as incidents, tied together with a unique ID - while they could certainly be interacted with inside of the Incidents panel, the design is to have hunters operate from the hunting dashboard)

 

The Hunting Model

Everything is driven off of the hunting model & content.  The idea is to be able to implement any model/set of hunting tasks along with the queries that would normally get an analyst to the corresponding subset of data.  The examples and templates given here correspond with the RSA Network Hunting Labyrinth, modeled after the RSA NetWitness Hunting Guide: RSA NetWitness Hunting Guide

huntingcontent.json

This file must sit on a web server somewhere, accessible by the NWO server. You will later configure your huntingcontentmodel.json file to point to its location if you want to manage your own (instead of the default version found here: https://raw.githubusercontent.com/smennis/nwohunt/master/content/huntingcontent.json)

 

This file defines hunting tasks in each full branch of the JSON file, along with queries and other information to help NWO discover the data and organize the results:

 

 

(snippet of hunting content json)

 

The JSON file can have branches of length N, but the last element in any given branch, which defines a single hunting technique/task, must have an element of the following structure. Note that "threshold" and "refineby" are optional, but "query" and "description" are mandatory, even if the values are blank.
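Based on that description, a leaf element looks something like this (the values are illustrative; only the element names come from the structure just described):

"WMI Lateral Movement": {
    "query": "boc = 'creates remote process using wmi command-line tool'",
    "description": "Look for WMI-based remote process creation.",
    "threshold": 1000,
    "refineby": "ip.dst"
}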

 

 

The attached example huntingcontent.json (the same as the GitHub link above, as of the first release) is meant to be a template, as it is currently at the very beginning stages of being mapped to the RSA Network Hunting Labyrinth methodology.  This will be updated with higher-resolution queries over time. Once we can see this operate in a real environment, the plan is to leverage a lot more of the ioc/eoc/boc and *.analysis keys in RSA NetWitness to take this beyond a simple proof of concept. You may also choose to completely define your own to get started.

 

huntingcontentmodel.json

This file must sit on a web server somewhere, accessible by the NWO server. A public version is available here: https://raw.githubusercontent.com/smennis/nwohunt/master/content/huntingcontentmodel.json but you will have to clone and update it to include references to your NWO and NW servers before it will work.  This serves as the configuration file specific to your environment: it describes the structure of the huntingcontent.json file, display options, icon options, language, resource locations, and a few other configurations. It was done this way to avoid hard-coding anything into the actual playbooks and automations:

 

model: Defines the heading for each level of huntingcontent.json.  A "-" in front means the level will still be processed programmatically but will not be displayed in the table.

 

language: This is a basic attempt at making the hunting tasks described by the model more human readable by connecting each level of the JSON with a connector word.  Again, a "-" in front of the value means it will not be displayed in the table.


groupingdepth: This tells NWO how many levels deep to go when grouping the tasks into separate tables. E.g., a grouping depth of "0" would produce one large table with each full JSON branch in a row, while a grouping depth of "3" will create a separate table for each group defined 3 levels into the JSON (this is what's shown in the dashboard table above).

 

verbosity: 0 or 1. A value of 1 means that an additional column will be added to the table with the entire "description" value displayed. When 0, you can still see the description information by hovering over the "hover for info" link in the table.

 

queryurl: This defines the base URL to which the specific queries (the 'query' element of huntingcontent.json) will be appended in order to drill into the data.  The example above is from my lab, so be sure to adjust this for your environment.

 

influencers: The set of influencers above are the ones that have been built into the logic so far.  This isn't as modular under the hood as it should be, but I think this is where there is a big opportunity for collaboration and innovation, and where some of the smarter & continuous data exploration will be governed.

     - iocs, criticalassets, blacklist, and whitelist are just additional queries the system will run to gain more insight and add the appropriate icon to the hunt item row.

     - rarity is not well implemented yet and just adds the icon when there are < N (10 in this case) results.  This will eventually be updated to look for rarity in the dataset against a specific entity (single IP, host, etc.) rather than the overall result count.

     - possiblebeacon looks for a 24-hour average of communication between two hosts signifying approximately 1 beacon per minute, 5 minutes, or 10 minutes, along with a tolerance percentage.  I am just experimenting with it at this point.

     - Note that the "weight" element doesn't affect anything just yet. The eventual concept is to build a scoring algorithm to help prioritize or add fidelity to the individual hunting tasks.
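Pulling those fields together, a stripped-down huntingcontentmodel.json might look something like the sketch below. Only the key names called out in this post are taken from the actual file; the values, the influencer sub-structure, and the exact nesting are placeholder assumptions you would replace with your own:

    {
      "_comment": "illustrative sketch only - values and sub-structure are assumptions",
      "huntingContent": "https://raw.githubusercontent.com/smennis/nwohunt/master/content/huntingcontent.json",
      "model": ["-Source", "Category", "Technique", "Task"],
      "language": ["-", "hunting for", "via", "-"],
      "groupingdepth": "3",
      "verbosity": "0",
      "queryurl": "https://nw-server.example.com/investigation/2/navigate/query/",
      "influencers": {
        "iocs": { "query": "ioc exists", "weight": 5 },
        "criticalassets": { "query": "alias.host = 'dc01.example.com'", "weight": 3 },
        "possiblebeacon": { "tolerance": 20, "weight": 4 }
      }
    }

If the sketch is roughly right, a drill-down link is then just queryurl with a task's URL-encoded query appended, which is how the table rows link back into the NetWitness investigation view.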

   

Instructions

Installation Instructions:

  1. Prerequisites:  RSA NetWitness Network (Packets) and Logs integration (the original version of the NetWitness query integration) installed.  Note that there is currently a v2 NetWitness integration, but this will not work with that version at this time due to the change in how the commands work. I will try to update the automations for the v2 integration ASAP.
    1. The v1 NetWitness Integration is included in the zip.  Settings > Integrations > Import.
  2. Create a new incident type named "Hunt Item" (don't worry about mapping a playbook yet)
  3. Import Custom Fields (Settings > Advanced > Fields) - import incidentfields.json (ignore errors)
  4. Import Custom Layouts (Settings > Advanced > Layout Builder > Hunt)
    1. Incident Summary - import layout-details.json
    2. New/Edit - import layout-edit.json
    3. Incident Quick View - import layout-details.json
  5. Import Automations (Automations > Import - one by one, unfortunately)

       - GenerateHuntingIncidents

       - PopulateHuntingTable

       - GenerateHuntingIncidentNameID

       - LoadHuntingJSON

       - NetWitness LookAhead

       - ReturnRandomUser

       - UpdateHuntingStatus

  6. Import Dashboard Widget Automations (Automations > Import)

       - GetCurrentHuntMasterForWidget

       - GetHuntParticipants

       - GetHuntTableForWidget   

  7. Import Sub-Playbooks (Playbooks > Import)

       - Initialize Hunting Instance

       - Hunting Investigation Playbook

  8. Import Primary Playbook (Playbooks > Import)

    - 0105 Hunting

  9. Map "0105 Hunting" Playbook to "Hunt" Incident Type (Settings > Incident Types > Hunt) and set the playbook to automatically start

  10. Map "Hunting Investigation Playbook" to "Hunt Item" Incident Type and set playbook to automatically start

  11. Import Dashboard

  12. Place huntingcontent.json and huntingcontentmodel.json (found within the www folder of the packaged zip) onto a web server somewhere, accessible by the NWO server. Note that by default the attached/downloadable huntingcontentmodel.json points at github for the huntingcontent.json file. You can leave this as is (and over time you'll get a more complete set of hunting queries) or create your own as you see fit and place it on your web server as well.

Configuration:

Before the first run, you'll have to make a few changes to point the logic at your own environment:

  1. Edit huntingcontentmodel.json and update all queryURL and icon URL fields to point at your NetWitness server and web server respectively.  You can also edit the "huntingContent" element of this file (not shown) to point at your own version of the huntingcontent.json file discussed above:
    (Top - huntingcontentmodel.json snippet, showing the references with respect to your standard NetWitness UI URL)
  2. Go into the "Initialize Hunting Instance" playbook, click on "Playbook Triggered", and enter the path to your huntingcontentmodel.json file (the one with updated fields pointing to NetWitness). If you leave it as is, none of the look ahead queries will work since no configuration file will be loaded.
  3. Create your first hunting incident: from Incidents > New Incident, select type "Hunt" and give it a time range. Start with 1 day for testing.
  4. Note that the playbook will automatically name the incident "Hunt Master" prepended with a unique HuntID. Everything is working if, in the Incidents page, you see a single Hunt Master and associated Hunt Items all sharing the same HuntID.

Opening up the Hunt Master incident Summary page (or Hunting Dashboard) should show you the full hunting table:

 

Please add comments as you find bugs or have additional ideas and content to contribute.

We are extremely proud to announce that RSA has been positioned as a “Leader” by Gartner®, Inc. in the 2018 Magic Quadrant for Security Information and Event Management research report for its RSA NetWitness® Platform.

 

The RSA NetWitness Platform pulls together SIEM, network monitoring and analysis, endpoint threat detection, UEBA and orchestrated response capabilities into a single, evolved SIEM solution. Our significant investments in our platform over the past 18 months make us the go-to platform for security teams to rapidly detect and respond to threats across their entire environment.

 

The 2018 Gartner Magic Quadrant for SIEM evaluates 17 vendors on the basis of the completeness of their vision and ability to execute. The report provides an overview of each vendor’s SIEM offering, along with what Gartner sees as strengths and cautions for each vendor. The report also includes vendor selection tips, guidance on how to define requirements for SIEM deployments, and details on its rigorous inclusion, exclusion and evaluation criteria. Download the report and learn more about RSA NetWitness Platform.
