All Places > Products > RSA NetWitness Platform > Blog > 2016 > September

Help us understand how your security team operates. We want to learn more about your work environment and the tools that you use, so we can tailor NetWitness to provide your team with the best experience possible.

Please click here to take the short and anonymous survey.

Hermes Bojaxi posted the following useful information, which is well worth sharing. Normally a Maintenance Job runs every week which shrinks the database transaction logs. If for some reason the Maintenance Job repeatedly fails, you may have very large database transaction logs.


You can shrink the transaction logs with this script, which is actually part of the Maintenance Job. The script should be run against either the ECAT$PRIMARY or the ECAT$SECONDARY database.

DECLARE @LogicalLogFileName nvarchar(50)
      , @SQL nvarchar(1000);

-- FILE_NAME(2) returns the logical name of the transaction log (file_id 2)
SELECT @LogicalLogFileName = FILE_NAME(2);

-- Shrink the log file, targeting a size of 1 MB
SET @SQL = 'DBCC SHRINKFILE(' + QUOTENAME(@LogicalLogFileName) + ', 1)';
EXEC (@SQL);


After running this script, go to the database -> Tasks -> Shrink -> Files and choose the option to release unused space.


Further information:


Shrinking the Transaction Log 

While working on a solution for collecting logs from a Blue Coat system in a DMZ, we had the requirement that the FTP/FTPS connection traverse a firewall. The issue that immediately became apparent was that allowing a client using Active FTP/FTPS to communicate with the Log Collector would require opening almost the entire port range on the firewall. To resolve this, we turned to a Passive FTP/FTPS configuration, which let us specify a port range for client/Log Collector communication and write a more acceptable firewall rule. The explanation below shows how port assignments work in FTP/FTPS communication.



Active FTP/FTPS

Active FTP/FTPS uses random ports to initiate the data channel connection from the Log Collector. This presents a challenge for use through a firewall, as you cannot predict which ports the server will use to initiate the data transfer.

                FTP/FTPS Client – Random Port1 --> Port 21 – Log Collector (Communication Channel)

                FTP/FTPS Client – Random Port2 <-- Random Port3 – Log Collector (Data Transfer Channel)

NOTE:  Firewalls that are FTP aware seem to work fine with this random data port communication, as they can see the data transfer channel communication coming back from the Log Collector to the client and will allow it.  However, when you switch to FTPS, the Data Transfer Channel is encrypted; the firewall cannot see that it is a Data Transfer Channel coming back from the Log Collector to the client and will block it.  This is when you have to use Passive FTP/FTPS, or open the entire port range to allow the Log Collector-initiated Data Transfer Channel to come back to the client.


Passive FTP/FTPS

Passive FTP/FTPS uses a defined set of ports for the data channel, and the connection is initiated from the client system, so the firewall rules can be written for specific ports instead of random ones.

                FTP/FTPS Client – Random Port1 --> Port 21 – Log Collector (Communication Channel)

                FTP/FTPS Client – Random Port2 --> Defined Passive Port – Log Collector (Data Transfer Channel)


Demonstration of Active FTP/FTPS and Passive Configuration Video



Passive Configuration Only Video


Help us understand some of the specifics of your organization's use of and/or needs from RSA threat intelligence within the NetWitness Suite.  Click here to take the quick survey.

Michael Sconzo

The Evolution of Cerber

Posted by Michael Sconzo Employee Sep 27, 2016

Here's a great bit of research by RSA Research along with associated Live content by the Content team.



Ransomware-as-a-Service (RaaS) offerings first emerged around May of 2015. They remove technical hurdles for would-be cyber criminals by providing configurable components that can be mixed and matched as needed based upon the runner's target demographic, support services (e.g., payment processing), and even customer service [1].

Subsequently, ransomware-derived revenues have skyrocketed over the past year as operators have honed and refined their business approach.  As of summer ’16, it is widely believed that ransomware represents the most profitable malware market to date for cyber criminals and dark web operators. 

Cerber pay screen


Cerber is perhaps the most profitable of recent ransomware campaigns, and recent estimates based upon analysis of statistics from counter-compromised affiliate panels project operator revenues at $2.5M for this year, based on a 40% cut of overall revenues[2].



The goal of this research effort is to investigate recent Cerber campaigns, identify deployment models and infrastructure, and create content/innovation that may aid in the detection of this ransomware. This is done by detonating multiple samples, analyzing the malware callbacks, and enumerating associated networks, behavior, and infrastructure. To accomplish this objective, several tools were used: Maltego, PassiveTotal, VirusTotal, Malware-Traffic-Analysis, Google, and others.



Research and enrichment of the core dataset produced significant insight into 5 distinct Cerber campaigns, including what we believe to be an alpha or pilot run spanning 5/11 – 6/1, two phishing-based campaigns in July, and two Exploit Kit (EK) based campaigns in August and into early September, which RSA Research believes is consistent with the purported improvements and timing for EK-delivery methods.


For the phishing-delivered campaigns, RSA researchers identified a clear Domain Generation Algorithm (DGA) and Top Level Domain (TLD) pattern, which characterizes probable payment processing sites.  Based on a number of shared indicators (IPs, SSL certificates, and domain registrations) that were correlated to previous Torrentlocker/Crypt0L0cker ransomware and Nuclear EK campaigns, as well as a number of AlienVault Open Threat Exchange postings[3], it is believed that these campaigns delivered mixed ransomware to victims.  Snapshots of the related Maltego graphs of these campaigns are below:



With regard to the EK-delivered Cerber campaigns, there is a significant evolution in complexity and scalability in the actor’s deployment model, as benchmarked from the alpha campaign through the August and September periods of activity.  Evidence of this is the use of both perishable (sometimes daily-rotating) IP infrastructure and nearly unique malware hashes created every 15 seconds[4].  Maltego snapshots of these networks with some technical details are below:



Regardless of deployment model changes, the IP-Geo check still functions as detailed in CheckPoint’s August 16th report[5] to bypass hosts in Eastern European countries or systems with correlating language settings.  IP geolocation services were seen from several providers including ‘’ and ‘’ (neither of which are inherently malicious).

"languages": [ 1049, 1058, 1059, 1064, 1067, 1068, 1079, 1087, 1088, 1090, 1091, 2072, 2073, 2092, 2115]

"countries": [ "am", "az", "by", "ge", "kg", "kz", "md", "ru", "tm", "tj", "ua", "uz"]

Directly following the IP-Geo checks, the malware still sprays one-way Command and Control (C2) via UDP port 6892 to the well-known netblock and somewhat less frequently to the netblock.  There has also been some speculation that this UDP capability could be weaponized for DDoS, where the victim could redirect all response traffic from the C2 subnet to a targeted host[6]; however, RSA analysis of the binaries did not identify ‘listen’ or redirect functions in current Cerber samples.

With regard to the ‘business’ side of Cerber, RSA was able to identify a slightly more sophisticated 16char-KEY[.]DGA[.]TLD pattern with 23 unique key values that correlate to embedded configuration files for the malware’s setup of bitcoin wallets for each victim.  This pay-site pattern was confirmed via the positive identification of 726 unique URLs[7], predominantly registered with ‘Eranet International Limited’ or ‘AlpNames Limited’, and hosted on both Tor nodes as well as the rotational infrastructure detailed above.
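As a rough illustration (not part of the original research tooling), the pay-site naming convention can be checked with a regular expression. A hedged Python sketch, using the 16-character key length described above, that tests a defanged hostname against the 16char-KEY[.]DGA[.]TLD pattern:

```python
import re

# Hypothetical check for the 16char-KEY[.]DGA[.]TLD pay-site pattern;
# the key length and defanged sample come from the findings above
PAY_SITE = re.compile(r'^[a-z0-9]{16}\.[a-z0-9-]+\.[a-z]{2,}$')

def looks_like_pay_site(fqdn):
    # Strip the defanging brackets before matching
    return bool(PAY_SITE.match(fqdn.replace('[.]', '.')))

looks_like_pay_site("4kqd3hmqgptupi3p.6j7jcn[.]bid")  # matches the pattern
```

Note this is a structural check only: any legitimate 16-character subdomain would also match, so it is a hunting aid rather than a verdict.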

"2016-09-01 17:17:32", "Payment Site", "Cerber", "4kqd3hmqgptupi3p.6j7jcn[.]bid", "hxxp://4kqd3hmqgptupi3p.6j7jcn[.]bid", "offline", " | |", "36352|16276", "US|CA"

While EK-delivered Cerber does present a challenge in diagnosing intertwined ransomware and exploit kit behaviors and artifacts, some attribution can be made to particular EKs by leveraging findings on both C2 callbacks and the pay-site patterns.  Specifically, the May-June Cerber campaign demonstrates the previously noted UDP callbacks to the netblock and also ‘cerberhhyed5frqa.[DGA].win’ as a naming convention for payment sites; each of these has been linked to RIG EK and the delivery of Cerber[8].  The August and September campaigns can also be attributed to a probable exploit kit.  One of the 20+ payment processing site keys noted in those campaigns was ‘unocl45trpuoefft[.]DGA[.]TLD’, which correlates in open source intelligence documentation to a known Magnitude EK naming convention[9].


These findings suggest that earlier Cerber campaigns may have been delivered by RIG, followed by the July phishing campaigns, and then the August-September Magnitude delivery campaigns; however, much more than Cerber was observed.  During the course of this research, numerous non-ransomware activities (e.g., malvertising and information stealing) and related infrastructures were also identified.  RSA believes that these observations demonstrate how campaign runners are diversifying across malvertising, EKs, and ransomware to drive multiple revenue streams from their campaigns. 


If this is the case, then Cerber-RaaS fits well within the model previously employed by Exploit Kit authors, supplying market demand for subversive and malicious software packages.  This also shows that dark web operators are adopting mainstream models for operations and service delivery, further increasing evidence that adversaries are borrowing legitimate business models.  The Stampado ransomware, with unlimited licenses offered for $39, is a compelling example[10] of how low the bar for market entry now is.



What remains unknown is how many different groups of actors or affiliates might be actively pushing Cerber ransomware.  Given the enormous payout potential, different TTPs for the phishing and EK delivered campaigns, and a lack of any co-use infrastructure… it is possible if not likely that different actors/affiliates were responsible for each respective infection vector.  However, without further evidence this notion remains speculative.


Threat Intelligence & Detection

By design, the evolving nature of Cerber’s malware, distribution, and rotational infrastructure limits the shelf life and effectiveness of any indicators of compromise (IOCs).  Despite this fact, RSA FirstWatch thought the subject matter significant enough to push two sets of threat intelligence into the ‘FirstWatch Exploit Domains’ and ‘FirstWatch Exploit IPs’ feeds on 9/3 and 9/9.  Each of these feeds is set to age off after 30 days.


In addition, an App Rule is now available via Live that detects a set of 23 unique pay-site hosts for Cerber ransomware, correlating to the embedded configuration files for the malware’s setup of bitcoin wallets for each victim.  This rule matches when the '' (packet) or 'fqdn' (web logs) meta begins with one of the identified hostname patterns.  Either the HTTP_lua or native HTTP parser, or one of the web log event sources, is required.  You must have the September 2016 or later release of a web log event source plus the Envision Config File for the FQDN to be populated.

 begins '25z5g623wpqpdwis', '27lelchgcvs2wpm7', '32kl2rwsjvqjeui7', '3qbyaoohkcqkzrz6', '4kqd3hmqgptupi3p', '52uo5k3t73ypjije', '6dtxgqam4crv6rr6', 'cerberhhyed5frqa', 'de2nuvwegoo32oqv', 'i3ezlvkoi7fwyood', 'kkd47eh4hdjshb5t', 'lpholfnvwbukqwye', 'mphtadhci5mrdlju', 'mz7oyb3v32vshcvk', 'pmenboeqhyrpvomq', 'rzss2zfue73dfvmj', 'stgg5jv6mqiibmax', 'twbers4hmi6dc65f', 'unocl45trpuoefft', 'vrvis6ndra5jeggj', 'vrympoqs5ra34nfo', 'wjtqjleommc4z46i', 'zjfq4lnfbs7pncr5' || fqdn begins '25z5g623wpqpdwis', '27lelchgcvs2wpm7', '32kl2rwsjvqjeui7', '3qbyaoohkcqkzrz6', '4kqd3hmqgptupi3p', '52uo5k3t73ypjije', '6dtxgqam4crv6rr6', 'cerberhhyed5frqa', 'de2nuvwegoo32oqv', 'i3ezlvkoi7fwyood', 'kkd47eh4hdjshb5t', 'lpholfnvwbukqwye', 'mphtadhci5mrdlju', 'mz7oyb3v32vshcvk', 'pmenboeqhyrpvomq', 'rzss2zfue73dfvmj', 'stgg5jv6mqiibmax', 'twbers4hmi6dc65f', 'unocl45trpuoefft', 'vrvis6ndra5jeggj', 'vrympoqs5ra34nfo', 'wjtqjleommc4z46i', 'zjfq4lnfbs7pncr5'

An ESA rule is also available that detects a pattern of Cerber ransomware in which a geolocation check of an IP is performed to bypass hosts in Eastern European countries, directly followed by one-way command and control (C2) via UDP port 6892. The time window, list of UDP port numbers, and IP geolocation check sites are configurable. The traffic_flow Lua parser and either the native DNS or DNS_verbose_lua parser are required. The ESA rule uses the following list of hostnames that were observed during the GeoIP check: myexternalip[.]com, ipecho[.]net, ip-addr[.]es, ipinfo[.]io, wtfismyip[.]com, freegeoip[.]net, curlmyip[.]com, ip-api[.]com, icanhazip[.]com.


Thanks to Kevin, Angela, and Ray for the data, research, and output.

The EsperTech Esper EPL Online tool can be a bit daunting if you are new to writing rules for your ESA, so here is a quick example that will hopefully get you started.



The screen is divided into three vertical areas:


- EPL Statements: where you define what your events look like and the ESA rule that will work on them

- Time and Event Sequence: where you enter your events and advance time

- Scenario Results: where the results of running the ESA rule against the sequence of events are shown


Here is an example to paste into the different areas so that you can see the results:


Paste into the EPL Statement Area:


create schema Event(user_dst string,device_ip string, ec_outcome string, ec_theme string, medium integer, ec_activity string);

@Name('Out') select * from Event;

@Name('Result') SELECT * FROM Event(

medium = 32

AND ec_activity = 'Logon'

AND ec_theme = 'Authentication'

AND ec_outcome = 'Failure'

).std:groupwin(user_dst).win:time_length_batch(60 sec,3)



Paste into the Time and Event Sequence Area:


Event={user_dst='joe', device_ip='', ec_outcome='Failure', ec_theme='Authentication', medium=32, ec_activity='Logon'}
Event={user_dst='joe', device_ip='', ec_outcome='Failure', ec_theme='Authentication', medium=32, ec_activity='Logon'}
Event={user_dst='joe', device_ip='', ec_outcome='Failure', ec_theme='Authentication', medium=32, ec_activity='Logon'}


In this example we have a simple ESA rule that detects a brute-force login attempt. We are looking for three failed logins within a 60-second period, grouped on the same user_dst. Our events describe a user 'joe' logging in from three different devices. Using t=t+n we advance the time between events by n seconds.


The statement

@Name('Out') select * from Event;

will print out any event.


The statement 

@Name('Result') SELECT * FROM Event( .....

will only print out events that match our criteria.
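For intuition, the grouping-and-window logic above can be sketched outside Esper. A plain-Python approximation (illustrative only; Esper's win:time_length_batch batches events per group, which this simplification glosses over): three failures from the same user_dst within 60 seconds raise an alert.

```python
from collections import defaultdict

WINDOW_SECONDS = 60
THRESHOLD = 3

# user_dst -> timestamps of recent failed logons
failures = defaultdict(list)

def on_failed_logon(user_dst, ts):
    """Record a failed logon; return True once it is the 3rd within 60s."""
    recent = [t for t in failures[user_dst] if ts - t < WINDOW_SECONDS]
    recent.append(ts)
    failures[user_dst] = recent
    return len(recent) >= THRESHOLD

# 'joe' fails three times, 10 seconds apart: only the third call alerts
alerts = [on_failed_logon('joe', t) for t in (0, 10, 20)]
# alerts == [False, False, True]
```

Failures from a different user accumulate in their own bucket, which is the same effect std:groupwin(user_dst) has in the EPL.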

Entropy is a term I am sure most of us are familiar with. In layman’s terms, it refers to the randomness and uncertainty of data; it is in this randomness that we can detect potentially malicious traffic.


A gentleman named George Zipf led the way in the study of character frequency in the early 1930s; his work was further expanded upon by Claude Shannon to examine the entropy of language. These two forms of analysis have become ingrained in the computer security domain and are often used in cryptography. But what if we used their ideas to help detect malicious traffic?


Some malicious actors utilise domain generation algorithms (DGA) to produce pseudo-random domain names for their C2 communications. If we apply Shannon’s entropy to these domains, we can calculate a score and possibly distinguish these maliciously formed domains from the norm:
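The calculation itself is small. A minimal Python version of Shannon's entropy over a string (the plugin described in this post is Java; this sketch just illustrates the formula):

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits per character: -sum(p * log2(p)) over character frequencies."""
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in Counter(s).values())

# A repeated character carries no information, while a DGA-style string
# with many distinct characters scores much higher.
low = shannon_entropy('aaaaaaaa')           # 0.0
high = shannon_entropy('x3j9qz7kfa2m4vbl')  # 4.0 (16 distinct characters)
```

Ordinary dictionary-word domains tend to land between these extremes, which is what makes a threshold-based alert feasible.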



Using the RSA Event Stream Analysis (ESA) component and a customised Java-based Shannon calculator, we can generate these entropy scores on the fly for any given metadata and, should a score exceed a threshold X, create an alert.


NOTE: Java plugins can be added to the ESA component as described by Nikolay Klender in his post - Extending ESA rules with custom function.


Once the Java plugin is implemented, we can create our ESA correlation rule to utilise the new plugin and calculate the entropy. In this example, we will use the plugin to calculate entropy for DNS domains using the following EPL:



SELECT * FROM Event(service = 53 AND calcEntropy(alias_host)>4);


The entropy threshold here is set to anything greater than ‘4’, but it can be adjusted depending on the results observed.


I have attached the Java used for calculating Shannon's entropy should anyone be interested.


DISCLAIMER: This is by no means a foolproof detection method for malicious traffic. The information is here to show the capabilities of the product and avenues of exploration to help thwart the adversary. This content is provided as-is with no direct RSA support; use it at your own risk. Additionally, you should always confirm architecture state before running content that could impact the performance of your SA architecture. 

Michael Sconzo

Content Update

Posted by Michael Sconzo Employee Sep 26, 2016

We've got some nice new additions to Live as well as a high-impact update with our HTTP_lua parser.


First off, we've expanded our detection capability and the following App Rules are now available via Live.


In addition, the team has pushed an update to the HTTP_lua parser. If you're running this in your environment, you'll want to make sure you update to this version. The high points of this release are:

  • 95% rewritten to better accommodate updates, performance and analysis.
  • Language extraction from Accept-Language request headers
  • Additional bug fixes 


Fun Fact: Did you know you could use a CSV file as a whitelist or blacklist in ESA?


Stay tuned for more!

In my previous post, Trend Analysis with the NetWitness Suite, I presented an approach to developing a baseline and performing trend analysis with ESA. As mentioned many times, every threat is different, and detection techniques not only can but must vary to effectively protect the businesses of our organizations.


There are situations in which threat patterns can be identified by simply reporting on new values of a given meta key, without the need to perform complicated statistical analysis: for example, when a new browser or a TLD never seen before shows up in our environment.


The NetWitness reporting engine has a very handy function called show_whats_new() which does the job for you. However, if you want to leverage the power of ESA to achieve the same, it is more challenging, since you need to work with large timeframes, which must be handled with care within ESA.


By using the same approach detailed in my previous post, the attached EPL can safely look at the last 30 days of every meta key you want to monitor and alert once there is a new value. Events are aggregated every minute, hour, and day so as to limit the impact on ESA performance and store in memory only the information required for the use case.


Multiple meta keys can be monitored by replicating and customizing the last statement.


From an implementation standpoint, the model creates a history of meta key/value pairs which is checked on a daily basis, alerting for each new value found. To set up a learning phase, the model also internally stores the current date so as to prevent alerting until the warm-up period is over.


Please note this is not RSA official/supported content so use it at your own risk!

Here is another example of using the context menu option in RSA NW to provide analysts with right-click functions to pivot into other sources of data (internal or external to your org).

Cymon.IO is an interesting site that provides information for a number of elements, so how would we create a context menu to pivot into the site for a meta value of interest?


    "displayName": "[Cymon.IO by eSentire]",
    "cssClasses": [
    "description": "Cymon.IO lookup for IP, hash, domain,URL",
    "type": "UAP.common.contextmenu.actions.URLContextAction",
    "version": "Custom",
    "modules": [
    "local": "false",
    "groupName": "externalLookupGroup",
    "urlFormat": "{0}",
    "disabled": "",
    "id": "CymonIOAction",
    "moduleClasses": [
    "openInNewTab": "true"

(I've only managed to get this method working with UDP)


Customers often want to forward syslog traffic that is being sent to NetWitness for Logs on to another syslog destination, so that the traffic is processed by both NetWitness and the other syslog receiver.


There are many ways that this can be achieved (such as Configure Syslog Forwarding to Destination - RSA Security Analytics Documentation ) but here is another method.


1) On the Remote Log Collector go to Event Sources -> Syslog and Config.

2) Disable the current syslog sources. These will typically be udp514 and tcp514. To disable, edit each source and untick the Enabled box.


3) Set up new TCP and UDP listeners on a different port. In this case we use UDP 5514 and TCP 5514.

4) Stop and start the Syslog Collection Method

Make sure that in the explore view, under logcollection/syslog/eventsources/syslog-tcp and syslog-udp, you set the following headers to true. This will ensure that when Security Analytics processes the message, the original IP is seen.



5) Confirm with netstat -na |grep -i 514 that the system is only listening on your new ports 5514

 netstat -na |grep -i 514
tcp 0 0* LISTEN
tcp 1 0 CLOSE_WAIT
tcp 0 0 :::5514 :::* LISTEN
udp 0 0*
udp 0 0*
udp 0 0 :::5514 :::*


6) Edit the file /etc/rsyslog.conf


Uncomment the following lines:

# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
And add the line

$PreserveFQDN on


7) At the end of the file add the following lines:


*.* @@




*.* @


This will forward all syslog traffic to the remote destination on port 514 (@@ forwards over TCP, @ over UDP).


Also add one of the following lines


*.* @@localhost:5514




*.* @localhost:5514


This will forward all syslog traffic to our Log Collector for processing. The difference is that if @ is used, the traffic gets forwarded as UDP; if @@ is used, the traffic gets forwarded as TCP. Forwarding as TCP is better, as large syslog messages can be transferred and TCP has more robust delivery than UDP.


8) Restart the rsyslog service with service rsyslog restart

If you find yourself looking for support on your NetWitness-purchased VNX, it is important to remember to contact NetWitness Support for assistance. Currently, any customer who has purchased a VNX with their NetWitness purchase must go through NetWitness Support to open a case with EMC.


When requesting a case be opened for the VNX, do the following:

  1. Make sure there is an active maintenance contract for the VNX.
  2. Retrieve the serial number of the VNX.
  3. Open the case with NetWitness Support.


Knowledge Base Article: 000034046 - How to open a case for issues with a DellEMC VNX or Unity array used with the RSA NetWitness Suite 

Getting tired of trying to ctrl+c, alt+tab, click, ctrl+v to copy a value from RSA NetWitness to another system to see if that indicator exists?  There must be a faster way to accomplish this, right?

The first part of this two-part post covers pivoting from RSA NW into Splunk; the second part will cover Splunk to RSA NW.


Enter the context menu option of RSA NetWitness


Let's say you have Splunk for log collection and RSA NetWitness for packet collection, and you want to be able to pivot between a few elements of metadata in both, making it easy for your analysts to move between the two products without the help of copy and paste.


Let's start with

Pivoting from RSA NW (NetWitness) to Splunk



In the Admin > System > Context Menu section we will add the following code to create the context menu option to pivot from RSA NW (ip.dst) to Splunk (dst).

You need to change the [splunk_server:port] to match your Splunk instance.

Save the edit, refresh the RSA NW page, and right-click on the blue meta value of ip.dst; you will now see External > [Pivot to Splunk Logs - Destination IP], which will take you to the Splunk interface and search for dst=[ip] over the last 30 days. (You can change the timeframe passed to Splunk via &earliest=-30d&latest=now.)



    "displayName": "[Pivot to Splunk Logs - Destination IP]",  

    "cssClasses": [  




    "description": "Splunk lookup Destination IP last 30 days",  

    "type": "UAP.common.contextmenu.actions.URLContextAction",  

    "version": "Custom",  

    "modules": [  



    "local": "false",  

    "groupName": "externalLookupGroup",  

    "urlFormat": "http://[splunk_server:port]/en-US/app/search/search?q=search%20dst%3D{0}&earliest=-30d&latest=now",  

    "disabled": "",  

    "id": "SplunkLogLookupDstExt",  

    "moduleClasses": [  




    "openInNewTab": "true"  


You could also create the following context menu to pivot from a number of fields into Splunk (ip.src,ip.dst,


    "groupName": "externalLookupGroup",  

    "openInNewTab": "true",  

    "urlFormat": "{0}&earliest=-30d&latest=now",  

    "moduleClasses": [  




    "type": "UAP.common.contextmenu.actions.URLContextAction",  

    "version": "Custom",  

    "id": "SplunkLogLookupGeneral",  

    "description": "Splunk search IP and Hostname",  

    "local": "false",  

    "displayName": "Pivot to Splunk Logs - General (IP and hostname)",  

    "modules": [  



    "disabled": "",  

    "cssClasses": [  









I have been working with a few customers to add custom CEF log sources to SA and got into using Lua to parse logs instead of customizing the cef.xml parser or other default parsers.  VxStream logs came my way via a side project from the developers of the sandbox software from Payload Security.


If you are looking for an alternative sandbox this one looks pretty interesting with a huge number of behaviour detections to flag and create reports.  I haven't focused on getting the files from either packets or malware to VxStream Sandbox yet but we were assured that there is an API that can be leveraged to post files to the sandbox.


Back to CEF logs... how would we onboard these CEF-formatted logs to RSA NetWitness Logs without customizing the default cef.xml parser?  CEF by default will parse the items in the first part of the message where the | values are.  Once you get past that, the cn* and cs* fields will need to be extracted manually with Lua.


Here is a sample log from Payload Security. (default extractions in bold):


Aug 18 10:26:15 CEF:0|Payload Security|VxStream|5.00|Sample Analysis Result - Malicious|Sample Analysis Result - Malicious|100|end=08/18/2016 15:22:05 cn1=100 cn1Label=Threat Score cn2=62 cn2Label=AV Detection Rate cs1=Trojan.GenericKD cs1Label=Malware Family cs2=4 cs2Label=EnvironmentID cs3=W7 32 bit Kernelmode cs3Label=Environment Description fileHash=8d79bba763f5cbe4b778ddae6de1c97a9aca7049763466ffc289cf1306c71932 fname=Multi_Process.bin fsize=2474496 fileType=PE32 executable (GUI) Intel 80386, for MS Windows request=\=4 msg=Malicious flexString1= flexString1Label=Uploader Comment \ cs4Label=Contacted Domains cs5= cs5Label=Contacted Hosts cs6= cs6Label=Compromised Hosts cs8=2013743 \n2013743 \n2013743 \n2013743 \n2013743 \n2013743 \n2013743 \n2013743 \n2013743 \n2013743 cs8Label=ET Alerts priority=9


From this we are going to extract meta from the CEF format where the default data isn't extracted (cs or cn fields):


Fields extracted:

  • Device IP -> IP of the VxStream Sandbox sending the logs
  • Medium -> 32 is RSA NW internal for logs (packets is 1)
  • Device.type -> payload_security_vxstream (Payload Security|VxStream)
  • Event.time.str -> analysis start time (08/18/2016 15:22:05)
  • -> domain name of the VxStream service/appliance
  • Product -> VxStream (VxStream)
  • Version -> version of the VxStream service/appliance (5.00)
  • Event.type -> from the CEF message (Sample Analysis Result - Malicious)
  • Event.desc -> from the CEF message (Sample Analysis Result - Malicious)
  • Severity -> from the CEF message (100)
  • Checksum -> fileHash (8d79bba763f5cbe4b778ddae6de1c97a9aca7049763466ffc289cf1306c71932)
  • Filename -> fname (Multi_Process.bin)
  • Extension -> from the filename (.bin)
  • Filename.size -> fsize (2474496)
  • url -> request (\=4 msg=Malicious flexString1= flexString1Label=Uploader Comment)
  • virusname -> cs1 (Trojan.GenericKD)
  • risk.num.sand -> cn1 (75)
      - Above 90: very sure
      - 75: pretty sure
  • Event.type -> Sample Analysis Result - Malicious (Malicious, Suspicious, No Threat, Unknown); matches the RSA Sandbox malware meta

To Do (requires more Lua foo):

  • -> cs4 ( \ cs4Label=Contacted Domains)
  • ip.dst -> cs6 (cs6= \n38.229.70.4 \n217.197.83.197 \n93.184.220.29 \n52.85.184.221 cs6Label=Compromised Hosts)
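Before committing to the Lua, it can help to prototype the extension-field extraction. Here is a hedged Python sketch (not the Lua parser itself) that pairs the cn*/cs* values with their *Label names; the trick is that each CEF value runs until the next key= token:

```python
import re

def parse_cef_extension(ext):
    """Split a CEF extension string into key/value pairs.

    Values may contain spaces, so each value runs until the next
    'key=' token (or the end of the string)."""
    return dict(re.findall(r'(\w+)=(.*?)(?=\s+\w+=|$)', ext))

def label_fields(fields):
    """Replace cn*/cs* keys with their human-readable *Label names."""
    return {fields.get(k + 'Label', k): v
            for k, v in fields.items() if not k.endswith('Label')}

sample = ("cn1=100 cn1Label=Threat Score cs1=Trojan.GenericKD "
          "cs1Label=Malware Family cs2=4 cs2Label=EnvironmentID")
meta = label_fields(parse_cef_extension(sample))
# meta: {'Threat Score': '100', 'Malware Family': 'Trojan.GenericKD',
#        'EnvironmentID': '4'}
```

The same pair-until-next-key logic is what the Lua parser has to implement for the cn*/cs* fields that the default CEF handling does not extract.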


To get these fields to be indexed you need to add the following changes to the index:


Log Decoder (table-map-custom.xml):

<!-- checksum malware hash -->

<mapping envisionName="checksum" nwName="checksum" flags="None"/>


Concentrator (index-concentrator-custom.xml):

<!-- checksum meta for vxstream logs -->

<key description="Checksum" format="Text" level="IndexValues" name="checksum" valueMax="250000" defaultAction="Open"/>


Restart the services to bring those keys online


You might want to create meta profile to help you locate the logs and set the metagroup for you automatically.

[UPDATE] Adding the Lua parser to the Log Decoder

  • Admin > Services > Log Decoder > Config > Parsers tab
  • Click Upload
  • Locate the Lua parser
  • Click Upload
  • Watch for the green bar indicating success
  • Close the window
  • Go back to the Config > General tab
  • Wait for the parsers to reload (the screen may stay blank for a bit while they reload)

Now the parser will show up in the top right of the Parsers section.



Here is how the parser looks when installed:

Here is how the meta looks when extracted from the logs (the parser shows up in the upper-right section, not the usual bottom right):

Meta profile (malware sandbox):

  • Device.type = payload_security_vxstream
  • Meta group = malware sandbox

Metagroup (malware sandbox):

  • Virusname (tbd)
  • Ip.dst (tbd)



You might also want to create app rules on your Log Decoders to flag events of interest (sandbox detects a malicious file but there is no AV signature for it; high-confidence detection of a malicious file):


Application Rules




name=nwfl_malicious_file_no_av_detection rule="device.type='payload_security_vxstream' && event.type = 'malicious file' && virusname !exists" alert=alert order=50 type=application


name=nwfl_malicious_file_high_confidence rule="device.type='payload_security_vxstream' && event.type = 'malicious file' && risk.num.sand = 90-u" alert=alert order=49 type=application

Help us gain some basic understanding of your organization's use of threat intelligence.  Click here to take the short survey.

The official Rapid7 Nexpose guide unfortunately seemed to be short on a few details (Rapid7 NeXpose Event Source Configuration Guide), so here is how I integrated the Windows version of Rapid7 Nexpose into Security Analytics.


I was using Nexpose 5.17.1 on a Windows 2008 Server.

The screenshots have been taken from Security Analytics 10.6.1


This document assumes that the reader is familiar with installing the SFTP Agent and setting it up.


  1. On your Nexpose Server, create a CSV report in Nexpose using the "Basic Vulnerability Check Results (CSV)" template.
  2. This will output a CSV report of the scan. However, the file will be gzip-compressed, which is not compatible with sending it to the Log Collector. As a result, we will decompress it using 7-Zip, so install 7-Zip on your server.
  3. Still on your Nexpose Server, create a directory called NexposeScripts and populate it with the contents of the archive that is attached to this document. Create a Windows scheduled task to run the batch file Nexpose.bat every 5 minutes.
  4. On the Nexpose Server, install the RSA SFTP Agent and use the attached sftagent.conf to process the Nexpose log messages.
  5. On the Log Collector, copy the file rapid7.xml to /etc/netwitness/ng/logcollection/content/collection/file and restart the logcollector service.
  6. On the Log Decoder, create a directory called /etc/netwitness/ng/envision/etc/devices/rapid7 and copy the files v20_rapid7msg.xml and rapid7.ini into this directory.
  7. Restart the logdecoder service.


The parser makes use of the vuln_ref reference key, so make sure that your table-map-custom.xml file contains the line

<mapping envisionName="vuln_ref" nwName="vuln.ref" flags="None" format="Text"/>


If everything is set up correctly, you should see a new rapid7 device type, with Threat Category, Threat Description, and the Vuln Ref key populated with CVE numbers.



Note: by default, the script Nexpose.bat will leave the report.csv.gz files in the original directory. If you want them deleted after processing, add the final line below (the del command) to c:\nexposescripts\nexpose.bat:


cscript nexpose-audits.vbs
cscript nexpose-authevents.vbs
cscript nexpose-nscevents.vbs
cd "C:\Program Files\rapid7\nexpose\nsc\htroot\reports"
for /R %%f in (report.csv.gz) do "c:\program files\7-Zip\7z.exe" e -y "%%f"
for /R %%f in (report.csv.gz) do del /q "%%f"
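For reference, here is a rough Python equivalent of the extract-and-delete steps in the batch file above. This is an illustrative sketch only; the deployed script remains nexpose.bat, and the report path is the one it uses.

```python
# Recursively gunzip report.csv.gz files and remove the compressed
# originals, mirroring the 7z extract and del lines of nexpose.bat.
import gzip
import shutil
from pathlib import Path

REPORT_DIR = Path(r"C:\Program Files\rapid7\nexpose\nsc\htroot\reports")

def extract_reports(report_dir):
    """Find every report.csv.gz under report_dir, decompress it next to
    the original, then delete the .gz (the optional cleanup step)."""
    for gz in Path(report_dir).rglob("report.csv.gz"):
        with gzip.open(gz, "rb") as src, open(gz.with_suffix(""), "wb") as dst:
            shutil.copyfileobj(src, dst)   # report.csv.gz -> report.csv
        gz.unlink()                        # remove the compressed original
```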

It seems there is a possibility that a CA issued duplicate certificates for a GitHub domain.


Could RSA NetWitness help locate whether any certificates were signed by the potentially offending CA, and whether this could impact your organization?


Let's see ...


Using this post to enable full indexing on the appropriate ssl.* metakeys, you could search for the CA name (in this case WoSign) with an exact-match query: = 'WoSign'

Or, if the CA name isn't exactly WoSign, you could use a contains 'WoSign' query to locate similar names and then tune the drill appropriately.


Then you could see all the domains that the certificate was used with as part of the communication and check whether you might be affected. You might also want to focus on outbound traffic (your users connecting to a GitHub domain with a cert signed by WoSign could be something to investigate).
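The two drill styles above can be sketched as simple filters: an exact-match query versus a substring (contains) match over certificate CA names. The records below are made-up sample data, not NetWitness output.

```python
# Illustrative certificate records; key names are informal, not exact
# NetWitness meta keys.
certs = [
    {"ssl.subject": "github.com",  "ca": "DigiCert SHA2 Extended Validation Server CA"},
    {"ssl.subject": "example.org", "ca": "WoSign CA Free SSL Certificate G2"},
    {"ssl.subject": "example.net", "ca": "WoSign"},
]

exact    = [c for c in certs if c["ca"] == "WoSign"]   # ca = 'WoSign'
contains = [c for c in certs if "WoSign" in c["ca"]]   # ca contains 'WoSign'
print(len(exact), len(contains))  # prints: 1 2
```

As the counts show, the contains drill catches variant CA names that an exact match would miss, which is why it is worth running both.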


From what I can see with my browser, Digicert should be the CA for GitHub

digicert ca


Taking this one step further, I also found that new services have been spun up to track how many certs have been created for each domain. Why not create a Context Menu plugin for RSA NetWitness to query one of these certificate transparency sites, so that analysts could get additional details about the domain and certs without endless copy + paste?


So here is the context menu item, which functions on the ssl.subject and related metakeys:


certificate transparency


    "displayName": "Google SSL Cert Transparency Check",
    "cssClasses": [
    "description": "",
    "type": "UAP.common.contextmenu.actions.URLContextAction",
    "version": "1",
    "modules": [
    "local": "false",
    "groupName": "externalLookupGroup",
    "urlFormat": "{0}&incl_exp=true&incl_sub=true",
    "disabled": "",
    "id": "GoogleSSLCERTCHECK",
    "moduleClasses": [
    "openInNewTab": "true",
    "order": ""

Recently, RSA announced three new certified threat intelligence partners with the NetWitness suite through the RSA Ready Partner Program.


These new certified partners can be utilized by the RSA NetWitness Suite to offer security analysts real-time context about an investigation so they can more quickly detect and respond to an incident.


As part of that announcement, we have added additional Threat Intelligence Platform (TIP) and Threat Intelligence Content (TIC) partners. Those partners include the following (click the respective links to go to each RSA Ready Implementation Guide):



For additional details about the announcement, you can also refer to the original press release here.

As part of our continued efforts to bring customers better and more advanced ways of detecting malware we've got a few things to announce.


First off, the following malware families now have content in Live for you to download and deploy. If you'd like more information on the malware family check out the links (RSA Research). Stay tuned for a few more in the upcoming weeks.


We are also hard at work bringing more relevant and timely content to our feeds. This week, as part of additional research, we added 4,000 unique ransomware domains to our c2-domain feed and 1,150 unique IPs to our c2-ip feed. This comes from analyzing 48 different ransomware families and over 1,600 samples. If you're especially concerned about ransomware, check out our Case Study Infographic on our main site.

HTTP Error code 522

Posted by Eric Partington Employee Sep 1, 2016

Interesting blog post from ISC SANS Handlers blog about http error code 522 (Connection timed out)


Which got me thinking: could RSA NetWitness help detect this potential indicator?


If you have packets, the http_lua parser registers the error codes in the error metakey.

If you have logs, the error codes should be registered in result.code from your firewall or proxy logs.


This post from Christopher Ahearn shows how to implement a quick parser to copy and split the value from error into result.code, giving analysts better pivoting if you happen to have both packets and logs in RSA NetWitness.


Here is what the error metakey looks like on my test system:

http error codes

Which has no error 522 unfortunately.


To locate with a drill in investigator:

error begins '522'


If the parser from Chris is implemented, or if you have logs that parse out that value for you:

result.code = 522


Once you test and validate what you find, you might want to create an application rule (looking at outbound traffic in particular, as that would be your malware calling home).



rule=service=80 && error begins 522 && direction=outbound


rule=result.code=522 && direction=outbound
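What the two rules above select can be illustrated with a toy filter: HTTP events whose error code begins with 522 (packets) or whose result.code is 522 (logs), restricted to outbound traffic. The sample events are invented for the sketch.

```python
# Toy event records; key names mirror the rule syntax above.
events = [
    {"service": 80, "error": "522 connection timed out", "direction": "outbound"},
    {"service": 80, "error": "404", "direction": "outbound"},
    {"result.code": 522, "direction": "inbound"},
]

def matches_522_outbound(e):
    """error begins '522' (packets) or result.code = 522 (logs), outbound only."""
    packet_hit = e.get("service") == 80 and str(e.get("error", "")).startswith("522")
    log_hit = e.get("result.code") == 522
    return (packet_hit or log_hit) and e.get("direction") == "outbound"

hits = [e for e in events if matches_522_outbound(e)]
print(len(hits))  # prints: 1 -- the inbound 522 is deliberately excluded
```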


There could be some legitimate reasons for error 522 (especially with Cloudflare, it seems), but the ISC handlers post also showed real malware being detected this way. Fine-tune the alerting and drills to get to the actionable results.


As always, comment or DM if you find something interesting or if there are particular tuning parameters that you find effective.
