
A customer had asked me if it was possible to collect logs centrally using WEC (Windows Event Collection) to reduce the amount of WinRM or Windows Legacy Collectors that were needed.  I hadn't heard of WEC so it took me a while to understand it and test it out in a lab.


This post describes what I did to make it work in my lab, how it behaves, and what limitations it might introduce if it's the collection method of choice for some or all Windows events in your environment.


In short:

Pro: it looks like a simple way to collect logs from assets that might change address regularly (DHCP assets or cloud environments where assets are spun up and torn down frequently) or for specific compliance assets (PCI/SOX).

Con: the logs have device.ip set to the collector, not the true source, so any alerts that use device.ip will not work as expected.  Other meta keys do reflect the true client system, so you could use those instead.


** I can't vouch for the security of what I did to make this work. I'm an SE, not a Windows security expert, so if you have found a more secure way to accomplish this, please comment and I'll test it out and update the post with details **


WEC can be set up in either collector-initiated or source-initiated mode. Collector-initiated was chosen for this test.

  • The collector machine in this test was a Server 2012 R2 DC
  • Clients were a mix of Win7, Win8, Win10, and Server 2008 R2


Computer Management (as admin) > System Tools > Event Viewer > Subscriptions > Create Subscription


Create subscription name

Destination Log: Forwarded Events

Collector Initiated

select Computers > pick the computers from the domain to add to the list or the computer group where they will reside

Events to collect:

select the event logs to collect (App, Sys, Security, Powershell)


Change User account

There was some difficulty in creating a service account with access to the Security logs, so I ended up using the machine account and leaving event delivery set to Normal


Now your collection is ready


Enable WinRM service and network connections to the service by opening cmd.exe (as admin)

winrm qc

select yes to enable service and network ports


Now add the machine account and the Network Service account to the Event Log Readers group to allow access to the Security events

Computer Management (as admin) > Local Users and Groups > Groups > Event Log Readers

Add the Network Service account



Add the machine account of the collector that will pull event logs from the client in the same way


A reboot of the collector/client was suggested to allow the Network Service account to properly gain access to the event logs

(This could all be accomplished with GPO and pushed out to all machines in a group or domain to make this easier)


Collector - Validate

Computer Management > Event Log > Subscriptions

Select the subscription just created, on the right click Retry and then Runtime Status to see the results of the collection


You will be able to see which clients are reachable and which are not


Now you can take a look at the Forwarded Events log to see which event logs you have collected to make sure your permissions are correct



Hopefully you now have logs being collected from your clients; all that remains is to configure WinRM to pull events from this collector, or add the ForwardedEvents channel to your existing WinRM collection.




If all works out well you should see events like this from your clients



  • The device.ip will be the collector computer, not the clients
  • Other meta keys will carry the true client information
  • Any event source monitoring for these forwarded clients may not work properly, as the source IP will be the collector and not the clients (which may be a good thing if you have a highly dynamic client environment that is creating issues for the HW policies)


Let me know your thoughts on this, and whether it is actively being used in the field (or why not)

UPDATED 2-1-2017 to Version 0.4


1-20-2017 (0.2): Added capability to auto-populate all appliance IP addresses. Substitute "autoiplist" rather than a user-defined iplist. See help for more information. Also fixed help file (previous typo). Removed prompts.


1-27-2017 (0.3): Added a number of SDK checks. Changed the logic on how it identifies the server type, added a size check for VolGroup00. If it shows up as 29.XX GB and your appliance is an R620, you're likely still utilizing the SD cards as part of the OS. Also added a check showing currently free memory. 


2-1-2017 (0.4): Added DRAC Firmware version check


I've worked with dozens of Security Analytics instances and have found myself repeatedly compiling the same information, usually relating to basic asset inventory, configuration information and simple health checks. In order to expedite this process, I've created a simple shell script that will log into each appliance in an environment, pull important information and aggregate it all into a csv file for easy reference. The nice thing about this script is that it obtains many of the important configuration items without needing to log into REST or perform NwConsole commands.



Prerequisites:

  • List of IP addresses or Hostnames of all SA Appliances (virtual or physical) - the list needs one IP/Host per line. This step can now be skipped by using the "autoiplist" option (see below)
  • Key exchange between the host where the script is installed and the SA Appliances - this is optional, but will make things go much faster. If it hasn't been set up, you'll just be prompted for the Host OS username (usually root) for each appliance the script connects to
  • A Linux host to run the script from that can connect to all the SA appliances defined in the IP list (I frequently use the SA Server host)


Installation Instructions:

  1. Copy the attached script to your host
  2. Make it executable
    1. chmod +x
  3. Ensure the md5sum matches the following:

    [root@NW-GUI new]# md5sum
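The integrity check in step 3 amounts to comparing the computed digest against the published one. A self-contained sketch of that comparison (the script's real expected checksum was not included above, so a well-known digest of the string "hello" stands in):

```shell
# Integrity check sketch: compare a computed md5 against a known expected value.
# The expected digest here is the md5 of the string "hello", used as a stand-in.
expected="5d41402abc4b2a76b9719d911017c592"
actual=$(printf 'hello' | md5sum | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH"
fi
```

If the digests differ, re-download the script rather than running it.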



./ <options>

This script is used to generate a comma-delimited inventory of a Security Analytics environment while also compiling several important configuration items per appliance.




IMPORTANT: This script functions best when key exchange has been performed between the SA Server and the

           Appliances. If not, it will prompt for a password for each appliance in the IP List





        -h : This help file

        -v : version information

        -a : Generates a list of all currently enabled appliance IPs and quits. File will be named "all_appliance_ips.out" 

        -p : when this option is used, all arguments must be passed in the proper order. If the user chooses "autoiplist" rather than defining a set list of IPs (see EX2), all appliances connected to the NW GUI will be examined. The arguments must be passed in the following order:

                EX: ./ -p <username> <iplist>  </output/path/filename.csv> </output/path/logfile.log>

                EX2: ./ -p <username> autoiplist  </output/path/filename.csv> </output/path/logfile.log>


What the script gathers and where it comes from:


Date Checked: date command
Hostname: hostname command
IP Address: hostname command
Server Type: dmidecode
Bios Version: dmidecode
Booting Kernel: uname -r
Installed Kernels: rpm -qa
Serial Number: dmidecode
Free Memory: /proc/meminfo
CPU Cores: /proc/cpuinfo
DNS Servers: resolv.conf
Search Domain: resolv.conf
NTP Status: ntpstat
Puppet Node ID: /var/lib/puppet/node_id
Services Installed: rpm -qa
Local Accounts per Service: /etc/netwitness/ng/Nw*.cfg files
Max Concurrent Queries Per Service: /etc/netwitness/ng/Nw*.cfg files
Max Pending Queries: /etc/netwitness/ng/Nw*.cfg files
Parallel Query: /etc/netwitness/ng/Nw*.cfg files
Parallel Value: /etc/netwitness/ng/Nw*.cfg files
Query Parse: /etc/netwitness/ng/Nw*.cfg files
Cache Window Minutes Per Service: /etc/netwitness/ng/Nw*.cfg files
DRAC IP: ipmitool
DRAC Firmware Version: ipmitool
PFring Version: rpm -qa
Capture Autostart: /etc/netwitness/ng/Nw*.cfg files
Capture Interface: /etc/netwitness/ng/Nw*.cfg files
Capture Device Params: /etc/netwitness/ng/Nw*.cfg files
Aggregating Devices: /etc/netwitness/ng/Nw*.cfg files
Aggregate Autostart: /etc/netwitness/ng/Nw*.cfg files
Aggregate Hours: /etc/netwitness/ng/Nw*.cfg files
Aggregate Interval: /etc/netwitness/ng/Nw*.cfg files
Aggregate Max Session: /etc/netwitness/ng/Nw*.cfg files
Active App Rules: /etc/netwitness/ng/Nw*.cfg files
Active Correlation Rules: /etc/netwitness/ng/Nw*.cfg files
Installed Feeds: deduplicated files in /etc/netwitness/ng/feeds
Custom Index Entries: cleaned index-*-custom.xml files
VolGroup00 Size: vgs (volume group scan)
Meta DIR Mounts: /etc/netwitness/ng/Nw*.cfg files
Packet DIR Mounts: /etc/netwitness/ng/Nw*.cfg files
Session DIR Mounts: /etc/netwitness/ng/Nw*.cfg files
Save Session Count: /etc/netwitness/ng/Nw*.cfg files
Index DIR Mounts: /etc/netwitness/ng/Nw*.cfg files
Index Slices Open: /etc/netwitness/ng/Nw*.cfg files
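Many of the rows above come from simple scrapes of the Nw*.cfg files. A minimal sketch of that approach, using a throwaway sample file and an illustrative key name (real files live in /etc/netwitness/ng):

```shell
# Sketch of the Nw*.cfg scraping approach with a throwaway sample file.
# The key name and value are illustrative stand-ins.
mkdir -p /tmp/ngdemo
printf 'concentrator/maxPendingQueries=100\nconcentrator/parallelValues=8\n' > /tmp/ngdemo/NwConcentrator.cfg
maxpending=$(grep -h 'maxPendingQueries' /tmp/ngdemo/Nw*.cfg | cut -d= -f2)
echo "$maxpending"
```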


  • The script has not been tested against Malware Appliances, does not work with WLCs (Windows-based), and will retrieve less information from ESAs due to their architectural differences.
  • This script is beta; if you notice some information that does not look correct, please let me know.

Looks like Windows 10 has introduced some new Security event IDs, as well as modified the content of some existing messages with more info (4688).


This page seems to have the best breakdown of the new and modified events


In short these are the new ones:


4798/4799 - previously only write operations were audited; now read and query operations are audited along with writes.

4826 - Boot Configuration Database

6416 - PNP events (this one might be interesting to watch around high-value assets like DCs)


There are a number of modified events that now have more information in them.


Great resource from Windows IT Pro that summarizes the changes well.


As always, feedback is welcome. 


Are you aware of these new event IDs?

Are you leveraging them in any alerts or reports?

If you didn't catch Saket's update about Log Parsers, be sure to look at all the improvements they made. Here's the January roll-up of the new detection capabilities added via Live.



Parser Additions

  • PVID
  • CustomTCP
  • Lua Mail Options file
  • rekaf
  • Cerber
  • Updates to the DynDNS parser


Feed Additions

  • Grizzly Steppe
  • Locky
  • Cerber
  • Schoolbell
  • Kingslayer
  • Tox Supernode



Report Updates

  • Added Tox traffic to the 'Encrypted Traffic' report

The RSA Live Content team has published updates for 6 more Log Parsers that generate the largest number of "Unknown Message Defect" support cases. Earlier, in October 2016 (Log Parser Improvements), 15 parsers were published.


These enhancements are part of a strategic initiative to drive improvements to Log Parsers.


Benefits from these improvements include:

  • Fewer Unknown Messages
  • Improved Device Discovery
  • Better Adaptability to newer versions of an Event Source
  • Reduced Parser Maintenance


To take advantage of these improvements you will need to download the latest versions of the parsers listed below from the Live Portal.




Event Source

Log Parser



Fortinet FortiGate


This parser has been redesigned to parse all event IDs generated by the event source. We have made the parser future-proof so it can parse newer event IDs that may be introduced in later versions of the product. It can also accommodate New/Unknown tags, which significantly reduces the number of unknown messages.


Microsoft Exchange Server


This parser can now identify all Microsoft Exchange events coming in via Windows Collection.


F5 Big-IP Application Security Manager


This event source has a structured log format and uses tag=value format. It has been improved to accommodate New/Unknown tags, which significantly reduces the number of unknown messages.
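The tag=value format these parsers consume can be illustrated with a minimal extraction in shell (the sample log line is invented for demonstration):

```shell
# Minimal illustration of tag=value parsing: pull one tag's value out of a line.
# The sample log line below is invented, not real F5 output.
line='devname=FG100 logid=0100032001 user=admin action=login status=success'
action=$(echo "$line" | tr ' ' '\n' | awk -F= '$1=="action"{print $2}')
echo "$action"
```

Because each value is addressed by its tag rather than its position, new or unknown tags don't break the extraction, which is why this format adapts well to newer event source versions.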


Bit9 Security Platform


This parser has been redesigned to parse all event IDs generated by the event source coming in via Syslog. We have made the parser future-proof so it can parse newer event IDs that may be introduced in later versions of the product.

This event source has a structured log format and uses tag=value format. It can also accommodate New/Unknown tags, which significantly reduces the number of unknown messages.


Cisco IronPort Email Security Appliance


This parser has been made future-proof to identify all events coming in via File Reader or Syslog.


Trend Micro Control Manager


This parser has been redesigned to parse all event IDs generated by the event source. It has been made future-proof so it can parse newer event IDs that may be introduced in later versions of the product.

This event source has a structured log format and uses tag=value format. It can also accommodate New/Unknown tags, which significantly reduces the number of unknown messages.


The RSA Live Content team will be delivering similar improvements for more parsers over the next two quarters.

Last Updated: 12:41 February 27th 2017

Latest Version: 17


I had a customer who wished to extract the raw event time from particular logs, because they use this raw event time for further analysis of events in a third-party system. The raw event time may differ greatly from the log decoder processing time, especially if there is a delay in the logs reaching the log decoder, perhaps due to network latency or file collection delays.


Currently they use the event.time field. However this has some limitations:

  • If the timestamps are incomplete, this field is empty. For example, many Unix systems generate a timestamp that does not contain the year.
  • Even for the same device types, event source date formats can differ. For example, a US-based system may log the date in MM/DD/YY format, whereas a UK-based system may log it in DD/MM/YY format. A date of 1/2/2017 could be interpreted as either 1st February 2017 or 2nd January 2017.
  • The event.time field is actually a 64-bit TimeT field, which cannot be manipulated within the 32-bit Lua engine that currently ships with the product.
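The ambiguity in the second bullet is easy to demonstrate with GNU date: the same "1/2/2017" string maps to two different epochs depending on which convention you assume.

```shell
# The same "1/2/2017" string yields two different epochs depending on convention.
us=$(date -u -d "2017-02-01 00:00:00" +%s)   # read as MM/DD/YY (US): 1 February 2017
uk=$(date -u -d "2017-01-02 00:00:00" +%s)   # read as DD/MM/YY (UK): 2 January 2017
echo "$us $uk"
```

The two values differ by 30 days' worth of seconds, which is exactly the kind of silent skew that breaks downstream time-based analysis.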


All these issues are being addressed in future releases of the product, but the method outlined here gives you something that can be used today.


Create some new meta keys for our Concentrators


We add the following to the index-concentrator-custom.xml files:


<key description="Epoch Time" level="IndexValues" name="epoch.time" format="UInt32" defaultAction="Open" valueMax="100000" />
<key description="Event Time String" level="IndexValues" name="eventtimestr" format="Text" valueMax="2500000"/>
<key description="UTCAdjust" level="IndexValues" name="UTCAdjust" format="Float32" valueMax="1000"/>

The meta key epoch.time will be used to store the raw event time in Unix Epoch format. This is seconds since 1970.

The meta key eventtimestr will be used to store a timestamp that we create in the next step.

The meta key UTCAdjust will hold how many hours to add or remove from our timestamp.
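As a quick sanity check of the epoch format, GNU date can convert an epoch value back to a readable UTC timestamp (this uses the sample value that appears later in this post):

```shell
# epoch.time stores seconds since 1970-01-01 UTC; GNU date converts it back.
ts=$(date -u -d @1485253416 +"%a, %d %b %Y %H:%M:%S GMT")
echo "$ts"
```

This prints Tue, 24 Jan 2017 10:23:36 GMT, matching any online epoch converter.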


Create a feed to tag events with a timestamp

Within the NetWitness Lua parser we are restricted in which functions we can use; the standard time functions are not available, so we need another method of getting a timestamp for our logs.


To do this, create a cronjob on the SA Server that runs the following script every minute; the script refreshes the CSV file every second.


#!/bin/bash
# Script to write a timestamp in a feed file (run from cron every minute)
devicetypes="rsasecurityanalytics rhlinux securityanalytics infobloxnios apache snort squid lotusdomino rsa_security_analytics_esa websense netwitnessspectrum bluecoatproxyav alcatelomniswitch vssmonitoring voyence symantecintruder sophos radwaredp ironmail checkpointfw1 websense rhlinux damballa snort cacheflowelff winevent_nic websense82 fortinet unknown"

for i in {1..60}; do
    mydate=$(date -u)
    echo "#Device Type, Timestamp" > /var/netwitness/srv/www/feeds/timestamp.csv
    for j in $devicetypes; do
        echo "$j,$mydate" >> /var/netwitness/srv/www/feeds/timestamp.csv
    done
    sleep 1
done

This will generate a CSV file that we can use as a feed, in the following format:

checkpointfw1,Wed Jan 25 09:17:49 UTC 2017
citrixns,Wed Jan 25 09:17:49 UTC 2017
websense,Wed Jan 25 09:17:49 UTC 2017
rhlinux,Wed Jan 25 09:17:49 UTC 2017
damballa,Wed Jan 25 09:17:49 UTC 2017
snort,Wed Jan 25 09:17:49 UTC 2017
cacheflowelff,Wed Jan 25 09:17:49 UTC 2017
winevent_nic,Wed Jan 25 09:17:49 UTC 2017

Here the first column of the CSV is our device.type and the part after the comma is our UTC timestamp.


We use this as a feed which we push to our Log decoders.


The CSV file is recreated every minute, and we also refresh the feed every minute. This means the timestamp could potentially be up to 2 minutes out of date compared with our logs.


Here is an example of the timestamp visible in our websense logs:

eventtimestr holds our dummy timestamp

epoch.time holds the actual epoch time that the raw log was generated. 



Create an App Rule to tag sessions without a UTC time

Create an App Rule on your log decoders that will generate an alert if no UTCAdjust meta key exists. This prevents you from having to define UTC offsets of 0 for devices that already log in UTC.


It is important that the following are entered for the rule:

Rule Name: UTC Offset Not Specified

Condition: UTCAdjust !exists

Alert on: alert

The Alert box is ticked.


Create a feed to specify how to adjust the calculated time per device IP and device type

Create a CSV file with the following columns (the DeviceIP values in the first column were stripped from the original post):

#DeviceIP,DeviceType,UTC Offset
rhlinux,0
snort,1.0
securityanalytics,-1.5

Copy the attached UTCAdjust.xml. This is the feed definition file for a multi-indexed feed that uses both device.ip and device.type; this cannot be done through the GUI.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<FDF xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="feed-definitions.xsd">
  <FlatFileFeed comment="#" separator="," path="UTCAdjust.csv" name="UTCAdjust">
    <MetaCallback name="Device IP" valuetype="IPv4">
      <Meta name="device.ip"/>
    </MetaCallback>
    <MetaCallback name="DeviceType" valuetype="Text" ignorecase="true">
      <Meta name="device.type"/>
    </MetaCallback>
    <LanguageKeys>
      <LanguageKey valuetype="Float32" name="UTCAdjust"/>
    </LanguageKeys>
    <Fields>
      <Field type="index" index="1" key="Device IP"/>
      <Field type="index" index="2" key="DeviceType"/>
      <Field type="value" index="3" key="UTCAdjust"/>
    </Fields>
  </FlatFileFeed>
</FDF>


Run the script below to generate the feed and copy it to the correct directories on any log and packet decoders. (There really isn't any reason to copy it to a packet decoder.) The script downloads the source CSV from a webserver, and could itself be scheduled as a cronjob depending on how often the feed needs to be updated.


wget http://localhost/feeds/UTCAdjust.csv -O /root/feeds/UTCAdjust.csv --no-check-certificate

find /root/feeds | grep xml > /tmp/feeds
for feed in $(cat /tmp/feeds); do
    FEEDDIR=$(dirname $feed)
    FEEDNAME=$(basename $feed)
    NwConsole -c "feed create $FEEDNAME" -c "exit"
done

# The decoder addresses were removed from the original post; substitute your own.
scp *.feed root@<log-decoder-1>:/etc/netwitness/ng/feeds/
scp *.feed root@<log-decoder-2>:/etc/netwitness/ng/feeds/
scp *.feed root@<packet-decoder>:/etc/netwitness/ng/feeds/
NwConsole -c "login <log-decoder-1>:<port> admin netwitness" -c "/decoder/parsers feed op=notify" -c "exit"
NwConsole -c "login <log-decoder-2>:<port> admin netwitness" -c "/decoder/parsers feed op=notify" -c "exit"
NwConsole -c "login <packet-decoder>:<port> admin netwitness" -c "/decoder/parsers feed op=notify" -c "exit"


Use a Lua parser to extract the event time and then calculate epoch time.


We then use a Lua parser to extract the event time from the raw log and calculate the epoch time. Our approach is as follows:


  • Define the device types that we are interested in
  • Create a regular expression to extract the timestamps for the logs we are interested in
  • From this timestamp, add additional information to calculate the epoch time. For example, for timestamps without a year, I assume the current year and then check that this does not create a date that is too far in the future. If it is, I use the previous year instead. This accounts for logs around the December 31st / January 1st boundary.
  • Finally, adjust the calculated time depending on the UTC Offset feed.
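The year-boundary handling in the third bullet can be sketched in shell (GNU date assumed; the one-day tolerance is an illustrative choice, not the parser's exact threshold):

```shell
# Year-boundary handling: parse a year-less syslog timestamp, assume the
# current year, and fall back one year if that lands too far in the future.
# GNU date assumed; the one-day tolerance is an illustrative choice.
ts="Dec 31 23:59:59"
year=$(date -u +%Y)
candidate=$(date -u -d "$ts $year" +%s)
now=$(date -u +%s)
if [ "$candidate" -gt $(( now + 86400 )) ]; then
  candidate=$(date -u -d "$ts $(( year - 1 ))" +%s)
fi
echo "$candidate"
```

Run any time other than late December, the fallback fires and the December 31st timestamp is placed in the previous year, which is the desired behavior for logs arriving just after New Year.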


I've attached and copied the code below. This should be placed in /etc/netwitness/ng/parsers on your log decoders.

Currently this Lua parser supports:


  • windows event logs
  • damballa logs
  • snort logs
  • rhlinux
  • websense


The parser could be expanded further to account for different timestamp formats for particular device.ip values. You could create another feed to tag the locale of your devices and then use this within the Lua parser to make decisions about how to process dates.


Here is the finished result:



I can now investigate on epoch.time.


For example, 1485253416 converts (via any epoch converter) to:

GMT: Tue, 24 Jan 2017 10:23:36 GMT


and all the logs have an event time in them of 24th January 10:23:36



 Please join us for the RSA NetWitness Suite Customer Summit at the RSA Conference 2017 in San Francisco.


You'll get a chance to see the RSA NetWitness Suite in action, take a sneak peek into the product's roadmaps as well as network and connect with your peers. The Customer Summit is a great opportunity to informally meet with the RSA NetWitness Product Team to share insights and gather information – and to have some fun as we kick off RSA Conference 2017.


We hope that you can join us for the Customer Summit!


Date: Monday, February 13, 2017

Time: 3:00 - 6:00 PM


RSVP now!


Please contact Mary Roark with any questions

Let's say you have NetWitness packet capture and you have located a suspicious executable that you want to check against VirusTotal or another hash-lookup site to see if there are any matches. How would you go about that in the most efficient way possible?


Luckily, the context menu function can save you from copy-paste madness.


To use this context menu you need to be in the Events section of Investigator, looking at the files in the session.

Investigator > Events (where filename exists) > double-click on the session


You will see the hashes on the right for each of the files located in the session

You can right-click on the hash and select the option to submit it to VirusTotal (or whatever site you want to add to check the hash against)



VT will open in a new tab with the hash passed over to search/report on
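The substitution the URLContextAction performs can be sketched in shell; the URL pattern below is a hypothetical stand-in, since the real VirusTotal URL was stripped from the original post:

```shell
# The URLContextAction substitutes the selected hash into the {0} placeholder
# of urlFormat. The URL pattern here is a hypothetical stand-in.
urlFormat="https://example.com/search?query={0}"
hash="5d41402abc4b2a76b9719d911017c592"
url="${urlFormat//\{0\}/$hash}"
echo "$url"
```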


Here is the context menu definition (the array contents and the VirusTotal URL inside urlFormat were lost from the original post):

{
    "displayName": "[VirusTotal Hash]",
    "cssClasses": [],
    "description": "",
    "type": "UAP.common.contextmenu.actions.URLContextAction",
    "version": "Custom",
    "modules": [],
    "local": "false",
    "urlFormat": "{0}",
    "disabled": "",
    "id": "vtHashLookup",
    "moduleClasses": [],
    "openInNewTab": "true"
}

Michael Sconzo

Content Update

Posted by Michael Sconzo Employee Jan 13, 2017

Hopefully everybody had a great holiday season! I know we did, and we've been getting some new capabilities into Live.


For starters, if you're running 10.6.2, you'll notice 2 new bundles: the Starter Pack for Logs and the Starter Pack for Packets. These provide a great starting point to make sure you can find some interesting activity in 10.6.2 moving forward, and to ensure that dashboards populate if you have the appropriate data coming into the NetWitness Suite.


App Rules, Parsers, and Reports.

  • CustomTCP Parser - Schoolbell Malware
  • Rekaf malware Parser - Schoolbell Malware
  • Updated Cerber Parser
  • Updates to the Dynamic DNS parser
  • Updated the Encrypted Traffic report with Tox protocol identification


Lots of feed updates for: Locky, Cerber, Schoolbell, Kingslayer, and Grizzly Steppe


In addition First Watch has been putting some great blog posts out there!


As usual more great stuff on the horizon.

I have a customer who uses something called a "Data Diode" to enforce one-way connectivity through their network.

One result of this is that any syslog that is sent through the diode gets its device IP changed.


For example, any message sent through the diode would have the source IP address and sequence numbers prepended.


Original Message:


Jan 11 18:01:13: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet13/30, changed state to up


Message after Passing through Data Diode and seen by RSA Netwitness for Logs: 1515391: Jan 11 18:01:13: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet13/30, changed state to up


Unfortunately, this means that device.ip is populated with the Data Diode's address rather than the original source address. The Lua parser below checks logs coming from the same IP as the Data Diode; if they have this header format, the original IP address is extracted and stored in device.ip.


This method works where logs are parsed by Security Analytics even with this additional header.


1) Edit DataDiode.lua at line 41. This holds the IP to check against, expressed in decimal; the attached file ships with 3232267011 (the decimal form of 192.168.123.3).
2) Replace it with the address of your data diode in decimal format.
3) Copy the parser to /etc/netwitness/ng/parsers on each of your log decoders.
4) Reload your parsers (with whatever script you normally use).
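Step 2's dotted-to-decimal conversion can be done in bash; 192.168.123.3 here is simply the dotted form of the 3232267011 sample value used in the attached parser:

```shell
# Convert a dotted IPv4 address to the decimal form the parser compares against:
# a.b.c.d -> a*256^3 + b*256^2 + c*256 + d
ip="192.168.123.3"
IFS=. read -r a b c d <<< "$ip"
dec=$(( a * 256**3 + b * 256**2 + c * 256 + d ))
echo "$dec"
```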

local lua_DataDiode = nw.createParser("DataDiode", "Registers the IP given by the Data Diode into device.ip")

--[[
Takes a message from a data diode and adds the IP address supplied into device.ip
Sample Message: 8936: 008934: Jan 11 17:55:03.566: %SYS-6-LOGGINGHOST_STARTSTOP: Logging to host Port 514 started - CLI initiated

There can be any number of sequence numbers after the IP and the messages might be coming from any number of different event sources
]]

local deviceip = nil
local devicetype = nil

function lua_DataDiode:sessionBegin()
    deviceip = nil
    devicetype = nil
end

function lua_DataDiode:CheckDevice(index, dtype)
    if dtype == 'checkpointfw1' then
        devicetype = dtype
    end
end

function lua_DataDiode:CheckDeviceIP(index, dip)
    -- Note: IP addresses are stored in DECIMAL notation, so convert the dotted
    -- value to decimal: a.b.c.d is a*256^3 + b*256^2 + c*256 + d
    -- 192.168.123.3 is therefore 3232267011 (edit this to match your diode)
    if dip == 3232267011 then
        deviceip = dip
        -- nw.logInfo("DataDiode: Matched")
    end
end

function lua_DataDiode:register()
    if deviceip then
        -- Reparse the message to recover the original source IP:
        local fullPayload = nw.getPayload():tostring()
        local o1, o2, o3, o4, sequence, rubbish = string.match(fullPayload, "(%d+).(%d+).(%d+).(%d+)%s(%d+):(.*)")

        -- Check we have an IP address, then register it as device.ip
        if o1 and o2 and o3 and o4 then
            local host = o1 * 256^3 + o2 * 256^2 + o3 * 256 + o4
            -- nw.logInfo("DataDiode: Registered New Device IP: " .. host)
            nw.createMeta(self.keys["device.ip"], host)
        end
    end
end

-- The wiring below was truncated in the original post; the setKeys call, the
-- OnSessionEnd hook, and the nw.createMeta line above are reconstructed.
lua_DataDiode:setKeys({
    nwlanguagekey.create("device.ip", nwtypes.IPv4),
})

lua_DataDiode:setCallbacks({
    [nwevents.OnSessionBegin] = lua_DataDiode.sessionBegin,
    [nwevents.OnSessionEnd] = lua_DataDiode.register,
    --[nwlanguagekey.create("device.type", nwtypes.Text)] = lua_DataDiode.CheckDevice,
    [nwlanguagekey.create("device.ip", nwtypes.IPv4)] = lua_DataDiode.CheckDeviceIP,
})


This is a collection of 2017 security predictions made by various organizations. I have put them under 4 categories: 1) Infosec and cyber crime, 2) Ransomware, 3) IoT and 4) Drones.


Infosec and cyber crime


Minority Report (from Infosec Edition)

"Math, machine learning and artificial intelligence will be baked more into security solutions. Security solutions will learn from the past, and essentially predict attack vectors and behavior based on that historical data," says Cunningham, who is director of cyber operations for A10. "This means security solutions will be able to more accurately and intelligently identify and predict attacks by using event data and marrying it to real-world attacks.”


Aftershock password breaches will expedite the death of the password


A joint international effort to fight the cyber crime

We will see the consolidation of collaboration between law enforcement agencies worldwide, joining forces against criminal organizations across the world.


Data breaches 3.0

Instead of stealing data, attackers in 2017 will seek to manipulate data, unleashing potentially dire and long-lasting consequences.


New technologies such as Blockchain may be used to enhance trust between stakeholders and facilitate exchange of threat intelligence among industries (from APAC)

The setup of more Information Sharing and Analysis Centers (ISACs) will form platforms for both public and private sector participants to share threat intelligence. However, participants are wary of exposing their weak security posture when contributing intelligence derived from a successful attack, and there are issues of untrusted sources that may contribute wrong intelligence. Blockchain may emerge as the technology to facilitate the exchange, as it authenticates the trusted party to contribute, obfuscates the contributor's details with anonymity, and offers a tamper-proof system that prevents unauthorized alteration of any data shared.


Cybercriminals focus on crypto currencies

Cyber criminals will continue to show a great interest in earning opportunities offered by cryptocurrencies. Security firms will continue to detect malware specifically designed to steal crypto currencies or to abuse victim’s resources for mining activities. The Zcash currency will probably offer the greatest financial opportunity to criminal syndicates. Zcash mining will remain among the most profitable compared to other cryptocurrencies; this means more opportunity for cyber criminals that started creating botnets for mining.


The number of cyber-attacks will continue to grow almost in every industry.

It is very easy to predict a constant increase in cyber-attacks in the wild. Healthcare, energy, and retail will be the sectors most targeted by cyber criminals. While enterprises will improve their security posture, SMBs will continue to be exposed to hacker attacks.

Lack of awareness of cyber threats and significant cuts to cyber security budgets are the principal problems for SMBs.




Ransomware, one of the most dangerous cyber threats (Infosec Institute)

Ransomware will be one of the most dangerous menaces in the threat landscape. The number of new Ransomware families will increase, and the malware authors will implement new features to make these specific threats even more efficient and hard to detect. Security experts will discover a greater number of ransom-as-a-service platforms.


Ransomware gets physical

Attackers will take over and disable hardware as a way to extort money from corporate victims.


Business Email Compromise (BEC) attacks will overtake Ransomware and Advanced Persistent Threat (APT) attacks

BEC generally happens when the email accounts of key executives are compromised, and involves payments made to fraudulent bank accounts. In Singapore alone, about S$19 million was lost to BECs between January and September 2016, a 20% increase in the number of such cases compared to the same period the previous year. Police investigations revealed that the scam usually involves businesses with overseas dealings where email is the main form of communication.

"As BECs are relatively easier to execute and evades cyber defense tools better than other popular attack vectors such as ransomware and APTs, it can potentially be the main cyber threat inAsia," notedCharles Lim, Industry Principal, Cyber Security practice, Frost & Sullivan,Asia Pacific.


Ransomware At Your Service

"As awareness around ransomware grows and fewer people click on links, ransomware operators will need to take steps to improve their ransomware conversation rate by making it easier for ransomware victims to pay up. In 2017, we’ll see the widespread availability of ransomware customer support with more attackers offering FAQs, tech support forums, and even call centers to walk victims through paying and restoring their data," says Todd O'Boyle, co-founder and CTO of Percipient Networks. "And to increase their chances of being paid, many ransomware operators will lower their prices, be open to negotiation, and offer discounts.”





IoT bankruptcy

Companies that refuse to bake security into their IoT products will suffer financial repercussions.


IoT devices, a dangerous weapon in the wrong hands (Infosec Institute)

The lack of security by design and poor security settings will be the principal reasons for the success of the attacks that will target IoT devices next year. Unfortunately, IoT vendors will continue to put devices on the market that are easy for crooks to exploit for cyber-attacks. We will see a significant diffusion of ThingBots, and some of them will also be offered for rent to power massive DDoS attacks. IoT incidents and the increase in cyber threats will prompt regulatory responses.

Rubber ducky botnet army

"We expect to see hackers continue to exploit IoT device vulnerabilities to launch attacks, and they will likely use Edwin, the app-connected smart duck who will be the biggest security threat of the year," says Jeff Harris, vice president of solutions for Ixia. "Hackers will leverage Edwin to launch the “Rubber Ducky Botnet Army” of 2017, making it critical for organizations to better defend their networks to prevent the strong DDoS attacks made possible through a yellow ducky.”


Not A Movie Title: Return Of The Worm

“2017 will be the return of the worm," says Lamar Bailey, senior director of security R&D at Tripwire, specifically pointing to IoT applications as prime targets. "The inherent insecurity in the majority [of] IoT devices, due to the fact vendors are valuing time to market over security, makes them ripe for exploit. Consumers are buying and installing these devices in record numbers to make their life easier but in many cases they are opening up their homes to complete external surveillance and control.”





"Drones have their own unique identity but they could be considered mobile as well as IoT devices as they start connecting with other devices," says Mandeep Khera, CMO of Arxan. "As drones start getting more used for deliveries of goods, expect dronejacking and other attacks. Hackers can also cause drones to malfunction with malware, resulting in injuries.”


More drones will be used to facilitate cyber attacks (from APAC)

A group of researchers from iTrust, a Center for Research in Cyber Security at the Singapore University of Technology and Design, demonstrated that it is possible to launch a cyber attack using a drone and a smartphone.  In the future, it is expected that drones will be an easy way to scan for unsecured wireless traffic as a way of performing war driving attacks.

Candygram for Mongo??

Posted by Kevin Stear Employee Jan 10, 2017
Over the last several weeks, the security community has bitten its collective tongue while watching thousands of Internet-accessible MongoDB instances get pwned at an alarming rate.  In fact, according to an article published by Bleeping Computer this morning:

The number of hijacked MongoDB servers held for ransom has skyrocketed in the past two days from 10,500 to over 28,200, thanks in large part to the involvement of a professional ransomware group known as Kraken.

According to statistics provided by two security researchers monitoring these attacks, Victor Gevers and Niall Merrigan, this group is behind nearly 16,000 hijacked databases, which is around 56% of all ransacked MongoDB instances.

The Kraken group got involved in these MongoDB attacks on Friday, January 6, seeing how successful and profitable previous attacks from other groups had been.

The vulnerability (and potential ransoming) of thousands of MongoDB instances comes down to two common security denominators: authentication/authorization and network access control.  First, MongoDB by default employs no authentication for read/write access, although a number of security extensions are available and routinely utilized.  As for networking, accessibility of the default port allocations (e.g., 27017 default, 28017 REST) needs to be controlled via basic IT hygiene measures such as iptables and firewalls.
These are basic security aspects (well documented by MongoDB), and yet people continue to deploy Internet-facing MongoDB instances with default or improper configurations, leaving themselves extremely vulnerable to elementary attack vectors.  In the case of the recent ransomware infections, actors simply connect to the DB, export and drop tables, and then ransom their return for bitcoin.
.mongo ransomware
NetWitness customers can evaluate their possible exposure to this malicious activity via a simple rule: direction = 'inbound' &&  = 'flags_syn' && tcp.dstport = '27017' *, and everyone using MongoDB instances should ensure that their administrators:
  1. Enable authentication (i.e., start by setting auth = true in the config file)
  2. Use firewalls to disable remote access by binding local IP addresses and blocking access to port 27017
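As a concrete sketch, the two steps above map onto a YAML-format mongod.conf roughly as follows (MongoDB 2.6+ YAML format; older ini-style configs use the auth = true form from step 1, and the addresses here are examples to adjust for your deployment):

```yaml
# Example mongod.conf hardening fragment
security:
  authorization: enabled   # require authentication for all read/write access
net:
  port: 27017
  bindIp: 127.0.0.1        # bind only to loopback/trusted management interfaces
```

Pair the bindIp restriction with a host firewall (iptables or equivalent) that drops inbound 27017/28017 from untrusted networks.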

MongoDB has also released additional specific guidance in response to the recent ransomware attacks.

* suggested app rule may differ slightly depending on NetWitness configuration
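Separately from the NetWitness rule, a quick spot-check of whether a host you are authorized to test exposes the default MongoDB port at all takes only a few lines of Python (the hostname in the example is a placeholder; this tests TCP reachability only, not authentication settings):

```python
import socket

def is_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host -- point this at a MongoDB server you own or administer.
# is_port_open("db.example.internal", 27017)
```

If this returns True from an untrusted network segment, the instance is a candidate for exactly the kind of ransacking described above.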

Some threat data vendors, such as CrowdStrike, provide a compiled .feed file as a potential output for use with RSA NetWitness.

Working with Joshua Waterloo, we came up with a better solution than manually uploading each .feed file to each decoder: using NwConsole to script the upload of the feeds (you can also upload parsers and other config items with NwConsole).


This is the result of that work: a script that can be run from the SA server (with the feed files in the same directory as the script) and that pushes the .feed files out to the log or packet decoders listed in the script using NwConsole.  Thanks to David Waugh for the original idea for a script like this.


At this time the RSA Live > Feeds function is not able to distribute .feed files, so this script is required to fill that gap.


You could modify the script to pull the feed files down from a central internal server and then crontab the script to run on a regular schedule, keeping the .feed files updated on all the *Decoders in your environment.
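A rough sketch of that scheduled distribution is below. The decoder hostnames, port, credentials, and in particular the NwConsole login/upload commands are placeholders, not verified syntax; capture the exact commands from a working interactive NwConsole session and substitute them in build_commands():

```python
import subprocess
from pathlib import Path

DECODERS = ["logdecoder1.example.local", "packetdecoder1.example.local"]  # placeholders
FEED_DIR = Path("/opt/feeds")  # where the fetched .feed files land

def build_commands(decoder, feed):
    # Hypothetical NwConsole invocation -- replace the login/upload strings
    # with the commands that work in your environment.
    return [
        "NwConsole",
        "-c", "login {}:<port> <user> <password>".format(decoder),
        "-c", "feed upload {}".format(feed),
    ]

def push_feeds(dry_run=True):
    """Push every .feed file in FEED_DIR to every decoder; return the commands run."""
    commands = []
    for feed in sorted(FEED_DIR.glob("*.feed")):
        for decoder in DECODERS:
            cmd = build_commands(decoder, feed)
            commands.append(cmd)
            if not dry_run:
                subprocess.run(cmd, check=True)
    return commands
```

Fetch the .feed files first (e.g., curl or scp from your central server) and have cron call push_feeds(dry_run=False).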


This also takes a first swing at having the script write a log message when it completes, which you could use in RSA NetWitness to correlate or chart (or forward to another SIEM).


Also, as a side note: if you get a .feed file and are wondering which metadata keys it writes to, you can always look at the decoder under Explore > /decoder/parsers/feeds and locate the feed there. You will be able to see stats about the feed, including which meta keys it reads from (feed.callbacks) and writes to (feed.meta), but not the actual values that may be written.

An example from the CrowdStrike email feed:


This is an attempt to implement a research paper that I found via a Twitter post sometime in mid-2016.  The premise is that, based on intrusion detection research, certain events can be chained together in ways that might indicate an intrusion.  The paper also attempts to use these scenarios to reduce the impact of noisy (common) Windows events that might otherwise drown out the indications of an attacker.


So here goes my attempt at creating a number of ESA rules that map out these intrusion patterns.


These have not been tested in a production environment, and I would like the community's help in testing and validating the rules and the research.


There are more elaborate ways of writing the rule language, but for the moment this is how I chose to write them for testing.


If you choose to test these rules, please look into the stats on the rules to see how much memory they consume (I'm curious to see how they fare in the performance department, as some of the starting events are very rare).
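Independent of the specific patterns in the paper (which I won't reproduce here), the general shape of these rules -- a rare trigger event followed by a related event on the same host within a time window -- can be sketched in plain Python. The event IDs below are illustrative stand-ins, not the paper's actual chains:

```python
from collections import deque

def chain_alerts(events, trigger_id, follow_id, window=300):
    """events: time-ordered (timestamp, event_id, host) tuples.
    Yield (trigger, follow) pairs where follow_id occurs on the same host
    within `window` seconds after trigger_id."""
    pending = deque()  # recent trigger events still inside the window
    for ts, eid, host in events:
        # expire triggers older than the window
        while pending and ts - pending[0][0] > window:
            pending.popleft()
        if eid == trigger_id:
            pending.append((ts, eid, host))
        elif eid == follow_id:
            for trig in pending:
                if trig[2] == host:
                    yield trig, (ts, eid, host)

# Illustrative chain: 4648 (explicit-credential logon) then 4688 (process creation)
sample = [(0, 4648, "hostA"), (60, 4688, "hostA"), (1000, 4688, "hostB")]
alerts = list(chain_alerts(sample, 4648, 4688))  # one pair, on hostA
```

The ESA rules express the same followed-by logic in EPL, with the time window governing how much state (and therefore memory) each rule holds.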


There are two configuration files that will need to be updated which are included in the zip archive below.



As always, let me know how these rules perform and how they can be improved.

This context menu allows a right-click pivot from DNS traffic (alias.ip) to any equivalent HTTP traffic (ip.dst), letting analysts quickly move from DNS traffic to HTTP traffic without the ctrl+c ctrl+v dance.


You will need to update the investigation URL to match your NetWitness installation (change the device ID number in the urlFormat line).



{
    "displayName": "[Pivot to ip.dst from DNS Request]",
    "cssClasses": [
        "alias-ip",
        "alias.ip"
    ],
    "description": "Update your SA server and ID",
    "type": "UAP.common.contextmenu.actions.URLContextAction",
    "version": "Custom",
    "modules": [
        "investigation"
    ],
    "local": "false",
    "groupName": "investigationGroup",
    "urlFormat": "/investigation/2/navigate/query/ip.dst%3d{0}",
    "disabled": "",
    "id": "NavigateHostAliasIp",
    "moduleClasses": [
        "UAP.investigation.navigate.view.NavigationPanel",
        "UAP.investigation.events.view.EventGrid"
    ],
    "openInNewTab": "true"
}

