
Preface

 

This blog post explains how to import external intelligence as a NetWitness recurring feed. As an example, the Locky Ransomware C2 domain blocklist from https://ransomwaretracker.abuse.ch is used.

We will use the SA Server Head Unit to perform all the operations described in this blog post. 

 

 

Downloading and pre-processing external content

 

First, the list offered by the external provider must be downloaded to the local environment:

 

curl https://ransomwaretracker.abuse.ch/downloads/LY_C2_DOMBL.txt > locky.txt 

The file contains a comment block at the top followed by one domain per line:

##########################################################################
# Locky Ransomware C2 domain blocklist (LY_C2_DOMBL)                     #
# Generated on 2016-10-18 09:10:02 UTC                                   #
#                                                                        #
# For questions please refer to:                                         #
# https://ransomwaretracker.abuse.ch/blocklist/                          #
##########################################################################
wrubyjtvqhxaqkh.pw
jfmiondv.xyz
tswsgajtwhqkosd.su
xofguhypjgvxrm.pw
yofkhfskdyiqo.biz
[...]

To be able to schedule the feed import for the decoders, we create a dedicated path for feed deployment on the local web server:

 

mkdir /var/netwitness/srv/www/rsa/feeds
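
Once the feed file has been written to this folder in a later step, you can check that it is reachable through the local web server. This is only a sanity check and assumes that /var/netwitness/srv/www/rsa is exposed under the SA server's web root; adjust the URL to your environment:

curl -sk https://localhost/feeds/locky.csv | head -n 5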

Now we need to perform the following operations on the file:

  • Remove the comments
  • Add the values to be written to Meta Keys at the end of each line
  • Write the result to a new file

 

The following command performs all of these tasks at once. It may look complicated at first, but each part of the command is explained below:

 

cat locky.txt | grep -v '#' | awk -F $'\n' '{print $1",Locky Ransomware,ransomwaretracker.abuse.ch,Ransomware"}' > /var/netwitness/srv/www/rsa/feeds/locky.csv

 

Explanation:

  • cat locky.txt: Read the file locky.txt in the current folder
  • grep -v '#': Remove all lines that contain the hash symbol (the comment block)
  • awk -F $'\n' '{print $1"[...]"}': For each line, print the first column (the domain name) and append the static text within the double quotes (the meta values to be generated later)
  • > /var/netwitness/srv/www/rsa/feeds/locky.csv: Save the result as locky.csv in the path created before (a sample of the output is shown below)
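
Based on the sample lines above, the first entries of the generated locky.csv should look roughly like this:

head -n 3 /var/netwitness/srv/www/rsa/feeds/locky.csv
wrubyjtvqhxaqkh.pw,Locky Ransomware,ransomwaretracker.abuse.ch,Ransomware
jfmiondv.xyz,Locky Ransomware,ransomwaretracker.abuse.ch,Ransomware
tswsgajtwhqkosd.su,Locky Ransomware,ransomwaretracker.abuse.ch,Ransomware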

 

Because these lists usually change quite often (some providers update their feeds every 30 minutes), a bash script can be created and scheduled to automate the process. Some minimal error handling is added in case the file could not be downloaded:

 

  • Create the bash script (the file can also be found in the attachments to this post)

 

vi /root/lockyC2.sh
#!/bin/bash
# Download the current Locky C2 domain blocklist to the working directory
curl https://ransomwaretracker.abuse.ch/downloads/LY_C2_DOMBL.txt > locky.txt

# The list always starts with a comment block, so a leading '#' in the first
# line serves as a basic check that the download succeeded
firstLine=$(head -n 1 locky.txt)
if [[ ${firstLine:0:1} == '#' ]]; then
    # Strip the comments, append the static meta values and publish the CSV
    cat locky.txt | grep -v '#' | awk -F $'\n' '{print $1",Locky Ransomware,ransomwaretracker.abuse.ch,Ransomware"}' > /var/netwitness/srv/www/rsa/feeds/locky.csv
else
    echo "File not downloaded successfully"
fi
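
Make the script executable so that cron can run it:

chmod +x /root/lockyC2.sh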

 

  • Schedule the script to run every 30 minutes

 

crontab -e

*/30 * * * * /root/lockyC2.sh
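
To confirm that the job is registered and that the feed file is actually being refreshed, the crontab and the timestamp of the generated CSV can be checked:

crontab -l
ls -l /var/netwitness/srv/www/rsa/feeds/locky.csv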

 

Creating a custom feed

 

After completing the steps mentioned above, a recurring custom feed can be created in Live > Feeds:

 

  • Provide a name for the feed and add its URL. Make sure the “Recur Every” schedule matches the update interval set in the cron job, so that the data is refreshed at the same rate.

         Feed Configuration

 

  • Select the decoders to push the feed to

 

  • Select “IP” if the feed is based on IP addresses or “Non-IP” otherwise. Select the correct index column (the column that contains the value to be matched) and the callback key(s), i.e. the meta keys in which that value should be looked up. Then define which meta keys the remaining columns should be written to by selecting from the drop-down above each column.

         Feed Columns

 

  • Verify your settings, then click “Finish”.

Preface

 

This blog post should help everybody who wants to integrate the free (Community) edition of the MySQL database with NetWitness for Logs. It does NOT describe auditing of the MySQL database itself; instead, the procedure can be used for applications that store their events in a MySQL database.

 

As we do not provide the driver for that version, it has to be downloaded from http://dev.mysql.com/downloads/connector/odbc/

 

Make sure to get the tar.gz version for EL6. The version downloaded at the time of writing was mysql-connector-odbc-5.3.4-linux-el6-x86-64bit.tar.gz:

 

MySQL download


 

Enabling MySQL collection

 

To enable MySQL collection perform the following steps:

  • Untar the file obtained from the MySQL website and copy the ODBC driver to the SA ODBC drivers folder:
    tar -xvzf mysql-connector-odbc-5.3.4-linux-el6-x86-64bit.tar.gz
    cp mysql-connector-odbc-5.3.4-linux-el6-x86-64bit/lib/libmyodbc5a.so /opt/netwitness/odbc/lib/
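
A quick check that the driver is now in the folder the Log Collector expects:

ls -l /opt/netwitness/odbc/lib/libmyodbc5a.so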

 

  • This is the structure of my example database on 192.168.2.200, port 3306:
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| test               |
+--------------------+
3 rows in set (0.01 sec)

 

mysql> use test;
Database changed

mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| audit          |
+----------------+
1 row in set (0.00 sec)

mysql> desc audit;
+--------------+--------------+------+-----+-------------------+-----------------------------+
| Field        | Type         | Null | Key | Default           | Extra                       |
+--------------+--------------+------+-----+-------------------+-----------------------------+
| ID           | int(11)      | YES  |     | NULL              |                             |
| Username     | varchar(255) | YES  |     | NULL              |                             |
| Action       | varchar(255) | YES  |     | NULL              |                             |
| TimeOfAction | timestamp    | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
+--------------+--------------+------+-----+-------------------+-----------------------------+
4 rows in set (0.00 sec)
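
For reference, a table with this structure could be created with a statement along the following lines (a sketch to reproduce the example, not part of the original setup):

mysql> CREATE TABLE audit (
    ->   ID INT(11),
    ->   Username VARCHAR(255),
    ->   Action VARCHAR(255),
    ->   TimeOfAction TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
    -> );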

 

  • Create the database DSN in Administration > Services > LogCollector > View > Config > Event Sources.

        DSN

 

  • The names for the parameters are different from the names of our default drivers. The following values have to be set:

   

Parameter    Value
Database     Database name
SERVER       Database server IP
PORT         Database server listening port
Driver       Driver path

 

       In my example:

 

       DSN Values
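
For the example database above, the DSN values would look roughly as follows (the driver path matches the file copied earlier; adjust all values to your environment):

Database = test
SERVER   = 192.168.2.200
PORT     = 3306
Driver   = /opt/netwitness/odbc/lib/libmyodbc5a.so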

 

  • Now create the type specification (mine is named mysql_audit.xml) for your database in /etc/netwitness/ng/logcollection/content/collection/odbc/. My example would require the following specification:

 

<?xml version="1.0" encoding="UTF-8"?>
<typespec>
 
   <name>mysql_audit</name>
   <type>odbc</type>
   <prettyName>Mysql Custom Auditing</prettyName>
   <version>1.0</version>
   <author>Andreas Funk</author>
   <description>Mysql SQL for Testing</description>
 
   <device>
      <name>mysql_audit</name>
   </device>
 
   <configuration>
   </configuration>
 
   <collection>
      <odbc>
         <query>
            <tag>mysql_audit</tag>
            <outputDelimiter>||</outputDelimiter>
            <interval>30</interval>
            <dataQuery>
               SELECT ID, Username, Action, TimeOfAction FROM audit WHERE ID > '%TRACKING%' ORDER BY ID ASC
            </dataQuery>
            <trackingColumn>ID</trackingColumn>
            <maxTrackingQuery>SELECT MAX(ID) FROM audit</maxTrackingQuery>
         </query>
      </odbc>
   </collection>
</typespec>
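
With this specification, each row returned by the data query is published as one event, with the selected columns joined by the configured || output delimiter. Using the test rows inserted later in this post, the payload handed to the parser would look roughly like this (an illustration, not captured output):

7||Andreas||Login||2015-08-07 17:27:44
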
  • Next create a parser to match this specification in /etc/netwitness/ng/envision/etc/devices/yourDeviceName. My simple example looks as follows:

 

<?xml version="1.0" encoding="ISO-8859-1" ?>
 
<DEVICEMESSAGES>
 
        <VERSION
                xml="1"
                checksum=""
                revision="0"
                enVision=""
                device="2.0"/>
 
<!--ESI.DeviceClass = Database-->
<!--
If the message tag does not contain a definition of a property,
the default value will be used.
The default values are:
                category="0"
                level="1"
                parse="0
                parsedefvalue="0"
                tableid="1"
                id1=""
                id2=""
                content=""
                reportcategory="0"
                sitetrack="0"
 
The following are the entity reference for all the predefined entities:
&lt;           <(opening angle bracket)
&gt;           >(closing angle bracket)
&amp;          &(ampersand)
&quot;         "(double quotation mark)
 
-->
        <HEADER
                id1="0001"
                id2="0001"
                content="&lt;messageid&gt;:&lt;!payload&gt;"/>
 
        <MESSAGE
                level="5"
                parse="1"
                parsedefvalue="1"
                tableid="47"
                id1="%mysql_audit"
                id2="%mysql_audit"
                eventcategory=""
                content="&lt;sessionid&gt;||&lt;username&gt;||&lt;action&gt;||&lt;event_time&gt;"/>
 
</DEVICEMESSAGES>
  • Finally add the category (name as chosen in your typespec file) and database to the Event Sources and start the ODBC collection:

       Add event source

       Start ODBC collection

 

Testing MySQL collection

To test MySQL collection:

  • Wait for new events to arrive in the database. In my test database I created two events manually:
    mysql> INSERT INTO audit VALUES (7, 'Andreas', 'Login', NOW());
    Query OK, 1 row affected (0.01 sec)
     
    mysql> INSERT INTO audit VALUES (8, 'Andreas', 'Logout', NOW());
    Query OK, 1 row affected (0.00 sec)
     
    mysql> SELECT * FROM audit WHERE ID > 6;
    +------+----------+--------+---------------------+
    | ID   | Username | Action | TimeOfAction        |
    +------+----------+--------+---------------------+
    |    7 | Andreas  | Login  | 2015-08-07 17:27:44 |
    |    8 | Andreas  | Logout | 2015-08-07 17:27:55 |
    +------+----------+--------+---------------------+
    2 rows in set (0.00 sec)
  • Wait for the ODBC collection to pick up those events. You can verify the collection in /var/log/messages:

Aug  7 15:30:22 ld nw[1420]: [OdbcCollection] [info] [mysql_audit.SQL_Audit] [processing] [SQL_Audit] [processing] Published 2 ODBC events: last tracking id: 8
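
If nothing shows up, the collection can be followed in real time with standard tools:

tail -f /var/log/messages | grep OdbcCollection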

  • The events can now be found in the Investigation view, with the defined meta values generated:

    Navigate view

 

    Events view
