
Instant Indicators of Compromise (IIOCs) are a feature within the NetWitness Endpoint (NWE) platform that looks for defined behavioral traits such as process execution, memory anomalies, and network activity. By default, NWE is packaged with hundreds of built-in IIOCs that are helpful for identifying malware and hunting for malicious activity. This topic matters because new attacker techniques are constantly being discovered, many of which can be accounted for through the development of new IIOCs. In addition, through IIOC customization, individual organizations can devise active defense measures specific to their environment to identify potentially anomalous behavior. Before diving into the details of developing IIOCs, let’s review the NWE documentation on the subject.

 

Warning: The database on which IIOCs depend is subject to change with each version of NWE, which may affect custom-created IIOCs. The contents of this article are based on the current version of NWE: 4.4.0.9.

 

The user guide provides a plethora of useful information on the subject of IIOCs, and this article assumes that the reader has a baseline knowledge of attributes such as IIOC level, type, and persistent IIOCs. However, there is limited documentation specifically regarding the creation and editing of IIOCs. Figure 1 is an excerpt from the NWE manual providing steps for IIOC creation.

 

NWE User Guide IIOC Creation

Figure 1: Excerpt from NWE User Guide

 

Reviewing and modifying existing IIOC queries will often allow users to achieve the desired behavior. It is also a great way to enhance the user’s understanding of important database tables, which will be covered in more detail in the query section. However, there is much more that can be said on the topic of IIOC creation. Expounding on these recommendations, as well as on the entire “Edit and Create IIOC” section in the user guide, building a new IIOC can be broken down into three major steps.

 

IIOC Creation Steps:

  • Create a new IIOC entry
  • Develop the SQL Query
  • Validate IIOC for Errors

 

Create IIOC Entry

Before we delve into the specifics of IIOC queries, let's begin with the basics. A new IIOC can be created in the IIOC window either by clicking the "New" button in the InstantIOC sub-window or by right-clicking on a specific entry in the IIOC table and then choosing the clone option. Both of these options are circled in red in Figure 2. As the name suggests, cloning an IIOC entry will copy over most of the attributes of the designated item, such as the level, type, and query. In both cases, the user must click Save in the InstantIOC sub-window for any changes to be retained.

 

IIOC Entry Creation Options

Figure 2: IIOC Entry Creation Options

 

When creating a new IIOC, make sure that the Active checkbox is not marked until the entry has been finalized. This will prevent the placeholder from executing on the server until the corresponding query has been tested and verified. The IIOC level assigned to a new entry should reflect both its severity and the query's fidelity. For example, a query that is prone to a high number of false positives is better suited to a higher level in order to avoid inflated IIOC scores.

 

There are four unique IIOC types that can be chosen from in the Type drop-down menu: Machine, Module, Event, and Network. The distinction between these types will be further elaborated on in the next section. Also, be aware that IIOCs are platform specific, so the user must ensure that the correct operating system is chosen for the specified IIOC. Changing the OS platform can be done by modifying the ‘OS Type’ field and will cause the IIOC to appear in its matching platform table.

 

IIOC Platform Tables

Figure 3: IIOC Platform Tables

 

At this point, it is also worth noting that all user-defined IIOCs will automatically be given a non-persistent attribute. This means that any modules or machines associated with the matching conditions of the IIOC will only remain associated as long as the corresponding values that matched are in the database. Non-persistent behavior is potentially problematic for event tracking and scan data, which utilize ephemeral datasets. To bypass this potential pitfall, the persistent field can be updated for the targeted IIOC as seen in the SQL query below.

 

Update IIOC Persistent Field

Figure 4: Update IIOC Persistent Field
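
A minimal sketch of the kind of update Figure 4 depicts is shown below. The table and column names ([IOCQueries], [Name], [Persistent]) are assumptions used purely for illustration and should be verified against your NWE database before running anything.

```sql
-- Sketch only: the IIOC definition table/column names are assumptions, verify against your schema.
-- Marks a single custom IIOC as persistent so its matches survive data rollover.
UPDATE [dbo].[IOCQueries]
SET [Persistent] = 1
WHERE [Name] = 'My Custom IIOC';   -- replace with the exact IIOC name
```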

Develop Query

Once a placeholder entry for an IIOC has been created, it can then be altered to achieve the desired behavior by modifying the query. IIOCs are built primarily through the use of SQL select statements. If the reader does not have a fundamental knowledge of the SQL language then a review of SQL tutorials such as W3Schools[1] is recommended.

 

More complex IIOCs can make use of additional SQL features such as temporary tables, unions, nested queries, aggregate functions, etc. However, a basic IIOC select query can be broken down into three major components: fields, tables, and conditions.

 

Basic IIOC Query Sections

Figure 5: Query Sections
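
To illustrate those three sections, a bare-bones Windows module-type IIOC might look like the sketch below. The filter itself is only a placeholder; a real IIOC would use conditions relevant to whatever is being detected.

```sql
-- 1) Fields: the keys the NWE server expects back for a module-type IIOC
SELECT [mmp].[FK_Machines],
       [mmp].[PK_MachineModulePaths] AS [FK_MachineModulePaths]
-- 2) Tables: the table(s) holding those keys and the filter columns
FROM [dbo].[MachineModulePaths] AS [mmp]
-- 3) Conditions: what the IIOC is actually trying to detect (placeholder filter)
WHERE [mmp].[FileHiddenAttribute] = 1
  AND [mmp].[FileAutoStart] = 1;
```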

 

Database Tool

In order to develop and test queries for IIOCs, it is highly advantageous for the user to have access to database administrative software. An example of such a tool is SQL Management Studio, which can be installed alongside the Microsoft SQL Server instance on the NWE server. There are many alternative options to choose from such as heidiSQL and DBeaver, if the user wishes to connect from a client system or a non-Windows machine.  These tools can also be used to provide further visibility into the database structure and to perform direct database queries for analysis purposes.

 

Required Fields

As shown in Figure 5, IIOCs require specific fields, or columns, to be selected in order to properly reference the required data in the interface. Depending on the IIOC type chosen, these fields will vary slightly, but there will always be one or more primary keys needed from tables such as Machines and MachineModulePaths. Unlike the other types, machine IIOCs only require the primary key of the Machines table. Nevertheless, the NWE server still expects two returned values, so in these instances a NULL value can be used as the second selected field.

 

Furthermore, the operating system platform will affect the keys that are selected in the IIOCs. Linux does not currently support event or network tracking, and the associated tables do not exist in the NWE database. The selected keys are expected to be returned in the order listed for each type in Table 1.

 

| Type | Windows | OSX | Linux |
| --- | --- | --- | --- |
| Module | PK_Machines, PK_MachineModulePaths | PK_Machines, PK_MacMachineModulePaths | Machines, PK_LinuxMachineModulePaths |
| Event | PK_Machines, PK_MachineModulePaths, PK_WinTrackingEvents | PK_Machines, PK_MacMachineModulePaths, PK_mocMacNetAddresses | N/A |
| Network | PK_Machines, PK_MachineModulePaths, PK_mocNetAddresses | PK_Machines, PK_MacMachineModulePaths, PK_mocMacNetAddresses | N/A |
| Machine | PK_Machines, NULL | PK_Machine, NULL | PK_Machine, NULL |

Table 1: Required Fields by Platform

 

The keys selected can be either the primary keys shown in the table above or their associated foreign keys. Premade IIOC queries typically alias any selection of primary keys (PK_*) to match the prefix used for foreign keys (FK_*). This convention appears to have been chosen for consistency and does not affect the results of the query.
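
For example, combining the NULL placeholder described in the Required Fields section with the FK_ aliasing convention, a minimal machine-type IIOC might be sketched as follows. The condition is purely illustrative, and the [MachineName] column is an assumption to verify against the Machines table.

```sql
-- Machine-type IIOC sketch: one key plus NULL, aliased with the FK_ prefix for consistency
SELECT [m].[PK_Machines] AS [FK_Machines],
       NULL
FROM [dbo].[Machines] AS [m]
WHERE [m].[MachineName] LIKE 'LAB-%';   -- illustrative condition; verify the column name
```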

 

Tables Needed

Moving on to the next section of the IIOC query, we will need to determine which tables are selected or joined. As shown in the previous section, there are required keys based on the IIOC type; therefore, the selected tables for a valid IIOC query must reference these keys. The tables that are needed will also depend on the fields used for the IIOC conditions, which will be covered in the next section. While there are hundreds of tables in the NWE database, knowledge of a few of them will help facilitate basic IIOC development.

 

Since NWE currently uses a relational database, many important attributes of various objects, such as filenames, launch arguments, and paths, are stored in different tables. A SQL join is required when queries utilize fields from multiple tables. In most cases, when joining a foreign key to a primary key, an inner join can be used. However, if the IIOC writer must join tables on a different data point, then an alternative join type (such as a left outer join) should be considered. A similar approach should be taken for any foreign keys that do not have the "Not Null" property.

 

Tables Used in IIOC

Figure 6: Tables Used in IIOC Query

 

As with the selected keys in the Required Fields section, in many cases there are separate tables based on the platform being queried. In these instances, the table name will be similar to its Windows counterpart with a special prefix denoting the platform. For example, the MachineModulePaths table, which contains module information on a machine-specific basis, is called MacMachineModulePaths for OSX and LinuxMachineModulePaths for Linux. Table 2 contains the table names in the NWE database that are most useful while creating IIOCs. Most of these tables are platform specific; however, a few apply to all operating systems. The next section will provide additional context for some of the Windows tables and their associated fields.

 

| Platform | Tables |
| --- | --- |
| Windows | MachineModulePaths, mocAutoruns, mocDrivers, mocImageHooks, mocKernelHooks, mocLoadedFiles, mocNetAddresses, mocProcessDlls, mocProcesses, mocServices, mocThreads, mocWindowsHooks, mocWindowsTasks, WinTrackingEvents_P0, WinTrackingEvents_P1, WinTrackingEventsCache |
| Mac | MacMachineModulePaths, mocMacAutoruns, mocMacDaemons, mocMacKexts, mocMacLoginItems, mocMacNetAddresses, mocMacProcessDlls, mocMacProcesses, mocMacTasks, mocMacTrackingEvents |
| Linux | LinuxMachineModulePaths, mocLinuxAutoruns, mocLinuxCrons, mocLinuxDrivers, mocLinuxProcessDlls, mocLinuxProcesses, mocLinuxServices, mocLinuxSystemdServices, mocLinuxBashHistory |
| ALL | FileNames, Paths, LaunchArguments, Machines, Domains |

Table 2: Useful Tables for IIOC Creation

 

Table Relations and IIOC Conditions

The final portion of the IIOC query pertains to the conditions that must be met in order to return valid results. In other words, it is the part of the query that defines what the IIOC is attempting to detect. For a standard SQL select IIOC this logic is contained within the where clause of the statement at the end of the query. In this segment, we will be going further in depth into the relevant database tables and covering the most pertinent fields for IIOC conditions.

 

To assist with elaborating on database details, this section contains simple diagrams to illustrate the relationships between the various tables and their fields. These schema diagrams are very condensed and are not meant to include every relevant field of the tables they describe. Additionally, for the sake of brevity, they cover only the Windows versions; a similar pattern applies to the non-Windows operating systems where applicable.

Most IIOC queries will make use of the MachineModulePaths table to some degree, as it acts as an intersection between many of the other tables used by the NWE database. For this reason, it makes for a great starting point for our database examination.

 

Module Tables

Figure 7: Module Tables

 

MachineModulePaths contains keys to tables such as FileNames, Paths, and UserNames, referenced by FK_FileNames, FK_Paths, and FK_UserNames__FileOwner. These keys point to tables that primarily contain what their names describe (e.g., filenames, usernames, paths). Fields in these ancillary tables are generally valuable for filtering regardless of IIOC type, and they are frequently referenced by other database tables, including those for the alternate OS platforms. The query in Figure 8 is an example of a SQL select statement that utilizes these fields by joining their associated tables with MachineModulePaths.

 

Basic IIOC Filters

Figure 8: Basic Filters
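
A query in that style might look roughly like the sketch below. This is not the exact query from Figure 8; the PK_FileNames/PK_Paths key names and the [FileName]/[Path] value columns are assumptions that should be checked against the schema.

```sql
-- Module-type IIOC sketch: filter on filename and path via the joined lookup tables
SELECT [mmp].[FK_Machines],
       [mmp].[PK_MachineModulePaths] AS [FK_MachineModulePaths]
FROM [dbo].[MachineModulePaths] AS [mmp]
INNER JOIN [dbo].[FileNames] AS [fn] ON [fn].[PK_FileNames] = [mmp].[FK_FileNames]  -- assumed key name
INNER JOIN [dbo].[Paths]     AS [p]  ON [p].[PK_Paths]      = [mmp].[FK_Paths]      -- assumed key name
WHERE [fn].[FileName] = 'svchost.exe'            -- assumed column name
  AND [p].[Path] NOT LIKE '%\windows\%';         -- assumed column name
```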

 

At its core, MachineModulePaths contains aspects of modules that apply on a per-machine basis. This includes the various tracked behaviors, associated autoruns, and network activity. Network-related fields here correspond to general network behavior, not the comprehensive network tracking, which is stored in a different table. Table 3 provides a trimmed list of fields from MachineModulePaths that can potentially be useful for IIOC filtering.

 

| Field Name | Data Type |
| --- | --- |
| BehaviorFileReadDocument | bit |
| BehaviorFileRenameToExecutable | bit |
| BehaviorFileWriteExecutable | bit |
| BehaviorProcessCreateProcess | bit |
| BehaviorProcessCreateRemoteThread | bit |
| DaysSinceCreation | int |
| FileAccessDenied | bit |
| FileADS | bit |
| FileAttributes | int |
| FileAutoStart | bit |
| FileAutoStartWinsockProviders | bit |
| FileEncrypted | bit |
| FileHiddenAttribute | bit |
| FileHiddenDifferentialView | bit |
| FilenameUTCTimeAccessed | datetime2 |
| FilenameUTCTimeCreated | datetime2 |
| FilenameUTCTimeModified | datetime2 |
| MarkedAsDeleted | bit |
| NetworkAccessNetwork | bit |
| NetworkListen | bit |
| ProcessAccessDenied | bit |
| ProcessFirewallAuthorized | bit |
| ProcessTopLevelWindowHidden | bit |
| RecordUTCTimeModified | datetime2 |
| UserModulePrivateMemory | bit |

Table 3: MachineModulePaths Fields

 

MachineModulePaths also has a reference to the Modules table, which contains more information about the executable. Unlike MachineModulePaths, the Modules table stores data about an executable that does not vary from machine to machine. This data includes the cryptographic hashes, executable header values, entropy, etc. Similar to the table provided for MachineModulePaths, Table 4 lists useful fields found within the Modules table.

 

| Field Name | Data Type |
| --- | --- |
| DaysSinceCompilation | int |
| Description | nvarchar |
| Entropy | real |
| HashMD5 | binary |
| HashSHA1 | binary |
| HashSHA256 | binary |
| MarkedAsDeleted | bit |
| ModulePacked | bit |
| ModulePE32 | bit |
| ModulePE64 | bit |
| ModuleTypeDLL | bit |
| ModuleTypeEXE | bit |
| ModuleVersionInfoPresent | bit |
| NameImportedDLLs | nvarchar |
| NumberImportedDLL | int |
| PEMachine | int |
| PESizeOfImage | int |
| PEUTCTimeDateStamp | datetime2 |
| SectionsNames | nvarchar |
| SignatureUTCTimeStamp | datetime2 |
| Size | bigint |

Table 4: Modules Fields

 

Lastly, the Machines table contains attributes of the machines that a specific module was observed executing on. This includes data such as the machine name, associated IP address, and OS version. All endpoints in NWE are listed in this table regardless of OS platform.

 

Next is event tracking, which for Windows endpoints is represented in the WinTrackingEvents* tables. While the diagram in Figure 9 only shows the WinTrackingEventsCache table, Windows event tracking actually utilizes two additional tables for permanent event storage (WinTrackingEvents_P0 and WinTrackingEvents_P1). Since WinTrackingEventsCache contains only the most recent events, it is the most efficient of these tables for IIOCs. However, with the populated data constantly migrating to another table, any IIOC that utilizes it will need to be set to persistent in order to be effective.

 

WinTrackingEventCache Table

Figure 9: WinTrackingEventCache Table

 

The tracking event tables contain a reference to a secondary table called LaunchArguments, which is also used by several other tables. An IIOC will only need to join this table with WinTrackingEvents* for the source command-line arguments, as many of the target process attributes are already stored as character fields in the table. Booleans for the various event behaviors are also extremely useful for filtering events in these tables.
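
As an illustration, an event-type IIOC built on WinTrackingEventsCache might look like the sketch below (remember that such an IIOC should be marked persistent). The key column names in the cache table are assumptions, and the temp-path filter is only an example.

```sql
-- Event-type IIOC sketch: process modifies a Run key and targets a file under a temp path
SELECT [evt].[FK_Machines],
       [evt].[FK_MachineModulePaths],
       [evt].[PK_WinTrackingEvents]               -- assumed key name in the cache table
FROM [dbo].[WinTrackingEventsCache] AS [evt]
WHERE [evt].[BehaviorRegistryModifyRunKey] = 1
  AND [evt].[Path_Target] LIKE '%\AppData\Local\Temp\%';
```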

 

| Field Name | Data Type |
| --- | --- |
| BehaviorFileDeleteExecutable | bit |
| BehaviorFileOpenLogicalDrive | bit |
| BehaviorFileOpenPhysicalDrive | bit |
| BehaviorFileReadDocument | bit |
| BehaviorFileRenameToExecutable | bit |
| BehaviorFileSelfDeleteExecutable | bit |
| BehaviorFileUnquanrantine | bit |
| BehaviorFileWriteExecutable | bit |
| BehaviorProcessAllocateRemoteMemory | bit |
| BehaviorProcessCreateProcess | bit |
| BehaviorProcessCreateRemoteThread | bit |
| BehaviorProcessOpenBrowserProcess | bit |
| BehaviorProcessOpenOSProcess | bit |
| BehaviorProcessOpenProcess | bit |
| BehaviorRegistryModifyFirewallPolicy | bit |
| BehaviorRegistryModifyRunKey | bit |
| EventType | int |
| EventUTCTime | datetime2 |
| FileName_Target | nvarchar |
| HashSHA256_Target | binary |
| LaunchArguments_Target | nvarchar |
| Path_Target | nvarchar |
| Pid | int |

Table 5: WinEventTracking Fields

 

The table used to store network tracking data for Windows systems is called MocNetAddresses. This table tracks unique instances of network connections and their associated execution environment, such as the machine, module, and launch arguments. Unlike event tracking data, this table keeps track of the module which initiated the network request (FK_MachineModulePaths) as well as the process it resides within (*__Process). The distinction is useful in cases where a loaded library is responsible for a network request, such as a service DLL in svchost.exe. There is also a secondary table, Domains, which tracks the domain name of the associated request.

 

MocNetAddress Table

Figure 10: MocNetAddresses Related Tables

 

In addition to the domain, the IP and Port fields are also useful filters for the MocNetAddresses table. Network sessions with an RFC 1918 destination address can be disregarded using the NonRoutable field. A trimmed list of MocNetAddresses fields that can be used for IIOC conditions is provided in Table 6.
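
For instance, a network-type IIOC flagging routable connections to an unusual port might be sketched like this. The foreign-key column names in MocNetAddresses are assumptions, and the port value is purely illustrative.

```sql
-- Network-type IIOC sketch: routable destinations on an uncommon port
SELECT [net].[FK_Machines],
       [net].[FK_MachineModulePaths],
       [net].[PK_mocNetAddresses]
FROM [dbo].[mocNetAddresses] AS [net]
WHERE [net].[NonRoutable] = 0          -- exclude RFC 1918 destinations
  AND [net].[Port] = 4444;             -- illustrative port of interest
```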

 

| Field Name | Data Type |
| --- | --- |
| BurstCount | int |
| BurstIntervalDeviation | real |
| BurstIntervalMean | real |
| ConnectCount | int |
| EProcess | bigint |
| EThread | bigint |
| FailConnectCount | int |
| FirstConnectionUTC | datetime2 |
| IP | nvarchar |
| IPIsV4 | bit |
| IPIsV6 | bit |
| LastConnectionUTC | datetime2 |
| NonRoutable | bit |
| OffHourNetworkTraffic | bit |
| Port | int |
| ProcessId | int |
| TotalReceived | bigint |
| TotalSent | bigint |
| UserAgent | varchar |

Table 6: MocNetAddress Fields

 

Finally, we will briefly cover the scan data found inside the NWE database. These tables include additional information concerning autostarts, services or crons, memory anomalies, etc. Similar to the module and tracking data tables, the scan-related tables are OS platform specific. For example, memory anomaly tables are currently only used by Windows systems, and tables such as bash history only apply to Linux endpoints.

 

| Platform | Tables |
| --- | --- |
| Windows | mocAutoruns, mocDrivers, mocImageHooks, mocKernelHooks, mocProcesses, mocLoadedFiles, mocNetAddresses, mocProcessDlls, mocWindowsTasks, mocServices, mocThreads, mocWindowsHooks |
| Mac | mocMacAutoruns, mocMacDaemons, mocMacLoginItems, mocMacKexts, mocMacProcessDlls, mocMacProcesses, mocMacTasks |
| Linux | mocLinuxAutoruns, mocLinuxCrons, mocLinuxDrivers, mocLinuxSystemdServices, mocLinuxProcessDlls, mocLinuxProcesses, mocLinuxServices, mocLinuxBashHistory |

Table 7: Scan Data Tables by Platform

 

Scan data tables can also be considered ephemeral datasets that are replaced upon a new scan. Thus, persistent IIOCs should be used when referencing these tables to avoid missing IIOC matches, especially in environments where scans are scheduled frequently.

 

Identifying New Columns and Tables

When attempting to determine the table and column associated with known fields in the NWE interface, it can be helpful to query the information_schema views. For instance, consider a circumstance where a user needs to create an IIOC based on a unique section name string in an executable but does not know the corresponding column name within the database. In the UI, this field is located in the properties window of a specific module and is called "Sections Names". By querying the columns view and reviewing the results, it can be determined that a similarly named field is located in the Modules table.

 

InformationSchema Query

Figure 11: Information_Schema Query
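
A sketch of that lookup is shown below; it simply searches the column catalog for names resembling the UI field "Sections Names".

```sql
-- Find candidate columns whose name resembles the UI field "Sections Names"
SELECT [TABLE_NAME], [COLUMN_NAME], [DATA_TYPE]
FROM [INFORMATION_SCHEMA].[COLUMNS]
WHERE [COLUMN_NAME] LIKE '%Section%'
ORDER BY [TABLE_NAME];
```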

 

In most cases, the field name in the UI will be almost identical to its corresponding column name within the database. Further filtering or sorting by the data type and expected table name can also be useful to identify the desired column when a large number of results are returned.

 

Check for Errors

IIOC validation should initially be performed as part of crafting the query: when executing the query being tested, check for any errors and confirm that the expected number of fields and values are returned. Once a new IIOC entry has been created and the query edited, refresh the window and check for an updated “Last executed” field. Also, make sure that the Active checkbox in the InstantIOC sub-window is marked. Note that it may take a few minutes from the time the new entry is set to active until the IIOC is executed by the server. To quickly identify any custom IIOCs that have been created, use the column filter on the “User Defined” field in the IIOC table.

 

User Defined IIOC Filter

Figure 12: IIOC Table Filter on “User Defined” Field

 

If any errors occur when the IIOCs are executed by the database, the entire row for the failed IIOC will be italicized. There will also be an error message in the “Error Message” field to provide additional context as to why the failure occurred. In the example in Figure 13, the highlighted IIOC entry called “Test IIOC” returned an error for an invalid column name. In this case, the first selected field had a minor typo and should have been referenced as “fk_machines”.

 

IIOC Error

Figure 13: IIOC Table with Error Returned

 

Tip: SQL Query

Analyzing data directly from the back-end database is outside the scope of this article; however, a useful tip for IIOC development is to include a “human readable” query alongside any custom-developed IIOCs. Doing so allows the analyst to review all the relevant data related to the query instead of just seeing the machines and modules associated with the IIOC in the interface. In the example shown in Figure 14, there is an additional query that is commented out. Instead of selecting the keys listed in the Required Fields section, this query selects fields that provide additional context to the analyst, typically based on the fields in the IIOC condition.

 

Human Readable Query

Figure 14: IIOC with Human Readable Query
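
The sketch below illustrates the idea: the active IIOC query on top, with a commented-out analyst query underneath that returns human-readable context instead of keys. The condition and column names mirror those from Table 5 and are assumptions to validate against the schema.

```sql
-- Active IIOC query: returns only the keys NWE expects
SELECT [evt].[FK_Machines],
       [evt].[FK_MachineModulePaths],
       [evt].[PK_WinTrackingEvents]               -- assumed key name
FROM [dbo].[WinTrackingEventsCache] AS [evt]
WHERE [evt].[BehaviorProcessCreateProcess] = 1
  AND [evt].[FileName_Target] = 'whoami.exe';

/* Human-readable companion query (commented out; run manually in a SQL client)
SELECT [evt].[EventUTCTime],
       [evt].[FileName_Target],
       [evt].[Path_Target],
       [evt].[LaunchArguments_Target]
FROM [dbo].[WinTrackingEventsCache] AS [evt]
WHERE [evt].[BehaviorProcessCreateProcess] = 1
  AND [evt].[FileName_Target] = 'whoami.exe';
*/
```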

 

Since the second query has been commented out, it does not affect the IIOC in any way. However, it does provide the analyst with the means to quickly gain additional information regarding the related events. Executing the query, as shown in Figure 15, provides previously unseen information such as source and target arguments and the event time. This information would normally only be available on a per machine basis in the interface. Reviewing data in this manner is especially useful when analyzing queries that return a high number of results and potential false positives.

 

Query Results

Figure 15: Database Query Results

 

Custom IIOC Examples

The following section covers examples of IIOCs that are currently, as of version 4.4.0.9, not included by default. Each example is based on potentially anomalous behavior or observed threat actor techniques.

 

Filename Discrepancies

There are many instances where a modified filename could be an indicator of evasive execution on a machine. In other words, it could be evidence of an attempt to avoid detection by antivirus or monitoring systems. The following IIOCs compare module attributes with the associated filename for anomalies based on observed techniques.

 

Copied Shell

Windows accessibility features have long been a known target of replacement, by both malicious actors and system administrators, as a way to regain access to a system. Furthermore, actors occasionally rename shells and scripting hosts to bypass security monitoring tools. For these reasons, the following IIOC looks for instances of the Windows shells and scripting hosts that have been renamed. It also looks for instances where the trusted installer account (“NT SERVICE\TrustedInstaller”) is no longer the owner of one of these files.

 

Copied Windows Shell

Figure 16: Copied Windows Shell
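
As a rough approximation of the approach (not the exact query in Figure 16), the sketch below flags modules whose internal file description identifies cmd.exe but whose on-disk filename has changed. The FK_Modules/PK_Modules and FileNames key and column names are assumptions.

```sql
-- Sketch: modules described as "Windows Command Processor" whose filename is no longer cmd.exe
SELECT [mmp].[FK_Machines],
       [mmp].[PK_MachineModulePaths] AS [FK_MachineModulePaths]
FROM [dbo].[MachineModulePaths] AS [mmp]
INNER JOIN [dbo].[Modules]   AS [mo] ON [mo].[PK_Modules]   = [mmp].[FK_Modules]    -- assumed key names
INNER JOIN [dbo].[FileNames] AS [fn] ON [fn].[PK_FileNames] = [mmp].[FK_FileNames]  -- assumed key names
WHERE [mo].[Description] = 'Windows Command Processor'   -- cmd.exe's file description
  AND [fn].[FileName] <> 'cmd.exe';
```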

 

Psexesvc

Currently, there is a built-in IIOC called “Run Remote Execution Tool” that looks for instances of Psexec execution. However, a common issue in endpoint monitoring is the lack of full visibility, and consequently the machine running the client program is not always being observed. Therefore, when looking for evidence of potential lateral movement, it is important to monitor for suspect behavior of the target machine as well.

 

Psexec is known to create a service on the target machine using the executable "psexesvc.exe". It also provides the user with the ability to change the name of the service, and subsequently the name of the executable, to a different value. Similar to the previous example, this change in name does not affect the file description of the executable, so the IIOC author can specifically look for occurrences of this name modification. The creator of this IIOC could also choose to include instances of the normal psexec service filename by omitting the second condition, but a separate IIOC with a higher IIOC level would be more appropriate.

 

Renamed Psexesvc

Figure 17: Renamed PSexesvc Process

 

Suspicious Launchers

Malware and malicious actors often utilize proxied execution, or launchers, for their persistence mechanisms in order to bypass security monitoring tools. PowerShell is a prime example of a program that facilitates this behavior in Windows. Executing this program upon startup provides an attacker with a wealth of functionality within a seemingly legitimate executable.  The IIOC shown in Figure 18 identifies instances of PowerShell that are associated with a persistence mechanism on an individual machine.

 

PowerShell Autostart

Figure 18: PowerShell Autostart
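
A simple form of this check, using the FileAutoStart flag from MachineModulePaths together with the filename lookup, might look like the following sketch (not necessarily the exact query in Figure 18; the lookup-table key and column names are assumptions):

```sql
-- Sketch: powershell.exe associated with an autostart location on a machine
SELECT [mmp].[FK_Machines],
       [mmp].[PK_MachineModulePaths] AS [FK_MachineModulePaths]
FROM [dbo].[MachineModulePaths] AS [mmp]
INNER JOIN [dbo].[FileNames] AS [fn] ON [fn].[PK_FileNames] = [mmp].[FK_FileNames]  -- assumed key names
WHERE [fn].[FileName] = 'powershell.exe'
  AND [mmp].[FileAutoStart] = 1;
```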

 

Typically, legitimate instances of this functionality created by system administrators will simply reference a PowerShell script instead of the PowerShell module itself. Nonetheless, analysts should still be cognizant of potentially valid instances. This is demonstrated in the next query, shown in Figure 19, which contains additional conditions that filter for autostart programs or scheduled tasks matching the given keyword.

 

Figure 19: PowerShell Autoruns and Scheduled Tasks

 

This new query utilizes a SQL union to combine the results of two select statements. Instead of initially selecting from the MachineModulePaths table and filtering its rows, this query uses two tables that contain additional startup program information. With these additional data points, the IIOC author can filter on attributes such as task name and registry path; an example of the latter would be a unique run key name. Since the second IIOC utilizes tables from scan data, it would need to be set to persistent.

 

PowerShell Injected Code

Figure 20: Floating Module in PowerShell in UI

 

Another attribute of a process that may be indicative of a malicious launcher is the presence of injected code. Executables loaded in a way that circumvents the Windows image loader (otherwise known as reflective loading), as well as libraries that have been unlinked from the PEB ldr_data lists, will show up in the NWE UI as “Floating Modules.” While there is already an IIOC looking for the presence of these modules, augmenting the existing query to include known launcher processes such as PowerShell can produce inherently more suspicious results. By increasing the specificity of the query, we now have an additional IIOC that is a stronger indicator of malicious activity.

 

Figure 21: Floating Module in PowerShell Query

 

There are numerous modules other than PowerShell that can be utilized to initiate execution on Windows systems such as mshta, rundll32, regsvr32, installutil, etc. Similar IIOCs for all of these modules can be useful for monitoring evasive execution or persistence. However, in each case, there will be filtering required that is specific to an individual organization.

 

The End.

Hopefully this article has provided insight into custom IIOC creation. It has covered the key components of basic IIOCs and provided steps for their creation. Additionally, it has discussed the most useful backend database tables and fields used for building IIOC queries. Lastly, it has provided examples that can be used to identify real-world attacker techniques.

 

[1] https://www.w3schools.com/sql/

Customers that use Azure cloud infrastructure require the ability to enable their Security Operations Center (SOC) to monitor infrastructure changes, service health events, resource health, autoscale events, security alerts, diagnostic logs, Azure Active Directory sign-in and audit logs, etc. The RSA NetWitness Platform is an evolved SIEM that natively supports many third-party sources such as Azure Active Directory logs, Azure NSG Flow Logs, and now Azure Monitor Activity and Diagnostic Logs, for depth of visibility and insights to enable SOC analysts and threat hunters.

 

Azure Monitor Activity and Diagnostic Logs background:

 

The Azure Activity Log is a subscription log that provides insight into subscription-level events that have occurred in Azure. This includes a range of data, from Azure Resource Manager operational data to updates on Service Health events. Using the Activity Log, you can determine the ‘what, who, and when’ for any write operations (PUT, POST, DELETE) taken on the resources in your subscription. You can also understand the status of the operation and other relevant properties. The Activity Log does not include read (GET) operations or operations for resources that use the Classic/"RDFE" model.

 

Azure Monitor diagnostic logs are logs emitted by an Azure service that provide rich, frequent data about the operation of that service. Azure Monitor makes available two types of diagnostic logs:

  • Tenant logs - These logs come from tenant-level services that exist outside of an Azure subscription, such as Azure Active Directory logs.
  • Resource logs - These logs come from Azure services that deploy resources within an Azure subscription, such as Network Security Groups or Storage Accounts.

 

Azure Monitor Activity Logs: https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-overview-activity-logs?toc=/azure/azure-monitor/toc.json

 

Azure Monitor Diagnostic Logs: https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-overview-of-diagnostic-logs?toc=/azure/azure-monitor/toc.json

 

Azure Monitor Active Directory Logs: https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/concept-activity-logs-azure-monitor

 

Azure Monitor Activity, Diagnostic and Azure Active Directory Logs can be exported to an Event Hub. The RSA NetWitness Platform’s Azure Monitor plugin collects the logs from this Event Hub.

 

 

 

*In Log Collection, Remote Collectors send events to the Local Collector, and the Local Collector sends events to the Log Decoder.

 

Configuration Guide: Azure Monitor Event Source Configuration Guide

Collector Package on RSA Live: "MS Azure Monitor Log Collector Configuration"

Parser on RSA Live: CEF

When attacking or defending a network, it is important to know the strategic points of the environment. If an environment is running Active Directory, as almost every organization in the world does, it is important to understand how rights and privilege relationships work, as well as how they are implemented. With relative ease, an attacker can take advantage of the fact that an unintended user was somehow added to a group with more privileges than they need. Once the attacker identifies this, the battle might be over before the defender has a chance to know what happened. In this blog post, we will go through how RSA NetWitness Network/Packets can be utilized to detect if BloodHound’s data collector (known as SharpHound) is being used in your environment to enumerate group membership via the LocalAdmin collection method.

 

In order to automate the process of determining the privilege relationships in an environment, the incredibly talented group of @_wald0, @CptJesus, and @harmj0y created a very popular tool aptly named BloodHound. You can find this awesomeness at https://github.com/BloodHoundAD/BloodHound/. The tool is a single-page JavaScript web application, built on top of Linkurious, compiled with Electron, with a Neo4j database fed by a PowerShell/C# ingestor. The tool utilizes graph theory to reveal the hidden and often unintended relationships within an Active Directory environment. This is an incredibly awesome tool that provides much-needed insight into what is often a forgotten or mismanaged process. If you haven’t tried this out on your environment, offensive or defensive minded, I’d recommend it.

 

SharpHound is a completely custom C# ingestor written from the ground up to support collection activities. Two options exist for using the ingestor, an executable and a PowerShell script. Both ingestors support the same set of options. SharpHound is designed targeting .Net 3.5. SharpHound must be run from the context of a domain user, either directly through a logon or through another method such as RUNAS. The functionality we will be analyzing in this blog post is only a small percentage of what BloodHound/SharpHound can do and other portions will be covered in upcoming blog posts.

 

First let’s tackle some minimum requirements & some assumptions that we have put in place:

#1) This assumes you have a TAP location feeding your Decoder(s) somewhere between your workstations & your Domain Controllers (DC). This is a best practice to ensure that traffic flows to all critical devices, including DCs, are being captured.

#2) You’ve created a Feed with a list of your Domain Controllers & are tagging them appropriately. (While not 100% necessary this will definitely aid in analysis) This can easily be accomplished by modifying the Traffic Flow Lua Parser to include DC subnets, or one by one. Please see this guide for further details. 

#3) The BloodHound data collector is performing the Local Admin collection method – SharpHound has several different enumeration options, as listed on BloodHound's GitHub wiki. While it’s possible to catch some of these other options, this post will focus on how its Local Admin collection method works. Further technical details on how this method works can be found in @cptjesus's blog post: CptJesus | SharpHound: Technical Details

#4) I am not an Active Directory or BloodHound expert by any stretch. What is detailed in this blog post is what a colleague (Christopher Ahearn) & I discovered by attempting to help our clients detect badness.

 

Now then, let’s begin.

 

Group Enumeration

As soon as SharpHound starts up, it will attempt to connect to the workstation’s default Domain Controller over LDAP (TCP port 389) with the current user’s privileges, in order to enumerate information about the domain. It is important to note that the latest version (2.1) of SharpHound will grab a list of domain controllers available for each domain being enumerated, starting with the primary domain controller, and then do a quick port check to see if the LDAP service is available[1]. Once the first one is found, it will cache the domain controller for that domain and use it for LDAP queries. In our testing, this traffic contains LDAP searchRequests and binds, and then becomes encrypted with SASL, which provides no insight into what’s actually being enumerated, as depicted below. I know, frustrating.

 

 

However, depending on the size of your domain, this can be an indicator in itself, as there can be a lot of data being transferred. Looking for one host with an abnormal amount of LDAP traffic can be a starting point to look for BloodHound; however, this is highly dependent on your environment, as there may be legitimate reasons for the traffic.

What happens next can be used as a higher-fidelity alert. The host running BloodHound will attempt to enumerate the group memberships for the users found in the LDAP queries, and it does so in a unique manner. In order to enumerate this information, the host creates an RPC over SMB connection to the Domain Controller. The host first creates a named pipe to SAMR - the Security Account Manager (SAM) Remote Protocol (Client-to-Server) - over the IPC$ share.

 

Once this is accomplished, the RPC Bind command is used to officially connect the RPC session. Now that the RPC session is available, SharpHound will issue requests in the following order:

  1. Connect5
  2. LookupDomain
  3. OpenDomain
  4. OpenAlias
  5. GetMembersInAlias

 

 

To start off, the SamrConnect5 method obtains a handle to a server object, requiring only a string input for the ServerName. Next, the session calls SamrLookupDomaininSamServer, which requires the handle to the server object provided by SamrConnect5, in order to look up the domains hosted by the server-side protocol. Now that the session has a handle to work with, it proceeds to use SamrOpenDomain to obtain a handle for the Domain Object. As you can see, this is an iterative process to get a handle first for the Server Object, then for the Domain Object, which is then used in SamrOpenAlias to obtain a handle to an alias. Finally, the session uses the alias handle as input to SamrGetMembersinAlias with the goal of enumerating the SIDs for the members of the specified alias object. After SamrGetMembersinAlias completes its response, the SAMR session is closed, and in its place a Local Security Authority (LSA) RPC session is started.

 

As the name states, the lsarpc interface is used to communicate with the LSA subsystem. Similar to how SharpHound had to find all the necessary handles in order to finally enumerate the SIDs in a given group, it must also find a policy handle to convert the SIDs discovered via SamrGetMembersinAlias. In order to do that, it utilizes OpenPolicy2, which only requires the SystemName, which in this case will be the DC. Once OpenPolicy2 has finished and acquired the policy handle, it then calls the LookupSids2 method to resolve the SIDs acquired from SamrGetMembersinAlias. Now the data collector has all the information it needs to determine which users are in which groups.

 

During RSA’s enterprise testing of BloodHound, this unique way of enumerating group membership via SamrGetMembersinAlias, and subsequently looking up the acquired SIDs via LsarLookUpSids2 in the same session, has proven to be a very high-fidelity indicator of BloodHound activity. It should be noted that this activity has the potential to be normal, depending on the various administration techniques being used by administrators; however, during our testing in various large environments we have not found an instance of its usage in that capacity.

 

Detecting with RSA NetWitness Packets

NetWitness Packets does an incredible job dissecting the various RPC and SMB actions that cross its capture points, which makes creating detection rather simple. Since the RPC session is transported via SMB, all of the necessary metadata found in our analysis is in one session.

 

 

The following application rule will flag sessions which contain this activity.

 

(service = 139) && (directory = '\IPC$\') && (analysis.service = 'named pipe') && (filename = 'samr') && (action = 'samrconnect5') && (action = 'SamrLookupDomaininSamServer') && (action = 'samropendomain') && (action = 'samropenalias') && (action = 'SamrGetMemberinAlias') && (action = 'lsaropenpolicy2') && (action = 'lsarlookupsids2')

 

Wrapping Up

This blog’s intent was to bring up new and interesting ways to detect BloodHound traffic, with the specific context of running the LocalAdmin collection method from a workstation in a domain against a DC. It’s possible that this traffic could be generated legitimately via an administrator's script, or that it might be an indicator of another tool using a similar enumeration method. Every environment is different from the next, and each needs to be analyzed with the proper context in mind, including this traffic. If you are defending a network, I would encourage you to try out BloodHound, even if it’s just in a lab environment, and see what your toolsets can detect. If you’re not already monitoring traffic going in and out of your DCs, I would also encourage you to build playbooks beyond this one attack vector to help determine potential visibility gaps. In upcoming blog posts we’ll attempt detection of other methods used by BloodHound.

 

[1] https://blog.cptjesus.com/posts/bloodhound21

RSA NetWitness Endpoint (NWE) offers various ways to alert the analyst of potentially malicious activity. Typically, we recommend that an analyst look at the IIOCs daily, and investigate and categorize (whitelist/graylist/blacklist) any hits on IIOC Level 0 and Level 1.

 

When an IIOC highlights a suspicious file or event, the next investigative step is to look at the endpoint where the IIOC hit, and investigate everything related to the module and/or event. Depending on the type of IIOC, the analyst can get answers related to the file/event in any or all of the following categories of data:

  • Scan data
  • Behavioral Tracking Events
  • Network Events

NWE Data Categories

 

If we focus on a Windows endpoint, regardless of whether it is part of an investigation or a standalone infected system, we always complement the analysis of data that has been automatically collected by the agent, with an analysis of the endpoint's Master File Table (MFT). There are very good reasons to always analyze the MFT in these situations. Let me list the main reason here:

  • The automatically collected data (scan, behavioral, network) is always a subset of all the actual events that happen on the endpoint. Namely, the agent collects process, file, and registry related events, but with some limitations. For example, while the agent records file events, it is focused on executable files rather than all file events. So, looking at the MFT around the time of the event will enable the analyst to discover and collect additional artifacts related to an incident, such as non-executable files. These non-executable files can be anything, from the output of some tool the attacker executed (such as a password dumper), to archive files related to data theft activity, to configuration files for a Trojan, etc.

 

Let us first describe some key concepts related to the MFT so that you can get the best value out of your analysis of it. 

 

What is the MFT?

In general, when you partition and format a drive in Windows, you will likely format it with the New Technology File System (NTFS). The MFT keeps track of all the files that exist in the NTFS volume, including the MFT file itself. The MFT is a file that is actually named $MFT in NTFS, and the very first entry inside this file is a record about the $MFT file itself. Just so that you are aware, on average an MFT file is around 200MB. If you open a $MFT file using a hex editor, you can see the beginning of each MFT record marked by the word "FILE":

 

MFT Record Example

 

The MFT keeps track of various information about the files on the file system, such as filename, size, timestamps, and file permissions, as well as where the actual data of the file exists on the volume. The MFT does not contain the data of the file (unless the file is very small, less than roughly 512 bytes); instead, it contains metadata about each file and directory on the volume. Perhaps the easiest way to understand the MFT is to think of a library that uses index cards to keep track of all the books in it. The MFT is like the box containing these index cards, where each index card tells you the title of the book and where to find a particular book in the library. The index card is not the book, it just contains information about the book.

 

The library analogy is also useful to describe a few other concepts regarding the MFT. In this imaginary library, the index cards are never discarded, but rather reused. So, when the library wants to remove a book from its records, it would just mark the index card related to the book as available to contain the information about some new book. Notice that in this situation the index card still contains the old information, and the book is still sitting on a shelf in the library. This situation remains true, until the information in the index card is overwritten by the information of a new book. What we are describing here is the process of a file deletion in Windows (and we are not talking about the Recycle Bin here but actual file deletions). Namely, when a file is deleted in Windows, the MFT record for that file is marked as available to be overwritten by some other new file that will be created in the future. So, deleting a file does not mean deleting its information, or the actual data of the file. Windows just does not show the user files marked as deleted in the MFT. The data may still be there though, and depending on how busy the system is, i.e. how many files are created/deleted, you have a good chance of recovering a deleted file if you get to it before it is overwritten.

 

When NWE parses the MFT, it shows you both regular MFT records, and those that have been marked as Deleted. The deleted records can also be grouped by themselves (more on this later). However, NWE does not do file recovery, meaning it will not get you the data of a deleted file. It will only show you the information that exists in the MFT of that deleted file. In order to recover a deleted file, you will need to use other forensic tools on the actual endpoint, or an image of the drive, to recover the data of the deleted file. NWE is able to retrieve any other file referenced in the MFT that is not marked as deleted.

 

MFT Timestamps

Another very important concept about the MFT is related to file timestamps. The MFT contains 8 timestamps for each file (a folder is also considered a file in the MFT) on the endpoint. These 8 timestamps are contained within two attributes named $Standard_Information ($SI) and $Filename_Information ($FN). Each of these attributes contains the following timestamps:

 

| $SI Timestamps | $FN Timestamps |
| --- | --- |
| Created | Created |
| Modified | Modified |
| Accessed | Accessed |
| MFT Entry Modified | MFT Entry Modified |

 

Whenever you look at the properties of a file using explorer.exe, you see the first three $SI timestamps. Here is an example of looking at the properties of a file named kernel32.dll:

 

Explorer.exe showing $SI timestamps

 

So, you may wonder what the MFT Entry Modified time is, and what the purpose of the equivalent timestamps under the $FN attribute is. The MFT Entry Modified is a timestamp that keeps track of when changes are made to the MFT record itself. Think of it as when the library index card is updated; Windows keeps track of those updates through this MFT Entry Modified timestamp. Windows does not show this timestamp through explorer.exe or any other native Windows tools because this timestamp is not relevant to the typical user.

 

Typically, when we talk about a file, in our mind we think of it based on its name. However, the name of a file is just one property of a file. This distinction is important as we talk about the $FN timestamps. Whenever a file is created so is its filename, because in order to create a file you have to specify a name. However, Windows creates two sets of timestamps associated with a file object. You can think of the $SI timestamps as associated with the file object itself, and of the $FN timestamps as associated with the filename of the file object.

 

The reason we are talking about all these timestamps is that during the analysis of an MFT, time is critical in identifying relevant events. You want to ensure that you are sorting the files based on a reliable timestamp. When it comes to the fidelity of the timestamps, the $FN timestamps are your friend. It is trivial to change the $SI timestamps (Created, Modified, Accessed); in fact, Windows has API functions that allow you to do that. So, many attackers code their droppers to manipulate the $SI timestamps of their files, so that if you are a typical user using explorer.exe to view the properties of a file, you will be tricked into believing that the file was created much further in the past than in reality (attackers typically backdate files). Therefore, during our analysis of the MFT we sort files by their $FN Created Time or the $FN MFT Entry Modified time, since these will likely hold the exact timestamp of when the file was created on disk. There are various situations, such as when a file is renamed, moved on the same volume, or moved to a different volume, that affect these 8 timestamps in different ways depending on the Windows operating system. We will not cover these here, as they would require a blog of their own; for more details on these situations there are various write-ups online.

 

How To Request and Open the MFT

  • Since a Windows endpoint may have multiple NTFS volumes, each volume (or partition, or drive letter) will have its own MFT. So, if the endpoint has multiple drive letters (C:, D:, E:, etc.), each of them will have its own MFT. In NWE you can request the MFT by right-clicking the endpoint's machine icon, then Forensics > Request MFT.

 

Steps to request MFT

 

  • NWE will request the Default Drive, which means it will request the MFT from the volume where Windows is installed (typically C:). However, here you can request the MFT of any drive letter from the drop down.

 

Request MFT by drive letter

 

  • You may wonder how you would know in advance how many volumes exist on the endpoint. This information is available to the analyst under the More Info tab:

 

More Info tab

 

  • Once you request the MFT, it will show up within a few seconds under the Downloaded tab. This is where you should also open the MFT. In order to open the MFT, you should right click on the MFT and select Download and Open MFT. NWE will then ask you where you want to save the MFT file itself. It is a good practice to have a folder named Analysis, under which you can create subfolders for each endpoint you are investigating. Under this subfolder you can save all the artifacts related to this endpoint, including the MFT.

 

How to open downloaded MFT

 

Sample MFT Analysis

 

When you open the MFT using the method described above, NWE will automatically associate the MFT with the endpoint that it came from. NWE will assume the MFT belongs to the C: volume. If this is not the case you can change the drive letter as shown below:

 

MFT drive letter and endpoint

As you can see on the left side you can select various folders under the root of C:, look at only files that are marked as deleted in the MFT, or look at All Files. Generally, we want to look at all files to get a global view of everything and start our analysis with whatever file brought us to this system. Sometimes, it is not necessarily a particular file but an event in time. In this case we can then start our analysis by looking at files at that particular time. 

 

Before we look at an example, let us also talk about what columns you should have in front of you to ensure you can be successful in your analysis. We recommend that you have at least the following columns exposed (hiding the rest if you wish) in this order from left to right. 

  • Filename
  • Size
  • Creation Time $FN (you should also sort the files by this column)
  • Creation Time $SI (this is for comparison purposes to $FN)
  • Modification Time $SI (you want the $SI modification time because the $FN timestamps are only good for creation times, not subsequent modifications to the content of the file)
  • Full Path
  • MFT Update Time $SI
  • MFT Update Time $FN (sometimes the $FN timestamp maybe backdated, so this timestamp can be used instead)

 

You can expose additional columns if you wish, but these should be in this order to maximize the effectiveness of your analysis. In order to select/deselect which columns you wish to see, you can right-click on the column:

 

Column Chooser

 

Let us now go over an example of why you need the MFT. We start the analysis by reviewing the IIOCs, which, as we mentioned, should be done daily. We notice that a suspicious file is identified under the "Autorun unsigned Winsock LSP" IIOC as shown below:

 

Winsock LSP IIOC

We see two files with the same name, uixzeue.dll, listed here, which means that even though they share a name, they have different hash values. Let us pull the MFT for this system and see what else occurred on it that we would not see in Behavioral Tracking, for reasons that will be explained below. When we open the MFT, we select All Files in the left pane and make sure we sort by the $FN Creation Time column. A system can have over 100,000 files in it, so to quickly get to the one we are interested in, we can search (CTRL-F) for the file name as shown below:

 

Step 1: Find file of interest

As we can see in the example above, whatever the attacker used to drop these DLLs onto this endpoint, it time-stomped the $SI timestamps by backdating them. If we were to sort and perform our analysis on the $SI Creation Time, we would be far away from any relevant events about this malware. The $FN Creation Time shows the true time the file was created (born) on this endpoint. After identifying the file of interest, we can then clear the search field to again view all files, but NWE will keep the focus on these two files. When you perform analysis on the MFT, the idea is to look at any files created and/or modified around the time of interest. What we notice is that a few microseconds before the DLLs showed up, two .fon files were created.

 

FON files

NWE will not have recorded anything related to these .FON files because they are just binary files. In fact, they are the bulk of the malicious code, but their content is encrypted. The job of the DLL, after it is loaded, is to open the .FON file, decrypt it, and load its code in memory. The malicious code and the C2 information are all encrypted inside these .FON files. By the way, these are not font files in any way; they are just stored in the \fonts folder and given the .fon extension to blend in with the other file extensions in that folder.

 

So, as you can see, every time you are chasing down a malicious file you should also look at the endpoint's MFT to ensure that you have discovered all related artifacts associated with the malicious event.

 

Happy Hunting!

There are a myriad of post-exploitation frameworks that can be deployed and utilized by anyone. These frameworks are great to stand up as a defender to get insight into what C&C (command and control) traffic can look like, and how to differentiate it from normal user behavior. The following blog post demonstrates an endpoint becoming infected, and the subsequent analysis in RSA NetWitness of the traffic from PowerShell Empire.

 

The Attack

The attacker sets up a malicious page which contains their payload. The attacker can then use a phishing email to lure the victim into visiting the page. Upon the user opening the page, a PowerShell command is executed that infects the endpoint and is invisible to the end user:

 

 

The endpoint then starts communicating back to the attacker's C2. From here, the attacker can execute commands such as tasklist and whoami, and run other tools:

 

From here onward, the command and control would continue to beacon at a designated interval to check back for commands. This is typically what the analyst will need to look for to determine which of their endpoints are infected.

 

The Detection Using RSA NetWitness Network/Packet Data

The activity observed was only possible due to the communication happening over HTTP. If this had been SSL, the detection via packets would be much more difficult. This is why introducing SSL Decryption/Interception/Offloading is highly recommended. SSL inspection devices are nothing more than a well-designed man-in-the-middle attack that breaks the encryption into two separate encrypted streams. Therefore, they still provide an adequate level of protection to end-users while allowing security analysts and devices to properly monitor and alert when malicious or unwanted activity takes place, such as the C2 traffic shown here. In summary, if you are responsible for protecting your organization’s assets, you should definitely consider the pros and cons of using this technology.

 

The analyst begins their investigation by placing a focus on looking for C2 traffic over HTTP. The analyst can then look into pulling apart the characteristics of the protocol by using the Service Analysis meta key. From here they notice a couple of interesting meta values to pivot on: http with binary and http post no get no referer directtoip:

 

Upon reducing the number of sessions to a more manageable number, the analyst can then look into other meta keys to see if there are any interesting artifacts. The analyst looks under the Filename, Directory, Client Application, and Server Application meta keys, and observes that the communication is always toward a microsoft-iis/7.5 server, from the same user agent, and toward a subset of PHP files:

 

The analyst decides to use this as a pivot point and removes some of the other, more refined queries to focus on all communication toward those PHP files, from that user agent, and toward that IIS server version. The analyst now observes additional communication:

 

Opening up the visualization, the analyst can view the cadence of the communication and observes there to be a beacon type pattern:

 

Pivoting into the Event Analysis view, the analyst can look into a few more details to see if their suspicions of this being malicious are true. The analyst observes a low variance in payload size and a connection taking place roughly every 4 minutes:
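This kind of check-in behavior can also be confirmed outside of the UI. The following is a minimal, hypothetical sketch: given a list of (session time, payload size) pairs exported for one source/destination pair, it prints the inter-arrival gaps and the payload spread, so a regular beacon with low variance stands out. The sample values below are placeholders, not real data.

from datetime import datetime
from statistics import mean, pstdev

# Hypothetical export: (session time, payload size in bytes) for one src/dst pair
sessions = [
    ("2019-03-20 10:00:05", 1412),
    ("2019-03-20 10:04:07", 1404),
    ("2019-03-20 10:08:06", 1420),
    ("2019-03-20 10:12:08", 1409),
]

times = [datetime.strptime(t, "%Y-%m-%d %H:%M:%S") for t, _ in sessions]
sizes = [s for _, s in sessions]

# Seconds between consecutive sessions; a near-constant gap suggests a beacon
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]

print("check-in gaps (s):", gaps)
print("payload mean: %.0f bytes, std dev: %.1f" % (mean(sizes), pstdev(sizes)))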

 

The analyst reconstructs some of the sessions to see the type of data being transferred and observes a variety of suspicious GETs and POSTs with varying data:

 

The analyst confirms this traffic is highly suspicious based on the analysis they have performed and subsequently decides to track the activity with an application rule. To do this, the analyst looks through the metadata associated with this traffic and finds a unique combination of metadata that identifies this type of traffic:

 

(service = 80) && (analysis.service = 'http1.0 unsupported cache header') && (analysis.service = 'http post missing content-type')
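As a rough illustration, the query above could be registered on the Decoder as an application rule along these lines (the rule name below is a placeholder, and the alert meta key is a matter of preference):

name="possible_empire_http_c2" rule="(service = 80) && (analysis.service = 'http1.0 unsupported cache header') && (analysis.service = 'http post missing content-type')" alert=ioc type=application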

 

IMPORTANT NOTE: Application rules are very useful for tracking activity. They are, however, very environment specific: an application rule that is high fidelity in one environment could be incredibly noisy in another. Care should be taken when creating or using application rules to make sure they work well within your environment.

 

The Detection Using RSA NetWitness Endpoint Tracking Data

The analyst, as they should on a daily basis, is perusing the IOC, BOC, and EOC meta keys for suspicious activity. Upon doing so, they observe the metadata value browser runs powershell and begin to investigate:

 

Pivoting into the Event Analysis view, the analyst can see that Internet Explorer spawned PowerShell, and subsequently the PowerShell that was executed:

 

The analyst decides to decode the base64 to get a better idea as to what the PowerShell is executing. The analyst observes the PowerShell is setting up a web request, and can see the parameters it would be supplying for said request. From here, the analyst could leverage this information and start looking for indicators of this in their packet data (this demonstrates the power behind having both Endpoint and Packet solutions):
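Encoded PowerShell of this style (the -EncodedCommand / -enc pattern) is simply base64 over UTF-16LE text, so it can be decoded outside of the console as well. A minimal sketch, where the blob below is a stand-in example rather than the actual payload:

import base64

# Stand-in blob; replace with the encoded command copied from the event
encoded = "dwBoAG8AYQBtAGkA"  # this example decodes to "whoami"

# PowerShell -EncodedCommand blobs are base64 over UTF-16LE text
decoded = base64.b64decode(encoded).decode("utf-16-le")
print(decoded)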

 

Pivoting in on the PowerShell that was launched, it is also possible to see the whoami and tasklist commands that were executed. This helps the analyst paint a picture of what the attacker was doing:

 

Conclusion

The traffic outlined in this blog post is of a default configuration for PowerShell Empire; it is therefore possible for the indicators to be different depending upon who sets up the instance of PowerShell Empire. With that being said, C2s still need to check in, deploy their payload, and perform suspicious tasks on the endpoint. The analyst only needs to pick up on one of these activities to start pulling on a thread and unwinding the attacker's activity.

 

It is also important to note that PowerShell Empire network traffic is cumbersome to decrypt. It is therefore important to have an endpoint solution, such as NetWitness Endpoint, that tracks the activities performed on the endpoint for you.

A question was posed to our team by one of the engineers; had we seen the new Chrome and Microsoft zero-day exploits using RSA NetWitness Endpoint?  I honestly didn't even know about these exploits, so I had to do some research.  I found the initial Google blog post here: Google Online Security Blog: Disclosing vulnerabilities to protect users across platforms.  The first vulnerability (NVD - CVE-2019-5786) is the Google Chrome vulnerability; the second was disclosed to Microsoft by Google, but as of the time of writing, no patch has been released by Microsoft.

 

Other articles and blogs that talk about these zero-days say they are being used in conjunction with each other. There was no proof-of-concept code nor any exploits I could use from the research I did. I did see some articles talking about these being exploited in the wild, but I couldn't find any other details. The second zero-day is a Windows 7 32-bit privilege escalation vulnerability that performs a null pointer dereference in win32k.sys. I found a similar privilege escalation exploit for CVE-2014-4113 and successfully exploited a box in my sandbox environment while it had a NetWitness Endpoint agent on it. The two IIOCs that fired that would help detect this attack were:

 

IIOC 1 - “Creates process and creates remote thread on same file”
IIOC 2 - “Unsigned creates remote thread”

 

The remote thread in this case was on notepad.exe which is common of Meterpreter.

 

The exploit I used can be found here: https://www.exploit-db.com/exploits/35101. It also does a null pointer dereference in win32k.sys similar to the Microsoft zero-day.  Below are some screenshots of what I saw from the attacker side and the NetWitness Endpoint side.

 

Here you can see the exploit being injected into process ID 444.

 

 

Here is the entry in RSA NetWitness Endpoint.

 

 

Another entry is lsass.exe opening notepad.exe after the remote thread creation.  I believe this is the actual privilege escalation taking place.  It also makes sense because the timestamp matches exactly to the timestamp in Kali.

 

 

Here are the IIOCs which, based on timestamps, I believe correspond to the initial Meterpreter session.  This is still an indication of suspicious activity, and when combined with lsass.exe opening the same remote-thread process, it raises even more alarms.

 

 

I gave this to the engineer in the hope that the new Microsoft zero-day could be detected in the same way; even though we don't know the details of the Google Chrome vulnerability, we do know the two are being exploited together.  This could possibly help identify this attack that has been seen in the wild.  Also, on another note, the fact that two zero-days are being exploited in the wild together just screams of a well-funded, advanced adversary, and it's a relief to know that our tool, out of the box, should be able to help find this type of activity.

In line with some of my other integrations, I recently decided to also create a proof-of-concept solution on how to integrate RSA NetWitness meta data into an ELK stack.

 

Given that I already had a couple of Python scripts to extract NetWitness meta via the REST API, I quickly converted one of them to generate output in an ELK-friendly format (JSON).

 

Setting up an ELK instance is outside the scope of this post, so with that done all I needed was a couple of configuration files and settings.

 

Step #1 - Define my index mapping template with the help of curl and the attached mappings.json file.

curl -XPUT http://localhost:9200/_template/netwitness_template -d @mappings.json

NOTE: The mappings file may require further customization for additional meta keys you may have in your environment.

 

Step #2 - Define my Logstash configuration settings.

# Sample Logstash configuration 
# Netwitness -> Logstash -> Elasticsearch pipeline.

input {
  exec {
    command => "/usr/bin/python2.7 /usr/share/logstash/modules/nwsdk_json.py -c https://sa:50103 -k '*' -w 'service exists' -f /var/tmp/nw.track "
    interval => 5
    type => 'netwitness'
    codec => 'json_lines'
  }
}

filter {
   mutate {
      remove_field => ["command"]
   }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "netwitness-%{+YYYY.MM.dd}"
  }
}

Again, a level of ELK knowledge is required that is outside the scope of this post. However, a few settings in the command section may require additional clarification; the Python code has them documented, but for ease of reference I'm listing them below:

 

  1. -c https://sa:50103 (the REST endpoint from which to collect the data)
  2. -k '*' (the list of meta keys to retrieve; '*' refers to all available meta keys)
  3. -w 'service exists' (the SDK query that selects the sessions to retrieve; here, all "packet sessions" meta data)
  4. -f /var/tmp/nw.track (a tracker file location so only new data is retrieved by each execution of the input command, i.e. it continues from the last data previously retrieved)
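Before wiring the script into Logstash, it can be worth running it once by hand with the same arguments to confirm it produces JSON lines and that the tracker file gets created, for example:

/usr/bin/python2.7 /usr/share/logstash/modules/nwsdk_json.py -c https://sa:50103 -k '*' -w 'service exists' -f /var/tmp/nw.track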

 

There will be additional configuration settings and steps required in ELK. Once again, plenty of information on this is already available, ELK being the open source solution that it is, so I won't go into that here; I'm by no means an expert on ELK.

 

Finally, all that is left to show you is how the data looks. First, some of my Dynamic DNS events.

 

List of NetWitness Events in ELK

 

Below are the details of one of those events.

 DynDNS Event Details

 

 

As a proof-of-concept, all these details and scripts are provided as-is without any implied support or warranty. I'm not really that experienced in ELK, so I'm sure someone can probably improve on this significantly; if you do, feel free to share your experiences in the comments section below.

 

Thank you,

 

Rui

We are excited to announce the latest version of the RSA NetWitness Platform! 

For those of you at RSA Conference, come to the RSA Booth to see first hand the new capabilities of the platform. RSA NetWitness Platform 11.3 is also being used by the RSA Conference SOC team as they monitor the traffic in and around the Moscone Center.

 

New Features in RSA NetWitness Platform

New capabilities in RSA NetWitness Platform 11.3 provide distinct value and are further enhanced when leveraged across a single platform.

 

  • Threat-Aware Authentication with RSA SecurID Access:  RSA NetWitness Platform now fuels threat-aware authentication to enable continuous authentication and the ability to block insider threats and malicious actors in the act of an attack while reducing the time and effort by overworked security operations teams.

 

  • RSA NetWitness UEBA: RSA NetWitness Platform introduces the first machine learning models based on deep endpoint process data collected by RSA's EDR solution, RSA NetWitness Endpoint. This advanced analytics capability can rapidly detect anomalies in users' behavior and uncover unknown, abnormal, and complex evolving threats that would otherwise be missed by analyzing logs alone.

 

  • RSA NetWitness Endpoint 11.3: The only fully native endpoint detection and response solution within an evolved SIEM, equipping security analysts with industry-leading detection, investigation, and incident response capabilities. This Endpoint Detection and Response (EDR) solution is an integrated product offering within the RSA NetWitness Platform.

Understanding how attackers may gain a foothold on your network is an important part of being an analyst. If attackers want to get into your environment, they typically will find a way. It is up to you to detect and respond to these threats as effectively and efficiently as possible. This blog post will demonstrate how a host became infected with PoshC2, and subsequently how the C&C (Command and Control) communication looks from the perspective of the defender.

 

The Attack

The attacker crafts a malicious Microsoft Word Document that contains a macro with their payload. This document is sent to an individual from the organisation they want to attack, in the hopes the user will open the document and subsequently execute the macro within. The Word document attempts to trick the user into enabling macros by containing content like the below:

 

The user enables the content and doesn't see anything additional, but in the background the malicious macro executes and the computer is now part of the PoshC2 framework:

 

From here, the attacker can start to execute commands, such as tasklist, to view all currently running processes:

 

The attacker may also choose to setup persistence by creating a local service:

 

Preamble to Hunting

Prior to performing threat hunting, the analyst needs to assume a compromise and generate a hypothesis as to what they are looking for. In this case, the analyst is going to focus on hunting for C2 traffic over HTTP. The chosen hypothesis dictates where they will look for that traffic and which meta keys are of use to achieve the desired result. Refining the approach to threat hunting in this way yields far greater detection results: if analysts have a path to walk down and can exhaust all avenues of that path before taking another route, the data set is sifted through in a more methodical manner, with fewer distractions for the analyst.

 

The Detection Using Packet Data

Understanding how HTTP works is vital in detecting malicious C2 over HTTP. To become familiar with it, analysts should analyse both HTTP traffic generated by malware and HTTP traffic generated by users; this allows the analyst to quickly determine what is out of place in a data set versus what seems normal. Blending in is a common strategy among malware authors: they want their traffic to look like regular network communications and appear as innocuous as possible, but by their very nature Trojans are programmatic and structured, and when examined it becomes clear the communications hold no business value.

 

Taking the information above into account, the analyst begins their investigation by focusing on the protocol of interest, HTTP. This one simple query quickly removes a large amount of the data set and allows the analyst to place an analytical lens on just the protocol of interest. This is not to say that the analyst will not look at other protocols, but for this hunt their focus is on HTTP:

 

Now the data set has been significantly reduced, but that reduction needs to continue. A great way of reducing the data set to interesting sessions is to use the Service Analysis meta key. This meta key contains metadata that pulls apart the protocol and details information about each session, which can help the analyst distinguish between user behavior and automated malicious behavior. The analyst opens the meta key and focuses on a few characteristics of the HTTP session that make the traffic more likely to be of interest:

 

Let's delve into these a little, and find out why s/he picked them:

 

  • http no referer: An interactive session from a typical user browsing the web would normally include a referrer header with the address the user came from. More mechanical HTTP traffic typically will not have a referrer header.
  • http four or less headers: Typical HTTP requests from users browsing the web have seven or more HTTP headers, so looking for sessions with a lower header count can yield more mechanical HTTP requests.
  • http single request/response: A single TCP session can be used for multiple HTTP transactions, so if a typical user is browsing the web you would expect multiple GETs, and potentially POSTs, within a single session. Placing a focus on HTTP sessions with only a single request and response can therefore lead us to more mechanical behavior. A combined drill using these values is shown just after this list.
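As a rough illustration, those characteristics can be combined into a single drill in the Investigation view; the exact combination that works best is a judgment call for your own environment:

(service = 80) && (analysis.service = 'http no referer') && (analysis.service = 'http four or less headers') && (analysis.service = 'http single request/response')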

 

There are a variety of other drills the analyst could have performed, but for now this is sufficient for the path they want to take, as the data set has been reduced to a more manageable amount. While perusing the other available metadata, the analyst observes an IP communicating directly with another IP, with requests for a diverse range of resources:

 

Opening the visualization to analyse the cadence of the communication, the analyst observes there to be some beaconing type behavior:

 

 

Reducing the time frame down, the beaconing is easier to see and appears to be ~every 5 minutes:

 

Upon opening the Event Analysis view, the analyst can see the beacon pattern, which is happening roughly every 5 minutes. The analyst also observes a low variance in the payload size; this is indicative of mechanical check-in type behavior, which is exactly what the analyst was looking for:

 

Now that the analyst has found some interesting sessions, they can reconstruct the raw payload to see if there are further anomalies of interest. Browsing through the sessions, the analyst sees that the requests do not return any data and are to random-sounding resources. This seems like some sort of check-in behavior:

 

The analyst comes back to the Events view to see if there are any larger sessions toward this IP, to get a better sense of whether any data is being sent back and forth. The analyst notices a few sessions that are larger than the others and decides to investigate them:

 

Reconstructing one of the larger sessions, the analyst can see a large chunk of base64 is being returned:

 

As well as POSTs with a suspicious base64-encoded Cookie header that does not conform to the RFC:

 

This seems to be the only type of data transferred between the two IPs and stands out as very suspicious. This should alert the analyst that this is most likely some form of C2 activity:

 

The base64-encoded data is encrypted, so the analyst cannot simply decode it to find out what information is being transferred. 

 

THOUGHT: Maybe there is another way for us to get the key to decode this? Keep reading on!

 

The analyst has now found some suspicious activity; the next stage is to track it and see if it is happening elsewhere. This can easily be done with an application rule: the analyst identifies criteria somewhat unique to this traffic using the Investigation view and converts that into an application rule. The following example would pick up on this activity and make it far easier for the analyst to track:

 

(service = 80) && (server = 'microsoft-httpapi/2.0') && (filename !exists) && (http.response = 'cachecontrol') && (resp.uniq = 'no-cache, no-store, must-revalidate') && query length 14-16

 

IMPORTANT NOTE: Before adding this application rule to the environment, the analyst thoroughly checked how many hits this logic would create in their environment. Application rules can work well in one environment, but can be very noisy in others.

 

It is also important to note that this application rule was generated specific to this environment and the traffic that was seen, not all PoshC2 traffic would look this way. It is up to the analyst to create application rules that suit their environment. It is also important to note that the http.response and resp.uniq meta keys need to be enabled in the http_lua_options file as they are not enabled by default.

 

The analyst creates the application rule, and pushes this to all available Decoders:

 

Upon doing so, the analyst sees the application rule creating metadata as expected, but also notices another C2, and another host in their network infected by PoshC2:

 

This demonstrates the necessity of tracking activity on your network as and when it is found; it can uncover newly infected endpoints and allows you to track that activity easily.

 

From here, the analyst has multiple routes they could take:

 

  • Perform OSINT (Open Source Intelligence) on the IP/activity in question; or
  • Investigate if there is a business need for this communication; or
  • Investigate the endpoint to see what is making the communication, and if there are any other suspicious indicators.

 

The Detection Using Endpoint Tracking

The analyst, while performing their daily activity of perusing IOCs (Indicators of Compromise), BOCs (Behaviors of Compromise), and EOCs (Enablers of Compromise), observes a BOC that stands out as interesting to them, office application runs powershell:

 

Opening the Event Analysis view for this BOC, the analyst can better understand which document the user opened for this activity to happen. There are three events here because the user opened the document three times, probably because they weren't seeing any content from the document after enabling the macros within it:

 

Opening the session itself, the analyst can see the whole raw payload of the PowerShell that was invoked from the Word document:

 

Running this through base64 decoding, the analyst can see that it is double base64 encoded, and the PowerShell has also been deflated, meaning more obfuscation was put in place:

 

Decoding the second set of base64 and inflating, the actual PowerShell that was executed can now be seen:
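For reference, this unwrapping can also be reproduced outside of PowerShell. The following is a minimal sketch under a few assumptions: the outer blob is base64 over UTF-16LE text, the inner blob is base64 over raw-Deflate-compressed script (the common pattern for stagers like this), and the blob itself is passed in as an argument rather than hard-coded:

import base64
import re
import sys
import zlib

# Usage: python3 unwrap_stager.py <encoded blob copied from the event>
outer_blob = sys.argv[1]

# Outer layer: base64 over UTF-16LE PowerShell text (the -EncodedCommand style)
stage1 = base64.b64decode(outer_blob).decode("utf-16-le", errors="replace")
print(stage1)

# Inner layer: pull out the embedded base64 string (simple heuristic) and
# inflate it; wbits=-15 tells zlib to expect raw Deflate with no header
match = re.search(r"[A-Za-z0-9+/=]{100,}", stage1)
if match:
    stage2 = zlib.decompress(base64.b64decode(match.group(0)), -15)
    print(stage2.decode("utf-8", errors="replace"))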

 

Perusing the PowerShell, the analyst observes that there is a decode function within. This function requires an IV and KEY to successfully decrypt. This could be useful to decrypt the information that we saw in the packet data: 

 

The analyst calculates the IV from the key which, according to the PowerShell, is the key itself minus the first 15 bytes; the analyst then converts this to hex for ease of use:

 

Now that the analyst has the key and the IV, they can decrypt the information they previously saw in the packets. The analyst navigates back to the packets and finds a session that contains some base64:

 

Using the newly identified information retrieved via the endpoint tracking, the analyst can now start to decode the information and see exactly what commands and data were sent to the C2:
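As a rough sketch of what that decryption can look like offline, assuming AES in CBC mode with the key and IV recovered above (this uses the third-party pycryptodome library, and the values below are placeholders for the real key, IV, and base64 blob taken from the session):

import base64

from Crypto.Cipher import AES  # third-party: pip install pycryptodome

# Placeholders: substitute the real values recovered from the stager and session
key = bytes.fromhex("00" * 32)   # 32-byte AES key from the decoded PowerShell
iv = bytes.fromhex("00" * 16)    # 16-byte IV derived from the key as described above
blob = base64.b64decode("AAAAAAAAAAAAAAAAAAAAAA==")  # base64 from the packet session

# AES-CBC decrypt (assumed mode) using the recovered key and IV
cipher = AES.new(key, AES.MODE_CBC, iv)
plaintext = cipher.decrypt(blob)
print(plaintext.decode("utf-8", errors="replace"))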

 

Some of this can be incredibly beneficial, such as the below, which lists all the URLs this C2 will use:

 

The analyst also wants to find out if any other interesting activity was taking place on the endpoint. Upon perusing the BOC meta key, the analyst spots the metadata value creates suspicious service running command shell:

 

The analyst opens the sessions in the Event Analysis view and can see that PowerShell spawned sc.exe to create a suspicious-looking service called CPUpdater:

 

This is the persistence mechanism chosen by the attacker. The analyst now has the full PowerShell command and can base64 decode it to confirm their assumptions:

 

 

 

Conclusion

Understanding the nuances between user-based behavior and mechanical behavior gives an advantage to the analyst performing threat hunting. If the analyst understands what "normal" should look like within their environment, they can more easily discern it from abnormal behavior.

 

It is also important to note the advantage of having endpoint tracking data in this scenario. Without it, the original document with the malicious PowerShell may not have been recoverable, and therefore the decryption of the information between the C2 and the endpoint would not have been possible; both tools heavily complement one another in creating the full analytical picture.

Attackers are continuously evolving in order to evade detection.  A popular method often utilized is encoding. An attacker may choose to, for example, encode their malicious binaries in order to evade detection; attackers can use a diverse range of techniques to achieve this, but in this post, we are focusing on an example of a hex encoded executable. The executable chosen for this example was not malicious, but a legitimate signed Microsoft binary.

 

This method of evading detection was observed in the wild by the RSA Incident Response team. Due to the close relationship between the Incident Response Team and RSA's Content Team, a request for this content was submitted by IR, and was published to RSA Live yesterday. The following post demonstrates the Incident Response team testing the newly developed content.

 

The Microsoft binary was converted to hexadecimal and uploaded onto Pastebin, which is an example of what attackers are often seen doing:  

 

A simple PowerShell script was written to download and decode the hexadecimal encoded executable and save it to the Temp directory:
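The script itself is shown in the screenshot above. Purely as an illustration of the same decode step (in Python rather than PowerShell, with hypothetical file names), turning a hex dump back into an executable takes only a few lines:

# Illustrative only: decode a hex-encoded payload back into an executable.
# "hexdump.txt" and "dllhost.exe" are hypothetical file names.
with open("hexdump.txt") as f:
    hex_text = "".join(f.read().split())  # drop any whitespace/newlines

with open("dllhost.exe", "wb") as out:
    out.write(bytes.fromhex(hex_text))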

 

Typically, the above PowerShell would be Base64 encoded and the IR team would normally see something like the below:

 

After executing the PowerShell script, it is possible to see that dllhost.exe was successfully decoded and saved into the Temp directory:

 

Upon perusing the packet metadata, the analyst would be able to easily spot the download of this hex encoded executable by looking under the Indicator of Compromise key:

 

Conclusion

It is important to always keep the RSA NetWitness platform up to date with the latest content. RSA Live allows analysts to subscribe to content, as well as receive updates on when newly developed content is available. For more information on setting up RSA Live, please see: Live: Create Live Account 

I've come across ICMP tunneling only a handful of times, but this was the first time I had seen it used as part of a VPN client.  The VPN client was SoftEther VPN and, in addition to SSL VPN, it can also perform ICMP and DNS tunneling.  During a recent hunting engagement, I had the opportunity to identify and create content to detect this activity.

 

Let's have a look.

 

 

I drilled into ICMP traffic (ip.proto = 1) and then looked at Session Characteristics (analysis.session).  This led me to the meta 'icmp large session'.  Previously, I had created meta to describe when sessions had 'session.split' meta.  Session.split occurs when a session is either very large or very long.  You can find more about session.split in a previous post I wrote here.  However, one simple way to identify when session.split exists is to write an application rule.

 

 

As we look at the sessions associated with this activity, we see the following:

 

 

The data called out above concerns the transmitted (requestpayload) and received (responsepayload) data, as well as the payload size and overall size of the session.  If you've ever looked at ICMP traffic before, it's typically small.  This is not small.

 

Furthermore, this does not look like typical ICMP traffic as shown below:

 

 

With RSA NetWitness Packets, we have an opportunity to describe our network traffic pretty effectively.  When I saw this traffic, I wanted to improve some of the meta I had to describe the size.  The one below will let me know if a session is greater than 1mb.

 

 

Now, circling back to the ICMP tunneling, we can do this with another application rule.

 

 

By taking these steps, I am now better equipped to identify ICMP tunneling when I observe it.

 

App Rules
name="session split" rule="session.split exists" alert=analysis.session type=application
name="session size greater than 1mb" rule="streams=2 && size=1024000 -u" alert=analysis.session type=application
name=possible_icmp_tunneling rule="ip.proto=1 && session.split exists && analysis.session = 'icmp large session' && analysis.session = 'session size greater than 1mb'" alert=ioc type=application

 

Note that 'session split' and 'session size greater than 1mb' should be before the 'possible_icmp_tunneling' rule.  Order is important.

 

I'll try to get some DNS tunneling created with this SoftEther VPN client soon.

 

Good luck and happy hunting.

This is a collection of ESA rules that create persisted in-memory tables for various scenarios.  Hopefully they are useful, and they can also serve as templates for future ideas.

 

GitHub - epartington/rsa_nw_esa_whatsnew: collection of ESA rules for whats new stuff 

 

  • New JA3 hash
  • New SSH user agent
  • New useragent
  • New src MAC family
  • New certificate CA
  • New certificate CA (Endpoint)

 

These are advanced ESA rules, so they require copying and pasting the text into the advanced rule editor.

 

These can also be tuned to learn more (longer learning window) so that more data is added to the known window of the ESA rule.  Just be careful about potential performance issues if you make the window too long for your environment.

 

Recently, a question came from a customer who wanted to know if it was possible to alert when a new device.ip started logging to RSA NetWitness.  Thinking about it for a second, it seemed like a good test of a new template that I was testing for ESA.

 

The rule, located here, does just that:

GitHub - epartington/rsa_nw_esa_whatsnewdeviceip: ESA rule to indicate when a new device type is seen 

 

Add this rule in ESA in the advanced editor to create the rule.

 

It works as follows:

  • A learning window is created with the timer in the rule (1 day by default).
  • During that learning window, new device.ip + device.type combinations are added to the window to create a known list of devices.
  • Once the learning window has expired, the system alerts on any new combination of device.ip and device.type seen after that.

 

Customizations that you may want to make include changing the learning window timer from 1 day to something longer (potentially 5 days).

 

The data is kept in a named window and persisted to a JSON file on the ESA disk system in case there are restarts or service changes.

 

Alerts are created in ESA/Respond that can then be assigned as work items to validate that the new system was on-boarded properly and configured appropriately before closing.

 

During a recent customer engagement, I found the "customtcp shell" meta with some very interesting sessions.  All of the traffic was using what appeared to be custom encryption and the destination IP was based in Korea.  Of course, I knew this couldn't be the first time someone had come across traffic like this, so I looked at previous reporting for similar traffic.  Multiple analysts, like myself, had seen traffic exactly like this and their analysis led them in different directions.  Some even believed that this was NanoCore RAT traffic, as it had similar attributes, but I was still skeptical.  After hours of researching and trying to dissect this traffic, I came across someone's master's thesis discussing end-to-end encrypted mobile messaging apps.  The link to the thesis is below.

 

https://www.diva-portal.org/smash/get/diva2:1046438/FULLTEXT01.pdf 

 

The traffic I was seeing matched perfectly with the handshake packet used by the proprietary protocol called LOCO, which is used by the mobile messaging app KakaoTalk.  The thesis broke it down very well, and without it I think I would still be scratching my head at this traffic.  So let's go back to what this traffic looks like and how I was able to determine it was KakaoTalk.

 

Here is an example of the customtcp shell meta being populated in a customer's environment.  This is over a 5-day time period, so it's not very common in most customers' environments.

 

 

As of today, I have seen this type of traffic in four customer environments within the RSA NetWitness Platform. Further, fellow colleagues and team members have also inquired about this type of traffic.  To recreate this traffic and avoid showing customer data, I downloaded the KakaoTalk mobile app on my personal iPhone.  Here is an example of what the handshake packet looks like.

 

 

With the help of Stephen Brzozowski during the first engagement, we were able to dissect this packet to some extent.  The first 12 bytes of the sessions are the custom headers and always repeat during the first session.

 

 

The response and any follow-on packets would begin with the size in little-endian format.
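As a quick sanity check when eyeballing these packets in the hex view, a little-endian size field can be read with a couple of lines of Python (a minimal sketch; the byte string below is a placeholder, not real capture data):

import struct

# Placeholder payload bytes copied out of the packet view
payload = b"\x10\x27\x00\x00" + b"\x00" * 16

# The length field is a 32-bit unsigned integer in little-endian byte order
(size,) = struct.unpack("<I", payload[:4])
print("declared size:", size)  # 0x2710 = 10000 in this placeholder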

 

 

With the help of the article mentioned above, I was able to match this traffic to the LOCO protocol's initial handshake packet.

 

 

As well as the follow-on packets that matched the LOCO encrypted packet.

 

Considering that every instance of the handshake packet I have seen in multiple environments always begins with the same first 12 bytes, I wrote a quick parser to find this traffic; it's attached below.  I have deployed it in two customer environments and left it running for almost 24 hours with no false positives and approximately 20 sessions discovered in each environment.  Currently, it only detects the initial handshake, but I intend to modify it later to detect all sessions with the same high fidelity.  In addition, I have seen one other instance where the first 12 bytes didn't match because a single bit was off, but I still believe it was KakaoTalk.

 

While deploying this parser and testing in these environments, I also discovered similar traffic that used other custom headers, which I believe belongs to another type of mobile messaging app that uses end-to-end encryption.  I believe we will continue to see additional mobile apps that populate this meta key in the future.  While documentation for these apps and custom protocols is scarce, it will present a challenge for analysts to distinguish between malicious custom TCP shells and benign traffic such as that discussed in this blog.

This blog post is a follow-on from the following two blog posts:

 

 

 

The Attack

The attacker is not happy with only executing commands via the Web Shell, so she decides to upload a new Web Shell called reGeorg (https://sensepost.com/discover/tools/reGeorg/). This Web Shell allows the attacker to tunnel other protocols over HTTP, allowing the attacker to RDP, for example, directly onto the Web Server, even though RDP isn't allowed directly from the internet.

 

The attacker can upload the Web Shell via one of the previously utilized Web Shells:

 

The attacker can now check that the upload was successful by navigating to the uploaded JSP page. If all is okay, the Web Shell returns the message shown in the screenshot below:

 

The attacker can now connect to the reGeorg Web Shell:

 

The attacker now has remote access to anything reachable from the Web Server where the Web Shell is located; for example, they could choose to RDP to a previously identified machine:

 

Attackers also like to keep other access methods to endpoints; one way of doing this is to set up an accessibility backdoor. This involves the attacker altering a registry key to load CMD when another application executes, in this case sethc.exe, the accessibility feature you typically see when pressing the SHIFT key five times. This means that anyone who can RDP to that machine can receive a system-level command prompt with no credentials required: sethc.exe can be invoked at the login screen by pressing the SHIFT key five times and, with the registry key altered, will spawn CMD as well.

 

To set this up, the attacker can use the Web Shell and perform the change over WMI using REG ADD:
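The exact command used is shown in the screenshot above. Conceptually, this style of accessibility backdoor is an Image File Execution Options "Debugger" entry for sethc.exe, along the lines of the following REG ADD (shown here as it would be run locally; in this case it was executed remotely over WMI):

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\sethc.exe" /v Debugger /t REG_SZ /d "C:\Windows\System32\cmd.exe" /f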

 

Now the attacker can RDP back to the host they just set up the accessibility backdoor on, press the SHIFT key five times to invoke sethc.exe, and be given a command prompt as SYSTEM without having to use credentials:

 

 

The Analysis in RSA NetWitness

The analyst, while perusing Behaviors of Compromise, observes some suspicious indicators: runs wmi command-line tool, creates remote process using wmi command-line tool, and http daemon runs command shell, just to name a few:

 

Drilling into the WMI-related metadata, it is possible to see the WMI lateral movement that was used to set up the accessibility backdoor from the Web Shell:

 

The analyst also observes some interesting hits under the Indicators of Compromise meta key, enables login bypass and configures image hijacking:

 

Drilling into these sessions, we can see this is related to the WMI lateral movement performed, but this event comes from the endpoint the backdoor was set up on:

 

The analyst, further perusing the metadata, drills into the Behavior of Compromise metadata, gets current username, and can see the sticky key backdoor being used (represented by the sethc.exe 211) to execute whoami:

 

The analyst, also perusing HTTP network traffic, observes HTTP headers that they typically do not see: x-cmd, x-target, and x-port:

 

Drilling into the RAW sessions for these suspicious headers, it is possible to see the command sent to the Web Shell to initiate the RDP connection:

 

Further perusing the HTTP traffic toward tunnel.jsp, we can see the RDP traffic being tunnelled over HTTP requests. The reason this shows as HTTP and not RDP is that, with RDP tunnelled inside HTTP, there are more characteristics defining the session as HTTP than as RDP:

 

Conclusion

Attackers will leverage a diverse range of tools and techniques to ensure they keep access to the environment they are interested in. The tools and techniques used here are freely available online and are often seen utilized by advanced attackers; performing proactive threat hunting will ensure that these types of events do not go unnoticed within your environment.
