
(Authored by Steve Schlarman, Portfolio Strategist, RSA)

It was Mark’s big shot. He finally had a meeting with Sharon, the CIO. Her schedule was so busy it was legendary, and for her to spend time with a risk analyst was a clear indicator that she recognized the new challenges facing their company. Although he only had 15 minutes, Mark was prepared – notepad at the ready, brimming with nervous energy. After some brief chit-chat he got down to business, ready to drill into a conversation about their company’s biggest obstacles, the most impactful concerns, the top-of-mind issues, the coup de grâce that could spell disaster for the organization. He took a deep breath and asked his big money question: ‘So, what keeps you up at night? What are you worried about?’

 

Sharon beamed. She spun around to her whiteboard and spewed out a litany of projects fueling their company’s digital transformation – an IoT project, the Salesforce.com implementation, a massive VMware migration and their hybrid cloud, the new employee work-at-home program, the impending customer mobile portal…

While that question got Sharon started, let’s think about this a bit differently.

 

With all the benefits the new digital world offers, there are a host of risks that must be managed.   The major areas of risk remain the ‘usual suspects’ such as security, compliance, resiliency, inherited risks from third parties and operational risk. However, digital business amplifies uncertainty for organizations today.  For example:

  • Digital business, by its very nature, increases the threat of cyber incidents and risks around your intellectual property and customer data.
  • The expanded connectivity and expectations of the ‘always on’ business stresses the importance of resiliency.
  • Business has evolved into an ecosystem of internal and external services and processes leading to a complex web of ‘inherited’ risks.
  • The disappearing perimeter and digital workforce are challenging how organizations engage their customers and employees.

 

Factors such as these are why digital initiatives are forcing organizations to rethink and increasingly integrate their risk and security strategies. 

The objective for today’s risk professional is not just to defend against the bad. Just like Mark discussing with Sharon the parade of initiatives that clearly impact their company’s future, you must be ready to help usher in a new age of digital operations. Merely riding the buzzword wave - IoT, social media, big data analytics, augmented reality… - is not enough.

 

You must look for opportunities to enable innovation in your business while building trust with your customers and throughout your enterprise. Your business must be comfortable embracing risk and aggressively pursuing the market opportunities offered by new technology. To do that, the risk associated with using emerging or disruptive technology to transform traditional business processes needs to be identified and assessed in the context of fueling innovation. You must also keep focus on the negative side of risk. Your business today demands an open, yet controlled, blend of traditional and emerging business tactics. You must help manage the ongoing risk as these transformed business operations are fully absorbed into the organization, i.e. the new model becomes the normal model of doing business.

Risk is, by definition, uncertainty.  Everyone is concerned about uncertainty in today’s world.  However, if we go back to the simple equation (risk = likelihood * impact), risk should be something we can dissect, understand, and maybe even calculate.   While you are helping your organization embrace the advantages (positive risk) of technologies like IoT, data analytics, machine learning and other emerging digital enablers, the volatile, hyperconnected nature of digital business amplifies the negative side of risk.  It is anxiety about the unknown that leads us into that executive conversation, but it shouldn’t lead to worry.

Worry is about fear. Your executives shouldn’t be afraid in today’s world. They should have informed concerns. And you – as the security or risk person in the room – should be feeding them insights that raise their visibility into the likelihood of events and diminish their distress over the negative impacts. Risk is part of riding the waves of business opportunities.

Risk is not something you should WORRY about…  it is something you should ACT on.

***********

 

To learn more about digital risk management, click on our new Solutions Banners located in the right-hand column of each RSA product page: Third Party Risk, Cloud Transformation, Dynamic Workforce, and Cyber Attack Risk.

 


Domain Fronting Malware

Posted by Rui Ataide on Jun 19, 2019

Customers frequently ask me about malware that uses domain fronting and how to detect it. Simply put, domain fronting is when malware or an application pretends to be going to one domain but is actually going somewhere completely different (MITRE ATT&CK T1172).

 

The goal of domain fronting is to make analysts believe that the connection is being made to a safe site, while the true destination is in fact somewhere completely different.

 

Let’s look at a piece of malware that uses this method. This is a PowerShell Empire sample:

 

 

In the configuration information of this file, we see a URL that will be requested, which is also Base64 encoded. The URL decodes to https://ajax.microsoft.com:443 as seen below:
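You can reproduce the decode step yourself. For example, in Python (the string below is simply the Base64 encoding of that URL, shown for illustration rather than copied from the sample):

    import base64

    blob = "aHR0cHM6Ly9hamF4Lm1pY3Jvc29mdC5jb206NDQz"  # Base64 of the URL above
    print(base64.b64decode(blob).decode())  # https://ajax.microsoft.com:443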

 

 

So, this script will initiate a connection to ajax.microsoft.com:443, and appear to request /login/process.php. However, because the Host: header is pointing to content-tracker.*******.net, the request will actually go to https://content-tracker.*********.net/login/process.php instead.
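Reconstructed in clear text, the fronted request would look roughly like this (the fronted domain is redacted in the sample, so a placeholder is used here):

    GET /login/process.php HTTP/1.1
    Host: content-tracker.example.net

    (sent inside a TLS session negotiated with ajax.microsoft.com)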

 

You may be thinking that all you have to do in order to detect examples of domain fronting is look for discrepancies between the requested URL and the domain/IP in the Host: header. However, there are some complexities to deal with. Most of the time the initial connection is SSL encrypted, so you are limited to artifacts related to SSL traffic unless you have SSL inspection technology in place. Another consideration is whether a proxy is involved in the connection or not.

 

In order to describe what the analyst would see if SSL inspection technology is in place, let us use a Man-In-The-Middle proxy to inspect this traffic in its clear-text form. SSL Inspection technologies are extremely useful for this and other scenarios where malware communicates over SSL, and it is something that we highly recommend that you deploy in your organization.

 

Let us introduce two terms here to clarify the elements of this technique as we describe how to hunt for it:

            Fronting Domain (ajax.microsoft.com)

            Fronted Domain (content-tracker.*********.net)

 

Here we see the Fronting Domain request, which is also the only thing you would see if you were relying solely on proxy logs. The Fronting Domain (ajax.microsoft.com in this case) is also what the proxy would use for URL filtering checks. The proxy logs would never actually see the Fronted Domain. For all intents and purposes, this would be a legitimate request to a Microsoft site.

 

 

 

However, the response is anything but what you would expect from the site. Namely, instead of some HTTP content the site returns an encoded blob of data that decodes into more PowerShell code.

 

How do I know it was more PowerShell code? That was easy: I simply replaced the follow-up execution with output to a file, as seen below:

 

 

Then, opening the resulting stage2.ps1 file, you can see it contains additional PowerShell code that is highly obfuscated.

 

 

Let us go back one step and discuss another key aspect of Domain Fronting: the SSL certificates used during this communication. They are legitimate Microsoft-signed certificates, since the initial connection is indeed to ajax.microsoft.com. This certificate is tied to many Microsoft domains and Microsoft CDN domains.
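If you want to verify this yourself, a quick OpenSSL check should show the certificate subject and issuer, plus the long list of Microsoft and CDN domains in the Subject Alternative Name extension:

    openssl s_client -connect ajax.microsoft.com:443 -servername ajax.microsoft.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer

    openssl s_client -connect ajax.microsoft.com:443 -servername ajax.microsoft.com </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"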

 

 

We could try to de-obfuscate the stage2.ps1 PowerShell script, but there really is no need, since by looking at the malware's subsequent request on the proxy we can get an idea of what it does. Its initial check-in posts victim information back, again in an encrypted binary blob of data.

 

 

Additionally, this particular strain of malware also seems to make a legitimate call to the ajax.microsoft.com site, as shown below. While not at all relevant to domain fronting, it is important for analysts to be aware of why they might see both legitimate and malicious requests mixed together. The analyst will notice that the "Host:" header matches the requested domain in legitimate requests.

 

 

 

The response from the legitimate site is also completely different and starts a redirect chain that we will show below:

 

 

And finally, the legitimate page for the Microsoft Ajax Content Delivery Network.

 

 

Now that we have described the sequence of requests in detail, let us see how this all looks from the NetWitness Packets perspective. There are two cases: one where there is no proxy, and one where there is.

 

Traffic Analysis – SSL Only

 

Let’s start with the traffic without a proxy. I have isolated only the relevant events in a separate collection to facilitate the analysis. I will also point out how some of these indicators can be spotted in larger traffic patterns.

 

In the example below, the indicators are separated across two sessions: a DNS request and an SSL connection. You can see that the DNS request is for one domain name, while the SSL session displays what is referred to as the SNI, which does not match the DNS request.

 

 

For the legitimate traffic, the DNS request and the SSL SNI value both match. These are both extracted into the alias.host key.

 

 

So, how can you detect this type of behavior? It is not easy, especially in high-volume environments. However, a starting point is to look for alias.host values that only show one of the service types (DNS or SSL), but not both. Legitimate traffic will likely have both, as shown below:

 

 

You should not expect these values to be balanced or equal, as DNS is often cached, but you should expect to see both types of service. Some environments do not capture DNS due to volume, but to be successful with this approach it is critical to have both.

 

For the malicious traffic, each domain will only have one type of traffic (i.e. DNS or SSL). This detection criterion is not an exact science, as you could easily have only DNS for all sorts of other types of traffic that are not domain fronting. The Fronting Domain will have the DNS traffic, while the Fronted Domain will have the SSL sessions.

 

 

 

Since the traffic is split between sessions on the packet side, we would need to use an ESA rule to detect this type of activity.
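Before building the ESA rule, you can prototype the logic offline. Below is a minimal Python sketch of the heuristic, assuming you have exported (alias.host, service) pairs from your sessions; the data layout is illustrative, not an actual NetWitness API:

    from collections import defaultdict

    def ssl_only_hosts(sessions):
        # Return host names seen in SSL sessions (via the SNI) but never in DNS queries
        seen = defaultdict(set)
        for alias_host, service in sessions:
            seen[alias_host].add(service)
        return {host for host, services in seen.items() if services == {"SSL"}}

    # Illustrative data: the Fronting Domain has the DNS traffic, while the
    # Fronted Domain (redacted in the sample, placeholder used here) has only SSL
    sessions = [
        ("ajax.microsoft.com", "DNS"),
        ("content-tracker.example.net", "SSL"),
    ]
    print(ssl_only_hosts(sessions))  # {'content-tracker.example.net'}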

Traffic Analysis – Proxied Requests

 

For explicitly proxied traffic, things are slightly easier, as all the traffic is contained in a single session. We see the "raw" payload of one such session below. It can seem confusing at first, but NetWitness identifies this traffic as HTTP, which is correct, since this part of the traffic is indeed HTTP.

 

 

Since we have all the pieces in one session here, detection is easier. But how can we do it at high data volumes? In this case the HTTP session will have two different hostnames. While this is at times common for pure HTTP traffic due to socket re-use, it is uncommon for HTTPS/SSL traffic, as the standards advise against it for privacy and security purposes, among other reasons.

 

 

This suggests a possible solution for detecting this type of traffic: a simple App rule that identifies HTTP traffic with two unique alias.host values and the presence of a certificate.
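To make that condition concrete, here is the same check expressed as a Python sketch rather than actual App rule syntax; the session dictionary and meta key names (service, alias.host, ssl.ca) are illustrative:

    def possible_domain_fronting(session):
        # HTTP session carrying two or more host names plus SSL certificate meta
        hosts = set(session.get("alias.host", []))
        has_certificate = bool(session.get("ssl.ca"))
        return session.get("service") == 80 and len(hosts) >= 2 and has_certificate

    # Illustrative proxied session: the CONNECT request and the tunneled SSL
    # exchange are parsed as one HTTP session with two hostnames
    session = {
        "service": 80,
        "alias.host": ["ajax.microsoft.com", "content-tracker.example.net"],
        "ssl.ca": ["Microsoft IT TLS CA 5"],  # hypothetical certificate meta value
    }
    print(possible_domain_fronting(session))  # True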

 

In summary, domain fronting is a technique used by attackers and red teams with the intent of either circumventing network policies (URL filtering) or hiding in plain sight, as analysts are more likely to notice the legitimate domains than the malicious ones and assume the activity is safe. However, this type of activity still has a certain footprint, as we have described. Hopefully the information provided here will help you improve your defenses against this technique.

 

Thank you,

 

Rui

If you need to achieve HA through load balancing and failover for VLCs (Virtual Log Collectors) on AWS, you can use the built-in AWS load balancer. I have tested this scenario, so I am going to share the outcome here.

 

Before starting, I need to state that VLC failover/balancing is not officially supported RSA functionality. Furthermore, this can only work with "push" collections such as syslog, SNMP, etc. It does not work with "pull" collections such as Windows, Check Point, ODBC, etc. (at least not that I am aware of, and I have personally never tested it).

 

That being said, let's get started.

 

As you may be aware, in AWS EC2 you have separate geographic areas called Regions (I am using US East - N. Virginia here), and within Regions you have different isolated locations called Availability Zones.

 

 

We are going to leverage this concept and place the two VLCs into two different Availability Zones. If one VLC fails, the VLC in the other Availability Zone will take over.

 

The following diagram helps illustrate the scenario (for better clarity, I omitted the data flow from the VLCs to the Log Decoder(s)):

 

Assuming you have already deployed the two VLC instances, the next step is to create two different subnets and associate a different Availability Zone with each of them.

 

  • From the AWS Virtual Private Cloud (VPC) menu go to Subnets and start creating the two subnets:

 

 

  • Next we need to create a Target Group (from the EC2 menu) which will be used to route requests to our registered targets (the VLCs):

     

 

  • Finally, we need to create the load balancer itself. For this specific test I used a Network Load Balancer, but I think an Application Load Balancer would work too. I selected an internal balancer. I chose syslog on TCP port 514, so I created a listener for that. The AWS load balancer does not support UDP, so I was forced to use TCP; however, I would have used syslog over TCP anyway, as it is more robust and reliable and allows large syslog messages to be transferred (especially in a production environment). I also selected the appropriate VPC and the Availability Zones (and subnets) accordingly.

     

 

In the advanced health check settings I chose to use port 5671 (by default the balancer would have used the same port as the listener, 514). The reason for using 5671 is that the whole log collection mechanism relies on RabbitMQ, which uses this port. In fact, the only scenarios in which port 514 would not work are when the VLC instance is down or when we stop the syslog collection, whereas I think RabbitMQ is more prone to failures and may fail in more scenarios, such as queues filling up because the decoder is not consuming the logs, full partitions, network issues, etc. A rough AWS CLI equivalent of this setup is sketched below.
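For reference, the console steps above roughly correspond to the following AWS CLI calls (all names, IDs, and ARNs below are placeholders):

    aws elbv2 create-target-group --name vlc-targets --protocol TCP --port 514 \
        --vpc-id vpc-0123456789abcdef0 --health-check-protocol TCP --health-check-port 5671

    aws elbv2 register-targets --target-group-arn <target-group-arn> \
        --targets Id=i-0aaaaaaaaaaaaaaaa Id=i-0bbbbbbbbbbbbbbbb

    aws elbv2 create-load-balancer --name vlc-nlb --type network --scheme internal \
        --subnets subnet-0aaaaaaaaaaaaaaaa subnet-0bbbbbbbbbbbbbbbb

    aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> --protocol TCP \
        --port 514 --default-actions Type=forward,TargetGroupArn=<target-group-arn>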

 

 

  • Once the load balancer configuration is finished, you will see something similar to this:

 

 

           We need to take note of the DNS A Record, as this is where our event sources will send syslog traffic.

 

  • Now, to configure an event source to send syslog logs to the load balancer, you just need to point the event source to the load balancer DNS A Record. As an example, for a Red Hat Linux machine you would edit the /etc/rsyslog.conf file as follows:
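         For example, you would append a forwarding rule like the following (the load balancer DNS name shown is a placeholder for your own A Record):

             *.* @@vlc-nlb-0123456789abcdef.elb.us-east-1.amazonaws.com:514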

 

  

 

         We are using @@ because the transport is TCP; for UDP it's just a single @.

 

         Then we need to restart the rsyslog service as follows:

 

            --> service rsyslog restart (Red Hat 6)

            --> systemctl restart rsyslog (Red Hat 7)

 

  • To perform a more accurate and controlled test and demonstration, I am installing a tool on the same event source, and I will push some rhlinux logs to the load balancer to see what happens. The tool is an RSA proprietary one called NwLogPlayer (more details here: How To Replay Logs in RSA NetWitness). It can be installed via yum if you have enabled the RSA NetWitness repo:
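    The install and replay steps look roughly like this; the exact option names are from memory, so confirm them with NwLogPlayer --help (the load balancer DNS name is again a placeholder):

        yum install nwlogplayer
        NwLogPlayer -f rhlinux_sample.log -s vlc-nlb-0123456789abcdef.elb.us-east-1.amazonaws.com -p 514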

 

   

 

      I also prepared a rhlinux sample log file with 14000 events, and I am going to inject these into the load balancer and see what happens. Initially, my Log Decoder LogStats page is empty:

 

  

 

     Then I start with the first push of the 14000 events:

 

 

     Now I can see the first 14000 events went to VLC2 (172.24.185.126)

 

       At my second push, I can see the whole chunk going to VLC1 (172.24.185.105).

 

      At the third push the logs went again to VLC2

 

     At the fourth push the logs went to VLC1

 

     At the fifth push, I sent 28000 events (almost simultaneously) and they were divided between both VLCs.

 

     This demonstrates that the load has been balanced equally between the two VLCs.

 

     Now I stop VLC1 (I actually stopped the rabbitmq service on VLC1) and push another 14000 logs:

 

     and again

 

     In both instances above, VLC2 received the two chunks of 14000 logs since VLC1 was down. We can safely say that failover is working fine!

Note: This configuration is not officially supported by RSA customer support. 
