Greetings fellow innovators!
My previous blog post described how combining the concepts of decentralized identity with verifiable claims creates a powerful new model that allows any person, organization, or thing to interact with any other entity with trust and privacy. This post will delve deeper into the inner workings of Project Sif.
A decentralized identity is a digital identity an individual creates, owns, and controls without requiring the involvement of any centralized third party. Decentralized identities are accessible to everyone and designed with privacy in mind. There are no passwords and no centralized repositories of identity data. The idea is that instead of creating a new digital identity for every digital service you want to consume, you can bring your existing IDs with you, similar to how things work in the physical world.
The RSA Labs Identity Wallet mobile app allows you to manage your decentralized identities. This includes creating a decentralized identity (equivalent to a pseudonym or persona) which is backed by a public/private keypair. The public key is stored in a publicly accessible location, in this case a blockchain, where it can be accessed by anyone; the private key is stored encrypted on your mobile device.
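The flow above can be sketched in a few lines. This is an illustrative toy only: a real wallet would use asymmetric cryptography (e.g. Ed25519) rather than a hash-derived stand-in, the DID format shown is hypothetical, and a plain dictionary stands in for the blockchain-backed registry.

```python
import hashlib
import secrets

# Illustrative sketch only: random bytes and a hash stand in for a real
# asymmetric keypair, and a dict stands in for the blockchain registry.

def create_identity(registry: dict) -> tuple[str, bytes]:
    """Create a pseudonymous identity backed by a keypair."""
    private_key = secrets.token_bytes(32)               # stays on the device
    public_key = hashlib.sha256(private_key).digest()   # stand-in derivation
    did = "did:example:" + public_key.hex()[:16]        # hypothetical DID format
    registry[did] = public_key                          # "publish" to the registry
    return did, private_key

registry = {}
did, priv = create_identity(registry)
assert did in registry                                  # anyone can resolve the DID
```

The key point is the asymmetry of access: anyone can look up the public key via the registry, while the private key never leaves the holder's device.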
Verifiable claims are cryptographically signed attestations which can be instantly verified by anyone. They can be issued by governments, banks, or even a friend or family member. Upon reading more about verifiable claims, you’ll undoubtedly stumble across this diagram from W3C:
Let’s briefly go through each component to better understand how the model works:
In this model, these components can be provided by disparate vendors. The only trust relationship that exists is between the inspector-verifier and the issuer. This is analogous to how trust works with physical credentials in the real world. A driver’s license, for example, is issued by a DMV (the issuer) and presented, by the holder, to a liquor store (the inspector-verifier) to prove age. The liquor store must trust the DMV to only issue valid licenses, which in turn allows it to trust the age claim of the holder of the license.
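The issue/present/verify flow can be sketched as follows. Note this is a simplification: real verifiable claims use asymmetric signatures so the verifier never holds the issuer's signing key; HMAC with a shared key stands in here only to keep the example dependency-free, and the key and DID values are hypothetical.

```python
import hashlib
import hmac
import json

# Hedged sketch of issuer -> holder -> inspector-verifier. Real verifiable
# claims use asymmetric signatures; an HMAC stands in for the signature here.

ISSUER_KEY = b"dmv-signing-key"  # hypothetical issuer key

def issue_claim(subject: str, claim: dict) -> dict:
    payload = json.dumps({"subject": subject, "claim": claim}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_claim(signed: dict) -> bool:
    expected = hmac.new(ISSUER_KEY, signed["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

signed = issue_claim("did:example:alice", {"over_21": True})
assert verify_claim(signed)           # the verifier trusts the DMV's signature
signed["payload"] = signed["payload"].replace("true", "false")
assert not verify_claim(signed)       # tampering invalidates the claim
```

The tamper check at the end is the essential property: the verifier needs no interaction with the issuer at presentation time, only trust in the issuer's key.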
The RSA Labs Identity Wallet mobile app allows you to store and manage your verifiable claims given to you by issuers. Claims can be imported onto your mobile device and later presented to any inspector-verifier that requests them. Here, the mobile app is fulfilling the role of the holder.
Putting these pieces together, the Project Sif architecture takes shape:
Note that in this solution the DID is not the public key. This design supports the use case of a user revoking a public key and associating a new one with an existing DID.
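A minimal sketch makes the benefit concrete: because the registry maps a stable DID to its current key, a compromised key can be revoked and replaced without the DID itself changing. Class and method names below are illustrative, not Project Sif's actual interface.

```python
# Hedged sketch: why the DID is a stable identifier separate from the key.
# The registry maps DID -> active public key, so rotation preserves the DID.

class IdentifierRegistry:
    def __init__(self):
        self._keys = {}       # did -> active public key
        self._revoked = {}    # did -> set of revoked public keys

    def register(self, did: str, public_key: bytes) -> None:
        self._keys[did] = public_key
        self._revoked.setdefault(did, set())

    def rotate(self, did: str, new_public_key: bytes) -> None:
        self._revoked[did].add(self._keys[did])   # old key no longer valid
        self._keys[did] = new_public_key

    def resolve(self, did: str) -> bytes:
        return self._keys[did]

reg = IdentifierRegistry()
reg.register("did:example:123", b"key-v1")
reg.rotate("did:example:123", b"key-v2")          # revoke and replace
assert reg.resolve("did:example:123") == b"key-v2"
```

Had the DID simply been the public key, rotation would force every relying party to learn a new identifier; the indirection avoids that.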
It’s always helpful to understand a system by seeing how data flows through it. Here’s a sample sequence diagram of a user registering for a new website that requires a verified age claim:
It’s important to note that the Identifier Registry as defined by W3C makes no mention of blockchain or any other underlying data storage technology. When considering the combination of decentralized identity with verifiable claims, the Identifier Registry should ideally have the following attributes:
RSA or any other organization could stand up its own Identifier Registry server, but it would represent a centralized component in a decentralized solution. The solution is more resilient when every component is decentralized. When considering these ideal attributes, a public blockchain checks most of the boxes. The biggest limitation imposed by a blockchain is the throughput, an area of active research by many groups. Other distributed ledger solutions could also fill the role of the Identifier Registry if properly configured – a blockchain is not the only solution.
Project Sif demonstrates how the concepts of decentralized identity and verifiable claims can be combined to create a new model for identity management, one that brings advantages in both security and usability. As digital services move to a decentralized model, decentralized identity solutions will be required. If you have a use case where decentralized identity and verifiable claims could be helpful, or want to learn more about Project Sif, please reach out to firstname.lastname@example.org. We’d love to hear from you!
Today Dell Technologies joined with the San Diego Supercomputer Center, industry companies, and academic partners to launch a new blockchain research lab: BlockLAB. The BlockLAB will focus on business use cases for distributed ledgers and evaluation of technology stacks. One area where blockchains can provide real value is in enabling decentralized identities, an area we have been researching at RSA Labs as part of Project Sif. Project Sif explores how we can move from the familiar world of centralized identity to a more distributed and decentralized model.
Decentralized identity is a fundamentally different view on identity management as compared to the centralized model that predominantly exists today. Centralized identity has several shortcomings. Users today create new user credentials for nearly every service they want to consume. This leads to users having to maintain too many usernames and passwords (not to mention the security and usability problems surrounding passwords). Making matters worse, users are not in control of their data. Should Google or Apple cease to exist, so would every online identity tied to them. Companies holding identity data also represent very rich targets for hackers. In short, the problem is that the web as we know it today wasn’t built with an identity layer.
A decentralized identity is a digital identity an individual creates, owns, and controls without requiring the involvement of any centralized third party. Decentralized identities are accessible to everyone and designed with privacy in mind. There are no passwords and no centralized repositories of identity data. The benefits of this approach differ depending on the end-user.
For consumers, decentralized identities allow:
For enterprises, decentralized identities allow:
Through Project Sif, RSA Labs is prototyping an Identity Wallet mobile app to allow you to manage your decentralized identities. This includes creating a new decentralized identity backed by a public/private keypair. The public key is stored in a public blockchain where it can be accessed and verified by anyone. The Identity Wallet app also helps you store and manage verifiable claims. These are cryptographically signed attestations which can be instantly verified by anyone (similar to government-issued IDs or legal documents). They can be issued by governments, banks, or even a friend or family member. Combining the concepts of decentralized identity with verifiable claims creates a powerful new model that allows any person, organization, or thing to interact with any other entity with trust and privacy.
Please stay tuned to the RSA Labs blog for the latest on Project Sif. Let us know if you have any questions or feedback through the comments below. We’d love to know what you think! Check out the following links for additional information on the BlockLAB announcement and Dell Technologies support for the lab.
Today we are announcing support for Azure IoT Edge, which is Microsoft's solution for edge computing suitable for IoT gateways. Project Iris now brings visibility and threat detection to the Azure IoT Edge platform and connected edge devices managed by it.
Azure IoT Edge extends Microsoft's cloud-based Azure IoT Hub architecture to the edge.
Azure IoT Hub provides a bidirectional communication channel between devices and the cloud, enabling users to perform tasks such as configuration, data collection, and command execution from the cloud. With just Azure IoT Hub in the picture (and prior to Azure IoT Edge), IoT devices would be required to implement the Azure IoT SDK to directly communicate with Azure IoT Hub in the cloud. The supported protocols between an IoT device and Azure IoT Hub are MQTT, AMQP, and HTTPS.
Azure IoT Edge opens up the picture, allowing IoT devices not using the Azure IoT SDK to be brought into the fold. These devices make up the vast number of existing IoT devices out there, and they use an alphabet soup of IoT protocols such as modbus, BACnet, and OPC-UA. Azure IoT Edge proxies communication between these devices and the cloud. This model is especially helpful from a security perspective.
Going into more depth, this post describes three different patterns for how Azure IoT Edge can be used at the gateway:
In addition to protocol translation, Azure IoT Edge allows for general purpose computing at the edge. For instance, running analytics at the edge can save on overall IoT solution costs, compared to shipping all the data to the cloud for processing.
To use Project Iris to monitor Azure IoT Edge, deploy the Project Iris Docker container side by side with Azure IoT Edge running on the same IoT gateway host.
Azure IoT Edge uses modules to achieve a general purpose edge computing framework. Modules are simply Docker containers. There are two special modules provided by Microsoft, edgeAgent and edgeHub. The edgeAgent module uses the Docker service to manage other modules, and the edgeHub module handles communication between other modules and the cloud. Other modules, such as the Microsoft-provided modbus module, can perform protocol translation, edge analytics, or other activities.
The Project Iris container passively monitors all Azure IoT Edge modules and their communication with other edge devices, and passes up data to the Project Iris cloud service. Based on the data gathered, the Project Iris cloud service dynamically builds out profiles of expected behavior for Azure IoT Edge modules and edge devices tailored to your deployment. Alerts are triggered when significant deviations or anomalies from expected behavior are detected.
The Project Iris container should be deployed with the following environment variables as container arguments:
Device identities can be managed in Azure or elsewhere. Project Iris is intelligent about surfacing these identities, depending on the type of architectural pattern under which Azure IoT Edge is deployed at the gateway (see above).
In the "Transparent gateway" pattern, device identities are fully managed in Azure, and Project Iris gets all device related metadata from Azure. This metadata includes arbitrary tags and configuration properties that can be set in the cloud.
In the "Identity translation" pattern, device identities are managed in Azure and in another piece of software such as EdgeX. Project Iris gathers identity data from both Azure and EdgeX and merges the data together, creating a unified view of identities across both sources.
In the "Protocol translation" pattern, identities are managed outside of Azure. However, Project Iris can infer device identities by inspecting Azure IoT Edge module configuration. For instance, the modbus module contains configuration describing how that module can connect to downstream modbus slaves. Project Iris manufactures device identities based on this configuration.
In the future, as Project Iris continues to support more IoT gateway platforms, it will continue to merge device data together from disparate stores in an intelligent way to surface a meaningful set of identities.
Project Iris raises alerts when Azure IoT Edge modules running on the gateway or connected edge devices exhibit behavior that deviates significantly from an established norm. The usage of containers by the Azure IoT Edge runtime permits the development of precise behavioral models for describing Azure IoT Edge modules. The types of alerts covered by Project Iris include initial infection, lateral movement, command and control, data exfiltration, and denial of service. These alerts are described in more detail in this blog post.
Below are some hypothetical alerts focused on the edgeHub module. This module has perhaps the largest attack surface, as it exposes several ports for access outside the gateway host.
This alert shows the edgeHub module making an unexpected outbound network connection, for instance in the case of an initial infection to download an exploit payload or reaching out to a command and control host.
Suppose malicious code injected into the edgeHub module attempts to move laterally by probing the network. Project Iris can pick this up: in the example below, the edgeHub module is shown reaching out to a modbus device. This is unusual, as the edgeHub module by design doesn't directly communicate with any IoT devices.
Now suppose the edgeHub module unexpectedly crashed or was unexpectedly killed:
If configured to integrate with Azure Event Hub, Project Iris can pull in diagnostic events raised by Azure IoT Hub. Project Iris filters these events to surface those that are security-relevant. For instance, below is an example of unauthorized access by a device reporting to be a thermostat.
All alerts include applicable device details gathered from Azure IoT Hub. For instance, the sample below shows configuration details and tags for the aforementioned thermostat:
Whether you're using Azure IoT Edge or other technologies at the edge, we want to hear from you! If you want to learn more about Project Iris, visit the Project Iris web site and click Notify Me. Fill out the contact form and we'll be in touch!
By design containers are meant to be disposable. They are meant to be shipped around to different environments and brought up and down at will. For instance, a container orchestration technology like Kubernetes can automatically bring up new containers in response to a spike in demand, and then tear down the same containers when the demand subsides. Or, as part of the continuous delivery life cycle, the same container image running on a developer's laptop can be spun up in a test environment for verification and then deployed in production by an operations team.
AI-based security solutions like Project Syn use training data to learn what's normal and flag abnormal activity. But the impermanence of containers poses an interesting problem: how can a machine learning system gain the right amount of insight about individual containers in order to raise meaningful alerts? An overly sensitive system that raises alerts before it has enough data to extrapolate from will generate noise and false positives. But an overly conservative system that waits too long for enough data to become available will fail to raise important alerts and result in false negatives.
Project Syn addresses this problem with the concept of container profiles. In a nutshell, container profiles allow for behavior learned about one container to be shared with similar containers run later in time. This means that for many containers, the training stage can be bypassed altogether, and alerts can be generated immediately after deployment.
Let's take the example of continuous delivery, shown in the figure below. Suppose a new container image is in the process of being deployed to production. First a container from this new image is deployed in a staging environment. After a set training period, Project Syn creates a profile for this container, which captures the behavior learned about this container.
At a later point in time, after the container image has passed the requisite checks in staging, a new container (or set of containers) is deployed in production from the same image. Since Project Syn already has a profile from a previous container coming from the same image, it applies that profile to the new container. The new container bypasses training, and if it happens to be compromised shortly after deployment, Project Syn can immediately raise alerts to that effect.
How does Project Syn determine when a profile can be applied to a container? It's not based simply on the container's image. Containers run from the same image can exhibit very different behavior based on how they are run. Project Syn uses containers' runtime metadata, such as command line arguments and ports, as part of profile matching.
For example, let's compare three nginx web server containers that are run from the same nginx image. Container A runs only with a private port and is only accessible on the same local virtual network as the container. Container B exposes its private port on port 80 and is accessible from outside the container host (assuming the host firewall is open). Container C exposes its private port on port 8080 and is also accessible outside the container host.
In this case, there are two unique profiles: one for container A, and one shared by containers B and C. The difference in public port between containers B and C (80 vs. 8080) doesn't represent a meaningful difference in behavior.
If you want to, you can explicitly control how profile matching works using Docker object labels. Labels are custom metadata in the form of key-value pairs that can be attached to containers.
Here's how it works: first, you tell Project Syn which label keys you want Project Syn to use for profile matching. When you run a container, you run it with those same labels, and set the label values appropriately. Containers with the same label key-value pairs are matched to the same profile.
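As a sketch of that label-driven matching, suppose you told Project Syn to match on two hypothetical label keys, `app` and `tier`. Containers agreeing on those key-value pairs would then share a profile, regardless of other labels:

```python
# Hedged sketch of label-based profile matching. The label keys below are
# hypothetical user choices, not anything prescribed by Project Syn.

MATCH_KEYS = ("app", "tier")  # label keys the user selected for matching

def label_profile_key(labels: dict) -> tuple:
    return tuple((k, labels.get(k)) for k in MATCH_KEYS)

web1 = {"app": "storefront", "tier": "web", "build": "101"}
web2 = {"app": "storefront", "tier": "web", "build": "102"}
db   = {"app": "storefront", "tier": "db"}

assert label_profile_key(web1) == label_profile_key(web2)  # same profile
assert label_profile_key(web1) != label_profile_key(db)    # different profile
```

Note the `build` label is ignored because it isn't one of the chosen match keys; that's what gives you explicit control over the granularity of matching.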
Container profiles today only work within the context of a single customer. It's not hard to see a future in which customers can opt-in to share profiles with and use profiles from RSA and other customers. This would enable the community to collectively improve container security for everyone.
Stay tuned for more updates!
It’s an essential question for security teams following a cyber attack: Where did the threat originate? In the days and weeks following the WannaCry ransomware attack—which swept through 150 countries, infecting hundreds of thousands of computers—reports emerged pointing to various potential actors. But none of the insights came soon enough to help defend against the attack. Unfortunately, the type of analysis used to derive them just doesn’t work that fast. The good news is there are other approaches that do.
Dynamic analysis of WannaCry and its possible origins required hours of manual code inspection. As a result, the first clues took several days to emerge, and further insights took weeks. The problem is the process entails manually comparing thousands of code segments from dozens of known malicious actors. As the volume of new malware threats grows (the AV-TEST Institute reports registering over 390,000 new malicious programs daily), that problem is only going to get worse. Dynamic analysis simply can’t scale to compare code quickly enough to identify the origins of a new piece of malware in a timely way.
Dynamic analysis can help determine the runtime effects of a piece of malware, but with tools for sandbox detection and evasion becoming increasingly common, its value is limited. Besides, knowing what a piece of malware does won’t help with file similarity analysis, as there may be dozens of ways to achieve the same result. Comparing file hashes has never really been useful, either, since attackers routinely leverage code polymorphism to ensure each piece of malware has a unique hash. What about fuzzy hashing as a tool for file similarity analysis? It’s increasingly being used to measure how similar two binaries are. The challenge is that fuzzy hashing tools like ssdeep are applied to the entire file and can’t catch similarities more complex than one file being related to another.
But what if fuzzy hashing could be applied to pick up code similarity at a more granular level? That thinking has led RSA to a new static analysis technique for detecting complex similarities and, moreover, identifying similarities from multiple pieces of malware. Through this approach, we can create a malware genome, if you will, that provides an understanding of how malware evolved, even when it’s an amalgamation of multiple malicious tools. Beyond mapping out code capabilities, this genealogy may shine some light on the malicious infrastructure and exchange of tools happening on the attacker side.
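To give a flavor of granular similarity, here is a deliberately naive sketch: split each file into fixed-size chunks, fingerprint the chunks, and compare chunk sets. This is an illustration of the general idea only, not RSA's actual technique, which handles shifted and rearranged code far more robustly.

```python
import hashlib

# Naive chunk-level similarity: shared chunks survive small localized edits
# that would change a whole-file hash entirely. Illustrative only.

def chunk_fingerprints(data: bytes, chunk_size: int = 16) -> set:
    return {hashlib.sha256(data[i:i + chunk_size]).digest()
            for i in range(0, len(data), chunk_size)}

def similarity(a: bytes, b: bytes) -> float:
    fa, fb = chunk_fingerprints(a), chunk_fingerprints(b)
    return len(fa & fb) / len(fa | fb) if fa | fb else 1.0

base    = bytes(range(256)) * 4
variant = base[:512] + b"\x00" * 16 + base[528:]   # one localized change

assert similarity(base, base) == 1.0
assert 0.5 < similarity(base, variant) < 1.0       # still clearly related
```

A whole-file hash would report the two samples as unrelated; the chunk view shows most of the code is shared, which is the kind of signal that lets related malware families be linked into a genealogy.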
As a service to others engaged in threat investigation, we’re freely sharing the tool we’ve been using to explore this approach. Our hope is WhatsThisFile will help defenders evaluate unknown files faster, discover similarities to known malware and quickly gain the insights needed to better defend their enterprises.
IoT gateways are critical pieces of enterprise infrastructure that facilitate secure communication between IoT edge devices and the cloud. As IoT gateways serve as single points of control for all edge devices, they can make an attractive target for attackers, and protecting them is paramount.
RSA Project Iris provides security monitoring and visibility at the IoT edge. This post walks through several examples of how Project Iris can monitor IoT gateways, using the open source EdgeX Foundry platform as a motivating example.
The EdgeX Foundry platform for IoT gateways consists of many microservices that are deployed as Docker containers. Almost all of these microservices expose web APIs, some for internal consumption within the gateway and others for external use. MongoDB is used for storing data, such as IoT device metadata, logs, and sensor readings from connected IoT edge devices.
Setting up Project Iris on an EdgeX Foundry gateway involves simply deploying the Project Iris Docker container on the gateway. The Project Iris container passively collects data about local EdgeX microservices and securely sends the data to the Project Iris cloud service. The Project Iris cloud service analyzes the data and uses anomaly detection techniques and threat intelligence to identify suspicious activities and raise security alerts.
So what can Project Iris do? Below are examples of interesting security events that Project Iris can detect.
A compromised host or microservice container will often execute a malicious payload and initiate suspicious network connections to risky sites, from which further payloads may be downloaded or "command and control" instructions received.
Project Iris can show when these suspicious payloads are executed or suspicious network connections are made. Below are example alerts for a compromised edgex-device-bacnet service, which is responsible for managing communications to IoT devices that support the BACnet protocol. The first alert shows an anomalous Python process that runs code to connect to an external site, download a payload, and execute it. The second alert is raised for the network connection being made to a known high risk IP address based in Germany.
Malicious payloads may probe the network for other endpoints to compromise. This is especially of concern for IoT gateways, which sit on many local edge networks and have privileged access to edge devices.
Project Iris can detect when a microservice container initiates these suspicious probes. The example alerts below show the compromised edgex-support-logging container probing another IoT device, a KMC thermostat, and also trying to connect to another microservice, edgex-device-snmp, on the same host. An alert is also raised for the execution of the ping command used for probing. Project Iris understands that these activities are not typical for the edgex-support-logging microservice and flags them.
Data exfiltration is often the end goal of a compromise. IoT gateways often contain a wealth of sensitive information about edge devices including raw device data, device metadata, and credentials and keys for secure access to edge devices. On the EdgeX Foundry platform, this information is housed within MongoDB.
In the current pre-release version of EdgeX Foundry, MongoDB is set up with remote access enabled and well-known default usernames and passwords. As an example of data exfiltration, we can dump the contents of the MongoDB database remotely using the mongodump tool:
This type of activity would cause Project Iris to raise several types of alerts, as shown below. The first alert is raised for a remote network connection to MongoDB. This connection was flagged as unusual because the database is normally meant only for local use on the gateway itself. The second alert is triggered by an unusually large data transfer out of MongoDB.
IoT gateways are especially susceptible to denial of service attacks because of the large number of edge devices they manage. Compromised edge devices could launch denial of service attacks at the gateway or through the gateway to other hosts.
As an example, we used a compromised network signal tower device to initiate a large volume of network connections to the gateway. Project Iris can detect this type of activity, as shown in the first alert below:
A denial of service attack can subsequently lead to one or more microservice containers crashing in an unexpected way. Project Iris can also detect this, as shown in the second alert above.
The goal of Project Iris is to bring security monitoring and threat detection capabilities to the IoT edge. In this post we walked through how Project Iris can be used to secure IoT gateways, which are critical enterprise assets responsible for managing edge devices. In a subsequent post, we'll talk about what Project Iris can do to bring similar visibility down to the edge devices themselves.
If you're interested in trying out Project Iris, register here and the RSA Labs team will notify you when it's available.
Web applications and web services are probably the most commonly produced type of software, and they are increasingly being developed and deployed as containers. Among the top downloaded container images on the public Docker Hub are many related to web application development, such as nginx, MySQL, PostgreSQL, the Apache HTTP server, Ruby, PHP, Tomcat, and Django.
This post walks through an example scenario of detecting a web application attack using Project Syn. The scenario is admittedly simple and contrived, but we believe it's illustrative of how Syn can help in the real world.
In our scenario we use the Damn Vulnerable Web Application (DVWA) as the web app to be exploited. The application is intentionally riddled with vulnerabilities and is often used in security pen-test training. We deploy the entire web app, based on the LAMP stack, in a single Docker container. (Typically a web application would be deployed as many containers but a single container is sufficient for our purposes.)
We also deploy the Project Syn container side-by-side with the DVWA container on the same Docker host. The Project Syn container collects security-related data about other containers on the same host (in this case the DVWA container) and forwards them to the Syn cloud service for analysis and alerting.
Here's the output of docker ps:
Among the many vulnerabilities in the DVWA is one that permits the upload and execution of malicious code disguised as image files.
We use the OWASP Zed Attack Proxy to exploit the vulnerability to install a malicious PHP file, bad.php, and execute it. The PHP file contains a small bit of code that when executed launches a Python process that connects to an external IP hosting a Remote Admin Tool (RAT). The Python process downloads a full payload from the external IP and executes it, giving the operator of the RAT full control over the container.
The Syn service raises alerts when it detects the launching of the malicious Python process:
In addition, the Syn service raises alerts when it detects network traffic to the malicious external IP on ports 8080 and 443:
Once the payload is installed, the operator of the RAT has full control over the container and can do any number of things. In our case, we are using the open source Pupy RAT tool. We start an interactive shell on the container, dump the MySQL database to a file, and download it.
The Syn service detects the anomalous execution of the mysqldump process:
The Syn service also detects the data exfiltration through the producer-consumer ratio (PCR) metric.
The PCR metric tracks a normalized ratio of network bytes in and out of a component. Producers (PCR value between 0 and 1) have more data flowing out than in, while consumers (PCR value between -1 and 0) have more data flowing in than out. Components tend to have pretty stable PCR values over time.
In our scenario, the Syn service detected a significant change in the DVWA container's PCR. It changed from being a moderate producer (.613) to being a strong producer (.979) at the moment the database dump was downloaded.
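The PCR computation itself is simple; the sketch below uses hypothetical byte counts chosen to reproduce the .613 figure from our scenario.

```python
# Producer-consumer ratio (PCR): a normalized ratio of bytes out vs. bytes
# in, ranging from -1 (pure consumer) to +1 (pure producer).

def pcr(bytes_out: int, bytes_in: int) -> float:
    total = bytes_out + bytes_in
    return (bytes_out - bytes_in) / total if total else 0.0

# Hypothetical traffic: a web app serving pages is a moderate producer...
assert round(pcr(bytes_out=100_000, bytes_in=24_000), 3) == 0.613

# ...while a bulk database dump leaving the container pushes the ratio
# toward a strong producer, the anomaly Syn flagged.
assert pcr(bytes_out=1_000_000, bytes_in=10_000) > 0.9
```

Because the ratio is normalized, it stays comparable across containers with very different absolute traffic volumes, which is what makes a sudden shift such a clean exfiltration signal.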
The beauty of containers is that they are designed to be limited in function and behavior. As such, from a security perspective, we believe we can precisely model what the expected/normal behavior for any container should be, and raise targeted alerts when anomalies arise. We walked through a simplified scenario above of using Project Syn to detect the exploitation of a containerized web application.
Dynamic analysis. Sandboxing. Is that all we got? Am I right?!?
Fact: sandboxing is a necessity for understanding malware behavior. It’s the de facto standard for our industry. However, for the average enterprise security team it feels overwhelming to consistently operationalize. And for security vendors trying to keep up with the millions of samples that emerge daily, the infrastructure and expense needed to support and scale long-term may have no ceiling. Thousands of virtual hosts running for several minutes each, not to mention deception techniques, dynamic IoCs, and so on: the long-term math of keeping ahead of the malware problem just doesn’t seem to add up.
The concept for What's this file? was born from that perspective. Can we accurately detect and classify known and unknown malware without ever executing it? It seemed like a worthy challenge for RSA Labs.
RSA Labs developed novel techniques to identify and classify malware, and we packaged them into a cloud service that operates like your typical multi-scanner but is FAR from typical in approach. In addition, we bundled in a lightweight static-analysis UX to round out what we believe is a useful tool for security analysts.
What makes What's this File? different from other multi-scanner type services is:
We would greatly appreciate your feedback on its effectiveness; we think it is pretty cool! If you can fool the service, let us know. And WTF is free to use!
VP, RSA Labs
Deployments of microservices and applications alike are changing rapidly, moving toward container-based environments. As this paradigm shift happens, much like the shift that came with the advent of VMs, the IT security paradigm must also shift. RSA Labs created Project Syn as a test bed for enabling visibility and threat detection in Docker container environments. We believe that container-based technologies will be a widely adopted way for IT, DevOps, and developers alike to create, manage, and distribute new technologies. And with every new technological advancement comes inherent security risk.
Project Syn can help! If you’re a NetWitness for Logs customer, great, we can feed alert data directly into NetWitness. If not, that’s cool too! Our online dashboard will allow you to monitor the health of your Docker hosts, monitor alerts, and drill down into pertinent metadata to help gain visibility into the threats your environments are facing. Advanced behavioral analytics techniques are being developed by our data science group to ensure the alerts are fine-tuned to the latest threats. We also leverage RSA Live Connect for current known malicious website blacklist data.
Project Syn works hard to protect your Docker Environments, but as always, there’s room for improvement! Feedback is encouraged! We’re always looking for ways to improve our value to our customers! Best of all, Project Syn is free of charge! All we ask is you install our lightweight container in your Docker environment and we’ll do the rest!
Interested? Please visit https://syn.rsa.com for more information and to request access!
RSA Labs team!
Unless you were sleeping during your crypto elective, or, like myself, find it difficult to remember everyday life events, RSA Laboratories should spark a memory going back to the early '90s, when it was THE resource for cryptography research and education. Over the years RSA Labs churned out an impressive portfolio of intellectual property to fuel RSA strategy, while keeping its roots in academia and the research underworld. The Labs organization has since evolved, and even went underground for a period of time, re-emerging now with a renewed mission and purpose for RSA.
RSA Labs' sole mission in life is to incubate and accelerate the development of differentiated, high-value capabilities for RSA products. Simply put: take risks, disrupt.
For me, research and innovation can be an amazing thing, particularly if there is a purposeful outlet for people to use the results. I have too often been part of projects that were loaded into a wooden crate, wheeled into some cavernous warehouse never to be seen again. Think Raiders of the Lost Ark with less face-melting and more professional disappointment.
Our goal in RSA Labs is to "Release" every project we develop to our customers and the community. Free to use and critique, to better RSA and the security community we serve. It will be exciting. We will fail at times, but creating that opportunity to disrupt and innovate is well worth the price of a bruised ego every now and then.
Here are a few things to remember about RSA Labs:
Let's do this.