
RSA NetWitness Platform


One of the changes introduced in 11.x (11.0, specifically) was the removal of the macros.ftl reference in notification templates.  These templates enable customized notifications (primarily syslog and email) using FreeMarker syntax. The 10.x templates relied on macros (which are basically just functions, in FreeMarker terminology) to build out and populate both the OOTB and (most likely) custom notifications.


If you upgraded from 10.x to 11.x and you had any custom notifications, there's a very good chance you noticed that these notifications failed, and if you dug into logs you'd have probably found an error like this:

The good news is there's a very easy fix for this, and it does not require re-writing any of your 10.x notifications.  The contents of the macros.ftl file that was previously used in 10.x simply need to be copied and pasted into your existing notification templates, replacing the <#include "macros.ftl"/> line, and they'll continue to work just as they did in your 10.x environment (props to Eduardo Carbonell for the actual testing and verification of this solution).
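As an illustration, the fix looks like this. The macro shown here is a hypothetical stand-in, not the real macros.ftl content; paste the actual contents of your macros.ftl, which defines the macros your templates call:

```freemarker
<#-- BEFORE (10.x): the include line that 11.x no longer resolves -->
<#-- <#include "macros.ftl"/> -->

<#-- AFTER (11.x): the macro definitions pasted inline in place of the include.
     'format_time' is an illustrative example only. -->
<#macro format_time epochMillis>${epochMillis?number_to_datetime?string("yyyy-MM-dd HH:mm:ss")}</#macro>

<@format_time alert.timestamp/>
```

The rest of the template body stays exactly as it was in 10.x; only the include line changes.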






I have attached a copy of the macros.ftl file to this blog, or if you prefer you can find the same on any 11.x ESA host in the "/var/netwitness/esa/freemarker" directory.

G Suite (formerly known as Google Business Suite or Google Apps for Business) is now supported for log collection using the RSA NetWitness Platform.  Collection is achieved via the G Suite Reports API (v1) and is enabled in RSA NetWitness via the plugin framework.



The G Suite API schema provides several types of events which can be monitored.  Below is the list of event types currently supported by this plugin:


  • access_transparency – The G Suite Access Transparency activity reports return information about different types of Access Transparency activity events.
  • admin – The Admin console application's activity reports return account information about different types of administrator activity events.
  • calendar – The G Suite Calendar application's activity reports return information about various Calendar activity events.
  • drive – The Google Drive application's activity reports return information about various Google Drive activity events. The Drive activity report is only available for G Suite Business customers.
  • groups – The Google Groups application's activity reports return information about various Groups activity events.
  • groups_enterprise – The Enterprise Groups activity reports return information about various Enterprise group activity events.
  • login – The G Suite Login application's activity reports return account information about different types of Login activity events.
  • mobile – The G Suite Mobile Audit activity reports return information about different types of Mobile Audit activity events.
  • rules – The G Suite Rules activity reports return information about different types of Rules activity events.
  • token – The G Suite Token application's activity reports return account information about different types of Token activity events.
  • user_accounts – The G Suite User Accounts application's activity reports return account information about different types of User Accounts activity events.


Suggested Use Cases


G Suite Admin Report:


  1. Top 5 Admin Actions: Depicts the top 5 actions by Admin
  2. Admin activity: Activities performed by admins
  3. App Token Actions: Displays details on app token actions in a pie chart
  4. Users Created and Deleted: Displays users created and deleted as a table chart including details on the user’s email, admin action, and admin email.
  5. Groups - Users Added or Removed: Displays information on Groups, with users added or removed as a table chart including details on the user email, admin action, group email, and admin email.


G Suite Activity Report:


  1. Activity by IP Address: Shows a table of actions with respect to IP addresses
  2. Login State Count: A pie chart that depicts the login states by count
  3. Logins from Multiple IPs: Shows logins from multiple IP addresses by user on a pie chart
  4. Most Active IPs: Shows a table with the most active IP addresses based on the number of events performed by that IP address
  5. Top 10 Apps by Count: Shows the top ten apps by count on a column graph
  6. Login Failures by User: Shows the login failures by user on a pie chart


Downloads and Documentation


Configuration Guide: Google G Suite 
Collector Package on RSA Live: Google Business Suite Log Collector Configuration
Parser on RSA Live: CEF (device.type='gsuite')


Sending a notification based on a critical or time-sensitive event seen in your environment is table stakes functionality for any detection platform. Alerting someone in a timely manner is important, but building a custom e-mail that includes relevant, concise information that an analyst can use to determine the appropriate response is just as important. As they work to juggle their daily priorities, they need to know whether an alert requires immediate attention or whether it's something they can filter as a false positive as time permits.


The RSA NetWitness Platform uses the Apache FreeMarker template engine to build its notifications, be they e-mail, syslog, or SNMP. For the purposes of this post, I'm going to focus on e-mail notifications, as the concepts apply to all notifications and e-mail is the most complex of the options.


Available Data

The first step is finding out what information you can include in your notification. All of that data can be seen in the Raw Alert section of an Alert in the Respond UI. That Raw Alert is formatted in JSON, and anything in there can be placed into a notification. To find that Raw Alert data, you can go to one of two places.


Location #1:


Location #2:


Example #1: Basic Email

Let's start with a basic example. I want to send an e-mail that includes the name, severity, and time of the Alert, as well as a link to the raw event (network or log) that generated the alert. Here is a snippet of the data from my Raw Alert (the full alert, with addresses changed to protect the innocent, is attached as raw_alert.json):


Under Admin --> System --> Global Notifications, on the Template tab, I add a new template. Give it a name, choose the template type (we're going to select Event Stream Analysis for these), and then paste in the below code (also under example_1.html):


Assuming a severity of 9, that gives an e-mail formatted like this (using Gmail):


Rows 1 - 20 give us a color-coded banner which highlights the severity of the incident. In rows 3 - 6, you can see that we're making a logical check for the severity to determine the background color of the banner. Row 22 (we'll come back to row 21) prints the rule name. Row 23 gives us the time and includes the field, the input format, and the output format. You can even take epoch time and adjust it for your local time zone, but that's another post. Row 25 builds a hyperlink to the raw event that generated the Alert. Keep in mind that by default, notifications will separate large numbers with commas, which is why row 21 is necessary. Without row 21, the notification link (which I highlighted in the e-mail screenshot) would include commas in the sessionid within the URL, which would obviously not work when clicked. Also, you will need to update two portions of the URL specific to your environment:


The [URL_or_IP] is self-explanatory. The [Device_ID] is different for every environment and for every service. If you log in to the RSA NetWitness Platform and navigate to the Investigate --> Navigate page and load values, the Device ID will be in the URL string in your browser, and it will correspond to the data source you've selected. In this example, my Broker has a Device ID of 6.


Above, we used https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/[Session_ID]/AUTO. This loads the "Default Session View" that each individual user defined in their Profile --> Preferences --> Investigation settings, which by default is "Best Reconstruction" view for network sessions and the "Raw Log" view for log events. Should you prefer to jump directly to other views, you can use these formats:

  • Meta View: https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/[Session_ID]/DETAILS
  • Text View: https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/[Session_ID]/TEXT
  • Hex View: https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/[Session_ID]/HEX
  • Packets View: https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/[Session_ID]/PACKETS
  • Web View: https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/[Session_ID]/WEB
  • Files View: https://[URL_or_IP]/investigation/[Device_ID]/reconstruction/[Session_ID]/FILES


Great! Now we have a notification.


Example #2: Multiple Values

But what if we have an array of values like analysis_service here:


In order to print those multiple values out, we need to do some formatting with a FreeMarker macro. I'm pasting the following onto the bottom of my notification:


Lines 1 - 11 iterate through any meta value that has more than one value and separate them with a comma. Lines 13 - 22 print out Service Analysis with a comma-separated list of values. First, there is a logical test to see if there are any events in the first place. This was taken from the Default SMTP Template (Admin --> System --> Global Notifications --> Templates tab), and can be used to print out every meta key and all of their values. In my case, I altered it (or, well, Josh Randall did and I borrowed it) to only apply to Service Analysis by adding a logical test (lines 16 and 19) and then only printing out that one meta key. Here is what that looks like:



If you would like to print out more than one key, you can add elseif statements like this:
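A sketch of what that extension looks like is below. The meta key names and the `value_of` macro name are assumptions; substitute the comma-separating macro defined in your own template:

```freemarker
<#list events as event>
  <#list event?keys?sort as key>
    <#if key == "analysis_service">
      Service Analysis: <@value_of event[key] />
    <#elseif key == "analysis_session">
      Session Analysis: <@value_of event[key] />
    <#elseif key == "analysis_file">
      File Analysis: <@value_of event[key] />
    </#if>
  </#list>
</#list>
```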


Testing Your Syntax

So what if you want to use some FreeMarker concepts, but you want to see if they'll work before putting them into the RSA NetWitness Platform? Luckily, there is a tester put out by Apache here -


In order to use it on your data, just copy that Raw Alert section from an Alert and paste it into the Data model box shown above. Then paste your FreeMarker code into the Template box and click Evaluate. Keep this in mind: this will not work the same as an RSA NetWitness Platform notification would. If I took the Raw Alert I used for my examples above along with the template I was using, I would not see the output I actually get from the RSA NetWitness Platform. This should ONLY be used to test some basic syntax concepts. For example, printing out UNIX Epoch Time in various formats, adjusted for different time zones, is something this helped me do.
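For example, this is the sort of epoch-time experiment the tester is good for (a minimal sketch; the literal timestamp is arbitrary):

```freemarker
<#-- epoch milliseconds -> readable date, then the same instant in another time zone -->
<#assign ts = 1560946220000?number_to_datetime>
<#setting time_zone="UTC">
UTC:      ${ts?string("yyyy-MM-dd HH:mm:ss zzz")}
<#setting time_zone="America/New_York">
Eastern:  ${ts?string("yyyy-MM-dd HH:mm:ss zzz")}
```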



These concepts - along with some basic HTML formatting - give you the tools to build just about any notification you would want. I also recommend taking a peek at the Default SMTP Template I referenced above to use as a starting point for more advanced formatting. If you do some other interesting things or need help getting a notification to work, please post that in the comments below.

One of the most powerful features to make its way into RSA NetWitness Platform version 11.3 is also one of the most subtle in the interface.  11.3 now saves analysts one more step during incident response by integrating rich UEBA, Endpoint, Log, and Full Packet reconstruction directly into the incident panel.  This view is essentially the same as if you were looking at events directly in the Event Analysis part of the UI, or the Users (UEBA) part of the UI, just consolidated into the incident panel.  Prior to this improvement, the only way to view the raw event details was to open the event and click on "Investigate Original Event", pivoting into a new query window.  This option may still be appropriate for some, and still exists, but for those needing the fastest route possible to validating detection and event details, this feature is for you.


To use the new feature: for any individual event of interest that has been aggregated or added into an incident, you'll see a small hyperlink attached to each event on the left-hand side, labeled with one of "Network", "Endpoint", "Log", or "User Entity Behavior Analytics".  These labels correspond to the source of the event, and clicking one slides in the appropriate reconstruction view.


User Entity and Behavior Analytics (UEBA) view:

Network packet reconstruction view:

Endpoint reconstruction view:

Log reconstruction view:


Happy responding!

Starting in version 11.3, the RSA NetWitness Platform introduced the ability to analyze endpoint data captured by the RSA NetWitness Endpoint Agent (both the free "Insights" version and the full version). For more information on what RSA NetWitness Endpoint is all about, please start with the RSA NetWitness Endpoint Quick Start Guide for 11.3.


One of the helpful new features of the endpoint agent is the ability to not only focus the analyst on the "Hosts" context of their environment, but also the ability to gain full visibility into process behaviors and relationships whenever suspicious behaviors have been detected by the RSA NetWitness platform, or when investigating alerts from others.


The various pivot points bring an analyst into Process Analysis in the context of a specific process, including its parent and child process(es), based on the current analysis timeline, which is adjustable if needed.


Example Process Analysis view, drilling into all related events recorded by the NW Endpoint Agent


Example Process Analysis view, focused on process properties (powershell.exe) collected by the NW Endpoint Agent


The feature is simple to use when RSA NetWitness Endpoint agent data exists, and is accessible from a number of locations in the UI depending on where the analyst is in their workflow:


Investigate > Hosts > Details (if endpoint alerts exist):

Investigate > Hosts > Processes (regardless of alert/risk score): 


Investigate > Event Analysis:


Respond > Incident > Event List (card must be expanded):


Respond > Incident > Embedded Event Analysis (reconstruction view):


Happy Hunting!

Unfortunately, sometimes sensitive data can find its way where it is not wanted. It should not, but it happens. Perhaps your IT person decided connecting the high side network to the low side was a good idea. Maybe someone accidentally uploaded the wrong PCAP (packet capture) to the system. However it happened, there are options to remove that data. If a large amount of data needs to be purged, you probably want to start with the storage component (e.g. SAN) to see what capabilities are available. In terms of RSA NetWitness Platform software, one option is to utilize the wipe utility, which allows the administrator to strategically overwrite events.


  1. The first step is to find the data in question. This can be done via a query in the RSA NetWitness Investigate user interface, the REST API interface, or the NwConsole. If you use the first option, additional steps will be required to clear the user interface cache on the admin server. This is an example of an event found using the Investigate user interface. The PCAP used in this example has one event and was tagged by name during import to make it easier to query.

  2. After you execute the query, make note of the session ID (sid) and remote ID (rid), which can be seen here using a custom column group. They are both in the above view as well, but you have to scroll down the list of meta to find the remote ID. 

  3. Starting with the concentrator, use the wipe command against those session IDs to overwrite them with a pattern.
    • There are multiple options to the wipe command.
      • session - <uint64> The session id whose packets will be wiped
      • payloadOnly - <bool, optional> If true (default), will only overwrite the packet payload
      • pattern - <string, optional> The pattern to use, by default it uses all zeros
      • metaList - <string, optional> Comma separated list of meta to wipe, default (empty) is all meta
      • source - <string, optional, {enum-any:m|p}> The types of data to wipe, meta and/or packets, default is just packets
    • Note that if you use a string as your pattern it will not overwrite any meta values that are not a string type. Therefore it is best to keep the pattern as a numerical value.
    • Initially go to the concentrator that was found to have those session IDs (sids) and use the wipe command to overwrite the session meta data on disk.
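A sketch of what steps 3 and 4 might look like from NwConsole follows. The IDs, host names, and credentials are hypothetical; check the help output on the /database node for the exact parameters in your version:

```
# On the concentrator: wipe the meta for the session (sid) found in step 2.
# pattern=0 keeps the overwrite numeric, so non-string meta is wiped too.
login concentrator.example.local:50005 admin
send /database wipe session=112233 source=m pattern=0

# On the upstream decoder: wipe the raw payload, using the remote id (rid).
login decoder.example.local:50004 admin
send /database wipe session=445566 source=p
```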

  4. Rinse and repeat this on the upstream service (e.g. decoder, log decoder) in the path of the query. This time use the remote session IDs (rids) to overwrite the raw sessions on disk.

  5. To ensure that the indexed meta values that were stored on the Concentrator are removed, rebuild the index. This can take a long time but is necessary because the wipe command does not remove any data from the Concentrator index. Refer to the Core Database Tuning Guide for instructions.
  6. Now that you have overwritten the data on the decoder, where it was ingested, and the concentrator, where meta related to it was created, you're done, right? Well, it depends on how you discovered the data in the first place. If you know for sure no one found the data by way of the RSA NetWitness Platform user interface, you should be done. If the user interface was used, or you just want to be on the safe side, continue to the next step. Otherwise you might still see the raw event data being rendered from cache like below.

    • If the Investigate > Event Analysis was used to find the data the cache for the event reconstruction should be cleared by restarting the Investigate service.

    • If the Investigate > Events was used to find the data the event reconstruction cache should be cleared by removing the contents of the service folders on the admin server as shown below.

    • The cache for the concentrator and the decoder can also be cleared by executing the delCache command in Admin > Services > sdk > properties for each as shown below.

    • After clearing the cache, attempting to view the same session that was wiped will show that the event is unavailable for viewing.


To gain further knowledge on protecting the data stored within your RSA NetWitness system take a look at the Data Privacy Management Guide.

(Authored by Steve Schlarman, Portfolio Strategist, RSA)

It was Mark’s big shot.  He finally had a meeting with Sharon, the CIO.  Her schedule was so busy it was legendary and for her to spend time with a risk analyst was a clear indicator she recognized the new challenges facing their company.  Although he only had 15 minutes, Mark was prepared - notepad at the ready, brimming with nervous energy.   After some brief chit-chat he got down to business – ready to drill into a conversation about their company’s biggest obstacles; the most impactful concerns; the top of mind issues; the coup de grace that could spell disaster for the organization.  He took a deep breath and went to his big money question… ‘So, what keeps you up at night? What are you worried about?’ 


Sharon beamed.  She spun around to her white board and spewed a litany of projects fueling their company’s digital transformation – an IoT project, the implementation, a massive VMWare migration and their hybrid cloud, the new employee work-at-home program, the impending customer mobile portal…

While that question got Sharon started, let’s think about this a bit differently.


With all the benefits the new digital world offers, there are a host of risks that must be managed.   The major areas of risk remain the ‘usual suspects’ such as security, compliance, resiliency, inherited risks from third parties and operational risk. However, digital business amplifies uncertainty for organizations today.  For example:

  • Digital business, by its very nature, increases the threat of cyber incidents and risks around your intellectual property and customer data.
  • The expanded connectivity and expectations of the ‘always on’ business stresses the importance of resiliency.
  • Business has evolved into an ecosystem of internal and external services and processes leading to a complex web of ‘inherited’ risks.
  • The disappearing perimeter and digital workforce is challenging how organizations engage their customers and employees.


Factors such as these are why digital initiatives are forcing organizations to rethink and increasingly integrate their risk and security strategies. 

The objective for today’s risk professional is not just about defending against the bad.  Just like Mark discussing the parade of initiatives with Sharon that clearly impact their company’s future, you must be ready to help usher in a new age of digital operations.  Merely riding the buzzword wave - IoT, social media, big data analytics, augmented reality… - is not enough. 


You must look at opportunities to enable innovation in your business while building trust with your customers and throughout your enterprise.  Your business must be comfortable with embracing risk and aggressively pursuing market opportunities offered by new technology.  To do that, risk associated with the use of emerging or disruptive technology in transforming traditional business processes needs to be identified and assessed in the context of fueling innovation.   You also must keep focus on the negative side of risk.  Your business today demands an open, yet controlled, blend of traditional and emerging business tactics.  You must help manage the ongoing risk as these transformed business operations are absorbed into the organization fully, i.e. the new model becomes the normal model of doing business.

Risk is, by definition, uncertainty.  Everyone is concerned about uncertainty in today’s world.  However, if we go back to the simple equation (risk = likelihood * impact), risk should be something we can dissect, understand, and maybe even calculate.   While you are helping your organization embrace the advantages (positive risk) of technologies like IoT, data analytics, machine learning and other emerging digital enablers, the volatile, hyperconnected nature of digital business amplifies the negative side of risk.  It is anxiety about the unknown that leads us into that executive conversation, but it shouldn’t lead to worry.

Worry is about fear.  Your executives shouldn’t be afraid in today’s world.   They should have informed concerns.  And you – as the security or risk person in the room – should be feeding insights to raise their visibility of the likelihood of events and diminish their distress on the negative impacts.  Risk is part of riding the waves of business opportunities.

Risk is not something you should WORRY about…  it is something you should ACT on.



To learn more about digital risk management, click on our new Solutions Banners located in the right-hand column of each RSA product page: Third Party Risk, Cloud Transformation, Dynamic Workforce, and Cyber Attack Risk.


Domain Fronting Malware

Posted by Rui Ataide, Jun 19, 2019

Customers frequently ask me about malware that uses domain fronting and how to detect it. Simply put, domain fronting is when malware or an application pretends to be going to one domain but instead is going somewhere completely different. (Mitre ATT&CK - T1172)


The goal of domain fronting is to have analysts believe that the connection is being made to a safe site, while the true destination is in fact somewhere completely different.


Let’s look at a piece of malware that uses this method. This is a PowerShell Empire sample:



In the configuration information of this file, we see a URL that will be requested, which is also Base64 encoded. The URL decodes to as seen below:



So, this script will initiate a connection to, and appear to request /login/process.php. However, because the Host: header is pointing to content-tracker.*******.net, the request will actually go to https://content-tracker.*********.net/login/process.php instead.
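In clear-text form, the mismatch looks roughly like this (the sample's domains are redacted, so the fronting domain is described rather than named):

```
# TLS connection, DNS lookup, and SNI: the fronting domain (a legitimate CDN host)
# HTTP request inside the tunnel, which the CDN routes by Host: header:
GET /login/process.php HTTP/1.1
Host: content-tracker.*********.net
```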


You may be thinking that all you have to do in order to detect domain fronting is to look for discrepancies between the requested URL and the domain/IP in the Host: header. However, there are some complexities to deal with. Most of the time the initial connection is SSL encrypted, so you are limited to artifacts related to SSL traffic, unless you have SSL inspection technology in place. Another consideration is whether a proxy is involved in the connection or not.


In order to describe what the analyst would see if SSL inspection technology is in place, let us use a Man-In-The-Middle proxy to inspect this traffic in its clear-text form. SSL Inspection technologies are extremely useful for this and other scenarios where malware communicates over SSL, and it is something that we highly recommend that you deploy in your organization.


Let us introduce two terms here to clarify the elements of this technique as we describe how to hunt for it:

            Fronting Domain (

            Fronted Domain (content-tracker.*********.net)


Here we see the Fronting Domain request, which is also the only thing you would see if you were only relying on proxy logs. The Domain Fronting domain ( in this case) is also what the proxy would use for URL Filtering checks. The proxy logs would not actually see the “Fronted Domain”. For all intents and purposes this would be a legitimate request to a Microsoft site.




However, the response is anything but what you would expect from the site. Namely, instead of some HTTP content the site returns an encoded blob of data that decodes into more PowerShell code.


How do I know it was more PowerShell code? That was easy: I simply replaced the follow-up execution with output to a file, as seen below: 



Then, opening the resulting stage2.ps1 file, you can see it contains additional PowerShell code that is highly obfuscated.



Let us go back one step and discuss another key aspect of Domain Fronting: the SSL certificates used during this communication. The SSL certificates are legitimate Microsoft-signed certificates, since the initial connection is indeed to. This certificate is tied to many Microsoft domains and Microsoft CDN domains.



We could try to de-obfuscate the stage2.ps1 PowerShell script, but there really is no need, since by looking at the subsequent request of the malware on the proxy we can get an idea of what it does. Its initial check-in posts back victim information, again in an encrypted binary blob of data.



Additionally, this particular strain of malware also seems to make a legitimate call to the site, as shown below. While not at all relevant for domain fronting, it is important for analysts to be aware of why they might see both legitimate and malicious requests mixed together. The analyst will notice that the "Host:" header matches the requested domain in legitimate requests. 




The response from the legitimate site is also completely different and starts a redirect chain that we will show below:



And finally, the legitimate page for Microsoft Ajax Content Delivery Network.



Now that we have described the sequence of requests in detail, let us see how this all looks from the NetWitness Packets perspective. There are two cases: one where there is no proxy and one where there is.


Traffic Analysis – SSL Only


Let’s start with the traffic without a proxy. I have isolated only the relevant events in a separate collection to facilitate the analysis. I will also point out how some of these indicators can be spotted in larger traffic patterns.


In the example below, the indicators are separated in two sessions: a DNS request and an SSL connection. You can see that the DNS request is for one domain name, while the SSL session displays what is referred to as the SNI, which does not match the DNS request.



For the legitimate traffic, the DNS request and the SSL SNI value both match. These are both extracted into the key.



So, how can you detect this type of behavior? It is not easy, especially in high volume environments. However, a starting point is to look for values that show only one of the service types (DNS or SSL), but not both. Legitimate traffic will likely have both, as shown below:



You should not expect these values to be balanced or equal as DNS is often cached, but you should expect to see both types of service. Some environments at times do not capture DNS due to volume, but to be successful it is critical to have both.


For the malicious traffic, each domain will only have one type of traffic (i.e. DNS or SSL). This detection criterion is not an exact “science”, as you could easily have only DNS for all sorts of other types of traffic that are not domain fronting. The Fronting Domain will have the DNS traffic, while the Fronted Domain will have the SSL sessions.




Since the traffic is split between sessions on the packet side, we would need to use an ESA rule to detect this type of activity.

Traffic Analysis – Proxied Requests


For explicit proxied traffic things are slightly easier, as all the traffic is contained in a single session. We see the "raw" payload of one such session below. It can seem confusing at first, but NetWitness identifies this traffic as HTTP, which is correct since this part of the traffic is indeed HTTP.



Since we have all the pieces in one session here, the detection is easier. But how can we do it for high data volumes? In this case the HTTP session will have two different hostnames. While this is at times common for pure HTTP traffic due to socket re-use, it is uncommon for HTTPS/SSL traffic, as the standards advise against it for privacy/security purposes, among other reasons.



This shows a possible solution to detect this type of traffic: a simple App rule that identifies HTTP traffic with two unique values and the presence of a certificate.


In summary, domain fronting is a technique used by attackers/red teams with the intent of either circumventing network policies (URL filtering) or hiding in plain sight, as analysts are more likely to see/notice the legitimate domains than the malicious ones and assume this activity is safe/legitimate. However, this type of activity still has a certain footprint, which we have described. Hopefully the information provided here will help you improve your defenses against this technique.


Thank you,



If you need to achieve HA through load balancing and failover for VLCs (Virtual Log Collectors) on AWS, you can use the built-in AWS load balancer. I have tested this scenario, so I am going to share the outcome here.


Before starting, I need to state that VLC failover/balancing is not an officially supported RSA functionality. Furthermore, this can only work with "push" collections such as syslog, SNMP, etc. It does not work with "pull" collections such as Windows, Check Point, ODBC, etc. (at least not that I am aware of, and I have personally never tested it).


That being said, let's get started.


As you may be aware, in AWS EC2 you have separate geographic areas called Regions (I am using US East - N. Virginia here), and within regions you have different isolated locations called Availability Zones.



We are going to leverage this concept and we will place two VLCs into two different Availability Zones. If one VLC fails we will have the VLC in the other Availability Zone to take over.


The following diagram helps illustrate the scenario (for clarity I omitted the data flow from the VLCs to the Log Decoder(s)):


Assuming you have already deployed the two VLC instances, the next step is to create two different subnets and associate a different Availability Zone with each of them.


  • From the AWS Virtual Private Cloud (VPC) menu go to Subnets and start creating the two subnets:



  • Next we need to create a Target Group (from the EC2 menu) which will be used to route requests to our registered targets (the VLCs):



  • Finally we need to create the load balancer itself. For this specific test I used a Network Load Balancer, though I believe an Application Load Balancer would work too. I selected an internal balancer and chose syslog on TCP port 514, so I created a listener for that. The AWS load balancer does not support UDP, so I was forced to use TCP; however, I would have used syslog over TCP anyway, as it is more robust and reliable and large syslog messages can be transferred (especially in a production environment). I also selected the appropriate VPC and the Availability Zones (and subnets) accordingly.



In the advanced health check settings I chose to use port 5671 (by default the balancer would use the same port as the listener, 514). The reason for using 5671 is that the whole log collection mechanism works with RabbitMQ, which uses this port. In fact, the only scenarios in which port 514 would fail are when the VLC instance is down or when we stop syslog collection. RabbitMQ is more prone to failure and may fail in more scenarios, such as queues filling up because the decoder is not consuming the logs, full partitions, network issues, etc.



  • Once the load balancer configuration is finished you will see something similar:



           We need to take note of the DNS A record, as this is what our event sources will use to send syslog traffic to.


  • Now, to configure an event source to send syslog logs to the load balancer, you just need to point the event source to the load balancer's DNS A record. As an example, for a Red Hat Linux machine you would edit the /etc/rsyslog.conf file as follows:




         We use @@ because this is TCP; for UDP it is a single @.
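That forwarding line takes the following shape (the DNS name below is a placeholder for your own load balancer's A record):

```
# /etc/rsyslog.conf -- forward all messages to the load balancer over TCP port 514
*.* @@my-nlb-1234567890.elb.us-east-1.amazonaws.com:514
```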


         Then we need to restart the rsyslog service as follows:


            --> service rsyslog restart (Red Hat 6)

            --> systemctl restart rsyslog (Red Hat 7)


  • To perform a more accurate and controlled test and demonstration, I am installing a tool on the same event source to push some rhlinux logs to the load balancer and see what happens. The tool is RSA proprietary and is called NwLogPlayer (more details here: How To Replay Logs in RSA NetWitness). It can be installed via yum if you have enabled the RSA NetWitness repo:
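If NwLogPlayer isn't to hand, a rough stand-in for this kind of test can be sketched in Python; this is an illustration, not the RSA tool, and the host/port are whatever your load balancer's DNS A record and listener port (514 here) are:

```python
import socket

def replay_logs(lines, host, port):
    """Send each log line as a newline-framed message over a single TCP
    connection -- a minimal stand-in for NwLogPlayer when testing a TCP
    syslog listener such as the AWS load balancer configured above."""
    with socket.create_connection((host, port)) as sock:
        for line in lines:
            sock.sendall(line.rstrip("\n").encode() + b"\n")

# Example usage (the DNS name is a placeholder):
# with open("rhlinux_sample.log") as f:
#     replay_logs(f, "my-nlb-1234567890.elb.us-east-1.amazonaws.com", 514)
```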




      I also prepared an rhlinux sample log file with 14000 events, which I am going to inject into the load balancer to see what happens. Initially my Log Decoder LogStats page is empty:




     Then I start with the first push of the 14000 events:



     Now I can see the first 14000 events went to VLC2:


      At my second push I can see the whole chunk going to VLC1:


      At the third push the logs went again to VLC2


     At the fourth push the logs went to VLC1


     At the fifth push I sent 28000 events (almost simultaneously) and they were divided between both VLCs:


     This demonstrates that the load has been balanced equally between the two VLCs.


     Now I stop VLC1 (I actually stopped the rabbitmq service on VLC1) and push another 14000 logs:


     and again


     On both instances above, VLC2 received the two chunks of 14000 logs since VLC1 was down. We can safely say that failover is working fine!

Note: This configuration is not officially supported by RSA customer support. 


Cobalt Strike is a threat emulation tool used by red teams and advanced persistent threats for gaining and maintaining a foothold on networks. This blog post will cover the detection of Cobalt Strike based off a piece of malware identified from Virus Total:


NOTE: The malware sample was downloaded and executed in a malware VM under an analyst's constant supervision, as this was/is live malware.

The Detection in NetWitness Packets

NetWitness Packets pulls apart characteristics of the traffic it sees. It does this via a number of Lua parsers that reside on the Packet Decoder itself. Some of the Lua parsers have options files associated with them that parse out additional metadata for analysis. One of these is the HTTP Lua parser, which has an associated HTTP Lua options file; you can view this by navigating to Admin ⮞ Services ⮞ Decoder ⮞ Config ⮞ Files and selecting HTTP_lua_options.lua from the drop-down. The option we are interested in for this blog post is headerCatalog() - making this return true will register the HTTP headers in the request and response under the meta keys:

  • http.request
  • http.response


And the associated values for the headers will be registered under:

  • req.uniq
  • resp.uniq
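Assuming your version of the options file carries this function, enabling it amounts to having it return true; a simplified sketch of the excerpt, not the full file contents:

```lua
-- HTTP_lua_options.lua (excerpt, simplified)
function headerCatalog()
    -- true: register header names under http.request / http.response,
    -- and their unique values under req.uniq / resp.uniq
    return true
end
```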


NOTE: This feature is not available in the default options file due to potential performance considerations it may have on the Decoder. This feature is experimental and may be deprecated at any time, so please use this feature with caution, and monitor the health of all components if enabling. Also, please look into the customHeader() function prior to enabling this, as that is a less intensive substitute that could fit your use cases.


There are a variety of options that can be enabled here. For more details, it is suggested to read the Hunting Guide.


These keys will need to be indexed on the Concentrator, and the following addition to the index-concentrator-custom.xml file is suggested:

<key description="HTTP Request Header" format="Text" level="IndexValues" name="http.request" defaultAction="Closed" valueMax="5000" />
<key description="HTTP Response Header" format="Text" level="IndexValues" name="http.response" defaultAction="Closed" valueMax="5000" />
<key description="Unique HTTP Request Header" level="IndexKeys" name="req.uniq" format="Text" defaultAction="Closed"/>
<key description="Unique HTTP Response Header" level="IndexKeys" name="resp.uniq" format="Text" defaultAction="Closed"/>



One reason for doing this, amongst others, is that the trial version of Cobalt Strike has a distinctive HTTP header that we, as analysts, would like to see: X-Malware. With our new option enabled, this header is easy to spot:

NOTE: While this is one use case to demonstrate the value of extracting the HTTP headers, this metadata proves incredibly valuable across the board, as looking for uncommon headers can help analysts uncover and track malicious activity. Another example where this was useful can be seen in one of the previous posts regarding POSH C2, whereby an application rule was created to look for the incorrectly supplied cachecontrol HTTP response header:


Pivoting off this header and opening the Event Analysis view, we can see an HTTP GET request for KHSw, which was direct-to-IP over port 666 and had a low header count with no referrer - this should stand out as suspicious even without the initial indicator we used for analysis:


If we had decided to look for traffic using the Service Analysis key, which pulls apart the characteristics of the traffic, we would have been able to pivot off of these metadata values to whittle down our traffic to this as well:


Looking into the response for the GET request, we can see the X-Malware header we pivoted off of, and the stager being downloaded. Also take notice of the EICAR test string in X-Malware; this too is indicative of a trial version of Cobalt Strike:


NetWitness Packets also has a parser to detect this string, and will populate the metadata, eicar test string, under the Session Analysis meta key (if the Eicar Lua parser is pushed from RSA Live) - this could be another great pivot point to detect this type of traffic:


Further looking into the Cobalt Strike traffic, we can start to uncover more details surrounding its behaviour. Upon analysis, we can see that there are multiple HTTP GET requests with no error (i.e. 200) and a content-length of zero, which stands out as suspicious behaviour. As well as this, there is a cookie that looks like a Base64-encoded string (equals signs at the end for padding) with no name/value pairs; cookies normally consist of name/value pairs, so these two observations make the cookie anomalous:


Based off of this behaviour, we can start to think about how to build content to detect it. Heading back to our HTTP Lua options file on the Decoder, we can see another option named customHeaders() - this allows us to extract the values of HTTP headers into fields of our choosing. This means we can choose to extract the cookie into a meta key named cookie, and content-length into a key named http.respsize. Mapping a specific HTTP header value to a key lets us create content based off of the behaviours we previously observed:


After making the above change, we need to add the following keys to our index-concentrator-custom.xml file as well - these are set to the index level IndexKeys, as the values that can be returned are unbounded and we don't want to bloat the index:

<key description="Cookie" format="Text" level="IndexKeys" name="cookie" defaultAction="Closed"  />
<key description="HTTP Response Size" format="Text" level="IndexKeys" name="http.respsize" defaultAction="Closed" />


Now we can work on creating our application rules. Firstly, we wanted to alert on the suspicious GET requests we were seeing:

service = 80 && action = 'get' && error !exists && http.respsize = '0' && content='application/octet-stream'

And for the anomalous cookie, we can use the following logic. This will look for no name/value pairs being present and the use of equals signs at the end of the string which can indicate padding for Base64 encoded strings:

service = 80 && cookie regex '^[^=]+=*$' && content='application/octet-stream'
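Before deploying the rule, the cookie pattern is easy to sanity-check outside the Decoder; a quick sketch:

```python
import re

# The app rule's pattern: an entire cookie value with no name=value pairs,
# optionally ending in '=' padding typical of Base64-encoded strings.
anomalous_cookie = re.compile(r"^[^=]+=*$")

assert anomalous_cookie.match("dGVzdEJhc2U2NHN0cmluZw==") is not None  # bare Base64 blob
assert anomalous_cookie.match("sessionid=abc123; theme=dark") is None  # normal name=value pairs
```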

These will be two separate application rules that will be pushed to the Decoders:


Now we can start to track the activity of Cobalt Strike easily in the Investigate view. This could also potentially alert the analyst to other infected hosts in their environment. This is why it is important to analyse the malicious traffic and create content to track:



Cobalt Strike is a very malleable tool, which means the indicators we have used here will not detect all instances of it; that being said, this is known, common Cobalt Strike behaviour. This blog post was intended to showcase how the HTTP Lua options file can be imperative in identifying anomalous traffic in your environment whilst using real-world live malware. The extraction of the HTTP headers, whilst a trivial piece of information, can be vital in detecting advanced tools used by attackers. This, coupled with the extraction of the values themselves, can help your analysts create more advanced, higher-fidelity content.

In order to prevent confusion, I wanted to add a little snippet before we jump into the analysis. The blog post first goes over how the server became infected with Metasploit: a remote code execution CVE was used against an Apache Tomcat web server, the details of which can be found here: CVE-2019-0232. Further into the blog post, details of Metasploit can be seen.


This CVE requires that the CGI servlet in Apache Tomcat is enabled. This is not an abnormal servlet to have enabled, and it merely requires the administrator to uncomment a few lines in the Tomcat web.xml. This is a normal administrative action to have taken on the web server:
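For reference, a simplified sketch of the relevant web.xml excerpt (the real stanza ships commented out with additional init-params; enableCmdLineArguments must also be true for the CVE to apply):

```xml
<!-- conf/web.xml (excerpt, simplified) -->
<servlet>
    <servlet-name>cgi</servlet-name>
    <servlet-class>org.apache.catalina.servlets.CGIServlet</servlet-class>
    <init-param>
        <param-name>enableCmdLineArguments</param-name>
        <param-value>true</param-value>
    </init-param>
</servlet>
<servlet-mapping>
    <servlet-name>cgi</servlet-name>
    <url-pattern>/cgi-bin/*</url-pattern>
</servlet-mapping>
```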


Now, if the administrator has a .bat or .cmd file in the cgi-bin directory on the Apache Tomcat server, the attacker can remotely execute commands, as Apache will call cmd.exe to execute the .bat or .cmd file and incorrectly handle the parameters passed; this file can contain anything, as long as it executes. So as an example, we place a simple .bat file in the cgi-bin directory:


From a browser, the attacker can call the .bat file and pass a command to execute, due to the way the CGI servlet handles this request and passes the arguments:
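Such a request takes roughly this shape (hypothetical host; the URL-encoded argument after `?&` is what gets handed through cmd.exe as a parameter):

```
http://victim.example.com:8080/cgi-bin/hello.bat?&C%3A%5CWindows%5CSystem32%5Cwhoami.exe
```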


From here, the attacker can create a payload using msfvenom and instruct the web server to download the Metasploit payload they had created:


The Detection in NetWitness Packets

RCE Exploit
NetWitness Packets does a fantastic job pulling apart the behaviour of network traffic. This allows analysts to detect attacks even with no prior knowledge of them. A great meta value for analysts to look at is windows cli admin commands; this metadata is created when CLI commands are detected. Grouping this metadata with inbound traffic to your web servers is a great pivot point to start looking for malicious traffic:


NOTE: Taking advantage of the traffic_flow_options.lua parser would be highly beneficial for your SOC. This parser allows you to define your subnets and tag them with friendly names. Editing this to contain your web servers address space for example, would be a great idea.


Taking the above note into account, your analysts could then construct a query like the following:
(analysis.service = 'windows cli admin commands') && (direction = 'inbound') && (netname.dst = 'webservers')
Filtering on this metadata reduces the traffic quite significantly. From here, we can open up other meta keys to get a better understanding of what traffic is related to these Windows CLI commands. From the below screenshot, we can see that this is HTTP traffic with a GET request to a hello.bat file in the /cgi-bin/ directory; there are also some suspicious-looking queries associated with it that appear to reference command-line activity:


At this point, we decide to reconstruct the raw sessions themselves, as we have some suspicions surrounding this traffic, to see exactly what these HTTP sessions are. Upon doing so, we can see a GET request with the dir command, and we can also see the dir output in the response - this will be what the windows cli admin commands metadata was picking up on:


This traffic instantly stands out as something of interest that requires further investigation. In order to get a holistic view of all data toward this server, we need to reconstruct our query, as the windows cli admin commands metadata would only have picked up on the sessions where it saw CLI commands; we are, however, interested in seeing it all. So we look at the metadata available for this session and build a new query. This now allows us to see other interesting metadata and get a better idea of what the attacker was doing. Looking at the Query meta key, we can see all of the attacker's commands:


Navigating to the Event Analysis view, we can see the commands in the order they took place and reconstruct
what the attacker was doing. From here we can see a sequence of events whereby the attacker makes a
directory, C:\temp, downloads an executable called 2.exe to said directory, and subsequently executes it:


MSF File and Traffic

As we can see the attacker's commands, we can also see the download of an executable they performed, a.exe. This means we can run a query and extract that file from the packet data as well. We run a simple query looking for a.exe and find our session. Also, take note of the user agent: why is certutil being used to download a.exe? This is another great indicator of something suspicious:


We can also choose to switch to the File Analysis view and download our file(s). This would allow us to perform additional analysis on the file(s) in question:


Merely running strings on one of these files yields a domain this executable may potentially connect to:
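As an aside, strings simply scans for runs of printable bytes; a minimal Python equivalent (not part of the original analysis) for when the tool isn't on hand:

```python
import re

def strings(data: bytes, min_len: int = 4):
    """Yield runs of printable ASCII at least min_len bytes long,
    similar to the Unix strings tool's default behaviour."""
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield match.group().decode("ascii")

# Example: scan a downloaded binary for embedded hostnames
# with open("a.exe", "rb") as f:
#     for s in strings(f.read()):
#         print(s)
```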


As we also have another hostname to add to our analysis, we can now perform a query on just this hostname
to see if there is any other interesting metadata associated with it. Opening the session analysis meta key, we can see a myriad of interesting pivot points. We can group these pivot points together, or make combinations of them to whittle down the traffic to something more manageable:

NOTE: See the RSA IR Hunting guide for more details on these metadata values:


Once we have pivoted down using some of the metadata above, we start to get down to a more manageable number of sessions. Continuing to look at the service analysis meta key, we observe some additional pieces of metadata of interest that we can use to start reconstructing the sessions and get a better understanding of what this traffic is:


  • long connection
  • http no referer
  • http six or less headers
  • http post missing content-type
  • http no user-agent
  • watchlist file fingerprint



Opening these sessions up in the Event Analysis view, we can see an HTTP POST with binary data and a 200 OK from the supposed Apache server; we can also see the directory is the same as we saw from our strings analysis:


Continuing to browse through these sessions, yields more of the same:


Navigating back to the Investigate view, it is also possible to see that the directory is always the same one we saw in our strings analysis:


NOTE: During the analysis, no beaconing pattern was observed; this can make the C2 harder to detect and requires continued threat hunting from your analysts to understand your environment and pick up on these types of anomalies.


Web Shell

Now that we know the Apache Tomcat web server is infected, we can look at all other traffic associated with it and continue to monitor to see if anything else takes place; attackers like to keep multiple entry points if possible. Focusing on our web server, we can also see a JSP page being accessed that sounds odd, error2.jsp, and observe some additional queries:


Pivoting into the Event Analysis view and reconstructing the sessions, we can see a tasklist command being executed:


And the subsequent response of the tasklist output. This is a web shell that has been placed on the server and that the attacker is also using to execute commands:


NOTE: For more information on Web Shells, see the following series:


It is important to note that just because you have identified one method of remote access, it does not mean that it is the only one; it is important to ascertain whether or not other access methods were made available by the attacker.


The Detection in NetWitness Endpoint
As I preach in every blog post, the analyst should always log in every morning and check the following three meta keys as a priority: IOC (Indicators of Compromise), BOC (Behaviours of Compromise), and EOC (Enablers of Compromise). Looking at these keys, a myriad of pieces of metadata stand out as great places to start the investigation, but let's place a focus on these three for now:


Let's take downloads binary using certutil to start, and pivot into the Event Analysis view. Here we can see the certutil binary being used to download a variety of the executables we saw in the packet data:


Looking into one of the other behaviours of compromise, http daemon runs command shell, we can also see evidence of the bat file being requested and the associated commands, as well as the use of the web shell, error2.jsp. It is also important to note that there is a request for hello.bat prior to the remote code execution vulnerability being exploited; this would be seen as legitimate traffic, given that the server is working as designed for the cgi-bin scripts. It is down to the analyst to review the traffic and decipher whether something malicious is happening or whether this is by design of the server:


NOTE: Due to the nature of how the Tomcat server handles the vulnerable cgi-bin application and "legitimate" JSP files, you can see hello.bat as part of the tracking event, as it's an argument passed to cmd.exe. With error2.jsp, however, it is executed inside the Tomcat process, and only when the web shell spawns a new command shell to execute certain commands will you see cmd.exe being executed - not every time error2.jsp is used. Having said that, the advantage for the defender is that even if not all of it is tracked or leaves a visible footprint, at some point something will, and this will/could be the starting thread needed to detect the intrusion.


Coming back to the Investigate view we can see another interesting piece of metadata that would be of interest, creates remote service - let's pivot on this and see what took place:

Here we can see that cmd was used to create a service on our Web Server that would run a malicious binary dropped by the attacker in the c:\temp directory:


It is important to remember that as a defender, you only need to pick up on one of these artifacts leftover from
the attacker in order to start unraveling their activity.


With today's ever-changing landscape, it is becoming increasingly inefficient to create signatures for known vulnerabilities and attacks. It is therefore far more important to pick up on traffic behaviours that stand out as abnormal than to generate signatures. As shown in this blog post, a fairly recent remote code execution CVE was exploited - no signatures were required to pick up on this, as NetWitness pulls apart the behaviours; we just had to follow the path. Similarly, with Metasploit it is also very difficult to generate effective, long-lived signatures that could detect this C2; performing threat hunting through the data based on a foundation of analysing behaviours will ensure that all knowns and unknowns are effectively analysed.


It is also important to note that the packet traffic would typically be encrypted, but we kept it in the clear for the purposes of this post. With that being said, the RCE exploit and web shell are easily detectable when NetWitness Endpoint tracking data is being ingested, and this gives the defender the necessary visibility if SSL decryption is not in place.


A vulnerability exists within Remote Desktop Services that may be exploited by sending crafted network requests using RDP. The result could be remote code execution on a victim system without any user authentication or interaction. The vulnerability, CVE-2019-0708, is not known to have been publicly exploited; however, expectations are that it will be. Follow the Microsoft advisory to patch vulnerable systems -- CVE-2019-0708 | Remote Desktop Services Remote Code Execution Vulnerability.


Live Content

The RSA Threat Content Team has added detection for NetWitness packet customers based on the work of the NCC Group. To get the detection, update your Decoders with the latest version of the RDP Lua parser (dated May 22nd, 2019).


If an exploit has been detected, meta will be output to the NetWitness Investigation page as:


ioc = ‘possible CVE-2019-0708 exploit attempt’


You may also see the exploitation by deploying rules to the NetWitness ESA product and viewing the Respond workflow for alerts. Deploy the following rules from Live to ESA:


  • RDP Inbound
  • RDP from Same Source to Multiple Destinations


RDP Inbound may catch the initial connection from the attacker. It’s expected the infection would be worm-like moving to internally networked systems. In that case, the second rule, RDP from Same Source to Multiple Destinations, may catch the behavior. Please note you must be monitoring lateral traffic within your network for this detection.



In 11.3 the same NWE agent can operate in Insights (free) or Advanced mode. This change can be made by toggling a policy configuration in the UI and does not require an agent reinstall or reboot.

There can be both Insights and Advanced agents in a single deployment. Only agents operating in Advanced mode count toward licensing.






Operating Systems Support





Basic scans




Tracking scans

Continuous file, network, process, and thread monitors

Registry monitor (specific to Windows)



Anomaly detection

Inline hooks, kernel hooks, suspicious threads, registry discrepancies



Windows Log Collection

Collect Windows Event Logs


Threat Detection Content

Detection Rules /Reports


Risk score

Based on Threat Content Pack



File Reputation Service

File Intel ( 3rd Party Lookup)



Live Connect

Community Intel



Analyze module

Analysis of downloaded file




Block an executable



Agent Protection

Driver Registry Protection / User Mode Kill Protection


PowerShell, command-line (input)

Report user interactions within a console session


Process Visualization

Unique identifier (VPID) for process that uniquely identifies the entire process event chain 



MFT Analysis



Process Memory Dump



System Memory Dump



Request File



Automatic File Downloads



Standalone Scans









API Support



Certificate CRL Validation




** - New capabilities; these do not exist in 4.x



11.3 Key Endpoint Features 

1. Advanced Endpoint Agent

Full and Continuous Endpoint Visibility

Advanced Threat Detection / Threat Hunting

Performs both kernel and user level analysis

  • Tracks behaviours such as process creation, remote thread creation, relevant registry key modifications, executable file creation, and processes that read documents (refer to the doc for the detailed list)

  • Tracks Network Events

  • Tracks Console Events (commands typed into a console like cmd)
  • Windows Log Collection
  • Detects anomalies such as image hooks, kernel hooks, suspicious threads, registry discrepancies
  • Retrieves lists of drivers, processes, DLLs, files (executables), services, autoruns, host file entries, scheduled tasks
  • Gathers security information such as network shares, patch level, Windows tasks, logged-in users, bash history
  • Reports the hashes (SHA-256, SHA-1, MD5) and file size of all binaries (executables, libraries (DLL and .SO), and scripts) found on the system
  • Reports details on certificate, signer, file description, executable sections, imported libraries, etc.

2. Threat Content Packs

Detection of adversary tactics and techniques (MITRE ATT&CK matrix). See attached 11.3 Endpoint Rules spreadsheet.
3. Risk Scoring

Prioritized List of Risky Hosts /Files

Automated Incident Creation for Hosts /Files when risk threshold exceeds

Risk Score backed up with context of contributing factors

Rapid/Easy Investigation Workflow

Risk Scores are computed based on a proprietary scoring algorithm developed by RSA's Data Sciences team

The scoring server takes multiple factors into consideration for scoring:

  • Critical, high, and medium indicators generated by the endpoints based on the threat content packs deployed
  • Reputation status of files - Malicious/Suspicious
  • Bias status of files - Blacklisted/Greylisted/Whitelisted
4. Process Visualizations

Provides a visualization of a process and its parent-child relationships

Timeline of all activities related to a process



5. File Analysis/Reputation/Bias Status

Categorize Files

Saves analysis time, filters out noise, focuses on real threats

File hashes from the environment are sent to RSA Threat Intel Cloud for reputation status updates

Live connect Lookup in Investigations

6. Response Actions - File Blocking

Accelerate response / prevent malware execution. Blocks a file hash across the environment.
7. Response Actions - Retrieve Files

Download and Analyze File Contents for Anomalies

Static Analysis using 3rd Party Tools

8. Centralized Group Policy Management

Agent Configurations Updated Dynamically Based on Group Membership

Groups can be created based on different criteria such as IP address, host names, operating system type, and operating system description

Endpoint policies such as agent mode, scan schedule, server settings, and response actions can be automatically pushed based on group membership

Agents can be migrated to different Endpoint Servers based on Group/Policy Assignment

9. Geo-Distributed Scalable Deployment

Consolidated view & management of endpoints/files and the associated risk across distributed deployments.

Strides have been made in RSA NetWitness Platform v11.2 to provide administrators with alternatives to the standard proprietary NetWitness database format. Now an admin can choose to have the raw packet database files written in PcapNg format, allowing them to be directly accessible using third-party tools like Wireshark.


To enable storing the raw packet data as PcapNg files, the setting packet.file.type in the Network Decoder database configuration node has to be changed from netwitness to pcapng. After making this change, a restart of the service is not required unless you are too impatient to wait for the existing database file (default size is 4GB) to roll over.


PcapNg configuration


Once the change is applied, any new PCAPs uploaded or network traffic ingested into the Decoder will be stored as PcapNg files. As the database files age, they remain readily accessible, both on the Decoder and when backed up off the system. In the below image you can see a mixture of the formats commingling in the packet database folder. The database written format can be changed between the two options without any loss of standard functionality.


pcapng files


There are some considerations before making the switch to PcapNg format from the default nwpdb format. The PcapNg format requires approximately 5% more storage than nwpdb. The PcapNg format is not recommended when ingest rates are greater than 8 Gbps on a single Decoder, as it can introduce approximately 5% packet drops compared to nwpdb. PcapNg files cannot be compressed while nwpdb files can, although in general raw network data does not compress well compared to raw logs. PcapNg is an open format while nwpdb is proprietary, so as accessibility improves, privacy concerns may arise when storing as PcapNg files. However, I am not suggesting security through obscurity is the right answer when measuring your GDPR compliance.


Hopefully this, along with the already available SDK and APIs, makes NetWitness data more accessible.

One of the more common requests and "how do I" questions I've heard in recent months centers around the emails that the Respond Module can send when an incident is created or updated. Enabling this configuration is simple, but unfortunately changing the templates that Respond uses when it sends one of these emails has not been an option.


Or rather...has not been an accessible option.  I aim to fix that with this blog post.


Before getting into the weeds, I should note that this guide does not cover how to include *any* alert data within incident notification emails. The fields I have found in my tests that can be included are limited to the following, using JSON dot notation (e.g. "", "incident.title", "incident.summary", etc.):


Now, this does not necessarily mean it isn't possible to include other data, just that I have not figured out how...yet.


The first thing we need to do is create a new Notification Template for Respond to use.  We do this within the UI at Admin / System / Global Notifications --> Templates tab.  I recommend using either of the existing Respond Notification templates as a base, and then modifying either/both of those as necessary. (I have attached these OOTB notification templates to this blog.)


For this guide, I'll use the "incident-created" template as my base, and copy that into a new Notification Template in the UI.  I give my template an easy-to-remember name, choose any of the default Template Types from the dropdown - it does not matter which I choose, as it won't have any bearing on the process, but it's a required field and I won't be able to save the template without selecting one - and write in a description:


Then I copy the contents of the "incident-created" template into the Template field.  The first ~60% of this template is just formatting and comments, so I scroll past all that until I find the start of the HTML <body> tag.  This is where I'll be making my changes.


One of the more useful changes that comes to mind here is to include a hyperlink in the email that will allow me to pivot directly from the email to the Incident in NetWitness.  I can also change any of the static text to whatever fits my needs.  Once I'm done making my changes, I save the template.
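As an illustration of such a hyperlink, a hedged sketch (the hostname is a placeholder for your own Admin Server, and the ${...} field is a hypothetical example of the dot notation described above; verify it renders in your own template first):

```html
<!-- Hypothetical: replace netwitness.example.com with your NetWitness UI host -->
<a href="https://netwitness.example.com/respond/incident/${incident.id}">
    Open incident ${incident.id} in NetWitness Respond
</a>
```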


After this, I'm done in the UI (unless I decide to make additional changes to my template), and I open an SSH session to the NetWitness Admin Server.  To make this next part as simple and straightforward as I can, I've written a script that will prompt me for the name of the template I just created, use that to make a new Respond notification template, and then prompt me one more time to choose which Respond notification event (Created or Updated) I want to apply it to. (The script is attached to this blog.)


A couple of notes on running the script:

  1. Must be run from the Admin Server
  2. Must be run as a superuser


Running the script:


...after a wall of text because my template is fairly long...I get a prompt to choose Created or Updated:


And that's it!  Now, when a new incident gets created (either manually or automatically) Respond sends me an email using my custom Notification Template:


And if I want to update or fix or modify it in any way, I simply make my changes to the template within the UI and then run this script again.


Happy customizing.
