
I'm pleased to share with you an article I had published in Continuity Insights magazine.  It's about building a more strategic Business Continuity Management program.  Check it out here! http://www.continuityinsights.com/articles/2014/02/building-stronger-more-strategic-bcm-program

So, NIST just released the final draft of its Cybersecurity Framework.

 

You can read it here, or I can give you a synopsis for now:

The federal government is concerned about the 16 critical infrastructure sectors identified by DHS. If you are in one of these sectors, the concern is that the collection of security tools you have and the security compliance activities you perform do not add up to a truly comprehensive cybersecurity program. If a nation state were to engage us in a cyber-war tomorrow, they would certainly target our critical infrastructure. That’s where the NIST CSF comes in. It provides a list of capabilities and goals that an organization should include in its cybersecurity program, a list of references to use to implement and achieve those capabilities and goals, and a method for assessing and measuring yourself along the way.
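
To make the framework’s structure a bit more concrete, here is a minimal sketch in Python of how Functions, Categories, Subcategories and Informative References fit together. The single entry shown is abbreviated from the published framework, and the lookup helper is purely illustrative:

    # A minimal sketch of the CSF structure: Functions contain Categories,
    # Categories contain Subcategories, and each Subcategory points to
    # Informative References. The entry below is abbreviated for illustration.
    csf = {
        "Identify": {                                  # Function
            "Asset Management (ID.AM)": {              # Category
                "ID.AM-1": {                           # Subcategory
                    "goal": "Physical devices and systems are inventoried",
                    "informative_references": [
                        "ISO/IEC 27001 A.8.1.1",
                        "NIST SP 800-53 Rev. 4 CM-8",
                    ],
                },
            },
        },
    }

    def references_for(subcategory_id):
        """Look up the Informative References for a given Subcategory ID."""
        for categories in csf.values():
            for subcategories in categories.values():
                if subcategory_id in subcategories:
                    return subcategories[subcategory_id]["informative_references"]
        return []

    print(references_for("ID.AM-1"))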

 

I gave a webcast on this subject a few months ago if you’re interested, and I will be giving an updated webcast with new material on March 13.

 

Anyone familiar with RSA Archer would recognize that as a GRC platform, we are well-equipped for the sort of use case presented by the NIST CSF. So, in response we did two things:

 

  1. We consumed the mappings defined in the CSF between the security goals (called Categories and Subcategories) and the references (called Informative References). This would provide the owners of the RSA Archer Policy Management solution with the core to build their own NIST CSF solution. Here is a blog from Mason Karrer, our content strategist, on the subject.
  2. We built a proof-of-concept NIST CSF solution, which we will be showing at the RSA Conference in a few weeks. I will be giving demos at the RSA booth, so please stop by if you’re attending.

Thanks for tuning in. Hope to see you at the RSA Conference!

 

Email me with comments or questions

 

Chris

In my previous blog series on Vulnerability Risk Management, I included a post on “Metrics that Matter”. I made the statement that in security, we constantly talk about the challenges of showing return on investment. Security Operations is one of those areas where it can be hard to show a return. If you have prevented an attack through an efficient identification, escalation and quarantine process, how can you estimate the damage that was avoided? In other words, if you spent $X on creating a streamlined, efficient detection and response process, how can you balance that against the $Y of losses that you prevented? In this case, you may never know what losses were prevented. However, there are certain tangible metrics that can give insight into how well the security operations strategy is playing out.

 

Metrics that Matter

Accurate Asset Inventory – I explained this in terms of Vulnerability Risk Management, and the same holds true for Security Operations: the priority and proper handling of security events hinges on a clear understanding of the assets. Security Operations must have insight into the business value of assets to stay ahead of the curve. While security often comes down to the weakest link, and the weakest link isn’t always the host holding the crown jewels, how close the ninjas come to accessing critical business assets should drive the urgency of incident handling. If Security Operations understands the infrastructure in terms of the business it serves, it can be game changing. The key metric is the percentage of incidents that can be associated with a true, cataloged business asset. Tracking this metric over time gives a clear indicator of how well Security Operations has the insight to prioritize security events and protect the most valued business assets.
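
As a rough illustration of that key metric, here is a minimal sketch in Python of the coverage calculation; the asset catalog, incident records and field names are hypothetical, not any particular product’s data model:

    # Percentage of security incidents that can be tied to a cataloged
    # business asset with a known business value.
    asset_catalog = {
        "web-01": {"business_value": "high"},
        "db-07":  {"business_value": "critical"},
    }

    incidents = [
        {"id": 1, "asset_id": "web-01"},
        {"id": 2, "asset_id": "laptop-unknown"},   # not in the catalog
        {"id": 3, "asset_id": "db-07"},
    ]

    matched = sum(1 for i in incidents if i["asset_id"] in asset_catalog)
    coverage_pct = 100.0 * matched / len(incidents)
    print(f"Incidents tied to a cataloged asset: {coverage_pct:.1f}%")  # 66.7%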

 

Incident Throughput/Workload – When a security event triggers a response, the time it takes to escalate that alert from the generating system to eyes on glass is critical. Then, how fast that event is understood, prioritized and resolved can make all the difference between a minor security incident and a significant breach. In addition, the frontline analysts – those eyes on glass – must have the time and bandwidth to do more than fight fires. This breaks down into a few key metrics: duration from time of alert to escalation/identification, time to resolution, number of incidents per analyst, and overall analyst workload (time spent on individual incidents). These metrics give insight into the overall throughput of your security operations cycle. Each stage of an incident (first analysis, second analysis, resolution time, etc.) must be tracked over time to identify the rate at which security events move through the process and are addressed.
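
Here is a minimal sketch of how these throughput metrics could be computed from incident records; the timestamps, stage names and analyst fields are illustrative assumptions:

    from datetime import datetime
    from statistics import mean

    # Illustrative incident records; the timestamps mark the stages described above.
    incidents = [
        {"analyst": "a.chan", "alerted": "2014-02-10 09:00",
         "escalated": "2014-02-10 09:20", "resolved": "2014-02-10 11:05"},
        {"analyst": "b.ruiz", "alerted": "2014-02-10 10:15",
         "escalated": "2014-02-10 10:25", "resolved": "2014-02-10 12:40"},
    ]

    def minutes(start, end):
        """Elapsed minutes between two timestamps."""
        fmt = "%Y-%m-%d %H:%M"
        return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

    # Duration from alert to escalation, and from alert to resolution.
    time_to_escalation = mean(minutes(i["alerted"], i["escalated"]) for i in incidents)
    time_to_resolution = mean(minutes(i["alerted"], i["resolved"]) for i in incidents)

    # Incidents handled per analyst as a simple workload indicator.
    workload = {}
    for i in incidents:
        workload[i["analyst"]] = workload.get(i["analyst"], 0) + 1

    print(f"Average time to escalation: {time_to_escalation:.0f} minutes")
    print(f"Average time to resolution: {time_to_resolution:.0f} minutes")
    print(f"Incidents per analyst: {workload}")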

 

Remediation Time – Security Operations will not own the remediation or resolution of every security alert. At times, certain actions will need to be transitioned to external teams. Once an incident indicates some external action is needed, e.g., a host re-imaged due to a virus infection, or a configuration change or patch, the time it takes for these actions to come to closure is an indicator of how efficient adjacent processes are. Security Operations should understand these metrics to ensure they are handing off the information other groups need to close possible vulnerabilities. This metric should measure the length of time from identification to remediation; it is a measure of the efficiency of the hand-off to external processes and the true time it takes to reduce security risks.
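
A small sketch of the remediation-time metric, grouped by the external team that owns the hand-off; the record structure and team names are hypothetical:

    from datetime import date

    # Illustrative hand-off records: when the remediation action was identified,
    # which external team owns it, and when it was closed.
    handoffs = [
        {"team": "desktop-support", "identified": date(2014, 2, 3), "closed": date(2014, 2, 5)},
        {"team": "patch-mgmt",      "identified": date(2014, 2, 1), "closed": date(2014, 2, 10)},
        {"team": "desktop-support", "identified": date(2014, 2, 6), "closed": date(2014, 2, 7)},
    ]

    # Average days from identification to closure, per external team.
    by_team = {}
    for h in handoffs:
        by_team.setdefault(h["team"], []).append((h["closed"] - h["identified"]).days)

    for team, durations in by_team.items():
        print(f"{team}: average remediation time {sum(durations) / len(durations):.1f} days")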

 

Control Efficacy – I dedicated my last blog to highlighting the role Security Operations can play in providing tangible evidence of control effectiveness. The importance of this metric cannot be overstated. A Security Operations team’s role in tracking and measuring controls that work – and DON’T work – is critical to really understanding where the organization is succeeding and failing in reducing security risk. A regular readout, backed by a constant, living catalog of security controls and analysis, gives material insight into the effectiveness of the overall security controls program.
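
As a rough sketch of what such a regular readout might compute, assuming each incident has already been mapped to the controls that worked and the controls that missed it (the record format is an illustrative assumption):

    # Illustrative post-incident mappings: for each incident, which controls
    # detected or contained the threat, and which "should have" but did not.
    mappings = [
        {"incident": 101, "worked": ["IDS"],        "missed": ["Anti-Virus"]},
        {"incident": 102, "worked": ["Anti-Virus"], "missed": []},
        {"incident": 103, "worked": ["Help Desk"],  "missed": ["Anti-Virus", "IDS"]},
    ]

    # Tally how often each control worked versus missed.
    tally = {}
    for m in mappings:
        for c in m["worked"]:
            tally.setdefault(c, {"worked": 0, "missed": 0})["worked"] += 1
        for c in m["missed"]:
            tally.setdefault(c, {"worked": 0, "missed": 0})["missed"] += 1

    for control, counts in tally.items():
        total = counts["worked"] + counts["missed"]
        print(f"{control}: effective in {100.0 * counts['worked'] / total:.0f}% "
              f"of the incidents it was mapped to")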

 

SOC Program Management – Finally, Security Operations should be a constantly evolving and improving discipline within the organization. Measuring key operational metrics, such as the percentage of threat categories with documented triage procedures, the percentage of SOC personnel maintaining competency through training and continuing education, and the consistency of shift handover and communication, is a way to identify areas of improvement for the overall SOC program.
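
A minimal sketch of two of these program metrics, using hypothetical threat-category and personnel records:

    # Triage-procedure coverage across threat categories, and training
    # currency across SOC personnel. All records are illustrative.
    threat_categories = {
        "phishing": {"triage_procedure_documented": True},
        "malware":  {"triage_procedure_documented": True},
        "ddos":     {"triage_procedure_documented": False},
    }

    personnel = [
        {"name": "a.chan", "training_current": True},
        {"name": "b.ruiz", "training_current": False},
    ]

    triage_coverage = 100.0 * sum(
        1 for c in threat_categories.values() if c["triage_procedure_documented"]
    ) / len(threat_categories)

    training_pct = 100.0 * sum(1 for p in personnel if p["training_current"]) / len(personnel)

    print(f"Threat categories with documented triage procedures: {triage_coverage:.0f}%")
    print(f"SOC personnel current on training: {training_pct:.0f}%")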

 

The more formal and disciplined the Security Operations function becomes, the more metrics can, and should, be tracked and measured. Metrics greatly improve the understanding of the efficiency and effectiveness of the threat detection and response process within the organization. As with all metrics, it takes a commitment to measure consistently over time to produce meaningful conclusions and to understand where the systemic issues and possible areas of improvement are.

 

As with all metrics programs, the main question to ask yourself is “Can I measure, track and report on these metrics today?”  If so, then you most likely have a pretty solid process in place and can provide management with the basics in terms of progress, efficiency and effectiveness. If not, then the question is how can you put in place the right infrastructure to start measuring Metrics that Matter?

 

To find out how your Security Operations Management team can measure these metrics, research our new module or contact your RSA representative.

Don’t miss tomorrow’s webcast (2/13/14 at 11 AM EST), “Managing Third Party Risk in the Extended Enterprise” where Michael Rasmussen, GRC Pundit with GRC 20/20, and I will be discussing:

 

  • The clear advantages gained by businesses that can effectively manage the broad spectrum of third party risks
  • The elements of a strong governance process that promotes an integrated and consistent approach to third party risk and performance management, and establishes the necessary elements to provide stakeholder confidence
  • How RSA Archer solutions provide answers to third party risk and performance questions, promote strong governance, and capitalize on the advantages of effective third party management

 

To arrange an RSA Archer demo, contact: 1-888-539-EGRC

I’m pleased to announce the latest localized versions of the Archer Control Standards library. The Control Standards translations have been updated for all languages currently offered on the Archer platform. Customers with an active support contract can contact their sales representative or Customer Support to obtain import packs for the language of their choice.

 

Thank you!

Or, as they say…Merci, Danke, 谢谢, Gracias, Спасибо, ありがとう, Obrigado, Grazie!

RSA Archer was positioned as a Leader in the 2014 Forrester Wave: GRC Platforms report, issued last week by independent research firm Forrester Research. The report shows a very positive evaluation of the RSA Archer GRC Platform across the board, with a focus on reviewing specific Platform features. It’s very clear that our customers once again provided Forrester with terrific feedback on the RSA Archer GRC Platform and solution offerings.

 

In addition to ranking GRC platforms, the Forrester report also noted the end of distinctly defined GRC Platform submarkets, such as IT GRC versus Enterprise GRC, due to growing customer interest in a consolidated platform for diverse use cases. Leaders were shown to support these diverse use cases and possess the flexibility to help customers address changing market and business demands.

 

We greatly appreciate the concerted efforts of our customers and various RSA Archer groups that came together to make the Leader ranking by Forrester possible.  We’re very excited that RSA Archer continues to be the only GRC solution provider rated as Leader across both the Forrester Wave GRC report and all Gartner reports for IT GRC, Enterprise GRC, and Business Continuity Management Planning.

 

We invite you to download Gartner reports on IT GRC, Enterprise GRC and Business Continuity Management.

The Tuckman Model of Group Development says that it takes time, effort, and pain to align and be productive as a combined function or team. The alignment process evolves from simply bringing similar groups, functions or processes together (forming); to determining the best approach moving forward (storming); to aligning (norming), and ultimately becoming efficient (performing).

 

The continuum of aligning Internal Audit (IA) with Governance, Risk and Compliance (GRC) functions follows the same steps, and I've added some challenges that span the four stages – Visibility, Efficiency, Accountability and Collaboration. These areas, before they result in benefits, start out as growing pains during the alignment process.

 

Emerging Visibility – IA or GRC groups begin to identify other oversight functions performing similar activities, yet with different and sometimes competing priorities.  Initial reactions are to protect the empire instead of aligning with these groups. It’s all new to everyone and to further complicate matters, there are political, geographic, or financial (e.g., funding) factors that stand in the way of alignment.

 

Inefficiency – With increased visibility into these multiple oversight groups comes the realization that duplication exists.  This equates to inefficiency because of duplicate resources, processes and misaligned objectives.  In some cases these groups may be working against each other, not intentionally, but as these factors come to light the redundancies and inefficiencies become exposed.

 

Lacking Accountability – Closely following the visibility of these separate GRC functions is an analysis of their objectives.  Looking at the whole often results in the disclosure of gaps, or areas no one group is focused on.  This could be certain risk categories, control exposures, geographies, or process areas.  The question then becomes which group needs to address these gaps.

 

Lack of Collaboration – The question quickly becomes, “Why aren’t these groups working together?” and “How much time, resources and money have we been wasting doing the same things?” This lack of collaboration also exposes more gaps and missing accountability.

 

One of the first questions for IA is whether they should align with these other GRC groups, and how. Further, how closely (if at all) should they align their approaches, thresholds, and decision criteria with others? For example, IA conducts its annual audit universe risk assessment (AURA) by identifying potential auditable entities, assessing their criticality, and determining which entities warrant audit engagements. Other groups, such as Enterprise Risk Management (ERM), also perform risk assessments, which drive activities such as risk evaluation, gap identification and remediation plans. It stands to reason that IA and ERM should align at least some level of their assessment approaches so that risks are evaluated under the same lens and the two groups can leverage each other’s results.

 

Other intersections exist where IA could leverage other groups’ work and vice versa. Automated tools can help in this process, since approaches can be applied more consistently, and results, along with supporting documentation, are more visible and accessible. Multiple groups can access and leverage the information, and alignment is better achieved. Another factor in this dilemma is the use of tools and how to align them across these groups. If a common technology solution is used, IA must weigh the benefits of sharing information against limiting access to areas such as privileged and confidential audit projects.

 

In my next blog in this series, I'll talk about striking the right balance and moving forward!

When I started this blog series, I referenced our latest SBIC (Security Business Innovation Council) report – Transforming Information Security: Future-Proofing Processes. One of the points covered in that report highlighted the need for evidence-based controls assurance. The need for a more tangible, fact-based approach to measuring controls within an organization is fueled by the reality that empirical data provides a higher level of confidence in the effectiveness of the control. While the traditional audit methods of sampling and validation provide point-in-time assessments – and, many times, valuable face time between control testers and control owners – the fact is that, given the velocity of security threats in today’s environment, this style of control assurance is past its prime. The report stresses that ongoing collection of relevant data to test the efficacy of controls is necessary to “future proof” security processes.

 

While organizations strategize on putting in place the necessary data collection and aggregation points to constantly poll controls, there are many opportunities to improve the ongoing assessment of controls using existing processes. The team managing security incidents within Security Operations is in an excellent position to provide this type of visibility. Those individuals see evidence of control efficacy every time an alert crosses their screen - even an instance of a virus infection on an endpoint provides insight. Did the local Anti-Virus find and quarantine the virus? If so, that control worked. If it didn’t, why not? What control did identify the virus? Was it the end user reporting an issue to the help desk, which investigated and identified the virus? Then the virus education part of the security awareness program seems to be working. Somewhere along the line a control worked, and many times one or more controls missed an opportunity.

 

This leads to an important part of today’s security operations management strategy: post-incident analysis and control efficacy. The concept of post-incident analysis is most often relegated to the big incidents. The post mortem held after a major incident gives the organization lots to think about: what went wrong, what went right, what needs to change, etc. However, it is those daily – dare I say mundane – alerts and events that really give important insight into the operational fabric of the security controls. The hard part is implementing post-incident analysis for every alert tracked down and resolved during a day’s work. It isn’t uncommon for help desk operations to do a quick root cause breakdown on trouble tickets coming in from end users. Security Operations should be the same.

 

To implement even a simple routine for post-security event analysis, the organization should:

  • Catalog a basic set of common security controls.  This should be a collection of both technical and process oriented controls.  It doesn’t have to be an endless list of every possible security control – even beginning with the basics is a start.  Catalog the major security tools implemented (AV, Firewall, IDS, etc.) along with the most common escalation or security monitoring processes (security awareness, help desk, access control reviews, etc.).
  • Institute a quick post-event process to map security alerts and events to this catalog.   If the control catalog is reasonable, then it should not add an administrative overhead to the process.  It also helps if you have a system (such as our Security Operations Management module) that has this post-event process built in.
  • Include controls that worked and didn’t work in the mapping. Mapping the controls that were the source of the alert isn’t the only objective here; we want to find those controls that “should” have worked but for some reason didn’t. The virus missed by the AV scan is an easy one. Identifying other “missed control opportunities” may require a little more probing into the event. However, a key goal is to identify what didn’t work and why. It could be that the control needs a little tuning, or it could indicate a much bigger issue – these are the things we are looking for.
  • Enhance the catalog over time. Obviously, there will be controls that pop onto the radar based on the experiences of the security team, and other controls will be deemed unnecessary in the catalog. The catalog should be a living document within the event process.

Over time, the catalog will begin to reveal the effective controls that are consistently identifying and escalating security issues. It should also reveal the controls that are missing opportunities to prevent, detect or contain security threats. This type of information is not only key to understanding control efficacy; it can go a long way toward rationalizing investments and providing overall control assurance based on solid evidence.
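
For illustration only, here is a minimal sketch of the routine above: a basic control catalog, a quick post-event mapping of controls that worked and controls that missed, and a catalog that grows over time. The data structures and names are assumptions made for the sketch, not the Security Operations Management data model:

    # Step 1: a basic catalog of common technical and process controls.
    control_catalog = {"Anti-Virus", "Firewall", "IDS", "Security Awareness", "Help Desk"}

    # Steps 2 and 3: a quick post-event record that maps each resolved alert to
    # the controls that worked and the controls that should have worked but didn't.
    post_event_records = []

    def record_post_event(alert_id, worked, missed):
        """Append a post-event mapping, ignoring controls not yet in the catalog."""
        post_event_records.append({
            "alert": alert_id,
            "worked": [c for c in worked if c in control_catalog],
            "missed": [c for c in missed if c in control_catalog],
        })

    # Step 4: the catalog is a living document; add controls as they surface.
    control_catalog.add("Web Proxy")

    record_post_event("ALERT-2041",
                      worked=["Help Desk", "Security Awareness"],
                      missed=["Anti-Virus"])
    record_post_event("ALERT-2042", worked=["IDS"], missed=["Web Proxy"])

    print(post_event_records)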

 

To see how Control Efficacy is incorporated into our SOC Readiness process in our Security Operations Management module, along with many other key SOC processes, take a look at our new practitioner guide.
