All Places > Products > RSA Archer Suite > Blog > 2013 > February

Did you know that Business Continuity Awareness Week 2013 takes place March 18–22, 2013? The theme this year is ‘Business Continuity for the risks you can see and the ones you can’t.’

 

The theme announcement was made by the Business Continuity Institute (BCI), the organization that coordinates Business Continuity Awareness Week. The BCI explains that “In a world of increasing uncertainty and constant change, organizations are confronted with an ever growing range of risks to deal with. Business continuity enables an organization to increase its capability to respond to any existing, emerging or unknown risk by focusing on mitigating the impact of any disruption on the most urgent and high priority activities.”

 

RSA Archer helps customers focus on risk management, whether it be continuity-related risk, operational risk, audit risk, enterprise risk or many other risk types. Business Continuity (BC) risk has been part of the Governance, Risk and Compliance (GRC) picture for years, but the challenge has been, and still is, how to better integrate with other related risk disciplines by driving common approaches to identify, mitigate, monitor and treat risks. This should then drive smarter BC strategies that further reduce risk.

 

Here's an example of this concept in practice today. The BC profession has been dedicated to developing and implementing effective business and IT recovery plans and has done so quite effectively. However, organizations are increasingly looking at BC groups as an integral part of their Enterprise Risk Management (ERM) framework and resources, expanding their importance and scope, yet also increasing the range and complexity of risks they have to deal with. BC teams are being expected to think outside the box of typical BC-type risks (e.g., what happens if we lose a facility or a process is disrupted) to such areas as upstream supply chain risk (e.g., a critical partner's critical partner is impacted), extended regional risks (e.g., what happens in China affects India, which affects our organization in London), or regulatory risk (e.g., a distribution facility is shut down, so where do we ship from?). Just from these examples, we see risks becoming more diverse, more complex, and less predictable, and, I would add, even more critical and impactful.

 

Further to this point, I'm writing this while attending the 2013 RSA Conference in San Francisco. The conference focuses on information security and related topics, and there has been an amazing array of speakers, excellent sessions, and partner presentations. I mention this because I'm learning so much more about how topics seemingly unrelated to BCM are indeed related and represent risks we either choose to mitigate or ignore.

 

Finally, let's talk about BCI's statement of focusing on risks that are most urgent and of highest priority. This is absolutely critical, but the challenge is how to do this proactively before you're in that dreaded "knee-jerk reaction" mode. As every good BC professional or risk manager knows, some of that inevitably happens, but there are ways to plan ahead. One way is to really understand your priorities, again not just for BC but for the organization. What's important from a strategic standpoint to the organization should be what's important to the BC program, and should drive your risk planning and recovery priorities. Often, in the heat of the moment you're pressured to throw what you've done and planned for out the window. It's always important to be flexible and adaptive, but also to have confidence in your decisions and planning.

 

Hopefully this has given you some food for thought. So, for this upcoming Business Continuity Awareness Week, take the time to evaluate your BC program and approach. Strengthen BC by making it an integral part of your larger ERM approach.

 

PS, I'm looking forward to attending DRJ Spring World in Orlando, Florida from March 17-20. If you're going to be there let me know - I'd love to connect with you.

So, in Part 1 of this series I covered a few terms and the relevant documents around CM. This time I am going to talk about models for actually doing it. I will cover a few ways that federal entities have done CM or suggest how to do it, then I will just summarize with a few take-aways you can apply to your custom CM use case.

iPost - The State Department created their own model for CM and called it iPost. iPost was mature and buzz-worthy by 2009/2010 and has been widely adopted since then; in addition to State, users like NASA and the Army have adopted the model. It is based mostly on monitoring the security status of Microsoft Windows machines, using factors like vulnerability and configuration counts, patch status, and the age of AV signatures and passwords. This model was documented in iPost: Implementing Continuous Risk Monitoring at the Department of State.

The iPost model is about monitoring systems, providing numeric risk metrics on a per-host/device basis, and listing hosts by their relative risk rankings. This is all done within a dashboard that all stakeholders can see, which is important because it relies on a “name and shame” dynamic, ranking the best and worst and showing who they belong to. This use of pride and peer pressure drives faster improvements, and because it ranks systems ordinally by highest risk, it also drives improvements to the places where they are needed first (“worst first”). iPost made the State Department's risk metrics plummet. The model had to be altered for use by others because it was written so specifically for the State Department's infrastructure and policies. The more generic, more “accessible” version of iPost was rebranded as Portable Risk Score Manager (PRSM) and was made available to the public.
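To make the "worst first" idea concrete, here is a minimal sketch of that style of per-host scoring and ranking. This is not the actual iPost formula (which is far more detailed); the host names, factor weights, and factor values are all hypothetical.

```python
# Illustrative "worst first" host risk ranking in the spirit of iPost.
# Factors and weights below are hypothetical, not State's actual scoring.

HYPOTHETICAL_WEIGHTS = {
    "vuln_count": 10.0,        # open vulnerabilities on the host
    "config_findings": 5.0,    # configuration deviations from policy
    "days_since_patch": 0.5,   # patch staleness
    "av_signature_age": 1.0,   # days since AV signatures were updated
}

hosts = [
    {"name": "ws-001", "owner": "Finance", "vuln_count": 4,
     "config_findings": 2, "days_since_patch": 30, "av_signature_age": 2},
    {"name": "srv-db1", "owner": "IT Ops", "vuln_count": 12,
     "config_findings": 7, "days_since_patch": 90, "av_signature_age": 14},
    {"name": "ws-042", "owner": "HR", "vuln_count": 1,
     "config_findings": 0, "days_since_patch": 7, "av_signature_age": 1},
]

def risk_score(host):
    """Weighted sum of risk factors; higher means riskier."""
    return sum(w * host[factor] for factor, w in HYPOTHETICAL_WEIGHTS.items())

# Rank worst first so remediation effort goes where it is needed most,
# and show the owner alongside each host ("name and shame").
ranked = sorted(hosts, key=risk_score, reverse=True)
for rank, host in enumerate(ranked, start=1):
    print(f"{rank}. {host['name']} ({host['owner']}): {risk_score(host):.1f}")
```

The ordinal ranking, not the absolute score, is what drives behavior: owners at the bottom of the list can see exactly which hosts to fix to move up.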

The best part of this model is that it drives real improvements, and quickly. The worst parts are that it focuses on a narrow set of risk factors, it is very Microsoft-centric, and it only accounts for a very small percentage of the controls that need to be monitored (it does not address manual assessment).

CAESARS (FE) - Next, the Continuous Asset Evaluation, Situational Awareness, and Risk Scoring (CAESARS) model was developed and documented by the Department of Homeland Security, using inputs from the State Department, the IRS, and the Department of Justice. It was a more mature and extensible version of iPost, and was embraced as the basis for a set of new interagency reports (known as NISTIRs) to be written by NIST with input from NSA and DHS. I covered these in Part 1 of this series (NISTIR 7756, 7799, and 7800).

The NIST-adapted version of CAESARS is called CAESARS Framework Extension (FE). It is designed to make the DHS CAESARS model even more extensible and scalable to the very largest organizations, and to address the many non-automatable controls. CAESARS FE defines the components that should be present in a mature CM implementation and the roles of those components, called subsystems: Presentation/Reporting, Content, Collection, Data Aggregation, Analysis/Scoring, and the Task Manager. Unlike iPost, CAESARS FE acknowledges that real CM “will require some human data collection effort”.

The best part of the CAESARS model is that it defines all of the possible inputs and outputs from each of the six subsystems to the others. It shows an implementer how many disparate monitoring tools can work together, maintain data integrity, and stay synchronized. It is the most technical model and the most practical for operational considerations. The worst part of the model is its limited consideration of non-automatable controls and of how to tie monitoring to compliance.
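As a rough mental model of those subsystems, the sketch below names the six and traces one plausible path for an automated check through them. The specific flow shown is illustrative only; the full set of interfaces among the subsystems is what NISTIR 7799 specifies.

```python
# A simplified sketch of the six CAESARS FE subsystems and one plausible
# data flow among them. The flow is illustrative; the complete interface
# specifications live in NISTIR 7799.

SUBSYSTEMS = [
    "Presentation/Reporting",
    "Content",
    "Collection",
    "Data Aggregation",
    "Analysis/Scoring",
    "Task Manager",
]

# One illustrative path for an automated check: the Task Manager directs
# Collection to run a check defined in Content; results are aggregated,
# scored, and finally surfaced to stakeholders.
flow = [
    ("Task Manager", "Collection"),
    ("Content", "Collection"),
    ("Collection", "Data Aggregation"),
    ("Data Aggregation", "Analysis/Scoring"),
    ("Analysis/Scoring", "Presentation/Reporting"),
]

for src, dst in flow:
    # Every hop must connect two defined subsystems.
    assert src in SUBSYSTEMS and dst in SUBSYSTEMS
    print(f"{src} -> {dst}")
```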

NIST SP 800-37 / 800-137 – NIST began to define a CM model with their Risk Management Framework (RMF), essentially their new way of saying Certification and Accreditation (C&A), or what some people are calling Assessment and Authorization (A&A). NIST spent less effort describing the technical details of the model and more on how CM integrates with the RMF, FISMA compliance, and risk management overall.

SP 800-137 is NIST’s publication dedicated to CM. It describes the steps to develop a CM program and implement it. Of all CM documents, 800-137 spends the most time on the subject of manual control assessment as part of the CM scheme. It also includes a section on factors influencing the frequency of assessments which I will cover in detail in Part 3 of this series.

As you can see, each of the models has different strengths and focuses. If you want a CM capability that delivers no-nonsense risk reduction, and you have a Windows-heavy IT environment, iPost can help you. If you are looking for a way to tie all of your existing sensors and tools into a more robust CM model, CAESARS FE, especially NISTIR 7799, should influence your design. If you are more concerned about tying CM to FISMA compliance and/or how you will handle your manual control assessments, NIST 800-137 has the most to offer you. If you are interested in seeing a demo of how Archer's Federal solution can accommodate all of these models, or if you have any questions/comments, email me.

Come back the week of March 4th to see the third and last installment of this series, which will cover CM implementation considerations. Thanks!

Chris

To continue with my series on the Next Generation of Security Operations, I want to look at how well security operations are positioned for the be-all, end-all of security – the actual Security Breach.  Security incidents have a life of their own.  How it all turns out is very dependent on how soon the problem is detected.   Initial detection and preventing an attack early in the ‘kill chain’ can minimize or even stop any issue from escalating.  However, that is not always possible and security operations must be prepared to escalate throughout the entire process until closure.   There are some traditional stages when it comes to Security Incident response.

 

Stage 1: Security Event: The first stage is the security event. Many times this can be triggered from an individual event or a series of system events identified through some monitoring function. A few failed logins, some system errors thrown from an application, a log file growing quicker than usual… The types of events are numerous and the cause can range from innocuous hardware failures to a full-blown attack. At this point, little is known except that something is indicating a possible security problem.

 

Stage 2: Security Incident:  Once an event, or series of events, is identified and the cause is pointing to an active security issue, the event is escalated and becomes part of an incident response.

 

These first two stages are traditional Security Incident Management.  Security Incident Management is the process by which IT security related events are reported, cataloged, triaged and resolved.   This process will include gathering data on the system events, analyzing the information relevant to the event, assigning prioritization and documenting the response.  
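The catalog-triage-prioritize loop described above can be sketched in a few lines. This is an illustrative toy, not any particular SIEM or the Archer data model: the event fields, the severity-times-criticality scoring rule, and the priority thresholds are all hypothetical.

```python
# A minimal triage sketch: catalog an incoming security event and assign
# a priority from event severity and asset criticality. The scoring rule
# and thresholds are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SecurityEvent:
    source: str                 # host or application that raised the event
    description: str
    severity: int               # 1 (low) .. 5 (high), from the monitoring tool
    asset_criticality: int      # 1 (low) .. 5 (high), from the asset inventory
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def triage(event: SecurityEvent) -> str:
    """Assign a priority; the highest priority escalates to an incident."""
    score = event.severity * event.asset_criticality
    if score >= 15:
        return "P1 - escalate to incident response"
    if score >= 8:
        return "P2 - investigate within SLA"
    return "P3 - log and monitor"

evt = SecurityEvent("app-server-3", "repeated failed logins",
                    severity=3, asset_criticality=5)
print(triage(evt))  # 3 x 5 = 15, so this one escalates to P1
```

Note that asset criticality comes from outside the event itself, which previews the business-context questions raised later: triage can only weight an event correctly if the responder knows what the affected system actually does.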

 

Stage 3: Security Investigation:  Investigations are the next step and include the processes by which larger investigations are conducted around IT security incidents.  These investigations can include larger data breaches, system compromises, internal investigations such as unacceptable use of company resources or other security incidents that require a larger amount of time or investigative procedures.   An investigation can result from a singular IT security incident or multiple incidents that are connected.

 

Organizations with mature security response plans have typically laid out these first three stages.  However, what happens when the Security Incident is bigger than usual? From the stories we see in the news, security incidents can spiral into significant crises very quickly.  The next stages of security incidents are the areas where companies need to evaluate their capabilities.

 

Stage 3a: Breach Management: Did the security incident involve sensitive personal information or some other data subject to mandated disclosure reporting? If so, that security investigation now needs Breach Management – the notification of appropriate regulatory bodies or affected individuals. This stage needs to be handled not only in compliance with legislative obligations but also in a way that manages reputational risks.

 

Stage 3b: Crisis Management: If the security incident mushrooms into a serious event such as significant data disclosure or major business disruption, the company may need to go into Crisis Management mode.   Public relations, legal counsel, corporate governance boards or other entities may need to be engaged to sort out the problem and manage reputational, legal and business risks.

 

Security Operations should begin looking into these broader processes and can take a lesson from other GRC related processes such as the Business Continuity program.   To start this process, you can begin by asking a few key questions:

  • Does security operations understand the data profiles that would trigger broader Breach Management activities? 
  • How would the operations personnel know that a system with a security issue stores or processes regulatory related data?
  • Does the business context around IT devices exist and if so, does it give the security operations function the capability to quickly determine that a possible data breach might lead to regulatory or compliance notifications?
  • Are the other key stakeholders like Public Relations, Human Resources and Business Operations prepared to assist if a security issue mushrooms into a full-blown crisis? Who are the resources that will be involved and what is the process to manage the crisis?

While the transition from Security Event to Crisis may happen very infrequently or, if you are lucky, not at all, companies should be putting these connections in place.

Greetings from the RSA Archer GRC nerve center! There are lots of exciting things happening which I’m eager to share with you as they unfold. In the meantime let’s continue our recap of the Compliance Week forum on organizational policy management that I participated in with Michael Rasmussen and OCEG.

 

We began our discussion in the first segment with an overview of regulatory change management and the importance of establishing and maintaining a strong diligence program to bolster compliance. To measure we must first detect; tracking internal and external change to the business plays a critical role in enabling an organization to remain nimble. The burden of regulation will only increase going forward. As we learned last time, the reality of climbing this steepening mountain has emerged as one of the key stated risks that trouble executive decision makers.

 

Keeping pace with change is only one aspect. What do we do about it? The legal and regulatory landscape shifting beneath our feet is one thing, but the business' foundation itself changes as well. What happens when these intersect or, better yet, collide? How does this concert of change coalesce into an overall model of risk? Ultimately it comes back to the policies that define and drive how the business functions. Does your organization conduct a business impact analysis on significant changes impacting policy? When we asked this same question of our panel audience, 48% of organizations responded they did not. On the surface it's troubling that nearly half of organizations surveyed do not formalize this process, but with the blistering pace of business, global economic volatility, and the constant swell of changes it's an understandable struggle to stay ahead of the curve. The question is how long an organization can roll the dice before they eventually fall the wrong way.

 

For example, suppose Company XYZ operates in a heavily regulated sector but over the past few years has been diversifying into different industries and markets. Now the XYZ execs decide to acquire a specialty alloy parts manufacturer to support a new product they intend to bring to market. Although a pain, compliance was always something XYZ was able to keep under control. They have a couple of key stakeholders that do a good job of keeping watch and handling it, and the regulators seem happy enough.

 

Right there we have a problem brewing. There’s little transparency into the process of compliance and a big chunk of success is wrapped up in a handful of people doing things in a silo. So what happens when XYZ turns in this new direction and executes the acquisition? Along with the patents, goodwill, and receivables, Company XYZ just unknowingly inherited a ton of new environmental regulations to boot. Because the language of risk within XYZ is not well established, there is no common thread to weave impactful elements together throughout the organization and raise an alert when a gap is encountered. Does your organization have a defined taxonomy of risks and regulations mapped to key subject matter experts and stakeholders? If the answer is no, you’re not alone. 52% of respondents we polled didn’t have any kind of taxonomy or structured process either.
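The value of a risk taxonomy mapped to stakeholders can be shown with a trivially small sketch. The entries below are hypothetical, keyed to the Company XYZ story: the point is that an obligation with no mapped owner surfaces as an explicit gap rather than going unnoticed during something like an acquisition.

```python
# A sketch of a risk/regulation taxonomy mapped to accountable owners.
# Entries are hypothetical, loosely following the Company XYZ example.

taxonomy = {
    "SEC reporting":           {"category": "Regulatory", "owner": "CFO office"},
    "Data privacy":            {"category": "Regulatory", "owner": "Legal"},
    "Supply chain disruption": {"category": "Operational", "owner": "Procurement"},
    # Inherited via the acquisition, but never assigned an owner:
    "EPA environmental rules": {"category": "Regulatory", "owner": None},
}

def find_gaps(tax):
    """Return taxonomy entries with no accountable stakeholder mapped."""
    return [name for name, entry in tax.items() if not entry["owner"]]

for gap in find_gaps(taxonomy):
    print(f"ALERT: no stakeholder mapped for '{gap}'")
```

Even this toy version makes the common thread visible: when the acquisition adds a new obligation to the taxonomy, the gap check raises it immediately instead of leaving it buried in a silo.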

 

For fun let's say hypothetically, as this acquisition deal is wrapping up, that the SEC conveniently announces new revisions to regulations that govern a separate XYZ venture which also happens to be their primary revenue stream. Although these changes had been on the horizon for some time, unfortunately XYZ's pseudo-compliance team doesn't have any kind of continuous governance program and was caught off guard. Now the team is completely bogged down trying to scramble together an impact analysis and response. Any M&A questions drop by the wayside and the alloy business acquisition sails through without a second thought.

 

Does any of this sound familiar? It should. The saying "when it rains, it pours" comes to mind, not to mention Murphy and his laws. These things happen all the time. The ability to react and adapt can often mean the difference between sinking and swimming for a modern business. It's not unusual for the mistake that becomes the undoing to have been made months or years ahead of time in a seemingly innocuous or unrelated endeavor. Companies that maintain sound operational policies are always in a stronger position to respond to change. What would happen to XYZ if they learned post-acquisition that their precious alloy manufacturer was positioned to run afoul of new EPA mandates? An enterprise program with policies and standards for risk-based acquisition analyses as a natural part of its embedded "system of compliance" would have exposed this risk before it could do damage.

 

When the only constant is change, organizational leaders must accept that it very often won't be on their terms. The best way to hedge against this unknown is to proactively prioritize policy and compliance as the institutional guardians of corporate diligence. Together with sound risk management practices, this becomes a powerful combination that yields value far beyond its cost. Organizations in very highly regulated industries have already learned painful lessons and are embracing this new approach. However, any company of any size, in any industry, can benefit. Impact on policy is impact on the business, plain and simple. Analyzing those impacts and their ramifications is nothing more than intelligence gathering for the executive decision makers. Establishing a common taxonomy of risk within the organization is the best and often only way to piece everything together in a way that makes sense.

 

How does this contrast with your own organization’s practices? What resonates best with your executive leaders? Are there potential regulatory threats looming on the horizon and if so what do you need to examine and adapt accordingly? I’d like to hear from you and if there’s a way we can help then let’s get connected and start working the problem. From there we can begin to establish consistency and accountability, something I’ll discuss further next time.

It seems like I am seeing more and more discussions in the press and blogs about “Risk Culture” and how important it is to have a “Strong” one.  This is particularly common whenever there is a high profile negative risk event reported in the press and, unfortunately, there continue to be a lot of those.  This activity has also spurred several consulting companies to launch interesting surveys designed to measure the strength of organizations’ risk culture.  With all of this talk of risk culture and my predisposition to thinking that culture has something to do with Sociology, I can’t resist delving deeper.

 

Let’s break apart the term Risk Culture. ISO 31000 defines risk as the “effect of uncertainty on objectives.” Merriam-Webster defines culture as “the set of shared attitudes, values, goals, and practices that characterizes an institution or organization” and “…consists of language, ideas, beliefs, customs, taboos, codes, institutions, tools, techniques, works of art, rituals, ceremonies, and symbols”. The conjunction of these two definitions alone does not completely define the way in which the term “Risk Culture” is being used, and it most certainly does not inform us about what might be considered a “strong” or “weak” risk culture. What is really meant by an organization having a strong risk culture is a culture not only geared toward minimizing the “effect of uncertainty on objectives” but toward objectives that are in some manner virtuous – such as complying with legal directives, following moral codes of conduct, or producing a sufficient principled return for shareholders. So, having a strong risk culture in this sense is a good thing to have!

 

In practical terms, here are some areas to consider when evaluating and strengthening your risk culture:

 

  • The scope of culture is not only internal to your organization but includes third parties acting on behalf of your organization
  • Attitudes, values (including code of conduct), goals, and practices need to be aligned across the organization, including aligning individual employee incentives to the desired attitudes, values, goals, and practices
  • Attitudes, values, goals, and practices cannot be shared if they are not communicated on a regular basis throughout the organization, from the board of directors to every employee of the organization.  There are two aspects of this communication: sharing the organization’s objectives and sharing the organization’s risk management practices
  • The language of risk (risk taxonomy), how the organization views risk (its appetites and tolerances), and the techniques used to assess, decision, and monitor risk should be clear to everyone in the organization who interacts with risk
  • Risk taboos should also be clear and enforced at all levels of the organization, without exception. Enforcement can be as simple as each manager abiding by, and reinforcing with their direct reports, the day-to-day use of agreed-upon risk management practices, or as difficult as terminating employees who breach risk-related protocols or limits. This is tone at the top in practice
  • Tools used to identify, assess, decision, treat, and monitor risk should reinforce risk culture in every respect including risk language, risk practice, and identification and response to taboo risk events
  • Employees should be recognized for risk well managed.  Again, this may be as simple as the boss giving an employee a pat on the back, recognizing an employee among peers, or financial rewards for demonstrating strong risk management.

Hello everybody! We are very pleased to announce the next installment of the RSA Archer eGRC Content Library. First and foremost, some clarification on quarterly intervals: previously the quarterly content updates were retroactive for the prior quarter (e.g., the Q4 update would go out in January). Originally this was to allow a full quarter's worth of development time at the end of the year, but it could also be confusing and ended up being more trouble than it was worth. Beginning this year, the quarterly update name will coincide with the quarter in which it falls. As such, to get things aligned, this Q1-2013 update is also the Q4-2012 update. Clear as mud, right? Not to worry, it will get better. The Q2 update will go out in April, the Q3 in September, and so on. Hopefully all will be right with the world after that.

 

Our focus this quarter was NIST SP 800-53, or more specifically 53A, officially titled Guide for Assessing the Security Controls in Federal Information Systems and Organizations. For those of you unfamiliar with “53 Alpha”, it’s essentially the specialized assessment component of the NIST Special Publication 800-53 set of security controls. It describes the testing and evaluation procedures for each 800-53 control and is used to identify and prioritize control selection for a given asset.

 

NIST SP 800-53 Revision 3 was already an authoritative source in Archer and we’re pleased to be able to offer the companion control assessment resource as a set of integrated Archer Control Procedures. These control procedures have also been cross-mapped to both Archer Control Standards and the SP 800-53 authoritative source. As such, a new version of the Control Standards library and the NIST 800-53 Authoritative Source are also included in this bundle. (Note: The authoritative source content itself has not changed. The purpose of re-releasing the authoritative source import is to slipstream the updated mappings to Archer Control Standards and the new mappings to the companion Archer Control Procedures.)

 

At this point you’re probably realizing that the direct relationship between Authoritative Sources and Control Procedures doesn’t exist today, so how will the import work? The beauty of having such a flexible platform like Archer is that adding this new cross-reference is a breeze. The original 800-53A taxonomy also contains several other useful categorization elements to support filtering and classification activities. In order to accommodate this and enable these 800-53A control procedures to be fully functional in Archer, several new values list fields are being added to the Archer Control Procedures application. A future platform release will make these additions permanent. The good news is you don’t have to wait to take advantage. You can very easily add these additional fields manually today with minimal time and effort. The field specs can be found in either this quarter’s release notes or the import tip sheet.

 

Not ready to make changes to your Archer instance just yet? No worries, this 800-53A base control procedure content will still import into a default instance of Archer without these new supplemental field values. This will establish the control records and language but the advanced 53A taxonomy filtering and direct mapping back to the authoritative source won’t be enabled.

 

Here's a snapshot of this quarter's full bundle:

 

  • Authoritative Sources:
    • NIST SP 800-53 (mapping release only – same authoritative source content)
  • Control Standards:
    • 20+ updates and new standards
  • Control Procedures:
    • 656 new control procedures

 

Hopefully you will find this new content a useful addition to your eGRC library. I’m especially interested in how you’re able to use the 800-53A procedures to drive stronger ITGRC compliance in your organization, so please send me your feedback! The Release Notes are posted on the RSA Archer Exchange and content import packs are available through Customer Support.
