
Hello from baby central! If you detected radio silence from me lately, it’s with good reason. We welcomed a little bundle of joy into the world recently, and suffice it to say I’ve been busy at home! Nevertheless, I’m back and eager to wrap up this conversation on policy management so we can move on to other exciting things on the horizon.


The backdrop for this series was a multi-part panel forum I participated in for OCEG and Compliance Week, led by the venerable Michael Rasmussen. We began with a look at the effect external business changes can have on the enterprise policy management program. From there we moved from detecting those changes to the cohesive impact assessment and policy change workflow processes necessary for a strong diligence program. Now we’ll tie it together with a look at the ongoing maintenance aspects of a robust policy management program, including monitoring and accountability.

 

Beginning with policy measurement and evaluation, it goes without saying that effective policy management requires periodic review. Conventional wisdom tells us policies should be reviewed as needed to keep pace with changes in the business, and otherwise annually at a minimum. Let’s explore why in more detail from the perspective of somebody who thinks it’s unnecessary busywork (like Morty K., CEO of Morty’s International Widget Emporium, for instance). From Morty’s perspective the business is the same as it was last year. He still sells widgets, his tolerance for acceptable use hasn’t changed, and so on. But, like everybody else, he needs to cut costs wherever he can. So he challenges the value of bothering his people to keep up appearances with some administrative review that increases his costs without a tangible return on the investment.

 

Okay, so Morty’s thrown down the gauntlet. Now let’s respond. All other things being equal, those administrative gymnastics actually go a long way toward demonstrating diligence, and good diligence reduces exposure risk and compliance costs. Even if policies happen to be out of step with the business at any given time of examination, it’s hard to argue a company isn’t trying to be diligent if it can produce a consistent trail of reviews. Think of it as cheap insurance, for no more than the cost of a few hours per year. That doesn’t mean there won’t be findings around accuracy, but that’s a whole lot better than having no policies at all, which is the de facto opinion of policies that are never reviewed. Plus there’s the intangible benefit of increased operational stability through raised cultural awareness and stakeholder participation. So, when it comes to the “burden” of annual reviews, to quote Nike, “just do it.”

 

In terms of active diligence and regular review cycles, the following factors can influence whether policy revisions are required:

  • Have changes to the business occurred which may affect this policy?
  • Are there regulatory/legal changes requiring a policy update?
  • Is an unacceptable number of exceptions being generated?
    • Could indicate issues with policy language, divergence from the business state, or training and awareness issues.
  • How many policy violations have occurred and why?

 

If the organization waits for problems to arise before policies are revisited, it will always be behind the curve. This is an area where technology can be a force multiplier to ensure the train stays on the track and runs on time. Systems are great at performing repetitive tasks, like pestering policy owners (and managers) to do their reviews and capturing all of that in a verifiable system of record, year in and year out, over and over again. These days, if a company is trying to do this by hand rather than leveraging a tool like Archer Policy Management, it’s probably not doing it effectively at all. Instead it’s stumbling through some haphazard, analog process that will ultimately fail when it’s needed the most; namely, at crisis time.

The folly of a manual policy management program is further revealed in organizations with a disparate, document-centric approach. Static, dusty paper policy binders are relics of the past, not to mention boring and ineffective. Why not modernize with embedded multimedia awareness training and automated acknowledgement and acceptance features baked right into the same portal used to demonstrate that almighty diligence to the external auditors? People are engaged more effectively, and disparate tracking is replaced with a single verifiable system of record.

 

Why is this important? Because without effective policy awareness, what’s the point? Consistent publication and communication is the best way for the company to keep staff engaged on an ongoing basis. Policies are conditions of employment. Employees must accept these terms, and they can’t do that if they’re not aware of them. When that process is centralized and streamlined, the company benefits in multiple ways. First, the staff is kept up to speed as an integrated part of normal business, so behavior is influenced more quickly and naturally. Second, capturing the acknowledgements reminds the staff they’re accountable and provides good evidence of the overall process. Everything works in concert, and the business gains confidence it can remain a step ahead of its risk.


So we’ve detected changes to the business, put those through a workflow to analyze the impact, adjusted policies to match new expectations, raised awareness, captured staff acknowledgements, and established useful metrics to measure and monitor the program. Overall our diligence picture is shaping up nicely. Let’s wrap up by covering one last item, the audit trail.

 

Policy archival and history is something that often gets overlooked, and that oversight can bite an organization in a bad way. When policies must change or retire, it’s extremely important to preserve legacy versions for historical reporting purposes. Otherwise, how can the organization demonstrate adaptation over time? Remember, corporate policies are the codified basis for business operations. So they’re almost always legally discoverable as evidence, in addition to being a living history log of changes to the business. The more closely the policy history coincides with shifts in the business, the tighter the diligence connections are made. It’s never a good idea to let a plaintiff define how the business operates. A robust and complete policy revision history that is producible on demand is a very powerful indicator of strong corporate governance. Failing to preserve and protect that history is wasting an opportunity to improve compliance results and reduce organizational risk.

 

That brings us to the end of this series on policy change management. We’ve covered a lot of ground and, I hope, added clarity to the main aspects of a successful policy program. Managing enterprise policy in today’s global business climate of constant change can be a challenging story. I’d love to hear how Archer helps you tell it in your organization. Be sure to watch for several exciting announcements we have coming up, including updates to the Unified Compliance Framework, enhanced PCI capabilities, and much more!

So far in this series I talked about CM documentation and CM models. What’s next?

Beginning with the assumption that you are already using NIST RMF (or possibly DIACAP if in the military space), you build on what you’re already doing. This means you already have defined roles and responsibilities for A&A/C&A processes. Start there. It is essentially the same process, just more frequent. CM should also start with the tools you already have. Implementing CM doesn’t mean reinventing and replacing all your current processes and tools. The A&A stakeholders need to first get together, establish monitoring strategies, and agree on some model of measures and metrics.

Every system owner needs a monitoring strategy for every Information System they own. Every common control provider needs a monitoring strategy for the controls they offer. A monitoring strategy should 1) account for every control allocated to a system, 2) state whether each control is assessed manually or automatically, and 3) specify at what frequency.
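
To make that concrete, here’s a minimal sketch of what such a strategy record might look like, assuming a simple in-memory Python representation; the field names, the example system, and the control assignments are illustrative rather than anything Archer or NIST prescribes.

```python
# Minimal sketch of a per-system monitoring strategy, assuming a simple
# in-memory representation; field names and example controls are illustrative.
from dataclasses import dataclass

@dataclass
class ControlMonitoringEntry:
    control_id: str        # e.g., "CM-8" from NIST SP 800-53
    automated: bool        # True if assessed by a tool, False if manual
    frequency_days: int    # how often the assessment runs or is performed

# One strategy per Information System: every allocated control gets an entry.
monitoring_strategy = {
    "system": "Example-HR-System",      # hypothetical system name
    "controls": [
        ControlMonitoringEntry("CM-8", automated=True,  frequency_days=7),
        ControlMonitoringEntry("SI-2", automated=True,  frequency_days=1),
        ControlMonitoringEntry("AT-2", automated=False, frequency_days=365),
    ],
}

# Quick completeness check: flag any allocated control without an entry.
allocated = {"CM-8", "SI-2", "AT-2", "CP-9"}
covered = {entry.control_id for entry in monitoring_strategy["controls"]}
print("Controls missing from the strategy:", allocated - covered)
```

The completeness check at the end is the point: requirement 1) above is satisfied only when that difference comes back empty.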

Which should be automated? Any that you can, right? Sadly, only a small percentage of the controls in NIST SP 800-53 are conducive to automated assessment, and NIST has not made a declaration (or even a recommendation) on which they are. For now, this has to be determined by your organization. One good indicator: controls beginning with “The Information System…” are usually better candidates for automated assessment than the ones that begin “The organization…”. Some families are more automatable than others.

It also boils down to which tools you have in place. If you have a good configuration scanning tool and program in place, you can automatically assess a few controls in the CM family. If you have an asset management tool, you can automate CM-8. Automated patch management gets you SI-2. You get the idea. This will be a tedious process for your organization to go through the first time. It will be slightly different for each Information System and each organization, but even an organization with mature processes and technologies would be challenged to automate even a couple dozen controls’ worth of assessments.
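
As a rough illustration of that tool-to-control mapping exercise, here’s a small sketch; CM-8 and SI-2 come straight from the examples above, while the other tool names and pairings are assumptions added for the example.

```python
# Illustrative tool-to-control coverage map. CM-8 (asset inventory) and SI-2
# (patching) come from the post; the other pairings are assumptions.
tool_coverage = {
    "configuration_scanner": ["CM-2", "CM-6"],   # assumed: baseline/config checks
    "asset_management":      ["CM-8"],
    "patch_management":      ["SI-2"],
    "vulnerability_scanner": ["RA-5"],           # assumed pairing
}

def automatable_controls(tools_in_place):
    """Return the set of controls you could assess automatically today."""
    return {c for tool in tools_in_place for c in tool_coverage.get(tool, [])}

print(automatable_controls(["asset_management", "patch_management"]))
# -> {'CM-8', 'SI-2'} (set order may vary)
```

Walking each Information System through a map like this is the tedious first pass described above; the output is simply the list of controls whose assessments you can hand off to tools.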

The rest will, of course, be manual. AT, CP, MP, and PS are examples of control families with few if any automatable controls. Policy and process-oriented controls, physical and personnel controls, and even many technical controls cannot be automatically assessed. This is the point where, if you’re one of those organizations that brings in a third-party assessor every time ($$$), you may want to consider hiring your own internal assessors instead. You’ll get your money’s worth out of them.

So, most of you know about SCAP by now. It’s a group of specialized XML formats that enable automation and let security tools share scan data, analyses, and results. I won’t go deep on SCAP today in the interest of brevity, other than to say: use it where you can, and upgrade to tools that support SCAP when you can. The common language provided by XML and SCAP means that disparate tools and organizations can now share data where they couldn’t before, including high-volume, frequent scan data across huge ranges of hosts. SCAP has reduced obstacles in the scanning and reporting workflow and so has increased the frequency with which some scans can be performed. The point of interjecting SCAP into this discussion is that automated assessments are streamlined by this technology, and automation is a significant factor in the frequency of assessments. Which leads to…
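
As a quick aside before we get to frequency, here’s a minimal sketch of what consuming SCAP output can look like in practice, assuming your scanner emits XCCDF 1.2 result files; the file name is hypothetical and the exact layout will vary by tool.

```python
# Sketch: tally pass/fail verdicts from an XCCDF (SCAP) result file using the
# standard library. "scan-results.xml" is a hypothetical file name, and the
# namespace assumes XCCDF 1.2 output; adjust for your scanner.
import xml.etree.ElementTree as ET

XCCDF_NS = "http://checklists.nist.gov/xccdf/1.2"

def summarize_xccdf(path):
    tree = ET.parse(path)
    counts = {}
    # Each rule-result element carries one rule's verdict (pass, fail, etc.).
    for rule_result in tree.getroot().iter(f"{{{XCCDF_NS}}}rule-result"):
        verdict = rule_result.findtext(f"{{{XCCDF_NS}}}result", default="unknown")
        counts[verdict] = counts.get(verdict, 0) + 1
    return counts

print(summarize_xccdf("scan-results.xml"))   # e.g., {'pass': 180, 'fail': 12}
```

Because the result format is standardized, a few lines like these should work across tools that emit conformant XCCDF, which is exactly the interoperability point made above.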

 

How to determine the frequency for each control assessment? There are a few important factors:

 

What is the criticality of the Information System? This can be decided by the criticality from a BIA and/or from the Security Category assigned according to FIPS 199 & NIST SP 800-60. A system with a higher criticality or Security Category should have its controls assessed more often. A system with a lower criticality or Security Category should have its controls assessed less often.

Is it automated or manual? Manual controls likely cannot be assessed as frequently as automated controls. This is just a logistical truth: a fixed number of employees can only perform so many manual assessments in an allotted time. Automated controls, despite the potential for much higher frequency, should only be assessed as often as is useful. An enterprise patch scan may be run daily, for example. Running two patch scans a day would take twice the effort, but may not be twice as useful.

How volatile is the control? A control that is known to change more often should be assessed more often. This means, for example, configuration checks should be assessed more often than a written policy, because the former changes much more often than the latter. (I am just finishing a white paper on the subject of CM monitoring strategies. It should be available in the next week or so. Email me if you’d like a copy when it is done.)
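
If it helps to see those three factors working together, here’s a hedged, illustrative heuristic (not a NIST formula); the base intervals, multipliers, and category names are assumptions you would tune to your own environment.

```python
# Illustrative heuristic only: combine criticality, automation, and volatility
# into an assessment interval. All numbers below are assumptions.
BASE_INTERVAL_DAYS = {"automated": 7, "manual": 90}

CRITICALITY_FACTOR = {"high": 0.5, "moderate": 1.0, "low": 2.0}   # FIPS 199-style category
VOLATILITY_FACTOR  = {"high": 0.5, "moderate": 1.0, "low": 2.0}   # how often the control changes

def assessment_interval_days(automated, criticality, volatility):
    """More critical or more volatile -> assess more often; manual -> less often."""
    base = BASE_INTERVAL_DAYS["automated" if automated else "manual"]
    return round(base * CRITICALITY_FACTOR[criticality] * VOLATILITY_FACTOR[volatility])

# An automated configuration check on a high-criticality, high-volatility system:
print(assessment_interval_days(True, "high", "high"))    # 2 days
# A manual, low-criticality, low-volatility control (e.g., a written policy):
print(assessment_interval_days(False, "low", "low"))     # 360 days
```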

So, you’ve figured out your monitoring strategy. Next is implementation, and then scoring and reporting, which you have to figure out on your own because it’s specific to your environment and organization. But email me if you’d like to see a demo of Archer’s CM solution; even if you don’t decide to buy it on the spot, you may get some ideas you can use by seeing how we’ve done it. For the scoring piece, look at iPost and the original CAESARS for clear, simple scoring models.
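
For a flavor of what a simple scoring model can look like, here’s a small additive sketch in the spirit of iPost-style scoring rather than the actual iPost or CAESARS algorithm; the finding types, weights, and sample data are illustrative assumptions.

```python
# Illustrative additive risk scoring: each open finding contributes a weighted
# point value, and per-system totals can be compared or trended over time.
FINDING_WEIGHTS = {"vulnerability_high": 10.0, "vulnerability_low": 1.0,
                   "missing_patch": 6.0, "config_deviation": 3.0}

findings = [   # (system, finding type) pairs, e.g., produced by your scanners
    ("Example-HR-System", "missing_patch"),
    ("Example-HR-System", "config_deviation"),
    ("Example-Web-Portal", "vulnerability_high"),
]

def score_by_system(findings):
    scores = {}
    for system, kind in findings:
        scores[system] = scores.get(system, 0.0) + FINDING_WEIGHTS[kind]
    return scores

print(score_by_system(findings))
# -> {'Example-HR-System': 9.0, 'Example-Web-Portal': 10.0}
```

Whatever model you settle on, the useful property is comparability: a score only motivates remediation if system owners can see how they stack up against each other over time.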

Lastly, how do you know when you’re done? You’ll probably never be done, right? But how do you know when you’re adequate? To answer this, I will close with this cool little yardstick (with thanks to Peter Mell of NIST). This scale gives us all a lot to aspire to.

Level 0: Manual Assessment
  • Security assessments lack automated solutions

Level 1: Automated Scanning
  • Decentralized use of automated scanning tools
  • Either provided centrally or acquired per system
  • Reports generated independently for each system

Level 2: Standardized Measurement
  • Reports generated independently for each system
  • Enable use of standardized content (e.g., USGCB/FDCC, CVE, CCE)

Level 3: Continuous Monitoring
  • Reports generated independently for each system
  • Federated control of automated scanning tools
  • Diverse security measurements aggregated into risk scores
  • Requires standard measurement system, metrics, and enumerations
  • Comparative risk scoring is provided to enterprise (e.g., through dashboards)
  • Remediation is motivated and tracked by distribution of risk scores

Level 4: Adaptable Continuous Monitoring
  • Enable plug-and-play CM components (e.g., using standard interfaces)
  • Result formats are standardized
  • Centrally initiated ad-hoc automated querying throughout enterprise on diverse devices (e.g., for the latest US-CERT alert)

Level 5: Continuous Management
  • Risk remedy capabilities added (both mitigation and remediation)
  • Centrally initiated ad-hoc automated remediation throughout enterprise on diverse devices (with review and approval of individual operating units)
  • Requires adoption of standards based remediation languages, policy devices, and validated tools

 

Thanks for tuning in for this 3-part series on continuous monitoring! As always, any questions or comments, email me.

 

Chris

Over the last few blog entries, I outlined some of the dimensions that security operations need to think about during 2013 and beyond.   In some respects, this is the tip of the iceberg – there is only so much you can cover in a blog.   However, I think there are some important items to put on the radar.

 

First, Business Context is becoming a big priority for security.  No longer can companies chase vulnerabilities and events around the infrastructure.  There has to be a layer on top of the monitoring and analysis processes that is cognizant of the business impact.  This is not just about prioritizing events but understanding the business impact when specific systems are involved.  Escalation of a security incident can be triggered by the nature of the events, the magnitude of the threat or the data or business process impacted.   The only way to truly add this dimension to the “tuning” of security monitoring is through Business Context.
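
As a simple illustration of that idea, here is a minimal sketch of layering business context onto event prioritization, assuming each asset has been tagged with the business process it supports and a criticality rating; the asset names, tags, and weights are made up for the example.

```python
# Sketch: weight an event's severity by the business criticality of the asset
# it touched, so identical alerts escalate differently. Values are illustrative.
ASSET_CONTEXT = {
    "erp-db-01": {"process": "order-to-cash", "criticality": 3},   # 1 = low, 3 = high
    "kiosk-17":  {"process": "lobby-signage", "criticality": 1},
}

def incident_priority(event_severity, asset):
    """Same event severity escalates faster when a critical business process is hit."""
    context = ASSET_CONTEXT.get(asset, {"process": "unknown", "criticality": 1})
    return {"asset": asset,
            "process": context["process"],
            "priority": event_severity * context["criticality"]}

# Identical alerts, very different business impact:
print(incident_priority(4, "erp-db-01"))   # priority 12 -> escalate
print(incident_priority(4, "kiosk-17"))    # priority 4  -> routine handling
```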

 

Secondly, we need to continue to recognize that security incident handling must evolve in parallel with the threat landscape.   Quarantining a virus infected system is one thing; responding to a data breach with significant regulatory and catastrophic business implications is a totally different animal.   Companies can begin with streamlining the security event-to-investigation transition to bolster the foundation.  Folding in Breach and Crisis Management takes the process to the next level.

 

Finally, there are many related processes that should be evaluated regularly to minimize attack vectors. Processes that educate or involve the end users of the company are key points of defense. There is only so much that technology will do, and the ‘flesh and blood’ of the company must be engaged. One way to improve this within your company is to implement some type of threat assessment or brainstorming on a regular basis to highlight possible attack vectors. Key business contacts can prove to be valuable assets when thinking outside the box on possible internal and external threat scenarios.

 

The need for a next generation security operations mindset is evident across the industry. Technologies will continue to improve, but we need to keep the pressure up on how we view security processes. The attackers are constantly evaluating their methods and improvising new techniques. The defenders must think in those same fluid terms. I started this blog series using the analogy of the appearance of the catapult and trebuchet on the horizon outside a castle. In some respects, this analogy holds water, but in reality the threats we need to prepare against are not obvious hulking pieces of machinery being dragged across the battlefield but electrons and shadowy figures that we only catch in fleeting glances. The next generation of security operations will need to dispel the shadows. In the end, it isn’t just arming our lookouts with telescopes; we need to give them searchlights as well.

 

To follow my entire blog series on this topic, check out:

Next Generation Security Operations: Part 1

Next Generation Security Operations: Flesh and Blood

Years ago, companies had to worry about the “brick and mortar” threats – physical theft, property destruction, natural disasters.   Next, it was the “bits and bytes” threats – intellectual property theft, website defacement, denial of service attacks.   Now, there is a new element to our threat landscape – the “flesh and blood” threats.  I don’t mean personal physical attacks but rather attackers exploiting an individual for nefarious purposes.

 

Phishing is a well-worn arrow in the quiver of a would-be attacker. Whether it is used to target a broad range of people or a single person, a phishing attack can have a devastating effect if executed properly. Phishing attacks typically contain some tidbit of personal information that makes the attack even more persuasive. With the advent of LinkedIn, Facebook, Twitter and the entire spectrum of social media, attackers have a comprehensive research library at their fingertips. It doesn’t take long to construct business relationships via LinkedIn nor much effort to compile personal information from Facebook.

 

There is little companies can do about this threat except establish policies and increase awareness and training for employees. An active education program for employees highlighting the daily risks they face as end users is core to a security program. In addition to awareness campaigns, employees should have a clear escalation path for possible phishing attempts. The garden-variety spam phishing emails should be stopped at the perimeter via email filtering or content analysis technologies. However, once a message gets past that perimeter defense, users should know how to handle a possible email-borne threat. If the communication contains a request for sensitive data or an action that is out of the ordinary (or maybe even in the ordinary but involving some escalated privilege or confidential information), employees should be trained to escalate or, at a minimum, verify the request through other mechanisms. Too often, picking up the phone and making a call is a forgotten communication method in today’s E-society.

 

One thing to think about is the validation process around resetting passwords. This process is often exploited to bypass security controls. A common mechanism is the “question/answer” dance that hinges on the user and verifier having a common piece of confidential information to verify identity. However, with today’s social sites, some of those validating pieces of information are no longer confidential. High school mascot? Easy to find. Family names? Easy to find. I was once part of a penetration test where we validated ourselves via an “ID” number that was deemed confidential. The bad part was that the ID number was used on the public website to identify associates in the company. (Granted, the ID number was buried in the URL when doing an employee lookup and we guessed it was the employee ID number, but it was a pretty solid guess.)

 

When thinking about the next generation of security operations, these tangential processes such as security awareness, end-user escalation procedures and password reset processes need to be incorporated into the attack vectors of any threat assessment. Processes such as these are important frontline defenses that need to be evaluated regularly. When was the last time the procedures for password verification were reviewed? How often does communication go out to employees reminding them of their security roles? What data is used to verify employee requests? Sometimes we get so mired in protecting against the “bits and bytes” threats that the “flesh and blood” threats saunter right around the defenses.

 

One mechanism that can get these tangential processes identified and kept up to date is threat scenario modeling. Engaging business contacts in a brainstorming session where different threat scenarios are modeled out can give great insight into vulnerable business processes. It gives the business representatives a chance to play the adversary and gives the security team much to think about in terms of attack vectors. This can build a strong dialogue between security and the business to not only identify possible scenarios but also bring more business context to the security controls.

 

I would be interested to hear whether your security teams engage with the business and how these ‘social’ attack vectors are addressed in your company. Feel free to share ideas on how threat assessments, social media, or the ‘flesh and blood’ in your company are impacting your security operations.

It is a well-known fact that unwanted employee turnover can have a significant negative impact on an organization’s performance. These impacts include recruiting and training expenses, resetting salaries to market, business interruption or slowdown in areas affected by the turnover, stress to employees picking up extra workload, higher expenses from slower process execution and process errors, customer defection, and a loss of management’s leadership credibility with remaining employees and stakeholders. Many similar negative consequences can arise from poorly managed, but intentional, employee attrition and downsizing, management restructuring, organizational expansion, mergers and acquisitions, business process changes, and hiring the wrong person for a job.

The more complex the affected activities, the greater the organizational scope of the activities, or the greater their importance to strategic objectives, the greater the inherent risk to the organization. Although the frequency with which these situations occur is often within management’s control, the likelihood of these situations occurring over the long run is certain. How significant the residual risk will be when these situations arise depends directly on how well prepared the organization is to manage their impact. Preparation relies largely on the collection, management, and dissemination of institutional memory.

Institutional memory is a collective set of facts, concepts, experiences and know-how held by an organization that defines the organization and how it operates.  It informs members of the organization about:

  • The organization’s mission, purpose, and values
  • Assets and liabilities of the organization
  • How the pieces of the organization fit together and operate to deliver the organization’s mission, purpose, and values
  • Internal and external threats
  • Who is responsible for each of the pieces and activities of the organization
  • How, when, why, and by whom decisions are made

Institutional memory grows and changes as the organization changes, retaining memories of best practice and supplanting suboptimal practices with better ones.  Leveraging institutional memory avoids “recreating the wheel” and promotes organizational agility.  The negative impacts of unwanted employee turnover, employee attrition and downsizing, management restructuring, organizational expansion, mergers and acquisitions, business process changes, and hiring the wrong person for a job, can all be significantly mitigated by leveraging institutional memory.

How effectively an organization leverages its institutional memory depends on the formality and scope of that memory. Organizations that have implemented broad eGRC programs utilizing robust eGRC technology are best able to collect and manage institutional memories and disseminate them to affected members of the organization as needed. Dissemination of the memories occurs not only by way of the information being available through the eGRC technology but also through the appropriate enforcement of significant memories via automated process and decision workflow, escalation, and reporting.
