
Hello everybody! We are very pleased to announce the RSA Archer eGRC Content Library quarterly bundle is now available.

 

Earlier this month we released NIST SP 800-53 Revision 4. We’ve since added a major update to HIPAA as well as the latest version of the Monetary Authority of Singapore’s Technology Risk Management Guidelines. In support of these, several additions to the Archer Control Standards and the Archer Question Library are also included. For folks who haven’t yet obtained the NIST 800-53 update, we’ve also packaged it in the Q3 bundle for convenience.

 

Here's a snapshot of this quarter's full bundle:

 

  • Authoritative Sources:
    • NIST SP 800-53 Revision 4
    • HIPAA Privacy and Security
    • Monetary Authority of Singapore Technology Risk Management Guidelines

 

  • Control Standards:
    • 80+ updates and new standards

 

  • Questionnaire Assessments:
    • 600+ new questions

 

Something to note on the HIPAA content specifically – we have migrated the changes to the regulation into the existing authoritative source content, expanded the structure, remapped it, and developed a questionnaire assessment that also maps back to the authoritative source. So this import will update your existing HIPAA records with those changes. HIPAA sections which are no longer present in the current version of the regulation will be tagged as superseded in Archer so you can easily locate and remove them at your discretion.

 

The Release Notes for this quarter are posted on the RSA Archer Exchange and content import packs are available through Customer Support.

 

Thank you for your continued support in making RSA Archer the leader in GRC content. And check back soon for additional updates to the Information Security Forum’s Standard of Good Practice and a forthcoming update to the Unified Compliance Framework.

Internal Audit is one of many organizational groups whose mission is to assess risks, evaluate controls, raise findings and improve processes.  Similar groups include Enterprise Risk Management, Security, Compliance and others.  With some common objectives and not-so-common approaches, there is value in aligning methodologies, resources and results.  However, Internal Audit must maintain a certain level of independence, so how does it align with these groups without compromising that independence?


Internal Auditors have an essential need for independence.  It’s a requirement for the profession.  The Institute of Internal Auditors (IIA) Code of Ethics states, “Internal auditing is an independent, objective assurance and consulting activity designed to add value and improve an organization's operations”.   One of the Code’s principles on objectivity states “Internal auditors exhibit the highest level of professional objectivity in gathering, evaluating, and communicating information about the activity or process being examined. Internal auditors make a balanced assessment of all the relevant circumstances and are not unduly influenced by their own interests or by others in forming judgments.”  This independence begins at the highest levels in the chief audit executive’s reporting relationship to the organization’s board of directors and filters down.

 

Alongside the need for independence is a competing priority for IA to be a “partner” with management.  As directed by IIA standards, IA reports to the board of directors and senior management.  Contrast this with the Code of Ethics quoted earlier: “Internal auditing is an independent, objective assurance and consulting activity…”  The challenge for IA groups is how to strike the right balance between independence and partnership.


The formalization of Governance, Risk and Compliance (GRC) as an operating framework has begun to force the discussion of IA and other oversight functions working together toward common goals, and has increased the opportunities for IA to partner with management.  The question for IA is how closely to align their approaches, thresholds, and decision criteria with others.  The “right balance” is a relative term that depends on the organization and industry, its place on the maturity spectrum, regulatory issues, management priorities, and many other factors.  IA must continue to strike a balance between independence and partnership.


IA and enterprise GRC programs should look to remove as many boundaries between them as possible.  However, IA must decide which boundaries need to remain so that it can maintain an appropriate level of independence.  As the organization proceeds down the path of alignment and moves up the spectrum of group development, the growing pains of alignment will turn into realizable benefits.

My team and I have been having many discussions lately on the evolution of GRC programs and the value of integrating or supplementing tangential processes with data flowing in and out of risk management activities.  Much of this discussion is fueled by our efforts on the solution development front.  Over the past two years, much work has gone into updating our solutions based on industry practices and on how our customers use RSA Archer to implement a wide range of GRC use cases.  We have been working diligently toward deeper and deeper integration across modules and streamlined data sharing between core GRC processes.  In addition, our integration with RSA Security Analytics has continued to progress toward providing information security management processes with the business context needed to improve security.

As the conversation around the value of connecting processes within GRC progressed, the idea of a “Value Ceiling” for certain operational enablers and processes emerged.  Certain niche technology enablers reach a point where the tool brings value for immediate needs, but more value could be extracted if that enabler were applied to broader purposes.  In other words, there is POTENTIAL value that could be derived beyond the initial scope of the technology IF the technology can share data or enable other processes.  A Value Ceiling is the point where a technology enabler achieves its operational value but can no longer provide greater potential enterprise value due to constraints, disconnectedness or some other barrier.

 

In June, I posted a white paper, written in collaboration with the GRC Strategy team and the Customer Advisory Council, introducing the RSA Archer GRC Reference Architecture.  The GRC Reference Architecture was designed to help put context around the vast universe that is GRC.  The illustration, guiding principles and objectives outlined a framework for thinking about what the true goals of a GRC program are, how the GRC program needs to flow top down through the organization, and where certain processes, technologies, roles and responsibilities fit into the big picture.

 

I am pleased to combine these two conversations into this paper, “Breaking Through the Value Ceiling”.  Technologies implemented to meet operational needs bring tangible benefits to an organization through focused, tactical functions.  These tools bring value because they concentrate on the specific business challenge at hand, and most often they help achieve goals at the operations level.  However, certain processes need to lead to greater enterprise value.  This paper uses the RSA Archer GRC Reference Architecture to illustrate the value of operational technologies while acknowledging the “value ceiling” of some niche operational tools, highlighting the missed opportunity for broader value.

 

The paper includes some simple questions to ask yourself about key processes and technology enablers in your organization.  It is a simple concept, but I hope this piece ignites discussions in your organization about ‘value ceilings’ and unlocking benefits within your GRC program.

Cliff Stoll’s The Cuckoo’s Egg was one of the major influences in my early career.  The tale of a lone astronomer doggedly pursuing an accounting error on the mainframe to discover a nefarious hacker with international spook connections combined elements of things I loved:  the hard-nosed detective embodied by Sam Spade, the international intrigue of James Bond, and the technology conundrums of the early networked world.   Ok – Cliff Stoll isn’t exactly Sam Spade or James Bond but you get the point.  The story was a fascinating epic journey that led its protagonist to places he never expected.  Other similar stories highlight the fundamental personality types associated with “hacker hunters”: They are relentless, passionate and ultimately, take these security breaches personally.

I have already talked about giving better Visibility and Context to help response teams with uncovering the rocks during a security incident.  First responders assigned to analyze, triage and resolve a security incident must also have the right skills to enable their digging.  If you read analysis of how today’s advanced threats need to be approached (see RSA First Watch’s excellent paper on “Stalking the Kill Chain”), there are multiple facets of Expertise needed:

1) The responders need to be up to date and connected to current threat attack vectors.  It isn’t sufficient just to know or understand security vulnerabilities.  Responders need to understand HOW attackers are utilizing the large global catalog of vulnerabilities at that point in time.

2) The responders need to have a clear understanding of what is normal and abnormal activity on the network.

3) The responders need to be able to imagine other channels and vectors that haven’t been seen before.

There are a few tangible elements to building this Expertise:

Gather and organize a clear picture of current adversaries, attack vectors and threat scenarios.  Some of this can be gained through external services such as RSA Live.  Other elements will need to be sourced internally.  One company I have worked with maintains a “persons of interest” database cataloging shadowy but known adversaries, their favorite attack vectors, known affiliations and other pertinent information.  This information, alongside incoming indicators of compromise, intel from information-sharing groups and other sources, helps them keep an eye out for specific activity that matches known threat scenarios.
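
As a rough sketch of how such a catalog might be checked against incoming indicators, the snippet below keeps a small in-memory “persons of interest” list and flags any overlap between an adversary’s known indicators and the indicators arriving from alerts or shared intel feeds.  The adversary name, indicator values and field names are hypothetical illustrations, not details from the company mentioned above.

    # A minimal, hypothetical "persons of interest" catalog; names, vectors and
    # indicators below are illustrative placeholders, not real intelligence.
    from dataclasses import dataclass, field

    @dataclass
    class Adversary:
        name: str
        attack_vectors: set[str] = field(default_factory=set)    # e.g. "spearphishing"
        known_indicators: set[str] = field(default_factory=set)  # e.g. bad domains or IPs
        affiliations: list[str] = field(default_factory=list)

    CATALOG = [
        Adversary(
            name="Group-X",  # placeholder adversary
            attack_vectors={"spearphishing", "watering-hole"},
            known_indicators={"watering.hole.com", "bad.place.net"},
            affiliations=["Forum-Y"],
        ),
    ]

    def match_indicators(incoming: set[str]) -> list[tuple[str, set[str]]]:
        """Return adversaries whose known indicators overlap the incoming IOCs."""
        hits = []
        for adversary in CATALOG:
            overlap = adversary.known_indicators & incoming
            if overlap:
                hits.append((adversary.name, overlap))
        return hits

    # Indicators might arrive from today's alerts or a shared intel feed.
    print(match_indicators({"bad.place.net", "10.1.1.105"}))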

Enable the technical infrastructure.  Intel flowing into the security function should impact controls design and implementation downstream.  If intel points to a set of known bad IP addresses, enable protection at multiple layers – perimeter defenses, IDS, etc.  This seems like common sense, but the critical point is how fast that intel is acted upon.  Barriers that may slow down the process (such as change control) should be optimized so the flow is accelerated in proportion to risk.
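
The sketch below illustrates the “act on intel fast” idea under the assumption that each enforcement layer exposes some block operation; the adapter functions here are placeholders standing in for whatever firewall, IDS or proxy management interfaces an organization actually runs.

    # Hypothetical sketch: fan a known-bad IP out to several enforcement layers.
    # The adapter functions stand in for real firewall/IDS/proxy management APIs.
    from typing import Callable

    def block_at_perimeter(ip: str) -> None:
        print(f"[perimeter firewall] added {ip} to the deny list")

    def block_at_ids(ip: str) -> None:
        print(f"[IDS] added a drop/alert rule for {ip}")

    def block_at_proxy(ip: str) -> None:
        print(f"[web proxy] blacklisted {ip}")

    ENFORCEMENT_POINTS: list[Callable[[str], None]] = [
        block_at_perimeter,
        block_at_ids,
        block_at_proxy,
    ]

    def push_blocklist(bad_ips: set[str]) -> None:
        """Apply new intel at every configured layer as soon as it arrives."""
        for ip in bad_ips:
            for enforce in ENFORCEMENT_POINTS:
                enforce(ip)

    push_blocklist({"192.168.1.234"})  # addresses would come from the intel feed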

Don’t rely solely on internal resources.  This doesn’t mean that you shouldn’t trust or use your internal resources.  It just means that sometimes you should level set your processes, technology infrastructure or threat intelligence with a trusted outside party.  Attack scenarios are so varied today that it is good to get experiences outside your own world to validate your approaches.

RSA’s First Watch paper mentioned above states “the success of a modern attack often depends on the activities of the carbon based unit between the keyboard and the chair”.  I wholeheartedly agree with this statement.  Technologies can fill many voids except the void between the ears of the person staring at the screen.  Security analysts are extremely talented people, and when given the right tools and processes they make formidable opponents for today’s adversaries.  Security organizations must embrace that Expertise and view it as an imperative to optimize and leverage it in managing today’s security threats.

GRC (Governance, Risk and Compliance) is a familiar enough term, but what are EHS and CAPA?  Well, providing employees, contractors and customers with a safe working environment is a major priority for any organization.  Numerous industry regulations as well as government agencies such as the Occupational Safety and Health Administration (OSHA) have enacted specific rules, procedures and laws that must be followed in order to ensure compliance with safety measures in the workplace.  As an example, Environmental Health and Safety (EHS) rules, procedures and laws require companies to track all recordable events relating to workplace illness, injury or death.  A good EHS program will:

 

  • Escalate incidents quickly and efficiently; and capture, investigate, assess and prevent future hazards, injuries, illnesses, near-misses, and property damage
  • Implement corrective and preventive actions, and perform root cause analyses for all events to proactively prevent recurrences
  • Provide visibility and reporting on incidents, events, ownership and statuses
  • Drive quality and safety performance across locations and business areas, and help maintain a safe and secure work environment and reduce risk to people and property


One important aspect of EHS programs is the Corrective Action and Preventive Action (CAPA) process.  CAPA is a methodology that strives to identify errors or nonconformities in a process along with the resulting problems, and then to understand the impacts, implement corrective action, and implement preventive measures.  This concept is commonly built into many systems, processes and programs.  One example is the incident reporting standards for health and safety requirements mandated by OSHA.
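
To make the CAPA flow a bit more concrete, here is a minimal sketch of what a single CAPA record might capture as it moves from an identified nonconformity through root cause analysis to corrective and preventive actions.  The field names, status values, severity scale and the example scenario are assumptions for illustration only, not a prescribed RSA Archer or OSHA data model.

    # Illustrative-only CAPA record; field names, statuses and the severity scale
    # are assumptions, not a prescribed RSA Archer or OSHA data model.
    from dataclasses import dataclass, field
    from enum import Enum

    class Status(Enum):
        IDENTIFIED = "identified"
        ROOT_CAUSE_ANALYSIS = "root cause analysis"
        ACTIONS_IMPLEMENTED = "actions implemented"
        VERIFIED_EFFECTIVE = "verified effective"

    @dataclass
    class CapaRecord:
        nonconformity: str                  # what deviated from the expected process
        impact: str                         # consequence if left unaddressed
        severity: int                       # assumed 1 (low) to 5 (high) scale
        root_cause: str | None = None
        corrective_actions: list[str] = field(default_factory=list)  # fix this occurrence
        preventive_actions: list[str] = field(default_factory=list)  # stop recurrence
        status: Status = Status.IDENTIFIED

    record = CapaRecord(
        nonconformity="Forklift near-miss in warehouse aisle 4",
        impact="Potential employee injury and an OSHA-recordable event",
        severity=4,
    )
    record.root_cause = "Obstructed sight lines at the aisle intersection"
    record.corrective_actions.append("Remove pallet staging from the intersection")
    record.preventive_actions.append("Install convex mirrors and update traffic routes")
    record.status = Status.ACTIONS_IMPLEMENTED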


Other examples of CAPA programs include the “Plan Do Check Act” cycle (from Deming and Shewhart) common to Crisis Management, as well as Quality Management Systems (QMS) focused on continual improvement and customer satisfaction.  CAPA also forms the core of quality management disciplines such as Lean Manufacturing, Six Sigma and ISO 9000.  Permanently embedding CAPA as part of a continuous improvement process is critical in highly structured and regulated environments.

A common criticism of CAPA programs is that they often do not deliver the Return on Investment (ROI) expected by management.  For CAPA programs to drive enterprise-wide benefits, they need to connect and fully integrate with broader strategies and solutions and with other supporting corporate-wide information systems.  Further, CAPA programs need clear definitions of risk, severity and impact, and those definitions should be used to drive prioritization.  Effective CAPA programs should also enable organizations to integrate related disciplines, such as risk, incident, business continuity and audit management, into a broader enterprise governance program.


RSA Archer has recently made available a focused EHS solution that enables organizations to comply with OSHA requirements to track all recordable events relating to workplace illness, injury or death. Customer benefits include:

  • Capture, investigate, assess and prevent all hazards, injuries, illnesses, near-misses, and property damage with corrective and preventive actions
  • Escalate incidents quickly and efficiently
  • Provide visibility and reporting on incidents, events, ownership and statuses
  • Perform root cause analyses for all events to proactively prevent recurrences
  • Host all witness statements, investigations, and evidential information as it relates to EHS events to prevent or rebut potential penalties, fines, and fees
  • Drive quality and safety EHS performance across multiple locations and business areas
  • Help maintain a safe and secure work environment and reduce risk to people and property
  • Comply with OSHA incident reporting standards 

     

The solution has the potential to provide rich information to the other disciplines mentioned above, better integrating the broader GRC program.  If your company is challenged with EHS requirements and looking for a CAPA solution, check out our new focused solution.

I’m happy to announce the release of NIST SP 800-53 Revision 4 as Archer content. This newest addition to the library is offered as a full-text authoritative source with over 1,100 mappings to Archer Control Standards.

 

Special Publication 800-53 is one of the flagship security control catalogs in the world. This latest version reflects a multiyear effort on the part of NIST to refine the control set and expand it with additional coverage for current and emerging trends in various technology areas. With a title of “Security and Privacy Controls for Federal Information Systems and Organizations”, SP 800-53 is often mischaracterized as being relevant only to the public sector. However, the control catalog and methodology serve as an excellent baseline resource for any company looking to rationalize and improve its security control environment. The Presidential Policy Directive and Executive Order released earlier this year underscore the trend toward public and private sector security practices beginning to align. Guidance provided by NIST will be deeply integrated into these public initiatives, so it’s worth turning to SP 800-53 as a reference whenever security control designs are being considered.

 

If you caught our webcast with Dr. Ron Ross earlier this year you’ll recall one of the major updates in SP 800-53 Rev 4 is the addition of a new family of privacy controls. This is a big deal since NIST has only added one other control family since the inception of 800-53. Another new element is the introduction of the “overlay” concept. Think of this as an additional way to uniquely identify and allocate controls based on overlaying the deployment context of the platform being protected. These additions further illustrate a growing overlap of security concerns shared by public and private sector organizations alike, and complement a concerted effort by NIST to reach out and collaborate with the private sector.

 

With the addition of Revision 4 in Archer, together with the 800-53A Archer Control Procedure content released earlier this year, you have everything you need to drive a serious security control assessment program or to transition your existing program to the latest version as part of your security control environment lifecycle management process.

 

If you’d like a deeper dive on using SP 800-53 Rev 4, be sure to check out our upcoming webcast on September 12, 2013.

 

Content import packs are available through Customer Support.

One of the frustrations - or challenges depending on your viewpoint - about responding to security incidents is that you spend a lot of time overturning rocks.   Most people, when they turn over a rock, expect to see bugs, worms and a few unknown creepy crawlies scattering for cover.  However, when a security person overturns a rock during a security incident, most often all they find are more rocks.  Then those rocks are overturned, revealing more rocks, and so on.  The endeavor turns into a battle of attrition until finally (usually under the most unexpected rock) the bugs, worms and creepy crawlies are discovered.  For every security event, there are several rocks to be uncovered:  What system is involved?  What does the system do?  What was the source system?  What is that system used for?  What protocol was used?  What does the log or event mean?  Who triggered the event?  What is that person’s role?  Was it a real business activity?  Overturning each of these rocks reveals more rocks.

 

Something like this is typical for a security investigation today…Our perimeter defenses notify us that IP address 10.1.1.105, a nameless device, sent a zip file called really-important-secrets.zip to IP 192.168.1.234, an unknown destination.  Many rocks uncovered later – we learn the zip file contained SecretSauce.doc and was sent from John Doe’s laptop to an FTP site at bad.place.net.  More rocks and more digging reveal John Doe was spearphished into visiting a website at watering.hole.com last week and had malware R3@llyN@sty installed on his machine.  And that’s the comforting scenario, since I didn’t even include the obfuscation that some attacks use for data transfer.  An even worse scenario is law enforcement showing up to tell us SecretSauce.doc was deposited on a warez site three months ago and evidence suggests it had been traded through multiple forums for a month or two before that.

 

In my experience, good security analysts like uncovering rocks.  They don’t mind digging through the 1s and 0s relentlessly in pursuit of the creepy crawlies.  The problem with all of these rocks is that they look alike.  Wouldn’t it be nice if some of them glowed bright green as if to say “turn me over first”?  Or if, once you overturned one rock, the subsequent rocks were neatly arranged so you could quickly ascertain the situation?  This is why Context – adding dimensions to the incident to enable prioritization, clarify impact or provide insight that influences the response in a positive manner – is so important to today’s security threat management.  There are many rocks that shouldn’t be so hard to overturn.  To deal with some of the opaque, obtuse, wildly sophisticated attacks today, we have to begin fusing this context into the security event equation right from the beginning.

 

Security analysts most often are presented with an IP address – a nameless, faceless string of numbers.  This is much like a phone number.  They don’t know who that phone number is for (owner), what the number is used for (personal or business), the type of phone (mobile or landline), where it is located (physical address), what calls were made, etc. Context is adding all of those things to the phone number turning it from “555-555-1212” to “John Doe’s mobile phone used for his financial advisory business and he is driving down I-435 right now talking with his business partner Bob”.

 

Some tactical – yet important – strategies for you to enhance Context:

Evaluate IT asset repositories to collapse and streamline the concept of “critical systems”.  This will require a combination of input from the business (which applications are truly important to the business?) and IT (which infrastructure enables the key business processes?).  The definition of critical may be fluid as well – so a one-time assignment of that indicator is not going to cut it.  You need an ongoing process to identify key assets.
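
As a rough sketch of what that ongoing context looks like when it is actually used, the snippet below joins an alert’s source IP against a hypothetical asset repository to pull in owner, business function and criticality.  The repository contents and field names are assumptions; in practice this lookup would hit a CMDB, asset management system or GRC platform rather than an in-memory dictionary.

    # Hypothetical sketch: enrich a bare IP address with business context.
    # The repository below stands in for a real CMDB or GRC asset register.
    ASSET_REPOSITORY = {
        "10.1.1.101": {
            "hostname": "bos-db-07",
            "owner": "Database Operations",
            "business_function": "Cluster node for the database behind the SAP system",
            "criticality": "high",  # reviewed on an ongoing basis, not assigned once
            "data_classes": ["PII", "PCI"],
        },
    }

    def enrich_alert(alert: dict) -> dict:
        """Attach asset context to an alert that is keyed only by source IP."""
        context = ASSET_REPOSITORY.get(alert["src_ip"], {"criticality": "unknown"})
        return {**alert, "asset_context": context}

    alert = {"src_ip": "10.1.1.101", "event": "suspicious outbound transfer"}
    print(enrich_alert(alert))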

 

Don’t forget data.  Owning a router or getting root on a server isn't the end game anymore.  It might have been in the good old days of hacking for fun, but today’s dangerous adversaries are for-profit (or for-damage) driven.  Understanding what data is important and relevant, where that data lives and when and how it is in transit is crucial.

 

Continuously add dimensions.  Bringing this context will require some time.  It won’t happen overnight but having a strategy to incrementally improve security’s understanding of IT assets and systems needs to be on the CISO’s plan.

 

For the security analyst, Context turns “10.1.1.101” into “The SPARC T4-1B server module running Solaris 10 located in the Boston data center which is part of the cluster hosting the Oracle database used in the SAP system that houses employee and customer information, subject to several breach notification laws and PCI, last accessed by John, Jane and Bad Guy Jack and scanned last week by Qualys which found a poorly configured RPC service.”  This may sound pie-in-the-sky, but in reality it is the type of information that could be the difference between successfully defending against a serious data breach and ending up in the front-page news as an anecdote in a security vendor’s marketing presentation.
