All Places > Products > RSA Archer Suite > Blog > 2013 > April

This has been a cool and exciting week in the RSA Archer federal world. I made a couple of trips to NIST this week. In terms of federal IA standards, NIST is obviously critical, and RSA Archer has taken the next step in building our relationship with them.


NIST is an existing Archer customer and we are core partners in their National Cybersecurity Center of Excellence (NCCoE) lab. One of the main missions of this lab is to bring different types of vendors together in a collaborative way, with NIST moderating, to innovate and solve security problems. They call this effort the National Cybersecurity Excellence Partnership (NCEP). In a precursor to, or perhaps a test run of, the NCEP idea, RSA Archer recently collaborated with the NCCoE lab staff to design a trusted geo-location solution and co-author a paper on the subject.


So, April 15th marked the official beginning of the NCEP and there was an official signing ceremony of the 11 “founding” members to kick it off. It was a great event. From the moment I walked into the lobby, I noticed NIST had set up a demo of the geo-location solution (score!) and Archer was there on the screen as an example of what the NCEP could do.

Some black limos and Secret Service agents showed up, escorting General Alexander (Director of NSA and Commander of US Cyber Command) and Senator Mikulski, who had fought for the funding for the lab. The ceremony began, with the Director of NIST, Dr. Gallagher, as MC. NIST posted coverage of the event.


Then the 11 partners went up on stage to sign the agreement. Rear Admiral Mike Brown, the VP for RSA Federal Business, was on hand to sign for RSA.  The line-up was impressive: execs from Intel, Cisco, Microsoft, HP, Symantec, McAfee, etc.  – all top-tier partners. Overall, the signing was very positive and inspiring.


I went back to the NCCoE on April 17th for the kickoff of the next use case: the Secure Exchange of Electronic Health Information Demonstration Project. A group of about 15-20 vendors showed up, including most of the 11 core partners. We all listened to the use case around secure wireless healthcare data, and we each briefly presented on how we thought our products might fit into the solution. The NCCoE lab staff are now discussing how to split all of the proposed products into several prototype solutions, and we will hear back on our “teams” shortly.

The NCCoE also announced the areas around the next two use cases for the lab: financial and energy. This is really no surprise, due to the recent resurgence in concern over critical infrastructure, but I am very excited! These are great areas for Archer historically and great opportunities to improve and innovate!


Thanks for reading!!!


Register for the summit if you haven’t already. I hope to see you there.


Email me with questions or comments.



Remember driver education class, when the instructor would sound like a broken record telling you to look over your shoulder to check the “blind spot” before changing lanes? Never mind the questionable wisdom of consciously looking in the opposite direction of travel. I could never wrap my head around the supposed reality that every car on the road had such an obvious safety flaw. Granted, I’ve always been the inquisitive type, but it just didn’t compute. Engineers are supposed to be smarter than that, right? Why bother putting mirrors on at all if they don’t work?


Suffice it to say my instructor was not impressed. It was a terribly hot summer and he was stuck in a poorly ventilated, semi-trailer classroom conversion full of teenagers driving him crazy (pun intended) with inane questions. “What if we couldn’t see out the back window? What if we had a chronic neck injury?” On and on it went until our weary instructor played his trump card and squashed the automotive design debate for good. “Do you want to drive to school this fall or walk?” Despite the burning desire to prove him wrong, the taste of freedom that lay waiting on the other side of that driving test was too much to risk. So we relented. But I never forgot how silly the whole thing seemed then, and how influential experiences like that were in fueling my passion to “figure things out”.


Fast forward to present day: Drivers ed has long faded from my rearview mirror and, lo and behold, we just purchased a new car with a “blind spot alerting system.” What’s this contrivance, you ask? Here’s how it works: There are sensors mounted around the vehicle that function like radar. If those sensors detect another vehicle positioned in the “blind spot,” a light will flash in the corresponding side mirror to alert the driver. Personally, I convinced myself a long time ago that the blind spot was a myth. But since this will be our primary family vehicle, the more safety features the better, I say.


As I was studying the owner’s manual on all this new technology, it got me thinking about these new safety features in the context of a system of controls. In terms of the blind spot awareness sensor, our stated risk is colliding with a vehicle in another lane. The mirrors provide a detective control to see other vehicles. Other drivers possibly provide a secondary detective control function (preventive from their perspective) if they honk at us (and we hear it), plus a compensating control if they can swerve out of our way.


But none of those are deemed reliable enough so some genius concocted the additional “preventive” control to look over our shoulder and check manually. While this may mitigate one collision risk, it creates a different, potentially much larger risk if the driver directly in front of us slams on their brakes while we’re busy looking backward. Furthermore, cars are built differently today. They’re bigger, faster, and while safer all around, go ahead and try to actually see anything out the back of an SUV with three rows of seats and oversized headrests. It’s practically impossible and certainly unreasonable to do justice to the task in the split second the average glance seems to last.


Hmmm...interesting. We have risks and multiple controls for those risks, but those controls seem to have some weaknesses in common. For instance, all are manual, none are reliable due to inconsistency and human error, and one could argue the residual risk (risk after controls) is nearly equal to the inherent risk (risk in absence of controls) in several plausible driving scenarios. Not good. How on earth have we ever managed to drive anyplace safely up to now? This is a marketer’s dream scenario. Magnified risks, diminished controls, and the straw man’s seed of impending crisis in an uncertain world firmly planted in our minds with a few images of our loved ones in a collision that, thanks to modern technology, is now totally preventable.
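The inherent-versus-residual risk relationship above can be sketched in a few lines of Python. This is a toy model under my own assumptions, not a formula from any risk framework: it treats each control as independent and reduces risk by the fraction of events that control reliably catches.

```python
# Toy model of inherent vs. residual risk. Illustrative only --
# the reliability numbers below are invented for the driving analogy.

def residual_risk(inherent: float, control_reliabilities: list) -> float:
    """Risk remaining after applying independent controls, where each
    control catches the event with probability equal to its reliability."""
    remaining = inherent
    for reliability in control_reliabilities:
        remaining *= (1.0 - reliability)  # only the misses pass through
    return remaining

# Three inconsistent manual controls (mirror glance, other drivers,
# shoulder check), each hypothetically working only ~10% of the time:
print(residual_risk(1.0, [0.1, 0.1, 0.1]))  # ~0.73 of inherent risk remains
```

With reliabilities that low, roughly three-quarters of the inherent risk survives all three controls, which is the "residual nearly equal to inherent" situation described above.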


Enter our new friend the blind spot alerting system: the holy grail of the control universe, the all-seeing, all-knowing, all-powerful, automated control! We’re saved! That is, until we read the fine print in the owner’s manual. Seems it only requires one short paragraph to describe how the feature should work, but several more paragraphs with graphics and warnings to point out all the potential ways our fancy new automated control can fail. If the sensor is blocked or dirty, it may not register other vehicles (false negative) or cause repeated false positives by alerting erroneously. Certain angles and other driving conditions may also trick the system, and so on. So now we have a new problem. How do we know if our automated control fails? Well, we’ll certainly know if we change lanes and smack another car, I guess. In information security this would be synonymous with a control failing “open” rather than failing “secure”. Not good.


So what do we do? As Bob Dylan said, “the answer, my friends, is blowing in the wind.” Our trusty side mirror, relegated to hanging off the door as a mere ornament, may yet save us after all. Manual controls get a bad rap because they’re perceived as costly and labor intensive, which leads people to perform them improperly or inconsistently. When it comes to control performance, inconsistency equals unreliability, and that leads to control failures and audit findings. Otherwise, there’s nothing inherently wrong with a manual control, and in many cases (on a control-by-control basis) it’s often cheaper than an automated alternative. Case in point: The side mirrors came for free on our new car. Heck, they’re actually required by law. The blind spot awareness system, however, was an additional-cost option.


So now we come full circle. We need our side mirrors because we can’t look over our shoulder, but as a risk-based control our side mirrors are unreliable, right? That’s what they told us in drivers ed, but we never really established why. Let’s assume there was a way we could gain more confidence in our side mirrors as a primary key control. If we could implement a policy change that improved the accuracy and completeness of the mirrors’ coverage, then we might be able to strengthen the control’s performance enough to sufficiently reduce the residual risk. Let’s call it control refinement, or tailoring. If this new policy works, we’d essentially have a new system of controls featuring complementary automated and manual controls that backstop each other in a way that always manages the risk.


So with that as our backdrop, please allow me to present the following graphic, taken from a 2010 article in Car and Driver Magazine entitled “How To: Adjust Your Mirrors to Avoid Blind Spots”. That’s right, Mr. Driving Instructor, eat my dust.





This is proof that simple solutions are often the best. While I won’t suggest this is perfect for everybody, I will say I’ve used this method for years without fail. It’s worked for me on all sizes of vehicles and has saved me more than once.


Just for fun, in preparation for this article I took our new car out to test my theory that a properly adjusted mirror (tailored manual control) was actually just as reliable as the automated control.


Guess what? Not only was it equally good, it even outshone the blind spot system. While the automated control never missed, the mirror actually detected the approaching vehicle earlier every time. Multiple controls that are each reliable enough to be primary?? What a great problem to have!







So let’s recap: We had a stated risk and a control environment that was failing to manage that risk reliably. Through a disciplined approach to remediation, we were able to root-cause our inherent control deficiencies and find a new way to leverage existing resources toward a suitable solution. By retailoring our controls, we were also able to rationalize away one of our manual controls (looking over the shoulder) that was costly in terms of risk and provided little benefit. So not only did we achieve control nirvana for no more than the cost of a policy change and a little awareness retraining, we actually reduced our manual controls by 50%! Plus, newly acquired technology allowed us to add an automated control to the mix that not only strengthens and reinforces our existing manual control environment, it also expands our risk coverage into lower-likelihood (but high-impact) scenarios, such as a vehicle with no headlights in our blind spot at night.


And there you have it folks: Policy, risk, controls, and ultimately compliance all from the comfort of your driver’s seat. Have high speed stories of your own to share? I’d love to hear them!

Business Continuity Management (BCM) programs typically do a good job of evaluating business criticality by performing Business Impact Analyses (BIAs) to determine recovery priorities.  However, how many BCM and IT Disaster Recovery (DR) programs adequately assess risks, starting at the overall program level and going down to the process or IT infrastructure level?  How do they properly integrate the business and IT in this analysis?  Further, how many BC/DR programs coordinate or leverage planning with their organization’s Enterprise Risk Management (ERM) program, approach and results?  This is especially critical given the guidance in the new ISO 22301 standard.


This is where BC/DR planning and ERM converge in their needs but are rarely synchronized in their discipline, and here’s a real example.  A Fortune 100 financial services company I consulted with had performed over 3,000 BIAs and had as many documented BC plans.  Their central BC program’s charge was to audit as many of these plans as possible (I would dare say “as necessary,” and here’s why), but how did they determine which BC plans to audit?  At the time, the company had its own rudimentary risk assessment process that would help it determine which BC plans to audit (i.e., go onsite, verify plans were documented and tested) versus having those business process owners self-audit through a quick questionnaire that the BC program would review.  However, what their risk assessment process didn’t take into account was how their larger ERM program viewed the risk in those same business process areas.  Were they worth the trip to audit (some of these locations were international, resulting in lots of travel expense)?  Who really knew?  The two programs did not align on their definitions of “high risk” versus “low risk”.  Furthermore, they didn’t take into account risk remediation that might have reduced the risk to acceptable levels, allowing them to move an area from the “to be audited” to the “self-audit” category and thereby freeing the BC program team to focus on higher-impact activities.
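The triage decision described above can be expressed as a simple rule. Here is a hypothetical sketch; the rating scale, threshold, and the idea of taking the higher of the two programs' ratings are my own illustrative assumptions, not the company's actual methodology:

```python
# Hypothetical triage of a BC plan into "onsite-audit" vs "self-audit",
# aligning the BC program's risk rating with the ERM program's rating
# and crediting completed remediation. All names/values are illustrative.

def triage_bc_plan(bc_risk: str, erm_risk: str, remediated: bool) -> str:
    """Return 'onsite-audit' or 'self-audit' for one BC plan."""
    scale = {"low": 1, "medium": 2, "high": 3}
    # Align on the more conservative of the two programs' ratings...
    risk = max(scale[bc_risk], scale[erm_risk])
    # ...then credit remediation that reduced risk to acceptable levels.
    if remediated:
        risk -= 1
    return "onsite-audit" if risk >= 3 else "self-audit"

print(triage_bc_plan("high", "medium", remediated=False))  # onsite-audit
print(triage_bc_plan("high", "high", remediated=True))     # self-audit
```

The point of the sketch is the second parameter: without the ERM rating as an input, a plan the enterprise considers high risk could quietly land in the self-audit pile.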


This is just one example of why it is important for BC programs to align their approaches, methodologies and activities with other related Governance, Risk and Compliance (GRC) programs and disciplines – and vice versa.  Really – your organization’s ERM, GRC or ABC program has a lot to learn from the BCM program too.  Believe it or not, there are many points of intersection and alignment that can and should occur, making both programs more effective.  It’s all about effectively reducing risk with the least amount of resources, isn’t it – whether we’re talking BCM, ERM, GRC, or whatever?


This blog series will continue to explore practical ways BCM, GRC, ERM and other related programs, approaches and disciplines can converge to make everyone’s life easier.  Stay tuned!

Over the last few weeks I have outlined several elements of Security Operations that are bubbling to the surface in my blog series “Next Generation Security Operations”.   The series focused on the reactive side of security management, and a key theme was the connection between nuts-and-bolts security and broader processes.   A key point I wanted to communicate was the need for companies not only to remain vigilant and evaluate the detective side of security management, but also to look outside the technical infrastructure for inputs that improve reaction time within Security Operations.  As most of my readers are GRC practitioners, this connection stimulated some interesting conversations with customers from the GRC side of the house and, I hope, sparked some of the same connections on the security side.


One element I did not spend much time on in the series was the proactive side of security management.  Threat prevention activities such as vulnerability identification, threat assessments and security intelligence, coupled with technical management processes such as configuration management and IT change control, are an important part of ensuring your company is best positioned to fend off attacks.   As IT security risks grow more complex, companies face threats from a wide variety of sources – from criminal elements to state-sponsored corporate espionage – exploiting an extraordinary array of vulnerabilities within business processes and technology.  These compound threats result in substantial and often unrecognized business risk.  A key strategy for dealing with these challenges is to expand tactical IT security processes, such as vulnerability identification, into a more holistic risk management discipline: deploy a combination of threat prevention and detection capabilities, driven by a business-oriented foundation, to reduce IT security risk.


I like to term this IT Security Risk Management, rather than Threat or Vulnerability Management, since the objective should be to build more business context into the picture than traditional vulnerability management provides.   However, no one label truly captures the combination of these two critical components of holistic security management – Threat Prevention, and Threat Detection and Response.  Supporting those two major elements are processes to catalog IT assets, provide business context on IT assets, enable emergency response services and a whole host of other processes.  To place a singular label on this major process is very difficult.  At the end of the day, an organization needs to:

  • Identify IT Assets and the business context and criticality of those assets;
  • Implement proactive threat management controls based on vulnerability intelligence, testing, threat modeling and analysis; and
  • Monitor IT assets, detect active threats and manage incidents and investigations.
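The first two steps above hinge on linking business context to technical findings. A minimal sketch of that linkage might look like the following; the classes, fields, and scoring formula are hypothetical illustrations, not an actual product data model:

```python
# Minimal sketch: rank vulnerabilities by technical severity weighted by
# the business criticality of the asset they sit on. Entirely illustrative.
from dataclasses import dataclass, field

@dataclass
class ITAsset:
    name: str
    criticality: int  # business criticality, 1 (low) .. 5 (high)
    vulnerabilities: list = field(default_factory=list)  # (vuln_id, severity 1..10)

def prioritize(assets):
    """Return (asset, vuln, score) tuples, highest business risk first."""
    findings = [
        (asset.name, vuln_id, severity * asset.criticality)
        for asset in assets
        for vuln_id, severity in asset.vulnerabilities
    ]
    return sorted(findings, key=lambda f: f[2], reverse=True)

assets = [
    ITAsset("payroll-db", 5, [("VULN-A", 6)]),
    ITAsset("test-server", 1, [("VULN-B", 9)]),
]
print(prioritize(assets))
```

Note the outcome: a medium-severity flaw on the business-critical payroll database outranks a more severe flaw on a throwaway test box, which is exactly the business-context shift from vulnerability management to IT Security Risk Management described above.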


As part of an upcoming online event, I am presenting an overview of these concepts.  Rather than heading straight into the weeds, my presentation will focus on a framework for fitting these pieces together in a strategic fashion.  For those of you in the GRC world, this is an excellent opportunity to get an overview of this considerable challenge facing security practitioners.  For the security folks, the presentation can give you a higher-level perspective on a long-term strategy for communicating or positioning your security initiatives. I would like to invite anyone interested to check out this event, given by BrightTalk.


My presentation will be just one piece of this two day event.  I hope to “virtually see” you there.
