
What a week it's been! We met with Gartner analysts Roberta Witty and John Morency to present the updates to our RSA Archer Business Continuity Management & Operations (BCM&O) solution for their 2013 BCM Magic Quadrant study, announced the new solution to our customers and the public, presented a live webcast last Thursday on the new solution (here it is if you want to see it: the January 24th BCM Webcast), and just had an article published by Disaster Recovery Journal (read it here: BCM Regulatory Soup) on BCM regulatory guidance.

 

We're ecstatic to have had such a successful launch of the new solution, and the interest has been phenomenal. I would like to recognize our internal team and our BCM customer working group for their part in all of this. Customer feedback is invaluable, and I'm proud to say that plenty of it went into the new solution updates. By the way, we're always looking for more participants in our working group.

 

Amid all the research, customer input, and testing and validation we did, we also took time to stop and take stock of where the BCM industry is today, where it seems to be heading, and whether this BCM solution really addresses the industry's needs. We looked at things from a lot of different viewpoints - from the executive who has a high-level view of and interest in BCM, to the program teams that live BC or IT Disaster Recovery (DR) daily, to the business process or IT owners who have to document or test their plans infrequently. For all of them, and for us, BCM is an area of increasing importance, and we all still have so much to learn and do. There are many factors at play here and, as you can imagine, no one is standing still.

 

The fact remains that most organizations still use rudimentary and uncoordinated approaches and tools for their risk assessments, Business Impact Analyses (BIAs) and BC/DR plans. We've attempted to provide a solution that helps coordinate all of this and takes as much of the rote work out of BC/DR planning as possible, without replacing that invaluable human judgment and experience. We built the solution around best practices and industry guidance because we found that most BCM programs are looking for that extra bit of help and validation. I think the new solution does a pretty good job at this.

 

Let me just say a word or two about our new mobile capabilities. We're pleased to offer the option to access your BC/DR plans on your iPhone or iPad. Check out the app on the iTunes app store. We're even more pleased to be starting the next phase of mobile capabilities with assessments and the ability to develop your own mobile apps. Lots more to come on this topic!

 

Now that we've launched this new BCM solution, is this it? Are we done? I can emphatically say NO. But that's what is exciting. We're already thinking about how we can do more around topics like third-party continuity, cloud recovery and geopolitical feeds - plus any intriguing topics that come up from our working group, and we know they'll come up because this group isn't shy. So stay tuned for the next chapter and my next blog, where I'll talk about the keys to effectively managing a crisis event with an automated tool. Here's to many successful recoveries!

You want to know what the real answer to this whole Big Data challenge is? It's in us!

Fantastic, isn't it? Well, until that becomes "commercially viable", let's talk about what we can do today.

 

The right tool for the right job – that’s no doubt a cliché to many. But it’s surprising how often the tools at hand are used for any kind of job. In my last post, I talked about why dealing with Big Data is not just about data, but also about a new set of tools.

 

Let's dissect a use case to understand the heart of the problem. In earlier posts, I talked about clickstream data. Clickstream data is generated by user actions on web pages – everything from the components of the page that were downloaded when a user clicked on something, to the IP address, the time of the interaction, the session ID, the duration, the number of downloads triggered, the bytes transferred, the referrer URL, and so on. In "tech" speak, it's the electronic record of the actions a user triggers on a web page. All of this is recorded in your web server logs. On business web sites, these logs can easily grow by several gigabytes a day. And, as I mentioned in my previous post, analysis of this data can lead to some very beneficial insights and potentially more business. To get some perspective, if you are an Archer Administrator, check out the size of the largest log on the IIS server that hosts the Archer web application. I am guessing the largest file is easily a few hundred megabytes, if not close to a gigabyte. Or wait – Archer History Log, anyone?

Here’s a snippet from the web server log on my local Archer instance (on my laptop), which incidentally was about 17 MB in about a day (used primarily by me a couple of times a day):

[Web server log snippet]
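For readers who haven't stared at one lately, a raw IIS entry in the W3C extended log format looks roughly like the made-up line below (hypothetical host, user and URL – not an excerpt from any real log), and a few lines of Python are enough to split such a line into named fields:

2013-01-15 14:22:31 10.0.0.5 GET /RSAarcher/Default.aspx - 443 jdoe 10.0.0.7 Mozilla/5.0+(Windows+NT+6.1) 200 0 0 312

# Split a W3C extended log line into named fields.
# The field order below is the common default layout and is assumed for illustration.
W3C_FIELDS = ["date", "time", "s-ip", "cs-method", "cs-uri-stem", "cs-uri-query",
              "s-port", "cs-username", "c-ip", "cs(User-Agent)",
              "sc-status", "sc-substatus", "sc-win32-status", "time-taken"]

def parse_line(line):
    return dict(zip(W3C_FIELDS, line.split()))

Multiply a line like that by every click from every visitor, every day, and you can see how quickly the raw volume adds up.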

Now, there are those who would argue that log data is not the best example of "Big Data" – part of the reasoning being that it does have some structure. Besides, weren't businesses doing clickstream analysis before it was characterized as Big Data?

Yes, they were – but there's a little thing they do that is very inconspicuously described as "pre-processing". Pre-processing is a diversion that hides fundamental challenges in dealing with all of the data in the logs. The logs themselves, or rather the raw data in them, are second-class citizens or, even worse, "homeless". The web servers don't want to hang on to them, since the size of the logs can impact the web server host itself in terms of performance and storage. The systems that are going to use this data don't want it in raw format, and don't want all of it.

 

Typically, "pre-processing" involves some very expensive investment to clean the data, validate certain elements, and aggregate and conform it – quite often – to a relational database or data warehouse schema. Not only that, but both the content and the time window are crunched to accommodate what the existing infrastructure can handle. At the tail end of this transformation, the data is loaded into the warehouse or relational database system. Not only does this data now represent a fraction of the raw data from the logs, but it could also be several days between the raw data coming in and the final output landing in the target data source. In other words, by the time somebody looks at a report on usage stats and patterns for the day the actions were recorded, weeks could have gone by. And the raw data that fed this process is usually thrown away. There's another big problem in this whole scenario: you are clogging your network with terabyte-scale data movements. Let's face it – where and how do you cost-effectively store data that is coming in at a rate of several hundred gigabytes a day? And once you do, how do you cost-effectively and efficiently process several terabytes or petabytes of it later? This is a Big Data problem. We need a different paradigm to break this barrier.

What if, instead of trying to pump that 200 GB daily weblog into a SAN, you could break it apart and store it on a commodity hardware cluster made up of a couple of machines with local storage?

And what if you could push the work you want to do out to those machines, alongside the units of data that make up the file? In parallel? Instead of moving the data around?

 

Say hello to Hadoop – a software framework that has come to play a pivotal role in solving Big Data problems. Hadoop allows for the distributed processing of large data sets across clusters of computers using simple programming models. At its core, Hadoop consists of two components:

1) HDFS, or the Hadoop Distributed File System, which provides high-throughput access to data and is designed to run on commodity hardware

2) MapReduce: a programming model for processing large data sets

So what makes Hadoop the right tool for this problem?

  • Storing very large volumes of data across multiple commodity machines: With HDFS, large sets of large files can be distributed across a cluster of machines.
  • Fault tolerant: In computations involving a large number of nodes, failure of individual nodes is expected. This notion is built into Hadoop: data from every file is replicated across multiple nodes.
  • Move the computation, not the data: This is one of Hadoop's core assumptions: "moving the computation is cheaper than moving the data." Moving the processing to where the data lives not only reduces network congestion, but also increases the overall throughput of the system. This is known as "data locality". (A minimal mapper/reducer sketch follows this list.)
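To make the model concrete, here is a minimal sketch of the kind of job you might run over those weblogs: a Hadoop Streaming-style mapper and reducer, written in Python, that count hits per requested URL. The field position and the launch command are illustrative assumptions, not a reference to any specific Archer or IIS deployment.

# mapper.py - emit "url<TAB>1" for every request line (W3C-style field layout assumed)
import sys

for line in sys.stdin:
    if line.startswith("#"):          # skip W3C header/comment lines
        continue
    fields = line.split()
    if len(fields) > 4:
        print("%s\t1" % fields[4])    # position of cs-uri-stem assumed for illustration

# reducer.py - sum the counts per URL (Hadoop delivers mapper output sorted by key)
import sys

current_url, count = None, 0
for line in sys.stdin:
    url, value = line.rstrip("\n").split("\t")
    if url == current_url:
        count += int(value)
    else:
        if current_url is not None:
            print("%s\t%d" % (current_url, count))
        current_url, count = url, int(value)
if current_url is not None:
    print("%s\t%d" % (current_url, count))

You would submit something like this with the Hadoop Streaming jar (roughly: hadoop jar hadoop-streaming.jar -input /logs/weblogs -output /reports/hits -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py), and HDFS plus data locality take care of spreading both the log files and the work across the cluster.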

 

In my next blog post, I'll explore some more aspects of Hadoop and talk about additional tools in the Big Data quiver that get you fully armed for your Big Data challenges.

In my last post I discussed how critically important risk taxonomy is to the success of an ERM program – the need for the organization to agree on risk-related terminology, formalize it as part of the organization's risk management practices, obtain formal sign-off from executive management and the board of directors, communicate it to stakeholders, and operationalize it within the organization's governance tools.

 

Another critical aspect of ERM program enablement is the attitude and commitment of the organization’s senior leadership.  This “Tone at the Top” significantly influences the effectiveness of an organization’s ERM program in the following ways:

 

  • The scope of the ERM program.  To truly be an ERM program the scope must be holistic and include all operating units, geographies, risk types, products, processes, etc.
  • The degree to which managers feel responsible for risk management.  Optimally, risk management should be the responsibility of each and every manager, regardless of their position within the organization, and as risk managers, each manager should be accountable for understanding their key risks and maintaining the appropriate internal controls within their domain of responsibility.
  • The consistency of risk decisions.  Consistent risk decisions are encouraged by establishing and enforcing aligned risk appetite, tolerance, and delegated management risk-taking authorities, and escalating decisions to successively higher authority as thresholds are exceeded.  The degree to which the “official” risk management rules are set aside to fast-track an initiative, accommodate a pet project, or avoid confronting an exceptional, difficult, or politically connected manager will undermine the effectiveness of the ERM program.
  • Risk management agility.  The probability of the organization meeting its objectives, whatever they may be, depends on how quickly management becomes aware of and responds to changes in its risk profile.  Fostering the necessary information transparency throughout an organization, and the accountability to respond when appropriate, requires the commitment of senior leadership.
  • The amount of resources committed to manage risk.  Capital investment and human resource commitments should be aligned consistent with the degree of effectiveness necessary to manage risk within the appetite and tolerance of the organization.
  • Aligning compensation to desired behavior.  Incentive compensation that influences risk taking outside desired boundaries or potentially compromises the role of persons in control positions is inconsistent with sound risk management practice.

 

Individuals charged with responsibility for the effectiveness of the organization's ERM should seek to secure a tone at the top, both in word and action, that leaves no doubt about the organization's commitment to risk management best practices.  Fortunately, executive management and boards of directors have plenty of obligations and incentives to establish a strong tone at the top, including regulatory obligations, the threat of shareholder suits, and empirical evidence of the superior performance of organizations practicing ethical business and holistic risk management. Focusing senior management's attention on these obligations and incentives may go a long way toward securing the necessary commitment, if such commitment is not everything it should be today.


Continuous monitoring, also known as continuous controls monitoring, continuous re-authorization, continuous diagnostics and mitigation (CDM), or just CM, is not yet mandatory in the federal government, but will be soon. All government agencies are supposed to be thinking about it, and planning and budgeting for it in the future, but only a handful of the most forward-thinking agencies have tried to do it in earnest.

The largest part of CM is continuously assessing your residual risk by checking your controls. It means continuously giving assurance to the Authorizing Official (AO) who gave you permission to operate your information system that it is still at least as secure as the day they approved it. It is not to be confused with network security monitoring, and it is not something that is satisfied just by having a SIEM installed. We are talking about a broader and different context of monitoring.

One large obstacle in the discussion of continuous monitoring is the semantic differences and confusion caused by the terms continuous, constant, and automated. Continuous is in many ways just a relative term. OMB A-130 defined a three-year cycle for assessing controls, so any increment less than that can seem “continuous” in comparison. “Continuous” does not mean “constant”, however. NIST states in 800-137 that controls should be “assessed and analyzed at a frequency sufficient to support risk-based security decisions to adequately protect organization information”.

 

 

Another layer of confusion ensues when people start to equate “continuous” with “automated”. All of an Information System’s controls should be continuously monitored whether or not there are automated means. Some controls simply must be assessed manually.

Those agencies that are attempting continuous monitoring, however, often focus on a narrow band of controls, like vulnerability and configuration scanning, which are easily automated and/or seem more critical than other controls. Process-oriented controls like policy-writing and personnel security haven't received much attention in the continuous monitoring world. While some controls are less important, none are unimportant, and there is no consensus on the frequencies and methods that should be used for an effective and comprehensive continuous control monitoring strategy (stay tuned for an upcoming white paper on this subject). There are some written guidelines in the federal community, but many of them are in draft. The first two publications to mention concerning CM are NIST SP 800-37 Rev 1 and NIST SP 800-137.

800-37 Rev 1 describes NIST's Risk Management Framework (RMF). Rev 1 was the first time NIST defined its RMF (the new way of saying C&A), and a new emphasis was put on the monitoring phase to set the stage for 800-137. SP 800-137 is the de facto CM bible right now. http://csrc.nist.gov/publications/nistpubs/800-137/SP800-137-Final.pdf . It is CM theory, not instructions, and it is written at a fairly high level. It can feel nebulous at times for people who are looking for explicit guidance ("Just tell me what I need to do to comply!"). Chapter Three is the most specific part of the publication, and I will be covering a lot of that material in the third part of this blog series. SP 800-137 does a great job of emphasizing that CM means both continuous manual and automated assessments.

There are three other CM documents worth mentioning:  NISTIRs 7756, 7799, and 7800. http://csrc.nist.gov/publications/PubsNISTIRs.html. These cover the Continuous Asset Evaluation, Situational Awareness, and Risk Scoring Framework Extension (CAESARS FE). All three are still in draft.  The point of these three NISTIRs is to cover the CAESARS model defined by DHS, which is a CM model based, in turn, on the iPost model developed by the State Department. All of these models will be covered in more detail in my next installment of this series.

NISTIR 7756 is the introductory document of the CAESARS FE. It defines the components that should be present in a mature CM implementation and the roles of those components. They are referred to as subsystems, and there are six: Presentation/Reporting, Content, Collection, Data Aggregation, Analysis/Scoring, and the Task Manager.  Most of the data collection discussed in 7756 is there to support automated monitoring. The report is also careful to point out, however, that a CM model should also allow for non-automated controls, which "will require some human data collection effort".

NISTIR 7799 builds upon 7756. It takes the six previously defined subsystems and describes ways they can act in very complex use cases. The entire report is devoted to describing all of the possible work flows between the six subsystems. This means defining all of the possible inputs and outputs from one subsystem to another.

NISTIR 7800 defines how to bind the model described so far in 7756 and 7799 to three specific CM domains: vulnerability, configuration, and asset management. While describing how to do this, the report covers special XML formats called Security Content Automation Protocol (SCAP) specifications. These are covered below. Given the fact that 7800 only covers three CM domains and NIST defines “at least 11” in NISTIR 7756, one can guess that there will be future draft NISTIRs that cover the domains not listed in 7800 such as event, incident, malware, etc.

So, that’s a good starting point. Tune in February 11 for the next installment which covers iPost, CAESARS, CAESARS FE, what you can steal from those models, and how to apply those ideas to your organization.

Email me with any comments. Thanks!

Chris

In my previous blog, I introduced the idea that the concepts around security incident response need to evolve based on the threat landscape facing organizations.   The first step in heading towards this next generation of security operations is improving the visibility into what is going on with the technical infrastructure.   I used the analogy of giving telescopes to the lookouts on the castle walls to see the impending attack sooner.

 

First, our lookouts need to be looking in the right direction and taking in the activities in and around our castle. Real Time Monitoring is necessary to capture events and organize the data such that the security operations function can make sense of the activity.  Security Information and Event Management (SIEM), log collection and correlation systems are examples of this infrastructure.  This infrastructure also includes file integrity monitoring systems, system event logging systems, application logging systems and any other technology, role or process that is actively monitoring systems.

 

Secondly, the lookouts need to not only see, but understand, what is going on around them.  So a second element is enabling Forensics and Analysis to review security information from the real time monitoring processes and perform analysis based on expert input to identify patterns of active threats in the infrastructure.  This also includes the evidence collection, preservation and analysis processes that would support Incident Management and Investigations.

 

Most organizations have these capabilities.  The depth and breadth of the ability to capture and inspect events and network traffic are varied but this infrastructure has been part of security strategies for a while.   There are two key inputs that are needed to really move the needle when it comes to improving these capabilities within Security Operations.

 

'Real time' event analysis opens up many challenges – too much data moving too quickly towards an overwhelmed team of people.   The technologies for these monitoring processes are getting better.  A dimension that can greatly advance the process is feeding the criticality and data profile of devices into the mix.  Understanding the connection of the devices to business processes, and ultimately what data is flowing through those devices, provides ‘business context’ and is the next evolution of “tuning” for real time monitoring.

 

The second factor in improving monitoring processes is security intelligence and ‘indicators of compromise’.   Known malicious code, URLs, hosts and other data will assist security operations in identifying possible attacks or actual breaches.   This information, coupled with the ‘business context’, greatly improves the prioritization ability of security operations.
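As a rough illustration of that idea – not a description of any particular RSA product – here is a small Python sketch that scores incoming alerts by combining device criticality (the 'business context') with a match against a list of known bad indicators. The field names, hosts, addresses and weights are all hypothetical.

# Hypothetical alert triage: business context + threat intelligence
asset_criticality = {                 # assumed to come from an asset/BIA repository
    "pay-web-01": 9,                  # hosts payment processing
    "hr-file-02": 6,
    "test-vm-17": 2,
}

known_bad_ips = {"203.0.113.45", "198.51.100.7"}   # example IOC feed (documentation IP ranges)

def score_alert(alert):
    """Combine device criticality with IOC matches to rank alerts."""
    base = asset_criticality.get(alert["host"], 1)
    ioc_hit = 10 if alert["remote_ip"] in known_bad_ips else 0
    return base + ioc_hit

alerts = [
    {"host": "test-vm-17", "remote_ip": "192.0.2.10", "signature": "port scan"},
    {"host": "pay-web-01", "remote_ip": "203.0.113.45", "signature": "outbound beacon"},
]

# Highest scores first: the beacon from the payment server outranks the lab port scan
for alert in sorted(alerts, key=score_alert, reverse=True):
    print(score_alert(alert), alert["host"], alert["signature"])

The point is not the arithmetic; it's that the same event means very different things depending on which device it touches and what intelligence you can match it against.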

 

I won’t keep the analogy running too much longer and exhaust my readers, but I think it is an apropos way to look at this.   The first iteration of real time monitoring placed lookouts on the ramparts focused on watching everything going on OUTSIDE the castle.  Next, we told the lookouts to watch both outside and inside the castle.  Now we need to give the lookouts better methods to view what is going on and methods to identify areas of surveillance (key vulnerable areas, indicators of malicious activity, etc.) that need extra attention.

We had a great turnout for our weekly webcast yesterday, which was all about BCM for financial services.  I especially want to thank Dan Minter from Equifax for co-presenting with me.  Here's the link if you want to listen (Listen to the Recording) or see the presentation (View PDF). Dan also recently presented at the RSA Archer Roadshow, and here's his full presentation: Equifax BCM and RSA Archer presentation.

 

I had a few thoughts after the call that I wanted to talk about here.  Isn't it nice when things fall into place or complement each other and make our lives easier?  In a highly regulated industry like financial services, it feels at times as if regulations and regulators have different agendas, priorities and approaches - and sometimes they do.  However, sometimes things line up nicely, as is the case with the authoritative sources we talked about on the webcast yesterday.  These are just a few, but note the common thread in the wording:

 

  • ISO 22301: A holistic management process that identifies potential threats to an organization and the impacts to business operations those threats, if realized, might cause, and which provides a framework for building organizational resilience with the capability of an effective response that safeguards the interests of its key stakeholders, reputation, brand and value-creating activities.
  • FFIEC: Specifies that directors and managers are accountable for organization wide contingency planning and for  timely resumption of operations in the event of a disaster.
  • FDIC: ...perform an appropriate enterprise-wide business continuity risk assessment, which duly considers the results of the departmental business impact analyses.

 

So, here are a few points to consider.  Have you ever had a hard time justifying your BCM program, its funding and its priority?  These regulations, among others, each highlight the need for holistic, organizational, enterprise-wide resilience.  Not a "check the box" program, but real strategies tied into those of the organization, with the priority and funding to build organizational recovery.  I once consulted for a utility company that consistently included business continuity as one of its top ten strategic objectives for the year - and this was 10 years ago, before all of the attention BCM has started to get.

 

This leads into something else, which is the need to coordinate with and leverage other related disciplines across your organization.  For example, in your BCM program do you evaluate and account for risks that could result in a business disruption?  Think your Enterprise Risk Management (ERM) function does the same?  How about your loss prevention, reinsurance or legal groups, if you have them?  Do you have anything in common with them?  Would aligning with and leveraging these groups help you drive holistic, organizational and enterprise-wide BCM?  I'd also venture to guess that the better story you have around how your BCM function has coordinated, planned and tested with these other related groups, the better you'll fare in your next audit - not to mention how much better and stronger your BCM program will be.

 

My last point, and one that Dan hammered home on the webcast, is that your external partners have a vested interest in your organizational resilience, want to contribute to it, and are a critical part of your "holistic management process".  Equifax showed us how their planning and BIA approach considers the effects on their customers.  Like those internal groups that also perform loss prevention, risk mitigation and other resilience-type activities, your external partners are a critical piece of the puzzle and an integral part of your contingency planning activities.

 

In closing, and not just to adhere to a methodology or regulation, let's think holistically about organizational resilience.  There are lots of critical dependencies and intersections you'll come across.  Look for them and build them into your enterprise-wide recovery program.  You'll have a much stronger approach, and your recovery results will show it!

Hello everybody! A slightly belated Happy New Year to you all. With 2012 barely behind us, 2013 is already shaping up to be a very busy and very exciting year for us as we race ahead with product innovations and thought leadership. RSA recently sponsored a series of roundtable webcasts, and I had the pleasure of participating as one of the panelists. Our moderator was Michael Rasmussen, noted GRC pundit and a member of the Leadership Council of OCEG, the Open Compliance and Ethics Group. The focus of our discussions was the different stages of an organizational policy management program. Leading up to the discussions we helped to create a series of illustrations that were featured over several articles published in Compliance Week.

 

Over the next few posts I’ll recap these discussions and share some insights. One of the areas of focus was tracking changes that affect policies. Shifting regulatory landscapes, third-party relationships, business climate changes such as expanding into emerging markets or M&A, all serve to influence and impact organizational risk and policy. How do we detect and manage this swarm of change and measure the potential impact? What are the best ways to demonstrate diligence and manage risk? How do we ensure organizational policies remain aligned?

 

During the webcasts our audience was polled on how their organizations kept up with changes that could impact policy. This is one of the toughest challenges that companies face. Not only is the global regulatory environment a growing burden, but the ability to demonstrate consistent and timely diligence can itself be a burden. One of the most revealing statistics our audience reported was that over 80% of them used email and ad-hoc, fly-by-the-seat-of-the-pants approaches as their primary means of keeping pace. Perhaps that's why Gartner reported regulatory uncertainty as the top risk identified in a recent global CEO study. So where do we begin to gain a solid foothold on the problem?

 

Whether it's legal and regulatory influencers or changes in business direction, the first step is to establish clear ownership of the process. Today, this role may live in the legal department, or maybe it's just thrown out to the business. Recognizing this as a core enterprise process comes first; building a cross-functional team to own the methods the organization uses to keep track of regulatory changes is the imperative that follows.

 

Secondly, there are several commercial "watchdog" services available that can monitor and report on changes to regulations and a variety of other things. These services may also bundle legal opinions and advice with certain subscriptions, which can help their customers gauge the initial impact. But there are a number of free options available too. In the US, for instance, nearly every major government agency provides RSS feeds to report its activities, including notices and proposed rule changes. Aggregation sites like Justia.com also provide consolidated RSS feeds for most of the Federal Register.
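As a simple illustration of tapping those free feeds, here is a short Python sketch that uses the feedparser library to poll an RSS feed and flag entries mentioning keywords you care about. The feed URL and keyword list are placeholders for illustration, not a recommendation of any particular source.

# Minimal regulatory-feed watcher sketch (requires: pip install feedparser)
import feedparser

FEED_URL = "https://www.example.gov/rss/proposed-rules.xml"   # placeholder feed URL
KEYWORDS = ("privacy", "business continuity", "capital")      # terms your team tracks

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    if any(keyword in text for keyword in KEYWORDS):
        # In practice you would route matches into a defined review workflow,
        # not just print them.
        print(entry.get("published", "n/a"), "-", entry.get("title", ""), "-", entry.get("link", ""))

Keyword matching is crude, of course, but even this much moves you from "someone saw it in an email" toward a repeatable intake step.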

 

Whether using a commercial service, tapping into free resources, or both, receiving alerts is only half the battle. What do you do with the information? How organizations respond to these changes will determine whether they remain compliant, and the ability to document the impact is critical. The ability to filter down to the business-critical items and put them through a consistent process of review, impact analysis, and action is the key. This can't just be a thread of emails bouncing around the organization.  It needs to be a defined process with a clear documentation trail, not only to remain organized but also to demonstrate proper diligence around the process. Next time we'll explore the impact of internal changes and the elements of review workflow and response. Until then, all my best for 2013!

I'm thrilled to announce that on January 23, 2013 the new Business Continuity Management (BCM) solution from RSA Archer will be available!  We've offered a BCM solution for about five years now with over half of the Fortune 100 companies using it, so you may ask - why a new solution?

 

We're all having to improve our game and some interesting metrics show just how prepared we really are. For example, Gartner estimates that only 35% of organizations have a comprehensive recovery plan in place. According to Strategic Research, the cost of downtime is estimated at close to $90,000 per hour, and Deloitte says the survival rate for companies without a recovery plan is less than 10%.

 

Implementing BCM is a tough job by itself, but things are getting tougher.  Companies are asking for more guidance, structure and approaches to help them.  There seem to be more disruptive events, emphasizing the need for better recovery capabilities.  On top of that, organizations are more global, which makes the repercussions greater.  The new ISO 22301 standard asks more of us, requiring that we integrate BCM into the strategic fabric of the organization and into such disciplines as Enterprise Risk Management and Governance, Risk and Compliance.  As a result, BCM programs everywhere are going through a transition from a 'check the box' activity to a central focus with lots of internal and external attention.

 

The days of separate BCM, IT Disaster Recovery (DR) and Crisis Management (CM) programs are past; these disciplines must be coordinated, and that's tough to do without the right toolset.  As a result, an increasing number of organizations are recognizing the need to revamp their approach to BCM, which includes replacing legacy systems that are difficult to use and maintain or not robust enough to accomplish their recovery program objectives.

The new BCM v4 solution enables you to tackle these challenges and more.  I invite you to take a look at the new solution.  Attend our January 24th webcast and see what it has to offer your organization and how you can meet your recovery planning goals.

One of the great benefits of enabling an Enterprise Risk Management program is the ability to see and consistently manage risk regardless of where it resides within the organization.  Consistent risk management depends on a lot of different things, but foundationally the organization must agree on certain key terms, and these terms must be clearly communicated to management and the board of directors and routinely reinforced in risk discussions and as management turnover occurs.

 

An organization’s risk taxonomy is the language of how the organization talks about risk.  Identifying, measuring, deciding, treating, and monitoring risk cannot be done consistently without agreement on the definition of the following terms:

 

  • Risk – How does the organization define risk and does it include both negative events and the cost of opportunities forgone?
  • Categories of Risk – What are the risk categories that the organization wants to consider in its discussion of risk and how are they defined?  Risk category examples include market, credit, liquidity, operational, compliance, strategic, reputation, etc.
  • Internal Control – What is the definition of an internal control, how is it constructed and classified (manual, automated, preventive, detective, etc.)?
  • Inherent vs. Residual Risk – Will both of these terms be used? 
  • Loss Event – What constitutes a loss event or near miss and in what categories will losses be captured and catalogued?
  • Risk Assessment – What are the acceptable methods for assessing risks (qualitative, quantitative, modeled), and under what circumstances is it acceptable to use each method?  Will assessments consider likelihood/probability, frequency, impact, or other variables? Will risks be assessed on a discrete or systemic basis, or both?  How must the assessments be documented?
  • Risk Appetite and Risk Tolerance – What does risk appetite and tolerance mean to the organization and how should risk decisions be made within the context of risk appetite and tolerance?
  • Risk Rating Scales – Has the organization chosen a risk rating scale such as high/medium/low, 1-5, and/or monetary scaling?  For whatever scale has been selected, what is the definition of high, medium, low, 1-5, etc.?  Are these scales aligned to the definition of materiality used in the company's financial statements?  Are the scales consistent with how the company's regulator evaluates the company?
  • Policy, Procedure, Regulation, Obligation, Rule – What do these terms mean to the organization? (A rough sketch of how a taxonomy like this might be captured in a tool follows this list.)
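Pulling a few of the terms above together, here is a rough Python sketch of how a shared taxonomy might be captured as structured reference data so that field names and rating scales stay consistent across tools. The category names, scale values and residual-risk arithmetic are illustrative assumptions, not a recommended standard.

# Illustrative risk taxonomy as shared reference data (values are examples only)
RISK_CATEGORIES = {
    "market": "Risk of loss from movements in market prices",
    "credit": "Risk of counterparty default",
    "operational": "Risk of loss from failed processes, people, or systems",
    "compliance": "Risk of legal or regulatory sanctions",
}

RATING_SCALE = {1: "Low", 2: "Minor", 3: "Moderate", 4: "Major", 5: "High"}

def residual_rating(category, inherent_rating, control_effectiveness):
    """Derive a simple residual rating from inherent risk and control effectiveness."""
    if category not in RISK_CATEGORIES:
        raise ValueError("Unknown risk category: %s" % category)
    residual = max(1, min(5, inherent_rating - control_effectiveness))
    return RATING_SCALE[residual]

print(residual_rating("operational", 4, 2))   # -> "Minor" under this toy scale

The specific values matter far less than the fact that everyone, and every tool, draws from the same agreed list.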

 

While it is essential that an organization have an agreed-upon taxonomy to consistently manage risk, organizations should also consider aligning their taxonomy externally.  There are great benefits to an organization aligning its risk taxonomy with common standards such as ISO 31000 or COSO, with the taxonomy used by its regulatory bodies, and with the taxonomy used most often by its customers and partners.  The broader the agreement an organization and its constituencies have about risk, the more efficiently and effectively risk can be managed.

 

Optimally, terminology should be formalized as part of the organization's risk management practices, approved by senior management and the board of directors, and communicated to all stakeholders, and the field names and reports in the risk management and governance tools used throughout the organization should be standardized on these terms and definitions.

 

I would be interested in hearing from others on this subject and whether there are other terms and definitions that should be included in this list of essential risk management taxonomy.  In later blog entries I will discuss some of the other factors that contribute to a good risk management program.

In my last post, I covered certain elements of Big Data and how to identify with Big Data. Not everybody needs to deal with Big Data, but those who do quickly realize that the hammers and wrenches they have been using for traditional data are no longer the right tools.

 

Popular websites and portals easily get several million visitors a day and several billion page views per day. This "clickstream" data is very log-like and, while it has a pattern, does not necessarily fit the definition of "structured". Further, the rate at which this data streams in is very, very fast.

 

Facebook gets over 2 billion likes and shares a day – to many, this is "fun" data, and nobody really looks behind the scenes (nor do they need to) to see how Facebook manages it. Today, this type of data (social media) is actually being mined by organizations to do things like "sentiment analysis". This technique is very useful to businesses in making "course corrections" based on their interpretation of "sentiment", say towards their products or product campaigns. Similarly, "Likes" can be used for targeted advertising and marketing if it's a page that's owned and operated by a business. When you "like" something, news feeds and ads related to that product or service are constantly fed to you.

 

Consider the realm of security. Protecting cardholder information is critical and a top priority for financial institutions. Understanding purchase patterns and buying behaviors is key to detecting fraud early and accurately. Payment platforms have to deal with several sources: point-of-sale systems, websites and mobile devices.  Although many institutions do fraud detection today, they rely on smaller subsets of the data – a technique known as "sampling" – to build the data sets they eventually run analytics on. The rest of the data is pruned off onto magnetic tape (for regulatory requirements) and may never see the light of day, or the probing of BI tools.

 

Problems like these are easier to relate to when you think outside of business and IT. Let's take a very simple (though a little exaggerated) scenario. Say I am helping my school-goer with a data collection project to be done during spring break. My son wants to count and group, by color, the cars that come into our neighborhood street – say 7-8 am and 5-6 pm, for 5 days. We have 4 people in our household. One way we could do this is to have one person cover each of the days, with someone maybe covering both day 1 and day 5. Another approach could be to have one person cover the morning hour and another the evening hour.

Pretty straightforward, right? The tools we would use are no more than paper, a pencil and maybe a calculator (mental math is probably more than enough). The process is also not too complicated – look out the window or sit outside by the door and start marking off counts by color:

Green : | | | | | | | | | | |

Red: | | | | | | | | | | |  | |  |  |  |  |  |  | | | | | |

Black: | | | | | | | | | | | | | | | | | |

 

At the end of the 5 days, we sit together by the breakfast table and total up the counts of the different car colors.

 

Now let's say you want to cover two streets (two neighborhoods) – you can't just sit outside your door or look out your window anymore. You can either enlist a friend to help in the other neighborhood or sit in front of your friend's house for a couple of hours every day. That's not bad – you have to go out of your way to enlist an additional resource (a friend) or an additional system in the process (your friend's house) – but it's still doable with paper, a pencil and maybe a calculator (I know you probably don't need one, but it's a handy tool lying around the house).

 

You innocuously post this little project on your Facebook page. Social media swings into action – ten subdivisions are now wildly interested in knowing the count of cars, grouped by color, in all of their neighborhoods, not just for 2 hours a day but for 8 hours every day. They want you to lead this effort. They will anxiously, excitedly wait for the results on the 6th day.

Ten subdivisions – let's say each subdivision has 10 streets. That's counting cars in and out on 100 streets for 8 hours a day.

This is a wildly exaggerated scenario – but consider this: even if it were three subdivisions with 30 streets and 2 hours a day, your process for one street – sitting outside your doorstep for an hour, counting cars on paper and finally tallying at the end of day 5 – is no longer feasible. It's certainly not feasible for 100 streets and 8 hours each day.

Your process needs to change to handle this scenario – you will need new tools (spreadsheets, maybe a tablet) and new ways to efficiently divide up the work and do the final aggregation. And you certainly need more people involved and doing the work.
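That split-the-work-then-combine pattern is worth pausing on, because it is exactly the shape of the approach the Big Data tools we'll look at later take: divide the work, process the pieces in parallel, then aggregate. A tiny Python sketch of the idea, using made-up per-street tallies:

# Each "worker" (one person, one street) produces a partial tally; we merge them at the end.
from collections import Counter

street_tallies = [                          # hypothetical per-street counts
    Counter({"green": 11, "red": 24, "black": 18}),
    Counter({"green": 7, "red": 15, "black": 9, "white": 4}),
    Counter({"red": 30, "black": 12, "white": 6}),
]

total = Counter()
for tally in street_tallies:                # the "final aggregation" step
    total += tally

print(total.most_common())                  # [('red', 69), ('black', 39), ('green', 18), ('white', 10)]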

So, if your business needs require you to ingest and process this type of data – where the volume, the velocity and the variety are far greater than any scale you have dealt with before – you need a different approach to tackling it. You need new tools and new ways of handling this data deluge. This is really what Big Data is about.

I am purposely peeling back the layers of the Big Data onion slowly. Many times - too often, in fact - people think about the data deluge in terms of one element or one characteristic of Big Data (often volume, sometimes variety) and immediately run off to acquire tools for that element.

In the next blog post, I will start delving into some of the technologies and tools that are necessary when you start down the Big Data path.

 

Rajesh Nair

Senior Product Manager, RSA Archer

Happy New Year!

 

Well, we are now finally entering into the season of federal continuous monitoring. In the past few years just a few trailblazers tried it, and there were lessons learned and false starts, but 2013 looks to be the year the government is moving in earnest to really embrace CM.   DHS has released their RFP for Continuous Monitoring as a Service (CMaaS) solutions. Supposedly this year, there will be some update that makes CM mandatory. Will it be an Executive Order? Will Congress be able to pass a bill after so many failed attempts? Will it be an update to the crusty, old OMB A-130? (one of the more interesting rumors) Whatever the case, it’s time we all “get smart” on the subject of continuous monitoring.

 

To that end, I will be writing a series of blogs this quarter covering the subject of continuous monitoring.

 

21 January – Continuous Monitoring: What It Is and Isn't. This is an introduction that will cover definitions and concepts, including the semantic differences that drive everyone crazy and often derail conversations on CM (continuous vs. constant vs. automated, etc.). I will give a synopsis of all the relevant documents and their relationships (800-137; NISTIRs 7756, 7799, and 7800; etc.).

 

11 February – Practical Models: iPost and CAESARS / CAESARS FE.  I will discuss the practical models that have been developed so far that we can learn from, use, or emulate. The two models I will cover will be iPost and CAESARS (FE). I will compare and contrast them and discuss their strengths and gaps. Seeing what others have done will hopefully give you ideas for what you want to do, which leads to…

 

4 March – Implementation! What are your options for implementation? What will the challenges be? How do you devise an implementation plan without killing your compliance / IA staff from the strain?

 

Throughout this series I will also try to sprinkle in some relevant links to articles as they apply.

If there are subjects pertaining to CM that I haven't mentioned above and you would like to see, email me. I will try to work them in or add another blog onto the end of the series.

 

Thanks so much and tune in the week of January 21st to start this exciting and enlightening journey!

(or not, but understanding this CM stuff is pretty important for future job security – just sayin’…)

 

“They have got to be so scared to miss it! So terrified!”

- Bill Murray, Scrooged

 

Chris

Over the past few weeks, I have been watching some interesting articles trickle across my screen as I peruse industry news.   Dark Reading has been posting recaps of significant security attacks and breaches from 2012 as they review the year.    Each one of these articles (and this is just one source of industry news) captures security threats in their worst form – the aftermath.  Just a sampling of topics to think about:

 

Insider Threats:  “Five Significant Insider Attacks Of 2012” highlights the challenge of managing insider threats.  This is a serious challenge since the problem hinges on something many companies truly take pride in – their own employees.

 

Malware: Malware in 2012 took a vicious and ominous turn.  It is no longer the random act of some programmer striving for short-lived and notorious programming street cred. Malware has become the tool of choice for calculated, nefarious crimes.

 

Data Breaches:  Another article, "10 Top Government Data Breaches Of 2012", focuses on government breaches but highlights just how serious some of these breaches can be in compromising personal information.   Healthcare information faced the same serious threats, as reported in another Dark Reading story, "Most Healthcare Organizations Suffered Data Breaches".   There were other massive data breaches reported in 2012, and these articles are just slivers of the big picture.

 

What does this mean as we head into 2013?  It means that the “incident response” plans that were drawn up, tested, implemented and put up on the shelf a few years ago are not prepared for this new battleground.   Security threats – from hacktivists to criminal organizations to state entities – have more tools, techniques and attack vectors than ever before.    Just like when the first trebuchet and catapult arrived on the scene outside the castle, it is that time, once again, when the defenders need to re-think their fortifications, evaluate the ramparts and re-invent defenses and lines of resistance.

In the next few blogs, I will discuss the attributes of the “next generation of security operations”.  The tenets are simple:

  • Increase visibility across the enterprise to identify active threats quickly;
  • Understand the business impacts to better respond; and
  • Utilize resources to the fullest.

 

 

To further my castle analogy, we need to arm the lookouts with telescopes to see the catapults being moved on the battlefield sooner.  We need to know where the castle walls are the thinnest and most vulnerable while understanding where the crown jewels are secured.  We need to marshal the foot soldiers to the right rallying point to meet the enemy.   This is the new paradigm of security operations.  The ‘incident plan’ of the past needs to evolve if we want to change the outcomes of the stories I referenced.  I would hate to be sitting in January 2014 reading some of these same types of articles.  It is too depressing of a way to start off the year.  However, with the right strategy, 2013 can be a year of change for security operations.

 

To get some more insight on the upcoming challenges in 2013, check out RSA’s SBIC Trends Report: Information Security Shake-Up: Disruptive Innovations to Test Security’s Mettle in 2013 to see how some of the industry’s top leaders are approaching top of mind security issues.

This is my first in a series of posts on the topic of data, data management and, yes, ultimately tying it all into Archer and GRC. But before we dive into Archer and GRC, I am going to talk about data management first, because fundamentally data is where it all begins. Right? And what better topic to start off with than something that is trending red hot on the data meter: "Big Data". Besides, we all have a handle on "small" data, right?

 

In the movie "BIG", the character played by Tom Hanks literally grows big overnight. This overnight transformation posed immediate problems – he couldn't wear the same outfits anymore, he couldn't use his "boy" bed anymore, his normal mode of transportation no longer fit him, and so on. At the same time, he slowly began to see and use the advantages of being BIG.

 

We can certainly draw from this, if only to shed light on some common concerns about Big Data.

 

1) You don't wake up to "Oh my God, where did all this data come from?" Well, hopefully you don't. In general, most organizations don't get a large shipment of data dumped in their backyard one day in one big visible heap. In fact, in almost all organizations, data has been flowing in over the years; it's been ingested, cleansed, analyzed, filtered, processed, published and archived. Until a few years ago, most of this was data from sources that organizations knew they needed to draw information from. Also until a few years ago, you had a data "funnel" – lots of data being ingested, but eventually, after you analyzed it, you only processed and persisted a small percentage of it. That said, the variety of data and the rate at which it has been flowing in have picked up in the last few years.

 

2) Do you (or I) have a Big Data problem? I have heard this posed over and over again. Data is your "opportunity", not your "problem". The real question is what business problem you have, or what opportunity you can now create, by harnessing "Big Data".

 

3) What do I do with my old (small) data? Absolutely continue to use it the way you have, because your business still runs on it. Big Data doesn't mean that you have to completely rethink and rehash everything you have, as we will soon see.

 

That's fine and dandy, you say, but can I identify "Big Data" when I need to solve a business problem? Ahhh, now let's talk! This is a very valid question, so let's discuss it a bit. Fundamentally, you need to first identify your need and then go into "discovery mode" to find the data that satisfies your requirements. So how do you discover "your" Big Data? We could start with a definition, but we will leave technical definitions of Big Data aside for now, as many pundits have already defined it for the general use case. We will "characterize" Big Data shortly. As I mentioned, keep presumptions aside and focus on identifying the data you need:

  1. First, don't search for Big Data. When you get there, it will be staring at you. OK, let's not get sidetracked. Start by listing all the data sources that you believe will collectively give you the data you need to solve your business problem or create your business opportunity. The key here is identifying "the data you need", not identifying systems in your organization, and this is a very important change in mindset. Why? Because traditionally, whether you like it or not, and whether consciously or subconsciously, many look at what data is available within the organization as opposed to what data is needed.
  2. I mentioned the change in mindset needed here. That being said, chances are you will find that a lot of the data is already "groomed" and "usable" in systems you have today – transactional systems, data warehouses, data marts, etc. You may need to use more data from these sources than you did before – that's OK. The more your organization's information systems can be leveraged, the better.
  3. So far so good - you are happy, you haven't identified anything that really can't be handled by the organization's data sources. But then you start thinking about other pieces of data you really need to achieve your goal:

Maybe you have a popular web retail front end, and one of your objectives is to improve your understanding of your customers' "click-through" on your website. This can give you all sorts of insights to improve, say, purchasing likelihood, "website stickiness", etc. So you want to capture and analyze clickstreams starting from the search page on which a visitor found your link. You want to look at the hit date and time, download time, user agent, search keywords, etc. You are now thinking about where and how to ingest and store this data, pre-process it, and build a predictive model before loading certain information into the warehouse. And you want to keep all that data so you can mine it over time.

OR

 

Maybe you are in the energy business and want to take the lead on smart meters by collecting meter data on an hourly basis. No, wait – you want to leapfrog the competition by building a solution that can take in readings from 10 million meters at 10-minute intervals. That's 60 million readings per hour, or 1.44 billion readings per day.

OR

 

Maybe you are the head of the enterprise IT security team, on a mission to minimize threats. You want to take enterprise IT security to the next level by analyzing traffic and data flow from all systems in the enterprise and detecting patterns that are indicative of a threat. That's right – traffic from all systems in the enterprise, with maybe a daily report on findings across all of them.

 

If this type of data is raising your eyebrows, then, well, congratulations – you now have Big Data staring at you!

 

All of the above have at least two characteristics in common: a volume of data on a scale that you have not handled before, and a rate or frequency of arrival that is very high. There are some other facets of Big Data which I will get into later.

 

In my next few posts, I will explore in more detail the characteristics of Big Data and delve into technologies that can help you leverage Big Data for your business.

Stay tuned.

 

Raj Nair

Senior Product Manager, RSA Archer
