
I recently participated in a webinar panel sponsored by Everbridge and RSA, with panelists from the medical, transportation and emergency management disciplines, where we discussed the Ebola outbreak and its impact on organizations. Each expert had fascinating information to report, and the 550-person audience asked excellent questions.


So, what do we know about Ebola? Ebola is a rare and deadly disease caused by infection with a strain of Ebola virus. The 2014 Ebola epidemic is the largest in history, affecting over 10,000 people in multiple West African countries. Ebola is spread through direct contact with the blood and body fluids of a person already showing symptoms, but is not spread through the air, water, food, or mosquitoes. The World Health Organization (WHO) provides a very informative fact sheet: WHO | Ebola virus disease.


The U.S. Centers for Disease Control and Prevention (CDC) reports that the risk of an Ebola outbreak affecting multiple people in the U.S. is very low. The CDC has tried to establish a national standard, recommending that only people who had direct contact with Ebola patients without any protective gear submit to isolation at home for 21 days, the maximum period for symptoms to develop. However, a month after the first confirmed case of Ebola in the U.S., state and local health authorities across the country have imposed a hodgepodge of often conflicting rules.  Some states, such as New York and New Jersey, have gone as far as quarantining all healthy people returning from working with Ebola patients in West Africa. In Minnesota, people being monitored by the state’s health department are banned from going on trips on public transit that last longer than three hours. Others, such as Virginia and Maryland, said they will monitor returning healthcare workers and only quarantine those who had unprotected contact with patients.

 

The international community is also responding.  For example, North Korea announced it will quarantine foreigners for 21 days over fears of the spread of the Ebola virus, even though no cases of the disease have been reported in the country, or anywhere in Asia, and very few foreigners are allowed to enter.  The Australian government announced that it was canceling non-permanent or temporary visas held by people from the affected countries who had not yet traveled, and that new visa applications would not be processed. If the outbreak is not controlled soon, it may continue to have effects on other regions such as Europe, where we are seeing some uncertainty and unrest.


The question is, how does this affect your organization now or in the near future? Is it affecting third parties you do business with, or key customers?  Looking ahead and putting contingency plans in place relative to the risk to your organization is a smart move. A good place to start is to dust off those pandemic plans you Business Continuity folks probably compiled a few years back during the H1N1 scare. A key step is to understand the potential impact of the Ebola situation (current and future, as much as possible) on your organization and employees, then create an action plan and communications plan accordingly.  For your communications plan, monitor information from formal, authoritative sources (the CDC and WHO) and informal sources (social media), and craft factual information into regular, frequent updates for employees and their families as well as external constituents (customers, the public, regulators).  The communications could include what is known and unknown, what the organization is doing to stay informed of the situation, how the organization and its employees are affected, and how the organization is responding proactively. Include both push (emails, notifications) and self-serve (intranet, company website, social media) communications.  Be honest, factual and frequent.  Avoid rumors.  Showing that the company is proactive, actively monitoring and assessing the situation, and communicating openly and frequently goes a long way toward reducing uncertainty and concern.

 

Contact me at Patrick.potter@rsa.com if you would like the webinar presentation or have questions.

Following up on my previous blog for the Q3-2014 content release announcement, here’s some additional information on the changes we’ve made to the Archer Control Standards library.

 

A few months ago we started a project to create a new GRC taxonomy to improve the way the Archer Control Standards library is organized. While the previous categorical groupings loosely served this purpose already, we wanted to tighten things up and reset on a new, standardized foundation. So we parsed several prevailing standards and control frameworks to aggregate all the various categories and areas of coverage. We then distilled those down into a consolidated set of 57 categorical terms and developed descriptions for each to comprise our new Archer GRC Control Standard taxonomy. The last step was to reclassify each control standard under the new taxonomy, which, at 1,200+ control standards, was no small effort!

 

This new taxonomy is intended to replace the previous collection of terms that grew over the years with a more concise and descriptive resource to make exploring the Archer Control Standards library easier. You’ll be able to better search and filter for specific areas of coverage as well as more quickly identify and assign ownership based on roles and responsibilities.

 

Everything needed is included in the Q3-2014 quarterly content release package. A formatted XML import file and a set of instructions for implementing the new taxonomy are provided to make it a straightforward data import exercise. Adopting the new taxonomy is not a requirement, although it is highly recommended, as it will be the embedded standard beginning with version 5.5.2 of the Archer platform, due to be released shortly. As such, we will only be including the new taxonomy values in the Control Standards import files going forward.

 

If you have existing workflows, reports, etc. tied to the old values, you can keep using those and migrate to the new taxonomy at a later time, or use both indefinitely. The release documentation discusses a few scenarios to help illustrate the various options, and of course you are always welcome to reach out to Customer Support or to me personally for any inquiries or assistance.
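If you plan to script part of that migration yourself, here is a minimal sketch of what the remapping step could look like. The category names, the "Category" field label and the CSV format below are purely illustrative assumptions (the release package itself ships a formatted XML import file with instructions), so treat this as a starting point rather than the official procedure:

import csv

# Hypothetical legacy-to-new category mapping; the real taxonomy terms ship in
# the Q3-2014 content release package, not here.
LEGACY_TO_NEW = {
    "Password Management": "Identification and Authentication",
    "Virus Protection": "Malware Defense",
    "Backup and Recovery": "Data Backup and Restoration",
}

def remap_categories(in_path, out_path):
    """Rewrite a control-standards export, swapping legacy category values for new terms."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            legacy = row.get("Category", "")
            # Keep the legacy value wherever a new mapping has not been decided yet.
            row["Category"] = LEGACY_TO_NEW.get(legacy, legacy)
            writer.writerow(row)

if __name__ == "__main__":
    remap_categories("control_standards_export.csv", "control_standards_remapped.csv")

The point of the lookup-table approach is that you can migrate in stages: anything not yet mapped simply keeps its old value until you decide where it belongs.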

 

We hope you find this new GRC taxonomy useful and as always welcome any feedback you have.

 

All the best,

Mason

@masonkarrer

Hello everybody! This has been one of the most exciting Octobers we’ve had here in Kansas City in a long time as our beloved Royals battled it out in the World Series for the first time in nearly 30 years! Although we came up a little short in the final game, win or lose we’re incredibly proud of our team and what they’ve done for our town.

 

Now onward to a special Archer content release that includes a big change to the Archer Control Standards Library. I’ll focus on the normal Q3/2014 cumulative content release items here and cover the Control Standards library changes in a separate blog post.

 

For starters, we’ve added FedRAMP to our Authoritative Sources library. It neatly coincides with two related authoritative sources already in the library by virtue of shared mappings to common Archer Control Standards. We also added the 2014 version of the Standard of Good Practice for Information Security published by the Information Security Forum (ISF), with a whopping 9,200+ mappings to Archer Control Standards! This release is just in time for the 25th annual ISF World Congress around the corner, being held this year in Copenhagen, Denmark.

 

Other updates include a re-release of an existing authoritative source with enhanced descriptors in the hierarchical name field values to improve filtering, plus corrections to some minor errors discovered in a handful of PCI v3 SAQ question records. As in the past, this quarterly update includes both new content and updates to existing content elements that may already be in your library. So you’ll want to pay special attention to the release notes and supplemental documentation before processing them to ensure everything is well understood. Once again, the update page with release notes is here, and content import packs are available through Customer Support. As always, we’re here to answer questions too - whatever you need.

 

Mason

@masonkarrer

I just got back from NIST’s 6th Cybersecurity Framework Workshop in Tampa and wanted to share some of the really positive signs of progress. This was the sixth workshop, but the first in another sense. By this I mean that it’s been eight months since the release of the framework. This workshop really had the feel that it was the first post-release workshop where a significant number of organizations have had enough time to assimilate the document, message it throughout their organization, plan, implement, debate, etc. For all these reasons, unlike previous sessions, which were more about tinkering with the framework itself, this was a lot more about getting meaningful feedback from the early adopters and discussing the value people have realized by implementing it.

 

What are the strengths?

 

Intentional Development

Several panel speakers made the same point that just discussing and planning the use of the CSF had multiple positive results. It forced them to bring stakeholders together that had not been communicating previously. It forced them to define what risk means to each of the stakeholders. Finally, it forced them to define their risk appetites.

 

Vetting

While NIST continuously points out that there is no such thing as “CSF-compliant”, many people want to use it for vetting.  This point came up several times in the context of vendor-to-vendor relations and the supply chain: the CSF could be used by business partners or prospective clients to show each other where they are in their security programs.   One of the panel speakers, who works for a collective that approves funding for large-scale utility investments, said that they want to see evidence of prudent decision-making before they invest. They have embraced the CSF as an indicator of prudent decision-making in IT security, an area where they are not experts.

 

Flexibility

“Flexibility is the core strength of the framework.” This was the most common message of the workshop, repeated by many panel speakers and throughout the working sessions. Tim Casey, a risk executive from the chip-maker Intel, gave several examples of how they tailored the categories and subcategories provided by NIST to their own needs. This included adding an entirely new category: Threat Intelligence. They did all of this while in contact with NIST, who consistently offer the message of “tailor it to work for your organization.” Another panel speaker, from Chevron, specifically noted that the DHS CSET tool, a precursor to the NIST CSF that also targets critical infrastructure, was not customizable, and pointed out that the CSF gave him the flexibility he needed to build the appropriate in-house solution.
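To make the tailoring idea a bit more concrete, here is a rough sketch of how an organization might capture a tailored profile. The function and category names follow the published CSF structure, but the added "Threat Intelligence" category, the per-function tier scores and the data layout are illustrative assumptions, not anything NIST or the panelists prescribed:

# Hypothetical sketch of a tailored CSF profile: the five framework functions with a
# subset of their published categories, plus one custom category added the way the
# Intel speaker described. Scoring granularity and values are illustrative.

csf_profile = {
    "Identify": ["Asset Management", "Risk Assessment", "Governance"],
    "Protect": ["Access Control", "Awareness and Training", "Data Security"],
    "Detect": ["Anomalies and Events", "Security Continuous Monitoring",
               "Threat Intelligence"],  # custom category, not in the NIST baseline
    "Respond": ["Response Planning", "Communications", "Mitigation"],
    "Recover": ["Recovery Planning", "Improvements"],
}

# Current vs. target tier per function (1 = Partial ... 4 = Adaptive), applied
# per function here purely for illustration.
tiers = {
    "Identify": {"current": 2, "target": 3},
    "Protect":  {"current": 2, "target": 3},
    "Detect":   {"current": 1, "target": 3},
    "Respond":  {"current": 2, "target": 3},
    "Recover":  {"current": 1, "target": 2},
}

for function, t in tiers.items():
    gap = t["target"] - t["current"]
    print(f"{function}: current tier {t['current']}, target {t['target']}, gap {gap}")

Even something this simple forces the conversations the panelists kept mentioning: who owns each category, what "good" looks like, and where the gaps are.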

 

 

How hard is it?

A lot of the questions from the audience to the session panel speakers were around the level of effort in implementing the framework. On this subject, Chris Boyers from AT&T said that “NIST had created a great product, one that industry can largely support.” A more enthusiastic endorsement came from Intel, who said that for their enormous, multi-billion-dollar company, defining their internal process and stakeholders and completing their first, high-level assessment had taken less than 150 work hours. Most of the audience (including myself) was pretty obviously surprised by that number.

 

Where is it going?

Ari Schwartz, from the National Security Council, headed off questions about a CSF version 2. He essentially said there are no changes planned for the near future, and to implement the framework as it stands rather than waiting for a v2. I think confusion around this subject comes from the NIST CSF Roadmap, which can be found here. These were areas for planned improvement that NIST released at almost the same time as the CSF itself; NIST was simply acknowledging that it knew certain areas would grow, but that implementing the CSF would still be valuable in the meantime.

 

There were also delegates from the UK government and the European Union present. The short takeaway from them: first, the UK likes the CSF and is encouraging its companies to use it. Second, the CSF will be most successful when it’s embraced globally. This is really just a supply-chain comment, since we live in a global economy.

 

Lastly, RSA was present in the tech expo area, which was restricted to only five vendors. We provided demos of our NIST CSF proof of concept. That’s all for now.

 

Email me with comments or questions, or if you would like a demo of our CSF POC.

 

Thanks for reading.

Chris

 

Twitter: @chrish00ver

A little verse, in honor of Halloween.

 

It was a dark and stormy night and all through the network

ghostly ghouls and scary creatures around each byte lurk.

 

Zombies crash through the firewall and Monsters infect machines

Goblins eat up data, Yikes! it is such a horrible scene.

 

We find our heroes trying, Oh, so diligently

Art the dogged security guy and Tim the admin techie.

 

Creeping through the darkness, to find the latest foe

A flashlight as their only tool, the batteries running low.

 

Footprints left by invisible villains  and traces they cannot see

and unknown to our heroes, data leaving the DMZ.

 

The maze of the network twists and turns  and shifts constantly

And the creatures grow more powerful becoming monstrosities.

 

Werewolf, Dracula and Frankenstein as scary as they are

Today’s digital adversaries are more terrifying by far.

 

In underground lairs, they connive and scheme and plan their evil ways.

They build their tools to attack your bits to steal anything that pays.

 

Poor Art and Tim work day and night, building up their perimeter walls

only to see the fruits of their labor fail, in minutes the defense falls.

 

So Art and Tim do research and find the answers they seek

They begin to tackle the problem, the dream of every geek.

 

They find answers to lock the windows and solutions to lock the door

Their managers are happy and keep asking for more.

 

Their systems now require magic digits that befuddle the foes

to keep the creepsters out of the data and the users on their toes.

 

No longer do they use a flashlight but have a spotlight to illuminate

They capture every packet and log to inspect and investigate.

 

Every click is watched so close, to Art and Tim’s wonder

To make sure those bad guys don’t steal, pilfer, and plunder.

 

The fraudsters try to steal every hard earned buck

But with Art and Tim paving the way, too bad – they are just out of luck.

 

Policy and controls are wrapped up in a nice little bow.

Because of GRC, you see, Art and Tim are in the know.

 

And audit and vendors and risk and continuity

Are managed well to deal with business in all its spontaneity.

 

With intelligence on criminals, our heroes don’t lose heart

They are ready to stop the cybercrime, before it can even start.

 

With visibility, analytics and action, Art and Tim are feeling right

And can protect the day against the things that go bump in the cyber-night.

Do business teams believe they are collecting and analyzing the information they need to be effective?  Do they get relevant information about the operation and value of their capability to manage performance, risk and compliance?  Do boards, auditors and c-suite executives have confidence that the right information is being collected and analyzed to drive achievement of objectives? These questions and more are addressed in the 2014 GRC Metrics Survey.

 

“Using analytical data developed through key performance and risk metrics to drive priority and action is the backbone to any good governance, risk and compliance (GRC) program,” says Patrick Potter, GRC Strategist at RSA, the Security Division of EMC. “Most organizations react from delayed or incomplete information, and by the time corrective action is taken the issue is well past. Analysis of current and relevant metrics brings visibility to areas that truly need the organization’s strategic and tactical focus.”

 

Michael Rasmussen, Chief GRC Pundit of GRC 20/20 and OCEG Fellow, adds, “The question is how mature are an organization’s GRC-related strategy, processes, and architecture.  A primary factor in GRC maturity is how well the organization understands and utilizes metrics to drive the achievement of objectives while addressing uncertainty and acting with integrity.  OCEG’s work in GRC metrics is critical in helping organizations understand, define and mature GRC metrics in their organizations.”

 

“We will be comparing views on metrics and how they are used to our GRC metrics survey that took place in 2008,” said Carole Switzer, OCEG Co-Founder and President. “It will be interesting to see how far organizations have come in the ensuing six years, as technology for collecting and analyzing metrics in GRC has evolved.”

 

Participate in the survey, presented by OCEG and sponsored by OCEG GRC Solutions Council members ACL, Baker Tilly Colombia, and RSA, the Security Division of EMC.  All participants receive a free report on the results.

“Protection in isolation is a brittle strategy.”

Journal of Homeland Security and Emergency Management, “An Operational Framework for Resilience”

 

Last week, I attended and presented at the ISACA/ISSA joint conference in Phoenix.  During one of the keynote sessions, the quote “Protection, in isolation, is a brittle strategy” was used to highlight the importance of recognizing that no defensive or preventive measure is 100% effective.  Organizations must be able to ‘anticipate, absorb and recover from negative events.’  This simple concept is ingrained in most risk and information security professionals’ brains, but so often it slips out of the picture when strategies are being put into place.

 

Today, we see much conversation around the importance of building safety nets for bad things that can affect an organization.  In the past few weeks,  Patrick Potter discussed the importance of Business Resiliency, Fran Howarth covered the perils of not being prepared for a data breach and Demetrio Milea outlined the human and process elements of Incident Response.  Looking across the industry, the discussion of preparing for crisis events – whether it is a data breach, a natural disaster or a compliance violation – has been gaining momentum.   With the firestorm (both internal and external) that immediately accompanies any negative event, the security, risk and compliance teams must be prepared to anticipate that impact and set in motion the right recovery efforts to absorb the event.

 

In Operational Risk Management programs, many times the “Lines of Defense” are termed First Line (Line of business and “frontline employees”), Second Line (Risk functions) and Third Line (Internal/External Audit).  These lines provide the safety net when it comes to anticipating risks, implementing controls and evaluating controls.   In Information Security, those lines equate to the First Line of IT Admins, Application Owners and End Users, Second Line of Security Operations and Third Line of Security Analysts and Crisis management.  From an overarching perspective, Business Continuity and Disaster Recovery can also play a role if the situation warrants true continuity and recovery operations.

 

The point is that as an organization looks at risks and possible negative events, it is important to remember that the front line of defense can crumble quickly.  The Second Line must reinforce and catch the pieces but a larger safety net has to be in place to absorb the overall impact.   The Third Line of defense – whether it is business continuity, disaster recovery, crisis management, audit management or merely your A-team of gurus – has to be prepared, capable and enabled to recover the business.


Need Your Help! COBIT 5

Posted by Mason Karrer, Oct 8, 2014

Greetings Archer Rockstars!! I'm looking for any COBIT 5 users out there. If you're driving COBIT5 activities in Archer then I want to talk to you and learn more about what you're doing. You can contact me through my community profile or just reply to this post and I'll reach out to you directly. Don't be shy - the more the merrier! Thank you!

Shellshock: you have heard about it, have brainstormed with your teams, and now must be assessing the impact to your organization or division. Let’s take a look at some of the steps you can take to understand and enhance your security posture.

 

Starting point: take inventory. Identify the devices that are potentially most vulnerable. These include devices that have “bash” installed on them.

If you are using RSA VRM (Vulnerability Risk Management), you can easily run a query rule to search for all scanned devices that have “bash”. All the information about scanned devices is already stored in the warehouse. VRM creates an active catalog of assets that is automatically updated and kept in sync with what the network vulnerability scanners are seeing in the environment.

 

The security analyst can run a simple query to find all devices that have “bash”. This provides the starting inventory of devices. Proactively, the analyst can also create alerts to monitor, on a dashboard, the current devices with “bash” and any new devices that are added which have “bash”.

[Screenshot: VRM query results showing devices with “bash” installed]
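If you don’t have VRM handy, the same first pass can be approximated against any scanner or asset-inventory export. Here is a minimal sketch; the device records and field names are hypothetical, not the VRM schema:

# Hypothetical sketch: first-pass inventory of devices reporting "bash" in
# their installed-software list, from a generic scan export.
scanned_devices = [
    {"host": "web01", "os": "Linux", "software": ["bash 4.2", "httpd 2.4"]},
    {"host": "mail01", "os": "Linux", "software": ["bash 4.1", "postfix 2.11"]},
    {"host": "win07", "os": "Windows", "software": ["iis 8.5"]},
]

def has_bash(device):
    # Flag any device whose installed-software list includes a bash package.
    return any(pkg.lower().startswith("bash") for pkg in device["software"])

bash_devices = {d["host"] for d in scanned_devices if has_bash(d)}
print("Devices with bash installed:", sorted(bash_devices))

# Simple alerting between scan snapshots: flag anything new since the last run.
previously_known = {"web01"}
new_devices = bash_devices - previously_known
if new_devices:
    print("New devices with bash since the last scan:", sorted(new_devices))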

Once you have identified which devices have bash installed, as a second step check for devices that have sister shells such as “/bin/sh” installed, as these are sometimes copies of bash.

 

This gives a good first pass on the inventory of devices that could be impacted by Shellshock.

 

Now, you also want to cover your bases by looking for all devices that have the known Shellshock vulnerabilities. The security analyst can run a search query on devices using the CVE IDs of the Shellshock vulnerabilities. So far, six vulnerabilities related to Shellshock have been identified: CVE-2014-6271, CVE-2014-7169, CVE-2014-7186, CVE-2014-7187, CVE-2014-6277 and CVE-2014-6278.

Four of these vulnerabilities (CVE-2014-6271, -7169, -6277, -6278) are related. CVE-2014-6271 is the key vulnerability among them; the other three exist because of incomplete fixes.

[Screenshots: vulnerability search results filtered by the Shellshock CVE IDs]
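Scripting that search outside the product is equally simple. A small sketch, again using hypothetical finding records rather than real scanner output:

# Hypothetical sketch: filter scan findings down to the six Shellshock CVEs.
SHELLSHOCK_CVES = {
    "CVE-2014-6271", "CVE-2014-7169", "CVE-2014-7186",
    "CVE-2014-7187", "CVE-2014-6277", "CVE-2014-6278",
}

findings = [
    {"host": "web01", "cve": "CVE-2014-6271"},
    {"host": "web01", "cve": "CVE-2014-0160"},   # unrelated finding (Heartbleed)
    {"host": "mail01", "cve": "CVE-2014-7169"},
]

affected_hosts = sorted({f["host"] for f in findings if f["cve"] in SHELLSHOCK_CVES})
print("Hosts with Shellshock findings:", affected_hosts)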

Patches are not yet available for some of these, so keeping a close eye on the inventory of devices will be helpful.

 

So far, you have created a good list of devices by searching for the specific vulnerabilities as well as for the shell itself (bash or sh).

 

Now, prioritize the devices that need to be patched first. Consider prioritizing devices based on criticality and business context (who owns the device, risk rating, compliance rating).

As an example, e-mail servers are considered potential targets for Shellshock, so you may want to include device type as a key parameter in your prioritization scheme. Once you have a strategy to prioritize, you can easily add new device types such as network attached storage, which has recently been identified as a Shellshock attack target.
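A simple scoring pass can make that prioritization repeatable. The weights, ratings and device records in this sketch are illustrative assumptions, not a recommended scheme:

# Hypothetical sketch: rank affected devices by criticality and business context.
devices = [
    {"host": "mail01", "type": "email server", "criticality": 5,
     "compliance_scope": True, "owner": "Messaging"},
    {"host": "web01", "type": "web server", "criticality": 4,
     "compliance_scope": True, "owner": "E-commerce"},
    {"host": "nas01", "type": "network attached storage", "criticality": 3,
     "compliance_scope": False, "owner": "Infrastructure"},
]

# Device types called out as likely Shellshock targets get an extra bump.
HIGH_RISK_TYPES = {"email server", "network attached storage"}

def priority_score(device):
    score = device["criticality"] * 10
    if device["type"] in HIGH_RISK_TYPES:
        score += 15
    if device["compliance_scope"]:
        score += 5
    return score

for d in sorted(devices, key=priority_score, reverse=True):
    print(d["host"], "(" + d["type"] + "):", "priority", priority_score(d), "- owner", d["owner"])

Whatever the weights end up being, the important part is that the ranking is explicit and can be debated with the device owners rather than living in one analyst’s head.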

 

Once prioritization is determined, kick off a remediation workflow. Involve the decision makers and the cross-functional teams that are stakeholders. This provides the necessary foundation for establishing a vulnerability management program for Shellshock. Follow through with iterative improvements to the process as you go along the remediation effort.

 

Manage exceptions along the way, so the risk is properly identified and the concerned people are notified to obtain the risk exception approval.

In summary,

  • Take inventory of devices with Shellshock vulnerabilities
  • Prioritize based on criticality and business context
  • Establish vulnerability management process
  • Use KPIs to iteratively improve the vulnerability management process

 

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Here is a quick summary on GNU bash vulnerabilities:

  1. CVE-2014-6271: GNU Bash 4.3 and earlier contains a command injection vulnerability that may allow remote code execution (aka Shellshock/Bashbug). Bash supports exporting shell functions to other instances of bash using an environment variable; the variable is named for the function and its value begins with "() {" followed by the function definition. When Bash reaches the end of the function definition, rather than ending execution it continues to process shell commands written after the end of the function. (A minimal local check for this behavior is sketched after this list.)
  2. CVE-2014-7169: This vulnerability exists because of an incomplete fix for CVE-2014-6271
  3. CVE-2014-7186: The redirection implementation in parse.y in GNU Bash through 4.3 bash43-026 allows remote attackers to cause a denial of service (out-of-bounds array access and application crash) or possibly have unspecified other impact via crafted use of here documents, aka the "redir_stack" issue.
  4. CVE-2014-7187: Off-by-one error in the read_token_word function in parse.y in GNU Bash through 4.3 bash43-026 allows remote attackers to cause a denial of service (out-of-bounds array access and application crash) or possibly have unspecified other impact via deeply nested for loops, aka the "word_lineno" issue.
  5. CVE-2014-6277: This vulnerability exists because of an incomplete fix for CVE-2014-6271 and CVE-2014-7169
  6. CVE-2014-6278: This vulnerability exists because of an incomplete fix for CVE-2014-6271, CVE-2014-7169, and CVE-2014-6277
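For item 1 above, the behavior can be checked locally with the widely published test string for CVE-2014-6271. This sketch simply wraps that test in Python; run it only against systems you own, and treat a clean result as covering this one CVE, not the whole family:

# Sketch of the well-known local check for CVE-2014-6271: a vulnerable bash
# imports the "function" from the environment variable and keeps executing the
# trailing command, so the marker string shows up on stdout.
import subprocess

def local_bash_vulnerable():
    marker = "SHELLSHOCK-TEST-MARKER"
    env = {"x": "() { :;}; echo " + marker}
    result = subprocess.run(
        ["/bin/bash", "-c", "echo probe"],
        env=env, capture_output=True, text=True,
    )
    return marker in result.stdout

if __name__ == "__main__":
    print("Local bash appears vulnerable to CVE-2014-6271:", local_bash_vulnerable())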

 

Reach me on Twitter: @RajMeel7

Data Classification is an absolute core tenet of information security.  I would bet money that if you collected a dozen Info Sec pros in a room and asked for 10 major commandments, Data Classification would be one of them.  It goes back even further than the Orange Book (for those old-timers out there).    Interesting tidbit – the original Ten Commandments were published as “Public Use” documents.   In fact, some of the Dead Sea Scrolls had labels of “Top Secret” on the parchment. Ok – I am stretching the truth a bit but you get my drift.    Labelling data based on its sensitivity is a very important part of security and has been around for a long time.  You have to know what you are protecting.

 

Companies still struggle with this basic premise of security.   Almost every organization has some scheme to label systems and data based on their value to the organization.  This is a good thing.  It sets the bar for securing information, it establishes basic control requirements and it educates the information user on the importance of protecting the data. However, with the proliferation of data in an organization, it is never cost effective to implement a stringent (dare I say completely accurate) process to label data.   Most classification schemes and methodologies focus on point-in-time classification.  The reality is that the sensitivity of some data grows or diminishes over time.   Let’s take a look at some examples:

 

Financial reporting data can be extremely sensitive.  Companies go into lock down mode near end of quarter as the numbers are crunched for earnings reports.  That data is absolutely on a need to know basis until POOF! the numbers are released and they become UBER-Public.  So the curve of sensitivity (simplistically visualized) looks something like:

 

[Chart: sensitivity of financial reporting data over time, dropping sharply once the numbers are released]

Personal Information has a different curve.  A name by itself is mostly harmless.  A name plus a phone number is relatively harmless.  But a combination of certain personally identifiable information (depending on your jurisdiction) can instantly become extremely sensitive. PII (or EPHI) has a sensitivity curve like this:

[Chart: sensitivity of personal information over time, rising sharply as identifying elements are combined]

 

If you evaluate other forms of information in your organization – research and development plans, merger and acquisition negotiations, pricing negotiations, etc. – most every form of data will have some curve related to its sensitivity. Sticking a label on the data at any one time may or may not be valid over the lifetime of the data.  Modifying the controls based on these changes could be impossible. Creating a control curve that mimics the classification curve is most likely completely cost-ineffective or administratively impossible (e.g., moving data between control environments based on changes).  Then you have other challenges, like what happens when mixed data sits together, such as last quarter’s earnings numbers (public information) alongside next quarter’s earnings numbers (very sensitive)?  Of course you default to the highest level of security, but it muddies the picture.
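To make those curves and the “highest level wins” default concrete, here is a minimal sketch. The classification levels, dates and rules are illustrative assumptions, not a recommended scheme:

# Hypothetical sketch: time-aware data classification with a
# "highest classification wins" rule for co-located data.
from datetime import date

LEVELS = ["Public", "Internal", "Confidential", "Restricted"]  # lowest to highest

def financial_report_label(as_of, release_date):
    # Earnings data is tightly held until release, then becomes public.
    return "Restricted" if as_of < release_date else "Public"

def pii_label(fields):
    # A name alone is low risk; certain combinations become highly sensitive.
    if {"name", "ssn"} <= fields or {"name", "dob", "address"} <= fields:
        return "Restricted"
    return "Internal" if "name" in fields else "Public"

def effective_label(labels):
    # Co-located data defaults to the highest classification present.
    return max(labels, key=LEVELS.index)

today = date(2014, 10, 30)
labels = [
    financial_report_label(today, date(2014, 10, 15)),  # last quarter: now Public
    financial_report_label(today, date(2015, 1, 20)),   # next quarter: still Restricted
    pii_label({"name", "phone"}),                       # Internal
]
print("Effective label for the shared store:", effective_label(labels))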

 

So do we throw Data Classification out the window when it comes to information security? Absolutely not.  What we need to incorporate in our strategies is the notion that the data does change over time and that this has to be a living part of the security program.  Information Owners should be educated about their “curve” and Security must be aware of the major shifts in sensitivity during the data lifecycle.  A fluid, working relationship between Information Owners and Information Custodians needs to be established to ensure controls (and the level/cost of effort) are commensurate with the sensitivity of the data. Labels shouldn’t be placed on data using permanent glue.  Instead, applying Velcro labels that can be modified is a better approach.  The challenge is getting the right conversation going in the first place, and this drives right into one of my constant themes – Business Context for Information Security.

 

Business Context for Information Security is the cornerstone of building a security strategy that meets 21st century demands. Information Security functions need the visibility to adjust efforts, prioritize issues and focus controls based on the value to the business.   Data Classification and understanding information assets is a critical part of this visibility.   When a security function can work with the business to understand these data sensitivity curves, it is much better positioned to address threats to the data. Fostering this conversation should be a priority as part of the greater security strategy.

Flying an airplane is no easy feat.  If you don’t believe me, just check out the cockpit of a Boeing 747-400! Even the most well-trained and experienced pilot has a myriad of details to think about during each and every flight – from navigation, to weather and air traffic control, to aircraft operations, not to mention flying conditions like air speed, altitude and turbulence.  To successfully operate the aircraft, the pilot relies on a litany of controls, dashboards, indicators, gauges, GPS and flight schedules.  They have to monitor and use these interrelated procedures and controls at all times during each flight – and every flight is different.  They also have to manage risks along the way, such as weather, other flights and disruptions, along with complying with Federal Aviation Administration (FAA) regulations and policies instituted by their airline or employer.  With any aircraft there are built-in safeguards, controls and standard operating procedures to ensure safe operation is the highest priority.  There are also backup procedures and steps to follow in the event something doesn’t go as planned or unknown risks present themselves.  This is a perfect example of how resiliency is built into a process.

 

The Business Continuity Management (BCM) industry is changing to take a similar focus.  Just take a look at the latest governing standards – ISO 22301/22313 – which are all about building resiliency into the business, expanding the scope from historic business recovery.  Resiliency needs to be incorporated into all areas of the business – from risk management, to performing the business processes themselves, to managing IT, third parties, and more.  The latest analyst predictions for the BCM market substantiate this movement as well.  However, when we look introspectively at our own companies and functions, do we know just how resilient our functions, company or organization really are, and what we can do to drive resiliency throughout the organization?

 

What most organizations struggle with is not the desire to be resilient, but how to put it into practice. Back to the airplane analogy: in order to drive true resiliency, there have to be interrelationships between processes.  Just like the pilot who manages the flight, navigation adjustments, emerging risks and compliance simultaneously, our organizations don’t work in silos either.  For example, in this day and age of cost cutting and process reengineering, it’s common to see business process managers who are also risk managers, who have BC plans to update and test, vendors to evaluate and controls to follow.  The challenge is that most of our business processes and teams work alone.  What further complicates matters is that, unlike the pilot who has interrelated tools, dashboards and controls, most of the automated tools they’re using are not interrelated. They’re also not very intuitive or easy to learn.  They are typically not used very often, so end-user adoption isn’t very successful and the intended process improvement and return on investment are rarely achieved.

 

Governance, Risk and Compliance (GRC) attempts to weave separate processes into interrelated disciplines.  Even if it happens step by step, the organization that sees and implements a true vision of related, resilient processes and automation, and does so from the perspective of the end user, will significantly improve adoption, achieve greater ROI and be far ahead in building true resiliency into the organization.  Like the pilot flying above the clouds, seeing the vast horizon ahead, our organizations will fly higher on the wings of business resiliency!
