In my previous blog about cyber risk quantification and privacy, I suggested that there is a role both for assessing risk using cyber risk quantification and for assessing it from a privacy orientation. Let me explain further. Cyber risk quantification is hugely important to an organization; it is used to answer these kinds of questions:
- What would be the monetary impact on the organization if it experienced a cyber breach?
- How much, in monetary terms, is risk reduced if a particular control is implemented?
- What’s the monetary value of implementing this control over that control?
- How much cyber insurance should be purchased to cover the organization’s cyber risk (what should be the dollar limit of the insurance policy on a single and aggregate loss basis)?
These are extremely important questions that every organization needs to answer. When these questions can be answered in monetary terms, it is much easier for executives and the board to prioritize the allocation of scarce human and capital resources in the management and transfer of risk.
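To make the first of these questions concrete, here is a minimal sketch of one common quantification approach: a Monte Carlo simulation of annual cyber loss, with event frequency drawn from a Poisson distribution and severity from a lognormal. Every parameter value below is a made-up illustration, not a benchmark, and real models (e.g. FAIR-style analyses) decompose frequency and severity much further.

```python
import math
import random
import statistics

random.seed(42)  # reproducible illustration

def poisson(lam):
    """Poisson draw via Knuth's algorithm (stdlib only)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def simulate_annual_loss(freq_lambda, sev_mu, sev_sigma, trials=10_000):
    """Simulate total cyber loss per year over many trial years.

    freq_lambda: expected breach events per year (Poisson mean).
    sev_mu, sev_sigma: lognormal severity parameters (log-dollar scale).
    Returns a list of simulated annual loss totals.
    """
    losses = []
    for _ in range(trials):
        n_events = poisson(freq_lambda)
        total = sum(random.lognormvariate(sev_mu, sev_sigma)
                    for _ in range(n_events))
        losses.append(total)
    return losses

# Hypothetical inputs: one breach every two years, median severity ~ $440k.
losses = simulate_annual_loss(freq_lambda=0.5, sev_mu=13.0, sev_sigma=1.2)
expected = statistics.mean(losses)
p95 = sorted(losses)[int(0.95 * len(losses))]
print(f"Expected annual loss: ${expected:,.0f}")
print(f"95th-percentile annual loss: ${p95:,.0f}")
```

The expected annual loss speaks to budget prioritization, while a tail percentile like the 95th is one (simplified) way to reason about the limit of a cyber insurance policy. Re-running the simulation with a control's estimated effect on frequency or severity gives a monetary view of the risk reduction that control buys.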
Privacy laws change the orientation of risk assessment from the impact of a cyber incident on the organization to an assessment of how the cyber incident would impact an individual. Originally, privacy laws were very prescriptive about the obligations to individuals, as can be seen in these two regulatory obligations:
- The Australian Privacy Principles state that an “entity must take reasonable steps to protect personal information it holds from misuse, interference and loss, as well as unauthorized access, modification or disclosure.”
- Section 501 of the U.S. Gramm-Leach-Bliley Act states that each financial institution has an affirmative and continuing obligation to respect the privacy of its customers and to protect the security and confidentiality of those customers’ nonpublic personal information.
Contrast these rather prescriptive requirements with the EU General Data Protection Regulation, effective this May.
- The EU-GDPR was designed to “protect [the] fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data.”
The EU General Data Protection Regulation broke from the older, more prescriptive requirements of the Australian Privacy Principles and the U.S. GLBA, and expanded the scope to include the “fundamental rights” of EU citizens. In the United States, this would be analogous to equating GLBA with the Declaration of Independence, where you might end up with a privacy statement like “an institution has an affirmative and continuing obligation to respect the privacy of its customers and to protect the security and confidentiality of those customers’ nonpublic personal information so as to not infringe upon the individual’s unalienable right to life, liberty, and the pursuit of happiness.”
As I said, the EU-GDPR was designed to “protect [the] fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data.” There happen to be fifty fundamental rights identified in the Charter of Fundamental Rights of the European Union. Not all fifty of these fundamental rights could be infringed by poor information security, but a thorough risk assessment requires the assessor to evaluate the likelihood and impact that an information security incident could have on the individual’s fundamental rights.
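One illustrative way to structure that evaluation is a simple likelihood-times-impact score per affected right, ranked to show where a breach scenario would hurt individuals most. The rights listed and the scores below are hypothetical examples for a single imagined scenario, not a legal methodology.

```python
# Hypothetical scores for one breach scenario: likelihood is a 0-1
# probability estimate, impact is a 0-5 severity rating for individuals.
RIGHTS_AT_RISK = {
    "protection of personal data": {"likelihood": 0.8, "impact": 4},
    "respect for private and family life": {"likelihood": 0.5, "impact": 3},
    "non-discrimination": {"likelihood": 0.2, "impact": 5},
}

def risk_score(likelihood, impact):
    # Simple likelihood x impact product; real assessments may weight
    # severity non-linearly or treat some impacts as intolerable outright.
    return likelihood * impact

ranked = sorted(
    ((right, risk_score(**params)) for right, params in RIGHTS_AT_RISK.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for right, score in ranked:
    print(f"{right}: {score:.1f}")
```

The point of the sketch is the orientation, not the arithmetic: each row scores harm to the individual, not cost to the organization.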
The change in orientation from assessing the impact of a breach on the organization to assessing the impact on the individual ultimately influences an organization’s cyber risk appetite too. An organization may have an appetite for $10 million in cyber breach-related costs but zero tolerance for an information security breach that could compromise the life and safety of employees. Both risk appetite statements are perfectly logical. However, assessing the risk requires two different but complementary approaches: Cyber Risk Quantification and Privacy Risk Assessment.