How to Conduct a Proper Cybersecurity Risk Analysis

Probability × Impact = Risk Formula

In the cybersecurity industry, risk analysis is the scientific process of studying, investigating, and explaining the behaviors and dynamics of computing systems that may pose risk to the companies that operate them and to their users.  Attributing values and classifications to analyzed risks is the process of risk evaluation.  These two processes — analysis first, evaluation last — together comprise what is referred to in English as risk assessment.

This distinction is important in cybersecurity because it does not necessarily apply to other fields where risk plays a pivotal role, including finance and insurance where the art of risk management originated.  Indeed, many individuals who are considered the pioneers of risk awareness and who teach courses on risk management perceive “analysis,” “assessment,” and “evaluation” very differently.  To this day, you will see and hear arguments that we’re saying this wrong.

For cybersecurity purposes today, the overriding authority on the final definition of risk analysis is international standard IEC/ISO 31010, Risk management – Risk assessment techniques.  This standard unequivocally categorizes risk identification, risk analysis, and risk evaluation, in that order, as the three principal stages of risk assessment.  Used properly in the context of cybersecurity, “analysis” and “assessment” are not interchangeable.

What you need to know about risks before analyzing risks

As the second stage in assessment, risk analysis assumes that risks have already been identified and registered.  The task at hand now is to research the many factors that constitute the often unique profile of each risk, so that the risk may be evaluated (literally, given statistical values) in the third stage of risk assessment.

A proper risk analysis methodology incorporates both qualitative and quantitative techniques; there is never a decision to use one or the other.  All good risk evaluation formulas incorporate a principal qualitative factor and a principal quantitative factor.  For example, in:

risk = probability × impact 

the probability factor is an estimate, although it is rendered as a calculation.  The impact factor, meanwhile, may have quantifiable guideposts that divide its classes of magnitude on a scale.  However, where those guideposts reside, and how the factor’s value is arrived at, are essentially subjective judgment calls.  Good risk analysis always blends the objective with the subjective, and one’s ability to supply both improves as risks are managed more properly over time, and as organizations inevitably experience more and more risk events.
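A minimal sketch of the formula above, assuming probability is expressed as a fraction between 0 and 1 and impact on an illustrative 0-to-10 scale (neither scale is mandated by any standard):

```python
def risk_score(probability: float, impact: float) -> float:
    """Compute risk as probability x impact.

    probability: estimated likelihood of the risk event, 0.0 to 1.0
    impact: assessed magnitude on an illustrative 0-to-10 scale
    """
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if not 0.0 <= impact <= 10.0:
        raise ValueError("impact must be on the 0-to-10 scale")
    return probability * impact

# A risk event with a 30 percent chance of occurring and an impact of 8/10:
print(risk_score(0.30, 8.0))  # 2.4
```

The product itself is objective arithmetic; the subjectivity the article describes lives entirely in how the two inputs were arrived at.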

A risk event is an uncertain, though possible, occurrence that may have a negative impact on the organization.  (Other methodologies refer to this as a hazard.)  Properly identifying and registering a risk event means separating it from any specific cause, and distinguishing the event itself from the impact it has.  A stain on the reputation of the company is a potential impact, the scale of which does factor into the analysis of risk.  Yet the impact is not the identity of the risk: the hit a company takes when a risk event happens is not the same thing as the risk itself.

When you get to the risk analysis stage of risk assessment, risks will have been identified properly.  A break-in at the company’s data center is not the risk itself, but it could cause a risk event such as the degradation or elimination of system functionality.

One simple yet very effective way suggested in 2008 by risk consultant Dr. David Hillson [PDF] to test whether risks have been identified properly is to imagine the three components — the cause, the risk event, and the effect or outcome — as elements of a sentence constructed with metalanguage:  “As a result of [cause], [risk event] could happen, resulting in [effect].”  Once you can perceive these three components in their proper relationship to one another, a follow-on sentence may include the quantitative and qualitative factors in the risk analysis equation:  “Such an event has a [probability] percent chance of occurring, and if it does, the worst that could happen is [impact].”
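Hillson’s metalanguage template is simple enough to capture in code.  The helper below is a hypothetical sketch; the cause and risk event echo the data-center example from earlier, while the effect shown is an invented illustration:

```python
def risk_statement(cause: str, event: str, effect: str) -> str:
    """Render a risk in Hillson's cause-event-effect metalanguage."""
    return f"As a result of {cause}, {event} could happen, resulting in {effect}."

print(risk_statement(
    "a break-in at the data center",        # cause
    "degradation of system functionality",  # risk event
    "loss of service to customers",         # effect (hypothetical)
))
```

Forcing each risk through this template makes it obvious when an analyst has registered a cause or an effect in place of the event itself.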

Questions every risk analyst must ask

The main issue that the impact variable in the risk equation seeks to resolve is how big each risk is.  One could apply a simple scale to the question and ask every risk analyst, “On a scale of 0 to 10, how would you score the impact level for this risk?”  But that is a very subjective question, one that ignores the varying experiences analysts have had in their careers.  What one analyst may perceive as a relatively insignificant event might, for another analyst, have been a grueling ordeal that can never be forgotten.

Among the qualitative rating methodologies that SimpleRisk employs are two — the Common Vulnerability Scoring System (CVSS) and the Risk Rating Methodology from the Open Worldwide Application Security Project (OWASP) — whose shared guiding principle is a comprehension of the nature of the threat at the center of a risk.  Rather than set some arbitrary 0-to-10 scale of “bigness,” OWASP and CVSS both ask the risk analyst to classify several aspects of each threat.  Behind the scenes, where you don’t necessarily need to look, both systems assign scores to the classifications you’ve chosen.  Those scores are applied to formulas that yield a numerical result, which constitutes the impact factor of the risk formula.
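As an illustration of this classification-then-score pattern, the sketch below loosely mimics the OWASP Risk Rating approach of averaging factor scores on a 0-to-9 scale.  The factor names and score mappings here are simplified assumptions for demonstration, not OWASP’s actual tables:

```python
# Illustrative score tables: the analyst picks a classification per factor,
# and the numeric score behind it stays out of sight until the formula runs.
IMPACT_FACTORS = {
    "loss_of_confidentiality": {"minimal": 2, "extensive": 7, "all_data": 9},
    "loss_of_integrity":       {"minimal": 1, "serious": 5, "total": 9},
    "loss_of_availability":    {"minimal": 1, "extensive": 5, "total": 9},
    "reputation_damage":       {"minimal": 1, "goodwill_loss": 5, "brand_damage": 9},
}

def impact_score(classifications: dict) -> float:
    """Average the per-factor scores behind the analyst's classifications."""
    scores = [IMPACT_FACTORS[factor][choice]
              for factor, choice in classifications.items()]
    return sum(scores) / len(scores)

print(impact_score({
    "loss_of_confidentiality": "extensive",
    "loss_of_integrity": "minimal",
    "loss_of_availability": "extensive",
    "reputation_damage": "goodwill_loss",
}))  # (7 + 1 + 5 + 5) / 4 = 4.5
```

The analyst never touches the numbers directly; they answer concrete questions about the threat, and the arithmetic follows.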

When you’ve adopted one of these scoring systems, there are aspects you must consider as a risk analyst that might never have come up in discussion before.  Indeed, some of these aspects were acknowledged for the first time years after the first editions of these systems were deployed in the field.

Can the method of exploiting this threat be automated?

Starting in 2011, aerospace and defense contractor Lockheed Martin developed a seven-stage framework it calls the Cyber Kill Chain, as a method for modeling the behavior of a deliberate threat actor.  The stages are familiar concepts, but quite ingenious when laid out in this fashion: reconnaissance (researching potential targets and vulnerabilities), weaponization, delivery, exploitation, installation (injecting malware into the system), command and control, and actions on objectives (doing actual damage).

Assuming a vulnerability is exploited with intent, you need to ask whether any part of this Kill Chain, or the Chain in its entirety, can be automated so that humans don’t have to be sitting there at some terminal console in order for the seventh stage to be carried out.  At issue here is a component called the attack vector, which is an indicator of whether the entire attack may be carried out remotely, or whether instead it requires the physical presence of a human actor at or near the site of attack.

Does the exploit require alterations to the threatened system?

If a vulnerability is severe, one might not necessarily need to install malware in order to open the system up to exploit.  This fact gets to the heart of several components of CVSS 4.0, including:

  • Attack requirements, which denotes whether the components with which a malicious actor would exploit the vulnerability are already present in the system, or else need to be injected into it or installed somehow
  • Attack complexity, which denotes whether any protection system would need to be circumvented in order for the vulnerability to be exploited
  • Privileges required, which denotes whether an actor exploiting the vulnerability must first gain administrative privileges for the system, or whether instead that vulnerability is open for exploitation by a standard user with normal restrictions and limited access
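To see how classifications like these might feed a formula, here is an illustrative sketch.  Note that CVSS 4.0 actually scores vectors through equivalence classes rather than simple per-metric weights, so the numeric values below are invented purely for demonstration:

```python
from dataclasses import dataclass

# Hypothetical weight tables -- NOT CVSS 4.0's real scoring values.
AT = {"none": 1.0, "present": 0.8}           # Attack Requirements
AC = {"low": 1.0, "high": 0.7}               # Attack Complexity
PR = {"none": 1.0, "low": 0.8, "high": 0.5}  # Privileges Required

@dataclass
class Exploitability:
    attack_requirements: str
    attack_complexity: str
    privileges_required: str

    def factor(self) -> float:
        """Multiply the weights behind each classification together."""
        return (AT[self.attack_requirements]
                * AC[self.attack_complexity]
                * PR[self.privileges_required])

# Worst case: no prerequisites, no protections to bypass, no privileges needed.
print(Exploitability("none", "low", "none").factor())  # 1.0
```

Each hurdle the attacker must clear (components to inject, protections to circumvent, privileges to gain) discounts the factor, which is the intuition the three CVSS components capture.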

How much potential damage is there to the brand’s reputation?

When cybersecurity frameworks were first being conceived, reputational harm was considered, at best, “collateral damage” — a kind of side-effect from inadvertently exposing so much personally identifiable data to malicious users.  Security professionals were advised to consider reputational harm as a supplemental shock wave of longer-term impact.

Then in 2011, a Ponemon Institute survey report revealed that when a data breach occurs at a corporation whose assessed brand value is around $1.5 billion, the event wipes out about 17 percent of that value on average, perhaps more.  Since it’s the reputational damage that gets a data breach covered by major news media anyway, there has more recently been a growing recognition that assaulting the brand may very well be the principal objective of a malicious incident.  A subsequent 2013 empirical study from the University of Göttingen in Germany underscored this finding:  publicly traded corporations suffering massive data breaches, the study concluded, tend to lose stock value in excess of their operational losses, especially if internal fraud is a contributing factor.

There are ways of ascertaining the quantified monetary loss an organization could suffer, specifically by measuring its perceived value minus its perceived liability for periods before and after the breach event.  Rather than burden organizations with estimating such a value now (there are professional risk assessors for precisely this purpose), OWASP simply asks you to consider whether an exploit would harm the brand and its standing among customers.  For now, it is enough to classify whether a risk event would lead to negative publicity, which is often more difficult to mitigate than a systems failure.
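That before-and-after measurement reduces to a simple difference of net perceived values.  The sketch below uses hypothetical figures that echo the Ponemon numbers cited above (17 percent of a $1.5 billion brand is $255 million):

```python
def reputational_loss(value_before: float, liability_before: float,
                      value_after: float, liability_after: float) -> float:
    """Drop in net perceived brand value across a breach event.

    Computed as (perceived value - perceived liability) before the breach,
    minus the same quantity after it.
    """
    return (value_before - liability_before) - (value_after - liability_after)

# Hypothetical: a $1.5B brand losing 17% of its value, liabilities unchanged.
print(reputational_loss(1_500_000_000.0, 0.0, 1_245_000_000.0, 0.0))
# 255000000.0
```

In practice a professional assessor would supply the perceived-value and liability figures; the qualitative OWASP question stands in for this arithmetic until then.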

Does an exploit require direct human participation on the inside?

Some vulnerability exploits may be completely automated.  But in the case of activities involving phishing and behavior manipulation, the vulnerability being exploited is a kind of hybrid beast, where it’s not so much the system that’s being attacked as the environment of organizations using that system.  This is why a professional vulnerability assessment takes employee behavior into account.  The weak points in any organization’s security are often humans whose behavior may be manipulated by forces much more psychological than hacking into networks.

Quantitative analysis demands a proper sense of scale

Qualitative risk analysis presents you with a profile that yields a more realistic measure of the size and priority of various risks.  The quantitative factor applies this profile to an appropriate scale.

With SimpleRisk, that scale is the cumulative value of all valuated assets attached to a risk, called Maximum Quantifiable Loss (MQL).  It’s equivalent to the value an organization would expect to lose in a worst-case scenario.  It’s a methodology that’s easier and faster to calculate than a factor based on a time scale, such as Loss Event Frequency (LEF) or Threat Event Frequency (TEF), and it is probably less prone to error.  It eliminates the guesswork element that’s normally attributed to qualitative analysis.
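The arithmetic behind such a figure is just a sum over the valuated assets attached to the risk.  The function name and signature below are assumptions for illustration, not SimpleRisk’s API:

```python
def maximum_quantifiable_loss(asset_values: list[float]) -> float:
    """Worst-case loss: the cumulative value of all assets tied to a risk."""
    return sum(asset_values)

# Three hypothetical assets attached to one registered risk:
print(maximum_quantifiable_loss([250_000.0, 40_000.0, 10_000.0]))  # 300000.0
```

Because each term comes from an asset valuation the organization has already recorded, no frequency estimation or guesswork enters the calculation.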

Once you have conducted a thorough risk analysis, you have all the factors necessary to enable SimpleRisk to produce a thorough risk evaluation.

 
