Mind The Gap: The Critical Difference Between Likelihood and Impact


When assessing risks, likelihood (how probable an event is) and impact (how severe its consequences would be) often get blurred into a single metric, summarized in shorthand such as “low risk” or “high risk”. 

This blurring happens in two main ways. Sometimes it is psychological conflation, when our intuitions and emotions make us confuse how likely something is with how bad it would be. Other times, it is methodological combination, when organizations deliberately merge probability and impact into a single composite score for operational simplicity.

Whether unconscious or by design, this conflation flattens important nuances. The result is that fundamentally different risks get treated as equivalent. For example, a low-probability, high-impact event and a high-probability, low-impact event might both receive the same "medium risk" label, despite requiring entirely different responses. In the worst cases, we end up treating low-probability, high-impact scenarios the same way we treat low-probability, low-impact ones – dismissing them as unimportant simply because they are unlikely.

The Deepwater Horizon oil spill in 2010 illustrates this danger well. The possibility of a major blowout was known within BP, but internal assessments emphasized its improbability. Cost-saving and operational priorities took precedence over preparing for a worst-case scenario they assumed would not occur. When the unlikely happened, the impact was devastating: eleven lives lost, an environmental catastrophe, and over $65 billion in costs (and reputational damages) – far exceeding what prevention would have required. Treating "unlikely" as "low risk" created a false sense of safety and left BP unprepared when crisis struck.

Similar dynamics have appeared elsewhere, from fire-safety oversight before the Grenfell Tower tragedy to pandemic preparedness shortfalls. Risks were recognized but deprioritized because they seemed remote. This pattern reveals a dangerous tendency to treat low-probability, high-impact risks as operationally equivalent to low-probability, low-impact ones, which can have devastating consequences.

To cope with uncertainty, we simplify risk, focusing on what's most likely rather than what would matter most if it happened. This helps us function day-to-day, but it can also leave us unprepared for the rare, transformative events that truly test resilience.


Why Our Minds Blur the Line

Although the distinction between likelihood and impact appears straightforward, even the smartest people often fail to maintain it. Cognitive psychology reveals a set of systematic biases that cause us to blur these dimensions:

  • Probability neglect: When emotionally charged outcomes are involved, we tend to ignore probabilities altogether. A frightening scenario may feel equally urgent whether it has a 1% or 10% chance of occurring. As a result, we treat small risks in all-or-nothing terms: dismissing remote possibilities or obsessing over worst cases despite their tiny odds. 

  • Availability heuristic: We judge likelihood based on how easily examples come to mind. Vivid events feel “likely”, while mundane risks fade into the background. For example, many assume plane crashes are more common than car crashes simply because aviation disasters dominate the news. In organizations, the last highly visible failure often crowds out quieter threats developing out of sight.

  • Affect heuristic: Our likes and dislikes distort our risk judgments. If we favor a project or technology, we unconsciously downplay its risks; if we dislike it, we inflate them. A popular initiative gets treated as “safe,” while an unfamiliar option feels disproportionately “risky.” But preferences have no bearing on actual probabilities or impacts.

  • Optimism bias: We assume good things are more likely for us and bad things less so. In corporate contexts, this becomes an “it won’t happen here” mindset. Leaders hear about industry-wide breaches or failures yet privately believe their organization is the exception.

These psychological tendencies operate unconsciously and affect novices and experts alike. Awareness alone rarely neutralizes them: knowing about probability neglect doesn’t prevent it. But they can be mitigated through deliberate training and structured methods. Professional forecasters, for instance, learn to recognize and counteract these biases, separating emotion from probability judgments. Without such discipline, these tendencies quietly shape decisions at every organizational level.


When Organizations Reinforce the Blur

Individual awareness of cognitive biases, while valuable, is not enough. Organizational systems themselves encode the conflation of likelihood and impact, embedding it into the structures that shape decision-making across the enterprise.

Many organizations rely on risk matrices that plot likelihood against impact and then assign an overall rating based on where a threat lands in the grid. A single “medium risk” label can describe a once-in-a-century catastrophe or a monthly operational hiccup – two fundamentally different situations requiring vastly different responses. Instead of clarifying this distinction, the matrix often obscures it.
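To make the mechanism concrete, here is a minimal Python sketch of a generic likelihood-impact matrix. The bands, cut-offs, and labels are illustrative assumptions, not any particular organization's scheme; the point is simply that two fundamentally different risks can land on the same overall label.

```python
# Minimal sketch of a conventional likelihood x impact matrix
# (illustrative bands and thresholds, not a real organization's scheme).

LIKELIHOOD_BANDS = ["rare", "unlikely", "possible", "likely", "almost certain"]
IMPACT_BANDS = ["negligible", "minor", "moderate", "major", "catastrophic"]

def matrix_rating(likelihood: str, impact: str) -> str:
    """Map a (likelihood, impact) cell to a single overall label."""
    score = LIKELIHOOD_BANDS.index(likelihood) + IMPACT_BANDS.index(impact)
    if score <= 2:
        return "low"
    if score <= 5:
        return "medium"
    return "high"

# A once-in-a-century catastrophe and a monthly operational hiccup
# receive the same overall label:
print(matrix_rating("rare", "catastrophic"))     # -> "medium"
print(matrix_rating("almost certain", "minor"))  # -> "medium"
```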

Alongside matrices, organizations also rely on composite scoring systems that collapse risks into a single number (often probability multiplied by impact or a similar weighted formula). These scores create cut-off thresholds that determine which risks receive resources or executive attention.

This “single number” mindset can lead to perverse outcomes, particularly for the most consequential risks. Consider threats so severe that they would compromise the system itself – scenarios where, if they occurred, all other concerns would become moot. A cyberattack that could collapse critical financial infrastructure, a pandemic capable of shutting down global supply chains, or emerging technologies that could fundamentally destabilize geopolitical order: these are risks where the impact is so grave that normal probability-weighted prioritization breaks down. Even with extremely low likelihoods, these risks demand priority precisely because, if they occur, every other concern becomes irrelevant. Yet composite scoring often ranks them below routine operational disruptions, systematically underfunding the contingencies that matter most.
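A short worked sketch, using made-up probabilities and dollar figures, shows how a probability-times-impact composite score can push a system-ending threat below a routine disruption in the priority order:

```python
# Worked sketch with hypothetical numbers: a probability-times-impact
# composite score ranks a routine disruption above a system-ending threat.

risks = {
    # name: (probability over the planning horizon, impact in $ millions)
    "infrastructure-collapsing cyberattack": (0.0002, 50_000),  # made-up figures
    "recurring supplier outage":             (0.60,       40),
}

ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (p, impact) in ranked:
    print(f"{name}: composite score = {p * impact:.1f}")

# recurring supplier outage: composite score = 24.0
# infrastructure-collapsing cyberattack: composite score = 10.0
# The threat that would make every other concern irrelevant sits below
# the monthly irritation in the priority queue.
```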

The pressure for simplification compounds these problems. Executives prefer dashboards with red, amber, and green indicators to nuanced discussions of probability distributions and impact scenarios. Media coverage amplifies this tendency, reducing complex risk landscapes to dramatic headlines about threats being "high" or "low" without distinguishing whether that assessment reflects likelihood, impact, or some unclear mixture of both. This preference for simplicity cascades through organizations: analysts learn to present “the risk” as a single quantity, because that is the language that resonates in boardrooms and budget meetings. Resource allocation then follows suit, rewarding frequent irritations while neglecting transformative threats.

Organizational practices do not merely reflect cognitive biases; they sometimes amplify and institutionalize them. Once embedded into systems, the conflation becomes self-perpetuating, shaping decisions long after anyone has reflected on the underlying risk.


A Rigorous Approach 

From finance to national security, decision-makers navigate an increasingly uncertain world where low-probability, high-impact events can reshape entire systems. Thinking clearly about these "severe but plausible" scenarios requires holding likelihood and impact separate, creating space for more balanced, resilient planning.

At the Swift Centre, this distinction sits at the heart of our work. Rather than asking "What's the risk of China invading Taiwan?", we break the question apart: What is the probability of a given military action in different timeframes? If it occurs, what would be the impact on semiconductor supply chains, financial markets, or regional stability? Each element is assessed on its own evidence before being brought together, ensuring both likelihood and impact remain visible throughout the analysis.

This approach offers concrete advantages. It makes scenarios more discussable: a 1% chance and a 10% chance are both "unlikely," but they demand very different strategies depending on your organization's exposure, objectives, and risk appetite. The same holds for distinguishing between moderate disruptions and transformative crises. Seeing these differences clearly helps organizations calibrate responses: deciding where to monitor, where to hedge, and where to invest in deeper resilience. When likelihood and impact are discussed explicitly, assumptions become transparent. A statement like "We estimate a 5% probability of this outcome; if it happens, losses could reach $500 million" tells a story that cannot be captured by a "high risk" label alone.
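As an illustration only, not a description of the Swift Centre's actual tooling, the sketch below keeps likelihood and impact as separate, explicit fields and applies a toy decision rule to each dimension on its own. The class name, thresholds, and figures are assumptions for the example.

```python
# Illustrative sketch: keep likelihood and impact as separate fields
# instead of collapsing them into one score early on.

from dataclasses import dataclass

@dataclass
class RiskAssessment:
    scenario: str
    probability: float          # calibrated estimate, 0-1
    impact_usd_millions: float  # conditional loss if the scenario occurs

    def summary(self) -> str:
        return (f"We estimate a {self.probability:.0%} probability of "
                f"'{self.scenario}'; if it happens, losses could reach "
                f"${self.impact_usd_millions:,.0f}M.")

def suggested_posture(r: RiskAssessment) -> str:
    """Toy decision rule: respond to the two dimensions separately."""
    if r.impact_usd_millions >= 500:   # severe regardless of the odds
        return "invest in resilience and contingency planning"
    if r.probability >= 0.10:          # likely enough to hedge now
        return "hedge and actively manage"
    return "monitor and reassess periodically"

r = RiskAssessment("major supply-chain disruption",
                   probability=0.05, impact_usd_millions=500)
print(r.summary())
print(suggested_posture(r))
```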

Our forecasting teams combine calibrated probability estimates with structured scenario analysis, drawing on expert input and superforecaster judgment. The process is dynamic: when teams focus primarily on probabilities, we help them explore impacts; when immersed in impact planning, we bring likelihoods back into view. Over time, this builds a richer, shared language for discussing uncertainty, supporting better decisions that neither downplay unlikely catastrophes nor overreact to them.
