Risk and Uncertainty

In 2007, the world’s largest banks employed thousands of risk analysts running sophisticated quantitative models. These models could price individual mortgage-backed securities with impressive precision. They could calculate default probabilities, assign credit ratings, and generate Value-at-Risk estimates that satisfied regulators and reassured shareholders. By 2008, the global financial system nearly collapsed. The models had not failed at what they were designed to do. They had been applied to a problem they were never equipped to handle.

Individual mortgages were calculable. Their default rates could be estimated from historical data. But the complex web of interactions between millions of bundled, sliced, and re-bundled securities created something qualitatively different. When housing prices dropped, the knock-on effects cascaded through interconnections that no model had mapped, because the interconnections themselves were emergent properties of the system’s complexity. The banks had treated a situation of genuine uncertainty as though it were a problem of risk.

This is not a story about bad math. It is a story about the wrong category. We say “risk” constantly when we mean something else entirely. Project risk assessments, geopolitical risk analyses, AI risk discussions: in most of these contexts, “risk” functions as a catch-all for “something bad might happen.” The word papers over a distinction that matters enormously in practice.

Vaughn Tan, who has spent years dissecting how organizations relate to the unknown, describes a chain reaction that makes this confusion consequential: what we call something determines what we think it is, which determines how we act, which determines outcomes.1 When we label a situation “risky,” we implicitly claim that the future states are knowable and their probabilities calculable. That claim activates a specific toolkit: quantitative models, cost-benefit analyses, expected-value calculations, insurance mechanisms. If the situation is genuinely uncertain, these tools generate false confidence, which is worse than admitting ignorance.

The confusion between risk and uncertainty is therefore not a problem of imprecise language. It is a problem of misguided action. Getting the words right is a prerequisite for getting the responses right. This note traces the distinction, examines why we persistently collapse it, and explores the practical consequences of that collapse.

The Distinction

In 1921, the economist Frank Knight published Risk, Uncertainty and Profit, a book that drew a line between two fundamentally different relationships to the unknown.2

Risk describes situations where the possible future states are known and their probabilities can be assigned. A fair die has six faces, each with a one-in-six probability. An insurance company can calculate the likelihood that a 45-year-old nonsmoker will die within the next decade, because actuarial tables rest on vast historical datasets. In these situations, formal rational decision-making is appropriate. Cost-benefit analysis works. Expected-value calculations are meaningful. The unknown is, in a precise sense, measurable.
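The logic of calculable risk can be made concrete in a few lines. The sketch below prices an actuarially fair life-insurance policy as a pure expected-value calculation; the 3% ten-year death probability and the payout are illustrative numbers, not drawn from a real mortality table.

```python
# An actuarially fair premium as a pure expected-value calculation.
# The 3% ten-year death probability and the $500,000 payout are
# illustrative numbers, not taken from a real mortality table.

def fair_premium(prob_of_claim: float, payout: float) -> float:
    """Expected-value price of the policy: probability times payout."""
    return prob_of_claim * payout

premium = fair_premium(0.03, 500_000)
print(premium)  # an actuarially fair premium of $15,000
```

An insurer that can trust the 3% figure can price this policy and, across a large pool of policyholders, expect premiums to cover claims. That trust in the probability is exactly what genuine uncertainty removes.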

Uncertainty describes situations where the possible future states cannot be fully enumerated, or their probabilities cannot be reliably determined. The launch of a genuinely novel product, the long-term consequences of a new technology, the trajectory of a geopolitical conflict: these involve unknowns that resist quantification. Formal rational tools do not apply cleanly, and their application can actively mislead.

Knight distinguished three types of probability that help sharpen this boundary. A priori probability is purely deductive: the six faces of a die, the 52 cards in a deck. Statistical probability is inferred from historical data: mortality tables, insurance claims, batting averages. Estimated probability relies on subjective judgment: an entrepreneur’s gut feeling about market demand, an analyst’s assessment of political stability.

The critical insight is that only a priori probabilities fully justify the apparatus of formal rationality. Statistical probabilities come close but carry an implicit assumption: that the future will resemble the past closely enough for historical frequencies to hold. Estimated probabilities are further still, resting on judgment rather than data. Yet in practice, all three types are routinely treated as interchangeable. Organizations apply the same quantitative frameworks to estimated probabilities that would only be warranted for a priori ones. This systematic conflation is where the trouble begins.

Knight’s practical point was about profit. Risk can be insured away. If the probability of a loss is known, someone can price the insurance, and the risk effectively disappears as a factor in economic decision-making. Uncertainty, by contrast, cannot be insured, because no one can price what no one can measure. Entrepreneurial profit, Knight argued, exists precisely as the reward for bearing genuine uncertainty: profit arises from the inherent unpredictability of things, from the fact that the results of human activity cannot be anticipated and that probability calculation is, in many situations, impossible and meaningless.2

This is not an abstract philosophical distinction. It has direct implications for how organizations should structure their decision-making, allocate resources, and prepare for the future.

Why We Get It Wrong

If the distinction matters so much, why do we collapse it so persistently? The answer operates on three levels.

The Words Themselves

Vaughn Tan has analyzed the linguistic dimension in detail.3 The problem has two faces. The first is overloading: the word “risk” is used to refer to so many different things that it has become nearly meaningless as a diagnostic term. In one of Tan’s Interintellect sessions, ten participants generated more than eight distinct informal definitions of “risk”: a potential problem, a known problem with mitigations, the probability of a negative outcome, the cost of inaction, exposure to downside, a threat that has been assessed, something on a risk register, and more. Each definition implies a different relationship to the unknown, but the single word “risk” obscures these differences.

The second face is appropriation: the word “uncertainty” has been co-opted by fields that use it to mean something much closer to calculable risk. In AI and machine learning, “uncertainty quantification” typically refers to the confidence intervals of a model’s predictions. In mainstream economics, “uncertainty” often means volatility that can be modeled with probability distributions. In both cases, the term has been appropriated to describe situations that Knight would have classified as risk, not uncertainty. This appropriation strips the word of its diagnostic power precisely where it is needed most.

As Tan writes: “Confused terminology about the unknown stops organizations from relating well to not-knowing.”1

The Emotions Underneath

Uncertainty is viscerally uncomfortable. The risk mindset offers psychological relief because it implies that the situation is, in principle, controllable and calculable. Assigning a probability to a threat, even an unreliable one, feels better than sitting with the acknowledgment that the situation cannot be quantified at all.

This emotional dimension is rarely discussed openly in organizational life. Fear, anxiety, and the discomfort of not-knowing are not items on meeting agendas. Yet they drive behavior powerfully. Organizations develop what Tan calls “antibodies” against genuine engagement with uncertainty: reflexive responses that convert every uncertain situation into a risk-management exercise, because risk management is emotionally tolerable in ways that uncertainty-acknowledgment is not.4

The result is a kind of organizational self-medication. The risk framework does not cure the underlying condition, but it alleviates the symptoms. A board that receives a risk dashboard feels informed. A team that has completed a risk assessment feels prepared. The activities are real, the effort is genuine, and the comfort they provide is immediate. The cost is paid later, when reality fails to conform to the model, and the organization discovers that it spent its preparatory energy on the wrong kind of preparation entirely.

The Institutional Infrastructure

Cost-benefit analyses, expected-value calculations, risk-management departments, insurance frameworks, regulatory compliance regimes: the entire institutional infrastructure of modern organizations is built on the risk mindset. A situation classified as “risky” automatically activates this infrastructure. Analysts produce quantitative assessments. Committees review risk registers. Boards receive risk dashboards. The machinery is impressive and, within its proper domain, genuinely useful.

The problem is that the machinery does not have an off-switch for situations where it does not apply. When a situation is genuinely uncertain, no institutional mechanism exists to say: “Our standard tools are inappropriate here. We need a different approach.” Instead, the tools are applied regardless, because they are available and because applying them feels like due diligence. The institutional infrastructure creates its own demand, independent of whether it matches the actual nature of the problem.

The Consequences

When risk tools are applied to uncertainty situations, the result is not mild inaccuracy. It is systematic misdirection. The 2008 financial crisis, described above, illustrates the core pattern: precision was the danger, because it created confidence where humility was warranted. The banks did not lack sophisticated tools. They ran Monte Carlo simulations, stress tests, and correlation analyses. These tools gave precise-looking answers to a question that did not admit precise answers.5
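A small simulation makes the precision illusion concrete. This is a deliberately simplified one-factor default model with illustrative parameters, not any bank's actual model: every loan defaults with the same 2% probability in both runs, and only the assumed correlation between loans (via a shared macro shock) changes.

```python
# A deliberately simplified one-factor default model (illustrative
# parameters, not any bank's actual model). Every loan defaults with the
# same 2% marginal probability in both runs; only the assumed correlation
# between loans changes, via a shared macro shock.
import random
from statistics import NormalDist

def tail_loss(correlation, n_loans=500, p=0.02, trials=1000, seed=7):
    """99th-percentile number of defaults from a Monte Carlo run."""
    rng = random.Random(seed)
    threshold = NormalDist().inv_cdf(p)  # a loan defaults below this shock
    losses = []
    for _ in range(trials):
        common = rng.gauss(0.0, 1.0)     # shared shock (e.g. housing prices)
        defaults = 0
        for _ in range(n_loans):
            idio = rng.gauss(0.0, 1.0)   # loan-specific shock
            shock = (correlation ** 0.5) * common \
                    + ((1 - correlation) ** 0.5) * idio
            if shock < threshold:
                defaults += 1
        losses.append(defaults)
    return sorted(losses)[int(0.99 * trials)]

print(tail_loss(0.0))  # modest worst case: defaults are independent
print(tail_loss(0.3))  # same per-loan risk, far fatter tail when correlated
```

The per-loan numbers are identical in both runs. The precise-looking 99th-percentile figure is driven almost entirely by the correlation parameter, which is precisely the quantity that was not knowable in advance of 2008.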

The collapse of Silicon Valley Bank in 2023 shows how the confusion perpetuates itself even after its consequences become visible. The near-universal diagnosis was “poor risk management.” The bank had concentrated its assets in long-duration bonds without adequate hedging against interest rate changes. In conventional terms, this looks like a straightforward risk-management failure. But the dynamics that destroyed SVB were not conventional. Depositors withdrew $42 billion in a single day. The speed of the run had no historical precedent, amplified by a networked depositor base of tech firms and venture capitalists who moved in parallel. The interest rate exposure was a risk problem. The deposit flight, at that speed and scale, was an uncertainty problem. The standard diagnostic language collapsed both into “risk,” which prevented the more important question from being asked: were the bank’s models even addressing the right category of threat?

The consequence of mislabeling goes beyond inaccuracy. Organizations optimize for the wrong thing entirely. Organizations that treat uncertainty as risk build elaborate quantitative models that deliver the illusion of control. They invest in refining their predictions rather than developing the capacity to act well under conditions where prediction is impossible. They mistake precision for accuracy and control for preparedness.

Daniel Ellsberg’s paradox, proposed in 1961 and confirmed in many subsequent experiments, helps explain why this pattern is so persistent.6 Ellsberg showed that people are averse to ambiguity beyond their aversion to risk. Given a choice between a bet with known probabilities and a bet with unknown probabilities, people reliably choose the known-probability option, even when the expected values are identical. This ambiguity aversion operates below conscious deliberation. It means that organizations will gravitate toward the “risky” interpretation of a situation, because risk feels cognitively manageable in a way that uncertainty does not.
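Ellsberg's two-urn setup can be sketched directly. Under any symmetric prior over the ambiguous urn's composition, the two bets have identical expected values, so the reliable preference for the known urn cannot be explained by expected value alone.

```python
# Ellsberg's two-urn bet. Urn A holds 50 red and 50 black balls; urn B
# holds 100 balls in an unknown red/black mix. A bet on drawing red pays 100.
PAYOUT = 100

# Known urn: exactly 50 of the 100 balls are red.
ev_known = (50 / 100) * PAYOUT

# Ambiguous urn: average over a uniform prior on how many balls are red.
compositions = range(0, 101)  # 0..100 red balls, all treated as equally likely
ev_ambiguous = sum(r * PAYOUT for r in compositions) / (100 * len(compositions))

print(ev_known, ev_ambiguous)  # 50.0 50.0 -- identical expected values
```

People overwhelmingly bet on the known urn anyway. The preference tracks the comfort of calculability, not the arithmetic.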

Beyond the Binary: Varieties of Not-Knowing

The risk/uncertainty distinction is a starting point, not a destination. The binary captures something real, but the territory of not-knowing is more differentiated than a two-category scheme suggests.

Why “Not-Knowing”

Tan deliberately introduces the term “not-knowing” as an alternative to “uncertainty.”7 The move is pragmatic rather than aesthetic. As the linguistic analysis above shows, “uncertainty” has been so thoroughly overloaded and appropriated that using it invites exactly the confusion it is supposed to resolve. “Not-knowing” is admittedly clunky. Its value lies precisely in that clumsiness: it resists casual use and forces the speaker to specify which type of not-knowing is under discussion.

Four Sources of Not-Knowing

Tan identifies four distinct sources from which not-knowing can arise.8

Unknown causality. The relationship between actions and outcomes is unclear. A central bank lowers interest rates, but the systemic effects cascade through channels that are imprecisely understood, producing consequences that no causal model reliably predicts. The action is deliberate; the outcome is genuinely uncertain because the causal chain is opaque.

Unknown action space. It is unclear which actions are even available. System complexity, social dynamics, or incomplete information can obscure the set of possible moves. An organization facing a novel crisis may not know what it can do, let alone what it should do. The menu of options is itself part of the unknown.

Unknown outcome space. The set of possible outcomes is itself unclear. Either existing outcomes have not yet been identified, or the relevant outcomes do not yet exist. Innovation operates in this territory by definition: genuinely novel products or technologies create outcomes that could not have been enumerated in advance.

Unknown preferences. It is unclear which outcomes are desirable. This connects directly to Meaningmaking: when we do not know what we value, we engage in subjective judgment that no calculation can replace. Preference uncertainty is the domain where meaningmaking becomes the generative response to not-knowing.

The existence of any one of these sources is sufficient to place a situation in the territory of genuine uncertainty rather than calculable risk.

Complementary Frameworks

Several other frameworks map adjacent territory. Each adds something specific that Knight’s binary and Tan’s taxonomy do not cover on their own.

Cynefin. Dave Snowden and Mary Boone developed the Cynefin framework as a diagnostic tool with five domains: Simple (later renamed Clear), Complicated, Complex, Chaotic, and Disorder.9 Risk inhabits the Simple and Complicated domains, where cause and effect are either self-evident or discoverable through expert analysis. Uncertainty inhabits the Complex and Chaotic domains, where cause and effect are visible only in retrospect or not discernible at all. What Cynefin adds to the risk/uncertainty distinction is an operational question that Knight does not ask: how do you determine which domain you are in? The framework’s central warning is that domain mismatch is the primary strategic error. Treating a Complex situation as Complicated means applying expert analysis where experimentation is required.

Sensemaking. Karl Weick introduces ambiguity as a category distinct from both risk and uncertainty.10 Under uncertainty, the probabilities are unknown. Under ambiguity, the meanings are unclear. The addition matters because ambiguity requires a fundamentally different response than either risk management or uncertainty navigation. Sensemaking is the ongoing process of projecting plausible narratives onto ambiguous situations: plausibility matters more than precision, and action is as important as analysis, because acting generates data that can then be interpreted. Where Knight and Tan focus on what we do not know, Weick focuses on what we cannot yet interpret.

Antifragility. Nassim Nicholas Taleb radicalizes the Knight distinction. The problem, he argues, is that we pretend we can measure what we cannot.11 Standard probability models systematically underestimate the likelihood and impact of extreme events (“Black Swans”) because they assume thin-tailed distributions where fat-tailed ones apply. What Taleb adds is a design vocabulary for responding to uncertainty: fragile systems are damaged by volatility, robust systems resist it, and antifragile systems benefit from it. His practical principle is via negativa: avoid large fragile commitments, build in optionality, permit trial-and-error. Small failures prevent catastrophic ones.
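Taleb's point about tail weight can be illustrated by comparing survival probabilities. The Gaussian and Pareto parameters below are illustrative and not calibrated to each other; the point is the orders-of-magnitude gap in how much probability each model assigns to an extreme event.

```python
# Thin-tailed vs fat-tailed probability of an extreme event.
# Parameters are illustrative; the two distributions are not calibrated
# to each other -- the point is the orders-of-magnitude gap.
from statistics import NormalDist

# Thin tail: probability that a standard normal variable exceeds 6 sigma.
p_gaussian = 1 - NormalDist().cdf(6)

# Fat tail: Pareto survival function P(X > x) = (x_min / x) ** alpha,
# with an illustrative tail index alpha = 2.
def pareto_tail(x, x_min=1.0, alpha=2.0):
    return (x_min / x) ** alpha

p_pareto = pareto_tail(6)

print(f"{p_gaussian:.1e}")  # on the order of 1e-09: "impossible" in practice
print(f"{p_pareto:.1e}")    # on the order of 1e-02: a routine occurrence
```

A model that assumes the first distribution while reality follows the second will treat every large move as a once-in-millennia anomaly, which is Taleb's diagnosis of standard risk practice.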

Connections

Meaningmaking. Meaningmaking operates in Tan’s fourth not-knowing type: not-knowing about value. It is the generative response to preference uncertainty. When we do not know what outcomes are desirable, we engage in subjective value judgments that no calculation can replace.

Fictional Expectations. Jens Beckert’s theory of capitalist dynamics starts exactly at the fracture point between risk and uncertainty. When genuine uncertainty reigns, rational calculation fails. Fictional expectations are the social mechanism through which economic actors coordinate despite uncertainty: collective images of the future replace probability calculation. As Beckert writes, “Rational actor theory does not fail because actors do not wish to maximize their utility but because it is unable to address the consequences of genuine uncertainty.”12

Scenario Planning. Scenario planning is explicitly anti-predictive, designed as a response to uncertainty rather than risk. Pierre Wack developed the methodology at Shell precisely because forecasting (a risk tool) failed under genuine uncertainty. Scenarios create “prepared imagination” by developing comfort with multiple futures rather than confidence in one.

Causal Layered Analysis (CLA). CLA’s four layers map roughly onto the risk/uncertainty gradient. The surface layer (litany) deals in observable events that can often be quantified. The systems layer identifies structural patterns amenable to modeling. But the deeper layers (worldview, myth/metaphor) resist quantification entirely: the question “what deep narrative shapes how this society thinks about progress?” has no probability distribution. CLA works by descending into the territory where risk tools lose their grip.

Images of the Future. Under risk, images of the future can take the form of probability-weighted forecasts: a 70% chance of outcome A, a 30% chance of outcome B. Under uncertainty, this format breaks down. Images of the future become navigational tools instead: not predictions of what will happen, but orientations that allow actors to move coherently through an open future. The difference between a forecast and a scenario is, at its root, the difference between risk and uncertainty.

Open Questions

Is the binary between risk and uncertainty tenable? Knight drew a sharp line. Tan’s four sources of not-knowing suggest something more differentiated. The question is whether the binary distinction is a useful simplification that orients thinking, or whether it obscures the gradations that matter most when organizations actually try to act under conditions of not-knowing.

Does AI shift the boundary between risk and uncertainty? More data and more computation make more things calculable, which expands the domain of risk. But more interconnection and more systemic complexity create more emergent dynamics, which expands the domain of uncertainty. The 2008 crisis happened partly because quantitative models gave false confidence about genuinely uncertain dynamics. AI systems are far more powerful models. Whether they help us handle uncertainty or deepen the confusion by making risk tools seem more capable than they are remains an open question.

How does not-knowing map to futures practice? Foresight may be fundamentally a response to Tan’s third type of not-knowing: the unknown outcome space. If so, that would explain why risk-based forecasting consistently fails at the task foresight is designed for. Foresight does not aim to predict the most likely future. It aims to expand the space of futures we can imagine and prepare for. That is an uncertainty task, not a risk task.

The distinction Knight drew in 1921 is over a century old. The fact that we still collapse it routinely suggests that the problem is not lack of knowledge but lack of practice. We know the difference between risk and uncertainty. We have frameworks for diagnosing which one we face. What we lack is the institutional willingness to act on the diagnosis when the answer is uncomfortable. Better risk models will not help if the situation is not a risk problem. The harder discipline is recognizing that, before reaching for the toolkit.

Reading List

  1. Frank Knight, Risk, Uncertainty and Profit (1921). The foundational distinction between measurable risk and unmeasurable uncertainty, and why the difference explains entrepreneurial profit.
  2. Daniel Ellsberg, “Risk, Ambiguity, and the Savage Axioms” (1961). The empirical demonstration that humans are averse to ambiguity beyond their aversion to risk.
  3. Karl Weick, Sensemaking in Organizations (1995). Introduces ambiguity as a distinct category and argues that plausible narrative matters more than accurate calculation under uncertainty.
  4. Nassim Nicholas Taleb, The Black Swan (2007). The argument that standard models systematically underestimate extreme events, and that pretending to measure the unmeasurable is more dangerous than admitting ignorance.
  5. Dave Snowden and Mary Boone, “A Leader’s Framework for Decision Making” (HBR, 2007). The Cynefin framework: matching response strategies to the type of environment you face.
  6. Vaughn Tan, The Uncertainty Mindset (2020). How high-end culinary R&D teams design organizations for genuine uncertainty rather than calculable risk.
  7. Vaughn Tan, “Varieties of the Unknown” (Uncertainty Mindset #1). Knight’s typology applied to contemporary examples, including three non-risk forms of not-knowing.
  8. Vaughn Tan, “How to think more clearly about risk” (vaughntan.org). The overloading and appropriation of “risk” and “uncertainty,” and why getting the words right is a prerequisite for getting the actions right.
  9. Vaughn Tan, “Strategies and tools for not-knowing” (vaughntan.org, 2024). The synthesis: a practical taxonomy of not-knowing types and a toolkit for navigating them.
Notes

  1. Vaughn Tan, “How to think more clearly about risk,” vaughntan.org.

  2. Frank Knight, Risk, Uncertainty and Profit (Boston: Houghton Mifflin, 1921).

  3. Vaughn Tan, “How to think more clearly about risk,” vaughntan.org. See also Tan’s account of the Interintellect session on overloading and appropriation.

  4. Vaughn Tan, “The difficulties of not-knowing,” The Uncertainty Mindset #34.

  5. Vaughn Tan, “Varieties of the Unknown,” The Uncertainty Mindset #1.

  6. Daniel Ellsberg, “Risk, Ambiguity, and the Savage Axioms,” The Quarterly Journal of Economics 75, no. 4 (1961): 643–669.

  7. Vaughn Tan, “Strategies and tools for not-knowing,” vaughntan.org, 2024.

  8. Vaughn Tan, “Strategic uncertainty,” The Uncertainty Mindset #4.

  9. Dave Snowden and Mary Boone, “A Leader’s Framework for Decision Making,” Harvard Business Review, November 2007.

  10. Karl Weick, Sensemaking in Organizations (Thousand Oaks: Sage, 1995).

  11. Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007). See also Antifragile: Things That Gain from Disorder (2012).

  12. Jens Beckert, Imagined Futures: Fictional Expectations and Capitalist Dynamics (Cambridge, MA: Harvard University Press, 2016), 8.
