The Superforecaster's Mindset: Probability Over Certainty and Fate

The Era of Big Data and "Superquants"

We live in an era dominated by Big Data, in which vast information-technology networks generate immense quantities of data. Data scientists, equipped with powerful computers and complex mathematics, analyze this data to extract meaning and predict reality. To most people their methods are intimidating, seemingly the work of magicians, in the spirit of Arthur C. Clarke's famous observation: "Any sufficiently advanced technology is indistinguishable from magic." Men like Lionel Levine, an assistant professor of mathematics at Cornell with a Harvard degree and a Berkeley Ph.D., embody these "math wizards" (or "quants," on Wall Street), with impressive résumés that include papers bearing titles such as "Scaling Limits for Internal Aggregation Models with Multiple Sources."

Superforecasters, like Levine, consistently demonstrate a strong affinity for numbers. They excel at basic numeracy tests, correctly calculating, for example, that if an infection strikes with a 0.05% chance, then out of 10,000 people approximately 5 would be expected to get it. Their backgrounds frequently include math, science, or computer programming, as exemplified by Joshua Frankel, a Brooklyn filmmaker who attended a math and science high school and started out in computer animation. Superforecasters are not just comfortable with numbers; they can apply them practically. For instance, Bill Flack develops Monte Carlo models based on historical currency exchange rates to forecast future rates. Initially, one might assume their exceptional forecasting results stem directly from their use of complex mathematical models.
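The text does not describe Flack's actual model, but the general technique can be sketched. The following is a minimal, hypothetical illustration (the rate history, threshold, and horizon are invented): it resamples past daily moves, bootstrap-style, to simulate many possible futures and reads a probability off the simulated distribution.

```python
import random

def monte_carlo_rate_forecast(historical_rates, horizon_days=30, n_paths=10_000, seed=42):
    """Bootstrap-style Monte Carlo: resample past daily returns to simulate
    possible future exchange-rate paths. A generic sketch, not Flack's model."""
    rng = random.Random(seed)
    # Daily simple returns observed in the historical series.
    returns = [b / a - 1 for a, b in zip(historical_rates, historical_rates[1:])]
    last = historical_rates[-1]
    finals = []
    for _ in range(n_paths):
        rate = last
        for _ in range(horizon_days):
            rate *= 1 + rng.choice(returns)  # resample one past daily move
        finals.append(rate)
    return finals

# Hypothetical rate history and threshold, invented for illustration.
history = [1.080, 1.081, 1.079, 1.083, 1.085, 1.082, 1.087, 1.090, 1.088, 1.092]
paths = monte_carlo_rate_forecast(history)
p_above = sum(r > 1.10 for r in paths) / len(paths)
print(f"Estimated P(rate > 1.10 in 30 days) = {p_above:.2f}")
```

The forecast probability is simply the fraction of simulated paths that end above the threshold; more sophisticated models differ mainly in how they generate the paths.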

The Illusion of Math Magic in Forecasting

Contrary to the common intuition that superforecasters achieve their results through complex algorithms or statistical spells, the reality is far simpler. There is no "moat and no castle wall" of advanced mathematics separating superforecasters from others. While they may occasionally use or consult math models, it is rare. The majority of their accurate forecasts are the outcome of "careful thought and nuanced judgment." Lionel Levine himself admits to using math in only "a couple of questions" and primarily relies on subjective judgment, emphasizing the importance of "balancing, finding relevant information and deciding how relevant is this really? How much should it really affect my forecast?" In a contrarian move, Levine even seeks to prove his forecasting success is independent of his mathematical prowess, noting that people would otherwise attribute it solely to his math skills.

The fact that superforecasters are highly numerate is not coincidental, but their numeracy aids them not by enabling arcane math models but through a "simpler, subtler, and much more interesting" mechanism.

Certainty vs. Probability: The "Where's Osama?" Scenario

The Fictional Panetta's Demand for Certainty

The movie Zero Dark Thirty dramatizes the intelligence community's efforts to locate Osama bin Laden in 2011. In a pivotal scene, the fictional CIA Director Leon Panetta (played by James Gandolfini) demands a definitive "yes or no" answer from his analysts regarding bin Laden's presence in the Abbottabad compound. He expresses frustration when analysts provide probabilistic estimates (e.g., 60%, 80%, 60%) and sighs, "This is a clusterfuck, isn't it?" Panetta, like most people, desires agreement and certainty, finding disagreement unsettling. Maya, the film's protagonist, declares with absolute conviction, "A hundred percent, he's there," later adjusting to 95% because "certainty freaks you guys out." Her unwavering confidence impresses Panetta, who later dismisses other analysts as "cowed" for expressing uncertainty. This portrayal highlights a "two-setting mental dial" in Panetta's thinking (yes/no), lacking degrees of "maybe."

Critique of Absolute Certainty

However, Maya's 100% certainty is unreasonable in a real-world scenario. While bin Laden was indeed in the compound (an objective truth), being absolutely certain (no chance whatsoever it could be someone else) disregards the myriad tiny alternative possibilities (another terrorist, a drug trafficker, an Afghan warlord, etc.), which could collectively add up to 1%, 2%, 5%, or more. Such fine distinctions matter, as evidenced by the intelligence community's erroneous 100% certainty about Saddam Hussein's weapons of mass destruction, which shut down exploration of alternative possibilities. Maya's position is described as "right but unreasonable," contrasting with a "wrong but reasonable" position the intelligence community could have taken by acknowledging a 60% or 70% chance regarding Hussein's WMDs. The final outcome (Maya being correct) does not validate the unreasonable process.

The Real Panetta's Embrace of Diversity

In stark contrast, the real Leon Panetta welcomed diverse judgments, which ranged from 30% to 90% probability regarding bin Laden's presence. He encouraged analysts to be honest with their beliefs, seeing an array of judgments as "welcome proof that the people around the table are actually thinking for themselves and offering their unique perspectives." This diversity, he felt, was the "wisdom of the crowd." Panetta, a former congressman, chief of staff to President Clinton, and secretary of defense to President Obama, understood "process-outcome paradoxes" and repeatedly stated, "Nothing is one hundred percent." He genuinely thinks like a superforecaster.

Barack Obama's "Third Setting" and the Nature of Human Judgment

Mark Bowden's book The Finish recounts a similar scene with the real President Barack Obama in the Situation Room. CIA officers presented a range of confidence levels for bin Laden's presence, from a team leader's 95% to others' 80%, and some as low as 30% or 40%. Obama acknowledged, "This is a probability thing." Bowden, however, editorializes that the CIA's elaborate process for weighing certainty, instituted after the Saddam WMD error, resulted in "more confusion" rather than certainty. Obama reportedly told Bowden that probabilities in this situation often "disguised uncertainty as opposed to actually providing you with useful information," concluding it was a "gamble, pure and simple."

Obama ultimately declared, "This is fifty-fifty," silencing everyone. This statement could be interpreted in several ways:

  • Literal but misguided: Plucking 50% out of the air, ignoring the median estimate of the "wisdom of the crowd" (around 70%).

  • "I'm not sure" or "uncertain" (reasonable): As an executive, Obama might have decided that any significant probability of bin Laden's presence justified the strike, making the exact probability less critical than moving forward with the decision.

  • Ignorance prior (less defensible): Bothered by the wide variation, Obama might have retreated to a state of indifference, known in probability theory as the "ignorance prior," akin to a "50/50" coin flip. This meant he didn't fully utilize the available information.

This phenomenon echoes Amos Tversky's observation that most people operate with only three settings for probabilities: "gonna happen," "not gonna happen," and "maybe." While Tversky made this remark with humor, it captures a fundamental truth about human judgment. People naturally desire clear, decisive answers.

Probability for the Stone Age: Evolutionary Roots of Simplistic Thinking

Humans have grappled with uncertainty throughout history, long before formal statistical models emerged around Jakob Bernoulli's Ars Conjectandi in 1713. Our ancestors relied on a "tip-of-your-nose perspective," a fast, intuitive System 1 process. For instance, spotting a shadow in the grass: if a lion attack easily comes to mind, run! This mental process often produces binary (Yes/No) conclusions or, if weaker, a "Maybe." It does not facilitate fine-grained distinctions between, say, a 60% and 80% chance of a lion, which requires slower, conscious System 2 thought.

In existential situations our ancestors faced, quick, clear directions from a three-setting dial (YES = run! MAYBE = stay alert! NO = relax) were more advantageous. Fine-grained analysis could slow down decision-making, potentially leading to death.

Our preference for two- and three-setting mental dials has deep evolutionary roots. Research shows that people value certainty disproportionately: reducing a child's risk of disease from 5% to 0% is valued far more than a reduction from 10% to 5%, because the former delivers certainty. Both 0% and 100% carry significantly more weight in our minds than standard economic models of rational choice would suggest. This desire for "worry-free zones" leads our brains to ignore small chances and use binary thinking whenever possible, resorting to "maybe" only when compelled.

This explains why we often seek confident yes/no answers. Harry Truman's joke about wanting a "one-armed economist" (to avoid "on the one hand…on the other…") reflects this desire. While confidence and accuracy are positively correlated, people tend to exaggerate this correlation. Studies show people prefer confident financial advisors even if their track records are identical, and they equate confidence with competence. Forecasters offering middling probabilities are often perceived as incompetent, ignorant, or lazy.

Misconceptions and the Counterintuitive Nature of Probability

Our "primal thinking" contributes to a poor grasp of probability, often manifest in simple misunderstandings. For example, a "70% chance of rain" is often misinterpreted as rain for 70% of the day, or 70% of the city, or 70% of forecasters believing it will rain. The correct understanding—that over many days with a 70% forecast, it should rain on 70% of them—is deeply counterintuitive, clashing with our natural inclination to think in terms of "it will rain," "it won't rain," or "maybe it will rain."
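The frequency interpretation can be made concrete with a short simulation: across many days carrying a calibrated "70% chance of rain" forecast, rain should occur on roughly 70% of them (all numbers here are simulated for illustration).

```python
import random

# A calibrated "70% chance of rain" is a claim about frequency: across many
# days carrying that forecast, rain should occur on about 70% of them,
# not on 70% of the day, over 70% of the city, or for 70% of forecasters.
rng = random.Random(0)
n_days = 100_000
rainy_days = sum(rng.random() < 0.70 for _ in range(n_days))
print(f"Rained on {rainy_days / n_days:.1%} of days with a 70% forecast")
```

This is exactly how forecasters are scored for calibration: group the days by stated probability and check the observed frequency in each group.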

Even sophisticated individuals make elementary mistakes, as seen when journalist David Leonhardt initially misinterpreted a prediction market's 75% chance of a law being struck down as a certainty. He later corrected himself, explaining that a "74% chance it will" also means a "26% chance it won't." Robert Rubin, former Treasury secretary, recounted his frustration with policymakers who treated an 80% probability as a certainty, requiring him to "pound the table" to emphasize the 20% chance of non-occurrence. Only with probabilities closer to even (e.g., 60/40) did they easily grasp the uncertainty. Amos Tversky's insight about the three-setting dial clearly resonates with these observations.

Probability for the Information Age: The Scientific Approach

Scientists approach probability radically differently, embracing uncertainty as an inherent aspect of reality. As Leon Panetta stated, "Nothing is one hundred percent." This contrasts sharply with a 19th-century view of science, which aimed to eradicate uncertainty by accumulating facts. However, 20th-century science has revealed that uncertainty is "an ineradicable element of reality" (William Byers), and the dream of total certainty is an illusion. All scientific knowledge is tentative, not "chiseled in granite."

Given this, the two- and three-setting mental dials are fundamentally flawed because "yes" and "no" imply certainty. The only viable setting is "maybe," which then must be subdivided into degrees of probability. Vague terms like "probably" introduce ambiguity; thus, scientists prefer numerical probabilities with fine granularity (e.g., 10%, 20%, 30%, or even 10%, 11%, 12%).

Robert Rubin exemplifies this "probabilistic thinking." After a lecture at Harvard on the lack of provable certainty, he adopted this axiom, and it guided his career. He demanded precision from his aides, instructing them to state probabilities like "60%" instead of "absolutely." When Rubin published his autobiography, In an Uncertain World, he was bemused to find that insights he considered obvious struck readers as startlingly counterintuitive; many pinned passages from the book on their cubicle walls. This highlights the profound difference between probabilistic thinking and the more natural two- or three-setting mental models, akin to "fish and birds": fundamentally different creatures with distinct assumptions about reality.

"Uncertain Supers": How Superforecasters Leverage Probabilistic Thinking

Superforecasters, unlike most, readily grasp that an 80% chance implies a 20% chance of the opposite. Their superior numeracy enables them to be probabilistic thinkers, centered on an awareness of irreducible uncertainty.

Philosophers distinguish between two types of uncertainty:

  • Epistemic uncertainty: What you don't know but is, in theory, knowable (e.g., how a mystery machine works). This is a "clocklike" forecasting challenge.

  • Aleatory uncertainty: What you don't know and is unknowable (e.g., rain one year from now). This is an "intractably cloud-like problem," where uncertainty cannot be eliminated.

Superforecasters excel at discerning these. When facing "cloudier" questions with significant aleatory uncertainty (e.g., currency markets), they are cautious, keeping initial estimates between 35% and 65% and adjusting incrementally. They understand that the cloudier the outlook, the harder it is to outperform random chance.

Their approach to the "fifty-fifty" judgment is also telling. Unlike those with a three-setting dial who use 50% as a stand-in for "maybe," careful probabilistic thinkers see 50% as just one point in a vast range, no more likely to be used than 49% or 51%. Tournament data confirms that frequent users of 50% are less accurate.

Superforecasters embrace granularity. Brian Labatte, a superforecaster, demonstrates this even in casual conversation, stating "65/35" for his fiction-to-nonfiction reading ratio. While ordinary forecasters stick to "tens" (20%, 30%, 40%), superforecasters use single percentage points (e.g., 3% instead of 4%) in roughly one-third of their forecasts. This precision isn't "bafflegab"; it captures real distinctions that can be critical (e.g., between a 1% and a 5% risk for an Ebola outbreak, or for funding the next Google).

Research by Barbara Mellers confirms that granularity predicts accuracy: forecasters using single percentage points are more accurate than those using fives, who are more accurate than those using tens. Even slight rounding of superforecasters' predictions reduced their accuracy, a phenomenon not observed in regular forecasters. This underscores that superforecasters' precision is a key to their success.
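Mellers' rounding result can be illustrated (not reproduced: the data below are simulated, not hers) with the binary form of the Brier score. Under the idealized assumption of a perfectly calibrated forecaster, rounding forecasts to the nearest ten percentage points measurably worsens the score.

```python
import random

def brier(forecasts, outcomes):
    """Binary Brier score: mean squared error between probability forecasts
    and 0/1 outcomes. Lower is better; 0 is perfect."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Simulate a perfectly calibrated forecaster, then round the same forecasts to
# the nearest ten percentage points, as a "tens" forecaster would state them.
rng = random.Random(1)
n = 200_000
true_p = [rng.random() for _ in range(n)]            # each question's true probability
outcomes = [1 if rng.random() < p else 0 for p in true_p]
exact = true_p                                       # calibrated, granular forecasts
rounded = [round(p, 1) for p in true_p]              # same forecasts, rounded to tens

print(f"Brier (granular): {brier(exact, outcomes):.4f}")
print(f"Brier (rounded):  {brier(rounded, outcomes):.4f}")  # slightly worse
```

Because the Brier score is minimized by stating the true probability, any rounding moves a calibrated forecast away from its optimum; the penalty is small per question but systematic, which is why it shows up in aggregate accuracy.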

Charlie Munger famously quipped, "If you don't get this elementary, but mildly unnatural, mathematics of elementary probability into your repertoire, then you go through a long life like a one-legged man in an ass-kicking contest." Even sophisticated organizations like the National Intelligence Council (NIC), which informs critical decisions, use a relatively broad seven-degree verbal scale (remote, very unlikely, unlikely, even chance, probably/likely, very likely, almost certainly). While an improvement, it falls short of the precision superforecasters achieve. Encouraging greater granularity could yield significant rewards: a clearer perception of the future.

The "Why?" vs. "How?" Divide: Meaning, Fate, and Chance

In Kurt Vonnegut's Slaughterhouse-Five, the alien Tralfamadorians dismiss the Earthling question "Why?" as naive, understanding that reality simply is. This highlights a deep human yearning for meaning, particularly when unlikely or tragic events occur.

The Allure of Meaning and Fate
  • Religious Perspective: Offers solace by framing events, even tragedies, as part of a divine plan. Oprah Winfrey, in both secular and explicitly religious terms, promotes the idea that "Everything happens for a reason," and there are "no coincidences. Only divine order here."

  • Secular Perspective: Psychologists find many atheists also imbue significant life events with meaning and often believe in fate, defined as an "underlying order to life that determines how events turn out." Finding meaning is a marker of a healthy, resilient mind, such as among 9/11 survivors who showed fewer post-traumatic stress responses.

  • Counterfactual Thinking: Experiments show that imagining how things could have turned out differently (e.g., college choice, meeting a partner) makes the actual outcome feel more significant and "meant to be."

    • Love of Your Life Example: The sheer improbability of meeting a partner is often interpreted as proof it was "meant to happen," not simply "luck."

    • Big Bang Example: The fine-tuning of natural laws for life's existence is often attributed to a divine force or purpose, rather than cosmic luck or parallel universes.

The Tension with Scientific Probabilistic Thinking

This yearning for meaning (the "why?") fundamentally clashes with a scientific worldview, which focuses on "how?" (causation and probabilities). Science views events as not predetermined by God or fate; rather, outcomes are uncertain until they occur. Einstein famously insisted that "God does not play dice with the universe," but probabilistic thinking suggests that, indeed, God does. Chance and fate are incompatible, and indulging in fate undermines probabilistic thinking.

The logic of fate is often incoherent: "The probability that I would meet the love of my life was tiny. But it happened. So it was meant to be. Therefore the probability that it would happen was 100%." This conflates what did happen with what was always destined to happen.

Probabilistic thinkers prioritize "how?" over "why?" (physics over metaphysics). Robert Shiller, a Nobel laureate economist, views his own improbable existence (owing to Henry Ford's hiring policy leading his grandfathers to Detroit) not as fate, but as an illustration of the future's radical indeterminacy. He calls the belief that history unfolds logically an "illusion of hindsight." Even in tragedy, a probabilistic thinker accepts that while an outcome was incredibly unlikely, it was one of countless possible paths events could have taken.

Fate, Well-being, and Foresight

Research directly links a rejection of fate-based thinking to forecasting accuracy. Studies gauging reactions to pro-fate (e.g., "Events unfold according to God's plan") and pro-probability (e.g., "Nothing is inevitable") statements revealed:

  • Superforecasters scored lowest on a 9-point "fate score" scale, firmly rejecting "it-was-meant-to-happen" thinking.

  • There was a significant negative correlation between individual fate scores and Brier scores (a measure of accuracy). This means that the more a forecaster inclined towards fatalistic thinking, the less accurate their forecasts were. Conversely, embracing probabilistic thinking correlated positively with accuracy.

This presents a paradox: finding meaning in events is positively correlated with psychological well-being but negatively correlated with foresight. While the book focuses on achieving accuracy, the implication is that clarity in forecasting may come at the cost of the consoling narrative of fate.