In 2011, the US Intelligence Community commissioned a forecasting tournament to find out who could best predict geopolitical events. What they discovered was unexpected: a group of ordinary people with no classified access consistently outperformed trained intelligence analysts by 30% or more.

Philip Tetlock, who ran the Good Judgment Project behind these findings, called the top performers superforecasters. And the most important finding wasn't that they were smarter — it was that they had different habits. Habits that are entirely teachable.

For business leaders, these habits have direct applications. Every major decision rests on predictions: about market conditions, competitor response, customer behaviour, team execution. The quality of those predictions shapes everything downstream.

Why most business predictions fail

Business predictions fail for three structural reasons, none of which have anything to do with intelligence.

Vague language. "We expect strong growth in Q3" is not a prediction — it is a sentiment. If there is no explicit probability, there is no way to measure accuracy and no feedback loop. Studies of executive language find that words like "likely", "good chance", and "probably" are interpreted differently by different people in the same room, with interpretations ranging from 55% to 90% probability depending on who you ask.

No review process. Most organisations make forecasts in budgets, strategy documents, and investment memos and never formally revisit them. Without a structured outcome review, there is no learning. The same errors repeat across cycles because there is no mechanism that surfaces them.

The inside view. When making predictions, humans naturally focus on the specific features of the current situation — the quality of the team, the strength of the product, the size of the market. What they underweight is the base rate: what typically happens to companies, products, or initiatives in this reference class. Daniel Kahneman called this the planning fallacy. It causes systematic overconfidence in project success, timeline adherence, and revenue ramp.

"The best forecasters don't know more than others. They think differently — probabilistically, with explicit numbers, and with genuine curiosity about being wrong."

The 5 habits of accurate forecasters

Tetlock's research found that superforecasters share five consistent habits. Each is directly applicable to business decision-making.

Habit 1

They find the base rate first

Before forming a view, superforecasters ask: what reference class does this situation belong to, and what is the historical outcome rate for that class? If predicting whether a new enterprise SaaS product will reach £1M ARR in year two, the relevant question is: what proportion of similar products in similar markets have done so? This outside view becomes the anchor. Only then do they adjust for the specific features of this situation — and they adjust conservatively, because the inside view is almost always more optimistic than the data supports.

Habit 2

They break questions into components

Complex predictions are decomposed into sub-questions that are individually more tractable. "Will this acquisition create value?" becomes: Will integration complete in under 18 months? (Estimate: 55%.) Will key leadership stay through integration? (Estimate: 70%.) Will the assumed cross-sell rate materialise? (Estimate: 40%.) Treating the three as roughly independent, the probability of all three holding is about 15% (0.55 × 0.70 × 0.40), far lower than most executives would guess when asked the top-level question directly. Decomposition exposes hidden assumptions and prevents vague optimism from hiding compound risk.
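As a quick sketch of the arithmetic, here is the same decomposition in code. The three estimates are the illustrative figures from the example above, and multiplying them assumes the sub-questions are roughly independent.

```python
# Illustrative decomposition of "Will this acquisition create value?"
# The estimates are the hypothetical figures from the example above;
# multiplying them assumes the sub-questions are roughly independent.
sub_estimates = {
    "integration completes in under 18 months": 0.55,
    "key leadership stays through integration": 0.70,
    "assumed cross-sell rate materialises": 0.40,
}

p_all = 1.0
for question, p in sub_estimates.items():
    p_all *= p

print(f"Probability all three hold: {p_all:.0%}")  # roughly 15%
```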

Habit 3

They use explicit probabilities

Where most people say "I think this will probably work", superforecasters say "I'd put this at 65%". The number matters for two reasons. First, it forces genuine reflection — 51% and 80% are both "probably", but they imply very different levels of conviction and should lead to different actions. Second, it creates the only data format that allows calibration to be measured. You cannot know if you are overconfident until you have enough numeric predictions to compare against outcomes.

Habit 4

They update often, in small increments

Superforecasters revise their views regularly as new information arrives — not in large emotional swings, but in small, evidence-driven adjustments. Most executives update too rarely and too dramatically. They maintain a view until it becomes untenable, then shift sharply. This produces worse calibration than frequent, modest updates that track the evidence as it accumulates.

Habit 5

They score their predictions and learn from them

This is the differentiating habit. Superforecasters maintain records of their predictions and outcome scores. Over time, this reveals systematic biases: categories where they over-predict, time horizons where they are unreliable, types of questions where they are well calibrated. Without this data, improvement is random. With it, you can target the specific failure modes in your own forecasting.
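One way to turn a prediction log into a score is a simple binary form of the Brier score: the squared gap between the stated probability and the outcome (1 if it happened, 0 if not). The sketch below assumes that scoring rule and uses made-up records; the field names are illustrative, not from any particular tool.

```python
# Minimal sketch of scoring logged predictions with a binary Brier score:
# (stated probability - outcome)^2, where outcome is 1 if the event
# happened and 0 if it did not. Lower is better; always saying 50%
# scores 0.25 on every prediction.
predictions = [
    {"question": "Product launches by Q3", "p": 0.80, "happened": True},
    {"question": "Competitor cuts price",  "p": 0.30, "happened": True},
    {"question": "Key hire accepts offer", "p": 0.65, "happened": False},
]

brier_scores = [
    (rec["p"] - (1.0 if rec["happened"] else 0.0)) ** 2 for rec in predictions
]
mean_brier = sum(brier_scores) / len(brier_scores)
print(f"Mean Brier score: {mean_brier:.3f}")
```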

Reference class forecasting in practice

Reference class forecasting is the single most powerful technique business leaders can adopt. The process has three steps.

Step 1: Identify the reference class. What category of prior situations most closely resembles the one you're forecasting? If you're predicting a product launch timeline, the class is "enterprise software product launches in companies of this size and maturity". If you're predicting an acquisition outcome, the class is "acquisitions of companies in this sector at this stage by acquirers of this type".

Step 2: Find the base rate. What proportion of cases in the reference class achieved the outcome you're predicting? This is often publicly available: industry research, analyst reports, and post-mortems from comparable situations. If you're predicting that a digital transformation project will complete on time and on budget, the answer from research is roughly 20–30% — far lower than most internal forecasts assume.

Step 3: Adjust for the specifics of your case — cautiously. Your team is strong, your market is favourable, your leadership is committed. These specifics do matter. But the research shows that inside-view adjustments should be small. Kahneman's rule of thumb: if the base rate is 30%, allow yourself to adjust to 40–50% for genuinely strong specific factors, not to 80%.
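A minimal sketch of the three steps in code, with a cap on the adjustment standing in for the rule of thumb above. The base rate and adjustment figures are illustrative, not drawn from any specific dataset.

```python
# Sketch of reference class forecasting: anchor on the base rate,
# then adjust cautiously for case-specific factors.
# Numbers are illustrative; the cap reflects the rule of thumb that
# inside-view adjustments should stay small.
base_rate = 0.30                # historical outcome rate for the reference class
inside_view_adjustment = 0.35   # how much the specifics "feel" worth
max_adjustment = 0.15           # cautious limit on inside-view optimism

forecast = base_rate + min(inside_view_adjustment, max_adjustment)
print(f"Anchored forecast: {forecast:.0%}")  # 45%, not 65%
```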

How to build a business prediction practice

For most organisations, the gap between aspiration and practice in forecasting is large. Here is how to close it systematically.

Make predictions explicit and time-bound

Every material business assumption should be expressed as an explicit prediction with a numeric probability and a specific review date. "We believe this market will grow 15–20% in the next 12 months, confidence 65%, review November 2026." This single practice eliminates the vagueness that prevents feedback loops from forming.

Log predictions at decision time

Predictions must be written down before the outcome is known. Memory is not reliable — research on hindsight bias shows that people routinely misremember their pre-outcome predictions to align with what actually happened. A decision log that captures confidence scores at the time of the decision is the only reliable record.
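As one possible shape for such a log entry, here is a minimal sketch using a plain dataclass. The field names and example values are illustrative rather than a reference to any particular tool; the point is that the probability and review date are captured before the outcome is known.

```python
# Minimal sketch of a prediction logged at decision time.
# Field names are illustrative; the outcome field stays empty until review.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PredictionRecord:
    question: str                    # explicit, time-bound claim
    category: str                    # e.g. "market", "execution", "competition"
    probability: float               # stated confidence at decision time
    review_date: date                # when the outcome will be checked
    outcome: Optional[bool] = None   # filled in at review, never before

record = PredictionRecord(
    question="Market grows 15-20% in the next 12 months",
    category="market",
    probability=0.65,
    review_date=date(2026, 11, 30),
)
```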

Review at structured intervals

Schedule outcome reviews at 30, 90, and 180 days depending on the prediction horizon. When an outcome is known, record it and compare to the original prediction. This is where confidence calibration is built — over enough reviews, you will have a genuine dataset on your forecasting accuracy by category.

Analyse calibration by category

After 30 or more predictions with outcomes, segment by category: market forecasts, team execution forecasts, product adoption forecasts, competitive response forecasts. You will almost certainly find that your calibration differs significantly across these categories. The goal is to identify where you are systematically overconfident — and to build that knowledge into future predictions in those categories.
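A hedged sketch of what that segmentation might look like once outcomes are recorded. The categories and numbers below are made up for illustration; the comparison of average stated confidence against actual hit rate per category is the part that carries over.

```python
# Sketch of calibration analysis by category: compare average stated
# confidence with the actual hit rate for each category of prediction.
# A large gap (confidence well above hit rate) flags systematic
# overconfidence in that category. Data is illustrative.
from collections import defaultdict

logged = [  # (category, stated confidence, did it happen?)
    ("market", 0.70, True), ("market", 0.80, False), ("market", 0.75, False),
    ("execution", 0.60, True), ("execution", 0.65, True),
    ("competition", 0.55, False), ("competition", 0.50, True),
]

by_category = defaultdict(list)
for category, confidence, happened in logged:
    by_category[category].append((confidence, happened))

for category, rows in by_category.items():
    avg_conf = sum(c for c, _ in rows) / len(rows)
    hit_rate = sum(1 for _, h in rows if h) / len(rows)
    print(f"{category:12s} confidence {avg_conf:.0%}  hit rate {hit_rate:.0%}")
```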

The link between predictions and decision quality

Better predictions do not automatically produce better decisions — the quality of the decision process also matters. But predictions are the inputs that decisions are built on. A well-structured decision-making framework applied to a set of badly calibrated predictions will produce worse outcomes than one applied to well-calibrated ones.

The executives and investment professionals who consistently make the best decisions share one trait: they know which of their predictions to trust. They have calibration data that tells them they are reliable in their domain-specific judgement calls and unreliable in their macroeconomic forecasts — or vice versa. They weight their own predictions accordingly.

This kind of self-knowledge is only possible if you have been tracking predictions and outcomes systematically. Gut feeling about your own reliability is almost always wrong — and almost always overconfident. The data tells a different story.

A platform like Reflect OS is built specifically to capture this data: confidence scores at decision time, structured outcome reviews at defined intervals, and calibration analysis that shows the gap between stated confidence and actual accuracy across decision categories. For executives and investment teams that make high-stakes predictions regularly, this kind of systematic measurement is the difference between anecdote and intelligence.

Frequently asked questions

What is superforecasting?

Superforecasting is a discipline from Philip Tetlock's Good Judgment Project research, where a subset of forecasters consistently outperformed professional intelligence analysts by 30%+. Superforecasters share habits: probabilistic thinking, base rate anchoring, frequent updating, and rigorous scoring of their predictions over time.

How can business leaders apply superforecasting techniques?

The core techniques are: expressing predictions as explicit probabilities (not vague language), using reference class forecasting to anchor on base rates, decomposing complex forecasts into component sub-questions, actively seeking disconfirming evidence, and tracking prediction accuracy over time to measure and improve calibration.

What is reference class forecasting?

Reference class forecasting is a technique where you begin with the historical base rate for a category of similar situations before adjusting for the specifics of your case. It counteracts the planning fallacy and inside-view optimism, which consistently cause executives to underestimate timelines, costs, and probability of failure.

Why do most business forecasts fail?

Most business forecasts fail because they use vague language that cannot be measured, are never formally reviewed after outcomes are known (no feedback loop), and rely on the inside view while ignoring base rates from similar past situations. Superforecasting research shows the inside view is systematically overconfident in most business contexts.

How do I measure my forecasting accuracy?

Record predictions in writing before outcomes are known, assign explicit probability estimates, and set a review date. When the review date arrives, record the actual outcome and compare against your confidence score. A well-calibrated forecaster will be right roughly 75% of the time on 75%-confidence predictions. Over 30–50 predictions, patterns become visible.

What is the difference between a prediction and a decision?

A prediction is a probabilistic belief about a future state of the world. A decision is a commitment to a course of action, which depends on one or more predictions plus values and constraints. Better predictions improve decision quality, but a decision can be well-made even if an underlying prediction turns out wrong — what matters is the quality of the reasoning at the time.

Track your predictions and measure your calibration

Reflect OS captures confidence scores at decision time and shows you exactly where your predictions are accurate — and where they're not. Log a decision in under 60 seconds.

Start tracking — 90-day guarantee