The most important thing to understand about cognitive bias is that it is not a character flaw. It is not a sign of low intelligence, poor education, or weak judgment. Cognitive biases are structural features of the human brain — shortcuts that evolved to help us process large amounts of information quickly and make decisions under time pressure. In many contexts, they work well. In high-stakes professional environments, where the cost of systematic error is high and the feedback loops are slow, they are reliably expensive.
The problem with simply knowing that biases exist is that knowledge does not change behaviour at the moment of decision. You can read about confirmation bias before breakfast and still exhibit it in every meeting before lunch. What changes behaviour is a structured practice that creates feedback between decisions and outcomes — one that makes the patterns visible over time. We will return to that at the end of this article. First, the eight biases themselves.
1. Confirmation Bias
Confirmation bias is the tendency to seek out, interpret, and remember information in a way that confirms what we already believe. It is the most prevalent bias in professional decision making and operates at every stage of the decision process — from how we frame the question, to which data we commission, to how we interpret ambiguous findings. An executive who has already decided to expand into a new market will hear the risks from their CFO as obstacles to be solved rather than genuine signals to be weighted.
The professional example is everywhere: an investor who is excited about a company dismisses weak unit economics as "early-stage noise" while a sceptical colleague in the same room sees them as a fundamental thesis concern. Both are looking at the same data. The difference is the prior belief each brought to the room. The mitigation is structural: assign a team member to steelman the opposing thesis before the final discussion. Not to argue against the decision, but to ensure the strongest version of the contrary view is explicitly heard and addressed.
2. Overconfidence Bias
Overconfidence bias causes professionals to overestimate the accuracy of their own judgments. In surveys of professional forecasters, doctors, lawyers, and executives, stated confidence levels consistently exceed actual accuracy rates — sometimes by twenty to thirty percentage points. Experts are often more overconfident than generalists, because their expertise makes them fluent in the vocabulary of certainty.
In business, overconfidence shows up in timeline estimates that are systematically optimistic, market size projections that assume a larger share than is realistic, and risk assessments that underweight tail scenarios. The mitigation is confidence calibration — the practice of explicitly logging your confidence level before each decision and comparing it against your outcome record over time. After fifty logged decisions, the gap between stated and actual accuracy becomes visible in a way that self-perception alone never reveals.
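The calibration practice described above can be sketched in a few lines. This is an illustrative sketch only, not a prescribed tool: the record format, field names, and sample decisions are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class LoggedDecision:
    description: str
    stated_confidence: float  # probability you assigned at decision time, 0.0-1.0
    succeeded: bool           # outcome, recorded later when it becomes known

def calibration_gap(decisions):
    """Average stated confidence minus actual hit rate.

    A positive gap means overconfidence: you claimed more certainty
    than your outcomes justified.
    """
    if not decisions:
        return 0.0
    avg_confidence = sum(d.stated_confidence for d in decisions) / len(decisions)
    hit_rate = sum(d.succeeded for d in decisions) / len(decisions)
    return avg_confidence - hit_rate

# Hypothetical entries, reviewed after outcomes are known.
log = [
    LoggedDecision("Ship feature by Q3", 0.9, False),
    LoggedDecision("Competitor launches first", 0.8, True),
    LoggedDecision("Hire closes this month", 0.7, True),
]
print(f"Calibration gap: {calibration_gap(log):+.2f}")  # prints: Calibration gap: +0.13
```

With only three entries the number is noise; the point of the practice is that after fifty or more entries the gap becomes a stable, personal statistic.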
3. Anchoring Bias
Anchoring bias is the tendency to rely disproportionately on the first piece of information encountered when making subsequent judgments. In negotiation, the first number on the table anchors all subsequent discussion. In valuation, the price paid in a prior funding round anchors all future round assessments. In hiring, the first candidate interviewed sets the benchmark against which all subsequent candidates are measured — often unfairly in both directions.
The professional consequence is that decisions are often more influenced by arbitrary starting points than by the merits of the situation. A salary negotiation that starts at £90,000 produces a different outcome than one that starts at £70,000, even when both parties believe they are evaluating the role on its merits. The mitigation is to delay anchoring: generate your own independent assessment before exposing yourself to external reference points, and document that assessment formally before the anchor is introduced.
4. Availability Heuristic
The availability heuristic causes people to assess the likelihood of an event based on how easily a similar event comes to mind. If you recently heard about a competitor failing because of a supply chain issue, supply chain risk will feel more salient in your next strategic discussion than it might otherwise warrant. Conversely, risks with no recent vivid example feel remote even when they are statistically significant.
In investment and strategic planning, this manifests as the recency trap — overweighting the most recent market cycle, the most recent competitive event, or the most recently discussed risk. The mitigation is base rate reference: before making a probability assessment, explicitly look up the base rate of the relevant event category rather than estimating from memory. How often do supply chain failures actually end companies in this industry? What is the historical frequency of this type of competitive incursion? The data is almost always more useful than the anecdote.
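The base-rate habit amounts to a lookup rather than a gut estimate. A minimal sketch, with failure counts invented purely for illustration:

```python
# Hypothetical historical data: recorded company failures by cause in one industry.
failures_by_cause = {
    "supply_chain": 4,
    "demand_collapse": 31,
    "capital_shortage": 52,
    "competitive_displacement": 13,
}

def base_rate(cause, counts):
    """Historical frequency of one failure cause among all recorded failures."""
    return counts[cause] / sum(counts.values())

gut_estimate = 0.30  # how likely supply chain risk *feels* after a vivid recent example
historical = base_rate("supply_chain", failures_by_cause)
print(f"Gut estimate: {gut_estimate:.0%}, historical base rate: {historical:.0%}")
# prints: Gut estimate: 30%, historical base rate: 4%
```

The gap between the felt probability and the looked-up frequency is the availability heuristic made visible.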
5. Sunk Cost Fallacy
The sunk cost fallacy is the tendency to continue investing in a course of action because of prior investment, even when the forward-looking case no longer justifies continuation. Money, time, reputation, and emotional commitment already spent should be irrelevant to forward-looking decisions — they are gone regardless of what you decide next. But they are psychologically very difficult to ignore.
In business, this shows up in product investments that continue well past the point where evidence supports termination, in employment relationships that are maintained because of how much has been invested in the individual, and in strategic directions that are defended publicly because of how strongly they were advocated historically. The mitigation is the "starting from zero" question: if we did not have any prior investment in this path and were evaluating it fresh today, would we choose it? If the honest answer is no, the prior investment is driving the decision, not the merit.
6. Authority Bias
Authority bias is the tendency to defer to the views of those with higher status, seniority, or perceived expertise — even when the specific question at hand is not one where their authority is relevant. In organisations, this is one of the most consequential biases because it systematically suppresses the views of people who hold the most relevant information. A junior analyst who covers a sector more deeply than the Managing Director in the room will still feel pressure to soften their view, or withhold it entirely, if the MD signals a different opinion.
The result is that organisations make decisions that reflect the views of the most senior people in the room rather than the people with the most relevant knowledge. This is not because the senior people are wrong more often — it is because the information environment around them is distorted by the deference of those beneath them. The mitigation is anonymous pre-submission: before a group discussion on a significant decision, have each participant record their independent view in writing before the meeting begins, so that no one has had the opportunity to anchor on the most senior voice.
7. Recency Bias
Recency bias causes decision-makers to weight recent events more heavily than older ones when assessing trends, probabilities, and risks. After a strong year, the future looks brighter than the long-run base rate justifies. After a difficult one, the reverse. Recency bias is the engine behind boom-and-bust hiring cycles, over-aggressive expansion in good markets, and over-aggressive contraction in bad ones.
The professional mitigation is systematic use of historical base rates rather than trend extrapolation. When assessing a market, require your analysis to include a minimum five-year historical window, not just the most recent four quarters. When making a staffing or capital allocation decision, explicitly ask: "What would this decision look like if the last 18 months had been average rather than exceptional?" The answer is almost always different from the one you would otherwise make.
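The recency correction is simple arithmetic once the longer window is on the table. A minimal sketch, with growth figures invented for illustration:

```python
# Hypothetical annual revenue growth rates (%), most recent year last.
growth = [4.0, 3.0, -2.0, 5.0, 18.0]

recent_view = growth[-1]                     # extrapolating only the latest year
five_year_view = sum(growth) / len(growth)   # the minimum window the text recommends
print(f"Latest year: {recent_view:.1f}%, five-year average: {five_year_view:.1f}%")
# prints: Latest year: 18.0%, five-year average: 5.6%
```

A plan built on 18% growth and a plan built on 5.6% growth are different plans; recency bias is the habit of always building the first one.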
8. Groupthink
Groupthink is the tendency of cohesive groups to converge on consensus positions prematurely — suppressing dissent, over-valuing agreement, and failing to fully explore alternatives. It is most common in groups with high social cohesion, strong leadership pressure, and time constraints. These are features shared by most executive teams and investment committees.
Groupthink's insidious quality is that it feels like good decision-making from the inside. The meeting is smooth, everyone is aligned, the decision gets made efficiently. The cost only becomes visible when the outcome arrives and it emerges that half the room had reservations that were never fully voiced. The mitigation is structural: designate a rotating devil's advocate for every significant decision, require written dissent to be formally recorded in the decision log, and create explicit space for minority views before convergence is allowed.
Quick reference: bias, manifestation, and mitigation
| Bias | How it shows up | Mitigation tactic |
|---|---|---|
| Confirmation bias | Selectively seeking data that supports the preferred option | Assign a structured steelman of the opposing view |
| Overconfidence | Timelines, projections, and risk assessments that are systematically optimistic | Log confidence levels; compare against outcome accuracy over time |
| Anchoring | Disproportionate influence of the first number or frame introduced | Generate independent assessment before exposing to external anchors |
| Availability heuristic | Overweighting risks or opportunities with vivid recent examples | Reference historical base rates; do not estimate from memory alone |
| Sunk cost fallacy | Continuing to invest in failing paths because of prior commitment | Ask "starting from zero today, would we choose this?" as a forcing function |
| Authority bias | Senior voices dominating group decisions regardless of information quality | Anonymous written pre-submission before group discussion |
| Recency bias | Extrapolating short-term trends into long-run forecasts | Require minimum 5-year historical window in all trend analysis |
| Groupthink | Premature consensus; dissent suppressed by social pressure | Rotating devil's advocate; required written minority view in decision log |
How decision logging counters bias systematically
The single most effective thing about a structured decision log — more effective than any individual bias mitigation tactic — is that it creates a feedback loop between decisions and outcomes that operates on a timescale long enough to surface patterns. You cannot see your confirmation bias in a single meeting. You can see it when you review fifty decisions and notice that the ones you were most confident about at the time of decision have a materially lower accuracy rate than the ones you were uncertain about. That pattern is invisible without a log. With one, it is impossible to miss.
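The pattern described above is easy to surface once the log exists. A minimal sketch, assuming each entry records the confidence stated at decision time and the eventual outcome (the sample log is invented):

```python
def hit_rate(outcomes):
    """Fraction of decisions in a group that succeeded; None if the group is empty."""
    return sum(outcomes) / len(outcomes) if outcomes else None

def confidence_split(decisions, threshold=0.8):
    """Compare accuracy of high-confidence vs lower-confidence decisions.

    decisions: iterable of (stated_confidence, succeeded) pairs.
    If the high-confidence group scores lower than its stated confidence,
    that is the pattern the review is designed to catch.
    """
    high = [ok for conf, ok in decisions if conf >= threshold]
    low = [ok for conf, ok in decisions if conf < threshold]
    return hit_rate(high), hit_rate(low)

log = [(0.95, False), (0.9, True), (0.85, False), (0.6, True), (0.55, True)]
high_acc, low_acc = confidence_split(log)
print(f"High-confidence accuracy: {high_acc:.0%}, lower-confidence: {low_acc:.0%}")
# prints: High-confidence accuracy: 33%, lower-confidence: 100%
```

Five entries prove nothing; fifty make the comparison meaningful. The review ritual, not the arithmetic, is the hard part.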
This is why the most important thing is not to memorise the list of biases and consciously apply mitigation tactics in every meeting — though that helps. The most important thing is to build a system that generates real data about your real decisions and makes you sit with that data at regular intervals. The biases do not disappear. But over time, the patterns they create become legible, and legible patterns can be countered.
Related reading
Go deeper: the biases that cost executives the most
We've written a focused breakdown of the five biases with the largest financial impact at the executive level — with case structures and specific decision practices to counter each.
Read the executive bias guide →