A decision log is a structured record of decisions made over time — capturing what was decided, why, and with what confidence, before the outcome is known. When built correctly, it becomes the most useful professional development tool available to any knowledge worker. When built poorly — usually as an afterthought in a shared spreadsheet — it is abandoned within a month and generates no learning whatsoever.
The difference between a decision log that works and one that does not comes down to three things: what fields it captures, how it is updated when outcomes arrive, and whether there is a review cadence that uses the data. This article covers all three, with a full template schema you can implement immediately — either manually or with a dedicated tool.
Why spreadsheets usually fail at decision logging
Spreadsheets are the first tool most people reach for when starting a decision log, and they fail reliably for the same set of reasons. They offer no prompted structure — meaning the quality of what gets recorded depends entirely on the discipline of the person entering the data, which declines sharply over time. They provide no automated reminders when an outcome review is due, so reviews happen sporadically or not at all. They have no mechanism for locking the original record after entry, so it is psychologically easy to edit your original rationale after you know the outcome — which destroys the value of the record entirely. And they produce no analytical views on calibration, accuracy by category, or pattern detection without significant manual effort.
None of this means you cannot start with a spreadsheet. A structured spreadsheet decision log is infinitely better than no decision log. But understanding its limitations is important for knowing when to migrate to a purpose-built tool — and for designing the spreadsheet in a way that compensates for those limitations as much as possible.
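One limitation worth compensating for directly is the lack of record locking. A lightweight workaround is to fingerprint the decision-time fields when the entry is created, so any later edit to the original rationale is at least detectable. The sketch below is illustrative, not a prescribed implementation: it assumes entries are kept as simple dictionaries, and the field names are made up for the example.

```python
import hashlib
import json

def entry_fingerprint(entry: dict) -> str:
    """Hash the decision-time fields so later edits are detectable.

    `entry` is assumed to hold only the fields filled in at logging
    time (summary, confidence, expected outcome, and so on) -- not
    the outcome fields, which are completed at review.
    """
    canonical = json.dumps(entry, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Record the fingerprint next to the entry at logging time.
entry = {
    "summary": "Hire VP of Marketing at £180k base",
    "confidence": 68,
    "expected_outcome": "MQL->SQL conversion from 18% to 24% in two quarters",
}
fingerprint = entry_fingerprint(entry)

# At review time, recompute and compare: a mismatch means the
# original rationale was edited after the outcome was known.
assert entry_fingerprint(entry) == fingerprint
```

The same idea works in a spreadsheet by pasting the hash into a column you never touch again; it does not prevent tampering, but it makes tampering visible.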
The 8 essential fields
A decision log that captures fewer than these eight fields is not a decision log — it is a note. Each field exists for a specific reason, and removing any one of them meaningfully reduces the value of the record for review and analysis.
1. Date logged
The timestamp of when the decision was recorded — not when it was made in conversation, but when it was formally entered into the log. This is the anchor for all subsequent review scheduling and is essential for distinguishing decisions recorded before the outcome was visible from those reconstructed after the fact.
2. Decision summary
A one- to two-sentence description of what was decided. The discipline here is specificity: "decided to hire" is not useful; "decided to hire a VP of Marketing at £180k base, prioritising brand experience over performance marketing background" is. The summary should be precise enough that someone reading it 180 days later with no additional context understands exactly what commitment was made.
3. Context and constraints
The situation that made this decision necessary and the constraints that shaped it. What was the triggering event? What were the key constraints — budget, timeline, people, information quality? This field is what separates a decision record from a decision rationale: it preserves the environment in which the decision was made so that outcome reviews are comparing actuals against a realistic picture of what was known and what was possible.
4. Options considered
A brief description of the options that were genuinely considered and rejected. Not an exhaustive list, but the two or three most credible alternatives to the chosen option, with a one-sentence explanation of why each was rejected. This field is important for two reasons: it documents that the decision-maker engaged with the alternatives (which matters for audit and governance purposes), and it provides the comparison set needed to assess whether the correct option was chosen at review time.
5. Chosen option and rationale
Which option was chosen and why. The rationale should be the actual reason — the factor or factors that tipped the balance toward the chosen option — rather than a post-hoc justification. If the decision was close, say so. If it was clear, say that. Precision about the actual decision process is what makes this field useful at review time.
6. Confidence level
A numeric confidence rating from 0 to 100 percent, representing how confident the decision-maker is at the time of logging that the chosen option will produce the expected outcome. This is the most important field in the log for calibration analysis and the field most likely to be skipped or guessed carelessly. Resist both temptations. The confidence level, compared against outcome accuracy across many decisions, is the raw material from which the most useful insights are derived.
7. Expected outcome
What you expect to happen, stated in measurable terms, by when. "This hire will help the team" is not an expected outcome. "This hire will increase MQL to SQL conversion from 18% to 24% within two quarters" is. The more measurable the expected outcome, the more useful the review will be. Some decisions resist measurable framing — in those cases, state the qualitative outcome expected as precisely as possible.
8. Actual outcome and review date
Left blank at the time of logging, completed at the scheduled review date. The actual outcome field captures what actually happened, compared against the expected outcome stated at decision time. The review date should be set when the decision is logged — not left as an open question. For most professional decisions, a 90-day review is the right starting point. Some decisions warrant both a 30-day and a 180-day review.
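Setting the review date at logging time is easy to automate. A minimal sketch, assuming the 90-day default above, with an optional pair of horizons for decisions that warrant both an early and a long-range review:

```python
from datetime import date, timedelta

def review_dates(logged: date, horizons_days=(90,)) -> list[date]:
    """Compute scheduled review dates when the decision is logged.

    90 days is the default starting point; pass (30, 180) for
    decisions that warrant both an early check and a long-horizon
    review.
    """
    return [logged + timedelta(days=d) for d in horizons_days]

print(review_dates(date(2026, 4, 10)))             # → [datetime.date(2026, 7, 9)]
print(review_dates(date(2026, 4, 10), (30, 180)))  # 30-day and 180-day reviews
```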
Full template schema
| Field | Description | Example value |
|---|---|---|
| Date logged | Timestamp of log entry — locked after creation | 2026-04-10 |
| Decision summary | 1–2 sentence precise description of the commitment made | Decided to switch primary cloud vendor from AWS to GCP for ML workloads |
| Context & constraints | Triggering event and key constraints at time of decision | AWS Bedrock pricing increased 30%; budget constraint requires cost reduction within 6 months |
| Options considered | Credible alternatives rejected, with brief rationale | Stayed on AWS (too expensive); Azure (integration complexity); hybrid (too operationally complex) |
| Chosen option & rationale | What was decided and the actual tipping factor | GCP — best price/performance for our workload profile and strong existing GCP credits |
| Confidence level | 0–100% confidence that this choice produces the expected outcome | 68% |
| Expected outcome | Measurable outcome, by when | 25% reduction in ML infrastructure cost by Q3 2026, with no degradation in model latency |
| Review date | Scheduled date for outcome review | 2026-10-10 |
| Actual outcome | Completed at review — what actually happened | (Completed at review) |
| Lesson | 1–2 sentence learning from comparing expected vs actual outcome | (Completed at review) |
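If you implement the log programmatically rather than in a spreadsheet, the schema above maps naturally onto a single record type. The sketch below is one possible shape in Python; the field names are illustrative, and the only logic it enforces is the 0–100 confidence range:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class DecisionEntry:
    # Decision-time fields -- locked after creation.
    date_logged: date
    summary: str
    context: str
    options_considered: list[str]
    chosen_option: str
    rationale: str
    confidence: int          # 0-100, % confidence in the expected outcome
    expected_outcome: str
    review_date: date
    # Outcome fields -- completed at the scheduled review.
    actual_outcome: Optional[str] = None
    lesson: Optional[str] = None

    def __post_init__(self):
        if not 0 <= self.confidence <= 100:
            raise ValueError("confidence must be between 0 and 100")
```

Keeping the outcome fields optional makes the decision-time/review-time split explicit: an entry is valid the moment it is logged, and the review simply fills in the last two fields.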
Optional advanced fields
Stakeholders. Who was involved in or affected by the decision. Useful for team-level logs where understanding the social and organisational context of a decision is valuable for later analysis.
Reversibility. A binary or three-point scale (reversible / partially reversible / irreversible). This field is useful for two reasons: it sets appropriate expectations about review outcomes (an irreversible decision cannot be corrected, but can be learned from), and it provides a useful analytical dimension at review time — are your most irreversible decisions also your most carefully made?
Bias check. A brief note on which cognitive biases, if any, may have influenced the decision and what was done to counter them. This field requires honest self-awareness, but its presence creates accountability for applying bias mitigation practices.
Follow-up date. Separate from the review date — a date for an intermediate check on implementation progress, not on the final outcome. Useful for decisions with long outcome timelines where mid-course corrections may be possible.
Common mistakes
Logging too late. A decision logged after the outcome is visible is not a decision log — it is a hindsight rationalisation. The entire value of the practice depends on capturing what you believed before the outcome was known. If you log a decision more than 48 hours after it was made, the contamination from early outcome signals is already significant.
Logging outcomes without logging rationale. Many people initially log only the decision summary and the outcome, omitting the rationale, expected outcome, and confidence level. This produces a record that tells you what happened but not what you expected to happen — which makes pattern analysis impossible. The rationale fields are the most tedious to complete. They are also the most valuable.
Not reviewing logs. A decision log without a review cadence is an archive, not a learning system. Set quarterly calendar blocks for reviewing all decisions whose review dates have passed. Protect those blocks. The insights generated in a 60-minute quarterly review of your logged decisions are worth more than most professional development investments of equivalent time.
How to establish a review cadence
The simplest cadence that works is a quarterly 60-minute review. At the start of each quarter, pull all decisions logged in the previous quarter plus any decisions from prior quarters whose review dates have arrived. Read each original entry — rationale, options, confidence level, expected outcome — before looking at the outcome. Then record the actual outcome and lesson. Then step back and look at the aggregate: where was your confidence well-calibrated? Where was it not? Which categories of decision are producing better outcomes than expected? Which are consistently disappointing?
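The aggregate calibration check is mechanical once the data exists. One common approach, sketched below under the assumption that each reviewed decision reduces to a (confidence, outcome-met) pair, is to bucket decisions by stated confidence and compare each bucket's stated confidence with its observed hit rate:

```python
from collections import defaultdict

def calibration_report(decisions):
    """Bucket reviewed decisions by stated confidence and report
    (count, observed hit rate %) per 10-point confidence bucket.

    `decisions` is assumed to be (confidence_pct, outcome_met)
    pairs, where outcome_met is True if the expected outcome was
    achieved.
    """
    buckets = defaultdict(list)
    for confidence, outcome_met in decisions:
        bucket = (confidence // 10) * 10   # e.g. 68% falls in the 60-69% bucket
        buckets[bucket].append(outcome_met)
    report = {}
    for bucket, outcomes in sorted(buckets.items()):
        hit_rate = 100 * sum(outcomes) / len(outcomes)
        report[bucket] = (len(outcomes), round(hit_rate))
    return report

# Well-calibrated means decisions logged at ~70% confidence
# succeed roughly 70% of the time.
sample = [(68, True), (65, False), (72, True), (90, True), (92, False)]
print(calibration_report(sample))  # → {60: (2, 50), 70: (1, 100), 90: (2, 50)}
```

In this hypothetical sample the 90%-confidence bucket succeeds only half the time: exactly the kind of overconfidence pattern a quarterly review is meant to surface, though with realistic sample sizes you need many quarters of data before a bucket is trustworthy.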
After four quarterly reviews — twelve months of data — the patterns become clear enough to act on. They are almost always surprising in at least one dimension. That surprise is the value of the practice.
Related reading
Reflect OS is a decision log that works
Structured capture, automated review reminders, calibration tracking, and team sharing — built for professionals who want a decision log that actually gets used.
See how Reflect OS works →