Most people who try to keep a decision journal give up within a month. The journal becomes a record of decisions they already know the outcomes of, or a list of intentions they never follow through on. The reason is almost always the same: they are capturing the wrong things.
A decision journal only works if it records four specific things at the time of the decision — not after the outcome is known.
The four things to capture at decision time
1. The decision
Stated precisely. Not "I decided about the new hire" but "I decided to offer the Head of Sales role to candidate A over candidate B." Specificity matters because vague decisions produce vague learning.
2. The expected outcome
What do you expect to happen, and by when? "I expect revenue to increase by 15% within 6 months" is a testable hypothesis. "I expect things to improve" is not. The expected outcome is what you will compare reality against at review time.
3. Your confidence level
How certain are you, expressed as a percentage or score? This single field is the most valuable data point in the entire journal. It is what makes confidence calibration analysis possible — and calibration is where most executives discover their biggest blind spots.
4. The review date
When will you know whether the decision worked? Set a specific date. The review is where the learning happens. Without a scheduled date, it never happens.
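The four fields above amount to a simple record captured at decision time. A minimal sketch in Python, with illustrative field names (this is not any particular tool's schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionEntry:
    """One journal entry, captured before the outcome is known."""
    decision: str          # the precise choice made, stated specifically
    expected_outcome: str  # a testable prediction with a deadline
    confidence: float      # 0.0 to 1.0: how certain you are the prediction holds
    review_date: date      # the scheduled date to check the prediction

# Example entry matching the hiring decision above
entry = DecisionEntry(
    decision="Offer the Head of Sales role to candidate A over candidate B",
    expected_outcome="Revenue increases by 15% within 6 months",
    confidence=0.70,
    review_date=date(2026, 1, 15),
)
```

The point of the structure is that every field is filled in before the outcome exists, so there is nothing for hindsight to rewrite.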
What happens at review time
At your review date, you return to the decision and record what actually happened. Compare it against your expected outcome. Update your confidence score based on the accuracy of your prediction. Record the key lesson — what you would do differently with hindsight.
This is the moment most people skip, and it is the entire point. Without structured reviews, a decision journal is just a diary. With reviews, it becomes a learning system.
Common mistakes that kill decision journals
Journalling after outcomes are known
This produces a rationalised record, not an honest one. Your brain rewrites history automatically. Capture the decision before the outcome.
Only capturing big decisions
The most revealing patterns often come from mid-stakes decisions that happen frequently — hiring junior staff, choosing vendors, setting priorities. Log these too.
No review cadence
A decision journal without scheduled reviews is just a list of things you did. Build review dates into your calendar at the time of logging.
Vague expected outcomes
"I hope this goes well" is not an expected outcome. Force yourself to state a measurable result.
The calibration insight
After 50 decisions with confidence scores, you will almost certainly discover that you are systematically overconfident in one or more categories. Most executives are. The research on overconfidence in professional decision-making is unambiguous — the majority of professionals rate themselves in the top quartile of performers, and express confidence that significantly exceeds their accuracy rates.
This is not a flaw. It is information. Once you know where your confidence is miscalibrated, you can correct for it in real time — and your decision quality improves measurably.
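The calibration check itself is simple arithmetic. A sketch, assuming each reviewed decision carries a category, a stated confidence, and a recorded hit or miss: compare average confidence to the actual hit rate per category, and the gap shows where you are overconfident.

```python
from collections import defaultdict

# (category, stated confidence 0-1, did the prediction come true)
# Sample data for illustration only
reviewed = [
    ("hiring", 0.90, True), ("hiring", 0.85, False), ("hiring", 0.80, False),
    ("vendors", 0.60, True), ("vendors", 0.70, True), ("vendors", 0.65, True),
]

by_category = defaultdict(list)
for category, confidence, hit in reviewed:
    by_category[category].append((confidence, hit))

for category, rows in by_category.items():
    avg_confidence = sum(c for c, _ in rows) / len(rows)
    hit_rate = sum(h for _, h in rows) / len(rows)
    gap = avg_confidence - hit_rate  # positive = overconfident in this category
    print(f"{category}: confidence {avg_confidence:.0%}, "
          f"hit rate {hit_rate:.0%}, gap {gap:+.0%}")
```

In this sample, hiring shows 85% average confidence against a 33% hit rate (overconfident), while vendor decisions show the reverse. That per-category gap is the correction target.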
Software vs a physical journal
A physical notebook works for the basics but breaks down at the analysis layer. Pattern analysis across 100 decisions requires the data to be searchable and structured. Decision intelligence software like Reflect OS handles structured capture, automatic review scheduling, and the calibration analysis a notebook cannot support.
Start tracking your decisions with Reflect OS
Log decisions in under 60 seconds. Review at 30, 90, and 180 days. See exactly where your judgement is strong — and where it costs you.
Get started — 90-day guarantee