Three stages. One continuous cycle. Most people only do the first — and wonder why they keep making the same mistakes.
The most valuable thing you can do is record decisions while memory is fresh. Reflect OS prompts you for the fields that matter: what you decided, why, what you considered and rejected, what risks you identified, your confidence level, and what else was happening at the time.
This isn't a form — it's a structured conversation. The situational context field alone is worth it. "What else was happening when you made this decision?" is the question that unlocks honest self-assessment later.
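The captured fields can be pictured as a small record type. This is a minimal sketch, assuming Python and illustrative field names; Reflect OS's actual schema isn't described in the text, so every name here is hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DecisionRecord:
    # Field names are illustrative, mirroring the prompts described above.
    decision: str               # what you decided
    rationale: str              # why you decided it
    alternatives: list[str]     # options considered and rejected
    risks: list[str]            # risks identified at the time
    confidence: float           # stated confidence, 0.0 to 1.0
    situational_context: str    # what else was happening at the time
    recorded_at: datetime = field(default_factory=datetime.now)
```

The point of capturing `situational_context` as its own field, rather than burying it in the rationale, is that it becomes queryable later: you can ask whether decisions made under pressure systematically turn out worse.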
Most decisions aren't reviewed because nobody remembers to. Reflect OS surfaces your decisions when outcomes are due — at 30, 90, or 180 days, at 12 months, or at custom horizons aligned to how you actually measure things.
At each checkpoint, you compare what you expected with what happened. Not to assign blame — to understand the gap between your model of the world and how it actually works. That gap is where all the learning is.
Decisions stay in an "unrealised" state while outcomes are partial or still unfolding on a longer horizon. Reflect OS handles the ambiguity of real-world timelines without forcing premature closure.
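The checkpoint mechanics described above can be sketched in a few lines. This is an assumption about how such scheduling might work, not Reflect OS's actual implementation; the horizon values come from the standard intervals mentioned earlier:

```python
from datetime import date, timedelta

# Standard review horizons: 30, 90, 180 days and 12 months.
REVIEW_HORIZONS = [
    timedelta(days=30),
    timedelta(days=90),
    timedelta(days=180),
    timedelta(days=365),
]

def due_checkpoints(decided_on: date, today: date,
                    horizons: list[timedelta] = REVIEW_HORIZONS) -> list[timedelta]:
    """Return the horizons whose checkpoint date has already arrived."""
    return [h for h in horizons if decided_on + h <= today]
```

A decision made on 1 January and checked on 15 April would have its 30- and 90-day checkpoints due but not the 180-day one — which is exactly the "unrealised" middle ground the review stage has to tolerate.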
Individual decisions are useful. A library of decisions is where the intelligence lives. Over time, Reflect OS identifies patterns in your confidence calibration, recurring risk blind spots, decision quality by category or context, and outcome trends.
The first meaningful pattern the app surfaces about you should feel uncomfortable — because it's accurate. That's the signal that it's working.
Among the patterns it tracks:

- How your stated confidence correlates with actual outcomes
- Overconfidence, recency bias, and sector-specific blind spots
- Rolling outcome quality trend across categories and timeframes
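The first of these — confidence calibration — has a standard, simple form: bucket decisions by stated confidence and compare each bucket's average confidence with its observed success rate. This is a generic calibration sketch in Python, not Reflect OS's actual analysis:

```python
from collections import defaultdict

def calibration_by_bucket(records):
    """records: iterable of (stated_confidence in [0, 1], succeeded: bool).

    Returns {bucket: (mean stated confidence, observed success rate)},
    with buckets at 0.0, 0.1, ..., 0.9. A well-calibrated decision-maker
    has the two numbers roughly equal in every bucket.
    """
    buckets = defaultdict(list)
    for conf, succeeded in records:
        bucket = min(int(conf * 10), 9) / 10  # clamp 1.0 into the top bucket
        buckets[bucket].append((conf, succeeded))
    return {
        b: (
            sum(c for c, _ in rows) / len(rows),      # mean stated confidence
            sum(1 for _, s in rows if s) / len(rows),  # observed success rate
        )
        for b, rows in sorted(buckets.items())
    }
```

If your 90%-confidence bucket succeeds only half the time, that gap is the uncomfortable-but-accurate pattern the text describes.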