A universal operating system for decisions, execution, and continuous improvement — grounded in the laws that govern how reality actually works.
Every outcome — in business, health, relationships, or learning — is a product of the system you're running, the environment it's running in, and how much entropy (disorder, decay, friction) you've failed to counter. This framework makes that equation precise and actionable.
The framework, the equation, and the underlying logic that makes it domain-agnostic.
CDA is a decision-making and execution system built on the observation that outcomes follow patterns — patterns that are consistent across domains. Business, health, learning, relationships — all obey the same underlying laws. CDA makes those laws explicit and gives you a repeatable process to work with them, not against them.
Most decision frameworks fail because they're contextual — they work in one domain but break in another. CDA is built from first principles: universal laws that hold regardless of domain, a loop that mirrors how adaptive systems actually improve, and an evolution engine that converts execution into compounding advantage.
| Layer | What it is | Role |
|---|---|---|
| Universal Laws | Entropy, constraints, compounding, feedback, phase transitions, evolution | The physics of the system — what always applies |
| Consilient Invariants | Fundamental unit, fitness landscape, equilibrium, scale hierarchy | The diagnostic lens — how to see any system clearly |
| 5D Loop | Define → Deconstruct → Design → Deploy → Deduce | The operating system — how to move through any problem |
Apply the 5D loop to any decision, goal, or problem. Start with DEFINE — clarity on what you're trying to achieve and what environment you're in. Work through each step in sequence. Close with DEDUCE — learn, update, and re-enter the loop. Every cycle makes you more adapted to the environment. Compounding begins immediately.
Most failures begin here. The wrong goal, a vague goal, or a correct goal in the wrong environment. Define is the foundation everything else builds on.
A goal that isn't specific and measurable isn't a goal — it's a wish. Wishes don't produce systems. The test: can you look at a result and know with certainty whether the goal was achieved?
A valid goal has three elements: a specific outcome, a measurable indicator, and a defined timeframe. "Grow the business" fails all three. "$20K MRR by April 2027" passes all three.
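A minimal sketch of that test in code (the field names and checks are illustrative, not part of CDA):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Goal:
    outcome: str    # specific outcome, e.g. monthly recurring revenue
    target: float   # measurable indicator, e.g. 20_000
    deadline: date  # defined timeframe

    def is_valid(self) -> bool:
        # A wish is missing at least one of the three elements.
        return bool(self.outcome) and self.target > 0 and self.deadline > date.today()

# "Grow the business" cannot even be constructed here: no target, no deadline.
goal = Goal(outcome="MRR (USD)", target=20_000, deadline=date(2027, 4, 30))
assert goal.is_valid()
```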
Every goal exists within an environment. That environment defines what "fit" means — what succeeds, what fails, what the selection pressure is. Ignore the environment and even a correctly designed system will fail.
The fitness landscape is the terrain. Some terrain is flat (easy, competitive, commoditised). Some is rugged with many peaks (niche, differentiated). Your strategy must match your terrain.
| Ask | Why it matters |
|---|---|
| What does this environment currently reward? | Defines your fitness function — what you're optimising for |
| What does it kill? | Defines your hard constraints — what you cannot ignore |
| Is the environment stable or changing? | Determines explore/exploit ratio — more change = more exploration needed |
| What does a competitor with 10× resources do differently? | Reveals whether you're competing on the right variables |
The most expensive mistake in decision-making is optimising correctly for the wrong fitness function. Executing a strategy brilliantly in the wrong environment produces excellent failure. Define the environment before you define the goal.
Before proceeding, run the goal through the Universal Laws. Does the goal violate any law? A goal that requires eliminating a bottleneck that will immediately be replaced by another is not a goal — it's a treadmill. A goal that requires compounding results without consistent input is fantasy.
Before designing an intervention, you must understand the current system. Every system has a structure, a constraint, and a set of feedback loops. Find them.
Start at the goal. Work backward to the present. At each step, ask: "What must be true just before this?" Build the dependency chain. This reveals what is actually required versus what feels required.
Goal → Last step before goal → Step before that → ... → Current state. Each node in the chain has: a timeframe, a measurable KPI, and a current status (done / in progress / not started). The first undone node in the chain is your current bottleneck.
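As a sketch, the chain is just an ordered list with a "first undone node" rule; the node names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    kpi: str        # measurable KPI
    timeframe: str
    status: str     # "done" | "in progress" | "not started"

def current_bottleneck(chain: list[Node]) -> Node | None:
    """Chain ordered from current state toward the goal; the first
    node that isn't done is the current bottleneck."""
    return next((n for n in chain if n.status != "done"), None)

chain = [
    Node("Lead list built", "500 qualified leads", "Week 1", "done"),
    Node("Calls running", "30 calls/day", "Weeks 2-4", "in progress"),
    Node("Demos converting", "demo -> close >= 25%", "Weeks 4-8", "not started"),
    Node("$20K MRR", "MRR >= 20,000", "April 2027", "not started"),
]
print(current_bottleneck(chain).name)  # -> Calls running
```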
Every system has exactly one constraint that limits its total throughput. Improving any non-constraint produces no improvement in overall output; it simply piles up work in front of the bottleneck or leaves idle capacity downstream of it.
The constraint governs the rate of all output. A factory with 10 machines where one produces at half the speed of the others is limited by that one machine — regardless of how fast the others run. Identify it. Elevate it. Only then address anything else.
| Step | Action |
|---|---|
| 1. Identify | What single thing, if removed, would produce the most progress toward the goal? |
| 2. Exploit | Extract maximum output from the constraint without additional investment |
| 3. Subordinate | Align all other elements to support the constraint — nothing else matters more |
| 4. Elevate | If still limiting: invest in removing or expanding the constraint |
| 5. Repeat | Once the constraint is resolved, a new one emerges. Return to Step 1. |
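A toy model of why this ordering matters (stage names and rates are invented): serial throughput is the minimum over stages, so improving anything other than the constraint changes nothing.

```python
stages = {"cutting": 100, "welding": 50, "painting": 100}  # units/day

def throughput(stages: dict[str, int]) -> int:
    # A serial system moves only as fast as its slowest stage.
    return min(stages.values())

print(throughput(stages))   # 50: welding is the constraint

stages["painting"] = 200    # improve a non-constraint
print(throughput(stages))   # still 50: total output unchanged

stages["welding"] = 120     # elevate the constraint
print(throughput(stages))   # 100: cutting is now the constraint (Step 5)
```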
Before every action, run the phase test. It reveals whether you're in Build, Validate, or Scale — and whether your proposed action is appropriate for that phase.
"If I executed the critical path tomorrow at full force, could I deliver on what I've promised?" → NO = Build. → YES = Validate or Scale.
Building infrastructure in Validate phase is waste. Scaling in Build phase is destruction.
Every system produces signals — information that accurately reflects reality — and noise — information that merely feels significant. Confusing the two leads to decisions made on misleading data.
Given the constraint, what is the exact intervention that produces the most improvement per unit of effort? Design is where frameworks converge on a specific answer.
Apply all three before choosing an intervention. Each reveals a different dimension of the problem. They should converge on the same answer — if they don't, re-examine the constraint identification.
What single constraint unlocks the most downstream? Every intervention that isn't the constraint is noise until the constraint is resolved.
Which 20% of inputs produce 80% of outputs? Identify them. Double down. Eliminate the 80% that produces the remaining 20%.
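A quick sketch of the identification step, with invented revenue-per-client numbers: sort inputs by output and keep the smallest set covering 80% of the total.

```python
outputs = {"client_A": 42_000, "client_B": 18_000, "client_C": 5_000,
           "client_D": 3_000, "client_E": 2_000}

total = sum(outputs.values())
running, vital_few = 0, []
for name, value in sorted(outputs.items(), key=lambda kv: kv[1], reverse=True):
    vital_few.append(name)
    running += value
    if running / total >= 0.8:
        break

print(vital_few)   # ['client_A', 'client_B']: 40% of inputs carrying 85% of output
```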
Expected value: P(success) × Magnitude − Cost. Rank all options by EV and build the highest first. Any option with an unsurvivable downside is a hard no regardless of expected value.
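A minimal sketch of the ranking, with invented options and probabilities; note the survivability veto is applied before EV is even compared:

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    p_success: float   # P(success)
    magnitude: float   # payoff if it works
    cost: float
    survivable: bool   # is the worst case survivable?

    @property
    def ev(self) -> float:
        return self.p_success * self.magnitude - self.cost

options = [
    Option("cold outbound", 0.30, 50_000, 2_000, True),
    Option("paid ads", 0.15, 120_000, 10_000, True),
    Option("bet-the-company pivot", 0.60, 500_000, 50_000, False),
]

# Hard veto first, then rank by EV.
for o in sorted((o for o in options if o.survivable),
                key=lambda o: o.ev, reverse=True):
    print(f"{o.name}: EV = {o.ev:,.0f}")
```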
Every action has a first-order consequence (the obvious, intended effect) and second-order consequences (what happens after the obvious thing happens). Most decisions that look good at first order are bad at second order.
Always ask: "And then what?" at least twice. The third-order consequence is often where the most important information lives.
The best interventions have bounded downside and uncapped upside. When the worst case is survivable and the best case is transformative, the risk-reward profile is asymmetric in your favour.
| Type | Downside | Upside | Verdict |
|---|---|---|---|
| Asymmetric bet | Bounded / survivable | Uncapped | Take it |
| Symmetric bet | Equal to upside | Equal to downside | Analyse carefully |
| Existential risk | System-destroying | Any amount | Never — redesign first |
| Minimum viable action | Hours of effort | Information + possible win | Default starting point |
Behaviour follows incentives. Always. When a system produces unexpected behaviour, examine the incentive structure before examining the people. Most dysfunction in organisations, relationships, and markets is a rational response to a poorly designed incentive system.
Deployment is not the execution of a plan. It is the running of experiments. The environment selects winners. Your job is to generate variation, run selection, and replicate what survives.
Traditional execution assumes the plan is correct and optimises for speed of completion. Evolutionary execution assumes the plan is a hypothesis and optimises for speed of learning.
The difference compounds dramatically over time. The traditional model produces one data point per deployment. The evolutionary model produces dozens.
Generate minimum 3 options before committing. Monostrategy = one point of failure. Apply the barbell: most bets small and safe, one large and asymmetric.
Pre-define kill and scale criteria BEFORE running the experiment. The environment selects — your job is to read the selection signal honestly.
When a variant wins selection, replicate it fast. The window is never permanent. Speed of replication determines who captures the compound return.
The system changes based on what survived. Update your model, your process, and your fitness function. Then run the next cycle.
The smallest test that produces useful signal. Not the smallest test that feels comfortable — the smallest test that answers the key question. MVAs are fast, cheap, and designed to fail informatively rather than succeed expensively.
Before deploying, ask: "What is the minimum action that would tell me if this approach works?" Everything beyond that is premature scaling of an unproven hypothesis. Prove first. Scale second. Always.
Selection criteria defined after the experiment are bias masquerading as analysis. Define kill and scale criteria before deploying, when you have no emotional investment in the outcome.
| Criteria type | What it means | Example |
|---|---|---|
| Kill criteria | If result is below this, stop immediately | "If call → demo rate is below 3% after 50 calls, rewrite the script" |
| Scale criteria | If result hits this, replicate immediately | "If demo → close is above 25%, increase call volume to 30/day" |
| Time box | Maximum window before forced evaluation | "Run for 2 weeks regardless — then evaluate against criteria" |
| Sample size | Minimum data before any conclusion | "50 data points minimum before declaring success or failure" |
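One way to make pre-registration concrete is to freeze the criteria in code before deploying. This simplified sketch tracks a single rate, where a real funnel would track one per stage as in the examples above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)   # frozen: the criteria cannot be quietly edited later
class Criteria:
    kill_below: float     # e.g. call -> demo rate below 3%
    scale_above: float    # e.g. rate above 25%
    min_samples: int      # e.g. 50 data points
    max_days: int         # time box

def verdict(rate: float, n: int, days: int, c: Criteria) -> str:
    if n < c.min_samples and days < c.max_days:
        return "keep running"   # not enough data yet, still inside the time box
    if rate <= c.kill_below:
        return "kill"
    if rate >= c.scale_above:
        return "scale"
    return "evaluate"           # forced evaluation: no clear signal either way

criteria = Criteria(kill_below=0.03, scale_above=0.25, min_samples=50, max_days=14)
print(verdict(rate=0.28, n=60, days=10, c=criteria))   # -> scale
```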
Fragile systems break under disorder. Robust systems withstand it. Antifragile systems gain from it. The goal is not to avoid uncertainty — it is to build systems that benefit from variance.
DEDUCE closes the loop. It converts execution data into system improvements. Without DEDUCE, you run the same experiments indefinitely. With it, each cycle makes you more adapted.
Did the variant achieve the pre-defined fitness criteria? The answer is binary: yes or no. Qualified yeses ("it almost worked") and qualified nos ("it would have worked if...") are rationalisation, not analysis.
Kill or scale. No middle ground. Keeping a losing variant alive because you're attached to it consumes resources that winners need. The inability to kill losers is the most common reason systems stagnate.
Every result is new evidence. Update your model proportionally to the strength of the evidence — not to the direction you wanted it to go. Strong disconfirming evidence should update your model more than weak confirming evidence.
| Scenario | Update required |
|---|---|
| Result matches prediction | Small positive update to confidence in model |
| Result doesn't match prediction | Larger update — examine what assumption failed |
| Repeated mismatches in same direction | Major model revision — fitness function may be wrong |
| Unexpected positive result | High-value signal — investigate the mechanism before scaling |
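A minimal Bayesian sketch of proportional updating for a success/failure experiment: hold confidence as a Beta distribution and let the data, not the desired direction, move the estimate.

```python
# Confidence in "this variant works" as a Beta(alpha, beta) distribution.
alpha, beta = 1.0, 1.0   # uninformative prior

def update(a: float, b: float, successes: int, failures: int) -> tuple[float, float]:
    # Each data point shifts the belief by the same amount whichever way it
    # points: the update is proportional to the evidence, not to the outcome
    # you wanted.
    return a + successes, b + failures

alpha, beta = update(alpha, beta, successes=12, failures=38)    # 50 trials
print(f"estimated success rate: {alpha / (alpha + beta):.2f}")  # ~0.25
```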
The DEDUCE step is where cognitive bias does the most damage. You've invested effort. You want the experiment to have worked. Your brain will find reasons to declare success when the data says failure.
DEDUCE produces three outputs that feed back into the system:
Top constraints causing −EV, ranked by +EV of fixing them. This is the input to the next DESIGN phase.
Has the environment changed? If so, the definition of winning may have changed. Update DEFINE before re-entering the loop.
What specifically changed in the system? Document it. Without documentation, DEDUCE produces learning that doesn't compound.
Evolution is not a metaphor. It is a universal algorithm that operates wherever variation, selection, and replication exist. Embed it into every execution system.
The same process that produced every living organism also governs which businesses survive, which habits persist, which ideas spread, and which strategies outcompete. The substrate differs. The algorithm is identical.
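A toy run of the algorithm (the strategy, its single parameter, and the fitness function are all invented; in reality the environment computes fitness and you only observe it):

```python
import random

def vary(strategy: dict) -> dict:
    # Variation: perturb one parameter of the current winner.
    v = dict(strategy)
    v["price"] = round(v["price"] * random.uniform(0.8, 1.2), 2)
    return v

def fitness(strategy: dict) -> float:
    # Selection signal. This toy version pretends the best price is 49.
    return -abs(strategy["price"] - 49.0)

best = {"price": 100.0}
for _ in range(20):
    candidates = [vary(best) for _ in range(3)] + [best]   # variation
    best = max(candidates, key=fitness)                    # selection
    # replication: the survivor seeds the next generation
print(best)   # converges toward {"price": ~49}
```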
Without variation, selection has nothing to act on
Generate minimum 3 variants of any major decision. Monostrategies are monocultures — optimal in stable environments, catastrophic when conditions change. Apply the barbell: most options safe and incremental, one large and asymmetric.
Failure mode: running the same experiment for years, calling it consistency
The environment — not your intent — determines what survives
Before acting, explicitly state what the environment currently selects for. Not what you want it to select for. Not what it used to select for. What it actually rewards right now. Then optimise for that. When the environment changes, the fitness function changes.
Failure mode: optimising for a fitness function that no longer matches reality
Winners that replicate slowly lose to inferior winners that replicate fast
When a variant passes selection, replicate it before the environment changes. The compounding advantage comes from replication speed, not from variant quality alone. Pre-define the scale trigger and remove the bottleneck to replication before you need it.
Failure mode: finding what works and then studying it, refining it, and debating it while the window closes
Local maxima are traps. Getting to a higher peak requires descending first.
Every strategy you optimise creates a local maximum — a point where small changes look like regression. The path to a higher peak goes through a valley. Explore when you have runway. Exploit when you don't. The time to test adjacent strategies is while the current one is still working — not when you're desperate.
Failure mode: continuous improvement of a strategy that has hit its ceiling
Survival is the prerequisite for all optimisation
A −100% outcome is unrecoverable. Existential threats are categorically different from optimisable risks. Separate decisions into two categories: existential and optimisable. Address existential threats before optimising anything else. Never bet the system regardless of expected value.
Failure mode: accumulating existential risk while optimising short-term EV
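The arithmetic behind the category difference: returns compound multiplicatively, so a single −100% zeroes the product no matter what surrounds it.

```python
returns = [0.50, 0.30, 0.40, -1.00, 0.60]   # -1.00 = ruin

capital = 1.0
for r in returns:
    capital *= 1 + r        # returns multiply; they do not add
print(capital)              # 0.0: no later win multiplies zero back to life
```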
| Condition | Direction | Reason |
|---|---|---|
| Environment is stable | Exploit | Optimisation compounds; variation is waste |
| Returns are plateauing | Explore | You're at or near a local maximum |
| Environment is changing | Explore aggressively | Current fitness function is becoming obsolete |
| Long runway | Explore more | There is time for new information to pay off before it expires |
| Short runway | Exploit hard | No time for information to compound |
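One crude way to operationalise the table (the rates below are assumptions, not CDA prescriptions) is an epsilon-greedy rule whose exploration rate moves with the conditions above:

```python
import random

def epsilon_for(stable_env: bool, plateauing: bool, long_runway: bool) -> float:
    # Crude mapping of the table onto an exploration rate (assumed values).
    eps = 0.05 if stable_env else 0.30
    if plateauing:
        eps += 0.15
    if long_runway:
        eps += 0.10
    return min(eps, 0.60)

def choose(strategies: dict[str, float], epsilon: float) -> str:
    # Epsilon-greedy: usually exploit the best-known option,
    # occasionally explore a random one.
    if random.random() < epsilon:
        return random.choice(list(strategies))      # explore
    return max(strategies, key=strategies.get)      # exploit

known = {"cold email": 0.12, "ads": 0.08, "partnerships": 0.15}  # conversion rates
print(choose(known, epsilon_for(stable_env=False, plateauing=True, long_runway=True)))
```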
These are the laws that CDA is built on. They hold across every domain, every scale, and every time period. Violating them produces predictable failure.
All systems tend toward disorder without continuous energy input. A business, relationship, or body that isn't actively maintained degrades. Structure fights entropy — but only while it's maintained.
Exactly one constraint limits any system's output at any moment. Everything else either supports throughput or is slack. Fix the constraint first. Period.
Small consistent inputs produce exponential output over time. Consistency beats intensity. The 50th iteration of anything is categorically more valuable than the 1st — if each iteration was logged and improved.
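The arithmetic in two lines: fifty 1% improvements compound to roughly +64%, where the same increments without compounding stop at +50%.

```python
print(1.01 ** 50)     # ~1.64: fifty 1% improvements, compounded
print(1 + 0.01 * 50)  # 1.50: the same increments, delivered linearly
```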
Tight feedback loops compound improvement. Broken feedback loops drift toward entropy. Every system needs a mechanism that detects failure early and routes information back to decision-makers in time to act.
Systems don't change gradually — they flip at thresholds. Water doesn't slowly become ice; it crosses a phase boundary. Identify the threshold and design everything to cross it.
Any system that generates variation, undergoes selection, and replicates differentially will adapt toward fitness. Systems without variation stagnate. Systems without selection accumulate dead weight. Systems without replication cannot compound.
The party with better information about the true state of a system has a structural advantage. Investing in information quality — accurate measurement, honest feedback, real data — is always +EV.
Behaviour follows incentives at the structural level. Consistently unexpected behaviour in a system is almost always a correctly designed response to a poorly designed incentive. Fix the incentive — not the behaviour.
Every framework fails in predictable ways. These are the eight most common. Knowing them prevents them.
Executing flawlessly against the wrong goal. Working harder in the wrong direction. The most expensive failure because the effort is real but the output is worthless.
Running one strategy in a changing environment. Monostrategy produces a single point of failure. Every risk is concentrated. When conditions change, there is nothing to select.
Running experiments but refusing to kill losers. Emotional attachment to sunk costs. Every resource kept in a losing strategy is unavailable to a winning one.
Acting without measurement. The system drifts toward entropy by default because there is no mechanism to detect and correct failure. Silent failures accumulate until they're catastrophic.
Removing all slack and redundancy in the pursuit of efficiency. A system with no buffer cannot absorb any shock. Efficiency is maximum performance in expected conditions. Robustness is survivability in unexpected ones.
Amplifying before validating. Scaling a broken system makes it more broken, faster, with more money wasted. The only thing premature scaling reliably produces is a bigger version of the original problem.
Mistaking the map for the territory. Spending time refining the plan instead of generating data from reality. Plans are hypotheses. Reality is the experiment. Only execution generates real information.
Optimising short-term EV while accumulating existential risk. A bankruptcy, a health crisis, a destroyed key relationship — these reset the system to zero regardless of all other progress.
If you applied nothing else from this framework, these five rules would produce 80% of the available results.
Ten minutes spent clarifying what the environment actually rewards prevents months of executing in the wrong direction. Most failed projects are correctly executed — just aimed at the wrong target.
Every system has one constraint. Improving a non-constraint in the presence of the constraint produces zero improvement in total output. This is the most violated rule in execution.
Most people run 1–2 variants and call it testing. Evolution runs millions. Even at human scale, 5 deliberate experiments per month beats one perfectly optimised strategy. The raw material of adaptation is variation.
Selection speed determines adaptation speed. The bottleneck in most systems is not finding winners — it's failing to kill losers fast enough. Every resource in a losing strategy is unavailable to a winning one.
Unmeasured systems drift toward entropy. Wrongly measured systems optimise toward the wrong thing. Revenue, not followers. Closed deals, not demos booked. Customer outcomes, not activity metrics.
The full framework distilled to a set of questions and rules you can run every day, every week, and every month.
The universe compounds in the direction you're pointed. Small, consistent, aligned actions — measured and iterated — beat large, sporadic, heroic efforts every time. The CDA loop is the mechanism of that compounding. Run it more cycles per unit of time than anyone else and you win — not because each cycle is better, but because more cycles produce more selection events, more learning, and faster adaptation.