
The 2% Solution
Why Digital Immortality Has Worse Odds Than You Think
If You’re Secretly Betting on Digital Heaven
Long Odds
You feel the pull of immortality but suspect the probabilities look more like a lottery than a roadmap.
Stacked Gates
You intuit that “just solve ASI, fusion, UBI, coordination” isn’t a single problem but seven locked doors in a row.
Value Math
You care less about vibes and more about expected value across centuries and trillions of lives.
Sober Hope
You don’t want naive optimism or fatalism; you want numbers you can think with.
If this kind of probabilistic doom-and-hope is how you already think, you’re the kind of person we design strategy for. Subscribe for early access and our latest insights before they’re published.
By Aamir Butt
Blog 9 of 10 in The Great Threshold series.
Let me be brutally honest about the probabilities. Full transformation—consciousness upload, digital immortality, galactic civilization—has roughly 2-5% chance of happening this century.
Not 50%. Not 20%. Two to five percent.
These are lottery ticket odds. Yet unlike lottery tickets, the payoff involves not just personal wealth but the entire future of consciousness. Trillions of beings experiencing trillions of subjective years across galaxies—or oblivion.
Here's why the math is so unforgiving.
The Seven Prerequisites
For digital immortality and cosmic expansion to happen, we need all of these simultaneously:
ASI Alignment Solved (25-35% probability)
Quantum Computing at Scale (60-70% probability)
Energy Abundance via Fusion (70-80% probability)
Social Stability via UBI (45-55% probability)
Consciousness Science Breakthrough (40-50% probability)
Brain-Computer Interfaces Mature (55-65% probability)
Global Coordination (30-40% probability)
Each prerequisite is individually plausible but far from certain. The compound probability calculation is merciless:
0.30 × 0.65 × 0.75 × 0.50 × 0.45 × 0.60 × 0.35 ≈ 0.7%
The naive product, which assumes the seven factors are independent, lands under 1%. Allowing for the positive correlations discussed below (an aligned ASI accelerates everything else), I estimate a 2-5% probability of full transformation by 2100.
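For anyone who wants to check the arithmetic, the compound product takes a few lines (the probabilities are the midpoints of the ranges above; the dictionary labels are just shorthand):

```python
# Seven prerequisites for full transformation; midpoint probability for each.
prereqs = {
    "ASI alignment": 0.30,
    "Quantum computing": 0.65,
    "Fusion energy": 0.75,
    "UBI / social stability": 0.50,
    "Consciousness science": 0.45,
    "BCI maturity": 0.60,
    "Global coordination": 0.35,
}

# All seven must succeed, so (assuming independence) multiply.
p_all = 1.0
for p in prereqs.values():
    p_all *= p

print(f"P(all seven succeed) = {p_all:.2%}")  # prints 0.69%
```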
Understanding Compound Probability
Think of it like passing through seven locked doors to reach paradise. Each door has a separate key, and each key has only some probability of working. You need all seven keys to work.
Even if each individual key has a 50% chance of working, the chance of getting through all seven doors is 0.5^7 ≈ 0.78%. That's less than one percent, even though every individual door is a coin flip.
With ASI transformation, some factors are worse than coin-flips. ASI alignment at 30%? Global coordination at 35%? These aren't reassuring odds.
And they must ALL succeed. If even one fails catastrophically, transformation fails.
Breaking Down the Bottlenecks
ASI Alignment (30%): The Critical Path
This is the linchpin. If ASI alignment fails, nothing else matters: we're extinct or permanently disempowered.
Why only 30%? Because:
Alignment is genuinely hard (value learning, corrigibility, interpretability all unsolved)
Competitive dynamics prioritize speed over safety (<1% of investment in safety)
Deceptive alignment might be undetectable
Fast takeoff gives no iteration time
We need to get it right on the first try
Why not lower? Because:
Some alignment approaches show promise (Constitutional AI, RLHF)
Growing investment and awareness
Possible "warning shots" before full ASI
We might get lucky with slow takeoff
Because alignment multiplies everything else, every point it loses scales the whole compound probability down with it; a drop from 30% to 25% cuts the final estimate by a sixth.
Quantum Computing (65%): The Physics Challenge
Relatively high probability because the physics is understood; the remaining challenges are engineering.
Progress is steady: qubit counts increasing, coherence times extending, error correction improving. Multiple approaches pursued (superconducting, ion trap, topological).
Why not higher? Decoherence might impose fundamental limits we haven't discovered. Scaling from 1,000 to 1,000,000 qubits requires breakthroughs, not just refinement.
But this is a solvable problem with clear metrics. Confidence: medium-high.
Energy Abundance (75%): The Engineering Problem
Fusion break-even likely by 2035 (ITER on track). Commercial viability 2040s-2050s. Strong incentives (climate + energy demand). Alternative pathways exist (advanced fission, space solar).
Why not higher? Commercial viability ≠ break-even. Deployment at scale takes decades after first commercial reactor. Regulatory hurdles. Fossil fuel resistance.
But physics proven, engineering path clear. This is the prerequisite I'm most confident about.
UBI Implementation (50%): The Political Challenge
Coin-flip probability. The economics work (AI productivity gains fund it). The necessity is clear (prevent social collapse). Historical precedent exists (social programs after crises).
But: Wealth concentration creates political opposition. Cultural resistance to "welfare." Political gridlock. Global coordination nearly impossible.
Estimate: 60-70% odds in individual developed nations eventually, but only 45-55% odds of adequate implementation globally before social collapse forces it.
Consciousness Science (45%): The Fundamental Unknown
We don't know if the hard problem is solvable. Consciousness might require specific biological substrate. Or might be substrate-independent. Or might involve physics we don't understand.
Orch-OR theory suggests quantum processes are involved, which adds complexity. Classical theories suggest complexity alone is sufficient, but they don't explain qualia.
This represents "unknown unknowns" territory. Medium-low confidence reflects profound uncertainty.
BCI Maturity (60%): The Bioengineering Challenge
Demonstrated proof of concept (Neuralink). Clear engineering path (miniaturization, biocompatibility). Strong commercial and medical incentives. No fundamental physics barriers.
But: Scaling from thousands to millions of connections is hard. Safety standards are stringent. Regulatory approval is slow. Long-term biocompatibility is unproven.
Medium confidence: harder than it seems, easier than feared.
Global Coordination (35%): The Game Theory Problem
Historical failure rate on global coordination is high. Climate change (ongoing failure despite clear science). Nuclear proliferation (partial success at best). Biodiversity (mostly failure).
Current geopolitical tensions (US-China competition) make coordination harder. Race dynamics create defection incentives. Verification mechanisms insufficient.
Why not lower? Existential stakes might focus minds. Cuban Missile Crisis showed crisis can produce cooperation. Technical solutions possible (compute governance).
Low-medium confidence. This could easily drop to 20-25% if tensions escalate, further reducing compound probability.
Why These Aren't Fully Independent
The calculation above assumes independence: that each factor's success or failure doesn't affect the others. Reality is more complex:
Positive correlations (success in one helps others):
Aligned ASI dramatically accelerates everything else (designs fusion reactors, maps consciousness, optimizes BCI, etc.)
UBI implementation reduces geopolitical tension, improving coordination probability
Energy abundance makes UBI more affordable, improving its probability
Negative correlations (failure in one cascades):
Social collapse from no UBI prevents ASI development entirely
War over ASI supremacy destroys infrastructure needed for all other factors
ASI misalignment kills everyone, making other factors moot
Accounting for correlations shifts the estimate, but only modestly. The cascading failures are largely offset by the upside cases, and the strongest positive correlation (an aligned ASI accelerating everything else) is what lifts the sub-1% naive product into my 2-5% range.
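A toy simulation makes the correlation point concrete. This is a sketch, not a forecast: it assumes a single shared latent driver (a one-factor Gaussian copula), and the correlation strength rho is made up for illustration.

```python
import random
from statistics import NormalDist

# Midpoint success probabilities for the seven prerequisites.
P = [0.30, 0.65, 0.75, 0.50, 0.45, 0.60, 0.35]

def compound_success_rate(rho: float, trials: int = 200_000, seed: int = 0) -> float:
    """Monte Carlo estimate of P(all seven succeed). rho is the loading on a
    shared latent factor; rho = 0 reproduces the independent product."""
    nd = NormalDist()
    thresholds = [nd.inv_cdf(p) for p in P]
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        z = rng.gauss(0, 1)  # shared driver, e.g. aligned ASI helping everything
        if all(rho * z + (1 - rho**2) ** 0.5 * rng.gauss(0, 1) <= t
               for t in thresholds):
            hits += 1
    return hits / trials

print(f"independent (rho=0.0): {compound_success_rate(0.0):.2%}")
print(f"correlated  (rho=0.6): {compound_success_rate(0.6):.2%}")
```

With rho = 0 the estimate sits near the sub-1% naive product; positive rho raises the chance that everything succeeds together, which is the mechanism behind the upward adjustment, while also fattening the all-fail tail.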
What About Partial Success?
Full transformation isn't only possible outcome. We might achieve partial transformation (cognitive enhancement, extended lifespans, space presence) without complete digital immortality and galactic expansion.
Partial transformation probability: 5-10%
Requires fewer prerequisites:
AGI aligned but doesn't reach ASI (or reaches it slowly)
BCI mature enough for augmentation
Some life extension (decades to centuries, not immortality)
Inner solar system colonization (Mars, asteroid belt)
But no consciousness upload or full post-biological existence
This is more likely because it's less demanding on technology and coordination. But it's still only 5-10% because ASI alignment remains the critical path, and most partial successes still require it.
The Sobering Reality
Most likely outcome (65-75% probability): We muddle through—avoiding existential catastrophe but also failing to achieve transformation.
AGI arrives slowly or not at all this century
Some technological progress but no paradigm shift
Biological humans adapt to AI coexistence
Incremental improvements in health, longevity, living standards
No digital immortality, no cosmic expansion
Neither utopia nor apocalypse—just continuation with better tools
Second most likely (20-30% probability): Catastrophe—ASI misalignment, nuclear war, civilizational collapse, or some combination.
Third most likely (5-10%): Partial transformation—enhanced biological humans, some space presence, extended lifespans.
Least likely (2-5%): Full transformation—everything works, digital immortality, galactic civilization.
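Putting the four scenarios side by side (midpoints of the stated ranges, normalized so they sum to one; a bookkeeping step, since the midpoints themselves slightly overshoot 100%):

```python
# Outcome scenarios with the probability ranges from this section.
scenarios = {
    "muddle through": (0.65, 0.75),
    "catastrophe": (0.20, 0.30),
    "partial transformation": (0.05, 0.10),
    "full transformation": (0.02, 0.05),
}

midpoints = {name: (lo + hi) / 2 for name, (lo, hi) in scenarios.items()}
total = sum(midpoints.values())  # 1.06: the midpoints slightly overshoot 1
dist = {name: p / total for name, p in midpoints.items()}

for name, p in dist.items():
    print(f"{name:24s} {p:.1%}")
```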
Why Try With Such Long Odds?
Because the payoff is infinite relative to alternatives.
Expected value calculation:
2% chance of trillions of beings experiencing trillions of years = massive expected value
98% chance of something less = still worth attempting if that 2% is a real possibility
Pascal's Wager for species: Even small probability of transcendent outcome justifies significant effort.
Moreover, these aren't fixed odds—they're current trajectory estimates. Our choices affect probabilities. Allocating 10x more resources to alignment research might shift ASI alignment from 30% to 45%. Implementing UBI early might shift social stability from 50% to 70%.
Small shifts in individual probabilities compound dramatically:
Improving each factor by just 10 percentage points: 0.40 × 0.75 × 0.85 × 0.60 × 0.55 × 0.70 × 0.45 ≈ 2.7%
That roughly quadruples the overall odds, from modest improvements across every factor.
Every percentage point matters enormously when dealing with existential stakes and cosmic timescales.
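That compounding claim is easy to verify: shift every factor up ten points and compare the products.

```python
# Midpoint probabilities for the seven prerequisites.
base = [0.30, 0.65, 0.75, 0.50, 0.45, 0.60, 0.35]

def product(probs):
    result = 1.0
    for p in probs:
        result *= p
    return result

baseline = product(base)
improved = product([min(p + 0.10, 1.0) for p in base])  # +10pp each, capped at 1

print(f"baseline: {baseline:.2%}")           # prints 0.69%
print(f"improved: {improved:.2%}")           # prints 2.65%
print(f"ratio: {improved / baseline:.1f}x")  # prints 3.8x
```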
What This Means for You
Don't expect digital immortality. Odds are against it. Plan for biological lifespan. Make peace with mortality. Find meaning in finite existence.
But work toward it anyway. Because 2-5% odds for transcendence justify effort. Because trying improves odds. Because even partial success (extended lifespans, reduced suffering, space exploration) is valuable.
Manage expectations while maintaining hope. Avoid both naive optimism ("ASI will solve everything!") and paralyzing pessimism ("we're doomed anyway"). Thread the needle: acknowledge difficulties while working toward solutions.
Focus on improving the odds. Support AI safety research. Advocate for UBI. Make this politically salient. Prepare personally but don't bet everything on transformation.
The 2% solution isn't giving up; it's an honest assessment that enables strategic action. Knowing the odds helps prioritize. ASI alignment is the critical path (lowest probability, highest impact). UBI prevents the social collapse that would foreclose other work. Energy abundance is likely and enables everything else.
"We're attempting the hardest thing humanity has ever tried, with 2% odds of complete success, 5-10% odds of partial success, 20-30% odds of catastrophe, and 65-75% odds of muddling through. Those are terrible odds for a gamble this important. But they're the only odds we have, and not playing means certain mediocrity. So we play—carefully, strategically, humbly."
The transformation might fail. Probably will fail completely. But the possibility—however remote—of trillions of conscious beings flourishing across galaxies makes the attempt worthwhile.
That 2% represents everything. We chase it knowing we'll probably fail, because the alternative—not trying—guarantees failure.
You can’t control the 2–5% outcome, but you can influence the inputs.



