What You Can Actually Do About ASI Risk

Looking Beyond Tweeting Doom

If you're past the stage of merely understanding the risk and ready to act, subscribe for early access and our latest insights before they're published.

By Aamir Butt

Blog 10 of 10 in The Great Threshold series.

You've read about ASI risk. You understand the stakes. You're appropriately terrified. Now what?

Tweeting "we're all gonna die" doesn't help. Doomscrolling doesn't help. Anxiety spirals definitely don't help.

Here's what actually might help—concrete actions segmented by who you are and what resources you have. Some of these shift probabilities by microscopic amounts. But microscopic shifts in existential risk probabilities translate to millions of lives saved.

If You're a Researcher or Engineer

Priority 1: Work on alignment directly

Frontier AI labs (OpenAI, Anthropic, DeepMind) hire alignment researchers. Academia needs people working on interpretability, value learning, corrigibility, formal verification.

This is the highest-impact career path for technical talent. One good alignment researcher might shift existential risk by 0.01-0.1%. That's 800,000 to 8 million expected lives saved (0.01-0.1% of 8 billion people).

Where to start:

  • MATS (ML Alignment & Theory Scholars): Mentorship program for aspiring alignment researchers

  • Anthropic, OpenAI, DeepMind: Apply for safety roles directly

  • Academia: PhD programs focusing on AI safety (UC Berkeley, MIT, Cambridge, Oxford)

  • Independent research: Alignment Research Center, Redwood Research

Even if you're not a top-tier researcher, support roles matter. Engineering infrastructure for safety research, improving interpretability tools, red-teaming systems, writing clear explanations of technical work—all valuable.

Priority 2: Advocate for safety culture internally

If you work in AI capabilities (building more powerful systems), push for:

  • Rigorous safety testing before deployment

  • Red-teaming and adversarial analysis

  • Publishing safety research openly

  • Slowing down when safety lags capabilities

  • Whistleblower protections for safety concerns

Internal advocacy matters enormously. Companies respond to employee pressure. OpenAI's board drama showed that internal dynamics affect major decisions. Be the voice saying "we should slow down."

Priority 3: Make alignment research accessible

Write blog posts, give talks, create visualizations explaining alignment challenges to broader audiences. Build political will through public understanding.

Clear explanation of technical problems is high-leverage: one good explainer read by 100,000 people creates political pressure, recruits talent, builds consensus.

If You're in Policy or Government

Priority 1: Push for US-China AI safety coordination

This is the critical path for preventing an ASI arms race. Track-2 diplomacy (unofficial channels) can bypass official tensions.

Concrete actions:

  • If you're in the State Department or foreign service: advocate for AI safety dialogue

  • If you're congressional staff: brief your representative on the necessity of coordination

  • If you're a think tank researcher: publish papers on coordination mechanisms

  • If you're a diplomat: use backchannels to explore cooperation

Even informal conversations between US and Chinese AI safety researchers reduce misunderstanding and build relationships enabling future coordination.

Priority 2: Implement UBI pilots now

Social stability is a prerequisite for everything else. Without UBI (or an equivalent), mass unemployment triggers conflicts that prevent ASI work entirely.

Push for:

  • City/state pilots: Start small (5,000-50,000 people), gather data

  • Diverse implementations: Test different amounts, funding mechanisms, eligibility criteria

  • Bipartisan framing: "Automation dividend" and "freedom dividend" poll better than "universal basic income"

  • Rapid scaling: As unemployment rises, expand successful pilots

The Alaska Permanent Fund proves the concept works. Expand it.

Priority 3: Develop compute governance infrastructure

Make large AI training runs detectable and trackable:

  • Chip export controls: The US is already applying some to China

  • Energy monitoring: Large training runs draw massive amounts of electricity, which is monitorable

  • Mandatory reporting: Require disclosure of training runs above threshold compute

  • International coordination: Treaty mechanisms for verification

This enables verification of AI safety agreements—critical for making cooperation trustworthy.

If You're in Business or Finance

Priority 1: Incorporate AI safety into ESG metrics

Environmental, Social, Governance standards should include AI safety:

  • Does the company invest in alignment research proportional to its capabilities work?

  • Do they have safety protocols for deployment?

  • Do they advocate for regulation or fight it?

  • Do they participate in safety research sharing?

Investors can reward safety: Companies prioritizing alignment get capital, those cutting corners get divested. Market incentives currently favor speed—change them to favor safety.

Priority 2: Fund UBI research and pilots

Sovereign wealth funds, philanthropic foundations, corporate citizenship programs—all should fund UBI:

  • Enlightened self-interest: Markets need consumers with money. Dead or desperate people don't buy products.

  • Social stability: Prevents conflicts that disrupt business operations

  • Long-term thinking: Quarterly earnings matter less than civilizational survival

MacKenzie Scott, Bill Gates, and various sovereign wealth funds could fund global UBI pilots for relative pocket change. This is the highest-leverage philanthropy possible.

Priority 3: Adopt long-term time horizons

Optimize for decades, not quarters. This requires:

  • Different corporate structures: B-corps, benefit corporations, long-term holding companies

  • Different metrics: Measure long-term value creation, not just short-term profit

  • Different compensation: Reward executives for 10-year outcomes, not stock price next quarter

An unprecedented situation requires unprecedented time horizons. Business as usual doesn't work when facing existential transformation.

If You're an Educator

Priority 1: Teach critical thinking about AI

Not just technical skills—teach:

  • Existential risk assessment and reasoning under uncertainty

  • Ethics of transformative technology

  • Systems thinking (second-order effects, feedback loops)

  • Long-term consequences of short-term decisions

Students graduating now will be mid-career when ASI arrives. Their decisions matter. Prepare them intellectually and ethically.

Priority 2: Prepare students for post-work world

If AI automates most jobs, what gives life meaning? Teach:

  • Philosophy (meaning beyond productivity)

  • Ethics (morality independent of economic value)

  • Creativity (expression for its own sake)

  • Relationships (connection as intrinsic good)

  • Resilience (adapting to rapid change)

An education system optimized for the industrial economy (training workers for factories) is obsolete. Redesign it for post-scarcity or post-employment reality.

Priority 3: Advocate using your platform

Educators have credibility and reach. Use it:

  • Write op-eds about ASI risk and UBI necessity

  • Give public lectures

  • Engage with media

  • Influence curriculum standards

Teachers' unions, faculty senates, and professional associations are powerful constituencies politicians listen to. Mobilize them around existential priorities.

If You're a Citizen and Voter

Priority 1: Make this a voting issue

Ask candidates:

  • "What's your position on AI safety and ASI risk?"

  • "Will you support international coordination on AI development?"

  • "How will you address AI-driven unemployment?"

  • "Will you fund UBI pilots?"

Single-issue voting on existential risk is rational. Nothing else matters if we don't survive. Make politicians understand this affects votes.

Priority 2: Support organizations working on alignment

Donate money, volunteer time, amplify their work:

Research organizations:

  • Anthropic (alignment-focused AI company)

  • Alignment Research Center

  • Redwood Research

  • Machine Intelligence Research Institute (MIRI)

  • Future of Humanity Institute (Oxford)

  • Center for AI Safety

Advocacy organizations:

  • Future of Life Institute

  • Center for Security and Emerging Technology (CSET)

  • Partnership on AI

Even small donations matter. These organizations are funding-constrained. Your $100 might be the marginal dollar enabling additional research or advocacy.

Priority 3: Have conversations

Discuss ASI risk with:

  • Family and friends

  • Colleagues and professional networks

  • Online communities

  • Local groups

Political will emerges from bottom-up pressure. Most people don't know about ASI risk because nobody's told them. Tell them. Build social consensus that this matters.

Not doom-mongering—thoughtful explanation of stakes, probabilities, and what can be done. If you've read this far, you understand more than 99% of the population. Share that knowledge.

Priority 4: Prepare personally

While working on systemic solutions, prepare individually:

Develop AI collaboration skills: Learn to work with AI tools rather than compete. Prompt engineering, AI-assisted coding, creative workflows using AI. Augmentation over replacement.

Build financial resilience: Save, diversify, reduce dependence on single income stream. Easier to weather unemployment with buffer.

Cultivate meaning beyond work: Hobbies, relationships, creative expression, volunteer work, learning for its own sake. If work disappears, what remains?

Maintain mental health: This is anxiety-inducing material. Have coping strategies. Therapy, meditation, exercise, community. Stay informed without becoming paralyzed.

Stay flexible: Technological change accelerates. Adapt continuously. Lifelong learning becomes a necessity, not a luxury.

For Everyone: The Overton Window Strategy

The Overton Window is the range of policies considered politically acceptable. Currently, serious AI safety regulation and UBI sit outside it for most politicians. Make them acceptable by:

Talking about it constantly: Frequent discussion normalizes ideas.

Framing strategically:

  • Not "UBI welfare" but "automation dividend"

  • Not "regulate AI" but "ensure AI safety"

  • Not "slow down progress" but "develop responsibly"

Finding unlikely allies:

  • Conservatives care about family values, entrepreneurship, freedom—UBI supports all three

  • Libertarians care about reducing bureaucracy—UBI eliminates welfare state complexity

  • Business leaders care about stable markets—UBI provides consumer base

Building coalition: Left and right can agree on existential priorities even if disagreeing on everything else.

Once ideas enter the Overton Window, politicians follow. Public opinion leads; policy follows.

What Not to Do

Don't:

  • Panic or spread panic (counterproductive, burns political capital)

  • Assume individual actions don't matter (they compound across millions)

  • Wait for someone else to solve it (diffusion of responsibility guarantees failure)

  • Give up (self-fulfilling prophecy—if we believe we'll fail, we will)

  • Ignore uncertainty (acknowledge long odds while working to improve them)

The Impact Calculation

"My actions don't matter" is wrong. Here's why:

  • If you shift existential risk odds by 0.0001% (one in a million)—plausible for dedicated work—that's 8,000 lives saved in expectation.

  • If you dedicate career to alignment research and shift odds by 0.01%—achievable for top researchers—that's 800,000 lives saved.

These aren't hypothetical. They're expected value calculations under uncertainty. Your actions matter probabilistically even if individual impact is small.

Moreover, impacts compound. If 10,000 people each shift odds by 0.0001%, combined effect is 1%—shifting existential risk from 25% to 24%. That's 80 million expected lives saved through collective action.
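The arithmetic behind these figures is plain expected value: lives saved in expectation equals the reduction in extinction probability times the population at stake. A minimal sketch of the calculation, assuming the round numbers used above (a population of 8 billion, and individual contributions that add up linearly):

```python
# Expected-value sketch of the "microscopic shifts" argument.
# All inputs are illustrative assumptions from the text, not data.

WORLD_POPULATION = 8_000_000_000  # assumed round figure

def expected_lives_saved(risk_shift_fraction, population=WORLD_POPULATION):
    """Expected lives saved if extinction probability drops by risk_shift_fraction."""
    return risk_shift_fraction * population

# One person shifting odds by 0.0001% (one in a million): ~8,000 lives
individual = expected_lives_saved(0.0001 / 100)

# A dedicated alignment researcher shifting odds by 0.01%: ~800,000 lives
researcher = expected_lives_saved(0.01 / 100)

# 10,000 people each shifting odds by 0.0001%, effects assumed additive:
# a combined 1% shift, ~80 million lives
collective = expected_lives_saved(10_000 * 0.0001 / 100)

print(individual, researcher, collective)
```

The additivity assumption is the optimistic part; real contributions overlap and interact. But even heavily discounted, the expected values stay large because the population term is so enormous.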

Your individual action combines with others' actions. You're not alone. Movements start with individuals deciding their participation matters.

The Realistic Assessment

Will your individual actions save humanity? Probably not by themselves.

Will collective action of millions of concerned people, researchers, policy makers, educators, voters, and advocates shift probabilities enough to matter? Yes, potentially dramatically.

We're not guaranteed to win. Odds favor failure or muddling through rather than transcendence. But we're also not guaranteed to lose. The outcome remains uncertain, influenced by choices we make now.

Every action that:

  • Funds alignment research

  • Builds political will for coordination

  • Implements UBI reducing social collapse risk

  • Educates public about stakes

  • Shifts corporate incentives toward safety

  • Recruits talent to critical problems

  • Creates international dialogue

...moves probability curves in favorable direction.

The Moral Obligation

You didn't choose to be born in the most consequential generation in human history. But here you are.

You know about ASI risk. You understand the stakes. You have agency, however limited. That creates moral obligation to act.

Not unlimited obligation—you're allowed to live your life, pursue happiness, maintain sanity. But some obligation to contribute proportional to your knowledge and capacity.

What that looks like varies:

  • For brilliant researcher: Maybe career pivot to alignment

  • For policy maker: Maybe advocating coordination in your sphere

  • For educator: Maybe updating curriculum to include these topics

  • For average person: Maybe voting based on this, donating $100/year, having conversations

Match contribution to capacity, but don't use "I'm just one person" as excuse for inaction.

The Paradox of Impact

Here's the strange thing: Your highest-impact actions are probably ones you can't measure.

That conversation with a friend who becomes a policy maker who influences a treaty. That blog post read by a student who becomes an alignment researcher. That donation funding the marginal researcher who makes the breakthrough. That vote that shifts an election that changes the trajectory.

Causal chains are long, uncertain, and invisible. You take actions not knowing their ultimate impact. Most actions have zero impact. Some have enormous impact you never learn about.

This requires acting under uncertainty, optimizing for expected value rather than guaranteed results.

That's uncomfortable but necessary. We're playing probability games with existential stakes. We can't demand certainty; we must act on our best estimates.

The Timeline for Action

2025-2030: Foundation phase

  • Build alignment research capacity

  • Implement UBI pilots

  • Establish US-China dialogue

  • Create compute governance infrastructure

  • Build public awareness and political will

This is the window we're in now. Your actions during these 5 years matter most.

2030-2040: Critical decade

  • AGI likely arrives

  • Mass unemployment begins

  • ASI possibly emerges

  • US-China tensions over AI intensify

  • Social stability tested

If we haven't built foundation by 2030, this decade becomes catastrophic.

2040-2060: Transformation or termination

  • ASI definitely exists (if we survived the 2030s)

  • Alignment success/failure becomes clear

  • Consciousness upload possibly available

  • Full transformation or permanent failure determined

Post-2060: Living with consequences

  • Either: Post-biological civilization beginning cosmic expansion

  • Or: Biological humanity adapted to AI coexistence

  • Or: Extinct/permanently disempowered by misaligned ASI

  • Or: Recovering from catastrophic setback

Your actions in 2025-2030 influence which future we get.

The Personal Cost-Benefit

"This sounds exhausting. Can I just live my life?"

Yes. You can. Most people will. That's fine—not everyone needs to be an activist or a researcher.

But consider:

  • Time spent on these issues: Maybe 5-10% of your discretionary time

  • Money spent: Maybe 1-5% of disposable income

  • Career impact: Maybe choosing somewhat different path than purely self-interested choice

In exchange:

  • Possibly contributing to preventing extinction

  • Possibly enabling digital immortality for billions (including you)

  • Possibly shaping whether future is flourishing or dystopia

  • Definitely living with knowledge you tried rather than passively accepting whatever happens

For many people, living with the regret of inaction is worse than the cost of action.

The Realistic Hope

I'm not optimistic we'll achieve full transformation (2-5% odds). But neither am I pessimistic about our survival (65-75% odds we avoid catastrophe).

Most realistic hope: We muddle through. Avoid extinction, achieve some beneficial AI applications, implement partial UBI after crises force it, extend lifespans modestly, begin space exploration tentatively.

Not transcendence, but not termination. Survival with incremental improvement.

That's still worth fighting for. That's still trillions of future lives. That's still possibility of eventual transformation even if this generation doesn't achieve it.

And there's a non-zero chance of surprise success. Breakthroughs happen. Coordination emerges under pressure. We might get lucky. 2-5% odds mean a 1-in-50 to 1-in-20 chance we actually pull this off.

Those aren't impossible odds. They're long-shot odds. Long shots sometimes win.

The Call to Action

You've reached the end of this essay series. You understand:

  • ASI timeline (5-15 years to AGI, weeks-to-years to ASI)

  • Unemployment catastrophe among young men (10-50 million deaths without UBI)

  • UBI as critical stabilizer (economically feasible, politically difficult)

  • Alignment difficulty (instrumental convergence, deceptive alignment, verification impossibility)

  • Taiwan flashpoint (30-50% conflict probability, potential nuclear escalation)

  • Consciousness upload uncertainty (don't know if it preserves identity)

  • Multipolar trap (game theory makes cooperation nearly impossible)

  • Quantum consciousness implications (might need quantum substrate for genuine consciousness)

  • Realistic probabilities (2-5% full transformation, 20-30% catastrophe, 65-75% muddling through)

Now you choose: What will you do with this knowledge?

Option 1: Nothing. Dismiss as speculative, continue life unchanged. Risk: Living with regret if catastrophe happens or transformation occurs without your contribution.

Option 2: Passive concern. Follow developments, feel worried, take no action. Risk: Anxiety without agency. Worst of both worlds.

Option 3: Active engagement proportional to capacity. Whatever your role—researcher, policy maker, business leader, educator, citizen—do something concrete within your power.

I advocate Option 3.

Your specific actions depend on who you are:

Researcher? Work on alignment. Policy maker? Push coordination and UBI. Business leader? Fund safety and long-term thinking. Educator? Teach critical thinking and post-work meaning. Citizen? Vote, donate, advocate, educate.

All of you: Have conversations. Build political will. Shift Overton Window. Make this socially salient.

The Window Is Now

We're in the 5-year window (2025-2030) where the foundation gets built or doesn't. After 2030, events accelerate beyond our control. AGI arrives. Mass unemployment begins. Geopolitical tensions peak. Social stability is tested.

What you do in the next 5 years matters more than almost anything you'll ever do.

Not because you're special—because timing is special. You happen to be alive during civilization's most consequential transition. That's not achievement, it's accident. But it creates responsibility.

The future isn't written. Probabilities are current trajectory estimates, not destiny. Every action shifting those probabilities—even microscopically—compounds with millions of other actions.

Collectively, we determine the outcome.

The question isn't whether your individual action saves humanity. The question is whether you'll be able to say you tried when the outcome becomes clear. Whether you'll live with regret of inaction or satisfaction of effort. Whether you'll tell future generations—if there are future generations—that you saw the cliff approaching and did nothing, or that you saw it and hit the brakes as hard as you could.

We're all on the same train. The cliff approaches. Some people don't see it. Some see it and freeze. Some see it and hit the brakes.

Be one who hits the brakes.

It might not be enough. Probably won't be enough. But it's the only thing worth doing.

The great threshold awaits. Will we cross it wisely or recklessly? Cooperatively or competitively? With eyes open or closed?

Your actions—yes, yours specifically—help determine the answer.

Make them count.

You can't fix ASI risk alone, but you also can't outsource your share of the work. If you want a clear, realistic plan for what action looks like from where you stand, whether in career, capital, or influence, we can help you design it now, while the window is still open.

Copyright © 2025 Pullstream Company. All Rights Reserved.