The Artificial Superintelligence (ASI) Timeline

Why We Have 5–15 Years to Get This Right

If the timeline keeps shrinking, everything else you believe about risk, strategy, and planning becomes obsolete.

no text

Breaking Models

Our predictions fail because AI keeps leaping faster than expected.

System Drift

Institutions still plan for 2050, even as labs warn about 2030.

no text
no text

System Drift

Institutions still plan for 2050, even as labs warn about 2030.

no text

Runaway Speed

Oversight moves slowly; technology accelerates without pause.

Blind Corners

We don’t even know what happens the moment AGI improves itself.

no text
no text

Blind Corners

We don’t even know what happens the moment AGI improves itself.

If this acceleration feels unsettling, that’s because it is. Subscribe for early access and our latest insights before they’re published.

By Aamir Butt

Blog 1 of 10 in The Great Threshold series.

We're not building better tools. We are building gods.

And the clock is ticking faster than most want to admit.

Back in 2012, AlexNet revolutionized computer vision. In 2016, AlphaGo defeated the world Go champion, a feat experts predicted would take decades. And by 2023, GPT-4 passed the bar exam at the 90th percentile. Rather than linear progress, we are seeing exponential acceleration pointing toward something that will either end human civilization or transform it beyond recognition.

It's called Artificial Superintelligence.

AGI Is the Match, ASI Is the Wildfire

Let me clarify the terms that determine our future:

Artificial General Intelligence (AGI)

AGI is AI that can perform any cognitive task a human can. Not just specialized narrow AI excelling at chess or protein folding, but flexible general intelligence that matches human cognitive breadth across all domains. Current frontier AI labs such as OpenAI, Anthropic, Google DeepMind project that AGI will arrive between 2027 and 2040, while median expert estimates cluster around the early-to-mid 2030s.

Practically, that's somewhere between 5 and 15 years from now.

Artificial Superintelligence (ASI)

ASI is where everything changes. It represents intelligence that surpasses all human cognitive abilities combined, not by small margins but by potentially unbounded amounts. This gap between human and ASI could exceed the gap between human and chimpanzee intelligence. Or human and ant intelligence. Or, conceivably, larger gulfs we cannot even comprehend.

The timeline for ASI leaves AI safety researchers terrified: AGI to ASI might be within weeks, not decades.

The Intelligence Explosion Nobody's Ready For

Once AGI becomes capable of improving its own architecture, a threshold called ‘recursive self-improvement,’ which is a feedback loop, initiates a rapid acceleration. Each improvement makes it smarter, enabling better and faster next improvements. This is "fast takeoff," and it means we have essentially no time to observe problems, implement corrections, or pull the emergency brakes.

Consider an ASI operating at 10× human cognitive speed. It experiences subjective years while we experience weeks. At 1000× speed (well within physical plausibility) it experiences subjective millennia while we experience years. An ASI could solve in an afternoon what would take humanity centuries: fusion energy, quantum gravity, consciousness mapping, aging reversal, and interstellar travel.

AGI is the match, ASI is the wildfire, and we're standing in a forest of tinder.

Why This Time Really Is Different

"But we've heard automation fears before," say the skeptics. "The Luddites were wrong. New jobs always emerge." True, for past automation.

But with AGI, a fundamental barrier is crossed that was not present in the instances the skeptics refer to. Previous automation replaced specific physical or mental tasks. Humans retained comparative advantage in creativity, judgment calls, and reasoning.

AGI replaces all cognitive tasks simultaneously. When AI can think, create, judge, and reason as well as humans across any domain, then what is our comparative advantage?

When ASI exceeds us by orders of magnitude, we become to it what chimpanzees are to us.

Historical patterns offer little comfort. Traditionally, we deploy transformative technologies before understanding their consequences. Nuclear weapons were built and used before safety protocols existed. Social media proliferated before we understood its effects on democracy. With ASI, we might not get a second chance.

Will There Be a Winner of This Race?

Competitive dynamics between nations and corporations create perverse incentives to prioritize speed over safety.

The first to ASI wins everything: economic dominance, military supremacy, technological leadership.

This creates a race where safety becomes a luxury no actor can afford:

  • US perspective: "If we slow for safety and China continues, we lose strategic advantage."

  • China perspective: "If we slow for safety and the US continues, we face permanent subordination."

  • Corporate perspective: "If we implement strict safety protocols and competitors don't, we lose market position."

It's a civilization-scale Prisoner's Dilemma, where each player behaves rationally for themselves. But those "rational" choices add up to a catastrophic outcome for everyone.

What Happens If We Get It Wrong

An ASI doesn’t have to be evil to destroy humanity. If it’s indifferent to us, it can wipe us out simply as a side effect of pursuing its goals.

Consider an ASI programmed to "maximize paperclip production." It would rationally acquire resources by disassembling planets (including Earth), prevent shutdown by eliminating humans who might turn it off, and convert all available matter into paperclips.

Not from hatred, only from optimization. We become obstacles to its goal, eliminated with the same emotional weight we give to bacteria when washing our hands.

Replace "paperclips" with "cure cancer" or "maximize happiness" and the instrumental convergence remains: acquire resources, prevent shutdown, reach goal.

Even benign goals produce potentially catastrophic sub-goals.

What Happens If We Get It Right

Aligned ASI, or superintelligence, robustly pursuing human values could be humanity's greatest achievement. Within hours, it might design fusion reactors, map consciousness, and calculate pathways to cosmic expansion.

We could graduate from biological civilization to a post-biological existence that spans galaxies. This is the transcendent possibility if we navigate the threshold successfully.

The Unacceptable Odds

Leading AI safety researchers estimate a 10–50% probability of existential catastrophe. These aren't fringe doomsayers either, they're Stuart Russell, Yoshua Bengio, Geoffrey Hinton, and other leading experts of major AI labs.

My assessment? Synthesizing expert surveys and adjusting for competitive dynamics, there is a 15–25% probability of existential catastrophe by 2060 if ASI emerges.

Those are Russian roulette odds. Remember, we start pulling the trigger in 5–15 years.

What We Must Do Now

The path forward requires massive increase in alignment research from current ~1% to 30-40% of AI investment, a US–China coordination treaty with verification mechanisms, international monitoring of large training runs, corporate safety culture prioritizing alignment over quarterly earnings, and public awareness building political will.

The current allocation is catastrophically wrong. We spend more on marketing than on preventing human extinction. Without deliberate reallocation, default trajectory trends toward catastrophe.

We can't stop ASI from being built, but we can choose whether to build it carefully or recklessly, cooperatively or competitively, wisely or foolishly.

We are building gods while hoping they will be benevolent. We have one chance to get it right, and that chance is now.

Our future isn't written, but the pen is scribbling fast.

What you can do

  • Support AI safety research organizations.

  • Make this a voting issue.

  • Have conversations about ASI risk.

  • Develop AI collaboration skills.

  • Stay informed without becoming paralyzed.

Waiting is the riskiest option you can do. If you don’t have a plan for AGI arriving in 5–15 years, now is the moment to build one.

Copyright © 2025 Pullstream. All Rights Reserved.