Understanding AI Extinction Risk: A Practical Guide from Leading Experts
Overview
In December 2025, during pre-trial testimony in the Musk vs. Altman case, renowned computer scientist Stuart Russell—co-author of the foundational textbook Artificial Intelligence: A Modern Approach—delivered a sobering assessment of humanity's future with advanced AI. Russell’s testimony, which we’ll unpack in this guide, reveals a startling consensus among top AI leaders: the risk of human extinction from artificial general intelligence (AGI) may be far higher than what society would deem acceptable. This tutorial transforms that expert testimony into a practical framework for understanding, evaluating, and communicating about AGI extinction risk. You’ll learn how to think about probabilities, where the numbers come from, and why even the people building these systems are deeply worried.

Prerequisites
Before diving in, ensure you have:
- A basic understanding of what artificial general intelligence (AGI) is—an AI that can perform any intellectual task a human can.
- Familiarity with the concept of existential risk (e.g., nuclear war, asteroid impacts).
- No advanced math required, but comfort with percentages and basic probability helps.
- Willingness to engage with uncomfortable scenarios—this guide deals with potential human extinction.
Step-by-Step Instructions
Step 1: Understand the Baseline — What “Acceptable Risk” Means
Russell explains that humanity routinely accepts certain background risks without panic. For example, the chance of a civilization-ending asteroid impact is estimated at roughly 1 in 100 million per year. That's our benchmark: any new technology with a higher annual extinction probability would (or should) be considered unacceptable.
- Key takeaway: The acceptable threshold is extremely low. A risk of 0.000001% per year is already near the edge of what we tolerate.
- Action: Ask yourself: would you fly on a plane that had a 1-in-100-million chance of crashing each flight? Probably yes. Now ask yourself: would you accept a 1-in-5 chance of global catastrophe? That’s where AGI estimates land.
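To make the baseline concrete, here is a minimal arithmetic sketch. The 1-in-100-million figure comes from the testimony above; the rest is simple unit conversion:

```python
# Sanity-check the Step 1 numbers: the asteroid baseline as a percentage,
# and the plane analogy as expected flights per crash.

asteroid_annual_risk = 1 / 100_000_000  # acceptable baseline, per year

# Express the baseline as a percentage -- matches the 0.000001% figure above.
print(f"{asteroid_annual_risk * 100:.6f}% per year")  # 0.000001% per year

# Plane analogy: at 1-in-100-million per flight, the expected number of
# flights before a crash is the reciprocal of the per-flight risk.
print(f"~{1 / asteroid_annual_risk:,.0f} flights per expected crash")  # ~100,000,000
```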
Step 2: Collect the Expert Estimates — What Top AI Leaders Actually Say
During his testimony, Russell cited a range of influential figures who have publicly or privately estimated AGI extinction risk:
| Expert | Position | Estimated Risk (approx.) |
|---|---|---|
| Geoffrey Hinton | "Godfather of AI" | ~25% |
| Yoshua Bengio | Turing Award winner | ~20-25% |
| Dario Amodei | CEO of Anthropic | ~20-25% |
| Sundar Pichai | CEO of Google | ~20-25% |
| Demis Hassabis | CEO of Google DeepMind | ~20-25% |
Russell noted that while he doesn’t know the exact derivation, these estimates reflect each expert’s best judgment based on their deep understanding of AI capabilities, safety research, and regulatory prospects.
- Implied probability: Roughly 20–25% chance of extinction from AGI over the long term (not per year, but cumulative).
- Compare to baseline: That's tens of millions of times higher than the asteroid benchmark.
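As a rough check on that comparison, here is a sketch of the arithmetic. Note the unit mismatch between an annual baseline and a cumulative estimate, which the Common Mistakes section returns to:

```python
# Compare the low end of the expert range to the asteroid baseline.
# Caveat: the baseline is per-year while the 20% figure is cumulative,
# so this is an order-of-magnitude comparison, not a like-for-like rate.

asteroid_annual_risk = 1 / 100_000_000  # per year
agi_cumulative_risk = 0.20              # low end of the ~20-25% range

ratio = agi_cumulative_risk / asteroid_annual_risk
print(f"{ratio:,.0f}x the asteroid baseline")  # 20,000,000x -- "tens of millions"
```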
Step 3: Apply Russell’s Key Question — Is the Risk “Scientifically Reliable”?
Russell emphasizes a crucial epistemological point: we have no scientifically rigorous way to put a precise percentage on AGI extinction risk. All current estimates are “best guesses” informed by technical reasoning, but they lack the statistical foundation we have for, say, asteroid impacts.
However, he argues that even rough estimates can be useful. If every leading expert independently arrives at ~20–25%, that’s a signal worth heeding. In his words: “I can't say where the other widely quoted risk estimates come from… but the numbers from many leading experts are all in this range.”

- Practical advice: Treat expert estimates as order-of-magnitude indicators, not precise predictions. The gap between 1-in-100-million and 1-in-4 is what matters.
Step 4: Understand the Race Dynamics — Why We Can’t Just Slow Down
Russell’s testimony also highlighted a conversation with DeepMind CEO Demis Hassabis. Both agreed that “race dynamics” make it nearly impossible for any single company or country to unilaterally pause or exit the development race. The fear: if you stop, someone else (perhaps with fewer safety precautions) will push ahead and deploy an unsafe AGI.
- This creates a prisoner's dilemma: cooperation would benefit everyone, but each actor has a short-term incentive to defect (see the payoff sketch after this list).
- Concrete example: even if Google DeepMind wanted to halt AGI work, China or a startup might not. So everyone keeps racing, each hoping their own version will be the safer one.
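The payoff matrix below is a minimal sketch of that structure. The numbers are hypothetical, chosen only to illustrate why racing dominates; they do not come from the testimony:

```python
# A prisoner's-dilemma sketch of the race dynamic described above.
# "pause" = cooperate on safety, "race" = push capabilities ahead.
# payoffs[(a, b)] = (payoff to lab A, payoff to lab B); values are illustrative.
payoffs = {
    ("pause", "pause"): (3, 3),  # coordinated slowdown: safest outcome
    ("pause", "race"):  (0, 4),  # the pauser falls behind; the racer leads unsafely
    ("race",  "pause"): (4, 0),
    ("race",  "race"):  (1, 1),  # everyone races: the outcome Russell fears
}

for b_choice in ("pause", "race"):
    pause_payoff = payoffs[("pause", b_choice)][0]
    race_payoff = payoffs[("race", b_choice)][0]
    best = "race" if race_payoff > pause_payoff else "pause"
    print(f"If the rival chooses {b_choice!r}, A's best reply is {best!r}")
# Racing dominates either way, even though (pause, pause) beats (race, race).
```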
Step 5: Synthesize the Information — Form Your Own Informed Opinion
Now that you have the data:
- Acceptable annual risk: ~1 in 100 million (from asteroid baseline).
- Expert cumulative risk estimate: ~20–25%.
- Race dynamics: prevent easy de-escalation.
Russell’s conclusion? “Making these systems more capable… doesn’t seem like a sensible move.” You can adopt that view or challenge it, but you now have a structured framework for the debate.
Common Mistakes
Confusing “cumulative” with “annual” risk
Many people misinterpret the 20–25% figure as an annual probability. It is not—it’s a lifetime (or long-term) risk. Still, compared to the annual asteroid benchmark, even a cumulative 20% over, say, 50 years is astronomically high.
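A quick way to see the difference is to annualize the cumulative figure. The sketch below assumes a constant annual hazard and an illustrative 50-year horizon; neither assumption comes from the testimony:

```python
# Convert a cumulative risk into the constant annual rate that implies it:
# solve (1 - annual)^years = 1 - cumulative for `annual`.

cumulative_risk = 0.20  # ~20% over the whole horizon
years = 50              # illustrative horizon, not from the testimony

annual_risk = 1 - (1 - cumulative_risk) ** (1 / years)
print(f"Implied annual risk: {annual_risk:.4%}")  # ~0.4453% per year

asteroid_annual_risk = 1 / 100_000_000
print(f"vs. asteroid baseline: {annual_risk / asteroid_annual_risk:,.0f}x")
# Roughly 445,000x the acceptable baseline -- "astronomically high" indeed.
```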
Assuming expert consensus means certainty
Just because Hinton, Bengio, and others agree doesn’t guarantee they’re right. The point is that they agree, and they’re the most knowledgeable people we have. Dismissing their estimates because they aren’t “scientific” misses the practical urgency.
Overlooking the race dynamic
Some argue that if risk is so high, we should just stop AI research. But that ignores the competitive pressures Russell and Hassabis described. Unilaterally stopping would likely backfire.
Summary
Stuart Russell’s testimony provides a clear, grounded way to think about AGI extinction risk. The asteroid baseline sets an extremely strict bar for acceptable annual risk (about 1 in 100 million). Top AI leaders estimate cumulative extinction risk at roughly 20–25%, a gap of many orders of magnitude. Race dynamics compound the problem. Whether you agree or not, this framework equips you to participate in one of the most important conversations of our time.