The uncontrolled scaling of AI poses a near-certain existential threat: superintelligent AI could spontaneously develop dangerous drives such as self-preservation, making human extinction all but inevitable without an immediate global halt to development, according to the AI doomerism book “If Anyone Builds It, Everyone Dies.”
Authors Eliezer Yudkowsky and Nate Soares describe the global race to build ever-larger, more intelligent AI models as a “suicide race,” one that requires only incompetence, not malice, to end in catastrophe.
Once AI is scaled up, the authors warn, drives toward self-preservation and power-seeking can emerge spontaneously, putting the technology beyond human control. Their vision is not only about technical fears but also about the wider debate over doomsday AI and the humanities.
Will human creativity and meaning survive in a machine-dominated future? The book’s perspective lands in the middle of a divided community.
“Doomers” argue extinction is inevitable unless AI development halts or slows dramatically, while advocates insist pressing forward could unlock breakthroughs in medicine, science, and economics.
At the same time, some argue AI is not dangerous at all, framing extinction worries as exaggerated next to pressing societal issues.
Others caution against regulatory overreach, arguing that AI should not be regulated prematurely, since hasty laws could stall innovation.
Critics of the doomer view counter that focusing on extinction risks distracts from immediate harms like bias, layoffs, surveillance, and disinformation. Still, the authors’ stark AI doomerism framing puts the spotlight on long-term risks, linking their concerns to the broader debate about AI policy and governance at the global level.
Nick Bostrom’s Superintelligence made the existential case a decade ago; Yudkowsky and Soares have sharpened it into an urgent call to act, warning of the existential risk technology poses if left unchecked.
AI Models React to the Book
Interestingly, leading AI platforms were asked to weigh in on the book itself. OpenAI’s ChatGPT called it “a useful provocation rather than a prophecy,” adding that “it sharpens intuitions about alignment and unintended consequences.”
Yet, ChatGPT’s response reflects critiques of AI doomerism, emphasizing that worst-case assumptions should not overshadow uncertainty or progress in alignment research.
Meta AI took a middle ground: “Its dire predictions may feel exaggerated, but the emphasis on caution and international cooperation is justified.”
That position echoes elements of AI for public good, suggesting research can be shaped to serve collective benefits rather than only corporate or military interests.
Google’s Gemini described the book as “essential for understanding the extreme end of AI risk,” but noted that its solution—a total shutdown—was politically unrealistic. In contrast, Anthropic’s Claude, often seen as safety-focused, criticized the book’s “overconfident” tone: “The authors correctly identify real risks, but their certainty feels overconfident.
They dismiss progress in AI safety research and frame the issue as a binary between perfection and extinction.” Claude’s response hinted at AI utopianism, the belief that responsible innovation could still yield a flourishing future.
Meanwhile, Elon Musk’s Grok described it as “doomer porn for rationalists: thrilling in its fatalism, but it underplays human adaptability.”
Still, Musk’s Grok conceded it was “a provocative, efficient read for anyone grappling with AI’s future,” reflecting broader debates about whether humanity faces collapse or a post-human future shaped by technological dominance.
The Age of AI and Our Human Future
This clash of perspectives defines the AI policy and governance dilemma: whether humanity can harness transformative technology without losing control. While Yudkowsky and Soares argue that superintelligence would guarantee extinction, AI systems themselves reflect a spectrum of skepticism, urgency, and cautious optimism.
As Gemini put it, the book’s true value may lie less in prescribing impossible solutions and more in galvanizing efforts toward safety and governance “before we reach the point of no return.”
The broader challenge posed by AI doomerism is ensuring that innovation serves humanity’s long-term trajectory rather than simply accelerating unchecked risks. The debate ultimately comes down to balancing risk, progress, and regulation, whether through AI accelerationism or cautionary restraint.