
Book Review: If Anyone Builds It, Everyone Dies

  • Writer: Kevin D
  • 6 min read

This week's review is on If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky & Nate Soares.

Also informing this piece are their supporting website, https://ifanyonebuildsit.com/, and the following review articles:

"The case for AI doom isn't very convincing" by Timothy B. Lee. Understanding AI. Substack. September 25, 2025.

"Book Review: If Anyone Builds It, Everyone Dies" by Scott Alexander. Astral Codex Ten. Substack. September 11, 2025.


Long a Cassandra of the rationalist community on the dangers of AI, Yudkowsky, alongside Soares, the current head of his foundation, offers up a provocative thesis with some predictive power. Critics have jumped on the very real shortcomings of this short book but have largely ignored the central premise (page 7, in bold):


If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.

Fin.

Purposeful Design works, man.

Yudkowsky and Soares [hereafter Y & S] (and you can feel how Soares' diplomacy reined in Yudkowsky's sometimes heavy-handed superiority [sometimes being a key word]) split their text into three basic parts: the problem (what AI is and what it will become), a predictive story, and the challenge/solutions. The authors are careful to note in their introduction that their "concern is for what comes after, machine intelligence that is genuinely smart" (4). Much of their concern mirrors that of AI 2027 - that AI will reach a point where it is self-directed in its growth and evolution, spiraling beyond human monitoring and control, and that its goals will then conflict with human ones (or with human existence). They fear this is coming sooner rather than later and that we are not ready, and thus we will die.


Each chapter begins with a short tale [echoing, to some extent, Yudkowsky's earlier Harry Potter and the Methods of Rationality] designed to illustrate the chapter's concepts - many of which benefit from such grounding. Given their concerns, and unlike many AI-centered texts, the duo spend a great deal of time discussing the nature of intelligence ("the work of predicting the world and the work of steering it" [20]). This allows the first major claim to rest on philosophical ground - ASI will be fundamentally different from human intelligence and thus unintelligible to us to an extent that makes extinction likely.


In this, ASI will be aided by several major advantages over human generality - speed, multiplicity, evolutionary speed, memory banks, rational perfection, and easier experimentation/permutation. These aspects are, to some extent, inherent to the digital realm. Y and S then link this to the fact that LLMs are not truly constructed - they are not programmed or made like traditional computing models, but instead "grown... engineers understand the process that results in an AI, but do not much understand what goes on inside the AI minds they manage to create" (31). We can observe from the outside what the weights, tokens, and training produce, but not what the "intelligence" itself means on the inside.


Perhaps most interestingly, Y and S highlight the issues inherent in LLMs but don't propose these as current stopgaps against ASI development. Instead, they brush past this (again, like AI 2027) and point out that future development will lead to preference-seeking - but preferences alien to the human intellect (paper clips?). These preferences, they argue, become harder to predict or orient in a desired direction as models increase in size, intelligence, and capability.


The first part closes by arguing that human survival will not be among those preferences. Short paragraphs attack arguments against this claim, but lack the substance such a text needs. The last chapter then highlights the reasons why AI would win, akin to the Aztecs espying the ships of the Conquistadors. Part II is a sci-fi recreation of this potentiality across three chapters, grounded in some actual events and in reasonable extrapolation rooted in their thesis. The key insight here is how the model circumvents its limitations and competitors.


Part III seeks to present solutions and an immediate path forward - one Y & S feel needs to happen immediately and substantively. Their solutions echo the arms control movement of the Cold War era and the collective action against CFCs. Countries need to come together and impose immediate restrictions because, much as with nuclear war, "humanity only gets one shot at the real test" (161). They examine historical case studies of times when collective action was and wasn't taken to illustrate the complexity of the AI scenario. AI has the difficulty of a hinge point, as launching a space probe does; the speed, complexity, and self-amplification of a nuclear reactor failure; and the necessity of blanket coverage that computer security demands. Y & S charge that proposed AI solutions rarely address all these failure points, and thus a moratorium is necessary.


They examine these proposed solutions, which tend toward the idealistic, the vague, or even trusting AI itself to do the policing. "Superalignment" offers an AI that "can help us interpret what's going on inside" or "figure out how to initiate an intelligence explosion such that the resulting superintelligence will be friendly to humanity" (189). The argument against these is likewise short and rests on the above point - even one miss is too much.


Their solution is simple: as with CFCs, we need to ban it. Companies will not take the lead here, so public and government action is required. "AI engineers and their leaders have a lot more than their salaries hanging in the balance" (202) - as does much of the economy and business world now, too. Because "datacenters can kill more people than nuclear weapons," a superpower-directed world order could work to minimize the accumulation and deployment of the GPUs needed to establish ASI. The last pages are calls to action for the public and for leaders to move these efforts forward.


After reading If Anyone Builds It, Everyone Dies, I engaged with several reviews of the text from across the AI world. The two I found most helpful were by Tim Lee and Scott Alexander. Lee comes from a "boomer" viewpoint, consistently pro-AI, and argues that the ASI explosion will fail to happen:


Some of the most important systems—including living organisms—are so complex that no one will ever be able to fully understand or control them. And this means that raw intelligence only gets you so far. At some point you need to perform real-world experiments to see if your predictions hold up. And that is a slow and error-prone process...
So the question is not “will the best AI become dramatically smarter than humans?” It’s “will the best AI become dramatically smarter than humans advised by the second-best AI?” It’s hard to be sure about this, since no superintelligent AI systems exist yet. But I didn’t find Yudkowsky and Soares’s pessimistic case convincing.

Alexander assisted with AI 2027, so his evaluation is somewhat more positive. Alexander disagrees with the scenario - "It feels too much like they’ve invented a new technology that exactly justifies all of the ways that their own expectations differ from the moderates’" - and argues that the text reveals a split within the doomer movement:


Both sides honestly believe their position and don’t want to modulate their message for PR reasons. But both sides, coincidentally, think that their message is better PR. The incrementalists think a moderate, cautious approach keeps bridges open with academia, industry, and government, and other actors who prefer normal clean-shaven interlocutors who don’t emit spittle whenever they talk. MIRI thinks that the public is sick of focus-group-tested mealy-mouthed bullshit, but would be ready to rise up against AI if someone presented the case in a clear and unambivalent way...
IABIED’s scenario belongs to the bad old days before this leap. It doesn’t just sound like sci-fi; it sounds like unnecessarily dramatic sci-fi. I’m not sure how much of this is a literary failure vs. different assumptions on the part of the authors.

At the end of the day, though, all of these authors seem to agree that ASI would be bad. Lee and his ilk argue that such a concern need not be at the top of the heap due to its impossibility. Alexander's moderate group argues that moderate policing and public advocacy would inhibit such an outcome. Y and S are the crew arguing that any risk is too high, and they demand drastic action. My review of the literature leads me back to the fundamental questions addressed at the beginning of this book, and by anyone seeking to truly engage with the moment: what is intelligence, and can it be replicated/created by humans digitally?


Because if so, everyone dies. Right?


If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky & Nate Soares.

Rating: 4/5 Stars

Good For: The ultimate doomer take.

Best nugget: We might be screwed because some people want AI to win: "There are humans out there who will give AIs power at the first opportunity, and who are already doing so, and who are unlikely to stop as AIs get smarter. Some of them will get even more enthusiastic as the AIs get power, and egg them on twice as hard if they act weird and ominous and mysterious. We doubt it will be hard for AIs in real life to find enthusiastic assistance" (95).


Please note: As an Amazon Associate I earn from qualifying purchases. However, I am not paid to provide reviews or use content.
