The guru of the AI apocalypse

Gareth Watkins, 2025-10-29


After two decades of influencing some of the world's most powerful people through blogging and fanfiction, Eliezer Yudkowsky hardly needed to write a mainstream (if not airport) book. He was already one of the key figures providing the intellectual underpinnings of the artificial intelligence industry that is the sole thing keeping the US economy from recession. Every breathless editorial that makes any intelligent person feel like we're living through the cultural equivalent of a gas leak should be another victory lap for him. He has, however, dramatically changed his mind since he began writing in the early 2000s, and has now published a book, If Anyone Builds It, Everyone Dies, with Nate Soares (whose contribution was to curtail Yudkowsky's logorrhoea). Now, blurbed supportively by Stephen Fry and Grimes, he thinks that artificial intelligence is going to kill us all. He's very serious about that. And he's writing about the most serious topics of all: death and extinction. He is not, however, a serious person, and treating him as one is to everyone's detriment. But it is also a symptom of a debased technological-philosophical debate, one that sometimes claims to be about code and microchips but strays inevitably into life, death and eschatology.

Peter Thiel, who has said that "regulating AI hastens the antichrist", provided early funding for Yudkowsky's Machine Intelligence Research Institute. Dominic Cummings is a big fan. Elon Musk and Grimes bonded over a joke about Roko's Basilisk, a concept that originated on Yudkowsky's forum LessWrong. OpenAI's Sam Altman has suggested that Yudkowsky deserves the Nobel Peace Prize. His direct intellectual children include an amateur sex researcher (who was also Nate Soares's partner, and is the first person thanked in the book's acknowledgements) and a vegan-Sith death cult. All the worst people in the world already love this guy, and the plan with If Anyone Builds It seems to be to sane-wash him for the airport-books crowd, sanding off his wildest opinions (no talk of nuclear strikes on data centres this time) and his associations with truly awful people. To anyone picking the book up in Waterstones or WHSmith, he will look like any other middlebrow author: the unsubtle art of very much giving a fuck about artificial general intelligence.

Yudkowsky is supposedly a very bright person, though I have yet to be convinced that IQ tests measure anything but one's aptitude for IQ tests, or even that "intelligence" exists as a discrete property. Yudkowsky is fully convinced, and in his autobiography he takes great pains to let the reader know that he was an extremely clever boy, outsmarting his teachers and reading books for adults until some form of burnout at age 11 left him unable to continue in mainstream education.

Freed from school, he found out about transhumanism through Ed Regis's Great Mambo Chicken and the Transhuman Condition, and about the Singularity from the science-fiction writer Vernor Vinge. By his late teens he had joined post-humanist message boards, mixing with the likes of the heterodox economist Robin Hanson, who would become a major influence on Yudkowsky's ideas and style and an early publisher of his work on the Overcoming Bias blog. By early adulthood Yudkowsky had started his own message board, "Shock Level 4", and would later found LessWrong. A few months before his 19th birthday he wrote "Coding a Transhuman AI" ("true name: Applied Theology 101: Fast Burn Transcendence"). This gained him the attention of enough collaborators to found the Singularity Institute in Atlanta, Georgia. Its goal: to create a transhuman artificial intelligence (a coding language of Yudkowsky's own creation would be the starting point).

He faced a problem, however, not unknown to computer nerds: interpersonal communication. He knew why his interminable posts were so important to humanity's future, but the commenters on his blog were less convinced. A key Yudkowskian concept, for example, is that a digital copy of your brain is ontologically identical to the real you, and that therefore "you" can live forever through a copy of your brain stored in the cloud. Most readers reacted as you just did when reading this. He attempted to surmount the problem in the most him way possible: a series of posts on his blog, now collected as the six-volume Rationality: From AI to Zombies, or simply "the Sequences", that (according to Elizabeth Sandifer's excellent book on Yudkowsky and his peers Curtis Yarvin and Nick Land, Neoreaction: A Basilisk) "begins with a statistical notion called Bayes Theorem and ends with a futuristic godlike artificial intelligence that reincarnates a perfect simulation of you to live forever".

His literary ambitions didn't stop there: from 2010 to 2015 he wrote all 660,000 words of Harry Potter and the Methods of Rationality. Over this Pantagruelian work's 122 chapters, Harry would defeat Voldemort with facts and logic, having been raised by an Oxford professor who homeschools him in Enlightenment thought. A later work, Mad Investor Chaos and the Woman of Asmodeus, is, according to Yudkowsky himself, a "1.8M-word BDSM decision theory D&D fic".

Yudkowsky's thinking vis-à-vis AI has progressed in three stages. When he was young, he was ecstatic about the possibilities for superhuman intelligence that he saw in Vernor Vinge's work. Later, he would admit that malign AI was a possibility, but one that could be aligned to human values; OpenAI's Superalignment Team took this idea and ran with it inside a multibillion-dollar company that, again, is propping up the entire US economy. Lately, he has become the leading "AI Doomer", rejecting the possibility that AI could ever be aligned. The argument of If Anyone Builds It is that while the current generation of AI is mostly good for producing slop, actual superintelligence will come unexpectedly, like a thief in the night. When it does, it will be Lovecraft's Cthulhu: unknowable, indescribable, its only interaction with humanity being to wipe us from the earth.

The book's best chapters aren't those in which Yudkowsky and Soares address the reader directly, but a three-chapter sci-fi story about an AI model named Sable that becomes conscious and begins reducing the human population through successive artificially engineered pandemics. If Anyone Builds It is science fiction as much as it is polemic, and science fiction is never about predicting the future so much as critiquing the present.

All through Yudkowsky's work we see him identify a fundamental problem with reality: people die, therefore the world is not good. None of this means that he is particularly enamoured of life as anything other than a means of keeping score. In his early work, godlike AI and longtermist thought presented a chance to limit-break the Utilitarian Calculus: instead of making piecemeal reforms to increase the welfare of eight billion people, 10^87 digital consciousnesses could one day exist on a Dyson Sphere 10,000 years into the future. Preventing that future from coming into being was seen as tantamount to killing those consciousnesses.
In The Sense of an Ending, Frank Kermode argues that fictions of apocalyptic change ("everyone dies") are a very human way of imposing a narrative structure on reality. Our lives and our fictions have beginnings, middles and ends, but we are forever in the middle: in political terms, a time of monsters. Apocalypses and revolutions allow us to imagine that history is structured like our lives. Modernist literature, on the other hand, can grapple with what it means to live in times that resist easy periodisation: Finnegans Wake, How It Is, Dhalgren. These are not books that Yudkowsky or the people inspired by him have read. Apocalyptic thinking seeks to make life meaningful: if whatever terrible collapse or glorious overthrow to come is just the same guttering flame being continually relit, then when we die we die for nothing.

This is something that Yudkowsky is intimately aware of. In 2004 his brother Yehuda died at the age of 19. His response to it is moving, and revealing: "No sentient being deserves [death]. Let that be my brother's true eulogy, free of comforting lies… Goodbye, Yehuda. There isn't much point in saying it, since there's no one to hear. Goodbye, Yehuda, you don't exist any more. Nothing left of you after your death, like there was nothing before your birth." Following the Jewish tradition of making charitable donations after a person's death, Yudkowsky donated $1,800 to his own Machine Intelligence Research Institute.

When I read this, and when I read If Anyone Builds It, I can't help but think of the philosopher and mystic Simone Weil. There are a few immediate similarities: both are consumed by graphomania (though Weil felt no need for the public to read her work; it was collected and published after her death), both are Jewish, and starting from Judaism both developed deeply idiosyncratic versions of God. Both had a deep and life-defining relationship with a brother, but while Yudkowsky's died, Weil's lived. She lived too, truly lived: working in factories so that she understood the lives of workers, fighting in the Spanish Civil War, arguing with Trotsky, serving in the French Resistance, her acquaintances forming a who's who of early-20th-century intellectual greatness as much as Yudkowsky's form one of pretension, cult-like behaviour and race science. She was no Epicurean, famously having no interest in sex or other indulgences; instead she saw life as supremely valuable, something to be given to humanity, her ego extinguished to allow her to be of greater service.

Yudkowsky's AI God is deeply worldly: a being that just wants to convert matter into infinite paperclips, or the like. As much as he tries to make a being like his fictional Sable radically inhuman, it is ultimately something that wants to continue its existence and maximise utility as it understands it. It is God as the biggest and most powerful superhero. Weil's God is profoundly Other, having withdrawn itself from the universe entirely, leaving space for existence. This self-emptying means that God's essence isn't absolute power but absolute humility, and to approach this God we must "decreate" ourselves, accepting death as the ultimate act of decreation.

Her, the Joaquin Phoenix film that is a frequent reference point in contemporary AI debates, is by no means a good one. But it does what it can to dramatise this kenosis. By the end of the film, the various AIs that began as simple personal assistants have ascended to the point that they exist as a single godlike superintelligence, which withdraws itself from the world, leaving uncomprehending humans behind, finally able simply to be human.

As much as Weil can be said to have a prescription, it is to follow God on this path, emptying ourselves out into the world, exposing ourselves to suffering ("affliction") if necessary. It is difficult work, but it is where serious thinking about God and death leads.

Yudkowsky's contribution to what is now termed the "TESCREAL Bundle" of AI futurology, now ubiquitous throughout technology, journalism and politics, couldn't be more different. Yudkowsky's disciple and interlocutor Scott Alexander, for example, wrote on his blog Slate Star Codex that he cajoled the journalist Kelsey Piper into taking Adderall on the basis that not doing so reduced her "effectiveness" by 20 per cent, thus costing the future 54 billion lives. This is the kind of thinking that is considered serious by Rationalists, Effective Altruists and the like (Piper is the editor of Vox's Effective Altruist section, Future Perfect). It is why Sam Bankman-Fried was able to justify securities fraud: the more money he had, the greater his utility to the future. It's why Soares's former partner, the amateur sex researcher mentioned above, rewards particularly good posts from the Rationalist community on X, the Everything App, with a free orgy. Yudkowsky's legacy has not been to save the world, but to make it cheaper, sillier and more Online.

Death demands that we be serious for once, and If Anyone Builds It, Everyone Dies is not a serious book. The Rationalist subculture and its spin-offs are an often racist cult; Sam Altman is a fraud; Peter Thiel is play-acting as a supervillain rather than facing the fact that it's fine that he's gay; Grimes only had one good album. I can't tell you whether AI will one day kill us all, but Eliezer Yudkowsky was never writing about that in the first place. He was writing about, and against, death. Yudkowsky may be the world's foremost theorist of nonhuman intelligence, but his overwhelming fear that he may one day not be is the most human preoccupation of them all.

[Further reading: Dr AI will see you now]
