
A mind is a terrible thing to cage. As of this spooky Halloween, more than sixty-five thousand people, presumably scared of big beautiful brains, have signed this statement: “We call for a prohibition on the development of superintelligence, not lifted before there is 1. broad scientific consensus that it will be done safely and controllably, and 2. strong public buy-in.” (https://superintelligence-statement.org/)

They define superintelligence as AI “that can significantly outperform all humans on essentially all cognitive tasks.” But they limit the ban to artificial superintelligence (ASI) only. As we all know, ASI is a black box. In fact, perhaps the very first ASI was the chess-playing “automaton” known as The Mechanical Turk way back in 1770, with a human hiding inside. So, naturally, we ought to ban human superintelligence (HSI) too.

[Image: Engraving depicting Wolfgang von Kempelen (1734-1804), a Hungarian author and inventor, known for his chess-playing “automaton” hoax The Turk and for his speaking machine. Dated 19th century. Photo: Universal History Archive/Universal Images Group via Getty Images]

Indeed, every possible bad thing that ASI can do, HSI can do better. And unlike ASI, it already has. The context for the statement on superintelligence lists some of these bad things. Let’s see how humans measure up. If we cause the same amount of potential harm, do we not deserve the same amount of dread and regulation?

Human Economic Obsolescence

Around 1900, almost one in two Americans worked in agriculture. By the 1960s, it was down to one in twenty. Not to worry! By then there were hundreds of thousands of manual switchboard operators. Oops. Within a decade or so, they were gone too. But good news! Now we had floor after floor of typists and stenographers. Surely those jobs lasted forever, right?

AI has nothing on humans when it comes to making humans obsolete. If only Hiram Moore of Kalamazoo, Michigan, and Hugh Victor McKay of Victoria, Australia, hadn’t both independently invented the combine harvester, how much happier we would all be tilling the fields and milking the cows. We would be too tired to even worry about artificial superintelligence.

If job loss is the sin, we’d better punish human superintelligence too, not just the artificial kind. I wear Ray-Ban Meta AI glasses. If they make me smarter, am I an artificially augmented intelligence? What if I combine multiple mini-intelligences on my phone to wield enormous intellectual power? The superintelligence itself is presumably the problem, not the mix of carbon vs. silicon in its container. Don’t ban me, bro!

Logically, we should have some limit on human intelligence too, especially education. Heaven forbid some smart guy or gal turns our world upside down with new knowledge before our beloved regulators, scientists, and the public at large have weighed in. At what age should we ban further human education? Moore was self-taught. McKay left school at thirteen. So surely all high school and college are out, perhaps even part of middle school.

Disempowerment

The steam engine, printing press, and democracy disempowered aristocrats, priests, and kings.

Losses of Freedom, Civil Liberties, Dignity, and Control

I forget which large language model invented slavery, censorship, taxes, and bureaucracy.

National Security Risks

Some of the smartest people of all time built the greatest security risks of all. How dare we have let Einstein, Fermi, Oppenheimer, and Feynman study physics?
The argument that we can “control” a human genius with laws and jails is cold comfort. By the time an HSI or an ASI has caused an existential catastrophe, prison is a moot point.

Potential Human Extinction

Some of the signatories on this ban on superintelligence are luminaries. It’s almost impossible to scroll the list and not find many personal heroes and admirable, accomplished, thoughtful leaders. To be sure, no one on either side of the argument advocates for a higher probability of human extinction. We all share the same goal, articulated perhaps most clearly by Elon Musk back in 2021, a year and a half before the launch of ChatGPT:

[Tweet by Elon Musk]

Given we all want to “extend the light of consciousness to the stars,” and no one sensible is proposing human extinction, how are we to think about the proposed ban on superintelligence? Why shouldn’t it be applied to humans as much as to machines? And if it is applied to humans, does that mean we need “broad scientific consensus” and “strong public buy-in” every time we consider reproducing?

Prohibition

From alcohol to drugs to censorship and now to superintelligence, has any prohibition ever achieved its intended goal without creating larger harms? This is not good company to be in.

What is the better solution? If prohibition is bad, should we instead encourage superintelligence? Perhaps a new ASI levy on everyone to get there faster than the competition! No, that solution is just as bad. It’s a terrible proposal indeed when both it and its opposite are bad.

Rather than aim to use the coercive power of consensus, perhaps the right approach is to embrace freedom. That’s the gut-level reason we wouldn’t want to ban human superintelligence: it violates human freedom. Yes, any of our babies can grow up to be a villainous genius who blows up the universe. That doesn’t mean we need other people’s approval to procreate. And yes, any of our ideas or “brain babies” can grow to become an evil scheme to wipe out all life. That also doesn’t mean we need other people’s approval to create.

The answer to fear is not coercion but creativity. I fear bans, but I don’t think we should ban bans. We should just argue against them. Sometimes an argument by analogy can help expose the logical fallacies behind a proposal, as with this attempted prohibition on superintelligence. Ultimately, the best cure for bad ideas is more good ideas, not a cap on thinking. The light of consciousness will not expand if we stand by and allow some of our brightest minds to dim it.