Prince Harry, Meghan join call for ban on development of AI ‘superintelligence’

🕒︎ 2025-10-22

Copyright The Boston Globe

The 30-word statement says: “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

In a preamble, the letter notes that AI tools may bring health and prosperity, but alongside those tools, “many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”

Prince Harry added in a personal note that “the future of AI should serve humanity, not replace it. I believe the true test of progress will be not how fast we move, but how wisely we steer. There is no second chance.” Signing alongside the Duke of Sussex was his wife Meghan, the Duchess of Sussex.

“This is not a ban or even a moratorium in the usual sense,” wrote another signatory, Stuart Russell, an AI pioneer and computer science professor at the University of California, Berkeley. “It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?”

Also signing were AI pioneers Yoshua Bengio and Geoffrey Hinton, co-winners of the Turing Award, computer science’s top prize. Hinton also won a Nobel Prize in physics last year. Both have been vocal in bringing attention to the dangers of a technology they helped create.

But the list also holds some surprises, including Bannon and Beck, reflecting an attempt by the letter’s organizers at the nonprofit Future of Life Institute to appeal to President Donald Trump’s Make America Great Again movement, even as Trump’s White House staff has sought to reduce limits on AI development in the U.S.
Also on the list are Apple co-founder Steve Wozniak; British billionaire Richard Branson; former Chairman of the U.S. Joint Chiefs of Staff Mike Mullen, who served under Republican and Democratic administrations; and Democratic foreign policy expert Susan Rice, who was national security adviser to President Barack Obama. Former Irish President Mary Robinson and several British and European parliamentarians signed, as did actors Stephen Fry and Joseph Gordon-Levitt, and musician will.i.am, who has otherwise embraced AI in music creation.

“Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc.,” wrote Gordon-Levitt, whose wife Tasha McCauley served on OpenAI’s board of directors before the upheaval that led to CEO Sam Altman’s temporary ouster in 2023. “But does AI also need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? Most people don’t want that.”

The letter is likely to provoke ongoing debates within the AI research community about the likelihood of superhuman AI, the technical paths to reach it and how dangerous it could be.

“In the past, it’s mostly been the nerds versus the nerds,” said Max Tegmark, president of the Future of Life Institute and a professor at the Massachusetts Institute of Technology. “I feel what we’re really seeing here is how the criticism has gone very mainstream.”

Confounding the broader debates is that the same companies striving toward what some call superintelligence and others call artificial general intelligence, or AGI, also sometimes inflate the capabilities of their products, which can make them more marketable and has contributed to concerns about an AI bubble. OpenAI was recently met with ridicule from mathematicians and AI scientists when one of its researchers claimed ChatGPT had figured out unsolved math problems, when what it really did was find and summarize work that was already online.
“There’s a ton of stuff that’s overhyped and you need to be careful as an investor, but that doesn’t change the fact that — zooming out — AI has gone much faster in the last four years than most people predicted,” Tegmark said.

Tegmark’s group was also behind a March 2023 letter, issued while the commercial AI boom was still dawning, that called on tech giants to temporarily pause the development of more powerful AI models. None of the major AI companies heeded that call. And the 2023 letter’s most prominent signatory, Elon Musk, was at the same time quietly founding his own AI startup to compete with the companies he had urged to take a six-month pause.

Asked if he reached out to Musk again this time, Tegmark said he wrote to the CEOs of all major AI developers in the U.S. but didn’t expect them to sign.

“I really empathize for them, frankly, because they’re so stuck in this race to the bottom that they just feel an irresistible pressure to keep going and not get overtaken by the other guy,” Tegmark said. “I think that’s why it’s so important to stigmatize the race to superintelligence, to the point where the U.S. government just steps in.”
