Copyright Fortune

The letter’s more notable signatories include AI pioneer and Nobel laureate Geoffrey Hinton, fellow AI luminaries Yoshua Bengio and Stuart Russell, and business leaders such as Virgin founder Richard Branson and Apple co-founder Steve Wozniak. It was also signed by celebrities, including actor Joseph Gordon-Levitt, who recently expressed concerns about Meta’s AI products, will.i.am, and Prince Harry and Meghan, Duke and Duchess of Sussex. Policy and national security figures as diverse as Trump ally and strategist Steve Bannon and Mike Mullen, Chairman of the Joint Chiefs of Staff under Presidents George W. Bush and Barack Obama, also appear among the more than 1,000 signatories.

New polling conducted alongside the open letter, which was written and circulated by the non-profit Future of Life Institute, found that the public generally agreed with the call for a moratorium on the development of superpowerful AI technology. In the U.S., the polling found that only 5% of adults support the current status quo of unregulated development of advanced AI, while 64% agreed superintelligence shouldn’t be developed until it’s provably safe and controllable. The poll also found that 73% want robust regulation of advanced AI. “95% of Americans don’t want a race to superintelligence, and experts want to ban it,” Future of Life President Max Tegmark said in the statement.

Superintelligence is broadly defined as a type of artificial intelligence capable of outperforming the entirety of humanity at most cognitive tasks. There is currently no consensus on when, or if, superintelligence will be achieved, and timelines suggested by experts are speculative. Some more aggressive estimates hold that superintelligence could arrive by the late 2020s, while more conservative views push it much further out or question whether current technology can achieve it at all.
Several leading AI labs, including Meta, Google DeepMind, and OpenAI, are actively pursuing this level of advanced AI. The letter calls on these labs to halt that pursuit until there is a “broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” “Frontier AI systems could surpass most individuals across most cognitive tasks within just a few years,” said Yoshua Bengio, the Turing Award-winning computer scientist who, along with Hinton, is considered one of the “godfathers” of AI, in a statement. “To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use. We also need to make sure the public has a much stronger say in decisions that will shape our collective future,” he said.