Over 200 prominent politicians and scientists, including 10 Nobel Prize winners and many leading artificial intelligence researchers, released an urgent call for binding international measures against dangerous AI uses on Monday morning.
Warning that AI’s “current trajectory presents unprecedented dangers,” the statement, titled the Global Call for AI Red Lines, argues that “an international agreement on clear and verifiable red lines is necessary.” The open letter urges policymakers to reach such an agreement by the end of 2026, given the rapid pace of AI progress.
Nobel Peace Prize Laureate Maria Ressa announced the letter in her opening speech at the United Nations General Assembly’s High-Level Week on Monday morning. She implored governments to come together to “prevent universally unacceptable risks” from AI and to “define what AI should never be allowed to do.”
In addition to Nobel Prize recipients in Chemistry, Economics, Peace and Physics, signatories include celebrated authors like Stephen Fry and Yuval Noah Harari as well as former heads of state, including former President Mary Robinson of Ireland and former President Juan Manuel Santos of Colombia, who won the Nobel Peace Prize in 2016.
Geoffrey Hinton and Yoshua Bengio, recipients of the Turing Award and two of the three so-called ‘godfathers of AI,’ also signed the open letter. The Turing Award is often regarded as the Nobel Prize of computer science. Hinton left his position at Google two years ago to raise awareness about the dangers of unchecked AI development.
The signatories hail from dozens of countries, including AI leaders like the United States and China.
“For thousands of years, humans have learned — sometimes the hard way — that powerful technologies can have dangerous as well as beneficial consequences,” Harari said. “Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity.”
The open letter comes as AI attracts increasing scrutiny. In just the past week, AI made national headlines for its use in mass surveillance, its alleged role in a teenager’s suicide, and its ability to spread misinformation and even undermine our shared sense of reality.
However, the letter warns that today’s AI risks could quickly be overshadowed by more devastating and larger-scale impacts. For example, the letter references recent claims from experts that AI could soon contribute to mass unemployment, engineered pandemics and systematic human-rights violations.
The letter stops short of providing concrete recommendations, saying government officials and scientists must negotiate where red lines fall in order to secure international consensus. However, the letter offers suggestions for some limits, like prohibiting lethal autonomous weapons, autonomous replication of AI systems and the use of AI in nuclear warfare.
“It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly,” said Ahmet Üzümcü, the former director general of the Organization for the Prohibition of Chemical Weapons (OPCW), which was awarded the 2013 Nobel Peace Prize during his tenure.
As a sign of the effort’s feasibility, the statement points to international agreements that established red lines in other dangerous arenas, such as the prohibitions on biological weapons and on ozone-depleting chlorofluorocarbons.
Warnings about AI’s potentially existential threats are not new. In March 2023, more than 1,000 technology researchers and leaders, including Elon Musk, called for a pause in the development of powerful AI systems. Two months later, leaders of prominent AI labs, including OpenAI’s Sam Altman, Anthropic’s Dario Amodei and Google DeepMind’s Demis Hassabis, signed a one-sentence statement that advocated for treating AI’s existential risk to humanity as seriously as threats posed by nuclear war and pandemics.
Altman, Amodei and Hassabis did not sign the latest letter, though prominent AI researchers like OpenAI co-founder Wojciech Zaremba and DeepMind scientist Ian Goodfellow did.
Over the past few years, leading American AI companies have repeatedly signaled a desire to develop safe and secure AI systems, for example by signing a safety-focused agreement with the White House in July 2023 and joining the Frontier AI Safety Commitments at the Seoul AI Summit in May 2024. However, recent research has shown that, on average, these companies are fulfilling only about half of those voluntary commitments, and global leaders have accused them of prioritizing profit and technical progress over societal welfare.
Companies like OpenAI and Anthropic also voluntarily allow the Center for AI Standards and Innovation, a federal office focused on American AI efforts, and the United Kingdom’s AI Security Institute to test and evaluate AI models for safety before their public release. Yet many observers have questioned the effectiveness of such voluntary collaboration and pointed to its limits.
Though Monday’s open letter echoes past efforts, it differs by arguing for binding limits, and it is the first to feature Nobel Prize winners from a wide range of scientific disciplines. Nobel-winning signatories include biochemist Jennifer Doudna, economist Daron Acemoglu, and physicist Giorgio Parisi.
The release of the letter came at the beginning of the U.N. General Assembly’s High-Level Week, during which heads of state and government descend on New York City to debate and lay out policy priorities for the year ahead. The U.N. will launch its first diplomatic AI body on Thursday in an event headlined by Spanish Prime Minister Pedro Sánchez and U.N. Secretary-General António Guterres.
More than 60 civil-society organizations from around the world also endorsed the letter, from the Demos think tank in the United Kingdom to the Beijing Institute of AI Safety and Governance.