By Sylvie Zhuang
Copyright SCMP
China has warned that artificial intelligence will allow people to learn how to make their own world-destroying weapons, such as nuclear missiles and biological and chemical weapons.
The real-world risk – “loss of control over knowledge and capabilities of nuclear, biological, chemical and missile weapons” – was put forward in an AI safety governance document China unveiled on Monday.
It updates the country’s first AI safety governance document, which was made public last year. Both documents are grounded in the Global AI Governance Initiative that China proposed in 2023.
“In training, AI uses content-rich and wide-ranging corpora and data, including fundamental theoretical knowledge related to nuclear, biological, chemical and missile weapons,” the latest framework says.
“Without sufficient management, extremist groups and terrorists may be able to acquire relevant knowledge and develop capabilities to design, manufacture, synthesise and use such weapons with the help of retrieval-augmented generation capabilities,” it said.
Retrieval-augmented generation is an AI technique in which a model retrieves relevant information online or from an up-to-date knowledge base before generating a text response.
“This would render existing control systems ineffective and intensify threats to global and regional peace and security,” it said.
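The retrieval-then-generation mechanism described above can be sketched in a few lines. The corpus, the keyword-overlap retriever and the prompt format below are illustrative assumptions, not details from the framework; production systems use vector embeddings and a large language model for the generation step.

```python
# Toy sketch of retrieval-augmented generation (RAG): rank documents in a
# small in-memory corpus by word overlap with the query, then prepend the
# top matches to the question as context for a generator.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many query words they share; return the top k."""
    words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved passages to the question before generation."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "RAG combines a retriever with a text generator.",
    "Bananas are rich in potassium.",
    "The retriever fetches documents from a knowledge base.",
]
print(build_prompt("how does a rag retriever work", corpus))
```

The safety concern in the framework is precisely this retrieval step: if the knowledge base contains sensitive technical material, the generator can surface it on demand.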
The framework was jointly published by China’s National Cybersecurity Standardisation Technical Committee, a state-affiliated organisation under the country’s cybersecurity watchdog, the Central Cyberspace Affairs Commission.
The other state-affiliated body behind the document is the National Computer Network Emergency Response Technical Team, which functions as China’s national-level cybersecurity command and emergency response centre. It is mainly responsible for monitoring the security of critical networks, responding to cyberattacks, researching threats and offering protective measures.
The document unveiled on Monday further elaborated on the risks Beijing foresees arising from the development of AI. In the previous framework, China identified risks from dual-use AI technologies that could undermine national security and lower the barrier for non-experts to obtain nuclear weapons capabilities.
The warning comes as the world’s leading military powers, including China, seek to further empower their forces and weapons with AI. One example is Israel’s attack on Iran in June, which employed AI techniques.
The framework identified AI’s impact on education and its potential to suppress innovation as issues of research ethics.
It added two principles: ensuring that AI applications are trustworthy, and preventing the loss of human control. It pointed to the dangers of people interacting with AI that behaves in a humanlike way, including the potential for addiction, as well as AI’s emerging self-awareness.
“We strictly prevent any uncontrolled risks that could threaten the survival and development of humanity to ensure that AI is always under human control,” it said.