By Vincent Chow
Chinese artificial intelligence start-up DeepSeek has conducted internal evaluations on the “frontier risks” of its AI models, according to a person familiar with the matter.
The development, not previously reported, comes as Beijing seeks to promote awareness of such risks within China’s AI industry. Frontier risks refer to the potential for advanced AI systems to pose significant threats to public safety and social stability.
The Hangzhou-based company, which has been China’s poster child for AI development since it released its R1 reasoning model in January, evaluated the models’ self-replication and cyber-offensive capabilities in particular, according to the person, who requested anonymity.
The results were not publicised. It was not clear when the evaluations were completed or which of the company’s models were involved. DeepSeek did not respond to a request for comment on Tuesday.
Unlike US AI firms such as Anthropic and OpenAI, which regularly publish the findings of their frontier risk evaluations, Chinese companies have not announced such details.
They were likely conducting evaluations internally and not publicising the results because “the market environment is very different”, said Sarah Sun, executive director at Singapore-based AI Governance Exchange, which promotes collaboration between Chinese and Western companies on AI safety.
“It’s a lot more risk-averse in China, where people might ask why you are releasing your model if you say the model is very dangerous, while in the US, [saying that] can help you raise money,” she said.
According to the Shanghai Artificial Intelligence Laboratory, one of China’s leading AI safety research institutes, powerful AI systems pose an “unprecedented challenge” for cybersecurity by potentially making cyberattacks much easier to plan and execute.
Powerful AI agents also posed self-replication risks, whereby they could autonomously replicate their model weights and code onto other machines without human supervision, potentially leading to a loss of human control, it said in a July paper.
Based on its own evaluations, the state-backed lab concluded that no Chinese AI models at the time had crossed any “red lines” for frontier AI risks that would require suspension of development. But it noted that some leading models, including DeepSeek-V3-0324 and DeepSeek-R1-0528, required strengthened mitigation when it came to self-replication risks.
“Models possessing greater knowledge and problem-solving abilities are also more likely to be used for, or exhibit characteristics associated with, malicious activities, thereby posing higher security risks,” according to the paper, whose project scientific director was former JD.com director of AI research Zhou Bowen.
DeepSeek is one of 22 domestic tech companies that have agreed to industry-led, voluntary AI safety and security commitments. Others include Chinese Big Tech firms such as Alibaba Group Holding, Baidu and ByteDance.
The commitments, updated in July, include conducting research on frontier AI risks.
Alibaba owns the South China Morning Post.
On Monday, a technical standards body associated with the Cyberspace Administration of China released an updated “AI Safety Governance Framework” that referenced the risk of AI systems becoming “self-aware” and escaping human control.
The framework also called for greater attention to the misuse risks surrounding open-source AI models.
“It would make sense for Chinese companies to now test their models on ‘self-awareness’ capabilities [after the release of the new framework],” said Sun of the AI Governance Exchange.