
AI Propaganda and the China-US Race for Influence

By Shaoyu Yuan


In recent months, U.S. social media users scrolling their feeds might have encountered a too-smooth news anchor delivering anti-U.S. broadsides – only to discover it was a deepfake. In fact, pro-China bot accounts on Facebook and X (formerly Twitter) have been caught distributing AI-generated “news” videos by a fictitious outlet called Wolf News, in which avatar anchors decry U.S. domestic policy failures (like gun violence) or tout China’s leadership.

Advances in artificial intelligence have dramatically lowered the barrier to producing such propaganda. Generative AI can now churn out realistic images, videos, and conversational text in seconds, allowing governments (and anyone else) to flood the information space with content tailored for maximum impact. Both Beijing and Washington find themselves entering a new arms race – one where algorithms, not armaments, are the weapons, and online propaganda is easier to manufacture and harder to detect than ever before.

Artificial intelligence is turbocharging techniques that were already in play. China has long employed an “internet troll army,” known colloquially as the “50-cent” brigade or wumao, to push pro-Communist Party narratives on social media. Now AI tools can shoulder much of that work. A recent article described how a Chinese state media deepfake effort used AI to streamline content production: with just a few tools, one person can create images, turn them into video, and add realistic voice-overs – tasks that used to require a full team. In short, propaganda that once demanded a dedicated staff can increasingly be produced at scale by a single operator with the right algorithms.

China’s AI Propaganda Playbook

Beijing has embraced these AI capabilities with zeal. State outlets like CGTN (China Global Television Network) have begun using AI-generated presenters in slickly packaged videos that paint dystopian portraits of American society. What makes the Chinese effort uniquely dangerous is the combination of scale and plausibility. RAND researchers have traced People’s Liberation Army writings that openly advocate “social-media manipulation 3.0”: automated persona farms that look and sound painfully normal, posting cat photos on Monday and divisive memes on Tuesday. The goal is no longer to proclaim “Xi is great,” but to erode Americans’ trust in each other – a far subtler, and more effective, strategy.

One recent CGTN series called “Fractured America” relied on AI to depict U.S. workers in turmoil and an America in decline, part of a narrative that China is rising while the U.S. collapses. The segments’ visuals and voiceovers were synthesized by AI, a strategy that a Microsoft Threat Analysis Center report said allows Beijing to produce “relatively high-quality” propaganda that gains more engagement online. In the past year, China debuted an AI system to generate fake images of Americans across the political spectrum and inject them into U.S. social networks, stoking controversies along racial, economic, and ideological lines. This AI-generated content echoes the complaints of everyday U.S. voters while pushing divisive talking points. It is a covert effort to simulate grassroots outrage or consensus, and it could represent a “revolutionary improvement” in crafting the illusion of public agreement around false or biased narratives.

Some of China’s AI propaganda efforts have been brazen. In Taiwan, on the eve of its 2024 presidential election, more than 100 deepfake videos surfaced with AI avatars posing as news anchors and attacking the incumbent president with sensational claims – an influence operation attributed to China’s security services. Beijing-linked networks like “Spamouflage” have deployed deepfake anchors (sporting fictitious Western names and faces) to deliver Beijing’s messaging in English on U.S. platforms. These clips, ranging from denigrations of Taiwan’s leaders to mockery of U.S. policies, are often low-budget and slightly uncanny. Chinese propagandists seem to subscribe to the mantra of quantity over quality: flood the zone with so much content that some of it will inevitably go viral. The sheer volume is worrying – and the quality is improving. As AI models grow more sophisticated, the fakes are getting harder to distinguish from genuine media.

Notably, Chinese information warriors are learning from past missteps. Historically, their fake social media accounts were easy to spot: clumsy English phrasing, posts blasting out during Beijing business hours, and so on. But Chinese strategists have sketched out a new playbook: using AI to create whole networks of believable personas. In 2019, a Chinese military-affiliated researcher, Li Bicheng, outlined a blueprint for AI-generated online personas that could behave like real users, posting about everyday life most of the time while occasionally slipping in propaganda messages on topics Beijing cares about (say, Taiwan or U.S. “social wrongs”). Unlike human trolls, these AI personas wouldn’t need sleep and wouldn’t make linguistic errors. Little by little, they could bend opinions under the radar. What sounded like science fiction in 2019 is now quite feasible: today’s large language models can produce fluent, culturally savvy posts in any voice or style at the push of a button. In an American society that is already hyper-polarized, an army of AI “fakes” amplifying extreme viewpoints could pour fuel on the fire without ever revealing their Chinese origin.

An Open Society’s Achilles’ Heel

All this comes at a sensitive time for the United States. As a democracy, the U.S. prizes free expression and an open internet, but that openness also leaves it uniquely vulnerable to foreign disinformation. U.S. intelligence assessments make clear that China (alongside Russia and Iran) is actively exploiting information warfare tactics to sow discord among Americans.

Meanwhile, the United States’ response to foreign propaganda has been faltering. In recent years, partisan debates over “fake news” and free speech have led Washington to scale back its defenses. Ironically, just as AI-driven disinformation surges, the U.S. government has dismantled key counter-propaganda units: the State Department’s Global Engagement Center, for instance, which coordinated efforts to counter foreign disinformation, was disbanded amid criticism that its work impinged on domestic speech. Other monitoring initiatives have likewise been paused or defunded. Free speech advocates argue that government oversight of content is more dangerous than foreign disinformation. That view, however, misses a key reality: when hostile foreign powers are allowed to manipulate the information environment without restraint, the foundation of open expression is itself threatened. The challenge for the United States is finding a response that defends the integrity of public discourse without eroding the liberties on which that discourse rests.
It’s worth noting that Washington, unlike Beijing, does not run sprawling state propaganda campaigns using AI. American public diplomacy efforts (like Voice of America) hew to fact-based messaging and are openly branded, not covert deepfakes. Legal restraints, as well as ethical norms, generally forbid U.S. agencies from deploying misinformation or deepfake deceptions in domestic arenas. The asymmetry is stark: China’s authoritarian system aggressively pushes propaganda abroad while insulating its own population from outside influence – even passing laws requiring that AI-generated media be watermarked. The U.S., for its part, relies on a free marketplace of ideas in which truth can ideally rise above falsehood, but that ideal is being stress-tested by the onslaught of AI-enabled fakery.

Ultimately, the U.S. can’t out-propagandize Beijing without losing its soul, and it shouldn’t try. The United States’ strength lies in the credibility of its information and the openness of its society. The goal, then, is to shore up that openness so it cannot be exploited as a weakness. The coming years will be a testing ground: malicious actors may attempt to influence elections with these new AI tools, and if they do, the impact could be far greater than that of past low-tech meddling. We are entering an era when a flood of fake personas, videos, and images will seek to manipulate opinions – a true infodemic. Free societies must respond with agility and clarity, lest we wake up to find the narrative about our own world hijacked by those who wield AI in the service of falsehood.