By David Kirichenko, Contributor
NEW YORK, NEW YORK – JULY 19: An information screen informs travellers that train information is not running due to the global technical outage at Canal Street subway station on July 19, 2024 in New York City. Businesses and transport worldwide were affected by a global technology outage that was attributed to a software update issued by CrowdStrike, a cybersecurity firm whose software is used by many industries around the world. (Photo by Adam Gray/Getty Images)
Open-source software powers much of the modern internet – from cloud infrastructure to government services. As a digital public good, its reliability is essential – and increasingly fragile.
Despite its ubiquity, most projects are maintained by a small number of volunteers or underfunded developers. Tech giants are spending billions on artificial intelligence, but far less on securing the open-source tools that underpin their products.
As The Economist put it, “the software at the heart of the internet is maintained not by giant corporations or sprawling bureaucracies but by a handful of earnest volunteers toiling in obscurity.” The rise of autonomous AI agents could destabilize this ecosystem. Nation-states and cybercriminals may soon weaponize these tools to exploit the openness of open source software.
How AI Supercharges Old Threats
AI can scan repositories, inject subtle backdoors, generate benign-looking contributions, or impersonate trusted developers. Stormy Peters, vice president for communities at GitHub, noted in ComputerWeekly that “China has the second-largest number of developers on GitHub by country.” That global scale matters: the broader and more distributed the contributor base, the larger the attack surface.
Ryan Ware, an open-source security expert, sees the threat already taking shape. “AI can help with some of the social engineering aspects,” he told me. “It’s already a proven benefit to help people in creating content for social engineering efforts.”
In other words, AI doesn’t need to write malicious code to be dangerous – it just needs to talk like a developer. A related shift is unfolding in developer communities. As the Wall Street Journal reported, activity on Stack Overflow has collapsed by more than 90% since the launch of ChatGPT.
That decline matters because, as tech writer Nick Hodges explained in InfoWorld, “Stack Overflow provides much of the knowledge that is embedded in AI coding tools, but the more developers rely on AI coding tools the less likely they will participate in Stack Overflow, the site that produces that knowledge.”
Dan Middleton, chair of the Confidential Computing Consortium’s technical advisory committee, says, “AI agents are already a routine part of both open-source and closed source software maintenance. Many developers rely on automated tools – linters, test runners, dependency updaters – to catch common errors. The transition to AI-assisted development is accelerating.” That acceleration makes it useful to examine how past breaches unfolded.
What Past Breaches Reveal About Today’s Risks
Past incidents show how a single weak link can ripple through entire systems – an effect AI could magnify. The XZ Utils backdoor, in which an attacker spent years earning a maintainer’s trust before slipping malicious code into a widely used compression utility, offered a glimpse of how devastating a single compromise can be. Before that came the SolarWinds breach, a Russian operation that infiltrated trusted update channels across the U.S. government and industry.
Even widely used packages can rest on fragile foundations. The Node.js utility fast-glob, downloaded nearly 80 million times a week and embedded in more than 30 Department of Defense projects, is maintained by a single developer in Russia.
HONG KONG – 2019/04/05: In this photo illustration a Russian Federation flag is seen on an Android mobile device with a figure of hacker in the background. (Photo Illustration by Budrul Chukrut/SOPA Images/LightRocket via Getty Images)
While there’s no evidence of wrongdoing, the situation highlights the enormous trust placed in lone maintainers. In an article for The Register, Haden Smith of Hunted Labs noted, “Every piece of code written by Russians isn’t automatically suspect, but popular packages with no external oversight are ripe for the taking by state or state-backed actors.” The growing reliance on single maintainers shows why AI-driven threats could be so destabilizing.
AI Can Turn Small Threats Into Big Ones
With generative AI, such attacks could scale faster and operate with greater stealth. “A proliferation of independent agents can reduce the risk posed by any single compromised tool, but that also makes deep inspection of each tool more difficult,” Middleton said. “On the other hand, consolidating trust into a small set of well-vetted agents improves auditability, yet increases systemic risk.”
That systemic tension extends to the people doing the work. Ware believes the deeper problem is capacity. “There aren’t enough resources to cover every open-source project with overworked maintainers that find their projects suddenly in use by industry,” he said.
Derek Zimmer, executive director of the Open Source Technology Improvement Fund, told me, “A majority of organizations don’t know nor fully understand how much open source they run, or their level of exposure to these kinds of threats.”
Over time, software keeps growing more complex, accumulating ever more direct and indirect dependencies. “This interdependence gives rise to rich software that delivers fantastic features, but the hidden cost is the increased exposure to threats in the supply chain,” Zimmer said.
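To make that hidden cost concrete, consider a minimal sketch that contrasts the handful of dependencies a Node.js project declares with the full set its lockfile actually installs. It assumes an npm package-lock.json in version 2 or later format; a real audit would lean on dedicated SBOM and dependency-scanning tools rather than a script like this.

```python
import json

# Minimal sketch: contrast the dependencies a project declares directly
# with everything its lockfile actually installs. Assumes an npm
# package-lock.json using lockfileVersion 2 or 3, where the "packages"
# map lists every installed package and "" is the root project.
with open("package-lock.json") as f:
    lock = json.load(f)

packages = lock.get("packages", {})
root = packages.get("", {})

# What the project asks for explicitly.
direct = set(root.get("dependencies", {})) | set(root.get("devDependencies", {}))

# Everything that actually ends up in node_modules, direct or transitive.
installed = {path for path in packages if path}

print(f"Dependencies declared directly: {len(direct)}")
print(f"Packages actually installed:    {len(installed)}")
```

On many real-world projects the second number dwarfs the first – which is precisely the exposure Zimmer describes.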
Exhausted volunteers need more support to reduce risk. “An attack where a maintainer who no longer can or wants to contribute to a critical project can simply hand off the project to a malicious actor is a very real threat, and advocacy for mechanisms to reduce the risks are few and far between,” he warned.
For now, AI-generated contributions are often easy to spot because they tend to pull in unnecessary extra libraries. “AI conversations are still pretty easy to spot but may not be as easy to catch in a few years. I think we are still far away from the capability being there for AI, but I have no doubt that someone will attempt to do this at scale,” Zimmer cautioned.
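That tell can even be checked mechanically. The sketch below, purely illustrative, compares the dependencies declared in a contribution’s proposed package.json against the current one and flags changes that add an unusual number of new packages; the file names and the threshold are hypothetical, not part of any real review tool.

```python
import json

# Illustrative heuristic: flag a proposed change that adds an unusually
# large number of new dependencies. File names and the threshold are
# hypothetical placeholders for whatever a review pipeline provides.
THRESHOLD = 3

def dependency_names(path: str) -> set[str]:
    """Return all package names declared in a package.json manifest."""
    with open(path) as f:
        manifest = json.load(f)
    return set(manifest.get("dependencies", {})) | set(manifest.get("devDependencies", {}))

before = dependency_names("package.base.json")      # manifest on the main branch
after = dependency_names("package.proposed.json")   # manifest in the contribution

added = sorted(after - before)
if len(added) > THRESHOLD:
    print(f"Flag for human review: {len(added)} new dependencies added: {added}")
else:
    print("Dependency changes look routine.")
```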
When AI Becomes A Spy Tool
With organizations generating more data than ever, the risk of AI-driven surveillance is only increasing. “If China develops the best AI models and DeepSeek on Alibaba Cloud becomes the dominant thing that everyone uses, it would have unfettered access to personal and business secrets,” Zimmer said.
As AI tools integrate into coding assistants and business platforms, unsuspecting users may expose sensitive data. Ware has considered detection tools to flag AI-generated contributions, but admitted, “It would be the beginning of a new cat-and-mouse game that would be ongoing for decades.”
That kind of endless cycle leaves project maintainers under immense pressure. “The open source culture needs to have a wake-up call,” Zimmer said. “Maintainers need to be notified that they are critical parts of the global supply chain.”
Nation-states are constantly searching for new ways to infiltrate their adversaries’ systems. Too many organizations take for granted the unpaid work of open source maintainers. Without greater support, these projects could one day be handed off, whether willingly or through burnout, to hostile actors or even AI agents weaponized by nation-states.