Vibe Coding: When AI Writes the Code, Who Secures It?

Crystal Morin, Heather Joslyn · 2025-10-21

Trying something new can be exciting, a little scary, and sometimes even empowering. The concept of “vibe coding,” just opening up a blank editor and letting tools like ChatGPT, Copilot, and Cursor (why do they all start with “C”?) bring an idea to fruition, gives people the courage to create without strings attached. The freedom is thrilling for many, but, as with most new technologies, this innovation also breeds security concerns.

I worked as a linguist and intelligence analyst for over ten years, first in the US Air Force and then at Booz Allen Hamilton, tracking and defending against nation-state threat actors. Today, as a Senior Cybersecurity Strategist at Sysdig, I get to tackle the security challenges facing those on the cutting edge of innovation and help organizations build and maintain security programs that work in the cloud-native era.

When AI Speeds Ahead, Security Can Fall Behind

In August, I had the privilege of hosting a webinar panel featuring three brilliant minds from the open source community, each of whom created a tool you may use daily: Craig McLuckie, co-creator of Kubernetes; Loris Degioanni, creator of Falco; and Gerald Combs, creator of Wireshark. During our discussion, all three speakers expressed concern about AI-generated code contributions to open source projects, even while acknowledging the technology’s potential as it matures. They questioned its security and long-term maintainability, and the burden it places on the open source community. At the same time, they recognized its value for rapid prototyping and noted that, in a few years, AI will likely become more of a valuable contributor than a risky liability. But this security risk extends beyond open source, and their opinions ultimately surfaced a broader discussion on how we can adapt to and shape the use of AI in software development.

So what happens when enthusiasm surpasses experience? Without human oversight, training, knowledge, and security guardrails, AI can just as easily accelerate chaos. Radiologists, for example, have embraced AI. They rely on AI systems to read X-rays, CTs, and MRIs, and to highlight areas of concern. However, they don’t accept the results at face value. Radiologists interpret them alongside the patient’s history, their clinical experience, and nuanced pattern recognition, trusting their expertise rather than relying solely on the new technology. What I’m suggesting is that, while AI is helping us reduce manual effort and speed up processes, we cannot yet fully trust it. Seasoned experience and human judgment remain key. Vibe coders must adopt this mindset, too.

More code, more risk

With the use of an LLM, everyone can code – but not everyone is a developer. Vibe coding, or what some are calling “agentic coding,” can be a way for young techies to learn to code and contribute something meaningful to the open source projects they appreciate and rely on. Unfortunately, if you don’t know Python or Go well, you’re going to hit a wall in the debugging process, and a vulnerability could be unintentionally introduced into the code. The first rule of vibe coding is understanding that AI-generated code suggestions are not secure by default. Different AI models also pose different kinds of risk: one may hallucinate less but lack sufficient context, while a more general-purpose model can generate code that looks correct but introduces subtle security flaws.
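To make that last point concrete, here is a hypothetical example of the kind of subtle flaw an assistant can produce (a sketch, not output from any particular model): a query that works in testing but concatenates user input into SQL.

```python
# Hypothetical example of a subtle, AI-plausible flaw; not from any real model.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Looks correct and passes a quick manual test, but interpolating user
    # input into SQL allows injection (try a username of: ' OR '1'='1).
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # A parameterized query lets the driver handle escaping, closing the hole.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```

A reviewer who knows the language spots the difference immediately; a vibe coder who doesn’t may ship the first version.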
Additionally, a model trained on old or outdated developer data may reproduce insecure code patterns that were historically common in open source repositories. So while the increased contributions and participation are wonderful for the open source community, there are humans on the other end of every project. Maintainers are now auditing and reviewing more code, and the added volume is slowing the pace of progress. The risk associated with AI-generated submissions is also accelerating maintainer burnout. With an excessive amount of AI-driven code waiting to be reviewed, the quality of open source projects may drift over time; the accumulation of small mistakes creates system fragility.

Threat Actors Are Using AI-Generated Code

What happens when our adversaries leverage AI not to help, but to hide? They can use AI to obfuscate scripts so well that skilled maintainers struggle to parse them or identify the illegitimate code. The Sysdig 2025 Cloud-Native Security and Usage Report found that image bloat had quintupled since 2024 and that the number of packages in container images had grown by 300%. Image bloat is the excess of packages that are not necessary for an application to run correctly, and it is where the attack surface grows. An attacker, too, can use AI to increase the volume of code, making the review process more time-consuming and increasing the likelihood of dangerous commits slipping through.

Take a page from the threat actor playbook: In June 2025, the Sysdig Threat Research Team discovered AI-generated malware targeting a misconfigured open source tool on both Windows and Linux systems. However, the code was only 85-90% AI-generated; the threat actor still manually reviewed and debugged it before using it. Even attackers are reviewing AI code for quality assurance.

From concept to compromise

Twitter co-founder Jack Dorsey built Bitchat, a Bluetooth-based decentralized messaging platform, in a weekend using an open source AI coding and debugging assistant called Goose. On July 6, 2025, he shared the application on GitHub and X, but the next day, a man-in-the-middle flaw was discovered. Security researcher Alex Radocea found that attackers could bypass the “Favorites” feature, which was meant to let Bitchat users verify their trusted contacts. Where there should have been identity key pair authentication between “Favorite” users, an adversary could present their own keys and impersonate the friend users thought they were chatting with, no authentication necessary. Dorsey admitted the app was not formally reviewed before its release and warned users it was still in development.
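To show the class of bug in miniature, here is an illustrative sketch in Python (not Bitchat’s actual code; the names are invented): the flaw comes down to whether the app checks a newly presented public key against an identity it has already pinned.

```python
# Illustrative key-pinning sketch; not Bitchat's real implementation.
import hashlib

pinned_fingerprints: dict[str, str] = {}  # contact name -> hex key fingerprint

def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()

def accept_peer_unsafe(name: str, presented_key: bytes) -> bool:
    # The flawed pattern: trust whatever key the peer presents. Any adversary
    # can claim to be "alice" simply by showing up with their own key.
    return True

def accept_peer_safe(name: str, presented_key: bytes) -> bool:
    # Verify the presented key against the fingerprint pinned when the contact
    # was first marked as a "Favorite"; reject on any mismatch.
    expected = pinned_fingerprints.get(name)
    return expected is not None and expected == fingerprint(presented_key)
```

Without that check, the “Favorite” label asserts trust that the protocol never actually verifies.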
Secure your vibes

Jack Dorsey’s Bitchat is just one example of an unhardened, vibe-coded application that wasn’t ready for production. As more contributors rely on AI models for their code, the spectrum of risks will continue to grow. However, these security concerns can generally be mitigated with the right pre-production security measures and code review practices in place.

Use the Tool, Trust the Human

This is where our radiology analogy comes full circle. Just as a radiologist reviews AI-interpreted scans, humans must remain in the loop before code is committed to production. AI-assisted coding isn’t going away, nor should it. The ability to move from an idea to the first line of code to production within days is a powerful step forward. However, security guardrails, review processes, and developer literacy must evolve just as quickly.

AI amplifies both the good and the bad. While it lowers the barrier to entry for developers, new contributors need structured mentorship; otherwise, open source projects risk growing more fragile and applications risk shipping with serious vulnerabilities. Vibe coding is transformative, but we need to apply the same human judgment, discipline, and creativity that have always driven technology and innovation forward. Open source communities and veteran developers can advocate for safer AI-generated code by promoting basic security guardrails that track and scan contributions (a minimal sketch appears below), using AI tools for the initial review of AI-generated code, and educating the next generation of developers on best practices.
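As one example of such a guardrail, a project could run a static analyzer over contributed Python files before a human ever looks at them. The wiring below is hypothetical, but Bandit itself is a real analyzer (pip install bandit); a CI step would supply the changed file paths.

```python
# Hypothetical pre-review guardrail: scan contributed Python files with Bandit.
import subprocess
import sys

def scan_contribution(paths: list[str]) -> bool:
    """Return True if Bandit reports no medium-or-higher severity issues."""
    if not paths:
        return True
    # -ll filters the report to medium severity and above; Bandit exits
    # non-zero when it finds issues that survive the filter.
    result = subprocess.run(["bandit", "-ll", *paths],
                            capture_output=True, text=True)
    if result.returncode != 0:
        print(result.stdout)  # surface the findings for the maintainer
    return result.returncode == 0

if __name__ == "__main__":
    sys.exit(0 if scan_contribution(sys.argv[1:]) else 1)
```

A gate like this doesn’t replace the human in the loop; it makes the human’s review time count.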
