
When AI systems started spitting out working code, many teams welcomed them as productivity boosters. Developers turned to AI to speed up routine tasks. Leaders celebrated productivity gains. But weeks later, companies faced security breaches traced back to that code. The question is: Who should be held responsible?

This isn’t hypothetical. In a survey of 450 security leaders, engineers, and developers across the U.S. and Europe, 1 in 5 organizations said they had already suffered a serious cybersecurity incident tied to AI-generated code, and more than two-thirds (69%) had uncovered flaws created by AI. Mistakes made by a machine, rather than by a human, are now directly linked to breaches causing real financial, reputational, or operational damage.

Yet artificial intelligence isn’t going away. Most organizations feel pressure to adopt it quickly, both to stay competitive and because the promise is so powerful. And yet the responsibility still rests with humans.

A blame game with no rules

When asked who should be held responsible for an AI-related breach, there’s no clear answer. Just over half (53%) said the security team should take the blame, either for missing the issues or for failing to set clear guidelines. Meanwhile, nearly as many (45%) pointed the finger at the individual who prompted the AI to generate the faulty code.

This divide highlights a growing accountability void. AI blurs the once-clear boundaries of responsibility. Developers can argue they were just using a tool to improve their output, while security teams can argue they can’t be expected to catch every flaw AI introduces. Without clear rules, trust between teams can erode, and the culture of shared responsibility can begin to crack.

Some respondents went further, blaming the colleagues who approved the code, or the external tools meant to check it. No one knows whom to hold accountable.

The human cost

In our survey, 92% of organizations said they worry about vulnerabilities from AI-generated code. That anxiety fits into a wider workplace trend: AI is meant to lighten the load, yet it often does the opposite. Fast Company has already explored the rise of “workslop,” low-value output that creates more oversight and cleanup work. Our research shows how this translates into security: Instead of removing pressure, AI can add to it, leaving employees stressed and uncertain about accountability.

In cybersecurity specifically, burnout is already widespread, with nearly two-thirds of professionals reporting it and heavy workloads cited as a major factor. Together, these pressures create a culture of hesitation. Teams spend more time worrying about blame than experimenting, building, or improving. For organizations, the very technology brought in to accelerate progress may actually be slowing it down.

Why it’s so hard to assign responsibility

AI adds a layer of confusion to the workplace. Traditional coding errors could be traced back to a person, a decision, or a team. With AI, that chain of responsibility breaks. Was it the developer’s fault for relying on insecure code, or the AI’s fault for creating it in the first place? Even if the AI is at fault, its creators won’t be the ones carrying the consequences.

That uncertainty isn’t just playing out inside companies.
Regulators around the world are wrestling with the same question: If AI causes harm, who should carry the responsibility? The lack of clear answers at both levels leaves employees and leaders navigating the same accountability void.

Workplace policies and training are still behind the pace of AI adoption. There is little regulation or precedent to guide how responsibility should be divided. Some companies monitor how AI is used in their systems, but many do not, leaving leaders to piece together what happened after the fact, like a puzzle missing key parts.

What leaders can do to close the accountability gap

Leaders cannot afford to ignore the accountability question. But setting expectations doesn’t have to slow things down. With the right steps, teams can move fast, innovate, and stay competitive, without losing trust or creating unnecessary risk.

Track AI use

Make it standard to track AI usage and make this visible across teams.

Share accountability

Avoid pitting teams against each other. Set up dual sign-off, the way HR and finance might both approve a new hire, so accountability doesn’t fall on a single person.

Set expectations clearly

Reduce stress by making sure employees know who reviews AI output, who approves it, and who owns the outcome. Build in a short AI checklist before work is signed off.

Use systems that provide visibility

Leaders should look for practical ways to make AI use transparent and trackable, so teams spend less time arguing over blame and more time solving problems.

Use AI as an early safeguard

AI isn’t only a source of risk; it can also act as an extra set of eyes, flagging issues early and giving teams more confidence to move quickly.

Communication is key

Too often, organizations only change their approach after a serious security incident. That can be costly: The average breach is estimated at $4.4 million, not to mention the reputational damage. By communicating expectations clearly and putting the right processes in place, leaders can reduce stress, strengthen trust, and make sure accountability doesn’t vanish when AI is involved.

AI can be a powerful enabler. Without clarity and visibility, it risks eroding confidence. But with the right guardrails, it can deliver both speed and safety. The companies that will thrive are those that create the conditions to use AI fearlessly: recognizing its vulnerabilities, building in accountability, and fostering the culture to review and improve at AI speed.