Trust, talent, transformation: The 3 hidden costs of poor AI


2025-11-06



AI is now a permanent fixture on the boardroom agenda. Every executive team I meet is exploring how it can improve efficiency, open new markets, or strengthen customer experience. Yet many of those same leaders admit they are not getting the return they expected, and the numbers back this up. One recent MIT/NANDA study found that 95% of generative AI projects fail to scale beyond the pilot stage. The immediate cost of these failures is obvious: wasted budgets and missed timelines. But the deeper, more dangerous costs are harder to see. When AI underdelivers, organizations pay in three critical areas: trust, talent, and transformation.

1. Trust

Trust is the currency of any business relationship, and while it takes years to build, it can disappear overnight. When customers encounter AI systems that feel inaccurate or biased, they stop engaging. And once trust is broken, it costs far more to repair than to protect in the first place. This is not hypothetical. A KPMG survey found that half of people do not trust AI's accuracy, citing concerns about misuse, safety, and poor regulation. For leaders, that distrust translates directly into lower adoption, reduced customer loyalty, and slower revenue growth.

Where does trust begin? With the data. If the training data behind a model represents only a narrow demographic, its outputs will reflect those blind spots. Reliable AI needs diversity built in from the start. That means data collected across ethnicity, gender, age, and more. It also means making sure systems are tested in the places they'll actually be used, from a noisy retail floor to a hospital ward, not just in controlled environments. When leaders insist on data quality and diversity, they are not just protecting algorithms; they are protecting the reputation of their organization.

2. Talent

AI also affects the workforce inside an organization, since employees are often the first to use new tools. If those tools are unreliable or irrelevant, confidence evaporates quickly. When that happens, adoption slows, productivity gains disappear, and morale takes a hit. McKinsey reports that while 90% of employees say they use generative AI at work, only 21% describe themselves as heavy users. That gap tells us something important: access does not equal impact, and employees will only lean in when the tools feel useful in the flow of their actual work.

This is where the way data is collected makes the difference. When models are trained on scenarios that match the environments employees work in, whether that is a factory floor, a retail storefront, or a customer service desk, the outputs are far more reliable. Employees are more likely to trust and adopt tools that reflect the situations they face every day. In a competitive talent market, leaders should see this not as a technical question but as a workforce priority.

3. Transformation

The most damaging cost of poor AI quality is the impact on transformation itself. Failed pilots and expensive rework drain resources. Over time, the credibility of transformation programs erodes, and competitors with more disciplined approaches start to pull ahead. An estimated 70% of digital transformation projects fail; some calculate that waste at up to $900 billion annually. AI is now one of the most common reasons these efforts stall.
A big part of the problem is that many organizations underestimate the scale of data required to make AI work at the enterprise level. Take autonomous driving: safe navigation requires millions of annotated images and human-validated actions. In retail, training AI to recognize products, logos, and customer behaviors means sourcing data from thousands of stores and millions of transactions. In healthcare, accurate models rely on annotated medical images, electronic health records, and patient monitoring data. Without planning for that level of scale and complexity, transformation efforts quickly run out of momentum.

For leaders, the lesson is straightforward. Transformation is cumulative: each failure makes the next initiative harder to justify and harder for people to believe in. The only sustainable way forward is to treat AI with the same rigor applied to cybersecurity or financial controls, since prevention is always cheaper and safer than remediation.

A BOARD-LEVEL RESPONSIBILITY

AI failures do not only break models; they break relationships, weaken customer trust, reduce employee confidence, and stall transformation. These are not technical side effects. They are business risks that demand the same attention as any other systemic issue.

As boards and executive teams look to scale AI, quality cannot be left as an afterthought. It must be built in from the start. That means diverse and representative data, tested in real environments, collected at global scale, and governed with the same discipline as any enterprise-critical system.

The message for leaders is clear. AI quality is not a technical decision; it is a boardroom responsibility.

Andrew Duncan is CEO at Qualitest.
