Copyright Forbes

Meghan Tisinger is Managing Director and Global Head of Practices at Leidar, where she specializes in crisis communications.

Crisis communications once revolved around issues like defective products, executive missteps, natural disasters and employee controversies. Today, the greatest accelerant of reputational risk is something far less predictable: AI producing misinformation that is completely out of our control. One misfired chatbot reply, deepfake video or AI-written statement riddled with bias, and suddenly a company is in the headlines—not for its products or leadership, but because AI created an entirely new and totally false narrative that spread like a virus.

This is not hypothetical. When an Air Canada chatbot gave a passenger the wrong information about refunds, a court ruled the airline had to honor the bot’s words as if spoken by a human agent, turning a minor glitch into a reputational and legal crisis about accountability. In Hong Kong, criminals used voice-cloning software to impersonate an executive and trick an employee into transferring $25 million, rattling global trust in business safeguards. Even brands experimenting with AI-generated marketing images have been caught in firestorms, accused of reinforcing racial or gender stereotypes. If people cannot trust your voice, images or even emails, then your reputation becomes hostage to technological failure and public doubt.

The New Reality Of AI Crises

While a traditional controversy might unfold over days or weeks, AI-driven crises erupt in minutes. When an AI system misfires, it’s not one person hitting send on the wrong message; it’s a supercharged engine churning out errors at machine speed, pushing flawed outputs into inboxes, timelines and newsfeeds everywhere. A communications crisis that starts with AI requires a different approach than a traditional one. It’s no longer enough to monitor, wait and apologize.
In a climate where technology is both a tool and a threat, the new skill is learning how to mobilize allies and redirect narratives in real time. In the AI era, it’s not just about managing outrage but also mobilizing defense. Supporters can include customers, partners, credible third-party voices and internal advocates who already trust your brand and want it to succeed. Equipping customers can be as simple as posting content they can share, while business partners, internal staff and third parties may need additional materials like fact sheets, approved messaging and press materials to counter misinformation and reinforce your position in the channels where the crisis is accelerating.

The New Crisis Playbook

After nearly 20 years in crisis and reputation management, I’ve learned that every crisis teaches you something new. The old playbooks still have value, but they no longer offer the full protection brands need when machines can create, spread and escalate harm faster than humans can respond. In an AI-driven environment, organizations must rethink how they communicate, who they activate and how quickly they move. Here are five rules companies should follow:

1. Anticipate the flashpoints. If you’re deploying generative tools, you already know the risks. AI tends to break in predictable places: bias in outputs, hallucinated facts, copyright misuse, data privacy breaches, inappropriate or unsafe recommendations and off-brand responses that undermine trust. Communications teams should map out these common failure modes in advance and develop scenario playbooks around each. Each playbook should include a media holding statement that can be quickly updated with the relevant facts, plus talking points and a stakeholder map. Having preapproved messaging that affirms your values, clarifies how the failure occurred and explains what safeguards will change going forward is key to overcoming any crisis.
If you wait to write that language until the headlines hit and customers are posting screenshots, you’re already too late.

2. Arm your allies before the outrage. Your employees, partners and advocates should never be left scrambling for answers. They are your most credible messengers in a moment when trust is fragile and speed is everything. Hold quarterly trainings, and host a team town hall anytime a new AI deployment occurs. People forget under pressure, so frequent refreshers keep everyone confident and aligned. I also suggest clients develop short, role-specific talking points and a living FAQ that evolves as your tools do. Place approved language directly into the places teams communicate most, whether that’s Slack channels, Teams groups, sales enablement tools or customer support scripts. Make the communications tools easily accessible and relevant to each person so they know the role they’re playing in defending the company’s reputation.

3. Track both criticism and advocacy. Most companies track their competition but forget about the people willing to defend them without hesitation. A crisis is shaped not only by who is angry but also by who is willing to speak up on your behalf. Monitor the voices correcting misinformation, providing context and reinforcing your intentions. These advocates might be loyal customers, trusted partners, internal teams, influential followers or respected experts who genuinely believe in what you’re building.

4. Recognize that reputation management is a team sport. AI crises crash into every corner of a company at once: IT scrambling to diagnose the failure, legal assessing exposure, compliance reviewing standards, customer care and sales fielding panic, PR trying to get ahead of the narrative and leadership searching for the right tone. If those groups operate in silos, mixed messages start flying and trust evaporates.
The brands that survive these moments are the ones where teams know exactly how to work together under pressure. Our clients that have thrived during a crisis have clearly defined roles and responsibilities, a healthy company culture that values consistent internal communications and an easily accessible location for messaging and escalation protocols.

5. Update your dusty playbook. Too many organizations are still relying on crisis communications manuals written for a pre-AI world. Those frameworks don’t move fast enough for deepfakes, chatbot errors or viral misinformation. Your playbook should include AI-specific scenarios, preapproved messaging that can be deployed in minutes and clear escalation paths for when automation misbehaves. Build workflows for monitoring outputs in real time, flagging anomalies and freezing systems when things go sideways. Train your spokespeople to communicate confidently about algorithms, safeguards and accountability. A crisis plan is a living document. Revisit it regularly. Stress-test it with simulations. Practice like the stakes are real.

From Risk To Redirection

For most companies, the instinct is to treat AI failures as technical problems. But when those failures go public, they become reputational. And reputations are won or lost in how quickly and credibly you respond. We’ve entered an era where AI crises are not just possibilities but inevitabilities. The companies that will thrive are those that stop treating these incidents as embarrassments to minimize and start treating them as moments to prove their values, demonstrate accountability and build trust in real time.