The real risk of AI isn’t what it can do. It’s what we fail to do with it—as leaders of people.
We’re not just implementing AI. We’re renegotiating the social contract between technology and trust.
The tools are smarter, faster. But what hasn’t changed is this: People want to matter. They want to be seen, not scanned. Developed, not displaced.
And if we miss that truth, no AI strategy will stick.
THE SIGNAL WE CAN’T IGNORE: ENGAGEMENT IS FALLING
According to Gallup’s most recent data, global employee engagement has dropped to just 21%—a post-pandemic low.
Even more concerning? Manager engagement is falling too—down from 30% to 27% in 2024—creating a ripple effect across teams. The cost is staggering: Gallup estimates $438 billion in global productivity loss tied directly to disengagement.
Let’s be clear: This isn’t about morale—it’s about performance, trust, and culture at scale.
And if we deploy AI without fixing the engagement gap, we risk automating our way into even deeper disconnection.
THE REAL DIVIDE: TWO TYPES OF SECURITY
A colleague once said to me, “Companies are worried about cybersecurity. People are worried about job security.”
That tension is everywhere.
At Greif, we experienced it when rolling out a mobile authentication tool. Though it was designed to improve access, the concern was immediate: “Are you tracking me?”
It didn’t matter that our intent was protective. What mattered was the perception.
That’s the challenge with AI, too. People don’t fear automation. They fear invisibility. They fear being left behind.
HOME VERSUS WORK: THE AI PARADOX
At home, we welcome AI. It curates playlists, finishes texts, and predicts our routes.
At work, it creates anxiety. Why? Because at home, AI serves us. At work, it can feel like it’s replacing us.
If we don’t name this tension—and lead through it—we will not build trust. We’ll only deepen silence.
AI IS A FORCE MULTIPLIER—IF WE LET IT BE
At Greif, we’re integrating AI into areas like talent acquisition—not to eliminate roles, but to elevate them.
Our recruiters are not worried about resume-screening tools. They’re grateful to have more time to coach managers, connect with candidates, and focus on culture-building.
And then there’s Maria. A plant line leader with 17 years of experience. When we introduced predictive maintenance, she didn’t resist. She learned what the AI could see—and what it missed. She became the bridge between the algorithm and the operation.
And her team? They didn’t just see her as a line leader anymore. They saw her as a translator of the future. AI didn’t reduce her value. It revealed it.
That’s the story I want more employees to experience—and more leaders to enable.
GUARDRAILS OVER HYPE
Too many AI conversations begin at 50,000 feet—with strategy slides and tech specs—then crash into daily operations without any human context.
My approach? Operate at 15,000–20,000 feet. High enough to see the future. Low enough to stay human.
Leaders must:
Connect the “why” to the business.
Connect the “what” to people’s reality.
Set guardrails. Then walk the path together.
This isn’t just a transformation. It’s trustwork.
FROM CHANGE TO CHOICE: THE PATH FORWARD
Gallup defines engagement as psychological ownership—the belief that your work matters and your voice is heard.
If only 1 in 5 employees are engaged, we’re not facing a tech barrier. We’re facing a leadership one.
So our job isn’t to prove AI is powerful. It’s to prove that people are essential within it.
That means:
Inviting them into the design process.
Clarifying what won’t change (values, purpose, integrity).
Democratizing AI literacy.
Celebrating what machines can’t replicate: empathy, ethics, and trust.
Let’s stop trying to make people comfortable with change. Let’s make them confident in their value.
If you’re ready to lead through AI, start here:
Create listening loops: Host open forums or anonymous channels where employees can share AI-related fears—and respond visibly.
Map AI to your values: Articulate how each initiative strengthens—not sidelines—your human culture.
Celebrate new heroes: Spotlight those embracing AI with curiosity and courage. Let them lead the storytelling.
This requires leadership that listens.
You can’t automate culture. You can’t mandate trust. An AI initiative without a human foundation is just a tech stack.
To lead with integrity in this moment, we must:
Speak plainly.
Listen deeply.
Act decisively—but with humility.
This isn’t about racing toward innovation.
It’s about bringing your people with you—every step of the way.
BOTTOM LINE
AI doesn’t define the future. People do.