Enterprise AI, Large-Enterprise Edition: What MIT Missed

By Sanjay Srivastava, Contributor

Copyright Forbes

[Image: Artificial Intelligence. Credit: Stefan Cosma]

When MIT published its State of AI in Business 2025 report, one headline dominated the conversation: 95% of organizations report no measurable return.

Not so fast — it doesn’t ring true in my experience.

Across the leading Global 1000 enterprises I engage with, I see a different reality. Leaders across those firms report sales conversion up 1.7x, customer satisfaction up 1.5x, software development productivity up 35%, manufacturing throughput up 30%, and document analysis times down 80%. And if you look at AI fluency — daily active AI users, prompts per FTE, share of workflows with AI steps, and AI-enabled cycle-time compression — these outcomes show adoption compounding, not stalling.

So why the different take?

One reason: selection bias. Their analysis draws on 52 interviews, 300 publicly disclosed initiatives, and 153 surveys at conferences that blend company sizes and contexts. My vantage point is the boardrooms of the Global 1000, viewed through the lens of a global technology think tank I chair, which counts around 200 CIOs, CTOs, CDOs, and CAIOs across these enterprises.

Another reason: different lenses and definitions. Their conclusion centers on pilots earlier in 2025. My observations are based on production programs across the top 1000 firms through Q3 of the year. Importantly, we also track another unit of measure: AI fluency — AI embedded across thousands of tasks rather than just a pass/fail pilot. AI fluency is an increasingly important interim metric — it captures cycle-time compression, prompts per FTE, percentage of workflows with AI steps, AI-enabled conversion, throughput, and cost to serve — and becomes a long-term driver of durable competitive advantage.

In my experience, the journey to AI maturity runs in three phases — personal productivity, team productivity, and company productivity — and it's really the latter two that reliably translate into bankable returns. Phase one is mostly assistive, but if I save 10% of my time writing an email with an AI assistant, my CFO can't bank that dollar; it's reallocated, not monetized. That's the core challenge with "copilot-ish" use. By contrast, vertical agentic use cases at the departmental level — and multi-agentic workflows at the enterprise level — target measurable outcomes whose economics can be underwritten: fewer touches, shorter cycle times, higher conversion, lower cost to serve. What you measure determines the answer. And "returns" should include AI fluency — where capability compounds across thousands of tasks, setting the stage for P&L impact in phases two and three.

Here is what I am learning about driving returns with AI.

Leaders treat generative and agentic AI not as projects but as a capability that rewires how work gets done. By “agentic AI,” I mean bounded systems that perceive context, retain memory, plan multi-step work, coordinate tools and data, and execute with measured autonomy within guardrails — software agents that open tickets, write tests, and push reviewed commits; finance agents that reconcile and route exceptions; clinical-trial protocol agents that assemble evidence packs and trigger next-step workflows. These are production workflows with observability, service levels, and owners.

Successful execution follows a consistent pattern. Organizations start with high-impact, low-complexity use cases to create visible wins, codify what works into reusable kits — prompts, guardrails, data hooks — and then scale vertically where P&L impact concentrates. The point isn’t accumulating proofs of concept; it’s compounding know-how.

And across these, operating-model redesign is critical. Generative and agentic AI require governance built into the flow of work, not inspected afterward; workforce designs where “digital employees” take accountable steps with service-level expectations and clear ownership; funding that backs durable, cross-functional value streams; and management practices that assume decisions are made with AI in the loop — faster, more traceable, more testable.

Finally, leadership behavior is almost always the forcing function. Transformations accelerate when executives are AI-native — using AI daily for drafting, analysis, and decision support; prompting live in operating reviews to compress cycles; and prototyping alternatives in the room so AI-assisted work becomes the norm. When that habit is visible, norms shift — and so do results.

In summary: viewed through the MIT lens — sample set, time horizon, survey methodology, and unit of analysis — "95% failure" is the snapshot. But viewed through the lens of the Global 1000 driving AI in production, the world looks very different. Generative, and increasingly agentic, AI is moving from pilots to production, and the companies redesigning their operating models accordingly are already seeing compounding capability and measurable impact. The real divide is not between hype and reality; it is between teams waiting on the sidelines and those embedding AI into how they operate.

Disclosure: I run the Executive Technology Board, am a venture investor in data and AI startups, and serve as Genpact's chief digital strategist; over the long term, I may benefit from increased AI adoption.
