Business

Will Autonomy Break AI Agents? Here Are The Likely Scenarios

By Jason Andersen, Contributor


The fall 2025 season of technology events kicked off recently with VMware Explore, which my colleagues Patrick Moorhead and Matt Kimball analyzed here. As I prepare for life on the road, one prediction is that we will continue to see momentum around AI agents in terms of both new capabilities and new use cases. And while I consider this to be good news, in some pre-briefings and conversations over the summer one word has kept creeping in that gives me pause. That word is autonomous, which at this point is not well thought out and is, at best, loose marketing talk that needs to be reined in.

Let’s Break Down Agentic Autonomy

The past few weeks have been a whirlwind of conflicting AI news. For example, Nvidia announced massive sales growth right around the same time that MIT published a report saying 95% of AI projects fail. We saw OpenAI suggest that artificial general intelligence was within reach, but then saw a GPT-5 model that was only an incremental improvement over the previous version. This conflicting information is hard to distill and suggests that after three-plus years of breakneck AI growth, it’s time to get beyond the hype and test the real value that this stage of AI technology can provide to enterprises and consumers.

And autonomy (like AGI) is not something this stage of AI technology is ready to deliver. What I am observing is that when a vendor says “autonomy,” it is really pointing to something more akin to agent self-direction. That is a key difference. Autonomy suggests that an agent will be able to self-select when to get involved (almost like intuition) and take action without human input or interaction. Self-direction is where a human in the loop provides context and direction to a reasoning agent, which can then execute tasks with some human assistance or intervention. Self-direction is itself a big step forward. And, to be honest, it is unclear how far we can get toward autonomy without getting closer to AGI, given that today’s reasoning models can make decisions but are not intuitive.
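For technically minded readers, the short Python sketch below makes the distinction concrete. It is purely illustrative; every function name here is a hypothetical stand-in, not any vendor's actual agent API.

# Illustrative only: a toy contrast between a self-directed agent
# (a human in the loop gates each action) and an "autonomous" one
# (no human gate). All names are hypothetical.

def propose_action(task: str) -> str:
    # Stand-in for a reasoning model proposing the next step.
    return f"draft plan for: {task}"

def human_approves(action: str) -> bool:
    # Stand-in for human review; a real system would prompt a person.
    print(f"Review requested: {action}")
    return True

def execute(action: str) -> None:
    print(f"Executing: {action}")

def run_agent(task: str, self_directed: bool = True) -> None:
    action = propose_action(task)
    if self_directed:
        # Self-direction: the agent reasons and acts, but a human
        # supplies context and keeps a veto at the decision point.
        if human_approves(action):
            execute(action)
    else:
        # Marketed "autonomy" would remove this gate entirely; the
        # agent decides when and whether to act on its own.
        execute(action)

run_agent("summarize the quarterly pipeline", self_directed=True)

Nearly everything shipping today implements some version of the first branch; the second branch, where the gate disappears, is the part that remains marketing.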

So right now autonomy is more a marketing thought than a practicality. But what happens after we master self-direction and head towards autonomy? That is where this whole debate gets really interesting. And it seems that some vendors are starting to consider what that future looks like.

Is There An Emerging Split In AI Agent Philosophy?

While CIOs, CTOs, CPOs and the like are spending their time thinking about AI models and platforms, a more fundamental question may need answering. Three years from now, what role do we want humans to play while they are “in the loop”? The best way I can articulate the alternatives is this:

Human as manager — I want my agents to be self-directing, with humans in the loop directing the AI such that the human/AI team achieves breakthrough productivity.

Human as executive — I want my agents to be autonomous, such that AI is replacing humans altogether for some tasks, achieving breakthrough AI productivity.

I firmly believe that there is no wrong answer here, because different companies and teams have different needs. And I think that over time both strategies may be employed within a given company depending on the use case. But if you consider the next three years, which path you choose may influence which vendors and partners you choose, how you manage people and ultimately how you measure team, organization or enterprise impacts.

A great example of this split in thinking between “manager” and “executive” roles is emerging in the application development space. Amazon Web Services recently released Kiro, which is an agent-driven IDE. But Kiro leans heavily on the human in the loop to drive the agent’s action. For instance, in Kiro’s spec development approach the AI builds a plan (or spec) for the application and the human directs the work; this drives improvement in personal productivity and leads to something one could call a super-user. Conversely, in the past few months GitHub released its Copilot Agent Mode, which enables a developer to completely delegate tasks to a virtual team member to handle, driving productivity for the development group as the agent acts on behalf of the developer. While neither of these solutions is fully autonomous (yet), the contrast between them suggests that we are seeing a split in terms of how different vendors envision humans collaborating with AI.

How Does This Potentially Play Out?

The path you choose today could have real consequences over the next couple of years. Let’s start by considering the role of humans in an agent-supported process with today’s models versus future AI models. Roles and relationships could develop along the following lines:

[Matrix: Human/AI agent relationships, today and tomorrow. Source: Moor Insights & Strategy]

In this matrix, you see a future state in which, as models get better, agents act more independently. But how those agents relate to human workers is a key decision point. In many ways, we have been thinking about AI’s impact on an organization in terms of job loss. But maybe the better question is: What do we expect of future workers — in terms of job changes — as AI gains traction and competence?

This approach enables leaders to take a more proactive stance with AI. While the AI market is moving fast and the exact potential of the technology is not yet clear, organizations still need to act intentionally and in line with their strategy. Some questions that could help leaders refine that intent include:

What skills are you going to recruit for and develop in the future?

If agents are our peers, do we need to offer new types of benefits to our workers?

Do agents require their own resources (like software entitlements) when they become disconnected from a specific worker?

In peer relationships, how should feedback loops work? (Do agents get an annual review?)

Do I build, buy or rent these future agents — and how proprietary are their skillsets? And which vendors should I trust to provide them to me?

As we approach genuine autonomy — or, at least, more autonomous AI capabilities — it becomes apparent that many organizations are not fully prepared to address even some of the most basic decisions in this vein. I would advise CIOs, CTOs and COOs to step back from AI technology after completing the first rounds of AI and agentic projects and take a fresh look at their organizations’ long-term business strategies.

And this does not have to be a massive effort; it could simply mean listening closely to what is in annual reports or to what your CEO is saying to the market. For instance, if your goal is to be the low-cost provider of a product or service, more autonomous agents may help you manage margins better. But if you are focused on delivering a premium customer experience or the most innovative products, a strategy built on more self-directed agents may be the right path. By intentionally aligning business strategy to AI potential, it should be easier to achieve funding and adoption of future AI efforts and to justify necessary changes along the way.

Moor Insights & Strategy provides or has provided paid services to technology companies, like all tech industry research and analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking and video and speaking sponsorships. Of the companies mentioned in this article, Moor Insights & Strategy currently has (or has had) a paid business relationship with AWS, Broadcom (VMware), Microsoft (GitHub) and Nvidia.
