By Bernard Marr, Contributor
Copyright Forbes
Agentic AI promises a future where intelligent digital agents handle complex tasks across industries, but significant barriers stand in the way.
The era of agentic AI is here, or so we are told, bringing super-smart AI assistants capable of carrying out complex tasks on our behalf.
This represents the next generation of AI beyond current chatbots like ChatGPT and Claude, which simply answer questions or generate content.
Those building (and selling) the tech tell us we are on the verge of a fully automated future where AIs cooperate and access external systems to carry out vast numbers of routine knowledge and decision-making tasks.
But just as emerging concerns around hallucinations, data privacy and copyright have put up barriers to generative AI that some organizations have found insurmountable, agents have their own set of obstacles.
So, here’s my rundown of the challenges that developers of AI agents, organizations wanting to leverage them, and society at large will have to overcome if we’re going to deliver the promised agentic future.
Trust

The biggie. To achieve the critical mass needed for mainstream adoption, we have to be able to trust AI agents. This is true on several levels: we have to trust them with the sensitive and personal data they need to make decisions on our behalf, and we have to trust that the technology works and that our efforts won’t be hampered by known AI flaws like hallucinations. And if we are trusting agents to make serious decisions, such as buying decisions, we have to trust that they will make the right ones and not waste our money.
Agents are far from flawless, and it’s already been shown that it’s possible to trick them. Companies see the benefits but also understand the real risks of breaching customer trust, which can include severe reputational and business damage. Mitigating these risks requires careful planning and compliance, which creates barriers for many.
Lack Of Agentic Infrastructure
Another problem is that agentic AI relies on the ability of agents to interact and operate with third-party systems, and many third-party systems aren’t set up to work with this yet. Computer-using agents (such as OpenAI Operator and Manus AI) circumvent this by using computer vision to understand what’s on a screen. This means they can use many websites and apps just like we can, whether or not they’re programmed to work with them. However, they’re far from perfect, with current benchmarking showing that they’re generally less successful than humans at many tasks.
As agentic frameworks mature, the digital infrastructure of the world is likely to mature around them. Most people reading this will remember that it took a few years from the introduction of smartphones to mobile-friendly websites becoming the norm. However, at this early stage, this creates risk for operators of services like e-commerce or government portals that agents need to interact with. Who is responsible if an agent makes erroneous buying decisions or incorrectly files a legal document? Until issues like this are resolved, operators may shy away from letting agents interact with their systems.
Security Concerns
It doesn’t take much imagination to see that, in principle, AI agents could be a security nightmare. With their broad and trusted access to tools, platforms and our data, they are powerful assistants but also high-value targets for cybercriminals. If an agent is hijacked or exploited, criminals could gain decision-making access to our lives. Combined with other high-tech attacks, such as deepfake phishing attempts, AI agents will create new and potentially highly problematic avenues of attack for hackers, fraudsters and extortionists. Both individuals and businesses must deploy agents in a way that’s resilient to these types of threats, and not everyone is yet capable of doing so.
Cultural And Societal Barriers
Finally, there are wider cultural concerns that go beyond technology. Some people are uncomfortable with the idea of letting AI make decisions for them, regardless of how routine or mundane those decisions may be. Others are nervous about the impact that AI will have on jobs, society or the planet. These are all totally valid and understandable concerns and can’t be dismissed as barriers to be overcome simply through top-down education and messaging.
Unfortunately, there’s no shortcut available here. Addressing this will involve demonstrating that agents can work in a reliable, trustworthy and ethical way. The key is to pull this off while also building a culture that manages change effectively and shares the benefits of agentic AI inclusively.
Agents Of Tomorrow
The vision of agentic AI is quite mind-boggling: Millions of intelligent systems around the world interacting to get things done, in ways that make us more efficient and capable.
As we’ve seen, however, the obstacles to this are just as likely to be human as they are technological. As well as solving fundamental issues like AI hallucination, and building infrastructure that enables agents in ways that are trustworthy and accountable, we have to prepare society for a fundamental shift in the way people work with machines.
Accomplishing this will pave the way for AI agents to hit the mainstream in a safe way that enhances our lives rather than exposes us to risks.