
The financial industry and agentic AI are on a cautious path

Agentic artificial intelligence is moving rapidly into the mainstream, carried by ever-shorter AI adoption cycles. Although initially met with skepticism, the technology now dominates the coding-tool releases of the world’s preeminent AI companies.
Legendary software developers, including Steve Yegge, envision a future where managing AI agents becomes the core of software work. And that future is arriving fast. Anthropic has been so bold as to call 2025 the “year of the agent.” At the same time, AI is already tackling problems beyond human reach: Google’s AlphaEvolve, for instance, discovered algorithms that cut 0.7% of compute time across Google’s infrastructure, delivering huge savings in energy and cost.
The implications are significant for virtually every industry across the global economy. But for at least one industry the path forward will be decidedly more cautious.
Financial services—an industry built on trust, security, and strict regulations—is not the place for unmitigated “vibe coding.” The potential of agentic AI to transform full-stack application development may be immense, but so are the risks. When code breaks or application security fails, customers suffer, and criminals can take advantage.
Of course, the question isn’t whether finance will adopt agentic AI for development, but how it can do so strategically and safely.
What is agentic AI?
In this context, agentic AI means AI applied to software development, a promising domain because the work is well-defined, process-oriented, and objectively measurable and testable. Agentic coding tools aim to understand goals, formulate plans, and use tools and APIs to execute complex tasks autonomously. Humans can still step in to guide an agent, but the goal is a “teammate” or “assistant” that can carry out multi-step work.
The dream is an AI that can not only write code but also write product specs, manage testing, assist in documentation, handle pull requests, and perform security reviews.
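To make the pattern concrete, here is a minimal sketch of that goal-plan-act loop, with a stubbed model call and a single stand-in tool. Every name here is a hypothetical placeholder, not any vendor’s actual API.
```python
# Illustrative sketch of an agentic loop: the model plans the next step,
# an allowlisted tool executes it, and the observation feeds back into
# the next planning call. `call_model` and the tool are hypothetical
# stand-ins, not any vendor's actual API.

def call_model(goal: str, history: list[str]) -> dict:
    # Stand-in for an LLM call; a real system would send the goal and
    # history to a model API and parse its chosen action.
    if any("passed" in h for h in history):
        return {"action": "done", "summary": "Goal achieved."}
    return {"action": "run_tests", "args": {"path": "tests/"}}

def run_tests(path: str) -> str:
    # Stand-in tool; a real agent would shell out to a test runner here.
    return f"42 tests passed in {path}"

TOOLS = {"run_tests": run_tests}  # explicit allowlist of callable tools

def agent_loop(goal: str, max_steps: int = 10) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        step = call_model(goal, history)
        if step["action"] == "done":
            return step["summary"]
        tool = TOOLS[step["action"]]          # unknown actions fail loudly
        history.append(tool(**step["args"]))  # execute, then observe
    return "Stopped: step budget exhausted."  # hard cap keeps the loop bounded

print(agent_loop("Make the test suite pass"))
```
The hard step budget and the explicit tool allowlist are the two details worth noticing: they are what separate a bounded “digital apprentice” from an open-ended process.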
However, the financial world is unique. It demands near-perfect availability, ironclad security, and compliance with regulations. Unchecked AI autonomy could introduce security flaws or compliance breaches.
So how can financial institutions harness this power responsibly? The key is a phased, cautious approach, starting with lower-risk software development processes before progressing to more complex implementations.
Start small, start smart
Financial services enterprises should start by applying agentic AI in lower-risk, internal areas. A good entry point is automated testing, where AI can generate test cases for internal APIs or backend services in non-critical systems, helping teams build familiarity while improving test coverage in a controlled setting. AI can also speed up documentation by generating technical references for internal systems, accelerating code iteration and reducing technical debt. Another promising area is CI/CD pipeline automation for internal tools, where the stakes are lower but the benefits of more intelligent automation can be significant.
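As a rough illustration of what that testing entry point might produce, the sketch below shows the kind of AI-proposed test file a team could stage for mandatory review. The validator is a simplified inline stand-in for an internal helper, and the review tag is an invented convention.
```python
# Sketch: the kind of test file an agent might propose for an internal
# backend helper. `validate_iban` is a simplified stand-in defined inline
# so the example is self-contained; a real run would import the team's
# own implementation. The review tag below is a hypothetical convention.

import re
import pytest

def validate_iban(iban: str) -> bool:
    # Simplified stand-in: country code + 2 digits + 11-30 alphanumerics.
    # Real IBAN validation would also verify the mod-97 checksum.
    return bool(re.fullmatch(r"[A-Z]{2}\d{2}[A-Z0-9]{11,30}", iban))

# AI-GENERATED TESTS: pending human review before merge.
@pytest.mark.parametrize("iban,expected", [
    ("GB82WEST12345698765432", True),   # well-formed example IBAN
    ("GB82WEST1234", False),            # too short
    ("", False),                        # empty input
    ("gb82west12345698765432", False),  # lowercase should be rejected
])
def test_validate_iban(iban, expected):
    assert validate_iban(iban) is expected
```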
advertisement
Focusing on backend services rather than complex user interfaces is especially prudent, since backend logic is more structured and offers a better proving ground for AI-assisted development.
The non-negotiable element
Agentic AI in finance must operate with human oversight. These systems should be “co-pilots” or “digital apprentices,” augmenting human developers, not replacing them. Every significant piece of AI-generated code, design, or test plan needs rigorous human review and validation. This “human-in-the-loop” model is essential. Strict code review is already the norm, but financial institutions will need to adapt to a higher volume of reviews. This means improving developer tooling to help humans keep up with AI agents.
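One way a team might operationalize that gate, sketched under assumptions: a pre-merge check that blocks any branch carrying an AI-authorship marker until at least one human has approved it. The commit trailer and approval count are hypothetical conventions, not features of any specific platform.
```python
# Sketch of a pre-merge gate: commits marked as AI-generated must carry
# at least one human approval before the branch can merge. The trailer
# ("AI-Generated: true") and the approvals input are assumed conventions.

import subprocess
import sys

AI_TRAILER = "AI-Generated: true"

def commit_messages(base: str, head: str) -> list[str]:
    # Read the full messages of all commits on the branch under review.
    out = subprocess.run(
        ["git", "log", "--format=%B%x00", f"{base}..{head}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [m for m in out.split("\x00") if m.strip()]

def gate(base: str, head: str, human_approvals: int) -> int:
    ai_authored = any(AI_TRAILER in msg for msg in commit_messages(base, head))
    if ai_authored and human_approvals < 1:
        print("Blocked: AI-generated commits require a human review.")
        return 1
    print("OK to merge.")
    return 0

if __name__ == "__main__":
    # Usage: gate.py <base-ref> <head-ref> <human-approval-count>
    sys.exit(gate(sys.argv[1], sys.argv[2], int(sys.argv[3])))
```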
As capabilities mature, multi-agent frameworks will coordinate specialized AI agents for coding, security scanning, and compliance so that they cooperate on complex tasks. Financial services companies should organize these agent workflows around the same specializations found in their human teams.
Specific cybersecurity measures are also critical. Agents must operate in sandboxed environments with limited privileges. Defenses against “prompt injection,” where malicious instructions hidden in an agent’s inputs hijack its behavior, are necessary. Secure API usage and input validation are fundamental. Humans must actively oversee security and treat agent output as potentially adversarial, because these deployment patterns are still new.
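A minimal sketch of what least privilege can look like in practice, with illustrative names throughout: every tool call the agent requests passes through an allowlisted broker that validates arguments and logs the call before anything executes.
```python
# Sketch of a least-privilege tool broker: the agent can invoke only
# allowlisted tools, arguments are validated before anything executes,
# and every call is logged for audit. All names and paths here are
# illustrative assumptions, not a real product's interface.

import subprocess
from pathlib import Path

SANDBOX_ROOT = Path("/srv/agent-sandbox")  # hypothetical jail directory

def read_file(relpath: str) -> str:
    target = (SANDBOX_ROOT / relpath).resolve()
    if not target.is_relative_to(SANDBOX_ROOT):  # block path traversal
        raise PermissionError(f"outside sandbox: {relpath}")
    return target.read_text()

def run_tests() -> str:
    # Fixed command with no agent-controlled arguments: text smuggled in
    # via prompt injection can never change what gets executed here.
    done = subprocess.run(["pytest", "-q"], cwd=SANDBOX_ROOT,
                          capture_output=True, text=True, timeout=300)
    return done.stdout

ALLOWED_TOOLS = {"read_file": read_file, "run_tests": run_tests}

def broker(tool: str, **kwargs):
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allowlisted: {tool}")
    print(f"audit: {tool} {kwargs}")  # every call leaves an audit trail
    return ALLOWED_TOOLS[tool](**kwargs)
```
The design choice worth copying is that the broker, not the agent, decides what can run: injected instructions can at worst request a tool that the allowlist refuses.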
Phased rollouts and extensive testing
A strong adoption strategy should unfold in phases. In the first year, enterprises should focus on foundational readiness by establishing governance, security protocols, and AI ethics frameworks. Ideally, this will happen in tandem with investments in talent and the launch of small pilots in low-risk internal areas.
The second year should be about expanding capabilities, gradually introducing AI into more stages of the software development lifecycle—particularly backend components—while strengthening human validation processes.
From year three onward, enterprise leaders can begin scaling for broader impact, applying AI to more complex tasks such as front-end development, experimenting with multi-agent systems, and continuously monitoring, auditing, and refining their use. The guiding principle throughout is simple: Move smartly and carefully, and break nothing.
Agentic AI will shape the future of application development in finance. The task is to integrate these tools strategically as collaborators, enhancing human ingenuity while ensuring innovation reinforces security, resilience, and trust. The path forward is one of careful steps, robust governance, and responsible innovation.
Ismail Amla is a senior vice president and leader of Kyndryl Consult.