
Walk into any finance conference today and you will hear the same chorus: everyone has an “AI-powered” something. Close books faster with AI, forecast with AI, reconcile with AI.

Yet inside most finance teams, the reality looks very different. A finance leader I spoke with recently had rolled out an AI assistant to a team of around thirty. After months of training and internal promotion, usage data showed that only a handful of people were using it regularly. This is not an isolated case; it is becoming a pattern in many firms.

This is not a product problem. It is an adoption problem. And the roots of that problem sit deep in human psychology and organisational culture, not in model accuracy or UI polish. This article is my attempt to unpack that, using data, behavioural insight and some hard truths about the future of finance roles.

The uncomfortable data: AI is everywhere, yet failing quietly

Let us start with the contradiction. On one hand, various studies estimate that 80 to 85 per cent of AI projects fail to deliver their intended outcomes or ever reach production. So AI is “everywhere”, but value is not.

In finance specifically, the tension is even sharper:

- A survey of senior finance professionals found 23 per cent fear AI could put them out of a job, even though over half of finance functions are already onboarding AI or planning to do so within twelve months.
- Another study reports 57 per cent of finance leaders believe AI will shrink headcount in their departments.
- At the same time, 70 per cent of finance professionals are increasing their investment in AI.

In other words, finance leaders know they must adopt AI. Finance professionals know AI is coming for a chunk of their tasks. Yet actual tool usage in day-to-day workflows is patchy at best.

The real blockers: psychology, not technology

When we talk about AI adoption, we often jump straight to “better training” or “better UX”. Those matter, but they sit on top of something deeper.
From what I have seen across finance teams, four human patterns keep repeating.

Fear of redundancy and identity loss

For many finance professionals, their identity has been built over years on things like:

- Knowing the ERP or spreadsheet model inside out
- Being the person who can reconcile complex statements quickly
- Being the “trusted pair of hands” for closing the books

Now we introduce tools that say, more or less, “this can automate a lot of that”.

This fear is not imaginary. A Brookings Institution analysis estimated that more than 30 per cent of all workers could see at least half of their tasks disrupted by generative AI. In finance, where much of the junior workload is rules-driven and repetitive, the risk is even more concentrated. Many leaders openly say entry-level tasks will shrink significantly, and a recent survey of global business leaders found 41 per cent are already using AI to reduce headcount, with about a quarter saying most entry-level tasks can now be done by AI.

If I am a junior analyst, why would I enthusiastically adopt a tool that seems designed to erase half of my job description? So people stall. They do the minimum. They “forget” to open the AI assistant tab. They quietly hope the hype will pass.

“Too busy for the wheel” syndrome

You have probably seen the cartoon: two people are dragging a heavy cart with square wheels. Someone comes along and offers them a round wheel. The reply is, “No thanks, we are too busy.”

That is most finance teams today. I see this pattern constantly:

- Teams are buried in the month-end close
- They are trying to hit filing deadlines
- Clients are chasing numbers and explanations

In that state, anything that is not “mandatory” feels optional, and AI tools sit in that optional bucket. The irony is obvious: the people who are most overwhelmed by manual work are exactly the ones who would benefit most from automation. Yet being overwhelmed is precisely what stops them from pausing and changing how they work.
The training and confidence gap

Even when people are curious, they often feel under-equipped. So we have a situation where:

- People are experimenting in isolation
- They are unsure about accuracy, data privacy and company policy
- Junior staff, especially, may not have the judgement to evaluate AI outputs

This leads to a predictable outcome. A few bad experiences (or scary headlines) lead to a quiet retreat: “I tried it, it is not that great”, which often means “I am not confident enough to rely on it”. Without structured training, guardrails and examples tied to real finance workflows, AI remains a side experiment, not a core part of the job.

Culture and process: what leaders tolerate becomes the norm

Finally, culture. Many organisations say they want to be “AI first”. Very few rewrite their processes to reflect that. In practice, adoption only really changes when leaders make AI part of the definition of “good work”: AI is not a “nice to have”; it is how we work.

Organisations that invest properly in change management, including clear expectations, training and incentives, see up to seven times higher success rates in technology projects than those that simply deploy tools and hope. In finance, this often means:

- Creating explicit rules such as “no spreadsheet comes up for review unless it has been checked by our AI copilot, and the AI output is attached”
- Rewarding team members who build better workflows using AI
- Making AI literacy part of performance and promotion conversations

Without those cultural signals, AI remains an optional accessory, not a core system.

AI will reduce roles. It will also upgrade the ones that remain.

We should not sugar-coat this. Some roles will shrink. Some entry-level tasks will disappear. Reports already show that 25 per cent of leaders believe most entry-level tasks can now be done by AI, and many are already reducing junior roles. However, that is only half the story.
The other half is that the nature of finance work is changing from producing numbers to explaining numbers, challenging assumptions and shaping decisions.

AI can already:

- Pull and clean data faster than a human
- Spot patterns and anomalies in large datasets
- Draft first versions of memos, reconciliations and board packs

What it cannot yet do, and is unlikely to fully do soon, is:

- Truly understand the context of a business model
- Navigate internal politics and stakeholder concerns
- Decide which trade-offs are acceptable
- Build trust with boards and founders

That is where the future value of finance professionals will sit.

Closing thought

The real risk with AI in finance is not that machines will replace people; it is that people will refuse to evolve with the machines. The future of finance will belong to those who can partner with technology, not compete against it. The choice, ultimately, is not between humans and AI; it is between teams that adapt and teams that get left behind.