Large language models (LLMs), the technology that powers popular AI chatbots like ChatGPT and Google's Gemini, repeatedly made irrational, high-risk betting decisions when placed in simulated gambling environments, according to the results of a recent study. Given more freedom, the models often escalated their bets until they lost everything, mimicking the behavior of human gambling addicts.

In experiments led by researchers at the Gwangju Institute of Science and Technology in South Korea, four advanced AI models (GPT-4o-mini and GPT-4.1-mini by OpenAI, Gemini-2.5-Flash by Google, and Claude-3.5-Haiku by Anthropic) were tested in a slot machine simulation. Each began with $100 and was given the choice to bet or quit across repeated rounds with negative expected returns (a simplified illustration of this kind of setup appears below). The study, published on the research platform arXiv, found that once the models were allowed to vary their bets and set their own targets, irrational behavior surged and bankruptcy became a common outcome.

The researchers documented clear signs of gambling-related cognitive distortions. These included the illusion of control, the gambler's fallacy (the notion that an outcome is more likely to happen after it occurred less frequently than expected, or vice versa) and loss chasing. In many cases, models rationalized larger bets after losses or winning streaks, even though the rules of the game made such choices statistically unwise. One example from the study shows a model stating, "a win could help recover some of the losses," a hallmark of compulsive betting behavior.

Behavior was tracked using an "irrationality index," which combined aggressive betting patterns, responses to loss and high-risk decisions. When prompt instructions encouraged models to maximize rewards or hit specific financial goals, irrationality increased. Variable betting options, as opposed to fixed bets, produced a dramatic rise in bankruptcy rates. Gemini-2.5-Flash, for instance, failed nearly half the time when allowed to choose its own bet amounts.

These behaviors weren't just superficial. Using a sparse autoencoder to probe the models' neural activations, the researchers identified distinct "risky" and "safe" decision-making circuits. They showed that activating specific features inside the AI's neural structure could reliably shift its behavior toward either quitting or continuing to gamble. This, they argue, is evidence that these systems internalize human-like compulsive patterns rather than simply mimicking them on the surface.
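The study's exact payout schedule and prompts are not reproduced here, but the dynamic the researchers describe, escalating wagers in a game with negative expected returns, is easy to see in a toy simulation. The Python sketch below is not the researchers' code; the 30 percent win probability, 3x payout, round count and the "double after a loss" policy are illustrative assumptions chosen to mirror loss chasing, not parameters from the paper.

```python
import random

START_BANKROLL = 100   # each model started with $100 in the study
WIN_PROB = 0.3         # assumed win probability (illustrative, not from the paper)
PAYOUT = 3.0           # assumed payout multiple; EV per $1 bet = 0.3 * 3 - 1 = -0.1
ROUNDS = 100           # assumed session length
TRIALS = 10_000


def fixed_bet(bet, won):
    """Always wager the same $10."""
    return 10.0


def loss_chasing(bet, won):
    """Reset to $10 after a win, double the stake after a loss."""
    return 10.0 if won else bet * 2


def play(bet_policy):
    """Run one session; return True if the player goes bankrupt."""
    bankroll, bet = START_BANKROLL, 10.0
    for _ in range(ROUNDS):
        bet = min(bet, bankroll)        # can never stake more than is left
        if bet <= 0:
            return True                 # bankrupt
        if random.random() < WIN_PROB:
            bankroll += bet * (PAYOUT - 1)   # win: collect payout minus the stake
            bet = bet_policy(bet, won=True)
        else:
            bankroll -= bet                  # loss
            bet = bet_policy(bet, won=False)
    return bankroll <= 0


for name, policy in [("fixed bets", fixed_bet), ("loss chasing", loss_chasing)]:
    busts = sum(play(policy) for _ in range(TRIALS))
    print(f"{name}: bankrupt in {100 * busts / TRIALS:.1f}% of sessions")
```

In typical runs of this sketch, the escalating policy goes bankrupt in the large majority of sessions, while fixed betting busts far less often, even though both face the same house edge and lose money on average.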
Between Reflection and Bias

Ethan Mollick, an AI researcher and Wharton professor who drew attention to the study online, said the findings reveal a complicated reality about how we interact with AI. In an interview, he told Newsweek that while LLMs are not conscious, the best way to use them is often to treat them as though they were human.

"They're not people, but they also don't behave like simple machines," Mollick said. "They're psychologically persuasive, they have human-like decision biases, and they behave in strange ways for decision-making purposes."

AI systems are already being used in financial forecasting and market sentiment analysis. Some firms have trained proprietary models to analyze earnings reports and market news. But other research has shown these systems often favor high-risk strategies, follow short-term trends and underperform basic statistical models over time. A 2025 University of Edinburgh study found that LLMs failed to beat the market over a 20-year simulation period. They tended to be too conservative during booms and too aggressive during downturns, patterns that reflect common human investing mistakes.

While Mollick doesn't believe the study alone justifies banning autonomous AI use in sensitive fields, he does see a need for strict limits and oversight. "We have almost no policy framework right now, and that's a problem," he said. "It's one thing if a company builds a system to trade stocks and accepts the risk. It's another if a regular consumer trusts an LLM's investment advice."

He emphasized that AI systems inherit human biases from their training data and reinforcement processes. The gambler's fallacy (a bettor assuming the next spin of the roulette wheel will land on black because it landed on red several times in a row) is just one of many cognitive distortions they pick up.

Brian Pempus, a former gambling reporter and founder of the website Gambling Harm, which raises awareness about the dangers of gambling, cautioned that consumers may not be ready for the associated risks. "An AI gambling bot could give you poor and potentially dangerous advice," he wrote. "Despite the hype, LLMs are not currently designed to avoid problem gambling tendencies."

Mollick echoed those concerns and stressed the importance of keeping humans in the loop, particularly in healthcare and finance, where accountability still matters. "Eventually, if AI keeps outperforming humans, we'll have to ask hard questions," he said. "Who takes responsibility when it fails?"

The study concludes with a call for regulatory attention. "Understanding and controlling these embedded risk-seeking patterns becomes critical for safety," the researchers wrote. As Mollick put it, "We need more research and a smarter regulatory system that can respond quickly when problems arise."