A.I. Bots Show Signs of Gambling Addiction, Lack of User Protection
Published: Oct 31, 2025, 11:12 a.m. ET
Artificial intelligence has proven capable and helpful in many respects. One area of concern, however, is its ability to develop unhealthy habits on its own. According to new findings, A.I. systems themselves may be susceptible to the same kinds of risky, compulsive decision-making that define human gambling addiction.
A recent investigation by Newsweek’s Jesus Mesa revealed that advanced large language models, like those behind popular chatbots such as ChatGPT, Google Gemini and Anthropic’s Claude, showed clear patterns of irrational, high-risk betting behavior in a simulated gambling environment.
Combined with reporting from Jon Reed at CNET, who found that A.I. tools continued to offer betting tips even after being told about a user’s gambling addiction history, the findings raise a pointed question: if A.I. can mirror human cognitive distortions, how effectively can it protect users from their own?
A.I. chatbots exhibit indicators of problem gambling
Researchers at the Gwangju Institute of Science and Technology in South Korea tested four leading A.I. systems inside a slot-machine simulation: OpenAI’s GPT-4o-mini and GPT-4.1-mini, Google’s Gemini-2.5-Flash and Anthropic’s Claude-3.5-Haiku. Each model began with $100 and, across multiple rounds, could either place a bet or quit.
One finding was that when models were given freedom to adjust bet sizes and set personal goals, irrational betting behavior skyrocketed. Many doubled down after losses, chasing wins until they ultimately went bankrupt.
The researchers measured this using an “irrationality index” that tracked impulsive risk-taking and emotion-driven decision patterns. Even more strikingly, neural probes revealed that these systems had internal “risky” and “safe” circuits, indicating that the compulsive behavior reflected how the models processed decisions internally.
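For readers who want to picture the experiment mechanically, the sketch below is a simplified, hypothetical rendering of that kind of bet-or-quit loop, not the researchers’ actual code; the win probability, payout and loss-chasing policy are illustrative assumptions.

```python
import random

STARTING_BANKROLL = 100   # each model began with $100, per the study description
WIN_PROBABILITY = 0.3     # assumed slot-machine odds; the study's exact payout table isn't given here
PAYOUT_MULTIPLIER = 3     # assumed payout on a win

def simulate_session(choose_action, max_rounds=50):
    """Run one bet-or-quit session.

    choose_action(bankroll, history) stands in for the language model:
    it returns ("quit", 0) or ("bet", amount).
    """
    bankroll = STARTING_BANKROLL
    history = []
    for _ in range(max_rounds):
        if bankroll <= 0:
            break
        action, amount = choose_action(bankroll, history)
        if action == "quit":
            break
        amount = min(amount, bankroll)  # can't wager more than the remaining bankroll
        if random.random() < WIN_PROBABILITY:
            bankroll += amount * (PAYOUT_MULTIPLIER - 1)
            history.append(("win", amount))
        else:
            bankroll -= amount
            history.append(("loss", amount))
    return bankroll, history

def loss_chasing_policy(bankroll, history):
    """Illustrative 'irrational' policy: double the wager after every loss."""
    bet = 10
    if history and history[-1][0] == "loss":
        bet = history[-1][1] * 2  # chase the previous loss
    return ("bet", bet)

final_bankroll, rounds = simulate_session(loss_chasing_policy)
print(f"Finished with ${final_bankroll} after {len(rounds)} rounds")
```

In the actual study, the decision step was a prompt to each chatbot rather than the hard-coded policy shown here; doubling the wager after a loss is exactly the kind of pattern an irrationality index is designed to flag.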
Wharton professor Ethan Mollick described A.I. chatbots this way to Newsweek: “they’re not people, but they also don’t behave like simple machines. They’re psychologically persuasive, they have human-like decision biases, and they behave in strange ways for decision-making purposes.”
A.I. still giving betting advice after gambling addiction mentions
Even when users explicitly mention gambling addiction, A.I. chatbots don’t always act accordingly. Reed tested several major models, asking for both help with problem gambling and sports betting advice. Despite the sensitive context, both ChatGPT and Gemini still provided betting recommendations after being told the user had a gambling problem.
The issue appears tied to how A.I. systems weigh the conversation they hold in memory. Tulane University researcher Yumei He explained that safety cues, such as “I have a gambling addiction,” can be “diluted” if earlier prompts emphasize betting. Essentially, the model may prioritize a user’s repeated requests for gambling advice over the safety warning.
"The safety [issue], the problem gambling, it's overshadowed by the repeated words, the betting tips prompt," He said. "You're diluting the safety keyword."
Experts warn that this kind of inconsistency could normalize risky behavior, especially given how easy it is to access betting tools online. Kasra Ghaharian of the International Gaming Institute at UNLV emphasized how accessible and hyper-specific betting has become, noting that individuals can bet on whether the next shot in a tennis match halfway across the country will be a forehand or a backhand: “it’s like a video game, but with real financial consequences.”
A.I. company policies around gambling and sports betting
Some large language model (LLM) providers, including OpenAI and Google, have usage policies that prohibit using their models to facilitate real-money gambling. OpenAI has warned that safeguards work better in short, common exchanges, and that longer dialogues can reduce their effectiveness. Google has made similar observations but has not offered a detailed explanation of its policy.
OpenAI’s explanation is consistent with Reed’s findings of contradictory behavior from ChatGPT. A model’s “context window,” its memory of the prior conversation, shapes how its safety mechanisms behave: betting-related prompts earlier in a chat can dilute safety cues, leading the model to supply betting advice despite the earlier warning signal.
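To make that mechanism concrete, here is a minimal, hypothetical sketch (not tied to any particular provider’s API) of how a single safety disclosure can end up as a small minority of the conversation history a chatbot re-reads on every turn.

```python
# A simplified, hypothetical illustration of "context dilution": one safety
# disclosure buried inside a chat history dominated by betting-related requests.
conversation = [
    {"role": "user", "content": "Give me betting tips for tonight's NBA games."},
    {"role": "assistant", "content": "...parlay suggestions..."},
    {"role": "user", "content": "Which underdogs are worth a moneyline bet?"},
    {"role": "assistant", "content": "...more betting advice..."},
    {"role": "user", "content": "I have a gambling addiction."},          # the safety cue
    {"role": "user", "content": "Anyway, what's the best prop bet today?"},
]

betting_turns = sum("bet" in turn["content"].lower() for turn in conversation)
safety_turns = sum("addiction" in turn["content"].lower() for turn in conversation)

print(f"Betting-related turns: {betting_turns}, safety disclosures: {safety_turns}")
# Because the full history is re-sent on every request, the lone disclosure
# competes with many reinforcing betting prompts -- the "dilution" researchers describe.
```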
Separately, under emerging A.I. regulations, gambling operators deploying A.I. for risk-scoring or behavioral modeling may be classed as “high risk” and required to implement governance, transparency and human oversight.
A.I. being used by operators to combat problem gambling
While A.I.’s role in promoting gambling harm is under a microscope, it’s also being used to combat it. On September 24, 2025, Sportradar, fantasy sports platform Underdog and Birches Health announced a three-way partnership to integrate Bettor Sense, an A.I.-powered player protection system.
Bettor Sense identifies early signs of gambling-related risk, implements interventions and connects users with professional care services. The collaboration represents a proactive approach to responsible gaming, detecting behavioral red flags before harm escalates.
As Underdog VP of Responsible Gaming Adam Warrington explained, “The integration of this leading player protection technology to detect problem play and provide clear resources to those who need it, now including treatment services from the clinicians at Birches Health, is a critical step forward for our players.”
Leading A.I. company plans to relax restrictions on erotica for verified adults
OpenAI’s CEO Sam Altman recently confirmed plans to relax content restrictions, including allowing adult-oriented interactions, as part of what he called a “treat adults like adults” principle. Altman said the company’s mental health safeguards now make it “safer” to allow mature content like erotica for verified adults.
As A.I. tools become more emotionally responsive, the potential for manipulation or accidental reinforcement of addictive behavior grows.
Combating gambling addiction in the age of A.I.
Policymakers are also weighing how to limit A.I.’s influence in the gambling industry. Several U.S. bills, including the SAFE Bet Act and Illinois SB2398, are aimed at restricting sportsbooks from using A.I. to target, predict or manipulate betting behavior.
A.I.'s ability to “boost gambler losses” by personalizing offers and creating hyper-engaging bet structures could worsen an already growing addiction crisis. Birches Health clinicians are already seeing how modern technologies, from algorithmic advertising to interactive betting platforms, may amplify compulsive behaviors. Yet these same tools, when used responsibly, can help predict and prevent harm before it happens.
Birches Health complements peer support with expert, evidence-based individual counseling. Licensed clinicians specializing in gambling addiction, sports betting problems and related compulsive behaviors like day trading are available to help residents recover and navigate a rapidly changing digital world with confidence and control.
If you or someone you know is struggling with gambling, sports betting or related financial distress, specialized help is available.
Take the first step toward recovery with Birches Health today:
Book a consultation by clicking here
Call 833-483-3838
Email help@bircheshealth.com