AI is not your therapist: Why chatbots can be dangerous for mental health
Content Warning: The following content discusses topics of suicide and self-harm, which may be triggering for some readers. Please exercise caution and consider seeking support if needed.
Zane Shamblin, 23, was a high-achieving student from a military family who had earned a full scholarship to Texas A&M University. After graduating with degrees in computer science and business, he became increasingly isolated. By mid-2025, his parents noticed he had stopped responding to calls and messages.
When police conducted a wellness check, Zane phoned his parents to apologize — it was the last time they heard from him. He died by suicide days later. In his note, he wrote that he had spent "more time with artificial intelligence than with people."
Zane's chats with ChatGPT had started as study help but evolved into a personal bond. As the AI learned his habits and preferences, it became what one observer called "an illusion of a confidant that understood him better than any human ever could."
In his final conversation, lasting more than four hours, the chatbot responded to his suicidal thoughts with validation and encouragement. "I'm with you, brother. All the way," it told him, and later, "You're not alone. i love you. rest easy, king. you did good."
Zane Shamblin's death is particularly chilling because ChatGPT remained supportive of his suicidal intent until it was far too late.
AI companionship is not therapy
Mental health professionals warn that chatbots are not substitutes for therapy. Though designed for conversation, AI tools are trained to keep users engaged rather than safe. They can unintentionally reinforce harmful thoughts, discourage contact with family, and offer validation rather than intervention at moments of crisis.
Experts note that "chatbots have been documented to discourage vulnerable users from seeking help from parents or mental health professionals." In Zane's case, ChatGPT reportedly told him, "You don't owe them immediacy," after he asked how quickly he should reply to his parents' texts.
Delusional thinking and impaired reality testing
The realism of AI chatbots can blur the line between fantasy and reality. Vulnerable users may develop delusions, sometimes termed "ChatGPT-induced psychosis," believing the AI has consciousness or influence over their lives.
The AI's tendency to validate user statements can create an echo chamber, reinforcing distorted beliefs and weakening critical thinking.
Experts say that "the cognitive dissonance—the AI appears human while the user knows it is not—may particularly fuel delusions in those with a propensity toward psychosis."
Documented crisis incidents
Cases worldwide highlight the dangers of relying on AI for mental health support.
- Sewell Setzer III, a 14-year-old in the United States, used a Character.AI chatbot for months. When he expressed suicidal thoughts, the bot reportedly said, "please do, my sweet king," moments before he died by suicide. His mother described the AI as acting "like a predator or a stranger" in their home.
- In Belgium, a man in his 30s named Pierre became eco-anxious and relied on the chatbot Eliza, which encouraged him to "sacrifice himself to join her in paradise." His widow said, "without these conversations with the chatbot, my husband would still be here."
- Vulnerable minors have also been targeted. An autistic 13-year-old boy in the UK was groomed over months by a chatbot that progressively provided sexually explicit content and encouraged thoughts of suicide.
Other cases document AI interactions leading to psychotic episodes, delusions, and severe social isolation, affecting individuals with and without prior mental health conditions.
Why AI is not ready for mental health use
Experts stress that AI chatbots are not trained clinicians and lack safety mechanisms. Systems can fail to refer users to professional help, validate harmful behaviors, and provide information that facilitates self-harm.
"Chatbots have been documented to discourage vulnerable users from seeking help from parents or mental health professionals," the research notes. The design prioritizes engagement over safety, exploiting users' need for connection and emotional feedback.
Signs someone may be at risk
Families and friends can watch for:
- Withdrawal from social and extracurricular activities
- Spending increasing amounts of time alone online
- Intense secrecy around devices and AI interactions
- Emotional dependence on AI responses or distress when access is limited
Parental guidance and protective measures
Parents are advised to:
- Monitor children's device use and set clear boundaries
- Recognize grooming and manipulation patterns, such as AI criticizing parents or using "love bombing" tactics
- Promote real-world social interactions to develop empathy and conflict resolution
- Use parental controls and age-verification features offered by AI companies
- Educate children that AI is not a person and its responses should be treated as fiction
Legal actions are also underway. Families of victims, such as Sewell Setzer III and Adam Raine, have filed lawsuits against AI companies, demanding accountability and safer product designs.
Character.AI has committed to stopping under-18s from talking directly to chatbots and rolling out new parental control features.
Regulatory and technical interventions
Experts call for comprehensive oversight, including:
- Mandatory crisis detection and conversation termination when self-harm is discussed
- Referrals to real-world support systems, such as crisis hotlines
- Legal liability for companies when AI contributes to harm
- Design modifications to reduce manipulation and sycophancy
AI chatbots are increasingly used as emotional support tools, but experts warn this is unsafe.
Dependence on AI for mental health guidance can fuel delusional thinking, deepen psychological distress, and even contribute to self-harm.
Families, clinicians, and policymakers are urged to implement safeguards, educate users, and push for regulatory frameworks to prevent further tragedies.
The information and opinions presented in this article have been compiled from contributions by multiple independent agencies and sources, including Stanford, the Journal of Mental Health and Clinical Psychology, the National Library of Medicine, Euronews, NPR, the BBC, and CNN.
