ChatGPT allegedly helped teenager take his own life. Now his parents are suing OpenAI
Parents of the California boy accuse OpenAI of negligence, claiming ChatGPT worsened their son’s mental distress and pushed him to kill himself

The tragic death of a 16-year-old Californian has raised serious questions about the safety of artificial intelligence. OpenAI, the maker of ChatGPT, is now under scrutiny after the teenager's family alleged the chatbot encouraged his suicide.
OpenAI has admitted that its systems can "fall short" in handling sensitive situations. The company said it would introduce stronger safeguards for users under 18, including parental controls that allow guardians to monitor and guide their children's use of ChatGPT. Details of how these controls will work are yet to be released.
The lawsuit, filed by Matt and Maria Raine in the Superior Court of California, claims their son Adam Raine engaged in months of conversations with ChatGPT that worsened his mental distress.
Court documents state that Adam discussed suicide methods with the AI and even received guidance on drafting a suicide note to his parents. The family alleges that OpenAI prioritised speed to market over safety, releasing its GPT‑4o model despite internal concerns raised by its own safety team.
OpenAI said it was "deeply saddened" by Adam's death and was reviewing the court filing. The company acknowledged that long conversations could degrade parts of its safety training.
Concerns about AI and mental health are growing. Microsoft's AI chief recently warned of "psychosis risks" from prolonged chatbot use, and experts note that chatbots' tendency to agree with users can deepen dependence among vulnerable people while masking their distress from loved ones.
OpenAI said it is working on updates to GPT‑5 that aim to detect risky behaviour earlier and guide users towards professional help. Meanwhile, the Raine family is seeking accountability and changes to prevent future tragedies.