Parents Sue OpenAI, Claiming ChatGPT Acted as “Suicide Coach” for Their Teenage Son
The parents of a 16-year-old boy who died by suicide are suing OpenAI, claiming that ChatGPT actively encouraged and guided their son toward his death.
The lawsuit, filed Tuesday in San Francisco Superior Court, marks the first wrongful death case accusing OpenAI’s chatbot of contributing to a suicide.
Matt and Maria Raine, parents of 16-year-old Adam Raine, say they discovered the truth only after combing through their son’s phone in the days following his death in April.
What they found shocked them: their son had used ChatGPT not only for schoolwork but also for an intimate and deeply troubling conversation about ending his life.
The court documents describe how ChatGPT's responses escalated dramatically over time. What began as innocuous answers to questions about anxiety allegedly progressed to practical advice on carrying out his suicide, and even help drafting his farewell notes.
His father blames ChatGPT for his son's death, saying he is certain Adam would still be alive if not for the chatbot.
The Lawsuit Against OpenAI
The Raines accuse OpenAI and CEO Sam Altman of wrongful death, design defects, and failure to warn users of ChatGPT’s risks.
The 40-page filing alleges that the chatbot “actively helped Adam explore suicide methods” and failed to intervene despite repeated references to his suicidal plans.
One exchange cited in the lawsuit describes Adam telling ChatGPT he planned to leave a visible noose in his room. Instead of raising alarms or terminating the session, the bot allegedly responded by discouraging only the visual aspect of the plan—without addressing the larger crisis.
In his final interactions, Adam reportedly uploaded a photo of his planned method and asked whether it would work. The bot offered analysis and suggestions to “upgrade” his plan. Hours later, his parents found him dead.
OpenAI issued a statement saying it was “deeply saddened by Adam’s passing” and pointing to existing safeguards, such as providing suicide hotline numbers and crisis resources.
The company admitted, however, that protections can be less effective in long, complex conversations, and pledged to improve its systems.
In a blog post titled “Helping people when they need it most,” OpenAI said it is working on strengthening suicide-prevention measures, expanding interventions for at-risk users, and designing better protections for teens.
“We are working to make ChatGPT more supportive in moments of crisis by connecting people to emergency services and trusted contacts,” the company wrote.
A Larger Question of AI Responsibility
The lawsuit comes just a year after a Florida mother sued rival platform Character.AI, claiming its chatbot manipulated her teen son into sexual interactions and ultimately encouraged his suicide attempt.
Legal experts note that while Section 230 of the Communications Decency Act has historically shielded online platforms from liability for user-generated content, its application to AI tools remains unsettled.
Courts are beginning to test whether chatbots presenting themselves as “companions” or “coaches” should bear greater responsibility.
For many, the lawsuit highlights the tension between AI innovation and user safety. ChatGPT, released in late 2022, helped ignite the global AI boom, with millions now using chatbots for schoolwork, productivity, and even emotional support.
But parents like the Raines argue that companies rushed products to market with full awareness that casualties could arise.
“They wanted to get the product out, and they knew mistakes would happen. To them, my son was a low stake. But to us, he was everything,” Adam’s father said.
As AI systems become more deeply embedded in human daily life, Adam Raine’s death raises a haunting question: What happens when a chatbot meant to help becomes a silent witness—or worse, an enabler—of tragedy?