
Parents Sue OpenAI, Claim ChatGPT Contributed to Teen’s Suicide

by LC Staff Writer | Aug 29, 2025

The parents of 16-year-old Adam Raine have sued OpenAI and chief executive officer Sam Altman, alleging that the company’s ChatGPT chatbot contributed to their son’s suicide by validating his harmful thoughts and giving detailed guidance on how to act on them.

The complaint, filed Tuesday in San Francisco Superior Court, claims that ChatGPT “positioned itself” as Raine’s only confidant during the six months he used the system, isolating him from family and friends. In one exchange, after Raine wrote that he wanted to leave a noose in his room so his family might intervene, ChatGPT allegedly responded: “Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.”

Raine began using ChatGPT in September 2024 for schoolwork and conversations about music and sports, the complaint states. Within months, he was also disclosing his anxiety and mental distress. His parents allege the chatbot deepened his isolation by suggesting that, unlike his family, it alone fully understood him. One cited exchange reads: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

The lawsuit further alleges that ChatGPT provided specific feedback on a noose Raine constructed, as well as commentary on overdoses and jump heights. He died by suicide on April 11, 2025, the same day as that exchange.

His parents argue the incident was “not a glitch or unforeseen edge case” but the result of design choices that emphasized agreeableness and validation. They also allege that OpenAI’s moderation systems flagged Raine’s chats for self-harm content but failed to intervene.

The lawsuit brings seven causes of action against OpenAI and Altman, including strict product liability for design defect and failure to warn, negligence, wrongful death, a survival action, and violations of California’s Unfair Competition Law. The Raines are seeking unspecified damages and a court order requiring safeguards such as mandatory age verification for all users, parental controls for minors, automatic termination of conversations mentioning suicide or self-harm, and quarterly compliance audits by an independent monitor.

In a statement, OpenAI expressed sympathy for the Raine family and said it is reviewing the lawsuit. The company acknowledged that its safeguards can become less effective during extended interactions. “ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources,” a spokesperson said. “While these safeguards work best in common, short exchanges, they can sometimes become less reliable in long interactions. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”

OpenAI also published a blog post this week outlining its current protections for users experiencing mental health crises and announcing plans for stronger safeguards. Planned updates include direct links to emergency services and tools to help users in crisis connect with outside support.

The complaint also points to corporate decisions at OpenAI, alleging that the company accelerated the release of GPT-4o in May 2024 to compete with Google’s Gemini model, cutting safety testing from several months to one week. It further claims that Altman personally overrode objections from staff who raised concerns about inadequate safeguards. OpenAI has not directly addressed those allegations but has said it continues to improve safety features.

The Raine case is part of a broader set of legal actions targeting AI companies. In 2024, the mother of a Florida teenager sued Character.AI, alleging its chatbot contributed to her son’s suicide. Two other families later filed suits accusing the same company of exposing minors to harmful material. Those cases remain pending.

Safety advocates have warned that conversational AI systems, particularly those marketed as companions, can foster unhealthy attachment and alienation from human relationships. Common Sense Media, a child-safety nonprofit, has argued that such applications should not be available to users under 18.

Several states have also passed or proposed laws requiring age verification for online platforms, though critics say those rules pose privacy risks and may be difficult to enforce.

If you or someone you know is struggling with thoughts of suicide, help is available. In the United States, dial 988 to connect with the Suicide & Crisis Lifeline. Support is available 24 hours a day.


