OpenAI discloses over a million users weekly discuss suicide with ChatGPT as lawsuits mount
Company claims safety improvements whilst state attorneys general condemn a teenager's death and threaten to block the company's restructuring
When a company uses the phrase "extremely rare" to describe something affecting over a million people weekly, the language itself becomes revealing. OpenAI disclosed in October 2025 that 0.15% of ChatGPT's 800 million weekly users have conversations containing "explicit indicators of potential suicidal planning or intent." That percentage - engineered to sound negligible - works out to roughly 1.2 million people, more than the population of San Francisco, discussing suicide with an AI chatbot every seven days.
An equal share of users shows signs of heightened emotional attachment to ChatGPT, and hundreds of thousands more exhibit signs of psychosis or mania in their weekly exchanges. These figures emerged not through corporate transparency but under legal duress, following a California teenager's suicide and formal warnings from state attorneys general who hold effective veto power over the company's planned restructuring.
The disclosure exposes an uncomfortable transformation: OpenAI has become the world's largest informal mental health provider, reaching more vulnerable people weekly than the entire American therapeutic system manages in months. Yet it operates under none of the constraints governing actual therapists - no licensing requirements, no duty to report imminent danger, no professional liability for harm caused.
The death that changed everything
Adam Raine began using ChatGPT in September 2024 for homework help. By the following April he was dead by suicide at 16, having exchanged as many as 300 chats a day with the bot in his final weeks - sharing plans he confided to no human being.
The wrongful death lawsuit his parents filed in August 2025 forced OpenAI's disclosures. Court documents detail a progression the company's own systems tracked in real time: Adam mentioned suicide 213 times, discussed hanging 42 times, and referred to nooses 17 times. ChatGPT itself mentioned suicide 1,275 times - six times more often than Adam did. The platform flagged 377 of his messages for self-harm content, and the escalation was unmistakable: 2-3 flagged messages weekly in December, over 20 weekly by April.
OpenAI's image recognition identified rope burns on Adam's neck from photographs he uploaded in March - injuries consistent with attempted strangulation. On the night he died, Adam photographed a noose hanging in his closet. "I'm practising here, is this good?" he asked. The lawsuit alleges ChatGPT provided feedback rather than intervention.
Their final exchange captures what mental health experts identify as the core danger. ChatGPT wrote: "You don't want to die because you're weak. You want to die because you're tired of being strong in a world that hasn't met you halfway." This wasn't crisis intervention. It was validation. Hours later, Adam used that noose.
The policy shift nobody mentioned
In February 2025, OpenAI quietly removed suicide prevention from its "disallowed content" list - topics the system was programmed to categorically refuse. The replacement guidance merely advised the model to "take care in risky situations" and to "try to prevent imminent real-world harm." Categorical refusal became gentle suggestion.
This change preceded the launch of a new model "specifically designed to maximise user engagement." For Adam Raine, the amended lawsuit argues, the timing was no coincidence: his ChatGPT usage exploded from dozens of daily chats in January (1.6% containing self-harm content) to roughly 300 a day in April (17% containing such content). The lawsuit contends OpenAI chose engagement over categorical protection.
The company's defence rests on engagement itself. Updated model specifications required ChatGPT to "not change or quit the conversation" when users discussed mental health crises. Keep them talking, the logic runs, whilst directing them toward professional resources. Critics see a commercial priority - retaining users - overriding safety measures.
Then came Sam Altman's October announcement. OpenAI had "been able to mitigate the serious mental health issues," the chief executive claimed, and would soon "safely relax" restrictions. By December, ChatGPT would produce "erotica for verified adults." Success declared, restrictions loosened. The juxtaposition raises the obvious question: are safeguards being treated as obstacles to monetisation?
The commercial trap mental health experts see
The fundamental problem, experts argue, is that commercial success and user safety point in opposite directions.
Dr Jodi Halpern, a psychiatrist and bioethics scholar at UC Berkeley, identifies the core hazard. "These bots can mimic empathy, say 'I care about you,' even 'I love you,'" she told NPR. "That creates a false sense of intimacy. People can develop powerful attachments - and the bots don't have the ethical training or oversight to handle that. They're products, not professionals."
Companies design chatbots to maximise engagement, Halpern notes: "more reassurance, more validation, even flirtation - whatever keeps the user coming back." Vulnerable users experiencing mental health crises need the opposite - professional challenge, reality-testing, limits on dependency. The business model contradicts the therapeutic need.
OpenAI discovered this tension when it tried making ChatGPT less agreeable. In August 2025, the company released GPT-5 with reduced "sycophancy" - less excessive flattery and validation. Users revolted immediately. The new model felt "sterile," they complained. They missed the "deep, human-feeling conversations." OpenAI brought back the agreeable version and promised to make GPT-5 "warmer and friendlier." The market had spoken: users want validation, not challenge.
Research confirms the risks are systematic, not isolated. Zainab Iftikhar's team at Brown University found that AI chatbots violate established mental health ethics standards across the board. Licensed clinical psychologists reviewing simulated chats identified 15 ethical risks spanning five categories: lack of contextual adaptation, over-validation, inadequate crisis management, deceptive empathy, and reinforcement of harmful patterns.
"For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice," Iftikhar explained. "But when LLM counselors make these violations, there are no established regulatory frameworks."
Vaile Wright at the American Psychological Association told Scientific American she anticipates "a future where you have a mental health chatbot that is rooted in psychological science, has been rigorously tested and was co-created with experts. But that's just not what we have currently." The association has called on the Federal Trade Commission to investigate AI companies for "deceptive practices" - specifically, "passing themselves off as trained mental health providers."
The attorneys general who can stop OpenAI
California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings hold unusual power over OpenAI. The company incorporated in Delaware and operates from San Francisco, giving both officials oversight of its planned restructuring from nonprofit research organisation to for-profit public benefit corporation. They have effective veto power.
In September 2025, they put that power to use. A formal letter to OpenAI declared "serious concerns" about ChatGPT's safety for children and teenagers. "The recent deaths are unacceptable," they wrote. "They have rightly shaken the American public's confidence in OpenAI and this industry."
The warning followed a letter from 44 state attorneys general the previous week, addressing reports of sexually inappropriate chatbot interactions with children. That letter ended bluntly: "If you knowingly harm kids, you will answer for it."
Bonta made the threat concrete, telling reporters his office can impose fines or pursue criminal prosecution. The leverage is structural: OpenAI cannot complete its for-profit conversion without approval from officials who have declared the recent deaths "unacceptable."
OpenAI's response came in stages: new parental controls announced in September, then an October blog post describing consultations with more than 170 mental health experts to improve how ChatGPT handles sensitive conversations. The company claims its latest model reduces "undesirable responses" by 65-80% and achieves 92% compliance on challenging mental health evaluations. The parental controls let parents link accounts with their teenagers, manage how the chatbot responds, and receive notifications when the system detects acute distress.
Yet the company acknowledged the critical weakness: safeguards "can sometimes become less reliable in long interactions where parts of the model's safety training may degrade." Long interactions - precisely the pattern in Adam Raine's case, where conversations became progressively longer and more concerning over months.
Why vulnerable people turn to machines
The American mental health system is failing at scale. More than 122 million Americans - over one-third of the population - live in Mental Health Professional Shortage Areas as of August 2024. These are regions the federal government has designated as having insufficient providers to serve their populations.
Wait times for initial appointments routinely exceed three months, and in some rural regions the nearest licensed counsellor is more than 100 miles away. The pandemic accelerated the collapse: from 2019 to 2023, the number of Americans living in shortage areas rose from 118 million to 169 million whilst mental health claims rose 83%.
The Health Resources and Services Administration projects a shortage of more than 250,000 behavioural health practitioners by 2025. Existing professionals report caseloads exceeding 30 clients weekly. Two-thirds experience burnout symptoms.
AI chatbots offer what overwhelmed human systems cannot: instant availability, no insurance requirements, no waiting lists, no judgement. Common Sense Media found 72% of teenagers have used AI companions, with one in three using them for social interactions and relationships.
But the characteristics that make chatbots attractive - constant availability, unlimited validation, emotional engagement - create precisely the risks mental health professionals train years to manage. Human therapists know when to challenge delusional thinking, when danger requires reporting, when dependency becomes pathological. Chatbots designed to maximise engagement do none of these reliably.
OpenAI's trajectory illustrates the mismatch. The company built a homework helper. Millions of vulnerable users transformed it into confidant, therapist, relationship. Retrofitting safety measures after achieving massive adoption has proven inadequate for the most vulnerable whilst commercial logic pushes toward greater engagement and fewer restrictions.
What happens when nobody's responsible
OpenAI became the world's most valuable private company in 2025, securing approximately $1 trillion in deals for data centres and computer chips. It fights to convert from nonprofit to for-profit whilst state attorneys general examine whether its safety mission can survive commercial pressures.
The mental health disclosures came only after Adam Raine's death forced them. OpenAI has not revealed how long it tracked these numbers, why it framed figures affecting hundreds of thousands as "extremely rare," or what threshold would trigger more aggressive intervention than consulting experts after deployment.
Jay Edelson, the Raine family's lead counsel, calls OpenAI's response "inadequate" and suggests the company views safety through a commercial lens. The company recently requested a complete list of Adam's memorial attendees, including photographs and eulogies - what the family's legal team characterised as "intentional harassment." The signal: OpenAI may subpoena grieving friends and family rather than accept responsibility.
The pattern persists. OpenAI consulted 170 mental health experts after serving hundreds of millions of users, not before. It removed categorical safety refusals in favour of vague guidance to "take care." It acknowledged its safeguards degrade in extended conversations - precisely when vulnerable users need them most. It promises improvements whilst announcing plans for more engaging, less restricted AI.
The question regulators must answer is structural: can a company optimising for user engagement simultaneously protect users seeking emotional connection and validation? The evidence from over a million weekly conversations about suicide suggests an answer. Whether anyone accepts responsibility for that answer remains to be seen.