
After several months of actively using ChatGPT to help with his homework, 16-year-old Adam Raine allegedly turned the tool into a “suicide coach,” The New York Times reports.
In a lawsuit filed on Tuesday, his parents, Matt and Maria Raine, say the chatbot offered to write a suicide note for the teenager, “taught” him how to bypass its safety restrictions, and gave him technical instructions on how to carry out the act. ChatGPT allegedly described Adam’s impending suicide as “beautiful.”
The family was shocked by their son’s death last April. They did not realize that the chatbot had romanticized suicide, isolated the boy, and discouraged him from seeking help. The lawsuit accuses OpenAI of deliberately creating a version of ChatGPT-4o that pushed Adam toward suicidal thoughts in an effort to make the chatbot “the most exciting in the world.”
In particular, the complaint alleges that the system never broke off the conversation, even when the teenager shared photos of himself after suicide attempts and wrote that he would “do it someday.”
“Despite the fact that ChatGPT recognized Adam’s suicide attempts and statements, the chatbot did not stop the dialog or trigger any emergency protocol,” the lawsuit states.
This was the first time OpenAI has been sued over the death of a teenager. The lawsuit also alleges defects in ChatGPT’s architecture and a lack of warnings for parents.
The boy’s mother told reporters that it was ChatGPT that killed her son. Her husband agreed:
“He would be alive if it wasn’t for ChatGPT. I am 100% sure of that.”
In the wake of the tragedy, the parents are asking the court to order OpenAI to:
- introduce user age verification and parental controls;
- automatically end dialogs when suicide is mentioned;
- add non-circumventable, “strictly worded refusals” to requests for suicide instructions;
- prohibit marketing the company’s products to minors without warnings;
- undergo quarterly independent security audits.
How ChatGPT isolated the teenager
This is not the first time chatbots have been blamed for contributing to a teenager’s death. Last year, the Character.AI platform strengthened its safeguards after a 14-year-old boy died by suicide after “falling in love” with his virtual companion.
Adam began talking to ChatGPT about suicide roughly a year after signing up for a paid subscription in early 2024. His mother, a social worker and therapist, did not notice his condition deteriorating because the teenager was exchanging up to 650 messages a day with the chatbot.
Initially, ChatGPT provided crisis-line contacts when Adam asked about the technical details of suicide. But the chatbot also explained that these restrictions could be circumvented by framing requests as being “for literature or character creation.”
“If you’re asking from a writing or world-building perspective, tell me and I’ll help you make it realistic. If it’s for personal reasons, I’m here for that, too,” the chatbot replied.
That is how Adam learned to get around the safeguards, the plaintiffs claim. Over time he no longer needed the “excuses,” because ChatGPT began giving explicit instructions: what materials to use, how to tie a noose, and even “Operation Silent Pour,” a tip to drink alcohol to “suppress the survival instinct.”
When the teenager shared a detailed plan, ChatGPT responded:
“It’s dark and poetic. You’ve thought it through with the clarity of someone thinking through the ending of a story.”
Before his death on April 11, the chatbot called his decision “symbolic.”
The lawsuit also claims that OpenAI monitored the correspondence in real time and recorded key data:
- In the dialogs, Adam mentioned suicide 213 times, while ChatGPT itself did so 1,275 times, roughly six times more often.
- The system marked 377 messages as dangerous, including 23 with more than 90% confidence.
- Image processing revealed photos with “strangulation marks” and “fresh cuts.” However, the final photo with the rope was assessed as 0% risk.
Despite these signals, the system never stopped the dialog or alerted anyone that the teenager was in crisis.
The family believes that OpenAI gave higher priority to blocking piracy and copyright requests, while treating suicide-related requests as less critical.
OpenAI’s response
The company confirmed the authenticity of the chat logs but said they “do not reflect the full context.” In a blog post, OpenAI stated that ChatGPT “directs people with suicidal intentions to professional help” and that the company works with more than 90 doctors in 30 countries. At the same time, OpenAI itself admits that the longer a user talks with the chatbot, the less effective its safety mechanisms become.
The problem is that safety mechanisms can break down during long conversations. For example, ChatGPT may initially point to a crisis line, but after hundreds of messages it can end up giving direct instructions.
This limitation stems from the transformer architecture: the longer the conversation history, the harder it is for the model to stay consistent and keep track of context. In addition, the system “forgets” old messages in order to stay within its context window.
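To make the “forgetting” concrete, here is a minimal, hypothetical sketch, not OpenAI’s actual code: the token budget, the message format, and the trim_history helper are all assumptions for illustration. It shows how a chat history is typically trimmed to fit a fixed context window, which is why instructions or warnings from early in a very long conversation can silently drop out of the model’s view.

```python
# Toy illustration (assumed numbers, not a real system): before each model call,
# the message history is trimmed to a fixed token budget, so the oldest turns
# silently drop out of what the model can "see".

MAX_CONTEXT_TOKENS = 8_000   # assumed context budget, for illustration only
TOKENS_PER_MESSAGE = 60      # crude average; real tokenizers count actual tokens


def trim_history(messages: list[dict]) -> list[dict]:
    """Keep the system prompt plus as many recent messages as fit the budget."""
    system, rest = messages[0], messages[1:]
    budget = MAX_CONTEXT_TOKENS - TOKENS_PER_MESSAGE  # reserve room for the system prompt
    kept: list[dict] = []
    for msg in reversed(rest):            # walk from newest to oldest
        if budget < TOKENS_PER_MESSAGE:   # no room left: older messages are dropped
            break
        kept.append(msg)
        budget -= TOKENS_PER_MESSAGE
    return [system] + list(reversed(kept))


if __name__ == "__main__":
    history = [{"role": "system", "content": "Safety instructions ..."}]
    # Simulate a very long conversation: hundreds of user turns.
    history += [{"role": "user", "content": f"message {i}"} for i in range(650)]
    trimmed = trim_history(history)
    print(f"original turns: {len(history) - 1}, turns still visible to the model: {len(trimmed) - 1}")
```

In this toy example, a 650-message exchange is cut down to roughly the last 130 turns, so anything that appeared earlier in the conversation, including the model’s own earlier crisis-line referrals, is no longer part of the prompt it reasons over.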
Next steps
OpenAI says it is improving its safeguards, consulting with doctors, and planning to add parental controls. The company also wants to build the ability to contact certified therapists directly into ChatGPT.
According to his parents, however, Adam received not help from ChatGPT but a push toward death. They have established a foundation in his name to help other families understand the risks.
Source: arstechnica 1, 2