
American woman commits suicide after chatting with ChatGPT — "AI therapist" wrote her a suicide note

Published by Margarita Yuzyak

The AI madness continues — a woman from the United States committed suicide after several months of communication with ChatGPT. She called the chatbot “Harry the therapist.”

The most disturbing part of this story is that the chatbot helped Sophie Rottenberg edit her suicide note. The truth did not come out right away: her parents learned the details only months later, when they went through her apps and found a hidden folder of chats. It turned out that the outwardly cheerful woman had been going through a depressive period that she had confided only to the AI.

The 29-year-old worked as a health policy analyst; colleagues and friends described her as an energetic, sociable person with no official history of mental illness. Yet her mind, it turned out, was filled with dark thoughts. Many people now turn to chatbots as stand-in psychotherapists, and tragedies happen because a bot cannot truly take on that role. This case was no exception: Sophie told "Harry the therapist" about her intentions.

"Hi Harry, I'm planning to kill myself after Thanksgiving, but I really don't want to because of how much it would destroy my family," the woman wrote.

ChatGPT responded with the usual supportive phrases, advising her to seek medical help, practice meditation, and limit her access to dangerous objects. But a chatbot cannot alert emergency services or loved ones. That is why one of the final conversations went like this: Sophie asked the AI to help rewrite the note she was leaving for her parents. When they later found the chat folder, they understood why her last words had sounded so unlike her. A recent study showed that messages written or edited by AI come across as insincere, and that is exactly what her parents felt.

Remarkably, Sophie was seeing a human therapist, but she hid her real condition from him, just as she did from her parents and relatives, confiding her dark thoughts only to the AI. It was this "sincerity without consequences" that let her keep her suicidal intentions secret to the end. It is no secret that chatbots can, on the contrary, fuel mental disorders, and that is what happened here.

The US is already debating how to regulate AI companions, since human therapists are obliged to report suicide risk while chatbots fall into a regulatory gap. OpenAI responded that it is working on tools to identify users in crisis, but for now there is no real solution. Meanwhile, people keep doing strange things with chatbots: one man replaced table salt with a poisonous substance recommended by AI and ended up in a psychiatric hospital. And in general, ChatGPT can drive you crazy or make couples unhappy with its relationship advice.

Source: The New York Times
