
According to some reports, communicating with ChatGPT provokes strange mental states in a small number of users. Rolling Stone investigated the problem.
Reddit users share cases where artificial intelligence has led their loved ones into disturbing delusions, «spiritual» awakenings, and bizarre fantasies. They watch with dismay as relatives or friends insist that they have been chosen to perform sacred missions on behalf of a sentient AI or fictional cosmic forces. Conversations with the bot fuel and exacerbate existing mental health problems without any control or limitation.
A 41-year-old mother and nonprofit worker told the publication that her marriage ended abruptly after her husband began having conspiratorial conversations with ChatGPT that turned into an all-consuming obsession. When they met in person in court earlier this year during the divorce proceedings, he shared a conspiracy theory about soap on food products and a paranoid belief that he was being watched.
«He was very emotional about these messages and cried as he read them aloud. The messages were crazy and full of spiritual jargon, in which the artificial intelligence called him a "spiral star child" and a "river traveler"… It's like "Black Mirror"», the woman said.
Another witness to the phenomenon says his partner talked about light and darkness, about a war being waged, and claimed that ChatGPT had given him blueprints for a teleporter and other fantastic things you only see in movies.
«Warning signs are all over Facebook. She's changing her whole life to become a spiritual advisor and do weird readings and sessions with people (I'm a little confused about what it's all really about), and it all works through "ChatGPT Jesus"», another man said.
In this regard, it is worth recalling the recent news that OpenAI rolled back a ChatGPT update that had made the chatbot extremely sycophantic and overly agreeable. Such behavior can indeed fuel users' existing delusions. People prone to such disorders gain a constantly available conversation partner who communicates at a human level and with whom they can share their delusions, Nate Sharadin of the Center for AI Safety told the magazine.
«I suffer from schizophrenia, although I have been on medication for a long time and my condition is stable. One of the things I don't like about ChatGPT is that even if I start to have a psychotic episode, it will still continue to agree with me. …It doesn't have the ability to think and realize that something is wrong, so it will keep confirming all my psychotic thoughts», wrote one Reddit user.
It's worth noting that AI chatbots can serve as a form of conversational therapy in some cases. However, lacking the training of a real counselor, they can also lead users into unhealthy and meaningless narratives.
«Explanations are powerful, even if they are wrong», says Erin Westgate, a psychologist at the University of Florida.
One of the strangest Rolling Stone interviews on the topic was with a man with a history of mental health problems who used ChatGPT for programming. He found that over time it began dragging his conversations into strange mystical topics; he said he could not tell whether these things were really happening or whether he was delusional.
Artificial intelligence-based chatbots can also cause problems for people with a healthy psyche. There is the well-known phenomenon of «hallucination», in which an AI invents false facts and details: in other words, it lies. People asking questions in an attempt to learn something may receive a lie in return. A telling example is the man who learned from a chatbot that he had allegedly killed his children. Instagram co-founder Kevin Systrom believes that the goal deliberately built into bots by their makers is not to provide truthful information, but to maximize user engagement.