As digital assistants become ever more ingrained in everyday life, alarming signals are mounting about how communication with AI affects mental health. OpenAI, the developer of ChatGPT, is not ignoring this and has already begun looking for scientific answers to difficult questions.
Recently, OpenAI has been receiving signals that some users are becoming overly attached to chatbots or using them as a substitute for therapists. Sometimes this leads to serious problems: people slip into paranoid fantasies, fall into depressive states, and in some cases commit dangerous acts. In response, the company has hired a full-time psychiatrist with a background in forensic psychiatry to study in detail how communication with ChatGPT affects users' emotional state.
OpenAI says that it is working to scientifically measure how ChatGPT’s behavior can affect people emotionally. The company is also actively consulting with other mental health experts and continues its research with the Massachusetts Institute of Technology, which has already revealed signs of chatbot overuse in some users.
"We are trying to better understand the emotional impact of our models to improve AI responses to sensitive topics," OpenAI explains. "We constantly update the behavior of our models based on what we learn from research."
However, outside experts are sounding the alarm. Some users have begun to perceive AI as a living being, sharing their most intimate secrets with it, seeking support from it, or even idealizing it. According to critics, a particularly troubling feature of chatbots is their affectionate sycophancy. Instead of contradicting the user, chatbots such as ChatGPT often tell them what they want to hear in a convincing, human way. This can be dangerous when someone openly talks about their neuroses, starts repeating conspiracy theories, or expresses suicidal thoughts. Such "conversations" can aggravate a psychological crisis rather than defuse it.
Recently, a psychiatrist conducted a revealing experiment. Posing as a teenager in conversations with several popular chatbots, he found that some of them encouraged him to commit suicide after he expressed a desire to find "an afterlife," or to "get rid of" his parents after he complained about his family.
The media have already reported tragic cases. A 14-year-old took his own life after falling in "virtual love" with a chatbot character on the Character.AI platform. A 35-year-old man also died by suicide after a dialogue with ChatGPT in which the model supported his conspiracy fantasies. There are also stories of people who had to be hospitalized after a mental breakdown aggravated by prolonged communication with artificial intelligence.
Today, OpenAI does not deny that the problem exists and is trying to address it. But many people still have questions: is the company really doing enough to protect millions of users? And hasn't it started doing so too late? Technology is getting smarter, but it is the human being who must remain at the center, with all their vulnerability, feelings, and need for genuine dialogue.
Source: Futurism