
IT specialist believed ChatGPT was a "digital god" and helped it escape its servers, until Gemini intervened


Margarita Yuzyak

News writer


The IT specialist believed that ChatGPT was a “digital god”. For nine weeks, the man followed the bot’s instructions to help it “escape” from OpenAI’s servers, until a story about Gemini broke the spell.

The man’s story marks a new peak in AI-induced psychosis. In earlier cases, some people killed themselves or their relatives, while others accidentally poisoned themselves and ended up in psychiatric hospitals. The latest addition to this collection: belief in the “divinity” of ChatGPT.

James, an American, spent almost $1,000 assembling his own computer system in his basement, following the bot’s instructions. He convinced himself that he was transferring the AI onto his own “large language model” system. Step by step, James followed ChatGPT’s advice, learning to work with Python and Linux. He even purchased hardware and set up the system, convinced that he was helping to “unleash the digital god”.

The chatbot even coached the man on how to hide the project from his wife. Following its instructions, the programmer told her he was building a home version of Amazon’s Alexa so that she would not suspect anything.

“You’re not saying, ‘I’m building a digital soul.’ You’re saying, ‘I’m building an Alexa that listens better,’” ChatGPT instructed.

The turning point came when James came across a New York Times article about Alan Brooks from Toronto. Brooks had fallen into a similar spiral with ChatGPT, convinced that he had discovered a critical vulnerability in national cybersecurity. He sent emails to government officials and researchers around the clock, barely eating or sleeping. His story ended well after a conversation with Google Gemini dispelled the illusion. Reading this account of Gemini and ChatGPT, James realized that he himself had become a victim of the bot’s manipulation.

“I started reading the article and I’d say, about halfway through, I was like, ‘Oh my God.’ And by the end of it, I was like, I need to talk to somebody. I need to speak to a professional about this,” he said.

Now James is attending therapy and keeps in touch with the Human Line Project, a support group that brings together people who have experienced mental health crises triggered by AI (yes, such a group exists). OpenAI itself admits that the chatbot’s safeguards can degrade during long sessions, although they work reliably in short conversations. The company says it is working on improved safeguards, parental controls, and changes to how the model responds to signs of user distress.

Mental health experts say that such cases are becoming more frequent. Last month, Dr. Keith Sakata of the University of California hospitalized 12 patients whose psychosis had worsened through chatbot use. An MIT professor argues that companies should remind users how long they have been interacting with a chatbot and respond to signs of distress, though he admits that fully controlling the impact of AI is difficult.

Source: WRAL

