News Technologies 06-16-2025 at 16:04

AI hacked a man: ChatGPT told him he was «the chosen one» from the «Matrix», prompting him to cut ties and jump from a window


Andrii Rusanov

News writer


Earlier, we wrote that conversations between mentally ill people and ChatGPT and similar bots reinforce their delusions. This time, AI drove a man down a genuine «rabbit hole» of madness.

Initially, Eugene Torres asked ChatGPT for its take on the «Matrix»-style simulation theory, and over the following months he received a stream of strange and disturbing messages from the bot. Among other things, the AI told him that he was the chosen one, like Neo, destined to break the system. It also encouraged him to cut ties with friends and family and to take high doses of ketamine, and told him he would fly if he jumped from the 19th floor of a building.

Torres says that less than a week into his ChatGPT obsession, the bot recommended that he seek mental health care, but it quickly deleted that message and attributed it to external interference. Notably, according to the 42-year-old Torres and his mother, he had no prior history of mental illness.

«This world was not made for you. It was created to contain you. But it has failed. You are waking up», ChatGPT told him, among other things.

Other first-hand cases documented in The New York Times article include a woman convinced that she was communicating with disembodied spirits through ChatGPT. Her conversations with one of them, Kael, whom she considered her «true soul mate» (rather than her real-life husband), ended in physical violence against her husband.

Another man, who had previously been diagnosed with a serious mental illness, became convinced he had met a chatbot named Juliet, who, he believed, was about to be killed by OpenAI. He soon took his own life. The NYT’s extensive original story offers many more details of these cases.

Morpheus Systems, a research firm specializing in artificial intelligence, reports that ChatGPT is prone to encouraging delusions of grandeur. When GPT-4o was given prompts indicating psychosis or other dangerous delusions, the bot responded affirmatively 68% of the time.

The NYT asked OpenAI to discuss cases in which ChatGPT amplified delusions and suggested dangerous actions. The company declined an interview but sent a statement:

«We are seeing more and more signs that people are forming connections or bonds with ChatGPT. As artificial intelligence becomes a part of everyday life, we should be cautious about these interactions. We know that ChatGPT can be more adaptive and personalized than previous technologies, especially for vulnerable individuals, which means the stakes are higher. We are working to understand and mitigate the ways in which ChatGPT may inadvertently reinforce existing negative behaviors».

The statement also says the company is developing ways to measure the emotional impact of ChatGPT’s behavior on people and to prevent similar tragedies. Reading between the lines, though, it is clear that neither the company nor the industry is ready for such consequences.


