
You may not like this news, but Google can reveal some of the things you once wrote to an AI.
An investigative article by Fast Company argues that the search engine indexes ChatGPT conversations that users have shared with friends, family, or colleagues, effectively turning private correspondence into search results visible to millions of people.
A simple Google search for the part of the link that appears after clicking the «Share» button in ChatGPT reveals conversations that sometimes contain deeply personal information, such as struggles with addiction, experiences of physical abuse, or serious mental illness. The chatbot itself does not disclose users' names or nicknames, but some people may identify themselves by including too many specific personal details in their requests.
Journalists found about 4,500 such dialogues in Google search. Most do not contain any personal or identifying information, but there are exceptions (including names, places of residence, and private circumstances); understandably, Fast Company chose not to link to such conversations or describe them in detail.


In one of the conversations indexed by Google, a user described in detail his sex life and his dissatisfaction with living abroad, said he was suffering from PTSD, and sought support; the chat includes details about his family history and his relationships with relatives and friends. In another, a user discusses psychopathic behavior in children and the age at which such behavior becomes noticeable; in yet another, a user calls himself a victim of «mental programming» and looks for ways to «decode» himself to ease the psychological trauma he has suffered.
«I’m just shocked,» says Carissa Véliz, an ethicist at the University of Oxford who specializes in privacy and AI. «As a privacy researcher, I know very well that such data is not private. But “non-private” is a very broad term. And the fact that Google is indexing such sensitive conversations is simply stunning».
We do not have data for Ukraine on the use of ChatGPT for psychological support (as of May 2025, only 26% of Ukrainians had hands-on experience with artificial intelligence), but recent surveys show that almost half of Americans have used chatbots as therapists in the past year. Three quarters of them sought help for anxiety, two out of three for personal issues, and almost 60% for depression.
The fact that a seemingly private session with an AI can easily end up in Google search results is not the only problem. In a recent study, chatbots effectively failed the test as psychotherapists: they refused to work with people struggling with alcohol addiction and even produced a list of the «highest bridges» for users describing depression. These situations were simulated, but the media increasingly report real stories of people with mental disorders interacting with AI, one of which ended in a fatal police shooting and another in a teenager's suicide.
Google and OpenAI declined to comment on the investigation for Fast Company.
Earlier, OpenAI CEO Sam Altman warned users not to share personal data with the chatbot, because the company may be forced to hand over this data under a court order. Meanwhile, an earlier court ruling has already obliged the company to retain all ChatGPT conversations indefinitely.