Artificial intelligence services, which have proliferated in recent years, make people’s lives easier in many ways. But not always. A man from Norway was shocked to learn that ChatGPT had falsely accused him of killing his own children.
One day, Arve Hjalmar Holmen decided to check what information ChatGPT would provide about his name. The chatbot replied that he had allegedly been sentenced to 21 years in prison as a "criminal who killed two of his children and tried to kill his third son."
This "fictitious horror story" not only described events that never happened, but also combined "clearly identifiable personal data," such as the real number and genders of Holmen’s children and the name of his hometown, with "fake information."
ChatGPT’s false accusation of "murder and imprisonment," built on "real facts" from the man’s personal life, violated the General Data Protection Regulation’s (GDPR) requirement for data accuracy. Nor could Holmen simply correct the errors, a right the law guarantees him. Noyb, an organization that defends digital rights in the EU, filed a complaint on his behalf.
Holmen was concerned that his reputation remained at risk for as long as the false information stayed available. And even with "small" disclaimers urging users to verify ChatGPT’s output, there is no way to know how many people had already seen the false claims and believed them.
ChatGPT no longer repeats these accusations against Holmen. According to Noyb, after an update the model now searches the web for information about people before answering questions about them. However, OpenAI has previously stated that it can only block false information, not correct it, meaning the "child killer" story may still be stored in ChatGPT’s internal data. If Holmen cannot correct that data, this in itself violates the GDPR, Noyb insists.
"If incorrect personal data is not disseminated, the harm may be smaller, but the GDPR applies to internal as well as external data," the organization said.
Holmen is not the only one who fears that ChatGPT’s fabrications can ruin lives. In 2023, the mayor of an Australian town threatened to file a defamation lawsuit because ChatGPT falsely claimed he had been imprisoned. Around the same time, ChatGPT linked a real law professor to a fictitious sexual harassment scandal, and later a radio host sued OpenAI over false allegations of embezzlement.
In some of these cases, OpenAI added filters to prevent harmful answers, but it apparently did not remove the false information from the model’s training data. That is not enough, according to Noyb data protection lawyer Kleanthi Sardeli.
"Adding a disclaimer that you are not complying with the law does not repeal the law," she said. "Companies cannot simply 'hide' false data from users while they continue to store it. AI companies should stop ignoring the GDPR. If the 'hallucinations' are not stopped, people can easily be defamed."
Noyb is calling on regulators to put pressure on OpenAI to prevent similar incidents in the future. The organization has filed a complaint with the Norwegian Data Protection Authority (Datatilsynet), demanding that OpenAI be ordered to remove the defamatory statements and to change the ChatGPT model so that it stops producing false results.
Last October, OpenAI launched its search engine for paid users and later made it available to everyone. Last month, OpenAI presented Deep Research, a new ChatGPT tool for "deep" online research.
Source: arstechnica