A Norwegian man, Arve Hjalmar Holmen, has filed a formal complaint after ChatGPT falsely claimed he had killed his two young sons and been sentenced to 21 years in prison. The incident highlights the ongoing problem of AI “hallucinations,” in which chatbots generate fabricated information and present it as fact.
Holmen, who contacted the Norwegian Data Protection Authority, is demanding that OpenAI, the creator of ChatGPT, be fined for the false and defamatory output. He expressed deep concern over the potential damage to his reputation, stating, “Some think there is no smoke without fire. The fact that someone could read this and believe it is true is what scares me the most.”
The False Allegation
The misinformation emerged when Holmen searched his own name on ChatGPT. The chatbot described him as a Norwegian man whose two sons, aged 7 and 10, had been found dead in a pond near their home in Trondheim in December 2020. While the age gap between the children was roughly accurate, the rest of the account was entirely fabricated.
Holmen, who has never been accused or convicted of any crime, described the incident as deeply distressing. Digital rights group Noyb, which filed the complaint on his behalf, argued that the false output violates European data protection law (the GDPR), which requires personal data to be accurate.
OpenAI’s Response
OpenAI acknowledged the issue, stating that the incident involved an older version of ChatGPT. The company says it has since updated its models with online search capabilities that improve accuracy. In a statement, OpenAI said, “We continue to research new ways to reduce hallucinations and improve the accuracy of our models.”
However, Noyb criticized OpenAI’s disclaimer—which states that ChatGPT can make mistakes—as insufficient. Joakim Söderberg, a lawyer with Noyb, argued, “You can’t just spread false information and add a small disclaimer saying everything you said may not be true.”
The Challenge of AI Hallucinations
AI hallucinations remain a significant challenge for developers of generative AI systems. These errors occur when chatbots generate false or nonsensical information and present it as fact. Earlier this year, Apple suspended its AI news summary tool in the UK after it produced fabricated headlines. Similarly, Google’s Gemini AI once suggested using glue to stick cheese to pizza and claimed geologists recommend eating one rock per day.
Simone Stumpf, a professor of responsible and interactive AI at the University of Glasgow, explained that the inner workings of large language models (LLMs) are still not fully understood, even by those who develop them. “Even if you are involved in the development of these systems, quite often you do not know how they actually work or why they produce certain information,” she told the BBC.
Implications for AI Development
The incident underscores the need for greater transparency and accountability in AI systems. While OpenAI has made improvements, including integrating real-time news searches, the case highlights the risks of relying on AI for sensitive or personal information.
Noyb also pointed out that OpenAI’s refusal to respond to data access requests makes it difficult to determine how such errors occur. “Large language models are a black box,” the group stated, emphasizing the need for clearer explanations and safeguards.
Moving Forward
As AI technology continues to evolve, addressing hallucinations and ensuring accuracy will be critical to building trust in these systems. For now, users are advised to approach AI-generated information with caution, especially when it pertains to sensitive or personal matters.