Man who looked himself up on ChatGPT was told he "killed his children"
Imagine putting your name into ChatGPT to see what it knows about you, only for it to confidently, yet wrongly, claim that you had been jailed for 21 years for murdering members of your family.
Well, that's exactly what happened to Norwegian Arve Hjalmar Holmen last year after he looked himself up on ChatGPT, OpenAI's widely used AI-powered chatbot.
Not surprisingly, Holmen has now filed a complaint with the Norwegian Data Protection Authority, demanding that OpenAI be fined for its distressing claim, the BBC reported this week.
In its response to Holmen's query about himself, the chatbot said he had "gained attention due to a tragic event."
It went on: "He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020. Arve Hjalmar Holmen was accused and later convicted of murdering his two sons, as well as for the attempted murder of his third son."
The chatbot said the case "shocked the local community and the nation, and it was widely covered in the media due to its tragic nature."
But nothing of the sort happened.
Understandably upset by the incident, Holmen told the BBC: "Some think that there is no smoke without fire; the fact that someone could read this output and believe it is true is what scares me the most."
Digital rights group Noyb has filed the complaint on Holmen's behalf, stating that ChatGPT's response is defamatory and contravenes European data protection rules regarding the accuracy of personal data. In its complaint, Noyb said that Holmen "has never been accused nor convicted of any crime and is a conscientious citizen."
ChatGPT displays a disclaimer saying that the chatbot "can make mistakes" and that users should "check important info." But Noyb lawyer Joakim Söderberg said: "You can't just spread false information and in the end add a small disclaimer saying that everything you said may just not be true."
While it's not uncommon for AI chatbots to spit out erroneous information (such mistakes are known as "hallucinations"), the egregiousness of this particular error is shocking.
Another hallucination that hit the headlines last year involved Google's Gemini AI tool, which suggested using glue to stick cheese to pizza. It also claimed that geologists had recommended that humans eat one rock per day.
The BBC points out that OpenAI has updated ChatGPT's model since Holmen's search last August, which means the chatbot now trawls through recent news articles when generating a response. But that doesn't mean ChatGPT now produces error-free answers.
The story highlights the need to check responses generated by AI chatbots, and not to trust their answers blindly. It also raises questions about the safety of text-based generative AI tools, which have operated with little regulatory oversight since OpenAI opened up the sector with the launch of ChatGPT in late 2022.
Digital Trends has contacted OpenAI for a response to Holmen's unfortunate experience, and we will update this story when we hear back.