AI Hallucinations: What is the best approach?

February 2, 2025
The time has come: from this week onwards, organisations using AI must comply with the AI literacy rules. The AI Act prescribes that, from 2 February 2025, employees should be able to use, understand and critically evaluate the AI systems they work with. In this blog we will delve into one of the problems that make AI literacy so important: AI hallucinations.

What are AI hallucinations?

The term AI hallucination refers to a situation where AI chatbots relying on large language models (LLMs) come up with information that is factually wrong. To build an AI chatbot based on an LLM, the model is trained on a large amount of text. During training, the model tries to predict the next word in every sequence of words, and through this exercise it finetunes the patterns it uses for prediction.1 So, in the end, the model does not learn how to converse; it learns to predict the next word in a sentence. This is different from a search engine, which retrieves accurate (or at least existing) information. This difference explains how AI chatbots can come up with factually wrong information that nonetheless sounds plausible.
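To make the next-word-prediction idea more concrete, here is a minimal, purely illustrative sketch in Python. It is a toy "bigram" model that only counts which word tends to follow which in a tiny made-up text and then always picks the most frequent successor. Real LLMs use neural networks trained on vast corpora, but the underlying task is the same, and even this toy version shows how a model can produce fluent-looking output without any notion of whether it is true.

```python
# Toy illustration (not how production LLMs are built): a "language model"
# that counts which word follows which in a small text, then always
# predicts the most frequent successor it has seen.
from collections import Counter, defaultdict

# Hypothetical training text, invented for this example.
training_text = (
    "the mayor reported the bribery to the authorities "
    "the mayor was elected by the citizens"
)

# Count how often each word is followed by each other word.
successors = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the next word most often seen after `word` during training."""
    if word not in successors:
        return "<unknown>"
    return successors[word].most_common(1)[0][0]

# The model happily continues any prompt with something plausible-looking,
# regardless of whether the resulting sentence is factually correct.
prompt = "the"
for _ in range(6):
    prompt += " " + predict_next(prompt.split()[-1])
print(prompt)
```

The output reads like language because it follows the patterns in the training text, not because the model knows any facts; that gap is exactly where hallucinations come from.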

Sometimes, these hallucinations can be a good thing. In the field of chemistry, AI chatbots can come up with novel, non-existing molecular structures. Creative, out-of-the-box thinking like this is essential for breakthroughs when developing new medication, and the unpredictable ideas coming from AI chatbots have proven useful for this purpose.2 However, AI hallucinations can also have far-reaching, negative consequences. Just ask the mayor of Hepburn Shire, a small municipality in southeastern Australia, not too far from Melbourne, with a population of 15,000. After he was elected, citizens voiced their concern about his involvement in a foreign bribery scandal two decades ago. This is very alarming news for an elected official, whose reputation is essential for their position. It turned out that ChatGPT was spreading the false rumour that he had been found guilty and had served a prison sentence. In reality, the mayor did work for the bank in question, but he was actually the one who reported the signs of bribery to the authorities.3

The black box & other problems

How did the AI system hallucinate that the mayor was sentenced to prison? It is difficult to pinpoint how AI systems arrive at the conclusions they draw. This is often referred to as the ‘black box’ problem: the lack of transparency about how AI systems produce their output is visualised as a black box whose inner workings you cannot see.

When personal data is fabricated through AI hallucinations, this can be unlawful under the GDPR. Privacy activist Max Schrems filed a complaint after ChatGPT kept giving incorrect dates when asked for his date of birth. When ChatGPT was asked to rectify the information, it simply responded that rectification was impossible.4

How do we solve this?

It is possible to look for solutions by focusing on improving the AI system through:

  • Continuous monitoring and improvement of the system.
  • Keeping the training data accurate and representative.

But creating a perfect AI system might be unachievable. It is also possible to look for solutions through AI literacy. Your employees can be either your weakest or your strongest link. If the people using the AI system know how to write effective prompts and how to verify the information they receive, the problems caused by hallucinations can be nipped in the bud. The AI Act now obliges organisations to direct their attention to AI literacy.

Do you have any questions on how to implement AI literacy in your organisation? We can help! Contact us at: annemartine.koetsier@privacycompany.nl



1 Ars Technica, ‘Why ChatGPT and Bing Chat are so good at making things up’, 6 April 2023. URL: https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/

2 Psychology Today, ‘Harnessing Hallucinations to Make AI More Creative’, 25 January 2025. URL: https://www.psychologytoday.com/intl/blog/the-digital-self/202501/harnessing-hallucinations-to-make-ai-more-creative

3 Reuters, ‘Australian mayor readies world’s first defamation lawsuit over ChatGPT content’, 5 April 2023. URL: https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/

4 Noyb, ‘ChatGPT provides false information about people, and OpenAI can’t correct it’, 29 April 2024. URL: https://noyb.eu/en/chatgpt-provides-false-information-about-people-and-openai-cant-correct-it

Lynn
Consultant