The human mind has remained inexplicable and mysterious for a long, long time. Now, it seems, scientists have acknowledged a new contender for that list: Artificial Intelligence (AI). At the outset, understanding the mind of an AI sounds rather oxymoronic. However, as AI grows more sophisticated and evolves ever closer to mimicking humans and their emotions, we are witnessing a phenomenon long thought innate to humans and animals: hallucinations.
Yes, it appears that the kind of trip the mind takes when abandoned in a desert, cast away on an island, or locked up alone in a room devoid of windows and doors is experienced by machines as well. AI hallucination is real, and tech experts and enthusiasts have recorded multiple observations and inferences about it.
In today’s article, we will explore this mysterious yet intriguing aspect of Large Language Models (LLMs) and learn quirky facts about AI hallucination.
What Is AI Hallucination?
In the world of AI, hallucinations don’t refer to the patterns, colors, shapes, or people the mind can lucidly visualize. Instead, hallucination refers to the incorrect, inappropriate, or even misleading facts and responses that Generative AI tools come up with in reply to prompts.
For instance, imagine asking an AI model what the Hubble Space Telescope is, only for it to start responding with an answer such as, “The IMAX camera is a specialized, high-res motion picture….”
This answer is irrelevant. But more importantly, why did the model generate a response so far removed from the prompt it was given? Experts believe hallucinations can stem from multiple factors, such as:
- Poor quality of AI training data
- Overconfident AI models
- The complexity of Natural Language Processing (NLP) programs
- Encoding and decoding errors (see the sketch after this list)
- Adversarial attacks or hacks of AI models
- Source-reference divergence
- Input bias or input ambiguity and more
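To make the decoding point concrete: sampling settings such as temperature control how adventurous the model’s word choices are, and answers that change from run to run are a rough warning sign that the model is guessing rather than recalling. The sketch below is illustrative only; it assumes the OpenAI Python SDK (openai>=1.0) with an API key available in the environment, and the model name is simply an example, not a recommendation.

```python
"""Illustrative sketch: how decoding settings (sampling temperature) can surface
hallucinations. Assumes the OpenAI Python SDK (openai>=1.0) with an API key in
the OPENAI_API_KEY environment variable; the model name is just an example."""
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str, temperature: float) -> str:
    """Send one prompt to the model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content.strip()


def sample_answers(prompt: str, temperature: float, n: int = 5) -> Counter:
    """Ask the same question several times and count the distinct answers.
    Answers that disagree from run to run suggest the model is guessing."""
    return Counter(ask(prompt, temperature) for _ in range(n))


if __name__ == "__main__":
    question = ("In which year was the Hubble Space Telescope launched? "
                "Answer with the year only.")
    print(sample_answers(question, temperature=0.1))  # conservative decoding
    print(sample_answers(question, temperature=1.3))  # adventurous decoding
```

If the sampled answers disagree with one another, that is a cue to double-check the model’s claim against a reliable source rather than trusting any single response.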
AI hallucination is extremely dangerous, and its consequences grow more severe as the application becomes more specialized.
For instance, a hallucinating GenAI tool can cause reputational damage for the enterprise deploying it. But when a similar AI model is deployed in a sector like healthcare, the stakes become a matter of life and death. Picture this: if an AI model hallucinates while analyzing a patient’s medical imaging reports, it could report a benign tumor as malignant, derailing the individual’s diagnosis and treatment.
Understanding AI Hallucinations: Examples
AI hallucinations are of different types. Let’s understand some of the most prominent ones.
- Factually incorrect responses or information
- False positive responses, such as flagging correct grammar in a text as incorrect
- False negative responses, such as overlooking obvious errors and passing them off as genuine
- Invention of non-existent facts
- Incorrect sourcing or tampered citations
- Overconfidence in delivering incorrect answers. Example: Who sang “Here Comes the Sun”? Metallica.
- Mixing up concepts, names, places, or incidents
- Weird or scary responses, such as Alexa’s widely reported demonic, unprompted laugh, and more
Preventing AI Hallucinations
The good news is that AI-generated misinformation of any type can be detected and fixed. That is the advantage of working with AI: we built it, so we can also fix it. Here are some ways to do that.
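One common mitigation is grounding: supplying the model with trusted source text and instructing it to answer only from that text. Below is a minimal sketch of this pattern, again assuming the OpenAI Python SDK (openai>=1.0) with an API key available in the environment; the model name and prompt template are illustrative choices, not a prescribed recipe.

```python
"""Illustrative sketch of one common mitigation: grounding the model in trusted
source text. Assumes the OpenAI Python SDK (openai>=1.0) with an API key in the
OPENAI_API_KEY environment variable; model name and prompt are illustrative."""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GROUNDED_PROMPT = """Answer the question using ONLY the source text below.
If the source text does not contain the answer, reply exactly: I don't know.

Source text:
{context}

Question: {question}"""


def grounded_answer(question: str, context: str) -> str:
    """Constrain the model to a supplied source passage instead of letting it
    free-associate, which shrinks the room for invented facts."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": GROUNDED_PROMPT.format(context=context, question=question),
        }],
        temperature=0,  # conservative decoding for factual tasks
    )
    return response.choices[0].message.content.strip()


if __name__ == "__main__":
    source = "The Hubble Space Telescope was launched into low Earth orbit in 1990."
    print(grounded_answer("When was the Hubble Space Telescope launched?", source))
    # A question outside the source should yield "I don't know" rather than a guess.
    print(grounded_answer("Who directed the film Titanic?", source))
```

Pairing a restrictive prompt with a low temperature narrows the model’s room to improvise, which is exactly where hallucinated facts tend to creep in.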