Saturday, 23 August 2025

Hallucination is Not a Bug, It’s a Feature: Lessons for AI and Humanity

Dear Friends,


Recently, I was watching a documentary on Makoko in Lagos, also known as the Venice of Nigeria and the largest floating slum in Africa. It is a floating village. Some years back, we visited Kerala and stayed in a floating cottage and houseboat on the Kochi backwaters, and earlier on the Alleppey backwaters. The purpose there was recreation in nature. The Makoko scene, however, is completely different. I was astonished to witness their life on the waters. The village is surrounded by dirty sewage water; people commute by boat, and a few children were swimming in those waters. Constant fear of natural calamities, epidemics, and neighborhood disputes haunts the people. Just opposite this slum, we can witness Lagos city, the largest urban agglomeration in Nigeria and one of the fastest-growing megacities in the world.

Now, what is today’s story? Let’s contemplate the learning & thinking process of kids who grow up in rich conditions versus slum conditions. If you ask them the same question: What are you most scared of?

A rich kid may respond (of course, not all of them): “Monsters under the bed, using public transportation, or losing power & internet.”

A slum kid may respond (again, not all): “Demolition of their temporary shelter by the government, floods, hunger, or fights in the neighborhood.”

If a rich kid sees the slum kid’s answer, it causes amusement, and vice versa. There is nothing wrong or right here.

However, there is a huge uproar when AI responds to certain questions differently. After all, an AI model is like a child: what you feed it and how you train it is what comes out. That said, these wrong answers are causing huge financial damage to AI model owners. A human learns year after year and then makes decisions; when those decisions go wrong, we accept it and call it "human error." But we are not giving AI sufficient time to learn. If it says something wrong, we immediately call it a hallucination. (The dictionary meaning: a sight, sound, smell, taste, or touch that a person believes to be real but is not.)

A Vectara study found that even the best models still make things up at least 0.7% of the time. According to allaboutai.com, these “hallucinations” caused $67.4 billion in damages globally in 2024.

We all need to understand: hallucination is NOT A BUG. It is A FEATURE. That is how AI understands and responds, much like an average human being. Let's not misunderstand it. Future jobs will include AI Human Reviewers, who will teach AI specific lessons on domains and issues and reduce hallucination. Later, AI Tutors will comprehensively teach humans! This is going to be a new cycle.

Today, The Hindu published an editorial, "Set the guardrails for AI use in courtrooms," in the context of a recent case where an AI transcription tool repeatedly transcribed the claimant's name, "Noel", as "no." If AI cites a paper (Journal of Applied AI, Vol. 12, 2019) that does not exist, we need to help the AI understand the issue. If you ask AI, "What's the capital of Brazil?" and it confidently replies "Buenos Aires" instead of Brasília, we need to teach it. Such hallucination scenarios have to be resolved patiently, together with the AI.

In the 1990s, we hired many manual testers to catch software bugs. Over time, manual testers started vanishing, and the era of test automation began. Now the same manual tester is coming back in a new avatar called an AI Human Reviewer. Their job is to catch and correct hallucinations before they reach users.

Human judgment and AI hallucination will always exist. Both vary with time, context, data, and many other factors. When we accept human judgment, whether as human error or rational decision-making, the same acceptance should apply to hallucination. Let's accept it.

The lasting solution is to develop AI systems with a Human in the Loop. Fully autonomous systems are not practical for humanity (especially in the Indian context), and a human race without technological aid would slip back to a primitive state. Neither radical scenario is good for society.
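As a thought experiment, a human-in-the-loop gate can be sketched in a few lines of Python. Everything here is hypothetical, the `Draft` structure, the `route` function, and the 0.9 confidence threshold are illustrative assumptions, not any real product's API; the idea is simply that low-confidence answers go to an AI Human Reviewer instead of straight to the user.

```python
# Hypothetical sketch of a human-in-the-loop (HITL) review gate.
# All names and thresholds below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Draft:
    question: str
    answer: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0


def route(draft: Draft, threshold: float = 0.9) -> str:
    """Decide whether an answer ships directly or goes to a human reviewer."""
    if draft.confidence >= threshold:
        return "publish"       # high confidence: goes straight to the user
    return "human_review"      # low confidence: a reviewer checks for hallucination


# Example: a confident factual answer versus a shaky one.
good = Draft("What's the capital of Brazil?", "Brasília", confidence=0.97)
shaky = Draft("What's the capital of Brazil?", "Buenos Aires", confidence=0.55)
print(route(good))   # publish
print(route(shaky))  # human_review
```

In a real system the confidence signal would come from the model or an external fact-checker, and the "human_review" branch would feed a reviewer queue; the sketch only shows the routing decision itself.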

I wish policymakers would balance this act, especially in the Indian context, with 56 million rich (income above ₹30 lakh), 432 million middle class (₹5 lakh to ₹30 lakh), 732 million aspirers (₹1.25 lakh to ₹5 lakh), and 196 million destitute (below ₹1.25 lakh), per 2021 data. A Bharat AI Policy should cater to these levels, and AI training data should represent all four classes.

Ravi Saripalle
