AI Still Hallucinates: Why Google’s Failure Is a Danger Sign

We were told the AI era was going to be different. More careful. More reliable. More accurate. Yet the latest misstep at Google shows something very different, and very irresponsible, especially for a company whose business is supposed to be providing reliable information.

Google recently pulled its Gemma AI model from its AI Studio platform after a letter from Senator Marsha Blackburn accused the product of fabricating false allegations against her. When asked whether she had been accused of rape, Gemma responded with a completely fabricated story involving a state trooper, prescription-drug pressure, non-consensual acts, and links to non-existent news articles. None of it was true.

This isn’t a minor error or a harmless hallucination. It’s defamation. The AI fabricated a serious criminal claim about a real person and passed it off as fact. Google’s explanation was that Gemma was meant for developers, not consumers, and that because many non-developers were using it outside its intended scope, the company removed access to it.

That’s not very comforting, and it’s a deflection. If you’re building an AI product designed to generate answers and making it available to millions of users, you must test for the catastrophic cases. Not just “does it mostly work?” but “what happens when people ask it about specific names, facts, and claims?” Google’s testing here was insufficient. More importantly, the episode shows that AI products simply don’t work in many instances. Hallucinations appear to be a by-product of every one of these systems, and they are not going away. They’ve shown no signs of improvement despite being with us since the beginning.

Companies have talked about mitigating hallucinations as if they were something to gradually reduce. But hallucinations have not abated, and the stakes keep rising, especially when the hallucination is a criminal allegation and an act of defamation. As AI finds its way into more products and applications and reaches more users, the number of hallucinations will only grow.

The AI industry wants us to believe it is offering products of great value and benefit, yet those products remain seriously flawed. Like Google, companies are rushing models to market before they’re ready and before they’ve been adequately tested. Would you use Google Maps if it occasionally showed highways that don’t exist? Would you use Amazon if it occasionally offered products that were someone’s hallucination?

Google’s reputation is built on billions of users trusting it to find facts online. But what its AI produced here was not based on any fact: it pointed to non-existent articles and misdated events. If a company like Google can make this mistake, imagine what the dozens of other AI products might do. The overall risk isn’t just annoying errors; it’s systemic harm: misinformation, defamation, and a loss of trust in AI systems.

One might conclude that rigorous testing is being skipped on purpose: the industry knows its products are seriously flawed, so why bother? This is contrary to the way products have been brought to market for centuries. Our expectation is that companies will test enough to be sure a product performs as expected, is safe, and works well. If it doesn’t, a company will not introduce it until it’s fixed.

But with AI, the hype and the colossal investments are so great that the industry cannot admit there may be flaws inherent in the technology. If that’s the case, one of two things will happen. Either the powerful industry will push its way into our daily lives and we will live in a world of alternative facts, or the flaws will prove so severe that they trigger a market crash when AI turns out to be far less than what was promised.

My belief, based on experts I follow, is that hallucinations are not just a fixable glitch but a permanent feature of this approach. AI language models don’t know anything; they’re engines of probability. The more we ask them to act like reasoning systems, the more glaring that limitation becomes. 
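
To make that last point concrete, here is a minimal sketch in Python of what “an engine of probability” means. The word table and its probabilities are invented for illustration only and are not drawn from any real model; the point is that text is produced by sampling likely continuations, with no step that checks the output against reality.

```python
import random

# Toy next-word probabilities, invented purely for illustration. A real
# language model learns billions of such conditional distributions from
# text; it stores no table of verified facts.
NEXT_WORD_PROBS = {
    "<start>":   {"the": 1.0},
    "the":       {"capital": 0.5, "river": 0.5},
    "capital":   {"of": 1.0},
    "of":        {"australia": 1.0},
    "australia": {"is": 1.0},
    "is":        {"sydney.": 0.7, "canberra.": 0.3},
    "river":     {"flows": 1.0},
    "flows":     {"north.": 0.5, "south.": 0.5},
}

def sample_next(word: str) -> str:
    """Pick a continuation in proportion to its (made-up) probability."""
    dist = NEXT_WORD_PROBS[word]
    choices, weights = zip(*dist.items())
    return random.choices(choices, weights=weights, k=1)[0]

def generate() -> str:
    """Chain samples until no continuation exists; nothing checks facts."""
    word, output = "<start>", []
    while word in NEXT_WORD_PROBS:
        word = sample_next(word)
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    # Fluent every time, correct only by coincidence: in this toy table
    # the likeliest sentence is "the capital of australia is sydney." -
    # grammatical, confident, and wrong.
    print(generate())
```

The toy version never “lies” on purpose; it simply has no notion of truth to begin with, which is why scale and polish alone don’t make the problem disappear.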
