When a new product is released, it’s usually tested to ensure it works before it comes to market, but that convention is not being followed with the AI software from OpenAI, Google, Microsoft, and others. Instead, these companies, which should know better, are trying to one-up each other by introducing and overhyping AI software that’s untested, noticeably buggy, and often just plain wrong.
Brian Chen of The New York Times agrees: “Companies are releasing A.I. products in a premature state partly because they want people to use the technology to help them learn how to improve it. In the past, when companies unveiled new tech products like phones, what we were shown — features like new cameras and brighter screens — was what we were getting. With artificial intelligence, companies are giving a preview of a potential future, demonstrating technologies that are being developed and working only in limited, controlled conditions. A mature, reliable product might arrive — or might not.”
Google, which built its reputation on (once) providing the best search engine, has been positioning results from its very buggy AI product ahead of its normal search results. You’d think it would know better, but Google is acting not in its users’ best interest but out of fear of Microsoft.
Suddenly, Google is competing with Microsoft, which threatens to upend the search market with its huge investment in OpenAI. Competition is usually good for the consumer, but exaggerated claims and untested products are creating an unreliable experience for all of us.
AI has some real value right now, but it’s much less than its proponents want to admit.
Today’s AI is based on LLMs (large language models), which answer questions by learning statistical relationships from millions of ingested text documents, a computationally and energy-intensive process. An LLM looks for word patterns to best match answers to queries. But it has no intelligence and can’t distinguish between good documents and bad.
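To make that word-pattern idea concrete, here is a minimal toy sketch of my own in Python. It is not how any production LLM is actually built (real systems use neural networks with billions of parameters, not raw word counts), but it illustrates the point: the model learns which words tend to follow which from the text it ingests, and it faithfully reproduces whatever was in that text, bad documents included, with no way to judge their quality.

```python
import random
from collections import defaultdict, Counter

# Toy "training data": the model has no notion of which sentence is trustworthy.
corpus = [
    "cheese sticks to pizza because of melted fat",
    "cheese sticks to pizza because of glue",  # a bad document, ingested anyway
]

# Learn simple word-pair statistics: for each word, count what follows it.
bigrams = defaultdict(Counter)
for doc in corpus:
    words = doc.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def continue_text(prompt: str, max_words: int = 6) -> str:
    """Extend a prompt by repeatedly sampling the next word in
    proportion to how often it followed the previous word."""
    words = prompt.split()
    for _ in range(max_words):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        nxt = random.choices(list(followers), weights=list(followers.values()))[0]
        words.append(nxt)
    return " ".join(words)

# Roughly half the time this toy model "recommends" glue, because the
# statistics say so; it has no mechanism for judging which source was serious.
print(continue_text("cheese sticks to pizza"))
```

The same limitation, scaled up enormously, is why a real system can surface a joke or a prank answer as if it were fact.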
The problem is that its data comes from a range of sources that are not always accurate. It often answers a question out of context, uses incorrect data, and sometimes creates completely false answers, called hallucinations.
As a widely reported example, Google’s AI product recommended using Elmer’s Glue to keep cheese from sliding off pizza and suggested the consumption of rocks as a source of healthy minerals.

Google CEO Sundar Pichai defended the errors, saying the company had tested the software with over a billion queries over the past year. Google has also blamed users, saying, “Many of the examples we’ve seen have been uncommon queries and we’ve also seen examples that were doctored or that we couldn’t reproduce.”
Gary Marcus, an AI expert and NYU professor whose work I follow, says the [AI] tech is now about 80% correct, but the final 20% is extremely challenging and may be the hardest thing of all to accomplish.
“Few fields have been more filled with hype and bravado than artificial intelligence. It has flitted from fad to fad decade by decade, always promising the moon, and only occasionally delivering.”
The best way to use AI today is to not assume the answers are correct when asking for specific facts. It’s best used for querying information whose accuracy you can assess based on your experience and common sense. For example, I’ve used ChatGPT successfully to ask how to create a PDF book with pages that flip, what the inside dimensions of typical side-by-side refrigerators are, what the options are for traveling from San Diego to Bozeman, what the best sites are for auto reviews, and to create a job description for a technical manager. Most of the answers can be found on the web with a search or two, but AI software is often much faster and provides the results in a convenient format without requiring additional clicks.
Gary Marcus describes the glue-on-pizza error as partial regurgitation. Compare the automatically generated answer on the left below (in this case produced by Google’s AI) with what appears to be the original source on the right.

In this case, the AI relied on a source that was not serious. Other nonsensical results were attributed to using The Onion as a source of data. Google explained that there were so few answers to the original question that it selected the only one it found. And that’s the problem with AI: it can spout back what was noted somewhere, but it has no way to discern its accuracy.
As far as search is concerned, we are in for a rough time until these companies remember that we are the customers, that we have a low tolerance for being their guinea pigs, and that we have long memories. More than a decade ago, Apple released Apple Maps, which was very buggy and often just plain wrong. While the product has greatly improved, and in some ways is even better than Google Maps, it still has a reputation for being much inferior after all these years. First impressions are very often lasting impressions.