I’ve spent more than five decades designing, building, and writing about consumer technology. In that time, I’ve watched countless waves of hype promising to reshape our lives. Some actually did — the personal computer, the smartphone, the internet itself. Many promised but never came close — the flying car, the home robot, the single-prick blood-testing machine, the Segway. Which is why I’m increasingly wary of how easily we’re all buying into the promises coming from today’s leaders of the AI industry.
But in spite of the promises, AI has not yet taken the next big step: inventing new content, creating new medicines, or producing new inventions through logical reasoning. Human-level reasoning has been predicted again and again, but it is still missing from AI.
Today AI is essentially just a better search engine with great presentation skills. It can format a reply almost any way you’d like: charts and graphs, essays in the words of others, grammatically corrected papers. Everything it does is based on what it finds in its database, not on intelligent, human-like thinking. But you would never know that from listening to the industry’s leaders.
These leaders — including Sam Altman of OpenAI, Dario Amodei of Anthropic, Sundar Pichai of Google, and Satya Nadella of Microsoft — are not the modern-day equivalents of Steve Jobs or Bill Gates. They come across more like salesmen who’ve mastered the art of fundraising, publicity, and market speculation, while offering surprisingly little of substance. They did not invent any of this technology, nor do they show the level of understanding that, say, Steve Jobs did when he spoke about the iPhone.
In a recent podcast interview, Sam Altman described how AI will deliver “crazy new social experiences” and “virtual employees,” and “actually discover new science.” Pressed on how that will happen, he offered word salad about the latest model being like a good PhD, with no explanation beyond mentioning that a scientist somewhere in the world was impressed. He never answered the question. This is the CEO of a company worth hundreds of billions — mostly because investors believe his crazy predictions.
It fits a pattern I’ve seen through the years in tech: visionary claims backed by charismatic leaders and flashy presentations to the press, all designed to attract massive amounts of capital from VCs. The difference this time is the scale. It’s beyond anything we’ve ever seen.
OpenAI is funded by Microsoft and SoftBank, while companies like Oracle and Salesforce are spending tens of billions on Nvidia chips for capabilities that don’t exist. Facebook is building huge data centers around the country in anticipation. Others are planning new nuclear power plants for the enormous power needs predicted. All of it assumes that AI will develop new capabilities that have yet even to be demonstrated.
But where’s the evidence? A Salesforce study showed that so-called “AI agents” — the software designed to handle complex tasks for us — fail at even moderately complicated work. Apple’s own research just demonstrated that these systems don’t actually reason. They predict text based on large patterns. That’s not intelligence; it’s high-speed pattern matching.
Despite this, the industry’s top executives continue to get glowing profiles in the press, with little critical scrutiny. Few reporters seem willing to ask the obvious questions: Where’s the human logic? Where are the examples of real thinking? No company has yet been able to demonstrate human reasoning in its AI products.
In fact, to check my facts, I asked ChatGPT, “Has any company been able to demonstrate human reasoning in their AI products?”
It answered, “No — no company has yet been able to credibly demonstrate that their AI products can perform human-like reasoning… Large language models (LLMs) like OpenAI’s GPT-4, Anthropic’s Claude, Google’s Gemini, and Meta’s LLaMA are statistical pattern predictors. They generate text by predicting the next most likely word based on huge datasets, not by understanding or reasoning in a human sense.”
Meanwhile, the costs are staggering. The capital required to keep these companies afloat would only make sense if AI revenues were projected to dwarf those of the smartphone and enterprise software markets combined — not just someday, but soon. Are we seeing one of the greatest misrepresentations of technology ever?
I’ve covered many genuinely transformative technologies. What’s striking here is how little accountability these AI leaders face from the public and the press. Promises made three years ago that human reasoning was just months away have not been fulfilled. I see a huge disconnect.
We should treat them with much more skepticism. These CEOs should be questioned like other leaders making multi-trillion-dollar promises: How exactly does your technology work? Why should anyone believe your financial projections? When will we see evidence of the human intelligence you keep forecasting?
Don’t be fooled into thinking this is just too complicated to understand. Believe your eyes. The tech industry thrives on making complex things seem too arcane for outsiders to grasp. They’re not.
The tech press should stop treating every product demo as the dawn of a new epoch. And yes, the press is equally as responsible as the CEOs. Reporters seem awed by the future of AI, but there’s been little pushback, investigation, or critical questioning. They treat these CEOs with an idolatry they rarely extend to other public figures. We shouldn’t let the AI companies coast by on glossy promises and press releases. The burden of proof should be on them to show why this is more than another Silicon Valley bubble.
Technology changes the world when it delivers genuine value. Until AI leaders can clearly demonstrate that, we’d be wise to maintain a healthy skepticism and reserve judgment.
These LLMs are, as you note, glorified search engines, and most significantly, they willfully use copyrighted material with complete disregard for the authors’ intellectual property. I’m very surprised some large publisher that has been ripped off hasn’t sued the hell out of Altman and these other snake oil salesmen.
The New York Times has sued OpenAI, the maker of ChatGPT, but whoever sues is up against a company with essentially infinite dollars to make a lawsuit very costly.
I watched a podcast yesterday in which two well-known talking heads (one from the above-mentioned NYT) yammered glowingly about their AI therapists — and they were dead serious. You can’t make this stuff up.