When ChatGPT was announced nearly two years ago, it was described as one of the most significant tech products to come along in decades. It was bigger than the invention of the Internet, it would eliminate millions of jobs, and it might even be the end of mankind!
Now, two years later, we can see how much it was overhyped, a problem that occurs regularly when the tech press echoes the tech companies’ wild claims. Both assume something revolutionary will immediately displace an older product because it’s so much better. But the world doesn’t work that way.
Tech executives and reporters consistently underestimate the time it takes to bring out a product, ignoring the unforeseen technical issues that always arise and slow development way down. And once a product does come out, adoption can take many years. Case in point: In 2017, Ford Motor CEO Mark Fields told CNBC that Ford planned to have a “Level 4 vehicle in 2021, no gas pedal, no steering wheel, and the passenger will never need to take control of the vehicle in a predefined area.” Toyota and BMW predicted it happening even sooner.
While AI is failing to deliver what was promised, its future impact can’t be ignored. We are still in the very early stages of an industry that has given us lots to look at but little to remember.
Two years ago I was eager to try ChatGPT. I asked it questions, used it to create job postings, and asked it to create itineraries for a vacation. I even tried using it to write a column. It was fun to use, a great novelty, but that novelty wore off pretty rapidly, especially when we discovered how AI hallucinated, a word used by the industry as a substitute for saying it just lies. And once you realize it is a very good liar you never look at it in quite the same way.
ChatGPT saved me time searching, writing, and organizing. Instead of creating a job description from scratch or from my memory, the software came back with something that I was able to edit for my purposes. But I discovered I still needed to have some familiarity with the subject because the results could not always be trusted.
Soon after ChatGPT, we got AI products that could create graphics or photos from a simple description of what we wanted, such as a zebra riding on a horse. What was the result? Really ugly images that made us chuckle, but they were of limited use.
Then we saw a rush to develop hardware products that used AI. The company Humane took in hundreds of millions of dollars to create the first AI hardware device, the AI Pin, and shipped it earlier this year. Buyers were disappointed in how poorly it worked and most units were returned. The company is now searching for a buyer.
Businesses next began rushing out their own AI products, and things got worse. Using AI instead of a human for online chat has delivered terrible customer service. Amazon is using AI to summarize reviews, but the summaries are often poorly written and mundane, inferior to what a good writer could produce. And when CNET used AI to create articles for its website, they were so bad that the company was ridiculed and embarrassed, and it promised to return to real writers.
I was always skeptical of AI delivering what was promised, and I’m more so today. We are getting more quantity and less quality and still cannot fully trust its accuracy. AI’s reputation is that it allows more mediocre content to be delivered in less time to more people.
As Intelligencer noted, “Exciting, fast-changing tools with enormous theoretical potential are being used, in the real world, right now, to produce near-infinite quantities of bad-to-not-very-good stuff. In part, it’s a disconnect between forward-looking narratives and hype and lagging actual capabilities; it also illustrates a gap between how people imagine they might use knowledge-work automation and how it actually gets used. More than either of these things, though, it’s an example of the difference between the impressive and empowering feeling of using new AI tools and the far more common experience of having AI tools used on you — between generating previously impossible quantities of passable emails, documents, and imagery and being on the receiving end of all that new production.”
AI will get better over time and may eventually match some of its promises. But first it needs to solve two big problems. It needs huge troves of data from newspapers, magazines, documents, books, and the internet, and it needs huge amounts of energy to power all of the processing. These come at such high costs that some analysts are questioning whether AI can be a profitable business.