This week we’re getting a closer look at many of our tech titans who have been committing billions of dollars to developing artificial intelligence, an area that most think will have a bigger impact than the Internet. They’re promising to spend more money than the United States spent on the space race to build their new AI products. They are planning new nuclear plants for the energy required, huge data centers around the country, and powerful new computers, all needed to run the large language models that digest huge loads of data into their systems. Companies including Google, Meta, Microsoft, OpenAI, Amazon, and Anthropic are in the midst of investing hundreds of billions of dollars, each seemingly trying to outdo the others. The numbers are so big they’re hard to comprehend. The one thing they all agree on is that they can never invest too much in their AI.
But a funny thing happened this past weekend. A small investment company in China demonstrated DeepSeek, an AI product that doesn’t require the energy, fast microprocessors, or computing power that its competitors need to create and run their models. The company claimed it was built using slower, more readily available processors and requires only a small fraction of the energy. And it seems to work as well as many of the existing products from the larger companies.
If what they said is true, and many experts believe much of it is, then DeepSeek just proved that our tech billionaires are not as smart as they want us to believe. Imagine: a small company in China comes along with a product that does much the same thing, but is much less expensive to use and can easily be copied because it is open source. None of our geniuses saw it coming. Most were flabbergasted.
The Verge writes, “DeepSeek’s successes call into question whether billions of dollars in compute are actually required to win the AI race. The conventional wisdom has been that big tech will dominate AI simply because it has the spare cash to chase advances. Now, it looks like big tech has simply been lighting money on fire.”
It’s hard to overestimate what just happened. The brightest minds in Silicon Valley’s tech and investment communities were all following a path in lockstep with one another, and they were just overtaken by a small company in China without anywhere near the resources everyone assumed were needed, including the special Nvidia chips that had been trade-embargoed to keep them out of China.
This may actually be a watershed moment in which Silicon Valley, for the first time, has been overshadowed and embarrassed. But from what I know of Silicon Valley, few will be embarrassed; most will use the event to ask for even more money.
From OpenAI’s GPT to Google’s Gemini, Silicon Valley’s AI efforts have been marked by one-upmanship. Yet these projects prioritized scale over practicality, leading to bloated models with huge operational costs.
Unlike Silicon Valley’s products, DeepSeek prioritized accessibility, outperforming GPT-4 in some tasks while consuming 40% fewer computational resources. DeepSeek avoided the “bigger is better” trap. In other words, it thought small and focused its products instead of developing large, all-encompassing models requiring billions of dollars.
As DeepSeek expands further, the ball is in Silicon Valley’s court; the titans are now on the defensive. Will they heed the wake-up call and follow the DeepSeek approach, or continue trying to raise billions more? Only time will tell, but one truth is undeniable: the age of the unassailable tech titan is over. And while DeepSeek has shown a path to profitability, it has undermined these titans’ belief that they can create trillion-dollar companies.
The first response is in from Sam Altman, and it’s a doozy. He thinks DeepSeek used his product to train its model. Of course, Altman’s own company has been accused of illegally using newspapers and magazines to train its models.
The best response to him is this open letter to him from Joanna Stern of the Wall Street Journal:
The following is a totally real letter to OpenAI from the people who create the stuff that fills the internet.
Attn: Sam Altman
OpenAI Chief Executive Officer
Dear Mr. Altman and OpenAI leadership,
First of all, LOLz.
We read with interest your concern that Chinese artificial-intelligence startup DeepSeek may have used your very own product to make its product. You said that you’ve seen attempts by China-based entities to exfiltrate large volumes of data from your AI tools, likely to train theirs.
Hmm. Vacuuming up someone else’s work! What’s that saying? Karma’s a…well, you know. And if you don’t, GPT-4 can easily complete that sentence.
Look, we get it. This is not good. The U.S. had an established lead in AI development and now China may have built on the backs of your success. And they didn’t even ask.
Now, to be clear, we do appreciate your efforts lately to strike deals and compensate those creating the works that fuel your models. The deals you’ve struck with publishers, including News Corp (owner of The Wall Street Journal), Vox Media, the Financial Times and more, are a step in the right direction. Of course, plenty of artists and organizations are taking you to court for more. Where are things at with Scarlett Johansson, anyway?
But you still continue to dodge questions about training data. Remember when your former chief technology officer was asked about YouTube data being used and she remarked, “I’m actually not sure about that.”
And what about that Media Manager tool you promised last year? You said creators and content owners could tell you “what they own and specify how they want their works to be included or excluded from machine learning research and training.” Last week, in an interview in Davos, a certain WSJ columnist asked your chief product officer Kevin Weil about it.
“That one we are still working on, and we’ll have more to say when we have more to say,” Weil said. When pressed on whether it would roll out in Q2 of this year, he said “We’ll see.”
If DeepSeek made a tool that let you opt out of it using your data, we think you’d… want it now.
Signed,
All the writers, artists, filmmakers and creators of the world
P.S. Feel free to train your AI on this letter. See? Permission!