When Technology Breaks, It Breaks Society

When a power outage in San Francisco last night caused Waymo’s autonomous cars to freeze in place, snarl traffic, and block emergency vehicles, it felt to me like a parable of big tech: a predictable failure.

The tech industry loves to release breakthrough products, but consistently refuses to confront how fragile these systems are when the world behaves in ways engineers did not anticipate or chose to ignore.

Waymo’s stopped vehicles are an example of how modern tech is built on narrow assumptions and deployed at scale without serious planning for unintended consequences. The system worked perfectly right up until it didn’t. Then it failed in a way that affected thousands of people who had no say in the experiment. And this was just a power failure across a third of the city. Imagine if there were an earthquake and the Waymos prevented mass evacuation and blocked all emergency vehicles!

This is part of a pattern we see more and more as big tech writes its own rules of behavior at our expense.

Facebook did not intend to destabilize democracies or supercharge misinformation. But it optimized relentlessly for engagement, ignored warnings from internal researchers, and shipped features without fully understanding how they would be exploited. When the damage became undeniable, executives framed it as an unfortunate side effect rather than a design failure, and took no responsibility.

The AI industry is following the same script. Models are released quickly, trained on vast amounts of unvetted data, and dropped into the world with disclaimers instead of safeguards. Companies promise they will fix the problems later. Bias, hallucinations, deepfakes, and job displacement are treated as edge cases rather than core design challenges.

In each case, the failure stems from a lack of imagination or a callous disregard for the inevitable.

The industry is excellent at optimizing for best-case scenarios. It is terrible at stress-testing worst-case ones. What happens when the power goes out? What happens when bad actors intervene? What happens when systems interact with messy human behavior instead of clean data?

These questions are not unanswerable, but asking them slows deployment, delays revenue, and complicates the story investors want to hear. So they are ignored, minimized, or postponed until after the product is already embedded in our daily lives.

Once released, accountability becomes fuzzy. Who is responsible when an algorithm makes the wrong call? The engineer? The company? The user? The regulator? In the absence of clear ownership, nothing changes, and the companies rarely do much to fix things.

The tech industry prides itself on moving fast and breaking things. What it keeps breaking is trust. Trust that systems will work when conditions are not ideal. Trust that companies have thought through how their products might be misused. Trust that someone, somewhere, asked the uncomfortable questions before shipping.

Designing for unintended consequences is not optional when your products shape cities, elections, and economies. It is the job. Until the industry treats it that way, these failures will keep happening. And the rest of us will keep paying the price.
