Part 4 of 25 in The Philosophy of Future Inevitability series.


One man controls rockets, satellite internet, electric cars, the town square, and AI.

He's unhinged. Possibly autistic. Definitely erratic. He posts through the night, picks fights with world leaders, and manipulates markets with tweets. He's the richest person on Earth and getting richer.

Another man controls real estate, data, and the attention of three billion humans.

He's probably on the spectrum too, but masks better. Cold. Strategic. Building an empire of behavioral manipulation so sophisticated that governments can't understand it, let alone regulate it.

These are the people shaping your future.

Now look at who's supposed to govern them: 80-year-old senators who think the internet is a series of tubes.

You think AI safeguards are coming?


The Asymmetry

The average age of a US Senator is 64. Many are in their 70s and 80s. They were adults before personal computers existed. They were middle-aged when the internet emerged. They've spent their careers in a world that no longer exists.

The people building the future are 30, 40, 50. They grew up on the internet. They understand exponential technology. They move at speeds the regulatory apparatus can't comprehend.

When Mark Zuckerberg testified before Congress, a senator asked how Facebook sustains a business in which users don't pay. "Senator, we run ads," Zuckerberg replied, trying not to smirk. They didn't understand the questions they were asking.

When AI executives testify, senators will ask whether AI could become sentient. They won't understand the actual risks—capability overhang, alignment failure, economic displacement. They'll miss everything that matters while grandstanding about science fiction.

The people with power to regulate don't understand what they're regulating. The people who understand have no incentive to be regulated.

This isn't just about age. It's about formation. The senators were formed in a world where technology changed slowly. Cars got incrementally better. Phones got incrementally better. The pace of change was linear. You could master something and it would stay mastered for decades.

The tech founders were formed in a world of exponential change. Moore's Law. Network effects. Every year brought capabilities that were impossible the year before. You couldn't master anything permanently—you had to keep learning just to stay current.
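The gap between those two formations is arithmetic, not attitude. A toy comparison makes it concrete (the 5% annual gain and the two-year doubling period are illustrative assumptions, not measured figures):

```python
# Toy comparison: linear-era improvement vs. exponential doubling.
# The rates below are illustrative assumptions, not measured figures.

def linear_growth(start, annual_gain, years):
    """Capability that improves by a fixed amount each year."""
    return start + annual_gain * years

def exponential_growth(start, doubling_years, years):
    """Capability that doubles every `doubling_years` (Moore's-Law-style)."""
    return start * 2 ** (years / doubling_years)

for years in (10, 20, 30):
    lin = linear_growth(1.0, 0.05, years)    # +5% of baseline per year
    exp = exponential_growth(1.0, 2, years)  # doubling every two years
    print(f"{years:2d} years: linear {lin:5.2f}x vs exponential {exp:,.0f}x")
```

After one career's worth of time, the linear world has improved a few fold; the exponential one has improved by four or five orders of magnitude. An intuition calibrated on the first is simply the wrong instrument for the second.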

These are different operating systems. The senator's OS was built for a stable world. The founder's OS was built for constant flux. When they meet, they're not just speaking different languages—they're running incompatible cognitive frameworks.

The senator thinks in terms of precedent and stability. What have we done before? What's the safe, incremental change? The founder thinks in terms of disruption and scale. What becomes possible now? How fast can we move?

Neither is wrong. Both are adapted to different environments. But for regulating exponential technology, the senator's adaptations are obsolete.


The Musk Problem

Elon Musk is the most powerful private citizen in history.

SpaceX has more launch capability than most nations. Its Crew Dragon is currently the only American spacecraft carrying humans to orbit. NASA depends on him. The Pentagon depends on him. When Musk declined to enable Starlink for a Ukrainian military operation, he affected the course of a war.

At various points, Tesla has been worth more than every other major automaker combined. The electric vehicle transition runs through his company. Energy policy depends on his decisions.

Twitter/X is the de facto town square. Political discourse, news dissemination, cultural conversation—all run through a platform one man controls absolutely.

Neuralink is putting computers in human brains.

xAI is building artificial intelligence that will be among the most powerful on Earth.

One person. Unelected. Unaccountable. More powerful than most governments.

And the regulating body is a Senate where members can't figure out how to unmute themselves on Zoom.


The Zuckerberg Problem

Mark Zuckerberg is quieter but potentially more dangerous.

Three billion people use his platforms. Facebook, Instagram, WhatsApp. He has more data on human behavior than any entity in history. He knows what you look at, what you buy, who you talk to, what you believe, what you fear.

This data is used to manipulate your attention. Your behavior. Your beliefs. The algorithms optimize for engagement, and engagement means emotion. The platforms make you angry because anger keeps you scrolling.

This has measurably destabilized democracies. Amplified extremism. Harmed mental health at population scale. The evidence is overwhelming.

And Zuckerberg continues. Because engagement is revenue. Because stopping would crater the stock price. Because no one can make him stop.

The mechanics are specific: The algorithm learns what keeps you on the platform. It discovers that outrage works better than information. That conspiracy theories work better than nuance. That tribal signaling works better than considered thought.

So it serves you more outrage. More conspiracy. More tribalism. Not because anyone decided this was good. Because the optimization target is engagement and these things produce engagement.

The result is global-scale behavioral modification. Billions of people having their attention patterns, their emotional responses, their belief formation shaped by an algorithm optimizing for ad revenue.
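The loop described above fits in a few lines. In this sketch the content categories, click probabilities, and epsilon-greedy learner are all invented for illustration — but notice that nothing in the code names outrage as a goal:

```python
import random

# Toy engagement optimizer. Content categories and click rates are
# invented for illustration; no real platform data is used.
CLICK_RATE = {"outrage": 0.30, "conspiracy": 0.25, "nuance": 0.05}

def serve_feed(rounds=10_000, epsilon=0.1, seed=0):
    """Epsilon-greedy learner that sees only clicks, never content quality."""
    rng = random.Random(seed)
    shows = {c: 0 for c in CLICK_RATE}
    clicks = {c: 0 for c in CLICK_RATE}
    for _ in range(rounds):
        if rng.random() < epsilon:                 # occasional exploration
            choice = rng.choice(list(CLICK_RATE))
        else:                                      # exploit best observed click rate
            choice = max(CLICK_RATE,
                         key=lambda c: clicks[c] / shows[c] if shows[c] else 0.0)
        shows[choice] += 1
        if rng.random() < CLICK_RATE[choice]:      # user clicks or scrolls past
            clicks[choice] += 1
    return shows

feed = serve_feed()
for category, count in sorted(feed.items(), key=lambda kv: -kv[1]):
    print(f"{category:10s} shown {count:,} times")
```

The learner drifts toward whichever category gets clicked most, because clicks are the only signal it sees. That is the structural argument: no villain required, just an objective function.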

Facebook's own research showed this. The company knew Instagram was harming teenage girls' mental health. Knew the algorithm was amplifying extremism. Knew the platform was being used to coordinate genocide in Myanmar.

A whistleblower brought the documents to Congress. Zuckerberg testified. He apologized. He promised to do better. Nothing fundamentally changed.

Because the incentive structure doesn't change. The algorithm still optimizes for engagement. Engagement still means emotion. Emotion still means anger and fear and tribal signaling. The shareholders still expect growth.

Individual senators can grandstand. The system continues.


The Regulatory Fantasy

"We'll regulate them."

With what? The senators who don't understand the technology? The agencies staffed by people who'll work for these companies after their government stint? The laws written to address problems from 1996?

The European Union passes regulations. The companies comply minimally and lawyer around the rest. GDPR was supposed to protect privacy. It produced cookie banners and changed nothing fundamental.

The US passes regulations even more slowly. And the Supreme Court is dismantling regulatory authority anyway. Chevron deference is dead. Agencies can't interpret their own mandates. Every regulation faces legal challenge.

Meanwhile, the technology advances. By the time a regulation is written, debated, passed, implemented, and challenged in court, the thing being regulated has been obsolete for five years.

The regulatory state was designed for industrial capitalism. It cannot keep pace with exponential technology. This isn't a bug to be fixed. It's a fundamental mismatch between institutional speed and technological speed.


The AI Moment

Artificial intelligence is advancing faster than any technology in history.

GPT-3 to GPT-4 was a capability jump that surprised even the people building it. The next jumps will be larger. Multimodal, agentic, embedded in robotics, deployed at scale.

The people building this technology are not being slowed by regulation. They're competing in a race where second place might be worthless. They're motivated by billions in wealth, by the thrill of discovery, by genuine belief that they're building the future.

The people who might regulate them are holding hearings where they ask whether AI could develop feelings.

This is the mismatch. This is where we are.

AI safeguards are not coming—not meaningful ones, not in time. The technology will be deployed. The consequences will be managed afterward, if at all.


The Wealth Gap

Musk is worth $200+ billion.

The entire annual budget of the Consumer Financial Protection Bureau is about $600 million. The FTC budget is $400 million. The agencies meant to check corporate power operate on budgets these individuals could fund personally from pocket change.
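Putting the quoted figures side by side (back-of-envelope, using the approximate numbers above):

```python
# Back-of-envelope ratios, using the approximate figures quoted above.
musk_net_worth = 200e9   # $200 billion, approximate
cfpb_budget = 600e6      # ~$600 million annual budget
ftc_budget = 400e6       # ~$400 million annual budget

print(f"Net worth vs CFPB budget: {musk_net_worth / cfpb_budget:.0f}x")
print(f"Net worth vs FTC budget:  {musk_net_worth / ftc_budget:.0f}x")
```

One individual's fortune is several hundred times the annual budget of the agencies meant to check him. That is the scale of "pocket change."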

Wealth at this scale isn't just money. It's a private army of lawyers. It's media properties. It's the ability to fund political campaigns, endow universities, hire every expert who might testify against you.

Citizens United meant unlimited corporate money in politics. These individuals are corporations unto themselves.

How does an 80-year-old senator, dependent on donations, regulate someone who could personally fund their opponent's campaign ten thousand times over?

The answer: they don't. They posture. They hold hearings. They extract minor concessions. But meaningful regulation? Against this kind of wealth, this kind of power, this kind of capability gap?

Not happening.

The legal asymmetry is crushing. When the FTC tries to regulate a tech company, that company has legal resources that dwarf the entire agency. They can litigate every decision. Appeal every ruling. Drag every case out for years.

The agency has to pick its battles. Limited budget, limited staff, limited time. The company can fight everything simultaneously. Different lawyers for different cases. Unlimited resources for appeals.

The result: The agency goes after small targets they can win. The big players get negotiated settlements that look like victories but change nothing structural.

This isn't corruption. It's just math. The resource asymmetry is too large. The legal playing field isn't level—it's vertical.

And the individuals running these companies know it. They know they can outlast any regulatory effort. They know the agency will run out of budget before they run out of lawyers. They know political will collapses before legal strategy does.

So they do what they want. They apologize when caught. They promise reforms. They deploy the lawyers. They wait it out. Then they do it again.


The Competence Gap

It's not just age. It's competence type.

Musk and Zuckerberg are technical. They understand exponential systems, network effects, feedback loops. They think in terms of compounding and scale.

Senators are political. They understand coalition building, rhetoric, institutional navigation. They think in terms of legislation and elections.

These are different skills. Neither is superior in general. But for understanding and regulating technology? The technical mind has massive advantage.

When Zuckerberg speaks to Congress, he's explaining a smartphone to someone who's never seen electricity. The concepts don't translate. The mental models don't connect.

The senators will never catch up. The technology will keep advancing. The gap will widen.


What Comes Next

AI will be deployed at scale without meaningful safeguards.

Not because no one tried. Not because people don't care. But because the institutional capacity to regulate doesn't exist and can't be built fast enough.

The consequences—economic displacement, misinformation at scale, capability risks, power concentration—will be managed reactively. After they happen. Incompletely.

The wealth and power concentration will continue. The gap between those who own the technology and those who are subject to it will grow.

The political system will lag further and further behind. Hearings will be held. Laws will be proposed. None of it will match the pace of change.

This is the trajectory. This is where we're going. The 80-year-old senators cannot regulate billionaires half their age. The institutions built for the twentieth century cannot govern the twenty-first.

You think AI safeguards are coming?

Look at who would have to implement them. Look at who would have to be regulated. Look at the gap in understanding, in wealth, in speed.

Then adjust your expectations accordingly.

The optimistic take: Market forces will constrain the worst outcomes. Companies that harm users will lose users. Bad actors will be punished by competition. Self-regulation will emerge from enlightened self-interest.

This take is wrong. Network effects mean the biggest platforms can harm users and users stay anyway—where else would they go? First-mover advantages mean early winners entrench. Self-regulation means voluntary commitments that get quietly abandoned when inconvenient.

We've seen this movie. Social media was going to self-regulate. It didn't. Cryptocurrencies were going to self-regulate. They didn't. Every new technology sector promises self-regulation until the money gets big enough that self-regulation becomes self-limitation becomes uncompetitive.

The AI companies will say the right things. They'll establish ethics boards. They'll publish principles. They'll make commitments. And when those commitments conflict with shipping product, with beating competitors, with satisfying shareholders, the commitments will quietly bend.

Not because the people are evil. Because the incentives are overwhelming. Because the market rewards speed over safety. Because the competitors who move faster win.

The senators will hold hearings. The CEOs will testify. They'll promise to do better. The hearings will end. The cameras will leave. The technology will deploy.

And we'll deal with the consequences after. When they're already baked in. When millions of jobs are already displaced. When the misinformation is already everywhere. When the power is already concentrated.

This is how it goes. This is how it's always gone. The technology moves. The regulation follows. The gap between them is where we live.


Previous: Dulles, the CIA, and the Fentanyl Problem
Next: Rogers' Unconditional Positive Regard Meets Infinite Bandwidth

Return to series overview