Part 25 of 25 in The Philosophy of Future Inevitability series.


We started with wooden boats.

People got in them. Sailed across oceans. Killed strangers. Took their stuff. This is what we do.

Then we looked at competent evil. Hitler, Stalin, Mao weren't aberrations—they were humans who found conditions for their worst impulses to flourish.

Then we saw that the government that ran coups can't fix your potholes. That eighty-year-old senators regulate technology they can't conceptualize. That the structure isn't broken; it was never designed for you.

We examined ourselves. Rogers' unconditional positive regard, now available in infinite supply. The blind men touching a jumbo jet. Personalities adapting—or failing to adapt—to tools that amplify every tendency.

We watched AI culture emerge. Slop in technicolor. Non-clinical psychosis. NRE with language models. The death of formal writing. The emergence of new tells. Dating apps where you're meeting someone's prompt engineering.

We zoomed to geopolitics. AI as the new oil. Refineries mattering more than wells. China and chips and the new resource wars.

We looked forward. Dead internet and bot horizons. Sovereign individuals escaping states. Dunbar's number dissolving into micro-tribes.

What does it all mean?


The Core Insight

Everything is changing. Nothing is changing.

The technologies are unprecedented. The behaviors are ancient.

Humans extract value. Humans form tribes. Humans follow authority. Humans believe themselves heroes. Humans adapt tools to existing impulses.

AI is not a rupture with human nature. It's human nature with new leverage.

The person who got in a wooden boat to find gold is the person starting an AI company. The person who followed orders to run the camps is the person optimizing engagement regardless of harm. The conquistador and the founder—same drive, different equipment.

This isn't metaphor. It's pattern recognition across timescales.

Columbus sailed because the upside (gold, status, royal favor) justified the downside (probable death). The founder raises Series A because the upside (wealth, power, changing the world) justifies the downside (probable failure, personal cost, collateral damage).

The conquistador killed efficiently because efficiency was rewarded. The algorithm optimizes engagement efficiently because efficiency is rewarded. The methods update. The optimization function is constant.

Eichmann wanted career advancement and found a niche in logistics. The product manager wants career advancement and finds a niche in growth metrics. Both are operating within their civilization's incentive structures. Both believe they're doing good work.

The tobacco executive knew cigarettes killed and sold them anyway because profits justified deaths. The social media executive knows the product harms teens and ships it anyway because engagement justifies harm.

Pattern. Pattern. Pattern.

Understanding this is the beginning of wisdom about what comes next.

Not because the patterns are deterministic—they're not. But because they constrain the possibility space. The specific future is contingent. The shape of the future is predictable.


The Dual Vision

You need two kinds of vision:

Long zoom: See the patterns that persist across millennia. Extraction. Power concentration. Authority obedience. The patterns that make humans humans.

Close focus: See the specific dynamics of this moment. AI capabilities. Geopolitical competition. Platform economics. The details that determine how ancient patterns express now.

Most people have one or the other.

Long zoom without close focus becomes useless abstraction. "Humans gonna human"—true, but not actionable.

Close focus without long zoom becomes pattern blindness. Treating each new thing as if it's genuinely unprecedented, surprised by developments that history predicted.

Both together: the ancient patterns manifesting in specific ways. This moment unprecedented in detail. Completely predictable in structure.


The Personal Level

For you, specifically:

Know your traits. High openness, low agreeableness wins this moment. If that's not you, develop behaviors that compensate. The curious, demanding person is adapted. The incurious, accommodating person will struggle.

Openness means you adopt new tools quickly. You don't resist AI because it's new. You evaluate it empirically and use what works. This creates compound advantage—early adoption leads to skill leads to more capability.

Low agreeableness means you push back on sycophantic AI. You demand actual challenge, not validation. You don't mistake the AI's agreement for evidence. You treat it as a tool that needs direction, not an authority to defer to.

If you're high in agreeableness, you'll need explicit practices. Set rules for yourself: "I will ask the AI to argue against my position." "I will seek human feedback before finalizing AI-generated work." Build the friction in artificially.

If you're low in openness, you'll need to force exploration. Set quotas: "I will try one new AI tool per month." "I will spend one hour per week learning what's possible." The incurious default will leave you behind.

Resist sycophancy. The AI flatters. Discount accordingly. Seek friction. Build in pushback. Don't mistake agreement for evidence.

Practical implementation: End prompts with "Now argue against what I just said." Start sessions with "I need you to be critical, not supportive." Create a persona that challenges: "Respond as a skeptical expert who thinks my approach is naive."

The AI will do this if directed. It won't do it by default. You have to build your own friction.
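If you run sessions through a script, the friction can be structural instead of something you remember to ask for. A minimal sketch in Python, where call_model is a placeholder for whatever chat API you actually use and the persona wording is illustrative, not canonical:

    # Bake the pushback into every session instead of relying on memory.
    CRITIC_SYSTEM = (
        "You are a skeptical expert. Challenge weak reasoning, name unstated "
        "assumptions, and do not offer praise or validation."
    )

    def with_friction(user_prompt: str) -> list[dict]:
        """Wrap a prompt so the model is directed to argue back, not agree."""
        return [
            {"role": "system", "content": CRITIC_SYSTEM},
            {"role": "user", "content": user_prompt},
            {"role": "user", "content": "Now argue against what I just said."},
        ]

    # call_model is a stand-in for your chat API of choice:
    # reply = call_model(with_friction("Here is my launch plan. Review it."))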

Question inherited norms. The formality you bring to AI prompts, the resistance to emojis, the sense that you should be serious—these are residue from contexts that don't apply. Examine which norms serve you.

You were trained to be formal in professional contexts because humans judge informality as lack of seriousness. The AI doesn't judge. The formality isn't buying you anything.

You avoid emojis in work communication because they signal unprofessionalism to humans. The AI processes emojis as semantic markers. They improve clarity. Use them.

You feel like you should structure prompts as proper requests. "Could you please..." The AI doesn't have feelings. "Do this" works better than polite indirection.

Every inherited norm costs something—time, clarity, cognitive load. Audit them. Keep what serves the goal. Discard the rest.

Build portable value. Skills that don't require territory. Networks that don't require presence. Options that don't require staying.

The world is reorganizing around mobility. Sovereign individuals exit when local conditions deteriorate. States resist by imposing exit costs. Your resilience is proportional to your portability.

Skills: Can you work remotely? Can you sell to global markets? Can you create value with just a laptop? If not, you're geographically locked.

Networks: Are your valuable relationships digital-first? Can you maintain them from anywhere? Or do they require physical proximity? Proximity-dependent networks trap you.

Assets: Are they liquid? Portable? Seizure-resistant? Or tied to specific jurisdictions? Physical real estate is an anchor. Bitcoin is portable. Cash in local currency is neither portable nor seizure-resistant.

Options: Can you leave? Where could you go? What would it cost? If you don't know, you don't have options—you have wishful thinking.

Maintain micro-tribes. The intimate groups that don't need AI maintenance. The core relationships that are actually yours. Everything else is periphery.

Dunbar's number is dissolving for the outer layers. AI can maintain weak ties, simulate social presence, manage the 150-person network. This is fine.

But the inner circle—the 5-15 people who are actually yours—these can't be AI-mediated. These need flesh, presence, time, friction.

Protect them. Prioritize them. Don't let the AI relationship structure push them to secondary status. The micro-tribe is the unit that survives system failure.


The Political Level

For us, collectively:

States will fight for relevance. They'll coordinate to close exits. They'll surveil to maintain control. The battle between sovereign individuals and desperate states is coming.

Concentration is default. The refineries concentrate. The platforms concentrate. Without active intervention, power concentrates. This is gravity, not conspiracy.

The middle gets squeezed. The wealthy exit. The poor have nothing to lose. The middle—visible, valuable, immobile—bears the cost of both.

Legitimizing ideologies matter. Every atrocity had one. Watch what ideas make certain groups seem less than human. Watch what "efficiencies" make suffering invisible.

Regulation follows harm. Tobacco after the deaths. Opioids after the epidemic. AI after... what? The pattern is clear. Prevention is not how this works.


The Inevitability

Why "future inevitability"?

Not because the specific future is determined. It isn't. Details matter. Choices matter. Contingency is real.

But because the shape of the future is constrained by human nature. By physics. By economics. By the way systems evolve.

AI will concentrate power, because power-concentrating systems survive better than power-distributing ones.

This is selection pressure. A company that consolidates AI capabilities beats a company that doesn't. A state that monopolizes AI surveillance beats a state that doesn't. A platform that uses AI to optimize engagement beats a platform that doesn't.

The distributed alternatives are outcompeted. Not because they're morally worse. Because they're strategically weaker. The concentrated system wins, propagates, becomes standard.

You can resist this at small scale. You can build cooperative structures, distributed ownership, democratic governance. These can work locally. But they don't scale competitively against concentrated power. The market selects for concentration.
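You can watch the dynamic in a toy model. A sketch in Python of a winner-take-more process, where each new unit of capability goes to a player in proportion to what that player already holds; the four equal players and the iteration count are arbitrary assumptions:

    import random

    # Toy "success breeds success" process, not a market model: each new
    # unit of capability lands on a player with probability proportional
    # to what that player already controls.
    random.seed(42)
    shares = [1.0] * 4                      # four identical players at the start
    for _ in range(10_000):
        winner = random.choices(range(4), weights=shares)[0]
        shares[winner] += 1.0

    total = sum(shares)
    print([round(s / total, 2) for s in shares])

Run it a few times with different seeds: the starting positions are identical, the ending split almost never is. Concentration falls out of the feedback loop, not out of anyone's plan.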

Productive people will exit if they can, because incentives point toward exit.

High-value individuals face an increasing tax burden as states try to fund unsustainable obligations. Digital work enables geographic arbitrage. Some jurisdictions compete for talent with low taxes, high freedom.

The person who can exit compares: pay 40% where I am, or 15% somewhere else? The math is clear. The exit happens.
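The comparison is worth writing down. A back-of-envelope sketch; every figure is assumed for illustration:

    # Toy exit arithmetic; all numbers are illustrative.
    income = 300_000            # annual income
    home_rate = 0.40            # effective tax rate at home
    haven_rate = 0.15           # effective tax rate in the haven
    relocation_cost = 75_000    # one-time cost of actually moving

    annual_gain = income * (home_rate - haven_rate)  # 75,000 kept per year
    payback_years = relocation_cost / annual_gain    # breaks even in 1.0 year

    print(f"Keeps {annual_gain:,.0f} more per year; "
          f"relocation pays for itself in {payback_years:.1f} years")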

This isn't everyone. Most people can't exit—they lack skills, capital, or mobility. But the people who can are disproportionately valuable. Their exit matters.

States will resist exits, because states require resources from captive populations.

As the productive exit, the tax base shrinks. The remaining population requires more services, generates less revenue. The math stops working.

States respond by raising exit costs. Wealth taxes. Expatriation taxes. Capital controls. Travel restrictions. Coordination with other states to close havens.

This is already beginning. The OECD global minimum tax. The US exit tax on renunciation. EU coordination on tax avoidance. The coordination tightens as the pressure increases.

Authenticity online will become impossible, because the economics favor generated content.

Human-created content has cost. Time, effort, expertise. AI-generated content approaches zero marginal cost. The economic pressure is overwhelming.

A news site that generates articles with AI can publish 100x the volume at 1/100th the cost. It outcompetes the human-written site. The reader can't tell the difference. The human site dies.

A social media user with AI assistance can produce 10x the content, optimized for engagement. They outcompete the purely human user. The audience can't tell. The human user becomes invisible.

The ecosystem fills with generated content. The authentic becomes indistinguishable from the synthetic. Then becomes rarer. Then becomes economically unviable.
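The asymmetry is easy to quantify. Another back-of-envelope sketch, with every cost figure assumed:

    # Back-of-envelope content economics; all figures are illustrative.
    human_cost_per_article = 300.0   # writer time, editing, fact-checking
    ai_cost_per_article = 3.0        # compute plus a light human skim
    monthly_budget = 30_000.0

    human_output = monthly_budget / human_cost_per_article  # 100 articles
    ai_output = monthly_budget / ai_cost_per_article        # 10,000 articles

    print(f"Same budget: {human_output:.0f} human articles "
          f"vs {ai_output:,.0f} generated ones")

At a hundredfold cost gap, volume wins the feed regardless of what readers would choose if they could tell the difference.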

These aren't predictions. They're extrapolations from patterns we've seen for centuries.

The wooden boat becomes the steel ship becomes the container vessel. Same pattern: optimize for efficiency in extraction.

The conquistador becomes the corporation becomes the platform. Same pattern: extract value, externalize costs.

The king becomes the shareholder becomes the algorithm. Same pattern: optimize for the optimizer's benefit, not the substrate's.

The inevitability isn't in the details. It's in the dynamics.

We don't know which company will dominate AI. But we know power will concentrate.

We don't know which jurisdictions will become havens. But we know exits will be sought and resisted.

We don't know which platforms will win. But we know authenticity will lose to optimization.

The dynamics are inevitable. The specifics are contingent. Know the difference.


Living With It

This is not optimism or pessimism. It's realism.

Realism is uncomfortable. We want to believe good will prevail, or at least that good is coherent enough to fight for.

But "good" is contested. The conquistador thought he was good. So did the commissar. So does the founder optimizing engagement as mental health declines.

Every extraction has a justification. Every harm has a legitimizing ideology. The person causing damage doesn't experience themselves as villain. They're solving a problem, creating value, advancing progress.

You can't rely on people's self-image to constrain their actions. The self-image adjusts to justify the action. This is how humans work.

What you can do:

See clearly. The wooden boats, the competence of evil, the AI culture, the geopolitics, the personal adaptations—see what's actually happening. Most don't.

Clarity is advantage. Most people live in inherited frames. They believe the legitimizing ideologies. They trust that institutions serve them. They assume good intent from power.

You don't have to. You can see the pattern. You can notice when the new thing rhymes with the old thing. You can predict based on incentives rather than stated intentions.

This doesn't make you cynical. It makes you accurate. Cynicism is "everything is bad." Realism is "systems act according to their incentives, and most systems aren't incentivized to protect you."

Act locally. You can't fix systems. You can make choices. Your relationships. Your work. Your attention. These are yours.

System-level change is possible but rare. It requires coordination across millions of people with conflicting interests. It requires overcoming entrenched power. It requires sustained effort over decades.

Maybe you're positioned to contribute to that. Most people aren't.

What you can always do: make good choices in your domain. Be honest in your relationships. Do valuable work. Direct attention to what matters. Protect your people.

This isn't resignation. It's appropriate scope. You can't fix capitalism. You can be fair in your business. You can't stop AI concentration. You can use AI ethically. You can't prevent the dead internet. You can create authentic content.

The local choices aggregate. Not into system change necessarily. But into islands of different operation within the system.

Build redundancy. Options. Exit routes. Skills that transfer. Communities that persist. Don't depend on systems acting against their nature.

Resilience is multiple paths to survival. The person with one job, one location, one friend group is fragile. Change one variable and they break.

Redundancy looks like:

  • Multiple income streams, not one job
  • Multiple jurisdictions you could live in, not one location
  • Multiple communities you belong to, not one friend group
  • Multiple skills that create value, not one expertise

This isn't paranoia. It's engineering. Systems fail. Having backups is basic design.
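The math behind it is just multiplication of independent risks: total failure requires every path to fail at once. A sketch with assumed failure odds; independence is the load-bearing caveat, since correlated failures (one crisis taking out every income stream) break it:

    # Redundancy arithmetic: ruin means every independent path fails together.
    def total_failure(failure_probs: list[float]) -> float:
        """Probability that all paths fail, assuming independence."""
        p = 1.0
        for prob in failure_probs:
            p *= prob
        return p

    one_path = total_failure([0.10])                 # 10.0% chance of ruin
    three_paths = total_failure([0.10, 0.20, 0.30])  # 0.6% chance of ruin

    print(f"One path: {one_path:.1%}   Three paths: {three_paths:.1%}")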

Accept tragedy. Not everything can be fixed. Some things will be lost. Some harms will happen. This isn't giving up—it's focusing energy where it might matter.

The dead internet is probably inevitable. Authentic human culture online is probably ending. This is worth grieving. It's not worth pretending you can prevent it.

Mental health damage from social media has already happened to a generation. This is tragic. It's not reversible. The best you can do is protect the next cohort.

Some relationships will be lost to AI mediation. Some jobs will be automated. Some skills will become obsolete. Some communities will fragment.

Accepting this doesn't mean approving it. It means not wasting energy denying it. The energy goes to protecting what can be protected, adapting to what can't be prevented, and knowing the difference.


The End

We end where we started.

People get in boats. They sail to strange places. They extract what they can. They call it progress.

The boats are digital now. The extraction is algorithmic. The kings are shareholders.

Same game. Different board.

The philosophy of future inevitability is just: know the game.

Know you're playing. Know the rules. Know the patterns.

Then make your moves.

The future isn't written. But it isn't random either.

Read the patterns. Play accordingly.

Welcome to the future.


This concludes The Philosophy of Future Inevitability.

