Simulation Theory: When Physics Gets Weirder Than Conspiracy

Nick Bostrom’s 2003 paper “Are You Living in a Computer Simulation?” is not a work of science fiction. It is a formal philosophical argument published in the Philosophical Quarterly, and it has been taken seriously enough that physicists at MIT, Caltech, and Oxford have engaged with it in peer-reviewed work or taken up the empirical questions it raises. Elon Musk stated at a 2016 Recode conference that the odds we are living in base reality are “one in billions.” Neil deGrasse Tyson put the odds at “fifty-fifty.” These are not serious probability estimates. What they indicate is that the argument does not get laughed out of rooms where serious people think carefully.

The simulation argument is not a conspiracy theory. A conspiracy theory posits hidden human actors deliberately concealing information to manipulate a population. The simulation argument posits a structural feature of the universe that would be true whether or not anyone wanted it to be, and which follows from premises that are, on the face of it, quite mild. The weirdness is not that someone dreamed up a paranoid fantasy. The weirdness is that a straightforward logical argument leads here and the logic actually holds.

Bostrom’s Argument Is a Trilemma, Not a Claim

The argument works as follows. One of these three propositions must be true. Either: almost all civilizations at our current level of technological development go extinct before reaching the computational capacity to run realistic simulations of civilized societies at scale; or, almost all civilizations that do reach that capacity have no interest in running such simulations; or, we are almost certainly living in a computer simulation right now.

This is a trilemma in the strict logical sense. If you accept the premises, one of the three must be true. You cannot reject all three without rejecting the premises themselves. The premises are: (1) consciousness and subjective experience can in principle be instantiated in computational substrate (the “substrate independence” premise); (2) civilizations that reach sufficient computing power would be capable of running ancestor simulations or simulated worlds at scale; and (3) if they ran many such simulations, simulated minds would vastly outnumber base-reality minds.
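Premise (3) is, at bottom, arithmetic over the ensemble of minds. A minimal sketch of that arithmetic, with entirely hypothetical counts (the function name and all numbers are illustrative, not Bostrom's):

```python
def fraction_simulated(base_minds, simulations, minds_per_sim):
    """Fraction of all minds that are simulated, given one base-reality
    civilization that runs `simulations` ancestor simulations."""
    simulated = simulations * minds_per_sim
    return simulated / (simulated + base_minds)

# Hypothetical numbers: 10 billion base-reality minds, and 1,000
# simulations each containing 10 billion simulated minds.
print(fraction_simulated(10e9, 1000, 10e9))  # 1000/1001, roughly 0.999
```

The only way to keep this fraction small is to keep `simulations` near zero, which is exactly what the first two horns of the trilemma assert.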

Substrate independence is the load-bearing premise, and it is not obviously false. It is not obviously true either. But it is accepted as a working assumption by the majority of AI researchers, most cognitive scientists, and essentially everyone working on artificial general intelligence. The entire enterprise of building minds in machines rests on the assumption that the substrate doesn’t matter, only the computation. If you think AGI is possible in principle, you have already accepted substrate independence. And if you accept substrate independence, Bostrom’s argument follows with uncomfortable force.

The Dismissal Usually Misses the Argument

The standard dismissal of simulation theory comes in a few flavors. First: “we’d be able to tell if reality were simulated.” This is probably false. If the simulation runs the physics correctly, the simulated beings see correct physics. There is no seam to find from inside. The argument from “I’ve never noticed the loading screen” doesn’t work. Second: “the computing power required would be astronomical.” Yes. That’s why Bostrom frames it as a claim about sufficiently advanced civilizations, not current ones. A civilization that has been running for a million more years than ours has access to orders of magnitude more computation. The objection proves too little. Third: “this is unfalsifiable, so it’s not science.” This conflates scientific claims with philosophical ones. The simulation argument is a philosophical argument, and unfalsifiability doesn’t make it wrong.

The dismissal that actually lands is the one targeting substrate independence. If consciousness requires something substrate-specific, something about biological neural computation that cannot be replicated in silicon, the whole structure collapses. The problem is we don’t know whether that’s true. See the hard problem discussion in the previous article and the more detailed treatment in this series’ final entry on consciousness. We don’t know what consciousness is, which means we cannot say whether it is substrate-dependent or not. The simulation argument’s most vulnerable premise cannot be attacked with confidence because the attack requires knowledge we don’t have.

There is also a cosmological version of the dismissal worth examining: “the energy required to simulate the universe would exceed the energy of the universe.” This sounds devastating and isn’t. The argument from Bostrom and his defenders is not that some future civilization runs a simulation of the entire universe at the subatomic level of physical fidelity. It’s that a sufficiently convincing simulation from the inside is achievable with less computation than a complete simulation, for the same reason that modern video games don’t render every molecule in the scene, only what the player can observe. The universe might only need to be computed at full resolution where it is being observed, which is a much smaller problem. The energy objection assumes full-fidelity simulation of everything; the argument only requires apparent full-fidelity simulation of what’s accessible. These are very different computational demands.
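The rendering analogy can be made concrete. A toy sketch (the class and the hash constants are invented for illustration): a world defined everywhere by a rule, but materialized only where an observer looks, so computational cost scales with observation rather than with the world's extent.

```python
# A toy "lazy universe": region states exist only as a rule until observed.
class LazyWorld:
    def __init__(self):
        self.rendered = {}       # only observed regions are ever materialized
        self.compute_count = 0

    def _compute(self, x, y):
        self.compute_count += 1  # stand-in for an expensive physics computation
        return (x * 73856093) ^ (y * 19349663)  # deterministic pseudo-state

    def observe(self, x, y):
        if (x, y) not in self.rendered:
            self.rendered[(x, y)] = self._compute(x, y)  # rendered on demand
        return self.rendered[(x, y)]

world = LazyWorld()              # a "universe" of unbounded extent
world.observe(0, 0)
world.observe(0, 0)              # cached: no recomputation on a second look
world.observe(5, 7)
print(world.compute_count)       # 2, regardless of the world's nominal size
```

This is the same design choice every game engine makes, and it is why the energy objection only constrains full-fidelity simulation of everything, not apparent full fidelity of what is accessible.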

The Physics Makes This Weirder, Not Cleaner

Here is where it gets interesting. Several features of actual physics lack an obvious base-reality motivation and fit surprisingly well in computational terms, not because physicists designed them to, but because they keep turning up in foundational work.

The universe has a maximum speed: the speed of light. This functions as an information propagation limit. Computational systems have information propagation limits. The Planck length, approximately 1.616 x 10^-35 meters, appears to be a fundamental lower bound on distance: there is no meaningful “between” at scales smaller than the Planck length. This looks like a pixel size. It does not prove we are in a simulation, but it is exactly the kind of feature you would expect if reality were discretized. The universe appears to have a finite information content per unit volume, the Bekenstein bound, derived from black hole thermodynamics. Information appears to be the fundamental currency of physics in ways that Einstein could not have anticipated and that fit extremely well with computational frameworks.

Quantum mechanics adds another layer. The wave function encodes a probability distribution over measurement outcomes, and it collapses to a definite state when observed. The “measurement problem,” why observation collapses the wave function and what counts as an observation, has been unresolved for a century. One interpretation (the Copenhagen interpretation, still widely used) says that physical reality is indeterminate until observed. Another (Many Worlds) says all outcomes happen and we’re in one branch. Neither is entirely satisfying. But “reality only renders what is being observed” is a feature of every video game ever made, implemented to conserve computing resources. It is not proof of simulation. It is an uncomfortable structural parallel.

In 2012, Silas Beane and colleagues at the University of Bonn published a paper proposing a test for whether the universe is a lattice simulation: certain high-energy cosmic ray distributions should show anisotropy aligned with the lattice structure if the universe runs on a grid. The test is technically feasible with current cosmic ray observatories. The point is that physicists took the question seriously enough to design an experiment.

Physicist John Barrow at Cambridge made a different argument: a simulation that runs long enough would accumulate numerical errors, the way floating-point arithmetic in any real computer accumulates rounding errors over many iterations. A sufficiently long-running simulation should produce detectable physical anomalies: slight inconsistencies in the laws of physics at the margins, values that drift slightly over cosmological time. Barrow pointed out that if you were a conscious entity inside such a simulation and you noticed these anomalies, you might experience them as physical constants that vary over time, as anomalous measurements at the extremes of precision, or as cosmological fine-tuning that looks slightly wrong. None of this constitutes evidence we are in a simulation. It constitutes a proposal for what that evidence would look like if it existed. The fact that physicists can design such tests is the argument: this is not untestable speculation. It is a hypothesis generating predictions about observable reality.
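Barrow's error-accumulation point is easy to demonstrate at small scale. A sketch in Python: naively adding 0.1 a million times drifts away from a compensated sum, because each addition rounds to the nearest representable double and the rounding residues compound rather than cancel.

```python
import math

n = 1_000_000
naive = 0.0
for _ in range(n):
    naive += 0.1             # each += rounds; the residues accumulate

accurate = math.fsum([0.1] * n)  # compensated summation tracks the residue
drift = abs(naive - accurate)
print(drift > 0)                 # True: the naive sum has measurably drifted
```

Barrow's conjecture is this phenomenon at cosmological scale: an observer inside a long-running simulation might read the accumulated residue as physical constants that drift slightly over time.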

The Thing Simulation Theory Is Actually Claiming

The anthropic reasoning embedded in Bostrom’s argument deserves more attention than it usually gets. The logic is similar to the anthropic reasoning in fine-tuning arguments about cosmology: given that you are conscious and observing the universe, you can make probabilistic inferences about what kind of universe you are likely to be in. If simulated minds come to vastly outnumber base-reality minds as computing power increases, then the probability that any randomly sampled conscious observer is simulated approaches 1. This reasoning feels slippery, but it is the same family of reasoning cosmologists use when discussing the anthropic principle and the apparent fine-tuning of physical constants for life. The reasoning style is mainstream. The conclusion is unsettling. That combination is exactly what makes the argument interesting rather than dismissible.
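The sampling step in that reasoning can be rendered as a toy Monte Carlo experiment (all population counts are hypothetical, chosen only to illustrate the ratio): draw a random observer from the combined population and check which side of the line they land on.

```python
import random

random.seed(0)
base_minds = 10_000
simulated_minds = 10_000_000   # a 1000:1 ratio, purely for illustration
total = base_minds + simulated_minds

trials = 100_000
# An observer index below base_minds is a base-reality mind.
simulated_draws = sum(
    random.randrange(total) >= base_minds for _ in range(trials)
)
print(simulated_draws / trials)  # near 10_000_000 / 10_010_000, about 0.999
```

As the ratio of simulated to base-reality minds grows, the sampled probability approaches 1, which is the whole force of the anthropic step.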

The question of how many “levels” of simulation might exist adds another dimension. If our civilization is simulated by a post-human civilization, that civilization might itself be simulated by another, which might itself be simulated. This nesting is not logically bounded by Bostrom’s original argument. Some physicists (most notably David Deutsch, who is skeptical of simulation theory for different reasons) point out that if the simulators are themselves physical entities subject to physics, then the physics their computers run on sets a ceiling on the simulation they can run. Others argue that the levels could be arbitrarily deep. Neither camp has a compelling argument that fully settles the nesting question.

Most popular discussion of simulation theory imagines it as a claim about a specific simulator: some future civilization built this universe in a computer room somewhere, there are beings outside who can intervene, the rules can be changed by the creator. This version has obvious appeal but involves claims that go considerably beyond what the logical argument establishes.

The logical argument establishes only that if substrate independence is true and civilizations tend to survive to post-human computing capacity, then the ratio of simulated minds to base-reality minds becomes arbitrarily large. Given that ratio, any randomly selected mind is overwhelmingly likely to be simulated. This is a probability argument about the ensemble, not a claim about specific simulators or the possibility of divine intervention.

What it does do is raise a question about ontological status that is genuinely interesting: if consciousness can be instantiated in computation, and the computation runs physics-consistent experience indistinguishable from “real” experience, what is the meaningful difference between simulated and base reality? Not ethically: simulated suffering is presumably still suffering. But metaphysically. Philosopher David Chalmers has argued that if simulation theory is true, it isn’t a skeptical nightmare, it’s just a new discovery about the nature of physical reality. The physics would still be real physics. The particles would still be real particles. The simulation is the physics. This is either deeply comforting or deeply disorienting depending on how you sit with it.

The Rating Is Silver and the Bridge Is the Point

Simulation theory rates Silver in this field guide: the argument is logically sound, it follows from premises that most technologists accept, it’s taken seriously in physics departments, and the attempts to falsify it through cosmological observation are genuine science. It is not Copper (a claim that falls apart on inspection). It is not Gold (an established or nearly-established scientific finding). It is a valid philosophical argument resting on contested premises with interesting empirical hooks.

But the function of this entry in the series is the bridge it builds. We’ve been evaluating paranormal claims: are ghosts real, do psychics exist, did Bigfoot walk through the Pacific Northwest. These are claims about anomalous phenomena in a world we take to be base reality. Simulation theory asks whether base reality is the right frame at all. That question, once raised, doesn’t go away.

The physicists who take simulation theory seriously are not paranormalists. They are applying the same analytical tools to a question that conventional worldviews tend to foreclose prematurely. The conspiracy theorist who believes the matrix is real has the wrong mechanism and the wrong epistemology. The philosopher and physicist asking whether the universe is computational have a genuine question with genuine methodology. The question itself is not fringe. It is where honest investigation of the nature of reality keeps arriving.

Reality is weirder than the conspiracists imagine because the conspiracists are still imagining humans did it, deliberately, to hide something. What Bostrom’s argument suggests is a situation with no conspiracy, no concealment, and no one to blame: just the mathematics of very large numbers of civilizations and very large amounts of computation arriving at a ratio that should make us genuinely uncertain about which layer of reality we occupy. That’s not paranoia. It’s logic. It’s just that the logic is deeply strange.