The cognitive selection pressure nobody's talking about

Everyone's asking whether AI will replace humans. Almost nobody's asking what happens to your cognition after ten thousand AI interactions.

A new paper from Karlsruhe Institute of Technology—headed to the International Conference on Information Systems—finally names what practitioners have felt for years: your mental models aren't static. They're being reshaped by every AI interaction. The authors identify three cognitive structures that evolve through human-AI collaboration: your understanding of the domain, your model of how the AI thinks, and your awareness of when you're better versus when it is.

What they're too polite to say is that this reshaping has a direction. And for most people, it's not good.

The Selection Pressure

The paper frames this as a design problem: build better feedback mechanisms, add transparency, calibrate reliance. These are reasonable engineering suggestions. They're also missing the geometry of what's actually happening.

AI isn't a neutral tool that affects everyone equally. It's a selection pressure that's sorting the population into two groups.

The Boned: People with existing cognitive architecture—real domain expertise, metacognitive capacity, pattern recognition that's been earned through reps—use AI as an exoskeleton. Each interaction sharpens their mental models because they have structure to push back against. The AI's outputs become training data for their own cognition. They get stronger.

The Boneless: People without that architecture have nothing to resist with. Every AI interaction is a small surrender of cognitive territory. They're not collaborating—they're being slowly replaced from the inside. Their mental models aren't developing. They're atrophying into thin wrappers around 'ask the AI.'

The middle—the person who's 'pretty good' at their job, occasionally pushes back, mostly defers—is on a slower timeline to boneless. The selection pressure isn't averaging. It's sorting.

The Mechanism: Non-Ergodic Cognition

Here's where complexity science earns its keep. The paper's framework treats mental model development as if it operates on ensemble averages—'on average, these mechanisms improve cognitive outcomes.' But you don't live in an ensemble. You live on one cognitive timeline.

The same math that explains why repeatedly taking even small risks of ruin guarantees ruin in the long run explains what's happening to cognition under AI exposure. The dynamics are multiplicative, not additive.

If you start with high-dimensional mental models—real expertise, not credentials—AI interactions are multiplicative in the right direction. You compound. Each good judgment call builds structure for the next. Your pattern recognition gets sharper because you're constantly testing it against the AI's outputs and learning where you add value.

If you start with low-dimensional models, the same compounding runs in reverse. You're not learning—you're outsourcing. And outsourced cognition doesn't come back. The skill atrophies. Each deferred decision makes the next deferral more likely. You're approaching an absorbing state: total dependency.
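A toy calculation makes the asymmetry concrete. The numbers below are illustrative assumptions, not measurements: treat capability as an index that gets multiplied by a small factor each interaction, positive when you engage your own judgment, negative when you defer.

    # Toy model of a single cognitive timeline under multiplicative dynamics.
    # Every number here is an illustrative assumption, not an empirical value.
    def run_timeline(start: float, rate_per_interaction: float, interactions: int) -> float:
        capability = start
        for _ in range(interactions):
            capability *= 1 + rate_per_interaction  # compounds or decays; never averages out
        return capability

    # Same magnitude per interaction, opposite sign, ten thousand reps.
    compounder = run_timeline(1.0, +0.001, 10_000)  # ~2.2e4: structure compounds
    atrophier = run_timeline(1.0, -0.001, 10_000)   # ~4.5e-5: deferral decays toward zero

    # Ensemble average growth factor per step: (1.001 + 0.999) / 2 = 1.0,
    # i.e. "on average, no effect" -- yet neither individual timeline stays near 1.0.
    print(compounder, atrophier)

The ensemble average says nothing happened; the two timelines end up more than eight orders of magnitude apart. That gap is what the 'on average' framing hides.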

This is the same geometry as ruin in finance. A small probability of cognitive capitulation per interaction compounds to certainty over enough interactions. The only safe probability is zero—which requires having something to push back with.
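To put rough numbers on it, suppose, purely for illustration, there's a 1% chance per interaction that you cave when you shouldn't. The probability of getting through N interactions without a single capitulation is 0.99^N: about 37% after 100 interactions, about 0.004% after 1,000, and effectively zero after 10,000. And that's before accounting for the feedback loop, where each capitulation makes the next one more likely.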

What the Paper Gets Right

Credit where due: the three-model framework is useful. Holstein and Satzger identify:

Domain mental models—your understanding of the actual territory. What patterns matter, what's noise, what causes what. This is the ground truth you're supposed to be checking AI outputs against.

Information processing mental models—your theory of how the AI thinks. What inputs it weighs, where it's reliable, where it fails. Without this, you can't know when to trust it.

Complementarity-awareness mental models—your honest assessment of when you're better and when the AI is better. This is the metacognition that enables calibrated reliance instead of blanket trust or blanket rejection.

The engineering example they give is perfect: An engineer sees an AI flagging a temperature anomaly. Her domain model recognizes it's just someone opening a window in winter—context the AI doesn't have. Her model of the AI's processing tells her it's pattern-matching without causal understanding. Her complementarity-awareness gives her confidence to override. All three models working together.

Now imagine someone without those models. They see the flag, they trust the flag, they escalate the flag. Repeat a thousand times. Their judgment atrophies because it's never exercised. The AI hasn't made an error—but it's hollowed out the human capacity that was supposed to catch errors.

The Barbell That's Already There

The paper proposes mechanisms for developing these mental models: data contextualization, reasoning transparency, performance feedback. Fine. But these mechanisms only work if you have the cognitive infrastructure to use them. Transparency doesn't help if you don't have domain knowledge to evaluate what's being made transparent.

What's actually needed is a barbell approach—one that most organizations are failing to implement because they don't see the geometry:

Safe core (90%): Your domain expertise, your metacognitive capacity, your irreducible judgment. The cognitive structure that can't be delegated without losing the game. This is what the AI should be serving, not replacing.

Convex tail (10%): AI as leverage. Pattern detection at scale. Speed on tasks where your judgment can verify outputs. The stuff it's genuinely better at, deployed where you maintain oversight.

Forbidden middle: 'I'll use AI to help me think.' No. Either you think and use the AI as a tool, or the AI thinks and uses you as an interface. The middle is a transition state, not a stable position. It's where you tell yourself you're collaborating while slowly ceding cognitive territory.

What To Do About It

For knowledge workers: Track when you accept AI recommendations without friction. Not 'without disagreement'—without even the momentary pause where you check it against your own judgment. That frictionless acceptance is the early warning sign. It means your complementarity-awareness model is already damaged. You've stopped asking 'am I better here?' because you've pre-decided you're not.

For people designing AI systems: The paper's three mechanisms are necessary but insufficient. You need to actively preserve dimensional diversity—design friction that keeps humans in the loop as generators, not just validators. The goal isn't to maximize efficiency. It's to maintain the human cognitive capacity that makes the collaboration valuable in the first place.

For leaders: Stop measuring AI integration by how much headcount it cut or how much faster it made your processes. Start measuring whether your people are getting sharper or duller. The efficiency gains are visible immediately. The cognitive erosion takes years to manifest and is nearly impossible to reverse.

For individuals: Treat AI like a training partner, not an oracle. The goal is to sharpen your pattern recognition, not to offload it. Every time you take an AI output without engaging your own judgment, you're training yourself to not have judgment. Every time you push back—even when the AI turns out to be right—you're maintaining the muscle that lets you push back when it matters.

The Real Question

The paper asks how AI systems can develop human mental models. That's the polite, engineering-forward framing. The real question is darker and more personal:

Do you have mental models worth developing?

If you do—if you've put in the reps, built the domain knowledge, developed the metacognitive awareness—then AI is about to make you extremely powerful. You'll think faster, see patterns you couldn't see before, and maintain the judgment to know when to trust it and when to override.

If you don't, the next few years are going to be rough. Not because AI will replace you directly—that's the wrong fear. But because AI will hollow out whatever cognitive capacity you had, leaving you as a warm body that routes queries to the machine and reports outputs to management. You'll still have a job. You just won't be doing it.

Your spine is your moat. The sorting has already started: the middle of the curve is emptying into the two ends of the barbell. The only question is which end you land on.

Reference: Holstein, J. & Satzger, G. (2025). Development of Mental Models in Human-AI Collaboration: A Conceptual Framework. Proceedings of the 46th International Conference on Information Systems, Nashville, Tennessee.