The Reactions You Didn't Choose
A bias audit for the decisions that are supposed to be objective
Last week, a license plate frame that read “Happiness is being Norwegian” gave me a reaction I didn’t choose. It was already formed before I knew it was happening, small and quick, the kind of thing you’d miss entirely if you weren’t paying attention. Norway is one of my favorite destinations in the world, a place that lingers in my mind long after my trip. So when I saw the frame, my reaction was an instant warmth towards a stranger, with whom I shared a fondness for Norway.
And then a thought: what if that plate had reflected something I had no connection to? The warmth wouldn’t have been there. Same stranger, same car, same morning. The only thing that changed was whether I saw myself in them.
That’s when something clicked. I needed a way to get under reactions like that one in real time, before they hardened into decisions.
The mechanism
Most bias training asks you to suppress the snap judgment, override the instinct, and apply the framework. That approach misreads what a reaction is. Your reactions are data that hasn’t been decoded yet, and treating them like noise just buries the signal.
The variable swap is how you decode it. Take the situation you’re reacting to and swap one variable. Keep everything else identical. Then observe what changes.
If your reaction changes, you’ve found a gap between your stated principles and your actual behavior. Something worth examining, not because it makes you a bad person, but because the gap is information. Stay curious instead of defensive.
If your reaction stays the same, you’ve found a genuine principle. And now you can ask a sharper question. Why does the world respond differently to identical things?
If you learn nothing, you swapped the wrong variable. Try another. The recursion is what gives this tool its depth. Run it enough times, and it becomes a path toward first principles, one swap at a time.
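Abstracted from the examples, the loop above is simple enough to sketch. Everything in this snippet is illustrative, not a prescribed interface: the dict fields, the `react` stand-in, and the gap/principle labels are my assumptions, and in real life the "reaction" is something you observe in yourself rather than compute.

```python
# Toy sketch of the variable-swap loop: a "situation" is a dict of
# variables, and react() stands in for your gut response.

def run_swaps(situation, react, swaps):
    """Compare the reaction to the original situation against the
    reaction to each single-variable swap."""
    baseline = react(situation)
    findings = []
    for variable, alternative in swaps:
        swapped = {**situation, variable: alternative}  # change exactly one thing
        changed = react(swapped) != baseline
        # A changed reaction marks a gap worth examining; an unchanged one
        # marks a candidate principle. If nothing changes across many swaps,
        # you're probably swapping the wrong variables.
        findings.append((variable, "gap" if changed else "principle"))
    return findings

# Illustrative stand-in: warmth only when the plate matches a place I love.
react = lambda s: "warm" if s["plate"] == "Norway" else "neutral"
findings = run_swaps(
    {"plate": "Norway", "car": "gray sedan"},
    react,
    [("plate", "Ohio"), ("car", "red hatchback")],
)
# The plate swap changes the reaction (a gap); the car swap doesn't,
# because the warmth was never about the car.
```

The point of the toy is only the shape of the procedure: hold everything constant, move one variable, watch the delta.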
You can run this two ways. The first is after the fact. You catch yourself with a reaction you didn’t choose (a smile, a judgment already made) and run the swap to find out what your instincts were carrying.
The second is before the fact. Before a hire, before a promotion call, before championing one person’s idea over another’s, you run the swap on yourself first. Same tool either way. The first builds self-awareness over time. The second builds integrity into a decision while there’s still time to change it.
This matters most in the decisions leaders think are objective. Hiring. Promotion. Whose idea gets championed in a meeting? Who gets the benefit of the doubt when a project misses? The answer should be independent of who’s asking and who’s being evaluated. Run the variable swap, and you’ll find out whether it actually is.
The limit nobody talks about
There’s a ceiling on this tool that’s worth naming.
You can only swap variables that you can imagine. The ones you’re most motivated to avoid testing are exactly the ones you won’t think to generate. The constraint here is the operator, every time. The unknown unknowns aren’t random. They cluster around the places where examination would be most uncomfortable.
I’ve run this sweep enough times to know that my variable list is not neutral. It reflects my frame of reference, my experiences, and the categories that feel salient to me. When I swapped the plate frame for other possibilities, I picked the ones already in my mental model. I didn’t pick the ones I had no instinct about, because I didn’t know they were missing.
That’s a real ceiling. And it’s where things get interesting.
What AI actually does here
There’s a version of the AI argument I find unconvincing: that AI can help you examine your reactions because it’s unbiased. It isn’t. AI is trained on human-generated data, which means it reflects the aggregate of human perception, not its absence.
Which is exactly what makes it useful.
The honest version of the argument is narrower. AI expands the variable set beyond your blind spots into what the world around you actually perceives. The variables you wouldn’t think to test. The variables that millions of people, with different frames of reference, different experiences, and different cultural contexts, would notice immediately.
I came across a parallel method recently from Wyndo on Substack: tell AI where you want to end up, then ask it to work backward and map the chain of assumptions you never examined. Different entry point, same underlying insight. AI as a tool for surfacing what you couldn’t see from where you were standing.
AI is widening the aperture of the test, nothing more. The reaction is still yours. The noticing is still yours. The examination is still yours. What AI contributes is a variable set that reaches beyond your own perception into the broader landscape of how the world sees things.
What this looks like in real life: you bring a decision to AI before you make it. You don’t ask “Am I biased?” because that question produces nothing useful. You ask: “Here is the situation. What variables might be influencing my reaction that I haven’t thought to test?” Let it generate the list. Then run your reactions against that list honestly.
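As a concrete sketch of that framing, here is what the prompt construction might look like. The helper names and the line-per-variable reply format are my assumptions, not a fixed interface; the only load-bearing idea is asking for variables to test rather than a verdict.

```python
# Sketch of the pre-decision prompt described above.

def build_swap_prompt(situation: str) -> str:
    """Ask the model for variables to test, not a yes/no verdict on bias."""
    return (
        f"Here is the situation: {situation}\n"
        "What variables might be influencing my reaction that I "
        "haven't thought to test? List one variable per line, each "
        "with a concrete swap I could run."
    )

def parse_variables(model_reply: str) -> list[str]:
    """Pull candidate variables out of a line-per-item reply."""
    return [line.strip("- ").strip()
            for line in model_reply.splitlines() if line.strip()]

prompt = build_swap_prompt(
    "I'm inclined to champion one engineer's proposal over another's."
)
```

You’d send `prompt` to whatever model you use, then run your honest reactions against each item `parse_variables` returns.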
What this builds over time
Running this sweep on every decision isn’t realistic, and it isn’t the goal anyway. The goal is to run it often enough, in hiring reviews, in performance cycles, in the five minutes after a meeting when something felt off but you couldn’t name why, that over time your instinct itself becomes more principled.
The human does the noticing. AI widens the aperture. Repetition does the recalibration.
You don’t become unbiased. No one does. But you can become someone whose instincts have been tested enough times, against enough variables, that they’re carrying less you don’t know about. What the license plate really taught me is that the instincts I trust most are exactly the ones that have never been stress-tested. If warmth can be trained into you without your noticing, then any reaction can. The plate frame just happened to be the one I caught.
There’s a harder problem waiting: running the variable swap outward and the backward mapping inward on the same decision, and watching what happens when they converge. That’s next.
If you've run a version of this swap on yourself, I'd love to hear what variable you landed on. Reply or leave a comment.


