Smart people are sometimes the last to realize that the cognitive ship they are captaining is about to sink. Sometimes they don't even recognize that their ship is already under water.
Research suggests intelligence might actually make motivated reasoning worse: smart people are better at mobilizing sophisticated tools and conjuring rationalizations for their doomed enterprise.
Annie Duke, the professional poker player, put it perfectly: “The smarter you are, the better you are at constructing a narrative that supports your beliefs, rationalizing and framing the data to fit your argument or point of view.” I know this intimately because I lived it.
Back in 2015, when ego depletion was under attack, I desperately tried to keep my theoretical ship afloat. My response? I co-authored a paper throwing every statistical tool at this body of work. Depending on how you analyzed the data, the effect could be modest, tiny, or nonexistent. We concluded that maybe, just maybe, there was still something there.
I was being smart, but I was also motivated. When someone threatens your work's foundation, you become remarkably creative at finding reasons why the threat isn't real.
But then came the registered replication report with over 2,000 participants and a big fat zero effect. No single study should be treated as the final word[1], but I still remember reading those results and being absolutely gutted. I wasn't ready to surrender based on one study alone though.
So, I went back to the lab, and I tried desperately to find ego depletion using the best methods I could muster. Over and over I tried, and over and over I failed. That's when I finally accepted what the data were screaming at me: ego depletion, at least as we'd been studying it, was as real as The Dude’s work ethic[2].
I thought of this while watching efforts to save another theory. My friend Steve Heine at UBC recently published a comprehensive meta-analysis applying forensic tools to salvage terror management theory.
Proposed in the 1980s by Jeff Greenberg, Sheldon Solomon, and Tom Pyszczynski, terror management theory starts with a simple premise: awareness of mortality generates terror. To manage this terror, we cling to cultural worldviews as psychological buffers.
I've always found this theory convoluted and had trouble explaining it to students. Let me try here. The basic idea is that we're all going to die, and that scares the shit out of us. So, we convince ourselves we're part of something bigger—our country, our religion, our hockey team—that will outlive us. When reminded of death, we supposedly cling more tightly to these beliefs.
This theory still doesn't make much sense to me. Many of us aren't so terrified of death, and I'm not sure how clinging to fries smothered in gravy and squeaky cheese would help me or my fellow Canadians cope. Hell, many of my fellow citizens seem overly enthusiastic about dying, if the popularity of our world-famous medical assistance in dying program is to be believed.
Despite this confusion, social psychology went gaga over the theory. At its peak, terror management theory spawned studies with increasingly wild claims. Studies purported to show that reminding people of death would lead them to advocate for extreme punishments for moral transgressors. They'd also prefer to fry themselves in tanning booths, spend more on luxury goods, believe in benevolent gods, and even desire more children.
Death anxiety seemed to be the invisible puppeteer controlling nearly every aspect of human behaviour. The theory seemed to be able to explain everything. And… maybe that was the first red flag.
But theories don't collapse because they're too ambitious or seem silly or convoluted. They collapse because they can't survive rigorous testing. And that's exactly what started happening in the late 2010s.
It wasn’t until 2019 that the field saw its first large replication attempt of one of its core findings. This project, dubbed Many Labs 4, involved over 2,000 participants across 21 labs, with Tom Pyszczynski directly involved. Half the labs received his personal guidance.
The result? Complete failure. Regardless of whether the original authors were involved or not, the core finding didn’t replicate.
But remember our friend, motivated reasoning? If you’ve become famous—or “famous”—for a theory, you sure are motivated to show that theory in the best light. And right on cue, after the sorry results came in, Pyszczynski co-authored a commentary suggesting that the precious theory showed signs of life. Pyszczynski argued that when the preregistration was followed more closely, death psychology was alive. But closer reading reveals multiple preregistered paths, only one showing a hint of an effect after loosening evidentiary criteria. No one without skin in the game could see this as compelling evidence[3].
This wasn't isolated. In a registered report, Simon Schindler and colleagues failed to replicate the effect across three preregistered studies with 1,700 participants. Mark it zero!
So, what did my friend Steve Heine find when he turned his statistical microscope on terror management theory? Heine’s sophisticated analysis of over 800 studies reached conclusions that reminded me of my own attempts to salvage ego depletion: conflicting results from different tools, modest effect sizes, massive heterogeneity, but maybe there's still something there if we use just the right approach.
Watching him deploy these sophisticated statistical weapons, I couldn't help but see echoes of my own journey. After all, I wrote something similar when ego depletion was on life support. But, when a literature is contaminated by questionable research practices, publication bias, and theory-encumbered observations, no statistical wizardry can resurrect it.
I believe that you cannot fix meta-analysis's deep garbage-in-garbage-out problem. I once told a journalist that “meta analyses are fucked,” not expecting him to print my profanity. Others agree with my skepticism, saying “many, perhaps most, meta-analyses in the behavioral sciences are invalid”. Heine disagrees, arguing bias-correction tools can separate wheat from chaff. But I remain convinced some literatures are too contaminated to salvage. In my view, the only solution is to start over from basic principles and see if you find anything.
So, what happens if you look at preregistered replications? Terror management theory has flatlined. By my count, five preregistered replications have failed to find the basic effect, with numerous failed non-preregistered replications too[4].
Imre Lakatos distinguished between progressive and degenerating research programs. There's a difference between theories that discover new things and those desperately making excuses. Terror management theory has become the academic equivalent of Weekend at Bernie's: its defenders aren't breaking new ground; they're just propping up a corpse and pretending it's still dancing.
The psychology of death, at least as conceived by terror management theory, is dead. We killed it. The replication attempts of both supposedly basic effects and auxiliary assumptions[5]—for example, that death only becomes accessible after a sufficient delay—have shown us the corpse. The forensic analyses show us a body of work deeply contaminated. I would go further and say it is contaminated beyond repair.
Coda
The real lesson here is about the Annie Duke trap. I needed ego depletion to be real, and I can see that motivated reasoning clearly in retrospect. But here's the unsettling part: Am I engaged in motivated reasoning right now? We're always smart enough to spot self-deception in others, never clever enough to catch ourselves in the act. And maybe I'm the one captured by my own motivations.
I'm so convinced that terror management theory is dead that I might be blind to evidence suggesting otherwise. Heine is far more nuanced than I. In his paper, he acknowledges mixed evidence and presents both pessimistic and optimistic interpretations. Maybe my eagerness to declare theories dead says more about me than the evidence. Maybe my reputation as a so-called truth teller—whatever that means—has become a weird kink. After all, I'm the guy who sees ego depletion's ghost everywhere, convinced that if one major theory can collapse, they all can.
I too dabble in pacifism, so I’ve invited Steve Heine to write a commentary responding to this piece. I suspect he'll make a compelling case that I'm the one trapped in the Annie Duke trap. I look forward to him setting me straight.
[1] Thanks to Steve Heine for helping me better appreciate this point. Steve was good enough to read this essay in advance and gave me lots of notes, which was gracious of him, especially as I am disagreeing with him publicly. Steve is a mensch.
[2] Years later, I did eventually find a small depletion effect, but only after completely reinventing our approach and starting from scratch. This didn't vindicate the existing literature, though. It was almost as if we started studying a different phenomenon altogether—one robust and one less so.
[3] A multiverse analysis, in which the data are analyzed in all defensible ways, confirms this.
[4] In his paper, Heine also counts five successful preregistered studies, but none of these are replications, and some (most?) test ideas that are not uniquely derived from terror management theory. For example, one paper finds that those reminded of terrorism show more anti-Muslim attitudes, which is hardly surprising and hardly a good test of terror management theory.
[5] By auxiliary assumption, I mean procedural requirements that researchers claim are necessary to observe an effect but aren't part of the theory's core predictions.