In the winter of 2015, I stood before the largest gathering of social psychologists in the world to accept one of the field’s highest honours. My collaborators and I were being celebrated for our theory about willpower—a theory I’d spent many years refining. For a kid who grew up with empty bookshelves, this should have been a moment of triumph [1].
Instead, I felt like a fraud.
At that same conference, I had to confront an uncomfortable truth: the foundation of our celebrated paper was crumbling. Ego depletion—the once-famous idea that self-control relies on a finite resource that can be depleted through use—wasn’t real. That award? It was like winning a Nobel Prize for developing the frontal lobotomy as a treatment for mental illness (and, yes, that really happened).
This isn’t just another story about failed replications or p-hacking (though those shenanigans will make an appearance). It’s a story about what happens when we fall in love with our theories more than the truth. The replication crisis didn’t just shake the foundations of psychology; it shook those of us who had built our careers on ideas that no longer held up to scrutiny.
For me, this reckoning wasn’t just professional. It was personal. I couldn’t look away from what the data were telling me, even when it meant questioning—or outright repudiating—my own research. Grappling with the emotional fallout of the crisis was destabilizing. It left me hollow and apathetic, wondering if I had wasted twenty years of my life working on bullshit. Was there any purpose in continuing in a field that had allowed these failures to happen—and, in some ways, still does? I wallowed in these feelings for nearly a decade.
Thankfully, I’ve found my enthusiasm again. But the lessons of the past decade still haunt the field, as beloved theories continue to fall apart. Which is why I nearly choked on my White Russian when I saw a recent defense of ego depletion.
In a new paper, Roy Baumeister—the originator of ego depletion and someone I consider a friend despite our deep disagreements—makes a claim as bold as it is baffling: ego depletion, he argues, is “one of the most replicable findings in social psychology.” He goes further, asserting that critics of the theory are merely a vocal minority, implying that most scientists still see ego depletion as solid and reliable.
To put it bluntly: I disagree. Strongly.
I’m no doubt blinded by my place in that vocal minority, but this claim is wildly out of step with the evidence. Far from being one of the most replicable findings in social psychology, ego depletion has become the textbook example of how seductive ideas and questionable research practices can lead an entire field astray. And I’m not exaggerating here: ego depletion is one of the main examples of replication failure in Charlotte Pennington’s undergraduate textbook on the replication crisis. What began as a compelling theory of self-control has collapsed under the weight of replication failures, flawed methods, and a refusal to confront inconvenient truths.
The rise of a big idea
Ego depletion emerged in the 1990s with a simple yet profound idea: self-control relies on a limited resource, and drawing on that resource in one domain leaves you with less of it for another. Think of it as a kind of mental fuel tank that depletes with use. If you control your emotions while dealing with your intemperate boss, you might have less self-control to resist overeating at lunch—or maybe you’ll skip the gym after work.
This resource model of self-control captivated psychologists and the public alike. Even President Obama cited it during his time in office. Ego depletion seemed to explain everything: overeating, procrastination, why decision-making feels harder at the end of the day. It was an elegant, intuitive theory with wide-reaching implications.
But cracks began forming in the early 2010s. The first challenge wasn’t about replication, but theory. What exactly was this resource being depleted? Some researchers pointed to glucose, but that explanation fell apart quickly. Others suggested that self-control failures were more about shifting priorities or changes in motivation than running out of a finite resource. My own work joined this theoretical critique, questioning whether ego depletion was really a matter of resources at all. It was this critique that won me that big award.
Then came the far more devastating challenge: was ego depletion even real?
Replication failure
The undoing of ego depletion began as whispers in conference halls and seminar rooms—murmurs that something was off with this darling of social psychology. I remember Jordan Peterson cautioning me against working in the area in the late 2000s, citing his inability to replicate the basic effect. Occasionally, a failed ego depletion study would make it into print. Some of these failures came from studies that were preregistered—meaning the researchers specified their hypotheses and methods in advance—making the results particularly trustworthy by reducing the influence of questionable research practices.
Then came the meta-analyses, adding weight to these scattered failures. Meta-analyses, which aggregate findings from multiple studies, are often regarded—wrongly, in my opinion—as the gold standard of evidence. While an early meta-analysis published in 2010 seemed to vindicate ego depletion as a replicable, robust, and large effect, new information about meta-analyses came to light, man. Meta-analyses have a serious and often unfixable flaw: the garbage in, garbage out problem. If the studies being analyzed are riddled with questionable research practices or if null results are excluded due to publication bias, the conclusions drawn from the meta-analysis are worse than useless; they’re actively misleading.
To address this problem, statisticians have developed bias-correction tools that adjust meta-analyses to account for small-study effects and publication bias. And when these tools were applied to a series of ego depletion meta-analyses, the results were devastating: the effect of ego depletion disappeared. The robust, large effect celebrated in 2010? Gone.
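For the curious, here is a rough sketch of what one of these bias-correction tools does. The precision-effect test (PET), a cousin of the PET-PEESE estimators that have been applied to the depletion literature, regresses each study’s effect size on its standard error; if small, noisy studies systematically report bigger effects (the funnel-plot asymmetry that signals publication bias), the intercept estimates what the effect would be in an ideally precise study. The numbers below are invented for illustration, not the actual ego depletion data.

```python
import numpy as np

# Invented study-level summaries (NOT the real ego depletion meta-analyses):
# effect sizes (Cohen's d) and their standard errors. Smaller studies
# (bigger SEs) report bigger effects -- the classic small-study pattern.
d  = np.array([0.70, 0.58, 0.56, 0.48, 0.46, 0.38, 0.26, 0.18])
se = np.array([0.34, 0.30, 0.27, 0.25, 0.22, 0.20, 0.12, 0.10])

# Naive pooled estimate: inverse-variance weighted average, as in a
# standard fixed-effect meta-analysis.
w = 1.0 / se**2
naive = np.sum(w * d) / np.sum(w)

# PET: weighted least-squares regression of effect size on standard error.
# The slope soaks up the small-study asymmetry; the intercept is the
# estimated effect at SE = 0, i.e., the bias-corrected effect.
X = np.column_stack([np.ones_like(se), se])      # columns: intercept, SE
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * d))
intercept, slope = beta

print(f"Naive pooled effect:       d = {naive:.2f}")
print(f"PET bias-corrected effect: d = {intercept:.2f} (small-study slope = {slope:.2f})")
```

Run on a literature with strong small-study effects like this toy one, the naive pooled estimate looks respectable while the bias-corrected intercept lands near zero, which is the same basic pattern the depletion reanalyses reported.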
Okay, this was bad, but still just theoretical, right? What we needed was an actual experiment—something big, something undeniable. Well, that’s exactly what happened next.
Enter the registered replication report. This was no ordinary study. It brought together 24 labs from around the world and tested over 2,000 participants, all to replicate a previously published ego depletion study. This massive effort wasn’t perfect (no study is), but its scale and rigor set a new standard. Before the study even began, the participating labs—which included many of the world’s leading experts on the topic—were polled on their expectations. Of the 24 labs, 23 predicted a real ego depletion effect.
And then the results came in: no effect. None. The ego depletion effect was no different from zero, confirming what the bias-corrected meta-analyses had already suggested.
When this was published, it hit me like a bombshell. I couldn’t believe that such a statistically powerful study failed to show what I thought was obviously true. Stunned, I went straight to my own lab to try to replicate ego depletion for myself. Over and over I tried. Over and over I failed. That’s when I became convinced: ego depletion, at least as typically studied in the lab, was a mirage.
But not everyone was convinced. Kathleen Vohs, perhaps the most prominent proponent of ego depletion after Baumeister himself, decided to launch her own registered replication. This study was a marked improvement on the previous one, with tighter controls, better manipulations, and an even larger sample. If ego depletion was real, this study should have found it.
But the result was the same. No meaningful effect. The estimated ego depletion effect was so close to zero that you’d need an electron microscope to see it.
Oof. Game over.
How can 600 studies be wrong?
A few years ago, I spoke with Baumeister about all of this, and he asked me a series of sharp questions: How can 600 apparently supportive studies all be wrong? Are we to believe there was some vast conspiracy of researchers around the world, with hundreds of labs working together to prop up a dead idea?
The answer, of course, is no. There was no conspiracy. Yes, I’m Jewish, but I never sat in a smoke-filled room with a secret cabal of researchers scheming to sustain a falsehood, and I’m sure no one else did either. So how is this possible? Say what you will about the tenets of ego depletion theory, at least it had an ethos.
Unfortunately, the combination of a seductive idea, questionable research practices, and publication bias can make the ludicrous seem plausible. Consider how Daryl Bem once supposedly proved that ESP is real, or how some methods reformers managed to show that listening to the Beatles could make people age in reverse. When we abuse our experimental tools, we can make anything seem real. And when only positive findings are published while negative ones are filed away, the resulting literature can appear irrefutable.
But a closer look reveals that the foundation of our field was unstable from the very beginning. This is true for ego depletion, just as it’s true for stereotype threat (and many other so-called foundational concepts in psychology that we’re too scared to look more closely at).
Ego depletion is dead: Long live fatigue
In Baumeister’s recent defense of ego depletion, he points to work on fatigue, suggesting that longer, more effortful manipulations can have downstream consequences.
On this point, Baumeister and I agree.
Yes, when people work for many hours—not minutes—their willingness and even their capacity to exert effort diminishes. One Herculean study, for instance, had participants perform effortful tasks in a brain scanner for six hours. Fatigue effects only began to show up around hour four or five. Similarly, my lab found a small depletion effect when we maximized effort for each participant and then decomposed subsequent performance with computational modeling, which let us examine latent psychological processes, such as changes in response caution and the rate of information processing, rather than surface-level behavior alone.
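For readers wondering what that kind of decomposition looks like in practice: sequential-sampling models such as the drift diffusion model split raw speed and accuracy into latent parameters, most importantly a drift rate (how quickly information is accumulated) and a boundary separation (how cautious the responder is). The sketch below uses the closed-form EZ-diffusion approximation of Wagenmakers and colleagues on made-up summary statistics; it illustrates the general approach, not the specific model or data from the study I just mentioned.

```python
import numpy as np

def ez_diffusion(prop_correct, rt_var, rt_mean, s=0.1):
    """EZ-diffusion (Wagenmakers, van der Maas, & Grasman, 2007): map accuracy,
    correct-RT variance, and mean correct RT (in seconds) onto drift rate v,
    boundary separation a, and non-decision time Ter."""
    # Keep accuracy away from 0 and 1 so the logit stays finite
    # (an accuracy of exactly .5 would also need special handling).
    pc = np.clip(prop_correct, 1e-4, 1 - 1e-4)
    L = np.log(pc / (1 - pc))                        # logit of accuracy

    x = L * (L * pc**2 - L * pc + pc - 0.5) / rt_var
    v = np.sign(pc - 0.5) * s * x**0.25              # drift rate: rate of information uptake
    a = s**2 * L / v                                 # boundary separation: response caution
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))   # mean decision time
    ter = rt_mean - mdt                              # non-decision time (encoding + motor)
    return v, a, ter

# Invented condition summaries (accuracy, RT variance, mean RT) -- not real data.
conditions = {
    "control":     (0.92, 0.040, 0.58),
    "high effort": (0.89, 0.055, 0.61),
}
for label, (pc, var, mrt) in conditions.items():
    v, a, ter = ez_diffusion(pc, var, mrt)
    print(f"{label:>11}: drift v = {v:.3f}, boundary a = {a:.3f}, Ter = {ter:.3f} s")
```

In this toy example, the lower drift rate in the “high effort” condition is the kind of subtle, latent slowdown that raw accuracy alone can easily miss.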
But let’s be clear about two things.
First, the idea that a few minutes of self-control can leave you unable to resist temptation later has been thoroughly debunked. What remains is something far more mundane: fatigue. And we’ve known that fatigue affects attention, cognition, memory, emotion, and volition for over a century. Simply showing that people modestly underperform when tired isn’t groundbreaking; it’s one of the oldest topics of study in psychology.
Second, conceding that depletion only emerges after long, effortful tasks doesn’t redeem those 600 studies; it invalidates them. Most of those studies used tasks that were short and not particularly hard. In other words, the field can’t just say the existing literature stands because fatigue is real. It needs to start over.
Anybody up for that?
The lessons we should learn
Ego depletion’s collapse isn’t just a story about one flawed theory—it’s a cautionary tale for all of science. The replication crisis has shown us how easy it is for compelling ideas to gain traction, even when the evidence is weak. Ego depletion was built on an elegant theory and many studies that appeared to work, but elegance isn’t evidence, and quantity isn’t quality.
Baumeister is rightfully lauded for helping to shape the field of self-control. But, by denying the collapse of the theory of ego depletion, he risks undermining that legacy. Admitting mistakes wouldn’t diminish his contributions—it would elevate them.
The theory of ego depletion reminds us of the dangers of scientific overreach, the power of compelling narratives, and the importance of rigorous methods. If we’re serious about building a more robust science, we need to confront these failures head-on—and learn from them.
A nerdy happy hour invitation
Before you go, a quick sidebar. I’m launching Drink and Regret, a monthly virtual happy hour where we can dive deeper into one of my posts. For our inaugural session, we’re talking about my post on stereotype threat, another beloved psychological theory that didn’t quite survive scientific scrutiny.
Normally, this would be a paid-subscriber-only event, but I’ve decided to invite everyone interested for this first go-around. Paid subscribers still get VIP treatment (first to ask questions, maybe some extra snark), but everyone’s welcome.
📅 Date: Thursday, February 6, 2025
⏰ Time: 5pm EST
🍺 Beverage: Bring your drink of choice. I’ll have a lager.
Interested? Great. Not interested? Also great. No pressure. Just a chance to geek out about science, swap some regrets, and maybe learn something along the way.
[1] I've written about this moment before in my blog, but the story bears repeating because it captures something essential about how science can go wrong—and how we might make it right.