Sometimes the biggest changes in your life happen with a single click.
Ten years ago, I sat at my computer and hit “publish” on a blog post that would permanently alter my relationship with my field. The post, titled “Reckoning with the Past,” wasn't my first critique of social psychology. I had already dipped my toes in those waters with an earlier piece called “Check Yourself Before You Wreck Yourself” (a guest post on Simine Vazire’s blog), where I'd performed a pitiless audit of my own work.
But this post was different. This was my coming-out moment as a full-blown skeptic in social psychology.
At the time, the essay felt like a cri de cœur—a personal plea for our field to confront its shortcomings. But the day after it was published, as I watched the overwhelming response roll in on social media and my inbox, it became clear that this wasn’t just an essay; it was a turning point in my career. The post went viral—or as viral as things get in academia—caught the attention of The Atlantic, The Globe and Mail, and MIT’s Undark Magazine, and, almost overnight, it changed how I was seen within the field. With that one piece, I had effectively switched allegiances: publicly moving from the mainstream of social psychology to what some might call the replication-crisis camp.
The personal costs were painful. I lost a close friend and mentor—someone I deeply admired and looked up to—because they saw the replication movement as an act of treachery, a betrayal of the giants whose shoulders we stood on. The pain of that loss still stings. I still remember one of the last conversations I had with them, when they dismissed me as thoughtless, a lightweight. When someone who helped shape your intellectual journey sees your stance as a heedless betrayal rather than principled criticism, it cuts deep. It's one thing to know intellectually that speaking up might cost you relationships. It's another thing to, you know, actually experience that loss.
But the professional transformation was even more profound. Like a brother shamus, I found myself investigating a case that kept getting more complicated. I started seeing much of our cherished research as, frankly, worthless. And my skepticism, some would say cynicism, has only deepened over the years, evolving in ways I couldn't have anticipated when I first hit "publish."
After ten years, although I still see reliability and replicability as the field’s biggest problems, I now worry about validity a lot more than I used to. Maybe I should have heeded the wise words of my friend Eli Finkel, a psychology professor at Northwestern University, who came to similar conclusions years before I did. Replicability is a low bar for a science; it simply means we can reproduce our results when we try. But what if the results we’re reproducing say nothing about the real world? This is what I worry about now—that all these ingenious experiments we run say more about our cleverness than they do about the world we’re trying to understand.
Years after writing that post, while doing research for an episode of Two Psychologists Four Beers, I discovered William McGuire's work on perspectivism. His insights radically shifted my thinking about what our experiments can tell us—or more precisely, what they don't tell us. Lab experiments, I came to realize, are merely proofs of concept, demonstrations that something could happen under specific conditions. But that's a far cry from showing that these effects matter in the real world.
Let me give you an example.
Say we run an experiment showing that when people exercise self-control in a lab, they eat fewer cookies or chips or whatever tempting snack we put in front of them. Hooray! Hypothesis confirmed. Time to write that self-help book about how self-control is the key to weight loss, right? Not so fast. That lab study, pristine and controlled as it might be, tells us remarkably little about whether self-control helps people lose weight in the messy reality where we all live. In the real world, there are thousands of competing priorities and stimuli demanding our attention: work stress, family obligations, that neighbour who keeps playing Kenny Rogers at 3 in the morning. Maybe self-control does work outside the lab, but its effect is so tiny it gets drowned out by all these other, more powerful forces.
Marina Milyavskaya and I actually tested the power of momentary self-control in the real world. We followed people over weeks and months, tracking their moment-to-moment self-control and their progress toward various goals, including weight loss. What we found—much to our surprise and perhaps chagrin—was that self-control didn't predict success at all. The lab effects, so clean and promising, simply evaporated in the noise of real life.
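To make that “drowned out in the noise” intuition concrete, here is a minimal simulation sketch. None of this is our actual data; the variable names and effect sizes are assumptions I made up purely for illustration. The point is only to show how an effect that is perfectly real in the lab can explain almost nothing once larger real-world forces enter the picture.

```python
import numpy as np

rng = np.random.default_rng(2015)
n = 10_000  # a big, messy "real world" sample

# All hypothetical, standardized variables -- assumptions for illustration, not real data
self_control = rng.standard_normal(n)   # momentary self-control, the lab variable
work_stress  = rng.standard_normal(n)   # competing real-world forces...
environment  = rng.standard_normal(n)   # ...food environment, sleep, the Kenny Rogers neighbour
noise        = rng.standard_normal(n)   # everything else we can't measure

# Suppose self-control has a small true effect and the other forces have much larger ones
weight_loss = (0.10 * self_control
               - 0.60 * work_stress
               - 0.60 * environment
               + 1.00 * noise)

r = np.corrcoef(self_control, weight_loss)[0, 1]
print(f"self-control vs. weight loss: r = {r:.2f}, variance explained = {r**2:.1%}")
```

In this toy world the self-control effect is “real,” yet it accounts for well under one percent of the variance in outcomes, which is roughly what an effect that shows up reliably in the lab but evaporates in daily life would look like in a longitudinal dataset.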
So, what was the point of that original lab study? All it really did was prove that something could happen under perfectly controlled conditions. It's like showing that a hockey player has incredible stick-handling in practice and a hard, accurate shot during shooting drills, but then, in a real game, with other players hitting and competing for the puck, he can't find the net, never mind put the puck in it. The gap between could happen and does happen turns out to be wider than we thought, and maybe wider than we'd like to admit.
That said, the field of social psychology has made many strides in the ten years since I wrote that piece. The biggest change is methodological. We’ve cleaned up our act. A lot. Sample sizes, for example, are way up. That's mostly good news, though it comes with its own complications. The costs of running such massive studies have created new inequities, making it harder for researchers without deep pockets or access to major funding to run the studies they want. Hell, it’s even hard for labs with deep pockets, like mine, something I learned recently when my financial officer informed me that my grants for this academic year had run dry.
Other changes have taken hold, too. What seemed radical in 2015—preregistration, open data, open methods—is now standard practice. The replication crisis evolved into a renaissance, leading to better, more reliable science.
I’ve also seen the field pivot toward more applied work, for example, studying interventions that aim to change real-world outcomes or tracking how misinformation spreads in real time on social media. This is a fantastic shift, and I’d love to see more of us take our work out into the wild. Alongside this, I’m noticing a growing appreciation for descriptive research—work that simply documents the world as it is, without rushing to test hypotheses. I’m a big fan of this approach, which has been undervalued in our field. The theory of evolution, after all, began as purely descriptive work.
I've also watched social psychologists embrace our personality psychology cousins—you know, the ones we nearly drove into academic extinction back in the day. Now we're not just collaborating with them; many of us are diving into individual differences work ourselves. The irony isn't lost on me: personality psychology has emerged as one of our field's true pillars of strength, producing research that actually holds up under scrutiny. Which is more than I can say about the work of Walter Mischel, one of the chief architects of personality's near-death experience. For all his brilliance, Mischel's own research didn't escape the replication crisis unscathed. There's probably a Greek tragedy in there somewhere: the scholar who tried to bury personality psychology watched his own legacy diminish under the weight of methodological reform.
At the same time, I’ve noticed an explosion of vignette studies paired with self-reports, where participants are asked how they think they’d respond to hypothetical scenarios. These studies have their place—for example, in exploring moral reasoning—but I’m concerned that our push for larger sample sizes has unintentionally encouraged overreliance on these quick and therefore inexpensive methods. The problem is, they often fail to answer the real-world questions we’re trying to pose. Instead, they contribute to a psychology not of actual behaviour, but of how people imagine they might behave. There’s a big difference between the two, and I worry we’re losing sight of that.
So, while the field has changed a lot in ten years, and mostly for the better, living through that change has also been challenging. The transformation was swift and disorienting. The old hierarchies have collapsed, toppling many of the field’s elder statesmen—respected figures whose work we all collectively admired. The prestigious programs that once dominated social psychology—Michigan, Ohio State, Stanford, Waterloo, Yale—no longer hold the same unquestioned authority. These days, I honestly couldn't tell you which programs are on top. Sure, I think we've built something special at the University of Toronto, but how that's perceived beyond our halls? Your guess is as good as mine, man. Maybe this is all for the better. Maybe we needed this flattening so that we could better grok the truth.
The personal toll of this transformation was heavy. For much of the past decade, I felt like a shadow of my former self, grappling with questions that threatened to unravel everything I thought I knew about my field. There were dark moments when I dreamed of walking away entirely, retiring early and leaving the whole mess behind.
I share this not to elicit sympathy, but to highlight the challenge that many of us faced during the last decade in response to a science falling apart before our eyes. If you were an active social psychologist in that era, you had two choices: you could confront the problems head-on and try to fix them, or you could downplay their severity and encourage others to keep doing business as usual. I chose the former path, but that choice came with costs I hadn't anticipated. Thankfully, I eventually found my spark again, though not before learning some hard lessons about the price of speaking up.
So, was my blog post worth it? The professional isolation, the lost friendships, the deepening skepticism about my chosen field, and my ensuing burnout? Absolutely. When you publicly break ranks with your intellectual tribe, there's no going back. But, unless we acknowledge our own mistakes, there's also no moving forward.
To any young researchers reading this who might be harbouring their own doubts: it's okay to question. Science progresses through doubt, not certainty. And while coming out as a skeptic might feel risky, remember that there's a community waiting to welcome you. We might not have a clear hierarchy anymore, but maybe that's not such a bad thing. Science can always use more honest voices willing to say when the emperor has no clothes.
As for me? I'm still that same skeptic who impulsively hit "publish" a decade ago. Just a bit grayer, a bit more battle-scarred, and a lot more certain about the value of uncertainty. The journey from that first blog post to my current views on psychological science has been long and often painful, but it's also been clarifying. We need to be honest about what our experiments can and cannot tell us. And we need to be willing to admit when we might have been wrong.
P.S. Ten years later, I still sometimes wonder if I should be fixing myself more to drink.