I just became president of a scientific society dedicated to understanding human motivation, and my first official act was to sit through a keynote delivered by one of my heroes that made me want to stand up and shout “I don’t fully agree!”
I recently returned from a trip to Washington, DC, where I attended the annual meeting of the Society for the Science of Motivation. What has stayed with me the most is a talk delivered by one of the OGs of motivation research, Arie Kruglanski, that felt like a lament for the decline of grand theorizing.
In his view, social psychology (and motivation science) needs more theory. Big theory. Grand unifying frameworks to organize our scattered findings into something coherent and meaningful. The field, he argued, has become so fixated on rigor and methodological reform that it’s become distracted from the real work of theorizing. All this preregistration and replication stuff? That’s nice, but where are the bold ideas and sweeping explanations? Kruglanski lamented that without big new theories, social psychology is stagnating.
But what if the opposite is true? What if social psychology’s problem isn’t too little theory, but too many theories proposed too hastily?
Anthony Greenwald—the guy who gave us the Implicit Association Test—once suggested that theories can actually stifle progress. Once a theory is out there, you start seeing confirming evidence everywhere, even when it's weaker than the Nescafe “coffee” my mother drinks. And when stronger evidence comes along that doesn't fit? Well, there must be something wrong with the data, not the Big Beautiful Theory.
We all fall prey to confirmation bias, even those of us who study it for a living. Researchers become so invested in their theoretical hunches that they'll keep tweaking experiments, massaging methods, and reframing results until they get what they expect, essentially bending their studies to confirm their pet hypotheses.
It's like that rug in The Dude's apartment: the theory really ties the room together, even when it's covering up stains on the floorboards. And if the theory is appealing—because it is intuitive or elegant or socially compelling—it can be seductive, discouraging dissent and surviving on its rhetorical appeal or supposed importance even as the evidence starts to look threadbare.
I've lived through the spectacular collapse of two theories I once believed in: stereotype threat and ego depletion. Both were elegant, intuitive, and socially important. Both inspired hundreds of studies. And both turned out to be largely bullshit.
Imagine an alternative timeline where these effects were discovered without the grand theorizing. "Huh, some women seem to perform slightly worse on math tests under certain conditions when we remind them of negative stereotypes. Weird. Wonder if anyone else can find this?" If we'd approached it as a curiosity rather than a revolution in understanding inequality, maybe we would have noticed how fragile the initial evidence was. Maybe we would have demanded better replications before theorizing and building entire interventions around it.
But no.
We canonized these theories before we rigorously tested them. And, once canonized, theories stop serving truth and start serving the reputations of their creators. Our top journals actively encourage this theoretical gold rush, demanding that every paper advance or propose a new theory. Try submitting a careful descriptive study and watch reviewers ask: "But what theory does this advance?" As if documenting something interesting about human behavior isn't valuable unless it moves some theoretical ball forward.
I don’t mean to imply that theory is unimportant. But when your celebrated theoretical work turns out to be an elaborate castle built on data that was tortured into compliance, you get humble real fast. Perhaps rushing to theorize—before understanding what you’re explaining—just to get published in top journals is not ideal.
What’s the alternative? Maybe we aim for something more modest, but no less important: describing interesting phenomena accurately and measuring them with fidelity.
Paul Rozin reminds us that fields like biology spent centuries in a descriptive phase, patiently collecting facts, before their theoretical revolutions. Scientists often proceed abductively, guided by informed curiosity: hunches and speculations steer them as they poke around in the natural world, collecting descriptive data, and only later building models.
I'm not suggesting we can observe without any theoretical hunches. Philosophers of science remind us that all observations are theory-laden[1]. You need some provisional hypotheses to know what's worth studying, what's worth observing. But there's a crucial difference between having research questions and committing to elaborate theoretical mechanisms. The former is abduction: forming a tentative hypothesis to guide observation, ready to revise it in light of what we find. The latter is a theoretical commitment that can blind us to what we're seeing. And once we make that commitment, especially if we haven't fully mapped out and observed the phenomenon, psychological forces take over.
But descriptive research alone isn't enough. Anthony Greenwald argues that the key to progress is methodological: we need better tools for discovering things before we start theorizing about them. Think about it: when have breakthroughs occurred in physics and biology? Usually when someone invented a better telescope or microscope, not when they spun a cleverer theory. Great methods beget great theories because they generate the accurate observations that theories need to explain.
Indeed, some of psychology's greatest breakthroughs came not from theoretical leaps, but from careful observation and methodological innovation. The Stroop effect wasn't discovered by someone testing a theory about attention and automaticity—it emerged from J. Ridley Stroop's simple observation that people struggle to name the ink color of a word when the word spells out a different color. That basic descriptive finding and new tool—not theoretical insight—opened up decades of research on cognitive control.
Or consider patient H.M., whose profound amnesia after brain surgery revealed the distinction between different memory systems. McGill University’s Brenda Milner and her colleagues weren't testing a grand theory when they studied H.M.; they were cataloguing what this man could and couldn't do. That painstaking descriptive work revolutionized our understanding of memory and laid the foundation for modern cognitive neuroscience.
Or take my friend Wilhelm Hofmann's landmark descriptive study on everyday self-control. Hofmann asked a simple question: what do people's actual self-control conflicts look like in daily life? Using experience sampling, he mapped the landscape of everyday desires and temptations. The result was a rich descriptive portrait that challenged existing assumptions and opened up research directions we're still exploring today.
This is what good descriptive work looks like: it doesn't (only) confirm what we think we know; it shows us what we don't know. If you want other examples of simple or even dumb studies that are revelatory despite being more-or-less theory free, check out Adam Mastroianni’s post over at Experimental History.
What would psychology look like if we took description seriously? For one, we'd be more patient before proposing new theories. Instead of racing to explain every interesting finding, we'd spend time mapping the phenomena first. When does an effect occur? How robust is it? What are its boundary conditions? These aren't trivial questions.
Kurt Lewin famously said, "There is nothing so practical as a good theory." But we've rushed to theorize, resulting in theories that are flimsy, trite, or false.
I propose canonizing another, less pithy line, this time from Solomon Asch (via Rozin): “Before we inquire into origins and functional relations, it is necessary to know the thing we are trying to explain.” In other words, description must precede explanation.
The path forward isn't to choose between rigor and theory; it's to use rigor and precision in service of better theory. Sometimes the most productive thing we can do is to slow down and make sure we are observing carefully and describing what we actually see. Better theories through patience, not speed.
Look, Kruglanski's not wrong that we need good theories. But rushing to build them before we understand what we're explaining? That's how we got into this mess in the first place. Let's slow down and do this thing right this time.
[1] Thanks to Paul Bloom for reminding me of this. Paul was kind enough to read an earlier draft and helped me sharpen my argument.