The ego-depletion literature is...exhausting.

Surprise, surprise. Hao et al. (2023) found evidence of ego-depletion effects on logical reasoning and monitoring accuracy.

Listen, we’ve all been there. It’s been a long day full of mental effort, and we get to that last task and just go, “Okay, I’m spent. I can’t do this. I’m going to go watch TV instead.” Intuitively, the more effort we exert, the harder it becomes to keep exerting effort rather than doing something “fun.” In the psychology literature, this phenomenon is called “ego-depletion,” and for quite a while it was pretty much “settled science” (i.e., people agreed it was real, or what my kids would call “a thing”). But then people started having trouble replicating findings, and meta-analyses suggested the earlier evidence wasn’t nearly as compelling as people thought. It’s puzzling when something as intuitive as ego-depletion, which anecdotally I feel just about every day, doesn’t pan out in research studies. But it happens, right? (cough, learning styles, cough) Sometimes, a thing isn’t a thing.

Well, a new study by Hao et al. (2023) further adds to the controversy over whether ego-depletion is a thing. Unlike other studies in education contexts, like ours, that did not find evidence of ego-depletion effects, Hao et al. found pretty compelling evidence of ego-depletion effects on logical reasoning and metacognitive monitoring accuracy. That tracks with the anecdotal experience that makes ego-depletion so appealing: when we exert a lot of mental effort, future instances of mental effort (like tough logical reasoning questions or carefully monitoring our performance) become that much harder to enact. So, what gives?

Well, like many psychological phenomena, we need to take the long view. It takes lots of studies, and lots of hard thinking (which can be exhausting), to thoroughly investigate and understand things like ego-depletion. Maybe past researchers hadn’t quite conceptualized it correctly. Or maybe our experiments were flawed. Or maybe this latest study by Hao et al. was a fluke. Whatever the case, this study suggests there’s more work to be done in this area. But, before I dive back in, I need to watch some TV.

Addendum at 10:18am EDT, 10/11/23: On Bluesky (which I’m liking so far), Brent Roberts pointed me to a post he wrote about the ego-depletion controversy, effect sizes, psychological methods, and the concern that previous research on ego-depletion is questionable at best. It’s a good read, and its arguments certainly apply to this study. What I probably should have written above is that Hao et al. is yet another data point and should be weighed against all the other data points, taking into account study quality factors like sample size. Like Brent, I hold out some small hope that better designs and conceptualizations might refine the idea of ego-depletion into something that matches, and explains, anecdotal experience. But we’re certainly not there yet. I understand those who decide to abandon work on ego-depletion, and I understand those who wish to continue pursuing it. I appreciate Brent’s emphasis on doing the work with careful and rigorous attention to methods.