Size matters? Science journalists pay attention to sample size when evaluating psychology research.

Bottesini et al. (2023) tested what affects science journalists' perceptions of the trustworthiness of psychology findings. But their exploratory findings were even more interesting to me.

I apologize - that easy joke in the headline is in poor taste (but as the Underminer said: “Nothing is beneath me!”). Don’t let my bad pun dissuade you from checking out Bottesini et al.’s (2023) very cool, pre-registered (yay!) study of what factors affect science journalists’ evaluations of the trustworthiness and newsworthiness of psychology research (and the paper is open access, huzzah!). In short, they found that a study’s sample size positively affected journalists’ ratings, but neither the representativeness of the sample, the p-value of the main finding, nor the institutional prestige of the researchers mentioned had any reliable effect. It was a clever design, and I really like the idea of investigating how science journalists evaluate evidence and arguments in psychology research. We need more of that. It reminds me of some work I did with Clark Chinn and Vic Deekens on how professors evaluate controversies in disciplines outside their own. It’s interesting to read how experts in one area (e.g., journalism) leverage that expertise to understand ideas in a different area (e.g., psychology research).

I have to say, though, I found Bottesini et al.’s exploratory findings even more intriguing than their pre-registered ones. After the journalists were done rating studies, the researchers asked them open-ended questions about which characteristics of studies they consider when evaluating the trustworthiness of results. The themes were fascinating and aligned very well with ideas about how to vet psychology research findings. Bottesini et al. suggested more research is needed into those themes, and I very much agree. I’d love to see follow-up experimental work examining those themes’ influence on journalists’ evaluations.