What the movie "The Jerk" has to teach us about source evaluation
Deep source evaluation strategies may be backfiring in a non-intuitive way.
This is dating me a whole lot, but here goes: on social media, I think many of us are not too different from Navin Johnson in The Jerk:
Navin is impressed when he sees his name in print, even though it’s just a phone book, and I suspect many people are similarly impressed when they see people “in print” online and on social media. For many, the internet still holds a certain novelty, as if publishing something online means it should be as well respected as something published by a reputable source. I have yet to develop the filters to immediately distinguish trustworthy internet sources from untrustworthy ones, the way I did in my youth when I learned to differentiate the New York Times from the tabloids in the grocery store. Given the lack of identity validation on most internet sites and how easy it is to pretend to be someone else, I should probably default to assuming that people responding to me online are closer to a name in the phone book than to the New York Times.
That lack of an intuitive sense of which sources to trust is why research on source evaluation is so important. When people come upon a source online, do they evaluate its quality, accuracy, and trustworthiness? Why or why not? And how should they evaluate sources? Researchers in library and information science, among other fields, have developed schemas for people to use when evaluating sources, but I have to admit I’m a bit skeptical of them.
I appreciate work such as this article by Vamanu and Zak (2022), in which they synthesize 15 approaches to helping students evaluate online information, distilling them into a series of questions about source credibility and content soundness (e.g., plausibility, an important idea explored by Lombardi, Sinatra, and others). But there’s compelling research showing that asking students to do these kinds of source evaluations is a big ask: most lack the knowledge and skill to do it well. Even more importantly, malinformation sources actually capitalize on well-intended source evaluation efforts: the more students “dig in” to an intentionally deceptive source, the more opportunities that source has to convince them of the malinformation.
Instead of traditional source evaluation tools, Sam Wineburg, Sarah McGrew, Mike Caulfield, and others have made the case for lateral reading. They tell students that when they come across a questionable claim, rather than “digging in” to evaluate the source and content, they should stop, open a new browser tab, and search for the claim in other sources. When they find trusted sources, they should check whether those sources make the same claim. If not, there’s no need to “dig in” to the questionable claim and its source any longer. Close that source and move on. Here’s a good explanation of lateral reading.
I like lateral reading a lot, and I think it presents a serious challenge to more traditional (and certainly well-intentioned!) source evaluation models. Given that I suspect many people have a Navin Johnson perspective on internet sources (“They’re published online, so they MUST be somebody!”), it may be better to practice lateral reading than deep source evaluation.