It may not matter when; it just matters that we combat misinformation.

Information is sticky. Once we read it, it can be hard to get out of our heads, even if we later learn that the information was actually false (i.e., misinformation). This is called the continued influence effect, and it's a serious concern in our modern, social-media-infested world. More and more people get their news from social media rather than from newspapers or other journalism sources. Unfortunately, social media platforms do little to no curating of news topics, and AI has made it even easier to produce convincing misinformation. As a result, misinformation is a real threat to people who expect important news to "find them" via social media. What can we do about this?

Well, there's been lots of great research into how to refute misinformation, online and elsewhere. One question has been whether it is better to "prebunk" misinformation (i.e., warn people about misinformation before they are exposed to it) or "debunk" it (i.e., tell people that they encountered misinformation after the fact). In particular, there's been some concern that refutations that repeat the misinformation before refuting it might actually reinforce the misinformation.

A new study by Edelijn et al. (2024) suggests we shouldn't worry about when a refutation happens, because prebunking and debunking are about equally effective (i.e., the order in which the misinformation and the refutation are presented does not matter). What seems to matter is that the refutation include an explicit link to the misinformation. So, in support of interventions like refutation texts, we should be pre- AND debunking misinformation. If AI-generated misinformation is flooding social media, then we need to flood it with explicit refutations as well.