Is AI the smile on a dog?

[Image: A robot dog, smiling, in a field of flowers.]

If you're around my age, you probably remember the song "What I Am" by Edie Brickell and the New Bohemians.

That song has some often-quoted lyrics, like these:

"Philosophy is the talk on a cereal box. Religion is the smile on a dog."

Now, I always took the line "religion is the smile on a dog" as a critique of religion. Dogs don't smile the way humans smile (i.e., to signal that they are happy or friendly); that's just the way dogs' mouths are shaped. We think and act as if the dog is smiling, but it really isn't - the smile is something we "put" on the dog. The lyric, then, suggests religion is similar: something humans "put" on phenomena in the world. (That's just my reading of the lyric - I'm not personally claiming that religion is the smile on a dog.) A quick Google search didn't turn up any interviews where Edie Brickell said that's what she meant, but let's go with that interpretation for a minute. Because there's a new article out by Messeri and Crockett (2024) on the use of Artificial Intelligence (AI) in scientific research that suggests humans might be making the "smile on a dog" mistake with AI.

The authors describe four "visions" for how AI can help people conduct research: AI as Oracle, which can help researchers digest the vast amount of literature out there; AI as Surrogate, which can take the place of actual human subjects in research; AI as Quant, which can analyze datasets too large for humans to handle; and AI as Arbiter, which can review manuscripts for publication. Each of these "personas" poses threats to scientific knowledge generation, in part because humans tend to treat AI as if it can think and do these things with intentionality (i.e., as if it is "intelligent") - thus, like the "smile on a dog."

This is a really thought-provoking article that I encourage you to read - I can't do it justice here. It covers a lot of ground, from clear explanations of types of AI to discussions of social epistemology to how cognitive biases might affect, and be affected by, humans' use of AI.

Importantly, the authors are not AI-doomsayers, nor are they AI-evangelists. Instead, I see them as raising important concerns about AI while also stressing that we should not surrender our agency when using AI. We should use and critique AI like any other research tool. Otherwise, our science might become mere "talk on a cereal box" as Edie sang.