Expertise matters. And AI isn't an expert. On anything.
Research on expertise makes me skeptical of ChatGPT, etc.
Yes, I know. The hype cycle for AI tools such as ChatGPT keeps accelerating, with no sign of abating any time soon. Nonetheless, I am firmly in the “AI is a tool people can use in their work” camp, as opposed to the “AI will replace people at work” camp. Much of my skepticism about AI comes from research on experts and expertise.
Experts are built different. They don’t just know more than novices - their knowledge is structured differently, and they use that knowledge in different ways. As Boshuizen and colleagues (2020) have discussed, experts build complex knowledge structures and scripts that enable them to solve problems better and faster than both novices and people who have had significant training in the field. The performance difference is large - and not just in degree but in kind. Experts perform differently and better than other people, and differently and better than AI.
And developing expert knowledge isn’t a linear process of accumulating more or better information. It involves a complex, dynamic, ongoing set of interactions among deliberate practice, self-regulation, reflection, coaching, and performance. Given that large language models and other types of AI are opaque to us (i.e., we don’t know how or why they produce their output; we can only evaluate whether it is on target or not), it is unlikely they are building expert knowledge or scripts anything like the ones experts build. It therefore seems even less likely that they can produce the reliably superior performance that experts produce.
In sum, I’m okay with an expert using AI in their work, but I’m very much not okay with an AI doing an expert’s work for them.