It’s been five months since Alison Gopnik, Cosma Shalizi, James Evans, and I wrote to argue that we should not think of Large Language Models (LLMs) as “intelligent, autonomous agents” paving the way to Artificial General Intelligence (AGI), but as cultural and social technologies. In the interim, these models have certainly improved on various metrics. Yet even Sam Altman has started soft-pedaling the AGI talk. I repeat. Even Sam Altman.
So what does it mean to argue that LLMs are cultural (and social) technologies? This perspective pushes Singularity thinking to one side, so that changes to human culture and society take center stage. But that, obviously, is still too broad to be particularly useful. We need more specific ways of thinking – and usefully disagreeing – about the kinds of consequences that LLMs may have.