People who act like large language models

Geoff Mulgan on AI and bullshitters:

Since the arrival of ChatGPT there has been much debate about how AI can replace humans or work with them. Here I discuss a slightly different phenomenon which I increasingly notice, and probably wouldn’t have without the presence of generative AI: people who act rather like large language models. The phenomenon isn’t new. It’s just that we now have a new way of understanding it. As I show, it’s quite malign, particularly in academia.

The strength and weakness of ChatGPT is that it can quickly assemble a plausible answer to a question, drawing on materials out in the world. It sucks them in, synthesises them, and mimics, sometimes quite convincingly, someone knowledgeable talking about a subject.

Sound like postmodernists much? This is where I came in. B&W started as a jaundiced look at that kind of empty, pretentious blather, which was a mimicry of thought rather than the real thing.

Lots of people now use ChatGPT to help them with first drafts of articles or talks. But I’m more interested in the people who act like an LLM even if they don’t actually use one. These are the smart people who absorb ways of talking and framing things and become adept at sounding convincing.

And, more than convincing, deep. Over the heads of plebeians like us. See: Judith Butler, passim.

The classic example in academia was Alan Sokal’s piece ‘Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity’. The article was submitted to and accepted by the journal Social Text. The piece was deliberately written to sound plausible, at least to the academic community served by the journal. Yet it was in fact wholly meaningless. It was a perfect example of vapid mimicry and was bitterly resented by the academic community it mocked.

Sokal’s stunt was an extreme example. But what he was mocking is not so exceptional. Many people in many fields, including quite a few in academia, also act rather like a ChatGPT, particularly in academic disciplines that don’t do much empirical work or work with facts and testable hypotheses; the more they are just commenting on texts (as a surprising proportion of the social sciences and humanities do), the more such foggy talk is a risk.

I don’t object to commenting on texts as such. There’s plenty of brilliant commenting on texts, which is enlightening and depth-excavating. I’m a humanities type and I do see value in thinking and talking about literature. What I detest is the attempt to make it sound artificially “difficult” and not for the mere plebs.

But one advantage of age and experience is that I now realise that my not understanding what someone is saying is sometimes a sign that they don’t know what they’re talking about, and that they are essentially acting like an LLM. This would become apparent if they were ever interviewed in the way that politicians are sometimes interviewed in the media, with forensic questioning: ‘What do you actually mean by x? What’s an example of what you just said? What would your best critic say about your comments?’.

That kind of sums up what I do every day, especially the “what do you actually mean” bit. The more people do that the better, in my view.
