Bullshit engine
Large language models can be phenomenal bullshit engines. The difficulty here is that the bullshit is so terribly plausible. We have seen falsehoods before, and errors, and goodness knows we have seen fluent bluffers. But this? This is something new.
It made me reflect on the types of, erm, lexical nonsense I come across in my day-to-day life.
The kind I find most irritating by far is deliberately obfuscatory language. I recently received an email referring to a ‘pathogen agnostic enabling technology’ and wondered why on Earth someone would write that: I didn’t understand what it meant, I’m fairly certain most other readers didn’t either, and I have a strong suspicion that the author would also be unable to define it concisely. They were hiding their lack of understanding—or perhaps their inability to describe something clearly and concisely—behind a jargon phrase.
Yet, from a reader’s perspective, this sort of word salad has one advantage: it’s clearly visible, telegraphing a lack of understanding up front.
Perhaps I really ought to be more exercised about plausible rubbish: the sort of stuff whose clarity of expression lulls one into a false sense of assurance, from which the logical leaps are harder to spot. That seems far more insidious.
The image at the top of this post was generated by DALL·E 3.
This post was filed under: Miscellaneous, FT Weekend, Tim Harford.