Will A.I. make us write more clearly?
In Platformer last Tuesday, Casey Newton reported on CNET’s use of artificial intelligence tools to publish news stories.
Newton’s piece led me to Futurism, which pointed out serious errors in CNET’s AI-generated prose. Futurism argued that the tone of the piece played a significant part in disguising the errors:
The AI is writing with the panache of a knowledgeable financial advisor. But as a human expert would know, it’s making another ignorant mistake.
If I were to develop an AI model, I would probably start with the writing style and hope the factual content would come later… and really, that’s quite human behaviour.
❧
Technical writing often includes numerous technical terms, yet good technical writing remains clear: as simple as possible, concise and unambiguous.
Good writing is a skill, and not an easy one to master. When reading Jeanette Winterson’s 12 Bytes last year, I was particularly taken with her plea that scientists ought to work closely with writers to ensure that their ideas were communicated with precision and, perhaps, beauty.
Yet when people try to imitate this style of writing, perhaps when starting out, they frequently do it badly. They mistake obfuscatory terms for technical ones. When I am marking scientific assignments, words like ‘whilst’ or ‘utilise’ are red flags for this: these are not words people typically use in everyday life, and they can signify that someone is intentionally trying to make their writing sound more complicated than necessary. This is the antithesis of communicating complex ideas as simply as possible.
Good students, and good writers, grow out of this. But some don’t: some people simply slip into using ridiculous language out of habit.
Others, stereotypically in the corporate world or the Civil Service, intentionally use obfuscatory language to hide their own confusion or to avoid pinning down a particular meaning. Why say something plainly if it might turn out to be plainly wrong? Why give a hard deadline when you can just ‘work at pace’? ‘We’re going as fast as we can’ doesn’t have quite the same sense of vague authority, and might also turn out to be provably false.[1]
❧
When I reflect on my own professional practice, it occurs to me that when something is written in an obfuscatory style, I tend to assume it is, in the Harry Frankfurt sense, bullshit. This is not always fair, but it is my automatic response, and I find it difficult to overcome.
Let’s imagine, for example, that a chief executive talks about their organisation having an ‘integral role’ in ‘tackling incidents’ and providing ‘world-leading insights’. I can’t help but automatically assume that this is bullshit. It gives the impression that the chief executive’s purpose is not really to inform, but to impress.
None of the quoted words is a technical term, and none of them can be interpreted as meaning anything specific. These phrases are empty, devoid of meaning.
But my automatic assumption that the whole text is bullshit may be false, and is really no more helpful than a response of ‘ooh, this person is using clever words and so really knows what they’re talking about.’
❧
I once worked with someone who was completely ruthless in challenging this sort of thing. I remember one particularly charged discussion where the feedback to one unfortunate communications officer was, ‘Look, if you want me to include any of this, then bring it back when you’ve translated it. I speak English, not McKinsey.’
You may only be able to get away with that sort of challenge when you reach a certain level of organisational seniority; I would argue that it then becomes something akin to a prerequisite for good management.
❧
If AI mimics this style of writing while making fundamental errors, then perhaps readers will come round to my way of thinking. Perhaps the assumption that obfuscation and bullshit are closely related will become more firmly entrenched.
If so, this could have the wonderful side effect of spurring people to put extra effort into writing concisely and precisely, lest their work be automatically assumed to be an AI output riddled with errors.
I can hope.
[1] The shot at the Civil Service is a bit cheap. For all I whinge about gov.uk from time to time, they do have a top-notch style guide which includes ‘words to avoid’ for exactly these reasons. Unfortunately, it is not always followed.
The picture at the top of this post is an AI-generated image for the prompt ‘a robot talking nonsense, digital art’ created by OpenAI’s DALL-E 2.