In summary, no thanks
One of the most commonly promoted functions of generative artificial intelligence tools like ChatGPT is summarising long pieces of text. Ryan Broderick recently wrote in Garbage Day:
The assumption that people want summaries of information when they receive news, is also a funny one. It seems to come around every four-to-eight years. Typically when Democrats are in the White House, I’ve noticed. This was the impetus behind Vox, for instance, with its big initial claims of inventing “explainer journalism,” which quickly just devolved into blogging, again. My own assumption here is that this is a byproduct of CEO brain. “I can’t possibly read all of the information I need to pretend to care about to run my company, so other people must treat information as a nuisance to be fixed, as well.” But, once again, that is not really the case. The internet has turned the consumption of information into its own form of entertainment — or in the context of conspiracy theories, madness.
This is something that I’ve often thought about, too.
There are some scenarios where an AI summary can be useful. Occasionally, I’m copied into long email chains with a vague subject line and have to spend time scrolling up and down to orientate myself as to what the conversation is about. I sometimes skimp on that step and end up missing the point. A couple of auto-generated sentences saying ‘this is a series of emails discussing x, with the goal of producing y, looking for input on aspect z’ can be a godsend.1
And yet, the products that advertising most frequently pushes at me are services offering summaries of things that I don’t think benefit from summarisation. The commonest example is books. Short books distilling the ‘key messages’ of longer books are clearly popular, and pre-date the web, let alone generative AI, but I’ve never really understood the point. The format assumes that books are about imparting a series of facts; in my experience, most are actually about encouraging readers to think differently about subjects. Even in the simplest airport bookshop management paperbacks, the identification of key messages is highly subjective.
With human-authored summaries, we can at least have a sense of whether we trust the subjective judgements of the summariser, but this becomes much trickier with the black box of artificial intelligence. Summarisation usually involves value judgements, and they are not easily ‘outsourced’ to AI. This is a problem when summarising books, but even more so in summarising news.
Alan Rusbridger recently commented on his podcast that an experiment using AI summaries to generate key points to draw people into reading Prospect articles had impressed both him and the magazine’s writers. But that’s a different goal from relying on the summary instead of reading the article. That would be a bit like relying on a headline rather than reading the full story… admittedly common behaviour, but not one whose proliferation benefits humanity.
It strikes me as unfortunate that we’re building tools—or at least promoting the ability of tools—to allow people to engage more superficially with subjects than they already do. It’s not like humanity is short of examples of the downsides of people engaging only with the headlines and glossing over the detail.
Fortunately, like Broderick, I doubt that’s what people are actually seeking; I think people are more interested in ‘deep dives’. Broderick attributes the miscalculation to ‘CEO brain’; I’d attribute it, at least equally, to ‘social media brain’. The ‘BREAKING’ and ‘HUGE IF TRUE’ style of sharing information in bitesize chunks on social media might suggest that people like consuming information in vastly abbreviated forms, but I don’t buy it. I think those interactions are much more about socialisation than about assessing information.
Generative AI can also work perfectly well in the other direction, recommending books and sources that can help people to explore a topic more deeply. I think this would make the more interesting tool: not ‘summarise this webpage’, but ‘recommend another three web pages which explore this subject in more detail’.
The obvious difficulty in making such a tool work is the rabbit-hole phenomenon, much-discussed in the context of the YouTube algorithm. How do you imbue such a system with the sensitivity and awareness to avoid pulling people into ever-more extreme versions of conspiracy theories, for example?
It’s a difficult problem, but one that equally needs solving if summarisation engines are to work in a reliable and trustworthy way. Let’s hope someone can tackle it.
1. Experience over many years has taught me to begin my replies to email chains like that with a couple of sentences of my own, starting ‘I understand from the below that…’. It’s a technique which can nip misunderstandings in the bud.
The image at the top of this post was generated by DALL·E 3.