The end of shared reality
Last week, a colleague sidled up to me at my desk: ‘Simon, do you think Kate Middleton is dead?’
Until that point, I’d been only vaguely aware that theories were circulating about the status of the Princess of Wales. I’d seen passing mentions of the discussion, but rolled my eyes and wondered who had time for such things. After all, she’s been unwell, indications from the start were that she wouldn’t return to public life until after Easter, and it’s not Easter yet.
It turns out I was simply out of the loop: it has apparently been the hot topic of conversation for weeks now.
For a little while now, I’ve been harbouring a contrarian theory about images generated by artificial intelligence. It’s widely assumed that these will cause chaos as people struggle to work out what’s real.
I’ve been unconvinced by those arguments. In my mind, there are two groups of people:
- Those who get their news from social media. These people often seem to be surprisingly gullible and develop quite peculiar beliefs. They are vulnerable to being conned by fake imagery, but they’re already conned by any number of weird theories spread by other means. The addition of fake images doesn’t change much.
- Those who get their news from professional outfits. It is the job of professional outfits to know the provenance of images they share, and so—by and large—they’re unlikely to be fooled for long by fake images.
I’ve long felt that AI imagery is unlikely to cause much movement between these groups, and therefore unlikely to have much impact on the news or how it is consumed.
On Sunday, Kensington Palace shared a picture of the Princess and her children to mark Mother’s Day. When professional outfits assessed the image, it was found to have been doctored, and was withdrawn from circulation.
To say this caused a furore is a substantial understatement. In his insightful article, Charlie Warzel shared this reflection:
Adobe Photoshop, the likely culprit of any supposed “manipulation” in the royal portrait, has been around for more than three decades. And although the tools are getting considerably better, the bigger change is cultural. The royal-photo debacle is merely a microcosm of our current moment, where trust in both governing institutions and gatekeeping organizations such as the mainstream press is low. This sensation has been building for some time and was exacerbated by the corrosive political lies of the Trump era.
The affair has made me reconsider my views on the threat of AI imagery. Unlike Warzel, I don’t worry excessively about the mainstream press’s ability to separate fact from fiction; I worry more about its ability to focus on the issues that matter.
A photoshopped image has dominated the news agenda: it isn’t difficult to imagine arguments about AI images doing the same in the run-up to an election, drowning out discussion of competing policies.
I still think I’m right that professional news organisations can sort fact from fiction, but I’d underestimated the likelihood of the process of dispelling a myth becoming the story, and of the debate being framed by hand-wringing over how to deal with this stuff.
Fakery has proven to be more disruptive than I imagined it could be.