Images generated by AI
Concern seems to be escalating about the ability of artificial intelligence tools to manipulate or generate misleading images and videos, and the potential for those creations to spread misinformation. Alarming scenarios built on such fabrications are being reported with increasing frequency.
Nevertheless, I’m reasonably relaxed about this. I believe—perhaps naively—that we are merely witnessing the latest stage of a long period of refining our relationship with visual media.
Our relationship with photos and videos is still quite new, and constantly evolving. The principle of ‘seeing is believing’, in photographic terms, is a recent idea—and has probably never quite been true.
In the fairly recent past, photos and videos were a rare commodity. Television broadcasting of the proceedings in the House of Commons began only within my lifetime. Easy access to digital photography, even among professionals, is a more recent development still. Full-colour newspaper supplements, brimming with photographs, also arrived during my lifetime; in fact, I was at university before the first full-colour newspapers appeared.
The rise of Photoshop made us doubt the veracity of photographs we were seeing. The Guardian initially banned digitally altered photographs, and amended its Editorial Code in 2011 to allow them only if explicitly labelled as such.
The advent of mobile photography and social media revolutionised our relationship with photography all over again, as Nathan Jurgenson has compellingly argued. Jurgenson’s work is especially apposite here, in fact, as he argues that photographs rarely stand alone these days, but form part of larger conversations. A faked image has less impact in that context.
There’s an argument that in rapidly developing situations, fake photos can have an outsized impact. But that is only true while we remain in the short-lived and rapidly eroding cultural moment in which we assume photographs are accurate representations of events.
Increasingly convincing fake imagery simply teaches us to be more sceptical of what we think we are seeing. It’s another step down a road leading us away from a very recent, and very temporary, and very limited assumption that imagery can tell an accurate story.
Just as we’ve all learned over millennia to be sceptical of hearsay in the heat of the moment, so we’ll come to be sceptical that photos show a true picture.
We’re in a uniquely precarious cultural moment right now, while we’re still coming to terms with this change. But long term, this all simply leads us to fall back on the things we’ve always fallen back on.
We trust the sources we trust because we judge them to be trustworthy. Shortcuts to that conclusion—like the implied authority of appearing on television or having a ‘verified’ status on social media—unfailingly turn out to be temporary and misleading. We don’t believe news stories solely because of the pictures—there often aren’t pictures.
Part of being human is working out who and what is trustworthy. We get it wrong sometimes, and that’s part of being human too. But it’s a skill honed over the whole of human evolution, and we’re pretty darn good at it—photos or no photos.
The images in this post were generated by Midjourney.