Marion Fourcade and Henry Farrell have written a short but fascinating piece in The Economist about how large language models (LLMs) like ChatGPT might influence work rituals.
> Organisations couldn’t work without rituals. When you write a reference letter for a former colleague or give or get a tchotchke on Employee Appreciation Day, you are enacting a ceremony, reinforcing the foundations of a world in which everyone knows the rules and expects them to be observed—even if you sometimes secretly roll your eyes. Rituals also lay the paper and electronic trails through which organisations keep track of things.
>
> Organisational ceremonies, such as the annual performance evaluations that can lead to employees being promoted or fired, can be carried out far more quickly and easily with LLMs. All the manager has to do is fire up ChatGPT, enter a brief prompt with some cut-and-pasted data, and voilà! Tweak it a little, and an hour’s work is done in seconds. The efficiency gains could be remarkable.
>
> Exactly because LLMs are mindless, they might enact organisational rituals more efficiently, and sometimes more compellingly, than curious and probing humans ever could. For just the same reason, they can divorce ceremony from thoughtfulness, and judgment from knowledge.
I don’t think any of this is exactly surprising, but the way of thinking about it—through the lens of rituals—was new to me.
Rituals form a critical part of organisational life, even if we don’t always notice them. In health and higher education, rituals around topics like sustainability, inclusion, or diversity set the tone for how organisations present themselves. Yet these rituals can easily become hollow.
In my own experience, many organisations have rituals which are already divorced from their original intention. For instance, consider performance reviews. We all know the drill: managers gather feedback, write it up, and then sit through slightly awkward meetings where everyone knows what’s coming. This ritual started with the idea of providing useful feedback, promoting development, and assessing progress. But it has, in many places, become a tick-box exercise. Managers rush through the task, focus on compliance rather than insight, and employees nod along, knowing that what’s written is often more about playing politics than providing meaningful development.
Now, throw an LLM into the mix. For managers juggling a hundred other tasks, it’s tempting to get ChatGPT to churn out those reviews in seconds—especially if the task has already become perfunctory. But the consequence is that the process, already watered down, becomes even more superficial. The words become smoother, and probably more aligned with corporate standards, but they’re ultimately just noise—an efficient effluent, a downgrading of a ritual that’s already lost most of its meaning.
The same goes for things like corporate values. Having pronouns or a phonetic spelling in one’s email signature began with a genuine desire to foster inclusivity. These days, their presence—or absence—often ends up being read as a proxy signal for other things, without deeper thought. It’s like the phrase ‘consider the environment before printing this email,’ tacked onto the end of countless emails. It almost certainly does nothing for the environment, but it signals that the sender aligns with certain values.
What’s fascinating here is how easily LLMs could amplify these rituals. They can craft the perfect corporate spiel on inclusion, diversity, or sustainability, and they’ll do it without any sense of irony or understanding. A well-prompted LLM could pump out a flawless internal memo about the company’s dedication to [insert value here] without anyone needing to reflect on whether the company is actually doing anything meaningful about it.
It hadn’t previously occurred to me that LLMs have the potential to reinforce this effect by parroting the corporate lines to perfection, with absolutely no understanding or judgment behind them. Prompt an LLM to write an annual review for an employee in a way that aligns with corporate values, and it will do so—with absolutely no ability to thoughtfully probe whether or not the work actually demonstrates those things.
Of course, LLMs have a place in business, and can be transformative when thoughtfully applied. But the drive towards efficiency is not always thoughtful—even without LLMs involved, we can all think of times when processes have been made more efficient without proper regard to whether they remain effective.
Perhaps that’s where the real work lies: recognising where rituals, human or automated, stop being useful and start being obstacles to real progress. If we’re not careful, we might find LLMs reshaping deeper organisational habits and values in ways we don’t anticipate.
The image at the top of this post was generated by DALL·E 3.