ChefGPT

Wendy and I like making meals in our slow cooker, not least because it means they are hot and ready when we get home from work. We’ve got a few books of slow cooker recipes, but we’ve always struggled with two aspects:

  1. Recipes typically require 6-8 hours of cooking, whereas we're usually out of the house for around 12 hours. We could use a timer to delay the start of cooking, but having ingredients sitting at room temperature for 4-6 hours before cooking begins seems risky, so we tended to just cook for the full 12 hours and often ended up with overcooked food.
  2. Possibly because of the above, we found that the food we made often ended up being quite watery and bland.

In an end-of-year round-up somewhere—I can’t remember where, but I suspect it may have been in The Financial Times—I read a suggestion that 2024 would be the year of restaurants promoting their use of recipes generated by artificial intelligence. I don’t believe this, but it inspired me to ask ChatGPT about slow cooker recipes.

Over the course of a conversation where I set out my requirements, ChatGPT generated a recipe for a chicken curry. I asked many follow-up questions about things like substitute ingredients, the need to do most of the prep the night before, and my strong preference for avoiding wateriness, leading to ChatGPT iterating on the recipe.

Earlier this week, we made the curry, our first dinner generated by artificial intelligence. It turned out beautifully, far better than our versions of the book recipes.

It’s an excellent example of something ChatGPT does well: explaining simple things to clueless people. Wendy and I are hardly expert cooks; being able to ask for very simple clarifications and iterations provides a much better experience than trying to work it out ourselves from a static list of instructions.

We’ll probably use the same process again to expand our repertoire.


This isn’t a cookery blog, but if you’re interested, this is the current version of our ChatGPT chicken curry recipe. I think this is the first recipe I’ve posted in the twenty years I’ve been blogging, despite dedicating a chunk of my academic life to the topic!

Ingredients

  • For the marinade:
    • 500g boneless, skinless chicken thighs
    • 75g Greek-style yoghurt
    • 1 tbsp tikka masala paste
    • 1.5 tbsp bottled lemon juice
  • For the curry:
    • 150g frozen diced onions
    • 2 tsp jarred chopped garlic
    • 2 tbsp ginger paste
    • 1 tbsp tomato paste
    • 1/3 tsp ground cumin
    • 1/3 tsp ground coriander
    • 1/3 tsp paprika
    • 1/3 tsp turmeric
    • 1/3 tsp garam masala
    • 1 Knorr chicken stock pot
    • 60ml single cream (to add at the end)

Method

  • Marinate the chicken:
    • Combine the Greek-style yoghurt, tikka masala paste, and lemon juice in a bowl.
    • Add the chicken thighs, ensuring they’re well coated.
    • Refrigerate overnight.
  • Prepare the slow cooker:
    • Place the frozen diced onions, jarred chopped garlic, and ginger paste in the slow cooker.
    • Add the marinated chicken along with any leftover marinade.
    • Spread the tomato paste over the chicken.
    • Sprinkle with the spices (cumin, coriander, paprika, turmeric, and garam masala).
    • Add the chicken stock pot.
  • Cook:
    • Cover and set your slow cooker to low. Cook for 12 hours.
  • Finishing Touches:
    • Stir in 60ml of single cream about 10-15 minutes before serving.
  • Serve:
    • Serve the Chicken Tikka Masala with rice, naan bread, or your preferred sides.

The image at the top of this post was generated by DALL·E 3. A better blogger would have taken a photo of the meal, but I was too hungry.

This post was filed under: Post-a-day 2023.

Someone else’s thoughts on artificial intelligence

Last week, I reflected that I’d underestimated the potential of large language models by basing my opinion on the early versions of ChatGPT. Interestingly, Casey Newton has talked in the latest edition of Platformer about making the same mistake.

I had recently subscribed to ChatGPT Plus at the encouragement of a friend who had found it to be an excellent tutor in biology. A few days later, I found myself embarrassed: what I thought I knew about the state of the art had essentially been frozen a year ago when ChatGPT was first released. Only by using the updated model did I see how much better it performed at tasks involving reasoning and explanation.

I told the researcher I was surprised by how quickly my knowledge had gone out of date. Now that I had the more powerful model, the disruptive potential of large language models seemed much more tangible to me. 

The researcher nodded. “You can fast forward through time by spending money,” she said.

Naturally, Casey’s thoughts are more extensive and more fully formed than my own, and the whole piece is well worth reading.

This post was filed under: Post-a-day 2023, Technology.

AI is not a single entity

There’s a typically brilliant piece by Liam Shaw on the LRB blog right now about the recent use of an AI tool to assist in the discovery of an antibiotic: abaucin.

There is much important detail in Shaw’s blog that was missing from most of the media coverage of this topic. Most crucially from a health perspective, this antibiotic is likely to be useful only in topical applications (onto the skin), whereas the majority of harm from the single species it treats—Acinetobacter baumannii—comes from sepsis. It is a significant discovery, but mostly in the sense of being a staging post on the long road of development, rather than an end in itself.

Shaw is also specific about the techniques used, and their limitations:

As well as powerful neural networks, the machine learning model depends on the existence of carefully collected data from thousands of experiments. It’s still a vast screening project, just not as vast as it would be without the AI component: it uses the data to find the best ‘ready to use’ molecule from the available options.

The discovery of abaucin shows that AI is helpful for the early stage of winnowing down the vast space of chemical possibility, but there’s still a lot to do from that point onwards.
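
To make Shaw’s description concrete: the approach is, in outline, a model trained on existing experimental data and then used to rank untested molecules, so that only the most promising few go forward to the lab. Here is a minimal sketch of that ‘winnowing’ pattern in Python, using scikit-learn and entirely made-up data rather than the neural networks and chemical fingerprints of the actual study, purely to illustrate the shape of the technique:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in training set: 2,000 'molecules' with activity labels from
# wet-lab experiments (1 = inhibited bacterial growth). In the real
# study, the inputs were chemically meaningful molecular features.
X_train = rng.random((2000, 64))
y_train = rng.integers(0, 2, size=2000)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A much larger library of untested candidate molecules.
X_candidates = rng.random((100000, 64))

# Score every candidate and keep only the top 100 for laboratory
# testing: this is the winnowing step that makes the screen tractable.
scores = model.predict_proba(X_candidates)[:, 1]
top_candidates = np.argsort(scores)[::-1][:100]
print("Most promising candidate indices:", top_candidates[:5])

Everything in that sketch, from the model choice to the numbers, is illustrative; the point is simply that the model narrows the search, and the experiments still have to happen.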

Shaw’s specificity is useful because it feels like we are in a moment where ‘AI’ is used to refer to myriad things, and using the term on its own is not very helpful. It feels akin to the early 2000s, when a whole group of technologies and applications were referred to as ‘the Internet’ (always capitalised) as though they were a single entity.

It’s notable that the abaucin study didn’t refer even once to ‘artificial intelligence,’1 but used the somewhat more specific term ‘deep learning.’

When so many technologies, from large language models to recommendation engines to deep learning algorithms to theoretical artificial general intelligence systems, are all condensed into two letters—AI—it doesn’t aid understanding. I’ve spoken to people this week who have interpreted the headlines around this to mean that something akin to ChatGPT has synthesised a new antibiotic on request—an understandable misunderstanding.

When scientists are warning about AI threatening the future of humanity, they aren’t talking about chatbots—yet you’d be hard-pressed to discern that from breathless headlines that refer to anything and everything as simply ‘AI’. In just a handful of days, even the well-respected BBC News website has published articles with headlines referencing ‘AI’ about drone aircraft, machine learning, delivery robots and image generation: all entirely different applications of a very broad class of technology.

If we’re to have sensible conversations about the ethics and regulation of AI technologies, I think there’s much to be done to try to help the public understand what exactly is being discussed. That ought to be the job of the news. Currently, it feels like we’re stuck in a cycle of labelling things as ‘AI’ as a strategy to garner attention, leading to conflated ideas and complete misunderstanding.


The image at the top of this post was generated by Midjourney, whose idea of the appearance of a human brain seems sketchier than I might have imagined.


  1. Though, in fairness, the press release did.

This post was filed under: Media, News and Comment, Post-a-day 2023.

Images generated by AI

Concern seems to be escalating about the ability of artificial intelligence tools to manipulate or generate misleading images and videos, and the potential for those creations to spread misinformation. Reports of potentially alarming scenarios based on this sort of material are emerging with increasing frequency.

Nevertheless, I’m reasonably relaxed about this. I believe—perhaps naively—that we are merely witnessing the latest stage of a long period of refining our relationship with visual media.

Our relationship with photos and videos remains quite new, and is constantly evolving. The principle of ‘seeing is believing’, in photographic terms, is a fairly new concept—and has probably never quite been true.

In the fairly recent past, photos and videos were a rare commodity. Television broadcasting of the proceedings in the House of Commons began only within my lifetime. Easy access to digital photography, including among professionals, is an even more recent development. Full-colour newspaper supplements, brimming with photographs, also materialised during my lifespan. In fact, I was attending university before the first full-colour newspapers appeared.

The rise of Photoshop made us doubt the veracity of photographs we were seeing. The Guardian initially banned digitally altered photographs, and amended its Editorial Code in 2011 to allow them only if explicitly labelled as such.

The advent of mobile photography and social media revolutionised our relationship with photography all over again, as Nathan Jurgenson has compellingly argued. Jurgenson’s work is especially apposite here, in fact, as he argues that photographs rarely stand alone these days, but form part of larger conversations. A faked image has less impact in that context.

There’s an argument that in rapidly developing situations, fake photos can have an outsized impact. But that is only true while we are in the short-lived and rapidly eroding cultural moment when we assume photographs are accurate representations of events.

Increasingly convincing fake imagery simply teaches us to be more sceptical of what we think we are seeing. It’s another step down a road leading us away from a very recent, and very temporary, and very limited assumption that imagery can tell an accurate story.

Just as we’ve all learned over millennia to be sceptical of hearsay in the heat of the moment, so we’ll come to be sceptical that photos show a true picture.

We’re in a uniquely precarious cultural moment right now, while we’re still coming to terms with this change. But long term, this all simply leads us to fall back on the things we’ve always fallen back on.

We trust the sources we trust because we judge them to be trustworthy. Shortcuts to that conclusion—like the implied authority of appearing on television or having a ‘verified’ status on social media—unfailingly turn out to be temporary and misleading. We don’t believe news stories solely because of the pictures—there often aren’t pictures.

Part of being human is working out who and what is trustworthy. We get it wrong sometimes, and that’s part of being human too. But it’s a skill honed over the whole of evolution, and we’re pretty darn good at it—photos or no photos.


The images in this post were generated by Midjourney.

This post was filed under: Post-a-day 2023, Technology.

Will A.I. make us write more clearly?

In Platformer last Tuesday, Casey Newton reported on CNET’s use of artificial intelligence tools to publish news stories.

Newton’s piece led me to Futurism, which pointed out serious errors in CNET’s AI-generated prose. Futurism argued that the tone of the piece was significant in disguising the errors:

The AI is writing with the panache of a knowledgeable financial advisor. But as a human expert would know, it’s making another ignorant mistake.

If I were to develop an AI model, I would probably start with the writing style and hope the factual content would come later… and really, that’s quite human behaviour.

Technical writing often includes numerous technical terms. Despite this, good technical writing remains clear. It is as simple as possible, concise and unambiguous.

Good writing is a skill. It is not something that is easy to master. When reading Jeanette Winterson’s 12 Bytes last year, I was particularly taken with her plea that scientists ought to work closely with writers to ensure that their ideas were communicated with precision and, perhaps, beauty.

Yet, when people are trying to imitate this style of writing, perhaps when starting out, they frequently do it badly. They mistake obfuscatory terms for technical ones. When I am marking scientific assignments, words like ‘whilst’ or ‘utilise’ are red flags for this: these are not words people typically use in everyday life, and they can signify that someone is intentionally trying to make their writing sound more complicated than necessary. This is the antithesis of communicating complex ideas as simply as possible.

Good students—and good writers—grow out of this. But some don’t. Some people just slip into using ridiculous language out of habit.

Others—stereotypically in the corporate world or the Civil Service—intentionally use obfuscatory language to hide their own confusion or to avoid pinning down a particular meaning. Why say something plainly if it might turn out to be plainly wrong? Why give a hard deadline when you can just ‘work at pace’? ‘We’re going as fast as we can’ doesn’t have quite the same sense of vague authority, and also might turn out to be provably false.1

When I reflect on my own professional practice, it occurs to me that when something is written in an obfuscatory style, I tend to assume it is, in the Harry Frankfurt sense, bullshit. This is not always fair, but it is my automatic response, and I find it difficult to overcome.

Let’s imagine, for example, that a chief executive talks about their organisation having an ‘integral role’ in ‘tackling incidents’ and providing ‘world-leading insights.’ I can’t help but automatically assume that this is bullshit. It gives the impression that the chief executive’s purpose is not really to inform, but perhaps to attempt to impress blindly.

None of the quoted words is a technical term, and none of them can be interpreted as meaning anything specific. These phrases are empty, devoid of meaning.

But my automatic assumption that the whole text is bullshit may be false, and is really no more helpful than a response of ‘ooh, this person is using clever words and so really knows what they’re talking about.’

I once worked with someone who was completely ruthless in challenging this sort of thing. I remember one particularly charged discussion where the feedback to one unfortunate communications officer was, ‘Look, if you want me to include any of this, then bring it back when you’ve translated it. I speak English, not McKinsey.’

You may only be able to get away with that sort of challenge when you reach a certain level of organisational seniority; I would argue that it then becomes something akin to a prerequisite for good management.

If AI mimics this style of writing while making fundamental errors, then perhaps readers will come round to my way of thinking. Perhaps the assumption that obfuscation and bullshit are closely related will become more commonly entrenched.

If so, this could have the wonderful side effect of spurring people to put extra effort into writing concisely and precisely, lest their work be automatically assumed to be an AI output riddled with errors.

I can hope.


  1. The shot at the Civil Service is a bit cheap. For all I whinge about gov.uk from time to time, they do have a top-notch style guide which includes ‘words to avoid’ for exactly these reasons. Unfortunately, it is not always followed.

The picture at the top of this post is an AI-generated image for the prompt ‘a robot talking nonsense, digital art’ created by OpenAI’s DALL-E 2.

This post was filed under: Post-a-day 2023.



