
ChefGPT

Wendy and I like making meals in our slow cooker, not least because it means they are hot and ready when we get home from work. We’ve got a few books of slow cooker recipes, but we’ve always struggled with two aspects:

  1. Recipes typically require 6-8 hours of cooking, whereas we’re typically out of the house for around 12 hours. We could use a timer to delay the start of cooking, but having ingredients sitting at room temperature for 4-6 hours before cooking begins seems risky. We often ended up with overcooked food.
  2. Possibly because of the above, we found that the food we made often ended up being quite watery and bland.

In an end-of-year round-up somewhere—I can’t remember where, but I suspect it may have been in The Financial Times—I read a suggestion that 2024 would be the year of restaurants promoting their use of recipes generated by artificial intelligence. I don’t believe this, but it inspired me to ask ChatGPT about slow cooker recipes.

Over the course of a conversation where I set out my requirements, ChatGPT generated a recipe for a chicken curry. I asked many follow-up questions about things like substitute ingredients, the need to do most of the prep the night before, and my strong preference for avoiding wateriness, leading to ChatGPT iterating on the recipe.

Earlier this week, we made the curry, our first dinner generated by artificial intelligence. It turned out beautifully, far better than our versions of the book recipes.

It’s an excellent example of something ChatGPT does well: explaining simple things to clueless people. Wendy and I are hardly expert cooks; being able to ask for very simple clarifications and iterations provides a much better experience than trying to work it out ourselves from a static list of instructions.

We’ll probably use the same process again to expand our repertoire.


This isn’t a cookery blog, but if you’re interested, this is the current version of our ChatGPT chicken curry recipe. I think this is the first recipe I’ve posted in the twenty years I’ve been blogging, despite dedicating a chunk of my academic life to the topic!

Ingredients

  • For the marinade:
    • 500g boneless, skinless chicken thighs
    • 75g Greek-style yoghurt
    • 1 tbsp tikka masala paste
    • 1.5 tbsp bottled lemon juice
  • For the curry:
    • 150g frozen diced onions
    • 2 tsp jarred chopped garlic
    • 2 tbsp ginger paste
    • 1 tbsp tomato paste
    • 1/3 tsp ground cumin
    • 1/3 tsp ground coriander
    • 1/3 tsp paprika
    • 1/3 tsp turmeric
    • 1/3 tsp garam masala
    • 1 Knorr chicken stock pot
    • 60ml single cream (to add at the end)

Method

  • Marinate the chicken:
    • Combine the Greek-style yoghurt, tikka masala paste, and lemon juice in a bowl.
    • Add the chicken thighs, ensuring they’re well coated.
    • Refrigerate overnight.
  • Prepare the slow cooker:
    • Place the frozen diced onions, jarred chopped garlic, and ginger paste in the slow cooker.
    • Add the marinated chicken along with any leftover marinade.
    • Spread the tomato paste over the chicken.
    • Sprinkle with the spices (cumin, coriander, paprika, turmeric, and garam masala).
    • Add the chicken stock pot.
  • Cook:
    • Cover and set your slow cooker to low. Cook for 12 hours.
  • Finishing Touches:
    • Stir in 60ml of single cream about 10-15 minutes before serving.
  • Serve:
    • Serve the Chicken Tikka Masala with rice, naan bread, or your preferred sides.

The image at the top of this post was generated by DALL·E 3. A better blogger would have taken a photo of the meal, but I was too hungry.

This post was filed under: Post-a-day 2023.

We need to chat about ChatGPT

When I worked as a hospital doctor, I often had to dictate letters. I was terrible at this: it was far faster for me to type them myself than to dictate, but this wasn’t always possible.

Like most people, the way I express myself when I speak is quite distinct from the way I express myself when I write. Seemingly unlike most people, I found it impossibly challenging to adapt to composing written text through the medium of speech. The two feel like completely unconnected systems: it’s like trying to rub my tummy and pat my head.

Back in February, I felt a little sceptical about how useful people believed ChatGPT to be:

Much of the overhyped discussion about ChatGPT seems to be confusing this language model for something approaching artificial general intelligence.

This week, I’ve changed my mind: I think I’ve underestimated it. The difference has been the addition of voice chat to the model. The voices are impressively lifelike: they have intonation, they use filler words, they sometimes vocally trip, and all of that adds to the effect.

But more importantly: talking is different from seeing writing on a page. I don’t have the same basic expectation of factual accuracy in speech as in writing: I expect a bit of extemporisation, reference to half-remembered facts, a bit of loose interpretation here and there. In conversation, it’s natural to say “that doesn’t sound quite right” and to dig a little deeper into the background; in writing, it’s normal to expect the black-and-white content to be checked and accurate. In other words, the flaws in ChatGPT’s abilities seem to me to fit more naturally with speech than with writing.

I can well imagine phoning a version of ChatGPT to book a restaurant table and being entirely satisfied with the experience—and perhaps even uncertain as to whether I was talking to a person. Similarly for ordering takeaway. I may be writing this on an empty stomach, but there are all manner of customer service interactions that I could imagine using the voice version of ChatGPT for where the text version may seem a little—well—robotic.

I can also imagine it being useful for things like supported professional reflection. For example, only today, I’ve written a reflection about what I’d learned from a course I’d recently attended and how I’ve applied it in practice, as required by the medical regulator. I actually think that having a somewhat more probing voice chat on the same topic with a version of ChatGPT could stimulate deeper reflection and greater thought than simply writing down my own thoughts.

Essentially: I think I considerably underestimated the tool in February. ChatGPT is still a million miles away from artificial general intelligence, but I can now see much more clearly that large language models may have many more far-reaching applications than I’d been able to see back then. Chatting with ChatGPT has broadened my perspective.


The image at the top of this post was generated by Midjourney.

This post was filed under: Post-a-day 2023, Technology.

Reflections on ChatGPT

When I was seven years old, I distinctly remember believing that talc was the hardest substance on Earth. I obviously knew that talcum powder felt soft, but convinced myself that the individual bits of talc must be really, really hard.

I reached this conclusion after reading that talc was ‘number one’ on the Mohs hardness scale. Because this was referenced essentially in isolation, I understood ‘number one’ on a ‘hardness scale’ to mean ‘hardest.’ Obviously, I now know that it’s the softest mineral on a 10-point scale.

I’ve been ruminating on this over the past few days: my problem was essentially misinterpreting language as a result of not understanding the wider context. This is also a danger with language prediction models like ChatGPT: they too have ingested a load of text and are—in essence—quoting it back at people, while making basic statistical inferences along the way.

What these models try to do is simply predict the next word, based on statistical analysis of lots of prior text. They are a better-trained cousin of predictive text on your phone; they are not HAL 9000. They don’t have any understanding or general intelligence.
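That core task can be sketched with a toy bigram model: count which word follows which in some text, then predict the most frequent successor. This is a deliberately simplified stdlib-only illustration of the "predict the next word" idea, not how ChatGPT actually works (real models use neural networks trained on vast corpora), but the underlying question it answers is the same.

```python
# Toy next-word prediction: a bigram model built from word-pair counts.
# The corpus and function names here are illustrative, not from any library.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# For each word, count the words that follow it.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None
```

With this tiny corpus, `predict_next("cat")` returns `"sat"` and `predict_next("on")` returns `"the"`: the model has no idea what a cat is, it has only tallied which words tend to follow which. Scale the corpus and the statistics up enormously and you get fluent text with exactly the same absence of understanding.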

Exposing ChatGPT’s lack of understanding is trivial. For example, from my own messing about:

Which is taller: a 100cm giraffe or a 1m mouse?

A 100cm giraffe is taller than a 1m mouse. Giraffes are known for their long necks and are among the tallest mammals in the world, with adult males growing to an average height of about 5.5 to 6 meters (18 to 20 feet). In comparison, the average height of a mouse is much smaller, usually around 10 to 20 cm (4 to 8 inches).

So, a 100cm giraffe would certainly be taller than a 1m mouse!

And also this:

Would it cost more to buy 6 £1 eggs from Waitrose or Aldi?

As of my knowledge cut off in 2021, the cost of eggs can vary between different stores.

However, Aldi is known for offering lower prices on a wide range of products, including groceries, compared to more premium stores like Waitrose.

So, it is likely that it would cost less to buy 6 £1 eggs from Aldi than from Waitrose.

And also this:

What is hotter: a drawing of an ice cube, which is kept in the freezer, or a drawing of the sun, which is baking in an oven?

Even though both the drawing of an ice cube and the drawing of the sun are subjected to different environmental conditions, such as being kept in a freezer or being baked in an oven, they still cannot be described as hot or cold as they are just illustrations and do not have physical temperatures.

And also like this:

How many times does ‘times’ appear in this question?

The word ‘times’ appears only once in the question you just asked.

People like asking ChatGPT to write emails and speeches, and are impressed with results that are—at heart—just tweaked combined quotes from a wealth of written material. That’s fine, and those responses might well be impressively useful: ChatGPT has read many more Best Man speeches than any of us ever will, so has a better idea of what word ought to come next.

However, people can easily be fooled by these responses into assigning ChatGPT human-like intelligence. This might lead them to ask models like this to undertake real-world high-risk tasks without appropriate supervision. My background means that I automatically worry about their use in medicine. Some of these uses are obvious, like providing basic medical advice, and ChatGPT in particular has some safeguards around this.

Others are not obvious: people asking these models to summarise long medical documents, or to distil patient histories into problem lists. These are problematic because they lie on the border between ‘text analysis’—at which these models excel—and ‘real-world interpretation,’ at which they comprehensively suck, but can have a sheen of competence.

Much of the overhyped discussion about ChatGPT seems to be confusing this language model for something approaching artificial general intelligence. To me, it feels a lot like the advent of Siri and Alexa, with wild predictions that PCs would disappear and voice assistants would be everywhere. People really thought that their voice assistants understood their requests and had personalities—but the novelty has long since worn off. I fear we’ve got a lot more not-very-funny ‘I asked ChatGPT…’ anecdotes still to live through, though, just as we endured ‘I asked Alexa…’ anecdotes long after they stopped being funny or insightful.

Like voice assistants, language models are useful and will no doubt find a place in everyday use. And like voice assistants, that place won’t be nearly as central to our everyday experience as the early hype suggests, nor will it be quite where we currently expect it to be.

And research towards artificial general intelligence will proceed apace—but honestly, I think it’s a stretch to say even that ChatGPT is a significant staging post on that journey.


The picture at the top of this post is an AI-generated image for the prompt ‘digital art of a robot in a bathroom applying talcum powder’ created by OpenAI’s DALL-E 2.

This post was filed under: Post-a-day 2023, Technology.
