
Driverless cars, algorithms and the ethics of valuing human life

Today, RDM Group have unveiled the Lutz Pathfinder, a prototype driverless car. This is to be the first driverless car tested on public roads in Britain, after legislation was passed a few months ago to allow their operation.

Yet there are unresolved questions about the ethics underlying the algorithms which direct driverless cars; and, in particular, how they weigh the value of human life. Despite what other sources might say, these are not really new problems—but they are, nonetheless, interesting.

In this post, I’ll draw on some historical examples of similar problems, and see if they help us to make sense of this 21st-century quandary.

Back in 1948, the Cold War between the Eastern Bloc and the Western Bloc was beginning to heat up… or cool down, depending on how you look at it. Either way, the US Air Force wanted the capacity to blow the Soviet Union to smithereens, should it come to that. So it asked mathematician Edwin Paxson to use mathematical modelling to work out how best to co-ordinate a first nuclear strike.

Paxson and his team set about their work, considering almost half a million configurations of bombs and bombers. They took into account dozens of variables including countermeasures that might be deployed, targets that could be selected, and routes the bombers should fly.

In 1950, after months of work and billions of calculations, Paxson delivered his verdict in a now-famous report called Strategic Bombing Systems Analysis. His solution: fly a nuclear device to Russia in a cheap propeller plane, surrounded by a large number of similar decoy planes. The huge swarm would overwhelm Russia’s defensive capabilities and, although planes would be lost, the likelihood that the armed plane would be destroyed would be exceptionally low. One of his team described the strategy as “filling the Russian skies with empty bombers of only minor usefulness”.

[Cartoon by Amrit Tigga]

The response to this recommendation was not positive: Paxson was vilified. The Air Force responded with a combination of bewilderment and indignation: how could Paxson possibly suggest sending aircrews on a suicide mission in cheap, rickety planes? After all, war surely meant doing everything possible to protect allied servicemen while killing enemy servicemen, preferably using the leanest, meanest cutting-edge technology available.

But Paxson was vilified not because he gave the wrong answer: rather, he gave the right answer to the wrong question. His method was the way to cause the greatest amount of damage to the enemy for the lowest system cost—but it didn’t consider the value of human life.

Or, rather, it didn’t consider the value of the lives of the American aircrews. Nobody thought for a moment that it should consider the value of Soviet lives. Of course, had it considered all human life as equal, it is hard to imagine how a nuclear strike could ever have been proposed at all.

There’s a scene in the fourth season of The West Wing in which President Bartlet is considering intervening against genocide in Aaron Sorkin’s favourite fictional country, Equatorial Kundu. In frustration at his limited power to right the wrongs of the world, he muses

Why is a Kundunese life worth less to me than an American life?

Will Bailey, working as a speechwriter and having been in the show for a handful of episodes, gives the ballsy response

I don’t know, sir, but it is.

What is the value of human life?

This is a deeply philosophical question, but it’s also one that needs answering for practical purposes: without a value, we can’t make cost-effectiveness calculations to answer all sorts of important questions.

The US Environmental Protection Agency pegs the value of a life at about £6m. The airline industry uses a value of around £2m. The UK Department for Transport puts it at around £1m.

Most Western medical organisations, NICE included, price a year of life lived in full health at about £20-30,000. That’s a little tricksy, because, based on life expectancy, it means the UK value of a 20-year-old woman’s life is about £1.5m, versus about £1.1m for a 30-year-old man. It also means that a baby girl in East Dorset is worth about £360,000 more than a baby boy in Glasgow. And if you’ve a disability, your life is worth less than that of someone with equal life expectancy and no disability.
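
As a rough back-of-the-envelope sketch of where figures like those come from, the arithmetic is simply remaining life expectancy multiplied by a cost per healthy life-year. The numbers below are illustrative assumptions on my part (a £25,000 mid-point and round life expectancies), not official figures:

```python
# Back-of-the-envelope valuation: remaining life expectancy multiplied by a
# cost per healthy life-year. The life expectancies used here are illustrative
# assumptions for this example, not official figures.

COST_PER_LIFE_YEAR = 25_000  # rough mid-point of the £20-30,000 range

def life_value(remaining_years: float) -> float:
    """Crude 'value of a life': healthy life-years remaining times cost per year."""
    return remaining_years * COST_PER_LIFE_YEAR

# Assumed remaining life expectancies: ~60 years for a 20-year-old woman,
# ~44 years for a 30-year-old man.
print(f"20-year-old woman: £{life_value(60):,.0f}")  # about £1,500,000
print(f"30-year-old man:   £{life_value(44):,.0f}")  # about £1,100,000
```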

Variation in the value of lives, whether by gender, age or nationality, feels inherently wrong… but is it actually wrong? Or is it the reality of the world we live in?

So what of driverless cars? Effectively, they can be considered robots, and we have an established set of laws for robots: science fiction writer Isaac Asimov proposed his Three Laws of Robotics in 1942, the first of which is

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Like much political legislation, this robotic law is well-intentioned but functionally useless in the situation we’re considering.

You may already be familiar with the “trolley problem”: a runaway train is heading down the tracks towards a group of five people. A woman is standing next to a lever. Pulling the lever will shift the points in the track and send the train barrelling towards a single person instead. Should the woman pull the lever?

Some ethicists would say the woman should pull the lever: from a utilitarian viewpoint, she is obliged to reduce the number of people who come to harm. Others would say that the woman should not pull the lever: a deontological view might hold that the act of pulling the lever would make her complicit in the killing of another human being.

Replace the woman with a robot, and the robot is forced to break Asimov’s First Law of Robotics no matter what action it takes (or doesn’t take). We’re effectively entrapping the robot.

[Cartoon by Amrit Tigga]

Perhaps it isn’t surprising that we haven’t “solved” an ethical problem for robots given that we haven’t “solved” it for humans. But that doesn’t mean that it isn’t a problem. In humans, we can rely on the free agency of the individual and judge them post hoc.

Robots, at least for the time being, are not sentient. They do our bidding, and we must decide our bidding in advance. There is no ‘in the moment’ free agency to rely upon—we will know (or at least will be able to know) with certainty the action that will be taken in advance.

So what are driverless cars to do? If a driverless car finds itself in a situation where it must choose between a high speed collision with a pedestrian or with a wall, which should it choose? From the point of view of the car, should the lives of the pedestrian and the car’s occupant be of equal value? Or should the car prioritise the life of the owner? And what if the individual pedestrian is replaced by a group of pedestrians? Or a group of children?

It could be argued that the car should prioritise the life of its driver, since that is what humans tend to do in practice. Or it could be argued that the car should value everyone equally and protect the greatest number of lives possible, since that utilitarian view is how we might want humans to act. Or it could be argued that the risk should be borne entirely by the person choosing to operate the vehicle, and so the car should act to prioritise those outside it.

Some writers have suggested that driverless cars will be forced to prioritise the life of the driver due to market forces: no-one will buy a car which might decide to kill them. Yet, of course, there is also society and the legislature to consider, and it seems unlikely that cars which did not give due weight to the lives of pedestrians and others outside the car would ever gain societal acceptance.

[Cartoon by Amrit Tigga]

And so, driverless cars look like they’re stuck in an ethical rut: they can neither prioritise the life of the driver nor the life of the pedestrian. So what should a car do in the “wall or pedestrian” situation? Choose randomly? That also seems… unethical.

We’ve reached an impasse.

Much is written about the ethics of self-driving cars in these extreme situations, and they are interesting philosophical and ethical questions to ponder. But they aren’t particularly helpful in a practical sense. Much like Edwin Paxson, we are being drawn into answering the wrong question.

One of the flaws in the trolley problem is that humans are rarely in a situation with two clear, diametrically opposed options. We have a range of choices available to us, not just pulling or not pulling the lever that controls the points. Maybe we could shout a warning to the people in the path of the train; maybe we could signal to the driver to stop; maybe we could somehow derail the train.

And this is the first reason why the question is wrong: the car can take more than two actions. It can sound its horn; it can perform an emergency stop; it can deploy an airbag; it can hand control back to a human. The dichotomous choice is unrealistic.

In addition, the technology isn’t at the standard required to assess a situation in the detail the problem describes, and the programming in the car will probably never consider the situation at all. It is unlikely that any self-driving car will be programmed with a “crash self” option. Instead, it will have a number of reactions to stimuli, including “do not crash into pedestrians” and “do not crash into walls”, and in the event of a conflict it will probably respond by avoiding the pedestrians rather than the wall. Just like a human, it would not know at the decision point what the outcome would be for its occupants, but there would no doubt be advanced protective mechanisms in place, just as in non-driverless cars. In fact, by allowing the car to crash in a predictable way, the safety of the occupants can probably be increased even in the event of a crash.
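
To make that a little more concrete, here is a minimal, hypothetical sketch of how a prioritised set of reactions to stimuli might be resolved. The rule names, priorities and responses are my own illustrative assumptions, not anything a real manufacturer has published:

```python
from dataclasses import dataclass

# Hypothetical sketch of prioritised reactions to stimuli. The rule names,
# priorities and responses below are illustrative assumptions only.

@dataclass
class Rule:
    name: str
    priority: int  # lower number = more important
    response: str

RULES = [
    Rule("do not crash into pedestrians", priority=1,
         response="brake hard and steer away from the pedestrians"),
    Rule("do not crash into walls", priority=2,
         response="brake hard and steer away from the obstacle"),
    Rule("stay in lane", priority=3, response="hold course"),
]

def respond(triggered: set[str]) -> str:
    """Return the response of the highest-priority rule that has been triggered."""
    applicable = [rule for rule in RULES if rule.name in triggered]
    if not applicable:
        return "continue normal driving"
    return min(applicable, key=lambda rule: rule.priority).response

# A conflict: both hazards are detected, and the pedestrian rule wins on priority.
print(respond({"do not crash into pedestrians", "do not crash into walls"}))
```

The point of the sketch is simply that a conflict is resolved by ordinary rule priority, without the car ever weighing one life against another in the moment.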

Your washing machine at home is pretty much autonomous in operation. Does it prioritise preventing fire or preventing flood in the event of a malfunction? I have no idea what mine does, but I suspect that the situation is so far out of normal operating limits that it isn’t specifically programmed to do either. Perhaps the same is true of driverless cars.

It’s also worth considering that this sort of problem isn’t as new as it appears. Cars are not the first autonomous vehicles: aeroplanes have used autopilot for decades. Self-parking cars have been around for years. Both of these hand control back to the driver when the situation becomes difficult; perhaps that will turn out to be the solution for driverless cars, too.

I argue that we simply don’t need to worry too much about the ethics of driverless cars. They present an interesting philosophical discussion, but it isn’t a practical consideration at the moment, nor will it be for a long time to come. By the time it does become an issue, the incremental developments which occur in the meantime will likely point us in the right direction.

For now, I’m just looking forward to sitting back and enjoying the ride!



Many thanks to Amrit Tigga for the wonderful cartoons he's drawn to illustrate this blog post.

This post was filed under: News and Comment.

Another irritating “my child’s not fat” story

Re: this article.

A mother chooses to disclose the contents of a private letter telling her that her son is on the 98th centile for BMI. She does this by calling him “fat”. This upsets him. So she has a picture of him printed in a national newspaper alongside a report explaining that he’s reportedly “fat”. And then she blames the NHS. Exasperating!

Perhaps the letter she received needs refining. Perhaps a letter isn’t the appropriate way to communicate this info.

But the bare choice is between:
a) Not monitoring children’s health
b) Monitoring but not disclosing the results
c) Monitoring and giving advice to parents of children with a high BMI

I can only ever see “c” being the ethical option.

Would this mother really have preferred not to know that her child is at statistically increased risk of a variety of diseases? Would she really rather not have been given advice on how to help? Was it really ethical of the Daily Mail to cash in on her unhappiness rather than pointing her in the direction of her GP?

I suspect the answer to all three is “no”.

Rant over.

This post was filed under: Health.

The Haltons and calling eleven-year-olds “fat”

A story is doing the rounds today, much like recent similar stories, about a child called Tom Halton, whose parents received a letter telling them that his BMI was higher than expected for his age.

Before I go any further with this post, I need to point out that the BBC are talking rubbish about the story. Their second paragraph:

Tom Halton, 11, of Barnsley was told he was overweight after taking part in a national scheme which measured children’s body mass index.

Not true. The letter was sent to Tom’s parents, not Tom. They chose to share it with him. This upset Tom, and he didn’t eat his dinner that night.

The facts are these: Tom’s BMI is higher than that of 93% of children his age. The World Health Organisation classifies him as overweight. An increased childhood BMI is associated with lifelong adult illness, in particular type 2 diabetes.

Yes, there are problems with using BMI for purposes like these, particularly in children, and the letter should have acknowledged those more clearly. But it is wrong to simply ignore the best indicators we have in children of their future adult health.

Tom’s dad said:

These letters are doing more harm than good. You might as well send a T shirt with FATTY on it. The impression it gives is that your child is fat, it’s your fault and they will die from a horrible disease.

The letter is not the best written in the world, but it makes the point fairly clearly that the high BMI increases Tom’s risk of future illness. Which, to be blunt, it does.

As for the T-shirt comment, it strikes me firstly that it was the parents who shared this letter with their child, and have now plastered him over the papers with headlines calling him “fat”. Why?

I note that the “grovelling apology” from the DoH actually apologises for causing the parents offence if they felt their parenting skills were being derided – which is not suggested at any point in the letter.

So where do Dan and Tracy Halton suggest we go from here? I’m genuinely interested to hear their views – and yours. Do we inform parents of modifiable statistical risks to their children’s health and wellbeing, or not? If so, how do we go about it? Is writing to parents not the best way to tackle this? Would individual consultation where the full facts could be clearly explained be a better option? Or does that come across as being “summoned to the headmaster’s office”, and yet more punitive (and expensive)?

What do we do? How do we tackle this? I’d love to know your thoughts.

This post was filed under: Health.



