Digital transformation in healthcare

Later today, NHS England will release quarterly waiting time statistics for A&E attendances and emergency admissions. This will doubtless spark political discussion about ‘reform’ of the NHS, including greater digitisation.

In the latest Wired, Yinka Makinde (Director of Digital Workforce at NHS England) talks briefly about some of the reasons that ‘digital’ projects in the NHS fail:

70 percent of digital transformation programs in the NHS, particularly complex ones, will fail to meet their desired objectives. There are many reasons for this. For one, we focus too much on technology and often forget to ask what problem our clinical staff and patients are facing and how they want things to change. We also have organizational silos, where digital is still often seen as the IT department with the office in the basement, rather than something more integral to the health service.

I’m not an expert in digital technology, and my personal experience of leading ‘digital transformation’ is limited to upgrading the router at home. Yet, as a doctor, I’ve been on the receiving end of these programmes more times than I’d like to count, sometimes in the NHS and sometimes in allied organisations. It almost always feels like something that is being done ‘to’ me—not ‘for’ me or ‘with’ me—and a seventy percent failure rate sounds about right. Some projects ‘fail’ in the sense of never reaching full roll-out, usually after a last-minute screeching emergency stop; others ‘fail’ in the sense of rolling out, but not delivering the intended outcome.

Today, I’d like to offer a few reflections on where—from my perspective—some of those programmes have gone wrong.


Failing to understand the problem

Lots of projects I’ve been involved in seem to start with process mapping. Someone might ask to interview me, or to observe me, and to diagrammatically represent what I am doing, often concentrating on the ‘information flows’ that I’m generating. This raises a practical problem and a philosophical problem.

The practical issue is that the resulting process map is a subjective abstraction of reality. It does not completely record what is done, and some of what it does record will be scenario-dependent. This ought not to matter: the map should serve only as an aide-mémoire, and the understanding of the problem should be checked back regularly with the person observed. In my experience, this rarely happens. Worse, the opposite often happens: the process map is redrawn, refined, and reinterpreted, abstracting it further and further from reality.

I vividly remember one occasion on which someone misunderstood what was meant in a process map by ‘agreeing’ a decision. In practice, this meant chatting it over and sense-checking it with a senior member of the team, most often retrospectively, possibly up to a week or so after the decision was made. The resulting software had a mandatory field to be completed at the time the decision was made, including the name of the senior person ‘agreeing’ the decision. The team responded with a workaround, replacing the ‘name’ with a standard phrase regarding delegation; the bug was never fixed. The software encoded a misunderstanding of the process, and the resulting workaround means that records are slightly worse than they were before, as the name of the person providing ‘agreement’ is no longer recorded in a standardised form.
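To make that failure mode concrete, here is a minimal sketch of the two validation rules: the one the software shipped with, and the one the real process needed. Everything in it is hypothetical; the record type, the field names and the one-week grace period are invented for illustration.

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Optional

    @dataclass
    class Decision:
        description: str
        made_at: datetime
        agreed_by: Optional[str] = None       # name of the senior person 'agreeing'
        agreed_at: Optional[datetime] = None

    def validate_as_shipped(decision: Decision) -> None:
        """The deployed rule: 'agreement' is mandatory at entry time, so teams
        fill the field with a standard delegation phrase and real names are lost."""
        if not decision.agreed_by:
            raise ValueError("agreed_by must be completed when the decision is recorded")

    def validate_as_practised(decision: Decision,
                              grace: timedelta = timedelta(days=7)) -> None:
        """Closer to the real process: agreement may be recorded retrospectively,
        within a grace period, so the field can hold a genuine name."""
        if decision.agreed_by is None and datetime.now() - decision.made_at > grace:
            raise ValueError("decision still awaiting senior agreement")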

The philosophical problem is that process mapping does not always provide insight into why something is done, which can be valuable information. It seems common for process mapping to conclude that a particular process is stunningly inefficient: for example, it might be suggested that telephoning someone as part of a process is much less efficient than using some form of asynchronous communication. This is often true, but if the phone call has multiple purposes, only one of which is caught in your process map, then the phone call is still going to have to happen. The ‘more efficient’ approach becomes an additional, and therefore inefficient, step.

I’m reminded of an IT-driven project in a general practice surgery which recalled patients for annual reviews pertaining to specific diagnoses—say asthma, or hypertension—using text messages, allowing them to use an automated system to book themselves in at a convenient time. The aim was to reduce pressure on receptionists. The project missed that a large proportion of the patient population had multiple conditions, and that many patients called for multiple reasons. The consequence was that patients ended up attending separate appointments for annual reviews of each condition, instead of having them covered in a single appointment. Call volumes also dropped less than expected, because people still called about the ‘other business’ they would previously have completed during their appointment-booking phone call.
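The underlying design choice can be sketched in a few lines of Python, with invented example data: inviting recall per condition books the same patient in several times, while grouping per patient covers everything in one appointment.

    from collections import defaultdict

    # Invented example data: (patient_id, condition) pairs due an annual review.
    recalls_due = [
        ("p1", "asthma"),
        ("p1", "hypertension"),
        ("p2", "asthma"),
    ]

    # As deployed: one invitation per condition, so p1 is booked in twice.
    invitations_per_condition = defaultdict(list)
    for patient, condition in recalls_due:
        invitations_per_condition[condition].append(patient)

    # As needed: one invitation per patient, covering all of their reviews.
    invitations_per_patient = defaultdict(list)
    for patient, condition in recalls_due:
        invitations_per_patient[patient].append(condition)

    print(dict(invitations_per_patient))
    # {'p1': ['asthma', 'hypertension'], 'p2': ['asthma']}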


Failing to set limits

In large organisations, everyone wants any new IT system to do something specific for their part of the business. Some requirements will inevitably be mutually incompatible—or, at least, not best suited to be completed on a single platform. All too often, the response of the developer seems to be to say ‘yes’ and add to the project cost, rather than setting limits.

This comes up in my field all the time. One common issue is the conflict between surveillance and case management. Surveillance is knowing how much of a disease is present in the population at any given time. Case management is responding to each individual case. These sound superficially like sensible bedfellows, but they are not.

Surveillance requires very rigid, fixed case definitions: a person is a countable, confirmed case of Disease X if liver enzyme Y is above standard value Z. Absolute certainty is required. The real world of case management is much murkier: interpreting a person with liver enzyme Y above standard value Z as a confirmed case of Disease X might be questionable if their liver is already inflamed by disease A. The case might be epidemiologically confirmed but, in terms of individual case management, found to be something else entirely. Keeping those two seemingly contradictory facts in the same system is probably not advisable: the potential for confusion is endless, even with the best system architecture in the world. But that doesn’t stop people…
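For anyone who thinks better in code, here is a deliberately simplified sketch; the threshold, the names and the logic are all invented, but it shows how the same patient can simultaneously be a countable case for surveillance and, clinically, something else entirely.

    # 'Standard value Z': threshold invented for illustration.
    LIVER_ENZYME_THRESHOLD = 100.0

    def is_countable_case(enzyme_level: float) -> bool:
        """Surveillance: a rigid, fixed case definition. Countable or not."""
        return enzyme_level > LIVER_ENZYME_THRESHOLD

    def clinical_view(enzyme_level: float, other_liver_disease: bool) -> str:
        """Case management: the same result needs interpretation in context."""
        if not is_countable_case(enzyme_level):
            return "not a case"
        if other_liver_disease:
            return "result confounded; consider alternative diagnoses"
        return "manage as a probable case of Disease X"

    # The same patient, in the same system, at the same moment:
    print(is_countable_case(150.0))                        # True: counted for surveillance
    print(clinical_view(150.0, other_liver_disease=True))  # clinically, perhaps not Disease X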


Failing to understand the environment

This is the error I find least straightforward to understand: people creating ‘IT solutions’ without understanding the environment in which they will be deployed. Some failures seemed screamingly obvious: rolling out electronic prescribing to wards with one or two computers, or asking care homes without computers to fill in an online dashboard.

Others reflected a subtler lack of awareness: developing a system to communicate with staff across multiple Local Authorities which required the IT teams in each Authority to install specific software on their systems, for example, or expecting a website which required an up-to-date browser to be accessible in NHS hospitals running ancient versions of Windows.

The common factor is that the environment external to the organisation commissioning the ‘IT solution’ is the part most often poorly understood. For a project to succeed, it needs to account for the limitations faced by its users, not just its commissioners.


Failing to plan to evaluate

In medicine, we’re almost obsessive about assessing outcomes. All too often, IT projects plan only to evaluate processes. This is a mistake: an inability to show that a system improves outcomes often becomes an inability to argue for continued funding.

I was once involved in a project which replaced emailed reports with an online dashboard. The function of the reports was to generate ‘awareness’: for example, giving people some background knowledge of where in the country there might be outbreaks of a specific disease, to help inform risk assessments of individual potential cases who had travelled to those areas.

The evaluation plan was entirely about the accuracy of the data on the dashboard and whether the dashboard was accessible to staff. That makes sense if you view this as an ‘IT problem’, but the actual requirement was awareness: moving from a model which pushed information to staff to one where staff had to pull information from a dashboard was an unlikely way to achieve that goal. If those designing the system had planned a proper evaluation up front, that significant hurdle would have revealed itself early on, and they might have taken a different approach.


It’s interesting to reflect that these problems are not just problems with ‘digital transformation’: the broad topic areas are exactly the same as those that trip us up in outbreak management. Sometimes, we don’t understand the problem, perhaps because we misinterpret clinical results or talk to each other in language that means different things to different groups.[1] Occasionally, we don’t properly set limits around what we’re managing, and so end up with outbreak control groups that last for eternity and consider every issue under the sun. We don’t always properly understand the environment, and can give advice that makes no sense on the ground.[2] And we aren’t perfect at remembering to evaluate our approaches and share our learning, however much we try.

Perhaps these issues are universal. Perhaps they are problems of professional life—or just of life—rather than anything specific to IT projects. The thing they have in common is that they seem superficially simple, but are hard both to spot and to tackle in practice. Communication and teamwork are crucial to solving them: as Makinde says, organisational silos are unhelpful.

And, perhaps, we all need to be a bit—or maybe a byte—more humble in the face of complexity.


  1. Communication is the hardest bit of my job. I’ve reflected before about how I’ve gone wrong by fundamentally misunderstanding what someone is saying to me. I’ve mentioned the example of ‘vulnerable’ prisoners, a label which covers two completely different groups of people from a health perspective (those likely to become unwell) and from a justice perspective (those likely to be attacked by other prisoners). Another example, which often caused confusion in the covid pandemic, is ‘contact tracing’, which can sometimes mean tracing those who have been in contact with an infectious person (to see if they’ve caught it) and can sometimes mean tracing those the infected person was previously in contact with (to see where the known case caught it). The result is that doctors in my profession spend a huge amount of time and effort trying to make sure that everyone has a shared understanding of what we’re trying to say, but even then, we sometimes fail.
  2. I’m a big advocate of visiting places and seeing them with my own eyes when trying to give outbreak advice… which has made recent times challenging.

The image at the top of this post was generated by Midjourney.

This post was filed under: Health, Post-a-day 2023.
