AI has a problem with bias. In healthcare, this can mean life or death.

Hannah Lippitt
Oct 21, 2019

“We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.” (Bill Gates)

Picture this: you’re concerned you have the early signs of skin cancer. You use software in which a machine learning algorithm diagnoses your condition, and ninety percent of the time it gets the diagnosis right.

Except for you it’s not.

This could be because you’re not from the West. You’re from a part of the world that has a high mortality rate for skin cancer, yet doctors aren’t trained on your skin type.

Racial bias in AI and machine learning isn’t new. Years of clinical research into skin cancer have focused primarily on people with light skin. This means there are whole groups of people whose symptoms present differently from what the research describes. And this could literally kill them.

The problem with AI

We all know that no matter how intelligent human beings are, bias and bigotry cannot be ruled out, even when we make our best efforts to overcome them.

Whilst in an ideal world rational artificial intelligence machines would be above such prejudices, in reality machines learn about the world through human language and historical behaviour. In short, algorithms do not exist in a vacuum; they are programmed and influenced by human beings.

Because they’re trained on datasets, they’re subject to the limitations of that data: it could lack diversity or be limited to just one area of study. These social biases are then transferred into the decisions the AI makes.
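To make this concrete, here is a minimal sketch, in Python, of how a training dataset could be audited for representation before any model learns from it. The file name and the fitzpatrick_type column (a standard I-VI skin-tone scale) are assumptions made purely for illustration, not part of any real product.

```python
import pandas as pd

# Hypothetical dermatology training set; the file name and the
# "fitzpatrick_type" column (a I-VI skin-tone scale) are assumptions
# made purely for illustration.
df = pd.read_csv("skin_lesion_images.csv")

# What share of the examples does each skin type actually contribute?
shares = df["fitzpatrick_type"].value_counts(normalize=True).sort_index()
print(shares)

# Flag any group contributing less than 5% of the data: a model trained
# on this set has seen too few examples of that group to diagnose it
# reliably.
underrepresented = shares[shares < 0.05]
if not underrepresented.empty:
    print("Under-represented skin types:", list(underrepresented.index))
```

A simple audit like this doesn’t fix the bias, but it makes the gap visible before the algorithm inherits it.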

The Google image problem

The world’s biggest search engine has always seen itself as a pioneer of new technologies and human experiences, so it’s no surprise that Google’s increasingly turning its focus to healthcare.

There was the purchase of DeepMind Technologies in 2014. And there’s been much fanfare over Google buying Fitbit.

The problem is that Google has not only run into accuracy issues, with fake or dubious healthcare information being ranked higher than it should be; its image search has also come under fire for being biased. Examples include images of white men dominating the results when ‘doctor’ is googled, and the ‘Google hands’ controversy, when the image results for ‘hands’ were shown to be almost all white.

Consider what could happen when doctors begin using Google Images to diagnose diseases like skin cancer, where diagnosis is a matter of life or death. If Google wants to play doctor, it’s going to have to up its game when it comes to sorting out its image problem. So much of the diagnostic process relies on being able to see body parts, so Google can’t afford to delay its promise to build more diverse and representative image sets and to train algorithms to be more inclusive when learning from imperfect data.

Computer scientists: the diversity issue

Another problem is the distinct lack of diversity among computer scientists, with women making up just 12 percent of leading machine learning researchers. Consider heart disease: women are more likely than men to die from a heart attack because their symptoms are considered ‘atypical’. Chest pain is presumed to be the main heart attack symptom, but symptoms present differently in women. It’s just one example of how healthcare isn’t sufficiently serving half the population.

If computer scientists were more representative of the general population, it’s likely the data would expand beyond these limitations, and lives would be saved in the process.

But on the other hand…

What is fair anyway?

Ask three people about the same event and you’ll likely get three different answers. The same applies to fairness. It’s hard enough for philosophers, lawyers and social scientists to agree, so perhaps it’s a little too much to ask computer scientists to define fairness in mathematical terms. On top of this, the computer science field can be very rigid, with answers tending not to change over time. So, from the outset, AI is trained on biased data.
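To illustrate why a single mathematical definition is elusive, here is a minimal sketch, in Python, of two common formal notions of fairness: demographic parity (equal rates of positive predictions across groups) and equal opportunity (equal true-positive rates across groups). The toy data is invented purely for illustration; the point is that a model can satisfy one definition while failing the other.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates (recall) between groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Invented toy predictions for two groups, A and B.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Both groups get positive predictions at the same rate (gap of 0.0),
# yet sick people in group A are caught only half as often (gap of 0.5).
print(demographic_parity_gap(y_pred, group))
print(equal_opportunity_gap(y_true, y_pred, group))
```

So “fair” depends on which of these definitions you pick, and they cannot always be satisfied at the same time.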

What does the future hold for AI and healthcare?

While researchers, think tanks and governments are moving fast to regulate AI, innovation also needs to be prioritised so new players aren’t put off by stifling red tape. But it’s also clear that AI-powered solutions aren’t having the global impact they could, because the diverse data needed to train them for optimal patient outcomes simply isn’t available. It’s also hard to regulate a machine that is constantly evolving, which raises questions about who is liable if anything goes wrong for the patient.

Lots of questions and not that many solid answers — yet. But with AI developing quickly, time is of the essence, especially if we’re talking about people’s lives.
