AI and Bias in Healthcare: What Is It and What Can Be Done about It?

In This Article

  • Artificial intelligence (AI) is set to transform the healthcare landscape, including the field of radiology
  • AI offers the potential to produce objective, unbiased results in any operation, but bias can still creep in, often to detrimental effect
  • A recent panel discussion at Massachusetts General Hospital explored how and where bias can enter the picture in science and research and what can be done about it

As is the case in so many corners of society, artificial intelligence (AI) is poised to transform healthcare—not least, the field of radiology. AI algorithms can automatically discern complex patterns in radiological scans, for example, and thus provide advanced image analysis for the detection, characterization, and monitoring of disease.

Part of the promise of AI is its seeming ability to produce objective, unbiased results—to remove the human element from any operation, that is, the potential for error or unintended subjectivity when a person, not a machine, is making the calls. But AI is not immune to such biases. Any number of factors can skew the results, including the data used to train an AI algorithm and how the algorithm is deployed.

In September, the Women in Science (WiS) group in the Martinos Center for Biomedical Imaging at Massachusetts General Hospital hosted a panel discussion on AI and bias in science with research and clinical faculty from the center and elsewhere in the Mass General Brigham system. Led by WiS co-chairs Gabrielle Gilmer and Stephanie Langella, PhD, the fascinating, in-depth discussion covered a range of topics related to bias in AI, particularly concerning research and clinical care in radiology.

In conversations after the event, two of the panelists, Juan Eugenio Iglesias, PhD, and Ona Wu, PhD, revisited several of these topics. Dr. Iglesias is co-director of the Center for Machine Learning. Dr. Wu is the director of the Clinical Computational Neuroimaging Group. Both entities are housed in the Martinos Center. Highlights from those conversations are below.

The Women in Science group has also compiled a list of resources to help address bias in AI.

Q. What exactly is bias in AI?

Dr. Iglesias: Bias in AI systems happens when an AI algorithm is systematically predisposed to make predictions in a certain direction. In modern AI systems, these biases often stem from the data used to train the systems: if your data are biased, so too will be your AI system. But not all biases stem from the training data. Some originate in the way humans design AI systems, or even in the way the systems are used, even when the system itself is perfectly unbiased.

Dr. Wu: People assume that if something comes out of a computer, it's objective—or that if an algorithm has been "trained," it's correct. Unfortunately, this is not necessarily the case.
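Dr. Iglesias's point about training data can be made concrete with a small simulation. The sketch below is illustrative only: it uses synthetic data, made-up group labels, and an off-the-shelf scikit-learn classifier, and is not drawn from any real study. A model trained on a dataset dominated by one group fits that group's pattern and loses accuracy on the underrepresented group.

```python
# A minimal sketch (synthetic data, hypothetical groups) of how skewed
# training data produce a model that underperforms on a minority group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's feature-outcome relationship sits at a different cutoff.
    x = rng.normal(shift, 1.0, size=(n, 1))
    y = (x[:, 0] + rng.normal(0, 0.5, n) > shift).astype(int)
    return x, y

# Group A dominates the training set; group B is underrepresented.
xa, ya = make_group(1000, shift=0.0)
xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.hstack([ya, yb]))

# Evaluate on fresh, equal-sized test sets for each group.
for name, shift in [("A", 0.0), ("B", 1.5)]:
    xt, yt = make_group(500, shift)
    print(f"group {name} accuracy: {model.score(xt, yt):.2f}")
```

Because the decision boundary is fit almost entirely to group A, cases from group B that fall between the two groups' true cutoffs are systematically misclassified, which is exactly the kind of skew described above.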

Q. At what points in the AI cycle are algorithms vulnerable to bias?

Dr. Iglesias: Anywhere. Literally everywhere.

Dr. Wu: Bias can enter the picture at all stages of the machine-learning cycle: task definition, data collection, model definition, training, testing, deployment, and stakeholder feedback.

Q. How might bias in AI impact healthcare, and radiology in particular?

Dr. Wu: Panch et al. proposed the following definition of AI bias in healthcare systems: "the instances when the application of an algorithm compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability or sexual orientation to amplify them and adversely impact inequities in health systems."

A well-known example in healthcare is a commercial algorithm that was used to guide decision-making in the U.S. healthcare system—specifically, to assess patients' risk level and recommend further care based on that assessment. The problem was that the algorithm used health costs as a proxy for health needs. Because the healthcare system generally spends less on Black patients, the algorithm recommended Black patients for advanced management less often than white patients, even when they were sicker.
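The mechanism at work there—a mislabeled objective—can be illustrated with a toy simulation. In the sketch below (all numbers are synthetic and hypothetical, not taken from the study Dr. Wu describes), two groups have identical health needs, but one incurs systematically lower costs. A rule that refers the top 20% of patients by cost then refers that group less often, and its patients must be sicker to be referred.

```python
# A minimal sketch (entirely synthetic numbers) of proxy-label bias:
# identical needs, unequal historical spending, unequal referrals.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
need = rng.gamma(2.0, 1.0, n)       # true (unobserved) health need
group = rng.integers(0, 2, n)       # both groups share the same need distribution

# Historical spending tracks need but is systematically lower for group 1.
cost = need * np.where(group == 1, 0.6, 1.0) + rng.normal(0, 0.1, n)

# "Refer the top 20% by predicted risk" -- with risk proxied by cost.
threshold = np.quantile(cost, 0.80)
referred = cost > threshold

for g in (0, 1):
    mask = group == g
    print(f"group {g}: referral rate {referred[mask].mean():.1%}, "
          f"mean need among referred {need[mask & referred].mean():.2f}")
```

Running the sketch shows two groups with the same distribution of need receiving very different referral rates, with the average need of referred patients higher in the lower-cost group.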

Dr. Iglesias: Radiology is one of the fields of medicine that many believe will be most transformed by AI. This is due partly to the high accuracy of convolutional neural networks in automated image analysis (including medical images) and partly to the remarkable success of large language models in analyzing text (such as medical reports).

Of course, bias in these systems can become very problematic. In the era of precision medicine, it is completely unreasonable and unfair to train a radiology AI system on one population (e.g., white males) and deploy it on other populations where the performance may be suboptimal or even dangerous.

Q. What steps can be taken to avoid bias in AI?

Dr. Wu: Generally, we would do well to integrate FATE principles (fairness, accountability, transparency, ethics) at all stages of the machine-learning cycle. Requiring models to explain how they arrive at their decisions and predictions can also help mitigate bias.
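To make the testing-stage part of that concrete, here is a minimal sketch of two common fairness checks, using hypothetical labels, predictions, and group assignments: the positive-prediction rate per group (demographic parity) and the true-positive rate per group (equal opportunity, one component of equalized odds).

```python
# A minimal sketch (toy arrays, assumed group labels) of fairness checks
# that could be run before a model is accepted for deployment.
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Compare positive-prediction and true-positive rates across groups."""
    for g in np.unique(group):
        m = group == g
        ppr = y_pred[m].mean()                  # demographic parity check
        tpr = y_pred[m & (y_true == 1)].mean()  # equal opportunity check
        print(f"group {g}: positive rate {ppr:.2f}, TPR {tpr:.2f}")

# Toy example: predictions that favor group 0 over group 1.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
fairness_report(y_true, y_pred, group)
```

Large gaps between groups on either metric would be a signal to revisit the data, the model, or the task definition before deployment.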

Q. What about once an algorithm has been deployed in, for example, clinical settings?

Dr. Wu: Treat it like a new drug. You need to have constant monitoring of an algorithm's performance—in this case, in terms of fairness and accuracy. Fairness metrics should be assessed and then used to refine the model. Accountability for errors should be predefined.
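One way to picture this kind of post-deployment surveillance is as a recurring audit over newly labeled cases, flagging the model for review whenever accuracy or a fairness gap drifts past limits set in advance. The sketch below is a minimal illustration; the thresholds, data, and alert logic are all assumptions, not a production monitoring system.

```python
# A minimal sketch (assumed thresholds, synthetic data) of a recurring
# audit that flags a deployed model when accuracy or fairness drifts.
import numpy as np

ACCURACY_FLOOR = 0.85  # assumed acceptance criterion, fixed at validation time
MAX_TPR_GAP = 0.10     # assumed limit on the cross-group true-positive-rate gap

def audit(y_true, y_pred, group):
    """Return alerts if accuracy or the cross-group TPR gap is out of bounds."""
    acc = (y_true == y_pred).mean()
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)]
    gap = max(tprs) - min(tprs)
    alerts = []
    if acc < ACCURACY_FLOOR:
        alerts.append(f"accuracy {acc:.2f} below floor {ACCURACY_FLOOR}")
    if gap > MAX_TPR_GAP:
        alerts.append(f"TPR gap {gap:.2f} exceeds {MAX_TPR_GAP}")
    return alerts or ["ok"]

# Each monitoring window, rerun the audit on that window's labeled cases.
rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 200)
group = rng.integers(0, 2, 200)
y_pred = np.where(rng.random(200) < 0.9, y_true, 1 - y_true)  # ~90% accurate
print(audit(y_true, y_pred, group))
```

As with pharmacovigilance for a new drug, the key design choice is deciding in advance what counts as unacceptable drift and who is accountable for acting on an alert.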

Learn more about the Women in Science group at the Martinos Center for Biomedical Imaging

Learn more about research in the Department of Radiology
