Employing artificial intelligence (AI) in healthcare can save lives. Bias in healthcare AI can cost them. Let’s discuss what that bias looks like and how to avoid it.
About AI and ML
Machine learning (ML) powers AI by training on enormous amounts of data. Training datasets reflect prior human work, and since we don’t live in a perfect world, that work carries imperfections. As ML learns and extrapolates from it, it can amplify those imperfections in its future efforts. It recognizes patterns – not only the patterns that you want it to recognize, but all patterns.
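To make that concrete, here is a minimal sketch – synthetic data and scikit-learn, assumed purely for illustration, not any real clinical model. Historical decisions that penalized one group become training labels, and the model learns the penalty as if it were signal:

```python
# Hypothetical sketch: a model trained on historically biased labels
# reproduces that bias. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One legitimate predictor and one protected attribute (0 or 1).
severity = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical decisions disfavored group 1: same severity, different outcomes.
logit = severity - 1.0 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on history, with the protected attribute available as a feature.
X = np.column_stack([severity, group])
model = LogisticRegression().fit(X, approved)

# The learned model carries the historical penalty forward.
print("learned weights (severity, group):", model.coef_[0])
print("approval rate, group 0:", model.predict(X[group == 0]).mean())
print("approval rate, group 1:", model.predict(X[group == 1]).mean())
```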
When mishandled, AI can be like a genie in a story, exploiting the loopholes in your wishes. You wished to lose weight … so the genie chops off one of your limbs. You wished to be wealthy … so the genie empties a bank vault. When you’re looking for the most expedient solution, unexpected consequences are to be expected.
In healthcare, those unintended consequences can be life changing. For example, an October 2020 study found that a clinical-decision algorithm categorized hundreds of Black patients as lower risk than it would have if they had been white. In 64 of those cases, Black patients did not qualify for a kidney transplant waiting list but would have if they had been white. A New England Journal of Medicine review found dozens of other similarly flawed clinical-decision algorithms throughout medicine.
About Bias
“The way the human brain works is the way that AI works,” says Brendon Thomas, director of innovation at Intouch. “It categorizes and predicts based on prior experience. If a child learns that a four-legged thing is called a dog, it’s going to call a cow, a horse, a cat, and a rat a dog until someone corrects it. This is prediction – and prediction will always have bias, because it lacks a complete knowledge of the entire variable set. If we had complete knowledge of the entire variable set, it wouldn’t be prediction, but simply knowledge – or perhaps omniscience.”
Antonio Rivera, director of inclusion at Intouch, agrees: “Bias is the process of making decisions using finite data that can encourage more of one outcome than another.”
Common biases that introduce problematic results include (but are definitely not limited to) issues of culture, gender, ethnicity, mobility, neurodivergence, size, race, vision – or anything else upon which human experiences differ. Systems developed with bias may make any number of erroneous assumptions: they may assume that people move, speak, or appear in only certain ways; they may allow users to interact with them in only certain ways. The list goes on.
Machine learning, by design, jumps to conclusions. If the information a system has been given centers on something, an algorithm will assume that the easiest way to get the right answer is to select for it, even if the data set wasn’t intended to center on that type of information. A common example is a system that helps HR teams sort through resumes. If it learns from its training data that people with certain names, genders, zip codes, or educations have historically been more or less likely to be hired, that’s who it will select for in the future. Thomas points to an example of this at Amazon, where an algorithm concluded from historically male-skewed hiring that it should discard resumes that mentioned women’s organizations.
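That failure mode is easy to reproduce in miniature. In this hypothetical sketch (synthetic data; the keyword feature is invented for illustration), the protected attribute is withheld from the model entirely, yet a correlated proxy lets it reconstruct the historical skew:

```python
# Hypothetical resume-screening sketch: gender is excluded from the
# features, but a correlated proxy (a keyword flag) carries the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000

skill = rng.normal(size=n)
is_woman = rng.integers(0, 2, size=n)  # hidden from the model
# A proxy correlated with the protected attribute.
mentions_womens_org = ((is_woman == 1) & (rng.random(n) < 0.6)).astype(float)

# Historical hiring skewed against women, independent of skill.
hired = rng.random(n) < 1 / (1 + np.exp(-(skill - 1.2 * is_woman)))

# Train without the protected attribute itself.
X = np.column_stack([skill, mentions_womens_org])
model = LogisticRegression().fit(X, hired)
print("weight on skill:", model.coef_[0][0])  # positive, as intended
print("weight on proxy:", model.coef_[0][1])  # negative: the bias resurfaces
```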
Abid Rahman, VP of innovation at Intouch, offers a healthcare example: genetic databases are heavily skewed toward Caucasian, Western genetic information, so using these data to predict outcomes or diagnoses in non-Western populations will likely be inaccurate.
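A minimal sketch of that skew, assuming synthetic “variant” features and a made-up 95/5 population split: a model trained almost entirely on one population generalizes poorly to another whose variant-outcome relationship differs.

```python
# Synthetic illustration of training-population skew. Not real genetics.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_population(n, weights):
    """Toy cohort: three 'variant' features, population-specific outcome link."""
    X = rng.normal(size=(n, 3))
    y = (X @ weights + rng.normal(size=n)) > 0
    return X, y

w_majority = np.array([1.5, 0.2, 0.0])  # link in the overrepresented group
w_minority = np.array([0.2, 0.0, 1.5])  # a different link elsewhere

# Training data mirrors the database skew: 95% majority population.
Xa, ya = make_population(9_500, w_majority)
Xb, yb = make_population(500, w_minority)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Held-out evaluation per population exposes the accuracy gap.
Xa_t, ya_t = make_population(2_000, w_majority)
Xb_t, yb_t = make_population(2_000, w_minority)
print("accuracy, overrepresented population: ", model.score(Xa_t, ya_t))
print("accuracy, underrepresented population:", model.score(Xb_t, yb_t))
```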
“Biases are typically ingrained in data,” states Rahman. “AI, in many ways, has worsened bias and inequity, especially in countries like the U.S. where healthcare disparities based on demographics are quite stark. The reason is pretty straightforward: the data used for training AI often comes from well-funded facilities where the patient population has more access, is more connected, and is more technologically equipped. Moreover, COVID-19 may have made the situation even worse by widening the care disparity. In many cases, data collected pre-COVID may no longer be useful for predictions.”
What Is Equity?
It’s different from “equality.” Different people start from different positions and confront different challenges, and arriving at equitable solutions can require different tools. It’s about more than intent: it’s also about systems, including systems that may have been built with the best intent. And it’s not a single outcome. Ensuring equity is not a one-time achievement, but rather a never-ending process.
Finding Equity in AI
“The process of creating equity in AI is tethered to the context,” says Rivera. “In healthcare, this often means using AI and ML to augment a touchpoint, interface, process, or service that a human can access.” Examples include a wide variety of tasks, from providing initial diagnoses from imaging to sending educational information or advertisements. The end goal is avoiding the development of systems, and associated outcomes, that perpetuate (much like humans do) existing social inequalities.
“Along with potential benefits to healthcare delivery, machine learning healthcare applications raise a number of ethical concerns,” wrote researchers in a recent article in the American Journal of Bioethics, in which they proposed an approach to identifying those concerns by modeling the conception, development, and implementation of the applications “and the parallel pipeline of evaluation and oversight tasks at each stage.”
Stanford Medicine’s Presence initiative (The Art and Science of Human Connection in Medicine) is working to explore equity in health AI, noting the need for medicine to “ascend on this wave of these technologies.”
“If the context of the AI is human-centric, then the goal, among any others it may have, is to be equitable through perpetual optimization and analysis to avoid undesired consequences harmful to a group within the intended human population,” says Rivera. “Current healthcare disparities are not because of demographics, but because racism, ableism, and other ‘isms’ stem from a collective human bias that results in prejudiced thinking and discriminatory, self-interested behavior.”
The purpose of AI is to help humankind – to help us make more informed decisions faster and more accurately. Just as we can learn how to individually move beyond the biases of the past, we can learn how to build systems that avoid those biases.
Life-sciences marketers working with AI can improve their systems with:
- Diversity from the very first stages of thinking and designing a system
- Data sets that include diverse, unconventional, or often-overlooked sources
- Check-ins built into the lifecycle of the tool to examine whether outcomes perpetuate or neutralize existing disparities (see the audit sketch after this list)
- Iterations that mitigate unintended consequences, such as introducing new data to improve the system
- Specific goals for a specific tool – that is, not hoping a system can do everything
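For the check-ins bullet above, a periodic review can be as simple as comparing selection rates and error rates across groups. A minimal sketch, with an assumed `audit` helper and synthetic stand-in data:

```python
# Hypothetical periodic audit: per-group selection rate and true-positive
# rate for a deployed model's predictions. Data below is synthetic.
import numpy as np

def audit(preds, labels, groups):
    """Report selection rate and true-positive rate for each group."""
    for g in np.unique(groups):
        mask = groups == g
        selection_rate = preds[mask].mean()
        positives = mask & (labels == 1)
        tpr = preds[positives].mean() if positives.any() else float("nan")
        print(f"group {g}: selection rate {selection_rate:.2f}, TPR {tpr:.2f}")

# Stand-in for a real check-in: imagine a model skewed toward one group.
rng = np.random.default_rng(3)
groups = rng.integers(0, 2, 1_000)
labels = rng.integers(0, 2, 1_000)
preds = rng.random(1_000) < 0.4 + 0.2 * groups
audit(preds, labels, groups)
```

Large gaps between groups in either rate are a signal to revisit the data and the model before the next release cycle.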
“When we train our AI systems at Intouch for natural language processing, we try to make them specific to the pharma industry,” says Rahman. “The language used by healthcare professionals, patients, researchers, pharmaceutical professionals, and insurance companies is all different in significant ways. We try to minimize the bias by training AI on different segments of the population, but availability of data varies quite a lot for each segment.”
The Promise of AI Done Right
Leaders industry-wide echo our experts. As Stanford’s Presence initiative puts it: “With purposeful foresight, we can recognize the opportunities for future outcomes while mitigating the risk of unintended consequences.” And as the MIT Technology Review put it, “If algorithms codify the values and perspectives of their creators, a broad cross-section of humanity should be present at the table when they are developed.”
When humans create, we use our own lived experiences, which are by definition incomplete and can’t reflect the real world fully. When we create AI algorithms, we magnify our abilities … but we also magnify our biases, both in the system’s design, and in the data that it uses to learn. In healthcare, the ramifications of that bias can be life-threatening.
The promise of AI is heady and tempting. It truly can help us do more than we ever dreamed possible. But as with the genie, we have to know exactly what to tell it to do before we let it out of the bottle, to make sure no one gets hurt.
Contributing Authors: Abid Rahman, VP of innovation; Antonio Rivera, director of inclusion; Brendon Thomas, director of innovation; Sarah Morgan, consultant/writer.