Garbage in, garbage out. If you’re unfamiliar with the expression, GIGO (we all love our acronyms, don’t we?) comes from the early days of computing and refers to the idea that when a computer is fed erroneous or incomplete information, the information it produces will be just as erroneous or incomplete. When it comes to using artificial intelligence (AI) to address healthcare issues and outcomes, GIGO is not an option.
In many ways, and especially in countries like the United States, where healthcare disparities based on demographics are significant, AI has worsened bias and inequity. Take this example from a recent article in the Journal of Global Health: “At a given risk score [from a widely used algorithm], Black patients are considerably sicker than white patients, as evidenced by signs of uncontrolled illnesses … The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for white patients.” So, garbage in (less money is spent on Black patients) leads to garbage out (the algorithm incorrectly concludes that Black patients are healthier than white patients, so they likely receive even less care). It’s a vicious cycle, but it can be disrupted.
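To make that mechanism concrete, here is a minimal sketch in Python; it is not the actual algorithm from the study, and every number is invented for illustration. It shows how training a model to predict cost rather than illness yields lower risk scores for a group that receives less spending at the same level of sickness:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Two groups with the SAME underlying illness distribution.
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B
illness = rng.gamma(shape=2.0, scale=1.0, size=n)

# Unequal access to care: group B generates ~40% less cost at the same
# illness level (the "we spend less money" mechanism in the quote).
cost = illness * np.where(group == 1, 600.0, 1000.0) + rng.normal(0, 50, n)

# Train a cost predictor on features that encode group membership
# (directly here; via proxies in practice). Its output is the "risk score."
X = np.column_stack([illness, group, np.ones(n)])
w, *_ = np.linalg.lstsq(X, cost, rcond=None)
risk_score = X @ w

# Compare true illness for the two groups within the same risk-score band.
band = (risk_score > np.quantile(risk_score, 0.55)) & (
    risk_score < np.quantile(risk_score, 0.65))
for g, name in [(0, "group A"), (1, "group B")]:
    sick = illness[band & (group == g)].mean()
    print(f"{name}: mean illness at the same risk score = {sick:.2f}")
```

Run as written, the printout shows group B patients carrying noticeably more illness than group A patients at the same risk score, which is exactly the pattern the quoted passage describes.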
What’s AI Equity and How Do We Get There?
AI equity means that artificial intelligence performs equally well across a range of scenarios and across different population groups and demographics. OK, but how do we get there? “It’s easier said than done, of course,” says Intouch’s vice president of innovation, Abid Rahman, in a new article for Med Ad News. “AI systems have to be trained by humans who are sometimes biased. And then even if the training is as unbiased as it can possibly be, AI systems have to be fed data, which is also sometimes biased.” Keeping an eye on four factors (governance, data quality, awareness, and transparency) when developing AI for healthcare scenarios can help. To learn more, read the entire article here.
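As one hypothetical illustration of the data-quality and transparency factors (a sketch, not a method from the Med Ad News article), a pre-deployment audit can compare how a model’s scores relate to real outcomes within every group rather than only overall. The invented helper below prints per-group outcome rates by score quartile:

```python
import numpy as np

def audit_by_group(scores, outcomes, groups):
    """Print the mean outcome per risk-score quartile for each group.

    Large gaps between groups within the same quartile are a red flag
    that the score is tracking a biased proxy rather than true need.
    """
    edges = np.quantile(scores, [0.25, 0.5, 0.75])
    quartile = np.digitize(scores, edges)          # bins 0..3
    for g in np.unique(groups):
        rates = []
        for q in range(4):
            mask = (groups == g) & (quartile == q)
            rates.append(outcomes[mask].mean() if mask.any() else float("nan"))
        print(f"group {g}: outcome by score quartile = "
              + ", ".join(f"{r:.2f}" for r in rates))
```

Feeding this audit the risk scores and true illness values from the sketch above would surface the disparity before the model ever reaches patients.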