Risks and Mitigation of Bias in Medical AI
Publication year: 2024
Citation: Vol. 7 No. 2
Judd Chamaa and Zach Harned *
In this article, the authors examine a number of research findings and real-world instances of bias in medical AI and analyze some of these associated risks and potential mitigations.
Artificial intelligence (AI) is a powerful tool with the potential to transform various aspects of health care, including medical diagnosis and patient treatment. Ultimately, this could lead to more efficient and effective care, saving lives and lowering costs. However, as with any technology, there are risks and challenges that must be addressed in order to achieve these goals and avoid potentially dangerous pitfalls. One pressing concern regarding medical AI is bias, which can lead to misdiagnosis, incorrect treatment recommendations, and discriminatory outcomes. This bias can also carry significant legal repercussions, such as liability incurred under various antidiscrimination laws or under contractual agreements between medical AI vendors and health care providers.
This article examines a number of research findings and real-world instances of bias in medical AI and analyzes some of these associated risks and potential mitigations.
Bias in Health Care
Medical AI is helping expand our understanding of various medical conditions. For example, one study 1 analyzed wearable sensor data with an AI model to more accurately predict the trajectory of Duchenne muscular dystrophy. Current assessments of the disease's progression have multiple shortcomings, including focusing on narrow behavioral measures, such as the number of steps or distance walked, or emphasizing degeneration of motor use in the lower body. The researchers' analysis of wearable sensor data allowed them to collect readouts of patients' whole-body movements in their daily lives. This enabled the AI model to
identify complex patterns in patient behaviors that are imperceptible to human observers. The researchers note that this expanded understanding of the disease's progression could allow for more effective drug and therapy development by enabling more accurate tracking of the body's responses to potential treatments.
Sometimes, in expanding our understanding of certain health conditions, medical AI reveals underlying bias in the applicable human-developed standards. One such recent study 2 showed how a medical AI model could be used to mitigate racial and socioeconomic biases resulting from use of the Kellgren and Lawrence Grade (KLG), an evaluation used since the 1950s to assess knee pain and the severity of osteoarthritis in patients' knees. But because the KLG was developed using a nondiverse population, it failed to accurately gauge knee pain and osteoarthritis severity in minority populations, leading to over-prescription of risky opioid painkillers for such patients rather than their qualifying for knee-replacement surgery.
The researchers demonstrated that medical AI models trained on diverse and representative data sets were the most accurate at predicting knee pain, thus mitigating the aforementioned bias. Researchers 3 also recently used an AI model to detect bias in the dermatology educational materials (e.g., medical textbooks, lecture notes, presentation slides, journal articles) used to train human physicians. The AI model was able to quantitatively demonstrate the underrepresentation of black and brown skin in these materials: only one in ten such images falls on the black-brown range of the Fitzpatrick Scale, a standardized means of evaluating skin tone. The hope is that this finding will help make physician training materials more diverse and representative, thereby leading to earlier, more frequent, and more accurate diagnoses for individuals with these underrepresented skin tones.
Bias Introduced by Medical AI
Although medical AI can sometimes be used to mitigate biases encountered in the health care system—as seen above in the study regarding knee pain—medical AI can also introduce (or perpetuate) bias in the health care system. For example, 4 a major U.S. hospital relied in part on an algorithm to decide which patients would be approved for special programs designed for patients with complex
chronic conditions, in order to provide them with extra support and treatment. A study of the algorithm found that it routinely picked white patients over black patients. This inequality stemmed from the algorithm's use of patients' future health costs as a proxy for their health care needs. This effectively primed the algorithm to perpetuate existing inequalities in access to health care, with the authors noting that "less money is spent on black patients who have the same level of need, and the algorithm thus falsely concludes that black patients are healthier than equally sick white patients."
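The proxy mechanism described above can be sketched in a few lines of illustrative Python. The data and scoring function here are invented for illustration only (the study's actual model was far more complex); the point is that when predicted cost stands in for medical need, two patients with identical need can receive different priority whenever historical spending differs across groups.

```python
# Hypothetical illustration of proxy bias: ranking patients for a
# care-management program by a cost proxy rather than by need itself.

# Illustrative records: (group, true_need_score, past_annual_cost_usd).
# Both patients have identical medical need, but group B has had less
# money spent on their care historically.
patients = [
    ("A", 0.8, 12000),  # group A: higher historical spending
    ("B", 0.8, 7000),   # group B: same need, lower historical spending
]

def cost_proxy_score(patient):
    """Admission score: predicted future cost, proxied here by past
    spending. Note that true need is never consulted."""
    _, _, past_cost = patient
    return past_cost

def need_score(patient):
    """The quantity the program actually cares about."""
    _, need, _ = patient
    return need

ranked_by_cost = sorted(patients, key=cost_proxy_score, reverse=True)
ranked_by_need = sorted(patients, key=need_score, reverse=True)

# Under the cost proxy, patient A is prioritized despite equal need,
# reproducing the disparity in the underlying spending data.
print(ranked_by_cost[0][0])  # -> A
```

Because the two patients tie on need, a need-based ranking would treat them interchangeably, while the cost proxy deterministically favors the group with higher historical spending.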
The biases...