UNEXPECTED INEQUALITY: DISPARATE-IMPACT FROM ARTIFICIAL INTELLIGENCE IN HEALTHCARE DECISIONS

Author: Takshi, Sahar

TABLE OF CONTENTS

  I. Introduction
  II. Artificial Intelligence in Healthcare
      A. How is Artificial Intelligence Used in the Healthcare Industry?
      B. UnitedHealth as a Cautionary Tale: Biases in AI and Resulting Discrimination
  III. Agency Oversight of Healthcare AI
      A. Food & Drug Administration's Current Approach to AI
      B. Barriers and Improvements to FDA's Approach
  IV. Untested Waters: The Role of Nondiscrimination in Healthcare AI
      A. Section 1557: Enforcement
      B. Section 1557: Private Right of Action
  V. Compliance Challenges
      A. Licensing
      B. Malpractice Liability
  VI. Recommendations
      A. Industry Standards and Internal Compliance
      B. Recommended Regulations and a "Super Regulation"
  VII. Conclusion

  I. INTRODUCTION

    A 2019 study revealed that an algorithm used by UnitedHealth, one of the nation's largest managed care organizations, might be violating state and federal law: The algorithm had a racially discriminatory impact.(1) The algorithm (called "Impact Pro") makes eligibility determinations for "high risk care management" services by identifying patients with complex health needs.(2) The researchers found that it deemed black patients' health needs "less than" white patients', and as a result, black patients were not targeted to benefit from specialized care management programs.(3) Such discriminatory effects from artificial intelligence and augmented intelligence (AI) are well documented in other contexts;(4) the study, however, was the first to expose these effects from automation in the healthcare industry. One can imagine an AI system that relies on a patient's oral description of their symptoms to design a treatment plan, or automated imaging technology that diagnoses skin conditions--both systems have the potential to discriminate against patients, because AI systems have been shown to have greater difficulty understanding African American vernacular and analyzing images of people of color.(5)

    Discrimination in the healthcare industry is not a novel concept. Thirty-five years ago, then-Secretary Margaret Heckler issued a report and recommendations based on the findings of the Task Force on Black and Minority Health; the report's focal point was that minority groups experience tremendous numbers of "excess deaths" compared to their non-minority counterparts.(6) Despite the Heckler Report's call to action--increased education and information, professional development, and research and data gathering--health disparities have persisted. Racial and ethnic minorities continue to experience higher rates of premature death and chronic disease. Native Americans and Alaska Natives have higher rates of infant mortality, and black patients are more likely to be inaccurately deemed as having a high pain tolerance.(7) From a healthcare entity's perspective, healthcare AI presents a significant compliance challenge because of the risk of discrimination and the relevant regulations (or lack thereof).

    The introduction of AI-informed decision making into the healthcare sphere will continue to exacerbate many of these inequities, and may introduce new ones (e.g., in diagnosis and treatment decisions). The promise of AI as a more consistent, and even more accurate, decisionmaker means automation is likely to become the standard in healthcare--but should these benefits outweigh its discriminatory impact? This Article proceeds in five parts. Part I outlines the current and prospective uses of AI in healthcare and provides examples of potential discriminatory effects. Part II discusses the Food & Drug Administration's current efforts to regulate AI used in medical settings, particularly as clinical decision supports; it also highlights the gaps in existing regulations and makes recommendations to bolster the agency's role in fighting healthcare discrimination. Part III introduces Section 1557 of the Affordable Care Act and argues that this nondiscrimination provision alone is inadequate to prevent or remedy disparate-impact from AI-informed decisions by providers and insurers. It begins by describing the Department of Health and Human Services' enforcement of Section 1557, draws on those enforcement actions to make recommendations for covered entities developing compliance programs that address AI, and then discusses the limited possibility of a private right of action for plaintiffs who are disparately impacted by healthcare AI. Part IV describes the novel compliance challenges posed by licensing laws and malpractice liability doctrines in relation to healthcare AI. Finally, Part V recommends that the healthcare industry develop internal compliance standards and that regulators promulgate policies addressing biases in healthcare AI.

  II. ARTIFICIAL INTELLIGENCE IN HEALTHCARE

    A. How is Artificial Intelligence Used in the Healthcare Industry?

      AI refers to a broad field of computer science in which machines are capable of making decisions that are typically made by humans.(8) Other industries use AI for decisions such as public-benefits eligibility determinations, risk-threat analysis, and employment recruitment efforts. In the healthcare industry, AI is increasingly being used for both administrative and clinical decisions. Some examples include:

      * Administrative decisions (e.g., making appointments, billing, reimbursement requests);

      * Custodial (e.g., driverless vehicles to pull laundry, food services, clean rooms, automated pharmacy, cross-check travel conditions);

      * Medical applications and wearables;

      * Caregiving ("robotic" cribs, voice companions, electric lifts);

      * Research and education;

      * Clinical data analytics;

      * Imaging, pathology, and radiology (e.g., detecting cancers, stroke, pneumonia, analyzing x-rays and scans);

      * Predictive diagnosis (i.e., clinical decision supports); and

      * Procedural AI (e.g., "tiny robots injected into the body for targeted drug delivery as an alternative to surgery").(9)

      AI is increasingly being used to make clinical determinations, such as diagnosing skin cancer(10) or recommending a combination of chemotherapy for cancer patients.(11) It can be used to determine an individual patient's risk of deteriorating, which allows physicians to predict which patients are likely to need transfer to the intensive care unit and to intervene before a clinical emergency, increasing the rate of survival.(12) Tools like reSET-O (created by Pear Therapeutics) treat opioid-use disorder with cognitive behavioral therapy delivered through a mobile application.(13) Digital therapeutics created by Akili Interactive Labs work to treat or improve cognitive impairments--such as ADHD, major depressive disorder, autism spectrum disorders, and multiple sclerosis--through interactive digital therapies similar to videogames.(14) Before proceeding, it is important to clarify that, at present, even when AI is used in healthcare for clinical purposes, "the physician, not the AI, has primacy."(15)

      Similarly, AI can be used to make administrative decisions outside of the examination or operating room. Managed care organizations use AI to prioritize risks in patients, allocate resources effectively, and allow physicians to intervene in patient care before health problems (and costs) skyrocket in critical situations.(16) AI can be used to automate medical billing--a task that is tedious, time consuming, and prone to errors when done manually.(17) AI also has the potential to help providers make better clinical decisions--for example, by informing physicians whether a patient is adhering to their therapy and how the patient is responding to it.(18)

      The risks of disparate-impact on suspect classes arising from AI-informed decision making should be of concern to healthcare providers, hospitals and clinics, and insurers. As these entities update their compliance and ethics programs to address risks such as privacy violations and fraud and abuse related to healthcare AI, they should also include nondiscrimination principles. Just like AI, an effective compliance program is a dynamic and constantly evolving system. The remainder of this Article will discuss how healthcare entities can incorporate nondiscrimination standards into their written policies and procedures, compliance auditing and investigation, training and education, and remediation procedures.
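      To make the auditing component concrete, one simple screen a compliance team might run is a comparison of selection rates across protected classes in an AI tool's output. The Python sketch below is a minimal illustration under stated assumptions: the record layout and field names are hypothetical, and the 0.8 threshold is borrowed from the EEOC's "four-fifths" rule of thumb in employment-selection guidance--a screening heuristic, not a legal test for healthcare discrimination.

        # Minimal disparate-impact screen (hypothetical data and field names).
        # Compares the rate at which each group receives a benefit (e.g., being
        # flagged for high-risk care management) against the most-favored group.

        from collections import defaultdict

        def selection_rates(records, group_key, selected_key):
            """Per-group rate at which a binary benefit was assigned."""
            counts = defaultdict(lambda: [0, 0])   # group -> [selected, total]
            for r in records:
                counts[r[group_key]][0] += int(r[selected_key])
                counts[r[group_key]][1] += 1
            return {g: sel / tot for g, (sel, tot) in counts.items()}

        def impact_ratios(rates):
            """Each group's selection rate relative to the most-favored group."""
            top = max(rates.values())
            return {g: rate / top for g, rate in rates.items()}

        # Hypothetical output of an AI triage tool (field names are assumptions).
        records = [
            {"race": "white", "flagged": 1}, {"race": "white", "flagged": 1},
            {"race": "white", "flagged": 0}, {"race": "black", "flagged": 1},
            {"race": "black", "flagged": 0}, {"race": "black", "flagged": 0},
        ]

        rates = selection_rates(records, "race", "flagged")
        for group, ratio in impact_ratios(rates).items():
            status = "investigate" if ratio < 0.8 else "ok"   # four-fifths heuristic
            print(f"{group}: selected {rates[group]:.0%}, ratio {ratio:.2f} -> {status}")

      A ratio below the threshold only flags a group for further investigation; it neither establishes nor rules out unlawful disparate-impact.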

    B. UnitedHealth as a Cautionary Tale: Biases in AI and Resulting Discrimination

      A recent study found that an algorithm used by U.S. health systems, including UnitedHealth Group, has been discriminating against black patients. The algorithm, created by Optum, was used to identify the highest-risk patients and inform the allocation of funds in the healthcare system. It used healthcare costs to make its predictions; however, spending on black patients is lower than on white patients due to "unequal access to care."(19) These historic racial disparities in access to care translated into a racial bias in the algorithm: only 17.7% of black patients were identified as high-risk, while the study estimates that the true number should have been 46.5%.(20) In a letter to UnitedHealth, New York officials stated: "By relying on historic spending to triage and diagnose current patients, your algorithm appears to inherently prioritize white patients who have had greater access to healthcare than black patients."(21) The racial bias in Optum's algorithm not only presents a discrimination problem (in the form of disparate-impact), but can also harm individual patients by preventing physicians from intervening in advance of a medical crisis.
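      The mechanism behind these numbers can be illustrated with a short simulation: when a model scores patients by historic cost rather than health need, a group that has historically spent less for the same level of sickness is systematically under-flagged. The Python sketch below is hypothetical--synthetic data and made-up parameters, not Optum's actual model--and shows only how a cost proxy reproduces an access disparity.

        # Simplified illustration of label bias: scoring patients by historic
        # cost reproduces an access disparity even when underlying health need
        # is identical across groups. Synthetic data; not the Optum model.

        import random

        random.seed(0)

        def simulate_patient(group):
            """Synthetic patient: identical need distribution across groups,
            but group 'B' historically incurs lower cost per unit of need."""
            need = random.gauss(50, 15)             # true health need (unobserved by the model)
            access = 1.0 if group == "A" else 0.6   # assumed historic access gap
            cost = max(need * access + random.gauss(0, 5), 0.0)
            return {"group": group, "need": need, "cost": cost}

        patients = [simulate_patient(g) for g in ("A", "B") for _ in range(5000)]

        # "Algorithm": flag the top quarter of patients by historic cost as high risk.
        cutoff = sorted(p["cost"] for p in patients)[int(len(patients) * 0.75)]

        for g in ("A", "B"):
            members = [p for p in patients if p["group"] == g]
            flagged = sum(p["cost"] >= cutoff for p in members) / len(members)
            high_need = sum(p["need"] >= 60 for p in members) / len(members)
            print(f"group {g}: flagged {flagged:.0%}, truly high-need {high_need:.0%}")

      Both synthetic groups are equally sick by construction, yet the cost-based flag selects the historically better-resourced group at several times the rate of the other--the same pattern the researchers observed, and the reason they suggested predicting direct measures of health rather than cost.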

      Discrimination as a result of biases in AI is well-documented in other fields,(22) but the UnitedHealth case study is the only publicly available evidence of such effects in the health care context.(23) Scholars suspect that the discriminatory effects of AI will seep into the healthcare...
