Author: Fenster, Jonathan

Introduction
I. Artificial Intelligence: Technical Perspectives and Artificial Intelligence's Benefits
   A. Background to Artificial Intelligence in Healthcare
   B. The Benefits of Artificial Intelligence in Healthcare
II. The Concerns for Artificial Intelligence in Healthcare
III. The Legal Issues of Artificial Intelligence in Healthcare
   A. Disparate Impact
      1. Title VI
      2. Section 1557 of the Affordable Care Act
   B. Disparate Treatment
IV. Current Regulations: Are They Effective?
   A. FDA Regulations
   B. Artificial Intelligence Accountability Acts
   C. Private Right of Action
V. Proposed Solution
   A. Can Artificial Intelligence Act with Intent? The "Personhood" Approach
      1. Proving Artificial Intelligence's Disparate Treatment
   B. Who is Responsible for Artificial Intelligence's Disparate Treatment?
      1. A Copyright Approach
      2. Work Made For Hire Doctrine and Artificial Intelligence in Healthcare
Conclusion

INTRODUCTION

Consider this hypothetical. In her state-of-the-art office, a prominent cardiologist utilizes the most technologically advanced echocardiogram machines, cardiac imaging devices, and computer systems to help diagnose and treat her patients. It is Friday afternoon, and she is getting ready to leave her office after a busy week. However, as she finishes writing up her final notes, she notices something strange. The last two patients she examined were of similar age, gender, and health, with nearly identical medical histories of minor heart problems. Yet after she entered these two patients' medical information into a symptom-checking program operated by artificial intelligence (AI), the program's algorithm assigned drastically different risk assessments to the two patients. The first patient, a Black man, was assessed with little risk for a future heart attack and was told that he need not follow up with his cardiologist for another five years. The second patient, a white man, was assessed with a higher risk for a future heart attack and was advised to check back in with his doctor every six months. The doctor was dumbfounded, not understanding why or how the program offered such different treatment plans for two nearly identical patients. Still, she proceeded with the recommendations of her "revolutionary" computer program. (1)

Discrimination in healthcare has manifested itself in numerous forms throughout American history. (2) From racially segregated hospitals to the Tuskegee studies, the American healthcare system has seen, and been complicit in, overt discriminatory tactics. (3) In recent years, discrimination in healthcare has taken a subtler form, operating under the guise of AI. (4) Hospital systems and individual doctors are becoming increasingly reliant on AI, a technology that can quickly analyze vast swaths of data and produce potential treatment plans with enormous efficiency. (5) This reliance is forcing policymakers to reconsider the effectiveness of current laws regarding liability and accountability for AI's actions. (6) Individual patients also have concerns about the use of AI in making healthcare decisions. (7) In particular, from the perspective of the individual patient, what modes of recourse exist to recover from algorithmic discrimination? Who can be held accountable? And finally, how will our legal system assess these questions of responsibility when confronting a technology it has never before dealt with?

Scholars have suggested that patients who suffer harm during their treatment can seek compensation through tort litigation. (8) For example, physicians can be sued for medical malpractice, and the manufacturers of the AI devices can be sued for design defect. (9) However, this Comment argues that it is inadequate to categorize healthcare discrimination as just another incidental tort issue that can simply be fixed by compensating victims who suffer such harm. This Comment focuses on developing a legal mechanism for patients seeking redress through discrimination theory, with the pointed goal of implementing a framework that does not shy away from or misdirect the root of the patients' suffering.

To bring a discrimination case, an individual can claim either intentional discrimination or unintentional discrimination. (10) In the context of AI, discrimination in healthcare is generally assumed to be unintentional, as data providers and physicians are presumed to use the technology in good faith. Plaintiffs have therefore attempted to litigate such cases under disparate impact theory, brought as a private right of action. (11) However, district courts are split on whether a private right of action can be brought by patients who have faced discrimination in a healthcare setting. (12) Additionally, the Food and Drug Administration (FDA) has yet to regulate artificial intelligence effectively and thus far has only put out policy recommendations that suggest third-party audits and inspections for bias in AI. (13) Such recommendations focus on ex ante regulation, hoping to prevent discrimination in the first place. (14) However, when discriminatory data inevitably slips through the cracks of these protective regulations, a patient is left with no mode of recourse. Take, for instance, the Black patient in the hypothetical above. Under such circumstances, who can the patient sue, what can they sue for, and how can they formulate their claims?

Part I of this Comment describes how AI works and how it has been adopted by healthcare professionals. (15) Part II addresses the risks of using AI in healthcare. (16) Part III analyzes the legal issues that arise when AI discriminates in healthcare. (17) Part IV analyzes the current regulations and proposals in place to prevent AI discrimination in healthcare, and whether these solutions are adequate. (18) Finally, Part V proposes a new solution for patients seeking recourse. This solution would establish a framework for litigation through disparate treatment by proving that the AI acted with the intent to discriminate based on the "personhood" theories of AI. (19) In particular, this Part will suggest using the McDonnell Douglas (20) burden-shifting framework and statistical evidence to show patterns of AI discrimination, as doing so may allow patients to see success in disparate treatment claims. One challenge with this solution is that even once the framework is implemented, it is unclear who will actually provide the compensation for the patient. This Comment argues that through the "AI Work Made for Hire Doctrine," courts can hold that AI works as an employee for the physician. Thus, the physician will bear responsibility for the AI's actions, providing patients who suffered from AI discrimination with a much-needed avenue for recourse.
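To make concrete what "statistical evidence to show patterns of AI discrimination" might look like, the sketch below computes a simple impact ratio comparing favorable-outcome rates across two patient groups, loosely modeled on the EEOC's "four-fifths" guideline from employment law. The data, group sizes, and the transposition of an employment-law benchmark to a healthcare setting are all hypothetical illustrations, not a method this Comment's sources prescribe.

```python
# Illustrative only: comparing rates of a favorable outcome (here, an AI
# system recommending follow-up care) across two patient groups. The
# patient data below is invented for demonstration.

def favorable_rate(outcomes):
    """Share of patients for whom the AI recommended follow-up care."""
    return sum(outcomes) / len(outcomes)

# 1 = AI recommended follow-up care, 0 = no follow-up recommended
black_patients = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 20% follow-up rate
white_patients = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% follow-up rate

ratio = favorable_rate(black_patients) / favorable_rate(white_patients)
print(f"Impact ratio: {ratio:.2f}")  # 0.29, well below the 0.8 benchmark
```

A ratio this far below the four-fifths benchmark is the kind of pattern evidence a plaintiff might marshal at the prima facie stage of a burden-shifting framework, though whether courts would accept such statistics in this context is precisely the open question this Comment addresses.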


I. Artificial Intelligence: Technical Perspectives and Artificial Intelligence's Benefits

    A. Background to Artificial Intelligence in Healthcare

      AI systems work by classifying and identifying objects, people, events, and situations. (21) Similar to humans perceiving and organizing patterns, AI learns to make associations. (22) An algorithm will be presented with multiple examples of elements and their correct classifications. (23) Then, the algorithm will break down the data into electrical signals and identify hidden patterns, similarities, and connections on its own, in what is known as training. (24) Finally, through experience and new data, the AI system will evolve and complete tasks autonomously. (25)
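The training-then-prediction cycle described above can be sketched in miniature. The toy "classifier" below simply memorizes labeled examples and classifies new data by its closest remembered example; the features (age and blood pressure) and risk labels are hypothetical, and real clinical systems use far richer data and far more sophisticated models.

```python
# Illustrative only: a toy nearest-neighbor classifier showing the
# train-then-predict cycle. All patient data here is invented.

def train(examples):
    """'Training' in this toy model is simply memorizing labeled examples."""
    return list(examples)

def predict(model, features):
    """Classify new data by the closest remembered example."""
    def distance(example):
        known_features, _label = example
        return sum((a - b) ** 2 for a, b in zip(known_features, features))
    _features, label = min(model, key=distance)
    return label

# Labeled examples: ([age, systolic blood pressure], risk classification)
model = train([
    ([45, 120], "low risk"),
    ([70, 160], "high risk"),
])

print(predict(model, [68, 155]))  # closest to the second example: "high risk"
```

The sketch also hints at where bias enters: the system's predictions are only as sound as the examples it memorized, so patterns in historical data, discriminatory or not, are reproduced automatically.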

      AI is transforming the landscape of the healthcare system, driven by the implementation of algorithmic programs in various settings including robotic surgery, medical imaging, and clinical decision support. (26) The artificial intelligence healthcare market is projected to grow from $6.9 billion in 2021 to $67.4 billion by 2027, a compound annual growth rate of 46.2%. (27) The proliferation of AI in healthcare can be explained by a confluence of factors, including its ability to help physicians treat patients more accurately and efficiently. (28) Additionally, the COVID-19 pandemic has piqued interest in advancing AI technology in healthcare. (29) One investor predicts that artificial intelligence will replace doctors by 2035, and a 2017 MIT study found that in some contexts, AI already produces better results than physicians. (30)

      The advent of AI in medicine means that doctors will be "relinquishing control and entrusting artificial intelligence to perform dangerous and complicated tasks." (31) In June 2018, the American Medical Association (AMA) passed its first policy recommendations for AI. AMA Board Member Jesse M. Ehrenfeld, M.D., M.P.H., commented that AI can advance the delivery of care in a way that outperforms doctors or machines alone, but warned that "challenges in the design, evaluation and implementation" must be addressed, including the risks of algorithmic discrimination. (32)

    B. The Benefits of Artificial Intelligence in Healthcare

      Through artificial intelligence, physicians can interpret large amounts of data in patients' medical records, including imaging studies, laboratory results, medical history, genetic testing, and countless other data points, to help make better-informed recommendations to their patients. (33) These clinical decision support systems "provid[e] guidance on the safe prescription of medicines, guideline adherence, [and] simple risk screening." (34) For example, in providing radiation dosing to cancer patients, AI "[s]ystems... can analyze CT scans of a patient with cancer and by combining this data with learning from previous patients, provide a radiation treatment recommendation, tailored to that patient which aims to minimize damage to nearby organs." (35)


    Although we have seen the numerous benefits...
