Machine Learning-Based Medical Devices: The FDA's Regulation, Requirements, and Restrictions

Date: 22 September 2022
Author: Beam, Charli

TABLE OF CONTENTS

I. INTRODUCTION
II. Medical Data Sets and Machine Learning Intersect in Diagnostic Healthcare Tools
  A. SOURCES OF BIAS IN MEDICAL DATA SETS
  B. THE EFFECT OF BIAS IN MEDICAL DATA SETS
III. How the Current FDA Regulation of Machine Learning Fails to Address Bias Issues
  A. RELEASE OF AI/ML ACTION PLAN
  B. FDA RELIANCE ON MANUFACTURERS FOR DEVELOPMENT AND EVALUATION
IV. Improving Pre- and Post-Market Requirements for Manufacturers Creating ML Devices
  A. REQUIRED LABELING OF MACHINE LEARNING MEDICAL DEVICES
  B. REPORTING AND PERFORMANCE REQUIREMENTS
V. CONCLUSION

I. INTRODUCTION

The FDA should develop regulations that require evaluation of, and increase transparency about, potential bias in medical machine learning. Machine learning (ML) relies on data sets, which may contain hidden biases, so medical devices built on ML algorithms carry a risk of producing faulty results. Any machine learning system trained on a biased data set can inherit that bias, and medical devices using machine learning are no exception. Underrepresentation or bias in medical data is a problem because "diverse participation [in clinical trials and biobanking for medical research] is necessary to identify the most effective treatments in different groups." (2) The FDA should address these potential bias issues by requiring machine learning medical device manufacturers to disclose to the FDA the demographics of the data sets that trained the algorithm, including any populations that are underrepresented in, or entirely absent from, the data. The FDA should use its authority to require manufacturers to monitor and evaluate the safety, effectiveness, and reliability of the device after approval, including any negative effect on sub-populations. (3) The FDA should also increase support for regulatory science efforts focused on developing methods to evaluate ML-based medical software and eliminating algorithmic bias.

There is no simple solution to algorithmic bias. Every data set is unique and presents possible bias. Without a way to accurately measure the amount of bias in an algorithm, the FDA must require manufacturers to have accurate reports available about the data sets used to train the algorithm. The slogan "garbage in, garbage out" applies to machine learning practices, as does the related phrase "bias in, bias out." (4) Tracking the data is important because "completeness of metadata provides information about the population, disease, and data types on which the algorithm was trained or validated, which is essential to extrapolating assumptions of generalizability of algorithm performance to other populations." (5) Increased attention to the early stages of data collection and algorithm development may allow manufacturers to create less-biased data sets. In addition, the FDA should continue working with universities and industry experts to improve evaluation and development of ML devices.

  II. MEDICAL DATA SETS AND MACHINE LEARNING INTERSECT IN DIAGNOSTIC HEALTHCARE TOOLS

    Machine learning is "an automated process of discovering correlations (sometimes alternatively referred to as relationships or patterns) between variables in a dataset, often to make predictions or estimates of some outcome." (6) Once an individual or organization creates a machine learning algorithm, it can process a huge amount of data, such as electronic health records, in an extremely short amount of time. Some machine learning models can learn from real-world use and improve performance. (7) Other models are updated with new data through periodic retraining but do not learn and change on the fly.

    The healthcare industry uses machine learning medical devices for both treatment and diagnosis. (8) This paper focuses on diagnostic ML medical devices because most ML devices approved by the FDA are diagnostic. (9) Properly trained artificial intelligence (AI), which includes ML, "has the potential to dramatically improve diagnosis. [AI's] potential deserves emphasis, given that diagnostic errors [affect] five percent of U.S. outpatients annually, accounting for between six and 17 percent of adverse events." (10)

    A. SOURCES OF BIAS IN MEDICAL DATA SETS

      ML medical device training data sets rely heavily on patient data and health records. Patient data is an individual's medical information and includes "past and current health or illness, treatment history, lifestyle choices and genetic data." (11) A health record is a collection of patient data that stores the information in one spot. (12) The healthcare industry's use of existing healthcare technologies, such as electronic health records, has left behind minority and low socioeconomic status populations in the past and contributed to health disparities. (13) Bias in medical data often occurs because of health disparities, which are differences in health and healthcare between groups that stem from broader inequalities. (14) These disparities may adversely affect groups based on their racial or ethnic groups, gender, age, disabilities, sexual orientation, or other characteristics historically linked to discrimination or exclusion. (15)

      U.S. data sets "may inadequately reflect all groups in society, or may under-include women, and overrepresent persons of European ancestry, causing the software to provide unreliable or unsafe recommendations for minorities." (16) In addition, "machine learning algorithms used for medical image classification [may] underperform on images collected from populations independent to those on which the algorithms were trained." (17) Under-enrollment of minorities in clinical studies due to distrust, limited access to health care, and provider perceptions leads to underrepresentation of minorities in data sets. (18)
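      The mechanism described above can be illustrated with a toy simulation (not drawn from the article; the groups, feature, and cutoffs are invented for illustration): when one subgroup dominates the training data, a model fitted to minimize overall training error tracks the majority group's pattern and underperforms on the minority group.

```python
# Toy sketch (hypothetical data): a single-feature threshold "model" trained
# on data dominated by group A underperforms on underrepresented group B.

def make_group(true_cutoff, xs):
    # label is 1 when the feature exceeds the group's true cutoff
    return [(x, 1 if x > true_cutoff else 0) for x in xs]

xs = list(range(11))            # feature values 0..10
group_a = make_group(5, xs)     # majority group: positive when x > 5
group_b = make_group(7, xs)     # minority group: positive when x > 7

# Training set with group A overrepresented 9:1 relative to group B.
train = group_a * 9 + group_b

def fit_threshold(data):
    # pick the cutoff with the fewest errors on the training data
    return min(range(11), key=lambda t: sum((x > t) != y for x, y in data))

def accuracy(t, data):
    return sum((x > t) == y for x, y in data) / len(data)

t = fit_threshold(train)
print(t, accuracy(t, group_a), accuracy(t, group_b))
# the fitted cutoff matches the majority group's (5), so accuracy is
# perfect on group A but drops on group B (9/11, about 0.82)
```

      The point is not the toy model but the failure mode: overall training error looks low, yet subgroup performance diverges, which is exactly what demographic disclosure and subgroup monitoring would surface.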

    B. THE EFFECT OF BIAS IN MEDICAL DATA SETS

      Algorithms trained on biased data sets can produce unjustly prejudicial results. In decision-making, fairness is described as "the absence of any prejudice or favoritism toward an individual or group based on...