PRESENTATION ON EXPLOITATION AND INDIVIDUAL AUTONOMY IN AI HEALTHCARE.

Author: Charlotte Tschider

The Journal of Law and Health's Digital Health & Technology Symposium

CLEVELAND-MARSHALL COLLEGE OF LAW FRIDAY, APRIL 8, 2022

The following is a transcription from The Digital Health and Technology Symposium presented at Cleveland-Marshall College of Law by The Journal of Law & Health on Friday, April 8, 2022. This transcript has been lightly edited for clarity.

Charlotte A. Tschider:

First of all, it's wonderful to be amongst such fantastic scholars. I am so thankful to the students and the organizers of this publication. It is a great privilege to be able to discuss these topics with folks who are deep in figuring out what the right solutions might be.

I wanted to share a little bit of background about how I have gone down this path before I jump into the details. Dr. Krista Kennedy of Syracuse University and I have worked extensively on analyzing the relationships between individuals and devices, especially pervasively attached devices like hearing aids.

As part of that research, we combed through blog posts and communications between data scientists and others in organizations. As we did this, we came to recognize the concept of "datafication," the digital rendering of a person as a representation in data. For example, if you have a person's data, that freestanding data might be treated differently than if the human being were sitting in front of you. We saw datafication in blog entries where data scientists attempted to use as much data as possible, or to create devices that collect as much environmental or behavioral data as possible. (2)

Something did not sit quite right about this practice in the context of medical devices. Many individuals are dependent on medical devices. This raises the question: will the medical device offer better treatment or a better diagnosis than we might otherwise have achieved without that same amount of data? Should there be a corresponding negative impact on the individuals who use these devices, even though the patient or their insurer has already paid for them?

These considerations led me to ask: what does exploitation look like in this space? And is it possible to connect the concepts of data loss and excessive data use to this concept of patient exploitation?

To address these issues, I like to start with the technology. It is always super exciting to discuss the underlying technology. We have smart hearing aids and insulin pumps. There are a lot of diagnostic efforts around the imaging space. There are also many surgical robots, some more complex and some built for a particular purpose. Artificial intelligence is not something far off in the future; it is something we are using today. So, the issues associated with it are things that we really need to prioritize and think about from a legal perspective now. One of the biggest problems is that data is essential for the creation and function of these devices.

In some cases, there may not be representative data, which can lead to a disproportionate impact on certain populations, including safety issues that affect some populations more than others. The idea is that data is essential to safety, and for useful development of artificial intelligence, that data must be identifiable. We often need this type of data over a long period of time, not just at a single point in time. As you can imagine, when you have a device that...
