
Wednesday, 8 May 2024

Automated epistemic objectification in healthcare

The provision of healthcare is an intrinsically cooperative enterprise in which patients' active engagement is crucial. It is paramount to recognize the value of patients' epistemic contributions in medical encounters, and that their testimony is central to giving healthcare professionals epistemic access to their state of health. These are fundamental preconditions for finding the most suitable course of medical action, tailored to patients' needs in their singularity.

However, the ever-growing body of literature on epistemic injustice in medicine and healthcare shows that identity prejudices often diminish patients' credibility, hampering their ability to play their role as epistemic subjects in medical encounters. This often leads to the misrecognition of patients' epistemic status, for instance through the hasty dismissal of their testimony as irrelevant, and thus to both epistemic and practical harm. Such situations are already highly problematic in encounters between patients and human physicians.

The introduction of artificial intelligence (AI)-based systems, such as machine learning (ML) models, into medical practice shows that medical care is no longer an exclusively human domain. So what happens when an ML system, as an allegedly objective epistemic authority and a powerful knowledge-generating entity, enters the picture and considerably influences central medical procedures? As we argue in a recently published paper, we must be wary of the roles ML systems play in crucial medical practices, such as providing diagnoses and treatment recommendations. More precisely, we advance the claim that ML systems can epistemically objectify patients in subtle but potentially extremely harmful ways that must be addressed in their own right. These harms are not to be subsumed under other ethical concerns currently receiving extensive attention in the AI ethics debate.



Our paper discusses the hypothetical case of an ML system generating a treatment recommendation that goes against a patient's values. In such situations, the patient is confronted with an unsuitable course of action. We argue that the patient risks being epistemically objectified if the system cannot pick up on their values and produce a new recommendation aligned with their expectations. Crucially, this is because the patient, precisely when they need to actively provide information relevant to the further course of action, is prevented from effectively doing so by the system's setup. The bottom line is that the patient's status in this knowledge-producing endeavor may be degraded to that of a mere object of medical action.

Against these considerations, one could argue that a human physician could simply override the recommendation produced by the ML system if it goes against the patient's values. However, we see this move as too simplistic, for two main reasons. First, physicians may be epistemically dependent on an ML system and de facto unable to override it. Even though this is surely an undesirable scenario, it is recognizable in systems currently deployed. Second, disregarding an ML system's recommendation does not guarantee that physicians can readily find a further, more appropriate course of action; rather, they might benefit from an ML system that supports their decision-making.

Overall, the general plea of our paper is to show the need for a flexible ML epistemology that can incorporate, and adapt to, newly acquired and ethically relevant information that patients actively provide. This is key to avoiding the objectification of patients in morally salient medical interactions. The sketch below illustrates, in a deliberately simplified form, the kind of flexibility we have in mind.
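To make this concrete, here is a minimal toy sketch in Python. It is not the paper's proposal, nor any deployed system: all names (Treatment, recommend, the attribute labels) are hypothetical, and the hard problems of eliciting and representing patient values are assumed away. It only illustrates the structural point that a recommender's output should be revisable in light of information the patient actively supplies.

```python
# Illustrative toy sketch only. All names are hypothetical; this is not
# the paper's method or a real clinical system. It shows, under strong
# simplifying assumptions, a recommender whose output can be revised once
# the patient actively supplies ethically relevant information.

from dataclasses import dataclass, field


@dataclass
class Treatment:
    name: str
    predicted_benefit: float  # e.g., a model's outcome score in [0, 1]
    attributes: set[str] = field(default_factory=set)  # e.g., {"inpatient"}


def recommend(candidates: list[Treatment],
              patient_constraints: set[str]) -> list[Treatment]:
    """Rank candidate treatments by predicted benefit, after filtering out
    any option that conflicts with values the patient has stated.

    A rigid system would ignore patient_constraints and return only the
    top-scoring option; here the patient's testimony reshapes the output."""
    acceptable = [t for t in candidates
                  if not (t.attributes & patient_constraints)]
    return sorted(acceptable, key=lambda t: t.predicted_benefit, reverse=True)


if __name__ == "__main__":
    options = [
        Treatment("aggressive chemotherapy", 0.85, {"inpatient", "high-burden"}),
        Treatment("targeted oral therapy", 0.72, {"sedating"}),
        Treatment("palliative radiotherapy", 0.60),
    ]

    # The top-scoring option conflicts with the patient's stated values
    # (say, a strong preference against prolonged hospitalization), so the
    # system revises its recommendation instead of forcing the issue.
    patient_says = {"inpatient"}
    for t in recommend(options, patient_says):
        print(f"{t.name}: {t.predicted_benefit:.2f}")
```

In this toy setup, the patient's testimony (the constraint set) directly reshapes the system's output rather than being treated as noise around a fixed recommendation. A real system would of course need far richer representations of values and trade-offs; the point of the sketch is only the revisability of the recommendation.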


Dr. Giorgia Pozzi is a postdoctoral researcher at Delft University of Technology (The Netherlands), working at the intersection of the ethics and epistemology of artificial intelligence (AI) in medicine and healthcare. She has a particular interest in tackling forms of epistemic injustice that emerge from the integration of AI systems into medical practice.
