Department of Values, Technology & Innovation, Section on Ethics and Philosophy of Technology, Delft University of Technology, Delft, the Netherlands.
Philosophy Department, University of Twente, Enschede, the Netherlands.
J Eval Clin Pract. 2021 Jun;27(3):529-536. doi: 10.1111/jep.13535. Epub 2021 Jan 22.
This paper aims to show how the focus on eradicating bias from Machine Learning decision-support systems in medical diagnosis diverts attention from the hermeneutic nature of medical decision-making and the productive role of bias. It also aims to show how the introduction of Machine Learning systems alters the diagnostic process. Reviewing the negative conception of bias and incorporating the mediating role of Machine Learning systems in medical diagnosis are essential for encompassing, critical and informed medical decision-making.
This paper presents a philosophical analysis, employing the conceptual frameworks of hermeneutics and technological mediation and drawing on the case of Machine Learning algorithms assisting doctors in diagnosis. It unravels the non-neutral role of algorithms in doctors' decision-making and points to the dialogical nature of their interaction not only with patients but also with the technologies that co-shape the diagnosis.
Following the hermeneutical model of medical diagnosis, we review the notion of bias to show how it is an inalienable and productive part of diagnosis. We show how Machine Learning biases join human ones to actively shape the diagnostic process, simultaneously expanding and narrowing medical attention, highlighting certain aspects while obscuring others, thus mediating medical perceptions and actions. On this basis, we demonstrate how doctors can take Machine Learning systems on board for enhanced medical diagnosis while remaining aware of their non-neutral role.
We show that Machine Learning systems join doctors and patients in a triad that co-shapes medical diagnosis. We highlight that it is imperative to examine the hermeneutic role of Machine Learning systems. Additionally, we suggest involving not only the patient but also colleagues in the diagnostic process, to ensure that it remains encompassing, to respect its inherently hermeneutic nature and to work productively with the existing human and machine biases.