Grote Thomas, Keeling Geoff
Ethics and Philosophy Lab; Cluster of Excellence: Machine Learning: New Perspectives for Science, University of Tübingen, Maria von Linden Str. 6, D-72076 Tübingen, Germany.
Institute for Human-Centered AI and McCoy Family Center for Ethics in Society, Stanford University, 450 Serra Mall, 94305 Stanford, CA USA.
Ethics Inf Technol. 2022;24(3):39. doi: 10.1007/s10676-022-09658-7. Epub 2022 Aug 31.
The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; that is, algorithms trained on diverse datasets that perform better for traditionally disadvantaged groups. Whilst such algorithmic decisions may be unfair, the fairness of algorithmic decisions is not the appropriate locus of moral evaluation. What matters is the fairness of final decisions, such as diagnoses, resulting from collaboration between clinicians and algorithms. We argue that affirmative algorithms can permissibly be deployed provided the resultant final decisions are fair.
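The abstract's claim that affirmative algorithms are "unfair relative to the standard fairness metrics" can be made concrete with a small illustration (not from the paper itself): below is a sketch of one common group-fairness criterion, equal opportunity, which requires equal true-positive rates across groups. The group labels and toy predictions are invented for illustration; a model that performs better for the disadvantaged group "A" violates the criterion just as readily as one that performs worse.

```python
# Illustrative sketch (not from the paper): how a standard group-fairness
# metric flags an "affirmative algorithm" as unfair when it performs
# better for a traditionally disadvantaged group. All data below is toy data.

def true_positive_rate(y_true, y_pred):
    """Sensitivity: fraction of actual positives correctly predicted."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives)

def equal_opportunity_gap(y_true, y_pred, group):
    """Per-group TPRs and their difference (equal-opportunity criterion)."""
    tprs = {}
    for g in ("A", "B"):
        yt = [t for t, gr in zip(y_true, group) if gr == g]
        yp = [p for p, gr in zip(y_pred, group) if gr == g]
        tprs[g] = true_positive_rate(yt, yp)
    return tprs, tprs["A"] - tprs["B"]

# Toy case: the model detects every condition in group A (disadvantaged)
# but only half of those in group B (advantaged).
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_pred = [1, 1, 1, 1, 1, 1, 0, 0]

tprs, gap = equal_opportunity_gap(y_true, y_pred, group)
print(tprs, gap)  # nonzero gap: equal opportunity flags the model as unfair
```

The metric is symmetric: it registers a violation whether the gap favors the advantaged or the disadvantaged group, which is exactly why the paper argues that fairness should be assessed at the level of final clinician-plus-algorithm decisions rather than at the level of these algorithmic metrics.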