Rueda Jon, Rodríguez Janet Delgado, Jounou Iris Parra, Hortal-Carmona Joaquín, Ausín Txetxu, Rodríguez-Arias David
Department of Philosophy 1, University of Granada, Granada, Spain.
FiloLab Scientific Unit of Excellence, University of Granada, Granada, Spain.
AI Soc. 2022 Dec 21:1-12. doi: 10.1007/s00146-022-01614-9.
The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has argued that, in AI medicine, accuracy is a more important value than explainability. In this article, we situate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from the standpoint of outcome-oriented justice because it helps to maximize patients' benefits and to optimize limited resources. However, we claim that the opaqueness of the algorithmic black box and its lack of explainability threaten core commitments of procedural fairness, such as accountability, avoidance of bias, and transparency. To illustrate this, we discuss liver transplantation as a case of allocating critical medical resources in which the lack of explainability of AI-based allocation algorithms is procedurally unfair. Finally, we provide a number of ethical recommendations for considering the use of unexplainable algorithms in the distribution of health-related resources.