Starke Georg, Ienca Marcello
College of Humanities, EPFL, 1015 Lausanne, Switzerland.
Camb Q Healthc Ethics. 2024 Jul;33(3):360-369. doi: 10.1017/S0963180122000445. Epub 2022 Oct 20.
Artificial intelligence (AI) plays a rapidly increasing role in clinical care. Many of these systems, for instance deep learning-based applications using multilayered artificial neural networks, exhibit epistemic opacity in the sense that they preclude comprehensive human understanding. In consequence, voices from industry, policymakers, and research have suggested trust as an attitude for engaging with clinical AI systems. Yet, in the philosophical and ethical literature on medical AI, the notion of trust remains fiercely debated. Trust skeptics hold that talking about trust in nonhuman agents constitutes a category error and worry that the concept may be misused for ethics washing. Proponents of trust have responded to these worries from various angles, disentangling different concepts and aspects of trust in AI, potentially organized in layers or dimensions. Given the substantial disagreements across these accounts of trust and the important worries about ethics washing, we embrace a diverging strategy here. Instead of aiming for a positive definition of the elements and nature of trust in AI, we proceed negatively, that is, we look at cases where trust or distrust are misplaced. Comparing these instances with trust fostered in doctor-patient relationships, we systematize them and propose a taxonomy of both misplaced trust and distrust. By inverting the perspective and focusing on negative examples, we develop an account that provides useful ethical constraints for decisions in clinical as well as regulatory contexts and that highlights how we should engage with medical AI.