Verdicchio Mario, Perin Andrea
Department of Management Information and Production Engineering, University of Bergamo, Bergamo, Italy.
Berlin Ethics Lab, Technische Universität Berlin, Berlin, Germany.
Philos Technol. 2022;35(1):11. doi: 10.1007/s13347-022-00506-6. Epub 2022 Feb 19.
A discussion concerning whether to conceive of Artificial Intelligence (AI) systems as responsible moral entities, also known as "artificial moral agents" (AMAs), has been going on for some time. In this regard, we argue that the notion of "moral agency" should be attributed only to humans, on the basis of their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence, and argue against fully automated systems in medicine. With this perspective in mind, we focus on the use of AI-based diagnostic systems and shed light on the complex networks of persons, organizations, and artifacts that emerge when AI systems are designed, developed, and used in medicine. We then discuss relational criteria of judgment that support the attribution of responsibility to humans when adverse events are caused or induced by errors in AI systems.