TU Delft, Faculty of Technology, Delft, The Netherlands.
University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands.
Bioethics. 2022 Feb;36(2):162-169. doi: 10.1111/bioe.12887. Epub 2021 Jun 5.
Medical AI is increasingly being developed and tested to improve medical diagnosis, prediction, and treatment of a wide array of medical conditions. Despite worries about the explainability and accuracy of such medical AI systems, it is reasonable to assume that they will be increasingly implemented in medical practice. Current ethical debates focus mainly on design requirements and suggest embedding certain values, such as transparency, fairness, and explainability, in the design of medical AI systems. Beyond concerns about their design, medical AI systems also raise questions about physicians' responsibilities once these technologies are implemented and used. How do physicians' responsibilities change with the implementation of medical AI? Which competencies do physicians have to learn to interact responsibly with medical AI? In the present article, we introduce the notion of forward-looking responsibility and, through this conceptual lens, enumerate a number of competencies and duties that physicians ought to exercise to use medical AI responsibly in practice. These include, among others, understanding the range of reasonable outputs, being aware of one's own experience and skill decline, and monitoring potential accuracy decline of the AI systems.