O'Sullivan Shane, Nevejans Nathalie, Allen Colin, Blyth Andrew, Leonard Simon, Pagallo Ugo, Holzinger Katharina, Holzinger Andreas, Sajid Mohammed Imran, Ashrafian Hutan
Department of Pathology, Faculdade de Medicina, Universidade de São Paulo, São Paulo, Brazil.
Research Center in Law, Ethics and Procedures, Faculty of Law of Douai, University of Artois, France.
Int J Med Robot. 2019 Feb;15(1):e1968. doi: 10.1002/rcs.1968.
This paper aims to move the debate forward regarding the potential for artificial intelligence (AI) and autonomous robotic surgery with a particular focus on ethics, regulation and legal aspects (such as civil law, international law, tort law, liability, medical malpractice, privacy and product/device legislation, among other aspects).
We conducted an intensive literature search on current and emerging AI and autonomous technologies (eg, vehicles), military and medical technologies (eg, surgical robots), relevant frameworks and standards, and cybersecurity/safety and legal systems worldwide. We discuss the unique challenges that robotic surgery poses for proposals made for AI more generally (eg, Explainable AI) and for machine learning more specifically (eg, the black box problem), and we offer recommendations for developing and improving relevant frameworks or standards.
We classify responsibility into the following: (1) Accountability; (2) Liability; and (3) Culpability. All three aspects are addressed when discussing responsibility for AI and autonomous surgical robots, whether the patients are civilian or military (although these aspects may require revision should robots ever be granted citizenship). Culpability is the component with the least clarity, since attributing it to a machine is unthinkable given the current state of technology. We envision that in the near future a surgical robot will be able to learn and perform routine operative tasks under the supervision of a human surgeon. This represents a surgical parallel to autonomously driven vehicles: a human remains in the 'driving seat' as a 'doctor-in-the-loop', thereby safeguarding patients undergoing operations supported by surgical machines with autonomous capabilities.