

Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery.

Author Information

O'Sullivan Shane, Nevejans Nathalie, Allen Colin, Blyth Andrew, Leonard Simon, Pagallo Ugo, Holzinger Katharina, Holzinger Andreas, Sajid Mohammed Imran, Ashrafian Hutan

Affiliations

Department of Pathology, Faculdade de Medicina, Universidade de São Paulo, São Paulo, Brazil.

Research Center in Law, Ethics and Procedures, Faculty of Law of Douai, University of Artois, France.

Publication Information

Int J Med Robot. 2019 Feb;15(1):e1968. doi: 10.1002/rcs.1968.

Abstract

BACKGROUND

This paper aims to move forward the debate on the potential of artificial intelligence (AI) and autonomous robotic surgery, with a particular focus on ethics, regulation, and legal aspects (including civil law, international law, tort law, liability, medical malpractice, privacy, and product/device legislation).

METHODS

We conducted an intensive literature search on current and emerging AI and autonomous technologies (eg, autonomous vehicles), military and medical technologies (eg, surgical robots), relevant frameworks and standards, and cyber security/safety and legal systems worldwide. We discuss the unique challenges that robotic surgery poses for proposals made for AI more generally (eg, explainable AI) and for machine learning more specifically (eg, the black-box problem), as well as recommendations for developing and improving the relevant frameworks and standards.

CONCLUSION

We classify responsibility into three components: (1) Accountability, (2) Liability, and (3) Culpability. All three are addressed when discussing responsibility for AI and autonomous surgical robots, whether the patients are civilian or military (although these aspects may require revision should robots ever be granted citizenship). The component offering the least clarity is Culpability, since it is unthinkable in the current state of the technology. We envision that, in the near future, a surgical robot will be able to learn and perform routine operative tasks under the supervision of a human surgeon. This is the surgical parallel to autonomously driven vehicles: a human remains in the 'driving seat' as a 'doctor-in-the-loop', thereby safeguarding patients undergoing operations supported by surgical machines with autonomous capabilities.

