University College London, Division of Surgery and Interventional Science, Research Department of Targeted Intervention; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London; University College London Hospital, Division of Uro-oncology.
Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London.
Eur Urol Focus. 2022 Mar;8(2):613-622. doi: 10.1016/j.euf.2021.04.006. Epub 2021 Apr 30.
As the role of AI in healthcare continues to expand, there is increasing awareness of its potential pitfalls and the need for guidance to avoid them.
To provide ethical guidance on developing narrow AI applications for surgical training curricula. We define standardised approaches to developing AI-driven applications in surgical training that address currently recognised ethical implications of utilising AI on surgical data. We aim to describe an ethical approach, based on the current evidence, understanding of AI, and available technologies, by seeking consensus from an expert committee.
The project was carried out in three phases: (1) a steering group was formed to review the literature and summarise the current evidence; (2) a larger expert panel convened and discussed the ethical implications of AI application based on the current evidence, and a survey was created with input from panel members; (3) panel-based consensus findings were determined using an online Delphi process to formulate guidance. Thirty experts in AI implementation and/or training, including clinicians, academics, and industry representatives, contributed. The Delphi process comprised three rounds; additions to the second- and third-round surveys were formulated based on the answers and comments from previous rounds. Consensus opinion was defined as ≥80% agreement.
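As an illustrative sketch only (not part of the published protocol), the ≥80% agreement rule could be checked per survey item as follows; the responses and function name are hypothetical:

    # Hypothetical sketch of the Delphi consensus check; not the authors' code.
    CONSENSUS_THRESHOLD = 0.80  # consensus defined as >=80% agreement

    def consensus_reached(responses):
        """responses: list of booleans, True = panellist agrees with the item."""
        if not responses:
            return False
        agreement = sum(responses) / len(responses)
        return agreement >= CONSENSUS_THRESHOLD

    # Example: 25 of 30 panellists agree -> ~83% agreement -> consensus reached
    print(consensus_reached([True] * 25 + [False] * 5))  # True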
There was a 100% response rate across all three rounds. The resulting formulated guidance showed good internal consistency, with a Cronbach's α of >0.8. There was 100% consensus that there is currently a lack of guidance on the utilisation of AI in the setting of robotic surgical training. Consensus was reached in multiple areas, including: (1) data protection and privacy; (2) reproducibility and transparency; (3) predictive analytics; (4) inherent biases; and (5) areas of training most likely to benefit from AI.
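For readers unfamiliar with the internal-consistency statistic reported above, a minimal sketch of Cronbach's α computed over a respondent-by-item rating matrix follows; the ratings shown are illustrative placeholders, not study data:

    # Hypothetical sketch of Cronbach's alpha; ratings below are illustrative.
    import numpy as np

    def cronbach_alpha(item_scores):
        """item_scores: respondents x items array of numeric ratings."""
        k = item_scores.shape[1]                         # number of items
        item_vars = item_scores.var(axis=0, ddof=1)      # per-item variance
        total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Illustrative 5-point Likert responses: 6 respondents x 4 items
    ratings = np.array([[4, 5, 4, 5],
                        [3, 4, 4, 4],
                        [5, 5, 5, 4],
                        [4, 4, 3, 4],
                        [2, 3, 3, 3],
                        [5, 4, 5, 5]])
    print(round(cronbach_alpha(ratings), 2))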
Using the Delphi methodology, we achieved international expert consensus and content validation for guidance on the ethical implications of AI in surgical training, providing an ethical foundation for launching narrow AI applications in this setting. This guidance will require further validation.
As the role of AI in healthcare continues to expand, there is increasing awareness of its potential pitfalls and the need for guidance to avoid them. In this paper we provide guidance on the ethical implications of AI in surgical training.