Department of Urology, University of Virginia, Charlottesville, Virginia.
University of Virginia School of Medicine, Charlottesville, Virginia.
J Surg Educ. 2017 Nov-Dec;74(6):1052-1056. doi: 10.1016/j.jsurg.2017.05.011. Epub 2017 Jun 13.
OBJECTIVE: To assess the relationship between robotic surgical simulation performance and the real-life surgical skill of attending surgeons. We hypothesized that simulation performance would not correlate with real-life robotic surgical skill in attending surgeons.
DESIGN: In 2013, Birkmeyer et al. demonstrated an association between laparoscopic surgical performance, as determined by expert review of video clips, and surgical outcomes. Using that model of expert review, we studied the relationship between robotic simulator performance and real-life surgical skill. Ten attending robotic surgeons performed 4 tasks on the da Vinci Skills Simulator (Camera Targeting 1, Ring Walk 3, Suture Sponge 3, and Energy Dissection 3). Two video clips of a robotic-assisted operation were then recorded for each surgeon. Three expert robotic surgeons reviewed the recordings and rated surgical technique using the Global Evaluative Assessment of Robotic Skills.
SETTING: University of Virginia, Charlottesville, VA; tertiary hospital.
PARTICIPANTS: All attending surgeons who perform robotic-assisted surgery at our institution were enrolled and completed the study.
RESULTS: The surgeons had a median of 7.25 years of robotic surgical experience and a median of 91 cases (range: 20-346) in the last 4 years. Median scores for the 4 simulator tasks were 87.5%, 39.0%, 77.5%, and 81.5%, respectively. Using Pearson's correlation, scores for each individual task correlated poorly with expert review of intraoperative performance. There was also no correlation (r = -0.0304) between overall simulation score (mean: 70.7 ± 9.6%) and expert video ratings (mean: 3.66 ± 0.32 points).
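The coefficient reported above (r = -0.0304) is an ordinary Pearson product-moment correlation between each surgeon's overall simulator score and the mean expert Global Evaluative Assessment of Robotic Skills rating. A minimal sketch of how such a coefficient can be computed is shown below; the values are hypothetical placeholders, not the study's data.

```python
# Illustrative only: Pearson correlation between simulator scores and
# expert video ratings, using made-up numbers for 10 surgeons.
from scipy.stats import pearsonr

# Hypothetical overall simulator scores (%)
sim_scores = [87, 62, 71, 75, 68, 80, 59, 77, 66, 62]
# Hypothetical mean expert video ratings (1-5 scale)
video_ratings = [3.4, 3.9, 3.2, 4.1, 3.5, 3.8, 3.6, 3.3, 4.0, 3.8]

# pearsonr returns the correlation coefficient and its two-sided p value
r, p_value = pearsonr(sim_scores, video_ratings)
print(f"Pearson r = {r:.4f}, p = {p_value:.3f}")
```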
CONCLUSIONS: There was no correlation between attending surgeons' simulator performance and expert ratings of intraoperative videos based on the Global Evaluative Assessment of Robotic Skills scale. Although novice surgeons may put considerable effort into training on robotic simulators, performance on a simulator may not correlate with attending-level robotic surgical performance. Development of simulation exercises that guide novice surgeons toward expert performance is needed.