School of Surgery, Faculty of Medicine, UHP-Nancy University, Avenue de la Forêt de Haye, 54511 Vandoeuvre-lès-Nancy, France.
Surg Endosc. 2012 Sep;26(9):2587-93. doi: 10.1007/s00464-012-2237-0. Epub 2012 Apr 5.
BACKGROUND: The exponential development of minimally invasive techniques, such as robotic-assisted devices, raises the question of how to assess robotic surgery skills. Early development of virtual simulators has provided efficient tools for laparoscopic skills certification based on objective scoring, high availability, and lower cost. However, a similar evaluation is lacking for robotic training. The purpose of this study was to assess several criteria of a new virtual robotic surgery simulator: reliability and face, content, construct, and concurrent validity.
METHODS: This prospective study was conducted from December 2009 to April 2010 using three dV-Trainer® (MIMIC Technologies®) simulators and one Da Vinci S® (Intuitive Surgical®) robot. Seventy-five subjects, divided into five groups according to their initial surgical training, were evaluated on five exercises representative of robot-specific skills: 3D perception, clutching, visual force feedback, EndoWrist® manipulation, and camera control. Analysis drew on (1) questionnaires (realism and interest), (2) data generated automatically by the simulators, and (3) subjective scoring by two experts of depersonalized videos of similar exercises performed on the robot.
RESULTS: Face and content validity were generally rated high (77%). Five levels of ability were clearly distinguished by the simulator (ANOVA; p = 0.0024). There was a strong correlation between automatic dV-Trainer data and subjective evaluation on the robot (r = 0.822). Reliability of scoring was high (r = 0.851). The most relevant criteria were time and economy of motion. The most relevant exercises were Pick and Place and Ring and Rail.
CONCLUSIONS: The dV-Trainer® simulator proves to be a valid tool for assessing basic robotic surgery skills.
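The validation statistics described above (one-way ANOVA across the five experience groups for construct validity, Pearson correlation between automatic simulator metrics and blinded expert ratings for concurrent validity) can be illustrated with a short Python sketch. This is not the authors' analysis code; the group names and all numeric values below are invented for demonstration only.

# Illustrative sketch of the validation statistics; hypothetical data.
import numpy as np
from scipy import stats

# Overall simulator scores for five experience groups (hypothetical values).
groups = {
    "novice":               [42, 48, 51, 45, 50],
    "medical_student":      [55, 58, 60, 53, 57],
    "surgical_resident":    [63, 66, 61, 68, 64],
    "laparoscopic_surgeon": [72, 70, 75, 74, 71],
    "robotic_expert":       [85, 88, 83, 90, 86],
}

# Construct validity: one-way ANOVA testing whether the simulator
# separates the five ability levels (the study reports p = 0.0024).
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Concurrent validity: Pearson correlation between automatic dV-Trainer
# scores and expert ratings of the same tasks performed on the robot
# (the study reports r = 0.822). Expert ratings are simulated here.
simulator_scores = np.concatenate(list(groups.values()))
expert_ratings = simulator_scores * 0.9 + np.random.default_rng(0).normal(0, 5, simulator_scores.size)
r, p_corr = stats.pearsonr(simulator_scores, expert_ratings)
print(f"Pearson r = {r:.3f} (p = {p_corr:.4f})")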