The Methodist Hospital Department of Surgery, The Methodist Institute for Technology Innovation and Education (MITIE SM), 6550 Fannin Street, Suite 1661A, Houston, TX 77030, USA.
Surg Endosc. 2013 Jun;27(6):2020-30. doi: 10.1007/s00464-012-2704-7. Epub 2013 Feb 7.
A novel computer simulator is now commercially available for robotic surgery using the da Vinci® System (Intuitive Surgical, Sunnyvale, CA). Initial investigations into its utility have been limited by a lack of understanding of which of the many provided skills modules and metrics are useful for evaluation. In addition, construct validity testing has been done using medical students as a "novice" group, a clinically irrelevant cohort given the complexity of robotic surgery. This study systematically evaluated the simulator's skills tasks and metrics and established face, content, and construct validity using a relevant novice group.
Expert surgeons deconstructed the task of performing robotic surgery into eight separate skills. The content of the 33 modules provided by the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA) was then evaluated against these deconstructed skills, and 8 of the 33 were determined to be unique. These eight tasks were used to evaluate the performance of 46 surgeons and trainees on the simulator (25 novices, 8 intermediates, and 13 experts). Novice surgeons were general surgery and urology residents or practicing surgeons with clinical experience in open and laparoscopic surgery but limited exposure to robotics. Performance was measured using 85 metrics across all eight tasks.
Face and content validity were confirmed using global rating scales. Of the 85 metrics provided by the simulator, 11 were found to be unique, and these were used for further analysis. Experts performed significantly better than novices in all eight tasks and on nearly every metric. Intermediates were inconsistently better than novices, with only four tasks showing a significant difference in performance. Intermediate and expert performance did not differ significantly.
This study systematically determined the important modules and metrics on the da Vinci Skills Simulator and used them to demonstrate face, content, and construct validity with clinically relevant novice, intermediate, and expert groups. These data will be used to develop proficiency-based training programs on the simulator and to investigate predictive validity.