

External validation of Global Evaluative Assessment of Robotic Skills (GEARS).

Author Information

Aghazadeh Monty A, Jayaratna Isuru S, Hung Andrew J, Pan Michael M, Desai Mihir M, Gill Inderbir S, Goh Alvin C

Affiliations

Department of Urology, Methodist Institute for Technology, Innovation, and Education, Houston Methodist Hospital, 6560 Fannin Street, Suite 2100, Houston, TX, 77030, USA.

Scott Department of Urology, Baylor College of Medicine, Houston, TX, USA.

Publication Information

Surg Endosc. 2015 Nov;29(11):3261-6. doi: 10.1007/s00464-015-4070-8. Epub 2015 Jan 22.

Abstract

BACKGROUND

We demonstrate the construct validity, reliability, and utility of Global Evaluative Assessment of Robotic Skills (GEARS), a clinical assessment tool designed to measure robotic technical skills, in an independent cohort using an in vivo animal training model.

METHODS

In a cross-sectional observational study, 47 volunteer participants were categorized as experts (>30 robotic cases completed as primary surgeon) or trainees. The trainee group was further divided into intermediates (≥5 but ≤30 cases) and novices (<5 cases). All participants completed a standardized in vivo robotic task in a porcine model. Task performance was evaluated by two expert robotic surgeons and self-assessed by the participants using the GEARS assessment tool. The Kruskal-Wallis test was used to compare GEARS performance scores and establish construct validity; Spearman's rank correlation was used to measure interobserver reliability; and Cronbach's alpha was used to assess internal consistency.
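The statistical workflow described above maps onto standard library routines. The following Python sketch illustrates how a Kruskal-Wallis test across the three experience groups, a Spearman rank correlation between two raters, and Cronbach's alpha over the GEARS domain scores could be computed with NumPy and SciPy; all variable names and score arrays are illustrative assumptions, not data from the study.

# Minimal sketch of the statistical analysis described above, using NumPy and SciPy.
# All scores below are made-up illustrative values, not data from the study.
import numpy as np
from scipy import stats

# Hypothetical GEARS total scores for the three experience groups.
expert_scores = np.array([28, 29, 27, 30, 28])
intermediate_scores = np.array([22, 21, 24, 23])
novice_scores = np.array([18, 17, 20, 19, 16])

# Construct validity: Kruskal-Wallis test across the three groups.
h_stat, p_value = stats.kruskal(expert_scores, intermediate_scores, novice_scores)
print(f"Kruskal-Wallis H = {h_stat:.3f}, p = {p_value:.4f}")

# Interobserver reliability: Spearman's rank correlation between two raters'
# scores for the same set of performances (hypothetical paired values).
rater1 = np.array([28, 22, 18, 25, 20, 27])
rater2 = np.array([27, 23, 17, 26, 19, 28])
rho, p_rho = stats.spearmanr(rater1, rater2)
print(f"Spearman rho = {rho:.3f}, p = {p_rho:.4f}")

# Internal consistency: Cronbach's alpha over the six GEARS domains
# (rows = performances, columns = domain scores; illustrative data).
def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    n_items = item_scores.shape[1]
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

domain_scores = np.array([
    [5, 4, 5, 5, 4, 5],
    [4, 4, 4, 3, 4, 4],
    [3, 3, 2, 3, 3, 3],
    [2, 3, 2, 2, 3, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(domain_scores):.2f}")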

RESULTS

Performance evaluations were completed on nine experts and 38 trainees (14 intermediate, 24 novice). Experts demonstrated superior performance compared with intermediates and novices overall and in every individual domain (p < 0.0001). Comparing intermediates with novices, the difference in overall performance trended toward significance (p = 0.0505), while the individual domains of efficiency and autonomy differed significantly between the groups (p = 0.0280 and 0.0425, respectively). Interobserver reliability between the two expert raters was confirmed by a strong correlation (r = 0.857, 95% CI [0.691, 0.941]). Agreement between expert and participant scoring was lower (r = 0.435, 95% CI [0.121, 0.689] and r = 0.422, 95% CI [0.081, 0.672]). Internal consistency was excellent for both expert raters and for participants (α = 0.96, 0.98, 0.93).
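The 95% confidence intervals reported above for the correlation coefficients can be produced in several ways; a common approximation is the Fisher z-transformation. The sketch below shows that calculation; the choice of method and the sample size are assumptions, so it will not necessarily reproduce the published intervals.

# Minimal sketch of one way a 95% confidence interval for a correlation
# coefficient can be obtained: the Fisher z-transformation. The abstract does
# not state the method actually used, so this is illustrative only.
import numpy as np
from scipy import stats

def correlation_ci(r, n, confidence=0.95):
    """Approximate confidence interval for a correlation via Fisher's z."""
    z = np.arctanh(r)                  # Fisher transform of r
    se = 1.0 / np.sqrt(n - 3)          # standard error on the z scale
    z_crit = stats.norm.ppf(0.5 + confidence / 2.0)
    return float(np.tanh(z - z_crit * se)), float(np.tanh(z + z_crit * se))

# Example with the reported expert-expert correlation; n = 47 paired observations
# is an assumption, so the result will not exactly match the published interval.
print(correlation_ci(0.857, n=47))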

CONCLUSIONS

In an independent cohort, GEARS was able to differentiate between different robotic skill levels, demonstrating excellent construct validity. As a standardized assessment tool, GEARS maintained consistency and reliability for an in vivo robotic surgical task and may be applied for skills evaluation in a broad range of robotic procedures.

