
Crowd-Sourced Assessment of Technical Skill: A Valid Method for Discriminating Basic Robotic Surgery Skills.

Author Information

White Lee W, Kowalewski Timothy M, Dockter Rodney Lee, Comstock Bryan, Hannaford Blake, Lendvay Thomas S

Affiliations

1 School of Medicine, Stanford University, Palo Alto, California. (At time of data collection and analysis: Department of Bioengineering, University of Washington, Seattle, Washington.)

2 Department of Mechanical Engineering, University of Minnesota, Minneapolis, Minnesota.

Publication Information

J Endourol. 2015 Nov;29(11):1295-301. doi: 10.1089/end.2015.0191. Epub 2015 Aug 24.

Abstract

BACKGROUND

A surgeon's skill in the operating room has been shown to correlate with a patient's clinical outcome. The prompt, accurate assessment of surgical skill remains a challenge, in part because expert faculty reviewers are often unavailable. By harnessing the power of large, readily available crowds through the Internet, rapid, accurate, and low-cost assessments may be achieved. We hypothesized that assessments provided by crowd workers highly correlate with expert surgeons' assessments.

MATERIALS AND METHODS

A group of 49 surgeons from two hospitals performed two dry-laboratory robotic surgical skill assessment tasks. The performance of these tasks was video recorded and posted online for evaluation using Amazon Mechanical Turk. The surgical tasks in each video were graded by crowd workers (n=30) and expert surgeons (n=3) using a modified Global Evaluative Assessment of Robotic Skills (GEARS) grading tool, and the mean scores were compared using Cronbach's alpha statistic.
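The study does not publish its raw score matrices, but the agreement statistic it uses is standard. As a minimal sketch (with entirely hypothetical numbers), Cronbach's alpha for a set of videos scored by two rater groups is the ratio-of-variances formula alpha = k/(k-1) * (1 - sum of per-rater variances / variance of per-video totals):

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_videos x k_raters) score matrix.

    alpha = k/(k-1) * (1 - sum(per-rater variances) / variance of totals)
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    rater_vars = ratings.var(axis=0, ddof=1)      # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of per-video total scores
    return k / (k - 1) * (1 - rater_vars.sum() / total_var)

# Hypothetical GEARS totals for 6 videos: column 0 = mean crowd score,
# column 1 = mean expert score (illustrative values only, not study data).
scores = np.array([
    [18, 19],
    [22, 23],
    [15, 14],
    [25, 24],
    [20, 21],
    [17, 18],
])
alpha = cronbach_alpha(scores)  # close to 1 when the two columns track each other
```

Values near 1 indicate that the two rating sources rank and scale the performances consistently, which is how the 0.84 and 0.92 figures in the Results should be read.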

RESULTS

GEARS evaluations from the crowd were obtained for each video and task and compared with the GEARS ratings from the expert surgeons. The crowd-based performance scores agreed with the performance assessments by experts with a Cronbach's alpha of 0.84 and 0.92 for the two tasks, respectively.

CONCLUSION

The assessment of surgical skill by crowd workers resulted in a high degree of agreement with the scores provided by expert surgeons in the evaluation of basic robotic surgical dry-laboratory tasks. Crowd responses were cheaper and much faster to acquire. This study provides evidence that crowds may offer an adjunctive method for delivering rapid skills feedback to surgeons in training and in practice.

