D'Angelo Anne-Lise D, Law Katherine E, Cohen Elaine R, Greenberg Jacob A, Kwan Calvin, Greenberg Caprice, Wiegmann Douglas A, Pugh Carla M
Department of Surgery, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI.
Department of Industrial and Systems Engineering, School of Engineering, University of Wisconsin-Madison, Madison, WI.
Surgery. 2015 Nov;158(5):1408-14. doi: 10.1016/j.surg.2015.04.010. Epub 2015 May 21.
The aim of this study was to assess the validity of a human factors error assessment method for evaluating resident performance during a simulated operative procedure.
Seven postgraduate year 4-5 residents had 30 minutes to complete a simulated laparoscopic ventral hernia (LVH) repair on day 1 of a national advanced laparoscopy course. Faculty provided immediate feedback on operative errors, and residents participated in a final product analysis of their repairs. Residents then received didactic and hands-on training on several advanced laparoscopic procedures during a lecture session and an animate lab. On day 2, residents performed a nonequivalent LVH repair using a simulator. Three investigators reviewed and coded videos of the repairs using previously developed human error classification systems.
Residents committed 121 total errors on day 1 compared with 146 on day 2. One of 7 residents successfully completed the LVH repair on day 1 compared with all 7 residents on day 2 (P = .001). The majority of errors (85%) committed on day 2 were technical and occurred during the last 2 steps of the procedure. There were significant differences in error type (P ≤ .001) and level (P = .019) from day 1 to day 2. The proportion of omission errors decreased from day 1 (33%) to day 2 (14%), while technical and commission errors increased on day 2.
The error assessment tool successfully categorized performance errors, supporting known-groups validity evidence. Evaluating resident performance through error classification has great potential to advance our understanding of operative readiness.