Noveanu Juliane, Amsler Felix, Ummenhofer Wolfgang, von Wyl Thomas, Zuercher Mathias
Prehosp Emerg Care. 2017 Jul-Aug;21(4):511-524. doi: 10.1080/10903127.2017.1302528. Epub 2017 Apr 14.
Simulation-based medical training is associated with superior educational outcomes and improved cost efficiency. Self- and peer-assessment may be a cost-effective and flexible alternative to expert-led assessment. We compared the accuracy of self- and peer-assessment by untrained raters using basic evaluation tools with expert assessment using advanced tools, including validated questionnaires and post hoc video-based analysis.
Twenty-eight simulated emergency airway management scenarios were observed and video-recorded for further assessment. Participants were 28 emergency physicians, each involved in four different airway management scenarios in different roles: one scenario as team leader, one as an assisting team member, and two as an observer. Non-technical skills (NTS) and technical skills (TS) were analyzed by three independent groups: 1) the performing team (PT), consisting of the two emergency physicians acting as team leader or team member (self-assessment); 2) the observing team (OT), consisting of two participating emergency physicians not involved in the current clinical scenario (peer-assessment); assessment took place during (OT) or directly after (PT) the simulation, without prior specific interpretational training but using standardized questionnaires; and 3) the expert team (ET), consisting of two specifically trained external observers (one psychologist and one emergency physician) using video-assisted objective assessment combined with standardized questionnaires.
Intragroup reliability, expressed as intra-class correlation (ICC), was moderate to good for TS (ICC 0.42) and NTS (ICC 0.55) in the PT, and moderate to good for TS (ICC 0.41) but poor for NTS (ICC 0.27) in the OT. The ET showed excellent intragroup reliability for both TS (ICC 0.78) and NTS (ICC 0.81). Interrater reliability differed significantly between ET and PT and between ET and OT for both TS and NTS (p < 0.05); there was no difference between OT and PT for either TS or NTS.
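As a point of reference for the reliability statistics above, the sketch below shows one common way an intra-class correlation can be computed: ICC(2,1), the two-way random-effects, absolute-agreement, single-rater form. The abstract does not state which ICC model the authors used, and the ratings in the example are hypothetical, so this is purely illustrative.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1) for an n_targets x k_raters matrix of scores.

    Two-way random effects, absolute agreement, single rater
    (Shrout & Fleiss convention). Illustrative only; the study's
    exact ICC model is not reported in the abstract.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per target (e.g., per scenario)
    col_means = ratings.mean(axis=0)   # per rater

    # Two-way ANOVA decomposition without replication
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((ratings - grand) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_r = ss_rows / (n - 1)                 # between-target mean square
    ms_c = ss_cols / (k - 1)                 # between-rater mean square
    ms_e = ss_error / ((n - 1) * (k - 1))    # residual mean square

    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical ratings from two raters across five scenarios.
scores = np.array([[4, 5], [3, 3], [5, 4], [2, 3], [4, 4]], dtype=float)
print(round(icc_2_1(scores), 2))
```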
Expert assessment of simulation-based medical training scenarios using validated checklists and post hoc video-based analysis was superior to self- or peer-assessment by untrained observers for both TS and NTS.