McEvoy Matthew D, Hand William R, Furse Cory M, Field Larry C, Clark Carlee A, Moitra Vivek K, Nietert Paul J, O'Connor Michael F, Nunnally Mark E
From the Department of Anesthesiology (M.D.M.), Vanderbilt University Medical Center, Nashville, TN; Departments of Anesthesia and Perioperative Medicine (W.R.H., C.M.F., L.C.F., C.A.C.) and Public Health Sciences (P.J.N.), Medical University of South Carolina, Charleston, SC; Department of Anesthesiology (V.K.M.), Columbia University Medical Center, New York, NY; and Section of Critical Care Medicine (M.F.O.), Department of Anesthesia and Critical Care (M.E.N.), University of Chicago, Chicago, IL.
Simul Healthc. 2014 Oct;9(5):295-303. doi: 10.1097/SIH.0000000000000048.
Few valid and reliable grading checklists have been published for the evaluation of performance during simulated high-stakes perioperative event management. As such, the purposes of this study were to construct valid scoring checklists for a variety of perioperative emergencies and to determine the reliability of scores produced by these checklists during continuous video review.
A group of anesthesiologists, intensivists, and educators created a set of simulation grading checklists for the assessment of the following scenarios: severe anaphylaxis, cerebrovascular accident, hyperkalemic arrest, malignant hyperthermia, and acute coronary syndrome. Checklist items were coded as critical or noncritical. Nonexpert raters evaluated 10 simulation videos in a random order, with each video being graded 4 times. A group of faculty experts also graded the videos to create a reference standard to which nonexpert ratings were compared. P < 0.05 was considered significant.
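To make the scoring scheme concrete, the sketch below shows one way the checklist design could be represented: each item is coded as critical or noncritical, and a graded video is summarized as the percentage of all items, and of critical items, performed. The item wordings and structure are hypothetical illustrations, not the study's actual checklists.

```python
# A minimal sketch (hypothetical, not taken from the paper) of a graded
# checklist with items coded as critical or noncritical, summarized as the
# percentages reported in the results.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    description: str
    critical: bool   # coded critical vs. noncritical by the expert panel
    performed: bool  # whether the team leader performed the item

def percent_performed(items, critical_only=False):
    """Percentage of items performed, optionally restricted to critical items."""
    pool = [i for i in items if i.critical] if critical_only else list(items)
    return 100 * sum(i.performed for i in pool) / len(pool)

# Hypothetical grading of one scenario (e.g., hyperkalemic arrest).
graded = [
    ChecklistItem("Stop potassium-containing infusions", critical=True, performed=True),
    ChecklistItem("Administer calcium chloride", critical=True, performed=True),
    ChecklistItem("Give insulin with dextrose", critical=True, performed=False),
    ChecklistItem("Send blood gas and electrolytes", critical=False, performed=True),
]
print(percent_performed(graded))                      # all items: 75.0
print(percent_performed(graded, critical_only=True))  # critical items: ~66.7
```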
Team leaders in the simulation videos were scored by the expert panel as having performed 56.5% of all items on the checklist (range, 43.8%-84.0%) and 67.2% of the critical items (range, 30.0%-100%). Nonexpert raters agreed with the expert assessment 89.6% of the time (95% confidence interval, 87.2%-91.6%). No learning curve was observed with repeated video assessment or checklist use. The κ values comparing nonexpert rater assessments to the reference standard averaged 0.76 (95% confidence interval, 0.71-0.81).
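For readers unfamiliar with the agreement statistics reported here, the following sketch computes percent agreement and Cohen's κ between a nonexpert rater and the expert reference standard, assuming item-level binary gradings (performed = 1, not performed = 0). The example data are invented for illustration; the paper does not specify its exact computation pipeline.

```python
# Illustrative computation of percent agreement and Cohen's kappa for
# binary item-level gradings (assumed encoding; data are hypothetical).
from collections import Counter

def percent_agreement(rater, reference):
    """Fraction of checklist items on which the two gradings agree."""
    return sum(r == e for r, e in zip(rater, reference)) / len(reference)

def cohens_kappa(rater, reference):
    """Chance-corrected agreement for binary item-level ratings."""
    n = len(reference)
    p_observed = percent_agreement(rater, reference)
    rater_counts, ref_counts = Counter(rater), Counter(reference)
    # Expected agreement if both graders marked items independently at
    # their observed base rates.
    p_expected = sum((rater_counts[c] / n) * (ref_counts[c] / n) for c in (0, 1))
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical item-level gradings for one scenario video.
expert    = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
nonexpert = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1]
print(percent_agreement(nonexpert, expert))  # 0.8
print(cohens_kappa(nonexpert, expert))       # ~0.52 (chance-corrected)
```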
The findings indicate that the grading checklists described are valid and reliable and could be used in perioperative crisis management assessment.