Wang Henry E, Schmicker Robert H, Herren Heather, Brown Siobhan, Donnelly John P, Gray Randal, Ragsdale Sally, Gleeson Andrew, Byers Adam, Jasti Jamie, Aguirre Christina, Owens Pam, Condle Joe, Leroux Brian
Department of Emergency Medicine, University of Alabama School of Medicine, Birmingham, AL.
Acad Emerg Med. 2015 Feb;22(2):204-11. doi: 10.1111/acem.12577. Epub 2015 Jan 29.
New chest compression detection technology allows for the recording and graphical depiction of clinical cardiopulmonary resuscitation (CPR) chest compressions. The authors sought to determine the inter-rater reliability of chest compression pattern classifications made by human raters. They also evaluated agreement between these manual ratings and an automated computer classification.
This was an analysis of chest compression patterns from cardiac arrest patients enrolled in the ongoing Resuscitation Outcomes Consortium (ROC) Continuous Chest Compressions Trial. Thirty CPR process files from patients in the trial were selected. Using written guidelines, research coordinators from each of eight participating ROC sites classified each chest compression pattern as 30:2 chest compressions, continuous chest compressions (CCC), or indeterminate. A computer algorithm for automated chest compression classification was also developed and applied to each case. Inter-rater agreement between manual classifications was tested using Fleiss's kappa. The criterion standard was defined as the classification assigned by the majority of manual raters. Agreement between the automated classification and the criterion standard manual classifications was also tested.
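For readers unfamiliar with the statistic, Fleiss's kappa measures agreement among a fixed number of raters assigning nominal categories, corrected for chance. A minimal sketch of the computation is below; the input layout (rows = cases, columns = the three categories used here) is an illustrative assumption, not the study's actual analysis code.

```python
def fleiss_kappa(counts):
    """Fleiss's kappa from an N-cases x k-categories count matrix.

    counts[i][j] = number of raters who assigned case i to category j
    (e.g. columns for 30:2, CCC, indeterminate). Assumes the same
    number of raters classified every case.
    """
    n_raters = sum(counts[0])          # raters per case
    n_cases = len(counts)
    total = n_cases * n_raters

    # Observed per-case agreement P_i, averaged over cases
    p_i = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in counts
    ]
    p_bar = sum(p_i) / n_cases

    # Chance agreement P_e from the marginal category proportions
    p_j = [sum(row[j] for row in counts) / total for j in range(len(counts[0]))]
    p_e = sum(p * p for p in p_j)

    return (p_bar - p_e) / (1 - p_e)
```

With eight raters and complete agreement on every case, the function returns 1.0; values near the study's observed κ = 0.62 indicate substantial but imperfect agreement.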
The majority of the eight raters classified 12 chest compression patterns as 30:2, 12 as CCC, and six as indeterminate. Inter-rater agreement between manual classifications of chest compression patterns was κ = 0.62 (95% confidence interval [CI] = 0.49 to 0.74). The automated computer algorithm classified chest compression patterns as 30:2 (n = 15), CCC (n = 12), and indeterminate (n = 3). Agreement between automated and criterion standard manual classifications was κ = 0.84 (95% CI = 0.59 to 0.95).
In this study, good inter-rater agreement in the manual classification of CPR chest compression patterns was observed. Automated classification showed strong agreement with human ratings. These observations support the consistency of manual CPR pattern classification as well as the use of automated approaches to chest compression pattern analysis.