Ihejirika Rivka C, Thakore Rachel V, Sathiyakumar Vasanth, Ehrenfeld Jesse M, Obremskey William T, Sethi Manish K
The Vanderbilt Orthopaedic Institute Center for Health Policy, Vanderbilt University, Suite 4200, South Tower, MCE, Nashville, TN 37221, United States.
Injury. 2015 Apr;46(4):542-6. doi: 10.1016/j.injury.2014.02.039. Epub 2014 Mar 11.
Although recent literature has demonstrated the utility of the ASA score in predicting postoperative length of stay, complication risk, and potential utilization of other hospital resources, the ASA score has been inconsistently assigned by anaesthesia providers. This study tested the reliability of ASA score assignment by attending anaesthesiologists and anaesthesia residents specifically in the orthopaedic trauma patient population.
Nine case-based scenarios were created involving preoperative patients with isolated operative orthopaedic trauma injuries. The cases were created and assigned a reference score jointly by an attending anaesthesiologist and an orthopaedic trauma surgeon. Attending and resident anaesthesiologists were asked to assign an ASA score for each case. Rater versus reference agreement and inter-rater agreement amongst respondents were then analyzed using weighted and unweighted Cohen's Kappa and Fleiss's Kappa, respectively.
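The agreement statistics named above can be illustrated with standard library implementations. This is a minimal sketch, not the study's actual analysis: the rating data are invented, and the linear weighting scheme and the `scikit-learn`/`statsmodels` functions are assumptions about how such an analysis could be run.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import fleiss_kappa, aggregate_raters

# Hypothetical ASA scores (1-5) for nine case scenarios:
# the reference score vs. one respondent's assignments.
reference = [2, 3, 1, 4, 2, 3, 3, 2, 4]
rater     = [2, 3, 2, 4, 2, 3, 4, 2, 4]

# Unweighted Cohen's Kappa (Kuw) treats every disagreement equally;
# weighted Kappa (Kw) penalises larger score gaps more heavily.
k_uw = cohen_kappa_score(reference, rater)
k_w  = cohen_kappa_score(reference, rater, weights="linear")

# Fleiss's Kappa for inter-rater agreement across many respondents:
# rows are cases, columns are raters (toy data, 4 cases x 3 raters).
ratings = np.array([
    [2, 2, 3],
    [3, 3, 3],
    [1, 2, 1],
    [4, 4, 4],
])
table, _ = aggregate_raters(ratings)  # per-case counts of each category
k_fleiss = fleiss_kappa(table)
```

All three statistics fall in [-1, 1], with 1 indicating perfect agreement; conventional interpretive bands label 0.41-0.60 "moderate" and 0.61-0.80 "substantial" agreement.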
Thirty-three individuals provided ASA scores for each of the scenarios. The average rater versus reference reliability was substantial (Kw=0.78, SD=0.131, 95% CI=0.73-0.83). The average rater versus reference Kuw was also substantial (Kuw=0.64, SD=0.21, 95% CI=0.56-0.71). The inter-rater reliability as evaluated by Fleiss's Kappa was moderate (K=0.51, p<.001). Inter-rater comparisons within the group of attendings (K=0.50, p<.001) and within the group of residents (K=0.55, p<.001) were both moderate. There was a significant increase in inter-rater reliability from participants self-reporting as 'very uncomfortable' with the scoring method to those self-reporting as 'very comfortable' (uncomfortable K=0.43, comfortable K=0.59, p<.001).
This study shows substantial agreement in ASA score assignment among anaesthesiologists when evaluating orthopaedic trauma patients. The significant increase in inter-rater reliability with anaesthesiologists' comfort with the ASA scoring method implies a need for further evaluation of ASA assessment training and routine clinical use. These findings support the use of the ASA score as a statistically reliable tool in orthopaedic trauma.