Design, validity, and reliability of a pediatric resident JumpSTART disaster triage scoring instrument.

Author information

Department of Pediatrics, Yale University School of Medicine, New Haven, CT 06511, USA.

Publication information

Acad Pediatr. 2013 Jan-Feb;13(1):48-54. doi: 10.1016/j.acap.2012.09.002. Epub 2012 Nov 13.

Abstract

OBJECTIVE

To design an instrument for scoring residents learning pediatric disaster triage (PDT), and to test the validity and reliability of the instrument.

METHODS

We designed a checklist-based scoring instrument covering PDT knowledge, skills, and performance, as well as a global assessment. Learners' performance in a 10-patient school bus crash simulation was video recorded and scored with the instrument. Learners triaged the patients with a color-coded algorithm (JumpSTART, a pediatric adaptation of Simple Triage and Rapid Treatment). Three evaluators reviewed the recordings and scored each learner's triage performance. Internal and construct validity of the instrument were established by comparing resident performance across postgraduate years (PGY) and by correlating instrument items with the overall score. Validity was assessed with analysis of variance and the D statistic. We calculated the evaluators' intraclass correlation coefficient (ICC) for each patient, skill, triage decision, and global assessment.
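
For readers unfamiliar with the color-coded algorithm named above, the sketch below illustrates the JumpSTART decision logic as it is commonly published (ambulation, airway opening, five rescue breaths, respiratory rate 15-45, palpable pulse, AVPU). This is a minimal illustration only; the Patient fields and exact thresholds are assumptions for the example and are not taken from the study materials.

```python
# Minimal sketch of the commonly published JumpSTART pediatric triage logic.
# Field names and thresholds are illustrative assumptions, not the study's instrument.

from dataclasses import dataclass


@dataclass
class Patient:
    ambulatory: bool                      # able to walk
    breathing: bool                       # spontaneous respirations
    breathes_after_airway: bool           # breathes once the airway is repositioned
    palpable_pulse: bool
    breathes_after_rescue_breaths: bool   # responds to 5 rescue breaths
    respiratory_rate: int                 # breaths per minute
    avpu: str                             # "A", "V", "P-appropriate", "P-inappropriate", "U"


def jumpstart_triage(p: Patient) -> str:
    """Return a triage color: GREEN (minor), YELLOW (delayed), RED (immediate), BLACK (expectant)."""
    if p.ambulatory:
        return "GREEN"
    if not p.breathing:
        if p.breathes_after_airway:
            return "RED"
        if not p.palpable_pulse:
            return "BLACK"
        # Pediatric-specific step: five rescue breaths before declaring expectant.
        return "RED" if p.breathes_after_rescue_breaths else "BLACK"
    if p.respiratory_rate < 15 or p.respiratory_rate > 45:
        return "RED"
    if not p.palpable_pulse:
        return "RED"
    if p.avpu in ("P-inappropriate", "U"):
        return "RED"
    return "YELLOW"
```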

RESULTS

There were 37 learners and 111 observations. There was no difference in total scores by PGY (P = .77), establishing internal validity. Regarding construct validity, most instrument items had a D statistic of >0.5. The overall ICC among scores was 0.83 (95% confidence interval [CI] 0.74-0.89). Individual patient score reliability was high and was greatest among patients with head injury (ICC 0.86; 95% CI 0.79-0.91). Reliability was low for an ambulatory patient (ICC 0.29; 95% CI 0.07-0.48). Triage skills evaluation showed excellent reliability, including airway management (ICC 0.91; 95% CI 0.86-0.94) and triage speed (ICC 0.81; 95% CI 0.72-0.88). The global assessment had moderate reliability for skills (ICC 0.63; 95% CI 0.47-0.75) and knowledge (ICC 0.64; 95% CI 0.49-0.76).
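
The interrater reliability figures above are intraclass correlation coefficients; the abstract does not state which ICC form or confidence-interval method was used. The sketch below computes one common choice, ICC(2,1) (two-way random effects, absolute agreement, single rater), from a learners-by-raters score matrix. It is illustrative only and is not the authors' analysis code; the example scores are made up.

```python
# Sketch of an ICC(2,1) calculation (Shrout & Fleiss two-way random effects,
# absolute agreement, single rater) for a (learners x raters) score matrix.

import numpy as np


def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1) for a (targets x raters) matrix of scores."""
    n, k = scores.shape
    grand_mean = scores.mean()
    row_means = scores.mean(axis=1)   # per-learner means
    col_means = scores.mean(axis=0)   # per-rater means

    # Sums of squares from the two-way ANOVA decomposition.
    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)                  # mean square, learners
    msc = ss_cols / (k - 1)                  # mean square, raters
    mse = ss_error / ((n - 1) * (k - 1))     # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)


# Example: 5 learners scored by 3 evaluators (made-up scores).
ratings = np.array([
    [8, 9, 8],
    [6, 7, 6],
    [9, 9, 10],
    [4, 5, 4],
    [7, 8, 7],
])
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```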

CONCLUSIONS

We report the validity and reliability testing of a PDT-scoring instrument. Validity was confirmed with no performance differential by PGY. Reliability of the scoring instrument for most patient-level triage, knowledge, and specific skills was high.

