Assessing the assessment in emergency care training.

Author information

Dankbaar Mary E W, Stegers-Jager Karen M, Baarveld Frank, Merrienboer Jeroen J G van, Norman Geoff R, Rutten Frans L, van Saase Jan L C M, Schuit Stephanie C E

Affiliations

Erasmus University Medical Center Rotterdam, Rotterdam, the Netherlands.

Training institution for family practice SBOH, Utrecht, the Netherlands.

Publication information

PLoS One. 2014 Dec 18;9(12):e114663. doi: 10.1371/journal.pone.0114663. eCollection 2014.

Abstract

OBJECTIVE

Each year, over 1.5 million health care professionals attend emergency care courses. Despite the high stakes for patients and the extensive resources involved, little evidence exists on the quality of the assessments used. The aim of this study was to evaluate the validity and reliability of commonly used formats for assessing emergency care skills.

METHODS

Residents were assessed at the end of a 2-week emergency course; a subgroup was videotaped. Psychometric analyses were conducted to assess the validity and inter-rater reliability of the assessment instrument, which included a checklist, a 9-item competency scale and a global performance scale.
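
To make the reported psychometrics concrete, the sketch below computes Cronbach's alpha for a simulated 9-item competency scale. The data, rating range, and effect sizes are invented for illustration and are not the study's; only the alpha formula itself corresponds to the internal-consistency statistic reported in the results.

```python
# Minimal sketch: Cronbach's alpha on made-up ratings for a hypothetical
# 9-item competency scale (illustrative only; not the study's data).
import numpy as np

rng = np.random.default_rng(0)
n_residents, n_items = 144, 9
# Hypothetical 1-5 ratings: a latent "ability" per resident plus item noise.
ability = rng.normal(3.0, 0.6, size=(n_residents, 1))
ratings = np.clip(np.round(ability + rng.normal(0, 0.5, size=(n_residents, n_items))), 1, 5)

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"Cronbach's alpha (simulated scale): {cronbach_alpha(ratings):.2f}")
```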

RESULTS

A group of 144 residents and 12 raters participated in the study; 22 residents were videotaped and re-assessed by 8 raters. The checklists showed limited validity and poor inter-rater reliability for the dimensions "correct" and "timely" (ICC = .30 and .39, respectively). The competency scale had good construct validity, consisting of a clinical and a communication subscale. The internal consistency of the (sub)scales was high (α = .93/.91/.86). The inter-rater reliability was moderate for the clinical competency subscale (.49) and the global performance scale (.50), but poor for the communication subscale (.27). A generalizability study showed that a reliable assessment requires 5-13 raters when using the checklists and four when using the clinical competency scale or the global performance scale.
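
As a rough way to see how the rater numbers above follow from the single-rater coefficients, the sketch below applies the Spearman-Brown prophecy formula to the reported reliabilities. This is only an approximation of the authors' generalizability (G/D) study, which estimates variance components directly, so the projected rater counts will not reproduce the published 5-13 and four exactly; the 0.80 reliability target is likewise an assumption.

```python
# Rough illustration: how averaging over more raters raises reliability,
# using the Spearman-Brown prophecy formula as a stand-in for the paper's
# generalizability (D-study) projections.

def spearman_brown(single_rater_r: float, n_raters: int) -> float:
    """Projected reliability of the mean score across n_raters raters."""
    return n_raters * single_rater_r / (1 + (n_raters - 1) * single_rater_r)

def raters_needed(single_rater_r: float, target: float = 0.80) -> int:
    """Smallest number of raters whose averaged score reaches the target reliability."""
    n = 1
    while spearman_brown(single_rater_r, n) < target:
        n += 1
    return n

for label, r in [("checklist 'correct'", 0.30), ("checklist 'timely'", 0.39),
                 ("clinical competency scale", 0.49), ("global performance scale", 0.50)]:
    print(f"{label:27s} single-rater r = {r:.2f} -> ~{raters_needed(r)} raters for 0.80")
```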

CONCLUSIONS

This study shows poor validity and reliability for assessing emergency skills with checklists, but good validity and moderate reliability with the clinical competency and global performance scales. Involving more raters can improve reliability substantially. Recommendations are made to improve this high-stakes skills assessment.
