
Multi-institutional Implementation of the National Clinical Assessment Tool in Emergency Medicine: Data From the First Year of Use.

Author Information

Hiller Katherine, Jung Julianna, Lawson Luan, Riddell Rebecca, Franzen Doug

Affiliations

Department of Emergency Medicine, University of Arizona, Tucson, AZ, USA.

Department of Emergency Medicine, Johns Hopkins University, Baltimore, MD, USA.

Publication Information

AEM Educ Train. 2020 Jul 20;5(2):e10496. doi: 10.1002/aet2.10496. eCollection 2021 Apr.

Abstract

OBJECTIVES

Uniformly training physicians to provide safe, high-quality care requires reliable assessment tools to ensure learner competency. The consensus-derived National Clinical Assessment Tool in Emergency Medicine (NCAT-EM) has been adopted by clerkships across the country. Analysis of large-scale deidentified data from a consortium of users is reported.

METHODS

Thirteen sites entered data into a Web-based platform, yielding over 6,400 discrete NCAT-EM assessments from 748 students and 704 assessors. Reliability and internal-consistency analyses, along with factorial analysis of variance for hypothesis generation, were performed.

RESULTS

All categories on the NCAT-EM rating scales and professionalism subdomains were used. Clinical rating scale and global assessment scores were positively skewed, similar to other assessments commonly used in emergency medicine (EM). Professionalism lapses were noted in <1% of assessments. Cronbach's alpha was >0.8 for each site; however, interinstitutional variability was significant. M4 students scored higher than M3 students, and EM-bound students scored higher than non-EM-bound students. There were site-specific differences based on number of prior EM rotations, but no overall association. There were differences in scores based on assessor faculty rank and resident training year, but not by years in practice. There were site-specific differences based on student sex, but overall no difference.

CONCLUSIONS

To our knowledge, this is the first large-scale multi-institutional implementation of a single clinical assessment tool. This study demonstrates the feasibility of a unified approach to clinical assessment across multiple diverse sites. Challenges remain in determining appropriate score distributions and improving consistency in scoring between sites.


