Dubosh Nicole M, Fisher Jonathan, Lewis Jason, Ullman Edward A
Department of Emergency Medicine, Harvard Medical School, Boston, Massachusetts; Department of Emergency Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts.
Department of Emergency Medicine, Maricopa Medical Center, Phoenix, Arizona.
J Emerg Med. 2017 Jun;52(6):850-855. doi: 10.1016/j.jemermed.2016.09.018. Epub 2017 Mar 22.
Clerkship directors routinely evaluate medical students using multiple modalities, including faculty assessment of clinical performance and written examinations. Both forms of evaluation often play a prominent role in final clerkship grade. The degree to which these modalities correlate in an emergency medicine (EM) clerkship is unclear.
We sought to correlate faculty clinical evaluations with medical student performance on a written, standardized EM examination of medical knowledge.
This is a retrospective study of fourth-year medical students in a 4-week EM elective at one academic medical center. EM faculty performed end-of-shift evaluations of students via a blinded online system using a 5-point Likert scale for 8 domains: data acquisition, data interpretation, medical knowledge base, professionalism, patient care and communication, initiative/reliability/dependability, procedural skills, and overall evaluation. All students completed the National EM M4 Examination. Means, medians, and standard deviations for end-of-shift evaluation scores were calculated, and correlations with examination scores were assessed using Spearman's rank correlation coefficient.
Thirty-nine medical students with 224 discrete faculty evaluations were included. The median number of evaluations completed per student was 6. The mean score (±SD) on the examination was 78.6% ± 6.1%. The examination score correlated poorly with faculty evaluations across all 8 domains (ρ = 0.074–0.316).
Faculty evaluations of medical students across multiple domains of competency correlate poorly with written examination performance during an EM clerkship. Educators should consider the limitations of examination scores in assessing students' ability to provide quality clinical care.