K. Hu is an MD/MPH student, University of Illinois at Chicago, Chicago, Illinois.
P.J. Hicks is professor of pediatrics, University of Texas Southwestern Medical School, Dallas, Texas.
Acad Med. 2020 Nov;95(11S Association of American Medical Colleges Learn Serve Lead: Proceedings of the 59th Annual Research in Medical Education Presentations):S89-S94. doi: 10.1097/ACM.0000000000003644.
Semiannually, U.S. pediatrics residency programs report resident milestone levels to the Accreditation Council for Graduate Medical Education (ACGME). The Pediatrics Milestones Assessment Collaborative (PMAC, consisting of the National Board of Medical Examiners, American Board of Pediatrics, and Association of Pediatric Program Directors) developed workplace-based assessments of 2 inferences: readiness to serve as an intern with a supervisor present (D1) and readiness to care for patients with a supervisor nearby in the pediatric inpatient setting (D2). The authors compared learner and program variance in PMAC scores with ACGME milestones.
The authors examined sources of variance in PMAC scores and milestones collected between November 2015 and May 2017 for 181 interns at 8 U.S. pediatrics residency programs, using random effects models with program, competency, learner, and program × competency components.
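The abstract does not give the exact model specification, so the following is only a sketch of a variance-components model consistent with the components named above; the notation is ours, not the authors'.

```latex
% Score of learner l (in program p) on competency c
Y_{plc} = \mu + a_p + b_c + d_l + (ab)_{pc} + \varepsilon_{plc},
\qquad
a_p \sim N(0,\sigma^2_{\mathrm{prog}}),\;
b_c \sim N(0,\sigma^2_{\mathrm{comp}}),\;
d_l \sim N(0,\sigma^2_{\mathrm{learner}}),\;
(ab)_{pc} \sim N(0,\sigma^2_{\mathrm{prog\times comp}}),\;
\varepsilon_{plc} \sim N(0,\sigma^2_{\mathrm{res}})
```

Under such a model, a reported figure such as "54% program variance" would be the program component's share of the summed components, e.g. \(\sigma^2_{\mathrm{prog}} / (\sigma^2_{\mathrm{prog}} + \sigma^2_{\mathrm{comp}} + \sigma^2_{\mathrm{learner}} + \sigma^2_{\mathrm{prog\times comp}} + \sigma^2_{\mathrm{res}})\); whether the residual term enters the denominator is an assumption here, not stated in the abstract.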
Program-related variance in milestones was substantial (54% for D1, 68% for D2), exceeding both learner-related variance in milestones (22% D1, 14% D2) and program-related variance in PMAC scores (12% D1, 10% D2). In contrast, learner variance accounted for 44% (D1) and 26% (D2) of the variance in PMAC scores. Within programs, PMAC scores were positively correlated with milestones for all but one competency.
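As an illustration only (not the authors' code or data), a crossed variance decomposition of this kind can be estimated in Python with statsmodels by treating all observations as a single group and specifying each factor as a variance component; the data below are simulated and the component names are ours.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate long-format data: one score per learner x competency (illustrative only).
rng = np.random.default_rng(0)
n_programs, learners_per_program, n_competencies = 8, 10, 5
comp_eff = rng.normal(0, 0.3, n_competencies)
rows = []
for p in range(n_programs):
    prog_eff = rng.normal(0, 1.0)
    pc_eff = rng.normal(0, 0.2, n_competencies)
    for l in range(learners_per_program):
        learner_eff = rng.normal(0, 0.5)
        for c in range(n_competencies):
            score = (3 + prog_eff + comp_eff[c] + learner_eff + pc_eff[c]
                     + rng.normal(0, 0.5))
            rows.append({"program": p, "learner": f"{p}-{l}",
                         "competency": c, "score": score})
df = pd.DataFrame(rows)

# Crossed random effects fit as variance components within one dummy group.
df["all_obs"] = 1
vc = {
    "program":     "0 + C(program)",
    "competency":  "0 + C(competency)",
    "learner":     "0 + C(learner)",
    "prog_x_comp": "0 + C(program):C(competency)",
}
fit = smf.mixedlm("score ~ 1", df, groups="all_obs",
                  vc_formula=vc, re_formula="0").fit()
print(fit.summary())  # lists each estimated variance component by name

# Percentages like those reported above are each component's share of total variance.
total_var = fit.vcomp.sum() + fit.scale
print("component shares:", np.round(fit.vcomp / total_var, 2))
print("residual share:", round(fit.scale / total_var, 2))
```

Fitting crossed components through a single dummy group can be slow for large data sets; dedicated mixed-model software (e.g., lme4 in R or SAS PROC MIXED) handles the same decomposition more directly.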
PMAC assessments provided scores with little program-specific variance and were more sensitive than milestones to differences among learners within programs. Milestones reflected greater differences by program than by learner. This may reflect program-based differences in intern performance or in how programs use the milestones as a reporting scale. Comparing milestones across individual learners without adjusting for program is problematic.