C.K. Boscardin is associate professor, Department of Medicine, University of California, San Francisco, San Francisco, California; ORCID: https://orcid.org/0000-0002-9070-8859.
G. Earnest is research analyst, Center for Faculty Educators, University of California, San Francisco, San Francisco, California.
Acad Med. 2020 Nov;95(11S Association of American Medical Colleges Learn Serve Lead: Proceedings of the 59th Annual Research in Medical Education Presentations):S109-S113. doi: 10.1097/ACM.0000000000003629.
Despite advances in the learning sciences that highlight the efficacy of elaborative interrogation, in which students explain and elaborate on concepts in their own words, assessment in medical education has commonly relied on multiple-choice questions (MCQs). Educators' reluctance to consider alternatives such as open-ended questions (OEQs) stems from the practical advantages of MCQs and the lack of empirical data on the predictive validity of OEQs for performance on other high-stakes assessments. In this study, the authors compared the predictive value of preclerkship assessments using OEQs for performance on clerkship examinations and United States Medical Licensing Examination (USMLE) Step 1.
The authors compared the 2 assessment formats using multiyear performance data (2015 and 2016 cohorts) on preclerkship MCQ versus OEQ examinations for predicting students' subsequent performance on 6 clerkship examinations and USMLE Step 1. They conducted regression analyses with clerkship exam scores and Step 1 scores as dependent variables and performance on MCQs or OEQs as the predictor in each model, then compared the predictive power of the two formats.
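The analytic approach described above can be sketched as follows. This is an illustrative example, not the authors' code: the data are synthetic, and all variable names and cohort parameters are assumptions. It fits one simple linear regression per question format and compares the resulting R-squared values, mirroring the comparison reported in the results.

```python
# Illustrative sketch: compare the variance in a downstream exam score
# explained by preclerkship OEQ vs. MCQ performance. Synthetic data only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 150  # hypothetical cohort size

# Synthetic preclerkship scores and a downstream score (e.g., a shelf exam);
# the coefficients below are arbitrary choices for demonstration.
oeq = rng.normal(75, 8, n)
mcq = rng.normal(78, 7, n)
shelf = 0.6 * oeq + 0.3 * mcq + rng.normal(0, 5, n)

def r_squared(predictor, outcome):
    """Fit outcome ~ predictor and return the model's R-squared."""
    X = predictor.reshape(-1, 1)
    return LinearRegression().fit(X, outcome).score(X, outcome)

r2_oeq = r_squared(oeq, shelf)
r2_mcq = r_squared(mcq, shelf)
print(f"R^2 with OEQ as predictor: {r2_oeq:.2f}")
print(f"R^2 with MCQ as predictor: {r2_mcq:.2f}")
```

In the study itself, separate models were fit for each of the 6 clerkship exams and for Step 1; the sketch shows only the core model comparison for a single outcome.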
Regression models using OEQ scores yielded consistently higher R-squared values for predicting clerkship exam (NBME shelf exam) scores than models using MCQ scores, except for one clerkship. For Step 1, the R-squared was higher with MCQs (59% of variance explained vs 46% with OEQs), but the OEQ cohort scored significantly higher on Step 1.
OEQ examinations predict performance on subsequent high-stakes MCQ examinations. Given their predictive value and closer alignment with scientific principles of effective learning, OEQ examinations are a format worthy of consideration in preclerkship medical education programs.