Puri Nitin, McCarthy Michael, Miller Bobby
Office of Medical Education, Joan C. Edwards School of Medicine, Marshall University, Huntington, WV, United States.
Front Med (Lausanne). 2022 Jan 27;8:798876. doi: 10.3389/fmed.2021.798876. eCollection 2021.
We have observed that students' performance in our pre-clerkship curriculum does not align well with their United States Medical Licensing Examination (USMLE) STEP1 scores. Students at risk of failing or underperforming on STEP1 have often excelled on our institutional assessments. We sought to test the validity and reliability of our course assessments in predicting STEP1 scores, and in the process, generate and validate a more accurate prediction model for STEP1 performance.
Student pre-matriculation and course assessment data of the Class of 2020 (n = 76) are used to generate a stepwise STEP1 prediction model, which is tested with the students of the Class of 2021 (n = 71). Predictions are developed at the time of matriculation and subsequently at the end of each course in the programming language R. For the Class of 2021, the predicted STEP1 scores are correlated with the actual STEP1 scores, and data agreement is tested with means-difference plots. A similar model is generated and tested for the Class of 2022.
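The two analysis steps described above, building a stepwise regression model and checking agreement with a means-difference (Bland-Altman) comparison, can be sketched as follows. This is a minimal illustration in Python rather than the authors' R code; the function names, the forward-selection strategy, and the stopping threshold are assumptions, not the paper's actual implementation.

```python
import numpy as np

def fit_r2(X, y):
    """Ordinary least squares with intercept; returns (R^2, coefficients)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot, beta

def forward_stepwise(X, y, names, min_gain=0.01):
    """Greedy forward selection: repeatedly add the predictor that most
    improves R^2, stopping when the gain falls below min_gain."""
    selected, best_r2 = [], 0.0
    while True:
        candidates = [
            (fit_r2(X[:, selected + [j]], y)[0], j)
            for j in range(X.shape[1]) if j not in selected
        ]
        if not candidates:
            break
        r2, j = max(candidates)
        if r2 - best_r2 < min_gain:
            break
        selected.append(j)
        best_r2 = r2
    return [names[j] for j in selected], best_r2

def bland_altman(predicted, actual):
    """Mean difference (bias) and 95% limits of agreement between
    predicted and actual scores, as plotted on a means-difference plot."""
    d = np.asarray(predicted) - np.asarray(actual)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)
```

In this sketch, the model is fit on one cohort's course scores and the resulting predictions for the next cohort are passed to `bland_altman` to quantify systematic over- or under-prediction.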
STEP1 predictions based on pre-matriculation data are unreliable and fail to identify at-risk students (R² = 0.02). STEP1 predictions for most year one courses (anatomy, biochemistry, physiology) correlate poorly with students' actual STEP1 scores (R = 0.30). STEP1 predictions improve for year two courses (microbiology, pathology, and pharmacology), but integrated courses with customized NBMEs provide more reliable predictions (R = 0.66). Predictions based on these integrated courses are reproducible for the Class of 2022.
MCAT scores and undergraduate GPA are poor predictors of students' STEP1 scores. Partially integrated courses with biweekly assessments do not promote problem-solving skills and leave students at risk of failing STEP1. Only courses with integrated and comprehensive assessments are reliable indicators of students' STEP1 preparation.