Imperial College School of Medicine, Imperial College London, London, UK.
Lee Kong Chian School of Medicine, Nanyang Technological University Singapore, Singapore.
Med Educ. 2022 Sep;56(9):936-948. doi: 10.1111/medu.14819. Epub 2022 May 16.
Although the BioMedical Admissions Test (BMAT) is widely used, there is limited evidence of its predictive validity, or of its incremental validity over prior educational attainment (PEA). We investigated BMAT's predictive and incremental validity for performance in two undergraduate medical schools: Imperial College School of Medicine (ICSM), UK, and Lee Kong Chian School of Medicine (LKCMedicine), Singapore. Our secondary goal was to compare the evidence collected with published evidence relating to comparable tools.
This was a retrospective cohort study of four ICSM cohorts (1188 students, entering 2010-2013) and three LKCMedicine cohorts (222 students, entering 2013-2015). We investigated associations between BMAT Section 1 ('Thinking Skills'), Section 2 ('Scientific Knowledge and Applications') and Section 3a ('Writing Task') scores and written and clinical assessment performance across all programme years. Incremental validity over PEA (A-levels) was investigated in a subset of ICSM students.
When BMAT sections were investigated independently, Section 2 scores predicted performance on all written assessments in both institutions, with mainly small effect sizes (standardised coefficient ranges: ICSM: 0.08-0.19; LKCMedicine: 0.22-0.36). Section 1 scores predicted Years 5 and 6 written assessment performance at ICSM (0.09-0.14) but predicted no outcomes at LKCMedicine. Section 3a scores predicted only Year 5 clinical assessment performance at ICSM, with a coefficient <0.1. There were no positive associations with standardised coefficients >0.1 between BMAT performance and clinical assessment performance. Multivariable regressions confirmed that Section 2 scores were the most predictive. We found no clear evidence of incremental validity over A-level grades for any BMAT section scores.
Schools that wish to assess scientific knowledge independently of A-levels may find BMAT Section 2 useful. Comparison with previous studies indicates that, overall, BMAT seems less useful than comparable tools. Larger scale studies are needed. Broader questions regarding why institutions adopt certain admissions tests, including those with little supporting evidence, need consideration.