McGown Patrick J, Nichols Molly M, Forshaw Jennifer A, Rich Antonia, Harrison David, Brown Celia, Sam Amir H
Imperial College School of Medicine, Imperial College London, London, UK.
Chelsea & Westminster NHS Trust, London, UK.
BMC Med Educ. 2025 Jun 5;25(1):840. doi: 10.1186/s12909-025-07237-0.
Evaluation of clinical performance is essential in all medical school programmes. Students undergo multiple clinical placements in different disciplines and settings, and typically must pass an end-of-placement supervisor sign-off evaluation to progress. However, the validity of this sign-off model remains unclear. This study aims to assess the extent to which this assessment method predicts performance in summative medical school examinations.
We compared summative knowledge and clinical skills examination scores with end-of-placement supervisor sign-off ratings of ‘knowledge’, ‘clinical skills’ and ‘practical skills’ for undergraduate medical students across three clinical placements at Imperial College London, UK (n = 355). Predictive validity was assessed statistically using Ordinary Least Squares regression.
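The regression approach described above can be sketched as follows. This is a minimal illustration only, using synthetic data: the rating scale, score distribution, and variable names are assumptions, not the study's actual dataset or code.

```python
import numpy as np

# Hypothetical sketch of the study's OLS analysis: regress summative exam
# scores on end-of-placement supervisor ratings. All data here are synthetic.
rng = np.random.default_rng(0)
n = 355                                  # cohort size reported in the study
ratings = rng.integers(1, 6, size=n)     # supervisor sign-off ratings (1-5 scale, assumed)
scores = 60.0 + 0.5 * ratings + rng.normal(0.0, 10.0, size=n)  # exam scores (%), synthetic

# Fit y = b0 + b1 * x by ordinary least squares
X = np.column_stack([np.ones(n), ratings])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)

# Coefficient of determination: share of exam-score variance explained
# by the supervisor rating (the effect-size measure reported as r in the paper)
pred = X @ beta
r2 = 1.0 - np.sum((scores - pred) ** 2) / np.sum((scores - scores.mean()) ** 2)
print(f"slope = {beta[1]:.3f}, r^2 = {r2:.4f}")
```

A very small r² (such as the 0.02 reported for GP ‘clinical skills’ ratings) means the rating explains almost none of the variance in examination scores, even when the slope is statistically significant.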
End-of-placement ratings from hospital supervisors did not significantly predict student performance in summative knowledge tests or clinical skills assessments. ‘Knowledge’ and ‘practical skills’ ratings lacked predictive validity across all supervisors. A statistically significant association was found between GP supervisor ratings of ‘clinical skills’ and examination scores, but the effect size was educationally insignificant (p = 0.01, r = 0.02).
End-of-placement supervisor ratings did not demonstrate educationally significant predictive validity for end-of-year examinations. Multi-source feedback, embedded in-placement assessment, and additional formalised supervision time in supervisors’ work schedules could improve the educational value of placements for students and strengthen the sign-off process. Different sign-off requirements could be considered for GP and hospital settings, with constructs tailored to each clinical environment.
The online version contains supplementary material available at 10.1186/s12909-025-07237-0.