
Is There Variability in Scoring of Student Surgical OSCE Performance Based on Examiner Experience and Expertise?

Authors

Donohoe Claire L, Reilly Frank, Donnelly Suzanne, Cahill Ronan A

Affiliations

Department of Surgery, Mater Misericordiae University Hospital, Dublin, Ireland; Department of Surgery, St James' Hospital, Dublin 8 and Trinity College, Dublin, Ireland.

Department of Surgery, Mater Misericordiae University Hospital, Dublin, Ireland.

Publication

J Surg Educ. 2020 Sep-Oct;77(5):1202-1210. doi: 10.1016/j.jsurg.2020.03.009. Epub 2020 Apr 23.

Abstract

OBJECTIVE

To investigate the influence of clinical experience and content expertise on global assessment scores in a Surgical Objective Structured Clinical Examination (OSCE) for senior medical undergraduate students.

DESIGN

Scripted videos of simulated student performance in an OSCE at two standards (clear pass and borderline) were awarded a global score on each of two rating scales by a range of clinical assessors. Results were analysed by examiner experience and content expertise.

SETTING

The study was conducted in a large medical school in Ireland. Examiners were consultant- and training-grade doctors from three university teaching hospitals.

PARTICIPANTS

In total, 147 assessors participated. Of these, 75 (51%) were surgeons, 25 (17%) had subspecialty surgical expertise directly relevant to the OSCE station, and 41 were consultants.

RESULTS

Scoring by the responsible academic set the benchmark (gold standard). On multivariable linear regression analysis, neither clinical experience (consultant status) nor relevant content expertise in surgery was independently predictive of assessor grading for either clear pass or borderline student performance. No educational factor (previous examining experience/training, self-rated confidence in assessment, or frame of reference) was significant. Assessor gender (male) was associated with awarding a fail grade for borderline performance. Trainees were reliable graders of borderline performance but were more lenient than the gold standard for the clear pass. We report greater agreement with the gold-standard score when the global descriptive scale was used, with strong agreement across all assessors in the borderline case.

CONCLUSIONS

Neither assessor clinical experience nor content expertise is independently predictive of the grade awarded in an OSCE. Where non-experts or trainees assess, we find evidence for the use of a descriptive global score to maximise agreement with the expert gold standard, particularly for borderline performance. These results inform the fair and reliable participation of a range of examiners across subspecialty stations in the surgical OSCE format.

