
Building reliable and generalizable clerkship competency assessments: Impact of 'hawk-dove' correction.

Author Information

Santen Sally A, Ryan Michael, Helou Marieka A, Richards Alicia, Perera Robert A, Haley Kellen, Bradner Melissa, Rigby Fidelma B, Park Yoon Soo

Affiliations

Virginia Commonwealth University School of Medicine, Richmond, VA, USA.

College of Medicine, University of Illinois at Chicago, Chicago, IL, USA.

Publication Information

Med Teach. 2021 Dec;43(12):1374-1380. doi: 10.1080/0142159X.2021.1948519. Epub 2021 Sep 17.

Abstract

PURPOSE

Systematic differences among raters' approaches to student assessment may result in leniency or stringency of assessment scores. This study examines the generalizability of medical student workplace-based competency assessments including the impact of rater-adjusted scores for leniency and stringency.

METHODS

Data were collected from summative clerkship assessments completed for 204 students during the 2017-2018 academic year at a single institution. Generalizability theory was used to explore variance attributed to different facets (rater, learner, item, and competency domain) through three unbalanced random-effects models by clerkship, including application of assessor stringency-leniency adjustments.
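The core of a generalizability analysis is partitioning score variance into student, rater, and residual components. The sketch below is illustrative only, not the authors' model: it simulates a balanced, fully crossed student-by-rater design (the study used unbalanced random-effects models) with hypothetical effect sizes chosen so the student share lands near the paper's 4-8% range, then estimates the components from expected mean squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n_s, n_r = 204, 12  # 204 students as in the study; the rater count is an assumption

# Simulated scores: small student signal, large rater (leniency) effect, noise.
student = rng.normal(0, 0.5, n_s)[:, None]
rater = rng.normal(0, 1.5, n_r)[None, :]
scores = 3.0 + student + rater + rng.normal(0, 1.0, (n_s, n_r))

# Two-way ANOVA mean squares (one observation per student-rater cell,
# so the interaction is confounded with error in the residual term).
grand = scores.mean()
ms_s = n_r * np.sum((scores.mean(axis=1) - grand) ** 2) / (n_s - 1)
ms_r = n_s * np.sum((scores.mean(axis=0) - grand) ** 2) / (n_r - 1)
resid = scores - scores.mean(axis=1, keepdims=True) - scores.mean(axis=0, keepdims=True) + grand
ms_res = np.sum(resid ** 2) / ((n_s - 1) * (n_r - 1))

# Variance components from expected mean squares, truncated at zero.
var_e = ms_res
var_s = max((ms_s - ms_res) / n_r, 0.0)
var_r = max((ms_r - ms_res) / n_s, 0.0)
total = var_s + var_r + var_e
print(f"student share of variance: {var_s / total:.1%}")
```

With these assumed effect sizes, most of the variance is attributed to raters and error rather than students, mirroring the pattern the study reports.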

RESULTS

In the original assessments, only 4-8% of the variance was attributed to the student with the remainder being rater variance and error. Aggregating items to create a composite score increased variability attributable to the student (5-13% of variance). Applying a stringency-leniency ('hawk-dove') correction substantially increased the variance attributed to the student (14.8-17.8%) and reliability. Controlling for assessor leniency/stringency reduced measurement error, decreasing the number of assessments required for generalizability from 16-50 to 11-14.
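A minimal sketch of the idea behind a stringency-leniency ('hawk-dove') correction, under simplifying assumptions that are not the authors' procedure: each rater's bias is estimated as the deviation of that rater's mean score from the grand mean, and subtracted out. All sizes and effect magnitudes below are hypothetical. In an unbalanced design, where each student happens to draw different raters, this adjustment recovers more of the true student signal.

```python
import numpy as np

rng = np.random.default_rng(1)
n_s, n_r, per = 60, 10, 4  # hypothetical: each student scored by 4 of 10 raters
ability = rng.normal(0, 0.5, n_s)     # true student signal (small)
leniency = rng.normal(0, 1.5, n_r)    # rater hawk/dove effect (large)

# Unbalanced assignment: unobserved cells are NaN.
scores = np.full((n_s, n_r), np.nan)
for s in range(n_s):
    for r in rng.choice(n_r, per, replace=False):
        scores[s, r] = 3.0 + ability[s] + leniency[r] + rng.normal(0, 0.7)

# Hawk-dove correction: subtract each rater's mean deviation from the grand mean.
rater_bias = np.nanmean(scores, axis=0) - np.nanmean(scores)
adjusted = scores - rater_bias  # broadcasts over columns; NaN cells stay NaN

# Student composite scores before and after correction.
raw = np.nanmean(scores, axis=1)
adj = np.nanmean(adjusted, axis=1)
raw_r = np.corrcoef(raw, ability)[0, 1]
adj_r = np.corrcoef(adj, ability)[0, 1]
print(f"correlation with true ability: raw {raw_r:.2f} -> adjusted {adj_r:.2f}")
```

Because the adjusted composites track true ability more closely, fewer ratings are needed to reach a given reliability, which is the mechanism behind the drop from 16-50 to 11-14 assessments reported above.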

CONCLUSIONS

Similar to prior research, most of the variance in competency assessment scores was attributable to raters, with only a small proportion attributed to the student. Making stringency-leniency corrections using rater-adjusted scores improved the psychometric characteristics of assessment scores.

