Lertsakulbunlue Sethapong, Kantiwong Anupong
Department of Pharmacology, Phramongkutklao College of Medicine, Bangkok, 10400, Thailand.
Adv Simul (Lond). 2024 Jun 24;9(1):25. doi: 10.1186/s41077-024-00301-7.
Peer assessment can enhance understanding of the simulation-based learning (SBL) process and promote feedback, although research on peer-assessment rubrics remains limited. This study evaluates the validity and reliability of a peer assessment rubric and determines the number of items and raters needed for a reliable assessment in the advanced cardiac life support (ACLS) context.
Ninety-five third-year medical students participated in the ACLS course and were assessed by two teachers (190 ratings) and three peers (285 ratings). Students rotated roles, and each was assessed once as team leader on a ten-item rubric covering three domains: electrocardiogram and ACLS skills, management and mechanisms, and the affective domain. Messick's validity framework guided the collection of validity evidence.
Five sources of validity evidence were collected: (1) content: expert review and iterative alpha, beta, and pilot testing for content validation; (2) response process: acceptable peer interrater reliability (intraclass correlation coefficient = 0.78, p = 0.001) and a Cronbach's alpha of 0.83; (3) internal structure: generalizability theory showed that one peer rater using all ten items achieved sufficient reliability (Phi coefficient = 0.76) and that two raters improved it (Phi coefficient = 0.85), while confirmatory factor analysis supported construct validity; (4) relations to other variables: peer and teacher ratings were similar, although peers rated scenario management higher, and further generalizability analysis indicated reliability comparable to that of the same number of teacher raters; (5) consequences: over 80% of students perceived peer assessment positively on a 5-point Likert-scale survey.
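For readers unfamiliar with the decision-study step behind the Phi coefficients above, a standard generalizability-theory formulation for a fully crossed persons × raters × items design is sketched below. This generic expression is assumed here for illustration only; the article's exact design and variance-component estimates are not given in the abstract.

\[
\Phi = \frac{\sigma^2_p}{\sigma^2_p + \frac{\sigma^2_r}{n_r} + \frac{\sigma^2_i}{n_i} + \frac{\sigma^2_{pr}}{n_r} + \frac{\sigma^2_{pi}}{n_i} + \frac{\sigma^2_{ri}}{n_r n_i} + \frac{\sigma^2_{pri,e}}{n_r n_i}}
\]

Here \(\sigma^2_p\) is the person (true-score) variance and the remaining components are sources of absolute error divided by the projected numbers of raters (\(n_r\)) and items (\(n_i\)). Increasing \(n_r\) from one to two shrinks every rater-related error term, which is consistent with the reported rise in dependability from Phi = 0.76 to Phi = 0.85 when a second peer rater is added.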
This study confirms the validity and reliability of the ACLS SBL rubric when peers serve as raters. Such rubrics can make performance criteria explicit, ensure uniform grading, provide targeted feedback, and promote peer-assessment skills.