Campbell University College of Pharmacy & Health Sciences, PO Box 1090, Buies Creek, NC 27506, USA.
Curr Pharm Teach Learn. 2024 Nov;16(11):102159. doi: 10.1016/j.cptl.2024.102159. Epub 2024 Jul 31.
Objective structured clinical examinations (OSCEs) are a valuable assessment within healthcare education, as they provide the opportunity for students to demonstrate clinical competency, but they can be resource intensive because they require faculty graders. The purpose of this study was to determine how overall OSCE scores compared among faculty, peer, and self-evaluations within a Doctor of Pharmacy (PharmD) curriculum.
This study was conducted during a required nonprescription therapeutics course. Seventy-seven first-year PharmD students were included in the study, with 6 faculty members each grading 10-15 students. Each student was assessed by 3 evaluators: self, peer, and faculty. All evaluators used the same rubric. The primary endpoint of the study was the comparison of overall scores among the groups. Secondary endpoints included interrater reliability and quantification of feedback type by evaluator group.
The maximum possible score on the OSCE was 50 points; mean scores for self, peer, and faculty evaluations were 43.3, 43.5, and 41.7 points, respectively. No statistically significant difference was found between the self and peer raters. However, statistically significant differences were found for self versus faculty (p = 0.005) and for peer versus faculty (p < 0.001). When these scores were converted to letter grades (A, B, C or lower), higher grades showed greater similarity among raters than lower grades. Despite the differences in scoring, interrater reliability (Kendall's W) for overall letter grade was 0.79, which is considered strong agreement.
This study demonstrated that peer and self-evaluation of an OSCE provide a comparable alternative to traditional faculty grading, especially for higher-performing students. However, given the differences in overall grades, this strategy should be reserved for low-stakes assessments and basic skill evaluations.