College of Pharmacy, Qatar University, PO Box 2713, Doha, Qatar.
BMC Med Educ. 2017 Nov 21;17(1):222. doi: 10.1186/s12909-017-1054-5.
In-training evaluation reports (ITERs) of student workplace-based learning are completed by clinical supervisors across various health disciplines. However, outside of medicine, the quality of submitted workplace-based assessments is largely uninvestigated. This study assessed the quality of ITERs in pharmacy and whether clinical supervisors could be trained to complete higher quality reports.
A random sample of ITERs submitted in a pharmacy program during 2013-2014 was evaluated. These ITERs served as a historical control (control group 1) for comparison with ITERs submitted in 2015-2016 by clinical supervisors who participated in an interactive faculty development workshop (intervention group) and those who did not (control group 2). Two trained independent raters scored the ITERs using a previously validated nine-item scale assessing report quality, the Completed Clinical Evaluation Report Rating (CCERR). The scoring scale for each item is anchored at 1 ("not at all") and 5 ("exemplary"), with 3 categorized as "acceptable".
The mean CCERR score for reports completed after the workshop (22.9 ± 3.39) did not significantly improve compared with prospective control group 2 (22.7 ± 3.63, p = 0.84) and was worse than that of historical control group 1 (37.9 ± 8.21, p = 0.001). Mean scores for individual CCERR items fell below the acceptable threshold in 5 of the 9 domains for control group 1, including supervisor-documented evidence of specific examples clearly explaining weaknesses and concrete recommendations for student improvement. Mean item scores fell below the acceptable threshold in 6 and 7 of the 9 domains for control group 2 and the intervention group, respectively.
This study is the first to use the CCERR to evaluate ITER quality outside of medicine. Findings demonstrate low baseline CCERR scores in a pharmacy program that were not demonstrably changed by a faculty development workshop; however, strategies are identified to augment future rater training.