Peeters Michael J, Cor M Ken, Castleberry Ashley N, Gonyeau Michael J
University of Toledo College of Pharmacy & Pharmaceutical Sciences, Toledo, OH, USA.
University of Alberta Faculty of Pharmacy & Pharmaceutical Sciences, Edmonton, AB, Canada.
Am J Pharm Educ. 2025 Apr;89(4):101379. doi: 10.1016/j.ajpe.2025.101379. Epub 2025 Feb 28.
Poster quality at academic conferences varies. Furthermore, the few poster-quality rubrics in the literature have limited psychometric evidence. Thus, we compared holistic vs mixed-approach scoring using a recently created poster rubric, scored by multiple raters, to evaluate validation evidence and time-to-score utility.
Sixty research posters were randomly selected from an academic conference's online poster repository. Using a previously created rubric (and without rubric training), 4 pharmacy education faculty members with varying levels of poster-related experience scored each poster. Initially, each rater holistically scored the posters, providing a single overall score for each. Approximately 1 month later, the raters scored the posters again using a mixed approach, assigning 4 subscores and a new overall score. We used Generalizability Theory to assess the effect of rater experience and the Rasch Measurement Model to examine rating-scale effectiveness and construct validity. Time-to-score for each poster was also compared.
Generalizability Theory showed greater reliability with more experienced raters or when using the mixed approach. Rasch analysis indicated that rating scales functioned better with the mixed approach, and Wright maps of the construct provided useful measurement validation evidence. Raters scored more quickly (30-60 s per poster) with holistic scoring, though differences in rater experience affected reliability. Meanwhile, mixed-approach scoring was slightly slower (60-90 s per poster), but the impact of rater experience was reduced.
Scoring was slightly faster with the holistic approach than with the mixed-approach rubric; however, differences in rater experience were lessened with the mixed approach. The mixed approach was preferable because it allowed for quick scoring while reducing the need for prior training. This rubric could be used by students and new faculty when creating posters, or by poster-competition judges. Furthermore, mixed-approach rubrics may be applied beyond posters, including to oral presentations or objective structured clinical examination stations.