Palermo, Corey
Measurement Incorporated, Durham, NC, United States.
Front Psychol. 2022 Aug 10;13:937097. doi: 10.3389/fpsyg.2022.937097. eCollection 2022.
Raters may introduce construct-irrelevant variance when evaluating written responses to performance assessments, threatening the validity of students' scores. Numerous factors in the rating process, including the content of students' responses, the characteristics of raters, and the context in which the scoring occurs, are thought to influence the quality of raters' scores. Despite considerable study of rater effects, little research has examined the relative impacts of the factors that influence rater accuracy. In practice, such integrated examinations are needed to support evidence-based decisions about rater selection, training, and feedback. This study provides the first naturalistic, integrated examination of rater accuracy in a large-scale assessment program. Leveraging rater monitoring data from an English language arts (ELA) summative assessment program, I specified cross-classified multilevel models via Bayesian (i.e., Markov chain Monte Carlo) estimation to decompose the impact of response content, rater characteristics, and scoring contexts on rater accuracy. Results showed relatively little variation in accuracy attributable to teams, items, and raters. Raters did not collectively exhibit differential accuracy over time, though individual raters' scoring accuracy varied significantly from response to response and day to day. I found considerable variation in accuracy across responses, which was partly explained by text features and other measures of response content that influenced scoring difficulty. Some text features differentially influenced the difficulty of scoring research and writing content. Multiple measures of raters' qualification performance predicted their scoring accuracy, but general rater background characteristics, including experience and education, did not. Site-based and remote raters demonstrated comparable accuracy, while evening-shift raters were slightly less accurate, on average, than day-shift raters. This naturalistic, integrated examination of rater accuracy extends previous research and offers implications for rater recruitment, training, monitoring, and feedback to improve human evaluation of written responses.
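To make the modeling approach concrete, the sketch below shows how a cross-classified multilevel model of rater accuracy might be specified and estimated via MCMC using PyMC. This is a minimal illustration under stated assumptions, not the paper's actual specification: the simulated data, the binary agreement outcome, the covariates (a text feature, rater qualification score, evening-shift indicator), and all priors are hypothetical.

```python
# Minimal sketch of a cross-classified multilevel model for rater accuracy,
# estimated with MCMC via PyMC. All data and priors here are illustrative
# assumptions; the published study's specification may differ.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)

# --- Hypothetical monitoring data: one row per scored validity response ---
n_obs, n_raters, n_items, n_teams = 500, 40, 10, 5
rater = rng.integers(n_raters, size=n_obs)   # which rater scored the response
item = rng.integers(n_items, size=n_obs)     # which item the response answers
team = rng.integers(n_teams, size=n_obs)     # scoring team of the rater
qual = rng.normal(size=n_raters)             # rater qualification score (z-scored)
evening = rng.integers(2, size=n_raters)     # 1 = evening shift, 0 = day shift
text_len = rng.normal(size=n_obs)            # a response text feature (z-scored)
accurate = rng.integers(2, size=n_obs)       # 1 = score matched the expert score

with pm.Model() as model:
    # Crossed (non-nested) random intercepts: a response is simultaneously
    # classified by rater, item, and team, so each grouping gets its own
    # independent variance component.
    sd_r = pm.HalfNormal("sd_rater", 1.0)
    sd_i = pm.HalfNormal("sd_item", 1.0)
    sd_t = pm.HalfNormal("sd_team", 1.0)
    u_r = pm.Normal("u_rater", 0.0, sd_r, shape=n_raters)
    u_i = pm.Normal("u_item", 0.0, sd_i, shape=n_items)
    u_t = pm.Normal("u_team", 0.0, sd_t, shape=n_teams)

    # Fixed effects for response content and rater/context covariates.
    b0 = pm.Normal("intercept", 0.0, 1.5)
    b_text = pm.Normal("b_text_len", 0.0, 1.0)
    b_qual = pm.Normal("b_qual", 0.0, 1.0)
    b_eve = pm.Normal("b_evening", 0.0, 1.0)

    eta = (b0 + b_text * text_len + b_qual * qual[rater]
           + b_eve * evening[rater]
           + u_r[rater] + u_i[item] + u_t[team])

    # Accuracy treated as binary agreement with the expert score: logit link.
    pm.Bernoulli("accurate_obs", logit_p=eta, observed=accurate)

    idata = pm.sample(1000, tune=1000, target_accept=0.9)  # MCMC (NUTS)
```

The posterior standard deviations `sd_rater`, `sd_item`, and `sd_team` correspond to the variance decomposition the abstract describes: small values for these components, relative to residual response-level variation, would mirror the finding that teams, items, and raters accounted for relatively little of the variation in accuracy.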