James Soland, Megan Kuhfeld
University of Virginia, Charlottesville, VA, USA.
NWEA, Portland, OR, USA.
Appl Psychol Meas. 2022 Jan;46(1):53-67. doi: 10.1177/01466216211051728. Epub 2021 Dec 7.
Researchers in the social sciences often obtain ratings of a construct of interest from multiple raters. While using multiple raters helps avoid the subjectivity of any single person's responses, rater disagreement can be a problem. A variety of models exist to address rater disagreement in both structural equation modeling and item response theory frameworks. Recently, Bauer et al. (2013) developed a model, referred to as the "trifactor model," to provide applied researchers with a straightforward way of estimating scores that are purged of rater-specific (idiosyncratic) variance. Although the model is intended to be usable and interpretable, little is known about the circumstances under which it performs well and those under which it does not. We conduct simulation studies to examine the performance of the trifactor model under a range of sample sizes and model specifications and then compare model fit, bias, and convergence rates.