Merhof Viola, Böhm Caroline M, Meiser Thorsten
University of Mannheim, Germany.
Rhineland-Palatinate Technical University of Kaiserslautern-Landau, Germany.
Educ Psychol Meas. 2024 Oct;84(5):927-956. doi: 10.1177/00131644231213319. Epub 2023 Dec 22.
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person parameters can be separated from each other. Here we investigate conditions under which the substantive meanings of estimated extreme response style parameters are potentially invalid and do not correspond to the meanings attributed to them, that is, content-unrelated category preferences. Rather, the response style factor may mimic the trait and capture part of the trait-induced variance in item responding, thus impairing the meaningful separation of the person parameters. Such a mimicry effect is manifested in a biased estimation of the covariance of response style and trait, as well as in an overestimation of the response style variance. Both can lead to severely misleading conclusions drawn from IRTree analyses. A series of simulation studies reveals that mimicry effects depend on the distribution of observed responses and that the estimation biases are stronger the more asymmetrically the responses are distributed across the rating scale. It is further demonstrated that extending the commonly used IRTree model with unidimensional sub-decisions by multidimensional parameterizations counteracts mimicry effects and facilitates the meaningful separation of parameters. An empirical example of the Programme for International Student Assessment (PISA) background questionnaire illustrates the threat of mimicry effects in real data. The implications of applying IRTree models for empirical research questions are discussed.
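The decomposition the abstract refers to can be sketched as a worked equation. The following is a generic three-node parameterization for a 5-point rating scale in the style of common IRTree models; the node structure and the symbols (trait θ, extreme response style η, midpoint tendency γ) are illustrative assumptions, not the article's exact specification:

```latex
% Sub-decisions for person j responding to a 5-point item i:
%   M: choose the midpoint (category 3) or not
%   D: agree (categories 4, 5) vs. disagree (1, 2), given M = 0
%   E: choose the extreme category (1 or 5), given M = 0
% Node D is driven by the trait \theta_j; node E by the
% extreme response style \eta_j.
\begin{align}
  P(M_{ij} = 1) &= \operatorname{logit}^{-1}(\gamma_j - \alpha_i), \\
  P(D_{ij} = 1) &= \operatorname{logit}^{-1}(\theta_j - \beta_i), \\
  P(E_{ij} = 1) &= \operatorname{logit}^{-1}(\eta_j - \delta_i), \\
  P(Y_{ij} = 5) &= \bigl(1 - P(M_{ij} = 1)\bigr)\,
                   P(D_{ij} = 1)\, P(E_{ij} = 1).
\end{align}
```

In this unidimensional setup each node loads on a single person parameter; the multidimensional extension mentioned in the abstract instead lets trait and response style jointly enter a sub-decision, which is what counteracts the mimicry effect.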
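To make the simulation finding concrete (mimicry risk grows with asymmetry of the observed response distribution), the sketch below generates responses from a simple three-node tree (midpoint M, direction D, extremity E) and shows that shifting the trait mean skews the category distribution. All names, node structure, and parameter values are illustrative assumptions, not the article's simulation design:

```python
# Hypothetical sketch of an IRTree data-generating process for a 5-point scale.
# Trait theta drives the agree/disagree node; extreme response style eta
# drives the extremity node; the two are simulated as independent.
import math
import random

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulate_response(theta, eta, rng, gamma=-1.0, beta=0.0, delta=0.0):
    """Draw one response in {1, ..., 5} from the three-node tree."""
    if rng.random() < inv_logit(gamma):              # node M: midpoint?
        return 3
    agree = rng.random() < inv_logit(theta - beta)   # node D: direction
    extreme = rng.random() < inv_logit(eta - delta)  # node E: extremity
    if agree:
        return 5 if extreme else 4
    return 1 if extreme else 2

def response_distribution(theta_mean, n_persons=2000, n_items=10, seed=1):
    """Empirical category proportions for a population with the given trait mean."""
    rng = random.Random(seed)
    counts = {k: 0 for k in range(1, 6)}
    for _ in range(n_persons):
        theta = rng.gauss(theta_mean, 1.0)
        eta = rng.gauss(0.0, 1.0)  # response style independent of the trait
        for _ in range(n_items):
            counts[simulate_response(theta, eta, rng)] += 1
    total = n_persons * n_items
    return {k: v / total for k, v in counts.items()}

# Centered trait -> roughly symmetric responses; shifted trait -> asymmetric
# responses, the condition under which the abstract reports strong mimicry.
symmetric = response_distribution(theta_mean=0.0)
asymmetric = response_distribution(theta_mean=1.5)
```

Under the shifted trait mean, extreme responses pile up in category 5, so a misspecified unidimensional model can attribute trait-induced variance to the response style factor; the decomposition itself is unchanged between the two conditions.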