Zeng Biao, Jeon Minjeong, Wen Hongbo
Collaborative Innovation Center of Assessment toward Basic Education Quality, Beijing Normal University, Beijing, China.
Department of Education, School of Education and Information Studies, University of California, Los Angeles, CA, United States.
Front Psychol. 2024 Oct 4;15:1304870. doi: 10.3389/fpsyg.2024.1304870. eCollection 2024.
Researchers often combine positively and negatively worded items when constructing Likert scales. This combination, however, may introduce method effects due to differences in item wording. Although previous studies have tried to quantify these effects by applying factor analysis to scales with different content, the impact of varied item wording on participants' choices among specific response options remains unexplored. To address this gap, we administered four versions of the Undergraduate Learning Burnout (ULB) scale, each characterized by a distinct valence of item wording. After collecting responses from 1,131 college students, we fitted unidimensional, multidimensional, and bi-factor Graded Response Models. The results suggested that the ULB scale supports a unidimensional structure for the learning burnout trait. However, mixing wording valences within the scale introduced additional method factors that explained a considerable share of variance. Notably, positively worded items demonstrated greater discriminative power and more effectively counteracted the response bias associated with negatively worded items, especially between the "Strongly Disagree" and "Disagree" options. While there were no substantial differences in overall learning burnout traits among respondents across scale versions, slight variations were noted in their distributions. Combining positive and negative wordings reduced the reliability of the learning burnout trait measurement. Consequently, we recommend using exclusively positively worded items and avoiding mixed item wording during scale construction. If a combination is essential, the bi-factor IRT model may help separate the method effects arising from wording valence.
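The Graded Response Models used in the analysis assign each Likert category a probability given a respondent's latent trait level, a discrimination parameter, and ordered thresholds. Below is a minimal, self-contained sketch of Samejima's GRM category probabilities for a single item; the parameter values are illustrative only and are not estimates from this study.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category response probabilities under Samejima's Graded
    Response Model for one polytomous item.

    theta : latent trait value for the respondent
    a     : item discrimination parameter
    b     : ordered boundary thresholds
            (K thresholds -> K + 1 response categories)
    """
    b = np.asarray(b, dtype=float)
    # Cumulative probability of responding in category k or higher
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # Pad with P(X >= lowest) = 1 and P(X > highest) = 0,
    # then difference adjacent cumulative curves
    upper = np.concatenate(([1.0], p_star))
    lower = np.concatenate((p_star, [0.0]))
    return upper - lower

# Illustration: a 5-point item ("Strongly Disagree" ... "Strongly Agree")
# with hypothetical parameters a = 1.8 and evenly spaced thresholds
probs = grm_category_probs(theta=0.0, a=1.8, b=[-1.5, -0.5, 0.5, 1.5])
```

A higher discrimination parameter sharpens the separation between adjacent categories, which is how the study compares the discriminative power of positively versus negatively worded items; the bi-factor extension adds item-specific loadings on a wording method factor alongside the general burnout trait.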