Birney, Damian P.; Beckmann, Jens F.; Beckmann, Nadin; Stemler, Steven E.
School of Psychology, The University of Sydney, Sydney, NSW, Australia.
School of Education, Durham University, Durham, United Kingdom.
Front Psychol. 2022 Feb 16;13:812963. doi: 10.3389/fpsyg.2022.812963. eCollection 2022.
Researchers rely on psychometric principles when trying to gain an understanding of unobservable psychological phenomena disconfounded from the methods used to observe them. Psychometric models provide us with tools to support this endeavour, but they are agnostic to the meaning researchers intend to attribute to the data. We define method effects as resulting from actions that weaken the psychometric structure of measurement, and argue that the solution to this confounding will ultimately rest on testing whether the data collected fit a psychometric model based on a substantive theory, rather than on a search for the model that best fits the data. We highlight the importance of taking the notions of fundamental measurement seriously by reviewing distinctions between the Rasch measurement model and the more generalised 2PL and 3PL IRT models. We then present two lines of research that highlight considerations of making method effects explicit in experimental designs. First, we contrast the use of experimental manipulations to study measurement reactivity during the assessment of metacognitive processes with factor-analytic research of the same. The former suggests differential performance-facilitating and -inhibiting reactivity as a function of other individual differences, whereas factor-analytic research suggests a ubiquitous, monotonically predictive confidence factor. Second, we evaluate differential effects of context and source on within-individual variability indices of personality derived from multiple observations, highlighting again the importance of a structured and theoretically grounded observational framework. We conclude by arguing that substantive variables can act as method effects and should be considered at the time of design rather than after the fact, and without compromising measurement ideals.
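The distinction between the Rasch model and the more generalised 2PL and 3PL models referred to in the abstract can be made concrete with their standard item response functions. The notation below is the conventional one (not taken from the article itself): \(\theta\) is the person's latent trait level, \(b_i\) the difficulty of item \(i\), \(a_i\) its discrimination, and \(c_i\) its pseudo-guessing lower asymptote.

```latex
% Rasch (1PL): all items share a common discrimination; only difficulty varies.
P(X_i = 1 \mid \theta) \;=\; \frac{\exp(\theta - b_i)}{1 + \exp(\theta - b_i)}

% 2PL: each item has its own discrimination parameter a_i.
P(X_i = 1 \mid \theta) \;=\; \frac{1}{1 + \exp\!\bigl(-a_i(\theta - b_i)\bigr)}

% 3PL: adds a lower asymptote c_i (e.g., guessing on multiple-choice items).
P(X_i = 1 \mid \theta) \;=\; c_i + \frac{1 - c_i}{1 + \exp\!\bigl(-a_i(\theta - b_i)\bigr)}
```

Because the Rasch model constrains all discriminations to be equal, raw sum scores are sufficient statistics for \(\theta\) and items are invariantly ordered by difficulty across persons, properties tied to fundamental measurement that the 2PL and 3PL relax in exchange for better fit to the data.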