Dornsife Center for Self-Report Science, and Center for Economic & Social Research, University of Southern California, 635 Downey Way, Los Angeles, CA, 90089-3332, USA.
Department of Psychology, University of Southern California, Los Angeles, CA, USA.
Behav Res Methods. 2024 Oct;56(7):6741-6758. doi: 10.3758/s13428-024-02388-2. Epub 2024 Mar 25.
Questionnaires are ever-present in survey research. In this study, we examined whether an indirect indicator of general cognitive ability could be developed based on response patterns in questionnaires. We drew on two established phenomena characterizing connections between cognitive ability and people's performance on basic cognitive tasks, and examined whether they apply to questionnaire responses. (1) The worst performance rule (WPR) states that people's worst performance on multiple sequential tasks is more indicative of their cognitive ability than their average or best performance. (2) The task complexity hypothesis (TCH) suggests that relationships between cognitive ability and performance increase with task complexity. We conceptualized the items of a questionnaire as a series of cognitively demanding tasks. A graded response model was used to estimate respondents' performance on each item based on the difference between the observed and model-predicted response ("response error" scores). Analyzing data from 102 items (21 questionnaires) collected from a large-scale nationally representative sample of people aged 50+ years, we found robust associations of cognitive ability with a person's largest but not with their smallest response error scores (supporting the WPR), and stronger associations of cognitive ability with response errors for more complex than for less complex questions (supporting the TCH). Results replicated across two independent samples and six assessment waves. A latent variable of response errors estimated for the most complex items correlated .50 with a latent cognitive ability factor, suggesting that response patterns can be utilized to extract a rough indicator of general cognitive ability in survey research.
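The core quantity in the abstract, a "response error" score, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a fitted IRT (graded response) model has already produced an expected response for each person-item pair, and all function names and data values here are hypothetical.

```python
def response_errors(observed, predicted):
    """Per-item response error: absolute difference between the observed
    response and the model-predicted (expected) response."""
    return [abs(o - p) for o, p in zip(observed, predicted)]

def worst_and_best(errors):
    """Worst performance rule: a person's largest response error is expected
    to track cognitive ability more strongly than their smallest."""
    return max(errors), min(errors)

# Toy example: one respondent, five Likert items on a 1-5 scale.
# The predicted values stand in for expected scores from a graded response model.
observed  = [4, 2, 5, 3, 1]
predicted = [3.6, 2.4, 3.1, 3.0, 1.8]

errs = response_errors(observed, predicted)
worst, best = worst_and_best(errs)
print(worst, best)  # the person's largest and smallest response errors
```

Under the WPR, one would then correlate the `worst` scores (rather than the `best` or mean scores) across respondents with an external cognitive ability measure.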