Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
VISN4 Mental Illness Research, Education, and Clinical Center at the Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, USA.
J Int Neuropsychol Soc. 2023 Oct;29(8):789-797. doi: 10.1017/S1355617722000893. Epub 2022 Dec 12.
Data from neurocognitive assessments may not be accurate in the context of factors impacting validity, such as disengagement, unmotivated responding, or intentional underperformance. Performance validity tests (PVTs) were developed to address these phenomena and assess underperformance on neurocognitive tests. However, PVTs can be burdensome, rely on cutoff scores that reduce information, do not examine potential variations in task engagement across a battery, and are typically not well-suited to acquisition of large cognitive datasets. Here we describe the development of novel performance validity measures that could address some of these limitations by leveraging psychometric concepts using data embedded within the Penn Computerized Neurocognitive Battery (PennCNB).
We first developed these validity measures using simulations of invalid response patterns with parameters drawn from real data. Next, we examined their application in two large, independent samples: 1) children and adolescents from the Philadelphia Neurodevelopmental Cohort (n = 9,498); and 2) adult servicemembers from the Marine Resiliency Study-II (n = 1,444).
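The abstract does not specify the simulation or detection procedure, but the general approach can be illustrated with a minimal, hypothetical sketch: simulate valid examinees whose accuracy tracks item difficulty alongside invalid examinees who respond at chance, then score each examinee with a simple person-fit style metric (the correlation between their item scores and group-level item easiness). All parameters below (item count, difficulty range, chance level) are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items = 40
# Hypothetical item difficulties: probability a valid responder answers correctly.
p_correct = np.linspace(0.95, 0.35, n_items)
chance = 0.25  # assumed 4-choice items, so random responding succeeds 25% of the time

def simulate(n, invalid):
    """Simulate binary item responses. Valid examinees track item difficulty;
    invalid examinees respond at chance on every item."""
    if invalid:
        return (rng.random((n, n_items)) < chance).astype(float)
    return (rng.random((n, n_items)) < p_correct).astype(float)

def validity_scores(responses):
    """Person-fit style metric: correlation between each examinee's item
    scores and the group's item easiness. Valid responders succeed more
    often on easier items (r > 0); chance responders show r near 0."""
    easiness = responses.mean(axis=0)  # group item easiness (could use norms)
    scores = []
    for row in responses:
        if row.std() == 0:  # all-correct/all-wrong rows have undefined r
            scores.append(0.0)
        else:
            scores.append(np.corrcoef(row, easiness)[0, 1])
    return np.array(scores)

valid = simulate(500, invalid=False)
invalid = simulate(500, invalid=True)
scores = validity_scores(np.vstack([valid, invalid]))
print(scores[:500].mean(), scores[500:].mean())  # valid group scores higher
```

Because the metric is continuous rather than a pass/fail cutoff, it illustrates the dimensional quality the abstract attributes to the proposed measures; the study's actual metrics are multivariate and derived from the PennCNB itself.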
Our performance validity metrics detected patterns of invalid responding in simulated data, even at subtle levels. Furthermore, a combination of these metrics significantly predicted previously established validity rules for these tests in both developmental and adult datasets. Moreover, most clinical diagnostic groups did not show reduced validity estimates.
These results provide proof-of-concept evidence for multivariate, data-driven performance validity metrics. These metrics offer a novel method for determining performance validity on individual neurocognitive tests that is scalable, applicable across different tests, less burdensome, and dimensional. However, further research on their application is needed.