Finley John-Christopher A, Phillips Matthew S, Soble Jason R, Rodriguez Violeta J
Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, USA.
Department of Psychiatry, University of Illinois Chicago College of Medicine, Chicago, IL, USA.
J Clin Exp Neuropsychol. 2024 Dec;46(10):1015-1025. doi: 10.1080/13803395.2025.2458547. Epub 2025 Jan 25.
Diagnostic evaluations for attention-deficit/hyperactivity disorder (ADHD) are becoming increasingly complicated by the number of adults who fabricate or exaggerate symptoms. Novel methods are needed to improve the assessment process required to detect these noncredible symptoms. The present study investigated whether unsupervised machine learning (ML) could serve as one such method, and detect noncredible symptom reporting in adults undergoing ADHD evaluations.
Participants were 623 adults who underwent outpatient ADHD evaluations. Patients' scores from symptom validity tests embedded in two self-report questionnaires were entered into an unsupervised ML model. The model, called "sidClustering," is based on a clustering and random forest algorithm. The model synthesized the raw scores (without cutoffs) from the symptom validity tests into an unspecified number of groups. The groups were then compared to predetermined ratings of credible versus noncredible symptom reporting. Noncredible symptom reporting was defined by either two, or by three or more, symptom validity test elevations.
The model identified two groups that were significantly (p < .001) and meaningfully associated with the predetermined ratings of credible versus noncredible symptom reporting, regardless of the number of elevations used to define noncredible reporting. The validity test assessing overreporting of various types of psychiatric symptoms was most influential in determining group membership, although the symptom validity tests targeting ADHD-specific symptoms also contributed.
These findings suggest that unsupervised ML can effectively identify noncredible symptom reporting using scores from multiple symptom validity tests without predetermined cutoffs. The ML-derived groups also support the use of two validity test elevations to identify noncredible symptom reporting. Collectively, these findings serve as a proof of concept that unsupervised ML can improve the process of detecting noncredible symptoms during ADHD evaluations. With additional research, unsupervised ML may become a useful supplementary tool for quickly and accurately detecting noncredible symptoms during these evaluations.
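The two-step logic described in the methods can be sketched in code. The study's actual model, sidClustering, is not reproduced here; the sketch below substitutes a generic k-means clustering step followed by a random forest importance analysis to illustrate the same idea: group raw symptom validity test (SVT) scores without applying cutoffs, then ask which tests most strongly drive group membership. All data, scale names, and effect sizes are simulated assumptions, not values from the study.

```python
# Illustrative sketch only: generic clustering + random forest importances
# standing in for the sidClustering procedure described in the abstract.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Simulated raw scores on four embedded SVTs for 623 patients:
# one broad psychiatric-overreporting scale and three ADHD-specific scales.
n = 623
credible = rng.integers(0, 2, size=n)       # hidden "ground truth" rating
scores = rng.normal(50.0, 10.0, size=(n, 4))
scores[credible == 0] += [15.0, 8.0, 6.0, 5.0]  # noncredible cases elevated

# Step 1: unsupervised grouping of the raw scores (no predetermined cutoffs).
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)

# Step 2: a random forest predicts cluster membership so that its feature
# importances rank which SVTs were most influential in forming the groups.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(scores, clusters)
svt_names = ["psych_overreport", "adhd_svt_1", "adhd_svt_2", "adhd_svt_3"]
for name, imp in zip(svt_names, rf.feature_importances_):
    print(f"{name}: {imp:.3f}")

# Step 3: the data-driven groups would then be compared with the
# predetermined credible/noncredible ratings (e.g., via a chi-square test).
agreement = max((clusters == credible).mean(), (clusters != credible).mean())
print(f"agreement with simulated ratings: {agreement:.2f}")
```

Because the broad overreporting scale carries the largest simulated group separation, it dominates the importance ranking, mirroring the pattern the authors report for their psychiatric-overreporting validity test.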