Kuleindiren Narayan, Rifkin-Zybutz Raphael Paul, Johal Monika, Selim Hamzah, Palmon Itai, Lin Aaron, Yu Yizhou, Alim-Marvasti Ali, Mahmud Mohammad
Mindset Technologies Ltd, London, United Kingdom.
Imperial College School of Medicine, Faculty of Medicine, Imperial College London, London, United Kingdom.
JMIR Form Res. 2022 Mar 22;6(3):e31209. doi: 10.2196/31209.
Mindstep is an app that aims to improve dementia screening by assessing cognition and risk factors. It considers important clinical risk factors, including prodromal symptoms, mental health disorders, and differential diagnoses of dementia. The 9-item Patient Health Questionnaire for depression (PHQ-9) and the 7-item Generalized Anxiety Disorder Scale (GAD-7) are widely validated scales commonly used to screen for depression and anxiety disorders, respectively. Shortened versions of both (PHQ-2/GAD-2) have been produced.
We sought to develop a method that preserved the brevity of the shorter questionnaires while retaining the precision of the originals.
Single questions were designed to encompass the symptoms covered in the original questionnaires. Answers to these questions, together with PHQ-2/GAD-2 responses and anonymized risk factors, were collected via Mindset4Dementia from 2235 users. Machine learning models were trained to combine these single questions with data already collected by the app (age, response to a joke, and reported functional impairment) to predict binary and continuous outcomes as measured by PHQ-9/GAD-7. Models were developed on a training data set using 10-fold cross-validation, evaluated on a holdout testing data set, and benchmarked against results from using the shorter questionnaires (PHQ-2/GAD-2) alone.
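The evaluation design described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the classifier choice, feature set, and synthetic data are all assumptions; only the overall shape (10-fold cross-validation on a training split, holdout evaluation, and a PHQ-2-style sum-score benchmark) follows the abstract.

```python
# Hypothetical sketch of the training/benchmarking pipeline (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2235  # user count matching the abstract

# Synthetic stand-ins for the app's features: two PHQ-2 items (0-3),
# one summary question, age, and a functional-impairment flag.
phq2 = rng.integers(0, 4, size=(n, 2))
summary_q = rng.integers(0, 4, size=(n, 1))
age = rng.integers(18, 90, size=(n, 1))
impair = rng.integers(0, 2, size=(n, 1))
X = np.hstack([phq2, summary_q, age, impair]).astype(float)

# Binary label mimicking a PHQ-9 screening cutoff on a noisy latent score.
latent = phq2.sum(axis=1) + summary_q.ravel() + impair.ravel() + rng.normal(0, 1, n)
y = (latent >= 5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# 10-fold cross-validation on the training split, then holdout evaluation.
model = GradientBoostingClassifier(random_state=0)
cv_auc = cross_val_score(model, X_tr, y_tr, cv=10, scoring="roc_auc").mean()
model.fit(X_tr, y_tr)
model_auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# Benchmark: the PHQ-2 sum score alone as the ranking statistic.
bench_auc = roc_auc_score(y_te, X_te[:, 0] + X_te[:, 1])
print(round(cv_auc, 3), round(model_auc, 3), round(bench_auc, 3))
```

The key comparison is between `model_auc` and `bench_auc` on the same holdout set, which mirrors how the abstract reports AUC differences against PHQ-2/GAD-2.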
We achieved superior performance in predicting PHQ-9/GAD-7 screening cutoffs compared to PHQ-2 (difference in area under the curve 0.04, 95% CI 0.00-0.08, P=.02) but not GAD-2 (difference in area under the curve 0.00, 95% CI -0.02 to 0.03, P=.42). Regression models accurately predicted total questionnaire scores for PHQ-9 (R=0.655, mean absolute error=2.267) and GAD-7 (R=0.837, mean absolute error=1.780).
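The two regression metrics reported here (Pearson R and mean absolute error between predicted and true total scores) can be computed as below. The score arrays are synthetic assumptions for illustration; only the metric definitions are taken as given.

```python
# Illustrative computation of Pearson R and MAE on synthetic scores.
import numpy as np

rng = np.random.default_rng(1)
true_scores = rng.integers(0, 28, size=200).astype(float)  # PHQ-9 range is 0-27
pred_scores = true_scores + rng.normal(0, 2.5, size=200)   # noisy predictions

r = np.corrcoef(true_scores, pred_scores)[0, 1]   # Pearson correlation
mae = np.abs(true_scores - pred_scores).mean()    # mean absolute error
print(round(r, 3), round(mae, 3))
```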
We app-adapted the PHQ-4 by adding brief summary questions about factors normally covered in the longer questionnaires. We additionally trained machine learning models that use the wide range of additional information already collected in Mindstep, producing a short app-based screening tool for affective disorders that appears to perform as well as or better than well-established methods.