Jang Hyeju, Soroski Thomas, Rizzo Matteo, Barral Oswald, Harisinghani Anuj, Newton-Mason Sally, Granby Saffrin, Stutz da Cunha Vasco Thiago Monnerat, Lewis Caitlin, Tutt Pavan, Carenini Giuseppe, Conati Cristina, Field Thalia S
Department of Computer Science, University of British Columbia, Vancouver, BC, Canada.
Vancouver Stroke Program and Division of Neurology, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada.
Front Hum Neurosci. 2021 Sep 20;15:716670. doi: 10.3389/fnhum.2021.716670. eCollection 2021.
Alzheimer's disease (AD) is a progressive neurodegenerative condition that results in impaired performance in multiple cognitive domains. Preclinical changes in eye movements and language can occur with the disease, and progress alongside worsening cognition. In this article, we present the results from a machine learning analysis of a novel multimodal dataset for AD classification. The cohort includes data from two novel tasks not previously assessed in classification models for AD (pupil fixation and description of a pleasant past experience), as well as two established tasks (picture description and paragraph reading). Our dataset includes language and eye movement data from 79 memory clinic patients with diagnoses of mild-moderate AD, mild cognitive impairment (MCI), or subjective memory complaints (SMC), and 83 older adult controls. The analysis of the individual novel tasks showed similar classification accuracy when compared to established tasks, demonstrating their discriminative ability for memory clinic patients. Fusing the multimodal data across tasks yielded the highest overall AUC of 0.83 ± 0.01, indicating that the data from novel tasks are complementary to established tasks.
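The fusion result in the abstract can be illustrated with a minimal sketch of score-level (late) fusion: per-task classifier probabilities are averaged per participant, and AUC is computed on each task and on the fused score. The task names, toy scores, and the simple averaging rule below are illustrative assumptions for exposition, not the paper's actual features, models, or fusion pipeline.

```python
# Hypothetical late-fusion sketch: average per-task probabilities,
# then compare per-task AUC against fused AUC. All data here are toy
# values, not from the study.

def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney U) formula: the probability
    that a random positive outscores a random negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy probabilities for 6 participants (1 = memory clinic patient, 0 = control),
# one list per task; task names mirror the four tasks in the abstract.
labels = [1, 1, 1, 0, 0, 0]
task_scores = {
    "picture_description": [0.9, 0.6, 0.70, 0.4, 0.3, 0.5],
    "paragraph_reading":   [0.8, 0.5, 0.60, 0.2, 0.4, 0.3],
    "pupil_fixation":      [0.7, 0.8, 0.25, 0.3, 0.2, 0.4],
    "memory_description":  [0.6, 0.7, 0.80, 0.5, 0.1, 0.2],
}

# Late fusion: average the per-task probabilities for each participant.
fused = [sum(s[i] for s in task_scores.values()) / len(task_scores)
         for i in range(len(labels))]

per_task_auc = {t: auc(s, labels) for t, s in task_scores.items()}
fused_auc = auc(fused, labels)
```

In this toy example the pupil-fixation task misranks one patient (AUC 7/9), yet the averaged score separates the groups perfectly, mirroring the abstract's point that the novel tasks contribute complementary signal when fused with established ones.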