Zu Chen, Jie Biao, Liu Mingxia, Chen Songcan, Shen Dinggang, Zhang Daoqiang
Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China.
School of Mathematics and Computer Science, Anhui Normal University, Wuhu, 241000, China.
Brain Imaging Behav. 2016 Dec;10(4):1148-1159. doi: 10.1007/s11682-015-9480-7.
Multimodal classification methods that combine different modalities of imaging and non-imaging data have recently shown clear advantages over traditional single-modality methods for the diagnosis and prognosis of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationships across multiple modalities of the same subjects, while ignoring the potentially useful relationships across different subjects. Accordingly, in this paper, we propose a novel learning method for multimodal classification of AD/MCI that fully explores the relationships across both modalities and subjects. Specifically, the proposed method consists of two sequential steps: label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection on each modality is treated as a separate learning task, and a group sparsity regularizer is imposed to jointly select a subset of relevant features across tasks. Furthermore, to exploit the discriminative information among labeled subjects, a new label-aligned regularization term is added to the objective function of standard multi-task feature selection, where label alignment means that all multi-modality subjects with the same class label should lie closer together in the reduced feature space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from the multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that the proposed method achieves better classification performance than several state-of-the-art methods for multimodal classification of AD/MCI.