Liu Mingxia, Zhang Jun, Yap Pew-Thian, Shen Dinggang
Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA.
Med Image Comput Comput Assist Interv. 2016 Oct;9900:308-316. doi: 10.1007/978-3-319-46720-7_36. Epub 2016 Oct 2.
Effectively utilizing incomplete multi-modality data for diagnosis of Alzheimer's disease (AD) is still an area of active research. Several multi-view learning methods have recently been developed to deal with missing data, with each view corresponding to a specific modality or a combination of several modalities. However, existing methods usually ignore the underlying coherence among views, which may lead to suboptimal learning performance. In this paper, we propose a view-aligned hypergraph learning (VAHL) method to explicitly model the coherence among the views. Specifically, we first divide the original data into several views based on the possible combinations of modalities, followed by a sparse-representation-based hypergraph construction process in each view. A view-aligned hypergraph classification (VAHC) model is then proposed, using a view-aligned regularizer to model the view coherence. We further assemble the class probability scores generated by VAHC via a multi-view label fusion method to make a final classification decision. We evaluate our method on the baseline ADNI-1 database, which includes 807 subjects and three modalities (MRI, PET, and CSF). Our method achieves at least a 4.6% improvement in classification accuracy compared with state-of-the-art methods for AD/MCI diagnosis.
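The two preprocessing steps named in the abstract, partitioning subjects into views by modality availability and building a sparse-representation hypergraph within each view, can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the function names, the ISTA solver, and the regularization/threshold values are assumptions.

```python
import numpy as np

def view_partition(avail):
    """Group subjects by their modality-availability pattern.

    avail: boolean array of shape (n_subjects, n_modalities); each distinct
    row pattern (a combination of available modalities) defines one view.
    Returns {pattern_tuple: [subject indices]}.
    """
    views = {}
    for i, row in enumerate(avail):
        views.setdefault(tuple(bool(v) for v in row), []).append(i)
    return views

def ista_lasso(A, b, lam=0.1, n_iter=200):
    """Minimize 0.5*||A w - b||^2 + lam*||w||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2 + 1e-12        # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = w - A.T @ (A @ w - b) / L            # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return w

def sparse_hypergraph_incidence(X, lam=0.1, tol=1e-6):
    """Build an (n_subjects x n_hyperedges) incidence matrix for one view.

    Hyperedge i links subject i (the centroid) to the subjects whose
    sparse-coding coefficients in i's representation are nonzero, so each
    hyperedge captures a high-order neighborhood rather than a pairwise link.
    """
    n = X.shape[0]
    H = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        w = ista_lasso(X[others].T, X[i], lam=lam)  # code subject i over the rest
        H[i, i] = 1.0                               # centroid joins its own hyperedge
        H[others[np.abs(w) > tol], i] = 1.0
    return H
```

With such an incidence matrix per view, a hypergraph Laplacian can be formed in the standard way and fed to the view-aligned classifier; the view-aligned regularizer and label fusion are beyond this sketch.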