
Joint learning framework of cross-modal synthesis and diagnosis for Alzheimer's disease by mining underlying shared modality information.

Authors

Wang Chenhui, Piao Sirong, Huang Zhizhong, Gao Qi, Zhang Junping, Li Yuxin, Shan Hongming

Affiliations

Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China.

Department of Radiology, Huashan Hospital, Fudan University, Shanghai 200040, China.

Publication

Med Image Anal. 2024 Jan;91:103032. doi: 10.1016/j.media.2023.103032. Epub 2023 Nov 18.

Abstract

Alzheimer's disease (AD) is one of the most common neurodegenerative disorders, characterized by irreversible progression of cognitive impairment. Identifying AD as early as possible is critical for intervention with potential preventive measures. Among the neuroimaging modalities used to diagnose AD, functional positron emission tomography (PET) has higher sensitivity than structural magnetic resonance imaging (MRI), but it is also costlier and often unavailable in many hospitals. Leveraging massive unpaired, unlabeled PET to improve AD diagnosis from MRI therefore becomes important. To address this challenge, this paper proposes a novel joint learning framework for unsupervised cross-modal synthesis and AD diagnosis that mines underlying shared modality information, improving AD diagnosis from MRI while synthesizing more discriminative PET images. We mine underlying shared modality information in two aspects: diversifying modality information through the cross-modal synthesis network and locating critical diagnosis-related patterns through the AD diagnosis network. First, to diversify the modality information, we propose a novel unsupervised cross-modal synthesis network that implements the inter-conversion between 3D PET and MRI in a single model modulated by the AdaIN module. Second, to locate shared critical diagnosis-related patterns, we propose an interpretable diagnosis network based on fully 2D convolutions, which takes either 3D synthesized PET or original MRI as input. Extensive experimental results on the ADNI dataset show that our framework synthesizes more realistic images, outperforms state-of-the-art AD diagnosis methods, and generalizes better to the external AIBL and NACC datasets.
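As background on the modulation mechanism named in the abstract: AdaIN (adaptive instance normalization) re-normalizes each channel of a feature map to target statistics, which lets one generator serve both conversion directions (PET→MRI and MRI→PET) by switching the target statistics. Below is a minimal NumPy sketch of the standard AdaIN operation; the function name, tensor shapes, and direct passing of target statistics are illustrative assumptions, not the authors' implementation (in the paper the statistics would presumably be produced by the network from a modality code).

```python
import numpy as np

def adain(content, target_mean, target_std, eps=1e-5):
    """Adaptive Instance Normalization (standard formulation).

    Normalizes each channel of `content` to zero mean / unit std,
    then rescales to the given per-channel target statistics.

    content:     (C, D, H, W) feature map (3D volume per channel)
    target_mean: (C,) desired per-channel mean
    target_std:  (C,) desired per-channel std
    """
    mu = content.mean(axis=(1, 2, 3), keepdims=True)
    sigma = content.std(axis=(1, 2, 3), keepdims=True)
    normalized = (content - mu) / (sigma + eps)
    return (target_std.reshape(-1, 1, 1, 1) * normalized
            + target_mean.reshape(-1, 1, 1, 1))
```

After this operation, each channel of the output carries the target modality's statistics, so the same convolutional weights can produce either modality depending on which statistics are injected.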

