

Multi-scale multimodal deep learning framework for Alzheimer's disease diagnosis.

Author Information

Abdelaziz Mohammed, Wang Tianfu, Anwaar Waqas, Elazab Ahmed

Affiliations

National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China; Department of Communications and Electronics, Delta Higher Institute for Engineering and Technology (DHIET), Mansoura, 35516, Egypt.

National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, 518060, China.

Publication Information

Comput Biol Med. 2025 Jan;184:109438. doi: 10.1016/j.compbiomed.2024.109438. Epub 2024 Nov 22.

Abstract

Multimodal neuroimaging data, including magnetic resonance imaging (MRI) and positron emission tomography (PET), provide complementary information about the brain that can aid in Alzheimer's disease (AD) diagnosis. However, most existing deep learning methods still rely on patch-based extraction from neuroimaging data, which typically yields suboptimal performance because the extraction is isolated from the subsequent network and fails to capture structural changes in the cerebrum that occur at varying scales. Moreover, these methods often simply concatenate multimodal data, ignoring the interactions between modalities that can highlight discriminative regions and thereby improve AD diagnosis. To tackle these issues, we develop a multimodal, multi-scale deep learning model that effectively leverages the interactions between the modalities and scales of the neuroimaging data. First, we employ a convolutional neural network to embed each scale of the multimodal images. Second, we propose multimodal scale fusion mechanisms that use both multi-head self-attention and multi-head cross-attention to capture global relations among the embedded features and to weigh each modality's contribution to the other, thereby enhancing feature extraction and interaction between MRI and PET images at each scale. Third, we introduce a cross-modality fusion module that uses multi-head cross-attention to fuse MRI and PET data at different scales and promote the global features from the previous attention layers. Finally, the features from all scales are fused to discriminate between the different stages of AD. We evaluated the proposed method on the ADNI dataset, and the results show that our model achieves better performance than state-of-the-art methods.
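
Since only the abstract is available here, the sketch below shows one plausible reading of the cross-modality fusion step: multi-head cross-attention that lets MRI features attend to PET features and vice versa at a single scale, with the two attended streams concatenated for the downstream classifier. It is a minimal PyTorch illustration; the module name, token shapes, embedding dimension, and residual/normalization details are assumptions, not taken from the paper.

```python
# Hypothetical sketch of a cross-modality fusion block (not the authors' code).
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        # Cross-attention in both directions: MRI queries PET, PET queries MRI.
        self.mri_to_pet = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.pet_to_mri = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_mri = nn.LayerNorm(dim)
        self.norm_pet = nn.LayerNorm(dim)

    def forward(self, mri_tokens, pet_tokens):
        # mri_tokens, pet_tokens: (batch, num_tokens, dim) embeddings, assumed to
        # come from the per-scale CNN encoders described in the abstract.
        mri_attn, _ = self.mri_to_pet(query=mri_tokens, key=pet_tokens, value=pet_tokens)
        pet_attn, _ = self.pet_to_mri(query=pet_tokens, key=mri_tokens, value=mri_tokens)
        mri_fused = self.norm_mri(mri_tokens + mri_attn)  # residual connection
        pet_fused = self.norm_pet(pet_tokens + pet_attn)
        # Concatenate the two attended streams for classification.
        return torch.cat([mri_fused, pet_fused], dim=-1)

# Example usage with dummy tokens for one scale.
fusion = CrossModalFusion(dim=256, num_heads=8)
mri = torch.randn(4, 64, 256)
pet = torch.randn(4, 64, 256)
fused = fusion(mri, pet)  # shape: (4, 64, 512)
```

In the full model, a block like this would be applied at every scale and the per-scale outputs fused before the final AD-stage classifier.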

