Multimodal cross enhanced fusion network for diagnosis of Alzheimer's disease and subjective memory complaints.

Affiliations

Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai, 200444, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China.

Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China.

Publication Information

Comput Biol Med. 2023 May;157:106788. doi: 10.1016/j.compbiomed.2023.106788. Epub 2023 Mar 15.

Abstract

Deep learning methods using multimodal imaging have been proposed for diagnosing Alzheimer's disease (AD) and its early stage, subjective memory complaints (SMC), which may help slow disease progression through early intervention. However, current fusion methods for multimodal imaging are generally coarse and may yield suboptimal results because they rely on shared feature extractors or simple downscaling and concatenation. Another difficulty in diagnosing brain diseases is that they often affect multiple brain regions, so potential connections across the whole brain must be considered. Traditional convolutional neural networks (CNNs) struggle with this because of their limited local receptive fields. To address it, many researchers have turned to transformer networks, which capture global information about the brain but are computationally intensive and perform poorly on small datasets. In this work, we propose a novel lightweight network, MENet, that adaptively recalibrates multiscale long-range receptive fields to localize discriminative brain regions in a computationally efficient manner. Building on this, the network extracts intensity and location responses between structural magnetic resonance imaging (sMRI) and 18-fluorodeoxyglucose positron emission tomography (FDG-PET) as an enhanced fusion for AD and SMC diagnosis. Evaluated on the publicly available ADNI datasets using sMRI and FDG-PET, our method achieves 97.67% accuracy in AD diagnosis and 81.63% accuracy in SMC diagnosis. These results represent state-of-the-art (SOTA) performance on both tasks. To the best of our knowledge, this is one of the first deep learning methods for SMC diagnosis with FDG-PET.
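The abstract does not give MENet's architecture, so the following is only a minimal, hypothetical sketch (in PyTorch) of the cross-enhanced fusion idea as described: each modality's feature map produces an intensity (channel-wise) response and a location (spatial) response that re-weight the other modality's features before the two branches are concatenated. The module and gate names (CrossEnhancedFusion, intensity_gate, location_gate) and the kernel sizes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch, not the paper's released code: a cross-enhanced
# fusion block in which each modality produces an intensity (channel-wise)
# response and a location (spatial) response that re-weight the other
# modality's feature map before the two branches are concatenated.
import torch
import torch.nn as nn


def intensity_gate(channels: int) -> nn.Module:
    # Channel-wise "intensity response": global pooling, a 1x1x1 convolution,
    # and a sigmoid, producing one weight per channel.
    return nn.Sequential(
        nn.AdaptiveAvgPool3d(1),
        nn.Conv3d(channels, channels, kernel_size=1),
        nn.Sigmoid(),
    )


def location_gate(channels: int) -> nn.Module:
    # Spatial "location response": collapse channels into a single 3D map
    # of voxel-wise weights.
    return nn.Sequential(
        nn.Conv3d(channels, 1, kernel_size=7, padding=3),
        nn.Sigmoid(),
    )


class CrossEnhancedFusion(nn.Module):
    """Illustrative mutual enhancement of sMRI and FDG-PET feature maps."""

    def __init__(self, channels: int):
        super().__init__()
        # Separate gates for each direction, so the two modalities do not
        # share a single extractor.
        self.pet_to_mri_int = intensity_gate(channels)
        self.pet_to_mri_loc = location_gate(channels)
        self.mri_to_pet_int = intensity_gate(channels)
        self.mri_to_pet_loc = location_gate(channels)

    def forward(self, mri_feat: torch.Tensor, pet_feat: torch.Tensor) -> torch.Tensor:
        # FDG-PET responses enhance the sMRI features and vice versa.
        mri_enh = mri_feat * self.pet_to_mri_int(pet_feat) * self.pet_to_mri_loc(pet_feat)
        pet_enh = pet_feat * self.mri_to_pet_int(mri_feat) * self.mri_to_pet_loc(mri_feat)
        # Concatenate the mutually enhanced features for a classifier head.
        return torch.cat([mri_enh, pet_enh], dim=1)


if __name__ == "__main__":
    fuse = CrossEnhancedFusion(channels=32)
    mri = torch.randn(1, 32, 12, 14, 12)   # toy sMRI feature map
    pet = torch.randn(1, 32, 12, 14, 12)   # toy FDG-PET feature map
    print(fuse(mri, pet).shape)            # torch.Size([1, 64, 12, 14, 12])
```

In a full model such a block would presumably sit after two modality-specific backbones and before the classification head; keeping separate gates per direction is one way to avoid the shared-extractor coarseness the abstract criticizes.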
