Castellano Giovanna, Esposito Andrea, Lella Eufemia, Montanaro Graziano, Vessio Gennaro
Department of Computer Science, University of Bari Aldo Moro, Bari, Italy.
Sirio - Research & Innovation, Sidea Group, Bari, Italy.
Sci Rep. 2024 Mar 3;14(1):5210. doi: 10.1038/s41598-024-56001-9.
Recent advances in deep learning and imaging technologies have revolutionized automated medical image analysis, especially in diagnosing Alzheimer's disease (AD) through neuroimaging. Despite the availability of multiple imaging modalities for the same patient, the development of multi-modal models that leverage them remains underexplored. This paper addresses this gap by proposing and evaluating classification models that use 2D and 3D MRI scans and amyloid PET scans in both uni-modal and multi-modal frameworks. Our findings demonstrate that models using volumetric data learn more effective representations than those using only 2D images. Furthermore, integrating multiple modalities significantly improves model performance over single-modality approaches. We achieved state-of-the-art performance on the OASIS-3 cohort. Additionally, explainability analyses with Grad-CAM indicate that our model focuses on crucial AD-related brain regions for its predictions, underscoring its potential to aid in understanding the disease's causes.
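The abstract describes a multi-modal classifier over volumetric MRI and amyloid PET, but the record does not reproduce the authors' architecture. The following is a minimal PyTorch sketch of one common way to realize such a setup, a late-fusion design in which each modality gets its own 3D CNN encoder and the embeddings are concatenated before the classification head. All names, layer widths, and the fusion strategy are illustrative assumptions, not the paper's actual model.

# Minimal late-fusion multi-modal classifier (illustrative sketch;
# layer sizes and fusion strategy are assumptions, not the authors' model).
import torch
import torch.nn as nn

class Encoder3D(nn.Module):
    """Small 3D CNN encoder for one volumetric scan (MRI or PET)."""
    def __init__(self, in_channels=1, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> (B, 32, 1, 1, 1)
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

class LateFusionClassifier(nn.Module):
    """Concatenates MRI and PET embeddings before a shared linear head."""
    def __init__(self, feat_dim=128, n_classes=2):
        super().__init__()
        self.mri_enc = Encoder3D(feat_dim=feat_dim)
        self.pet_enc = Encoder3D(feat_dim=feat_dim)
        self.head = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, mri, pet):
        z = torch.cat([self.mri_enc(mri), self.pet_enc(pet)], dim=1)
        return self.head(z)

# Usage with dummy volumes (batch of 2, 64^3 voxels per modality):
model = LateFusionClassifier()
mri = torch.randn(2, 1, 64, 64, 64)
pet = torch.randn(2, 1, 64, 64, 64)
logits = model(mri, pet)  # shape: (2, 2)

Late fusion is only one option; intermediate or attention-based fusion would follow the same pattern of per-modality encoders feeding a joint head.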
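The abstract also reports Grad-CAM explainability analyses. As a sketch of how Grad-CAM extends to 3D volumes, the snippet below (reusing the hypothetical LateFusionClassifier above) hooks a chosen convolutional layer, pools the gradients of a class logit over the spatial dimensions to weight the layer's activation channels, and applies a ReLU, following Selvaraju et al. (2017). The choice of target layer is an assumption for illustration.

# Minimal 3D Grad-CAM sketch using forward/backward hooks.
# Model and target layer are illustrative, not the authors' setup.
import torch
import torch.nn.functional as F

def grad_cam_3d(model, target_layer, mri, pet, class_idx):
    acts, grads = {}, {}

    def fwd_hook(_, __, output):
        acts["a"] = output.detach()

    def bwd_hook(_, grad_in, grad_out):
        grads["g"] = grad_out[0].detach()

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    logits = model(mri, pet)
    model.zero_grad()
    logits[0, class_idx].backward()  # gradients of the chosen class logit
    h1.remove(); h2.remove()

    a, g = acts["a"], grads["g"]               # both (1, C, D, H, W)
    w = g.mean(dim=(2, 3, 4), keepdim=True)    # pooled gradient per channel
    cam = F.relu((w * a).sum(dim=1))           # (1, D, H, W) relevance map
    return cam / (cam.max() + 1e-8)            # normalize to [0, 1]

# Example: inspect the MRI encoder's second conv block (index 3 in `net`).
cam = grad_cam_3d(model, model.mri_enc.net[3], mri[:1], pet[:1], class_idx=1)

The resulting map can be upsampled to the input resolution and overlaid on the scan to check whether high-relevance voxels coincide with AD-related regions, which is the kind of inspection the abstract reports.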