Seshimo Hiroyuki, Rashed Essam A
Graduate School of Information Science, University of Hyogo, Kobe 650-0047, Japan.
Advanced Medical Engineering Research Institute, University of Hyogo, Himeji 670-0836, Japan.
Sensors (Basel). 2024 Nov 27;24(23):7576. doi: 10.3390/s24237576.
Early detection and precise characterization of brain tumors play a crucial role in improving patient outcomes and extending survival. Among neuroimaging modalities, magnetic resonance imaging (MRI) is the gold standard for brain tumor diagnostics due to its ability to produce high-contrast images across a variety of sequences, each highlighting distinct tissue characteristics. This study focuses on leveraging multimodal MRI sequences to advance the automatic segmentation of low-grade astrocytomas, a challenging task due to their diffuse and irregular growth patterns. A novel mutual-attention deep learning framework is proposed, which integrates complementary information from multiple MRI sequences, including T2-weighted (T2w) and fluid-attenuated inversion recovery (FLAIR) sequences, to enhance segmentation accuracy. Unlike conventional segmentation models, which treat each modality independently or simply concatenate them, our model introduces mutual-attention mechanisms. These allow the network to dynamically focus on salient features across modalities by jointly learning the interdependencies between imaging sequences, leading to more precise boundary delineation even in regions with subtle tumor signals. The proposed method is validated on the UCSF-PDGM dataset, a realistic and clinically challenging dataset comprising 35 astrocytoma cases. The results demonstrate that the T2w/FLAIR modalities contribute most significantly to segmentation performance, with the mutual-attention model achieving an average Dice coefficient of 0.87. This study provides an innovative pathway toward improved segmentation of low-grade tumors by enabling context-aware fusion across imaging sequences. Furthermore, it showcases the clinical relevance of integrating AI with multimodal MRI, potentially improving non-invasive tumor characterization and guiding future research in radiological diagnostics.
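The mutual-attention idea described above can be illustrated with a minimal sketch: features from one modality attend over features from the other via scaled dot-product attention, in both directions, and the two attended maps are fused. This is not the authors' architecture; all shapes, weight matrices, and function names below are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats, wq, wk, wv):
    """One direction of mutual attention: features from one modality
    (query) attend over features from the other modality (context)."""
    q = query_feats @ wq
    k = context_feats @ wk
    v = context_feats @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])  # scaled dot-product
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
n, d = 16, 8  # 16 flattened spatial positions, 8 feature channels (toy sizes)
t2 = rng.standard_normal((n, d))     # stand-in for a T2w feature map
flair = rng.standard_normal((n, d))  # stand-in for a FLAIR feature map
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))

# Mutual attention: each modality attends to the other, then fuse by
# channel-wise concatenation (one common fusion choice, assumed here).
t2_att = cross_attention(t2, flair, wq, wk, wv)
flair_att = cross_attention(flair, t2, wq, wk, wv)
fused = np.concatenate([t2_att, flair_att], axis=-1)  # shape (16, 16)
print(fused.shape)
```

A segmentation head would then map `fused` to per-voxel tumor probabilities; in practice the projections `wq`, `wk`, `wv` are learned jointly with the rest of the network.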
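The reported evaluation metric, the Dice coefficient, measures overlap between a predicted segmentation mask and the ground truth: Dice = 2|P ∩ T| / (|P| + |T|), ranging from 0 (no overlap) to 1 (perfect overlap). A minimal illustration (toy masks, not data from the study):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks.
    eps guards against division by zero when both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0]])
target = np.array([[1, 1, 1, 0],
                   [1, 0, 0, 0]])
print(dice_coefficient(pred, target))  # 2*3 / (4+4) ≈ 0.75
```

A mean Dice of 0.87, as reported for the mutual-attention model, indicates substantial voxel-wise agreement with expert annotations, which is notable given the diffuse boundaries of low-grade astrocytomas.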