Zeng Qingrun, Yang Lin, Li Yongqiang, Xie Lei, Feng Yuanjing
College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China.
Med Biol Eng Comput. 2025 May;63(5):1397-1411. doi: 10.1007/s11517-024-03248-z. Epub 2025 Jan 2.
Segmentation of the retinogeniculate visual pathway (RGVP) enables quantitative analysis of its anatomical structure. Multimodal learning has shown considerable potential for segmenting the RGVP from structural MRI (sMRI) and diffusion MRI (dMRI). However, the intricate skull-base environment and the slender morphology of the RGVP make it difficult for existing methods to fully exploit the complementary information in each modality. In this study, we propose RGVPSeg, a multimodal information fusion network that optimizes and selects complementary information across three modalities: T1-weighted (T1w) images, fractional anisotropy (FA) images, and fiber orientation distribution function (fODF) peaks, with the modalities supervising one another during training. Specifically, we insert a supervised master-assistant cross-modal learning framework between the encoder layers of the different modalities, so that the characteristics of each modality are exploited more fully and a more accurate segmentation is obtained. We evaluate RGVPSeg on MRI data from 102 subjects of the Human Connectome Project (HCP) and 10 subjects of a multi-shell diffusion MRI (MDM) dataset; the experimental results are promising and show that the proposed framework is feasible and outperforms the compared methods. Our code is freely available at https://github.com/yanglin9911/RGVPSeg .
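To make the architectural idea concrete, below is a minimal sketch of one plausible form of master-assistant cross-modal fusion between encoder layers of three modality branches (T1w, FA, fODF peaks). It is not the authors' implementation (see the GitHub link above for RGVPSeg itself); all module names, channel sizes, and the choice of T1w as the master modality are illustrative assumptions only.

import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3D convolutions with instance norm and ReLU, a common encoder stage."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )


class MasterAssistantFusion(nn.Module):
    """Refine master-branch features with a gate computed from assistant features.

    Hypothetical module: one simple way to let assistant modalities inform the
    master modality between encoder stages.
    """

    def __init__(self, channels, num_assistants=2):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv3d(channels * (num_assistants + 1), channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv3d(channels * num_assistants, channels, kernel_size=1)

    def forward(self, master, assistants):
        assist = torch.cat(assistants, dim=1)                     # stack assistant features
        g = self.gate(torch.cat([master] + assistants, dim=1))    # per-voxel gate in [0, 1]
        return master + g * self.proj(assist)                     # inject gated assistant info


class ThreeBranchEncoderStage(nn.Module):
    """One encoder stage with separate T1w / FA / fODF-peaks branches plus fusion."""

    def __init__(self, in_chs=(1, 1, 9), out_ch=16):
        super().__init__()
        self.branches = nn.ModuleList(conv_block(c, out_ch) for c in in_chs)
        self.fusion = MasterAssistantFusion(out_ch, num_assistants=2)

    def forward(self, t1w, fa, peaks):
        f_t1w, f_fa, f_peaks = (b(x) for b, x in zip(self.branches, (t1w, fa, peaks)))
        # Treat T1w as the master modality here (an illustrative choice only).
        fused = self.fusion(f_t1w, [f_fa, f_peaks])
        return fused, f_fa, f_peaks


if __name__ == "__main__":
    stage = ThreeBranchEncoderStage()
    t1w = torch.randn(1, 1, 32, 32, 32)    # T1-weighted patch
    fa = torch.randn(1, 1, 32, 32, 32)     # fractional anisotropy patch
    peaks = torch.randn(1, 9, 32, 32, 32)  # e.g. three fODF peak vectors per voxel
    fused, _, _ = stage(t1w, fa, peaks)
    print(fused.shape)  # torch.Size([1, 16, 32, 32, 32])

In a full network, such a stage would be repeated at each encoder depth and followed by a decoder producing the RGVP segmentation; the gating lets the assistant modalities modulate, rather than overwrite, the master-branch features.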