Xie Qihang, Li Xuefei, Li Yuanyuan, Lu Jiayi, Ma Shaodong, Zhao Yitian, Zhang Jiong
Cixi Biomedical Research Institute, Wenzhou Medical University, Ningbo, China.
Laboratory of Advanced Theranostic Materials and Technology, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China.
Front Cell Dev Biol. 2025 Jan 8;12:1532228. doi: 10.3389/fcell.2024.1532228. eCollection 2024.
Vessel segmentation in fundus photography has become a cornerstone technique for disease analysis. Within this field, Ultra-WideField (UWF) fundus images offer distinct advantages, including an expansive imaging range, rich lesion detail, and minimal adverse effects. However, the high resolution and low contrast inherent to UWF fundus images pose significant challenges for accurate segmentation with deep learning methods, complicating disease analysis in this setting.
To address these issues, this study introduces M3B-Net, a novel multi-modal, multi-branch framework that leverages fundus fluorescence angiography (FFA) images to improve retinal vessel segmentation in UWF fundus images, tackling the low segmentation accuracy caused by their inherently low contrast. Within M3B-Net, we further propose an enhanced UWF-based segmentation network designed specifically to improve the segmentation of fine retinal vessels. This network includes a Selective Fusion Module (SFM), which strengthens feature extraction by integrating features generated during the FFA imaging process. To further address the challenges posed by high-resolution UWF fundus images, we introduce a Local Perception Fusion Module (LPFM) that mitigates context loss during the patch-cropping (cut-patch) stage of segmentation, and a complementary Attention-Guided Upsampling Module (AUM) that improves segmentation performance through attention-guided convolution during upsampling.
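The abstract does not include implementation details for these modules. As one concrete illustration, the sketch below shows what an attention-guided upsampling block of the kind described for the AUM might look like in PyTorch; the class name, channel arguments, and the choice to gate an encoder skip feature with a single-channel spatial attention map are assumptions for illustration, not the authors' published implementation.

```python
# Illustrative sketch only: written from the abstract's description, not from M3B-Net code.
# All names (AttentionGuidedUpsampling, in_channels, skip_channels) are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGuidedUpsampling(nn.Module):
    """Upsample a coarse decoder feature and fuse it with an attention-gated
    higher-resolution skip feature (one plausible reading of an AUM-style block)."""

    def __init__(self, in_channels: int, skip_channels: int, out_channels: int):
        super().__init__()
        # 1x1 conv + sigmoid produces a single-channel spatial attention map from the skip feature.
        self.attn = nn.Sequential(
            nn.Conv2d(skip_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # 3x3 conv refines the concatenated (upsampled + attended skip) features.
        self.refine = nn.Sequential(
            nn.Conv2d(in_channels + skip_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, low_res: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        # Bilinearly upsample the coarse feature to the skip feature's spatial resolution.
        up = F.interpolate(low_res, size=skip.shape[-2:], mode="bilinear", align_corners=False)
        # Weight the skip feature by the spatial attention map before fusion.
        gated_skip = skip * self.attn(skip)
        return self.refine(torch.cat([up, gated_skip], dim=1))


if __name__ == "__main__":
    # Toy shapes: a half-resolution decoder feature and a full-resolution encoder skip feature.
    block = AttentionGuidedUpsampling(in_channels=64, skip_channels=32, out_channels=32)
    low = torch.randn(1, 64, 128, 128)
    skip = torch.randn(1, 32, 256, 256)
    print(block(low, skip).shape)  # -> torch.Size([1, 32, 256, 256])
```

In this reading, the attention map suppresses background regions of the high-resolution skip feature before fusion, which is one common way attention-guided convolution is used to sharpen thin structures such as fine vessels during upsampling.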
Extensive experimental evaluations demonstrate that our approach significantly outperforms existing state-of-the-art methods for UWF fundus image segmentation.