

DEAF-Net: Detail-Enhanced Attention Feature Fusion Network for Retinal Vessel Segmentation.

Author Information

Cai Pengfei, Li Biyuan, Sun Gaowei, Yang Bo, Wang Xiuwei, Lv Chunjie, Yan Jun

Affiliations

School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin, 300222, China.

Tianjin Development Zone Jingnuohanhai Data Technology Co., Ltd, Tianjin, China.

Publication Information

J Imaging Inform Med. 2025 Feb;38(1):496-519. doi: 10.1007/s10278-024-01207-6. Epub 2024 Aug 5.

Abstract

Retinal vessel segmentation is crucial for the diagnosis of ophthalmic and cardiovascular diseases. However, retinal vessels are densely and irregularly distributed, many capillaries blend into the background, and contrast is low. Moreover, encoder-decoder networks for retinal vessel segmentation suffer irreversible loss of detailed features through repeated encoding and decoding, leading to incorrect vessel segmentation. Meanwhile, single-dimensional attention mechanisms are limited, neglecting the importance of multidimensional features. To address these issues, this paper proposes a detail-enhanced attention feature fusion network (DEAF-Net) for retinal vessel segmentation. First, a detail-enhanced residual block (DERB) module is proposed to strengthen the capacity for detailed representation, ensuring that intricate features are efficiently retained during the segmentation of delicate vessels. Second, a multidimensional collaborative attention encoder (MCAE) module is proposed to optimize the extraction of multidimensional information. Then, a dynamic decoder (DYD) module is introduced to preserve spatial information during decoding and to reduce the information loss caused by upsampling. Finally, the proposed detail-enhanced feature fusion (DEFF) module, built from the DERB, MCAE, and DYD modules, fuses feature maps from the encoding and decoding paths and achieves effective aggregation of multi-scale contextual information. Experiments on the DRIVE, CHASEDB1, and STARE datasets achieve Sen of 0.8305, 0.8784, and 0.8654 and AUC of 0.9886, 0.9913, and 0.9911, respectively, demonstrating the performance of the proposed network, particularly in the segmentation of fine retinal vessels.
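The abstract's core fusion idea, re-injecting gated encoder (detail) features into decoder features at each skip connection, can be illustrated with a toy one-dimensional sketch. The sigmoid gating form below is an assumption for illustration only, not the authors' DEFF implementation:

```python
import math

def sigmoid(x):
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-x))

def detail_enhanced_fusion(enc_feats, dec_feats):
    """Toy 1-D analogue of attention-gated skip-connection fusion.

    A sigmoid gate computed from each decoder activation weights the
    corresponding encoder (detail) activation before the two are summed,
    so that fine detail is re-injected in proportion to decoder context.
    """
    assert len(enc_feats) == len(dec_feats)
    fused = []
    for e, d in zip(enc_feats, dec_feats):
        gate = sigmoid(d)           # attention weight from decoder context
        fused.append(gate * e + d)  # gated encoder detail + decoder feature
    return fused

print(detail_enhanced_fusion([1.0, 0.5], [0.0, 2.0]))  # ≈ [0.5, 2.4404]
```

In the actual network these operations would act on multi-channel 2-D feature maps with learned convolutional gates; the scalar version only shows the gating-and-summation pattern.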


Similar Articles

MFI-Net: Multiscale Feature Interaction Network for Retinal Vessel Segmentation.
IEEE J Biomed Health Inform. 2022 Sep;26(9):4551-4562. doi: 10.1109/JBHI.2022.3182471. Epub 2022 Sep 9.

References Cited in This Article

BCU-Net: Bridging ConvNeXt and U-Net for medical image segmentation.
Comput Biol Med. 2023 Jun;159:106960. doi: 10.1016/j.compbiomed.2023.106960. Epub 2023 Apr 20.
