

A fundus vessel segmentation method based on double skip connections combined with deep supervision.

Authors

Liu Qingyou, Zhou Fen, Shen Jianxin, Xu Jianguo, Wan Cheng, Xu Xiangzhong, Yan Zhipeng, Yao Jin

Affiliations

College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China.

The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China.

Publication

Front Cell Dev Biol. 2024 Oct 3;12:1477819. doi: 10.3389/fcell.2024.1477819. eCollection 2024.

DOI: 10.3389/fcell.2024.1477819
PMID: 39430046
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11487527/
Abstract

BACKGROUND

Fundus vessel segmentation is vital for diagnosing ophthalmic diseases such as central serous chorioretinopathy (CSC), diabetic retinopathy, and glaucoma. Accurate segmentation provides crucial vessel morphology details, aiding early detection and intervention. However, current algorithms struggle to segment fine vessels and to maintain sensitivity in complex regions. Further challenges stem from imaging variability and poor generalization across multimodal datasets, highlighting the need for more advanced algorithms in clinical practice.

METHODS

This paper explores a new vessel segmentation method to alleviate the above problems. We propose a fundus vessel segmentation model that combines double skip connections, deep supervision, and TransUNet, namely DS2TUNet. First, the original fundus images are enhanced through grayscale conversion, normalization, histogram equalization, gamma correction, and other preprocessing techniques. The preprocessed images are then segmented with a U-Net architecture to obtain the final vessel information. Specifically, the encoder incorporates ResNetV1 downsampling, dilated-convolution downsampling, and a Transformer to capture both local and global features, strengthening its vessel feature extraction. The decoder then introduces the double skip connections to facilitate upsampling and refine segmentation outcomes. Finally, the deep supervision module feeds multiple upsampled vessel features from the decoder into the loss function, so that the model learns vessel feature representations more effectively and gradient vanishing is alleviated during training.
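The preprocessing chain named above (grayscale conversion, normalization, histogram equalization, gamma correction) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code; the gamma value and bin count are assumptions:

```python
import numpy as np

def preprocess_fundus(rgb: np.ndarray, gamma: float = 1.2) -> np.ndarray:
    """Sketch of the preprocessing chain described in the abstract:
    grayscale conversion, normalization, histogram equalization,
    and gamma correction. Parameter values are illustrative."""
    # Grayscale conversion via standard luminance weights.
    gray = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    # Min-max normalization to [0, 1].
    gray = (gray - gray.min()) / (gray.max() - gray.min() + 1e-8)
    # Histogram equalization: map intensities through the normalized CDF.
    hist, bins = np.histogram(gray.ravel(), bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-8)
    eq = np.interp(gray.ravel(), bins[:-1], cdf).reshape(gray.shape)
    # Gamma correction to adjust contrast in dark vessel regions.
    return eq ** gamma

# Example on a random array; a real pipeline would load a fundus photograph.
img = np.random.rand(64, 64, 3)
out = preprocess_fundus(img)
print(out.shape)  # → (64, 64), values in [0, 1]
```

In practice, published pipelines often use CLAHE (local histogram equalization) rather than the global equalization shown here; the abstract does not specify which variant is used.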

RESULTS

Extensive experiments on the publicly available multimodal fundus datasets DRIVE, CHASE_DB1, and ROSE-1 demonstrate that the DS2TUNet model attains F1-scores of 0.8195, 0.8362, and 0.8425, with Accuracy of 0.9664, 0.9741, and 0.9557, Sensitivity of 0.8071, 0.8101, and 0.8586, and Specificity of 0.9823, 0.9869, and 0.9713, respectively. Using weights trained on CHASE_DB1, the model also tests well on the clinical fundus dataset CSC, with an F1-score of 0.7757, Accuracy of 0.9688, Sensitivity of 0.8141, and Specificity of 0.9801. These results validate that the proposed method performs well in fundus vessel segmentation and offers clinicians an effective, feasible aid for the further diagnosis and treatment of fundus diseases.
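The four reported metrics are standard pixel-wise scores derived from the confusion matrix of a predicted binary vessel mask against the ground truth. A minimal sketch (not the authors' evaluation code):

```python
import numpy as np

def seg_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Pixel-wise F1-score, Accuracy, Sensitivity, and Specificity
    computed from binary prediction and ground-truth masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)      # vessel pixels correctly found
    tn = np.sum(~pred & ~gt)    # background correctly rejected
    fp = np.sum(pred & ~gt)     # background mislabeled as vessel
    fn = np.sum(~pred & gt)     # vessel pixels missed
    sens = tp / (tp + fn)                 # Sensitivity (recall)
    spec = tn / (tn + fp)                 # Specificity
    acc = (tp + tn) / (tp + tn + fp + fn)
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    return {"F1": f1, "Acc": acc, "Sen": sens, "Spe": spec}

pred = np.array([[1, 1, 0, 0]])
gt   = np.array([[1, 0, 1, 0]])
m = seg_metrics(pred, gt)
print(m)  # tp=1, fp=1, fn=1, tn=1 → all four metrics equal 0.5
```

Because vessel pixels are a small minority of a fundus image, Accuracy and Specificity run high for any reasonable model; F1 and Sensitivity are the more discriminative columns in the results above.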

Figures (g001–g013):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d90d/11487527/f89867880103/fcell-12-1477819-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d90d/11487527/9232e6fd268b/fcell-12-1477819-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d90d/11487527/4ac518d46471/fcell-12-1477819-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d90d/11487527/53aef4469da1/fcell-12-1477819-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d90d/11487527/e5b6a2864074/fcell-12-1477819-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d90d/11487527/c6e6939fb7c4/fcell-12-1477819-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d90d/11487527/580747b6b8f5/fcell-12-1477819-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d90d/11487527/9172b170ec8c/fcell-12-1477819-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d90d/11487527/259bcb12a5b4/fcell-12-1477819-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d90d/11487527/a2465055d292/fcell-12-1477819-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d90d/11487527/8eeec5038576/fcell-12-1477819-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d90d/11487527/4704732ec29d/fcell-12-1477819-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d90d/11487527/6ca1ea5750d2/fcell-12-1477819-g013.jpg

Similar Articles

1
A fundus vessel segmentation method based on double skip connections combined with deep supervision.
Front Cell Dev Biol. 2024 Oct 3;12:1477819. doi: 10.3389/fcell.2024.1477819. eCollection 2024.
2
LMBiS-Net: A lightweight bidirectional skip connection based multipath CNN for retinal blood vessel segmentation.
Sci Rep. 2024 Jul 2;14(1):15219. doi: 10.1038/s41598-024-63496-9.
3
TDCAU-Net: retinal vessel segmentation using transformer dilated convolutional attention-based U-Net method.
Phys Med Biol. 2023 Dec 22;69(1). doi: 10.1088/1361-6560/ad1273.
4
Gated Skip-Connection Network with Adaptive Upsampling for Retinal Vessel Segmentation.
Sensors (Basel). 2021 Sep 15;21(18):6177. doi: 10.3390/s21186177.
5
SegR-Net: A deep learning framework with multi-scale feature fusion for robust retinal vessel segmentation.
Comput Biol Med. 2023 Sep;163:107132. doi: 10.1016/j.compbiomed.2023.107132. Epub 2023 Jun 10.
6
PCAT-UNet: UNet-like network fused convolution and transformer for retinal vessel segmentation.
PLoS One. 2022 Jan 24;17(1):e0262689. doi: 10.1371/journal.pone.0262689. eCollection 2022.
7
MINet: Multi-scale input network for fundus microvascular segmentation.
Comput Biol Med. 2023 Mar;154:106608. doi: 10.1016/j.compbiomed.2023.106608. Epub 2023 Jan 24.
8
A multi-scale feature extraction and fusion-based model for retinal vessel segmentation in fundus images.
Med Biol Eng Comput. 2025 Feb;63(2):595-608. doi: 10.1007/s11517-024-03223-8. Epub 2024 Oct 21.
9
CSU-Net: A Context Spatial U-Net for Accurate Blood Vessel Segmentation in Fundus Images.
IEEE J Biomed Health Inform. 2021 Apr;25(4):1128-1138. doi: 10.1109/JBHI.2020.3011178. Epub 2021 Apr 7.
10
MIC-Net: multi-scale integrated context network for automatic retinal vessel segmentation in fundus image.
Math Biosci Eng. 2023 Feb 8;20(4):6912-6931. doi: 10.3934/mbe.2023298.
