Suppr 超能文献


TCDDU-Net: combining transformer and convolutional dual-path decoding U-Net for retinal vessel segmentation.

Affiliations

College of Information Engineering, Xinjiang Institute of Technology, No.1 Xuefu West Road, Aksu, 843100, Xinjiang, China.

School of Information Engineering, Mianyang Teachers' College, No. 166 Mianxing West Road, High Tech Zone, Mianyang, 621000, Sichuan, China.

Publication information

Sci Rep. 2024 Oct 29;14(1):25978. doi: 10.1038/s41598-024-77464-w.

DOI: 10.1038/s41598-024-77464-w
PMID: 39472606
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11522399/
Abstract

Accurate segmentation of retinal blood vessels is crucial for enhancing diagnostic efficiency and preventing disease progression. However, the small size and complex structure of retinal blood vessels, coupled with low contrast in corresponding fundus images, pose significant challenges for this task. We propose a novel approach for retinal vessel segmentation, which combines the transformer and convolutional dual-path decoding U-Net (TCDDU-Net). We propose the selective dense connection swin transformer block, which converts the input feature map into patches, introduces MLPs to generate probabilities, and performs selective fusion at different stages. This structure forms a dense connection framework, enabling the capture of long-distance dependencies and effective fusion of features across different stages. The subsequent stage involves the design of the background decoder, which utilizes deformable convolution to learn the background information of retinal vessels by treating them as segmentation objects. This is then combined with the foreground decoder to form a dual-path decoding U-Net. Finally, the foreground segmentation results and the processed background segmentation results are fused to obtain the final retinal vessel segmentation map. To evaluate the effectiveness of our method, we performed experiments on the DRIVE, STARE, and CHASE datasets for retinal vessel segmentation. Experimental results show that the segmentation accuracies of our algorithms are 96.98, 97.40, and 97.23, and the AUC metrics are 98.68, 98.56, and 98.50, respectively. In addition, we evaluated our methods using F1 score, specificity, and sensitivity metrics. Through a comparative analysis, we found that our proposed TCDDU-Net method effectively improves retinal vessel segmentation performance and achieves impressive results on multiple datasets compared to existing methods.
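The selective fusion idea in the abstract — pooled patch features feed an MLP that produces per-stage probabilities, which then weight a dense-connection fusion of encoder stages — can be illustrated with a minimal NumPy sketch. This is not the paper's block: the tiny randomly initialised MLP, the global average pooling, and the softmax gating over stages are all assumptions made here for illustration; the actual block operates on Swin transformer patches.

```python
import numpy as np

def selective_fusion(stage_features, hidden_dim=8, rng=None):
    """Gate and fuse same-shaped feature maps from different stages.

    stage_features: list of (H, W, C) arrays, one per encoder stage.
    A toy two-layer MLP maps each stage's pooled descriptor to a logit;
    a softmax over stages yields the fusion weights (an assumed scheme,
    standing in for the paper's selective dense connection block).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Global average pool each stage to a (C,) descriptor -> (S, C)
    pooled = np.stack([f.mean(axis=(0, 1)) for f in stage_features])
    W1 = rng.standard_normal((pooled.shape[1], hidden_dim))
    W2 = rng.standard_normal((hidden_dim, 1))
    logits = np.maximum(pooled @ W1, 0.0) @ W2      # ReLU MLP -> (S, 1)
    weights = np.exp(logits - logits.max())
    weights = weights / weights.sum()                # softmax over stages
    # Weighted sum of the stage feature maps
    fused = sum(w * f for w, f in zip(weights[:, 0], stage_features))
    return fused, weights[:, 0]
```

In the paper the probabilities decide which stages contribute at each decoding level; this sketch collapses that to a single softmax-weighted sum to keep the gating mechanism visible.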

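The final step fuses the foreground decoder's output with the processed background segmentation (the background decoder treats the non-vessel region as the segmentation object). The abstract does not give the fusion rule, so the sketch below assumes a simple one: average the foreground probability with the complement of the background probability, then threshold.

```python
import numpy as np

def fuse_dual_path(foreground_prob, background_prob, threshold=0.5):
    """Fuse foreground and background decoder outputs into a vessel mask.

    Both inputs are per-pixel probabilities in [0, 1]. The averaging-and-
    thresholding rule is an assumption for illustration; the paper only
    states that the two decoding paths' results are fused.
    """
    fused = (foreground_prob + (1.0 - background_prob)) / 2.0
    return (fused >= threshold).astype(np.uint8)

# Toy 2x2 example: pixels where the two paths agree are decided cleanly.
fg = np.array([[0.9, 0.2], [0.6, 0.1]])
bg = np.array([[0.1, 0.8], [0.3, 0.95]])
mask = fuse_dual_path(fg, bg)  # -> [[1, 0], [1, 0]]
```

Using the background path this way lets a confident "this is background" prediction veto a weak foreground response, which is one plausible reading of why the dual-path design helps with thin, low-contrast vessels.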

Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5f24/11522399/908be95990cf/41598_2024_77464_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5f24/11522399/7b0f4d0bf156/41598_2024_77464_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5f24/11522399/826d419e854e/41598_2024_77464_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5f24/11522399/847cf13cb5fa/41598_2024_77464_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5f24/11522399/6f4b4c9bc113/41598_2024_77464_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5f24/11522399/0d0f4ed30bd7/41598_2024_77464_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5f24/11522399/3924b16f9dde/41598_2024_77464_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5f24/11522399/e10976b27890/41598_2024_77464_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5f24/11522399/cc0ea0814b94/41598_2024_77464_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5f24/11522399/a5162db03f56/41598_2024_77464_Fig10_HTML.jpg

Similar articles

1. TCDDU-Net: combining transformer and convolutional dual-path decoding U-Net for retinal vessel segmentation.
   Sci Rep. 2024 Oct 29;14(1):25978. doi: 10.1038/s41598-024-77464-w.
2. PCAT-UNet: UNet-like network fused convolution and transformer for retinal vessel segmentation.
   PLoS One. 2022 Jan 24;17(1):e0262689. doi: 10.1371/journal.pone.0262689. eCollection 2022.
3. TLTNet: A novel transscale cascade layered transformer network for enhanced retinal blood vessel segmentation.
   Comput Biol Med. 2024 Aug;178:108773. doi: 10.1016/j.compbiomed.2024.108773. Epub 2024 Jun 25.
4. TDCAU-Net: retinal vessel segmentation using transformer dilated convolutional attention-based U-Net method.
   Phys Med Biol. 2023 Dec 22;69(1). doi: 10.1088/1361-6560/ad1273.
5. MTPA_Unet: Multi-Scale Transformer-Position Attention Retinal Vessel Segmentation Network Joint Transformer and CNN.
   Sensors (Basel). 2022 Jun 17;22(12):4592. doi: 10.3390/s22124592.
6. SegR-Net: A deep learning framework with multi-scale feature fusion for robust retinal vessel segmentation.
   Comput Biol Med. 2023 Sep;163:107132. doi: 10.1016/j.compbiomed.2023.107132. Epub 2023 Jun 10.
7. LMBiS-Net: A lightweight bidirectional skip connection based multipath CNN for retinal blood vessel segmentation.
   Sci Rep. 2024 Jul 2;14(1):15219. doi: 10.1038/s41598-024-63496-9.
8. Densely connected U-Net retinal vessel segmentation algorithm based on multi-scale feature convolution extraction.
   Med Phys. 2021 Jul;48(7):3827-3841. doi: 10.1002/mp.14944. Epub 2021 Jun 16.
9. Gated Skip-Connection Network with Adaptive Upsampling for Retinal Vessel Segmentation.
   Sensors (Basel). 2021 Sep 15;21(18):6177. doi: 10.3390/s21186177.
10. Segmentation of retinal vessels in fundus images based on U-Net with self-calibrated convolutions and spatial attention modules.
    Med Biol Eng Comput. 2023 Jul;61(7):1745-1755. doi: 10.1007/s11517-023-02806-1. Epub 2023 Mar 10.

References cited in this article

1. Retinal vessel segmentation via a Multi-resolution Contextual Network and adversarial learning.
   Neural Netw. 2023 Aug;165:310-320. doi: 10.1016/j.neunet.2023.05.029. Epub 2023 Jun 2.
2. Segmentation of retinal blood vessels by a novel hybrid technique - Principal Component Analysis (PCA) and Contrast Limited Adaptive Histogram Equalization (CLAHE).
   Microvasc Res. 2023 Jul;148:104477. doi: 10.1016/j.mvr.2023.104477. Epub 2023 Feb 4.
3. The influence of etiology on surgical outcomes in neovascular glaucoma.
   BMC Ophthalmol. 2021 Dec 20;21(1):440. doi: 10.1186/s12886-021-02212-x.
4. SCS-Net: A Scale and Context Sensitive Network for Retinal Vessel Segmentation.
   Med Image Anal. 2021 May;70:102025. doi: 10.1016/j.media.2021.102025. Epub 2021 Mar 4.
5. Psi-Net: Shape and boundary aware joint multi-task deep network for medical image segmentation.
   Annu Int Conf IEEE Eng Med Biol Soc. 2019 Jul;2019:7223-7226. doi: 10.1109/EMBC.2019.8857339.
6. UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation.
   IEEE Trans Med Imaging. 2020 Jun;39(6):1856-1867. doi: 10.1109/TMI.2019.2959609. Epub 2019 Dec 13.
7. 3D convolutional neural networks for tumor segmentation using long-range 2D context.
   Comput Med Imaging Graph. 2019 Apr;73:60-72. doi: 10.1016/j.compmedimag.2019.02.001. Epub 2019 Feb 21.
8. A Three-Stage Deep Learning Model for Accurate Retinal Vessel Segmentation.
   IEEE J Biomed Health Inform. 2019 Jul;23(4):1427-1436. doi: 10.1109/JBHI.2018.2872813. Epub 2018 Sep 28.
9. H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation From CT Volumes.
   IEEE Trans Med Imaging. 2018 Dec;37(12):2663-2674. doi: 10.1109/TMI.2018.2845918. Epub 2018 Jun 11.
10. Joint Segment-Level and Pixel-Wise Losses for Deep Learning Based Retinal Vessel Segmentation.
    IEEE Trans Biomed Eng. 2018 Sep;65(9):1912-1923. doi: 10.1109/TBME.2018.2828137. Epub 2018 Apr 19.