

TCDDU-Net: combining transformer and convolutional dual-path decoding U-Net for retinal vessel segmentation.

Affiliations

College of Information Engineering, Xinjiang Institute of Technology, No.1 Xuefu West Road, Aksu, 843100, Xinjiang, China.

School of Information Engineering, Mianyang Teachers' College, No. 166 Mianxing West Road, High Tech Zone, Mianyang, 621000, Sichuan, China.

Publication information

Sci Rep. 2024 Oct 29;14(1):25978. doi: 10.1038/s41598-024-77464-w.

Abstract

Accurate segmentation of retinal blood vessels is crucial for enhancing diagnostic efficiency and preventing disease progression. However, the small size and complex structure of retinal vessels, coupled with the low contrast of the corresponding fundus images, pose significant challenges for this task. We propose a novel approach for retinal vessel segmentation that combines a transformer with a convolutional dual-path decoding U-Net (TCDDU-Net). We propose the selective dense connection Swin Transformer block, which converts the input feature map into patches, introduces MLPs to generate probabilities, and performs selective fusion at different stages. This structure forms a dense connection framework, enabling the capture of long-distance dependencies and effective fusion of features across stages. The next stage involves the design of the background decoder, which uses deformable convolution to learn the background information of the retinal vessels by treating the background as a segmentation object. This decoder is then combined with the foreground decoder to form a dual-path decoding U-Net. Finally, the foreground segmentation results and the processed background segmentation results are fused to obtain the final retinal vessel segmentation map. To evaluate the effectiveness of our method, we performed experiments on the DRIVE, STARE, and CHASE datasets. Experimental results show that the segmentation accuracies of our algorithm are 96.98%, 97.40%, and 97.23%, and the AUC values are 98.68%, 98.56%, and 98.50%, respectively. In addition, we evaluated our method using the F1 score, specificity, and sensitivity metrics. A comparative analysis shows that the proposed TCDDU-Net effectively improves retinal vessel segmentation performance and achieves impressive results on multiple datasets compared with existing methods.
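The dual-path decoding idea described above can be illustrated with a brief sketch: a shared encoder feature map is decoded twice, once for the vessel foreground and once for the background, and the inverted background prediction is fused with the foreground prediction to produce the final vessel map. The PyTorch code below is a minimal sketch under stated assumptions, not the authors' implementation: the module names, channel sizes, plain convolutional blocks (standing in for the paper's deformable convolutions and selective dense connection Swin Transformer components), and the 1x1 fusion head are all illustrative choices.

```python
# Minimal sketch of a dual-path (foreground + background) decoder with
# complementary fusion. All names and layer choices are assumptions made
# for illustration; they do not reproduce the TCDDU-Net architecture.
import torch
import torch.nn as nn


class DecoderBlock(nn.Module):
    """Plain upsampling decoder stage (placeholder for the paper's blocks)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(self.up(x))


class DualPathDecoder(nn.Module):
    """Foreground and background decoding paths with a simple fusion head."""
    def __init__(self, enc_ch: int = 256):
        super().__init__()
        # Two independent decoding paths over the shared encoder features.
        self.fore_path = nn.Sequential(DecoderBlock(enc_ch, 128), DecoderBlock(128, 64))
        self.back_path = nn.Sequential(DecoderBlock(enc_ch, 128), DecoderBlock(128, 64))
        self.fore_head = nn.Conv2d(64, 1, kernel_size=1)
        self.back_head = nn.Conv2d(64, 1, kernel_size=1)
        # Fusion: combine the foreground map with the inverted background map.
        self.fuse = nn.Conv2d(2, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        fore = torch.sigmoid(self.fore_head(self.fore_path(feats)))  # vessel probability
        back = torch.sigmoid(self.back_head(self.back_path(feats)))  # background probability
        fused = self.fuse(torch.cat([fore, 1.0 - back], dim=1))      # complementary fusion
        return torch.sigmoid(fused)


if __name__ == "__main__":
    feats = torch.randn(1, 256, 64, 64)      # e.g. encoder output at 1/8 resolution
    vessel_map = DualPathDecoder()(feats)
    print(vessel_map.shape)                  # torch.Size([1, 1, 256, 256])
```

The sketch keeps only the fusion logic visible: treating the background as its own segmentation target gives a second, complementary signal that is inverted and merged with the foreground prediction before the final sigmoid.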


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5f24/11522399/908be95990cf/41598_2024_77464_Fig1_HTML.jpg
