

A Multi-Scale Feature Fusion Method Based on U-Net for Retinal Vessel Segmentation.

Authors

Yang Dan, Liu Guoru, Ren Mengcheng, Xu Bin, Wang Jiao

Affiliations

Key Laboratory of Data Analytics and Optimization for Smart Industry (Northeastern University), Ministry of Education, Shenyang 110819, China.

Key Laboratory of Infrared Optoelectric Materials and Micro-Nano Devices, Shenyang 110819, China.

Publication

Entropy (Basel). 2020 Jul 24;22(8):811. doi: 10.3390/e22080811.

DOI: 10.3390/e22080811
PMID: 33286584
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7517387/
Abstract

Computer-aided automatic segmentation of retinal blood vessels plays an important role in the diagnosis of diseases such as diabetes, glaucoma, and macular degeneration. In this paper, we propose a multi-scale feature fusion retinal vessel segmentation model based on U-Net, named MSFFU-Net. The model introduces the inception structure into the multi-scale feature extraction encoder part, and the max-pooling index is applied during the upsampling process in the feature fusion decoder of an improved network. The skip layer connection is used to transfer each set of feature maps generated on the encoder path to the corresponding feature maps on the decoder path. Moreover, a cost-sensitive loss function based on the Dice coefficient and cross-entropy is designed. Four transformations (rotation, mirroring, shifting, and cropping) are used as data augmentation strategies, and the CLAHE algorithm is applied to image preprocessing. The proposed framework is trained and tested on DRIVE and STARE, with sensitivity (Sen), specificity (Spe), accuracy (Acc), and area under the curve (AUC) adopted as the evaluation metrics. Detailed comparisons with the U-Net model verify the effectiveness and robustness of the proposed model. A Sen of 0.7762 and 0.7721, Spe of 0.9835 and 0.9885, Acc of 0.9694 and 0.9537, and AUC of 0.9790 and 0.9680 were achieved on the DRIVE and STARE databases, respectively. Results are also compared with other state-of-the-art methods, showing that the proposed model achieves superior, competitive performance.
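The abstract describes a cost-sensitive loss that combines the Dice coefficient with cross-entropy. Below is a minimal NumPy sketch of one plausible form, assuming a positively weighted binary cross-entropy term plus a soft-Dice term; the parameter names `w_dice`, `w_ce`, and `pos_weight` are illustrative, since the exact formulation is not given in the abstract:

```python
import numpy as np

def dice_ce_loss(probs, targets, smooth=1.0, w_dice=0.5, w_ce=0.5, pos_weight=2.0):
    """Hypothetical combined loss: weighted binary cross-entropy plus (1 - soft Dice).

    probs, targets: float arrays of the same shape with values in [0, 1].
    pos_weight > 1 penalizes missed vessel (positive) pixels more than
    background errors, mimicking the cost-sensitive design."""
    probs = np.clip(probs, 1e-7, 1 - 1e-7)  # avoid log(0)
    # Cost-sensitive cross-entropy: vessel-pixel errors weighted higher.
    ce = -(pos_weight * targets * np.log(probs)
           + (1 - targets) * np.log(1 - probs)).mean()
    # Soft Dice coefficient; the smoothing term avoids division by zero.
    inter = (probs * targets).sum()
    dice = (2.0 * inter + smooth) / (probs.sum() + targets.sum() + smooth)
    return w_ce * ce + w_dice * (1.0 - dice)
```

For a perfect prediction the loss approaches 0; the cost sensitivity comes entirely from `pos_weight`, which charges more for missing thin vessel pixels than for false positives on the far more numerous background pixels.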

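The decoder's reuse of max-pooling indices recorded by the encoder (a SegNet-style unpooling) can be sketched for a single channel in NumPy. The function names `max_pool_with_indices` and `max_unpool` and the 2x2 window are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    """k-by-k max pooling over a 2-D array that also returns the flat
    position of each window's maximum (the "max-pooling index")."""
    h, w = x.shape
    ph, pw = h // k, w // k
    pooled = np.empty((ph, pw), dtype=x.dtype)
    indices = np.empty((ph, pw), dtype=np.int64)
    for i in range(ph):
        for j in range(pw):
            win = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            r, c = divmod(int(np.argmax(win)), k)
            pooled[i, j] = win[r, c]
            indices[i, j] = (i * k + r) * w + (j * k + c)
    return pooled, indices

def max_unpool(pooled, indices, shape):
    """Sparse upsampling: each pooled value returns to the exact position
    it came from; all other positions stay zero."""
    out = np.zeros(shape, dtype=pooled.dtype)
    out.flat[indices.ravel()] = pooled.ravel()
    return out
```

Unpooling with stored indices puts activations back where the strongest responses occurred, which helps preserve the location of thin vessel boundaries; plain nearest-neighbour or bilinear upsampling would smear them.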

[Figures 1-16 of the article (entropy-22-00811-g001 through g016) are available via the full-text link above.]

Similar Articles

1. A Multi-Scale Feature Fusion Method Based on U-Net for Retinal Vessel Segmentation. Entropy (Basel). 2020 Jul 24;22(8):811. doi: 10.3390/e22080811.
2. Feature-guided attention network for medical image segmentation. Med Phys. 2023 Aug;50(8):4871-4886. doi: 10.1002/mp.16253. Epub 2023 Feb 16.
3. DEAF-Net: Detail-Enhanced Attention Feature Fusion Network for Retinal Vessel Segmentation. J Imaging Inform Med. 2025 Feb;38(1):496-519. doi: 10.1007/s10278-024-01207-6. Epub 2024 Aug 5.
4. DilUnet: A U-net based architecture for blood vessels segmentation. Comput Methods Programs Biomed. 2022 May;218:106732. doi: 10.1016/j.cmpb.2022.106732. Epub 2022 Mar 5.
5. Gated Skip-Connection Network with Adaptive Upsampling for Retinal Vessel Segmentation. Sensors (Basel). 2021 Sep 15;21(18):6177. doi: 10.3390/s21186177.
6. Densely connected U-Net retinal vessel segmentation algorithm based on multi-scale feature convolution extraction. Med Phys. 2021 Jul;48(7):3827-3841. doi: 10.1002/mp.14944. Epub 2021 Jun 16.
7. SegR-Net: A deep learning framework with multi-scale feature fusion for robust retinal vessel segmentation. Comput Biol Med. 2023 Sep;163:107132. doi: 10.1016/j.compbiomed.2023.107132. Epub 2023 Jun 10.
8. MIC-Net: multi-scale integrated context network for automatic retinal vessel segmentation in fundus image. Math Biosci Eng. 2023 Feb 8;20(4):6912-6931. doi: 10.3934/mbe.2023298.
9. BSEResU-Net: An attention-based before-activation residual U-Net for retinal vessel segmentation. Comput Methods Programs Biomed. 2021 Jun;205:106070. doi: 10.1016/j.cmpb.2021.106070. Epub 2021 Apr 1.
10. Curv-Net: Curvilinear structure segmentation network based on selective kernel and multi-Bi-ConvLSTM. Med Phys. 2022 May;49(5):3144-3158. doi: 10.1002/mp.15546. Epub 2022 Feb 25.

Cited By

1. Hierarchical Multi-Scale Mamba with Tubular Structure-Aware Convolution for Retinal Vessel Segmentation. Entropy (Basel). 2025 Aug 14;27(8):862. doi: 10.3390/e27080862.
2. Ensemble-based eye disease detection system utilizing fundus and vascular structures. Sci Rep. 2025 Jun 2;15(1):19298. doi: 10.1038/s41598-025-04503-5.
3. Optimization of table tennis target detection algorithm guided by multi-scale feature fusion of deep learning. Sci Rep. 2024 Jan 16;14(1):1401. doi: 10.1038/s41598-024-51865-3.
4. MHA-Net: A Multibranch Hybrid Attention Network for Medical Image Segmentation. Comput Math Methods Med. 2022 Oct 6;2022:8375981. doi: 10.1155/2022/8375981. eCollection 2022.
5. Early Glaucoma Detection by Using Style Transfer to Predict Retinal Nerve Fiber Layer Thickness Distribution on the Fundus Photograph. Ophthalmol Sci. 2022 Jun 11;2(3):100180. doi: 10.1016/j.xops.2022.100180. eCollection 2022 Sep.
6. Weakly Supervised Building Semantic Segmentation Based on Spot-Seeds and Refinement Process. Entropy (Basel). 2022 May 23;24(5):741. doi: 10.3390/e24050741.
7. DBFU-Net: Double branch fusion U-Net with hard example weighting train strategy to segment retinal vessel. PeerJ Comput Sci. 2022 Feb 18;8:e871. doi: 10.7717/peerj-cs.871. eCollection 2022.
8. New Trends in Melanoma Detection Using Neural Networks: A Systematic Review. Sensors (Basel). 2022 Jan 10;22(2):496. doi: 10.3390/s22020496.

References

1. Optimizing Deep CNN Architectures for Face Liveness Detection. Entropy (Basel). 2019 Apr 20;21(4):423. doi: 10.3390/e21040423.
2. A Multi-Scale Directional Line Detector for Retinal Vessel Segmentation. Sensors (Basel). 2019 Nov 13;19(22):4949. doi: 10.3390/s19224949.
3. Learning Semantic Graphics Using Convolutional Encoder-Decoder Network for Autonomous Weeding in Paddy. Front Plant Sci. 2019 Oct 31;10:1404. doi: 10.3389/fpls.2019.01404. eCollection 2019.
4. Inception Modules Enhance Brain Tumor Segmentation. Front Comput Neurosci. 2019 Jul 12;13:44. doi: 10.3389/fncom.2019.00044. eCollection 2019.
5. Multi-proportion channel ensemble model for retinal vessel segmentation. Comput Biol Med. 2019 Aug;111:103352. doi: 10.1016/j.compbiomed.2019.103352. Epub 2019 Jul 9.
6. Retinal vascular segmentation using superpixel-based line operator and its application to vascular topology estimation. Med Phys. 2018 Jul;45(7):3132-3146. doi: 10.1002/mp.12953. Epub 2018 May 23.
7. Multi-level deep supervised networks for retinal vessel segmentation. Int J Comput Assist Radiol Surg. 2017 Dec;12(12):2181-2193. doi: 10.1007/s11548-017-1619-0. Epub 2017 Jun 2.
8. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans Pattern Anal Mach Intell. 2017 Dec;39(12):2481-2495. doi: 10.1109/TPAMI.2016.2644615. Epub 2017 Jan 2.
9. Segmenting Retinal Blood Vessels With Deep Neural Networks. IEEE Trans Med Imaging. 2016 Nov;35(11):2369-2380. doi: 10.1109/TMI.2016.2546227. Epub 2016 Mar 24.
10. A Discriminatively Trained Fully Connected Conditional Random Field Model for Blood Vessel Segmentation in Fundus Images. IEEE Trans Biomed Eng. 2017 Jan;64(1):16-27. doi: 10.1109/TBME.2016.2535311. Epub 2016 Feb 26.