
DBFU-Net: Double branch fusion U-Net with hard example weighting train strategy to segment retinal vessel.

Authors

Huang Jianping, Lin Zefang, Chen Yingyin, Zhang Xiao, Zhao Wei, Zhang Jie, Li Yong, He Xu, Zhan Meixiao, Lu Ligong, Jiang Xiaofei, Peng Yongjun

Affiliations

Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China.

Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Department of Nuclear Medicine, Zhuhai, China.

Publication Information

PeerJ Comput Sci. 2022 Feb 18;8:e871. doi: 10.7717/peerj-cs.871. eCollection 2022.

DOI: 10.7717/peerj-cs.871
PMID: 35494791
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9044242/
Abstract

BACKGROUND

Many fundus imaging modalities measure ocular changes. Automatic retinal vessel segmentation (RVS) is a significant fundus image-based method for the diagnosis of ophthalmologic diseases. However, precise vessel segmentation is a challenging task when detecting micro-changes in fundus images, tiny vessels, vessel edges, vessel lesions and optic disc edges.

METHODS

In this paper, we introduce a novel double branch fusion U-Net model in which one of the branches is trained with a weighting scheme that emphasizes harder examples, improving overall segmentation performance. This weighting strategy, which differs from other methods, requires a new mask that we call the hard example mask. Our method extracts the hard example mask by morphology, so no rough segmentation model is needed to produce it. To alleviate overfitting, we propose a random channel attention mechanism that outperforms the drop-out and L2-regularization methods in RVS.
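
The abstract states only that the hard example mask is extracted with morphology and then used to up-weight harder pixels during training; the exact operations are not given. Below is a minimal, hypothetical Python sketch of one way this could look, using a morphological gradient of the ground-truth vessel mask to mark vessel edges and thin structures as hard pixels. The function names, the gradient choice, and the weighting formula are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch only: the paper states the hard example mask comes from
# morphology, but not which operations; the morphological gradient and the
# weighting formula below are illustrative assumptions.
import numpy as np
from scipy import ndimage


def hard_example_mask(vessel_gt: np.ndarray, size: int = 3) -> np.ndarray:
    """Mark pixels near vessel boundaries (edges, thin vessels) as hard examples."""
    structure = np.ones((size, size), dtype=bool)
    dilated = ndimage.binary_dilation(vessel_gt > 0, structure=structure)
    eroded = ndimage.binary_erosion(vessel_gt > 0, structure=structure)
    return (dilated & ~eroded).astype(np.float32)  # morphological gradient band


def hard_weighted_bce(pred: np.ndarray, target: np.ndarray,
                      hard_mask: np.ndarray, hard_weight: float = 2.0) -> float:
    """Pixel-wise binary cross-entropy with extra weight on hard-example pixels."""
    eps = 1e-7
    bce = -(target * np.log(pred + eps) + (1.0 - target) * np.log(1.0 - pred + eps))
    weights = 1.0 + (hard_weight - 1.0) * hard_mask  # hard pixels count hard_weight times
    return float((weights * bce).mean())
```

Consistent with the abstract's claim, no rough pre-segmentation model is needed in this sketch: the mask is derived from the ground truth by morphology alone.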

RESULTS

We verified the proposed approach on the DRIVE, STARE and CHASE datasets to quantify the performance metrics. Compared with other existing approaches on these datasets, the proposed approach achieves competitive performance (DRIVE: F1-Score = 0.8289, G-Mean = 0.8995, AUC = 0.9811; STARE: F1-Score = 0.8501, G-Mean = 0.9198, AUC = 0.9892; CHASE: F1-Score = 0.8375, G-Mean = 0.9138, AUC = 0.9879).
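
For reference, the reported F1-Score, G-Mean and AUC are standard pixel-wise classification metrics. The sketch below shows how they are commonly computed for a vessel probability map against the ground truth; the 0.5 threshold and the use of scikit-learn are assumptions, not details taken from the paper.

```python
# Illustrative computation of the reported metrics; the threshold and library
# choice are assumptions, not taken from the paper.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score


def segmentation_metrics(prob_map: np.ndarray, gt: np.ndarray, threshold: float = 0.5) -> dict:
    y_true = (gt.ravel() > 0).astype(int)          # ground-truth vessel pixels
    y_prob = prob_map.ravel()                      # predicted vessel probabilities
    y_pred = (y_prob >= threshold).astype(int)     # binarised prediction

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)                   # true positive rate
    specificity = tn / (tn + fp)                   # true negative rate

    return {
        "F1-Score": f1_score(y_true, y_pred),
        "G-Mean": float(np.sqrt(sensitivity * specificity)),
        "AUC": roc_auc_score(y_true, y_prob),
    }
```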

DISCUSSION

The segmentation results show that DBFU-Net with RCA achieves competitive performance on three RVS datasets. Additionally, the proposed morphology-based extraction method for hard examples reduces the computational cost. Finally, the random channel attention mechanism proposed in this paper proves more effective than other regularization methods in the RVS task.


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9f2/9044242/26cb73153248/peerj-cs-08-871-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9f2/9044242/c857e2dfee66/peerj-cs-08-871-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9f2/9044242/d021baf29554/peerj-cs-08-871-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9f2/9044242/c9cc92c40bee/peerj-cs-08-871-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9f2/9044242/4e3e7cfadf32/peerj-cs-08-871-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9f2/9044242/b2cda559dd06/peerj-cs-08-871-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9f2/9044242/118b881a8961/peerj-cs-08-871-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9f2/9044242/b2bb27aca5e3/peerj-cs-08-871-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9f2/9044242/cdd24ef7b548/peerj-cs-08-871-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9f2/9044242/da0c973385d7/peerj-cs-08-871-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9f2/9044242/bd6c08bce9ab/peerj-cs-08-871-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9f2/9044242/69b96ce65ff8/peerj-cs-08-871-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9f2/9044242/f9a01db6d4e0/peerj-cs-08-871-g013.jpg

Similar Articles

1. DBFU-Net: Double branch fusion U-Net with hard example weighting train strategy to segment retinal vessel.
PeerJ Comput Sci. 2022 Feb 18;8:e871. doi: 10.7717/peerj-cs.871. eCollection 2022.
2. Hard Attention Net for Automatic Retinal Vessel Segmentation.
IEEE J Biomed Health Inform. 2020 Dec;24(12):3384-3396. doi: 10.1109/JBHI.2020.3002985. Epub 2020 Dec 4.
3. A high resolution representation network with multi-path scale for retinal vessel segmentation.
Comput Methods Programs Biomed. 2021 Sep;208:106206. doi: 10.1016/j.cmpb.2021.106206. Epub 2021 Jun 4.
4. MCFSA-Net: A multi-scale channel fusion and spatial activation network for retinal vessel segmentation.
J Biophotonics. 2023 Apr;16(4):e202200295. doi: 10.1002/jbio.202200295. Epub 2022 Dec 1.
5. Segmentation of retinal vessels in fundus images based on U-Net with self-calibrated convolutions and spatial attention modules.
Med Biol Eng Comput. 2023 Jul;61(7):1745-1755. doi: 10.1007/s11517-023-02806-1. Epub 2023 Mar 10.
6. Spatial attention U-Net model with Harris hawks optimization for retinal blood vessel and optic disc segmentation in fundus images.
Int Ophthalmol. 2024 Aug 29;44(1):359. doi: 10.1007/s10792-024-03279-3.
7. RFARN: Retinal vessel segmentation based on reverse fusion attention residual network.
PLoS One. 2021 Dec 3;16(12):e0257256. doi: 10.1371/journal.pone.0257256. eCollection 2021.
8. Multi-proportion channel ensemble model for retinal vessel segmentation.
Comput Biol Med. 2019 Aug;111:103352. doi: 10.1016/j.compbiomed.2019.103352. Epub 2019 Jul 9.
9. BSEResU-Net: An attention-based before-activation residual U-Net for retinal vessel segmentation.
Comput Methods Programs Biomed. 2021 Jun;205:106070. doi: 10.1016/j.cmpb.2021.106070. Epub 2021 Apr 1.
10. Diabetic and Hypertensive Retinopathy Screening in Fundus Images Using Artificially Intelligent Shallow Architectures.
J Pers Med. 2021 Dec 23;12(1):7. doi: 10.3390/jpm12010007.

Cited By

1. DEAF-Net: Detail-Enhanced Attention Feature Fusion Network for Retinal Vessel Segmentation.
J Imaging Inform Med. 2025 Feb;38(1):496-519. doi: 10.1007/s10278-024-01207-6. Epub 2024 Aug 5.

References

1. PixelBNN: Augmenting the PixelCNN with Batch Normalization and the Presentation of a Fast Architecture for Retinal Vessel Segmentation.
J Imaging. 2019 Feb 2;5(2):26. doi: 10.3390/jimaging5020026.
2. A Multi-Scale Feature Fusion Method Based on U-Net for Retinal Vessel Segmentation.
Entropy (Basel). 2020 Jul 24;22(8):811. doi: 10.3390/e22080811.
3. MAU-Net: A Retinal Vessels Segmentation Method.
Annu Int Conf IEEE Eng Med Biol Soc. 2020 Jul;2020:1958-1961. doi: 10.1109/EMBC44109.2020.9176093.
4. BTS-DSN: Deeply supervised neural network with short connections for retinal vessel segmentation.
Int J Med Inform. 2019 Jun;126:105-113. doi: 10.1016/j.ijmedinf.2019.03.015. Epub 2019 Apr 1.
5. Automated techniques for blood vessels segmentation through fundus retinal images: A review.
Microsc Res Tech. 2019 Feb;82(2):153-170. doi: 10.1002/jemt.23172. Epub 2019 Jan 5.
6. A Three-Stage Deep Learning Model for Accurate Retinal Vessel Segmentation.
IEEE J Biomed Health Inform. 2019 Jul;23(4):1427-1436. doi: 10.1109/JBHI.2018.2872813. Epub 2018 Sep 28.
7. Retinal Vessel Segmentation Using Minimum Spanning Superpixel Tree Detector.
IEEE Trans Cybern. 2019 Jul;49(7):2707-2719. doi: 10.1109/TCYB.2018.2833963. Epub 2018 May 22.
8. A Composite Model of Wound Segmentation Based on Traditional Methods and Deep Neural Networks.
Comput Intell Neurosci. 2018 May 31;2018:4149103. doi: 10.1155/2018/4149103. eCollection 2018.
9. Improving dense conditional random field for retinal vessel segmentation by discriminative feature learning and thin-vessel enhancement.
Comput Methods Programs Biomed. 2017 Sep;148:13-25. doi: 10.1016/j.cmpb.2017.06.016. Epub 2017 Jun 24.
10. A Discriminatively Trained Fully Connected Conditional Random Field Model for Blood Vessel Segmentation in Fundus Images.
IEEE Trans Biomed Eng. 2017 Jan;64(1):16-27. doi: 10.1109/TBME.2016.2535311. Epub 2016 Feb 26.