
DBFU-Net: Double branch fusion U-Net with hard example weighting train strategy to segment retinal vessel.

Author information

Huang Jianping, Lin Zefang, Chen Yingyin, Zhang Xiao, Zhao Wei, Zhang Jie, Li Yong, He Xu, Zhan Meixiao, Lu Ligong, Jiang Xiaofei, Peng Yongjun

Affiliations

Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai, China.

Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Department of Nuclear Medicine, Zhuhai, China.

Publication information

PeerJ Comput Sci. 2022 Feb 18;8:e871. doi: 10.7717/peerj-cs.871. eCollection 2022.

Abstract

BACKGROUND

Many fundus imaging modalities measure ocular changes. Automatic retinal vessel segmentation (RVS) is a significant fundus image-based method for the diagnosis of ophthalmologic diseases. However, precise vessel segmentation is a challenging task when detecting micro-changes in fundus images, tiny vessels, vessel edges, vessel lesions and optic disc edges.

METHODS

In this paper, we introduce a novel double branch fusion U-Net model in which one branch is trained with a weighting scheme that emphasizes harder examples, improving overall segmentation performance. This weighting strategy requires a new mask, which we call the hard example mask, and distinguishes our approach from other methods. Our method extracts the hard example mask with morphological operations, so no rough segmentation model is needed. To alleviate overfitting, we propose a random channel attention (RCA) mechanism that outperforms drop-out and L2 regularization in RVS.

RESULTS

We verified the proposed approach on the DRIVE, STARE and CHASE datasets. Compared to existing approaches on these datasets, the proposed approach achieves competitive performance (DRIVE: F1-score = 0.8289, G-mean = 0.8995, AUC = 0.9811; STARE: F1-score = 0.8501, G-mean = 0.9198, AUC = 0.9892; CHASE: F1-score = 0.8375, G-mean = 0.9138, AUC = 0.9879).
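For readers unfamiliar with the reported metrics: the F1-score balances precision and recall, while the G-mean is the geometric mean of sensitivity and specificity, a common choice for class-imbalanced tasks such as RVS where vessel pixels are a small minority. A minimal sketch of both, computed from binary per-pixel labels:

```python
import math

def f1_gmean(pred, gt):
    """F1-score and G-mean for binary predictions.

    pred, gt: equal-length sequences of 0/1 labels (1 = vessel pixel).
    """
    tp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)
    fp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 1)
    tn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 0)
    f1 = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    gmean = math.sqrt(sensitivity * specificity)
    return f1, gmean
```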

DISCUSSION

The segmentation results show that DBFU-Net with RCA achieves competitive performance on the three RVS datasets. In addition, the proposed morphology-based hard example extraction reduces computational cost. Finally, the random channel attention mechanism proved more effective than other regularization methods in the RVS task.
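The abstract does not detail how random channel attention works internally. One way to read the regularizing effect, sketched below purely as a guess (the pooling/sigmoid gating, the `drop_prob` parameter, and the "bypass a random subset of gates" rule are all assumptions; the paper's learned layers are omitted): compute squeeze-and-excitation-style channel weights, then during training randomly reset some of them to 1 so the network cannot rely on any fixed channel re-weighting.

```python
import numpy as np

def random_channel_attention(x, rng, drop_prob=0.2, training=True):
    """Hypothetical sketch of a random channel attention forward pass.

    x: feature map of shape (C, H, W).
    """
    c = x.shape[0]
    squeeze = x.reshape(c, -1).mean(axis=1)       # global average pool per channel
    weights = 1.0 / (1.0 + np.exp(-squeeze))      # sigmoid gate (learned FCs omitted)
    if training:
        keep = rng.random(c) >= drop_prob
        weights = np.where(keep, weights, 1.0)    # randomly bypass some gates
    return x * weights[:, None, None]
```

Unlike drop-out, which zeroes activations, this keeps every channel alive and only randomizes its attention scaling, which may explain why it behaves differently from drop-out or L2 regularization in the authors' experiments.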


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c9f2/9044242/26cb73153248/peerj-cs-08-871-g001.jpg
