

EARDS: EfficientNet and attention-based residual depth-wise separable convolution for joint OD and OC segmentation.

Author Information

Zhou Wei, Ji Jianhang, Jiang Yan, Wang Jing, Qi Qi, Yi Yugen

Affiliations

College of Computer Science, Shenyang Aerospace University, Shenyang, China.

School of Software, Jiangxi Normal University, Nanchang, China.

Publication Information

Front Neurosci. 2023 Mar 9;17:1139181. doi: 10.3389/fnins.2023.1139181. eCollection 2023.

Abstract

BACKGROUND

Glaucoma is the leading cause of irreversible vision loss. Accurate Optic Disc (OD) and Optic Cup (OC) segmentation is beneficial for glaucoma diagnosis. In recent years, deep learning has achieved remarkable performance in OD and OC segmentation. However, OC segmentation is more challenging than OD segmentation because of its large shape variability and indistinct boundaries, which degrade performance when deep learning models are applied to the OC. Moreover, existing methods either segment the OD and OC independently or require pre-processing procedures to extract an OD-centered region beforehand.

METHODS

In this paper, we propose a one-stage network, EfficientNet and Attention-based Residual Depth-wise Separable convolution (EARDS), for joint OD and OC segmentation. In EARDS, EfficientNet-b0 serves as the encoder to capture more effective boundary representations. To suppress irrelevant regions and highlight the fine OD and OC regions, an Attention Gate (AG) is incorporated into each skip connection. In addition, a Residual Depth-wise Separable Convolution (RDSC) block is developed to improve segmentation performance and computational efficiency. Further, a novel decoder network is built by combining the AG, the RDSC block, and a Batch Normalization (BN) layer, which mitigates the vanishing-gradient problem and accelerates convergence. Finally, a weighted combination of focal loss and Dice loss is designed to guide the network toward accurate OD and OC segmentation.
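To make these building blocks concrete, the following is a minimal PyTorch-style sketch of an RDSC block and of the weighted focal + Dice objective described above. The exact layer ordering, channel widths, and the weighting factor `alpha` are illustrative assumptions rather than the authors' released configuration; the reference implementation is in the repository linked under Results and Discussion.

```python
# Hedged sketch of an RDSC block and a weighted focal + Dice loss.
# Layer ordering, channel widths, and `alpha` are assumptions, not the
# authors' exact EARDS settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RDSCBlock(nn.Module):
    """Residual Depth-wise Separable Convolution block (sketch).

    A depth-wise 3x3 convolution followed by a point-wise 1x1 convolution,
    each followed by Batch Normalization, plus a residual shortcut.
    """

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1,
                                   groups=in_ch, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # 1x1 projection so the shortcut matches the output channel count
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = F.relu(self.bn1(self.depthwise(x)))
        out = self.bn2(self.pointwise(out))
        return F.relu(out + self.shortcut(x))


def focal_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                    alpha: float = 0.5, gamma: float = 2.0,
                    eps: float = 1e-6) -> torch.Tensor:
    """Weighted combination of focal loss and soft Dice loss (binary-mask sketch)."""
    prob = torch.sigmoid(logits)
    # Focal term: down-weights easy pixels via the (1 - p_t)^gamma factor
    ce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = prob * target + (1.0 - prob) * (1.0 - target)
    focal = ((1.0 - p_t) ** gamma * ce).mean()
    # Soft Dice term: penalizes low region overlap between prediction and mask
    inter = (prob * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)
    return alpha * focal + (1.0 - alpha) * dice
```

In the full network, blocks of this kind would sit in the decoder after each attention-gated skip connection, and `alpha` simply trades the pixel-wise focal term against the region-overlap Dice term.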

RESULTS AND DISCUSSION

Extensive experiments on the Drishti-GS and REFUGE datasets show that the proposed EARDS outperforms state-of-the-art approaches. The code is available at https://github.com/M4cheal/EARDS.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4cf2/10033527/c1548491774a/fnins-17-1139181-g001.jpg
