
RFARN: Retinal vessel segmentation based on reverse fusion attention residual network.

Affiliations

College of Computer Science and Engineering, Northwest Normal University, Lanzhou Gansu, China.

Publication Information

PLoS One. 2021 Dec 3;16(12):e0257256. doi: 10.1371/journal.pone.0257256. eCollection 2021.

Abstract

Accurate segmentation of retinal vessels is critical to understanding the mechanisms of many ocular pathologies and to their diagnosis and treatment. The poor contrast, inhomogeneous background, and complex structure of retinal fundus images still make accurate vessel segmentation challenging. In this paper, we propose an effective framework for retinal vessel segmentation whose main innovations lie in the pre-processing and segmentation stages. First, we enhance the images of three publicly available fundus datasets with the multiscale retinex with color restoration (MSRCR) method, which effectively suppresses noise and highlights the vessel structure, creating a good basis for the segmentation phase. The processed fundus images are then fed into a Reverse Fusion Attention Residual Network (RFARN) for training to achieve more accurate retinal vessel segmentation. In the RFARN, a Reverse Channel Attention Module (RCAM) and a Reverse Spatial Attention Module (RSAM) highlight shallow details in the channel and spatial dimensions, and these modules fuse deep local features with shallow global features to ensure the continuity and integrity of the segmented vessels. On the DRIVE, STARE, and CHASE datasets, accuracy (Acc) was 0.9712, 0.9822, and 0.9780; sensitivity (Se) was 0.8788, 0.8874, and 0.8352; specificity (Sp) was 0.9803, 0.9891, and 0.9890; area under the ROC curve (AUC) was 0.9910, 0.9952, and 0.9904; and the F1-score was 0.8453, 0.8707, and 0.8185, respectively. Compared with existing retinal image segmentation methods such as UNet, R2UNet, DUNet, HAnet, Sine-Net, and FANet, our method achieved better vessel segmentation performance on all three fundus datasets.
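The pre-processing stage is based on MSRCR. The abstract does not give the paper's exact parameters, so the sketch below implements the standard MSRCR pipeline with scale and gain values (`sigmas`, `alpha`, `beta`) that are common defaults from the retinex literature, not necessarily those used by the authors:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msrcr(image, sigmas=(15, 80, 250), alpha=125.0, beta=46.0, eps=1.0):
    """Multiscale Retinex with Color Restoration (MSRCR).

    image: HxWx3 array. sigmas/alpha/beta are common literature defaults,
    not the paper's (unpublished in the abstract) settings.
    """
    img = image.astype(np.float64) + eps  # offset to avoid log(0)

    # Multiscale retinex: average single-scale retinex over several
    # Gaussian surround scales (log image minus log of its blur).
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blurred = np.stack(
            [gaussian_filter(img[..., c], sigma) for c in range(img.shape[-1])],
            axis=-1,
        )
        msr += np.log(img) - np.log(blurred)
    msr /= len(sigmas)

    # Color restoration: weight each channel by its share of total intensity,
    # which compensates for the desaturation introduced by retinex alone.
    crf = beta * (np.log(alpha * img) - np.log(img.sum(axis=-1, keepdims=True)))
    out = msr * crf

    # Linear stretch back to displayable [0, 255] range.
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (out * 255).astype(np.uint8)
```

Applied to a fundus image, the log-ratio against the blurred surround flattens the inhomogeneous background while the color restoration term keeps channel balance, which is what makes the vessel structure stand out before segmentation.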

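The abstract does not specify how RCAM and RSAM are built internally. A common formulation of reverse attention inverts a sigmoid attention map derived from deep features (1 − attention) and applies it to shallow features, so the network re-emphasizes details the deep path suppressed. The numpy sketch below is a hypothetical illustration of that idea for the channel and spatial cases, not the paper's exact modules:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_channel_attention(shallow, deep):
    """Hypothetical reverse channel attention over (C, H, W) features.

    Channel weights come from global average pooling of the deep features;
    inverting them (1 - w) boosts channels the deep path down-weighted.
    """
    w = sigmoid(deep.mean(axis=(1, 2)))        # (C,) channel descriptor
    reverse_w = 1.0 - w                        # invert the attention weights
    return shallow * reverse_w[:, None, None]  # reweight shallow features

def reverse_spatial_attention(shallow, deep):
    """Hypothetical reverse spatial attention over (C, H, W) features.

    A spatial saliency map is pooled from the deep features; its inverse
    highlights locations (e.g. thin vessels) the deep path missed.
    """
    sal = sigmoid(deep.mean(axis=0, keepdims=True))  # (1, H, W) spatial map
    return shallow * (1.0 - sal)                     # attend to missed regions
```

Fusing the reweighted shallow features back into the decoder is what lets this kind of design recover the thin vessel continuity that plain encoder-decoder upsampling tends to lose.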

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8454/8641866/ac17df0d6c2f/pone.0257256.g001.jpg
