
Accurate Retinal Vessel Segmentation in Color Fundus Images via Fully Attention-Based Networks.

Publication Information

IEEE J Biomed Health Inform. 2021 Jun;25(6):2071-2081. doi: 10.1109/JBHI.2020.3028180. Epub 2021 Jun 3.

Abstract

Automatic retinal vessel segmentation is important for the diagnosis and prevention of ophthalmic diseases. Existing deep learning models for retinal vessel segmentation tend to treat every pixel equally. However, the multi-scale structure of the vasculature is a vital factor affecting segmentation results, especially for thin vessels. To address this gap, we propose a novel Fully Attention-based Network (FANet) that uses attention mechanisms to adaptively learn rich feature representations and aggregate multi-scale information. Specifically, the framework consists of an image pre-processing procedure and a semantic segmentation network. Green-channel extraction (GE) and contrast-limited adaptive histogram equalization (CLAHE) are employed as pre-processing to enhance the texture and contrast of retinal fundus images. In addition, the network combines two types of attention modules with the U-Net. We propose a lightweight dual-direction attention block to model global dependencies and reduce intra-class inconsistency, in which the weights of feature maps are updated based on the semantic correlation between pixels. The dual-direction attention block uses horizontal and vertical pooling operations to produce the attention map. In this way, the network aggregates global contextual information from semantically close regions or series of pixels belonging to the same object category. Meanwhile, we adopt the selective kernel (SK) unit in place of standard convolution to obtain multi-scale features over different receptive field sizes, weighted by soft attention. Furthermore, we demonstrate that the proposed model can effectively identify irregular, noisy, and multi-scale retinal vessels. Extensive experiments on the DRIVE, STARE, and CHASE_DB1 datasets show that our method achieves state-of-the-art performance.
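The pre-processing stage described above (green-channel extraction followed by CLAHE) can be sketched as follows. This is a minimal illustration, not the authors' code: a real pipeline would use a full tiled CLAHE implementation (e.g. OpenCV's `createCLAHE`), whereas the simplified function below applies global histogram equalization with the clipping-and-redistribution step that gives CLAHE its contrast limit. All array names and the `clip_fraction` parameter are illustrative assumptions.

```python
import numpy as np

def green_channel(rgb):
    """Extract the green channel (index 1) of an H x W x 3 fundus image."""
    return rgb[..., 1]

def clipped_equalize(gray, clip_fraction=0.01):
    """Simplified contrast-limited histogram equalization.

    Real CLAHE operates on local tiles with bilinear blending between
    tile mappings; this global version only demonstrates the histogram
    clipping + redistribution idea behind the contrast limit.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    limit = clip_fraction * gray.size
    excess = np.clip(hist - limit, 0, None).sum()
    hist = np.minimum(hist, limit) + excess / 256.0   # redistribute clipped mass
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)       # intensity look-up table
    return lut[gray]

# Toy 4x4 RGB "fundus" image
rgb = np.random.default_rng(0).integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
g = green_channel(rgb)
enhanced = clipped_equalize(g)
```

The green channel is commonly used for fundus images because vessels show the highest contrast against the background there, which is consistent with the GE step in the abstract.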
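The dual-direction attention idea — pooling horizontally and vertically, then gating features with the resulting attention map — can be illustrated with a toy numpy sketch. The abstract does not specify the block's exact layers, so the sketch below is an assumption-laden simplification: it broadcasts the two pooled profiles back to the full spatial grid and applies a sigmoid gate, omitting any learned convolutions the published block would contain.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_direction_attention(feat):
    """Toy dual-direction attention over a C x H x W feature map.

    Pools along the width (horizontal direction) and along the height
    (vertical direction), combines the two pooled profiles by broadcast
    addition, and gates the input with a sigmoid attention map. The
    published FANet block is more elaborate; this only demonstrates the
    pooling-based attention mechanism.
    """
    horiz = feat.mean(axis=2, keepdims=True)  # C x H x 1: pool along width
    vert = feat.mean(axis=1, keepdims=True)   # C x 1 x W: pool along height
    attn = sigmoid(horiz + vert)              # C x H x W via broadcasting
    return feat * attn

feat = np.random.default_rng(1).standard_normal((2, 5, 7))
out = dual_direction_attention(feat)
```

Because the attention map is shared along whole rows and columns, a pixel's response is modulated by all pixels in its row and column, which is one lightweight way to propagate the global context the abstract describes.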
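The selective kernel (SK) fusion step can likewise be sketched: branches with different receptive fields (e.g. 3x3 and 5x5 convolutions) are combined with soft attention weights computed per channel. In the real SK unit the weights come from small fully connected layers on a pooled descriptor; the sketch below replaces those learned layers with the per-branch pooled means as logits, which is purely an illustrative assumption.

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def selective_kernel_fuse(branch_a, branch_b):
    """Toy selective-kernel fusion of two branch outputs, each C x H x W.

    Softmax across the branch axis yields per-channel soft attention
    weights; the fused output is the attention-weighted sum of branches.
    (The learned FC layers of the real SK unit are omitted here.)
    """
    stacked = np.stack([branch_a, branch_b])   # 2 x C x H x W
    logits = stacked.mean(axis=(2, 3))         # 2 x C: pooled branch descriptors
    weights = softmax(logits, axis=0)          # soft attention across branches
    return (weights[:, :, None, None] * stacked).sum(axis=0)

# Two constant branches standing in for 3x3- and 5x5-conv outputs
a = np.full((3, 2, 2), 1.0)
b = np.full((3, 2, 2), 3.0)
fused = selective_kernel_fuse(a, b)
```

Because the weights form a convex combination, each fused value lies between the two branch responses, letting the network lean toward the receptive field that responds more strongly for each channel.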

