

Stimulus-guided adaptive transformer network for retinal blood vessel segmentation in fundus images.

Affiliations

School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom.

School of Informatics, University of Leicester, University Road, Leicester, LE1 7RH, United Kingdom.

Publication Information

Med Image Anal. 2023 Oct;89:102929. doi: 10.1016/j.media.2023.102929. Epub 2023 Aug 9.

Abstract

Automated retinal blood vessel segmentation in fundus images provides important evidence to ophthalmologists in coping with prevalent ocular diseases in an efficient and non-invasive way. However, segmenting blood vessels in fundus images is a challenging task, due to the high variety in the scale and appearance of blood vessels and the high similarity in visual features between lesions and the retinal vasculature. Inspired by the way the visual cortex adaptively responds to the type of stimulus, we propose a Stimulus-Guided Adaptive Transformer Network (SGAT-Net) for accurate retinal blood vessel segmentation. It entails a Stimulus-Guided Adaptive Module (SGA-Module) that can extract local-global compound features based on an inductive bias and a self-attention mechanism. Alongside a lightweight residual encoder (ResEncoder) structure capturing the relevant details of appearance, a Stimulus-Guided Adaptive Pooling Transformer (SGAP-Former) is introduced to reweight the maximum and average pooling to enrich the contextual embedding representation while suppressing redundant information. Moreover, a Stimulus-Guided Adaptive Feature Fusion (SGAFF) module is designed to adaptively emphasize local details and global context and fuse them in the latent space, adjusting the receptive field (RF) according to the task. The evaluation is conducted on the largest fundus image dataset (FIVES) and three popular retinal image datasets (DRIVE, STARE, CHASEDB1). Experimental results show that the proposed method achieves competitive performance compared with existing methods, with a clear advantage in avoiding errors that commonly occur in areas with highly similar visual features. The source code is publicly available at: https://github.com/Gins-07/SGAT.
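The abstract describes SGAP-Former as reweighting maximum and average pooling. Below is a minimal NumPy sketch of that general idea, not the authors' implementation: two scalar logits (which would be learnable parameters in a real network) are softmax-normalized and used to blend max-pooled and average-pooled responses of a 2D feature map. The function name and all parameters are illustrative assumptions.

```python
import numpy as np

def reweighted_pool(feature_map, w_max=0.0, w_avg=0.0, size=2):
    """Blend max and average pooling with softmax-normalized weights.

    feature_map: 2D array (H, W); H and W must be divisible by `size`.
    w_max, w_avg: scalar logits (learnable in a real network).
    Returns the weighted combination of the two pooled maps.
    """
    h, w = feature_map.shape
    # Split the map into non-overlapping size x size blocks.
    blocks = feature_map.reshape(h // size, size, w // size, size)
    max_pool = blocks.max(axis=(1, 3))
    avg_pool = blocks.mean(axis=(1, 3))
    # Softmax over the two logits yields convex mixing weights.
    logits = np.array([w_max, w_avg], dtype=float)
    weights = np.exp(logits) / np.exp(logits).sum()
    return weights[0] * max_pool + weights[1] * avg_pool
```

With equal logits the two pooling modes are mixed evenly; training would push the weights toward whichever response is more informative for the segmentation task.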

