
Multi-Scale Self-Guided Attention for Medical Image Segmentation.

Authors

Sinha Ashish, Dolz Jose

Publication

IEEE J Biomed Health Inform. 2021 Jan;25(1):121-130. doi: 10.1109/JBHI.2020.2986926. Epub 2021 Jan 5.

Abstract

Even though convolutional neural networks (CNNs) are driving progress in medical image segmentation, standard models still have some drawbacks. First, the use of multi-scale approaches, i.e., encoder-decoder architectures, leads to a redundant use of information, where similar low-level features are extracted multiple times at multiple scales. Second, long-range feature dependencies are not efficiently modeled, resulting in non-optimal discriminative feature representations associated with each semantic class. In this paper, we attempt to overcome these limitations with the proposed architecture, which captures richer contextual dependencies through guided self-attention mechanisms. This approach integrates local features with their corresponding global dependencies and highlights interdependent channel maps in an adaptive manner. Further, an additional loss between different modules guides the attention mechanisms to neglect irrelevant information and focus on more discriminant regions of the image by emphasizing relevant feature associations. We evaluate the proposed model in the context of semantic segmentation on three different datasets: abdominal organs, cardiovascular structures and brain tumors. A series of ablation experiments supports the importance of these attention modules in the proposed architecture. In addition, compared to other state-of-the-art segmentation networks, our model yields better segmentation performance, increasing the accuracy of the predictions while reducing the standard deviation. This demonstrates the efficiency of our approach at generating precise and reliable automatic segmentations of medical images. Our code is made publicly available at: https://github.com/sinAshish/Multi-Scale-Attention.
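The two attention operations the abstract describes, integrating local features with their global dependencies (position attention) and adaptively highlighting interdependent channel maps (channel attention), can be sketched with plain NumPy. This is a minimal illustration of the generic dual-attention pattern, not the paper's exact modules: the function names, the residual scale `gamma`, and the omission of the learned query/key/value projections and guiding loss are all simplifying assumptions.

```python
import numpy as np

def spatial_self_attention(feat, gamma=1.0):
    """Position-attention sketch: every spatial location aggregates
    features from all other locations, weighted by feature similarity.
    feat: (C, H, W) feature map; gamma scales the attended residual."""
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)                    # flatten spatial dims
    energy = x.T @ x                              # (HW, HW) pairwise similarity
    energy -= energy.max(axis=1, keepdims=True)   # stabilize softmax
    attn = np.exp(energy)
    attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax
    out = x @ attn.T                              # mix in global context
    return (gamma * out + x).reshape(C, H, W)     # residual connection

def channel_self_attention(feat, gamma=1.0):
    """Channel-attention sketch: reweights each channel map by its
    similarity to every other channel, modeling channel interdependence."""
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)
    energy = x @ x.T                              # (C, C) channel affinities
    energy -= energy.max(axis=1, keepdims=True)
    attn = np.exp(energy)
    attn /= attn.sum(axis=1, keepdims=True)
    out = attn @ x                                # recombine channel maps
    return (gamma * out + x).reshape(C, H, W)
```

With `gamma = 0` both functions reduce to the identity, which is how such modules are typically initialized so the network can learn how much global context to mix in. The quadratic (HW, HW) affinity matrix is why these blocks are usually applied at the coarser scales of the encoder-decoder.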

