Boundary Constraint Network With Cross Layer Feature Integration for Polyp Segmentation.

Authors

Yue Guanghui, Han Wanwan, Jiang Bin, Zhou Tianwei, Cong Runmin, Wang Tianfu

Publication

IEEE J Biomed Health Inform. 2022 Aug;26(8):4090-4099. doi: 10.1109/JBHI.2022.3173948. Epub 2022 Aug 11.

Abstract

Clinically, proper polyp localization in endoscopy images plays a vital role in follow-up treatment (e.g., surgical planning). Deep convolutional neural networks (CNNs) offer a promising route to automatic polyp segmentation and sidestep the limitations of visual inspection, such as subjectivity and heavy workload. However, most existing CNN-based methods deliver unsatisfactory segmentation performance. In this paper, we propose a novel boundary constraint network, BCNet, for accurate polyp segmentation. BCNet's success comes from integrating cross-level context information and leveraging edge information. Specifically, to avoid the drawbacks of simple feature addition or concatenation, BCNet applies a cross-layer feature integration strategy (CFIS) when fusing the features of the three highest layers, yielding better performance. CFIS consists of three attention-driven cross-layer feature interaction modules (ACFIMs) and two global feature integration modules (GFIMs). ACFIM adaptively fuses the context information of the three highest layers via a self-attention mechanism instead of direct addition or concatenation. GFIM integrates the fused information across layers under the guidance of global attention. To obtain accurate boundaries, BCNet introduces a bilateral boundary extraction module that collaboratively explores the polyp and non-polyp information of the shallow layer, based on high-level location information and boundary supervision. Through joint supervision of the polyp area and its boundary, BCNet produces more accurate polyp masks. Experimental results on three public datasets show that the proposed BCNet outperforms seven state-of-the-art methods in both effectiveness and generalization.
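The abstract contrasts attention-driven cross-layer fusion with plain feature addition or concatenation. A minimal NumPy sketch of that general idea is below: multi-scale feature maps are aligned to a common resolution, and each layer's contribution is weighted per channel by a softmax over globally pooled descriptors, rather than summed with equal weight. The function names and the pooling-based weighting scheme are illustrative assumptions for exposition only; the paper's actual ACFIM/GFIM modules use a different, learned self-attention design.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def upsample_nearest(feat, size):
    """Nearest-neighbour upsampling of a (C, H, W) feature map to (C, *size)."""
    _, h, w = feat.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return feat[:, rows[:, None], cols[None, :]]

def attention_fuse(feats):
    """Fuse multi-scale feature maps with attention weights instead of
    direct addition or concatenation (illustrative, not the paper's ACFIM).

    feats: list of (C, H_i, W_i) arrays from different backbone layers.
    Returns a single (C, H, W) fused map at the finest input resolution.
    """
    target = max(f.shape[1:] for f in feats)            # finest (H, W)
    aligned = [upsample_nearest(f, target) for f in feats]
    # One descriptor per layer: global average pooling over H and W.
    desc = np.stack([f.mean(axis=(1, 2)) for f in aligned])  # (L, C)
    # Per-channel attention across layers: weights sum to 1 per channel.
    weights = softmax(desc, axis=0)                          # (L, C)
    return sum(w[:, None, None] * f for w, f in zip(weights, aligned))
```

Compared with `f1 + f2 + f3` or channel concatenation, this lets the fusion emphasise whichever layer carries stronger evidence for each channel, which is the motivation the abstract gives for replacing direct addition/concatenation with attention-driven fusion.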
