Contour-Aware Loss: Boundary-Aware Learning for Salient Object Segmentation.

Author Information

Chen Zixuan, Zhou Huajun, Lai Jianhuang, Yang Lingxiao, Xie Xiaohua

Publication Information

IEEE Trans Image Process. 2021;30:431-443. doi: 10.1109/TIP.2020.3037536. Epub 2020 Nov 23.

Abstract

We present a learning model that makes full use of boundary information for salient object segmentation. Specifically, we propose a novel loss function, i.e., Contour Loss, which leverages object contours to guide models to perceive salient object boundaries. Such a boundary-aware network can learn boundary-wise distinctions between salient objects and background, hence effectively facilitating salient object segmentation. However, the Contour Loss emphasizes boundaries and thus captures contextual details only within a local range. We further propose the hierarchical global attention module (HGAM), which forces the model to attend to global contexts hierarchically and thus capture global visual saliency. Comprehensive experiments on six benchmark datasets show that our method achieves superior performance over state-of-the-art approaches. Moreover, our model runs in real time at 26 fps on a TITAN X GPU.
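
The abstract does not give the exact formulation of the Contour Loss, but the idea of up-weighting pixels near ground-truth object contours can be sketched as a boundary-weighted binary cross-entropy. The sketch below is a minimal illustration, assuming the contour band is extracted with a morphological gradient (dilation minus erosion, via max-pooling) on the ground-truth mask; the names `contour_weight_map`, `contour_loss`, `ksize`, and `lam` are illustrative, not from the paper.

```python
import torch
import torch.nn.functional as F

def contour_weight_map(gt, ksize=5, lam=5.0):
    """Build a per-pixel weight map that up-weights the contour band.

    The band is approximated by a morphological gradient on the
    ground-truth mask: dilation minus erosion, both implemented with
    max-pooling. `ksize` controls the band width and `lam` the extra
    weight on boundary pixels (both illustrative choices).
    """
    pad = ksize // 2
    dilated = F.max_pool2d(gt, ksize, stride=1, padding=pad)
    eroded = -F.max_pool2d(-gt, ksize, stride=1, padding=pad)
    contour = dilated - eroded  # ~1 near object boundaries, ~0 elsewhere
    return 1.0 + lam * contour


def contour_loss(pred, gt, ksize=5, lam=5.0):
    """Boundary-weighted binary cross-entropy.

    `pred` holds sigmoid probabilities and `gt` the binary ground truth,
    both shaped (N, 1, H, W). Pixels inside the contour band contribute
    more to the loss than interior or background pixels.
    """
    w = contour_weight_map(gt, ksize, lam)
    bce = F.binary_cross_entropy(pred, gt, reduction="none")
    return (w * bce).sum() / w.sum()


if __name__ == "__main__":
    # Toy check with random predictions and a random binary mask.
    pred = torch.rand(2, 1, 128, 128)
    gt = (torch.rand(2, 1, 128, 128) > 0.5).float()
    print(contour_loss(pred, gt).item())
```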

