Yang Ziyun, Soltanian-Zadeh Somayyeh, Farsiu Sina
Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA.
Department of Ophthalmology, Duke University Medical Center, Durham, NC 27710, USA.
Pattern Recognit. 2022 Jan;121. doi: 10.1016/j.patcog.2021.108231. Epub 2021 Aug 13.
Traditional deep learning-based methods treat salient object detection (SOD) as a pixel-wise saliency modeling task. A limitation of current SOD models is their insufficient use of inter-pixel information, which often results in imperfect segmentation near edge regions and low spatial coherence. As we demonstrate, using a saliency mask as the only label is suboptimal. To address this limitation, we propose a connectivity-based approach, the bilateral connectivity network (BiconNet), which uses connectivity masks together with saliency masks as labels to effectively model inter-pixel relationships and object saliency. Moreover, we propose a bilateral voting module to enhance the output connectivity map, and a novel edge feature enhancement method that efficiently exploits edge-specific features. Through comprehensive experiments on five benchmark datasets, we demonstrate that our proposed method can be plugged into any existing state-of-the-art saliency-based SOD framework to improve its performance with a negligible increase in parameters.
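The two core ideas in the abstract, deriving connectivity labels from a binary saliency mask and reconciling directional predictions by bilateral voting, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the neighbor ordering, the product-based voting rule, and the function names are assumptions made here for clarity.

```python
# 8 neighbor offsets (dy, dx); index k and index 7-k are opposite directions.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def connectivity_mask(sal):
    """Turn a binary saliency mask (2D list of 0/1) into 8 connectivity maps:
    con[k][y][x] = 1 iff the pixel and its k-th neighbor are both salient."""
    h, w = len(sal), len(sal[0])
    con = [[[0] * w for _ in range(h)] for _ in OFFSETS]
    for k, (dy, dx) in enumerate(OFFSETS):
        for y in range(h):
            for x in range(w):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    con[k][y][x] = sal[y][x] * sal[ny][nx]
    return con

def bilateral_vote(con):
    """Assumed voting rule: multiply each directional prediction by the
    neighbor's prediction in the opposite direction, so only mutually
    agreed connections survive in the enhanced connectivity map."""
    h, w = len(con[0]), len(con[0][0])
    out = [[[0.0] * w for _ in range(h)] for _ in OFFSETS]
    for k, (dy, dx) in enumerate(OFFSETS):
        kr = 7 - k  # index of the opposite direction under OFFSETS ordering
        for y in range(h):
            for x in range(w):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    out[k][y][x] = con[k][y][x] * con[kr][ny][nx]
    return out
```

On ground-truth masks the voting step is a no-op (both directions already agree); on soft network predictions the product suppresses one-sided, spatially incoherent connections, which is the intuition behind enhancing the output connectivity map.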