IEEE Trans Image Process. 2017 Sep;26(9):4204-4216. doi: 10.1109/TIP.2017.2711277.
This paper proposes a novel depth-aware salient object detection and segmentation framework via multiscale discriminative saliency fusion (MDSF) and bootstrap learning for RGBD images (RGB color images with corresponding depth maps) and stereoscopic images. By exploiting low-level feature contrasts, mid-level feature weighted factors, and high-level location priors, various saliency measures on four classes of features are calculated based on multiscale region segmentation. A random forest regressor is learned to perform discriminative saliency fusion (DSF) and generate the DSF saliency map at each scale, and the DSF saliency maps across multiple scales are combined to produce the MDSF saliency map. Furthermore, we propose an effective bootstrap learning-based salient object segmentation method, which is bootstrapped with samples drawn from the MDSF saliency map and learns multiple-kernel support vector machines. Experimental results on two large datasets show how the various categories of features contribute to saliency detection performance and demonstrate that the proposed framework achieves better performance in both saliency detection and salient object segmentation.
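The two learning stages described above — a per-scale random forest regressor for discriminative saliency fusion, followed by a cross-scale combination into the MDSF map — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the region features, targets, and scales are synthetic stand-ins, and the cross-scale combination is simplified to plain averaging.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def make_regions(n_regions, n_features=8):
    """Synthetic per-region feature vectors (stand-ins for the paper's
    contrast, weighted-factor, and location-prior features) with a toy
    ground-truth saliency score in [0, 1]."""
    X = rng.random((n_regions, n_features))
    y = np.clip(0.6 * X[:, 0] + 0.4 * X[:, 1], 0.0, 1.0)
    return X, y

# 1) Learn one DSF regressor per segmentation scale.
scales = [50, 100, 200]  # hypothetical region counts per scale
regressors = {}
for s in scales:
    X, y = make_regions(s)
    reg = RandomForestRegressor(n_estimators=50, random_state=0)
    reg.fit(X, y)
    regressors[s] = reg

# 2) Predict a DSF saliency value per region at each scale, then fuse
#    across scales (here: simple averaging, a stand-in for the paper's
#    combination step) to obtain the MDSF saliency values.
X_test, _ = make_regions(50)
per_scale = np.stack([regressors[s].predict(X_test) for s in scales])
mdsf = per_scale.mean(axis=0)
```

In the actual framework the regression targets come from ground-truth saliency annotations, and the fused MDSF map then supplies the positive and negative training samples that bootstrap the multiple-kernel SVM segmentation stage.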