Chen Chenglizhao, Wei Jipeng, Peng Chong, Qin Hong
IEEE Trans Image Process. 2021;30:2350-2363. doi: 10.1109/TIP.2021.3052069. Epub 2021 Jan 27.
Existing fusion-based RGB-D salient object detection methods usually adopt a bistream structure to balance the fusion between RGB and depth (D). Although depth quality varies considerably across scenes, the state-of-the-art bistream approaches are depth-quality-unaware, which makes a complementary fusion status between RGB and D difficult to achieve and yields poor fusion results on low-quality D. This paper therefore integrates a novel depth-quality-aware subnet into the classic bistream structure to assess depth quality before conducting the selective RGB-D fusion. Compared to the state-of-the-art bistream methods, the major advantage of our method is its ability to lessen the importance of low-quality, no-contribution, or even negative-contribution D regions during RGB-D fusion, achieving a much improved complementary status between RGB and D. Our source code and data are available online at https://github.com/qdu1995/DQSD.
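The core idea of weighting depth by an estimated per-region quality score before fusing it with RGB can be sketched as follows. This is a minimal illustration of quality-gated fusion, not the paper's actual DQSD network: the function name, the additive fusion rule, and the toy feature maps are all assumptions introduced here for clarity (the real method learns the quality map with a subnet and fuses features inside a deep bistream architecture).

```python
import numpy as np

def quality_gated_fusion(rgb_feat, depth_feat, quality_map):
    """Hypothetical gating step: scale depth features by a per-region
    quality score in [0, 1] before adding them to the RGB stream.
    A score near 0 suppresses unreliable (low-quality) depth regions."""
    q = np.clip(quality_map, 0.0, 1.0)
    return rgb_feat + q * depth_feat

# Toy 2x2 feature maps: uniform RGB and depth responses,
# with one depth region judged unreliable (quality 0).
rgb = np.ones((2, 2))
depth = np.full((2, 2), 4.0)
q = np.array([[1.0, 0.0],
              [0.5, 1.0]])   # 0 => depth ignored in that region
fused = quality_gated_fusion(rgb, depth, q)
```

In the zero-quality region the fused response falls back to the RGB value alone, while high-quality regions receive the full depth contribution; this is the complementary behavior the abstract describes.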