Wang Haotian, Yang Meng, Lan Xuguang, Zhu Ce, Zheng Nanning
IEEE Trans Image Process. 2022;31:7020-7035. doi: 10.1109/TIP.2022.3216768. Epub 2022 Nov 14.
Depth maps acquired by either physical sensors or learning methods are often seriously degraded by boundary distortion problems, including missing, fake, and misaligned boundaries (compared with the corresponding RGB images). This paper proposes an RGB-guided depth map recovery method to recover the true boundaries in seriously distorted depth maps. To this end, a unified model is first developed to detect all of these kinds of distorted boundaries in depth maps. Detecting distorted boundaries is equivalent to identifying erroneous regions in the distorted depth map, because depth boundaries are essentially formed by contiguous regions with different intensities. Erroneous regions are then identified by separately extracting the local structures of the RGB image and the depth map with Gaussian kernels and comparing their similarity on the basis of the SSIM index. A depth map recovery method is then built on this unified model: it recovers true depth boundaries by iteratively identifying and correcting erroneous regions in the recovered depth map using the unified model and a weighted median filter. Because the RGB image generally contains additional textural content that the depth map lacks, the proposed method further addresses the texture-copy artifact problem by restricting the model to work only around depth boundaries in each iteration. Extensive experiments are conducted on five RGB-depth datasets, covering depth map recovery, depth super-resolution, depth estimation enhancement, and depth completion enhancement. The results demonstrate that the proposed method considerably improves both the quantitative and visual quality of recovered depth maps in comparison with fifteen competitive methods. Most object boundaries in the recovered depth maps are corrected accurately and kept sharp and well aligned with those in the RGB images.
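The two core steps the abstract describes (flagging erroneous regions via a Gaussian-windowed SSIM comparison between the RGB image and the depth map, then correcting flagged pixels with an RGB-weighted median filter) could be sketched roughly as below. This is a minimal illustration assuming grayscale float images in [0, 1]; the kernel size, the Gaussian sigmas, and the 0.6 similarity threshold are illustrative choices, not parameters from the paper, and the paper's full iterative, boundary-restricted model is not reproduced here.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gaussian_kernel(size=7, sigma=1.5):
    """Normalized 2-D Gaussian kernel used to extract local structure."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def _filter(img, kernel):
    """2-D Gaussian-weighted local average with reflect padding."""
    pad = kernel.shape[0] // 2
    win = sliding_window_view(np.pad(img, pad, mode="reflect"), kernel.shape)
    return np.einsum("ijkl,kl->ij", win, kernel)

def erroneous_region_mask(rgb_gray, depth, size=7, sigma=1.5, thresh=0.6):
    """Per-pixel SSIM between the local structures of the RGB image and the
    depth map; low-similarity pixels are flagged as candidate erroneous
    regions. thresh=0.6 is an illustrative value, not the paper's."""
    k = gaussian_kernel(size, sigma)
    C1, C2 = 0.01 ** 2, 0.03 ** 2              # standard SSIM constants, L = 1
    mu_x, mu_y = _filter(rgb_gray, k), _filter(depth, k)
    var_x = _filter(rgb_gray ** 2, k) - mu_x ** 2
    var_y = _filter(depth ** 2, k) - mu_y ** 2
    cov = _filter(rgb_gray * depth, k) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))
    return ssim, ssim < thresh

def weighted_median_correct(depth, rgb_gray, mask, size=7, sigma_s=2.0, sigma_r=0.1):
    """Replace each flagged depth pixel with the weighted median of its
    neighborhood, with bilateral-style weights (spatial distance and RGB
    affinity) so replacement values come from the correct side of the edge."""
    pad = size // 2
    dp = np.pad(depth, pad, mode="reflect")
    rp = np.pad(rgb_gray, pad, mode="reflect")
    ax = np.arange(size) - pad
    spatial = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma_s ** 2))
    out = depth.copy()
    for y, x in zip(*np.nonzero(mask)):
        dwin = dp[y:y + size, x:x + size].ravel()
        rwin = rp[y:y + size, x:x + size]
        w = (spatial * np.exp(-((rwin - rgb_gray[y, x]) ** 2)
                              / (2 * sigma_r ** 2))).ravel()
        order = np.argsort(dwin)               # weighted median: sort depths,
        cw = np.cumsum(w[order])               # take value at half total weight
        out[y, x] = dwin[order[np.searchsorted(cw, cw[-1] / 2)]]
    return out
```

For example, on a synthetic scene where the RGB image and the depth map share one step edge but the depth map carries a spurious bright blob, the SSIM map drops near the blob, the mask flags it, and the weighted median pass restores the original flat depth there; pixels where the two modalities agree are left untouched.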