Xu Dan, Fan Xiaopeng, Gao Wen
School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China.
Pengcheng Laboratory, Shenzhen 518052, China.
Entropy (Basel). 2023 May 23;25(6):836. doi: 10.3390/e25060836.
Color images have long served as important supplementary information for guiding the super-resolution of depth maps. However, how to quantitatively measure the guiding effect of color images on depth maps has remained a neglected issue. To address this problem, inspired by the excellent results recently achieved by generative adversarial networks in color image super-resolution, we propose a generative adversarial network framework for depth map super-resolution based on multiscale attention fusion. A hierarchical fusion attention module fuses color features and depth features at the same scale, effectively measuring the guiding effect of the color image on the depth map. Fusing the joint color-depth features across different scales balances the impact of features at each scale on the super-resolved depth map. The generator's loss function, composed of content loss, adversarial loss, and edge loss, helps restore sharper depth map edges. Experimental results on several types of benchmark depth map datasets show that the proposed multiscale attention fusion based depth map super-resolution framework achieves significant subjective and objective improvements over state-of-the-art algorithms, verifying the validity and generalization ability of the model.
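As a minimal sketch of how such same-scale fusion attention can be realized, the hypothetical PyTorch module below concatenates the color and depth features, predicts a per-pixel attention map in [0, 1], and uses it to weight the color guidance injected into the depth branch. All names, channel sizes, and layer choices are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FusionAttention(nn.Module):
    """Hypothetical same-scale color/depth fusion attention block.

    The learned attention map acts as a per-pixel weight that measures how
    strongly the color features should guide the depth features at this
    scale (an assumption for illustration, not the paper's exact design).
    """

    def __init__(self, channels: int):
        super().__init__()
        # Predict a [0, 1] guidance map from the concatenated features.
        self.attention = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, depth_feat: torch.Tensor, color_feat: torch.Tensor) -> torch.Tensor:
        # Attention weights computed from both modalities jointly.
        weight = self.attention(torch.cat([depth_feat, color_feat], dim=1))
        # Weighted injection of color guidance into the depth branch.
        return depth_feat + weight * color_feat
```

For same-shape feature maps of shape (B, 64, H, W), `fused = FusionAttention(64)(depth_feat, color_feat)` keeps the output in the depth branch's feature space, and the attention map itself can be read as a quantitative measure of the color guidance at that scale.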
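The abstract names the three generator loss terms; the sketch below combines them in PyTorch under assumed choices: an L1 content term, a non-saturating BCE adversarial term on the discriminator's logits, and an L1 edge term over Sobel gradient magnitudes. The weights `w_adv` and `w_edge` are illustrative placeholders, not the paper's reported values.

```python
import torch
import torch.nn.functional as F

# Sobel kernels for horizontal/vertical depth gradients, shape (1, 1, 3, 3).
SOBEL_X = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
SOBEL_Y = SOBEL_X.transpose(2, 3)

def edge_map(depth: torch.Tensor) -> torch.Tensor:
    """Gradient-magnitude edge map of a single-channel depth batch (B, 1, H, W)."""
    gx = F.conv2d(depth, SOBEL_X.to(depth.device), padding=1)
    gy = F.conv2d(depth, SOBEL_Y.to(depth.device), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def generator_loss(sr, hr, fake_logits, w_adv=1e-3, w_edge=0.1):
    """Content + adversarial + edge loss for the generator (assumed weights)."""
    content = F.l1_loss(sr, hr)                        # pixel-wise fidelity
    adversarial = F.binary_cross_entropy_with_logits(  # encourage fooling the discriminator
        fake_logits, torch.ones_like(fake_logits))
    edge = F.l1_loss(edge_map(sr), edge_map(hr))       # sharpen depth edges
    return content + w_adv * adversarial + w_edge * edge
```

Here `sr` is the super-resolved depth map, `hr` the ground truth, and `fake_logits` the discriminator's output on `sr`; the edge term directly penalizes blurred depth discontinuities, which is the stated purpose of the edge loss.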