Tan Daniel Stanley, Lin Jun-Ming, Lai Yu-Chi, Ilao Joel, Hua Kai-Lung
Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei 10607, Taiwan.
Center for Automation Research, College of Computer Studies, De La Salle University, Manila 1004, Philippines.
Sensors (Basel). 2019 Apr 2;19(7):1587. doi: 10.3390/s19071587.
Autonomous robots for smart homes and smart cities mostly require depth perception in order to interact with their environments. However, due to inherent sensor limitations, depth maps are usually captured at a lower resolution than RGB color images. Naively upsampling a depth map often leads to loss of sharpness and incorrect estimates, especially in regions with depth discontinuities (depth boundaries). In this paper, we propose a novel Generative Adversarial Network (GAN)-based framework for depth map super-resolution that preserves the smooth areas as well as the sharp edges at the boundaries of the depth map. Our proposed model is trained on two different modalities, namely color images and depth maps; at test time, however, it requires only the depth map to produce a higher-resolution version. We evaluated our model both quantitatively and qualitatively, and our experiments show that our method outperforms existing state-of-the-art models.
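The "loss of sharpness" that naive upsampling causes can be seen in a toy example (illustrative only, not from the paper): bilinearly interpolating a depth map across a discontinuity invents intermediate depths that belong to neither surface, the so-called flying pixels at depth boundaries.

```python
import numpy as np

# Toy low-resolution depth map: a near object (1.0 m) beside a far wall (4.0 m),
# separated by a sharp depth boundary.
lr = np.array([[1.0, 1.0, 4.0, 4.0],
               [1.0, 1.0, 4.0, 4.0]])

def bilinear_upsample_row(row, scale=2):
    """Linearly interpolate one row to `scale`x its width (align-corners style)."""
    x_hr = np.linspace(0, len(row) - 1, len(row) * scale)
    return np.interp(x_hr, np.arange(len(row)), row)

hr = np.stack([bilinear_upsample_row(r) for r in lr])

# Interpolation produces depths strictly between 1.0 m and 4.0 m at the
# boundary -- values that correspond to no real surface in the scene.
invented = hr[(hr > 1.0 + 1e-6) & (hr < 4.0 - 1e-6)]
print(sorted(set(np.round(invented, 2))))  # → [1.86, 3.14]
```

Edge-aware methods such as the GAN framework described here aim to avoid exactly these spurious intermediate depths while still smoothing the flat regions.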