
Depth Map Upsampling via Multi-Modal Generative Adversarial Network.

Authors

Tan Daniel Stanley, Lin Jun-Ming, Lai Yu-Chi, Ilao Joel, Hua Kai-Lung

Affiliations

Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei 10607, Taiwan.

Center for Automation Research, College of Computer Studies, De La Salle University, Manila 1004, Philippines.

Publication

Sensors (Basel). 2019 Apr 2;19(7):1587. doi: 10.3390/s19071587.

Abstract

Autonomous robots for smart homes and smart cities mostly require depth perception in order to interact with their environments. However, depth maps are usually captured at a lower resolution than RGB color images due to the inherent limitations of the sensors. Naively increasing their resolution often leads to a loss of sharpness and incorrect estimates, especially in regions with depth discontinuities or depth boundaries. In this paper, we propose a novel Generative Adversarial Network (GAN)-based framework for depth map super-resolution that is able to preserve the smooth areas, as well as the sharp edges at the boundaries of the depth map. Our proposed model is trained on two different modalities, namely color images and depth maps. However, at test time, our model only requires the depth map in order to produce a higher-resolution version. We evaluated our model both quantitatively and qualitatively, and our experiments show that our method performs better than existing state-of-the-art models.
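
The abstract's motivating claim, that naive upsampling produces incorrect depth estimates at boundaries, can be illustrated with a small NumPy sketch. This is not the paper's method; it is a minimal, self-contained example (with a hypothetical synthetic depth map) showing how plain bilinear interpolation invents intermediate depths at an object boundary, whereas nearest-neighbor replication keeps only the true depths but produces blocky edges.

```python
import numpy as np

# Hypothetical 4x4 low-resolution depth map: a near object (1.0 m)
# next to a far wall (3.0 m), separated by a sharp depth boundary.
low_res = np.array([
    [1.0, 1.0, 3.0, 3.0],
    [1.0, 1.0, 3.0, 3.0],
    [1.0, 1.0, 3.0, 3.0],
    [1.0, 1.0, 3.0, 3.0],
])

def upsample_nearest(d, factor):
    # Replicate each pixel: preserves the true depth values,
    # but edges become blocky at high factors.
    return np.repeat(np.repeat(d, factor, axis=0), factor, axis=1)

def upsample_bilinear(d, factor):
    # Separable linear interpolation on the pixel grid.
    h, w = d.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = d[np.ix_(y0, x0)] * (1 - wx) + d[np.ix_(y0, x1)] * wx
    bot = d[np.ix_(y1, x0)] * (1 - wx) + d[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

up_n = upsample_nearest(low_res, 2)
up_b = upsample_bilinear(low_res, 2)

# Nearest keeps only the two real depths {1.0, 3.0}; bilinear smears
# the boundary, producing depths between 1 m and 3 m where no
# physical surface exists ("flying pixels").
print(sorted(set(np.round(up_n, 2).ravel())))
print(sorted(set(np.round(up_b, 2).ravel())))
```

The spurious intermediate depths are exactly the kind of boundary error the proposed GAN framework is designed to avoid while still keeping smooth regions smooth.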

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/2f140a4c9b45/sensors-19-01587-g001.jpg
