
Depth Map Upsampling via Multi-Modal Generative Adversarial Network.

Authors

Tan Daniel Stanley, Lin Jun-Ming, Lai Yu-Chi, Ilao Joel, Hua Kai-Lung

Affiliations

Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taipei 10607, Taiwan.

Center for Automation Research, College of Computer Studies, De La Salle University, Manila 1004, Philippines.

Publication

Sensors (Basel). 2019 Apr 2;19(7):1587. doi: 10.3390/s19071587.

DOI: 10.3390/s19071587
PMID: 30986925
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6480680/
Abstract

Autonomous robots for smart homes and smart cities mostly require depth perception in order to interact with their environments. However, depth maps are usually captured in a lower resolution as compared to RGB color images due to the inherent limitations of the sensors. Naively increasing its resolution often leads to loss of sharpness and incorrect estimates, especially in the regions with depth discontinuities or depth boundaries. In this paper, we propose a novel Generative Adversarial Network (GAN)-based framework for depth map super-resolution that is able to preserve the smooth areas, as well as the sharp edges at the boundaries of the depth map. Our proposed model is trained on two different modalities, namely color images and depth maps. However, at test time, our model only requires the depth map in order to produce a higher resolution version. We evaluated our model both quantitatively and qualitatively, and our experiments show that our method performs better than existing state-of-the-art models.
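The abstract notes that naively increasing a depth map's resolution loses sharpness at depth discontinuities. The following minimal sketch (an illustration of that failure mode, not the paper's GAN method) shows why on a 1-D depth scanline: linear interpolation across a depth edge invents intermediate depths that belong to neither surface, while nearest-neighbor stays blocky but edge-preserving. All function names here are hypothetical helpers for illustration.

```python
# Minimal illustration (NOT the paper's method): naive linear
# interpolation across a depth discontinuity invents "flying pixel"
# depths that lie on neither the near nor the far surface.

def upsample_linear(row, factor):
    """Linearly interpolate a 1-D depth scanline by an integer factor."""
    out = []
    for i in range(len(row) - 1):
        for k in range(factor):
            t = k / factor
            out.append(row[i] * (1 - t) + row[i + 1] * t)
    out.append(row[-1])
    return out

def upsample_nearest(row, factor):
    """Nearest-neighbor upsampling: blocky, but it preserves the edge."""
    return [row[min(round(i / factor), len(row) - 1)]
            for i in range(factor * (len(row) - 1) + 1)]

# A scanline crossing a depth edge: near object at 1 m, far wall at 5 m.
scanline = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]

linear = upsample_linear(scanline, 2)    # contains 3.0 at the boundary
nearest = upsample_nearest(scanline, 2)  # contains only 1.0 and 5.0

print(linear)
print(nearest)
```

The 3.0 m value produced by linear interpolation corresponds to a point floating in free space between the two surfaces; avoiding exactly this kind of artifact while keeping smooth regions smooth is the goal the paper's GAN framework addresses.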


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/2f140a4c9b45/sensors-19-01587-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/401e4551e230/sensors-19-01587-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/97940e82742a/sensors-19-01587-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/d13dc1278d50/sensors-19-01587-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/41f8e007ded5/sensors-19-01587-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/784f314eab5c/sensors-19-01587-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/e84bdef11fed/sensors-19-01587-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/c8bad190e573/sensors-19-01587-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/9df78602ba1a/sensors-19-01587-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/27f886d8ae93/sensors-19-01587-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/951b7886c399/sensors-19-01587-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/8e6889be62ec/sensors-19-01587-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/ef1e3b9abb25/sensors-19-01587-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/88bb/6480680/a22ce5062e5f/sensors-19-01587-g014.jpg

Similar Articles

1
Depth Map Upsampling via Multi-Modal Generative Adversarial Network.
Sensors (Basel). 2019 Apr 2;19(7):1587. doi: 10.3390/s19071587.
2
Multiscale Attention Fusion for Depth Map Super-Resolution Generative Adversarial Networks.
Entropy (Basel). 2023 May 23;25(6):836. doi: 10.3390/e25060836.
3
Single-Image Depth Inference Using Generative Adversarial Networks.
Sensors (Basel). 2019 Apr 10;19(7):1708. doi: 10.3390/s19071708.
4
Generative adversarial networks with decoder-encoder output noises.
Neural Netw. 2020 Jul;127:19-28. doi: 10.1016/j.neunet.2020.04.005. Epub 2020 Apr 9.
5
Robust Color Guided Depth Map Restoration.
IEEE Trans Image Process. 2017 Jan;26(1):315-327. doi: 10.1109/TIP.2016.2612826.
6
A consensus-driven approach for structure and texture aware depth map upsampling.
IEEE Trans Image Process. 2014 Aug;23(8):3321-35. doi: 10.1109/TIP.2014.2329766. Epub 2014 Jun 9.
7
High-quality depth map upsampling and completion for RGB-D cameras.
IEEE Trans Image Process. 2014 Dec;23(12):5559-72. doi: 10.1109/TIP.2014.2361034.
8
Edge-Preserving Depth Map Upsampling by Joint Trilateral Filter.
IEEE Trans Cybern. 2018 Jan;48(1):371-384. doi: 10.1109/TCYB.2016.2637661. Epub 2017 Jan 24.
9
StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks.
IEEE Trans Pattern Anal Mach Intell. 2019 Aug;41(8):1947-1962. doi: 10.1109/TPAMI.2018.2856256. Epub 2018 Jul 16.
10
SCH-GAN: Semi-Supervised Cross-Modal Hashing by Generative Adversarial Network.
IEEE Trans Cybern. 2020 Feb;50(2):489-502. doi: 10.1109/TCYB.2018.2868826. Epub 2018 Sep 26.

Cited By

1
Improving Multi-Agent Generative Adversarial Nets with Variational Latent Representation.
Entropy (Basel). 2020 Sep 21;22(9):1055. doi: 10.3390/e22091055.
2
Presentation Attack Face Image Generation Based on a Deep Generative Adversarial Network.
Sensors (Basel). 2020 Mar 25;20(7):1810. doi: 10.3390/s20071810.
3
The Novel Sensor Network Structure for Classification Processing Based on the Machine Learning Method of the ACGAN.

References

1
Edge-Preserving Depth Map Upsampling by Joint Trilateral Filter.
IEEE Trans Cybern. 2018 Jan;48(1):371-384. doi: 10.1109/TCYB.2016.2637661. Epub 2017 Jan 24.
2
A consensus-driven approach for structure and texture aware depth map upsampling.
IEEE Trans Image Process. 2014 Aug;23(8):3321-35. doi: 10.1109/TIP.2014.2329766. Epub 2014 Jun 9.
3
Depth video enhancement based on weighted mode filtering.
IEEE Trans Image Process. 2012 Mar;21(3):1176-90. doi: 10.1109/TIP.2011.2163164. Epub 2011 Jul 29.
4
Sensors (Basel). 2019 Jul 17;19(14):3145. doi: 10.3390/s19143145.