

Depth Map Super-Resolution Considering View Synthesis Quality

Publication Info

IEEE Trans Image Process. 2017 Apr;26(4):1732-1745. doi: 10.1109/TIP.2017.2656463. Epub 2017 Jan 20.

DOI: 10.1109/TIP.2017.2656463
PMID: 28113341
Abstract

Accurate, high-quality depth maps are required in many 3D applications, such as multi-view rendering, 3D reconstruction, and 3DTV. However, the resolution of a captured depth image is much lower than that of its corresponding color image, which limits its usefulness in these applications. In this paper, we propose a novel depth map super-resolution (SR) method that takes view synthesis quality into account. The proposed approach makes two main technical contributions. First, since the captured low-resolution (LR) depth map may be corrupted by noise and occlusion, we propose a credibility-based multi-view depth map fusion strategy, which considers view synthesis quality and inter-view correlation, to refine the LR depth map. Second, we propose a view-synthesis-quality-based trilateral depth-map up-sampling method, whose up-sampling filter considers depth smoothness, texture similarity, and view synthesis quality. Experimental results demonstrate that the proposed method outperforms state-of-the-art depth SR methods on both the super-resolved depth maps and the synthesized views. Furthermore, the proposed method is robust to noise and achieves promising results under noise-corrupted conditions.

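The trilateral up-sampling filter described in the abstract weights each neighboring low-resolution depth sample by three factors: spatial distance (depth smoothness), color similarity in the high-resolution image (texture similarity), and view synthesis quality. The sketch below illustrates that idea with three Gaussian weight terms; the paper's exact kernel, parameters, and quality measure differ, so the `quality_hr` map, the sigma values, and all function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def trilateral_upsample(depth_lr, color_hr, quality_hr, scale,
                        sigma_s=2.0, sigma_c=10.0, sigma_q=0.1, radius=3):
    """Illustrative trilateral up-sampling: weights combine spatial
    distance, color similarity, and a per-pixel view-synthesis-quality
    map (a stand-in for the paper's quality term)."""
    h, w = color_hr.shape[:2]
    # Nearest-neighbor initial up-sampling of the LR depth map.
    depth_init = np.kron(depth_lr, np.ones((scale, scale)))[:h, :w]
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Spatial (smoothness) term.
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            # Color-similarity (texture) term from the HR color image.
            dc = color_hr[y0:y1, x0:x1].astype(float) - color_hr[y, x]
            w_c = np.exp(-(dc ** 2).sum(axis=-1) / (2 * sigma_c ** 2))
            # View-synthesis-quality term (assumed per-pixel map).
            dq = quality_hr[y0:y1, x0:x1] - quality_hr[y, x]
            w_q = np.exp(-dq ** 2 / (2 * sigma_q ** 2))
            wgt = w_s * w_c * w_q
            out[y, x] = (wgt * depth_init[y0:y1, x0:x1]).sum() / wgt.sum()
    return out
```

With all three terms set to uniform inputs the filter reduces to a plain smoothing average, so a constant depth map passes through unchanged; edges in the color or quality maps shrink the corresponding weights and keep depth discontinuities sharp.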

Similar Articles

1. Depth Map Super-Resolution Considering View Synthesis Quality.
IEEE Trans Image Process. 2017 Apr;26(4):1732-1745. doi: 10.1109/TIP.2017.2656463. Epub 2017 Jan 20.
2. Depth image super-resolution reconstruction based on a modified joint trilateral filter.
R Soc Open Sci. 2019 Jan 30;6(1):181074. doi: 10.1098/rsos.181074. eCollection 2019 Jan.
3. NIQSV+: A No-Reference Synthesized View Quality Assessment Metric.
IEEE Trans Image Process. 2018 Apr;27(4):1652-1664. doi: 10.1109/TIP.2017.2781420.
4. Cross-View Multi-Lateral Filter for Compressed Multi-View Depth Video.
IEEE Trans Image Process. 2019 Jan;28(1):302-315. doi: 10.1109/TIP.2018.2867740. Epub 2018 Aug 29.
5. Depth Completion and Super-Resolution with Arbitrary Scale Factors for Indoor Scenes.
Sensors (Basel). 2021 Jul 18;21(14):4892. doi: 10.3390/s21144892.
6. Edge-Preserving Depth Map Upsampling by Joint Trilateral Filter.
IEEE Trans Cybern. 2018 Jan;48(1):371-384. doi: 10.1109/TCYB.2016.2637661. Epub 2017 Jan 24.
7. Edge-Guided Single Depth Image Super Resolution.
IEEE Trans Image Process. 2016 Jan;25(1):428-38. doi: 10.1109/TIP.2015.2501749. Epub 2015 Nov 20.
8. Multi-Image Blind Super-Resolution of 3D Scenes.
IEEE Trans Image Process. 2017 Nov;26(11):5337-5352. doi: 10.1109/TIP.2017.2723243. Epub 2017 Jul 4.
9. Rate-constrained 3D surface estimation from noise-corrupted multiview depth videos.
IEEE Trans Image Process. 2014 Jul;23(7):3138-51. doi: 10.1109/TIP.2014.2326413.
10. Fully Cross-Attention Transformer for Guided Depth Super-Resolution.
Sensors (Basel). 2023 Mar 2;23(5):2723. doi: 10.3390/s23052723.

Cited By

1. Moving Object Detection Based on Fusion of Depth Information and RGB Features.
Sensors (Basel). 2022 Jun 22;22(13):4702. doi: 10.3390/s22134702.
2. Double-Constraint Inpainting Model of a Single-Depth Image.
Sensors (Basel). 2020 Mar 24;20(6):1797. doi: 10.3390/s20061797.