
Joint-Feature Guided Depth Map Super-Resolution With Face Priors.

Publication

IEEE Trans Cybern. 2018 Jan;48(1):399-411. doi: 10.1109/TCYB.2016.2638856. Epub 2016 Dec 22.

Abstract

In this paper, we present a novel method to super-resolve and recover facial depth maps. The key idea is to exploit an exemplar-based method to obtain reliable face priors from high-quality facial depth maps to improve the depth image. Specifically, a new neighbor embedding (NE) framework is designed for face prior learning and depth map reconstruction. First, face components are decomposed to form specialized dictionaries and are then reconstructed, respectively. Joint features, i.e., low-level depth and intensity cues and high-level position cues, are put forward for robust patch similarity measurement. The NE results are used to obtain face priors of facial structures and smooth maps, which are then combined in a uniform optimization framework to recover high-quality facial depth maps. Finally, an edge enhancement process is applied to estimate the final high-resolution depth map. Experimental results demonstrate the superiority of our method over state-of-the-art depth map super-resolution techniques on both synthetic data and real-world data from Kinect.
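The NE step described above can be sketched as classic neighbor embedding: each low-resolution patch (described by a joint feature vector) is expressed as a constrained least-squares combination of its nearest dictionary atoms, and the same weights are transferred to the paired high-resolution patches. This is a minimal illustration of the general NE technique, not the paper's exact formulation; the function names, the k-NN search, and the regularization scheme are assumptions.

```python
import numpy as np

def ne_weights(query, neighbors, reg=1e-6):
    """Sum-to-one reconstruction weights for one query over its k neighbors.

    Solves the local Gram system (Chang et al.-style neighbor embedding);
    `reg` is an assumed Tikhonov term to keep the system well conditioned.
    """
    D = neighbors - query                      # (k, d) difference vectors
    G = D @ D.T                                # local Gram matrix
    G = G + reg * max(np.trace(G), 1.0) * np.eye(len(G))
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                         # enforce sum-to-one constraint

def ne_super_resolve(lr_feats, dict_lr, dict_hr, k=5):
    """Reconstruct HR patches by transferring NE weights from LR space.

    lr_feats: (n, d) joint feature vectors of the input LR patches
    dict_lr / dict_hr: paired (m, d) / (m, p) dictionary of exemplar patches
    """
    out = []
    for q in lr_feats:
        d2 = np.sum((dict_lr - q) ** 2, axis=1)    # squared distances
        idx = np.argsort(d2)[:k]                   # k nearest dictionary atoms
        w = ne_weights(q, dict_lr[idx])
        out.append(w @ dict_hr[idx])               # transfer weights to HR pairs
    return np.array(out)
```

In the paper's setting the feature vector would concatenate depth, intensity, and position cues (with component-specific dictionaries); here a single generic dictionary stands in for that machinery.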

