HeadFusion: 360 Head Pose Tracking Combining 3D Morphable Model and 3D Reconstruction.

Publication Info

IEEE Trans Pattern Anal Mach Intell. 2018 Nov;40(11):2653-2667. doi: 10.1109/TPAMI.2018.2841403. Epub 2018 May 29.

Abstract

Head pose estimation is a fundamental task for face- and social-interaction-related research. Although 3D morphable model (3DMM) based methods relying on depth information usually achieve accurate results, they typically require frontal or mid-profile poses, which precludes a large set of applications where such conditions cannot be guaranteed, such as monitoring natural interactions from fixed sensors placed in the environment. A major reason is that 3DMM models usually cover only the face region. In this paper, we present a framework that combines the strengths of a 3DMM fitted online with a prior-free reconstruction of a 3D full head model, providing support for pose estimation from any viewpoint. In addition, we propose a symmetry regularizer for accurate 3DMM fitting under partial observations, and exploit visual tracking to address natural head dynamics with fast accelerations. Extensive experiments show that our method achieves state-of-the-art performance on the public BIWI dataset, as well as accurate and robust results on UbiPose, an annotated dataset of natural interactions that we make public, in which adverse poses, occlusions, or fast motions regularly occur.
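The symmetry regularizer mentioned in the abstract exploits the bilateral symmetry of the human head: when only one side of the face is observed, the unobserved side can be constrained to mirror the observed one. The paper does not publish its exact formulation here, so the function below is only an illustrative sketch of the general idea, penalizing asymmetry between mirrored vertex pairs of a fitted mesh; the function name, the `mirror_pairs` input, and the choice of a mean squared penalty are assumptions, not the authors' implementation.

```python
import numpy as np

def symmetry_regularizer(vertices, mirror_pairs, axis=0):
    """Illustrative bilateral-symmetry penalty for a fitted 3D head mesh.

    vertices:     (N, 3) array of fitted 3DMM vertex positions, assumed
                  expressed in a head-centered frame whose symmetry plane
                  is orthogonal to `axis` (x by default).
    mirror_pairs: (M, 2) integer array; each row holds the indices of a
                  left/right vertex pair that should mirror each other.
    Returns the mean squared distance between each left vertex, reflected
    across the symmetry plane, and its right counterpart (0 = perfectly
    symmetric).
    """
    left = vertices[mirror_pairs[:, 0]].copy()
    right = vertices[mirror_pairs[:, 1]]
    left[:, axis] = -left[:, axis]  # reflect left side across the symmetry plane
    return float(np.mean(np.sum((left - right) ** 2, axis=1)))
```

In an optimization-based fitting pipeline, a term like this would be added (with a weight) to the data term, so that occluded or unobserved regions of the mesh remain anatomically plausible.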

