
UNOC: Understanding Occlusion for Embodied Presence in Virtual Reality.

Publication

IEEE Trans Vis Comput Graph. 2022 Dec;28(12):4240-4251. doi: 10.1109/TVCG.2021.3085407. Epub 2022 Oct 26.

Abstract

Tracking body and hand motions in 3D space is essential for social and self-presence in augmented and virtual environments. Unlike the popular 3D pose estimation setting, the problem is often formulated as egocentric tracking based on embodied perception (e.g., egocentric cameras, handheld sensors). In this article, we propose a new data-driven framework for egocentric body tracking, targeting the challenge of omnipresent occlusions in optimization-based methods (e.g., inverse kinematics solvers). We first collect a large-scale motion capture dataset with both body and finger motions using optical markers and inertial sensors. This dataset focuses on social scenarios and captures ground-truth poses under self-occlusions and body-hand interactions. We then simulate the occlusion patterns of head-mounted camera views on the captured ground truth using a ray casting algorithm and learn a deep neural network to infer the occluded body parts. Our experiments show that the proposed method generates high-fidelity embodied poses when applied to real-time egocentric body tracking, finger motion synthesis, and 3-point inverse kinematics.
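The occlusion-simulation step described above can be illustrated with a minimal sketch. This is not the paper's implementation: it simply casts a ray from a head-mounted camera position to each body joint and tests it against sphere proxies standing in for body segments, marking a joint occluded when any proxy blocks the line of sight. All names and the sphere-proxy simplification are assumptions for illustration.

```python
# Minimal ray-casting occlusion sketch (hypothetical, not the paper's code):
# a joint is "occluded" if the camera-to-joint ray hits a body proxy sphere
# strictly before reaching the joint.

import math


def ray_sphere_blocked(origin, target, center, radius):
    """True if the segment origin->target intersects the sphere
    (center, radius) strictly before reaching the target."""
    # Direction and length of the ray from camera to joint.
    d = [t - o for t, o in zip(target, origin)]
    seg_len = math.sqrt(sum(c * c for c in d))
    d = [c / seg_len for c in d]
    # Standard quadratic ray-sphere test with a unit direction.
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(dc * occ for dc, occ in zip(d, oc))
    c_val = sum(x * x for x in oc) - radius * radius
    disc = b * b - c_val
    if disc < 0:
        return False  # ray misses the sphere entirely
    t_hit = -b - math.sqrt(disc)
    # Occluded only if the first hit lies between camera and joint.
    return 1e-6 < t_hit < seg_len - 1e-6


def occlusion_mask(camera, joints, body_spheres):
    """Per-joint visibility mask: True means the joint is occluded."""
    return [
        any(ray_sphere_blocked(camera, j, c, r) for c, r in body_spheres)
        for j in joints
    ]
```

In the paper's pipeline, a mask like this (computed against the real body geometry rather than sphere proxies) would label which ground-truth joints the head-mounted camera cannot see, giving the network supervised examples of occluded inputs.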

