
A Model-Based System for Real-Time Articulated Hand Tracking Using a Simple Data Glove and a Depth Camera.

Affiliations

Beijing Key Laboratory of Network System Architecture and Convergence, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China.

Beijing Laboratory of Advanced Information Networks, Beijing 100876, China.

Publication Information

Sensors (Basel). 2019 Oct 28;19(21):4680. doi: 10.3390/s19214680.

Abstract

Tracking detailed hand motion is a fundamental research topic in the area of human-computer interaction (HCI) and has been widely studied for decades. Existing solutions with single-modal inputs either require tedious calibration, are expensive, or lack sufficient robustness and accuracy due to occlusions. In this study, we present a real-time system that reconstructs the exact hand motion by iteratively fitting a triangular mesh model to the absolute measurement of the hand from a depth camera, under the robust constraint of a simple data glove. We redefine and simplify the function of the data glove to mitigate its limitations, i.e., tedious calibration, cumbersome equipment, and hampered movement, and to keep our system lightweight. For accurate hand tracking, we introduce a new set of degrees of freedom (DoFs), a shape adjustment term for personalizing the triangular mesh model, and an adaptive collision term to prevent self-intersection. For efficiency, we extract a strong pose-space prior from the data glove to narrow the pose search space. We also present a simplified approach for computing tracking correspondences without loss of accuracy, which reduces computation cost. Quantitative experiments show that our system achieves accuracy comparable to or better than the state of the art, with about a 40% improvement in robustness. Moreover, our system runs independently of the Graphics Processing Unit (GPU) and reaches 40 frames per second (FPS) at about 25% Central Processing Unit (CPU) usage.
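
The abstract describes an iterative model-fitting pipeline that balances a depth-data term, a glove-derived pose prior, and a collision penalty. The sketch below illustrates that general idea only, not the authors' implementation: a planar three-joint "finger" stands in for the triangular hand mesh, the data term uses a single fingertip position instead of a full point cloud, and the weights, finite-difference gradient descent, and hyperextension-style collision penalty are all illustrative assumptions.

```python
import numpy as np

def fingertip(theta, link_len=1.0):
    """Forward kinematics: fingertip (x, y) of a planar 3-joint chain."""
    angles = np.cumsum(theta)
    return np.array([np.sum(link_len * np.cos(angles)),
                     np.sum(link_len * np.sin(angles))])

def energy(theta, depth_tip, glove_theta, w_prior=0.3, w_coll=5.0):
    """Combined objective: depth data term + glove pose prior + collision-style penalty."""
    e_data = np.sum((fingertip(theta) - depth_tip) ** 2)    # fit the depth observation
    e_prior = w_prior * np.sum((theta - glove_theta) ** 2)  # stay near the glove pose
    e_coll = w_coll * np.sum(np.minimum(theta, 0.0) ** 2)   # crude self-intersection proxy
    return e_data + e_prior + e_coll

def fit(depth_tip, glove_theta, n_iters=200, lr=0.05, eps=1e-5):
    """Iteratively refine joint angles by numerical gradient descent."""
    theta = glove_theta.copy()           # start from the glove estimate
    for _ in range(n_iters):
        grad = np.zeros_like(theta)
        for i in range(theta.size):      # central finite-difference gradient
            d = np.zeros_like(theta); d[i] = eps
            grad[i] = (energy(theta + d, depth_tip, glove_theta)
                       - energy(theta - d, depth_tip, glove_theta)) / (2 * eps)
        theta -= lr * grad
    return theta

glove_theta = np.array([0.4, 0.5, 0.3])           # noisy glove reading (radians)
depth_tip = fingertip(np.array([0.5, 0.6, 0.4]))  # "observed" fingertip from the depth camera
print(fit(depth_tip, glove_theta))
```

In the actual system the data term would cover the full depth point cloud via mesh correspondences and the optimizer would refine the complete set of hand DoFs, but the overall structure of the objective, a data term regularized by a glove prior and a collision penalty, is the same.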

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7362/6865016/5b66ece6abeb/sensors-19-04680-g001.jpg
