WearMoCap: multimodal pose tracking for ubiquitous robot control using a smartwatch.

Authors

Weigend Fabian C, Kumar Neelesh, Aran Oya, Ben Amor Heni

Affiliations

Interactive Robotics Laboratory, School of Computing and Augmented Intelligence (SCAI), Arizona State University (ASU), Tempe, AZ, United States.

Corporate Functions-R&D, Procter and Gamble, Mason, OH, United States.

Publication

Front Robot AI. 2025 Jan 3;11:1478016. doi: 10.3389/frobt.2024.1478016. eCollection 2024.

Abstract

We present WearMoCap, an open-source library that tracks human pose from smartwatch sensor data and leverages pose predictions for ubiquitous robot control. WearMoCap operates in three modes: 1) a Watch Only mode, which uses only a smartwatch; 2) a novel Upper Arm mode, which utilizes a smartphone strapped onto the upper arm; and 3) a Pocket mode, which determines body orientation from a smartphone in any pocket. We evaluate all modes on large-scale datasets consisting of recordings from up to 8 human subjects using a range of consumer-grade devices. Further, we discuss real-robot applications of the underlying works and evaluate WearMoCap in handover and teleoperation tasks, achieving performance within 2 cm of the accuracy of the gold-standard motion capture system. Our Upper Arm mode provides the most accurate wrist position estimates, with a root mean squared (RMS) prediction error of 6.79 cm. To enable evaluation of WearMoCap in more scenarios and investigation of strategies to mitigate sensor drift, we publish the WearMoCap system with thorough documentation as open source. The system is designed to foster future research in smartwatch-based motion capture for robotics applications where ubiquity matters. www.github.com/wearable-motion-capture.
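The abstract reports wrist-position accuracy as a root mean squared prediction error (6.79 cm for Upper Arm mode). As an illustrative sketch only (not code from the WearMoCap library), the metric for 3-D wrist positions can be computed like this; the function name and list-based data layout are assumptions for the example:

```python
import math

def wrist_rmse(pred, gt):
    """RMS error between predicted and ground-truth 3-D wrist
    positions, each given as a list of [x, y, z] in centimeters."""
    assert len(pred) == len(gt) and len(pred) > 0
    # Squared Euclidean distance per sample, then root of the mean.
    sq_dists = [
        sum((p - g) ** 2 for p, g in zip(pp, gg))
        for pp, gg in zip(pred, gt)
    ]
    return math.sqrt(sum(sq_dists) / len(sq_dists))

# Toy example: two samples offset by 3 cm and 4 cm on single axes.
pred = [[0.0, 0.0, 3.0], [4.0, 0.0, 0.0]]
gt = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(round(wrist_rmse(pred, gt), 2))  # sqrt((9 + 16) / 2) ≈ 3.54
```

A lower RMSE means predicted wrist trajectories stay closer to the motion-capture ground truth on average; squaring penalizes large deviations more than small ones.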

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/99f8/11738771/e0f05d9a0533/frobt-11-1478016-g001.jpg
