

Towards Robust Robot Control in Cartesian Space Using an Infrastructureless Head- and Eye-Gaze Interface.

Author information

Group of Sensors and Actuators, Department of Electrical Engineering and Applied Sciences, Westphalian University of Applied Sciences, 45877 Gelsenkirchen, Germany.

Publication information

Sensors (Basel). 2021 Mar 5;21(5):1798. doi: 10.3390/s21051798.

Abstract

This paper presents a lightweight, infrastructureless head-worn interface for robust and real-time robot control in Cartesian space using head- and eye-gaze. The interface comes at a total weight of just 162 g. It combines a state-of-the-art visual simultaneous localization and mapping algorithm (ORB-SLAM 2) for RGB-D cameras with a Magnetic, Angular Rate, and Gravity (MARG) sensor filter. The data fusion process is designed to dynamically switch between magnetic, inertial and visual heading sources to enable robust orientation estimation under various disturbances, e.g., magnetic disturbances or degraded visual sensor data. The interface furthermore delivers accurate eye- and head-gaze vectors to enable precise robot end effector (EFF) positioning and employs a head motion mapping technique to effectively control the robot's end effector orientation. An experimental proof of concept demonstrates that the proposed interface and its data fusion process generate reliable and robust pose estimation. The three-dimensional head- and eye-gaze position estimation pipeline delivers a mean Euclidean error of 19.0±15.7 mm for head-gaze and 27.4±21.8 mm for eye-gaze at a distance of 0.3-1.1 m to the user. This indicates that the proposed interface offers a precise control mechanism for hands-free and full six degree of freedom (DoF) robot teleoperation in Cartesian space by head- or eye-gaze and head motion.
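The dynamic switching between magnetic, inertial and visual heading sources described in the abstract can be pictured as a small complementary-filter-style selector. The Python sketch below is purely illustrative and is not the authors' implementation; all class names, quality measures and thresholds (e.g. MIN_VISUAL_QUALITY, EARTH_FIELD_UT) are assumptions made for this example.

```python
import math
from dataclasses import dataclass
from typing import Optional


def wrap_angle(a: float) -> float:
    """Wrap an angle to (-pi, pi] so heading corrections take the short way around."""
    return math.atan2(math.sin(a), math.cos(a))


@dataclass
class HeadingSample:
    visual_yaw: Optional[float]  # yaw from the visual SLAM pose, None if tracking is lost
    visual_quality: float        # assumed quality score (e.g. fraction of matched features), 0..1
    mag_yaw: float               # tilt-compensated heading from the magnetometer (rad)
    mag_norm: float              # magnetic field magnitude (microtesla)
    gyro_yaw_rate: float         # yaw rate from the gyroscope (rad/s)
    dt: float                    # sample period (s)


class HeadingFusion:
    """Illustrative selector that switches between visual, magnetic and inertial heading."""

    # Assumed thresholds, chosen only for illustration.
    MIN_VISUAL_QUALITY = 0.3     # below this, visual heading is considered degraded
    EARTH_FIELD_UT = 50.0        # nominal local field magnitude
    MAG_TOLERANCE_UT = 10.0      # deviation treated as a magnetic disturbance
    VISUAL_GAIN = 0.1
    MAG_GAIN = 0.02

    def __init__(self, initial_yaw: float = 0.0):
        self.yaw = initial_yaw

    def update(self, s: HeadingSample) -> float:
        # Always propagate with the gyroscope so the estimate never stalls.
        self.yaw = wrap_angle(self.yaw + s.gyro_yaw_rate * s.dt)

        if s.visual_yaw is not None and s.visual_quality >= self.MIN_VISUAL_QUALITY:
            # Visual heading is available and trusted: correct towards it.
            self.yaw = wrap_angle(self.yaw + self.VISUAL_GAIN * wrap_angle(s.visual_yaw - self.yaw))
        elif abs(s.mag_norm - self.EARTH_FIELD_UT) <= self.MAG_TOLERANCE_UT:
            # No usable visual data, but the magnetic field looks undisturbed: fall back to it.
            self.yaw = wrap_angle(self.yaw + self.MAG_GAIN * wrap_angle(s.mag_yaw - self.yaw))
        # Otherwise both correction sources are disturbed; keep the gyro-only prediction.
        return self.yaw
```

The structure mirrors the behaviour the abstract describes: the inertial (gyroscope) prediction always runs, the visual heading is preferred when SLAM tracking is healthy, the magnetometer serves as a fallback when the field magnitude looks undisturbed, and the filter coasts on gyroscope integration when both external sources are degraded.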

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/85cc/7962065/506b132f7153/sensors-21-01798-g001.jpg
