Robot Learning of Assistive Manipulation Tasks by Demonstration via Head Gesture-based Interface.

Author Information

Kyrarini Maria, Zheng Quan, Haseeb Muhammad Abdul, Graser Axel

Publication Information

IEEE Int Conf Rehabil Robot. 2019 Jun;2019:1139-1146. doi: 10.1109/ICORR.2019.8779379.

Abstract

Assistive robotic manipulators have the potential to support the lives of people with severe motor impairments. They can help individuals with disabilities independently perform activities of daily living, such as drinking, eating, manipulating objects, and opening doors. An attractive solution is to enable motor-impaired users to teach a robot by providing demonstrations of daily living tasks. The user controls the robot 'manually' through an intuitive human-robot interface to provide a demonstration, after which the robot learns the performed task. However, the control of robotic manipulators by motor-impaired individuals is a challenging topic. In this paper, a novel head gesture-based interface for hands-free robot control and a framework for robot learning from demonstration are presented. The head gesture-based interface consists of a camera mounted on the user's hat, which records the changes in the viewed scene caused by head motion. Head gesture recognition is performed using optical flow for feature extraction and a support vector machine for gesture classification. The recognized head gestures are then mapped to robot control commands to perform an object manipulation task. The robot learns the demonstrated task by generating a sequence of actions, and a Gaussian Mixture Model is used to segment the demonstrated path of the robot's end-effector. During robotic reproduction of the task, a modified Gaussian Mixture Model and Gaussian Mixture Regression are used to adapt to environmental changes. The proposed framework was evaluated in a real-world assistive robotic scenario in a small study involving 13 participants: 12 able-bodied and one tetraplegic. The presented results demonstrate the potential of the proposed framework to enable severely motor-impaired individuals to demonstrate daily living tasks to robotic manipulators.
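The learning pipeline described above — fitting a Gaussian Mixture Model over the demonstrated end-effector trajectory to segment it, then reproducing the motion with Gaussian Mixture Regression — can be illustrated with a minimal sketch. This is not the authors' implementation: the synthetic 1-D trajectory, the choice of four components, and the use of scikit-learn's `GaussianMixture` are all assumptions made for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic 1-D demonstration: end-effector position x along a path,
# indexed by normalized time t (a stand-in for a recorded trajectory).
t = np.linspace(0.0, 1.0, 200)
x = np.sin(2 * np.pi * t) + 0.02 * rng.standard_normal(t.size)

# Fit a joint GMM over (t, x). Assigning each sample to its most likely
# component yields a segmentation of the demonstrated path.
data = np.column_stack([t, x])
gmm = GaussianMixture(n_components=4, covariance_type="full",
                      random_state=0).fit(data)
segments = gmm.predict(data)

def gmr(gmm, t_query):
    """Gaussian Mixture Regression: E[x | t] from a joint GMM over (t, x)."""
    out = np.zeros_like(t_query)
    for i, tq in enumerate(t_query):
        # Responsibility of each component for the query time tq.
        h = np.array([
            w * np.exp(-0.5 * (tq - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
            for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_)
        ])
        h /= h.sum()
        # Per-component conditional mean of x given t, blended by h.
        cond = [m[1] + c[1, 0] / c[0, 0] * (tq - m[0])
                for m, c in zip(gmm.means_, gmm.covariances_)]
        out[i] = np.dot(h, cond)
    return out

# Reproduce the trajectory from the learned model.
x_reproduced = gmr(gmm, t)
print("max reproduction error:", np.max(np.abs(x_reproduced - x)))
```

In a full system, the GMR query input could also include environment features (e.g. a shifted object position), which is one way the reproduction can adapt to environmental changes as the abstract describes.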

