State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China.
Sensors (Basel). 2020 Sep 25;20(19):5505. doi: 10.3390/s20195505.
In manufacturing, traditional task pre-programming methods limit the efficiency of human-robot skill transfer. This paper proposes a novel task-learning strategy that enables robots to learn skills flexibly from human demonstrations and to generalize those skills to new task situations. Specifically, we establish a markerless vision capture system to acquire continuous human hand movements and develop a threshold-based heuristic segmentation algorithm that segments the complete movements into distinct movement primitives (MPs), which encode human hand movements with task-oriented models. For movement primitive learning, we adopt a Gaussian mixture model with Gaussian mixture regression (GMM-GMR) to extract an optimal trajectory that encapsulates sufficient human features, and we utilize dynamical movement primitives (DMPs) for trajectory generalization. In addition, we propose an improved visuo-spatial skill learning (VSL) algorithm to learn goal configurations describing the spatial relationships between task-relevant objects. Only one multi-operation demonstration is required for learning, and robots can generalize goal configurations to new task situations following the task execution order from the demonstration. A series of peg-in-hole experiments demonstrates that the proposed task-learning strategy obtains exact pick-and-place points and generates smooth, human-like trajectories, verifying the effectiveness of the proposed strategy.