Xu Shuqi, Zhang Hao, Wang Zhuping
IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):9717-9724. doi: 10.1109/TNNLS.2024.3414470. Epub 2025 May 2.
Human-robot skill transfer is an important means for robots to learn skills and has attracted increasing attention in recent years. Typically, to ensure effective skill transfer, a human demonstrates a skill several times; the robot learns the features contained in the demonstrations and reproduces the skill in a new environment. In practice, however, errors in human demonstrations and sensor issues must be accounted for, as they lead to imperfect demonstrations, irrelevant data, information loss, and variations in the lengths and amplitudes of the demonstrations. Therefore, this brief proposes a new trajectory alignment and filtering method for extracting useful information from multiple demonstrations. The method can be combined with most probabilistic movement learning approaches (this brief uses probabilistic movement primitives (ProMPs) as an example) for learning from demonstrations (LfDs), so that the robot can ultimately learn from multiple demonstrations of varying quality and generate trajectories that complete the skill. The effectiveness of the proposed method is verified by simulation results.
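The abstract does not detail the proposed alignment method. As context for the problem it addresses, a common baseline for aligning demonstrations of different lengths before probabilistic learning is dynamic time warping (DTW); the sketch below is a generic DTW alignment and is not the authors' method. The function names and the resampling-by-averaging step are illustrative assumptions.

```python
import numpy as np

def dtw_align(ref, traj):
    """Warp `traj` onto reference `ref` with classic DTW (illustrative,
    not the paper's method). Returns a copy of `traj` resampled to
    len(ref) points, so demonstrations of different lengths become
    comparable point-wise before fitting, e.g., a ProMP."""
    n, m = len(ref), len(traj)
    # Accumulated-cost matrix with squared-error local cost.
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (ref[i - 1] - traj[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal warping path from (n, m) to (1, 1).
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    # For each reference index, average all matched trajectory samples.
    aligned = np.zeros(n)
    counts = np.zeros(n)
    for ri, tj in path:
        aligned[ri] += traj[tj]
        counts[ri] += 1
    return aligned / np.maximum(counts, 1)

# Two demonstrations of the same motion with different lengths.
demo_ref = np.sin(np.linspace(0, np.pi, 50))
demo_long = np.sin(np.linspace(0, np.pi, 80))
aligned = dtw_align(demo_ref, demo_long)
print(aligned.shape)  # (50,)
```

After such an alignment, all demonstrations share a common time base, which is a precondition for estimating the per-time-step mean and covariance that ProMPs rely on.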