

Imitating by Generating: Deep Generative Models for Imitation of Interactive Tasks.

Authors

Bütepage Judith, Ghadirzadeh Ali, Öztimur Karadaǧ Özge, Björkman Mårten, Kragic Danica

Affiliations

Robotics, Perception and Learning, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden.

Intelligent Robotics Research Group, Aalto University, Espoo, Finland.

Publication

Front Robot AI. 2020 Apr 16;7:47. doi: 10.3389/frobt.2020.00047. eCollection 2020.

DOI: 10.3389/frobt.2020.00047
PMID: 33501215
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7806025/
Abstract

To coordinate actions with an interaction partner requires a constant exchange of sensorimotor signals. Humans acquire these skills in infancy and early childhood mostly by imitation learning and active engagement with a skilled partner. They require the ability to predict and adapt to one's partner during an interaction. In this work we want to explore these ideas in a human-robot interaction setting in which a robot is required to learn interactive tasks from a combination of observational and kinesthetic learning. To this end, we propose a deep learning framework consisting of a number of components for (1) human and robot motion embedding, (2) motion prediction of the human partner, and (3) generation of robot joint trajectories matching the human motion. As long-term motion prediction methods often suffer from the problem of regression to the mean, our technical contribution here is a novel probabilistic latent variable model which does not predict in joint space but in latent space. To test the proposed method, we collect human-human interaction data and human-robot interaction data of four interactive tasks "hand-shake," "hand-wave," "parachute fist-bump," and "rocket fist-bump." We demonstrate experimentally the importance of predictive and adaptive components as well as low-level abstractions to successfully learn to imitate human behavior in interactive social tasks.
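The abstract's central technical idea is to avoid regression to the mean by predicting future motion in a learned latent space rather than directly in joint space: encode the current pose, step the latent dynamics forward, then decode back to joints. A minimal NumPy sketch of that pipeline follows. Everything here is illustrative: the linear encoder, decoder, and latent dynamics matrix are hypothetical stand-ins for the paper's learned deep generative networks, and the dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 7 robot joints embedded in a 3-D latent space.
JOINT_DIM, LATENT_DIM = 7, 3

# Random linear "encoder" and a pseudo-inverse "decoder" stand in for the
# paper's learned networks; the point is *where* prediction happens.
W_enc = rng.normal(size=(LATENT_DIM, JOINT_DIM)) * 0.3
W_dec = np.linalg.pinv(W_enc)

# Toy latent dynamics: a mild linear contraction per time step.
A_lat = 0.9 * np.eye(LATENT_DIM)

def encode(x):
    """Map a joint-space pose to its latent embedding."""
    return W_enc @ x

def decode(z):
    """Map a latent embedding back to joint space."""
    return W_dec @ z

def predict_next_joint_space(x):
    """Naive baseline: regress directly in joint space (the approach the
    paper argues tends to collapse toward the mean over long horizons)."""
    return 0.9 * x

def predict_next_latent_space(x):
    """Paper-style idea: encode, step the latent dynamics, decode."""
    return decode(A_lat @ encode(x))

x_t = rng.normal(size=JOINT_DIM)
x_next = predict_next_latent_space(x_t)
```

Stepping dynamics in the low-dimensional latent space keeps predictions on the manifold of plausible motions, which is the property the paper's probabilistic latent variable model exploits.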


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/954f/7806025/6d97760a1e03/frobt-07-00047-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/954f/7806025/3f4f8d3d2084/frobt-07-00047-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/954f/7806025/5f2dc0d13610/frobt-07-00047-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/954f/7806025/fa6b2307b712/frobt-07-00047-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/954f/7806025/b52a8e4943f7/frobt-07-00047-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/954f/7806025/debe260ca621/frobt-07-00047-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/954f/7806025/ab185ca83c9e/frobt-07-00047-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/954f/7806025/0c869fa84455/frobt-07-00047-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/954f/7806025/b1b29c43d448/frobt-07-00047-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/954f/7806025/a24781233e01/frobt-07-00047-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/954f/7806025/294d0715c939/frobt-07-00047-g0011.jpg

Similar Articles

1
Imitating by Generating: Deep Generative Models for Imitation of Interactive Tasks.
Front Robot AI. 2020 Apr 16;7:47. doi: 10.3389/frobt.2020.00047. eCollection 2020.
2
A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning.
PLoS One. 2015 Nov 4;10(11):e0141965. doi: 10.1371/journal.pone.0141965. eCollection 2015.
3
Probabilistic Dual-Space Fusion for Real-Time Human-Robot Interaction.
Biomimetics (Basel). 2023 Oct 19;8(6):497. doi: 10.3390/biomimetics8060497.
4
Robot End Effector Tracking Using Predictive Multisensory Integration.
Front Neurorobot. 2018 Oct 16;12:66. doi: 10.3389/fnbot.2018.00066. eCollection 2018.
5
THERAPIST: Towards an Autonomous Socially Interactive Robot for Motor and Neurorehabilitation Therapies for Children.
JMIR Rehabil Assist Technol. 2014 Oct 7;1(1):e1. doi: 10.2196/rehab.3151.
6
Multi-Channel Interactive Reinforcement Learning for Sequential Tasks.
Front Robot AI. 2020 Sep 24;7:97. doi: 10.3389/frobt.2020.00097. eCollection 2020.
7
A developmental roadmap for learning by imitation in robots.
IEEE Trans Syst Man Cybern B Cybern. 2007 Apr;37(2):308-21. doi: 10.1109/tsmcb.2006.886949.
8
An Adaptive Imitation Learning Framework for Robotic Complex Contact-Rich Insertion Tasks.
Front Robot AI. 2022 Jan 11;8:777363. doi: 10.3389/frobt.2021.777363. eCollection 2021.
9
Learning from Examples: Imitation Learning and Emerging Cognition.
10
Restored Action Generative Adversarial Imitation Learning from observation for robot manipulator.
ISA Trans. 2022 Oct;129(Pt B):684-690. doi: 10.1016/j.isatra.2022.02.041. Epub 2022 Mar 7.

Cited By

1
Multi-Humanoid Robot Arm Motion Imitation and Collaboration Based on Improved Retargeting.
Biomimetics (Basel). 2025 Mar 19;10(3):190. doi: 10.3390/biomimetics10030190.

References

1
Advances in Variational Inference.
IEEE Trans Pattern Anal Mach Intell. 2019 Aug;41(8):2008-2026. doi: 10.1109/TPAMI.2018.2889774. Epub 2018 Dec 25.
2
Joint Action: Mental Representations, Shared Information and General Mechanisms for Coordinating with Others.
Front Psychol. 2017 Jan 4;7:2039. doi: 10.3389/fpsyg.2016.02039. eCollection 2016.
3
Anticipating Human Activities Using Object Affordances for Reactive Robotic Response.
IEEE Trans Pattern Anal Mach Intell. 2016 Jan;38(1):14-29. doi: 10.1109/TPAMI.2015.2430335.
4
Early Developments in Joint Action.
Rev Philos Psychol. 2011 Jun;2(2):193-211. doi: 10.1007/s13164-011-0056-1.
5
Correspondence mapping induced state and action metrics for robotic imitation.
IEEE Trans Syst Man Cybern B Cybern. 2007 Apr;37(2):299-307. doi: 10.1109/tsmcb.2006.886947.
6
Socially intelligent robots: dimensions of human-robot interaction.
Philos Trans R Soc Lond B Biol Sci. 2007 Apr 29;362(1480):679-704. doi: 10.1098/rstb.2006.2004.
7
Joint action: bodies and minds moving together.
Trends Cogn Sci. 2006 Feb;10(2):70-6. doi: 10.1016/j.tics.2005.12.009. Epub 2006 Jan 10.
8
Guided participation in cultural activity by toddlers and caregivers.
Monogr Soc Res Child Dev. 1993;58(8):v-vi, 1-174; discussion 175-9.