


Human Motion Understanding for Selecting Action Timing in Collaborative Human-Robot Interaction.

Authors

Rea Francesco, Vignolo Alessia, Sciutti Alessandra, Noceti Nicoletta

Affiliations

Robotics Brain and Cognitive Sciences (RBCS), Istituto Italiano di Tecnologia, Genova, Italy.

CONTACT, Istituto Italiano di Tecnologia, Genova, Italy.

Publication

Front Robot AI. 2019 Jul 16;6:58. doi: 10.3389/frobt.2019.00058. eCollection 2019.

DOI: 10.3389/frobt.2019.00058
PMID: 33501073
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7805633/
Abstract

In the industry of the future, as in healthcare and at home, robots will be a familiar presence. Since they will be working closely with human operators who are not always properly trained for human-machine interaction tasks, robots will need the ability to adapt automatically to changes in the task to be performed, or to cope with variations in how the human partner completes the task. The goal of this work is to make a further step toward endowing robots with such a capability. To this purpose, we focus on the identification of relevant time instants in an observed action: instants that are informative on the partner's movement timing and that mark where an action starts or ends, or changes into another action. These time instants are temporal locations where the motion can be ideally segmented, providing a set of primitives that can be used to build a temporal signature of the action and, finally, to support the understanding of the dynamics and coordination in time. We validate our approach in two contexts, considering first a situation in which the human partner can perform multiple different activities, and then moving to settings where an action is already recognized and shows a certain degree of periodicity. The two contexts pose different challenges. In the first, working in batch on a dataset collecting videos of a variety of cooking activities, we investigate whether the action signature we compute can facilitate the understanding of which type of action is occurring in front of the observer, with tolerance to viewpoint changes. In the second, we evaluate online, on the robot iCub, the capability of the action signature to provide hints for establishing an actual temporal coordination during the interaction with human participants. In both cases we show promising results that speak in favor of the potential of our approach.
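The abstract describes segmenting an observed motion at the instants where one movement unit ends and the next begins. As a minimal illustrative sketch (not the authors' implementation, whose details are not given here), one common heuristic is to mark local minima of the speed profile of a tracked trajectory as candidate segmentation instants; the function names and the synthetic trajectory below are assumptions for illustration only.

```python
import numpy as np

def speed_profile(traj, dt=1.0):
    """Per-frame speed of an (N, 2) array of 2D positions sampled every dt."""
    vel = np.diff(traj, axis=0) / dt          # frame-to-frame velocity, shape (N-1, 2)
    return np.linalg.norm(vel, axis=1)        # speed magnitude, shape (N-1,)

def candidate_instants(traj, dt=1.0):
    """Indices where speed has a strict local minimum: candidate points
    where one movement primitive ends and another starts."""
    s = speed_profile(traj, dt)
    return [i for i in range(1, len(s) - 1)
            if s[i] < s[i - 1] and s[i] < s[i + 1]]

# Synthetic example: two movement bouts along x separated by a near-stop.
traj = np.array([[x, 0.0] for x in
                 [0, 1, 3, 6, 8, 9, 9.2, 10.2, 12.2, 15.2, 17.2, 18.2]])
print(candidate_instants(traj))  # the near-stop between the two bouts
```

A real pipeline would of course smooth the speed signal and threshold minima depth before trusting them, since raw frame-to-frame differences are noisy.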


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8222/7805633/c99a397611d9/frobt-06-00058-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8222/7805633/325837d40d2b/frobt-06-00058-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8222/7805633/20dd9917725c/frobt-06-00058-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8222/7805633/4810a14622ab/frobt-06-00058-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8222/7805633/8b533c5dac03/frobt-06-00058-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8222/7805633/bf72d99a7236/frobt-06-00058-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8222/7805633/41e18cf33e3c/frobt-06-00058-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8222/7805633/665a8fdf4ac2/frobt-06-00058-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8222/7805633/320d65059a89/frobt-06-00058-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8222/7805633/2aee3afcbc9b/frobt-06-00058-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8222/7805633/03827ae4f1fe/frobt-06-00058-g0011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8222/7805633/8ba1a1f1abff/frobt-06-00058-g0012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8222/7805633/f9e1a3e977a6/frobt-06-00058-g0013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8222/7805633/7ace2a16c2e2/frobt-06-00058-g0014.jpg

Similar Articles

1
Human Motion Understanding for Selecting Action Timing in Collaborative Human-Robot Interaction.
Front Robot AI. 2019 Jul 16;6:58. doi: 10.3389/frobt.2019.00058. eCollection 2019.
2
Action Generation Adapted to Low-Level and High-Level Robot-Object Interaction States.
Front Neurorobot. 2019 Jul 24;13:56. doi: 10.3389/fnbot.2019.00056. eCollection 2019.
3
THERAPIST: Towards an Autonomous Socially Interactive Robot for Motor and Neurorehabilitation Therapies for Children.
JMIR Rehabil Assist Technol. 2014 Oct 7;1(1):e1. doi: 10.2196/rehab.3151.
4
Human-machine-human interaction in motor control and rehabilitation: a review.
J Neuroeng Rehabil. 2021 Dec 27;18(1):183. doi: 10.1186/s12984-021-00974-5.
5
Preferred Interaction Styles for Human-Robot Collaboration Vary Over Tasks With Different Action Types.
Front Neurorobot. 2018 Jul 4;12:36. doi: 10.3389/fnbot.2018.00036. eCollection 2018.
6
Interacting With Robots to Investigate the Bases of Social Interaction.
IEEE Trans Neural Syst Rehabil Eng. 2017 Dec;25(12):2295-2304. doi: 10.1109/TNSRE.2017.2753879. Epub 2017 Oct 16.
7
Teaching a Robot Bimanual Hand-Clapping Games via Wrist-Worn IMUs.
Front Robot AI. 2018 Jul 17;5:85. doi: 10.3389/frobt.2018.00085. eCollection 2018.
8
Rhythm patterns interaction--synchronization behavior for human-robot joint action.
PLoS One. 2014 Apr 21;9(4):e95195. doi: 10.1371/journal.pone.0095195. eCollection 2014.
9
Learning Semantics of Gestural Instructions for Human-Robot Collaboration.
Front Neurorobot. 2018 Mar 19;12:7. doi: 10.3389/fnbot.2018.00007. eCollection 2018.
10
Smooth leader or sharp follower? Playing the mirror game with a robot.
Restor Neurol Neurosci. 2018;36(2):147-159. doi: 10.3233/RNN-170756.

Cited By

1
The MoCA dataset, kinematic and multi-view visual streams of fine-grained cooking actions.
Sci Data. 2020 Dec 15;7(1):432. doi: 10.1038/s41597-020-00776-9.

References

1
Structured Time Series Analysis for Human Action Segmentation and Recognition.
IEEE Trans Pattern Anal Mach Intell. 2014 Jul;36(7):1414-27. doi: 10.1109/TPAMI.2013.244.
2
Motor contagion during human-human and human-robot interaction.
PLoS One. 2014 Aug 25;9(8):e106172. doi: 10.1371/journal.pone.0106172. eCollection 2014.
3
Visual gravity influences arm movement planning.
J Neurophysiol. 2012 Jun;107(12):3433-45. doi: 10.1152/jn.00420.2011. Epub 2012 Mar 21.
4
The iCub humanoid robot: an open-systems platform for research in cognitive development.
Neural Netw. 2010 Oct-Nov;23(8-9):1125-34. doi: 10.1016/j.neunet.2010.08.010. Epub 2010 Sep 22.
5
Action plans used in action observation.
Nature. 2003 Aug 14;424(6950):769-71. doi: 10.1038/nature01861.
6
Comparing smooth arm movements with the two-thirds power law and the related segmented-control hypothesis.
J Neurosci. 2002 Sep 15;22(18):8201-11. doi: 10.1523/JNEUROSCI.22-18-08201.2002.
7
Kinematic features of unrestrained vertical arm movements.
J Neurosci. 1985 Sep;5(9):2318-30. doi: 10.1523/JNEUROSCI.05-09-02318.1985.