
Gaze-Based Intention Estimation for Shared Autonomy in Pick-and-Place Tasks

Authors

Fuchs Stefan, Belardinelli Anna

Affiliations

Honda Research Institute Europe, Offenbach, Germany.

Publication

Front Neurorobot. 2021 Apr 16;15:647930. doi: 10.3389/fnbot.2021.647930. eCollection 2021.

DOI: 10.3389/fnbot.2021.647930
PMID: 33935675
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8085393/
Abstract

Shared autonomy aims at combining robotic and human control in the execution of remote, teleoperated tasks. This cooperative interaction cannot be brought about without the robot first recognizing the current human intention in a fast and reliable way so that a suitable assisting plan can be quickly instantiated and executed. Eye movements have long been known to be highly predictive of the cognitive agenda unfolding during manual tasks and constitute, hence, the earliest and most reliable behavioral cues for intention estimation. In this study, we present an experiment aimed at analyzing human behavior in simple teleoperated pick-and-place tasks in a simulated scenario and at devising a suitable model for early estimation of the current proximal intention. We show that scan paths are, as expected, heavily shaped by the current intention and that two types of Gaussian Hidden Markov Models, one more scene-specific and one more action-specific, achieve a very good prediction performance, while also generalizing to new users and spatial arrangements. We finally discuss how behavioral and model results suggest that eye movements reflect to some extent the invariance and generality of higher-level planning across object configurations, which can be leveraged by cooperative robotic systems.
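The abstract describes scoring gaze sequences under per-intention Gaussian Hidden Markov Models and picking the intention whose model explains the scan path best. The following is a minimal, illustrative numpy sketch of that idea, not the paper's trained models: the two-state layout ("scanning" vs. "fixating the target"), the fixation targets, the intention labels, and all parameter values are hand-set assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gauss(x, means, variances):
    # Diagonal-covariance Gaussian log-density, one value per HMM state.
    return -0.5 * np.sum(np.log(2 * np.pi * variances)
                         + (x - means) ** 2 / variances, axis=-1)

def forward_loglik(obs, pi, A, means, variances):
    # Forward algorithm in log space: total log-likelihood of the
    # gaze sequence `obs` (T x D) under one Gaussian HMM.
    alpha = np.log(pi) + log_gauss(obs[0], means, variances)
    logA = np.log(A)
    for x in obs[1:]:
        alpha = log_gauss(x, means, variances) + \
                np.logaddexp.reduce(alpha[:, None] + logA, axis=0)
    return np.logaddexp.reduce(alpha)

def make_hmm(target):
    # Hypothetical per-intention HMM: state 0 = broad scanning around the
    # scene centre, state 1 = tight fixation on the intention's target.
    pi = np.array([0.9, 0.1])                     # start mostly in scanning
    A = np.array([[0.7, 0.3],                     # scanning -> fixating
                  [0.1, 0.9]])                    # fixation is sticky
    means = np.array([[0.5, 0.5], target])
    variances = np.array([[0.2, 0.2], [0.01, 0.01]])
    return pi, A, means, variances

# Two toy proximal intentions with different fixation targets.
hmms = {"pick cup": make_hmm([0.2, 0.3]),
        "place shelf": make_hmm([0.8, 0.7])}

def classify(obs):
    # Maximum-likelihood intention over the competing HMMs.
    scores = {k: forward_loglik(obs, *p) for k, p in hmms.items()}
    return max(scores, key=scores.get)

# Simulated 2-D gaze trace: brief scanning, then drifting onto the cup.
trace = np.vstack([rng.normal([0.5, 0.5], 0.1, (3, 2)),
                   rng.normal([0.2, 0.3], 0.05, (7, 2))])
print(classify(trace))   # prints "pick cup"
```

Because the forward recursion updates incrementally, the same per-model likelihoods can be compared after every new gaze sample, which is what makes this family of models suitable for the early intention estimates the paper targets.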


Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/eb4c3d977b0b/fnbot-15-647930-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/e546ff8bdee2/fnbot-15-647930-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/49b7e265abc8/fnbot-15-647930-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/161b29cfbb47/fnbot-15-647930-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/dea889fa65f8/fnbot-15-647930-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/1367820b05af/fnbot-15-647930-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/038079e9b280/fnbot-15-647930-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/7f1808d377b0/fnbot-15-647930-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/830267270189/fnbot-15-647930-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/7104bdde21a0/fnbot-15-647930-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/33b70f7205b7/fnbot-15-647930-g0011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/a19d5580e9ed/fnbot-15-647930-g0012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/cb7a84866ce4/fnbot-15-647930-g0013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/a6b6efa64da9/fnbot-15-647930-g0014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/2e27664390a7/fnbot-15-647930-g0015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/2b7db801d0f2/fnbot-15-647930-g0016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/38b2/8085393/5ae3e4971a7d/fnbot-15-647930-g0017.jpg

Similar Articles

1
Gaze-Based Intention Estimation for Shared Autonomy in Pick-and-Place Tasks.
Front Neurorobot. 2021 Apr 16;15:647930. doi: 10.3389/fnbot.2021.647930. eCollection 2021.
2
Toward Shared Autonomy Control Schemes for Human-Robot Systems: Action Primitive Recognition Using Eye Gaze Features.
Front Neurorobot. 2020 Oct 15;14:567571. doi: 10.3389/fnbot.2020.567571. eCollection 2020.
3
Failure Handling of Robotic Pick and Place Tasks With Multimodal Cues Under Partial Object Occlusion.
Front Neurorobot. 2021 Mar 8;15:570507. doi: 10.3389/fnbot.2021.570507. eCollection 2021.
4
Human-in-the-Loop Robot Control for Human-Robot Collaboration: Human Intention Estimation and Safe Trajectory Tracking Control for Collaborative Tasks.
IEEE Control Syst. 2020 Dec;40(6):29-56. Epub 2020 Nov 16.
5
Comparative Analysis of Model-Based Predictive Shared Control for Delayed Operation in Object Reaching and Recognition Tasks With Tactile Sensing.
Front Robot AI. 2021 Sep 27;8:730946. doi: 10.3389/frobt.2021.730946. eCollection 2021.
6
Gaze-Based Shared Autonomy Framework With Real-Time Action Primitive Recognition for Robot Manipulators.
IEEE Trans Neural Syst Rehabil Eng. 2023;31:4306-4317. doi: 10.1109/TNSRE.2023.3328888. Epub 2023 Nov 3.
7
Task-Level Authoring for Remote Robot Teleoperation.
Front Robot AI. 2021 Sep 27;8:707149. doi: 10.3389/frobt.2021.707149. eCollection 2021.
8
Gaze gesture based human robot interaction for laparoscopic surgery.
Med Image Anal. 2018 Feb;44:196-214. doi: 10.1016/j.media.2017.11.011. Epub 2017 Nov 28.
9
A Data-Driven Framework for Intention Prediction via Eye Movement With Applications to Assistive Systems.
IEEE Trans Neural Syst Rehabil Eng. 2021;29:974-984. doi: 10.1109/TNSRE.2021.3083815. Epub 2021 Jun 4.
10
Context matters during pick-and-place in VR: Impact on search and transport phases.
Front Psychol. 2022 Sep 8;13:881269. doi: 10.3389/fpsyg.2022.881269. eCollection 2022.

Cited By

1
Intention Reasoning for User Action Sequences via Fusion of Object Task and Object Action Affordances Based on Dempster-Shafer Theory.
Sensors (Basel). 2025 Mar 22;25(7):1992. doi: 10.3390/s25071992.
2
An analysis of the role of different levels of exchange of explicit information in human-robot cooperation.
Front Robot AI. 2025 Feb 10;12:1511619. doi: 10.3389/frobt.2025.1511619. eCollection 2025.
3
Integrating Egocentric and Robotic Vision for Object Identification Using Siamese Networks and Superquadric Estimations in Partial Occlusion Scenarios.
Biomimetics (Basel). 2024 Feb 8;9(2):100. doi: 10.3390/biomimetics9020100.
4
Object Affordance-Based Implicit Interaction for Wheelchair-Mounted Robotic Arm Using a Laser Pointer.
Sensors (Basel). 2023 May 4;23(9):4477. doi: 10.3390/s23094477.
5
A Cooperative Shared Control Scheme Based on Intention Recognition for Flexible Assembly Manufacturing.
Front Neurorobot. 2022 Mar 16;16:850211. doi: 10.3389/fnbot.2022.850211. eCollection 2022.

References

1
Exploiting Three-Dimensional Gaze Tracking for Action Recognition During Bimanual Manipulation to Enhance Human-Robot Collaboration.
Front Robot AI. 2018 Apr 4;5:25. doi: 10.3389/frobt.2018.00025. eCollection 2018.
2
Toward Shared Autonomy Control Schemes for Human-Robot Systems: Action Primitive Recognition Using Eye Gaze Features.
Front Neurorobot. 2020 Oct 15;14:567571. doi: 10.3389/fnbot.2020.567571. eCollection 2020.
3
Probabilistic Human Intent Recognition for Shared Autonomy in Assistive Robotics.
ACM Trans Hum Robot Interact. 2019 Dec;9(1). doi: 10.1145/3359614.
4
Recursive Bayesian Human Intent Recognition in Shared-Control Robotics.
Rep U S. 2018 Oct;2018:3905-3912. doi: 10.1109/IROS.2018.8593766. Epub 2019 Jan 7.
5
Semi-Autonomous Robotic Arm Reaching With Hybrid Gaze-Brain Machine Interface.
Front Neurorobot. 2020 Jan 24;13:111. doi: 10.3389/fnbot.2019.00111. eCollection 2019.
6
Proof of Concept of an Assistive Robotic Arm Control Using Artificial Stereovision and Eye-Tracking.
IEEE Trans Neural Syst Rehabil Eng. 2019 Dec;27(12):2344-2352. doi: 10.1109/TNSRE.2019.2950619. Epub 2019 Oct 30.
7
Prediction in goal-directed action.
J Vis. 2019 Aug 1;19(9):10. doi: 10.1167/19.9.10.
8
Shared Autonomy via Hindsight Optimization.
Robot Sci Syst. 2015 Jul;2015. doi: 10.15607/RSS.2015.XI.032.
9
Toward a framework for levels of robot autonomy in human-robot interaction.
J Hum Robot Interact. 2014 Jul;3(2):74-99. doi: 10.5898/JHRI.3.2.Beer.
10
Vision and Action.
Annu Rev Vis Sci. 2017 Sep 15;3:389-413. doi: 10.1146/annurev-vision-102016-061437. Epub 2017 Jul 17.