Suppr 超能文献



Gaze-Based Shared Autonomy Framework With Real-Time Action Primitive Recognition for Robot Manipulators.

Publication Info

IEEE Trans Neural Syst Rehabil Eng. 2023;31:4306-4317. doi: 10.1109/TNSRE.2023.3328888. Epub 2023 Nov 3.

DOI: 10.1109/TNSRE.2023.3328888
PMID: 37906485
Abstract

Robots capable of robust, real-time recognition of human intent during manipulation tasks could be used to enhance human-robot collaboration for activities of daily living. Eye gaze-based control interfaces offer a non-invasive way to infer intent and reduce the cognitive burden on operators of complex robots. Eye gaze is traditionally used for "gaze triggering" (GT) in which staring at an object, or sequence of objects, triggers pre-programmed robotic movements. We propose an alternative approach: a neural network-based "action prediction" (AP) mode that extracts gaze-related features to recognize, and often predict, an operator's intended action primitives. We integrated the AP mode into a shared autonomy framework capable of 3D gaze reconstruction, real-time intent inference, object localization, obstacle avoidance, and dynamic trajectory planning. Using this framework, we conducted a user study to directly compare the performance of the GT and AP modes using traditional subjective performance metrics, such as Likert scales, as well as novel objective performance metrics, such as the delay of recognition. Statistical analyses suggested that the AP mode resulted in more seamless robotic movement than the state-of-the-art GT mode, and that participants generally preferred the AP mode.


Similar Articles

1. Gaze-Based Shared Autonomy Framework With Real-Time Action Primitive Recognition for Robot Manipulators.
   IEEE Trans Neural Syst Rehabil Eng. 2023;31:4306-4317. doi: 10.1109/TNSRE.2023.3328888. Epub 2023 Nov 3.
2. Exploiting Three-Dimensional Gaze Tracking for Action Recognition During Bimanual Manipulation to Enhance Human-Robot Collaboration.
   Front Robot AI. 2018 Apr 4;5:25. doi: 10.3389/frobt.2018.00025. eCollection 2018.
3. Toward Shared Autonomy Control Schemes for Human-Robot Systems: Action Primitive Recognition Using Eye Gaze Features.
   Front Neurorobot. 2020 Oct 15;14:567571. doi: 10.3389/fnbot.2020.567571. eCollection 2020.
4. 3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments.
   IEEE Trans Biomed Eng. 2017 Dec;64(12):2824-2835. doi: 10.1109/TBME.2017.2677902. Epub 2017 Mar 3.
5. Eye-gaze control of a wheelchair mounted 6DOF assistive robot for activities of daily living.
   J Neuroeng Rehabil. 2021 Dec 18;18(1):173. doi: 10.1186/s12984-021-00969-2.
6. Enhancing Human-Robot Collaboration through a Multi-Module Interaction Framework with Sensor Fusion: Object Recognition, Verbal Communication, User of Interest Detection, Gesture and Gaze Recognition.
   Sensors (Basel). 2023 Jun 21;23(13):5798. doi: 10.3390/s23135798.
7. Physiological Indicators of Fluency and Engagement during Sequential and Simultaneous Modes of Human-Robot Collaboration.
   IISE Trans Occup Ergon Hum Factors. 2024 Jan-Jun;12(1-2):97-111. doi: 10.1080/24725838.2023.2287015. Epub 2023 Dec 6.
8. Collaborative gaze channelling for improved cooperation during robotic assisted surgery.
   Ann Biomed Eng. 2012 Oct;40(10):2156-67. doi: 10.1007/s10439-012-0578-4. Epub 2012 May 12.
9. A feasibility study of eye gaze with biofeedback in a human-robot interface.
   Assist Technol. 2022 Mar 4;34(2):148-156. doi: 10.1080/10400435.2020.1719557. Epub 2020 Jan 30.
10. Natural Grasp Intention Recognition Based on Gaze in Human-Robot Interaction.
    IEEE J Biomed Health Inform. 2023 Apr;27(4):2059-2070. doi: 10.1109/JBHI.2023.3238406.