


Gaze Control of a Robotic Head for Realistic Interaction With Humans.

Authors

Duque-Domingo Jaime, Gómez-García-Bermejo Jaime, Zalama Eduardo

Affiliation

ITAP-DISA, University of Valladolid, Valladolid, Spain.

Publication

Front Neurorobot. 2020 Jun 17;14:34. doi: 10.3389/fnbot.2020.00034. eCollection 2020.

DOI: 10.3389/fnbot.2020.00034
PMID: 32625075
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7311780/
Abstract

When there is an interaction between a robot and a person, gaze control is very important for face-to-face communication. However, when a robot interacts with several people, neurorobotics plays an important role to determine the person to look at and those to pay attention to among the others. There are several factors which can influence the decision: who is speaking, who he/she is speaking to, where people are looking, if the user wants to attract attention, etc. This article presents a novel method to decide who to pay attention to when a robot interacts with several people. The proposed method is based on a competitive network that receives different stimuli (look, speak, pose, hoard conversation, habituation, etc.) that compete with each other to decide who to pay attention to. The dynamic nature of this neural network allows a smooth transition in the focus of attention to a significant change in stimuli. A conversation is created between different participants, replicating human behavior in the robot. The method deals with the problem of several interlocutors appearing and disappearing from the visual field of the robot. A robotic head has been designed and built and a virtual agent projected on the robot's face display has been integrated with the gaze control. Different experiments have been carried out with that robotic head integrated into a ROS architecture model. The work presents the analysis of the method, how the system has been integrated with the robotic head and the experiments and results obtained.
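The attention mechanism the abstract describes — stimuli for each person competing through a dynamic network, with habituation letting the focus shift smoothly — can be illustrated with a minimal sketch. Everything below (the stimulus names, weights, inhibition and habituation constants, and the Euler update) is an illustrative assumption, not the paper's actual model or parameter values.

```python
# Hedged sketch of a competitive attention network in the spirit of the
# abstract: each detected person accumulates evidence from weighted stimuli
# (looking at the robot, speaking, facing it), the units inhibit each other,
# and habituation slowly suppresses the current focus so attention can
# transition smoothly. All weights and rates here are made up for the demo.
import numpy as np

STIMULUS_WEIGHTS = {"looking": 0.4, "speaking": 0.8, "facing": 0.3}

def step(activations, habituation, stimuli, dt=0.1, tau=1.0,
         inhibition=0.6, habit_rate=0.01):
    """One Euler step of the competitive dynamics.

    activations, habituation: float arrays of shape (n_people,)
    stimuli: one dict per person mapping stimulus name -> 0/1
    """
    drive = np.array([
        sum(STIMULUS_WEIGHTS[k] * v for k, v in s.items()) for s in stimuli
    ], dtype=float)
    # Lateral inhibition: each unit is suppressed by the others' activity.
    lateral = inhibition * (activations.sum() - activations)
    activations = activations + dt / tau * (
        -activations + drive - lateral - habituation)
    activations = np.clip(activations, 0.0, None)
    # Habituation builds up on the current winner and decays elsewhere.
    winner = int(np.argmax(activations))
    habituation = habituation * (1.0 - habit_rate)
    habituation[winner] += habit_rate
    return activations, habituation, winner

# Two people in view: person 1 is speaking while looking at the robot.
acts = np.zeros(2)
habit = np.zeros(2)
stimuli = [{"looking": 1, "speaking": 0, "facing": 1},
           {"looking": 1, "speaking": 1, "facing": 1}]
for _ in range(30):
    acts, habit, focus = step(acts, habit, stimuli)
print("attending to person", focus)  # the speaker wins the competition
```

With a larger `habit_rate`, habituation eventually overcomes the speaker's advantage and the focus rotates between interlocutors, which is the qualitative behavior the abstract attributes to the dynamic network.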


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8c5/7311780/3b811a839616/fnbot-14-00034-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8c5/7311780/09677b295acc/fnbot-14-00034-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8c5/7311780/c1cd74cd7da1/fnbot-14-00034-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8c5/7311780/8050716eab74/fnbot-14-00034-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8c5/7311780/b7a452dd8129/fnbot-14-00034-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8c5/7311780/0600d0e21f8c/fnbot-14-00034-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8c5/7311780/e72aaf98ea33/fnbot-14-00034-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8c5/7311780/ca56e58e01c6/fnbot-14-00034-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8c5/7311780/fe8d09c73a30/fnbot-14-00034-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8c5/7311780/d0d709072d6f/fnbot-14-00034-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8c5/7311780/9e35b6ca454a/fnbot-14-00034-g0011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d8c5/7311780/7d39515e454d/fnbot-14-00034-g0012.jpg

Similar Articles

1. Gaze Control of a Robotic Head for Realistic Interaction With Humans.
Front Neurorobot. 2020 Jun 17;14:34. doi: 10.3389/fnbot.2020.00034. eCollection 2020.
2. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.
ACM Trans Interact Intell Syst. 2016 May;6(1). doi: 10.1145/2882970.
3. Robotic gaze and human views: A systematic exploration of robotic gaze aversion and its effects on human behaviors and attitudes.
Front Robot AI. 2023 Apr 10;10:1062714. doi: 10.3389/frobt.2023.1062714. eCollection 2023.
4. Fuzzy Integral-Based Gaze Control of a Robotic Head for Human Robot Interaction.
IEEE Trans Cybern. 2015 Sep;45(9):1769-83. doi: 10.1109/TCYB.2014.2360205. Epub 2014 Oct 8.
5. 3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments.
IEEE Trans Biomed Eng. 2017 Dec;64(12):2824-2835. doi: 10.1109/TBME.2017.2677902. Epub 2017 Mar 3.
6. Robotic Motion Learning Framework to Promote Social Engagement.
Appl Sci (Basel). 2018 Feb;8(2). doi: 10.3390/app8020241. Epub 2018 Feb 5.
7. Can the robot "see" what I see? Robot gaze drives attention depending on mental state attribution.
Front Psychol. 2023 Jul 13;14:1215771. doi: 10.3389/fpsyg.2023.1215771. eCollection 2023.
8. Robot initiative in a team learning task increases the rhythm of interaction but not the perceived engagement.
Front Neurorobot. 2014 Feb 17;8:5. doi: 10.3389/fnbot.2014.00005. eCollection 2014.
9. Toward understanding social cues and signals in human-robot interaction: effects of robot gaze and proxemic behavior.
Front Psychol. 2013 Nov 27;4:859. doi: 10.3389/fpsyg.2013.00859. eCollection 2013.
10. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability.
Front Psychol. 2018 Feb 5;9:70. doi: 10.3389/fpsyg.2018.00070. eCollection 2018.

Cited By

1. Perceptive Recommendation Robot: Enhancing Receptivity of Product Suggestions Based on Customers' Nonverbal Cues.
Biomimetics (Basel). 2024 Jul 2;9(7):404. doi: 10.3390/biomimetics9070404.

References

1. A Human-Robot Interaction Perspective on Assistive and Rehabilitation Robotics.
Front Neurorobot. 2017 May 23;11:24. doi: 10.3389/fnbot.2017.00024. eCollection 2017.
2. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.
3. Audio-visual perception system for a humanoid robotic head.
Sensors (Basel). 2014 May 28;14(6):9522-45. doi: 10.3390/s140609522.
4. Robot evolutionary localization based on attentive visual short-term memory.
Sensors (Basel). 2013 Jan 21;13(1):1268-99. doi: 10.3390/s130101268.
5. User localization during human-robot interaction.
Sensors (Basel). 2012;12(7):9913-35. doi: 10.3390/s120709913. Epub 2012 Jul 23.
6. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.
Front Neurorobot. 2012 May 3;6:3. doi: 10.3389/fnbot.2012.00003. eCollection 2012.
7. Integrating verbal and nonverbal communication in a dynamic neural field architecture for human-robot interaction.
Front Neurorobot. 2010 May 21;4. doi: 10.3389/fnbot.2010.00005. eCollection 2010.
8. Recognizing visual focus of attention from head pose in natural meetings.
IEEE Trans Syst Man Cybern B Cybern. 2009 Feb;39(1):16-33. doi: 10.1109/TSMCB.2008.927274. Epub 2008 Sep 16.
9. Face description with local binary patterns: application to face recognition.
IEEE Trans Pattern Anal Mach Intell. 2006 Dec;28(12):2037-41. doi: 10.1109/TPAMI.2006.244.
10. Some nonlinear networks capable of learning a spatial pattern of arbitrary complexity.
Proc Natl Acad Sci U S A. 1968 Feb;59(2):368-72. doi: 10.1073/pnas.59.2.368.