Creating Expressive Social Robots That Convey Symbolic and Spontaneous Communication.

Authors

Enrique Fernández-Rodicio, Álvaro Castro-González, Juan José Gamboa-Montero, Sara Carrasco-Martínez, Miguel A. Salichs

Affiliation

RoboticsLab, Department of Systems Engineering and Automation, Universidad Carlos III de Madrid, Av. de la Universidad 30, 28911 Madrid, Spain.

Publication

Sensors (Basel). 2024 Jun 5;24(11):3671. doi: 10.3390/s24113671.

DOI: 10.3390/s24113671
PMID: 38894462
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11175349/
Abstract

Robots are becoming an increasingly important part of our society and have started to be used in tasks that require communicating with humans. Communication can be decoupled into two dimensions: symbolic (information aimed to achieve a particular goal) and spontaneous (displaying the speaker's emotional and motivational state) communication. Thus, to enhance human-robot interactions, the expressions that are used have to convey both dimensions. This paper presents a method for modelling a robot's expressiveness as a combination of these two dimensions, where each of them can be generated independently. This is the first contribution of our work. The second contribution is the development of an expressiveness architecture that uses predefined multimodal expressions to convey the symbolic dimension and integrates a series of modulation strategies for conveying the robot's mood and emotions. In order to validate the performance of the proposed architecture, the last contribution is a series of experiments that aim to study the effect that the addition of the spontaneous dimension of communication and its fusion with the symbolic dimension has on how people perceive a social robot. Our results show that the modulation strategies improve the users' perception and can convey a recognizable affective state.


Figures (g001–g012):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8a13/11175349/2a041462a003/sensors-24-03671-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8a13/11175349/7c139c7e9fba/sensors-24-03671-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8a13/11175349/cf8a06adf55f/sensors-24-03671-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8a13/11175349/a1c3cad6ea18/sensors-24-03671-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8a13/11175349/59c1d8e2070d/sensors-24-03671-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8a13/11175349/644abd73c013/sensors-24-03671-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8a13/11175349/24dce1152b42/sensors-24-03671-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8a13/11175349/ee9cdbb6375e/sensors-24-03671-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8a13/11175349/dc3ca406de96/sensors-24-03671-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8a13/11175349/d561b347b3ee/sensors-24-03671-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8a13/11175349/82d4b5b0403f/sensors-24-03671-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8a13/11175349/02f205c23ae5/sensors-24-03671-g012.jpg

Similar Articles

1. Creating Expressive Social Robots That Convey Symbolic and Spontaneous Communication. Sensors (Basel). 2024 Jun 5;24(11):3671. doi: 10.3390/s24113671.
2. The Role of Coherent Robot Behavior and Embodiment in Emotion Perception and Recognition During Human-Robot Interaction: Experimental Study. JMIR Hum Factors. 2024 Jan 26;11:e45494. doi: 10.2196/45494.
3. A Multimodal Emotional Human-Robot Interaction Architecture for Social Robots Engaged in Bidirectional Communication. IEEE Trans Cybern. 2021 Dec;51(12):5954-5968. doi: 10.1109/TCYB.2020.2974688. Epub 2021 Dec 22.
4. Affect-Driven Learning of Robot Behaviour for Collaborative Human-Robot Interactions. Front Robot AI. 2022 Feb 21;9:717193. doi: 10.3389/frobt.2022.717193. eCollection 2022.
5. Modelling Multimodal Dialogues for Social Robots Using Communicative Acts. Sensors (Basel). 2020 Jun 18;20(12):3440. doi: 10.3390/s20123440.
6. Real-time emotion generation in human-robot dialogue using large language models. Front Robot AI. 2023 Dec 1;10:1271610. doi: 10.3389/frobt.2023.1271610. eCollection 2023.
7. Freedom comes at a cost?: An exploratory study on affordances' impact on users' perception of a social robot. Front Robot AI. 2024 Mar 18;11:1288818. doi: 10.3389/frobt.2024.1288818. eCollection 2024.
8. Flat vs. Expressive Storytelling: Young Children's Learning and Retention of a Social Robot's Narrative. Front Hum Neurosci. 2017 Jun 7;11:295. doi: 10.3389/fnhum.2017.00295. eCollection 2017.
9. Mind Perception in HRI: Exploring Users' Attribution of Mental and Emotional States to Robots with Different Behavioural Styles. Int J Soc Robot. 2023;15(5):867-877. doi: 10.1007/s12369-023-00989-z. Epub 2023 Mar 26.
10. Robots with display screens: a robot with a more humanlike face display is perceived to have more mind and a better personality. PLoS One. 2013 Aug 28;8(8):e72589. doi: 10.1371/journal.pone.0072589. eCollection 2013.

Cited By

1. Evaluating the effects of active social touch and robot expressiveness on user attitudes and behaviour in human-robot interaction. Sci Rep. 2025 May 27;15(1):18483. doi: 10.1038/s41598-025-01490-5.

References

1. Zero-shot style transfer for gesture animation driven by text and speech using adversarial disentanglement of multimodal style encoding. Front Artif Intell. 2023 Jun 12;6:1142997. doi: 10.3389/frai.2023.1142997. eCollection 2023.
2. Facing the FACS-Using AI to Evaluate and Control Facial Action Units in Humanoid Robot Face Development. Front Robot AI. 2022 Jun 14;9:887645. doi: 10.3389/frobt.2022.887645. eCollection 2022.
3. ExGenNet: Learning to Generate Robotic Facial Expression Using Facial Expression Recognition. Front Robot AI. 2022 Jan 4;8:730317. doi: 10.3389/frobt.2021.730317. eCollection 2021.
4. Effects of Robot Facial Characteristics and Gender in Persuasive Human-Robot Interaction. Front Robot AI. 2018 Jun 21;5:73. doi: 10.3389/frobt.2018.00073. eCollection 2018.
5. Modelling Multimodal Dialogues for Social Robots Using Communicative Acts. Sensors (Basel). 2020 Jun 18;20(12):3440. doi: 10.3390/s20123440.
6. Behavioral and Neurobiological Convergence of Odor, Mood and Emotion: A Review. Front Behav Neurosci. 2020 Mar 10;14:35. doi: 10.3389/fnbeh.2020.00035. eCollection 2020.
7. A Multimodal Emotional Human-Robot Interaction Architecture for Social Robots Engaged in Bidirectional Communication. IEEE Trans Cybern. 2021 Dec;51(12):5954-5968. doi: 10.1109/TCYB.2020.2974688. Epub 2021 Dec 22.
8. Automating the Production of Communicative Gestures in Embodied Characters. Front Psychol. 2018 Jul 9;9:1144. doi: 10.3389/fpsyg.2018.01144. eCollection 2018.
9. A meta-analysis of factors affecting trust in human-robot interaction. Hum Factors. 2011 Oct;53(5):517-27. doi: 10.1177/0018720811417254.