Suppr 超能文献



Who Is Responsible for a Dialogue Breakdown? An Error Recovery Strategy That Promotes Cooperative Intentions From Humans by Mutual Attribution of Responsibility in Human-Robot Dialogues.

Authors

Uchida Takahisa, Minato Takashi, Koyama Tora, Ishiguro Hiroshi

Affiliations

Advanced Telecommunications Research Institute International, Kyoto, Japan.

Graduate School of Engineering Science, Osaka University, Osaka, Japan.

Publication

Front Robot AI. 2019 Apr 24;6:29. doi: 10.3389/frobt.2019.00029. eCollection 2019.

DOI:10.3389/frobt.2019.00029
PMID:33501045
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7806060/
Abstract

We propose a strategy with which conversational android robots can handle dialogue breakdowns. For smooth human-robot conversations, we must not only improve a robot's dialogue capability but also elicit cooperative intentions from users for avoiding and recovering from dialogue breakdowns. A cooperative intention can be encouraged if users recognize their own responsibility for breakdowns. If the robot always blames users, however, they will quickly become less cooperative and lose their motivation to continue a discussion. This paper hypothesizes that for smooth dialogues, the robot and the users must share the responsibility based on psychological reciprocity. In other words, the robot should alternately attribute the responsibility to itself and to the users. We proposed a dialogue strategy for recovering from dialogue breakdowns based on the hypothesis and experimentally verified it with an android. The experimental result shows that the proposed method made the participants aware of their share of the responsibility of the dialogue breakdowns without reducing their motivation, even though the number of dialogue breakdowns was not statistically reduced compared with a control condition. This suggests that the proposed method effectively elicited cooperative intentions from users during dialogues.
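The recovery strategy the abstract describes — the robot alternately attributing responsibility for a breakdown to itself and to the user, rather than always blaming one side — can be sketched in a few lines. This is an illustrative reconstruction only; the class name, method names, and example utterances are assumptions, not taken from the paper.

```python
# Illustrative sketch of the alternating responsibility-attribution
# strategy described in the abstract. Names and utterances are
# hypothetical, not from the paper.

class RecoveryStrategy:
    """On each dialogue breakdown, alternate who the robot blames."""

    SELF_BLAME = [
        "Sorry, I may have misunderstood you.",
        "My mistake -- could you say that again?",
    ]
    USER_ATTRIBUTION = [
        "I could not hear you well; could you speak more clearly?",
        "Your answer did not match my question; could you rephrase it?",
    ]

    def __init__(self):
        # Start by taking responsibility, so the user is not blamed first.
        self._blame_self_next = True

    def recover(self, breakdown_count: int) -> str:
        """Return a recovery utterance, swapping attribution each time."""
        if self._blame_self_next:
            utterance = self.SELF_BLAME[breakdown_count % len(self.SELF_BLAME)]
        else:
            utterance = self.USER_ATTRIBUTION[
                breakdown_count % len(self.USER_ATTRIBUTION)
            ]
        # Psychological reciprocity: alternate attribution on the next breakdown.
        self._blame_self_next = not self._blame_self_next
        return utterance


strategy = RecoveryStrategy()
first = strategy.recover(0)   # robot attributes the breakdown to itself
second = strategy.recover(1)  # robot attributes it to the user
```

The key design point, per the hypothesis in the abstract, is the alternation itself: always blaming the user erodes motivation, while always blaming the robot never makes users aware of their own share of responsibility.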


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0ef8/7806060/c899d4233d38/frobt-06-00029-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0ef8/7806060/e9577abd0edb/frobt-06-00029-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0ef8/7806060/5e3cf350436d/frobt-06-00029-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0ef8/7806060/1ddd09562bb6/frobt-06-00029-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0ef8/7806060/8cdef485ac64/frobt-06-00029-g0005.jpg

Similar Articles

1
Who Is Responsible for a Dialogue Breakdown? An Error Recovery Strategy That Promotes Cooperative Intentions From Humans by Mutual Attribution of Responsibility in Human-Robot Dialogues.
Front Robot AI. 2019 Apr 24;6:29. doi: 10.3389/frobt.2019.00029. eCollection 2019.
2
Opinion attribution improves motivation to exchange subjective opinions with humanoid robots.
Front Robot AI. 2024 Feb 19;11:1175879. doi: 10.3389/frobt.2024.1175879. eCollection 2024.
3
Asymmetric communication: cognitive models of humans toward an android robot.
Front Robot AI. 2024 Jan 11;10:1267560. doi: 10.3389/frobt.2023.1267560. eCollection 2023.
4
Forming We-intentions under breakdown situations in human-robot interactions.
Comput Methods Programs Biomed. 2023 Dec;242:107817. doi: 10.1016/j.cmpb.2023.107817. Epub 2023 Sep 20.
5
Modelling Multimodal Dialogues for Social Robots Using Communicative Acts.
Sensors (Basel). 2020 Jun 18;20(12):3440. doi: 10.3390/s20123440.
6
A study of interactive robot architecture through the practical implementation of conversational android.
Front Robot AI. 2022 Oct 11;9:905030. doi: 10.3389/frobt.2022.905030. eCollection 2022.
7
Expecting, understanding, relating, and interacting-older, middle-aged and younger adults' perspectives on breakdown situations in human-robot dialogues.
Front Robot AI. 2022 Oct 14;9:956709. doi: 10.3389/frobt.2022.956709. eCollection 2022.
8
Can a robot laugh with you?: Shared laughter generation for empathetic spoken dialogue.
Front Robot AI. 2022 Sep 15;9:933261. doi: 10.3389/frobt.2022.933261. eCollection 2022.
9
Scenario-based dialogue system based on pause detection toward daily health monitoring.
J Rehabil Assist Technol Eng. 2022 Oct 13;9:20556683221133367. doi: 10.1177/20556683221133367. eCollection 2022 Jan-Dec.
10
Development of an excretion care support robot with human cooperative characteristics.
Annu Int Conf IEEE Eng Med Biol Soc. 2015;2015:6868-71. doi: 10.1109/EMBC.2015.7319971.

Cited By

1
Working with troubles and failures in conversation between humans and robots: workshop report.
Front Robot AI. 2023 Dec 1;10:1202306. doi: 10.3389/frobt.2023.1202306. eCollection 2023.
2
A study of interactive robot architecture through the practical implementation of conversational android.
Front Robot AI. 2022 Oct 11;9:905030. doi: 10.3389/frobt.2022.905030. eCollection 2022.
3
Measuring Collaboration Load With Pupillary Responses - Implications for the Design of Instructions in Task-Oriented HRI.
Front Psychol. 2021 Jul 20;12:623657. doi: 10.3389/fpsyg.2021.623657. eCollection 2021.

References

1
Formalizing Human-Robot Mutual Adaptation: A Bounded Memory Model.
Proc ACM SIGCHI. 2016 Mar;2016:75-82. doi: 10.1109/HRI.2016.7451736. Epub 2016 Apr 14.
2
Shame and guilt in neurosis.
Psychoanal Rev. 1971 Fall;58(3):419-38.
4
Trouble and Repair in Child-Robot Interaction: A Study of Complex Interactions With a Robot Tutee in a Primary School Classroom.
Front Robot AI. 2020 Apr 9;7:46. doi: 10.3389/frobt.2020.00046. eCollection 2020.