Uchida Takahisa, Minato Takashi, Koyama Tora, Ishiguro Hiroshi
Advanced Telecommunications Research Institute International, Kyoto, Japan.
Graduate School of Engineering Science, Osaka University, Osaka, Japan.
Front Robot AI. 2019 Apr 24;6:29. doi: 10.3389/frobt.2019.00029. eCollection 2019.
We propose a strategy with which conversational android robots can handle dialogue breakdowns. For smooth human-robot conversation, we must not only improve a robot's dialogue capability but also elicit users' cooperative intentions to avoid and recover from dialogue breakdowns. A cooperative intention can be encouraged if users recognize their own responsibility for breakdowns. If the robot always blames the users, however, they quickly become less cooperative and lose their motivation to continue the discussion. This paper hypothesizes that, for smooth dialogues, the robot and the users must share the responsibility based on psychological reciprocity; in other words, the robot should alternately attribute the responsibility to itself and to the users. Based on this hypothesis, we propose a dialogue strategy for recovering from dialogue breakdowns and verify it experimentally with an android. The experimental results show that the proposed method made participants aware of their share of responsibility for the dialogue breakdowns without reducing their motivation, even though the number of dialogue breakdowns was not significantly reduced compared with a control condition. This suggests that the proposed method effectively elicits users' cooperative intentions during dialogues.
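The alternating attribution strategy described above can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: the class name, utterance lists, and the simple turn-taking logic are assumptions introduced only to show how responsibility for a breakdown could be attributed alternately to the robot and to the user.

    import itertools

    # Hypothetical sketch of the alternating responsibility-attribution idea:
    # on each detected dialogue breakdown, the robot alternates between
    # attributing the breakdown to itself and (politely) to the user,
    # so that responsibility is shared rather than placed on one side.

    SELF_BLAME = [
        "Sorry, I don't think I explained myself well. Let me try again.",
        "My apologies, I may have misunderstood you.",
    ]
    USER_BLAME = [
        "I couldn't quite catch that. Could you rephrase it for me?",
        "Could you say that again a little more slowly?",
    ]


    class BreakdownRecoveryPolicy:
        """Alternately attributes breakdown responsibility to robot and user."""

        def __init__(self):
            # Cycle between the two attribution targets on successive breakdowns.
            self._targets = itertools.cycle(["robot", "user"])
            self._self_blame = itertools.cycle(SELF_BLAME)
            self._user_blame = itertools.cycle(USER_BLAME)

        def recover(self) -> str:
            """Return a recovery utterance for the next detected breakdown."""
            if next(self._targets) == "robot":
                return next(self._self_blame)
            return next(self._user_blame)


    if __name__ == "__main__":
        policy = BreakdownRecoveryPolicy()
        # Simulate four consecutive breakdowns; attribution alternates each time.
        for i in range(4):
            print(f"breakdown {i + 1}: {policy.recover()}")

In this sketch, the alternation is fixed and unconditional; a real system would also need a breakdown detector and would weave the attribution into the ongoing dialogue rather than emitting canned lines.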