Groß André, Singh Amit, Banh Ngoc Chi, Richter Birte, Scharlau Ingrid, Rohlfing Katharina J, Wrede Britta
Medical Assistance Systems, Medical School OWL, Bielefeld University, Bielefeld, Germany.
Center for Cognitive Interaction Technology, CITEC, Bielefeld University, Bielefeld, Germany.
Front Robot AI. 2023 Oct 30;10:1236184. doi: 10.3389/frobt.2023.1236184. eCollection 2023.
Explanation has been identified as an important capability for AI-based systems, but research on systematic strategies for achieving understanding in interaction with such systems is still sparse. Negation is a linguistic strategy that is often used in explanations. It creates a contrast space between the affirmed and the negated item that enriches explaining processes with additional contextual information. While negation in human speech has been shown to lead to higher processing costs and worse task performance in terms of recall or action execution when used in isolation, it can decrease processing costs when used in context. So far, it has not been considered as a guiding strategy for explanations in human-robot interaction. We conducted an empirical study to investigate the use of negation as a guiding strategy in explanatory human-robot dialogue, in which a virtual robot explains tasks and possible actions to a human explainee, who solves them via gestures on a touchscreen. Our results show that negation, compared to affirmation, 1) increases processing costs measured as reaction time and 2) improves several aspects of task performance. While there was no significant effect of negation on the number of initially correctly executed gestures, we found a significantly lower number of attempts (measured as breaks in the finger-movement data before the correct gesture was carried out) when participants were instructed through a negation. We further found that gestures resembled the presented prototype gesture significantly more closely following an instruction with a negation than following an affirmation. Also, the participants rated the benefit of contrastive vs. affirmative explanations significantly higher. Repeating the instructions decreased the effects of negation, yielding similar processing costs and task performance measures for negation and affirmation after several iterations. We discuss our results with respect to possible effects of negation on linguistic processing of explanations and limitations of our study.