Schoeller Felix, Miller Mark, Salomon Roy, Friston Karl J
Massachusetts Institute of Technology, Cambridge, MA, United States.
Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel.
Front Syst Neurosci. 2021 Oct 13;15:669810. doi: 10.3389/fnsys.2021.669810. eCollection 2021.
In order to interact seamlessly with robots, users must infer the causes of a robot's behavior, and be confident about that inference (and its predictions). Hence, trust is a necessary condition for human-robot collaboration (HRC). However, despite its crucial role, it remains largely unknown how trust emerges, develops, and supports human relationships with technological systems. In this paper, we review the literature on trust, human-robot interaction, HRC, and human interaction at large. Early models of trust suggest that it is a trade-off between benevolence and competence, while studies of human-to-human interaction emphasize the role of shared behavior and mutual knowledge in the gradual building of trust. We then introduce a model of trust as an agent's best explanation for reliable sensory exchange with an extended motor plant or partner. This model is based on the cognitive neuroscience of active inference and suggests that, in the context of HRC, trust can be cast in terms of virtual control over an artificial agent. Interactive feedback is a necessary condition for extending the trustor's perception-action cycle. This model has important implications for understanding human-robot interaction and collaboration, as it allows the traditional determinants of human trust, such as the benevolence and competence attributed to the trustee, to be defined in terms of hierarchical active inference, while vulnerability can be described in terms of information exchange and empowerment. Furthermore, the model emphasizes the role of user feedback during HRC and suggests that boredom and surprise may be used in personalized interactions as markers of under- and over-reliance on the system. The description of trust as a sense of virtual control offers a crucial step toward grounding human factors in cognitive neuroscience and improving the design of human-centered technology. Finally, we examine the role of shared behavior in the genesis of trust, especially in the context of dyadic collaboration, suggesting important consequences for the acceptability and design of human-robot collaborative systems.
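For readers unfamiliar with active inference, the following is a minimal sketch of the standard variational free energy bound from that literature; it is general background, not an equation reproduced from this paper. Here o denotes sensory observations, s the hidden states (for example, the partner's intentions), q(s) the trustor's approximate posterior, and p(o, s) its generative model:

\[
  F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
    \;=\; D_{\mathrm{KL}}\!\left[q(s)\,\middle\|\,p(s \mid o)\right] \;-\; \ln p(o)
    \;\geq\; -\ln p(o).
\]

Because F upper-bounds surprise, -ln p(o), an agent that minimizes it keeps sensory exchange with its partner predictable. On one reading of the abstract, a sustained rise in surprise during interaction would signal over-reliance (the trustor's model of the system is failing), while persistently negligible surprise would correspond to the under-engagement described above as boredom.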