Galland Lucie, Pelachaud Catherine, Pecune Florian
Département d'Informatique de l'ENS, ENS, CNRS, PSL University, Paris, France.
ISIR, CNRS, Paris, France.
Front Artif Intell. 2022 Oct 25;5:1029340. doi: 10.3389/frai.2022.1029340. eCollection 2022.
In this work, we focus on human-agent interaction where the role of the socially interactive agent is to optimize the amount of information given to a user. In particular, we developed a dialog manager able to adapt the agent's conversational strategies to the preferences of the user it is interacting with, in order to maximize the user's engagement during the interaction. For this purpose, we train an agent in interaction with a user using a reinforcement learning approach. The user's engagement is measured from their non-verbal behaviors and turn-taking status. This measured engagement is used in the reward function, which balances the agent's task (giving information) and its social goal (keeping the user highly engaged). The agent's dialog acts may have different impacts on the user's engagement depending on several factors, such as the user's personality, interest in the discussion topic, and attitude toward the agent. A subjective study was conducted with 120 participants to measure how third-party observers perceive the adaptation of our dialog model. The results show that adapting the agent's conversational strategies influences the participants' perception.
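The abstract describes a reward function that balances the agent's task (giving information) against its social goal (keeping the user engaged). A minimal sketch of such a weighted reward is shown below; the function name, the linear weighting scheme, and the trade-off parameter `alpha` are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a reward balancing a task component against a
# social component, as the abstract describes. The linear combination and
# the parameter names are assumptions for illustration only.

def reward(info_gain: float, engagement: float, alpha: float = 0.5) -> float:
    """Weighted combination of task reward and measured user engagement.

    info_gain  -- task component, e.g. normalized amount of new information given
    engagement -- social component, e.g. engagement estimated from the user's
                  non-verbal behaviors and turn-taking status
    alpha      -- assumed trade-off weight between task and social goals
    """
    return alpha * info_gain + (1.0 - alpha) * engagement

# Example: equal weighting of a moderately informative, highly engaging turn
print(round(reward(info_gain=0.6, engagement=0.9, alpha=0.5), 2))  # 0.75
```

In a reinforcement learning setup, this scalar reward would be observed after each agent dialog act, so the learned policy favors conversational strategies that deliver information without letting the measured engagement drop.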