Lee Inju, Hahn Sowon
Human Factors Psychology Lab, Department of Psychology, Seoul National University, Seoul, Republic of Korea.
Front Psychol. 2024 Mar 6;15:1282036. doi: 10.3389/fpsyg.2024.1282036. eCollection 2024.
The social support provided by chatbots is typically designed to mimic the way humans support one another. However, individuals hold more ambivalent attitudes toward chatbots that provide emotional support (e.g., empathy and encouragement) than toward those that provide informational support (e.g., useful information and advice). This difference may depend on whether individuals associate a given type of support with the realm of the human mind, and on whether they attribute human-like minds to chatbots. In the present study, we investigated whether perceiving a human-like mind in a chatbot affects users' acceptance of the support it provides. In the experiment, the chatbot asked participants about interpersonal stress events, prompting them to write down their stressful experiences. Depending on the experimental condition, the chatbot then provided one of two kinds of social support: informational support or emotional support. Our results showed that when participants explicitly perceived a human-like mind in the chatbot, they considered its support more helpful in resolving stressful events. The relationship between implicit mind perception and perceived message effectiveness differed depending on the type of support: when participants did not implicitly attribute a human-like mind to the chatbot, emotional support undermined the effectiveness of the message, whereas informational support did not. The present findings suggest that users' mind perception is essential for understanding the user experience of chatbot social support. They imply that informational support can be trusted when building social support chatbots, whereas the effectiveness of emotional support depends on users implicitly attributing a human-like mind to the chatbot.