MIT Media Lab, Cambridge, MA, United States.
Language Technologies Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, United States.
JMIR Ment Health. 2024 Sep 25;11:e62679. doi: 10.2196/62679.
Empathy is a driving force in our connection to others, our mental well-being, and our resilience to challenges. With the rise of generative artificial intelligence (AI) systems, mental health chatbots, and AI social support companions, it is important to understand how empathy unfolds toward stories from human versus AI narrators and how transparency plays a role in user emotions.
We aim to understand how empathy shifts between human-written and AI-written stories, and how these findings inform the ethical implications and human-centered design of mental health chatbots as objects of empathy.
We conducted crowd-sourced studies with 985 participants, each of whom wrote a personal story and then rated their empathy toward 2 retrieved stories, one written by a language model and the other written by a human. Our studies varied whether the story's author was disclosed as a human or an AI system, to examine how transparency about authorship affects empathy toward the narrator. We conducted mixed methods analyses: through statistical tests, we compared participants' self-reported state empathy toward the stories across conditions. In addition, we qualitatively coded open-ended feedback about reactions to the stories to understand how and why transparency affects empathy toward human versus AI storytellers.
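To make the statistical comparison concrete, the following is a minimal sketch (not the authors' analysis code) of how state-empathy ratings might be compared with a t test and a Cohen d effect size. The simulated data, the 1-7 rating scale, and the choice of a paired test (each participant rated both a human-written and an AI-written story) are assumptions; the abstract does not specify these analysis details.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30  # hypothetical number of participants

# Hypothetical state-empathy ratings toward the two stories each
# participant read, clipped to a 1-7 Likert-style scale.
human_story = np.clip(rng.normal(5.5, 1.0, n), 1, 7)
ai_story = np.clip(rng.normal(4.8, 1.0, n), 1, 7)

# Paired-samples t test (two-sided), since each participant
# contributes one rating per story type.
t_stat, p_value = stats.ttest_rel(human_story, ai_story)

# Cohen d for paired data: mean difference over the SD of the differences.
diff = human_story - ai_story
cohen_d = diff.mean() / diff.std(ddof=1)

print(f"t = {t_stat:.2f}, P = {p_value:.3f}, Cohen d = {cohen_d:.2f}")
```

For the between-condition comparisons (eg, disclosure vs no disclosure), an independent-samples test such as scipy.stats.ttest_ind, with a pooled-SD Cohen d, would be the analogous sketch.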
We found that participants empathized significantly more with human-written than with AI-written stories in almost all conditions, regardless of whether they were aware (t=7.07, P<.001, Cohen d=0.60) or unaware (t=3.46, P<.001, Cohen d=0.24) that an AI system wrote the story. We also found that participants reported greater willingness to empathize with AI-written stories when the story's authorship was disclosed (t=-5.49, P<.001, Cohen d=0.36).
Our work sheds light on how empathy toward AI or human narrators is tied to the way the text is presented, thus informing ethical considerations for empathetic AI social support companions and mental health chatbots.