Ben-Zion Ziv, Witte Kristin, Jagadish Akshay K, Duek Or, Harpaz-Rotem Ilan, Khorsandian Marie-Christine, Burrer Achim, Seifritz Erich, Homan Philipp, Schulz Eric, Spiller Tobias R
Department of Comparative Medicine, Yale School of Medicine, New Haven, CT, USA.
Department of Psychiatry, Yale School of Medicine, New Haven, CT, USA.
NPJ Digit Med. 2025 Mar 3;8(1):132. doi: 10.1038/s41746-025-01512-6.
The use of Large Language Models (LLMs) in mental health highlights the need to understand their responses to emotional content. Previous research shows that emotion-inducing prompts can elevate "anxiety" in LLMs, affecting behavior and amplifying biases. Here, we found that traumatic narratives increased ChatGPT-4's reported anxiety, while mindfulness-based exercises reduced it, though not to baseline. These findings suggest that managing LLMs' "emotional states" can foster safer and more ethical human-AI interactions.