The Double-Edged Sword of Anthropomorphism in LLMs

Author Information

Reinecke Madeline G, Ting Fransisca, Savulescu Julian, Singh Ilina

Affiliations

Uehiro Oxford Institute, University of Oxford, Oxford OX1 1PT, UK.

Department of Psychiatry, University of Oxford, Oxford OX3 7JX, UK.

Publication Information

Proceedings (MDPI). 2025 Feb 26;114(1):4. doi: 10.3390/proceedings2025114004.

Abstract

Humans may have evolved to be "hyperactive agency detectors". Upon hearing a rustle in a pile of leaves, it would be safer to assume that an agent, like a lion, hides beneath (even if there may ultimately be nothing there). Can this evolutionary cognitive mechanism, and related mechanisms of anthropomorphism, explain some of people's contemporary experience with using chatbots (e.g., ChatGPT, Gemini)? In this paper, we sketch how such mechanisms may engender the seemingly irresistible anthropomorphism of large language model-based chatbots. We then explore the implications of this within the educational context. Specifically, we argue that people's tendency to perceive a "mind in the machine" is a double-edged sword for educational progress: though anthropomorphism can facilitate motivation and learning, it may also lead students to trust, and potentially over-trust, content generated by chatbots. To be sure, students do seem to recognize that LLM-generated content may, at times, be inaccurate. We argue, however, that the rise of anthropomorphism towards chatbots will only serve to further camouflage these inaccuracies. We close by considering how research can turn towards aiding students in becoming digitally literate, avoiding the pitfalls caused by perceiving agency and humanlike mental states in chatbots.
