Madeline G. Reinecke, Fransisca Ting, Julian Savulescu, Ilina Singh
Uehiro Oxford Institute, University of Oxford, Oxford OX1 1PT, UK.
Department of Psychiatry, University of Oxford, Oxford OX3 7JX, UK.
Proceedings (MDPI). 2025 Feb 26;114(1):4. doi: 10.3390/proceedings2025114004.
Humans may have evolved to be "hyperactive agency detectors": upon hearing a rustle in a pile of leaves, it is safer to assume that an agent, such as a lion, hides beneath, even if there ultimately turns out to be nothing there. Can this evolved cognitive mechanism, together with related mechanisms of anthropomorphism, explain some of people's contemporary experiences with chatbots (e.g., ChatGPT, Gemini)? In this paper, we sketch how such mechanisms may engender the seemingly irresistible anthropomorphism of chatbots based on large language models (LLMs). We then explore the implications of this within the educational context. Specifically, we argue that people's tendency to perceive a "mind in the machine" is a double-edged sword for educational progress: though anthropomorphism can facilitate motivation and learning, it may also lead students to trust, and potentially over-trust, content generated by chatbots. To be sure, students do seem to recognize that LLM-generated content may, at times, be inaccurate. We argue, however, that the rise of anthropomorphism towards chatbots will only serve to further camouflage these inaccuracies. We close by considering how research can turn towards aiding students in becoming digitally literate, avoiding the pitfalls of perceiving agency and humanlike mental states in chatbots.