

What is it like to be a bot? The world according to GPT-4.

Author information

Lloyd, Dan

Affiliation

Trinity College, Hartford, CT, United States.

Publication information

Front Psychol. 2024 Aug 7;15:1292675. doi: 10.3389/fpsyg.2024.1292675. eCollection 2024.

Abstract

The recent explosion of Large Language Models (LLMs) has provoked lively debate about "emergent" properties of the models, including intelligence, insight, creativity, and meaning. These debates are rocky for two main reasons: The emergent properties sought are not well-defined; and the grounds for their dismissal often rest on a fallacious appeal to extraneous factors, like the LLM training regime, or fallacious assumptions about processes within the model. The latter issue is a particular roadblock for LLMs because their internal processes are largely unknown - they are colossal black boxes. In this paper, I try to cut through these problems by, first, identifying one salient feature shared by systems we regard as intelligent/conscious/sentient/etc., namely, their responsiveness to environmental conditions that may not be near in space and time. They engage with subjective worlds ("s-worlds") which may or may not conform to the actual environment. Observers can infer s-worlds from behavior alone, enabling hypotheses about perception and cognition that do not require evidence from the internal operations of the systems in question. The reconstruction of s-worlds offers a framework for comparing cognition across species, affording new leverage on the possible sentience of LLMs. Here, we examine one prominent LLM, OpenAI's GPT-4. Inquiry into the emergence of a complex subjective world is facilitated with philosophical phenomenology and cognitive ethology, examining the pattern of errors made by GPT-4 and proposing their origin in the absence of an analogue of the human subjective awareness of time. This deficit suggests that GPT-4 ultimately lacks a capacity to construct a stable perceptual world; the temporal vacuum undermines any capacity for GPT-4 to construct a consistent, continuously updated, model of its environment. Accordingly, none of GPT-4's statements are epistemically secure. Because the anthropomorphic illusion is so strong, I conclude by suggesting that GPT-4 works with its users to construct improvised works of fiction.


Similar articles

Peer review of GPT-4 technical report and systems card.
PLOS Digit Health. 2024 Jan 18;3(1):e0000417. doi: 10.1371/journal.pdig.0000417. eCollection 2024 Jan.

Cited by

Digital Doppelgängers and Lifespan Extension: What Matters?
Am J Bioeth. 2025 Feb;25(2):95-110. doi: 10.1080/15265161.2024.2416133. Epub 2024 Nov 14.
