Yax Nicolas, Anlló Hernán, Palminteri Stefano
Laboratoire de neurosciences cognitives et computationnelles, Institut national de la santé et de la recherche médicale, Paris, France.
Département d'études cognitives, Ecole normale supérieure - PSL Research University, Paris, France.
Commun Psychol. 2024 Jun 3;2(1):51. doi: 10.1038/s44271-024-00091-8.
In the present study, we investigate and compare reasoning in large language models (LLMs) and humans, using a selection of cognitive psychology tools traditionally dedicated to the study of (bounded) rationality. We presented human participants and an array of pretrained LLMs with new variants of classical cognitive experiments and cross-compared their performance. Our results showed that most of the included models presented reasoning errors akin to those frequently ascribed to error-prone, heuristic-based human reasoning. Notwithstanding this superficial similarity, an in-depth comparison between humans and LLMs indicated important departures from human-like reasoning, with the models' limitations disappearing almost entirely in more recent LLM releases. Moreover, we show that while it is possible to devise strategies that induce better performance, humans and machines are not equally responsive to the same prompting schemes. We conclude by discussing the epistemological implications and challenges of comparing human and machine behavior for both artificial intelligence and cognitive psychology.