Sudeep Bhatia
Department of Psychology, University of Pennsylvania.
Psychol Rev. 2024 Nov;131(6):1373-1391. doi: 10.1037/rev0000446. Epub 2023 Sep 21.
Induction, the ability to generalize from existing knowledge, is the cornerstone of intelligence. Cognitive models of human induction are largely limited to toy problems and cannot make quantitative predictions for the thousands of different induction arguments that have been studied by researchers, or for the countless induction arguments that could be encountered in everyday life. Leading large language models (LLMs) go beyond toy problems but fail to mimic observed patterns of human induction. In this article, we combine rich knowledge representations obtained from LLMs with theories of human inductive reasoning developed by cognitive psychologists. We show that this integrative approach can capture several benchmark empirical findings on human induction and generate human-like responses to natural language arguments with thousands of common categories and properties. These findings shed light on the cognitive mechanisms at play in human induction and show how existing theories in psychology and cognitive science can be integrated with new methods in artificial intelligence to successfully model high-level human cognition. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
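To make the integrative approach concrete, the sketch below shows one plausible way LLM-derived representations could feed a classic cognitive model of induction: category vectors (e.g., from an embedding model) are plugged into a similarity-coverage style rule for scoring an argument's strength. This is an illustrative assumption, not the article's actual implementation; the function names, the `embed` interface, the example weighting parameter `alpha`, and the scoring rule itself are all hypothetical.

```python
# Minimal sketch (assumed, not the paper's code): scoring an induction argument
# "premises have property Q, therefore conclusion has Q" with a
# similarity-coverage style rule over LLM-derived category vectors.
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def argument_strength(premises, conclusion, category_pool, embed, alpha=0.7):
    """Score an induction argument.

    premises, conclusion: category names (strings)
    category_pool: members of the shared superordinate category (for coverage)
    embed: any function mapping a category name to a vector,
           e.g., an LLM embedding lookup (supplied by the caller)
    alpha: hypothetical weight trading off similarity against coverage
    """
    prem_vecs = [embed(p) for p in premises]
    concl_vec = embed(conclusion)

    # Similarity term: closeness of the conclusion to its nearest premise.
    similarity = max(cosine(concl_vec, pv) for pv in prem_vecs)

    # Coverage term: how well the premises span the superordinate category.
    coverage = float(np.mean([
        max(cosine(embed(m), pv) for pv in prem_vecs) for m in category_pool
    ]))

    return alpha * similarity + (1 - alpha) * coverage
```

With embeddings from any off-the-shelf model supplied as `embed`, such a rule would, for instance, tend to rate "robins have this property, therefore sparrows do" above the same argument with a more atypical conclusion category, which is the kind of benchmark pattern (similarity and typicality effects) referred to in the abstract.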