Stephen Grossberg
Departments of Mathematics and Statistics, Psychological and Brain Sciences, and Biomedical Engineering, Boston University, Boston, MA, United States.
Front Syst Neurosci. 2025 Jul 30;19:1630151. doi: 10.3389/fnsys.2025.1630151. eCollection 2025.
This article describes a biological neural network model that explains how humans learn to understand large language models and their meanings. This kind of learning typically occurs when a student learns from a teacher about events that they experience together. Multiple types of self-organizing brain processes are involved, including content-addressable memory; conscious visual perception; joint attention; object learning, categorization, and cognition; conscious recognition; cognitive working memory; cognitive planning; neural-symbolic computing; emotion; cognitive-emotional interactions and reinforcement learning; volition; and goal-oriented actions. The article advances earlier results showing how small language models that have perceptual and affective meanings are learned. The current article explains how humans, and neural network models thereof, learn to consciously see and recognize an unlimited number of visual scenes. Bi-directional associative links can then be learned, and stably remembered, between these scenes, the emotions that they evoke, and the descriptive language utterances associated with them. Adaptive resonance theory circuits control model learning and self-stabilizing memory. These human capabilities are not found in AI models such as ChatGPT. The current model is called ChatSOME, where SOME abbreviates Self-Organizing MEaning. The article summarizes neural network highlights since the 1950s and leading models, including adaptive resonance, deep learning, LLMs, and transformers.
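The abstract's claim that adaptive resonance theory (ART) circuits control learning while self-stabilizing memory can be illustrated with the classical ART-1 algorithm for binary inputs: a vigilance parameter decides whether an input resonates with an existing category (and refines its template) or triggers search and the commitment of a new category. The sketch below is a minimal illustration of that mechanism, not an implementation from the article; all function and parameter names (`art1_train`, `vigilance`, `beta`, `max_categories`) are illustrative, and inputs are assumed to be non-empty binary vectors.

```python
import numpy as np

def art1_train(patterns, vigilance=0.6, beta=1.0, max_categories=10):
    """Minimal ART-1 fast-learning sketch for binary input patterns.

    Returns the learned category templates and, for each input, the
    index of the category it resonated with (None if capacity is full).
    """
    prototypes = []   # learned binary category templates
    assignments = []
    for pattern in patterns:
        I = np.asarray(pattern, dtype=bool)
        # Rank committed categories by the ART-1 choice function
        # T_j = |I AND w_j| / (beta + |w_j|).
        order = sorted(
            range(len(prototypes)),
            key=lambda j: -(np.sum(I & prototypes[j])
                            / (beta + np.sum(prototypes[j]))),
        )
        chosen = None
        for j in order:
            # Vigilance test: does the matched template cover enough of I?
            match = np.sum(I & prototypes[j]) / np.sum(I)
            if match >= vigilance:
                chosen = j   # resonance: accept this category
                break
            # Otherwise reset and search the next category.
        if chosen is not None:
            # Fast learning: template becomes the intersection, so it
            # only ever shrinks -- this is what stabilizes memory.
            prototypes[chosen] = prototypes[chosen] & I
        elif len(prototypes) < max_categories:
            prototypes.append(I.copy())   # commit a new category
            chosen = len(prototypes) - 1
        assignments.append(chosen)
    return prototypes, assignments
```

Presenting the same patterns a second time shows the self-stabilizing property: once templates settle, repeated inputs re-select the same categories without further recoding, and raising `vigilance` toward 1 forces finer-grained categories.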