PMI Laboratory, School of Physics, Sun Yat-sen University, Guangzhou 510275, People's Republic of China.
Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, California 92093, USA.
Phys Rev E. 2024 Apr;109(4-1):044309. doi: 10.1103/PhysRevE.109.044309.
Large language models based on self-attention mechanisms have achieved astonishing performance, not only on natural language itself but also on a variety of tasks of a different nature. However, the human brain may not process language using the same principle, which raises a debate about the connection between brain computation and the artificial self-supervision adopted in large language models. One of the most influential hypotheses in brain computation is the predictive coding framework, which proposes to minimize prediction error via local learning. However, the role of predictive coding and the associated credit assignment in language processing remains unknown. Here, we propose a mean-field learning model within the predictive coding framework, assuming that the synaptic weight of each connection follows a spike-and-slab distribution and that only the distribution, rather than specific weights, is trained. This meta predictive learning is successfully validated on classifying handwritten digits, where pixels are input to the network in sequence, and moreover on toy and real language corpora. Our model reveals that most connections become deterministic after learning, while the output connections retain a higher level of variability. The performance of the resulting network ensemble changes continuously with data load, improving further with more training data, in analogy with the emergent behavior of large language models. Our model therefore provides a starting point for investigating the connection among brain computation, next-token prediction, and general intelligence.
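The abstract's core idea, training a spike-and-slab distribution over weights rather than the weights themselves, admits a compact mean-field implementation. Below is a minimal sketch, assuming a standard spike-and-slab parameterization (slab probability pi, slab mean m, slab standard deviation sigma) and the usual Gaussian mean-field approximation of the preactivation; the class and variable names are illustrative and not taken from the paper's code, and gradient updates of (pi, m, sigma) via an autodiff framework are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

class SpikeSlabLayer:
    """One layer whose weights follow a spike-and-slab distribution:
    w = 0 with probability 1 - pi (spike), w ~ N(m, sigma^2) otherwise (slab).
    Only the distribution parameters (pi, m, sigma) would be trained."""

    def __init__(self, n_in, n_out):
        # Trainable distribution parameters (illustrative initialization).
        self.pi = np.full((n_in, n_out), 0.5)        # slab probability
        self.m = rng.normal(0.0, 0.1, (n_in, n_out)) # slab mean
        self.sigma = np.full((n_in, n_out), 0.1)     # slab std

    def forward(self, x):
        # Mean-field forward pass: by the central limit theorem, the
        # preactivation z = sum_j w_j x_j is approximately Gaussian with
        #   mean     mu  = sum_j (pi * m) x_j
        #   variance var = sum_j [pi (sigma^2 + m^2) - (pi m)^2] x_j^2
        # using E[w] = pi*m and E[w^2] = pi*(sigma^2 + m^2).
        mu = x @ (self.pi * self.m)
        var = (x ** 2) @ (self.pi * (self.sigma ** 2 + self.m ** 2)
                          - (self.pi * self.m) ** 2)
        # Reparameterization: sample z from the Gaussian so that gradients
        # can flow to (pi, m, sigma) instead of to individual weights.
        eps = rng.standard_normal(mu.shape)
        return mu + np.sqrt(np.maximum(var, 1e-12)) * eps

# Usage: e.g., a batch of 32 flattened handwritten digits.
layer = SpikeSlabLayer(784, 128)
x = rng.standard_normal((32, 784))
h = layer.forward(x)  # shape (32, 128)
```

One consequence of this parameterization, consistent with the abstract's finding, is that a connection becomes effectively deterministic when pi approaches 0 or 1 with small sigma, while connections that keep intermediate pi or large sigma remain variable across the network ensemble.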