Zavatone-Veth Jacob A, Bordelon Blake, Pehlevan Cengiz
Center for Brain Science, Harvard University, Cambridge, MA, USA.
Society of Fellows, Harvard University, Cambridge, MA, USA.
ArXiv. 2025 Jul 14:arXiv:2504.16920v2.
How can we make sense of large-scale recordings of neural activity across learning? Theories of neural network learning with their origins in statistical physics offer a potential answer: for a given task, there is often a small set of summary statistics that suffices to predict performance as the network learns. Here, we review recent advances in how summary statistics can be used to build a theoretical understanding of neural network learning. We then argue that this perspective can inform the analysis of neural data, enabling a better understanding of learning in biological and artificial neural networks.
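As a concrete illustration of such summary statistics, consider a classic teacher-student perceptron example from the statistical physics of learning (a standard textbook setup, not a formulation taken from this paper): for a student with weights $\mathbf{w} \in \mathbb{R}^{N}$ learning to match a teacher $\mathbf{w}^{*}$ on Gaussian inputs, the generalization error of the thresholded predictions depends on the $N$-dimensional weights only through two scalar order parameters,

\[
m = \frac{\mathbf{w} \cdot \mathbf{w}^{*}}{N}, \qquad q = \frac{\mathbf{w} \cdot \mathbf{w}}{N},
\]

via

\[
\epsilon_{g} = \frac{1}{\pi} \arccos\!\left( \frac{m}{\sqrt{q\, q^{*}}} \right), \qquad q^{*} = \frac{\mathbf{w}^{*} \cdot \mathbf{w}^{*}}{N}.
\]

Tracking how $(m, q)$ evolve during training therefore suffices to predict performance across learning; this is the sense in which a small set of summary statistics can stand in for the full high-dimensional network state.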