Chou Zane Z, Bouteiller Jean-Marie C
Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States.
Institute for Technology and Medical Systems (ITEMS), Keck School of Medicine, University of Southern California, Los Angeles, CA, United States.
Front Comput Neurosci. 2025 Aug 25;19:1646810. doi: 10.3389/fncom.2025.1646810. eCollection 2025.
Artificial neural networks are limited in the number of patterns they can store and accurately recall, with capacity constraints arising from factors such as network size, architectural structure, pattern sparsity, and pattern dissimilarity. Exceeding these limits causes recall errors and, eventually, catastrophic forgetting, a major challenge in continual learning. In this study, we characterize the theoretical maximum memory capacity of single-layer feedforward networks as a function of these parameters. We derive analytical expressions for maximum theoretical memory capacity and introduce a grid-based construction and sub-sampling method for pattern generation that takes advantage of the full storage potential of the network. Our findings indicate that maximum capacity scales as (N/S)^S, where N is the number of input/output units and S the pattern sparsity, under threshold constraints related to minimum pattern differentiability. Simulation results validate these theoretical predictions and show that the optimal pattern set can be constructed deterministically for any given network size and pattern sparsity, systematically outperforming random pattern generation in storage capacity. This work offers a foundational framework for maximizing storage efficiency in neural network systems and supports the development of data-efficient, sustainable AI.
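The grid construction behind the (N/S)^S count can be illustrated in a few lines. The sketch below is not the authors' code; it assumes S denotes the number of active units per pattern and that N is divisible by S, partitions the N units into S equal blocks, and activates exactly one unit per block, which yields (N/S)^S distinct patterns. The paper's additional sub-sampling step (to satisfy the minimum-differentiability threshold) is omitted here.

```python
import itertools
import numpy as np

def grid_patterns(N, S):
    """Sketch of a grid-based pattern construction (illustrative, under the
    assumptions stated above): partition N units into S contiguous blocks of
    size N // S and activate one unit per block, giving (N/S)**S binary
    patterns, each with exactly S active units."""
    assert N % S == 0, "N must be divisible by S for an even grid"
    block = N // S
    # Each pattern is defined by one active-unit offset per block.
    for choice in itertools.product(range(block), repeat=S):
        p = np.zeros(N, dtype=np.uint8)
        for b, offset in enumerate(choice):
            p[b * block + offset] = 1
        yield p

# Example: N = 8 units, S = 2 active units -> (8/2)**2 = 16 patterns.
patterns = list(grid_patterns(8, 2))
print(len(patterns))   # 16
print(patterns[0])     # [1 0 0 0 1 0 0 0]
```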