Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, Notre Dame, IN, United States of America.
PLoS One. 2022 Mar 31;17(3):e0266102. doi: 10.1371/journal.pone.0266102. eCollection 2022.
Artificial neural networks are often interpreted as abstract models of biological neuronal networks, but they are typically trained using the biologically unrealistic backpropagation algorithm and its variants. Predictive coding has been proposed as a potentially more biologically realistic alternative to backpropagation for training neural networks. This manuscript reviews and extends recent work on the mathematical relationship between predictive coding and backpropagation for training feedforward artificial neural networks on supervised learning tasks. Implications of these results for the interpretation of predictive coding and deep neural networks as models of biological learning are discussed along with a repository of functions, Torch2PC, for performing predictive coding with PyTorch neural network models.
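The abstract refers to predictive coding as an alternative to backpropagation and to the Torch2PC functions for PyTorch models. The sketch below illustrates the basic supervised predictive coding scheme for a small feedforward network: an inference phase that relaxes layer-wise beliefs to reduce a sum of squared prediction errors, followed by a learning phase that updates each layer's weights from its local prediction error. It is a minimal illustration of the technique, not the Torch2PC API; the network sizes, step sizes, iteration count, and the choice to clamp the output layer to the target are assumptions made for the example.

# Illustrative sketch of supervised predictive coding for a small feedforward
# network in PyTorch. Not the Torch2PC API; sizes and hyperparameters are
# assumptions made for this example.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A feedforward model split into per-layer functions f_1, ..., f_L.
layers = nn.ModuleList([
    nn.Sequential(nn.Linear(10, 32), nn.Tanh()),
    nn.Sequential(nn.Linear(32, 32), nn.Tanh()),
    nn.Linear(32, 2),
])

x = torch.randn(16, 10)          # input batch
y = torch.randn(16, 2)           # targets (squared-error setup assumed)

eta_v, eta_w, n_iters = 0.1, 1e-3, 20

# Initialize beliefs v_0, ..., v_L with a forward pass, then clamp the ends.
v = [x]
for f in layers:
    v.append(f(v[-1]))
v[-1] = y.clone()                # clamp the output layer to the target

# Inference phase: relax hidden-layer beliefs to reduce the energy
# F = sum_l ||v_l - f_l(v_{l-1})||^2 / 2, with v_0 and v_L held fixed.
for _ in range(n_iters):
    v_hidden = [u.detach().requires_grad_(True) for u in v[1:-1]]
    states = [x] + v_hidden + [y]
    F = sum(0.5 * ((states[l + 1] - f(states[l])) ** 2).sum()
            for l, f in enumerate(layers))
    grads = torch.autograd.grad(F, v_hidden)
    v[1:-1] = [u - eta_v * g for u, g in zip(v_hidden, grads)]

# Learning phase: update each layer's parameters from its local
# prediction error, with the relaxed beliefs held fixed.
F = sum(0.5 * ((v[l + 1].detach() - f(v[l].detach())) ** 2).sum()
        for l, f in enumerate(layers))
F.backward()
with torch.no_grad():
    for p in layers.parameters():
        p -= eta_w * p.grad
        p.grad = None

Variants of this scheme differ in how the output error is introduced and in whether predictions are held fixed during the inference phase; those choices govern how closely the resulting weight updates match the gradients computed by backpropagation, which is the relationship the manuscript analyzes.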