Biophysics Program, Harvard University, Cambridge, MA, USA.
Wolfram Physics Project, Cape Town, South Africa.
Nat Commun. 2023 Apr 19;14(1):2226. doi: 10.1038/s41467-023-37980-1.
Machine learning (ML) models have long overlooked innateness: how strong pressures for survival lead to the encoding of complex behaviors in the nascent wiring of a brain. Here, we derive a neurodevelopmental encoding of artificial neural networks that considers the weight matrix of a neural network to be emergent from well-studied rules of neuronal compatibility. Rather than updating the network's weights directly, we improve task fitness by updating the neurons' wiring rules, thereby mirroring evolutionary selection on brain development. We find that our model (1) provides sufficient representational power for high accuracy on ML benchmarks while also compressing parameter count, and (2) can act as a regularizer, selecting simple circuits that provide stable and adaptive performance on metalearning tasks. In summary, by introducing neurodevelopmental considerations into ML frameworks, we not only model the emergence of innate behaviors, but also define a discovery process for structures that promote complex computations.
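One way to read the encoding described above is as a factorization in which each neuron carries a short identity vector and a shared wiring rule scores the compatibility of two identities; selection then acts on the identities and the rule rather than on the weights themselves. The sketch below illustrates only this parameter-compression idea under that assumption; the variable names (`X_pre`, `X_post`, `O`) and the bilinear form are illustrative, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each neuron has a low-dimensional "identity" vector,
# and a shared rule O scores how strongly two identities wire together.
n_in, n_out, d = 784, 128, 16   # d << n_in, n_out gives the compression

X_pre = rng.normal(size=(n_in, d))    # identities of presynaptic neurons
X_post = rng.normal(size=(n_out, d))  # identities of postsynaptic neurons
O = rng.normal(size=(d, d))           # wiring rule, updated in place of W

# The weight matrix is emergent, never stored as free parameters:
W = X_pre @ O @ X_post.T              # shape (n_in, n_out)

encoded_params = X_pre.size + X_post.size + O.size
direct_params = W.size
print(W.shape, encoded_params, direct_params)
```

Here the encoded representation needs `d * (n_in + n_out) + d * d` numbers instead of `n_in * n_out`, which is the sense in which wiring rules compress the parameter count while still generating a full weight matrix.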