Valle-Lisboa Juan C, Pomi Andrés, Mizraji Eduardo
Group of Cognitive Systems Modeling, Biophysics and Systems Biology Section, Facultad de Ciencias, Universidad de la República, Iguá 4225, 11400 Montevideo, Uruguay.
Centro Interdisciplinario en Cognición para la Enseñanza y el Aprendizaje (CICEA), Universidad de la República, Espacio Interdisciplinario, 11200 Montevideo, Uruguay.
Biophys Rev. 2023 Jun 22;15(4):767-785. doi: 10.1007/s12551-023-01074-5. eCollection 2023 Aug.
Explaining how the processing of information by neural systems gives rise to cognitive abilities has been part of biophysics since its beginnings, from the pioneering work of McCulloch and Pitts within the Chicago school of biophysics in the 1940s to the interdisciplinary cybernetics meetings of the 1950s, inseparable from the birth of computing and artificial intelligence. Since then, neural network models have traveled a long path, both in the biophysical and the computational disciplines. The biological, neurocomputational side reached its representational maturity with the Distributed Associative Memory models developed in the early 1970s. In this framework, the inclusion of signal-signal multiplication within neural network models was proposed as necessary to provide matrix associative memories with adaptive, context-sensitive associations, while greatly enhancing their computational capabilities. In this review, we show that several of the most successful neural network models use some form of multiplication of signals. We present several classical models that include this kind of multiplication and the computational reasons for its inclusion. We then turn to the different proposals about the biophysical implementations that may underlie these computational capacities. We pinpoint the important ideas put forward by theoretical models that use tensor product representations, and show that these models endow memories with the context-dependent adaptive capabilities needed for evolutionary adaptation to changing and unpredictable environments. Finally, we show how the powerful abilities of contemporary deep-learning models, inspired by neural networks, also depend on multiplications, and we discuss some perspectives in view of the wide panorama that has unfolded. The computational relevance of multiplication calls for new avenues of research to uncover the mechanisms our nervous system uses to achieve it.
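To make the role of signal-signal multiplication concrete, the following is a minimal numerical sketch (not taken from the review itself) of a context-dependent matrix associative memory of the kind the abstract refers to: a stimulus is bound to a context vector through the Kronecker (tensor) product, so a single matrix memory can map the same stimulus to different outputs depending on context. All vectors, dimensions, and the random patterns are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)

# Illustrative stimulus, context, and output patterns (dimensions arbitrary).
s  = unit(rng.standard_normal(8))   # stimulus
c1 = unit(rng.standard_normal(4))   # context 1
c2 = rng.standard_normal(4)
c2 = unit(c2 - (c2 @ c1) * c1)      # context 2, orthogonalized against c1
y1 = unit(rng.standard_normal(6))   # desired output under context 1
y2 = unit(rng.standard_normal(6))   # desired output under context 2

# Context-dependent matrix memory: M = sum_i y_i (s_i (x) c_i)^T.
# The Kronecker product binds the stimulus to its context, so one matrix
# stores different stimulus -> output associations for different contexts.
M = np.outer(y1, np.kron(s, c1)) + np.outer(y2, np.kron(s, c2))

# Recall: the same stimulus s retrieves different outputs per context.
out1 = M @ np.kron(s, c1)
out2 = M @ np.kron(s, c2)
print(np.allclose(out1, y1), np.allclose(out2, y2))  # True True
```

Recall is exact here because the two contexts are orthogonal, so the cross-terms (s ⊗ c2)·(s ⊗ c1) vanish; with merely uncorrelated contexts the retrieved outputs would carry some crosstalk, as in ordinary distributed memories.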
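As a point of contact with the deep-learning side of the argument, scaled dot-product attention, the core operation of transformer models, is a clear instance of signal-signal multiplication: both the attention weights and the values they gate are computed from the input, so the network multiplies activity by activity rather than activity by a fixed synaptic weight. A minimal sketch with assumed toy dimensions:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    Both factors of each product (queries/keys, then weights/values)
    are signals derived from the input: signal-signal multiplication."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # signal x signal
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # row-wise softmax
    return w @ V                                     # signal x signal again

rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((5, 16)) for _ in range(3))  # 5 tokens, dim 16
print(attention(Q, K, V).shape)  # (5, 16)
```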