

Constructing neural networks with pre-specified dynamics.

Author information

Mininni Camilo J, Zanutto B Silvano

Affiliations

Instituto de Biología y Medicina Experimental, Consejo Nacional de Investigaciones Científicas y Técnicas, Buenos Aires, Argentina.

Instituto de Ingeniería Biomédica, Universidad de Buenos Aires, Buenos Aires, Argentina.

Publication information

Sci Rep. 2024 Aug 14;14(1):18860. doi: 10.1038/s41598-024-69747-z.

Abstract

A main goal in neuroscience is to understand the computations carried out by neural populations that give animals their cognitive skills. Neural network models make it possible to formulate explicit hypotheses regarding the algorithms instantiated in the dynamics of a neural population, its firing statistics, and the underlying connectivity. Neural networks can be defined by a small set of parameters, carefully chosen to procure specific capabilities, or by a large set of free parameters, fitted with optimization algorithms that minimize a given loss function. In this work we propose an alternative: a method for adjusting the network dynamics and firing statistics in detail, so as to better answer questions that link dynamics, structure, and function. Our algorithm, termed generalised Firing-to-Parameter (gFTP), provides a way to construct binary recurrent neural networks whose dynamics strictly follow a transition graph pre-specified by the user, detailing the transitions between population firing states triggered by stimulus presentations. Our main contribution is a procedure that detects when a transition graph is not realisable as a neural network and makes the modifications needed to obtain a new transition graph that is realisable and preserves all the information encoded in the transitions of the original graph. Given a realisable transition graph, gFTP assigns firing-state values to each node in the graph and finds the synaptic weight matrices by solving a set of linear separation problems. We test gFTP's performance by constructing networks with random dynamics, continuous attractor-like dynamics that encode position in 2-dimensional space, and discrete attractor dynamics. We then show how gFTP can be employed as a tool to explore the link between structure, function, and the algorithms instantiated in the network dynamics.
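
The abstract's final construction step, finding the synaptic weight matrices by solving a set of linear separation problems, can be pictured with a short sketch. The code below is not the authors' implementation; it is a minimal illustration, assuming a perceptron-style learning rule and hypothetical helper names (fit_neuron_weights, fit_network), of how each neuron's incoming weights could be fit so that its next-state bit is a linear threshold function of the current population state and the stimulus.

```python
import numpy as np

def fit_neuron_weights(X, y, lr=0.1, max_epochs=1000):
    """Perceptron fit: seek w, b with step(X @ w + b) == y (assumes linear separability)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for x_i, t in zip(X, y):
            pred = 1.0 if x_i @ w + b > 0 else 0.0
            if pred != t:
                w += lr * (t - pred) * x_i
                b += lr * (t - pred)
                errors += 1
        if errors == 0:            # every transition bit reproduced exactly
            return w, b, True
    return w, b, False             # not separated within the epoch budget

def fit_network(states, stimuli, next_states):
    """One linear separation problem per neuron: (current state, stimulus) -> next bit."""
    X = np.hstack([states, stimuli])      # inputs seen by every neuron
    W, B, all_ok = [], [], True
    for j in range(next_states.shape[1]):
        w, b, ok = fit_neuron_weights(X, next_states[:, j])
        W.append(w)
        B.append(b)
        all_ok = all_ok and ok
    return np.array(W), np.array(B), all_ok

# Toy usage: 6 transitions over 3 neurons and 2 stimulus units, drawn at random.
rng = np.random.default_rng(0)
states = rng.integers(0, 2, size=(6, 3)).astype(float)
stimuli = rng.integers(0, 2, size=(6, 2)).astype(float)
next_states = rng.integers(0, 2, size=(6, 3)).astype(float)
W, B, realisable = fit_network(states, stimuli, next_states)
print("all per-neuron problems separable:", realisable)
```

In this reading, a transition graph is realisable precisely when every such per-neuron problem is linearly separable; the gFTP procedure described in the abstract checks this condition and, when it fails, modifies the graph while preserving the information encoded in its transitions.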


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3ab4/11324765/952e27047b2b/41598_2024_69747_Fig1_HTML.jpg
