
Toward reproducible models of sequence learning: replication and analysis of a modular spiking network with reward-based learning.

Authors

Zajzon Barna, Duarte Renato, Morrison Abigail

Affiliations

Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-BRAIN Institute I, Jülich Research Centre, Jülich, Germany.

Department of Computer Science 3-Software Engineering, RWTH Aachen University, Aachen, Germany.

Publication

Front Integr Neurosci. 2023 Jun 15;17:935177. doi: 10.3389/fnint.2023.935177. eCollection 2023.

DOI: 10.3389/fnint.2023.935177
PMID: 37396571
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10310927/
Abstract

To acquire statistical regularities from the world, the brain must reliably process, and learn from, spatio-temporally structured information. Although an increasing number of computational models have attempted to explain how such sequence learning may be implemented in the neural hardware, many remain limited in functionality or lack biophysical plausibility. If we are to harvest the knowledge within these models and arrive at a deeper mechanistic understanding of sequential processing in cortical circuits, it is critical that the models and their findings are accessible, reproducible, and quantitatively comparable. Here we illustrate the importance of these aspects by providing a thorough investigation of a recently proposed sequence learning model. We re-implement the modular columnar architecture and reward-based learning rule in the open-source NEST simulator, and successfully replicate the main findings of the original study. Building on these, we perform an in-depth analysis of the model's robustness to parameter settings and underlying assumptions, highlighting its strengths and weaknesses. We demonstrate a limitation of the model consisting in the hard-wiring of the sequence order in the connectivity patterns, and suggest possible solutions. Finally, we show that the core functionality of the model is retained under more biologically-plausible constraints.
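The study centers on a reward-based (three-factor) plasticity rule in a modular spiking network. As a rough illustration of that general class of rule, and not the paper's actual NEST implementation or columnar architecture, a minimal NumPy sketch might look like the following; all variable names and parameter values here are hypothetical:

```python
import numpy as np

# Illustrative three-factor plasticity sketch: pre/post spike coincidences
# accumulate into an eligibility trace, and weights change only when a
# (possibly delayed) reward signal gates the update.
rng = np.random.default_rng(0)

n_pre, n_post = 20, 5
w = np.full((n_pre, n_post), 0.5)   # synaptic weights
elig = np.zeros_like(w)             # eligibility traces
tau_e = 50.0                        # trace decay time constant (in steps)
lr = 0.1                            # learning rate

def step(pre_spikes, post_spikes, reward, dt=1.0):
    """One update: decay the traces, add the Hebbian coincidence term,
    then apply a weight change gated by the reward signal."""
    global w, elig
    elig *= np.exp(-dt / tau_e)                   # exponential trace decay
    elig += np.outer(pre_spikes, post_spikes)     # Hebbian coincidence term
    w += lr * reward * elig                       # reward-gated update
    np.clip(w, 0.0, 1.0, out=w)                   # keep weights bounded

# Drive the synapses with random spikes; no reward, so weights stay fixed
# while eligibility accumulates.
for t in range(10):
    pre = (rng.random(n_pre) < 0.2).astype(float)
    post = (rng.random(n_post) < 0.2).astype(float)
    step(pre, post, reward=0.0)

# A reward finally arrives: only now do the traced coincidences
# translate into weight changes.
step(np.ones(n_pre), np.ones(n_post), reward=1.0)
```

The key property the sketch shows is the temporal credit assignment: Hebbian activity alone leaves the weights untouched, and learning occurs only when the third (reward) factor arrives, which is what allows such rules to bridge the delay between action and feedback.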

Figures 1-7 (PMC image files):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51f7/10310927/215e747c78cd/fnint-17-935177-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51f7/10310927/b08dcf0f9710/fnint-17-935177-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51f7/10310927/49b8e7d51cea/fnint-17-935177-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51f7/10310927/c197ff569623/fnint-17-935177-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51f7/10310927/ff54d9391a8d/fnint-17-935177-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51f7/10310927/54adc1c7a56b/fnint-17-935177-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/51f7/10310927/79768663e3e0/fnint-17-935177-g0007.jpg

Similar articles

1. Toward reproducible models of sequence learning: replication and analysis of a modular spiking network with reward-based learning.
   Front Integr Neurosci. 2023 Jun 15;17:935177. doi: 10.3389/fnint.2023.935177. eCollection 2023.
2. Learning and replaying spatiotemporal sequences: A replication study.
   Front Integr Neurosci. 2022 Oct 14;16:974177. doi: 10.3389/fnint.2022.974177. eCollection 2022.
3. Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network.
   Elife. 2021 Mar 18;10:e63751. doi: 10.7554/eLife.63751.
4. Using a Low-Power Spiking Continuous Time Neuron (SCTN) for Sound Signal Processing.
   Sensors (Basel). 2021 Feb 4;21(4):1065. doi: 10.3390/s21041065.
5. Performance of a Computational Model of the Mammalian Olfactory System.
6. A biologically plausible supervised learning method for spiking neural networks using the symmetric STDP rule.
   Neural Netw. 2020 Jan;121:387-395. doi: 10.1016/j.neunet.2019.09.007. Epub 2019 Sep 27.
7. Investigating visual navigation using spiking neural network models of the insect mushroom bodies.
   Front Physiol. 2024 May 22;15:1379977. doi: 10.3389/fphys.2024.1379977. eCollection 2024.
8. Introducing double bouquet cells into a modular cortical associative memory model.
   J Comput Neurosci. 2019 Dec;47(2-3):223-230. doi: 10.1007/s10827-019-00729-1. Epub 2019 Sep 9.
9. Characteristic columnar connectivity caters to cortical computation: Replication, simulation, and evaluation of a microcircuit model.
   Front Integr Neurosci. 2022 Oct 3;16:923468. doi: 10.3389/fnint.2022.923468. eCollection 2022.
10. RatInABox, a toolkit for modelling locomotion and neuronal activity in continuous environments.
   Elife. 2024 Feb 9;13:e85274. doi: 10.7554/eLife.85274.

Cited by

1. Learning to express reward prediction error-like dopaminergic activity requires plastic representations of time.
   Nat Commun. 2024 Jul 12;15(1):5856. doi: 10.1038/s41467-024-50205-3.
2. Correction: Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network.
   Elife. 2023 Mar 13;12:e87507. doi: 10.7554/eLife.87507.

References

1. Correction: Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network.
   Elife. 2023 Mar 13;12:e87507. doi: 10.7554/eLife.87507.
2. Characteristic columnar connectivity caters to cortical computation: Replication, simulation, and evaluation of a microcircuit model.
   Front Integr Neurosci. 2022 Oct 3;16:923468. doi: 10.3389/fnint.2022.923468. eCollection 2022.
3. Sequence learning, prediction, and replay in networks of spiking neurons.
   PLoS Comput Biol. 2022 Jun 21;18(6):e1010233. doi: 10.1371/journal.pcbi.1010233. eCollection 2022 Jun.
4. Learning compositional sequences with multiple time scales through a hierarchical network of spiking neurons.
   PLoS Comput Biol. 2021 Mar 25;17(3):e1008866. doi: 10.1371/journal.pcbi.1008866. eCollection 2021 Mar.
5. Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network.
   Elife. 2021 Mar 18;10:e63751. doi: 10.7554/eLife.63751.
6. Learning hierarchical sequence representations across human cortex and hippocampus.
   Sci Adv. 2021 Feb 19;7(8). doi: 10.1126/sciadv.abc4530. Print 2021 Feb.
7. Neuronal spike-rate adaptation supports working memory in language processing.
   Proc Natl Acad Sci U S A. 2020 Aug 25;117(34):20881-20889. doi: 10.1073/pnas.2000222117. Epub 2020 Aug 11.
8. Non-adjacent Dependency Learning in Humans and Other Animals.
   Top Cogn Sci. 2020 Jul;12(3):843-858. doi: 10.1111/tops.12381. Epub 2018 Sep 8.
9. Synaptic Plasticity Forms and Functions.
   Annu Rev Neurosci. 2020 Jul 8;43:95-117. doi: 10.1146/annurev-neuro-090919-022842. Epub 2020 Feb 19.
10. Passing the Message: Representation Transfer in Modular Balanced Networks.
   Front Comput Neurosci. 2019 Dec 5;13:79. doi: 10.3389/fncom.2019.00079. eCollection 2019.