

Exploring Flip Flop memories and beyond: training Recurrent Neural Networks with key insights

Authors

Jarne Cecilia

Affiliations

Departamento de Ciencia y Tecnologia de la Universidad Nacional de Quilmes, Bernal, Quilmes, Buenos Aires, Argentina.

CONICET, Buenos Aires, Argentina.

Publication

Front Syst Neurosci. 2024 Mar 27;18:1269190. doi: 10.3389/fnsys.2024.1269190. eCollection 2024.

DOI: 10.3389/fnsys.2024.1269190
PMID: 38600907
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11004305/
Abstract

Training neural networks to perform different tasks is relevant across various disciplines. In particular, Recurrent Neural Networks (RNNs) are of great interest in Computational Neuroscience. Open-source frameworks dedicated to Machine Learning, such as Tensorflow and Keras have produced significant changes in the development of technologies that we currently use. This work contributes by comprehensively investigating and describing the application of RNNs for temporal processing through a study of a 3-bit Flip Flop memory implementation. We delve into the entire modeling process, encompassing equations, task parametrization, and software development. The obtained networks are meticulously analyzed to elucidate dynamics, aided by an array of visualization and analysis tools. Moreover, the provided code is versatile enough to facilitate the modeling of diverse tasks and systems. Furthermore, we present how memory states can be efficiently stored in the vertices of a cube in the dimensionally reduced space, supplementing previous results with a distinct approach.

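The abstract describes parametrizing a 3-bit Flip Flop task for RNN training: each input channel carries sparse ±1 pulses, and each output channel must hold the sign of the last pulse seen on its channel, giving 2³ stable memory states. Below is a minimal numpy sketch of that task parametrization, under assumptions of my own (pulse probability, sequence length, and the function name `make_flipflop_batch` are illustrative, not values from the paper):

```python
import numpy as np

def make_flipflop_batch(batch_size=64, T=200, n_bits=3, p_pulse=0.02, seed=0):
    """Generate (input, target) pairs for an n-bit flip-flop task.

    Inputs: sparse +1/-1 pulses on each of n_bits channels.
    Targets: each channel holds the sign of the most recent pulse on the
    corresponding input channel, so a trained network must maintain
    2**n_bits stable memory states.
    """
    rng = np.random.default_rng(seed)
    # Choose pulse locations and signs independently per channel and step.
    pulses = rng.random((batch_size, T, n_bits)) < p_pulse
    signs = rng.choice([-1.0, 1.0], size=(batch_size, T, n_bits))
    x = np.where(pulses, signs, 0.0)
    # Target: carry the last nonzero input sign forward in time.
    y = np.zeros_like(x)
    for t in range(T):
        prev = y[:, t - 1, :] if t > 0 else np.zeros((batch_size, n_bits))
        y[:, t, :] = np.where(x[:, t, :] != 0, x[:, t, :], prev)
    return x, y
```

With this shape convention (batch, time, channels), the arrays can be fed directly to a recurrent model in a framework such as Keras; the cube-vertex picture in the paper corresponds to the 8 distinct target vectors in {-1, +1}³ that the hidden state must represent.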

Figures (1-7):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d89/11004305/8ef62f30094a/fnsys-18-1269190-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d89/11004305/f8eee2c37779/fnsys-18-1269190-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d89/11004305/684f0bfa2fbe/fnsys-18-1269190-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d89/11004305/51c5cd9fa2e0/fnsys-18-1269190-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d89/11004305/d1ce390b4ac5/fnsys-18-1269190-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d89/11004305/7c08212f9817/fnsys-18-1269190-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/9d89/11004305/11f78eb83dc6/fnsys-18-1269190-g0007.jpg

Similar articles

1. Exploring Flip Flop memories and beyond: training Recurrent Neural Networks with key insights. Front Syst Neurosci. 2024 Mar 27;18:1269190. doi: 10.3389/fnsys.2024.1269190. eCollection 2024.
2. PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks. eNeuro. 2021 Jan 15;8(1). doi: 10.1523/ENEURO.0427-20.2020. Print 2021 Jan-Feb.
3. tension: A Python package for FORCE learning. PLoS Comput Biol. 2022 Dec 19;18(12):e1010722. doi: 10.1371/journal.pcbi.1010722. eCollection 2022 Dec.
4. Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Comput. 2013 Mar;25(3):626-49. doi: 10.1162/NECO_a_00409. Epub 2012 Dec 28.
5. Exploring weight initialization, diversity of solutions, and degradation in recurrent neural networks trained for temporal and decision-making tasks. J Comput Neurosci. 2023 Nov;51(4):407-431. doi: 10.1007/s10827-023-00857-9. Epub 2023 Aug 10.
6. Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies. bioRxiv. 2023 Oct 10:2023.10.10.561588. doi: 10.1101/2023.10.10.561588.
7. Hybrid deep learning for computational precision in cardiac MRI segmentation: Integrating Autoencoders, CNNs, and RNNs for enhanced structural analysis. Comput Biol Med. 2025 Mar;186:109597. doi: 10.1016/j.compbiomed.2024.109597. Epub 2025 Jan 1.
8. Computational implementation of a tunable multicellular memory circuit for engineered eukaryotic consortia. Front Physiol. 2015 Oct 9;6:281. doi: 10.3389/fphys.2015.00281. eCollection 2015.
9. Winning the Lottery With Neural Connectivity Constraints: Faster Learning Across Cognitive Tasks With Spatially Constrained Sparse RNNs. Neural Comput. 2023 Oct 10;35(11):1850-1869. doi: 10.1162/neco_a_01613.
10. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework. PLoS Comput Biol. 2016 Feb 29;12(2):e1004792. doi: 10.1371/journal.pcbi.1004792. eCollection 2016 Feb.

References cited in this article

1. Effect in the spectra of eigenvalues and dynamics of RNNs trained with excitatory-inhibitory constraint. Cogn Neurodyn. 2024 Jun;18(3):1323-1335. doi: 10.1007/s11571-023-09956-w. Epub 2023 Apr 6.
2. Trained recurrent neural networks develop phase-locked limit cycles in a working memory task. PLoS Comput Biol. 2024 Feb 5;20(2):e1011852. doi: 10.1371/journal.pcbi.1011852. eCollection 2024 Feb.
3. Effective Surrogate Gradient Learning With High-Order Information Bottleneck for Spike-Based Machine Intelligence. IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1734-1748. doi: 10.1109/TNNLS.2023.3329525. Epub 2025 Jan 7.
4. Exploring weight initialization, diversity of solutions, and degradation in recurrent neural networks trained for temporal and decision-making tasks. J Comput Neurosci. 2023 Nov;51(4):407-431. doi: 10.1007/s10827-023-00857-9. Epub 2023 Aug 10.
5. Nerve growth factor promotes differentiation and protects the oligodendrocyte precursor cells from hypoxia/ischemia. Front Neurosci. 2023 Feb 16;17:1111170. doi: 10.3389/fnins.2023.1111170. eCollection 2023.
6. Input correlations impede suppression of chaos and learning in balanced firing-rate networks. PLoS Comput Biol. 2022 Dec 5;18(12):e1010590. doi: 10.1371/journal.pcbi.1010590. eCollection 2022 Dec.
7. Heterogeneous Ensemble-Based Spike-Driven Few-Shot Online Learning. Front Neurosci. 2022 May 9;16:850932. doi: 10.3389/fnins.2022.850932. eCollection 2022.
8. Different eigenvalue distributions encode the same temporal tasks in recurrent neural networks. Cogn Neurodyn. 2023 Feb;17(1):257-275. doi: 10.1007/s11571-022-09802-5. Epub 2022 Apr 20.
9. Robust Spike-Based Continual Meta-Learning Improved by Restricted Minimum Error Entropy Criterion. Entropy (Basel). 2022 Mar 25;24(4):455. doi: 10.3390/e24040455.
10. A geometric framework for understanding dynamic information integration in context-dependent computation. iScience. 2021 Jul 30;24(8):102919. doi: 10.1016/j.isci.2021.102919. eCollection 2021 Aug 20.