
PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks.

Affiliations

Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06520-8074.

Department of Computer Science, Yale University, New Haven, CT 06520-8285.

Publication Information

eNeuro. 2021 Jan 15;8(1). doi: 10.1523/ENEURO.0427-20.2020. Print 2021 Jan-Feb.

DOI: 10.1523/ENEURO.0427-20.2020
PMID: 33328247
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7814477/
Abstract

Task-trained artificial recurrent neural networks (RNNs) provide a computational modeling framework of increasing interest and application in computational, systems, and cognitive neuroscience. RNNs can be trained, using deep-learning methods, to perform cognitive tasks used in animal and human experiments and can be studied to investigate potential neural representations and circuit mechanisms underlying cognitive computations and behavior. Widespread application of these approaches within neuroscience has been limited by technical barriers in use of deep-learning software packages to train network models. Here, we introduce PsychRNN, an accessible, flexible, and extensible Python package for training RNNs on cognitive tasks. Our package is designed for accessibility, for researchers to define tasks and train RNN models using only Python and NumPy, without requiring knowledge of deep-learning software. The training backend is based on TensorFlow and is readily extensible for researchers with TensorFlow knowledge to develop projects with additional customization. PsychRNN implements a number of specialized features to support applications in systems and cognitive neuroscience. Users can impose neurobiologically relevant constraints on synaptic connectivity patterns. Furthermore, specification of cognitive tasks has a modular structure, which facilitates parametric variation of task demands to examine their impact on model solutions. PsychRNN also enables task shaping during training, or curriculum learning, in which tasks are adjusted in closed-loop based on performance. Shaping is ubiquitous in training of animals in cognitive tasks, and PsychRNN allows investigation of how shaping trajectories impact learning and model solutions. Overall, the PsychRNN framework facilitates application of trained RNNs in neuroscience research.
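As a concrete illustration of the workflow the abstract describes, here is a minimal training sketch based on the package's documented quick-start: a built-in perceptual-discrimination task is defined with plain Python parameters, and a vanilla RNN is trained on it without writing any TensorFlow code. Parameter values are illustrative, and the commented-out constraint options are assumptions to be checked against your installed version.

```python
import numpy as np

# Built-in task and vanilla RNN model from PsychRNN's documented quick-start.
from psychrnn.tasks.perceptual_discrimination import PerceptualDiscrimination
from psychrnn.backend.models.basic import Basic

# Define the task in plain Python: timestep, time constant, trial length, batch size.
pd = PerceptualDiscrimination(dt=10, tau=100, T=2000, N_batch=128)

# Task parameters seed the network parameters; no deep-learning knowledge needed.
network_params = pd.get_task_params()
network_params['name'] = 'model'   # unique name, useful when running several models
network_params['N_rec'] = 50       # number of recurrent units

# Optional neurobiological constraints (parameter names per the PsychRNN docs;
# treat them as assumptions if your version differs):
# network_params['dale_ratio'] = 0.8   # 80% excitatory / 20% inhibitory units
# network_params['autapses'] = False   # disallow self-connections

model = Basic(network_params)  # vanilla continuous-time RNN
model.train(pd)                # the TensorFlow backend runs under the hood

# Evaluate: generate a fresh batch of trials and run the trained network.
x, target_output, mask, trial_params = pd.get_trial_batch()
model_output, model_state = model.test(x)

model.destruct()  # release the TensorFlow graph/session
```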

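The abstract's claim that tasks can be defined with only Python and NumPy refers to subclassing the package's Task interface, which requires two methods: one generating per-trial parameters and one returning the input, target, and loss mask at each timepoint. The sketch below follows that documented interface; the task itself (a simple go-cue task) and every name inside it are hypothetical.

```python
import numpy as np
from psychrnn.tasks.task import Task

class SimpleGo(Task):
    """Hypothetical go-cue task: output 0 until a cue arrives, then output 1."""

    def __init__(self, dt, tau, T, N_batch):
        # One input channel (the cue) and one output channel.
        super(SimpleGo, self).__init__(1, 1, dt, tau, T, N_batch)

    def generate_trial_params(self, batch, trial):
        # Per-trial randomization: draw a cue onset somewhere mid-trial.
        return {'cue_onset': np.random.uniform(0.25, 0.75) * self.T}

    def trial_function(self, time, params):
        # Called for each timepoint: return input x_t, target y_t, and loss mask.
        cue = 1.0 if time >= params['cue_onset'] else 0.0
        x_t = np.array([cue])
        y_t = np.array([cue])          # target: report the cue
        mask_t = np.ones(self.N_out)   # weight every timepoint in the loss
        return x_t, y_t, mask_t
```

Because every task exposes its trials through get_trial_batch(), the same modular object can be handed to model.train(); swapping task parameters, or stepping through a list of progressively harder task instances for curriculum-style shaping, changes the training regime without touching the network code.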

Figures (PMC7814477):
Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aba/7814477/0d71c5f1cdf3/SN-ENUJ200335F001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aba/7814477/52dd852f6483/SN-ENUJ200335F002.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aba/7814477/7e0d6d827607/SN-ENUJ200335F003.jpg
Figure 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aba/7814477/3fe7a1652bd3/SN-ENUJ200335F004.jpg
Figure 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aba/7814477/02c2b52fd4fb/SN-ENUJ200335F005.jpg
Figure 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aba/7814477/c4cdeb3cff2d/SN-ENUJ200335F006.jpg
Figure 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1aba/7814477/8c9ad6ce7f3a/SN-ENUJ200335F007.jpg

Similar Articles

1. PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks. eNeuro. 2021 Jan 15;8(1). doi: 10.1523/ENEURO.0427-20.2020. Print 2021 Jan-Feb.
2. tension: A Python package for FORCE learning. PLoS Comput Biol. 2022 Dec 19;18(12):e1010722. doi: 10.1371/journal.pcbi.1010722. eCollection 2022 Dec.
3. Training Excitatory-Inhibitory Recurrent Neural Networks for Cognitive Tasks: A Simple and Flexible Framework. PLoS Comput Biol. 2016 Feb 29;12(2):e1004792. doi: 10.1371/journal.pcbi.1004792. eCollection 2016 Feb.
4. Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies. bioRxiv. 2023 Oct 10:2023.10.10.561588. doi: 10.1101/2023.10.10.561588.
5. Considerations in using recurrent neural networks to probe neural dynamics. J Neurophysiol. 2019 Dec 1;122(6):2504-2521. doi: 10.1152/jn.00467.2018. Epub 2019 Oct 16.
6. Winning the Lottery With Neural Connectivity Constraints: Faster Learning Across Cognitive Tasks With Spatially Constrained Sparse RNNs. Neural Comput. 2023 Oct 10;35(11):1850-1869. doi: 10.1162/neco_a_01613.
7. Task representations in neural networks trained to perform many cognitive tasks. Nat Neurosci. 2019 Feb;22(2):297-306. doi: 10.1038/s41593-018-0310-2. Epub 2019 Jan 14.
8. Exploring weight initialization, diversity of solutions, and degradation in recurrent neural networks trained for temporal and decision-making tasks. J Comput Neurosci. 2023 Nov;51(4):407-431. doi: 10.1007/s10827-023-00857-9. Epub 2023 Aug 10.
9. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks. Elife. 2017 Feb 23;6:e20899. doi: 10.7554/eLife.20899.
10. Reward-based training of recurrent neural networks for cognitive and value-based tasks. Elife. 2017 Jan 13;6:e21492. doi: 10.7554/eLife.21492.

Cited By

1. Flexible computation of object motion and depth based on viewing geometry inferred from optic flow. bioRxiv. 2025 May 19:2024.10.29.620928. doi: 10.1101/2024.10.29.620928.
2. Hierarchy between forelimb premotor and primary motor cortices and its manifestation in their firing patterns. Elife. 2025 Jun 5;13:RP103069. doi: 10.7554/eLife.103069.
3. Mixed recurrent connectivity in primate prefrontal cortex.
4. Automated task training and longitudinal monitoring of mouse mesoscale cortical circuits using home cages. PLoS Comput Biol. 2025 Mar 11;21(3):e1012867. doi: 10.1371/journal.pcbi.1012867. eCollection 2025 Mar.
5. Linking neural population formatting to function. bioRxiv. 2025 Jan 3:2025.01.03.631242. doi: 10.1101/2025.01.03.631242.
6. Rapid context inference in a thalamocortical model using recurrent neural networks. Nat Commun. 2024 Sep 27;15(1):8275. doi: 10.1038/s41467-024-52289-3.
7. Coordinated Response Modulations Enable Flexible Use of Visual Information. bioRxiv. 2024 Jul 15:2024.07.10.602774. doi: 10.1101/2024.07.10.602774.
8. Emergent behaviour and neural dynamics in artificial agents tracking odour plumes. Nat Mach Intell. 2023 Jan;5(1):58-70. doi: 10.1038/s42256-022-00599-w. Epub 2023 Jan 25.
9. Initial conditions combine with sensory evidence to induce decision-related dynamics in premotor cortex. Nat Commun. 2023 Oct 16;14(1):6510. doi: 10.1038/s41467-023-41752-2.
10. Geometry of neural computation unifies working memory and planning. Proc Natl Acad Sci U S A. 2022 Sep 13;119(37):e2115610119. doi: 10.1073/pnas.2115610119. Epub 2022 Sep 6.
11. Dynamic task-belief is an integral part of decision-making. Neuron. 2022 Aug 3;110(15):2503-2511.e3. doi: 10.1016/j.neuron.2022.05.010. Epub 2022 Jun 13.

References

1. Artificial Neural Networks for Neuroscientists: A Primer. Neuron. 2020 Sep 23;107(6):1048-1070. doi: 10.1016/j.neuron.2020.09.005.
2. Deep Reinforcement Learning and Its Neuroscientific Implications. Neuron. 2020 Aug 19;107(4):603-616. doi: 10.1016/j.neuron.2020.06.014. Epub 2020 Jul 13.
3. Elife. 2020 May 15;9:e55964. doi: 10.7554/eLife.55964.
4. A deep learning framework for neuroscience. Nat Neurosci. 2019 Nov;22(11):1761-1770. doi: 10.1038/s41593-019-0520-2. Epub 2019 Oct 28.
5. Circuit mechanisms for the maintenance and manipulation of information in working memory. Nat Neurosci. 2019 Jul;22(7):1159-1167. doi: 10.1038/s41593-019-0414-3. Epub 2019 Jun 10.
6. A diverse range of factors affect the nature of neural representations underlying short-term memory. Nat Neurosci. 2019 Feb;22(2):275-283. doi: 10.1038/s41593-018-0314-y. Epub 2019 Jan 24.
7. Thalamic regulation of switching between cortical representations enables cognitive flexibility. Nat Neurosci. 2018 Dec;21(12):1753-1763. doi: 10.1038/s41593-018-0269-z. Epub 2018 Nov 19.
8. Flexible Sensorimotor Computations through Rapid Reconfiguration of Cortical Dynamics. Neuron. 2018 Jun 6;98(5):1005-1019.e5. doi: 10.1016/j.neuron.2018.05.020.
9. SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks. Neural Comput. 2018 Jun;30(6):1514-1541. doi: 10.1162/neco_a_01086. Epub 2018 Apr 13.
10. Standardized automated training of rhesus monkeys for neuroscience research in their housing environment. J Neurophysiol. 2018 Mar 1;119(3):796-807. doi: 10.1152/jn.00614.2017. Epub 2017 Nov 15.