Interdepartmental Neuroscience Program, Yale University, New Haven, CT 06520-8074.
Department of Computer Science, Yale University, New Haven, CT 06520-8285.
eNeuro. 2021 Jan 15;8(1). doi: 10.1523/ENEURO.0427-20.2020. Print 2021 Jan-Feb.
Task-trained artificial recurrent neural networks (RNNs) provide a computational modeling framework of increasing interest and application in computational, systems, and cognitive neuroscience. RNNs can be trained, using deep-learning methods, to perform cognitive tasks used in animal and human experiments, and can then be studied to investigate potential neural representations and circuit mechanisms underlying cognitive computations and behavior. Widespread application of these approaches within neuroscience has been limited by the technical barriers involved in using deep-learning software packages to train network models. Here, we introduce PsychRNN, an accessible, flexible, and extensible Python package for training RNNs on cognitive tasks. Our package is designed for accessibility, enabling researchers to define tasks and train RNN models using only Python and NumPy, without requiring knowledge of deep-learning software. The training backend is based on TensorFlow and is readily extensible, so that researchers with TensorFlow knowledge can develop projects with additional customization. PsychRNN implements a number of specialized features to support applications in systems and cognitive neuroscience. Users can impose neurobiologically relevant constraints on synaptic connectivity patterns. Furthermore, the specification of cognitive tasks has a modular structure, which facilitates parametric variation of task demands to examine their impact on model solutions. PsychRNN also enables task shaping during training, or curriculum learning, in which tasks are adjusted in closed loop based on performance. Shaping is ubiquitous in the training of animals on cognitive tasks, and PsychRNN allows investigation of how shaping trajectories impact learning and model solutions. Overall, the PsychRNN framework facilitates the application of trained RNNs in neuroscience research.
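To give a concrete sense of the modular, NumPy-only style of task specification described above, the sketch below defines a minimal two-alternative discrimination trial generator. The class name, method names, and trial structure here are illustrative assumptions for exposition, not PsychRNN's actual API; it only shows how a trial can be reduced to an (input, target, mask) triple with a parametric task demand (signal coherence).

```python
import numpy as np

# Hypothetical sketch of a modular trial-based task definition, in the
# spirit of the framework described in the abstract. All names and the
# trial structure are assumptions for illustration, not PsychRNN's API.
class SimpleDiscriminationTask:
    """Two-alternative discrimination: report the sign of a noisy input."""

    def __init__(self, n_timesteps=100, coherence=0.5, noise=0.1, seed=None):
        self.n_timesteps = n_timesteps
        self.coherence = coherence  # parametric task demand (signal strength)
        self.noise = noise
        self.rng = np.random.default_rng(seed)

    def generate_trial(self):
        # Latent direction for this trial: -1 or +1.
        direction = self.rng.choice([-1.0, 1.0])
        # Noisy evidence stream: input of shape (n_timesteps, 1).
        x = direction * self.coherence + self.noise * self.rng.standard_normal(
            (self.n_timesteps, 1))
        # Target output: one-hot correct choice, shape (n_timesteps, 2).
        y = np.zeros((self.n_timesteps, 2))
        y[:, int(direction > 0)] = 1.0
        # Mask: score the loss only on the second half of the trial.
        mask = np.zeros(self.n_timesteps)
        mask[self.n_timesteps // 2:] = 1.0
        return x, y, mask

task = SimpleDiscriminationTask(coherence=0.8, seed=0)
x, y, mask = task.generate_trial()
```

Because the trial generator is self-contained, sweeping a task demand (here, `coherence`) across training runs amounts to constructing the task object with different parameters, which is the kind of parametric variation the package is designed to support.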
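The closed-loop shaping (curriculum learning) idea in the abstract can be sketched with plain NumPy as a schedule that makes the task harder only after performance crosses a criterion. The function name, the particular update rule, and the threshold values below are illustrative assumptions, not PsychRNN's implementation.

```python
# Hypothetical sketch of closed-loop task shaping: after each training
# block, lower the stimulus coherence (making the task harder) only if
# measured accuracy met a criterion. The rule and constants are
# illustrative assumptions, not PsychRNN's actual shaping mechanism.
def shaped_coherences(accuracies, start=1.0, step=0.1, floor=0.1,
                      threshold=0.75):
    """Return the coherence used on each block, given per-block accuracy."""
    coherence = start
    schedule = []
    for acc in accuracies:
        schedule.append(coherence)
        if acc >= threshold:
            # Criterion met: advance the curriculum (harder task).
            coherence = max(floor, round(coherence - step, 10))
        # Otherwise: hold task difficulty fixed for the next block.
    return schedule

# Difficulty increases after blocks 1, 2, and 4, but holds after block 3,
# where accuracy (0.6) fell below the 0.75 criterion.
sched = shaped_coherences([0.9, 0.8, 0.6, 0.85])
```

Exposing the shaping rule as an explicit function of performance is what makes it possible to compare different shaping trajectories and examine their impact on the learned solution, as the abstract describes.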