Diekmann Nicolas, Vijayabaskaran Sandhiya, Zeng Xiangshuai, Kappel David, Menezes Matheus Chaves, Cheng Sen
Faculty of Computer Science, Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany.
International Graduate School of Neuroscience, Ruhr University Bochum, Bochum, Germany.
Front Neuroinform. 2023 Mar 9;17:1134405. doi: 10.3389/fninf.2023.1134405. eCollection 2023.
Reinforcement learning (RL) has become a popular paradigm for modeling animal behavior, analyzing neuronal representations, and studying their emergence during learning. This development has been fueled by advances in understanding the role of RL in both the brain and artificial intelligence. However, while in machine learning a set of tools and standardized benchmarks facilitate the development of new methods and their comparison to existing ones, in neuroscience the software infrastructure is much more fragmented. Even when they share theoretical principles, computational studies rarely share software frameworks, thereby impeding the integration or comparison of different results. Machine learning tools are also difficult to port to computational neuroscience since the experimental requirements of the two fields are usually not well aligned. To address these challenges we introduce CoBeL-RL, a closed-loop simulator of complex behavior and learning based on RL and deep neural networks. It provides a neuroscience-oriented framework for efficiently setting up and running simulations. CoBeL-RL offers a set of virtual environments, e.g., T-maze and Morris water maze, which can be simulated at different levels of abstraction, e.g., a simple gridworld or a 3D environment with complex visual stimuli, and set up using intuitive GUI tools. A range of RL algorithms, e.g., Dyna-Q and deep Q-networks, is provided and can be easily extended. CoBeL-RL provides tools for monitoring and analyzing behavior and unit activity, and allows for fine-grained control of the simulation via interfaces to relevant points in its closed loop. In summary, CoBeL-RL fills an important gap in the software toolbox of computational neuroscience.
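To make the closed-loop setup described in the abstract concrete, the following is a minimal, self-contained Python sketch of a tabular Dyna-Q agent learning a small gridworld, i.e., the kind of simulation CoBeL-RL supports at its simplest level of abstraction. All names here (GridWorld, DynaQAgent, and their methods) are illustrative assumptions and do not reflect CoBeL-RL's actual API.

```python
# Illustrative sketch only: a closed loop between a gridworld environment
# and a Dyna-Q agent. Class names are hypothetical, not CoBeL-RL's API.
import random
import numpy as np

class GridWorld:
    """A size x size gridworld; the agent starts at cell 0, goal is the last cell."""
    def __init__(self, size=3):
        self.size = size
        self.goal = size * size - 1
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # Actions: 0=up, 1=down, 2=left, 3=right; moves off-grid are clipped.
        row, col = divmod(self.state, self.size)
        if action == 0: row = max(row - 1, 0)
        elif action == 1: row = min(row + 1, self.size - 1)
        elif action == 2: col = max(col - 1, 0)
        elif action == 3: col = min(col + 1, self.size - 1)
        self.state = row * self.size + col
        done = self.state == self.goal
        return self.state, (1.0 if done else 0.0), done

class DynaQAgent:
    """Tabular Dyna-Q: Q-learning from real experience plus replay from a learned model."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95,
                 epsilon=0.1, planning_steps=10):
        self.Q = np.zeros((n_states, n_actions))
        self.model = {}  # (state, action) -> (reward, next_state)
        self.alpha, self.gamma = alpha, gamma
        self.epsilon, self.planning_steps = epsilon, planning_steps

    def act(self, s):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.randrange(self.Q.shape[1])
        return int(np.argmax(self.Q[s]))

    def update(self, s, a, r, s2):
        # Direct RL update from real experience.
        self.Q[s, a] += self.alpha * (r + self.gamma * self.Q[s2].max() - self.Q[s, a])
        # Store the transition in the model, then replay sampled memories.
        self.model[(s, a)] = (r, s2)
        for _ in range(self.planning_steps):
            (ps, pa), (pr, ps2) = random.choice(list(self.model.items()))
            self.Q[ps, pa] += self.alpha * (pr + self.gamma * self.Q[ps2].max() - self.Q[ps, pa])

# Closed loop: the environment emits observations, the agent emits actions.
env = GridWorld()
agent = DynaQAgent(n_states=9, n_actions=4)
for episode in range(50):
    s, done, steps = env.reset(), False, 0
    while not done:
        a = agent.act(s)
        s2, r, done = env.step(a)
        agent.update(s, a, r, s2)
        s, steps = s2, steps + 1
    # A monitoring hook would log `steps` here; it shrinks as learning proceeds.
print("final greedy policy:", np.argmax(agent.Q, axis=1))
```

In CoBeL-RL itself, the environment side of this loop can be swapped for richer simulations (e.g., a 3D environment with complex visual stimuli) and the agent side for deep Q-networks, while monitoring tools tap into the same interaction points.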