Princeton Neuroscience Institute, Princeton University, USA.
DeepMind, London, UK; Gatsby Computational Neuroscience Unit, University College London, London, UK.
Neural Netw. 2022 Jan;145:80-89. doi: 10.1016/j.neunet.2021.10.004. Epub 2021 Oct 18.
The intersection between neuroscience and artificial intelligence (AI) research has created synergistic effects in both fields. While neuroscientific discoveries have inspired the development of AI architectures, new ideas and algorithms from AI research have produced new ways to study brain mechanisms. A well-known example is reinforcement learning (RL), which has stimulated neuroscience research on how animals learn to adjust their behavior to maximize reward. In this review article, we cover recent collaborative work between the two fields in the context of meta-learning and its extension to social cognition and consciousness. Meta-learning refers to the ability to learn how to learn, such as learning to adjust the hyperparameters of existing learning algorithms and learning how to use existing models and knowledge to efficiently solve new tasks. This capability is important for making existing AI systems more adaptive and flexible when confronting new tasks. Since this is one of the areas where a gap remains between human performance and current AI systems, successful collaboration should produce new ideas and progress. Starting from the role of RL algorithms in driving neuroscience, we discuss recent developments in deep RL applied to modeling prefrontal cortex functions. Taking a broader perspective, we then discuss the similarities and differences between social cognition and meta-learning, and conclude with speculations on the potential links between intelligence, as endowed by model-based RL, and consciousness. For future work, we highlight data efficiency, autonomy, and intrinsic motivation as key research areas for advancing both fields.
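To make the abstract's definition of meta-learning concrete, the following is a minimal sketch, not taken from the reviewed work, of "learning to learn" reduced to adjusting a hyperparameter of an inner learning algorithm: an outer loop evaluates candidate learning rates of a plain gradient-descent learner across a distribution of toy regression tasks and keeps the one that adapts best on average. All task and function names are illustrative assumptions.

```python
# Toy meta-learning sketch: the outer loop tunes the inner learner's
# learning rate (a hyperparameter) across many sampled tasks.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Sample a 1-D linear regression task y = w*x with a random slope."""
    w_true = rng.uniform(-2.0, 2.0)
    x = rng.normal(size=20)
    y = w_true * x
    return x, y

def inner_loss(x, y, lr, steps=10):
    """Run plain gradient descent with the given learning rate; return final MSE."""
    w = 0.0
    for _ in range(steps):
        grad = np.mean(2.0 * (w * x - y) * x)  # gradient of MSE w.r.t. w
        w -= lr * grad
    return np.mean((w * x - y) ** 2)

# Outer ("meta") loop: pick the learning rate that works best across tasks.
candidate_lrs = [0.01, 0.05, 0.1, 0.3]
meta_scores = {lr: np.mean([inner_loss(*sample_task(), lr) for _ in range(50)])
               for lr in candidate_lrs}
best_lr = min(meta_scores, key=meta_scores.get)
print("average post-adaptation loss per learning rate:", meta_scores)
print("meta-learned learning rate:", best_lr)
```

This search over a hyperparameter is only the simplest instance of the idea; the gradient-based and memory-based meta-RL approaches discussed in the article replace the outer grid search with learned update rules or recurrent dynamics.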