DeepMind, London EC4 5TW, United Kingdom.
Proc Natl Acad Sci U S A. 2017 Mar 28;114(13):3521-3526. doi: 10.1073/pnas.1611835114. Epub 2017 Mar 14.
The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now, neural networks have not been capable of this, and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate that our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.
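The sketch below illustrates one way "selectively slowing down learning on the weights important for those tasks" can be realized: a quadratic penalty that anchors each parameter to its value after the old task, scaled by a per-parameter importance estimate, in the spirit of the elastic weight consolidation method this paper introduces. It is a minimal, hedged illustration rather than the authors' implementation; the importance estimator, the hyperparameter `lam`, and the helper names are assumptions for the example.

```python
# Minimal sketch (not the authors' released code) of an importance-weighted
# quadratic penalty that slows learning on weights deemed important for an
# earlier task. `lam` and the diagonal importance estimate are illustrative.
import torch


def estimate_importance(model, data_loader, loss_fn):
    """Approximate per-parameter importance as the mean squared gradient of
    the old task's loss (a diagonal, Fisher-style estimate)."""
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: v / max(n_batches, 1) for n, v in importance.items()}


def consolidation_penalty(model, anchor_params, importance, lam=1000.0):
    """Sum over parameters of (lam / 2) * F_i * (theta_i - theta*_i)^2,
    where theta* are the parameters saved after the old task."""
    penalty = torch.zeros(())
    for n, p in model.named_parameters():
        penalty = penalty + (importance[n] * (p - anchor_params[n]) ** 2).sum()
    return 0.5 * lam * penalty


# When training on a new task, the total loss would then be:
#   total_loss = new_task_loss + consolidation_penalty(model, anchor_params, importance)
# where anchor_params is a detached copy of the parameters after the old task.
```

The penalty leaves unimportant weights nearly free to change while making it costly to move weights that mattered for the old task, which is the mechanism the abstract describes for retaining expertise on tasks not seen for a long time.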