Department of Medicine, University of California, San Diego, La Jolla, United States.
Department of Computer Science and Engineering, University of California, San Diego, La Jolla, United States.
eLife. 2020 Aug 4;9:e51005. doi: 10.7554/eLife.51005.
Continual learning remains an unsolved problem in artificial neural networks. The brain, by contrast, has evolved mechanisms that prevent catastrophic forgetting of old knowledge during new training. Building on data suggesting the importance of sleep in learning and memory, we tested the hypothesis that sleep protects old memories from being forgotten after new learning. In a thalamocortical network model, training a new memory interfered with previously learned old memories, leading to degradation and forgetting of the old memory traces. Simulating sleep after new learning reversed the damage and enhanced both old and new memories. We found that when a new memory competed for previously allocated neuronal/synaptic resources, sleep replay changed the synaptic footprint of the old memory, allowing overlapping neuronal populations to store multiple memories. Our study predicts that memory storage is dynamic, and that sleep enables continual learning by combining consolidation of new memory traces with reconsolidation of old memory traces to minimize interference.
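The abstract's central claim, that replaying old memory traces alongside new learning lets overlapping neuronal populations retain multiple memories, parallels replay/rehearsal approaches to catastrophic forgetting in machine learning. The following is a minimal illustrative sketch in plain NumPy, not the authors' spiking thalamocortical model: a tiny network memorizes an "old" set of patterns, largely forgets it when trained on a "new" set alone, and retains both when old patterns are interleaved during new training as a crude stand-in for sleep replay. All network sizes, learning rates, and the interleaved-training shortcut are assumptions made for illustration only.

```python
# Toy demonstration of catastrophic forgetting and replay-based protection.
# NOT the published thalamocortical model; sizes and hyperparameters are arbitrary.
import copy
import numpy as np

rng = np.random.default_rng(1)

def make_memory(n_patterns=20, n_inputs=30):
    # A "memory" here is a set of random binary input patterns with random labels.
    X = rng.integers(0, 2, size=(n_patterns, n_inputs)).astype(float)
    y = rng.integers(0, 2, size=n_patterns).astype(float)
    return X, y

class TinyMLP:
    # One-hidden-layer network trained with full-batch gradient descent.
    def __init__(self, n_in, n_hidden=60):
        self.W1 = rng.normal(scale=0.3, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.3, size=n_hidden)
        self.b2 = 0.0

    def forward(self, X):
        h = np.tanh(X @ self.W1 + self.b1)
        p = 1 / (1 + np.exp(-(h @ self.W2 + self.b2)))
        return h, p

    def train(self, X, y, epochs=500, lr=0.2):
        for _ in range(epochs):
            h, p = self.forward(X)
            # Gradients of the mean binary cross-entropy loss.
            d_out = (p - y) / len(y)
            dW2 = h.T @ d_out
            db2 = d_out.sum()
            d_h = np.outer(d_out, self.W2) * (1 - h ** 2)
            dW1 = X.T @ d_h
            db1 = d_h.sum(axis=0)
            self.W2 -= lr * dW2; self.b2 -= lr * db2
            self.W1 -= lr * dW1; self.b1 -= lr * db1

    def accuracy(self, X, y):
        _, p = self.forward(X)
        return np.mean((p > 0.5) == (y > 0.5))

# Old and new memories are drawn from the same input space, so they compete
# for the same hidden units and synaptic weights.
XA, yA = make_memory()   # "old" memory
XB, yB = make_memory()   # "new" memory

net = TinyMLP(n_in=30)
net.train(XA, yA)
print(f"after old memory:      acc(old)={net.accuracy(XA, yA):.2f}")

forget = copy.deepcopy(net)
forget.train(XB, yB)     # sequential training on the new memory alone
print(f"new memory, no replay: acc(old)={forget.accuracy(XA, yA):.2f} "
      f"acc(new)={forget.accuracy(XB, yB):.2f}")

replay = copy.deepcopy(net)
# Interleave old-memory patterns with the new ones during training -- a crude
# rehearsal analogue of sleep replay, not the unsupervised replay of the paper.
replay.train(np.vstack([XB, XA]), np.concatenate([yB, yA]))
print(f"new memory + replay:   acc(old)={replay.accuracy(XA, yA):.2f} "
      f"acc(new)={replay.accuracy(XB, yB):.2f}")
```

In this sketch, sequential training on the new pattern set typically drives accuracy on the old set toward chance, while interleaving old patterns during new training keeps both near ceiling; the shared hidden units end up encoding a joint representation, loosely analogous to the abstract's overlapping neuronal populations storing multiple memories.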