Department of Psychology, Princeton University, Princeton, NJ 08540, USA.
Psychol Rev. 2007 Oct;114(4):887-953. doi: 10.1037/0033-295X.114.4.887.
Retrieval-induced forgetting (RIF) refers to the finding that retrieving a memory can impair subsequent recall of related memories. Here, the authors present a new model of how the brain gives rise to RIF in both semantic and episodic memory. The core of the model is a recently developed neural network learning algorithm that leverages regular oscillations in feedback inhibition to strengthen weak parts of target memories and to weaken competing memories. The authors use the model to address several puzzling findings relating to RIF, including why retrieval practice leads to more forgetting than simply presenting the target item, how RIF is affected by the strength of competing memories and the strength of the target (to-be-retrieved) memory, and why RIF sometimes generalizes to independent cues and sometimes does not. For all of these questions, the authors show that the model can account for existing results, and they generate novel predictions regarding boundary conditions on these results.
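The oscillation-based learning mechanism summarized above can be illustrated with a toy sketch. The snippet below is only an assumption-laden illustration, not the authors' published implementation: the layer size, the k-winners-take-all activation, the specific inhibition values, and the update equations are all hypothetical choices made for the example. It shows the core idea that raising inhibition exposes weak parts of the target memory (which are then strengthened) while lowering inhibition lets competitors become active (which are then weakened).

```python
import numpy as np

# Minimal sketch of an oscillating-inhibition learning rule.
# All names and constants here are illustrative assumptions.

rng = np.random.default_rng(0)

n_units = 20          # units in a single layer
k_winners = 5         # units active at baseline inhibition

W = rng.normal(0.0, 0.1, size=(n_units, n_units))  # recurrent weights
np.fill_diagonal(W, 0.0)

def activate(drive, inhibition):
    """k-winners-take-all activation: higher inhibition -> fewer winners."""
    k = max(1, int(round(k_winners / inhibition)))
    act = np.zeros(n_units)
    act[np.argsort(drive)[-k:]] = 1.0
    return act

def oscillation_step(W, external_input, lr=0.05):
    drive = external_input + W @ external_input

    base = activate(drive, inhibition=1.0)   # baseline: the target memory
    high = activate(drive, inhibition=2.0)   # high inhibition: weak target units drop out
    low  = activate(drive, inhibition=0.5)   # low inhibition: competitors become active

    # Units active at baseline but lost under high inhibition are the weak
    # parts of the target; units active only under low inhibition are competitors.
    weak_target = base * (1.0 - high)
    competitor = low * (1.0 - base)

    W += lr * np.outer(weak_target, base)    # strengthen weak parts of the target
    W -= lr * np.outer(competitor, base)     # weaken competing memories
    np.fill_diagonal(W, 0.0)
    return W

# One retrieval-practice trial driven by a partial cue overlapping the target
cue = np.zeros(n_units)
cue[:k_winners] = 1.0
W = oscillation_step(W, cue)
```

Under these assumptions, repeated retrieval-practice trials both complete the target pattern more reliably and push competitors further below threshold, which is the behavior the model uses to explain why practiced retrieval produces more forgetting of related items than simple restudy.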