French Robert M, Chater Nick
Quantitative Psychology and Cognitive Science, Psychology Department, University of Liège, 4000 Liège, Belgium.
Neural Comput. 2002 Jul;14(7):1755-69. doi: 10.1162/08997660260028700.
In error-driven distributed feedforward networks, new information typically interferes, sometimes severely, with previously learned information. We show how noise can be used to approximate the error surface of previously learned information. By combining this approximated error surface with the error surface associated with the new information to be learned, the network's retention of previously learned items can be improved and catastrophic interference significantly reduced. Further, we show that the noise-generated error surface is produced using only first-derivative information and without recourse to any explicit error information.
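For concreteness, the following is a minimal sketch of the idea as described in the abstract: random noise inputs are passed through the already-trained network, and its own responses to that noise serve as pseudo-targets, so that the resulting pseudo-items approximate the error surface of previously learned information without any explicit error information from the original training set. Interleaving these pseudo-items with the new items combines the two error surfaces during learning. All network sizes, learning rates, item counts, and the training loop below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: noise-generated pseudo-items to reduce catastrophic interference.
# Hyperparameters and data are illustrative, not from the paper.
import copy
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """One-hidden-layer feedforward network trained by plain backprop."""
    def __init__(self, n_in, n_hid, n_out):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hid))
        self.W2 = rng.normal(0, 0.5, (n_hid, n_out))

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)
        self.y = sigmoid(self.h @ self.W2)
        return self.y

    def train_step(self, x, t, lr=0.5):
        y = self.forward(x)
        # error-driven, first-derivative (gradient) weight updates
        d_out = (y - t) * y * (1 - y)
        d_hid = (d_out @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= lr * np.outer(self.h, d_out)
        self.W1 -= lr * np.outer(x, d_hid)

def train(net, items, epochs=500):
    for _ in range(epochs):
        for x, t in items:
            net.train_step(x, t)

def mse(net, items):
    return np.mean([np.mean((net.forward(x) - t) ** 2) for x, t in items])

n_in, n_hid, n_out = 8, 16, 8
old_items = [(rng.integers(0, 2, n_in).astype(float),
              rng.integers(0, 2, n_out).astype(float)) for _ in range(10)]
new_items = [(rng.integers(0, 2, n_in).astype(float),
              rng.integers(0, 2, n_out).astype(float)) for _ in range(10)]

net = MLP(n_in, n_hid, n_out)
train(net, old_items)

# Approximate the old error surface from noise alone: random inputs paired
# with whatever the trained network outputs for them.  No explicit targets
# from the original training set are used.
pseudo_items = []
for _ in range(32):
    x = rng.random(n_in)           # noise input
    t = net.forward(x).copy()      # network's own response as pseudo-target
    pseudo_items.append((x, t))

# Baseline: sequential learning of the new items alone (catastrophic
# interference expected).
baseline = copy.deepcopy(net)
train(baseline, new_items)

# Combined: new items interleaved with the noise-generated pseudo-items.
train(net, new_items + pseudo_items)

print("old-item error, new items only:      ", mse(baseline, old_items))
print("old-item error, with pseudo-items:   ", mse(net, old_items))
```

Running the sketch, the network trained with the interleaved pseudo-items should retain the old associations substantially better than the baseline, illustrating the reduction in interference the abstract reports.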