

Are multi-layer backpropagation networks catastrophically amnesic?

Author

Yamaguchi Makoto

Affiliation

Waseda University, Tokyo 169-8050, Japan.

Publication

Scand J Psychol. 2004 Nov;45(5):357-61. doi: 10.1111/j.1467-9450.2004.00417.x.

DOI: 10.1111/j.1467-9450.2004.00417.x
PMID: 15535804
Abstract

Connectionist models with a backpropagation learning rule are known to have a serious problem. Such models exhibit catastrophic interference (or forgetting) with sequential training. Having learned a set of patterns, if the model is trained on another set of patterns, its performance on the first set can deteriorate dramatically and very rapidly. The present study reconsiders this issue with four simulations. The model learned arithmetic facts sequentially, but the interference was only modest with random (hence approximately orthogonal) inputs. Essentially the same result was obtained when the inputs were made less orthogonal by adding irrelevant elements. Reducing the number of hidden units did not have major effects. This study suggests that the interference problem has been somewhat overstated.
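The sequential-training setup the abstract describes can be illustrated with a minimal sketch (not the paper's actual simulations; the architecture, pattern sets, and hyperparameters below are illustrative assumptions): a one-hidden-layer backpropagation network learns a first set of random, approximately orthogonal binary patterns, is then trained only on a second set, and its error on the first set is compared before and after.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """Tiny one-hidden-layer network trained with plain batch backpropagation."""

    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))

    def forward(self, X):
        self.h = sigmoid(X @ self.W1)
        return sigmoid(self.h @ self.W2)

    def train(self, X, Y, epochs=5000, lr=0.1):
        for _ in range(epochs):
            out = self.forward(X)
            # Gradient of MSE through the sigmoid output layer ...
            d_out = (out - Y) * out * (1.0 - out)
            # ... backpropagated through the hidden layer.
            d_h = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
            self.W2 -= lr * self.h.T @ d_out
            self.W1 -= lr * X.T @ d_h

    def mse(self, X, Y):
        return float(np.mean((self.forward(X) - Y) ** 2))

n_in, n_hidden, n_out, n_pat = 20, 10, 5, 8
# High-dimensional random binary vectors are approximately orthogonal.
XA = rng.integers(0, 2, (n_pat, n_in)).astype(float)
YA = rng.integers(0, 2, (n_pat, n_out)).astype(float)
XB = rng.integers(0, 2, (n_pat, n_in)).astype(float)
YB = rng.integers(0, 2, (n_pat, n_out)).astype(float)

net = MLP(n_in, n_hidden, n_out)
net.train(XA, YA)
err_before = net.mse(XA, YA)  # error on set A right after learning it
net.train(XB, YB)             # sequential training: set B only, no rehearsal of A
err_after = net.mse(XA, YA)   # error on set A after the second training phase
print(f"set-A MSE before: {err_before:.4f}, after training on B: {err_after:.4f}")
```

The gap between `err_before` and `err_after` is the interference the abstract refers to; how large it grows under sequential training is exactly what the paper's simulations probe.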


Similar Articles

1. Are multi-layer backpropagation networks catastrophically amnesic?
Scand J Psychol. 2004 Nov;45(5):357-61. doi: 10.1111/j.1467-9450.2004.00417.x.

2. Orthogonality is not a panacea: backpropagation and "catastrophic interference".
Scand J Psychol. 2006 Oct;47(5):339-44. doi: 10.1111/j.1467-9450.2006.00528.x.

3. Reassessment of catastrophic interference.
Neuroreport. 2004 Oct 25;15(15):2423-6. doi: 10.1097/00001756-200410250-00024.

4. Size invariance does not hold for connectionist models: dangers of using a toy model.
Neuroreport. 2004 Mar 1;15(3):565-7. doi: 10.1097/00001756-200403010-00036.

5. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.

6. Catastrophic forgetting in simple networks: an analysis of the pseudorehearsal solution.
Network. 1999 Aug;10(3):227-36.

7. Methods for reducing interference in the Complementary Learning Systems model: oscillating inhibition and autonomous memory rehearsal.
Neural Netw. 2005 Nov;18(9):1212-28. doi: 10.1016/j.neunet.2005.08.010. Epub 2005 Nov 2.

8. A robust method for distinguishing between learned and spurious attractors.
Neural Netw. 2004 Apr;17(3):313-26. doi: 10.1016/j.neunet.2003.11.007.

9. The loading problem for recursive neural networks.
Neural Netw. 2005 Oct;18(8):1064-79. doi: 10.1016/j.neunet.2005.07.006. Epub 2005 Sep 29.

10. A new backpropagation learning algorithm for layered neural networks with nondifferentiable units.
Neural Comput. 2007 May;19(5):1422-35. doi: 10.1162/neco.2007.19.5.1422.