IEEE Trans Neural Netw Learn Syst. 2013 Oct;24(10):1623-34. doi: 10.1109/TNNLS.2013.2264952.
Catastrophic forgetting is a well-studied property of most parameterized supervised learning systems. A variation of this phenomenon, in the context of feedforward neural networks, arises when nonstationary inputs lead to loss of previously learned mappings. The majority of the schemes proposed in the literature for mitigating catastrophic forgetting are not data-driven and do not scale well. We introduce the fixed expansion layer (FEL) feedforward neural network, which embeds a sparsely encoding hidden layer to help mitigate forgetting of previously learned representations. In addition, we investigate a novel framework for training ensembles of FEL networks, based on exploiting an information-theoretic measure of diversity between FEL learners, to further control undesired plasticity. The proposed methodology is demonstrated on a basic classification task, highlighting its advantages over existing techniques. The proposed architecture can be extended to address a broader range of computational intelligence tasks, such as regression and system control.
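The abstract's central idea, a fixed sparsely encoding expansion layer whose activity pattern localizes weight updates, can be illustrated in a few lines. The sketch below is a minimal illustration under assumptions not stated in the abstract: a random fixed projection, k-winner-take-all sparse coding, and a delta-rule output layer. The layer sizes, the coding scheme, and the learning rule are hypothetical stand-ins, not the paper's actual FEL formulation.

```python
import numpy as np

class FixedExpansionNet:
    """Minimal sketch of a fixed-expansion-layer classifier.

    Hypothetical simplification: a random, untrained expansion layer
    with k-winner-take-all sparse coding feeds a trainable linear
    output layer. All hyperparameters here are illustrative.
    """

    def __init__(self, n_in, n_out, n_hidden=1024, k_active=32, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W_exp = rng.standard_normal((n_hidden, n_in))  # fixed, never trained
        self.W_out = np.zeros((n_out, n_hidden))            # only trainable weights
        self.k = k_active
        self.lr = lr

    def _encode(self, x):
        # Sparse code: keep only the k most strongly driven expansion units.
        a = self.W_exp @ x
        h = np.zeros_like(a)
        top = np.argpartition(a, -self.k)[-self.k:]
        h[top] = 1.0
        return h

    def predict(self, x):
        return self.W_out @ self._encode(x)

    def train_step(self, x, y_onehot):
        # Delta rule on the output weights; sparsity limits which weights
        # each sample touches, so new inputs overwrite fewer old mappings.
        h = self._encode(x)
        err = y_onehot - self.W_out @ h
        self.W_out += self.lr * np.outer(err, h)

# Toy usage: one training step on a random 64-dimensional input.
net = FixedExpansionNet(n_in=64, n_out=10)
x = np.random.rand(64)
y = np.eye(10)[3]
net.train_step(x, y)
print(net.predict(x).argmax())
```

Because only k of the expansion units fire for any given input, each training step updates a small slice of the output weights, which captures the intuition behind reduced interference between sequentially learned mappings.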