
Snap-drift neural network for self-organisation and sequence learning.

Affiliation

Faculty of Computing, London Metropolitan University, 166-220 Holloway Road, London N7 8DB, UK.

Publication information

Neural Netw. 2011 Oct;24(8):897-905. doi: 10.1016/j.neunet.2011.05.007. Epub 2011 Jun 6.

Abstract

This paper presents two novel neural networks based on snap-drift in the context of self-organisation and sequence learning. The snap-drift neural network employs modal learning that combines two modes: fuzzy AND learning (snap) and Learning Vector Quantisation (drift). We present the snap-drift self-organising map (SDSOM) and the recurrent snap-drift neural network (RSDNN). The SDSOM uses the standard SOM architecture, where a layer of input nodes connects to the self-organising map layer and the weight update consists of either snap (min of input and weight) or drift (LVQ, as in SOM). The RSDNN uses a simple recurrent network (SRN) architecture, with the hidden layer values copied back to the input layer. A form of reinforcement learning is deployed in which the mode is swapped between snap and drift when performance drops, and in which adaptation is probabilistic, whereby the probability of a neuron being adapted is reduced as performance increases. The algorithms are evaluated on several well-known data sets, and these networks are found to exhibit effective learning that is faster than alternative neural network methods.
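The two learning modes described in the abstract can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' implementation: the function names, the learning rate `alpha`, and the use of `1 - perf` as the adaptation probability are assumptions for illustration; only the element-wise minimum (fuzzy AND snap), the LVQ-style move toward the input (drift), and the idea that adaptation becomes less likely as performance rises come from the abstract.

```python
import numpy as np


def snap(w, x):
    # Snap (fuzzy AND): element-wise minimum of weight and input vectors.
    return np.minimum(w, x)


def drift(w, x, alpha=0.1):
    # Drift (LVQ-style): move the weight vector a fraction alpha toward the input.
    return w + alpha * (x - w)


def snap_drift_update(w, x, mode, perf, rng, alpha=0.1):
    """One probabilistic snap-drift weight update (illustrative sketch).

    perf is a performance score in [0, 1]; the neuron is adapted with an
    assumed probability of (1 - perf), so adaptation becomes rarer as
    performance improves, as the abstract describes.
    """
    if rng.random() < (1.0 - perf):
        w = snap(w, x) if mode == "snap" else drift(w, x, alpha)
    return w
```

In a full training loop the abstract's reinforcement signal would also swap `mode` between `"snap"` and `"drift"` whenever measured performance drops; that control logic is omitted here for brevity.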

