
Robustness to Training Disturbances in SpikeProp Learning.

Author Information

Shrestha Sumit Bam, Song Qing

Publication Information

IEEE Trans Neural Netw Learn Syst. 2018 Jul;29(7):3126-3139. doi: 10.1109/TNNLS.2017.2713125. Epub 2017 Jul 4.

Abstract

Stability is a key issue during spiking neural network training using SpikeProp. The inherent nonlinearity of the spiking neuron means that the learning manifold changes abruptly; therefore, we need to carefully choose the learning step at every instance. Other sources of instability are the external disturbances that come along with the training samples as well as the internal disturbances that arise due to modeling imperfection. The unstable learning scenario can be indirectly observed in the form of surges, which are sudden increases in the learning cost and are a common occurrence during SpikeProp training. Research in the past has shown that a proper learning step size is crucial to minimize surges during the training process. To determine a proper learning step that avoids steep learning manifolds, we perform weight convergence analysis of SpikeProp learning in the presence of disturbance signals. The weight convergence analysis is further extended to robust stability analysis linked with the overall system error. This ensures boundedness of the total learning error under the minimal assumption of bounded disturbance signals. These analyses result in a learning rate normalization scheme, which is the key result of this paper. The performance of learning using this scheme has been compared with the prevailing methods on different benchmark data sets, and the results show that this method has stable learning reflected by minimal surges during learning, a higher success rate over training instances, and faster learning as well.
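The abstract does not give the paper's normalization formula, which is derived from the weight-convergence analysis in the full text. As an illustrative sketch only, the snippet below shows the general idea behind learning rate normalization (here an NLMS-style rule that scales the step by the gradient energy, so steep regions of the learning manifold receive smaller effective steps) together with a simple surge counter; the function names and the specific rule are assumptions, not the authors' method.

```python
import numpy as np

def normalized_step(w, grad, eta=0.1, eps=1e-8):
    """One weight update with an NLMS-style normalized learning rate.

    Illustrative only: the paper's actual SpikeProp normalization is
    derived from its weight-convergence analysis. This sketch just
    scales the step by the gradient energy so that large gradients
    (steep manifold regions) yield smaller effective steps.
    """
    step = eta / (eps + np.dot(grad, grad))
    return w - step * grad

def count_surges(costs):
    """Count surges: epochs where the learning cost suddenly increases."""
    costs = np.asarray(costs, dtype=float)
    return int(np.sum(costs[1:] > costs[:-1]))

# Toy quadratic cost 0.5*||w||^2, whose gradient is w itself.
def cost(w):
    return 0.5 * np.dot(w, w)

w = np.array([5.0, -3.0])
history = []
for _ in range(50):
    history.append(cost(w))
    w = normalized_step(w, w)  # grad of 0.5*||w||^2 is w
```

On this toy cost the normalized step keeps the update contractive, so the recorded cost history contains no surges; a fixed (unnormalized) step that is too large for the local curvature is what produces them.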

