IEEE Trans Neural Netw Learn Syst. 2015 Feb;26(2):394-9. doi: 10.1109/TNNLS.2014.2312421.
Learning algorithms play an important role in the practical application of neural networks based on principal component analysis, often determining the success, or otherwise, of these applications. These algorithms must not diverge, but it is very difficult to study their convergence properties directly, because they are described by stochastic discrete-time (SDT) algorithms. This brief analyzes the original SDT algorithms directly and derives invariant sets that guarantee the nondivergence of these algorithms in a stochastic environment, provided the learning parameters are properly selected. Our theoretical results are verified by a series of simulation examples.
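To make the setting concrete, the following is a minimal sketch of an SDT PCA learning rule of the kind the brief studies. Oja's rule is used here as a representative example only; the learning rate eta, the initialization, and the boundedness check are illustrative assumptions, not the paper's exact invariant-set conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data whose principal component we want to learn.
C = np.array([[3.0, 1.0], [1.0, 1.0]])   # true covariance (assumed example)
L = np.linalg.cholesky(C)

w = rng.standard_normal(2)                # random initial weight vector
w /= np.linalg.norm(w)
eta = 0.01                                # small fixed learning rate (illustrative)

for k in range(20000):
    x = L @ rng.standard_normal(2)        # one stochastic input sample
    y = w @ x                             # neuron output
    w = w + eta * y * (x - y * w)         # Oja's SDT update

    # Nondivergence in practice: with a suitably small eta the weight
    # norm stays in a bounded (invariant) region; this check is a stand-in
    # for the invariant sets derived analytically in the brief.
    assert np.linalg.norm(w) < 10.0, "update diverged"

# w should align with the leading eigenvector of C.
eigval, eigvec = np.linalg.eigh(C)
print("learned w:", w / np.linalg.norm(w))
print("true PC  :", eigvec[:, -1])
```

The key design choice the brief addresses is visible here: whether the stochastic update stays bounded depends on the learning parameter eta, and a sufficiently small value keeps the iterates inside an invariant set rather than relying on any deterministic averaging argument.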