Warsaw University of Technology, Institute of Computer Science, Nowowiejska 15/19, 00-665 Warsaw, Poland.
Neural Netw. 2017 Dec;96:1-10. doi: 10.1016/j.neunet.2017.07.007. Epub 2017 Sep 7.
In this paper, the classic momentum algorithm for stochastic optimization is considered. A method is introduced that adjusts the coefficients of this algorithm during its operation. The method does not depend on any preliminary knowledge of the optimization problem. In the experimental study, the method is applied to on-line learning in feed-forward neural networks, including deep auto-encoders, and outperforms any fixed setting of the coefficients. The method eliminates coefficients that are difficult to determine and that have a profound influence on performance. While the method itself has some coefficients, they are easy to determine, and the sensitivity of performance to them is low. Consequently, the method makes on-line learning a practically parameter-free process and broadens the area of potential application of this technology.
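For reference, below is a minimal sketch of the classic momentum (heavy-ball) update that the abstract refers to. The fixed learning-rate and momentum coefficients (`lr` and `momentum` here) are precisely the quantities the paper's method adapts on-line; the adaptation scheme itself is described in the paper, not reproduced here, and all names and values in this sketch are illustrative assumptions.

```python
import numpy as np

def momentum_step(theta, velocity, grad, lr=0.01, momentum=0.9):
    """One classic momentum update for stochastic optimization.

    lr and momentum are the fixed coefficients that the paper's method
    adjusts during operation; the defaults here are illustrative only.
    """
    velocity = momentum * velocity - lr * grad
    theta = theta + velocity
    return theta, velocity

# Toy usage: minimize f(x) = ||x||^2 from noisy (stochastic) gradients.
rng = np.random.default_rng(0)
theta = rng.normal(size=5)
velocity = np.zeros_like(theta)
for _ in range(200):
    grad = 2.0 * theta + rng.normal(scale=0.1, size=theta.shape)
    theta, velocity = momentum_step(theta, velocity, grad)
print(theta)  # near the optimum at the origin
```

With fixed coefficients, performance depends strongly on choosing `lr` and `momentum` well for the problem at hand; the paper's contribution is a method that tunes such coefficients automatically while the optimization runs.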