Sakai Hiroyuki, Iiduka Hideaki
IEEE Trans Cybern. 2022 Aug;52(8):7328-7339. doi: 10.1109/TCYB.2021.3049845. Epub 2022 Jul 19.
This article proposes a Riemannian adaptive optimization algorithm for optimizing the parameters of deep neural networks. The algorithm extends both AMSGrad in Euclidean space and RAMSGrad on a Riemannian manifold, and it resolves two issues affecting RAMSGrad. The first is that it solves the Riemannian stochastic optimization problem directly, whereas RAMSGrad only achieves low regret. The second is that it can use constant learning rates, which makes it practical to implement. Additionally, we apply the proposed algorithm to Poincaré embeddings, which embed the transitive closure of the WordNet nouns into the Poincaré ball model of hyperbolic space. Numerical experiments show that, regardless of the initial learning rate, our algorithm converges stably to the optimal solution and does so faster than the existing algorithms.
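The abstract does not state the update rule, but the following minimal NumPy sketch illustrates what an AMSGrad-style adaptive step on the Poincaré ball can look like, assuming the standard conformal metric with factor lambda_x = 2 / (1 - ||x||^2): the Euclidean gradient is rescaled by the inverse metric, first and second moments are accumulated with AMSGrad's running maximum, and the iterate moves along the exponential map. The function names (mobius_add, exp_map, amsgrad_step), the hyperparameters, and the omission of parallel transport of the momentum term are all illustrative simplifications, not the paper's exact algorithm.

import numpy as np

EPS = 1e-10

def mobius_add(x, y):
    # Mobius addition on the Poincare ball (curvature -1).
    x2, y2, xy = float(x @ x), float(y @ y), float(x @ y)
    num = (1.0 + 2.0 * xy + y2) * x + (1.0 - x2) * y
    return num / (1.0 + 2.0 * xy + x2 * y2)

def exp_map(x, v):
    # Exponential map at x: move along the geodesic leaving x with velocity v.
    lam = 2.0 / (1.0 - float(x @ x) + EPS)  # conformal factor of the metric
    nv = np.linalg.norm(v) + EPS
    return mobius_add(x, np.tanh(lam * nv / 2.0) * v / nv)

def riemannian_grad(x, egrad):
    # Rescale the Euclidean gradient by the inverse of the Poincare metric.
    return ((1.0 - float(x @ x)) ** 2 / 4.0) * egrad

def amsgrad_step(x, egrad, m, v, v_hat, lr=0.01, beta1=0.9, beta2=0.999):
    g = riemannian_grad(x, egrad)
    m = beta1 * m + (1.0 - beta1) * g             # first moment (the paper also
                                                  # transports m to the new point;
                                                  # this sketch omits that step)
    v = beta2 * v + (1.0 - beta2) * float(g @ g)  # second moment from the grad norm
    v_hat = max(v_hat, v)                         # AMSGrad running maximum
    x_new = exp_map(x, -lr * m / (np.sqrt(v_hat) + EPS))
    return x_new, m, v, v_hat

# Toy usage: pull one point on the ball toward the origin.
x = np.array([0.3, 0.4])
m, v, v_hat = np.zeros_like(x), 0.0, 0.0
for _ in range(500):
    egrad = 2.0 * x                               # Euclidean gradient of ||x||^2
    x, m, v, v_hat = amsgrad_step(x, egrad, m, v, v_hat)
print(np.linalg.norm(x))                          # the norm shrinks toward 0

The running maximum kept in v_hat is the AMSGrad device that makes the adaptive denominator non-decreasing; this is the property that analyses of such methods typically use to permit constant learning rates, which matches the abstract's claim about implementability in practice.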