Constrained Quaternion-Variable Convex Optimization: A Quaternion-Valued Recurrent Neural Network Approach.

Author Information

Liu Yang, Zheng Yanling, Lu Jianquan, Cao Jinde, Rutkowski Leszek

Publication Information

IEEE Trans Neural Netw Learn Syst. 2020 Mar;31(3):1022-1035. doi: 10.1109/TNNLS.2019.2916597. Epub 2019 Jun 20.

Abstract

This paper proposes a quaternion-valued one-layer recurrent neural network approach to solving constrained convex optimization problems with quaternion variables. Leveraging the generalized Hamilton-real (GHR) calculus, quaternion gradient-based optimization techniques are developed so that the algorithms are derived directly in the quaternion field, rather than by decomposing the problems into the complex or real domain. Via the chain rule and Lyapunov's theorem, a rigorous analysis shows that the deliberately designed quaternion-valued one-layer recurrent neural network stabilizes the system dynamics: the states reach the feasible region in finite time and ultimately converge to the optimal solution of the considered constrained convex optimization problems. Numerical simulations verify the theoretical results.
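As a rough illustration of the idea (not the network designed in the paper), the following sketch simulates a generic projection-type recurrent dynamics dq/dt = -q + P_Omega(q - alpha * grad f(q)) for a quaternion variable q represented by its four real components. The objective f(q) = |q - a|^2 is real-valued, so its GHR-based descent direction coincides, up to a real scale factor, with the ordinary gradient with respect to those components; the feasible set is taken to be a norm ball. The objective, constraint set, target quaternion a, and gains alpha, dt are all illustrative assumptions.

import numpy as np

# Illustrative target quaternion a = 1 - 2i + 0.5j + 3k, stored as (a_w, a_x, a_y, a_z).
a = np.array([1.0, -2.0, 0.5, 3.0])
radius = 2.0  # assumed feasible set: the ball {q : |q| <= radius}

def grad_f(q):
    # Gradient of f(q) = |q - a|^2 with respect to the four real components;
    # for this real-valued objective it matches the GHR gradient up to a real scale.
    return 2.0 * (q - a)

def project(q):
    # Euclidean projection onto the ball of the given radius.
    norm = np.linalg.norm(q)
    return q if norm <= radius else q * (radius / norm)

q = np.zeros(4)          # network state, initialized at the origin
alpha, dt = 0.2, 0.05    # gradient gain and Euler step size (assumed values)

# Forward-Euler integration of dq/dt = -q + P_Omega(q - alpha * grad f(q)).
for _ in range(2000):
    q = q + dt * (-q + project(q - alpha * grad_f(q)))

print("network state:", q)            # approaches the constrained minimizer
print("projected optimum:", project(a))

At an equilibrium the state satisfies q = P_Omega(q - alpha * grad f(q)), which is exactly the optimality condition for minimizing a convex f over the convex set Omega; here the minimizer is the projection of a onto the ball, and the simulated state converges to it.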
