A collaborative neurodynamic approach with two-timescale projection neural networks designed via majorization-minimization for global optimization and distributed global optimization.

Affiliations

School of Mathematical Sciences, Zhejiang Normal University, Jinhua, Zhejiang, 321004, China.

School of Mathematics, Southeast University, Nanjing, Jiangsu, 210096, China.

Publication Information

Neural Netw. 2024 Nov;179:106525. doi: 10.1016/j.neunet.2024.106525. Epub 2024 Jul 11.

Abstract

In this paper, two two-timescale projection neural networks are proposed based on the majorization-minimization principle for nonconvex optimization and distributed nonconvex optimization. They are proved to be globally convergent to Karush-Kuhn-Tucker points. A collaborative neurodynamic approach leverages multiple two-timescale projection neural networks, repeatedly re-initialized by a meta-heuristic rule, for global optimization and distributed global optimization. Two numerical examples are presented to demonstrate the efficacy of the proposed approaches.
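
To make the abstract's two ingredients concrete, the sketch below is a rough illustration only, not the paper's method: it discretizes a classic single-timescale projection neural network, dx/dt = -x + P(x - grad f(x)), whose equilibria are KKT points of a box-constrained problem, and it restarts several such searches with a PSO-style update as an assumed stand-in for the paper's meta-heuristic re-initialization rule. The objective, box bounds, step sizes, and swarm coefficients are all illustrative assumptions; the paper's two-timescale, majorization-minimization design and its distributed variant are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative nonconvex objective (Rastrigin) on the box [-5, 5]^2.
def f(x):
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def grad_f(x):
    return 2.0 * x + 20.0 * np.pi * np.sin(2.0 * np.pi * x)

LO, HI = -5.0, 5.0

def project(x):
    # Projection onto the feasible box; a general convex feasible set
    # would use its own projection operator here.
    return np.clip(x, LO, HI)

def pnn_search(x0, step=2e-3, iters=5000):
    # Euler discretization of the projected dynamics
    #   dx/dt = -x + P(x - grad f(x)),
    # whose equilibria satisfy x = P(x - grad f(x)), i.e. the KKT
    # condition for min f(x) subject to x in the box.
    x = project(x0)
    for _ in range(iters):
        x = x + step * (-x + project(x - grad_f(x)))
    return x

def collaborative_search(n_agents=8, rounds=15):
    # Several neurodynamic searches run from different initial states;
    # between rounds the states are re-initialized with a PSO-style
    # update (an assumed stand-in for the paper's meta-heuristic rule).
    pos = rng.uniform(LO, HI, size=(n_agents, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.full(n_agents, np.inf)
    gbest, gbest_val = pos[0].copy(), np.inf
    for _ in range(rounds):
        for i in range(n_agents):
            xi = pnn_search(pos[i])   # local search toward a KKT point
            vi = f(xi)
            if vi < pbest_val[i]:
                pbest_val[i], pbest[i] = vi, xi
            if vi < gbest_val:
                gbest_val, gbest = vi, xi
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = project(pos + vel)
    return gbest, gbest_val

if __name__ == "__main__":
    x_best, f_best = collaborative_search()
    print("best point:", x_best, "objective value:", f_best)
```

The interplay mirrors the abstract's structure: each inner run only reaches a KKT point, which may be a local minimum, while the outer re-initialization rule moves the starting states toward better basins, which is what enables the collaborative scheme to pursue global optima.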
