


A collaborative neurodynamic approach with two-timescale projection neural networks designed via majorization-minimization for global optimization and distributed global optimization.

Affiliations

School of Mathematical Sciences, Zhejiang Normal University, Jinhua, Zhejiang, 321004, China.

School of Mathematics, Southeast University, Nanjing, Jiangsu, 210096, China.

Publication

Neural Netw. 2024 Nov;179:106525. doi: 10.1016/j.neunet.2024.106525. Epub 2024 Jul 11.

DOI: 10.1016/j.neunet.2024.106525
PMID: 39042949
Abstract

In this paper, two two-timescale projection neural networks are proposed based on the majorization-minimization principle for nonconvex optimization and distributed nonconvex optimization. They are proved to be globally convergent to Karush-Kuhn-Tucker points. A collaborative neurodynamic approach leverages multiple two-timescale projection neural networks repeatedly re-initialized using a meta-heuristic rule for global optimization and distributed global optimization. Two numerical examples are elaborated to demonstrate the efficacy of the proposed approaches.
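The abstract describes the collaborative scheme only at a high level: several projection neural networks each converge to a KKT point, and a meta-heuristic rule repeatedly re-initializes them to search for a global optimum. As a rough illustration only (not the paper's actual algorithm), the sketch below applies Euler-discretized projected-gradient dynamics to a one-dimensional box-constrained nonconvex problem, with multiple runs re-initialized around the incumbent best in a loosely PSO-style rule. All function names, the test objective, and the constants are illustrative assumptions.

```python
import random

def f(x):
    # Illustrative nonconvex objective with two local minima on [-2, 2];
    # the global minimum sits near x = -1.30.
    return x**4 - 3 * x**2 + x

def grad_f(x):
    return 4 * x**3 - 6 * x + 1

def project(x, lo=-2.0, hi=2.0):
    # Projection onto the box constraint [lo, hi].
    return max(lo, min(hi, x))

def pnn_run(x0, step=0.01, iters=2000):
    """Euler discretization of projection-neural-network dynamics
    x' = -x + P(x - step * grad f(x)); the state settles at a
    stationary (KKT) point of f on the box."""
    x = x0
    for _ in range(iters):
        x = x + 0.5 * (project(x - step * grad_f(x)) - x)
    return x

def collaborative_search(n_agents=5, rounds=10, seed=0):
    """Toy stand-in for the collaborative layer: run several networks
    from random states, then repeatedly re-initialize them around the
    best solution found so far (a loosely PSO-style rule)."""
    rng = random.Random(seed)
    xs = [rng.uniform(-2, 2) for _ in range(n_agents)]
    best = min((pnn_run(x) for x in xs), key=f)
    for _ in range(rounds):
        xs = [project(best + rng.gauss(0, 0.5)) for _ in range(n_agents)]
        cand = min((pnn_run(x) for x in xs), key=f)
        if f(cand) < f(best):
            best = cand
    return best
```

Each inner run only guarantees a stationary point; the outer re-initialization loop is what gives the multi-start scheme a chance to escape the shallower basin, mirroring the division of labor the abstract describes between the networks and the meta-heuristic rule.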


Similar Articles

1. A collaborative neurodynamic approach with two-timescale projection neural networks designed via majorization-minimization for global optimization and distributed global optimization.
Neural Netw. 2024 Nov;179:106525. doi: 10.1016/j.neunet.2024.106525. Epub 2024 Jul 11.
2. An event-triggered collaborative neurodynamic approach to distributed global optimization.
Neural Netw. 2024 Jan;169:181-190. doi: 10.1016/j.neunet.2023.10.022. Epub 2023 Oct 19.
3. Two-timescale projection neural networks in collaborative neurodynamic approaches to global optimization and distributed optimization.
Neural Netw. 2024 Jan;169:83-91. doi: 10.1016/j.neunet.2023.10.011. Epub 2023 Oct 16.
4. A collective neurodynamic optimization approach to bound-constrained nonconvex optimization.
Neural Netw. 2014 Jul;55:20-9. doi: 10.1016/j.neunet.2014.03.006. Epub 2014 Mar 28.
5. Distributed continuous-time accelerated neurodynamic approaches for sparse recovery via smooth approximation to L-minimization.
Neural Netw. 2024 Apr;172:106123. doi: 10.1016/j.neunet.2024.106123. Epub 2024 Jan 10.
6. Sparse Bayesian Learning Based on Collaborative Neurodynamic Optimization.
IEEE Trans Cybern. 2022 Dec;52(12):13669-13683. doi: 10.1109/TCYB.2021.3090204. Epub 2022 Nov 18.
7. Collaborative neurodynamic optimization for solving nonlinear equations.
Neural Netw. 2023 Aug;165:483-490. doi: 10.1016/j.neunet.2023.05.054. Epub 2023 Jun 7.
8. A collective neurodynamic penalty approach to nonconvex distributed constrained optimization.
Neural Netw. 2024 Mar;171:145-158. doi: 10.1016/j.neunet.2023.12.011. Epub 2023 Dec 9.
9. Cardinality-constrained portfolio selection via two-timescale duplex neurodynamic optimization.
Neural Netw. 2022 Sep;153:399-410. doi: 10.1016/j.neunet.2022.06.023. Epub 2022 Jun 23.
10. Smoothing inertial neurodynamic approach for sparse signal reconstruction via L-norm minimization.
Neural Netw. 2021 Aug;140:100-112. doi: 10.1016/j.neunet.2021.02.006. Epub 2021 Feb 27.