

A New Discrete-Time Multi-Constrained $K$-Winner-Take-All Recurrent Network and Its Application to Prioritized Scheduling

Author Information

Tien Po-Lung

Publication Information

IEEE Trans Neural Netw Learn Syst. 2017 Nov;28(11):2674-2685. doi: 10.1109/TNNLS.2016.2600410. Epub 2016 Aug 26.

DOI: 10.1109/TNNLS.2016.2600410
PMID: 28113608
Abstract

In this paper, we propose a novel discrete-time recurrent neural network aiming to resolve a new class of multi-constrained K-winner-take-all (K-WTA) problems. By facilitating specially designed asymmetric neuron weights, the proposed model is capable of operating in a fully parallel manner, thereby allowing true digital implementation. This paper also provides theorems that delineate the theoretical upper bound of the convergence latency, which is merely O(K). Importantly, via simulations, the average convergence time is close to O(1) in most general cases. Moreover, as the multi-constrained K-WTA problem degenerates to a traditional single-constrained problem, the upper bound becomes exactly two parallel iterations, which significantly outperforms the existing K-WTA models. By associating the neurons and neuron weights with routing paths and path priorities, respectively, we then apply the model to a prioritized flow scheduler for the data center networks. Through extensive simulations, we demonstrate that the proposed scheduler converges to the equilibrium state within near-constant time for different scales of networks while achieving maximal throughput, quality-of-service priority differentiation, and minimum energy consumption, subject to the flow contention-free constraints.
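The abstract's central object is the $K$-winner-take-all operation: given $N$ competing inputs, exactly the $K$ largest should "win" (output 1) and the rest should lose (output 0). The paper's specific recurrent network (asymmetric neuron weights, fully parallel discrete-time updates, $O(K)$ convergence bound) is not reproduced here; the following is only a minimal sketch of the K-WTA selection itself, so the problem being solved is concrete. All names in it are illustrative, not from the paper.

```python
def k_winners_take_all(activations, k):
    """Abstract K-WTA operation: return a 0/1 vector marking the
    k largest activations as winners.

    This computes the *result* the paper's recurrent network converges
    to; it does not model the network's parallel iterative dynamics.
    """
    n = len(activations)
    if not 0 <= k <= n:
        raise ValueError("k must satisfy 0 <= k <= len(activations)")
    # Rank neuron indices by activation, largest first.
    order = sorted(range(n), key=lambda i: activations[i], reverse=True)
    # Winners output 1, losers output 0.
    output = [0] * n
    for i in order[:k]:
        output[i] = 1
    return output


print(k_winners_take_all([0.2, 0.9, 0.5, 0.1, 0.7], 2))
# -> [0, 1, 0, 0, 1]  (the two largest inputs, 0.9 and 0.7, win)
```

In the scheduling application described above, each index would correspond to a candidate routing path and each activation to that path's priority, so the $K$ winners are the flows admitted in a scheduling round.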


Similar Articles

1. A New Discrete-Time Multi-Constrained $K$-Winner-Take-All Recurrent Network and Its Application to Prioritized Scheduling. IEEE Trans Neural Netw Learn Syst. 2017 Nov;28(11):2674-2685. doi: 10.1109/TNNLS.2016.2600410. Epub 2016 Aug 26.
2. A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application. Neural Netw. 2013 Mar;39:27-39. doi: 10.1016/j.neunet.2012.12.009. Epub 2013 Jan 7.
3. A novel recurrent neural network with one neuron and finite-time convergence for k-winners-take-all operation. IEEE Trans Neural Netw. 2010 Jul;21(7):1140-8. doi: 10.1109/TNN.2010.2050781.
4. Dynamic analysis of a general class of winner-take-all competitive neural networks. IEEE Trans Neural Netw. 2010 May;21(5):771-83. doi: 10.1109/TNN.2010.2041671. Epub 2010 Mar 8.
5. A general mean-based iterative winner-take-all neural network. IEEE Trans Neural Netw. 1995;6(1):14-24. doi: 10.1109/72.363454.
6. A new recurrent neural network for solving convex quadratic programming problems with an application to the k-winners-take-all problem. IEEE Trans Neural Netw. 2009 Apr;20(4):654-64. doi: 10.1109/TNN.2008.2011266. Epub 2009 Feb 18.
7. Initialization-Based k-Winners-Take-All Neural Network Model Using Modified Gradient Descent. IEEE Trans Neural Netw Learn Syst. 2023 Aug;34(8):4130-4138. doi: 10.1109/TNNLS.2021.3123240. Epub 2023 Aug 4.
8. Layer Winner-Take-All neural networks based on existing competitive structures. IEEE Trans Syst Man Cybern B Cybern. 2000;30(1):25-30. doi: 10.1109/3477.826944.
9. Distributed k-Winners-Take-All Network: An Optimization Perspective. IEEE Trans Cybern. 2023 Aug;53(8):5069-5081. doi: 10.1109/TCYB.2022.3170236. Epub 2023 Jul 18.
10. Selective positive-negative feedback produces the winner-take-all competition in recurrent neural networks. IEEE Trans Neural Netw Learn Syst. 2013 Feb;24(2):301-9. doi: 10.1109/TNNLS.2012.2230451.