

Generalized M-sparse algorithms for constructing fault tolerant RBF networks.

Affiliations

Center for Intelligent Multidimensional Data Analysis, Hong Kong Science Park, Shatin, Hong Kong Special Administrative Region of China; Department of Electrical Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region of China.

Department of Electrical Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region of China.

Publication Information

Neural Netw. 2024 Dec;180:106633. doi: 10.1016/j.neunet.2024.106633. Epub 2024 Aug 14.

DOI: 10.1016/j.neunet.2024.106633
PMID: 39208461
Abstract

In the construction process of radial basis function (RBF) networks, two common crucial issues arise: the selection of RBF centers and the effective utilization of the given source without encountering the overfitting problem. Another important issue is the fault tolerant capability. That is, when noise or faults exist in a trained network, it is crucial that the network's performance does not undergo significant deterioration or decrease. However, without employing a fault tolerant procedure, a trained RBF network may exhibit significantly poor performance. Unfortunately, most existing algorithms are unable to simultaneously address all of the aforementioned issues. This paper proposes fault tolerant training algorithms that can simultaneously select RBF nodes and train RBF output weights. Additionally, our algorithms can directly control the number of RBF nodes in an explicit manner, eliminating the need for a time-consuming procedure to tune the regularization parameter and achieve the target RBF network size. Based on simulation results, our algorithms demonstrate improved test set performance when more RBF nodes are used, effectively utilizing the given source without encountering the overfitting problem. This paper first defines a fault tolerant objective function, which includes a term to suppress the effects of weight faults and weight noise. This term also prevents the issue of overfitting, resulting in better test set performance when more RBF nodes are utilized. With the defined objective function, the training process is designed to solve a generalized M-sparse problem by incorporating an ℓ-norm constraint. The ℓ-norm constraint allows us to directly and explicitly control the number of RBF nodes. To address the generalized M-sparse problem, we introduce the noise-resistant iterative hard thresholding (NR-IHT) algorithm. The convergence properties of the NR-IHT algorithm are subsequently discussed theoretically. To further enhance performance, we incorporate the momentum concept into the NR-IHT algorithm, referring to the modified version as "NR-IHT-Mom". Simulation results show that both the NR-IHT algorithm and the NR-IHT-Mom algorithm outperform several state-of-the-art comparison algorithms.
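The M-sparse formulation described in the abstract — fitting output weights under an explicit cardinality constraint, solved by iterative hard thresholding with an optional momentum term — can be sketched generically. The snippet below is a minimal illustration of plain IHT on a least-squares objective, not the paper's noise-resistant NR-IHT objective (which adds a fault/noise-suppression term); the function names, step-size choice, and toy data are assumptions for illustration only.

```python
import numpy as np

def hard_threshold(w, M):
    # Projection onto the M-sparse set: keep the M largest-magnitude
    # entries of w and zero out the rest.
    out = np.zeros_like(w)
    if M > 0:
        idx = np.argpartition(np.abs(w), -M)[-M:]
        out[idx] = w[idx]
    return out

def iht(Phi, y, M, step=None, iters=1000, momentum=0.0):
    # Approximately minimize ||y - Phi @ w||^2 subject to ||w||_0 <= M
    # by projected gradient descent (iterative hard thresholding),
    # optionally with a heavy-ball momentum term (momentum > 0 gives a
    # crude analogue of the "-Mom" variant mentioned in the abstract).
    n, d = Phi.shape
    if step is None:
        # 1/L with L = ||Phi||_2^2, the Lipschitz constant of the gradient.
        step = 1.0 / (np.linalg.norm(Phi, 2) ** 2)
    w = np.zeros(d)
    w_prev = np.zeros(d)
    for _ in range(iters):
        grad = Phi.T @ (Phi @ w - y)
        v = w - step * grad + momentum * (w - w_prev)
        w_prev = w
        w = hard_threshold(v, M)
    return w
```

In an RBF setting, `Phi` would be the hidden-layer activation matrix (one column per candidate RBF node), so the ℓ0-style constraint directly fixes how many nodes survive — the "direct and explicit control of the number of RBF nodes" that the abstract contrasts with tuning a regularization parameter.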


Similar Articles

1. Generalized M-sparse algorithms for constructing fault tolerant RBF networks.
   Neural Netw. 2024 Dec;180:106633. doi: 10.1016/j.neunet.2024.106633. Epub 2024 Aug 14.
2. ADMM-Based Algorithm for Training Fault Tolerant RBF Networks and Selecting Centers.
   IEEE Trans Neural Netw Learn Syst. 2018 Aug;29(8):3870-3878. doi: 10.1109/TNNLS.2017.2731319. Epub 2017 Aug 15.
3. A fault-tolerant regularizer for RBF networks.
   IEEE Trans Neural Netw. 2008 Mar;19(3):493-507. doi: 10.1109/TNN.2007.912320.
4. Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks.
   IEEE Trans Neural Netw. 2010 Jun;21(6):938-47. doi: 10.1109/TNN.2010.2046179. Epub 2010 Apr 12.
5. A Regularizer Approach for RBF Networks Under the Concurrent Weight Failure Situation.
   IEEE Trans Neural Netw Learn Syst. 2017 Jun;28(6):1360-1372. doi: 10.1109/TNNLS.2016.2536172. Epub 2016 Mar 28.
6. On the selection of weight decay parameter for faulty networks.
   IEEE Trans Neural Netw. 2010 Aug;21(8):1232-44. doi: 10.1109/TNN.2010.2049580.
7. Efficient algorithm for training interpolation RBF networks with equally spaced nodes.
   IEEE Trans Neural Netw. 2011 Jun;22(6):982-8. doi: 10.1109/TNN.2011.2120619. Epub 2011 May 10.
8. Regularization Effect of Random Node Fault/Noise on Gradient Descent Learning Algorithm.
   IEEE Trans Neural Netw Learn Syst. 2023 May;34(5):2619-2632. doi: 10.1109/TNNLS.2021.3107051. Epub 2023 May 2.
9. The superior fault tolerance of artificial neural network training with a fault/noise injection-based genetic algorithm.
   Protein Cell. 2016 Oct;7(10):735-748. doi: 10.1007/s13238-016-0302-5. Epub 2016 Aug 9.
10. Online sequential echo state network with sparse RLS algorithm for time series prediction.
   Neural Netw. 2019 Oct;118:32-42. doi: 10.1016/j.neunet.2019.05.006. Epub 2019 May 29.