Center for Intelligent Multidimensional Data Analysis, Hong Kong Science Park, Shatin, Hong Kong Special Administrative Region of China; Department of Electrical Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region of China.
Department of Electrical Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region of China.
Neural Netw. 2024 Dec;180:106633. doi: 10.1016/j.neunet.2024.106633. Epub 2024 Aug 14.
In constructing radial basis function (RBF) networks, two crucial issues commonly arise: the selection of RBF centers and the effective use of the given data without overfitting. Another important issue is fault tolerance: when noise or faults exist in a trained network, it is crucial that the network's performance does not deteriorate significantly. However, without a fault tolerant training procedure, a trained RBF network may perform very poorly. Unfortunately, most existing algorithms cannot address all of these issues simultaneously. This paper proposes fault tolerant training algorithms that simultaneously select RBF nodes and train the RBF output weights. In addition, our algorithms control the number of RBF nodes directly and explicitly, eliminating the time-consuming tuning of a regularization parameter needed to reach a target network size. Simulation results show that our algorithms achieve better test set performance as more RBF nodes are used, making effective use of the given data without overfitting. The paper first defines a fault tolerant objective function that includes a term to suppress the effects of weight faults and weight noise; this term also prevents overfitting, yielding better test set performance when more RBF nodes are used. With this objective function, training is formulated as a generalized M-sparse problem with an ℓ0-norm constraint, which allows us to control the number of RBF nodes directly and explicitly. To solve the generalized M-sparse problem, we introduce the noise-resistant iterative hard thresholding (NR-IHT) algorithm, whose convergence properties are then analyzed theoretically.
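The M-sparse formulation can be illustrated with a generic iterative-hard-thresholding (IHT) sketch. This is not the paper's NR-IHT: the fault tolerant regularizer is replaced here by a plain ℓ2 penalty (`beta`) as a stand-in, and all names and step-size choices are illustrative assumptions. The key point is the hard-threshold projection, which enforces ‖w‖₀ ≤ M and thus fixes the number of active RBF nodes explicitly:

```python
import numpy as np

def hard_threshold(w, M):
    """Keep the M largest-magnitude entries of w and zero the rest.
    This projects w onto the constraint set ||w||_0 <= M."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-M:]
    out[idx] = w[idx]
    return out

def iht(Phi, y, M, beta=0.0, iters=200):
    """Projected-gradient (IHT-style) sketch for
        min_w ||y - Phi w||^2 + beta * ||w||^2   s.t.  ||w||_0 <= M,
    where Phi is the RBF hidden-layer output matrix and w the output
    weights. The beta term is a simple stand-in for the paper's
    fault tolerant regularizer, not its actual form."""
    n = Phi.shape[1]
    # Conservative step size from the spectral norm of Phi.
    step = 1.0 / (np.linalg.norm(Phi, 2) ** 2 + beta)
    w = np.zeros(n)
    for _ in range(iters):
        grad = Phi.T @ (Phi @ w - y) + beta * w
        w = hard_threshold(w - step * grad, M)
    return w
```

Because M is an input to the projection, the trained network has at most M nonzero output weights by construction, with no regularization-parameter search needed to hit the target size.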
To further enhance performance, we incorporate the momentum concept into the NR-IHT algorithm, referring to the modified version as "NR-IHT-Mom". Simulation results show that both the NR-IHT algorithm and the NR-IHT-Mom algorithm outperform several state-of-the-art comparison algorithms.
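The momentum idea can be sketched by adding a heavy-ball term to the IHT update before the hard-threshold projection. Again, this is a generic illustration under assumed names and parameters (`gamma` for the momentum coefficient, a plain ℓ2 penalty `beta` in place of the fault tolerant regularizer); the paper's NR-IHT-Mom update may differ in detail:

```python
import numpy as np

def hard_threshold(w, M):
    """Keep the M largest-magnitude entries of w; zero the rest."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-M:]
    out[idx] = w[idx]
    return out

def iht_momentum(Phi, y, M, beta=0.0, gamma=0.5, iters=300):
    """IHT with a heavy-ball momentum buffer v: the previous update
    direction is carried over (scaled by gamma) before projecting
    back onto the M-sparse set."""
    n = Phi.shape[1]
    step = 1.0 / (np.linalg.norm(Phi, 2) ** 2 + beta)
    w = np.zeros(n)
    v = np.zeros(n)  # momentum buffer
    for _ in range(iters):
        grad = Phi.T @ (Phi @ w - y) + beta * w
        v = gamma * v - step * grad
        w = hard_threshold(w + v, M)
    return w
```

The momentum buffer accumulates consistent gradient directions across iterations, which typically speeds convergence on the ill-conditioned quadratic part of the objective while the projection still enforces the ℓ0 constraint at every step.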