Zhao Yang, Si Daokun, Pei Jihong, Yang Xuan
IEEE Trans Neural Netw Learn Syst. 2024 Jun;35(6):8386-8400. doi: 10.1109/TNNLS.2022.3227296. Epub 2024 Jun 3.
In existing radial basis function neural network (RBFNN)-based methods, it is difficult to back-propagate errors during learning, which leads to an inconsistency between the learning task and the recognition task. This article proposes a geodesic basis function neural network with subclass extension learning (GBFNN-ScE). The geodesic basis function (GBF), defined here for the first time, uses the geodesic distance on a manifold as the measure for computing a sample's response with respect to a local center. To learn the network parameters by back-propagating classification errors, a specific GBF based on a pruned gamma-encoding cosine function is constructed. This function has a concise, explicit expression on the hyperspherical manifold, which facilitates the realization of error back-propagation. In the preprocessing layer, a sample unitization method with no loss of information, nonnegative unit hyperspherical crown (NUHC) mapping, is proposed; it maps each sample onto the support set of the GBF. To alleviate the problem that one-hot encoding does not express within-class differences among data labels effectively, a subclass extension (ScE) learning strategy is proposed; ScE learning helps the learned network be more robust. In operation, GBFNN-ScE first projects the original sample onto the support set of the GBF through the NUHC mapping. The mapped sample is then fed to the nonlinear computation units, composed of GBFs, in the hidden layer. Finally, the hidden-layer responses are weighted by the learned weights to obtain the network output. This article proves theoretically that data under ScE learning are more separable. Experimental results show that the proposed GBFNN-ScE performs better on recognition tasks than existing methods, and ablation experiments show that each of the ideas in GBFNN-ScE contributes to the algorithm's performance.
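The forward pass described above (NUHC mapping, then GBF responses in the hidden layer, then a learned weighted combination) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact NUHC construction and the "pruned gamma encoding cosine function" are not specified in the abstract, so the `nuhc_map` shift-append-normalize scheme and the cosine-power kernel in `gbf_response` are hypothetical stand-ins. The geodesic distance between two unit vectors, however, is genuinely arccos of their inner product.

```python
import numpy as np

def nuhc_map(x, eps=1e-12):
    # Hypothetical sketch of a nonnegative unit hyperspherical crown (NUHC)
    # mapping: shift features to be nonnegative, append a constant component
    # so that no sample collapses to the zero vector (no loss of information),
    # then normalize onto the unit hypersphere. The paper's exact mapping may
    # differ.
    z = x - x.min()                       # make all coordinates nonnegative
    z = np.append(z, 1.0)                 # extra component keeps scale info
    return z / (np.linalg.norm(z) + eps)  # project onto the unit hypersphere

def gbf_response(u, c, beta=4.0):
    # Geodesic basis function on the unit hypersphere: the geodesic distance
    # between unit vectors u and c is arccos(u . c). A simple cosine-power
    # kernel of that distance stands in for the paper's pruned gamma-encoding
    # cosine function (assumed form); it is 1 at the center and decays to 0
    # at a quarter great circle.
    d = np.arccos(np.clip(np.dot(u, c), -1.0, 1.0))
    return float(np.cos(d)) ** beta if d < np.pi / 2 else 0.0

# Forward pass: NUHC map -> hidden-layer GBF responses -> weighted output
rng = np.random.default_rng(0)
x = rng.normal(size=5)                                # raw sample
u = nuhc_map(x)                                       # mapped sample (unit norm)
centers = [nuhc_map(rng.normal(size=5)) for _ in range(3)]  # local centers
h = np.array([gbf_response(u, c) for c in centers])   # hidden-layer responses
W = rng.normal(size=(2, 3))                           # learned output weights
y = W @ h                                             # network output (2 classes)
```

Because both the mapped sample and the centers have nonnegative coordinates, their inner product is nonnegative, so every geodesic distance stays within a quarter great circle and every hidden response lies in [0, 1].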
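The ScE idea of refining one-hot labels can also be sketched. The abstract does not say how subclasses are formed, so the split below (by the median of the first feature) is purely illustrative; the point is only that each class's one-hot label is extended into one-hot labels over finer subclasses, and pooling subclass outputs per class recovers the original decision.

```python
import numpy as np

def sce_one_hot(X, y, n_sub=2):
    # Hypothetical subclass-extension (ScE) labeling sketch: samples of each
    # class are split into n_sub subclasses (here by the median of their
    # first feature; the paper's partition may differ), then one-hot encoded
    # over the finer subclass set, giving within-class labels some room to
    # differ.
    classes = np.unique(y)
    sub = np.zeros(len(y), dtype=int)
    for k, c in enumerate(classes):
        idx = np.where(y == c)[0]
        med = np.median(X[idx, 0])
        # subclass index = class index * n_sub + (0 or 1 from the split)
        sub[idx] = k * n_sub + (X[idx, 0] > med).astype(int)
    labels = np.eye(len(classes) * n_sub)[sub]  # one-hot over subclasses
    return labels, sub

# Example: 2 classes extended to 2 * 2 = 4 subclass labels
X = np.array([[0.1], [0.9], [0.2], [0.8]])
y = np.array([0, 0, 1, 1])
L, sub = sce_one_hot(X, y)
```

At test time, summing the network outputs of the subclasses belonging to each class would yield an ordinary per-class score, so the recognition task itself is unchanged.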