Zhang Congcong, Oh Sung-Kwun, Fu Zunwei, Pedrycz Witold
IEEE Trans Cybern. 2024 May;54(5):2978-2991. doi: 10.1109/TCYB.2022.3228303. Epub 2024 Apr 16.
Fuzzy clustering-based neural networks (FCNNs) built on information granulation techniques have been shown to be effective Takagi-Sugeno (TS)-type fuzzy models. However, existing FCNNs cannot cope well with sequential learning tasks. In this study, we introduce incremental FCNNs (IFCNNs), which can dynamically update themselves whenever new learning data (e.g., a single datum or a block of data) are incorporated into the dataset. Specifically, we employ a dynamic (incremental) fuzzy C-means (FCM) clustering algorithm to reveal the structure in the data and divide the entire input space into several subregions. In this partition, the dynamic FCM adaptively adjusts the positions of its prototypes using sequential data. Because training data arrive sequentially over time, incremental learning methods may lose classification (prediction) accuracy compared with batch learning models. To tackle this challenge, we use quasi-fuzzy local models (QFLMs) based on modified Schmidt neural networks in place of the linear functions commonly used in TS-type fuzzy models, refining and enhancing the ability to represent the behavior of fuzzy subspaces. Meanwhile, recursive least-square-error (LSE) estimation is used to update the weights of the QFLMs from learning data arriving one by one or block by block (with fixed or varying block size). In addition, L regularization is considered to mitigate the deterioration of generalization ability caused by potential overfitting during weight estimation. The proposed method leads to the construction of FCNNs in a new way that can effectively handle incremental data while delivering sound generalization capability. A wide range of machine-learning datasets and a real-world application are used to demonstrate the validity and performance of the presented methods. The experimental results show that the proposal maintains sound classification accuracy while effectively processing sequential data.
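The adaptive prototype adjustment described in the abstract can be illustrated with a minimal sketch of an online FCM step. The membership formula below is the standard FCM one; the per-sample prototype shift and the learning rate `lr` are assumptions for illustration, not the authors' exact dynamic FCM variant.

```python
import numpy as np

def fcm_memberships(x, prototypes, m=2.0, eps=1e-12):
    """Standard FCM membership of a sample x in each prototype
    (fuzzifier m > 1): u_i proportional to d_i^(-2/(m-1))."""
    d = np.linalg.norm(prototypes - x, axis=1) + eps
    ratio = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)

def online_fcm_step(x, prototypes, m=2.0, lr=0.1):
    """One illustrative incremental step: shift each prototype toward
    the newly arrived sample in proportion to its fuzzy membership
    raised to m. The learning rate `lr` is an assumed hyperparameter."""
    u = fcm_memberships(x, prototypes, m)
    prototypes = prototypes + lr * (u ** m)[:, None] * (x - prototypes)
    return prototypes, u
```

Each new sample thus nudges nearby prototypes more strongly than distant ones, which is the behavior the dynamic partitioning relies on as data stream in.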
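The recursive LSE weight updating mentioned in the abstract can be sketched with textbook recursive least squares, which handles one-by-one (block size 1) and block-by-block data uniformly. This is a generic RLS estimator, not the authors' QFLM-specific update; the ridge-style initialization `P = I / lam` is an assumed stand-in for the abstract's L regularization, whose exact norm is not specified there.

```python
import numpy as np

class RecursiveLSE:
    """Generic recursive least-square-error estimator (a sketch;
    the regularization constant `lam` is an assumption)."""

    def __init__(self, n_features, lam=1e-2):
        # P approximates the inverse of the regularized Gram matrix;
        # initializing it as I/lam gives ridge-style damping of the weights.
        self.P = np.eye(n_features) / lam
        self.w = np.zeros(n_features)

    def update(self, X, y):
        """Block update: X is (b, n_features), y is (b,).
        b = 1 recovers the one-by-one case; b may vary per call."""
        X = np.atleast_2d(X)
        y = np.atleast_1d(y)
        # Gain K = P X^T (I + X P X^T)^{-1}, then correct w and shrink P.
        S = np.eye(len(y)) + X @ self.P @ X.T
        K = self.P @ X.T @ np.linalg.inv(S)
        self.w = self.w + K @ (y - X @ self.w)
        self.P = self.P - K @ X @ self.P
        return self.w
```

Because each call folds a new batch into the running solution, the estimator never revisits past data, which is what makes the scheme suitable for the sequential (time-sharing) arrival pattern the abstract describes.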