Constantinopoulos Constantinos, Likas Aristidis
IEEE Trans Neural Netw. 2006 Jul;17(4):966-974. doi: 10.1109/TNN.2006.875982.
The probabilistic radial basis function (PRBF) network constitutes a probabilistic version of the RBF network for classification that extends the typical mixture model approach to classification by allowing the sharing of mixture components among all classes. The typical learning method of PRBF for a classification task employs the expectation-maximization (EM) algorithm and depends strongly on the initial parameter values. In this paper, we propose a technique for incremental training of the PRBF network for classification. The proposed algorithm starts with a single component and incrementally adds more components at appropriate positions in the data space. The addition of a new component is based on criteria for detecting a region in the data space that is crucial for the classification task. After the addition of all components, the algorithm splits every component of the network into subcomponents, each one corresponding to a different class. Experimental results using several well-known classification data sets indicate that the incremental method provides solutions of superior classification performance compared to the hierarchical PRBF training method. We also conducted comparative experiments with the support vector machines method and present the obtained results along with a qualitative comparison of the two approaches.
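The core PRBF idea in the abstract — Gaussian components shared among all classes, with class-specific mixing weights, and classification by the maximum class posterior — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the isotropic variances, the function names, and the toy parameters are all assumptions made for the example.

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    # Isotropic Gaussian density (a simplifying assumption for this sketch).
    d = x.shape[-1]
    diff = x - mu
    return np.exp(-0.5 * np.sum(diff**2, axis=-1) / var) / ((2 * np.pi * var) ** (d / 2))

def prbf_predict(X, mus, vars_, weights, priors):
    """Classify points with a PRBF-style model.

    mus     : (M, d) component centers, shared by all classes
    vars_   : (M,)   isotropic component variances
    weights : (K, M) class-specific mixing coefficients (each row sums to 1)
    priors  : (K,)   class prior probabilities
    """
    # Component responses, shared across classes: shape (N, M).
    comp = np.stack([gaussian_pdf(X, mu, v) for mu, v in zip(mus, vars_)], axis=1)
    # Class-conditional densities p(x|k) = sum_j weights[k, j] * N(x; mu_j, var_j).
    class_lik = comp @ weights.T            # shape (N, K)
    # Unnormalized class posteriors p(k|x) ∝ p(x|k) * p(k); argmax gives the label.
    return np.argmax(class_lik * priors, axis=1)

# Toy usage: two shared components, two classes with complementary weights.
mus = np.array([[0.0, 0.0], [5.0, 5.0]])
vars_ = np.array([1.0, 1.0])
weights = np.array([[1.0, 0.0], [0.0, 1.0]])
priors = np.array([0.5, 0.5])
X = np.array([[0.1, 0.0], [5.1, 4.9]])
labels = prbf_predict(X, mus, vars_, weights, priors)
```

In a full PRBF, the components would be fitted with EM; the incremental scheme of the paper would additionally grow `mus` one component at a time and finally split each component into per-class subcomponents.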