Elizondo David A, Birkenhead Ralph, Góngora Mario, Taillard Eric, Luyima Patrick
Centre for Computational Intelligence, School of Computing, Faculty of Computing Sciences and Engineering, De Montfort University, Leicester, UK.
Neural Netw. 2007 Dec;20(10):1095-108. doi: 10.1016/j.neunet.2007.07.009. Epub 2007 Aug 29.
The Recursive Deterministic Perceptron (RDP) feed-forward multilayer neural network is a generalisation of the single-layer perceptron topology. Unlike the single-layer perceptron, which can only handle classification problems involving linearly separable sets, this model is capable of solving any two-class classification problem. For all classification problems, the construction of an RDP is done automatically and convergence is always guaranteed. Three methods for constructing RDP neural networks exist: Batch, Incremental, and Modular. The Batch method has been extensively tested and shown to produce results comparable with those obtained with other neural network methods such as Back Propagation, Cascade Correlation, Rulex, and Ruleneg. However, the Incremental and Modular methods have not been tested before. Unlike the Batch method, these two methods do not have NP-Complete complexity. A comparative study of the three methods is presented here for the first time. This study highlights the main advantages and disadvantages of each method by comparing the results obtained when building RDP neural networks with the three methods in terms of convergence time, level of generalisation, and topology size. The networks were trained and tested using the following standard benchmark classification datasets: IRIS, SOYBEAN, and Wisconsin Breast Cancer. The results show the effectiveness of the Incremental and Modular methods, whose performance is as good as that of the NP-Complete Batch method but with a much lower complexity level. The results obtained with the RDP are comparable to those obtained with the Back Propagation and Cascade Correlation algorithms.
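To make the construction idea concrete, the following is a minimal Python sketch of the RDP principle described above, under stated assumptions: intermediate perceptrons are added one at a time, each feeding its output back into the input space as an extra dimension, until the augmented two-class problem becomes linearly separable. The three construction methods (Batch, Incremental, Modular) differ in how the intermediate units and their training subsets are selected; the choices below (scikit-learn's Perceptron, decision_function augmentation, a max_units cap) are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.linear_model import Perceptron

def build_rdp(X, y, max_units=20):
    """Conceptual RDP construction sketch (assumed, simplified).

    Intermediate single-layer perceptrons are trained one after another;
    each unit's linear output is appended to the inputs as a new dimension,
    so later units see a progressively higher-dimensional space in which
    the two classes eventually become linearly separable.
    """
    units = []
    X_aug = np.asarray(X, dtype=float)
    for _ in range(max_units):
        clf = Perceptron(max_iter=1000, tol=None)
        clf.fit(X_aug, y)
        if (clf.predict(X_aug) == y).all():
            return units, clf                      # final unit separates the augmented data
        units.append(clf)
        # RDP step: append this unit's output as an extra input dimension.
        X_aug = np.hstack([X_aug, clf.decision_function(X_aug)[:, None]])
    return units, clf                              # cap reached; data may not yet be separated

# Example usage on a two-class subset of IRIS, one of the benchmarks above.
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)
two_class = y < 2
units, final = build_rdp(X[two_class], y[two_class])
print(f"intermediate units added: {len(units)}")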