Ozyildirim Buse Melis, Avci Mutlu
Department of Computer Engineering, Adana Science and Technology University, Adana, Turkey.
Department of Biomedical Engineering, University of Cukurova, Adana, Turkey.
Neural Netw. 2014 Dec;60:133-40. doi: 10.1016/j.neunet.2014.08.004. Epub 2014 Aug 19.
The generalized classifier neural network has been introduced as an efficient classifier. Unless the initial smoothing parameter value is close to the optimal one, however, it suffers from a convergence problem and requires a long time to converge. In this work, a logarithmic learning approach is proposed to overcome this problem. The proposed method uses a logarithmic cost function instead of the squared error; minimizing this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets, and the performance of the logarithmic learning generalized classifier neural network is compared with that of the standard one. Because of the operating range of the radial basis function used in the generalized classifier neural network, the proposed logarithmic cost function and its derivative take continuous values, which makes it possible to exploit the fast convergence of logarithmic learning. Owing to this fast convergence, training time is reduced by up to 99.2%. In addition to the reduction in training time, classification performance may also be improved by up to 60%. According to the test results, the proposed method provides a solution for the time requirement problem of the generalized classifier neural network while potentially improving classification accuracy, and can be considered an efficient way to reduce its training time.
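The abstract does not give the exact form of the logarithmic cost function used in the paper. As an illustration only, the sketch below assumes the common cross-entropy-style logarithmic cost for a bounded output y in (0, 1) (consistent with the abstract's remark that the radial basis function's operating range keeps the cost and its derivative continuous), and contrasts its gradient with that of the squared error:

```python
import math

# Hedged sketch: the exact GCNN cost is not specified in the abstract,
# so we illustrate with a cross-entropy-style logarithmic cost for a
# target t in {0, 1} and an output y in the open interval (0, 1).

def squared_error(y, t):
    return 0.5 * (y - t) ** 2

def squared_error_grad(y, t):
    return y - t

def log_cost(y, t):
    # logarithmic cost; continuous for y in (0, 1)
    return -(t * math.log(y) + (1 - t) * math.log(1 - y))

def log_cost_grad(y, t):
    return (y - t) / (y * (1 - y))

# When the output is far from the target, the logarithmic cost yields
# a much larger gradient magnitude than the squared error, which is
# the mechanism behind the faster convergence the abstract reports.
y, t = 0.01, 1.0
assert abs(log_cost_grad(y, t)) > abs(squared_error_grad(y, t))
```

For example, at y = 0.01 with target t = 1, the squared-error gradient has magnitude 0.99 while the logarithmic cost's gradient has magnitude 100, so gradient-based updates move much faster away from poor initial solutions, such as those produced by a badly chosen initial smoothing parameter.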