Mahalingam N, Kumar D
Department of Electronic and Communication Engineering, Royal Melbourne Institute of Technology, Australia.
Australas Phys Eng Sci Med. 1997 Sep;20(3):147-51.
QRS complexes were classified with a multilayer perceptron (MLP) trained using a modified version of the original backpropagation (BP) algorithm. The inputs to the network were bitmaps of the QRS complexes, each represented as a 20 x 20 matrix. Different numbers of hidden layers (and of neurons in each hidden layer) were experimented with to observe the rate at which the network converged. Larger networks were observed to find the minimum on the error curve with ease, but increasing the network size beyond a certain point did not improve the performance rate; rather, it decreased it. It is evident that an optimal neural network architecture exists for every given problem. The weight-change rules of the backpropagation algorithm were modified to include a variation of the relationship between momentum and learning rate, to observe any increase in the network's performance rate. A learning-rate adaptation factor was introduced into the learning algorithm to decrease the network's chances of missing a minimum on the error curve. The network was found to perform extremely well with the modified version of the algorithm, converging after only 9000 learning cycles compared with 14,000 cycles for the original algorithm.
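The abstract does not give the exact form of the modified weight-change rule, so the following is only a minimal sketch of the general technique it describes: batch backpropagation on a one-hidden-layer MLP over flattened 20 x 20 bitmaps, with a momentum term and a multiplicative learning-rate adaptation factor that grows the rate while the error keeps falling and shrinks it when the error rises. All names and parameter values (train, n_hidden, alpha, eta_up, eta_down, and so on) are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # Small random weights, zero biases.
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

def train(X, Y, n_hidden=40, epochs=9000, eta=0.1, alpha=0.9,
          eta_up=1.05, eta_down=0.7):
    """Backpropagation with momentum and a multiplicative
    learning-rate adaptation factor (all values hypothetical)."""
    n_in, n_out = X.shape[1], Y.shape[1]
    W1, b1 = init_layer(n_in, n_hidden)
    W2, b2 = init_layer(n_hidden, n_out)
    # Previous weight changes, kept for the momentum term.
    dW1 = np.zeros_like(W1); db1 = np.zeros_like(b1)
    dW2 = np.zeros_like(W2); db2 = np.zeros_like(b2)
    prev_err = np.inf
    for epoch in range(epochs):
        # Forward pass.
        H = sigmoid(X @ W1 + b1)
        O = sigmoid(H @ W2 + b2)
        err = 0.5 * np.sum((Y - O) ** 2)
        # Backward pass (sigmoid derivative is s * (1 - s)).
        delta_o = (O - Y) * O * (1.0 - O)
        delta_h = (delta_o @ W2.T) * H * (1.0 - H)
        # Weight-change rule: new step = -eta * gradient + alpha * previous step.
        dW2 = -eta * (H.T @ delta_o) + alpha * dW2
        db2 = -eta * delta_o.sum(axis=0) + alpha * db2
        dW1 = -eta * (X.T @ delta_h) + alpha * dW1
        db1 = -eta * delta_h.sum(axis=0) + alpha * db1
        W2 += dW2; b2 += db2
        W1 += dW1; b1 += db1
        # Learning-rate adaptation: grow eta while the error keeps
        # falling, shrink it when the error rises, so the network is
        # less likely to step over a minimum on the error curve.
        eta = eta * (eta_up if err < prev_err else eta_down)
        prev_err = err
    return W1, b1, W2, b2
```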
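A usage example with synthetic data, matching the input format described above (20 x 20 bitmaps flattened to 400-element vectors; the data and the two-class setup here are placeholders, not the paper's QRS dataset):

```python
# 100 random 20x20 binary bitmaps, flattened to 400 inputs each,
# with one-hot targets for two hypothetical beat classes.
X = rng.integers(0, 2, (100, 400)).astype(float)
Y = np.eye(2)[rng.integers(0, 2, 100)]
W1, b1, W2, b2 = train(X, Y)
```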