Shynk, J. J.
Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA.
IEEE Trans Neural Netw. 1990;1(3):268-74. doi: 10.1109/72.80252.
A perceptron learning algorithm may be viewed as a steepest-descent method in which an instantaneous performance function is iteratively minimized. An appropriate performance function for the most widely used perceptron algorithm is described, and it is shown that the update term of the algorithm is the gradient of this function. An example of the corresponding performance surface, based on Gaussian assumptions, is given, and it is shown that there are infinitely many stationary points. The performance surfaces of two related performance functions are also examined. Computer simulations demonstrating the convergence properties of the adaptive algorithms are presented.
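The steepest-descent view described in the abstract can be sketched in code. The fragment below is an illustrative reconstruction, not the paper's exact notation or performance function: it writes the classical Rosenblatt update as an explicit gradient step on an instantaneous cost J_k(w) = -(d_k - y_k) w^T x_k, whose gradient -(d_k - y_k) x_k recovers the familiar rule w <- w + mu (d_k - y_k) x_k. The data set, step size, and sign convention at zero are all assumptions made for the demonstration.

```python
import numpy as np

def sgn(v):
    # Hard-limiting output; the convention sgn(0) = +1 is an
    # implementation choice, not specified by the rule itself.
    return 1.0 if v >= 0.0 else -1.0

def perceptron_train(X, d, mu=0.1, epochs=100):
    """Rosenblatt perceptron written as steepest descent on the
    instantaneous cost J_k(w) = -(d_k - y_k) * (w @ x_k)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_k, d_k in zip(X, d):
            y_k = sgn(w @ x_k)            # current perceptron output
            grad = -(d_k - y_k) * x_k     # gradient of J_k at w
            w = w - mu * grad             # steepest-descent step
    return w

# Toy linearly separable set (constant 1 appended as a bias input)
X = np.array([[2.0, 1.0, 1.0], [1.0, 2.0, 1.0], [3.0, 0.5, 1.0],
              [-2.0, -1.0, 1.0], [-1.0, -2.0, 1.0], [-0.5, -3.0, 1.0]])
d = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

w = perceptron_train(X, d)
errors = sum(sgn(w @ x) != t for x, t in zip(X, d))
```

Because the update is zero whenever y_k = d_k, the cost surface is piecewise linear in w, which is consistent with the abstract's observation that the corresponding performance surface has infinitely many stationary points (any separating weight vector, and any scaling of it, leaves the cost at zero).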