Spackman K A
Biomedical Information Communication Center, Oregon Health Sciences University.
Proc Annu Symp Comput Appl Med Care. 1991:285-9.
This paper presents maximum likelihood back-propagation (ML-BP), an approach to training neural networks. The widely reported original approach uses least squares back-propagation (LS-BP), minimizing the sum of squared errors (SSE). Unfortunately, least squares estimation does not give a maximum likelihood (ML) estimate of the weights in the network. Logistic regression, on the other hand, gives ML estimates for single layer linear models only. This report describes how to obtain ML estimates of the weights in a multi-layer model, and compares LS-BP to ML-BP using several examples. It shows that in many neural networks, least squares estimation gives inferior results and should be abandoned in favor of maximum likelihood estimation. Questions remain about the potential uses of multi-level connectionist models in such areas as diagnostic systems and risk-stratification in outcomes research.
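The abstract's core contrast can be seen in the error signal each loss sends back through a sigmoid output unit: under least squares the gradient carries an extra y(1 - y) factor that vanishes when the unit saturates, while the maximum likelihood (cross-entropy) gradient does not. A minimal sketch of this (not the paper's own code; function names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_ls(y, t):
    # LS-BP: d(SSE)/dz for a sigmoid unit = (y - t) * y * (1 - y).
    # The y(1 - y) factor shrinks toward 0 as the unit saturates.
    return (y - t) * y * (1.0 - y)

def grad_ml(y, t):
    # ML-BP: d(-log-likelihood)/dz for a sigmoid unit = (y - t).
    # No vanishing factor, so a confidently wrong unit still learns.
    return y - t

# A unit that is confidently wrong: target t = 1, but z is very
# negative so the output y is near 0.
z, t = -8.0, 1.0
y = sigmoid(z)
print(grad_ls(y, t))  # near zero: LS-BP gets almost no error signal
print(grad_ml(y, t))  # near -1: ML-BP gets a full-strength error signal
```

This is one concrete reason least squares estimation can give inferior results in the networks the paper examines: a wrongly saturated sigmoid unit contributes almost nothing to the LS-BP weight update, whereas under ML-BP its error propagates undiminished.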