Maximum likelihood training of connectionist models: comparison with least squares back-propagation and logistic regression.

Author information

Spackman K A

Affiliation

Biomedical Information Communication Center, Oregon Health Sciences University.

Publication information

Proc Annu Symp Comput Appl Med Care. 1991:285-9.

Abstract

This paper presents maximum likelihood back-propagation (ML-BP), an approach to training neural networks. The widely reported original approach uses least squares back-propagation (LS-BP), minimizing the sum of squared errors (SSE). Unfortunately, least squares estimation does not give a maximum likelihood (ML) estimate of the weights in the network. Logistic regression, on the other hand, gives ML estimates for single layer linear models only. This report describes how to obtain ML estimates of the weights in a multi-layer model, and compares LS-BP to ML-BP using several examples. It shows that in many neural networks, least squares estimation gives inferior results and should be abandoned in favor of maximum likelihood estimation. Questions remain about the potential uses of multi-level connectionist models in such areas as diagnostic systems and risk-stratification in outcomes research.
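To make the contrast concrete, the following is a minimal sketch of the two objective functions, using assumed notation (binary targets t_i in {0,1} and sigmoid network outputs y_i) rather than the paper's own symbols. LS-BP minimizes the sum of squared errors,

E_{LS} = \sum_i (t_i - y_i)^2 ,

whereas maximum likelihood estimation under a Bernoulli model of the outputs is equivalent to minimizing the negative log-likelihood (the cross-entropy),

E_{ML} = -\sum_i \left[ t_i \log y_i + (1 - t_i) \log (1 - y_i) \right] .

Back-propagating gradients of E_{ML} rather than E_{LS} is the standard way to obtain ML weight estimates in a multi-layer network; the paper's exact formulation may differ in detail.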

