Neural Network Classifiers Estimate Bayesian Probabilities.

Author Information

Michael D. Richard, Richard P. Lippmann

Affiliation

Room B-349, Lincoln Laboratory, MIT, Lexington, MA 02173-9108 USA.

Publication Information

Neural Comput. 1991 Winter;3(4):461-483. doi: 10.1162/neco.1991.3.4.461.

Abstract

Many neural network classifiers provide outputs which estimate Bayesian a posteriori probabilities. When the estimation is accurate, network outputs can be treated as probabilities and sum to one. Simple proofs show that Bayesian probabilities are estimated when desired network outputs follow a 1-of-M coding (one output unity, all others zero) and a squared-error or cross-entropy cost function is used. Results of Monte Carlo simulations performed using multilayer perceptron (MLP) networks trained with backpropagation, radial basis function (RBF) networks, and high-order polynomial networks graphically demonstrate that network outputs provide good estimates of Bayesian probabilities. Estimation accuracy depends on network complexity, the amount of training data, and the degree to which training data reflect true likelihood distributions and class probabilities. Interpreting network outputs as Bayesian probabilities allows outputs from multiple networks to be combined for higher-level decision making, simplifies the creation of rejection thresholds, makes it possible to compensate for differences between pattern-class probabilities in training and test data, allows outputs to be used to minimize alternative risk functions, and suggests alternative measures of network performance.
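The abstract's central claim — that 1-of-M targets plus a cross-entropy cost yield posterior-probability estimates — can be checked on a toy problem whose Bayes posterior is known in closed form. The sketch below is not from the paper; the single-unit logistic model, the Gaussian class distributions, and all parameter values are illustrative assumptions. It trains the simplest possible "network" with cross-entropy, compares its output to the analytic posterior, and then applies the prior-compensation rescaling the abstract mentions for train/test class-probability mismatch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-class 1D problem with a known Bayes posterior:
# class 0 ~ N(-1, 1), class 1 ~ N(+1, 1), equal priors,
# so analytically P(class 1 | x) = sigmoid(2x).
n = 5000
y = rng.integers(0, 2, n)
x = rng.normal(2.0 * y - 1.0, 1.0)

# Single-unit "network" (logistic regression) trained with the
# cross-entropy cost on 0/1 targets (1-of-M coding, two-class case).
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # network output
    w -= lr * np.mean((p - y) * x)          # cross-entropy gradient in w
    b -= lr * np.mean(p - y)                # cross-entropy gradient in b

xt = np.linspace(-2.0, 2.0, 9)
net = 1.0 / (1.0 + np.exp(-(w * xt + b)))   # network's posterior estimate
bayes = 1.0 / (1.0 + np.exp(-2.0 * xt))     # true Bayes posterior
print(np.max(np.abs(net - bayes)))          # max estimation error over the grid

# Prior compensation (one use noted in the abstract): rescale each class
# output by (deployment prior / training prior), then renormalize.
train_priors = np.array([0.5, 0.5])
test_priors = np.array([0.2, 0.8])
p0 = (1.0 - net) * test_priors[0] / train_priors[0]
p1 = net * test_priors[1] / train_priors[1]
p1_adj = p1 / (p0 + p1)                     # P(class 1 | x) under new priors
```

For this problem the log-likelihood ratio is linear in x, so a single logistic unit can represent the exact posterior; the MLP, RBF, and polynomial networks in the paper's simulations extend the same estimation property to nonlinear decision problems.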
