

Nonparametric supervised learning by linear interpolation with maximum entropy.

Author Information

Gupta Maya R, Gray Robert M, Olshen Richard A

Affiliation

Department of Electrical Engineering, University of Washington, Seattle, WA 98195, USA.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2006 May;28(5):766-81. doi: 10.1109/TPAMI.2006.101.

Abstract

Nonparametric neighborhood methods for learning entail estimation of class conditional probabilities based on relative frequencies of samples that are "near-neighbors" of a test point. We propose and explore the behavior of a learning algorithm that uses linear interpolation and the principle of maximum entropy (LIME). We consider some theoretical properties of the LIME algorithm: LIME weights have exponential form; the estimates are consistent; and the estimates are robust to additive noise. In relation to bias reduction, we show that near-neighbors contain a test point in their convex hull asymptotically. The common linear interpolation solution used for regression on grids or look-up-tables is shown to solve a related maximum entropy problem. LIME simulation results support use of the method, and performance on a pipeline integrity classification problem demonstrates that the proposed algorithm has practical value.
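The core construction described in the abstract, choosing interpolation weights over a test point's near-neighbors by maximizing entropy subject to linear interpolation constraints, can be sketched in one dimension. This is an illustrative reconstruction, not the paper's code: `lime_weights_1d` is a hypothetical helper, and the paper treats the general multidimensional case and then uses the weights to combine neighbor class labels. The sketch solves the convex dual by plain gradient descent; the resulting weights have the exponential form the abstract notes.

```python
import math

def lime_weights_1d(neighbors, x, steps=5000, lr=0.1):
    """Maximum-entropy weights over 1-D neighbors of a test point x.

    Maximizes H(w) = -sum(w_i log w_i) subject to
        sum(w_i) = 1  and  sum(w_i * neighbors[i]) = x,
    by gradient descent on the dual variable lam. The solution has
    the exponential form w_i proportional to exp(lam * (x_i - x)).
    Assumes x lies in the convex hull of the neighbors (otherwise
    the constraints are infeasible and lam diverges).
    """
    d = [xi - x for xi in neighbors]   # centered neighbor positions
    lam = 0.0
    for _ in range(steps):
        ws = [math.exp(lam * di) for di in d]
        z = sum(ws)
        # Dual gradient = constraint violation sum(w_i * x_i) - x.
        grad = sum(wi * di for wi, di in zip(ws, d)) / z
        lam -= lr * grad
    ws = [math.exp(lam * di) for di in d]
    z = sum(ws)
    return [wi / z for wi in ws]

# Test point 1.5 interpolated from neighbors at 0.0, 1.0, and 3.0.
w = lime_weights_1d([0.0, 1.0, 3.0], 1.5)
```

At convergence the weights are all strictly positive, sum to one, and reproduce the test point exactly; the exponential form means no weight is ever clipped to zero, unlike a direct constrained least-squares fit.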

