
A Bayesian approach to joint feature selection and classifier design.

Authors

Krishnapuram Balaji, Hartemink Alexander J, Carin Lawrence, Figueiredo Mário A T

Affiliation

Department of Electrical Engineering, Duke University, Durham, NC 27708-0291, USA.

Publication

IEEE Trans Pattern Anal Mach Intell. 2004 Sep;26(9):1105-11. doi: 10.1109/TPAMI.2004.55.

Abstract

This paper adopts a Bayesian approach to simultaneously learn both an optimal nonlinear classifier and a subset of predictor variables (or features) that are most relevant to the classification task. The approach uses heavy-tailed priors to promote sparsity in the utilization of both basis functions and features; these priors act as regularizers for the likelihood function that rewards good classification on the training data. We derive an expectation-maximization (EM) algorithm to efficiently compute a maximum a posteriori (MAP) point estimate of the various parameters. The algorithm is an extension of recent state-of-the-art sparse Bayesian classifiers, which in turn can be seen as Bayesian counterparts of support vector machines. Experimental comparisons using kernel classifiers demonstrate both parsimonious feature selection and excellent classification accuracy on a range of synthetic and benchmark data sets.
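The EM-for-MAP idea above can be illustrated with a small sketch. Under a Laplace (heavy-tailed) prior expressed as a normal-exponential scale mixture, the E-step yields expected precisions proportional to 1/|w_i|, and the M-step reduces to a ridge-penalized logistic-regression update. This is a minimal, self-contained illustration of that general scheme on a linear (non-kernel) model with synthetic data, not a reproduction of the paper's algorithm; the data dimensions, the penalty strength `lam`, and the fixed iteration count are all arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only the first 2 of 8 features are relevant.
n, d = 200, 8
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:2] = [2.0, -3.0]
y = rng.binomial(1, 1 / (1 + np.exp(-X @ w_true)))

def em_map_laplace(X, y, lam=2.0, iters=50, eps=1e-8):
    """MAP estimate of logistic-regression weights under a Laplace prior,
    computed by EM with the normal-exponential scale-mixture representation.
    E-step: expected inverse variance of w_i is lam / |w_i|.
    M-step: one Newton step of the resulting ridge-penalized logistic fit."""
    n, d = X.shape
    w = np.full(d, 0.1)
    for _ in range(iters):
        omega = lam / (np.abs(w) + eps)              # E-step: expected precisions
        mu = 1 / (1 + np.exp(-X @ w))                # predicted probabilities
        g = X.T @ (y - mu) - omega * w               # penalized gradient
        s = mu * (1 - mu)
        H = X.T @ (X * s[:, None]) + np.diag(omega)  # penalized Hessian
        w = w + np.linalg.solve(H, g)                # M-step: Newton update
    return w

w_hat = em_map_laplace(X, y)
print(np.round(w_hat, 2))  # weights of irrelevant features shrink toward zero
```

The sparsity emerges because a weight near zero gets a very large expected precision in the next E-step, pinning it further toward zero; relevant weights with strong likelihood support escape this shrinkage.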

