
Variational learning for Gaussian mixture models.

Authors

Nikolaos Nasios, Adrian G. Bors

Affiliation

Department of Computer Science, University of York, UK.

Publication

IEEE Trans Syst Man Cybern B Cybern. 2006 Aug;36(4):849-62. doi: 10.1109/tsmcb.2006.872273.

Abstract

This paper proposes a joint maximum likelihood and Bayesian methodology for estimating Gaussian mixture models. In Bayesian inference, the distributions of parameters are modeled, characterized by hyperparameters. In the case of Gaussian mixtures, the distributions of parameters are considered as Gaussian for the mean, Wishart for the covariance, and Dirichlet for the mixing probability. The learning task consists of estimating the hyperparameters characterizing these distributions. The integration in the parameter space is decoupled using an unsupervised variational methodology entitled variational expectation-maximization (VEM). This paper introduces a hyperparameter initialization procedure for the training algorithm. In the first stage, distributions of parameters resulting from successive runs of the expectation-maximization algorithm are formed. Afterward, maximum-likelihood estimators are applied to find appropriate initial values for the hyperparameters. The proposed initialization provides faster convergence, more accurate hyperparameter estimates, and better generalization for the VEM training algorithm. The proposed methodology is applied in blind signal detection and in color image segmentation.
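The two-stage idea in the abstract — run EM several times, fit hyperparameters to the resulting parameter estimates, then run variational training from that initialization — can be illustrated with a rough sketch. This is not the authors' VEM implementation; it uses scikit-learn's variational `BayesianGaussianMixture` as a stand-in for the variational stage, and only the prior on the component means is initialized here. All data and settings below are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

# Toy two-component data (illustrative, not from the paper)
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(-2.0, 0.5, size=(200, 2)),
    rng.normal(+2.0, 0.5, size=(200, 2)),
])

# Stage 1: successive EM runs form an empirical distribution of
# parameter estimates (here, just the component means).
all_means = []
for seed in range(10):
    gm = GaussianMixture(n_components=2, random_state=seed).fit(X)
    all_means.append(gm.means_)
all_means = np.concatenate(all_means)  # shape (10 * 2, 2)

# Stage 2: a maximum-likelihood estimate over those runs gives an
# initial value for the hyperparameter (the prior mean of the
# Gaussian prior on component means).
mean_prior = all_means.mean(axis=0)

# Variational training starts from the data-driven hyperparameter
# instead of a generic default.
vb = BayesianGaussianMixture(
    n_components=2,
    mean_prior=mean_prior,
    random_state=0,
).fit(X)
labels = vb.predict(X)
```

In the paper the same idea is applied to all three priors (Gaussian on the means, Wishart on the covariances, Dirichlet on the mixing weights); the sketch above initializes only the mean prior to keep the example short.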
