Lu Zhiwu, Ip Horace H S
Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong.
IEEE Trans Syst Man Cybern B Cybern. 2009 Aug;39(4):901-9. doi: 10.1109/TSMCB.2008.2012119. Epub 2009 Apr 7.
When fitting Gaussian mixtures to multivariate data, it is crucial to select the appropriate number of Gaussians, which is generally referred to as the model selection problem. Under regularization theory, we aim to solve this model selection problem by developing an entropy regularized likelihood (ERL) learning method for Gaussian mixtures. We further present a gradient algorithm for ERL learning. Theoretical analysis reveals a generalized competitive learning mechanism inherent in ERL learning, which leads to automatic model selection on Gaussian mixtures and also makes the ERL learning algorithm less sensitive to initialization than the standard expectation-maximization (EM) algorithm. Experiments on simulated data verify this theoretical analysis. Moreover, the ERL learning algorithm is shown to outperform other competitive learning algorithms in the application of unsupervised image segmentation.
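To make the competitive-learning effect concrete, the following is a minimal sketch of the general idea: an EM-style fit of a 1-D Gaussian mixture with a deliberately overestimated number of components, plus an entropy-inspired sharpening of the mixing proportions that starves redundant components so they can be pruned. This is an illustrative stand-in, not the paper's exact ERL gradient algorithm; the sharpening exponent `gamma`, the pruning threshold, and all variable names are assumptions.

```python
# Illustrative sketch only (assumed details, not the paper's ERL algorithm):
# over-initialize a 1-D Gaussian mixture, then let a rich-get-richer update on
# the mixing proportions drive redundant components toward zero weight.
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated clusters; the true number of Gaussians is 2.
data = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])

K = 6                      # deliberately overestimated component count
mu = rng.choice(data, K)   # initialize means by sampling the data
var = np.full(K, 1.0)
pi = np.full(K, 1.0 / K)
gamma = 0.05               # strength of the entropy-like sharpening (assumed)

for _ in range(200):
    # E-step: posterior responsibilities r[n, k]
    dens = np.exp(-0.5 * (data[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = pi * dens
    r /= r.sum(axis=1, keepdims=True)
    # M-step: standard maximum-likelihood updates
    nk = r.sum(axis=0)
    mu = (r * data[:, None]).sum(axis=0) / nk
    var = (r * (data[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    pi = nk / len(data)
    # Entropy-inspired sharpening: mimics the competitive mechanism by
    # favoring large proportions and shrinking small ones.
    pi = pi ** (1.0 + gamma)
    pi /= pi.sum()
    # Automatic model selection: drop components whose weight has collapsed.
    keep = pi > 0.01
    mu, var, pi = mu[keep], var[keep], pi[keep]
    pi /= pi.sum()

print(len(pi), np.sort(mu))
```

Because the pruning happens during learning rather than by comparing fully trained models of different orders, the surviving component count is selected automatically, which is the behavior the abstract attributes to the generalized competitive learning mechanism.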