
Best harmony, unified RPCL and automated model selection for unsupervised and supervised learning on Gaussian mixtures, three-layer nets and ME-RBF-SVM models.

Author Information

Xu L

Affiliation

Department of Computer Science and Engineering, Chinese University of Hong Kong, Shatin, NT, PR China.

Publication Information

Int J Neural Syst. 2001 Feb;11(1):43-69. doi: 10.1142/S0129065701000497.

Abstract

After introducing the fundamentals of the BYY system and harmony learning, which have been developed over the past several years as a unified statistical framework for parameter learning, regularization and model selection, we systematically discuss BYY harmony learning on systems with discrete inner representations. First, we show that one special case leads to unsupervised learning on Gaussian mixtures. We show how harmony learning not only leads to the EM algorithm for maximum likelihood (ML) learning and the corresponding extended KMEAN algorithms for Mahalanobis clustering, with criteria for selecting the number of Gaussians or clusters, but also provides two new regularization techniques and a unified scheme that includes the previous rival penalized competitive learning (RPCL), as well as its various variants and extensions, and performs model selection automatically during parameter learning. Moreover, as a by-product, we also obtain a new approach for determining a set of 'supporting vectors' for Parzen window density estimation. Second, we show that other special cases lead to three typical supervised learning models with several new results. On three-layer nets, we obtain (i) a new regularized ML learning, (ii) a new criterion for selecting the number of hidden units, and (iii) a family of EM-like algorithms that combine harmony learning with new regularization techniques. On the original and alternative models of mixture-of-experts (ME), as well as on radial basis function (RBF) nets, we obtain not only a new type of criterion for selecting the number of experts or basis functions but also a new type of EM-like algorithm that combines regularization techniques and RPCL learning for parameter learning, with either a least-complexity nature on the original ME model or automated model selection on the alternative ME model and RBF nets.
Moreover, all the results for the alternative ME model also apply to two other popular nonparametric statistical approaches, namely kernel regression and the support vector machine. In particular, we not only obtain an easily implemented approach for determining the smoothing parameter in kernel regression, but also an alternative approach for deciding the set of supporting vectors in a support vector machine.

