IEEE Trans Neural Netw Learn Syst. 2012 Aug;23(8):1177-93. doi: 10.1109/TNNLS.2012.2200299.
In this paper, we provide a comprehensive survey of the mixture of experts (ME). We discuss the fundamental models for regression and classification and also their training with the expectation-maximization algorithm. We follow the discussion with improvements to the ME model, focusing particularly on mixtures of Gaussian process experts. We review the literature on other training methods, such as the alternative localized ME training, and cover the variational learning of ME in detail. In addition, we describe the model selection literature, which encompasses finding the optimum number of experts as well as the depth of the tree. We present advances in ME in the classification area and discuss some issues concerning the classification model. We list the statistical properties of ME, discuss how the model has been modified over the years, compare ME to some popular algorithms, and list several applications. We conclude our survey with future directions and provide lists of publicly available datasets and of publicly available software that implement ME. Finally, we provide examples for regression and classification. We believe that the study described in this paper will provide quick access to the relevant literature for researchers and practitioners who would like to improve or use ME, and that it will stimulate further studies in ME.
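To make the fundamental model concrete, below is a minimal sketch of an ME regression model with linear experts and a softmax gate, trained with the expectation-maximization algorithm as surveyed in the paper. This is an illustrative assumption on our part, not code from the paper: the function name fit_me, all variable names, the fixed noise variance, and the single gradient step used in place of a full IRLS update for the gate are our own simplifications.

import numpy as np

def fit_me(X, y, K=3, iters=50, sigma2=1.0, lr=0.5, seed=0):
    """EM for a K-expert linear ME model. X: (N, D), y: (N,). Illustrative sketch."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    W = rng.normal(scale=0.1, size=(K, D))  # expert regression weights
    V = np.zeros((K, D))                    # gating network weights
    for _ in range(iters):
        # Gating probabilities g_k(x) via a softmax over V x
        logits = X @ V.T
        logits -= logits.max(axis=1, keepdims=True)
        g = np.exp(logits)
        g /= g.sum(axis=1, keepdims=True)
        # E-step: responsibilities h_k proportional to g_k * N(y | w_k^T x, sigma2)
        mu = X @ W.T                        # (N, K) expert predictions
        lik = np.exp(-0.5 * (y[:, None] - mu) ** 2 / sigma2)
        h = g * lik
        h /= h.sum(axis=1, keepdims=True) + 1e-12
        # M-step for experts: weighted least squares per expert
        for k in range(K):
            R = h[:, k]
            A = X.T @ (R[:, None] * X) + 1e-6 * np.eye(D)
            W[k] = np.linalg.solve(A, X.T @ (R * y))
        # M-step for the gate: one gradient ascent step toward the
        # responsibilities (a common shortcut for the full IRLS update)
        V += lr * (h - g).T @ X / N
    return W, V

# Usage: piecewise-linear data that a single linear model cannot fit,
# so the gate learns to partition the input space between two experts.
X = np.linspace(-3, 3, 200)[:, None]
X = np.hstack([X, np.ones_like(X)])         # append a bias column
y = np.where(X[:, 0] < 0, -2 * X[:, 0], 0.5 * X[:, 0])
W, V = fit_me(X, y, K=2)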