Simulation & Mining Division, NTT DATA Mathematical Systems Inc., 1F Shinanomachi Rengakan, 35, Shinanomachi, Shinjuku-ku, Tokyo, 160-0016, Japan; Department of Mathematical and Computing Science, Tokyo Institute of Technology, Mail-Box W8-42, 2-12-1, Oookayama, Meguro-ku, Tokyo, 152-8552, Japan.
Neural Netw. 2021 May;137:127-137. doi: 10.1016/j.neunet.2021.01.024. Epub 2021 Feb 5.
Latent Dirichlet allocation (LDA) extracts essential information from data by Bayesian inference and is applied to knowledge discovery via dimensionality reduction and clustering in many fields. However, its generalization error had not yet been clarified, since LDA is a singular statistical model: there is no one-to-one mapping from parameters to probability distributions. In this paper, we give the exact asymptotic forms of its generalization error and marginal likelihood through a theoretical analysis of its learning coefficient using algebraic geometry. The theoretical result shows that the Bayesian generalization error of LDA is expressed in terms of that of matrix factorization plus a penalty arising from the simplex restriction on LDA's parameter region. A numerical experiment is consistent with the theoretical result.
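For context, the "exact asymptotic forms" referred to above are, in Watanabe's singular learning theory, governed by the learning coefficient. The following display is a minimal sketch of those standard asymptotics, not a formula taken from this abstract; the notation (G_n for the Bayesian generalization error, F_n for the free energy, i.e., the negative log marginal likelihood, S_n for the empirical entropy, n for the sample size, λ for the learning coefficient, also called the real log canonical threshold, and m for its multiplicity) is assumed from that literature, with the LDA-specific value of λ being what the paper determines.

\mathbb{E}[G_n] = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right), \qquad F_n = n S_n + \lambda \log n - (m-1)\log\log n + O_p(1).

In regular statistical models λ reduces to d/2 for parameter dimension d, whereas in singular models such as LDA it must be computed by resolving the singularities of the parameter-to-distribution map, which is where the algebraic-geometric analysis enters.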