College of Artificial Intelligence, Nankai University, Tianjin, China.
Xiamen Data Intelligence Academy of ICT, CAS, Xiamen, China; Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing, China.
Neural Netw. 2020 Nov;131:312-323. doi: 10.1016/j.neunet.2020.07.027. Epub 2020 Aug 5.
Many tasks involve learning representations from matrices, and Non-negative Matrix Factorization (NMF) has been widely used due to its excellent interpretability. Through factorization, sample vectors are reconstructed as additive combinations of latent factors, which are represented as non-negative distributions over the raw input features. NMF models are significantly affected by the distribution characteristics of the latent factors and the correlations among them, and they face the challenge of learning robust latent factors. To this end, we propose to learn representations with an awareness of semantic quality, evaluated from both intra-factor and inter-factor aspects. On the one hand, a Maximum Entropy-based function is devised to measure intra-factor semantic quality. On the other hand, semantic uniqueness is evaluated via inter-factor correlation, which reinforces the goal of semantic compactness. Moreover, we present a novel non-linear NMF framework. The learning algorithm is presented, and its convergence is theoretically analyzed and proved. Extensive experimental results on multiple datasets demonstrate that our method can be successfully applied to representative NMF models and boosts performance over state-of-the-art models.
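To make the two quality notions concrete, the following is a minimal sketch, not the paper's algorithm: it runs plain NMF with standard Lee-Seung multiplicative updates and then computes two illustrative diagnostics suggested by the abstract, the Shannon entropy of each latent factor (intra-factor quality) and the mean pairwise cosine correlation between factors (inter-factor uniqueness). All function names and parameters here are hypothetical.

```python
import numpy as np

def nmf_multiplicative(V, k, n_iter=200, eps=1e-10, seed=0):
    """Basic NMF (V ~ W H) with Lee-Seung multiplicative updates for the
    Frobenius-norm objective; illustrative only, not the paper's framework."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def factor_entropy(W, eps=1e-10):
    """Shannon entropy of each latent factor (column of W) viewed as a
    distribution over input features; lower entropy = more compact factor."""
    P = W / (W.sum(axis=0, keepdims=True) + eps)
    return -(P * np.log(P + eps)).sum(axis=0)

def interfactor_correlation(W, eps=1e-10):
    """Mean absolute pairwise cosine similarity between distinct factors;
    lower values indicate more semantically unique (less redundant) factors."""
    Wn = W / (np.linalg.norm(W, axis=0, keepdims=True) + eps)
    C = Wn.T @ Wn
    k = C.shape[0]
    return (np.abs(C).sum() - k) / (k * (k - 1))

if __name__ == "__main__":
    V = np.abs(np.random.default_rng(1).random((100, 50)))  # toy non-negative data
    W, H = nmf_multiplicative(V, k=5)
    print("reconstruction error:", np.linalg.norm(V - W @ H))
    print("per-factor entropy:", factor_entropy(W))
    print("mean inter-factor correlation:", interfactor_correlation(W))
```

In the proposed method these two quantities are built into the learning objective rather than computed after the fact; the sketch above only shows how they can be measured on a fitted factor matrix.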