Suppr 超能文献



Leveraging maximum entropy and correlation on latent factors for learning representations.

Author Affiliations

College of Artificial Intelligence, Nankai University, Tianjin, China.

Xiamen Data Intelligence Academy of ICT, CAS, Xiamen, China; Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing, China.

Publication Information

Neural Netw. 2020 Nov;131:312-323. doi: 10.1016/j.neunet.2020.07.027. Epub 2020 Aug 5.

DOI: 10.1016/j.neunet.2020.07.027
PMID: 32891017
Abstract

Many tasks involve learning representations from matrices, and Non-negative Matrix Factorization (NMF) has been widely used due to its excellent interpretability. Through factorization, sample vectors are reconstructed as additive combinations of latent factors, which are represented as non-negative distributions over the raw input features. NMF models are significantly affected by the latent factors' distribution characteristics and the correlations among them, and they face the challenge of learning robust latent factors. To this end, we propose to learn representations with an awareness of semantic quality, evaluated at both the intra-factor and inter-factor levels. On the one hand, a Maximum Entropy-based function is devised to measure intra-factor semantic quality. On the other hand, semantic uniqueness is evaluated via inter-factor correlation, which reinforces the aim of semantic compactness. Moreover, we present a novel non-linear NMF framework. The learning algorithm is presented, and its convergence is theoretically analyzed and proved. Extensive experimental results on multiple datasets demonstrate that our method can be successfully applied to representative NMF models and boosts performance over state-of-the-art models.
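As an illustration of the quantities the abstract refers to, the sketch below runs plain Lee-Seung multiplicative-update NMF and then computes per-factor entropy and inter-factor correlation as diagnostics. This is not the paper's proposed algorithm (which adds these terms to the objective within a non-linear framework); all function names and the synthetic data are ours.

```python
import numpy as np

def nmf_multiplicative(X, k, iters=200, eps=1e-9, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: X ~ W @ H."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

def factor_entropy(H, eps=1e-12):
    """Shannon entropy of each latent factor, viewing each row of H
    as a distribution over the raw input features (higher = more spread)."""
    P = H / (H.sum(axis=1, keepdims=True) + eps)
    return -(P * np.log(P + eps)).sum(axis=1)

def factor_correlation(H):
    """Pairwise correlation among latent factors; off-diagonal values
    near zero indicate semantically unique (uncorrelated) factors."""
    return np.corrcoef(H)

X = np.abs(np.random.default_rng(1).random((30, 20)))
W, H = nmf_multiplicative(X, k=4)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print("relative reconstruction error:", err)
print("per-factor entropy:", factor_entropy(H))
print("inter-factor correlation:\n", factor_correlation(H))
```

In the paper's setting, terms of this kind are folded into the factorization objective rather than computed after the fact; the sketch only shows how the two semantic-quality signals can be read off a learned H.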

Similar Articles

1. Leveraging maximum entropy and correlation on latent factors for learning representations.
   Neural Netw. 2020 Nov;131:312-323. doi: 10.1016/j.neunet.2020.07.027. Epub 2020 Aug 5.
2. Data representation using robust nonnegative matrix factorization for edge computing.
   Math Biosci Eng. 2022 Jan;19(2):2147-2178. doi: 10.3934/mbe.2022100. Epub 2021 Dec 28.
3. A unified statistical approach to non-negative matrix factorization and probabilistic latent semantic indexing.
   Mach Learn. 2015 Apr 1;99(1):137-163. doi: 10.1007/s10994-014-5470-z.
4. Joint Dictionary Learning-Based Non-Negative Matrix Factorization for Voice Conversion to Improve Speech Intelligibility After Oral Surgery.
   IEEE Trans Biomed Eng. 2017 Nov;64(11):2584-2594. doi: 10.1109/TBME.2016.2644258.
5. Deep Non-Negative Matrix Factorization Architecture Based on Underlying Basis Images Learning.
   IEEE Trans Pattern Anal Mach Intell. 2021 Jun;43(6):1897-1913. doi: 10.1109/TPAMI.2019.2962679. Epub 2021 May 11.
6. Robust Structured Nonnegative Matrix Factorization for Image Representation.
   IEEE Trans Neural Netw Learn Syst. 2018 May;29(5):1947-1960. doi: 10.1109/TNNLS.2017.2691725. Epub 2017 Apr 17.
7. An Entropy Weighted Nonnegative Matrix Factorization Algorithm for Feature Representation.
   IEEE Trans Neural Netw Learn Syst. 2023 Sep;34(9):5381-5391. doi: 10.1109/TNNLS.2022.3184286. Epub 2023 Sep 1.
8. A Plug-in Method for Representation Factorization in Connectionist Models.
   IEEE Trans Neural Netw Learn Syst. 2022 Aug;33(8):3792-3803. doi: 10.1109/TNNLS.2021.3054480. Epub 2022 Aug 3.
9. Entropy Minimizing Matrix Factorization.
   IEEE Trans Neural Netw Learn Syst. 2023 Nov;34(11):9209-9222. doi: 10.1109/TNNLS.2022.3157148. Epub 2023 Oct 27.
10. Representation learning via Dual-Autoencoder for recommendation.
   Neural Netw. 2017 Jun;90:83-89. doi: 10.1016/j.neunet.2017.03.009. Epub 2017 Mar 27.

Cited By

1. Discovering the nuclear localization signal universe through a deep learning model with interpretable attention units.
   Patterns (N Y). 2025 May 6;6(6):101262. doi: 10.1016/j.patter.2025.101262. eCollection 2025 Jun 13.