

A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning.

Authors

Chen Zhikui, Jin Shan, Liu Runze, Zhang Jianing

Affiliations

School of Software, Dalian University of Technology, Dalian, China.

Publication Info

Front Neurorobot. 2021 Jul 20;15:701194. doi: 10.3389/fnbot.2021.701194. eCollection 2021.

DOI: 10.3389/fnbot.2021.701194
PMID: 34354579
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8329448/
Abstract

Nowadays, deep representations have been attracting much attention owing to the great performance in various tasks. However, the interpretability of deep representations poses a vast challenge on real-world applications. To alleviate the challenge, a deep matrix factorization method with non-negative constraints is proposed to learn deep part-based representations of interpretability for big data in this paper. Specifically, a deep architecture with a supervisor network suppressing noise in data and a student network learning deep representations of interpretability is designed, which is an end-to-end framework for pattern mining. Furthermore, to train the deep matrix factorization architecture, an interpretability loss is defined, including a symmetric loss, an apposition loss, and a non-negative constraint loss, which can ensure the knowledge transfer from the supervisor network to the student network, enhancing the robustness of deep representations. Finally, extensive experimental results on two benchmark datasets demonstrate the superiority of the deep matrix factorization method.
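The abstract describes factorizing data into a product of several non-negative matrices so that deep representations stay part-based and interpretable. The paper's specific supervisor/student architecture and interpretability loss are not reproduced here; the sketch below only illustrates the underlying idea of a two-layer non-negative factorization X ≈ W1·W2·H, using generic block-wise multiplicative (Lee-Seung-style) updates. All function names, ranks, and iteration counts are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = 1e-9  # guards against division by zero in the updates


def deep_nmf(X, ranks, n_iter=300):
    """Approximate X (m x n, non-negative) as W1 @ W2 @ H with all
    factors non-negative, via block-wise multiplicative updates.
    Each block update is a standard NMF update with the other two
    factors held fixed and absorbed into the data term."""
    m, n = X.shape
    W1 = rng.random((m, ranks[0])) + EPS
    W2 = rng.random((ranks[0], ranks[1])) + EPS
    H = rng.random((ranks[1], n)) + EPS
    for _ in range(n_iter):
        B = W2 @ H  # treat W2 @ H as the fixed "coefficient" factor
        W1 *= (X @ B.T) / (W1 @ B @ B.T + EPS)
        W2 *= (W1.T @ X @ H.T) / (W1.T @ W1 @ W2 @ H @ H.T + EPS)
        W = W1 @ W2  # treat W1 @ W2 as the fixed "basis" factor
        H *= (W.T @ X) / (W.T @ W @ H + EPS)
    return W1, W2, H


# Toy usage on random non-negative data.
X = rng.random((20, 30))
W1, W2, H = deep_nmf(X, ranks=(10, 5))
err = np.linalg.norm(X - W1 @ W2 @ H) / np.linalg.norm(X)
```

Because every update multiplies by a ratio of non-negative terms, the factors remain non-negative without explicit projection, which is what makes the learned parts additive and hence interpretable.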


Figures (from PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8471/8329448/773a4c103a61/fnbot-15-701194-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8471/8329448/e6ebd7e3055e/fnbot-15-701194-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8471/8329448/cf31eaf2faa2/fnbot-15-701194-g0003.jpg

Similar Articles

1. A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning.
Front Neurorobot. 2021 Jul 20;15:701194. doi: 10.3389/fnbot.2021.701194. eCollection 2021.
2. Deep Non-Negative Matrix Factorization Architecture Based on Underlying Basis Images Learning.
IEEE Trans Pattern Anal Mach Intell. 2021 Jun;43(6):1897-1913. doi: 10.1109/TPAMI.2019.2962679. Epub 2021 May 11.
3. Patient Representation Learning From Heterogeneous Data Sources and Knowledge Graphs Using Deep Collective Matrix Factorization: Evaluation Study.
JMIR Med Inform. 2022 Jan 20;10(1):e28842. doi: 10.2196/28842.
4. Representation learning via Dual-Autoencoder for recommendation.
Neural Netw. 2017 Jun;90:83-89. doi: 10.1016/j.neunet.2017.03.009. Epub 2017 Mar 27.
5. Comprehensive Multiview Representation Learning via Deep Autoencoder-Like Nonnegative Matrix Factorization.
IEEE Trans Neural Netw Learn Syst. 2024 May;35(5):5953-5967. doi: 10.1109/TNNLS.2023.3304626. Epub 2024 May 2.
6. Deep Learning of Part-Based Representation of Data Using Sparse Autoencoders With Nonnegativity Constraints.
IEEE Trans Neural Netw Learn Syst. 2016 Dec;27(12):2486-2498. doi: 10.1109/TNNLS.2015.2479223. Epub 2015 Oct 28.
7. Bayesian deep matrix factorization network for multiple images denoising.
Neural Netw. 2020 Mar;123:420-428. doi: 10.1016/j.neunet.2019.12.023. Epub 2020 Jan 7.
8. Extracting and inserting knowledge into stacked denoising auto-encoders.
Neural Netw. 2021 May;137:31-42. doi: 10.1016/j.neunet.2021.01.010. Epub 2021 Jan 20.
9. A Deep Matrix Factorization Method for Learning Attribute Representations.
IEEE Trans Pattern Anal Mach Intell. 2017 Mar;39(3):417-429. doi: 10.1109/TPAMI.2016.2554555. Epub 2016 Apr 15.
10. A deep learning technique for imputing missing healthcare data.
Annu Int Conf IEEE Eng Med Biol Soc. 2019 Jul;2019:6513-6516. doi: 10.1109/EMBC.2019.8856760.

Cited By

1. Few-shot learning for inference in medical imaging with subspace feature representations.
PLoS One. 2024 Nov 6;19(11):e0309368. doi: 10.1371/journal.pone.0309368. eCollection 2024.

References

1. Semisupervised Adaptive Symmetric Non-Negative Matrix Factorization.
IEEE Trans Cybern. 2021 May;51(5):2550-2562. doi: 10.1109/TCYB.2020.2969684. Epub 2021 Apr 15.
2. Switchable Normalization for Learning-to-Normalize Deep Representation.
IEEE Trans Pattern Anal Mach Intell. 2021 Feb;43(2):712-728. doi: 10.1109/TPAMI.2019.2932062. Epub 2021 Jan 8.
3. Nonnegative matrix factorization in polynomial feature space.
IEEE Trans Neural Netw. 2008 Jun;19(6):1090-100. doi: 10.1109/TNN.2008.2000162.
4. Learning the parts of objects by non-negative matrix factorization.
Nature. 1999 Oct 21;401(6755):788-91. doi: 10.1038/44565.