
A Deep Non-negative Matrix Factorization Model for Big Data Representation Learning.

Author Information

Chen Zhikui, Jin Shan, Liu Runze, Zhang Jianing

Affiliation

School of Software, Dalian University of Technology, Dalian, China.

Publication Information

Front Neurorobot. 2021 Jul 20;15:701194. doi: 10.3389/fnbot.2021.701194. eCollection 2021.

Abstract

Nowadays, deep representations have been attracting much attention owing to their strong performance in various tasks. However, the limited interpretability of deep representations poses a major challenge for real-world applications. To alleviate this challenge, this paper proposes a deep matrix factorization method with non-negative constraints to learn interpretable, part-based deep representations of big data. Specifically, a deep architecture is designed with a supervisor network that suppresses noise in the data and a student network that learns interpretable deep representations, forming an end-to-end framework for pattern mining. Furthermore, to train the deep matrix factorization architecture, an interpretability loss is defined, comprising a symmetric loss, an apposition loss, and a non-negative constraint loss; it ensures knowledge transfer from the supervisor network to the student network and enhances the robustness of the deep representations. Finally, extensive experimental results on two benchmark datasets demonstrate the superiority of the deep matrix factorization method.
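For readers unfamiliar with deep non-negative matrix factorization, the sketch below illustrates the general idea of stacking non-negative factorizations (X ≈ W1 W2 H with all factors non-negative), which is what yields part-based deep representations. It is only a layer-wise pretraining toy example on random data: the layer sizes and data are hypothetical, and it does not reproduce the paper's supervisor-student architecture or its symmetric, apposition, and non-negative constraint losses.

```python
# Minimal layer-wise deep NMF sketch (illustrative only; not the paper's
# supervisor-student model or interpretability loss).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 400)))   # toy non-negative data, features x samples

layer_dims = [40, 10]                     # hypothetical widths of the two layers
factors, H = [], X
for k in layer_dims:
    model = NMF(n_components=k, init="nndsvda", max_iter=400, random_state=0)
    W = model.fit_transform(H)            # current matrix ≈ W @ (new) H
    H = model.components_                 # pass the codes down to the next layer
    factors.append(W)

# Deep part-based representation: X ≈ W1 @ W2 @ H, every factor non-negative.
X_hat = factors[0] @ factors[1] @ H
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")
```

In the layer-wise scheme, each level refactors the previous level's coefficient matrix, so higher layers capture increasingly abstract non-negative parts; the paper's end-to-end training with its interpretability loss replaces this greedy pretraining.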


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8471/8329448/773a4c103a61/fnbot-15-701194-g0001.jpg
