Chen Zhikui, Jin Shan, Liu Runze, Zhang Jianing
School of Software, Dalian University of Technology, Dalian, China.
Front Neurorobot. 2021 Jul 20;15:701194. doi: 10.3389/fnbot.2021.701194. eCollection 2021.
Nowadays, deep representations have been attracting much attention owing to their strong performance across a wide range of tasks. However, the limited interpretability of deep representations poses a major challenge in real-world applications. To address this challenge, this paper proposes a deep matrix factorization method with non-negative constraints that learns interpretable, part-based deep representations of big data. Specifically, a deep architecture is designed in which a supervisor network suppresses noise in the data and a student network learns interpretable deep representations, forming an end-to-end framework for pattern mining. Furthermore, to train the deep matrix factorization architecture, an interpretability loss is defined, comprising a symmetric loss, an apposition loss, and a non-negative constraint loss; this loss ensures knowledge transfer from the supervisor network to the student network and enhances the robustness of the deep representations. Finally, extensive experimental results on two benchmark datasets demonstrate the superiority of the proposed deep matrix factorization method.
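The abstract names the components of the interpretability loss but does not give their formulas, so the following PyTorch sketch is only one plausible reading of the setup, not the authors' implementation. `FactorNet`, `interpretability_loss`, and the concrete forms chosen for the symmetric, apposition, and non-negative constraint terms are all illustrative assumptions: the factor matrices are modeled as stacked bias-free linear layers with ReLU activations, the symmetric term is taken as agreement between student and supervisor codes (the knowledge-transfer step), the apposition term as reconstruction through the transposed factors, and the non-negativity term as a penalty on negative factor entries.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactorNet(nn.Module):
    """Hypothetical stacked factorization network: each linear layer plays
    the role of one factor matrix, with ReLU keeping codes non-negative."""
    def __init__(self, dims):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1], bias=False)
            for i in range(len(dims) - 1)
        )

    def forward(self, x):
        h = x
        for layer in self.layers:
            h = F.relu(layer(h))
        return h

def interpretability_loss(x, student, supervisor,
                          w_sym=1.0, w_app=1.0, w_nn=1.0):
    """Stand-in composite loss; each term below is an assumed form, since
    the abstract only names the three components."""
    h_s = student(x)
    with torch.no_grad():          # supervisor guides but is not updated here
        h_t = supervisor(x)
    # Assumed symmetric term: student code should match the supervisor code.
    l_sym = F.mse_loss(h_s, h_t)
    # Assumed apposition term: reconstruct the input by applying the
    # factor matrices in reverse (decoder tied to encoder weights).
    x_hat = h_s
    for layer in reversed(student.layers):
        x_hat = x_hat @ layer.weight   # (batch, out) @ (out, in) -> (batch, in)
    l_app = F.mse_loss(x_hat, x)
    # Non-negative constraint term: penalize negative entries in the factors.
    l_nn = sum(F.relu(-layer.weight).pow(2).sum() for layer in student.layers)
    return w_sym * l_sym + w_app * l_app + w_nn * l_nn

# Minimal usage example with arbitrary dimensions.
student = FactorNet([784, 256, 64])
supervisor = FactorNet([784, 256, 64])
x = torch.rand(32, 784)
loss = interpretability_loss(x, student, supervisor)
loss.backward()
```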