
An Efficient Orthonormalization-Free Approach for Sparse Dictionary Learning and Dual Principal Component Pursuit.

Author information

Hu Xiaoyin, Liu Xin

Affiliations

Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China.

School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China.

Publication information

Sensors (Basel). 2020 May 27;20(11):3041. doi: 10.3390/s20113041.

Abstract

Sparse dictionary learning (SDL) is a classic representation learning method that has been widely used in data analysis. Recently, ℓ_m-norm (m ≥ 3, m ∈ N) maximization has been proposed to solve SDL, which reshapes the problem into an optimization problem with orthogonality constraints. In this paper, we first propose an ℓ_m-norm maximization model for solving dual principal component pursuit (DPCP), based on the similarities between DPCP and SDL. Then, we propose a smooth unconstrained exact penalty model and show its equivalence with the ℓ_m-norm maximization model. Based on this penalty model, we develop an efficient first-order algorithm (PenNMF) and show its global convergence. Extensive experiments illustrate the high efficiency of PenNMF compared with other state-of-the-art algorithms for solving ℓ_m-norm maximization with orthogonality constraints.
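To make the constrained problem concrete, the following sketch maximizes an ℓ_4-norm objective over matrices with orthonormal columns using plain gradient ascent with a QR retraction. This is a generic, orthonormalization-based baseline for illustration only, not the paper's orthonormalization-free PenNMF algorithm; the data matrix `X` and all dimensions are hypothetical.

```python
import numpy as np

# Illustrative baseline for  max f(A) = ||X @ A||_4^4  s.t.  A^T A = I,
# solved by gradient ascent followed by a QR retraction onto the
# feasible set. (PenNMF instead works with a smooth unconstrained
# penalty model; this sketch only shows the constrained problem itself.)

rng = np.random.default_rng(0)
n, p = 20, 5
X = rng.standard_normal((50, n))       # hypothetical data matrix

def f(A):
    # l4-norm objective: sum of fourth powers of the entries of X @ A
    return np.sum((X @ A) ** 4)

def grad(A):
    # Euclidean gradient of f at A
    return 4 * X.T @ (X @ A) ** 3

A = np.linalg.qr(rng.standard_normal((n, p)))[0]   # feasible start
f0 = f(A)
step = 1e-4
for _ in range(200):
    # ascend along the gradient, then retract back to A^T A = I via QR
    A, _ = np.linalg.qr(A + step * grad(A))

# the iterate stays numerically orthonormal after every retraction
print(np.allclose(A.T @ A, np.eye(p), atol=1e-8))
```

Because f is invariant to column sign flips, the sign ambiguity of the QR factor does not affect the objective; each small step increases f while the retraction restores feasibility, which is exactly the per-iteration orthonormalization cost that orthonormalization-free methods avoid.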


Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8da9/7308875/d04d0fb9eece/sensors-20-03041-g0A1.jpg
