Twin-Incoherent Self-Expressive Locality-Adaptive Latent Dictionary Pair Learning for Classification.

Author Information

Zhang Zhao, Sun Yulin, Wang Yang, Zhang Zheng, Zhang Haijun, Liu Guangcan, Wang Meng

Publication Information

IEEE Trans Neural Netw Learn Syst. 2021 Mar;32(3):947-961. doi: 10.1109/TNNLS.2020.2979748. Epub 2021 Mar 1.

Abstract

The projective dictionary pair learning (DPL) model jointly seeks a synthesis dictionary and an analysis dictionary by extracting the block-diagonal coefficients with an incoherence-constrained analysis dictionary. However, DPL fails to discover the underlying subspaces and salient features at the same time, and it cannot encode the neighborhood information of the embedded coding coefficients, especially adaptively. In addition, although the data can be well reconstructed via the minimization of the reconstruction error, useful distinguishing salient feature information may be lost and incorporated into the noise term. In this article, we propose a novel self-expressive adaptive locality-preserving framework: twin-incoherent self-expressive latent DPL (SLatDPL). To capture the salient features from the samples, SLatDPL minimizes a latent reconstruction error by integrating the coefficient learning and salient feature extraction into a unified model, which can also be used to simultaneously discover the underlying subspaces and salient features. To make the coefficients block diagonal and ensure that the salient features are discriminative, our SLatDPL regularizes them by imposing a twin-incoherence constraint. Moreover, SLatDPL utilizes a self-expressive adaptive weighting strategy that uses normalized block-diagonal coefficients to preserve the locality of the codes and salient features. SLatDPL can use the class-specific reconstruction residual to handle new data directly. Extensive simulations on several public databases demonstrate the satisfactory performance of our SLatDPL compared with related methods.
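The abstract notes that SLatDPL classifies new data directly via the class-specific reconstruction residual. As a point of reference, the sketch below shows how this decision rule works in the standard projective DPL setting, assuming per-class synthesis/analysis sub-dictionary pairs (D_k, P_k) have already been learned; the function name `dpl_classify` is illustrative, and SLatDPL's actual rule may additionally involve its learned salient-feature projection.

```python
import numpy as np

def dpl_classify(x, synthesis_dicts, analysis_dicts):
    """Classify x by the smallest class-specific reconstruction residual.

    Illustrative sketch of the standard projective DPL rule
    argmin_k ||x - D_k P_k x||_2, not the exact SLatDPL decision rule.

    x:               sample vector of shape (d,)
    synthesis_dicts: list of per-class synthesis sub-dictionaries D_k, shape (d, m_k)
    analysis_dicts:  list of per-class analysis sub-dictionaries P_k, shape (m_k, d)
    """
    residuals = []
    for D_k, P_k in zip(synthesis_dicts, analysis_dicts):
        code = P_k @ x                # analysis dictionary encodes x by a single projection
        residual = x - D_k @ code     # synthesis dictionary reconstructs x from that code
        residuals.append(np.linalg.norm(residual))
    return int(np.argmin(residuals))  # index of the best-reconstructing class
```

Because the analysis dictionary produces codes by a single matrix multiplication, this test-time rule requires no iterative sparse coding, which is the usual efficiency argument for the DPL family of models.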
