Wang Shiping, Chen Zhaoliang, Du Shide, Lin Zhouchen
IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):5042-5055. doi: 10.1109/TPAMI.2021.3082632. Epub 2022 Aug 4.
Sparsity-constrained optimization problems are common in machine learning, such as sparse coding, low-rank minimization, and compressive sensing. However, most previous studies have focused on constructing various hand-crafted sparse regularizers, while little work has been devoted to learning adaptive sparse regularizers from given input data for specific tasks. In this paper, we propose a deep sparse regularizer learning model that learns data-driven sparse regularizers adaptively. Via the proximal gradient algorithm, we find that learning a sparse regularizer is equivalent to learning a parameterized activation function. This motivates us to learn sparse regularizers in the deep learning framework. Therefore, we build a neural network composed of multiple blocks, each being differentiable and reusable. All blocks contain learnable piecewise linear activation functions, which correspond to the sparse regularizer to be learned. Furthermore, the proposed model is trained with backpropagation, and all parameters in the model are learned end-to-end. We apply our framework to multi-view clustering and semi-supervised classification tasks to learn a latent compact representation. Experimental results demonstrate the superiority of the proposed framework over state-of-the-art multi-view learning models.
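The equivalence the abstract alludes to can be illustrated with a minimal sketch. In a proximal gradient (ISTA-style) step for a sparse coding objective, the regularizer enters only through its proximal operator, which acts elementwise like an activation function: for the hand-crafted L1 regularizer this is soft-thresholding, and replacing it with a learnable piecewise linear function (a hypothetical parameterization here, not the paper's exact one) gives the kind of block that can be unrolled into a network:

```python
import numpy as np

def soft_threshold(v, lam):
    # Proximal operator of lam * ||.||_1: a fixed, hand-crafted "activation".
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def piecewise_linear_activation(v, knots, values):
    # A learnable piecewise linear function (illustrative parameterization):
    # linear interpolation between (knot, value) control points, standing in
    # for the proximal operator of a learned, data-driven regularizer.
    return np.interp(v, knots, values)

def proximal_gradient_step(z, D, x, step, prox):
    # One step for min_z 0.5 * ||x - D z||^2 + R(z):
    # gradient descent on the smooth term, then the prox of step * R.
    grad = D.T @ (D @ z - x)
    return prox(z - step * grad)
```

With `prox = lambda v: soft_threshold(v, step * lam)` this is exactly ISTA; swapping in `piecewise_linear_activation` with learnable knots and values, and unrolling a fixed number of such steps as network blocks trained by backpropagation, yields an architecture in the spirit of the one the abstract describes.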