IEEE Trans Neural Netw Learn Syst. 2018 Apr;29(4):1006-1018. doi: 10.1109/TNNLS.2017.2648880. Epub 2017 Feb 1.
Linear regression (LR) and several of its variants have been widely used for classification problems. Most of these methods assume that, during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which offers too little freedom to fit the labels adequately. To address this problem, this paper proposes a novel regularized label relaxation LR method with the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs a class compactness graph based on manifold learning and uses it as a regularization term to avoid overfitting; the class compactness graph ensures that samples sharing the same label remain close to one another after they are transformed. Two different algorithms, based on two different norm loss functions, are devised. Both algorithms admit compact closed-form solutions in each iteration and are therefore easy to implement. Extensive experiments show that the two algorithms outperform state-of-the-art algorithms in terms of both classification accuracy and running time.
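To make the ingredients described above concrete, the following NumPy sketch shows one way such a scheme can be assembled: a one-hot label matrix is relaxed by a nonnegative matrix along class-dependent dragging directions, a class-compactness graph Laplacian serves as the regularizer, and the regression matrix and relaxation matrix are updated alternately, each with a closed-form step. The objective, the graph weights, the update rules, and all names and parameters (label_relaxation_lr, lam, beta, n_iter) are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def class_compactness_laplacian(y):
    # Laplacian of a simple class-compactness graph: samples that share a
    # label are connected with weight 1 (an assumed, simplified construction).
    S = (y[:, None] == y[None, :]).astype(float)
    np.fill_diagonal(S, 0.0)
    D = np.diag(S.sum(axis=1))
    return D - S

def label_relaxation_lr(X, y, lam=0.1, beta=0.01, n_iter=20):
    # Alternating minimization for a label-relaxed ridge regression with a
    # graph-Laplacian penalty; a sketch under assumed objective and updates.
    n, d = X.shape
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)   # one-hot label matrix
    B = 2.0 * Y - 1.0                                     # +1/-1 dragging directions
    L = class_compactness_laplacian(y)
    Xa = np.hstack([X, np.ones((n, 1))])                  # absorb the bias term
    M = np.zeros_like(Y)                                  # nonnegative relaxation matrix
    A = Xa.T @ Xa + lam * Xa.T @ L @ Xa + beta * np.eye(d + 1)
    for _ in range(n_iter):
        T = Y + B * M                                     # relaxed (slack) target matrix
        W = np.linalg.solve(A, Xa.T @ T)                  # closed-form update of W
        R = Xa @ W - Y
        M = np.maximum(0.0, B * R)                        # closed-form update of M
    return W, classes

def predict(W, classes, X):
    # Assign each sample to the class with the largest regression response.
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    return classes[np.argmax(Xa @ W, axis=1)]

For example, W, classes = label_relaxation_lr(X_train, y_train) followed by predict(W, classes, X_test) yields predicted labels; in practice the hyperparameters lam and beta would be tuned by cross-validation.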