IEEE Trans Neural Netw Learn Syst. 2016 Apr;27(4):822-35. doi: 10.1109/TNNLS.2015.2425215. Epub 2015 May 11.
This paper addresses the robust gradient learning (RGL) problem. Gradient learning (GL) models aim to learn the gradient vector of a target function in supervised learning problems, which can then be applied to tasks such as variable selection, coordinate covariance estimation, and supervised dimension reduction. However, existing GL models are not robust to outliers or heavy-tailed noise. This paper provides an RGL framework to address this problem in both regression and classification, achieved by introducing a robust regression loss function and proposing a robust classification loss. Moreover, our RGL algorithm works with an instance-based kernelized dictionary rather than a fixed reproducing kernel Hilbert space, which may provide more flexibility. To solve the proposed nonconvex model, a simple computational algorithm based on gradient descent is provided, and the convergence of the proposed method is analyzed. We then apply the proposed RGL model to applications such as nonlinear variable selection and coordinate covariance estimation. The effectiveness of the proposed model is verified on both synthetic and real data sets.
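As a rough illustration of the robust-gradient-learning idea (not the paper's exact formulation), the sketch below estimates the gradient of a target function at a query point by locally weighted linear fitting under a Huber loss, optimized by plain gradient descent. The Gaussian weighting, the Huber loss, and all function names and parameters here are assumptions for illustration only; the paper's own losses and kernelized dictionary differ.

```python
import numpy as np

def huber_grad(r, delta=1.0):
    # Derivative of the Huber loss: identity for small residuals,
    # clipped to +/- delta for large ones (this clipping gives robustness
    # to outliers and heavy-tailed noise).
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def robust_gradient_estimate(X, y, x0, bandwidth=1.0, delta=1.0,
                             lr=0.05, n_iter=3000):
    """Illustrative sketch: estimate the gradient of the target function
    at x0 via locally weighted linear regression with a Huber loss,
    solved by gradient descent. Not the paper's algorithm."""
    # Gaussian locality weights, normalized to mean 1 for stable step sizes.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2.0 * bandwidth ** 2))
    w /= w.mean()
    b = 0.0                       # local intercept, approximates f(x0)
    g = np.zeros(X.shape[1])      # gradient estimate at x0
    for _ in range(n_iter):
        # Residuals of the local linear model f(x) ~ b + g.(x - x0).
        r = y - (b + (X - x0) @ g)
        # Robust, locally weighted influence of each sample.
        psi = w * huber_grad(r, delta)
        # Gradient-descent steps on the weighted Huber objective.
        b += lr * psi.mean()
        g += lr * (X - x0).T @ psi / len(y)
    return g
```

Because the influence of each sample is clipped by `huber_grad`, a handful of grossly corrupted labels pulls the estimate far less than it would under a squared loss; the same estimate, computed at every data point, would yield the kind of gradient field used for variable selection or coordinate covariance estimation.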