Hirose Yoshihiro
Faculty of Information Science and Technology, Hokkaido University, Kita 14, Nishi 9, Kita-ku, Sapporo, Hokkaido 060-0814, Japan.
Global Station for Big Data and Cybersecurity, Global Institution for Collaborative Research and Education, Hokkaido University, Hokkaido 060-0814, Japan.
Entropy (Basel). 2020 Sep 16;22(9):1036. doi: 10.3390/e22091036.
We propose regularization methods for linear models based on the Lq-likelihood, which is a generalization of the log-likelihood using a power function. Regularization methods are popular for estimation in the normal linear model. However, heavy-tailed errors are also important in statistics and machine learning. We assume q-normal distributions as the errors in linear models. A q-normal distribution is heavy-tailed and is defined using a power function rather than the exponential function. We find that the proposed methods for linear models with q-normal errors coincide with the ordinary regularization methods applied to the normal linear model. Because the proposed methods are penalized least squares methods, they can be computed using existing packages. We examine the proposed methods in numerical experiments, showing that they perform well even when the error is heavy-tailed. The numerical experiments also illustrate that our methods work well in model selection and generalization, especially when the error is slightly heavy-tailed.
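The abstract notes that the proposed estimators reduce to ordinary penalized least squares, so they can be computed with standard tools. As a minimal sketch of that point (not the authors' code), the following fits ridge regression, i.e., L2-penalized least squares, in closed form on data whose errors are drawn from a Student's t distribution, used here only as a convenient stand-in for a heavy-tailed q-normal error; the dimensions, penalty level, and coefficient values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5

# Design matrix and illustrative true coefficients (assumed for the sketch).
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.0, 0.5, 0.0])

# Heavy-tailed errors: Student's t with 3 degrees of freedom as a
# proxy for a q-normal error distribution.
errors = rng.standard_t(df=3, size=n)
y = X @ beta_true + errors

# Ridge (L2-penalized least squares), closed form:
#   beta_hat = (X'X + lam * I)^{-1} X'y
lam = 1.0
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print(beta_hat)
```

Even with the heavy-tailed errors, the penalized least squares estimate recovers the coefficients reasonably well at this sample size, which is the behavior the abstract's numerical experiments report for the proposed methods.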