

Regularization Methods Based on the Lq-Likelihood for Linear Models with Heavy-Tailed Errors.

Author

Hirose Yoshihiro

Affiliations

Faculty of Information Science and Technology, Hokkaido University, Kita 14, Nishi 9, Kita-ku, Sapporo, Hokkaido 060-0814, Japan.

Global Station for Big Data and Cybersecurity, Global Institution for Collaborative Research and Education, Hokkaido University, Hokkaido 060-0814, Japan.

Publication

Entropy (Basel). 2020 Sep 16;22(9):1036. doi: 10.3390/e22091036.

DOI: 10.3390/e22091036
PMID: 33286805
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7597096/
Abstract

We propose regularization methods for linear models based on the Lq-likelihood, which is a generalization of the log-likelihood using a power function. Regularization methods are popular for the estimation in the normal linear model. However, heavy-tailed errors are also important in statistics and machine learning. We assume q-normal distributions as the errors in linear models. A q-normal distribution is heavy-tailed, which is defined using a power function, not the exponential function. We find that the proposed methods for linear models with q-normal errors coincide with the ordinary regularization methods that are applied to the normal linear model. The proposed methods can be computed using existing packages because they are penalized least squares methods. We examine the proposed methods using numerical experiments, showing that the methods perform well, even when the error is heavy-tailed. The numerical experiments also illustrate that our methods work well in model selection and generalization, especially when the error is slightly heavy-tailed.

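The abstract's point that the estimators reduce to penalized least squares, computable with "existing packages," can be illustrated with a short sketch (not the authors' code). Here a standard lasso solver is applied to data with heavy-tailed errors; the Student-t noise, sample sizes, coefficients, and `alpha` value are all illustrative stand-ins for the paper's q-normal setting.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 10

# Sparse true coefficient vector: only the first three entries are nonzero.
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]

# Heavy-tailed errors: Student-t with 3 degrees of freedom,
# used here as a stand-in for the paper's q-normal errors.
y = X @ beta + rng.standard_t(df=3, size=n)

# Ordinary L1-penalized least squares (lasso) from an off-the-shelf package.
model = Lasso(alpha=0.1).fit(X, y)
print(np.round(model.coef_, 2))
```

Despite the heavy-tailed noise, the penalized least squares fit shrinks the spurious coefficients toward zero while retaining the large true ones, consistent with the abstract's claim that ordinary regularization carries over to this error model.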

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/65df5f6008d6/entropy-22-01036-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/615f496ad07b/entropy-22-01036-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/390e343e02d6/entropy-22-01036-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/e828e83dfde1/entropy-22-01036-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/414b18395c2a/entropy-22-01036-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/15619c922458/entropy-22-01036-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/9de15696e2c0/entropy-22-01036-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/092376df8c5a/entropy-22-01036-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/795158021676/entropy-22-01036-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/fa50533a1ccd/entropy-22-01036-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/c6f1808feaf5/entropy-22-01036-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/5d25345064ce/entropy-22-01036-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/58c12f9e4923/entropy-22-01036-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/b75bb99664b4/entropy-22-01036-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/7945e81d8e58/entropy-22-01036-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/2820b9aef0ea/entropy-22-01036-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/356bcb3b80d3/entropy-22-01036-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/2b8f7d18a785/entropy-22-01036-g018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/92e433c909b8/entropy-22-01036-g019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/4271c0997edd/entropy-22-01036-g020.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/57d900938b74/entropy-22-01036-g021.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6777/7597096/2c742e7e3426/entropy-22-01036-g022.jpg

Similar Articles

1
Regularization Methods Based on the Lq-Likelihood for Linear Models with Heavy-Tailed Errors.
Entropy (Basel). 2020 Sep 16;22(9):1036. doi: 10.3390/e22091036.
2
Newton-Raphson Meets Sparsity: Sparse Learning Via a Novel Penalty and a Fast Solver.
IEEE Trans Neural Netw Learn Syst. 2024 Sep;35(9):12057-12067. doi: 10.1109/TNNLS.2023.3251748. Epub 2024 Sep 3.
3
Majorization Minimization by Coordinate Descent for Concave Penalized Generalized Linear Models.
Stat Comput. 2014 Sep;24(5):871-883. doi: 10.1007/s11222-013-9407-3.
4
Sparse logistic regression with a L1/2 penalty for gene selection in cancer classification.
BMC Bioinformatics. 2013 Jun 19;14:198. doi: 10.1186/1471-2105-14-198.
5
Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany.
Biom J. 2015 Sep;57(5):867-84. doi: 10.1002/bimj.201400143. Epub 2015 Jun 8.
6
Variable selection for longitudinal zero-inflated power series transition model.
J Biopharm Stat. 2021 Sep 3;31(5):668-685. doi: 10.1080/10543406.2021.1944177. Epub 2021 Jul 30.
7
Penalized variable selection for accelerated failure time models with random effects.
Stat Med. 2019 Feb 28;38(5):878-892. doi: 10.1002/sim.8023. Epub 2018 Nov 8.
8
Estimation and tests for power-transformed and threshold GARCH models.
J Econom. 2008 Jan;142(1):352-378. doi: 10.1016/j.jeconom.2007.06.004. Epub 2007 Jul 18.
9
Sparse Group Penalties for bi-level variable selection.
Biom J. 2024 Jun;66(4):e2200334. doi: 10.1002/bimj.202200334.
10
Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection.
Ann Appl Stat. 2011 Jan 1;5(1):232-253. doi: 10.1214/10-AOAS388.

References Cited in This Article

1
Nonextensive foundation of Lévy distributions.
Phys Rev E Stat Phys Plasmas Fluids Relat Interdiscip Topics. 1999 Aug;60(2 Pt B):2398-401. doi: 10.1103/physreve.60.2398.