Estimation and Selection via Absolute Penalized Convex Minimization and Its Multistage Adaptive Applications

Authors

Huang Jian, Zhang Cun-Hui

Affiliations

Department of Statistics and Actuarial Science, University of Iowa, Iowa City, IA 52242, USA.

Department of Statistics and Biostatistics, Rutgers University, Piscataway, New Jersey 08854, USA.

Publication

J Mach Learn Res. 2012 Jun 1;13:1839-1864.

Abstract

The ℓ₁-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression, and these have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ₁-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ₁-penalized estimator in sparse, high-dimensional settings where the number of predictors can be much larger than the sample size n. The adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models, including linear regression, logistic regression and log-linear models, are used throughout to illustrate the applications of the general results.
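The multistage method described above amounts to iteratively reweighted ℓ₁ minimization: solve a weighted Lasso, then recompute the penalty weights from the previous estimate so that large coefficients are penalized less. The sketch below is a minimal illustration for the squared-error loss only, not the authors' implementation; the inverse-magnitude weight rule `1/(|b| + eps)` and all parameter values are illustrative assumptions.

```python
import numpy as np

def weighted_lasso_cd(X, y, lam, w, n_sweeps=200):
    """Coordinate descent for weighted l1-penalized least squares:
    minimize (1/2n) * ||y - X b||^2 + lam * sum_j w[j] * |b[j]|.
    Assumes the columns of X have nonzero norm."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n          # per-coordinate curvature
    r = y.copy()                               # residual y - X b (b = 0)
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]                # drop coordinate j from the fit
            rho = X[:, j] @ r / n
            # soft-threshold at the coordinate-specific penalty level
            b[j] = np.sign(rho) * max(abs(rho) - lam * w[j], 0.0) / col_sq[j]
            r -= X[:, j] * b[j]                # restore coordinate j
    return b

def multistage_adaptive_lasso(X, y, lam, n_stages=3, eps=1e-4):
    """Recursive adaptive Lasso: stage 1 is the plain Lasso; each later
    stage reweights the penalty by the inverse magnitude of the previous
    estimate, approximating a concave penalty."""
    p = X.shape[1]
    w = np.ones(p)
    b = np.zeros(p)
    for _ in range(n_stages):
        b = weighted_lasso_cd(X, y, lam, w)
        w = 1.0 / (np.abs(b) + eps)            # small coefficients -> heavy penalty
    return b

# Synthetic demonstration: 3 active predictors out of 10.
rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -3.0, 1.5]
y = X @ beta_true + 0.1 * rng.standard_normal(n)
b_hat = multistage_adaptive_lasso(X, y, lam=0.1)
```

The reweighting step is what reduces the shrinkage bias of a single Lasso pass on the large coefficients while driving the small ones to exactly zero.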

Similar Articles

2. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso. Stat Sin. 2014 Jan 1;24(1):25-42. doi: 10.5705/ss.2012.240.
3. On the robustness of the adaptive lasso to model misspecification. Biometrika. 2012 Sep;99(3):717-731. doi: 10.1093/biomet/ass027. Epub 2012 Jul 11.
4. The lasso for high dimensional regression with a possible change point. J R Stat Soc Series B Stat Methodol. 2016 Jan;78(1):193-210. doi: 10.1111/rssb.12108. Epub 2015 Feb 15.
5. Strong Oracle Optimality of Folded Concave Penalized Estimation. Ann Stat. 2014 Jun;42(3):819-849. doi: 10.1214/13-aos1198.
6. Variable Selection and Estimation in High-Dimensional Varying-Coefficient Models. Stat Sin. 2011 Oct 1;21(4):1515-1540. doi: 10.5705/ss.2009.316.
7. Simulation-selection-extrapolation: Estimation in high-dimensional errors-in-variables models. Biometrics. 2019 Dec;75(4):1133-1144. doi: 10.1111/biom.13112. Epub 2019 Aug 28.
8. Calibrating Non-Convex Penalized Regression in Ultra-High Dimension. Ann Stat. 2013 Oct 1;41(5):2505-2536. doi: 10.1214/13-AOS1159.
9. A Generic Path Algorithm for Regularized Statistical Estimation. J Am Stat Assoc. 2014;109(506):686-699. doi: 10.1080/01621459.2013.864166.
10. Hard thresholding regression. Scand Stat Theory Appl. 2019 Mar;46(1):314-328. doi: 10.1111/sjos.12353. Epub 2018 Sep 24.

Cited By

2. Estimation and Inference for High-Dimensional Generalized Linear Models with Knowledge Transfer. J Am Stat Assoc. 2024;119(546):1274-1285. doi: 10.1080/01621459.2023.2184373. Epub 2023 Apr 12.
4. Statistical Inference for High-Dimensional Generalized Linear Models with Binary Outcomes. J Am Stat Assoc. 2023;118(542):1319-1332. doi: 10.1080/01621459.2021.1990769. Epub 2021 Dec 9.
6. Prediction and Variable Selection in High-Dimensional Misspecified Binary Classification. Entropy (Basel). 2020 May 13;22(5):543. doi: 10.3390/e22050543.
7. Hard thresholding regression. Scand Stat Theory Appl. 2019 Mar;46(1):314-328. doi: 10.1111/sjos.12353. Epub 2018 Sep 24.
8. Global Solutions to Folded Concave Penalized Nonconvex Learning. Ann Stat. 2016 Apr;44(2):629-659. doi: 10.1214/15-AOS1380.
9. Strong Oracle Optimality of Folded Concave Penalized Estimation. Ann Stat. 2014 Jun;42(3):819-849. doi: 10.1214/13-aos1198.
10. Oracle Inequalities for the Lasso in the Cox Model. Ann Stat. 2013 Jun 1;41(3):1142-1165. doi: 10.1214/13-AOS1098.

References

1. One-step Sparse Estimates in Nonconcave Penalized Likelihood Models. Ann Stat. 2008 Aug 1;36(4):1509-1533. doi: 10.1214/009053607000000802.
2. Variable Selection using MM Algorithms. Ann Stat. 2005;33(4):1617-1642. doi: 10.1214/009053605000000200.
