
Similar Articles

1. ASYMPTOTIC PROPERTIES OF SUFFICIENT DIMENSION REDUCTION WITH A DIVERGING NUMBER OF PREDICTORS.
Stat Sin. 2011;21:707-730. doi: 10.5705/ss.2011.031a.
2. A few theoretical results for Laplace and arctan penalized ordinary least squares linear regression estimators.
Commun Stat Theory Methods. 2024;53(13):4819-4840. doi: 10.1080/03610926.2023.2195033. Epub 2023 Apr 4.
3. Partial Consistency with Sparse Incidental Parameters.
Stat Sin. 2018 May;28:2633-2655. doi: 10.5705/ss.202017.0027.
4. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.
Ann Stat. 2013 Oct 1;41(5):2505-2536. doi: 10.1214/13-AOS1159.
5. Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications.
J Mach Learn Res. 2012 Jun 1;13:1839-1864.
6. Sparse partial least squares regression for simultaneous dimension reduction and variable selection.
J R Stat Soc Series B Stat Methodol. 2010 Jan;72(1):3-25. doi: 10.1111/j.1467-9868.2009.00723.x.
7. Variable Selection for Support Vector Machines in Moderately High Dimensions.
J R Stat Soc Series B Stat Methodol. 2016 Jan;78(1):53-76. doi: 10.1111/rssb.12100. Epub 2015 Jan 5.
8. CORRELATION PURSUIT: FORWARD STEPWISE VARIABLE SELECTION FOR INDEX MODELS.
J R Stat Soc Series B Stat Methodol. 2012 Nov 1;74(5):849-870. doi: 10.1111/j.1467-9868.2011.01026.x. Epub 2012 Apr 12.
9. Generalized Regression Estimators with High-Dimensional Covariates.
Stat Sin. 2020 Jul;30(3):1135-1154. doi: 10.5705/ss.202017.0384.
10. Robust learning for optimal treatment decision with NP-dimensionality.
Electron J Stat. 2016;10:2894-2921. doi: 10.1214/16-EJS1178. Epub 2016 Oct 13.

Cited By

1. Probability-enhanced sufficient dimension reduction for binary classification.
Biometrics. 2014 Sep;70(3):546-55. doi: 10.1111/biom.12174. Epub 2014 Apr 29.

References

1. One-step Sparse Estimates in Nonconcave Penalized Likelihood Models.
Ann Stat. 2008 Aug 1;36(4):1509-1533. doi: 10.1214/009053607000000802.
2. Discussion of "Sure Independence Screening for Ultra-High Dimensional Feature Space".
J R Stat Soc Series B Stat Methodol. 2008 Nov;70(5):903. doi: 10.1111/j.1467-9868.2008.00674.x.
3. Variable Selection using MM Algorithms.
Ann Stat. 2005;33(4):1617-1642. doi: 10.1214/009053605000000200.
4. Tuning parameter selectors for the smoothly clipped absolute deviation method.
Biometrika. 2007 Aug 1;94(3):553-568. doi: 10.1093/biomet/asm053.
5. Variable Selection in Semiparametric Regression Modeling.
Ann Stat. 2008;36(1):261-286. doi: 10.1214/009053607000000604.
6. RSIR: regularized sliced inverse regression for motif discovery.
Bioinformatics. 2005 Nov 15;21(22):4169-75. doi: 10.1093/bioinformatics/bti680. Epub 2005 Sep 15.

ASYMPTOTIC PROPERTIES OF SUFFICIENT DIMENSION REDUCTION WITH A DIVERGING NUMBER OF PREDICTORS.

Authors

Wu Yichao, Li Lexin

Affiliations

Department of Statistics, North Carolina State University, Raleigh, NC 27695, USA.

Publication Information

Stat Sin. 2011;21:707-730. doi: 10.5705/ss.2011.031a.

DOI: 10.5705/ss.2011.031a
PMID: 22140299
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC3228487/
Abstract

We investigate asymptotic properties of a family of sufficient dimension reduction estimators when the number of predictors p diverges to infinity with the sample size. We adopt a general formulation of dimension reduction estimation through least squares regression of a set of transformations of the response. This formulation allows us to establish the consistency of reduction projection estimation. We then introduce the SCAD max penalty, along with a difference convex optimization algorithm, to achieve variable selection. We show that the penalized estimator selects all truly relevant predictors and excludes all irrelevant ones with probability approaching one, while maintaining consistent reduction basis estimation for the relevant predictors. Our work differs from most model-based selection methods in that it does not require a traditional model, and it extends existing sufficient dimension reduction and model-free variable selection approaches from the fixed p scenario to a diverging p.
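The abstract's "least squares regression of a set of transformations of the response" can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy, not the paper's method: it uses slice-indicator transformations of the response (as in sliced inverse regression, one member of this estimator family), plain least squares with no SCAD penalty or difference convex algorithm, and a fixed small p. It only shows how regressing response transformations on the predictors recovers a reduction direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-index model y = (beta^T x)^3 + noise; beta spans the true
# dimension-reduction subspace. (Illustrative setup, not from the paper.)
n, p = 500, 10
beta = np.zeros(p)
beta[0] = 1.0
X = rng.standard_normal((n, p))
y = (X @ beta) ** 3 + 0.1 * rng.standard_normal(n)

# Transformations of the response: indicators of H quantile slices of y.
H = 5
edges = np.quantile(y, np.linspace(0.0, 1.0, H + 1))
F = np.column_stack([(y > edges[h]) & (y <= edges[h + 1]) for h in range(H)])
F[:, 0] |= (y <= edges[0])  # put the minimum of y into the first slice

# Least squares regression of each (centered) transformation on the
# centered predictors; stack the p-vectors of coefficients column-wise.
Xc = X - X.mean(axis=0)
B, *_ = np.linalg.lstsq(Xc, F - F.mean(axis=0), rcond=None)  # shape (p, H)

# The leading left singular vector of B estimates the reduction basis.
u = np.linalg.svd(B)[0][:, 0]
cosine = abs(u @ beta) / np.linalg.norm(beta)
print(f"alignment with true direction: {cosine:.2f}")
```

Each column of B is attracted toward the true direction beta, so the leading singular vector aligns closely with it; the paper's contribution is the asymptotic analysis of such estimators when p grows with n, plus SCAD-penalized variable selection on top.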
