Probing for Sparse and Fast Variable Selection with Model-Based Boosting.

Author Information

Thomas Janek, Hepp Tobias, Mayr Andreas, Bischl Bernd

Affiliations

Department of Statistics, LMU München, München, Germany.

Department of Medical Informatics, Biometry and Epidemiology, FAU Erlangen-Nürnberg, Erlangen, Germany.

Publication Information

Comput Math Methods Med. 2017;2017:1421409. doi: 10.1155/2017/1421409. Epub 2017 Jul 31.

Abstract

We present a new variable selection method based on model-based gradient boosting and randomly permuted variables. Model-based boosting is a tool to fit a statistical model while performing variable selection at the same time. A drawback of this fitting procedure is the need for multiple model fits on slightly altered data (e.g., cross-validation or bootstrap) to find the optimal number of boosting iterations and to prevent overfitting. In our proposed approach, we augment the data set with randomly permuted versions of the true variables, so-called shadow variables, and stop the stepwise fitting as soon as such a variable would be added to the model. This allows variable selection in a single model fit without further parameter tuning. We show that our probing approach can compete with state-of-the-art selection methods such as stability selection in a high-dimensional classification benchmark, and we apply it to three gene expression data sets.
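The abstract describes the probing procedure at a high level: augment the design matrix with randomly permuted copies of every variable (shadow variables), run component-wise gradient boosting, and stop the first time a shadow variable would be selected. The sketch below illustrates that idea in Python with simple univariate least-squares base learners; it is not the authors' implementation (model-based boosting is typically fitted with the R package mboost), and the function name probing_boost, the step length nu, and the toy data are illustrative assumptions.

```python
# Minimal sketch of probing with component-wise L2 boosting.
# Illustrative only; names and parameters are assumptions, not the paper's code.
import numpy as np

def probing_boost(X, y, nu=0.1, max_iter=1000, seed=0):
    """Component-wise L2 boosting that stops as soon as a permuted
    'shadow' variable would be added to the model."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # Augment the data with randomly permuted copies (shadow variables).
    shadows = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])
    Z = np.column_stack([X, shadows])
    Z = (Z - Z.mean(0)) / Z.std(0)            # standardize for comparable base learners
    coef = np.zeros(2 * p)
    resid = y - y.mean()
    for _ in range(max_iter):
        # Fit each univariate least-squares base learner to the residuals
        # and pick the one with the smallest residual sum of squares.
        beta = Z.T @ resid / n                # slope of each standardized base learner
        rss = np.sum((resid[:, None] - Z * beta) ** 2, axis=0)
        best = int(np.argmin(rss))
        if best >= p:                         # a shadow variable won: stop fitting
            break
        coef[best] += nu * beta[best]         # take a small step along the best learner
        resid -= nu * Z[:, best] * beta[best]
    selected = np.flatnonzero(coef[:p])
    return selected, coef[:p]

# Toy usage: only the first 3 of 50 variables carry signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] + rng.standard_normal(200)
selected, coef = probing_boost(X, y)
print("selected variables:", selected)
```

Because the shadow variables are noise by construction, the first iteration in which one of them gives the best fit to the residuals signals that the model would start fitting noise; stopping there replaces the cross-validated or bootstrap-based choice of the number of boosting iterations that the abstract identifies as the main drawback of plain model-based boosting.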

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/250c/5555005/d51fc46d7397/CMMM2017-1421409.001.jpg

Similar Articles

1
Probing for Sparse and Fast Variable Selection with Model-Based Boosting.
Comput Math Methods Med. 2017;2017:1421409. doi: 10.1155/2017/1421409. Epub 2017 Jul 31.
3
Boosting for high-dimensional two-class prediction.
BMC Bioinformatics. 2015 Sep 21;16:300. doi: 10.1186/s12859-015-0723-9.
4
Controlling false discoveries in high-dimensional situations: boosting with stability selection.
BMC Bioinformatics. 2015 May 6;16:144. doi: 10.1186/s12859-015-0575-3.
5
Multivariate modeling of complications with data driven variable selection: guarding against overfitting and effects of data set size.
Radiother Oncol. 2012 Oct;105(1):115-21. doi: 10.1016/j.radonc.2011.12.006. Epub 2012 Jan 20.
6
Boosting multi-state models.
Lifetime Data Anal. 2016 Apr;22(2):241-62. doi: 10.1007/s10985-015-9329-9. Epub 2015 May 20.
7
Randomized boosting with multivariable base-learners for high-dimensional variable selection and prediction.
BMC Bioinformatics. 2021 Sep 16;22(1):441. doi: 10.1186/s12859-021-04340-z.
9
Comparison of variable selection methods for high-dimensional survival data with competing events.
Comput Biol Med. 2017 Dec 1;91:159-167. doi: 10.1016/j.compbiomed.2017.10.021. Epub 2017 Oct 20.
10
Meta-analysis based variable selection for gene expression data.
Biometrics. 2014 Dec;70(4):872-80. doi: 10.1111/biom.12213. Epub 2014 Sep 5.

Cited By

1
A cost-effective, machine learning-driven approach for screening arterial functional aging in a large-scale Chinese population.
Front Public Health. 2024 Mar 20;12:1365479. doi: 10.3389/fpubh.2024.1365479. eCollection 2024.
2
A statistical boosting framework for polygenic risk scores based on large-scale genotype data.
Front Genet. 2023 Jan 10;13:1076440. doi: 10.3389/fgene.2022.1076440. eCollection 2022.
3
Regularization approaches in clinical biostatistics: A review of methods and their applications.
Stat Methods Med Res. 2023 Feb;32(2):425-440. doi: 10.1177/09622802221133557. Epub 2022 Nov 16.
4
Evaluating the risk of hypertension in residents in primary care in Shanghai, China with machine learning algorithms.
Front Public Health. 2022 Oct 4;10:984621. doi: 10.3389/fpubh.2022.984621. eCollection 2022.
5
Randomized boosting with multivariable base-learners for high-dimensional variable selection and prediction.
BMC Bioinformatics. 2021 Sep 16;22(1):441. doi: 10.1186/s12859-021-04340-z.
7
Machine learning-based outcome prediction and novel hypotheses generation for substance use disorder treatment.
J Am Med Inform Assoc. 2021 Jun 12;28(6):1216-1224. doi: 10.1093/jamia/ocaa350.
9
Corrigendum to "Probing for Sparse and Fast Variable Selection with Model-Based Boosting".
Comput Math Methods Med. 2018 Jul 5;2018:2430438. doi: 10.1155/2018/2430438. eCollection 2018.
10
Feature Genes Selection Using Supervised Locally Linear Embedding and Correlation Coefficient for Microarray Classification.
Comput Math Methods Med. 2018 Jan 31;2018:5490513. doi: 10.1155/2018/5490513. eCollection 2018.

References

1
Approaches to Regularized Regression - A Comparison between Gradient Boosting and the Lasso.
Methods Inf Med. 2016 Oct 17;55(5):422-430. doi: 10.3414/ME16-01-0033. Epub 2016 Sep 14.
4
Controlling false discoveries in high-dimensional situations: boosting with stability selection.
BMC Bioinformatics. 2015 May 6;16:144. doi: 10.1186/s12859-015-0575-3.
5
The evolution of boosting algorithms. From machine learning to statistical modelling.
Methods Inf Med. 2014;53(6):419-27. doi: 10.3414/ME13-01-0122. Epub 2014 Aug 12.
6
TIGRESS: Trustful Inference of Gene REgulation using Stability Selection.
BMC Syst Biol. 2012 Nov 22;6:145. doi: 10.1186/1752-0509-6-145.
7
The importance of knowing when to stop. A sequential stopping rule for component-wise gradient boosting.
Methods Inf Med. 2012;51(2):178-86. doi: 10.3414/ME11-02-0030. Epub 2012 Feb 20.
8
Estimation of functional connectivity in fMRI data using stability selection-based sparse partial correlation with elastic net penalty.
Neuroimage. 2012 Feb 15;59(4):3852-61. doi: 10.1016/j.neuroimage.2011.11.054. Epub 2011 Dec 1.
9
A prognostic DNA signature for T1T2 node-negative breast cancer patients.
Genes Chromosomes Cancer. 2010 Dec;49(12):1125-34. doi: 10.1002/gcc.20820.
