
Are Latent Factor Regression and Sparse Regression Adequate?

Author Information

Fan Jianqing, Lou Zhipeng, Yu Mengxin

Affiliations

Frederick L. Moore '18 Professor of Finance, Professor of Statistics, and Professor of Operations Research and Financial Engineering at Princeton University.

Department of Operations Research and Financial Engineering, Princeton University.

Publication Information

J Am Stat Assoc. 2024;119(546):1076-1088. doi: 10.1080/01621459.2023.2169700. Epub 2023 Feb 14.

Abstract

We propose the Factor Augmented (sparse linear) Regression Model (FARM), which not only admits both the latent factor regression and sparse linear regression as special cases but also bridges dimension reduction and sparse regression. We provide theoretical guarantees for the estimation of our model under sub-Gaussian and heavy-tailed noises (with bounded (1 + ε)-th moment, for all ε > 0), respectively. In addition, existing works on supervised learning often assume that latent factor regression or sparse linear regression is the true underlying model without justifying its adequacy. To fill in such an important gap in high-dimensional inference, we also leverage our model as the alternative model to test the sufficiency of the latent factor regression and the sparse linear regression models. To accomplish these goals, we propose the Factor-Adjusted deBiased Test (FabTest) and a two-stage ANOVA-type test, respectively. We also conduct large-scale numerical experiments, on both synthetic and FRED macroeconomics data, to corroborate the theoretical properties of our methods. Numerical results illustrate the robustness and effectiveness of our model against latent factor regression and sparse linear regression models.
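The core idea of combining latent factor regression with sparse regression can be sketched in a few lines: extract latent factors from the covariates by PCA, then fit a sparse (lasso) regression on the idiosyncratic components alongside the factors. The sketch below is illustrative only, not the authors' exact estimator; the simulated data, the number of factors, and the lasso penalty are all assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k = 200, 50, 2

# Simulated factor model: X = F B' + U, with y depending on both
# the latent factors F and a sparse set of idiosyncratic components U.
F = rng.normal(size=(n, k))
B = rng.normal(size=(p, k))
U = 0.5 * rng.normal(size=(n, p))
X = F @ B.T + U
beta = np.zeros(p)
beta[:3] = 1.0                      # sparse direct effects
gamma = np.array([1.0, -1.0])       # factor effects
y = F @ gamma + U @ beta + 0.1 * rng.normal(size=n)

# Step 1: estimate the latent factors by PCA on the covariates.
Xc = X - X.mean(axis=0)
_, evecs = np.linalg.eigh(Xc.T @ Xc / n)
B_hat = evecs[:, -k:]               # top-k eigenvector loadings
F_hat = Xc @ B_hat                  # estimated factors (up to rotation)
U_hat = Xc - F_hat @ B_hat.T        # estimated idiosyncratic components

# Step 2: regress y on the estimated factors (OLS), then run a lasso
# on the residual against U_hat; the two steps decouple because
# F_hat and U_hat are orthogonal by construction.
coef_f, *_ = np.linalg.lstsq(F_hat, y, rcond=None)
lasso = Lasso(alpha=0.05).fit(U_hat, y - F_hat @ coef_f)
beta_hat = lasso.coef_
print("selected indices:", np.nonzero(np.abs(beta_hat) > 0.2)[0])
```

Setting all factor effects to zero recovers plain sparse regression, and setting beta to zero recovers pure factor regression, which is the sense in which both are special cases of the augmented model.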


Similar Articles

1
Are Latent Factor Regression and Sparse Regression Adequate?
J Am Stat Assoc. 2024;119(546):1076-1088. doi: 10.1080/01621459.2023.2169700. Epub 2023 Feb 14.
2
Sparse Modal Additive Model.
IEEE Trans Neural Netw Learn Syst. 2021 Jun;32(6):2373-2387. doi: 10.1109/TNNLS.2020.3005144. Epub 2021 Jun 2.
3
Adaptive Huber Regression.
J Am Stat Assoc. 2020;115(529):254-265. doi: 10.1080/01621459.2018.1543124. Epub 2019 Apr 22.
4
Sparse Reduced Rank Huber Regression in High Dimensions.
J Am Stat Assoc. 2023;118(544):2383-2393. doi: 10.1080/01621459.2022.2050243. Epub 2022 Apr 15.
5
A SHRINKAGE PRINCIPLE FOR HEAVY-TAILED DATA: HIGH-DIMENSIONAL ROBUST LOW-RANK MATRIX RECOVERY.
Ann Stat. 2021 Jun;49(3):1239-1266. doi: 10.1214/20-aos1980. Epub 2021 Aug 9.
6
Sparse latent factor regression models for genome-wide and epigenome-wide association studies.
Stat Appl Genet Mol Biol. 2022 Mar 7;21(1):sagmb-2021-0035. doi: 10.1515/sagmb-2021-0035.
7
Online inference in high-dimensional generalized linear models with streaming data.
Electron J Stat. 2023;17(2):3443-3471. doi: 10.1214/23-ejs2182. Epub 2023 Nov 28.
8
Sparse Group Lasso: Optimal Sample Complexity, Convergence Rate, and Statistical Inference.
IEEE Trans Inf Theory. 2022 Sep;68(9):5975-6002. doi: 10.1109/tit.2022.3175455. Epub 2022 May 16.
9
Minimax Optimal Bandits for Heavy Tail Rewards.
IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):5280-5294. doi: 10.1109/TNNLS.2022.3203035. Epub 2024 Apr 4.
10
Regularization Methods Based on the -Likelihood for Linear Models with Heavy-Tailed Errors.
Entropy (Basel). 2020 Sep 16;22(9):1036. doi: 10.3390/e22091036.

Cited By

References

1
Understanding Implicit Regularization in Over-Parameterized Single Index Model.
J Am Stat Assoc. 2023;118(544):2315-2328. doi: 10.1080/01621459.2022.2044824. Epub 2022 Mar 27.
2
Integrative Factor Regression and Its Inference for Multimodal Data Analysis.
J Am Stat Assoc. 2022;117(540):2207-2221. doi: 10.1080/01621459.2021.1914635. Epub 2021 May 20.
3
Adaptive Huber Regression.
J Am Stat Assoc. 2020;115(529):254-265. doi: 10.1080/01621459.2018.1543124. Epub 2019 Apr 22.
4
Factor-Adjusted Regularized Model Selection.
J Econom. 2020 May;216(1):71-85. doi: 10.1016/j.jeconom.2020.01.006. Epub 2020 Feb 7.
5
LINEAR HYPOTHESIS TESTING FOR HIGH DIMENSIONAL GENERALIZED LINEAR MODELS.
Ann Stat. 2019 Oct;47(5):2671-2703. doi: 10.1214/18-AOS1761. Epub 2019 Aug 3.
6
Robust estimation of high-dimensional covariance and precision matrices.
Biometrika. 2018 Jun 1;105(2):271-284. doi: 10.1093/biomet/asy011. Epub 2018 Mar 27.
7
Embracing the Blessing of Dimensionality in Factor Models.
J Am Stat Assoc. 2018;113(521):380-389. doi: 10.1080/01621459.2016.1256815. Epub 2017 Nov 13.
8
Sufficient Forecasting Using Factor Models.
J Econom. 2017 Dec;201(2):292-306. doi: 10.1016/j.jeconom.2017.08.009. Epub 2017 Aug 26.
9
Asymptotics of empirical eigenstructure for high dimensional spiked covariance.
Ann Stat. 2017 Jun;45(3):1342-1374. doi: 10.1214/16-AOS1487. Epub 2017 Jun 13.
10
Estimation of high dimensional mean regression in the absence of symmetry and light tail assumptions.
J R Stat Soc Series B Stat Methodol. 2017 Jan;79(1):247-265. doi: 10.1111/rssb.12166. Epub 2016 Apr 14.
