

Robust high dimensional factor models with applications to statistical machine learning.

Authors

Jianqing Fan, Kaizheng Wang, Yiqiao Zhong, Ziwei Zhu

Affiliations

Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08540, USA.

Department of Industrial Engineering and Operations Research, Columbia University, New York, NY 10027, USA.

Publication

Stat Sci. 2021 May;36(2):303-327. doi: 10.1214/20-sts785. Epub 2021 Apr 19.

Abstract

Factor models are a class of powerful statistical models that have been widely used to deal with dependent measurements that arise frequently in applications ranging from genomics and neuroscience to economics and finance. As data are collected at an ever-growing scale, statistical machine learning faces some new challenges: high dimensionality, strong dependence among observed variables, heavy-tailed variables and heterogeneity. High-dimensional robust factor analysis serves as a powerful toolkit to conquer these challenges. This paper gives a selective overview of recent advances in high-dimensional factor models and their applications to statistics, including Factor-Adjusted Robust Model Selection (FarmSelect) and Factor-Adjusted Robust Multiple Testing (FarmTest). We show that classical methods, especially principal component analysis (PCA), can be tailored to many new problems and provide powerful tools for statistical estimation and inference. We highlight PCA and its connections to matrix perturbation theory, robust statistics, random projection, false discovery rate, etc., and illustrate through several applications how insights from these fields yield solutions to modern challenges. We also present far-reaching connections between factor models and popular statistical learning problems, including network analysis and low-rank matrix recovery.
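The factor-adjustment idea behind FarmSelect and FarmTest can be sketched in a few lines: estimate the latent factors from the data via PCA, remove the common component, and run selection or testing on the weakly correlated residuals. The Python sketch below is illustrative only, under simplifying assumptions: it uses ordinary (non-robust) PCA via the SVD on simulated Gaussian data, treats the number of factors K as known, and the function name pca_factor_adjust is ours rather than an implementation from the paper.

import numpy as np

def pca_factor_adjust(X, K):
    # Estimate K latent factors by PCA and return the factor-adjusted data.
    # X: (n, p) array of n observations on p dependent variables.
    # K: assumed (known) number of latent factors.
    Xc = X - X.mean(axis=0)                        # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    F_hat = U[:, :K] * s[:K]                       # estimated factor scores, shape (n, K)
    B_hat = Vt[:K].T                               # estimated loadings, shape (p, K)
    residual = Xc - F_hat @ B_hat.T                # idiosyncratic (factor-adjusted) part
    return F_hat, B_hat, residual

# Toy usage: p = 200 correlated variables driven by K = 3 common factors.
rng = np.random.default_rng(0)
n, p, K = 100, 200, 3
F = rng.normal(size=(n, K))
B = rng.normal(size=(p, K))
X = F @ B.T + 0.5 * rng.normal(size=(n, p))
F_hat, B_hat, resid = pca_factor_adjust(X, K)

# After adjustment the residuals are far less correlated than the raw data,
# which is what makes subsequent model selection or multiple testing tractable.
print(np.abs(np.corrcoef(X, rowvar=False)).mean())
print(np.abs(np.corrcoef(resid, rowvar=False)).mean())

In the robust setting emphasized by the paper, the plain SVD/sample-covariance step would be replaced by heavy-tail-robust covariance or factor estimates; the ordinary PCA here only illustrates the adjustment pipeline.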


Similar Articles

1. Robust high dimensional factor models with applications to statistical machine learning. Stat Sci. 2021 May;36(2):303-327. doi: 10.1214/20-sts785. Epub 2021 Apr 19.
2. FarmTest: Factor-adjusted robust multiple testing with approximate false discovery control. J Am Stat Assoc. 2019;114(528):1880-1893. doi: 10.1080/01621459.2018.1527700. Epub 2019 Mar 20.
3. A Shrinkage Principle for Heavy-Tailed Data: High-Dimensional Robust Low-Rank Matrix Recovery. Ann Stat. 2021 Jun;49(3):1239-1266. doi: 10.1214/20-aos1980. Epub 2021 Aug 9.
5. Asymptotic performance of PCA for high-dimensional heteroscedastic data. J Multivar Anal. 2018 Sep;167:435-452. doi: 10.1016/j.jmva.2018.06.002. Epub 2018 Jun 19.
7. Estimating False Discovery Proportion Under Arbitrary Covariance Dependence. J Am Stat Assoc. 2012;107(499):1019-1035. doi: 10.1080/01621459.2012.720478.
8. A Comparison of Methods for Estimating the Determinant of High-Dimensional Covariance Matrix. Int J Biostat. 2017 Sep 21;13(2). doi: 10.1515/ijb-2017-0013.
9. Principal Component Analysis Based on Nuclear Norm Minimization. Neural Netw. 2019 Oct;118:1-16. doi: 10.1016/j.neunet.2019.05.020. Epub 2019 Jun 8.
10. Likelihood-ratio-based verification in high-dimensional spaces. IEEE Trans Pattern Anal Mach Intell. 2014 Jan;36(1):127-39. doi: 10.1109/TPAMI.2013.93.

Cited By

1. Understanding Implicit Regularization in Over-Parameterized Single Index Model. J Am Stat Assoc. 2023;118(544):2315-2328. doi: 10.1080/01621459.2022.2044824. Epub 2022 Mar 27.
2. A Joint MLE Approach to Large-Scale Structured Latent Attribute Analysis. J Am Stat Assoc. 2023;118(541):746-760. doi: 10.1080/01621459.2021.1955689. Epub 2021 Sep 1.
3. Bayesian Factor-adjusted Sparse Regression. J Econom. 2022 Sep;230(1):3-19. doi: 10.1016/j.jeconom.2020.06.012. Epub 2021 Nov 1.
4. Factor-Adjusted Regularized Model Selection. J Econom. 2020 May;216(1):71-85. doi: 10.1016/j.jeconom.2020.01.006. Epub 2020 Feb 7.

References

1. Cross-Dimensional Inference of Dependent High-Dimensional Data. J Am Stat Assoc. 2012;107(497):135-151. doi: 10.1080/01621459.2011.645777. Epub 2012 Jun 11.
2. A Shrinkage Principle for Heavy-Tailed Data: High-Dimensional Robust Low-Rank Matrix Recovery. Ann Stat. 2021 Jun;49(3):1239-1266. doi: 10.1214/20-aos1980. Epub 2021 Aug 9.
3. Entrywise Eigenvector Analysis of Random Matrices with Low Expected Rank. Ann Stat. 2020 Jun;48(3):1452-1474. doi: 10.1214/19-aos1854. Epub 2020 Jul 17.
4. FarmTest: Factor-adjusted robust multiple testing with approximate false discovery control. J Am Stat Assoc. 2019;114(528):1880-1893. doi: 10.1080/01621459.2018.1527700. Epub 2019 Mar 20.
6. Confounder Adjustment in Multiple Hypothesis Testing. Ann Stat. 2017 Oct;45(5):1863-1894. doi: 10.1214/16-AOS1511. Epub 2017 Oct 31.
7. Optimal Shrinkage of Eigenvalues in the Spiked Covariance Model. Ann Stat. 2018 Aug;46(4):1742-1778. doi: 10.1214/17-AOS1601. Epub 2018 Jun 27.
8. Large Covariance Estimation through Elliptical Factor Models. Ann Stat. 2018 Aug;46(4):1383-1414. doi: 10.1214/17-AOS1588. Epub 2018 Jun 27.
9. Embracing the Blessing of Dimensionality in Factor Models. J Am Stat Assoc. 2018;113(521):380-389. doi: 10.1080/01621459.2016.1256815. Epub 2017 Nov 13.
10. Estimation of the false discovery proportion with unknown dependence. J R Stat Soc Series B Stat Methodol. 2017 Sep;79(4):1143-1164. doi: 10.1111/rssb.12204. Epub 2016 Sep 26.
