Suppr 超能文献



A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks.

Authors

Ma Xiaoyu, Sardy Sylvain, Hengartner Nick, Bobenko Nikolai, Lin Yen Ting

Affiliations

Shandong University, Jinan, China.

Department of Mathematics, University of Geneva, Geneva, Switzerland.

Publication

Stat Comput. 2022;32(6):99. doi: 10.1007/s11222-022-10169-0. Epub 2022 Oct 22.

DOI: 10.1007/s11222-022-10169-0
PMID: 36299529
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9587964/
Abstract

To fit sparse linear associations, a LASSO sparsity inducing penalty with a single hyperparameter provably allows to recover the important features (needles) with high probability in certain regimes even if the sample size is smaller than the dimension of the input vector (haystack). More recently learners known as artificial neural networks (ANN) have shown great successes in many machine learning tasks, in particular fitting nonlinear associations. Small learning rate, stochastic gradient descent algorithm and large training set help to cope with the explosion in the number of parameters present in deep neural networks. Yet few ANN learners have been developed and studied to find needles in nonlinear haystacks. Driven by a single hyperparameter, our ANN learner, like for sparse linear associations, exhibits a phase transition in the probability of retrieving the needles, which we do not observe with other ANN learners. To select our penalty parameter, we generalize the universal threshold of Donoho and Johnstone (Biometrika 81(3):425-455, 1994) which is a better rule than the conservative (too many false detections) and expensive cross-validation. In the spirit of simulated annealing, we propose a warm-start sparsity inducing algorithm to solve the high-dimensional, non-convex and non-differentiable optimization problem. We perform simulated and real data Monte Carlo experiments to quantify the effectiveness of our approach.
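The needles-in-a-haystack setting the abstract starts from can be illustrated with a minimal sketch: plain LASSO on a linear model with fewer samples than features, solved by iterative soft-thresholding (ISTA). All constants and names below are illustrative assumptions, and this is the classical linear baseline the paper builds on, not the paper's ANN learner.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(X, y, lam, n_iter=1000):
    # Minimize 0.5*||y - X b||^2 + lam*||b||_1 by iterative
    # soft-thresholding (ISTA) with a fixed 1/Lipschitz step size.
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # spectral norm of X
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        b = soft_threshold(b + step * X.T @ (y - X @ b), step * lam)
    return b

rng = np.random.default_rng(0)
n, p, s, sigma = 100, 400, 5, 0.5            # fewer samples than features
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 5.0                                # the s "needles"
y = X @ beta + sigma * rng.standard_normal(n)

# Universal-threshold-style choice of the single hyperparameter,
# scaled for design columns of norm ~ sqrt(n).
lam = sigma * np.sqrt(2 * n * np.log(p))
b_hat = ista_lasso(X, y, lam)
support = np.flatnonzero(np.abs(b_hat) > 0.5)
```

With needles this strong the estimated support is exactly the first s coordinates; shrinking the signal amplitude or growing p at fixed n drives the recovery probability through the kind of phase transition the abstract refers to.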

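The Donoho and Johnstone universal threshold that the abstract generalizes is easiest to see in its original Gaussian sequence setting: observe y_i = mu_i + eps_i with eps_i ~ N(0, sigma^2) and soft-threshold at lambda = sigma * sqrt(2 log n), which with high probability zeroes every pure-noise coordinate. A minimal sketch under illustrative constants (the paper's contribution is extending this selection rule to its ANN penalty parameter):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
mu = np.zeros(n)
mu[:10] = 8.0                                # ten needles, rest pure noise
y = mu + rng.standard_normal(n)              # sigma = 1

# Universal threshold: just above the expected maximum of n
# standard-normal noise variables, so false detections are rare.
lam = np.sqrt(2 * np.log(n))                 # about 4.29 for n = 10^4
mu_hat = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
detected = np.flatnonzero(mu_hat)
```

A prediction-oriented rule such as cross-validation would pick a smaller threshold and flag many noise coordinates, which is the "too many false detections" drawback the abstract attributes to it.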

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a765/9587964/0312a3654cff/11222_2022_10169_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a765/9587964/870bae5f7170/11222_2022_10169_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a765/9587964/a4b0133278d6/11222_2022_10169_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a765/9587964/633a8b044032/11222_2022_10169_Fig4_HTML.jpg

Similar Articles

1. A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks.
   Stat Comput. 2022;32(6):99. doi: 10.1007/s11222-022-10169-0. Epub 2022 Oct 22.
2. Estimation of Granger causality through Artificial Neural Networks: applications to physiological systems and chaotic electronic oscillators.
   PeerJ Comput Sci. 2021 May 18;7:e429. doi: 10.7717/peerj-cs.429. eCollection 2021.
3. Deep convolutional neural network and IoT technology for healthcare.
   Digit Health. 2024 Jan 17;10:20552076231220123. doi: 10.1177/20552076231220123. eCollection 2024 Jan-Dec.
4. An artificial neural network to model response of a radiotherapy beam monitoring system.
   Med Phys. 2020 Apr;47(4):1983-1994. doi: 10.1002/mp.14033. Epub 2020 Feb 3.
5. Finding Distributed Needles in Neural Haystacks.
   J Neurosci. 2021 Feb 3;41(5):1019-1032. doi: 10.1523/JNEUROSCI.0904-20.2020. Epub 2020 Dec 17.
6. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models.
   Front Neurosci. 2017 Nov 16;11:635. doi: 10.3389/fnins.2017.00635. eCollection 2017.
7. A novel artificial neural network method for biomedical prediction based on matrix pseudo-inversion.
   J Biomed Inform. 2014 Apr;48:114-21. doi: 10.1016/j.jbi.2013.12.009. Epub 2013 Dec 18.
8. Predicting dry matter intake in Canadian Holstein dairy cattle using milk mid-infrared reflectance spectroscopy and other commonly available predictors via artificial neural networks.
   J Dairy Sci. 2022 Oct;105(10):8257-8271. doi: 10.3168/jds.2021-21297. Epub 2022 Aug 31.
9. Transformed ℓ1 regularization for learning sparse deep neural networks.
   Neural Netw. 2019 Nov;119:286-298. doi: 10.1016/j.neunet.2019.08.015. Epub 2019 Aug 27.
10. Nonlinear Feature Selection Neural Network via Structured Sparse Regularization.
    IEEE Trans Neural Netw Learn Syst. 2023 Nov;34(11):9493-9505. doi: 10.1109/TNNLS.2022.3209716. Epub 2023 Oct 27.

References Cited in This Article

1. Consistent Sparse Deep Learning: Theory and Computation.
   J Am Stat Assoc. 2022;117(540):1981-1995. doi: 10.1080/01621459.2021.1895175. Epub 2021 Apr 20.
2. Surprises in high-dimensional ridgeless least squares interpolation.
   Ann Stat. 2022 Apr;50(2):949-986. doi: 10.1214/21-aos2133. Epub 2022 Apr 7.
3. The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale's 18th problem.
   Proc Natl Acad Sci U S A. 2022 Mar 22;119(12):e2107151119. doi: 10.1073/pnas.2107151119. Epub 2022 Mar 16.
4. High-dimensional dynamics of generalization error in neural networks.
   Neural Netw. 2020 Dec;132:428-446. doi: 10.1016/j.neunet.2020.08.022. Epub 2020 Sep 5.
5. Transformed ℓ1 regularization for learning sparse deep neural networks.
   Neural Netw. 2019 Nov;119:286-298. doi: 10.1016/j.neunet.2019.08.015. Epub 2019 Aug 27.
6. VIDOSAT: High-Dimensional Sparsifying Transform Learning for Online Video Denoising.
   IEEE Trans Image Process. 2019 Apr;28(4):1691-1704. doi: 10.1109/TIP.2018.2865684. Epub 2018 Aug 16.
7. Deep Feature Selection: Theory and Application to Identify Enhancers and Promoters.
   J Comput Biol. 2016 May;23(5):322-36. doi: 10.1089/cmb.2015.0189. Epub 2016 Jan 22.
8. Regularization Paths for Generalized Linear Models via Coordinate Descent.
   J Stat Softw. 2010;33(1):1-22.
9. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems.
   IEEE Trans Neural Netw. 1995;6(4):911-7. doi: 10.1109/72.392253.