
Gradient Learning With the Mode-Induced Loss: Consistency Analysis and Applications

Authors

Chen Hong, Fu Youcheng, Jiang Xue, Chen Yanhong, Li Weifu, Zhou Yicong, Zheng Feng

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):9686-9699. doi: 10.1109/TNNLS.2023.3236345. Epub 2024 Jul 8.

Abstract

Variable selection methods aim to select the key covariates related to the response variable for learning problems with high-dimensional data. Typical methods of variable selection are formulated in terms of sparse mean regression with a parametric hypothesis class, such as linear functions or additive functions. Despite rapid progress, the existing methods depend heavily on the chosen parametric function class and are incapable of handling variable selection for problems where the data noise is heavy-tailed or skewed. To circumvent these drawbacks, we propose sparse gradient learning with the mode-induced loss (SGLML) for robust model-free (MF) variable selection. The theoretical analysis is established for SGLML on the upper bound of excess risk and the consistency of variable selection, which guarantees its ability for gradient estimation from the lens of gradient risk and informative variable identification under mild conditions. Experimental analysis on the simulated and real data demonstrates the competitive performance of our method over the previous gradient learning (GL) methods.
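The robustness claimed above comes from replacing the squared loss of mean regression with a mode-induced loss, which is bounded and therefore insensitive to heavy-tailed or skewed noise. The paper's full SGLML objective is not reproduced here; the following is only a minimal sketch of a Gaussian-kernel mode-induced loss contrasted with the squared loss, with the function names and the bandwidth parameter `sigma` chosen for illustration.

```python
import numpy as np

def mode_induced_loss(residuals, sigma=1.0):
    """Mode-induced loss with a Gaussian kernel.

    Each residual contributes 1 - exp(-r^2 / (2*sigma^2)), which is
    bounded in [0, 1), so a single extreme residual cannot dominate
    the objective the way it does under the squared loss.
    """
    return np.mean(1.0 - np.exp(-residuals**2 / (2.0 * sigma**2)))

def squared_loss(residuals):
    """Ordinary mean-regression loss, unbounded in the residual."""
    return np.mean(residuals**2)

# Clean residuals vs. the same residuals with one heavy-tailed outlier.
r_clean = np.array([0.1, -0.2, 0.05])
r_outlier = np.array([0.1, -0.2, 100.0])

print(mode_induced_loss(r_clean), mode_induced_loss(r_outlier))
print(squared_loss(r_clean), squared_loss(r_outlier))
```

The outlier shifts the mode-induced loss by less than one unit (its contribution saturates at 1), while the squared loss grows without bound, which is the intuition behind using a mode-induced loss for robust gradient estimation.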

