

A Framework of Learning Through Empirical Gain Maximization.

Authors

Feng Yunlong, Wu Qiang

Affiliations

Department of Mathematics and Statistics, State University of New York at Albany, Albany, NY 12222, U.S.A.

Department of Mathematical Sciences, Middle Tennessee State University, Murfreesboro, TN 37132, U.S.A.

Publication

Neural Comput. 2021 May 13;33(6):1656-1697. doi: 10.1162/neco_a_01384.

DOI: 10.1162/neco_a_01384
PMID: 34496383
Abstract

We develop in this letter a framework of empirical gain maximization (EGM) to address the robust regression problem where heavy-tailed noise or outliers may be present in the response variable. The idea of EGM is to approximate the density function of the noise distribution instead of approximating the truth function directly as usual. Unlike the classical maximum likelihood estimation that encourages equal importance of all observations and could be problematic in the presence of abnormal observations, EGM schemes can be interpreted from a minimum distance estimation viewpoint and allow the ignorance of those observations. Furthermore, we show that several well-known robust nonconvex regression paradigms, such as Tukey regression and truncated least square regression, can be reformulated into this new framework. We then develop a learning theory for EGM by means of which a unified analysis can be conducted for these well-established but not fully understood regression approaches. This new framework leads to a novel interpretation of existing bounded nonconvex loss functions. Within this new framework, the two seemingly irrelevant terminologies, the well-known Tukey's biweight loss for robust regression and the triweight kernel for nonparametric smoothing, are closely related. More precisely, we show that Tukey's biweight loss can be derived from the triweight kernel. Other frequently employed bounded nonconvex loss functions in machine learning, such as the truncated square loss, the Geman-McClure loss, and the exponential squared loss, can also be reformulated from certain smoothing kernels in statistics. In addition, the new framework enables us to devise new bounded nonconvex loss functions for robust learning.
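The kernel-to-loss correspondence described above can be checked numerically. The sketch below is illustrative, not code from the paper: the function names and the tuning constant `c = 4.685` (a common default for Tukey's biweight in the robust-statistics literature) are assumptions. It verifies that Tukey's biweight loss equals, up to the scaling factor c²/6, one minus the (unnormalized) triweight kernel shape evaluated at t/c.

```python
import numpy as np

def triweight_shape(u):
    """Unnormalized triweight kernel shape: (1 - u^2)^3 on |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1, (1 - u**2) ** 3, 0.0)

def tukey_biweight(t, c=4.685):
    """Tukey's biweight (bisquare) loss with tuning constant c.

    Bounded: rho(t) = (c^2/6) * [1 - (1 - (t/c)^2)^3] for |t| <= c,
    and rho(t) = c^2/6 for |t| > c, so outliers get a capped penalty.
    """
    u = t / c
    return np.where(np.abs(u) <= 1,
                    (c**2 / 6) * (1 - (1 - u**2) ** 3),
                    c**2 / 6)

# Check the identity rho(t) = (c^2/6) * (1 - triweight_shape(t/c))
# over a grid that covers both the inlier region |t| <= c and the tails.
c = 4.685
t = np.linspace(-10, 10, 1001)
assert np.allclose(tukey_biweight(t, c),
                   (c**2 / 6) * (1 - triweight_shape(t / c)))
```

The check passes on the whole grid: inside |t| ≤ c the two expressions are algebraically identical, and outside it both reduce to the constant c²/6, which is exactly the bounded, outlier-ignoring behavior the EGM framework attributes to kernel-derived losses.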


Similar articles

1
A Framework of Learning Through Empirical Gain Maximization.
Neural Comput. 2021 May 13;33(6):1656-1697. doi: 10.1162/neco_a_01384.
2
Fast Rates of Gaussian Empirical Gain Maximization With Heavy-Tailed Noise.
IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):6038-6043. doi: 10.1109/TNNLS.2022.3171171. Epub 2022 Oct 5.
3
Robust Gradient Learning With Applications.
IEEE Trans Neural Netw Learn Syst. 2016 Apr;27(4):822-35. doi: 10.1109/TNNLS.2015.2425215. Epub 2015 May 11.
4
Certifiably Optimal Outlier-Robust Geometric Perception: Semidefinite Relaxations and Scalable Global Optimization.
IEEE Trans Pattern Anal Mach Intell. 2023 Mar;45(3):2816-2834. doi: 10.1109/TPAMI.2022.3179463. Epub 2023 Feb 3.
5
New Insights Into Learning With Correntropy-Based Regression.
Neural Comput. 2021 Jan;33(1):157-173. doi: 10.1162/neco_a_01334. Epub 2020 Oct 20.
6
Robust Regression with Density Power Divergence: Theory, Comparisons, and Data Analysis.
Entropy (Basel). 2020 Mar 31;22(4):399. doi: 10.3390/e22040399.
7
A robust outlier control framework for classification designed with family of homotopy loss function.
Neural Netw. 2019 Apr;112:41-53. doi: 10.1016/j.neunet.2019.01.013. Epub 2019 Jan 30.
8
Robust regression with asymmetric loss functions.
Stat Methods Med Res. 2021 Aug;30(8):1800-1815. doi: 10.1177/09622802211012012. Epub 2021 May 11.
9
Robust Support Vector Machines for Classification with Nonconvex and Smooth Losses.
Neural Comput. 2016 Jun;28(6):1217-47. doi: 10.1162/NECO_a_00837. Epub 2016 May 3.
10
Variable Selection for Nonparametric Learning with Power Series Kernels.
Neural Comput. 2019 Aug;31(8):1718-1750. doi: 10.1162/neco_a_01212. Epub 2019 Jul 1.