
New Insights Into Learning With Correntropy-Based Regression.

Affiliation

Department of Mathematics and Statistics, State University of New York at Albany, Albany, NY 12222, U.S.A.

Publication Information

Neural Comput. 2021 Jan;33(1):157-173. doi: 10.1162/neco_a_01334. Epub 2020 Oct 20.

Abstract

Stemming from information-theoretic learning, the correntropy criterion and its applications to machine learning tasks have been extensively studied and explored. Its application to regression problems leads to the robustness-enhanced regression paradigm: correntropy-based regression. Alongside a wide variety of successful real-world applications, its theoretical properties have also been investigated recently in a series of studies from a statistical learning viewpoint. The resulting big picture is that correntropy-based regression regresses toward the conditional mode function or the conditional mean function robustly under certain conditions. Continuing this trend and going further, in this study, we report some new insights into this problem. First, we show that under the additive noise regression model, such a regression paradigm can be deduced from minimum distance estimation, implying that the resulting estimator is essentially a minimum distance estimator and thus possesses robustness properties. Second, we show that the regression paradigm in fact provides a unified approach to regression problems in that it approaches the conditional mean, the conditional mode, and the conditional median functions under certain conditions. Third, we present some new results when it is used to learn the conditional mean function by developing its error bounds and exponential convergence rates under conditional (1+ε)-moment assumptions. The saturation effect on the established convergence rates, which was observed under (1+ε)-moment assumptions, still occurs, indicating the inherent bias of the regression estimator. These novel insights deepen our understanding of correntropy-based regression, help cement the theoretic correntropy framework, and enable us to investigate learning schemes induced by general bounded nonconvex loss functions.
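To make the paradigm concrete, the following is a minimal sketch (not taken from the paper) of correntropy-based linear regression. The empirical maximum correntropy criterion maximizes (1/n) Σᵢ exp(−(yᵢ − f(xᵢ))² / (2σ²)), which can be optimized by a standard half-quadratic / iteratively reweighted least-squares scheme; all function and variable names here are illustrative, and the data, bandwidth σ, and iteration count are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Additive-noise model y = 2x + 1 + noise, with 5% gross outliers.
n = 200
x = rng.uniform(-1.0, 1.0, n)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=n)
y[:10] += 15.0  # contaminate the first ten responses

X = np.column_stack([x, np.ones(n)])  # design matrix with intercept column


def fit_correntropy(X, y, sigma=0.5, iters=50):
    """Half-quadratic / IRLS iteration for the maximum correntropy criterion:
    maximize (1/n) * sum_i exp(-(y_i - X_i @ w)^2 / (2 * sigma^2)) over w."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]  # ordinary least-squares start
    for _ in range(iters):
        r = y - X @ w
        weights = np.exp(-r**2 / (2.0 * sigma**2))  # large residuals -> weight ~ 0
        WX = weights[:, None] * X
        # Weighted least-squares step: (X^T diag(w) X) w = X^T diag(w) y
        w = np.linalg.solve(X.T @ WX, WX.T @ y)
    return w


w_ls = np.linalg.lstsq(X, y, rcond=None)[0]   # plain least squares, pulled by outliers
w_mcc = fit_correntropy(X, y)                  # correntropy-based fit
print("least squares:", w_ls)
print("correntropy:  ", w_mcc)
```

The reweighting step makes the robustness mechanism visible: each residual is passed through a Gaussian, so grossly contaminated points receive vanishing weight, while as σ → ∞ the weights become uniform and the estimator reduces to ordinary least squares, consistent with the abstract's observation that the paradigm can approach the conditional mean under certain conditions.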

