
A Robust Regression Framework with Laplace Kernel-Induced Loss.

Authors

Yang Liming, Ren Zhuo, Wang Yidan, Dong Hongwei

Affiliation

College of Science, China Agricultural University, Beijing, 100083, China

Publication

Neural Comput. 2017 Nov;29(11):3014-3039. doi: 10.1162/neco_a_01002. Epub 2017 Aug 4.

DOI: 10.1162/neco_a_01002
PMID: 28777723
Abstract

This work proposes a robust regression framework with a nonconvex loss function. Two regression formulations are presented based on the Laplace kernel-induced loss (LK-loss). We further show that the LK-loss is a good approximation of the zero-norm. However, the nonconvexity of the LK-loss makes it difficult to optimize. A continuous optimization method is developed to solve the proposed framework: the problems are formulated as DC (difference of convex functions) programs, and the corresponding DC algorithms (DCAs) converge linearly. The proposed algorithms are then applied directly to determining the hardness of licorice seeds from near-infrared spectral data with noisy input. Experiments in eight spectral regions show that the proposed methods improve generalization over traditional support vector regression (SVR), especially in high-frequency regions. Experiments on several benchmark data sets demonstrate that the proposed methods achieve better results than traditional regression methods on most of the data sets considered.
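The abstract's key ideas, a bounded Laplace-kernel loss that approximates the zero-norm and a DCA that solves a convex subproblem per iteration, can be illustrated with a minimal sketch. The loss form `L(r) = 1 - exp(-|r|/sigma)`, the ridge-regularized linear model, and the IRLS inner solver are assumptions for illustration; the paper's exact formulations may differ.

```python
import numpy as np

def lk_loss(r, sigma=1.0):
    """Laplace kernel-induced loss: L(r) = 1 - exp(-|r|/sigma).

    Bounded in [0, 1); as sigma -> 0 it tends to the 0/1 indicator
    of r != 0, which is why it approximates the zero-norm."""
    return 1.0 - np.exp(-np.abs(r) / sigma)

def dca_lk_regression(X, y, sigma=1.0, lam=1e-2, outer=30, inner=20):
    """DCA sketch for  min_w  sum_i L(y_i - x_i @ w) + lam * ||w||^2.

    DC split: L(r) = g(r) - h(r), with g(r) = |r|/sigma and
    h(r) = |r|/sigma - L(r), both convex. Each outer step replaces
    h by its linearization at the current residuals; the resulting
    convex subproblem is solved here by iteratively reweighted
    least squares (IRLS) on the |.| term."""
    n, d = X.shape
    w = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares warm start
    for _ in range(outer):
        r = y - X @ w
        # subgradient of h at the current residuals
        s = np.sign(r) * (1.0 - np.exp(-np.abs(r) / sigma)) / sigma
        # convex subproblem:
        #   min_w  sum_i |y_i - x_i @ w| / sigma + s @ (X @ w) + lam * ||w||^2
        for _ in range(inner):
            res = y - X @ w
            wts = 1.0 / (sigma * np.maximum(np.abs(res), 1e-8))  # IRLS weights
            A = X.T @ (wts[:, None] * X) + 2.0 * lam * np.eye(d)
            b = X.T @ (wts * y) - X.T @ s
            w = np.linalg.solve(A, b)
    return w
```

Here `sigma` trades off smoothness against fidelity to the zero-norm: large `sigma` makes the loss nearly quadratic near the origin, while small `sigma` caps the influence of large residuals, which is the source of the robustness to noisy input the abstract reports.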


Similar Articles

1. A Robust Regression Framework with Laplace Kernel-Induced Loss. Neural Comput. 2017 Nov;29(11):3014-3039. doi: 10.1162/neco_a_01002. Epub 2017 Aug 4.
2. Robust support vector regression in the primal. Neural Netw. 2008 Dec;21(10):1548-55. doi: 10.1016/j.neunet.2008.09.001. Epub 2008 Sep 10.
3. Sparse Covariance Matrix Estimation by DCA-Based Algorithms. Neural Comput. 2017 Nov;29(11):3040-3077. doi: 10.1162/neco_a_01012. Epub 2017 Sep 28.
4. Linearithmic time sparse and convex maximum margin clustering. IEEE Trans Syst Man Cybern B Cybern. 2012 Dec;42(6):1669-92. doi: 10.1109/TSMCB.2012.2197824. Epub 2012 May 23.
5. L1-norm kernel discriminant analysis via Bayes error bound optimization for robust feature extraction. IEEE Trans Neural Netw Learn Syst. 2014 Apr;25(4):793-805. doi: 10.1109/TNNLS.2013.2281428.
6. Low-rank structure learning via nonconvex heuristic recovery. IEEE Trans Neural Netw Learn Syst. 2013 Mar;24(3):383-96. doi: 10.1109/TNNLS.2012.2235082.
7. Lagrangian support vector regression via unconstrained convex minimization. Neural Netw. 2014 Mar;51:67-79. doi: 10.1016/j.neunet.2013.12.003. Epub 2013 Dec 11.
8. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization. Neural Netw. 2014 Sep;57:51-62. doi: 10.1016/j.neunet.2014.05.014. Epub 2014 Jun 2.
9. Learning sparse kernel classifiers for multi-instance classification. IEEE Trans Neural Netw Learn Syst. 2013 Sep;24(9):1377-89. doi: 10.1109/TNNLS.2013.2254721.
10. General Dimensional Multiple-Output Support Vector Regressions and Their Multiple Kernel Learning. IEEE Trans Cybern. 2015 Nov;45(11):2572-84. doi: 10.1109/TCYB.2014.2377016. Epub 2014 Dec 19.

Cited By

1. Relative Entropy of Correct Proximal Policy Optimization Algorithms with Modified Penalty Factor in Complex Environment. Entropy (Basel). 2022 Mar 22;24(4):440. doi: 10.3390/e24040440.
2. On Regularization Based Twin Support Vector Regression with Huber Loss. Neural Process Lett. 2021;53(1):459-515. doi: 10.1007/s11063-020-10380-y. Epub 2021 Jan 3.