
Guaranteed classification via regularized similarity learning.

Affiliation

College of Engineering, Mathematics and Physical Sciences, University of Exeter, EX4 4QF, UK

Publication

Neural Comput. 2014 Mar;26(3):497-522. doi: 10.1162/NECO_a_00556. Epub 2013 Dec 9.

DOI: 10.1162/NECO_a_00556
PMID: 24320848
Abstract

Learning an appropriate (dis)similarity function from the available data is a central problem in machine learning, since the success of many machine learning algorithms critically depends on the choice of a similarity function to compare examples. Despite many approaches to similarity metric learning that have been proposed, there has been little theoretical study on the links between similarity metric learning and the classification performance of the resulting classifier. In this letter, we propose a regularized similarity learning formulation associated with general matrix norms and establish their generalization bounds. We show that the generalization error of the resulting linear classifier can be bounded by the derived generalization bound of similarity learning. This shows that a good generalization of the learned similarity function guarantees a good classification of the resulting linear classifier. Our results extend and improve those obtained by Bellet, Habrard, and Sebban (2012). Due to the techniques dependent on the notion of uniform stability (Bousquet & Elisseeff, 2002), the bound obtained there holds true only for the Frobenius matrix-norm regularization. Our techniques using the Rademacher complexity (Bartlett & Mendelson, 2002) and its related Khinchin-type inequality enable us to establish bounds for regularized similarity learning formulations associated with general matrix norms, including sparse L1-norm and mixed (2,1)-norm.
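The formulation the abstract describes — learning a similarity function under a matrix-norm regularizer, where the choice of norm (Frobenius, sparse L1, mixed (2,1)) governs the generalization bound — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's exact objective: it uses a bilinear similarity K_M(x, x') = xᵀMx', a hinge-type loss on labeled pairs, and plain subgradient descent; the function and parameter names are hypothetical.

```python
import numpy as np

def learn_similarity(pairs, labels, dim, reg="fro", lam=0.1, lr=0.01, iters=500):
    """Learn M for the bilinear similarity K_M(x, x') = x^T M x'.

    pairs:  list of (x, x') vector pairs
    labels: +1 for similar pairs, -1 for dissimilar pairs
    reg:    "fro" for Frobenius-norm regularization, "l1" for entrywise L1
    """
    M = np.eye(dim)
    for _ in range(iters):
        grad = np.zeros((dim, dim))
        for (x, xp), y in zip(pairs, labels):
            margin = y * (x @ M @ xp)
            if margin < 1.0:                 # hinge loss is active for this pair
                grad -= y * np.outer(x, xp)  # subgradient of max(0, 1 - y K_M)
        grad /= len(pairs)
        if reg == "fro":
            grad += lam * 2.0 * M            # gradient of lam * ||M||_F^2
        else:
            grad += lam * np.sign(M)         # subgradient of lam * ||M||_1
        M -= lr * grad
    return M

# Toy usage: one aligned (similar) pair and one opposed (dissimilar) pair.
X = [(np.array([1.0, 0.0]), np.array([1.0, 0.0])),
     (np.array([0.0, 1.0]), np.array([0.0, -1.0]))]
y = [1, -1]
M = learn_similarity(X, y, dim=2)
```

Swapping `reg="fro"` for `reg="l1"` changes only the regularization term, mirroring the abstract's point that the analysis (unlike the uniform-stability argument, which is tied to the Frobenius norm) should cover general matrix norms.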


Similar Articles

1. Guaranteed classification via regularized similarity learning.
   Neural Comput. 2014 Mar;26(3):497-522. doi: 10.1162/NECO_a_00556. Epub 2013 Dec 9.
2. On the Impact of Regularization Variation on Localized Multiple Kernel Learning.
   IEEE Trans Neural Netw Learn Syst. 2018 Jun;29(6):2625-2630. doi: 10.1109/TNNLS.2017.2688365. Epub 2017 Apr 11.
3. Fractional norm regularization: learning with very few relevant features.
   IEEE Trans Neural Netw Learn Syst. 2013 Jun;24(6):953-63. doi: 10.1109/TNNLS.2013.2247417.
4. Rademacher chaos complexities for learning the kernel problem.
   Neural Comput. 2010 Nov;22(11):2858-86. doi: 10.1162/NECO_a_00028.
5. A Guaranteed Similarity Metric Learning Framework for Biological Sequence Comparison.
   IEEE/ACM Trans Comput Biol Bioinform. 2016 Sep-Oct;13(5):868-877. doi: 10.1109/TCBB.2015.2495186. Epub 2015 Oct 26.
6. Efficient dual approach to distance metric learning.
   IEEE Trans Neural Netw Learn Syst. 2014 Feb;25(2):394-406. doi: 10.1109/TNNLS.2013.2275170.
7. L1-norm locally linear representation regularization multi-source adaptation learning.
   Neural Netw. 2015 Sep;69:80-98. doi: 10.1016/j.neunet.2015.01.009. Epub 2015 Feb 25.
8. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks.
   Sensors (Basel). 2016 Jul 1;16(7):1021. doi: 10.3390/s16071021.
9. Theoretical bounds of generalization error for generalized extreme learning machine and random vector functional link network.
   Neural Netw. 2023 Jul;164:49-66. doi: 10.1016/j.neunet.2023.04.014. Epub 2023 Apr 20.
10. Multitask Classification Hypothesis Space With Improved Generalization Bounds.
   IEEE Trans Neural Netw Learn Syst. 2015 Jul;26(7):1468-79. doi: 10.1109/TNNLS.2014.2347054. Epub 2014 Aug 26.