

A scalable stagewise approach to large-margin multiclass loss-based boosting.

Publication information

IEEE Trans Neural Netw Learn Syst. 2014 May;25(5):1002-13. doi: 10.1109/TNNLS.2013.2282369.

Abstract

We present a scalable and effective classification model for training multiclass boosting on multiclass classification problems. A direct formulation of multiclass boosting, one that directly maximizes the multiclass margin, was introduced in the past. The major problem with that approach is its high computational complexity during training, which hampers its application to real-world problems. In this paper, we propose a simple, scalable, stagewise multiclass boosting method that also directly maximizes the multiclass margin. Our approach offers the following advantages: 1) it is simple and computationally efficient to train, speeding up training time by more than two orders of magnitude without sacrificing classification accuracy; and 2) like traditional AdaBoost, it is less sensitive to the choice of parameters and empirically demonstrates excellent generalization performance. Experimental results on challenging multiclass machine learning and vision tasks demonstrate that the proposed approach substantially improves the convergence rate and accuracy of the final visual detector at no additional computational cost compared with existing multiclass boosting methods.
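The abstract does not spell out the paper's exact formulation, but the general shape of stagewise multiclass boosting, fitting one weak learner per round against reweighted examples and combining them by weighted vote, can be sketched with a SAMME-style booster over decision stumps. All function names and the toy data below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def weighted_majority(y, w, n_classes):
    """Most probable class under the sample weights (ties -> lowest index)."""
    return int(np.bincount(y, weights=w, minlength=n_classes).argmax())

def fit_stump(X, y, w, n_classes):
    """Exhaustively search one-feature threshold stumps for the lowest
    weighted training error; each side of the split predicts one class."""
    best_err, best = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            cl = weighted_majority(y[left], w[left], n_classes)
            cr = weighted_majority(y[~left], w[~left], n_classes)
            pred = np.where(left, cl, cr)
            err = w[pred != y].sum()
            if err < best_err:
                best_err, best = err, (j, t, cl, cr)
    return best, best_err

def boost(X, y, n_classes, n_rounds):
    """Stagewise loop: each round fits one weak learner on the current
    weights, gives it a positive coefficient, and up-weights its mistakes
    (SAMME's multiclass reweighting rule)."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        (j, t, cl, cr), err = fit_stump(X, y, w, n_classes)
        if err >= 1.0 - 1.0 / n_classes:  # no better than chance: stop
            break
        alpha = np.log((1.0 - err) / max(err, 1e-12)) + np.log(n_classes - 1)
        ensemble.append((alpha, j, t, cl, cr))
        pred = np.where(X[:, j] <= t, cl, cr)
        w *= np.exp(alpha * (pred != y))
        w /= w.sum()
        if err == 0.0:
            break
    return ensemble

def predict(ensemble, X, n_classes):
    """Weighted vote: each stump adds its coefficient to the class it
    predicts; the ensemble outputs the highest-scoring class."""
    scores = np.zeros((len(X), n_classes))
    for alpha, j, t, cl, cr in ensemble:
        pred = np.where(X[:, j] <= t, cl, cr)
        scores[np.arange(len(X)), pred] += alpha
    return scores.argmax(axis=1)

# Toy 3-class problem: no single stump separates all three classes,
# but a few boosting rounds do.
X = np.array([[0, 0], [0, 0], [1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
y = np.array([0, 0, 1, 1, 2, 2])
model = boost(X, y, n_classes=3, n_rounds=4)
acc = (predict(model, X, 3) == y).mean()
```

The paper's method differs in what each stage optimizes: rather than SAMME's exponential-loss reweighting, it directly maximizes the multiclass margin, i.e., the gap between the score of the correct class and the highest score among the remaining classes.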

