
Incremental learning algorithm for large-scale semi-supervised ordinal regression.

Affiliations

College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 210006, China.

College of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, 210006, China.

Publication

Neural Netw. 2022 May;149:124-136. doi: 10.1016/j.neunet.2022.02.004. Epub 2022 Feb 11.

Abstract

As a special case of multi-class classification, ordinal regression (also known as ordinal classification) is a popular method for tackling multi-class problems whose samples are labeled with a set of ordered ranks. Semi-supervised ordinal regression (SSOR) is especially important for data-mining applications because semi-supervised learning can exploit unlabeled samples to train a high-quality model. However, to the best of our knowledge, training large-scale SSOR remains an open problem due to its complicated formulation and non-convexity. To address this challenging problem, in this paper we propose an incremental learning algorithm for SSOR (IL-SSOR), which can directly update the solution of SSOR based on the KKT conditions. More importantly, we analyze the finite convergence of IL-SSOR, which guarantees that SSOR converges to a local minimum under the framework of the concave-convex procedure. To the best of our knowledge, the proposed algorithm is the first efficient online learning algorithm for SSOR with a local-minimum convergence guarantee. Experimental results show that IL-SSOR achieves better generalization than other semi-supervised multi-class algorithms, and that, compared with other semi-supervised ordinal regression algorithms, it achieves similar generalization with less running time.
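The "samples marked by a set of ranks" view of ordinal regression can be sketched with a generic threshold model: a single score function plus a set of ordered cut points that partition the score axis into ranks. This is only an illustrative sketch of ordinal classification in general, not the paper's IL-SSOR algorithm; the function name, weights, and thresholds below are hypothetical.

```python
import numpy as np

def predict_ranks(X, w, thresholds):
    """Ordinal threshold model: score each sample with w.x, then assign
    rank k when the score falls between thresholds[k-1] and thresholds[k].
    `thresholds` must be sorted in increasing order."""
    scores = X @ w
    # searchsorted counts how many thresholds each score strictly exceeds,
    # which is exactly the 0-based rank under the threshold model.
    return np.searchsorted(thresholds, scores, side="left")

# Toy example: 1-D features, three ordered ranks split at scores 1.0 and 2.0.
X = np.array([[0.5], [1.5], [2.5]])
w = np.array([1.0])
thresholds = np.array([1.0, 2.0])
print(predict_ranks(X, w, thresholds).tolist())  # → [0, 1, 2]
```

Because the ranks are ordered, learning reduces to fitting one score function and the cut points, rather than independent one-vs-rest classifiers; semi-supervised variants such as SSOR additionally push the unlabeled samples' scores away from the cut points.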

