Jiangsu Engineering Center of Network Monitoring, Nanjing University of Information Science & Technology, Nanjing, PR China; Jiangsu Collaborative Innovation Center on Atmospheric Environment and Equipment Technology, PR China; School of Computer & Software, Nanjing University of Information Science & Technology, Nanjing, PR China; Department of Medical Biophysics, University of Western Ontario, London, Ontario, Canada.
Department of Computer Science, University of Central Arkansas, Conway, AR, USA.
Neural Netw. 2015 Jul;67:140-50. doi: 10.1016/j.neunet.2015.03.013. Epub 2015 Apr 6.
ν-Support Vector Regression (ν-SVR) is an effective regression learning algorithm with the advantage that a single parameter ν controls the number of support vectors and adjusts the width of the tube automatically. However, compared with ν-Support Vector Classification (ν-SVC) (Schölkopf et al., 2000), ν-SVR introduces an additional linear term into its objective function. As a result, directly applying the accurate on-line ν-SVC algorithm (AONSVM) to ν-SVR does not yield an effective initial solution, and this is the main challenge in designing an incremental ν-SVR learning algorithm. To overcome this challenge, we propose a special procedure called initial adjustments. This procedure adjusts the weights of ν-SVC according to the Karush-Kuhn-Tucker (KKT) conditions in order to prepare an initial solution for incremental learning. Combining the initial adjustments with the two steps of AONSVM yields an exact and effective incremental ν-SVR learning algorithm (INSVR). Theoretical analysis proves the existence of the three key inverse matrices, which are the cornerstones of the three steps of INSVR (including the initial adjustments). Experiments on benchmark datasets demonstrate that INSVR avoids infeasible updating paths as far as possible and successfully converges to the optimal solution. The results also show that INSVR is faster than batch ν-SVR algorithms with either cold or warm starts.
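The "additional linear term" can be made concrete from the standard ν-SVR dual. As a sketch based on the formulation in Schölkopf et al. (2000), which the abstract cites (not on this paper's own derivation), the dual over ℓ training pairs (x_i, y_i) with kernel k and regularization parameter C is:

```latex
\max_{\alpha,\alpha^{*}}\;
  -\frac{1}{2}\sum_{i,j=1}^{\ell}
     (\alpha_i-\alpha_i^{*})(\alpha_j-\alpha_j^{*})\,k(x_i,x_j)
  \;+\; \sum_{i=1}^{\ell} y_i\,(\alpha_i-\alpha_i^{*})
\quad\text{s.t.}\quad
\sum_{i=1}^{\ell}(\alpha_i-\alpha_i^{*})=0,\;
0 \le \alpha_i,\alpha_i^{*} \le \tfrac{C}{\ell},\;
\sum_{i=1}^{\ell}(\alpha_i+\alpha_i^{*}) \le C\nu .
```

The linear term \(\sum_i y_i(\alpha_i-\alpha_i^{*})\) has no counterpart in the ν-SVC dual, whose objective is purely quadratic in α; this is why a ν-SVC solution cannot serve directly as a feasible initial solution for incremental ν-SVR, motivating the initial-adjustments procedure.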