Chuang Chen-Chia, Su Shun-Feng, Jeng Jin-Tsong, Hsiao Chih-Ching
Dept. of Electron. Eng., Hwa-Hsia Coll. of Technol. and Commerce, Taipei, Taiwan.
IEEE Trans Neural Netw. 2002;13(6):1322-30. doi: 10.1109/TNN.2002.804227.
Support vector regression (SVR) employs the support vector machine (SVM) to tackle problems of function approximation and regression estimation, and has been shown to be robust against noise. However, when the parameters used in SVR are improperly selected, overfitting may still occur, and the selection of those parameters is not straightforward. Moreover, in SVR, outliers may be taken as support vectors, and their inclusion can lead to severe overfitting. In this paper, a novel regression approach, termed the robust support vector regression (RSVR) network, is proposed to enhance the robustness of SVR. In this approach, traditional robust learning methods are employed to improve the learning performance for any selected parameters. The simulation results show that RSVR improves the performance of the learned systems in all cases. Moreover, even when training lasted for a long period, the testing errors did not increase; in other words, the overfitting phenomenon is indeed suppressed.
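The claim that outliers tend to be absorbed as support vectors can be illustrated with a minimal sketch. The snippet below is not the authors' RSVR network; it merely uses scikit-learn's standard epsilon-SVR (an assumed stand-in implementation) to show that an injected outlier, lying far outside the epsilon-insensitive tube, ends up among the support vectors and thus directly shapes the fitted function.

```python
import numpy as np
from sklearn.svm import SVR

# Noisy samples of a smooth target function.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + 0.05 * rng.standard_normal(50)

# Inject a single gross outlier at index 10.
y[10] += 3.0

# Standard epsilon-SVR; C and epsilon are illustrative choices, not tuned.
model = SVR(kernel="rbf", C=100.0, epsilon=0.05).fit(X, y)

# Any point outside the epsilon-tube has a nonzero dual coefficient,
# so the outlier is retained as a support vector.
print(10 in model.support_)
```

Because the epsilon-insensitive loss grows linearly outside the tube, the outlier's dual coefficient is pushed to the bound C, pulling the regression surface toward it; this is the overfitting mechanism the paper's robust learning stage is designed to suppress.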