TY - JOUR

T1 - Global convergence of SMO algorithm for support vector regression

AU - Takahashi, Norikazu

AU - Guo, Jun

AU - Nishi, Tetsuo

N1 - Funding Information:
Manuscript received November 18, 2006; revised July 27, 2007 and October 12, 2007; accepted October 23, 2007. This work was presented in part at the International Joint Conference on Neural Networks, Vancouver, BC, Canada, July 16–21, 2006. This work was supported in part by an Okawa Foundation research grant, the Ministry of Education, Culture, Sports, Science and Technology, Grant-in-Aid for Japan Society for the Promotion of Science (JSPS) Research Fellows 18-9473, and the 21st Century Center of Excellence (COE) Program “Reconstruction of Social Infrastructure Related to Information Science and Electrical Engineering.” N. Takahashi and J. Guo are with the Department of Computer Science and Communication Engineering, Kyushu University, Fukuoka 819-0395, Japan (e-mail: norikazu@csce.kyushu-u.ac.jp; guojun@kairo.csce.kyushu-u.ac.jp).

PY - 2008

Y1 - 2008

N2 - Global convergence of the sequential minimal optimization (SMO) algorithm for support vector regression (SVR) is studied in this paper. Given ℓ training samples, SVR is formulated as a convex quadratic programming (QP) problem with ℓ pairs of variables. We prove that if two pairs of variables violating the optimality condition are chosen for update in each step and subproblems are solved in a certain way, then the SMO algorithm always stops within a finite number of iterations after finding an optimal solution. Also, efficient implementation techniques for the SMO algorithm are presented and compared experimentally with other SMO algorithms.

AB - Global convergence of the sequential minimal optimization (SMO) algorithm for support vector regression (SVR) is studied in this paper. Given ℓ training samples, SVR is formulated as a convex quadratic programming (QP) problem with ℓ pairs of variables. We prove that if two pairs of variables violating the optimality condition are chosen for update in each step and subproblems are solved in a certain way, then the SMO algorithm always stops within a finite number of iterations after finding an optimal solution. Also, efficient implementation techniques for the SMO algorithm are presented and compared experimentally with other SMO algorithms.

KW - Convergence

KW - Quadratic programming (QP)

KW - Sequential minimal optimization (SMO)

KW - Support vector regression (SVR)

UR - http://www.scopus.com/inward/record.url?scp=49149114060&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=49149114060&partnerID=8YFLogxK

U2 - 10.1109/TNN.2007.915116

DO - 10.1109/TNN.2007.915116

M3 - Article

C2 - 18541498

AN - SCOPUS:49149114060

SN - 1045-9227

VL - 19

SP - 971

EP - 982

JO - IEEE Transactions on Neural Networks

JF - IEEE Transactions on Neural Networks

IS - 6

ER -