2.3.4. Support Vector Regression Model


In 1992, Vapnik proposed a supervised algorithm named the support vector machine (SVM), which was regarded as a generalized classifier [33]. Initially, the SVM algorithm was widely used to solve classification problems under the name support vector classification (SVC). Later, Drucker [33] extended it to solve nonlinear regression problems under the name support vector regression (SVR). A hyperplane supported by a minimum margin line and a maximum margin line, together with the support vectors, forms the conceptual basis of SVR [31,34,35]. A schematic diagram of one-dimensional support vector regression is shown in Figure 3. Consider an available dataset with n samples, where x is the input vector and y is the corresponding response variable. The SVR generates a regression function to predict the y variable. This process can be expressed [31,33,34,35] by
y = f(x) = ω·φ(x) + b
where x is the input vector of the dataset; ω and b are the parameter vectors; and φ(x) is the mapping function introduced by the SVR. In the case of a multidimensional dataset, y can take an unlimited number of possible predicted values, so a tolerance limit is introduced to formulate the optimization problem [31,34,35], which can be expressed as
minimize    (1/2)‖ω‖² + C Σ (ξi + ξi*)
subject to  yi − (ω·φ(xi) + b) ≤ ε + ξi
            (ω·φ(xi) + b) − yi ≤ ε + ξi*
            ξi, ξi* ≥ 0,  i = 1, …, n
where ε defines the minimum and maximum margin lines (the sensitivity zone) of the hyperplane; ξi and ξi* are the slack variables that measure the training errors lying outside the ε zone; and C is a positive regularization constant. The slack variables are used to penalize errors that fall outside the sensitive zone of the hyperplane. Using Lagrange multipliers, the optimization problem can be rewritten in its dual nonlinear form as the following equation [31,34,35],
maximize    −(1/2) Σi Σj (ai − ai*)(aj − aj*) K(xi, xj) − ε Σi (ai + ai*) + Σi yi (ai − ai*)
subject to  Σi (ai − ai*) = 0,  0 ≤ ai, ai* ≤ C
where ai and ai* are the Lagrange multipliers associated with the ε-insensitive constraints, and K is the kernel function. The kernel function uses the kernel trick to solve nonlinear problems with a linear classifier. Linear, radial basis function (RBF), polynomial, and sigmoid kernels are generally used in SVR models [31,34,35]. After preliminary tests of the other kernel functions, the current study chose the RBF kernel to optimize the SVR during the simulation (the corresponding hyperparameters C, ε, and γ also appear in the code sketches at the end of this section). The RBF kernel function can be expressed as the following equation,
K(xi, xj) = exp(−γ‖xi − xj‖²)
where γ is the structural parameter of the RBF kernel function. Finally, the decision function of the SVR can be expressed as
f(x) = Σi (ai − ai*) K(xi, x) + b,  i = 1, …, n
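To make the roles of C, ε, and γ concrete, the following is a minimal sketch of fitting an ε-insensitive SVR with an RBF kernel using scikit-learn's SVR class. The synthetic one-dimensional data, variable names, and hyperparameter values are illustrative assumptions and are not the dataset or settings used in this study.

```python
import numpy as np
from sklearn.svm import SVR

# Illustrative one-dimensional dataset (not the study's data):
# a noisy sine curve, analogous to the 1-D case sketched in Figure 3.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 6.0, size=(200, 1)), axis=0)   # input vector x
y = np.sin(x).ravel() + rng.normal(0.0, 0.1, size=200)      # response variable y

# epsilon -> width of the sensitivity zone ε, C -> penalty on the slack variables,
# gamma  -> structural parameter γ of the RBF kernel K(xi, xj) = exp(-γ‖xi − xj‖²).
model = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma=0.5)
model.fit(x, y)

# Predict the response with the learned regression function f(x).
x_new = np.linspace(0.0, 6.0, 5).reshape(-1, 1)
print(model.predict(x_new))
```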
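The kernel-selection step described above (testing linear, polynomial, sigmoid, and RBF kernels before choosing RBF) could be prototyped with a short comparison loop such as the one below; the data, hyperparameters, and cross-validation scheme are assumptions for illustration, not the procedure reported in the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Illustrative data only (not the study's dataset).
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 6.0, size=(200, 1))
y = np.sin(x).ravel() + rng.normal(0.0, 0.1, size=200)

# Compare the candidate kernel functions named in the text.
for kernel in ("linear", "poly", "sigmoid", "rbf"):
    model = SVR(kernel=kernel, C=10.0, epsilon=0.1, gamma="scale")
    score = cross_val_score(model, x, y, cv=5, scoring="r2").mean()
    print(f"{kernel:>8s}: mean cross-validated R2 = {score:.3f}")
```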
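The decision function above can also be evaluated directly from a fitted model's support vectors, dual coefficients (ai − ai*), and bias b. The sketch below, again using assumed synthetic data, recomputes f(x) with the RBF kernel and compares it with the library's own prediction as a sanity check.

```python
import numpy as np
from sklearn.svm import SVR

# Small illustrative fit (assumed synthetic data, as in the sketches above).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 6.0, size=(200, 1))
y = np.sin(x).ravel() + rng.normal(0.0, 0.1, size=200)
gamma = 0.5
model = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma=gamma).fit(x, y)

# Recompute f(x) = Σ (ai − ai*) K(xi, x) + b for one query point.
x_query = np.array([[2.5]])
sv = model.support_vectors_                      # support vectors xi
dual = model.dual_coef_.ravel()                  # (ai − ai*) for each support vector
k = np.exp(-gamma * np.sum((sv - x_query) ** 2, axis=1))   # K(xi, x_query)

f_manual = float(np.dot(dual, k) + model.intercept_[0])    # decision function value
f_library = float(model.predict(x_query)[0])               # library prediction
print(f_manual, f_library)                                 # should agree numerically
```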