Machine Learning. The Support Vector Machine (SVM). L.V. Utkin, Peter the Great St. Petersburg Polytechnic University



The slides draw on materials by Andrew Moore, Lior Rokach, Rong Jin, Luis F. Teixeira, Alexander Statnikov, and others.

“New methods always look better than old ones. Neural nets are better than logistic regression,
support vector machines are better than neural nets, etc.” - Brad Efron

A brief history: linear classifiers have been studied since the 1950s; the modern SVM was introduced in 1992 (Boser, Guyon, and Vapnik).
Problem statement
Objects are described by vectors in R^m. The training set consists of pairs (x_i, y_i), i = 1, ..., n, where x_i ∈ R^m and y_i ∈ {−1, +1}.
How should the two classes be separated?

A linear classifier:
f(x) = sign(∑_{i=1}^m w_i x_i + b) = sign(w^T x + b)
(Figures: several hyperplanes that all separate the same training set.)
Which separating hyperplane is the best?

The best hyperplane is the one with the maximum margin. The margin width equals 2/‖w‖, where ‖w‖ = √(w_1² + ... + w_m²). Scale w and b so that the nearest points of each class lie on the margin boundaries:
y_i = +1 : w^T x_i + b ≥ +1
y_i = −1 : w^T x_i + b ≤ −1
Maximizing the margin is then equivalent to the quadratic program
(1/2) ‖w‖² = (1/2)(w_1² + ... + w_m²) → min_w
subject to y_i (w^T x_i + b) ≥ 1, i = 1, ..., n.
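A small numeric illustration (the toy data set and the hyperplane below are illustrative choices, not from the slides): for a feasible, correctly scaled (w, b), every training point satisfies y_i (w^T x_i + b) ≥ 1, and the margin width is 2/‖w‖.

```python
import math

# Hypothetical toy 2-D training set: two linearly separable classes.
X = [(2.0, 2.0), (2.0, 3.0), (-2.0, -2.0), (-3.0, -2.0)]
y = [+1, +1, -1, -1]

# A candidate hyperplane w^T x + b = 0, scaled so that the nearest
# points satisfy y_i (w^T x_i + b) = 1 exactly.
w = (0.25, 0.25)
b = 0.0

def dot(u, v):
    return sum(a * c for a, c in zip(u, v))

# Feasibility: every point must satisfy y_i (w^T x_i + b) >= 1.
margins = [yi * (dot(w, xi) + b) for xi, yi in zip(X, y)]
print(margins)                    # [1.0, 1.25, 1.0, 1.25]
print(all(m >= 1 for m in margins))  # True

# Margin width 2 / ||w||.
norm_w = math.sqrt(dot(w, w))
print(2 / norm_w)                 # ≈ 5.657, i.e. 4·√2
```

The points with margin exactly 1 are the candidates for support vectors.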

The Lagrangian:
L(w, b, α) = (1/2) ‖w‖² − ∑_{i=1}^n α_i (y_i (w^T x_i + b) − 1)
where α_i ≥ 0, i = 1, ..., n, are the Lagrange multipliers. Setting ∂L/∂w = 0 and ∂L/∂b = 0 gives
w = ∑_{i=1}^n α_i y_i x_i,   ∑_{i=1}^n α_i y_i = 0.
Substituting back yields the dual problem:
max_α ( ∑_{i=1}^n α_i − (1/2) ∑_{i=1}^n ∑_{j=1}^n α_i α_j y_i y_j x_i^T x_j )
subject to α_i ≥ 0, i = 1, ..., n, and ∑_{i=1}^n α_i y_i = 0.
Solution: α_i > 0 only for the support vectors, and the classifier is f(x) = sign(∑_{i=1}^n α_i y_i x_i^T x + b).
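A minimal worked example of recovering w from the dual variables (the two-point data set is hypothetical): with x_1 = (1, 1), y_1 = +1 and x_2 = (−1, −1), y_2 = −1, the constraint ∑ α_i y_i = 0 forces α_1 = α_2 = a, symmetry gives b = 0, and y_1 (w^T x_1 + b) = 1 gives 4a = 1.

```python
# Hypothetical toy problem: both points are support vectors.
alpha = [0.25, 0.25]
X = [(1.0, 1.0), (-1.0, -1.0)]
y = [+1, -1]
b = 0.0

# w = sum_i alpha_i * y_i * x_i  (the stationarity condition dL/dw = 0).
w = [sum(a * yi * xi[k] for a, yi, xi in zip(alpha, y, X))
     for k in range(2)]
print(w)  # [0.5, 0.5]

def g(x):
    return sum(wk * xk for wk, xk in zip(w, x)) + b

# Both support vectors lie exactly on the margin boundaries:
print([yi * g(xi) for xi, yi in zip(X, y)])  # [1.0, 1.0]
```

The margin width here is 2/‖w‖ = 2√2, the full distance between the two points, as expected.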


The non-separable case: allow margin violations via slack variables ξ_i:
(1/2) ‖w‖² + C ∑_{i=1}^n ξ_i → min_w
subject to y_i (w^T x_i + b) ≥ 1 − ξ_i, ξ_i ≥ 0, i = 1, ..., n.
C is a parameter that trades off the margin width against the training errors. The dual problem:
max_α ( ∑_{i=1}^n α_i − (1/2) ∑_{i=1}^n ∑_{j=1}^n α_i α_j y_i y_j x_i^T x_j )
subject to 0 ≤ α_i ≤ C, i = 1, ..., n, and ∑_{i=1}^n α_i y_i = 0.
Solution: the classifier has the same form as in the separable case.

Nonlinear SVM: when the classes are not linearly separable in the original space X, map the data into a higher-dimensional feature space by a mapping Φ.

Example: map each point (x_1, x_2) into the 5-dimensional vector
(x_1, x_2, x_1 · x_2, x_1², x_2²),
where classes that overlap in the plane may become linearly separable. The mapping Φ : R² → R⁵, Φ(x_1, x_2) = (x_1, x_2, x_1 · x_2, x_1², x_2²), makes the linear classifier in R⁵ nonlinear in R²:
f(x) = sign(w^T Φ(x) + b).
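To see why such a map helps, here is a sketch (the data points and the weight vector are illustrative choices, not from the slides): points inside a circle cannot be separated from points outside it by any line in R², but a linear function of Φ(x) separates them easily.

```python
# Hypothetical data: class +1 inside the unit circle, class -1 outside.
inside  = [(0.5, 0.0), (0.0, 0.5), (-0.4, 0.3)]
outside = [(2.0, 0.0), (0.0, -2.0), (1.5, 1.5)]

def phi(x1, x2):
    # The feature map from the slide: R^2 -> R^5.
    return (x1, x2, x1 * x2, x1 ** 2, x2 ** 2)

# In R^5 the linear function w^T z + b with w = (0, 0, 0, -1, -1),
# b = 1 computes 1 - x1^2 - x2^2: positive inside the unit circle,
# negative outside -- a linear separator in feature space.
w = (0.0, 0.0, 0.0, -1.0, -1.0)
b = 1.0

def f(x):
    z = phi(*x)
    s = sum(wk * zk for wk, zk in zip(w, z)) + b
    return 1 if s > 0 else -1

print([f(p) for p in inside])   # [1, 1, 1]
print([f(p) for p in outside])  # [-1, -1, -1]
```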

The decision function depends on the data only through inner products in the feature space:
g(x) = w^T Φ(x) + b = ∑_{i=1}^n α_i y_i Φ(x_i)^T Φ(x) + b
Define the kernel function K(x_i, x_j) = Φ(x_i)^T Φ(x_j).
Kernel Trick
Wherever x_i^T x_j appears in the SVM formulas, replace it with K(x_i, x_j); the images Φ(x_i) never need to be computed explicitly, only the kernel values K(x_i, x_j).

Examples of kernels:
1. Linear: K(x_i, x_j) = x_i^T x_j
2. Polynomial: K(x_i, x_j) = (1 + x_i^T x_j)^p
3. Gaussian (RBF): K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))
4. Sigmoid: K(x_i, x_j) = tanh(β_1 x_i^T x_j + β_0)
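A quick numeric check that a kernel really is an inner product in some feature space (a standard identity; the random test points are arbitrary): the degree-2 polynomial kernel (1 + x^T z)² in R² equals Φ(x)^T Φ(z) for an explicit 6-dimensional map with √2-weighted cross terms.

```python
import math
import random

def k_poly(x, z, p=2):
    # Polynomial kernel from the slide: K(x, z) = (1 + x^T z)^p.
    return (1 + sum(a * b for a, b in zip(x, z))) ** p

def phi(x):
    # Explicit feature map whose inner product reproduces the
    # degree-2 polynomial kernel in R^2 (note the sqrt(2) weights).
    x1, x2 = x
    r2 = math.sqrt(2.0)
    return (1.0, r2 * x1, r2 * x2, x1 ** 2, x2 ** 2, r2 * x1 * x2)

random.seed(0)
ok = True
for _ in range(100):
    x = (random.uniform(-2, 2), random.uniform(-2, 2))
    z = (random.uniform(-2, 2), random.uniform(-2, 2))
    lhs = k_poly(x, z)
    rhs = sum(a * b for a, b in zip(phi(x), phi(z)))
    ok = ok and abs(lhs - rhs) < 1e-9
print(ok)  # True: the kernel is an inner product in R^6
```

The kernel evaluation costs O(m) per pair, while working with Φ explicitly would cost O(m²) coordinates for degree 2 — this gap is the point of the kernel trick.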
The dual problem with a kernel:
max_α ( ∑_{i=1}^n α_i − (1/2) ∑_{i=1}^n ∑_{j=1}^n α_i α_j y_i y_j K(x_i, x_j) )
subject to 0 ≤ α_i ≤ C, i = 1, ..., n, and ∑_{i=1}^n α_i y_i = 0.
Decision function:
g(x) = ∑_{i=1}^n α_i y_i K(x_i, x) + b,   f(x) = sign(g(x)).
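As a sketch (assuming scikit-learn is available; the random data, C, and γ are illustrative), the decision function g(x) = ∑ α_i y_i K(x_i, x) + b can be reassembled by hand from a fitted model: scikit-learn's RBF kernel exp(−γ‖x_i − x‖²) matches the Gaussian kernel above with γ = 1/(2σ²), and `dual_coef_` stores the products α_i y_i for the support vectors.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical data: two Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)

gamma = 0.5
clf = SVC(kernel="rbf", C=1.0, gamma=gamma).fit(X, y)

def g(x):
    # g(x) = sum_i (alpha_i y_i) K(x_i, x) + b over the support vectors.
    k = np.exp(-gamma * np.sum((clf.support_vectors_ - x) ** 2, axis=1))
    return clf.dual_coef_[0] @ k + clf.intercept_[0]

x_test = np.array([0.3, -0.7])
manual = g(x_test)
builtin = clf.decision_function(x_test.reshape(1, -1))[0]
print(abs(manual - builtin) < 1e-6)  # True: the formula matches
```

Only the support vectors enter the sum; for all other training points α_i = 0.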
Training procedure:
1. Choose a kernel function K.
2. Choose a value of the parameter C.
3. Choose the kernel parameters (σ, p, β_0, β_1).
4. Solve the (dual) quadratic program to obtain the α_i.
5. Construct the decision function g.
6. Estimate the quality of the classifier, e.g. by cross-validation.
7. If the quality is unsatisfactory, return to step 2.
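Steps 2–7 can be sketched as a grid search with cross-validation (assuming scikit-learn is available; the kernel is fixed to RBF, and the data and candidate grids are illustrative — sklearn's `gamma` plays the role of 1/(2σ²)).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Hypothetical data: two overlapping blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.5, 1, (30, 2)), rng.normal(1.5, 1, (30, 2))])
y = np.array([-1] * 30 + [1] * 30)

# Steps 2-3: candidate values of C and of the kernel parameter.
param_grid = {"C": [0.1, 1.0, 10.0], "gamma": [0.01, 0.1, 1.0]}

# Steps 4-7: for every (C, gamma) pair, fit the SVM (solve the dual QP)
# and score it by 5-fold cross-validation; keep the best pair.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5).fit(X, y)
print(search.best_params_)
print(round(search.best_score_, 3))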

(Figures: decision boundaries of a Gaussian-kernel SVM for several values of σ.)
SVM as loss minimization
The loss of the classifier g on an example (x_i, y_i) is the hinge loss:
L(x_i, y_i) = max(0, 1 − y_i g(x_i)).
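The hinge loss is zero for points classified correctly with margin at least 1 and grows linearly otherwise (the sample margin values below are illustrative):

```python
def hinge(margin):
    # Hinge loss as a function of the functional margin y_i * g(x_i).
    return max(0.0, 1.0 - margin)

# Correct with room to spare, exactly on the margin boundary,
# inside the margin, and misclassified:
print([hinge(m) for m in (2.0, 1.0, 0.5, -1.0)])  # [0.0, 0.0, 0.5, 2.0]
```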
The unconstrained problem: minimize the average hinge loss
(1/n) ∑_{i=1}^n max(0, 1 − y_i g(x_i; w, b)) → min_{w,b}

The regularized formulation:
(1/2) ‖w‖² + C ∑_{i=1}^n max(0, 1 − y_i g(x_i; w, b)) → min_{w,b}
Writing ξ_i = max(0, 1 − y_i g(x_i; w, b)) recovers the constrained soft-margin problem:
⇓
ξ_i ≥ 1 − y_i g(x_i; w, b), ξ_i ≥ 0, i = 1, ..., n.
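Because the regularized hinge objective is unconstrained and convex, it can be attacked directly with subgradient descent; a minimal sketch (the toy data, step size, and iteration count are illustrative choices, not part of the slides):

```python
# Subgradient descent for 0.5*||w||^2 + C * sum_i max(0, 1 - y_i (w.x_i + b)).
X = [(1.5, 1.0), (2.0, 2.5), (1.0, 2.0),
     (-1.5, -1.0), (-2.0, -2.5), (-1.0, -2.0)]
y = [+1, +1, +1, -1, -1, -1]
C = 1.0
w = [0.0, 0.0]
b = 0.0
lr = 0.01  # fixed step size (illustrative)

for step in range(2000):
    gw = list(w)  # gradient of the regularizer 0.5*||w||^2 is w
    gb = 0.0
    for xi, yi in zip(X, y):
        if yi * (w[0] * xi[0] + w[1] * xi[1] + b) < 1:  # hinge active
            gw[0] -= C * yi * xi[0]
            gw[1] -= C * yi * xi[1]
            gb -= C * yi
    w = [w[0] - lr * gw[0], w[1] - lr * gw[1]]
    b -= lr * gb

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for x1, x2 in X]
print(preds == y)  # True: the toy sample is classified correctly
```

This is the idea behind large-scale linear SVM solvers such as Pegasos, which replace the full-batch subgradient with a stochastic one.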

Properties of SVM
In practice the user must choose the kernel, its parameters, and the constant C; these choices strongly affect the quality of the classifier.
SVM in R
https://cran.r-project.org/web/views/MachineLearning.html
Package ‘kernlab’: function ksvm. Package ‘e1071’: function svm. Package ‘wsvm’: function wsvm.
See A. Karatzoglou, D. Meyer, K. Hornik, “Support Vector Machines in R”.