Andrew Ng
Linear regression with one variable
Model representation
Machine Learning
[Figure: Housing prices (Portland, OR): scatter plot of Price (in 1000s of dollars) vs. Size (feet²)]
Supervised Learning: the "right answer" is given for each example in the data.
Regression Problem: predict a real-valued output.
Notation:
m = number of training examples
x's = "input" variables / features
y's = "output" variable / "target" variable
Training set of housing prices (Portland, OR):

Size in feet² (x)    Price ($) in 1000's (y)
2104                 460
1416                 232
1534                 315
852                  178
…                    …
Training set → Learning algorithm → h
Size of house → h → Estimated price

How do we represent h?
h_θ(x) = θ₀ + θ₁x

Linear regression with one variable.
(Univariate linear regression)
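The hypothesis above can be sketched directly; a minimal illustration (not code from the lecture), with made-up parameter values:

```python
def h(theta0, theta1, x):
    # Univariate linear regression hypothesis: a straight line in x.
    return theta0 + theta1 * x

# Hypothetical parameters theta0 = 50, theta1 = 0.1:
# a 2104 ft^2 house is predicted to cost 260.4 (in 1000s of dollars).
price = h(50.0, 0.1, 2104)
```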
Cost function
Machine Learning
Linear regression with one variable
Training set:

Size in feet² (x)    Price ($) in 1000's (y)
2104                 460
1416                 232
1534                 315
852                  178
…                    …

Hypothesis: h_θ(x) = θ₀ + θ₁x
θᵢ's: parameters

How do we choose the θᵢ's?
[Figure: three example plots of h_θ(x) for different choices of θ₀ and θ₁, giving lines with different intercepts and slopes]
Idea: choose θ₀, θ₁ so that h_θ(x) is close to y for our training examples (x, y):

minimize over θ₀, θ₁:  (1/2m) Σᵢ₌₁ᵐ (h_θ(x⁽ⁱ⁾) - y⁽ⁱ⁾)²
Cost function intuition I
Machine Learning
Linear regression with one variable
Hypothesis:  h_θ(x) = θ₀ + θ₁x

Parameters:  θ₀, θ₁

Cost function:
J(θ₀, θ₁) = (1/2m) Σᵢ₌₁ᵐ (h_θ(x⁽ⁱ⁾) - y⁽ⁱ⁾)²

Goal:  minimize J(θ₀, θ₁) over θ₀, θ₁

Simplified (set θ₀ = 0):
h_θ(x) = θ₁x
J(θ₁) = (1/2m) Σᵢ₌₁ᵐ (θ₁x⁽ⁱ⁾ - y⁽ⁱ⁾)²
Goal:  minimize J(θ₁) over θ₁
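The cost function translates directly into code; a minimal sketch on a toy training set (the data below is illustrative, not from the slides):

```python
def cost(theta0, theta1, xs, ys):
    # J(theta0, theta1) = (1/2m) * sum of squared prediction errors
    m = len(xs)
    return sum((theta0 + theta1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

# Toy data lying exactly on the line y = x:
xs, ys = [1, 2, 3], [1, 2, 3]
perfect = cost(0, 1.0, xs, ys)   # theta1 = 1 fits exactly, so J = 0.0
misfit  = cost(0, 0.5, xs, ys)   # theta1 = 0.5 misfits: J = 3.5/6, about 0.583
```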
[Figure sequence: for each trial value of θ₁, the left panel plots h_θ(x) over the training data (for fixed θ₁, this is a function of x) and the right panel marks the resulting point on the curve J(θ₁) (a function of the parameter θ₁)]
Cost function intuition II
Machine Learning
Linear regression with one variable
Hypothesis:  h_θ(x) = θ₀ + θ₁x
Parameters:  θ₀, θ₁
Cost function:  J(θ₀, θ₁) = (1/2m) Σᵢ₌₁ᵐ (h_θ(x⁽ⁱ⁾) - y⁽ⁱ⁾)²
Goal:  minimize J(θ₀, θ₁) over θ₀, θ₁
[Figure sequence: left, h_θ(x) over the housing data (Price ($) in 1000's vs. Size in feet²); for fixed θ₀, θ₁ this is a function of x. Right, surface and contour plots of J(θ₀, θ₁), a function of the parameters θ₀, θ₁; each hypothesis line on the left corresponds to one point on the contours]
Gradient descent
Machine Learning
Linear regression with one variable
Have some function J(θ₀, θ₁).
Want: min over θ₀, θ₁ of J(θ₀, θ₁).

Outline:
• Start with some θ₀, θ₁.
• Keep changing θ₀, θ₁ to reduce J(θ₀, θ₁), until we end up at a minimum.
[Figure: two 3D surface plots of J(θ₀, θ₁); starting gradient descent from different initial points can lead to different local minima]
Gradient descent algorithm:
repeat until convergence {
  θⱼ := θⱼ - α · ∂/∂θⱼ J(θ₀, θ₁)   (simultaneously for j = 0 and j = 1)
}

Correct: simultaneous update
temp0 := θ₀ - α · ∂/∂θ₀ J(θ₀, θ₁)
temp1 := θ₁ - α · ∂/∂θ₁ J(θ₀, θ₁)
θ₀ := temp0
θ₁ := temp1

Incorrect:
temp0 := θ₀ - α · ∂/∂θ₀ J(θ₀, θ₁)
θ₀ := temp0
temp1 := θ₁ - α · ∂/∂θ₁ J(θ₀, θ₁)   (uses the already-updated θ₀)
θ₁ := temp1
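The simultaneous update can be made explicit in code; a sketch, where the two partial-derivative functions are hypothetical callbacks supplied by the caller:

```python
def gradient_step(theta0, theta1, alpha, dJ_d0, dJ_d1):
    # Evaluate BOTH partial derivatives at the current (theta0, theta1)
    # before assigning either parameter: the "correct" simultaneous update.
    temp0 = theta0 - alpha * dJ_d0(theta0, theta1)
    temp1 = theta1 - alpha * dJ_d1(theta0, theta1)
    return temp0, temp1
```

For example, on J(θ₀, θ₁) = θ₀² + θ₁² (partials 2θ₀ and 2θ₁), one step from (1, 1) with α = 0.1 returns (0.8, 0.8).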
Gradient descent intuition
Machine Learning
Linear regression with one variable
Gradient descent algorithm (one parameter, for intuition):
θ₁ := θ₁ - α · (d/dθ₁) J(θ₁)

If the slope (d/dθ₁) J(θ₁) is positive, θ₁ decreases; if negative, θ₁ increases. Either way, θ₁ moves toward the minimum.
If α is too small, gradient descent can be slow.
If α is too large, gradient descent can overshoot the minimum. It may fail to converge, or even diverge.
What if the current value of θ₁ is already at a local optimum? The slope there is zero, so the update θ₁ := θ₁ - α · 0 leaves θ₁ unchanged.
Gradient descent can converge to a local minimum even with the learning rate α held fixed.
As we approach a local minimum, the derivative shrinks, so gradient descent automatically takes smaller steps. There is no need to decrease α over time.
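Both effects of α can be seen on a toy objective J(θ) = θ² (derivative 2θ); this example is illustrative only, not from the slides:

```python
def descend(alpha, theta=1.0, iters=10):
    # Repeated fixed-learning-rate updates: theta := theta - alpha * dJ/dtheta
    path = [theta]
    for _ in range(iters):
        theta = theta - alpha * 2 * theta
        path.append(theta)
    return path

small = descend(alpha=0.1)  # theta shrinks by a factor 0.8 per step: converges
large = descend(alpha=1.5)  # theta is multiplied by -2 per step: diverges
```

With the small α the steps shrink automatically as the slope shrinks, even though α is fixed; with the large α each step overshoots the minimum by more than the last.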
Gradient descent for linear regression
Machine Learning
Linear regression with one variable
Gradient descent algorithm:
repeat until convergence {
  θⱼ := θⱼ - α · ∂/∂θⱼ J(θ₀, θ₁)   (for j = 0 and j = 1)
}

Linear regression model:
h_θ(x) = θ₀ + θ₁x
J(θ₀, θ₁) = (1/2m) Σᵢ₌₁ᵐ (h_θ(x⁽ⁱ⁾) - y⁽ⁱ⁾)²

Plugging the partial derivatives of J into the update rule:

repeat until convergence {
  θ₀ := θ₀ - α · (1/m) Σᵢ₌₁ᵐ (h_θ(x⁽ⁱ⁾) - y⁽ⁱ⁾)
  θ₁ := θ₁ - α · (1/m) Σᵢ₌₁ᵐ (h_θ(x⁽ⁱ⁾) - y⁽ⁱ⁾) · x⁽ⁱ⁾
}
(update θ₀ and θ₁ simultaneously)
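Putting the pieces together, the update rules above can be sketched as a complete routine (plain Python with toy data; the lecture itself gives only the math):

```python
def gradient_descent(xs, ys, alpha=0.1, iters=2000):
    # Batch gradient descent for univariate linear regression.
    m = len(xs)
    theta0, theta1 = 0.0, 0.0
    for _ in range(iters):
        # Each step uses ALL m training examples ("batch").
        errors = [theta0 + theta1 * x - y for x, y in zip(xs, ys)]
        grad0 = sum(errors) / m
        grad1 = sum(e * x for e, x in zip(errors, xs)) / m
        # Simultaneous update of both parameters.
        theta0, theta1 = theta0 - alpha * grad0, theta1 - alpha * grad1
    return theta0, theta1

# Data lying exactly on y = 2x: the fit approaches theta0 = 0, theta1 = 2.
theta0, theta1 = gradient_descent([1, 2, 3], [2, 4, 6])
```

Note the tuple assignment on the update line: both right-hand sides are evaluated before either parameter is rebound, which is exactly the simultaneous update the slides require.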
[Figure: surface plot of J(θ₀, θ₁) for linear regression: a convex, bowl-shaped function with a single global minimum]
[Figure sequence: left, h_θ(x) over the housing data (for fixed θ₀, θ₁, a function of x); right, contour plot of J(θ₀, θ₁) (a function of the parameters θ₀, θ₁), with successive gradient descent steps tracing a path toward the minimum as the fitted line improves]
"Batch" gradient descent

"Batch": each step of gradient descent uses all the training examples.