
Page 1: Curve fitting – Least squares

Curve fitting – Least squares


Curve fitting


(Figure: example data with fitted Michaelis–Menten curve, Km = 100 µM, vmax = 1 ATP s⁻¹)

Page 2: Curve fitting – Least squares

Excursus: error of multiple measurements

Starting point: measure the same parameter N times. The obtained values are Gaussian distributed around the mean with standard deviation σ.

What is the error of the mean of all measurements?

Sum = x1 + x2 + ... + xN

Variance of sum = N · σ² (central limit theorem)

Standard deviation of sum = √N · σ

Mean = (x1 + x2 + ... + xN)/N

Standard deviation of mean (called the standard error of the mean):

$\sigma_{\mathrm{mean}} = \frac{\sqrt{N}\,\sigma}{N} = \frac{\sigma}{\sqrt{N}}$
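A quick numerical check of the σ/√N scaling (a minimal Python sketch, not part of the original slides; the mean, σ and sample sizes below are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    sigma, N, repeats = 2.0, 100, 10000

    # Simulate many experiments of N measurements each and look at the spread of their means
    means = rng.normal(loc=5.0, scale=sigma, size=(repeats, N)).mean(axis=1)

    print("empirical std of the mean:", means.std(ddof=1))
    print("sigma / sqrt(N)          :", sigma / np.sqrt(N))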

Excursus: error propagation

What is the error of f(x, y, z, …) if we know the purely statistical errors of x, y, z, … (σx, σy, σz, …)?

$\sigma_f^2 = \left(\frac{\partial f}{\partial x}\right)^2 \sigma_x^2 + \left(\frac{\partial f}{\partial y}\right)^2 \sigma_y^2 + \ldots$

→ The individual variances add, scaled by the squared partial derivatives (if the parameters are uncorrelated).

Examples:

Addition/subtraction, f = x ± y: $\sigma_f^2 = \sigma_x^2 + \sigma_y^2$ (squared errors add up)

Product, f = x · y: $\left(\frac{\sigma_f}{f}\right)^2 = \left(\frac{\sigma_x}{x}\right)^2 + \left(\frac{\sigma_y}{y}\right)^2$ (squared relative errors add up)

Page 3: Curve fitting – Least squares

Excursus: error propagation (continued)

Ratios, f = x / y: $\left(\frac{\sigma_f}{f}\right)^2 = \left(\frac{\sigma_x}{x}\right)^2 + \left(\frac{\sigma_y}{y}\right)^2$ (squared relative errors add up)

Powers, f = xⁿ: $\frac{\sigma_f}{f} = |n|\,\frac{\sigma_x}{x}$ (relative error times power)

Logarithms, f = ln x: $\sigma_f = \frac{\sigma_x}{x}$ (the error is the relative error of x)
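These rules translate directly into code; the sketch below is a small illustration with made-up values and errors for x (not from the slides):

    import numpy as np

    x, sigma_x = 10.0, 0.3
    y, sigma_y = 4.0, 0.2

    # Product f = x*y: squared relative errors add up
    f = x * y
    sigma_f = f * np.sqrt((sigma_x / x) ** 2 + (sigma_y / y) ** 2)
    print(f"x*y   = {f:.2f} +/- {sigma_f:.2f}")

    # Power f = x**3: relative error times power
    g = x ** 3
    sigma_g = g * 3 * (sigma_x / x)
    print(f"x^3   = {g:.1f} +/- {sigma_g:.1f}")

    # Logarithm f = ln(x): the error equals the relative error of x
    h = np.log(x)
    sigma_h = sigma_x / x
    print(f"ln(x) = {h:.3f} +/- {sigma_h:.3f}")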

Curve fitting – Least squares

Starting point:
- data set with N pairs (xi, yi)
- xi known exactly
- yi Gaussian distributed around the true value with error σi
- errors uncorrelated
- a function f(x) which shall describe the values y (y = f(x))
- f(x) depends on one or more parameters a

Example (Michaelis–Menten kinetics): $v = \dfrac{v_{\max}\,[S]}{K_M + [S]}$
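To make this setup concrete, here is a minimal sketch that generates such a data set for the Michaelis–Menten example; the substrate concentrations, noise level and random seed are assumptions for illustration only:

    import numpy as np

    def michaelis_menten(S, vmax, Km):
        # model function f(x; a): rate v as a function of substrate concentration [S]
        return vmax * S / (Km + S)

    rng = np.random.default_rng(1)
    S = np.array([5.0, 10, 20, 50, 100, 200, 400, 800])   # xi: substrate concentrations in µM, known exactly
    sigma = np.full_like(S, 0.05)                          # assumed errors of the rates (ATP/s)
    v_true = michaelis_menten(S, vmax=1.0, Km=100.0)       # true curve (vmax = 1 ATP/s, Km = 100 µM)
    v_obs = v_true + rng.normal(0.0, sigma)                # yi: Gaussian distributed around the true values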

Page 4: Curve fitting – Least squares

Curve fitting – Least squares

(Figure: data points with fitted Michaelis–Menten curve)

Probability to get yi for a given xi:

$P(y_i; a) = \frac{1}{\sqrt{2\pi}\,\sigma_i}\; e^{-[y_i - f(x_i; a)]^2 / (2\sigma_i^2)}$

Curve fitting – Least squares

(Figure: data points with fitted Michaelis–Menten curve)

Probability to get the whole set yi for the set of xi:

$P(y_1, \ldots, y_N; a) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_i}\; e^{-[y_i - f(x_i; a)]^2 / (2\sigma_i^2)}$

For the best-fitting theory curve (red curve) P(y1, …, yN; a) becomes maximal!

Page 5: Curve fitting – Least squares

Curve fitting – Least squares

Probability to get the whole set yi for the set of xi:

$P(y_1, \ldots, y_N; a) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_i}\; e^{-[y_i - f(x_i; a)]^2 / (2\sigma_i^2)}$

For the best-fitting theory curve (red curve) P(y1, …, yN; a) becomes maximal!

Use the logarithm of the product to get a sum, and maximize the sum:

$\ln P(y_1, \ldots, y_N; a) = -\frac{1}{2} \sum_{i=1}^{N} \left(\frac{y_i - f(x_i; a)}{\sigma_i}\right)^2 - \sum_{i=1}^{N} \ln\!\left(\sqrt{2\pi}\,\sigma_i\right)$

OR minimize χ² with:

$\chi^2 = \sum_{i=1}^{N} \frac{[y_i - f(x_i; a)]^2}{\sigma_i^2}$

Principle of least squares!!!
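A small numerical illustration of why the two are equivalent (toy data and a one-parameter model f(x; m) = m·x, chosen for this sketch): for a fixed data set, ln P and −χ²/2 differ only by a term that does not depend on the fit parameter, so maximizing ln P and minimizing χ² select the same parameter value.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.1, 3.9, 6.2, 7.8])
    sigma = np.array([0.2, 0.2, 0.3, 0.3])

    def chi2(m):
        return np.sum(((y - m * x) / sigma) ** 2)

    def log_P(m):
        # ln P = -chi^2/2 - sum(ln(sqrt(2*pi)*sigma_i))
        return -0.5 * chi2(m) - np.sum(np.log(np.sqrt(2 * np.pi) * sigma))

    for m in (1.8, 2.0, 2.2):
        # the last column is the parameter-independent constant
        print(m, chi2(m), log_P(m) + 0.5 * chi2(m))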

Curve fitting – Least squares

Principle of least squares!!! (χ² minimization)

Solve the equation(s) either analytically (only for simple functions) or numerically (specialized software, different algorithms).

The χ² value indicates the goodness of fit.

Errors available: USE THEM! → so-called weighted fit
Errors not available: the σi are set to a constant → conventional (unweighted) fit

Solve:

$\frac{\partial \chi^2}{\partial a_k} = 0$ for every fit parameter $a_k$
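In practice the numerical minimization is done by a library routine. A minimal weighted-fit sketch with SciPy's curve_fit for the Michaelis–Menten example (the measured rates below are invented for illustration; absolute_sigma=True tells curve_fit to treat the given σi as absolute errors):

    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(S, vmax, Km):
        return vmax * S / (Km + S)

    S = np.array([5.0, 10, 20, 50, 100, 200, 400, 800])              # µM
    v = np.array([0.06, 0.11, 0.18, 0.31, 0.52, 0.65, 0.78, 0.91])   # ATP/s, made-up measurements
    sigma = np.full_like(S, 0.05)                                    # measurement errors -> weighted fit

    # Minimize chi^2 = sum((v - f(S; a))^2 / sigma^2) numerically
    popt, pcov = curve_fit(michaelis_menten, S, v, p0=[1.0, 100.0],
                           sigma=sigma, absolute_sigma=True)
    perr = np.sqrt(np.diag(pcov))
    print("vmax = %.2f +/- %.2f ATP/s, Km = %.0f +/- %.0f µM"
          % (popt[0], perr[0], popt[1], perr[1]))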

Page 6: Curve fitting – Least squares

Reduced χ²

For a weighted fit the reduced χ² should become 1 if the errors are properly chosen.

Expectation value of χ² for a weighted fit:

$\langle \chi^2 \rangle = \left\langle \sum_{i=1}^{N} \frac{[y_i - f(x_i; a)]^2}{\sigma_i^2} \right\rangle = N - M$

Define the reduced χ²:

$\chi^2_{\mathrm{red}} = \frac{\chi^2}{N - M}$, with $\langle \chi^2_{\mathrm{red}} \rangle = 1$

(M = number of fit parameters)

Expectation value of χ² for an unweighted fit:

$\left\langle \frac{1}{N - M} \sum_{i=1}^{N} [y_i - f(x_i; a)]^2 \right\rangle = \sigma^2$

Should approach the variance of a single data point
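Continuing the weighted-fit sketch from above, the reduced χ² is just the minimized χ² divided by N − M (the numbers are the same toy values, so this only illustrates the bookkeeping):

    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(S, vmax, Km):
        return vmax * S / (Km + S)

    S = np.array([5.0, 10, 20, 50, 100, 200, 400, 800])
    v = np.array([0.06, 0.11, 0.18, 0.31, 0.52, 0.65, 0.78, 0.91])
    sigma = np.full_like(S, 0.05)

    popt, _ = curve_fit(michaelis_menten, S, v, p0=[1.0, 100.0],
                        sigma=sigma, absolute_sigma=True)

    chi2 = np.sum(((v - michaelis_menten(S, *popt)) / sigma) ** 2)
    M = len(popt)                     # number of fit parameters
    chi2_red = chi2 / (len(S) - M)    # should be close to 1 if the sigmas are realistic
    print("chi2 =", round(chi2, 2), " reduced chi2 =", round(chi2_red, 2))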

Simple proportion: f(x; m) = m·x

$\chi^2 = \sum_{i=1}^{N} \frac{(y_i - m\,x_i)^2}{\sigma_i^2}$

Differentiation:

$\frac{\partial \chi^2}{\partial m} = -2 \sum_{i=1}^{N} \frac{x_i (y_i - m\,x_i)}{\sigma_i^2}$

Solve:

$\sum_{i=1}^{N} \frac{x_i (y_i - m\,x_i)}{\sigma_i^2} = 0$

Get:

$m = \frac{\sum_i x_i y_i / \sigma_i^2}{\sum_i x_i^2 / \sigma_i^2}$

For σi = const = σ:

$m = \frac{\sum_i x_i y_i}{\sum_i x_i^2}$
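For the simple proportion the closed-form result can be checked directly against a generic numerical least-squares solver; the data below are toy values with roughly y ≈ 2x:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.3])

    m_analytic = np.sum(x * y) / np.sum(x * x)                    # m = sum(xi*yi) / sum(xi^2), sigma_i = const
    m_numeric = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]  # same model, solved numerically

    print(m_analytic, m_numeric)   # both give the same slope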

Page 7: Curve fitting – Least squares

Simple proportion – Errors

Rewrite m as a linear combination of the yi:

$m = \frac{\sum_i (x_i/\sigma_i^2)\, y_i}{\sum_j x_j^2/\sigma_j^2}$

→ The error of m is given by the errors of the yi
→ Use the rules for error propagation:

$V(m) = \frac{1}{\sum_i x_i^2/\sigma_i^2}$

→ The standard error of the determination of m (a single confidence interval) is then the square root of V(m).

If no errors are available, σ² is estimated as the mean square deviation of the fitted function from the y-values:

$\sigma^2 \approx \frac{1}{N} \sum_{i=1}^{N} [y_i - f(x_i; a)]^2$
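A sketch of the weighted estimate of m together with its standard error √V(m); the data and the per-point errors are invented for illustration:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.3])
    sigma = np.array([0.2, 0.2, 0.3, 0.3, 0.4])   # assumed errors of the yi

    m = np.sum(x * y / sigma**2) / np.sum(x**2 / sigma**2)   # weighted least-squares slope
    var_m = 1.0 / np.sum(x**2 / sigma**2)                    # V(m) from error propagation of the yi
    print("m = %.3f +/- %.3f" % (m, np.sqrt(var_m)))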

Straight line fit (2 parameters): f(x; m, c) = m·x + c

$\chi^2 = \sum_{i=1}^{N} \frac{(y_i - m\,x_i - c)^2}{\sigma_i^2}$

Differentiation w.r.t. c: $\sum_i \dfrac{y_i - m\,x_i - c}{\sigma_i^2} = 0$

Differentiation w.r.t. m: $\sum_i \dfrac{x_i (y_i - m\,x_i - c)}{\sigma_i^2} = 0$

Get (solve the equation array), using $S = \sum_i 1/\sigma_i^2$, $S_x = \sum_i x_i/\sigma_i^2$, $S_y = \sum_i y_i/\sigma_i^2$, $S_{xx} = \sum_i x_i^2/\sigma_i^2$, $S_{xy} = \sum_i x_i y_i/\sigma_i^2$, $\Delta = S\,S_{xx} - S_x^2$:

$m = \frac{S\,S_{xy} - S_x S_y}{\Delta}, \qquad c = \frac{S_{xx} S_y - S_x S_{xy}}{\Delta}$

Errors: $\sigma_m^2 = \frac{S}{\Delta}, \qquad \sigma_c^2 = \frac{S_{xx}}{\Delta}$

(or σi = const = σ, in which case the weights 1/σi² drop out of m and c)

For all relations which are linear in the fit parameters, analytical solutions are possible!
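The analytical weighted solution can be written down directly with the sums above; a minimal sketch (the data, errors and variable names are assumptions for illustration):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([1.2, 2.9, 5.1, 6.8, 9.2])
    sig = np.array([0.3, 0.3, 0.4, 0.4, 0.5])

    # Weighted sums appearing in the analytical solution
    S, Sx, Sy = np.sum(1 / sig**2), np.sum(x / sig**2), np.sum(y / sig**2)
    Sxx, Sxy = np.sum(x**2 / sig**2), np.sum(x * y / sig**2)
    D = S * Sxx - Sx**2

    m = (S * Sxy - Sx * Sy) / D         # slope
    c = (Sxx * Sy - Sx * Sxy) / D       # intercept
    sigma_m, sigma_c = np.sqrt(S / D), np.sqrt(Sxx / D)   # errors of the fit parameters
    print("m = %.3f +/- %.3f, c = %.3f +/- %.3f" % (m, sigma_m, c, sigma_c))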

Page 8: Curve fitting – Least squares

Straight line fit (2 parameters)

Additional quantities for multiple parameters: covariances
→ they describe the interdependency/correlation between the obtained parameters

Covariance matrix (example for three parameters p1, p2, p3):

$V = \begin{pmatrix} V_{11} & V_{12} & V_{13} \\ V_{21} & V_{22} & V_{23} \\ V_{31} & V_{32} & V_{33} \end{pmatrix} = \begin{pmatrix} \mathrm{Var}(p_1) & \mathrm{cov}(p_1,p_2) & \mathrm{cov}(p_1,p_3) \\ \mathrm{cov}(p_2,p_1) & \mathrm{Var}(p_2) & \mathrm{cov}(p_2,p_3) \\ \mathrm{cov}(p_3,p_1) & \mathrm{cov}(p_3,p_2) & \mathrm{Var}(p_3) \end{pmatrix}$

$V_{ii} = \mathrm{Var}(p_i)$, $V_{ij} = \mathrm{cov}(p_i, p_j)$ for $i \neq j$

Straight line fit: the 2 × 2 covariance matrix of m and c, with Var(m) and Var(c) on the diagonal and cov(m, c) off the diagonal.

→ The diagonal elements are the squared errors of the fit parameters!
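In practice the covariance matrix is returned by the fitting routine; a minimal SciPy sketch for the straight-line example (toy data as above), where the diagonal of pcov holds Var(m) and Var(c) and the off-diagonal element cov(m, c):

    import numpy as np
    from scipy.optimize import curve_fit

    def line(x, m, c):
        return m * x + c

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([1.2, 2.9, 5.1, 6.8, 9.2])
    sig = np.array([0.3, 0.3, 0.4, 0.4, 0.5])

    popt, pcov = curve_fit(line, x, y, sigma=sig, absolute_sigma=True)

    errors = np.sqrt(np.diag(pcov))                 # sqrt of Var(m), Var(c)
    corr = pcov[0, 1] / (errors[0] * errors[1])     # correlation coefficient of m and c
    print("m = %.3f +/- %.3f" % (popt[0], errors[0]))
    print("c = %.3f +/- %.3f" % (popt[1], errors[1]))
    print("correlation(m, c) = %.2f" % corr)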

More in depth

http://www-zeus.physik.uni-bonn.de/~brock/teaching/stat_ws0001/

Online lecture: Statistical Methods of Data Analysis by Ian C. Brock

Numerical Recipes in C++, Cambridge University Press