
Sections 6.7, 6.8, 7.7
(Note: The approach used here to present the material in these sections is substantially different from the approach used in the textbook.)

Recall: If $X$ and $Y$ are random variables with $E(X) = \mu_X$, $E(Y) = \mu_Y$, $\mathrm{Var}(X) = \sigma_X^2$, $\mathrm{Var}(Y) = \sigma_Y^2$, and $\mathrm{Cov}(X,Y) = \rho\sigma_X\sigma_Y$, then the least squares line for predicting $Y$ from $X$ is

$$y = \mu_Y + \rho\frac{\sigma_Y}{\sigma_X}(x - \mu_X) \qquad\text{or}\qquad y = \underbrace{\mu_Y - \rho\frac{\sigma_Y}{\sigma_X}\mu_X}_{a} + \underbrace{\rho\frac{\sigma_Y}{\sigma_X}}_{b}\,x\,.$$

The least squares line is derived in Section 4.2 by minimizing $E\{[Y - (a + bX)]^2\}$.

Consider a set of observed data $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$. Imagine that we treat this data as describing a joint p.m.f. for two random variables $X$ and $Y$ where each point is assigned a probability of $1/n$. Then, we see that

$\bar{x} = \dfrac{1}{n}\sum_{i=1}^n x_i$ plays the role of $E(X) = \mu_X$,

$\bar{y} = \dfrac{1}{n}\sum_{i=1}^n y_i$ plays the role of $E(Y) = \mu_Y$,

$\dfrac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2 = \dfrac{n-1}{n}\,s_x^2$ plays the role of $\mathrm{Var}(X) = \sigma_X^2$,

$\dfrac{1}{n}\sum_{i=1}^n (y_i - \bar{y})^2 = \dfrac{n-1}{n}\,s_y^2$ plays the role of $\mathrm{Var}(Y) = \sigma_Y^2$, and

$\dfrac{1}{n}\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y}) = \dfrac{n-1}{n}\,c$ plays the role of $\mathrm{Cov}(X,Y) = \rho\sigma_X\sigma_Y$.

We define the sample covariance to be $c = \dfrac{\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y})}{n-1}$, and we define the sample correlation to be $r = \dfrac{c}{s_x s_y}$.

Consequently, the least squares line for predicting $Y$ from $X$ is

$$y = \bar{y} + r\frac{s_y}{s_x}(x - \bar{x}) \qquad\text{or}\qquad y = \underbrace{\bar{y} - r\frac{s_y}{s_x}\bar{x}}_{a} + \underbrace{r\frac{s_y}{s_x}}_{b}\,x\,.$$

This least squares line minimizes $\sum_{i=1}^n [y_i - (a + bx_i)]^2$.

The sample correlation $r$ is a measure of the strength and direction of a linear relationship for the sample in the same way that the correlation $\rho$ is a measure of the strength and direction of a linear relationship for the two random variables $X$ and $Y$.

[Scatter plots illustrating samples with r = +1, r close to +1, r positive; r = -1, r close to -1, r negative; and r close to 0.]

Suppose $Y_1, Y_2, \ldots, Y_n$ are independent with respective $N(\mu_1, \sigma^2), N(\mu_2, \sigma^2), \ldots, N(\mu_n, \sigma^2)$ distributions. Let $x_1, x_2, \ldots, x_n$ be fixed values not all equal, and suppose that for $i = 1, 2, \ldots, n$, $\mu_i = \beta_0 + \beta_1 x_i$. Then the joint p.d.f. of $Y_1, Y_2, \ldots, Y_n$ is

$$\prod_{i=1}^n \frac{1}{\sigma\sqrt{2\pi}}\exp\left\{-\,\frac{[y_i - (\beta_0 + \beta_1 x_i)]^2}{2\sigma^2}\right\} = \frac{1}{\sigma^n (2\pi)^{n/2}}\exp\left\{-\,\frac{\sum_{i=1}^n [y_i - (\beta_0 + \beta_1 x_i)]^2}{2\sigma^2}\right\}$$

for $-\infty < y_1 < \infty,\ -\infty < y_2 < \infty,\ \ldots,\ -\infty < y_n < \infty$.

If we treat this joint p.d.f. as a function $L(\beta_0, \beta_1)$, that is, a function of the unknown parameters $\beta_0$ and $\beta_1$, then we can find the maximum likelihood estimates for $\beta_0$ and $\beta_1$ by maximizing the function $L(\beta_0, \beta_1)$. It is clear that the function $L(\beta_0, \beta_1)$ will be maximized when $\sum_{i=1}^n [y_i - (\beta_0 + \beta_1 x_i)]^2$ is minimized.
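The equivalence between maximizing $L(\beta_0, \beta_1)$ and minimizing the sum of squares can also be checked numerically. The sketch below is ours (not from the textbook); it assumes NumPy and SciPy are available, borrows the $x$ values from Class Exercise 1, and simulates the $y$ values from an assumed model with $\sigma = 3$. Both optimizations return the same $(\beta_0, \beta_1)$.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data: x from Class Exercise 1, y simulated from an assumed model.
rng = np.random.default_rng(0)
x = np.array([14.0, 13.0, 21.0, 25.0, 17.0])
y = 40.0 + 1.2 * x + rng.normal(scale=3.0, size=x.size)

def neg_log_likelihood(params, sigma=3.0):
    # -log L(beta0, beta1) for the normal model with sigma treated as known.
    b0, b1 = params
    resid = y - (b0 + b1 * x)
    return x.size * np.log(sigma * np.sqrt(2.0 * np.pi)) + np.sum(resid**2) / (2.0 * sigma**2)

def sum_of_squares(params):
    b0, b1 = params
    return np.sum((y - (b0 + b1 * x))**2)

mle = minimize(neg_log_likelihood, x0=[0.0, 0.0]).x
lsq = minimize(sum_of_squares, x0=[0.0, 0.0]).x
print(mle, lsq)  # the two minimizers agree up to numerical tolerance
```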

The previous result concerning the least squares line for predicting $Y$ from $X$ with a sample of data points tells us that the

mle of $\beta_1$ is
$$\hat\beta_1 = R\,\frac{S_Y}{s_x} = \frac{\sum_{i=1}^n (x_i - \bar{x})(Y_i - \bar{Y})}{\sum_{i=1}^n (x_i - \bar{x})^2} = \frac{\sum_{i=1}^n (x_i - \bar{x})\,Y_i}{\sum_{j=1}^n (x_j - \bar{x})^2}\,,$$

and the mle of $\beta_0$ is
$$\hat\beta_0 = \bar{Y} - R\,\frac{S_Y}{s_x}\,\bar{x} = \sum_{i=1}^n \frac{Y_i}{n} - \bar{x}\,\frac{\sum_{i=1}^n (x_i - \bar{x})\,Y_i}{\sum_{j=1}^n (x_j - \bar{x})^2} = \sum_{i=1}^n \left[\frac{1}{n} - \frac{(x_i - \bar{x})\,\bar{x}}{\sum_{j=1}^n (x_j - \bar{x})^2}\right] Y_i\,.$$

1.

(a) Suppose we are interested in predicting a person's height from the person's length of stride (distance between footprints). The following data is recorded for a random sample of 5 people:

Length of Stride (inches)   14  13  21  25  17
Height (inches)             61  54  63  72  59

Find the equation of the least squares line for predicting a person's height from the person's length of stride.

The slope of the least squares line is $\dfrac{120}{100} = 1.2$.

The intercept of the least squares line is $61.8 - (1.2)(18) = 40.2$.

The least squares line can be written $y = 40.2 + 1.2x$.
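As a quick check on the hand calculations, here is a minimal NumPy sketch (ours; the variable names are not from the notes) that reproduces the slope and intercept for the stride data.

```python
import numpy as np

# Stride length (x) and height (y) data from Class Exercise 1(a).
x = np.array([14, 13, 21, 25, 17], dtype=float)
y = np.array([61, 54, 63, 72, 59], dtype=float)

sxx = np.sum((x - x.mean())**2)                 # 100
sxy = np.sum((x - x.mean()) * (y - y.mean()))   # 120

b1 = sxy / sxx                  # slope: 1.2
b0 = y.mean() - b1 * x.mean()   # intercept: 61.8 - 1.2*18 = 40.2
print(b0, b1)                   # 40.2 1.2
```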

(b) Suppose we assume that the height of humans has a normal distribution with mean $\beta_0 + \beta_1 x$ and variance $\sigma^2$, where $x$ is the length of stride. Find the maximum likelihood estimators for $\beta_0$ and $\beta_1$.

The mle of $\beta_1$ is $\dfrac{120}{100} = 1.2$. The mle of $\beta_0$ is $61.8 - (1.2)(18) = 40.2$.

2.

(a) Use Theorem 5.5-1 (Class Exercise 5.5-1) to find the distribution of the maximum likelihood estimator of $\beta_1$.

Suppose $Y_1, Y_2, \ldots, Y_n$ are independent with respective $N(\mu_1, \sigma^2), N(\mu_2, \sigma^2), \ldots, N(\mu_n, \sigma^2)$ distributions. Let $x_1, x_2, \ldots, x_n$ be fixed values not all equal, and suppose that for $i = 1, 2, \ldots, n$, $\mu_i = \beta_0 + \beta_1 x_i$.

$$\hat\beta_1 = \frac{\sum_{i=1}^n (x_i - \bar{x})\,Y_i}{\sum_{j=1}^n (x_j - \bar{x})^2}$$

has a normal distribution with mean

$$\frac{\sum_{i=1}^n (x_i - \bar{x})(\beta_0 + \beta_1 x_i)}{\sum_{j=1}^n (x_j - \bar{x})^2} = \frac{\sum_{i=1}^n (x_i - \bar{x})\,[\beta_0 + \beta_1\bar{x} + \beta_1(x_i - \bar{x})]}{\sum_{j=1}^n (x_j - \bar{x})^2} = \frac{(\beta_0 + \beta_1\bar{x})\sum_{i=1}^n (x_i - \bar{x})}{\sum_{j=1}^n (x_j - \bar{x})^2} + \beta_1\,\frac{\sum_{i=1}^n (x_i - \bar{x})^2}{\sum_{j=1}^n (x_j - \bar{x})^2} = \beta_1$$

(the first term is zero because $\sum_{i=1}^n (x_i - \bar{x}) = 0$), and variance

$$\sum_{i=1}^n \left[\frac{x_i - \bar{x}}{\sum_{j=1}^n (x_j - \bar{x})^2}\right]^2 \sigma^2 = \frac{\sigma^2\sum_{i=1}^n (x_i - \bar{x})^2}{\left[\sum_{j=1}^n (x_j - \bar{x})^2\right]^2} = \frac{\sigma^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\,.$$

2. - continued

(b) Use Theorem 5.5-1 (Class Exercise 5.5-1) to find the distribution of the maximum likelihood estimator of $\beta_0$.

$$\hat\beta_0 = \sum_{i=1}^n \left[\frac{1}{n} - \frac{(x_i - \bar{x})\,\bar{x}}{\sum_{j=1}^n (x_j - \bar{x})^2}\right] Y_i$$

has a normal distribution with mean

$$\sum_{i=1}^n \left[\frac{1}{n} - \frac{(x_i - \bar{x})\,\bar{x}}{\sum_{j=1}^n (x_j - \bar{x})^2}\right](\beta_0 + \beta_1 x_i) = \sum_{i=1}^n \frac{\beta_0 + \beta_1 x_i}{n} - \bar{x}\,\frac{\sum_{i=1}^n (x_i - \bar{x})(\beta_0 + \beta_1 x_i)}{\sum_{j=1}^n (x_j - \bar{x})^2} = \beta_0 + \beta_1\bar{x} - \bar{x}\,\beta_1 = \beta_0$$

(we already found the value of the last ratio, $\beta_1$, in part (a)), and variance

$$\sum_{i=1}^n \left[\frac{1}{n} - \frac{(x_i - \bar{x})\,\bar{x}}{\sum_{j=1}^n (x_j - \bar{x})^2}\right]^2 \sigma^2 = \sum_{i=1}^n \left[\frac{1}{n^2} - \frac{2(x_i - \bar{x})\,\bar{x}}{n\sum_{j=1}^n (x_j - \bar{x})^2} + \frac{(x_i - \bar{x})^2\,\bar{x}^2}{\left[\sum_{j=1}^n (x_j - \bar{x})^2\right]^2}\right]\sigma^2 = \sigma^2\left[\frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]$$

(the middle term sums to zero, again because $\sum_{i=1}^n (x_i - \bar{x}) = 0$).
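The mean and variance formulas for $\hat\beta_1$ and $\hat\beta_0$ can be checked by simulation. The rough sketch below is ours and assumes NumPy; the values of $\beta_0$, $\beta_1$, $\sigma$, and the $x$'s are illustrative choices, not quantities given in the notes.

```python
import numpy as np

# Simulation check of E and Var of the MLEs; settings are illustrative assumptions.
rng = np.random.default_rng(1)
x = np.array([14.0, 13.0, 21.0, 25.0, 17.0])
beta0, beta1, sigma = 40.2, 1.2, 3.0
sxx = np.sum((x - x.mean())**2)
n_rep = 100_000

# Each row of Y is one simulated sample of size n = 5 from the model.
Y = beta0 + beta1 * x + rng.normal(scale=sigma, size=(n_rep, x.size))
b1_hat = Y @ (x - x.mean()) / sxx
b0_hat = Y.mean(axis=1) - b1_hat * x.mean()

print(b1_hat.mean(), b1_hat.var())  # approx. beta1 = 1.2 and sigma^2/Sxx = 0.09
print(b0_hat.mean(), b0_hat.var())  # approx. beta0 = 40.2 and sigma^2*(1/n + xbar^2/Sxx) = 30.96
```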

Suppose we treat the joint p.d.f. as a function $L(\beta_0, \beta_1, \sigma^2)$, that is, a function of the three unknown parameters (instead of just two). Then, analogous to Text Example 6.1-3, we find that the maximum likelihood estimates for $\beta_0$ and $\beta_1$ are the same as previously derived, and that the maximum likelihood estimator for $\sigma^2$ is

$$\frac{\sum_{i=1}^n [Y_i - (\hat\beta_0 + \hat\beta_1 x_i)]^2}{n}\,.$$

Recall: If $Y_1, Y_2, \ldots, Y_n$ are independent with each having a $N(\mu, \sigma^2)$ distribution (i.e., a random sample from a $N(\mu, \sigma^2)$ distribution), then

$\bar{Y} = \dfrac{\sum_{i=1}^n Y_i}{n}$ has a $N\!\left(\mu,\ \dfrac{\sigma^2}{n}\right)$ distribution,

$\dfrac{(n-1)S^2}{\sigma^2}$ has a $\chi^2(n-1)$ distribution, and

the random variables $\bar{Y}$ and $\dfrac{(n-1)S^2}{\sigma^2}$ are independent.

Analogous results for the more general situation previously considered can be proven using matrix algebra.

Suppose $Y_1, Y_2, \ldots, Y_n$ are independent with respective $N(\mu_1, \sigma^2), N(\mu_2, \sigma^2), \ldots, N(\mu_n, \sigma^2)$ distributions. Let $x_1, x_2, \ldots, x_n$ be fixed values not all equal, and suppose that for $i = 1, 2, \ldots, n$, $\mu_i = \beta_0 + \beta_1 x_i$. Then

$\hat\beta_1$ has a $N\!\left(\beta_1,\ \dfrac{\sigma^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right)$ distribution,

$\hat\beta_0$ has a $N\!\left(\beta_0,\ \sigma^2\left[\dfrac{1}{n} + \dfrac{\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]\right)$ distribution,

$\dfrac{\sum_{i=1}^n [Y_i - (\hat\beta_0 + \hat\beta_1 x_i)]^2}{\sigma^2}$ has a $\chi^2(n-2)$ distribution,

the random variables $\hat\beta_1$ and $\dfrac{\sum_{i=1}^n [Y_i - (\hat\beta_0 + \hat\beta_1 x_i)]^2}{\sigma^2}$ are independent, and

the random variables $\hat\beta_0$ and $\dfrac{\sum_{i=1}^n [Y_i - (\hat\beta_0 + \hat\beta_1 x_i)]^2}{\sigma^2}$ are independent.

For each $i = 1, 2, \ldots, n$, we define the random variable $\hat{Y}_i = \hat\beta_0 + \hat\beta_1 x_i$, that is, $\hat{Y}_i$ is the predicted value corresponding to $x_i$. With appropriate algebra, it can be shown that

$$\underbrace{\sum_{i=1}^n (Y_i - \bar{Y})^2}_{\text{SST}} \;=\; \underbrace{\sum_{i=1}^n (\hat{Y}_i - \bar{Y})^2}_{\text{SSR}} \;+\; \underbrace{\sum_{i=1}^n (Y_i - \hat{Y}_i)^2}_{\text{SSE}}\,.$$

The first of these is called the total sum of squares and is denoted SST; the second is called the regression sum of squares and is denoted SSR; the third is called the error (residual) sum of squares and is denoted SSE.

Since, as we have noted, SSE$/\sigma^2$ has a $\chi^2(n-2)$ distribution, we say that the df (degrees of freedom) associated with SSE is $n - 2$.

If $Y_1, Y_2, \ldots, Y_n$ all have the same mean, that is, if $\beta_1 = 0$, then SST$/\sigma^2$ has a $\chi^2(n-1)$ distribution; consequently, the df associated with SST is $n - 1$.

If $Y_1, Y_2, \ldots, Y_n$ all have the same mean, that is, if $\beta_1 = 0$, then it can be shown that SSR and SSE are independent, and that SSR$/\sigma^2$ has a $\chi^2(1)$ distribution; consequently, the df associated with SSR is 1.

3. Suppose $Y_1, Y_2, \ldots, Y_n$ are independent with respective $N(\mu_1, \sigma^2), \ldots, N(\mu_n, \sigma^2)$ distributions. Let $x_1, x_2, \ldots, x_n$ be fixed values not all equal, and suppose that for $i = 1, 2, \ldots, n$, $\mu_i = \beta_0 + \beta_1 x_i$. Prove that SST = SSR + SSE.

First, we observe that for $i = 1, 2, \ldots, n$, $\hat{Y}_i = \hat\beta_0 + \hat\beta_1 x_i = \bar{Y} + \hat\beta_1(x_i - \bar{x})$.

$$\text{SST} = \sum_{i=1}^n (Y_i - \bar{Y})^2 = \sum_{i=1}^n \left[\hat\beta_1(x_i - \bar{x}) + (Y_i - \bar{Y}) - \hat\beta_1(x_i - \bar{x})\right]^2 =$$

$$\sum_{i=1}^n [\hat\beta_1(x_i - \bar{x})]^2 + \sum_{i=1}^n [(Y_i - \bar{Y}) - \hat\beta_1(x_i - \bar{x})]^2 + 2\sum_{i=1}^n \hat\beta_1(x_i - \bar{x})[(Y_i - \bar{Y}) - \hat\beta_1(x_i - \bar{x})] =$$

$$\sum_{i=1}^n [\hat\beta_1(x_i - \bar{x})]^2 + \sum_{i=1}^n [Y_i - \bar{Y} - \hat\beta_1(x_i - \bar{x})]^2 + 2\hat\beta_1\sum_{i=1}^n (x_i - \bar{x})(Y_i - \bar{Y}) - 2\hat\beta_1^2\sum_{i=1}^n (x_i - \bar{x})^2 =$$

$$\sum_{i=1}^n (\hat{Y}_i - \bar{Y})^2 + \sum_{i=1}^n (Y_i - \hat{Y}_i)^2 + 2\,\frac{\left[\sum_{i=1}^n (x_i - \bar{x})(Y_i - \bar{Y})\right]^2}{\sum_{i=1}^n (x_i - \bar{x})^2} - 2\,\frac{\left[\sum_{i=1}^n (x_i - \bar{x})(Y_i - \bar{Y})\right]^2}{\left[\sum_{i=1}^n (x_i - \bar{x})^2\right]^2}\sum_{i=1}^n (x_i - \bar{x})^2$$

$$= \text{SSR} + \text{SSE}\,.$$

A mean square is a sum of squares divided by its degrees of freedom.

The error (residual) mean square is MSE $= \dfrac{\text{SSE}}{n-2}$. The regression mean square is MSR $= \dfrac{\text{SSR}}{1}$.

Total variation in $Y$ is based on SST. Variation in $Y$ explained by the linear relationship with $X$ is based on SSR. Variation in $Y$ explained by random error is based on SSE.

If $Y_1, Y_2, \ldots, Y_n$ all have the same mean, that is, if $\beta_1 = 0$, then

$$E(\text{SSR}/\sigma^2) = 1 \qquad\text{and}\qquad E\!\left[\frac{\text{SSE}/\sigma^2}{n-2}\right] = 1\,.$$

If $\beta_1 \ne 0$, then

$$E(\text{SSR}/\sigma^2) = E\!\left[\frac{\sum_{i=1}^n (\hat{Y}_i - \bar{Y})^2}{\sigma^2}\right] = E\!\left[\frac{\sum_{i=1}^n [\hat\beta_1(x_i - \bar{x})]^2}{\sigma^2}\right] = \frac{\sum_{i=1}^n (x_i - \bar{x})^2}{\sigma^2}\,E(\hat\beta_1^2) =$$

$$\frac{\sum_{i=1}^n (x_i - \bar{x})^2}{\sigma^2}\left\{\mathrm{Var}(\hat\beta_1) + [E(\hat\beta_1)]^2\right\} = 1 + \frac{\sum_{i=1}^n (x_i - \bar{x})^2}{\sigma^2}\,\beta_1^2 > 1\,.$$

This suggests that if $\beta_1 = 0$, then the ratio of SSR$/\sigma^2$ to $\dfrac{\text{SSE}/\sigma^2}{n-2}$ is expected to be close to one, but if $\beta_1 \ne 0$, then this ratio is expected to be larger than one.

If $\beta_1 = 0$, then since SSR and SSE are independent, the ratio of SSR$/\sigma^2$ to $\dfrac{\text{SSE}/\sigma^2}{n-2}$ must have an $f(1, n-2)$ distribution. Consequently, we can perform the hypothesis test

H$_0$: $\beta_1 = 0$ (the linear relationship (correlation) between $X$ and $Y$ is not significant) vs. H$_1$: $\beta_1 \ne 0$ (the linear relationship (correlation) between $X$ and $Y$ is significant)

by using the test statistic

$$F = \frac{\text{SSR}/\sigma^2}{\dfrac{\text{SSE}/\sigma^2}{n-2}} = \frac{\text{MSR}}{\text{MSE}}\,.$$

We reject H$_0$ (in favor of H$_1$) when $f \ge f_\alpha(1, n-2)$.

The calculations in a regression leading up to the f statistic are often organized into an analysis of variance (ANOVA) table such as the following:

Source        df      SS     MS     f         p-value
Regression    1       SSR    MSR    MSR/MSE
Error         n - 2   SSE    MSE
Total         n - 1   SST

It can be shown that the squared sample correlation is $R^2 = \dfrac{\text{SSR}}{\text{SST}}$, which is often called the proportion of variation in $Y$ explained by $X$. (This is done in Class Exercise #7.) The standard error of estimate is defined to be $S_{XY} = \sqrt{\text{MSE}}$.

1. - continued

(c) Find the sums of squares SSR, SSE, and SST; then construct the ANOVA table and perform the corresponding f test with $\alpha = 0.05$, find and interpret the squared sample correlation, and find the standard error of estimate.

$$\text{SSR} = \sum_{i=1}^n (\hat{Y}_i - \bar{Y})^2 = \hat\beta_1^2\sum_{i=1}^n (x_i - \bar{x})^2 = (1.2)^2(100) = 144$$
$$\text{SST} = \sum_{i=1}^n (Y_i - \bar{Y})^2 = 174.8$$
$$\text{SSE} = \text{SST} - \text{SSR} = 174.8 - 144 = 30.8$$

Source        df   SS      MS       f       p-value
Regression    1    144     144      14.03   0.025 < p < 0.05
Error         3    30.8    10.267
Total         4    174.8

Since $f = 14.03 > f_{0.05}(1,3) = 10.13$, we reject H$_0$. We conclude that the slope in the linear regression of height on stride length is different from zero ($0.025 <$ p-value $< 0.05$), and the results suggest that this slope is positive. (Note: We could alternatively conclude that the linear relationship or correlation is significant, and that the results suggest a positive linear relationship or correlation.)

$R^2 = 144 / 174.8 = 0.824$, so about 82.4% of the variation in height is explained by stride length.

The standard error of estimate is $S_{XY} = \sqrt{10.267} = 3.20$ inches.
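A short NumPy/SciPy sketch (ours) reproducing the sums of squares, f statistic, $R^2$, and standard error of estimate for Class Exercise 1; stats.f.sf returns an exact p-value consistent with the table bracket quoted above.

```python
import numpy as np
from scipy import stats

# Stride/height data from Class Exercise 1.
x = np.array([14, 13, 21, 25, 17], dtype=float)
y = np.array([61, 54, 63, 72, 59], dtype=float)
n = x.size

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b0 = y.mean() - b1 * x.mean()
y_hat = b0 + b1 * x

sst = np.sum((y - y.mean())**2)       # 174.8
ssr = np.sum((y_hat - y.mean())**2)   # 144.0
sse = np.sum((y - y_hat)**2)          # 30.8

msr, mse = ssr / 1, sse / (n - 2)
f = msr / mse                         # about 14.03
p_value = stats.f.sf(f, 1, n - 2)     # consistent with 0.025 < p < 0.05
r_squared = ssr / sst                 # about 0.824
std_error_of_estimate = np.sqrt(mse)  # about 3.20
print(f, p_value, r_squared, std_error_of_estimate)
```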

8.

(a) The prediction of grip strength from age for right-handed males is of interest. It is assumed that for any age $x$ in years, where $10 \le x \le 25$, $Y$ = "grip strength in pounds" has a $N(\mu_x, \sigma^2)$ distribution where $\mu_x = \beta_0 + \beta_1 x$. For a random sample of right-handed males, the following data are recorded:

Age (years)             15  17  19  11  16  22  17  25  12  14  25  23
Grip Strength (lbs.)    50  54  66  46  58  54  64  80  46  70  76  80

Obtain the calculations below from a calculator and from SPSS.

$n = 12$, $\bar{x} = 18$, $\bar{y} = 62$,
$\sum_{i=1}^n (x_i - \bar{x})^2 = 256$, $\sum_{i=1}^n (y_i - \bar{y})^2 = 1728$, $\sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y}) = 512$, $r = +0.770$.

SPSS Descriptive Statistics:
         Mean    Std. Deviation    N
grip     62.00   12.534            12
age      18.00    4.824            12

SPSS Model Summary (a. Predictors: (Constant), age; b. Dependent Variable: grip):
Model    R       R Square    Adjusted R Square    Std. Error of the Estimate
1        .770    .593        .552                 8.390

SPSS Correlations:
                               grip     age
Pearson Correlation    grip    1.000    .770
                       age      .770    1.000
Sig. (1-tailed)        grip       .     .002
                       age      .002      .
N                      grip      12      12
                       age       12      12

8. - continued

(b) Find the equation of the least squares line from a calculator and from SPSS.

$\hat\beta_1 = 2$ and $\hat\beta_0 = 26$, so the least squares line can be written $y = 26 + 2x$.

(c) Write a one-sentence interpretation of the slope in the least squares line and a one-sentence interpretation of the intercept in the least squares line.

Grip strength appears to increase on average by about 2 pounds with each increase of one year in age.

The intercept is the mean grip strength at age zero, which makes no sense in this situation.
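The summary statistics and least squares line for the grip-strength data can be reproduced as follows. This sketch is ours and assumes SciPy; stats.linregress returns the slope, intercept, and sample correlation directly.

```python
import numpy as np
from scipy import stats

# Age (x) and grip strength (y) data from Class Exercise 8.
x = np.array([15, 17, 19, 11, 16, 22, 17, 25, 12, 14, 25, 23], dtype=float)
y = np.array([50, 54, 66, 46, 58, 54, 64, 80, 46, 70, 76, 80], dtype=float)

print(x.mean(), y.mean())                              # 18.0 62.0
print(np.sum((x - x.mean())**2),                       # 256.0
      np.sum((y - y.mean())**2),                       # 1728.0
      np.sum((x - x.mean()) * (y - y.mean())))         # 512.0

result = stats.linregress(x, y)
print(result.slope, result.intercept, result.rvalue)   # 2.0  26.0  0.770...
```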

(d) Find $r^2$, and write a one-sentence interpretation.

$r^2 = 0.593$: about 59.3% of the variation in grip strength is explained by age.

(e) Find the standard error of estimate.

$s = 8.390$

8. - continued

(f) Construct the ANOVA table and perform the corresponding f test with $\alpha = 0.05$.

$$\text{SSR} = \sum_{i=1}^n (\hat{Y}_i - \bar{Y})^2 = \hat\beta_1^2\sum_{i=1}^n (x_i - \bar{x})^2 = (2)^2(256) = 1024$$
$$\text{SST} = \sum_{i=1}^n (Y_i - \bar{Y})^2 = 1728$$
$$\text{SSE} = \text{SST} - \text{SSR} = 1728 - 1024 = 704$$

Source        df   SS      MS      f       p-value
Regression    1    1024    1024    14.55   p < 0.01
Error         10   704     70.4
Total         11   1728

The test statistic is $f = 14.55$. The critical region with $\alpha = 0.05$ is $f \ge 4.96 = f_{0.05}(1,10)$. The p-value is smaller than 0.01 (from the table) or $p = .003$ (from the SPSS output).

SPSS ANOVA (a. Predictors: (Constant), age; b. Dependent Variable: grip):
Model 1       Sum of Squares   df   Mean Square   F        Sig.
Regression    1024.000          1   1024.000      14.545   .003
Residual       704.000         10     70.400
Total         1728.000         11

Since $f = 14.55 > f_{0.05}(1,10) = 4.96$, we reject H$_0$. We conclude that the slope in the linear regression of grip strength on age is different from zero ($p < 0.01$), and the results suggest that this slope is positive. (Note: We could alternatively conclude that the linear relationship or correlation is significant, and that the results suggest a positive linear relationship or correlation.)
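A small sketch (ours, assuming SciPy) that reproduces the f statistic, the critical value $f_{0.05}(1,10)$, and the SPSS p-value of .003 from the summary values above.

```python
import numpy as np
from scipy import stats

# ANOVA f test of H0: beta1 = 0 for the grip-strength regression.
ssr, sse, n = 1024.0, 704.0, 12
msr, mse = ssr / 1, sse / (n - 2)

f = msr / mse                          # about 14.55
f_crit = stats.f.ppf(0.95, 1, n - 2)   # about 4.96
p_value = stats.f.sf(f, 1, n - 2)      # about 0.003
print(f, f_crit, p_value)
```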

4. Suppose $Y_1, Y_2, \ldots, Y_n$ are independent with respective $N(\mu_1, \sigma^2), \ldots, N(\mu_n, \sigma^2)$ distributions. Let $x_1, x_2, \ldots, x_n$ be fixed values not all equal, and suppose that for $i = 1, 2, \ldots, n$, $\mu_i = \beta_0 + \beta_1 x_i$. Show that the maximum likelihood estimator of $\sigma^2$,

$$\frac{\sum_{i=1}^n [Y_i - (\hat\beta_0 + \hat\beta_1 x_i)]^2}{n}\,,$$

is not unbiased; then find a constant multiple of this maximum likelihood estimator which is an unbiased estimator of $\sigma^2$.

$$E\!\left[\frac{\sum_{i=1}^n [Y_i - (\hat\beta_0 + \hat\beta_1 x_i)]^2}{n}\right] = \frac{\sigma^2}{n}\,E\!\left[\frac{\sum_{i=1}^n [Y_i - (\hat\beta_0 + \hat\beta_1 x_i)]^2}{\sigma^2}\right] = \frac{\sigma^2}{n}\,(n-2)$$

(since the quantity inside the last expectation has a $\chi^2(n-2)$ distribution, its expected value is $n-2$). An unbiased estimator of $\sigma^2$ is

$$\frac{n}{n-2}\cdot\frac{\sum_{i=1}^n [Y_i - (\hat\beta_0 + \hat\beta_1 x_i)]^2}{n} = \frac{\sum_{i=1}^n (Y_i - \hat{Y}_i)^2}{n-2} = \frac{\text{SSE}}{n-2} = \text{MSE}\,.$$

Combining the earlier distributional results,

$$\frac{\dfrac{\hat\beta_1 - \beta_1}{\sqrt{\sigma^2\big/\sum_{i=1}^n (x_i - \bar{x})^2}}}{\sqrt{\dfrac{\text{SSE}/\sigma^2}{n-2}}} = \frac{\hat\beta_1 - \beta_1}{\sqrt{\dfrac{\text{MSE}}{\sum_{i=1}^n (x_i - \bar{x})^2}}} \quad\text{has a } t(n-2) \text{ distribution, and}$$

$$\frac{\dfrac{\hat\beta_0 - \beta_0}{\sqrt{\sigma^2\left[\dfrac{1}{n} + \dfrac{\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}}}{\sqrt{\dfrac{\text{SSE}/\sigma^2}{n-2}}} = \frac{\hat\beta_0 - \beta_0}{\sqrt{\text{MSE}\left[\dfrac{1}{n} + \dfrac{\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}} \quad\text{has a } t(n-2) \text{ distribution.}$$

5. Suppose $Y_1, Y_2, \ldots, Y_n$ are independent with respective $N(\mu_1, \sigma^2), \ldots, N(\mu_n, \sigma^2)$ distributions. Let $x_1, x_2, \ldots, x_n$ be fixed values not all equal, and suppose that for $i = 1, 2, \ldots, n$, $\mu_i = \beta_0 + \beta_1 x_i$. For a given value $x_0$, use Theorem 5.5-1 (Class Exercise 5.5-1) to find the distribution of $\hat{Y}|_{x_0} = \hat\beta_0 + \hat\beta_1 x_0$ (the predicted value of $Y$ corresponding to the value $x_0$).

$$\hat{Y}|_{x_0} = \hat\beta_0 + \hat\beta_1 x_0 = \bar{Y} + \hat\beta_1(x_0 - \bar{x}) = \sum_{i=1}^n \frac{Y_i}{n} + (x_0 - \bar{x})\,\frac{\sum_{i=1}^n (x_i - \bar{x})\,Y_i}{\sum_{j=1}^n (x_j - \bar{x})^2} = \sum_{i=1}^n \left[\frac{1}{n} + \frac{(x_i - \bar{x})(x_0 - \bar{x})}{\sum_{j=1}^n (x_j - \bar{x})^2}\right]Y_i$$

has a normal distribution with mean $E(\hat\beta_0 + \hat\beta_1 x_0) = \beta_0 + \beta_1 x_0$ and variance

$$\sum_{i=1}^n \left[\frac{1}{n} + \frac{(x_i - \bar{x})(x_0 - \bar{x})}{\sum_{j=1}^n (x_j - \bar{x})^2}\right]^2 \sigma^2 = \sigma^2\left[\frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right].$$

We can find this with algebra analogous to that in Class Exercise #2(b); the only differences are that the sign inside the brackets was a minus sign there, and the factor $(x_0 - \bar{x})$ there was just $\bar{x}$.

For a given value $x_0$, we define $\hat{Y}|_{x_0} = \hat\beta_0 + \hat\beta_1 x_0$ to be the predicted value of $Y$ corresponding to the value $x_0$. (We assume $\min\{x_1, x_2, \ldots, x_n\} \le x_0 \le \max\{x_1, x_2, \ldots, x_n\}$, since prediction outside the range of the data may not be valid.) This predicted value has a

$$N\!\left(\beta_0 + \beta_1 x_0,\ \ \sigma^2\left[\frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]\right)$$

distribution, and the random variables $\hat{Y}|_{x_0} = \hat\beta_0 + \hat\beta_1 x_0$ and $\dfrac{\sum_{i=1}^n [Y_i - (\hat\beta_0 + \hat\beta_1 x_i)]^2}{\sigma^2}$ are independent. Consequently,

$$\frac{\dfrac{(\hat\beta_0 + \hat\beta_1 x_0) - (\beta_0 + \beta_1 x_0)}{\sqrt{\sigma^2\left[\dfrac{1}{n} + \dfrac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}}}{\sqrt{\dfrac{\text{SSE}/\sigma^2}{n-2}}} = \frac{(\hat\beta_0 + \hat\beta_1 x_0) - (\beta_0 + \beta_1 x_0)}{\sqrt{\text{MSE}\left[\dfrac{1}{n} + \dfrac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}} \quad\text{has a } t(n-2) \text{ distribution.}$$

6.

(a) Suppose $Y_1, Y_2, \ldots, Y_n$ are independent with respective $N(\mu_1, \sigma^2), \ldots, N(\mu_n, \sigma^2)$ distributions. Let $x_1, x_2, \ldots, x_n$ be fixed values not all equal, and suppose that for $i = 1, 2, \ldots, n$, $\mu_i = \beta_0 + \beta_1 x_i$. Derive a $100(1-\alpha)\%$ confidence interval for the slope $\beta_1$.

$$P\left(-\,t_{\alpha/2}(n-2) \le \frac{\hat\beta_1 - \beta_1}{\sqrt{\dfrac{\text{MSE}}{\sum_{i=1}^n (x_i - \bar{x})^2}}} \le t_{\alpha/2}(n-2)\right) = 1 - \alpha$$

$$P\left(\hat\beta_1 - t_{\alpha/2}(n-2)\sqrt{\frac{\text{MSE}}{\sum_{i=1}^n (x_i - \bar{x})^2}} \le \beta_1 \le \hat\beta_1 + t_{\alpha/2}(n-2)\sqrt{\frac{\text{MSE}}{\sum_{i=1}^n (x_i - \bar{x})^2}}\right) = 1 - \alpha$$

(b) If $\beta_1^{(0)}$ is a hypothesized value for $\beta_1$, then derive the test statistic and rejection regions corresponding to the one-sided and two-sided hypothesis tests for testing H$_0$: $\beta_1 = \beta_1^{(0)}$ with significance level $\alpha$.

The test statistic is

$$T = \frac{\hat\beta_1 - \beta_1^{(0)}}{\sqrt{\dfrac{\text{MSE}}{\sum_{i=1}^n (x_i - \bar{x})^2}}}\,.$$

For H$_1$: $\beta_1 < \beta_1^{(0)}$, the rejection region is $t \le -\,t_\alpha(n-2)$.
For H$_1$: $\beta_1 > \beta_1^{(0)}$, the rejection region is $t \ge t_\alpha(n-2)$.
For H$_1$: $\beta_1 \ne \beta_1^{(0)}$, the rejection region is $|t| \ge t_{\alpha/2}(n-2)$.

8. - continued

(g) Perform the t test for H$_0$: $\beta_1 = 0.8$ vs. H$_1$: $\beta_1 \ne 0.8$ with $\alpha = 0.05$.

The test statistic is

$$t = \frac{\hat\beta_1 - 0.8}{\sqrt{\dfrac{\text{MSE}}{\sum_{i=1}^n (x_i - \bar{x})^2}}} = \frac{2 - 0.8}{\sqrt{\dfrac{70.4}{256}}} = 2.288\,.$$

The two-sided critical region with $\alpha = 0.05$ is $|t| \ge 2.228 = t_{0.025}(10)$. The p-value is between 0.02 and 0.05 (from the table).

Since $t = 2.288 > t_{0.025}(10) = 2.228$, we reject H$_0$. We conclude that the slope in the linear regression of grip strength on age is different from 0.8 lbs. ($0.02 < p < 0.05$), and the results suggest that this slope is greater than 0.8 lbs. (Note: We could alternatively conclude that the average change in grip strength is different from 0.8 lbs. per year, and that this change is greater than 0.8 lbs. per year.)
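The same t test can be done numerically. This sketch is ours and assumes SciPy; the exact two-sided p-value it reports falls in the 0.02 to 0.05 range quoted above.

```python
import numpy as np
from scipy import stats

# t test of H0: beta1 = 0.8 for the grip-strength regression (n = 12).
b1_hat, b1_0 = 2.0, 0.8
mse, sxx, n = 70.4, 256.0, 12

t = (b1_hat - b1_0) / np.sqrt(mse / sxx)     # about 2.288
t_crit = stats.t.ppf(1 - 0.05 / 2, n - 2)    # about 2.228
p_value = 2 * stats.t.sf(abs(t), n - 2)      # between 0.02 and 0.05
print(t, t_crit, p_value)
```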

8. - continued

(h) Considering the results of the hypothesis tests in parts (f) and (g), explain why a 95% confidence interval for the slope in the regression would be of interest. Then find and interpret the confidence interval.

Since rejecting H$_0$ in part (f) suggests that the hypothesized zero slope is not correct, and rejecting H$_0$ in part (g) suggests that the hypothesized slope of 0.8 is not correct, a 95% confidence interval will provide us with some information about the value of the slope, which estimates the average change in grip strength with an increase of one year in age.

$$\hat\beta_1 \pm t_{\alpha/2}(n-2)\sqrt{\frac{\text{MSE}}{\sum_{i=1}^n (x_i - \bar{x})^2}} \;=\; 2 \pm (2.228)\sqrt{\frac{70.4}{256}}\,,$$

that is, the endpoints are 0.832 and 3.168.

We are 95% confident that the slope in the regression to predict grip strength from age is between 0.832 and 3.168 lbs.
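A short sketch (ours, assuming SciPy) reproducing this confidence interval from the summary values $\hat\beta_1 = 2$, MSE $= 70.4$, and $\sum (x_i - \bar{x})^2 = 256$.

```python
import numpy as np
from scipy import stats

# 95% confidence interval for the slope in the grip-strength regression.
b1_hat, mse, sxx, n = 2.0, 70.4, 256.0, 12

t_crit = stats.t.ppf(0.975, n - 2)           # about 2.228
margin = t_crit * np.sqrt(mse / sxx)
print(b1_hat - margin, b1_hat + margin)      # about 0.832 and 3.168
```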

6. - continued

(c) Derive a $100(1-\alpha)\%$ confidence interval for the intercept $\beta_0$.

$$P\left(-\,t_{\alpha/2}(n-2) \le \frac{\hat\beta_0 - \beta_0}{\sqrt{\text{MSE}\left[\dfrac{1}{n} + \dfrac{\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}} \le t_{\alpha/2}(n-2)\right) = 1 - \alpha$$

$$P\left(\hat\beta_0 - t_{\alpha/2}(n-2)\sqrt{\text{MSE}\left[\frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]} \le \beta_0 \le \hat\beta_0 + t_{\alpha/2}(n-2)\sqrt{\text{MSE}\left[\frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}\right) = 1 - \alpha$$

(d) If $\beta_0^{(0)}$ is a hypothesized value for $\beta_0$, then derive the test statistic and rejection regions corresponding to the one-sided and two-sided hypothesis tests for testing H$_0$: $\beta_0 = \beta_0^{(0)}$ with significance level $\alpha$.

The test statistic is

$$T = \frac{\hat\beta_0 - \beta_0^{(0)}}{\sqrt{\text{MSE}\left[\dfrac{1}{n} + \dfrac{\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}}\,.$$

For H$_1$: $\beta_0 < \beta_0^{(0)}$, the rejection region is $t \le -\,t_\alpha(n-2)$.
For H$_1$: $\beta_0 > \beta_0^{(0)}$, the rejection region is $t \ge t_\alpha(n-2)$.
For H$_1$: $\beta_0 \ne \beta_0^{(0)}$, the rejection region is $|t| \ge t_{\alpha/2}(n-2)$.

8. - continued

(i) Perform the t test for H$_0$: $\beta_0 = 0$ vs. H$_1$: $\beta_0 \ne 0$ with $\alpha = 0.05$.

The test statistic is

$$t = \frac{\hat\beta_0 - 0}{\sqrt{\text{MSE}\left[\dfrac{1}{n} + \dfrac{\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}} = \frac{26 - 0}{\sqrt{70.4\left[\dfrac{1}{12} + \dfrac{18^2}{256}\right]}} = 2.668\,.$$

The two-sided critical region with $\alpha = 0.05$ is $|t| \ge 2.228 = t_{0.025}(10)$. The p-value is between 0.02 and 0.05 (from the table) or $p = .024$ (from the SPSS output).

Since $t = 2.668 > t_{0.025}(10) = 2.228$, we reject H$_0$. We conclude that the intercept in the linear regression of grip strength on age is different from zero ($0.02 < p < 0.05$), and the results suggest that this intercept is positive.

8. - continued

(j) Considering the results of the hypothesis test in part (i), explain why a 95% confidence interval for the intercept in the regression would be of interest. Then find and interpret the confidence interval.

Since rejecting H$_0$ in part (i) suggests that the hypothesized zero intercept is not correct, a 95% confidence interval will provide us with some information about the value of the intercept.

$$\hat\beta_0 \pm t_{\alpha/2}(n-2)\sqrt{\text{MSE}\left[\frac{1}{n} + \frac{\bar{x}^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]} \;=\; 26 \pm (2.228)\sqrt{70.4\left[\frac{1}{12} + \frac{18^2}{256}\right]}\,,$$

that is, the endpoints are 4.288 and 47.712.

We are 95% confident that the intercept in the regression to predict grip strength from age is between 4.288 and 47.712 lbs.
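The analogous check for the intercept interval, again a sketch of ours assuming SciPy.

```python
import numpy as np
from scipy import stats

# 95% confidence interval for the intercept in the grip-strength regression.
b0_hat, mse, sxx, n, xbar = 26.0, 70.4, 256.0, 12, 18.0

t_crit = stats.t.ppf(0.975, n - 2)
margin = t_crit * np.sqrt(mse * (1 / n + xbar**2 / sxx))
print(b0_hat - margin, b0_hat + margin)   # about 4.288 and 47.712
```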

6. - continued

(e) For a given value $x_0$, we can call $E(Y \mid x_0) = \beta_0 + \beta_1 x_0$ the mean of $Y$ corresponding to the value $x_0$, and an unbiased estimator of this mean is $\hat{Y}|_{x_0} = \hat\beta_0 + \hat\beta_1 x_0$ (from Class Exercise #5). Derive a $100(1-\alpha)\%$ confidence interval for $E(Y \mid x_0) = \beta_0 + \beta_1 x_0$.

$$P\left(-\,t_{\alpha/2}(n-2) \le \frac{(\hat\beta_0 + \hat\beta_1 x_0) - (\beta_0 + \beta_1 x_0)}{\sqrt{\text{MSE}\left[\dfrac{1}{n} + \dfrac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}} \le t_{\alpha/2}(n-2)\right) = 1 - \alpha$$

$$P\left((\hat\beta_0 + \hat\beta_1 x_0) - t_{\alpha/2}(n-2)\sqrt{\text{MSE}\left[\frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]} \le \beta_0 + \beta_1 x_0 \le (\hat\beta_0 + \hat\beta_1 x_0) + t_{\alpha/2}(n-2)\sqrt{\text{MSE}\left[\frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}\right) = 1 - \alpha$$

(f) If $\mu_0$ is a hypothesized value for $E(Y \mid x_0) = \beta_0 + \beta_1 x_0$, then derive the test statistic and rejection regions corresponding to the one-sided and two-sided hypothesis tests for testing H$_0$: $E(Y \mid x_0) = \mu_0$ with significance level $\alpha$.

The test statistic is

$$T = \frac{(\hat\beta_0 + \hat\beta_1 x_0) - \mu_0}{\sqrt{\text{MSE}\left[\dfrac{1}{n} + \dfrac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}}\,.$$

For H$_1$: $E(Y \mid x_0) < \mu_0$, the rejection region is $t \le -\,t_\alpha(n-2)$.
For H$_1$: $E(Y \mid x_0) > \mu_0$, the rejection region is $t \ge t_\alpha(n-2)$.
For H$_1$: $E(Y \mid x_0) \ne \mu_0$, the rejection region is $|t| \ge t_{\alpha/2}(n-2)$.

8. - continued

(k) Perform the t test for

H$_0$: The mean grip strength for 20 year old right-handed males is 80 lbs.
vs.
H$_1$: The mean grip strength for 20 year old right-handed males is different from 80 lbs.

with $\alpha = 0.05$, that is, H$_0$: $E(Y \mid x = 20) = 80$ vs. H$_1$: $E(Y \mid x = 20) \ne 80$.

The test statistic is

$$t = \frac{(\hat\beta_0 + \hat\beta_1 x_0) - \mu_0}{\sqrt{\text{MSE}\left[\dfrac{1}{n} + \dfrac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}} = \frac{66 - 80}{\sqrt{70.4\left[\dfrac{1}{12} + \dfrac{(20-18)^2}{256}\right]}} = -5.304\,.$$

The two-sided critical region with $\alpha = 0.05$ is $|t| \ge 2.228 = t_{0.025}(10)$. The p-value is less than 0.01 (from the table).

Since $|t| = 5.304 > t_{0.025}(10) = 2.228$, we reject H$_0$. We conclude that the mean grip strength for 20 year old right-handed males is different from 80 lbs. ($p < 0.01$), and the results suggest that this mean is less than 80 lbs.

8. - continued

(l) Considering the results of the hypothesis test in part (k), explain why a 95% confidence interval for the mean grip strength for 20 year old right-handed males would be of interest. Then find and interpret the confidence interval.

Since rejecting H$_0$ in part (k) suggests that the mean grip strength for 20 year old right-handed males is not 80 lbs., a 95% confidence interval will provide us with some information about this mean.

$$(\hat\beta_0 + \hat\beta_1 x_0) \pm t_{\alpha/2}(n-2)\sqrt{\text{MSE}\left[\frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]} \;=\; 66 \pm (2.228)\sqrt{70.4\left[\frac{1}{12} + \frac{(20-18)^2}{256}\right]}\,,$$

that is, the endpoints are 60.12 and 71.88.

We are 95% confident that the mean grip strength for 20 year old right-handed males is between 60.12 and 71.88 lbs.

(Return to #6, part (g).)
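A sketch of ours (assuming SciPy) reproducing the confidence interval for the mean grip strength at $x_0 = 20$ from the summary values used above.

```python
import numpy as np
from scipy import stats

# 95% confidence interval for E(Y | x0 = 20), the mean grip strength at age 20.
b0_hat, b1_hat = 26.0, 2.0
mse, sxx, n, xbar, x0 = 70.4, 256.0, 12, 18.0, 20.0

fit = b0_hat + b1_hat * x0                        # 66.0
se = np.sqrt(mse * (1 / n + (x0 - xbar)**2 / sxx))
t_crit = stats.t.ppf(0.975, n - 2)
print(fit - t_crit * se, fit + t_crit * se)       # about 60.12 and 71.88
```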

(g) For a given value $x_0$, let $Y_0$ be a random variable independent of $Y_1, Y_2, \ldots, Y_n$ and having a $N(\mu_0, \sigma^2)$ distribution where $\mu_0 = \beta_0 + \beta_1 x_0$, that is, $Y_0$ is a "new" random observation. Use Theorem 5.5-1 (Class Exercise 5.5-1) to derive a $100(1-\alpha)\%$ prediction interval for the value of $Y_0$.

$Y_0 - (\hat\beta_0 + \hat\beta_1 x_0)$ has a normal distribution with

mean $\beta_0 + \beta_1 x_0 - (\beta_0 + \beta_1 x_0) = 0$

and variance

$$\sigma^2 + \sigma^2\left[\frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right] = \sigma^2\left[1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right].$$

Consequently,

$$\frac{\dfrac{Y_0 - (\hat\beta_0 + \hat\beta_1 x_0)}{\sqrt{\sigma^2\left[1 + \dfrac{1}{n} + \dfrac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}}}{\sqrt{\dfrac{\text{SSE}/\sigma^2}{n-2}}} = \frac{Y_0 - (\hat\beta_0 + \hat\beta_1 x_0)}{\sqrt{\text{MSE}\left[1 + \dfrac{1}{n} + \dfrac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}} \quad\text{has a } t(n-2) \text{ distribution.}$$

6. (g) - continued

$$P\left(-\,t_{\alpha/2}(n-2) \le \frac{Y_0 - (\hat\beta_0 + \hat\beta_1 x_0)}{\sqrt{\text{MSE}\left[1 + \dfrac{1}{n} + \dfrac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}} \le t_{\alpha/2}(n-2)\right) = 1 - \alpha$$

$$P\left((\hat\beta_0 + \hat\beta_1 x_0) - t_{\alpha/2}(n-2)\sqrt{\text{MSE}\left[1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]} \le Y_0 \le (\hat\beta_0 + \hat\beta_1 x_0) + t_{\alpha/2}(n-2)\sqrt{\text{MSE}\left[1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]}\right) = 1 - \alpha$$

8. - continued

(m) Find and interpret a 95% prediction interval for the grip strength for a 20 year old right-handed male.

$$(\hat\beta_0 + \hat\beta_1 x_0) \pm t_{\alpha/2}(n-2)\sqrt{\text{MSE}\left[1 + \frac{1}{n} + \frac{(x_0 - \bar{x})^2}{\sum_{i=1}^n (x_i - \bar{x})^2}\right]} \;=\; 66 \pm (2.228)\sqrt{70.4\left[1 + \frac{1}{12} + \frac{(20-18)^2}{256}\right]}\,,$$

that is, the endpoints are 46.40 and 85.60.

We are 95% confident that the grip strength for a randomly selected 20-year old right-handed male will be between 46.40 and 85.60 lbs.

OR

At least 95% of 20-year old right-handed males have a grip strength between 46.40 and 85.60 lbs.
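The prediction interval differs from the preceding confidence interval only through the extra "1 +" in the variance factor. The sketch below (ours, assuming SciPy) reproduces the 46.40 and 85.60 endpoints.

```python
import numpy as np
from scipy import stats

# 95% prediction interval for the grip strength of a new 20 year old subject.
b0_hat, b1_hat = 26.0, 2.0
mse, sxx, n, xbar, x0 = 70.4, 256.0, 12, 18.0, 20.0

fit = b0_hat + b1_hat * x0                                 # 66.0
se_pred = np.sqrt(mse * (1 + 1 / n + (x0 - xbar)**2 / sxx))
t_crit = stats.t.ppf(0.975, n - 2)
print(fit - t_crit * se_pred, fit + t_crit * se_pred)      # about 46.40 and 85.60
```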

(n) For what age group of right-handed males will the confidence interval for mean grip strength and the prediction interval for a particular grip strength both have the smallest length?

18 year olds (both interval widths are smallest when $x_0 = \bar{x} = 18$).

6. - continued

(h) Consider the two-sided hypothesis test H$_0$: $\beta_1 = 0$ vs. H$_1$: $\beta_1 \ne 0$ with significance level $\alpha$, which is one of the hypothesis tests defined in part (b). Prove that the square of the t test statistic for this hypothesis test is equal to the f test statistic in the one-way ANOVA.

$$\left(\frac{\hat\beta_1 - 0}{\sqrt{\dfrac{\text{MSE}}{\sum_{i=1}^n (x_i - \bar{x})^2}}}\right)^2 = \frac{\hat\beta_1^2\sum_{i=1}^n (x_i - \bar{x})^2}{\text{MSE}} = \frac{\text{MSR}}{\text{MSE}}\,.$$

We see this from the derivation in Class Exercise #3, where SSR $= \sum_{i=1}^n [\hat\beta_1(x_i - \bar{x})]^2 = \hat\beta_1^2\sum_{i=1}^n (x_i - \bar{x})^2$ and MSR = SSR/1.

7. Suppose $Y_1, Y_2, \ldots, Y_n$ are independent with respective $N(\mu_1, \sigma^2), \ldots, N(\mu_n, \sigma^2)$ distributions. Let $x_1, x_2, \ldots, x_n$ be fixed values not all equal, and suppose that for $i = 1, 2, \ldots, n$, $\mu_i = \beta_0 + \beta_1 x_i$. Prove that

$$R^2 = \frac{\text{SSR}}{\text{SST}}\,.$$

$$R^2 = R^2\,\frac{S_Y^2}{s_x^2}\cdot\frac{s_x^2}{S_Y^2} = \frac{\left(R\,\dfrac{S_Y}{s_x}\right)^2 (n-1)\,s_x^2}{(n-1)\,S_Y^2} = \frac{\hat\beta_1^2\sum_{i=1}^n (x_i - \bar{x})^2}{\sum_{i=1}^n (Y_i - \bar{Y})^2} = \frac{\text{SSR}}{\text{SST}}\,.$$


8. - continued

(o) Use SPSS to graph the least squares line on a scatter plot, and comment on how appropriate the linear model seems to be.

The linear model appears to be reasonable.

Page 65: Sections 6.7, 6.8, 7.7 (Note: The approach used here to present the material in these sections is substantially different from the approach used in the
Page 66: Sections 6.7, 6.8, 7.7 (Note: The approach used here to present the material in these sections is substantially different from the approach used in the

9. Voters in a state are surveyed. Each respondent is assigned an identification number (ID), and the following information about each is recorded: sex, area of residence (RES), political party affiliation (POL), number of children (CHL), age, yearly income (INC), job satisfaction score (JSS) where 0 = totally dissatisfied and 10 = totally satisfied, weekly hours spent watching television (TVH), and weekly hours spent listening to radio (RAD). The resulting data is as follows:

ID  SEX  RES       POL          CHL  AGE  INC      JSS  TVH  RAD

08 M Suburban Republican 5 35 34,000 5 15 15

27 F Urban Democrat 2 20 28,000 2 20 13

34 F Suburban Independent 4 35 71,000 7 18 17

18 M Rural Independent 7 41 35,000 8 12 20

04 M Urban Republican 3 39 55,000 4 14 15

14 M Urban Democrat 3 59 75,000 1 11 18

23 F Urban Democrat 1 20 26,000 2 10 11

39 M Rural Other 4 52 30,000 9 12 30

54 F Rural Other 2 44 27,000 7 8 14

44 M Urban Republican 0 46 53,000 0 21 10

17 F Urban Republican 2 40 45,000 3 12 14


12 F Suburban Other 2 34 34,000 5 8 10

26 F Urban Republican 2 24 30,000 2 11 15

11 M Urban Democrat 4 62 78,000 1 18 11

38 M Suburban Other 3 44 68,000 7 17 12

09 F Rural Republican 6 44 29,000 9 8 27

29 F Suburban Democrat 4 38 40,000 9 9 25

13 M Rural Independent 9 47 39,000 6 8 20

24 M Urban Democrat 3 44 60,000 2 27 4

33 F Urban Democrat 1 45 49,000 3 9 12

15 F Rural Other 2 56 39,000 8 6 27

52 F Suburban Republican 0 32 33,000 5 11 15

35 M Suburban Other 1 54 65,000 3 14 14

30 F Rural Independent 3 41 25,000 7 5 25

47 M Suburban Republican 6 50 61,000 5 15 15

32 F Rural Republican 2 59 41,000 8 8 27

41 F Suburban Other 3 44 44,000 3 10 18

10 M Rural Other 3 62 45,000 10 10 23

48 M Suburban Republican 2 53 64,000 1 13 15

02 M Rural Democrat 8 59 39,000 8 9 24

The data is stored in the SPSS data file survey with income entered in units of thousands of dollars. The prediction of yearly income from age is of interest, and the 30 individuals selected for the data set are treated as a random sample for simple linear regression. It is assumed that for any age $x$ in years, where $20 \le x$, $Y$ = "yearly income" has a $N(\mu_x, \sigma^2)$ distribution with $\mu_x = \beta_0 + \beta_1 x$.

9. - continued

(a) Use the Analyze > Regression > Linear options in SPSS to select the variable income for the Dependent slot and select the variable age for the Independent(s) section.

To have the mean and standard deviation displayed for the dependent and independent variables, click on the Statistics button, and select the Descriptives option.

To add predicted values and residuals to the data file, click on the Save button, select the Unstandardized option in the Predicted Values section, and select the Unstandardized option in the Residuals section.

Do this exercise for homework.

10. Use the fact that the random variable $\dfrac{\text{SSE}}{\sigma^2}$ has a $\chi^2(n-2)$ distribution to derive a $100(1-\alpha)\%$ confidence interval for $\sigma^2$.

$$P\left(\chi^2_{1-\alpha/2}(n-2) < \frac{\text{SSE}}{\sigma^2} < \chi^2_{\alpha/2}(n-2)\right) = 1 - \alpha$$

$$P\left(\frac{1}{\chi^2_{1-\alpha/2}(n-2)} > \frac{\sigma^2}{\text{SSE}} > \frac{1}{\chi^2_{\alpha/2}(n-2)}\right) = 1 - \alpha$$

$$P\left(\frac{\text{SSE}}{\chi^2_{1-\alpha/2}(n-2)} > \sigma^2 > \frac{\text{SSE}}{\chi^2_{\alpha/2}(n-2)}\right) = 1 - \alpha$$

$$P\left(\frac{\text{SSE}}{\chi^2_{\alpha/2}(n-2)} < \sigma^2 < \frac{\text{SSE}}{\chi^2_{1-\alpha/2}(n-2)}\right) = 1 - \alpha$$
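A sketch of ours (assuming SciPy) computing this interval for the grip-strength regression, where SSE = 704 and n = 12; stats.chi2.ppf returns the chi-square quantiles used as the critical values.

```python
import numpy as np
from scipy import stats

# 95% confidence interval for sigma^2 using SSE = 704 with n - 2 = 10 df.
sse, n, alpha = 704.0, 12, 0.05

chi2_upper = stats.chi2.ppf(1 - alpha / 2, n - 2)  # area alpha/2 to its right
chi2_lower = stats.chi2.ppf(alpha / 2, n - 2)      # area alpha/2 to its left
print(sse / chi2_upper, sse / chi2_lower)          # lower and upper limits for sigma^2
```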