
Introduction to Hypothesis Testing, Power Analysis and Sample Size Calculations

Hypothesis testing is one of the most widely used statistical techniques. Most often, scientists are interested in estimating the differences between two populations and use the sample means as the statistic of interest. For this reason, the normal distribution is the one most often used to estimate the probabilities of interest. Drawing conclusions about populations from data is termed inference. Scientists wish to draw inferences about the populations they are interested in from the data they have collected.

Power calculations can also be important when we fail to reject the null hypothesis of no effect or no significant difference. This is especially true in the regulatory community, where failing to reject the null hypothesis of no effect is unconvincing unless accompanied by a power analysis showing that, if there were an effect, the sample size was large enough to detect it.

This lecture will explore the basic concepts behind power analysis using the normal assumption. Power and sample size calculations can be more complicated when using other distributions, but the basic idea is the same.

    1. Distribution of the Sample Mean

Most hypothesis testing is conducted using the sample mean as the statistic of interest to estimate the true population mean. Consider a sample of size n of random variables X₁, X₂, …, Xₙ, with E{Xᵢ} = μ and Var{Xᵢ} = σ². Let

    x̄ = (1/n) ∑_{i=1}^{n} Xᵢ.

Then,

    E{x̄} = E{(1/n) ∑_{i=1}^{n} Xᵢ} = (1/n)(nμ) = μ

and

    Var{x̄} = Var{(1/n) ∑_{i=1}^{n} Xᵢ} = (1/n²)(nσ²) = σ²/n.


If Xᵢ ~ N(μ, σ²), then we know from earlier results that x̄ ~ N(μ, σ²/n).

Additionally, even if the data do not come from a normal distribution,

    lim_{n→∞} P{ √n (x̄ − μ)/σ ≤ x } = Φ(x).

Hence, even if our data are not normal, for a large enough sample size we can calculate probabilities for x̄ by applying the Central Limit Theorem, and our answers will be close enough.
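This convergence is easy to check numerically. The sketch below is an illustration, not part of the lecture; the exponential distribution and the sample size n = 50 are arbitrary choices:

```python
import numpy as np

# Draw many samples of size n from a skewed (exponential) distribution,
# standardize each sample mean, and compare a tail frequency with the
# standard normal value predicted by the Central Limit Theorem.
rng = np.random.default_rng(42)
n, reps = 50, 100_000
mu, sigma = 1.0, 1.0          # mean and sd of an Exponential(1) variable

samples = rng.exponential(scale=1.0, size=(reps, n))
z = np.sqrt(n) * (samples.mean(axis=1) - mu) / sigma

# P{z <= 1.645} should be close to Phi(1.645) = 0.95
print(round(float(np.mean(z <= 1.645)), 3))
```

Even at n = 50 the skew of the parent distribution leaves a small discrepancy, which shrinks as n grows.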

    2. Hypothesis Testing

Hypothesis testing is a formal statistical procedure that attempts to answer the question: is the observed sample consistent with a given hypothesis? This boils down to calculating the probability of the sample given the hypothesis, P{X|H}. To set up the procedure, scientists propose what is called a null hypothesis. The null hypothesis is usually of the form: these data were generated by strictly random processes, with no underlying mechanism. The null hypothesis is always the opposite of the hypothesis that we are interested in. The scientist will then set up a hypothesis test to compare the null hypothesis to the mechanistic, scientific hypothesis consistent with their scientific theory.

Example 1. Examples of null hypotheses are:

1. no difference in the response of patients to a drug versus a placebo,

2. no difference in the contamination level of well-water near a paper mill and a well some distance away,

3. no difference between the leukemia rate in Woburn, Massachusetts and the national average, and

4. the concentration of mercury in the groundwater is below the regulatory limit.

A hypothesis test is usually represented as follows:

    H₀: μ = μ₀   vs.   Hₐ: μ ≠ μ₀


The null and alternative hypotheses should be specified before the test is conducted and before the data are observed. The investigators also need to specify a value for P{X|H₀} at which they will reject H₀. The idea is that if the data are quite unlikely under the null hypothesis, then we conclude that they are inconsistent with the null, and hence accept the alternative. Notice that the null and the alternative are mutually exclusive and exhaustive; that is, one or the other must be true, but it is impossible that both are.

The probability that we reject the null when it is true is denoted α and is called the size of the test. Its complement, 1 − α, is called the confidence level of the test, though sometimes you will see these terms used interchangeably.

Note that for some α = P{X|H₀} small enough, we reject H₀. Hence α = P{we reject H₀ when H₀ is true}, also called the probability of a Type I error.

The probability of a Type II error is given by β = P{we fail to reject H₀ when H₀ is false}.

Table 1: Possible Results from a Hypothesis Test

                          Truth
    Test          H₀                Hₐ
    H₀            OK                Type II Error
    Hₐ            Type I Error      OK
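The Type I error rate can be made concrete with a quick simulation. This sketch is not from the lecture; the standard normal population and the one-sided 5% cutoff are illustrative assumptions:

```python
import numpy as np

# Simulate many experiments in which H0 is TRUE (mu = 0, sigma = 1),
# run a one-sided z-test on each, and count the false rejections.
rng = np.random.default_rng(7)
n, reps = 25, 100_000
cutoff = 1.645                               # z cutoff for a 5% one-sided test

samples = rng.normal(loc=0.0, scale=1.0, size=(reps, n))
z = np.sqrt(n) * samples.mean(axis=1)        # standardized sample means

type1_rate = float(np.mean(z > cutoff))      # should be close to alpha = 0.05
print(round(type1_rate, 3))
```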

    3

  • 8/14/2019 Hypo%26PowerLecture

    4/8

The values under the normal curve that are equal to or more extreme than our test statistic constitute the rejection region.

Let's begin with an example. Say that regulators desire a high certainty that emissions are below 5 parts per billion for a particular contaminant, and the regulatory limit is 8 parts per billion. They may conduct the following test:

    H₀: μ < 5 ppb   versus   Hₐ: μ ≥ 5 ppb

At what value will we reject H₀? Say we would like to reject the null hypothesis at the 95% confidence level. This means we wish to fix the probability of falsely rejecting H₀ (Type I error) at no greater than 5%. Here, under H₀, μ can be fixed at μ = 5 without altering the size (confidence level) of the test. Now we need to find the rejection region, i.e. the value of x̄ at which we can reject H₀ at 95% confidence.

We need to find a c such that

    Pr{x̄ > c | μ = 5} = 0.05.                            (2.1)

Now, since the standard deviation is taken to be 3 and the sample size is 5, we can standardize x̄ under H₀ so that it has a standard normal distribution (a mean of 0 and a standard deviation of 1), and then we can make use of the standard normal probability charts. We have

    Pr{x̄ > c | H₀} = Pr{ (x̄ − 5)/(3/√5) > (c − 5)/(3/√5) }
                    = Pr{ z > (c − 5)/(3/√5) },           (2.2)

where z is a standard normal random variable. Then, from our probability charts, we know that

    (c − 5)/(3/√5) = 1.645.                               (2.3)

Solving for c, we find that we reject H₀ when x̄ ≥ 7.2.
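The cutoff can also be checked without tables. A minimal sketch using Python's standard library (the variable names are mine, not the lecture's):

```python
from math import sqrt
from statistics import NormalDist

# Critical value c with Pr{xbar > c | mu = 5} = 0.05,
# for sigma = 3 and n = 5: c = mu0 + z_{0.95} * sigma / sqrt(n).
mu0, sigma, n = 5.0, 3.0, 5

z_95 = NormalDist().inv_cdf(0.95)      # approx. 1.645
c = mu0 + z_95 * sigma / sqrt(n)

print(round(c, 2))                     # approx. 7.21, i.e. reject H0 when xbar >= 7.2
```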


    3. Power Calculations

Let's continue with our example. In the event that the managers fail to reject H₀, that is, they conclude that there is insufficient evidence that emissions are above 5 ppb, they and their stakeholders may want to ask the question: was there sufficient information in our sample (i.e. is the lack of evidence due to insufficient sample size) to have detected a difference of 3 ppb? Hence, they need to calculate the power of the test when μ = 8 ppb.

The power of a test is defined as the probability that we correctly reject the null hypothesis, given that a particular alternative is true. Power can also be defined as

    1 − Pr{we do not reject H₀ when H₀ is false} = 1 − Pr{Type II error}.

In order to calculate power, we need to specify an alternative, and we require an estimate of the variability of the statistic used to conduct the test; in most cases this statistic is the sample mean. The standard deviation of the sample mean is given by the sample standard deviation divided by the square root of the sample size:

    σ_x̄ = σ_x / √n.                                      (3.1)

Continuing with the example, say we wish to calculate the power of the above test for a normal sample of size 5 and with known standard deviation 3. As with the size calculation, for the purposes of this power calculation, under Hₐ, μ can be fixed at μ = 8. The test statistic is the sample mean. We will reject the null hypothesis for some value of x̄. This value can be easily calculated using elementary statistics, since we have made the assumption that the sample mean is normally distributed. This means that we assume that if we were to repeat the experiment a large number of times and calculated the mean each time, the resulting sample of means would show a normal distribution.

We know that the rejection region was x̄ ≥ 7.2. We can calculate the power of this test at some alternative, say μ = 8.


We need

    Pr{x̄ > 7.2 | μ = 8} = Pr{ (x̄ − 8)/(3/√5) > (7.2 − 8)/(3/√5) }
                         = Pr{ z > −0.5963 } = 0.7245.    (3.2)

The power of this test at the specified alternative is then 0.7245. Alternatively, we can say that the probability of Type II error, or the probability that we fail to reject the null when the true mean is 8, is 1 − 0.7245 = 0.2755. We can conduct a full power analysis by plotting the power at a wide variety of alternatives, or distances from μ₀, assuming that the standard deviation remains constant across all concentrations.
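The same arithmetic can be done in code. A sketch with Python's standard library (variable names are my own):

```python
from math import sqrt
from statistics import NormalDist

# Power of the test at the alternative mu_a = 8:
# Pr{xbar > 7.2 | mu = 8} with sigma = 3, n = 5.
mu_a, sigma, n, cutoff = 8.0, 3.0, 5, 7.2

z = (cutoff - mu_a) / (sigma / sqrt(n))   # approx. -0.5963
power = 1 - NormalDist().cdf(z)           # Pr{z > -0.5963}

print(round(power, 4))                    # power, approx. 0.7245
print(round(1 - power, 4))                # Type II error, approx. 0.2755
```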

[Figure: Power curve for n = 5, σ = 3. Power (rising from about 0.2 to 1.0) is plotted against the alternative mean concentration in parts per billion, from 6 to 14.]
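The values behind such a curve can be generated directly; a sketch, again stdlib-only and with assumed variable names:

```python
from math import sqrt
from statistics import NormalDist

# Power over a range of alternatives, as in the power curve above:
# Pr{xbar > 7.2 | mu = mu_a} for sigma = 3, n = 5.
sigma, n, cutoff = 3.0, 5, 7.2

powers = {mu_a: 1 - NormalDist().cdf((cutoff - mu_a) / (sigma / sqrt(n)))
          for mu_a in range(6, 15, 2)}

for mu_a, p in powers.items():
    print(mu_a, round(p, 3))
```

Power climbs toward 1 as the alternative moves away from the null value, which is exactly the shape the figure shows.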

    4. Sample Size Calculations

Even better than performing a power analysis after an experiment has been conducted is to perform it before any data are collected. Careful experimental design can save untold hours and dollars from being wasted. As Quinn & Keough point out, too often a post hoc power calculation reveals nothing more than our inability to design a decent experiment. If we have any reasonable estimate of the variability and a scientifically justifiable and interesting alternative, or even a range of alternatives, we can estimate beforehand whether or not the experiment is worth doing given the limitations on our time and budget.

Say we would like to set the probability of a Type I error at no greater than 5% and of a Type II error at no greater than 10%. What sample size would we need for the test shown above? We saw that we reject H₀ when

    x̄ ≥ z_{1−α} σ/√n + μ₀.

Now consider the desired power. We need to repeat the same process as we did above for the α level, but this time solving for the cutoff using the z value for the corresponding power:

    x̄ ≤ z_β σ/√n + μₐ.

Now recall that z_β = −z_{1−β}. Setting the two expressions for x̄ equal to one another, we have

    z_{1−α} σ/√n + μ₀ = −z_{1−β} σ/√n + μₐ.

Letting 1 − α be the confidence level we desire and 1 − β be the power, with z_{1−α} and z_{1−β} being the corresponding z values, solving for n in the above equation yields

    n = σ² ( (z_{1−α} + z_{1−β}) / (μₐ − μ₀) )².          (4.1)

So, for the example above, from a standard normal probability chart we have z_{1−α} = 1.645 and z_{1−β} = 1.282. For this test, μₐ = 8 and μ₀ = 5, yielding

    n = 9 ( (1.645 + 1.282) / 3 )² = 8.56.                (4.2)

So we need a sample of size 9 to achieve the desired confidence and power for this experiment.
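Equation (4.1) translates directly to code. A sketch using Python's standard library (the function and variable names are my own):

```python
from math import ceil
from statistics import NormalDist

def sample_size(mu0, mu_a, sigma, alpha=0.05, beta=0.10):
    """Sample size for a one-sided z-test of H0: mu = mu0 vs Ha: mu = mu_a
    with Type I error <= alpha and Type II error <= beta (equation 4.1)."""
    z_a = NormalDist().inv_cdf(1 - alpha)      # z_{1-alpha}
    z_b = NormalDist().inv_cdf(1 - beta)       # z_{1-beta}
    n_exact = (sigma * (z_a + z_b) / (mu_a - mu0)) ** 2
    return n_exact, ceil(n_exact)              # exact value, rounded up

n_exact, n = sample_size(mu0=5, mu_a=8, sigma=3)
print(round(n_exact, 2), n)                    # approx. 8.56, rounded up to 9
```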


Of course, often we will have no preliminary data from which to estimate a standard deviation. In this case, we must use a conservative best guess for the variance. In practice, we may also not know exactly what is a scientifically meaningful alternative. However, as practitioners of science we should be working to move our community towards more careful planning of experiments and more careful thinking about our questions before we begin the experiment.

    5. References

1. Pagano, M. and K. Gauvreau, 1993. Principles of Biostatistics. Duxbury Press, Belmont, California.

2. Quinn, Gerry P. and Michael J. Keough, 2002. Experimental Design and Data Analysis for Biologists. Cambridge University Press, Cambridge.