Lab Manual: Nuclear Physics


Contents

1 STATISTICS & ERROR ANALYSIS 4
    1.1 Probability 4
        1.1.1 Random Variables 5
    1.2 Uncertainty in Measurement 6
        1.2.1 Uncertainty, Accuracy & Precision 6
        1.2.2 Systematic & Random Errors 8
        1.2.3 Significant Digits 9
    1.3 Statistical Analysis of Data 10
        1.3.1 Histograms & Distribution 10
        1.3.2 Parent & Sample Distribution 13
        1.3.3 Mean & Deviation 13
    1.4 Distributions
        1.4.1 Binomial Distribution 19
        1.4.2 Poisson Distribution 24
        1.4.3 Normal Distribution 30
    1.5 Error Estimation 38
        1.5.1 Propagation of Errors 39
    1.6 Estimation and Error of the Mean 51
        1.6.1 Method of Maximum Likelihood 51
        1.6.2 Estimated Error in the Mean 53
    1.7 Method of Least Squares 55
    1.8 Goodness of Fit

2 RADIOACTIVITY 61
    2.1 Radioactivity
        2.1.1 Measure of radioactivity 61
        2.1.2 Activity Law & Half Life
    2.2 Nuclear Decay
        2.2.1 Alpha Decay
        2.2.2 Beta Decay
        2.2.3 Gamma Decay

3 INTERACTION WITH MATTER 72
    3.1 Introduction
        3.1.1 Cross Section
    3.2 Interaction of Charged Particles with Matter 74
        3.2.1 Interaction of Heavy charged particle with matter 75
        3.2.2 Interaction with matter of electrons 83
        3.2.3 Interaction of gamma rays with matter 87


    MANUAL FOR M.Sc.(P) NUCLEAR PHYSICS LAB. 2

4 G-M COUNTER 96
    4.1 Introduction
    4.2 Detector Models
    4.3 Ionisation of Gases
        4.3.1 Townsend Avalanche 100
        4.3.2 Kinds of Detectors & Detector Regions 102
    4.4 GM Counter
        4.4.1 Geiger Discharge
        4.4.2 Quenching
        4.4.3 Dead Time & Recovery Time 108
        4.4.4 Geiger Counting Plateau & Operating Voltage 111
        4.4.5 Counting Efficiency

5 Experiment: GM Characteristics 116
    5.1 Introduction
    5.2 Precautions
        5.2.1 Health Effects of Radiation 116
    5.3 Experiment
        5.3.1 Purpose
        5.3.2 Method
        5.3.3 Sample Data

6 Experiment: GM Counter: Counting Efficiency for β & γ rays 131
    6.1 Introduction
    6.2 Experiment
        6.2.1 Purpose
        6.2.2 Method
        6.2.3 Sample Data
        6.2.4 Error Estimation 139

7 Experiment: Absorption of γ rays in Iron 142
    7.1 Introduction
    7.2 Experiment
        7.2.1 Purpose
        7.2.2 Method
        7.2.3 Sample Data

8 Experiment: Verification of the Inverse Square Law for γ rays 148
    8.1 Introduction
    8.2 Experiment
        8.2.1 Purpose
        8.2.2 Method
        8.2.3 Sample Data

9 Experiment: To Determine the Range of β rays in Aluminum and to Determine the End Point Energy 155
    9.1 Introduction
    9.2 Experiment
        9.2.1 Purpose
        9.2.2 Method
        9.2.3 Sample Data


10 Experiment: Scintillation Counter 163
    10.1 Introduction
    10.2 Theory
        10.2.1 Inorganic Scintillators 164
        10.2.2 Organic Scintillators 167
        10.2.3 Photomultiplier Tube 168

Index 172


Chapter 1

    STATISTICS & ERROR ANALYSIS

    1.1 Probability

Since we are going to be dealing with probability in the subsequent sections, let us give a brief background. The probability of an event refers to the likelihood that the event will occur. Mathematically, the probability that an event will occur is expressed as a number between 0 and 1. The sum of the probabilities of all possible outcomes in any statistical experiment is always 1, a statement of the fact that something will certainly happen. Let us illustrate how one can calculate probabilities.

Consider first the case of an experiment with n possible outcomes which are each equally likely. Now if we take a subset r of these and call them successes, then clearly the probability of success in the experiment is r/n. Thus, if there are 10 balls in a bag, 7 white and 3 black, then the probability of getting a black ball if one is picked out at random from the bag is 3/10.

There is another approach to probability where one talks about relative frequencies. Suppose I count the number of cars passing a particular point on a road in a particular interval of time and note how many of them are white. Suppose on the first day I see 5 white cars out of a total of 20 cars, on the second day I count 9 white cars out of 30, on the third day I find 3 white cars out of 5, and so on. Clearly, the relative frequency of white cars is different on different days. However, one could find that if I repeat this experiment a great many times, the relative frequency settles down to, say, 0.26. The Law of Large Numbers says that the relative frequency of an event will converge on the probability of that event as the number of trials increases.
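The Law of Large Numbers is easy to watch in action numerically. A quick sketch (drawing from the bag of the previous paragraph, 3 black balls out of 10, so the relative frequency should settle near 0.3):

```python
import random

random.seed(42)  # fix the seed so repeated runs give the same draws

def relative_frequency(n_trials, p_black=0.3):
    """Draw n_trials balls with replacement from a bag that is 30% black
    and return the relative frequency of black."""
    hits = sum(1 for _ in range(n_trials) if random.random() < p_black)
    return hits / n_trials

# The relative frequency fluctuates for few trials
# but converges on the probability p = 0.3 as trials increase.
for n in (10, 100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```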

Some definitions in probability theory are useful:

    1. Two events are mutually exclusive or disjoint if they cannot occur at the same time.

2. The probability that Event A occurs, given that Event B has occurred, is called a conditional probability. The conditional probability of Event A, given Event B, is denoted by the symbol P(A|B).

3. The complement of an event is the event not occurring. The probability that Event A will not occur is denoted by P(A′).

4. The probability that Events A and B both occur is the probability of the intersection of A and B. The probability of the intersection of Events A and B is denoted by P(A ∩ B). If Events A and B are mutually exclusive, P(A ∩ B) = 0.

5. The probability that Event A or Event B occurs is the probability of the union of A and B. The probability of the union of Events A and B is denoted by P(A ∪ B).

6. If the occurrence of Event A changes the probability of Event B, then Events A and B are dependent. On the other hand, if the occurrence of Event A does not change the probability of Event B, then Events A and B are independent.

These definitions allow us to write down the rules for probability.

Subtraction: The probability that event A will occur is equal to 1 minus the probability that event A will not occur:

P(A) = 1 − P(A′)

Multiplication: The probability that Events A and B both occur is equal to the probability that Event A occurs times the probability that Event B occurs, given that A has occurred:

P(A ∩ B) = P(A) P(B|A)

Addition: The probability that Event A or Event B occurs is equal to the probability that Event A occurs plus the probability that Event B occurs minus the probability that both Events A and B occur:

P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = P(A) + P(B) − P(A) P(B|A)

These rules are fairly obvious intuitively, and the easiest way to prove them is to use Venn diagrams, where the results quoted above are immediately clear.
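The rules can also be verified by brute-force enumeration of a small sample space. A sketch (the die and the particular events chosen are illustrative, not from the manual):

```python
from fractions import Fraction

# Sample space: one roll of a fair die; all six outcomes equally likely.
space = [1, 2, 3, 4, 5, 6]

def prob(event):
    """P(event) = favourable outcomes / total outcomes, as an exact fraction."""
    return Fraction(sum(1 for o in space if event(o)), len(space))

A = lambda o: o % 2 == 0   # the roll is even
B = lambda o: o > 3        # the roll is greater than 3

p_A = prob(A)
p_B = prob(B)
p_A_and_B = prob(lambda o: A(o) and B(o))   # outcomes {4, 6}
p_A_or_B = prob(lambda o: A(o) or B(o))     # outcomes {2, 4, 5, 6}

# Subtraction rule: P(A') = 1 - P(A)
assert prob(lambda o: not A(o)) == 1 - p_A
# Multiplication rule: P(A and B) = P(A) P(B|A), with P(B|A) = P(A and B)/P(A)
assert p_A_and_B == p_A * (p_A_and_B / p_A)
# Addition rule: P(A or B) = P(A) + P(B) - P(A and B)
assert p_A_or_B == p_A + p_B - p_A_and_B
```

Using `Fraction` keeps every probability exact, so the rules hold as identities rather than merely to floating-point precision.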

    1.1.1 Random Variables

When the value of a variable is determined by a chance event, that variable is called a random variable. Random variables can be discrete or continuous.


    Discrete Random Variable

Within a range of numbers, discrete variables can take on only certain values. Suppose, for example, that we flip a coin and count the number of heads. The number of heads will be a value between 0 and +∞. Within that range, though, the number of heads can take only certain values. For example, the number of heads can only be a whole number, not a fraction. Therefore, the number of heads is a discrete variable. And because the number of heads results from a random process (flipping a coin), it is a discrete random variable.

    Continuous Random Variable

Continuous variables, in contrast, can take on any value within a range of values. For example, suppose we randomly select an individual from a population. Then, we measure the age of that person. In theory, his/her age can take on any value between 0 and +∞, so age is a continuous variable. In this example, the age of the person selected is determined by a chance event; so, in this example, age is a continuous random variable.

Note that discrete variables can be finite or infinite. Thus, for instance, the number of heads in repeated coin flips can be infinite, while the number of aces that I can draw from a deck of cards is finite (0, 1, 2, 3, 4). Continuous variables can always take an infinite number of values, while only some discrete variables can.

1.2 Uncertainty in Measurement

1.2.1 Uncertainty, Accuracy & Precision

All measurements that we make have some degree of uncertainty. This uncertainty might come from a variety of sources, about which we will talk later. But the fact that needs to be emphasised is that all measurements have some uncertainty, and the analysis of this uncertainty is what we call error analysis. Any measured value that we quote must be accompanied by our estimate of the level of certainty or confidence associated with that measurement. This is absolutely essential, since without it the basic question of science, namely "does the result of our experiment agree with the theory?", cannot be answered; and it is by answering this question that we decide whether a proposed theory is valid.

When we carry out an experiment to measure a quantity, we of course assume that some 'true' or exact value exists. We may or may not know this value, but we always attempt to find the best value possible given the limitations of our experimental setup. Typically, every time we repeat the experiment we will find a slightly different value, and so the question is: how do we report our best estimate of this 'true' value? Usually, this is done as


measurement = best estimate ± uncertainty

For example, let us assume you want to find the weight of your mobile phone. By simply holding it in your hand, you can estimate it to be between 100 and 200 grams. But that is not good enough. So you go to a balance in the laboratory and it gives you a reading of 145.55 grams. This value is much more precise than the original estimate you obtained, but how does one know that it is accurate? One way is to repeat your measurement several times; suppose you get the values 145.59, 145.53 and 145.51 grams. Then one could say that the weight of the phone is 145.55 ± 0.04 grams. But now suppose you go to another balance and find a value of 144.15 grams. Now one is faced with a problem, since your original best estimate is very different from this measurement. So what does one do?
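As a sketch of how such a quoted value is obtained in practice (anticipating the statistical tools of Section 1.3, and using the four readings above): the best estimate is the mean of the readings, and the uncertainty is of the order of their standard deviation, which indeed comes out close to the quoted 0.04 g.

```python
import statistics

# The four repeated weighings from the example above, in grams.
readings = [145.55, 145.59, 145.53, 145.51]

best = statistics.mean(readings)     # best estimate: the average reading
spread = statistics.stdev(readings)  # sample standard deviation

print("best estimate:", round(best, 3), "g")   # about 145.545 g
print("spread:", round(spread, 3), "g")        # about 0.034 g
```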

To understand this, we first need to understand the difference between precision and accuracy.

    Precision & Accuracy

Accuracy is how close the measured value is to the true or the accepted value of a quantity.

Precision is a measure of how well the result can be measured, irrespective of the true value. It reflects the degree of consistency and agreement between repeated, independent measurements of the same quantity, as well as the reproducibility of the results.

Any statement of uncertainty associated with a measurement must include factors which affect both accuracy and precision. After all, it is a waste of time to determine a result which is very precise but highly inaccurate, or its converse, a result which is very accurate but highly imprecise.

In our example above, we have no way of knowing whether our result is accurate unless we compare it with a known standard. For instance, we could use a standard weight to determine whether the balances used in our measurement are accurate.

Precision is often reported experimentally as the relative uncertainty, defined as

Relative Uncertainty = uncertainty / measured value    (1.1)

Thus in our example, the relative uncertainty is

Relative Uncertainty = 0.04 / 145.55 = 0.027%


Accuracy, on the other hand, is usually reported as the relative error, which is defined as

Relative Error = (measured value − expected value) / expected value

In our example above, if we take the expected value to be 145.50 grams, then the relative error is

Relative Error = (145.55 − 145.50) / 145.50 = 0.05 / 145.50 = 0.034%
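The two figures of merit are simple to compute; a small sketch reproducing the numbers of this example:

```python
measured = 145.55      # grams, our best estimate
uncertainty = 0.04     # grams, spread of the repeated readings
expected = 145.50      # grams, the accepted/standard value

relative_uncertainty = uncertainty / measured         # a measure of precision
relative_error = (measured - expected) / expected     # a measure of accuracy

print(f"relative uncertainty = {relative_uncertainty:.3%}")  # 0.027%
print(f"relative error       = {relative_error:.3%}")        # 0.034%
```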

Thus we see that any measurement needs to be both precise and accurate for it to be good. The idea of making good measurements is directly related to errors in measurement. Errors in measurement can be broadly classified into two categories: random and systematic.

    1.2.2 Systematic & Random Errors

Systematic errors are errors which make the results obtained by us differ from the "true" value of the quantity under consideration. They are reproducible inaccuracies which are difficult to detect and which cannot be analysed statistically. An important thing to realise is that systematic errors cannot be reduced by repeated measurements.

Random errors are fluctuations in the observations which are statistical in nature. They can be analysed statistically, and furthermore they can be reduced by making repeated measurements and taking averages, as we shall see later.

To illustrate this distinction, think of the following experiment. Suppose I wish to find the time period of a pendulum by timing some number of oscillations with the help of a stop watch. There could be several sources of error. One source of error will be my reaction time, that is, the time between my seeing the pendulum bob reach the extreme position and my starting the watch, and again, at a later point, between my observing the bob and my stopping the watch. Obviously, if my reaction time were always exactly the same, these delays wouldn't matter, since they would cancel. However, we know that in practice my reaction time will vary. I may delay more in starting, and so underestimate the time of an oscillation; or I may delay more in stopping, and so overestimate it. Since either possibility is equally likely, the sign of the effect is random. If I repeat the measurement several times, I will sometimes overestimate and sometimes underestimate. Thus, my variable reaction time will show up as a variation in the answers found. By analysing the spread in results statistically, I can get a very reliable estimate of this kind of error.

Now suppose that the watch I use runs slow or fast. Then, no matter how many times I repeat the experiment (of course with the same watch), I can never know the amount of such an error. Further, the error's sign will always be the same: either the watch is fast or it is slow, leading to either a consistent overestimate or a consistent underestimate of the period. This is an example of a systematic error.

In general, there is no set prescription for eliminating systematic errors; mostly one has to use common sense to detect whether any systematic errors are present and to eliminate them. Random errors, on the other hand, are usually easier to study and to eliminate or reduce. But one should remember that in many situations the accuracy of a measurement is dominated by possible systematic errors in the instrument rather than by the precision with which we can make the measurement.
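The pendulum example can be simulated to make the distinction concrete. A hedged sketch (the 2.00 s true period, the 0.10 s reaction-time spread and the 0.05 s watch bias are invented numbers): averaging many repeated timings beats down the random reaction-time error, but leaves the systematic watch error untouched.

```python
import random
import statistics

random.seed(1)

TRUE_PERIOD = 2.00   # s, assumed true value of the pendulum period
REACTION_SD = 0.10   # s, spread of the random reaction-time error
WATCH_BIAS = 0.05    # s, systematic error: this watch always reads long

def measure_period():
    # Start and stop delays would cancel exactly if they were equal,
    # but in practice each fluctuates randomly.
    start_delay = random.gauss(0, REACTION_SD)
    stop_delay = random.gauss(0, REACTION_SD)
    return TRUE_PERIOD + (stop_delay - start_delay) + WATCH_BIAS

trials = [measure_period() for _ in range(10_000)]
mean = statistics.mean(trials)

# The mean converges on TRUE_PERIOD + WATCH_BIAS, not on TRUE_PERIOD:
# averaging removes the random error but not the systematic one.
print(f"mean of 10000 trials: {mean:.3f} s (true value {TRUE_PERIOD} s)")
```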

    To summarise

    Systematic & Random Errors

Systematic errors are reproducible inaccuracies that are difficult to detect and cannot be analysed statistically. If a systematic error is identified when calibrating against a standard, applying a correction or correction factor to compensate for the effect can reduce the bias.

Random errors are statistical fluctuations in the measured data due to the precision limitations of the measurement device. Random errors can be evaluated through statistical analysis and can be reduced by averaging over a large number of observations.

The fundamental aim of an experimentalist is to reduce as many sources of error as s/he can, and then to keep track of those errors that can't be eliminated.

1.2.3 Significant Digits

Whenever one writes down the result of an experiment, the precision of the experiment is normally indicated by the number of digits with which one reports the result. The number of significant figures depends on how precise the given data is. The following rules are helpful:

Significant Figures

1. The leftmost NONZERO digit is ALWAYS the MOST significant digit.

2. When there is NO decimal point, the rightmost NONZERO digit is the least significant digit.

3. In case of a decimal point, the rightmost digit is the least significant digit, EVEN IF IT IS A ZERO.


4. The number of digits between the most and least significant digits, inclusive, is the number of significant digits.

Thus, for instance, 22.00, 2234, 22340000 and 2200. all have four significant digits. When one is adding, subtracting, multiplying or dividing numbers, the result should be quoted with the least number of significant figures found in any of the quantities entering the operation. In your intermediate calculations, always keep ONE MORE significant digit than is needed in the final answer. Also, when quoting an experimental result, the number of significant figures should be one more than is suggested by the experimental precision.

Two things that always need to be avoided are:

1. Writing more digits in an answer (intermediate or final) than are justified by the number of digits in the data.

2. Rounding off, say, to two digits in an intermediate answer, and then writing three digits in the final answer.

While dropping figures from a number, the last digit that one keeps should be rounded off for better accuracy. This is usually done by truncating the number as desired and then treating the extra digits (which are to be dropped) as a decimal fraction. Then, if the fraction is greater than 1/2, increment the truncated least significant figure by one. If the fraction is smaller than 1/2, do nothing. If the fraction is exactly 1/2, increment the least significant digit only if it is odd.
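This last rule is the round-half-to-even convention, which happens to be what Python's built-in round() implements, so the behaviour can be sketched directly:

```python
from decimal import Decimal, ROUND_HALF_EVEN

# A trailing fraction of exactly 1/2 rounds the kept digit to the nearest
# even value, i.e. the kept digit is incremented only if it is odd.
print(round(12.5))      # -> 12  (kept digit 2 is even: leave it)
print(round(13.5))      # -> 14  (kept digit 3 is odd: increment)
print(round(0.125, 2))  # -> 0.12 (0.125 is exactly representable in binary)

# Caution: most decimal fractions are NOT exact in binary floating point,
# so use the decimal module when exact half-even rounding matters.
print(Decimal("2.675").quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))  # -> 2.68
```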

    1.3 Statistical Analysis of Data

    1.3.1 Histograms & Distribution

The fundamental problem in reporting the results of an experiment is to estimate the uncertainty in a measurement. It is reasonable to think that the reliability of an estimate of the uncertainty in a measurement can be improved if the measurement is repeated many times. The first problem in reporting the results of many repeated measurements is to find a concise way to record and display the values obtained.

Suppose we measured the weights of all the new one rupee coins minted since they were introduced. It is clear that not all the coins will have the same weight. The actual weight will depend on several things, including when the coin was minted, how much it has been in use, etc. One way to display the results of our measurement is a histogram, as shown in Figure 1.1. This is for a sample of 25 coins, and we have divided the weights into bins of width ∆ = 0.01 gm.


    Figure 1.1: Binned histogram

We have plotted the data in such a way that the fraction of measurements that fall in each bin is given by the area of the rectangle above the bin. That is to say, the height P(k) of the kth bin is such that

Area = P(k) × ∆ = fraction of measurements in the kth bin

Thus, for instance, the area of the rectangle between 2.50 and 2.51 is 20 × 0.01 = 0.2. This means that 20% of the coins fall in this weight range.
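The construction of such a normalised histogram can be sketched in a few lines (the coin weights below are invented illustrative data, not the sample of Figure 1.1):

```python
from collections import Counter

# Hypothetical weights (grams) of ten coins, and a bin width of 0.01 g.
weights = [2.504, 2.507, 2.513, 2.498, 2.502, 2.509, 2.511, 2.503, 2.506, 2.501]
DELTA = 0.01

# Assign each weight to the bin [k*DELTA, (k+1)*DELTA).
counts = Counter(int(w / DELTA) for w in weights)

# Choose the height P(k) so that P(k) * DELTA = fraction of data in bin k.
N = len(weights)
P = {k: (n / N) / DELTA for k, n in counts.items()}

for k in sorted(P):
    lo = k * DELTA
    print(f"[{lo:.2f}, {lo + DELTA:.2f}): height {P[k]:.0f}, fraction {P[k] * DELTA:.1f}")

# The bin areas (fractions) sum to 1, as they must.
assert abs(sum(p * DELTA for p in P.values()) - 1.0) < 1e-9
```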

We can see that such a plot gives us a good way to represent how the weights of the coins in our sample are distributed. In most experiments, as the number of measurements increases, the histogram begins to take on a definite simple shape, and as the number of measurements approaches infinity, the distribution approaches some definite continuous curve: the so-called limiting distribution, as in Fig. 1.2.

    Figure 1.2: Limiting Distribution

An important concept that we will need to understand is that of a probability distribution. A probability distribution is a table or an equation that links each outcome of a statistical experiment with its probability of occurrence. Recall that when the value of a variable is the outcome of a statistical experiment, it is a random variable. Thus, for instance, we can think of the statistical experiment of tossing a coin twice. We can get 4 possible outcomes: HH, HT, TH and TT. Now let the variable X represent the number of heads in this experiment. Then X is a random variable, and it can take 3 values: 0, 1 and 2. We can construct a table of the values x of the random variable X and the probability associated with each value.

x    p
0    0.25
1    0.50
2    0.25

Table 1.1: Discrete Probability Distribution

This is a probability distribution. Clearly, there will be discrete and continuous probability distributions, depending on whether the variable is discrete or continuous. In the coin-toss example above, the variable X is a discrete variable, and hence this is a discrete probability distribution. Examples of discrete distributions are the Binomial distribution and the Poisson distribution, which we shall examine shortly.
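Table 1.1 can be reproduced by enumerating the sample space, as a quick sketch:

```python
from itertools import product
from collections import Counter

# All 4 equally likely outcomes of tossing a coin twice.
outcomes = list(product("HT", repeat=2))

# X = number of heads; tally how often each value of X occurs.
counts = Counter(o.count("H") for o in outcomes)

# Divide by the number of outcomes to get the probability of each value.
distribution = {x: counts[x] / len(outcomes) for x in sorted(counts)}
print(distribution)  # {0: 0.25, 1: 0.5, 2: 0.25}
```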

On the other hand, if the random variable is continuous, then the probability distribution associated with it is called a continuous probability distribution. Note that a continuous probability distribution differs from a discrete probability distribution in that, for a continuous distribution, the probability that the variable assumes any particular exact value is zero, and hence it can't be represented by a table. Instead, we represent it with a function. Such a function is called a probability distribution function.

    A probability distribution function has the following properties.

Since the continuous random variable is defined over a continuous range of values (called the domain of the variable), the graph of the density function will also be continuous over that range.

Since the continuous random variable can take an infinite number of values, the probability that it takes a specific value, say a, is zero.

Furthermore, the area bounded by the curve of the density function and the x-axis is equal to 1, when computed over the domain of the variable.

Finally, the probability that a random variable assumes a value between a and b is equal to the area under the density function bounded by a and b. Note that the area below the line x = a, say, is equal to the probability that the variable X takes any value less than or equal to a.

An example of a continuous probability distribution that we will study is the Normal or Gaussian distribution.
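These properties can be checked numerically for a concrete density. A sketch using the standard normal density (the areas are computed with simple midpoint-rule integration, assumed adequate here):

```python
import math

def gaussian(x, mu=0.0, sigma=1.0):
    """The normal (Gaussian) probability density function."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def area(f, a, b, n=100_000):
    """Midpoint-rule approximation to the area under f between a and b."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Total area under the density is 1 (here over a wide finite range,
# since the tails beyond +/- 10 sigma are negligible).
print(area(gaussian, -10, 10))   # ~ 1.0

# P(a < X < b) is the area between a and b.
print(area(gaussian, -1, 1))     # ~ 0.683, the familiar 1-sigma probability

# The probability of any single exact value is zero (zero-width area).
print(area(gaussian, 1.0, 1.0))  # 0.0
```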

    1.3.2 Parent & Sample Distribution

Any measurement of a quantity is usually expected to approximate the quantity, though not be exactly equal to it. We have already seen that, because of random errors, we expect some discrepancy between measurements, and so we expect every measurement to be different. However, as we increase the number of measurements, we see that the data become more and more closely distributed around the correct value of the quantity being measured. (Of course, all this is true only if we can neglect or correct for systematic errors.)

Suppose we make an infinite number of measurements. Then, in principle, we would know the exact nature of the distribution of the data points. If we had such a hypothetical distribution, we could determine the probability of getting any particular value in a single measurement. This hypothetical distribution is called the parent distribution. Thus, in the example above, the parent distribution for the one rupee coins minted in a particular period would be the weights of ALL such coins. In practice, however, we always have a finite set of measured values, as in the example above where we carried out the measurements on a sample of 25 coins. The distributions obtained from samples of the parent distribution are called sample distributions. Of course, in the limit of infinite observations, the sample distribution becomes the parent distribution.

We can define the probability distribution function P(x), which is normalised to unit area. This function is defined in such a way that for the limiting distribution (that is, in the limit that the number of observations N is very large), the number of observations of the variable x between x and x + ∆x is given by

    ∆ N = NP (x)∆ x

    1.3.3 Mean & Deviation

The parent and the sample distributions discussed above can be characterised by several quantities. We can define the mean of the sample distribution in exactly the same way as we understand it: as the average value of the quantity. Thus the mean of the sample distribution, x̄, is

x̄ ≡ (1/N) Σ_{i=1}^{N} x_i        (1.2)

where the x_i are the different observed values of the variable x. Clearly, the mean of the parent distribution, µ, is


µ ≡ lim_{N→∞} (1/N) Σ_{i=1}^{N} x_i        (1.3)

If the measurement of interest can be made with high precision, the majority of the values obtained will be very close to the true value of x, and the limiting distribution will be narrowly peaked about the value µ. In contrast, if the measurement of interest is of low precision, the values found will be widely spread and the distribution will be broad, but still centered on the value µ. Thus we see that the breadth of the distribution not only provides us with a very visual representation of the uncertainty in our measurement, but also defines another important measure of the distribution. How can we characterise this measure?

The most often used parameter for characterising the dispersion is called the standard deviation, σ. We can define the variance σ² of the parent distribution as

σ² = lim_{N→∞} (1/N) Σ_{i=1}^{N} (x_i − µ)²        (1.4)

which is easily seen to be

σ² = lim_{N→∞} [(1/N) Σ_{i=1}^{N} x_i²] − µ²        (1.5)

from the definition of µ in Eq(1.3).

    These are the measures of dispersion for the parent distribution. What about the sample distribution?

The variance here is defined in an analogous way, except that the factor in the denominator is N − 1 instead of N.

s² = [1/(N − 1)] Σ_{i=1}^{N} (x_i − x̄)²        (1.6)

Note that as N approaches ∞, N − 1 and N are the same. For any finite N, however, the difference arises because, although the initial set of N measurements were all independent, in calculating x̄ we have used up one independent piece of information, leaving only N − 1 independent deviations. The rigorous proof of this statement is a bit tricky, though not required for our purposes.

The importance of these two parameters, the mean µ and the standard deviation σ (or variance σ²), is that this is precisely the information we are trying to extract from the experiment that we perform. For the sample distribution, s² characterises the uncertainty in our experimental determination of the true value. As we shall see, the uncertainty in determining the mean of the parent distribution is proportional to the standard deviation. Thus we conclude that for distributions which are a result of statistical or random errors, these two parameters describe the distribution well.


How do we determine the mean and standard deviation of distributions? For this, we define a quantity called the expectation value. The expectation value of any function f(x) of x is defined as the weighted average of f(x) over all the values of x, weighted by the probability density function p(x).

Thus, the mean is the expectation value of x, and the variance is the expectation value of the square of the deviations from µ. For a discrete distribution, from Eq(1.3), we need to replace the observed values x_i by a sum over the values of possible observations multiplied by the number of times we expect the observation to occur. Thus

µ ≡ E(X) = lim_{N→∞} (1/N) Σ_{i=1}^{N} x_i = lim_{N→∞} (1/N) Σ_j [x_j N P(x_j)] = Σ_j x_j P(x_j)        (1.7)

    In a similar way, the variance can be written as

σ² = lim_{N→∞} Σ_j (x_j − µ)² P(x_j) = lim_{N→∞} [Σ_j x_j² P(x_j) − µ²] = E(X²) − E(X)²        (1.8)

    For a continuous distribution, the analogous quantities are

µ ≡ E(X) = ∫_{−∞}^{∞} x p(x) dx        (1.9)

    and

σ² = ∫_{−∞}^{∞} (x − µ)² p(x) dx = ∫_{−∞}^{∞} x² p(x) dx − µ² = E(X²) − E(X)²        (1.10)

Example 1.3.3.1
I throw a die and get 1 rupee if it shows 1, 2 rupees if it shows 2, 3 rupees if it shows 3, and so on. What is the amount of money I can expect if I throw the die 150 times?

For one throw, the expected value is

E(X) = Σ x_i P(x_i)

The probability of getting any of the digits in one roll is 1/6. Thus

E(X) = 1 × (1/6) + 2 × (1/6) + 3 × (1/6) + 4 × (1/6) + 5 × (1/6) + 6 × (1/6) = 7/2


Thus if I roll the die 150 times, my expected payoff is 150 × 7/2 = 525 rupees.
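This expectation can be cross-checked numerically; a minimal sketch using Python's standard fractions module (the variable names are ours, not the manual's):

```python
from fractions import Fraction

# fair die: each face 1..6 has probability 1/6
p = Fraction(1, 6)
one_throw = sum(x * p for x in range(1, 7))   # E(X) = 7/2
payoff_150 = 150 * one_throw                  # expected payoff in 150 throws

print(one_throw, payoff_150)   # 7/2 525
```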

Example 1.3.3.2
I am given a probability distribution as below:

X     P(X)
8     1/4
12    1/6
16    3/8
20    1/5
24    1/8

Find the variance of this distribution.

We know that the variance is given by Eq(1.8). But to use this, we first need to find the expectation value of x, or the mean. This is

E(X) = Σ x_i P(x_i) = 8 × (1/4) + 12 × (1/6) + 16 × (3/8) + 20 × (1/5) + 24 × (1/8) = 17

Now

σ² = Σ [x_i − E(X)]² P(x_i) = 32.71
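The mean and variance of this discrete distribution can be checked with a few lines of Python (a sketch; the table entries are taken as the fractions above):

```python
from fractions import Fraction as F

# probability table of Example 1.3.3.2
dist = {8: F(1, 4), 12: F(1, 6), 16: F(3, 8), 20: F(1, 5), 24: F(1, 8)}

mean = sum(x * p for x, p in dist.items())                    # E(X)
variance = sum((x - mean) ** 2 * p for x, p in dist.items())  # E[(X - mu)^2]

print(mean, float(variance))   # 17, ~32.72
```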

Example 1.3.3.3
At a pediatrician's clinic, the age x (in years) of the children coming to the clinic is given by the following probability distribution function:

f(x) = (3/4) x(2 − x)    0 < x < 2
     = 0                 otherwise        (1.11)

If on a particular day 100 children are brought to the clinic, how many are expected to be under 16 months old?

16 months is 4/3 years. So the probability of finding a child under 16 months is given by the area under the curve of the probability distribution function between 0 and 4/3. This is


P(x < 4/3) = ∫_0^{4/3} f(x) dx = (3/4) ∫_0^{4/3} x(2 − x) dx = (3/4)(80/81) = 20/27        (1.12)

Thus for 100 children, the number we expect to be under 16 months is

100 × (20/27) ≈ 74

What about the mean age of the children brought to the clinic? For this, we need to use Eq(1.9). Thus

E(X) = ∫_0^2 x f(x) dx = ∫_0^2 x (3/4) x(2 − x) dx = 1        (1.13)

    This result is not surprising if we try to see how the distribution looks graphically as inFigure 1.3.


Figure 1.3: Graph of the function in Eq(1.11)

We can see that the distribution is symmetrical about x = 1, and therefore the mean must be in the middle of the range, that is at x = 1. This is always true and can be used whenever we know that the distribution is symmetrical.

    Finally, what about the variance of the distribution of the age of the children?

    We know that the variance for a continuous distribution is given by Eq(1.10). Thus

σ² = E(X²) − E(X)²

    But

E(X²) = ∫_0^2 x² f(x) dx = ∫_0^2 x² (3/4) x(2 − x) dx = 6/5        (1.14)

    Thus

σ² = 6/5 − 1² = 1/5
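All three integrals of this example can be verified numerically; a sketch using composite Simpson's rule (no external libraries assumed):

```python
def f(x):
    # pdf of Eq.(1.11): f(x) = (3/4) x (2 - x) on (0, 2)
    return 0.75 * x * (2.0 - x)

def simpson(g, a, b, n=2000):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) * g(a + k * h)
    return s * h / 3.0

p_under_16_months = simpson(f, 0.0, 4.0 / 3.0)                    # 20/27
mean = simpson(lambda x: x * f(x), 0.0, 2.0)                      # 1
variance = simpson(lambda x: x * x * f(x), 0.0, 2.0) - mean ** 2  # 1/5
```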


    1.4 Distributions

We have already seen that the results of a statistical experiment result in a distribution (either discrete or continuous). We will be interested in three kinds of distributions: the Binomial, the Poisson and the Gaussian or Normal distribution. It is important to note where these distributions are used.

The Gaussian distribution is the one which we encounter most frequently, since it describes the distribution of random observations in many experiments.

The Poisson distribution is generally used for counting experiments of the kind used in nuclear physics, where the data is the number of events per unit interval. It is important in the study of random processes like radioactivity, and applies whenever we sort data into bins to get a histogram.

Finally, the binomial distribution is a discrete distribution which is used whenever the possible number of final states is small. This is true, for instance, in coin tossing experiments or even in scattering experiments in particle or nuclear physics.

    1.4.1 Binomial Distribution

    A Binomial or Bernoulli trial is basically a statistical experiment which has the following properties:

1. The experiment consists of n repeated trials.

2. Each trial can result in just two possible outcomes. We call one of these outcomes a success and the other a failure.

3. The probability of success, denoted by p, is the same on every trial.

4. The trials are independent, that is, the outcome of one trial does not affect the outcome of other trials.

A typical example would be repeated tosses of a coin, counting the number of heads that turn up. Clearly the properties mentioned above are satisfied.

We can define a binomial random number as the number of successes, say x, in a binomial experiment with n trials. The probability distribution of such a binomial random number is called the binomial distribution. We have already seen an example of such a distribution in the discussion of the discrete distribution, where the probability distribution is given as a table in Table 1.1.


We can find an expression for the probability P(x; n) of x successes in a binomial experiment with n trials by analysing an experiment of coin tosses. Suppose we want to know the probability of x coins showing heads and n − x coins showing tails. We know that there are nCx different combinations in which we can get this set of observations. In each of these combinations, the probability of x heads coming up is p^x, which in this case is (1/2)^x, and the probability of n − x tails is (1 − p)^{n−x} = q^{n−x}, which here is (1/2)^{n−x}. With this, we can write down the probability P(x; n, p) of getting x successes in an experiment with n trials, each with probability of success p, as

P(x; n, p) = nCx p^x q^{n−x} = [n!/(x!(n − x)!)] p^x (1 − p)^{n−x}        (1.15)
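Eq(1.15) translates directly into code; a minimal sketch (function name ours) using math.comb for nCx:

```python
import math

def binomial_pmf(x, n, p):
    """P(x; n, p) of Eq.(1.15): nCx p^x (1-p)^(n-x)."""
    return math.comb(n, x) * p ** x * (1.0 - p) ** (n - x)

# sanity checks: the pmf sums to 1 over x = 0..n, and its mean and
# variance reproduce np and np(1-p)
n, p = 10, 0.5
probs = [binomial_pmf(x, n, p) for x in range(n + 1)]
mean = sum(x * q for x, q in zip(range(n + 1), probs))
var = sum(x * x * q for x, q in zip(range(n + 1), probs)) - mean ** 2
```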

    Mean of Binomial Distribution

To find the mean of the binomial distribution, recall that the mean is just the expectation value of the random variable. In this case, the random variable is x, and the probability distribution function Eq(1.15) is the probability of x successes in n independent trials when the probability of success in each trial is p. Thus, by the definition of the expectation value, we have

E(x) = Σ_{x=0}^{n} x P(x; n, p)
     = Σ_{x=0}^{n} x [n!/(x!(n − x)!)] p^x (1 − p)^{n−x}
     = Σ_{x=1}^{n} [n!/((x − 1)!(n − x)!)] p^x (1 − p)^{n−x}        (1.16)

    since the x = 0 term does not contribute. Now substitute m = n −1 and y = x −1. Then

E(x) = Σ_{y=0}^{m} [(m + 1)!/(y!(m − y)!)] p^{y+1} (1 − p)^{m−y}
     = (m + 1) p Σ_{y=0}^{m} [m!/(y!(m − y)!)] p^y (1 − p)^{m−y}
     = np Σ_{y=0}^{m} [m!/(y!(m − y)!)] p^y (1 − p)^{m−y}        (1.17)

    But we know that the binomial theorem states that


(p + q)^m = Σ_{y=0}^{m} [m!/(y!(m − y)!)] p^y q^{m−y}

Thus

Σ_{y=0}^{m} [m!/(y!(m − y)!)] p^y (1 − p)^{m−y} = (p + 1 − p)^m = 1

and so

E(x) = np        (1.18)

    Variance of Binomial Distribution

Recall that the variance of a distribution is defined as the difference of the expectation value of the square of the random variable and the square of the expectation value, that is

σ² = E(x²) − E(x)²

Consider

E(x(x − 1)) = Σ_{x=0}^{n} x(x − 1) nCx p^x (1 − p)^{n−x}
            = Σ_{x=0}^{n} x(x − 1) [n!/(x!(n − x)!)] p^x (1 − p)^{n−x}
            = Σ_{x=2}^{n} [n!/((x − 2)!(n − x)!)] p^x (1 − p)^{n−x}
            = n(n − 1) p² Σ_{x=2}^{n} [(n − 2)!/((x − 2)!(n − x)!)] p^{x−2} (1 − p)^{n−x}
            = n(n − 1) p² Σ_{y=0}^{m} [m!/(y!(m − y)!)] p^y (1 − p)^{m−y}
            = n(n − 1) p² (p + 1 − p)^m
            = n(n − 1) p²        (1.19)

    where we have used the substitution, y = x −2 and m = n −2. Now variance is


σ² = E(x²) − E(x)²
   = E(x(x − 1)) + E(x) − E(x)²
   = n(n − 1) p² + np − n² p²
   = np(1 − p)        (1.20)

Example 1.4.1.1
An unbiased coin is tossed ten times. What is the probability of getting less than 3 heads?

The probability of finding less than 3 heads in 10 tosses is the probability of finding less than or equal to 2 heads, P(H ≤ 2). This will be the sum of the probabilities of finding no heads, 1 head and 2 heads. Thus

P(H ≤ 2) = P(H = 0) + P(H = 1) + P(H = 2)

Now the probability of finding x heads in n tosses is given by Eq(1.15). In our case, n = 10, p = q = 1/2. Thus

P(H = 0) = nCx p^x q^{n−x} = 10C0 (1/2)^{10} = 1/1024

Similarly,

P(H = 1) = 10C1 (1/2)(1/2)^9 = 10/1024

P(H = 2) = 10C2 (1/2)²(1/2)^8 = 45/1024

Thus the total probability of getting less than 3 heads in 10 tosses is

P(H = 0) + P(H = 1) + P(H = 2) = (1/1024)[1 + 10 + 45] = 7/128

Example 1.4.1.2
Here is a game to test sixth sense. Take 4 cards numbered 1 to 4. One person picks a card at random and another person tries to identify the card. What is the probability distribution for the number of cards the second person identifies correctly if the test is repeated 4 times?


Let P(X = x) be the probability of correctly identifying x cards in 4 attempts. Then, by the binomial probability distribution function, this is given by

P(X = x) = nCx p^x q^{n−x} = 4Cx (0.25)^x (0.75)^{4−x}

since the probability of identifying the correct card out of 4 cards in any one attempt is 1/4 = 0.25. Here x = 0, 1, 2, 3, 4. Thus the probability of getting one card right in 4 attempts is

P(1) = 4C1 (0.25)(0.75)³ = 0.4219

The probability distribution is given by

x     P(x)
0     0.3164
1     0.4219
2     0.2109
3     0.0469
4     0.0039

If this is done with, say, 100 people, we can see that the number of people getting 1 card correct is 100 × 0.4219 ∼ 42.

Example 1.4.1.3
A biased, that is an unfair, die is thrown fifty times and the number of sixes seen is ten. If the die is thrown a further fifteen times, find:

(a) the probability that a six will occur exactly thrice;

(b) the expected number of sixes;

(c) the variance of the number of sixes.

The experiment is clearly a Bernoulli or binomial trial. If a success is taken to be getting a six, then the probability p is given by

p = 10/50 = 0.2

Now if X is defined as the number of sixes in 15 trials, then


X = B(15, p)

We want the probability of getting exactly 3 sixes in 15 trials. Thus, x = 3 and

P(X = 3) = 15C3 (1/5)³ (4/5)^{12} ≈ 0.250

The expected number of sixes will be the expectation value E(X). This is, from Eq(1.18),

E(X) = np = 15 × (1/5) = 3

Finally, the variance is given by Eq(1.20):

σ² = np(1 − p) = 15 × (1/5) × (4/5) = 2.4
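These three answers can be checked in a couple of lines (a sketch using math.comb):

```python
import math

n, p = 15, 10 / 50   # fifteen further throws, p = 0.2

p_three_sixes = math.comb(n, 3) * p ** 3 * (1 - p) ** 12  # part (a)
expected_sixes = n * p                                    # part (b), np
variance_sixes = n * p * (1 - p)                          # part (c), np(1-p)

print(round(p_three_sixes, 3), expected_sixes, variance_sixes)
```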

    1.4.2 Poisson Distribution

A Poisson distribution is the probability distribution that results from a Poisson experiment. A Poisson experiment has the following properties:

    1. The experiment results in outcomes that we can call successes or failures.

2. The average number of successes µ that occur in a specified region is known.

3. The probability that a success will occur is proportional to the size of the region.

4. The probability that a success will occur in an extremely small region is virtually zero.

A Poisson random variable is the number of successes that result from a Poisson experiment. The probability distribution of a Poisson random variable is called a Poisson distribution. A Poisson distribution is an approximation to the binomial distribution when the average number of successes µ is much smaller than the possible number, that is, when µ ≪ n. In such cases, evaluation of the binomial probability is extremely complicated and tedious.

As an example of a Poisson distribution, consider the flux of cosmic rays reaching the earth. This is known to be around 1 per cm² per minute. Now consider a detector with a surface area of 40 cm². We expect to detect 40 cosmic rays per minute in this detector. Now suppose we record the number of cosmic rays detected in 20 second intervals. On average we then expect about 13.3 cosmic rays. However, when we do the experiment over many 20 second intervals, we will detect something like 13, 15, 14, 12 etc., and occasionally even 9 or 18 cosmic rays. We can plot a histogram of this, that is, plot the number of times n_x that we observe x rays in this fixed interval of time. Or, if we divide the number of times n_x by the total number of intervals N, then we can get the probability P_x of observing x cosmic


rays in this experiment. If our number of intervals N is large, then this probability distribution will be a Poisson distribution. A Poisson distribution arises whenever we observe independent random events that occur at a constant rate, such that the expected number of events is µ. In the case of cosmic rays, the events are obviously random and clearly independent, since the arrival of one cosmic ray does not depend on the arrival of

    others. Further, the rate of arrival is almost constant.

Another example can be a scattering experiment with a beam of B particles incident on a thin foil, where the probability p of any one interaction taking place is very small. Then we know that the number of observed interactions r will be binomially distributed, where we take the number of trials as B and the probability of success (interaction) as p. It turns out that when p is very small, the values of P_x will follow a Poisson distribution with a mean given by Bp (which we have seen is the mean of the binomial distribution, Eq(1.18)).

    Thus we see that a binomial distribution goes to a Poisson distribution when the number of trials

N increases while the probability of success p decreases in such a way that Np is a constant.

Consider the case of a binomial distribution where p ≪ 1, in the situation where n → ∞

but np remains finite. Recall that np is the mean of the binomial distribution (Eq 1.18). The probability function for the binomial distribution is

P(x; n, p) = (1/x!) [n!/(n − x)!] p^x (1 − p)^{−x} (1 − p)^n

    But

n!/(n − x)! = n(n − 1)(n − 2) ··· (n − x + 2)(n − x + 1)

Now since x ≪ n, each of these x factors is very nearly n, and hence this becomes

n!/(n − x)! ≈ n^x

    Now

(1 − p)^{−x} ≈ (1 + px) ≈ 1

since p → 0. Thus we now have the probability function as

P(x; n, p) = (1/x!) n^x p^x (1 − p)^n

Consider now

(1 − p)^n = lim_{p→0} [(1 − p)^{1/p}]^µ = (1/e)^µ = e^{−µ}


    Since the mean, µ = np for a Binomial distribution.

    Thus we get the Probability function for the Poisson distribution

P_P(x; µ) = lim_{p→0} P_B(x; n, p) = (µ^x / x!) e^{−µ}        (1.21)

This is the probability of obtaining x events in the given interval. Remember that x is a positive integer or zero.
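Eq(1.21) in code, as a sketch (function name ours):

```python
import math

def poisson_pmf(x, mu):
    """P(x; mu) of Eq.(1.21): mu^x e^(-mu) / x!"""
    return mu ** x * math.exp(-mu) / math.factorial(x)

# the probabilities over x = 0, 1, 2, ... sum (essentially) to 1,
# and the weighted sum of x reproduces the mean mu
mu = 2.5
total = sum(poisson_pmf(x, mu) for x in range(60))
mean = sum(x * poisson_pmf(x, mu) for x in range(60))
```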

    Mean of Poisson Distribution

    The mean of the distribution is the expectation value of the random variable. Thus

E(x) = Σ_{x=0}^{∞} x (µ^x / x!) e^{−µ}
     = µ e^{−µ} Σ_{x=1}^{∞} µ^{x−1} / (x − 1)!
     = µ e^{−µ} Σ_{y=0}^{∞} µ^y / y!
     = µ e^{−µ} e^{µ} = µ        (1.22)

Thus we see that the mean of the Poisson distribution is

E(x) = µ        (1.23)

    Variance of the Poisson Distribution

We know that the variance is defined in terms of the difference of the expectation value of the square of the variable and the square of the expectation value. That is

σ² = E(x²) − E(x)²

Consider


E(x²) = Σ_{x=0}^{∞} x² (µ^x / x!) e^{−µ}
      = 0 + Σ_{x=1}^{∞} x² (µ^x / x!) e^{−µ}
      = Σ_{y=0}^{∞} (y + 1)² [µ^{y+1} / (y + 1)!] e^{−µ}
      = Σ_{y=0}^{∞} (y + 1)² [µ^y µ / ((y + 1) y!)] e^{−µ}
      = µ Σ_{y=0}^{∞} (y + 1) (µ^y / y!) e^{−µ}
      = µ [Σ_{y=0}^{∞} y (µ^y / y!) e^{−µ} + Σ_{y=0}^{∞} (µ^y / y!) e^{−µ}]
      = µ E(x) + µ
      = µ(µ + 1)        (1.24)

The sum of the probability distribution function over all x is unity, and therefore the second term in the square bracket is unity.

σ² = E(x²) − E(x)² = µ(µ + 1) − µ² = µ        (1.25)

This is a remarkable result: the mean and the variance of the Poisson distribution are the same. This gives rise to the famous square root rule. In an experiment where the distribution satisfies the Poisson conditions, for instance in the counting of N independent events in a fixed interval, we can estimate the mean of the distribution to be N. Then, as we saw above, the variance is also N and therefore σ = √N. The statistical error would then be √N, and we would quote our result as N ± √N. We will return to this later in the chapter when we discuss error estimation.
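The square root rule can be seen in a small simulation. Python's standard library has no Poisson sampler, so the sketch below draws samples with Knuth's classic algorithm (a method chosen by us, not one the manual prescribes):

```python
import math
import random

def poisson_sample(mu, rng):
    # Knuth's algorithm: count how many uniform random factors are needed
    # before their running product drops below e^(-mu)
    limit = math.exp(-mu)
    k, prod = 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

rng = random.Random(12345)
counts = [poisson_sample(10.0, rng) for _ in range(200_000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# mean and variance both come out close to 10, so sigma ~ sqrt(N)
```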

Example 1.4.2.1
A factory produces resistors and packs them in boxes of 500. If the probability that a resistor is defective is 0.005, find the probability that a box selected at random contains at most two defective resistors.

    If we take X as the ‘number of defective resistors in a box of 500’, then


    X = B(500, 0.005)

since the trials are obviously binomial. Now in this case, we can see that the number of trials, n = 500, is large and the probability of success p = 0.005 in each trial is low, so the binomial distribution can be approximated by a Poisson distribution with mean

µ = np = 500 × 0.005 = 2.5

Then the probability of finding two or fewer defective resistors in a box of 500 is

P(X ≤ 2) = P(0) + P(1) + P(2)

where

P(0) = 2.5⁰ e^{−2.5} / 0!
P(1) = 2.5¹ e^{−2.5} / 1!
P(2) = 2.5² e^{−2.5} / 2!

or

P(X ≤ 2) = 6.625 × e^{−2.5} ≈ 0.543

Example 1.4.2.2
A bank manager opens on average 3 new accounts per week. Use the Poisson distribution to calculate the probability that in a given week she will open 2 or more accounts but less than 6 accounts.

    To use the Poisson distribution function, we need to know the mean. In this case, the

mean is given as 3 accounts per week. Thus we can use Eq(1.21) with µ = 3. Then the probability of opening 2 or more accounts but less than 6 in a week is simply

P(2 ≤ x < 6; 3) = P(2; 3) + P(3; 3) + P(4; 3) + P(5; 3) = e^{−3} 3²/2! + e^{−3} 3³/3! + e^{−3} 3⁴/4! + e^{−3} 3⁵/5! ≈ 0.72
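A two-line check of this sum (a sketch):

```python
import math

def poisson_pmf(x, mu):
    # Poisson probability mu^x e^(-mu) / x!
    return mu ** x * math.exp(-mu) / math.factorial(x)

# probability of opening 2, 3, 4 or 5 accounts in a week with mu = 3
p = sum(poisson_pmf(x, 3.0) for x in range(2, 6))
print(round(p, 2))   # ~0.72
```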

Example 1.4.2.3
Thirty sheets of plain glass are examined for defective pieces. The frequency of the number


    of sheets with a given number of defects per sheet was as follows:

No. of defects    Frequency
0                 8
1                 5
2                 4
3                 7
4                 4
5                 2
Total             30        (60 defects in all)

What is the probability of finding a sheet chosen at random which contains 4 or more defects?

To use the Poisson distribution function, we need to find the mean number of defects. We know that there are 60 defects in 30 sheets of glass. Thus the mean number of defects per sheet is

µ = 60/30 = 2

The probability of finding a sheet with 4 or more defects is then

P(x ≥ 4) = 1 − P(x < 4)
         = 1 − P(0) − P(1) − P(2) − P(3)
         = 1 − [e^{−2} 2⁰/0! + e^{−2} 2¹/1! + e^{−2} 2²/2! + e^{−2} 2³/3!]
         ≈ 0.143

    Example 1.4.2.4

    At the ITO intersection, vehicles pass through at an average rate of 600 per hour.

(a) Find the probability that none passes in a given minute.

(b) What is the expected number passing in five minutes?

    The average number of vehicles per minute is simply


µ = 600/60 = 10

Thus the probability that no vehicle passes in a given minute is

P(0; 10) = e^{−10} 10⁰ / 0! ≈ 4.54 × 10⁻⁵

The expected number of vehicles passing in five minutes is

E(X) = 10 × 5 = 50

    1.4.3 Normal Distribution

The Normal Distribution is extremely important in probability theory and statistics and forms the cornerstone of most statistical analysis. For our purposes, its importance lies in the fact that many real-world phenomena involve random quantities that are distributed in an approximately normal fashion. For instance, the errors in a scientific measurement are approximately normal. It is often called the Gaussian distribution, and is also referred to as the "bell-shaped distribution", because the graph of its probability density function resembles the shape of a bell.

The Normal or Gaussian distribution is an approximation to the binomial distribution for the limiting case when the number of possible observations n goes to infinity AND the probability of success in each measurement is finite and remains constant, that is, when np ≫ 1. It is also the limiting case of the Poisson distribution when the mean µ becomes large.

The Gaussian probability density is defined as

P_G = [1/(σ√(2π))] exp[−(1/2)((x − µ)/σ)²]        (1.26)

As one can see, this is a continuous distribution function and thus describes the probability of getting a value x from a random observation from a parent distribution of mean µ and standard deviation σ. Properly defined, we should talk about a Gaussian probability distribution function, such that the probability dP_G of a random observation having a value between x and x + dx is

dP_G(x; µ, σ) = p_G(x; µ, σ) dx

With this probability distribution function, we can now see how a Poisson distribution goes to a Gaussian distribution for large mean. Consider first a Poisson distribution and a normal distribution


both with mean 1. Thus µ = 1 and, in the case of the Gaussian distribution, σ = 1. Then the probability that n ≤ 2, that is n ≤ µ + 1σ, for the Poisson distribution is

P(1 ± 1) = P(0) + P(1) + P(2) ≈ 0.92

where P(n) is the Poisson distribution function of Eq(1.21). With the Gaussian distribution function (Eq(1.26)), we get

P(0 < x < 2) = 0.68

Thus we see that for small means, the two distributions are fairly different. What about for a large mean? Let us take µ = 15/2 = 7.5. Then σ = √7.5 ≈ 2.7, and the corresponding probabilities for µ ± 1σ are

P(7.5 ± 2.7) = P(5) + P(6) + ··· + P(10) ≈ 0.73

and for the Gaussian distribution

P(4.8 < x < 10.2) = 0.68

    which is very similar. In general, for µ > 5, the Gaussian distribution is a good approximation to thePoisson distribution. This fact will be important to us since we will see that in counting experiments,it will become easier to use the Gaussian distribution in cases where the mean number of counts is large.
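This comparison can be reproduced with the error function from the math module (a sketch, taking µ = 7.5 as in the text):

```python
import math

def poisson_pmf(k, mu):
    # Poisson probability mu^k e^(-mu) / k!
    return mu ** k * math.exp(-mu) / math.factorial(k)

def normal_cdf(x, mu, sigma):
    # Phi((x - mu)/sigma) written with the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mu = 7.5
sigma = math.sqrt(mu)

# Poisson probability of a count within mu +/- 1 sigma (integers 5..10)
p_poisson = sum(poisson_pmf(k, mu) for k in range(5, 11))
# Gaussian probability over the same interval
p_gauss = normal_cdf(mu + sigma, mu, sigma) - normal_cdf(mu - sigma, mu, sigma)
```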

    A normal distribution has the following properties:

    1. The total area under the normal curve is equal to 1.

    2. The probability that a normal random variable x equals any particular value is 0.

3. The probability that x is greater than some value a equals the area under the normal curve bounded by a and ∞.

4. The probability that x is less than a equals the area under the normal curve bounded by a and −∞.

5. About 68% of the area under the curve falls within 1 standard deviation of the mean.

    6. About 95% of the area under the curve falls within 2 standard deviations of the mean.

    7. About 99.7% of the area under the curve falls within 3 standard deviations of the mean.

A convenient form of the normal distribution is the Standard Normal Distribution. To obtain this, we simply use the substitution


z = (x − µ)/σ        (1.27)

    in Eq(1.26). Then the probability distribution function becomes

p_G(z) dz = [1/√(2π)] exp(−z²/2) dz        (1.28)

It is important to see that since all the values of the normal variable X falling between x1 and x2 have corresponding values of the standard normal variable Z between z1 and z2, the area under the X curve between X = x1 and X = x2 equals the area under the Z curve between Z = z1 and Z = z2. Therefore we have, for the probabilities,

    P (x1 < X < x 2) = P (z 1 < Z < z 2)
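This equality is easy to verify with statistics.NormalDist from the Python standard library (the particular µ, σ and endpoints below are arbitrary choices of ours):

```python
from statistics import NormalDist

mu, sigma = 2.0, 1.0 / 3.0
x1, x2 = 2.0, 2.5

# P(x1 < X < x2) for X ~ N(mu, sigma)
X = NormalDist(mu, sigma)
p_x = X.cdf(x2) - X.cdf(x1)

# the same probability for Z = (X - mu)/sigma, a standard normal variable
Z = NormalDist(0.0, 1.0)
z1, z2 = (x1 - mu) / sigma, (x2 - mu) / sigma
p_z = Z.cdf(z2) - Z.cdf(z1)

assert abs(p_x - p_z) < 1e-12
```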

    Mean of the Standard Normal Distribution

E(x) = ∫_{−∞}^{∞} x p_G(x) dx
     = [1/√(2π)] ∫_{−∞}^{∞} x exp(−x²/2) dx
     = [1/√(2π)] [∫_{−∞}^{0} x exp(−x²/2) dx + ∫_{0}^{∞} x exp(−x²/2) dx]
     = 0        (1.29)

    Thus we see that for a standard Normal distribution, the mean is 0.

    Variance of the Standard Normal Distribution

    We know that

σ² = E(x²) − E(x)²

Now


E(x²) = ∫_{−∞}^{∞} x² p_G(x) dx
      = [1/√(2π)] ∫_{−∞}^{∞} x² exp(−x²/2) dx
      = [1/√(2π)] ∫_{−∞}^{∞} x · x exp(−x²/2) dx
      = [1/√(2π)] {[−x exp(−x²/2)]_{−∞}^{∞} + ∫_{−∞}^{∞} exp(−x²/2) dx}
      = [1/√(2π)] ∫_{−∞}^{∞} exp(−x²/2) dx
      = 1        (1.30)

using integration by parts (∫u dv = uv − ∫v du) and also the fact that the probability distribution function is normalised to 1. Therefore

σ² = E(x²) − E(x)² = 1 − 0 = 1        (1.31)

Thus we see that the standard normal distribution has a mean equal to 0 and a variance equal to 1.

The difference between the standard normal distribution and the normal distribution can be seen from the curves of the probability distribution. Consider a normal distribution with µ = 2, σ = 1/3. The probability distribution function will look like Figure 1.4.


    Figure 1.4: Normal Distribution with mean = 2 and $\sigma = \frac{1}{3}$

    The corresponding standard normal distribution with µ = 0, σ = 1 will resemble Figure 1.5

    Figure 1.5: Standard Normal Distribution with mean =0 and σ = 1

    The two graphs obviously have very different $\mu$ and $\sigma$ but have identical shapes, and a shifting and rescaling of the axes will give one from the other. It is also easy to see that the area under the two curves between two equivalent points is the same. Thus, for instance, the area of the normal distribution (with $\mu = 2$, $\sigma = \frac{1}{3}$) between $0.5\sigma$ and $2\sigma$ to the right of the mean will be the area from $x_1 = \mu + \frac{\sigma}{2} = 2 + \frac{1}{6}$ to $x_2 = \mu + 2\sigma = 2.66$. The corresponding area under the standard normal distribution would be the area from $z_1 = 0 + 0.5 = 0.5$ to $z_2 = 0 + 2 = 2.0$.


    The standard normal distribution, with the probability distribution function given by Eq(1.28), gives us the probability that $Z$ lies between 0 and a given value $z$. This is the area under the curve from 0 to $z$. It is usually tabulated in a z-table, which can be looked up in a standard reference, as shown below in Figure 1.6.

    Figure 1.6: z-tables for Standard Normal Distribution §

    §(Source: http:// www.katyanovablog.com/ picsgevs/normal-distribution-table )

    Example 1.4.3.1
    The mean weight of 1000 parts produced by a machine was 30.05 gm with a standard deviation of 0.05 gm. Find the probability that a part selected at random would weigh between 30.00 gm and 30.15 gm.

    30.00 is $1\sigma$, that is 0.05, below the mean. Similarly, 30.15 is $2\sigma = 0.1$ above the mean. Thus

    $$P(30.00 < X < 30.15) = P(-1 < Z < 2)$$

    since, recall, the area under the normal (Gaussian) curve between two points $x_1$ and $x_2$ is the same as that under a standard normal curve between the two points $z_1$ and $z_2$ related to $x_1$ and $x_2$ by the transformation Eq(1.27). And $1\sigma$ for the standard normal distribution is 1 from the mean, which is 0, while $2\sigma$ is 2. These values can be looked up in the standard tables.


    $$P(-1 < Z < 2) = 0.3413 + 0.4772 = 0.8185$$

    So the probability is 0.8185.
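As a cross-check, the probability can also be computed directly from the error function in Python's standard library; the standard normal CDF is $\Phi(z) = \frac{1}{2}\left(1 + \mathrm{erf}(z/\sqrt{2})\right)$. This snippet is an illustration, not part of the manual's procedure:

```python
from math import erf, sqrt

def phi(z):
    """Cumulative distribution function of the standard normal, via erf."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 30.05, 0.05
x1, x2 = 30.00, 30.15
z1 = (x1 - mu) / sigma   # about -1
z2 = (x2 - mu) / sigma   # about +2

# P(30.00 < X < 30.15) = P(-1 < Z < 2)
p = phi(z2) - phi(z1)
print(round(p, 4))   # 0.8186, agreeing with the table value 0.8185 up to rounding
```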

    What about the Gaussian distribution? The mean of the Gaussian distribution is obtained easily now, either by the substitution given above into the standard normal distribution or by direct calculation.

    $$E(x) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} x \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) dx$$

    $$= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} (x-\mu) \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) dx + \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} \mu \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) dx$$

    $$= 0 + \mu \cdot \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) dx = \mu \qquad (1.32)$$

    since the distribution function is normalised to 1.

    Thus we see that for a Gaussian distribution,

    $$E(x) = \mu \qquad (1.33)$$

    We can also find the variance of the Gaussian distribution easily.

    $$\sigma^2 = E\left((x-\mu)^2\right) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} (x-\mu)^2 \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) dx$$

    $$= \frac{\sigma^2}{\sqrt{2\pi}} \int_{-\infty}^{\infty} y^2 \exp\left(-\frac{y^2}{2}\right) dy = \sigma^2 \qquad (1.34)$$

    where the substitution $y = (x-\mu)/\sigma$ has been made,

    since the integral is simply the variance of the standard normal distribution given in Eq(1.30), which is 1. Thus we see that the variance of the Gaussian distribution is $\sigma^2$.
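A quick Monte Carlo illustration (a sketch using only Python's standard library, not part of the manual) confirms numerically that samples drawn from a Gaussian with parameters $\mu$ and $\sigma$ have sample mean close to $\mu$ and sample variance close to $\sigma^2$:

```python
import random

random.seed(42)  # fixed seed so the check is reproducible
mu, sigma, n = 2.0, 0.5, 200_000
samples = [random.gauss(mu, sigma) for _ in range(n)]

mean = sum(samples) / n
var = sum((s - mean) ** 2 for s in samples) / n
print(round(mean, 2), round(var, 2))   # close to mu = 2.0 and sigma**2 = 0.25
```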


    The probability distribution function for the normal or Gaussian distribution, Eq(1.26), is normalised to 1. That is,

    $$\int_{-\infty}^{\infty} p_G(x)\, dx = 1$$

    From the definition of the probability distribution function, we know that the probability that any one $x$ value lies between the limits $x = \mu - \Delta$ and $x = \mu + \Delta$ is simply the area under the Gaussian curve between these limits. If one computes (by integration) such areas for various choices of $\Delta$, one can show that the probability of finding any one measurement of $x$ between various limits, measured as multiples of the standard deviation $\sigma$, is given by the data in Table 1.2.

    Probability   Interval
    0.50          µ − 0.674σ < x < µ + 0.674σ
    0.68          µ − σ < x < µ + σ
    0.80          µ − 1.282σ < x < µ + 1.282σ
    0.90          µ − 1.645σ < x < µ + 1.645σ
    0.95          µ − 1.960σ < x < µ + 1.960σ
    0.99          µ − 2.576σ < x < µ + 2.576σ
    0.999         µ − 3.291σ < x < µ + 3.291σ

    Table 1.2: Normal Distribution: Probabilities with intervals

    How can one interpret this table? The table indicates that we can be 95% confident that any one measurement that we make in the experiment (assuming all measurements are distributed normally) will lie within approximately $2\sigma$ of the mean. Thus, the probability column can be taken as the confidence level and the interval column as the confidence interval.
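The confidence levels of Table 1.2 can be reproduced numerically: for a normal distribution, the probability of falling within $k\sigma$ of the mean is $\mathrm{erf}(k/\sqrt{2})$. A short illustrative check using only the standard library:

```python
from math import erf, sqrt

def coverage(k):
    """P(mu - k*sigma < x < mu + k*sigma) for any normal distribution."""
    return erf(k / sqrt(2.0))

# multiplier k from Table 1.2 -> confidence level
for k, expected in [(0.674, 0.50), (1.0, 0.68), (1.960, 0.95), (3.291, 0.999)]:
    print(f"{k:5.3f} sigma -> {coverage(k):.4f} (table: {expected})")
```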

    This interpretation and analysis looks very straightforward. However, there is a problem: the $\mu$ and the $\sigma$ that we are using in the Gaussian distribution are the parent mean and standard deviation. This means, as we have discussed above, that this will only be valid if we make an infinite number of measurements!

    We will address this issue in Section 1.6.


    1.5 Error Estimation

    The basic aim in our experiment is to measure a quantity and also estimate the uncertainties in the measurements. We also need to understand the sources of the uncertainties. Lastly, we need to know how to combine uncertainties in measurements of more than one quantity into the error in a quantity which is calculated from these measurements. This is what we now discuss. Throughout this section, we will only be dealing with statistical or random errors. Systematic errors will be assumed to have been taken care of.

    First of all, it is important to understand a crucial fact which allows one to use the statistical methods discussed above to analyse the errors in any experiment. The crucial result is that any measurement subject to many small random errors will be distributed normally. This follows from the Central Limit Theorem, which states that the distribution of the sum of a large number of random variables will tend towards a normal distribution. We may think of a measurement as the result of a process, namely our carrying out many small steps in the experiment. Each step in the process may lead to a small error with some probability distribution. When we sum the errors over all the steps to get the final error, the Central Limit Theorem guarantees that this will lead to a normal distribution, no matter how the errors on the individual steps are distributed. As a result of this, we generally expect normal distributions to describe errors. Note that this simple yet powerful fact allows us to use the whole machinery of normal distributions and statistics to analyse errors.

    In the case where the observations that we are taking are collections of a finite number of counts over finite intervals, the underlying distribution we know is Poisson. In this case, we know that the observed values would be distributed around the mean in a Poisson distribution. (Recall that the random variable in a Poisson distribution can only take positive values, including zero, since it is defined as the number of successes in a Poisson experiment.) In fact, in any experiment where data is grouped in bins to form a histogram, the number of events in each bin will be distributed according to a Poisson distribution.

    This allows us a tremendous simplification. We know that for a Poisson distribution the standard deviation, Eq(1.25), is simply

    $$\sigma = \sqrt{\mu}$$

    Thus the relative uncertainty, Eq(1.1), that is, the ratio of the standard deviation to the mean, is simply

    $$\text{relative uncertainty} = \frac{\sigma}{\mu} = \frac{1}{\sqrt{\mu}}$$

    In our counting experiments, for instance, this means that as we increase the number of counts per interval (that is, increase the mean $\mu$), the relative uncertainty goes down as the inverse square root of the mean.
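As a small illustration of the $1/\sqrt{\mu}$ behaviour (hypothetical numbers, not from an actual experiment):

```python
from math import sqrt

def relative_uncertainty(mean_counts):
    """Relative uncertainty 1/sqrt(mu) of a Poisson-distributed count."""
    return 1.0 / sqrt(mean_counts)

# quadrupling the counts halves the relative uncertainty
print(relative_uncertainty(100))    # 0.1   (10 %)
print(relative_uncertainty(400))    # 0.05  ( 5 %)
print(relative_uncertainty(10000))  # 0.01  ( 1 %)
```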


    This is actually referred to in general as the square root rule for counting experiments, which states that the uncertainty in any counted number of random events, used as an estimate of the true average number, is equal to the square root of the counted number. For our purposes this is extremely important in our counting experiments, since the process which we are measuring, namely the counts, is random and distributed in a Poisson distribution in any time interval. However, care should be taken to note that this uncertainty is only in the counted variable and not in any derived quantity. Thus, for instance, suppose we measure $N$ counts in an interval of $T$ seconds to get the rate $R$ per second,

    $$R = \frac{N}{T}$$

    Now, to find the uncertainty in $R$, we know that the uncertainty is in the measured random variable $N$, and it is $\sqrt{N}$. Thus the number of counts in time $T$ is really

    $$N \pm \sqrt{N}$$

    From this the rate can be seen to be simply

    $$R = \frac{N \pm \sqrt{N}}{T}$$

    and NOT

    $$R = \frac{N}{T} \pm \sqrt{\frac{N}{T}}$$

    since only the quantity $N$ is counted, and hence the uncertainty in $N$ is the square root of $N$.
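The correct treatment of the rate can be sketched in a few lines (the helper name `rate_with_uncertainty` is illustrative, not from the manual):

```python
from math import sqrt

def rate_with_uncertainty(N, T):
    """Rate R = N/T with the square-root-rule uncertainty sqrt(N)/T."""
    return N / T, sqrt(N) / T

R, dR = rate_with_uncertainty(400, 10.0)
print(R, dR)   # 40.0 2.0, i.e. R = 40 +/- 2 counts per second
# the incorrect recipe sqrt(N/T) would instead give sqrt(40), about 6.3
```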

    This example then leads us to the issue of how to estimate the error in any derived quantity from the errors in the measured quantities.

    1.5.1 Propagation of Errors

    Consider an experiment where we measure some quantities, for instance the number of counts and the distance, and use these measured quantities to determine a quantity which is a function of the two. Let the desired quantity be $x$ and the measured quantities $u$ and $v$, such that

    $$x = f(u, v) \qquad (1.35)$$

    Suppose further that the most probable value of $x$, $\bar{x}$, is such that

    $$\bar{x} = f(\bar{u}, \bar{v}) \qquad (1.36)$$


    To determine the most probable value of $x$, we take the measurements of $u$ and $v$, that is $u_i$ and $v_i$, and determine the different $x_i$. That is,

    $$x_i = f(u_i, v_i) \qquad (1.37)$$

    We have already seen that in the limit of an infinite number of measurements, the sample distribution will go to the limiting or parent distribution, and the average of $x$, that is $\bar{x}$, will be the mean of the distribution. In that limit, we can use the calculated sample mean $\bar{x}$ to find the variance $\sigma_x^2$ as

    $$\sigma_x^2 = \lim_{N\to\infty} \frac{1}{N} \sum_i (x_i - \bar{x})^2 \qquad (1.38)$$

    Also, expanding the function in Eq(1.35) in a Taylor series around the averages of $u$ and $v$,

    $$x_i - \bar{x} \approx (u_i - \bar{u})\frac{\partial x}{\partial u} + (v_i - \bar{v})\frac{\partial x}{\partial v} \qquad (1.39)$$

    Of course, the partial derivatives have to be evaluated with the other variable kept fixed at its mean value.

    Now combining Eq(1.38) and Eq(1.39), we have

    $$\sigma_x^2 \approx \lim_{N\to\infty} \frac{1}{N} \sum_i \left[(u_i - \bar{u})\frac{\partial x}{\partial u} + (v_i - \bar{v})\frac{\partial x}{\partial v}\right]^2$$

    $$= \lim_{N\to\infty} \frac{1}{N} \sum_i \left[(u_i - \bar{u})^2 \left(\frac{\partial x}{\partial u}\right)^2 + (v_i - \bar{v})^2 \left(\frac{\partial x}{\partial v}\right)^2 + 2(u_i - \bar{u})(v_i - \bar{v})\frac{\partial x}{\partial u}\frac{\partial x}{\partial v}\right] \qquad (1.40)$$

    Clearly, the first two terms are related to the variances of $u$ and $v$, that is,

    $$\sigma_u^2 = \lim_{N\to\infty} \frac{1}{N} \sum_i (u_i - \bar{u})^2 \qquad (1.41)$$

    and

    $$\sigma_v^2 = \lim_{N\to\infty} \frac{1}{N} \sum_i (v_i - \bar{v})^2 \qquad (1.42)$$

    We also define a new quantity, the covariance between the two variables $u$ and $v$, as

    $$\sigma_{uv}^2 = \lim_{N\to\infty} \frac{1}{N} \sum_i (u_i - \bar{u})(v_i - \bar{v}) \qquad (1.43)$$

    Thus the variance for $x$ is given by


    $$\sigma_x^2 \approx \sigma_u^2 \left(\frac{\partial x}{\partial u}\right)^2 + \sigma_v^2 \left(\frac{\partial x}{\partial v}\right)^2 + 2\sigma_{uv}^2 \frac{\partial x}{\partial u}\frac{\partial x}{\partial v} \qquad (1.44)$$

    This is known as the error propagation equation .

    The covariance term is a measure of the correlation of the variations in $u$ and $v$. In most experiments the fluctuations in the measured variables are uncorrelated, and so on average the covariance term vanishes for a large number of observations. We shall neglect it in our further discussion. Thus we have, in general, the error propagation equation

    $$\sigma_x^2 \approx \sigma_u^2 \left(\frac{\partial x}{\partial u}\right)^2 + \sigma_v^2 \left(\frac{\partial x}{\partial v}\right)^2 + \cdots \qquad (1.45)$$

    where the quantity $x$ could be a function of any number of uncorrelated, independent variables.
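Eq(1.45) can also be applied numerically when the partial derivatives are awkward to write down; the sketch below (illustrative, assuming uncorrelated variables) estimates them by central differences:

```python
from math import sqrt

def propagate(f, values, sigmas, h=1e-6):
    """sigma_x for x = f(*values), given the sigma of each argument."""
    var = 0.0
    for i, s in enumerate(sigmas):
        up = list(values); up[i] += h
        dn = list(values); dn[i] -= h
        deriv = (f(*up) - f(*dn)) / (2.0 * h)   # central-difference dx/du_i
        var += (deriv * s) ** 2
    return sqrt(var)

# Example: x = u * v with u = 10 +/- 0.3 and v = 4 +/- 0.2
sx = propagate(lambda u, v: u * v, [10.0, 4.0], [0.3, 0.2])
print(round(sx, 3))   # 2.332, i.e. sqrt((4*0.3)**2 + (10*0.2)**2)
```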

    Let us see what the error propagation equation looks like in some common cases.

    1. Sums & Differences

    Let

    $$x = u + a \qquad (1.46)$$

    where $a$ is a constant. Then, since $\frac{\partial x}{\partial u} = 1$, we get

    $$\sigma_x = \sigma_u \qquad (1.47)$$

    We can also find the relative uncertainty (Eq(1.1)):

    $$\frac{\sigma_x}{x} = \frac{\sigma_u}{x} = \frac{\sigma_u}{u + a} \qquad (1.48)$$

    2. Weighted Sums & Differences

    Suppose $x$ is the weighted sum of two variables $u$ and $v$:

    $$x = au + bv$$

    Then, since

    $$\frac{\partial x}{\partial u} = a$$


    $$\frac{\partial x}{\partial v} = b$$

    we get, using Eq(1.44),

    $$\sigma_x^2 = a^2 \sigma_u^2 + b^2 \sigma_v^2 \qquad (1.49)$$

    assuming no correlation.

    3. Multiplication & Division

    Suppose the quantity of interest $x$ is defined by

    $$x = uv$$

    where $u$ and $v$ are measured quantities. In this case, we can see that

    $$\frac{\partial x}{\partial u} = v \qquad \text{and} \qquad \frac{\partial x}{\partial v} = u$$

    and therefore the error propagation equation tells us that

    $$\sigma_x^2 = v^2 \sigma_u^2 + u^2 \sigma_v^2 \qquad (1.50)$$

    or

    $$\frac{\sigma_x^2}{x^2} = \frac{\sigma_u^2}{u^2} + \frac{\sigma_v^2}{v^2} \qquad (1.51)$$

    For the case of division, we have

    $$x = \frac{u}{v}$$

    then

    $$\frac{\partial x}{\partial u} = \frac{1}{v}$$


    $$\frac{\partial x}{\partial v} = -\frac{u}{v^2}$$

    and therefore

    $$\sigma_x^2 = \frac{\sigma_u^2}{v^2} + \frac{u^2 \sigma_v^2}{v^4}$$

    or

    $$\frac{\sigma_x^2}{x^2} = \frac{\sigma_u^2}{u^2} + \frac{\sigma_v^2}{v^2} \qquad (1.52)$$

    Note the very important difference between Eq(1.51, 1.52) and Eq(1.49). In the case of weighted sums and differences the absolute errors are relevant, while in this case it is only the fractional errors in $u$ and $v$ which are related to the fractional error in $x$.
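A small numerical illustration of this quadrature rule for fractional errors (hypothetical values):

```python
from math import sqrt

def frac_err_product(u, su, v, sv):
    """Relative uncertainty of x = u*v (identical for x = u/v)."""
    return sqrt((su / u) ** 2 + (sv / v) ** 2)

u, su = 50.0, 2.0    # 4 % relative error
v, sv = 20.0, 0.6    # 3 % relative error
rel = frac_err_product(u, su, v, sv)
print(round(rel, 3))           # 0.05, i.e. a 5 % fractional error in x
print(round(u * v * rel, 1))   # 50.0 absolute error on x = u*v = 1000
```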

    4. Powers

    Suppose

    $$x = au^b$$

    Then

    $$\frac{\partial x}{\partial u} = abu^{b-1} = \frac{bx}{u}$$

    and

    $$\sigma_x^2 = \sigma_u^2 \left(\frac{bx}{u}\right)^2$$

    The relative uncertainty in $x$ is therefore

    $$\frac{\sigma_x}{x} = |b|\,\frac{\sigma_u}{u} \qquad (1.53)$$
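As an illustration of the power rule (hypothetical helper and numbers): for $x = au^b$ the relative error of $x$ is $|b|$ times the relative error of $u$, independent of the constant $a$:

```python
def power_rel_err(b, u, su):
    """Relative uncertainty of x = a*u**b, per the power rule."""
    return abs(b) * su / u

# Example: area of a circle A = pi * r**2, radius r = 2.0 +/- 0.02 (1 %)
print(power_rel_err(2, 2.0, 0.02))   # 0.02, i.e. a 2 % error in the area
```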

    Example 1.5.1.1
    Consider an experiment where we count $N_1 = 945$ counts in a 10 second interval and then $N_2 = 19$ counts in another 10 second interval. We have already found a background reading of $N_B = 14.2$ counts for the same 10 second interval by carrying


    out a separate experiment carefully. Thus we assume that there is no uncertainty in the background reading. Now, in the first time interval, the corrected counts are

    $$x_1 = N_1 - N_B = 930.8 \text{ counts}$$

    with an uncertainty of

    $$\sigma_{x_1} = \sigma_{N_1} = \sqrt{945} \approx 30.7 \text{ counts}$$

    and a relative uncertainty of

    $$\frac{\sigma_{x_1}}{x_1} = \frac{30.7}{930.8} = 0.033 \approx 3.3\%$$

    On the other hand, in the second interval, the figures are

    $$x_2 = 19 - 14.2 = 4.8 \text{ counts}$$

    with an uncertainty of

    $$\sigma_{x_2} = \sigma_{N_2} = \sqrt{19} \approx 4.4$$

    and a relative uncertainty of

    $$\frac{\sigma_{x_2}}{x_2} = \frac{4.4}{4.8} = 0.91$$
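The arithmetic of this example can be reproduced in a few lines (the helper name `corrected` is illustrative, not from the manual):

```python
from math import sqrt

def corrected(N, NB):
    """Net counts, uncertainty and relative uncertainty, error-free background."""
    x = N - NB
    sx = sqrt(N)          # the uncertainty lives in the counted N only
    return x, sx, sx / x

x1, s1, r1 = corrected(945, 14.2)
print(round(x1, 1), round(s1, 1), round(r1, 3))   # 930.8 30.7 0.033
x2, s2, r2 = corrected(19, 14.2)
print(round(x2, 1), round(s2, 1), round(r2, 2))   # 4.8 4.4 0.91
```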

    Example 1.5.1.2
    Now suppose that in the previous example the background reading was also subject to some uncertainty. Then we have the formula

    $$x = u - v$$

    and thus the error propagation equation becomes

    $$\sigma_x^2 = \sigma_u^2 + \sigma_v^2$$

    since the partial derivatives are unity in magnitude, and the uncertainty is

    $$\sigma_x = \sqrt{\sigma_u^2 + \sigma_v^2}$$


    In the above experiment for the first reading, if the background was not error free, then we would have the error in the net count as

    $$\sigma_{x_1} = \sqrt{\sigma_{N_1}^2 + \sigma_{N_B}^2} = \sqrt{(30.7)^2 + (3.7)^2} \approx 30.9$$

    and so we should report our ne