PHYS 201   Physics I: Mechanics, Wave Motion, & Sound                                   Version 6/28/10

                             An Analysis of Measurement Errors

        Whenever we measure a particular quantity, there is always some uncertainty in the
number we get. For example, a measurement of a length with a meter stick will never allow us
to conclude that the length is precisely, say, 1.25 meters; rather, that number must be uncertain
either at the level of 0.01 m, or 0.001 m, etc. These measurement uncertainties are commonly
called measurement errors—an unfortunate term, since it is not the case that we have made a
mistake in measuring, but simply that our precision of measurement is limited!

       How do these uncertainties or “errors” arise? Three sources of error are recognized:
1) A priori (“from the former”) errors arise from the limitations of the measurement device itself.
   For example, with a meter stick ruled in divisions of 1 mm = 0.1 cm, we cannot measure any
   length more accurately than ± 0.1 cm, since the best we can do is to record the tick marks
   on the meter stick that appear closest to the object’s two ends.
2) A posteriori (“from the subsequent”) errors arise from tiny random variations in the
   characteristics of the measuring device or of the object being measured, between one
   measurement and the next. For example, if we repeatedly measure a metal rod with a meter
   stick, we may get slightly different answers each time, either because we don’t line up the
   rod and the meter stick exactly the same way each time, or because we view the tick marks
   on the meter stick from a slightly different angle each time (“parallax error”), or because the
   metal rod expands and contracts if its temperature changes between subsequent
   measurements, etc.
3) Systematic errors arise from some unexpected but approximately unchanging phenomenon.
   Unlike the former two types of error, systematic errors shift all the measured values in the
   same direction, so that they are either all larger or all smaller than they really should be.
   For example, using a
   wooden meter stick that has swelled to 100.2 cm in length because of humidity, all our
   length measurements will be 1/1.002 times what they should be. (Do you see why? Think
   what measurement this meter stick would give for an object exactly 100.2 cm long.)
   Systematic errors are often very difficult to detect, but, if detected (like the example just
   given), they can often be corrected by adjusting the measured values appropriately (in this
   case, multiplying each measured length by 1.002).

       Parts I and II of these notes will consider in more detail how we can estimate a priori
and a posteriori errors. In Part III, we suppose these error estimates are already known, and
explore how the errors in several measured quantities “propagate” through a calculation involving
those quantities, to determine the overall uncertainty in the final result.

                                   *****   *****   *****   *****   *****

                               Part I: Measurement Errors A Priori

         If we repeatedly measure a length with a meter stick, we may get a value of, say,
“23.6 cm” each time: i.e., there is no apparent variation in the repeated measurements.
Nonetheless, since the accuracy of this measurement is only 0.1 cm (for a meter stick marked in
mm), the length value should be reported as
$$L \pm \delta L = 23.6 \pm 0.1\ \text{cm}.$$
The error term (here, $\delta L$) is often called the absolute error in the measured quantity $L$:
note that it has units which are the same as those of $L$.
        The significance of this error $\delta L$, however, depends on the value of $L$ itself. In the above
example, for instance, $\delta L$ is only $\frac{0.1\ \text{cm}}{23.6\ \text{cm}} \approx 0.004$ times as big as $L$. But if the same error $\delta L$
occurred for a measured value $L = 0.3\ \text{cm}$, then $\delta L$ would be fully one-third of $L$!
Consequently, to convey at a glance this significance of $\delta L$ relative to $L$, one often reports the
relative or percentage error $\delta L / L$, which for the above example is
$$\frac{\delta L}{L} = \frac{0.1\ \text{cm}}{23.6\ \text{cm}} \approx 0.004 = 0.4\%.$$
Note that, unlike the absolute error $\delta L$, the relative error $\delta L / L$ never carries units (because the
units always cancel in the fraction) and so may be expressed as a percentage.
        Some measuring devices present fixed absolute errors for all measurements (like meter
sticks, with $\delta L = 0.1\ \text{cm}$), whereas other measuring devices (like many electronic balances!)
present fixed relative errors. For example, suppose a manufacturer reports that a certain model
of electronic balance possesses an accuracy of 2%. This means that $\delta M / M = 0.02$ for all
measured mass values $M$, so that the absolute error $\delta M = 0.02\,M$ increases proportionally with
the measured value of $M$. Thus, a mass measurement $M = 1.50\ \text{g}$ on this balance has an
uncertainty $\delta M = 0.03\ \text{g}$, whereas a mass measurement $M = 150.26\ \text{g}$ has an uncertainty
$\delta M \approx 3\ \text{g}$ (which means that those two digits to the right of the decimal point are completely
meaningless!). So we would report the first measurement as $1.50 \pm 0.03\ \text{g}$, but the second
simply as $150 \pm 3\ \text{g}$.
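As a quick check of this arithmetic, here is a minimal Python sketch; the 2% accuracy and the
two mass readings are simply the example values above:

```python
# Convert a balance's fixed relative error into absolute errors.
REL_ERR = 0.02  # manufacturer's accuracy: delta_M / M = 2% for every reading

for M in (1.50, 150.26):              # measured masses in grams
    dM = REL_ERR * M                  # absolute error delta_M = 0.02 * M
    print(f"M = {M:>6} g  ->  delta_M = {dM:.2f} g")
# -> delta_M = 0.03 g for the first reading and about 3 g for the second,
#    so the second result should be reported simply as 150 +/- 3 g.
```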

                               *****   *****   *****   *****     *****

                     Part II: Determining Errors A Posteriori from Repeated Measurements

        Let’s now consider an example where we have trouble aligning a meter stick with an
object whose length we wish to measure. Repeated measurement trials may then lead to the
following table of measured lengths:

                 trial, $i$                1      2      3      4      5      6      7      8
            length, $L_i$ (in cm)        72.6   71.8   73.0   72.4   72.9   71.7   73.1   72.4
Clearly, no one of these values can be trusted to within the a priori uncertainty $\delta L = 0.1\ \text{cm}$ of
the meter stick. How, then, do we estimate the object's true length (call it $L_0$), and the
uncertainty $\delta L$ associated with that estimate of $L_0$?
        If we can reasonably assume that the cause(s) of the observed variation in the values $L_i$
are truly random, then the different trials $i$ are all independent of each other. Thus, we may
regard our $N = 8$ measurements as a sample of size $N$ drawn randomly from the whole universe
or “population” of such possible measurements. Most of these measurements, of course, cluster
about the true length $L_0$, but a few might lie quite far away from it. According to the theory of
independent random variables, the probability that a single measurement will yield a value in the
interval $(L - \tfrac{1}{2}dL,\ L + \tfrac{1}{2}dL)$ (for any particular numerical value of $L$) is given by a Gaussian
function
$$p(L)\,dL = \frac{1}{\sqrt{2\pi}\,\sigma_L}\, e^{-(L - L_0)^2 / 2\sigma_L^2}\, dL. \qquad \text{Eq.(1)}$$
Equivalently, we call $p(L)$ the probability density function for this measurement process: its
graph is the familiar bell-shaped curve shown below. Notice, however, that we know neither this
Gaussian's population mean $L_0$, nor its population standard deviation $\sigma_L$ (its “width”)!

[Figure: the bell-shaped Gaussian probability density $p(L)$, centered at the population mean $L_0$, with width set by $\sigma_L$.]

        It will probably come as no surprise to you that the best estimate we can make for the
object's true length $L_0$ (the population mean of the Gaussian function above) is the sample
mean $\bar{L}$ of our $N = 8$ measured values, defined as

$$\bar{L} = \frac{1}{N}\sum_{i=1}^{N} L_i. \qquad \text{Eq.(2)}$$

You should be able to check that, for the data tabulated above, the sample mean is
$\bar{L} = 72.5\ \text{cm}$. (Most calculators will do this computation at the push of a button, once you have
entered your data values $L_i$ in a list!)
       This much is simple enough, right? But how might we now assign a value to the
uncertainty $\delta L$ in that estimate of $L_0$? Let's tackle that question by thinking for a moment about
the measurement process that yields the sample mean $\bar{L}$: this mean is computed from the $N$
individual measurements $L_i$ according to Eq.(2) above. Note that each one of these
measurements $L_i$ can itself be regarded as a random variable with probability density function
$p(L)$ (i.e., a Gaussian function with population mean $L_0$ and population standard deviation
$\sigma_L$). After all, each time we repeat the set of measurements that yields $\bar{L}$, we will of course get
different values for each $L_i$. Now, according to some elementary theorems of statistics, we may
conclude from Eq.(2) that if each $L_i$ behaves like a Gaussian random variable, then $\bar{L}$ is also a
Gaussian random variable, whose population mean is
$$\langle \bar{L} \rangle = \frac{1}{N}\sum_{i=1}^{N} \langle L_i \rangle = \frac{1}{N}\, N L_0 = L_0. \qquad \text{Eq.(3)}$$

(Note the use of angle brackets here to indicate population means of the different random
variables.) The population standard deviation of $\bar{L}$ (called the standard error of the mean) is
$$\sigma_{\bar{L}} = \sqrt{\frac{1}{N^2}\sum_{i=1}^{N} \sigma_i^2} = \sqrt{\frac{N \sigma_L^2}{N^2}} = \frac{\sigma_L}{\sqrt{N}}. \qquad \text{Eq.(4)}$$

It is this standard error of the mean, $\sigma_{\bar{L}}$, that we want to estimate, and use for $\delta L$. (Think for a
moment about why that is true: $\sigma_{\bar{L}}$ is the width of the probability density function of our sample
mean $\bar{L}$.)
        We're not quite out of the woods yet: after all, we don't know the population standard
deviation $\sigma_L$ that enters the right-hand side of Eq.(4)! We must make an estimate of it, and the
best estimate we can get from our data is the sample standard deviation $s_L$, defined as¹
$$s_L = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N} (L_i - \bar{L})^2}. \qquad \text{Eq.(5)}$$

¹ The sample standard deviation $s_L$ should remind you of the “root-mean-square deviation of the sample
values from the sample mean,” which it is, except for the fact that there is an $N-1$ instead of $N$ in the
denominator. The (rather subtle) reason for that difference (which becomes negligible when $N$ is large!)
is that the computation of $s_L$ involves only $N-1$ “degrees of freedom,” not $N$ of them: the sample
mean $\bar{L}$ is not independent of the values $L_i$, but immediately determines the last measurement $L_N$ if
we know $L_1, L_2, \ldots, L_{N-1}$. (If for some strange reason we did happen to know the population mean
$L_0$ exactly, then it would be correct to use $L_0$ and $N$ in place of $\bar{L}$ and $N-1$ in the definition of $s_L$.)

Thus we arrive at the uncertainty estimate
$$\delta L = \sigma_{\bar{L}} = \frac{\sigma_L}{\sqrt{N}} \approx \frac{s_L}{\sqrt{N}}. \qquad \text{Eq.(6)}$$
        As an example, let's apply these equations to the data tabulated above. Using the
sample mean $\bar{L} = 72.5\ \text{cm}$ which we have already computed, you should be able to verify from
Eq.(5) that $s_L \approx 0.52\ \text{cm}$. [Again, most calculators will give you $s_L$ at the push of a button: just
be careful to select the “$s_x$” and not the “$\sigma_x$” key, since the latter uses $N$, not $N-1$, in the
denominator of Eq.(5)!] Inserting this estimate for $\sigma_L$ into Eq.(6), we arrive at an uncertainty of
$\delta L \approx 0.2\ \text{cm}$. Altogether, then, the best estimate of the true length from our measurements is
$$L = \bar{L} \pm \delta L = 72.5 \pm 0.2\ \text{cm}. \qquad \text{Eq.(7)}$$
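If you would rather check the whole computation in software than on a calculator, here is a
minimal Python sketch using the data from the table above; note that statistics.stdev uses the
$N-1$ denominator of Eq.(5), just like a calculator's $s_x$ key:

```python
from math import sqrt
from statistics import mean, stdev   # stdev uses the N-1 denominator, like s_x

L = [72.6, 71.8, 73.0, 72.4, 72.9, 71.7, 73.1, 72.4]  # measured lengths (cm)

L_bar = mean(L)              # sample mean, Eq.(2)
s_L   = stdev(L)             # sample standard deviation, Eq.(5)
dL    = s_L / sqrt(len(L))   # standard error of the mean, Eq.(6)

print(f"s_L = {s_L:.2f} cm")                 # -> s_L = 0.52 cm
print(f"L   = {L_bar:.1f} +/- {dL:.1f} cm")  # -> L = 72.5 +/- 0.2 cm
```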
Note that, because of the $\sqrt{N}$ in the denominator of Eq.(6), the uncertainty $\delta L$ decreases as we
take more measurements. That is physically reasonable: by taking more measurements, we tend
to “zero in” on the true length. People often summarize this phenomenon by saying that one
should take many measurements to “generate good statistics.” (It's somewhat unfortunate that
the denominator is $\sqrt{N}$ and not just $N$, because the former increases more slowly than $N$, but
that's simply an unavoidable fact of life!) You should not think, however, that we could decrease
$\delta L$ below the a priori uncertainty $\delta L = 0.1\ \text{cm}$ of the meter stick, just by making the number of
observations $N$ very large. The a priori uncertainty $\delta L = 0.1\ \text{cm}$ represents the lowest possible
uncertainty attainable with this meter stick, even if Eq.(6) should predict a lower number, since
none of the individual measurements $L_i$ can be known more accurately than that.

                               *****    *****   *****    *****      *****

                       Part III: Propagation of Errors through Calculations

1) Single-variable Functions
      Quite often, the quantities of interest to us must be obtained from computations involving
measured quantities. For example, we might measure the radius of a circle to be
$$r \pm \delta r = 0.24 \pm 0.01\ \text{m}, \qquad \text{Eq.(8)}$$
and then want to know the circle's area, which requires us to use the formula
$$A = \pi r^2. \qquad \text{Eq.(9)}$$
Clearly, the uncertainty $\delta r$ in our radius value will create some uncertainty $\delta A$ in the area, but
just how much? Notice that, from a mathematical point of view, $A$ is a function of $r$, and the
question we are asking is how much of a small change in $A$ is created by a small change in $r$.
But the ratio of such small changes is precisely what the derivative $dA/dr$ is! Consequently, we
could write
$$\delta A = \left| \frac{dA}{dr} \right| \delta r. \qquad \text{Eq.(10)}$$
Notice that we've put absolute-value bars around the derivative, because we regard both $\delta A$
and $\delta r$ as intrinsically positive quantities, even if the derivative should happen to be negative.
       We can now apply the general statement of Eq.(10) to our specific situation in Eqs.(8)
and (9) to conclude
$$A \pm \delta A = \pi r^2 \pm 2\pi r\,\delta r = \pi (0.24\ \text{m})^2 \pm 2\pi (0.24\ \text{m})(0.01\ \text{m}) = 0.181 \pm 0.015\ \text{m}^2. \qquad \text{Eq.(11)}$$
Notice that this result is consistent with what we would obtain by separately computing the
lower and upper bounds $\pi (r - \delta r)^2 \approx 0.166\ \text{m}^2$ and $\pi (r + \delta r)^2 \approx 0.196\ \text{m}^2$.
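Here is a minimal Python sketch of this single-variable propagation for the circle example; the
derivative $dA/dr = 2\pi r$ is entered by hand:

```python
from math import pi

r, dr = 0.24, 0.01           # measured radius and its uncertainty (m)

A  = pi * r**2               # Eq.(9):  A = pi r^2
dA = abs(2 * pi * r) * dr    # Eq.(10): delta_A = |dA/dr| * delta_r

print(f"A = {A:.3f} +/- {dA:.3f} m^2")   # -> A = 0.181 +/- 0.015 m^2

# Consistency check against the lower and upper bounds quoted in the text:
print(f"{pi*(r - dr)**2:.3f}  {pi*(r + dr)**2:.3f}")   # -> 0.166  0.196
```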

2) Multivariable Functions
       Most often, our calculations will involve not just one, but several measured quantities.
For example, an average speed might be obtained from the quotient $v = \Delta x / \Delta t$, where both
$\Delta x$ and $\Delta t$ are measured quantities with their own associated uncertainties. That raises the
question, “What should be assigned as the uncertainty of the final quotient $v$?”
        To answer this question, let's phrase it somewhat more generally: Suppose a quantity $z$
is a function of several other quantities $x_1, x_2, \ldots, x_n$, i.e., $z = f(x_1, x_2, \ldots, x_n)$. What
uncertainty $\delta z$ should be assigned to $z$ as a result of uncertainties $\delta x_i$ in each of the $x_i$? The
key is to recognize that each one of the $\delta x_i$ contributes an error term analogous to the single-
variable derivative of Eq.(10). This gives us the expression
$$\delta z = \left| \frac{\partial f}{\partial x_1} \right| \delta x_1 + \left| \frac{\partial f}{\partial x_2} \right| \delta x_2 + \cdots + \left| \frac{\partial f}{\partial x_n} \right| \delta x_n. \qquad \text{Eq.(12)}$$

In this equation, the symbol $\partial f / \partial x_i$ represents the partial derivative of the function $f$ with respect
to the variable $x_i$: i.e., the result of differentiating $f$ with respect to $x_i$ as if all the other variables
$x_j$ ($j \neq i$) were simply fixed constants.²
          For example, from the formula for the volume of a cylinder,
$$V = \pi r^2 h, \qquad \text{Eq.(13)}$$
we could form the partial derivatives
$$\frac{\partial V}{\partial r} = 2\pi r h, \qquad \frac{\partial V}{\partial h} = \pi r^2, \qquad \text{Eq.(14)}$$
and so obtain the error formula
$$\delta V = \left| \frac{\partial V}{\partial r} \right| \delta r + \left| \frac{\partial V}{\partial h} \right| \delta h = (2\pi r h)\,\delta r + (\pi r^2)\,\delta h. \qquad \text{Eq.(15)}$$
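Eq.(12) is easy to turn into a small general-purpose routine. The sketch below estimates each
partial derivative numerically with a central difference, so no derivatives need to be worked out
by hand. (The routine name propagate is ours, and the cylinder's height and the two
uncertainties are made-up numbers for illustration.)

```python
from math import pi

def propagate(f, values, errors, step=1e-6):
    """Linear error propagation, Eq.(12): dz = sum of |df/dx_i| * dx_i.
    Each partial derivative is estimated by a central finite difference."""
    dz = 0.0
    for i, dx_i in enumerate(errors):
        up, lo = list(values), list(values)
        up[i] += step
        lo[i] -= step
        df_dxi = (f(*up) - f(*lo)) / (2 * step)   # partial derivative df/dx_i
        dz += abs(df_dxi) * dx_i
    return dz

# Cylinder volume, Eqs.(13)-(15): V = pi r^2 h.
volume = lambda r, h: pi * r**2 * h
dV = propagate(volume, values=[0.24, 1.00], errors=[0.01, 0.005])
print(f"dV = {dV:.4f} m^3")   # agrees with (2 pi r h) dr + (pi r^2) dh = 0.0160
```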
        To see how the error formula of Eq.(12) works (and along the way derive some useful
special cases that are worth remembering!), we’ll apply it to a few particular cases that are often
encountered.

3) Addition and Subtraction
       If we are simply adding or subtracting two quantities $x$ and $y$, then we have the simple
two-variable function
$$z = x \pm y = f(x, y). \qquad \text{Eq.(16)}$$
Its partial derivatives are just
$$\frac{\partial f}{\partial x} = 1, \qquad \frac{\partial f}{\partial y} = \pm 1, \qquad \text{Eq.(17)}$$
so that Eq.(12) yields
$$z = x \pm y \;\Longrightarrow\; \delta z = \delta x + \delta y, \qquad \text{Eq.(18)}$$
           i.e., the total error of a sum or difference is simply the sum of the absolute errors.
      You might be wondering how (if at all!) this result relates to the simple rule of thumb you
may have learned in high school:
    “Keep only as many decimal places as are known to be accurate in all the numbers being added.”

² A more precise error estimate than Eq.(12) can be derived from the theory of how independent
Gaussian random variables combine. This theory states that one should “add the errors in quadrature,”
i.e., take the square root of the sum of the squares of the individual contributions:
$$\delta z = \sqrt{\sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^2 (\delta x_i)^2}.$$
In this course, however, we shall just content ourselves with using the simpler Eq.(12).

Let's examine the example

                        1.0126          $\delta x = 0.0001$
                      + 3.18            $\delta y = 0.01$
                      ≈ 4.19            $\delta z = 0.0101 \approx 0.01$                        Eq.(19)

As indicated in the column at the right, rounding the sum to the nearest 0.01 is exactly what we
should do to reflect the fact that its error is $\delta z = \delta x + \delta y$: the largest error (in this case $\delta y$)
tends to dominate the sum. (Adding the errors in quadrature leads to the same general
conclusion.) So the familiar rule of thumb is consistent with our more general theory for
combining uncertainties!
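A two-line Python check of Eq.(18) on this example:

```python
x, dx = 1.0126, 0.0001
y, dy = 3.18, 0.01

z, dz = x + y, dx + dy               # Eq.(18): absolute errors simply add
print(f"z = {z:.2f} +/- {dz:.2f}")   # -> z = 4.19 +/- 0.01
```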


4) Multiplication and Division of Powers
         As a second case, consider multiplication of powers of two quantities,
$$z = c\, x^m y^n = f(x, y), \qquad \text{Eq.(20)}$$
where $c$, $m$, and $n$ can be any constants. Note that this covers ordinary multiplication if
$m = n = 1$, and ordinary division if $m = 1,\ n = -1$. Here the partial derivatives are
$$\frac{\partial f}{\partial x} = m c x^{m-1} y^n = m\,\frac{z}{x} \qquad \text{and} \qquad \frac{\partial f}{\partial y} = n c x^m y^{n-1} = n\,\frac{z}{y}. \qquad \text{Eq.(21)}$$
Notice that both partial derivatives have been expressed quite simply in terms of $z$ itself. Also,
we needn't bother with a derivative $\partial f / \partial c$ because the uncertainty $\delta c$ of the constant $c$ will be
zero in Eq.(12). Now, inserting these derivatives into Eq.(12) and then dividing through by $z$
(you should write out the algebra here to make sure you follow it!), we get
$$z = c\, x^m y^n \;\Longrightarrow\; \frac{\delta z}{z} = |m|\, \frac{\delta x}{x} + |n|\, \frac{\delta y}{y}. \qquad \text{Eq.(22)}$$
        This is a surprisingly simple result, especially for the cases of ordinary multiplication or
division ($m = 1,\ n = \pm 1$), for which it just states that
$$z = c\,xy \quad \text{or} \quad z = c\,\frac{x}{y} \;\Longrightarrow\; \frac{\delta z}{z} = \frac{\delta x}{x} + \frac{\delta y}{y}, \qquad \text{Eq.(23)}$$
i.e., the relative error $\delta z / z$ of a product or quotient is the sum of the relative errors $\delta x / x$ and $\delta y / y$.
Make sure you understand the distinction between absolute and relative error. (Review Part I
above if necessary.) Note too that in the end, $\delta z$ does contain the constant $c$, since $z$ contains
it.
        Let’s now compare Eq.(23) with the rule of thumb you learned for multiplication and
division:

“Keep only as many significant figures in a product or quotient as are present in the factor with the
                                   fewest significant figures.”
Consider the example

          720.1            $\dfrac{\delta x}{x} = \dfrac{0.1}{720.1} \approx 0.0001 = \dfrac{1}{10{,}000}$      (4 sig. figs.)

        ×   6.3            $\dfrac{\delta y}{y} = \dfrac{0.1}{6.3} \approx 0.01 = \dfrac{1}{100}$               (2 sig. figs.)

   4536.63 ≈ 4500.         $\dfrac{\delta z}{z} = \dfrac{\delta x}{x} + \dfrac{\delta y}{y} \approx 0.0101 \approx 0.01$      (2 sig. figs.)      Eq.(24)
We see that the rule of thumb, “Round the product to 2 sig. figs.,” follows naturally from the fact
that Eq.(23) directs us to add the relative errors, and that sum is dominated by the quantity with
the largest relative error (i.e., the fewest significant figures—in this case, y). Notice how the rule
of thumb rounds off the relative errors to powers of ten: without this rounding, we would find
more precisely that
$$\frac{\delta z}{z} = \frac{\delta x}{x} + \frac{\delta y}{y} = \frac{0.1}{720.1} + \frac{0.1}{6.3} \approx 0.016. \qquad \text{Eq.(25)}$$
To recover the absolute error $\delta z$, we would of course have to multiply this relative error by the
value of $z$, to obtain
$$\delta z = 0.016\, z = 0.016 \times 4536.63 \approx 73. \qquad \text{Eq.(26)}$$

So simply rounding z to the nearest 100, as directed by the rule of thumb, was not so far off the
mark. (As in the addition/subtraction case, adding the relative errors in quadrature would not
substantially change this conclusion.)
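And the corresponding Python check of Eqs.(23)-(26) for the product example:

```python
x, dx = 720.1, 0.1
y, dy = 6.3, 0.1

z   = x * y
rel = dx / x + dy / y   # Eq.(23): relative errors of a product add
dz  = rel * z           # back to an absolute error, as in Eq.(26)

print(f"dz/z = {rel:.3f}")           # -> dz/z = 0.016
print(f"z = {z:.2f} +/- {dz:.0f}")   # -> z = 4536.63 +/- 73
```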

5) More Complicated Operations
         At this point, you should be ready to agree that our Eq.(12) for determining the
uncertainty in a computed quantity is really quite a powerful and natural extension of the rules of
thumb that you already knew for addition/subtraction and multiplication/division! Moreover,
Eq.(12) can handle cases more complicated than those simple operations. For example,
suppose you had measured the hypotenuse ($h = 6.7 \pm 0.1\ \text{cm}$) and one angle
($\theta = 0.52 \pm 0.05\ \text{rad}$) of a right triangle, and then computed the opposite side,
$l = h \sin\theta \approx 3.33\ \text{cm}$. What uncertainty should you assign to $l$? You should be able to verify
that Eq.(12) yields the answer
$$\delta l = (\sin\theta)\,\delta h + (h \cos\theta)\,\delta\theta \approx 0.34\ \text{cm}. \qquad \text{Eq.(27)}$$
But a final note of caution is needed here:
                           angles must be measured in radians (not degrees)

in order to apply Eq.(12) correctly! The reason is that derivative formulas such as
$$\frac{d}{d\theta} (\sin\theta) = \cos\theta$$
are valid only in radian measure. Review the definition of a derivative to see why that
statement is true!
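As a final check, here is a Python sketch of this triangle example; math.sin and math.cos expect
radians, which is exactly what Eq.(12) requires:

```python
from math import sin, cos

h,  dh  = 6.7, 0.1    # hypotenuse (cm)
th, dth = 0.52, 0.05  # angle in radians; degrees would make Eq.(27) wrong

l  = h * sin(th)                       # opposite side, l = h sin(theta)
dl = sin(th) * dh + h * cos(th) * dth  # Eq.(27)

print(f"l = {l:.2f} +/- {dl:.2f} cm")  # -> l = 3.33 +/- 0.34 cm
```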

				