Probability

Generating Functions

The Moments of Y
• We have referred to $E(Y)$ and $E(Y^2)$ as the first and
  second moments of Y, respectively. In general,
  $E(Y^k)$ is the kth moment of Y.
• Consider the series in which the moments of Y
  are incorporated into the coefficients:
$$ \sum_{k=0}^{\infty} \frac{t^k E(Y^k)}{k!} = 1 + tE(Y) + \frac{t^2 E(Y^2)}{2!} + \frac{t^3 E(Y^3)}{3!} + \cdots $$
Moment Generating Function
• If the sum converges for all t in some interval |t| < b,
  the series is called the moment-generating
  function, m(t), for the random variable Y:
$$ m(t) = 1 + tE(Y) + \frac{t^2 E(Y^2)}{2!} + \frac{t^3 E(Y^3)}{3!} + \cdots $$
• And we may note that, for each k,
$$ \frac{t^k E(Y^k)}{k!} = \frac{t^k \sum_y y^k p(y)}{k!} = \sum_y \frac{(ty)^k}{k!}\, p(y) $$
Moment Generating Function
• Hence, the moment-generating function is given by
$$ m(t) = 1 + tE(Y) + \frac{t^2 E(Y^2)}{2!} + \frac{t^3 E(Y^3)}{3!} + \cdots = \sum_{k=0}^{\infty} \sum_y \frac{(ty)^k}{k!}\, p(y) $$
$$ = \sum_y \sum_{k=0}^{\infty} \frac{(ty)^k}{k!}\, p(y) \qquad \text{(may rearrange, since finite for } |t| < b) $$
$$ = \sum_y e^{ty} p(y) = E[e^{tY}] $$
Moment Generating Function
• That is,
$$ m(t) = E[e^{tY}] = \sum_y e^{ty} p(y) = 1 + tE(Y) + \frac{t^2 E(Y^2)}{2!} + \frac{t^3 E(Y^3)}{3!} + \cdots $$
  is the power series whose coefficients involve the
  moments of Y.
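
As a quick numeric illustration (not from the original slides), the sketch below evaluates $m(t) = \sum_y e^{ty} p(y)$ directly for a small made-up pmf and checks it against the truncated moment series; the pmf values here are purely hypothetical.

    # A small hypothetical pmf on {0, 1, 2}; values chosen only for illustration.
    import math

    pmf = {0: 0.2, 1: 0.5, 2: 0.3}
    t = 0.4

    # Direct evaluation: m(t) = sum_y e^{ty} p(y)
    m_direct = sum(math.exp(t * y) * p for y, p in pmf.items())

    # Series evaluation: sum_k t^k E(Y^k) / k!
    def moment(k):
        return sum(y**k * p for y, p in pmf.items())

    m_series = sum(t**k * moment(k) / math.factorial(k) for k in range(30))

    print(m_direct, m_series)  # the two values agree to machine precision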
The kth Moment
• To retrieve the kth moment from the MGF,
  evaluate the kth derivative at t = 0:
$$ \frac{d^k[m(t)]}{dt^k} = \frac{k!\, t^0 E(Y^k)}{k!} + \frac{(k+1)!\, t^1 E(Y^{k+1})}{(k+1)!} + \frac{t^2 E(Y^{k+2})}{2!} + \cdots = E(Y^k) + tE(Y^{k+1}) + \frac{t^2 E(Y^{k+2})}{2!} + \cdots $$
• And so, letting t = 0:
$$ \left. \frac{d^k[m(t)]}{dt^k} \right|_{t=0} = E(Y^k) $$
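
This differentiation rule can also be checked symbolically. Here is an illustrative sketch (not from the slides) applying it to the Poisson MGF $e^{\lambda(e^t - 1)}$, whose first two moments are $\lambda$ and $\lambda^2 + \lambda$:

    import sympy as sp

    t, lam = sp.symbols('t lambda', positive=True)
    m = sp.exp(lam * (sp.exp(t) - 1))  # Poisson MGF

    # kth derivative of m at t = 0 recovers E(Y^k)
    EY  = sp.diff(m, t, 1).subs(t, 0)             # -> lambda
    EY2 = sp.expand(sp.diff(m, t, 2).subs(t, 0))  # -> lambda**2 + lambda

    print(EY, EY2)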
Geometric MGF
• For the geometric distribution,
$$ m(t) = E[e^{tY}] = \sum_y e^{ty} (q^{y-1} p) $$
$$ = e^t p + e^{2t} qp + e^{3t} q^2 p + e^{4t} q^3 p + \cdots $$
$$ = e^t p \left( 1 + e^t q + e^{2t} q^2 + e^{3t} q^3 + \cdots \right) $$
$$ = e^t p \left( \frac{1}{1 - e^t q} \right) = \frac{pe^t}{1 - qe^t} $$
Common MGFs
• The MGFs for some of the discrete distributions
  we’ve seen include:
  binomial: $m(t) = (pe^t + q)^n$
  geometric: $m(t) = \dfrac{pe^t}{1 - qe^t}$
  negative binomial: $m(t) = \left( \dfrac{pe^t}{1 - qe^t} \right)^r$
  Poisson: $m(t) = e^{\lambda(e^t - 1)}$
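
Any of these can be verified by summing the defining series symbolically; below is a sketch for the binomial case, with a small fixed n = 5 chosen purely for illustration.

    import sympy as sp

    t, p, q = sp.symbols('t p q', positive=True)
    y = sp.symbols('y', integer=True, nonnegative=True)
    n = 5  # small fixed n keeps the symbolic sum concrete

    # Binomial MGF: sum of e^{ty} C(n, y) p^y q^{n-y} over y = 0..n
    s = sp.summation(sp.exp(t*y) * sp.binomial(n, y) * p**y * q**(n - y), (y, 0, n))

    print(sp.expand(s - (p*sp.exp(t) + q)**n))  # -> 0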
Recognize the distribution
• Identify the distribution having the moment-
  generating function
$$ m(t) = \left( \frac{e^t}{4 - 3e^t} \right)^{20} $$
• Give the mean and variance for this distribution.
• Could use the derivatives, but is that necessary?
  (A derivative-based sketch follows below.)
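
Pattern-matching against the table of common MGFs is quicker, but derivatives do work; an illustrative sympy sketch (not from the original slides):

    import sympy as sp

    t = sp.symbols('t')
    m = (sp.exp(t) / (4 - 3*sp.exp(t)))**20

    EY  = sp.diff(m, t, 1).subs(t, 0)
    EY2 = sp.diff(m, t, 2).subs(t, 0)

    print(EY, sp.simplify(EY2 - EY**2))  # mean 80, variance 240

These values are consistent with the negative binomial form above with r = 20 and p = 1/4, for which E(Y) = r/p = 80 and V(Y) = rq/p² = 240.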
Geometric MGF
• Consider the MGF
$$ m(t) = \frac{\frac{1}{3} e^t}{1 - \frac{2}{3} e^t} = \frac{e^t}{3 - 2e^t} $$
• Use derivatives to determine the first and second
  moments:
$$ m'(t) = \frac{3e^t}{(3 - 2e^t)^2} $$
  And so,
$$ E(Y) = m'(0) = \frac{3e^0}{(3 - 2e^0)^2} = \frac{3}{1} = 3 $$
Geometric MGF
• Since
$$ m'(t) = \frac{3e^t}{(3 - 2e^t)^2}, $$
  we have
$$ m''(t) = \frac{3e^t (3 + 2e^t)}{(3 - 2e^t)^3} $$
  And so,
$$ E(Y^2) = m''(0) = \frac{3e^0 (3 + 2e^0)}{(3 - 2e^0)^3} = 15 $$
$$ V(Y) = E(Y^2) - [E(Y)]^2 = 15 - (3)^2 = 6 $$
Geometric MGF
• Since
$$ m(t) = \frac{\frac{1}{3} e^t}{1 - \frac{2}{3} e^t} = \frac{e^t}{3 - 2e^t} $$
  is the MGF for a geometric random variable with p = 1/3,
  our prior results tell us
$$ E(Y) = 1/p \quad \text{and} \quad V(Y) = (1 - p)/p^2 $$
$$ E(Y) = \frac{1}{1/3} = 3 \quad \text{and} \quad V(Y) = \frac{1 - \frac{1}{3}}{(1/3)^2} = \frac{2}{3} \cdot \frac{9}{1} = 6, $$
  which agree with the results above.
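
An empirical spot-check (illustrative, not part of the original slides): simulate a geometric random variable with p = 1/3 and compare the sample mean and variance with 3 and 6.

    import numpy as np

    rng = np.random.default_rng(0)
    # numpy's geometric counts the trial on which the first success occurs (1, 2, ...)
    sample = rng.geometric(p=1/3, size=200_000)

    print(sample.mean(), sample.var())  # approximately 3 and 6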
All the moments
• Although the mean and variance help to describe a
  distribution, they alone do not uniquely determine it.
• All the moments are necessary to uniquely
  determine a probability distribution.
• That is, if two random variables have equal MGFs
  (i.e., $m_Y(t) = m_Z(t)$ for |t| < b),
  then they have the same probability distribution.
m(aY+b)?
• For the random variable Y with MGF m(t),
  consider W = aY + b.
$$ m(t) = m_Y(t) = E[e^{tY}] $$
$$ m_W(t) = E[e^{t(aY + b)}] = E[e^{atY} e^{bt}] = e^{bt} E[e^{atY}] = e^{bt} m_Y(at) $$
• Exercise: construct the MGF for the random variable
  W = 2Y + 3, where Y is a geometric random variable
  with p = 4/5 (see the sketch below).
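
A sketch of that construction in sympy (the exercise values a = 2, b = 3, and p = 4/5 come from the slide; the code itself is illustrative):

    import sympy as sp

    t = sp.symbols('t')
    p = sp.Rational(4, 5)
    q = 1 - p

    mY = p * sp.exp(t) / (1 - q * sp.exp(t))  # geometric MGF
    mW = sp.exp(3*t) * mY.subs(t, 2*t)        # m_W(t) = e^{bt} m_Y(at), a = 2, b = 3

    EW = sp.diff(mW, t).subs(t, 0)
    print(sp.simplify(mW), EW)  # E(W) = 2 E(Y) + 3 = 2(5/4) + 3 = 11/2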
E(aY+b)
• Now, based on the MGF, we could again
  consider E(W) = E(aY + b).
$$ m_W'(t) = \frac{d}{dt}\left[ e^{bt} m_Y(at) \right] = e^{bt} m_Y'(at)(a) + m_Y(at)\, b e^{bt} = e^{bt} \left[ a\, m_Y'(at) + b\, m_Y(at) \right] $$
  And so, letting t = 0 (and recalling that $m_Y(0) = E[e^0] = 1$),
$$ E(W) = m_W'(0) = e^0 \left[ a\, m_Y'(0) + b\, m_Y(0) \right] = aE(Y) + b, $$
  as expected.
Tchebysheff’s Theorem
• For “bell-shaped” distributions, the empirical rule
  gave us a 68-95-99.7% rule for the probability that a
  value falls within 1, 2, or 3 standard deviations of
  the mean, respectively.
• When the distribution is not so bell-shaped,
  Tchebysheff tells us the probability of being
  within k standard deviations of the mean is
  at least 1 − 1/k², for k > 0:
$$ P(|Y - \mu| < k\sigma) \ge 1 - \frac{1}{k^2} \qquad \text{(remember, it’s just a lower bound)} $$
A Skewed Distribution
• Consider a binomial experiment with n = 10
  and p = 0.1, so that $\mu = np = 1$ and
  $\sigma = \sqrt{npq} = \sqrt{0.9} \approx 0.95$.

[Histogram of the binomial(n = 10, p = 0.1) probabilities.]

$$ P(|Y - 1| < 2(0.95)) \ge 1 - \frac{1}{2^2} = 0.75 $$
A Skewed Distribution
• Verify Tchebysheff’s lower bound for k = 2:
$$ P(|Y - 1| < 2(0.95)) = P(-0.9 < Y < 2.9) \ge 1 - \frac{1}{2^2} = 0.75 $$
  Since Y takes only the integer values 0, 1, ..., 10,
$$ P(-0.9 < Y < 2.9) = P(0 \le Y \le 2) = 0.34868 + 0.38742 + 0.19371 \approx 0.93, $$
  comfortably above the lower bound of 0.75.

				