
Chapter 6, Section 5
Transformations of Variables

Method of Moment-Generating Functions

John J Currano, 04/16/2010                                                   1
         Method of Moment-Generating Functions

The crucial theorem is:

Theorem 6.1 (p. 318): If X and Y are random variables which both have
moment-generating functions, and if

    $m_X(t) = m_Y(t)$ for all t in some interval around t = 0,

then X and Y have the same probability distribution.




                                                               2
              Method of Moment-Generating Functions

Some other useful facts:

1. If U = aY + b, then

   $$m_U(t) = E\left(e^{tU}\right) = E\left(e^{aYt + bt}\right) = e^{bt}\, E\left(e^{Y(at)}\right) = e^{bt}\, m_Y(at).$$


2. If Y_1, Y_2, …, Y_n are independent and U = Y_1 + Y_2 + ⋯ + Y_n, then

   $$m_U(t) = E\left(e^{tU}\right) = E\left(e^{tY_1 + tY_2 + \cdots + tY_n}\right) = E\left(e^{tY_1} e^{tY_2} \cdots e^{tY_n}\right)$$
   $$= E\left(e^{tY_1}\right) E\left(e^{tY_2}\right) \cdots E\left(e^{tY_n}\right) \qquad \text{by independence}$$
   $$= m_{Y_1}(t)\, m_{Y_2}(t) \cdots m_{Y_n}(t).$$


                                                                                    3
             Method of Moment-Generating Functions

Some other useful facts:
3. If Y_1, Y_2, …, Y_n are independent and $U = \sum_{i=1}^{n} a_i Y_i$,

   then $m_U(t) = \prod_{i=1}^{n} m_{Y_i}(a_i t)$.




4. If Y_1, Y_2, …, Y_n are independent and identically distributed (iid)
   with the common distribution of Y, i.e., a random sample from Y, and
   U = Y_1 + Y_2 + ⋯ + Y_n, then $m_U(t) = \left[m_Y(t)\right]^n$.




                                                                         4
                 Method of Moment-Generating Functions


Example 1. Suppose that Y1, Y2, … , Ym are independent binomial RVs
with Yi ~ bin(ni, p) [same p]. Then for i = 1, 2, … , m,

                                        
   $$m_{Y_i}(t) = \left(q + p e^t\right)^{n_i}.$$

Let $Y = \sum_{i=1}^{m} Y_i$. Then by Property 2 on slide 3,

   $$m_Y(t) = \prod_{i=1}^{m} m_{Y_i}(t) = \left(q + p e^t\right)^{n_1 + n_2 + \cdots + n_m},$$

so Y has a binomial distribution with $n = \sum_{i=1}^{m} n_i$ trials and probability of
success, p. Thus, the sum of independent binomials with the same
probability of success, p, is also binomial.
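This closure property can be sketched numerically (with assumed parameters n1 = 3, n2 = 5, p = 0.4, and SciPy available): the pmf of a sum of independent random variables is the convolution of their pmfs, which should match the binomial pmf the mgf argument predicts.

```python
import numpy as np
from scipy.stats import binom

# Assumed example: Y1 ~ bin(3, 0.4) and Y2 ~ bin(5, 0.4), independent.
p = 0.4
pmf1 = binom.pmf(np.arange(4), 3, p)    # support 0..3
pmf2 = binom.pmf(np.arange(6), 5, p)    # support 0..5

# The pmf of Y1 + Y2 is the convolution of the two pmfs (support 0..8).
conv = np.convolve(pmf1, pmf2)
target = binom.pmf(np.arange(9), 8, p)  # bin(n1 + n2, p), as predicted
assert np.allclose(conv, target)
```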

                                                                              5
Example 2. Let  Y ~ NegBin(r, p),
                X_0 = 0, and
                X_i = # of the trial on which the i-th success occurs.
Then
                Y_i = X_i − X_{i−1} ~ Geom(p) for i = 1, 2, …, r, and
                Y_1, Y_2, …, Y_r are independent.
Also,
   $$\sum_{i=1}^{r} Y_i = \sum_{i=1}^{r} \left(X_i - X_{i-1}\right) = X_r - X_0 = X_r - 0 = Y \qquad \text{(telescoping sum)}.$$

Thus, by Property 2 on slide 3,
   $$m_Y(t) = \prod_{i=1}^{r} m_{Y_i}(t) = \left(\frac{p e^t}{1 - q e^t}\right)^{\!r}.$$

We have found the moment-generating functions of the negative
binomial distributions using those of the geometric distributions.
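A numerical sketch of this mgf (with assumed parameters r = 3, p = 0.3, t = 0.1, and SciPy available): note that scipy's `nbinom` counts failures, while the slides' NegBin(r, p) counts trials, so the trial count is the failure count plus r.

```python
import numpy as np
from scipy.stats import nbinom

# Assumed example parameters; the mgf converges for t < -ln(q).
r, p, t = 3, 0.3, 0.1
q = 1 - p

ks = np.arange(0, 2000)                 # failure counts 0, 1, 2, ...
pmf = nbinom.pmf(ks, r, p)
# E[e^{tY}] with Y = number of trials = failures + r (truncated series).
mgf_numeric = np.sum(np.exp(t * (ks + r)) * pmf)
mgf_formula = (p * np.exp(t) / (1 - q * np.exp(t))) ** r
assert np.isclose(mgf_numeric, mgf_formula)
```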
                                                                                 6
Example 2 (continued). Y ~ NegBin(r, p), $m_Y(t) = \left(\dfrac{p e^t}{1 - q e^t}\right)^{\!r}$.

We can now use the mgf of Y to find its mean and variance.
For example,

$$E(Y) = m_Y'(0) = r \left(\frac{p e^0}{1 - q e^0}\right)^{r-1} \cdot \frac{p e^0 \left(1 - q e^0\right) + p e^0 \cdot q e^0}{\left(1 - q e^0\right)^2}$$

$$= r \left(\frac{p}{1 - q}\right)^{r-1} \cdot \frac{p(1 - q) + pq}{(1 - q)^2} = r \left(\frac{p}{p}\right)^{r-1} \cdot \frac{p - pq + pq}{p^2} = r \cdot \frac{p}{p^2} = \frac{r}{p}.$$
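The result E(Y) = r/p can be checked against SciPy's negative binomial moments (assumed parameters r = 3, p = 0.3; recall scipy's `nbinom` counts failures, so the trial count adds r).

```python
from scipy.stats import nbinom

# Assumed example parameters.
r, p = 3, 0.3
# nbinom.mean gives E[failures] = r(1-p)/p; adding r gives E[trials] = r/p.
assert abs((nbinom.mean(r, p) + r) - r / p) < 1e-9
```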

                                                                                7
Example 3. Let Y ~ N(μ, σ²) and $Z = \dfrac{Y - \mu}{\sigma} = \dfrac{1}{\sigma}\left(Y - \mu\right)$.

Then $m_Y(t) = \exp\left(\mu t + \tfrac{1}{2}\sigma^2 t^2\right)$,

so by Property 1 on slide 3 (with a = 1/σ and b = −μ/σ),

$$m_Z(t) = \exp\left(-\frac{\mu}{\sigma} t\right) m_Y\left(\frac{1}{\sigma} t\right) = \exp\left(-\frac{\mu}{\sigma} t\right) \exp\left(\mu \cdot \frac{t}{\sigma} + \frac{1}{2}\sigma^2 \cdot \frac{t^2}{\sigma^2}\right) = \exp\left(\tfrac{1}{2} t^2\right).$$

So Z ~ N(0, 1): Z has a standard normal distribution. We proved this
fact in Chapter 4 using the Distribution Function method; this is simpler.

Transforming a random variable Y by subtracting its mean and dividing
by its standard deviation is called standardizing Y.
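The standardization identity can be sketched numerically (assumed example values μ = 2, σ = 1.5, t = 0.7): applying Property 1 to the normal mgf should give exp(t²/2) exactly.

```python
import numpy as np

# Assumed example parameters for Y ~ N(mu, sigma^2).
mu, sigma = 2.0, 1.5
m_Y = lambda t: np.exp(mu * t + 0.5 * sigma**2 * t**2)

t = 0.7
# Property 1 with a = 1/sigma, b = -mu/sigma gives the mgf of Z = (Y - mu)/sigma.
m_Z = np.exp(-mu * t / sigma) * m_Y(t / sigma)
assert np.isclose(m_Z, np.exp(0.5 * t**2))
```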
                                                                               8
Example 4. Let Z ~ N(0,1) and Y =  +  Z .

Then m Z (t )  exp      1
                          2
                                
                              t2 ,

so m Y (t )  exp  t  m Z  t                by Property 1 on slide 3

              exp  t  exp    (      1
                                        2      )
                                             t 2

                 (
              exp  t             1
                                    2
                                        2   t)
                                              2



So Y ~ N( ,  2) : Y has a normal distribution with mean  and
variance  2. We also proved this in Chapter 4 using the
Distribution Function method – again this method is simpler.

Transforming a standard normal random variable Z in this fashion
is a way of simulating other normal random variables.
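A minimal simulation sketch of this (assumed target parameters μ = 5, σ = 2, with NumPy's generator): transforming standard normal draws by μ + σZ should produce samples whose mean and standard deviation match the target.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0                 # assumed target parameters

z = rng.standard_normal(1_000_000)   # Z ~ N(0, 1)
y = mu + sigma * z                   # simulate Y ~ N(mu, sigma^2)

assert abs(y.mean() - mu) < 0.01
assert abs(y.std() - sigma) < 0.01
```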

                                                                             9
Example 5. If Y_i ~ N(μ_i, σ_i²) are independent for i = 1, 2, …, n,

then $m_{Y_i}(t) = \exp\left(\mu_i t + \tfrac{1}{2}\sigma_i^2 t^2\right)$ for $1 \le i \le n$. Let $Y = \sum_{i=1}^{n} a_i Y_i$.

Then by Property 3 on slide 4,

$$m_Y(t) = \prod_{i=1}^{n} m_{Y_i}(a_i t) = \prod_{i=1}^{n} \exp\left(\mu_i a_i t + \tfrac{1}{2}\sigma_i^2 (a_i t)^2\right) = \exp\left(t \sum_{i=1}^{n} a_i \mu_i + \tfrac{1}{2} t^2 \sum_{i=1}^{n} a_i^2 \sigma_i^2\right),$$

so $Y \sim N\left(\sum_{i=1}^{n} a_i \mu_i,\ \sum_{i=1}^{n} a_i^2 \sigma_i^2\right)$. Thus, a linear combination of
independent normal random variables is also normal.
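A simulation sketch of this fact (assumed example means, standard deviations, and coefficients below): the sample mean and variance of a simulated linear combination should match the formulas above.

```python
import numpy as np

rng = np.random.default_rng(1)
mus = np.array([1.0, -2.0, 0.5])     # assumed means mu_i
sigmas = np.array([1.0, 0.5, 2.0])   # assumed std devs sigma_i
a = np.array([2.0, 3.0, -1.0])       # assumed coefficients a_i

# Each row is one draw of (Y1, Y2, Y3); the @ forms sum a_i * Y_i per row.
samples = rng.normal(mus, sigmas, size=(1_000_000, 3)) @ a

mean_theory = a @ mus                # sum a_i * mu_i
var_theory = (a**2) @ sigmas**2      # sum a_i^2 * sigma_i^2
assert abs(samples.mean() - mean_theory) < 0.02
assert abs(samples.var() - var_theory) < 0.1
```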


                                                                                                   10
Special Case. Suppose Y_1, Y_2, …, Y_n are a random sample from
a N(μ, σ²)-distribution, and $\bar{Y} = \frac{1}{n}\sum_{i=1}^{n} Y_i$, the sample mean.

Then by Example 5 (with each $a_i = 1/n$), $\bar{Y} \sim N\left(\mu,\ \sigma^2/n\right)$.

Thus the sample mean of a random sample of size n from a normal
distribution also has a normal distribution.

Its mean is the same as that of the original distribution, and its
variance is smaller: σ²/n instead of σ².

This will be used often next semester.
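A quick simulation sketch of the special case (assumed values μ = 10, σ = 3, n = 25): sample means of size-n normal samples should have mean μ and variance σ²/n.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n = 10.0, 3.0, 25          # assumed example parameters

# 200,000 samples of size n; average across each sample.
ybar = rng.normal(mu, sigma, size=(200_000, n)).mean(axis=1)

assert abs(ybar.mean() - mu) < 0.01
assert abs(ybar.var() - sigma**2 / n) < 0.01
```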



                                                                        11
Example 6. (p. 319, Example 6.11)

Let Z ~ N(0, 1), so $f(z) = \dfrac{1}{\sqrt{2\pi}}\, e^{-z^2/2}$, and let $Y = Z^2$. Then

$$m_Y(t) = E\left(e^{tZ^2}\right) = \int_{-\infty}^{\infty} e^{tz^2}\, \frac{1}{\sqrt{2\pi}}\, e^{-z^2/2}\, dz = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-(1-2t) z^2 / 2}\, dz$$

$$= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-z^2 / \left(2 (1-2t)^{-1}\right)}\, dz.$$



                                                                          12
Example 6 (continued). Let Z ~ N(0, 1) and $Y = Z^2$.

Then $m_Y(t) = \displaystyle\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-z^2 / \left(2(1-2t)^{-1}\right)}\, dz$.

The integrand is proportional to the density function of the normal
distribution with μ = 0 and σ² = (1 − 2t)^{−1}. Thus, if we multiply by
$(1-2t)^{1/2}$ inside the integral and by $(1-2t)^{-1/2}$ outside, we obtain

$$m_Y(t) = (1-2t)^{-1/2} \cdot 1 = (1-2t)^{-1/2} \qquad \left(t < \tfrac{1}{2}\right).$$

Therefore, $Y = Z^2$ has a χ²(1)-distribution.

We proved this in Chapter 4 using the Distribution Function method.
Once again, this method is simpler.
                                                                        13

								