# Introduction to Mathematical Statistics, Lecture 3

Orhan Erdem
Istanbul Bilgi University
October 24, 2007

## 1 Multivariate Distributions

### 1.1 Distribution of Two Random Variables

**Definition 1** A *random vector* is a pair $(X_1, X_2)$ of random variables. A non-negative integrable function $f$ defined on $\mathbb{R}^2$ whose integral equals one is called a *probability density function* (pdf). For a region $A \subset \mathbb{R}^2$,

$$P\{(X_1, X_2) \in A\} = \iint_A f(x_1, x_2)\,dx_1\,dx_2,$$

provided that $A$ is sufficiently regular for the integral to exist.

$$P\{a_1 < X_1 \le b_1,\; a_2 < X_2 \le b_2\} = \int_{a_1}^{b_1}\!\!\int_{a_2}^{b_2} f(x_1, x_2)\,dx_2\,dx_1 = F(b_1, b_2) - F(a_1, b_2) - F(b_1, a_2) + F(a_1, a_2)$$

for all combinations $a_i < b_i$. Letting $a_1 = a_2 = -\infty$ we get the cdf $F$ of $f$, namely

$$F(x_1, x_2) = P\{X_1 \le x_1,\; X_2 \le x_2\} \qquad (1)$$

Here $F$ is called the joint cumulative distribution function.
If the sample space is finite we deal with discrete random variables, and their joint probability mass function is

$$f(x_1, x_2) = P[X_1 = x_1,\; X_2 = x_2].$$

At points of continuity of $f(x_1, x_2)$ we have

$$\frac{\partial^2 F(x_1, x_2)}{\partial x_1\,\partial x_2} = f(x_1, x_2).$$

A pdf is essentially characterized by two properties:

1. $f(x_1, x_2) \ge 0$
2. $\iint f(x_1, x_2)\,dx_1\,dx_2 = 1$

**Example 2** $f(x, y) = \dfrac{1}{2\pi\sqrt{1-\rho^2}} \exp\left(-\dfrac{1}{2(1-\rho^2)}\left(x^2 - 2\rho xy + y^2\right)\right)$. Let $\rho = 0.5$.

[Figure: surface plot of the multivariate normal density with $\rho = 0.5$]
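As a quick sanity check, the density above can be integrated numerically. This is a minimal sketch, assuming SciPy is available; the box $[-8, 8]^2$ is an arbitrary choice wide enough to hold essentially all of the mass.

```python
import math
from scipy.integrate import dblquad

RHO = 0.5  # the correlation used in Example 2

def f(x, y, rho=RHO):
    """Bivariate normal density of Example 2."""
    c = 1.0 / (2 * math.pi * math.sqrt(1 - rho**2))
    return c * math.exp(-(x**2 - 2 * rho * x * y + y**2) / (2 * (1 - rho**2)))

# dblquad integrates func(y, x) over x in [a, b], y in [gfun(x), hfun(x)].
total, _ = dblquad(lambda y, x: f(x, y), -8, 8, lambda x: -8, lambda x: 8)
print(round(total, 6))  # ≈ 1.0, as property 2 of a pdf requires
```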

**Example 3** 2.1.1 and 2.1.2 from HMC.

Using (1) we can say that $P\{X_1 \le x_1\} = F(x_1, \infty)$. The marginal probability density function of $X_1$ is then defined as

$$f_{X_1}(x_1) = \int_{-\infty}^{\infty} f(x_1, x_2)\,dx_2.$$

**Example 4** (2.1.4 from HMC) $f(x, y) = x + y$, $0 < x, y < 1$.

[Figure: surface plot of $f(x, y) = x + y$ on the unit square]

$$f_X(x) = \int_{-\infty}^{\infty} f(x, y)\,dy = \int_0^1 (x + y)\,dy = x + \frac{1}{2}$$

[Figure: graph of the marginal density $f_X(x) = x + \frac{1}{2}$]

$$f_Y(y) = \int_{-\infty}^{\infty} f(x, y)\,dx = \int_0^1 (x + y)\,dx = y + \frac{1}{2}$$

$$P\left(X \le \tfrac{1}{2}\right) = \int_0^{1/2}\!\!\int_0^1 f(x, y)\,dy\,dx = \int_0^{1/2}\left(\tfrac{1}{2} + x\right)dx = \frac{3}{8}$$
To calculate a probability like $P(X + Y \le 1)$, we must use

$$P(X + Y \le 1) = \int_0^1\!\!\int_0^{1-x} (x + y)\,dy\,dx = \frac{1}{3}$$
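Both probabilities can be checked numerically. A minimal sketch, assuming SciPy is available:

```python
from scipy.integrate import dblquad

f = lambda x, y: x + y  # joint density of Example 4 on the unit square

# P(X <= 1/2): y runs over [0, 1] while x runs over [0, 1/2].
p1, _ = dblquad(lambda y, x: f(x, y), 0, 0.5, lambda x: 0, lambda x: 1)

# P(X + Y <= 1): for each x in [0, 1], y runs from 0 to 1 - x.
p2, _ = dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0, lambda x: 1 - x)

print(p1, p2)  # 0.375 and ≈ 0.333333, matching 3/8 and 1/3
```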

**Example 5** $f(x, y) = \dfrac{1}{y} \exp\left(-\dfrac{x}{y} - y\right)$, $0 < x, y < \infty$.

[Figure: surface plot of $\frac{1}{y}\exp\left(-\frac{x}{y} - y\right)$]

Find the marginal density of $Y$.
Solution:

$$f_Y(y) = \int_{-\infty}^{\infty} f(x, y)\,dx = \int_0^{\infty} \frac{1}{y}\exp\left(-\frac{x}{y} - y\right)dx = e^{-y}\int_0^{\infty}\frac{1}{y}\,e^{-x/y}\,dx = e^{-y}\left[-e^{-x/y}\right]_0^{\infty} = e^{-y},$$

hence $Y$ is exponentially distributed.
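The marginal just derived can be verified numerically at a few points. A minimal sketch, assuming SciPy is available; the test points are arbitrary:

```python
import math
from scipy.integrate import quad

def f(x, y):
    """Joint density of Example 5, supported on 0 < x, y < infinity."""
    return (1.0 / y) * math.exp(-x / y - y)

# Integrate x out at fixed y and compare with the claimed marginal e^{-y}.
marginals = {}
for y in (0.5, 1.0, 2.0):
    fy, _ = quad(lambda x, y=y: f(x, y), 0, math.inf)
    marginals[y] = fy
    print(y, round(fy, 6), round(math.exp(-y), 6))
```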
**Example 6** (Bivariate normal distribution) Writing $x^2 - 2\rho xy + y^2 = (x - \rho y)^2 + (1 - \rho^2)y^2$ and completing the square,

$$f_Y(y) = \int_{-\infty}^{\infty} f(x, y)\,dx = \int_{-\infty}^{\infty} \frac{1}{2\pi\sqrt{1-\rho^2}} \exp\left(-\frac{1}{2(1-\rho^2)}\left(x^2 - 2\rho xy + y^2\right)\right)dx = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{y^2}{2}\right)$$

(check using $\int_{-\infty}^{\infty} e^{-x^2/2}\,dx = \sqrt{2\pi}$), which is $N(0,1)$.
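The same calculation can be done numerically: integrating $x$ out of the joint density should leave the $N(0,1)$ density regardless of $\rho$. A minimal sketch, assuming SciPy is available; the value $\rho = 0.5$ and the test points are arbitrary:

```python
import math
from scipy.integrate import quad

RHO = 0.5

def f(x, y, rho=RHO):
    """Bivariate normal density with correlation rho (Example 2)."""
    c = 1.0 / (2 * math.pi * math.sqrt(1 - rho**2))
    return c * math.exp(-(x**2 - 2 * rho * x * y + y**2) / (2 * (1 - rho**2)))

def std_normal(t):
    """N(0,1) density."""
    return math.exp(-t**2 / 2) / math.sqrt(2 * math.pi)

checks = []
for y in (-1.0, 0.0, 2.0):
    fy, _ = quad(lambda x, y=y: f(x, y), -math.inf, math.inf)
    checks.append(abs(fy - std_normal(y)))
print(max(checks))  # tiny: the marginal of Y matches N(0,1)
```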

### 1.2 Expectation

Let $Y = g(X_1, X_2)$ for some real-valued function, i.e. $g : \mathbb{R}^2 \to \mathbb{R}$. Then $E(Y)$ exists if

$$\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} |g(x_1, x_2)|\, f(x_1, x_2)\,dx_1\,dx_2 < \infty.$$

Then

$$E(Y) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} g(x_1, x_2)\, f(x_1, x_2)\,dx_1\,dx_2.$$

Likewise, in the discrete case,

$$E(Y) = \sum_{x_1}\sum_{x_2} g(x_1, x_2)\, f(x_1, x_2).$$

The expectation $\mu_1$ and variance $\sigma_1^2$ of $X_1$, if they exist, are given by

$$\mu_1 = E(X_1) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} x_1\, f(x_1, x_2)\,dx_1\,dx_2$$

and

$$\sigma_1^2 = E[(X_1 - \mu_1)^2] = Var(X_1) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} (x_1 - \mu_1)^2 f(x_1, x_2)\,dx_1\,dx_2.$$

By symmetry, these definitions apply also to $X_2$.
Note that the expected value of any function $g(X_2)$ can be found in two ways:

$$E(g(X_2)) = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} g(x_2)\, f(x_1, x_2)\,dx_1\,dx_2 = \int_{-\infty}^{\infty} g(x_2)\, f_{X_2}(x_2)\,dx_2$$
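The two routes can be compared on Example 4 with $g$ the identity, i.e. $E(Y)$ computed against the joint density and against the marginal $f_Y(y) = y + \frac{1}{2}$. A minimal sketch, assuming SciPy is available:

```python
from scipy.integrate import quad, dblquad

f = lambda x, y: x + y   # joint density of Example 4
fY = lambda y: y + 0.5   # its marginal for Y

# Way 1: double integral of y * f(x, y) over the unit square.
e1, _ = dblquad(lambda y, x: y * f(x, y), 0, 1, lambda x: 0, lambda x: 1)
# Way 2: single integral against the marginal density.
e2, _ = quad(lambda y: y * fY(y), 0, 1)

print(e1, e2)  # both ≈ 0.583333, i.e. 7/12
```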

**Example 7** Continuation of 2.1.5 from HMC.

**Definition 8** (MGF of a random vector) Let $X = (X_1, X_2)'$ be a random vector. If $E(e^{t_1 X_1 + t_2 X_2})$ exists for $|t_1| < h_1$ and $|t_2| < h_2$, where $h_1$ and $h_2$ are positive, it is denoted by $M_{X_1, X_2}(t_1, t_2)$ and is called the mgf of $X$.

The covariance of $X_1$ and $X_2$ is

$$\sigma_{12} = Cov(X_1, X_2) = E[(X_1 - \mu_1)(X_2 - \mu_2)] = \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} (x_1 - \mu_1)(x_2 - \mu_2)\, f(x_1, x_2)\,dx_1\,dx_2$$

The normalized variables $\frac{X_i - \mu_i}{\sigma_i}$ are dimensionless, and their covariance

$$\rho = \frac{Cov(X_1, X_2)}{\sigma_1 \sigma_2}$$

is the correlation coefficient of $X_1$ and $X_2$.

**Example 9** (Covariance of $X, Y$ in the bivariate normal case)

$$\begin{aligned}
Cov(X, Y) &= \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} xy\, f(x, y)\,dx\,dy \\
&= \int_{-\infty}^{\infty} y\, \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}y^2} \left( \int_{-\infty}^{\infty} x\, \frac{1}{\sqrt{2\pi(1-\rho^2)}} \exp\left(-\frac{(x - \rho y)^2}{2(1-\rho^2)}\right) dx \right) dy \\
&= \int_{-\infty}^{\infty} y\, \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}y^2}\, \rho y\, dy \\
&= \rho \int_{-\infty}^{\infty} y^2\, \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}y^2}\, dy \\
&= \rho
\end{aligned}$$

The third equality comes from the fact that the term in the big brackets is the mean of that distribution, which is $\rho y$. The fourth equality comes from the fact that the integral is the variance of $Y$, which is 1.
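The result $Cov(X, Y) = \rho$ can be checked numerically. A minimal sketch, assuming SciPy is available; since $E(X) = E(Y) = 0$ here, the covariance is just $E(XY)$, and the box $[-8, 8]^2$ is an arbitrary truncation of the plane:

```python
import math
from scipy.integrate import dblquad

def f(x, y, rho):
    """Standard bivariate normal density with correlation rho."""
    c = 1.0 / (2 * math.pi * math.sqrt(1 - rho**2))
    return c * math.exp(-(x**2 - 2 * rho * x * y + y**2) / (2 * (1 - rho**2)))

rho = 0.5
# Cov(X, Y) = E(XY) because both means are zero.
cov, _ = dblquad(lambda y, x: x * y * f(x, y, rho),
                 -8, 8, lambda x: -8, lambda x: 8)
print(cov)  # ≈ 0.5 = rho
```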

### 1.3 Independence

Remember: a finite collection $A_1, A_2, \ldots, A_n$ of events is independent if

$$P(A_{k_1} \cap \cdots \cap A_{k_j}) = P(A_{k_1}) \cdots P(A_{k_j})$$

for $2 \le j \le n$ and $1 \le k_1 < \cdots < k_j \le n$.
When $n = 2$, two events are independent if

$$P(A_1 \cap A_2) = P(A_1)\,P(A_2).$$

**Theorem 10** In a bivariate distribution for $(X, Y)$, $X$ and $Y$ are independent if and only if

$$F(x_1, x_2) = F(x_1)\,F(x_2) \qquad \text{for all } x_1, x_2.$$
Proof.

1. To show that independence implies factorization, take $A_1 = [X_1 \le x_1]$ and $A_2 = [X_2 \le x_2]$:

$$\begin{aligned}
P(A_1 \cap A_2) &= P(A_1)\,P(A_2) \\
P([X_1 \le x_1],\, [X_2 \le x_2]) &= P(X_1 \le x_1)\,P(X_2 \le x_2) \\
F(x_1, x_2) &= F(x_1)\,F(x_2)
\end{aligned}$$

Taking the derivative of both sides with respect to both variables,

$$\frac{\partial^2}{\partial x_1\,\partial x_2} F(x_1, x_2) = \frac{\partial^2}{\partial x_1\,\partial x_2} F(x_1)\,F(x_2)$$

$$f(x_1, x_2) = f(x_1)\,f(x_2)$$

2. To show that factorization implies independence, assume $F(x_1, x_2) = F(x_1)F(x_2)$ holds. Then

$$\begin{aligned}
P(A_1 \cap A_2) &= F(x_1, x_2) \\
&= F(x_1)\,F(x_2) \\
&= P(X_1 \le x_1)\,P(X_2 \le x_2) \\
&= P(A_1)\,P(A_2)
\end{aligned}$$

Or define $A_1 = [a_1 < X_1 \le b_1]$ and $A_2 = [a_2 < X_2 \le b_2]$:

$$\begin{aligned}
P(A_1 \cap A_2) &= P\{a_1 < X_1 \le b_1,\; a_2 < X_2 \le b_2\} = F(b_1, b_2) - F(a_1, b_2) - F(b_1, a_2) + F(a_1, a_2) \\
&= F_{X_1}(b_1) F_{X_2}(b_2) - F_{X_1}(a_1) F_{X_2}(b_2) - F_{X_1}(b_1) F_{X_2}(a_2) + F_{X_1}(a_1) F_{X_2}(a_2) \\
&= F_{X_2}(b_2)\{F_{X_1}(b_1) - F_{X_1}(a_1)\} - F_{X_2}(a_2)\{F_{X_1}(b_1) - F_{X_1}(a_1)\} \\
&= \{F_{X_1}(b_1) - F_{X_1}(a_1)\}\{F_{X_2}(b_2) - F_{X_2}(a_2)\} \\
&= P(a_1 < X_1 \le b_1)\,P(a_2 < X_2 \le b_2) \\
&= P(A_1)\,P(A_2)
\end{aligned}$$

As a result,

$$P(A_1 \cap A_2) = P(A_1)P(A_2) \iff F(x_1, x_2) = F(x_1)F(x_2) \iff f(x_1, x_2) = f(x_1)f(x_2)$$

**Example 11** $f(x, y) = \dfrac{1}{2\pi\sqrt{1-\rho^2}} \exp\left(-\dfrac{1}{2(1-\rho^2)}\left(x^2 - 2\rho xy + y^2\right)\right)$, and let $\rho = 0$. Then

$$f(x, y) = \frac{1}{2\pi} \exp\left(-\tfrac{1}{2}x^2\right) \exp\left(-\tfrac{1}{2}y^2\right) = f_X(x)\,f_Y(y)$$

and so $X$ and $Y$ are independent. Thus standard bivariate normal variables are independent if and only if they are uncorrelated.
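The factorization criterion can be probed pointwise: at $\rho = 0$ the joint density equals the product of the $N(0,1)$ marginals, while at $\rho = 0.5$ it does not. A minimal sketch; the test point $(1, -0.5)$ is arbitrary:

```python
import math

def f(x, y, rho):
    """Standard bivariate normal density with correlation rho."""
    c = 1.0 / (2 * math.pi * math.sqrt(1 - rho**2))
    return c * math.exp(-(x**2 - 2 * rho * x * y + y**2) / (2 * (1 - rho**2)))

def phi(t):
    """N(0,1) marginal density."""
    return math.exp(-t**2 / 2) / math.sqrt(2 * math.pi)

x, y = 1.0, -0.5
# rho = 0: the joint density factors into the product of the marginals.
indep = abs(f(x, y, 0.0) - phi(x) * phi(y)) < 1e-12
# rho = 0.5: the factorization fails, so X and Y are dependent.
dep = abs(f(x, y, 0.5) - phi(x) * phi(y)) > 1e-3
print(indep, dep)  # True True
```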

**Theorem 12** Suppose $X$ and $Y$ are independent and that $E(u(X))$ and $E(v(Y))$ exist. Then

$$E[u(X)v(Y)] = E(u(X))\,E(v(Y)).$$

Proof: HMC p. 112.

**Corollary 13** If $X$ and $Y$ are independent, then $E(XY) = E(X)E(Y)$.

**Theorem 14** When $X$ and $Y$ are independent, $Cov(X, Y) = 0$.
Proof:

$$\begin{aligned}
Cov(X_1, X_2) &= E[(X_1 - \mu_1)(X_2 - \mu_2)] = E[X_1 X_2 - X_1 \mu_2 - \mu_1 X_2 + \mu_1 \mu_2] \\
&= E(X_1 X_2) - \mu_2 E(X_1) - \mu_1 E(X_2) + \mu_1 \mu_2 = E(X_1 X_2) - \mu_1 \mu_2
\end{aligned}$$

Note that by independence $E(X_1 X_2) = E(X_1)E(X_2)$. Thus

$$Cov(X_1, X_2) = E(X_1)E(X_2) - \mu_1 \mu_2 = 0.$$

**Definition 15** (Correlation coefficient)

$$\rho = \frac{\sigma_{12}}{\sigma_1 \sigma_2}$$

The correlation is zero when the covariance is zero, and in this case we say that the variables are uncorrelated. In particular, Theorem 14 states that when two variables are independent they are uncorrelated. The converse is not necessarily true.

**Example 16** The joint pmf of $X$ and $Y$ is

| $x \backslash y$ | 0 | 1 | 2 | $f_X(x)$ |
|---|---|---|---|---|
| 1 | 1/4 | 0 | 1/4 | 1/2 |
| 3 | 1/12 | 1/3 | 1/12 | 1/2 |
| $f_Y(y)$ | 4/12 | 4/12 | 4/12 | 1 |

1. $E(X) = 1 \cdot \frac{1}{2} + 3 \cdot \frac{1}{2} = 2$; $E(Y) = 0 \cdot \frac{1}{3} + 1 \cdot \frac{1}{3} + 2 \cdot \frac{1}{3} = 1$.

2. $\sigma_{12} = Cov(X, Y) = (1-2)(0-1)\frac{1}{4} + (1-2)(1-1) \cdot 0 + (1-2)(2-1)\frac{1}{4} + (3-2)(0-1)\frac{1}{12} + (3-2)(1-1)\frac{1}{3} + (3-2)(2-1)\frac{1}{12} = \frac{1}{4} - \frac{1}{4} - \frac{1}{12} + \frac{1}{12} = 0$.

3. $f(X = 1, Y = 0) = \frac{1}{4} \ne f_X(1)\,f_Y(0) = \frac{1}{2} \cdot \frac{4}{12} = \frac{1}{6}$. So even though they are uncorrelated, they are not independent.
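The discrete computation is easy to reproduce exactly with rational arithmetic. A minimal sketch using the table of this example:

```python
from fractions import Fraction as F

# Joint pmf of Example 16: pmf[(x, y)] = P(X = x, Y = y).
pmf = {(1, 0): F(1, 4),  (1, 1): F(0),    (1, 2): F(1, 4),
       (3, 0): F(1, 12), (3, 1): F(1, 3), (3, 2): F(1, 12)}

EX  = sum(x * p for (x, y), p in pmf.items())
EY  = sum(y * p for (x, y), p in pmf.items())
EXY = sum(x * y * p for (x, y), p in pmf.items())
cov = EXY - EX * EY
print(EX, EY, cov)  # 2 1 0  -> uncorrelated

# ...yet not independent: f(1, 0) differs from f_X(1) * f_Y(0).
fX1 = sum(p for (x, y), p in pmf.items() if x == 1)  # 1/2
fY0 = sum(p for (x, y), p in pmf.items() if y == 0)  # 1/3
print(pmf[(1, 0)], fX1 * fY0)  # 1/4 vs 1/6
```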
