                                                             Class 16
                                                             EE5302


AMPLITUDE MODULATION BY RANDOM SIGNALS

Many of the transmission media used in communication systems can be modeled as linear
systems whose behavior is specified by a transfer function H(f), which passes certain
frequencies and rejects others. Quite often, the information signal A(t) (e.g., a speech or music
signal) is not at the frequencies that propagate well. The purpose of a modulator is to map the
information signal A(t) into a transmission signal X(t) that is in a frequency range that
propagates well over the desired medium. At the receiver, we need to perform the inverse mapping
to recover A(t) from X(t).

Let A(t) be a WSS random process that represents an information signal. In general, A(t) will be
"lowpass" in character, that is, its power spectral density will be concentrated at low frequencies,
as shown in Fig 1. below. Speech occupies roughly 0-4 kHz, music roughly 0-12 kHz, video roughly 0-6 MHz, etc.

Fig 1. The lowpass power spectral density S_A(f), concentrated about f = 0.

An amplitude modulation (AM) system produces a transmission signal by multiplying A(t) by
a "carrier" signal cos(2π f_c t + Θ):

$$X(t) = A(t)\cos(2\pi f_c t + \Theta)$$

where we assume Θ is a random variable that is uniformly distributed in the interval (0, 2π), and
Θ and A(t) are independent. Do these assumptions seem reasonable to you?

The autocorrelation of X(t) is

$$E[X(t+\tau)X(t)] = E\left[A(t+\tau)\cos(2\pi f_c (t+\tau) + \Theta)\, A(t)\cos(2\pi f_c t + \Theta)\right]$$
$$= E[A(t+\tau)A(t)]\, E\left[\cos(2\pi f_c (t+\tau) + \Theta)\cos(2\pi f_c t + \Theta)\right]$$
$$= R_A(\tau)\, E\left[\tfrac{1}{2}\cos(2\pi f_c \tau) + \tfrac{1}{2}\cos(2\pi f_c (2t+\tau) + 2\Theta)\right]$$
$$= \tfrac{1}{2} R_A(\tau)\cos(2\pi f_c \tau)$$

where the second line uses the independence of A(t) and Θ, and the last line uses the fact that
$E[\cos(2\pi f_c (2t+\tau) + 2\Theta)] = 0$. Thus X(t) is also WSS.
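This result is easy to check by simulation. The sketch below is a minimal Monte Carlo check; the choice A(t) = cos(2π f_0 t + Φ) with Φ uniform on (0, 2π), for which R_A(τ) = ½cos(2π f_0 τ), and all parameter values are illustrative assumptions, not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
f0, fc = 1.0, 10.0            # assumed signal and carrier frequencies
t, tau = 0.37, 0.05           # an arbitrary absolute time and lag
n = 200_000                   # number of Monte Carlo realizations

phi = rng.uniform(0, 2*np.pi, n)    # randomizes the information signal A(t)
theta = rng.uniform(0, 2*np.pi, n)  # carrier phase, independent of A(t)

def X(time):
    """One sample of X(t) = A(t) cos(2*pi*fc*t + Theta) per realization."""
    A = np.cos(2*np.pi*f0*time + phi)
    return A * np.cos(2*np.pi*fc*time + theta)

empirical = np.mean(X(t + tau) * X(t))
R_A = 0.5*np.cos(2*np.pi*f0*tau)                 # R_A(tau) for this choice of A(t)
predicted = 0.5 * R_A * np.cos(2*np.pi*fc*tau)   # (1/2) R_A(tau) cos(2*pi*fc*tau)
print(empirical, predicted)   # the two values should agree within Monte Carlo error
```

Note that the empirical average does not depend on the absolute time t, consistent with X(t) being WSS.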



The power spectral density of X(t) is

$$S_X(f) = \mathcal{F}\left\{\tfrac{1}{2} R_A(\tau)\cos(2\pi f_c \tau)\right\} = \tfrac{1}{4} S_A(f + f_c) + \tfrac{1}{4} S_A(f - f_c)$$

where we used the table of Fourier transforms. Fig 2. below shows S_X(f).
Fig 2. The bandpass power spectral density S_X(f), concentrated about ±f_c.

It can be seen that the power spectral density of the information signal has been shifted to the
regions around ±f_c. X(t) is an example of a bandpass signal (bandpass signals are characterized
by a power spectral density concentrated about some frequency much greater than zero).
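A quick numerical illustration of this shift (all parameters below are assumed for illustration): modulate lowpass-filtered noise and estimate the PSDs with Welch's method; the power of X(t) concentrates near f_c.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs, fc, n = 1000.0, 200.0, 2**18           # sample rate, carrier, length (assumed)
b, a = signal.butter(6, 20.0, fs=fs)       # ~20 Hz lowpass "information" signal
A = signal.lfilter(b, a, rng.standard_normal(n))
theta = rng.uniform(0, 2*np.pi)
t = np.arange(n) / fs
X = A * np.cos(2*np.pi*fc*t + theta)

fA, SA = signal.welch(A, fs=fs, nperseg=4096)
fX, SX = signal.welch(X, fs=fs, nperseg=4096)
print("S_A peaks near", fA[np.argmax(SA)], "Hz")   # near 0 (lowpass)
print("S_X peaks near", fX[np.argmax(SX)], "Hz")   # near fc = 200 Hz
```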

The transmission signal is demodulated by multiplying it by the carrier signal and lowpass
filtering, as shown in Fig 3. below:

Fig 3. Demodulator: X(t) is multiplied by 2cos(2π f_c t + Θ) and passed through a lowpass filter (LPF) to produce Y(t).
Let $Y(t) = X(t)\, 2\cos(2\pi f_c t + \Theta)$.
Proceeding as above, we find that

$$S_Y(f) = 2 S_X(f + f_c) + 2 S_X(f - f_c)$$
$$= \tfrac{1}{2}\left\{S_A(f + 2f_c) + S_A(f)\right\} + \tfrac{1}{2}\left\{S_A(f) + S_A(f - 2f_c)\right\}$$

The ideal lowpass filter passes S_A(f) and blocks S_A(f ± 2f_c) (this is easy to do because
f_c >> W), thus the output of the lowpass filter has power spectral density

$$S_Y(f) = S_A(f).$$



In other words, the output is the original information signal, A(t).
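The whole modulate/demodulate chain can be simulated directly. Below is a minimal sketch of the demodulator of Fig 3., with assumed parameters; it checks that the PSD of the lowpass filter output matches S_A(f) in the signal band.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(2)
fs, fc, W, n = 1000.0, 200.0, 20.0, 2**18     # assumed parameters
b, a = signal.butter(6, W, fs=fs)             # information signal of bandwidth ~W
A = signal.lfilter(b, a, rng.standard_normal(n))
t = np.arange(n) / fs
theta = rng.uniform(0, 2*np.pi)
X = A * np.cos(2*np.pi*fc*t + theta)          # AM transmission signal

mixed = 2 * X * np.cos(2*np.pi*fc*t + theta)  # coherent demodulation
bl, al = signal.butter(6, 2*W, fs=fs)         # passes S_A, blocks the 2*fc terms
Y = signal.lfilter(bl, al, mixed)

f, SA = signal.welch(A, fs=fs, nperseg=4096)
_, SY = signal.welch(Y, fs=fs, nperseg=4096)
band = f < W
print(np.median(SY[band] / SA[band]))         # ~1.0: S_Y(f) = S_A(f) in the band
```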

The modulation method in equation $X(t) = A(t)\cos(2\pi f_c t + \Theta)$ can only produce bandpass
signals for which S_X(f) is locally symmetric about f_c, that is, S_X(f_c + δf) = S_X(f_c − δf) for
|δf| < W, as in Fig 2. The method cannot yield real-valued transmission signals whose power
spectral density lacks this symmetry, such as the one shown in Fig 4. below.

Fig 4. A bandpass power spectral density S_X(f) that is not locally symmetric about f_c.
The following quadrature amplitude modulation (QAM) method can be used to produce such
signals:

$$X(t) = A(t)\cos(2\pi f_c t + \Theta) + B(t)\sin(2\pi f_c t + \Theta)$$

where A(t) and B(t) are real-valued, wide-sense stationary random processes, and we require
that

$$R_A(\tau) = R_B(\tau), \qquad R_{B,A}(\tau) = -R_{A,B}(\tau).$$



Note that $R_A(\tau) = R_B(\tau)$ implies that $S_A(f) = S_B(f)$, a real-valued, even function of f, as
shown in Fig 5. below:

Fig 5. S_A(f) = S_B(f): a real-valued, even function of f.

Note also that $R_{B,A}(\tau) = -R_{A,B}(\tau)$ implies that $S_{B,A}(f)$ is a purely imaginary, odd function of
f, as shown in Fig 6. below:

Fig 6. The plot of jS_{B,A}(f), an odd function of f.



Proceeding as before, we can show that X(t) is a wide-sense stationary random process with
autocorrelation function

$$R_X(\tau) = R_A(\tau)\cos(2\pi f_c \tau) + R_{B,A}(\tau)\sin(2\pi f_c \tau)$$
and power spectral density

$$S_X(f) = \tfrac{1}{2}\left\{S_A(f - f_c) + S_A(f + f_c)\right\} + \tfrac{1}{2j}\left\{S_{B,A}(f - f_c) - S_{B,A}(f + f_c)\right\}$$



The resulting power spectral density is shown in Fig 4. Thus QAM can be used to generate real-
valued bandpass signals with arbitrary power spectral density.
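One concrete way to meet the cross-correlation requirements (an illustrative choice, not prescribed by the notes) is to take B(t) to be the Hilbert transform of A(t), for which R_B = R_A and R_{B,A}(τ) = −R_{A,B}(τ). This is the classical single-sideband construction, and it yields a PSD concentrated on one side of f_c, as in Fig 4. A rough sketch with assumed parameters:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
fs, fc, n = 1000.0, 200.0, 2**18
b, a = signal.butter(6, 20.0, fs=fs)
A = signal.lfilter(b, a, rng.standard_normal(n))
B = np.imag(signal.hilbert(A))        # Hilbert transform: R_B = R_A, R_BA odd
t = np.arange(n) / fs
theta = rng.uniform(0, 2*np.pi)
X = A*np.cos(2*np.pi*fc*t + theta) + B*np.sin(2*np.pi*fc*t + theta)

f, SX = signal.welch(X, fs=fs, nperseg=4096)
lower = SX[(f > fc - 20) & (f < fc)].sum()    # power just below fc
upper = SX[(f > fc) & (f < fc + 20)].sum()    # power just above fc
print(lower / upper)    # >> 1: S_X is one-sided about fc (cf. Fig 4)
```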

Bandpass random signals, such as those in Fig 4., arise in communication systems when wide-
sense stationary white noise is filtered by bandpass filters. Let N(t) be such a process with
power spectral density S_N(f). It can be shown that N(t) can be represented by

$$N(t) = N_c(t)\cos(2\pi f_c t + \Theta) - N_s(t)\sin(2\pi f_c t + \Theta)$$
where N_c(t) and N_s(t) are jointly wide-sense stationary processes with

$$S_{N_c}(f) = S_{N_s}(f) = \left\{S_N(f - f_c) + S_N(f + f_c)\right\}_L$$



and

$$S_{N_c,N_s}(f) = j\left\{S_N(f + f_c) - S_N(f - f_c)\right\}_L$$



where the subscript L denotes the lowpass portion of the expression in brackets. In words, every
bandpass process can be treated as if it had been generated by a QAM modulator.
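This can be illustrated numerically: the quadrature components N_c(t) and N_s(t) of a simulated bandpass noise process can be extracted with the same mixer-plus-LPF structure as in Fig 3. A rough sketch with assumed parameters:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
fs, fc, W, n = 1000.0, 200.0, 40.0, 2**18
bb, ab = signal.butter(4, [fc - W, fc + W], btype="bandpass", fs=fs)
N = signal.lfilter(bb, ab, rng.standard_normal(n))   # bandpass noise process

t = np.arange(n) / fs
theta = rng.uniform(0, 2*np.pi)
bl, al = signal.butter(6, 1.5*W, fs=fs)              # passes |f| < W, blocks 2*fc
Nc = signal.lfilter(bl, al,  2*N*np.cos(2*np.pi*fc*t + theta))
Ns = signal.lfilter(bl, al, -2*N*np.sin(2*np.pi*fc*t + theta))

# Each quadrature component is lowpass and carries the full power of N:
print(np.var(N), np.var(Nc), np.var(Ns))             # all roughly equal
```

The roughly equal variances are consistent with S_{N_c}(f) = S_{N_s}(f) = {S_N(f − f_c) + S_N(f + f_c)}_L.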

Example 1.

$$Y(t) = A(t)\cos(2\pi f_c t + \Theta) + N(t)$$

where N(t) is the bandlimited white noise process with spectral density

$$S_N(f) = \begin{cases} \dfrac{N_0}{2} & |f \pm f_c| < W \\[4pt] 0 & \text{elsewhere.} \end{cases}$$

Find the signal-to-noise ratio of the recovered signal.
Solution:


Equation $N(t) = N_c(t)\cos(2\pi f_c t + \Theta) - N_s(t)\sin(2\pi f_c t + \Theta)$ allows us to represent the
received signal by

$$Y(t) = \left\{A(t) + N_c(t)\right\}\cos(2\pi f_c t + \Theta) - N_s(t)\sin(2\pi f_c t + \Theta)$$

The demodulator in Fig 3. is used to recover A(t). After multiplication by $2\cos(2\pi f_c t + \Theta)$,
we have

$$2Y(t)\cos(2\pi f_c t + \Theta) = \left\{A(t) + N_c(t)\right\} 2\cos^2(2\pi f_c t + \Theta) - N_s(t)\, 2\cos(2\pi f_c t + \Theta)\sin(2\pi f_c t + \Theta)$$
$$= \left\{A(t) + N_c(t)\right\}\left(1 + \cos(4\pi f_c t + 2\Theta)\right) - N_s(t)\sin(4\pi f_c t + 2\Theta)$$
After lowpass filtering, the recovered signal is

$$A(t) + N_c(t).$$

The power in the signal and noise components, respectively, are

$$\sigma_A^2 = \int_{-W}^{W} S_A(f)\, df$$

$$\sigma_{N_c}^2 = \int_{-W}^{W} S_{N_c}(f)\, df = \int_{-W}^{W} \left(\frac{N_0}{2} + \frac{N_0}{2}\right) df = 2W N_0.$$



The output signal-to-noise ratio is then

$$\text{SNR} = \frac{\sigma_A^2}{2W N_0}.$$
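For instance, with the assumed values σ_A² = 1, W = 4 kHz (a speech-bandwidth signal), and N_0 = 10⁻⁵, the formula gives SNR = 1/(2 · 4000 · 10⁻⁵) = 12.5, or about 11 dB. These numbers are illustrative only.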




OPTIMUM LINEAR SYSTEMS

We observe a discrete-time, zero-mean process X_α over a certain time interval
I = {t − a, ..., t + b}, and we are required to use the a + b + 1 resulting observations
{X_{t−a}, ..., X_t, ..., X_{t+b}} to obtain an estimate Y_t for some other (presumably related)
zero-mean process Z_t. The estimate Y_t is required to be linear, as shown in Fig 7. below:




Fig 7. The linear estimator: a transversal filter that forms Y_t from the observations X_{t−a}, ..., X_t, ..., X_{t+b}.

$$Y_t = \sum_{\beta = t-a}^{t+b} h_{t-\beta} X_\beta = \sum_{\beta = -b}^{a} h_\beta X_{t-\beta}.$$


The figure of merit for the estimator is the mean square error

$$E[e_t^2] = E\left[(Z_t - Y_t)^2\right]$$

and we seek to find the optimum filter, which is characterized by the impulse response h_β that
minimizes the mean square error.

The following examples show that different choices of Z_t and X_α and of the observation interval
correspond to different estimation problems.

Example 2:

Let the observations be the sum of a "desired signal" Z_α plus unwanted "noise" N_α:

$$X_\alpha = Z_\alpha + N_\alpha, \qquad \alpha \in I.$$

We are interested in estimating the desired signal at time t. The relation between t and the
observation interval I gives rise to a variety of estimation problems.

If I = (−∞, t], that is, a = ∞ and b = 0, then we have a filtering problem in which we estimate Z_t
in terms of noisy observations of the past and present. If I = [t − a, t], then we have a filtering
problem in which we estimate Z_t in terms of the a + 1 most recent noisy observations.

If I = (−∞, ∞), that is, a = b = ∞, then we have a smoothing problem in which we attempt to
recover the signal from its entire noisy version. There are applications where this makes sense,
for example, if the entire realization X_α has been recorded and the estimate Z_t is obtained by
"playing back" X_α.


Example 3.

Suppose we want to predict Z_t in terms of its recent past: {Z_{t−a}, ..., Z_{t−1}}. The general
estimation problem becomes this prediction problem if we let the observations X_α be the past
a values of the signal Z_α, that is,

$$X_\alpha = Z_\alpha, \qquad t - a \le \alpha \le t - 1.$$

The estimate Y_t is then a linear prediction of Z_t in terms of its most recent values.


The Orthogonality Condition

It is easy to show that the optimum filter must satisfy the orthogonality condition, which states
that the error e_t must be orthogonal to all the observations X_α, that is,

$$0 = E[e_t X_\alpha] = E[(Z_t - Y_t)X_\alpha] \qquad \text{for all } \alpha \in I$$

or equivalently,

$$E[Z_t X_\alpha] = E[Y_t X_\alpha] \qquad \text{for all } \alpha \in I.$$

By substituting we get

$$E[Z_t X_\alpha] = E\left[\sum_{\beta=-b}^{a} h_\beta X_{t-\beta} X_\alpha\right] = \sum_{\beta=-b}^{a} h_\beta E[X_{t-\beta} X_\alpha] = \sum_{\beta=-b}^{a} h_\beta R_X(t - \beta - \alpha) \qquad \text{for all } \alpha \in I.$$

The above equation shows that E[Z_t X_α] depends only on t − α, and thus X_α and Z_t are
jointly wide-sense stationary processes. Therefore, we can rewrite the above equation as
follows:

$$R_{Z,X}(t - \alpha) = \sum_{\beta=-b}^{a} h_\beta R_X(t - \beta - \alpha), \qquad t - a \le \alpha \le t + b.$$


Finally, letting m = t − α, we obtain the following key equation:

$$R_{Z,X}(m) = \sum_{\beta=-b}^{a} h_\beta R_X(m - \beta), \qquad -b \le m \le a.$$


The optimum linear filter must satisfy the set of a + b + 1 linear equations given by the above
equation.
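These equations are straightforward to set up and solve numerically. Below is a sketch (the correlation functions R_X and R_{Z,X} are illustrative assumptions) that builds the (a + b + 1) × (a + b + 1) system and also evaluates the mean square error expression derived later in this section.

```python
import numpy as np

a, b = 3, 2                                    # filter support: beta = -b, ..., a
R_X  = lambda m: 0.8**np.abs(m) + 0.2*(m == 0) # assumed observation autocorrelation
R_ZX = lambda m: 0.8**np.abs(m)                # assumed cross-correlation R_{Z,X}

m    = np.arange(-b, a + 1)                    # the a+b+1 equation indices
beta = np.arange(-b, a + 1)                    # the a+b+1 filter taps
M = R_X(m[:, None] - beta[None, :])            # M[i, j] = R_X(m_i - beta_j)
h = np.linalg.solve(M, R_ZX(m))                # optimum taps h_{-b}, ..., h_a
print(h)

# Mean square error (formula derived below): R_Z(0) - sum_beta h_beta R_ZX(beta)
print(1.0 - h @ R_ZX(beta))                    # R_Z(0) = 1 for this model
```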


In the above derivation we deliberately used the notation Z_t instead of Z_n to suggest that the
same development holds for continuous-time estimation. In particular, suppose we seek a
linear estimate Y(t) for the continuous-time random process Z(t) in terms of observations of the
continuous-time random process X(α) in the time interval t − a ≤ α ≤ t + b:

$$Y(t) = \int_{t-a}^{t+b} h(t - \beta)X(\beta)\, d\beta = \int_{-b}^{a} h(\beta)X(t - \beta)\, d\beta.$$



It can then be shown that the filter h(β) that minimizes the mean square error is specified by

$$R_{Z,X}(\tau) = \int_{-b}^{a} h(\beta) R_X(\tau - \beta)\, d\beta, \qquad -b \le \tau \le a.$$


Thus in the time-continuous case we obtain an integral equation instead of a set of linear
equations. The analytic solution of this integral equation can be quite difficult, but the equation
can be solved numerically by approximating the integral by a summation.
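A minimal sketch of that numerical approach follows; the kernels, grid step, and noise intensity are assumptions. The integral is replaced by a Riemann sum on a uniform grid, which turns the integral equation into a linear system.

```python
import numpy as np

dt, a, b, sig2 = 0.02, 1.0, 1.0, 0.1          # grid step, interval, noise intensity
grid = np.arange(-b, a + dt/2, dt)            # shared grid for tau and beta
R_Z = lambda tau: np.exp(-np.abs(tau))        # assumed signal autocorrelation

# Observation X = Z + white noise: R_X = R_Z + sig2*delta. The delta term
# integrates out to sig2*h(tau), i.e. it adds sig2 on the diagonal (not
# sig2*dt), while the smooth part is approximated by the Riemann sum.
M = R_Z(grid[:, None] - grid[None, :]) * dt + sig2 * np.eye(grid.size)
h = np.linalg.solve(M, R_Z(grid))             # here R_ZX = R_Z
print(h[grid.size // 2])                      # center tap of h(beta)
```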


We now determine the mean square error of the optimum filter. First we note that for the optimum
filter, the error e_t and the estimate Y_t are orthogonal:

$$E[e_t Y_t] = E\left[e_t \sum_{\beta=t-a}^{t+b} h_{t-\beta} X_\beta\right] = \sum_{\beta=t-a}^{t+b} h_{t-\beta} E[e_t X_\beta] = 0$$

where each term inside the summation is 0 by the orthogonality condition. Since e_t = Z_t − Y_t,
the mean square error is then

$$E[e_t^2] = E[e_t (Z_t - Y_t)] = E[e_t Z_t]$$

since e_t and Y_t are orthogonal. Substituting for e_t yields

$$E[e_t^2] = E[(Z_t - Y_t)Z_t] = E[Z_t Z_t] - E[Y_t Z_t]$$
$$= R_Z(0) - E\left[\sum_{\beta=-b}^{a} h_\beta X_{t-\beta} Z_t\right]$$
$$= R_Z(0) - \sum_{\beta=-b}^{a} h_\beta E[Z_t X_{t-\beta}]$$
$$= R_Z(0) - \sum_{\beta=-b}^{a} h_\beta R_{Z,X}(\beta).$$



Similarly, it can be shown that the mean square error of the optimum filter in the continuous-
time case is

$$E[e_t^2] = R_Z(0) - \int_{-b}^{a} h(\beta) R_{Z,X}(\beta)\, d\beta.$$


The following theorems summarize the above results.


THEOREM

Let X_t and Z_t be discrete-time, zero-mean, jointly wide-sense stationary processes, and let
Y_t be the estimate for Z_t of the form

$$Y_t = \sum_{\beta=t-a}^{t+b} h_{t-\beta} X_\beta = \sum_{\beta=-b}^{a} h_\beta X_{t-\beta}.$$




The filter that minimizes $E\left[(Z_t - Y_t)^2\right]$ satisfies the equation

$$R_{Z,X}(m) = \sum_{\beta=-b}^{a} h_\beta R_X(m - \beta), \qquad -b \le m \le a$$


and has mean square error given by

$$E\left[(Z_t - Y_t)^2\right] = R_Z(0) - \sum_{\beta=-b}^{a} h_\beta R_{Z,X}(\beta).$$




THEOREM

Let X(t) and Z(t) be continuous-time, zero-mean, jointly wide-sense stationary processes, and
let Y(t) be an estimate for Z(t) of the form

$$Y(t) = \int_{t-a}^{t+b} h(t - \beta)X(\beta)\, d\beta = \int_{-b}^{a} h(\beta)X(t - \beta)\, d\beta.$$




The filter h(β) that minimizes $E\left[(Z(t) - Y(t))^2\right]$ satisfies the equation

$$R_{Z,X}(\tau) = \int_{-b}^{a} h(\beta) R_X(\tau - \beta)\, d\beta, \qquad -b \le \tau \le a$$
and has mean square error given by

$$E\left[(Z(t) - Y(t))^2\right] = R_Z(0) - \int_{-b}^{a} h(\beta) R_{Z,X}(\beta)\, d\beta.$$


Example 4.

Suppose we are interested in estimating the signal Z_n from the p + 1 most recent noisy
observations:

$$X_\alpha = Z_\alpha + N_\alpha, \qquad \alpha \in I = \{n - p, \ldots, n - 1, n\}.$$

Find the set of linear equations for the optimum filter if Z_α and N_α are independent random
processes.
Solution:

For this choice of observation interval, the key equation becomes

$$R_{Z,X}(m) = \sum_{\beta=0}^{p} h_\beta R_X(m - \beta), \qquad m \in \{0, 1, \ldots, p\}.$$

The cross-correlation terms are given by

$$R_{Z,X}(m) = E[Z_n X_{n-m}] = E[Z_n (Z_{n-m} + N_{n-m})] = R_Z(m).$$

The autocorrelation terms are given by

$$R_X(m - \beta) = E[X_{n-\beta} X_{n-m}] = E[(Z_{n-\beta} + N_{n-\beta})(Z_{n-m} + N_{n-m})]$$
$$= E[Z_{n-\beta} Z_{n-m}] + E[N_{n-\beta} N_{n-m}]$$
$$= R_Z(m - \beta) + R_N(m - \beta)$$

since Z_α and N_α are independent random processes. Thus the equation for the optimum filter
becomes

$$R_Z(m) = \sum_{\beta=0}^{p} h_\beta \left\{R_Z(m - \beta) + R_N(m - \beta)\right\}, \qquad m \in \{0, 1, \ldots, p\}.$$



This set of p + 1 linear equations in the p + 1 unknowns h_β is solved by matrix inversion.


Example 5:

Find the set of equations for the optimum filter in Example 4 if Z_α is a first-order autoregressive
process with average power σ_Z² and parameter r, |r| < 1, and N_α is a white noise process with
average power σ_N².
Solution:

The autocorrelation of the first-order autoregressive process is given by

$$R_Z(m) = \sigma_Z^2\, r^{|m|}, \qquad m = 0, \pm 1, \pm 2, \ldots$$

The autocorrelation of the white noise process is

$$R_N(m) = \sigma_N^2\, \delta(m).$$



Substituting R_Z(m) and R_N(m) into the equation

$$R_Z(m) = \sum_{\beta=0}^{p} h_\beta \left\{R_Z(m - \beta) + R_N(m - \beta)\right\}, \qquad m \in \{0, 1, \ldots, p\}$$

yields the following set of linear equations:

$$\sigma_Z^2\, r^{|m|} = \sum_{\beta=0}^{p} h_\beta \left(\sigma_Z^2\, r^{|m-\beta|} + \sigma_N^2\, \delta(m - \beta)\right), \qquad m \in \{0, \ldots, p\}.$$

If we divide both sides of the above equation by σ_Z² and let Γ = σ_N²/σ_Z², we obtain the following
matrix equation:

$$\begin{bmatrix} 1+\Gamma & r & r^2 & \cdots & r^p \\ r & 1+\Gamma & r & \cdots & r^{p-1} \\ \vdots & \vdots & & \ddots & \vdots \\ r^p & r^{p-1} & r^{p-2} & \cdots & 1+\Gamma \end{bmatrix} \begin{bmatrix} h_0 \\ h_1 \\ \vdots \\ h_p \end{bmatrix} = \begin{bmatrix} 1 \\ r \\ \vdots \\ r^p \end{bmatrix}$$
Note that when the noise power is zero, i.e., Γ = 0, the solution is
h_0 = 1, h_j = 0, j = 1, ..., p; that is, no filtering is required to obtain Z_n.
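A sketch of this matrix inversion for assumed values of p, r, and Γ, which also confirms the Γ = 0 behavior:

```python
import numpy as np
from scipy.linalg import toeplitz

p, r, Gamma = 4, 0.8, 0.5                   # assumed model values
col = r ** np.arange(p + 1)                 # 1, r, r**2, ..., r**p
A = toeplitz(col) + Gamma * np.eye(p + 1)   # A[m, k] = r**|m-k| + Gamma*delta(m-k)
h = np.linalg.solve(A, col)                 # right-hand side is 1, r, ..., r**p
print(h)

# With no noise (Gamma = 0) no filtering is needed: h = [1, 0, ..., 0].
print(np.linalg.solve(toeplitz(col), col))
```
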
Prediction

The linear prediction problem arises in many signal processing applications. In general we wish
to predict Z_n in terms of Z_{n−1}, Z_{n−2}, ..., Z_{n−p}:

$$Y_n = \sum_{\beta=1}^{p} h_\beta Z_{n-\beta}.$$


For this problem, X_α = Z_α, so the key equation

$$R_{Z,X}(m) = \sum_{\beta=-b}^{a} h_\beta R_X(m - \beta), \qquad -b \le m \le a$$

becomes

$$R_Z(m) = \sum_{\beta=1}^{p} h_\beta R_Z(m - \beta), \qquad m \in \{1, \ldots, p\}.$$


In matrix form this equation becomes

$$\begin{bmatrix} R_Z(0) & R_Z(1) & \cdots & R_Z(p-1) \\ R_Z(1) & R_Z(0) & \cdots & R_Z(p-2) \\ \vdots & & \ddots & \vdots \\ R_Z(p-1) & R_Z(p-2) & \cdots & R_Z(0) \end{bmatrix} \begin{bmatrix} h_1 \\ h_2 \\ \vdots \\ h_p \end{bmatrix} = \begin{bmatrix} R_Z(1) \\ R_Z(2) \\ \vdots \\ R_Z(p) \end{bmatrix}$$

The above two equations are called the Yule-Walker equations.
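As a check, for a first-order autoregressive process with R_Z(m) = σ_Z² r^{|m|}, the Yule-Walker solution should be h_1 = r and all other h_β = 0, since R_Z(m) = r R_Z(m − 1) for m ≥ 1. A sketch with assumed values:

```python
import numpy as np
from scipy.linalg import toeplitz

p, r, sig2 = 4, 0.8, 1.0                 # assumed AR(1) parameters
R_Z = lambda m: sig2 * r ** np.abs(m)    # R_Z(m) = sig2 * r**|m|

A = toeplitz(R_Z(np.arange(p)))          # A[m, k] = R_Z(m - k), m, k = 1, ..., p
rhs = R_Z(np.arange(1, p + 1))           # R_Z(1), ..., R_Z(p)
print(np.linalg.solve(A, rhs))           # ~[0.8, 0, 0, 0]: Y_n = r * Z_{n-1}
```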



