
11. RANDOM PROCESSES AND NOISE

In Sections 5 and 8 deterministic test signals were used for determining the specifications for design. Often, however, disturbances and noise may be modelled as random signals. In this section we will briefly cover some basics of the theory and use of random signals. For simplicity, and because random signals are in practice measured in discrete time by digital instruments and processed by digital computers, only discrete time random signals will be covered. We note that, exactly as described in Section 10, an appropriate anti-aliasing filter has to be used when sampling a random signal.

11.1 Random signals. White noise

Intuitively, the plot of a random signal, or stochastic process, has an irregular form with no repetition of curve segments. Let

    w(t), z(t),  t = 0, ±1, ±2, ...                                          (11.1)

be two stochastic processes. The mean value function,

    m_w(t) = E[w(t)],                                                        (11.2)

where E[.] denotes the expected value, is the function of expected values at each time instant. The actually assumed values of the stochastic variables w(t) in (11.1) are called a realization of the stochastic process.

When a control system is designed to attenuate the effect of a random disturbance or noise signal, it is essential to understand whether there is some dependence between consecutive values or not. If there is such a dependence, the regulator may exploit this fact. The dependence between values of a stochastic process is described with the help of the covariance function,

    cov(w(t), w(s)) = r_w(t, s) = E[(w(t) - m_w(t))(w(s) - m_w(s))],         (11.3)

which typically describes how w(t) and w(s) "co-operate".
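These definitions translate directly into estimators. A minimal Python/NumPy sketch (the function names, the seed and the Gaussian test sequence are our illustrative choices, not from the text):

```python
import numpy as np

def sample_mean(w):
    """Estimate the (constant) mean value m_w from one realization."""
    return np.mean(w)

def sample_covf(w, maxlag):
    """Estimate the covariance function r_w(tau) for tau = 0..maxlag,
    assuming stationarity, cf. (11.3) and (11.6)."""
    w = np.asarray(w, dtype=float) - np.mean(w)
    N = len(w)
    return np.array([np.dot(w[:N - k], w[k:]) / N for k in range(maxlag + 1)])

# One realization of an independent (white) Gaussian sequence, variance 1
rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, 10000)
r = sample_covf(w, 5)
# r[0] is close to the variance 1; r[1]..r[5] are close to 0, since
# consecutive values of this sequence are independent
```

For a sequence with dependence between consecutive values, r[1], r[2], ... would differ markedly from zero, which is exactly what a regulator could exploit.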
When the dependence between two different stochastic processes (11.1) is of interest, the cross covariance function is defined,

    cov(w(t), z(s)) = r_wz(t, s) = E[(w(t) - m_w(t))(z(s) - m_z(s))].        (11.4)

In some cases the mean value function (11.2) and the covariance function (11.3) completely describe the stochastic process. We will mostly assume this henceforth when nothing else is stated. A very useful and common case when this is true is when the stochastic process w(t), at each time instant, follows a Gaussian distribution. Such a stochastic process is called a Gaussian process.

It is also common to assume that the stochastic process is time invariant, signifying that it does not depend on absolute time, i.e. the mean value is constant,

    m_w(t) = m_w,                                                            (11.5)

and the covariance function depends only on the time difference,

    r_w(t + τ, t) = r_w(τ, 0) =: r_w(τ).                                     (11.6)

A process satisfying (11.5), (11.6) is called a stationary process.

A particular stochastic process that is of great use is the following: Let w(t) be a sequence of independent stochastic variables with mean 0 and covariance r, i.e. m_w = 0 and r_w(τ) = δ_0(τ)·r, where δ_0(τ) = 1 for τ = 0 and δ_0(τ) = 0 for τ = ±1, ±2, ... (i.e. δ_0(τ) is the discrete time equivalent of the Dirac δ-function). Such a sequence is called white noise.

11.2 The Fourier transform

For continuous time signals f(t) such that the integral ∫_{-∞}^{∞} f²(t) dt converges (finite energy signal), the Fourier transform of f(t) is defined as

    c(ω) = (1/2π) ∫_{-∞}^{∞} f(t) e^{-jωt} dt.                               (11.7)

Compare the Laplace transform (2.33), and note that the Laplace transform may be computed for functions that diverge as t → ∞, for which the Fourier transform does not exist.
Given the Fourier transform function c(ω), the original function is found by the inverse Fourier transform,

    f(t) = ∫_{-∞}^{∞} c(ω) e^{jωt} dω.                                       (11.8)

Clearly, c(ω) can be seen as representing (as a complex number) the amplitudes of the sine and cosine signals of various frequencies that f(t) is decomposed into. It is reasonably easy to show that

    Φ(ω) = 2π |c(ω)|²                                                        (11.9)

gives the energy density at the (positive and negative) frequency ω [rad/s] for the signal f(t). Therefore Φ(ω) is called the (energy) spectrum. To get the energy density at the physical frequency |ω| [rad/s], you add Φ(ω) + Φ(-ω). When the left hand integral exists, Parseval's formula holds,

    ∫_{-∞}^{∞} f²(t) dt = ∫_{-∞}^{∞} Φ(ω) dω.                                (11.10)

For a discrete time signal, f_k = f(kT_s), k = 0, ±1, ±2, ..., where T_s [s] is the sampling interval, and whose sum Σ_{k=-∞}^{∞} f_k² converges (finite energy signal), the discrete Fourier transform (DFT) is computed,

    c(ω) = (T_s/2π) Σ_{k=-∞}^{∞} f_k e^{-jωkT_s}.                            (11.11)

The time signal is retrieved by the inverse discrete Fourier transform (IDFT), whose integral is evaluated over the Nyquist frequency range,

    f_k = ∫_{-π/T_s}^{π/T_s} c(ω) e^{jωkT_s} dω.                             (11.12)

The (energy) spectrum is defined as

    Φ(ω) = (2π/T_s) |c(ω)|²,                                                 (11.13)

with the energy density at the physical frequency |ω| [rad/s] given by the sum Φ(ω) + Φ(-ω). Parseval's formula has the form

    Σ_{k=-∞}^{∞} f_k² = ∫_{-π/T_s}^{π/T_s} Φ(ω) dω.                          (11.14)

11.3 Power spectral density. Variance

Since a random signal (11.1), or for that matter a periodic signal (e.g. sin(ωt), t = 0, ±1, ±2, ...), does not tend to zero as t → ±∞, and hence does not have finite energy, the Fourier transform (11.11) is not directly applicable. The problem is bypassed by truncating the signal at time NT_s [s], and later letting N → ∞. Define

    w_N(t) = w(t·T_s),  t = 1, 2, ..., N,
    w_N(t) = 0          otherwise.                                           (11.15)

The spectrum of (11.15) is now given by (11.11), (11.13),

    Φ_{w_N}(ω) = (T_s/2π) |Σ_{t=1}^{N} w(tT_s) e^{-jωtT_s}|².                (11.16)
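The discrete Parseval relation can be checked numerically. A small Python/NumPy sketch with T_s = 1 (the test sequence and seed are our choices; note that numpy.fft.fft, like Matlab's fft, uses the unnormalized convention, so the frequency-domain sum must be divided by N):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=256)          # a finite-energy test sequence, T_s = 1

c = np.fft.fft(f)                 # unnormalized DFT over N frequency bins
energy_time = np.sum(f**2)        # time-domain energy, left hand side
energy_freq = np.sum(np.abs(c)**2) / len(f)   # frequency-domain sum

# energy_time and energy_freq agree to machine precision: the discrete
# counterpart of Parseval's formula (11.14)
```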
For simplicity we let T_s = 1 henceforth. Then Parseval's formula (11.14) gives

    Σ_{t=1}^{N} w²(t) = ∫_{-π}^{π} Φ_{w_N}(ω) dω.                            (11.17)

Contemplating (11.17), it seems natural to normalize the spectrum by dividing it by N. For each ω, Φ_{w_N}(ω)/N is a stochastic variable, since it is defined from the stochastic process w(t). Define

    Φ̄_{w_N}(ω) = E{Φ_{w_N}(ω)}/N                                            (11.18)

as the mean energy-per-sampling-interval spectrum, or power spectral density, of the random signal w_N(t). If w(t) is a stationary process, it is possible to find the limit with respect to N.

Theorem: Assume that w(t) is a stationary process with mean value 0 and covariance function r_w(τ) such that |r_w(τ)| ≤ c·|τ|^(-1-δ), δ > 0. Then

    Φ_w(ω) = lim_{N→∞} E{Φ_{w_N}(ω)}/N                                       (11.19)

exists, and is called the power spectral density of the stochastic process w(t). Moreover it holds that

    Φ_w(ω) = (1/2π) Σ_{τ=-∞}^{∞} r_w(τ) e^{-jωτ}                             (11.20)

and

    r_w(τ) = ∫_{-π}^{π} Φ_w(ω) e^{jωτ} dω.                                   (11.21)  •

The proof is left as a not completely trivial exercise. In particular we note that (11.21) gives an expression for the variance of w(t),

    E[w²(t)] = r_w(0) = ∫_{-π}^{π} Φ_w(ω) dω,                                (11.22)

which also follows from (11.17). The significance of the theorem is that the power spectral density of w(t) can be found in two ways: i) Fourier transform w_N(t), compute its energy spectrum (11.16), divide by N, and compute the limit (11.19); or ii) compute r_w(τ) by (11.3), (11.5), (11.6), and Fourier transform it (11.20). In practice, the first way is preferable, since often Φ_{w_N}(ω)/N is an acceptable estimate of Φ_w(ω) for N large enough. Notice that (11.21) is an inverse Fourier transform.

The cross spectral density between two stationary stochastic processes w(t) and z(t) in (11.1) is defined as

    Φ_wz(ω) = (1/2π) Σ_{τ=-∞}^{∞} r_wz(τ) e^{-jωτ},                          (11.23)

where r_wz(τ) is the cross covariance (11.4) of the two stationary processes.
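The two routes of the theorem can be compared numerically. The following Python/NumPy sketch (parameter choices are ours; the text itself works in Matlab) averages normalized periodograms over many realizations of white noise, and compares with the value 1/(2π) that route ii) gives via (11.20), since for white noise only the τ = 0 term contributes:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 1024, 200                  # samples per realization, realizations

# Route i): average of the normalized periodogram Phi_{w_N}(omega)/N,
# i.e. |DFT|^2/(2*pi*N), cf. (11.16) and (11.18) with Ts = 1
pgram = np.zeros(N)
for _ in range(M):
    w = rng.normal(size=N)        # white noise with r_w(0) = 1
    pgram += np.abs(np.fft.fft(w))**2 / (2 * np.pi * N)
pgram /= M

# Route ii): Phi_w(omega) = (1/2pi) * sum_tau r_w(tau) e^{-j*omega*tau};
# for white noise only tau = 0 survives, so Phi_w(omega) = 1/(2*pi)
phi_theory = 1 / (2 * np.pi)
# pgram is approximately flat at 1/(2*pi), roughly 0.159, at every frequency
```

Increasing M flattens the curve further, which is the empirical content of the limit (11.19).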
In Matlab, the DFT of w_N(t) is given by the command fft, which is numerically particularly effective if N, the number of elements of w_N(t), is a power of two. The result of the command fft is N complex numbers c(ω_k), namely

    c(ω_k) = constant · Σ_{t=1}^{N} w(tT_s) e^{-jω_k tT_s},  ω_k = (k/N)·ω_s,  k = 0, 1, ..., N-1,  (11.24)

where ω_s = 2π/T_s is the sampling frequency. The particular normalization constant in Matlab is chosen such that Parseval's formula (11.17), with the integral replaced by a sum, has the form

    Σ_{t=1}^{N} w_N²(t) = (1/N) Σ_{k=0}^{N-1} |c(ω_k)|².                     (11.25)

The user of Matlab has to keep (11.25) in mind and, if need be, renormalize,

    c̄(ω_k) = a · c(ω_k),                                                    (11.26)

with a normalizing constant a, such that Parseval's formula gets the desired form. E.g. if the following "Parseval's formula" is desired,

    (1/(N·T_s)) Σ_{t=1}^{N} w_N²(t) = (ω_s/N) Σ_{k=0}^{N-1} |c̄(ω_k)|²,      (11.27)

where the left hand side is power and the right hand side is the total frequency "integral" of the power spectral density |c̄(ω_k)|², then choose

    a = 1/√(2πN),                                                            (11.28)

which easily follows from (11.24)-(11.26).

It is important to note that the pure DFT is seldom used without windowing, i.e. weighting of data. Weighting is beyond the ambitions of this course; please refer to a textbook in signal processing. Professionals always use weighting. Compare e.g. the Matlab command spectrum. However, before using this command, please investigate the normalization, e.g. by checking the PSD of a sine wave as in Example 11.1.

Example 11.1. The power spectral density of a sine wave. Let

    w_N(t) = sin(t),  t = 1, 2, ..., 1024.                                   (11.29)

Hence the sampling period is T_s = 1 [s], and the sampling frequency is ω_s = 2π [rad/s].
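As a cross-check in Python/NumPy (whose fft uses the same unnormalized convention as Matlab's), the normalization (11.28) indeed makes the two sides of (11.27) agree for the sine of Example 11.1:

```python
import numpy as np

N = 1024
t = np.arange(1, N + 1)
w = np.sin(t)                          # the sine of (11.29), amplitude A = 1

c = np.fft.fft(w)                      # unnormalized DFT, constant = 1 in (11.24)
a = 1 / np.sqrt(2 * np.pi * N)         # the normalization (11.28)
cbar = a * c

lhs = np.sum(w**2) / N                 # LHS of (11.27) with Ts = 1: power
rhs = 2 * np.pi / N * np.sum(np.abs(cbar)**2)   # RHS of (11.27)
# lhs equals rhs, and both are close to the sine power A^2/2 = 0.5
```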
» N = 1024; t = 1:N; w = sin(t); c = fft(w);
» [sum(w.*w)  sum(c.*conj(c))/N]      % the two entries agree, cf. (11.25)

If we wish to normalize the DFT according to the "natural" Parseval's formula (11.27), we proceed as follows, noting the known fact that the power of an infinite sine signal of amplitude A is A²/2 and its root mean square value is A/√2. In our example, A = 1.

» a = 1/sqrt(2*pi*N); cbar = a*c;
» LHS = sum(w.*w)/N                   % LHS of (11.27) [power]
» RHS = 2*pi/N*sum(cbar.*conj(cbar))  % RHS of (11.27) [power]

The power spectral density, 10·log10 |c̄(ω_k)|² [dB power] versus ω_k [rad/s], is displayed in Figure 11.1. Notice that [dB power] is defined differently from, but consistently with, the [dB gain] encountered in previous sections.

» plot([0:N-1]*2*pi/N, 10*log10(cbar.*conj(cbar))), grid
» xlabel('rad/s'), ylabel('PSD 10log10|cbar|^2')

Notice in Figure 11.1 that all power is concentrated at 1 rad/s and -1 rad/s, exactly at the frequency of (11.29). •

Figure 11.1. Power spectral density for sin(t) in Example 11.1.

Example 11.2. The power spectral density of white noise. Let {w(t)} [Volt] in (11.1) be a white noise sequence, with sampling interval T_s = 1 [s] and sampling frequency ω_s = 2π [rad/s], generated such that at each sampling instant w(t) ∈ N(0,1), i.e. normally distributed with zero mean and variance 1. A plot of a particular realization of the white noise sequence over N = 1024 seconds is found in Figure 11.2, and its PSD in Figure 11.3. We notice that the PSD is roughly constant ≈ 1/(2π) over the frequency range, which follows from the typical white noise feature that r_w(0) = 1 and r_w(τ) = 0 elsewhere (Section 11.1), and (11.20). A windowed PSD calculation (spectrum) gives a smoother curve. Try it! The covariance function of w(t), r_w(τ), (11.3), is shown in Figure 11.4.

» N = 1024; w = randn(N,1); plot(w,'k'), grid, xlabel('s'), ylabel('V')   % Figure 11.2
» c = fft(w); a = 1/sqrt(2*pi*N); cbar = a*c;
» LHS = sum(w.*w)/N                      % LHS of (11.27)
» RHS = 2*pi/N*sum(cbar.*conj(cbar))     % RHS of (11.27)
» plot([0:N-1]*2*pi/N, 10*log10(cbar.*conj(cbar))), grid                  % Figure 11.3
» xlabel('rad/s'), ylabel('PSD 10log10|cbar|^2 [V^2/(rad/s)]')
» plot(covf(w',N), 'k'), xlabel('lags [s]'), ylabel('covf')               % Figure 11.4
•

Figure 11.2.
A realization of the white noise sequence {w(t)} [Volt] in Example 11.2.

Figure 11.3. The power spectral density of {w(t)} of Figure 11.2.

Figure 11.4. The covariance function of {w(t)} of Figure 11.2.

11.4 Filtering and colored noise

Assume that a stationary stochastic process u(t), t = 0, ±1, ..., (11.1), is the input to a linear dynamic system, described by its pulse transfer function,

    Y(z) = F(z) U(z),                                                        (11.30)

where Y(z) is the Z-transform of the output y(t) (which is also a stochastic process), and U(z) is the Z-transform of the input u(t). Then the relations between the spectra of u(t) and y(t) are given by the following theorem, whose proof is based on (11.20), (11.23) and left as an exercise for the reader.

Theorem. Consider (11.30). If the power spectral density of u(t) is Φ_u(ω), then the cross spectral density is

    Φ_yu(ω) = F(e^{jω}) Φ_u(ω),                                              (11.31)

and the power spectral density of the output y(t) is

    Φ_y(ω) = |F(e^{jω})|² Φ_u(ω).                                            (11.32)  •

The Z-transform is defined for signals that equal zero for t < 0, see Section 10.2. To get Y(z) and U(z) we resort to the same trick as in (11.15), i.e. truncating the signal at t = N, and letting N tend to infinity. In practice, N is always finite.

If u(t) is white noise, whose PSD is constant ("white" in analogy with white light, which has equal power at all colors of the visible spectrum), then y(t) is called colored noise, in analogy with colored light.

The equations (11.31) and (11.32) have some very important implications.

Control systems design to attenuate the influence of noise

Assume that the PSD of a random noise or disturbance source is known. Moreover, assume that a specification is given that limits the power of the output (due to the random disturbance input) of a closed loop linear system to be designed. Then (11.17), (11.19) (or (11.27)) and (11.32) give a specification of the gain of the closed loop frequency function.
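The variance bookkeeping behind such a specification, pushing white noise through |F(e^{jω})|² and integrating as in (11.22) and (11.32), can be sketched in Python/NumPy (the first-order filter and all parameters are our illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100000
u = rng.normal(size=N)                 # white noise, Phi_u = 1/(2*pi)

# Example filter F(z) = 1/(z - a), i.e. y(t) = a*y(t-1) + u(t-1)
a = 0.5
y = np.zeros(N)
for t in range(1, N):
    y[t] = a * y[t - 1] + u[t - 1]

# By (11.32) and (11.22):
# var(y) = int_{-pi}^{pi} |F(e^{jw})|^2 * (1/2pi) dw = 1/(1 - a^2)
var_theory = 1 / (1 - a**2)            # = 4/3 for a = 0.5
var_est = np.var(y)                    # close to var_theory for large N
```

Reducing the closed loop gain |F(e^{jω})| in the frequency band where the disturbance PSD is large is exactly what shrinks this output variance.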
Identification

Assume that you have an unknown linear dynamic system where you may measure the input and output. To get a model of such a system we have discussed impulse and step response experiments in Section 3.2, and frequency response experiments in Section 7.1. Now we have a third way: apply random noise (white or colored), and compute the input PSD, output PSD, and cross spectral density. Using (11.31) and (11.32) you will get an estimate of the transfer function of the dynamic system. The Matlab command spectrum has an option doing just that. There are also numerous spectral analysis instruments on the market for this purpose.

Note also from the definition of the Fourier transform c(ω) (11.7), and the definition of the Laplace transform F(s) (2.33), that for continuous truncated signals, cf. (11.15), c(ω) = F(jω). The same holds for the discrete Fourier transform and the Z-transform. Hence a frequency function estimate is easily obtained by dividing the Fourier transform of the output by the Fourier transform of the input. The Matlab command etfe does that.

Colored noise synthesis and filter design

Assume that you are given Φ_y(ω), the PSD of a random signal you would like to synthesize. Clearly (11.32) suggests a way to do it: build a linear filter such that Φ_y(ω) = |F(e^{jω})|², and input white noise with unit power. Notice that the amplitude Bode diagram of Φ_y(ω) [dB power] is equal to the amplitude Bode diagram of F(e^{jω}) [dB gain]. If Φ_y(ω) is a rational function in cos ω, it can be proven that there always exists a stable filter with a rational transfer function that does the job. In practice it is easy to find F(e^{jω}) by manual fitting in the Bode diagram.

Example 11.3. Frequency function estimates. Let {u(t)} in (11.1) be a white noise sequence, with sampling interval T_s = 1 [s] and sampling frequency ω_s = 2π [rad/s], generated such that at each sampling instant u(t) ∈ N(0,1), i.e.
normally distributed with zero mean and variance 1. Let u(t) be the input to a filter, whose Z-transform is

    Y(z) = (0.5/(z - 0.5)) U(z).                                             (11.33)

Figure 11.5. True Bode diagram of the filter (smooth curves), and frequency function estimate using the Matlab command spectrum (slightly jagged curves).

Figure 11.6. True Bode diagram of the filter (smooth curves), and frequency function estimate found by dividing the raw (unwindowed) discrete Fourier transforms of the output and the input, fft(y)./fft(u) (jagged curves).

The Bode diagram of (11.33) is displayed in Figures 11.5 and 11.6, together with the frequency function estimates from the Matlab command spectrum, and from direct division of the DFT of the output by the DFT of the input, respectively. The variance (power) of u(t) equals 1, while the variance of the output y(t) equals 0.33. The sequence of Matlab commands follows. The reader is invited to try other alternatives to find the transfer function, e.g. using the cross spectral density, or using the Matlab command etfe.

» u = randn(1024,1);
» y = filter([0 0.5], [1 -0.5], u);
» P = spectrum(u,y);                 % help spectrum gives explanation of P
» dbode([0 0.5], [1 -0.5], 1)        % true Bode diagram
» subplot(211), hold on
Current plot held
» semilogx(..., 'k')                 % magnitude estimate from P, Figure 11.5
» subplot(212), hold on
Current plot held
» semilogx(..., 'k')                 % phase estimate from P, Figure 11.5
» clg
» fu = fft(u); fy = fft(y);          % DFTs over the frequency range [0, 2*pi] rad/s
» dbode([0 0.5], [1 -0.5], 1)
» subplot(211), hold on
Current plot held
» semilogx(..., 'k')                 % 20*log10(abs(fy./fu)), Figure 11.6
» subplot(212), hold on
Current plot held
» semilogx(..., 'k')                 % angle(fy./fu)*180/pi, Figure 11.6
» sum(u.*u)/1024                     % input variance, approximately 1
» sum(y.*y)/1024                     % output variance, approximately 0.33
•
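The raw DFT-division estimate used in Figure 11.6 can also be sketched in Python/NumPy (the seed and variable names are ours; the filter is the one from Example 11.3):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 1024
u = rng.normal(size=N)                 # white noise input, variance 1

# The filter of Example 11.3: Y(z) = 0.5/(z - 0.5) U(z),
# i.e. y(t) = 0.5*y(t-1) + 0.5*u(t-1)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.5 * y[t - 1] + 0.5 * u[t - 1]

var_y = np.var(y)                      # close to the 0.33 quoted in the text

# Raw (unwindowed) frequency function estimate: divide the DFTs
fu, fy = np.fft.fft(u), np.fft.fft(y)
Fhat = fy / fu                         # estimate of F(e^{jw}) on the DFT grid

# True frequency function at the DFT frequency w_k = 2*pi*k/N, e.g. k = 10
wk = 2 * np.pi * 10 / N
F_true = 0.5 / (np.exp(1j * wk) - 0.5)
# Individual values of Fhat scatter around F_true; windowing or averaging
# over neighboring bins (as the text recommends) reduces the scatter
```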
