Paper Presentation on Digital Signal Processing and Its Applications

1. Introduction

  Signal processing is an area of electrical engineering, systems engineering and applied mathematics that deals with operations on or analysis of signals, in either discrete or continuous time, to perform useful operations on those signals. Signals of interest can include sound, images, time-varying measurement values and sensor data, for example biological data such as electrocardiograms, control system signals, telecommunication transmission signals such as radio signals, and many others. Signals are analog or digital electrical representations of time-varying or spatial-varying physical quantities. In the context of signal processing, arbitrary binary data streams and on-off signals are not considered as signals; only analog and digital signals that are representations of analog physical quantities are.

  Signal processing is divided into the following categories:

i. Analog signal processing [ASP]

ii. Digital signal processing [DSP]

  Analog signal processing is for signals that have not been digitized, as in classical radio, telephone, radar, and television systems. This involves linear electronic circuits such as passive filters, active filters, additive mixers, integrators and delay lines. It also involves non-linear circuits such as compandors, multipliers (frequency mixers and voltage-controlled amplifiers), voltage-controlled filters, voltage-controlled oscillators and phase-locked loops. The output of ASP is a processed analog signal.

2. Digital Signal Processing

  Digital signal processing (DSP) is concerned with the representation of signals by a sequence of numbers or symbols and the processing of these signals. Digital signal processing and analog signal processing are subfields of signal processing. DSP includes subfields like: audio and speech signal processing, sonar and radar signal processing, sensor array processing, spectral estimation, statistical signal processing, digital image processing, signal processing for communications, biomedical signal processing, seismic data processing, etc.

2.1 Why DSP & Not ASP

  Following are the reasons why DSP is preferred over ASP:

i. A digital programmable system allows flexibility in reconfiguring the digital signal processing operations simply by changing the program, whereas reconfiguring an analog system usually implies a re-design of the hardware followed by testing & verification.

ii. Tolerances in analog circuit components make it extremely difficult to control the accuracy of an ASP. On the other hand, a digital system provides much better control of accuracy requirements.

iii. Digital signals are easily stored on magnetic media without deterioration beyond that introduced in A to D conversion. As a result the signals become transportable and can be processed off-line in a remote laboratory.
iv. DSP allows implementation of more sophisticated signal processing algorithms.

v. It is usually very difficult to perform precise mathematical operations on signals in analog form, but these same operations can be routinely implemented on a digital computer using software.

2.2 DSP Domains

  In DSP, engineers usually study digital signals in one of the following domains: time domain (one-dimensional signals), spatial domain (multidimensional signals), frequency domain, autocorrelation domain, and wavelet domains. They choose the domain in which to process a signal by making an informed guess (or by trying different possibilities) as to which domain best represents the essential characteristics of the signal. A sequence of samples from a measuring device produces a time or spatial domain representation, whereas a discrete Fourier transform produces the frequency domain information, which is the frequency spectrum.

2.3 Process of Digital Signal Processing

Figure 1. Block diagram of a DSP system

  Since the goal of DSP is usually to measure or filter continuous real-world analog signals, the first step is usually to convert the signal from an analog to a digital form, by using an analog-to-digital converter (ADC). Often, the required output signal is another analog output signal, which requires a digital-to-analog converter (DAC). Even though this process is more complex than analog processing and has a discrete value range, the stability of digital signal processing (thanks to error detection and correction) and its lower vulnerability to noise make it advantageous over analog signal processing for many, though not all, applications.

2.3.1 A/D Converter

  An analog-to-digital converter is a device which converts continuous signals to discrete digital numbers. Typically, an ADC is an electronic device that converts an input analog voltage (or current) to a digital number proportional to the magnitude of the voltage or current. However, some non-electronic or only partially electronic devices, such as rotary encoders, can also be considered ADCs. The digital output may use different coding schemes, such as binary, Gray code or two's complement binary.

Figure 2. 4-channel stereo multiplexed analog-to-digital converter WM8775SEDS
Figure 3. Block diagram of an analog-to-digital converter

2.3.1.A Sampler

  In order to use an analog signal on a computer it must be digitized with an analog-to-digital converter. A continuously varying band-limited signal can be sampled, that is, the signal values at intervals of time T (the sampling interval) can be measured and stored. Sampling is usually carried out in two stages, discretization and quantization.

  In the discretization stage, the space of signals is partitioned into equivalence classes such that the sampling rate is higher than twice the highest frequency of the signal. This is essentially what is embodied in the Shannon-Nyquist sampling theorem.

Figure 4. Sampler

  Sampling can be described by the relation:

                  x[n] = xa(n Ts)

where x[n] is the discrete-time signal, xa is the input analog signal, n is the sample index, Ts is the sampling interval, and fs = 1/Ts is the sampling frequency.
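  As a brief illustration of the sampling relation above, the following Python sketch (a minimal example of my own, assuming NumPy is available; the 50 Hz test signal and 1 kHz rate are made-up values) evaluates x[n] = xa(n Ts) for a sine wave:

import numpy as np

fs = 1000.0          # example sampling frequency in Hz (fs = 1/Ts)
Ts = 1.0 / fs        # sampling interval in seconds
f0 = 50.0            # frequency of the "analog" test signal in Hz

def xa(t):
    # The analog input signal, here a pure sine wave.
    return np.sin(2 * np.pi * f0 * t)

n = np.arange(100)   # sample indices n = 0, 1, ..., 99
x = xa(n * Ts)       # x[n] = xa(n*Ts): discrete in time, still continuous in value

print(x[:5])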

Figure 5. Sampled signal (discrete time, continuous values)

  Since a practical ADC cannot make an instantaneous conversion, the input value must necessarily be held constant during the time that the converter performs a conversion (called the conversion time). An input circuit called a sample and hold performs this task, in most cases by using a capacitor to store the analog voltage at the input and using an electronic switch or gate to disconnect the capacitor from the input. Many ADC integrated circuits include the sample and hold subsystem internally.

2.3.1.B Quantizer

  Quantization is carried out by replacing the sampled signal with a representative signal of the corresponding equivalence class. In the quantization stage the representative signal values are approximated by values from a finite set.
  For example, a real number in the interval [0, 100] may be rounded to one of the integers 0, 1, 2, ..., 100. In other words, quantization can be described as a mapping that represents a finite continuous interval I = [a, b] of the range of a continuous-valued signal with a single number c, which is also on that interval. For example, rounding to the nearest integer replaces the interval [c - 0.5, c + 0.5) with the number c, for integer c.

Figure 6. Quantized signal (continuous time, discrete values)

Figure 7. Digital signal (sampled and quantized: discrete time, discrete values)
2.3.2 Digital Signal Processors

2.3.2.A Blinding Speed

  At its heart, digital signal processing is highly numerical and very repetitive. As each new piece of signal data arrives, it must be multiplied, summed, and otherwise transformed according to complex formulas. What makes this such a keen technological challenge is the speed requirement. DSP systems must work in real time, capturing and processing information as it happens. Like a worker on a fast-moving assembly line, analog-to-digital converters and DSPs must keep up with the work flow. If they fall behind, information is lost and the signal gets distorted. The analog-to-digital converter, for instance, must take its signal samples often enough to catch all the relevant fluctuations. If the ADC is too slow, it misses some of the action. Imagine trying to film a football game with a movie camera running at one frame per minute: the film would be incoherent, missing entire plays in the intervals between frames. The DSP, too, must keep pace, churning out calculations as fast as the signal data is received from the ADC. The pace gets progressively more demanding as the signal gets faster. Stereo equipment handles sound signals of up to 20 kilohertz (20,000 cycles per second, the upper limit of human hearing), requiring a DSP to perform hundreds of millions of operations per second. Other signals, such as satellite transmissions, are even faster, reaching up into the gigahertz (billions of cycles per second) range.

2.3.2.B DSPs versus Microprocessors

  DSPs differ from microprocessors in a number of ways. Microprocessors are typically built for a range of general-purpose functions and normally run large blocks of software, such as operating systems like Windows or UNIX. Although today's microprocessors, including the popular and well-known Pentium family, are extremely fast (as fast as or faster than some DSPs), they are still not often called upon to perform real-time computation or signal processing. Usually, their bulk processing power is directed more at handling many tasks at once, controlling huge amounts of memory and data, and controlling a wide variety of computer peripherals (disk drives, modems, video displays, etc.).
However, microprocessors such as Pentiums are notorious for the size, cost, and power consumption needed to achieve their muscular performance, whereas DSPs are more dedicated, racing through a smaller range of functions at lightning speed, yet less costly and requiring much less space and power to achieve their purpose. DSPs are often used in "embedded systems", where they are accompanied by all necessary software (stored in on-chip ROM or off-chip EEPROM), built deep into a piece of equipment, and dedicated to a group of related tasks. In computer systems, DSPs may be employed as attached processors, assisting a general-purpose host microprocessor.

2.3.2.C Different DSPs for Different Jobs

  One way to classify DSP devices and applications is by their dynamic range. The dynamic range is the spread of numbers, from small to large, that must be processed in the course of an application. It takes a certain range of values, for instance, to describe the entire waveform of a particular signal, from deepest valley to highest peak. The range may get even wider as calculations are performed, generating larger and smaller numbers through multiplication and division. The DSP device must have the capacity to handle the numbers so generated. If it doesn't, the numbers may "overflow," producing invalid results. The processor's capacity is a function of its data width (i.e. the number of bits it manipulates) and the type of arithmetic it performs (i.e. fixed or floating point). A 32-bit processor has a wider dynamic range than a 24-bit processor, which has a wider range than a 16-bit processor, and floating-point chips have wider ranges than fixed-point devices. Each type of processor is suited for a particular range of applications. 16-bit fixed-point DSPs are typically used for voice-grade and telecom systems (such as cell-phones), since they work with a relatively narrow range of sound frequencies. On the other hand, hi-fidelity stereo sound has a wider range, calling for a 16-bit or 24-bit ADC and a 24-bit fixed-point DSP like the Motorola DSP563xx series. In this case, the ADC's 16-bit or 24-bit width is needed to capture the complete high-fidelity signal (i.e. much better than a phone); the DSP thus must be 24 bits to accommodate the larger values resulting when the signal data is manipulated. Applications requiring still greater dynamic range include image processing, 3-D graphics, and scientific and research simulations; such applications typically call for a 32-bit floating-point processor.

2.3.2.D DSP Evolution

  Around 30 years ago, digital signal processing was more theory than practice. The only systems capable of doing signal processing were massive mainframes and supercomputers, and even then much of the processing was done not in real time but off-line, in batches. For example, seismic data was collected in the field, stored on magnetic tapes and then taken to a computing centre, where a mainframe might take hours or days to digest the information. The first practical real-time DSP systems emerged in the late 1970s and used bipolar "bit-slice" components. Large quantities of these building-block chips were needed to design a system, at considerable effort and expense. Uses were limited to esoteric high-end technology, such as military and space systems. The economics began to change in the early 80s with the advent of single-chip MOS (Metal-Oxide Semiconductor) DSPs. Cheaper and easier to design in than building blocks, these "monolithic" processors meant that digital signal processing could be cost-effectively integrated into an array of ordinary products.
  The early single-chip processors were relatively simple 16-bit devices which, teamed with 8- or 10-bit ADCs, were suitable for low-speed applications: general-purpose coders such as talking toys, simple controllers, and vocoders (voice encoding devices used in telecommunications).

2.3.2.E The Digital Signal Processor Market

  The DSP market is very large and growing rapidly. As shown in Fig. 8, it will be about 8-10 billion dollars/year at the turn of the century, growing at a rate of 30-40% each year. This is being fueled by the incessant demand for better and cheaper consumer products, such as cellular telephones, multimedia computers, and high-fidelity music reproduction. These high-revenue applications are shaping the field, while less profitable areas, such as scientific instrumentation, are just riding the wave of technology.

Figure 8. Graph of increasing demand for DSP

  DSPs can be purchased in three forms: as a core, as a processor, and as a board-level product. In DSP, the term "core" refers to the section of the processor where the key tasks are carried out, including the data registers, multiplier, ALU, address generator, and program sequencer. A complete processor requires combining the core with memory and interfaces to the outside world. While the core and these peripheral sections are designed separately, they will be fabricated on the same piece of silicon, making the processor a single integrated circuit.

  Suppose you build cellular telephones and want to include a DSP in the design. You will probably want to purchase the DSP as a processor, that is, an integrated circuit ("chip") that contains the core, memory and other internal features. To incorporate this IC in your product, you design a printed circuit board where it will be soldered in next to your other electronics. This is the most common way that DSPs are used. There are several dozen companies that will sell you DSPs already mounted on a printed circuit board. These have such features as extra memory, A/D and D/A converters, EPROM sockets, multiple processors on the same board, and so on. While some of these boards are intended to be used as stand-alone computers, most are configured to be plugged into a host, such as a personal computer. A number of such companies dominate today's DSP market.
2.3.2.F Things that have DSPs

  Some typical and well-known items which contain one (or many) embedded DSPs:

   - the biggie: cell phones
   - fax machines
   - DVD players and other home audio equipment
   - your car (for example: the anti-lock braking system)
   - computer disk drives
   - satellites (they have a lot)
   - the "switch" at your local telephone company (more than a lot)
   - digital radios
   - high-resolution printers
   - digital cameras

2.3.3 Digital to Analog Converter

  In electronics, a digital-to-analog converter (DAC or D-to-A) is a device for converting a digital (usually binary) code to an analog signal (current, voltage or electric charge).

Figure 9. 8-channel digital-to-analog converter Cirrus Logic CS4382 placed on a Sound Blaster X-Fi Fatal1ty

  A DAC converts an abstract finite-precision number (usually a fixed-point binary number) into a concrete physical quantity (e.g., a voltage or a pressure). In particular, DACs are often used to convert finite-precision time series data to a continually varying physical signal.

Figure 10. Reconstructed analog signal

  A typical DAC converts the abstract numbers into a concrete sequence of impulses that are then processed by a reconstruction filter using some form of interpolation to fill in data between the impulses. Other DAC methods (e.g., methods based on delta-sigma modulation) produce a pulse-density modulated signal that can then be filtered in a similar way to produce a smoothly varying signal.
  By the Nyquist-Shannon sampling theorem, sampled data can be reconstructed perfectly provided that its bandwidth meets certain requirements (e.g., a baseband signal with bandwidth less than the Nyquist frequency). However, even with an ideal reconstruction filter, digital sampling introduces quantization error that makes perfect reconstruction practically impossible. Increasing the digital resolution (i.e., increasing the number of bits used in each sample) or introducing sampling dither can reduce this error.
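  To make the reconstruction idea concrete, here is a minimal sketch of my own (assuming NumPy; the signal and rates are example values) that rebuilds a densely sampled waveform from its samples by ideal sinc interpolation, which is the limiting case the Nyquist-Shannon theorem describes; a practical DAC instead outputs a staircase that is smoothed by an analog reconstruction filter:

import numpy as np

fs = 8000.0                               # assumed sampling rate in Hz
Ts = 1.0 / fs
n = np.arange(64)                         # sample indices
x = np.sin(2 * np.pi * 500.0 * n / fs)    # samples of a 500 Hz sine (well below fs/2)

# Ideal reconstruction: xr(t) = sum over n of x[n] * sinc((t - n*Ts) / Ts)
t = np.linspace(0, (len(n) - 1) * Ts, 1000)   # dense "continuous" time grid
xr = np.sum(x[:, None] * np.sinc((t[None, :] - n[:, None] * Ts) / Ts), axis=0)

print(xr.shape)   # 1000 reconstructed values filling in between the 64 samples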
3. Digital Filters

  In signal processing, the function of a filter is to remove unwanted parts of the signal, such as random noise, or to extract useful parts of the signal, such as the components lying within a certain frequency range.

  There are two main kinds of filter, analog and digital. They are quite different in their physical makeup and in how they work.

  An analog filter uses analog electronic circuits made up from components such as resistors, capacitors and op-amps to produce the required filtering effect. Such filter circuits are widely used in applications such as noise reduction, video signal enhancement, graphic equalisers in hi-fi systems, and many other areas. There are well-established standard techniques for designing an analog filter circuit for a given requirement. At all stages, the signal being filtered is an electrical voltage or current which is the direct analogue of the physical quantity (e.g. a sound or video signal or transducer output) involved.

  A digital filter uses a digital processor to perform numerical calculations on sampled values of the signal. The processor may be a general-purpose computer such as a PC, or a specialised DSP (Digital Signal Processor) chip.

  Note that in a digital filter, the signal is represented by a sequence of numbers, rather than a voltage or current as in an analog filter.

3.1 Advantages of using digital filters

  The following list gives some of the main advantages of digital over analog filters.

1. A digital filter is programmable, i.e. its operation is determined by a program stored in the processor's memory. This means the digital filter can easily be changed without affecting the circuitry (hardware). An analog filter can only be changed by redesigning the filter circuit.

2. Digital filters are easily designed, tested and implemented on a general-purpose computer or workstation.

3. The characteristics of analog filter circuits are subject to drift and are dependent on temperature. Digital filters do not suffer from these problems, and so are extremely stable with respect both to time and temperature.

4. Unlike their analog counterparts, digital filters can handle low frequency signals accurately. As the speed of DSP technology continues to increase, digital filters are being applied to high frequency signals in the RF (radio frequency) domain, which in the past was the exclusive preserve of analog technology.

5. Digital filters are very much more versatile in their ability to process signals in a variety of ways; this includes the ability of some types of digital filter to adapt to changes in the characteristics of the signal.
6. Fast DSP processors can handle complex combinations of filters in parallel or cascade (series), making the hardware requirements relatively simple and compact in comparison with the equivalent analog circuitry.

3.2 Operation of digital filters

  In this section, we will develop the basic theory of the operation of digital filters. This is essential to understanding how digital filters are designed and used. Suppose the "raw" signal which is to be digitally filtered is in the form of a voltage waveform described by the function

                  V = x(t)

where t is time. This signal is sampled at time intervals h (the sampling interval). The sampled value at time t = ih is

                  xi = x(ih)

Thus the digital values transferred from the ADC to the processor can be represented by the sequence

         x0, x1, x2, x3, ...

corresponding to the values of the signal waveform at

          t = 0, h, 2h, 3h, ...

where t = 0 is the instant at which sampling begins.

At time t = nh (where n is some positive integer), the values available to the processor, stored in memory, are

          x0, x1, x2, x3, ..., xn

Note that the sampled values xn+1, xn+2 etc. are not available, as they haven't happened yet!

The digital output from the processor to the DAC consists of the sequence of values

            y0, y1, y2, y3, ..., yn

In general, the value of yn is calculated from the values x0, x1, x2, x3, ..., xn. The way in which the y's are calculated from the x's determines the filtering action of the digital filter.

Examples of simple digital filters

  The following examples illustrate the essential features of digital filters.

A. Unity gain filter:

                  yn = xn

Each output value yn is exactly the same as the corresponding input value xn:

                  y1 = x1
                  y2 = x2
                  y3 = x3
                  ... etc

This is a trivial case in which the filter has no effect on the signal.

B. Simple gain filter:

                  yn = K xn        (K = constant)

This simply applies a gain factor K to each input value. K > 1 makes the filter an amplifier, while 0 < K < 1 makes it an attenuator. K < 0 corresponds to an inverting amplifier. Example (1) above is simply the special case where K = 1.

C. Pure delay filter:

                  yn = xn-1

The output value at time t = nh is simply the input at time t = (n-1)h, i.e. the signal is delayed by time h:

                  y0 = x-1
                  y1 = x0
                  y2 = x1
                  y3 = x2
                  ... etc

Note that as sampling is assumed to commence at t = 0, the input value x-1 at t = -h is undefined. It is usual to take this (and any other values of x prior to t = 0) as zero.
D. Two-term difference filter:

                  yn = xn - xn-1

The output value at t = nh is equal to the difference between the current input xn and the previous input xn-1:

                  y0 = x0 - x-1
                  y1 = x1 - x0
                  y2 = x2 - x1
                  y3 = x3 - x2
                  ... etc

i.e. the output is the change in the input over the most recent sampling interval h. The effect of this filter is similar to that of an analog differentiator circuit.

E. Two-term average filter:

                  yn = (xn + xn-1) / 2

The output is the average (arithmetic mean) of the current and previous input:

                  y0 = (x0 + x-1) / 2
                  y1 = (x1 + x0) / 2
                  y2 = (x2 + x1) / 2
                  y3 = (x3 + x2) / 2
                  ... etc

This is a simple type of low pass filter as it tends to smooth out high-frequency variations in a signal.

F. Three-term average filter:

                  yn = (xn + xn-1 + xn-2) / 3

This is similar to the previous example, with the average being taken of the current and two previous inputs:

                  y0 = (x0 + x-1 + x-2) / 3
                  y1 = (x1 + x0 + x-1) / 3
                  y2 = (x2 + x1 + x0) / 3
                  y3 = (x3 + x2 + x1) / 3
                  ... etc

G. Central difference filter:

                  yn = (xn - xn-2) / 2

This is similar in its effect to example (4). The output is equal to half the change in the input signal over the previous two sampling intervals:

                  y0 = (x0 - x-2) / 2
                  y1 = (x1 - x-1) / 2
                  y2 = (x2 - x0) / 2
                  ... etc
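  Two of these example filters (D and F) are sketched below in plain Python as an illustration, with inputs before t = 0 taken as zero as noted in example C (the test sequence is arbitrary):

def two_term_difference(x):
    # Example D: yn = xn - xn-1 (a crude differentiator).
    y = []
    prev = 0.0                     # x[-1] is taken as zero
    for sample in x:
        y.append(sample - prev)
        prev = sample
    return y

def three_term_average(x):
    # Example F: yn = (xn + xn-1 + xn-2) / 3 (a simple low pass filter).
    y = []
    prev1 = prev2 = 0.0            # x[-1] and x[-2] are taken as zero
    for sample in x:
        y.append((sample + prev1 + prev2) / 3.0)
        prev2, prev1 = prev1, sample
    return y

x = [0.0, 1.0, 2.0, 3.0, 3.0, 3.0]         # made-up input sequence
print(two_term_difference(x))              # [0.0, 1.0, 1.0, 1.0, 0.0, 0.0]
print(three_term_average(x))               # [0.0, 0.333..., 1.0, 2.0, 2.666..., 3.0]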
3.3 Order of a digital filter

  The order of a digital filter is the number of previous inputs (stored in the processor's memory) used to calculate the current output. Thus:

1. Examples (1) and (2) above are zero-order filters, as the current output yn depends only on the current input xn and not on any previous inputs.

2. Examples (3), (4) and (5) are all of first order, as one previous input (xn-1) is required to calculate yn. (Note that the filter of example (3) is classed as first-order because it uses one previous input, even though the current input is not used.)

3. In examples (6) and (7), two previous inputs (xn-1 and xn-2) are needed, so these are second-order filters. Filters may be of any order from zero upwards.

  All of the digital filter examples given above can be written in the following general forms:

  Zero order:    yn = a0xn
  First order:   yn = a0xn + a1xn-1
  Second order:  yn = a0xn + a1xn-1 + a2xn-2

Similar expressions can be developed for filters of any order, where a0, a1, ... are the filter coefficients.
3.4 Finite Impulse Response (FIR) Filter

  A finite impulse response (FIR) filter is a type of digital filter. The impulse response, the filter's response to a Kronecker delta input, is finite because it settles to zero in a finite number of sample intervals. This is in contrast to infinite impulse response (IIR) filters, which have internal feedback and may continue to respond indefinitely. The impulse response of an Nth-order FIR filter lasts for N + 1 samples, and then dies to zero. In an FIR filter the current output (yn) is calculated solely from the current and previous input values (xn, xn-1, xn-2, ...).

Figure 11. Block diagram of a simple FIR filter

  The difference equation that defines the output of an FIR filter in terms of its input is:

  y[n] = b0x[n] + b1x[n-1] + ... + bNx[n-N]

where:

   - x[n] is the input signal,
   - y[n] is the output signal,
   - bi are the filter coefficients, and
   - N is the filter order; an Nth-order filter has (N + 1) terms on the right-hand side, commonly referred to as taps.

  This equation can also be expressed as a convolution of the coefficient sequence bi with the input signal:

  y[n] = sum from i = 0 to N of bi x[n-i]

That is, the filter output is a weighted sum of the current and a finite number of previous values of the input signal.
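  A small, self-contained sketch of this difference equation is given below (illustrative only; the coefficients are an arbitrary 3-tap moving average, not a designed filter), assuming NumPy:

import numpy as np

def fir_filter(x, b):
    # y[n] = b0*x[n] + b1*x[n-1] + ... + bN*x[n-N],
    # with x taken as zero before n = 0 (the usual start-up convention used above).
    N = len(b) - 1                      # filter order
    y = np.zeros(len(x))
    for n in range(len(x)):
        for i in range(N + 1):
            if n - i >= 0:
                y[n] += b[i] * x[n - i]
    return y

b = [1/3, 1/3, 1/3]                     # example taps: a three-term average
x = np.array([0.0, 1.0, 2.0, 3.0, 3.0, 3.0])
print(fir_filter(x, b))
# The same result can be obtained with np.convolve(x, b)[:len(x)].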
Properties

  An FIR filter has a number of useful properties which sometimes make it preferable to an infinite impulse response (IIR) filter. FIR filters:

   - are inherently stable, since all the poles are located at the origin and thus lie within the unit circle;
   - require no feedback, which means that any rounding errors are not compounded by summed iterations; the same relative error occurs in each calculation, which also makes implementation simpler;
   - can easily be designed to be linear phase by making the coefficient sequence symmetric. Linear phase, or phase change proportional to frequency, corresponds to equal delay at all frequencies. This property is sometimes desired for phase-sensitive applications, for example crossover filters and mastering.

  The main disadvantage of FIR filters is that considerably more computation power is required compared to an IIR filter with similar sharpness or selectivity, especially when low-frequency (relative to the sample rate) cutoffs are needed.
3.5 Infinite Impulse Response (IIR) Filter

  Infinite impulse response (IIR) is a property of signal processing systems. Systems with this property are known as IIR systems or, when dealing with filter systems, as IIR filters. IIR systems have an impulse response function that is non-zero over an infinite length of time. This is in contrast to finite impulse response (FIR) filters, which have fixed-duration impulse responses. The simplest analog IIR filter is an RC filter made up of a single resistor (R) feeding into a node shared with a single capacitor (C). This filter has an exponential impulse response characterized by an RC time constant.

  IIR filters may be implemented as either analog or digital filters. In digital IIR filters, the output feedback is immediately apparent in the equations defining the output. Note that, unlike with FIR filters, in designing IIR filters it is necessary to carefully consider the "time zero" case, in which the outputs of the filter have not yet been clearly defined.

  Design of digital IIR filters is heavily dependent on that of their analog counterparts, because there are plenty of resources, works and straightforward design methods for analog feedback filter design, while there are hardly any for digital IIR filters. As a result, usually, when a digital IIR filter is going to be implemented, an analog filter (e.g. a Chebyshev, Butterworth or elliptic filter) is first designed and then converted to a digital filter by applying a discretization technique such as the bilinear transform or impulse invariance.
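  For illustration, one common way to follow that route in software is sketched below, assuming SciPy is available (the order, cutoff and sample rate are made-up example values, not taken from the text):

import numpy as np
from scipy import signal

fs = 8000.0                                   # assumed sampling rate, Hz
fc = 1000.0                                   # assumed cutoff frequency, Hz

# 1) Design an analog Butterworth prototype (4th-order low-pass).
b_analog, a_analog = signal.butter(4, 2 * np.pi * fc, btype='low', analog=True)

# 2) Discretize it with the bilinear transform to obtain a digital IIR filter.
b_digital, a_digital = signal.bilinear(b_analog, a_analog, fs=fs)

# 3) Apply the resulting IIR filter to some data.
x = np.random.randn(1024)                     # placeholder input signal
y = signal.lfilter(b_digital, a_digital, x)
print(len(y))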

  A simple example of an IIR filter is given by

                  yn = xn + yn-1

Figure 12. Block diagram of an IIR filter

  In other words, this filter determines the current output (yn) by adding the current input (xn) to the previous output (yn-1).
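  A direct rendering of this recursion in code (a minimal sketch; the input values are arbitrary) makes the feedback explicit: each output re-uses the previous output, so the response to a single impulse never fully dies away; in this particular case it never decays at all, since the filter is a running accumulator.

def simple_iir(x):
    # yn = xn + yn-1, with y[-1] taken as zero.
    y = []
    prev_y = 0.0                  # feedback term: the previous output
    for sample in x:
        prev_y = sample + prev_y  # current output depends on the previous output
        y.append(prev_y)
    return y

print(simple_iir([1.0, 0.0, 0.0, 0.0]))   # prints [1.0, 1.0, 1.0, 1.0]: the impulse persists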
4. Applications of DSP

  The main applications of DSP are audio signal processing, audio compression, digital image processing, video compression, speech processing, speech recognition, digital communications, radar, sonar, seismology, and biomedicine.

  Specific examples are speech compression and transmission in digital mobile phones, room matching equalization of sound in hi-fi and sound reinforcement applications, weather forecasting, economic forecasting, seismic data processing, analysis and control of industrial processes, computer-generated animations in movies, medical imaging such as CAT scans and MRI, MP3 compression, image manipulation, high fidelity loudspeaker crossovers and equalization, and audio effects for use with electric guitar amplifiers.

4.1 Speech Processing

  Speech processing is the study of speech signals and the processing methods of these signals; it can be regarded as a special case of digital signal processing, applied to speech signals.

Speech processing can be divided into the following categories:

   A. Speech recognition, which deals with analysis of the linguistic content of a speech signal.
   B. Speaker recognition, where the aim is to recognize the identity of the speaker.
   C. Enhancement of speech signals, e.g. audio noise reduction.
   D. Speech coding, a specialized form of data compression, which is important in the telecommunication area.
   E. Voice analysis for medical purposes, such as analysis of vocal loading and dysfunction of the vocal cords.
   F. Speech synthesis: the artificial synthesis of speech, which usually means computer-generated speech.

A. Speech Recognition

  Speech recognition is a broad term: it means a system can recognize almost anybody's speech, such as a call-centre system designed to recognize many voices. Voice recognition is a system trained to a particular user, where it recognizes their speech based on their unique vocal sound.

  Basically, speech recognition involves inputting information into a computer using the human voice; the computer quantizes it and then recognizes the speech. As shown in the figure, speech is input through a microphone, and the ADC quantizes the input signal and provides a digital output.

Figure 13. Block diagram of a speech recognition system

  The output is stored in memory. After storage, the recognition process starts: the spoken word is again digitised and its template is compared with the templates in memory. When a match occurs, the word has been recognised and the system informs the user about the match.

  It is obvious that the performance of the system is greatly affected by:

1) Background noise

2) Speaker characteristics (and microphone characteristics)

3) The pause taken between two words

4) How carefully words are pronounced

  This is where DSP comes into the picture. Considering all the above problems, one has to extract the important parameters (the template) from the spoken word and then match them against the stored templates; this operation is done accurately by DSP processors.
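  As a toy illustration of the template-matching idea (not the system of figure 13; practical recognisers use far more robust features), one might compare a short-time energy "template" of the spoken word against stored templates with a simple distance measure, assuming NumPy:

import numpy as np

def energy_template(samples, frames=16):
    # Toy feature: split the word into frames and use normalised per-frame energy,
    # so that overall loudness differences matter less.
    chunks = np.array_split(np.asarray(samples, dtype=float), frames)
    e = np.array([np.sum(c ** 2) for c in chunks])
    return e / (np.sum(e) + 1e-12)

def recognise(word_samples, stored_templates):
    # Return the label of the stored template closest to the spoken word.
    t = energy_template(word_samples)
    distances = {label: np.linalg.norm(t - ref)
                 for label, ref in stored_templates.items()}
    return min(distances, key=distances.get)

# Hypothetical usage: templates recorded earlier, one per vocabulary word.
# stored = {"yes": energy_template(yes_samples), "no": energy_template(no_samples)}
# print(recognise(new_samples, stored))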
  Speech recognition applications include voice dialing (e.g., "Call home"), call routing (e.g., "I would like to make a collect call"), domotic appliance control, content-based spoken audio search (e.g., finding a podcast where particular words were spoken), simple data entry (e.g., entering a credit card number), preparation of structured documents (e.g., a radiology report), speech-to-text processing (e.g., word processors or emails), and aircraft cockpits (usually termed Direct Voice Input).

B. Speaker Recognition

  Speaker recognition is the computing task of validating a user's claimed identity using characteristics extracted from their voice.

  There is a difference between speaker recognition (recognizing who is speaking) and speech recognition (recognizing what is being said). These two terms are frequently confused, as is voice recognition. Voice recognition is a combination of the two: it uses learned aspects of a speaker's voice to determine what is being said; such a system cannot recognize speech from random speakers very accurately, but it can reach high accuracy for individual voices it has been trained with. In addition, there is a difference between the act of authentication (commonly referred to as speaker verification or speaker authentication) and identification.

  Speaker recognition has a history dating back some four decades and uses the acoustic features of speech that have been found to differ between individuals. These acoustic patterns reflect both anatomy (e.g., size and shape of the throat and mouth) and learned behavioral patterns (e.g., voice pitch, speaking style). Speaker verification has earned speaker recognition its classification as a "behavioral biometric."

C. Speech Coding

  Speech coding is the application of data compression to digital audio signals containing speech. Speech coding uses speech-specific parameter estimation based on audio signal processing techniques to model the speech signal, combined with generic data compression algorithms to represent the resulting modeled parameters in a compact bit stream.

  The two most important applications of speech coding are mobile telephony and Voice over IP.

  The techniques used in speech coding are similar to those in audio data compression and audio coding, where knowledge of psychoacoustics is used to transmit only data that is relevant to the human auditory system. For example, in narrowband speech coding, only information in the frequency band 400 Hz to 3500 Hz is transmitted, but the reconstructed signal is still adequate for intelligibility.

  Speech coding differs from other forms of audio coding in that speech is a much simpler signal than most other audio signals, and there is much more statistical information available about the properties of speech. As a result, some auditory information which is relevant in audio coding can be unnecessary in the speech coding context. In speech coding, the most important criterion is preservation of intelligibility and "pleasantness" of speech, with a constrained amount of transmitted data.
  It should be emphasized that the intelligibility of speech includes, besides the actual literal content, also speaker identity, emotions, intonation, timbre, etc., all of which are important for perfect intelligibility. The more abstract concept of pleasantness of degraded speech is a different property from intelligibility, since it is possible for degraded speech to be completely intelligible yet subjectively annoying to the listener.

4.2 Application in Radar

  Radar is used to detect stationary or moving objects. A radar set has a transmitter and a receiver. The transmitter generates signals which are transmitted through the antenna. If an object is present, the signals hit the target and a portion of the signal is echoed back. The receiver picks up the echoed signal, cancels noise and amplifies the signal. Depending upon the time duration between the transmitted and received signals, the distance at which the target is located can be identified.

Figure 14. Block diagram of a modern radar system

  There are three major components in a radar system:

i. Antenna

ii. Tracking computer

iii. Signal processor

  The antenna is used to transmit the analog signal.

  The tracking computer schedules the appropriate antenna positions and transmitted signals as a function of time. It also keeps track of important targets and controls the display in the radar.

  The signal processor performs the following functions:

i. Matched filtering

ii. Removal of useless information (threshold detection)
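  As a brief aside on the first of these functions, matched filtering amounts to correlating the received signal with a copy of the transmitted pulse, which concentrates the echo energy into a sharp peak whose position gives the round-trip delay. A minimal sketch of my own (assuming NumPy; the pulse shape, noise level and delay are made-up values):

import numpy as np

pulse = np.array([1.0, 1.0, 1.0, -1.0, 1.0])      # assumed transmitted pulse shape
echo_delay = 40                                    # assumed round-trip delay in samples

# Received signal: a noisy echo of the pulse buried at the chosen delay.
rx = 0.5 * np.random.randn(128)
rx[echo_delay:echo_delay + len(pulse)] += pulse

# Matched filter: correlate the received signal with the known pulse.
matched = np.correlate(rx, pulse, mode='valid')
print(int(np.argmax(matched)))                     # peak index estimates the echo delay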
4.3 DTMF Signal Detection

  Dual-tone multi-frequency (DTMF) signaling is used for telecommunication signaling over analog telephone lines in the voice-frequency band between telephone handsets and other communications devices and the switching center.

  As a method of in-band signaling, DTMF tones were also used by cable television broadcasters to indicate the start and stop times of local commercial insertion points during station breaks, for the benefit of cable companies. Until better out-of-band signaling equipment was developed in the 1990s, fast, unacknowledged, and loud DTMF tone sequences could be heard during the commercial breaks of cable channels.
Figure 15. Telephone keypad in DTMF dialing

  The DTMF keypad is laid out in a 4x4 matrix, with each row representing a low frequency and each column representing a high frequency:

                 1209 Hz   1336 Hz   1477 Hz   1633 Hz
   697 Hz           1         2         3         A
   770 Hz           4         5         6         B
   852 Hz           7         8         9         C
   941 Hz           *         0         #         D

  Pressing a single key (such as 1) will send a sinusoidal tone for each of the two frequencies (697 and 1209 hertz).
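  A DTMF digit is therefore just the sum of two sinusoids. The sketch below (illustrative only; the 8 kHz rate matches the sampling rate mentioned in the decoding steps that follow, and the key-to-frequency map comes from the table above) generates such a signal, assuming NumPy:

import numpy as np

# Row and column frequencies from the keypad table above (in Hz).
DTMF_FREQS = {
    '1': (697, 1209), '2': (697, 1336), '3': (697, 1477), 'A': (697, 1633),
    '4': (770, 1209), '5': (770, 1336), '6': (770, 1477), 'B': (770, 1633),
    '7': (852, 1209), '8': (852, 1336), '9': (852, 1477), 'C': (852, 1633),
    '*': (941, 1209), '0': (941, 1336), '#': (941, 1477), 'D': (941, 1633),
}

def dtmf_tone(key, duration=0.04, fs=8000):
    # Generate one DTMF digit: the sum of its low and high frequency tones.
    f_low, f_high = DTMF_FREQS[key]
    t = np.arange(int(duration * fs)) / fs
    return 0.5 * np.sin(2 * np.pi * f_low * t) + 0.5 * np.sin(2 * np.pi * f_high * t)

x = dtmf_tone('1')        # 40 ms of the "1" tone: 697 Hz + 1209 Hz
print(len(x))             # 320 samples at 8 kHz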
Figure 16. Graph of a generated DTMF signal

  The original keypads had levers inside, so each button activated two contacts. The multiple tones are the reason for calling the system multi-frequency. These tones are then decoded by the switching center to determine which key was pressed.

  For decoding we use DSP; to be very specific, we use the DFT. Following are the steps to be followed.

i. Sample the DTMF signal

  The minimum duration of a DTMF signal is 40 msec. So if we sample it at 8 kHz, there are at most 0.04 x 8000 = 320 samples.

ii. Compute the N-point DFT

  Now we have to compute the N-point DFT values of the sampled DTMF signal. Normally, the number of samples should be chosen in such a way that it minimizes the difference between the actual location of each sinusoid and the nearest integer value of the DFT index K.

iii. Compute the DFT at the 8 frequency tones

  We know that in totality we have eight frequency tones. The DTMF decoder computes the DFT samples closest in frequency to the 8 DTMF fundamental tones and their respective second harmonics. After that we get the energy spectrum.
  In this spectrum we get very high energy at the DFT index k corresponding to each tone frequency that is present. The index k at which the energy is high is taken and compared with the standard one. After this comparison we can come out with the unique digit or code.

    Fk = k fs / N          (k = 0, 1, ..., N-1)

iv. Choosing N

  In this application N has to be carefully chosen. We know that N determines the frequency spacing between the locations of the DFT samples; N also determines the time taken to compute the DFT samples. If N is large, the time required will be large, but the resolution in the frequency domain is better. One more important parameter, spectral leakage, has to be considered while choosing N; the spectral leakage has to be as small as possible. (For example, with fs = 8 kHz a commonly used value is N = 205, which places all eight tone frequencies close to integer values of the index k.)

v. Considering error sources

  If you observe the tone frequencies you will find that the spectrum of the human voice contains all of these frequencies. Therefore it is very important to distinguish between the human voice and a DTMF signal. The problem is made simpler because a DTMF signal is a pure sine wave with negligible power at the second harmonic. This is not true for the human voice, which contains second and third harmonics. Therefore a practical DTMF decoder also computes the DFT samples closest in frequency to the second harmonics of each of the fundamental tone frequencies. This distinguishes a DTMF signal from the human voice.

4.4 Removing vocals from commercial tracks

  Since the advent of commercial music, songs have commonly been identified by the singer's voice. Popular tracks are distinguishable by an extraordinary lead voice. Meanwhile, songs with great instrumental backgrounds can be marred by an unfortunate selection of vocalists and lyrics. The goal of this project is to remove vocals from commercial tracks in order to appreciate the underlying instrumental background. Moreover, the removal of vocals has several applications. Karaoke is a common pastime in which the removal of vocals permits the participants to effectively interact with the song. Additionally, removing vocals makes the production of ringtones and remixes easier. Finally, people may just wish to hear the track without the lead singer's voice. In order to remove vocals from commercial tracks, several techniques are used. These techniques include filtering the known (average) frequency range of the human voice (band stop filtering), cancelling common frequencies between stereo channels (stereo cancellation), and masking a time-frequency spectrogram (audio blind source separation). These techniques are described in detail below.

4.4.A Band stop filtering

  The simplest technique for removing vocals is band stop filtering. The human voice has a distinct frequency range between 300 Hz and 3 kHz. By applying a band stop filter (figure 17) at these frequencies, most vocals can be removed. Software filtering allows us the luxury of implementing a very high order filter. Unfortunately, however, this technique has the side effect of also removing any instruments that occupy the same frequency range, such as strings and guitars. This is undesirable.
Figure 17. The result of applying a band stop filter
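  As a rough illustration (not taken from the project itself), such a band stop filter could be realised with SciPy; the sample rate and the filter order below are assumptions.

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100                 # assumed sample rate of the track
# Butterworth band stop filter covering the 300 Hz - 3 kHz vocal range
sos = butter(6, [300, 3000], btype='bandstop', fs=fs, output='sos')

def remove_vocals_bandstop(x):
    # x is a (samples, channels) NumPy array; filter each channel
    return sosfiltfilt(sos, x, axis=0)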
4.4.B Stereo Cancellation

  Stereo cancellation, as the name implies, requires a stereo sample, and involves subtracting the common frequencies between the two channels (figure 18). This works most of the time, because the lead singer's voice is mostly centre-panned and therefore has common frequencies in both channels.

  However, since one channel is subtracted from the other, the result of stereo cancellation is a mono track. Another unwelcome result is the lowering in volume of other sounds that are common to both channels, such as drums or bass. Still, the results of this technique are better than those of simple band stop filtering.

Figure 18. The left channel, before and after stereo cancellation
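  In code, stereo cancellation amounts to a single subtraction; a minimal sketch, assuming the track is held in a (samples, 2) NumPy array:

import numpy as np

def stereo_cancel(stereo):
    # subtract the right channel from the left; centre-panned vocals cancel
    left, right = stereo[:, 0], stereo[:, 1]
    return left - right        # mono result with the lead vocal largely removed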




4.4.C Audio Blind Source Separation

  This technique consists of "extracting from an input audio signal a set of audio signals whose mix is perceived similarly to the original audio signal". In our case, we focused on extracting the vocal track from the mix consisting of the rest of the instruments. In order to extract the vocals from the mix, the following steps were followed:

i. Selection of song

  A stereo mix has to be chosen, preferably without any reverberation. If the selected mix has reverberation, the procedure is complicated considerably because the mono tracks overlap with other tracks. Good candidate mixes for this technique are old stereo songs, where complex post-recording audio effects are not as frequent.

  The song that we chose is "Let It Be" by the Beatles. In figure 19, we observe both channels of our song. This clearly shows that the left channel (l) does not contain the same information as the right channel (r). Hence, the song was recorded in stereo mode.

Figure 19. The chosen stereo clip

ii. Short Time Fourier Transform (STFT) and spectrogram

  After the song was chosen and the channels separated, a spectrogram was generated for each of the channels. In order to get the spectrogram, we used the STFT:

    L = STFT(l)          (1)
    R = STFT(r)          (2)

where

    L = left channel STFT
    R = right channel STFT

  The STFT has three important factors to consider: the first factor is the number of DFT points (N) that will be generated per frame; the second factor is the offset between frames; and the third factor is the type of window that each frame will use.

  The first two factors determine the number of samples in our spectrogram. A large number of samples lets us extract the areas where the voice is concentrated with more accuracy. Also, the window used in each frame is of extreme importance because the offset between frames is really low; if the data is not well confined in each frame, we will have noise coming from other frames due to overlap.

  The number of DFT points for our STFT (N) was set to 8192, while the offset between frames was set to 2048 (N/4). This yields, in the spectrogram, an array of 4097 (= N/2 + 1) x 236 DFT points for our specific clip. Moreover, the window used for the frames is the Hamming window, although a Blackman window would yield the same or better results due to its higher attenuation in the stop band. Figure 20 shows the spectrograms for both the left and the right channels.

Figure 20. Spectrogram of the left and right channels
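  A small sketch of this step using scipy.signal.stft with the parameters quoted above (N = 8192, a 2048-sample offset between frames, Hamming window); the variable names follow equations (1) and (2), and l, r and fs are assumed to be the channel signals and their sampling rate.

import numpy as np
from scipy.signal import stft

N, hop = 8192, 2048

def channel_stft(x, fs):
    # noverlap = N - hop reproduces the 2048-sample offset between frames
    f, t, X = stft(x, fs=fs, window='hamming', nperseg=N,
                   noverlap=N - hop, nfft=N)
    return X                     # complex array with N/2 + 1 = 4097 rows

# L = channel_stft(l, fs)        # equation (1)
# R = channel_stft(r, fs)        # equation (2)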
iii. Track Identification

  Once we have both spectrograms, we proceed to recognize similarities between the two channels. We do this with the purpose of isolating one or more of the tracks that the channels share. In our case, the technique that we used is to divide the magnitude of each DFT coefficient in the left channel spectrogram by its counterpart in the right channel spectrogram. This division gives us a channel ratio (CR) between both channels. This is expressed as follows:

    CR = |L| / |R|          (3)

  When you divide a coefficient from the left channel (that represents a single track) by a coefficient from the right channel (that also represents the same single track), the result will be a constant value no matter where we are located in the spectrogram [1]. However, if you divide coefficients that represent two or more tracks, the result will not be constant throughout the spectrogram anymore.
  In figure 21, we can see how often each ratio value occurs in the spectrogram. At a ratio of 1 we found a peak. This peak represents a mono track that was inserted evenly into both channels (a peak at a ratio different from one means that the mono track was attenuated or amplified in one of the channels).

Figure 21. Frequency of channel ratio

iv. Time Frequency Mask - Binary Method

  Once the coefficients that represent mono tracks in the channels are identified, we proceed to substitute them with zeros. This method is usually known as binary masking because the coefficients are multiplied either by one or by zero (in other, more advanced techniques, the coefficients can be weighted). This can be seen mathematically as follows:

    M = 0   if a < CR < b
    M = 1   otherwise          (4)

where M is the binary mask and [a, b] is a narrow interval around the ratio at which the peak was found.

Figure 22. Spectrogram with binary mask

  In figure 22, it can be observed that some parts of the spectrograms have been reduced to zero (blue). It now becomes clear why a larger array of coefficients yields better results: we can be more selective about the areas we want to reduce.
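  A compact sketch of steps iii and iv together; L and R are the channel STFTs from equations (1) and (2), and the bounds a and b of the ratio interval are assumptions here.

import numpy as np

def binary_mask(L, R, a=0.9, b=1.1, eps=1e-12):
    CR = np.abs(L) / (np.abs(R) + eps)            # channel ratio, equation (3)
    M = np.where((CR > a) & (CR < b), 0.0, 1.0)   # binary mask, equation (4)
    return M * L, M * R                           # masked spectrograms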
v. Time-signal recovery - Inverse STFT (ISTFT)

  After the binary mask has been applied, the signal has to be transformed back to the time domain. In order to do this, the ISTFT is used on each of the channels:

    l = ISTFT(L)          (5)
    r = ISTFT(R)          (6)

where

    L = left channel masked signal before the ISTFT
    l = recovered left channel signal
    R = right channel masked signal before the ISTFT
    r = recovered right channel signal

  In figures 23 and 24, the original signals and the recovered signals are compared. It can be seen that each of the signals has been altered from the original. When the two recovered signals are mixed again, we get a stereo clip, but this time without vocals.

Figure 23. Left channel with its recovered counterpart
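  Under the same assumptions as before (N = 8192, 2048-sample offset, Hamming window), the recovery step could be sketched with scipy.signal.istft:

from scipy.signal import istft

def recover(L_masked, R_masked, fs, N=8192, hop=2048):
    _, l_rec = istft(L_masked, fs=fs, window='hamming', nperseg=N,
                     noverlap=N - hop, nfft=N)
    _, r_rec = istft(R_masked, fs=fs, window='hamming', nperseg=N,
                     noverlap=N - hop, nfft=N)
    return l_rec, r_rec          # mix the two again for the vocal-free stereo clip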

  The five steps explained above are one way to remove the vocals from a song. However, the technique is not limited to extracting the vocals; for instance, we could instead extract the instruments and leave the vocals in the song. This technique therefore permits greater flexibility than the other techniques explained in this paper.

  Nonetheless, the audio source separation technique used in this project is just one of many audio source separation approaches. This is mainly because different mixes of instruments and new sound effects intermingle frequencies in more complex ways. Due to this added complexity, a binary mask approach alone will not be enough to separate the sources of such a song.

Figure 24. Right channel with its recovered counterpart

4.5 DSP in MP3 Audio Player

  The diagram below shows how a DSP is used in an MP3 audio player. During the recording phase, analog audio is input through a receiver or another source. This analog signal is then converted to a digital signal by an analog-to-digital converter and passed to the DSP.

Figure 25. Use of DSP in an MP3 player

  The DSP performs the MP3 encoding and saves the file to memory. During the playback phase, the file is taken from memory, decoded by the DSP and then converted back to an analog signal through the digital-to-analog converter so that it can be output through the speaker system. In a more complex example, the DSP would also perform other functions such as volume control, equalization and the user interface.
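  This flow can be summarised with the structural sketch below; the encode/decode and converter helpers are hypothetical placeholders standing in for the DSP routines and hardware, not a real MP3 codec.

def record(adc_read, mp3_encode, memory):
    pcm = adc_read()                  # analog input after the A/D converter
    memory.append(mp3_encode(pcm))    # DSP encodes and saves the file to memory

def playback(memory, mp3_decode, dac_write, volume=1.0):
    for frame in memory:
        pcm = mp3_decode(frame)       # DSP decodes the stored file
        dac_write([volume * s for s in pcm])   # D/A converter drives the speakers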

Conclusion

  Hence we can conclude that DSP forms an alternative to ASP, taking into consideration the merits and demerits of both.

  In this paper we have presented the basic elements of a digital signal processing system and defined the operations required to process a signal digitally. We have also discussed the benefits of using a specialized digital signal processor rather than a general-purpose microprocessor.

  Digital filters operate upon a signal and, depending on the user's requirements, change its amplitude-frequency and phase-frequency characteristics so as to improve the quality of the signal.

  DSP has applications in almost every field related to electronics today. This paper has treated applications to speech processing, RADAR, DTMF signal detection, removing vocals from commercial tracks and MP3 players.