					                   LESSON PLAN

              (July 2011 - Odd Semester)



 Course: B.TECH. (INFORMATION TECHNOLOGY)

                    Semester: III

         Subject: Principles of Communication



             Faculty: J.P.JOSH KUMAR

           Designation: Assistant Professor

Department: Electronics and Communication Engineering
                                     SYLLABUS


SUBJECT CODE: 142302

SUBJECT NAME: PRINCIPLES OF COMMUNICATION

UNIT I: FUNDAMENTALS OF ANALOG COMMUNICATION
Principles of amplitude modulation, AM envelope, frequency spectrum and bandwidth,
modulation index and percent modulation, AM Voltage distribution, AM power
distribution, Angle modulation - FM and PM waveforms, phase deviation and modulation
index, frequency deviation and percent modulation, Frequency analysis of angle
modulated waves. Bandwidth requirements for Angle modulated waves.


UNIT II DIGITAL COMMUNICATION
Introduction, Shannon limit for information capacity, digital amplitude modulation,
frequency shift keying, FSK bit rate and baud, FSK transmitter, BW consideration of
FSK, FSK receiver, phase shift keying – binary phase shift keying – QPSK, Quadrature
Amplitude modulation, bandwidth efficiency, carrier recovery – squaring loop, Costas
loop, DPSK.


UNIT III DIGITAL TRANSMISSION
Introduction, Pulse modulation, PCM – PCM sampling, sampling rate, signal to
quantization noise ratio, companding – analog and digital – percentage error, delta
modulation, adaptive delta modulation, differential pulse code modulation, pulse
transmission – Intersymbol interference, eye patterns.


UNIT IV SPREAD SPECTRUM AND MULTIPLE ACCESS TECHNIQUES
Introduction, Pseudo-noise sequence, DS spread spectrum with coherent binary PSK,
processing gain, FH spread spectrum, multiple access techniques – wireless
communication, TDMA and CDMA in wireless communication systems, source coding
of speech for wireless communications.
UNIT V SATELLITE AND OPTICAL COMMUNICATION
Satellite Communication Systems - Kepler's laws, LEO and GEO orbits, footprint, link
model - Optical Communication Systems - elements of an optical fiber transmission link,
types, losses, sources and detectors.


TEXT BOOKS:
1. Wayne Tomasi, "Advanced Electronic Communication Systems", 6/e, Pearson
Education, 2007.
2. Simon Haykin, "Communication Systems", 4th Edition, John Wiley & Sons, 2001.


REFERENCES:
1. H. Taub, D. L. Schilling, G. Saha, "Principles of Communication", 3/e, 2007.
2. B. P. Lathi, "Modern Analog and Digital Communication Systems", 3/e, Oxford
University Press, 2007.
3. Blake, "Electronic Communication Systems", Thomson Delmar Publications, 2002.
4. Martin S. Roden, "Analog and Digital Communication Systems", 3rd Edition, PHI,
2002.
5. B. Sklar, "Digital Communication: Fundamentals and Applications", 2/e, Pearson
Education, 2007.
            UNIT I: FUNDAMENTALS OF ANALOG COMMUNICATION



Introduction

       Analog communication is a data transmission technique that uses continuous
signals to carry information such as voice, images and video. An analog signal is a signal
continuous in both time and amplitude, and it is generally carried by means of
modulation.

       Unlike digital circuits, analog circuits do not involve quantization of information;
consequently, their primary disadvantage is random variation and signal degradation,
which adds noise to the audio or video quality as the signal travels over a distance.

       Data is represented by physical quantities that are added or removed to alter it.
Analog transmission is inexpensive and enables information to be transmitted point-
to-point or from one point to many. Once the data has arrived at the receiving end, it is
converted back into digital form so that it can be processed by the receiving computer.
Amplitude: It is the value of the signal at different instants of time. It is measured in
volts.

Frequency: It is the inverse of the time period, i.e. f = 1/T. The unit of frequency is Hertz
(Hz) or cycles per second.

Phase: It gives a measure of the relative position in time of two signals within a single
period. It is represented by φ in degrees or radian.

Principles of Amplitude Modulation

       In electronics, modulation is the process of varying one or more properties of a
high-frequency periodic waveform, called the carrier signal, with a modulating
signal which typically contains information to be transmitted. This is done in a similar
fashion to a musician modulating a tone (a periodic waveform) from a musical instrument
by varying its volume, timing and pitch. The purpose of modulation is usually to enable
the carrier signal to transport the information in the modulation signal to some
destination. At the destination, a process of demodulation extracts the modulation signal
from the modulated carrier. The three key parameters of a periodic waveform are
its amplitude ("volume"), its phase ("timing") and its frequency ("pitch"). Any of these
properties can be modified in accordance with a low frequency signal to obtain the
modulated signal. Typically a high-frequency sinusoidal waveform is used as the carrier
signal, but a square-wave pulse train may also be used.

       In telecommunications, modulation is the process of conveying a message signal,
for example a digital bit stream or an analog audio signal, inside another signal that can
be physically transmitted. Modulation of a sine waveform is used to transform
a baseband message signal into a pass-band signal, for example low-frequency audio
signal into a radio-frequency signal (RF signal). In radio communications, cable TV
systems or the public switched telephone network for instance, electrical signals can only
be transferred over a limited pass-band frequency spectrum, with specific (non-zero)
lower and upper cutoff frequencies. Modulating a sine-wave carrier makes it possible to
keep the frequency content of the transferred signal as close as possible to the centre
frequency (typically the carrier frequency) of the pass-band.
       A device that performs modulation is known as a modulator and a device that
performs the inverse operation of modulation is known as a demodulator. A device that
can do both operations is a modem (modulator–demodulator).

Amplitude:

       A sine wave is used to represent values of electrical current or voltage. The
greater its height, the greater the value it represents. As you have studied, a sine wave
alternately rises above and then falls below the reference line. That part above the line
represents a positive value and is referred to as a POSITIVE ALTERNATION. That part
of the cycle below the line has a negative value and is referred to as a NEGATIVE
ALTERNATION. The maximum value, above or below the reference line, is called the
PEAK AMPLITUDE. The value at any given point along the reference line is called the
INSTANTANEOUS AMPLITUDE.

       In the Microbroadcasting services, a reliable radio communication system is of
vital importance. The swiftly moving operations of modern communities require a
degree of coordination made possible only by radio. Today, the radio is standard
equipment in almost all vehicles, and the handie-talkie is a common sight in the
populace. Until recently, a-m (amplitude modulation) communication was used
universally. This system, however, has one great disadvantage: Random noise and other
interference can cripple communication beyond the control of the operator. In the a-m
receiver, interference has the same effect on the r-f signal as the intelligence being
transmitted because they are of the same nature and are inseparable.

Carrier:

       The r-f signal used to transmit intelligence from one point to another is called the
carrier. It consists of an electromagnetic wave having amplitude, frequency, and phase.
If the voltage variations of an r-f signal are graphed in respect to time, the result is a
waveform such as that in figure. This curve of an unmodulated carrier is the same as
those plotted for current or power variations, and it can be used to investigate the general
properties of carriers. The unmodulated carrier is a sine wave that repeats itself in
definite intervals of time. It swings first in the positive and then in the negative direction
about the time axis and represents changes in the amplitude of the wave. This action is
similar to that of alternating current in a wire, where these swings represent reversals in
the direction of current flow. It must be remembered that the plus and minus signs used
in the figure represent direction only. The starting point of the curve in the figure 2 is
chosen arbitrarily. It could have been taken at any other point just as well. Once a
starting point is chosen, however, it represents the point from which time is measured.
The starting point finds the curve at the top of its positive swing. The curve then swings
through 0 to some maximum amplitude in the negative direction, returning through 0 to
its original position. The changes in amplitude that take place in the interval of time then
are repeated exactly so long as the carrier remains unmodulated. A full set of values
occurring in any equal period of time, regardless of the starting point, constitutes one
cycle of the carrier. This can be seen in the figure, where two cycles with different
starting points are marked off. The number of these cycles that occur in 1 second is
called the frequency of the wave.
AM envelope
       The amplitude, phase, or frequency of a carrier can be varied in accordance with
the intelligence to be transmitted. The process of varying one of these characteristics is
called modulation. The three types of modulation, then are amplitude modulation, phase
modulation, and frequency modulation. Other special types, such as pulse modulation,
can be considered as subdivisions of these three types. With a sine-wave voltage used to
amplitude-modulate the carrier, the instantaneous amplitude of the carrier changes
constantly in a sinusoidal manner. The maximum amplitude that the wave reaches in
either the positive or the negative direction is termed the peak amplitude. The positive
and negative peaks are equal and the full swing of the cycle from the positive to the
negative peak is called the peak-to-peak amplitude. Considering the peak-to-peak
amplitude only, it can be said that the amplitude of this wave is constant. This is a
general amplitude characteristic of the unmodulated carrier. In amplitude modulation,
the peak-to-peak amplitude of the carier is varied in accordance with the intelligence to
be transmitted. For example, the voice picked up by a microphone is converted into an a-
f (audio-frequency) electrical signal which controls the peak-to-peak amplitude of the
carrier. A single sound at the microphone modulates the carrier, with the result shown in
figure 3. The carrier peaks are no longer constant in amplitude because they follow the instantaneous changes
in the amplitude of the a-f signal. When the a-f signal swings in the positive direction,
the carrier peaks are increased accordingly. When the a-f signal swings in the negative
direction, the carrier peaks are decreased. Therefore, the instantaneous amplitude of the
a-f modulating signal determines the peak-to-peak amplitude of the modulated carrier.
Percentage of Modulation:
       (1) In amplitude modulation, it is common practice to express the degree to
which a carrier is modulated as a percentage of modulation. When the peak-to-peak
amplitude of the modulating signal is equal to the peak-to-peak amplitude of the
unmodulated carrier, the carrier is said to be 100 percent modulated. In figure 4, the
peak-to-peak modulating voltage, EA, is equal to that of the carrier voltage, ER, and the
peak-to-peak amplitude of the carrier varies from 2ER, or 2EA, to 0. In other words, the
modulating signal swings far enough positive to double the peak-to-peak amplitude of the
carrier, and far enough negative to reduce the peak-to-peak amplitude of the carrier to 0.
       (2) If EA is less than ER, percentages of modulation below 100 percent occur. If
EA is one-half ER, the carrier is modulated only 50 percent (fig. 5). When the
modulating signal swings to its maximum value in the positive direction, the carrier
amplitude is increased by 50 percent. When the modulating signal reaches its maximum
negative peak value, the carrier amplitude is decreased by 50 percent.
       (3) It is possible to increase the percentage of modulation to a value greater than
100 percent by making EA greater than ER. In figure 6, the modulated carrier is varied
from 0 to some peak-to-peak amplitude greater than 2ER. Since the peak-to-peak
amplitude of the carrier cannot be less than 0, the carrier is cut off completely for all
negative values of EA greater than ER. This results in a distorted signal, and the
intelligence is received in a distorted form. Therefore, the percentage of modulation in a-
m systems of communication is limited to values from 0 to 100 percent.
  (4) The actual percentage of modulation of a carrier (M) can be calculated by using
the following simple formula M = percentage of modulation = ((Emax - Emin) / (Emax +
Emin)) * 100 where Emax is the greatest and Emin the smallest peak-to-peak amplitude of
the modulated carrier. For example, assume that a modulated carrier varies in its peak-to-
peak amplitude from 10 to 30 volts. Substituting in the formula, with Emax equal to 30
and Emin equal to 10, M = percentage of modulation = ((30 - 10) / (30 + 10)) * 100 = (20
/ 40) * 100 = 50 percent. This formula is accurate only for percentages between 0 and
100 percent.
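
As a quick check of this formula, here is a minimal Python sketch (the helper name
percent_modulation is just an illustrative choice) that reproduces the 50 percent result of
the worked example:

    # Percent modulation from the maximum and minimum peak-to-peak
    # amplitudes of the modulated envelope (valid for 0 to 100 percent)
    def percent_modulation(e_max, e_min):
        return (e_max - e_min) / (e_max + e_min) * 100.0

    print(percent_modulation(30, 10))   # 50.0, matching the example above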

Frequency Spectrum and Bandwidth

       The time domain representation displays a signal using time-domain plot, which
shows changes in signal amplitude with time. The time-domain plot can be visualized with
the help of an oscilloscope. The relationship between amplitude and frequency is provided by
frequency domain representation, which can be displayed with the help of spectrum analyser.
Time domain and frequency domain representations of three sine waves of three different
frequencies are shown in the figure.
   Although single sine waves are easy to describe in both time domain and frequency
domain representations, they are of little use in data communication. Composite signals
made of many simple sine waves find use in data communication. Any composite signal
can be represented by a combination of simple sine waves using Fourier analysis. For
example, the signal shown is a composition of two sine waves having frequencies f1 and
3f1 respectively, and it can be represented by
   S (t) = sin ωt + 1/3 sin 3ωt, where ω = 2πf1.
   The frequency domain function s (f) specifies the constituent frequencies of the
signal. The range of frequencies that a signal contains is known as its spectrum, which can
be visualized with the help of a spectrum analyzer. The band of frequencies over which
most of the energy of a signal is concentrated is known as the bandwidth of the signal.




          Figure: Time and frequency domain representations of a composite signal
Frequency Spectrum

Frequency spectrum of a signal is the range of frequencies that a signal contains.
Example: Consider a square wave shown in Fig. 2.1.8(a). It can be represented by a series
of sine waves S(t) = (4A/π) sin(2πft) + (4A/3π) sin(2π(3f)t) + (4A/5π) sin(2π(5f)t) + . . . having
frequency components f, 3f, 5f, … and amplitudes 4A/π, 4A/3π, 4A/5π and so on. The
frequency spectrum of this signal can be approximated by a version comprising only the
first and third harmonics, as shown in the figure below.
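
The harmonic build-up described above can be reproduced with a short Python/NumPy
sketch; the amplitude A, fundamental frequency and number of points below are arbitrary
illustrative values:

    import numpy as np

    A = 1.0                              # square-wave amplitude
    f = 1.0                              # fundamental frequency in Hz
    t = np.linspace(0.0, 2.0, 1000)      # two periods of the fundamental

    # Sum of the first n odd harmonics: 4A/(k*pi) * sin(2*pi*k*f*t), k = 1, 3, 5, ...
    def square_approx(t, n_harmonics):
        s = np.zeros_like(t)
        for i in range(n_harmonics):
            k = 2 * i + 1
            s += (4 * A / (k * np.pi)) * np.sin(2 * np.pi * k * f * t)
        return s

    first_and_third = square_approx(t, 2)   # keeps only the f and 3f components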




Figure (a) A square wave, (b) Frequency spectrum of a square wave
Bandwidth: The range of frequencies over which most of the energy of a signal is
contained is known as the bandwidth or effective bandwidth of the signal. The term 'most' is
somewhat arbitrary. Usually, it is defined in terms of the 3 dB cut-off frequencies. The
frequency spectrum and bandwidth of a signal are shown in Fig. 2.1.9. Here fl and fh
are the frequencies at which the amplitude falls 3 dB below (to A/√2 of) the maximum amplitude.




Figure Frequency spectrum and bandwidth of a signal


ANGLE MODULATION

       ANGLE MODULATION is modulation in which the angle of a sine-wave carrier
is varied by a modulating wave. FREQUENCY MODULATION (fm) and PHASE
MODULATION (pm) are two types of angle modulation. In frequency modulation the
modulating signal causes the carrier frequency to vary. These variations are controlled by
both the frequency and the amplitude of the modulating wave. In phase modulation the
phase of the carrier is controlled by the modulating waveform.

Frequency Modulation

       When a carrier is frequency-modulated by a modulating signal, the carrier
amplitude is held constant and the carrier frequency varies directly as the amplitude of
the modulating signal. There are limits of frequency deviation similar to the phase-
deviation limits in phase modulation. There is also an equivalent phase shift of the
carrier, similar to the equivalent frequency shift in p-m.
       A frequency-modulated wave resulting from 2 cycles of modulating signal
imposed on a carrier is shown in A of figure 16. When the modulating-signal amplitude
is 0, the carrier frequency does not change. As the signal swings positive, the carrier
frequency is increased, reaching its highest frequency at the positive peak of the
modulating signal. When the signal swings in the negative direction, the carrier
frequency is lowered, reaching a minimum when the signal passes through its peak
negative value. The f-m wave can be compared with the p-m wave, in B, for the same 2
cycles of modulating signal. If the p-m wave is shifted 90°, the two waves look alike.
Practically speaking, there is little difference, and an f-m receiver accepts both without
distinguishing between them. Direct phase modulation has limited use, however, and
most systems use some form of frequency modulation.
       In frequency modulation, the instantaneous frequency of the radio-frequency
wave is varied in accordance with the modulating signal, as shown in view (A) of figure.
As mentioned earlier, the amplitude is kept constant. This results in oscillations similar to
those illustrated in view (B). The number of times per second that the instantaneous
frequency is varied from the average (carrier frequency) is controlled by the frequency of
the modulating signal. The amount by which the frequency departs from the average is
controlled by the amplitude of the modulating signal. This variation is referred to as the
FREQUENCY DEVIATION of the frequency-modulated wave. We can now establish
two clear-cut rules for frequency deviation rate and amplitude in frequency modulation:




Figure Effect of frequency modulation on an RF carrier.

PERCENT OF MODULATION:

       Before we explain 100-percent modulation in an fm system, let's review the
conditions for 100-percent modulation of an AM wave. Recall that 100-percent
modulation for AM exists when the amplitude of the modulation envelope varies between
0 volts and twice its normal unmodulated value. At 100-percent modulation there is a
power increase of 50 percent. Because the modulating wave is not constant in voice
signals, the degree of modulation constantly varies. In this case the vacuum tubes in an
AM system cannot be operated at maximum efficiency because of varying power
requirements.

       In frequency modulation, 100-percent modulation has a meaning different from
that of AM. The modulating signal varies only the frequency of the carrier. Therefore,
tubes do not have varying power requirements and can be operated at maximum
efficiency and the fm signal has a constant power output. In fm a modulation of 100
percent simply means that the carrier is deviated in frequency by the full permissible
amount. For example, an 88.5-megahertz fm station operates at 100-percent modulation
when the modulating signal deviation frequency band is from 75 kilohertz above to 75
kilohertz below the carrier (the maximum allowable limits). This maximum deviation
frequency is set arbitrarily and will vary according to the applications of a given fm
transmitter. In the case given above, 50-percent modulation would mean that the carrier
was deviated 37.5 kilohertz above and below the resting frequency (50 percent of the
150-kilohertz band divided by 2). Other assignments for fm service may limit the
allowable deviation to 50 kilohertz, or even 10 kilohertz. Since there is no fixed value for
comparison, the term "percent of modulation" has little meaning for fm. The term
MODULATION INDEX is more useful in fm modulation discussions. Modulation index
is frequency deviation divided by the frequency of the modulating signal.

MODULATION INDEX:

       This ratio of frequency deviation to frequency of the modulating signal is useful
because it also describes the ratio of amplitude to tone for the audio signal. These factors
determine the number and spacing of the side frequencies of the transmitted signal. The
modulation index formula is shown below:

   modulation index (m) = frequency deviation (Δf) / frequency of the modulating signal (fm)
Views (A) and (B) of figure 2-9 show the frequency spectrum for various fm signals. In
the four examples of view (A), the modulating frequency is constant; the deviation
frequency is changed to show the effects of modulation indexes of 0.5, 1.0, 5.0, and 10.0.
In view (B) the deviation frequency is held constant and the modulating frequency is

varied to give the same modulation indexes.




           Figure: Frequency spectra of fm waves under various conditions.



       You can determine several facts about fm signals by studying the frequency
spectrum. For example, table 2-1 was developed from the information in figure 2-9.
Notice in the top spectrums of both views (A) and (B) that the modulation index is 0.5.
Also notice as you look at the next lower spectrums that the modulation index is 1.0.
Next down is 5.0, and finally, the bottom spectrums have modulation indexes of 10.0.
This information was used to develop table 2-1 by listing the modulation indexes in the
left column and the number of significant sidebands in the right. SIGNIFICANT
SIDEBANDS (those with significantly large amplitudes) are shown in both views of
figure 2-9 as vertical lines on each side of the carrier frequency. Actually, an infinite
number of sidebands are produced, but only a small portion of them are of sufficient
amplitude to be important. For example, for a modulation index of 0.5 [top spectrums of
both views (A) and (B)], the number of significant sidebands counted is 4. For the next
spectrums down, the modulation index is 1.0 and the number of sidebands is 6, and so
forth. This holds true for any combination of deviating and modulating frequencies that
yield identical modulation indexes.


               MODULATION INDEX SIGNIFICANT SIDEBANDS
               .01                        2
               .4                         2
               .5                         4
               1.0                        6
               2.0                        8
               3.0                        12
               4.0                        14
               5.0                        16
               6.0                        18
               7.0                        22
               8.0                        24
               9.0                        26
               10.0                       28
               11.0                       32
               12.0                       32
               13.0                       36
               14.0                       38
               15.0                       38

                             Table: Modulation index table
You should be able to see by studying figure 2-9, views (A) and (B), that the modulating
frequency determines the spacing of the sideband frequencies. By using a significant
sidebands table (such as table 2-1), you can determine the bandwidth of a given fm
signal. Figure 2-10 illustrates the use of this table. The carrier frequency shown is 500
kilohertz. The modulating frequency is 15 kilohertz and the deviation frequency is 75
kilohertz.
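
The figure's calculation can be sketched in Python. The dictionary below copies part of
table 2-1, and the function assumes the deviation divided by the modulating frequency
lands exactly on a tabulated index (as it does for the 75 kHz deviation and 15 kHz
modulating frequency above, giving 16 sidebands and a 240 kHz bandwidth):

    # A subset of table 2-1: modulation index -> total significant sidebands
    SIGNIFICANT_SIDEBANDS = {0.5: 4, 1.0: 6, 2.0: 8, 3.0: 12, 4.0: 14,
                             5.0: 16, 6.0: 18, 7.0: 22, 8.0: 24, 9.0: 26, 10.0: 28}

    def fm_bandwidth(deviation_hz, modulating_hz):
        mi = deviation_hz / modulating_hz            # modulation index
        sidebands = SIGNIFICANT_SIDEBANDS[mi]        # table lookup (exact match assumed)
        return sidebands * modulating_hz             # sidebands are spaced by fm

    print(fm_bandwidth(75e3, 15e3))                  # 240000.0 Hz, as in the figure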




                    Figure: Frequency deviation versus bandwidth.



PHASE MODULATION
       Frequency modulation requires the oscillator frequency to deviate both above and
below the carrier frequency. During the process of frequency modulation, the peaks of
each successive cycle in the modulated waveform occur at times other than they would if
the carrier were unmodulated. This is actually an incidental phase shift that takes place
along with the frequency shift in fm. Just the opposite action takes place in phase
modulation. The af signal is applied to a PHASE MODULATOR in pm. The resultant
wave from the phase modulator shifts in phase, as illustrated in figure 2-17. Notice that
the time period of each successive cycle varies in the modulated wave according to the
audio-wave variation. Since frequency is a function of time period per cycle, we can see
that such a phase shift in the carrier will cause its frequency to change. The frequency
change in fm is vital, but in pm it is merely incidental. The amount of frequency change
has nothing to do with the resultant modulated wave shape in pm. At this point the
comparison of fm to pm may seem a little hazy, but it will clear up as we progress.




                               Figure: Phase modulation.
           Let's review some voltage phase relationships. Look at figure-a and compare the
three voltages (A, B, and C). Since voltage A begins its cycle and reaches its peak before
voltage B, it is said to lead voltage B. Voltage C, on the other hand, lags voltage B by 30
degrees. In phase modulation the phase of the carrier is caused to shift at the rate of the
modulating signal. In figure-b, note that the unmodulated carrier has constant phase,
amplitude, and frequency. The dotted wave shape represents the modulated carrier.
Notice that the phase on the second peak leads the phase of the unmodulated carrier. On
the third peak the shift is even greater; however, on the fourth peak, the peaks begin to
realign phase with each other. These relationships represent the effect of 1/2 cycle of a
modulating signal. On the negative alternation of the intelligence, the phase of the carrier
would lag and the peaks would occur at times later than they would in the unmodulated
carrier.

                                Figure-a: Phase relationships.




                        Figure-b: Carrier with and without modulation.




           The presentation of these two waves together does not mean that we transmit a
modulated wave together with an unmodulated carrier. The two waveforms were drawn
together only to show how a modulated wave looks when compared to an unmodulated
wave.

        Now that you have seen the phase and frequency shifts in both fm and pm, let's
find out exactly how they differ. First, only the phase shift is important in pm. It is
proportional to the af modulating signal. To visualize this relationship, refer to the wave
shapes shown in figure 2-20. Study the composition of the fm and pm waves carefully as
they are modulated with the modulating wave shape. Notice that in fm, the carrier
frequency deviates when the modulating wave changes polarity. With each alternation of
the modulating wave, the carrier advances or retards in frequency and remains at the new
frequency for the duration of that cycle. In pm you can see that between one alternation
and the next, the carrier phase must change, and the frequency shift that occurs does so
only during the transition time; the frequency then returns to its normal rate. Note in the
pm wave that the frequency shift occurs only when the modulating wave is changing
polarity. The frequency during the constant amplitude portion of each alternation is the
REST FREQUENCY.

                                 Figure-c: PM versus FM
The relationship, in PM, of the modulating signal to the change in phase shift is easy to see
once you understand AM and FM principles. Again, we can establish two clear-cut rules
of Phase Modulation:


1. AMOUNT OF PHASE SHIFT IS PROPORTIONAL TO THE AMPLITUDE OF THE
MODULATING SIGNAL.
(If a 10-volt signal causes a phase shift of 20 degrees, then a 20-volt signal causes a phase
shift of 40 degrees.)


2. RATE OF PHASE SHIFT IS PROPORTIONAL TO THE FREQUENCY OF THE
MODULATING SIGNAL.
(If the carrier were modulated with a 1-kilohertz tone, the carrier would advance and
retard in phase 1,000 times each second.)
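
Both rules follow from writing the phase-modulated carrier as cos(2πfc t + kp·m(t)), where
the instantaneous phase offset is proportional to the modulating signal m(t). A minimal
NumPy sketch, with the carrier frequency, sensitivity kp and tone values chosen purely for
illustration:

    import numpy as np

    fs = 100_000                          # sample rate, Hz
    t = np.arange(0, 0.01, 1 / fs)        # 10 ms of signal
    fc = 10_000                           # carrier frequency, Hz
    kp = np.deg2rad(20) / 10.0            # rule 1: 20 degrees of shift per 10 volts

    m = 10 * np.sin(2 * np.pi * 1_000 * t)        # 1 kHz, 10 V modulating tone
    pm = np.cos(2 * np.pi * fc * t + kp * m)      # rule 2: the phase advances and
                                                  # retards 1,000 times each second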


       Phase modulation is also similar to frequency modulation in the number of
sidebands that exist within the modulated wave and the spacing between sidebands.
Phase modulation will also produce an infinite number of sideband frequencies. The
spacing between these sidebands will be equal to the frequency of the modulating signal.
However, one factor is very different in phase modulation; that is, the distribution of
power in pm sidebands is not similar to that in fm sidebands, as will be explained in the
next section.

Modulation Index:

Recall from frequency modulation that modulation index is used to calculate the number
of significant sidebands existing in the waveform. The higher the modulation index, the
greater the number of sideband pairs. The modulation index is the ratio between the
amount of oscillator deviation and the frequency of the modulating signal.
In frequency modulation, we saw that as the frequency of the modulating signal increased
(assuming the deviation remained constant) the number of significant sideband pairs
decreased. This is shown in views (A) and (B) of the below figure. Notice that although
the total number of significant sidebands decreases with a higher frequency-modulating
signal, the sidebands spread out relative to each other; the total bandwidth increases.

                      Figure: FM versus PM Spectrum Distribution.




       In phase modulation the oscillator does not deviate, and the power in the
sidebands is a function of the amplitude of the modulating signal. Therefore, two signals,
one at 5 kilohertz and the other at 10 kilohertz, used to modulate a carrier would have the
same sideband power distribution. However, the 10-kilohertz sidebands would be farther
apart, as shown in views (C) and (D) of the above diagram. When compared to fm, the
bandwidth of the pm transmitted signal is greatly increased as the frequency of the
modulating signal is increased.
       Phase modulation cannot occur without an incidental change in frequency, nor
can frequency modulation occur without an incidental change in phase. The term fm is
loosely used when referring to any type of angle modulation, and phase modulation is
sometimes incorrectly referred to as "indirect fm." This is a definition that you should
disregard to avoid confusion. Phase modulation is just what the words imply - phase
modulation of a carrier by a modulating signal.
UNIT II DIGITAL COMMUNICATION


We define communication as information transfer between different points in space or
time, where the term information is loosely employed to cover standard formats that we
are all familiar with, such as voice, audio, video, data files, web pages, etc. Examples of
communication between two points in space include a telephone conversation, accessing
an Internet website from our home or office computer, or tuning in to a TV or radio
station. Examples of communication between two points in time include accessing a
storage device, such as a record, CD, DVD, or hard drive.


In the preceding examples, the information transferred is directly available for human
consumption. However, there are many other communication systems, which we do not
directly experience, but which form a crucial part of the infrastructure that we rely upon
in our daily lives. Examples include high-speed packet transfer between routers on the
Internet, inter- and intra-chip communication in integrated circuits, the connections
between computers and computer peripherals (such as keyboards and printers), and
control signals in communication networks.


In digital communication, the information being transferred is represented in digital form,
most commonly as binary digits, or bits. This is in contrast to analog information, which
takes on a continuum of values. Most communication systems used for transferring
information today are either digital, or are being converted from analog to digital.
Examples of some recent conversions that directly impact consumers include cellular
telephony (from analog FM to several competing digital standards), music storage (from
vinyl records to CDs), and video storage (from VHS or beta tapes to DVDs).


Shannon limit for information capacity


Shannon–Hartley theorem In information theory, the Shannon–Hartley theorem is an
application of the noisy channel coding theorem to the archetypal case of a continuous-
time analog communications channel subject to Gaussian noise. The theorem establishes
Shannon's channel capacity for such a communication link, a bound on the maximum
amount of error-free digital data (that is, information) that can be transmitted with a
specified bandwidth in the presence of the noise interference, under the assumption that
the signal power is bounded and the Gaussian noise process is characterized by a known
power or power spectral density. The law is named after Claude Shannon and Ralph
Hartley.

Statement of the theorem: Considering all possible multi-level and multi-phase encoding
techniques, the Shannon–Hartley theorem states that the channel capacity C, meaning the
theoretical tightest upper bound on the rate of clean (or arbitrarily low bit error rate) data
that can be sent with a given average signal power S through an analog communication
channel subject to additive white Gaussian noise of power N, is

   C = B log2(1 + S/N)

where C is the channel capacity in bits per second; B is the bandwidth of the channel in
hertz; S is the total signal power over the bandwidth, measured in watts (or volts squared);
N is the total noise power over the bandwidth, measured in watts (or volts squared); and
S/N is the signal-to-noise ratio (SNR) or the carrier-to-noise ratio (CNR) of the
communication signal to the Gaussian noise interference, expressed as a linear power
ratio (not as logarithmic decibels).
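
As a quick numerical illustration of the capacity formula above (the channel figures are
arbitrary, roughly telephone-like values):

    import math

    def channel_capacity(bandwidth_hz, snr_linear):
        # Shannon-Hartley: C = B * log2(1 + S/N), with S/N as a linear power ratio
        return bandwidth_hz * math.log2(1 + snr_linear)

    snr = 10 ** (30 / 10)                    # 30 dB converted to a linear ratio (1000)
    print(channel_capacity(3100, snr))       # about 30,898 bits per second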


In 1927, Nyquist determined that the number of independent pulses that could be put
through a telegraph channel per unit time is limited to twice the bandwidth of the
channel. In symbols, fp ≤ 2B, where fp is the pulse frequency (in pulses per second) and B is
the bandwidth (in hertz). The quantity 2B later came to be called the Nyquist rate, and
transmitting at the limiting pulse rate of 2B pulses per second became known as signalling
at the Nyquist rate. Nyquist published his results in 1928 as part of his paper "Certain
Topics in Telegraph Transmission Theory."

Hartley's law: During that same year, Hartley
formulated a way to quantify information and its rate of transmission across a
communications channel. This method, later known as Hartley's law, became an
important precursor for Shannon's more sophisticated notion of channel capacity. Hartley
argued that the maximum number of distinct pulses that can be transmitted and received
reliably over a communications channel is limited by the dynamic range of the signal
amplitude and the precision with which the receiver can distinguish amplitude levels.
Specifically, if the amplitude of the transmitted signal is restricted to the range of [ –A ...
+A ] volts, and the precision of the receiver is ±ΔV volts, then the maximum number of
distinct pulses M is given by M = 1 + A/ΔV. By taking information per pulse in bit/pulse to
be the base-2 logarithm of the number of distinct messages M that could be sent, Hartley
constructed a measure of the information rate R as R = fp log2(M), where fp is the pulse rate,
also known as the symbol rate, in symbols/second or baud. Hartley then combined the above quantification
with Nyquist's observation that the number of independent pulses that could be put
through a channel of bandwidth B hertz was 2B pulses per second, to arrive at his
quantitative measure for achievable information rate.
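
A small Python sketch of Hartley's construction, assuming the reconstructed forms
M = 1 + A/ΔV and R = fp log2(M) given above together with the Nyquist pulse rate fp = 2B:

    import math

    def hartley_rate(bandwidth_hz, amplitude_v, precision_v):
        M = 1 + amplitude_v / precision_v    # distinguishable amplitude levels
        fp = 2 * bandwidth_hz                # Nyquist pulse rate, pulses per second
        return fp * math.log2(M)             # achievable information rate, bit/s

    print(hartley_rate(3000, 1.0, 0.1))      # 3 kHz channel, +/-1 V swing, 0.1 V precision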


Hartley's law is sometimes quoted as just a proportionality between the analog
bandwidth, B, in hertz and what today is called the digital bandwidth, R, in bit/s. Other
times it is quoted in this more quantitative form, as an achievable information rate of R
bits per second: R ≤ 2B log2(M). Hartley did not work out exactly how the number M should depend on
the noise statistics of the channel, or how the communication could be made reliable even
when individual symbol pulses could not be reliably distinguished to M levels; with
Gaussian noise statistics, system designers had to choose a very conservative value of M
to achieve a low error rate.


The concept of an error-free capacity awaited Claude Shannon, who built on Hartley's
observations about a logarithmic measure of information and Nyquist's observations
about the effect of bandwidth limitations. Hartley's rate result can be viewed as the
capacity of an errorless M-ary channel of 2B symbols per second. Some authors refer to it
as a capacity. But such an errorless channel is an idealization, and the result is necessarily
less than the Shannon capacity of the noisy channel of bandwidth B, which is the
Hartley–Shannon result that followed later.

Noisy channel coding theorem and capacity: Claude Shannon's development of information
theory during World War II provided the
next big step in understanding how much information could be reliably communicated
through noisy channels.


Building on Hartley's foundation, Shannon's noisy channel coding theorem (1948)
describes the maximum possible efficiency of error-correcting methods versus levels of
noise interference and data corruption. The proof of the theorem shows that a randomly
constructed error correcting code is essentially as good as the best possible code; the
theorem is proved through the statistics of such random codes. Shannon's theorem shows
how to compute a channel capacity from a statistical description of a channel, and
establishes that given a noisy channel with capacity C and information transmitted at a
rate R, then if R < C there exists a coding technique which allows the probability of error at
the receiver to be made arbitrarily small. This means that, theoretically, it is possible to
transmit information nearly without error at any rate up to the limit of C bits per second.
The converse is also important: if R > C, the probability of error at the receiver increases
without bound as the rate is increased, so no useful information can be transmitted beyond
the channel capacity. The theorem does not address the rare situation in which rate and
capacity are equal.

Shannon–Hartley theorem: The Shannon–Hartley theorem establishes
what that channel capacity is for a finite-bandwidth continuous-time channel subject to
Gaussian noise. It connects Hartley's result with Shannon's channel capacity theorem in a
form that is equivalent to specifying the M in Hartley's information rate formula in terms
of a signal-to-noise ratio, but achieving reliability through error-correction coding rather
than through reliably distinguishable pulse levels. If there were such a thing as an
infinite-bandwidth, noise-free analog channel, one could transmit unlimited amounts of
error-free data over it per unit of time.


Real channels, however, are subject to limitations imposed by both finite bandwidth and
nonzero noise. So how do bandwidth and noise affect the rate at which information can
be transmitted over an analog channel? Surprisingly, bandwidth limitations alone do not
impose a cap on maximum information rate. This is because it is still possible for the
signal to take on an indefinitely large number of different voltage levels on each symbol
pulse, with each slightly different level being assigned a different meaning or bit
sequence. If we combine both noise and bandwidth limitations, however, we do find there
is a limit to the amount of information that can be transferred by a signal of a bounded
power, even when clever multi-level encoding techniques are used. In the channel
considered by the Shannon-Hartley theorem, noise and signal are combined by addition.
That is, the receiver measures a signal that is equal to the sum of the signal encoding the
desired information and a continuous random variable that represents the noise. This
addition creates uncertainty as to the original signal's value. If the receiver has some
information about the random process that generates the noise, one can in principle
recover the information in the original signal by considering all possible states of the
noise process.


In the case of the Shannon-Hartley theorem, the noise is assumed to be generated by a
Gaussian process with a known variance. Since the variance of a Gaussian process is
equivalent to its power, it is conventional to call this variance the noise power. Such a
channel is called the Additive White Gaussian Noise channel, because Gaussian noise is
added to the signal; "white" means equal amounts of noise at all frequencies within the
channel bandwidth. Such noise can arise both from random sources of energy and also
from coding and measurement error at the sender and receiver respectively. Since sums
of independent Gaussian random variables are themselves Gaussian random variables,
this conveniently simplifies analysis, if one assumes that such error sources are also
Gaussian and independent.


Digital amplitude modulation


The transmission of digital signals is increasing at a rapid rate. Low-frequency analogue
signals are often converted to digital format (PCM) before transmission. The source
signals are generally referred to as baseband signals. Of course, we can send analogue
and digital signals directly over a medium. From electro-magnetic theory, for efficient
radiation of electrical energy from an antenna it must be at least in the order of magnitude
of a wavelength in size; c = fλ, where c is the velocity of light, f is the signal frequency
and λ is the wavelength. For a 1kHz audio signal, the wavelength is 300 km. An
antenna of this size is not practical for efficient transmission. The low-frequency signal
is often frequency-translated to a higher frequency range for efficient transmission. The
process is called modulation. The use of a higher frequency range reduces antenna size.
In the modulation process, the baseband signals constitute the modulating signal and the
high-frequency carrier signal is a sinusoidal waveform. There are three basic ways of
modulating a sine wave carrier. For binary digital modulation, they are called binary
amplitude-shift keying (BASK), binary frequency-shift keying (BFSK) and binary
phase-shift keying (BPSK). Modulation also leads to the possibility of frequency
multiplexing.
In a frequency-multiplexed system, individual signals are transmitted over adjacent,
nonoverlapping frequency bands.        They are therefore transmitted in parallel and
simultaneously in time. If we operate at higher carrier frequencies, more bandwidth is
available for frequency-multiplexing more signals.


Amplitude-shift keying (ASK) is a form of modulation that represents digital data as
variations in the amplitude of a carrier wave.
The amplitude of an analog carrier signal varies in accordance with the bit stream
(modulating signal), keeping frequency and phase constant. The level of amplitude can be
used to represent binary logic 0s and 1s. We can think of a carrier signal as an ON or
OFF switch. In the modulated signal, logic 0 is represented by the absence of a carrier,
thus giving OFF/ON keying operation and hence the name given.


Like AM, ASK is also linear and sensitive to atmospheric noise, distortions, propagation
conditions on different routes in PSTN, etc. Both ASK modulation and demodulation
processes are relatively inexpensive. The ASK technique is also commonly used to
transmit digital data over optical fiber. For LED transmitters, binary 1 is represented by a
short pulse of light and binary 0 by the absence of light. Laser transmitters normally have
a fixed "bias" current that causes the device to emit a low light level. This low level
represents binary 0, while a higher-amplitude lightwave represents binary 1.


The simplest and most common form of ASK operates as a switch, using the presence of
a carrier wave to indicate a binary one and its absence to indicate a binary zero. This type
of modulation is called on-off keying, and is used at radio frequencies to transmit Morse
code (referred to as continuous wave operation).
More sophisticated encoding schemes have been developed which represent data in
groups using additional amplitude levels. For instance, a four-level encoding scheme can
represent two bits with each shift in amplitude; an eight-level scheme can represent three
bits; and so on. These forms of amplitude-shift keying require a high signal-to-noise
ratio for their recovery, as by their nature much of the signal is transmitted at reduced
power.
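
As a concrete sketch of such a scheme, the following Python fragment maps pairs of bits
onto four evenly spaced carrier amplitudes; the level values, carrier frequency and sample
rate are arbitrary illustrative choices, not a specific standard:

    import numpy as np

    # Four-level ASK: each pair of bits selects one of four carrier amplitudes
    LEVELS = {(0, 0): 0.25, (0, 1): 0.50, (1, 0): 0.75, (1, 1): 1.00}

    def four_level_ask(bits, fc=4_000, fs=32_000, samples_per_symbol=32):
        t = np.arange(samples_per_symbol) / fs
        carrier = np.sin(2 * np.pi * fc * t)
        symbols = zip(bits[0::2], bits[1::2])          # group the bits in pairs
        return np.concatenate([LEVELS[s] * carrier for s in symbols])

    waveform = four_level_ask([0, 0, 1, 1, 0, 1, 1, 0])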
Here is a diagram showing the ideal model for a transmission system using an ASK
modulation:




It can be divided into three blocks. The first one represents the transmitter, the second
one is a linear model of the effects of the channel, the third one shows the structure of the
receiver. The following notation is used:
        ht(t) is the carrier signal for the transmission
        hc(t) is the impulse response of the channel
        n(t) is the noise introduced by the channel
        hr(t) is the filter at the receiver
        L is the number of levels that are used for transmission
        Ts is the time between the generation of two symbols
Different symbols are represented with different voltages. If the maximum allowed value
for the voltage is A, then all the possible values are in the range [−A, A] and, for L equally
spaced levels, they are given by:

   vi = (2A / (L − 1)) i − A,   for i = 0, 1, ..., L − 1

the difference between one voltage and the next is:

   Δ = 2A / (L − 1)
Considering the picture, the symbols v[n] are generated randomly by the source S, then
the impulse generator creates impulses with an area of v[n]. These impulses are sent to
the filter ht to be sent through the channel. In other words, for each symbol a different
carrier wave is sent with the relative amplitude.
Out of the transmitter, the signal s(t) can be expressed in the form:

   s(t) = Σn v[n] · ht(t − n Ts)

In the receiver, after the filtering through hr(t), the signal is:

   z(t) = nr(t) + Σn v[n] · g(t − n Ts)
where we use the notation:
nr(t) = n(t) * hr(t)
g(t) = ht(t) * hc(t) * hr(t)
where * indicates the convolution between two signals. After the A/D conversion the
signal z[k] can be expressed in the form:

   z[k] = nr[k Ts] + v[k] g(0) + Σ (n ≠ k) v[n] g((k − n) Ts)
In this relationship, the second term represents the symbol to be extracted. The others are
unwanted: the first is the effect of noise, and the third is due to the intersymbol
interference.
If the filters are chosen so that g(t) will satisfy the Nyquist ISI criterion, then there will be
no intersymbol interference and the value of the sum will be zero, so:
z[k] = nr[k] + v[k]g[0]
the transmission will be affected only by noise.
Probability of error
The probability density function of having an error of a given size can be modelled by
a Gaussian function; the mean value will be the relative sent value, and its variance will
be given by:

   σN² = ∫ ΦN(f) · |Hr(f)|² df



where ΦN(f) is the spectral density of the noise within the band and Hr(f) is
the continuous Fourier transform of the impulse response of the filter hr(t).
The probability of making an error is given by:

   Pe = Σk P(e | vk) · P(vk)

where, for example, P(e | v0) is the conditional probability of making an error given that a
symbol v0 has been sent and P(v0) is the probability of sending the symbol v0.
If the probability of sending any symbol is the same, then:



If we represent all the probability density functions on the same plot against the possible
value of the voltage to be transmitted, we get a picture like this (the particular case
of L = 4 is shown):




The probability of making an error after a single symbol has been sent is the area of the
Gaussian function falling under the functions for the other symbols. It is shown in cyan
for just one of them. If we call P+ the area under one side of the Gaussian, the sum of all
the areas will be 2LP+ − 2P+. The total probability of making an error can be
expressed in the form:

   Pe = 2 (1 − 1/L) P+



We have now to calculate the value of P+. In order to do that, we can move the origin of
the reference wherever we want: the area below the function will not change. We are in a
situation like the one shown in the following picture:
it does not matter which Gaussian function we are considering, the area we want to
calculate will be the same. The value we are looking for will be given by the following
integral:

   P+ = (1/2) erfc( A g(0) / ( √2 (L − 1) σN ) )

where erfc() is the complementary error function. Putting all these results together, the
probability of making an error is:

   Pe = ( (L − 1) / L ) erfc( A g(0) / ( √2 (L − 1) σN ) )




from this formula we can easily understand that the probability to make an error
decreases if the maximum amplitude of the transmitted signal or the amplification of the
system becomes greater; on the other hand, it increases if the number of levels or the
power of noise becomes greater.
This relationship is valid when there is no intersymbol interference, i.e. g(t) is a Nyquist
function.
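
Those trends can be checked numerically with the error expression reconstructed above;
the sketch below is only illustrative, since the exact numbers depend on the assumed
filter gain g(0) and noise standard deviation:

    import math

    def ask_error_probability(A, g0, L, sigma_n):
        # Pe = ((L - 1) / L) * erfc( A g(0) / (sqrt(2) (L - 1) sigma_N) )
        arg = (A * g0) / (math.sqrt(2) * (L - 1) * sigma_n)
        return (L - 1) / L * math.erfc(arg)

    # A larger amplitude lowers Pe; more levels or more noise raises it
    print(ask_error_probability(A=1.0, g0=1.0, L=4, sigma_n=0.1))
    print(ask_error_probability(A=1.0, g0=1.0, L=8, sigma_n=0.1))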
Amplitude shift keying:




ASK - in the context of digital communications is a modulation
process, which imparts to a sinusoid two or more discrete amplitude levels. These are
related to the number of levels adopted by the digital message.
For a binary message sequence there are two levels, one of which is typically zero.
Thus the modulated waveform consists of bursts of a sinusoid.
Figure 1 illustrates a binary ASK signal (lower), together with the binary sequence which
initiated it (upper). Neither signal has been bandlimited.




Figure 1: an ASK signal (below) and the message (above)
There are sharp discontinuities shown at the transition points. These result in the signal
having an unnecessarily wide bandwidth. Bandlimiting is generally introduced before
transmission, in which case these discontinuities would be 'rounded off'. The
bandlimiting may be applied to the digital message, or the modulated signal itself. The
data rate is often made a sub-multiple of the carrier frequency. This has been done in
the waveform of Figure 1.
One of the disadvantages of ASK, compared with FSK and PSK, for example, is that it
has not got a constant envelope. This makes its processing (eg, power amplification)
more difficult, since linearity becomes an important factor. However, it does make for
ease of demodulation with an envelope detector.

Intro to Bandwidth Modification: As already indicated, the sharp discontinuities in the ASK
waveform of Figure 1 imply a wide bandwidth. A significant reduction can be accepted
before errors at the receiver increase unacceptably. This can be brought about by
bandlimiting (pulse shaping) the message before modulation, or bandlimiting the ASK
signal itself after generation.




Figure 2: ASK generation method
Figure 3 shows the signals present in a model of Figure 2, where the message has been
bandlimited.    The shape, after bandlimiting, depends naturally enough upon the
amplitude and phase characteristics of the bandlimiting filter.
Figure 3: original TTL message (lower), bandlimited message (center),
and ASK (above)
Intro to Demodulation: It is apparent from Figures 1 and 4 that the ASK signal has a well
defined envelope. Thus it is amenable to demodulation by an envelope detector. With
bandlimiting of the transmitted ASK, neither of these demodulation methods (envelope
detection or synchronous demodulation) would recover the original binary sequence;
instead, their outputs would be a bandlimited version. Thus further processing, by some
sort of decision-making circuitry for example, would be necessary.
Thus demodulation is a two-stage process:
1. recovery of the bandlimited bit stream
2. regeneration of the binary bit stream
Figure 4 illustrates.




Figure 4: the two stages of the demodulation process
Modeling an ASK Generator
It is possible to model the rather basic generator shown in Figure 2.
The switch can be modeled by one half of a DUAL ANALOG SWITCH module. Being
an analog switch, the carrier frequency would need to be in the audio range. The TTL
output from the SEQUENCE GENERATOR is connected directly to the CONTROL
input of the DUAL ANALOG SWITCH. For a synchronous carrier and message use the
8.333 kHz TTL sample clock (filtered by a TUNEABLE LPF) and the 2.083 kHz
sinusoidal message from the MASTER SIGNALS module. If you need the TUNEABLE
LPF for bandlimiting of the ASK, use the sinusoidal output from an AUDIO
OSCILLATOR as the carrier. For a synchronized message as above, tune the oscillator
close to 8.333 kHz, and lock it there with the sample clock connected to its SYNCH
input.
This arrangement is shown modeled in Figure 5.




Figure 5: modeling ASK with the arrangement of Figure 2
Demodulation of an ASK signal
Having a very definite envelope, an envelope detector can be used as the first step in
recovering the original sequence. Further processing can be employed to regenerate the
true binary waveform. Figure 6 is a model for envelope recovery from a baseband ASK
signal.
Figure 6: envelope demodulation of baseband ASK
The output from the above demodulators will not be a copy of the binary sequence TTL
waveform. Bandlimiting will have shaped it, as (for example) illustrated in Figure 3.
If the ASK has been bandlimited before or during transmission (or even by the receiver
itself) then the recovered message, in the demodulator, will need restoration ('cleaning
up') to its original bi-polar format.
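
A minimal software version of this envelope-detector-plus-decision idea, assuming a known
number of samples per bit and a simple half-maximum threshold (both arbitrary choices
for illustration):

    import numpy as np

    def envelope_detect(ask_signal, samples_per_bit):
        rectified = np.abs(ask_signal)                        # full-wave rectification
        window = np.ones(samples_per_bit) / samples_per_bit   # crude low-pass filter
        envelope = np.convolve(rectified, window, mode="same")
        mids = envelope[samples_per_bit // 2::samples_per_bit]  # sample mid-bit
        threshold = envelope.max() / 2                        # simple decision level
        return (mids > threshold).astype(int)                 # regenerated bit stream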


FSK Generation:
As its name suggests, a frequency shift keyed transmitter has its frequency shifted by the
message. Although there could be more than two frequencies involved in an FSK signal,
in this experiment the message will be a binary bit stream, and so only two frequencies
will be involved. The word 'keyed' suggests that the message is of the 'on-off' (mark-
space) variety, such as one (historically) generated by a Morse key, or more likely in the
present context, a binary sequence. The output from such a generator is illustrated in
Figure 1 below.




Conceptually, and in fact, the transmitter could consist of two oscillators (on frequencies
f1 and f2), with only one being connected to the output at any one time. This is shown in
block diagram form in Figure 2 below.




Unless there are special relationships between the two oscillator frequencies and the bit
clock there will be abrupt phase discontinuities of the output waveform during transitions
of the message.


Bandwidth:
Practice is for the tones f1 and f2 to bear special inter-relationships, and to be integer
multiples of the bit rate. This leads to the possibility of continuous phase, which offers
advantages, especially with respect to bandwidth control. Alternatively the frequency of a
single oscillator (VCO) can be switched between two values, thus guaranteeing
continuous phase - CPFSK. The continuous phase advantage of the VCO is not
accompanied by an ability to ensure that f1 and f2 are integer multiples of the bit rate.
This would be difficult (impossible ?) to implement with a VCO.


FSK signals can be generated at baseband, and transmitted over telephone lines (for
example). In this case, both f1 and f2 (of Figure 2) would be audio frequencies.
Alternatively, this signal could be translated to a higher frequency. Yet again, it may be
generated directly at 'carrier' frequencies.
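
A VCO-style continuous-phase FSK generator can be sketched in Python with a phase
accumulator, which guarantees that no phase discontinuity occurs at bit transitions; the
tone frequencies, bit rate and sample rate below are illustrative only:

    import numpy as np

    def cpfsk(bits, f_space=1_200, f_mark=2_200, bit_rate=300, fs=48_000):
        samples_per_bit = fs // bit_rate
        # Instantaneous frequency for every output sample
        freq = np.repeat([f_mark if b else f_space for b in bits], samples_per_bit)
        # Integrating frequency into phase keeps the phase continuous at transitions
        phase = 2 * np.pi * np.cumsum(freq) / fs
        return np.cos(phase)

    signal = cpfsk([1, 0, 1, 1, 0])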
Demodulation of FSK:
There are different methods of demodulating FSK. A natural classification is into
synchronous (coherent) or asynchronous (non-coherent). Representative demodulators of
these two types are the following:
Asynchronous Demodulator:
A close look at the waveform of Figure 1 reveals that it is the sum of two amplitude
shift keyed (ASK) signals. The receiver of Figure 3 takes advantage of this. The FSK
signal has been separated into two parts by bandpass filters (BPF) tuned to the MARK
and SPACE frequencies.




The output from each BPF looks like an amplitude shift keyed (ASK) signal. These can
be demodulated asynchronously, using the envelope. The decision circuit, to which the
outputs of the envelope detectors are presented, selects the output which is the most
likely one of the two inputs. It also re-shapes the waveform from a bandlimited to a
rectangular form. This is, in effect, a two channel receiver. The bandwidth of each is
dependent on the message bit rate. There will be a minimum frequency separation
required of the two tones.
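
A simplified software stand-in for this two-filter receiver measures the energy near each
tone over every bit interval and picks the larger; correlating against complex exponentials
plays the role of the bandpass-filter/envelope-detector pairs (all parameters are illustrative
and match the cpfsk() sketch above):

    import numpy as np

    def fsk_demodulate(signal, f_space=1_200, f_mark=2_200, bit_rate=300, fs=48_000):
        spb = fs // bit_rate                          # samples per bit
        t = np.arange(spb) / fs
        bits = []
        for k in range(len(signal) // spb):
            chunk = signal[k * spb:(k + 1) * spb]
            # Non-coherent tone detection: energy of the correlation with each tone
            e_mark = abs(np.dot(chunk, np.exp(-2j * np.pi * f_mark * t)))
            e_space = abs(np.dot(chunk, np.exp(-2j * np.pi * f_space * t)))
            bits.append(1 if e_mark > e_space else 0)
        return bits

    # fsk_demodulate(cpfsk([1, 0, 1, 1, 0])) should recover [1, 0, 1, 1, 0]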
Synchronous Demodulator:
In the block diagram of Figure 4, two local carriers, on each of the two frequencies of the
binary FSK signal, are used in two synchronous demodulators. A decision circuit
examines the two outputs, and decides which is the most likely.
This is, in effect, a two-channel receiver. The bandwidth of each is dependent on the
message bit rate. There will be a minimum frequency separation required of the two
tones. This demodulator is more complex than most asynchronous demodulators are.


FSK bit rate and baud
Bit rate: the number of bits transmitted per second (bps).
Baud rate (Nbaud): the number of signal units transmitted per second (baud).
A signal unit (one baud) is composed of 1 or more bits.
In the analog transmission of digital data, the baud rate is less than or equal to the bit
rate, and Bit rate = baud rate × number of bits per baud.
Which one determines the bandwidth required to send a signal, the bit rate or the baud
rate? It is the baud rate: the bandwidth is set by the number of signal changes per second,
not by how many bits each change carries. A worked example follows.
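
As a worked example with assumed numbers: if each signal unit carries 4 bits (for instance a
16-point constellation), a 1000-baud line carries 4000 bps, and it is the 1000 signal changes per
second that set the bandwidth requirement.

baud_rate = 1000        # signal units per second (assumed)
bits_per_baud = 4       # e.g. 16 distinct signal states carry 4 bits each (assumed)
bit_rate = baud_rate * bits_per_baud
print(bit_rate)         # 4000 bps; the required bandwidth tracks the 1000 baud, not the 4000 bps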


Phase-shift keying (PSK)

Phase-shift keying (PSK) is a digital modulation scheme that conveys data by changing,
or modulating, the phase of a reference signal (the carrier wave).

Any digital modulation scheme uses a finite number of distinct signals to represent digital
data. PSK uses a finite number of phases, each assigned a unique pattern of binary digits.
Usually, each phase encodes an equal number of bits. Each pattern of bits forms the
symbol that is represented by the particular phase. The demodulator, which is designed
specifically for the symbol-set used by the modulator, determines the phase of the
received signal and maps it back to the symbol it represents, thus recovering the original
data. This requires the receiver to be able to compare the phase of the received signal to a
reference signal — such a system is termed coherent (and referred to as CPSK).

Alternatively, instead of using the bit patterns to set the absolute phase of the wave, they
can be used to change the phase by a specified amount. The demodulator then determines the changes
in the phase of the received signal rather than the phase itself. Since this scheme depends
on the difference between successive phases, it is termed differential phase-shift keying
(DPSK). DPSK can be significantly simpler to implement than ordinary PSK since there
is no need for the demodulator to have a copy of the reference signal to determine the
exact phase of the received signal (it is a non-coherent scheme). In exchange, it produces
more erroneous demodulations. The exact requirements of the particular scenario under
consideration determine which scheme is used.

BPSK (also sometimes called PRK, Phase Reversal Keying, or 2PSK) is the simplest
form of phase shift keying (PSK). It uses two phases which are separated by 180° and so
can also be termed 2-PSK. It does not particularly matter exactly where the constellation
points are positioned, and in this figure they are shown on the real axis, at 0° and 180°.
This modulation is the most robust of all the PSKs since it takes the highest level of noise
or distortion to make the demodulator reach an incorrect decision. It is, however, only
able to modulate at 1 bit/symbol (as seen in the figure) and so is unsuitable for high data-
rate applications when bandwidth is limited.

In the presence of an arbitrary phase-shift introduced by the communications channel, the
demodulator is unable to tell which constellation point is which. As a result, the data is
often differentially encoded prior to modulation.


The general form for BPSK follows the equation:

sn(t) = sqrt(2Eb/Tb) × cos(2π·fc·t + π(1 − n)),   n = 0, 1
This yields two phases, 0 and π. In the specific form, binary data is often conveyed with
the following signals:

s0(t) = sqrt(2Eb/Tb) × cos(2π·fc·t + π) = − sqrt(2Eb/Tb) × cos(2π·fc·t)   for binary "0"

s1(t) = sqrt(2Eb/Tb) × cos(2π·fc·t)                                       for binary "1"

where fc is the frequency of the carrier-wave.

Hence, the signal-space can be represented by the single basis function

φ(t) = sqrt(2/Tb) × cos(2π·fc·t)

where 1 is represented by + sqrt(Eb)·φ(t) and 0 is represented by − sqrt(Eb)·φ(t). This
assignment is, of course, arbitrary.

The use of this basis function is shown at the end of the next section in a signal timing
diagram. The topmost signal is a BPSK-modulated cosine wave that the BPSK modulator
would produce. The bit-stream that causes this output is shown above the signal (the
other parts of this figure are relevant only to QPSK).
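
A minimal sketch of this mapping, with assumed carrier, bit period and energy values: each bit
selects +sqrt(Eb) or −sqrt(Eb) on the single basis function, which amounts to multiplying the
carrier by ±1.

import numpy as np

fs, fc, Tb = 10000, 1000, 0.01      # sample rate, carrier frequency, bit period (assumed)
Eb = 1.0                            # bit energy (assumed)
bits = [1, 0, 1, 1, 0]

t = np.arange(int(fs * Tb)) / fs
phi = np.sqrt(2 / Tb) * np.cos(2 * np.pi * fc * t)   # the single basis function
# bit 1 -> +sqrt(Eb)*phi(t), bit 0 -> -sqrt(Eb)*phi(t)
bpsk = np.concatenate([(1 if b else -1) * np.sqrt(Eb) * phi for b in bits])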

Bit error rate

The bit error rate (BER) of BPSK in AWGN can be calculated as [5]:

Pb = Q( sqrt(2Eb/N0) )   or, equivalently,   Pb = (1/2) × erfc( sqrt(Eb/N0) )
Since there is only one bit per symbol, this is also the symbol error rate.
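
The expression above can be checked numerically with Python's math.erfc; the Eb/N0 values below
are arbitrary illustrative points, not figures from the text.

from math import erfc, sqrt

def bpsk_ber(ebn0_db):
    # Theoretical BPSK bit error rate in AWGN: Pb = 0.5 * erfc(sqrt(Eb/N0))
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * erfc(sqrt(ebn0))

for snr_db in (0, 4, 8, 10):
    print(snr_db, "dB ->", bpsk_ber(snr_db))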

Quadrature phase-shift keying (QPSK)




Constellation diagram for QPSK with Gray coding: each adjacent symbol differs by only one bit.

QPSK is sometimes known as quaternary PSK, quadriphase PSK, 4-PSK, or 4-QAM.
(Although the root concepts of QPSK and 4-QAM are different, the resulting modulated
radio waves are exactly the same.) QPSK uses four points on the constellation diagram,
equispaced around a circle. With four phases, QPSK can encode two bits per symbol,
shown in the diagram with Gray coding to minimize the bit error rate (BER), which is
sometimes misperceived as twice the BER of BPSK.

The mathematical analysis shows that QPSK can be used either to double the data rate
compared with a BPSK system while maintaining the same bandwidth of the signal, or to
maintain the data-rate of BPSK but halving the bandwidth needed. In this latter case, the
BER of QPSK is exactly the same as the BER of BPSK - and deciding differently is a
common confusion when considering or describing QPSK.

Given that radio communication channels are allocated by agencies such as the Federal
Communications Commission giving a prescribed (maximum) bandwidth, the advantage
of QPSK over BPSK becomes evident: QPSK transmits twice the data rate in a given
bandwidth compared to BPSK - at the same BER. The engineering penalty that is paid is
that QPSK transmitters and receivers are more complicated than the ones for BPSK.
However, with modern electronics technology, the penalty in cost is very moderate.

As with BPSK, there are phase ambiguity problems at the receiving end, and
differentially encoded QPSK is often used in practice.

Implementation

The implementation of QPSK is more general than that of BPSK and also indicates the
implementation of higher-order PSK. Writing the symbols in the constellation diagram in
terms of the sine and cosine waves used to transmit them:

sn(t) = sqrt(2Es/Ts) × cos(2π·fc·t + (2n − 1)π/4),   n = 1, 2, 3, 4

This yields the four phases π/4, 3π/4, 5π/4 and 7π/4 as needed.

This results in a two-dimensional signal space with unit basis functions

φ1(t) = sqrt(2/Ts) × cos(2π·fc·t)
φ2(t) = sqrt(2/Ts) × sin(2π·fc·t)
The first basis function is used as the in-phase component of the signal and the second as
the quadrature component of the signal.
Hence, the signal constellation consists of the four signal-space points

( ± sqrt(Es/2),  ± sqrt(Es/2) )
The factors of 1/2 indicate that the total power is split equally between the two carriers.

Comparing these basis functions with that for BPSK shows clearly how QPSK can be
viewed as two independent BPSK signals. Note that the signal-space points for BPSK do
not need to split the symbol (bit) energy over the two carriers in the scheme shown in the
BPSK constellation diagram.

QPSK systems can be implemented in a number of ways. An illustration of the major
components of the transmitter and receiver structure is shown below.




Conceptual transmitter structure for QPSK. The binary data stream is split into the in-
phase and quadrature-phase components. These are then separately modulated onto two
orthogonal basis functions. In this implementation, two sinusoids are used. Afterwards,
the two signals are superimposed, and the resulting signal is the QPSK signal. Note the
use of polar non-return-to-zero encoding. These encoders can be placed before the binary
data is split, but have been placed after the split to illustrate the conceptual difference between
digital and analog signals involved with digital modulation.
Receiver structure for QPSK. The matched filters can be replaced with correlators. Each
detection device uses a reference threshold value to determine whether a 1 or 0 is
detected.
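
The transmitter structure just described can be sketched in a few lines: the bit stream is split
into odd-numbered (in-phase) and even-numbered (quadrature) bits, each mapped to polar NRZ (±1)
and modulated onto a cosine and a sine carrier. All numeric parameters, and the cos/−sin sign
convention, are assumptions for illustration.

import numpy as np

fs, fc, Ts = 8000, 1000, 0.01            # sample rate, carrier, symbol period (assumed)
bits = [1, 1, 0, 0, 0, 1, 1, 0]
i_bits, q_bits = bits[0::2], bits[1::2]  # odd-numbered bits -> I, even-numbered -> Q

t = np.arange(int(fs * Ts)) / fs
nrz = lambda b: 1.0 if b else -1.0       # polar non-return-to-zero mapping

qpsk = np.concatenate([
    nrz(i) * np.cos(2 * np.pi * fc * t) - nrz(q) * np.sin(2 * np.pi * fc * t)
    for i, q in zip(i_bits, q_bits)
])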

Bit error rate

Although QPSK can be viewed as a quaternary modulation, it is easier to see it as two
independently modulated quadrature carriers. With this interpretation, the even (or odd)
bits are used to modulate the in-phase component of the carrier, while the odd (or even)
bits are used to modulate the quadrature-phase component of the carrier. BPSK is used on
both carriers and they can be independently demodulated.

As a result, the probability of bit-error for QPSK is the same as for BPSK:

Pb = Q( sqrt(2Eb/N0) )
However, in order to achieve the same bit-error probability as BPSK, QPSK uses twice
the power (since two bits are transmitted simultaneously).

The symbol error rate is given by:

Ps = 1 − (1 − Pb)² = 2·Q( sqrt(Es/N0) ) − [ Q( sqrt(Es/N0) ) ]²
If the signal-to-noise ratio is high (as is necessary for practical QPSK systems) the
probability of symbol error may be approximated:

Ps ≈ 2·Q( sqrt(Es/N0) )
QPSK signal in the time domain

The modulated signal is shown below for a short segment of a random binary data-
stream. The two carrier waves are a cosine wave and a sine wave, as indicated by the
signal-space analysis above. Here, the odd-numbered bits have been assigned to the in-
phase component and the even-numbered bits to the quadrature component (taking the
first bit as number 1). The total signal — the sum of the two components — is shown at
the bottom. Jumps in phase can be seen as the PSK changes the phase on each component
at the start of each bit-period. The topmost waveform alone matches the description given
for BPSK above.




Timing diagram for QPSK. The binary data stream is shown beneath the time axis. The
two signal components with their bit assignments are shown at the top, and the total,
combined signal at the bottom. Note the abrupt changes in phase at some of the bit-period
boundaries.

The binary data that is conveyed by this waveform is: 1 1 0 0 0 1 1 0.
      The odd-numbered bits (1st, 3rd, 5th, 7th: 1, 0, 0, 1) contribute to the in-phase component.
      The even-numbered bits (2nd, 4th, 6th, 8th: 1, 0, 1, 0) contribute to the quadrature-phase component.

   Offset QPSK (OQPSK)

Offset quadrature phase-shift keying (OQPSK) is a variant of phase-shift keying
modulation using 4 different values of the phase to transmit. It is sometimes called
Staggered quadrature phase-shift keying (SQPSK).
Taking four values of the phase (two bits) at a time to construct a QPSK symbol can
allow the phase of the signal to jump by as much as 180° at a time. When the signal is
low-pass filtered (as is typical in a transmitter), these phase-shifts result in large
amplitude fluctuations, an undesirable quality in communication systems. By offsetting
the timing of the odd and even bits by one bit-period, or half a symbol-period, the in-
phase and quadrature components will never change at the same time. In the constellation
diagram shown on the right, it can be seen that this will limit the phase-shift to no more
than 90° at a time. This yields much lower amplitude fluctuations than non-offset QPSK
and is sometimes preferred in practice.

The picture on the right shows the difference in the behavior of the phase between
ordinary QPSK and OQPSK. It can be seen that in the first plot the phase can change by
180° at once, while in OQPSK the changes are never greater than 90°.

The modulated signal is shown below for a short segment of a random binary data-
stream. Note the half symbol-period offset between the two component waves. The
sudden phase-shifts occur about twice as often as for QPSK (since the signals no longer
change together), but they are less severe. In other words, the magnitude of jumps is
smaller in OQPSK when compared to QPSK.
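
A sketch of the offsetting idea in complex-baseband form (all values assumed): the quadrature
stream is delayed by half a symbol period relative to the in-phase stream, so the two components
never change sign at the same instant.

import numpy as np

sps = 8                           # samples per symbol (assumed)
i_sym = [1, -1, 1, 1]             # polar NRZ symbols on the I branch (assumed data)
q_sym = [1, 1, -1, 1]             # polar NRZ symbols on the Q branch (assumed data)

i_wave = np.repeat(i_sym, sps)
q_wave = np.repeat(q_sym, sps)
# OQPSK: delay Q by half a symbol so I and Q transitions interleave
q_offset = np.concatenate([np.zeros(sps // 2), q_wave])[:len(i_wave)]
oqpsk_baseband = i_wave + 1j * q_offset   # complex-envelope view of the signal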




π/4-QPSK

This final variant of QPSK uses two identical constellations which are rotated by 45° (π/4
radians, hence the name π/4-QPSK) with respect to one another. Usually, either the even or odd
symbols are used to select points from one of the constellations and the other symbols
select points from the other constellation. This also reduces the phase-shifts from a
maximum of 180°, but only to a maximum of 135° and so the amplitude fluctuations of π
/ 4–QPSK are between OQPSK and non-offset QPSK.

One property this modulation scheme possesses is that if the modulated signal is
represented in the complex domain, its trajectory never passes through the origin. This
lowers the dynamic range of fluctuations in the signal, which is desirable when
engineering communications signals.

On the other hand, π / 4–QPSK lends itself to easy demodulation and has been adopted
for use in, for example, TDMA cellular telephone systems.

The modulated signal is shown below for a short segment of a random binary data-
stream. The construction is the same as above for ordinary QPSK. Successive symbols
are taken from the two constellations shown in the diagram. Thus, the first symbol (1 1)
is taken from the 'blue' constellation and the second symbol (0 0) is taken from the 'green'
constellation. Note that magnitudes of the two component waves change as they switch
between constellations, but the total signal's magnitude remains constant. The phase-
shifts are between those of the two previous timing-diagrams.




Differential phase-shift keying (DPSK)

Differential encoding

Differential phase shift keying (DPSK) is a common form of phase modulation that
conveys data by changing the phase of the carrier wave. As mentioned for BPSK and
QPSK there is an ambiguity of phase if the constellation is rotated by some effect in the
communications channel through which the signal passes. This problem can be overcome
by using the data to change rather than set the phase.

For example, in differentially-encoded BPSK a binary '1' may be transmitted by adding
180° to the current phase and a binary '0' by adding 0° to the current phase. Another
variant of DPSK is Symmetric Differential Phase Shift keying, SDPSK, where encoding
would be +90° for a '1' and -90° for a '0'.
In differentially-encoded QPSK (DQPSK), the phase-shifts are 0°, 90°, 180°, -90°
corresponding to data '00', '01', '11', '10'. This kind of encoding may be demodulated in
the same way as for non-differential PSK but the phase ambiguities can be ignored. Thus,
each received symbol is demodulated to one of the M points in the constellation and a
comparator then computes the difference in phase between this received signal and the
preceding one. The difference encodes the data as described above. Symmetric
Differential Quadrature Phase Shift Keying (SDQPSK) is like DQPSK, but encoding is
symmetric, using phase-shift values of -135°, -45°, +45° and +135°.
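
A small sketch of the DQPSK mapping just described, using the phase increments 0°, 90°, 180° and
−90° for the dibits '00', '01', '11' and '10'; the input data and starting phase are arbitrary.

shift = {'00': 0, '01': 90, '11': 180, '10': -90}   # phase increment per dibit, degrees

def dqpsk_phases(dibits, start_phase=0):
    # Return the absolute carrier phase transmitted for each dibit.
    phases, phase = [], start_phase
    for d in dibits:
        phase = (phase + shift[d]) % 360
        phases.append(phase)
    return phases

print(dqpsk_phases(['00', '01', '11', '10']))   # [0, 90, 270, 180]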

The modulated signal is shown below for both DBPSK and DQPSK as described above.
In the figure, it is assumed that the signal starts with zero phase, and so there is a phase
shift in both signals at t = 0.




Analysis shows that differential encoding approximately doubles the error rate compared
to ordinary M-PSK, but this may be overcome by only a small increase in Eb/N0.
Furthermore, this analysis (and the graphical results below) is based on a system in
which the only corruption is additive white Gaussian noise (AWGN). However, there will
also be a physical channel between the transmitter and receiver in the communication
system. This channel will, in general, introduce an unknown phase-shift to the PSK
signal; in these cases the differential schemes can yield a better error-rate than the
ordinary schemes which rely on precise phase information.
Demodulation

For a signal that has been differentially encoded, there is an obvious alternative method
of demodulation. Instead of demodulating as usual and ignoring carrier-phase ambiguity,
the phase between two successive received symbols is compared and used to determine
what the data must have been. When differential encoding is used in this manner, the
scheme is known as differential phase-shift keying (DPSK). Note that this is subtly
different to just differentially-encoded PSK since, upon reception, the received symbols
are not decoded one-by-one to constellation points but are instead compared directly to
one another.

Call the received symbol in the kth timeslot rk and let it have phase φk. Assume without
loss of generality that the phase of the carrier wave is zero. Denote the AWGN term as
nk. Then

rk = sqrt(Eb) × e^(jφk) + nk .

The decision variable for the (k − 1)th symbol and the kth symbol is the phase difference
between rk and rk−1. That is, if rk is projected onto rk−1, the decision is taken on the
phase of the resultant complex number:

rk × rk−1*

where superscript * denotes complex conjugation. In the absence of noise, the phase of
this product is φk − φk−1, the phase shift between the two received symbols, which can be
used to determine the data transmitted.
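
The comparison of successive symbols can be written directly in complex-baseband form. The sketch
below uses made-up, noise-free DBPSK symbols: it projects rk onto rk−1 and reads each bit from the
phase of the product.

import numpy as np

phases = np.array([0, np.pi, np.pi, 0, np.pi])   # received symbol phases (assumed, noise-free)
r = np.exp(1j * phases)

# decision variable: phase of r_k * conj(r_(k-1)); a phase near pi decodes as bit 1
diff = r[1:] * np.conj(r[:-1])
bits = (np.abs(np.angle(diff)) > np.pi / 2).astype(int)
print(bits)   # [1 0 1 1]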

The probability of error for DPSK is difficult to calculate in general, but in the case of
DBPSK it is:

Pb = (1/2) × e^(−Eb/N0)

which, when numerically evaluated, is only slightly worse than ordinary BPSK,
particularly at higher Eb/N0 values.

Using DPSK avoids the need for possibly complex carrier-recovery schemes to provide
an accurate phase estimate and can be an attractive alternative to ordinary PSK.

In optical communications, the data can be modulated onto the phase of a laser in a
differential way. The modulator consists of a laser emitting a continuous wave and a Mach-
Zehnder modulator driven by the electrical binary data. For the case of BPSK, for
example, the laser transmits the field unchanged for binary '1', and with reverse polarity
for '0'. The demodulator consists of a delay line interferometer which delays one bit, so
two bits can be compared at one time. In further processing, a photo diode is used to
transform the optical field into an electric current, so the information is changed back into
its original state.




The bit-error rates of DBPSK and DQPSK are compared to their non-differential
counterparts in the graph to the right. The loss for using DBPSK is small enough
compared to the complexity reduction that it is often used in communications systems
that would otherwise use BPSK. For DQPSK though, the loss in performance compared
to ordinary QPSK is larger and the system designer must balance this against the
reduction in complexity.
Bandwidth efficiency

In terms of the positive frequencies, the transmission bandwidth of AM is twice the
signal's original (baseband) bandwidth—since both the positive and negative sidebands
are shifted up to the carrier frequency. Thus, double-sideband AM (DSB-AM) is
spectrally inefficient, meaning that fewer radio stations can be accommodated in a given
broadcast band. The various suppression methods in Forms of AM can be readily
understood in terms of the diagram in Figure 2. With the carrier suppressed there would
be no energy at the center of a group. And with a sideband suppressed, the "group" would
have the same bandwidth as the positive frequencies of the baseband signal. The transmitter power
efficiency of DSB-AM is relatively poor (about 33%). The benefit of this system is that
receivers are cheaper to produce. The forms of AM with suppressed carriers are found to
be 100% power efficient, since no power is wasted on the carrier signal which conveys
no information.

Carrier recovery

A carrier recovery system is a circuit used to estimate and compensate for frequency and
phase differences between a received signal's carrier wave and the receiver's local
oscillator for the purpose of coherent demodulation.

In the transmitter of a communications carrier system, a carrier wave is modulated by a
baseband signal. At the receiver the baseband information is extracted from the incoming
modulated waveform. In an ideal communications system the carrier frequency
oscillators of the transmitter and receiver would be perfectly matched in frequency and
phase thereby permitting perfect coherent demodulation of the modulated baseband
signal. However, transmitters and receivers rarely share the same carrier frequency
oscillator. Communications receiver systems are usually independent of transmitting
systems and contain their own oscillators with frequency and phase offsets and
instabilities. Doppler shift may also contribute to frequency differences in mobile radio
frequency communications systems. All these frequency and phase variations must be
estimated using information in the received signal to reproduce or recover the carrier
signal at the receiver and permit coherent demodulation.




For a quiet carrier or a signal containing a dominant carrier spectral line, carrier recovery
can be accomplished with a simple band-pass filter at the carrier frequency and/or with a
phase-locked loop.

However, many modulation schemes make this simple approach impractical because
most signal power is devoted to modulation—where the information is present—and not
to the carrier frequency. Reducing the carrier power results in greater transmitter
efficiency. Different methods must be employed to recover the carrier in these conditions.

Non-Data-Aided

Non-data-aided/"blind" carrier recovery methods do not rely on any knowledge of the
modulation symbols. They are typically used for simple carrier recovery schemes or as
the initial method of coarse carrier frequency recovery. Closed-loop non-data-aided
systems are frequently maximum likelihood frequency error detectors.
Multiply-filter-divide

In this method of non-data-aided carrier recovery a non-linear operation is applied to the
modulated signal to create harmonics of the carrier frequency with the modulation
removed. The carrier harmonic is then band-pass filtered and frequency divided to
recover the carrier frequency. (This may be followed by a PLL.) Multiply-filter-divide is
an example of open-loop carrier recovery, which is favored in burst transactions since the
acquisition time is typically shorter than for closed-loop synchronizers.

If the phase-offset/delay of the multiply-filter-divide system is known, it can be
compensated for to recover the correct phase. In practice, applying this phase
compensation is difficult.

In general, the order of the modulation matches the order of the nonlinear operator
required to produce a clean carrier harmonic.

As an example, consider a BPSK signal s(t) = A·cos(ωRF·t + φm), with φm ∈ {0, π}. We can
recover the RF carrier frequency, ωRF, by squaring:

s²(t) = (A²/2) × [ 1 + cos(2ωRF·t + 2φm) ]
This produces a signal at twice the RF carrier frequency with no phase modulation
(modulo 2π phase is effectively 0 modulation)

For a QPSK signal, we can take the fourth power:
Two terms (plus a DC component) are produced. An appropriate filter around 4ωRF
recovers this frequency.
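
As a numerical illustration of the multiply-filter-divide idea (all signal parameters assumed):
squaring a BPSK waveform strips off the ±1 modulation and leaves a strong spectral line at twice
the carrier frequency, which a band-pass filter and a divide-by-two can then turn back into the
carrier.

import numpy as np

fs, fc, sps = 8000, 1000, 80                 # sample rate, carrier, samples per bit (assumed)
data = np.random.randint(0, 2, 50) * 2 - 1   # random +/-1 bits
t = np.arange(len(data) * sps) / fs
bpsk = np.repeat(data, sps) * np.cos(2 * np.pi * fc * t)

squared = bpsk ** 2                          # nonlinearity removes the phase modulation
spectrum = np.abs(np.fft.rfft(squared))
freqs = np.fft.rfftfreq(len(squared), 1 / fs)
print(freqs[np.argmax(spectrum[1:]) + 1])    # peak near 2*fc = 2000 Hz (DC bin skipped)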

Costas Loop

Carrier frequency and phase recovery as well as demodulation can be accomplished using
a Costas loop of the appropriate order. A Costas loop is a cousin of the PLL that uses
coherent quadrature signals to measure phase error. This phase error is used to discipline
the loop's oscillator. The quadrature signals, once properly aligned/recovered, also
successfully demodulate the signal. Costas loop carrier recovery may be used for any M-
ary PSK modulation scheme. One of the Costas Loop's inherent shortcomings is a 360/M
degree phase ambiguity present on the demodulated output.

Decision-Directed

At the start of the carrier recovery process it is possible to achieve symbol
synchronization prior to full carrier recovery because symbol timing can be determined
without knowledge of the carrier phase or the carrier's minor frequency variation/offset.
In decision directed carrier recovery the output of a symbol decoder is fed to a
comparison circuit and the phase difference/error between the decoded symbol and the
received signal is used to discipline the local oscillator. Decision directed methods are
suited to synchronizing frequency differences that are less than the symbol rate because
comparisons are performed on symbols at, or near, the symbol rate. Other frequency
recovery methods may be necessary to achieve initial frequency acquisition.

A common form of decision directed carrier recovery begins with quadrature phase
correlators producing in-phase and quadrature signals representing a symbol coordinate
in the complex plane. This point should correspond to a location in the modulation
constellation diagram. The phase error between the received value and nearest/decoded
symbol is calculated using the arc tangent (or an approximation). However, the arc tangent can
only compute a phase correction between 0 and π / 2. Most QAM constellations also have
π / 2 phase symmetry. Both of these shortcomings can be overcome by the use of
differential coding.
In low SNR conditions, the symbol decoder will make errors more frequently.
Exclusively using the corner symbols in rectangular constellations or giving them more
weight versus lower SNR symbols reduces the impact of low SNR decision errors.
                        UNIT III DIGITAL TRANSMISSION

Pulse modulation methods

Pulse modulation schemes aim at transferring a narrowband analog signal over an analog
baseband channel as a two-level signal by modulating a pulse wave. Some pulse
modulation schemes also allow the narrowband analog signal to be transferred as a digital
signal (i.e. as a quantized discrete-time signal) with a fixed bit rate, which can be
transferred over an underlying digital transmission system, for example some line code.
These are not modulation schemes in the conventional sense since they are not channel
coding schemes, but should be considered as source coding schemes, and in some cases
analog-to-digital conversion techniques.

Analog-over-analog methods:

      Pulse-amplitude modulation (PAM)
      Pulse-width modulation (PWM)
      Pulse-position modulation (PPM)

Analog-over-digital methods:

      Pulse-code modulation (PCM)
           o   Differential PCM (DPCM)
           o   Adaptive DPCM (ADPCM)
      Delta modulation (DM or Δ-modulation)
      Sigma Delta modulation (∑Δ)
      Continuously variable slope delta modulation (CVSDM), also called Adaptive-
       delta modulation (ADM)
      Pulse-density modulation (PDM)
PULSE CODE MODULATION



Pulse-code modulation (PCM) is a method used to digitally represent sampled analog
signals, which was invented by Alec Reeves in 1937. It is the standard form for digital
audio in computers and various Blu-ray, Compact Disc and DVD formats, as well as
other uses such as digital telephone systems. A PCM stream is a digital representation of
an analog signal, in which the magnitude of the analogue signal is sampled regularly at
uniform intervals, with each sample being quantized to the nearest value within a range
of digital steps.

PCM Sampling and Sampling rate

PCM streams have two basic properties that determine their fidelity to the original analog
signal: the sampling rate, which is the number of times per second that samples are taken;
and the bit depth, which determines the number of possible digital values that each
sample can take.
In the diagram, a sine wave (red curve) is sampled and quantized for pulse code
modulation. The sine wave is sampled at regular intervals, shown as ticks on the x-axis.
For each sample, one of the available values (ticks on the y-axis) is chosen by some
algorithm. This produces a fully discrete representation of the input signal (shaded area)
that can be easily encoded as digital data for storage or manipulation. For the sine wave
example at right, we can verify that the quantized values at the sampling moments are 7,
9, 11, 12, 13, 14, 14, 15, 15, 15, 14, etc. Encoding these values as binary numbers would
result in the following set of nibbles: 0111 (2³×0 + 2²×1 + 2¹×1 + 2⁰×1 = 0+4+2+1 = 7), 1001,
1011, 1100, 1101, 1110, 1110, 1111, 1111, 1111, 1110, etc. These digital values could
then be further processed or analyzed by a purpose-specific digital signal processor or
general purpose DSP. Several Pulse Code Modulation streams could also be multiplexed
into a larger aggregate data stream, generally for transmission of multiple streams over a
single physical link. One technique is called time-division multiplexing (TDM) and is
widely used, notably in the modern public telephone system. Another technique is called
frequency-division multiplexing (FDM), where the signal is assigned a frequency in a
spectrum and transmitted along with other signals inside that spectrum. Currently, TDM
is much more widely used than FDM because of its natural compatibility with digital
communication and generally lower bandwidth requirements.
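
The sampling-and-quantizing step can be sketched in a few lines of Python. The sine-wave
parameters and the 4-bit (16-level) quantizer are assumptions chosen only to mimic the nibble
example above.

import numpy as np

fs, n_bits = 16, 4                     # samples per signal period and bit depth (assumed)
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * t)         # analog source, range -1..+1

levels = 2 ** n_bits                   # 16 quantization levels
codes = np.clip(np.round((signal + 1) / 2 * (levels - 1)), 0, levels - 1).astype(int)
nibbles = [format(c, '04b') for c in codes]   # one 4-bit PCM word per sample
print(codes[:6], nibbles[:6])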

There are many ways to implement a real device that performs this task. In real systems,
such a device is commonly implemented on a single integrated circuit that lacks only the
clock necessary for sampling, and is generally referred to as an ADC (Analog-to-Digital
converter). These devices will produce on their output a binary representation of the input
whenever they are triggered by a clock signal, which would then be read by a processor
of some sort.

Demodulation

To produce output from the sampled data, the procedure of modulation is applied in
reverse. After each sampling period has passed, the next value is read and a signal is
shifted to the new value. As a result of these transitions, the signal will have a significant
amount of high-frequency energy. To smooth out the signal and remove these undesirable
aliasing frequencies, the signal would be passed through analog filters that suppress
energy outside the expected frequency range (that is, greater than the Nyquist frequency
fs / 2). Some systems use digital filtering to remove some of the aliasing, converting the
signal from digital to analog at a higher sample rate such that the analog filter required
for anti-aliasing is much simpler. In some systems, no explicit filtering is done at all; as
it's impossible for any system to reproduce a signal with infinite bandwidth, inherent
losses in the system compensate for the artifacts — or the system simply does not require
much precision. The sampling theorem suggests that practical PCM devices, provided a
sampling frequency that is sufficiently greater than that of the input signal, can operate
without introducing significant distortions within their designed frequency bands.

The electronics involved in producing an accurate analog signal from the discrete data are
similar to those used for generating the digital signal. These devices are DACs (digital-to-
analog converters), and operate similarly to ADCs. They produce on their output a
voltage or current (depending on type) that represents the value presented on their inputs.
This output would then generally be filtered and amplified for use.

Quantization and Pulse Code Modulation
If eight bits are allowed for the PCM sample, this gives a total of 256 possible values.
PCM assigns these 256 possible values as 127 positive and 127 negative encoding levels,
plus the zero-amplitude level. (PCM assigns two samples to the zero level.) These levels
are divided up into eight bands called chords. Within each chord are sixteen steps. Figure
23 shows the chord/step structure for a linear encoding scheme.
Three examples of PAM samples are shown in Figure 23. Each PAM sample‘s peak falls
within a specific chord and step, giving it a numerical value. This value translates into a
binary code which becomes the corresponding PCM value. Figure 23 only shows the
positive-value PCM values, for simplicity.
Figure 24 shows the conversion function for a linear quantization process. As a voice
signal sample increases in amplitude the quantization levels increase uniformly. The 127
quantization levels are spread evenly over the voice signal‘s dynamic range. This gives
loud voice signals the same degree of resolution (same step size) as soft voice signals.
Encoding an analog signal in this manner, while conceptually simplistic, does not give
optimized fidelity in the reconstruction of human voice.
Notice this transfer function gives two values for a zero-amplitude signal. In PCM, there
is a "positive zero" and a "negative zero".


COMPANDING


Dividing the amplitude of the voice signal up into equal positive and negative steps is not
an efficient way to encode voice into PCM. Figure 23 shows PCM chords and steps as
uniform increments (such as would be created by the transfer function depicted in Figure
24). This does not take advantage of a natural property of human voice: voices create
low-amplitude signals most of the time (people seldom shout on the telephone). That is, most
of the energy in human voice is concentrated in the lower end of voice's dynamic range.
To create the highest-fidelity voice reproduction from PCM, the quantization process must
take into account this fact that most voice signals are typically of lower amplitude. To do
this the vocoder adjusts the chords and steps so that most of them are in the low-amplitude
end of the total encoding range. In this scheme, all step sizes are not equal. Step sizes are
smaller for lower-amplitude signals.
Quantization levels distributed according to a logarithmic, instead of linear, function gives
finer resolution, or smaller quantization steps, at lower signal amplitudes. Therefore,
higher-fidelity reproduction of voice is achieved. Figure 25 shows a conversion function
for a logarithmic quantization process.
A vocoder that places most of the quantization steps at lower amplitudes by using a nonlinear
function, such as a logarithm, is said to compress voice upon encoding, then expand
the PCM samples to re-create an analog voice signal. Such a vocoder is hence called a
compander (from compress and expand).




In reality, voice quantization does not exactly follow the logarithmic curve step for step, as
Figure 25 appears to indicate. PCM in North America uses a logarithmic function called
μ-law. The encoding function only approximates a logarithmic curve, as steps within a
chord are all the same size, and therefore linear. The steps change in size only from chord
to chord, following the μ-law logarithmic curve.
The chords form a piece-wise linear approximation of the logarithmic curve.
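
The continuous μ-law compression curve can be written down directly; the sketch below uses the
standard formula with μ = 255 (the North American value) and is only an approximation of the
chord/step tables used by real codecs.

import numpy as np

def mu_law_compress(x, mu=255.0):
    # y = sign(x) * ln(1 + mu*|x|) / ln(1 + mu), for |x| <= 1
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    # inverse mapping, the 'expand' half of the compander
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

x = np.array([-1.0, -0.1, -0.01, 0.0, 0.01, 0.1, 1.0])
print(mu_law_compress(x))   # small amplitudes get proportionally finer resolution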


Delta Modulation


Delta modulation (DM or Δ-modulation) is an analog-to-digital and digital-to-analog
signal conversion technique used for transmission of voice information where quality is
not of primary importance. DM is the simplest form of differential pulse-code
modulation (DPCM), where the difference between successive samples is encoded into n-
bit data streams. In delta modulation, the transmitted data is reduced to a 1-bit data
stream. Its main features are:

              the analog signal is approximated with a series of segments
              each segment of the approximated signal is compared to the original analog wave
           to determine the increase or decrease in relative amplitude
              the decision process for establishing the state of successive bits is determined by
           this comparison
              only the change of information is sent, that is, only an increase or decrease of the
           signal amplitude from the previous sample is sent whereas a no-change condition
           causes the modulated signal to remain at the same 0 or 1 state of the previous sample.

To achieve high signal-to-noise ratio, delta modulation must use oversampling techniques, that
        is, the analog signal is sampled at a rate several times higher than the Nyquist rate.

Derived forms of delta modulation are continuously variable slope delta modulation, delta-sigma
modulation, and differential modulation. Differential pulse code modulation is the
superset of DM.



Rather than quantizing the absolute value of the input analog waveform, delta modulation
        quantizes the difference between the current and the previous step, as shown in the block
        diagram in Fig. 1.
Fig. 1 - Block diagram of a Δ-modulator/demodulator

The modulator is made by a quantizer which converts the difference between the input signal and
       the average of the previous steps. In its simplest form, the quantizer can be realized with
       a comparator referenced to 0 (two levels quantizer), whose output is 1 or 0 if the input
       signal is positive or negative. It is also a bit-quantizer as it quantizes only a bit at a time.
       The demodulator is simply an integrator (like the one in the feedback loop) whose output
       rises or falls with each 1 or 0 received. The integrator itself constitutes a low-pass filter.
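
A minimal sketch of the one-bit quantizer plus feedback integrator just described, using an
assumed fixed step size and an assumed test signal; running the same integrator on the received
bit stream acts as the demodulator.

import numpy as np

def delta_modulate(signal, step=0.1):
    bits, staircase, acc = [], [], 0.0
    for x in signal:
        bit = 1 if x >= acc else 0        # comparator: is the input above the estimate?
        acc += step if bit else -step     # integrator in the feedback loop
        bits.append(bit)
        staircase.append(acc)
    return bits, staircase                # 1-bit stream and staircase approximation

t = np.linspace(0, 1, 100)
bits, approx = delta_modulate(np.sin(2 * np.pi * t), step=0.1)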

Transfer characteristics

The transfer characteristics of a delta modulated system follows a signum function,as it quantizes
       only two levels and also one-bit at a time.

The two sources of noise in delta modulation are "slope overload", when steps are too small to
       track the original waveform, and "granularity", when steps are too large. But a 1971
       study shows that slope overload is less objectionable compared to granularity than one
       might expect based solely on SNR measures.[1]

Output signal power

In delta modulation there is no restriction on the amplitude of the signal waveform, because the
       number of levels is not fixed. On the other hand, there is a limitation on the slope of the
       signal waveform which must be observed if slope overload is to be avoided. However, if
       the signal waveform changes slowly, there is nominally no limit to the signal power
       which may be transmitted.

Bit-rate

If the communication channel is of limited bandwidth, there is the possibility of interference in
either DM or PCM. Hence, DM and PCM operate at the same bit-rate.
Adaptive delta modulation

Adaptive delta modulation (ADM) or continuously variable slope delta modulation (CVSD) is a
       modification of DM in which the step size is not fixed. Rather, when several consecutive
       bits have the same direction value, the encoder and decoder assume that slope overload is
       occurring, and the step size becomes progressively larger. Otherwise, the step size
       becomes gradually smaller over time. ADM reduces slope error,at the expense of
       increasing quantizing error.This error can be reduced by using a low pass filter.

Comparison of PCM and DM

1. The signal-to-noise ratio of PCM is larger than that of DM.

2. For ADM, the signal-to-noise ratio is comparable to that of companded PCM.

3. PCM transmits all the bits used to code a sample, whereas DM transmits only one bit per
sample.

Differential pulse code modulation

       Differential pulse code modulation (DPCM) is a procedure of converting an analog into a
       digital signal in which an analog signal is sampled and then the difference between the
       actual sample value and its predicted value (predicted value is based on previous sample
       or samples) is quantized and then encoded forming a digital value.

       DPCM code words represent differences between samples unlike PCM where code words
       represented a sample value.

       Basic concept of DPCM - coding a difference, is based on the fact that most source
       signals show significant correlation between successive samples so encoding uses
       redundancy in sample values which implies lower bit rate.

       Realization of basic concept (described above) is based on a technique in which we have
       to predict current sample value based upon previous samples (or sample) and we have to
       encode the difference between actual value of sample and predicted value (the difference
       between samples can be interpreted as the prediction error).
       Because it is necessary to predict the sample value, DPCM is a form of predictive coding.

       DPCM compression depends on the prediction technique; well-designed prediction
       techniques lead to good compression rates, while in other cases DPCM could mean
       expansion compared to regular PCM encoding. A minimal encoder/decoder sketch follows.




Fig 1. DPCM encoder (transmitter)


Intersymbol interference (ISI)


      In telecommunication, intersymbol interference (ISI) is a form of distortion of a signal in
      which one symbol interferes with subsequent symbols. This is an unwanted phenomenon
      as the previous symbols have similar effect as noise, thus making the communication less
      reliable. ISI is usually caused by multipath propagation or the inherent non-linear
      frequency response of a channel causing successive symbols to "blur" together. The
      presence of ISI in the system introduces errors in the decision device at the receiver
      output. Therefore, in the design of the transmitting and receiving filters, the objective is
      to minimize the effects of ISI, and thereby deliver the digital data to its destination with
      the smallest error rate possible. Ways to fight intersymbol interference include adaptive
      equalization and error correcting codes.
Causes

Multipath propagation

         One of the causes of intersymbol interference is what is known as multipath
         propagation in which a wireless signal from a transmitter reaches the receiver via many
         different paths. The causes of this include reflection (for instance, the signal may bounce
         off buildings), refraction (such as through the foliage of a tree) and atmospheric effects
          such as atmospheric ducting and ionospheric reflection. Since all of these paths are
         different lengths - plus some of these effects will also slow the signal down - this results
         in the different versions of the signal arriving at different times. This delay means that
         part or all of a given symbol will be spread into the subsequent symbols, thereby
         interfering with the correct detection of those symbols. Additionally, the various paths
         often distort the amplitude and/or phase of the signal thereby causing further interference
         with the received signal.

Bandlimited channels

         Another cause of intersymbol interference is the transmission of a signal through
         a bandlimited channel, i.e., one where the frequency response is zero above a certain
         frequency (the cutoff frequency). Passing a signal through such a channel results in the
         removal of frequency components above this cutoff frequency; in addition, the amplitude
         of the frequency components below the cutoff frequency may also be attenuated by the
         channel.

          This filtering of the transmitted signal affects the shape of the pulse that arrives at the
          receiver. Filtering a rectangular pulse not only changes the shape of the pulse within the
          first symbol period, but also spreads it out over the subsequent symbol
         periods. When a message is transmitted through such a channel, the spread pulse of each
         individual symbol will interfere with following symbols.

         As opposed to multipath propagation, bandlimited channels are present in both wired and
         wireless communications. The limitation is often imposed by the desire to operate
         multiple independent signals through the same area/cable; due to this, each system is
       typically allocated a piece of the total bandwidth available. For wireless systems, they
       may be allocated a slice of the electromagnetic spectrum to transmit in (for example, FM
       radio is often broadcast in the 87.5 MHz - 108 MHz range). This allocation is usually
       administered by a government agency; in the case of the United States this is the Federal
       Communications Commission (FCC). In a wired system, such as an optical fiber cable,
       the allocation will be decided by the owner of the cable.

       The bandlimiting can also be due to the physical properties of the medium - for instance,
       the cable being used in a wired system may have a cutoff frequency above which
       practically none of the transmitted signal will propagate.

       Communication systems that transmit data over bandlimited channels usually
       implement pulse shaping to avoid interference caused by the bandwidth limitation. If the
       channel frequency response is flat and the shaping filter has a finite bandwidth, it is
       possible to communicate with no ISI at all. Often the channel response is not known
       beforehand, and an adaptive equalizer is used to compensate the frequency response.

Effects on eye patterns

       One way to study ISI in a PCM or data transmission system experimentally is to apply
       the received wave to the vertical deflection plates of an oscilloscope and to apply a
        sawtooth wave at the transmitted symbol rate R = 1/T to the horizontal deflection plates.
       The resulting display is called an eye pattern because of its resemblance to the human eye
       for binary waves. The interior region of the eye pattern is called the eye opening. An eye
       pattern provides a great deal of information about the performance of the pertinent
       system.

       1.     The width of the eye opening defines the time interval over which the received
                 wave can be sampled without error from ISI. It is apparent that the preferred time
                 for sampling is the instant of time at which the eye is open widest.
       2.     The sensitivity of the system to timing error is determined by the rate of closure
                 of the eye as the sampling time is varied.
       3.     The height of the eye opening, at a specified sampling time, defines the margin
                 over noise.
      An eye pattern, which overlays many samples of a signal, can give a graphical
      representation of the signal characteristics. The first image below is the eye pattern for a
      binary phase-shift keying (PSK) system in which a one is represented by an amplitude of
      -1 and a zero by an amplitude of +1. The current sampling time is at the center of the
      image and the previous and next sampling times are at the edges of the image. The
      various transitions from one sampling time to another (such as one-to-zero, one-to-one
      and so forth) can clearly be seen on the diagram.

      The noise margin - the amount of noise required to cause the receiver to get an error - is
      given by the distance between the signal and the zero amplitude point at the sampling
      time; in other words, the further from zero at the sampling time the signal is the better.
      For the signal to be correctly interpreted, it must be sampled somewhere between the two
      points where the zero-to-one and one-to-zero transitions cross. Again, the further apart
      these points are the better, as this means the signal will be less sensitive to errors in the
      timing of the samples at the receiver.

      The effects of ISI are shown in the second image which is an eye pattern of the same
      system when operating over a multipath channel. The effects of receiving delayed and
      distorted versions of the signal can be seen in the loss of definition of the signal
      transitions. It also reduces both the noise margin and the window in which the signal can
      be sampled, which shows that the performance of the system will be worse (i.e. it will
      have a greater bit error ratio).






The eye diagram of a binary PSK system





The eye diagram of the same system with multipath effects added

Countering ISI

There are several techniques in telecommunication and data storage that try to work around the
       problem of intersymbol interference.

   Design systems such that the impulse response is short enough that very little energy from
   one symbol smears into the next symbol.




Consecutive raised-cosine impulses, demonstrating zero-ISI property

             Separate symbols in time with guard periods.
             Apply an equalizer at the receiver, that, broadly speaking, attempts to undo the
          effect of the channel by applying an inverse filter.
             Apply a sequence detector at the receiver, that attempts to estimate the sequence
          of transmitted symbols using the Viterbi algorithm.
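
The zero-ISI property of the raised-cosine pulse pictured above comes from the pulse being exactly
zero at every other symbol-spaced sampling instant. The sketch below evaluates a raised-cosine
impulse (roll-off factor assumed) at multiples of the symbol period to confirm this.

import numpy as np

def raised_cosine(t, T=1.0, beta=0.35):
    # Raised-cosine impulse response: 1 at t = 0 and 0 at t = +/-T, +/-2T, ...
    t = np.asarray(t, dtype=float)
    h = np.sinc(t / T) * np.cos(np.pi * beta * t / T) / (1 - (2 * beta * t / T) ** 2)
    # removable singularity at |t| = T/(2*beta)
    sing = np.isclose(np.abs(t), T / (2 * beta))
    h[sing] = (np.pi / 4) * np.sinc(1 / (2 * beta))
    return h

samples = raised_cosine(np.arange(-4, 5))   # symbol-spaced sampling instants
print(np.round(samples, 6))                 # 1 at t = 0, (numerically) 0 elsewhere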
UNIT IV SPREAD SPECTRUM AND MULTIPLE ACCESS TECHNIQUES


Spread-spectrum techniques are methods by which a signal (e.g. an electrical,
electromagnetic, or acoustic signal) generated in a particular bandwidth is deliberately
spread in the frequency domain, resulting in a signal with a wider bandwidth. These
techniques are used for a variety of reasons, including the establishment of secure
communications, increasing resistance to natural interference and jamming, to prevent
detection, and to limit power flux density (e.g. in satellite downlinks).


This is a technique in which a (telecommunication) signal is transmitted on
a bandwidth considerably larger than the frequency content of the original information.

Spread-spectrum telecommunications is a signal structuring technique that employs
direct sequence, frequency hopping, or a hybrid of these, which can be used for multiple
access and/or multiple functions. This technique decreases the potential interference to
other receivers while achieving privacy. Spread spectrum generally makes use of a
sequential noise-like signal structure to spread the normally narrowband information
signal over a relatively wideband (radio) band of frequencies. The receiver correlates the
received signals to retrieve the original information signal. Originally there were two
motivations: either to resist enemy efforts to jam the communications (anti-jam, or AJ),
or to hide the fact that communication was even taking place, sometimes called low
probability of intercept (LPI).

Frequency-hopping            spread        spectrum (FHSS), direct-sequence        spread
spectrum (DSSS), time-hopping spread spectrum (THSS), chirp spread spectrum (CSS),
and combinations of these techniques are forms of spread spectrum. Each of these
techniques employs pseudorandom number sequences — created using pseudorandom
number generators — to determine and control the spreading pattern of the signal across
the allotted bandwidth. Ultra-wideband (UWB) is another modulation technique that
accomplishes the same purpose, based on transmitting short duration pulses. Wireless
Ethernet standard IEEE 802.11 uses either FHSS or DSSS in its radio interface.
Theoretical Justification for Spread Spectrum
Spread-spectrum is apparent in the Shannon and Hartley channel-capacity theorem:
C = B × log2 (1 + S/N) (Eq. 1)
In this equation, C is the channel capacity in bits per second (bps), which is the maximum
data rate for a theoretical bit-error rate (BER). B is the required channel bandwidth in Hz,
and S/N is the signal-to-noise power ratio. To be more explicit, one assumes that C,
which represents the amount of information allowed by the communication channel, also
represents the desired performance. Bandwidth (B) is the price to be paid, because
frequency is a limited resource. The S/N ratio expresses the environmental conditions or
the physical characteristics (i.e., obstacles, presence of jammers, interferences, etc.).


There is an elegant interpretation of this equation, applicable for difficult environments,
for example, when a low S/N ratio is caused by noise and interference. This approach
says that one can maintain or even increase communication performance (high C) by
allowing or injecting more bandwidth (high B), even when signal power is below the
noise floor. (The equation does not forbid that condition!)
Modify Equation 1 by changing the log base from 2 to e (the Napierian number) and by
noting that ln = loge.
Therefore:
C/B = (1/ln2) × ln(1 + S/N) = 1.443 × ln(1 + S/N) (Eq. 2)
Applying the Maclaurin series expansion for
ln(1 + x) = x − x²/2 + x³/3 − x⁴/4 + ... + (−1)^(k+1) × x^k/k + ...:
C/B = 1.443 × (S/N − 1/2 × (S/N)² + 1/3 × (S/N)³ − ...) (Eq. 3)
S/N is usually low for spread-spectrum applications. (As just mentioned, the signal power
density can even be below the noise level.) Assuming a noise level such that S/N << 1,
Shannon's expression becomes simply:
C/B ≈ 1.443 × S/N (Eq. 4)
Very roughly:
C/B ≈ S/N (Eq. 5)
Or:
N/S ≈ B/C (Eq. 6)
To send error-free information for a given noise-to-signal ratio in the channel, therefore,
one need only perform the fundamental spread-spectrum signal-spreading operation:
increase the transmitted bandwidth. That principle seems simple and evident.
Nonetheless, implementation is complex, mainly because spreading the baseband (by
a factor that can be several orders of magnitude) forces the electronics to act and react
accordingly, which, in turn, makes the spreading and despreading operations necessary.
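
A quick numeric check of Eq. 1 and the small-S/N approximation, with made-up numbers: for a
1 Mbps target and a signal sitting 10 dB below the noise (S/N = 0.1), the required spreading
bandwidth follows directly.

from math import log2

target_C = 1_000_000     # desired data rate in bps (assumed)
snr = 0.1                # signal 10 dB below the noise floor (assumed)

B_exact = target_C / log2(1 + snr)     # exact Shannon bandwidth, about 7.3 MHz
B_approx = target_C / (1.443 * snr)    # Eq. 4 approximation, about 6.9 MHz
print(round(B_exact), round(B_approx))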


Definitions
Different spread-spectrum techniques are available, but all have one idea in common: the
key (also called the code or sequence) attached to the communication channel. The
manner of inserting this code defines precisely the spread-spectrum technique. The term
"spread spectrum" refers to the expansion of signal bandwidth, by several orders of
magnitude in some cases, which occurs when a key is attached to the communication
channel. The formal definition of spread spectrum is more precise: an RF
communications system in which the baseband signal bandwidth is intentionally spread
over a larger bandwidth by injecting a higher frequency signal (Figure 1). As a direct
consequence, energy used in transmitting the signal is spread over a wider bandwidth,
and appears as noise. The ratio (in dB) between the spread baseband and the original
signal is called processing gain. Typical spread-spectrum processing gains run from 10dB
to 60dB. To apply a spread-spectrum technique, simply inject the corresponding spread-
spectrum code somewhere in the transmitting chain before the antenna (receiver). (That
injection is called the spreading operation.) The effect is to diffuse the information in a
larger bandwidth. Conversely, you can remove the spread-spectrum code (called a
despreading operation) at a point in the receive chain before data retrieval. A despreading
operation reconstitutes the information into its original bandwidth. Obviously, the same
code must be known in advance at both ends of the transmission channel. (In some
circumstances, the code should be known only by those two parties.)
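
Processing gain is simply the bandwidth-spreading ratio expressed in dB; a small worked example
with assumed bandwidths:

from math import log10

info_bw = 10e3       # original baseband information bandwidth in Hz (assumed)
spread_bw = 10e6     # bandwidth after spreading in Hz (assumed)

gp_db = 10 * log10(spread_bw / info_bw)
print(gp_db)         # 30 dB, within the typical 10 dB to 60 dB range quoted above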
Figure 1. Spread-spectrum communication system.


Bandwidth Effects of the Spreading Operation




Figure 2 illustrates the evaluation of signal bandwidths in a communication link.


Figure 2. Spreading operation spreads the signal energy over a wider frequency bandwidth.

Spread-spectrum modulation is applied on top of a conventional modulation such as BPSK
or direct conversion. One can demonstrate that all other signals not receiving the
spread-spectrum code will remain as they are, that is, unspread.

Bandwidth Effects of the Despreading Operation
Similarly, despreading can be seen in Figure 3.




Figure 3. The despreading operation recovers the original signal.
Here a spread-spectrum demodulation has been made on top of the normal demodulation
operations. One can also demonstrate that signals such as an interferer or jammer added
during the transmission will be spread during the despreading operation.


Waste of Bandwidth Due to Spreading Is Offset by Multiple Users
Spreading results directly in the use of a wider frequency band by a factor that
corresponds exactly to the "processing gain" mentioned earlier. Therefore spreading does
not spare the limited frequency resource. That overuse is well compensated, however, by
the possibility that many users will share the enlarged frequency band (Figure 4).




Figure 4. The same frequency band can be shared by multiple users with spread-spectrum
techniques.


Spread Spectrum Is a Wideband Technology
In contrast to regular narrowband technology, the spread-spectrum process is a wideband
technology. W-CDMA and UMTS, for example, are wideband technologies that require a
relatively large frequency bandwidth, compared to narrowband radio.


Benefits of Spread Spectrum
Resistance to Interference and Antijamming Effects
There are many benefits to spread-spectrum technology. Resistance to interference is the
most important advantage. Intentional or unintentional interference and jamming signals
are rejected because they do not contain the spread-spectrum key. Only the desired
signal, which has the key, will be seen at the receiver when the despreading operation is
exercised. See Figure 5.
Figure 5. A spread-spectrum communication system. Note that the interferer's energy is spread while the desired data signal is despread.
Resistance to Interception
Resistance to interception is the second advantage provided by spread-spectrum
techniques. Because nonauthorized listeners do not have the key used to spread the
original signal, those listeners cannot decode it. Without the right key, the spread-
spectrum signal appears as noise or as an interferer. (Scanning methods can break the
code, however, if the key is short.) Even better, signal levels can be below the noise floor,
because the spreading operation reduces the spectral density. See Figure 6. (Total energy
is the same, but it is widely spread in frequency.) The message is thus made invisible, an
effect that is particularly strong with the directsequence spread-spectrum (DSSS)
technique. (DSSS is discussed in greater detail below.) Receivers without the right spread-spectrum key cannot "see" the transmission; they only register a slight increase in the overall noise level.




Figure 6. Spread-spectrum signal is buried under the noise level. The receiver cannot "see" the transmission.
Resistance to Fading (Multipath Effects)
       Wireless channels often include multiple-path propagation in which the signal has
more than one path from the transmitter to the receiver (Figure 7). Such multipaths can be
caused by atmospheric reflection or refraction, and by reflection from the ground or from
objects such as buildings.




Figure 7. Illustration of how the signal can reach the receiver over multiple paths.


The reflected path (R) can interfere with the direct path (D) in a phenomenon called
fading. Because the despreading process synchronizes to signal D, signal R is rejected
even though it contains the same key. Methods are available to use the reflected-path
signals by despreading them and adding the extracted results to the main one.


Spread Spectrum Allows CDMA
Note that spread spectrum is not a modulation scheme, and should not be confused with
other types of modulation. One can, for example, use spread-spectrum techniques to
transmit a signal modulated by FSK or BPSK. Thanks to the coding basis, spread
spectrum can also be used as another method for implementing multiple access (i.e., the
real or apparent coexistence of multiple and simultaneous communication links on the
same physical media). So far, three main methods are available.


FDMA—Frequency Division Multiple Access
FDMA allocates a specific carrier frequency to a communication channel. The number of
different users is limited to the number of "slices" in the frequency spectrum (Figure 8).
Of the three methods for enabling multiple access, FDMA is the least efficient in terms of frequency-band usage. Examples of FDMA access include radio broadcasting, TV, AMPS, and TETRAPOL.
Figure 8. Carrier-frequency allocations among different users in an FDMA system.
TDMA—Time Division Multiple Access
With TDMA the different users speak and listen to each other according to a defined
allocation of time slots (Figure 9). Different communication channels can then be
established for a unique carrier frequency. Examples of TDMA are GSM, DECT,
TETRA, and IS-136.




Figure 9. Time-slot allocations among different users in a TDMA system.


CDMA—Code Division Multiple Access
CDMA access to the air is determined by a key or code (Figure 10). In that sense, spread
spectrum is a CDMA access. The key must be defined and known in advance at the
transmitter and receiver ends. Growing examples are IS-95 (DS), IS-98, Bluetooth, and
WLAN.




Figure 10. CDMA systems access the same frequency band with unique keys or codes.


One can, of course, combine the above access methods. GSM, for instance, combines
TDMA and FDMA. GSM defines the topological areas (cells) with different carrier
frequencies, and sets time slots within each cell.
Spread Spectrum and (De)coding "Keys"
At this point, it is worth restating that the main characteristic of spread spectrum is the
presence of a code or key, which must be known in advance by the transmitter and
receiver(s). In modern communications the codes are digital sequences that must be as
long and as random as possible to appear as "noise-like" as possible. But in any case, the
codes must remain reproducible, or the receiver cannot extract the message that has been
sent. Thus, the sequence is "nearly random." Such a code is called a pseudo-random
number (PRN) or sequence. The method most frequently used to generate pseudo-random
codes is based on a feedback shift register. One example of a PRN is shown in Figure 11.
The shift register contains eight data flip-flops (FF). At the rising edge of the clock, the
contents of the shift register are shifted one bit to the left. The data clocked in by FF1
depends on the contents fed back from FF8 and FF7. The PRN is read out from FF8. The
contents of the FFs are reset at the beginning of each sequence length.
Figure 11. Block diagram of a sample PRN generator.
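A minimal software model of such a feedback shift register is sketched below. It follows the Figure 11 description (eight flip-flops, feedback taken from FF8 and FF7, output read from FF8); the seed value and sequence length are arbitrary assumptions, and practical systems choose tap positions that give maximal-length sequences.

def prn_generator(taps=(8, 7), n_stages=8, seed=0b00000001, length=32):
    # Generate a pseudo-random chip sequence with a feedback shift register.
    # taps     : stages XORed together to form the feedback bit (FF8 and FF7 here)
    # n_stages : number of flip-flops in the register
    # seed     : initial register contents (must not be all zeros)
    # length   : number of output chips to produce
    state = seed
    out = []
    for _ in range(length):
        out.append((state >> (n_stages - 1)) & 1)      # PRN is read out from FF8 (the MSB)
        feedback = 0
        for t in taps:                                  # feedback from the tapped stages
            feedback ^= (state >> (t - 1)) & 1
        state = ((state << 1) | feedback) & ((1 << n_stages) - 1)  # shift left, insert feedback
    return out

print(prn_generator())   # e.g. [0, 0, 0, 0, 0, 0, 0, 1, ...]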


Many books are available on the generation of PRNs and their characteristics, but that
development is outside the scope of this basic tutorial. Simply note that the construction
or selection of proper sequences, or sets of sequences, is not trivial. To guarantee
efficient spread-spectrum communications, the PRN sequences must respect certain rules,
such as length, autocorrelation, cross-correlation, orthogonality, and bit balancing. The
more popular PRN sequences have names: Barker, M-Sequence, Gold, Hadamard-Walsh,
etc. Keep in mind that a more complex sequence set provides a more robust spread-
spectrum link. But there is a cost to this: more complex electronics both in speed and
behavior, mainly for the spread-spectrum despreading operations. Purely digital spread-
spectrum despreading chips can contain more than several million equivalent 2-input
NAND gates, switching at several tens of megahertz.


Different Modulation Spreading Techniques for Spread Spectrum


Different spread-spectrum techniques are distinguished according to the point in the
system at which a PRN is inserted in the communication channel. This is very basically
illustrated in the RF front-end schematic in Figure 12.




Figure 12. Several spreading techniques are applied at different stages of the transmit
chain.
If the PRN is inserted at the data level, this is the direct-sequence form of spread
spectrum (DSSS). (In practice, the pseudo-random sequence is mixed or multiplied with
the information signal, giving an impression that the original data flow was "hashed" by
the PRN.) If the PRN acts at the carrier-frequency level, this is the frequency-hopping
form of spread spectrum (FHSS). Applied at the LO stage, FHSS PRN codes force the
carrier to change or "hop" according to the pseudo-random sequence. If the PRN acts as
an on/off gate to the transmitted signal, this is a time-hopping spread-spectrum technique
(THSS). There is also the "chirp" technique, which linearly sweeps the carrier frequency
in time. One can mix all the above techniques to form a hybrid spread-spectrum
technique, such as DSSS + FHSS. DSSS and FHSS are the two techniques most in use
today.
Direct-Sequence Spread Spectrum (DSSS)
With the DSSS technique, the PRN is applied directly to data entering the carrier
modulator. The modulator, therefore, sees a much larger bit rate, which corresponds to
the chip rate of the PRN sequence. Modulating an RF carrier with such a code sequence
produces a direct-sequence-modulated spread spectrum with ((sin x)/x)² frequency
spectrum, centered at the carrier frequency. The main lobe of this spectrum (null to null)
has a bandwidth twice the clock rate of the modulating code, and the side lobes have null-
to-null bandwidths equal to the code's clock rate. Illustrated in Figure 13 is the most
common type of direct-sequence-modulated spread-spectrum signal. Direct-sequence
spectra vary somewhat in spectral shape, depending on the actual carrier and data
modulation used. Below is a binary phase shift keyed (BPSK) signal, which is the most
common modulation type used in direct-sequence systems.




Figure 13. Spectrum-analyzer photo of a DSSS signal.
Note the original signal (nonspread) would only occupy half of the central lobe.
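To make the "hashing" of the data by the PRN concrete, the toy baseband model below spreads each data bit with a short chip sequence and then despreads it by correlation. The chip sequence, chip rate and bit values are assumed for illustration only; this is a sketch, not a full DSSS modem.

# Toy baseband DSSS model: data bits and chips are represented as +1/-1 levels.
chips_per_bit = 8
prn = [1, -1, 1, 1, -1, 1, -1, -1]     # assumed short PRN chip sequence

def spread(bits):
    # Multiply each data bit by the PRN chip sequence (the spreading operation).
    return [b * c for b in bits for c in prn]

def despread(chips):
    # Correlate each group of chips with the same PRN to recover the data bits.
    bits = []
    for i in range(0, len(chips), chips_per_bit):
        corr = sum(c * p for c, p in zip(chips[i:i + chips_per_bit], prn))
        bits.append(1 if corr > 0 else -1)
    return bits

data = [1, -1, -1, 1]
tx = spread(data)            # transmitted bandwidth is expanded by the chip rate (a factor of 8 here)
print(despread(tx) == data)  # True: despreading with the right key restores the original bits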
Frequency-Hopping Spread Spectrum (FHSS)
The FHSS method does exactly what its name implies—it causes the carrier to hop from
frequency to frequency over a wide band according to a sequence defined by the PRN.
The speed at which the hops are executed depends on the data rate of the original
information. One can, however, distinguish between fast frequency hopping (FFHSS) and slow, low-rate frequency hopping (LFHSS). The latter method, the most common, allows several
consecutive data bits to modulate the same frequency. FFHSS is characterized by several
hops within each data bit.
The transmitted spectrum of a frequency-hopping signal is quite different from that of a
direct-sequence system. Instead of a ((sin x)/x)²-shaped envelope, the frequency hopper's
output is flat over the band of frequencies used (see Figure 14). The bandwidth of a
frequency-hopping signal is simply N times the number of frequency slots available,
where N is the bandwidth of each hop channel.




Figure 14. Spectrum-analyzer photo of an FHSS signal.
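A minimal sketch of how a PRN sequence can drive the carrier choice in FHSS follows. The channel plan, hop count and seed are invented for illustration; in a real system they are fixed by the standard in use, and the seed plays the role of the shared key.

import random

# Hypothetical channel plan: 16 hop channels of 1 MHz each starting at 2.400 GHz.
base_hz = 2.400e9
channel_bw_hz = 1e6
num_channels = 16

rng = random.Random(12345)   # the seed acts as the shared spread-spectrum key

def next_hop_frequency():
    # Pick the next carrier frequency according to the pseudo-random sequence.
    channel = rng.randrange(num_channels)
    return base_hz + channel * channel_bw_hz

for _ in range(5):
    print(round(next_hop_frequency() / 1e6, 1), "MHz")
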
Time-Hopping Spread Spectrum (THSS)




Figure 15. THSS block diagram.
Figure 15 illustrates THSS, a method not well developed today. Here the on and off
sequences applied to the PA are dictated according to the PRN sequence.
Implementations and Conclusions
A complete spread-spectrum communication link requires various advanced and up-to-
date technologies and disciplines: an RF antenna, a powerful and efficient PA, a low-
noise and highly linear LNA, compact transceivers, high-resolution ADCs and DACs,
rapid low-power digital signal processing (DSP), etc. Though designers and manufacturers compete, they are also joining forces to implement spread-spectrum systems.
The most difficult area is the receiver path, especially at the despreading level for DSSS,
because the receiver must be able to recognize the message and synchronize with it in
real time. The operation of code recognition is also called correlation. Because
correlation is performed at the digital-format level, the tasks are mainly complex
arithmetic   calculations   including   fast,   highly   parallel,   binary   additions   and
multiplications. The most difficult aspect of today's receiver design is synchronization.
More time, effort, research, and money have gone toward developing and improving
synchronization techniques than toward any other aspect of spread-spectrum
communications. Several methods can solve the synchronization problem, and many of
them require a large number of discrete components to implement. Perhaps the biggest
breakthroughs have occurred in DSP and in application-specific integrated circuits
(ASICs). DSP provides high-speed mathematical functions that analyze, synchronize, and
decorrelate a spread-spectrum signal after slicing it in many small parts. ASIC chips drive
down costs with VLSI technology and by the creation of generic building blocks suitable
for any type of application.
UNIT V: SATELLITE AND OPTICAL COMMUNICATION


The use of satellites in communication systems is very much a fact of everyday life, as evidenced by the many homes equipped with antennas and dishes for the reception of satellite television. What may not be as well known is that satellites also form an essential part of communication systems worldwide, carrying large amounts of data and telephone traffic in addition to television signals. Satellites offer a number of important features that are not readily available with other means of communication. Some of them are enumerated below.




1. A very large area of the Earth (about 42%) is visible from a satellite, i.e. communication is possible beyond the Earth's curvature (beyond the line of sight).
2. Satellites offer communication with remote communities in sparsely populated areas, which are difficult to access by other means of communication.
3. Satellite communication ignores political as well as geographical boundaries.
4. Satellites provide communication with moving aircraft from ground control stations across the country.
5. Satellites provide remote sensing, i.e. detection of water pollution, oil fields, monitoring and reporting of weather conditions, etc.
6. A combination of three satellites, with the ability to relay messages from one to the other, could interconnect virtually all of the Earth except the polar regions, as shown in figure 1.


REQUIREMENT FOR SATELLITE COMMUNICATION:
Communication between one point and another depends upon the frequency of the transmitted signal as well as the mode of propagation. Frequencies up to approximately 10 MHz were used for short-distance communication through ground-wave propagation. As frequency increases, the attenuation of the ground wave increases (the Earth starts behaving like an absorber for high-frequency signals), because of which it is not possible to establish a reliable communication link through ground waves at frequencies above about 10 MHz. Since the Earth's surface is curved, the direct waves reaching the receiving antenna are restricted by the curvature of the Earth (direct-wave communication is not possible beyond the line of sight).




The above limitation on long-distance communication requires a reflector above the Earth's surface that returns the signal towards the receiving antenna. Sky-wave propagation is possible due to the ionosphere present in the atmosphere. The ionosphere reflects transmitted signals up to a certain frequency; above that frequency the layer behaves as a transparent medium and the signal passes through it. This natural reflector provides radio-broadcasting links over a large area of the Earth beyond the line of sight. Signals with frequencies above about 30 MHz pass through the ionosphere and must be reflected back to Earth by some artificial means in order to establish a reliable link between transmitter and receiver. To fulfil the requirement of high-frequency, long-distance communication across the globe, an artificial reflector (a satellite) above the ionosphere is required. Satellites are therefore used for relaying signals with frequencies above 30 MHz. The transponders in the satellite receive the signal and, after signal conditioning (noise suppression and amplification), retransmit it back to the ground for reception. The frequency at which the signal is transmitted from the ground to the satellite is known as the uplink frequency, and the frequency of the signal transmitted from the satellite to the ground is known as the downlink frequency. By international agreement, the uplink frequency is always higher than the downlink frequency. It is to be noted that as the frequency of communication increases, the size of the transmitting and receiving antennas, as well as the size of the electronic components required, decreases drastically (roughly in inverse proportion).

Kepler's laws of planetary motion




Figure 1: Illustration of Kepler's three laws with two planetary orbits. (1) The orbits are ellipses, with focal points ƒ1 and ƒ2 for the first planet and ƒ1 and ƒ3 for the second planet. The Sun is placed in focal point ƒ1. (2) The two shaded sectors A1 and A2 have the same surface area, and the time for planet 1 to cover segment A1 is equal to the time to cover segment A2. (3) The total orbit times for planet 1 and planet 2 have a ratio a1^(3/2) : a2^(3/2).

In astronomy, Kepler's laws give a description of the motion of planets around the Sun.

Kepler's laws are:

   1. The orbit of every planet is an ellipse with the Sun at one of the two foci.
   2. A line joining a planet and the Sun sweeps out equal areas during equal intervals
       of time.[1]
   3. The square of the orbital period of a planet is directly proportional to the cube of
       the semi-major axis of its orbit.

Kepler's laws are strictly valid only for a lone zero-mass object (not affected by the gravity of other planets) orbiting the Sun, which is a physical impossibility. Nevertheless, Kepler's laws form a useful starting point for calculating the orbits of planets that do not deviate too much from these restrictions.
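Kepler's third law lets one compare two orbits around the same body without knowing its mass: T1/T2 = (a1/a2)^(3/2). As a small illustrative check (the semi-major axis of Mars is an assumed textbook value), the ratio of the Martian and terrestrial years follows directly.

# Kepler's third law: the ratio of orbital periods depends only on the
# ratio of semi-major axes, T1/T2 = (a1/a2) ** 1.5
a_mars_au = 1.524               # semi-major axis of Mars in astronomical units (Earth = 1)
period_ratio = a_mars_au ** 1.5
print(round(period_ratio, 2))   # about 1.88, i.e. the Martian year is roughly 1.88 Earth years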

Isaac Newton solidified Kepler's laws by showing that they were a natural consequence
of his inverse square law of gravity with the limits set in the previous paragraph. Further,
Newton extended Kepler's laws in a number of important ways such as allowing the
calculation of orbits around other celestial bodies.

Johannes Kepler published his first two laws in 1609, having found them by
analyzing the astronomical observations of Tycho Brahe.[2] Kepler did not discover his
third law until many years later, and it was published in 1619.[2] At the time, Kepler's
laws were radical claims; the prevailing belief (particularly in epicycle-based theories)
was that orbits should be based on perfect circles. Most of the planetary orbits can be
rather closely approximated as circles, so it is not immediately evident that the orbits are
ellipses. Detailed calculations for the orbit of the planet Mars first indicated to Kepler its
elliptical shape, and he inferred that other heavenly bodies, including those farther away
from the Sun, have elliptical orbits too. Kepler's laws and his analysis of the observations
on which they were based, the assertion that the Earth orbited the Sun, proof that the
planets' speeds varied, and use of elliptical orbits rather than circular orbits with
epicycles—challenged the long-accepted geocentric models of Aristotle and Ptolemy, and
generally supported the heliocentric theory of Nicolaus Copernicus (although Kepler's
ellipses likewise did away with Copernicus's circular orbits and epicycles).[2]

Almost a century later, Isaac Newton proved that relationships like Kepler's would apply
exactly under certain ideal conditions that are to a good approximation fulfilled in the
solar system, as consequences of Newton's own laws of motion and law of universal
gravitation.[3][4] Because of the nonzero planetary masses and resulting perturbations,
Kepler's laws apply only approximately and not exactly to the motions in the solar
system.[3][5] Voltaire's Eléments de la philosophie de Newton (Elements of Newton's
Philosophy) was in 1738 the first publication to call Kepler's Laws "laws".[6] Together
with Newton's mathematical theories, they are part of the foundation of modern
astronomy and physics.

Low Earth orbit




An orbiting cannon ball showing various sub-orbital and orbital possibilities.
Various earth orbits to scale; light blue represents low earth orbit.




Roughly half an orbit of the ISS.

A low Earth orbit (LEO) is generally defined as an orbit within the locus extending from the Earth's surface up to an altitude of 2,000 km. Given the rapid orbital decay of objects below approximately 200 km, the commonly accepted definition for LEO is between 160–2,000 km (100–1,240 miles) above the Earth's surface.[1][2]

With the exception of the lunar flights of the Apollo program, all human spaceflights
have either been orbital in LEO or sub-orbital. The altitude record for a human
spaceflight in LEO was Gemini 11 with an apogee of 1,374.1 km.


   
Orbital characteristics

Objects in LEO encounter atmospheric drag in the form of gases in the thermosphere (approximately 80–500 km up) or exosphere (approximately 500 km and up), depending on orbit height. LEO lies between the dense part of the atmosphere and the inner Van Allen radiation belt. The altitude is usually not less than 300 km, because lower orbits would be impractical due to the greater atmospheric drag.

Equatorial low Earth orbits (ELEO) are a subset of LEO. These orbits, with low
inclination to the Equator, allow rapid revisit times and have the lowest delta-v
requirement of any orbit. Orbits with a high inclination angle are usually called polar
orbits.

Higher orbits include medium Earth orbit (MEO), sometimes called intermediate circular
orbit (ICO), and further above, geostationary orbit (GEO). Orbits higher than low orbit
can lead to earlier failure of electronic components due to intense radiation and charge
accumulation.

Human use

The International Space Station is in a LEO that varies from 319.6 km (199 mi) to
346.9 km (216 mi) above the Earth's surface.

While a majority of artificial satellites are placed in LEO, where they travel at about
27,400 km/h (8 km/s), making one complete revolution around the Earth in about 90
minutes, many communication satellites require geostationary orbits, and move at the
same angular velocity as the Earth. Since it requires less energy to place a satellite into a
LEO and the LEO satellite needs less powerful amplifiers for successful transmission,
LEO is still used for many communication applications. Because these LEO orbits are not
geostationary, a network (or "constellation") of satellites is required to provide
continuous coverage. Lower orbits also aid remote sensing satellites because of the added
detail that can be gained. Remote sensing satellites can also take advantage of sun-
synchronous LEO orbits at an altitude of about 800 km (500 mi) and near polar
inclination. ENVISAT is one example of an Earth observation satellite that makes use of
this particular type of LEO.

Although the Earth's pull due to gravity in LEO is not much less than on the surface of
the Earth, people and objects in orbit experience weightlessness due to the effects of
freefall.

Atmospheric and gravity drag associated with launch typically add 1,500–2,000 m/s to the launch-vehicle delta-v required to reach the normal LEO orbital velocity of around 7,800 m/s (17,448 mph).
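The quoted LEO figures (about 7,800 m/s and a roughly 90-minute period) follow from the circular-orbit relation v = sqrt(μ/r). A quick check, with an assumed altitude of 300 km, is sketched below.

import math

MU_EARTH = 3.986004418e14   # geocentric gravitational constant, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m

altitude = 300e3            # assumed LEO altitude, m
r = R_EARTH + altitude

v = math.sqrt(MU_EARTH / r)            # circular orbital speed, m/s
period_min = 2 * math.pi * r / v / 60  # orbital period, minutes

print(round(v))              # about 7,730 m/s, close to the figure quoted above
print(round(period_min, 1))  # about 90.4 minutes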

Geostationary orbit




Geostationary orbit. To an observer on the rotating Earth, the satellite appears stationary at a fixed point in the sky; a second satellite is likewise geostationary above its own point on the Earth (top-down and side views of two geostationary satellites).

A geostationary orbit (or Geostationary Earth Orbit - GEO) is a geosynchronous orbit
directly above the Earth's equator (0° latitude), with a period equal to the Earth's
rotational period and an orbital eccentricity of approximately zero. An object in a
geostationary orbit appears motionless, at a fixed position in the sky, to ground observers.
Communications satellites and weather satellites are often given geostationary orbits, so
that the satellite antennas that communicate with them do not have to move to track them,
but can be pointed permanently at the position in the sky where they stay. Due to the
constant 0° latitude and circularity of geostationary orbits, satellites in GEO differ in
location by longitude only.

The notion of a geosynchronous satellite for communication purposes was first published
in 1928 (but not widely so) by Herman Potočnik.[1] The idea of a geostationary orbit was
first disseminated on a wide scale in a 1945 paper entitled "Extra-Terrestrial Relays —
Can Rocket Stations Give Worldwide Radio Coverage?" by British science fiction writer
Arthur C. Clarke, published in Wireless World magazine. The orbit, which Clarke first
described as useful for broadcast and relay communications satellites,[2] is sometimes
called the Clarke Orbit.[3] Similarly, the Clarke Belt is the part of space about 35,786 km
(22,000 mi) above sea level, in the plane of the equator, where near-geostationary orbits
may be implemented. The Clarke Orbit is about 265,000 km (165,000 mi) long.


              Geostationary orbits are useful because they cause a satellite to appear
       stationary with respect to a fixed point on the rotating Earth, allowing a fixed
       antenna to maintain a link with the satellite. The satellite orbits in the direction of
       the Earth's rotation, at an altitude of 35,786 km (22,236 mi) above ground,
       producing an orbital period equal to the Earth's period of rotation, known as the
       sidereal day.
A 5 x 6 degrees view of a part of the geostationary belt, showing several geostationary
satellites. Those with inclination 0 degrees form a diagonal belt across the image: a few
objects with small inclinations to the equator are visible above this line. Note how the
satellites are pinpoint, while stars have created small trails due to the Earth's rotation.

A geostationary orbit can only be achieved at an altitude very close to 35,786 km (22,236
mi), and directly above the equator. This equates to an orbital velocity of 3.07 km/s
(1.91 mi/s) or a period of 1,436 minutes, which equates to almost exactly one sidereal day
or 23.934461223 hours. This makes sense considering that the satellite must be locked to
the Earth's rotational period in order to have a stationary footprint on the ground. In
practice, this means that all geostationary satellites have to exist on this ring, which poses
problems for satellites that will be decommissioned at the end of their service lives (e.g.,
when they run out of thruster fuel). Such satellites will either continue to be used in
inclined orbits (where the orbital track appears to follow a figure-eight loop centered on
the equator), or else be elevated to a "graveyard" disposal orbit.

A geostationary transfer orbit is used to move a satellite from low Earth orbit (LEO) into
a geostationary orbit.

A worldwide network of operational geostationary meteorological satellites is used to
provide visible and infrared images of Earth's surface and atmosphere. These satellite
systems include:

               the United States GOES
               Meteosat, launched by the European Space Agency and operated by EUMETSAT,
         the European Organisation for the Exploitation of Meteorological Satellites
               the Japanese MTSAT
               India's INSAT series

Most commercial communications satellites, broadcast satellites and SBAS satellites
operate in geostationary orbits. (Russian television satellites have used elliptical Molniya
and Tundra orbits due to the high latitudes of the receiving audience.) The first satellite
placed into a geostationary orbit was the Syncom-3, launched by a Delta-D rocket in
1964.

A statite, a hypothetical satellite that uses a solar sail to modify its orbit, could
theoretically hold itself in a "geostationary" orbit with different altitude and/or inclination
from the "traditional" equatorial geostationary orbit.

Derivation of geostationary altitude

In any circular orbit, the centripetal force required to maintain the orbit is provided by the gravitational force on the satellite. To calculate the geostationary orbit altitude, one begins with this equivalence, and uses the fact that the orbital period is one sidereal day:

F_centripetal = F_gravitational
By Newton's second law of motion, we can replace the forces F with the mass m of the object multiplied by the acceleration felt by the object due to that force:

m × a_centripetal = m × g
We note that the mass of the satellite m appears on both sides — geostationary orbit is
independent of the mass of the satellite.[4] So calculating the altitude simplifies into
calculating the point where the magnitudes of the centripetal acceleration required for
orbital motion and the gravitational acceleration provided by Earth's gravity are equal.

The centripetal acceleration's magnitude is:

a_centripetal = ω² × r
where ω is the angular speed, and r is the orbital radius as measured from the Earth's
center of mass.

The magnitude of the gravitational acceleration is:

g = G M / r²

where M is the mass of Earth, 5.9736 × 10²⁴ kg, and G is the gravitational constant, 6.67428 ± 0.00067 × 10⁻¹¹ m³ kg⁻¹ s⁻².

Equating the two accelerations gives:

ω² × r = G M / r²,  so that  r³ = G M / ω²

The product GM is known with much greater precision than either factor alone; it is known as the geocentric gravitational constant μ = 398,600.4418 ± 0.0008 km³ s⁻²:

r = (μ / ω²)^(1/3)
The angular speed ω is found by dividing the angle travelled in one revolution (360° = 2π rad) by the orbital period (the time it takes to make one full revolution: one sidereal day, or 86,164.09054 seconds).[5] This gives:

ω = 2π rad / 86,164.09054 s ≈ 7.2921 × 10⁻⁵ rad/s
The resulting orbital radius is 42,164 kilometres (26,199 mi). Subtracting the Earth's
equatorial radius, 6,378 kilometres (3,963 mi), gives the altitude of 35,786 kilometres
(22,236 mi).

Orbital speed (how fast the satellite is moving through space) is calculated by multiplying the angular speed by the orbital radius:

v = ω × r ≈ 7.2921 × 10⁻⁵ rad/s × 42,164 km ≈ 3.07 km/s (1.91 mi/s)
Now, by the same formula, let us find the geostationary orbit of an object in relation to Mars (an areostationary orbit). The geocentric gravitational constant GM (which is μ) for Mars has the value of 42,828 km³ s⁻², and the known rotational period (T) of Mars is 88,642.66 seconds. Since ω = 2π/T, using the formula above, the value of ω is found to be approximately 7.088218 × 10⁻⁵ s⁻¹. Thus, r³ = 8.5243 × 10¹² km³, whose cube root is 20,427 km; subtracting the equatorial radius of Mars (3,396.2 km) gives an altitude of 17,031 km.
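The derivation above is easy to check numerically. The sketch below reproduces the Earth and Mars figures using r = (μ/ω²)^(1/3) with the constants given in the text.

import math

def stationary_orbit_radius_km(mu_km3_s2, rotation_period_s):
    # Radius of a synchronous circular orbit: r = (mu / omega^2) ** (1/3)
    omega = 2 * math.pi / rotation_period_s
    return (mu_km3_s2 / omega ** 2) ** (1 / 3)

# Earth: mu = 398,600.4418 km^3/s^2, sidereal day = 86,164.09054 s
r_earth = stationary_orbit_radius_km(398600.4418, 86164.09054)
print(round(r_earth))   # about 42,164 km, i.e. roughly 35,786 km above the equator

# Mars: mu = 42,828 km^3/s^2, rotation period = 88,642.66 s
r_mars = stationary_orbit_radius_km(42828, 88642.66)
print(round(r_mars))    # about 20,427 km, i.e. roughly 17,031 km above the Martian equator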

Practical limitations

A combination of lunar gravity, solar gravity, and the flattening of the Earth at its poles is
causing a precession motion of the orbit plane of any geostationary object with a period
of about 53 years and an initial inclination gradient of about 0.85 degrees per year,
achieving a maximum inclination of 15 degrees after 26.5 years. To correct for this
orbital perturbation, regular orbital stationkeeping maneuvers are necessary, amounting
to a delta-v of approximately 50 m/s per year.

The second effect to be taken into account is the longitude drift, caused by the asymmetry
of the earth - the equator is slightly elliptical. There are two stable (at 75.3°E, and at
104.7°W) and two unstable (at 165.3°E, and at 14.7°W) equilibrium points. Any
geostationary object placed between the equilibrium points would (without any action) be
slowly accelerated towards the stable equilibrium position, causing a periodic longitude
variation. The correction of this effect requires orbit control maneuvers with a maximum
delta-v of about 2 m/s per year, depending on the desired longitude.

In the absence of servicing missions from the Earth, the consumption of thruster
propellant for station-keeping places a limitation on the lifetime of the satellite.

Communications

Satellites in geostationary orbits are far enough away from Earth that communication
latency becomes very high — about a quarter of a second for a one-way trip from one
ground based transmitter to another via the geostationary satellite; close to half a second
for round-trip communication between two earth stations.
For example, for ground stations at latitudes of φ = ±45° on the same meridian as the satellite, the one-way delay can be computed by using the cosine rule, given the above derived geostationary orbital radius r, the Earth's radius R and the speed of light c, as

Δt = 2 × √(R² + r² − 2 R r cos φ) / c ≈ 253 ms
This presents problems for latency-sensitive applications such as voice communication or
online gaming.
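Assuming the geometry described above (two ground stations at ±45° latitude on the satellite's meridian), the one-way delay can be estimated with a few lines of arithmetic.

import math

R = 6378.0        # Earth's equatorial radius, km
r = 42164.0       # geostationary orbital radius, km
c = 299792.458    # speed of light, km/s
phi = math.radians(45)

# Slant range from a ground station at latitude phi to the satellite (cosine rule).
slant_km = math.sqrt(R**2 + r**2 - 2 * R * r * math.cos(phi))

one_way_s = 2 * slant_km / c   # up to the satellite and down to the other station
print(round(one_way_s * 1000)) # about 253 ms; a round trip is close to half a second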

Orbit allocation

Satellites in geostationary orbit must all occupy a single ring above the equator. The
requirement to space these satellites apart to avoid harmful radio-frequency interference
during operations means that there are a limited number of orbital "slots" available, thus
only a limited number of satellites can be operated in geostationary orbit. This has led to
conflict between different countries wishing access to the same orbital slots (countries at
the same longitude but differing latitudes) and radio frequencies. These disputes are
addressed    through     the   International    Telecommunication       Union's    allocation
mechanism.[7] Countries located at the Earth's equator have also asserted their legal
claim to control the use of space above their territory.[8]

Footprint (satellite)

The footprint of a communications satellite is the ground area over which its transponders offer coverage; it determines the satellite dish diameter required to receive each transponder's signal. There is usually a different map for each transponder (or group of transponders), as each may be aimed to cover different areas of the ground.

Footprint maps usually show either the estimated minimal satellite dish diameter required
or the signal strength in each area measured in dBW.

OPTICAL COMMUNICATION
Transmission Capacity

The potential transmission capacity of optical fibre is enormous. Looking again at Figure 14 on page 32, both the medium and long wavelength bands are very low in loss. The medium wavelength band (second window) is about 100 nm wide and ranges from 1250 nm to 1350 nm (loss of about 0.4 dB per km). The long wavelength band (third window) is around 150 nm wide and ranges from 1450 nm to 1600 nm (loss of about 0.2 dB per km). The loss peaks at 1250 and 1400 nm are due to traces of water in the glass. The useful (low-loss) range is therefore around 250 nm.


Expressed in terms of analogue bandwidth, a 1 nm wide waveband at 1500 nm has a bandwidth of about 133 GHz. A 1 nm wide waveband at 1300 nm has a bandwidth of 177 GHz. In total, this gives a usable range of about 30 terahertz (3 × 10¹³ Hz).
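The GHz figures above come from converting a small wavelength interval into a frequency interval, Δf ≈ c·Δλ/λ². A quick check is sketched below.

C = 3.0e8   # speed of light, m/s

def bandwidth_hz(centre_wavelength_m, delta_wavelength_m):
    # Frequency width of a small wavelength interval: df = c * d(lambda) / lambda^2
    return C * delta_wavelength_m / centre_wavelength_m ** 2

print(bandwidth_hz(1500e-9, 1e-9) / 1e9)   # about 133 GHz per nm at 1500 nm
print(bandwidth_hz(1300e-9, 1e-9) / 1e9)   # about 177 GHz per nm at 1300 nm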


Capacity depends on the modulation technique used. In the electronic world we
are used to getting a digital bandwidth of up to 8 bits per Hz of analog bandwidth.
In the optical world, that objective is a long way off (and a trifle unnecessary). But
assuming that a modulation technique resulting in one bit per Hz of analog
bandwidth is available, then we can expect a digital bandwidth of 3 × 10¹³ bits per
second. Current technology limits electronic systems to a rate of about 10 Gbps, although
higher speeds are being experimented with in research. Current practical fibre
systems are also limited to this speed because of the speed of the electronics
needed for transmission and reception. The above suggests that, even if fibre quality is
not improved, we could get 10,000 times greater throughput from a single fibre than the
current practical limit.
Types




Multimode Step-Index

Multimode Graded-Index

Single-Mode (Step-Index)

The difference between them is in the way light travels along the fibre. The top section of the figure shows the operation of "multimode" fibre. There are two different parts to the fibre. In the figure, there is a core of 50 microns (μm) in diameter and a cladding of 125 μm in diameter. (Fibre size is normally quoted as the core diameter followed by the cladding diameter. Thus the fibre in the figure is identified as 50/125.) The cladding surrounds the core. The cladding glass has a different (lower) refractive index than that of the core, and the boundary forms a mirror. This is the effect you see when looking upward from underwater. Except for the part immediately above, the junction of the water and the air appears silver, like a mirror.

Light is transmitted (with very low loss) down the fibre by reflection from the mirror boundary between the core and the cladding. This phenomenon is called "total internal reflection". Perhaps the most important characteristic is that the fibre will bend around corners to a radius of only a few centimetres without any loss of the light.




The expectation of many people is that if you shine a light down a fibre, then the light
will enter the fibre at an infinitely large number of angles and propagate by internal
reflection over an infinite number of possible paths. This is not true. What happens is that
there is only a finite number of possible paths for the light to take. These paths are called "modes" and identify the general characteristic of the light transmission system being used. Fibre that has a core diameter large enough for the light used to find multiple paths is called "multimode" fibre. For a fibre with a core diameter of 62.5 microns using light of wavelength 1300 nm, the number of modes is around 400, depending on the difference in refractive index between the core and the cladding. The problem with multimode operation is that some of the paths taken by particular modes are longer than other paths. This means that light will arrive at different times according to the path taken. Therefore the pulse tends to disperse (spread out) as it travels through the fibre. This effect is one cause of "intersymbol interference". This restricts the distance that a pulse can be usefully sent over multimode fibre.
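The mode counts quoted in this section can be estimated from the normalised frequency (V number), V = 2π·a·NA/λ, with roughly V²/2 modes in step-index fibre and V²/4 in graded-index fibre. The numerical aperture below is an assumed typical value for 62.5/125 fibre, not a figure taken from the text.

import math

def v_number(core_radius_m, numerical_aperture, wavelength_m):
    # Normalised frequency of a fibre: V = 2 * pi * a * NA / lambda
    return 2 * math.pi * core_radius_m * numerical_aperture / wavelength_m

a = 31.25e-6    # core radius of 62.5 micron fibre, m
na = 0.275      # assumed typical numerical aperture for 62.5/125 fibre
lam = 1300e-9   # operating wavelength, m

v = v_number(a, na, lam)
print(round(v ** 2 / 2))   # step-index estimate: about 860 modes
print(round(v ** 2 / 4))   # graded-index estimate: about 430 modes, close to the ~400 quoted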




One way around the problem of (modal) dispersion in multimode fibre is to do something
to the glass such that the refractive index of the core changes gradually from the centre to
the edge. Light travelling down the center of the fibre experiences a higher refractive
index than light that travels further out towards the cladding. Thus light on the physically
shorter paths (modes) travels more slowly than light on physically longer paths.
The light follows a curved trajectory within the fibre as illustrated in the figure. The aim
of this is to keep the speed of propagation of light on each path the same with respect to
the axis of the fibre. Thus a pulse of light composed of many modes stays together as it
travels through the fibre. This allows transmission for longer distances than does regular
multimode transmission. This type of fibre is called "Graded Index" fibre. Within a GI fibre light typically travels in around 400 modes (at a wavelength of 1300 nm) or 800 modes (in the 800 nm band).

Note that only the refractive index of the core is graded. There is still a cladding of lower
refractive index than the outer part of the core.




If the fibre core is very narrow compared to the wavelength of the light in use then the light cannot travel in different modes and thus the fibre is called "single-mode" or "monomode". There is no longer any reflection from the core-cladding boundary; rather, the electromagnetic wave is tightly held to travel down the axis of the fibre. It seems obvious that the longer the wavelength of light in use, the larger the diameter of fibre we can use and still have light travel in a single mode. The core diameter used in a typical single-mode fibre is nine microns. It is not quite as simple as this in practice. A significant proportion (up to 20%) of the light in a single-mode fibre actually travels in the cladding. For this reason the "apparent diameter" of the core (the region in which most of the light travels) is somewhat wider than the core itself. The region in which light travels in a single-mode fibre is often called the "mode field", and the mode field diameter is quoted instead of the core diameter. The mode field varies in diameter depending on the relative refractive indices of core and cladding. This also governs losses at bends in the fibre: as the core diameter decreases compared to the wavelength (the core gets narrower or the wavelength gets longer), the minimum radius to which we can bend the fibre without loss increases. If a bend is too sharp, the light just comes out of the core into the outer parts of the cladding and is lost. You can make fibre single-mode by:

• Making the core thin enough
• Making the refractive index difference between core and cladding small enough
• Using a longer wavelength
Single-mode fibre usually has significantly lower attenuation than multimode (about
half). This has nothing to do with fibre geometry or manufacture. Single-mode fibres
have a significantly smaller difference in refractive index between core and cladding.
This means that less dopant is needed to modify the refractive index as dopant is a major
source of attenuation. It's not strictly correct to talk about "single-mode fibre" and "multimode fibre" without qualifying it, although we do this all the time. A fibre is single-moded or multi-moded at a particular wavelength. If we use very long wave light (say 10.6 μm from a CO₂ laser) then even most MM fibre would be single-moded for that wavelength. If we use 600 nm light on standard single-mode fibre then we do have a greater number of modes than just one (although typically only about 3 to 5). There is a single-mode fibre characteristic called the "cutoff wavelength". This is typically around 1100 nm for single-mode fibre with a core diameter of 9 microns. The cutoff wavelength is the shortest wavelength at which the fibre remains single-moded. At wavelengths shorter than the cutoff the fibre is multimode.

When light is introduced to the end of a fibre there is a critical angle of acceptance. Light entering at a greater angle passes into the cladding and is lost. At a smaller angle the light travels down the fibre. If this is considered in three dimensions, a cone is formed around the end of the fibre within which all accepted rays are contained. The sine of this angle is called the "numerical aperture" and is one of the important characteristics of a given fibre.
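Numerical aperture can be computed from the core and cladding refractive indices as NA = sqrt(n1² - n2²), and the acceptance half-angle is then arcsin(NA). The index values below are assumed typical figures, used only for illustration.

import math

def numerical_aperture(n_core, n_cladding):
    # NA = sqrt(n_core^2 - n_cladding^2)
    return math.sqrt(n_core ** 2 - n_cladding ** 2)

n1, n2 = 1.480, 1.460            # assumed core and cladding refractive indices
na = numerical_aperture(n1, n2)
half_angle_deg = math.degrees(math.asin(na))

print(round(na, 3))              # about 0.242
print(round(half_angle_deg, 1))  # acceptance half-angle of about 14 degrees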

Single-mode fibre has a core diameter of 4 to 10 μm (8 μm is typical). Multimode fibre
can have many core diameters but in the last few years the core diameter of 62.5 μm in
the US and 50 μm outside the US has become predominant. However, the use of 62.5 μm
fibre outside the US is gaining popularity - mainly due to the availability of equipment
(designed for the US) that uses this type of fibre.




Figure 19 shows the refractive index profiles of some different types of fibre.

RI Profile of Multimode Step-Index Fibre

Today's standard MM SI fibre has a core diameter of either 62.5 or 50 microns with an overall cladding diameter in either case of 125 microns. Thus it is referred to as 50/125 or 62.5/125 micron fibre. Usually the core is SiO₂ doped with about 4% of GeO₂. The cladding is usually just pure silica. There is an abrupt change in refractive index between core and cladding.

The bandwidth·distance product is a measure of the signal-carrying capacity of the fibre. This is discussed further in 7.6.1, "Maximum Propagation Distance on Multimode Fibre" on page 335. The bandwidth·distance product for standard step-index multimode fibre varies between about 15 MHz·km and 50 MHz·km, depending on the wavelength in use, the core diameter and the RI contrast between core and cladding.
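The bandwidth·distance product trades link length against usable bandwidth: halving the distance roughly doubles the bandwidth available. The sketch below uses assumed figures of the same order as those quoted in this section.

def usable_bandwidth_mhz(bw_distance_product_mhz_km, link_length_km):
    # Approximate usable bandwidth over a link of the given length.
    return bw_distance_product_mhz_km / link_length_km

# Standard step-index multimode fibre, taking 20 MHz.km as a representative value
print(usable_bandwidth_mhz(20, 0.5))    # about 40 MHz over a 500 m link

# Graded-index 62.5 micron fibre at 1300 nm (500 MHz.km, quoted in the following section)
print(usable_bandwidth_mhz(500, 2.0))   # about 250 MHz over a 2 km link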

RI Profile of Multimode Graded Index Fibre

Graded index fibre has the same dimensions as step index fibre. The refractive index of the core changes slowly between the fibre axis and the cladding. This is achieved by using a varying level of dopant across the diameter of the core. Note the gradations are not linear - they follow a "parabolic" index profile. It is important to realise that GI fibre is relatively difficult to make and is therefore significantly more expensive than step index fibre (either MM or SM). The usual bandwidth·distance product for 62.5 micron GI MM fibre is 500 MHz·km at 1300 nm. In the 800 nm band the bandwidth·distance product is typically much less, at 160 MHz·km. For MM GI fibre with a core diameter of 50 microns the bandwidth·distance product is 700 MHz·km (again at 1300 nm). Recently (1997) MM GI fibre with significantly improved characteristics has become available. A bandwidth·distance product figure for 62.5 micron fibre is advertised as 1,000 MHz·km and for 50 micron fibre of 1,200 MHz·km. This is a result of improved fibre manufacturing techniques and better process control. Of course these fibres are considered "premium" fibres and are priced accordingly.

RI Profile of Single-Mode Fibre

Single-mode fibre is characterised by its narrow core size. This is done to ensure that only one mode (well, actually two if you count the two orthogonal polarisations as separate modes) can propagate. The key parameter of SM fibre is not the core size but rather the "Mode Field Diameter". (This is discussed further in 2.4.1.1, "Mode Field Diameter (MFD) and Spot Size".)

Core size is usually between 8 and 10 microns, although special-purpose SM fibres are often used with core sizes as low as 4 microns. The RI difference between core and cladding is typically very small (around 0.01). This is done to help minimise attenuation. You can achieve the index difference either by doping the core to raise its RI (say with GeO₂) or by doping the cladding (say with fluoride) to lower its RI. Dopants in both core and cladding affect attenuation and therefore it's not a simple thing to decide. There are many different core and cladding compositions in use. Bandwidth·distance product is not a relevant concept for single-mode fibre as there is no modal dispersion (although there is chromatic dispersion).
The refractive index of fibres is changed and manipulated by adding various "dopants" to the basic SiO₂ glass. These can have various effects:
• Some dopants increase the refractive index and others decrease it. This is the primary reason we use dopants.
• All dopants increase attenuation of the fibre. Thus dopants are to be avoided (or at least minimised) if attenuation is important for the fibre's application. It is almost always very important.
We might expect that since the light travels in the core, dopant levels in the cladding may not make too much difference. Wrong! In single-mode fibre a quite large proportion of the optical power (electromagnetic field) travels in the cladding. In single-mode fibre, attenuation and speed of propagation are strongly influenced by the characteristics of the cladding glass. In multimode graded index fibre the core is doped anyway (albeit at different levels), so (for multimode) it is an issue even in the core.

In multimode step-index fibre there is an "evanescent field" set up in the cladding every time a ray is reflected. This is an electromagnetic field and is affected by the attenuation characteristics of the cladding. If we use a dopant at too high a level, not only does it change the refractive index of the glass but it also changes the coefficient of expansion. This means that in operational conditions, if we use too much dopant, the cladding may crack away from the core.

				