Compression Techniques and Water Marking of Digital Image using Wavelet Transform and SPIHT Coding

					                                                            (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                                          (Vol. 9 No. 3), 2011.
G. Prasanna Lakshmi
Computer Science, IBSAR
Karjat, India
Prasanalaxmi@yahoo.com

Dr. D. A. Chandulal
Professor and HOD, IBSAR
Computer Science
India
dr.chandulal@yahoo.com

Dr. KTV Reddy
Professor & Principal
Electronics & Telecommunications Dept.
Computer Science
India
ktvreddy@rediffmail.com


                      I.   INTRODUCTION

          Advances that facilitate electronic publishing and commerce also heighten threats of intellectual property theft and unlawful tampering. One approach to addressing this problem involves embedding an invisible structure into a host signal to mark its ownership. These structures are called digital watermarks, and the associated embedding process is called digital watermarking. One major driving force for research in this area is the need for effective copyright protection of digital imagery. In such an application a serial number or a message is embedded into the image to protect and to identify the copyright holder, so the objective of watermarking is an authenticity check. In this project the discrete wavelet transform of an image is used, which transforms the image into two parts: an approximation part and a detail part. Using this transformation the details of an image can be extracted, and control of those details permits identification of the invisible ones; hence a watermark can be inserted by changing only the less important details of an image. The watermark should survive image processing operations such as compression. This project compresses the image using two different techniques, Huffman coding and SPIHT coding. SPIHT (set partitioning in hierarchical trees) is a new and very fast encoding technique. The SPIHT algorithm is based on three concepts:
     a) Ordered bit plane progressive transmission.
     b) Set partitioning sorting algorithm.
     c) Spatial orientation trees.
          Also in this project each coding technique, i.e. Huffman and SPIHT, is performed using two elimination techniques of the wavelet transform, HH elimination (retaining the LL and LH bands) and H* elimination (retaining only the LL band), and the two are then compared w.r.t the objective fidelity criteria. The objective fidelity criteria are:
a) MSE (mean square error): if MSE is less, the compressed image is closer to the original image.
b) PSNR (peak signal to noise ratio): if PSNR is more, the compressed image is closer to the original image.
The amount of compression is measured using CR (compression ratio) for each elimination technique; if the compression ratio is more, the compression is more. Further, the two coding techniques were compared w.r.t encoding and decoding time.
The proposed block diagram for the compression and decompression (at transmitter and receiver) is:

                     Fig: 1.1 ENCODER

                     Fig: 1.2 DECODER


             II.   DISCRETE WAVELET TRANSFORM

          In DWT, we pass the time-domain signal through various high pass and low pass filters, which filter out either the high frequency or the low frequency portions of the signal. This procedure is repeated, and every time some portion of the signal corresponding to some frequencies is removed from the signal. There are two types of data elimination methods used with the wavelet transform, HH elimination and H* elimination, and in this project we use both. The proposed architecture for the HH and H* elimination techniques is as shown below.
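The objective fidelity criteria (MSE, PSNR) and the compression ratio used throughout this paper can be computed directly. The following is an illustrative NumPy sketch, not the project's code; the 8-bit peak value of 255 is an assumption.

```python
import numpy as np

def mse(original, compressed):
    """Mean square error between two equal-size grayscale images."""
    diff = original.astype(float) - compressed.astype(float)
    return np.mean(diff ** 2)

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher = closer to the original)."""
    e = mse(original, compressed)
    return float('inf') if e == 0 else 10 * np.log10(peak ** 2 / e)

def compression_ratio(n1, n2):
    """CR = n1/n2, where n1 and n2 count the information-carrying
    units (e.g. bits) of the input and output images."""
    return n1 / n2

# Relative data redundancy follows from CR: Rd = 1 - 1/Cr.
def redundancy(cr):
    return 1 - 1 / cr
```

A lower MSE and a higher PSNR both say the compressed image is closer to the original; a higher CR says more compression was achieved.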
                                                                   226                              http://sites.google.com/site/ijcsis/
                                                                                                    ISSN 1947-5500
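The single-level subband split that both elimination schemes operate on can be sketched with a 2-D Haar transform. This is an illustrative NumPy sketch, not the paper's implementation; exactly which bands are zeroed by HH elimination (only HH here) versus H* elimination (all of LH, HL, HH) is an assumed reading of the naming above.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar DWT: returns LL, LH, HL, HH subbands."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2          # low-pass along rows
    hi = (a[:, 0::2] - a[:, 1::2]) / 2          # high-pass along rows
    ll = (lo[0::2, :] + lo[1::2, :]) / 2        # then along columns
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2 for the averaging filters above."""
    h, w = ll.shape
    lo = np.empty((2 * h, w))
    hi = np.empty((2 * h, w))
    lo[0::2, :], lo[1::2, :] = ll + lh, ll - lh
    hi[0::2, :], hi[1::2, :] = hl + hh, hl - hh
    out = np.empty((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def eliminate(img, mode):
    """Zero detail subbands before reconstruction.
    'HH'    -> drop only the HH band (assumed reading of HH elimination)
    'Hstar' -> drop LH, HL and HH, keeping only LL (H* elimination)."""
    ll, lh, hl, hh = haar_dwt2(img)
    z = np.zeros_like(hh)
    if mode == 'HH':
        return haar_idwt2(ll, lh, hl, z)
    return haar_idwt2(ll, z, z, z)
```

With no bands dropped the transform is perfectly invertible; each dropped band trades reconstruction error (MSE) for compressibility.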
        Fig: 1.3 H* ELIMINATION TECHNIQUE

        Fig: 1.4 HH ELIMINATION TECHNIQUE


       III.     DIGITAL WATERMARKING OF STILL IMAGES

          Images are one of the most widely used categories of multimedia signals; for example, about 80% of the data transmitted over the internet are images. This is why it is very important to study digital watermarking methods for images.
          In this thesis a novel watermarking approach which embeds a watermark in the discrete wavelet domain of the image is presented. This approach provides information on the specific frequencies of the image that have been modified.


              IV.   WATERMARKING USING WAVELETS

   The discrete wavelet transform of an image transforms the image into two parts: an approximation part and a detail part. Using this transformation the details of an image can be extracted, and control of those details permits identification of the invisible ones. This is very important because changing only the less important details of an image makes it easy to insert a watermark while keeping the insertion procedure invisible. This can be a very simple and fast procedure. Transforming these details, a new image, very similar to the original one, can be obtained. This new image can be regarded as the watermarked image associated with the original one, and their difference can be considered the watermark embedded in the original image. So the discrete wavelet transform can be used to embed a watermark into an image. This is the connection between wavelet theory and the digital watermarking of images expressed in the title of this thesis.

          V.    THE WATERMARK INSERTION SYSTEM

   Various insertion techniques, such as amplitude modulation of selected frequencies, are used for inserting a message into an image. The watermark insertion system used in this thesis is based on the ASCII codes of the characters of the message to be inserted, i.e. watermarking is obtained by applying the wavelet transform and then altering the chosen frequencies of the original image according to the ASCII codes of the characters in the message. The 8-bit code of each character is embedded into the LSBs of pixels starting from some chosen location.

        VI.      COMPRESSION

         After watermarking, the next step for proper transmission is to compress the image, to reduce the bandwidth required for transmission and the memory needed to store the image. Compression refers to the process of reducing the amount of data required to represent a given quantity of information, i.e. the removal of redundant data. The data which carry no relevant information are called data redundancies, measured by

                     Rd = 1 - 1/Cr

where Cr is the compression ratio, given by

                     Cr = n1/n2

and n1 and n2 are the numbers of information-carrying units of the input and output images.
    Compression thus removes the redundancies so that the image takes less memory and less bandwidth for transmission. In a digital image, three basic redundancies are present:
   • Inter pixel redundancies
   • Psycho visual redundancies
   • Coding redundancies

        VII.     INTERPIXEL REDUNDANCIES

   This is the inter-correlation between the pixels within an image. These redundancies are eliminated by applying an image transform, which maps the original image data into another mathematical space where the data are easier to compress, being represented in fewer bits than the original image. In this project, the wavelet transform used for watermarking also removes the interpixel redundancies.

        VIII.    PSYCHOVISUAL REDUNDANCIES

   This is information which has less relative importance than other information in normal visual processing. These redundancies are removed by quantization. Quantization means mapping a broad range of input values to a limited number of output values, and it is applied to the output obtained after applying the DWT. Quantization is an irreversible
process, hence information lost cannot be regained during decompression.

         IX.      CODING REDUNDANCIES

         Coding involves mapping the discrete data from the quantizer onto a code in an optimal manner, i.e. constructing codes such that the number of bits used to represent the data is reduced, by assigning fewer bits to the more probable gray levels than to the less probable ones, which achieves compression. In this project we applied two coding techniques, Huffman coding and SPIHT coding (Set Partitioning in Hierarchical Trees).

         X.       HUFFMAN CODING

     Huffman coding is the most popular technique for removing coding redundancies. Huffman coding yields the smallest possible number of code symbols per source symbol. In terms of the noiseless coding theorem, the resulting code is optimal for a fixed value of n, subject to the constraint that the source symbols be coded one at a time. The Huffman algorithm can be described in five steps:
     1. Find the gray level probabilities for the image by computing its histogram.
     2. Order the probabilities from smallest to largest.
     3. Combine the smallest two by addition.
     4. Go to step 2 until only two probabilities are left.
     5. Working backward along the resulting tree, generate the code by alternately assigning 0 and 1.

  XI.      SPIHT CODING (SET PARTITIONING IN HIERARCHICAL TREES)

      SPIHT coding offers a new, fast and different implementation based on set partitioning in hierarchical trees, which provides better performance than other coding techniques. It has become a benchmark state-of-the-art algorithm for image compression.
   The SPIHT algorithm is based on three concepts:
        a) Ordered bit plane progressive transmission.
        b) Set partitioning sorting algorithm.
        c) Spatial orientation trees.
    SPIHT has the following advantages:
         a) Optimized for progressive image transmission
         b) Produces a fully embedded coded file
         c) Simple quantization algorithm
         d) Fast coding and decoding
         e) Wide application
         f) Good image quality, high PSNR
         g) Can code to an exact bit rate or distortion
         h) Efficient combination with error protection

                XII.     FIDELITY CRITERIA

         A repeatable or reproducible means of quantifying the nature and extent of information loss is highly desirable. Two general classes of criteria for digital images are 1) objective fidelity criteria and 2) subjective fidelity criteria. In this paper we use objective fidelity criteria:
     1) MSE (Mean Square Error): as MSE decreases, the clarity of the image increases, i.e. the compressed image is closer to the original image.
     2) PSNR (Peak Signal to Noise Ratio): as PSNR increases, the clarity of the image increases, i.e. the compressed image is closer to the original image.
     3) CR (Compression Ratio): as CR increases, we achieve more compression.


         2.        LITERATURE REVIEW

               2.1     TRANSFORMS

WHAT IS A TRANSFORM? WHY DO WE NEED TRANSFORMS?

         A transform is a mathematical operation that takes a function or sequence and maps it into another one. Transforms are used because:
a) The transform of a function may give additional or hidden information about the original function which may not be available or obvious otherwise.
b) The transform of an equation may be easier to solve than the original equation (recall Laplace transforms for differential equations).
c) The transform of a function or sequence may require less storage, hence providing data compression.
d) An operation may be easier to apply on the transformed function than on the original function (recall convolution).
         Mathematical transformations are applied to signals to obtain further information that is not readily available in the raw signal. Most signals in practice are time-domain signals in their raw format, i.e. whatever the signal is measuring is a function of time.
         In other words, when we plot the signal, one of the axes is time (the independent variable) and the other (the dependent variable) is usually the amplitude, giving a time-amplitude representation of the signal. This representation is not always the best representation of the signal for most image processing related applications.
         In many cases, the most distinctive information is hidden in the frequency content of the signal. The frequency spectrum of a signal is basically the frequency components (spectral components) of that signal; it shows what frequencies exist in the signal.
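The five Huffman steps above map directly onto a heap-based construction. The following is an illustrative sketch, not the project's implementation; the gray-level probabilities in the usage note are hypothetical.

```python
import heapq

def huffman_codes(probabilities):
    """Build Huffman codes from {symbol: probability}, following the
    steps above: repeatedly merge the two smallest probabilities, then
    read the codes off the tree by assigning 0/1 at each branch."""
    # Heap entries are (probability, tie-breaker, tree); a tree is
    # either a bare symbol or a (left, right) pair of subtrees.
    heap = [(p, i, sym)
            for i, (sym, p) in enumerate(sorted(probabilities.items()))]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate one-symbol source
        return {heap[0][2]: "0"}
    tick = len(heap)                         # unique tie-breaker counter
    while len(heap) > 1:
        p1, _, t1 = heapq.heappop(heap)      # smallest probability
        p2, _, t2 = heapq.heappop(heap)      # second smallest
        heapq.heappush(heap, (p1 + p2, tick, (t1, t2)))
        tick += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")      # alternate 0 and 1 per branch
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes
```

For instance, hypothetical gray-level probabilities {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1} yield a prefix-free code in which the most probable level gets the shortest codeword.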
      Intuitively, we all know that frequency has something to do with the rate of change of something. If something (a mathematical or physical variable would be the technically correct term) changes rapidly, we say that it is of high frequency, whereas if this variable does not change rapidly, i.e. it changes smoothly, we say that it is of low frequency. If this variable does not change at all, we say it has zero frequency. For example, the publication frequency of a daily newspaper is higher than that of a monthly magazine. Frequency is measured in cycles per second, or with a more common name, in "Hertz". Now, look at the following figures: the first one is a sine wave at 3 Hz, the second one at 10 Hz.

     Fig: 2.1.1    SINE WAVES WITH DIFFERENT FREQUENCIES

So how do we measure frequency, or how do we find the frequency content of a signal or an image?
     The answer is the FOURIER TRANSFORM (FT). If the FT of a time-domain signal is taken, the frequency-amplitude representation of that signal is obtained. In other words, we now have a plot with one axis being the frequency and the other being the amplitude, which tells us how much of each frequency exists in our signal or image. For example, if we take the FT of the electric current that we use in our houses, we get one spike at 50 Hz and nothing elsewhere, since that signal has only a 50 Hz frequency component. The following shows the FT of the 50 Hz signal:

                Fig: 2.1.2 FT OF A 50 Hz SIGNAL

        Although the FT is probably the most popular transform in use, there are many other transforms used quite often by engineers and mathematicians. The Hilbert transform, the short-time Fourier transform, Wigner distributions, the Radon transform, and of course our featured transform, the wavelet transform, constitute only a small portion of the huge list of transforms at the engineer's and mathematician's disposal. Every transformation technique has its own area of application, with advantages and disadvantages, and the wavelet transform (WT) is no exception.
        For a better understanding of the need for the WT, let us look at the FT more closely. The FT and the WT are both reversible transforms, that is, they allow going back and forth between the raw and processed (transformed) signals. However, only one kind of information is available at any given time: no frequency information is available in the time-domain signal, and no time information is available in the Fourier-transformed signal.
        The natural question that comes to mind is whether it is necessary to have both the time and the frequency information at the same time. Recall that the FT gives the frequency information of the signal, which means that it tells us how much of each frequency exists in the signal, but it does not tell us when in time these frequency components exist. This information is not required when the signal is so-called stationary, i.e. in stationary signals all frequency components that exist in the signal exist throughout its entire duration. There is 10 Hz at all times, there is 50 Hz at all times, and there is 100 Hz at all times, as shown below.
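The 50 Hz mains example above is easy to reproduce numerically. This is an illustrative NumPy sketch, not part of the paper; the sampling rate and one-second duration are arbitrary choices.

```python
import numpy as np

# One second of a pure 50 Hz sine, sampled at 1 kHz.
fs = 1000                                      # sampling rate (Hz)
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 50 * t)

# The real FFT gives the frequency-amplitude representation.
spectrum = np.abs(np.fft.rfft(x)) / (fs / 2)   # normalize to unit amplitude
freqs = np.fft.rfftfreq(fs, d=1 / fs)

peak = freqs[np.argmax(spectrum)]              # frequency of the single spike
```

`peak` comes out as 50.0 with essentially zero energy elsewhere: the single spike of Fig: 2.1.2.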




                Fig: 2.1.3 STATIONARY SIGNAL

       Fig: 2.1.4 FT OF A STATIONARY SIGNAL

Note the four spectral components corresponding to the frequencies 10, 25, 50 and 100 Hz.
    Contrary to the above signal, the signal shown below is a non-stationary signal whose frequency constantly changes in time. It is known as the "chirp" signal.

         Fig: 2.1.5 NON-STATIONARY SIGNAL

The above plot shows a signal with four different frequency components at four different time intervals, hence a non-stationary signal. The interval 0 to 300 ms has a 100 Hz sinusoid, the interval 300 to 600 ms has a 50 Hz sinusoid, the interval 600 to 800 ms has a 25 Hz sinusoid, and finally the interval 800 to 1000 ms has a 10 Hz sinusoid. The following is its FT:

       Fig: 2.1.6 FT OF A NON-STATIONARY SIGNAL

Do not worry about the little ripples at this point; they are due to the sudden changes from one frequency component to another. Now, compare Figures 2.1.4 and 2.1.6. The similarity between these two spectra should be apparent: both show four spectral components at exactly the same frequencies, i.e. at 10, 25, 50, and 100 Hz. Other than the ripples and the difference in amplitude (which can always be normalized), the two spectra are almost identical, although the corresponding time-domain signals are not even close to each other. Both signals involve the same frequency components, but the first one has these frequencies at all times while the second has them at different intervals. So how come the spectra of two entirely different signals look so much alike? Recall that the FT gives the spectral content of the signal, but no information regarding where in time those spectral components appear. Therefore the FT is not a suitable technique for non-stationary signals, with one exception: the FT can be used for non-stationary signals if we are only interested in what spectral components exist in the signal, and not in where they occur. However, if this information is needed, i.e. if we want to know what spectral components occur at what time intervals, then the Fourier transform is not the right transform to use. When the time localization of the spectral components is needed, a transform giving the time-frequency representation of the signal is needed. Hence we turn to the WAVELET TRANSFORM.

          2.2) THE WAVELET TRANSFORM

     The wavelet transform is a transform of this type, i.e. it provides a time-frequency representation. There are other transforms which give this information too, such as the short-time Fourier transform and Wigner distributions.
     The wavelet transform is capable of providing the time and frequency information simultaneously, hence giving a time-frequency representation of the signal. The WT was developed as an alternative to the STFT (Short Time Fourier Transform). The advantages of the wavelet transform are:
a. It overcomes the resolution problem of the STFT by using a variable-length window.
b. Analysis windows of different lengths are used for different frequencies:
c. For the analysis of high frequencies, narrower windows are used for better time resolution.
                                                             (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                                        (Vol. 9 No. 3), 2011.
d. For analysis of low frequencies, wider windows are used
   for better frequency resolution
e. This works well if the signal to be analyzed mainly
   consists of slowly varying characteristics with occasional
   short high frequency bursts.
f. The Heisenberg uncertainty principle still holds.
g. The function used to window the signal is called the
   wavelet.
Wavelet Transforms basically work on two properties: a) the
scaling property and b) the translation property.
Translation Property: This is the time shift property: the
wavelet is moved along the time axis, f(t) -> f(t - b), b > 0.
Scaling Property: Scale has a meaning similar to that of the
scale used in maps:
A. Large scale: overall view, long term behavior
B. Small scale: detail view, local behavior
For f(t) -> f(a.t), a > 0:
If 0 < a < 1, dilation takes place, i.e. expansion, large scale
(lower frequency).
If a > 1, contraction takes place, i.e. low scale (higher
frequency).
For f(t) -> f(t/a), a > 0:
If 0 < a < 1, contraction takes place, i.e. low scale (higher
frequency).
If a > 1, dilation takes place, i.e. expansion, large scale
(lower frequency).

               Continuous Wavelet Transform:

         The continuous wavelet transform is obtained using
the equation

    CWT_x^{\psi}(\tau, s) = \frac{1}{\sqrt{|s|}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t-\tau}{s}\right) dt

               Computation of CWT
DISCRETE WAVELET TRANSFORM:
         We need not use a uniform sampling rate for the
translation parameters, since we do not need as high a time
sampling rate when the scale is high (low frequency).
Let's consider the following sampling grid:




                                                                                   Fig: 2.2.3 1D WAVELET TRANSFORMS

                                                                           SCALING FUNCTION:
   Equations
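The scaling function equations referenced here were figures in the original and did not survive extraction. The standard two-scale relations, with φ the scaling function, ψ the wavelet, and h, g the low pass and high pass filter taps (standard notation, assumed rather than recovered from this excerpt), are:

```latex
\phi(t) = \sqrt{2}\sum_{k} h[k]\,\phi(2t-k), \qquad
\psi(t) = \sqrt{2}\sum_{k} g[k]\,\phi(2t-k)
```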


                 Fig: 2.2.2 SAMPLING GRID




                                                                           The equation for a 1D DWT is
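The 1D DWT equation referenced here was an image in the original. In one standard dyadic form (M the signal length, j the scale index, k the translation index; notation assumed, not recovered from the source), the detail coefficients are:

```latex
W_{\psi}(j,k) = \frac{1}{\sqrt{M}}\sum_{n=0}^{M-1} x[n]\, 2^{j/2}\,\psi(2^{j}n-k)
```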




Consider an example as shown below for a 2D Wavelet
Transform:

2D WAVELET FUNCTIONS:







                                                                                Fig: 2.2.5 SINGLE STAGE DECOMPOSITION




      Fig: 2.2.4 IMPLEMENTATION OF 2D WAVELET
                    TRANSFORM

In DWT, we pass the time-domain signal through various high
pass and low pass filters, which filter out either the high
frequency or the low frequency portions of the signal. This
procedure is repeated, each time removing from the signal the
portion corresponding to certain frequencies. This is the
technique used for compression of an image using the Wavelet
Transform.
     Here is how this works: the WT can be performed by
using two elimination methods: 1) the H-Elimination method
and 2) the H*-Elimination method. The elimination method is
chosen based on the required compression.
     Now suppose we have a signal which has frequencies up
to 1000 Hz. In the first stage we split the signal into two parts
by passing it through a high pass and a low pass filter (the
filters should satisfy certain conditions, the so-called
admissibility condition), which results in two different
versions of the same signal: the portion of the signal
corresponding to 0-500 Hz (the low pass portion), and
500-1000 Hz (the high pass portion). Then, we take either
portion (usually the low pass portion) or both, and do the same
thing again. This operation is called decomposition. The
figures show the single stage and multi stage decomposition in
the Wavelet Transform.

          Fig: 2.2.6 MULTI STAGE DECOMPOSITION

Assuming that we have taken the low pass portion, we now
have 3 sets of data, each corresponding to the same signal at
frequencies 0-250 Hz, 250-500 Hz, and 500-1000 Hz. Then we
take the low pass portion again and pass it through low and
high pass filters; we now have 4 sets of signals corresponding
to 0-125 Hz, 125-250 Hz, 250-500 Hz, and 500-1000 Hz. We
continue like this until we have decomposed the signal to a
pre-defined level. Then we have a set of signals which actually
represent the same signal, but all corresponding to different
frequency bands. We know which signal corresponds to which
frequency band, and then, based on the required compression
ratio, some frequencies are computed and some frequencies
are skipped, as shown below.
     The results of applying the Discrete Wavelet Transform to
the image in a single stage and in multiple stages are shown
below.
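The successive filter-and-subsample decomposition described above can be sketched as follows. The Haar filter pair is an illustrative stand-in: the paper's H-Elimination and H*-Elimination methods are not specified in this excerpt, and all function names here are ours.

```python
import numpy as np

# Haar analysis filters, one valid half-band pair (an illustrative
# choice; the text does not fix a particular wavelet).
H_LOW = np.array([1.0, 1.0]) / np.sqrt(2.0)    # h[n], low pass
H_HIGH = np.array([1.0, -1.0]) / np.sqrt(2.0)  # g[n], high pass

def analysis_step(x):
    """One decomposition level: filter, then sub sample by 2."""
    low = np.convolve(x, H_LOW)[1::2]    # approximation, lower half band
    high = np.convolve(x, H_HIGH)[1::2]  # detail, upper half band
    return low, high

def dwt_decompose(x, levels):
    """Repeatedly split the low pass portion, as in the text; returns
    one detail band per level plus the final approximation."""
    bands = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        approx, detail = analysis_step(approx)
        bands.append(detail)
    bands.append(approx)
    return bands

signal = np.arange(16, dtype=float)
print([len(b) for b in dwt_decompose(signal, 3)])  # [8, 4, 2, 2]
```

Each level halves the number of samples in the retained (low pass) branch, matching the 1000 Hz example above where the bands shrink from 0-500 Hz to 0-250 Hz to 0-125 Hz.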








In wavelet analysis the signal is multiplied with a function,
i.e. a wavelet, similar to the window, and the transform is
computed separately for different segments of the time-domain
signal. The width of the window is changed as the transform is
computed for every single spectral component, which is
probably the most significant characteristic of the wavelet
transform. The term wavelet means small wave. The smallness
refers to the condition that this (window) function is of finite
length (compactly supported). The wave refers to the condition
that this function is oscillatory.

       Fig: 2.2.7 ORIGINAL IMAGE
         In terms of frequency, low frequencies (high scales)
correspond to global information about a signal (that usually
spans the entire signal), whereas high frequencies (low scales)
correspond to detailed information about a hidden pattern in
the signal (that usually lasts a relatively short time).
Fortunately, in practical applications, low scales (high
frequencies) do not last for the entire duration of the signal,
unlike those shown in the figure, but they usually appear from
time to time as short bursts, or spikes.
                                                                         The discrete wavelet transform (DWT), on the other
                                                               hand, provides sufficient information both for analysis and
                                                               synthesis of the original signal, with a significant reduction in
                                                               the computation time. The DWT is considerably easier to
                                                               implement when compared to the CWT. In the discrete case,
                                                               filters of different cutoff frequencies are used to analyze the
                                                               signal at different scales. The signal is passed through a series
                                                               of high pass filters to analyze the high frequencies, and it is
passed through a series of low pass filters to analyze the low
frequencies.

    Fig: 2.2.8 FIRST STAGE DISCRETE WAVELET TRANSFORM

         The resolution of the signal, which is a measure of
                                                               the amount of detail information in the signal, is changed by
                                                               the filtering operations, and the scale is changed by up
                                                               sampling and down sampling (sub sampling) operations.
                                                               Sub sampling a signal corresponds to reducing the sampling
                                                               rate, or removing some of the samples of the signal. For
                                                               example, sub sampling by two refers to dropping every other
                                                               sample of the signal. Sub-sampling by a factor n reduces the
                                                               number of samples in the signal n times. Up sampling a signal
                                                               corresponds to increasing the sampling rate of a signal by
                                                               adding new samples to the signal. For example, up sampling
                                                               by two refers to adding a new sample, usually a zero or an
interpolated value, between every two samples of the signal.
                                                               Up sampling a signal by a factor of n increases the number of
                                                               samples in the signal by a factor of n. The procedure starts
                                                               with passing this signal (sequence) through a half band digital
                                                               low pass filter with impulse response h[n]. Filtering a signal
                                                               corresponds to the mathematical operation of convolution of
the signal with the impulse response of the filter.
The convolution operation in discrete time is defined as
follows:

    x[n] * h[n] = \sum_{k=-\infty}^{\infty} x[k] \, h[n-k]
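The convolution, sub sampling and up sampling operations described above can be sketched in a few lines (a minimal numpy sketch; the function names are ours, not the paper's):

```python
import numpy as np

def convolve(x, h):
    """Discrete convolution: y[n] = sum_k x[k] h[n-k]."""
    return np.convolve(x, h)

def subsample(x, n=2):
    """Keep every n-th sample (decimation by n)."""
    return x[::n]

def upsample(x, n=2):
    """Insert n-1 zeros between consecutive samples."""
    y = np.zeros(len(x) * n)
    y[::n] = x
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
print(subsample(x))             # [1. 3.]
print(upsample(subsample(x)))   # [1. 0. 3. 0.]
```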

         The unit of frequency is of particular importance at this
time. In discrete signals, frequency is expressed in terms of
radians. Accordingly, the sampling frequency of the signal is
equal to 2π radians in terms of radial frequency. Therefore, the
highest frequency component that exists in a signal will be π
radians, if the signal is sampled at the Nyquist rate (which is
twice the maximum frequency that exists in the signal); that is,
the Nyquist rate corresponds to π rad in the discrete frequency
domain. Therefore, using Hz is not appropriate for discrete
signals.
     After passing the signal through a half band low pass
filter, half of the samples can be eliminated according to the
Nyquist rule, since the signal now has a highest frequency of
π/2 radians instead of π radians. Simply discarding every other
sample will sub sample the signal by two, and the signal will
then have half the number of points. The scale of the signal is
now doubled. Note that the low pass filtering removes the high
frequency information, but leaves the scale unchanged. Only
the sub sampling process changes the scale.
     Resolution, on the other hand, is related to the amount of
information in the signal, and therefore it is affected by the
filtering operations. Half band low pass filtering removes half
of the frequencies, which can be interpreted as losing half of
the information. Therefore, the resolution is halved after the
filtering operation. Note, however, that the sub sampling
operation after filtering does not affect the resolution, since
removing half of the spectral components from the signal
makes half the number of samples redundant anyway. Half the
samples can be discarded without any loss of information. In
summary, the low pass filtering halves the resolution, but
leaves the scale unchanged. The signal is then sub sampled by
2 since half of the samples are redundant. This doubles the
scale. This procedure can mathematically be expressed as

    y[n] = \sum_{k=-\infty}^{\infty} h[k] \, x[2n-k]

         Having said that, we now look at how the DWT is
actually computed: The DWT analyzes the signal at different
frequency bands with different resolutions by decomposing the
signal into a coarse approximation and detail information. The
DWT employs two sets of functions, called scaling functions
and wavelet functions, which are associated with low pass and
high pass filters, respectively.
         The decomposition of the signal into different
frequency bands is simply obtained by successive high pass
and low pass filtering of the time domain signal. The original
signal x[n] is first passed through a half band high pass filter
g[n] and a low pass filter h[n]. After the filtering, half of the
samples can be eliminated according to the Nyquist rule, since
the signal now has a highest frequency of π/2 radians instead
of π. The signal can therefore be sub sampled by 2, simply by
discarding every other sample. This constitutes one level of
decomposition and can mathematically be expressed as
follows:

    y_{high}[k] = \sum_{n} x[n] \, g[2k-n]
    y_{low}[k] = \sum_{n} x[n] \, h[2k-n]

         Where yhigh[k] and ylow[k] are the outputs of the high
pass and low pass filters, respectively, after sub sampling by 2.
This decomposition halves the time resolution since only half
the number of samples now characterizes the entire signal.
However, this operation doubles the frequency resolution,
since the frequency band of the signal now spans only half the
previous frequency band, effectively reducing the uncertainty
in the frequency by half. The above procedure, which is also
known as sub band coding, can be repeated for further
decomposition. At every level, the filtering and sub sampling
will result in half the number of samples (and hence half the
time resolution) and half the frequency band spanned (and
hence double the frequency resolution). Fig: 2.2.10 illustrates
this procedure, where x[n] is the original signal to be
decomposed, and h[n] and g[n] are the low pass and high pass
filters, respectively. The bandwidth of the signal at every level
is marked on the figure as "f".

                Fig: 2.2.10 DWT COEFFICIENTS

The Sub band Coding Algorithm: As an example, suppose that
the original signal x[n] has 512 sample points, spanning a
frequency band of zero to π rad/s. At the first decomposition
level, the signal is passed through the high pass and low pass
filters, followed by sub sampling by 2. The output of the high
pass filter has 256 points (hence half the time resolution), but
it only spans the frequencies π/2 to π rad/s (hence double the
frequency resolution). These 256 samples constitute the first
level of DWT coefficients. The output of the low pass filter
also has 256 samples, but it spans the other half of the
frequency band, frequencies from 0 to π/2 rad/s. This signal is
then passed through the same low pass and high pass filters for
further decomposition. The output of the second low pass filter
followed by sub sampling has 128 samples spanning a
frequency band of 0 to π/4 rad/s, and the output of the second
high pass filter followed by sub sampling has 128 samples
spanning a frequency band of π/4 to π/2 rad/s. The second high
pass filtered signal constitutes the second level of DWT
coefficients. This signal has half the time resolution, but twice
the frequency resolution of the first level signal. In other
words, time resolution has decreased by a factor of 4, and
frequency resolution has increased by a factor of 4 compared
to the original signal. The low pass filter output is then filtered
once again for further decomposition. This process continues
until two samples are left. For this specific example there
would be 8 levels of decomposition, each having half the
number of samples of the previous level. The DWT of the
original signal is then obtained by concatenating all
coefficients starting from the last level of decomposition (the
remaining two samples, in this case). The DWT will then have
the same number of coefficients as the original signal. The
frequencies that are most prominent in the original signal will
appear as high amplitudes in the region of the DWT signal that
includes those particular frequencies. The difference of this
transform from the Fourier transform is that the time
localization of these frequencies will not be lost. However, the
time localization will have a resolution that depends on the
level at which they appear. If the main information of the
signal lies in the high frequencies, as happens most often, the
time localization of these frequencies will be more precise,
since they are characterized by a larger number of samples. If
the main information lies only at very low frequencies, the
time localization will not be very precise, since few samples
are used to express the signal at these frequencies. This
procedure in effect offers good time resolution at high
frequencies, and good frequency resolution at low frequencies.
      Suppose we have a 256-sample long signal sampled at 10
MHz and we wish to obtain its DWT coefficients. Since the
signal is sampled at 10 MHz, the highest frequency component
that exists in the signal is 5 MHz. At the first level, the signal
is passed through the low pass filter h[n] and the high pass
filter g[n], the outputs of which are sub sampled by two. The
high pass filter output is the first level DWT coefficients.
There are 128 of them, and they represent the signal in the
[2.5 5] MHz range. These 128 samples are the last 128 samples
plotted. The low pass filter output, which also has 128
samples, but spans the frequency band of [0 2.5] MHz, is
further decomposed by passing it through the same h[n] and
g[n]. The output of the second high pass filter is the level 2
DWT coefficients, and these 64 samples precede the 128 level
1 coefficients in the plot. The output of the second low pass
filter is further decomposed, once again by passing it through
the filters h[n] and g[n]. The output of the third high pass filter
is the level 3 DWT coefficients. These 32 samples precede the
level 2 DWT coefficients in the plot. The procedure continues
until only 1 detail coefficient can be computed at level 8. This
coefficient, together with the single remaining approximation
coefficient, is the first to be plotted in the DWT plot. This is
followed by 2 level 7 coefficients, 4 level 6 coefficients, 8
level 5 coefficients, 16 level 4 coefficients, 32 level 3
coefficients, 64 level 2 coefficients and finally 128 level 1
coefficients. Note that fewer and fewer samples are used at
lower frequencies; therefore, the time resolution decreases as
frequency decreases, but since the frequency interval also
decreases at low frequencies, the frequency resolution
increases. Obviously, the first few coefficients would not carry
a whole lot of information, simply due to the greatly reduced
time resolution. One area that has benefited the most from this
particular property of the wavelet transform is image
processing.
         The DWT can be used to reduce the image size without
losing much of the resolution. Here is how: for a given image,
you can compute the DWT of, say, each row, and discard all
values in the DWT that are less than a certain threshold. We
then save only those DWT coefficients that are above the
threshold for each row, and when we need to reconstruct the
original image, we simply pad each row with as many zeros as
the number of discarded coefficients, and use the inverse DWT
to reconstruct each row of the original image.
         We can also analyze the image at different frequency
bands, and reconstruct the original image by using only the
coefficients of a particular band.


             2.3) WATERMARKING

Digital watermarking is an adaptation of the commonly used
and well-known paper watermarks to the digital world. Digital
watermarking describes methods and technologies that allow
hiding of information, for example a number or text, in digital
media, such as images, video and audio. The embedding takes
place by manipulating the content of the digital data, which
means the information is not embedded in the frame around
the data. There are two types of watermarks:
 1) VISIBLE WATERMARKS: These are visible to the
      viewer, as in a bond paper to mark the paper type.
 2) INVISIBLE WATERMARKS: These are invisible to the
      viewer and are useful for identifying the authorized
      owner.
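The row-wise thresholding scheme just described can be sketched as below, with a single-level Haar DWT standing in for whatever wavelet an implementation would actually use (the helper names are ours):

```python
import numpy as np

INV_SQRT2 = 1.0 / np.sqrt(2.0)

def haar_forward(row):
    """Single-level Haar DWT of an even-length row: [approx | detail]."""
    a = (row[0::2] + row[1::2]) * INV_SQRT2
    d = (row[0::2] - row[1::2]) * INV_SQRT2
    return np.concatenate([a, d])

def haar_inverse(coeffs):
    """Exact inverse of haar_forward."""
    half = len(coeffs) // 2
    a, d = coeffs[:half], coeffs[half:]
    row = np.empty(len(coeffs))
    row[0::2] = (a + d) * INV_SQRT2
    row[1::2] = (a - d) * INV_SQRT2
    return row

def compress_row(row, threshold):
    """Zero out ('discard') coefficients below the threshold, then
    reconstruct the row from the coefficients that were kept."""
    coeffs = haar_forward(np.asarray(row, dtype=float))
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return haar_inverse(coeffs)

row = np.array([10.0, 10.0, 10.0, 10.0, 50.0, 10.0, 10.0, 10.0])
# every nonzero coefficient exceeds this threshold, so the row is
# recovered (up to floating point)
reconstructed = compress_row(row, threshold=1.0)
```

Raising the threshold discards more coefficients, trading reconstruction fidelity for compression, which is exactly the CR vs. MSE/PSNR trade-off discussed in this paper.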

            Fig: 2.3.2 INVISIBLE WATERMARKING

         The first applications of watermarking that came to
mind were related to copyright protection of digital media. In
the past, duplicating artwork was quite complicated and
required great expertise so that the counterfeit looked like the
original. However, in the digital world this is not true. It is
extremely easy for anyone to duplicate digital data, even
without any loss of quality.

 Fig: 2.3.3 Classification of information hiding technique

     2.3.1) REQUIREMENTS OF DIGITAL WATERMARKING

Digital watermarking has to meet the following requirements:
1. Perceptual transparency: The algorithm must embed data
without affecting the perceptual quality of the underlying host
signal.
6. Imperceptibility: The watermark should not be visible to the
human visual system (HVS) and should not degrade the image
quality.
7. Reliability: To ensure that the application returns the
watermark each time.

2.3.2) DOMAINS USED IN WATERMARKING
    Spatial domain: This is one of the simplest techniques.
      A simple technique is obtained by LSB substitution, i.e.
      obtain the bit planes of the host image and replace the
      zero bit plane of the host image with the watermark image.
      Advantages:
       a) A simple technique
       b) Requires no original image to retrieve the watermark
          from the watermarked image
       c) No blocking artifacts
       d) Maximum capacity
      Disadvantages:
       a) Prone to tampering and attacks like
             Compression
             Rotation
             Scaling
             Translation
             Cropping etc.

    Transform domain: The host image is transformed into
    another domain using DCT, Hartley, Wavelet etc. The
    watermark image is embedded in the frequency coefficients
    of the transformed host image. The watermark is extracted
    from the watermarked image by taking the inverse
    transform and identifying the coefficients.
      Advantages:
         a) Robustness
         b) Resistant to rotation, scaling, translation and
            compression
         c) Perceptibility
      Disadvantages:
          a) Less capacity
          b) Computationally complex
          c) Blocking artifacts due to block processing

    Hybrid domain: This is intermediate between the spatial
    and transform domains; it is a combination of both the
    spatial and frequency domains.
      Advantages:
         a) To increase the capacity of the watermark.
         b) To make use of the benefits of transform domains.
2. Security: A secure data embedding procedure can not be
                                                                                    c) To maximize the immunity of the watermark
broken unless the unauthorized user access to a secret key that
                                                                               against various distortion attacks.
controls the insertion of data in host signal.
3. Robustness: Watermarking must survive attacks by lossy
data compression and image manipulation like cut and paste,
filtering etc
4. Unambiguous: Retrieval of watermark should
unambiguously identify the owner                                               2.3.3) APPLICATIONS OF WATERMARKING
5. Universal: Same watermark algorithm should be applicable                         Watermarking has wide range of application .they
to        all       multimedia         under     consideration            can be used for

    1. Data Hiding -- Providing private secret messages
    2. Copyright Protection -- To prove ownership
    3. Copy Control -- To trace illegal copies and license
    agreements
    4. Data Authentication -- To check whether content has
    been modified
    5. Broadcast Monitoring -- For commercial advertisement
    6. Copy Protection -- To prevent illegal copying of the
    information

       2.3.4) WATERMARKING COMPONENTS

                          VISIBILITY

            CAPACITY                         ROBUSTNESS

CAPACITY: It refers to the amount of information we are
able to insert into the host image.

          Capacity = Bytes of hidden data /
                     Bytes of cover image

ROBUSTNESS: It refers to the ability of the inserted
information to withstand image modifications.
 At present, digital watermarking research primarily involves
the identification of effective signal processing strategies to
discreetly, robustly, and unambiguously hide the watermark
information in multimedia signals. The general process
involves the use of a key which must be used to successfully
embed and extract the hidden information. The embedding
mechanism entails imposing imperceptible changes on the host
signal to generate a watermarked signal containing the
watermark information, while the extraction routine attempts
to reliably recover the hidden watermark from a possibly
tampered watermarked signal.
     The objective of this project is the security of the image;
security has the following objectives:
- the confidentiality of the transmitted information,
- the integrity of the transmitted information,
- the authenticity of the transmitted information,
- the non-repudiation of the transmitted information,
- the availability of the required information and of the
required services.
    The authenticity of the image can be verified by another
person or system connected to the same network. This kind of
authenticity check is very important and has been intensively
developed in recent years. The author of the message sends a
transformed form of another message, related to the first one,
to a third entity. By processing this transformed form of the
message, the third entity can establish the author. Today,
digital signatures or digital envelopes (the transformed forms
of a message) are used by specialized systems or organizations
to check the authenticity of a message.
     One of the most widely used categories of multimedia
signals is that of images. For example, 80% of the data
transmitted over the internet are images. This is the reason
why it is very important to study the digital watermarking of
images.

     2.3.5) TYPES OF WATERMARK IN
               TERMS OF FIDELITY
     There are three types of watermarks in terms of fidelity.
     They are
a) Robust Watermark: This watermark has the ability to
     withstand various image attacks, thus providing
     authentication.
b) Fragile Watermark: This watermark is mainly used for
     detecting modification of data. It gets degraded even by
     a slight modification of the data in the image.
c) Semi-Fragile Watermark: It is intermediate between
     fragile and robust watermarks. It is not robust against all
     possible image attacks.

      2.3.6) WATERMARK INSERTION SYSTEM
 The watermark can be inserted using various techniques,
such as
     a) Flipping the lowest-order bit of chosen pixels
     b) Superimposing a symbol over an area of the image
     c) Using color separation, i.e. the watermark appears
     in only one color band
     d) Applying a transform and then altering the chosen
     frequencies of the original

              2.4) COMPRESSION
Compression refers to the process of reducing the amount of
data required to represent a given quantity of information. The
compression system model consists of two parts:
    1. The Compressor
    2. The Decompressor

     Image compression model

             Fig: 2.4.1 SOURCE ENCODER
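Technique (a) of Section 2.3.6, flipping or replacing the lowest-order bit of chosen pixels, can be sketched in a few lines. This is only an illustrative sketch, not the project's implementation: the function names are hypothetical and the "image" is a flat list of 8-bit gray levels.

```python
# Minimal sketch of spatial-domain LSB watermarking (Section 2.3.6,
# technique (a)). Hypothetical helper names; the host image is assumed
# to be a flat list of 8-bit grayscale pixel values.

def embed_lsb(pixels, watermark_bits):
    """Replace the least significant bit of the first len(bits) pixels."""
    out = list(pixels)
    for k, bit in enumerate(watermark_bits):
        out[k] = (out[k] & 0xFE) | bit   # clear the LSB, then set it to bit
    return out

def extract_lsb(pixels, n_bits):
    """Read the watermark back; no original image is needed (blind scheme)."""
    return [p & 1 for p in pixels[:n_bits]]

host = [26, 6, 7, 13, 200, 128, 55, 9]   # toy 8-pixel "image"
mark = [1, 0, 1, 1]
stego = embed_lsb(host, mark)
assert extract_lsb(stego, 4) == mark     # blind extraction recovers the mark
```

Because only the least significant bit changes, each pixel moves by at most one gray level, which keeps the mark imperceptible; as the spatial-domain discussion notes, it is also easily destroyed by compression or geometric attacks.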

              Fig: 2.4.2 SOURCE DECODER

The data which contain no relevant information are called
redundant data. The relative data redundancy is given by

                   Rd = 1 - 1/Cr

where Cr is the compression ratio, given by

                   Cr = n1/n2

and n1 and n2 are the numbers of information-carrying units of
the input and output images. In a digital image, three basic
redundancies are present which can be eliminated for
compression:
    1) Interpixel redundancies
    2) Coding redundancies
    3) Psychovisual redundancies

           2.4.1) INTERPIXEL REDUNDANCIES
     Interpixel redundancies arise from the correlations that
exist between pixels due to the structural or geometric
relationships between the objects in the image. They go by a
variety of names, such as spatial redundancies, geometric
redundancies, and interframe redundancies. To eliminate
them, the 2D pixel array normally used for human viewing
and interpretation must be transformed into a more efficient
format, e.g. the differences between adjacent pixels are used
to represent the image. This is usually done by applying
transforms and is also known as mapping. In this project,
interpixel redundancies are removed by using wavelet
transforms.
           2.4.2) PSYCHOVISUAL REDUNDANCIES
     The human eye does not respond with equal sensitivity to
all visual information. Certain information simply has less
relative importance than other information in normal visual
processing. This information is said to be psychovisually
redundant; it can be removed without significantly impairing
the quality of image perception because the information itself
is not essential for normal visual processing.
    Since the elimination of psychovisually redundant data
results in a loss of quantitative information, it is commonly
referred to as quantization. Quantization is the mapping of a
broad range of input values to a limited number of output
values. It is an irreversible process and results in lossy
compression.

             2.4.3) CODING REDUNDANCIES
 In the process of removing coding redundancies, the shortest
code word is assigned to the gray levels that occur most
frequently in an image, i.e. fewer bits are assigned to the most
probable gray levels than to the less probable ones, and this
achieves data compression. This process is referred to as
variable-length coding. Coding redundancies are removed by
the process of encoding. There are various variable-length
encoding processes:
     1) Huffman coding
     2) Arithmetic coding
     3) LZW coding
     4) Bit-plane coding
     5) SPIHT coding

     In this project we have used two different encoding
techniques, i.e. Huffman coding and SPIHT coding, using
both the HH and H* elimination techniques.
                 2.4.4) HUFFMAN CODING
The Huffman code, developed by D. Huffman in 1952, is a
minimum-length code and the most popular technique for
removing coding redundancies. Huffman coding yields the
smallest possible number of code symbols per source symbol.
In terms of the noiseless coding theorem, the resulting code is
optimal for a fixed value of n, subject to the constraint that the
source symbols be coded one at a time.
     Given the statistical distribution of the gray levels (the
histogram), the Huffman algorithm will generate a code that is
as close as possible to the minimum bound. For complex
images, Huffman coding alone will typically reduce the file
by 10% to 50%, but this ratio can be improved to 2:1 or 3:1 by
preprocessing for irrelevant-information removal.
The Huffman algorithm can be described in five steps:
     1. Find the gray-level probabilities for the image by
         finding the histogram
     2. Order the input probabilities from smallest to largest
     3. Combine the smallest two by addition
     4. GO TO step 2, until only two probabilities are left
     5. By working backward along the tree, generate the code
         by alternating assignment of 0 and 1

An example of how the Huffman coding algorithm works is
shown below:
    1) The first step in Huffman's approach is to create a series
of source reductions by ordering the probabilities of the
symbols under consideration and combining the two lowest-
probability symbols into a single symbol that replaces them in
the next source reduction, as shown in the tabular column
below.

The second step in Huffman's procedure is to code each
reduced source, starting with the smallest source and working
back to the original source; the minimal-length binary code
for a two-symbol source is the symbols 0 and 1.
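The five steps of the Huffman algorithm can be sketched in Python. The 4-symbol histogram is hypothetical; a heap stands in for the explicit re-ordering of steps 2 and 4, and the backward walk of step 5 is folded in by prefixing a 0 or 1 onto each merged group of symbols.

```python
# Sketch of the five-step Huffman procedure for a hypothetical
# 4-symbol gray-level histogram (probabilities sum to 1).
import heapq

def huffman_code(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> bit string."""
    # Steps 1-2: order the probabilities (the heap keeps them sorted).
    heap = [(p, [s]) for s, p in probs.items()]
    heapq.heapify(heap)
    code = {s: "" for s in probs}
    # Steps 3-4: repeatedly combine the two smallest probabilities.
    while len(heap) > 1:
        p1, syms1 = heapq.heappop(heap)
        p2, syms2 = heapq.heappop(heap)
        # Step 5 (backward walk, folded in): prefix 0/1 onto each merged group.
        for s in syms1:
            code[s] = "0" + code[s]
        for s in syms2:
            code[s] = "1" + code[s]
        heapq.heappush(heap, (p1 + p2, syms1 + syms2))
    return code

probs = {"a1": 0.4, "a2": 0.3, "a3": 0.2, "a4": 0.1}
code = huffman_code(probs)
avg_len = sum(probs[s] * len(code[s]) for s in probs)
```

For these probabilities the sketch reproduces the classic result: the most probable symbol a1 gets a 1-bit code word, and the average length is 1.9 bits/symbol, close to the source entropy of about 1.85 bits.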

         The final code appears at the far left in the above
table, which shows that fewer bits are allotted to the most
probable symbols. Huffman-encoded symbols can be decoded
by examining the individual symbols of the string in a left-to-
right manner.

                   2.4.5) SPIHT CODING
          SPIHT coding offers a new, fast and different
implementation based on set partitioning in hierarchical trees,
which provides better performance than other coding
techniques. It has become the benchmark state-of-the-art
algorithm for image compression.
SPIHT has the following advantages:
           1) good image quality and high PSNR, especially
                for color images
           2) it is optimized for progressive image
                transmission
           3) produces a fully embedded coded file
           4) simple quantization algorithm
           5) fast coding/decoding time
           6) has wide application, completely adaptive
           7) can be used for lossless compression
           8) can code to an exact bit rate or distortion
           9) efficient combination with error protection
Image quality:
     SPIHT yields very good quality visual images by
exploiting the properties of wavelet-transformed images.
Progressive image transmission:
      In some systems with progressive image transmission
(like WWW browsers) the quality of the displayed images
follows the sequence: (a) weird abstract art; (b) you begin to
believe that it is an image of something; (c) CGA-like quality;
(d) lossless recovery. With very fast links the transition from
(a) to (d) can be so fast that you will never notice. With slow
links (how "slow" depends on the image size, colors, etc.) the
time from one stage to the next grows exponentially, and it
may take hours to download a large image. Considering that it
may be possible to recover an excellent-quality image using
10-20 times fewer bits, it is easy to see the inefficiency.
Furthermore, the mentioned systems are not efficient even for
lossless transmission.
 The problem is that such widely used schemes employ a very
primitive progressive image transmission method. At the
other extreme, SPIHT is a state-of-the-art method that was
designed for optimal progressive transmission (and still beats
most non-progressive methods!). It does so by producing a
fully embedded coded file in such a manner that at any
moment the quality of the displayed image is the best
available for the number of bits received up to that moment.
     So, SPIHT can be very useful for applications where the
user can quickly inspect the image and decide if it should
really be downloaded, is good enough to be saved, or needs
refinement.

A. Optimized Embedded Coding:
     Suppose you need to compress an image for three remote
users. Each one has different needs of image reproduction
quality, and you find that those qualities can be obtained with
the image compressed to at least 8 Kb, 30 Kb, and 80 Kb,
respectively. If you use a non-embedded encoder (like JPEG),
to save transmission costs (or time) you must prepare one file
for each user. On the other hand, if you use an embedded
encoder (like SPIHT) then you can compress the image to a
single 80 Kb file, and then send the first 8 Kb of the file to the
first user, the first 30 Kb to the second user, and the whole file
to the third user.

     Surprisingly, with SPIHT all three users would get (for
the same file size) an image quality comparable or superior to
the most sophisticated non-embedded encoders available
today. SPIHT achieves this feat by optimizing the embedded
coding process and always coding the most important
information first.

B. Compression Algorithm:
    SPIHT represents a small "revolution" in image
compression because it broke the trend toward more complex
(in both the theoretical and the computational senses)
compression schemes. While researchers had been trying to
improve previous schemes for image coding using very
sophisticated vector quantization, SPIHT achieved superior
results using the simplest method: uniform scalar
quantization. Thus, it is much easier to design fast SPIHT
codecs.

C. Encoding/Decoding Speed:
The SPIHT process represents a very effective form of
entropy coding. When SPIHT coding is compared to other
coding techniques, the difference in compression is small,
showing that it is not necessary to use slow methods. A
straightforward consequence of the compression simplicity is
the greater coding/decoding speed. The SPIHT algorithm is
nearly symmetric, i.e., the time to encode is nearly equal to
the time to decode. (Complex compression algorithms tend to
have encoding times much larger than the decoding times.)

D. Applications:
     SPIHT exploits properties that are present in a wide
variety of images. It has been successfully tested on natural
(portraits, landscapes, weddings, etc.) and medical (X-ray,
CT, etc.) images. Furthermore, its embedded coding process
proved to be effective in a broad range of reconstruction
qualities. For instance, it can code fair-quality portraits and
high-quality medical images equally well (as compared with
other methods under the same conditions).
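The embedded-coding idea can be illustrated with a toy coder. This is not SPIHT itself (there are no trees and no significance sorting), just coefficient magnitudes sent one bit plane at a time, MSB first: decoding any prefix of the stream yields a coarser approximation, and the full stream is lossless.

```python
# Toy illustration of an embedded, bit-plane-ordered stream (not SPIHT):
# magnitudes are transmitted one bit plane at a time, most significant
# plane first, so any prefix of the stream is decodable.

def encode_planes(coeffs, n_planes=5):
    stream = []
    for n in range(n_planes - 1, -1, -1):          # MSB plane first
        stream.extend((c >> n) & 1 for c in coeffs)
    return stream

def decode_planes(stream, n_coeffs, n_planes=5):
    approx = [0] * n_coeffs
    for i, bit in enumerate(stream):               # stops when the prefix ends
        plane = n_planes - 1 - i // n_coeffs
        approx[i % n_coeffs] |= bit << plane
    return approx

coeffs = [26, 13, 7, 4]
stream = encode_planes(coeffs)
full = decode_planes(stream, 4)        # whole stream: lossless, [26, 13, 7, 4]
half = decode_planes(stream[:8], 4)    # first 2 planes only: [24, 8, 0, 0]
```

Truncating after two planes keeps only the two most significant bits of every coefficient, which is the "best quality available for the number of bits received so far" behavior described above.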

E. Lossless Compression:
    SPIHT codes the individual bits of the image wavelet
transform coefficients following a bit-plane sequence. Thus, it
is capable of recovering the image perfectly (every single bit
of it) by coding all bits of the transform. In other words, the
property that SPIHT yields progressive transmission with
practically no penalty in compression efficiency applies to
lossless compression too.

Rate or Distortion Specification:
Almost all image compression methods developed so far do
not have precise rate control. For some methods you specify a
target rate, and the program tries to give something that is not
too far from what you wanted. For others you specify a
"quality factor" and wait to see if the size of the file fits your
needs. (If not, just keep trying...). The embedded coding
property of SPIHT allows exact bit-rate control, without any
penalty in performance (no bits wasted on padding or
whatever).
          The same property also allows exact mean squared-
error (MSE) distortion control. Even though the MSE is not
the best measure of image quality, it is far superior to other
criteria used for quality specification.

F. Error Protection:
     Errors in the compressed file cause havoc for
practically all important image compression methods. This is
not exactly related to variable-length entropy coding, but to
the necessity of using context generation for efficient
compression. For instance, Huffman codes have the ability to
quickly recover after an error. However, if they are used to
code run lengths, then that property is useless because all runs
after an error would be shifted.
          SPIHT is not an exception to this rule. One
difference, however, is that due to SPIHT's embedded coding
property, it is much easier to design efficient error-resilient
schemes. This happens because with embedded coding the
information is sorted according to its importance, and the
requirement for powerful error-correction codes decreases
from the beginning to the end of the compressed file. If an
error is detected, but not corrected, the decoder can discard
the data after that point and still display the image obtained
with the bits received before the error. Also, with bit-plane
coding the error effects are limited to below the previously
coded planes. Another reason is that SPIHT generates two
types of data. The first is sorting information, which needs
error protection as explained above. The second consists of
uncompressed sign and refinement bits, which do not need
special protection because they affect only one pixel.
          While SPIHT can yield gains like 3 dB PSNR over
methods like JPEG, its use in noisy channels, combined with
error protection as explained above, leads to much larger
gains, like 6-12 dB.

G. Use with Graphics:
    SPIHT uses wavelets designed for natural images. It was
not developed for artificially generated graphical images that
have very wide areas of the same color. Even though there are
methods that try to compress both graphic and natural images
efficiently, the best results for graphics have been obtained
with methods like the Lempel-Ziv algorithm. Actually,
graphics can be much more effectively compressed using the
rules that generated them.
        There is still no "universal compression" scheme. In
future documents we will use more extensively what is
already being used by WWW browsers: one decoder for text,
another for sound, another for natural images (how about
SPIHT?), another for video, etc.

               2.4.6) SPIHT ALGORITHM
The SPIHT algorithm is based on three concepts:
           1) Ordered Bit-Plane Progressive Transmission
           2) Set Partitioning Sorting Algorithm
           3) Spatial Orientation Trees
Ordered Bit-Plane Progressive Transmission:
   A major objective in a progressive transmission scheme is
   to select the most important information, which yields the
   largest distortion reduction, to be transmitted first.
   It incorporates two concepts:
        Ordering the coefficients by magnitude
        Transmitting the most significant bits (MSBs) first.
Set Partitioning Sorting Algorithm:
    The sorting algorithm divides the set of pixels into
partitioning subsets Tm and performs the significance test
using the function

         Sn(T) = 1, if max {|c(i, j)| : (i, j) ∈ T} ≥ 2^n
               = 0, otherwise

where n is the number of the pass and c(i, j) is the wavelet
coefficient at position (i, j).

Spatial Orientation Trees:

              Fig : 2.4.3 SPATIAL ORIENTATION TREES

 •   O (i, j): set of coordinates of all offspring of node (i, j);
     children only
 •   D (i, j): set of coordinates of all descendants of node (i, j);
     children, grandchildren, great-grandchildren, etc.
 •   H (i, j): set of all tree roots (nodes in the highest pyramid
     level); parents
 •   L (i, j): D (i, j) - O (i, j) (all descendants except the
     offspring); grandchildren, great-grandchildren, etc.
     For practical applications the following sets are used to
     store ordered lists:
                 LSP: List of significant pixels
                 LIP: List of insignificant pixels
                 LIS: List of insignificant sets
      To illustrate how SPIHT coding works, let us look at the
      following example.

                            INITIALIZATION

        LIP                LSP                       LIS

      (0, 0) = 26                         (0, 1) = {13, 10, 6, 4}
                         EMPTY            (1, 0) = {4, -4, 2, -2}
      (0, 1) = 6                          (1, 1) = {4, -3, -2, 0}

      (1, 0) = -7

      (1, 1) = 7

                   n = floor(log2 (MAX COEFF))
                      n = floor(log2 (26)) = 4

      The first step in SPIHT coding is the initialization of the
      sets LSP, LIS, and LIP, which is done as shown above.

                      Fig 2.4.5 INITIALIZATION

      Then the pixels are sorted against the threshold
      T0 = 2^n = 2^4 = 16, i.e.

                          After First Sorting Pass
           S(0,1) and, similarly, the sets S(1,0) and S(1,1) are
               insignificant, hence we transmit two 0's
           We need not process LSP since it is null
       •   Update LSPT to LSP
       •   The transmitted bit stream is 10000000 (8 bits)
       •   LIP = {(0,1), (1,0), (1,1)}
       •   LIS = {D(0,1), D(1,0), D(1,1)}
       •   LSP = {(0,0)}

     Fig: 2.4.6 BLOCK DIAGRAM AFTER FIRST SORTING PASS

                      After Second Sorting Pass
      n = 4 - 1 = 3, threshold T1 = 2^n = 2^3 = 8
     Process LIP
           S(0,1) = 6, S(1,0) = -7, and S(1,1) = 7 are
               insignificant, hence we transmit three 0's
     Process LIS
           DS(0,0) = 13 and DS(0,1) = 10 are > T1, hence we
      transmit 1 for the set, then we transmit 10 for 13 and
      again 10 for 10, then move (0, 2) and (0, 3) to LSPT
        DS(1,0) = 6 and DS(1,1) = 4 are < T1, so we transmit
      two 0's, then move (1, 2) and (1, 3) to LIP
        The sets D(1, 0) and D(1, 1) are insignificant, hence we
               transmit two 0's
     Process LSP
                                                                                C (0, 0) =26= (11010)2 ---- TRANSMIT NTH MSB =1
                                                                               • Update LSPT to LSP
                       Threshold To= 2n=24=16
                                                                               • The transmitted bit stream is 0001101000001(13 bits)
          •    Process LIP                                                     • LIP={(0,1),(1,0),(1,1),(1,2),(1,3)}
          •    S(0,0)=26>To, we transmit 1, since 26 is +ve ,we                • LIS={D(1,0),D(1,1)}
               transmit 0; then move (0,0) to LSPT (Temporary),                • LSP={ (0,0),(0,2),(0,3) }
               then
          •    S(0,1)=6, S(1,0)= -7, S(1,1)=7 are all <To hence they
               are insignificant , therefore
                transmit                     three                0,            Fig: 2.4.7 BLOCK DIAGRAM AFTER SECOND SORTING
               Process LIS                                                                                 PASS
          •    DS(0,0)=13, DS(0,1)=10, DS(1,0)=4 ,DS(1,1) are all
               less than To hence we transmit 0 for the complete set
                                                                          .

                                                                         242                             http://sites.google.com/site/ijcsis/
                                                                                                         ISSN 1947-5500
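The two sorting passes above can be reproduced by a small Python sketch. The paper itself works in MATLAB; here the example's tree is assumed to be single level, so each set D(r,c) holds only its four child coefficients, and the coefficient values are those of the example.

```python
# Simplified SPIHT-style coder for the 4x4 example above. Positions (0,0),
# (0,1), (1,0), (1,1) form the initial LIP; each set D(r,c) is taken to be
# the four child coefficients only (the toy tree has a single level). This
# is an illustrative sketch, not a full SPIHT implementation.

coeff = {
    (0, 0): 26, (0, 1): 6, (1, 0): -7, (1, 1): 7,
    (0, 2): 13, (0, 3): 10, (1, 2): 6, (1, 3): 4,    # D(0,1)
    (2, 0): 4, (2, 1): -4, (3, 0): 2, (3, 1): -2,    # D(1,0)
    (2, 2): 4, (2, 3): -3, (3, 2): -2, (3, 3): 0,    # D(1,1)
}

def children(r, c):
    return [(2 * r, 2 * c), (2 * r, 2 * c + 1),
            (2 * r + 1, 2 * c), (2 * r + 1, 2 * c + 1)]

def encode(passes):
    lip = [(0, 0), (0, 1), (1, 0), (1, 1)]
    lis = [(0, 1), (1, 0), (1, 1)]           # roots of the descendant sets
    lsp, streams = [], []
    n = 4                                    # floor(log2(26)) = 4
    for _ in range(passes):
        T, bits, refine_upto = 1 << n, [], len(lsp)
        for pos in list(lip):                # sorting pass over the LIP
            if abs(coeff[pos]) >= T:
                bits += ['1', '0' if coeff[pos] >= 0 else '1']
                lip.remove(pos); lsp.append(pos)
            else:
                bits.append('0')
        for root in list(lis):               # significance of each set
            kids = children(*root)
            if max(abs(coeff[k]) for k in kids) >= T:
                bits.append('1')
                for k in kids:               # significant set splits into kids
                    if abs(coeff[k]) >= T:
                        bits += ['1', '0' if coeff[k] >= 0 else '1']
                        lsp.append(k)
                    else:
                        bits.append('0'); lip.append(k)
                lis.remove(root)
            else:
                bits.append('0')
        for pos in lsp[:refine_upto]:        # refinement of older LSP entries
            bits.append(str((abs(coeff[pos]) >> n) & 1))
        streams.append(''.join(bits))
        n -= 1
    return streams

streams = encode(2)                          # two passes, as in the walkthrough
```

Running two passes yields the streams 10000000 and 0001101000001 derived in the walkthrough.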


DECODER

An example of decoding the transmitted bit stream above is
shown below.

First Receive
Get n = 4, T0 = 2^n = 16
LIP = {(0,0), (0,1), (1,0), (1,1)}
LIS = {D(0,1), D(1,0), D(1,1)}
LSP = { }
The received bit stream is 10000000 (8 bits).

Process LIP
Get 1: S(0,0) is significant; the next bit is 0, hence the value
is positive. Move (0,0) to LSP, then reconstruct
C(0,0) = (3/2)T0 = (3/2)16 = 24.
Get three 0's: S(0,1), S(1,0) and S(1,1) are insignificant.
Process LIS
Get three 0's: DS(0,1), DS(1,0) and DS(1,1) are insignificant.
LIP = {(0,1), (1,0), (1,1)}
LIS = {D(0,1), D(1,0), D(1,1)}
LSP = {(0,0)}

    24    0    0    0
     0    0    0    0
     0    0    0    0
     0    0    0    0

Fig: 2.4.8 PIXELS AFTER FIRST RECEIVE

Second Receive
Get n = 4 - 1 = 3, T1 = 2^n = 8
LIP = {(0,1), (1,0), (1,1)}
LIS = {D(0,1), D(1,0), D(1,1)}
LSP = {(0,0)}
The received bit stream is 0001101000001 (13 bits).

Process LIP
Get 000: S(0,1), S(1,0) and S(1,1) are insignificant.
Process LIS
Get 1: DS(0,1) is significant.
Get 10: C(0,2) is significant and positive. Move (0,2) to LSP,
then reconstruct C(0,2) = +(3/2)T1 = (3/2)8 = 12.
Get 10: C(0,3) is significant and positive. Move (0,3) to LSP,
then reconstruct C(0,3) = +(3/2)T1 = (3/2)8 = 12.
Get 00: C(1,2) and C(1,3) are insignificant; move them to LIP.
Get 00: DS(1,0) and DS(1,1) are insignificant.
Process LSP
Get 1, then add 2^(n-1) to C(0,0): 24 + 2^2 = 24 + 4 = 28.

    28    0   12   12
     0    0    0    0
     0    0    0    0
     0    0    0    0

Fig: 2.4.9 PIXELS AFTER SECOND RECEIVE

2.5) DECOMPRESSION

The decompression process is exactly the reverse of the
compression process. Decompression involves decoding, which
consists of Huffman decoding or SPIHT decoding. The image is
then reconstructed using the Inverse Wavelet Transform. The
watermark can be retrieved from the LSBs of the output image.

2.6) FIDELITY CRITERIA

During the removal of redundancies, i.e. compression, some
information of interest may be lost, so a repeatable and
reproducible means of quantifying the nature and extent of the
information loss is highly desirable. Two types of criteria are
used to make such an assessment: 1) Objective Fidelity Criteria
and 2) Subjective Fidelity Criteria.

Subjective Fidelity Criteria: Most decompressed images are
ultimately evaluated by human observers, so subjective criteria
measure image quality by showing a decompressed image to a
cross-section of viewers and averaging their evaluations. The
evaluations may be made using an absolute rating scale or by
side-by-side comparison of the original and decompressed images.

Objective Fidelity Criteria: This offers a very simple and
convenient mechanism for evaluating information loss. Here the
level of information loss is expressed as a function of the
original image and the decompressed image. For objective
fidelity we use the MSE (mean square error), PSNR (peak signal
to noise ratio) and CR (compression ratio).

Let f(x, y) and f'(x, y) represent the input and the
decompressed image. For any (x, y) the error e(x, y) is defined
as e(x, y) = f'(x, y) - f(x, y), and the total error between two
images of size M x N is

            M-1 N-1
             Σ   Σ  [f'(x, y) - f(x, y)]
            x=0 y=0
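Returning to the decoding walkthrough above, the two receive steps can be mirrored by a small Python decoder. As with the encoder sketch, this assumes the example's single-level tree; the paper's own implementation is in MATLAB.

```python
# Decoder sketch for the walkthrough above: a significance bit of 1 is
# followed by a sign bit, and the coefficient is reconstructed as
# +/-(3/2)T; a refinement bit adds or subtracts 2^(n-1). List
# initializations follow the text of the example.

def children(r, c):
    return [(2 * r, 2 * c), (2 * r, 2 * c + 1),
            (2 * r + 1, 2 * c), (2 * r + 1, 2 * c + 1)]

def decode(streams, n=4):
    lip = [(0, 0), (0, 1), (1, 0), (1, 1)]
    lis = [(0, 1), (1, 0), (1, 1)]
    lsp, rec = [], {}
    for stream in streams:
        bits, T, refine_upto = iter(stream), 1 << n, len(lsp)
        for pos in list(lip):                # sorting pass over the LIP
            if next(bits) == '1':
                sign = -1 if next(bits) == '1' else 1
                rec[pos] = sign * 3 * T // 2     # (3/2)T midpoint estimate
                lip.remove(pos); lsp.append(pos)
        for root in list(lis):               # set significance bits
            if next(bits) == '1':
                for k in children(*root):
                    if next(bits) == '1':
                        sign = -1 if next(bits) == '1' else 1
                        rec[k] = sign * 3 * T // 2
                        lsp.append(k)
                    else:
                        lip.append(k)
                lis.remove(root)
        for pos in lsp[:refine_upto]:        # refinement of older LSP entries
            rec[pos] += (1 << (n - 1)) if next(bits) == '1' else -(1 << (n - 1))
        n -= 1
    return rec

pixels = decode(["10000000", "0001101000001"])
```

After the first stream the decoder holds only C(0,0) = 24, matching Fig 2.4.8; the second stream refines it to 28 and adds the two 12's at (0,2) and (0,3), matching Fig 2.4.9.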

The root mean square error is the square root of the squared
error averaged over the M x N array:

                       M-1 N-1
RMSE = [ (1/MN)         Σ   Σ  [f'(x, y) - f(x, y)]^2 ]^(1/2)
                       x=0 y=0

If the decompressed image f'(x, y) is considered as the sum of
the original image f(x, y) and a noise signal e(x, y), then the
mean square signal to noise ratio, reported here as PSNR, is
given as

                 M-1 N-1
                  Σ   Σ  [f'(x, y)]^2
                 x=0 y=0
PSNR = -----------------------------------------
                 M-1 N-1
                  Σ   Σ  [f'(x, y) - f(x, y)]^2
                 x=0 y=0

                                               No. of input pixels
The compression ratio is given by CR = ---------------------------
                                               No. of output pixels

In this project we use the above Objective Fidelity Criteria for
the assessment of image quality, and a comparison is made
between Huffman and SPIHT coding w.r.t. MSE, PSNR and CR.

         2.7) INTRODUCTION TO MATLAB 7.0

                  Fig 2.7.1 MATLAB

The software used in this project is MATLAB 7.0. MATLAB stands
for Matrix Laboratory. The very first version of MATLAB, written
at the University of New Mexico and Stanford University in the
late 1970s, was intended for use in matrix theory, linear
algebra and numerical analysis. Later on, with the addition of
several toolboxes, the capabilities of MATLAB were expanded, and
today it is a very powerful tool in the hands of an engineer. It
offers a powerful programming language, excellent graphics, and
a wide range of expert knowledge. MATLAB is published by, and is
a trademark of, The MathWorks, Inc.

The focus in MATLAB is on computation, not mathematics: symbolic
expressions and manipulations are not possible (except through
the optional Symbolic Toolbox, a clever interface to Maple). All
results are not only numerical but inexact, thanks to the
rounding errors inherent in computer arithmetic. The limitation
to numerical computation can be seen as a drawback, but it is a
source of strength too: MATLAB is much preferred to Maple,
Mathematica, and the like when it comes to numerics. On the
other hand, compared to other numerically oriented languages
like C++ and FORTRAN, MATLAB is much easier to use and comes
with a huge standard library. The unfavorable comparison here is
a gap in execution speed. This gap is not always dramatic, and
it can often be narrowed or closed with good MATLAB programming.
Moreover, one can link other codes into MATLAB, or vice versa,
and MATLAB now optionally supports parallel computing. Still,
MATLAB is usually not the tool of choice for maximum-performance
computing.

Typical uses include:
    a) Math and computation
    b) Algorithm development
    c) Modeling, simulation and prototyping
    d) Data analysis, exploration and visualization
    e) Scientific and engineering graphics
    f) Application development, including graphical user
       interface building.

MATLAB is an interactive system whose basic data element is an
array. Perhaps the easiest way to visualize MATLAB is to think
of it as a full-featured calculator. Like a basic calculator, it
does simple math such as addition, subtraction, multiplication
and division. Like a scientific calculator, it handles square
roots, complex numbers, logarithms and trigonometric operations
such as sine, cosine and tangent. Like a programmable
calculator, it can be used to store and retrieve data; you can
create, execute and save sequences of commands, and you can make
comparisons and control the order in which commands are
executed. And finally, as a powerful calculator it allows you to
perform matrix algebra, manipulate polynomials and plot data.
When you start MATLAB, the following window will appear:

             Fig: 2.7.2 Main screen of MATLAB

When you start MATLAB, you get a multipaneled desktop. The
layout and behavior of the desktop and its components are highly
customizable (and may in fact already be customized for your
site). The component that is the heart of MATLAB is called the
Command Window, located on the right by default. Here you can
give MATLAB commands typed at the prompt, >>. Unlike FORTRAN and
other compiled computer languages, MATLAB is an interpreted
environment: you give a command, and MATLAB tries to execute it
right away before asking for another. At the top left you can
see the Current Directory. In general MATLAB is aware only of
files in the current directory (folder) and on its path, which
can be customized. For simple problems, entering commands at the
MATLAB prompt is fast and efficient.

However, as the number of commands increases, or when you wish
to change the value of a variable and then re-evaluate all the
other variables, typing at the command prompt is tedious. MATLAB
provides a logical solution for this: place all your commands in
a text file and then tell MATLAB to evaluate those commands.
These files are called script files or simply M-files. To create
an M-file, choose NEW from the File menu and then choose M-file,
or click the appropriate icon in the command window. Then you
will see this window:

                Fig: 2.7.3 M-File Screen

To run it, go to the command prompt and simply type its name, or
press F5 in the M-file window.

MATLAB is huge. Nobody can tell you everything that you
personally will need to know, nor could you remember it all
anyway. It is essential that you become familiar with the online
help. There are two levels of help:
• If you need quick help on the syntax of a command, use help.
For example, help plot shows right in the Command Window all the
ways in which you can use the plot command. Typing help by
itself gives you a list of categories that themselves yield
lists of commands.
• Typing doc followed by a command name brings up more extensive
help in a separate window. For example, doc plot is better
formatted and more informative than help plot. In the left panel
one sees a hierarchical, browsable display of all the online
documentation. Typing doc alone or selecting Help from the menu
brings up the window at a “root” page.

The heart and soul of MATLAB is linear algebra. In fact, MATLAB
was originally a contraction of “matrix laboratory.” More so
than any other language, MATLAB encourages and expects you to
make heavy use of arrays, vectors, and matrices.

MATLAB is oriented towards minimizing development and
interaction time, not computational time. In some cases even the
best MATLAB code may not keep up with good C code, but the gap
is not always wide. In fact, on core linear algebra routines
such as matrix multiplication and linear system solution, there
is very little practical difference in performance. MATLAB's
language has features that can speed up certain operations, most
commonly those that would require loops in C or FORTRAN.

After you type your commands, save the file with an appropriate
name in the directory "work". Functions are the main way to
extend the capabilities of MATLAB. Compared to scripts, they are
much better at compartmentalizing tasks. Each function starts
with a line such as

function [out1, out2] = myfun(in1, in2, in3)

The variables in1, etc. are input arguments, and out1, etc. are
output arguments. You can have as many of each type as you like
(including zero) and call them whatever you want. The name myfun
should match the name of the disk file.

                 3) METHODOLOGY

Flow diagram showing the methodology of work for the ENCODER:

          Fig: 3.1 ENCODER FLOW DIAGRAM

The flow diagram showing the methodology of work for the
DECODER:
          Fig: 3.2 DECODER FLOW DIAGRAM

                  4) RESULTS

Input image:

The first block is the input image. In this project we used a
part of a satellite image as input, which is shown below.

              Fig: 4.1 ORIGINAL IMAGE

WAVELET TRANSFORM

The Discrete Wavelet Transform was applied to this input image
in two stages. In the first stage only the L and H bands were
formed, as shown below.

    Fig: 4.2 First Stage Discrete Wavelet Analysis

Further, a second stage wavelet transform was applied to create
the LL, LH, HL and HH bands, as shown below.

    Fig: 4.3 Second Stage Discrete Wavelet Transform

Then the embedding process was done using the H* and HH
Elimination methods. The message embedded in this project was
"SIT DEPT"; there is a provision to embed any other message.

              4.1) HUFFMAN RESULTS:

Fig: 4.4 Image before embedding the message using Huffman coding
and LL band (H* Elimination)

Fig: 4.5 Image after embedding the message using Huffman coding
and LL band (H* Elimination)
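The two-stage decomposition described above can be sketched with a simple Haar averaging/differencing pair in Python. This is an illustrative stand-in for the project's MATLAB wavelet routines; the Haar filter choice is an assumption, not necessarily the paper's exact wavelet.

```python
# Minimal two-stage separable Haar decomposition: stage one splits each row
# into L (pairwise averages) and H (pairwise differences); stage two repeats
# the split down the columns of each band, yielding LL, LH, HL and HH.

def haar_rows(img):
    # img: list of equal-length rows with an even number of columns
    L = [[(r[2 * i] + r[2 * i + 1]) / 2 for i in range(len(r) // 2)] for r in img]
    H = [[(r[2 * i] - r[2 * i + 1]) / 2 for i in range(len(r) // 2)] for r in img]
    return L, H

def haar_cols(band):
    cols = [list(c) for c in zip(*band)]     # transpose: columns become rows
    L, H = haar_rows(cols)
    return [list(r) for r in zip(*L)], [list(r) for r in zip(*H)]

def dwt2(img):
    L, H = haar_rows(img)                    # first stage: L and H bands
    LL, LH = haar_cols(L)                    # second stage on each band
    HL, HH = haar_cols(H)
    return LL, LH, HL, HH

LL, LH, HL, HH = dwt2([[4, 2], [2, 0]])      # tiny 2x2 example
```

For the real image the same splits are applied to the full pixel array; in the project itself the MATLAB wavelet routines play this role.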






Further, the embedding was also done using both bands, i.e. LL
and LH (HH Elimination method), and the results were:

Fig: 4.6 Image before embedding the message using Huffman coding
and HH Elimination

Fig: 4.7 Image after embedding using Huffman coding and HH
Elimination

Fig: 4.8 Image after decoding using Huffman decoding and H*
Elimination
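The embedding step itself is not spelled out in detail in this section; Section 2.5's LSB-based retrieval suggests plain least-significant-bit substitution, sketched below. The one-bit-per-pixel layout and MSB-first bit ordering are assumptions made for illustration.

```python
# Generic LSB watermarking sketch: each bit of the ASCII message is written
# into the least-significant bit of one integer pixel (or coefficient) value,
# and extraction reads those bits back. Hypothetical scheme, for illustration.

def embed_lsb(pixels, message):
    # MSB-first bit expansion of the message characters
    bits = [(ord(ch) >> i) & 1 for ch in message for i in range(7, -1, -1)]
    out = list(pixels)
    for i, b in enumerate(bits):             # overwrite one LSB per pixel
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(pixels, length):
    bits = [p & 1 for p in pixels[:8 * length]]
    chars = [sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
             for k in range(0, len(bits), 8)]
    return ''.join(chr(c) for c in chars)

marked = embed_lsb(list(range(60, 200)), "SIT DEPT")
```

Because only LSBs change, each carrier value moves by at most 1, which is why the watermarked images above are visually indistinguishable from the originals.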

Fig: 4.9 Image after reconstruction (IDWT) using Huffman
decoding and H* Elimination

Fig: 4.10 Image after decoding using Huffman decoding and HH
Elimination

Fig: 4.11 Image after reconstruction using Huffman decoding and
HH Elimination

Fig: 4.12 Final output image using Huffman coding and H*
Elimination

Fig: 4.13 Final output image using Huffman coding and HH
Elimination

      4.2) SPIHT CODING AND DECODING RESULTS:

Fig: 4.14 Image before embedding using SPIHT coding and H*
Elimination

Fig: 4.15 Image after embedding using SPIHT coding and H*
Elimination

Fig: 4.16 Image before embedding using SPIHT coding and HH
Elimination

Fig: 4.17 Image after embedding using SPIHT coding and HH
Elimination

Fig: 4.18 Image after decoding using SPIHT and H* Elimination




Fig: 4.19 Image after reconstruction (IDWT) using SPIHT and H*
Elimination

Fig: 4.20 Image after decoding using SPIHT and HH Elimination

Fig: 4.21 Image after reconstruction (IDWT) using SPIHT and HH
Elimination

Fig: 4.22 Final output image using SPIHT coding and H*
Elimination

Fig: 4.23 Final output image using SPIHT coding and HH
Elimination




4.3) COMPARISON OF RESULTS
First, the results of Huffman coding were compared between the H* and HH Elimination methods with respect to MSE, PSNR and CR; the same comparison was then made for SPIHT coding.

Fig: 4.24 Result of compression using Huffman coding and H* Elimination method

The results of Huffman coding with H* Elimination were:
MSE:  4.167
PSNR: 41.94
CR:   5.91

Fig: 4.25 Result of compression using Huffman coding and HH Elimination method

MSE:  0.42
PSNR: 51.89
CR:   2.75

Fig: 4.26 Result of SPIHT coding using H* Elimination

The results were:
MSE:  4.18
PSNR: 41.94
CR:   2.13

Fig: 4.27 Results of SPIHT coding using HH Elimination

MSE:  0.42
PSNR: 51.69
CR:   1.68

Finally, a comparison was made between the two coding techniques, i.e. Huffman coding and SPIHT coding, with respect to encoding and decoding time.
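The fidelity metrics reported above follow their standard definitions. As an illustrative sketch (the project itself was implemented in MATLAB, and the function names below are our own), the three quantities can be computed for 8-bit grayscale images as:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean square error: lower means the compressed image is closer to the original."""
    o = original.astype(np.float64)
    r = reconstructed.astype(np.float64)
    return float(np.mean((o - r) ** 2))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB: higher means the compressed image is closer."""
    e = mse(original, reconstructed)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def compression_ratio(original_bits, compressed_bits):
    """CR = original size / compressed size: higher means more compression."""
    return original_bits / compressed_bits

# Sanity check against the figures above: an MSE of 4.167 on 8-bit data gives
# PSNR = 10*log10(255^2 / 4.167), about 41.93 dB, which matches the reported
# 41.94 dB up to rounding of the MSE.
```

Note that PSNR is fully determined by the MSE, which is why the two metrics always move together in the results above.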





Fig: 4.28 Result of comparison between encoding and decoding time

Huffman coding time: 4.146 sec; Huffman decoding time: 1.807 sec
SPIHT coding time: 1.723 sec; SPIHT decoding time: 0.511 sec

5) A GLANCE THROUGH THE GUIs OF THE PROJECT

This is the main screen of the software. The encoding technique is chosen from this screen.

Fig: 5.1 Main Screen

Next, choose the band, i.e. LL only, or LL and LH, and import the input image by clicking on the Browse button.

Fig: 5.2 Screen after importing the original image

This is the screen after importing the input image. Click on the Message button and type the message that is to be embedded in the input image, then click on the Embed button to embed the message and compress the image. The decoding operation is applied by pressing the Retrieve button, and the reconstructed image is obtained as shown below.

Fig: 5.3 Screen after reconstructing the image using Huffman coding and H* Elimination

The values of MSE, PSNR and CR are then obtained by pressing the Validate button.
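The band choice described above (LL only for H* Elimination, LL and LH for HH Elimination) can also be sketched outside the GUI. The project itself is written in MATLAB and does not state which wavelet filters it uses, so the following Python sketch is a hypothetical equivalent built on a normalized single-level Haar transform:

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2-D Haar wavelet transform -> (LL, LH, HL, HH) sub-bands."""
    x = x.astype(np.float64)
    lo = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row-wise average (low-pass)
    hi = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row-wise difference (high-pass)
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (the IDWT step used for reconstruction)."""
    lo = np.empty((LL.shape[0] * 2, LL.shape[1]))
    lo[0::2, :] = LL + LH
    lo[1::2, :] = LL - LH
    hi = np.empty_like(lo)
    hi[0::2, :] = HL + HH
    hi[1::2, :] = HL - HH
    x = np.empty((lo.shape[0], lo.shape[1] * 2))
    x[:, 0::2] = lo + hi
    x[:, 1::2] = lo - hi
    return x

def eliminate_bands(image, keep='LL'):
    """Zero the discarded sub-bands, then reconstruct.

    keep='LL'    -> H* Elimination (approximation band only).
    keep='LL+LH' -> HH Elimination (approximation plus one detail band).
    Any other value keeps all four bands (lossless round trip).
    """
    LL, LH, HL, HH = haar_dwt2(image)
    z = np.zeros_like(HH)
    if keep == 'LL':
        LH, HL, HH = z, z, z
    elif keep == 'LL+LH':
        HL, HH = z, z
    return haar_idwt2(LL, LH, HL, HH)
```

Zeroing whole sub-bands before entropy coding is what drives the CR up and the PSNR down; the metrics in Section 4.3 quantify that trade-off.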




Fig: 5.4 Screen showing the results of MSE, PSNR and CR using Huffman coding and H* Elimination

These results are cleared by pressing the Clear button, and the above process is repeated by choosing both the LL and LH bands, as shown below.

Fig: 5.5 Screen after importing and embedding the message using HH Elimination

Fig: 5.6 Screen after reconstructing the image using both LL and LH bands and Huffman coding

Fig: 5.7 Screen showing the values of MSE, PSNR and CR using Huffman coding and HH Elimination

SPIHT CODING








Fig: 5.8 Screen after importing the image using the LL band, SPIHT coding and H* Elimination

Fig: 5.9 Screen after reconstructing the image using SPIHT coding and H* Elimination

Fig: 5.10 Screen showing the values of MSE, PSNR and CR of SPIHT coding and H* Elimination

Fig: 5.11 Screen after importing the input image using SPIHT coding and HH Elimination



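The screens above walk through the SPIHT path of the tool; the Huffman path mirrors it. For readers who want to reproduce the Huffman coding stage outside MATLAB, here is a minimal, self-contained sketch over a byte stream. It is illustrative only: the project applies Huffman coding to quantized wavelet coefficients, and the exact symbol model used there is not shown in the paper.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table (symbol -> bit string) for a byte sequence."""
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate case: one distinct symbol
        return {next(iter(freq)): '0'}
    # Heap entries are (frequency, tiebreak, tree); a tree is a symbol or a pair.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)     # the two least frequent subtrees...
        f2, _, t2 = heapq.heappop(heap)
        count += 1
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))  # ...are merged
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + '0')
            walk(tree[1], prefix + '1')
        else:
            codes[tree] = prefix
    walk(heap[0][2], '')
    return codes

def huffman_ratio(data):
    """CR of the coded bit stream vs. raw 8 bits/symbol (code table overhead ignored)."""
    codes = huffman_codes(data)
    coded_bits = sum(len(codes[s]) for s in data)
    return (8 * len(data)) / coded_bits
```

Because the greedy tree yields prefix-free code words, a decoder can scan the bit stream left to right without any separators between symbols.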




Fig: 5.12 Screen after reconstructing the image using SPIHT coding and HH Elimination

Fig: 5.13 Screen showing the values of MSE, PSNR and CR using SPIHT coding and HH Elimination

Finally, the two techniques, i.e. Huffman coding and SPIHT coding, were compared by clicking the Validate button on the main screen, as shown below.

Fig: 5.14 Screen showing the encoding and decoding time for Huffman and SPIHT coding

6) CONCLUSION

The results obtained are put in tabular form for easy comparison.

                  LL BAND                   LL AND LH BANDS
                  MSE     PSNR    CR        MSE     PSNR    CR
HUFFMAN CODING    4.167   41.94   5.91      0.42    51.89   2.75
SPIHT CODING      4.18    41.94   2.13      0.42    51.69   1.68

                  ENCODING TIME    DECODING TIME
HUFFMAN CODING    4.146 sec        1.807 sec
SPIHT CODING      1.723 sec        0.511 sec

When the above results are analyzed, we come to the following conclusions:

a) In both Huffman coding and SPIHT coding, when the H* and HH Elimination techniques are compared, the MSE of about 4.1 falls to about 0.4 when two bands are used, indicating that the signal error decreases as the number of bands increases.

b) Similarly, in both Huffman and SPIHT coding, the PSNR increases when two bands (HH Elimination) are used, indicating that the signal grows relative to the noise as the number of bands increases.

c) When the CR is compared, both techniques show a decrease in compression as the number of bands increases.

d) When the two techniques, i.e. Huffman and SPIHT, are compared, the MSE and PSNR are almost the same for both, whereas SPIHT has a lower CR (compression ratio), i.e. SPIHT gives roughly 50% less compression than Huffman coding, whether the single LL band (H* Elimination) or both the LL and LH bands (HH Elimination) are used.

e) When the two techniques are compared in terms of encoding and decoding time, SPIHT encoding is roughly 2.4 times faster than Huffman encoding (1.723 sec against 4.146 sec), and SPIHT decoding is roughly 3.5 times faster than Huffman decoding (0.511 sec against 1.807 sec).

Finally, we conclude that whenever large compression is needed, only the LL band (H* Elimination) should be used, at the cost of a larger error in the signal; if more clarity of the image is needed, we accept less compression and use both the LL and LH bands (HH Elimination).

Also, when choosing between the two techniques, i.e. Huffman and SPIHT, a compromise must be made between speed and the amount of compression, because SPIHT is very fast but gives less compression, while Huffman is slower but gives more compression.

Therefore, SPIHT is suited to very large images, such as satellite images, where compression can be achieved very fast but with a compromise in compression ratio.

7) FUTURE DEVELOPMENT

Though this project has been tested for attacks on the watermark due to compression, it can further be tested for various other attacks on the watermark, such as noise filtering and other digital image processing operations. The project can also be extended by embedding an image within an image.

Future work on this project would be Visual Cryptography, in which n images are encoded in such a way that only the human visual system can decrypt the hidden message, without any cryptographic computations, when all the shares are stacked together. It is basically the hiding of a colored image in multiple colored cover images. This scheme achieves lossless recovery and reduces the noise in the cover images without adding any computational complexity.

8) LIST OF FIGURES

Fig: 1.1     Encoder
Fig: 1.2     Decoder
Fig: 1.3     H* Elimination Technique
Fig: 1.4     HH Elimination Technique
Fig: 2.1.1   Sine Waves with Different Frequencies
Fig: 2.1.2   FT of a 50 Hz Signal
Fig: 2.1.3   Stationary Signal
Fig: 2.1.4   FT of a Stationary Signal
Fig: 2.1.5   Non-Stationary Signal
Fig: 2.1.6   FT of a Non-Stationary Signal
Fig: 2.2.1   Computation of CWT
Fig: 2.2.2   Sampling Grid
Fig: 2.2.3   1D Wavelet Transform
Fig: 2.2.4   Implementation of 2D Wavelet Transform
Fig: 2.2.5   Single Stage Decomposition
Fig: 2.2.6   Multistage Decomposition
Fig: 2.2.7   Original Image
Fig: 2.2.8   First Stage Discrete Wavelet Transform
Fig: 2.2.9   Second Stage Discrete Wavelet Transform
Fig: 2.2.10  DWT Coefficients
Fig: 2.3.1   Visible Watermarking
Fig: 2.3.2   Invisible Watermarking
Fig: 2.3.3   Classification of Information Hiding Techniques
Fig: 2.4.1   Source Encoder
Fig: 2.4.2   Source Decoder
Fig: 2.4.3   Spatial Orientation Trees
Fig: 2.4.4   A 4x4 Matrix Showing the Pixel Values of a Digital Image
Fig: 2.4.5   Initialization
Fig: 2.4.6   Block Diagram after First Sorting Pass
Fig: 2.4.7   Block Diagram after Second Sorting Pass
Fig: 2.4.9   Pixels after Second Receive
Fig: 2.7.1   MATLAB
Fig: 2.7.2   Main Screen of MATLAB
Fig: 2.7.3   M-File Screen
Fig: 3.1     Encoder Flow Diagram
Fig: 3.2     Decoder Flow Diagram
Fig: 4.1     Original Image
Fig: 4.2     First Stage Discrete Wavelet Analysis
Fig: 4.3     Second Stage Discrete Wavelet Transform
Fig: 4.4     Image before Embedding the Message Using Huffman Coding and H* Elimination
Fig: 4.5     Image after Embedding the Message Using Huffman Coding and H* Elimination
Fig: 4.6     Image before Embedding the Message Using Huffman Coding and HH Elimination
Fig: 4.7     Image after Embedding the Message Using Huffman Coding and HH Elimination
Fig: 4.8     Image after Decoding Using Huffman Decoding Technique and H* Elimination
Fig: 4.9     Image after Reconstruction (IDWT) Using Huffman Decoding and H* Elimination
Fig: 4.10    Image after Decoding Using Huffman Decoding and HH Elimination
Fig: 4.12    Final Output Image Using Huffman Coding and H* Elimination
Fig: 4.13    Final Output Image Using Huffman Coding and HH Elimination
Fig: 4.14    Image before Embedding Using SPIHT Coding and H* Elimination
Fig: 4.16    Image before Embedding Using SPIHT Coding and HH Elimination
Fig: 4.17    Image after Embedding Using SPIHT Coding and HH Elimination
Fig: 4.18    Image after Decoding Using SPIHT and H* Elimination
Fig: 4.19    Image after Reconstruction (IDWT) Using SPIHT and H* Elimination
Fig: 4.20    Image after Decoding Using SPIHT and HH Elimination
Fig: 4.21    Image after Reconstruction (IDWT) Using SPIHT and HH Elimination
Fig: 4.22    Final Output Image Using SPIHT Coding and H* Elimination
Fig: 4.23    Final Output Image Using SPIHT Coding and HH Elimination
Fig: 4.24    Result of Compression Using Huffman Coding and H* Elimination Method
Fig: 4.25    Result of Compression Using Huffman Coding and HH Elimination Method
Fig: 4.26    Result of SPIHT Coding Using H* Elimination
Fig: 4.27    Results of SPIHT Coding Using HH Elimination
Fig: 4.28    Result of Comparison between Encoding and Decoding Time
Fig: 5.1     Main Screen
Fig: 5.2     Screen after Importing the Original Image
Fig: 5.5     Screen after Importing and Embedding the Message Using Huffman and HH Elimination
Fig: 5.6     Screen after Reconstructing the Image Using Huffman Coding and HH Elimination
Fig: 5.7     Screen Showing the Values of MSE, PSNR and CR Using Huffman Coding and HH Elimination
Fig: 5.8     Screen after Importing the Image Using SPIHT Coding and H* Elimination
Fig: 5.9     Screen after Reconstructing the Image Using SPIHT Coding and H* Elimination
Fig: 5.10    Screen Showing the Values of MSE, PSNR and CR of SPIHT Coding Using H* Elimination
Fig: 5.11    Screen after Importing the Input Image Using SPIHT Coding and HH Elimination
Fig: 5.12    Screen after Reconstructing the Image Using SPIHT Coding and HH Elimination
Fig: 5.13    Screen Showing the Values of MSE, PSNR and CR Using SPIHT Coding and HH Elimination
Fig: 5.14    Screen Showing the Encoding and Decoding Time for Huffman and SPIHT Coding

BIBLIOGRAPHY

1) Amir Said and William A. Pearlman, "A New, Fast and Efficient Image Codec Based on Set Partitioning in Hierarchical Trees," IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, no. 3, June 1996.
2) M. Rabbani and P. W. Jones, "Digital Image Compression Techniques," Bellingham, WA: SPIE Opt. Eng. Press, 1991.
3) Jean-Luc Dugelay, Stéphane Roche, Christian Rey and Gwenaël Doerr, "Still Image Watermarking Robust to Local Geometric Distortions," IEEE Transactions on Image Processing, vol. 15, no. 9, September 2006.
4) "The Engineer's Ultimate Guide to Wavelet Analysis" by Robi Polikar.
5) "Digital Image Processing" by Rafael C. Gonzalez and Richard E. Woods.

6) "Crash Course in MATLAB" by Tobin A. Driscoll.
7) "Digital Watermarking of Images and Wavelets" by Alexandru Isar, Electronics and Telecommunications Faculty, "Politehnica" University, Timișoara.
8) "Introduction to Graphical User Interface (GUI) MATLAB 6.5" by Refaat Yousef Al Ashi, Ahmed Al Ameri and Prof. Abdulla Ismail A.
9) "Digital Watermarking Technology" by Dr. Martin Kutter and Dr. Frédéric Jordan.
10) "SPIHT Image Compression" (SPIHT description page).
11) "Watermarking Applications and Their Properties" by Ingemar J. Cox, Matt L. Miller and Jeffrey A. Bloom, NEC Research Institute.
12) "Watermarking of Digital Images" by Dr. Azizah A. Manaf and Akram M. M. Zeki, University Technology Malaysia.
13) "Digital Image Processing" by A. K. Jain.
14) "Digital Image Processing: A Remote Sensing Perspective" by John R. Jensen.
15) T. M. Lillesand and R. W. Kiefer, "Remote Sensing and Image Interpretation," New York, 1994.
16) "Wavelet Transforms: Introduction to Theory and Applications" by Rao and Bopardikar, Addison Wesley, 1998.
17) "Wavelets and Filter Banks" by Gilbert Strang and Truong Nguyen, Wellesley-Cambridge Press, 1997.
18) "Introduction to Wavelets and Wavelet Transforms: A Primer" by Burrus, Gopinath and Guo.
19) "Digital Image Processing" by Milan Soni.
20) "Digital Image Processing: A Remote Sensing Perspective" by John R. Jensen, 2nd edition.



