(IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No. 9, September 2011


                  Lossless Image Compression For
               Transmitting Over Low Bandwidth Line

G. Murugan, Research Scholar, Singhania University & Sri Venkateswara College of Engg, Thiruvallur
Dr. E. Kannan, Supervisor, Singhania University and Dean Academic, Veltech University
S. Arun, Asst. Professor, ECE Dept., Veltech High Engg College, Chennai (email: yesarun001@yahoo.com)



Abstract
          The aim of this paper is to develop an effective lossless compression technique that converts an original image into a compressed one without changing the clarity of the original. Lossless image compression is a class of image compression algorithms that allows the exact original image to be reconstructed from the compressed data.
          We present a compression technique that provides progressive transmission as well as lossless and near-lossless compression in a single framework. The proposed technique produces a bit stream that results in a progressive and ultimately lossless reconstruction of an image, similar to what one can obtain with a reversible wavelet codec. In addition, the proposed scheme provides near-lossless reconstruction with respect to a given bound after decoding of each layer of the successively refinable bit stream. We formulate the image data compression problem as one of successively refining the probability density function (pdf) estimate of each pixel. Experimental results for both the lossless and near-lossless cases indicate that the proposed compression scheme, which innovatively combines lossless, near-lossless and progressive coding attributes, gives competitive performance in comparison to state-of-the-art compression schemes.

1. INTRODUCTION
          Lossless or reversible compression refers to compression techniques in which the reconstructed data exactly matches the original. Near-lossless compression denotes compression methods which give quantitative bounds on the nature of the loss that is introduced. Such compression techniques provide the guarantee that no pixel difference between the original and the compressed image is above a given value [1]. Both lossless and near-lossless compression find potential applications in remote sensing, medical and space imaging, and multispectral image archiving. In these applications the volume of the data would call for lossy compression for practical storage or transmission. However, the necessity to preserve the validity and precision of data for subsequent reconnaissance or diagnosis operations, forensic analysis, as well as scientific or clinical measurements, often imposes strict constraints on the reconstruction error. In such situations near-lossless compression becomes a viable solution, as, on the one hand, it provides significantly higher compression gains vis-à-vis lossless algorithms, and on the other hand it provides guaranteed bounds on the nature of the loss introduced by compression.
          Another way to deal with the lossy-lossless dilemma faced in applications such as medical imaging and remote sensing is to use a successively refinable compression technique that provides a bit stream that leads to a progressive reconstruction of the image. Using wavelets, for example, one can obtain an embedded bit stream from which various levels of rate and distortion can be obtained. In fact, with reversible integer wavelets, one gets a progressive reconstruction capability all the way to lossless recovery of the original. Such techniques have been explored for potential use in tele-radiology, where a physician typically requests portions of an image at increased quality (including lossless reconstruction) while accepting initial renderings and unimportant portions at lower quality, thus reducing the overall bandwidth requirements. In fact, the new still image compression standard, JPEG 2000, provides such features in its extended form [2].
          In this paper, we present a compression technique that incorporates the above two desirable characteristics, namely, near-lossless compression and progressive refinement from lossy to lossless reconstruction. In other words, the proposed technique produces a bit stream that results in a progressive reconstruction of the image similar to what one can obtain with a reversible wavelet codec. In addition, our scheme provides near-lossless (and lossless) reconstruction with respect to a given bound after each layer of the successively refinable bit stream is decoded. Note, however, that these bounds need to be set at compression time and cannot be changed during decompression. The compression performance provided by the proposed technique is comparable to the best-known lossless and near-lossless techniques proposed in the literature. It should be noted that, to the best knowledge of the authors, this is the first technique reported in the literature that provides lossless and near-lossless compression as well as progressive reconstruction all in a single framework.

2. METHODOLOGY





2.1 COMPRESSION TECHNIQUES

         LOSSLESS COMPRESSION
     Data is compressed and can be reconstituted (uncompressed) without loss of detail or information. These are also referred to as bit-preserving or reversible compression systems [11].

         LOSSY COMPRESSION
     The aim is to obtain the best possible fidelity for a given bit-rate, or to minimize the bit-rate required to achieve a given fidelity measure. Video and audio compression techniques are most suited to this form of compression [12].

         • If an image is compressed it clearly needs to be uncompressed (decoded) before it can be viewed or listened to; some processing of data may, however, be possible in encoded form.
         • Lossless compression frequently involves some form of entropy encoding and is based on information-theoretic techniques.
         • Lossy compression uses source encoding techniques that may involve transform encoding, differential encoding or vector quantisation.

     Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. This is because lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are especially suitable for natural images such as photographs in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces imperceptible differences may be called visually lossless.
2.2 METHODS FOR LOSSLESS IMAGE COMPRESSION
        • Run-length encoding – used as the default method in PCX and as one of the possible methods in BMP, TGA and TIFF (a minimal sketch of this method follows the list)
        • DPCM and predictive coding
        • Entropy encoding
        • Adaptive dictionary algorithms such as LZW – used in GIF and TIFF
        • Deflation – used in PNG, MNG, and TIFF
        • Chain codes

2.3 METHODS FOR LOSSY COMPRESSION
        • Reducing the color space to the most common colors in the image. The selected colors are specified in the color palette in the header of the compressed image, and each pixel just references the index of a color in the color palette. This method can be combined with dithering to avoid posterization.
        • Chroma sub-sampling. This takes advantage of the fact that the human eye perceives spatial changes of brightness more sharply than those of color, by averaging or dropping some of the chrominance information in the image (see the sketch after this list).
        • Transform coding. This is the most commonly used method. A Fourier-related transform such as the DCT or the wavelet transform is applied, followed by quantization and entropy coding.
        • Fractal compression.
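To make the chroma sub-sampling item above concrete, this sketch (ours; a generic 4:2:0-style average, not the exact scheme of any particular format) halves a chroma plane in both directions by averaging each 2x2 block, while the luma plane would be kept at full resolution:

    import numpy as np

    def subsample_chroma_420(chroma):
        """Average every 2x2 block of a chroma plane, halving both dimensions."""
        h, w = chroma.shape
        c = chroma[:h - h % 2, :w - w % 2].astype(np.float64)
        return (c[0::2, 0::2] + c[0::2, 1::2] + c[1::2, 0::2] + c[1::2, 1::2]) / 4.0

    u_plane = np.random.randint(0, 256, size=(8, 8))
    u_small = subsample_chroma_420(u_plane)   # 4x4: a quarter of the chroma samples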
2.4 COMPRESSION
         The process of coding that will effectively reduce the total number of bits needed to represent certain information.

         Fig. 1. A general data compression scheme: input → encoder (compression) → storage or networks → decoder (decompression).

         Fig. 2. Lossy image compression result.

         Fig. 3. Lossless image compression ratio.






Fig. 4. Lossy and lossless comparison ratio.

    3. HUFFMAN CODING
         Huffman coding is based on the frequency of occurrence of a data item (a pixel, in images). The principle is to use a lower number of bits to encode the data that occurs more frequently. Codes are stored in a code book, which may be constructed for each image or for a set of images. In all cases the code book plus the encoded data must be transmitted to enable decoding.
         The Huffman algorithm is briefly summarised below; it is a bottom-up approach:
         1. Initialization: put all nodes in an OPEN list and keep it sorted at all times (e.g., ABCDE).
         2. Repeat until the OPEN list has only one node left:
         (a) From OPEN pick the two nodes having the lowest frequencies/probabilities and create a parent node for them.
         (b) Assign the sum of the children's frequencies/probabilities to the parent node and insert it into OPEN.
         (c) Assign codes 0 and 1 to the two branches of the tree, and delete the children from OPEN.
        The following points are worth noting about the above algorithm. Decoding is trivial as long as the coding table (the statistics) is sent before the data; there is a small overhead for sending this, negligible if the data file is big.

Unique Prefix Property
           No code is a prefix of any other code (all symbols are at the leaf nodes), which is great for the decoder: decoding is unambiguous. If prior statistics are available and accurate, then Huffman coding is very good.

3.1 HUFFMAN CODING OF IMAGES

In order to encode images (a small sketch of this scheme follows section 3.2 below):
        • Divide the image up into 8x8 blocks
        • Each block is a symbol to be coded
        • Compute Huffman codes for the set of blocks
        • Encode the blocks accordingly

3.2 HUFFMAN CODING ALGORITHM

No Huffman code is the prefix of any other Huffman code, so decoding is unambiguous.
    •   The Huffman coding technique is optimal (but we must know the probabilities of each symbol for this to be true)
    •   Symbols that occur more frequently have shorter Huffman codes
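A minimal, runnable sketch of sections 3–3.1 (our own illustration, not code from the paper): it builds a Huffman code book from symbol frequencies with a heap-based version of the bottom-up procedure above, treating each 8x8 block, flattened to a tuple, as one symbol.

    import heapq
    from collections import Counter
    from itertools import count

    def huffman_code_book(symbols):
        """Build a {symbol: bit string} code book from an iterable of symbols."""
        freq = Counter(symbols)
        if len(freq) == 1:                       # degenerate case: a single symbol
            return {next(iter(freq)): "0"}
        tie = count()                            # tie-breaker so the heap never compares symbols
        heap = [(f, next(tie), {s: ""}) for s, f in freq.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            f0, _, c0 = heapq.heappop(heap)      # the two lowest-frequency nodes
            f1, _, c1 = heapq.heappop(heap)
            merged = {s: "0" + code for s, code in c0.items()}
            merged.update({s: "1" + code for s, code in c1.items()})
            heapq.heappush(heap, (f0 + f1, next(tie), merged))
        return heap[0][2]

    # Treat each 8x8 block as one symbol by flattening it to a tuple (section 3.1).
    blocks = [tuple([0] * 64), tuple([0] * 64), tuple([255] * 64)]
    book = huffman_code_book(blocks)
    bitstream = "".join(book[b] for b in blocks)   # the encoded blocks

The resulting codes satisfy the unique prefix property, so the bit stream can be decoded unambiguously once the code book has been transmitted.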






4. LEMPEL-ZIV-WELCH (LZW) ALGORITHM

   The LZW compression algorithm can be summarised as follows:

      w = NIL;
      while ( read a character k )
          {
                    if wk exists in the dictionary
                              w = wk;
                    else
                    {
                              output the code for w;
                              add wk to the dictionary;
                              w = k;
                    }
          }
      output the code for w;

   The LZW decompression algorithm is as follows:

      read a character k;
          output k;
          w = k;
          while ( read a character k )
       /* k could be a character or a code. */
                {
                         entry = dictionary entry for k;
                         /* if k is not yet in the dictionary, entry = w + w[0] */
                         output entry;
                         add w + entry[0] to the dictionary;
                         w = entry;
                }
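A compact, runnable version of the two routines above (our sketch, operating on byte strings with the usual 256-entry initial dictionary):

    def lzw_compress(data):
        """Return the list of LZW codes for a byte string."""
        dictionary = {bytes([i]): i for i in range(256)}
        w, codes = b"", []
        for byte in data:
            wk = w + bytes([byte])
            if wk in dictionary:
                w = wk
            else:
                codes.append(dictionary[w])
                dictionary[wk] = len(dictionary)      # add the new string wk
                w = bytes([byte])
        if w:
            codes.append(dictionary[w])               # flush the final string
        return codes

    def lzw_decompress(codes):
        """Invert lzw_compress."""
        dictionary = {i: bytes([i]) for i in range(256)}
        w = dictionary[codes[0]]
        out = [w]
        for k in codes[1:]:
            # k may refer to the entry being built in this very step (entry = w + w[0]).
            entry = dictionary.get(k, w + w[:1])
            out.append(entry)
            dictionary[len(dictionary)] = w + entry[:1]
            w = entry
        return b"".join(out)

    sample = b"TOBEORNOTTOBEORTOBEORNOT"
    assert lzw_decompress(lzw_compress(sample)) == sample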
4.2 ENTROPY ENCODING

       • Huffman coding maps fixed-length symbols to variable-length codes. It is optimal only when the symbol probabilities are powers of 2.
       • Arithmetic coding maps an entire message to a range of real numbers based on the symbol statistics. It is theoretically optimal for long messages, but its optimality depends on the data model; it can also be CPU/memory intensive.
       • Lempel-Ziv-Welch is a dictionary-based compression method. It maps a variable number of symbols to a fixed-length code.
       • Adaptive algorithms do not need a priori estimation of probabilities, which makes them more useful in real applications.

4.2.1 LOSSLESS JPEG

   •   JPEG offers both lossy (common) and lossless (uncommon) modes.
   •   The lossless mode is much different from the lossy mode (and also gives much worse compression).
   •   It was added to the JPEG standard for completeness.
   •   Lossless JPEG employs a predictive method combined with entropy coding.
   •   The prediction for the value of a pixel (greyscale or color component) is based on the values of up to three neighbouring pixels.
   •   One of 7 predictors is used (choose the one which gives the best result for this pixel), as listed in the table below.
             PREDICTOR        PREDICTION
                 P1               A
                 P2               B
                 P3               C
                 P4               A + B - C
                 P5               A + (B - C)/2
                 P6               B + (A - C)/2
                 P7               (A + B)/2

   Table 1. Lossless JPEG predictors (A = left neighbour, B = above, C = above-left).

   •   Now code the pixel as the pair (predictor used, difference from the predicted value).
   •   Code this pair using a lossless method such as Huffman coding.
             The difference is usually small, so entropy coding gives good results.
             Only a limited number of predictors can be used on the edges of the image.
   A minimal sketch of this predictive step follows.





5. LOSSY AND LOSSLESS ALGORITHMS

         TREC includes both lossy and lossless compression algorithms. The lossless algorithm is used to compress data for the Windows desktop, which needs to be reproduced exactly as it is decompressed. The lossy algorithm is used to compress 3D image and texture data, where some loss of detail is tolerable.
         Let me just explain the point about the Windows desktop, since it is perhaps not obvious why it is mentioned at all. A Talisman video card in a PC is not only going to be producing 3D scenes but also the usual desktop for a Windows platform. Since there is no frame buffer, the entire desktop needs to be treated as a sprite, which in effect forms a background scene on which 3D windows might be superimposed. Obviously we want to use as little memory as possible to store the Windows desktop image, so it makes sense to try to compress it, but it is also vital that we do not distort any of the pixel data, since it is possible that an application might want to read back a pixel it just wrote to the display via GDI. So some form of lossless algorithm is vital when compressing the desktop image.

5.1 LOSSLESS COMPRESSION

         Let us take a look at how the lossless compression algorithm works first, as it is the simpler of the two. Figure 4.1 shows a block diagram of the compression process.

         Fig. 4.1. The lossless compression process: RGBA data → RGB-to-YUV conversion → prediction → Huffman/RLE encoding → compressed data.

         The RGB data is first converted to a form of YUV. Using a YUV color space instead of RGB provides for better compression. The actual YUV data is peculiar to the TREC algorithm and is derived as follows:

                     Y = G
                     U = R - G
                     V = B - G

         The conversion step from RGB to YUV is optional. Following YUV conversion is a prediction step which takes advantage of the fact that an image such as a typical Windows desktop has a lot of vertical and horizontal lines as well as large areas of solid color. Prediction is applied to each of the R, G, B and alpha values separately. For a given pixel p(x, y), its predicted value d(x, y) is given by

                     d(0, 0) = p(0, 0)
                     d(0, y) = p(0, y) - p(0, y-1)    for y > 0
                     d(x, y) = p(x, y) - p(x-1, y)    for x > 0

         The output values from the predictor are fed into a Huffman/RLE encoder which uses a set of fixed code tables. The encoding algorithm is the same as that used in JPEG for encoding the AC coefficients (see ISO International Standard 10918, "Digital Compression and Coding of Continuous-Tone Still Images"). The Huffman/RLE encoder outputs a series of variable-length code words. These code words describe the length, from 0 to 15, of a run of zeroes before the next coefficient and the number of additional bits required to specify the sign and mantissa of the next non-zero coefficient. The sign and mantissa of the non-zero coefficient then follow the code word.
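A small NumPy sketch (our illustration, not TREC code) of the colour transform and prediction step just described, with planes indexed as plane[y, x]:

    import numpy as np

    def rgb_to_trec_yuv(r, g, b):
        """Reversible colour transform: Y = G, U = R - G, V = B - G."""
        r, g, b = (p.astype(np.int32) for p in (r, g, b))
        return g, r - g, b - g

    def trec_predict(plane):
        """Difference values d: first column differenced vertically, the rest horizontally."""
        p = plane.astype(np.int32)        # plane indexed as plane[y, x]
        d = np.empty_like(p)
        d[0, 0] = p[0, 0]
        d[1:, 0] = p[1:, 0] - p[:-1, 0]   # d(0, y) = p(0, y) - p(0, y-1)
        d[:, 1:] = p[:, 1:] - p[:, :-1]   # d(x, y) = p(x, y) - p(x-1, y)
        return d

On flat regions and along horizontal edges most differences are zero, which is exactly what the fixed-table Huffman/RLE stage exploits.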
                                                                               through the Huffman stage. The quantization step reduces the
5.2 LOSSLESS DECOMPRESSION

        Decompressing an image produced by the lossless compression algorithm follows the steps shown in figure 4.2.

        Fig. 4.2. The lossless decompression process: compressed data → Huffman/RLE decoding → inverse prediction → YUV-to-RGB conversion → RGBA data.

        The encoded data is first decoded using a Huffman decoder with fixed code tables. The data from the Huffman decoder is then passed through the inverse of the prediction filter used in compression. For the predicted values d(x, y), the output pixel values p(x, y) are given by:

                   p(0, 0) = d(0, 0)
                   p(0, y) = p(0, y-1) + d(0, y)    for y > 0
                   p(x, y) = p(x-1, y) + d(x, y)    for x > 0

        The final step is to convert the YUV-like data back to RGB using:

                   R = Y + U
                   G = Y
                   B = Y + V
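A matching sketch of the inverse prediction step (ours), which undoes trec_predict from the previous sketch and can be used to verify that the prediction stage is exactly reversible:

    import numpy as np

    def trec_unpredict(d):
        """Invert trec_predict: cumulative sums down the first column, then along each row."""
        d = d.astype(np.int32)
        p = np.empty_like(d)
        p[:, 0] = np.cumsum(d[:, 0])      # p(0, y) = p(0, y-1) + d(0, y)
        p[:, 1:] = d[:, 1:]
        p = np.cumsum(p, axis=1)          # p(x, y) = p(x-1, y) + d(x, y)
        return p

    plane = np.random.randint(0, 256, size=(16, 16))
    # trec_predict is the function from the sketch in section 5.1.
    assert np.array_equal(trec_unpredict(trec_predict(plane)), plane)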
5.3 LOSSY COMPRESSION

The lossy compression algorithm is perhaps more interesting, since it achieves much higher degrees of compression than the lossless algorithm and is used more extensively in compressing the 3D images we are interested in. Figure 4.3 shows the compression steps.

        Fig. 4.3. The lossy compression process: RGBA data → RGB-to-YUV conversion → forward DCT → zig-zag ordering → quantization (type and factors) → Huffman/RLE encoding → compressed data.

The first step is to convert the RGB data to a form of YUV called YOrtho using the following:

                   Y = (4R + 4G + 4B) / 3 - 512
                   U = R - G
                   V = (4B - 2R - 2G) / 3

         Note that the alpha value is not altered by this step. The next step is to apply a two-dimensional Discrete Cosine Transform (DCT) to each color and alpha component. This produces a two-dimensional array of coefficients for a frequency-domain representation of each color and alpha component. The next step is to rearrange the order of the coefficients so that low DCT frequencies tend to occur at low positions in a linear array. This tends to place zero coefficients in the upper end of the array and has the effect of simplifying the following quantization step and improving compression through the Huffman stage. The quantization step reduces the number of possible DCT coefficient values by doing an integer divide. Higher frequencies are divided by higher factors because the eye is less sensitive to quantization noise in the higher frequencies. The quantization factor can vary from 2 to 4096; using a factor of 4096 produces zeros for all input values. Each color and alpha plane has its own quantization factor. Reducing the detail in the frequency domain by quantization leads to better compression at the expense of lost detail in the image. The quantized data is then Huffman encoded using the same process as was described for lossless compression.
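The sketch below (ours) illustrates the forward DCT, zig-zag ordering and quantization for a single 8x8 block; the DCT basis is built inline so the example is self-contained, and the single quantization factor q is illustrative rather than the per-plane factors TREC actually uses.

    import numpy as np

    N = 8
    # Orthonormal DCT-II basis matrix.
    C = np.array([[np.sqrt((1.0 if k == 0 else 2.0) / N) * np.cos((2 * n + 1) * k * np.pi / (2 * N))
                   for n in range(N)] for k in range(N)])

    # Zig-zag scan order: anti-diagonals, so low frequencies come first in the linear array.
    ZIGZAG = sorted(((i, j) for i in range(N) for j in range(N)),
                    key=lambda ij: (ij[0] + ij[1], ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))

    def block_to_quantized(block, q=16):
        """Forward 2-D DCT, zig-zag scan and quantization of one 8x8 block."""
        coeffs = C @ block.astype(np.float64) @ C.T           # frequency-domain coefficients
        scanned = np.array([coeffs[i, j] for i, j in ZIGZAG])  # low frequencies first
        return np.round(scanned / q).astype(np.int32)          # divide by the quantization factor

    block = np.random.randint(0, 256, size=(8, 8))
    quantized = block_to_quantized(block)    # zeros cluster at the high-frequency end

The larger the factor q, the more coefficients collapse to zero, and the longer the zero runs handed to the Huffman/RLE stage.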
5.4 LOSSY DECOMPRESSION

         The decompression process for images compressed using the TREC lossy compression algorithm is shown in figure 4.4.

         Fig. 4.4. The lossy decompression process: compressed data → Huffman/RLE decoding → inverse quantization (type and factors, LOD parameters) → zig-zag reordering → inverse DCT → YUV-to-RGB conversion → RGBA data.

         The decompression process is essentially the reverse of that used for compression, except for the inverse quantization stage. At this point a level of detail (LOD) parameter can be used to determine how much detail is required in the output image. Applying an LOD filter during decompression is useful when reducing the size of an image. The LOD filter removes the higher-frequency DCT coefficients, which helps avoid aliasing in the output image when simple pixel sampling is being used to access the source pixels.
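As a rough illustration of such a LOD filter (our sketch; the TREC specification does not prescribe one), the function below keeps only the lowest keep x keep DCT coefficients of a block before the inverse DCT is applied:

    import numpy as np

    def lod_filter(coeffs, keep=4):
        """Zero every DCT coefficient outside the top-left keep x keep corner of the block."""
        filtered = np.zeros_like(coeffs)
        filtered[:keep, :keep] = coeffs[:keep, :keep]   # keep only the low frequencies
        return filtered

    # reconstructed = C.T @ lod_filter(coeffs) @ C   # inverse 2-D DCT (C from the sketch in 5.3)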






       Note that level-of-detail filtering is not a part of the TREC specification, and not all TREC decompressors will implement it.

6. EXPERIMENTAL RESULTS

We present experimental results based on the following steps:

Step 1. Lossless compression
Step 2. Lossless decompression
Step 3. Lossless image compression using Huffman coding
Step 4. Lossless image decompression using Huffman coding
Step 5. Lossless image compression for transmitting over a low bandwidth line
                                                                        Comms. of the ACM, 34(4):30– 44, April 1991.
7. CONCLUSIONS

This work has shown that the compression of images can be improved by considering spectral and temporal correlations as well as spatial redundancy. The efficiency of temporal prediction was found to be highly dependent on the individual image sequence. Given the results from earlier work that found temporal prediction to be more useful for images, we can conclude that the relatively poor performance of temporal prediction for some sequences is due to spectral prediction being more efficient than temporal prediction. Another finding from this work is that the extra compression available can be achieved without necessitating a large increase in decoder complexity. Indeed, the presented scheme has a decoder that is less complex than many lossless image compression decoders, due mainly to the use of forward rather than backward adaptation.
 Although this study considered a relatively large set of test image sequences compared to other such studies, more test sequences are needed to determine the extent of sequences for which temporal prediction is more efficient than spectral prediction.
8. REFERENCES
[1] N. Memon and K. Sayood. Lossless image compression: A comparative study. Proc. SPIE Still-Image Compression, 2418:8–20, March 1995.
[2] N. Memon and K. Sayood. Lossless compression of RGB color images. Optical Engineering, 34(6):1711–1717, June 1995.
[3] S. Assche, W. Philips, and I. Lemahieu. Lossless compression of pre-press images using a novel color decorrelation technique. Proc. SPIE Very High Resolution and Quality Imaging III, 3308:85–92, January 1998.
[4] N. Memon, X. Wu, V. Sippy, and G. Miller. An interband coding extension of the new lossless JPEG standard. Proc. SPIE Visual Communications and Image Processing, 3024:47–58, January 1997.
[5] N. Memon and K. Sayood. Lossless compression of video sequences. IEEE Trans. on Communications, 44(10):1340–1345, October 1996.
[6] S. Martucci. Reversible compression of HDTV images using median adaptive prediction and arithmetic coding. Proc. IEEE International Symposium on Circuits and Systems, pages 1310–1313, 1990.
[7] M. Weinberger, G. Seroussi, and G. Sapiro. LOCO-I: A low complexity, context-based, lossless image compression algorithm. Proc. Data Compression Conference, pages 140–149, March 1996.
[8] ITU-T Rec. T.84 - Information Technology. Digital compression and coding of continuous-tone still images: Extensions, July 1996.
[9] M. Weinberger, J. Rissanen, and R. Arps. Applications of universal context modeling to lossless compression of gray-scale images. IEEE Trans. on Image Processing, 5(4):575–586, April 1996.
[10] G. Wallace. The JPEG still picture compression standard. Comms. of the ACM, 34(4):30–44, April 1991.
[11] ISO/IEC 14495-1, ITU Recommendation T.87, "Information technology - Lossless and near-lossless compression of continuous-tone still images," 1999.
[12] M. J. Weinberger, G. Seroussi, and G. Sapiro, "LOCO-I: A low complexity lossless image compression algorithm." ISO/IEC JTC1/SC29/WG1 document N203, July 1995.



