(IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No. 6, 2011

Unified Fast Algorithm for Most Commonly used Transforms using Mixed Radix and Kronecker Product
Dr. H. B. Kekre, Senior Professor, Department of Computer Science, Mukesh Patel School of Technology Management and Engineering, Mumbai, India
Dr. Tanuja Sarode, Associate Professor, Department of Computer Science, Thadomal Shahani College of Engineering, Mumbai, India
Rekha Vig, Asst. Prof. and Research Scholar, Dept. of Elec. and Telecom., Mukesh Patel School of Technology Management and Engineering, Mumbai, India



Abstract— In this paper we present a unified algorithm, with some minor modifications, applicable to most of the commonly used transforms. There are many transforms used in signal and image processing for data compression and many other applications. Many authors have given different algorithms for reducing the complexity to increase the speed of computation, and these algorithms have been developed at different points of time. The paper also shows how the mixed radix system of counting can be used along with the Kronecker product of matrices, leading to a fast algorithm that reduces the complexity to logarithmic order. The results of using such transforms are shown for both 1-D and 2-D (image) signals, and considerable compression is observed in each case.

Keywords-Orthogonal transforms, Data compression, Fast algorithm, Kronecker product, Decimation in Time, Decimation in Frequency, mixed radix system of counting

                    I. INTRODUCTION
Image transforms play an important role in digital image processing as a theoretical and implementation tool in numerous tasks, notably in digital image filtering, restoration, encoding, compression and analysis [1]. Image transforms are often linear ones. If they are represented by a transform matrix T, then (1) represents the transformation,

    F = [T] f                                                        (1)

where f and F are the original and the transformed image respectively. Unitary transforms are also energy conserving transforms, so that Σ_i |f_i|^2 = Σ_k |F_k|^2; thus they are used for data compression using energy compaction in the transformed elements. In most cases the transform matrices are unitary, i.e.

    T^-1 = T^t                                                       (2)

The columns of T^t are the basis vectors of the transform. In case of 2-D transforms, the basis vectors correspond to the basis images. Thus a transform decomposes a digital image into a weighted sum of basis images.

    The precursor of the transforms was the Fourier series, used to express functions on finite intervals. It was given by Joseph Fourier, the French mathematician and physicist who initiated the Fourier series and its applications to problems of heat transfer and vibrations [8]. Using the Fourier series, just about any practical function of time can be represented as a sum of sines and cosines, each suitably scaled, shifted and "squeezed" or "stretched". Later the Fourier transform was developed to remove the requirement of finite intervals and to accommodate all types of signals [3]. The Laplace transform technique followed, which converted the frequency representation into a two-dimensional s-plane, what is termed the "complex frequency" domain.

    The DFT is a transform for Fourier analysis of finite-domain discrete-time functions, which only evaluates enough frequency components to reconstruct the finite segment that is analyzed. Variants of the discrete Fourier transform were used by Alexis Clairaut [30] in 1754 to compute an orbit, which has been described as the first formula for the DFT, and in 1759 by Joseph Louis Lagrange [30], in computing the coefficients of a trigonometric series for a vibrating string. The data which both considered had periodic patterns and were discrete samples of an unknown function, and since the approximating functions were finite sums of trigonometric functions, their work led to some of the earliest expressions of the discrete Fourier transform [8]. Technically, Clairaut's work was a cosine-only series (a form of discrete cosine transform), while Lagrange's work was a sine-only series (a form of discrete sine transform [10]); a true cosine+sine DFT was used by Gauss [7] in 1805 for trigonometric interpolation of asteroid orbits.

    Equally significant is a small calculation buried in Gauss' treatise on interpolation that appeared posthumously in 1866 as an unpublished paper, which shows the first clear and indisputable use of the fast Fourier transform (FFT) [5][6], an algorithm generally attributed to Cooley and Tukey [4] in 1965. It is a very efficient algorithm for calculating the discrete Fourier transform, before which the use of the DFT, though useful in many applications, was very limited.




    Digital applications becoming more popular with the advent of computers led to the use of square waves as basis functions to represent digital waveforms. Rademacher and J. L. Walsh [11] independently presented the first use of square functions, which led to the development of more transforms based on square functions, e.g. the Haar [13] and Walsh transforms. All of these have fast algorithms for their calculation and hence are used extensively. Hadamard [12, 27] matrices having elements +1 and -1 are also used as transforms. The other most commonly used transforms are the Group Theoretic transforms [29], the Slant transform [14], the KLT and the fast KLT [9, 15].

    These transforms are used in various applications, and different transforms may be more suitable in different applications. The applications include image analysis [1, 22, 23], image filtering [1], image segmentation [21], image reconstruction [1, 16], image restoration [1], image compression [1, 16, 17-20, 24-26], scaling operations [2], pattern analysis and recognition [28], etc.

    In this paper we present a general fast transform algorithm for the mixed radix system, from which not only all other fast transform algorithms can be derived, but from which one can also generate composite transforms with fast algorithms. Key to this fast algorithm is the Kronecker product of matrices. Image transforms such as the DFT, sine, cosine, Hadamard, Haar and slant transforms can be factored as Kronecker products of several smaller sized matrices. This makes it possible to develop fast algorithms for their implementation. The next section describes the Kronecker product and its properties.

                    II. KRONECKER PRODUCT OF MATRICES

A. Kronecker Product
The Kronecker product of two matrices A and B is defined as

    C = A ⊗ B = [a_ij B]                                             (3)

where C is m1n1 x m2n2, A is m1 x m2 and B is n1 x n2. The matrix [C] is given by

            | a_11 B   a_12 B   ......   a_1m B |
            | a_21 B   a_22 B   ......   a_2m B |
    [C]  =  |   .        .         .        .   |                    (4)
            |   .        .         .        .   |
            | a_m1 B   a_m2 B   ......   a_mm B |

For the matrix C to be orthogonal, the matrices A and B both have to be orthogonal. Now if AA^t = µ_A, a diagonal matrix, and BB^t = µ_B, a diagonal matrix, then

    CC^t = µ_A ⊗ µ_B = µ_C                                           (5)

is also a diagonal matrix. To get this result, use the mixed-product rule

    (A ⊗ B)(P ⊗ Q) = AP ⊗ BQ                                         (6)

Thus if

    F = [C] f                                                        (7)

then

    f = C^t µ_C^-1 F                                                 (8)

where µ_C^-1 = [CC^t]^-1.

B. Properties of Kronecker Product
 1. (A + B) ⊗ C = A ⊗ C + B ⊗ C
 2. (A ⊗ B) ⊗ C = A ⊗ (B ⊗ C)
 3. a(A ⊗ B) = (aA ⊗ B) = (A ⊗ aB), where a is a scalar
 4. (A ⊗ B)^t = A^t ⊗ B^t
 5. (A ⊗ B)^-1 = A^-1 ⊗ B^-1
 6. Π_{k=1}^{L} (A_k ⊗ B_k) = (Π_{k=1}^{L} A_k) ⊗ (Π_{k=1}^{L} B_k)
 7. det(A ⊗ B) = (det A)^n (det B)^m, where A is an mxm matrix and B is an nxn matrix
 8. If A and B are unitary matrices then A ⊗ B is also a unitary matrix.
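As a quick numerical illustration of the definition (3) and of properties (4)-(6) (this sketch and the chosen matrices are ours, not part of the original paper; the two small matrices happen to be the T2 and T3 reused in the mixed radix example of Section VII):

    import numpy as np

    A = np.array([[1., 1.],
                  [1., -1.]])                   # 2 x 2, rows mutually orthogonal, A A^t = diag(2, 2)
    B = np.array([[1., 1., 1.],
                  [-2., 1., 1.],
                  [0., -1., 1.]])               # 3 x 3, rows mutually orthogonal, B B^t = diag(3, 6, 2)

    C = np.kron(A, B)                           # eq. (3): C = A (x) B, a 6 x 6 matrix

    # property 4: (A (x) B)^t = A^t (x) B^t
    assert np.allclose(C.T, np.kron(A.T, B.T))

    # eq. (6), the mixed-product rule: (A (x) B)(P (x) Q) = AP (x) BQ
    P = np.arange(4.).reshape(2, 2)
    Q = np.arange(9.).reshape(3, 3)
    assert np.allclose(C @ np.kron(P, Q), np.kron(A @ P, B @ Q))

    # eq. (5): C C^t = (A A^t) (x) (B B^t), a diagonal matrix here
    assert np.allclose(C @ C.T, np.kron(A @ A.T, B @ B.T))
    print(np.diag(C @ C.T))                     # [ 6. 12.  4.  6. 12.  4.]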

                    III. KRONECKER PRODUCT LEADS TO FAST ALGORITHM
Let C = A ⊗ B, where A is mxm and B is nxn, hence C is mn x mn. Partitioning the input and output sequences into m partitions of n elements each, and the matrix into nxn blocks, F = [C] f can be written in the expanded form

    | F_0    |     | a_00 B       a_01 B       ......   a_0,m-1 B   |   | f_0    |
    | F_1    |     | a_10 B       a_11 B       ......   a_1,m-1 B   |   | f_1    |
    |  .     |  =  |   .            .             .         .       |   |  .     |
    |  .     |     |   .            .             .         .       |   |  .     |
    | F_m-1  |     | a_m-1,0 B    a_m-1,1 B    ......   a_m-1,m-1 B |   | f_m-1  |

In a compact form the above matrix equation can be written as

    F_0   = a_00 [B] f_0    + a_01 [B] f_1    + ...... + a_0,m-1 [B] f_m-1          (9.1)
    F_1   = a_10 [B] f_0    + a_11 [B] f_1    + ...... + a_1,m-1 [B] f_m-1          (9.2)
      .
      .
    F_m-1 = a_m-1,0 [B] f_0 + a_m-1,1 [B] f_1 + ...... + a_m-1,m-1 [B] f_m-1        (9.m)




    It is seen that the coefficients computed by the operation of matrix [B] on the vectors f_0, f_1, ......, f_m-1 in (9.1) are directly used in (9.2) to (9.m), thus reducing the number of computations. By computing the multiplication of matrix [B] with the vectors f_i once, we can compute intermediate coefficient vectors G_0, G_1, ..., G_m-1, so that

    G_i = [B] f_i        for i = 0, 1, 2, ..., m-1                                  (10)

Hence we get

    F_0   = a_00 G_0    + a_01 G_1    + .............. + a_0,m-1 G_m-1              (11.1)
    F_1   = a_10 G_0    + a_11 G_1    + .............. + a_1,m-1 G_m-1              (11.2)
      .
      .
    F_m-1 = a_m-1,0 G_0 + a_m-1,1 G_1 + .............. + a_m-1,m-1 G_m-1            (11.m)

    For the calculation of F_0, F_1, ......, F_m-1 the coefficients G_0, G_1, ......, G_m-1 can be reused, thus reducing the computations considerably. This algorithm can be made elegant as follows. Let G_0, G_1, ......, G_m-1 be written as

    G_0 = [g_00, g_01, ..., g_0,n-1]^t ,  G_1 = [g_10, g_11, ..., g_1,n-1]^t ,  ....,  G_m-1 = [g_m-1,0, g_m-1,1, ..., g_m-1,n-1]^t      (12)

    Now collecting the first elements of the output vectors F_0, F_1, ......, F_m-1 and forming a new vector, we get

    | F_0       |     | a_00       a_01       ......   a_0,m-1    |   | g_00       |
    | F_n       |     | a_10       a_11       ......   a_1,m-1    |   | g_10       |
    |  .        |  =  |   .          .           .        .       |   |  .         |    (13)
    |  .        |     |   .          .           .        .       |   |  .         |
    | F_(m-1)n  |     | a_m-1,0    a_m-1,1    ......   a_m-1,m-1  |   | g_(m-1),0  |

    Thus, taking the second elements of the vectors F_0, F_1, ..., F_m-1, we get these by operating matrix [A] on the second elements of G_0, G_1, ..., G_m-1. In general, taking the ith elements of the vectors F_0, F_1, ..., F_m-1, we get these by operating matrix [A] on the ith elements of G_0, G_1, ..., G_m-1. This algorithm has been obtained by shuffling the output elements of the B matrix by a perfect shuffle matrix S_n, so the output F comes out in a shuffled form F_s, where

    F_s = [S_m] F                                                                   (14)

F_s is obtained from F by dividing F into n groups of m elements each sequentially and then picking up the first element of each group, then the second element, and so on. To obtain F from F_s we get

    F = [S_m]^-1 F_s = [S_n] F_s                                                    (15)

Here [S_m] and [S_n] are known as PERFECT SHUFFLE MATRICES, where m x n = N. They are also inverses of each other. The algorithm thus obtained is given in pictorial form in Fig. 1.
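The saving expressed by (9.1)-(11.m) is easy to state in matrix terms: place the blocks f_i as rows of an m x n array, multiply by B^t to obtain all the G_i at once, then multiply by A. A minimal numpy sketch of this two-stage evaluation (our illustration; the sizes and the random test data are arbitrary), checked against the direct product (A ⊗ B) f:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 3, 4
    A = rng.standard_normal((m, m))
    B = rng.standard_normal((n, n))
    f = rng.standard_normal(m * n)

    # direct evaluation: (mn)^2 multiplications
    F_direct = np.kron(A, B) @ f

    # two-stage evaluation of eqs. (10)-(11): compute every G_i once, then reuse them
    blocks = f.reshape(m, n)          # row i holds the block f_i
    G = blocks @ B.T                  # row i is G_i = B f_i              (eq. 10)
    F_blocks = A @ G                  # row i is sum_j a_ij G_j           (eq. 11.i)
    F_fast = F_blocks.reshape(-1)     # concatenate F_0, ..., F_m-1

    assert np.allclose(F_direct, F_fast)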
                    IV. DECIMATION IN FREQUENCY
In this algorithm the input sequence (f_0, f_1, ......, f_mn-1) appears in order, whereas the output sequence appears in a shuffled form, hence the name Decimation in Frequency (DIF), as shown in Fig. 1. For the number of computations required, let M be the total number of multiplications required; then

    M = n^2 m + m^2 n = nm(n + m)                                                   (16)

Without this algorithm we require (nm)^2 multiplications. Since (n + m) < nm for all values of m and n except m = n = 2, there is a reduction in the number of multiplications. Similarly, for the additions,

    A = nm(n - 1) + mn(m - 1) = nm(n + m - 2)                                       (17)

In general, if the sequence length is N and N = n1 n2 n3 ...... nr, then we get

    M = N(n1 + n2 + n3 + ...... + nr)                                               (18)

and

    A = N(n1 + n2 + n3 + ...... + nr - r)                                           (19)

Let N = 2^r; then

    M = N(2 + 2 + 2 + ...... r times) = N(2r) = 2N log2 N                           (20)

Normally, without this algorithm we require M = N^2 = N(n1 n2 n3 ...... nr) multiplications, whereas with this algorithm we require M = N(n1 + n2 + n3 + ...... + nr). Thus the product of the factors is replaced by their sum, and the reduction is of logarithmic order.
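A few concrete values of (18)-(19), computed with a small helper (illustrative code, not from the paper):

    import numpy as np

    def op_counts(factors):
        # multiplication and addition counts of eqs. (18)-(19) for N = n1*n2*...*nr
        N = int(np.prod(factors))
        mults = N * sum(factors)                  # eq. (18)
        adds = N * (sum(factors) - len(factors))  # eq. (19)
        return N, mults, adds

    for factors in [(2,) * 10, (5, 3, 2), (8, 4)]:
        N, mults, adds = op_counts(factors)
        print(f"N = {N:5d}: direct {N * N:8d} mults, fast {mults:6d} mults, {adds:6d} adds")

    # N =  1024: direct  1048576 mults, fast  20480 mults,  10240 adds   (2N log2 N, eq. 20)
    # N =    30: direct      900 mults, fast    300 mults,    210 adds   (the Section VII example)
    # N =    32: direct     1024 mults, fast    384 mults,    320 adds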
A. Relation between A ⊗ B and B ⊗ A
    Consider the sequence f(n) represented by the vector f, transformed to the vector F given by F = [C] f, where C = A ⊗ B. Now:
    1. If we shuffle the input sequence f and the output F has to remain the same, it is necessary to shuffle the columns of [C] by the same shuffle.
    2. If we shuffle the output elements of the vector F and want their values to remain the same, the rows of the matrix [C] are to be shuffled by the same shuffle.

    Let     f_s = [S_n] f    ⇒   f = [S_n]^-1 f_s
    and     F_s = [S_n] F    ⇒   F = [S_n]^-1 F_s



Substituting in the above equations we get

    [S_n]^-1 F_s = [C] [S_n]^-1 f_s
    F_s = [S_n] [C] [S_n]^-1 f_s
        = [S_n] [C] [S_n]^t f_s
        = [S_n] [C] [S_m] f_s
        = [B ⊗ A] f_s                                                               (21)

Thus, interchanging B and A to implement [B ⊗ A] in (21) and giving the input in a shuffled form f_s will result in the output coming out in normal form, giving us a new algorithm which can be named decimation in time. This is given in pictorial form in Fig. 2.

                    V. DECIMATION IN TIME
    In this algorithm the input sequence appears in a shuffled form, using [S_m] as the shuffling matrix. The output is in normal order, as shown in Fig. 2.
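The perfect shuffle is simply a permutation, so the identity behind (21), [S](A ⊗ B)[S]^t = B ⊗ A, and the note under Fig. 1 that the two shuffles are mutual inverses can both be checked numerically. The sketch below is our illustration; the particular index convention used to build the shuffle is an assumption, and the paper's [S_m] and [S_n] are this matrix and its transpose, in one order or the other depending on that convention.

    import numpy as np

    def perfect_shuffle(m, n):
        # mn x mn permutation matrix: view the input as m groups of n elements and
        # pick the first element of every group, then the second, and so on
        perm = np.arange(m * n).reshape(m, n).T.ravel()
        return np.eye(m * n)[perm]

    rng = np.random.default_rng(0)
    m, n = 3, 4
    A = rng.standard_normal((m, m))
    B = rng.standard_normal((n, n))

    S = perfect_shuffle(m, n)
    S_other = perfect_shuffle(n, m)

    # the two shuffles are transposes and inverses of each other
    assert np.allclose(S_other, S.T)
    assert np.allclose(S @ S_other, np.eye(m * n))

    # the relation used in eq. (21): S (A (x) B) S^t = B (x) A
    assert np.allclose(S @ np.kron(A, B) @ S.T, np.kron(B, A))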

                    VI. MERGING OF DIT AND DIF ALGORITHMS
    Using a rectangular array of size mxn and filling it as shown in Fig. 3, we can unify the DIF and DIT algorithms. Using a two-dimensional array [f], filled by the input sequence columnwise as shown in Fig. 3, and operating with matrix [B] on all columns of the f-array, we get the g-array. Operating on this with matrix [A] along the rows, we get the [F] array. The operations of [B] and [A] can be interchanged, as shown in the DIT path, where the intermediate array is named the p-array. It may be noted that the g- and p-arrays will have different elements, but finally we obtain the same F-array. The algorithm is very simple to understand. It works very well for the Walsh, Hadamard and Haar transforms. With little modification it also gives fast algorithms for the DFT and the rest of the Fourier transform family, such as the DCT and DST, and also for Group Theoretic Transforms. If N has more than two factors then we have to consider a multidimensional array for filling and reading. This is easily achieved by using the mixed radix system of counting for indexing the input and output sequences.
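A minimal numpy sketch of this array formulation (our illustration; the sizes and data are arbitrary): fill the input columnwise into an array whose columns have length n, apply B to the columns and A along the rows in either order, and read the result out columnwise.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 3, 4
    A = rng.standard_normal((m, m))   # operates along the rows (length-m direction)
    B = rng.standard_normal((n, n))   # operates on the columns (length-n direction)
    f = rng.standard_normal(m * n)

    f_arr = f.reshape(m, n).T         # filled columnwise: column j is the block f_j

    # DIF path: B on the columns first (g-array), then A along the rows
    g_arr = B @ f_arr
    F_arr = g_arr @ A.T

    # DIT path: A along the rows first (p-array), then B on the columns
    p_arr = f_arr @ A.T
    F_arr_dit = B @ p_arr

    F_ref = np.kron(A, B) @ f
    assert np.allclose(F_arr, F_arr_dit)              # g- and p-arrays differ, the F-array does not
    assert np.allclose(F_arr.T.reshape(-1), F_ref)    # reading the F-array out columnwise gives F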
                    VII. MIXED RADIX SYSTEM
    Let N be any integer. With a single radix r, N can be written as N = m_n-1 r^(n-1) + .......... + m_2 r^2 + m_1 r + m_0. In the mixed radix form, N can be written as N = m_n-1 r1 r2 ... r_n-1 + ...... + m_2 r1 r2 + m_1 r1 + m_0, where r1, r2, ......, r_n-1 are different radices. When r1 = r2 = ...... = r_n-1 = r, the mixed radix system reduces to the fixed radix system. Thus the mixed radix system is general, and the fixed radix system is a special case of it. We can decompose N in the fixed radix case by dividing N by r successively to get the coefficients m_0, m_1, ......, m_n-1 as remainders. In the mixed radix case, N can be decomposed by dividing N by r1 to obtain m_0 as the remainder; the quotient can then be divided by r2 to obtain m_1 as the remainder. The process is continued until we obtain m_n-1. Thus the n-tuple (m_n-1, m_n-2, ......, m_0) is obtained, representing the number N.

    As an example of a mixed radix system application, consider the Kronecker product of the three orthogonal matrices given below:

         | 1   1 |            | 1   1   1 |            |  1   1   1   1   1 |
    T2 = |       |       T3 = |-2   1   1 |       T5 = | -4   1   1   1   1 |
         | 1  -1 |            | 0  -1   1 |            |  0  -3   1   1   1 |
                                                       |  0   0  -2   1   1 |
                                                       |  0   0   0  -1   1 |

with

    μ_m0 = [2, 2]^t      μ_m1 = [3, 6, 2]^t      μ_m2 = [5, 20, 12, 6, 2]^t

giving a transformation matrix

    T = T5 ⊗ T3 ⊗ T2                                                                (22)

    Table 1 shows the fast algorithm using the mixed radix system. Column one is the input sequence subscript, i.e. n as in f_n. Columns 2, 3 and 4 show the values of m2, m1, m0 representing n in a mixed radix system with radices r3 = 5, r2 = 3 and r1 = 2. The next column is the input sequence f = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29}. The intermediate stage g1 is computed from the input sequence f by operating T2 on pairs of two input numbers such that m2 and m1 are constant and m0 is varying; thus the following fifteen pairs of the input sequence are obtained: (0, 1), (2, 3), (4, 5), (6, 7), (8, 9), (10, 11), .... and so on. Intermediate stage 2 is computed from intermediate stage 1 by operating T3 on stage 1 such that m2 and m0 are constant and m1 is varying; thus we get the following ten 3-tuples: (15, 25, 35), (-15, -15, -15), (17, 27, 37), (-15, -15, -15), ..... and so on. The final output F = g3 is computed from intermediate stage 2 such that m1 and m0 are constant and m2 is varying; thus we get the following 5-tuples: (75, 81, 87, 93, 99), (30, 30, 30, 30, 30), (10, 10, 10, 10, 10), (-45, -45, -45, -45, -45), (0, 0, 0, 0, 0) and (0, 0, 0, 0, 0), and operating with T5 we get the final output sequence F = {435, 60, 36, 18, 6, 150, 0, 0, 0, 0, 50, 0, 0, 0, 0, -225, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}, as shown in (23).
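The successive-division rule and the three stages of this example can be written down directly in numpy (our illustration, not the authors' code). The helper below implements the digit extraction just described; reshaping f into a 2 x 3 x 5 block gives one axis per factor, and with this grouping the T2 stage acts on pairs of elements fifteen apart, which is the grouping that reproduces the g1 values of Table 1 (sums 15, 17, ..., 43 and differences of -15) and the published output sequence (23).

    import numpy as np

    def mixed_radix_digits(k, radices):
        # successive division: radices = (r1, r2, ...) from the least significant
        # digit upward; returns (m0, m1, ...)
        digits = []
        for r in radices:
            digits.append(k % r)
            k //= r
        return tuple(digits)

    T2 = np.array([[1, 1], [1, -1]], dtype=float)
    T3 = np.array([[1, 1, 1], [-2, 1, 1], [0, -1, 1]], dtype=float)
    T5 = np.array([[1, 1, 1, 1, 1], [-4, 1, 1, 1, 1], [0, -3, 1, 1, 1],
                   [0, 0, -2, 1, 1], [0, 0, 0, -1, 1]], dtype=float)

    f = np.arange(30, dtype=float)
    X = f.reshape(2, 3, 5)                      # one axis per factor

    g1 = np.einsum('ij,jbc->ibc', T2, X)        # stage 1: T2 on 15 pairs
    g2 = np.einsum('ij,ajc->aic', T3, g1)       # stage 2: T3 on 10 triples
    g3 = np.einsum('ij,abj->abi', T5, g2)       # stage 3: T5 on 6 five-tuples
    F = g3.reshape(-1)

    F_expected = np.zeros(30)
    F_expected[:6] = [435, 60, 36, 18, 6, 150]
    F_expected[10], F_expected[15] = 50, -225
    assert np.allclose(F, F_expected)           # matches the output sequence (23)

    print(mixed_radix_digits(15, (2, 3, 5)))    # (1, 1, 2): m0 = 1, m1 = 1, m2 = 2, as in Table 1
    # each stage costs N * r_i multiplications: 30 * (2 + 3 + 5) = 300 in total, cf. eq. (18)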








                     Figure 1. Decimation in Frequency domain (Perfect Shuffle [Sm])
Note that there are two perfect shuffles [Sm] and [Sn], with [Sm].[Sn] = I where mn = N, and also [Sm] = [Sn]^t.




                            Figure 2. Decimation in Time domain (Perfect Shuffle [Sn])








           Figure 3. Merging of decimation in time and frequency algorithms.


    F = [T] f = [435, 60, 36, 18, 6, 150, 0, 0, 0, 0, 50, 0, 0, 0, 0, -225, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]^t,
    where f = [0, 1, 2, ..., 29]^t                                                  (23)








               TABLE I.    FAST ALGORITHM USING MIXED RADIX SYSTEM




  Subscript of f   m2 (r3 = 5)   m1 (r2 = 3)   m0 (r1 = 2)   Input f   Stage 1 g1   Stage 2 g2   Output g3 = F
  (digit weights: m2 has weight 3*2 = 6, m1 has weight 2, m0 has weight 1)
        0              0             0             0            0          15           75            435
        1              0             0             1            1          25           30             60
        2              0             1             0            2          35           10             36
        3              0             1             1            3         -15          -45             18
        4              0             2             0            4         -15            0              6
        5              0             2             1            5         -15            0            150
        6              1             0             0            6          17           81              0
        7              1             0             1            7          27           30              0
        8              1             1             0            8          37           10              0
        9              1             1             1            9         -15          -45              0
       10              1             2             0           10         -15            0             50
       11              1             2             1           11         -15            0              0
       12              2             0             0           12          19           87              0
       13              2             0             1           13          29           30              0
       14              2             1             0           14          39           10              0
       15              2             1             1           15         -15          -45           -225
       16              2             2             0           16         -15            0              0
       17              2             2             1           17         -15            0              0
       18              3             0             0           18          21           93              0
       19              3             0             1           19          31           30              0
       20              3             1             0           20          41           10              0
       21              3             1             1           21         -15          -45              0
       22              3             2             0           22         -15            0              0
       23              3             2             1           23         -15            0              0
       24              4             0             0           24          23           99              0
       25              4             0             1           25          33           30              0
       26              4             1             0           26          43           10              0
       27              4             1             1           27         -15          -45              0
       28              4             2             0           28         -15            0              0
       29              4             2             1           29         -15            0              0








A. Inverse Transform
    Let μ_m2m1m0 = μ_m2 ⊗ μ_m1 ⊗ μ_m0, i.e.

    μ_m2m1m0 = [5, 20, 12, 6, 2]^t ⊗ [3, 6, 2]^t ⊗ [2, 2]^t

where m0, m1 and m2 are the suffixes given in columns 2, 3 and 4 of Table 1.

    To obtain the original sequence f back from the transformed sequence F, first divide each F_k by μ_m2m1m0, where m2m1m0 is the mixed radix representation of the subscript k of F, and then multiply the sequence by T5^t ⊗ T3^t ⊗ T2^t (refer to property 4 of the Kronecker product given in Section II). For the fast inverse mixed radix algorithm, the same algorithm given in Table 1 is valid, with the respective matrices replaced by their transposes.
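A short numpy check of this inverse rule (our illustration; the random test vector is arbitrary), using eq. (8) with C = T5 ⊗ T3 ⊗ T2: dividing the transform coefficients by the diagonal of C C^t and multiplying by C^t recovers the input, and that diagonal is exactly the Kronecker product of μ_m2, μ_m1 and μ_m0.

    import numpy as np

    T2 = np.array([[1, 1], [1, -1]], dtype=float)
    T3 = np.array([[1, 1, 1], [-2, 1, 1], [0, -1, 1]], dtype=float)
    T5 = np.array([[1, 1, 1, 1, 1], [-4, 1, 1, 1, 1], [0, -3, 1, 1, 1],
                   [0, 0, -2, 1, 1], [0, 0, 0, -1, 1]], dtype=float)

    C = np.kron(T5, np.kron(T3, T2))
    mu = np.diag(C @ C.T)                       # the normalising factors

    # mu is the Kronecker product of the per-factor diagonals (5,20,12,6,2), (3,6,2), (2,2)
    assert np.allclose(mu, np.kron(np.diag(T5 @ T5.T),
                                   np.kron(np.diag(T3 @ T3.T), np.diag(T2 @ T2.T))))

    f = np.random.default_rng(2).standard_normal(30)
    F = C @ f
    f_back = C.T @ (F / mu)                     # eq. (8): f = C^t mu_C^-1 F
    assert np.allclose(f_back, f)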
    The number of multiplications required by direct computation is N^2, and the number of additions is N(N-1), where N is the input sequence length; i.e., the total multiplications required for this problem are 30^2 = 900 and the additions required are 30*29 = 870, whereas in the case of the proposed mixed radix fast algorithm the total multiplications required are 30*(5+3+2) = 300 and the total additions required are 30*(5+3+2-3) = 210, thus reducing the number of computations by a factor of more than 3.

    In the case of two-dimensional signals like images, these composite transforms generated using mixed radix systems can be used for compression. Given below is an example where an image is transformed using one such composite transform generated using one 4x4 Walsh matrix (Matrix A), one 3x3 Kekre's transform matrix (Matrix B), one 5x5 DCT (Discrete Cosine Transform) matrix (Matrix C) and one 5x5 Kekre's transform matrix (Matrix D). The Kronecker product taken in the order D ⊗ C ⊗ B ⊗ A of these matrices produces a 300x300 composite transform, which has been used on a 300x300 fingerprint image. In the transform domain certain coefficients are selected such that the total energy of these coefficients is equal to some percentage of the total energy of the image. The image is reconstructed using these selected components. The reconstructed images, compression ratio and percentage error for 98%, 98.5% and 99% of the total energy components are shown in Fig. 4.

Figure 4. a) Original fingerprint image b) Reconstructed image with 98% energy components, giving 62.24% compression and 7.86% error c) Reconstructed image with 98.5% energy components, giving 54.32% compression and 6.8% error d) Reconstructed image with 99% energy components, giving 44.32% compression and 5.53% error

                    VIII. CONCLUSION
    In this paper we propose a generalized fast algorithm using the Kronecker product. The given algorithm is very simple to understand and works very well for the Walsh, Hadamard and Haar transforms. With little modification it also gives fast algorithms for the DFT and the rest of the Fourier transform family, such as the DCT and DST, and also for Group Theoretic Transforms. If N has more than two factors then we have to consider a multidimensional array for filling and reading. It has also been shown how this algorithm can be easily applied using the mixed radix system of counting. The application of the proposed method to a one-dimensional number sequence and a two-dimensional image shows that this method can be used to generate a considerable amount of compression.

                    REFERENCES
[1]  Ioannis Pitas, "Digital Image Processing Algorithms and Applications", Wiley-IEEE, Feb. 2000, ISBN 0471377392.
[2]  M. J. Kieman, L. M. Linnett and R. J. Clarke, "The Design and Application of Four-Tap Wavelet Filters", IEE Colloquium on Applications of Wavelet Transforms in Image Processing, 20 Jan. 1993.
[3]  I. J. Good, "The Interaction Algorithm and Practical Fourier Analysis", J. Royal Stat. Soc. (London) B20 (1958): 361.




[4]  J. W. Cooley and J. W. Tukey, "An Algorithm for Machine Calculation of Complex Fourier Series", Math. Comput. 19, 90, April 1965, pp. 297-301.
[5]  G. D. Bergland, "A Guided Tour of the Fast Fourier Transform", IEEE Spectrum 6 (July 1969): 41-52.
[6]  E. O. Brigham, The Fast Fourier Transform, Englewood Cliffs, N. J.: Prentice-Hall, 1974.
[7]  M. Heideman, D. Johnson and C. S. Burrus, "Gauss and the history of the FFT", IEEE Signal Processing Magazine, vol. 1, pp. 14-21, Oct. 1984.
[8]  Fourier, Joseph (1878), The Analytical Theory of Heat, Cambridge University Press (reissued by Cambridge University Press, 2009).
[9]  A. K. Jain, "A Fast Karhunen-Loeve Transform for a Class of Random Processes", IEEE Trans. Communications, vol. COM-24, pp. 1023-1029, Sept. 1976.
[10] P. Yip and K. R. Rao, "A Fast Computational Algorithm for the Discrete Sine Transform", IEEE Trans. Commun. COM-28, no. 2 (February 1980): 304-307.
[11] J. L. Walsh, "A Closed Set of Orthogonal Functions", American J. of Mathematics 45 (1923): 5-24.
[12] H. Kitajima, "Energy Packing Efficiency of the Hadamard Transform", IEEE Trans. Comm. (correspondence) COM-24 (November 1976): 1256-1258.
[13] J. E. Shore, "On the Applications of Haar Functions", IEEE Trans. Communications COM-21 (March 1973): 209-216.
[14] W. K. Pratt, W. H. Chen and L. R. Welch, "Slant Transform Image Coding", IEEE Trans. Comm. COM-22 (August 1974): 1075-1093.
[15] H. Hotelling, "Analysis of a Complex of Statistical Variables into Principal Components", J. Educ. Psychology 24 (1933): 417-441 and 498-520.
[16] Anil K. Jain, "Fundamentals of Digital Image Processing", Prentice Hall, 1997.
[17] A. Habibi and P. A. Wintz, "Image Coding by Linear Transformation and Block Quantization", IEEE Trans. Commun. Tech. COM-19, no. 1 (February 1971): 50-63.
[18] P. A. Wintz, "Transform Picture Coding", Proc. IEEE 60, no. 7 (July 1972): 809-823.
[19] W. K. Pratt, W. H. Chen and L. R. Welch, "Slant Transform Image Coding", IEEE Trans. Commun. COM-22, no. 8 (August 1974): 1075-1093.
[20] K. R. Rao, M. A. Narsimhan and K. Revuluri, "Image Data Processing by Hadamard-Haar Transforms", IEEE Trans. Computers C-23, no. 9 (September 1975): 888-896.
[21] Chang-Tsun Li and Roland Wilson, "Image Segmentation Using Multiresolution Fourier Transform", Technical report, Department of Computer Science, University of Warwick, September 1995.
[22] Andrew R. Davies, Image Feature Analysis using the Multiresolution Fourier Transform, PhD thesis, Department of Computer Science, The University of Warwick, UK, 1993.
[23] A. Calway, The Multiresolution Fourier Transform: A General Purpose Tool for Image Analysis, PhD thesis, Department of Computer Science, The University of Warwick, UK, September 1989.
[24] Dorin Comaniciu and Richard Grisel, "Image coding using transform vector quantization with training set synthesis", Signal Processing, Volume 82, Issue 11 (November 2002), pp. 1649-1663.
[25] Data compression using orthogonal transform and vector quantization, United States Patent 4851906.
[26] Robert Y. Li, Jung Kim and N. Al-Shamakhi, "Image compression using transformed vector quantization", Image and Vision Computing 20, 2002, pp. 37-45.
[27] M. H. Lee and M. Kaveh, "Fast Hadamard Transform Based on a Simple Matrix Factorization", IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. ASSP-34, No. 6, December 1986, pp. 1666-1667.
[28] R. K. Rao Yarlagadda and John E. Hershey, "Hadamard Matrix Analysis and Synthesis with Applications to Communications and Signal/Image Processing", Kluwer Academic Publishers, 1997.
[29] S. V. Kanetkar, "Group Theoretic Transforms", Ph.D. Thesis, Department of Electrical Engineering, Indian Institute of Technology, Bombay, 1979.
[30] William L. Briggs and Van Emden Henson, "The DFT: an owner's manual for the discrete Fourier transform", SIAM, 1995.

                    AUTHORS PROFILE

Dr. H. B. Kekre received B.E. (Hons.) in Telecomm. Engg. from Jabalpur University in 1958, M.Tech (Industrial Electronics) from IIT Bombay in 1960, M.S.Engg. (Electrical Engg.) from the University of Ottawa in 1965 and Ph.D. (System Identification) from IIT Bombay in 1970. He has worked over 35 years as Faculty of Electrical Engineering and then HOD Computer Science and Engg. at IIT Bombay. For the last 13 years he worked as a Professor in the Department of Computer Engg. at Thadomal Shahani Engineering College, Mumbai. He is currently Senior Professor working with Mukesh Patel School of Technology Management and Engineering, SVKM's NMIMS University, Vile Parle (W), Mumbai, INDIA. He has guided 17 Ph.D.s, 150 M.E./M.Tech projects and several B.E./B.Tech projects. His areas of interest are Digital Signal Processing, Image Processing and Computer Networks. He has more than 350 papers in National/International Conferences/Journals to his credit. Recently thirteen students working under his guidance have received best paper awards. Four of his students have been awarded Ph.D. of NMIMS University. Currently he is guiding eight Ph.D. students. He is a Fellow of IETE and a Life Member of ISTE.

Dr. Tanuja K. Sarode received the M.E. (Computer Engineering) degree from Mumbai University in 2004 and Ph.D. from Mukesh Patel School of Technology, Management and Engg., SVKM's NMIMS University, Vile-Parle (W), Mumbai, INDIA. She has more than 11 years of experience in teaching. She is currently working as Assistant Professor in the Dept. of Computer Engineering at Thadomal Shahani Engineering College, Mumbai. She is a member of the International Association of Engineers (IAENG) and the International Association of Computer Science and Information Technology (IACSIT). Her areas of interest are Image Processing, Signal Processing and Computer Graphics. She has 90 papers in National/International Conferences/Journals to her credit.

Rekha Vig received B.E. (Hons.) in Telecomm. Engg. from Jabalpur University in 1994 and M.Tech (Telecom) from MPSTME, NMIMS University in 2010. She is working as Assistant Professor in the Department of Electronics and Telecommunications at Mukesh Patel School of Technology Management and Engineering, NMIMS University, Mumbai. She has more than 12 years of teaching and approximately 2 years of industry experience. She is currently pursuing her Ph.D. from NMIMS University, Mumbai. Her areas of specialization are image processing, digital signal processing and wireless communication. Her publications include more than 15 papers in IEEE international conferences, international journals and national conferences and journals.



