(IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No. 2, February 2011

HESCHL'S GYRUS AUDITORY CORTEX SLICE REGISTRATION USING ECHO STATE NEURAL NETWORK (ESNN)

1. R. Rajeswari, Research Scholar, Department of Computer Science, Mother Teresa Women's University, Kodaikanal, India. Email: rajeswaripuru@gmail.com
2. Dr. Anthony Irudhayaraj, Dean, Information Technology, Arupadai Veedu Institute of Technology, Paiyanoor-603104, India. E-mail: anto_irud@hotmail.com

Abstract—This paper presents Heschl's gyrus auditory cortex slice registration using an Echo State Neural Network (ESNN). The network is trained with the translation and rotation values of selected points (feature points) taken from two images at a time (the source and target images). The input layer is given the coordinates of the selected points of the source image, and the output layer is labelled with the translation and rotation values of the selected points of the target image. The ESNN is an estimation network that estimates the required registration information from the selected points of the target and source images. The output of the ESNN is compared with that of a radial basis function (RBF) network.

Keywords-Echo state neural network, functional magnetic resonance imaging (fMRI), Heschl's gyrus, auditory cortex

                 I.    INTRODUCTION

    Image registration [1] aims to find a transformation that aligns images of the same scene taken at different times or from different viewpoints. It has been studied in various contexts owing to its significance in a wide range of areas, including medical image fusion, remote sensing, and computer vision. Medical image acquisition systems generate digital images that can be processed by a computer and transferred over computer networks. Digital imaging allows objective, quantitative parameters to be extracted from the images by image analysis. Medical image analysis exploits the numerical representation of digital images to develop image processing techniques that facilitate computer-aided interpretation of medical images. The continuing advancement of image acquisition technology and the resulting improvement of radiological image quality have led to an increasing clinical need and physicians' demand for quantitative image interpretation in routine practice, imposing new and more challenging requirements for medical image analysis [2][3].

    A fundamental problem in medical image analysis is the integration of information from multiple images of the same subject, acquired using the same or different imaging modalities and possibly at different time points. One essential aspect thereof is image registration, i.e., recovering the geometric relationship between corresponding points in multiple images of the same scene. While various more or less automated approaches for image registration have been proposed in the field of medical imaging and image analysis, one strategy in particular, namely maximization of mutual information [4][5], has been extremely successful at automatically computing the registration of 3-D multimodal medical images of various organs from the image content itself.

                 II.    MATERIALS AND METHODS

    A. Neural Network Structures

    The echo state neural network is used for learning the images. The number of neurons in the input layer is 4, and the number of neurons in the output layer is 6.

Input layer description
    Node 1 = x coordinate of point in image 2 (target image)
    Node 2 = y coordinate of point in image 2 (target image)
    Node 3 = x coordinate of point in image 1 (image to be registered with the target image)
    Node 4 = y coordinate of point in image 1 (image to be registered with the target image)

Output layer description
    Node 1 = vertical shift
    Node 2 = upward (1) or downward (2)
    Node 3 = horizontal shift
    Node 4 = left (1) or right (2)
    Node 5 = angle with respect to the axis passing through the centre of the image
    Node 6 = direction of rotation: left/counterclockwise (1) or right/clockwise (2)

    The hidden layer has been trained with different numbers of nodes, increasing from 2 neurons.

    The target values corresponding to the x, y values of image 1 and image 2 are calculated as follows:

    TS = size(Directions, 1);
    D  = zeros(1, TS - 1);
    for i = 1:TS-1
        I = Directions(i, :);                % current point
        F = Directions(i+1, :);              % next point
        X = F(1,1) - I(1,1);                 % row step
        Y = F(1,2) - I(1,2);                 % column step
        % code the unit step (X, Y) between consecutive points as one of
        % eight directions
        if     X ==  0 && Y ==  1
            D(i) = 1;
        elseif X ==  0 && Y == -1            % the printed listing repeats Y == 1; -1 is assumed here
            D(i) = 2;
        elseif X == -1 && Y ==  0
            D(i) = 3;
        elseif X ==  1 && Y ==  0
            D(i) = 4;
        elseif X == -1 && Y == -1
            D(i) = 5;
        elseif X ==  1 && Y ==  1
            D(i) = 6;
        elseif X == -1 && Y ==  1
            D(i) = 7;
        elseif X ==  1 && Y == -1
            D(i) = 8;
        end
    end

    Table 1 shows the direction of rotation among the pixel coordinates of the source and target images. The size of the image considered is 63 rows by 63 columns. The term 'T' refers to the target image and 'S' to the source image. A curved arrow to the right denotes the clockwise direction and a curved arrow to the left the counterclockwise direction. Table 1 shows the possible rotations of a pixel of the source image to a different location in the target image.

    Table 2 presents 10 sample pixel coordinates that are used for training the network. For testing the network, the same sample points together with another 10 points (20 points in total) are presented. The columns of Table 2 are as follows.
    Column 1 = pattern number
    Column 2 = x coordinate of points in the target image
    Column 3 = y coordinate of points in the target image
    Column 4 = x coordinate of points in the source image
    Column 5 = y coordinate of points in the source image
    Column 6 = shift in rows (vertical shift)
    Column 7 = upward (1) or downward (2) translation
    Column 8 = shift in columns (horizontal shift)
    Column 9 = left (1) or right (2) translation
    Column 10 = rotation of the source pixel coordinate with respect to the corresponding target pixel coordinate
    Column 11 = clockwise or counterclockwise rotation

        Table 1 Rotation of source coordinates from target image coordinates
    [Five diagram rows (1-5) showing a source pixel 'S' rotating towards a target pixel 'T'; curved arrows to the right indicate clockwise rotation and curved arrows to the left counterclockwise rotation. Not reproducible as text.]
                  Table 2 Patterns used for training and testing ESNN

Pattern  Target   Target   Source   Source   Vertical  Up(1)/    Horizontal  Left(1)/   Angle      CW(2)/
number   x        y        x        y        shift     Down(2)   shift       Right(2)   (degrees)  CCW(1)
1        3        14       1        17       2         1         1           2          3.05       2
2        5        41       3        42       2         1         1           2          0.59       1
3        22       47       19       48       3         1         1           2          5.4        1
4        34       47       32       48       2         1         1           2          7.59       2
5        38       18       36       18       2         1         0           0          7.25       1
6        28       6        27       7        1         1         1           2          2.56       2
7        48       14       47       15       1         1         1           2          0.2        2
8        49       45       48       45       1         1         0           0          1.68       1
9        36       62       35       62       1         1         0           0          1.88       1
10       13       57       12       58       1         1         1           2          0.33       1
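The paper does not state the formula used to fill the target columns of Table 2. As a sketch, the following MATLAB fragment assumes that the shifts are the absolute row and column differences between corresponding points and that the angle is measured about the centre of the 63 x 63 slice, taken here as pixel (32, 32); under these assumptions the values of pattern 2 (and of most other rows) are reproduced. The direction codes (1/2) follow the table's convention and are not derived here.

    % pattern 2 of Table 2: target point (5, 41), source point (3, 42)
    xt = 5;   yt = 41;                          % characteristic point in the target image
    xs = 3;   ys = 42;                          % corresponding point in the source image
    cx = 32;  cy = 32;                          % assumed centre of the 63 x 63 slice

    vshift = abs(xt - xs);                      % shift in rows    -> 2  (column 6)
    hshift = abs(yt - ys);                      % shift in columns -> 1  (column 8)

    angt  = atan2(yt - cy, xt - cx) * 180/pi;   % angle of the target point about the centre
    angs  = atan2(ys - cy, xs - cx) * 180/pi;   % angle of the source point about the centre
    theta = abs(angt - angs);                   % rotation angle   -> 0.59 degrees (column 10)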


    B. Echo State Neural Network (ESNN)

    An Artificial Neural Network (ANN) is an abstract simulation of a real nervous system that contains a collection of neuron units communicating with each other via axon connections. Artificial neural networks are computing elements based on the structure and function of biological neurons. These networks have nodes, or neurons, which are described by difference or differential equations.

    Dynamic computational models require the ability to store and access the time history of their inputs and outputs. The most common dynamic neural architecture is the Time-Delay Neural Network (TDNN), which couples delay lines with a nonlinear static architecture in which all the parameters (weights) are adapted with the back-propagation algorithm. Recurrent Neural Networks (RNNs) implement a different type of embedding that is largely unexplored. RNNs are perhaps the most biologically plausible of the ANN models. One of the main practical problems with RNNs is the difficulty of adapting the system weights. Various algorithms, such as back-propagation through time and real-time recurrent learning, have been proposed to train RNNs; these algorithms suffer from computational complexity, resulting in slow training, complex performance surfaces, the possibility of instability, and the decay of gradients through the topology and time. The problem of decaying gradients has been addressed with special processing elements (PEs). The ESNN possesses a highly interconnected and recurrent topology of nonlinear PEs that constitutes a reservoir of rich dynamics and contains information about the history of the input and output patterns. The outputs of these internal PEs (echo states) are fed to a memoryless but adaptive readout network (generally linear) that produces the network output. The interesting property of the ESNN is that only the memoryless readout is trained, whereas the recurrent topology has fixed connection weights. This reduces the complexity of RNN training to simple linear regression while preserving a recurrent topology, but obviously places important constraints on the overall architecture that have not yet been fully studied.

    The echo state condition is defined in terms of the spectral radius (the largest among the absolute values of the eigenvalues) of the reservoir's weight matrix W, which must be less than 1. This condition states that the dynamics of the ESNN are uniquely controlled by the input, and the effect of the initial states vanishes. The current design of ESNN parameters relies on the selection of the spectral radius. There are many possible weight matrices with the same spectral radius, and unfortunately they do not perform at the same level of mean square error (MSE) for function approximation.
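The paper does not describe how its reservoir weights are chosen; the following is a minimal MATLAB sketch of one common way to build a reservoir whose spectral radius is below 1, so that the echo state condition described above holds. The size N and the radius rho are illustrative values, not taken from the paper.

    % build a random reservoir and rescale it to a chosen spectral radius
    N   = 50;                               % number of internal PEs (assumed)
    rho = 0.8;                              % desired spectral radius, < 1
    W   = rand(N, N) - 0.5;                 % random recurrent weights
    W   = W * (rho / max(abs(eig(W))));     % now max |eig(W)| = rho < 1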



                 Figure 1 Echo State Network (ESNN)

ALGORITHM
1. Read the data.
2. Separate it into inputs (datain) and target outputs (dataout).
3. Initialize the number of reservoir units.
4. Initialize the weight matrices (input to hidden layer, output to hidden layer, hidden to hidden layer).
5. Initialize the state vector, then
    • calculate next state = tanh(input matrix * input vector + hidden matrix * state + output matrix * target output);
    • assign the next state to the present state and repeat the update;
    • find the pseudo-inverse of the state matrix and multiply it with the targets.

The recurrent network is a reservoir of highly interconnected dynamical components, whose states are called echo states. The memoryless linear readout is trained to produce the output.

Consider the recurrent discrete-time neural network given in Figure 1 with M input units, N internal PEs, and L output units. The value of the input units at time n is

    u(n) = [u1(n), u2(n), ..., uM(n)]^T,

the internal units are

    x(n) = [x1(n), x2(n), ..., xN(n)]^T,        (1)

and the output units are

    y(n) = [y1(n), y2(n), ..., yL(n)]^T.        (2)

The connection weights are given
    • in an N × M weight matrix W^in = (w^in_ij) for the connections between the input and the internal PEs,
    • in an N × N matrix W = (w_ij) for the connections between the internal PEs,
    • in an L × N matrix W^out = (w^out_ij) for the connections from the internal PEs to the output units, and
    • in an N × L matrix W^back = (w^back_ij) for the connections that project back from the output to the internal PEs.

The activation of the internal PEs (the echo states) is updated according to

    x(n + 1) = f(W^in u(n + 1) + W x(n) + W^back y(n)),        (3)

where f = (f1, f2, ..., fN) are the internal PEs' activation functions; here all fi are hyperbolic tangent functions, tanh(v) = (e^v - e^(-v)) / (e^v + e^(-v)). The output of the readout network is computed according to

    y(n + 1) = f^out(W^out x(n + 1)),        (4)

where f^out = (f^out_1, f^out_2, ..., f^out_L) are the output units' nonlinear functions. Generally, the readout is linear, so f^out is the identity [6]. The flow charts for training and testing the ESNN are given in Figure 2 and Figure 3.
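As a minimal illustration of equations (3) and (4), the MATLAB fragment below performs one reservoir state update followed by the readout. The dimensions and the randomly initialised matrices are placeholders; the paper itself uses 4 inputs and 6 outputs, and the readout weights Wout would come from training rather than being zero.

    M = 4;  N = 50;  L = 6;                 % input, internal and output units
    Win   = rand(N, M) - 0.5;               % input -> internal PEs
    W     = rand(N, N) - 0.5;               % internal -> internal PEs
    W     = W * (0.8 / max(abs(eig(W))));   % enforce the echo state condition
    Wback = rand(N, L) - 0.5;               % output -> internal PEs (feedback)
    Wout  = zeros(L, N);                    % readout weights (to be trained)

    u = rand(M, 1);                         % input u(n+1)
    x = zeros(N, 1);                        % previous state x(n)
    y = zeros(L, 1);                        % previous output y(n)

    x = tanh(Win*u + W*x + Wback*y);        % equation (3)
    y = Wout * x;                           % equation (4) with fout = identity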
                 III.    IMAGE REGISTRATION

    Characteristic points in image 1 (source) and image 2 (target) are defined. Characteristic points are important points through which maximum alignment can be achieved. In this way choosing unnecessary points is avoided, and the ESNN can learn from a smaller number of patterns. During training, the x, y coordinates of the characteristic points of image 1 and image 2 are presented at the input layer, and the horizontal and vertical shifts, along with the rotation angle, are given at the output layer of the ESNN.

Implementation steps:

Training
    Step 1: Identify characteristic points in image 1 and image 2.
    Step 2: Calculate the translation and rotation angle.
    Step 3: Generate training patterns with the information obtained in steps 1 and 2.
    Step 4: Train the ESNN with the training patterns.

Testing
    Step 5: Present the same set of characteristic points and obtain the values at the output layer. Find the error between the obtained and actual values.




Figure 2 Flow chart for training the ESNN:
    1. Read a pattern (I) and its target (T) value.
    2. Decide the number of reservoir units.
    3. Decide the number of nodes in the input layer (= length of the pattern) and in the output layer (= number of target values).
    4. Initialize random weights between the input and hidden layer (Ih), the hidden and output layer (Ho), the reservoir (R), and the state matrix (S).
    5. Calculate F = Ih * I.
    6. Calculate TH = Ho * T.
    7. Calculate TT = R * S.
    8. Update the state: S = tanh(F + TT + TH).
    9. Compute a = pseudo-inverse(S).
    10. Compute Wout = a * T.

Figure 3 Flow chart for testing the ESNN:
    1. Read a pattern (I) and its target (T) value.
    2. Decide the number of reservoir units.
    3. Decide the number of nodes in the input layer (= length of the pattern).
    4. Calculate F = Ih * I, TH = Ho * T and TT = R * S.
    5. Update the state: S = tanh(F + TT + TH).
    6. Compute Wout = a * T.
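The flow charts can be read as a small batch procedure: run every training pattern through the fixed reservoir, stack the resulting states, and solve for the readout with a pseudo-inverse. The MATLAB sketch below follows the notation of Figure 2 (Ih, Ho, R, S, Wout); the pattern matrix P, the target matrix T and the sizes are random placeholders rather than the paper's data.

    P  = rand(4, 10);                       % 10 training patterns, 4 inputs each (one per column)
    T  = rand(10, 6);                       % matching targets, 6 outputs each (one per row)
    nR = 30;                                % number of reservoir units (assumed)

    Ih = rand(nR, 4) - 0.5;                 % input  -> hidden weights
    Ho = rand(nR, 6) - 0.5;                 % output -> hidden (feedback) weights
    R  = rand(nR, nR) - 0.5;
    R  = R * (0.8 / max(abs(eig(R))));      % keep the echo state condition
    S  = zeros(size(P, 2), nR);             % state matrix, one state per row

    s = zeros(nR, 1);
    for k = 1:size(P, 2)
        F  = Ih * P(:, k);                  % F  = Ih * I
        TH = Ho * T(k, :)';                 % TH = Ho * T
        TT = R  * s;                        % TT = R * S (previous state)
        s  = tanh(F + TT + TH);             % S  = tanh(F + TT + TH)
        S(k, :) = s';
    end
    Wout = pinv(S) * T;                     % a = pinv(S); Wout = a * T

    % testing (Figure 3): repeat the state update for a test pattern and
    % read its outputs as  yhat = S(k, :) * Wout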
                 IV.    RESULTS AND DISCUSSION

    The fMRI data have been obtained with standard setup conditions. The magnetic resonance imaging of a subject was performed with a 1.5-T Siemens Magnetom Vision system using a gradient-echo echoplanar (EPI) sequence (TE 76 ms, TR 2.4 s, flip angle 90°, field of view 256 × 256 mm, matrix size 64 × 64, 42 slices, slice thickness 3 mm, gap 1 mm) and a standard head coil. A checkerboard visual stimulus flashing at an 8 Hz rate (task condition, 24 s) was alternated with a sound (control condition, 24 s). In total, 110 samples (3-D volumes) were acquired.




[Image panels (63 × 63 pixel slices); the alignment panels are titled "Overlap". Only the captions are recoverable here.]
Fig.4 Heschl's gyrus, auditory cortex (target)
Fig.5 Heschl's gyrus, auditory cortex (10° rotated) (source)
Fig.6 First alignment
Fig.7 Second alignment
Fig.8 Third alignment
Fig.9 Fourth alignment
Fig.10 Fifth alignment
Fig.11 Sixth alignment
Fig.12 Seventh alignment
Fig.13 Sixth alignment
Fig.14 Eighth alignment
Fig.15 Final alignment
Fig.16 Error metric [alignment error (×10^4) versus iteration; curves: Variational Distance, Bhattacharya Distance, Harmonic Mean]
Fig.17 MI for the alignment using ESNN




    Figure 4 shows the Heschl's gyrus auditory cortex (target) image slice. This image is rotated through 10° clockwise and the rotated slice is treated as the source image (Figure 5). Figures 6 to 15 show the alignment of the source with the target at each iteration. Figure 16 presents the error metrics of variational distance, Bhattacharya distance and harmonic mean, and Figure 17 presents the mutual information for the alignment using the ESNN.
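The paper does not define the error metrics of Figure 16 or the mutual information of Figure 17. The sketch below uses the standard histogram-based definitions of variational distance, Bhattacharyya distance and mutual information between two intensity images; the images, the number of bins and the [0, 255] intensity range are placeholders, and the harmonic-mean metric is omitted because its exact definition is not given in the paper.

    A = uint8(255 * rand(63));              % placeholder target slice
    B = uint8(255 * rand(63));              % placeholder source slice
    nbins = 32;

    % histogram bin index of every pixel
    ia = floor(double(A(:)) / 256 * nbins) + 1;  ia(ia > nbins) = nbins;
    ib = floor(double(B(:)) / 256 * nbins) + 1;  ib(ib > nbins) = nbins;

    % joint and marginal histograms, normalised to probabilities
    pab = accumarray([ia ib], 1, [nbins nbins]);
    pab = pab / sum(pab(:));
    pa  = sum(pab, 2);
    pb  = sum(pab, 1)';

    nz = pab > 0;                           % avoid log of zero
    pp = pa * pb';
    MI = sum(pab(nz) .* log2(pab(nz) ./ pp(nz)));   % mutual information (Fig. 17)

    VD = 0.5 * sum(abs(pa - pb));                   % variational distance (Fig. 16)
    BD = -log(sum(sqrt(pa .* pb)));                 % Bhattacharyya distance (Fig. 16)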
                 V.    CONCLUSION

    This paper describes the implementation of an ESNN for the registration of Heschl's gyrus auditory cortex image slices. The ESNN takes the least time to learn the alignment of the characteristic points.

                 REFERENCES

[1] J. P. W. Pluim and J. M. Fitzpatrick, "Image registration," IEEE Transactions on Medical Imaging, vol. 22, no. 11, November 2003.
[2] G. Khaissidi, M. Karoud, H. Tairi, and A. Aarab, "Medical image registration using regions matching with invariant geometrical moments," ICGST International Journal on Graphics, Vision and Image Processing (GVIP), vol. 8, no. 2, pp. 15-20, 2008.
[3] R. Wan and M. L. Li, "An overview of medical image registration," in Proceedings of the Fifth International Conference on Computational Intelligence and Multimedia Applications, Xi'an, China, September 2003, pp. 385-390.
[4] R. Gan, J. Wu, A. C. S. Chung, S. C. H. Yu, and W. M. Wells III, "Multiresolution image registration based on Kullback-Leibler distance," in Proceedings of Medical Image Computing and Computer Assisted Intervention (MICCAI), Saint-Malo, 2004, pp. 599-606.
[5] J. P. W. Pluim, J. B. A. Maintz, and M. A. Viergever, "Mutual-information-based registration of medical images: a survey," IEEE Transactions on Medical Imaging, vol. 22, no. 6, pp. 986-1004, 2003.
[6] S. Purushothaman and D. Suganthi, "fMRI segmentation using echo state neural network," International Journal of Image Processing, vol. 2, no. 1, pp. 1-9, 2008.

                 AUTHORS PROFILE

R. Rajeswari was born in Madurai on 04.01.1967. She received her master's degree in Information Technology in 2002 from Bharathidasan University, Tiruchirappalli, and a Master of Philosophy in Computer Science in 2005 from Alagappa University. She is pursuing her PhD at Mother Teresa Women's University, Kodaikanal, India. Her doctoral study is on image registration in medical imaging. Her research interests include image processing.

A. Anthony Irudhayaraj was born on 15-03-1956. He is currently Professor of Information Technology at AVIT, Paiyanoor. He received his master's degree in Computer Science from Anna University and his PhD degree from Anna University.


