					                                                            (IJCSIS) International Journal of Computer Science and Information Security,
                                                                                                                      Vol. 8, No. 6, 2010

  Short term flood forecasting using RBF static neural
         network modeling: a comparative study

                  Rahul P. Deshmukh                                                            A. A. Ghatol
         Indian Institute of Technology, Bombay                                        Former Vice-Chancellor
                      Powai, Mumbai                                        Dr. Babasaheb Ambedkar Technological University,
                           India                                                         Lonere, Raigad, India
               deshmukh.rahul@iitb.ac.in                                               vc2005@rediffmail.com


Abstract—Artificial neural networks (ANNs) have recently been applied to a variety of hydrologic problems. This research demonstrates a static neural approach by applying a Radial basis function (RBF) neural network to rainfall-runoff modeling for the upper catchment of the Wardha River in India. The model is developed by processing online data over time using static modeling. Methodologies and techniques using different learning rules and activation functions are presented in this paper, and the short term runoff predictions obtained with them are compared. The prediction results of the Radial basis function neural network with the Levenberg-Marquardt learning rule and Tanh activation function indicate satisfactory performance for prediction three hours ahead of time. The conclusions also indicate that the Radial basis function neural network with the Levenberg-Marquardt learning rule and Tanh activation function is more versatile than the other RBF configurations studied and can be considered an alternative and practical tool for predicting short term flood flow.

   Keywords: Artificial neural network; Forecasting; Rainfall; Runoff.

                       I.    INTRODUCTION
          The main focus of this research is the development of Artificial Neural Network (ANN) models for short term flood forecasting and the determination of the characteristics of different neural network models. Comparisons are made between the performances of different parameter choices for Radial basis function artificial neural network models.
          Field engineers face the danger of very heavy flow of water through the gates while controlling the reservoir level; the gates must be operated so that the flow over the spillway is limited to the maximum allowable flood and the flood released downstream stays within the river channel capacity, keeping flood levels in the river safe within the city limits downstream.
          By keeping the water level in the dam at the optimum level during the monsoon, the post-monsoon replenishment can be conveniently stored between the full reservoir level and the permissible maximum water level. Flood estimation is therefore essential and plays a vital role in planning flood regulation and protection measures.
          The total runoff from a catchment area depends upon various unknown parameters such as rainfall intensity, duration of rainfall, frequency of intense rainfall, evaporation, interception, infiltration, surface storage, surface detention, channel detention, geological characteristics of the drainage basin, meteorological characteristics of the basin, and geographical features of the basin. It is therefore very difficult to predict runoff at the dam because of these nonlinear and unknown influences.
          In this context, the power of ANNs arises from their capability for constructing complicated indicators (non-linear models). Among the artificial intelligence methods, artificial neural networks hold a vital role, and ASCE Task Committee reports have accepted ANNs as an efficient forecasting and modeling tool for complex hydrologic systems [22].
          Neural networks are widely regarded as a potentially effective approach for handling large amounts of dynamic, non-linear and noisy data, especially in situations where the underlying physical relationships are not fully understood. Neural networks are also particularly well suited to modeling systems on a real-time basis, and this could greatly benefit operational flood forecasting systems which aim to predict the flood hydrograph for purposes of flood warning and control [16].
          A subset of historical rainfall data from the Wardha River catchment in India was used to build neural network models for real time prediction. Telemetric automatic rain gauging stations deployed at eight identified strategic locations transmit real time rainfall data on an hourly basis. At the dam site an ANN model is developed to predict the runoff three hours ahead of time.
          In this paper we demonstrate the use of a Radial basis function (RBF) neural network model for real time prediction of runoff at the dam and compare the effectiveness of different learning rules and activation functions. The Radial basis function neural network has a feed-forward structure consisting of a hidden layer of locally tuned units which are fully interconnected to an output layer of linear units.




          At a time when global climatic change would seem to be increasing the risk of historically unprecedented changes in river regimes, it appears appropriate that alternative representations for flood forecasting should be considered.

                       II.    METHODOLOGY
   In this study different parameters, namely the learning rule and the activation function, are employed for rainfall-runoff modeling using the Radial basis function neural network model.
          Radial basis function networks have a very strong mathematical foundation rooted in regularization theory for solving ill-conditioned problems.
          The mapping function of a radial basis function network is built up of Gaussians rather than sigmoids as in MLP networks. Learning in an RBF network is carried out in two phases: first for the hidden layer and then for the output layer. The hidden layer is self-organising; its parameters depend on the distribution of the inputs, not on the mapping from the input to the output. The output layer, on the other hand, uses supervised learning (gradient or linear regression) to set its parameters.

             Figure 1. The Radial basis function neural network

         In this study we applied different learning rules to the RBF neural network and studied the optimum performance with different activation functions. We applied the Momentum, Delta-Bar-Delta, Levenberg-Marquardt, Conjugate Gradient and Quickprop learning rules with the Tanh, Linear Tanh, Sigmoid and Linear Sigmoid activation functions.

Performance Measures:

     The learning and generalization ability of the estimated NN model is assessed on the basis of important performance measures such as the MSE (Mean Square Error), NMSE (Normalized Mean Square Error) and r (correlation coefficient).

     MSE (Mean Square Error):

     The formula for the mean square error is:

          MSE = \frac{\sum_{j=0}^{P} \sum_{i=0}^{N} (d_{ij} - y_{ij})^2}{N \cdot P}          ... (1)

     Where
       P   = number of output PEs,
       N   = number of exemplars in the data set,
       y_ij = network output for exemplar i at PE j,
       d_ij = desired output for exemplar i at PE j.

     NMSE (Normalized Mean Square Error):

     The normalized mean squared error is defined by the following formula:

          NMSE = \frac{P \cdot N \cdot MSE}{\sum_{j=0}^{P} \frac{N \sum_{i=0}^{N} d_{ij}^2 - \left( \sum_{i=0}^{N} d_{ij} \right)^2}{N}}          ... (2)

     Where
       P   = number of output processing elements,
       N   = number of exemplars in the data set,
       MSE = mean square error,
       d_ij = desired output for exemplar i at processing element j.

     r (correlation coefficient):

     The size of the mean square error (MSE) can be used to determine how well the network output fits the desired output, but it does not necessarily reflect whether the two sets of data move in the same direction. For instance, by simply scaling the network output, the MSE can be changed without changing the directionality of the data. The correlation coefficient (r) solves this problem. By definition, the correlation coefficient between a network output x and a desired output d is:

          r = \frac{\frac{1}{N} \sum_i (x_i - \bar{x})(d_i - \bar{d})}{\sqrt{\frac{\sum_i (d_i - \bar{d})^2}{N}} \; \sqrt{\frac{\sum_i (x_i - \bar{x})^2}{N}}}          ... (3)

     where  \bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i  and  \bar{d} = \frac{1}{N} \sum_{i=1}^{N} d_i.

     The correlation coefficient is confined to the range [-1, 1]. When r = 1 there is a perfect positive linear correlation between x and d, that is, they co-vary, which means that they vary by the same amount.
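     For concreteness, the three measures defined in equations (1)-(3) can be computed directly from the network outputs and the corresponding desired outputs. The following NumPy sketch is ours and is not part of the original study; the function and variable names are illustrative only.

```python
import numpy as np

def performance_measures(y, d):
    """Compute MSE, NMSE and r for network outputs y and desired outputs d.

    y, d : arrays of shape (N, P) -- N exemplars, P output processing elements.
    Returns (mse, nmse, r) following equations (1)-(3).
    """
    y = np.asarray(y, dtype=float)
    d = np.asarray(d, dtype=float)
    N, P = d.shape

    # Equation (1): mean square error over all exemplars and output PEs.
    mse = np.sum((d - y) ** 2) / (N * P)

    # Equation (2): MSE normalized by a variance-like term of the desired outputs.
    denom = np.sum((N * np.sum(d ** 2, axis=0) - np.sum(d, axis=0) ** 2) / N)
    nmse = P * N * mse / denom

    # Equation (3): correlation coefficient between output and desired output,
    # shown here for a single-output network (P = 1), as in the runoff model.
    x, t = y[:, 0], d[:, 0]
    cov = np.mean((x - x.mean()) * (t - t.mean()))
    r = cov / np.sqrt(np.mean((t - t.mean()) ** 2) * np.mean((x - x.mean()) ** 2))

    return mse, nmse, r
```

     With a single output (P = 1), as in the runoff model here, equation (2) reduces to the MSE divided by the variance of the desired runoff, so NMSE close to zero indicates the model explains most of the variability.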

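     The two-phase RBF training described in this section (a self-organising hidden layer of locally tuned Gaussian units, followed by a supervised linear output layer) can be sketched as below. This is a minimal illustration under our own assumptions: centres placed by a few k-means iterations, a single shared width taken as the mean inter-centre distance, and output weights solved by linear least squares. The study itself trained the output layer with the learning rules compared later (Momentum, Delta-Bar-Delta, Levenberg-Marquardt, Conjugate Gradient, Quickprop), which this sketch does not reproduce.

```python
import numpy as np

def fit_rbf(X, y, n_centers=20, seed=0):
    """Minimal two-phase RBF fit. X: (N, D) inputs, y: (N,) targets."""
    rng = np.random.default_rng(seed)

    # Phase 1 (self-organising hidden layer): place Gaussian centres from the
    # input distribution only, here with a few iterations of k-means.
    centers = X[rng.choice(len(X), n_centers, replace=False)].astype(float)
    for _ in range(20):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_centers):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    # A common heuristic width: the average distance between distinct centres.
    dists = np.sqrt(((centers[:, None, :] - centers[None]) ** 2).sum(-1))
    sigma = dists[dists > 0].mean()

    # Phase 2 (supervised output layer): linear weights by least squares.
    H = np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * sigma ** 2))
    H = np.hstack([H, np.ones((len(X), 1))])          # bias unit
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return centers, sigma, w

def predict_rbf(X, centers, sigma, w):
    """Evaluate the fitted RBF network on new inputs X of shape (N, D)."""
    H = np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * sigma ** 2))
    H = np.hstack([H, np.ones((len(X), 1))])
    return H @ w
```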



                III.      STUDY AREA AND DATA SET
   The Upper Wardha catchment area lies directly in the path of depression movements which originate in the Bay of Bengal. When a low pressure area forms in the Bay of Bengal and the cyclone moves in a north-westerly direction, this catchment often receives very heavy, intense cyclonic precipitation for a day or two. Such events have been observed in the months of August and September. Rainfall is so intense that immediate flash runoff causing heavy floods has been a very common feature in this catchment.
         For such a flashy type of catchment with a wide variety of topography, runoff at the dam remains complicated to predict, and conventional methods give erratic results. An ANN based model is therefore built to predict the total runoff from rainfall in the Upper Wardha catchment area for controlling the water level of the dam.
   In the initial reaches, near its origin, the catchment area is hilly and covered with forest. The latter portion of the river lies almost in plain country with wide valleys.
         The catchment area up to the dam site is 4302 sq. km. At the dam site the river has a wide, fan shaped catchment area with large variation in slope, soil and vegetation cover.

   Figure 2 - Location of Upper Wardha dam on Indian map

   Figure 3 - The Wardha river catchment

   Data: Rainfall-runoff data for this study is taken from the Wardha river catchment area, which contains a mix of urban and rural land. The catchment is divided evenly into eight zones based on the amount of rainfall and a geographical survey. The model is developed using historical rainfall-runoff data provided by the Upper Wardha Dam Division, Amravati, Department of Irrigation, Govt. of Maharashtra. The network is trained with rainfall information gathered from eight telemetric rain-gauge stations distributed evenly throughout the catchment area and with the runoff at the dam site. The data is received online at the central control room on an hourly basis. The Upper Wardha dam reservoir operations are also fully automated, and the amounts of inflow and discharge are likewise recorded hourly. From the inflow and discharge data the cumulative inflow is calculated. The following features are identified for modeling the neural network.

          TABLE I - THE PARAMETERS USED FOR TRAINING THE NETWORK

   Month    RG1    RG2    RG3     RG4      RG5   RG6     RG7     RG8    CIF

       •   Month              – The month of rainfall
       •   Rain1 to Rain8     – Eight rain gauging stations
       •   Cum Inflow         – Cumulative inflow in the dam

   Seven years of data on an hourly basis, from 2001 to 2007, is used. It has been found that most of the rainfall (90%) occurs in the months of June to October and the remaining months are mostly dry; hence data from the five months June to October is used to train the network.

                                IV.     RESULT
          Different configurations of the neural network are employed to learn the unknown characterization of the system from the dataset presented to them. The dataset is partitioned into three categories, namely training, cross validation and test. The idea behind this is that the estimated NN model should be tested against data that was never presented to it before; this is necessary to ensure generalization. Each experiment is performed at least twenty five times with different random initializations of the connection weights in order to improve generalization.
          The data set is divided into training, testing and cross validation data, and the network is trained for all models of the Radial basis function neural network for 5000 epochs.
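          The protocol above can be expressed as the following sketch, reusing the fit_rbf, predict_rbf and performance_measures helpers sketched in Section II. The 60/20/20 split, the number of hidden centres and the selection of the best run by cross-validation MSE are our assumptions for illustration; the study itself trained each configuration for 5000 epochs in its own modeling environment rather than with the closed-form fit used here.

```python
import numpy as np
# Uses fit_rbf, predict_rbf and performance_measures from the sketches in Section II.

def run_experiment(data, n_runs=25, seed=0):
    """data: array of hourly exemplars [month, RG1..RG8, cumulative inflow]."""
    rng = np.random.default_rng(seed)
    X, y = data[:, :-1], data[:, -1]          # inputs: month + 8 rain gauges; target: inflow

    # Assumed 60/20/20 split into training, cross validation and test sets.
    idx = rng.permutation(len(X))
    n_tr, n_cv = int(0.6 * len(X)), int(0.2 * len(X))
    tr, cv, te = idx[:n_tr], idx[n_tr:n_tr + n_cv], idx[n_tr + n_cv:]

    best = None
    for run in range(n_runs):                 # at least 25 runs with new initializations
        model = fit_rbf(X[tr], y[tr], n_centers=20, seed=run)
        mse_cv, _, _ = performance_measures(
            predict_rbf(X[cv], *model).reshape(-1, 1), y[cv].reshape(-1, 1))
        if best is None or mse_cv < best[0]:  # keep the run with the lowest CV error
            best = (mse_cv, model)

    # Report generalization on the untouched test set.
    mse, nmse, r = performance_measures(
        predict_rbf(X[te], *best[1]).reshape(-1, 1), y[te].reshape(-1, 1))
    return {"MSE": mse, "NMSE": nmse, "r": r}
```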




          The performance results obtained by applying the Momentum, Delta-Bar-Delta, Levenberg-Marquardt, Conjugate Gradient and Quickprop learning rules with the Tanh, Linear Tanh, Sigmoid and Linear Sigmoid activation functions are listed in Table II through Table VI.

 TABLE II - RBF NETWORK PERFORMANCE WITH MOMENTUM LEARNING RULE

   Activation       MSE      NMSE     Min Abs   Max Abs     r
   function                           error     error
   Tanh             0.106    0.124    0.034     0.465     0.534
   Linear Tanh      0.097    0.105    0.024     0.212     0.639
   Sigmoid          0.089    0.093    0.047     0.421     0.678
   Linear Sigmoid   0.094    0.132    0.041     0.381     0.689

 TABLE III - RBF NETWORK PERFORMANCE WITH DELTA-BAR-DELTA LEARNING RULE

   Activation       MSE      NMSE     Min Abs   Max Abs     r
   function                           error     error
   Tanh             0.093    0.141    0.051     0.564     0.651
   Linear Tanh      0.190    0.241    0.041     0.412     0.591
   Sigmoid          0.143    0.215    0.032     0.495     0.543
   Linear Sigmoid   0.086    0.095    0.067     0.315     0.603

 TABLE IV - RBF NETWORK PERFORMANCE WITH LEVENBERG-MARQUARDT LEARNING RULE

   Activation       MSE      NMSE     Min Abs   Max Abs     r
   function                           error     error
   Tanh             0.076    0.064    0.018     0.143     0.854
   Linear Tanh      0.086    0.094    0.028     0.298     0.732
   Sigmoid          0.083    0.094    0.020     0.228     0.634
   Linear Sigmoid   0.089    0.095    0.034     0.469     0.758

 TABLE V - RBF NETWORK PERFORMANCE WITH CONJUGATE GRADIENT LEARNING RULE

   Activation       MSE      NMSE     Min Abs   Max Abs     r
   function                           error     error
   Tanh             0.094    0.165    0.051     0.312     0.646
   Linear Tanh      0.089    0.094    0.059     0.215     0.633
   Sigmoid          0.092    0.134    0.041     0.474     0.701
   Linear Sigmoid   0.094    0.124    0.064     0.541     0.732

 TABLE VI - RBF NETWORK PERFORMANCE WITH QUICKPROP LEARNING RULE

   Activation       MSE      NMSE     Min Abs   Max Abs     r
   function                           error     error
   Tanh             0.133    0.245    0.042     0.465     0.584
   Linear Tanh      0.169    0.212    0.054     0.514     0.601
   Sigmoid          0.106    0.256    0.059     0.329     0.563
   Linear Sigmoid   0.098    0.112    0.046     0.311     0.609

   The parameters and performance of the RBF model with the different learning rules and activation functions are compared on a common performance scale in Table VII; the comparative analysis considers the MSE and r (the correlation coefficient).

       TABLE VII - COMPARISON OF PERFORMANCE PARAMETERS

After training the networks the optimum performance is studied, and it is found that the Levenberg-Marquardt learning rule with the Tanh activation function produces the best result. Table VIII lists the parameters and the best performance obtained for the Radial basis function neural network.




                        TABLE VIII - RBF NETWORK PARAMETERS

               Parameter                           Performance
               MSE                                 0.07629
               NMSE                                0.06431
               Min Abs Error                       0.01943
               Max Abs Error                       0.14387
               r                                   0.85437

Figure 4 shows the plot of actual versus predicted optimum values for the Radial basis function neural network obtained with the Levenberg-Marquardt learning rule and Tanh activation function.

        Figure 4 - Actual vs. predicted runoff by RBF for L.M. and Tanh
        (runoff plotted against exemplar number for actual and predicted series)

   The error between the actual and predicted runoff at the dam site is plotted for the RBF network as shown in Figure 5.

        Figure 5 - Error graph of RBF model for L.M. and Tanh
        (prediction error plotted for each exemplar)

The main advantage of RBF is that it finds the input to output map using local approximators. Each of these local pieces is weighted linearly at the output of the network. Since they have fewer weights, these networks train extremely fast and require fewer training samples.

                            V.     CONCLUSION
An ANN-based short-term runoff forecasting system is developed in this work. A comparison is made between five different learning rules, each with four activation functions, to find the optimal configuration of the Radial basis function neural network model. We find that the Radial basis function neural network with the Levenberg-Marquardt learning rule and Tanh activation function is more versatile than the other approaches studied and gives the best overall performance for forecasting runoff with a 3-hour lead time, although the other approaches also perform reasonably. This means that the static Radial basis function neural network model with the Levenberg-Marquardt learning rule and Tanh activation function is a powerful tool for short term runoff forecasting for the Wardha River basin.

                          ACKNOWLEDGMENT
   This study is supported by the Upper Wardha Dam Division, Amravati, Department of Irrigation, Govt. of Maharashtra, India.

                             REFERENCES
[1]  P. Srivastava, J. N. McVair, and T. E. Johnson, "Comparison of process-based and artificial neural network approaches for streamflow modeling in an agricultural watershed," Journal of the American Water Resources Association, vol. 42, pp. 545-563, Jun 2006.
[2]  K. Hornik, M. Stinchcombe, and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, pp. 359-366, 1989.
[3]  M. C. Demirel, A. Venancio, and E. Kahya, "Flow forecast by SWAT model and ANN in Pracana basin, Portugal," Advances in Engineering Software, vol. 40, pp. 467-473, Jul 2009.
[4]  A. S. Tokar and M. Markus, "Precipitation-runoff modeling using artificial neural networks and conceptual models," Journal of Hydrologic Engineering, vol. 5, pp. 156-161, 2000.
[5]  S. Q. Zhou, X. Liang, J. Chen, and P. Gong, "An assessment of the VIC-3L hydrological model for the Yangtze River basin based on remote sensing: a case study of the Baohe River basin," Canadian Journal of Remote Sensing, vol. 30, pp. 840-853, Oct 2004.
[6]  R. J. Zhao, "The Xinanjiang model," in Hydrological Forecasting, Proceedings of the Oxford Symposium, IAHS, Oxford, 1980, pp. 351-356.
[7]  R. J. Zhao, "The Xinanjiang model applied in China," Journal of Hydrology, vol. 135, pp. 371-381, Jul 1992.
[8]  D. Zhang and Z. Wanchang, "Distributed hydrological modeling study with the dynamic water yielding mechanism and RS/GIS techniques," in Proc. of SPIE, 2006, pp. 63591M1-12.
[9]  J. E. Nash and J. V. Sutcliffe, "River flow forecasting through conceptual models," Journal of Hydrology, vol. 10, pp. 282-290, 1970.
[10] D. Zhang, "Study of Distributed Hydrological Model with the Dynamic Integration of Infiltration Excess and Saturated Excess Water Yielding Mechanism," doctoral dissertation, Nanjing University, Nanjing, 2006, p. 190.




[11] E. Kahya and J. A. Dracup, "U.S. streamflow patterns in relation to the El Niño/Southern Oscillation," Water Resources Research, vol. 29, pp. 2491-2503, 1993.
[12] K. J. Beven and M. J. Kirkby, "A physically based, variable contributing area model of basin hydrology," Hydrological Sciences Bulletin, vol. 24, no. 1, pp. 43-69, 1979.
[13] N. J. de Vos and T. H. M. Rientjes, "Constraints of artificial neural networks for rainfall-runoff modelling: trade-offs in hydrological state representation and model evaluation," Hydrology and Earth System Sciences, European Geosciences Union, vol. 9, pp. 111-126, 2005.
[14] H. R. Maier and G. C. Dandy, "Neural networks for the prediction and forecasting of water resources variables: a review of modelling issues and applications," Environmental Modelling & Software, Elsevier, vol. 15, pp. 101-124, 2000.
[15] T. Hu, P. Yuan, et al., "Applications of artificial neural network to hydrology and water resources," Advances in Water Science, NHRI, no. 1, pp. 76-82, 1995.
[16] Q. Ju, Z. Hao, et al., "Hydrologic simulations with artificial neural networks," in Proceedings of the Third International Conference on Natural Computation (ICNC), 2007, pp. 22-27.
[17] G. Wang, M. Zhou, et al., "Improved version of BTOPMC model and its application in event-based hydrologic simulations," Journal of Geographical Sciences, Springer, no. 2, pp. 73-84, 2007.
[18] K. Beven and M. Kirkby, "A physically based, variable contributing area model of basin hydrology," Hydrological Sciences Bulletin, vol. 24, no. 1, pp. 43-69, 1979.
[19] K. Thirumalaiah and C. D. Makarand, "Hydrological forecasting using neural networks," Journal of Hydrologic Engineering, vol. 5, pp. 180-189, 2000.
[20] G. Wang, M. Zhou, et al., "Improved version of BTOPMC model and its application in event-based hydrologic simulations," Journal of Geographical Sciences, Springer, no. 2, pp. 73-84, 2007.
[21] H. Goto, Y. Hasegawa, and M. Tanaka, "Efficient scheduling focusing on the duality of MPL representatives," in Proc. IEEE Symp. Computational Intelligence in Scheduling (SCIS 07), IEEE Press, Dec. 2007, pp. 57-64.
[22] ASCE Task Committee on Application of Artificial Neural Networks in Hydrology, "Artificial neural networks in hydrology I: preliminary concepts," Journal of Hydrologic Engineering, vol. 5, no. 2, pp. 115-123, 2000.




                               Rahul Deshmukh received the B.E. and
                               M.E. degrees in Electronics Engineering from
                               Amravati University. During 1996-2007 he was
                               with the Government College of Engineering,
                               Amravati, in the Department of Electronics and
                               Telecommunication, teaching undergraduate
                               and postgraduate students. Since 2007 he has
                               been with the Indian Institute of Technology (IIT)
                               Bombay, Mumbai. His areas of research are
                               artificial intelligence and neural networks.



                                    A. A. Ghatol received the B.E. from
                                    Nagpur University, followed by the M.Tech.
                                    and Ph.D. from IIT Bombay. He is a recipient
                                    of the best teacher award of the government
                                    of Maharashtra state. He has worked as
                                    Director of the College of Engineering, Pune,
                                    and as Vice-Chancellor of Dr. Babasaheb
                                    Ambedkar Technological University,
                                    Lonere, Raigad, India. His areas of
                                    research are artificial intelligence, neural
                                    networks and semiconductors.



