					                                              International Journal of Advances in Science and Technology,
                                                                                         Vol. 2, No. 4, 2011



         FPGA Implementation of Feed Forward
            Artificial Neural Network for
                    Classification
                                  Abhay B. Rathod1 and Gajanan P. Dhok2
          1
            Department of Electronics and Telecommunication, Sipna College of Engineering and Technology,
                                                 Amravati (MS), India.
                                                abhaybr@rediffmail.com
          2
            Department of Electronics and Telecommunication, Sipna College of Engineering and Technology,
                                                 Amravati (MS), India.
                                              gajanandhok@rediffmail.com


                                                    Abstract
     Artificial Neural Networks (ANNs) have proven their value in technical education, and this has led
     to their emergence as a major paradigm for data mining applications. Neural networks have gone
     through two major development periods: the early 1960s and the mid 1980s. Neural networks emerged
     as a field of study within AI and engineering via the collaborative efforts of engineers,
     physicists, mathematicians, computer scientists, and neuroscientists. Although the strands of
     research are many, there is a basic underlying focus on pattern recognition and pattern
     generation, embedded within an overall focus on network architectures. In this paper a hardware
     implementation of a neural network using Field Programmable Gate Arrays (FPGAs) is presented. A
     digital system architecture is designed to realize a feed forward multilayer neural network. The
     VHDL code is compiled, synthesized and implemented with the Altera Quartus II software tools.
     Simulations are made with the ModelSim tool. Finally, the design is realized on a DE2 board
     carrying the Altera FPGA chip.

     Keywords: Artificial Neural Network, Feed Forward Neural Network (FFNN), Very High Speed
     Integrated Circuit Hardware Description Language (VHDL), Field Programmable Gate Arrays
     (FPGA).


    1. Introduction
     Neural networks have emerged as a field of study within AI and engineering via the collaborative
     efforts of engineers, physicists, mathematicians, computer scientists, and neuroscientists. Although
     the strands of research are many, there is a basic underlying focus on pattern recognition and pattern
     generation, embedded within an overall focus on network architectures. An artificial neural network
     is a system based on the operation of biological neural networks; in other words, it is an
     emulation of a biological neural system. Why would a hardware implementation of artificial neural
     networks be necessary? Although computing these days is truly advanced, there are certain tasks
     that a program written for a common microprocessor is unable to perform; even so, a software
     implementation of a neural network can be made, with its own advantages and disadvantages [1].
     Artificial neural networks (ANNs) are nonlinear mapping structures modeled on the human brain.
     They are a powerful tool for modeling, especially when the underlying data relationship is unknown.
     The basic processing elements of neural networks are called artificial neurons, or simply neurons
     or nodes. In a simplified mathematical model of the neuron, the effects of the synapses are
     represented by connection weights that modulate the effect of the associated input signals, and the
     nonlinear characteristic exhibited by neurons is represented by a transfer function. The neuron
     impulse is then computed as the weighted sum of the input signals, transformed by the transfer
     function. The learning capability of an artificial neuron is achieved by adjusting the weights in
     accordance with the chosen learning algorithm. A great variety of problems can be solved with ANNs
     in areas such as image processing, robotics, and pattern recognition. Most of the work done in this
     field until now consists of software simulations investigating the capabilities of ANN models or
     new algorithms; here we attempt a hardware implementation in order to exploit the neural network's
     inherent parallelism. There are analog, digital and also mixed system architectures



April Issue                                       Page 59 of 95                                    ISSN 2229 5216


     proposed for the implementation of ANNs. The analog ones are more precise but difficult to
     implement and have problems with weight storage. Digital designs have the advantage of low noise
     sensitivity, and weight storage is not a problem. With the advance in programmable logic device
     technologies, FPGAs have gained much interest in digital system design. They are user configurable,
     and there are powerful tools for design entry, synthesis and programming. ANNs are biologically
     inspired and are parallel computations by nature. Microprocessors and DSPs are not suitable for
     parallel designs. Fully parallel modules can be built as ASICs or full-custom VLSI, but developing
     such chips is expensive and time consuming, and the design results in an ANN suited only for one
     target application. FPGAs offer not only parallelism but also flexible designs, with savings in
     cost and design cycle time.


     2. Literature Review

     2.1 Overviews of Artificial Neural Network

     An artificial neuron is a computational model inspired by natural neurons. Natural neurons receive
     signals through synapses located on the dendrites or membrane of the neuron. When the signals
     received are strong enough (surpass a certain threshold), the neuron is activated and emits a signal
     through the axon. This signal might be sent to another synapse, and might activate other neurons.




                                        Figure 1. Components of a neuron.

     The complexity of real neurons is highly abstracted when modeling artificial neurons. These basically
     consist of inputs (like synapses), which are multiplied by weights (strength of the respective signals),
     and then computed by a mathematical function which determines the activation of the neuron.
     Another function (which may be the identity) computes the output of the artificial neuron
     (sometimes depending on a certain threshold). ANNs combine artificial neurons in order to process
     information. An artificial neuron is a device with many inputs and one output, as shown in Figure 2.




                                           Figure 2. An artificial neuron

     Referring to Figure 2, the signal flow from inputs x1, ..., xn is considered to be unidirectional,
     as indicated by the arrows, as is the neuron's output signal flow O. The neuron output signal O is
     given by the following relationship:

     O = f(net) = f(w1 x1 + ... + wn xn)

     where w = (w1, ..., wn) is the weight vector, and the function f(net) is referred to as an
     activation (transfer) function. The variable net is defined as the scalar product of the weight
     and input vectors:

     net = w . x = w1 x1 + ... + wn xn
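     The relationship above can be sketched in a few lines of Python (an illustrative model, not the
     VHDL used in the actual design; the sigmoid activation is an assumed choice, since the paper does
     not fix f here):

```python
import math

def neuron_output(x, w, f=None):
    """Compute O = f(net), where net = w1*x1 + ... + wn*xn."""
    if f is None:
        # Assumed sigmoid activation for illustration
        f = lambda net: 1.0 / (1.0 + math.exp(-net))
    net = sum(wj * xj for wj, xj in zip(w, x))
    return f(net)

# With the identity as transfer function, the output is just the weighted sum:
print(neuron_output([1.0, 2.0], [3.0, 4.0], f=lambda net: net))  # 3*1 + 4*2 = 11.0
```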








                                 Figure 3. Multilayered artificial neural network

     The objective of this type of ANN is for the network to "learn" and be able to predict the output
     values associated with a given input value. This learning occurs by adjusting the weights on the
     arcs in the network so that, for a given input, the ANN's estimated output will closely
     approximate the actual output associated with that input. This requires iteratively presenting the
     network with a set of data containing known pairings of input and output values and adjusting the
     weights to reduce the error in the network's predictions.
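     A minimal sketch of this iterative weight adjustment, using a simple delta rule on a linear neuron
     (the paper itself trains with MATLAB; the learning rate, linearity and epoch count here are
     assumptions for illustration only):

```python
def train(pairs, w, lr=0.1, epochs=200):
    """Repeatedly present known (input, output) pairs and nudge the
    weights to reduce the prediction error (delta rule, linear neuron)."""
    for _ in range(epochs):
        for x, target in pairs:
            y = sum(wj * xj for wj, xj in zip(w, x))  # network's estimate
            err = target - y                          # prediction error
            w = [wj + lr * err * xj for wj, xj in zip(w, x)]
    return w

# Learn y = 2*x1 - x2 from a few known input/output pairings
pairs = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
w = train(pairs, [0.0, 0.0])
print([round(wi, 2) for wi in w])  # [2.0, -1.0]
```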

     2.2 Characteristics of neural networks

        • Neural networks exhibit mapping capabilities: they can map input patterns to their
          associated output patterns.
        • Neural networks can process information in parallel, at high speed, and in a distributed
          manner.
        • Neural networks can be organized as single layer or multilayer network models.
        • Neural networks have the ability to learn, whereby they generate their own rules.
        • Neural networks are robust and fault tolerant.
        • ANN computations may be carried out in parallel, and special hardware devices are being
          designed and manufactured to exploit this.

     3. Neural Network Architectures
     The basic architecture consists of three types of neuron layers: input, hidden, and output layer. A
     neural network has to be configured such that the application of a set of inputs produces the desired
     set of outputs.

     3.1 Feed-forward networks

     In feed-forward networks, the signal flows from the input to the output units strictly in a
     forward direction. There are no feedback loops; that is, the output of a layer does not affect
     that same layer. The data processing can extend over multiple layers of units, but no feedback
     connections are present. Recurrent networks, by contrast, do contain feedback connections, and the
     dynamical properties of the network become important. In some cases, the activation values of the
     units undergo a relaxation process such that the network evolves to a stable state in which these
     activations no longer change. In other applications, the changes of the activation values of the
     output neurons are significant, such that the dynamical behavior constitutes the output of the
     network.
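     The strictly forward signal flow can be sketched as a chain of layer evaluations with no loops
     (a Python model for illustration; the sigmoid activation and the example weights are assumptions,
     not values from the paper):

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def feed_forward(x, layers):
    """Propagate the signal strictly forward: each layer's output feeds
    only the next layer, so there are no feedback connections."""
    for weights in layers:  # one weight matrix (list of rows) per layer
        x = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in weights]
    return x

hidden = [[0.5, -0.5], [1.0, 1.0]]  # 2 hidden neurons, 2 inputs (example weights)
output = [[1.0, -1.0]]              # 1 output neuron
print(feed_forward([1.0, 0.0], [hidden, output]))
```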

     3.2 Feedback networks
     Feedback artificial neural networks allow signals to travel in both directions by introducing
     loops in the network; that is, the output of a layer can affect that same layer or earlier layers.
     Feedback networks are very powerful and can become extremely complicated. Feedback architectures
     are also referred to as interactive or recurrent, although the latter term is often used to denote
     feedback connections in single-layer organizations.






     4. Design of System

     4.1 Neural Design of the System
     The iterative nature of presenting training data to an ANN, along with the assumption that the
     back propagation algorithm is needed for training, belies the fact that an ANN can be implemented
     and trained quite easily using the Altera Quartus design tool. To generate a hardware model from
     the software algorithm, some simple logic and arithmetic blocks such as multipliers, adders and
     logic gates have been used. Basically, each neuron has three multipliers to multiply each input
     value by the corresponding weight, and finally the transfer function delivers the neuron's output.
     The block diagram of the feed forward neural network with the 3-4-2 layer structure is illustrated
     in Figure 4. The number of layers and the number of hidden neurons in each hidden layer are
     parameters to be chosen by the user. The general rule is to choose these design parameters so that
     the best possible model with as few parameters as possible is obtained. For many practical
     applications, one or two hidden layers will suffice.
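     The 3-4-2 structure described above can be modeled as two weight matrices: four hidden neurons
     with three weights each (one multiplier per input, i.e. three multipliers per neuron), and two
     output neurons with four weights each. A Python sketch with randomly chosen illustrative weights
     (the sigmoid activation and the seed are assumptions):

```python
import math
import random

random.seed(1)

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

# 3-4-2 structure: 4 hidden neurons x 3 inputs, 2 output neurons x 4 hidden values
hidden_w = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
output_w = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in hidden_w]
    return [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in output_w]

y = forward([1.0, 0.0, 1.0])
print(len(y))  # 2 outputs, one per output-layer neuron
```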




                        Figure 4. Block diagram of feed forward neural network for 3-4-2 layer.
     4.2 Hardware Implementation and Result
     The complete diagram of the neural network can be seen in Figure 5. The 12 pins on the left side
     of the picture carry the three input values to each neuron. They are connected to the set of buses
     (first layer); these buses distribute the input signals to the next layer (hidden layer). The
     results are passed on to the output layer, and finally the results appear on the output pins (on
     the right side of Figure 5).




          Figure 5. RTL hardware schematic circuit for implementing the 3-4-2 Feed Forward Neural Network

     In this particular implementation, the values (input set, weights, and results) are all integers.
     The neural network has been trained in software, since this hardware implementation does not allow
     on-chip training. In Figure 6 and Figure 7 the waveform analyses are shown, and the input and
     output signals can be seen.
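     Since the hardware works on integers while training happens in floating point, the
     software-computed weights must be scaled to integers before being written into the design. A
     fixed-point sketch in Python (the 8 fractional bits are an assumed format, not a value taken from
     the paper):

```python
FRAC_BITS = 8            # assumed fixed-point format: resolution of 1/256
SCALE = 1 << FRAC_BITS

def quantize(values):
    """Scale floating-point values to integers for the hardware."""
    return [int(round(v * SCALE)) for v in values]

def int_neuron(x_q, w_q):
    """Integer multiply-accumulate, as the FPGA multipliers and adders do,
    then shift back down to the original scale."""
    return sum(xi * wi for xi, wi in zip(x_q, w_q)) >> FRAC_BITS

w_q = quantize([0.5, -0.25, 1.0])
print(w_q)                                  # [128, -64, 256]
print(int_neuron(quantize([1.0, 1.0, 1.0]), w_q))  # 320, i.e. 1.25 * 256
```

     The shift after the accumulation mirrors the rescaling a hardware design would apply after its
     multiply-accumulate stage.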








                                Figure 6. Simulation results for Hidden Layer




                                Figure 7. Simulation results for Output Layer

     5. Application

     The application selected in this work is the three-input XOR problem. A 3-4-2 feed forward network
     (three neurons in the input layer, four neurons in the hidden layer and two neurons in the output
     layer) is targeted at an Altera chip series with a typical gate count above 10,000. First the
     network is trained in software using the MATLAB Neural Networks Toolbox. The calculated weights
     are then written to a VHDL package file. This file, along with the other VHDL code, is compiled,
     synthesized and implemented with the Altera Quartus II software tools. Simulations are made with
     the same tool. Finally, the design is realized on a DE2 board carrying the Altera FPGA chip.
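     The three-input XOR target function is the parity of its inputs, so the eight training pairs
     presented to the toolbox would look like the following (a sketch of the data set only, not the
     actual MATLAB training script):

```python
from itertools import product

# 3-input XOR: output is 1 when an odd number of inputs are 1 (odd parity)
training_set = [(bits, sum(bits) % 2) for bits in product((0, 1), repeat=3)]

for inputs, target in training_set:
    print(inputs, "->", target)
# (0, 0, 0) -> 0, (0, 0, 1) -> 1, ..., (1, 1, 1) -> 1
```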

     6. Conclusion

     The discussion above introduced a simple approach to implementing and training ANNs using the
     Altera Quartus design tool. This paper has presented the implementation of feed forward neural
     networks on FPGAs. The proposed network architecture is modular, making it possible to easily
     increase or decrease the number of neurons as well as layers. FPGAs can be used for portable,
     modular, and reconfigurable hardware solutions for neural networks, which until now have mostly
     been realized on computers. The motivation for this study stems from the fact that even an FPGA
     coprocessor with limited logic density and capabilities can be used to build artificial neural
     networks, which are widely used in solving different problems. Future work involves estimating the
     maximum size of ANNs in modern FPGAs. The main issues are the size and parameterizability of the
     multipliers and the number of interlayer interconnections.






     7. References

     [1] J.J. Blake, L.P. Maguire, T.M. McGinnity, B. Roche, L.J. McDaid, “The Implementation of
          Fuzzy Systems, Neural Networks using FPGAs”, Information Sciences, Vol. 112, pp.151-168,
          1998.
     [2] C. Cox and W. Blanz, “GANGLION - A Fast Field-Programmable Gate Array Implementation of
          a Connectionist Classifier,” IEEE Journal of Solid-State Circuits, Vol. 27, No. 3, pp. 288-299,
          1992.
     [3] M. Krips, T. Lammert, and Anton Kummert, “FPGA Implementation of a Neural Network for a
          Real-Time Hand Tracking System”, Proceedings of the first IEEE International Workshop on
          Electronic Design, Test and Applications, 2002.
     [4] H. Ossoinig, E. Reisinger, C. Steger, and Reinhold Weiss, “Design and FPGA-Implementation of
          a Neural Network,” Proceedings of the 7th International Conference on Signal Processing
          Applications & Technology, pp. 939-943, Boston, USA, October 1996.
     [5] M. Stevenson, R. Winter, and B. Widrow, “Sensitivity of Feedforward Neural Networks to
          Weight Errors,” IEEE Transactions on Neural Networks, Vol. 1, No. 2, pp. 71-80, 1990.
     [6] R. Gadea, J. Cerda, F. Ballester, A. Mocholi, “Artificial neural network implementation on a
          single FPGA of a pipelined on-line backpropagation”, Proceedings of the 13th International
          Symposium on System Synthesis (ISSS'00), pp 225-230, Madrid, Spain, 2000.
     [7] Xilinx, “Logicore Multiplier Generator V5.0 Product Specification,” San Jose, 2002.
     [8] Xilinx, “Logicore’s Single-Port Block Memory for Virtex, Virtex-II, Virtex-II Pro, Spartan-II,
          and Spartan-II E V4.0,” San Jose, 2001.
     [9] C. M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, New
          York, 1995.
     [10] J. Hertz, A. Krogh, and R. G. Palmer, Introduction to the Theory of Neural
          Computation, Addison-Wesley, Reading, MA, 1991.
     [11] Smith, K., & Gupta, J. (2002). Neural networks in business: Techniques and applications.
          Hershey, PA: IGI Publishing.
     [12] Mandic, D. and Chambers, J. (2001) Recurrent Neural Networks for Prediction: Learning
          Algorithms, Architectures and Stability, John Wiley & Sons, New York.
     [13] “Financial Time Series as Random Walks”. Extracted on March 5, 2007 from
          http://www.cs.sunysb.edu/~skiena/691/lectures/lecture8.pdf
     [14] Smith, Leslie. “An Introduction to Neural Networks”. Centre for Cognitive and Computational
          Neuroscience, April 2, 2003. Retrieved on March 6, 2007 from
          http://www.cs.stir.ac.uk/~lss/NNIntro/InvSlides
     [15] Bose, N. K.; & Liang P. (1996). Neural network fundamentals with graphs, algorithms, and
          applications. McGraw-Hill.
     [16] Warner, B.,&Misra, M. (1996).Understanding neural networks as statistical tools. The American
          Statistician, 50(4), 284–293.

     Author Profile

                     Abhay B. Rathod received his B.E. degree in Electronics from Swami Ramanand
                     Teerth Marathwada University, India, in 2000. He is working towards his Master's
                     degree in Electronics and Telecommunication at Amravati University.




                      Gajanan P. Dhok is an Associate Professor in the Department of Electronics and
                      Telecommunication at Sipna's College of Engineering and Technology, Amravati,
                      with 14 years of teaching experience. He received his B.E. (Instrumentation)
                      from Dr. Babasaheb Marathwada University, India, in 1998, and his M.E. (Digital
                      Electronics) from Amravati University in 2006.





				