COMPUTER VISION by dffhrtcv3


ADVANCED MACHINE LEARNING
Slide 3
• Introduction to neural networks
• The first perceptron
• A neuron model
• Activation functions
• The multilayer perceptron structure
• The feedforward and backward computations
What is a neural network:
A neural network is a massively parallel
distributed processor made up of simple
processing units that has a natural propensity
for storing experiential knowledge and making it
available for use. It resembles the brain in two
respects:

1- Knowledge is acquired by the network from
its environment through a learning process.

2- Interneuron connection strengths, known as
synaptic weights, are used to store the
acquired knowledge. (by Haykin)
Benefits of neural networks
1- Nonlinearity
2- Input-output mapping
3- Adaptivity
4- Evidential response
5- Contextual information
6- Fault tolerance
7- VLSI implementability
8- Uniformity of analysis and design
9- Neurobiological analogy
Representation of the nervous system
Block diagram representation of the brain system:

Stimulus → Receptors → Neural Net → Effectors → Response
Components of a neuron
The synapse
       The first perceptron




The first perceptron is based on a
model proposed by Rosenblatt in
1958. Its net input is the dot product
of the input and weight vectors:
                               NET = I · W
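The NET = I · W rule can be sketched as a dot product followed by a hard threshold. The function name, the threshold value, and the AND example below are illustrative, not from the slides.

```python
# Minimal sketch of Rosenblatt's perceptron: NET = I . W (dot product of
# inputs and weights), then a hard threshold decides the output.

def perceptron(inputs, weights, threshold=0.0):
    """Return 1 if the net input I.W exceeds the threshold, else 0."""
    net = sum(i * w for i, w in zip(inputs, weights))
    return 1 if net > threshold else 0

# Example: with hand-picked weights the perceptron computes logical AND.
weights = [0.5, 0.5]
print(perceptron([1, 1], weights, threshold=0.7))  # 1
print(perceptron([1, 0], weights, threshold=0.7))  # 0
```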




A neuron model

A neuron

A neuron with an activation function
Sample Activation Functions
The sigmoid activation function
The activation function must be
differentiable, because the backward
pass of error minimization needs its
derivative.
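A sketch of the sigmoid and its derivative, the quantity the backward pass needs. The convenient identity s'(x) = s(x)(1 - s(x)) is one reason the sigmoid is a popular differentiable choice.

```python
import math

def sigmoid(x):
    """Logistic sigmoid: maps any real x into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    """Derivative expressed through the sigmoid's own output: s * (1 - s)."""
    s = sigmoid(x)
    return s * (1.0 - s)

print(sigmoid(0.0))             # 0.5
print(sigmoid_derivative(0.0))  # 0.25
```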
A sample multi-layer feedforward
    neural network structure
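The feedforward computation through such a structure can be sketched layer by layer: each neuron forms NET = I · W over the previous layer's outputs and applies the sigmoid. The 2-3-1 shape and all weight values below are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weight_rows):
    """One layer: each neuron dots its weight row with the inputs, then squashes."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)))
            for row in weight_rows]

def feedforward(x, hidden_w, output_w):
    hidden = layer_forward(x, hidden_w)     # hidden-layer activations
    return layer_forward(hidden, output_w)  # network output

hidden_w = [[0.2, -0.4], [0.7, 0.1], [-0.5, 0.3]]  # 3 hidden neurons, 2 inputs
output_w = [[0.6, -0.1, 0.8]]                      # 1 output neuron
print(feedforward([1.0, 0.5], hidden_w, output_w))
```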
Backpropagation Algorithm
Backpropagation is a widely used algorithm, and it
can map nonlinear processes. Backpropagation is a
systematic method for training multilayer artificial
neural networks. It has a strong mathematical
foundation based on gradient descent learning.
The learning rate is typically chosen around
0.3 and the momentum coefficient around
0.9, but these values are not strict; the
best values are found experimentally.
The learning rate defines how much the
weights change at each step. The momentum
coefficient adds a fraction of the previous
change to the next one, to avoid getting
stuck at a local optimum during learning.
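The update rule described above can be sketched as delta(t) = -lr · dE/dw + momentum · delta(t-1), using the slide's suggested values lr ≈ 0.3 and momentum ≈ 0.9. The function name and the gradient sequence are illustrative.

```python
def update_weight(weight, gradient, prev_delta, lr=0.3, momentum=0.9):
    """Gradient-descent step with momentum:
    delta(t) = -lr * dE/dw + momentum * delta(t-1)."""
    delta = -lr * gradient + momentum * prev_delta
    return weight + delta, delta

# Apply three updates with a made-up sequence of gradients dE/dw.
w, prev = 0.5, 0.0
for grad in [0.2, 0.1, -0.05]:
    w, prev = update_weight(w, grad, prev)
print(w)
```

Note how the second step moves the weight by more than -lr · grad alone: the momentum term carries part of the first step forward, which smooths the trajectory past shallow local optima.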
Average Squared Error
• Error signal of output neuron j at presentation of the n-th
  training example:  e_j(n) = d_j(n) - y_j(n)
• Total error at time n (C: set of neurons in the output layer):
  E(n) = (1/2) Σ_{j∈C} e_j(n)²
• Average squared error, the measure of learning performance
  (N: size of the training set):  E_AV = (1/N) Σ_{n=1}^{N} E(n)
• Goal: adjust the weights of the NN to minimize E_AV
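The error measures on this slide can be sketched directly: E(n) is half the sum of squared output-neuron errors for one example, and E_AV averages E(n) over the training set. Function names and the two-example data are illustrative.

```python
def example_error(desired, actual):
    """E(n): half the sum of squared errors over the output neurons C."""
    return 0.5 * sum((d - y) ** 2 for d, y in zip(desired, actual))

def average_squared_error(desired_set, actual_set):
    """E_AV: mean of E(n) over the N training examples."""
    errors = [example_error(d, y) for d, y in zip(desired_set, actual_set)]
    return sum(errors) / len(errors)

# Two training examples, one output neuron each (illustrative values):
desired = [[1.0], [0.0]]
actual = [[0.8], [0.3]]
print(average_squared_error(desired, actual))  # approximately 0.0325
```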
A MULTILAYER PERCEPTRON MODEL FOR AN APPLICATION

								