Matthew Collins & Albert DeLucca
What are Neural Networks?
Artificial neural networks are mathematical models originally designed to mimic aspects of how we believe the brain works.
The basic unit of the human brain is a cell called the neuron, a specialized cell that can receive and store information. The brain is estimated to contain 100 billion of these cells, and each neuron connects to approximately 10,000 others. These connections are called synapses. The power of the brain comes from the sheer number of neurons and the number of synapses between them.
Neural networks are parallel processing structures consisting of non-linear processing elements interconnected by fixed or variable weights.
Rather than performing a programmed set of instructions sequentially, as in a traditional von Neumann-type computer, neural network nodal functions can be evaluated simultaneously, yielding enormous increases in processing speed.
The image to the right shows a basic neural network. The bottom layer represents the input layer, in this case with 5 inputs. In the middle is something called the hidden layer, with a variable number of nodes. It is the hidden layer that performs much of the work of the network. The output layer in this case has two nodes, representing output values we are trying to determine from the inputs.
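The layered structure described above can be sketched in a few lines of code. This is a minimal illustration, not an implementation from the text: the hidden-layer size of 3, the random weight range, and the sample input values are all assumptions made for the example.

```python
import random

random.seed(0)

def make_layer(n_inputs, n_outputs):
    """One weight per (input node, output node) connection, set randomly."""
    return [[random.uniform(-1, 1) for _ in range(n_inputs)]
            for _ in range(n_outputs)]

def forward(layer, inputs):
    """Each node's value is the weighted sum of the previous layer's values."""
    return [sum(w * x for w, x in zip(weights, inputs))
            for weights in layer]

hidden_layer = make_layer(5, 3)   # 5 inputs feeding 3 hidden nodes (size assumed)
output_layer = make_layer(3, 2)   # 3 hidden nodes feeding 2 outputs

inputs = [0.2, 0.9, 0.4, 0.7, 0.1]          # 5 example input values
result = forward(output_layer, forward(hidden_layer, inputs))
```

Evaluating the network is just two applications of the same weighted-sum step, one per layer; `result` holds the two output values the text describes.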
Simple Neural Net
To show how a neural net works as accurately and simply as possible, we chose an example that uses the simplest form of a neural net. The simplest design calls for two layers of cells (programmed system units), called either neurons or neurodes. These two layers are commonly referred to as the Input Layer and the Output Layer.
How A Neural Network Works
In order to illustrate how neural networks work, the following example will be used: Let's say that we want a robotic system to decide whether small gray-scale digital photographs, each measuring 50 x 50 pixels, show male or female faces. That means there are 2,500 inputs (one for each pixel of the photograph), and two possible outputs (one for man, one for woman).
Our Example: Input Layer
In our example, there are 2,500 neurons in the input layer. We determine this by multiplying the number of pixels in the image's height by the number of pixels in its width. Each input neuron contains the brightness level of one pixel of the digital photograph, on a scale of 1 to 100.
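The mapping from photograph to input layer can be sketched as follows. The image data here is synthetic (a made-up brightness pattern); only the 50 x 50 size and the 1-to-100 brightness scale come from the example in the text.

```python
WIDTH, HEIGHT = 50, 50

# Synthetic 50 x 50 grey-scale "photo": each pixel holds a brightness
# level from 1 to 100, as in the text's example.
image = [[1 + (x + y) % 100 for x in range(WIDTH)] for y in range(HEIGHT)]

# One input neuron per pixel: flatten the grid row by row,
# giving 50 * 50 = 2,500 input values.
input_neurons = [pixel for row in image for pixel in row]
```

Flattening row by row is an arbitrary but common convention; what matters is that the same pixel always feeds the same input neuron.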
Our Example: Output Layer
The Output Layer for our example has two neurons, one for male and one for female. If the picture is of a woman, we want the female value to be 1 and the male value to be 0; if it's a picture of a man, the inverse should be true. Of course, other applications may have more output neurons as necessary.
How the Output Layer Gets Its Data
Each output neuron's value is a weighted sum of the input neurons: each input value is multiplied by a weight specific to that particular combination of input and output neuron, and the products are added together. If the weight values are precisely correct, then the results presented by the output neurons will correctly reflect the pattern imposed on the input neurons. This process is somewhat analogous to the way a human brain recognizes faces when looking at black-and-white photos.
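The weighted-sum rule can be written out directly. To keep it readable, this sketch uses only three inputs instead of 2,500, and the input and weight values are made up for illustration.

```python
# Three example input values (e.g. three pixel brightnesses, scaled down).
inputs = [0.8, 0.1, 0.5]

# A separate weight for every (input, output) pair: one weight list
# per output neuron. These numbers are illustrative, not trained.
weights_male   = [0.2, -0.4, 0.7]
weights_female = [-0.1, 0.9, 0.3]

# Each output neuron = sum of (input value * its weight for that output).
male_output   = sum(x * w for x, w in zip(inputs, weights_male))
female_output = sum(x * w for x, w in zip(inputs, weights_female))
```

With 2,500 inputs and 2 outputs, the full example would need 2,500 x 2 = 5,000 such weights, which is exactly what training has to get right.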
Beyond Design: Training The Net
The most important part of designing a successful neural net is not the programming but the training of the system.
Training a neural network effectively synthesizes a set of rules from a body of training models. During the training phase, the neural network encodes the necessary transformation, mapping a desired set of input features to specific output features. The appropriate training methods are determined by the characteristics of the neural topology (in our example, the two-layer neural net) and the nodal functions. The first step in training is to assemble a large set of training models that accurately covers all possible variations of the data. Iterating through this set many times makes the neural net better and more efficient at recognizing patterns.
Our Example: Training
In our example case, let's say that there are a couple of hundred of those 50 x 50-pixel photographs of faces, already identified as male or female. At the initial phase of training, random numbers are used as the weight values. The neural-net learning software then loads the first photograph's 2,500 pixel-brightness values into the appropriate input neurons. The neural net then performs the calculations described in the "How the Output Layer Gets Its Data" section to compute the output neurons' values. The learning software then checks whether the output neurons give the right answer. Because the weights were random, there's roughly a 50/50 chance that the neural net got the right answer. If the net got it right, the learning software goes on to the next photograph.
Our Example: Training Cont.
If the neural net gives the wrong answer, the training software essentially adjusts the weight numbers a little bit. Exactly how it modifies them depends on the algorithms used by the neural net, but all the methods are mathematically complex, with computation time increasing rapidly with large networks.
Some of these modifying algorithms are: Adaline, Back Propagation, Delta Rule, ART1, Outstar, and Kohonen.
Once the numbers have been altered, it's back to the learning process: cycling through all the training data, adjusting the weights, and doing it over and over again. The performance of the neural network improves over time, and after 50 or 100 iterations through all of the sample data, the net should be able to match all of the training data correctly.
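The check-answer-then-adjust cycle described above can be sketched with a toy version of the loop. This uses a simple perceptron-style update rule rather than any of the algorithms named in the text (back propagation and the others are considerably more complex), and the tiny two-input dataset is invented so the whole thing fits in a few lines.

```python
import random

random.seed(1)

# Toy labelled data: the correct answer is simply the first input value.
data = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1), ([0, 0], 0)]

# Start from random weights, as in the text.
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
rate = 0.1   # how much each wrong answer nudges the weights

for _ in range(100):                         # many passes over the data
    for inputs, target in data:
        out = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
        error = target - out                 # zero when the answer was right
        # Wrong answer: adjust each weight a little, in proportion
        # to the input that contributed to the mistake.
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

correct = sum(
    (1 if sum(w * x for w, x in zip(weights, i)) + bias > 0 else 0) == t
    for i, t in data)
```

After enough passes the weights stop changing because every answer is right, which is the "match all the training data" condition the text describes.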
Some techniques, such as feeding some of the output neuron data back to the input neurons, help speed up the learning process.
Our Example: Conclusion
The big question is: has the neural net learned the pattern, or has it merely memorized the training data? That can happen, the same way that children who can't yet read can still pick out a few words that they know, like their name. With the learning software turned off, the neural net is tested, generally against a few dozen sets of data that it hasn't seen before. If it learned the desired patterns, it will be able to identify the male and female faces in those new black-and-white photographs. If it merely memorized the training data, it will do poorly. In that case, the network may have to be trained again with a different set of random starting weights; if the problem persists, the problem may lie in the logic and coding of the training algorithm.
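The held-out test described above amounts to measuring accuracy on examples the net never trained on. In this sketch, `predict` is a stand-in for a trained network (the averaging rule and the threshold of 50 are made up), and the unseen examples are synthetic.

```python
def predict(features):
    """Stand-in for a trained net: here it guesses class 1 (say, female)
    whenever the average brightness exceeds a made-up threshold."""
    return 1 if sum(features) / len(features) > 50 else 0

# A few labelled examples the "net" has never seen before.
unseen = [([60, 60, 60, 60], 1),
          ([30, 30, 30, 30], 0),
          ([80, 80, 80, 80], 1)]

# Fraction of unseen examples classified correctly.
accuracy = sum(predict(f) == label for f, label in unseen) / len(unseen)
```

High accuracy on unseen data suggests the net learned the pattern; accuracy near chance suggests it only memorized the training set.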
Problems Neural Nets Used For
Pattern Recognition, Classification and Detection
One of the largest and most successful application areas involves the use of neural nets for pattern recognition, classification and detection. The goal of pattern recognition and classification is that of assigning each separate input pattern to one of a finite number of output classes. Input elements represent measurements of selected features that are used to distinguish between the various output classes.
Target Tracking Modeling
Since neural networks can map arbitrary input-output associations, they are well suited for modeling virtually any dynamical system. One such problem is generating an interpolative/extrapolative model of ocean data in which the observations (e.g., salinity) are sparse or irregularly spaced.
Real Life Example
Some Wall Street analysts have been using neural net software to pick stocks based on changes in indicators. The good thing about neural networks is that they learn to find patterns on their own, automatically assigning low weights to data that doesn't help distinguish patterns. In theory, the stock analyst doesn't have to decide which indicators predict a successful stock, but can just feed them all into the neural net, along with historical return-on-investment data, and let the learning algorithms work it out.
Neural networks are useful in a variety of real-life situations because of their ability to learn. This ability is a crucial factor in the growing field of artificial intelligence; as the field grows, so will neural nets, and many more uses for them will emerge in the coming years.