
ECE 774        HW6        Dylan Fransway


3.5. One of the most common applications of the MLP NN trained with backpropagation is the
approximation of nonlinear functional mappings. Write a computer program and design an MLP NN with
one hidden layer, and train by backpropagation to perform the following mappings.

1.

2.                                       and

3.                                                 and

4.                                             ,             , and

For each of the above examples, do the following:

     (a) Generate three independent sets of input patterns.
         Training set: 200 patterns
         Testing set: 100 patterns
         Validation set: 50 patterns
         For each of the sets generate the target values, using the analytical expression for the function
         to be approximated.
     (b) In the process of network training, use the training data set to modify the weights. Use the
         testing data at the end of each training epoch to monitor the ability of the network to generalize
         and prevent overfitting. Finally, use the validation set to verify the overall performance of the
         network after it has been trained.
     (c) Experiment with different numbers of neurons in the hidden layer.
     (d) Compare the speed of network convergence when the weights are initialized as small random
         numbers versus the Nguyen-Widrow initialization procedure (cf. Sect. 3.3.2).
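For part (d), the Nguyen-Widrow procedure of Sect. 3.3.2 can be sketched in a few lines. This is a minimal MATLAB sketch of the procedure as it is usually stated (a 0.7*H^(1/n) scale factor applied to uniformly random weights), assuming a single scalar input and H hidden neurons; the variable names are illustrative only.

% Nguyen-Widrow initialization for one hidden layer (sketch).
% n : number of network inputs, H : number of hidden neurons
n = 1;  H = 10;
beta = 0.7 * H^(1/n);                      % Nguyen-Widrow scale factor

W1 = rand(H, n) - 0.5;                     % start from uniform values in [-0.5, 0.5]
for i = 1:H
    W1(i, :) = beta * W1(i, :) / norm(W1(i, :));   % rescale each row to magnitude beta
end
b1 = beta * (2 * rand(H, 1) - 1);          % biases uniform in [-beta, beta]

% The "small random numbers" case of part (d) would simply be, e.g.:
% W1 = 0.1 * (rand(H, n) - 0.5);   b1 = 0.1 * (rand(H, 1) - 0.5);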



     (a) I generated 350 input-output pairs and used the randperm function to partition them into the
         three sets. The training values were used to train the neural network. When training, it is
         possible to go too far: the network keeps reducing its error on the training vectors while its
         error on the function as a whole starts to increase. To catch this, the testing values were
         used to measure the error outside of the training vectors. Finally, the verification vectors
         were another set of inputs and outputs, distinct from the previous two, giving yet another
         check that the net learned the function rather than the training vectors.
     (b) The network was eventually able to learn a decent approximation of the given function. It
         wasn't as tight a fit as I have seen in previous assignments, though this network is
         approximating a nonlinear function, a significantly more difficult task. The plot of the
         verification data can be seen in Figure 1: while not exactly a line with a slope of one, it is
         a decent approximation of one. Figure 2 shows the MSE at each epoch. Finally, Figure 3 shows
         the output of the network at a significantly higher resolution; we can see that 10 hidden
         neurons did not overfit in this case. The data partition and training loop are sketched below.
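The workflow described in (a) and (b) can be written as a minimal MATLAB sketch. This is not the actual homework code: the target function f below is a placeholder standing in for the assigned mapping, and the learning rate, stopping rule, and variable names are illustrative assumptions.

% Parts (a)-(b): generate 350 input-output pairs, partition them with randperm,
% train a one-hidden-layer MLP by backpropagation, and watch the testing MSE.
f = @(x) 10*x.^2;                        % placeholder target mapping, not the assigned one

N = 350;
x = rand(1, N);                          % 350 random inputs on (0, 1)
t = f(x);                                % analytical target values

idx   = randperm(N);                     % random partition of the 350 pairs
trIdx = idx(1:200);   teIdx = idx(201:300);   vaIdx = idx(301:350);

H   = 10;                                % hidden neurons (10 used in this write-up)
eta = 0.005;                             % learning rate (tune as needed)
W1 = rand(H,1) - 0.5;   b1 = rand(H,1) - 0.5;   % small random initialization
W2 = rand(1,H) - 0.5;   b2 = rand      - 0.5;

bestTestMSE = inf;
mseHist = [];
for epoch = 1:10000
    for k = trIdx                        % one pass over the training set
        a1 = tanh(W1*x(k) + b1);         % hidden layer (tansig units)
        y  = W2*a1 + b2;                 % linear output unit
        e  = t(k) - y;

        d2 = e;                          % output-layer delta (linear unit)
        d1 = (W2.' * d2) .* (1 - a1.^2); % hidden-layer deltas
        W2 = W2 + eta * d2 * a1.';   b2 = b2 + eta * d2;
        W1 = W1 + eta * d1 * x(k);   b1 = b1 + eta * d1;
    end

    % Check generalization on the testing set after each epoch
    yTe = W2*tanh(W1*x(teIdx) + repmat(b1,1,numel(teIdx))) + b2;
    testMSE = mean((t(teIdx) - yTe).^2);
    mseHist(end+1) = testMSE;            % kept for the convergence plot (Figure 2)
    if testMSE > bestTestMSE             % crude early-stopping check
        break
    end
    bestTestMSE = testMSE;
end

% Final check on the verification set, never used during training
yVa = W2*tanh(W1*x(vaIdx) + repmat(b1,1,numel(vaIdx))) + b2;
vaMSE = mean((t(vaIdx) - yVa).^2);

Per-pattern updates are shown here; for part (c) the same loop can simply be rerun with different values of H.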


                       Figure 1: Plot of the verification data set's output against the expected output
                       (Network Output versus Verification Data, both axes from 0 to 10).
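Assuming the t, yVa, and vaIdx variables from the training sketch above, a plot like Figure 1 can be produced with:

plot(t(vaIdx), yVa, 'o');                % network output against the expected output
xlabel('Verification Data');  ylabel('Network Output');
axis([0 10 0 10]);                       % axis range shown in the figure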
                                    Figure 2: Plot of the mean squared error over 10,000 epochs
                                    (log-log axes: Epoch from 10^0 to 10^4, Mean Squared Error from 10^-4 to 10^0).
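The log-log convergence curve of Figure 2 can be drawn from the per-epoch record kept in the training sketch (mseHist is the illustrative name used there):

loglog(1:numel(mseHist), mseHist);       % MSE versus epoch on log-log axes
xlabel('Epoch');  ylabel('Mean Squared Error');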


                           10
                                                                                                Expected
                            9                                                                   Actual

                            8

                            7

                            6

                            5
                       y




                            4

                            3

                            2

                            1

                            0
                            0.1         0.2    0.3       0.4   0.5           0.6   0.7    0.8    0.9       1
                                                                     x


Figure 3: Plot of the expected vs. the actual output with 10,000 verification points.
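The dense sweep behind Figure 3 amounts to evaluating the trained network on a fine input grid; a sketch, again reusing the weights and placeholder f from the training sketch above:

xFine = linspace(0.1, 1, 10000);                              % 10,000-point input grid
yNet  = W2*tanh(W1*xFine + repmat(b1,1,numel(xFine))) + b2;   % network output on the grid
plot(xFine, f(xFine), xFine, yNet);                           % expected vs. actual output
legend('Expected', 'Actual');   xlabel('x');   ylabel('y');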

								