ANN: Simulation of a neural net with back-propagation

Introduction

Artificial neural networks are networks of interconnected 'neurons' modelled on the neural structure of the brain. They process input instances one at a time, and 'learn' by comparing their classification of a sample with the a priori known correct classification of that sample. After each iteration, the classification errors are fed back into the network and used to adjust its parameters, so that the network classifies better the next time around, and so on for many iterations. The adjusting of the weights can be done by back-propagation:

  1. Give the neural net a set of sample inputs;
  2. Compute the resulting output;
  3. Compare the resulting output with the desired output and calculate the weight changes for each input sample;
  4. Add up the weight changes for all sample inputs and change the weights.

By applying this procedure over and over again (on the same set of sample inputs), the neural net can be trained to perform better and better. For more information about the procedure and the parameters involved, see any well-known book about neural networks; a minimal sketch of the procedure is given below.
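To make the four steps concrete, here is a minimal sketch in Python of batch back-propagation for a small net with one hidden layer of sigmoid units and a single output. The layout, the initial weight range and the names used here are assumptions made for illustration (biases are omitted for brevity); they are not taken from the program itself.

    import math
    import random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    class Net:
        """Tiny fully connected net: n_in inputs -> n_hid hidden sigmoid units -> 1 output."""
        def __init__(self, n_in, n_hid, rate=1.0):
            self.rate = rate  # the Rate parameter used when changing weights
            self.w_hid = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
                          for _ in range(n_hid)]
            self.w_out = [random.uniform(-0.5, 0.5) for _ in range(n_hid)]

        def forward(self, x):
            # Step 2: compute the resulting output for one sample input.
            hid = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in self.w_hid]
            out = sigmoid(sum(w * h for w, h in zip(self.w_out, hid)))
            return hid, out

        def train_cycle(self, samples):
            """One weight change cycle over a set of (input, desired output) samples."""
            d_out = [0.0] * len(self.w_out)
            d_hid = [[0.0] * len(row) for row in self.w_hid]
            # Step 1: present every sample input in the set.
            for x, target in samples:
                hid, out = self.forward(x)
                # Step 3: compare with the desired output and calculate the weight
                # changes for this sample (delta rule for sigmoid units).
                err_out = (target - out) * out * (1.0 - out)
                for j, h in enumerate(hid):
                    d_out[j] += self.rate * err_out * h
                    err_hid = err_out * self.w_out[j] * h * (1.0 - h)
                    for i, xi in enumerate(x):
                        d_hid[j][i] += self.rate * err_hid * xi
            # Step 4: add up the weight changes for all samples and change the weights.
            for j in range(len(self.w_out)):
                self.w_out[j] += d_out[j]
                for i in range(len(self.w_hid[j])):
                    self.w_hid[j][i] += d_hid[j][i]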

The program discussed on this page implements such a neural net and illustrates the back-propagation procedure by giving a visual representation of the net. The example is based on one described in the book Artificial Intelligence, 3rd edition, by Patrick Henry Winston.

Download the simulation

The program consists of a zipped MS Windows executable:

nnkennis.zip (Windows, 278 KB, zipped)

If desired, this file can be verified against the following checksum:

54cf09dfeafaa00bde0d4a5dc621a395

How does it work?

The purpose of the net is to determine whether the two people corresponding to the ON inputs (1) are acquaintances. The two people are judged to be acquaintances if the output value is greater than or equal to 0.9, and NOT acquaintances if the output value is smaller than 0.9.
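As a small illustration of this decision rule, building on the hypothetical Net sketch above (only the threshold of 0.9 is taken from the program itself):

    def acquaintances(net, inputs, threshold=0.9):
        """Judge the two ON people acquaintances when the output reaches the threshold."""
        _, out = net.forward(inputs)
        return out >= threshold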

Interface of neural network simulator

  1. Visualization of neural net;
  2. Rows of sample input and corresponding desired output (A);
  3. Status overview and parameters: number of weight change cycles (X) and Rate parameter (Y);
  4. Button panel:
    - Re-initialize ! : Reset weights, input values and sample inputs according to the example;
    - Step ! : Execute a step to process one sample input. The change in values can be seen directly in the visualization;
    - Train ! : Train the net by performing X weight change cycles using Rate parameter Y;
    - Enter input... : Test the net by giving it your own input values;
    - Error graph... : Show the learning progress of the net by plotting the RMS error against the weight change cycle (see the sketch below).
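The RMS error that the error graph plots could be computed along the following lines. This is again a hedged sketch that reuses the hypothetical Net and train_cycle from above; the sample inputs and the number of cycles are made up for illustration.

    import math

    def rms_error(net, samples):
        """Root-mean-square error over all sample inputs after a weight change cycle."""
        sq = 0.0
        for x, target in samples:
            _, out = net.forward(x)
            sq += (target - out) ** 2
        return math.sqrt(sq / len(samples))

    # Hypothetical training run: X weight change cycles, recording the error curve.
    samples = [([1, 0, 1, 0, 0, 0], 0.9),   # made-up sample inputs and desired outputs
               ([0, 0, 0, 1, 0, 1], 0.1)]
    net = Net(n_in=6, n_hid=3, rate=1.0)
    errors = [rms_error(net, samples)]
    for _ in range(1000):                    # X = 1000 weight change cycles
        net.train_cycle(samples)
        errors.append(rms_error(net, samples))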

References

  1. Dayhoff, J.E. (1990). Neural Network Architectures: An Introduction. Van Nostrand Reinhold Co.
  2. Fausett, L. (1994). Fundamentals of Neural Networks: Architectures, Algorithms, and Applications. Prentice-Hall, Inc.
  3. Generation5: Online Neural Networks Articles.