Wednesday, September 28, 2011

A17 - Neural Networks

An artificial neural network, or simply a neural network, is a mathematical model that aims to mimic the structural and functional aspects of biological neural networks. So what is the difference between the two? Biological networks explain complex phenomena using real neurons that form circuits and release physical signals (action potentials) to one another. Artificial neural networks, on the other hand, are purely mathematical and use artificial neurons instead.


The idea of an artificial neuron is as follows

 
Figure 1. Artificial neuron.

A neuron receives signals x1, x2 and x3. Each signal is weighted by a synaptic strength w. The weighted signals are summed to obtain the total signal a, which is passed to an activation function g (commonly a sigmoid or a step function). The resulting signal z is then fired to other neurons.
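The idea above can be sketched in a few lines of code. This is in Python purely for illustration (the activity itself uses Scilab), and the input and weight values are made up:

```python
import math

def sigmoid(a):
    """Common choice of activation function g."""
    return 1.0 / (1.0 + math.exp(-a))

def neuron(x, w, g):
    """Weight each input signal, sum to get the total signal a,
    then pass a through the activation function g."""
    a = sum(xi * wi for xi, wi in zip(x, w))
    return g(a)  # the output signal z

# Hypothetical signals x1, x2, x3 and synaptic weights.
z = neuron([1.0, 0.5, -0.3], [0.4, 0.2, 0.7], sigmoid)
```

Because g is a sigmoid here, the output z always lies between 0 and 1, which is what lets the final output neuron be read as a class label.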

The structure of an artificial neural network is often represented as


Figure 2. Artificial neural network.
(image taken from http://en.wikipedia.org/wiki/Artificial_neural_network)

Each neuron obeys the idea described above. The input neurons are the signal receivers (e.g. features of a pattern). These input neurons are each connected to all hidden neurons. The hidden neurons then pass the signal to the output neurons which will contain the information we need.

Here we will use an artificial neural network to classify objects into classes via supervised learning, wherein the desired outputs are presented to the network to guide it toward more accurate classification.

To describe how classification can be done using artificial neural networks, I will use the same data from my previous blog post, shown below.

Table 1. Training and Test sets.

We then use the Artificial Neural Network toolbox of Scilab, which can be downloaded here. The behavior of the network depends on the number of input/hidden/output neurons, the learning rate, and the number of training cycles.

I used 2 input, 2 hidden, and 1 output neuron. To check the effects of the learning rate and the number of training cycles, I varied each in turn.
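The Scilab toolbox handles the training internally, but the underlying procedure for a 2-2-1 network can be sketched as online backpropagation. The following Python sketch is purely illustrative: the training data, learning rate, and cycle count are all made up (the actual features come from the previous activity's data), and bias weights are included for each neuron.

```python
import math
import random

random.seed(0)

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# Hypothetical 2-feature training set with two classes (desired outputs 0 and 1).
data = [([0.2, 0.3], 0), ([0.8, 0.9], 1), ([0.1, 0.4], 0), ([0.9, 0.7], 1)]

# 2 input, 2 hidden, 1 output neuron; each weight vector is [w1, w2, bias].
wh = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
wo = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in wh]
    z = sigmoid(wo[0] * h[0] + wo[1] * h[1] + wo[2])
    return h, z

lr, cycles = 0.5, 10000  # learning rate and training cycles (illustrative values)
for _ in range(cycles):
    for x, t in data:
        h, z = forward(x)
        d_out = (z - t) * z * (1 - z)  # output delta (squared error, sigmoid)
        for j in range(2):
            d_h = d_out * wo[j] * h[j] * (1 - h[j])  # hidden delta
            wo[j] -= lr * d_out * h[j]
            wh[j][0] -= lr * d_h * x[0]
            wh[j][1] -= lr * d_h * x[1]
            wh[j][2] -= lr * d_h
        wo[2] -= lr * d_out

# Round the output neuron's value to get the predicted class.
predictions = [round(forward(x)[1]) for x, _ in data]
```

After training, the raw outputs sit close to 0 or 1, and rounding them recovers the class labels, which mirrors how the toolbox's results are read off in the tables below.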

Results using 100 training cycles and varying learning rates are shown below.

Table 2. Results using the training set as the test for the ANN's learning while varying learning rate.

Table 3. Results using the test set as the test for the ANN's learning while varying learning rate.

Tables 2 and 3 show that as the learning rate increases, the percentage of correct classifications increases dramatically.

We then vary the number of training cycles while keeping the learning rate fixed at 0.05. Results are shown below.

Table 4. Results using the training set as the test for the ANN's learning while varying training cycles.

Table 5. Results using the test set as the test for the ANN's learning while varying training cycles.

For both cases in tables 4 and 5, even though the learning rate is kept low at 0.05, increasing the number of training cycles effectively increases the percentage of correct classifications.

The changes in the percent classifications in tables 2 to 5 may not tell the whole story about the effect of the learning rate and training cycles. However, looking closely at the actual output values, increasing either the learning rate or the training cycles brings them closer to the desired values of 0 or 1.
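This creep of the outputs toward 0 or 1 can be seen even for a single sigmoid neuron. In the illustrative Python sketch below (the input, target, and learning rate are made-up values, with the learning rate set to the same small 0.05 used above), more training cycles steadily pull the neuron's output toward the target:

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def train(cycles, lr=0.05, x=1.0, target=1.0):
    """Train one sigmoid neuron (weight + bias) on a single example
    by gradient descent on the squared error, for a given cycle count."""
    w, b = 0.0, 0.0
    for _ in range(cycles):
        z = sigmoid(w * x + b)
        d = (z - target) * z * (1 - z)  # squared-error gradient through the sigmoid
        w -= lr * d * x
        b -= lr * d
    return sigmoid(w * x + b)

# Output after 100, 1000, and 10000 training cycles.
outputs = [train(c) for c in (100, 1000, 10000)]
```

Even though no single run reaches the target exactly, each increase in the cycle count moves the output closer to 1, matching the trend seen in the result values of the tables.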

The choice of learning rate and number of training cycles may be crucial in the computation. Assigning a high learning rate and many training cycles from the start will indeed give better classification. The drawback, however, is longer computation time and the need for more computational resources. Thus, the trade-off between accuracy and computational cost must be weighed carefully.


For this activity, I'm giving myself a 10.0 for being able to use artificial neural networks to classify objects into classes. I also checked the effect of the learning rate and training cycles on the accuracy of classification. I have shown that even with a small learning rate, increasing the training cycles gives more accurate classification, and vice versa. I would like to thank Mr. Jeric Tugaff for providing a sample code for this activity.


References:
[1] M. Soriano, 'Neural Networks', Applied Physics 186 manual, 2008.
 
