Author Topic: Training and Analysis of a Neural Network Model Algorithm


content.writer

Training and Analysis of a Neural Network Model Algorithm
« on: April 23, 2011, 11:20:21 am »
Author: Prof. Gouri Patil
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518
Download Full Paper - http://www.ijser.org/onlineResearchPaperViewer.aspx?Training_and_Analysis_of_a_Neural_Network_Model_Algorithm.pdf

Abstract—An algorithm is a set of instructions given in an analytical process of a program or function to achieve desired results; it is a model-programmed action leading to a desired reaction. A neural network is a self-learning mining model algorithm, which aligns and learns relative to the logic applied when the network's primary code is initiated. Neural network models are well suited to management systems of many kinds, be it business forecasting or weather forecasting. The paper emphasizes not only the design and functioning of neural network models but also the prediction errors associated with every step of the design and functioning process.

Index Terms—Input, Neural Network, Training, Weights

1   NEURAL NETWORK – A MINING MODEL
A neural network algorithm may contain multiple networks, depending on the number of columns used both for input and prediction, or used only for prediction. The number of networks a single mining model contains depends on the number of parameters connected by the input columns and predictable columns that the mining model uses. The functioning of a neural network mimics human neural interconnections and memory. The human brain comprises, on average, about ten billion neurons functioning in network synchronization, and every single neuron is connected, directly or indirectly, to several other neurons (perhaps around a thousand) within the central neural mass, the brain. By way of these connections and interconnections, neurons send and receive messages as packets of energy called impulses. One very important feature of human neurons is that they do not react immediately to an incoming impulse. Instead, they sum the energies they receive, and only when this sum reaches a certain critical threshold do the neurons trigger and respond with their own signals, or packets of energy, to the associated neurons. The brain learns by adjusting the number and strength of these connections, and it gives a desired response or output in terms of the polarization or repolarization of neurons and differences in energy levels. Neural networks work on the same principle as our brains: they respond to a threshold level of input signal, often calculated from the weights in the network system.
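
As a minimal sketch of this threshold behaviour (in Python, with illustrative weights and a made-up threshold that are not from the paper), a single artificial neuron can be modelled as a weighted sum that fires only once the sum reaches the threshold:

def threshold_neuron(inputs, weights, threshold):
    # Sum each input multiplied by its connection strength (weight).
    total = sum(x * w for x, w in zip(inputs, weights))
    # Fire (output 1) only when the summed energy reaches the threshold.
    return 1 if total >= threshold else 0

# Example: three incoming signals with different connection strengths.
print(threshold_neuron([0.5, 0.2, 0.9], [0.4, 0.3, 0.8], threshold=0.7))  # fires -> 1
print(threshold_neuron([0.1, 0.2, 0.1], [0.4, 0.3, 0.8], threshold=0.7))  # stays silent -> 0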

2   INPUT VALUES IN A NEURAL NETWORK MODEL
Since all inputs and outputs of a neural network are numeric, the primary task in designing a neural network is to define a transfer function wide enough to accept input in any numeric range and specific enough to give output in the desired range. The range and limits of the inputs should be predefined. A saturation or threshold limit for the inputs can then be predefined, so that inputs are, in effect, programmed to give the desired outputs, and inputs or functions incapable of giving the desired outputs are ruled out of the programmed network model from the start. Networks are built in such a manner that they induce a stepwise logical action called the learning rule. Numeric functional values for the inputs are thus predefined and scaled; for non-numeric nominal values such as male or female, or yes or no, the network functions are divided into different links and graded on another chain of numeric input values, such as 0 and 1, branching the network into two simulated paths that work on the same predefined threshold value to decide the final output of the process. The input data for training is the fundamental factor of a neural network, as it provides the information required to "discover" the optimal threshold operating point and the resulting output of the network. If the output of the network is known and the weighted inputs are adjusted to reach the desired output, the training is called supervised. When the output is not defined or known, as in sales or stock prediction, the network is unsupervised and is generally programmed to trigger for every hit value of the input. In unsupervised networks with hidden neurons, the input values of the network may be adjusted and interpreted differently in different situations, and the network ripens, or learns, based on the adjustments made to the input values of the hidden neural layer.
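
A small illustrative sketch of this input preparation (the column values and ranges below are invented, not taken from the paper): a numeric input is rescaled into a predefined [0, 1] range, and a two-valued nominal input is graded onto the numeric pair 0 and 1:

def scale_to_unit(value, low, high):
    # Rescale a numeric input from its known [low, high] range into [0, 1].
    return (value - low) / (high - low)

def encode_nominal(value, positive_label):
    # Map a two-valued nominal input (yes/no, male/female) onto 0 or 1.
    return 1 if value == positive_label else 0

# Example record: an age in years (known range 0-100) and a yes/no flag.
age_input = scale_to_unit(42, low=0, high=100)            # -> 0.42
flag_input = encode_nominal("yes", positive_label="yes")  # -> 1
print(age_input, flag_input)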

3   FUNCTIONING OF A NEURAL NETWORK MODEL
The adjustments made to the input values of the hidden neural layer of an unsupervised network are nothing but calculations within the neural network: adjustments of the weights at the summing junction, and decisions made at the activation point by defining an activation function value specific to every condition and value. These decisions involve calculations with loops and functions. Initially, a time lapse is used as a function to process the data, and the "winner" input within the time lapse of the model takes all the credit in the first step: the neuron with the highest value from the calculation fires and takes the value 1, and all other neurons take the value 0.
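
This winner-take-all step can be sketched in a few lines of Python (the activation values below are invented for illustration):

def winner_take_all(activations):
    # The neuron with the highest computed value fires (takes 1); all others take 0.
    winner = max(range(len(activations)), key=lambda i: activations[i])
    return [1 if i == winner else 0 for i in range(len(activations))]

print(winner_take_all([0.2, 0.9, 0.4, 0.1]))  # -> [0, 1, 0, 0]
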
It is advisable to use the sigmoid curve as the transfer function because it has the effect of "squashing" the inputs into the range [0, 1]. The sigmoid function has the additional benefit of having an extremely simple derivative function for back-propagating errors through the neural network.
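
For reference, a minimal sketch of the sigmoid and of its simple derivative, written in terms of the function's own output, which is what makes back-propagation convenient (standard formulas, not specific to the paper's model):

import math

def sigmoid(x):
    # Squash any real-valued input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(output):
    # The derivative expressed through the sigmoid's own output: y * (1 - y).
    return output * (1.0 - output)

y = sigmoid(0.8)
print(y, sigmoid_derivative(y))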

Typically, the weights in a neural network are initially set to small random values; this represents that the network knows nothing to begin with. As the training process proceeds, these weights converge to values that allow the network to perform a useful computation. Thus it can be said that the neural network begins by knowing nothing and moves on to gain admirable real-world application.
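
A rough sketch of this starting point (the learning rate, target, and inputs below are invented, and the single correction step shown is a generic supervised-style adjustment, not the paper's specific training procedure):

import random

random.seed(0)
# The network starts out "knowing nothing": small random weights.
weights = [random.uniform(-0.05, 0.05) for _ in range(3)]
print("initial weights:", weights)

# One generic correction: nudge each weight toward reducing the error
# between the network's output and a known target value.
inputs, target, learning_rate = [0.5, 0.2, 0.9], 1.0, 0.1
output = sum(x * w for x, w in zip(inputs, weights))
error = target - output
weights = [w + learning_rate * error * x for x, w in zip(weights, inputs)]
print("adjusted weights:", weights)
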
The activation function is an important function that decides the maturation and output of a neural network. Activation functions applied to the hidden values of the network introduce nonlinearity and the desired maturation of the network; without them, the network would be just a plain mathematical algorithm without logical application. For feedback or feed-forward learning of the network, the activation function should be differentiable, as differentiability supports most of the learning curves used for training a neural network, such as the bounded sigmoid function, the tanh function with positive and negative values, or the Gaussian function. Almost all of these nonlinear activation functions assure better numerical conditioning and induced learning.
Networks with threshold limits but without activation functions are difficult to train, as they are programmed for a stepwise, constant rise or fall of weight. A sigmoid activation function with threshold limits, by contrast, means that a small change in an input weight produces a change in the output, and it also makes it possible to predict whether the change in the input weight is good or bad.
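
This contrast can be illustrated with a toy comparison (all numbers invented): a small weight adjustment leaves a hard threshold unit's output unchanged, while the sigmoid unit's output shifts slightly, so the training procedure has something to measure:

import math

def step(x, threshold=0.5):
    # Hard threshold: the output only changes when the sum crosses the threshold.
    return 1 if x >= threshold else 0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x = 0.3
w_before, w_after = 0.6, 0.61  # a small adjustment to one weight
print(step(x * w_before), step(x * w_after))        # 0 0 -> no visible effect
print(sigmoid(x * w_before), sigmoid(x * w_after))  # slightly different outputs
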
In training with activation functions, numerical condition is one of the most fundamental and important concepts of the algorithm, and it is very important that the activation function of a network algorithm be a predefined numeric condition. Numerical condition affects the speed and accuracy of most numerical algorithms. It is especially important in the study of neural networks because ill-conditioning is a common cause of slow and inaccurate results from many network algorithms.
Numerical condition is mostly judged by the condition number of the input value, which for a neural network is the ratio of the largest to the smallest eigenvalue of the Hessian matrix. The eigenvalues of the inputs are the squares of the singular values of the primary input, and the Hessian matrix is the matrix of second-order partial derivatives of the error function with respect to the weights and biases.
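
A worked sketch of this condition number (the 2x2 matrix below is a made-up stand-in for a real Hessian of the error function):

import numpy as np

# Hypothetical symmetric Hessian of the error function.
hessian = np.array([[4.0, 1.0],
                    [1.0, 3.0]])

eigenvalues = np.linalg.eigvalsh(hessian)          # eigenvalues of a symmetric matrix
condition_number = eigenvalues.max() / eigenvalues.min()
print(eigenvalues, condition_number)               # a large ratio signals ill-conditioning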

Read More: http://www.ijser.org/onlineResearchPaperViewer.aspx?Training_and_Analysis_of_a_Neural_Network_Model_Algorithm.pdf