Author Topic: Designing Aspects of Artificial Neural Network Controller  (Read 3031 times)


content.writer

Designing Aspects of Artificial Neural Network Controller
« on: April 23, 2011, 09:39:08 am »
Author : Navita Sajwan, Kumar Rajesh
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518
Download Full Paper -  http://www.ijser.org/onlineResearchPaperViewer.aspx?Designing_Aspects_of_Artificial_Neural_Network_Controller.pdf

Abstract: In this paper, important fundamental steps in applying artificial neural networks to the design of intelligent control systems are discussed. Architectures of neural networks, including single-layer and multi-layer networks, are examined for control applications. The importance of different learning algorithms for both linear and nonlinear neural networks is developed. The problem of generalization of neural networks in control systems, together with some possible solutions, is also included.

Index Terms: Artificial neural network, ADALINE algorithm, Levenberg gradient, forward propagation, backward propagation, weight-update algorithm.

1 INTRODUCTION
The field of intelligent control has become important due to developments in computing speed, power, and affordability. Neural-network-based control system design has become an important aspect of intelligent control because it can replace explicit mathematical models: a neural network is a distinctive computational paradigm that learns a linear or nonlinear mapping from a priori data and knowledge. The models are developed on a computer, and the control design produces controllers that can be implemented online. The paper covers both the nonlinear multi-layer feed-forward architecture and the linear single-layer architecture of artificial neural networks for application in control system design. In the nonlinear multi-layer feed-forward case, the two major problems are the long training process and poor generalization. To overcome these problems, a number of data analysis strategies before training and several generalization-improvement techniques are used.

2 ARCHITECTURES IN NEURAL NETWORKS
The neural network architecture is selected depending upon the nature of the problem. Several architectures are commonly used for control system applications, such as the perceptron network, the ADALINE network, and the feed-forward neural network.
(a) ADALINE Architecture:
ADALINE (ADAptive LINear combiner) is a device trained with a powerful learning rule called the Widrow-Hoff rule; it is shown in Figure 1. The rule minimizes the summed squared error during training for pattern classification. Early applications of ADALINE and its extension MADALINE (Many ADALINEs) include pattern recognition, weather forecasting, and adaptive control.

ADALINE algorithm:
1.   Randomly choose the values of the weights in the range -1 to 1.
2.   While the stopping condition is false, do steps 3-7.
3.   For each bipolar training pair s:t, do steps 4-6.
4.   Set the activations of the input units: x0 = 1, xi = si (i = 1, 2, ..., n).
5.   Calculate the net input y = w0 + Σ wi·xi.
6.   Update the bias and weights:
w0(new) = w0(old) + α(t - y)
wi(new) = wi(old) + α(t - y)·xi

7.   If the largest weight change that occurred in step 6 is smaller than a specified tolerance, stop; else continue.

Figure 1: ADALINE Neuron Model (Download Full Paper for Fig. 1)

(b) Feed-forward Neural Network Architecture: This is an important architecture because it provides a non-parametric, nonlinear mapping between input and output. Multilayer feed-forward neural networks employing sigmoidal hidden-unit activations are known to be universal approximators: they can approximate an unknown function and its derivative. A feed-forward neural network includes one or more layers of hidden units between the input and output layers, and the output of each node propagates from the input side to the output side. Nonlinear activation functions in multiple layers of neurons allow a neural network to learn both nonlinear and linear relationships between input and output vectors. Each input has an appropriate weight W; the weighted sum of the inputs and the bias B forms the input to the transfer function. Any differentiable activation function f may be used to generate the outputs. The most commonly used activation functions are
purelin:       f(x) = x
log-sigmoid:   f(x) = (1 + e^(-x))^(-1)
tan-sigmoid:   f(x) = tanh(x/2) = (1 - e^(-x)) / (1 + e^(-x))

The hyperbolic tangent (tan-sigmoid) and logistic (log-sigmoid) functions approximate the signum and step functions, respectively, yet provide smooth, nonzero derivatives with respect to the input signals. These two activation functions are called sigmoid functions because their S-shaped curves exhibit smoothness and asymptotic properties. The activation function fh of the hidden units must be differentiable. If fh is linear, one can always collapse the net to a single layer and thus lose the universal approximation/mapping capability. Each unit of the output layer is assumed to have the same activation function.

3 BACK PROPAGATION LEARNING
Error-correction learning is the most commonly used learning scheme in neural networks. The technique of back propagation applies error-correction learning to neural networks with hidden layers. A value for the learning rate η must also be chosen, restricted such that 0 < η < 1. Back propagation requires a perceptron-style feed-forward network (no intra-layer or recurrent connections): each layer must feed sequentially into the next. In this paper only three layers, A, B, and C, are investigated. Feeding into layer A is the input vector I. Thus layer A has L nodes, ai (i = 1 to L), one node for each input parameter. Layer B, the hidden layer, has m nodes, bj (j = 1 to m). In Figure 2, L = m = 3; in practice L ≠ m in general, since each layer may have a different number of nodes. Layer C, the output layer, has n nodes, ck (k = 1 to n), one node for each output parameter. The interconnecting weight between the ith node of layer A and the jth node of layer B is denoted vij, and that between the jth node of layer B and the kth node of layer C is wjk. Each node has an internal threshold value: for layer A the threshold is TAi, for layer B it is TBj, and for layer C it is TCk. The back-propagation neural network is shown in Figure 2.

Figure 2: Three-layered artificial neural system (Download Full Paper for Fig. 2)

When the network is presented with a group of inputs, the updating of activation values propagates forward from the input neurons, through the hidden layer of neurons, to the output neurons that provide the network response. The outputs can be represented mathematically by:
Yp = f( Σ(m=1 to M) [ f( Σ(n=1 to N) Xn·Wnm ) · Kmp ] )
Yp    =  the pth output of the network
Xn    =  the nth input to the network
Wnm   =  the weight factor applied to the nth input, feeding the mth hidden node
Kmp   =  the weight factor applied to the mth hidden-layer output, feeding the pth output node
f( )  =  transfer function (e.g., sigmoid)
The artificial neural system (ANS) thus becomes a powerful tool that can be used to solve difficult process control applications.

Figure 3 depicts the design procedure of an Artificial Neural Network Controller. (Download Full Paper for Fig. 3)

Read More: http://www.ijser.org/onlineResearchPaperViewer.aspx?Designing_Aspects_of_Artificial_Neural_Network_Controller.pdf