International Journal of Scientific & Engineering Research, Volume 2, Issue 12, December-2011 1

ISSN 2229-5518

Neural Networks for Nonlinear Fractional Programming

S.K. Bisoi, G. Devi, Arabinda Rath

Abstract - This paper presents a neural network for solving nonlinear minimax multiobjective fractional programming problems subject to nonlinear inequality constraints. A neural model is designed for optimization under constraint conditions. The methodology is based on the Lagrange multiplier method with saddle-point optimality.

Key words: Multiobjective optimization, fractional programming, saddle point, Lagrange multiplier, variational inequality, projection.


1. Introduction

Optimization problems arise in a wide variety of scientific and engineering applications, including signal processing, system identification, filter design, function approximation, regression analysis, and so on.

Here, we perform a rigorous analysis of a neural network for solving nonlinear fractional programming problems and present a saddle-point optimality theory for the minimax fractional programming problem. Various numerical procedures have been presented over the decades for solving linear and nonlinear optimization problems. Tank and Hopfield [5] in 1986 first proposed a neural network for linear programming.
Kennedy and Chua [20,21] extended and improved the Tank and Hopfield network by using a penalty method for solving nonlinear programming problems. A variety of attempts have been made to avoid using penalty parameters. Rodriguez-Vazquez et al. [22] proposed a switched-capacitor neural network for solving a class of constrained nonlinear convex optimization problems. Neural network models for optimization problems have been investigated intensively since the pioneering work of Hopfield; see [18,6,2,4]. For the eigenvalue problem, Xia [24] gave a promising neural network model which was proved to be globally convergent with respect to the problem's feasible set. Xia and Wang [8] gave a general design methodology that unifies gradient-based network models for solving convex programming problems with globally convergent stability. Neural networks for quadratic and nonlinear optimization with interval constraints were developed by Bouzerdoum and Pattison [23], Liang and Wang [3], and others [1],[10]-[17]. All these neural networks can be classified into the following three types:

(1) The gradient-based models
(2) The penalty-function-based models
(3) The projection-based models

Nonlinear fractional programming does not belong to the class of convex optimization problems, so how to construct a well-performing neural network model for this optimization problem remains a challenge.

Fractional programming is a nonlinear programming method that has received increasing exposure recently, and its importance in solving concrete problems is steadily growing. Nonlinear optimization models also describe practical problems much better than linear optimization does. Fractional programming problems are particularly useful in the solution of economic problems in which various activities use certain resources in various proportions, while the objective is to optimize a certain indicator, usually the most favorable return-on-allocation ratio, subject to constraints imposed on the availability of goods. Pal and Gupta [9] in 2008 presented a goal programming approach for solving interval-valued multiobjective fractional programming problems using a genetic algorithm. Zhang and Feng [7] developed a neurodynamic analysis for a class of nonlinear fractional programming. Wen and Wu [15] solved continuous-time linear fractional programming problems using the parametric method. Neural circuit design and the analysis of its characteristics is now a typically challenging undertaking. Motivated by these ideas, this paper is organized as follows.

In Section 2, we formulate the multiobjective nonlinear fractional programming problem and its dual. In Section 3, some illustrative examples are presented. In Section 4, a neural model is designed for constrained optimization; the methodology is based on the Lagrange multiplier with a saddle point satisfying the optimality conditions. Conclusions are given in Section 5.

Section 2: We consider the following problem:

(P)   min_{x ∈ X}  max_{1 ≤ i ≤ p}  ( f_i(x) / h_i(x) )

subject to g(x) ≤ 0,  x ∈ X ⊆ R^n,

where f_i, h_i, i = 1, 2, …, p are real-valued functions defined on X, each h_i is strictly positive, and g = (g_1, g_2, …, g_m), where each g_j is a real-valued function defined on X.
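To make the formulation concrete, the link between (P) and parametric subproblems of the form min max_i ( f_i(x) - e h_i(x) ) can be illustrated with a small Dinkelbach-type iteration. The one-dimensional instance below is an illustrative assumption (similar in spirit to the example of Section 3), and a brute-force grid search stands in for a proper subproblem solver:

```python
# Dinkelbach-type parametric iteration for the minimax fractional program (P):
# alternately set e = max_i f_i(x)/h_i(x) and re-minimise max_i (f_i(x) - e*h_i(x)).
# Hypothetical 1-D instance; a brute-force grid stands in for the subproblem solver.

def f(x):          # numerators f_i (real-valued on X)
    return [x**2 + 1.0, 3.0 - x]

def h(x):          # denominators h_i (strictly positive on X)
    return [x**2 + 2.0, x + 2.0]

def feasible(x):   # constraints g_1(x) = x^2 - 1 <= 0 and g_2(x) = -x <= 0
    return x**2 - 1.0 <= 1e-12 and -x <= 1e-12

def ratio(x):      # objective of (P): max_i f_i(x)/h_i(x)
    return max(fi / hi for fi, hi in zip(f(x), h(x)))

# grid over X = [-2, 2]; the feasible part is [0, 1]
grid = [i / 1000.0 for i in range(-2000, 2001)]
grid = [x for x in grid if feasible(x)]

x = grid[0]
for _ in range(50):
    e = ratio(x)   # current parameter e
    # subproblem (Pe): minimise max_i (f_i - e*h_i) over the feasible grid
    x = min(grid, key=lambda t: max(fi - e * hi for fi, hi in zip(f(t), h(t))))
    if abs(max(fi - e * hi for fi, hi in zip(f(x), h(x)))) < 1e-9:
        break      # optimal when the parametric optimal value reaches zero

print(round(x, 6), round(ratio(x), 6))
```

For this instance the iteration settles at x = 1 with objective value 2/3; the stopping rule reflects the standard parametric fact that the optimal value of the subproblem is zero exactly at the optimal ratio.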

IJSER © 2011

http://www.ijser.org


To develop the optimality conditions, consider the following auxiliary problem:

(Pe)   min_{x ∈ X}  max_{1 ≤ i ≤ p}  ( f_i(x) - e h_i(x) )

subject to g(x) ≤ 0,  x ∈ X ⊆ R^n,

for a fixed real number e. Its Lagrangian is

L_e(x, u) = max_{1 ≤ i ≤ p} ( f_i(x) - e h_i(x) ) + Σ_{j=1}^{m} u_j g_j(x)

for a fixed real number e and for all x ∈ X and u ∈ R^m.

The Lagrangian dual of (P) is defined as follows:

(D)   max_{u ≥ 0}  inf_{x ∈ X}  [ max_{1 ≤ i ≤ p} ( f_i(x) - e h_i(x) ) + Σ_{j=1}^{m} u_j g_j(x) ]

for a fixed e and for each x ∈ X and u ∈ R^m.

Definition 1: If there exist x* ∈ X and u* ∈ R^m, u* ≥ 0, such that

L_e(x*, u) ≤ L_e(x*, u*) ≤ L_e(x, u*)

for all x ∈ X and for all u ∈ R^m, u ≥ 0, then (x*, u*) is called a saddle point of problem (Pe).

Definition 2: A pair (x*, u*) with x* ∈ X, u* ∈ R^m is said to satisfy the optimality conditions for the problem (Pe) if and only if the following four conditions are satisfied:

(a) x* minimizes L_e(x, u*);
(b) u*^T g(x*) = 0;
(c) g(x*) ≤ 0;
(d) u* ≥ 0.

Definition 3: A point u* ∈ R^m is said to be an optimal multiplier of problem (Pe) if and only if there exists an x* ∈ X such that (x*, u*) satisfies the optimality conditions of Definition 2.

Main Results:

If (x*, u*) satisfies the optimality conditions for (Pe) with e* = max_{1 ≤ i ≤ p} f_i(x*) / h_i(x*), then (x*, u*) also satisfies the optimality conditions for (P).

Result 1: For any u ≥ 0 and any feasible x ∈ X,

inf_{x ∈ X} L_e(x, u) = inf_{x ∈ X} [ max_{1 ≤ i ≤ p} ( f_i(x) - e h_i(x) ) + Σ_{j=1}^{m} u_j g_j(x) ] ≤ max_{1 ≤ i ≤ p} ( f_i(x) - e h_i(x) ).

Proof: For any u ≥ 0 and any feasible x ∈ X,

inf_{x ∈ X} L_e(x, u) = inf_{x ∈ X} [ max_{1 ≤ i ≤ p} ( f_i(x) - e h_i(x) ) + Σ_{j=1}^{m} u_j g_j(x) ]
≤ max_{1 ≤ i ≤ p} ( f_i(x) - e h_i(x) ) + Σ_{j=1}^{m} u_j g_j(x)
≤ max_{1 ≤ i ≤ p} ( f_i(x) - e h_i(x) ),

since g_j(x) ≤ 0 and u_j ≥ 0 for all j. In particular, for e = max_{1 ≤ i ≤ p} f_i(x) / h_i(x) the right-hand side equals zero, since each h_i(x) > 0 for all i. Hence the conclusion follows.

Result 2: If u* is an optimal multiplier for (Pe), then for every optimal solution x* of (Pe), (x*, u*) satisfies the optimality conditions for (Pe).

Proof: Since u* is an optimal multiplier for (Pe), there exists x0 ∈ X such that

(i) x0 minimizes L_e(x, u*);
(ii) u*^T g(x0) = 0;
(iii) g(x0) ≤ 0;
(iv) u* ≥ 0.

From conditions (ii), (iii) and (iv) it follows that u_j* g_j(x0) = 0 for all j. Now, for fixed e, since x* is an optimal solution of (Pe) and x0 is feasible,

max_{1 ≤ i ≤ p} ( f_i(x*) - e h_i(x*) ) ≤ max_{1 ≤ i ≤ p} ( f_i(x0) - e h_i(x0) )
= max_{1 ≤ i ≤ p} ( f_i(x0) - e h_i(x0) ) + Σ_{j=1}^{m} u_j* g_j(x0)
= L_e(x0, u*) ≤ L_e(x*, u*)
= max_{1 ≤ i ≤ p} ( f_i(x*) - e h_i(x*) ) + Σ_{j=1}^{m} u_j* g_j(x*)
≤ max_{1 ≤ i ≤ p} ( f_i(x*) - e h_i(x*) ).

Since g(x*) ≤ 0 and u* ≥ 0, all relations hold with equality, and it follows that L_e(x*, u*) ≤ L_e(x, u*) for all x ∈ X. Hence (x*, u*) satisfies the optimality conditions for (Pe).

Result 3: A pair (x*, u*) satisfies the optimality conditions (a) to (d) of Definition 2 if and only if it satisfies the following conditions:

(i) x* is an optimal solution of (Pe);
(ii) u* is an optimal solution of (D);
(iii) max_{1 ≤ i ≤ p} ( f_i(x*) - e h_i(x*) ) = inf_{x ∈ X} L_e(x, u*).

Proof: Suppose (x*, u*) satisfies the optimality conditions (a) to (d) of Definition 2. Then for any u ∈ R^m, u ≥ 0, we have u^T g(x*) ≤ 0, so

L_e(x*, u) = max_{1 ≤ i ≤ p} ( f_i(x*) - e h_i(x*) ) + Σ_{j=1}^{m} u_j g_j(x*)
≤ max_{1 ≤ i ≤ p} ( f_i(x*) - e h_i(x*) ), since u^T g(x*) ≤ 0,
= max_{1 ≤ i ≤ p} ( f_i(x*) - e h_i(x*) ) + Σ_{j=1}^{m} u_j* g_j(x*), since u*^T g(x*) = 0,
= L_e(x*, u*).   (1)

From the optimality conditions,
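The saddle-point and optimality conditions above can be checked numerically on a small instance. The single-ratio instance below (p = 1, h(x) = 1, e = 0) is a hypothetical choice for which the KKT point x* = 1, u* = 2 is known in closed form:

```python
# Numerical check of the saddle-point inequalities of Definition 1,
#   L_e(x*, u) <= L_e(x*, u*) <= L_e(x, u*),
# on a hypothetical single-ratio instance (p = 1, h(x) = 1, e = 0):
#   minimise f(x) = x^2 subject to g(x) = 1 - x <= 0,
# whose KKT point is x* = 1 with multiplier u* = 2.

def L(x, u, e=0.0):
    # L_e(x, u) = (f(x) - e*h(x)) + u*g(x), with f(x) = x^2, h(x) = 1, g(x) = 1 - x
    return (x**2 - e * 1.0) + u * (1.0 - x)

x_star, u_star = 1.0, 2.0

# right inequality: x* minimises L_e(., u*)
assert all(L(x_star, u_star) <= L(x_star + d, u_star) + 1e-12
           for d in (-1.0, -0.1, 0.1, 1.0))

# left inequality: L_e(x*, .) is maximised over u >= 0 at u*
assert all(L(x_star, u) <= L(x_star, u_star) + 1e-12
           for u in (0.0, 0.5, 2.0, 10.0))

# conditions (b)-(d) of Definition 2: complementary slackness and feasibility
assert u_star * (1.0 - x_star) == 0.0 and 1.0 - x_star <= 0.0 and u_star >= 0.0
print("saddle point verified")
```

Here the left inequality holds with equality because g(x*) = 0, which is exactly the complementary-slackness mechanism used in the proofs above.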


L_e(x*, u*) ≤ L_e(x, u*) for all x ∈ X.   (2)

Hence from (1) and (2) it follows that

L_e(x*, u) ≤ L_e(x*, u*) ≤ L_e(x, u*),

so (x*, u*) is a saddle point of L_e(x, u), and therefore x* is an optimal solution of (Pe). This proves condition (i).

To establish condition (ii), note that for any u ≥ 0,

inf_{x ∈ X} L_e(x, u) = inf_{x ∈ X} [ max_{1 ≤ i ≤ p} ( f_i(x) - e h_i(x) ) + Σ_{j=1}^{m} u_j g_j(x) ]
≤ max_{1 ≤ i ≤ p} ( f_i(x*) - e h_i(x*) ) + Σ_{j=1}^{m} u_j g_j(x*)
≤ max_{1 ≤ i ≤ p} ( f_i(x*) - e h_i(x*) )
= max_{1 ≤ i ≤ p} ( f_i(x*) - e h_i(x*) ) + Σ_{j=1}^{m} u_j* g_j(x*)
= L_e(x*, u*) = inf_{x ∈ X} L_e(x, u*),

so u* is an optimal solution of (D). This establishes the proof for (ii).

To prove (iii), we consider from equation (2) that

L_e(x*, u*) = inf_{x ∈ X} L_e(x, u*),

that is,

max_{1 ≤ i ≤ p} ( f_i(x*) - e h_i(x*) ) + Σ_{j=1}^{m} u_j* g_j(x*) = inf_{x ∈ X} L_e(x, u*).

Since u_j* g_j(x*) = 0 for all j = 1, 2, …, m,

max_{1 ≤ i ≤ p} ( f_i(x*) - e h_i(x*) ) = inf_{x ∈ X} L_e(x, u*).

To prove the converse, suppose (x*, u*) satisfies conditions (i) to (iii). Then

inf_{x ∈ X} [ max_{1 ≤ i ≤ p} ( f_i(x) - e h_i(x) ) + Σ_{j=1}^{m} u_j* g_j(x) ]
≤ max_{1 ≤ i ≤ p} ( f_i(x*) - e h_i(x*) ) + Σ_{j=1}^{m} u_j* g_j(x*)
≤ max_{1 ≤ i ≤ p} ( f_i(x*) - e h_i(x*) )
= inf_{x ∈ X} L_e(x, u*),

so all the relations hold with equality for the fixed real number e. Hence x* minimizes L_e(x, u*) and u*^T g(x*) = 0. The optimality conditions (c) and (d) of Definition 2 follow from the feasibility of x* and u*.

Result 4: The following statements are equivalent:

(a) (x*, u*) is a saddle point of L_e(x, u);
(b) the conditions of Result 2 hold;
(c) conditions (a) to (d) of Definition 2 hold.

We observe that (a) ⇒ (b) ⇒ (c) ⇒ (a).

Result 5: Suppose that (P) has an optimal solution x* and that (Pe*) is stable, where e* = max_{1 ≤ i ≤ p} f_i(x*) / h_i(x*). Then (D) has an optimal solution and the optimal values of (P) and (D) are equal.

Section 3:

Definition 4: A feasible solution x* of (Pe) is said to be an efficient solution of (Pe) if there does not exist any feasible solution x of (Pe) such that

f_i(x) - e h_i(x) ≤ f_i(x*) - e h_i(x*)  for all i = 1, 2, …, p

and

f_j(x) - e h_j(x) < f_j(x*) - e h_j(x*)  for some j,

for fixed e. If x* is an efficient solution of (Pe), then x* is an efficient solution of (P).

Definition 5: A feasible solution x* of (Pe) is said to be a properly efficient solution of (Pe) if it is an efficient solution of (Pe) and there exists a scalar M > 0 such that, for each i and each feasible x with

f_i(x) - e h_i(x) < f_i(x*) - e h_i(x*),

we have

f_i(x*) - e h_i(x*) - ( f_i(x) - e h_i(x) ) ≤ M [ f_j(x) - e h_j(x) - ( f_j(x*) - e h_j(x*) ) ]

for some j such that

f_j(x) - e h_j(x) > f_j(x*) - e h_j(x*).

If x* is a properly efficient solution of (Pe), then x* is a properly efficient solution of (P).

Example: Consider

(Fp)   min ( f_1(x) - e h_1(x), f_2(x) - e h_2(x) )

subject to g_j(x) ≤ 0, j = 1, 2, and x ∈ X = [-2, 2], with

f_1(x) = x^2 + 1,   f_2(x) = 3 - x,
h_1(x) = x^2 + 2,   h_2(x) = x + 2,
g_1(x) = x^2 - 1,   g_2(x) = -x.

The feasible region is [0, 1]. Let us take x* = 1 ∈ [0, 1] and e = 1. Then

f_1(x) - e h_1(x) - ( f_1(x*) - e h_1(x*) ) = (x^2 + 1 - x^2 - 2) - (2 - 3) = -1 + 1 = 0.

Now we can prove that x* = 1 is a properly efficient solution of (Fp).
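The efficiency claim in this example can be verified by brute force. The grid check below is an illustration (not part of the paper), reading the example data as f_1(x) = x^2 + 1, f_2(x) = 3 - x, h_1(x) = x^2 + 2, h_2(x) = x + 2 on the feasible region [0, 1]:

```python
# Grid check that x* = 1 is an efficient solution of (Pe) with e = 1
# for the example of Section 3 (data as read here; feasible region [0, 1]).

def phi(x, e=1.0):
    # component objectives of (Pe): f_i(x) - e*h_i(x)
    return [(x**2 + 1.0) - e * (x**2 + 2.0), (3.0 - x) - e * (x + 2.0)]

x_star = 1.0
p_star = phi(x_star)            # value vector at the candidate point

grid = [i / 1000.0 for i in range(0, 1001)]   # feasible region [0, 1]

def dominates(a, b):
    # a dominates b: a_i <= b_i for all i, with strict inequality for some i
    return all(ai <= bi + 1e-12 for ai, bi in zip(a, b)) and \
           any(ai < bi - 1e-12 for ai, bi in zip(a, b))

# efficiency: no feasible point dominates x*
assert not any(dominates(phi(x), p_star) for x in grid)
print("x* = 1 is efficient for (Pe) with e = 1")
```

With these data the first component is identically -1 on the feasible region, so any dominating point would need 1 - 2x < -1, which no feasible x satisfies; the grid search confirms this.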


Section 4: Neural Model

The neurons in the network can be classified into two classes with regard to their role in searching for the solution: variable neurons x and Lagrangian neurons λ.

In the dynamic process of the neural network, the Lagrangian neurons lead the trajectory into the feasible region, while the variable neurons decrease the Lagrangian function L(x, λ). The decrease of L along x can be verified from the time derivative of L along the trajectory of the network.
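In discrete time, the dynamics just described amount to gradient descent on L for the variable neurons and projected gradient ascent for the Lagrangian neurons. The sketch below is a minimal illustration, not the paper's exact circuit; the toy problem, step size, and iteration count are assumptions:

```python
# Euler-discretised Lagrangian-network dynamics (a sketch, not the paper's circuit):
#   variable neuron:    dx/dt   = -dL/dx
#   Lagrangian neuron:  dlam/dt = g(x), projected so that lam stays >= 0
# Toy convex problem (an assumption): minimise x^2 subject to g(x) = 1 - x <= 0,
# whose KKT point is x* = 1, lam* = 2.

def grad_x(x, lam):                 # dL/dx for L(x, lam) = x^2 + lam*(1 - x)
    return 2.0 * x - lam

def g(x):                           # constraint function
    return 1.0 - x

x, lam, dt = 0.0, 0.0, 0.01
for _ in range(20000):
    x = x - dt * grad_x(x, lam)     # descent step for the variable neuron
    lam = max(0.0, lam + dt * g(x)) # projected ascent step for the multiplier

print(round(x, 3), round(lam, 3))   # the trajectory approaches (1.0, 2.0)
```

The projection max(0, ·) on the multiplier plays the role of keeping the Lagrangian neurons dual-feasible, mirroring condition (d) of Definition 2.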

Along the trajectory of the network, with λ held constant,

dL(x, λ)/dt = Σ_{i=1}^{n} ( ∂L(x, λ)/∂x_i ) ( dx_i/dt ) = - Σ_{i=1}^{n} ( dx_i/dt )^2 ≤ 0,

since the variable neurons evolve according to dx_i/dt = -∂L(x, λ)/∂x_i.

Let f, h, g be pseudoconvex functions. Hence f/h is pseudoconvex, i.e., f_i(x) - e h_i(x) is pseudoconvex. Since g is pseudoconvex and y ≥ 0, it follows that f_i(x) - e h_i(x) + y^T g(x) is pseudoconvex. Also,

(x - x*)^T [ ∇f_i(x*) - e ∇h_i(x*) + ∇g(x*)^T y ] ≥ 0,

which is a variational inequality problem over Ω. Hence the new projection neural network model is

P_Ω [ x* - ( ∇f_i(x*) - e ∇h_i(x*) + ∇g(x*)^T y ) ] = x*,

where X = { x ∈ R^n | d ≤ x ≤ h }, P_Ω : R^n → Ω is a projection operator, and Ω = { x ∈ R^n | g(x) ≤ 0 } is closed and convex.

Section 5:

Conclusion: This paper proposes a new projection neural network model which is theoretically guaranteed to solve variational inequality problems. The multiobjective minimax nonlinear fractional programming problem is defined and its optimality conditions are derived by using Lagrangian duality. The equilibrium points of the proposed neural network model are found to correspond to the Karush-Kuhn-Tucker points associated with the nonlinear fractional programming problem.

References

[1.] M.S. Bazaraa, H.D. Sherali, C.M. Shetty, "Nonlinear Programming: Theory and Algorithms", 2nd ed., New York: John Wiley, 1993.
[2.] S. Zhang and A.G. Constantinides, "Lagrange Programming Neural Networks", IEEE Trans. on Circuits and Systems II: Analog and Digital Signal Processing, Vol. 39, No. 7, pp. 441-452, July 1992.
[3.] X.B. Liang and J. Wang, "A recurrent neural network for nonlinear optimization with a continuously differentiable objective function and bound constraints", IEEE Trans. Neural Networks, Vol. 11, No. 6, pp. 1251-1262, Nov. 2000.
[4.] Y. Huang, "Lagrange-type neural networks for nonlinear programming problems with inequality constraints", IEEE Conference on Decision and Control, pp. 12-15, Dec. 2005.
[5.] D.W. Tank and J.J. Hopfield, "Simple neural optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit", IEEE Trans. Circuits Syst., Vol. CAS-33, pp. 533-541, May 1986.
[6.] A. Jeflea, "A parametric study for solving nonlinear fractional problems", An. St. Univ. Ovidius Constanta, Vol. 11(2), pp. 87-92, 2003.
[7.] Q. Zhang, J. Feng, et al., "Neurodynamic analysis for a class of nonlinear fractional programming", IEEE International Conference on Computational Intelligence and Security, 2008.
[8.] Y. Xia and J. Wang, "A general methodology for designing globally convergent optimization neural networks", IEEE Transactions on Neural Networks, Vol. 9, No. 6, pp. 1331-1343, Nov. 1998.
[9.] B.B. Pal and S. Gupta, "A goal programming approach for solving interval valued multiobjective fractional programming problems using genetic algorithm", IEEE International Conference on Industrial and Information Systems, pp. 8-10, Dec. 2008.
[10.] Y. Li, Y.Q. Chen and H.S. Ahn, "Fractional order iterative learning control", Proceedings of ICROS-SICE, 2009.
[11.] D. Kinderlehrer and G. Stampacchia, "An Introduction to Variational Inequalities and Their Applications", Academic Press, New York, 1980.
[12.] R. Fourer, D.M. Gay, et al., "A Modeling Language for Mathematical Programming", Scientific Press, 1993-94.
[13.] H.Y. Benson, et al., "Interior-point methods for nonconvex nonlinear programming: jamming and comparative numerical testing", Operations Research and Financial Engineering, Princeton University, Aug. 28, 2000.
[14.] A.M. Geoffrion, "Proper efficiency and the theory of vector maximization", J. Math. Analysis and Applications, Vol. 22, No. 3, pp. 618-630, 1968.
[15.] C.F. Wen and H.C. Wu, "A parametric method for solving continuous-time linear fractional programming problems", IEEE International Conference on Computational Science and Optimization, 2010.
[16.] S.G. Nersesov and W.M. Haddad, "On the stability and control of nonlinear dynamical systems via vector Lyapunov functions", IEEE Transactions on Automatic Control, Vol. 51, No. 2, Feb. 2006.
[17.] O.L. Mangasarian, "Nonlinear Programming", New York: McGraw-Hill, 1969.
[18.] J.J. Hopfield, "Neurons with graded response have collective computational properties like those of two-state neurons", Proc. Natl. Acad. Sci., Vol. 81, pp. 3088-3092, 1984.
[19.] T.E. Stern, "Theory of Nonlinear Networks and Systems", New York: Addison-Wesley, 1965.
[20.] M.P. Kennedy and L.O. Chua, "Unifying the Tank and Hopfield linear programming circuit and the canonical nonlinear programming circuit of Chua and Lin", IEEE Trans. Circuits Syst., Vol. CAS-34, No. 2, Feb. 1987.
[21.] M.P. Kennedy and L.O. Chua, "Neural networks for nonlinear programming", IEEE Trans. Circuits Syst., Vol. 35, No. 5, May 1988.
[22.] A. Rodriguez-Vazquez, R. Dominguez-Castro, et al., "Nonlinear switched-capacitor 'neural' networks for optimization problems", IEEE Trans. Circuits Syst., Vol. 37, No. 3, pp. 384-398, Mar. 1990.
[23.] A. Bouzerdoum and T.R. Pattison, "Neural network for quadratic optimization with bound constraints", IEEE Trans. Neural Networks, Vol. 4, No. 2, pp. 293-304, 1993.
[24.] Y.S. Xia, "Further results on global convergence and stability of globally projected dynamical systems", J. Optimization Theory and Applications, Vol. 122, No. 3, pp. 627-649, 2004.
