International Journal of Scientific & Engineering Research, Volume 4, Issue 3, March 2013

ISSN 2229-5518

Stochastic Linear Programming

Mrs. J. Evangeline Cicelia*, Dr. K. Ramanathan & Dr. K. Rangarajan*

Abstract— Operations Research has three major classes of methods: mathematical programming techniques, stochastic process techniques and statistical methods. Mathematical programming plays a vital role among them and has many branches, of which stochastic programming is one. Non-linear programming algorithms are classified into unconstrained and constrained algorithms. In general, there is no single algorithm for handling nonlinear models, mainly because of the irregular behaviour of nonlinear functions; perhaps the most general result applicable to such problems is the set of Kuhn-Tucker conditions. Among constrained non-linear algorithms, stochastic programming techniques solve the non-linear problem by dealing with one or more linear problems extracted from the original program. This paper deals with the basic concepts of stochastic linear programming. Two techniques are presented, namely two-stage programming and chance-constrained programming, together with an example solved on a computer in the Pascal language.

Index Terms— Kuhn-Tucker conditions, non-linear programming, mathematical programming, two-stage programming, chance-constrained programming, stochastic process, Pascal language

1 INTRODUCTION


Stochastic or probabilistic programming deals with situations in which some or all of the parameters of the optimization problem are described by stochastic (random, or probabilistic) variables rather than by deterministic quantities. The sources of randomness may be several, depending on the nature and the type of the problem.

Depending on the nature of the equations involved (in terms of the random variables), a stochastic optimization problem is called a stochastic linear, dynamic or non-linear programming problem. The basic idea used in solving any stochastic programming problem is to convert the stochastic problem into an equivalent deterministic problem; the resulting deterministic problem is then solved by the familiar techniques of linear, geometric, dynamic and non-linear programming.

STOCHASTIC LINEAR PROGRAMMING

A stochastic linear programming problem can be stated as follows:

Minimize f(X) = C^T X = \sum_{j=1}^{n} c_j x_j    .... (1.1)

subject to

A_i X = \sum_{j=1}^{n} a_{ij} x_j \le b_i,  i = 1, 2, ...., m    .... (1.2)

and

x_j \ge 0,  j = 1, 2, ...., n    .... (1.3)

where the c_j, a_{ij} and b_i are random variables with known probability distributions and the x_j are assumed to be deterministic. Several methods are available for solving the problem stated in (1.1)-(1.3); here we deal with only two of them, namely the two-stage programming technique and the chance-constrained programming technique. Before doing so, we consider an example of a stochastic programming problem and examine the difficulties involved in its solution.

SEQUENTIAL STOCHASTIC PROGRAMMING PROBLEM

Consider an inventory problem in which plans are being made to control the inventory of a single item over a time period of n years. It is assumed that the orders to replenish the inventory are placed at the beginning of each year, so that the quantity ordered comes into inventory before the end of that year. The demand in every year is treated as a continuous random variable, and the demands in different years are assumed to be independent. If x_j denotes the number of units ordered, then the cost of procuring x_j units at the beginning of the jth year will be

A_j \delta_j + c_j x_j,

where

\delta_j = \begin{cases} 0 & \text{for } x_j = 0 \\ 1 & \text{for } x_j > 0 \end{cases}

and A_j is a fixed charge.

* Department of Mathematics, Bharath Institute of Science & Technology (BIST), Bharath University, Selaiyur, Chennai – 600073, Tamil Nadu, India. E-mail: evangelinecicelia@gmail.com

The other costs associated with the inventory problem are the carrying cost and the stock-out cost. We assume that k_j h_j and \pi_j s_j are the carrying cost and the stock-out cost, respectively, for the jth year, where h_j > 0 denotes the inventory in hand at the end of the jth year and s_j > 0 denotes the number of back orders at the end of the jth year. Let us define

y_1 = inventory in hand at the beginning of the first year,
v_i = demand in the ith year,
x_i = quantity ordered in the ith year.

Using these notations, the inventory in hand at the end of the jth year is given by

h_j = y_1 + \sum_{i=1}^{j} x_i - \sum_{i=1}^{j} v_i,  if h_j > 0,

and the number of back orders at the end of the jth year is given by

s_j = \sum_{i=1}^{j} v_i - y_1 - \sum_{i=1}^{j} x_i,  if s_j > 0.

It is to be noted that either the carrying cost or the stock-out cost disappears in the presence of the other. So let us define a new function,

F_j\left( y_1 + \sum_{i=1}^{j} x_i - \sum_{i=1}^{j} v_i \right) = \begin{cases} k_j h_j & \text{for } h_j \ge 0 \\ \pi_j s_j & \text{for } s_j \ge 0 \end{cases}

Then the cost of operating the inventory system over the n-year time period will be

\sum_{j=1}^{n} \alpha_j \left( A_j \delta_j + c_j x_j \right) + \sum_{j=1}^{n} \alpha_j F_j\left( y_1 + \sum_{i=1}^{j} x_i - \sum_{i=1}^{j} v_i \right)

where \alpha_j is the discounting factor; if \alpha_j = 1, there is no discounting. The probability density function of obtaining a specific set of demands v_1, v_2, ...., v_n is

\phi(v) = \prod_{j=1}^{n} \phi(v_j) = \phi(v_1)\, \phi(v_2) .... \phi(v_n).

Thus, for any given set of x_j \ge 0, the expected cost over the n-year time period is given by

Z = \int \cdots \int \left[ \sum_{j=1}^{n} \alpha_j \left( A_j \delta_j + c_j x_j \right) + \sum_{j=1}^{n} \alpha_j F_j\left( y_1 + \sum_{i=1}^{j} x_i - \sum_{i=1}^{j} v_i \right) \right] \phi(v_1) .... \phi(v_n)\, dv_1 .... dv_n.

When the decision is made on how much to order for period k, the optimal value of x_k will depend upon y_k. Using the technique of dynamic programming we can find the functions x_k(y_k), where the value of y_k is given by

y_k = y_1 + \sum_{i=1}^{k-1} x_i - \sum_{i=1}^{k-1} v_i.
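The expected cost Z above is an n-fold integral over the joint demand density and is rarely available in closed form. As a purely illustrative aid (it is not part of the paper, whose computations were carried out in Pascal), the following minimal Python sketch estimates Z by Monte Carlo simulation for one fixed order plan; the horizon, cost figures, initial stock and the uniform demand distribution are assumed values chosen only for the illustration.

```python
# Minimal sketch (not from the paper): Monte Carlo estimate of the expected
# n-year cost Z for a fixed order plan x_1,...,x_n under assumed data.
import numpy as np

rng = np.random.default_rng(0)

n = 3                                   # planning horizon in years (assumed)
A = np.array([50.0, 50.0, 50.0])        # fixed ordering charges A_j (assumed)
c = np.array([2.0, 2.0, 2.0])           # unit procurement costs c_j (assumed)
k = np.array([0.5, 0.5, 0.5])           # carrying costs k_j per unit on hand (assumed)
pi = np.array([4.0, 4.0, 4.0])          # stock-out costs pi_j per unit back-ordered (assumed)
alpha = np.array([1.0, 0.9, 0.81])      # discount factors alpha_j (assumed)
y1 = 20.0                               # initial inventory (assumed)
x = np.array([80.0, 100.0, 100.0])      # order quantities x_j (the decision variables)

def cost_for_demands(v):
    """Cost of the n-year plan for one realisation v_1,...,v_n of the demands."""
    total = 0.0
    for j in range(n):
        delta = 1.0 if x[j] > 0 else 0.0                 # fixed-charge indicator delta_j
        net = y1 + x[: j + 1].sum() - v[: j + 1].sum()   # inventory position at end of year j
        F = k[j] * net if net >= 0 else pi[j] * (-net)   # F_j: carrying or stock-out cost
        total += alpha[j] * (A[j] * delta + c[j] * x[j]) + alpha[j] * F
    return total

# Demands assumed independent and uniform on [60, 120] purely for illustration.
samples = rng.uniform(60.0, 120.0, size=(20_000, n))
Z_estimate = np.mean([cost_for_demands(v) for v in samples])
print(f"Estimated expected cost Z for this plan: {Z_estimate:.2f}")
```

An outer search over the order quantities, or the dynamic programming recursion over y_k described above, would then be wrapped around such an evaluation.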

TWO-STAGE PROGRAMMING TECHNIQUE

INTRODUCTION

The two-stage programming technique converts a stochastic linear programming problem into an equivalent deterministic problem. Assume that the elements b_i are probabilistic. This means that the value of b_i is not known, but its probability distribution function, with a finite mean \bar{b}_i, is known to us. In this case it is impossible to find a vector X in such a way that A_i^T X will be greater than or equal to b_i (i = 1, 2, ...., m) for whatever value b_i takes. The difference between A_i^T X and b_i is a random variable whose probability distribution depends on the value of X chosen.

One can now think of associating a penalty with the violation of the constraints. In this case, we can think of minimizing the sum of C^T X and the expected value of the penalty. There are several choices for the penalty; one choice is to assume a constant penalty cost of p_i for violating the ith constraint by one unit. The total penalty is then given by the expected (mean) value of the sum of the individual penalties,

\sum_{i=1}^{m} E(p_i y_i),

where E denotes the expectation and y_i is defined as

y_i = b_i - A_i^T X,  y_i \ge 0,  i = 1, 2, ...., m    ... (2.1)

After adding the mean total penalty cost to the original objective function, the new optimization problem becomes

Minimize C^T X + E(P^T Y)    ... (2.2)

subject to

A X + B Y = b    ... (2.3)

and

X \ge 0,  Y \ge 0    ... (2.4)

where

P = (p_1, p_2, ...., p_m)^T,  Y = (y_1, y_2, ...., y_m)^T

and B = I, the identity matrix of order m.

The penalty term in Eqn. (2.2) will be a deterministic quantity in terms of the expected values \bar{y}_i of the y_i. For example, if b_i follows a uniform (rectangular) distribution over the range (\bar{b}_i - m_i, \bar{b}_i + m_i) and \bar{y}_i denotes \bar{b}_i - A_i^T X, then the mean penalty cost can be shown to be equal to

E(p_i y_i) = p_{i1} + p_{i2} + p_{i3}    ... (2.5)

where

p_{i1} = 0,  if \bar{y}_i \le -m_i    ... (2.6)

p_{i2} = \int_{s=0}^{m_i + \bar{y}_i} \frac{p_i s}{2 m_i}\, ds,  if -m_i < \bar{y}_i < m_i    ... (2.7)


p_{i3} = \int_{s=-(m_i - \bar{y}_i)}^{m_i + \bar{y}_i} \frac{p_i s}{2 m_i}\, ds,  if \bar{y}_i \ge m_i.

From (2.7),

p_{i2} = \left[ \frac{p_i s^2}{4 m_i} \right]_{s=0}^{s = m_i + \bar{y}_i}    ... (2.8)

= \frac{p_i}{4 m_i} (m_i + \bar{y}_i)^2    ... (2.9)

Similarly,

p_{i3} = \frac{p_i}{4 m_i} \left[ (m_i + \bar{y}_i)^2 - (m_i - \bar{y}_i)^2 \right]
= \frac{p_i}{4 m_i} \left[ (m_i + \bar{y}_i) + (m_i - \bar{y}_i) \right] \left[ (m_i + \bar{y}_i) - (m_i - \bar{y}_i) \right]
= \frac{p_i}{4 m_i} (2 m_i)(2 \bar{y}_i)
= p_i \bar{y}_i    ... (2.10)

Using (2.6), (2.9) and (2.10) in (2.5), we have

E(p_i y_i) = \begin{cases} 0 & \text{for } \bar{y}_i \le -m_i \\ \dfrac{p_i}{4 m_i}(m_i + \bar{y}_i)^2 & \text{for } -m_i \le \bar{y}_i \le m_i \\ p_i \bar{y}_i & \text{for } \bar{y}_i \ge m_i \end{cases}    ... (2.11)

which, in the middle range, is a quadratic function of the deterministic variable \bar{y}_i.

To convert the problem stated in Eqns. (2.1) to (2.4) into a fully deterministic one, the probabilistic constraints of Eqn. (2.3) have to be written either in a deterministic form like \bar{y}_i = \bar{b}_i - A_i^T X, or interpreted as a two-stage problem as follows.

FIRST STAGE

First estimate the vector b, and find the vector X by solving the problem stated in Eqns. (1.1) to (1.3).

SECOND STAGE

Then observe the value of b, and hence its discrepancy from the previous estimate, and find the vector Y = Y(b, X) by solving the second-stage problem:

Find Y which minimizes P^T Y
subject to
y_i = b_i - A_i^T X,  i = 1, 2, ...., m    ... (2.12)
and y_i \ge 0,  i = 1, 2, ...., m,

where the b_i and X are now known.

Thus the two-stage formulation can be interpreted to mean that a non-negative vector X must be found before the actual values of the b_i (i = 1, 2, ...., m) are known, and that when they are known, a recourse Y must be found by solving the second-stage problem of Eqns. (2.12). Hence a general two-stage problem can be stated as follows:

Minimize C^T X + E\left[ \min P^T Y \right]
subject to
A_{m \times n_1} X_{n_1 \times 1} + B_{m \times n_2} Y_{n_2 \times 1} \ge b_{m \times 1}    ... (2.13)

where b is a random m-dimensional vector with known probability distribution F(b) and probability density function f(b), dF(b) = f(b)\, db.

The following assumptions are generally made in solving this problem:
(i) the penalty cost vector P is a known deterministic vector, and
(ii) there exists a nonempty convex set S consisting of non-negative solution vectors X such that for each b there exists a solution vector Y(b) for which the pair [X, Y(b)] is feasible.

The second assumption is called the assumption of permanent feasibility. By defining

D_{m \times (n_1 + n_2)} = [A, B]    ... (2.14)

Q_{(n_1 + n_2) \times 1} = \begin{pmatrix} C \\ P \end{pmatrix}    ... (2.15)

Z(b)_{(n_1 + n_2) \times 1} = \begin{pmatrix} X \\ Y(b) \end{pmatrix}    ... (2.16)

the two-stage problem stated in Eqn. (2.13) can be restated as follows:

Minimize E\left[ Q^T Z(b) \right] = \int Q^T Z(b)\, f(b)\, db  (the expected cost)
subject to
D Z(b) \ge b and Z(b) \ge 0 for all b.
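In computational practice the expectation over b in this restated problem is often handled by replacing the distribution f(b) with a finite set of scenarios, which turns the two-stage problem into one large deterministic LP in the single first-stage vector X and one recourse vector Y(b_s) per scenario. The sketch below is an illustration of that scenario form, not the paper's formulation or code; the matrices A and B, the cost vectors, the scenario values and their probabilities are all assumed data.

```python
# Minimal sketch (illustrative, not from the paper): scenario-based deterministic
# equivalent of the two-stage problem (2.13). All numbers below are assumptions.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0])                 # first-stage costs C (assumed)
p = np.array([5.0, 5.0])                 # penalty (recourse) costs P (assumed)
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])               # first-stage constraint matrix (assumed)
B = np.eye(2)                            # recourse matrix (B = I, as in the paper)
scenarios = np.array([[70.0, 40.0],
                      [75.0, 45.0],
                      [80.0, 50.0]])     # sampled realisations of b (assumed)
probs = np.array([0.3, 0.4, 0.3])        # scenario probabilities (assumed)

n1, n2, S = len(c), len(p), len(scenarios)
# Decision vector: [X, Y(b_1), ..., Y(b_S)]
cost = np.concatenate([c] + [probs[s] * p for s in range(S)])

# A X + B Y(b_s) >= b_s for every scenario, rewritten as <= for linprog.
A_ub, b_ub = [], []
for s in range(S):
    row = np.zeros((A.shape[0], n1 + S * n2))
    row[:, :n1] = -A
    row[:, n1 + s * n2 : n1 + (s + 1) * n2] = -B
    A_ub.append(row)
    b_ub.append(-scenarios[s])
A_ub, b_ub = np.vstack(A_ub), np.concatenate(b_ub)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (n1 + S * n2), method="highs")
print("first-stage X:", res.x[:n1], " expected cost:", res.fun)
```

The single first-stage vector X is shared by every scenario while each scenario receives its own recourse Y(b_s), which mirrors the permanent-feasibility assumption stated above.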
Example: Find the optimal values of the factory production (x_1), the excess supply (x_2) and the amount purchased from outside (x_3) of a commodity for which the market demand r is a uniformly distributed random variable with density function

f(r) = \frac{1}{80 - 70} = \frac{1}{10},  70 \le r \le 80.

Each unit produced in the factory costs Rs. 1, whereas each unit purchased from outside costs Rs. 2. The constraints are:


(i) the total supply of the commodity (x_1 + x_3) should not be less than the demand r, and
(ii) due to storage space and other restrictions, the amount of production in the factory (x_1) plus the amount stored (x_4) should be equal to 100 units.

Solution: This problem can be stated as follows:

Minimize f = x_1 + 2 x_3 = cost of production + cost of outside purchase
subject to
x_1 + x_4 = 100
and x_1 + x_3 - x_2 = r,  where x_i \ge 0, i = 1, 2, 3, 4.

Since the second constraint is probabilistic, we assume the permanent feasibility condition for it. This means that it is possible to choose x_3 (outside purchase) and x_2 (excess supply) whatever the feasible values of x_1 (factory production) and x_4 (amount stored) may be.

It can be seen that if x_1 > r for any particular value of r, then x_3 = 0 gives the minimum value of f. However, if x_1 < r, then x_3 = r - x_1 gives the minimum value of f, since x_1 is cheaper than x_3. Since the market demand is probabilistic, we have to consider the following three cases.

Case (i): when x_1 \ge 80, i.e. when x_1 \ge r,

E(\min f) = E(x_1) = x_1.

Case (ii): when x_1 \le 70, i.e. when x_1 \le r,

E(\min f) = E[x_1 + 2(r - x_1)] = E[2r - x_1]
= \int_{70}^{80} (2r - x_1) f(r)\, dr
= \frac{1}{10} \left[ r^2 - x_1 r \right]_{70}^{80}
= \frac{1}{10} \left[ (6400 - 80 x_1) - (4900 - 70 x_1) \right]
= \frac{1500 - 10 x_1}{10} = 150 - x_1.

Case (iii): when 70 < x_1 < 80, the demand may be less than, equal to or greater than x_1:

E(\min f) = \int_{70}^{x_1} x_1 f(r)\, dr + \int_{x_1}^{80} \left[ x_1 + 2(r - x_1) \right] f(r)\, dr
= \frac{x_1}{10}(x_1 - 70) + \frac{1}{10} \left[ r^2 - x_1 r \right]_{x_1}^{80}
= \frac{1}{10}(x_1^2 - 70 x_1) + \frac{1}{10}(6400 - 80 x_1 - x_1^2 + x_1^2)
= \frac{1}{10}(x_1^2 - 150 x_1 + 6400)
= \frac{1}{10}(x_1 - 75)^2 + 77.5.

Hence the total expected cost is a quadratic function of x_1, and its minimum is found from

\frac{dE(\min f)}{dx_1} = \frac{1}{10}(2 x_1 - 150) = \frac{x_1}{5} - 15,

\frac{dE(\min f)}{dx_1} = 0 \Rightarrow \frac{x_1}{5} - 15 = 0 \Rightarrow x_1 = 75.

Since this value also satisfies the first constraint, we obtain the optimum solution as

x_1 = E(r) = 75,  x_2 = 0,  x_3 = r - 75 with E(x_3) = 0,  and x_4 = 25.
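The closed-form expected cost and its minimiser are easy to confirm numerically. The following small sketch (an illustrative check in Python, not the Pascal program referred to in the abstract) simulates the uniform demand and compares the simulated expected cost with the quadratic expression derived above for 70 \le x_1 \le 80.

```python
# Minimal sketch (illustrative check, not from the paper): verify the expected
# cost expression and its minimiser for the factory example, r ~ Uniform(70, 80).
import numpy as np

rng = np.random.default_rng(1)
r = rng.uniform(70.0, 80.0, size=200_000)

def expected_cost(x1):
    # Second stage (recourse): buy outside only when demand exceeds production.
    x3 = np.maximum(r - x1, 0.0)
    return np.mean(x1 + 2.0 * x3)

for x1 in (70.0, 75.0, 80.0):
    closed_form = (x1 ** 2 - 150.0 * x1 + 6400.0) / 10.0   # valid for 70 <= x1 <= 80
    print(f"x1 = {x1:5.1f}:  simulated {expected_cost(x1):7.3f},  closed form {closed_form:7.3f}")
# Both agree that the minimum (about 77.5) is attained at x1 = 75.
```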

CHANCE-CONSTRAINED PROGRAMMING TECHNIQUE

An important class of stochastic programming problems is that of chance-constrained problems. These problems were initially studied by A. Charnes and W. W. Cooper. In a stochastic programming problem some constraints may be deterministic and the remaining ones may involve random elements. In a chance-constrained programming problem, the latter set of constraints is not required to hold always; instead, these constraints must hold, simultaneously or individually, with given probabilities. In other words, we are given a set of probability measures indicating the extent to which the random constraints may be violated. The general chance-constrained linear program is of the form:

Minimize f(X) = \sum_{j=1}^{n} c_j x_j    ... (3.1)

subject to

P\left[ \sum_{j=1}^{n} a_{ij} x_j \le b_i \right] \ge p_i,  i = 1, 2, ...., m    ... (3.2)

and

x_j \ge 0,  j = 1, 2, ...., n    ... (3.3)

where the c_j, a_{ij} and b_i are random variables and the p_i are specified probabilities. Eqn. (3.2) indicates that the ith constraint

\sum_{j=1}^{n} a_{ij} x_j \le b_i

has to be satisfied with a probability of at least p_i, where 0 \le p_i \le 1. The decision variables x_j are assumed to be deterministic. First we consider the special cases where only the a_{ij} or only the b_i are random variables; the case in which the c_j, a_{ij} and b_i are all random variables can be treated along similar lines. Further, we shall assume that all the random variables are normally distributed with known means and standard deviations.

ONLY aij's ARE RANDOM VARIABLES

Let \bar{a}_{ij} and Var(a_{ij}) = \sigma_{ij}^2 be the mean and the variance of the normally distributed random variable a_{ij}. Also assume that the multivariate distribution of the a_{ij}, i = 1, 2, ...., m, j = 1, 2, ...., n, is known, along with the covariances Cov(a_{ij}, a_{kl}) between the random variables a_{ij} and a_{kl}. Define the quantities d_i as

d_i = \sum_{j=1}^{n} a_{ij} x_j,  i = 1, 2, ...., m    ... (3.4)

Since a_{i1}, a_{i2}, ...., a_{in} are normally distributed and x_1, x_2, ...., x_n are constants, d_i will also be normally distributed with mean

\bar{d}_i = \sum_{j=1}^{n} \bar{a}_{ij} x_j,  i = 1, 2, ...., m    ... (3.5)

and variance

Var(d_i) = \sigma_{d_i}^2 = X^T V_i X    ... (3.6)

where V_i is the ith covariance matrix defined as

V_i = \begin{pmatrix}
Var(a_{i1}) & Cov(a_{i1}, a_{i2}) & \cdots & Cov(a_{i1}, a_{in}) \\
Cov(a_{i2}, a_{i1}) & Var(a_{i2}) & \cdots & Cov(a_{i2}, a_{in}) \\
\vdots & \vdots & & \vdots \\
Cov(a_{in}, a_{i1}) & Cov(a_{in}, a_{i2}) & \cdots & Var(a_{in})
\end{pmatrix}    ... (3.7)

The constraints of Eqn. (3.2) can be expressed as

P[ d_i \le b_i ] \ge p_i,

i.e.

P\left[ \frac{d_i - \bar{d}_i}{\sqrt{Var(d_i)}} \le \frac{b_i - \bar{d}_i}{\sqrt{Var(d_i)}} \right] \ge p_i,  i = 1, 2, ...., m    ... (3.8)

where (d_i - \bar{d}_i)/\sqrt{Var(d_i)} is a standard normal variate with a mean of zero and a variance of one. Thus the probability of d_i being smaller than or equal to b_i can be written as

P[ d_i \le b_i ] = \Phi\left( \frac{b_i - \bar{d}_i}{\sqrt{Var(d_i)}} \right)    ... (3.9)

where \Phi(x) represents the cumulative distribution function of the standard normal distribution evaluated at x. If e_i denotes the value of the standard normal variate at which

\Phi(e_i) = p_i    ... (3.10)

then the constraints in Eqn. (3.8) can be stated as

\Phi\left( \frac{b_i - \bar{d}_i}{\sqrt{Var(d_i)}} \right) \ge \Phi(e_i),  i = 1, 2, ...., m    ... (3.11)

These inequalities will be satisfied only if

\frac{b_i - \bar{d}_i}{\sqrt{Var(d_i)}} \ge e_i,  i = 1, 2, ...., m,

or

\bar{d}_i + e_i \sqrt{Var(d_i)} - b_i \le 0,  i = 1, 2, ...., m    ... (3.12)

Substituting Eqns. (3.5) and (3.6) into (3.12) gives


\sum_{j=1}^{n} \bar{a}_{ij} x_j + e_i \sqrt{X^T V_i X} - b_i \le 0,  i = 1, 2, ...., m    ... (3.13)

These are deterministic non-linear constraints equivalent to the original stochastic linear constraints. Thus the solution of the stochastic programming problem stated in Eqns. (3.1) to (3.3) can be obtained by solving the equivalent deterministic programming problem:

Minimize f(X) = \sum_{j=1}^{n} c_j x_j
subject to
\sum_{j=1}^{n} \bar{a}_{ij} x_j + e_i \sqrt{X^T V_i X} - b_i \le 0,  i = 1, 2, ...., m    ... (3.14)
and x_j \ge 0,  j = 1, 2, ...., n.

If the normally distributed random variables a_{ij} are independent, the covariance terms are zero and Eqn. (3.7) reduces to the diagonal matrix

V_i = \begin{pmatrix}
Var(a_{i1}) & 0 & \cdots & 0 \\
0 & Var(a_{i2}) & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & Var(a_{in})
\end{pmatrix}    ... (3.15)

In this case the constraints of Eqn. (3.13) reduce to

\sum_{j=1}^{n} \bar{a}_{ij} x_j + e_i \sqrt{ \sum_{j=1}^{n} Var(a_{ij})\, x_j^2 } - b_i \le 0,  i = 1, 2, ...., m    ... (3.16)

ONLY bi's ARE RANDOM VARIABLES

Let \bar{b}_i and Var(b_i) denote the mean and the variance of the normally distributed random variable b_i. The constraints of Eqn. (3.2) can be restated as

P\left[ \sum_{j=1}^{n} a_{ij} x_j \le b_i \right]
= P\left[ \frac{b_i - \bar{b}_i}{\sqrt{Var(b_i)}} \ge \frac{ \sum_{j=1}^{n} a_{ij} x_j - \bar{b}_i }{\sqrt{Var(b_i)}} \right] \ge p_i,  i = 1, 2, ...., m    ... (3.17)

where (b_i - \bar{b}_i)/\sqrt{Var(b_i)} is a standard normal variable with zero mean and unit variance. The inequalities (3.17) can also be stated as

1 - P\left[ \frac{b_i - \bar{b}_i}{\sqrt{Var(b_i)}} \le \frac{ \sum_{j=1}^{n} a_{ij} x_j - \bar{b}_i }{\sqrt{Var(b_i)}} \right] \ge p_i,

or

P\left[ \frac{b_i - \bar{b}_i}{\sqrt{Var(b_i)}} \le \frac{ \sum_{j=1}^{n} a_{ij} x_j - \bar{b}_i }{\sqrt{Var(b_i)}} \right] \le 1 - p_i,  i = 1, 2, ...., m    ... (3.18)

If E_i represents the value of the standard normal variate at which \Phi(E_i) = 1 - p_i, the constraints in Eqn. (3.18) can be expressed as

\Phi\left( \frac{ \sum_{j=1}^{n} a_{ij} x_j - \bar{b}_i }{\sqrt{Var(b_i)}} \right) \le \Phi(E_i),  i = 1, 2, ...., m    ... (3.19)

These inequalities will be satisfied only if

\frac{ \sum_{j=1}^{n} a_{ij} x_j - \bar{b}_i }{\sqrt{Var(b_i)}} \le E_i,  i = 1, 2, ...., m,

or

\sum_{j=1}^{n} a_{ij} x_j - \bar{b}_i - E_i \sqrt{Var(b_i)} \le 0,  i = 1, 2, ...., m    ... (3.20)

Thus the stochastic linear programming problem stated in Eqns. (3.1) to (3.3) is equivalent to the following deterministic linear programming problem:

Minimize f(X) = \sum_{j=1}^{n} c_j x_j
subject to
\sum_{j=1}^{n} a_{ij} x_j - \bar{b}_i - E_i \sqrt{Var(b_i)} \le 0,  i = 1, 2, ...., m
and x_j \ge 0,  j = 1, 2, ...., n    ... (3.21)
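As a concrete illustration of Eqns. (3.20) and (3.21), the sketch below (not from the paper; all data are assumed) computes E_i = \Phi^{-1}(1 - p_i) with scipy.stats.norm.ppf, tightens each right-hand side to \bar{b}_i + E_i \sqrt{Var(b_i)}, and solves the resulting ordinary LP. Since minimizing a non-negative cost over x \ge 0 with only such constraints would be trivially zero, the sketch maximises a linear return instead; that choice is part of the illustration, not of the paper.

```python
# Minimal sketch (illustrative, not from the paper): deterministic equivalent
# (3.21) when only the b_i are normally distributed. All numbers are assumed.
import numpy as np
from scipy.stats import norm
from scipy.optimize import linprog

c = np.array([1.0, 2.0])                  # objective coefficients c_j (assumed)
a = np.array([[3.0, 2.0],
              [1.0, 4.0]])                # deterministic coefficients a_ij (assumed)
b_mean = np.array([12.0, 10.0])           # means of b_i (assumed)
b_var = np.array([4.0, 1.0])              # variances Var(b_i) (assumed)
p = np.array([0.95, 0.90])                # required probabilities p_i (assumed)

# E_i satisfies Phi(E_i) = 1 - p_i (negative here), so each constraint becomes
# sum_j a_ij x_j <= b_mean_i + E_i * sqrt(Var(b_i)), a tightened right-hand side.
E = norm.ppf(1.0 - p)
rhs = b_mean + E * np.sqrt(b_var)

# For a non-trivial example we maximise c^T x; linprog minimises, so negate c.
res = linprog(-c, A_ub=a, b_ub=rhs, bounds=[(0, None)] * len(c), method="highs")
print("tightened right-hand sides:", rhs)
print("optimal x:", res.x)
```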


CONCLUSION

Stochastic programming techniques are useful whenever the parameters of the optimization problem are stochastic or random in nature. The basic idea used in all stochastic optimization techniques is to convert the problem into an equivalent deterministic problem, so that the techniques of linear programming can be applied to find the optimum solution. For stochastic linear programming problems, both the two-stage programming technique and the chance-constrained programming technique have been presented here. The solution of stochastic non-linear programming problems, on the other hand, is usually treated using the chance-constrained programming technique only.

ACKNOWLEDGMENT

The authors are thankful to the management for the financial assistance.

REFERENCES

[1] S. S. Rao, "Optimization Theory and Applications", Wiley Eastern Limited, Second Edition, 1984.

[2] N. S. Kambo, "Mathematical Programming Techniques", Affiliated East-West Press Private Limited, 1984.

[3] Kanti Swarup, P. K. Gupta and Man Mohan, "Operations Research", Sultan Chand and Sons, 1987.

[4] Hamdy A. Taha, "Operations Research: An Introduction", Macmillan Publishing Company, New York; Collier Macmillan Publishers, London, Fourth Edition, 1989.

[5] V. Rajaraman, "Computer Programming in Pascal", Prentice Hall of India, Revised Edition, 1985.
