
SEGMENTED LINE SEARCH METHOD FOR CONSTRUCTING D-OPTIMAL EXACT DESIGNS IN CONTINUOUS EXPERIMENTAL AREAS

F.I. Nsude

Department of Statistics

Akanu Ibiam Federal Polytechnic, Unwana, Ebonyi State, Nigeria

P.E. Chigbu

Department of Statistics, University of Nigeria, Nsukka, Enugu State, Nigeria

Abstract

An effective segmented line search technique for constructing D-optimal exact designs for a polynomial response function of degree z is presented in this paper. The direction of search is a weighted average from all segments of the experimental region, the weights being proportional to the mean square errors from the segments. The method is rapidly convergent, and its application in continuous experimental areas is shown.

Key words: Variance function, exact designs, continuous design, exchange algorithm.


1. Introduction

The method reported here is a segmented line search procedure for constructing D-optimal exact designs in a continuous experimental region. In this search system, the variance of a polynomial response function of degree z is approximated by a polynomial function of degree 2z. The maximizer of this polynomial is determined by partitioning the experimental area into segments. Thereafter, this maximizer is used to replace the point of minimum prediction variance amongst the points already in the design. The search method is via the Super Convergent Line Search Series reported by Onukogu (1997) [8], which has been shown to have rapid convergence capability; see Onukogu and Chigbu (2002, p. 112) [9], and Ugbe and Chigbu (2014) [11].

Every line search technique has four characteristics in sequence, as follows: (a) the starting point, x̄, which is an n-component vector; (b) an n-component direction vector, d; (c) the step-length, ρ; and (d) movement to the iterate, x = x̄ + ρd. This set of sequential activities applies as well to the technique herein reported.

This search method also makes use of the variance exchange of points procedure, which was originally reported by Fedorov (1972) [5] and modified severally over the decades before the Variance Exchange Method reported by Atkinson and Donev (1989) [1].

2. Design Background Problem

Given the basic experimental triple {X̃, F_x, Σ_x}: X̃ is a collection of factor levels, x, which constitutes a compact, continuous and metric space; F_x is the space of response functions, a space of finite-dimensional linear functions continuous in X̃; and at each point, x ∈ X̃, the response is observed with a random error, e_x, which is a point in the space Σ_x, a non-negative continuous function defined in X̃ (Chigbu and Oladugba, 2010) [4].

In this study, each treatment or support point, x_i, is selected such that it is attached a weight w_i, with w_i ≥ 0 and Σ_{i=1}^n w_i = 1, where n is the number of support points in the design. From a collection of n support points we form a design matrix, X_ξ, as well as the normalized Fisher information matrix,

M(ξ) = X′_ξ X_ξ / (nσ²),

where X_ξ is the design matrix of the model terms (the columns) evaluated at the specific points/treatments in the design space (the rows), X′_ξ X_ξ is a non-singular matrix (the information matrix), and σ²(X′_ξ X_ξ)⁻¹ is the variance-covariance matrix of the least squares estimates of the response parameters.

Our interest is to find an n-point design, x_i ∈ X̃, i = 1, 2, …, n, such that the resultant information matrix (X′_ξ X_ξ) is maximized. A design criterion that maximizes the determinant of the information matrix, |X′_ξ X_ξ|, or, equivalently, minimizes |X′_ξ X_ξ|⁻¹, is referred to as D-optimality. Other optimality criteria have been reported severally (Atkinson and Donev, 1992 [2]; Onukogu and Chigbu, 2002 [9]).
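For concreteness (an illustration added here, not part of the original text): the D-criterion can be evaluated directly for any candidate exact design. The Python sketch below, assuming only numpy, does so for the quadratic single-factor model used in Problem 1 of Section 5, and reproduces the determinants quoted there (7.0233 for the initial design, approximately 8 at the optimum).

```python
import numpy as np

def design_matrix(points):
    """Extended design matrix for the quadratic model f(x) = b0 + b1*x + b2*x^2."""
    x = np.asarray(points, dtype=float)
    return np.column_stack([np.ones_like(x), x, x**2])

def d_criterion(points, sigma2=1.0):
    """Return |X'X| and the normalized information matrix M(xi) = X'X/(n*sigma^2)."""
    X = design_matrix(points)
    info = X.T @ X
    return np.linalg.det(info), info / (len(points) * sigma2)

# Initial and final 4-point designs from Problem 1 of Section 5.
det0, _ = d_criterion([-1, -0.3333, 0.3333, 1])  # ~7.0233
det1, _ = d_criterion([-1, 0.0048, 0, 1])        # ~8, the D-optimal value
print(det0, det1)
```

Maximizing this determinant over the n support points is exactly the D-optimality problem addressed in this paper.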


Therefore, in this paper, the intention is to present a search method which seeks, iteratively, an n-point design measure, ξ_n, that maximizes the determinant of M(ξ*); i.e., max |M(ξ*)|, M(ξ_n) ∈ S_{p×p}, where S_{p×p} is the set of all non-singular p × p information matrices defined in X̃.

3. Theoretical Framework

The method is summarized by the following sequence of steps.

3.1. Pick at random an initial n-point design with a nonsingular information matrix.

3.2. Obtain its extended design matrix, X, according to the model terms, and the Fisher information matrix, X′X = M(ξ).

3.3. From the above, compute the standardized generalized variance function of degree 2z as

d(x, ξ) = n x′M⁻¹(ξ)x (3.1)
= g(x) (3.2)
= β₀ + Σ_{i=1}^m β_i x_i + Σ_{i=1}^m β_ii x_i² + Σ_{i=1}^{m−1} Σ_{i<j}^m β_ij x_i x_j + …,

where x′ (1 × p) = (1, x_{1i}, x_{2i}, …, x_{1i}x_{2i}, x_{1i}x_{3i}, …, x_{1i}², x_{2i}², …). Note that the number of factor interactions is m(m − 1)/2, where m is the number of factors or variables.

3.4. Partition the experimental region, X̃, either into non-overlapping segments, S_k, k = 1, 2, …, s, such that S₁ ∩ S₂ ∩ … ∩ S_s = ∅ and S₁ ∪ S₂ ∪ … ∪ S_s ≤ X̃, or into segments with a common boundary such that S₁ ∩ S₂ ∩ … ∩ S_s ≠ ∅ and S₁ ∪ S₂ ∪ … ∪ S_s ≤ X̃: see Ugbe and Chigbu (2014) [11].

3.5. From each kth segment, pick or select n_k support points, n_k ≥ m + 1, k = 1, 2, …, s, with Σ_{k=1}^s n_k = N₀, the total number of selected points in all the segments.

3.6. Obtain the design matrix, X_k, k = 1, 2, …, s, for each segment, and the information matrices

X̃_k = X′_k X_k. (3.3)

3.7. Compute X_{bk}, the matrix of interaction/biasing effects of the variables for each segment, k = 1, 2, …, s; the vector of coefficients of the interaction/biasing terms of g(x) is denoted by c_b. The matrix of mean square error of each kth segment is given by

M_k = X̃_k⁻¹ + X̃_k⁻¹ X′_k X_{bk} c_b c′_b X′_{bk} X_k X̃_k⁻¹, (3.4)

where X̃_k = X′_k X_k and M_k is an (m + 1) × (m + 1) matrix, m being the number of factors/variables.

3.8. The average mean square error is minimized to obtain the matrices of convex combination, H_k. Thus, min M(d_t) = min(Σ_{k=1}^s h_{tk} α_{tk}), and as the cross terms are zero we have

min M(d_t) = min Σ_{k=1}^s h_{tk}² M(a_{tk}), (3.5)

where d_t = Σ_{k=1}^s h_{tk} a_{tk}; Σ_{k=1}^s h_{tk} = 1; t = 0, 1, …, m; h_{tk} ≥ 0. The values, h_{tk}, are obtained by taking the partial derivatives of M(d_t) in equation (3.5) with respect to each h_{tk}, equating them to zero, and solving.
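Step 3.8 admits a closed form: with the cross terms zero, minimizing Σ_{k=1}^s h_{tk}² M(a_{tk}) subject to Σ_{k=1}^s h_{tk} = 1 by a Lagrange-multiplier argument gives h_{tk} ∝ 1/M(a_{tk}), i.e., weights inversely proportional to the segment mean square errors; this reading also reproduces the H matrices of the numerical examples in Section 5 (e.g., the equal 0.5/0.5 split in Problem 1). A minimal Python sketch of that solution (an illustration, not the authors' MATLAB implementation):

```python
import numpy as np

def convex_weights(mse_diag):
    """Weights h_k minimizing sum_k h_k^2 * M_k subject to sum_k h_k = 1:
    the Lagrange stationarity conditions give h_k proportional to 1/M_k."""
    inv = 1.0 / np.asarray(mse_diag, dtype=float)  # rows: segments, cols: coordinates
    return inv / inv.sum(axis=0)

# In Problem 1 below, both segment MSE matrices share the diagonal (2.7070, 6.1347),
# so every coordinate gets h = 0.5, matching H1 = H2 = diag(0.5, 0.5).
print(convex_weights([[2.7070, 6.1347],
                      [2.7070, 6.1347]]))  # -> [[0.5 0.5] [0.5 0.5]]
```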


3.9. The average information matrix from all the segments is given by

M_t = H X′X H′ = Σ_{k=1}^s H_k X′_k X_k H′_k,

where H is a matrix of convex combinations, H = (H₁, H₂, …, H_s), and X′X = diag(X′₁X₁, X′₂X₂, …, X′_s X_s).

3.10. The response vector, z, is obtained from the variance function, g(x). Hence,

z = (z₀, z₁, …, z_m)′; z_i = g(η_ij), (3.6)
i = 0, 1, …, m; j = 1, 2, …, m,

where η_ij is the (i, j)th element of the average information matrix, M_t.

3.11. Obtain the starting point, x*,

x* = Σ_{i=1}^{N₀} w_i x_i; w_i = a_i⁻¹ / Σ_{i=1}^{N₀} a_i⁻¹; a_i = x_i′M⁻¹(ξ_{N₀})x_i, (3.7)

and the direction vector, d*, from

d̂ = M_t⁻¹ z, (3.8)

where d̂ = (d₀, d₁, …, d_m)′ is normalized such that d̂*′d̂* = 1; thus d* = d(d′d)^{−1/2}. The optimal step-length, ρ*, is obtained from g(ρ) = g(x* + ρd*) by taking the derivative of the function with respect to ρ and setting

d[g(ρ)]/dρ = 0.

3.12. The new point is x_j = x* + ρ*d*, j = 1, 2, ….

3.13. Take the point, x₁, to step (3.5) above and add it to the segment it falls into, so that the number of support points for that segment is (n_k + 1); follow the steps down to obtain a second point, x₂. Again, add the point, x₂, to the segment it falls into, and continue with the steps to obtain a third point, x₃, etc. This process is continued until it converges to a point, x*.

3.14. Compute the prediction variance of each design point as x_i′M⁻¹(ξ₀)x_i, i = 1, 2, …, n, and select the one with minimum prediction variance, V(x_min) = x′_min M⁻¹(ξ₀)x_min. Add the row vector x*′ to augment the n-row extended design matrix and remove the point with minimum prediction variance from the design, to have the determinant

|M(ξ₀) + x_j x_j′ − x_min x′_min| = |M(ξ₀)| (1 + Δ(x_j, x_min)).

Note that x_min is the point of minimum prediction variance in the design and x* is the point of maximum prediction variance from the experimental region, X̃.

3.15. With the new point in the design, compute the inverse information matrix and the variance function, g(x), and continue the other processes of the steps to obtain another converging point, which is exchanged with a point of minimum prediction variance from the design.

3.16. Is |det M_j(ξ) − det M_{j−1}(ξ)| ≤ ε, ε ≥ 0? If yes, stop: M_j(ξ) is D-optimal.

4. Convergence of the Sequence

Given a line sequence x_j = x_{j−1} + ρ_j d_j, j = 1, 2, …, then

{x_j}_{j=1}^∞ = {x_{j−1} + ρ_j d_j}_{j=1}^∞ ⇒ {x_j}_{j=1}^∞ → x_r.

Proof

Define the information matrix at the jth iteration as M_j(ξ_n) = Σ_{i=1}^n x_i x_i′(ξ_n): see Gaffke and Krafft (1982) [7]. Then

M_j = x₁x₁′ + x₂x₂′ + … + x_r x_r′ + … + x_n x_n′ = Σ_{i=1}^{n−1} x_i x_i′ + x_r x_r′.

Set M₀ = Σ_{i=1}^{n−1} x_i x_i′, so that M_j = M₀ + x_r x_r′. Then the determinant becomes

|M_j| = |M₀ + x_r x_r′| = |M₀| (1 + x_r′M₀⁻¹x_r). (4.1)

The information matrix at the (j − 1)th iteration is M_{j−1} = M₀ + x_k x_k′, so that

|M_{j−1}| = |M₀ + x_k x_k′| = |M₀| (1 + x_k′M₀⁻¹x_k). (4.2)

Comparing equations (4.1) and (4.2), the sequence moves from a point of low variance to a point of high variance; therefore

x′_{j−1} M_{j−1}⁻¹ x_{j−1} ≤ x_j′ M_j⁻¹ x_j. (4.3)

Since M₀ = Σ_{i=1}^{n−1} x_i x_i′ is the same in both equations and x_k′M₀⁻¹x_k ≤ x_r′M₀⁻¹x_r, we have

|M₀| (1 + x_r′M₀⁻¹x_r) ≥ |M₀| (1 + x_k′M₀⁻¹x_k) ⇒ |M_j| ≥ |M_{j−1}|.

It then follows from equation (4.3) that the sequence {x_j′M_j⁻¹x_j}_{j=1}^∞ is monotonically increasing and is sure to converge to a point, x_r; that is, {x_j}^∞ → x_r. Therefore, Kolmogorov's condition for convergence with probability one is assured; see Feller (1966, p. 259) [6], and Onukogu and Nsude (2014) [10].

The sequence has the following attributes:


(i) The direction vector is a weighted average from all segments, the weights being proportional to the mean square errors from the segments (see equation (3.8)), and the sequence converges to a minimax point; see Atkinson and Donev (1992, Chapter 11) [2].

(ii) The response function itself is not used, only its variance function.

(iii) Division of the experimental region into s overlapping or non-overlapping segments avoids the use of second derivatives in obtaining a Hessian matrix, which at times constitutes a problem of invertibility in most line search algorithms, e.g., Newton's method.
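The rank-one update identity |M₀ + xx′| = |M₀|(1 + x′M₀⁻¹x) underlying equations (4.1) and (4.2), and hence the monotonicity |M_j| ≥ |M_{j−1}|, is easy to check numerically. A small sketch with an arbitrary nonsingular M₀ (illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 8))
M0 = A @ A.T                   # a nonsingular matrix of the form sum of x_i x_i'
x_r = rng.normal(size=3)       # an entering support point (vector of model terms)

lhs = np.linalg.det(M0 + np.outer(x_r, x_r))
rhs = np.linalg.det(M0) * (1.0 + x_r @ np.linalg.solve(M0, x_r))
print(np.isclose(lhs, rhs))    # True: exchanging in a higher-variance point grows |M|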

5. Numerical Demonstrations

The problems below are used to illustrate the numerical behaviour of the search system.

Problem 1: Consider a polynomial regression function, f(x) = β₀ + β₁x + β₂x², −1 ≤ x ≤ 1; obtain a 4-point exact D-optimal design. Atkinson and Donev (1992) [2] and Atkinson, Donev and Tobias (2006) [3] used this problem to find the exact D-optimal design of a quadratic polynomial of a single factor. Begin with the initial design points

ξ₀ = {−1, −0.3333, 0.3333, 1}.

The extended design matrix is

X(ξ₀) =
[1 −1.0000 1.0000
 1 −0.3333 0.1111
 1  0.3333 0.1111
 1  1.0000 1.0000],

X′X(ξ₀) = M(ξ₀) =
[4.0000 0      2.2222
 0      2.2222 0
 2.2222 0      2.0247].

The variance-covariance matrix is

M⁻¹(ξ₀) =
[ 0.6406 0      −0.7031
  0      0.4500  0
 −0.7031 0       1.2656],

with det M(ξ₀) = 7.0233.

The standardized general variance function is given by

g(x) = n x′M⁻¹(ξ₀)x = β₀ + Σ_{i=1}^4 β_i xⁱ.

Thus, the variance function is

g(x) = 2.5624 − 3.8248x² + 5.0624x⁴.

Now partition the experimental region −1 ≤ x ≤ 1 into k = 2 non-overlapping segments, S₁: −1 ≤ x ≤ −0.25 and S₂: 0.25 ≤ x ≤ 1, and in the kth segment approximate g(x) by a first-order linear function,

y_k = β₀₀ + β₀ₖ x, k = 1, 2.

Pick points in the S_k segments and obtain the X_k design matrices,

X₁ =
[1 −1.0000
 1 −0.7500
 1 −0.5000
 1 −0.2500];  X₂ =
[1 0.2500
 1 0.5000
 1 0.7500
 1 1.0000],

and their information matrices are, respectively,

X̃₁ = [4 −2.5000; −2.5000 1.8750];  X̃₂ = [4 2.5000; 2.5000 1.8750],

where X̃₁ = X′₁X₁ and X̃₂ = X′₂X₂.

The matrices of the biasing terms, x² and x⁴, of the variance function, g(x), for each segment are

X_{b1} =
[1.0000 1.0000
 0.5625 0.3164
 0.2500 0.0625
 0.0625 0.0156];  X_{b2} =
[0.0625 0.0156
 0.2500 0.0625
 0.5625 0.3164
 1.0000 1.0000],

and the vector of coefficients of the biasing terms is

c_b = (−3.8248, 5.0625)′.

The matrix of mean square error of each kth segment is given by

M_k = X̃_k⁻¹ + X̃_k⁻¹ X′_k X_{bk} c_b c′_b X′_{bk} X_k X̃_k⁻¹,

⇒ M₁ = [2.7070 3.8821; 3.8821 6.1347]  and  M₂ = [2.7070 −3.8821; −3.8821 6.1347].
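The Problem 1 quantities above can be reproduced end-to-end. The following Python sketch (a re-implementation for checking, not the authors' MATLAB program) rebuilds M(ξ₀), the coefficients of the variance function g(x) = 2.5624 − 3.8248x² + 5.0624x⁴, the segment mean square error matrices of equation (3.4), and the stationary step-lengths ρ̂₀ = 0, ±0.6148 used below.

```python
import numpy as np

# Initial design and extended design matrix with model terms (1, x, x^2).
xi0 = np.array([-1.0, -0.3333, 0.3333, 1.0])
X = np.column_stack([np.ones(4), xi0, xi0**2])
M = X.T @ X
Minv = np.linalg.inv(M)
n = 4

# g(x) = n*x'M^{-1}x for x = (1, x, x^2)': collect coefficients of 1, x^2, x^4.
g0 = n * Minv[0, 0]                     # ~2.5624
g2 = n * (2 * Minv[0, 2] + Minv[1, 1])  # ~-3.8248
g4 = n * Minv[2, 2]                     # ~5.0624

def mse_matrix(xs, cb):
    """Equation (3.4): MSE of the first-order fit in one segment;
    columns of Xbk are the biasing terms x^2 and x^4 of g(x)."""
    Xk = np.column_stack([np.ones_like(xs), xs])
    Xbk = np.column_stack([xs**2, xs**4])
    Gk_inv = np.linalg.inv(Xk.T @ Xk)
    bias = Gk_inv @ Xk.T @ Xbk @ cb     # bias vector of the segment fit
    return Gk_inv + np.outer(bias, bias)

cb = np.array([g2, g4])
M1 = mse_matrix(np.array([-1.0, -0.75, -0.5, -0.25]), cb)  # ~[[2.707, 3.882], [3.882, 6.135]]
M2 = mse_matrix(np.array([0.25, 0.5, 0.75, 1.0]), cb)      # ~[[2.707, -3.882], [-3.882, 6.135]]

# Stationary step-lengths of g along d = 0.9997 from x0 = 0:
# dg/drho = 2*g2*d^2*rho + 4*g4*d^4*rho^3 = 0.
d = 0.9997
rhos = np.roots([4 * g4 * d**4, 0.0, 2 * g2 * d**2, 0.0])
print(np.round(sorted(rhos.real), 4))   # ~[-0.6148, 0, 0.6148]
```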


The matrices of convex combination of the segments are H₁ = diag(0.5, 0.5) and H₂ = (I − H₁) = diag(0.5, 0.5). These matrices are normalized to give H₁* = diag(0.7071, 0.7071) and H₂* = diag(0.7071, 0.7071).

The direction vector, d̂ = M_t⁻¹z, is

d̂ = (d₀, d₁)′ = (0.6406, 27.5654)′,

and the normalized direction vector, such that d̂′d̂ = 1, is

d̂ = (0.0232, 0.9997)′.

Then, the optimal starting point is

x₀ = Σ_{i=1}^8 w_i x_i = (1.0000, 0.0000)′.

Compute the optimal step-length, ρ₀, by substituting the values x₀ = 0 and d̂ = 0.9997 into the variance function,

g(ρ₀) = 2.5624 − 3.8248(0 + 0.9997ρ₀)² + 5.0624(0 + 0.9997ρ₀)⁴,

and solving for ρ₀ to obtain

ρ̂₀ = 0, or ρ̂₀ = −0.6148, or ρ̂₀ = 0.6148.

Make a move to obtain a point,

x₁ = x₀ + ρ̂₀d̂.

Substituting the three values of ρ̂₀ gives x₁ = 0, or x₁ = −0.6148, or x₁ = 0.6148. Checking the prediction variance at each of these three points, we have x₁ = 0 = x_max.

Beginning another iteration in this iteration cycle, we add the point, x_max = 0, to the second segment (S₂), because this point falls into the second segment, and repeat the process. The points in the first segment are the same as in the first iteration, whereas the second segment is increased by the new point.

A MATLAB program was developed for this work, and the result of the iteration cycles is presented in Table 5.1 below.

Table 5.1: The iteration sequence of the D-optimal design in (−1, 1) for a quadratic single-factor polynomial function.

In this table, the value ε = 0.0005 is considered very small, meaning that the optimal design is achieved. Therefore, the optimal design points are (−1, 0.0048, 0, 1).
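The choice among the three stationary points is by prediction variance, as described in step 3.12; the short check below (an illustration using the g(x) obtained above) reproduces x₁ = 0 = x_max.

```python
import numpy as np

# Variance function from Problem 1 and the three stationary points found there.
g = lambda x: 2.5624 - 3.8248 * x**2 + 5.0624 * x**4
candidates = np.array([0.0, -0.6148, 0.6148])
print(candidates[np.argmax(g(candidates))], g(candidates))
# -> 0.0, since g(0) = 2.5624 exceeds g(+/-0.6148) ~ 1.84, so x_max = 0
```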


Problem 2

Consider the most general second-order polynomial in two factors with a known solution,

f(x) = β₀ + β₁x₁ + β₂x₂ + β₁₂x₁x₂ + β₁₁x₁² + β₂₂x₂², −1 ≤ x₁, x₂ ≤ 1;

obtain a 6-point exact D-optimal design. In the experimental region, X̃, pick the following points:

(x₁, x₂): (−1, 1), (−1, −1), (1, −1), (1, 1), (0, −1), (−0.25, 0.25).

Then, the extended design matrix, with model terms (1, x₁, x₂, x₁x₂, x₁², x₂²), is

X(ξ₀) =
[1 −1.00  1.00 −1.0000 1.0000 1.0000
 1 −1.00 −1.00  1.0000 1.0000 1.0000
 1  1.00 −1.00 −1.0000 1.0000 1.0000
 1  1.00  1.00  1.0000 1.0000 1.0000
 1  0.00 −1.00  0.0000 0.0000 1.0000
 1 −0.25  0.25 −0.0625 0.0625 0.0625],

and the information matrix is

X′X = M(ξ₀) =
[ 6.0000 −0.2500 −0.7500 −0.0625  4.0625  5.0625
 −0.2500  4.0625 −0.0625  0.0156 −0.0156 −0.0156
 −0.7500 −0.0625  5.0625 −0.0156  0.0156 −0.9844
 −0.0625  0.0156 −0.0156  4.0039 −0.0039 −0.0039
  4.0625 −0.0156  0.0156 −0.0039  4.0039  4.0039
  5.0625 −0.0156 −0.9844 −0.0039  4.0039  5.0039].

The variance-covariance matrix is

M⁻¹(ξ₀) =
[ 1.1756  0.0667 −0.0667  0.0167  0.0500 −1.2422
  0.0667  0.2500  0.0000  0.0000  0.0000 −0.0667
 −0.0667  0.0000  0.2500  0.0000 −0.2500  0.3167
  0.0167  0.0000  0.0000  0.2500  0.0000 −0.0167
  0.0500  0.0000 −0.2500  0.0000  1.5000 −1.3000
 −1.2422 −0.0667  0.3167 −0.0167 −1.3000  2.5589],

with det M(ξ₀) = 225.000.

The standardized general variance function is

d(x, ξ₀) = n x′M⁻¹(ξ₀)x = g(x) = β₀ + Σ_{i=1}^m β_i x_i + Σ_{i=1}^{m−1} Σ_{i<j}^m β_ij x_i x_j + Σ_{i=1}^m β_ii x_i² + …,

where x′ (1 × p) = (1, x₁, x₂, x₁x₂, x₁², x₂²). Thus, the variance function is

g(x) = 7.0536 + 0.8004x₁ − 0.8004x₂ + 0.2004x₁x₂ + 2.1x₁² − 13.4064x₂² − 0.8004x₁²x₂ − 3x₁x₂² + 3.8004x₂³ − 14.1x₁²x₂² − 0.002x₁x₂³ − 0.1002x₁³x₂ + 9x₁⁴ + 15.3534x₂⁴.

We now partition the experimental region −1 ≤ x₁, x₂ ≤ 1 into k = 2 segments with a common boundary,

S₁ = {−1 ≤ x₁ ≤ 0, −1 ≤ x₂ ≤ 1} and S₂ = {0 ≤ x₁ ≤ 1, −1 ≤ x₂ ≤ 1},

and in the kth segment g(x) is approximated by a first-order linear function,

y_k = β₀₀ + β_{1k}x₁ + β_{2k}x₂, k = 1, 2.

Pick points in the S_k segments and obtain the X_k design matrices,

X₁ =
[1 −1  1
 1 −1  0
 1 −1 −1
 1  0  1
 1  0  0
 1  0 −1];  X₂ =
[1  1  1
 1  1  0
 1  1 −1
 1  0 −1
 1  0  0
 1  0  1],

and their information matrices are, respectively,

X̃₁ = X′₁X₁ =
[ 6 −3 0
 −3  3 0
  0  0 4];  X̃₂ = X′₂X₂ =
[6 3 0
 3 3 0
 0 0 4].

The matrices of the biasing terms, x₁x₂, x₁², …, x₂⁴, of the variance function for each segment (columns ordered as x₁x₂, x₁², x₂², x₁²x₂, x₁x₂², x₂³, x₁²x₂², x₁x₂³, x₁³x₂, x₁⁴, x₂⁴) are

X_{b1} =
[−1 1 1  1 −1  1 1 −1 −1 1 1
  0 1 0  0  0  0 0  0  0 1 0
  1 1 1 −1 −1 −1 1  1  1 1 1
  0 0 1  0  0  1 0  0  0 0 1
  0 0 0  0  0  0 0  0  0 0 0
  0 0 1  0  0 −1 0  0  0 0 1]

and

X_{b2} =
[ 1 1 1  1  1  1 1  1  1 1 1
  0 1 0  0  0  0 0  0  0 1 0
 −1 1 1 −1  1 −1 1 −1 −1 1 1
  0 0 1  0  0 −1 0  0  0 0 1
  0 0 0  0  0  0 0  0  0 0 0
  0 0 1  0  0  1 0  0  0 0 1],

while the vector of coefficients of the biasing terms is

c′_b = (0.2004, 2.1000, −13.4064, −0.8004, −3.0000, 3.8004, −14.1000, −0.0020, −0.1002, 9.0000, 15.3534).

The matrix of mean square error of each kth segment is given by

M_k = X̃_k⁻¹ + X̃_k⁻¹ X′_k X_{bk} c_b c′_b X′_{bk} X_k X̃_k⁻¹,

⇒ M₁ =
[ 2.0181 −2.5659  2.9895
 −2.5659  5.6556 −5.1382
  2.9895 −5.1382  5.5418]  and  M₂ = [2.0181 1.1807 2.9895; … ].

The matrices of convex combination of the segments are arranged in the H_i matrices as follows:

H₁ = diag(0.5000, 0.2639, 0.5000);  H₂ = diag(0.5000, 0.7361, 0.5000),


and the normalized H_i are:

H₁* = diag(0.7071, 0.3374, 0.7071);  H₂* = diag(0.7071, 0.9414, 0.7071).

The direction vector, d̂ = M_t⁻¹z, is

d̂ = (−0.1069, 9.4973, −52.6626)′,

and the direction vector normalized such that d̂′d̂ = 1 is

d̂ = (−0.0020, 0.1775, −0.9841)′.

Then, the optimal starting point is

x₀ = Σ_{i=1}^{12} w_i x_i = (1.0000, −0.0031, −0.1166)′.

Compute the optimal step-length, ρ₀, by substituting the values of x₀ and d̂ into the variance function; thereafter differentiate with respect to ρ₀, equate to zero and solve for ρ̂₀, to obtain

ρ̂₀ = 0.6483, or ρ̂₀ = −0.7296, or ρ̂₀ = −0.0835.

Make a move to obtain a point,

x₁ = x₀ + ρ̂₀d̂.

Then, substituting the values of ρ̂₀, we have

x₁ = (0.1119, −0.7546)′, or x₁ = (−0.1326, 0.6015)′, or x₁ = (−0.0180, −0.0344)′.

The point of maximum prediction variance among the three points is

x₁* = (−0.0180, −0.0344)′.

Add this point, x₁*, to the first segment (S₁) of the experimental region, because this point falls into the first segment, and repeat the process. Notice that the points in the first segment have increased by the addition of the new point, whereas the points in the second segment remain the same.

The summary of the iteration cycles is given in Table 5.2 below.

Table 5.2: The iteration sequence of the D-optimal design in [−1, 1] for a quadratic polynomial function of two factors.

The value ε = 0.0003 is considered very small, meaning that the optimal design is reached.

Atkinson and Donev (1992) used the KL exchange method to generate an exact D-optimal design for this regression model and obtained the same result which Box and Draper (1971) obtained via a computer hill-climbing search. Their D-optimal design is

ξ_n = {(−1, −1), (1, −1), (−1, 1), (−0.1315, −0.1315), (0.3945, 1), (1, 0.3945)}.

5.2 Assessment of the search method

The assessment of this method was considered in two folds. The first is the computer execution time (c.e.t.) of the algorithm to reach the D-optimal design. In this paper, a MATLAB program of this algorithm was developed, and the average of twenty running times for each model is presented in Table 5.3 below. All runs were performed on a Presario CQ56 laptop computer.

The second way of assessment is through D-efficiency.


D-efficiency is a relative measure of how a design compares with the D-optimum design for a specific experimental region; it is used to compare a user-specified design with the optimal design. Theoretically, design efficiency lies between 0 and 1, and the closer the efficiency is to 1, the better the arbitrary design. If the information matrix of the optimal design is M(ξ₁), and an arbitrary design has information matrix M(ξ₂), then the D-efficiency of the arbitrary design is defined as

D_eff = { |M(ξ₂)| / |M(ξ₁)| }^{1/p},

where p is the number of model parameters: see Atkinson and Donev (1989).
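A direct transcription of this definition (a sketch; the determinants and p are those reported for Problem 1 in this paper):

```python
import numpy as np

def d_efficiency(det_arbitrary, det_optimal, p):
    """D-efficiency of an arbitrary design relative to the optimal one."""
    return (det_arbitrary / det_optimal) ** (1.0 / p)

# Problem 1 (p = 3): the initial design had |M| = 7.0233, the optimum ~7.9999.
print(d_efficiency(7.0233, 7.9999, 3))  # ~0.9575
```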
The table below shows how the method compares with the KL-exchange method reported by Atkinson and Donev.

Table 5.3: The performance of the method

Design problem   Performance of       Performance relative     Execution time
(m, p, n)        line search, |M(ξ)|  to KL exchange, D_eff    (seconds)
1, 3, 3          7.9999               1.0000                   0.08
2, 6, 6          256.0000             0.9924                   2.24

Table 5.3 shows that the method is equal in efficiency to what Atkinson, Donev and Tobias reported for the single-factor quadratic polynomial function. In the same way, the new method is highly efficient relative to the result reported using the KL exchange method for the two-factor quadratic polynomial model. Again, the new method reached the optimal design points in only four iteration cycles in each of the examples, with minimal execution time.

6. Conclusion

A line search method for constructing D-optimal exact designs for a polynomial response function, f(x), of degree z, where the factor levels x are defined in a continuous geometry, is shown. Under this search method, the variance of the regression function is approximated by a polynomial function of degree 2z. A maximizer of this polynomial is determined by the segmented line search technique. Thereafter, this maximizer is used to replace the point of minimum prediction variance from amongst the points already in the design. This exchange of points is continued until the sequence converges to the D-optimal design.

The search method compares favourably with the KL-Exchange Algorithm reported by Atkinson and Donev. We affirm that the new search method is a single search system, in contrast to the "Adjustment Algorithm" of Atkinson and Donev (1992, Chapter 15), where a secondary search is required at the end of the primary one before the search terminates.

References

[1] Atkinson, A. C. and Donev, A. C. (1989). The Construction of Exact D-optimal Experimental Designs with Application to Blocking Response Surface Designs. Biometrika, 76, 515-527.

[2] Atkinson, A. C. and Donev, A. C. (1992). Optimum Experimental Designs. Oxford University Press, New York.

[3] Atkinson, A. C., Donev, A. C. and Tobias, R. D. (2006). Optimum Experimental Designs, with SAS. Oxford University Press, Oxford.

[4] Chigbu, P. E. and Oladugba, A. V. (2010). On the Loss of Information Matrix and Its Analytical Properties in DOE (Design of Experiments). Journal of the Nigerian Mathematical Society, 29, 41-49.

[5] Fedorov, V. V. (1972). Theory of Optimal Experiments. Academic Press, New York.

[6] Feller, W. (1966). An Introduction to Probability Theory and Its Applications, 3rd Edition. John Wiley and Sons, New York.

[7] Gaffke, N. and Krafft, O. (1982). Exact D-Optimal Designs for Quadratic Regression. Journal of the Royal Statistical Society, Series B, 44(2), 394-397.

[8] Onukogu, I. B. (1997). Foundation of Optimal Exploration of Response Surface. Ephrata, Nsukka.

[9] Onukogu, I. B. and Chigbu, P. E. (2002). Super Convergent Line Series in Optimal Design of Experiments and Mathematical Programming. AP Express Publishers, Nsukka.

[10] Onukogu, I. B. and Nsude, F. I. (2014). A Global Convergent Sequence for Construction of D-optimal Exact Designs in an Infinite Space of Trials. Journal of Statistics: Advances in Theory and Applications, 12(1), 39-52.

[11] Ugbe, T. A. and Chigbu, P. E. (2014). On Non-overlapping Segmentation of the Response Surface for Solving Constrained Programming Problems through Super Convergent Line Series. Communications in Statistics - Theory and Methods, 43, 306-320.