International Journal of Scientific & Engineering Research, Volume 2, Issue 4, April 2011
ISSN 2229-5518

Error of Approximation in Case of Definite Integrals

Rajesh Kumar Sinha, Satya Narayan Mahto, Dhananjay Sharan

Rajesh Kumar Sinha is with the Department of Mathematics, NIT Patna, India. E-mail: rajesh_nitpat@rediffmail.com
Satya Narayan Mahto is with the Department of Mathematics, M. G. College, LNMU, Darbhanga, India.
Dhananjay Sharan is a research scholar in the Department of Mathematics, NIT Patna, India.

Abstract— This paper proposes a method for computing the error of approximation involved in the evaluation of integrals of a single variable. The error associated with a quadrature rule quantifies the discrepancy introduced by the approximation. In numerical integration, a function is approximated by a polynomial of suitable degree, but the integral of that polynomial over an interval differs from the integral of the function itself; this difference is the error of approximation. When an integral is difficult to evaluate by analytical methods, numerical integration (numerical quadrature) is an alternative approach. As with other numerical techniques, it generally yields an approximate solution. The integration can be performed on a continuous function or on a set of data.

Index Terms— Quadrature rule, Simpson's rule, Chebyshev polynomials, approximation, interpolation, error.

1 INTRODUCTION


To evaluate the definite integral of a function that has no explicit antiderivative, or whose antiderivative is not easy to obtain, the basic method of approximation is numerical quadrature [1]-[4]: a weighted sum of the form

$\sum_{i=0}^{n} a_i f(x_i)$

is used to approximate

$\int_a^b f(x)\,dx.$    (1)

In this methodology, the polynomial $p(x)$ that approximates the function $f(x)$ generally oscillates about the function. This means that if $y = p(x)$ overestimates the function $y = f(x)$ in one interval, it will underestimate it in the next interval [5]. As a result, while the area is overestimated in one interval, it may be underestimated in the next, so that the overall effect of the error in the two intervals is not the sum of their moduli; instead, the error in one interval is neutralized to some extent by the error in the next. The error estimated for an integration formula may therefore be unrealistically high. In view of these facts, this paper examines types of approximation subject to the condition of 'best' approximation for a given function, concentrating mainly on polynomial approximation; a polynomial of first degree, $y = a + bx$, is considered as a good approximation to a given function on the interval (a, b).
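To make the quantities above concrete, the following sketch (an assumed illustration, not part of the paper's derivation) forms a weighted sum $\sum_{i=0}^{n} a_i f(x_i)$ using composite Simpson weights for $f(x) = e^x$ on [0, 1] and compares it with the analytically known value of the integral; the difference is the error of approximation studied here. The function, interval, and number of subintervals are arbitrary choices.

```python
import math

def composite_simpson(f, a, b, n=10):
    """Weighted sum sum(a_i * f(x_i)) given by the composite Simpson rule
    on [a, b] with an even number n of subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return h / 3 * total

# Error of approximation: quadrature value minus the analytically known integral.
approx = composite_simpson(math.exp, 0.0, 1.0, n=10)
exact = math.e - 1.0                     # integral of exp on [0, 1]
print(approx, exact, abs(approx - exact))
```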

2 PROPOSED METHOD

2.1 Reflection on Approximation

This section covers types of approximation subject to the condition of 'best' approximation for a given function, concentrating mainly on polynomial approximation. For the approximation, a polynomial of first degree, $y = a + bx$, is considered as a good approximation to a given continuous function on the interval (0, 1). Under this assumption, the following two statements may be considered.

The Taylor polynomial at x = 0:

$y = f(0) + x f'(0)$    (assuming $f'(0)$ exists).    (2)

The interpolating polynomial constructed at x = 0 and x = 1:

$y = f(0) + x\,[f(1) - f(0)].$    (3)

It might be argued that a Taylor or interpolating polynomial constructed at some other point would be more suitable. However, these approximations are designed to imitate the behavior of f at only one or two points.
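The two first-degree approximations (2) and (3) can be compared numerically. The sketch below is only an assumed example (the choice f = sin and the sampling grid are not from the paper): it evaluates the Taylor polynomial at x = 0 and the interpolant through x = 0 and x = 1, and reports the maximum deviation of each from f on (0, 1).

```python
import math

f = math.sin                  # assumed test function
fp0 = math.cos(0.0)           # f'(0)

def taylor_linear(x):         # equation (2): f(0) + x f'(0)
    return f(0.0) + x * fp0

def interp_linear(x):         # equation (3): f(0) + x [f(1) - f(0)]
    return f(0.0) + x * (f(1.0) - f(0.0))

# Maximum deviation of each first-degree approximation from f on (0, 1)
xs = [i / 1000 for i in range(1001)]
print(max(abs(f(x) - taylor_linear(x)) for x in xs),
      max(abs(f(x) - interp_linear(x)) for x in xs))
```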



Since the polynomial of first degree in x shown above, $y = a + bx$, gives a good approximation to f throughout the interval (0, 1), values of a and b are required such that

$\max_{0 \le x \le 1}\,\bigl|f(x) - (a + bx)\bigr|$

is minimized over all choices of the two values a and b. This is called the minimax (or Chebyshev) approximation. Instead of minimizing the maximum error between the (continuous) function f and the approximating straight line, one may instead minimize the 'sum' of the moduli of the errors: the values of a and b are chosen so that

$\int_0^1 \bigl|f(x) - (a + bx)\bigr|\,dx$

is minimized, which is called the best $L_1$ approximation. It should be noted that the $L_1$ approximation gives equal weight to all the errors, while the minimax approximation concentrates on the error of largest modulus.

There is also a family of approximations which, in a sense, lies between the extremes of the $L_1$ and minimax approximations. For a fixed value of $p \ge 1$, the two values a and b are chosen so that

$\int_0^1 \bigl|f(x) - (a + bx)\bigr|^p\,dx$

is minimized; the result is called the best $L_p$ approximation. In this minimized expression, the presence of the pth power means that the error of largest modulus tends to dominate as p increases (with f continuous). It can be shown that, as $p \to \infty$, the best $L_p$ approximation tends to the minimax approximation, which is therefore sometimes called the best $L_\infty$ approximation. Thus the $L_p$ approximations form a spectrum ranging from the $L_1$ approximation to the minimax approximation. For $1 < p < \infty$, only the $L_2$ approximation is in common use; it is better known as the best least-squares approximation.
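A crude way to see the difference between the minimax and $L_1$ criteria is a brute-force search over the coefficients a and b. The sketch below is purely illustrative: the test function $f(x) = e^x$, the discretization of [0, 1], and the coarse search grid are assumptions, and a practical implementation would instead use, for example, the Remez exchange algorithm for the minimax fit.

```python
import math

# Compare the minimax (L-infinity) and L1 criteria for a first-degree
# approximation a + b*x on [0, 1]; f = exp and the grids are assumptions.
f = math.exp
xs = [i / 200 for i in range(201)]   # sample points on [0, 1]
fx = [f(x) for x in xs]

def linf_err(a, b):
    """Maximum modulus of f(x) - (a + b*x) over the samples (minimax criterion)."""
    return max(abs(v - (a + b * x)) for x, v in zip(xs, fx))

def l1_err(a, b):
    """Average modulus of f(x) - (a + b*x), a discrete stand-in for the L1 integral."""
    return sum(abs(v - (a + b * x)) for x, v in zip(xs, fx)) / len(xs)

grid = [i * 0.05 for i in range(61)]             # candidate values for a and b
pairs = [(a, b) for a in grid for b in grid]
best_minimax = min(pairs, key=lambda ab: linf_err(*ab))
best_l1 = min(pairs, key=lambda ab: l1_err(*ab))
print("minimax (a, b):", best_minimax, "error:", linf_err(*best_minimax))
print("L1      (a, b):", best_l1, "error:", l1_err(*best_l1))
```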

2.2 Generalized Case for Approximation under a Certain Interval

This section considers methods of approximating $f'$, given the values of $f(x)$ at certain points. If $p$ is a polynomial approximation to $f$, then $p'$ provides an approximation to $f'$. However, some care is needed: the maximum modulus of $f'(x) - p'(x)$ on a given interval (a, b) can be much larger than the maximum modulus of $f(x) - p(x)$.

To make this evident, suppose that

$f(x) - p(x) = 10^{-2}\,T_n(x),$    (4)

where $T_n(x)$ is a polynomial of degree n in x with leading term $2^{n-1}x^n$ for n > 0. The $T_n$ are known as Chebyshev polynomials, after the Russian mathematician P. L. Chebyshev (1821-1894); equation (4) expresses the difference between the function and its first approximation in terms of $T_n$. Now, to determine the turning values of $T_n$, consider its derivative. Writing $x = \cos\theta$, so that $T_n(x) = \cos n\theta$,

$\frac{d}{dx}T_n(x) = \frac{d\cos n\theta}{d\theta}\,\frac{d\theta}{dx} = \frac{n\sin n\theta}{\sin\theta},$    (5)

since

$\frac{d\theta}{dx} = \frac{1}{dx/d\theta} = -\frac{1}{\sin\theta}.$    (6)

That is,

$T_n'(x) = \frac{n\sin n\theta}{\sin\theta} = n^2\left(\frac{\sin n\theta}{n\theta}\right)\left(\frac{\theta}{\sin\theta}\right),$    (7)

and

$\lim_{\theta\to 0}\frac{\sin\theta}{\theta} = 1.$    (8)

Thus

$T_n'(1) = n^2.$    (9)

It can be shown that this is the maximum modulus of $T_n'$ on (-1, 1). If n = 10, say, the maximum modulus of $f - p$ on (-1, 1) is $10^{-2}$, whereas the maximum modulus of $f' - p'$ is $10^{-2}\,T_n'(1) = 1$.
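Equation (9) can be checked numerically from the form in equation (7). The short sketch below (an assumed illustration) evaluates $n\sin n\theta/\sin\theta$ for a sequence of small values of θ and shows that it approaches $n^2$; here n = 10, so the limit is 100.

```python
import math

# Equation (7): T_n'(x) = n*sin(n*theta)/sin(theta) with x = cos(theta).
# As theta -> 0 (x -> 1) this tends to n^2, which is equation (9).
def chebyshev_derivative(n, theta):
    return n * math.sin(n * theta) / math.sin(theta)

n = 10
for theta in (1e-1, 1e-3, 1e-5):
    print(theta, chebyshev_derivative(n, theta))   # approaches n**2 = 100
```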
Furthermore, consider the approximation of f by the polynomial $p_n$ that interpolates f at the distinct points $x_0, x_1, \ldots, x_n$. In that case there exists a number ξ (depending on x) in a certain interval (a, b) such that

$f(x) - p_n(x) = (x - x_0)(x - x_1)\cdots(x - x_n)\,\frac{f^{(n+1)}(\xi)}{(n+1)!},$    (10)

i.e.

$f(x) - p_n(x) = \pi_{n+1}(x)\,\frac{f^{(n+1)}(\xi)}{(n+1)!},$    (11)

where $\pi_{n+1}(x) = (x - x_0)(x - x_1)\cdots(x - x_n)$.


Equation (11) thus gives the error referred to in the following statement. Suppose (a, b) is an interval containing the points $x, x_0, x_1, \ldots, x_n$, that $f, f', \ldots, f^{(n)}$ exist and are continuous on (a, b), and that $f^{(n+1)}$ exists for a < x < b; then the error is as shown in equation (11), where the number ξ ∈ (a, b) depends on x.
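The error formulae (10)-(11) can be illustrated with a small numerical check. The sketch below is an assumed example, not taken from the paper: it interpolates sin quadratically at the nodes 0, 0.5 and 1, and compares the actual error at one point with the bound obtained by replacing $f^{(n+1)}(\xi)$ by the maximum modulus of $f'''$ on the interval.

```python
import math

# Assumed example: quadratic interpolation (n = 2) of sin at the nodes 0, 0.5, 1.
nodes = [0.0, 0.5, 1.0]
vals = [math.sin(t) for t in nodes]

def p_n(x):
    """Lagrange form of the interpolating polynomial through the nodes above."""
    total = 0.0
    for i, xi in enumerate(nodes):
        term = vals[i]
        for j, xj in enumerate(nodes):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

x = 0.3
actual = abs(math.sin(x) - p_n(x))
# Bound from (10)/(11): |pi_{n+1}(x)| * max|f'''| / 3!, with max|f'''| <= 1 for sin.
pi_3 = abs((x - nodes[0]) * (x - nodes[1]) * (x - nodes[2]))
bound = pi_3 / math.factorial(3)
print(actual, bound, actual <= bound)
```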

Differentiating (11) with respect to x,

$f'(x) - p_n'(x) = \pi_{n+1}'(x)\,\frac{f^{(n+1)}(\xi)}{(n+1)!} + \frac{\pi_{n+1}(x)}{(n+1)!}\,\frac{d}{dx}\,f^{(n+1)}(\xi).$    (12)
In general, nothing further can be said about the second term on the right of equation (12): the differentiation of $f^{(n+1)}(\xi)$ with respect to x cannot be carried out, since ξ is an unknown function of x. Thus, for general values of x, equation (12) is of little use for determining the accuracy with which $p_n'$ approximates $f'$. However, if x is restricted to one of the values $x_0, x_1, \ldots, x_n$, then $\pi_{n+1}(x) = 0$ and the unknown second term on the right of equation (12) vanishes, so that

$f'(x_r) - p_n'(x_r) = \pi_{n+1}'(x_r)\,\frac{f^{(n+1)}(\xi_r)}{(n+1)!},$    (13)

where $\xi_r$ denotes the value of ξ when $x = x_r$.

Now, forward differences are applied to express the polynomial $p_n(x)$ in the form

$p_n(x) = p_n(x_0 + sh),$    (14)

$p_n(x) = f_0 + \binom{s}{1}\Delta f_0 + \binom{s}{2}\Delta^2 f_0 + \cdots + \binom{s}{n}\Delta^n f_0,$    (15)

where

$x = x_0 + sh,$    (16)

i.e.

$dx = h\,ds.$    (17)

Also, since

$p_n(x) = p_n(x_0 + sh),$    (18)

differentiating both sides with respect to x gives

$p_n'(x) = \frac{ds}{dx}\,\frac{d}{ds}\,p_n(x_0 + sh)$    (19)

$= \frac{1}{h}\,\frac{d}{ds}\left[f_0 + \binom{s}{1}\Delta f_0 + \binom{s}{2}\Delta^2 f_0 + \cdots + \binom{s}{n}\Delta^n f_0\right]$    (20)

$p_n'(x) = \frac{1}{h}\left[\Delta f_0 + \frac{1}{2}(2s - 1)\Delta^2 f_0 + \cdots + \frac{d}{ds}\binom{s}{n}\Delta^n f_0\right].$    (21)
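Evaluating equation (21) at $x = x_0$ (that is, at s = 0) gives the forward-difference formula $p_n'(x_0) = \frac{1}{h}\left(\Delta f_0 - \tfrac{1}{2}\Delta^2 f_0 + \tfrac{1}{3}\Delta^3 f_0 - \cdots\right)$, because the derivative of $\binom{s}{k}$ at s = 0 is $(-1)^{k-1}/k$. The sketch below applies the first three terms of this formula to the assumed example $f = e^x$ and compares the result with the exact derivative.

```python
import math

# Assumed example: forward differences of f = exp at x0, x0 + h, x0 + 2h, x0 + 3h.
f, x0, h = math.exp, 0.0, 0.1
f0, f1, f2, f3 = (f(x0 + i * h) for i in range(4))
d1 = f1 - f0                        # delta f0
d2 = f2 - 2 * f1 + f0               # delta^2 f0
d3 = f3 - 3 * f2 + 3 * f1 - f0      # delta^3 f0

# Equation (21) evaluated at s = 0: p_n'(x0) = (1/h)(d1 - d2/2 + d3/3 - ...)
approx = (d1 - d2 / 2 + d3 / 3) / h
exact = math.exp(x0)                # derivative of exp at x0
print(approx, exact, abs(approx - exact))
```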

To calculate $\pi_{n+1}'$, write

$\pi_{n+1}(x) = (x - x_r)\prod_{j \ne r}(x - x_j),$    (22)

where $\pi_{n+1}'$ is obtained by differentiating equation (22):

$\pi_{n+1}'(x) = \left[\frac{d}{dx}(x - x_r)\right]\prod_{j \ne r}(x - x_j) + (x - x_r)\,\frac{d}{dx}\prod_{j \ne r}(x - x_j).$    (23)

Putting $x = x_r$, the second term on the right of equation (23) becomes zero. Thus

$\pi_{n+1}'(x_r) = \prod_{j \ne r}(x_r - x_j) = (-1)^{n-r}\,h^n\,r!\,(n - r)!,$    (24)

since

$x_r - x_j = (r - j)h.$    (25)

From (13),

$f'(x_r) - p_n'(x_r) = \frac{(-1)^{n-r}\,h^n\,r!\,(n - r)!}{(n + 1)!}\,f^{(n+1)}(\xi_r).$    (26)

This investigation supports the interpretation that polynomials are sufficiently accurate for many approximation and interpolation tasks.
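Equation (26) can be checked in a small case. With n = 2 and r = 0, differentiating the quadratic interpolant through three equally spaced nodes and evaluating at $x_0$ gives the three-point formula $p_2'(x_0) = (-3f_0 + 4f_1 - f_2)/(2h)$, and (26) predicts an error of modulus at most $h^2\,0!\,2!/3!\,\max|f'''| = (h^2/3)\max|f'''|$. The sketch below uses the assumed example f = sin, for which $\max|f'''| \le 1$.

```python
import math

# Assumed example for equation (26) with n = 2, r = 0: derivative of the
# quadratic interpolant of f = sin at the node x_0, against the bound h^2 / 3.
f, fprime, x0 = math.sin, math.cos, 0.2

for h in (0.1, 0.05, 0.025):
    f0, f1, f2 = f(x0), f(x0 + h), f(x0 + 2 * h)
    p_deriv = (-3 * f0 + 4 * f1 - f2) / (2 * h)    # p_2'(x_0)
    error = abs(fprime(x0) - p_deriv)
    bound = h * h / 3                              # max|f'''| <= 1 for sin
    print(h, error, bound, error <= bound)
```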

2.3 Degree of Accuracy

The degree of accuracy, or precision, of a quadrature formula is the largest positive integer n such that the formula is exact for $x^k$ for each k = 0, 1, ..., n. The trapezoidal rule and Simpson's rule have degrees of precision one and three, respectively. Integration and summation are linear operations; that is,



$\int_a^b \bigl(\alpha f(x) + \beta g(x)\bigr)\,dx = \alpha\int_a^b f(x)\,dx + \beta\int_a^b g(x)\,dx$    (27)

$\sum_{i=0}^{n} \bigl(\alpha f(x_i) + \beta g(x_i)\bigr) = \alpha\sum_{i=0}^{n} f(x_i) + \beta\sum_{i=0}^{n} g(x_i)$    (28)

for each pair of integrable functions f and g and each pair of real constants α and β. This implies that the degree of precision of a quadrature formula is n if and only if the error E(p(x)) = 0 for all polynomials p(x) of degree k = 0, 1, ..., n, but E(p(x)) ≠ 0 for some polynomial p(x) of degree n + 1.
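This characterization can be tested directly: apply the rule to the monomials $x^k$ and find the first k for which $E(x^k) \ne 0$. The sketch below (an assumed illustration, not from the paper) does this for the basic Simpson's rule on [0, 1] and confirms a degree of precision of three.

```python
def simpson(f, a, b):
    """Basic (non-composite) Simpson's rule on [a, b]."""
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

# E(x^k) = 0 for k = 0, ..., 3 but not for k = 4, so the degree of precision is 3.
a, b = 0.0, 1.0
for k in range(6):
    exact = (b ** (k + 1) - a ** (k + 1)) / (k + 1)     # integral of x^k on [a, b]
    error = simpson(lambda x: x ** k, a, b) - exact
    print(k, abs(error) < 1e-12)
```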

3 CONCLUSION

Increasing the degree of the approximating polynomial does not guarantee better accuracy. In a higher-degree polynomial the coefficients also get bigger, which may magnify the errors. Similarly, reducing the size of the sub-intervals by increasing their number may lead to an accumulation of rounding errors. A balance should therefore be kept between the two, i.e. the degree of the polynomial and the total number of intervals. These are the primary motivations for studying the techniques of numerical integration/quadrature [6]-[9]. In the case of Simpson's rule, the technique is applied individually to the subintervals [a, (a + b)/2] and [(a + b)/2, b], and an error-estimation procedure is used to determine whether the approximation to the integral on each subinterval is within a tolerance of ε/2. If so, the approximations are summed to produce an approximation of the integral of f(x) over the interval (a, b) within the tolerance ε. If the approximation on one of the subintervals fails to be within the tolerance ε/2, that subinterval is itself subdivided, and the procedure is reapplied to the two new subintervals to determine whether the approximation on each is accurate to within ε/4. This halving procedure is continued until each portion is within the required tolerance.
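A minimal sketch of this adaptive procedure is given below, under stated assumptions: the integrand and interval are arbitrary examples, and the factor 15 in the acceptance test is the usual safety factor that comes from the error expansion of Simpson's rule rather than anything prescribed in this paper.

```python
import math

def simpson(f, a, b):
    """Basic Simpson's rule on [a, b]."""
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

def adaptive_simpson(f, a, b, eps):
    """Compare Simpson on [a, b] with Simpson on the two halves; if they agree
    to within the tolerance, accept the halved result, otherwise subdivide and
    reapply the procedure to each half with tolerance eps / 2."""
    m = (a + b) / 2
    whole = simpson(f, a, b)
    halves = simpson(f, a, m) + simpson(f, m, b)
    if abs(halves - whole) < 15 * eps:      # common acceptance test
        return halves
    return (adaptive_simpson(f, a, m, eps / 2) +
            adaptive_simpson(f, m, b, eps / 2))

print(adaptive_simpson(math.sin, 0.0, math.pi, 1e-8))   # exact value is 2
```

Each recursive call halves the tolerance, matching the ε/2, ε/4 halving described above.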

More broadly, numerical analysis is the study of algorithms that use numerical approximation for problems involving continuous functions [10]-[12]. It continues a long tradition of practical mathematical calculation. Much like the Babylonian approximations, modern numerical analysis does not seek exact answers, since exact answers are often impossible to obtain in practice; instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on the errors. It finds applications in all fields of engineering and the physical sciences.

REFERENCES

[1] K. E. Atkinson, An Introduction to Numerical Analysis, Wiley, New York, 1993.
[2] R. E. Beard, "Some notes on approximate product integration," J. Inst. Actuar., vol. 73, pp. 356-416, 1947.
[3] C. T. H. Baker, "On the nature of certain quadrature formulas and their errors," SIAM J. Numer. Anal., vol. 5, pp. 783-804, 1968.
[4] P. J. Daniell, "Remainders in interpolation and quadrature formulae," Math. Gaz., vol. 24, pp. 238-244, 1940.
[5] R. K. Sinha, "Estimating error involved in case of evaluation of integrals of single variable," Int. J. Comp. Tech. Appl., vol. 2, no. 2, pp. 345-348, 2011.
[6] T. J. Akai, Applied Numerical Methods for Engineers, Wiley, New York, 1993.
[7] L. M. Delves, "The numerical evaluation of principal value integrals," Computer Journal, vol. 10, p. 389, 1968.
[8] B. Bradie, A Friendly Introduction to Numerical Analysis, pp. 441-532, Pearson Education, 2009.
[9] R. K. Sinha, "Numerical method for evaluating the integrable function on a finite interval," Int. J. of Engineering Science and Technology, vol. 2, no. 6, pp. 2200-2206, 2010.
[10] C. E. Froberg, Introduction to Numerical Analysis, Addison-Wesley Pub. Co. Inc.
[11] Ibid., The Numerical Evaluation of a Class of Integrals, Proc. Camb. Phil. Soc., 52.
[12] P. J. Davis and P. Rabinowitz, Methods of Numerical Integration, 2nd edition, Academic Press, New York, 1984.