Author Topic: Error of Approximation in Case of Definite Integrals  (Read 2850 times)


content.writer

Error of Approximation in Case of Definite Integrals
« on: April 23, 2011, 10:45:07 am »
Author : Rajesh Kumar Sinha, Satya Narayan Mahto, Dhananjay Sharan
International Journal of Scientific & Engineering Research, IJSER - Volume 2, Issue 4, April-2011
ISSN 2229-5518
Download Full Paper - http://www.ijser.org/onlineResearchPaperViewer.aspx?Error_of_Approximation_in_Case_of_Definite_Integrals.pdf

Abstract— This paper proposes a method for computing the error of approximation involved in evaluating integrals of a single variable. The error associated with a quadrature rule measures the difference between the true integral and its approximation. In numerical integration, a function is approximated by a polynomial of suitable degree, but the integral of the polynomial over an interval generally differs from the integral of the function itself; this difference is the error of approximation. When an integral is difficult to evaluate by analytical methods, numerical integration (numerical quadrature) offers an alternative approach. As with other numerical techniques, it usually yields an approximate solution. The integration can be performed on a continuous function or on a set of data.

Index Terms— Quadrature rule, Simpson's rule, Chebyshev polynomials, approximation, interpolation, error.

INTRODUCTION                                                                   
To evaluate the definite integral of a function that has no explicit antiderivative, or whose antiderivative is not easy to obtain, the basic approach is numerical quadrature,

i.e., the integral is approximated by a weighted sum of function values,

    ∫_a^b f(x) dx ≈ a_0 f(x_0) + a_1 f(x_1) + ... + a_n f(x_n)                (1)

(this is the standard quadrature form; the authors' exact formula is in the full paper).

In this methodology, the polynomial p(x) approximating the function f(x) generally oscillates about the function: if p(x) overestimates f(x) in one interval, it tends to underestimate it in the next [5]. As a result, where the area is overestimated in one interval it may be underestimated in the next, so the overall error over the two intervals is not the sum of their moduli; instead, the error in one interval is neutralized to some extent by the error in the next. An error estimate for an integration formula obtained by summing moduli may therefore be unrealistically high. In view of these facts, this paper examines types of approximation satisfying a 'best' approximation condition for a given function, concentrating mainly on polynomial approximation. For the approximation, a polynomial of first degree, p(x) = a_0 + a_1 x, is considered as a good approximation to a given function on the interval (a, b).
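The cancellation of per-interval errors described above can be sketched numerically. The example below (an illustration, not taken from the paper) applies the single-interval trapezoidal rule to f(x) = sin x over [0, 2π]: the rule underestimates where f is concave and overestimates where it is convex, so the per-interval errors largely cancel and the total error is far smaller than the sum of their moduli.

```python
import math

def trapezoid_piece(f, a, b):
    """Single-interval trapezoidal estimate of the integral of f over [a, b]."""
    return 0.5 * (b - a) * (f(a) + f(b))

f = math.sin
n = 8
nodes = [2 * math.pi * i / n for i in range(n + 1)]

# The exact integral of sin over [a, b] is cos(a) - cos(b).
piece_errors = []
for a, b in zip(nodes, nodes[1:]):
    exact = math.cos(a) - math.cos(b)
    piece_errors.append(trapezoid_piece(f, a, b) - exact)

# Errors are negative on (0, pi) (concave: underestimate) and positive
# on (pi, 2*pi) (convex: overestimate), so they largely cancel.
total = sum(piece_errors)
sum_abs = sum(abs(e) for e in piece_errors)
print(total, sum_abs)  # |total| is far smaller than the sum of moduli
```

Summing the moduli would suggest an error of about 0.2, while the actual error of the composite rule is essentially zero by symmetry.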

2    PROPOSED METHOD
2.1 Reflection on Approximation
This section covers types of approximation satisfying the condition of 'best' approximation for a given function, concentrating mainly on polynomial approximation. For the approximation, a polynomial of first degree, p(x) = a_0 + a_1 x, is considered as a good approximation to a given continuous function f on the interval (0, 1).
Under this setting, the two following degree-one approximations may be considered.

The Taylor polynomial at x = 0 (assuming f'(0) exists):

    p_1(x) = f(0) + x f'(0)                                                   (2)

The interpolating polynomial constructed at x = 0 and x = 1:

    p_1(x) = f(0) + x [f(1) - f(0)]                                           (3)

A case could be made that a Taylor or interpolating polynomial constructed at some other point would be more suitable. However, these approximations are designed to imitate the behavior of f at only one or two points.
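The point that such polynomials imitate f only near their construction points can be illustrated numerically. The sketch below (taking f(x) = e^x purely for illustration; this function is an assumption, not one used in the paper) compares the degree-one Taylor polynomial at x = 0, which is 1 + x, with the interpolant through (0, f(0)) and (1, f(1)), by measuring their maximum errors on (0, 1).

```python
import math

f = math.exp

def taylor(x):
    # Degree-one Taylor polynomial of e^x at x = 0: f(0) + x*f'(0) = 1 + x
    return 1.0 + x

def interp(x):
    # Degree-one interpolant through (0, e^0) and (1, e^1)
    return 1.0 + (math.e - 1.0) * x

xs = [i / 1000 for i in range(1001)]
err_taylor = max(abs(f(x) - taylor(x)) for x in xs)
err_interp = max(abs(f(x) - interp(x)) for x in xs)
print(err_taylor, err_interp)
```

The Taylor polynomial matches f well near x = 0 but drifts badly by x = 1 (error e - 2 ≈ 0.718), while the interpolant spreads its error more evenly (maximum about 0.212), showing how the choice of construction points shapes where the approximation is good.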
A polynomial of first degree in x, as above, can be a good approximation to f throughout the interval (0, 1). One natural choice is the pair a_0, a_1 for which

    max_{0 <= x <= 1} | f(x) - (a_0 + a_1 x) |

is minimized over all choices of a_0 and a_1. This is called the minimax (or Chebyshev) approximation. Instead of minimizing the maximum error between the (continuous) function f and the approximating straight line, one may instead minimize the 'sum' of the moduli of the errors.
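For a concrete minimax example (again taking f(x) = e^x on [0, 1] as an illustrative assumption, not a function from the paper), the equioscillation conditions for a straight line give a closed form: the slope equals the secant slope e - 1, and the error attains its maximum E with alternating sign at x = 0, at an interior point c, and at x = 1. The sketch below builds this line and checks numerically that no grid point exceeds the predicted error.

```python
import math

f = math.exp
# Minimax (Chebyshev) straight line a0 + a1*x for e^x on [0, 1],
# derived from the equioscillation conditions:
a1 = math.e - 1                  # slope equals the secant slope
c = math.log(a1)                 # interior point where the error peaks
a0 = 0.5 * (1 + a1 * (1 - c))    # intercept from equioscillation
E = 1 - a0                       # the equioscillating error value

xs = [i / 1000 for i in range(1001)]
max_err = max(abs(f(x) - (a0 + a1 * x)) for x in xs)
print(a0, a1, E, max_err)  # max error over [0, 1] equals E ~ 0.1059
```

The error f(x) - (a_0 + a_1 x) takes the values +E, -E, +E at x = 0, c, 1, the hallmark of a minimax approximation.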
The choice of a_0 and a_1 for which

    ∫_0^1 | f(x) - (a_0 + a_1 x) | dx

is minimized is called the best L_1 approximation. It should be noted that the L_1 approximation gives equal weight to all the errors, while the minimax approximation concentrates on the error of largest modulus. Other approximations lie, in a sense, between the extremes of L_1 and minimax approximation: for a fixed value of p >= 1, a_0 and a_1 are chosen so that

    ∫_0^1 | f(x) - (a_0 + a_1 x) |^p dx

is minimized, giving the best L_p approximation. Because of the pth power, the error of largest modulus tends to dominate this integral as p increases (with f continuous). It can be shown that, as p -> ∞, the best L_p approximation tends to the minimax approximation, which is therefore sometimes called the best L_∞ approximation.
Thus the L_p approximations form a spectrum ranging from the L_1 approximation to the minimax approximation. Of these, only the case p = 2 is commonly used; the best L_2 approximation is better known as the best least squares approximation.
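The p = 2 case is tractable because minimizing ∫_0^1 (f(x) - a_0 - a_1 x)^2 dx leads to linear normal equations in a_0 and a_1. The sketch below (again using f(x) = e^x as an illustrative assumption) computes the required moments with composite Simpson's rule, one of the quadrature rules named in the index terms, and solves the 2x2 system; for e^x the exact answer is known to be a_0 = 4e - 10, a_1 = 18 - 6e.

```python
import math

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule (n even) for the integral of g over [a, b]."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

f = math.exp
# Normal equations for minimizing int_0^1 (f - a0 - a1*x)^2 dx:
#   a0 + a1/2 = int_0^1 f(x) dx
#   a0/2 + a1/3 = int_0^1 x f(x) dx
m0 = simpson(f, 0, 1)
m1 = simpson(lambda x: x * f(x), 0, 1)
a1 = 12 * (m1 - m0 / 2)
a0 = m0 - a1 / 2
print(a0, a1)  # close to 4e - 10 and 18 - 6e
```

Setting the partial derivatives of the squared-error integral to zero yields exactly the two moment equations in the comments, which is why least squares is the one L_p case with a direct linear solution.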
