Chapter 3 Interpolation - MIT OpenCourseWare

Interpolation is the problem of fitting a smooth curve through a given set of points, generally as the graph of a function. It is useful at least in data analysis (interpolation is a form of regression), industrial design, signal processing (digital-to-analog conversion), and in numerical analysis. It is one of those important recurring concepts in applied mathematics. In this chapter, we will immediately put interpolation to use to formulate high-order quadrature and differentiation rules.

3.1 Polynomial interpolation

Given $N+1$ points $x_j \in \mathbb{R}$, $0 \le j \le N$, and sample values $y_j = f(x_j)$ of a function at these points, the polynomial interpolation problem consists in finding a polynomial $p_N(x)$ of degree $N$ which reproduces those values:

$$ y_j = p_N(x_j), \qquad j = 0, \ldots, N. $$

In other words, the graph of the polynomial should pass through the points $(x_j, y_j)$. A degree-$N$ polynomial can be written as $p_N(x) = \sum_{n=0}^{N} a_n x^n$ for some coefficients $a_0, \ldots, a_N$.

For interpolation, the number of degrees of freedom ($N+1$ coefficients) in the polynomial matches the number of points where the function should be fit. If the degree of the polynomial is strictly less than $N$, we cannot in general pass it through the points $(x_j, y_j)$. We can still try to pass a polynomial (e.g., a line) in the best approximate manner, but this is a problem in approximation rather than interpolation; we will return to it in a later chapter.

Let us first see how the interpolation problem can be solved numerically in a direct way. Substitute the expression of $p_N$ into the interpolating equations $y_j = p_N(x_j)$:

$$ \sum_{n=0}^{N} a_n x_j^n = y_j, \qquad j = 0, \ldots, N. $$

In these $N+1$ equations indexed by $j$, the unknowns are the coefficients $a_0, \ldots, a_N$. We are in presence of a linear system

$$ V a = y, \qquad \text{i.e.,} \quad \sum_{n=0}^{N} V_{jn} a_n = y_j, $$

with $V$ the so-called Vandermonde matrix, $V_{jn} = x_j^n$, i.e.,

$$ V = \begin{pmatrix} 1 & x_0 & \cdots & x_0^N \\ 1 & x_1 & \cdots & x_1^N \\ \vdots & \vdots & & \vdots \\ 1 & x_N & \cdots & x_N^N \end{pmatrix}. $$

We can then use numerical software like Matlab to construct the vector of abscissas $x_j$, the right-hand side of values $y_j$, and the matrix $V$, and numerically solve the system with an instruction like a = V\y (in Matlab). This gives us the coefficients of the desired polynomial. The polynomial can now be plotted in between the grid points $x_j$ (on a finer grid) in order to display the interpolant.

Historically, mathematicians such as Lagrange and Newton did not have access to computers to display interpolants, so they found explicit (and elegant) formulas for the coefficients of the interpolation polynomial. This not only simplified computations for them, but also allowed them to understand the error of polynomial interpolation, i.e., the difference $f(x) - p_N(x)$. Let us spend a bit of time retracing their steps. (They were concerned with applications such as fitting curves to celestial trajectories.)
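For concreteness, here is a minimal Matlab/Octave sketch of this direct approach. The node locations and the sample function $e^x$ are illustrative choices, not taken from the notes, and the element-wise power xj .^ (0:N) assumes implicit expansion (Matlab R2016b or later, or Octave).

% Sketch: direct monomial-basis interpolation via the Vandermonde system.
% Nodes and the function f(x) = exp(x) are illustrative choices.
N  = 5;
xj = linspace(-1, 1, N+1)';      % abscissas x_0, ..., x_N (column vector)
yj = exp(xj);                    % sample values y_j = f(x_j)
V  = xj .^ (0:N);                % Vandermonde matrix, V(j+1, n+1) = x_j^n
a  = V \ yj;                     % coefficients a_0, ..., a_N, as in the text
xx = linspace(-1, 1, 200)';      % finer grid on which to display the interpolant
pN = (xx .^ (0:N)) * a;          % evaluate p_N(x) = sum_n a_n x^n on the fine grid
plot(xj, yj, 'o', xx, pN, '-');  % data points and interpolant

For large $N$ the Vandermonde matrix becomes badly conditioned, which is one more reason to prefer the explicit Lagrange formulas derived next.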

We'll define the interpolation error from the uniform ($L^\infty$) norm of the difference $f - p_N$:

$$ \| f - p_N \|_\infty := \max_x | f(x) - p_N(x) |, $$

where the maximum is taken over the interval $[x_0, x_N]$.

Call $P_N$ the space of real-valued degree-$N$ polynomials:

$$ P_N = \Big\{ \sum_{n=0}^{N} a_n x^n \; : \; a_n \in \mathbb{R} \Big\}. $$

Lagrange's solution to the problem of polynomial interpolation is based on the following construction.

Lemma 1. (Lagrange elementary polynomials) Let $\{x_j, \, j = 0, \ldots, N\}$ be a collection of disjoint numbers. For each $k = 0, \ldots, N$, there exists a unique degree-$N$ polynomial $L_k(x)$ such that

$$ L_k(x_j) = \delta_{jk} = \begin{cases} 1 & \text{if } j = k; \\ 0 & \text{if } j \neq k. \end{cases} $$

Proof. $L_k$ has a root at each $x_j$ with $j \neq k$, so each factor $(x - x_j)$, $j \neq k$, must divide $L_k$. Indeed, fix such a $j$ and divide $L_k(x)$ by $(x - x_j)$.

We obtain $L_k(x) = (x - x_j) q(x) + r(x)$, where $r(x)$ is a remainder of lower order than $x - x_j$, i.e., a constant. Since $L_k(x_j) = 0$ we must have $r(x) = 0$, hence $(x - x_j)$ must be a factor of $L_k(x)$. The same is true of any $(x - x_j)$ for $j \neq k$. There are $N$ such factors, which exhausts the degree $N$, so the only remaining degree of freedom in $L_k$ is a multiplicative constant:

$$ L_k(x) = C \prod_{j \neq k} (x - x_j). $$

Evaluating this expression at $x = x_k$, we get

$$ 1 = C \prod_{j \neq k} (x_k - x_j), \qquad \text{so} \qquad C = \frac{1}{\prod_{j \neq k} (x_k - x_j)}. $$

Hence the only possible expression for $L_k$ is

$$ L_k(x) = \frac{\prod_{j \neq k} (x - x_j)}{\prod_{j \neq k} (x_k - x_j)}. $$

These elementary polynomials form a basis (in the sense of linear algebra) for expanding any polynomial in $P_N$.

Theorem 4. (Lagrange interpolation theorem) Let $\{x_j, \, j = 0, \ldots, N\}$ be a collection of disjoint real numbers, and let $\{y_j, \, j = 0, \ldots, N\}$ be a collection of real numbers. Then there exists a unique $p_N \in P_N$ such that

$$ p_N(x_j) = y_j, \qquad j = 0, \ldots, N. $$

Its expression is

$$ p_N(x) = \sum_{k=0}^{N} y_k L_k(x), $$

where $L_k(x)$ are the Lagrange elementary polynomials.

Proof. The justification that this expression interpolates is obvious:

$$ p_N(x_j) = \sum_{k=0}^{N} y_k L_k(x_j) = \sum_{k=0}^{N} y_k \delta_{jk} = y_j. $$

It remains to see that $p_N$ is the unique interpolating polynomial. For this purpose, assume that both $p_N$ and $q_N$ take on the value $y_j$ at $x_j$. Then $r_N = p_N - q_N$ is a polynomial of degree $N$ that has a root at each of the $N+1$ points $x_0, \ldots, x_N$.

The fundamental theorem of algebra, however, says that a nonzero polynomial of degree $N$ can only have $N$ (complex) roots. Therefore, the only way for $r_N$ to have $N+1$ roots is that it is the zero polynomial. So $p_N = q_N$.

By definition,

$$ p_N(x) = \sum_{k=0}^{N} f(x_k) L_k(x) $$

is called the Lagrange interpolation polynomial of $f$.

Example. (Linear interpolation through $(x_1, y_1)$ and $(x_2, y_2)$)

$$ L_1(x) = \frac{x - x_2}{x_1 - x_2}, \qquad L_2(x) = \frac{x - x_1}{x_2 - x_1}, $$

$$ p_1(x) = y_1 L_1(x) + y_2 L_2(x) = \frac{y_1 x_2 - y_2 x_1}{x_2 - x_1} + \frac{y_2 - y_1}{x_2 - x_1}\, x = y_1 + \frac{y_2 - y_1}{x_2 - x_1} (x - x_1). $$

Example 8. (An example from Süli-Mayers) Consider $f(x) = e^x$, and interpolate it by a parabola ($N = 2$) from three samples at $x_0 = -1$, $x_1 = 0$, $x_2 = 1$. We build

$$ L_0(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} = \frac{x(x-1)}{2}. $$

Similarly,

$$ L_1(x) = 1 - x^2, \qquad L_2(x) = \frac{x(x+1)}{2}. $$

So the quadratic interpolant is

$$ p_2(x) = e^{-1} L_0(x) + e^0 L_1(x) + e^1 L_2(x) = 1 + \sinh(1)\, x + (\cosh(1) - 1)\, x^2 \approx 1 + 1.1752\, x + 0.5431\, x^2. $$

Another polynomial that approximates $e^x$ reasonably well on $[-1, 1]$ is the Taylor expansion about $x = 0$:

$$ t_2(x) = 1 + x + \frac{x^2}{2}. $$
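As a quick check of Example 8, here is a small Matlab/Octave sketch (my own, not from the notes) that evaluates $p_2$ in Lagrange form and compares it with the Taylor polynomial $t_2$ on $[-1, 1]$.

% Sketch: Example 8 revisited -- quadratic Lagrange interpolant of exp(x) at -1, 0, 1.
xj = [-1, 0, 1];
yj = exp(xj);
xx = linspace(-1, 1, 201);
L0 = (xx - xj(2)) .* (xx - xj(3)) / ((xj(1) - xj(2)) * (xj(1) - xj(3)));
L1 = (xx - xj(1)) .* (xx - xj(3)) / ((xj(2) - xj(1)) * (xj(2) - xj(3)));
L2 = (xx - xj(1)) .* (xx - xj(2)) / ((xj(3) - xj(1)) * (xj(3) - xj(2)));
p2 = yj(1) * L0 + yj(2) * L1 + yj(3) * L2;   % Lagrange form of the interpolant
t2 = 1 + xx + xx.^2 / 2;                     % Taylor expansion about x = 0
fprintf('max |exp(x) - p2(x)| on [-1,1]: %.4f\n', max(abs(exp(xx) - p2)));
fprintf('max |exp(x) - t2(x)| on [-1,1]: %.4f\n', max(abs(exp(xx) - t2)));

Both maximum errors come out to the same order of magnitude, which mirrors the remark below that $p_2$ and $t_2$ are not very different.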

Manifestly, $p_2$ is not very different from $t_2$. (Insert picture here)

Let us now move on to the main result concerning the interpolation error of smooth functions.

Theorem. Let $f \in C^{N+1}[a, b]$ for some $N > 0$, and let $\{x_j : j = 0, \ldots, N\}$ be a collection of disjoint reals in $[a, b]$. Consider $p_N$ the Lagrange interpolation polynomial of $f$ at the $x_j$. Then for every $x \in [a, b]$ there exists $\xi(x) \in [a, b]$ such that

$$ f(x) - p_N(x) = \frac{f^{(N+1)}(\xi(x))}{(N+1)!}\, \pi_{N+1}(x), \qquad \text{where} \qquad \pi_{N+1}(x) = \prod_{j=0}^{N} (x - x_j). $$

An estimate on the interpolation error follows directly from this theorem. Set

$$ M_{N+1} = \max_{x \in [a, b]} | f^{(N+1)}(x) | $$

(which is well defined since $f^{(N+1)}$ is continuous by assumption, hence reaches its lower and upper bounds). Then

$$ | f(x) - p_N(x) | \le \frac{M_{N+1}}{(N+1)!}\, | \pi_{N+1}(x) |. $$

In particular, we see that the interpolation error is zero when $x = x_j$ for some $j$, as it should be.

Let us now prove the theorem.

Proof. (The proof can be found in Süli-Mayers, Chapter 6.)

In conclusion, the interpolation error:
- depends on the smoothness of $f$ via the high-order derivative $f^{(N+1)}$;
- has a factor $1/(N+1)!$ that decays fast as the order $N \to \infty$;
- is directly proportional to the value of $\pi_{N+1}(x)$, indicating that the interpolant may be better in some places than others.
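To make the bound concrete, here is a short worked instance (my own illustration, not in the notes) applying it to Example 8, where $f(x) = e^x$, $N = 2$, and the nodes are $-1, 0, 1$ on $[a, b] = [-1, 1]$:

$$ M_3 = \max_{x \in [-1,1]} |f'''(x)| = e, \qquad \pi_3(x) = (x+1)\, x\, (x-1) = x^3 - x, \qquad \max_{x \in [-1,1]} |x^3 - x| = \frac{2}{3\sqrt{3}}, $$

so

$$ |e^x - p_2(x)| \;\le\; \frac{M_3}{3!} \max_{x \in [-1,1]} |\pi_3(x)| \;=\; \frac{e}{6} \cdot \frac{2}{3\sqrt{3}} \;\approx\; 0.17 \qquad \text{for all } x \in [-1, 1]. $$

The error observed numerically is somewhat smaller, since the bound takes the worst case of both factors even though their maxima occur at different points.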

A natural follow-up question is that of convergence: can we always expect convergence of the polynomial interpolant as $N \to \infty$? In other words, does the factor $1/(N+1)!$ always win over the other two factors? Unfortunately, the answer is no in general. There are examples of very smooth (analytic) functions for which polynomial interpolation diverges, particularly so near the boundaries of the interpolation interval. This behavior is called the Runge phenomenon, and is usually illustrated by means of the following example.

Example 9. (Runge phenomenon) Let $f(x) = 1/(1 + x^2)$ for $x \in [-5, 5]$. Interpolate it at the equispaced points $x_j = 10 j / N$, where $j = -N/2, \ldots, N/2$ and $N$ is even. It is easy to check numerically that the interpolant diverges near the edges of $[-5, 5]$ as $N \to \infty$.
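A short Matlab/Octave sketch of this experiment (my own illustration; it uses the Runge function $f(x) = 1/(1+x^2)$ named above, evaluates the interpolant in Lagrange form, and the range of $N$ is an arbitrary choice):

% Sketch: Runge phenomenon on [-5, 5] with equispaced nodes.
f  = @(x) 1 ./ (1 + x.^2);
xx = linspace(-5, 5, 1001);           % fine grid on which to measure the error
for N = 4:4:24                        % increasing (even) interpolation orders
    xj = linspace(-5, 5, N+1);        % equispaced nodes x_j = 10j/N, j = -N/2..N/2
    yj = f(xj);
    pN = zeros(size(xx));
    for k = 1:N+1                     % evaluate p_N in Lagrange form
        Lk = ones(size(xx));
        for j = [1:k-1, k+1:N+1]
            Lk = Lk .* (xx - xj(j)) / (xj(k) - xj(j));
        end
        pN = pN + yj(k) * Lk;
    end
    fprintf('N = %2d   max error = %.3e\n', N, max(abs(f(xx) - pN)));
end

The printed maximum error grows with $N$ instead of decaying, and the largest discrepancies occur near $x = \pm 5$.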

See the Trefethen textbook on page 44 for an illustration of the Runge phenomenon. (Figure here)

If we had done the same numerical experiment for $x \in [-1, 1]$, the interpolant would have converged. This shows that the size of the interval matters: there is divergence when the size of the interval is larger than the "features", or characteristic length scale, of the function (here, the width of the bump near the origin).

The analytical reason for the divergence in the example above is due in no small part to the very large values taken on by $\pi_{N+1}(x)$ far away from the origin, in contrast to the relatively small values it takes on near the origin. This is a problem intrinsic to equispaced grids. We will be more quantitative about this issue in the section on Chebyshev interpolants, where a remedy involving non-equispaced grid points will be explained.

As a conclusion, polynomial interpolants can be good for small $N$ and on small intervals, but may fail to converge (quite dramatically) when the interpolation interval is large.

3.2 Polynomial rules for integration

In this section, we return to the problem of approximating $\int_a^b u(x)\, dx$ by a weighted sum of samples $u(x_j)$, also called a quadrature.
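In symbols, the strategy described in the next paragraph amounts to integrating the Lagrange form of the interpolant, which produces quadrature weights $w_k$; this is a compact restatement of the plan rather than a formula from the notes:

$$ \int_a^b u(x)\, dx \;\approx\; \int_a^b p_N(x)\, dx \;=\; \sum_{k=0}^{N} u(x_k) \int_a^b L_k(x)\, dx \;=:\; \sum_{k=0}^{N} w_k\, u(x_k). $$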

The plan is to form interpolants of the data, integrate those interpolants, and deduce corresponding quadrature formulas. We can formulate rules of arbitrarily high order this way, although in practice we almost never go beyond order 4 with polynomial rules.

3.2.1 Polynomial rules

Without loss of generality, consider the local interpolants of $u(x)$ formed near the origin, with $x_0 = 0$, $x_1 = h$ and $x_{-1} = -h$. The rectangle rule does not belong in this section: it is not formed from an interpolant.

- The trapezoidal rule, where we approximate $u(x)$ by a line joining $(0, u(x_0))$ and $(h, u(x_1))$ in $[0, h]$. We need 2 derivatives to control the error:

$$ u(x) = p_1(x) + \frac{u''(\xi(x))}{2}\, x (x - h), \qquad p_1(x) = u(0) L_0(x) + u(h) L_1(x), $$

$$ L_0(x) = \frac{h - x}{h}, \qquad L_1(x) = \frac{x}{h}, $$

$$ \int_0^h L_0(x)\, dx = \int_0^h L_1(x)\, dx = \frac{h}{2} \quad \text{(areas of triangles)}, $$

$$ \int_0^h \Big| \frac{u''(\xi(x))}{2}\, x (x - h) \Big|\, dx \le C \max_{\xi} |u''(\xi)|\, h^3. $$

The result is

$$ \int_0^h u(x)\, dx = h\, \frac{u(0) + u(h)}{2} + O(h^3). $$
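A quick numerical check of the $O(h^3)$ single-interval error claim (Matlab/Octave sketch; the integrand $e^x$ and the list of step sizes are illustrative choices):

% Sketch: check the O(h^3) single-interval error of the trapezoidal rule.
u     = @(x) exp(x);
exact = @(h) exp(h) - 1;                  % integral of exp(x) over [0, h]
for h = [0.4, 0.2, 0.1, 0.05]
    trap = h * (u(0) + u(h)) / 2;         % h * (u(0) + u(h)) / 2
    err  = abs(trap - exact(h));
    fprintf('h = %.2f   error = %.2e   error/h^3 = %.4f\n', h, err, err / h^3);
end

The ratio error/h^3 settles near a constant (roughly 1/12 for this integrand) as $h$ decreases, consistent with the $O(h^3)$ local error.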

