
Square Roots via Newton’s Method


S. G. Johnson, MIT Course 18.335, February 4, 2015



1 Overview

Numerical methods can be distinguished from other branches of analysis and computer science by three characteristics:

- They work with arbitrary real numbers (and vector spaces/extensions thereof): the desired results are not restricted to integers or exact rationals (although in practice we only ever compute rational approximations of irrational results).

- Like in computer science (= math + time = math + money), we are concerned not only with the existence and correctness of the solutions (as in analysis), but also with the time (and other computational resources, e.g. memory) required to compute the result.

- We are also concerned with the accuracy of the results, because in practice we only ever have approximate answers:

  - Some algorithms may be intrinsically approximate: like the Newton's-method example shown below, they converge towards the desired result but never reach it in a finite number of steps. How fast they converge is a key question.

  - Arithmetic with real numbers is approximate on a computer, because we approximate the set $\mathbb{R}$ of real numbers by the set $F$ of floating-point numbers, and the result of every elementary operation ($+$, $-$, $\times$, $\div$) is rounded to the nearest element of $F$. We need to understand $F$ and how the accumulation of these rounding errors affects different algorithms.

2 Square roots

A classic algorithm that illustrates many of these concerns is Newton's method to compute square roots $x = \sqrt{a}$ for $a > 0$, i.e. to solve $x^2 = a$.

The algorithm starts with some guess $x_1 > 0$ and computes the sequence of improved guesses

$$x_{n+1} = \frac{1}{2}\left(x_n + \frac{a}{x_n}\right).$$

The intuition is very simple: if $x_n$ is too big ($> \sqrt{a}$), then $a/x_n$ will be too small ($< \sqrt{a}$), and so their arithmetic mean $x_{n+1}$ will be closer to $\sqrt{a}$. It turns out that this algorithm is very old, dating at least to the ancient Babylonians circa 1000 BCE.[1] In modern times, this was seen to be equivalent to Newton's method to find a root of $f(x) = x^2 - a$. Recall that Newton's method finds an approximate root of $f(x) = 0$ from a guess $x_n$ by approximating $f(x)$ by its tangent line $f(x_n) + f'(x_n)(x - x_n)$, leading to an improved guess $x_{n+1}$ from the root of the tangent:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},$$

and for $f(x) = x^2 - a$ this yields the Babylonian formula above.

[1] See Boyer, A History of Mathematics, ch. 3; the Babylonians used base 60, and a famous tablet (YBC 7289) shows $\sqrt{2}$ to about six decimal places.

2.1 Convergence proof

A classic analysis text (Rudin, Principles of Mathematical Analysis) approaches the proof of convergence of this algorithm as follows: we prove that the sequence converges monotonically and is bounded, and hence it has a limit; we then easily see that the limit is $\sqrt{a}$.

In particular:

1. Suppose $x_n > \sqrt{a}$. Then it follows that $\sqrt{a} < x_{n+1} < x_n$:

   (a) $x_{n+1} - x_n = \frac{1}{2}\left(\frac{a}{x_n} - x_n\right) = \frac{a - x_n^2}{2x_n} < 0$.

   (b) $x_{n+1}^2 - a = \frac{1}{4}\left(x_n^2 + 2a + \frac{a^2}{x_n^2}\right) - a = \frac{1}{4}\left(x_n^2 - 2a + \frac{a^2}{x_n^2}\right) = \frac{1}{4}\left(x_n - \frac{a}{x_n}\right)^2 = \frac{(x_n^2 - a)^2}{4x_n^2} > 0$ (regardless of whether $x_n > \sqrt{a}$).

2. A monotonically decreasing sequence that is bounded below converges (a standard theorem, found in Rudin). If $x_1 < \sqrt{a}$, the second property above means that $x_2 > \sqrt{a}$; then for $n \geq 2$ the sequence is monotonically decreasing and bounded below by $\sqrt{a}$.

3. The limit $x = \lim_{n \to \infty} x_n$ satisfies $x = \frac{1}{2}\left(x + \frac{a}{x}\right)$, which is easily solved to show that $x^2 = a$.

However, this proof by itself tells us nothing about how fast the sequence converges.

2.2 Convergence example

Using the accompanying Julia notebook, we will apply this method to compute the most famous square root of all, $\sqrt{2}$.
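As a warm-up for the high-precision run, the iteration and the monotonicity properties proved above can be checked in a few lines. This is a hypothetical Python sketch of my own (the course's accompanying notebook is in Julia), and `newton_sqrt_seq` is a name I made up, not a function from the notes:

```python
import math

def newton_sqrt_seq(a, x1, iters):
    """Iterates of x_{n+1} = (x_n + a/x_n)/2, starting from x1."""
    seq = [x1]
    for _ in range(iters):
        seq.append(0.5 * (seq[-1] + a / seq[-1]))
    return seq

seq = newton_sqrt_seq(2.0, 10.0, 6)  # deliberately bad starting guess
root = math.sqrt(2.0)

# After the first step, the iterates stay above sqrt(a) and decrease
# monotonically, exactly as in the convergence proof.
assert all(x > root for x in seq[1:])
assert all(x > y for x, y in zip(seq[1:], seq[2:]))
print(seq[-1])  # accurate to roughly 8 digits after 6 iterations
```

Even from the poor starting guess $x_1 = 10$, the sequence drops below 1.42 within five iterations, illustrating how forgiving this particular Newton iteration is about initialization.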

(Supposedly, the Greek who discovered that $\sqrt{2}$ is irrational was thrown off a cliff by his Pythagorean colleagues.) As a starting guess, we will use $x_1 = 1$, and compute the resulting sequence with about 60 digits of accuracy. Looking carefully at the correct digits of each iterate, we see that the number of accurate digits approximately doubles on each iteration. This fantastic convergence rate means that we only need seven Newton iterations to obtain more than 60 accurate digits; the accuracy is quickly limited only by the precision of our floating-point numbers, a topic we will discuss in more detail later.

2.3 Convergence rate

Let us analyze the convergence rate quantitatively: given a small error $\epsilon_n$ on the $n$-th iteration, we will determine how much smaller the error $\epsilon_{n+1}$ is on the next iteration. In particular, let us define $x_n = x(1 + \epsilon_n)$, where $x = \sqrt{a}$ is the exact solution.

This corresponds to defining $|\epsilon_n|$ as the relative error:

$$|\epsilon_n| = \frac{|x_n - x|}{|x|},$$

also called the fractional error (the error as a fraction of the exact value). Relative error is typically the most useful way to quantify the error because it is a dimensionless quantity (independent of the units or overall scaling of $x$). The logarithm ($-\log_{10} |\epsilon_n|$) of the relative error is roughly the number of accurate significant digits in the answer.

We can plug this definition of $x_n$ (and $x_{n+1}$) in terms of $\epsilon_n$ (and $\epsilon_{n+1}$) into our Newton iteration formula and solve for the iteration of $\epsilon$, using the fact that $a/x = x$ to divide both sides by $x$:

$$1 + \epsilon_{n+1} = \frac{1}{2}\left(1 + \epsilon_n + \frac{1}{1 + \epsilon_n}\right) = \frac{1}{2}\left[1 + \epsilon_n + 1 - \epsilon_n + \epsilon_n^2 + O(\epsilon_n^3)\right],$$

where we have Taylor-expanded $(1 + \epsilon_n)^{-1}$.
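As a check on this expansion (a small addition of mine, not a step spelled out in the original text), the same algebra can be carried out exactly, with no Taylor truncation:

```latex
1 + \epsilon_{n+1}
  = \frac{1}{2}\left(1 + \epsilon_n + \frac{1}{1+\epsilon_n}\right)
  = \frac{(1+\epsilon_n)^2 + 1}{2(1+\epsilon_n)}
\quad\Longrightarrow\quad
\epsilon_{n+1} = \frac{\epsilon_n^2}{2(1+\epsilon_n)} ,
```

which agrees term-by-term with the truncated expansion, and also shows that $\epsilon_{n+1} > 0$ whenever $\epsilon_n \neq 0$ (the iterates approach the root from above), consistent with the convergence proof.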

The $O(\epsilon_n^3)$ means roughly terms of order $\epsilon_n^3$ or smaller; we will define it more precisely later on. Because the sequence converges, we are entitled to assume that $|\epsilon_n| \ll 1$ for sufficiently large $n$, and so the $\epsilon_n^3$ and higher-order terms are eventually negligible compared to $\epsilon_n^2$. We obtain:

$$\epsilon_{n+1} = \frac{\epsilon_n^2}{2} + O(\epsilon_n^3),$$

which means that the error roughly squares (and halves) on each iteration once we are close to the solution. Squaring the relative error corresponds precisely to doubling the number of significant digits, and hence explains the phenomenon above. This is known as quadratic convergence (not to be confused with second-order convergence, which unfortunately refers to an entirely different concept).
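Both the digit doubling of the convergence example and this error recurrence can be observed numerically. The following is a hedged sketch of my own (not from the notes, whose notebook uses Julia), running the iteration in roughly 70-digit decimal arithmetic via Python's `decimal` module:

```python
from decimal import Decimal, getcontext

getcontext().prec = 70  # ~70 significant decimal digits
a = Decimal(2)
exact = a.sqrt()        # high-precision reference value of sqrt(2)

x = Decimal(1)          # starting guess x_1 = 1
errs = []
for _ in range(7):
    x = (x + a / x) / 2
    errs.append(abs(x - exact) / exact)  # relative error |eps_n|

# -log10 of the relative error ~ number of accurate significant digits;
# it roughly doubles on each iteration.
digits = [round(float(-e.log10())) for e in errs if e > 0]
print(digits)

# Once close to the solution, eps_{n+1} is approximately eps_n^2 / 2.
ratio = errs[-2] / errs[-3] ** 2
print(ratio)  # close to 0.5
```

Seven iterations suffice to reach the limits of the 70-digit working precision, matching the claim that seven Newton steps give more than 60 accurate digits.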


