
INTRODUCTION TO COMPUTATIONAL MATHEMATICS



Course Notes for CM 271 / AM 341 / CS 371

H. De Sterck and P. Ullrich
Department of Applied Mathematics
University of Waterloo

March 20th, 2006

These notes have been funded by ...

Contents

1  Errors and Error Propagation
     Sources of Error
     Floating Point Numbers and Operations
       A Binary Computer
       Standard Floating Point Systems
       Machine Precision
       Floating Point Operations
     Condition of a Mathematical Problem
     Stability of a Numerical Algorithm

2  Root Finding
     Introduction
     Four Algorithms for Root Finding
       Bisection Method
       Fixed Point Iteration
       Newton's Method
       Secant Method
       Stopping Criteria for Iterative Functions
     Rate of Convergence
     Convergence Theory
       Fixed Point Iteration
       Newton's Method
       Secant Method
       Overview

3  Polynomial Interpolation
     The Vandermonde Matrix
     Lagrange Form
     Hermite Interpolation
     Piecewise Polynomial Interpolation
       Piecewise Linear Interpolation
       Spline Interpolation
       Further Generalizations

4  Integration
     Integration of an Interpolating Polynomial
       Midpoint Rule: y(x) degree 0
       Trapezoid Rule: y(x) degree 1
       Simpson Rule: y(x) degree 2
     Accuracy, Truncation Error and Degree of Precision
     Composite Integration
       Composite Trapezoid Rule
       Composite Simpson Rule
     Gaussian Integration

5  Discrete Fourier Methods
     Introduction
     Fourier Series
       Real Form of the Fourier Series
       Complex Form of the Fourier Series
       Fourier Series and Orthogonal Basis
     Discrete Fourier Transform
     Aliasing and the Sample Theorem
     Fast Fourier Transform
     DFT and Orthogonal Basis
     Power Spectrum and Parseval's Theorem

6  Numerical Linear Algebra
     Introduction
     Gaussian Elimination
       LU Factorization
       Pivoting
       Algorithm and Computational Cost
       Determinants
     Condition and Stability
       The Matrix Norm
       Condition of the Problem Ax = b
       Stability of the LU Decomposition Algorithm
     Iterative Methods for Solving Ax = b
       Jacobi and Gauss-Seidel Methods
       Convergence of Iterative Methods

Introduction to Computational Mathematics

The goal of computational mathematics, put simply, is to find or develop algorithms that solve mathematical problems computationally (i.e., using computers). In particular, we desire that any algorithm we develop fulfills four primary properties:

- Accuracy. An accurate algorithm is able to return a result that is numerically very close to the correct, or analytical, result.
- Efficiency. An efficient algorithm is able to quickly solve the mathematical problem with reasonable computational resources.
- Robustness. A robust algorithm works for a wide variety of inputs x.

- Stability. A stable algorithm is not sensitive to small changes in the input x.

1  Errors and Error Propagation

In examining computational mathematics problems, we shall generally consider a problem of the following form:

Problem: Consider an arbitrary problem P with input x. We must compute the desired output z = f_P(x).

In general our only tools for solving such problems are primitive mathematical operations (for example, addition, subtraction, multiplication and division) combined with flow constructs (if statements and loops). As such, even simple problems such as evaluating the exponential function may be difficult computationally.

Example: Consider the problem P defined by the evaluation of the exponential function z = exp(x). We wish to find an approximation ẑ for z = exp(x). Recall from calculus that the Taylor series expansion of the exponential function is given by

    exp(x) = 1 + x + x^2/2! + x^3/3! + ... = Σ_{i=0}^∞ x^i/i!        (1.1)

Since we obviously cannot compute an infinite sum computationally without also using infinite resources, consider the truncated series constructed by only summing the first n terms.

This yields the expansion

    ẑ = Σ_{i=0}^n x^i/i!        (1.2)

As we will later see, this is actually a poor algorithm for computing the exponential, although the reason for this may not be immediately obvious.

1.1  Sources of Error

Computational error can originate from several sources, although the most prominent forms of error (and the ones that we will worry about in this text) come from two main sources:

1. Errors introduced in the input x. In particular, this form of error can be generally classified to be from one of two sources:

   a) Measurement error, caused by a difference between the exact value x and the measured value x̃. The measurement error is computed as Δx = |x - x̃|.

   b) Rounding error, caused by a difference between the exact value x and the computational or floating-point representation x̂ = fl(x). Since infinite precision cannot be achieved with finite resources, the computational representation is a finite precision approximation of the exact value.

   Consider, for example, the decimal number x = 0.000123456. In order to standardize the representation of these numbers we perform normalization (such that the number to the left of the decimal point is 0 and the first digit right of the decimal point is nonzero).

The number x is thus normalized as follows:

    x = 0.123456 × 10^-3        (1.3)

where 0.123456 is the mantissa. However, a computer can only represent finite precision, so we are not guaranteed to retain all digits from the initial number. Let us consider a hypothetical "decimal" computer with at most 5 digits in the mantissa. The floating-point representation of x obtained on this computer, after rounding, is given by

    x̂ = fl(x) = 0.12346 × 10^-3        (1.4)

The process of conversion to a floating point number gives us a rounding error equal to Δx = x - x̂ = -0.4 × 10^-8.

2. Errors as a result of the calculation, approximation or algorithm. This form of error can again be generally classified to be from one of two sources:

   a) Truncation error. When truncating an infinite series to provide a finite approximation, the method inherently introduces an error. For the example we first considered, truncating the infinite expansion proceeds as follows:

    z = exp(x) = Σ_{i=0}^∞ x^i/i! = Σ_{i=0}^n x^i/i! + Σ_{i=n+1}^∞ x^i/i! = ẑ + Σ_{i=n+1}^∞ x^i/i!        (1.5)

In this case, the truncation error is given by

    T = z - ẑ = Σ_{i=n+1}^∞ x^i/i!        (1.6)

   b) Rounding errors in elementary steps of the algorithm. For example, consider the addition of a number of magnitude 10^7 and a number of magnitude 10^3 in a floating point number system where we only maintain 4 digit accuracy (in base 10). The exact result should reflect both contributions, but with only 4 significant digits the contribution of the smaller number is lost entirely in rounding.

When analyzing sources of error, it is often useful to provide a mathematical basis for the error analysis. In particular, there are two principal ways for providing a measure of the generated error.

Definition: Consider an exact result z and an approximate result ẑ generated in a specified floating point number system. Then the absolute error is given by

    Δz = z - ẑ,        (1.7)

and the relative error (assuming z ≠ 0) is given by

    δz = (z - ẑ)/z.        (1.8)

Of course, considering the mathematical hardships often involved in analyzing our algorithms for error, there must be some justification in bothering with error analysis. Although the errors may at first be perceived as negligible, some numerical algorithms are numerically unstable in the way they propagate errors.
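These two error measures can be made concrete with a short sketch (Python; the helper names and the test point x = 1 are illustrative choices, not from the notes), using the truncated Taylor series of the exponential as the approximate result:

```python
import math

def truncated_exp(x, n):
    """z_hat: sum of the first n terms of the Taylor series for exp(x)."""
    return sum(x**i / math.factorial(i) for i in range(n))

def abs_error(z, z_hat):
    return z - z_hat          # absolute error: z - z_hat

def rel_error(z, z_hat):
    return (z - z_hat) / z    # relative error, defined only for z != 0

z = math.exp(1.0)
for n in (5, 10, 20):
    z_hat = truncated_exp(1.0, n)
    print(f"n={n:2d}  abs={abs_error(z, z_hat):.3e}  rel={rel_error(z, z_hat):.3e}")
```

As expected, both errors shrink quickly as n grows; the following example shows why a rapidly converging series can still behave badly once every intermediate result is rounded.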

In doing error analysis, we will often run into one of the following situations in computational mathematics:

1. The algorithm may contain one or more avoidable steps that each greatly amplify errors.

2. The algorithm may propagate initially small errors such that the error is amplified without bound in many small steps.

Consider the following example of the first kind of generated error:

Example: Consider the problem P with input x defined by the evaluation of the exponential function z = exp(x) (as considered in (1.1)). However, in performing the calculation, assume the following conditions:

- Assume a decimal computer with 5 digits in the mantissa.
- Assume no measurement error or initial rounding error in the input x.

Consider solving this problem with input x = -5.5; we seek a numerical approximation to the exact value z = exp(-5.5) ≈ 0.0040868. In solving this problem, we first apply Algorithm A, truncating the series after the first 25 terms. This yields the formula

    ẑ = Σ_{i=0}^{24} x^i/i!

Performing this calculation with our floating point system yields an approximation ẑ_A which, an observant reader will notice, has no significant digits in common with the exact solution. We conclude that precision was lost in this calculation, and in fact notice that the relative error is

    δẑ_A = (1/z)(z - ẑ_A) ≈ 41%.

This is a substantial error! So why does this instability occur? Our answer to this question can be found in a related problem: consider the subtraction of two numbers that are almost equal to one another.

Let x_1 = 0.100134826, with floating-point representation fl(x_1) = 0.10013. The relative error in performing this approximation is

    δx_1 = (1/x_1)(x_1 - fl(x_1)) ≈ 4.8 × 10^-5.

Similarly, let x_2 = 0.100121111, with floating-point representation fl(x_2) = 0.10012. The relative error in performing this approximation is

    δx_2 = (1/x_2)(x_2 - fl(x_2)) ≈ 1.1 × 10^-5.

So, in general, the approximation of these two numbers by their floating-point equivalents produces relatively small errors. Now consider the subtraction z = x_1 - x_2. The exact solution is z = 0.000013715, and the computed solution using the floating-point representations is

    ẑ = fl(x_1) - fl(x_2) = 0.00001.

The relative error in this case is

    δz = (1/z)(z - ẑ) ≈ 27%.
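This cancellation is easy to reproduce numerically. In the sketch below (Python; round5 is a hypothetical helper that models the 5-digit decimal machine by rounding to 5 significant digits), the inputs carry relative errors of order 10^-5, yet their difference carries a relative error of roughly 27%:

```python
import math

def round5(v):
    """Model the hypothetical machine: round v to 5 significant decimal digits."""
    if v == 0.0:
        return 0.0
    e = math.floor(math.log10(abs(v)))
    return round(v, 4 - e)

x1, x2 = 0.100134826, 0.100121111
z     = x1 - x2                      # exact difference: 0.000013715
z_hat = round5(x1) - round5(x2)      # difference of rounded inputs: ~0.00001
rel   = (z - z_hat) / z              # about 0.27, i.e. 27%
```

Note that the subtraction itself is exact; the damage was done earlier, when the leading digits that carry almost all the information cancelled, leaving only the rounded-off tail.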

So what was an almost negligible error when performing rounding becomes a substantial error after we have completed the subtraction. Thus, if possible, we will need to avoid these kinds of subtractions when we are developing our algorithms.

Looking back at the additions we performed in calculating exp(-5.5) (see Table I), we see that there were several subtractions performed on numbers of similar magnitude. Fortunately, we have an easy way around this if we simply take advantage of a property of the exponential, namely exp(-x) = (exp(x))^-1. This provides us with Algorithm B.

Table I: Algorithm A applied to x = -5.5 (columns: i, the i-th term of the series, and the truncated sum after that term).

Algorithm B: Using the truncated Taylor expansion for exp(5.5), we get the following formula for exp(-5.5):

    exp(x) = ( Σ_{i=0}^{24} (-x)^i/i! )^-1
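The difference between the two algorithms can be simulated by rounding after every elementary operation. The sketch below (Python; round5 and series25 are our illustrative helpers, and the evaluation order is one plausible choice, not necessarily the one used to produce Table I) compares Algorithm A with Algorithm B:

```python
import math

def round5(v):
    """Round v to 5 significant decimal digits (the hypothetical machine)."""
    if v == 0.0:
        return 0.0
    e = math.floor(math.log10(abs(v)))
    return round(v, 4 - e)

def series25(x):
    """First 25 Taylor terms of exp(x), rounding every intermediate result."""
    s, term = 0.0, 1.0
    for i in range(25):
        s = round5(s + term)
        term = round5(term * x / (i + 1))
    return s

x = -5.5
z  = math.exp(x)                   # exact value, about 0.0040868
zA = series25(x)                   # Algorithm A: alternating series, heavy cancellation
zB = round5(1.0 / series25(-x))    # Algorithm B: exp(-x) = 1/exp(x), all terms positive
errA = abs(z - zA) / z
errB = abs(z - zB) / z
```

For x = -5.5 the terms in Algorithm A grow to magnitudes near 40 before the sum collapses to about 0.0041, so absolute rounding errors of order 10^-3 survive into the tiny final answer. The series in Algorithm B has only positive terms, so no cancellation occurs and the final division is harmless.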

