INTRODUCTION TO COMPUTATIONAL MATHEMATICS


Course Notes for CM 271 / AM 341 / CS 371

H. De Sterck and P. Ullrich

Department of Applied Mathematics, University of Waterloo

March 20th, 2006

These notes have been funded by...


Contents

1 Errors and Error Propagation 7

1.1 Sources of Error . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

1.2 Floating Point Numbers and Operations . . . . . . . . . . . . . . 12

1.2.1 A Binary Computer . . . . . . . . . . . . . . . . . . . . . 13

1.2.2 Standard floating point systems . . . . . . . . . . . . . . . 14

1.2.3 Machine Precision . . . . . . . . . . . . . . . . . . . . . . 15

1.2.4 Floating Point Operations . . . . . . . . . . . . . . . . . . 17

1.3 Condition of a Mathematical Problem . . . . . . . . . . . . . . . 17

1.4 Stability of a Numerical Algorithm . . . . . . . . . . . . . . . . . 23

2 Root Finding 27

2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

2.2 Four Algorithms for Root Finding . . . . . . . . . . . . . . . . . 28

2.2.1 Bisection Method . . . . . . . . . . . . . . . . . . . . . . . 28

2.2.2 Fixed Point Iteration . . . . . . . . . . . . . . . . . . . . . 30

2.2.3 Newton's Method . . . . . . . . . . . . . . . . . . . . . . . 31

2.2.4 Secant Method . . . . . . . . . . . . . . . . . . . . . . . . 32

2.2.5 Stopping Criteria for Iterative Functions . . . . . . . . . . 32

2.3 Rate of Convergence . . . . . . . . . . . . . . . . . . . . . . . . . 33

2.4 Convergence Theory . . . . . . . . . . . . . . . . . . . . . . . . . 34

2.4.1 Fixed Point Iteration . . . . . . . . . . . . . . . . . . . . . 34

2.4.2 Newton's Method . . . . . . . . . . . . . . . . . . . . . . . 41

2.4.3 Secant Method . . . . . . . . . . . . . . . . . . . . . . . . 43

2.4.4 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

3 Interpolation 45

3.1 Polynomial Interpolation . . . . . . . . . . . . . . . . . . . . . . . 46

3.1.1 The Vandermonde Matrix . . . . . . . . . . . . . . . . . . 46

3.1.2 Lagrange Form . . . . . . . . . . . . . . . . . . . . . . . . 48

3.1.3 Hermite Interpolation . . . . . . . . . . . . . . . . . . . . 51

3.2 Piecewise Polynomial Interpolation . . . . . . . . . . . . . . . . . 52

3.2.1 Piecewise Linear Interpolation . . . . . . . . . . . . . . . 53

3.2.2 Spline Interpolation . . . . . . . . . . . . . . . . . . . . . 54

3.2.3 Further Generalizations . . . . . . . . . . . . . . . . . . . 55


4 Integration 57

4.1 Integration of an Interpolating Polynomial . . . . . . . . . . . . . 57

4.1.1 Midpoint Rule: y(x) degree 0 . . . . . . . . . . . . . . . . 58

4.1.2 Trapezoid Rule: y(x) degree 1 . . . . . . . . . . . . . . . 58

4.1.3 Simpson Rule: y(x) degree 2 . . . . . . . . . . . . . . . . 59

4.1.4 Accuracy, Truncation Error and Degree of Precision . . . 60

4.2 Composite Integration . . . . . . . . . . . . . . . . . . . . . . . . 61

4.2.1 Composite Trapezoid Rule . . . . . . . . . . . . . . . . . . 62

4.2.2 Composite Simpson Rule . . . . . . . . . . . . . . . . . . 63

4.3 Gaussian Integration . . . . . . . . . . . . . . . . . . . . . . . . . 64

5 Discrete Fourier Methods 67

5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

5.2 Fourier Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

5.2.1 Real form of the Fourier Series . . . . . . . . . . . . . . . 69

5.2.2 Complex form of the Fourier Series . . . . . . . . . . . . . 71

5.3 Fourier Series and Orthogonal Basis . . . . . . . . . . . . . . . . 74

5.4 Discrete Fourier Transform . . . . . . . . . . . . . . . . . . . . . 77

5.4.1 Aliasing and the Sample Theorem . . . . . . . . . . . . . 81

5.5 Fast Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . 84

5.6 DFT and Orthogonal Basis . . . . . . . . . . . . . . . . . . . . . 87

5.7 Power Spectrum and Parseval's Theorem . . . . . . . . . . . . . 90

6 Numerical Linear Algebra 93

6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

6.2 Gaussian Elimination . . . . . . . . . . . . . . . . . . . . . . . . . 94

6.2.1 LU Factorization . . . . . . . . . . . . . . . . . . . . . . . 94

6.2.2 Pivoting . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

6.2.3 Algorithm and Computational Cost . . . . . . . . . . . . 99

6.2.4 Determinants . . . . . . . . . . . . . . . . . . . . . . . . . 104

6.3 Condition and Stability . . . . . . . . . . . . . . . . . . . . . . . 107

6.3.1 The Matrix Norm . . . . . . . . . . . . . . . . . . . . . . 107

6.3.2 Condition of the problem Ax = b . . . . . . . . . . . . . . 108

6.3.3 Stability of the LU Decomposition Algorithm . . . . . . . 111

6.4 Iterative Methods for solving Ax = b . . . . . . . . . . . . . . . . 113

6.4.1 Jacobi and Gauss-Seidel Methods . . . . . . . . . . . . . . 114

6.4.2 Convergence of Iterative Methods . . . . . . . . . . . . . . 115

Introduction to Computational Mathematics

The goal of computational mathematics, put simply, is to find or develop algorithms that solve mathematical problems computationally (i.e., using computers). In particular, we desire that any algorithm we develop fulfill four primary properties:

• Accuracy. An accurate algorithm is able to return a result that is numerically very close to the correct, or analytical, result.

• Efficiency. An efficient algorithm is able to quickly solve the mathematical problem with reasonable computational resources.

• Robustness. A robust algorithm works for a wide variety of inputs x.

• Stability. A stable algorithm is not sensitive to small changes in the input x.


Chapter 1

Errors and Error Propagation

In examining computational mathematics problems, we shall generally consider a problem of the following form:

Problem Consider an arbitrary problem P with input x. We must compute the desired output z = f_P(x).

In general our only tools for solving such problems are primitive mathematical operations (for example, addition, subtraction, multiplication and division) combined with flow constructs (if statements and loops). As such, even simple problems such as evaluating the exponential function may be difficult computationally.

Example 1.1 Consider the problem P defined by the evaluation of the exponential function z = exp(x). We wish to find the approximation ẑ for z = exp(x) computationally.

Algorithm A. Recall from calculus that the Taylor series expansion of the exponential function is given by

exp(x) = 1 + x + x^2/2! + x^3/3! + ... = Σ_{i=0}^∞ x^i/i!.    (1.1)

Since we obviously cannot compute an infinite sum computationally without also using infinite resources, consider the truncated series constructed by only summing the first n terms. This yields the expansion

ẑ = Σ_{i=0}^{n} x^i/i!.    (1.2)


As we will later see, this is actually a poor algorithm for computing the exponential, although the reason for this may not be immediately obvious.
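As an illustrative sketch (not code from these notes; the function name and the incremental-term trick are my own), the truncated series (1.2) might be implemented as:

```python
def exp_taylor(x, n):
    """Approximate exp(x) by summing the first n+1 terms of its Taylor series."""
    term = 1.0    # the i = 0 term, x^0 / 0!
    total = 1.0
    for i in range(1, n + 1):
        term *= x / i    # build x^i / i! incrementally, avoiding overflow in i!
        total += term
    return total
```

For x = 1 and modest n this converges quickly (n = 10 already agrees with exp(1) to roughly 8 digits); for large negative x, however, cancellation between large alternating terms makes the result inaccurate, hinting at why this is a poor general-purpose algorithm.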

1.1 Sources of Error

Computational error can originate from several sources, although the most prominent forms of error (and the ones that we will worry about in this text) come from two main sources:

1. Errors introduced in the input x. In particular, this form of error can be generally classified to be from one of two sources:

a) Measurement error, caused by a difference between the "exact" value x and the "measured" value x̃. The measurement error is computed as Δx = |x − x̃|.

b) Rounding error, caused by a difference between the "exact" value x and the computational or floating-point representation x̂ = fl(x). Since infinite precision cannot be achieved with finite resources, the computational representation is a finite-precision approximation of the exact value.

Consider, for example, the decimal number x = 0.00012345876543. In order to standardize the representation of such numbers we perform normalization (so that the digit to the left of the decimal point is 0 and the first digit to the right of the decimal point is nonzero). The number x is thus normalized as follows:

x = 0.12345876543 × 10^-3,    (1.3)

where the digits 0.12345876543 are called the mantissa.

(1.3)

However, a computer has only finite precision, so we are not guaranteed to retain all digits of the original number. Let's consider a hypothetical "decimal" computer with at most 5 digits in the mantissa. The floating-point representation of x obtained on this computer, after rounding, is given by

x̂ = fl(x) = 0.12346 × 10^-3.    (1.4)

The process of conversion to a floating-point number gives us a rounding error equal to Δx = x − x̂ = −0.00000123457 × 10^-3.
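This hypothetical 5-digit decimal computer can be mimicked with Python's decimal module (a sketch for illustration only; the variable names are mine):

```python
from decimal import Decimal, getcontext

getcontext().prec = 5            # hypothetical computer: 5 significant digits
x = Decimal("0.00012345876543")  # stored exactly; construction does not round
x_hat = +x                       # unary plus rounds x to the context precision
print(x_hat)                     # 0.00012346, matching (1.4)
print(x - x_hat)                 # the rounding error, about -1.2346E-9
```

Note that only arithmetic operations round to the context precision; the exact value x survives in full until we operate on it, which is what lets us compute the rounding error itself.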

2. Errors as a result of the calculation, approximation or algorithm. This form of error can again be generally classified to be from one of two sources:

a) Truncation error. When truncating an infinite series to provide a finite approximation, the method inherently introduces an error.

Google Online Preview   Download