CHAPTER II

Truncation Errors and the Taylor series

I. Definition

Truncation errors are those that result from using an approximation in place of an exact mathematical procedure.

Example: approximation of the derivative

$$ f'(x_i) \simeq \frac{f(x_{i+1}) - f(x_i)}{x_{i+1} - x_i} $$

One of the most important tools used in numerical methods to approximate mathematical functions is the Taylor series.

II. Taylor series

The Taylor series provides a means to predict a function at one point in terms of the function value and its derivative at another point.

Therefore, any smooth function can be approximated as a polynomial.

[pic]

Figure.2.1. Taylor series.

The expression of Taylor series for any function is:

$$ f(x_{i+1}) = f(x_i) + f'(x_i)(x_{i+1} - x_i) + \frac{f''(x_i)}{2!}(x_{i+1} - x_i)^2 + \cdots + \frac{f^{(n)}(x_i)}{n!}(x_{i+1} - x_i)^n + R_n $$

A remainder term is included to account for the higher-order terms that are neglected (from order (n+1) to infinity):

$$ R_n = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x_{i+1} - x_i)^{n+1} $$

This represents the remainder for the nth-order approximation, where ξ is a value that lies somewhere between x_i and x_{i+1}.

It is very convenient to write the Taylor series in a compact form by replacing (x_{i+1} − x_i) with the step h:

$$ f(x_{i+1}) = f(x_i) + f'(x_i)\,h + \frac{f''(x_i)}{2!}h^2 + \cdots + \frac{f^{(n)}(x_i)}{n!}h^n + R_n $$

and

$$ R_n = \frac{f^{(n+1)}(\xi)}{(n+1)!}\,h^{n+1} $$

Although an infinite number of terms is theoretically needed to yield an exact result, in practice a few terms are often sufficient.

The number of terms needed depends on the application and on the precision required; it is determined using the remainder term of the expansion.

However, determining the remainder term Rn is not straightforward, since we would have:

- to know ξ;

- to differentiate the function (n+1) times, and for that we would need to know the function f itself.

The only term that we can control in this expression is the step h. Therefore, it is very convenient to express Rn as:

$$ R_n = O(h^{n+1}) $$

which must be interpreted as a truncation error of order n+1. This means that the error is proportional to h^(n+1).

Therefore, if h is sufficiently small, an accurate estimate of f(x_{i+1}) can be reached using only a few terms.
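As a quick illustration (not part of the original text), a short Python sketch, using e^x expanded about x_i = 0 as a stand-in function, shows the remainder shrinking as terms are added:

```python
import math

def taylor_exp(h, n):
    """n-th order Taylor polynomial of e^x about x_i = 0, evaluated at x_i + h."""
    return sum(h**k / math.factorial(k) for k in range(n + 1))

# The remainder behaves like O(h^(n+1)): with h = 0.1, each extra term
# gains roughly one more correct digit.
for n in range(5):
    err = abs(math.exp(0.1) - taylor_exp(0.1, n))
    print(f"order {n}: error = {err:.2e}")
```

With h = 0.1, only four or five terms already give an error below 10⁻⁶, which is exactly the "few terms are sufficient" point made above.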

II.1. Using Taylor series to estimate truncation errors

Let us see how the Taylor series may be used to estimate truncation errors. Consider an approximation of the derivative:

$$ f'(x_i) \simeq \frac{f(x_{i+1}) - f(x_i)}{h} $$

How do we determine the error introduced by using this formulation to compute the derivative instead of the exact mathematical definition?

Using a Taylor series truncated at the first order:

$$ f(x_{i+1}) = f(x_i) + f'(x_i)\,h + R_1 $$

Therefore,

$$ f'(x_i) = \frac{f(x_{i+1}) - f(x_i)}{h} - \frac{R_1}{h} $$

We now have an estimate of the truncation error:

$$ \text{Truncation error} = \frac{R_1}{h} = O(h) $$

So the error introduced by this formulation of the derivative is of order h.
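The order of this error can be checked numerically. A minimal Python sketch, assuming sin as a stand-in function, shows that halving h roughly halves the error, as expected for an O(h) scheme:

```python
import math

def forward_diff(f, x, h):
    # First forward difference: O(h) truncation error
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)   # true derivative of sin at x = 1
e1 = abs(forward_diff(math.sin, 1.0, 0.1) - exact)
e2 = abs(forward_diff(math.sin, 1.0, 0.05) - exact)
print(e1 / e2)   # close to 2: halving h halves the error
```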

III. Numerical differentiation

Another interesting consequence of the Taylor series is that we can compute the derivative of a function using only the values of the function at x_{i+1} and x_i, and we know how to compute the error due to this approximation.

The formulation:

$$ f'(x_i) = \frac{f(x_{i+1}) - f(x_i)}{h} + O(h) $$

is called the first forward difference: "forward" because it uses the points (i) and (i+1).

- First backward difference approximation

Expanding the Taylor series backward from x_i:

$$ f(x_{i-1}) = f(x_i) - f'(x_i)\,h + \frac{f''(x_i)}{2!}h^2 - \cdots $$

gives:

$$ f'(x_i) = \frac{f(x_i) - f(x_{i-1})}{h} + O(h) $$

- Centered difference approximation

Subtracting the backward expansion

$$ f(x_{i-1}) = f(x_i) - f'(x_i)\,h + \frac{f''(x_i)}{2!}h^2 - \frac{f'''(x_i)}{3!}h^3 + \cdots $$

from the forward expansion

$$ f(x_{i+1}) = f(x_i) + f'(x_i)\,h + \frac{f''(x_i)}{2!}h^2 + \frac{f'''(x_i)}{3!}h^3 + \cdots $$

gives

$$ f'(x_i) = \frac{f(x_{i+1}) - f(x_{i-1})}{2h} + O(h^2) $$

Note that the truncation error is of order h², so the centered formulation is more accurate than the forward and backward formulations.
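A small Python comparison (using sin as an assumed test function) makes the accuracy difference visible:

```python
import math

def forward(f, x, h):  return (f(x + h) - f(x)) / h           # O(h)
def backward(f, x, h): return (f(x) - f(x - h)) / h           # O(h)
def centered(f, x, h): return (f(x + h) - f(x - h)) / (2*h)   # O(h^2)

exact = math.cos(0.5)   # true derivative of sin at x = 0.5
h = 0.1
for name, scheme in [("forward", forward), ("backward", backward), ("centered", centered)]:
    print(name, abs(scheme(math.sin, 0.5, h) - exact))
```

With h = 0.1 the centered error is roughly an order of magnitude smaller than the one-sided errors.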

III.1. Finite difference approximations of higher derivatives

Looking at the Taylor series, you can notice that it can be used to derive approximations of any derivative of the function f (second, third, …).

Example

Second order derivative:

|Forward |$$ f''(x_i) = \frac{f(x_{i+2}) - 2f(x_{i+1}) + f(x_i)}{h^2} + O(h) $$ |

|Backward |$$ f''(x_i) = \frac{f(x_i) - 2f(x_{i-1}) + f(x_{i-2})}{h^2} + O(h) $$ |

|Centered |$$ f''(x_i) = \frac{f(x_{i+1}) - 2f(x_i) + f(x_{i-1})}{h^2} + O(h^2) $$ |

Note that the centered formulation can be seen as a difference of first derivatives:

$$ f''(x_i) \approx \frac{\dfrac{f(x_{i+1}) - f(x_i)}{h} - \dfrac{f(x_i) - f(x_{i-1})}{h}}{h} $$
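The centered second-difference formula can be sketched in Python; e^x is used here as a stand-in test function because it is its own second derivative:

```python
import math

def second_centered(f, x, h):
    # Centered second difference: O(h^2) truncation error
    return (f(x + h) - 2*f(x) + f(x - h)) / h**2

# The exact second derivative of e^x at x = 0 is 1
approx = second_centered(math.exp, 0.0, 0.01)
print(approx)   # close to 1
```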

III.2. Functions of more than one variable

Taylor series can be applied to functions of more than one variable.

For example, for a function of two variables:

$$ f(x_{i+1}, y_{i+1}) = f(x_i, y_i) + \frac{\partial f}{\partial x}(x_{i+1} - x_i) + \frac{\partial f}{\partial y}(y_{i+1} - y_i) + \cdots $$

IV. Error propagation

IV.1. Functions of single variable

In this part, the objective is to estimate the effect of a discrepancy between the true value x and an approximation x̃ on the value of a function f(x), or, mathematically speaking, to estimate:

$$ \Delta f(\tilde{x}) = \left| f(x) - f(\tilde{x}) \right| $$

But the problem is that we do not know f(x), because we do not know x. However, thanks to the Taylor series, we can estimate f(x) using f(x̃), provided that x̃ is close to x and that f is continuous and differentiable:

$$ f(x) = f(\tilde{x}) + f'(\tilde{x})(x - \tilde{x}) + \frac{f''(\tilde{x})}{2!}(x - \tilde{x})^2 + \cdots $$

Keeping only the zero- and first-order terms:

$$ f(x) \approx f(\tilde{x}) + f'(\tilde{x})(x - \tilde{x}) $$

thus

$$ \left| f(x) - f(\tilde{x}) \right| \approx \left| f'(\tilde{x}) \right| \left| x - \tilde{x} \right| $$

gives

$$ \Delta f(\tilde{x}) = \left| f'(\tilde{x}) \right| \, \Delta\tilde{x} $$

This allows us to approximate the error in f(x̃) given the derivative of the function and an estimate of the error Δx̃ in the independent variable.
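This propagation rule can be sketched in Python. The function f(x) = x², x̃ = 2.0, and Δx̃ = 0.01 below are hypothetical values chosen only for illustration:

```python
def propagated_error(f_prime, x_tilde, dx):
    # First-order estimate: Δf ≈ |f'(x̃)| · Δx̃
    return abs(f_prime(x_tilde)) * dx

# Stand-in example (not from the text): f(x) = x^2, x̃ = 2.0, Δx̃ = 0.01
est = propagated_error(lambda x: 2*x, 2.0, 0.01)
true = abs((2.0 + 0.01)**2 - 2.0**2)
print(est, true)   # 0.04 vs 0.0401: the first-order estimate is close
```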

[pic]

Figure 2.2. Estimation of the error in f(x̃).

V. Stability and Condition

The condition of a mathematical problem relates to its sensitivity to changes in its input values. We say that a computation is numerically unstable if the uncertainty in the input values is grossly magnified by the numerical method.

Using a first-order Taylor expansion, we have:

$$ f(\tilde{x}) \approx f(x) + f'(x)(\tilde{x} - x) $$

We can compute the relative error on f(x) by:

$$ \frac{f(\tilde{x}) - f(x)}{f(\tilde{x})} \approx \frac{f'(\tilde{x})(\tilde{x} - x)}{f(\tilde{x})} $$

The relative error on x is:

$$ \frac{\tilde{x} - x}{\tilde{x}} $$

V.1. Condition number

It can be defined as the ratio between the relative error on f(x) and the relative error on x [from above]:

$$ \text{Condition number (CN)} = \frac{\tilde{x}\, f'(\tilde{x})}{f(\tilde{x})} $$

The condition number provides a measure of the extent to which an uncertainty in x is magnified by the evaluation of f(x).

|CN = 1 |The relative error on f(x) is identical to the relative error on x |

|CN > 1 |The relative error is amplified |

|CN < 1 |The relative error is attenuated |

|CN >> 1 |Ill-conditioned |
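A short Python sketch, using tan(x) near π/2 as an assumed example of an ill-conditioned evaluation:

```python
import math

def condition_number(f, f_prime, x):
    # CN = x·f'(x) / f(x)
    return x * f_prime(x) / f(x)

# tan(x) near pi/2 is ill-conditioned: a tiny relative change in x
# produces a huge relative change in tan(x)
x = math.pi / 2 - 0.01
cn = condition_number(math.tan, lambda t: 1 / math.cos(t)**2, x)
print(cn)   # very large => ill-conditioned
```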

VI. Total numerical error

The total error is the sum of the truncation and round-off errors. However:

|Round-off errors decrease |as the number of significant figures increases |

|Round-off errors increase |with subtractive cancellation and with the number of computations |

|Truncation errors decrease |as the step h decreases; however, decreasing h leads to subtractive cancellation or to an increase in the number of computations |

Therefore, we face a dilemma: decreasing one component of the total error increases the other.

However, with current computers the round-off errors can be minimized, and therefore we are able to decrease the truncation error by reducing the step h.

[pic]

Figure 2.3. Variation of the total error as a function of the step size.

VII. Some advice to control numerical errors

- Avoid subtracting two nearly equal numbers.

- Avoid subtractive cancellation by reformulating your problem.

- When you add or subtract numbers, sort them and start with the smallest ones.
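The last piece of advice can be demonstrated in Python; the values below (1.0 plus a million copies of 10⁻¹⁶) are chosen only to make the rounding visible:

```python
# Adding many tiny numbers to a large one: summing largest-first loses
# every tiny contribution, summing smallest-first keeps them.
tiny = [1e-16] * 10**6

largest_first = 1.0
for t in tiny:
    largest_first += t            # each 1e-16 is rounded away against 1.0

smallest_first = sum(tiny) + 1.0  # the tiny terms accumulate first

print(largest_first)    # exactly 1.0
print(smallest_first)   # slightly larger than 1.0
```

Starting from the large value, every tiny addend falls below half the machine epsilon of 1.0 and is rounded away; accumulating the tiny terms first preserves their sum.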

-----------------------

Example

Given a value of x̃ = 2.5 with an error Δx̃, estimate the resulting error in the function f(x).

Example

Use forward and backward difference approximations of O(h) and a centered difference approximation of O(h²) to estimate the first derivative of:

[pic]

for x = 0.5, using h = 0.5 and h = 0.25.
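The exercise can be sketched in Python. Since the function itself was lost with the missing figure, the polynomial below is only an assumed stand-in to make the computation runnable:

```python
def f(x):
    # Stand-in polynomial (the original function is not recoverable from
    # the text); assumed here only to make the exercise runnable
    return -0.1*x**4 - 0.15*x**3 - 0.5*x**2 - 0.25*x + 1.2

def df_exact(x):
    # Analytical derivative of the stand-in polynomial
    return -0.4*x**3 - 0.45*x**2 - x - 0.25

x = 0.5
for h in (0.5, 0.25):
    fwd = (f(x + h) - f(x)) / h              # O(h)
    bwd = (f(x) - f(x - h)) / h              # O(h)
    ctr = (f(x + h) - f(x - h)) / (2*h)      # O(h^2)
    print(f"h={h}: forward={fwd:.4f}, backward={bwd:.4f}, "
          f"centered={ctr:.4f}, exact={df_exact(x):.4f}")
```

As the error analysis above predicts, the centered estimate is closest to the exact derivative, and halving h improves all three.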


The butterfly effect is a phrase that encapsulates the more technical notion of sensitive dependence on initial conditions in chaos theory. Small variations in the initial condition of a dynamical system may produce large variations in the long-term behavior of the system. This is sometimes presented as esoteric behavior, but it can be exhibited by very simple systems: for example, a ball placed at the crest of a hill might roll into any of several valleys depending on slight differences in its initial position.

The phrase refers to the idea that a butterfly's wings might create tiny changes in the atmosphere that ultimately cause a tornado to appear (or, for that matter, prevent a tornado from appearing). The flapping wing represents a small change in the initial condition of the system, which causes a chain of events leading to large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different.

Sensitive dependence on initial conditions was first described in the literature by Jacques Hadamard in 1890 and popularized in Pierre Duhem's 1906 book. The term "butterfly effect" itself is related to the work of Edward Lorenz, who in a 1963 paper for the New York Academy of Sciences noted that "One meteorologist remarked that if the theory were correct, one flap of a seagull's wings could change the course of weather forever." Later speeches and papers by Lorenz used the more poetic butterfly. According to Lorenz, upon his failing to provide a title for a talk he was to present at the 139th meeting of the American Association for the Advancement of Science in 1972, Philip Merilees concocted "Does the flap of a butterfly's wings in Brazil set off a tornado in Texas?" as a title.

Brook Taylor, born at Edmonton on August 18, 1685, and died in London on December 29, 1731, was educated at St. John's College, Cambridge, and was among the most enthusiastic of Newton's admirers. From the year 1712 onwards he wrote numerous papers in the Philosophical Transactions, in which, among other things, he discussed the motion of projectiles, the centre of oscillation, and the forms taken by liquids when raised by capillarity. In 1719 he resigned the secretaryship of the Royal Society and abandoned the study of mathematics. His earliest work, and that by which he is generally known, is his Methodus Incrementorum Directa et Inversa, published in London in 1715. This contains a proof of the well-known theorem by which a function of a single variable can be expanded in powers of it. He does not consider the convergency of the series, and the proof which involves numerous assumptions is not worth reproducing.
