Errors



1.2 Absolute and relative errors.

In mathematics, science, and engineering we calculate various numbers, such as the current in an electric circuit, or the viscosity of the transmission fluid in a car, or the price of Ford Motor Company stock a year from now, or sin(1.5). For various reasons we may not be able to obtain the exact value and we must be content with an approximation to the true value. Some of these reasons are the following.

Errors in mathematical approximations.

Imperfect physical measurements.

Propagation of errors in exact computations.

Round-off errors and propagation of errors in floating point computations.

An imperfect mathematical model of a physical system.

To a large extent, numerical analysis is concerned with methods for obtaining approximate values and for assessing the error in these approximations. In this section we look at two ways of specifying errors: by means of the absolute error or by means of the relative error. Propagation of errors in exact computations is discussed in sections 1.3 and 1.4, while sections 1.5 and 1.6 are devoted to round-off errors and propagation of errors in floating point computations. Section 1.7 considers two other ways of specifying errors, section 1.8 looks at binary floating point numbers, and the use of mathematical software in error calculations is the subject of section 1.9.

Absolute and relative errors. The error is the difference between the true value, x, and the approximate value, xa.

(1) e = Δx = x − xa

Other notations for the error are common. For example, in formula (3) in section 1.1 the remainder R is the error in the Taylor series approximation. The absolute error, ε, is the magnitude of the error

(2) ε = | x − xa |.

The relative error, δ, is the absolute error divided by the magnitude of the value. Sometimes people divide by the magnitude of the true value.

(3) δ = δt = | (x − xa) / x | = | xa/x − 1 |

Other times people divide by the magnitude of the approximate value.

(4) δ = δa = | (x − xa) / xa | = | x/xa − 1 |

As indicated above, in a situation where we are dealing with both relative errors we shall distinguish them by writing δt and δa. As we go along we shall try to make it clear which relative error we are using in the various situations where one arises.

There is a symmetry between the two relative errors. Let us define

δ(u, v) = | (u − v) / v | = | u/v − 1 |

Then δt = δ(xa, x) and δa = δ(x, xa). Because of this symmetry, certain things that hold for one type of relative error also hold for the other.

If we are working with several variables, then we shall use subscripts or some other means to indicate which variable the error is for, e.g. εx = ε(x) = | x − xa |.
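The four error measures defined above are easy to express in code. Here is a minimal Python sketch (the function names are mine, not from the text):

```python
def error(x, xa):
    """Error e = x - xa (equation (1))."""
    return x - xa

def abs_error(x, xa):
    """Absolute error eps = |x - xa| (equation (2))."""
    return abs(x - xa)

def rel_error_true(x, xa):
    """Relative error delta_t = |x - xa| / |x|, dividing by the
    true value (equation (3)); requires x != 0."""
    return abs(x - xa) / abs(x)

def rel_error_approx(x, xa):
    """Relative error delta_a = |x - xa| / |xa|, dividing by the
    approximate value (equation (4)); requires xa != 0."""
    return abs(x - xa) / abs(xa)
```

Note that the two relative-error functions are related by swapping their arguments in the symmetric form δ(u, v): rel_error_true(x, xa) equals rel_error_approx(xa, x).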

Example 1. Suppose we measure the length of a table to be xa = 59.7 in, but the true length is x = 59.5 in. Then the error, absolute error and two relative errors are

e = x − xa = 59.5 − 59.7 = −0.2 in

ε = | x − xa | = | −0.2 | = 0.2 in.

δt = 0.2 / 59.5 = 0.003361… = 0.3361…%

δa = 0.2 / 59.7 = 0.003350… = 0.3350…%
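The numbers in Example 1 can be reproduced with a few lines of Python (a quick spot check, not part of the text):

```python
# Example 1: measured table length vs. true length, in inches.
x, xa = 59.5, 59.7           # true value and approximate value
e = x - xa                    # error: -0.2 in
eps = abs(e)                  # absolute error: 0.2 in
delta_t = eps / abs(x)        # relative error w.r.t. the true value
delta_a = eps / abs(xa)       # relative error w.r.t. the approximate value
print(f"e = {e:.1f} in, eps = {eps:.1f} in")
print(f"delta_t = {delta_t:.4%}, delta_a = {delta_a:.4%}")
```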

It is common to express the relative error as a percent, and we shall generally do so. Also, it is common to round absolute and relative errors up so that they have only one significant digit. In this example, the relative error is no greater than 0.4%. In fact, errors are often simply rounded (not necessarily up) to one digit; in this case many people would say the relative error is about 0.3%. We shall express this symbolically by δ ≈ 0.3%. In general

(5) x ≈ y is short for x = z where z rounds to y

While this is not very precise, it is useful.
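Rounding an error up to one significant digit, as described above, can be sketched in Python as follows (the function name is mine, and floating-point edge cases, e.g. values that already have exactly one significant digit, may occasionally round up one extra step):

```python
import math

def round_up_1sig(err):
    """Round a positive error UP to one significant digit,
    e.g. 0.003361... -> 0.004 (that is, 0.4%)."""
    if err == 0:
        return 0.0
    d = math.floor(math.log10(err))          # decade of the leading digit
    return math.ceil(err / 10**d) * 10**d    # round the leading digit up

print(round_up_1sig(0.2 / 59.5))             # the relative error from Example 1
```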

In Example 1 the two relative errors δt and δa are quite close. In general, if one of the relative errors is small, then the two are approximately equal. The following proposition makes this precise.

Proposition 1 If x ≠ 0 and δt < 1, then xa ≠ 0 and δt / (1 + δt) ≤ δa ≤ δt / (1 − δt). If xa ≠ 0 and δa < 1, then x ≠ 0 and δa / (1 + δa) ≤ δt ≤ δa / (1 − δa).
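Proposition 1 bounds one relative error in terms of the other. The statement in the source is truncated, so the inequality used below — δt / (1 + δt) ≤ δa ≤ δt / (1 − δt) when δt < 1 — is the standard form and is my reconstruction. It can be spot-checked numerically:

```python
import random

# Numerical spot check of the bound relating the two relative errors:
# when delta_t < 1,  delta_t/(1+delta_t) <= delta_a <= delta_t/(1-delta_t).
random.seed(0)
for _ in range(1000):
    x = random.uniform(0.1, 100.0)
    xa = x * (1 + random.uniform(-0.5, 0.5))   # keeps delta_t < 1
    delta_t = abs(x - xa) / abs(x)
    delta_a = abs(x - xa) / abs(xa)
    # small slack for floating-point round-off
    assert delta_t / (1 + delta_t) <= delta_a + 1e-12
    assert delta_a <= delta_t / (1 - delta_t) + 1e-12
print("bounds hold on all samples")
```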
