2 Errors

In the previous chapter we looked at two algorithms for solving systems of linear equations Ax = b, where A = (aij) is the n × n matrix of coefficients, x = (x1, …, xn)T is the vector of unknowns and b = (b1, …, bn)T is the vector of right hand sides. One was the general purpose LU decomposition algorithm, while the other was the Cholesky decomposition, which is restricted to the case where A is positive definite.
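Both factorizations are available in standard numerical libraries. Here is a minimal Python sketch, assuming NumPy and SciPy are installed; the 3 × 3 matrix is made up for illustration and is symmetric positive definite so that both methods apply.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve, cho_factor, cho_solve

    # A made-up symmetric positive definite matrix and right hand side.
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    b = np.array([1.0, 2.0, 3.0])

    # General purpose: LU decomposition with partial pivoting.
    x_lu = lu_solve(lu_factor(A), b)

    # Restricted to positive definite A: Cholesky decomposition.
    x_chol = cho_solve(cho_factor(A), b)

    print(x_lu)    # the two solutions agree up to roundoff
    print(x_chol)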

In this chapter we look at how errors propagate when we solve the system Ax = b. Even though the LU and Cholesky decomposition algorithms are exact in theory, errors are an important consideration in this subject. This is because:

The coefficients aij and the right hand sides bi of the equations may be physical quantities that are subject to errors in measurement. We are interested in how these errors propagate to the solution as we solve the equations.

Usually numbers are stored in binary floating point form in computer software and are subject to roundoff errors when they are entered and as arithmetic is done on them. Again we are interested in how much this causes the solution to the equations to be in error.
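As a small illustration of the second point, here is a Python sketch; the digits shown are properties of standard IEEE 754 double precision arithmetic, not of anything special in this chapter.

    # 0.1 has no exact binary floating point representation, so a
    # roundoff error enters the moment the value is stored.
    print(f"{0.1:.20f}")     # 0.10000000000000000555...

    # Roundoff also accumulates as arithmetic is done on stored values.
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False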

We shall see that systems of equations differ in how they propagate the errors. Some greatly magnify the error while others may reduce it. Similarly, the choice of a pivoting strategy can also affect the error propagation.
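To preview the first phenomenon, the following Python sketch solves a system whose matrix is made up for illustration and is nearly singular; a perturbation of b with a relative error of about 0.005% changes the solution completely.

    import numpy as np

    # A nearly singular (ill conditioned) matrix, chosen only for illustration.
    A = np.array([[1.0, 1.0],
                  [1.0, 1.0001]])
    b = np.array([2.0, 2.0001])

    x = np.linalg.solve(A, b)   # exact solution is (1, 1)

    # Perturb the second component of b by 0.0001, a relative error
    # of about 0.005%.
    b_pert = b + np.array([0.0, 0.0001])
    x_pert = np.linalg.solve(A, b_pert)

    print(x)       # [1. 1.]
    print(x_pert)  # roughly [0. 2.], a change of about 100% in x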

2.1 Describing errors.

In this section we look at two ways of specifying errors: by means of the absolute error or by means of the relative error.

Absolute and relative errors. The error is the difference between the true value, x, and the approximate value, xa.

(1) e = Δx = x - xa

Other notations for the error are common. The absolute error, ε, is the magnitude of the error

(2) ε = | x - xa |.

The relative error, ρ, is the absolute error divided by the magnitude of the value. Sometimes people divide by the magnitude of the true value.

(3) ρ = ρt = | (x - xa)/x | = | xa/x - 1 |

Other times people divide by the magnitude of the approximate value.

(4) ρ = ρa = | (x - xa)/xa | = | x/xa - 1 |

For the most part, we will be dealing with the second form of the relative error, where we divide by the approximate value. One reason for this is that when we are working with the measured value of a physical quantity we usually don't know the true value. If we are in a situation where we are dealing with both relative errors we shall distinguish them by writing ρt and ρa.

There is a symmetry between the two relative errors. Let us define

ρ(u, v) = | (u - v)/v | = | u/v - 1 |

Then

ρt = ρ(xa, x)

ρa = ρ(x, xa)

Because of the symmetry, certain things that hold for one type of relative error also hold for the other.

If we are in a situation where we are working with several variables then we shall either use subscripts or some other means to indicate what variable the error is for, e.g. εx = ε(x) = | x - xa |.

Example 1. Suppose we measure the length of a table to be xa = 59.7 in, but the true length is x = 59.5 in. Then the error, absolute error and two relative errors are

e = x – xa = 59.5 – 59.7 = - 0.2 in

ε = | x - xa | = | -0.2 | = 0.2 in.

ρt = 0.2/59.5 = 0.003361… = 0.3361…%

ρa = 0.2/59.7 = 0.003350… = 0.3350…%
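These numbers are easy to reproduce. In the Python sketch below, the helper rel_err is a made-up name implementing the function ρ(u, v) defined above.

    def rel_err(u, v):
        # rho(u, v) = |(u - v)/v| = |u/v - 1|
        return abs(u / v - 1)

    x, xa = 59.5, 59.7         # true and measured lengths in inches

    e = x - xa                 # error: -0.2 (up to roundoff)
    eps = abs(x - xa)          # absolute error: 0.2

    rho_t = rel_err(xa, x)     # 0.003361..., dividing by the true value
    rho_a = rel_err(x, xa)     # 0.003350..., dividing by the approximate value

    print(e, eps, rho_t, rho_a)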

It is common to express the relative error as a percent and we shall generally do so. Also, it is common to round absolute and relative errors up so they have only one significant digit. In this example, the relative error is no greater than 0.4%. In fact errors are often simply rounded (not necessarily up) to one digit. In this example, many people would say the relative error is about 0.3%. We shall express this symbolically by ρ ⪅ 0.3%. In general

(5) x ⪅ y is short for x ≤ z where z ≈ y

While this is not very precise, it is useful.

In Example 1 the two relative errors ρt and ρa are quite close. In general, if one of the relative errors is small then the two are approximately equal. The following proposition makes this precise.

Proposition 1. If x ≠ 0 and ρt < 1, then xa ≠ 0 and ρt/(1 + ρt) ≤ ρa ≤ ρt/(1 - ρt). If xa ≠ 0 and ρa < 1, then x ≠ 0 and ρa/(1 + ρa) ≤ ρt ≤ ρa/(1 - ρa).
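The bounds in Proposition 1 are easy to check numerically; the following Python sketch uses the values from Example 1.

    x, xa = 59.5, 59.7

    rho_t = abs(xa / x - 1)      # 0.003361...
    rho_a = abs(x / xa - 1)      # 0.003350...

    lower = rho_t / (1 + rho_t)  # 0.003350...
    upper = rho_t / (1 - rho_t)  # 0.003372...

    # rho_a must lie in [lower, upper]; here xa > x, so rho_a in fact
    # coincides with the lower bound.
    print(lower, rho_a, upper)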
