
4.3 Least Squares Approximations

It often happens that Ax = b has no solution. The usual reason is: too many equations. The matrix has more rows than columns. There are more equations than unknowns (m is greater than n). The n columns span a small part of m-dimensional space. Unless all measurements are perfect, b is outside that column space. Elimination reaches an impossible equation and stops. But we can't stop just because measurements include noise.

To repeat: We cannot always get the error e = b - Ax down to zero. When e is zero, x is an exact solution to Ax = b. When the length of e is as small as possible, x̂ is a least squares solution. Our goal in this section is to compute x̂ and use it. These are real problems and they need an answer.

The previous section emphasized p (the projection). This section emphasizes x̂ (the least squares solution). They are connected by p = Ax̂. The fundamental equation is still A^T A x̂ = A^T b. Here is a short unofficial way to reach this equation:

When Ax = b has no solution, multiply by A^T and solve A^T A x̂ = A^T b.

Example 1 A crucial application of least squares is fitting a straight line to m points. Start with three points: Find the closest line to the points (0, 6), (1, 0), and (2, 0).

No straight line b = C + Dt goes through those three points. We are asking for two numbers C and D that satisfy three equations. Here are the equations at t = 0, 1, 2 to match the given values b = 6, 0, 0:

t = 0   The first point is on the line b = C + Dt if   C + D · 0 = 6
t = 1   The second point is on the line b = C + Dt if  C + D · 1 = 0
t = 2   The third point is on the line b = C + Dt if   C + D · 2 = 0.

This 3 by 2 system has no solution: b = (6, 0, 0) is not a combination of the columns (1, 1, 1) and (0, 1, 2). Read off A, x, and b from those equations:

$$A = \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{bmatrix}, \qquad x = \begin{bmatrix} C \\ D \end{bmatrix}, \qquad b = \begin{bmatrix} 6 \\ 0 \\ 0 \end{bmatrix}. \qquad Ax = b \text{ is not solvable.}$$

The same numbers were in Example 3 in the last section. We computed x̂ = (5, -3). Those numbers are the best C and D, so 5 - 3t will be the best line for the 3 points. We must connect projections to least squares, by explaining why A^T A x̂ = A^T b.
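As a numerical check (not part of the text), here is a minimal NumPy sketch that builds A^T A x̂ = A^T b for this example and solves it; it should reproduce x̂ = (5, -3), the projection p = (5, 2, -1), and the errors e = (1, -2, 1).

```python
# A quick numerical check of Example 1 (a sketch, assuming NumPy is available;
# the names A, b, xhat are chosen here only for illustration).
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])   # columns (1, 1, 1) and (0, 1, 2)
b = np.array([6.0, 0.0, 0.0])

# Solve the normal equations  A^T A xhat = A^T b
xhat = np.linalg.solve(A.T @ A, A.T @ b)
p = A @ xhat          # projection of b onto the column space
e = b - p             # error vector, perpendicular to the columns

print(xhat)           # [ 5. -3.]  -> best line  b = 5 - 3t
print(p)              # [ 5.  2. -1.]
print(e)              # [ 1. -2.  1.]
```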

In practical problems, there could easily be m = 100 points instead of m = 3. They don't exactly match any straight line C + Dt. Our numbers 6, 0, 0 exaggerate the error so you can see e_1, e_2, and e_3 in Figure 4.6.

Minimizing the Error

How do we make the error e = b - Ax as small as possible? This is an important question with a beautiful answer. The best x (called x̂) can be found by geometry or algebra or calculus: a 90° angle, or project using P, or set the derivative of the error to zero.


By geometry Every Ax lies in the plane of the columns (1, 1, 1) and (0, 1, 2). In that plane, we look for the point closest to b. The nearest point is the projection p.

The best choice for Ax̂ is p. The smallest possible error is e = b - p. The three points at heights (p_1, p_2, p_3) do lie on a line, because p is in the column space. In fitting a straight line, x̂ gives the best choice for (C, D).

By algebra Every vector b splits into two parts. The part in the column space is p. The perpendicular part in the nullspace of A^T is e. There is an equation we cannot solve (Ax = b). There is an equation Ax̂ = p we do solve (by removing e):

Ax = b = p + e is impossible;   Ax̂ = p is solvable.    (1)

The solution to Ax̂ = p leaves the least possible error (which is e):

Squared length for any x:   ‖Ax - b‖² = ‖Ax - p‖² + ‖e‖².    (2)

This is the law c² = a² + b² for a right triangle. The vector Ax - p in the column space is perpendicular to e in the left nullspace. We reduce Ax - p to zero by choosing x to be x̂. That leaves the smallest possible error e = (e_1, e_2, e_3).

Notice what "smallest" means. The squared length of Ax - b is minimized:

The least squares solution x̂ makes E = ‖Ax - b‖² as small as possible.
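The right-triangle identity (2) is easy to test numerically. The following sketch (assuming NumPy and the Example 1 data; the trial vector x is an arbitrary choice for illustration) checks that ‖Ax - b‖² = ‖Ax - p‖² + ‖e‖² holds for any x.

```python
# A small numerical illustration of equation (2): for any x,
# ||Ax - b||^2 = ||Ax - p||^2 + ||e||^2, since Ax - p is perpendicular to e.
import numpy as np

A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

xhat = np.linalg.solve(A.T @ A, A.T @ b)
p, e = A @ xhat, b - A @ xhat

x = np.array([2.0, 7.0])                     # an arbitrary trial x
lhs = np.linalg.norm(A @ x - b)**2
rhs = np.linalg.norm(A @ x - p)**2 + np.linalg.norm(e)**2
print(np.isclose(lhs, rhs))                  # True: the right-triangle law
```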

Figure 4.6: Best line and projection: Two pictures, same problem. The line has heights p = (5, 2, -1) with errors e = (1, -2, 1). The equations A^T A x̂ = A^T b give x̂ = (5, -3). The best line is b = 5 - 3t and the projection is p = 5a_1 - 3a_2.

Figure 4.6a shows the closest line. It misses by distances e_1, e_2, e_3 = 1, -2, 1. Those are vertical distances. The least squares line minimizes E = e_1² + e_2² + e_3².


Figure 4.6b shows the same problem in 3-dimensional space (b-p-e space). The vector b is not in the column space of A. That is why we could not solve Ax = b. No line goes through the three points. The smallest possible error is the perpendicular vector e. This is e = b - Ax̂, the vector of errors (1, -2, 1) in the three equations. Those are the distances from the best line. Behind both figures is the fundamental equation A^T A x̂ = A^T b.

Notice that the errors 1, -2, 1 add to zero. The error e = (e_1, e_2, e_3) is perpendicular to the first column (1, 1, 1) in A. The dot product gives e_1 + e_2 + e_3 = 0.

By calculus Most functions are minimized by calculus! The graph bottoms out and the derivative in every direction is zero. Here the error function E to be minimized is a sum of squares e_1² + e_2² + e_3² (the square of the error in each equation):

E = ‖Ax - b‖² = (C + D · 0 - 6)² + (C + D · 1)² + (C + D · 2)².    (3)

The unknowns are C and D. With two unknowns there are two derivatives--both zero at the minimum. They are "partial derivatives" because ∂E/∂C treats D as constant and ∂E/∂D treats C as constant:

∂E/∂C = 2(C + D · 0 - 6) + 2(C + D · 1) + 2(C + D · 2) = 0

∂E/∂D = 2(C + D · 0 - 6)(0) + 2(C + D · 1)(1) + 2(C + D · 2)(2) = 0.

∂E/∂D contains the extra factors 0, 1, 2 from the chain rule. (The last derivative from (C + 2D)² was 2 times C + 2D times that extra 2.) In the C derivative the corresponding factors are 1, 1, 1, because C is always multiplied by 1. It is no accident that 1, 1, 1 and 0, 1, 2 are the columns of A.

Now cancel 2 from every term and collect all C's and all D's:

The C derivative is zero:  3C + 3D = 6
The D derivative is zero:  3C + 5D = 0

This matrix $\begin{bmatrix} 3 & 3 \\ 3 & 5 \end{bmatrix}$ is A^T A.    (4)

These equations are identical with A^T A x̂ = A^T b. The best C and D are the components of x̂. The equations from calculus are the same as the "normal equations" from linear algebra. These are the key equations of least squares:

The partial derivatives of ‖Ax - b‖² are zero when A^T A x̂ = A^T b.

The solution is C = 5 and D = -3. Therefore b = 5 - 3t is the best line--it comes closest to the three points. At t = 0, 1, 2 this line goes through p = 5, 2, -1. It could not go through b = 6, 0, 0. The errors are 1, -2, 1. This is the vector e!
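For readers who want to retrace the calculus route by machine, here is a sketch using SymPy (an assumption; the text itself does no symbolic computation). It differentiates E(C, D), sets both partial derivatives to zero, and recovers C = 5, D = -3.

```python
# The calculus route as a symbolic sketch: minimize E(C, D) by setting
# both partial derivatives to zero (these are the normal equations).
import sympy as sp

C, D = sp.symbols('C D')
E = (C + D*0 - 6)**2 + (C + D*1)**2 + (C + D*2)**2

normal_eqs = [sp.Eq(sp.expand(sp.diff(E, C)), 0),
              sp.Eq(sp.expand(sp.diff(E, D)), 0)]
solution = sp.solve(normal_eqs, [C, D])

print(normal_eqs)   # [Eq(6*C + 6*D - 12, 0), Eq(6*C + 10*D, 0)]
print(solution)     # {C: 5, D: -3}
```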

The Big Picture

The key figure of this book shows the four subspaces and the true action of a matrix. The vector x on the left side of Figure 4.3 went to b = Ax on the right side. In that figure x was split into x_r + x_n. There were many solutions to Ax = b.


Figure 4.7: The projection p = Ax̂ is closest to b, so x̂ minimizes E = ‖b - Ax‖².

In this section the situation is just the opposite. There are no solutions to Ax = b. Instead of splitting up x we are splitting up b. Figure 4.7 shows the big picture for least squares. Instead of Ax = b we solve Ax̂ = p. The error e = b - p is unavoidable.

Notice how the nullspace N(A) is very small--just one point. With independent columns, the only solution to Ax = 0 is x = 0. Then A^T A is invertible. The equation A^T A x̂ = A^T b fully determines the best vector x̂. The error has A^T e = 0.

Chapter 7 will have the complete picture--all four subspaces included. Every x splits into x_r + x_n, and every b splits into p + e. The best solution is x̂_r in the row space. We can't help e and we don't want x_n--this leaves Ax̂ = p.

Fitting a Straight Line

Fitting a line is the clearest application of least squares. It starts with m > 2 points, hopefully near a straight line. At times t_1, ..., t_m those m points are at heights b_1, ..., b_m. The best line C + Dt misses the points by vertical distances e_1, ..., e_m. No line is perfect, and the least squares line minimizes E = e_1² + · · · + e_m².

The first example in this section had three points in Figure 4.6. Now we allow m points (and m can be large). The two components of x̂ are still C and D.

A line goes through the m points when we exactly solve Ax = b. Generally we can't do it. Two unknowns C and D determine a line, so A has only n = 2 columns. To fit the m points, we are trying to solve m equations (and we only want two!):

Ax = b is

$$\begin{matrix} C + Dt_1 = b_1 \\ C + Dt_2 = b_2 \\ \vdots \\ C + Dt_m = b_m \end{matrix} \qquad \text{with} \qquad A = \begin{bmatrix} 1 & t_1 \\ 1 & t_2 \\ \vdots & \vdots \\ 1 & t_m \end{bmatrix}.$$    (5)


The column space is so thin that almost certainly b is outside of it. When b happens to lie in the column space, the points happen to lie on a line. In that case b = p. Then Ax = b is solvable and the errors are e = (0, ..., 0).

The closest line C + Dt has heights p_1, ..., p_m with errors e_1, ..., e_m.

Solve A^T A x̂ = A^T b for x̂ = (C, D). The errors are e_i = b_i - C - Dt_i.

Fitting points by a straight line is so important that we give the two equations A^T A x̂ = A^T b, once and for all. The two columns of A are independent (unless all times t_i are the same). So we turn to least squares and solve A^T A x̂ = A^T b.

Dot-product matrix

$$A^{T}A = \begin{bmatrix} 1 & \cdots & 1 \\ t_1 & \cdots & t_m \end{bmatrix} \begin{bmatrix} 1 & t_1 \\ \vdots & \vdots \\ 1 & t_m \end{bmatrix} = \begin{bmatrix} m & \sum t_i \\ \sum t_i & \sum t_i^2 \end{bmatrix}.$$    (6)

On the right side of the normal equation is the 2 by 1 vector A^T b:

$$A^{T}b = \begin{bmatrix} 1 & \cdots & 1 \\ t_1 & \cdots & t_m \end{bmatrix} \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix} = \begin{bmatrix} \sum b_i \\ \sum t_i b_i \end{bmatrix}.$$    (7)

In a specific problem, these numbers are given. The best x̂ = (C, D) is in equation (9).

The line C + Dt minimizes e_1² + · · · + e_m² = ‖Ax - b‖² when A^T A x̂ = A^T b:

$$\begin{bmatrix} m & \sum t_i \\ \sum t_i & \sum t_i^2 \end{bmatrix} \begin{bmatrix} C \\ D \end{bmatrix} = \begin{bmatrix} \sum b_i \\ \sum t_i b_i \end{bmatrix}.$$    (8)

The vertical errors at the m points on the line are the components of e = b - p. This error vector (the residual) b - Ax̂ is perpendicular to the columns of A (geometry). The error is in the nullspace of A^T (linear algebra). The best x̂ = (C, D) minimizes the total error E, the sum of squares:

E(x) = ‖Ax - b‖² = (C + Dt_1 - b_1)² + · · · + (C + Dt_m - b_m)².

When calculus sets the derivatives ∂E/∂C and ∂E/∂D to zero, it produces A^T A x̂ = A^T b.

Other least squares problems have more than two unknowns. Fitting by the best parabola has n = 3 coefficients C, D, E (see below). In general we are fitting m data points by n parameters x_1, ..., x_n. The matrix A has n columns and n < m. The derivatives of ‖Ax - b‖² give the n equations A^T A x̂ = A^T b. The derivative of a square is linear. This is why the method of least squares is so popular.
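As an illustration (not from the text), the sums in equations (6) and (7) translate directly into a short NumPy routine. The hypothetical helper fit_line below builds the 2 by 2 system (8) and solves it; numpy.linalg.lstsq applied to the full m by 2 matrix A gives the same answer.

```python
# A sketch of the general line fit, assuming NumPy: build the 2-by-2 normal
# equations from the sums in (6) and (7), then solve the system (8).
import numpy as np

def fit_line(t, b):
    """Return (C, D) minimizing the sum of (C + D*t_i - b_i)^2."""
    t, b = np.asarray(t, float), np.asarray(b, float)
    m = len(t)
    ATA = np.array([[m,       t.sum()],
                    [t.sum(), (t**2).sum()]])      # equation (6)
    ATb = np.array([b.sum(), (t*b).sum()])         # equation (7)
    return np.linalg.solve(ATA, ATb)               # equation (8)

t = np.array([0.0, 1.0, 2.0])
b = np.array([6.0, 0.0, 0.0])
print(fit_line(t, b))                              # [ 5. -3.]

# Cross-check with the built-in least squares solver on the full matrix A.
A = np.column_stack([np.ones_like(t), t])
print(np.linalg.lstsq(A, b, rcond=None)[0])        # same answer
```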

Example 2 A has orthogonal columns when the measurement times t_i add to zero.
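A short numerical sketch of this claim (with hypothetical times and heights, assuming NumPy): when the t_i add to zero, the column of ones is orthogonal to the column of times, so A^T A is diagonal and C and D can be computed separately.

```python
# Sketch: times summing to zero make the columns of A orthogonal,
# so A^T A is diagonal and the normal equations decouple.
import numpy as np

t = np.array([-2.0, 0.0, 2.0])        # hypothetical times with sum zero
b = np.array([1.0, 2.0, 4.0])         # hypothetical heights
A = np.column_stack([np.ones_like(t), t])

print(A.T @ A)                        # diagonal: [[3, 0], [0, 8]]
C = b.sum() / len(t)                  # C = (sum of b_i) / m
D = (t @ b) / (t @ t)                 # D = (sum of t_i b_i) / (sum of t_i^2)
print(C, D)
print(np.linalg.solve(A.T @ A, A.T @ b))   # same (C, D) from the normal equations
```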
