Linear Algebra: Graduate Level Problems and Solutions


Igor Yanovsky, 2005


Disclaimer: This handbook is intended to assist graduate students with qualifying examination preparation. Please be aware, however, that the handbook almost certainly contains typos as well as incorrect or inaccurate solutions. I cannot be held responsible for any inaccuracies contained in this handbook.


Contents

1 Basic Theory 4
1.1 Linear Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Linear Maps as Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Dimension and Isomorphism . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Matrix Representations Redux . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.6 Linear Maps and Subspaces . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.7 Dimension Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.8 Matrix Calculations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.9 Diagonalizability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2 Inner Product Spaces 8
2.1 Inner Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Orthonormal Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2.1 Gram-Schmidt procedure . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.2 QR Factorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Orthogonal Complements and Projections . . . . . . . . . . . . . . . . . 9

3 Linear Maps on Inner Product Spaces 11
3.1 Adjoint Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Self-Adjoint Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.3 Polarization and Isometries . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.4 Unitary and Orthogonal Operators . . . . . . . . . . . . . . . . . . . . . 14
3.5 Spectral Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.6 Normal Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.7 Unitary Equivalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.8 Triangulability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

4 Determinants 17
4.1 Characteristic Polynomial . . . . . . . . . . . . . . . . . . . . . . . . . . 17

5 Linear Operators 18
5.1 Dual Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
5.2 Dual Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

6 Problems 23


1 Basic Theory

1.1 Linear Maps

Lemma. If A ∈ Mat_{m×n}(F) and B ∈ Mat_{n×m}(F), then

tr(AB) = tr(BA).

Proof. Note that the (i, i) entry in AB is ∑_{j=1}^{n} a_{ij}b_{ji}, while the (j, j) entry in BA is ∑_{i=1}^{m} b_{ji}a_{ij}. Thus

tr(AB) = ∑_{i=1}^{m} ∑_{j=1}^{n} a_{ij}b_{ji},   tr(BA) = ∑_{j=1}^{n} ∑_{i=1}^{m} b_{ji}a_{ij},

and the two double sums agree after exchanging the order of summation.
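The trace identity can be checked numerically; a minimal plain-Python sketch (the helper names `matmul` and `tr` and the example matrices are illustrative, not from the text):

```python
def matmul(A, B):
    """Multiply an m x n matrix A by an n x p matrix B (lists of rows)."""
    n, p = len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(len(A))]

def tr(M):
    """Trace: sum of the diagonal entries of a square matrix."""
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2, 3],
     [4, 5, 6]]        # 2 x 3, so AB is 2 x 2 while BA is 3 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]         # 3 x 2

print(tr(matmul(A, B)), tr(matmul(B, A)))  # both 212, as the lemma predicts
```

Note that AB and BA have different sizes here, yet their traces coincide.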

1.2 Linear Maps as Matrices

Example. Let Pₙ = {a₀ + a₁t + ··· + a_n tⁿ : a₀, a₁, . . . , a_n ∈ F} be the space of polynomials of degree ≤ n and D : V → V the differentiation map

D(a₀ + a₁t + ··· + a_n tⁿ) = a₁ + 2a₂t + ··· + n a_n tⁿ⁻¹.

If we use the basis 1, t, . . . , tⁿ for V, then we see that D(tᵏ) = ktᵏ⁻¹, and thus the (n + 1) × (n + 1) matrix representation is computed via

[D(1) D(t) D(t²) ··· D(tⁿ)] = [0 1 2t ··· ntⁿ⁻¹] = [1 t t² ··· tⁿ] ·

    [ 0  1  0  ···  0 ]
    [ 0  0  2  ···  0 ]
    [ ⋮  ⋮  ⋮  ⋱   ⋮ ]
    [ 0  0  0  ···  n ]
    [ 0  0  0  ···  0 ]
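The superdiagonal structure of [D] is easy to reproduce in code; a sketch for n = 3, with a hypothetical helper `differentiate` acting on coefficient vectors:

```python
n = 3
M = [[0] * (n + 1) for _ in range(n + 1)]
for k in range(1, n + 1):
    M[k - 1][k] = k                  # D(t^k) = k t^(k-1): superdiagonal entry k

def differentiate(coeffs):
    """Apply [D] to a coefficient vector (a_0, ..., a_n)."""
    return [sum(M[i][j] * coeffs[j] for j in range(n + 1))
            for i in range(n + 1)]

# d/dt (1 + 2t + 3t^2 + 4t^3) = 2 + 6t + 12t^2
print(differentiate([1, 2, 3, 4]))   # [2, 6, 12, 0]
```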

1.3 Dimension and Isomorphism

A linear map L : V → W is an isomorphism if we can find K : W → V such that LK = I_W and KL = I_V.

         L
    V ------> W
    |         |
 I_V|         |I_W
    ↓         ↓
    V <------ W
         K


Theorem. V and W are isomorphic ⟺ there is a bijective linear map L : V → W.

Proof. (⟹) If V and W are isomorphic, we can find linear maps L : V → W and K : W → V so that LK = I_W and KL = I_V. Then any y ∈ W satisfies y = I_W(y) = L(K(y)), so we can let x = K(y), which means L is onto. If L(x₁) = L(x₂), then x₁ = I_V(x₁) = KL(x₁) = KL(x₂) = I_V(x₂) = x₂, which means L is 1-1. (⟸) Assume L : V → W is linear and a bijection. Then we have an inverse map L⁻¹ which satisfies L ∘ L⁻¹ = I_W and L⁻¹ ∘ L = I_V. In order for this inverse map to be allowable as K, we need to check that it is linear. Select α₁, α₂ ∈ F and y₁, y₂ ∈ W. Let x_i = L⁻¹(y_i), so that L(x_i) = y_i. Then we have

L⁻¹(α₁y₁ + α₂y₂) = L⁻¹(α₁L(x₁) + α₂L(x₂)) = L⁻¹(L(α₁x₁ + α₂x₂)) = I_V(α₁x₁ + α₂x₂) = α₁x₁ + α₂x₂ = α₁L⁻¹(y₁) + α₂L⁻¹(y₂).

Theorem. If Fᵐ and Fⁿ are isomorphic over F, then n = m.

Proof. Suppose we have L : Fᵐ → Fⁿ and K : Fⁿ → Fᵐ such that LK = I_{Fⁿ} and KL = I_{Fᵐ}. Then L ∈ Mat_{n×m}(F) and K ∈ Mat_{m×n}(F). Thus

n = tr(I_{Fⁿ}) = tr(LK) = tr(KL) = tr(I_{Fᵐ}) = m.

Define the dimension of a vector space V over F as dim_F V = n if V is isomorphic to Fⁿ.

Remark. dim_C C = 1, dim_R C = 2, dim_Q R = ∞.

The set of all linear maps {L : V → W} over F is itself a vector space, denoted hom_F(V, W).

Corollary. If V and W are finite dimensional vector spaces over F, then hom_F(V, W) is also finite dimensional and

dim_F hom_F(V, W) = (dim_F W) · (dim_F V).

Proof. By choosing bases for V and W there is a natural mapping

hom_F(V, W) → Mat_{(dim_F W)×(dim_F V)}(F) ≅ F^{(dim_F W)·(dim_F V)}.

This map is both 1-1 and onto, as the matrix representation uniquely determines the linear map and every matrix yields a linear map.


1.4 Matrix Representations Redux

L : V → W, bases x₁, . . . , x_m for V and y₁, . . . , y_n for W. The matrix for L interpreted as a linear map is [L] : Fᵐ → Fⁿ. The basis isomorphisms defined by the choices of basis for V and W are¹ [x₁ ··· x_m] : Fᵐ → V and [y₁ ··· y_n] : Fⁿ → W.

              L
      V ------------> W
      ↑               ↑
  [x₁···x_m]      [y₁···y_n]
      Fᵐ -----------> Fⁿ
             [L]

L ∘ [x₁ ··· x_m] = [y₁ ··· y_n] [L]

1.5 Subspaces

A nonempty subset M ⊂ V is a subspace if for all α, β ∈ F and x, y ∈ M we have αx + βy ∈ M. In particular, 0 ∈ M. If M, N ⊂ V are subspaces, then we can form two new subspaces, the sum and the intersection:

M + N = {x + y : x ∈ M, y ∈ N},   M ∩ N = {x : x ∈ M and x ∈ N}.

M and N have trivial intersection if M ∩ N = {0}. M and N are transversal if M + N = V. Two subspaces are complementary if they are transversal and have trivial intersection. M, N form a direct sum decomposition of V if M ∩ N = {0} and M + N = V; write V = M ⊕ N.

Example. V = R². M = {(x, 0) : x ∈ R}, the x-axis, and N = {(0, y) : y ∈ R}, the y-axis.

Example. V = R². M = {(x, 0) : x ∈ R}, the x-axis, and N = {(y, y) : y ∈ R}, a diagonal. Note (x, y) = (x − y, 0) + (y, y), which gives V = M ⊕ N.

If we have a direct sum decomposition V = M ⊕ N, then we can construct the projection of V onto M along N. The map E : V → V is defined by writing each z ∈ V uniquely as z = x + y with x ∈ M, y ∈ N, and mapping z to x: E(z) = E(x + y) = x. Thus im(E) = M and ker(E) = N.

Definition. If V is a vector space, a projection of V is a linear operator E on V such that E2 = E.
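For the x-axis/diagonal example above, the projection onto M along N can be written down explicitly and its defining properties checked (a minimal sketch; the function name `E` mirrors the text):

```python
def E(v):
    """Project (x, y) onto the x-axis along the diagonal, using
    the decomposition (x, y) = (x - y, 0) + (y, y)."""
    x, y = v
    return (x - y, 0)

v = (5, 2)
assert E(E(v)) == E(v)          # E^2 = E: E is a projection
assert E((3, 0)) == (3, 0)      # M is fixed pointwise: im(E) = M
assert E((4, 4)) == (0, 0)      # the diagonal is killed: ker(E) = N
print(E(v))                     # (3, 0)
```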

¹ [x₁ ··· x_m] : Fᵐ → V means [x₁ ··· x_m](α₁, . . . , α_m)ᵀ = α₁x₁ + ··· + α_m x_m.


1.6 Linear Maps and Subspaces

L : V → W is a linear map over F. The kernel or nullspace of L is

ker(L) = N(L) = {x ∈ V : L(x) = 0}.

The image or range of L is

im(L) = R(L) = L(V) = {L(x) : x ∈ V}.

Lemma. ker(L) is a subspace of V and im(L) is a subspace of W.

Proof. Assume that α₁, α₂ ∈ F and that x₁, x₂ ∈ ker(L); then L(α₁x₁ + α₂x₂) = α₁L(x₁) + α₂L(x₂) = 0, so α₁x₁ + α₂x₂ ∈ ker(L). Similarly, if α₁, α₂ ∈ F and x₁, x₂ ∈ V, then α₁L(x₁) + α₂L(x₂) = L(α₁x₁ + α₂x₂) ∈ im(L).

Lemma. L is 1-1 ⟺ ker(L) = {0}.

Proof. We know that L(0) = L(0 · 0) = 0 · L(0) = 0, so if L is 1-1, then L(x) = 0 = L(0) implies x = 0; hence ker(L) = {0}. Conversely, assume ker(L) = {0}. If L(x₁) = L(x₂), then linearity of L gives L(x₁ − x₂) = 0, so x₁ − x₂ ∈ ker(L) = {0}, which shows that x₁ = x₂ as desired.

Lemma. Suppose L : V → W with dim V = dim W. Then the following are equivalent: L is 1-1; L is onto; dim im(L) = dim V.

Proof. From the dimension formula, we have

dim V = dim ker(L) + dim im(L).

Thus L is 1-1 ⟺ ker(L) = {0} ⟺ dim ker(L) = 0 ⟺ dim im(L) = dim V = dim W ⟺ im(L) = W (a subspace of full dimension is the whole space), that is, L is onto.

1.7 Dimension Formula

Theorem. Let V be finite dimensional and L : V → W a linear map, all over F. Then im(L) is finite dimensional and

dim_F V = dim_F ker(L) + dim_F im(L).

Proof. We know that dim ker(L) ≤ dim V and that ker(L) has a complement M ⊂ V of dimension k = dim V − dim ker(L). Since M ∩ ker(L) = {0}, the map L must be 1-1 when restricted to M. Moreover L(M) = L(V) = im(L), since L vanishes on ker(L). Thus L|_M : M → im(L) is an isomorphism, i.e. dim im(L) = dim M = k.
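The dimension formula can be verified numerically for a concrete matrix; a sketch using NumPy's `matrix_rank` (the example matrix is an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],    # dependent row, so the rank drops
              [0., 1., 0., 1.]])

rank = np.linalg.matrix_rank(A)    # dim im(L)
nullity = A.shape[1] - rank        # dim ker(L), by the dimension formula
print(rank, nullity)               # 2 2, and 2 + 2 = dim V = 4
```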

1.8 Matrix Calculations

Change of Basis Matrix. Given two bases of R², β₁ = {x₁ = (1, 1), x₂ = (1, 0)} and β₂ = {y₁ = (4, 3), y₂ = (3, 2)}, we find the change-of-basis matrix P from β₁ to β₂.

Write y₁ as a linear combination of x₁ and x₂, y₁ = ax₁ + bx₂. Then (4, 3) = a(1, 1) + b(1, 0) gives a = 3, b = 1, so y₁ = 3x₁ + x₂.

Write y₂ as a linear combination of x₁ and x₂, y₂ = ax₁ + bx₂. Then (3, 2) = a(1, 1) + b(1, 0) gives a = 2, b = 1, so y₂ = 2x₁ + x₂.

Write the coordinates of y₁ and y₂ as the columns of P:

    P = [ 3  2 ]
        [ 1  1 ].
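The same change-of-basis computation can be automated; a 2×2-only sketch using Cramer's rule (the helper name `coords` is hypothetical):

```python
def coords(y, x1, x2):
    """Solve y = a*x1 + b*x2 by Cramer's rule (R^2 only)."""
    det = x1[0] * x2[1] - x2[0] * x1[1]
    a = (y[0] * x2[1] - x2[0] * y[1]) / det
    b = (x1[0] * y[1] - y[0] * x1[1]) / det
    return a, b

x1, x2 = (1, 1), (1, 0)
columns = [coords((4, 3), x1, x2), coords((3, 2), x1, x2)]
print(columns)      # [(3.0, 1.0), (2.0, 1.0)]: the columns of P
```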


1.9 Diagonalizability

Definition. Let T be a linear operator on the finite-dimensional space V . T is diagonalizable if there is a basis for V consisting of eigenvectors of T .

Theorem. Let v₁, . . . , v_n be nonzero eigenvectors of distinct eigenvalues λ₁, . . . , λ_n. Then {v₁, . . . , v_n} is linearly independent.

Alternative Statement. If L has n distinct eigenvalues λ₁, . . . , λ_n, then L is diagonalizable. (Proof is in the exercises.)

Definition. Let L be a linear operator on a finite-dimensional vector space V, and let λ be an eigenvalue of L. Define E_λ = {x ∈ V : L(x) = λx} = ker(L − λI_V). The set E_λ is called the eigenspace of L corresponding to the eigenvalue λ. The algebraic multiplicity m_λ is defined to be the multiplicity of λ as a root of the characteristic polynomial of L, while the geometric multiplicity of λ is defined to be the dimension of its eigenspace, dim E_λ = dim(ker(L − λI_V)). Also,

dim(ker(L − λI_V)) ≤ m_λ.

Eigenspaces. A vector v ≠ 0 with (A − λI)v = 0 is an eigenvector for λ.

Generalized Eigenspaces. Let λ be an eigenvalue of A with algebraic multiplicity m. A vector v ≠ 0 with (A − λI)ᵐv = 0 is a generalized eigenvector for λ.
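Diagonalization by a basis of eigenvectors can be checked numerically; a sketch with NumPy on an arbitrary 2×2 example with distinct eigenvalues:

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])           # upper triangular: eigenvalues 2 and 3

vals, vecs = np.linalg.eig(A)
# With distinct eigenvalues the eigenvector matrix is invertible,
# so conjugating by it diagonalizes A.
D = np.linalg.inv(vecs) @ A @ vecs
print(np.round(D, 10))
```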

2 Inner Product Spaces

2.1 Inner Products

The three important properties for complex inner products are:
1) (x|x) = ‖x‖² > 0 unless x = 0.
2) (x|y) is the complex conjugate of (y|x) (conjugate symmetry).
3) For each y ∈ V the map x ↦ (x|y) is linear.

The inner product on Cⁿ is defined by

(x|y) = xᵗȳ.

Consequences: (α₁x₁ + α₂x₂|y) = α₁(x₁|y) + α₂(x₂|y), (x|β₁y₁ + β₂y₂) = β̄₁(x|y₁) + β̄₂(x|y₂), (λx|λx) = λλ̄(x|x) = |λ|²(x|x).
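The defining properties can be checked directly for the standard inner product on Cⁿ (a minimal sketch; `inner` is a hypothetical helper):

```python
def inner(x, y):
    """Standard inner product on C^n: (x|y) = sum_i x_i * conj(y_i)."""
    return sum(a * b.conjugate() for a, b in zip(x, y))

x = [1 + 2j, 3 - 1j]
y = [2 - 1j, 1j]
lam = 2 + 1j

assert inner(x, x).imag == 0 and inner(x, x).real > 0   # positivity
assert inner(x, y) == inner(y, x).conjugate()           # conjugate symmetry
assert abs(inner([lam * c for c in x], y) - lam * inner(x, y)) < 1e-12  # linear in x
print(inner(x, y))
```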

2.2 Orthonormal Bases

Lemma. Let e₁, . . . , e_n be orthonormal. Then e₁, . . . , e_n are linearly independent, and any element x ∈ span{e₁, . . . , e_n} has the expansion

x = (x|e₁)e₁ + ··· + (x|e_n)e_n.

Proof. Note that if x = α₁e₁ + ··· + α_n e_n, then

(x|e_i) = (α₁e₁ + ··· + α_n e_n | e_i) = α₁(e₁|e_i) + ··· + α_n(e_n|e_i) = α₁δ_{1i} + ··· + α_n δ_{ni} = α_i.
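The expansion x = (x|e₁)e₁ + ··· + (x|e_n)e_n can be checked numerically for an orthonormal pair in R² (a NumPy sketch; the basis comes from normalizing (1, 1) and (1, −1)):

```python
import numpy as np

# An orthonormal basis of R^2 obtained by normalizing (1, 1) and (1, -1).
e1 = np.array([1.0, 1.0]) / np.sqrt(2)
e2 = np.array([1.0, -1.0]) / np.sqrt(2)

x = np.array([3.0, 5.0])
coeffs = [x @ e1, x @ e2]                        # (x|e1), (x|e2)
reconstruction = coeffs[0] * e1 + coeffs[1] * e2
print(np.allclose(reconstruction, x))            # True
```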
