Chapter 2

Matrices and Linear Algebra

2.1 Basics

Definition 2.1.1. A matrix is an m × n array of scalars from a given field F. The individual values in the matrix are called entries.

Examples.

$$A = \begin{pmatrix} 2 & -1 \\ 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}$$

The size of the array is written as m × n, where m is the number of rows and n is the number of columns:

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$

Notation: an uppercase letter such as A denotes a matrix, and the corresponding lowercase letter a denotes an entry of that matrix, with a ∈ F.

Special matrices


(1) If m = n, the matrix is called square. In this case we have:

(1a) A matrix A is said to be diagonal if

$a_{ij} = 0$ for $i \neq j$.

(1b) A diagonal matrix A may be denoted by $\mathrm{diag}(d_1, d_2, \dots, d_n)$, where

$a_{ii} = d_i$ and $a_{ij} = 0$ for $j \neq i$.

The diagonal matrix diag(1, 1, . . . , 1) is called the identity matrix and is usually denoted by

$$I_n = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & & \vdots \\ \vdots & & \ddots & \\ 0 & \cdots & & 1 \end{pmatrix}$$

or simply I, when n is assumed to be known. The matrix 0 = diag(0, . . . , 0) is called the zero matrix.

(1c) A square matrix L is said to be lower triangular if

$\ell_{ij} = 0$ for $i < j$.

(1d) A square matrix U is said to be upper triangular if

$u_{ij} = 0$ for $i > j$.

(1e) A square matrix A is called symmetric if

$a_{ij} = a_{ji}$.

(1f) A square matrix A is called Hermitian if

$a_{ij} = \bar{a}_{ji}$ (where $\bar{z}$ denotes the complex conjugate of $z$).

(1g) $E_{ij}$ has a 1 in the (i, j) position and zeros in all other positions.

(2) A rectangular matrix A is called nonnegative if

$a_{ij} \geq 0$ for all $i, j$.

It is called positive if

$a_{ij} > 0$ for all $i, j$.

Each of these matrices has some special properties, which we will study during this course.
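As a computational aside, the special matrices above are easy to construct and test. The following sketch uses NumPy (an assumption on our part; the text itself is library-agnostic) with its standard `diag`, `eye`, `tril`, and `triu` helpers:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

D = np.diag([1, 2, 3])   # diag(d1, d2, d3): zeros off the diagonal
I = np.eye(3)            # the identity matrix I_3 = diag(1, 1, 1)
L = np.tril(A)           # lower triangular: entries above the diagonal zeroed
U = np.triu(A)           # upper triangular: entries below the diagonal zeroed

def is_symmetric(M):
    # a_ij = a_ji for all i, j
    return np.array_equal(M, M.T)

def is_hermitian(M):
    # a_ij = conjugate(a_ji) for all i, j
    return np.array_equal(M, M.conj().T)

print(is_symmetric(A + A.T))  # True: A + A^T is always symmetric
print(is_symmetric(A))        # False: this A is not symmetric
```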


Definition 2.1.2. The set of all m × n matrices is denoted by $M_{m,n}(F)$, where F is the underlying field (usually R or C). In the case where m = n, we write $M_n(F)$ to denote the matrices of size n × n.

Theorem 2.1.1. $M_{m,n}$ is a vector space with basis given by $E_{ij}$, $1 \leq i \leq m$, $1 \leq j \leq n$.
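Theorem 2.1.1 can be illustrated numerically: every matrix is the sum over i, j of its entry $a_{ij}$ times the basis matrix $E_{ij}$. The sketch below (using NumPy, an assumption on our part) rebuilds a matrix from its coordinates in this basis:

```python
import numpy as np

def E(i, j, m, n):
    """The basis matrix E_ij in M_{m,n} (0-based indices here)."""
    M = np.zeros((m, n))
    M[i, j] = 1.0
    return M

A = np.array([[2.0, -1.0],
              [1.0,  2.0],
              [3.0,  4.0]])
m, n = A.shape

# Reassemble A as the linear combination sum_{i,j} a_ij * E_ij.
S = sum(A[i, j] * E(i, j, m, n) for i in range(m) for j in range(n))
print(np.array_equal(S, A))  # True
```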

Equality, Addition, Multiplication

Definition 2.1.3. Two matrices A and B are equal if and only if they have the same size and

$a_{ij} = b_{ij}$ for all $i, j$.

Definition 2.1.4. If A is any matrix and $\alpha \in F$, then the scalar multiplication $B = \alpha A$ is defined by

$b_{ij} = \alpha a_{ij}$ for all $i, j$.

Definition 2.1.5. If A and B are matrices of the same size, then the sum of A and B is defined by C = A + B, where

$c_{ij} = a_{ij} + b_{ij}$ for all $i, j$.

We can also compute the difference D = A − B by summing A and (−1)B:

$$D = A - B = A + (-1)B.$$

This is matrix subtraction. Matrix addition "inherits" many properties from the field F.

Theorem 2.1.2. If $A, B, C \in M_{m,n}(F)$ and $\alpha, \beta \in F$, then

(1) A + B = B + A (commutativity)

(2) A + (B + C) = (A + B) + C (associativity)

(3) $\alpha(A + B) = \alpha A + \alpha B$ (distributivity of a scalar)

(4) If B = 0 (a matrix of all zeros) then A + B = A + 0 = A

(5) $(\alpha + \beta)A = \alpha A + \beta A$

(6) $\alpha(\beta A) = (\alpha\beta)A$

(7) 0A = 0

(8) $\alpha 0 = 0$.
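These laws can be spot-checked numerically. The following sketch (using NumPy, an assumption on our part; any equal-sized matrices would do) verifies a few of them on concrete matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.array([[2.0, 2.0], [2.0, 2.0]])
a, b = 3.0, -2.0

print(np.array_equal(A + B, B + A))                # commutativity
print(np.array_equal(A + (B + C), (A + B) + C))    # associativity
print(np.array_equal(a * (A + B), a * A + a * B))  # scalar distributivity
print(np.array_equal((a + b) * A, a * A + b * A))  # distributivity over scalars
```

All four lines print `True`; of course a numerical check is not a proof, but it catches transcription errors quickly.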

Definition 2.1.6. If x and $y \in \mathbb{R}^n$,

$$x = (x_1, \dots, x_n), \qquad y = (y_1, \dots, y_n),$$

then the scalar or dot product of x and y is given by

$$\langle x, y \rangle = \sum_{i=1}^{n} x_i y_i.$$

Remark 2.1.1. (i) Alternate notation for the scalar product: $\langle x, y \rangle = x \cdot y$. (ii) The dot product is defined only for vectors of the same length.

Example 2.1.1. Let x = (1, 0, 3, −1) and y = (0, 2, −1, 2). Then $\langle x, y \rangle = 1(0) + 0(2) + 3(-1) - 1(2) = -5$.
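The definition translates directly into code. This plain-Python sketch (no library needed) recomputes Example 2.1.1 and enforces the same-length requirement from Remark 2.1.1:

```python
def dot(x, y):
    # The dot product is defined only for vectors of the same length.
    if len(x) != len(y):
        raise ValueError("dot product requires vectors of the same length")
    return sum(xi * yi for xi, yi in zip(x, y))

x = (1, 0, 3, -1)
y = (0, 2, -1, 2)
print(dot(x, y))  # -5, matching Example 2.1.1
```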

Definition 2.1.7. Suppose A is m × n and B is n × p. Let $r_i(A)$ denote the vector with entries given by the ith row of A, and let $c_j(B)$ denote the vector with entries given by the jth column of B. The product C = AB is the m × p matrix defined by

$$c_{ij} = \langle r_i(A), c_j(B) \rangle,$$

where $r_i(A)$ is the vector in $\mathbb{R}^n$ consisting of the ith row of A and similarly $c_j(B)$ is the vector formed from the jth column of B. Other notation for C = AB:

$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}, \qquad 1 \leq i \leq m, \quad 1 \leq j \leq p.$$

Example 2.1.2. Let

$$A = \begin{pmatrix} 1 & 0 & 1 \\ 3 & 2 & 1 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 2 & 1 \\ 3 & 0 \\ -1 & 1 \end{pmatrix}.$$

Then

$$AB = \begin{pmatrix} 1 & 2 \\ 11 & 4 \end{pmatrix}.$$
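Definition 2.1.7 can be implemented directly as a triple loop. The sketch below (plain Python, not optimized; real code would use a linear algebra library) reproduces Example 2.1.2:

```python
def matmul(A, B):
    # c_ij is the dot product of row i of A with column j of B
    # (the sum over k of a_ik * b_kj).
    m, n = len(A), len(A[0])
    n2, p = len(B), len(B[0])
    if n != n2:
        raise ValueError("inner dimensions must agree")
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 0, 1],
     [3, 2, 1]]
B = [[ 2, 1],
     [ 3, 0],
     [-1, 1]]
print(matmul(A, B))  # [[1, 2], [11, 4]], as in Example 2.1.2
```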


Properties of matrix multiplication

(1) If AB exists, does it follow that BA exists and AB = BA? The answer is usually no. First, AB and BA both exist if and only if $A \in M_{m,n}(F)$ and $B \in M_{n,m}(F)$. Even if this is so, the sizes of AB and BA are different (AB is m × m and BA is n × n) unless m = n. And even if m = n we may have $AB \neq BA$. See the examples below: the products may be different sizes, and when they are the same size (i.e. A and B are square) the entries may still differ.

$$A = \begin{pmatrix} 1 & 2 \end{pmatrix}, \quad B = \begin{pmatrix} -1 \\ 1 \end{pmatrix}, \quad AB = \begin{pmatrix} 1 \end{pmatrix}, \quad BA = \begin{pmatrix} -1 & -2 \\ 1 & 2 \end{pmatrix}$$

$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad B = \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}, \quad AB = \begin{pmatrix} -1 & 3 \\ -3 & 7 \end{pmatrix}, \quad BA = \begin{pmatrix} 2 & 2 \\ 3 & 4 \end{pmatrix}$$

(2) If A is square we define

$$A^1 = A, \quad A^2 = AA, \quad A^3 = A^2 A = AAA, \quad \dots, \quad A^n = A^{n-1} A = A \cdots A \ (n \text{ factors}).$$

(3) $I = \mathrm{diag}(1, \dots, 1)$. If $A \in M_{m,n}(F)$ then $A I_n = A$ and $I_m A = A$.

Theorem 2.1.3 (Matrix Multiplication Rules). Assume A, B, and C are matrices for which all products below make sense. Then

(1) A(BC) = (AB)C

(2) $A(B \pm C) = AB \pm AC$ and $(A \pm B)C = AC \pm BC$

(3) AI = A and IA = A

(4) c(AB) = (cA)B

(5) A0 = 0 and 0B = 0


(6) For A square, $A^r A^s = A^s A^r$ for all integers $r, s \geq 1$.

Fact: If AC and BC are equal, it does not follow that A = B. See Exercise 60.
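A concrete instance of this failure of cancellation is easy to construct when C is singular. The example below is our own (not the text's Exercise 60) and uses NumPy, an assumption on our part:

```python
import numpy as np

# AC = BC does not imply A = B: here C is singular (second column zero),
# so it "hides" the difference between A and B.
A = np.array([[1, 0],
              [0, 1]])
B = np.array([[1, 1],
              [0, 0]])
C = np.array([[1, 0],
              [0, 0]])

print(np.array_equal(A @ C, B @ C))  # True: the products agree
print(np.array_equal(A, B))          # False: yet A != B
```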

Remark 2.1.2. We use an alternate notation for matrix entries. For any matrix B denote the (i, j)-entry by (B)ij.

Definition 2.1.8. Let $A \in M_{m,n}(F)$.

(i) Define the transpose of A, denoted by $A^T$, to be the n × m matrix with entries

$$(A^T)_{ij} = a_{ji}.$$

(ii) Define the adjoint of A, denoted by $A^*$, to be the n × m matrix with entries

$$(A^*)_{ij} = \bar{a}_{ji} \quad (\text{complex conjugate}).$$

Example 2.1.3.

$$A = \begin{pmatrix} 1 & 2 & 3 \\ 5 & 4 & 1 \end{pmatrix}, \qquad A^T = \begin{pmatrix} 1 & 5 \\ 2 & 4 \\ 3 & 1 \end{pmatrix}$$

In words: "The rows of A become the columns of $A^T$, taken in the same order." The following results are easy to prove.

Theorem 2.1.4 (Laws of transposes).

(1) $(A^T)^T = A$ and $(A^*)^* = A$

(2) $(A \pm B)^T = A^T \pm B^T$ (and similarly for $*$)

(3) $(cA)^T = cA^T$ and $(cA)^* = \bar{c}A^*$

(4) $(AB)^T = B^T A^T$

(5) If A is symmetric,

$$A = A^T$$


(6) If A is Hermitian,

$$A = A^*.$$
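The transpose and adjoint laws can also be spot-checked numerically. In this sketch (using NumPy, an assumption on our part), `M.conj().T` plays the role of the adjoint $A^*$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
c = 2.0 - 1.0j

adj = lambda M: M.conj().T  # the adjoint A* (conjugate transpose)

# Law (4): the transpose of a product reverses the factors.
print(np.allclose((A @ B).T, B.T @ A.T))
# Law (3) for the adjoint: the scalar comes out conjugated.
print(np.allclose(adj(c * A), np.conj(c) * adj(A)))
```

Both checks print `True`; note that random matrices only spot-check the laws, they do not prove them.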

More facts about symmetry.

Proof. (1) We know $(A^T)_{ij} = a_{ji}$. So $((A^T)^T)_{ij} = a_{ij}$. Thus $(A^T)^T = A$. (2) $((A \pm B)^T)_{ij} = a_{ji} \pm b_{ji} = (A^T)_{ij} \pm (B^T)_{ij}$. So $(A \pm B)^T = A^T \pm B^T$.

Proposition 2.1.1. (1) A is symmetric if and only if $A^T$ is symmetric. (2) A is Hermitian if and only if $A^*$ is Hermitian. (3) If A is symmetric, then $A^2$ is also symmetric. (4) If A is symmetric, then $A^n$ is also symmetric for all n.

Definition 2.1.9. A matrix A is called skew-symmetric if $A^T = -A$.

Example 2.1.4. The matrix

$$A = \begin{pmatrix} 0 & 1 & 2 \\ -1 & 0 & -3 \\ -2 & 3 & 0 \end{pmatrix}$$

is skew-symmetric.

Theorem 2.1.5. (1) If A is skew-symmetric, then A is a square matrix and $a_{ii} = 0$, $i = 1, \dots, n$.

(2) For any matrix $A \in M_n(F)$, $A - A^T$ is skew-symmetric, while $A + A^T$ is symmetric.

(3) Every matrix $A \in M_n(F)$ can be written uniquely as the sum of a symmetric matrix and a skew-symmetric matrix.

Proof. (1) If $A \in M_{m,n}(F)$, then $A^T \in M_{n,m}(F)$. So, if $A^T = -A$, we must have m = n. Also

$$a_{ii} = -a_{ii}$$

for $i = 1, \dots, n$, so $a_{ii} = 0$ for all i.


(2) Since $(A - A^T)^T = A^T - A = -(A - A^T)$, it follows that $A - A^T$ is skew-symmetric. Similarly, $(A + A^T)^T = A^T + A = A + A^T$, so $A + A^T$ is symmetric.

(3) Existence follows from the identity

$$A = \frac{1}{2}(A + A^T) + \frac{1}{2}(A - A^T),$$

whose first term is symmetric and whose second term is skew-symmetric by part (2). For uniqueness, let A = B + C be a second such decomposition, with B symmetric and C skew-symmetric. Subtraction gives

$$\frac{1}{2}(A + A^T) - B = C - \frac{1}{2}(A - A^T).$$

The left matrix is symmetric while the right matrix is skew-symmetric. Hence both are the zero matrix.

Examples. The matrix

$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$

is skew-symmetric. Let

$$B = \begin{pmatrix} 1 & 2 \\ -1 & 4 \end{pmatrix}, \quad B^T = \begin{pmatrix} 1 & -1 \\ 2 & 4 \end{pmatrix}, \quad B - B^T = \begin{pmatrix} 0 & 3 \\ -3 & 0 \end{pmatrix}, \quad B + B^T = \begin{pmatrix} 2 & 1 \\ 1 & 8 \end{pmatrix}.$$

Then

$$B = \frac{1}{2}(B + B^T) + \frac{1}{2}(B - B^T).$$
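The symmetric/skew-symmetric decomposition of Theorem 2.1.5(3) is one line of code. This sketch (using NumPy, an assumption on our part) checks it on the matrix B from the example above:

```python
import numpy as np

B = np.array([[ 1, 2],
              [-1, 4]])

sym  = (B + B.T) / 2   # the symmetric part of B
skew = (B - B.T) / 2   # the skew-symmetric part of B

print(np.array_equal(sym + skew, B))  # True: the parts sum back to B
print(np.array_equal(sym, sym.T))     # True: sym is symmetric
print(np.array_equal(skew, -skew.T))  # True: skew is skew-symmetric
```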

An important observation about matrix multiplication is related to ideas from vector spaces. Indeed, two very important vector spaces are associated with matrices.

Definition 2.1.10. Let $A \in M_{m,n}(\mathbb{C})$.

(i) Denote by

$$c_j(A) := \text{the } j\text{th column of } A,$$

so $c_j(A) \in \mathbb{C}^m$. We call the subspace of $\mathbb{C}^m$ spanned by the columns of A the column space of A. With $c_1(A), \dots, c_n(A)$ denoting the columns of A
