


4.3 Properties of Determinants

In this section we look at the algebraic properties of determinants that are the key to computing and applying them.

Property 1 – Determinants of Triangular Matrices. As we saw in the previous section, the definition of an n×n determinant involves the sum/difference of n! products. For triangular matrices this sum reduces to a single product.

Proposition 1. If A is upper triangular or lower triangular then det(A) is the product of the elements on the main diagonal.

Proof. By Definition 1 of section 4.2, det(A) = Σ_σ (-1)^{I(σ)} a_{1,σ(1)} a_{2,σ(2)} ⋯ a_{n,σ(n)}, where the sum is over all permutations σ of 1, 2, …, n and I(σ) is the number of inversions of σ. If A is upper triangular, then the only one of these products that does not contain an element from below the main diagonal (all of which are zero) is a_{11}a_{22}⋯a_{nn}. The argument is similar if A is lower triangular. //

Example 1. For any values of the entries a, b, c,

| 2  a  b |
| 0  3  c |  =  (2)(3)(7)  =  42
| 0  0  7 |

| 2  0  0 |
| a  3  0 |  =  (2)(3)(7)  =  42
| b  c  7 |
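
If NumPy is available, this property is easy to check numerically. In the following sketch the diagonal is 2, 3, 7 as in Example 1, and the entries above the diagonal are chosen arbitrarily for the illustration.

import numpy as np

# An upper triangular matrix with diagonal 2, 3, 7; the entries above the
# diagonal are arbitrary and do not affect the determinant.
U = np.array([[2.0, 4.0, -1.0],
              [0.0, 3.0,  5.0],
              [0.0, 0.0,  7.0]])

print(np.linalg.det(U))        # approximately 42
print(np.prod(np.diag(U)))     # exactly (2)(3)(7) = 42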

Property 2 – A matrix and its transpose have the same determinant.

Example 2. Let

A = [ 3  1 ]        A^T = [ 3  2 ]
    [ 2  4 ]              [ 1  4 ]

Then det(A) = (3)(4) – (1)(2) = 10 and det(A^T) = (3)(4) – (2)(1) = 10.

Proposition 2. det(AT) = det(A)

Proof. det(A) is a sum/difference of products, where each product is obtained by picking an element from each row of A without repeating a column. This is the same as picking an element from each column of A without repeating a row, which in turn is the same as picking an element from each row of A^T without repeating a column. So each product that goes into det(A) also goes into det(A^T). To see that a product has the same sign in both det(A) and det(A^T), let us look at a typical product in a 4×4 determinant, say (-1)^3 a_{14}a_{21}a_{32}a_{43}. This product is multiplied by (-1)^3 since in three pairs of subscripts the second numbers are in the opposite order from the first numbers. These three pairs are

14 and 21

14 and 32

14 and 43

Note that this product also appears in det(A^T) as (-1)^3 a_{21}a_{32}a_{43}a_{14} = (-1)^3 (A^T)_{12}(A^T)_{23}(A^T)_{34}(A^T)_{41}. This product is also multiplied by (-1)^3 since in three pairs of subscripts the second numbers are in the opposite order from the first numbers. These three pairs are

12 and 41

23 and 41

34 and 41

In general, each pair (i, σ(i)) and (j, σ(j)) in the product for det(A) with i < j and σ(i) > σ(j) corresponds to the pair (σ(j), j) and (σ(i), i) in the product for det(A^T), where σ(j) < σ(i) but j > i. So there are the same number of pairs with an order reversal in the product for det(A) as in the product for det(A^T), and the product has the same sign in each case. //
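
A quick numerical sanity check of Proposition 2, assuming NumPy is available; the matrix below is random, so the check is illustrative rather than a proof.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))    # an arbitrary 4x4 matrix

# det(A) and det(A^T) agree up to floating-point round-off.
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))   # True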

One consequence of this property of determinants is that other properties that hold for rows also automatically hold for columns.

Property 3 – Linearity of Determinants in any Single Row or Column. In the case of 2×2 determinants we saw that the determinant is a linear function of one of the rows if the other row is held fixed. Similarly, it is a linear function of one of the columns if the other column is held fixed. This is true in general.

Proposition 3. det(A) is a linear function of any row if the other rows are fixed. det(A) is a linear function of any column if the other columns are fixed.

Proof. The row part follows from the fact that det(A) is a sum/difference of products, each of which contains exactly one element from the given row; such an expression is a linear function of that row when the other rows are held fixed. The column part follows from the row part and the fact that A and A^T have the same determinant. //

Corollary 4. If all the elements of any single row or column are zero, then the determinant is zero.

Example 3. Linearity in the first row gives, for example,

| a1 + b1  a2 + b2 |     | a1  a2 |     | b1  b2 |
| c1       c2      |  =  | c1  c2 |  +  | c1  c2 |

| 2a1 + 5b1  2a2 + 5b2 |       | a1  a2 |       | b1  b2 |
| c1         c2        |  =  2 | c1  c2 |  +  5 | c1  c2 |
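
The linearity in Proposition 3 can also be spot-checked numerically. The sketch below uses a small helper with_row (a name chosen here, not from the notes) to replace one row of a random matrix, and verifies linearity in that row with the other rows held fixed, assuming NumPy is available.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
u = rng.standard_normal(3)
v = rng.standard_normal(3)

def with_row(M, i, row):
    """Return a copy of M with row i replaced by the given row."""
    out = M.copy()
    out[i] = row
    return out

# Linearity in row 0: det with row 2u + 5v equals 2 det(row u) + 5 det(row v).
lhs = np.linalg.det(with_row(A, 0, 2 * u + 5 * v))
rhs = 2 * np.linalg.det(with_row(A, 0, u)) + 5 * np.linalg.det(with_row(A, 0, v))
print(np.isclose(lhs, rhs))    # True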

Corollary 5. Let D be a diagonal matrix. Then det(DA) = det(D)det(A) and det(AD) = det(D)det(A) for any matrix A the same size as D.

Proof. We can write D = D1D2…Dn where each Dj is a diagonal matrix with ones on the diagonal except possibly in the jth position. By successively multiplying A by each of the Dj, one reduces to the case where D is one of the Dj. This case follows from Proposition 3 (linearity in the jth row or column), together with Proposition 1, which gives det(Dj) equal to its jth diagonal entry. //
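
Corollary 5 can likewise be spot-checked in NumPy; here D is an arbitrary diagonal matrix and A a random matrix, both chosen only for illustration.

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
D = np.diag([2.0, -1.0, 5.0])      # an arbitrary diagonal matrix

print(np.isclose(np.linalg.det(D @ A), np.linalg.det(D) * np.linalg.det(A)))  # True
print(np.isclose(np.linalg.det(A @ D), np.linalg.det(D) * np.linalg.det(A)))  # True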

Property 4 – Interchanging Two Rows or Columns Negates the Determinant. This is related to the factor (-1)^{I(σ)} that appears in the definition of the determinant.

Proposition 6. If one interchanges two rows of a determinant, this negates its value. Similarly if one interchanges two columns.

Proof. Suppose j < k, let A = (a_{rs}), and let B be the matrix obtained from A by interchanging rows j and k. Then

det(A) = Σ_σ (-1)^{I(σ)} a_{1,σ(1)} ⋯ a_{j,σ(j)} ⋯ a_{k,σ(k)} ⋯ a_{n,σ(n)}

and

det(B) = Σ_τ (-1)^{I(τ)} a_{1,τ(1)} ⋯ a_{k,τ(j)} ⋯ a_{j,τ(k)} ⋯ a_{n,τ(n)},

since row j of B is row k of A and vice versa. Consider a particular term (-1)^{I(τ)} a_{1,τ(1)} ⋯ a_{k,τ(j)} ⋯ a_{j,τ(k)} ⋯ a_{n,τ(n)} in the sum for det(B). Its product of entries appears in the sum for det(A) as a_{1,σ(1)} ⋯ a_{j,σ(j)} ⋯ a_{k,σ(k)} ⋯ a_{n,σ(n)}, where σ(i) = τ(i) if i is different from j and k, σ(j) = τ(k), and σ(k) = τ(j). We need to show that one of σ and τ is even and the other is odd. We give the argument in the case τ(j) < τ(k); the argument in the other case is similar. One inversion that appears in σ but not in τ is the one corresponding to the pair (j, k). The other inversions that differ between σ and τ come from indices i with j < i < k and τ(j) < τ(i) < τ(k); for each such i we pick up two more inversions in σ. So σ has an odd number of inversions more than τ, and one of σ and τ is even while the other is odd. So this term appears with the opposite sign in the sum for det(A), and therefore det(B) = -det(A). //

Example 4. Interchanging the two rows of a 2×2 determinant changes its sign:

| a  b |        | c  d |
| c  d |  =  -  | a  b |
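
A numerical illustration of Proposition 6, assuming NumPy; the matrix is random and rows 0 and 2 are interchanged.

import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))

B = A.copy()
B[[0, 2]] = B[[2, 0]]              # interchange rows 0 and 2

print(np.isclose(np.linalg.det(B), -np.linalg.det(A)))   # True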

Corollary 7. If two rows are the same or proportional, then the determinant is zero. The same is true if two columns are the same or proportional.

Proof. If the two rows are the same, then they can be interchanged, which changes the sign of the determinant. However, the matrix stays the same when the two rows are interchanged, since the two rows are equal, so the determinant is also unchanged. The determinant must therefore be zero, since zero is the only number that remains the same when it is multiplied by -1. If two rows are proportional, then the proportionality constant can be factored out using linearity, and the two rows are then equal, so the determinant is zero. Similarly for columns. //

Example 5. A determinant with two equal rows is zero:

| a  b |
| a  b |  =  ab - ab  =  0

Corollary 8. Let P be the permutation matrix obtained by interchanging rows j and k of the identity and leaving the rest the same. Then det(P) = - 1 and det(PA) = det(P)det(A) for any matrix A the same size as P.

Proof. It follows from Proposition 1 that det(I) = 1. It then follows from Proposition 6 that det(P) = -1. Note that PA is obtained from A by interchanging two of the rows of A. By Proposition 6 one has det(PA) = -det(A). So det(PA) = det(P)det(A). //

Property 5 – Adding a multiple of one row to another leaves the determinant unchanged. Similarly for columns.

Proposition 9. If one adds a multiple of one row to another, the determinant is unchanged. Similarly if one adds a multiple of one column to another.

Proof. Suppose B is obtained by adding c times row j of A to row k. Using linearity one can write det(B) = det(A) + c det(D), where D is obtained from A by replacing row k by row j. By Corollary 7 one has det(D) = 0. So det(B) = det(A). Similarly for columns. //

Example 6. Adding t times the first row to the second row leaves the determinant unchanged:

| a       b      |     | a  b |
| c + ta  d + tb |  =  | c  d |
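
And a corresponding check of Proposition 9 with a random matrix (assuming NumPy): adding a multiple of one row to another does not change the determinant.

import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))

B = A.copy()
B[2] += 4 * B[0]                   # add 4 times row 0 to row 2

print(np.isclose(np.linalg.det(B), np.linalg.det(A)))    # True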

Corollary 10. Let E be the elementary matrix obtained by adding c times row j of the identity matrix to row k and leaving the rest the same. Then det(E) = 1 and det(EA) = det(E)det(A) for any matrix A the same size as E.

Proof. It follows from Proposition 1 that det(I) = 1. It then follows from Proposition 9 that det(E) = 1. Note that EA is obtained from A by adding c times row j of A to row k. By Proposition 9 one has det(EA) = det(A). So det(EA) = det(E)det(A). //

Propositions 6 and 9 show that the Gaussian elimination algorithm can be modified to give an efficient way to evaluate determinants. We illustrate this with an example.

Example 7. Evaluate a 3×3 determinant by row reduction. Subtract 2 times row 1 from row 2 and 3 times row 1 from row 3; by Proposition 9 the determinant remains unchanged. Now subtract 4 times row 2 from row 3; the determinant again remains unchanged, and the matrix is now upper triangular. By Proposition 1 the value of the last determinant is the product of its diagonal entries, which is 5. So the original determinant is 5.
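
The procedure of Example 7 can be written as a short algorithm. The sketch below is one possible implementation, assuming NumPy; partial pivoting is added for numerical stability (the hand computation above does not need it), and the test matrix is arbitrary.

import numpy as np

def det_by_elimination(A):
    """Determinant by row reduction: row additions leave the determinant
    unchanged (Proposition 9), each row interchange flips its sign
    (Proposition 6), and a triangular matrix contributes the product of
    its diagonal (Proposition 1)."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(U[k:, k]))     # pivot row (partial pivoting)
        if np.isclose(U[p, k], 0.0):
            return 0.0                          # column of zeros below the diagonal
        if p != k:
            U[[k, p]] = U[[p, k]]               # interchange rows k and p
            sign = -sign
        for r in range(k + 1, n):
            U[r] -= (U[r, k] / U[k, k]) * U[k]  # subtract a multiple of row k
    return sign * np.prod(np.diag(U))

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 7.0],
              [3.0, 4.0, 1.0]])
print(det_by_elimination(A), np.linalg.det(A))  # both approximately -6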

Property 6 – A Matrix is Invertible if and only if its Determinant is Non-zero. Recall that in section 4.1 we noted that one of the applications of determinants is that they give a concise test for when a matrix is invertible. We can now prove this.

Proposition 11. A is invertible if and only if det(A) ≠ 0.

Proof. As we saw in sections 2.1 and 2.2, we can multiply A by elementary matrices that add multiples of one row to another or interchange rows until we get an upper triangular matrix U, i.e. E1E2…EnA = U, where each Ej either adds a multiple of one row to another or interchanges two rows. By the above results, A is invertible if and only if U is, and det(A) is either equal to det(U) or to -det(U). So it suffices to prove the result when A is upper triangular. In that case A is invertible if and only if the diagonal elements of A are all non-zero, and det(A) is non-zero if and only if the diagonal elements are all non-zero. So the result is true. //

Example 8. Determine whether a given matrix A is invertible. Computing its determinant (for example by row reduction as in Example 7) gives

det(A) = 0

So by Proposition 11, A is not invertible.
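
Numerically, the test of Proposition 11 looks like the following sketch (assuming NumPy). The matrix shown is a generic singular example with proportional rows, not necessarily the matrix of Example 8.

import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])         # the second row is twice the first

# det(A) = 0 (Corollary 7), so A is not invertible (Proposition 11).
print(np.isclose(np.linalg.det(A), 0.0))   # True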

Property 7 – Cofactor Expansions. Recall that an alternative way some books use to define determinants is as the sum of the elements of the first row times their cofactors. We shall now show that the determinant is in fact equal to the sum of the elements of any row or column times their cofactors.

Proposition 12. If A is an n×n matrix, then for any row i and any column j one has

(1) det(A) = Σ_{j=1}^{n} (-1)^{i+j} a_{ij} det(M(i,j))        (expansion along row i)

(2) det(A) = Σ_{i=1}^{n} (-1)^{i+j} a_{ij} det(M(i,j))        (expansion along column j)

where M(i,j) denotes the ij-minor of A, i.e. the matrix obtained from A by deleting row i and column j.

Proof. (2) follows from (1) by taking transposes. To prove (1) we first prove it for matrices of the form

(3) A = [ a_{11}      ⋯  a_{1,n-1}      a_{1n}     ]
        [   ⋮               ⋮              ⋮       ]
        [ a_{n-1,1}   ⋯  a_{n-1,n-1}    a_{n-1,n}  ]
        [   0         ⋯     0            a_{nn}    ]

whose last row is zero except possibly for the last entry. Using Definition 1 of section 4.2 and the fact that all the elements of the last row except the last are zero, one has

(4) det(A) = a_{nn} Σ_{σ in Q} (-1)^{I(σ)} a_{1,σ(1)} a_{2,σ(2)} ⋯ a_{n-1,σ(n-1)}

where Q is the set of permutations σ of 1, 2, …, n such that σ(n) = n. However, a permutation σ in Q can be regarded as a permutation of just the numbers 1, 2, …, n-1 with the same number of inversions. Therefore

det(A) = a_{nn} Σ_σ (-1)^{I(σ)} a_{1,σ(1)} ⋯ a_{n-1,σ(n-1)} = a_{nn} det(M(n,n)),

where the sum is now over all permutations σ of 1, 2, …, n-1, and (1) holds for an A of the form (3), since a_{nn} det(M(n,n)) = (-1)^{n+n} a_{nn} det(M(n,n)) is the only non-zero term in the expansion along row n.

Next we prove (1) for matrices of the form

(5) A = [ a_{11}      ⋯  a_{1j}      ⋯  a_{1n}     ]
        [   ⋮              ⋮              ⋮        ]
        [ a_{n-1,1}   ⋯  a_{n-1,j}   ⋯  a_{n-1,n}  ]
        [   0         ⋯  a_{nj}      ⋯    0        ]

whose last row is zero except possibly for the entry a_{nj} in column j. We move column j to column n by interchanging it with the column after it n-j times. Each interchange changes the sign of the determinant, so

(6) det(A) = (-1)^{n-j} det(A')
           = a_{nj} (-1)^{n+j} det(M(n,j))

where A' is the matrix obtained after the interchanges; its last row is (0, …, 0, a_{nj}), its (n,n)-minor is M(n,j), and we used (4) together with (-1)^{n-j} = (-1)^{n+j}.

Using linearity in the last row (the last row of a general matrix is the sum of n rows, each of which is zero except in one position), it follows that (1) holds in the case i = n.

To prove the general case we move row i down to row n by interchanging it with the row after it n-i times. Thus det(A) = (-1)^{n-i} det(B), where row k of B is row k of A for k = 1, …, i-1, row k of B is row k+1 of A for k = i, …, n-1, and row n of B is row i of A. Since (1) holds for expansion along the last row, one has

det(B) = Σ_{j=1}^{n} (-1)^{n+j} a_{ij} det(N(n,j))

where N(n,j) is the nj-minor of B, and we used the fact that the entries of row n of B are a_{i1}, …, a_{in}. However N(n,j) = M(i,j). So

det(B) = Σ_{j=1}^{n} (-1)^{n+j} a_{ij} det(M(i,j))

So

det(A) = (-1)^{n-i} Σ_{j=1}^{n} (-1)^{n+j} a_{ij} det(M(i,j)) = Σ_{j=1}^{n} (-1)^{i+j} a_{ij} det(M(i,j))

//

Example 9. Find the value of a 3×3 determinant by expanding in cofactors along the second column. With a_{12} = -1, a_{22} = -1, a_{32} = 1, formula (2) gives

det(A) = - (-1) det(M(1,2)) + (-1) det(M(2,2)) - (1) det(M(3,2))

       = (1)(-10) + (-1)(-11) - (1)(-4) = 5

where the three 2×2 minors have determinants -10, -11 and -4.
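
Cofactor expansion along the first row translates directly into a short recursive program. The sketch below is plain Python; minor and det_by_cofactors are names chosen here, and the method takes roughly n! steps, so it is practical only for small matrices.

def minor(A, i, j):
    """The matrix A with row i and column j removed (0-based indices)."""
    return [row[:j] + row[j+1:] for r, row in enumerate(A) if r != i]

def det_by_cofactors(A):
    """Expand along the first row: det(A) = sum_j (-1)^j a_{0j} det(M(0,j))."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det_by_cofactors(minor(A, 0, j))
               for j in range(n))

print(det_by_cofactors([[3, 1], [2, 4]]))                    # 10, as in Example 2
print(det_by_cofactors([[2, 4, -1], [0, 3, 5], [0, 0, 7]]))  # 42: triangular, diagonal 2, 3, 7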

Property 8 – The Determinant of a Product is the Product of the Determinants.

Proposition 13. det(AB) = det(A) det(B)

Proof. If A or B is not invertible, then AB is not invertible, so both sides are zero by Proposition 11 and the formula holds. So suppose A and B are invertible. Then we can write A and B as products of elementary matrices, i.e. A = E1…Ep and B = F1…Fq, where each of the Ej and Fj is either diagonal, interchanges two rows, or adds a multiple of one row to another. Using Corollaries 5, 8 and 10 repeatedly, one has det(A) = det(E1)…det(Ep) and det(B) = det(F1)…det(Fq). Then AB = E1…EpF1…Fq and det(AB) = det(E1)…det(Ep)det(F1)…det(Fq) = det(A)det(B). //

Corollary 14. det(A^{-1}) = 1/det(A).

Proof. This follows from Proposition 13 and the fact that AA^{-1} = I, which give det(A)det(A^{-1}) = det(I) = 1. //
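
Finally, a numerical check of Proposition 13 and Corollary 14 with random matrices, assuming NumPy (a random matrix is invertible with probability 1, so inv(A) below is expected to exist).

import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A)))    # True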
