CHAPTER IV
Linear Algebraic Equations
Topics
Mathematical background
Graphical method
Cramer’s rule
Gauss elimination
Gauss-Jordan
I. Mathematical background
In mathematics, a matrix (plural matrices) is a rectangular table of numbers or, more generally, a table consisting of abstract quantities that can be added and multiplied.
Example
A matrix [A] with n rows and m columns:
      [ a11  a12  ...  a1m ]
[A] = [ a21  a22  ...  a2m ]
      [ ...  ...  ...  ... ]
      [ an1  an2  ...  anm ]
The elements in the “middle” of a square matrix (a11, a22, …, ann) form the main diagonal of the matrix.
I.1. Some particular matrices
I.1.1. Symmetric matrix
It is a square matrix where ai,j = aj,i, i.e., the same numbers appear above and below the diagonal:
      [ 5  1  2 ]
[A] = [ 1  3  7 ]
      [ 2  7  8 ]
I.1.2. Diagonal matrix
It is a SQUARE matrix where all the elements are zero except on the diagonal:
ai,j=0 for i≠j
      [ a11   0    0  ]
[A] = [  0   a22   0  ]
      [  0    0   a33 ]
I.1.3. Identity matrix
It is a diagonal matrix where all the elements on the diagonal are equal to 1. It is noted [ I ]:
ai,j = 0 for i ≠ j AND ai,i = 1
        [ 1  0  0 ]
[ I ] = [ 0  1  0 ]
        [ 0  0  1 ]
I.1.4. Upper triangular matrix
It is a matrix where all the elements below the diagonal are equal to zero.
      [ a11  a12  a13 ]
[A] = [  0   a22  a23 ]
      [  0    0   a33 ]
I.1.5. Lower triangular matrix
It is a matrix where all the elements above the diagonal are equal to zero.
      [ a11   0    0  ]
[A] = [ a21  a22   0  ]
      [ a31  a32  a33 ]
I.1.6. Banded matrix
A banded matrix has all elements equal to zero, with the exception of a band centered on the main diagonal.
      [ a11  a12   0    0  ]
[A] = [ a21  a22  a23   0  ]
      [  0   a32  a33  a34 ]
      [  0    0   a43  a44 ]
(here with a band width of 3, i.e., a tridiagonal matrix)
I.2. Operations on the matrices
I.2.1. Multiplication of matrices
If we consider a matrix [A] (n × m) and a matrix [B] (m × p), the product [C] = [A][B] is an n × p matrix whose elements are:
ci,j = Σ (k=1..m) ai,k · bk,j
The product is defined only when the number of columns of [A] equals the number of rows of [B].
Example:
[ 1  2 ]   [ 5  6 ]   [ 1×5+2×7   1×6+2×8 ]   [ 19  22 ]
[ 3  4 ] × [ 7  8 ] = [ 3×5+4×7   3×6+4×8 ] = [ 43  50 ]
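The element-by-element rule above can be sketched directly in code (a minimal pure-Python version; the example matrices are illustrative, not from the text):

```python
def matmul(A, B):
    """Multiply A (n x m) by B (m x p): c_ij = sum over k of a_ik * b_kj."""
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```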
I.2.2. Inverse matrix
The inverse matrix [A]^-1 can only be computed for a square, non-singular matrix (det(A) ≠ 0).
The inverse matrix is defined as the matrix that, multiplied by [A], gives the identity matrix [I]:
[A] · [A]^-1 = [I]
And for a 2×2 matrix:
[A]^-1 = 1/(a11·a22 − a12·a21) × [  a22  −a12 ]
                                 [ −a21   a11 ]
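The 2×2 closed-form formula can be sketched as follows (the example matrix is my own illustration):

```python
def inverse_2x2(A):
    """Inverse of a 2x2 matrix via the closed-form formula; requires det(A) != 0."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: no inverse")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

A = [[4, 7], [2, 6]]
Ainv = inverse_2x2(A)
print(Ainv)  # [[0.6, -0.7], [-0.2, 0.4]]
```

Multiplying A by Ainv reproduces the identity matrix, as the definition requires.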
I.2.3. Transpose matrix
The transpose of a matrix [A], noted [A]^T, is computed as:
(a^T)i,j = aj,i
Example
[A] = [ 1  2 ]    [A]^T = [ 1  3 ]
      [ 3  4 ]            [ 2  4 ]
I.2.4. Trace of a matrix
The trace of a matrix, noted tr[A], is defined as:
tr[A] = Σ (i=1..n) ai,i
It is the sum of the elements on the diagonal.
Example
[A] = [ 1  2  3 ]
      [ 4  5  6 ] ;  tr[A] = 1 + 5 + 9 = 15
      [ 7  8  9 ]
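Both operations are one-liners in code; a sketch (using the same diagonal 1, 5, 9 as the example above, with off-diagonal entries chosen for illustration):

```python
def transpose(A):
    """(A^T)_ij = a_ji: rows become columns."""
    return [list(row) for row in zip(*A)]

def trace(A):
    """Sum of the diagonal elements of a square matrix."""
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(transpose(A))  # [[1, 4, 7], [2, 5, 8], [3, 6, 9]]
print(trace(A))      # 15
```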
I.2.5. Matrix augmentation
A matrix is augmented by the addition of a column (or more) to the original matrix.
Example
Augmentation of the matrix [A]:
[ a11  a12  a13 ]
[ a21  a22  a23 ]
[ a31  a32  a33 ]
by the identity matrix gives:
[ a11  a12  a13 | 1  0  0 ]
[ a21  a22  a23 | 0  1  0 ]
[ a31  a32  a33 | 0  0  1 ]
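A short sketch of augmenting a square matrix by the identity (this layout is the starting point for matrix inversion by Gauss-Jordan):

```python
def augment_with_identity(A):
    """Append the identity matrix to the right of the square matrix A: [A | I]."""
    n = len(A)
    return [list(A[i]) + [1.0 if i == j else 0.0 for j in range(n)]
            for i in range(n)]

A = [[2, 1], [5, 3]]
print(augment_with_identity(A))  # [[2, 1, 1.0, 0.0], [5, 3, 0.0, 1.0]]
```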
II. Solving small numbers of equations
In this part we will describe several methods that are appropriate for solving small (n ≤ 3) sets of simultaneous equations.
II.1. Graphical method (n=2; mostly)
Consider that we have to solve the following system:
a11·x1 + a12·x2 = b1
a21·x1 + a22·x2 = b2
In this method, we plot the two equations on Cartesian coordinates, with one axis for x1 and the other for x2. Because we are considering linear equations, each equation is a straight line. Solving each equation for x2:
x2 = −(a11/a12)·x1 + b1/a12
x2 = −(a21/a22)·x1 + b2/a22
The intersection of the two lines represents the solution.
General form: each line is in slope-intercept form,
x2 = (slope)·x1 + (intercept)
The graphical method can be used for n = 3 (three equations, plotted as planes), but beyond that it becomes very difficult to determine the solution.
However, this technique is very useful to visualize the properties of the solutions:
- No solution
- Infinite solutions
- Ill-conditioned system (the two slopes are nearly equal)
Figure 4.1. Graphical method: no solution; infinite solutions; ill-conditioned system
II.2. Determinants and the Cramer’s rule
II.2.1. Determinant
For a 2×2 matrix [A], such as:
[A] = [ a11  a12 ]
      [ a21  a22 ]
the determinant D is:
D = | a11  a12 | = a11·a22 − a12·a21
    | a21  a22 |
The third-order case is:
    | a11  a12  a13 |
D = | a21  a22  a23 | = a11·| a22 a23 | − a12·| a21 a23 | + a13·| a21 a22 |
    | a31  a32  a33 |       | a32 a33 |       | a31 a33 |       | a31 a32 |
The 2×2 determinants in this expansion are called minors.
II.2.2. Cramer’s rule
Cramer’s rule can be used to solve systems of linear algebraic equations.
To solve the system, each unknown is written as the ratio of two determinants: the denominator is D, and the numerator is D with the column of coefficients of that unknown replaced by the constants b1, b2, …, bn. For a 3×3 system:
     | b1  a12  a13 |              | a11  b1  a13 |
x1 = | b2  a22  a23 | / D ;   x2 = | a21  b2  a23 | / D
     | b3  a32  a33 |              | a31  b3  a33 |
and the same thing for x3.
When the number of equations exceeds 3, Cramer’s rule becomes impractical because the computation of the determinants is very time consuming.
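For the 3×3 case, the rule above can be sketched directly (the determinant helper uses the minor expansion from the previous section; the example system is my own illustration):

```python
def det3(M):
    """3x3 determinant by expansion along the first row (minors)."""
    a, b, c = M[0]
    return (a * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - b * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + c * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def cramer3(A, b):
    """Cramer's rule: x_i = det(A with column i replaced by b) / det(A)."""
    D = det3(A)
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]      # copy A
        for r in range(3):
            Ai[r][i] = b[r]             # replace column i by the constants
        xs.append(det3(Ai) / D)
    return xs

A = [[2, 1, 1], [1, 3, 2], [1, 0, 0]]
b = [4, 5, 6]
print(cramer3(A, b))  # [6.0, 15.0, -23.0]
```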
II.3. The elimination of unknowns
To illustrate this well-known procedure, let us take a simple system of two equations:
a11·x1 + a12·x2 = b1   (1)
a21·x1 + a22·x2 = b2   (2)
Step I. We multiply (1) by a21 and (2) by a11, thus:
a11·a21·x1 + a12·a21·x2 = b1·a21
a11·a21·x1 + a11·a22·x2 = b2·a11
By subtracting the first from the second:
(a11·a22 − a12·a21)·x2 = a11·b2 − a21·b1
Therefore:
x2 = (a11·b2 − a21·b1) / (a11·a22 − a12·a21)
Step II. And by replacing x2 in equation (1):
x1 = (a22·b1 − a12·b2) / (a11·a22 − a12·a21)
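The two closed-form expressions can be sketched as a small solver (the example system 3·x1 + 2·x2 = 18, −x1 + 2·x2 = 2 is illustrative):

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Elimination of unknowns for two equations (closed-form result)."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("no unique solution")
    x2 = (a11 * b2 - a21 * b1) / det
    x1 = (a22 * b1 - a12 * b2) / det
    return x1, x2

print(solve_2x2(3, 2, -1, 2, 18, 2))  # (4.0, 3.0)
```

Note that the denominator is the determinant of the coefficient matrix, which is why a zero determinant means no unique solution.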
The problem with this method is that it is very time consuming for a large number of equations.
II.3.1. Naïve Gauss elimination
Using elimination of unknowns method (above), we:
1- Manipulated the equations to eliminate one unknown. We solved for the other unknown and;
2- We back-substituted it in one of the original equations to solve for other unknowns.
So, we will try to expand these steps: elimination and back-substitution to a large number of equations. The first method that will be introduced is the Gauss elimination method.
This technique is called naïve, because it does not avoid the problem of dividing by zero. This point has to be taken into account when implementing this technique on computers.
Consider the following general system:
a11·x1 + a12·x2 + … + a1n·xn = b1
a21·x1 + a22·x2 + … + a2n·xn = b2
…
an1·x1 + an2·x2 + … + ann·xn = bn
- Forward elimination of unknown:
The principle is to eliminate at each step one unknown, starting from x1 to xn-1:
We multiply the first equation by (a21/a11) and we subtract it from the second equation. We will get:
( a22 − (a21/a11)·a12 )·x2 + … + ( a2n − (a21/a11)·a1n )·xn = b2 − (a21/a11)·b1
Hence:
a'22·x2 + … + a'2n·xn = b'2
where the prime indicates a modified coefficient: a'2j = a2j − (a21/a11)·a1j and b'2 = b2 − (a21/a11)·b1.
Note that x1 has been removed from eq. 2.
The modified system is:
a11·x1 + a12·x2 + … + a1n·xn = b1
         a'22·x2 + … + a'2n·xn = b'2
         …
         a'n2·x2 + … + a'nn·xn = b'n
We do the same thing with x2, i.e., we multiply the second equation by (a'32/a'22) and subtract the result from the third equation, and so on…
The final result will be an upper triangular system:
a11·x1 + a12·x2 + a13·x3 + … + a1n·xn   = b1
         a'22·x2 + a'23·x3 + … + a'2n·xn = b'2
                   a''33·x3 + … + a''3n·xn = b''3
                                  …
                                  ann^(n−1)·xn = bn^(n−1)
You can notice that xn can be found directly using the last equation.
Then, a back-substitution is performed:
xn = bn^(n−1) / ann^(n−1)
and
xi = ( bi^(i−1) − Σ (j=i+1..n) ai,j^(i−1)·xj ) / ai,i^(i−1)   for i = n−1, n−2, …, 1
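The forward-elimination and back-substitution steps above can be sketched as follows (the test system is a standard illustrative example, not taken from this chapter's figures):

```python
def gauss_naive(A, b):
    """Naive Gauss elimination: forward elimination then back substitution.
    No pivoting, so a zero (or tiny) pivot on the diagonal will fail."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    # Forward elimination: zero out the entries below the diagonal
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Back substitution, from the last equation up to the first
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[3, -0.1, -0.2], [0.1, 7, -0.3], [0.3, -0.2, 10]]
b = [7.85, -19.3, 71.4]
print(gauss_naive(A, b))  # approximately [3.0, -2.5, 7.0]
```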
II.3.1.1. Operation counting
We can show that the total effort in naive Gauss elimination is:
2n³/3 + O(n²) flops for the forward elimination, plus n² + O(n) flops for the back substitution
The first term is due to forward elimination and the second to back substitution.
Two useful conclusions from this result:
- As the system gets larger, the computation time increases greatly.
- Most of the effort is incurred in the elimination step. Thus to improve the method, efforts have to be focused on this step.
II.3.1.2. Pitfalls of elimination methods
II.3.1.2.1. Division by zero
Example:
          2·x2 + 3·x3 = 8
4·x1 + 6·x2 + 7·x3 = −3
2·x1 +   x2 + 6·x3 = 5
Here the first pivot a11 = 0, so the Gauss elimination formula above divides by zero (the same problem occurs if the pivot is a very small number). The pivoting technique has been developed to avoid this problem.
II.3.1.2.2. Round-off errors
The large number of equations to be solved induces a propagation of the errors. A rough rule of thumb is that round-off error may be important when dealing with 100 or more equations.
Always substitute your answers back into the original equations to check whether a substantial error has occurred.
II.3.1.2.3. ill-conditioned systems
A well-conditioned system means that a small change in one or more of the coefficients results in a similar change in the solution.
If we consider the simple system:
a11·x1 + a12·x2 = b1
a21·x1 + a22·x2 = b2
thus:
x2 = −(a11/a12)·x1 + b1/a12
x2 = −(a21/a22)·x1 + b2/a22
If the slopes are nearly equal, it is an ill-conditioned system:
a11/a12 ≈ a21/a22, i.e., a11·a22 − a12·a21 ≈ 0
which is the determinant of the matrix:
[ a11  a12 ]
[ a21  a22 ]
Therefore, an ill-conditioned system is a system with a determinant close to zero.
In the special case of a determinant equal zero, the system has no solution or an infinite number of solutions.
However, you must be prudent, because the determinant is influenced by the values of the coefficients.
II.3.1.2.4. Singular systems
II.3.1.2.4.1. Evaluation of the determinant using Gauss-elimination
We can show that for a triangular matrix, the determinant can be simply computed as the product of its diagonal elements. For the triangular matrix produced by Gauss elimination:
D = a11 · a'22 · a''33 · … · ann^(n−1)
The extreme case of ill-conditioning is when two equations are identical. We then have:
n unknowns
n−1 independent equations
It is not always obvious to detect such cases when dealing with a large number of equations. We will develop, therefore, a technique to automatically detect such singularities.
- A singular system has a determinant equal zero.
- Using Gauss elimination, we know that the determinant is equal to the product of the elements on the diagonal.
- Hence, we have to detect if a zero element is created in the diagonal during elimination stage.
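These three observations can be sketched in a few lines (pure Python; the example matrices are my own illustrations, the second one having two linearly dependent rows):

```python
def det_by_elimination(A, tol=1e-12):
    """Determinant as the product of the diagonal after forward elimination.
    A (near-)zero pivot on the diagonal flags a singular system."""
    n = len(A)
    A = [row[:] for row in A]
    det = 1.0
    for k in range(n):
        if abs(A[k][k]) < tol:
            return 0.0          # singular system detected (or pivoting needed)
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
        det *= A[k][k]
    return det

print(det_by_elimination([[1, 2], [3, 4]]))  # -2.0
print(det_by_elimination([[1, 2], [2, 4]]))  # 0.0 (rows are linearly dependent)
```

Note that in this naive form a zero pivot can also appear in a non-singular system that merely needs row swaps, which is one more reason to combine this check with pivoting.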
II.3.1.3. Techniques for improving the solution
To circumvent some of the pitfalls discussed above, several improvements can be used:
-a- More significant figures
You can increase the number of significant figures; however, this has a price (computational time, memory, …).
-b- Pivoting
Problems may arise when a pivot is zero or close to zero. The idea of pivoting is to look for the largest element (in absolute value) in the same column, below the zero or near-zero pivot, and then to switch the row containing that element with the row containing the problematic pivot. This is called partial pivoting.
If both the columns and the rows are searched for the largest element and then switched, this is called complete pivoting. It is rarely used because it adds complexity to the program (switching columns changes the order of the unknowns).
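A sketch of Gauss elimination with partial pivoting; the example deliberately uses a zero first pivot, which would break the naive version:

```python
def gauss_pivot(A, b):
    """Gauss elimination with partial pivoting: before eliminating column k,
    swap in the row (at or below k) whose column-k entry is largest in magnitude."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n - 1):
        # Partial pivoting: find the largest pivot candidate in column k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# a11 = 0 here: naive elimination would divide by zero, pivoting does not
print(gauss_pivot([[0, 2], [3, 1]], [4, 5]))  # [1.0, 2.0]
```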
-c- Scaling
It is important to use units that lead to the same order of magnitude for all the coefficients (e.g., express all voltages in mV rather than mixing mV and MV).
-d- Complex systems
If we have to solve a system with complex numbers, we have to convert this system into a real one:
[C]{Z} = {W}   (1)
Where: [C] = [A] + i[B];  {Z} = {X} + i{Y};  {W} = {U} + i{V}   (2)
If we replace (2) into (1) and equate the real and imaginary parts:
[A]{X} − [B]{Y} = {U}
[B]{X} + [A]{Y} = {V}
However, a system of n complex equations is converted into a system of 2n real equations.
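The conversion can be sketched as building the 2n × 2n block matrix [[A, −B], [B, A]] with right-hand side (U, V) (the 1×1 example, solving (1 + i)·z = 2i, is my own illustration):

```python
def complex_to_real(A, B, U, V):
    """Build the 2n x 2n real system [[A, -B], [B, A]] {X; Y} = {U; V}
    equivalent to (A + iB)(X + iY) = (U + iV)."""
    n = len(A)
    M = [A[i] + [-B[i][j] for j in range(n)] for i in range(n)] + \
        [B[i] + A[i] for i in range(n)]
    rhs = U + V
    return M, rhs

# (1 + i) z = 2i  ->  A=[[1]], B=[[1]], U=[0], V=[2]; solution z = 1 + i
M, rhs = complex_to_real([[1]], [[1]], [0], [2])
print(M, rhs)  # [[1, -1], [1, 1]] [0, 2]
```

The resulting real system can then be handed to any of the Gauss-elimination solvers above.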
The alternative is to use a programming language that allows complex data types (Fortran, Matlab, Scilab, LabVIEW, …).
II.3.1.4. Non-linear systems of equations
If we have to solve the following system of n simultaneous nonlinear equations:
f1(x1, x2, …, xn) = 0
f2(x1, x2, …, xn) = 0
…
fn(x1, x2, …, xn) = 0
One approach is to use a multi-dimensional version of the Newton-Raphson method.
If we write a Taylor series for each equation, for the kth equation for example:
fk,i+1 = fk,i + Σ (j=1..n) (∂fk,i/∂xj)·(xj,i+1 − xj,i) + higher-order terms
fk,i+1 is set to zero at the root, and we can write the system under the following form:
Σ (j=1..n) (∂fk,i/∂xj)·xj,i+1 = −fk,i + Σ (j=1..n) (∂fk,i/∂xj)·xj,i
In matrix form, with [J] the Jacobian matrix (elements ∂fk/∂xj evaluated at the current estimate {Xi}) and {F} the vector of function values:
[J]{Xi+1} = −{F} + [J]{Xi}
This linear system can be solved at each iteration using Gauss elimination.
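A sketch for the two-equation case, where the linear system at each iteration is small enough to solve with the closed-form 2×2 inverse (the test functions and starting point are illustrative, with root at x = 2, y = 3):

```python
def newton_2d(f1, f2, jac, x, y, iters=20):
    """Multidimensional Newton-Raphson for two equations: at each step
    solve J * (dx, dy) = (-f1, -f2) and update the estimate."""
    for _ in range(iters):
        F1, F2 = f1(x, y), f2(x, y)
        j11, j12, j21, j22 = jac(x, y)
        det = j11 * j22 - j12 * j21
        dx = (-F1 * j22 + F2 * j12) / det   # Cramer's rule on the 2x2 system
        dy = (-F2 * j11 + F1 * j21) / det
        x, y = x + dx, y + dy
    return x, y

f1 = lambda x, y: x * x + x * y - 10
f2 = lambda x, y: y + 3 * x * y * y - 57
# Jacobian entries: df1/dx, df1/dy, df2/dx, df2/dy
jac = lambda x, y: (2 * x + y, x, 3 * y * y, 1 + 6 * x * y)

print(newton_2d(f1, f2, jac, 1.5, 3.5))  # converges to about (2.0, 3.0)
```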
II.3.1.5. Gauss Jordan
It is a variation of Gauss elimination. The differences are:
- When an unknown is eliminated from an equation, it is also eliminated from all other equations.
- All rows are normalized by dividing them by their pivot element.
Hence, the elimination step results in an identity matrix rather than a triangular matrix. Back substitution is, therefore, not necessary.
All the techniques developed for Gauss elimination are still valid for Gauss-Jordan elimination.
However, Gauss-Jordan requires more computational work than Gauss elimination (approximately 50% more operations).
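The two differences (normalize each pivot row; eliminate from all other rows) can be sketched as follows, with an illustrative 2×2 system of my own choosing:

```python
def gauss_jordan(A, b):
    """Gauss-Jordan: normalize each pivot row, then eliminate the unknown from
    ALL other rows, leaving the identity matrix (no back substitution needed)."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]       # augmented matrix [A | b]
    for k in range(n):
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]             # normalize the pivot row
        for i in range(n):
            if i != k:                             # eliminate from every other row
                f = M[i][k]
                M[i] = [M[i][j] - f * M[k][j] for j in range(n + 1)]
    return [row[n] for row in M]                   # the solution column

# 2 x1 + x2 = 5 ; x1 + 3 x2 = 10  ->  x1 = 1, x2 = 3
print(gauss_jordan([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```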
-----------------------
Example
Solve graphically the following system:
3·x1 + 2·x2 = 18
−x1 + 2·x2 = 2
Note
A determinant equal to zero means that the matrix is singular and the system has no unique solution.
Gabriel Cramer (July 31, 1704 - January 4, 1752) was a Swiss mathematician, born in Geneva. He showed promise in mathematics from an early age. At 18 he received his doctorate and at 20 he was co-chair of mathematics. In 1728 he proposed a solution to the St. Petersburg Paradox that came very close to the concept of expected utility theory given ten years later by Daniel Bernoulli.
Example
Solve using Cramer’s rule the following system:
0.3·x1 + 0.52·x2 + x3 = −0.01
0.5·x1 + x2 + 1.9·x3 = 0.67
0.1·x1 + 0.3·x2 + 0.5·x3 = −0.44
Note
Compare this to Cramer’s rule… it is exactly the same.
Carl Friedrich Gauss (Gauß) (30 April 1777 – 23 February 1855) was a German mathematician and scientist of profound genius who contributed significantly to many fields, including number theory, analysis, differential geometry, geodesy, magnetism, astronomy and optics. Sometimes known as "the prince of mathematicians" and "greatest mathematician since antiquity", Gauss had a remarkable influence in many fields of mathematics and science and is ranked as one of history's most influential mathematicians.