Matrix Methods for Solving Systems of 1st Order Linear Differential Equations
The Main Idea:

Given a system of 1st order linear differential equations $\frac{d\mathbf{x}}{dt} = A\mathbf{x}$ with initial conditions $\mathbf{x}(0)$, we use eigenvalue-eigenvector analysis to find an appropriate basis $B = \{\mathbf{v}_1, \dots, \mathbf{v}_n\}$ for $\mathbf{R}^n$ and a change of basis matrix $S = [\mathbf{v}_1 \ \cdots \ \mathbf{v}_n]$ such that in coordinates relative to this basis ($\mathbf{u} = S^{-1}\mathbf{x}$) the system is in a standard form with a known solution. Specifically, we find a standard matrix $B = [A]_B = S^{-1}AS$, transform the system into $\frac{d\mathbf{u}}{dt} = B\mathbf{u}$, solve it as $\mathbf{u}(t) = [e^{tB}]\mathbf{u}(0)$ where $[e^{tB}]$ is the evolution matrix for $B$, then transform back to the original coordinates to get $\mathbf{x}(t) = [e^{tA}]\mathbf{x}(0)$, where $[e^{tA}] = S[e^{tB}]S^{-1}$ is the evolution matrix for $A$. That is, $\mathbf{x}(t) = [e^{tA}]\mathbf{x}(0) = S[e^{tB}]S^{-1}\mathbf{x}(0)$. This is actually easier to do than it is to explain, so here are a few illustrative examples:
The diagonalizable case

Problem: Solve the system
\[
\frac{dx}{dt} = 5x - 6y, \qquad \frac{dy}{dt} = 3x - 4y
\]
with initial conditions $x(0) = 3$, $y(0) = 1$.
Solution: In matrix form, we have $\frac{d\mathbf{x}}{dt} = A\mathbf{x}$ where $A = \begin{bmatrix} 5 & -6 \\ 3 & -4 \end{bmatrix}$ and $\mathbf{x}(0) = \begin{bmatrix} 3 \\ 1 \end{bmatrix}$. We start by finding the eigenvalues of the matrix: $\lambda I - A = \begin{bmatrix} \lambda - 5 & 6 \\ -3 & \lambda + 4 \end{bmatrix}$, and the characteristic polynomial is $p_A(\lambda) = \lambda^2 - \lambda - 2 = (\lambda - 2)(\lambda + 1)$. This gives the eigenvalues $\lambda_1 = 2$ and $\lambda_2 = -1$. The first of these gives the eigenvector $\mathbf{v}_1 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$, and the second gives the eigenvector $\mathbf{v}_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$. So we have $A\mathbf{v}_1 = \lambda_1\mathbf{v}_1$ and $A\mathbf{v}_2 = \lambda_2\mathbf{v}_2$. The change of basis matrix is $S = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$, and with the new basis (of eigenvectors) $B = \{\mathbf{v}_1, \mathbf{v}_2\}$ we have $[A]_B = S^{-1}AS = \begin{bmatrix} 2 & 0 \\ 0 & -1 \end{bmatrix} = D$, a diagonal matrix. [There is no need to carry out the multiplication of the matrices if $B = \{\mathbf{v}_1, \mathbf{v}_2\}$ is known to be a basis of eigenvectors. It will always yield a diagonal matrix with the eigenvalues on the diagonal.]
The evolution matrix for this diagonal matrix is $[e^{tD}] = \begin{bmatrix} e^{2t} & 0 \\ 0 & e^{-t} \end{bmatrix}$, and the solution of the system is:
\[
\mathbf{x}(t) = [e^{tA}]\mathbf{x}(0) = S[e^{tD}]S^{-1}\mathbf{x}(0)
= \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} e^{2t} & 0 \\ 0 & e^{-t} \end{bmatrix}\begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}\begin{bmatrix} 3 \\ 1 \end{bmatrix}
= \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} 2e^{2t} \\ -e^{-t} \end{bmatrix}
= \begin{bmatrix} 4e^{2t} - e^{-t} \\ 2e^{2t} - e^{-t} \end{bmatrix}
= 2e^{2t}\begin{bmatrix} 2 \\ 1 \end{bmatrix} - e^{-t}\begin{bmatrix} 1 \\ 1 \end{bmatrix}
= 2e^{2t}\mathbf{v}_1 - e^{-t}\mathbf{v}_2
\]
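As a numerical sanity check (our addition, not part of the original notes), the closed-form answer above can be compared against the matrix exponential computed by SciPy's `expm`:

```python
import numpy as np
from scipy.linalg import expm

# System and initial condition from the worked example above.
A = np.array([[5.0, -6.0],
              [3.0, -4.0]])
x0 = np.array([3.0, 1.0])

# Eigenvectors found by hand: v1 for eigenvalue 2, v2 for eigenvalue -1.
v1 = np.array([2.0, 1.0])
v2 = np.array([1.0, 1.0])

def x_closed(t):
    # Closed-form solution: x(t) = 2 e^{2t} v1 - e^{-t} v2.
    return 2.0 * np.exp(2.0 * t) * v1 - np.exp(-t) * v2

# Compare with x(t) = e^{tA} x(0) at several times.
ok = all(np.allclose(x_closed(t), expm(t * A) @ x0)
         for t in (0.0, 0.5, 1.0, 2.0))
```

Since the diagonalization is exact, the two agree to machine precision at every sampled time.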
Revised May 9, 2017
The complex eigenvalue case

Let $A$ be a matrix with a complex conjugate pair of eigenvalues $\lambda = a + ib$ and $\bar{\lambda} = a - ib$. We can proceed just as in the case of real eigenvalues and find a complex vector $\mathbf{w}$ such that $(\lambda I - A)\mathbf{w} = \mathbf{0}$. The components of such a vector $\mathbf{w}$ will be complex numbers. If we decompose $\mathbf{w}$ into its real and imaginary vector components as $\mathbf{w} = \mathbf{u} + i\mathbf{v}$ (where $\mathbf{u}$ and $\mathbf{v}$ are real vectors), we can calculate that:

(1) $A\mathbf{w} = A\mathbf{u} + iA\mathbf{v} = \lambda\mathbf{w} = (a + ib)(\mathbf{u} + i\mathbf{v}) = (a\mathbf{u} - b\mathbf{v}) + i(b\mathbf{u} + a\mathbf{v})$

If we define the vector $\hat{\mathbf{w}} = \mathbf{u} - i\mathbf{v}$ and use the easy-to-prove fact that for a matrix $A$ with all real entries $A\mathbf{w} = \lambda\mathbf{w}$ implies $A\hat{\mathbf{w}} = \bar{\lambda}\hat{\mathbf{w}}$, we see that $\hat{\mathbf{w}} = \mathbf{u} - i\mathbf{v}$ will also be an eigenvector, with eigenvalue $\bar{\lambda}$, and:

(2) $A\hat{\mathbf{w}} = A\mathbf{u} - iA\mathbf{v} = \bar{\lambda}\hat{\mathbf{w}} = (a\mathbf{u} - b\mathbf{v}) - i(b\mathbf{u} + a\mathbf{v})$

The true value of this excursion into the world of complex numbers and complex vectors is seen when we add and subtract equations (1) and (2). We get:
\[
2A\mathbf{u} = 2(a\mathbf{u} - b\mathbf{v}), \qquad 2iA\mathbf{v} = 2i(b\mathbf{u} + a\mathbf{v})
\]
After cancellation of the factors of 2 and $2i$ in the respective equations and rearranging, we get:
\[
A\mathbf{v} = a\mathbf{v} + b\mathbf{u}, \qquad A\mathbf{u} = -b\mathbf{v} + a\mathbf{u}
\]
Note that we are now back in the "real world": all vectors and scalars in the above equations are real. If we use the two vectors $B = \{\mathbf{v}, \mathbf{u}\}$ as basis vectors associated with the two complex conjugate eigenvalues, we see that in coordinates associated with this basis (and change of basis matrix $S = [\mathbf{v} \ \mathbf{u}]$) we'll have the matrix $[A]_B$ of the form:
\[
[A]_B = S^{-1}AS = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}
= \sqrt{a^2 + b^2}\begin{bmatrix} \frac{a}{\sqrt{a^2+b^2}} & \frac{-b}{\sqrt{a^2+b^2}} \\[4pt] \frac{b}{\sqrt{a^2+b^2}} & \frac{a}{\sqrt{a^2+b^2}} \end{bmatrix}
= |\lambda|\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
= |\lambda|\, R_\theta
\]
where $R_\theta$ is the rotation matrix corresponding to the angle $\theta = \arg(\lambda)$.
Next, we want to find the evolution matrix for this (real) normal form. In fact,
\[
[e^{tB}] = e^{at}\begin{bmatrix} \cos bt & -\sin bt \\ \sin bt & \cos bt \end{bmatrix},
\]
a time-varying rotation matrix with exponential scaling. This yields a trajectory that spirals outward in the case where $\operatorname{Re}(\lambda) = a > 0$ (look to the original vector field to see whether it's clockwise or counterclockwise), or a trajectory that spirals inward toward $\mathbf{0}$ in the case where $\operatorname{Re}(\lambda) = a < 0$.
To derive this expression for $[e^{tB}]$, make another coordinate change with complex eigenvectors, starting with $B = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}$. $B$ will have the same eigenvalues as $A$, namely $\lambda = a + ib$ and $\bar{\lambda} = a - ib$, and $\lambda I - B = \begin{bmatrix} \lambda - a & b \\ -b & \lambda - a \end{bmatrix}$. Using the eigenvalue $\lambda = a + ib$, we seek a complex eigenvector such that
\[
\begin{bmatrix} ib & b \\ -b & ib \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} ib\alpha + b\beta \\ -b\alpha + ib\beta \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.
\]
This implies that $\alpha = i\beta$, so one such complex eigenvector is $\mathbf{w} = \begin{bmatrix} i \\ 1 \end{bmatrix}$. The eigenvalue $\bar{\lambda} = a - ib$ will then give eigenvector $\hat{\mathbf{w}} = \begin{bmatrix} -i \\ 1 \end{bmatrix}$.
Using the (complex) change of basis matrix $P = \begin{bmatrix} i & -i \\ 1 & 1 \end{bmatrix}$, we have that $P^{-1}BP = D = \begin{bmatrix} a + ib & 0 \\ 0 & a - ib \end{bmatrix}$. Just as in the case of real eigenvalues, it follows that:
\[
[e^{tB}] = P[e^{tD}]P^{-1}
= \begin{bmatrix} i & -i \\ 1 & 1 \end{bmatrix}\begin{bmatrix} e^{(a+ib)t} & 0 \\ 0 & e^{(a-ib)t} \end{bmatrix}\frac{1}{2i}\begin{bmatrix} 1 & i \\ -1 & i \end{bmatrix}
= e^{at}\begin{bmatrix} \dfrac{e^{ibt} + e^{-ibt}}{2} & \dfrac{-e^{ibt} + e^{-ibt}}{2i} \\[8pt] \dfrac{e^{ibt} - e^{-ibt}}{2i} & \dfrac{e^{ibt} + e^{-ibt}}{2} \end{bmatrix}
= e^{at}\begin{bmatrix} \cos bt & -\sin bt \\ \sin bt & \cos bt \end{bmatrix}.
\]
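The derivation above can be spot-checked numerically. A minimal sketch (our addition; the values of $a$ and $b$ are arbitrary test choices, and SciPy is assumed available):

```python
import numpy as np
from scipy.linalg import expm

# Sample real and imaginary parts of the eigenvalue a + ib.
a, b = -0.7, 1.3
B = np.array([[a, -b],
              [b, a]])

def etB(t):
    # Derived formula: e^{tB} = e^{at} [[cos bt, -sin bt], [sin bt, cos bt]].
    c, s = np.cos(b * t), np.sin(b * t)
    return np.exp(a * t) * np.array([[c, -s],
                                     [s, c]])

# Compare the formula against the numerically computed matrix exponential.
ok = all(np.allclose(etB(t), expm(t * B)) for t in np.linspace(0.0, 3.0, 7))
```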
Note that the exponential factor $e^{at}$ will grow if $a = \operatorname{Re}(\lambda) > 0$ and decay if $a = \operatorname{Re}(\lambda) < 0$. Further note that the matrix $\begin{bmatrix} \cos bt & -\sin bt \\ \sin bt & \cos bt \end{bmatrix}$ is a time-varying rotation matrix with rotational frequency $b$. The product of the exponential factor and the time-varying rotation matrix means that the trajectories associated with the evolution matrix $[e^{tB}]$ will be either outward or inward spirals depending upon whether $a > 0$ or $a < 0$. In the case where $a = 0$ we would get closed (periodic) trajectories: circles, in fact, for this standard case.

These calculations enable us to write down a closed form expression for the solution of this linear system, namely $\mathbf{x}(t) = [e^{tA}]\mathbf{x}(0)$ where
\[
[e^{tA}] = S[e^{tB}]S^{-1} = e^{at}\, S\begin{bmatrix} \cos bt & -\sin bt \\ \sin bt & \cos bt \end{bmatrix}S^{-1}.
\]
However, the more important result is the ability to qualitatively describe the trajectories for this system by knowing only the real part of the eigenvalues of the matrix $A$ and the direction of the corresponding vector field (clockwise vs. counterclockwise).
Problem: Solve the system
\[
\frac{dx}{dt} = 2x - 5y, \qquad \frac{dy}{dt} = 2x - 4y
\]
with initial conditions $x(0) = 0$, $y(0) = 1$.
Solution: In matrix form, we have $\frac{d\mathbf{x}}{dt} = A\mathbf{x}$ where $A = \begin{bmatrix} 2 & -5 \\ 2 & -4 \end{bmatrix}$ and $\mathbf{x}(0) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$. We again start by finding the eigenvalues of the matrix: $\lambda I - A = \begin{bmatrix} \lambda - 2 & 5 \\ -2 & \lambda + 4 \end{bmatrix}$, and the characteristic polynomial is $p_A(\lambda) = \lambda^2 + 2\lambda + 2 = (\lambda + 1)^2 + 1$. This gives the complex eigenvalue pair $\lambda = -1 + i$ and $\bar{\lambda} = -1 - i$. We seek a complex eigenvector for the first of these:
\[
\begin{bmatrix} -3 + i & 5 \\ -2 & 3 + i \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
\]
gives the (redundant) equations $(-3 + i)\alpha + 5\beta = 0$ and $-2\alpha + (3 + i)\beta = 0$. The first of these can be written as $5\beta = (3 - i)\alpha$, and an easy solution to this is $\alpha = 5$, $\beta = 3 - i$. (We could also have used the second equation, which is a scalar multiple of the first. The eigenvector might then have been different, but ultimately we'll get the same result.) This gives the complex eigenvector
\[
\mathbf{w} = \begin{bmatrix} 5 \\ 3 - i \end{bmatrix} = \begin{bmatrix} 5 \\ 3 \end{bmatrix} + i\begin{bmatrix} 0 \\ -1 \end{bmatrix} = \mathbf{u} + i\mathbf{v}.
\]
We have shown that with the specially chosen basis $B = \{\mathbf{v}, \mathbf{u}\}$, the new system will have standard matrix $[A]_B = S^{-1}AS = \begin{bmatrix} a & -b \\ b & a \end{bmatrix} = B$ where $a$ is the real part of the complex eigenvalue and $b$ is its imaginary part. We also showed that $[e^{tB}] = e^{at}\begin{bmatrix} \cos bt & -\sin bt \\ \sin bt & \cos bt \end{bmatrix}$. In this example, $a = -1$ and $b = 1$, $S = [\mathbf{v} \ \mathbf{u}] = \begin{bmatrix} 0 & 5 \\ -1 & 3 \end{bmatrix}$, $S^{-1} = \frac{1}{5}\begin{bmatrix} 3 & -5 \\ 1 & 0 \end{bmatrix}$, $B = \begin{bmatrix} -1 & -1 \\ 1 & -1 \end{bmatrix}$, and $[e^{tB}] = e^{-t}\begin{bmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{bmatrix}$. The solution to the system is therefore
\[
\mathbf{x}(t) = [e^{tA}]\mathbf{x}(0) = S[e^{tB}]S^{-1}\mathbf{x}(0)
= \begin{bmatrix} 0 & 5 \\ -1 & 3 \end{bmatrix} e^{-t}\begin{bmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{bmatrix}\frac{1}{5}\begin{bmatrix} 3 & -5 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix}
= \begin{bmatrix} 0 & 5 \\ -1 & 3 \end{bmatrix} e^{-t}\begin{bmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{bmatrix}\begin{bmatrix} -1 \\ 0 \end{bmatrix}
= \begin{bmatrix} 0 & 5 \\ -1 & 3 \end{bmatrix} e^{-t}\begin{bmatrix} -\cos t \\ -\sin t \end{bmatrix}
= e^{-t}\begin{bmatrix} -5\sin t \\ \cos t - 3\sin t \end{bmatrix}.
\]
That is, $x(t) = -5e^{-t}\sin t$ and $y(t) = e^{-t}(\cos t - 3\sin t)$.
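As with the first example, this answer can be verified against the matrix exponential. A short check (our addition, assuming SciPy):

```python
import numpy as np
from scipy.linalg import expm

# System and initial condition from the complex-eigenvalue example above.
A = np.array([[2.0, -5.0],
              [2.0, -4.0]])
x0 = np.array([0.0, 1.0])

def x_closed(t):
    # Closed-form solution: x(t) = -5 e^{-t} sin t, y(t) = e^{-t}(cos t - 3 sin t).
    return np.exp(-t) * np.array([-5.0 * np.sin(t),
                                  np.cos(t) - 3.0 * np.sin(t)])

ok = all(np.allclose(x_closed(t), expm(t * A) @ x0)
         for t in (0.0, 0.5, 1.0, 2.0))
```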
Repeated eigenvalue case [with geometric multiplicity (GM) less than the algebraic multiplicity (AM)]:
Problem: Solve the system
\[
\frac{dx}{dt} = y, \qquad \frac{dy}{dt} = -4x + 4y
\]
with initial conditions $x(0) = 3$, $y(0) = 2$.

Solution: In matrix form, we have $\frac{d\mathbf{x}}{dt} = A\mathbf{x}$ where $A = \begin{bmatrix} 0 & 1 \\ -4 & 4 \end{bmatrix}$ and $\mathbf{x}(0) = \begin{bmatrix} 3 \\ 2 \end{bmatrix}$. We again start by finding the eigenvalues of the matrix: $\lambda I - A = \begin{bmatrix} \lambda & -1 \\ 4 & \lambda - 4 \end{bmatrix}$, and the characteristic polynomial is $p_A(\lambda) = \lambda^2 - 4\lambda + 4 = (\lambda - 2)^2$. This gives the repeated eigenvalue $\lambda = 2$ with (algebraic) multiplicity 2. We seek eigenvectors:
\[
\begin{bmatrix} 2 & -1 \\ 4 & -2 \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
\]
gives the (redundant) equations $2\alpha - \beta = 0$ and $4\alpha - 2\beta = 0$. Therefore $\beta = 2\alpha$, so we can choose $\mathbf{v}_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$ or any scalar multiple of this as an eigenvector, but we are unable to find a second linearly independent eigenvector. (We say that the geometric multiplicity of the $\lambda = 2$ eigenvalue is 1.)
The standard procedure in this case is to seek a generalized eigenvector for this repeated eigenvalue, i.e. a vector $\mathbf{v}_2$ such that $(\lambda I - A)\mathbf{v}_2$ is not zero, but rather a multiple of the eigenvector $\mathbf{v}_1$. Specifically, we seek a vector such that $A\mathbf{v}_2 = \mathbf{v}_1 + \lambda\mathbf{v}_2$. This translates into seeking $\mathbf{v}_2$ such that $(\lambda I - A)\mathbf{v}_2 = -\mathbf{v}_1$. That is,
\[
\begin{bmatrix} 2 & -1 \\ 4 & -2 \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} -1 \\ -2 \end{bmatrix}.
\]
This gives redundant equations, the first of which is $2\alpha - \beta = -1$, or $\beta = 2\alpha + 1$. If we (arbitrarily) choose $\alpha = 0$, then $\beta = 1$, so $\mathbf{v}_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$.
The fact that $A\mathbf{v}_1 = 2\mathbf{v}_1$ and $A\mathbf{v}_2 = \mathbf{v}_1 + 2\mathbf{v}_2$ tells us that with the change of basis matrix $S = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}$, we will have $[A]_B = S^{-1}AS = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix} = B$. The standard form in this repeated eigenvalue case is a matrix of the form $B = \begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}$. (There are analogous forms in cases larger than $2 \times 2$ matrices.) Note that we can write $B = \begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix} = \lambda I + P$ where $P = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$.
There is a simple relationship between the solutions of the systems $\frac{d\mathbf{x}}{dt} = B\mathbf{x}$ and $\frac{d\mathbf{u}}{dt} = P\mathbf{u}$, namely $\mathbf{x}(t) = e^{\lambda t}\mathbf{u}(t)$. This is easily seen by differentiation:
\[
\frac{d\mathbf{x}}{dt} = \frac{d}{dt}\left[e^{\lambda t}\mathbf{u}(t)\right] = e^{\lambda t}\frac{d\mathbf{u}}{dt} + \lambda e^{\lambda t}\mathbf{u} = e^{\lambda t}P\mathbf{u} + \lambda e^{\lambda t}\mathbf{u} = e^{\lambda t}(P\mathbf{u} + \lambda I\mathbf{u}) = e^{\lambda t}(\lambda I + P)\mathbf{u} = (\lambda I + P)e^{\lambda t}\mathbf{u} = B\mathbf{x}
\]
together with the fact that $\mathbf{x}(0) = \mathbf{u}(0)$. Furthermore, solving $\frac{d\mathbf{u}}{dt} = P\mathbf{u}$ is simple. If $\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}$, then with the matrix $P = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$ we have $u_1'(t) = u_2$ and $u_2'(t) = 0$. The second equation gives that $u_2(t) = c_2 = u_2(0)$, a constant. The first equation is then $u_1'(t) = u_2(0)$, so $u_1(t) = u_2(0)\,t + c_1$. At $t = 0$ this gives $u_1(0) = c_1$, so $u_1(t) = u_1(0) + u_2(0)\,t$. Together this gives:
\[
\mathbf{u}(t) = \begin{bmatrix} u_1(t) \\ u_2(t) \end{bmatrix} = \begin{bmatrix} u_1(0) + u_2(0)\,t \\ u_2(0) \end{bmatrix} = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}\begin{bmatrix} u_1(0) \\ u_2(0) \end{bmatrix} = e^{tP}\mathbf{u}(0)
\]
Therefore $\mathbf{x}(t) = e^{\lambda t}\begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix}\mathbf{x}(0) = \begin{bmatrix} e^{\lambda t} & te^{\lambda t} \\ 0 & e^{\lambda t} \end{bmatrix}\mathbf{x}(0)$, so $e^{tB} = \begin{bmatrix} e^{\lambda t} & te^{\lambda t} \\ 0 & e^{\lambda t} \end{bmatrix}$ for $B = \begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}$.
If we apply this to the problem at hand, we get $[e^{tB}] = \begin{bmatrix} e^{2t} & te^{2t} \\ 0 & e^{2t} \end{bmatrix}$. The solution to the system is therefore
\[
\mathbf{x}(t) = [e^{tA}]\mathbf{x}(0) = S[e^{tB}]S^{-1}\mathbf{x}(0)
= \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} e^{2t} & te^{2t} \\ 0 & e^{2t} \end{bmatrix}\begin{bmatrix} 1 & 0 \\ -2 & 1 \end{bmatrix}\begin{bmatrix} 3 \\ 2 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} e^{2t} & te^{2t} \\ 0 & e^{2t} \end{bmatrix}\begin{bmatrix} 3 \\ -4 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}\begin{bmatrix} 3e^{2t} - 4te^{2t} \\ -4e^{2t} \end{bmatrix}
= \begin{bmatrix} e^{2t}(3 - 4t) \\ e^{2t}(2 - 8t) \end{bmatrix}.
\]
That is, $x(t) = e^{2t}(3 - 4t)$ and $y(t) = e^{2t}(2 - 8t)$.

It's worth noting that this can also be expressed as $\mathbf{x}(t) = e^{2t}\begin{bmatrix} 3 \\ 2 \end{bmatrix} - 4te^{2t}\begin{bmatrix} 1 \\ 2 \end{bmatrix}$.
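A quick numerical check of this repeated-eigenvalue solution (our addition, assuming SciPy):

```python
import numpy as np
from scipy.linalg import expm

# System and initial condition from the repeated-eigenvalue example above.
A = np.array([[0.0, 1.0],
              [-4.0, 4.0]])
x0 = np.array([3.0, 2.0])

def x_closed(t):
    # Closed-form solution: x(t) = e^{2t}(3 - 4t), y(t) = e^{2t}(2 - 8t).
    return np.exp(2.0 * t) * np.array([3.0 - 4.0 * t, 2.0 - 8.0 * t])

ok = all(np.allclose(x_closed(t), expm(t * A) @ x0)
         for t in (0.0, 0.5, 1.0, 2.0))
```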
The phase portrait in this case has just one invariant (eigenvector) direction. It gives an unstable node which can be viewed as a degenerate case of a (clockwise) outward spiral that cannot get past the eigenvector direction.
Moral of the Story: It's always possible to find a special basis relative to which a given linear system is in its simplest possible form. The new basis provides a way to decompose the given problem into several simple, standard problems which can be easily solved. Any complication in the algebraic expressions for the solution is the result of changing back to the original coordinates.
The standard $2 \times 2$ cases are:

Diagonalizable with eigenvalues $\lambda_1, \lambda_2$:
\[
B = D = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}, \qquad
[e^{tB}] = [e^{tD}] = \begin{bmatrix} e^{\lambda_1 t} & 0 \\ 0 & e^{\lambda_2 t} \end{bmatrix}
\]
Complex pair of eigenvalues $\lambda = a \pm ib$:
\[
B = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}, \qquad
[e^{tB}] = e^{at}\begin{bmatrix} \cos bt & -\sin bt \\ \sin bt & \cos bt \end{bmatrix}
\]
Repeated eigenvalue $\lambda$ with GM < AM:
\[
B = \begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}, \qquad
[e^{tB}] = \begin{bmatrix} e^{\lambda t} & te^{\lambda t} \\ 0 & e^{\lambda t} \end{bmatrix}
\]
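The three standard evolution matrices can be collected into one small helper. This is our own sketch (the name `evolution_matrix` and its interface are not from the notes), checked against the numerically computed matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def evolution_matrix(kind, t, **p):
    # Evolution matrix [e^{tB}] for each standard 2x2 normal form B.
    if kind == "diagonal":      # eigenvalues l1, l2
        return np.diag([np.exp(p["l1"] * t), np.exp(p["l2"] * t)])
    if kind == "complex":       # eigenvalue pair a +/- ib
        a, b = p["a"], p["b"]
        c, s = np.cos(b * t), np.sin(b * t)
        return np.exp(a * t) * np.array([[c, -s], [s, c]])
    if kind == "repeated":      # repeated eigenvalue l with GM < AM
        l = p["l"]
        return np.exp(l * t) * np.array([[1.0, t], [0.0, 1.0]])
    raise ValueError(kind)

# Check each formula against expm(tB) for the corresponding normal form B.
checks = [
    (evolution_matrix("diagonal", 0.7, l1=2.0, l2=-1.0),
     np.diag([2.0, -1.0])),
    (evolution_matrix("complex", 0.7, a=-1.0, b=1.0),
     np.array([[-1.0, -1.0], [1.0, -1.0]])),
    (evolution_matrix("repeated", 0.7, l=2.0),
     np.array([[2.0, 1.0], [0.0, 2.0]])),
]
ok = all(np.allclose(E, expm(0.7 * B)) for E, B in checks)
```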
In general, you should expect to encounter systems more complicated than these $2 \times 2$ examples. To illustrate the line of reasoning in a significantly more complicated case, here is a Big Problem.
Big Problem: a) Find the general solution for the following system of differential equations:
\[
\frac{dx_1}{dt} = 2x_1 - 4x_4 + 3x_5, \quad
\frac{dx_2}{dt} = 2x_2 - 2x_3 + 2x_4, \quad
\frac{dx_3}{dt} = x_2 - x_4, \quad
\frac{dx_4}{dt} = -x_4, \quad
\frac{dx_5}{dt} = -3x_4 + 2x_5
\]
b) Find the solution in the case where $\mathbf{x}(0) = \begin{bmatrix} 5 \\ 4 \\ 3 \\ 2 \\ 1 \end{bmatrix}$.
Solution: This is a continuous dynamical system of the form $\frac{d\mathbf{x}}{dt} = A\mathbf{x}$ where
\[
A = \begin{bmatrix} 2 & 0 & 0 & -4 & 3 \\ 0 & 2 & -2 & 2 & 0 \\ 0 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -3 & 2 \end{bmatrix}.
\]
We start by seeking the eigenvalues. We have
\[
\lambda I - A = \begin{bmatrix} \lambda - 2 & 0 & 0 & 4 & -3 \\ 0 & \lambda - 2 & 2 & -2 & 0 \\ 0 & -1 & \lambda & 1 & 0 \\ 0 & 0 & 0 & \lambda + 1 & 0 \\ 0 & 0 & 0 & 3 & \lambda - 2 \end{bmatrix}.
\]
The characteristic polynomial is $p_A(\lambda) = (\lambda - 2)^2(\lambda + 1)(\lambda^2 - 2\lambda + 2)$, which yields the repeated eigenvalue $\lambda_1 = \lambda_2 = 2$ (with algebraic multiplicity 2), the distinct eigenvalue $\lambda_3 = -1$, and the complex pair $\lambda_4 = 1 + i$ and $\lambda_5 = \bar{\lambda}_4 = 1 - i$.
The repeated eigenvalue $\lambda_1 = \lambda_2 = 2$ yields just one eigenvector $\mathbf{v}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}$, so its geometric multiplicity is just 1. We then seek a "generalized eigenvector" $\mathbf{v}_2$ such that $A\mathbf{v}_2 = \mathbf{v}_1 + \lambda\mathbf{v}_2$ where $\lambda = 2$. That is, we seek a vector $\mathbf{v}_2$ such that $\lambda\mathbf{v}_2 - A\mathbf{v}_2 = (\lambda I - A)\mathbf{v}_2 = -\mathbf{v}_1$. This is just an inhomogeneous system, which yields solutions of the form $\mathbf{v}_2 = \begin{bmatrix} t \\ 0 \\ 0 \\ 0 \\ \frac{1}{3} \end{bmatrix}$. For simplicity, take the solution with $t = 0$, i.e. $\mathbf{v}_2 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ \frac{1}{3} \end{bmatrix}$.

The eigenvalue $\lambda_3 = -1$ yields the eigenvector $\mathbf{v}_3 = \begin{bmatrix} 1 \\ 0 \\ 3 \\ 3 \\ 3 \end{bmatrix}$. A straightforward calculation with the complex eigenvalue $\lambda_4 = 1 + i$ yields the complex eigenvector
\[
\mathbf{w} = \begin{bmatrix} 0 \\ 1 + i \\ 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} + i\begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \mathbf{v}_5 + i\,\mathbf{v}_4
\]
in accordance with the method previously derived.
Using the basis
\[
B = \left\{ \mathbf{v}_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix},\ \mathbf{v}_2 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ \frac{1}{3} \end{bmatrix},\ \mathbf{v}_3 = \begin{bmatrix} 1 \\ 0 \\ 3 \\ 3 \\ 3 \end{bmatrix},\ \mathbf{v}_4 = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix},\ \mathbf{v}_5 = \begin{bmatrix} 0 \\ 1 \\ 1 \\ 0 \\ 0 \end{bmatrix} \right\}
\]
and change of basis matrix
\[
S = \begin{bmatrix} 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 3 & 0 & 1 \\ 0 & 0 & 3 & 0 & 0 \\ 0 & \frac{1}{3} & 3 & 0 & 0 \end{bmatrix},
\]
we compute the inverse matrix
\[
S^{-1} = \begin{bmatrix} 1 & 0 & 0 & -\frac{1}{3} & 0 \\ 0 & 0 & 0 & -3 & 3 \\ 0 & 0 & 0 & \frac{1}{3} & 0 \\ 0 & 1 & -1 & 1 & 0 \\ 0 & 0 & 1 & -1 & 0 \end{bmatrix}.
\]
We know that
\[
A\mathbf{v}_1 = 2\mathbf{v}_1, \quad
A\mathbf{v}_2 = \mathbf{v}_1 + 2\mathbf{v}_2, \quad
A\mathbf{v}_3 = -\mathbf{v}_3, \quad
A\mathbf{v}_4 = \mathbf{v}_4 + \mathbf{v}_5, \quad
A\mathbf{v}_5 = -\mathbf{v}_4 + \mathbf{v}_5,
\]
so the matrix of $A$ relative to the basis $B$ is
\[
B = S^{-1}AS = \begin{bmatrix} 2 & 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 & 1 \end{bmatrix}.
\]
Since $A = SBS^{-1}$, it will be the case that the evolution matrices are related via $e^{tA} = S\,e^{tB}\,S^{-1}$ where
\[
e^{tB} = \begin{bmatrix} e^{2t} & te^{2t} & 0 & 0 & 0 \\ 0 & e^{2t} & 0 & 0 & 0 \\ 0 & 0 & e^{-t} & 0 & 0 \\ 0 & 0 & 0 & e^t\cos t & -e^t\sin t \\ 0 & 0 & 0 & e^t\sin t & e^t\cos t \end{bmatrix}.
\]
The solution is then
\[
\mathbf{x}(t) = e^{tA}\mathbf{x}(0) = S\,e^{tB}\,S^{-1}\mathbf{x}(0)
= \begin{bmatrix} 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 0 & 3 & 0 & 1 \\ 0 & 0 & 3 & 0 & 0 \\ 0 & \frac{1}{3} & 3 & 0 & 0 \end{bmatrix}
\begin{bmatrix} e^{2t} & te^{2t} & 0 & 0 & 0 \\ 0 & e^{2t} & 0 & 0 & 0 \\ 0 & 0 & e^{-t} & 0 & 0 \\ 0 & 0 & 0 & e^t\cos t & -e^t\sin t \\ 0 & 0 & 0 & e^t\sin t & e^t\cos t \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & -\frac{1}{3} & 0 \\ 0 & 0 & 0 & -3 & 3 \\ 0 & 0 & 0 & \frac{1}{3} & 0 \\ 0 & 1 & -1 & 1 & 0 \\ 0 & 0 & 1 & -1 & 0 \end{bmatrix}\mathbf{x}(0).
\]
If we multiply the leftmost matrices and write $S^{-1}\mathbf{x}(0) = \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{bmatrix}$, this yields the general solution:
\[
\mathbf{x}(t) = e^{tA}\mathbf{x}(0) = S\,e^{tB}\,S^{-1}\mathbf{x}(0)
= \begin{bmatrix} e^{2t} & te^{2t} & e^{-t} & 0 & 0 \\ 0 & 0 & 0 & e^t(\cos t + \sin t) & e^t(\cos t - \sin t) \\ 0 & 0 & 3e^{-t} & e^t\sin t & e^t\cos t \\ 0 & 0 & 3e^{-t} & 0 & 0 \\ 0 & \frac{1}{3}e^{2t} & 3e^{-t} & 0 & 0 \end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{bmatrix}
\]
or
\[
\begin{aligned}
x_1(t) &= c_1 e^{2t} + c_2 t e^{2t} + c_3 e^{-t} \\
x_2(t) &= c_4 e^t(\cos t + \sin t) + c_5 e^t(\cos t - \sin t) \\
x_3(t) &= 3c_3 e^{-t} + c_4 e^t\sin t + c_5 e^t\cos t \\
x_4(t) &= 3c_3 e^{-t} \\
x_5(t) &= \tfrac{1}{3}c_2 e^{2t} + 3c_3 e^{-t}
\end{aligned}
\]
If, on the other hand, we use the initial condition $\mathbf{x}(0) = \begin{bmatrix} 5 \\ 4 \\ 3 \\ 2 \\ 1 \end{bmatrix}$, then $S^{-1}\mathbf{x}(0) = \begin{bmatrix} \frac{13}{3} \\ -3 \\ \frac{2}{3} \\ 3 \\ 1 \end{bmatrix}$ and we get the specific solution:
\[
\begin{aligned}
x_1(t) &= \tfrac{13}{3}e^{2t} - 3te^{2t} + \tfrac{2}{3}e^{-t} \\
x_2(t) &= e^t(4\cos t + 2\sin t) \\
x_3(t) &= 2e^{-t} + e^t(3\sin t + \cos t) \\
x_4(t) &= 2e^{-t} \\
x_5(t) &= -e^{2t} + 2e^{-t}
\end{aligned}
\]
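Even in this 5-dimensional case, the specific solution can be checked numerically in a few lines (our addition, assuming SciPy):

```python
import numpy as np
from scipy.linalg import expm

# Coefficient matrix and initial condition from the Big Problem.
A = np.array([[2, 0, 0, -4, 3],
              [0, 2, -2, 2, 0],
              [0, 1, 0, -1, 0],
              [0, 0, 0, -1, 0],
              [0, 0, 0, -3, 2]], dtype=float)
x0 = np.array([5.0, 4.0, 3.0, 2.0, 1.0])

def x_closed(t):
    # Specific solution computed above via the basis of (generalized) eigenvectors.
    e2, em, ep = np.exp(2 * t), np.exp(-t), np.exp(t)
    c, s = np.cos(t), np.sin(t)
    return np.array([
        (13 / 3) * e2 - 3 * t * e2 + (2 / 3) * em,
        ep * (4 * c + 2 * s),
        2 * em + ep * (3 * s + c),
        2 * em,
        -e2 + 2 * em,
    ])

ok = all(np.allclose(x_closed(t), expm(t * A) @ x0) for t in (0.0, 0.5, 1.0))
```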