Fall 2010 Final Exam Review - Madison Area Technical College



Section 1.1: Some Basic Mathematical Models; Direction Fields

A direction field (or a slope field) is a graph consisting of short line segments that indicate the slopes of the tangent lines to the family of curves of a differential equation at a discrete number of points.

An equilibrium solution is a solution to a differential equation that does not change (i.e., is a constant). Equilibrium solutions are found by setting y′ = 0 (because the derivative of a constant is zero) and solving the resulting equation.

A stable equilibrium solution is a solution that other solutions tend to converge toward. Direction field lines point toward stable equilibria.

An unstable equilibrium solution is a solution that other solutions tend to diverge away from. Direction field lines point away from unstable equilibria.

More comments on direction fields:

• Direction fields are useful in studying equations of the form dy/dt = f(t, y).

• f is sometimes called the rate function (think of the exponential growth differential equation dy/dt = ky, where k is the growth rate (or rate constant)).

• Direction fields are nice because they do not require solving the differential equation.

• Use a computer to do the work!
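Below is a minimal Python sketch (assuming numpy and matplotlib are available) that draws a direction field without solving the equation; the rate function f(t, y) = y(1 − y) is a hypothetical choice for illustration, with equilibria at y = 0 (unstable) and y = 1 (stable).

import numpy as np
import matplotlib.pyplot as plt

def f(t, y):
    return y * (1 - y)  # hypothetical rate function; equilibria at y = 0 and y = 1

T, Y = np.meshgrid(np.linspace(0, 4, 20), np.linspace(-0.5, 1.5, 20))
S = f(T, Y)                   # slope at each grid point
L = np.sqrt(1 + S**2)         # normalize (1, slope) so all segments have equal length
plt.quiver(T, Y, 1/L, S/L, angles='xy')
plt.xlabel('t'); plt.ylabel('y')
plt.title("Direction field for y' = y(1 - y)")
plt.show()

In the resulting plot, the segments lean toward y = 1 and away from y = 0, which is exactly the stable/unstable distinction described above.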

Section 1.2: Solutions of Some Differential Equations

Equations of the form dy/dt = ay + b have a general solution y = −(b/a) + ce^(at).

Notes:

• As we solve these equations by performing a single integration, a single constant of integration will always be generated.

• To get a concrete value for the constant of integration, a desired point that the solution should contain must be given.

• The given point on the curve is called an initial condition.

• A differential equation combined with an initial condition is called an initial value problem (IVP). The solution obtained with the arbitrary constant of integration (due to no specified initial condition) is called the general solution.

• All the graphs of the solutions for all possible values of the constant of integration are called the integral curves.

• The solution to the equations in this section will be found by manipulating the equation so that one side is an exact differential and then integrating.

• This will be analogous to the technique of “separation of variables” you may have learned from section 7.2 of Smith & Minton’s 3rd edition of Calculus: Early Transcendental Functions.
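As a quick check of these notes, here is a minimal sympy sketch; the equation dy/dt = 2y − 4 and the initial condition y(0) = 1 are hypothetical choices for illustration.

import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t), 2*y(t) - 4)
print(sp.dsolve(ode))                 # general solution: y(t) = C1*exp(2*t) + 2
print(sp.dsolve(ode, ics={y(0): 1}))  # the initial condition fixes C1 = -1

Note how the single integration produces the single constant C1, and how the initial condition pins it down, just as described above.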

Section 1.3: Classification of Differential Equations

Ordinary vs. Partial Differential Equations

|Ordinary Differential Equations |Partial Differential Equations |

|• Only ordinary derivatives appear in the equation |• Only partial derivatives appear in the equation |

|Example: |Example: |

|L·d²Q/dt² + R·dQ/dt + Q/C = E(t) |α²·∂²u/∂x² = ∂u/∂t |

|(RLC series circuit with a drive voltage) |(heat conduction equation) |

Systems of Differential Equations

• …consist of multiple differential equations involving multiple variables.

• Used when one quantity depends on other quantities…

• Example: Lotka-Volterra (predator-prey) equations: dx/dt = ax − αxy, dy/dt = −cy + γxy

Order of a Differential Equation

• …is the order of the highest derivative that appears (analogy: the degree of a polynomial is the largest exponent)

• General form for an nth order differential equation: F(t, y, y′, …, y^(n)) = 0 for some function F.

• We’ll deal with differential equations that can be solved explicitly for the highest derivative: y^(n) = f(t, y, y′, …, y^(n−1)).

Linear vs. Nonlinear Differential Equations

• F(t, y, y′, …, y^(n)) = 0 is linear if F is a linear function of y, y′, …, y^(n); note: F does not have to be linear in t…

• The general form of a linear differential equation of order n is: a0(t)y^(n) + a1(t)y^(n−1) + … + an(t)y = g(t)

• Nonlinear differential equations are not of the above form…

• Many times, nonlinear equations can be approximated as being linear over small regions of the solution. This process is known as linearization.

Solutions of Differential Equations

• …are functions φ such that all necessary derivatives exist over an interval of interest of the independent variable and that satisfy y^(n) = f(t, y, y′, …, y^(n−1)).

• To verify a given solution, substitute it and its derivatives into the given differential equation to show that the equation is satisfied.

Section 2.1: Linear Equations; Method of Integrating Factors

Form of a linear first-order differential equation: y′ + p(t)y = g(t)

Formula for the solution of a first-order linear differential equation with constant coefficients:

If y′ + ay = g(t), then μ(t) = e^(at), and y = e^(−at) ∫ e^(at)g(t) dt + ce^(−at)

Formula for the solution of a general first-order linear differential equation:

If y′ + p(t)y = g(t), then μ(t) = exp(∫ p(t) dt), and y = (1/μ(t))·[∫ μ(t)g(t) dt + c]
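A minimal sympy sketch verifying the general integrating-factor formula; p(t) = 1/t and g(t) = t are hypothetical choices (valid for t > 0).

import sympy as sp

t, c = sp.symbols('t c', positive=True)
p, g = 1/t, t
mu = sp.exp(sp.integrate(p, t))           # mu(t) = exp(integral of p) = t
y = (sp.integrate(mu*g, t) + c)/mu        # y = t**2/3 + c/t
print(sp.simplify(y.diff(t) + p*y - g))   # prints 0: y satisfies y' + p*y = g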

Section 2.2: Separable Equations

General form of a first-order differential equation: dy/dx = f(x, y)

Separable first-order differential equation: M(x) + N(y)·dy/dx = 0

Another form for a separable first-order differential equation: M(x) dx + N(y) dy = 0

Solution:

∫ M(x) dx + ∫ N(y) dy = c
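A minimal sympy sketch of a separable equation; dy/dx = x/y (i.e., M(x) = −x and N(y) = y in the form above) is a hypothetical example.

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
sol = sp.dsolve(sp.Eq(y(x).diff(x), x/y(x)))
print(sol)   # two branches: y(x) = -sqrt(C1 + x**2) and y(x) = sqrt(C1 + x**2)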

Note: equations of the form dy/dx = f(y/x), where the right-hand side depends only on the ratio y/x, can be converted to a separable equation using the substitution v = y/x. Recall the following homework problem:

[pic]

Section 2.3: Modeling with First Order Equations

Three Key Steps in Mathematical Modeling

• Construct the model (use the steps at the end of section 1.1).

• Analyze the model by solving it or making appropriate simplifications to solve it.

• Compare the results from the model with experiment and observation.

Section 2.4: Differences Between Linear and Nonlinear Equations

|Linear First-Order Differential Equations |Nonlinear First-Order Differential Equations |

|[pic] |[pic] |

| |… we have only looked at one kind, which is separable equations: |

| |[pic] |

|Solution: |Solution: |

|First, compute [pic] |First, separate variables: [pic] |

|Then answer is explicitly |Then integrate for an implicit equation |

|[pic] |[pic] |

|Theorem 2.4.1: Existence and Uniqueness of the Solution of a Linear |Theorem 2.4.2: Existence and Uniqueness of the Solution of a Nonlinear|

|First-Order Differential Equation |First-Order Differential Equation |

|If the functions p and g are continuous in an open interval I |If the functions f and ∂f/∂y are continuous in the rectangle |

|containing the point t = t0, then there exists a unique function y = |containing the point (t0, y0), then there exists a unique solution y =|

|φ(t) in I that satisfies the initial value problem |φ(t) in that rectangle that satisfies the initial value problem |

|y′ + p(t)y = g(t) and y(t0) = y0. |y′ = f(t, y) and y(t0) = y0. |

| | |

|The proof is rooted in the fact that we can write the solution in |The proof is beyond the scope of this class. It should be noted that |

|terms of operations on continuous functions. |if the conditions of the hypothesis are not satisfied, then it is |

| |possible to get multiple solutions for one initial condition |

|Note that if the uniqueness criteria are met, then the graphs of two solutions cannot intersect; if they did, there would be two |

|solutions (contradicting uniqueness) for a given initial condition. |

|Interval of Definition: |Interval of Definition: |

|…is determined by any discontinuities in p and g |…cannot be determined by looking at f. |

| |… must look at the solution itself |

|General Solutions: |General Solutions: |

|…can always be written using the constant of integration c. |…many times cannot be summarized with the constant of integration; |

| |example: equilibrium solutions. |

|Implicit vs. Explicit Solutions: |Implicit vs. Explicit Solutions: |

|…the solution is stated explicitly as an integral. |…the solutions obtained are usually implicit. |

Section 2.5: Autonomous Equations and Population Dynamics

In this section, we look at a restricted form of a nonlinear separable equation:

dy/dt = f(y).

When f is independent of t, the equation is called autonomous (i.e., t does not drive the solution; the solution only depends on itself…).

Logistic Growth Model

• Accounts for the fact that the growth rate depends on the population: instead of

• dy/dt = ry, use

• dy/dt = h(y)y for some function h.

• We like h to model the fact that as populations get larger, the growth rate gets smaller due to “using up” the environment. h(y) = r − ay is a simple model that gets used extensively.

• dy/dt = (r − ay)y is called the logistic equation, and is usually written in the form:

• dy/dt = r(1 − y/K)y, where K = r/a is the environmental carrying capacity (or saturation level), and r is the intrinsic growth rate.

• When finding equilibrium solutions by solving dy/dt = 0, we are also solving f(y) = 0; zeros of f are called critical points.

• A phase line graph shows the y-axis and how solutions either converge toward or diverge from equilibrium solutions.

• For stable equilibrium solutions, the phase line arrows point toward the equilibrium.

• For unstable equilibrium solutions, the phase line arrows point away from the equilibrium.

• For the equation dy/dt = −r(1 − y/T)y, T is called the threshold level for the population.

Section 2.6: Exact Equations and Integrating Factors

In this section, we examine the equation M(x, y) + N(x, y)y′ = 0.

If once again the solution comes from an implicit relation ψ(x, y) = c, then

∂ψ/∂x = M(x, y) and ∂ψ/∂y = N(x, y).

Thus:

Theorem 2.6.1: Exact Differential Equations and Their Solution

Let the functions M, N, My, and Nx be continuous in some rectangular region R. Then the equation M(x, y) + N(x, y)y′ = 0 is an exact differential equation in the region R if and only if My(x, y) = Nx(x, y) at each point of R. That is, there exists a function ψ satisfying ψx = M and ψy = N if and only if My = Nx.

The method of solving an exact equation is identical to finding the potential for a conservative vector field…

Integrating Factors…

Sometimes a differential equation that is not exact can be converted to an exact equation by multiplying the equation by an integrating factor μ(x, y):

μ(x, y)M(x, y) + μ(x, y)N(x, y)y′ = 0

For this equation to be exact, it must be true that

(μM)y = (μN)x

which leads to the partial differential equation:

Mμy − Nμx + (My − Nx)μ = 0

If μ can be found, then an exact equation is formed, whose solution can be obtained using the prior technique. Unfortunately, the equation to find μ is usually harder to solve than the original equation (in general).

However, we can look for simplifications, like what happens if the original equation is such that μ depends only on x or y.

|Differential Equation where the integrating factor μ depends on x only|Differential Equation where the integrating factor μ depends on y only|

|(i.e., μ = μ(x)): |(i.e., μ = μ(y)): |

|dμ/dx = [(My − Nx)/N]·μ |dμ/dy = [(Nx − My)/M]·μ |

|Criterion for μ = μ(x) only: |Criterion for μ = μ(y) only: |

|(My − Nx)/N is a function of x only. |(Nx − My)/M is a function of y only. |

|Note that the equation for μ is both linear and separable now. |Note that the equation for μ is both linear and separable now. |

Section 2.7: Numerical Approximations: Euler’s Method

yn+1 = yn + f(xn, yn)·(xn+1 − xn)

Or, letting h = Δx: yn+1 = yn + h·f(xn, yn)

To perform Euler’s method quickly on your calculator:

• Store the initial conditions in variables X and Y.

• Enter the update formula for Y then for X separated by a colon.

• Repeatedly use 2ND ENTER until you are at the final x-value.

• If you want to see the intermediate values of Y, you’ll have to type Y every time…

• Example: for dy/dx = x + e^(−y), y(1) = −2, h = 0.25

1 → X

-2 → Y

Y + (X + e^(-Y))*0.25 → Y : X + 0.25 → X

Y

2ND ENTER 2ND ENTER …
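The same iteration in Python, for readers without the calculator; f, the initial point, and the step size are taken from the example above.

import math

def euler(f, x, y, h, steps):
    for _ in range(steps):
        y = y + h*f(x, y)   # update Y first, exactly as in the keystrokes
        x = x + h           # then advance X
        print(f"x = {x:.2f}, y = {y:.5f}")

euler(lambda x, y: x + math.exp(-y), x=1.0, y=-2.0, h=0.25, steps=4)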

Section 3.1: Homogeneous Equations with Constant Coefficients

Big idea: ay″ + by′ + cy = 0 has solutions of the form y = e^(r1t) or y = e^(r2t) for some values r.

Here are four fundamental second-order differential equations and their solutions:

|[pic] |[pic] |[pic] |

|[pic] |[pic] |[pic] |


To solve a second-order differential equation with constant coefficients:

ay″ + by′ + cy = 0

Assume a solution of the form y = e^(rt), substitute it and its derivatives y′ = re^(rt) and y″ = r²e^(rt) into the equation.

(ar² + br + c)e^(rt) = 0

Then find the values r1 and r2 that satisfy the resulting characteristic equation:

ar² + br + c = 0

The general solution is a linear combination of the two solutions:

y = c1e^(r1t) + c2e^(r2t)
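A minimal numpy sketch of this recipe; the coefficients (a, b, c) = (1, 5, 6) are a hypothetical example with real, unequal roots.

import numpy as np

a, b, c = 1, 5, 6
r1, r2 = np.roots([a, b, c])   # roots of a*r**2 + b*r + c = 0 (order may vary)
print(r1, r2)                  # -2 and -3
print(f"y = c1*exp({r1}*t) + c2*exp({r2}*t)")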

Section 3.2: Solutions of Linear Homogeneous Equations; the Wronskian

Big Idea: Any two solutions we find of a homogeneous linear second-order differential equation are all that is needed if the Wronskian is non-zero on the interval of the existence of the solution.

Theorem 3.2.1: Existence and Uniqueness Theorem

(for Linear Second-Order Differential Equations)

For the initial value problem

y″ + p(t)y′ + q(t)y = g(t), y(t0) = y0, y′(t0) = y0′

where p, q, and g are continuous on some open interval I containing the point t0, there is exactly one solution y = φ(t) of this problem, and the solution exists throughout the interval I.

Theorem 3.2.2: Principle of Superposition

If y1 and y2 are solutions of the differential equation

y″ + p(t)y′ + q(t)y = 0,

then the linear combination c1y1 + c2y2 is a solution for any values of the constants c1 and c2.

The Wronskian: W(y1, y2)(t) = y1(t)y2′(t) − y1′(t)y2(t)

Theorem 3.2.3:

If y1 and y2 are two solutions of the differential equation

y″ + p(t)y′ + q(t)y = 0

and the initial conditions y(t0) = y0 and y′(t0) = y0′ must be satisfied by y, then it is always possible to choose the constants c1 and c2 so that

y = c1y1(t) + c2y2(t)

satisfies this initial value problem if and only if the Wronskian W(y1, y2) is not zero at t0.

Theorem 3.2.4:

If y1 and y2 are two solutions of the differential equation

y″ + p(t)y′ + q(t)y = 0

then the family of solutions

y = c1y1(t) + c2y2(t)

with arbitrary constants c1 and c2 includes every solution of the differential equation if and only if there is a point t0 where the Wronskian W(y1, y2) is not zero.

Notes:

• y = c1y1(t) + c2y2(t) is called the general solution of y″ + p(t)y′ + q(t)y = 0.

• y1 and y2 are said to form a fundamental set of solutions.

• To find the general solution, and thus all solutions of y″ + p(t)y′ + q(t)y = 0, we need only find two solutions of the given equation whose Wronskian is non-zero.

Theorem 3.2.5:

If the differential equation

y″ + p(t)y′ + q(t)y = 0

has coefficients p and q that are continuous on some open interval I, and if t0 ∈ I, and if y1 is a solution that satisfies the initial conditions

y1(t0) = 1, y1′(t0) = 0,

and if y2 is a solution that satisfies the initial conditions

y2(t0) = 0, y2′(t0) = 1,

then y1 and y2 form a fundamental set of solutions of the equation.

Theorem 3.2.6: (Abel’s Theorem)

If y1 and y2 are two solutions of the differential equation

y″ + p(t)y′ + q(t)y = 0

with the coefficients p and q continuous on an open interval I, then the Wronskian W(y1, y2)(t) is given by:

W(y1, y2)(t) = c·exp(−∫ p(t) dt)

where c is a certain constant that depends on y1 and y2, but not on t. Further, W(y1, y2)(t) either is zero for all t ∈ I (if c = 0), or else is never zero in I (if c ≠ 0).

Section 3.3: Complex Roots of the Characteristic Equation

Big Idea: When there are complex roots to the characteristic equation, each of the fundamental solutions will have a sinusoidal factor.

Euler’s Formula:

e^(it) = cos(t) + i·sin(t)

Notes:

• The complex solutions from the quadratic formula always come in conjugate pairs.

• If those complex roots are λ ± iμ, then the general solution y = c1e^((λ+iμ)t) + c2e^((λ−iμ)t) can always be written as y = c1e^(λt)cos(μt) + c2e^(λt)sin(μt).

Additional topics from homework:

• An Euler equation is of the form t²y″ + αty′ + βy = 0. It can be solved by making the substitution x = ln(t), which results in the constant coefficient equation d²y/dx² + (α − 1)·dy/dx + βy = 0.

• A general linear homogeneous equation [pic]can be solved using the substitution [pic]if and only if [pic].

Section 3.4: Repeated Roots; Reduction of Order

Big Idea: When there are repeated roots to the characteristic equation, the second of the fundamental solutions is formed by multiplying the first solution by a factor of the independent variable.

Summary of Sections 3.1 – 3.4:

• To solve a second-order linear homogeneous differential equation with constant coefficients:

ay″ + by′ + cy = 0

• Assume a solution of the form y = e^(rt), which leads to a corresponding characteristic equation:

ar² + br + c = 0

• Solve the characteristic equation to find its two roots r1 and r2:

• If the roots are real and unequal, then the general solution is

y = c1e^(r1t) + c2e^(r2t)

• If the roots are complex conjugates λ ± iμ, then the general solution is

y = c1e^(λt)cos(μt) + c2e^(λt)sin(μt)

• If the roots are the same (r1 = r2), then the general solution is

y = c1e^(r1t) + c2te^(r1t)

• If we have a more complicated second-order initial value problem like: y″ + p(t)y′ + q(t)y = 0, y(t0) = y0, y′(t0) = y0′, and we know that y1 and y2 are solutions of the differential equation, then we can be sure that they form a fundamental set of solutions if the Wronskian is nonzero: W(y1, y2) = y1y2′ − y1′y2 ≠ 0

• Reduction of Order (end of 3.4): If we know a solution y1 of the more general second-order differential equation y″ + p(t)y′ + q(t)y = 0, then we can derive the second solution y2 by assuming y2 = v(t)y1(t) and substituting y2 into the original equation, resulting in y1v″ + (2y1′ + py1)v′ = 0, which is a first order equation for v′.
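A minimal sympy sketch of the reduction-of-order step; the equation 2t²y″ + 3ty′ − y = 0 with known solution y1 = 1/t is a standard textbook example used here for illustration.

import sympy as sp

t = sp.symbols('t', positive=True)
v = sp.Function('v')
y1 = 1/t
y2 = v(t)*y1                                 # assume y2 = v(t)*y1(t)
lhs = sp.expand(2*t**2*y2.diff(t, 2) + 3*t*y2.diff(t) - y2)
print(lhs)                                   # 2*t*v'' - v': first order in v'
print(sp.dsolve(sp.Eq(lhs, 0), v(t)))        # v(t) = C1 + C2*t**(3/2), so y2 ~ sqrt(t)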

Section 3.5: Nonhomogeneous Equations; Method of Undetermined Coefficients

Big idea: One way to solve y″ + p(t)y′ + q(t)y = g(t) is to “guess” a particular solution that has the same form as g(t), then work out the value(s) of the coefficient(s) that make that guess work.

Theorem 3.5.1: The Difference of Nonhomogeneous Solutions Is the Homogeneous Solution

If Y1 and Y2 are two solutions of the nonhomogeneous equation y″ + p(t)y′ + q(t)y = g(t), then their difference Y1 − Y2 is a solution of the corresponding homogeneous equation y″ + p(t)y′ + q(t)y = 0. If, in addition, y1 and y2 are a fundamental set of solutions of the homogeneous equation, then

Y1(t) − Y2(t) = c1y1(t) + c2y2(t),

for certain constants c1 and c2.

Theorem 3.5.2: General Solution of a Nonhomogeneous Equation

The general solution of the nonhomogeneous equation y″ + p(t)y′ + q(t)y = g(t) can be written in the form y = c1y1(t) + c2y2(t) + Y(t), where y1 and y2 are a fundamental set of solutions of the corresponding homogeneous equation, c1 and c2 are arbitrary constants, and Y is some specific solution of the nonhomogeneous equation.

Method of Undetermined Coefficients:

• When g(t) is an exponential, sinusoid, or polynomial, make a guess for the particular solution that is the same exponential, sinusoids of the same frequency, or polynomial of the same degree, except with arbitrary coefficients that you determine by substituting your guess into the nonhomogeneous equation.

• This technique works because derivatives of these types of functions result in the same type of function.

• If the form for the particular solution replicates any terms of the homogeneous solution, then factors of t must be applied until there is no replication.

|Forms for the Particular Solution of ay″ + by′ + cy = g(t) for Assorted g(t): |

|g(t) = Pn(t) (i.e., a polynomial of degree n) |Y(t) = An·t^n + … + A1·t + A0 (i.e., a polynomial of degree n, but with different coefficients |

| |than Pn) |

| |Exception #1: r = 0 is a root of the homogeneous equation → |

| |Y(t) = t·(An·t^n + … + A0) |

| |Exception #2: r = 0 is a double root of the homogeneous equation → |

| |Y(t) = t²·(An·t^n + … + A0) |

|g(t) = sin(ωt) or cos(ωt) |Y(t) = A·cos(ωt) + B·sin(ωt) |

| |Exception: r = ±iω are roots of the homogeneous equation → |

| |Y(t) = t·[A·cos(ωt) + B·sin(ωt)] |

|g(t) = e^(kt) |Y(t) = A·e^(kt) |

| |Exception #1: r = k is a root of the homogeneous equation → |

| |Y(t) = A·t·e^(kt) |

| |Exception #2: r = k is a double root of the homogeneous equation → |

| |Y(t) = A·t²·e^(kt) |

|g(t) = Pn(t)·e^(kt) |Y(t) = (An·t^n + … + A0)·e^(kt) (i.e., the product of a polynomial of degree n, and an |

| |exponential). |

| |Exception #1: r = k is a root of the homogeneous equation → |

| |multiply by t |

| |Exception #2: r = k is a double root of the homogeneous equation → |

| |multiply by t² |

|g(t) = e^(kt)·sin(ωt) or e^(kt)·cos(ωt) |Y(t) = e^(kt)·[A·cos(ωt) + B·sin(ωt)] |

| |Exception #1: r = k ± iω are roots of the homogeneous equation → |

| |multiply by t |

|g(t) = Pn(t)·sin(ωt) or Pn(t)·cos(ωt) |Y(t) = (An·t^n + … + A0)·cos(ωt) + (Bn·t^n + … + B0)·sin(ωt) (i.e., the sum of the products of distinct polynomials of degree |

| |n with sinusoids). |

| |Exception: r = ±iω are roots of the homogeneous equation → |

| |multiply by t |

|g(t) = Pn(t)·e^(kt)·sin(ωt) or Pn(t)·e^(kt)·cos(ωt) |Y(t) = e^(kt)·[(An·t^n + … + A0)·cos(ωt) + (Bn·t^n + … + B0)·sin(ωt)] |

| |Exception #1: r = k ± iω are roots of the homogeneous equation → |

| |multiply by t |

Section 3.6: Variation of Parameters

Big Idea: The method of variation of parameters works by assuming a solution that is of the form of a sum of the products of arbitrary functions and the fundamental homogeneous solutions.

The basic idea behind the method of variation of parameters, due to Lagrange, is to find two fundamental solutions y1 and y2 of a corresponding homogeneous equation, then assume that the solution of the nonhomogeneous equation is:

y = u1(t)y1(t) + u2(t)y2(t)

with the additional criterion that u1′(t)y1(t) + u2′(t)y2(t) = 0.

Theorem 3.6.1: Solution of a Nonhomogeneous Second-Order Equation

If the functions p, q, and g are continuous on an open interval I, and if the functions y1 and y2 are a fundamental set of solutions of the homogeneous equation y″ + p(t)y′ + q(t)y = 0 corresponding to the nonhomogeneous equation y″ + p(t)y′ + q(t)y = g(t), then a particular solution is:

Y(t) = −y1(t)·∫[t0,t] y2(s)g(s)/W(y1, y2)(s) ds + y2(t)·∫[t0,t] y1(s)g(s)/W(y1, y2)(s) ds,

where t0 is any conveniently chosen point in I. The general solution is

y = c1y1(t) + c2y2(t) + Y(t).
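A minimal sympy sketch of the formula in Theorem 3.6.1; y″ + y = tan(t) (so y1 = cos t, y2 = sin t, and W = 1) is a standard example where undetermined coefficients fails.

import sympy as sp

t = sp.symbols('t')
y1, y2, g = sp.cos(t), sp.sin(t), sp.tan(t)
W = sp.simplify(y1*y2.diff(t) - y1.diff(t)*y2)           # Wronskian = 1
Y = -y1*sp.integrate(y2*g/W, t) + y2*sp.integrate(y1*g/W, t)
print(sp.simplify(Y))                                    # particular solution
print(sp.simplify(Y.diff(t, 2) + Y - g))                 # should print 0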

Section 3.7: Mechanical and Electrical Vibrations

Big Idea: Second order differential equations with constant coefficients provide very good models for oscillating systems, like a mass hanging from a spring or a series RLC circuit.

mu″ + γu′ + ku = F(t) (spring-mass system: m = mass, γ = damping coefficient, k = spring constant)

LQ″ + RQ′ + Q/C = E(t) (series RLC circuit)

Recall: A·cos(ω0t) + B·sin(ω0t) = R·cos(ω0t − δ), where R = √(A² + B²) and tan(δ) = B/A.

Resonant (angular) frequency: ω0 = √(k/m)

Frequency: f = ω0/(2π)

Period: T = 2π/ω0

Quasifrequency of the damped system: μ = √(4km − γ²)/(2m)

Section 3.8: Forced Vibrations

Big Idea: A damped spring-mass (or circuit) system described in section 3.7 that is driven by a sinusoidal force will eventually settle into a steady oscillation with the same frequency as the driving force. The amplitude of this steady state solution increases as the drive frequency gets closer to the natural frequency of the system, and as the damping decreases. An undamped spring-mass system that is driven by a sinusoidal force results in an oscillation at the average of the two frequencies, modulated by a sinusoidal envelope at half the difference of the frequencies.

Forced Vibrations with Damping

In this section, we will restrict our discussion to the case where the forcing function is a sinusoid. Thus, we can make some general statements about the solution:

The equation of motion with damping will be given by:

mu″ + γu′ + ku = F0·cos(ωt)

Its solution will be of the form:

u = c1u1(t) + c2u2(t) + A·cos(ωt) + B·sin(ωt) = uc(t) + U(t)

Notes:

• The homogeneous solution uc(t) → 0 as t → ∞, which is why it is called the “transient solution.”

• The constants c1 and c2 of the transient solution are used to satisfy given initial conditions.

• The particular solution U(t) = A·cos(ωt) + B·sin(ωt) is all that remains after the transient solution dies away, and is a steady oscillation at the same frequency as the driving function. That is why it is called the “steady state solution,” or the “forced response.”

• The coefficients A and B must be determined by substitution into the differential equation.

• If we replace A·cos(ωt) + B·sin(ωt) with R·cos(ωt − δ), then R = √(A² + B²), cos(δ) = A/R, sin(δ) = B/R, and tan(δ) = B/A. (See scanned notes at end for derivation)

• Note that as ω → 0, δ → 0 (mass moves in phase with the drive).

• Note that when ω = ω0, δ = π/2.

• Note that as ω → ∞, δ → π (mass is out of phase with drive).



• The amplitude of the steady state solution can be written as a function of all the parameters of the system:

R = F0 / √(m²(ω0² − ω²)² + γ²ω²)

• Notice that Rk/F0 is dimensionless (but proportional to the amplitude of the motion), since F0/k is the distance a force of F0 would stretch a spring with spring constant k.

• Notice that ω/ω0 is dimensionless…

• Note that as ω → 0, Rk/F0 → 1.

• Note that as ω → ∞, Rk/F0 → 0 (i.e., the drive is so fast that the system cannot respond to it and so it remains stationary).

• The frequency that generates the largest amplitude response is:

ωmax² = ω0²·(1 − γ²/(2mk))

• Plugging this value of the frequency into the amplitude formula gives us:

Rmax = F0 / (γω0·√(1 − γ²/(4mk)))

• If γ² > 2mk, then the maximum value of R occurs for ω = 0.

• Resonance is the name for the phenomenon when the amplitude grows very large because the damping is relatively small and the drive frequency is close to the undriven frequency of oscillation of the system.

[pic]

Forced Vibrations Without Damping

The equation of motion of an undamped forced oscillator is:

mu″ + ku = F0·cos(ωt)

When ω ≠ ω0 (non-resonant case), the solution is of the form:

u = c1·cos(ω0t) + c2·sin(ω0t) + [F0/(m(ω0² − ω²))]·cos(ωt)

When ω = ω0 (resonant case), the solution is of the form:

u = c1·cos(ω0t) + c2·sin(ω0t) + [F0/(2mω0)]·t·sin(ω0t)

For the non-resonant case with initial conditions u(0) = 0 and u′(0) = 0, u = [F0/(m(ω0² − ω²))]·(cos(ωt) − cos(ω0t)), which can be written as u = [2F0/(m(ω0² − ω²))]·sin((ω0 − ω)t/2)·sin((ω0 + ω)t/2).

Section 4.1: General Theory of nth Order Linear Equations

Usual form of an nth order linear differential equation:

P0(t)y^(n) + P1(t)y^(n−1) + … + Pn−1(t)y′ + Pn(t)y = G(t)

Assumptions:

• The functions P0, …, Pn and G are continuous and real-valued on some interval I: α < t < β, and P0(t) ≠ 0 for any t in I.

Linear differential operator form:

L[y] = y^(n) + p1(t)y^(n−1) + … + pn−1(t)y′ + pn(t)y = g(t), where pk(t) = Pk(t)/P0(t) and g(t) = G(t)/P0(t)

Notes:

• Solving an nth order equation ostensibly requires n integrations

• This implies n constants of integration

• Also implies n initial conditions to completely specify an IVP:

o y(t0) = y0, y′(t0) = y0′, …, y^(n−1)(t0) = y0^(n−1)

Theorem 4.1.1: Existence and Uniqueness Theorem

(for nth-Order Linear Differential Equations)

If the functions p1, …, pn and g are continuous on the open interval I, then there exists exactly one solution y = φ(t) of the differential equation L[y] = g(t) that also satisfies the initial conditions y(t0) = y0, y′(t0) = y0′, …, y^(n−1)(t0) = y0^(n−1). This solution exists throughout the interval I.

The Homogeneous Equation:

L[y] = y^(n) + p1(t)y^(n−1) + … + pn(t)y = 0

Notes:

• If the functions y1, …, yn are solutions, then so is a linear combination of them: y = c1y1(t) + c2y2(t) + … + cnyn(t)

• To satisfy the initial conditions, we get n equations in n unknowns:

c1y1(t0) + … + cnyn(t0) = y0
c1y1′(t0) + … + cnyn′(t0) = y0′
…
c1y1^(n−1)(t0) + … + cnyn^(n−1)(t0) = y0^(n−1)

• This system will have a solution for c1, …, cn provided the determinant of the matrix of coefficients is not zero (i.e., Cramer’s Rule again). In other words, the Wronskian W(y1, …, yn) is nonzero, just like for second-order equations.

[pic]

• Note that a slightly modified form of Abel’s Theorem still applies:

W(y1, …, yn)(t) = c·exp(−∫ p1(t) dt)

Theorem 4.1.2:

If the functions p1, …, pn and g are continuous on the open interval I, if the functions y1, …, yn are solutions of L[y] = 0, and if W(y1, …, yn)(t) ≠ 0 for at least one point in I, then every solution of the differential equation can be written as a linear combination of y1, …, yn.

Notes:

• y = c1y1(t) + … + cnyn(t) is called the general solution of L[y] = 0.

• y1, …, yn are said to form a fundamental set of solutions.

Linear Dependence and Independence:

• The functions f1, …, fn are said to be linearly dependent on an interval I if there exist constants k1, …, kn, NOT ALL ZERO, such that k1f1(t) + … + knfn(t) = 0 FOR ALL t in I.

• The functions f1, …, fn are said to be linearly independent on I if they are not linearly dependent there.

Theorem 4.1.3:

If y1, …, yn is a fundamental set of solutions to L[y] = 0 on an interval I, then y1, …, yn are linearly independent on I. Conversely, if y1, …, yn are linearly independent solutions of the equation, then they form a fundamental set of solutions on I.

The Nonhomogeneous Equation:

L[y] = y^(n) + p1(t)y^(n−1) + … + pn(t)y = g(t)

Notes:

• If Y1(t) and Y2(t) are solutions of the nonhomogeneous equation, then L[Y1 − Y2] = 0.

• I.e., the difference of any two solutions of the nonhomogeneous equation is a solution of the homogeneous equation.

• So, the general solution of the nonhomogeneous equation is: y = c1y1(t) + … + cnyn(t) + Y(t), where Y(t) is a particular solution of the nonhomogeneous equation.

• We will see that the methods of undetermined coefficients and variation of parameters can be extended from second-order equations to nth-order equations.

Section 4.2: Homogeneous Equations with Constant Coefficients

For equations of the form

a0y^(n) + a1y^(n−1) + … + an−1y′ + any = 0, “it is natural to anticipate that”

y = e^(rt) is a solution for correct values of r. Under this anticipation,

L[e^(rt)] = e^(rt)·Z(r), where the polynomial

Z(r) = a0r^n + a1r^(n−1) + … + an−1r + an

is called the characteristic polynomial, and Z(r) = 0 is called the characteristic equation.

Recall that a polynomial of degree n has n zeros, and thus the characteristic polynomial can be written as:

Z(r) = a0(r − r1)(r − r2)…(r − rn)

Practice: All roots are real and unequal…

→ y = c1e^(r1t) + c2e^(r2t) + … + cne^(rnt)

Practice: Some roots are complex…

• Recall that if a polynomial has real coefficients, then it can be factored into linear and irreducible quadratic factors.

• The irreducible quadratic factors will factor into complex conjugate roots λ ± iμ, which will correspond to solutions of e^(λt)cos(μt) and e^(λt)sin(μt).

Practice: Some roots are repeated…

• If a root r1 is repeated s times, then that repeated root will generate s solutions: e^(r1t), te^(r1t), t²e^(r1t), …, t^(s−1)e^(r1t)

• The same applies if the repeated roots are complex.

Section 4.3: The Method of Undetermined Coefficients

If the nonhomogeneous term g(t) of the linear nth order differential equation with constant coefficients

a0y^(n) + a1y^(n−1) + … + any = g(t)

is of an “appropriate form” (i.e., a sinusoidal, polynomial, or exponential function), then the method of undetermined coefficients can be used to find the particular solution.

Recall that if any term of the proposed particular solution replicates a term of the homogeneous solution, then the entire particular solution must be multiplied by a sufficient number of factors of t to eliminate the replication.

Section 4.4: The Method of Variation of Parameters

Big Idea: The method of variation of parameters for determining a particular solution of the nonhomogeneous nth order linear differential equation L[y] = g(t) is a direct extension of the method for second-order differential equations in section 3.6.

The basic idea is that once the homogeneous solution is determined,

yc(t) = c1y1(t) + c2y2(t) + … + cnyn(t)

Then the particular solution is of the form

Y(t) = u1(t)y1(t) + u2(t)y2(t) + … + un(t)yn(t)

with the (n – 1) additional criteria

y1u1′ + y2u2′ + … + ynun′ = 0, y1′u1′ + … + yn′un′ = 0, …, y1^(n−2)u1′ + … + yn^(n−2)un′ = 0

This results in the system of equations:

y1u1′ + y2u2′ + … + ynun′ = 0
y1′u1′ + y2′u2′ + … + yn′un′ = 0
…
y1^(n−1)u1′ + y2^(n−1)u2′ + … + yn^(n−1)un′ = g(t)

The solution for any one function um′(t) is:

um′(t) = g(t)·Wm(t)/W(t), where W(t) is the Wronskian W(y1, …, yn)(t), and Wm(t) is the determinant obtained from W by replacing the mth column with the column [0, 0, …, 1].

Thus, Y(t) = Σ (m = 1 to n) ym(t)·∫[t0,t] g(s)Wm(s)/W(s) ds.

Section 5.1: Review of Power Series

1. Definition of convergence of a power series: A power series Σ an(x − x0)^n is said to converge at a point x if the limit of its partial sums, lim (m→∞) Σ (n = 0 to m) an(x − x0)^n, exists for that x.

2. Definition of absolute convergence of a power series: A power series Σ an(x − x0)^n is said to converge absolutely at a point x if Σ |an(x − x0)^n| converges.

a. Absolute convergence implies convergence…

3. Ratio test: If, for a fixed value of x, lim (n→∞) |an+1(x − x0)^(n+1)| / |an(x − x0)^n| = |x − x0|·lim (n→∞) |an+1/an| = |x − x0|·L, then the power series converges absolutely at that value of x if |x − x0|·L < 1, and diverges if |x − x0|·L > 1. If |x − x0|·L = 1, then the test is inconclusive.

4. If Σ an(x − x0)^n converges at x = x1, it converges absolutely for |x − x0| < |x1 − x0|, and if it diverges at x = x1, it diverges for |x − x0| > |x1 − x0|.

5. Radius and interval of convergence: The radius of convergence ρ is a nonnegative number such that Σ an(x − x0)^n converges absolutely for |x − x0| < ρ and diverges for |x − x0| > ρ.

a. Series that converge only when x = x0 are said to have ρ = 0.

b. Series that converge for all x are said to have ρ = ∞.

c. If ρ > 0, then the interval of convergence of the series is |x − x0| < ρ.

Given that f(x) = Σ an(x − x0)^n and g(x) = Σ bn(x − x0)^n converge for |x − x0| < ρ…

6. Sum of series: f(x) ± g(x) = Σ (an ± bn)(x − x0)^n

7. Product and Quotient of series:

f(x)g(x) = Σ cn(x − x0)^n, where cn = a0bn + a1bn−1 + … + anb0

To do a quotient f(x)/g(x) = Σ dn(x − x0)^n, can write as a multiplication and equate terms:

f(x) = g(x)·Σ dn(x − x0)^n

OR can do long division…

8. Derivatives of a series:

f′(x) = Σ n·an(x − x0)^(n−1)

f″(x) = Σ n(n − 1)·an(x − x0)^(n−2)

9. Taylor series:

Σ [f^(n)(x0)/n!]·(x − x0)^n is the Taylor series for a function f(x) about the point x = x0.

10. Equality of series: every corresponding term is equal…

11. Analytic functions: have a convergent Taylor series with non-zero radius of convergence about some point x = x0.

12. Shift of index of Summation…

Section 5.2: Series Solutions Near an Ordinary Point, Part I

Will consider homogeneous equations of the form P(x)y″ + Q(x)y′ + R(x)y = 0 where the coefficients P, Q, and R are polynomials, like:

The Bessel equation: x²y″ + xy′ + (x² − ν²)y = 0

The Legendre equation: (1 − x²)y″ − 2xy′ + α(α + 1)y = 0

Ordinary point: a point x0 such that P(x0) ≠ 0. → p(x) = Q(x)/P(x) and q(x) = R(x)/P(x) are continuous near x0.

Singular point: a point x0 such that P(x0) = 0. → p(x) or q(x) becomes unbounded (See sections 5.4 – 5.7)

To solve a differential equation near an ordinary point using a power series technique:

• Assume a power series form of the solution y = Σ an(x − x0)^n, which converges in the interval |x − x0| < ρ

• Plug the power series into the equation and equate like terms

• Obtain a recurrence relation for the coefficients of higher-order terms in terms of earlier coefficients.

• If possible, try to find the general term, which is an explicit function for the coefficients in terms of the index of summation.

• Write the final answer in terms of those coefficients.

• Check the radius of convergence.
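A minimal Python sketch of the recurrence-relation step; the Airy equation y″ − xy = 0 about the ordinary point x0 = 0 is used as the example, where substituting y = Σ an·x^n gives a2 = 0 and an+2 = an−1/((n + 2)(n + 1)).

from fractions import Fraction

a = {0: Fraction(1), 1: Fraction(0), 2: Fraction(0)}  # choose y(0) = 1, y'(0) = 0
for n in range(1, 10):
    a[n + 2] = a[n - 1] / ((n + 2)*(n + 1))           # recurrence from equating like terms
print({k: str(a[k]) for k in sorted(a)})
# nonzero coefficients at x^0, x^3, x^6, ...: 1, 1/6, 1/180, ...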

Section 5.3: Series Solutions Near an Ordinary Point, Part II

To justify the statement that a solution of y″ + p(x)y′ + q(x)y = 0 can be written as y = Σ an(x − x0)^n, we must be able to compute φ^(m)(x0) for any order derivative m based only on the information given by the IVP.

Note:

So it seems that p and q need to at least be infinitely differentiable at x0, but in addition they also need to be analytic at x0. In other words, they must have Taylor series expansions that converge in some interval about x0.

Theorem 5.3.1 (due to Fuchs)

If x0 is an ordinary point of the differential equation P(x)y″ + Q(x)y′ + R(x)y = 0 (i.e., p = Q/P and q = R/P are analytic at x0), then its general solution is

y = Σ an(x − x0)^n = a0y1(x) + a1y2(x),

where a0 and a1 are arbitrary, and y1 and y2 are two power series solutions that are analytic at x0. The solutions y1 and y2 form a fundamental set of solutions. Also, the radius of convergence of y1 and y2 is at least as large as the minimum of the radii of convergence of p and q.

Note:

From the theory of complex-valued rational expressions, it turns out that radius of convergence of a power series of a rational expression about a point x0 is the distance from x0 to the nearest zero of the (complex-valued) denominator.

Section 5.4: Euler Equations; Regular Singular Points

P(x)y″ + Q(x)y′ + R(x)y = 0

Singular points are x = x0 such that P(x0) = 0.

Euler equations have singular points at x = 0:

x²y″ + αxy′ + βy = 0

To solve Euler equations, make an assumption for the solution of y = xr.

When the roots are real and distinct, then the two fundamental solutions are obtained.

When the roots are repeated, multiply the first fundamental solution by ln(x) to obtain the second fundamental solution.

When the roots are complex, r = λ ± iμ, the fundamental solutions are x^λ·cos(μ·ln(x)) and x^λ·sin(μ·ln(x)).

In summary, the solutions of x²y″ + αxy′ + βy = 0 are (for x > 0):

y = c1x^(r1) + c2x^(r2) (real, distinct roots); y = (c1 + c2·ln(x))·x^(r1) (repeated root); y = x^λ·[c1cos(μ·ln(x)) + c2sin(μ·ln(x))] (complex roots)

For negative x values, one can make the substitution x = −ξ with ξ > 0, and show that, for any x ≠ 0, the solutions are:

y = c1|x|^(r1) + c2|x|^(r2); y = (c1 + c2·ln|x|)·|x|^(r1); y = |x|^λ·[c1cos(μ·ln|x|) + c2sin(μ·ln|x|)]

We do not have a general theory of how to handle any possible singularity of P(x)y″ + Q(x)y′ + R(x)y = 0. So, for the next few sections, we will restrict the discussion to the power series solution of this type of equation near regular singular points.

A regular singular point at x = x0 is a singular point with the additional restrictions that

lim (x→x0) (x − x0)·Q(x)/P(x) is finite, and lim (x→x0) (x − x0)²·R(x)/P(x) is finite.

Section 5.5: Series Solutions Near a Regular Singular Point, Part I

Big Idea: According to Frobenius, it is valid to assume a series solution of the form y = (x − x0)^r·Σ an(x − x0)^n to a second-order linear differential equation near a regular singular point x = x0.

Recall:

If x = x0 is a regular singular point of the second order linear equation

P(x)y″ + Q(x)y′ + R(x)y = 0,

then (taking x0 = 0 for simplicity) x·p(x) = x·Q(x)/P(x) and x²·q(x) = x²·R(x)/P(x) are analytic at x = 0.

This means x·p(x) = Σ pn·x^n and x²·q(x) = Σ qn·x^n are convergent for some radius about x0.

Thus, we can write the original equation as: x²y″ + x·[x·p(x)]y′ + [x²·q(x)]y = 0

If all the coefficients pn and qn are zero except for p0 and q0, then the equation reduces to x²y″ + p0xy′ + q0y = 0, which is an Euler equation. When the higher coefficients pn and qn are not all zero, this is called the corresponding Euler equation, and the roots of this corresponding Euler equation play a role in the solution called “the exponents at the singularity.”

Since the equation we are trying to solve looks like an equation with “Euler coefficients” times power series, we will look for a solution that is of the form of an “Euler solution” times a power series:

y = x^r·Σ an·x^n = Σ an·x^(r+n)

As part of the solution, we must determine:

• The values of r that make this a valid solution.

• The recurrence relation for the coefficients an.

• The radius of convergence.

The theory behind a solution of this form is due to Frobenius.

To solve a linear second order equation near a regular singular point using the method of Frobenius:

• Identify singular points and verify they are regular.

• Assume a solution of y = Σ an·x^(r+n) and its derivatives: y′ = Σ (r + n)·an·x^(r+n−1), y″ = Σ (r + n)(r + n − 1)·an·x^(r+n−2)

• Substitute the assumed solution and its derivatives into the given equation.

o You may have to re-write coefficients in terms of powers of x…

• Shift indices so that all series solutions have x^(r+n) in the general term.

• “Spend” any terms needed so that all series start at the same index value.

o This should result in a term that looks like the characteristic equation of the corresponding Euler equation.

o This is called the indicial equation.

o The roots of the indicial equation are called the exponents of the singularity.

o They are the same as the roots of the corresponding Euler equation.

• Set the coefficient of x^(r+n) to zero to get the recurrence relation.

• Use the recurrence relation with each exponent to get the general term for each of the two solutions for each exponent.

o If the exponents of the singularity are equal or differ by an integer, then it is only valid to get the series solution for the larger root. (What to do for the second solution will be covered in 5.6).

• Compute the radius of convergence for each solution.

5.6: Series Solutions Near a Regular Singular Point, Part II

To solve a linear second order equation near a regular singular point using the method of Frobenius REGARDLESS OF THE EXPONENTS OF THE INDICIAL EQUATION:

• Identify singular points and verify they are regular.

• Find the exponents of the singularity by solving the corresponding Euler equation r(r − 1) + p0r + q0 = 0, where p0 = lim (x→0) x·p(x) and q0 = lim (x→0) x²·q(x), or using the indicial equation.

• Assume a solution of y = Σ an·x^(r+n).

• Substitute the assumed solution and its derivatives into the given equation.

• Shift indices so that all series solutions have x^(r+n) in the general term.

• “Spend” any terms needed so that all series start at the same index value.

o This should result in a term with a factor that looks like the characteristic equation of the corresponding Euler equation; this is the indicial equation.

• Set the coefficients of x^(r+n) to zero to get the recurrence relation.

• Get the second solution using the theorem below…

Theorem 5.6.1: General Solution of a Second Order Equation with Real Exponents of the Singularity near a Regular Singular Point

Consider the differential equation x²y″ + x·[x·p(x)]y′ + [x²·q(x)]y = 0, where x = 0 is a regular singular point. Then x·p(x) and x²·q(x) are analytic at x = 0 with convergent power series expansions x·p(x) = Σ pn·x^n and x²·q(x) = Σ qn·x^n for |x| < ρ, where ρ > 0 is the minimum of the radii of convergence for x·p(x) and x²·q(x). Let r1 and r2 be the roots of the indicial equation F(r) = r(r − 1) + p0r + q0 = 0, with r1 ≥ r2 if r1 and r2 are real. Then in either the interval 0 < x < ρ or −ρ < x < 0, there exists a solution of the form

y1(x) = |x|^(r1)·[1 + Σ (n = 1 to ∞) an(r1)·x^n]

where the an(r1) are given by the recurrence relation

F(r + n)·an + Σ (k = 0 to n−1) ak·[(r + k)·pn−k + qn−k] = 0

with a0 = 1 and r = r1. There are three cases for the second solution:

• If r1 − r2 is not zero or a positive integer, then in either the interval 0 < x < ρ or −ρ < x < 0, there exists a second solution of the form

y2(x) = |x|^(r2)·[1 + Σ (n = 1 to ∞) an(r2)·x^n].

The an(r2) are determined by the same recurrence relation as the an(r1), with a0 = 1 and r = r2. These power series solutions converge for at least |x| < ρ.

• If r1 = r2, then the second solution is of the form

y2(x) = y1(x)·ln|x| + |x|^(r1)·Σ (n = 1 to ∞) bn(r1)·x^n.

• If r1 − r2 = N, a positive integer, then the second solution is of the form

y2(x) = a·y1(x)·ln|x| + |x|^(r2)·[1 + Σ (n = 1 to ∞) cn(r2)·x^n].

The coefficients an(r1), bn(r1), cn(r2) and the constant a can be determined by substituting the form of the series solution for y2 into the original differential equation. The constant a may turn out to be zero. Each of these series converges at least for |x| < ρ and defines an analytic function near x = 0.

In all three cases, the two solutions form a fundamental set of solutions.

Section 5.7: Bessel’s Equation

Bessel’s Equation: x²y″ + xy′ + (x² − ν²)y = 0

• x = 0 is a regular singular point

• The roots of the indicial equation are ±ν.

• The value of ν is called the “order” of the equation.

• The first solution for a given value of ν is called the “Bessel function of the first kind of order ν,” and is denoted by Jν(x).

• The second solution for a given value of ν is called the “Bessel function of the second kind of order ν,” and is denoted by Yν(x).

Bessel Equation of Order Zero (i.e., ν = 0): x²y″ + xy′ + x²y = 0

• The roots of the indicial equation are r1 = r2 = 0.

• a2m = −a2m−2/(2m)² → y1(x) = J0(x) = Σ (m = 0 to ∞) (−1)^m·x^(2m) / (2^(2m)·(m!)²)

[pic]

• J0(x) → 1 as x → 0.

• The second solution has the form y2(x) = J0(x)·ln(x) + Σ (m = 1 to ∞) (−1)^(m+1)·Hm·x^(2m) / (2^(2m)·(m!)²).

• where:

o Hm = 1 + 1/2 + 1/3 + … + 1/m; i.e., a partial sum of the harmonic series up to m.

• The traditional Bessel function of the second kind of order zero is a linear combination of J0(x) and y2(x):

• Y0(x) = (2/π)·[y2(x) + (γ − ln 2)·J0(x)]

• where:

o γ is the “Euler-Mascheroni” constant: γ = lim (n→∞) (Hn − ln n) ≈ 0.5772

[pic]

• J0(x) ≈ √(2/(πx))·cos(x − π/4) as x → ∞.

• Y0(x) ≈ √(2/(πx))·sin(x − π/4) as x → ∞.
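A minimal scipy sketch for evaluating these functions numerically (scipy.special provides j0 and y0).

import numpy as np
from scipy.special import j0, y0

x = np.array([0.5, 1.0, 5.0, 10.0])
print(j0(x))   # J0(x) -> 1 as x -> 0
print(y0(x))   # Y0(x) -> -infinity as x -> 0+
# both decay like sqrt(2/(pi*x)) times a sinusoid for large x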

Bessel Equation of Order One-Half (i.e., ν = ½ ): x²y″ + xy′ + (x² − ¼)y = 0

• The roots of the indicial equation are r1 = +½, r2 = −½.

• [pic]

• By convention, J1/2(x) = √(2/(πx))·sin(x), x > 0.

[pic]

• [pic]

• The Bessel function of the second kind of order one-half is:

• [pic], x > 0.

Bessel Equation of Order One (i.e., ν = 1): x²y″ + xy′ + (x² − 1)y = 0

• The roots of the indicial equation are r1 = 1, r2 = −1.

• [pic] (Note: J1(x) = ½·y1(x), so that the series can be written as powers of (x/2)…)

• J1(x) → 0 as x → 0.

• [pic].

• [pic], x > 0

• The traditional Bessel function of the second kind of order one is:

• [pic]

• [pic].

• [pic].

Bessel Equations of Higher Positive Integer Order (i.e., ν = positive integer, ν > 1): x²y″ + xy′ + (x² − ν²)y = 0

• The roots of the indicial equation are r1 = +ν, r2 = −ν.

• Jν(x) = (x/2)^ν·Σ (m = 0 to ∞) (−1)^m·(x/2)^(2m) / (m!·(m + ν)!)

• Jν(x) → 0 as x → 0.

• [pic].

• Weisstein, Eric W. "Bessel Function of the First Kind." From MathWorld--A Wolfram Web Resource.

• [pic]

• [pic].

• [pic].


Section 6.1: Definition of the Laplace Transform

Review of Improper Integrals (Calc 2):

If f is continuous on the interval [a, ∞), then the improper integral ∫[a,∞) f(t) dt is defined as:

∫[a,∞) f(t) dt = lim (A→∞) ∫[a,A] f(t) dt.

If the limit exists as A → ∞, then the integral is said to converge. Otherwise, the integral diverges.

Piecewise Continuous Functions:

A function f is piecewise continuous on an interval α ≤ t ≤ β if the interval can be partitioned by n points α = t0 < t1 < … < tn = β such that

• f is continuous on each open subinterval ti−1 < t < ti

• the limits of f as the endpoints of each subinterval are approached from within the subinterval are finite.

Upshot: f is continuous everywhere except at a finite number of jump discontinuities.

[pic]

To integrate a piecewise continuous function:

∫[a,b] f(t) dt = ∫[a,t1] f(t) dt + ∫[t1,t2] f(t) dt + … + ∫[tn−1,b] f(t) dt

It can be difficult to tell if an improper integral of a given piecewise continuous function converges when the antiderivative can’t be written in terms of elementary functions…

Theorem 6.1.1: Integral Comparison Test for Convergence and Divergence

If f is piecewise continuous for t ≥ a, if |f(t)| ≤ g(t) when t ≥ M for some positive constant M, and if ∫[M,∞) g(t) dt converges, then ∫[a,∞) f(t) dt also converges. However, if f(t) ≥ g(t) ≥ 0 for t ≥ M, and if ∫[M,∞) g(t) dt diverges, then ∫[a,∞) f(t) dt also diverges.

Integral Transforms:

An integral transform, in general, is a relation of the form F(s) = ∫[α,β] K(s, t)f(t) dt, where K is a given function, called the kernel of the transformation. The limits α and β need not be finite numbers. Note that an integral transform transforms a function f into another function F, called the “transform of f.”

The Laplace Transform:

…has a kernel of K(s, t) = e^(−st).

…is defined by L{f(t)} = F(s) = ∫[0,∞) e^(−st)f(t) dt

…we will use the Laplace transform to:

• Transform an initial value problem for an unknown function f in the t domain to an unknown function F in the s domain.

• Solve the resulting algebraic problem to find F.

• Recover f from F by “inverting the transform.”

Theorem 6.1.2: Criteria for a Laplace Transform to Exist

Suppose that:

1. f is piecewise continuous on the interval 0 ≤ t ≤ A for any positive A.

2. |f(t)| ≤ Ke^(at) when t ≥ M, for some real, positive constants K and M, and a real a.

Then the Laplace transform L{f(t)} = F(s) exists for s > a.

Functions that satisfy the hypotheses of theorem 6.1.2 are described as being “of exponential order” as t → ∞. f(t) = e^(t²) is not of exponential order.

Section 6.2: Solution of Initial Value Problems

Theorem 6.2.1: Laplace Transform of the Derivative of a Function

Suppose that f is continuous and that f′ is piecewise continuous on any interval 0 ≤ t ≤ A, and that there exist constants K, a, and M such that |f(t)| ≤ Ke^(at) for t ≥ M. Then L{f′(t)} exists for

s > a and is given by:

L{f′(t)} = sL{f(t)} − f(0).

Laplace Transform of the second derivative of a function:

L{f″(t)} = s²L{f(t)} − s·f(0) − f′(0)

Theorem 6.2.2: Laplace Transform of the nth Derivative of a Function

…. Then L{f^(n)(t)} exists for s > a and is given by:

L{f^(n)(t)} = s^n·L{f(t)} − s^(n−1)·f(0) − … − s·f^(n−2)(0) − f^(n−1)(0).

( To solve an initial value problem with a Laplace transform:

• Transform the differential equation.

• Solve the resulting algebraic equation for Y(s) = L{y}.

• Look up each term in a table of Laplace transforms (page 317 or the table on the next page); the actual solution is the inverse of each term of Y(s). Note that some algebra will most likely need to be done to get the transform to look like sums of the transforms in the table below.
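A minimal sympy sketch of the transform/invert workflow; the transform pair shown (Y(s) = 1/(s² + 1) inverting to sin(t)) corresponds to the hypothetical IVP y″ + y = 0, y(0) = 0, y′(0) = 1.

import sympy as sp

t, s = sp.symbols('t s', positive=True)
print(sp.laplace_transform(sp.exp(2*t), t, s))   # (1/(s - 2), 2, True): a table entry
Y = 1/(s**2 + 1)                                 # transformed solution of the IVP
print(sp.inverse_laplace_transform(Y, s, t))     # sin(t)*Heaviside(t)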

|Laplace Transform Table |

|f(t) = L^(-1){F(s)} |F(s) = L{f(t)} |

|1 |1/s, s > 0 |

|e^(at) |1/(s − a), s > a |

|t^n, n = positive integer |n!/s^(n+1), s > 0 |

|t^p, p > -1 |Γ(p + 1)/s^(p+1), s > 0 |

|sin(at) |a/(s² + a²), s > 0 |

|cos(at) |s/(s² + a²), s > 0 |

|sinh(at) |a/(s² − a²), s > |a| |

|cosh(at) |s/(s² − a²), s > |a| |

|e^(at)·sin(bt) |b/((s − a)² + b²), s > a |

|e^(at)·cos(bt) |(s − a)/((s − a)² + b²), s > a |

|t^n·e^(at), n = positive integer |n!/(s − a)^(n+1), s > a |

|uc(t) |e^(−cs)/s, s > 0 |

|uc(t)·f(t − c) |e^(−cs)·F(s) |

|e^(ct)·f(t) |F(s − c) |

|f(ct) |(1/c)·F(s/c), c > 0 |

|∫[0,t] f(t − τ)g(τ) dτ |F(s)·G(s) |

|δ(t − c) |e^(−cs) |

|f^(n)(t) |s^n·F(s) − s^(n−1)·f(0) − … − f^(n−1)(0) |

|(−t)^n·f(t) |F^(n)(s) |

Section 6.3: Step Functions

Definition of the unit step function (or Heaviside function):

uc(t) = 0 for t < c, and uc(t) = 1 for t ≥ c, where c ≥ 0.

The Laplace transform of uc(t) is L{uc(t)} = e^(−cs)/s, s > 0.

Theorem 6.3.1: Laplace transform of a horizontally translated function.

If F(s) = L{f(t)} exists for s > a ≥ 0, and if c is a positive constant, then

L{uc(t)f(t − c)} = e^(−cs)·L{f(t)} = e^(−cs)·F(s), s > a.

Conversely, if f(t) = L^(-1){F(s)}, then uc(t)f(t − c) = L^(-1){e^(−cs)F(s)}.

Theorem 6.3.2: Inverse Laplace transform of a horizontally translated transform.

If F(s) = L{f(t)} exists for s > a ≥ 0, and if c is a constant, then

L{e^(ct)f(t)} = F(s − c), s > a + c.

Conversely, if f(t) = L^(-1){F(s)}, then e^(ct)f(t) = L^(-1){F(s − c)}.

Section 6.4: Differential Equations with Discontinuous Forcing Functions

Pertinent relations from section 6.3:

L{uc(t)} = e^(−cs)/s

L{uc(t)f(t − c)} = e^(−cs)F(s) ⇔ uc(t)f(t − c) = L^(-1){e^(−cs)F(s)}

L{e^(ct)f(t)} = F(s − c) ⇔ e^(ct)f(t) = L^(-1){F(s − c)}

Section 6.5: Impulse Functions

Recall from Physics and calc 2 applications that an impulse is the time integral of a force, which results in a change in momentum:

I = ∫[t0,t1] F(t) dt = m·v(t1) − m·v(t0)

In this section, we will examine “the” impulse function, which models a force of a very large magnitude that acts over a very short time.

Let dτ(t) = 1/(2τ) for −τ < t < τ, and dτ(t) = 0 otherwise. Note: lim (τ→0) dτ(t) = 0 for all t ≠ 0.

[pic]

Then the impulse imparted by this force is:

I(τ) = ∫ dτ(t) dt = 1 (for any τ > 0)

Define the Dirac delta function (or an idealized unit impulse function) as:

δ(t) = 0 for t ≠ 0

∫ δ(t) dt = 1

Note: The Dirac delta function can be shifted, which results in a description of an impulse occurring at a later time:

δ(t − t0) = 0 for t ≠ t0

∫ δ(t − t0) dt = 1

The Laplace Transform of the Dirac delta function can be derived by looking once again at the limit of dτ(t):

L{δ(t − t0)} = lim (τ→0) L{dτ(t − t0)} = e^(−st0)

Note:

∫ δ(t − t0)f(t) dt = f(t0)

Section 6.6: The Convolution Integral

Theorem 6.6.1: The Convolution Theorem

If F(s) = L{f(t)} and G(s) = L{g(t)} both exist for s > a ≥ 0, then

H(s) = F(s)G(s) = L{h(t)}, s > a,

where

h(t) = ∫[0,t] f(t − τ)g(τ) dτ = ∫[0,t] f(τ)g(t − τ) dτ.

The function h is known as the convolution of f and g. The two integrals above are known as convolution integrals.
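A minimal sympy sketch checking the theorem; f(t) = e^(−t) and g(t) = sin(t) are hypothetical choices.

import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)
h = sp.integrate(sp.exp(-(t - tau))*sp.sin(tau), (tau, 0, t))  # (f*g)(t)
H = sp.laplace_transform(h, t, s, noconds=True)
F = sp.laplace_transform(sp.exp(-t), t, s, noconds=True)
G = sp.laplace_transform(sp.sin(t), t, s, noconds=True)
print(sp.simplify(H - F*G))   # prints 0: L{f*g} = F(s)G(s)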

Section 7.1: Introduction to Systems of First Order Linear Equations

A second-order or higher differential equation can be converted to a system of first order equations by letting x1 = u, x2 = u′, etc.

General Notes:

• In this chapter, we will examine systems of first-order differential equations:

x1′ = F1(t, x1, x2, …, xn)
x2′ = F2(t, x1, x2, …, xn)
…
xn′ = Fn(t, x1, x2, …, xn)

• A solution of this system on an interval I: α < t < β is a set of n functions x1 = φ1(t), …, xn = φn(t) that are differentiable in I and that satisfy the above system of equations. An IVP is formed when n initial conditions are stated:

x1(t0) = x1^0, x2(t0) = x2^0, …, xn(t0) = xn^0

• Theorem 7.1.1 Existence and Uniqueness of a Solution to a System of First-Order Equations

Let each of the functions F1, …, Fn and the partial derivatives ∂F1/∂x1, …, ∂F1/∂xn, …, ∂Fn/∂x1, …, ∂Fn/∂xn be continuous in an open region R of the t-x1-x2-…-xn space defined by α < t < β, α1 < x1 < β1, …, αn < xn < βn, and let the point (t0, x1^0, x2^0, …, xn^0) be in R. Then there is an interval |t − t0| < h in which there exists a unique solution x1 = φ1(t), …, xn = φn(t) of the system of differential equations that also satisfies the initial conditions.

• In this chapter, the discussion will be restricted to systems where the functions F1, …, Fn are linear functions of x1, …, xn. Thus, the most general system we will examine is:

x1′ = p11(t)x1 + … + p1n(t)xn + g1(t)
x2′ = p21(t)x1 + … + p2n(t)xn + g2(t)
…
xn′ = pn1(t)x1 + … + pnn(t)xn + gn(t)

• If all the g1(t), …, gn(t) are zero, then the system is homogeneous. Otherwise, it is nonhomogeneous.

• Theorem 7.1.2 Existence and Uniqueness of a Solution to a System of Linear First-Order Equations

If each of the functions p11, p12, …, pnn, g1(t), …, gn(t) is continuous on an open interval I: α < t < β, then there exists a unique solution x1 = φ1(t), …, xn = φn(t) of the system that also satisfies the initial conditions. Moreover, the solution exists throughout I.

Section 7.2: Review of Matrices

A matrix is a rectangular array of numbers (called elements) arranged in m rows and n columns:

[pic]

aij is used to denote the element in row i and column j.

(aij) is used to denote the matrix whose generic element is aij.

The transpose of a matrix is a new matrix that is formed by interchanging the rows and columns of a given matrix. The transpose is denoted with a superscript T:

if [pic], then [pic]; if A = (aij), then AT = (aji).

The conjugate of a matrix is a new matrix that is formed by taking the conjugate of every element of a given matrix. The conjugate is denoted with a bar over the matrix name.

if [pic], then [pic]; if [pic], then[pic].

The adjoint of a matrix is the transpose of the conjugate of a given matrix. The adjoint is denoted with a superscript asterisk:

if [pic], then [pic].

Properties of Matrices

1. Equality: Two matrices A and B are equal only if all corresponding elements are equal; i.e., aij = bij for all i and j.

2. Zero: The matrix with zero for every element is denoted as 0.

3. Addition: The sum of two m × n matrices A and B is computed by adding corresponding elements:

[pic]

[pic]

a. Matrix addition is commutative and associative:

[pic]; [pic]

4. Multiplication by a number: The product of a complex number and a matrix is computed by multiplying every element by the number:

[pic]

[pic]

a. Multiplication by a constant obeys the distributive laws

[pic]; [pic]

b. The negative of a matrix is negative one times the matrix:

[pic]

5. Subtraction: The difference of two m × n matrices A and B is computed by subtracting corresponding elements:

[pic]

[pic]. Note: the last step is not required, but it illustrates a simplifying technique that is sometimes used.

6. Multiplication of matrices: Matrix multiplication is only defined when the number of columns in the first factor equals the number of rows in the second factor.

a. The product of an m × p and a p × n matrix is an m × n matrix.

b. If [pic], then element cij is computed by taking the sum of the products of corresponding elements from row i in matrix A and column j of matrix B:

[pic]

c. Matrix multiplication is associative:

[pic]

d. Matrix multiplication is distributive:

[pic]

e. Matrix multiplication is usually NOT commutative:

[pic](usually)

7. Multiplication of vectors:

a. The dot product: x·y = xᵀy = x1y1 + x2y2 + … + xnyn

b. Properties of the dot product:

[pic]; [pic]; [pic]

c. The scalar (inner) product: (x, y) = x1·ȳ1 + x2·ȳ2 + … + xn·ȳn (bars denote complex conjugates)

d. The magnitude, or length, of a vector x can be represented with the inner product: ‖x‖² = (x, x) = Σ |xi|².

e. Note: Two vectors are orthogonal if (x, y) = 0; the (orthogonal) unit vectors are i, j, and k.

f. Properties of the scalar product:

[pic], [pic], [pic], [pic]

8. Identity: The multiplicative identity for matrices, denoted by I, is the square matrix with one in every diagonal element and zero elsewhere. The diagonal runs from the top left to bottom right, only.

[pic]

a. Multiplication by the identity matrix is commutative for square matrices: AI = IA = A.

9. Inverse: For some square matrices, there exists another unique matrix such that their product is the identity. Such matrices are said to be nonsingular, or invertible. Matrices that are not invertible are called noninvertible or singular. The “other unique matrix” is called the (multiplicative) inverse, and is denoted with a superscript -1:

If A is invertible, then AA-1 = A-1A = I.

a. The determinant is a quantity that can be computed for any square matrix, and whose value determines if a matrix is singular or not. A zero determinant means the matrix is noninvertible. The determinant of a matrix A is denoted as:

det(A) = |A|.

b. The determinant of any square matrix can be computed by taking the sum of the products of all the elements and their cofactors from any one row or column of the matrix.

c. The cofactor Cij of a given element aij is the product of the minor Mij of the element and an appropriate factor of -1:

Cij = (-1)i+jMij .

d. The minor Mij of an element aij is the determinant of the matrix obtained by eliminating the ith row and jth column from the original matrix.

e. Thus, a determinant can be computed using a chain of smaller determinants. It is useful to know the following formula for the last step in the chain: for a 2 × 2 matrix, det [[a, b], [c, d]] = ad − bc.

f. If B = A-1, then bij = Cji/det(A), where (Cji) is the transpose of the cofactor matrix of A.

g. This formula is extremely inefficient for computing the inverse. The preferred method for computing the inverse of a matrix A is to form the augmented matrix A | I, and then perform elementary row operations on the augmented matrix until A is transformed to the identity matrix. That will leave I transformed to A-1. This process is called row reduction or Gaussian elimination (see the sketch after this list).

h. The three elementary row operations are:

i. Interchange any two rows.

ii. Multiply any row by a nonzero number.

iii. Add any multiple of one row to another row.

10. Matrix Functions: Are matrices with functions for the elements. They can be integrated and differentiated element-by-element…
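A minimal numpy sketch of the augmented-matrix method from item 9g; no row interchanges are needed for this hypothetical 2 × 2 example, so only scaling and row-addition operations appear.

import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
n = A.shape[0]
M = np.hstack([A, np.eye(n)])           # augmented matrix A | I
for i in range(n):
    M[i] = M[i]/M[i, i]                 # scale row i so the pivot is 1
    for j in range(n):
        if j != i:
            M[j] = M[j] - M[j, i]*M[i]  # clear column i in the other rows
A_inv = M[:, n:]                        # I has been transformed into A^-1
print(A_inv)                            # [[ 3., -1.], [-5., 2.]]
print(A @ A_inv)                        # identity matrix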

Section 7.3: Linear Algebraic Equations; Linear Independence, Eigenvalues, Eigenvectors

Systems of Linear Algebraic Equations

A system of n linear equations in n variables can be written in matrix form as follows:

a11x1 + a12x2 + … + a1nxn = b1, …, an1x1 + an2x2 + … + annxn = bn ⇔ Ax = b

If b = 0, then the system is homogeneous; otherwise, it is nonhomogeneous.

If A is nonsingular (i.e., det(A) ≠ 0), then A-1 exists, and the unique solution to the system is:

x = A-1b

A picture for the 3 × 3 system is:

|[pic] | |

|Solution: (-1, 2, -3) |[pic] |

If A is singular, then either there are no solutions, or there are an infinite number of solutions.

Example pictures for 3 × 3 systems are:

|Three parallel planes → No Solution | |

|[pic] |[pic] |

|Note: Inconsistent systems yield false equations (like 0 = 2 or 0 = 4)| |

|after trying to solve them. | |

|Planes intersect in three parallel lines → No Solution | |

|[pic] |[pic] |

|Note: Inconsistent systems yield a false equation (like 0 = -6) after | |

|trying to solve them. | |

|All three planes intersect along the same line → Infinite number of | |

|solutions |[pic] |

|[pic] | |

|Note: This type of dependent system yields one equation of 0 = 0 after| |

|row operations. | |

|All three planes are the same → Infinite number of solutions | |

|[pic] |[pic] |

|Note: This type of dependent system yields two equations of 0 = 0 | |

|after row operations. | |

If A is singular, then the homogeneous system Ax = 0 will have infinitely many solutions (in addition to the trivial solution).

If A is singular, then the nonhomogeneous system Ax = B will have infinitely many solutions when (B, y) = 0 for all vectors y satisfying A*y = 0 (recall A* is the adjoint of A). These solutions will always take the form x = x(0) + ξ, where x(0) is a particular solution, and ξ is the general form for the corresponding homogeneous solution.

In practice, linear systems are solved by performing Gaussian elimination on the augmented matrix A | B.

Linear Independence

A set of k vectors x(1), …, x(k) is said to be linearly dependent if there exist k complex numbers c1, …, ck, not all zero, such that c1x(1) + … + ckx(k) = 0. This term is used because if it is true that not all the constants are zero, then one of the vectors depends on one or more of the other vectors; e.g., if c1 ≠ 0, then x(1) = −(c2/c1)x(2) − … − (ck/c1)x(k).

On the other hand, if the only values of c1, …, ck that make c1x(1) + … + ckx(k) = 0 be true are c1 = c2 = … = ck = 0, then the vectors x(1), …, x(k) are said to be linearly independent.

The test for linear dependence or independence can be represented with matrix arithmetic:

Consider n vectors with n components. Let xij be the ith component of vector x(j), and let

X = (xij). Then:

Xc = 0 (equivalent to c1x(1) + … + cnx(n) = 0)

If det(X) ≠ 0, then the only solution is c = 0, and thus the system is linearly independent.

If det(X) = 0, then there are nonzero values of ci, and thus the system is linearly dependent.

Note: Frequently, the columns of a matrix A are thought of as vectors.

The columns of A are linearly independent iff det(A) ≠ 0.

If C = AB, it happens to be true that det(C) = det(A)det(B). Thus, if the columns of A and B are linearly independent, then so are the columns of C.

Eigenvalues and Eigenvectors

The equation Ax = y is a linear transformation that maps a given vector x onto a new vector y. Special vectors that map onto multiples of themselves are very important in many applications, because those vectors tend to correspond to “preferred modes” of behavior represented by the vectors. Such vectors are called eigenvectors (German for “proper” vectors), and the multiple for a given eigenvector is called its eigenvalue.

To find eigenvalues and eigenvectors, we start with the definition:

Ax = λx, which can be written as

(A − λI)x = 0, which has nonzero solutions iff

det(A − λI) = 0

The values of λ that satisfy the above determinant equation are the eigenvalues, and those eigenvalues can then be plugged back into the defining equation (A − λI)x = 0 to find the eigenvectors.

You will see that eigenvectors are only determined up to an arbitrary factor; choosing the factor is called normalizing the vector. The most common factor to choose is the one that results in the eigenvector having a length of 1.
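A minimal numpy sketch; the symmetric matrix [[3, 1], [1, 3]], with eigenvalues 4 and 2, is a hypothetical example. numpy returns eigenvectors already normalized to length 1.

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
lam, V = np.linalg.eig(A)        # eigenvalues and unit eigenvectors (columns of V)
print(lam)                       # [4. 2.] (order may vary)
print(V)
print(np.allclose(A @ V[:, 0], lam[0]*V[:, 0]))   # verifies A x = lambda x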

Notes:

• In these examples, you can see that finding the eigenvalues of an n × n matrix involves solving a polynomial equation of degree n, which means that there are always n eigenvalues (counted with multiplicity) for an n × n matrix.

• Also in these two examples, all the eigenvalues were distinct. However, that is not always the case. If a given eigenvalue appears m times as a root of the polynomial equation, then that eigenvalue is said to have algebraic multiplicity m.

• Every eigenvalue will have q linearly independent eigenvectors, where 1 ≤ q ≤ m. The number q is called the geometric multiplicity of the eigenvalue.

• Thus, if each eigenvalue of A is simple (has algebraic multiplicity m = 1), then each eigenvalue also has geometric multiplicity q = 1.

• If λ1 and λ2 are distinct eigenvalues of a matrix A, then their corresponding eigenvectors are linearly independent.

• So, if all the eigenvalues of an n × n matrix are simple, then all its eigenvectors are linearly independent. However, if there are repeated eigenvalues, then there may be fewer than n linearly independent eigenvectors, which will pose complications later on when solving systems of differential equations (which we won’t have time to get to…).

• Symmetric matrices are a subset of Hermitian, or self-adjoint, matrices: A* = A; a real Hermitian matrix satisfies AT = A.

• Hermitian matrices have the following properties:

• All eigenvalues are real.

• There are always n linearly independent eigenvectors, regardless of multiplicities.

• All eigenvectors of distinct eigenvalues are orthogonal.

• If an eigenvalue has algebraic multiplicity m, then it is always possible to choose m mutually orthogonal eigenvectors.

Section 7.4: Basic Theory of Systems of First Order Linear Equations

Section 7.5: Homogeneous Linear Systems with Constant Coefficients

Assume a solution of the form x = ξe^(rt), plug it into the system, then find the eigenvalues for r and the eigenvectors for ξ.
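A minimal numpy sketch of this recipe; A = [[1, 1], [4, 1]], with eigenvalues 3 and −1, is a hypothetical example.

import numpy as np

A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
r, xi = np.linalg.eig(A)         # eigenvalues r and eigenvectors xi (columns)
for k in range(len(r)):
    print(f"solution {k+1}: exp({r[k]:g}*t) * {xi[:, k]}")
# general solution: x(t) = c1*xi1*exp(3t) + c2*xi2*exp(-t)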

Section 7.6: Complex Eigenvalues

If the eigenvalues come in complex conjugate pairs, then the solution will break down into an exponential times vectors with sines and cosines for elements.
