BYU Math



Chapter 4: General Vector Spaces

4.5 Dimension

Problem 5
Find a basis for the solution space of a homogeneous linear system, and determine the dimension of that space.

Problem 9
Determine the dimension of certain subspaces of square matrices (symmetric, diagonal, upper triangular, etc.).

Problem 15
Given a linearly independent set of vectors, enlarge it to a basis.

Problem 25
Explain why any subspace of a finite-dimensional vector space is also finite dimensional.

4.6 Change of Basis

Problems 1, 3, and 5
- Find transition matrices from one basis to another.
- Be able to compute coordinate vectors.
- Use transition matrices to find a coordinate vector in a different basis.

Problem 11
This problem tested whether you knew how to find transition matrices, but it also involved reflecting vectors about a line and taking the transpose of a matrix.

Problem 15
This problem gave a matrix and asked: if it were a transition matrix into the standard basis, what would the original basis be?
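As a quick numerical sanity check of the transition-matrix mechanics, here is a minimal sketch in NumPy (the basis B below is made up for illustration, not one from the book):

```python
import numpy as np

# Hypothetical basis B for R^2: the columns of P are the basis vectors,
# so P is the transition matrix from B-coordinates to standard coordinates.
P = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# A vector given by its coordinate vector relative to B:
v_B = np.array([3.0, -1.0])

# Standard coordinates: multiply by the transition matrix.
v = P @ v_B                      # v = 3*b1 + (-1)*b2

# Going the other way (standard -> B) uses the inverse transition matrix.
v_B_again = np.linalg.inv(P) @ v

print(v)         # [ 2. -2.]
print(v_B_again) # recovers [ 3. -1.]
```

The same pattern answers the "which basis is it?" questions: if P is a transition matrix into the standard basis, its columns are literally the original basis vectors.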
It also asked: if it were a transition matrix from the standard basis, what would the new basis be?

Problem 17
This problem transformed the standard basis vectors, and then asked for the transition matrix back to the standard basis.

4.7 Row Space, Column Space, and Null Space

Problem 1
Express a matrix-vector product Ax as a linear combination of the columns of A.

Problem 3
Determine whether a vector is in the column space of a matrix.

Problem 9
Find bases for the null space and row space of a matrix.

Problem 13
- Find bases for the row space and column space of a matrix.
- Part b: find a basis for the row space consisting only of rows of the original matrix (not the row-reduced one).

Problem 15
Gives 4 vectors and asks for a basis for their span.

4.8 Rank, Nullity, and the Fundamental Matrix Spaces

Problem 1
Asks for the rank and nullity of a matrix.

Problem 5
Given the reduced row echelon form of a matrix:
- Find its rank.
- Find its nullity.
- Verify that the number of columns in the matrix equals the sum of the rank and the nullity.
- Be able to count the number of leading variables (pivot positions).
- Be able to count the number of parameters in solutions to the homogeneous equation Ax = 0.

Problems 14 & 15
Gives a matrix with variables in some of the entries, and then asks how the variables affect the rank.

Problem 25
Gives a matrix with 4 rows and 6 columns, and then asks us to show that the null space of the transpose is orthogonal to the column space of the original matrix.

Chapter 5: Eigenvalues and Eigenvectors

5.1 Eigenvalues and Eigenvectors

Problem 3
Gives a matrix and a vector. Asks us to confirm that the vector is an eigenvector, and to find its eigenvalue.

Problems 5, 7, 11
Gives a matrix, and asks for:
- The characteristic equation
- The eigenvalues
- A basis for each eigenspace

Problems 28-29
These problems proved that the characteristic equation of a 2 by 2 matrix A can be expressed as λ^2 - tr(A)λ + det(A) = 0.
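The λ^2 - tr(A)λ + det(A) fact is easy to spot-check numerically. A sketch with a made-up 2 by 2 matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # any 2x2 matrix works; this one is made up

tr, det = np.trace(A), np.linalg.det(A)

# The eigenvalues of A should be exactly the roots of lam^2 - tr*lam + det = 0.
for lam in np.linalg.eigvals(A):
    assert abs(lam**2 - tr*lam + det) < 1e-9

# The quadratic-formula discriminant sorts real vs repeated vs complex:
disc = tr**2 - 4*det
print("real and distinct" if disc > 1e-12 else
      "repeated" if abs(disc) <= 1e-12 else "complex")
```

Here tr = 5 and det = 5, so disc = 5 > 0 and the eigenvalues are real and distinct.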
Then they worked through how you can determine whether the eigenvalues of A are real, repeated, or imaginary using formulas that come from the quadratic formula.

Problem 33
This asked us to show that the eigenvalues of the inverse of a matrix are the reciprocals of the eigenvalues of the original matrix.

True/False
- Know that an eigenvector of a square matrix A is a non-zero vector x such that Ax = λx for some scalar λ.
- Know that an eigenvalue λ has the property that there are non-zero vectors x where (A - λI)x = 0.
- Know that if a square matrix does not have 0 as an eigenvalue, then it is invertible.
- Know that the eigenspace of a matrix A corresponding to λ is the set of all vectors x (potentially zero) such that Ax = λx. (This is tricky. Eigenvectors by definition cannot be zero; however, eigenspaces include the zero vector. This might seem annoying, but it comes from two conflicting demands. First, we want eigenvectors to be non-zero, because otherwise every scalar would be an eigenvalue with eigenvector 0. Second, we want eigenspaces to actually be subspaces, so they have to include 0.)
- Know that row reducing a matrix can change its eigenvalues.
- Be familiar with the Equivalent Statements theorem (Theorem 5.1.5).

5.2 Diagonalization

Problem 3
Know that determinants, invertibility, rank, nullity, trace, characteristic polynomials, eigenvalues, and the dimensions of the eigenspaces are all preserved when two matrices are similar. So if two matrices do not share all of these, they cannot be similar.

Problem 9
This problem tested whether you know that a matrix is diagonalizable only when the algebraic multiplicity equals the geometric multiplicity for each eigenvalue.

Problem 11
Find the geometric and algebraic multiplicities of the eigenvalues. Use this information to determine whether a matrix is diagonalizable.
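The multiplicity test above can be sketched numerically: the geometric multiplicity of λ is the nullity of A - λI, i.e. n - rank(A - λI). This example matrix is a made-up Jordan block, not one from the book:

```python
import numpy as np
from numpy.linalg import matrix_rank

# Made-up example: lam = 2 has algebraic multiplicity 2 for this matrix.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

def geometric_multiplicity(M, lam):
    n = M.shape[0]
    # dim of the eigenspace = n - rank(M - lam*I)
    return n - matrix_rank(M - lam * np.eye(n))

# Geometric multiplicity is only 1 < 2, so A is NOT diagonalizable.
print(geometric_multiplicity(A, 2.0))  # 1
```

When every eigenvalue passes the test, the diagonalization itself is P^{-1}AP = D with the eigenvectors as the columns of P.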
If it is diagonalizable, know how to diagonalize the matrix.

Problem 21
Use diagonalization to find a formula for A^n that is faster than multiplying A by itself n times.

Problem 37
This proof asked you to show that if A is diagonalizable, then A^k is also diagonalizable for every whole number k.

Chapter 6: Inner Product Spaces

6.1 Inner Products

Problem 1
Given a formula for an inner product and two vectors u and v, know how to calculate:
- <u, v> (their inner product)
- ||u|| (length, with respect to this inner product)
- d(u, v) (distance between them, with respect to this inner product)

Problem 9
Know the standard inner product on matrices. (Note to Logan: later classes call this the Frobenius inner product.)

Problem 11
Know the standard inner product for polynomials (not the integral one, but the one where you just multiply and add coefficients).

Problem 19
Be able to find 'lengths' of polynomials and 'distances between' polynomials using the standard inner product for polynomials.

Problem 37
This problem used an inner product defined by an integral. Otherwise, the material you needed to know was identical to Problem 1.

6.2 Angle and Orthogonality in Inner Product Spaces

Problems 1, 3, 5
Given an inner product and two vectors, be able to find the cosine of the 'angle' between them.

Problems 7, 9, 11
Given two vectors and an inner product, be able to determine if they are orthogonal.

Problem 27
Given some vectors in R^n, be able to find a basis for their orthogonal complement.

Problem 33
Know how to compute integral inner products, and use them to find 'lengths' of functions.

Problem 41
This was a proof. It asked you to show that if a vector w is orthogonal to each of the vectors u1, u2, ..., ur, then it is orthogonal to their span.

Problem 43
This was along the same lines as 41.
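The idea behind the Problem 41 proof, checked numerically with made-up vectors in R^4: if w is orthogonal to each spanning vector, linearity of the dot product forces w to be orthogonal to every combination of them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up vectors: u1, u2 span a plane in R^4, and w is orthogonal to both.
u1 = np.array([1.0, 0.0, 1.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0, 1.0])
w  = np.array([1.0, 1.0, -1.0, -1.0])

assert abs(w @ u1) < 1e-12 and abs(w @ u2) < 1e-12

# Any vector in span{u1, u2} is c1*u1 + c2*u2, and by linearity
# <w, c1*u1 + c2*u2> = c1<w, u1> + c2<w, u2> = 0.
for _ in range(5):
    c1, c2 = rng.standard_normal(2)
    assert abs(w @ (c1*u1 + c2*u2)) < 1e-9
print("w is orthogonal to the whole span")
```

This is exactly why the orthogonal complement of W (Problem 43) can be computed from a basis of W alone.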
It asked us to show why the orthogonal complement of a subspace W is the set of all vectors orthogonal to a basis for W.

6.3 Gram-Schmidt Process; QR-Decomposition

Problems 29 & 31
Both of these problems just had you apply the Gram-Schmidt process to a set of vectors. In both cases, the inner product you used was the dot product (the book called it the inner product).

6.4 Best Approximation; Least Squares

Problems 3 & 5
Both of these problems gave you a matrix A and a vector b, then asked you to find the least squares solution to the equation Ax = b.

Problem 17
This problem asked you to find the projection of a vector onto the span of a set of other vectors.

Chapter 7: Diagonalization and Quadratic Forms

7.1 Orthogonal Matrices

Problem 3
This problem tested whether or not you could identify an orthogonal matrix.

Problem 26
This was a pretty tough question. It asked you to show that a 2 by 2 orthogonal matrix has one of these forms:
[cos θ  -sin θ]      [cos θ   sin θ]
[sin θ   cos θ]  or  [sin θ  -cos θ]

7.2 Orthogonal Diagonalization

Problem 3
Find the characteristic polynomial of a symmetric matrix, and determine the dimension of each of its eigenspaces without finding bases for the eigenspaces. (This problem wants you to take advantage of the fact that symmetric matrices always have geometric multiplicity equal to algebraic multiplicity.)

Problems 7 & 11
These problems have you orthogonally diagonalize a matrix.

Problem 26
This was a proof. It had a very similar flavor to the spectral decomposition of a matrix.

7.3 Quadratic Forms

Problem 1
This gave you a function that was a quadratic form, and then asked you to find a symmetric matrix A so that the quadratic form could be expressed as x^T A x.

Problems 5 & 7
These problems gave a quadratic form, and then asked you to make an orthogonal change of variables so that it no longer had cross product terms.

Problems 13 & 15
These problems gave you an equation for a conic section centered at the origin. They ask you to rotate its principal axes so that it is in standard position.
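The principal-axes rotation can be sketched as an orthogonal diagonalization. The conic below (5x^2 + 4xy + 5y^2 = 9) is my own example, not necessarily one from the book:

```python
import numpy as np

# The conic 5x^2 + 4xy + 5y^2 = 9 is x^T A x = 9 with A symmetric:
A = np.array([[5.0, 2.0],
              [2.0, 5.0]])

# Orthogonally diagonalize: eigh returns eigenvalues and an orthogonal P.
eigvals, P = np.linalg.eigh(A)

# In the rotated coordinates y = P^T x, the cross term disappears:
D = P.T @ A @ P
assert np.allclose(D, np.diag(eigvals))

# New equation: eigvals[0]*y1^2 + eigvals[1]*y2^2 = 9 -- here 3 and 7,
# both positive, so the conic is an ellipse.
# The rotation angle comes from the first column of P:
theta = np.arctan2(P[1, 0], P[0, 0])
print(eigvals, np.degrees(theta))
```

Reading off the signs of the eigenvalues then classifies the conic (both positive: ellipse; opposite signs: hyperbola; one zero: parabola).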
Then they ask you to identify what kind of conic section it is (circle, ellipse, parabola, or hyperbola). They also ask for the angle of rotation.

7.4 Optimization Using Quadratic Forms

Problems 13 & 15
Both of these problems gave a function f, and then asked you to:
- Find the critical points of f.
- Classify the critical points as relative maxima, relative minima, or saddle points.

Chapter 8: General Linear Transformations

8.1 General Linear Transformations

Problems 3, 5, 7, and 9
These problems give a transformation between vector spaces, and then ask if it's linear. If it's linear, they ask for the kernel.

Problem 23
This problem gave a transformation from 3rd-degree polynomials to 2nd-degree polynomials. It asked us to:
- Show it was linear.
- Find a basis for its kernel.
- Find a basis for its range.

Problem 28
This asked for the dimension of the kernel of the trace function.

8.2 Compositions of Linear Transformations

Problem 1
This exercise had you find the kernel of a transformation to determine whether the transformation was one-to-one.

Problem 3
This exercise had you find the nullity of a matrix, and then use that to determine whether multiplication by that matrix is a one-to-one transformation.

Problem 13
This gave you three linear transformations, and then asked you to give a formula for the transformation that is the composition of all of them.

Problem 21
This gave a transformation T(x1, x2, ..., xn) = (a1x1, a2x2, ..., anxn), then asked what conditions on the constants a1, a2, ..., an would make the transformation one-to-one. Then it asked for a formula for the inverse of T.

Problem 29
This problem had you:
- Show that differentiation and integration are linear functions.
- Explain why integration is not technically an inverse to differentiation if the domain of the derivative includes constants.
- Explain how to restrict the domain of the derivative so that integration does become an inverse.

Problem 31
This gave a transformation that was a definite integral.
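Most of the one-to-one questions in 8.2 reduce to checking whether the kernel is trivial. For the matrix case, that means nullity 0, i.e. rank equals the number of columns (rank-nullity). A minimal sketch with made-up matrices:

```python
import numpy as np
from numpy.linalg import matrix_rank

# Multiplication by A is one-to-one exactly when nullity(A) = 0,
# i.e. rank(A) equals the number of columns (rank-nullity theorem).
def is_one_to_one(A):
    return matrix_rank(A) == A.shape[1]

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])   # rank 2, nullity 0 -> one-to-one
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rank 1, nullity 1 -> not one-to-one

print(is_one_to_one(A), is_one_to_one(B))  # True False
```

For the diagonal transformation in Problem 21, the same criterion says T is one-to-one exactly when every ai is non-zero, and then the inverse just divides each coordinate by ai.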
It asked if it was a one-to-one transformation.

8.3 Isomorphism

Problems 3 and 5
These problems gave a transformation, and asked if it was an isomorphism.

Problem 9
This problem asked you to create an isomorphism between two different vector spaces.

Problem 23
This was a proof. It asked you to show that if U is isomorphic to V and V is isomorphic to W, then U is isomorphic to W.

8.4 Matrices for General Linear Transformations

Problem 1
This gave a transformation from 2nd-degree polynomials to 3rd-degree polynomials. It asked for the matrix representation with respect to the power basis. It then asked you to verify that if you first found the coordinate vector of a polynomial and then multiplied it by the matrix representation of the transformation, you got the same thing as if you first transformed the polynomial and then found the coordinate vector.

Problem 3
This was another transformation like Problem 1, but this time it was an operator on polynomials of degree 2.

Problem 5
This gave a transformation, and then asked for its matrix representation in a super-funky basis. Afterwards, it still wanted you to check that finding the coordinate vector first and then multiplying by the matrix representation gives the same thing as applying the transformation first and then finding the coordinate vector.

Problems 9 and 11
This gave the matrix representation of a linear transformation relative to a funky basis. It first asked for the coordinate vectors of the images of the basis vectors. Then it asked for the images themselves. Then it asked for an explicit formula for the transformation. Then it asked you to use the explicit formula on one specific vector.

Problem 15
This gave a transformation from polynomials to 2 by 2 matrices.
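The "two routes agree" check that runs through the 8.4 problems can be sketched with the derivative operator on polynomials of degree at most 2 in the power basis {1, x, x^2} (my own illustrative choice of transformation, not necessarily the book's):

```python
import numpy as np

# T = d/dx on polynomials a + b*x + c*x^2, with coordinates [a, b, c]
# in the power basis {1, x, x^2}.  T(1)=0, T(x)=1, T(x^2)=2x, so the
# columns of the matrix representation are the coordinates of those images:
T_matrix = np.array([[0.0, 1.0, 0.0],
                     [0.0, 0.0, 2.0],
                     [0.0, 0.0, 0.0]])

p = np.array([5.0, 3.0, 4.0])        # p(x) = 5 + 3x + 4x^2

# Route 1: coordinate vector first, then multiply by the matrix.
route1 = T_matrix @ p

# Route 2: transform first (p'(x) = 3 + 8x), then take coordinates.
route2 = np.array([3.0, 8.0, 0.0])

assert np.allclose(route1, route2)   # the two routes agree
print(route1)                        # [3. 8. 0.]
```

The same verification works in any bases: build the matrix column by column from the coordinate vectors of the transformed basis vectors, then compare the two routes.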
It asked for matrix representations in different bases, and then did the same old thing where it has you find a coordinate vector first, then multiply by the matrix representation, and then use the resulting coordinate vector to find the actual image of the vector. Then it had you do the transformation directly to see whether you get the same result.