Lecture 13 - Department of Scientific Computing

The advent of high-speed computers and sophisticated software tools has made the computation of derivatives for functions defined by evaluation programs both easier and more important.

On one hand, the dependence of certain program outputs on certain input parameters can now be determined and quantified more or less automatically, i.e. without the user having to append or rewrite the function evaluation procedure.

On the other hand, such qualitative and quantitative dependence analysis is invaluable for the optimization of key output objectives with respect to suitable decision variables or the identification of model parameters with respect to given data.

In fact, we may contrast the mere simulation of a physical or social system, obtained by repeatedly running an appropriate computer model for various input data, with its optimization through the systematic adjustment of certain decision variables and model parameters.

The transition from the former computational paradigm to the latter may be viewed as a central characteristic of present-day scientific computing.

In the coming lectures we will first address the calculation of derivatives by finite-difference approximations, covering how to approximate:

1. The gradient

2. A sparse Jacobian

3. The Hessian (the matrix of second-order derivatives)

4. A sparse Hessian

We will then focus our attention on automatic differentiation, drawing mainly on the book of Andreas Griewank and material on TAMC, as well as other sources.

Finite differencing

Based on Taylor's theorem, which describes the change in function values in response to small perturbations of the unknowns, we estimate the response to infinitesimal perturbations, namely the derivatives.

The partial derivative of a smooth function f : \mathbb{R}^n \to \mathbb{R} with respect to the i-th variable x_i may be approximated by the central-difference formula

\frac{\partial f}{\partial x_i}(x) \approx \frac{f(x + \epsilon e_i) - f(x - \epsilon e_i)}{2\epsilon}.

Here ε is a small positive scalar while e_i is the i-th unit vector, that is, all the elements of this vector are 0 except for a 1 in the i-th position.
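
As an illustration, here is a minimal Python sketch of this central-difference approximation (the test function and the default value of ε are our own choices, not part of the lecture):

import numpy as np

def central_difference(f, x, i, eps=1e-6):
    # i-th unit vector: all zeros except a 1 in position i
    e_i = np.zeros_like(x, dtype=float)
    e_i[i] = 1.0
    return (f(x + eps * e_i) - f(x - eps * e_i)) / (2.0 * eps)

# Example: f(x) = x_0^2 + sin(x_1), whose exact gradient is (2 x_0, cos(x_1)).
f = lambda x: x[0] ** 2 + np.sin(x[1])
x = np.array([1.0, 0.5])
print(central_difference(f, x, 0))   # approximately 2.0
print(central_difference(f, x, 1))   # approximately cos(0.5) = 0.8776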

Finite Difference Derivative Approximation

Finite differencing is an approach to the calculation of approximate derivatives whose motivation (like that of so many algorithms in optimization) comes from Taylor’s theorem.

Many software packages perform automatic calculation of finite differences whenever the user is unable or unwilling to supply a code to calculate exact derivatives.

Although they yield only approximate values for the derivatives, the results are adequate in many situations.

By definition, derivatives are a measure of the sensitivity of the function to infinitesimal changes in the values of the variables.

Our approach in this section is to make small, finite perturbations in the values of x and examine the resulting differences in the function values.

By taking the ratio of the difference in function values to the difference in variable values, we obtain approximations to the derivatives.

Gradient approximation

An approximation to the gradient vector ∇f(x) can be obtained by evaluating the function f at (n + 1) points and performing some elementary arithmetic. We describe this technique, along with a more accurate variant that requires additional function evaluations.

A popular formula for approximating the partial derivative ∂f/∂x_i at a given point x is the forward-difference, or one-sided-difference, approximation, defined as

\frac{\partial f}{\partial x_i}(x) \approx \frac{f(x + \epsilon e_i) - f(x)}{\epsilon}.    (8.1)

The gradient can be built up by simply applying this formula for i = 1, 2, . . . , n. This process requires evaluation of f at the point x as well as the n perturbed points x + ε e_i, i = 1, 2, . . . , n: a total of (n + 1) points.
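
A minimal Python sketch of this forward-difference gradient approximation (our own illustration; the test function is chosen only so the result can be checked by hand):

import numpy as np

def forward_difference_gradient(f, x, eps=1e-8):
    # Approximates the gradient using formula (8.1): one evaluation at x plus n perturbed points.
    x = np.asarray(x, dtype=float)
    fx = f(x)
    grad = np.zeros(x.size)
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += eps          # x + eps * e_i
        grad[i] = (f(x_pert) - fx) / eps
    return grad

# Example: f(x) = 0.5 ||x||^2 has gradient x.
f = lambda x: 0.5 * np.dot(x, x)
print(forward_difference_gradient(f, [1.0, -2.0, 3.0]))   # approximately [1, -2, 3]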

The basis for the formula (8.1) is Taylor’s theorem, Theorem 2.1 in Chapter 2 of the book of Nocedal and Wright, second Edition.

When f is twice continuously differentiable, we have

f(x + p) = f(x) + \nabla f(x)^T p + \tfrac{1}{2}\, p^T \nabla^2 f(x + tp)\, p, \quad \text{for some } t \in (0, 1).    (8.2)

If we choose L to be a bound on the size of ‖∇²f(·)‖ in the region of interest, it follows directly from this formula that the last term in this expression is bounded by (L/2)‖p‖², so that

\left| f(x + p) - f(x) - \nabla f(x)^T p \right| \le \tfrac{L}{2}\, \| p \|^2.    (8.3)

We now choose the vector p to be ε e_i, so that it represents a small change in the value of a single component of x (the i-th component). For this p, we have that ∇f(x)^T p = ε ∇f(x)^T e_i = ε ∂f/∂x_i, so by rearranging (8.3) we conclude that:

\frac{\partial f}{\partial x_i}(x) = \frac{f(x + \epsilon e_i) - f(x)}{\epsilon} + \delta_\epsilon, \qquad |\delta_\epsilon| \le \tfrac{L}{2}\, \epsilon.    (8.4)

We derive the forward-difference formula (8.1) by simply ignoring the error term δ_ε in this expression, which becomes smaller and smaller as ε approaches zero.

An important issue in implementing the formula (8.1) is the choice of the parameter ε. The error expression (8.4) suggests that we should choose ε as small as possible. Unfortunately, this expression ignores the roundoff errors that are introduced when the function f is evaluated on a real computer, in floating-point arithmetic. From the discussion in Appendix A of Nocedal and Wright (see (A.30) and (A.31)), we know that the quantity u, known as unit roundoff, is crucial: it is a bound on the relative error that is introduced whenever an arithmetic operation is performed on two floating-point numbers. (u is about 1.1 × 10⁻¹⁶ in double-precision IEEE floating-point arithmetic.)

The effect of these errors on the final computed value of f depends on the way in which f is computed. It could come from an arithmetic formula, or from a differential equation solver, with or without refinement.

As a rough estimate, let us assume simply that the relative error in the computed f is bounded by u, so that the computed values of f(x) and f(x + ε e_i) are related to the exact values in the following way:

| \operatorname{comp}(f(x)) - f(x) | \le u\, L_f, \qquad | \operatorname{comp}(f(x + \epsilon e_i)) - f(x + \epsilon e_i) | \le u\, L_f,

where comp(·) denotes the computed value, and L_f is a bound on the value of |f(·)| in the region of interest. If we use these computed values of f in place of the exact values in (8.4) and (8.1), we obtain an error that is bounded by:

\frac{L}{2}\, \epsilon + \frac{2\, u\, L_f}{\epsilon}.

Naturally, we would like to choose ε to make this error as small as possible; the minimizing value is

\epsilon^2 = \frac{4\, L_f\, u}{L}.

If the problem is well scaled, the ratio L_f / L does not exceed a modest size, so the choice ε = √u is close to optimal, and the error in the forward-difference approximation is then proportional to √u.
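
The trade-off between the truncation error (L/2)ε and the roundoff error 2uL_f/ε can be observed numerically. The following sketch (our own, using a simple test function) compares forward-difference errors for several values of ε around √u:

import numpy as np

eps_machine = np.finfo(float).eps       # about 2.2e-16; the unit roundoff u is roughly half of this
f, x, exact = np.exp, 1.0, np.exp(1.0)  # test function with known derivative f'(x) = exp(x)

for eps in [1e-4, 1e-6, np.sqrt(eps_machine), 1e-10, 1e-12]:
    approx = (f(x + eps) - f(x)) / eps  # forward-difference approximation (8.1)
    print(f"eps = {eps:.1e}   error = {abs(approx - exact):.2e}")
# The error is typically smallest for eps near sqrt(u), as the analysis above predicts.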

ERROR ANALYSIS AND FLOATING-POINT ARITHMETIC

In most of this book our algorithms and analysis deal with real numbers. Modern digital computers, however, cannot store or compute with general real numbers. Instead they work with a subset known as floating-point numbers. Any quantities that are stored on the computer, whether they are read directly from a file or program or arise as the intermediate result of a computation, must be approximated by a floating-point number.

In general, then, the numbers that are produced by practical computation differ from those that would be produced if the arithmetic were exact.

Of course, we try to perform our computations in such a way that these differences are as tiny as possible.

Discussion of errors requires us to distinguish between absolute error and relative error.

If x is some exact quantity (scalar, vector, matrix) and x̃ is its approximate value, the absolute error is the norm of the difference, namely ‖x̃ − x‖. (In general, any of the norms (A.2a), (A.2b), and (A.2c) of Nocedal and Wright, Appendix A, can be used in this definition.) The relative error is the ratio of the absolute error to the size of the exact quantity, that is,

\frac{\| \tilde{x} - x \|}{\| x \|}.
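
For concreteness, a small sketch (with made-up numbers of our own) that computes both quantities in the Euclidean norm:

import numpy as np

x_exact = np.array([1.0, 2.0, 3.0])
x_approx = np.array([1.0, 2.001, 2.999])                  # a hypothetical computed value

absolute_error = np.linalg.norm(x_approx - x_exact)       # norm of the difference
relative_error = absolute_error / np.linalg.norm(x_exact)
print(absolute_error, relative_error)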


Definition of sparse matrix

In the mathematical subfield of numerical analysis a sparse matrix is a matrix populated primarily with zeros.

Conceptually, sparsity corresponds to systems which are loosely coupled. Consider a line of balls connected by springs from one to the next; this is a sparse system. By contrast, if the same line of balls had springs connecting every ball to every other ball, the system would be represented by a dense matrix. The concept of sparsity is useful in combinatorics and application areas such as network theory, which have a low density of significant data or connections.

Huge sparse matrices often appear in science or engineering when solving partial differential equations.

When storing and manipulating sparse matrices on a computer, it is beneficial and often necessary to use specialized algorithms and data structures that take advantage of the sparse structure of the matrix. Operations using standard matrix structures and algorithms are slow and consume large amounts of memory when applied to large sparse matrices. Sparse data is by nature easily compressed, and this compression almost always results in significantly less memory usage. Indeed, some very large sparse matrices are impossible to manipulate with the standard algorithms.

[Figure: illustration of a sparse matrix]

Storing a sparse matrix

The naive data structure for a matrix is a two-dimensional array. Each entry in the array represents an element a_ij of the matrix and can be accessed by the two indices i and j. For an m×n matrix we need at least enough memory to store m×n entries to represent the matrix.

Many if not most entries of a sparse matrix are zeros. The basic idea when storing sparse matrices is to store only the non-zero entries as opposed to storing all entries. Depending on the number and distribution of the non-zero entries, different data structures can be used and yield huge savings in memory when compared to a naïve approach.

One example of such a sparse matrix format is the (old) Yale Sparse Matrix Format[1]. It stores an initial sparse m×n matrix, M, in row form using three one-dimensional arrays. Let NNZ denote the number of nonzero entries of M. The first array is A, which is of length NNZ, and holds all nonzero entries of M in left-to-right top-to-bottom order. The second array is IA, which is of length m + 1 (i.e., one entry per row, plus one). IA(i) contains the index in A of the first nonzero element of row i. Row i of the original matrix extends from A(IA(i)) to A(IA(i+1)-1). The third array, JA, contains the column index of each element of A, so it also is of length NNZ.

For example, the matrix

[ 1 2 0 0 ]

[ 0 3 9 0 ]

[ 0 1 4 0 ]

is a three-by-four matrix with six nonzero elements, so

A = [ 1 2 3 9 1 4 ]

IA = [ 1 3 5 7 ]

JA = [ 1 2 2 3 2 3 ]
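
The same matrix in SciPy's CSR format, which is essentially the Yale format with 0-based indexing (a small sketch, assuming SciPy is available):

import numpy as np
from scipy.sparse import csr_matrix

M = np.array([[1, 2, 0, 0],
              [0, 3, 9, 0],
              [0, 1, 4, 0]])

S = csr_matrix(M)
print(S.data)     # [1 2 3 9 1 4]  -- the array A above
print(S.indptr)   # [0 2 4 6]      -- IA shifted to 0-based indexing
print(S.indices)  # [0 1 1 2 1 2]  -- JA shifted to 0-based indexing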

Another possibility is to use quadtrees.

Example

A bitmap image having only 2 colors, with one of them dominant (say a file that stores a handwritten signature) can be encoded as a sparse matrix that contains only the row and column numbers of the pixels with the non-dominant color.

Diagonal matrices

A very efficient structure for a diagonal matrix is to store just the entries on the main diagonal as a one-dimensional array, so a diagonal n×n matrix requires only n entries.
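
A minimal sketch of this idea (our own example):

import numpy as np

d = np.array([4.0, 1.0, 3.0])    # only the n diagonal entries of a 3x3 diagonal matrix are stored
x = np.array([1.0, 2.0, 3.0])

y = d * x                        # the matrix-vector product D @ x costs only n multiplications
print(y)                         # [4. 2. 9.]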

Bandwidth

The lower bandwidth of a matrix A is the smallest number p such that the entry a_ij vanishes whenever i > j + p. Similarly, the upper bandwidth is the smallest p such that a_ij = 0 whenever i < j − p (Golub & Van Loan 1996, §1.2.1). For example, a tridiagonal matrix has lower bandwidth 1 and upper bandwidth 1.

Matrices with small upper and lower bandwidth are known as band matrices and often lend themselves to simpler algorithms than general sparse matrices; one can sometimes apply dense matrix algorithms and simply loop over a reduced number of indices.
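
A small sketch (our own) that computes both bandwidths directly from the definition:

import numpy as np

def bandwidths(A):
    # Smallest p with a_ij = 0 whenever i > j + p (lower) or i < j - p (upper).
    rows, cols = np.nonzero(A)
    lower = int(np.max(rows - cols, initial=0))
    upper = int(np.max(cols - rows, initial=0))
    return lower, upper

# A tridiagonal matrix has lower bandwidth 1 and upper bandwidth 1.
T = np.diag([2.0] * 4) + np.diag([-1.0] * 3, 1) + np.diag([-1.0] * 3, -1)
print(bandwidths(T))   # (1, 1)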

Reducing bandwidth

The Cuthill-McKee algorithm can be used to reduce the bandwidth of a sparse symmetric matrix. There are, however, matrices for which the Reverse Cuthill-McKee algorithm performs better.

The U.S. National Geodetic Survey (NGS) uses Dr. Richard Snay's "Banker's" algorithm because it performs better on the realistic sparse matrices encountered in geodesy work.

There are many other methods in use.
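
As an illustration (our own example, assuming SciPy's csgraph module is available), the reverse Cuthill-McKee ordering can be applied as follows:

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# A small symmetric sparse matrix whose nonzeros are scattered far from the diagonal.
A = csr_matrix(np.array([[4.0, 0.0, 0.0, 1.0],
                         [0.0, 4.0, 1.0, 0.0],
                         [0.0, 1.0, 4.0, 0.0],
                         [1.0, 0.0, 0.0, 4.0]]))

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm, :][:, perm]          # apply the same permutation to rows and columns
print(perm)
print(B.toarray())               # the reordered matrix has a much smaller bandwidth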

Reducing fill-in

"Fill-in" redirects here. For the puzzle, see Fill-In (puzzle).

The fill-in of a matrix consists of those entries which change from an initial zero to a non-zero value during the execution of an algorithm. To reduce the memory requirements and the number of arithmetic operations used during an algorithm, it is useful to minimize the fill-in by switching rows and columns in the matrix. The symbolic Cholesky decomposition can be used to calculate the worst possible fill-in before doing the actual Cholesky decomposition.

There are other methods than the Cholesky decomposition in use. Orthogonalization methods (such as QR factorization) are common, for example, when solving problems by least squares methods. While the theoretical fill-in is still the same, in practical terms the "false non-zeros" can be different for different methods. And symbolic versions of those algorithms can be used in the same manner as the symbolic Cholesky to compute worst case fill-in.
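
A small NumPy sketch (our own example) showing how the ordering affects the fill-in of a Cholesky factorization of an "arrowhead" matrix:

import numpy as np

n = 6
A = 10.0 * np.eye(n)
A[0, 1:] = 1.0
A[1:, 0] = 1.0                        # "arrowhead" matrix: dense first row and column, zeros elsewhere

L = np.linalg.cholesky(A)             # factor with the original ordering
print(np.count_nonzero(L))            # the lower-triangular factor fills in completely

perm = np.arange(n)[::-1]             # reorder so that the dense row/column comes last
Lp = np.linalg.cholesky(A[np.ix_(perm, perm)])
print(np.count_nonzero(Lp))           # no fill-in: only the original nonzero pattern remains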

Solving sparse matrix equations

Both iterative and direct methods exist for sparse matrix solving. One popular iterative method is the conjugate gradient method.
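
For instance, a sparse symmetric positive definite system can be solved with SciPy's conjugate gradient routine (a minimal sketch, assuming scipy.sparse is available):

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# A symmetric positive definite tridiagonal system (a 1-D Poisson-type matrix).
n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)                    # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))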
