Topic 1: Solutions to Differential Equations



Multigrid Methods

A method like Jacobi is highly effective at killing off high-frequency components of the iterates. Slow convergence, on the other hand, is due to the slow diminution of low-frequency components. Consider, then, several iterations on a grid of N gridpoints. High frequencies will be disposed of, but low-frequency errors present in the iterates remain. Think of a specific iterate as a continuous function represented at N gridpoints. The notion of high and low frequency is relative to the grid spacing. So on a grid of N/2 points, some of what looked like low-frequency errors on grid N now look more like high frequency. So perform several iterations on grid N/2, getting rid of what now look like high-frequency components. The entire process can be repeated, coarsening the grid successively all the way down to a grid of 1 interior point. This coarsening removes more components – components that on grid N looked low, and were thus hampering convergence.
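The smoothing property is easy to see numerically. The sketch below (plain Python; it assumes a weighted-Jacobi smoother with omega = 2/3 – plain Jacobi behaves similarly) relaxes -u'' = 0 on 64 intervals starting from a single sine mode, so the iterate itself is the error:

```python
import math

def weighted_jacobi(u, f, h, sweeps, omega=2/3):
    """Weighted-Jacobi sweeps for -u'' = f on a mesh with spacing h.
    u[0] and u[-1] hold the (fixed) boundary values."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            jac = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
            new[i] = (1 - omega) * u[i] + omega * jac
        u = new
    return u

N = 64
h = 1.0 / N
f = [0.0] * (N + 1)           # solve -u'' = 0: the exact solution is 0,
                              # so the iterate equals the error

def mode(k):                  # initial guess = a single Fourier mode
    return [math.sin(k * math.pi * i * h) for i in range(N + 1)]

for k in (1, N // 2):         # smooth mode vs oscillatory mode
    u = weighted_jacobi(mode(k), f, h, sweeps=10)
    print("mode", k, "max error after 10 sweeps:", max(abs(v) for v in u))
```

After ten sweeps the oscillatory mode (k = N/2, damped by a factor of 1/3 per sweep) is several orders of magnitude smaller, while the smooth mode (k = 1) has barely changed.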

Now take this computed ‘solution’ and interpolate it back to successively finer grids, performing a few iterations of Jacobi at each grid level along the way. This interpolation gives back ‘accuracy’. Finishing with several iterations back on grid N completes a so-called V-cycle of multigrid. Once we are back on the finest grid, we can repeat the V-cycle as often as needed to drive the residual to zero.

A couple of degrees of freedom remain – the number of iterations performed at each grid level, and the way restriction onto coarser grids and interpolation back to finer grids are performed. These choices affect the rate of convergence, especially the choice of restriction and interpolation operators.

Other schedules are also possible. Starting from the coarsest grid, one could interpolate up two levels of fineness, back down one, up two, back one, until finally reaching the N-grid. This is a W-cycle. Vs and Ws are the most common schedules.
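These schedules are often parameterized by a cycle index γ: each level recurses γ times to the next coarser level, with γ = 1 giving a V-cycle and γ = 2 a W-cycle. A small counting sketch (a hypothetical helper – it records only which grids are visited, no numerics):

```python
def schedule(level, gamma, visits):
    """Record grid-level visits for a gamma-cycle: gamma=1 is a V-cycle,
    gamma=2 a W-cycle. Level 0 is the finest grid, the last the coarsest."""
    visits[level] += 1
    if level == len(visits) - 1:      # coarsest grid: solve and return
        return
    for _ in range(gamma):            # recurse gamma times to the coarser grid
        schedule(level + 1, gamma, visits)

levels = 4
for gamma, name in ((1, "V-cycle"), (2, "W-cycle")):
    visits = [0] * levels
    schedule(0, gamma, visits)
    print(name, "visits per level:", visits)
```

On four levels the V-cycle visits each level once, while the W-cycle visits level k of the hierarchy 2^k times – extra work, but spent on the cheap coarse grids.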

Now we explain effective use of multigrid. Given the ODE

    L u = f,

write the discrete version on a mesh with spacing h as

    L_h u_h = f_h.

Let ũ_h be an approximate solution, and

    v_h = u_h - ũ_h

is the error or the correction. The residual or defect is defined as

    d_h = L_h ũ_h - f_h.

Because L_h is linear,

    L_h v_h = -d_h.

Now coarsen the grid. Set H = 2h and solve the new residual equation

    L_H v_H = -d_H.

How? We need to define a fine-to-coarse grid transfer operator R, called a restriction operator, and define

    d_H = R d_h.

Now relax (say, with Jacobi) for a couple of iterations. Given the coarse-grid solution ṽ_H, transfer it back to the finer grid with a prolongation or interpolation operator P:

    ṽ_h = P ṽ_H.

The new solution, then, is

    ũ_h ← ũ_h + ṽ_h.

When it is time to solve the defect equation on grid H, the obvious trick is to repeat the procedure – that is, relax a couple of times, restrict again to grid 2H, “solve” for the defect on that grid, and interpolate back up. You can continue this process down to a coarsest grid, maybe a grid with one interior point or a small grid on which a direct method is applicable.
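Putting the pieces together, here is a minimal recursive V-cycle sketch for the constant-coefficient model problem -u'' = f with u(0) = u(1) = 0 (assumptions: a weighted-Jacobi smoother, full-weighting restriction, linear interpolation, and the equivalent sign convention d = f - L_h ũ, so the correction solves L v = d):

```python
import math

def relax(u, f, h, sweeps, omega=2/3):
    """Weighted-Jacobi relaxation for -u'' = f; updates u in place."""
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = (1 - omega) * u[i] + omega * 0.5 * (u[i-1] + u[i+1] + h*h*f[i])
        u[:] = new

def residual(u, f, h):
    """d_i = f_i - (L_h u)_i, so the correction v solves L_h v = d."""
    d = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        d[i] = f[i] - (2*u[i] - u[i-1] - u[i+1]) / (h*h)
    return d

def restrict(d):
    """Full weighting onto the grid with half as many intervals."""
    nc = (len(d) - 1) // 2 + 1
    return ([0.0]
            + [0.25*d[2*i-1] + 0.5*d[2*i] + 0.25*d[2*i+1] for i in range(1, nc-1)]
            + [0.0])

def interpolate(v):
    """Linear interpolation onto the grid with twice as many intervals."""
    fine = [0.0] * (2 * (len(v) - 1) + 1)
    for i in range(len(v)):
        fine[2*i] = v[i]
    for i in range(1, len(fine), 2):
        fine[i] = 0.5 * (fine[i-1] + fine[i+1])
    return fine

def vcycle(u, f, h, sweeps=2):
    relax(u, f, h, sweeps)                # pre-smooth: kill high frequencies
    if len(u) > 3:                        # more than one interior point?
        d2 = restrict(residual(u, f, h))  # defect on the coarser grid
        v2 = [0.0] * len(d2)
        vcycle(v2, d2, 2*h, sweeps)       # "solve" the coarse defect equation
        corr = interpolate(v2)
        for i in range(len(u)):           # coarse-grid correction
            u[i] += corr[i]
    relax(u, f, h, sweeps)                # post-smooth

# Demo: -u'' = pi^2 sin(pi x) has exact solution u = sin(pi x).
N = 64
h = 1.0 / N
f = [math.pi**2 * math.sin(math.pi * i * h) for i in range(N + 1)]
u = [0.0] * (N + 1)
for _ in range(15):
    vcycle(u, f, h)
err = max(abs(u[i] - math.sin(math.pi * i * h)) for i in range(N + 1))
print("max error vs exact solution:", err)
```

Starting from a zero guess, a handful of V-cycles drives the algebraic error below the discretization error of the scheme.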

Choice of restriction and interpolation operators is important. A popular interpolation is bilinear interpolation. For restriction, an obvious (but not always safe) choice is straight injection – each coarse-grid point is given the value of the corresponding fine-grid point. A better and safer choice is to make restriction the adjoint of interpolation. As a rule, if m is the order of the differential operator (e.g. 2 for the Poisson equation), p is the order of the interpolation (i.e. a polynomial of degree p-1 is interpolated exactly), and r is the order of the restriction operator, then you should choose operators so that p + r > m (e.g. bilinear interpolation and its adjoint give p + r = 4 > 2 = m).
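The bilinear example is two-dimensional; in 1D the analogue is linear interpolation (p = 2) with full weighting as its scaled adjoint (r = 2), so p + r = 4 > 2 = m still holds. The quick check below (hypothetical helper names) builds both operators as explicit matrices over the interior points of a small grid and verifies R = (1/2) P^T:

```python
def interp_matrix(nc, nf):
    """Linear interpolation P from nc coarse interior points
    to nf = 2*nc + 1 fine interior points."""
    P = [[0.0] * nc for _ in range(nf)]
    for j in range(nc):
        i = 2*j + 1              # coarse point j sits at fine index 2j+1
        P[i][j] = 1.0
        P[i-1][j] += 0.5         # neighbors get the average
        P[i+1][j] += 0.5
    return P

def fullweight_matrix(nc, nf):
    """Full-weighting restriction R: the (1/4, 1/2, 1/4) stencil."""
    R = [[0.0] * nf for _ in range(nc)]
    for j in range(nc):
        i = 2*j + 1
        R[j][i-1], R[j][i], R[j][i+1] = 0.25, 0.5, 0.25
    return R

nc, nf = 3, 7
P = interp_matrix(nc, nf)
R = fullweight_matrix(nc, nf)
# full weighting is (1/2) * P^T: the adjoint of interpolation up to scaling
ok = all(abs(R[j][i] - 0.5 * P[i][j]) < 1e-15
         for j in range(nc) for i in range(nf))
print("R == (1/2) P^T:", ok)
```

By contrast, straight injection (a single 1 per row of R) is not the adjoint of linear interpolation, which is one way to see why it is the less safe choice.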

Homework 1 Part 2 Write a multigrid code for the 1D Poisson equation from Part 1, using Jacobi and red-black Gauss-Seidel (RBGS) as the relaxation schemes, with 0 as the initial guess at the solution. Try alternative schedules, V- and W-cycles, proceeding down to the coarsest grid (one interior point). Write a report detailing how the choice of V or W, and the number of relaxation steps at each level, impacts the overall convergence rate. How does the magnitude of the jump in a(x) impact convergence? What other factors come into play?
