Introduction to 2-Dim Gaussian Quadrature over a rectangle



Section 4A:

Introduction to iterated 2-dimensional Gaussian Quadrature over a rectangle

We are now going to use the work on 1-dimensional quadrature to develop a rule for 2-dimensional quadrature. First recall Gaussian Quadrature for an interval [a, b]: For example, with n = 3, we get 4 points in [a, b], x0, x1, x2, x3, and 4 positive weights w0, w1, w2, w3.

[Figure: the 4 Gaussian nodes x0, x1, x2, x3 in the interval [a, b]]

In general, for a degree 2n+1 rule, we have n+1 points, xi, and weights, wi, and the quadrature rule is then Q(f) = w0 f(x0) + … + wn f(xn) (with the nodes in [a, b] and the weights positive). For a continuous function f(x) on [a, b], we then approximate ∫[a,b] f(x) dx ≈ Q(f). We know that we get exact results if f is any polynomial with

deg f ≤ 2n+1.
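As a quick illustration, the 1-dimensional rule can be sketched in Python (the notes' actual code is in Mathematica and appears later in this section; the helper name gauss1d and the use of NumPy's Gauss-Legendre nodes are choices made here, not the notes'):

```python
import numpy as np

def gauss1d(f, a, b, n):
    """Gaussian quadrature with n+1 nodes, exact for polynomials of degree <= 2n+1."""
    t, w = np.polynomial.legendre.leggauss(n + 1)  # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (a + b)          # map the nodes into [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))        # scaled weighted sum Q(f)

# n = 1 (two nodes) already integrates x^3 over [0, 2] exactly: the true value is 4.
print(gauss1d(lambda x: x**3, 0.0, 2.0, 1))  # -> 4.0 up to roundoff
```

Note that a rule with n+1 nodes is exact through degree 2n+1, so the two-node rule above handles the cubic exactly.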

Now we consider a function f(x, y) defined on a rectangular region R = [a, b] × [c, d]:

[Figure: the rectangular region R = [a, b] × [c, d]]

We want to estimate I = ∬R f(x, y) dx dy. In Calculus we learned that this can be computed as an iterated integral:

I = ∫[c,d] ∫[a,b] f(x, y) dx dy,

or I = ∫[a,b] ∫[c,d] f(x, y) dy dx.

For example, let n = 3 and consider a polynomial f(x, y) with degrees in x and y separately at most 2n+1 (= 7). We first apply Gaussian Quadrature on [a, b] with n = 3 to find x0, …, xn in [a, b] and positive weights w0, …, wn. Then we apply Gaussian Quadrature on [c, d] with n = 3 to find y0, …, yn in [c, d] and positive weights v0, …, vn.

[Figure: the grid of nodes (xi, yj) in R]

Now we define our 2-dimensional quadrature rule by the following formula:

Q[f] = Σ(i=0 to n) Σ(j=0 to n) wi vj f(xi, yj).

For example, for n = 1 we get:

Q[f] = w0 v0 f(x0, y0) + w0 v1 f(x0, y1) + w1 v0 f(x1, y0) + w1 v1 f(x1, y1).
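This four-term rule can be checked directly in Python (a sketch, not the notes' Mathematica code; the test integrand x³y³ over [0, 1] × [0, 1] has degree 3 in each variable, so the n = 1 rule should reproduce the true value 1/16):

```python
import numpy as np

# Two Gauss-Legendre nodes/weights on [-1, 1], mapped to [0, 1] in each direction.
t, w = np.polynomial.legendre.leggauss(2)
x = 0.5 * t + 0.5            # nodes x0, x1 in [0, 1]
wx = 0.5 * w                 # weights w0, w1
y, vy = x.copy(), wx.copy()  # same interval, so same nodes y0, y1 and weights v0, v1

f = lambda x, y: x**3 * y**3

# The four-term n = 1 rule, written out exactly as in the formula above.
Q = (wx[0]*vy[0]*f(x[0], y[0]) + wx[0]*vy[1]*f(x[0], y[1])
     + wx[1]*vy[0]*f(x[1], y[0]) + wx[1]*vy[1]*f(x[1], y[1]))
print(Q)  # 1/16 = 0.0625, exactly (up to roundoff)
```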

We claim that Q[f] = ∬R f(x, y) dx dy if the degree of f in x is ≤ 2n+1 and the degree of f in y is ≤ 2n+1. We will show this for n = 1. Suppose f(x, y) has degree ≤ 3 in x and in y separately.

Let q(y) = f(x0, y), a polynomial in y of degree at most 3. Therefore,

v0 q(y0) + v1 q(y1) = ∫[c,d] q(y) dy.  (A)

Now let r(y) = f(x1, y), a polynomial in y of degree at most 3. Therefore,

v0 r(y0) + v1 r(y1) = ∫[c,d] r(y) dy.  (B)

Now let p(x) = ∫[c,d] f(x, y) dy, a polynomial in x of degree ≤ 2n+1 = 3, so by 1-dimensional Gaussian Quadrature

∫[a,b] p(x) dx = w0 p(x0) + w1 p(x1).  (C)

Now Q[f] = w0 v0 f(x0, y0) + w0 v1 f(x0, y1) + w1 v0 f(x1, y0) + w1 v1 f(x1, y1)

= w0 (v0 f(x0, y0) + v1 f(x0, y1)) + w1 (v0 f(x1, y0) + v1 f(x1, y1))

= w0 (v0 q(y0) + v1 q(y1)) + w1 (v0 r(y0) + v1 r(y1))

= w0 p(x0) + w1 p(x1) (by (A) & (B), since p(x0) = ∫[c,d] q(y) dy and p(x1) = ∫[c,d] r(y) dy)

= ∫[a,b] p(x) dx (by (C))

= ∫[a,b] ∫[c,d] f(x, y) dy dx

= ∬R f(x, y) dx dy.

For n = 1, this completes the proof that Q is a quadrature rule with exact results for polynomials f(x, y) with degree at most 2n+1 in x and in y separately. For n > 1, the same kind of proof works. This kind of rule is called 2-dimensional iterated Gaussian Quadrature, and the same construction extends to triple integrals and to higher dimensions. Notice that with n = 1 we have a rule with 4 nodes which is exact for any polynomial with degree at most 3 in x and in y separately, so such a polynomial can have total degree as high as 6 (for example, x³y³). But this rule is not exact for all polynomials of degree 6, which include the monomials x⁶, x⁵y, x⁴y², x³y³, x²y⁴, xy⁵, y⁶; for instance, it fails on x⁶. To obtain a rule that is exact for all polynomials of degree 6 requires 10 points [2]. The minimum number of nodes required for exact results up to a given degree > 6 is unknown.

We conclude this section with Mathematica code that implements this approach to 2-dimensional iterated Gaussian Quadrature. Recall the code for 1-dimensional Gaussian Quadrature:

[Mathematica code for 1-dimensional Gaussian Quadrature]

The Gaussian nodes are x[0], …, x[n] and the weights are weights[1], …, weights[n+1]. We now use the 1-dimensional rule separately in the x-direction and the y-direction to compute the following 2-dimensional iterated Gaussian quadrature rule.

[Mathematica code for 2-dimensional iterated Gaussian Quadrature]
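As a stand-in for the Mathematica code shown above, here is a Python sketch of the same iterated rule; the function name gauss2d and the use of NumPy's Gauss-Legendre nodes are choices made here, not the notes' (the double loop mirrors the double sum defining Q[f]):

```python
import numpy as np

def gauss2d(f, a, b, c, d, n):
    """Iterated 2-D Gaussian quadrature with (n+1)^2 nodes on [a,b] x [c,d]."""
    t, w = np.polynomial.legendre.leggauss(n + 1)        # 1-D rule on [-1, 1]
    x, wx = 0.5*(b - a)*t + 0.5*(a + b), 0.5*(b - a)*w   # x-direction nodes/weights
    y, wy = 0.5*(d - c)*t + 0.5*(d + c), 0.5*(d - c)*w   # y-direction nodes/weights
    II = 0.0
    for i in range(n + 1):        # sum over x-nodes
        for j in range(n + 1):    # sum over y-nodes
            II += wx[i] * wy[j] * f(x[i], y[j])
    return II

# Sanity check on a separable integrand: the integral of x*y over [0,1]^2 is 1/4.
print(gauss2d(lambda x, y: x * y, 0, 1, 0, 1, 1))  # -> 0.25 up to roundoff
```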

The variable II holds the estimate of ∫[a,b] ∫[c,d] f(x, y) dy dx based on 2-dimensional iterated Gaussian quadrature. In the next section we illustrate this method with several examples.

Section 4B:

Experiments with iterated 2-dimensional Gaussian Quadrature for rectangular regions

In these experiments we try to find the smallest n which gives a Gaussian relative error less than 1% in the 2-dimensional Gaussian Quadrature approximation. The error is based on a comparison with the Mathematica approximation, which we consider to be the "correct" value.

Throughout these experiments, we use 100-digit precision in order to get accurate results for the matrix inversion in the Gaussian code (Section 4A). Final results are only shown to around 6 significant digits (Mathematica's default precision).

Example 1:

f(x, y) = 3x²y² + 4x³y − 2xy² + 3x² + 2y² − 6xy + 3x + 2y + 1,

a = 1, b = 3, c = 2, d = 5.

The graph for this function is:

[Figure: graph of f(x, y) over R]

We see that n = 1 satisfies our objective:

|       | Mathematica | Iterated Gaussian | Gaussian Relative Error |
| n = 1 | 1608.       | 1608.             | 1.41402×10⁻¹⁶           |

The maximum degree in each variable separately for the above function is ≤ 3, so n = 1 should give exact results. Therefore the Gaussian relative error should be 0, but because of roundoff error it came out as 1.41402×10⁻¹⁶.
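This exactness can be checked directly (a Python sketch using NumPy's Gauss-Legendre nodes, not the notes' Mathematica code; gauss2d is a hypothetical helper implementing the iterated rule):

```python
import numpy as np

def gauss2d(f, a, b, c, d, n):
    # Iterated 2-D Gaussian rule with (n+1)^2 nodes (hypothetical helper).
    t, w = np.polynomial.legendre.leggauss(n + 1)
    x, wx = 0.5*(b - a)*t + 0.5*(a + b), 0.5*(b - a)*w
    y, wy = 0.5*(d - c)*t + 0.5*(d + c), 0.5*(d - c)*w
    return sum(wx[i]*wy[j]*f(x[i], y[j])
               for i in range(n + 1) for j in range(n + 1))

# The polynomial of Example 1, degree <= 3 in each variable.
f = lambda x, y: (3*x**2*y**2 + 4*x**3*y - 2*x*y**2 + 3*x**2
                  + 2*y**2 - 6*x*y + 3*x + 2*y + 1)

# n = 1 is already exact: the true integral over [1,3] x [2,5] is 1608.
print(gauss2d(f, 1, 3, 2, 5, 1))  # -> 1608.0 up to roundoff
```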

Example 2:

f(x, y) = x⁴ + 3x³y + 7y⁴ − 6x²y² + 2x − 4y + 6,

a = 1, b = 3, c = 2, d = 5.

The graph for this function is:

[Figure: graph of f(x, y) over R]

We were able to get a relative error less than 0.01 on our first attempt (with n = 1) as you can see in the table. Since the maximum degree in each variable separately is 4, n = 2 should give exact results, which the table shows (to within machine roundoff error).

|       | Mathematica | Iterated Gaussian | Gaussian Relative Error |
| n = 1 | 7383.6      | 7364.16           | 0.00263196              |
| n = 2 |             | 7383.60           | 2.46355×10⁻¹⁶           |
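The same check works here (again a Python sketch with a hypothetical gauss2d helper, not the notes' Mathematica code): n = 1 misses the degree-4 terms, while n = 2 recovers the exact value 7383.6.

```python
import numpy as np

def gauss2d(f, a, b, c, d, n):
    # Iterated 2-D Gaussian rule with (n+1)^2 nodes (hypothetical helper).
    t, w = np.polynomial.legendre.leggauss(n + 1)
    x, wx = 0.5*(b - a)*t + 0.5*(a + b), 0.5*(b - a)*w
    y, wy = 0.5*(d - c)*t + 0.5*(d + c), 0.5*(d - c)*w
    return sum(wx[i]*wy[j]*f(x[i], y[j])
               for i in range(n + 1) for j in range(n + 1))

# The polynomial of Example 2, degree 4 in each variable.
f = lambda x, y: x**4 + 3*x**3*y + 7*y**4 - 6*x**2*y**2 + 2*x - 4*y + 6

print(gauss2d(f, 1, 3, 2, 5, 1))  # inexact: degree 4 exceeds 2n+1 = 3
print(gauss2d(f, 1, 3, 2, 5, 2))  # -> 7383.6 up to roundoff, since 4 <= 2n+1 = 5
```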

Example 3:

f(x,y) = √(x² + y²),

a = 1, b = 6, c = 4, d = 6.

The graph for this function is:

[Figure: graph of f(x, y) over R]

n = 1 gave us a good Gaussian relative error, but we included n = 2 for confirmation.

|       | Mathematica | Iterated Gaussian | Gaussian Relative Error |
| n = 1 | 62.2769     | 62.2692           | 0.000124067             |
| n = 2 |             | 62.2773           | 5.80135×10⁻⁶            |

Example 4:

f(x,y) = (x+y)/(x-y),

a = 1, b = 5, c = 1/4, d = 1/2

The graph for this function is:

[Figure: graph of f(x, y) over R]

n = 1 did not give us a Gaussian relative error less than 1%, so we used n = 2 in order to obtain a better Gaussian relative error. Next, n = 3 confirms the approximation.

|       | Mathematica | Iterated Gaussian | Gaussian Relative Error |
| n = 1 | 1.38016     | 1.3577            | 0.0162743               |
| n = 2 |             | 1.3749            | 0.00381203              |
| n = 3 |             | 1.37894           | 0.000878947             |

Example 5a:

f(x,y) = Sin(x²y + yx²),

a = 0, b = 3, c = 1, d = 3.

The graph for this function is:

[Figure: graph of f(x, y) over R]

The method seemed to give good results up to n = 2, but it broke down at n = 3, where the relative error suddenly increased. The method had to reach n = 6 to give a Gaussian relative error less than 1%, and even as n increases further the relative error does not settle down monotonically. This difficulty is caused by the extremely irregular graph: f oscillates rapidly over this region, so small changes in (x, y) cause big changes in f(x, y), and the quadrature nodes undersample the oscillations.

|        | Mathematica | Iterated Gaussian | Gaussian Relative Error |
| n = 1  | 0.645902    | 1.52852           | 1.36648                 |
| n = 2  |             | 0.593642          | 0.0809097               |
| n = 3  |             | −0.333341         | 1.51609                 |
| n = 4  |             | 1.30596           | 1.02192                 |
| n = 5  |             | 0.752682          | 0.165318                |
| n = 6  |             | 0.6452            | 0.00108682              |
| n = 7  |             | 0.926495          | 0.43442                 |
| n = 8  |             | 0.387134          | 0.400631                |
| n = 20 |             | 0.649014          | 0.00481717              |
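The erratic convergence above can be reproduced in a few lines (a Python sketch with a hypothetical gauss2d helper; here the "correct" value comes from a much finer rule rather than from Mathematica):

```python
import numpy as np

def gauss2d(f, a, b, c, d, n):
    # Iterated 2-D Gaussian rule with (n+1)^2 nodes (hypothetical helper).
    t, w = np.polynomial.legendre.leggauss(n + 1)
    x, wx = 0.5*(b - a)*t + 0.5*(a + b), 0.5*(b - a)*w
    y, wy = 0.5*(d - c)*t + 0.5*(d + c), 0.5*(d - c)*w
    return sum(wx[i]*wy[j]*f(x[i], y[j])
               for i in range(n + 1) for j in range(n + 1))

f = lambda x, y: np.sin(x**2 * y + y * x**2)

ref = gauss2d(f, 0, 3, 1, 3, 80)   # reference value from a much finer rule
for n in (1, 2, 3, 6, 20):
    err = abs(gauss2d(f, 0, 3, 1, 3, n) - ref) / abs(ref)
    print(n, err)                  # the relative error jumps around before settling
```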

Example 5b:

f(x,y) = Sin(x²y + yx²),

a = 0, b = 1, c = 0, d = 1.

We use the same function as before, but we move the region closer to the origin, where the graph is more regular.

[Figure: graph of f(x, y) over [0, 1] × [0, 1]]

n = 1 did not achieve a Gaussian relative error less than 1%. However, n = 2 and n = 3 showed small errors.

|       | Mathematica | Iterated Gaussian | Gaussian Relative Error |
| n = 1 | 0.28955     | 0.295064          | 0.0190411               |
| n = 2 |             | 0.289447          | 0.000424248             |
| n = 3 |             | 0.289544          | 0.0000218553            |

In conclusion, we can compare the tables for Examples 5a and 5b. For a = 0, b = 3, c = 1, d = 3, where the graph shows more variation, the results start to break down; it is difficult to integrate f(x, y) over this region. On the other hand, for a = 0, b = 1, c = 0, d = 1, the graph is smoother and the method quickly gave a Gaussian relative error less than 1%.
