NUMERICAL INTEGRATION OF POLYNOMIALS AND …



EXPERIMENTS WITH GAUSSIAN QUADRATURE IN 1 AND 2 VARIABLES.

CLARA ENUMA, ELECTRICAL ENGINEERING MAJOR AND

KIMBERLY LINTON, COMPUTER SCIENCE & MATH. MAJOR

SUNY NEW PALTZ

SPRING / SUMMER 2005.

MENTOR: DR. L. FIALKOW

Introduction

This project concerns the approximation of integrals using Gaussian Quadrature. We first studied Gaussian Quadrature on an interval [a,b], based on a program created by R. Curto and L. Fialkow [2] and implemented in Mathematica. We also compared the Gaussian approximations to estimates using Simpson's Rule and to Mathematica's built-in estimate, NIntegrate. We next studied iterated Gaussian Quadrature over a rectangular region [a,b] x [c,d], and then studied 2-dimensional iterated Gaussian Quadrature over certain non-rectangular regions. Finally, we studied the problem of finding the fewest points in a cubature rule, using the moment matrix method of [5].

Acknowledgement

A big thanks goes to Deb Gould, Sarah Browne and Roderick Woods, the coordinators of the Alliance for Minority Participation (AMP) / C-STEP / CSEMS programs at SUNY New Paltz, who paired us with our mentor, Dr. L. Fialkow. Dr. Fialkow began this project with us in the spring semester of 2005 and continued with us in the first summer session of 2005.

Table Of Contents

1. Introduction

2. Acknowledgement

3. Numerical integration using Gaussian Quadrature in one variable

4. Creation of tables

5. Experiments in 1-dimensional numerical integration

6. 2-dimensional iterated Gaussian Quadrature over a rectangular region

7. Experiments with 2-dimensional iterated Gaussian Quadrature over a rectangular region

8. 2-dimensional Gaussian Quadrature over a non-rectangular region

9. Experiments with 2-dimensional iterated Gaussian Quadrature over a non-rectangular region

10. The matrix extension method for constructing quadrature rules with the fewest points

11. References

Numerical Integration:

Numerical integration is the approximation of an integral using numerical techniques; Gaussian Quadrature and Simpson's Rule are the two methods of numerical integration we studied in this project. We implemented these methods in the software tool Mathematica, which also provides numerical integration through a built-in proprietary method called NIntegrate.

The Fundamental Theorem of Calculus states that the integral of a continuous function f(x) over an interval [a,b] is given by ∫[a,b] f(x) dx = F(b) − F(a), where F(x) is an antiderivative of f(x), i.e., F′(x) = f(x) [6]. Take as an example

∫[a,b] Cos(x) dx = Sin(b) − Sin(a), where f(x) = Cos(x) and F(x) = Sin(x). In case we cannot find an antiderivative F(x), we seek to approximate ∫[a,b] f(x) dx using some numerical method.
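When an antiderivative is available, this is easy to check in Mathematica. Here is a minimal sketch (the interval [0, 2] is our own choice for illustration, not one of the intervals in the experiments below):

exact = Sin[2] - Sin[0];                 (* F(b) - F(a), from the antiderivative *)
approx = NIntegrate[Cos[x], {x, 0, 2}];  (* Mathematica's numerical estimate *)
exact - approx                           (* the difference is essentially 0 *)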

Gaussian Quadrature: In numerical analysis, a quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration [4],

∫[a,b] f(x) dx ≈ Σ_{0≤i≤n} wi·f(xi).

Let n > 0 and suppose the quadrature rule has points x0, …, xn in an interval [a,b] and positive weights w0, …, wn. In the Gaussian Quadrature rule Q ≡ Qn of size n + 1 (i.e., with n + 1 points), we require that if p(x) is any polynomial of degree ≤ 2n + 1, then

∫[a,b] p(x) dx = Q[p] = Σ_{0≤i≤n} wi·p(xi) = w0·p(x0) + w1·p(x1) + … + wn·p(xn) [4].

It turns out that n + 1 points are the fewest that will give this precision. Also, in Gaussian Quadrature, the quadrature points are symmetrically distributed about the midpoint of the interval. For example, with n = 2, we have the following picture:

[figure omitted: three Gaussian points placed symmetrically about the midpoint of [a,b]]
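As a concrete illustration of such a rule (this classical special case is stated here for illustration and is not part of the program analyzed below): for n = 1 on [−1, 1], the Gaussian points are ±1/Sqrt[3] with weights w0 = w1 = 1, and the rule already integrates every polynomial of degree ≤ 3 exactly.

p[t_] := 5 t^3 + 2 t^2 - t + 7;              (* an arbitrary polynomial of degree 3 = 2n+1 *)
qp = p[-1/Sqrt[3]] + p[1/Sqrt[3]];           (* Q(p) with weights 1, 1 *)
Integrate[p[t], {t, -1, 1}] == Simplify[qp]  (* returns True *)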

For any continuous function f(x) defined on [a,b], we now define the Gaussian Quadrature rule

Qn(f) = Q(f) ≡ Σ_{0≤i≤n} wi·f(xi) = w0·f(x0) + w1·f(x1) + … + wn·f(xn),

and we will approximate ∫[a,b] f(x) dx by Q(f). We can motivate this approximation as follows.

Suppose we are given a continuous function f(x) on a fixed interval [a,b] and a number ε > 0. By Weierstrass' Theorem, there is a polynomial p(x) that approximates f(x) to within ε, i.e., |f(x) − p(x)| ≤ ε for every x in [a,b].

[figure omitted: f(x) together with an approximating polynomial p(x) lying within an ε-band]

It then follows that

|∫[a,b] f(x) dx − ∫[a,b] p(x) dx| = |∫[a,b] (f(x) − p(x)) dx| ≤ ∫[a,b] |f(x) − p(x)| dx ≤ ε(b − a),

so if ε is small, we get ∫f ≈ ∫p. Since ∫p = Q(p) provided that deg p ≤ 2n + 1, we can estimate ∫f(x) dx by Q(f) ≡ Σ_{0≤i≤n} wi·f(xi). More precisely, we have ∫f ≈ ∫p = Q(p) ≈ Q(f). To justify the last approximation,

|Q(p) − Q(f)| = |Σ wi·p(xi) − Σ wi·f(xi)|

= |Σ wi·(p(xi) − f(xi))|

≤ Σ_{0≤i≤n} wi·|p(xi) − f(xi)|

≤ ε·Σ wi

= ε(b − a) (since Σ wi = Σ wi·xi^0 = ∫[a,b] x^0 dx = b − a).

For a given small ε > 0, deg p might be very large, so we might not have deg p ≤ 2n + 1, and therefore we might only have ∫p ≈ Q(p). Alternatively, in our experiments, we computed Qn for larger and larger values of n until we obtained a good approximation, ∫[a,b] f(x) dx ≈ Qn(f).

Another numerical method is Simpson's Rule [4, Ch. 5] [6], which can be defined as follows. If f(x) is a continuous function, we can approximate ∫[a,b] f(x) dx by applying Simpson's Rule in the following way. Let n > 0, set Δx = (b − a)/n, and let xi = a + i·Δx (0 ≤ i ≤ n). Consider the following Riemann sums that could be used to approximate ∫[a,b] f(x) dx.

The left endpoint estimate gives:

Ln = Δx·[f(x0) + f(x1) + … + f(x_(n−1))].

The right endpoint estimate gives:

Rn = Δx·[f(x1) + f(x2) + … + f(xn)].

The midpoint estimate gives:

Mn = Δx·[f((x0 + x1)/2) + f((x1 + x2)/2) + … + f((x_(n−1) + xn)/2)].

The trapezoid estimate gives the average of the left endpoint estimate (Ln) and the right endpoint estimate (Rn), i.e.,

Tn = (Ln + Rn) / 2.

Now the parabolic estimate (Simpson's Rule) gives:

Sn = (2·Mn + Tn) / 3.
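These composite rules are straightforward to express in Mathematica. The following is a minimal sketch (the function name simpson and its structure are ours, not part of the quadrature program discussed later):

simpson[f_, a_, b_, n_] := Module[{h = (b - a)/n, ln, rn, mn, tn},
  ln = h Sum[f[a + i h], {i, 0, n - 1}];      (* left endpoint estimate Ln *)
  rn = h Sum[f[a + i h], {i, 1, n}];          (* right endpoint estimate Rn *)
  mn = h Sum[f[a + (i - 1/2) h], {i, 1, n}];  (* midpoint estimate Mn *)
  tn = (ln + rn)/2;                           (* trapezoid estimate Tn *)
  (2 mn + tn)/3]                              (* Simpson's rule Sn *)

For example, N[simpson[Sin, 0, 4, 20]] gives approximately 1.65364, in agreement with 1 − Cos[4].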

In our experiments we compared Sn(f) and Qn(f) to see which method approximates ∫[a,b] f(x) dx more quickly as n gets larger and larger.

Analysis Of The Gaussian Program:

We used an implementation of Gaussian Quadrature based on [2] [5]. In this program, given an integer n > 0 and an interval [a,b], we compute points x0, …, xn within the interval and positive weights w0, …, wn which implement Gaussian Quadrature: if p(x) is a polynomial with deg p ≤ 2n + 1, then

∫[a,b] p(x) dx = Σ_{0≤i≤n} wi·p(xi).

Likewise, for any continuous non-polynomial function f(x), ∫[a,b] f(x) dx ≈ Σ_{0≤i≤n} wi·f(xi).

We illustrate the validity of the method of [2] [5] by proving its correctness for n = 2. We must find n + 1 = 3 points x0, x1, x2 in the interval [a,b] and positive weights w0, w1, w2 such that if

βj = ∫[a,b] x^j dx = (b^(j+1) − a^(j+1)) / (j+1)   (j = 0, 1, 2, 3, 4, 5),

then βj = Σ_{0≤i≤2} wi·xi^j (0 ≤ j ≤ 5). In detail, we must solve the following system for wi > 0 and xi in [a,b] (0 ≤ i ≤ 2):

β0 = w0 + w1 + w2

β1 = w0·x0 + w1·x1 + w2·x2

β2 = w0·x0^2 + w1·x1^2 + w2·x2^2        (*)

β3 = w0·x0^3 + w1·x1^3 + w2·x2^3

β4 = w0·x0^4 + w1·x1^4 + w2·x2^4

β5 = w0·x0^5 + w1·x1^5 + w2·x2^5

Step 1: The first step is to find a matrix H which "represents" integration of any polynomial p of degree ≤ 2. Each polynomial p with deg p ≤ 2 is of the form p(t) = d + e·t + f·t^2; let p^ ≡ (d, e, f)^T (T for transpose) be the coefficient vector of p. We want to find a matrix H ≡ H2 (with 3 rows and columns) such that

⟨H p^, q^⟩ = ∫[a,b] p(t)·q(t) dt (for all polynomials p, q with deg p, deg q ≤ 2).

Writing (t^j)^ for the coefficient vector of the monomial t^j, it follows that if H = (Hij), 0 ≤ i, j ≤ 2, then Hij = ⟨H (t^j)^, (t^i)^⟩ = ∫[a,b] t^(i+j) dt = β_(i+j).

For example, H00 = β0

H01 = H10 = β1

H02 = H11 = H20 = β2

H12 = H21 = β3

H22 = β4

So we get

H ≡ H2 =
| β0  β1  β2 |
| β1  β2  β3 |
| β2  β3  β4 |

(H is a Hankel matrix, constant on cross-diagonals).

Let us denote the first column vector of H by t0 = (β0, β1, β2)^T, the second by t1 = (β1, β2, β3)^T, and the third by t2 = (β2, β3, β4)^T. We also introduce the vector t3 = (β3, β4, β5)^T.

STEP 2: The next step is to find a vector c = (c0, c1, c2)^T such that Hc = t3, i.e.,

| β0  β1  β2 | | c0 |   | β3 |
| β1  β2  β3 | | c1 | = | β4 |        (**)
| β2  β3  β4 | | c2 |   | β5 |

Since Det[H] = (b − a)^9 / 2160 (≠ 0, since a < b), H is invertible. If J = H^(−1), then c = J·t3, and we can write (**) in terms of column vectors as

c0·t0 + c1·t1 + c2·t2 = t3,

where c0 = (1/20)(a + b)(a^2 + 8ab + b^2), c1 = −(3/5)(a^2 + 3ab + b^2), and c2 = 3(a + b)/2.
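These formulas for c0, c1, c2 can be checked directly in Mathematica; the following sketch keeps a and b symbolic:

beta[j_] := (b^(j + 1) - a^(j + 1))/(j + 1);   (* the moments βj *)
H = Table[beta[i + j], {i, 0, 2}, {j, 0, 2}];  (* the Hankel matrix *)
t3 = Table[beta[3 + i], {i, 0, 2}];
Simplify[LinearSolve[H, t3]]                   (* returns {c0, c1, c2} as above *)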

Consider the polynomial

f(t) = t^3 − (c0 + c1·t + c2·t^2).        (***)

In Mathematica, we can find the roots of f(t) = 0 by using Solve[f(t) == 0, t] or NSolve[f(t) == 0, t]. It turns out that f(t) = 0 has exactly 3 distinct roots:

x0 = (a + b)/2,

x1 = (a + b)/2 − (√15/10)(b − a), and

x2 = (a + b)/2 + (√15/10)(b − a).

We can show that x0, x1, x2 are in [a,b] as follows. Since x0 = (a + b)/2 is the average of a and b, it must lie between a and b.

For x1, note that √(a^2 − 2ab + b^2) = √((b − a)^2) = b − a, since b > a. Therefore

x1 − a = (1/10)(−5a + 5b − √15·√(a^2 − 2ab + b^2)) = (1/10)(5 − √15)(b − a) > 0 and

b − x1 = (1/10)(−5a + 5b + √15·√(a^2 − 2ab + b^2)) = (1/10)(5 + √15)(b − a) > 0.

So x1 ∈ [a,b], and similarly for x2.

STEP 3: The next step is to define the Vandermonde matrix, which is the following (n + 1) × (n + 1) matrix:

V =
| 1     1     1    |
| x0    x1    x2   |
| x0^2  x1^2  x2^2 |

Now Det V = (x1 − x0)(x2 − x0)(x2 − x1). Since √(a^2 − 2ab + b^2) = √((b − a)^2) = b − a > 0, the three points are distinct, so Det V ≠ 0 and therefore V is invertible.

Also, let w = (w0, w1, w2)^T and t0 = (β0, β1, β2)^T.

We next solve the equation Vw = t0; this equation is equivalent to the first three equations of the system (*).

Since V is invertible, w = V^(−1)·t0 ≡ (w0, w1, w2)^T, and each weight turns out to be a positive number (since a < b): explicitly, w0 = 4(b − a)/9 and w1 = w2 = 5(b − a)/18. So these weights and points automatically solve the first n + 1 (= 3) equations of (*) above.

Now we look at the last three equations of (*):

β3 = w0·x0^3 + w1·x1^3 + w2·x2^3

β4 = w0·x0^4 + w1·x1^4 + w2·x2^4

β5 = w0·x0^5 + w1·x1^5 + w2·x2^5

From (**), we have that

β3 = c0·β0 + c1·β1 + c2·β2

= c0[w0 + w1 + w2] + c1[w0·x0 + w1·x1 + w2·x2] + c2[w0·x0^2 + w1·x1^2 + w2·x2^2] (from the first three equations of (*))

= (c0·w0 + c1·w0·x0 + c2·w0·x0^2) + (c0·w1 + c1·w1·x1 + c2·w1·x1^2) + (c0·w2 + c1·w2·x2 + c2·w2·x2^2)

= w0(c0 + c1·x0 + c2·x0^2) + w1(c0 + c1·x1 + c2·x1^2) + w2(c0 + c1·x2 + c2·x2^2)

= w0·x0^3 + w1·x1^3 + w2·x2^3 (because f(xi) = 0, i = 0, 1, 2).

This completes the proof of the fourth equation in (*). The same procedure proves the last two equations in (*):

β4 = c0·β1 + c1·β2 + c2·β3 (from (**))

= c0[w0·x0 + w1·x1 + w2·x2] + c1[w0·x0^2 + w1·x1^2 + w2·x2^2] + c2[w0·x0^3 + w1·x1^3 + w2·x2^3] (from equations 2, 3, 4 of (*))

= (c0·w0·x0 + c0·w1·x1 + c0·w2·x2) + (c1·w0·x0^2 + c1·w1·x1^2 + c1·w2·x2^2) + (c2·w0·x0^3 + c2·w1·x1^3 + c2·w2·x2^3)

= w0(c0·x0 + c1·x0^2 + c2·x0^3) + w1(c0·x1 + c1·x1^2 + c2·x1^3) + w2(c0·x2 + c1·x2^2 + c2·x2^3)

= w0·x0^4 + w1·x1^4 + w2·x2^4 (because (***) implies that x0, x1, x2 satisfy t^4 = c0·t + c1·t^2 + c2·t^3).

Likewise,

β5 = c0·β2 + c1·β3 + c2·β4 (from (**))

= c0[w0·x0^2 + w1·x1^2 + w2·x2^2] + c1[w0·x0^3 + w1·x1^3 + w2·x2^3] + c2[w0·x0^4 + w1·x1^4 + w2·x2^4] (from equations 3, 4, 5 of (*))

= (c0·w0·x0^2 + c0·w1·x1^2 + c0·w2·x2^2) + (c1·w0·x0^3 + c1·w1·x1^3 + c1·w2·x2^3) + (c2·w0·x0^4 + c2·w1·x1^4 + c2·w2·x2^4)

= w0(c0·x0^2 + c1·x0^3 + c2·x0^4) + w1(c0·x1^2 + c1·x1^3 + c2·x1^4) + w2(c0·x2^2 + c1·x2^3 + c2·x2^4)

= w0·x0^5 + w1·x1^5 + w2·x2^5 (because (***) implies that x0, x1, x2 satisfy t^5 = c0·t^2 + c1·t^3 + c2·t^4).
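As a check on Steps 1-3, the following sketch verifies on [a,b] = [0,1] that the points and weights found above reproduce βj for j = 0, …, 5:

aa = 0; bb1 = 1;   (* our own variable names, chosen to avoid clashing with symbols used elsewhere *)
x0 = (aa + bb1)/2;
x1 = x0 - Sqrt[15]/10 (bb1 - aa); x2 = x0 + Sqrt[15]/10 (bb1 - aa);
{w0, w1, w2} = {4 (bb1 - aa)/9, 5 (bb1 - aa)/18, 5 (bb1 - aa)/18};
Table[Simplify[w0 x0^j + w1 x1^j + w2 x2^j - (bb1^(j + 1) - aa^(j + 1))/(j + 1)], {j, 0, 5}]
(* returns {0, 0, 0, 0, 0, 0} *)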

This is the program we adopted, created by R. Curto and L. Fialkow:

(*Compute the densities and nodes
for Gaussian quadrature
of degree (precision) 2n+1 over [a,b], where a < b as shown above;
generates n+1 nodes and densities.
Based on R. Curto and L. Fialkow
[Houston J. Math. 17(1991), 603-635],
MR 93a:47016*)

Gaussian[n_,a_,b_]:=(
For[i=0,i≤2n+1,i++,
bb[i]=(b^(i+1)-a^(i+1))/(i+1)];            (* the moments β0, …, β(2n+1) *)
For[i=0,i≤n,i=i+1,
For[j=0,j≤n,j=j+1,
Mtab[i,j]=bb[i+j]]];
H=Table[Mtab[r,s],{r,0,n},{s,0,n}];        (* H is the Hankel matrix *)
J=N[Inverse[H],100];                       (* 100 digits of precision *)
For[i=0,i≤n,i=i+1,
v[i]=bb[n+1+i]];
w=Table[v[i],{i,0,n}];
z=J.w;                                     (* z is the vector c = J.t3 from the proof above *)
p=0;
For[i=0,i≤n,i=i+1,
p=p+z[[i+1]] t^i];
q=t^(n+1)-p;                               (* this is f(t) in (***) *)
roots=Solve[q==0,t];                       (* q = 0 has n+1 distinct roots *)
For[i=1,i≤n+1,i=i+1,
x[i-1]=roots[[i]][[1]][[2]]];              (* the n+1 points for Gaussian Quadrature *)
For[i=0,i≤n,i=i+1,
For[j=0,j≤n,j=j+1,
TTab[i,j]=x[j]^i]];
V=Table[TTab[rr,ss],{rr,0,n},{ss,0,n}];    (* V is the Vandermonde matrix *)
For[i=0,i≤n,i=i+1,
vvv[i]=bb[i]];
w=Table[vvv[i],{i,0,n}];
weights=N[Inverse[V].w,100];)              (* the weights for Gaussian Quadrature, to 100 digits *)

(*Gaussian quadrature estimate of
degree 2n+1 for integral of
predefined function f[x]
over [a,b]; uses results of
previous call to Gaussian*)

G[j_]:=( (*j is ignored*)
ss=0;
For[i=0,i≤n,i++,
ss=ss+weights[[i+1]] f[x[i]]];             (* this is the calculation of Q(f) *)
ss=N[ss]);
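A hypothetical session using these definitions (the integrand f must be defined before calling G, and the argument of G is ignored):

f[x_] := Sin[x];
Gaussian[5, 0, 4];   (* points and weights for n = 5 on [0, 4] *)
G[0]                 (* ≈ 1.65364; compare NIntegrate[Sin[x], {x, 0, 4}] *)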

By default, Mathematica displays results to about 6 decimal places of precision, which gave us inaccurate results in some of our calculations, especially when computing the inverses of the matrices H and V. For this reason, we increased the precision to 100 decimal places, and then the program performed better.

Creation Of Tables:

In the following experiments, we compared Gaussian Quadrature to Simpson's Rule for several functions, each time comparing our results to the value produced by Mathematica's NIntegrate, which we took as the exact value. We also measured the accuracy of our results by calculating the absolute error as well as the percent relative error. The absolute error in any estimation is the difference between the measured value of a quantity, say x0, and its actual value x, given by Δx ≡ |x0 − x|. The relative error is often a more useful way to determine the accuracy of a measurement; we calculated the percent relative error and recorded the values in our tables. The relative error is the quotient of the absolute error and the actual value, i.e., δx = Δx / |x|. In our case we used |Q(f) − NIntegrate[f]| for the absolute error, and then computed the percent relative error as

|Q(f) − NIntegrate[f]| / |NIntegrate[f]| × 100.

We then replaced Q(f) with S(f) so that we could determine the accuracy achieved by Simpson's Rule as well.
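In Mathematica these two error measures can be packaged as small helper functions (the names below are ours):

absErr[est_, exact_] := Abs[est - exact]                     (* absolute error *)
relErrPct[est_, exact_] := 100 Abs[est - exact]/Abs[exact]   (* percent relative error *)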

Experiments In 1-Dimensional Numerical Integration:

The functions we worked on are as follows:

(1) f(x) = 3x^6 + 5x^3 − 2x + 1, a = 1, b = 2.

a. For this polynomial of degree 6 ≤ 2(3) + 1 = 7, we expect Gaussian to be exact for n = 3.

[graph omitted]

Relative Error (%)

|n |NIntegrate |Gaussian |Simpson’s |Gaussian |Simpson’s |

|1 |71.1786 |70.6111 |72.0310 |0.797290 |1.197550 |

|2 | |71.1775 |71.2390 |0.001545 |0.076290 |

|3 | |71.1786 |71.1890 |0 |0.015030 |

|4 | |71.1786 |71.1820 |0 |0.004780 |

|5 | |71.1786 |71.1800 |0 |0.001970 |

|10 | |71.1786 |71.1789 |0 |0.000843 |

For this interval we can see that Gaussian is first exact when n = 3; Simpson was not exact until n = 11. Also, the Gaussian relative error dropped under 1% already at n = 1, while Simpson's did so at n = 2.

b. We changed the interval, making a = -8, and b = 0, producing this graph:

[graph omitted]

Relative Error (%)

|n |NIntegrate |Gaussian |Simpson’s |Gaussian |Simpson’s |

|1 |893731 |752257 |1109060 |15.8296 |24.0933 |

|2 | |891484 |909384 |0.25142 |1.75140 |

|3 | |893731 |896904 |0 |0.35503 |

|4 | |893731 |894744 |0 |0.11335 |

|5 | |893731 |894148 |0 |0.04666 |

|10 | |893731 |893758 |0 |0.00302 |

For this interval, we can see that Gaussian is first exact when n = 3. Simpson was not exact until n = 44. Simpson’s relative error reached under 1% first at n=3. In general, increasing the interval size causes Simpson’s Rule to need a larger value of n for the same accuracy.

c. We tried yet another interval, this time a = -10 and b = 10, resulting in this:

[graph omitted]

Relative Error (%)

|n |NIntegrate |Gaussian |Simpson’s |Gaussian |Simpson’s |

|1 |8.57145*10^6 |2.22224*10^6 |2*10^7 |74.07393 |133.333 |

|2 | |Error |1.0625*10^7 |- |23.9580 |

|3 | |8.57145*10^6 |9.02608*10^6 |0 |5.30400 |

|4 | |Error |8.72072*10^6 |- |1.74148 |

|5 | |8.571415*10^6 |8.63362*10^6 |0 |0.72531 |

|10 | |Error |8.57542*10^6 |- |0.04632 |

For Gaussian we can see that there was an error for every even n because of the indeterminate expression 0^0. The Gaussian points are distributed symmetrically about the midpoint (a+b)/2, so when n is even, the midpoint is itself a Gaussian point. For a = −10 and b = 10, the midpoint is 0. When the program computes x[j]^i for x[j] = 0 and i = 0, it is trying to compute 0^0, which Mathematica cannot evaluate. We could have modified the code to cover this special case. Gaussian was exact at n = 3 and Simpson at n = 40. The large interval length causes the delay in Simpson.

The next function we experimented with was

(2) f(x) = (3x^2 + 4) / (2x^2 − 1),

which is defined on the three intervals (−∞, −√(1/2)), (−√(1/2), √(1/2)), and (√(1/2), ∞).

[graph omitted: f(x) = (3x^2 + 4)/(2x^2 − 1), with vertical asymptotes at x = ±√(1/2)]

f(x) is undefined when 2x^2 − 1 = 0. This occurs when 2x^2 = 1, or x^2 = 1/2, or x = ±√(1/2), so x cannot equal ±√(1/2) ≈ ±0.707.

a. We begin with a = 2, b = 4, resulting in this graph:

[graph omitted]

Relative Error (%)

|n |NIntegrate |Gaussian |Simpson’s |Gaussian |Simpson’s |

|1 |56.5398 |56.5351 |56.5472 |0.008313 |0.01309 |

|2 | |56.5395 |56.5405 |0.000531 |0.00124 |

|3 | |56.5398 |56.54 |0 |0.00036 |

|4 | |56.5398 |56.5398 |0 |0 |

|5 | |56.5398 |56.5398 |0 |0 |

|10 | |56.5398 |56.5398 |0 |0 |

Here, it was observed that Gaussian reached the exact value at n = 3 and Simpson at n = 4. Both were under 1% at n = 1. This reflects the fact that the curve is fairly smooth on this interval.

b. We tried this interval, a = -0.7 and b = 0.7, resulting in the following graph:

[graph omitted]

Relative Error (%)

|n |NIntegrate |Gaussian |Simpson’s |Gaussian |Simpson’s |

|1 |14.2714 |-7.63083 |-96.3807 |46.53061 |575.942 |

|2 | |Error |-51.8588 |- |263.376 |

|3 | |-10.5608 |-37.4755 |26.00025 |162.592 |

|4 | |Error |-30.4984 |- |113.703 |

|5 | |-12.1113 |-26.4349 |15.13587 |85.2299 |

|10 | |Error |-18.8630 | |32.1734 |

For the Gaussian, it took n = 19 to reach the exact value, and for every even n there was an error caused by the indeterminate expression 0^0 (see the comment in (1c) above). Simpson reached the exact value at n = 662. Both methods find it difficult to handle a function that bends sharply in the interval.

(3) f(x) = Sin(x)

We tried first the interval a = 0 and b = 4.

[graph omitted]

Relative Error (%)

|n |NIntegrate |Gaussian |Simpson’s |Gaussian |Simpson’s |

|1 |1.65364 |1.47012 |1.92026 |11.09794 |16.1232 |

|2 | |1.66018 |1.66405 |0.395490 |0.62952 |

|3 | |1.65352 |1.65560 |0.007257 |0.11853 |

|4 | |1.65365 |1.65424 |0.000600 |0.03628 |

|5 | |1.65364 |1.65388 |0 |0.01451 |

|10 | |1.65364 |1.65366 |0 |0.00121 |

Gaussian was exact at n = 5 and Simpson at n = 20. Both Gaussian and Simpson reached under 1% at n = 2. Although there is bending, it is not as severe as in Ex. 2b, so convergence is moderate, neither fast nor very slow.

b. With this third function, we tried another interval, where a = 5 and b = 9.

[graph omitted]

Relative Error (%)

|n |NIntegrate |Gaussian |Simpson |Gaussian |Simpson |

|1 |1.19479 |1.06220 |1.38743 |11.09735 |16.1233 |

|2 | |1.19951 |1.20231 |0.395050 |0.62940 |

|3 | |1.19470 |1.19618 |0.007533 |0.11634 |

|4 | |1.19480 |1.19522 |0.000840 |0.03599 |

|5 | |1.19479 |1.19497 |0 |0.01507 |

|10 | |1.19479 |1.19480 |0 |0.00084 |

Gaussian was exact at n = 5 and Simpson at n = 15. Both their relative errors were under 1% at n = 2. The only difference here from the previous example is that the interval shifted, but since Sin x is periodic there was no major change in the performance of the methods.

c. The last interval we tried for this function was a = −2 and b = 1, producing the following graph:

[graph omitted]

Relative Error (%)

|n |NIntegrate |Gaussian |Simpson |Gaussian |Simpson |

|1 |-0.956449 |-0.931801 |-0.992764 |2.577032 |3.79686 |

|2 | |-0.956935 |-0.958250 |0.050810 |0.18830 |

|3 | |-0.956444 |-0.956791 |0.000523 |0.03576 |

|4 | |-0.956449 |-0.956556 |0 |0.01119 |

|5 | |-0.956449 |-0.956493 |0 |0.00460 |

|10 | |-0.956449 |-0.956452 |0 |0.00031 |

Gaussian was exact at n = 4 and Simpson at n = 17. Both their relative errors were under 1% at n = 2. Overall, for this function, the relatively slow bending and small changes in the function values cause both methods to work well even over large intervals.

(4) f(x) = Sin(5x)

The first interval we considered was a = -4 and b = 5 producing the following graph:

[graph omitted]

Relative Error (%)

|n |NIntegrate |Gaussian |Simpson |Gaussian |Simpson |

|1 |-0.116624 |4.909280 |2.022890 |4309.494 |1834.54 |

|2 | |2.840070 |1.017510 |2535.236 |972.4705 |

|3 | |2.333700 |-0.729409 |2101.046 |525.4360 |

|4 | |3.892490 |0.997792 |3437.641 |955.5632 |

|5 | |-0.262303 |0.320191 |124.9130 |374.5498 |

|10 | |2.157990 |-0.154216 |1950.382 |32.23350 |

Due to the rapid fluctuations in the graph, we can expect both methods to be slow. We kept on trying larger and larger values of n. Gaussian was not exact until n = 20. Simpson on the other hand reached the exact value at n = 150.

b. The second interval we tried for this function is a = 0 and b = 1, producing the following graph:

[graph omitted]

Relative Error (%)

|n |NIntegrate |Gaussian |Simpson |Gaussian |Simpson |

|1 |0.143268 |0.0760515 |0.239161 |46.91662 |66.9326 |

|2 | |0.1470920 |0.145643 |2.669120 |1.65773 |

|3 | |0.1431550 |0.143686 |0.078873 |0.29176 |

|4 | |0.1432700 |0.143395 |0.001400 |0.08865 |

|5 | |0.1432680 |0.143319 |0 |0.03560 |

|10 | |0.1432680 |0.143271 |0 |0.00209 |

On this shorter interval, with fewer oscillations, we expect faster convergence. Gaussian reached the exact value at n = 5 and Simpson at n = 14. Both reached under 1% at n = 3.

c. The third interval we tried was a = -1 and b = -0.5, producing the following graph:

[graph omitted]

Relative Error (%)

|n |NIntegrate |Gaussian |Simpson |Gaussian |Simpson |

|1 |0.216961 |0.214533 |0.220558 |1.119095 |1.65790 |

|2 | |0.216994 |0.217154 |0.01521 |0.08896 |

|3 | |0.216961 |0.216998 |0 |0.01705 |

|4 | |0.216961 |0.216973 |0 |0.00553 |

|5 | |0.216961 |0.216966 |0 |0.00230 |

|10 | |0.216961 |0.216962 |0 |0 |

Here the graph is even smoother than before because the interval is much smaller. Gaussian was exact at n = 3 and Simpson at n = 10. Both were under 1% at n = 2.

(5) f(x) = Sqrt(x)

We started this function by considering the interval a = 10 and b = 20, producing the following graph:

[graph omitted]

Relative Error (%)

|N |NIntegrate |Gaussian |Simpson |Gaussian |Simpson |

|1 |38.5466 |38.5484 |38.5439 |0.00467 |0.007005 |

|2 | |38.5467 |38.5464 |0.00026 |0.000519 |

|3 | |38.5466 |38.5466 |0 |0 |

|4 | |38.5466 |38.5466 |0 |0 |

|5 | |38.5466 |38.5466 |0 |0 |

|10 | |38.5466 |38.5466 |0 |0 |

On this interval the graph grows slowly, since the derivative (1/2)x^(−1/2) is small; as x gets larger and larger, x^(−1/2) approaches zero. So we expect rapid convergence. Here both Gaussian and Simpson reached the exact value at n = 3, and both were under 1% at n = 1.

b. The next interval considered, was a = 0, b = 10, producing the following graph:

[graph omitted]

Relative Error (%)

|N |NIntegrate |Gaussian |Simpson |Gaussian |Simpson |

|1 |21.0819 |21.3102 |20.1776 |1.08292 |4.289462 |

|2 | |21.1613 |20.7612 |0.37663 |1.521210 |

|3 | |21.1186 |20.9072 |0.17408 |0.828673 |

|4 | |21.1018 |20.9684 |0.09439 |0.538377 |

|5 | |21.0939 |21.0007 |0.05692 |0.385165 |

|10 | |21.0840 |21.0531 |- |0.136610 |

Here the interval includes x = 0, where there is a vertical tangent and very rapid growth in the function values, so we expect slower convergence. For Gaussian, the exact value was reached with n = 57. For Simpson, it did not reach the exact value until n = 10000.

c. The last interval considered, was a = 0 and b = 1, producing the following graph:

[graph omitted]

Relative Error (%)

|n |NIntegrate |Gaussian |Simpson |Gaussian |Simpson |

|1 |0.666667 |0.673887 |0.638071 |1.08300 |4.289398 |

|2 | |0.669180 |0.656526 |0.37695 |1.521149 |

|3 | |0.667828 |0.661144 |0.17415 |0.828450 |

|4 | |0.667297 |0.663079 |0.09450 |0.538200 |

|5 | |0.667046 |0.664100 |0.05685 |0.385050 |

|10 | |0.666750 |0.665759 |0.01245 |0.136200 |

Although the interval is now much smaller, we still have the region of rapid change, near x = 0, so we cannot expect rapid convergence. For Gaussian, it did not reach the exact value until n = 49, while in the case of Simpson it reached the exact value when n = 850.

Conclusion for the experiments comparing Gaussian Quadrature and Simpson's Rule.

In conclusion, Gaussian always produced the exact value at a smaller n than Simpson. We can also see that both Gaussian and Simpson behave differently for different functions and intervals. Both methods slow down where the graph bends sharply, but Gaussian still reached the exact value much faster than Simpson. For example, Gaussian reached the exact value faster in Ex. 3a than in Ex. 2b: in Ex. 2b, as x moves over the short interval from 0.6 to 0.65, y drops from about −20 to −140, whereas in Ex. 3a, as x moves from 1.6 to 4.6, y only changes from −1 to 1. In general, the more rapidly a function changes over the interval of integration, the larger the value of n both methods need. Therefore the n at which Gaussian and Simpson first become exact depends on the function and on the interval.

2-Dimensional Iterated Gaussian Quadrature Over A Rectangular Region.

In this section we consider two-dimensional iterated Gaussian Quadrature. Recall that the one-dimensional Gaussian Quadrature rule, with n + 1 points xi in [a,b] and weights wi > 0, is given by Q(f) = w0·f(x0) + w1·f(x1) + … + wn·f(xn), and is exact if f is any polynomial with deg f ≤ 2n + 1, i.e., Q(f) = ∫[a,b] f(x) dx. Now consider a function f(x,y) defined on a rectangular region R = [a,b] × [c,d].

Example: If f(x,y) = 3x^3y^2 + 4xy − 7x^3y + 2, then the total degree of f is 5. Also, the highest power of x in f is 3, so degx f = 3, and the highest power of y in f is 2, so degy f = 2.

From Calculus, we know that

V ≡ ∫∫_R f(x,y) dx dy = ∫[c,d] (∫[a,b] f(x,y) dx) dy = ∫[c,d] F(y) dy, where F(y) = ∫[a,b] f(x,y) dx.

In order to approximate this integral with 2-dimensional iterated Gaussian Quadrature, we apply Gaussian Quadrature first to the interval [a,b], and get n + 1 points x0, x1, …, xn in [a,b] and positive weights w0, w1, …, wn so that if g(x) is a polynomial of degree ≤ 2n + 1, then ∫[a,b] g(x) dx = Σ_{0≤i≤n} wi·g(xi). We then apply Gaussian Quadrature to the interval [c,d], getting n + 1 points y0, …, yn in [c,d] and positive weights v0, …, vn so that if h(y) is any polynomial of degree ≤ 2n + 1, then ∫[c,d] h(y) dy = Σ_{0≤j≤n} vj·h(yj). We now consider the (n + 1)^2 grid points

(x0, y0), (x0, y1), …, (x0, yn),

(x1, y0), (x1, y1), …, (x1, yn),

…

(xn, y0), (xn, y1), …, (xn, yn).

For 0 ≤ i, j ≤ n, we let uij = wi·vj > 0. We now define iterated Gaussian Quadrature Q as follows: for a function f(x, y) defined on the rectangle [a,b] × [c,d],

Q(f) = Σ_{0≤j≤n} Σ_{0≤i≤n} uij·f(xi, yj).

We will show that if f(x,y) is a polynomial with degx f ≤ 2n+1 and degy f ≤ 2n+1, then Q(f) = ∫∫_R f(x,y) dx dy. To see this, fix j and let gj(x) = f(x, yj); then deg gj = degx f ≤ 2n+1, so by Gaussian Quadrature on [a,b] applied to gj(x),

Σ_{0≤i≤n} wi·f(xi, yj) = Σ_{0≤i≤n} wi·gj(xi) = ∫[a,b] gj(x) dx = F(yj),

where F(y) = ∫[a,b] f(x,y) dx is a polynomial in y of degree ≤ 2n+1. (For example, if f(x,y) = 3x^3y^2 + 4xy − 7x^3y + 2, then F(y) = (3/4)(b^4 − a^4)y^2 + 2(b^2 − a^2)y − (7/4)(b^4 − a^4)y + 2(b − a), a polynomial in y of degree 2.) Therefore

Q(f) = Σ_{0≤j≤n} vj·[Σ_{0≤i≤n} wi·f(xi, yj)]

= Σ_{0≤j≤n} vj·F(yj)

= ∫[c,d] F(y) dy (by Gaussian Quadrature on [c,d] applied to F(y), since degy F ≤ 2n+1)

= ∫∫_R f(x,y) dx dy.

Therefore Q(f) = ∫∫_R f(x,y) dx dy.

For an arbitrary continuous function f on R, we can approximate ∫∫_R f(x,y) dx dy ≈ Q(f).

Here is a picture illustrating the grid of points for n = 2:

[figure omitted: the 3 × 3 grid of iterated Gaussian points in [a,b] × [c,d]]

We employed the program for one-dimensional Gaussian Quadrature together with the program below in order to implement 2-dimensional iterated Gaussian Quadrature over a rectangular region. The addition to the program is as follows:

(*Iterated Gaussian quadrature estimate of
degree 2n+1 for integral of predefined
function f[x,y] over rectangle [a,b] x [c,d];
exact if f is a polynomial whose degrees in x and y
are each ≤ 2n+1; the value is stored in II*)

G2dim[n_,a_,b_,c_,d_]:=(
Gaussian[n,a,b];                           (* Gaussian Quadrature in the x-direction for [a,b] *)
For[i=0,i≤n,i++,
xx[i]=x[i];
xweight[i]=weights[[i+1]]];
Gaussian[n,c,d];                           (* Gaussian Quadrature in the y-direction for [c,d] *)
For[i=0,i≤n,i++,
yy[i]=x[i];
yweight[i]=weights[[i+1]]];
II=0;
For[j=0,j≤n,j++,
For[i=0,i≤n,i++,
II=II+xweight[j] yweight[i] f[xx[j],yy[i]]]];)  (* the 2-D iterated Gaussian Quadrature sum *)
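A hypothetical session with G2dim, assuming the one-dimensional Gaussian program above has already been read in:

f[x_, y_] := Cos[x y];
G2dim[5, 1, 2, 0, 3];  (* n = 5 on the rectangle [1,2] x [0,3] *)
II                     (* ≈ -0.423965; compare Experiment (3a) below *)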

Below are the various experiments we carried out with iterated 2-dimensional Gaussian Quadrature.

Experiments With 2-D Gaussian Quadrature Over A Rectangular Region:

(1) f(x,y) := 6x^7y^4 − 2xy^7 + 3x^3y^3

a. For polynomials, we will verify that the program does give exact results if degx f ≤ 2n+1 and degy f ≤ 2n+1. The first rectangle we tried for this function is a = 1, b = 2, c = 0 and d = 3, producing the following graph:

[graph omitted]

|N |NIntegrate |Gaussian 2D |Relative Error (%) |

|0 |7062.19 |1505.7 |78.67942 |

|1 | |7205.34 |2.02699 |

|2 | |7085.7 |0.3329 |

|3 | |7062.19 |0 |

In this polynomial function we have that 7 is the highest degree for both x and y separately. The Gaussian was exact at n = 3, and since 7 = 2(3) + 1, this is what we expected. The relative error reached under 1% at n = 2.

b. The other rectangle we tried for this function is a = 0, b = 5, c = 1, and d = 6, producing the following graph:

[graph omitted]

|N |NIntegrate |Gaussian 2D |Relative Error (%) |

|0 |4.50469*10^8 |1.29846*10^7 |97.11754 |

|1 | |3.37665*10^8 |25.04146 |

|2 | |4.45931*10^8 |1.007395 |

|3 | |4.50469*10^8 |0 |

Here again we can see that Gaussian was exact at n = 3, because the highest power of both x and y is 7.

We then tried a second polynomial function

(2) f(x,y) = 6x^9y^9 − 5x^3y^4 + 3

a. For this function we tried a = 1, b = 2, c = 0 and d = 3, producing the following graph:

[graph omitted]

|N |NIntegrate |Gaussian 2D |Relative Error (%) |

|0 |3.62353*10^6 |2.63548*10^4 |99.27268 |

|1 | |2.01746*10^6 |44.32335 |

|2 | |3.45919*10^6 |4.535356 |

|3 | |3.61982*10^6 |0.102386 |

|4 | |3.62353*10^6 |0 |

In this polynomial function, 9 is the highest power of both x and y separately. The Gaussian was exact at n = 4, which is correct since degx f = degy f = 9 = 2(4) + 1. The relative error reached under 1% at n = 3.

b. The second rectangle we tried for this function is a = 0, b = 5, c = 1 and d = 6, with the following results:

|N |NIntegrate |Gaussian 2D |Relative Error (%) |

|0 |3.54294*10^13 |4.5098*10^10 |99.87271 |

|1 | |1.52483*10^13 |56.96145 |

|2 | |3.32419*10^13 |6.174251 |

|3 | |3.53851*10^13 |0.125037 |

|4 | |3.54294*10^13 |0 |

Again we can see that Gaussian was exact at n = 4 because the highest degree for both x and y is 9. The relative error reached under 1% at n = 3.

The next function we tried was

(3) f(x,y) = Cos(xy)

a. The first rectangle we tried for this function is a = 1, b = 2, c = 0 and d = 3, producing the following graph:

[graph omitted]

|N |NIntegrate |Gaussian 2D |Relative Error (%) |

|0 |-0.423965 |-1.88452 |344.423 |

|1 | |-0.211464 |50.1223 |

|2 | |-0.437171 |3.11488 |

|3 | |-0.423508 |0.107792 |

|4 | |-0.423975 |0.00236 |

|5 | |-0.423965 |0 |

First, we started with a small rectangle region where a = 1, b = 2, c = 0 and d = 3 so that we can see the graph clearly. According to the table, we can see that Gaussian is exact (to six places) at n = 5 and the relative error reached under 1% at n = 3.

b. The other rectangle we tried for this function is a = 0, b = 5, c = 1, and d = 6, with the following results:

|N |NIntegrate |Gaussian 2D |Relative Error (%) |

|0 |0.0168253 |-19.5211 |116121.7 |

|1 | |2.93031 |17316 |

|2 | |-6.86194 |40883.46 |

|6 | |0.182556 |985.009 |

|10 | |0.0162729 |3.283151 |

|15 | |0.0168253 |0 |

Here we have a larger rectangle, which also includes most of the first rectangle. According to the table above, Gaussian was not exact to six places until n = 15. The reason is that the graph changes rapidly over this rectangle, whereas in the previous example the graph changed slowly on the rectangle we chose there.

We went ahead and tried a third polynomial in which the degrees of x and y separately were different.

(4) f(x,y) = 8x^5y^9 − 7x^8y^3 + 3xy^2 − 5y

a. The first rectangle we tried for this function is a = −5, b = −10, c = 0 and d = 6, producing the following graph:

[graph omitted]

|N |NIntegrate |Gaussian 2D |Relative Error (%) |

|0 |8.18769*10^12 |1.68865*10^11 |97.93757 |

|1 | |4.90829*10^12 |40.05281 |

|2 | |7.83542*10^12 |3.936031 |

|3 | |8.1796*10^12 |0.098807 |

|4 | |8.18769*10^12 |0 |

In this polynomial function we have degx f = 8 and degy f = 9. The Gaussian was exact at n = 4, which is correct since

degx f = 8 ≤ 2(4) + 1 = 9 and

degy f = 9 ≤ 2(4) + 1 = 9.

The relative error reached under 1% at n = 3.

b. The other rectangle we tried for this function is a = 5, b = 15, c = 2, and d = 10, producing the following graph:

[graph omitted]

|N |NIntegrate |Gaussian 2D |Relative Error (%) |

|0 |1.5092*10^16 |6.32877*10^14 |95.80654 |

|1 | |1.12299*10^16 |25.59030 |

|2 | |1.48693*10^16 |1.475616 |

|3 | |1.50895*10^16 |0.016565 |

|4 | |1.5092*10^16 |0 |

Again we can see that Gaussian was exact at n = 4, because the highest powers of x and y are 8 and 9 respectively, both of which are ≤ 2n + 1 for n = 4. The relative error reached under 1% at n = 3.

(5) f(x,y) = √(x^3y^3)

a. The first rectangle we tried for this function is a = 4, b = 8, c = 10 and d = 12, producing the following graph:

[graph omitted]

|N |NIntegrate |Gaussian 2D |Relative Error (%) |

|0 |4353.82 |4289.49 |1.477553 |

|1 | |4353.7 |0.002756 |

|2 | |4353.82 |0 |

Gaussian reached the exact value at n = 2 and was under 1% at n = 1.

b. The other rectangle we tried for this function is a = 20, b = 30, c = 40 and d = 50, producing the following graph:

[graph omitted]

|N |NIntegrate |Gaussian 2D |Relative Error (%) |

|0 |3.79812*10^6 |3.77336*10^6 |0.651901 |

|1 | |3.7981*10^6 |0.000527 |

|2 | |3.79812*10^6 |0 |

Here we can see that Gaussian was exact at n = 2, just as on the previous rectangle. The only difference is that it reached under 1% at n = 0.

(6) f(x,y) = (x^7 + y^7) / (x + y)^3

a. By observing this function, we can see that x and y cannot have the same magnitude and opposite signs (i.e., y = −x), or else the function is undefined; this can occur in quadrants II and IV. The first rectangle we tried for this function is a = 1, b = 10, c = 1 and d = 10, producing the following graph:

[graph omitted]

|N |NIntegrate |Gaussian 2D |Relative Error (%) |

|0 |1.12549*10^8 |1.2341*10^7 |98.9035 |

|1 | |9.26131*10^7 |91.77131 |

|2 | |1.11957*10^8 |0.525993 |

|3 | |1.12549*10^8 |0 |

Gaussian was exact to 6 places at n = 3 and was under 1% at n = 2.

b. The other rectangle we tried for this function is a = -1, b = -10, c = -1 and d = -10, producing the following graph:

[graph omitted]

|N |NIntegrate |Gaussian 2D |Relative Error (%) |

|0 |-1.12451*10^8 |-1.23225*10^7 |89.04189 |

|1 | |-9.25214*10^7 |17.72292 |

|2 | |-1.11859*10^8 |0.526452 |

|3 | |-1.12451*10^8 |0 |

Again, we see the same result as in the previous example: Gaussian was exact at n = 3 and was also under 1% at n = 2.

Conclusion for the Experiments with 2-D Gaussian Quadrature over a Rectangular Region

In conclusion, we can see that for all polynomials with degx f ≤ 2n+1 and degy f ≤ 2n+1, the results came out exact to 6 decimal places. For experiment 3b, Gaussian was not exact to 6 places until n = 15, due to the rapid changes in the graph of Cos(xy) over the larger rectangle.

2-Dimensional Iterated Gaussian Quadrature Over A Non-Rectangular Region.

Now we consider two-dimensional iterated Gaussian Quadrature over a non-rectangular region. The region is bounded by the lines x = a and x = b and by two curves y = c(x) and y = d(x), where c(x) < d(x) for a ≤ x ≤ b. We begin as in the last section by applying Gaussian Quadrature in the x-direction over the interval [a,b], giving points x0, …, xn in [a,b] and positive weights w0, …, wn. For each xi we apply Gaussian Quadrature in the y-direction over the interval [c(xi), d(xi)], giving points yi0, …, yin and positive weights vi0, …, vin. Let f(x,y) be a continuous function on the region. The quadrature rule is then defined as

Q(f) ≡ Σ_{0≤i≤n} Σ_{0≤j≤n} wi·vij·f(xi, yij).

[figure omitted: the region between y = c(x) and y = d(x) for a ≤ x ≤ b]

(*Iterated Gaussian quadrature of
degree 2n+1 for integral of predefined
function f[x,y] over region between
predefined functions
cc(x) and dd(x), for a≤x≤b;
II is the estimate*)

G2dim[n_,a_,b_]:=(
Gaussian[n,a,b];
For[i=0,i≤n,i++,
xx[i]=x[i];                                (* Gaussian points x0, …, xn for [a,b] *)
xweight[i]=weights[[i+1]]];                (* weights w0, …, wn for [a,b] *)
For[ii=0,ii≤n,ii++,
Gaussian[n,cc[xx[ii]],dd[xx[ii]]];
For[jj=0,jj≤n,jj++,
Ytab[ii,jj]=x[jj];                         (* Gaussian points for [c(xi), d(xi)] *)
Wtab[ii,jj]=weights[[jj+1]]]];             (* and the corresponding weights vij *)
II=0;
For[iii=0,iii≤n,iii++,
Ii=0;
For[jjj=0,jjj≤n,jjj++,
Ii=Ii+Wtab[iii,jjj] f[xx[iii],Ytab[iii,jjj]]];  (* inner sum over j of vij f(xi, yij) *)
II=II+xweight[iii] Ii];                    (* outer sum over i, weighted by wi *)
)
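A hypothetical session for Experiment (1) below, the triangular region with vertices (0,0), (1,0), (0,1); the boundary functions cc, dd and the integrand f must be defined before G2dim is called:

cc[x_] := 0; dd[x_] := 1 - x;
f[x_, y_] := x + y;
G2dim[1, 0, 1];
II     (* ≈ 0.333333 = 1/3, matching the table below *)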

We can no longer expect the rule to be exact when p is a polynomial with degx p ≤ 2n + 1 and degy p ≤ 2n + 1, as the following example shows.

Example: n = 1, a = 0, b = 1, c(x) = 0, d(x) = x^2. Let p(x,y) = x^3y^3, so degx p ≤ 2n + 1 and degy p ≤ 2n + 1. When we compute the inner integral, we get

H(x) = ∫[c(x),d(x)] x^3·y^3 dy = x^3·(y^4/4) evaluated from y = 0 to y = x^2 = x^11/4,

and degx H = 11 is no longer ≤ 2n + 1. So we cannot expect the rule in the x-direction to integrate H(x) correctly.
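The inner integral in this counterexample can be confirmed in Mathematica:

Integrate[x^3 y^3, {y, 0, x^2}]   (* returns x^11/4 *)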

Experiments With 2-D Gaussian Quadrature Over A Non-Rectangular Region:

(1) f(x,y) := x + y

[graph omitted: the triangular region with vertices (0,0), (1,0), (0,1)]

For this polynomial we integrate over the region where a = 0, b = 1, c(x) = 0, and d(x) = 1 − x.

|N |NIntegrate |Gaussian 2-D NR |Relative Error (%) |

|0 |0.333333 |0.375 |12.5001 |

|1 | |0.333333 |0 |

Here, Gaussian is exact at n = 1.

(2) f(x,y) := x^3y

For this polynomial we integrate over the region where a = 0, b = 1, c(x) = 0, and d(x) = x.

|N |NIntegrate |Gaussian 2-D NR |Relative Error (%) |

|0 |0.0833333 |0.015625 |81.24999 |

|1 | |0.0763889 |8.333283 |

|2 | |0.0833333 |0 |

By the table we can see that Gaussian is exact at n = 2.

(3) f(x,y):= x + y

For this polynomial we integrate over the region where a = −1, b = 1, c(x) = −√(1 − x^2), and d(x) = √(1 − x^2).

|N |NIntegrate |Gaussian 2-D NR |Relative Error (%) |

|0 |1.19283*10^-17 |error |- |

|1 | |1.11022*10^-16 |≈ 0 |

|2 | |error |- |

|3 | |1.11022*10^-16 |≈ 0 |

|4 | |error |- |

|5 | |-2.35922*10^-16 |≈ 0 |

The exact value is 0, but NIntegrate gives only an approximation. The Gaussian values also differ slightly from the actual value due to round-off error. Gaussian gave an error for every even n because of the indeterminate expression 0^0.

(4) f(x,y):= 2x + 1

For this polynomial we integrate over the region where a = 0, b = 1, c(x) = −√(1 − (x−1)^2), and d(x) = √(1 − (x−1)^2).

|N |NIntegrate |Gaussian 2-D NR |Relative Error (%) |

|0 |9.42478 |error |- |

|1 | |9.79796 |3.95956 |

|3 | |9.48167 |0.60362 |

|11 | |9.42732 |0.02695 |

|31 | |9.42492 |0.00149 |

|61 | |9.42479 |0.0011 |

For every even n, there was an error because of the indeterminate expression 0^0. The first value of n that gave a nearly exact answer was n = 61. So the relative error was under 1% after 3 steps, but it takes more than 10 steps to get 2-place accuracy and more than 60 steps to get 6-place accuracy.

The Matrix Extension Method for Constructing Quadrature Rules with the Fewest Points.

Consider the following problem: given the unit square S ≡ [0,1] × [0,1] and m = 2n, we want to find the fewest points (x1,y1), …, (xk,yk) in S and weights w1, …, wk > 0 such that Q(p) := Σ_{1≤i≤k} wi·p(xi, yi) = ∫∫_S p(x,y) dx dy whenever p(x,y) is a polynomial with deg p ≤ m.

Ex: Let n = 1 and m = 2. Consider two polynomials of degree 2 of the form f ≡ a00 + a10·x + a01·y + a20·x^2 + a11·xy + a02·y^2 and g ≡ b00 + b10·x + b01·y + b20·x^2 + b11·xy + b02·y^2, and let f^ = (a00, a10, a01, a20, a11, a02)^T and g^ = (b00, b10, b01, b20, b11, b02)^T. Using 2-dimensional iterated Gaussian Quadrature on S, we can find a rule with 4 = (n+1)^2 points that is exact for any polynomial p with degx p ≤ 3 (= 2n+1) and degy p ≤ 3 (= 2n+1). We will instead find a rule of degree 2 precision with only 3 points, and this will be the fewest points possible.

Step 1:

With a given n, we will build a t × t matrix M(n), where t ≡ (1/2)(n+1)(n+2) is the number of monomials in x and y of degree up to n. M(n) will satisfy

⟨M(n) f^, g^⟩ = ∫∫_S f(x,y)·g(x,y) dx dy whenever deg f, deg g ≤ n.

The rows and columns of M(n) are labeled by the monomials in x and y of degree at most n, so the entry in row x^i y^j, column x^k y^l, must be β_(i+k, j+l) := ∫∫_S x^(i+k) y^(j+l) dx dy.

With n = 1, labeling the rows and columns by 1, X, Y, we get the 3 × 3 matrix

M(1) =
| β00  β10  β01 |
| β10  β20  β11 |
| β01  β11  β02 |

where βij = ∫∫_S x^i y^j dx dy = 1/((i+1)(j+1)), so

M(1) =
| 1    1/2  1/2 |
| 1/2  1/3  1/4 |
| 1/2  1/4  1/3 |
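This matrix and its determinant (used in Step 2 below) can be reproduced with the following sketch; the moment formula βij = 1/((i+1)(j+1)) is specific to the unit square:

beta[i_, j_] := 1/((i + 1) (j + 1));
mons = {{0, 0}, {1, 0}, {0, 1}};    (* exponent pairs for the monomials 1, X, Y *)
M1 = Table[beta[u[[1]] + v[[1]], u[[2]] + v[[2]]], {u, mons}, {v, mons}];
{MatrixForm[M1], Det[M1]}           (* Det[M1] = 1/144 *)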

Step 2:

The next step is to enlarge M(n) to M(n+1), in this case from M(1) to M(2), without increasing the rank. With rows and columns labeled 1, X, Y, X^2, XY, Y^2, M(2) has the block form

M(2) =
| M(1)   bb |
| bb^T   CC |

where the upper right-hand block is

bb =
| β20  β11  β02 |
| a    b    c   |
| b    c    d   |

with a = g[3,0], b = g[2,1], c = g[1,2], and d = g[0,3] standing for the degree-3 moments. The idea here is to assign values to g[3,0], g[2,1], g[1,2] and g[0,3] so that CC := bb^T·M(1)^(−1)·bb is Hankel, i.e., constant along the cross-diagonals; CC will then serve as the lower right-hand block of M(2).

Before we can find M(1)^(−1), we first must check that M(1) is invertible. For this to be true, the determinant of M(1) must not equal 0, which holds since the determinant is 1/144. Therefore M(1) is invertible, and we can find CC, which will be used as the lower right-hand corner of M(2). A calculation shows that

CC = [matrix omitted]

Notice that the main cross-diagonal is not constant. To force it to be constant, we need C31 = C22. For this to be true we must choose values of a, b, c, and d such that

0 = (1/16)(7 − 48b + 192b^2 − 48c + 192c^2) − (1/9)(7 − 18a − 18b − 18c + 108ac − 18d + 108bd)

= (1/144)(−49 + 288a − 144b + 1728b^2 − 144c − 1728ac + 1728c^2 + 288d − 1728bd).

When we substitute the exact moment values a = β30 = 1/4, b = β21 = 1/6, and d = β03 = 1/4 and leave c as a variable, the preceding equation becomes 0 = (1/144)(47 − 576c + 1728c^2). We then solve for c and get two values:

c = (1/72)(12 − √3), c = (1/72)(12 + √3).

We will use the second value of c, and substitute the values for a, b, c, and d back into M(2), assigning the corresponding entries of CC to the degree-4 moments g[4,0], g[3,1], g[2,2], g[1,3], and g[0,4]. We now have M(2) as follows:

[matrix omitted]

Next we row reduce M(2) to get

[matrix omitted]

This shows that rank M(2) = 3 = rank M(1).

Step 3:

We next solve for the cubature points (xi, yi), i = 1, 2, 3. We consider the equation M(1)·K = bb, so that K = M(1)^(−1)·bb:

M(1)·K = bb,        (****)

where the columns of bb correspond to X^2, XY, and Y^2. Writing K = (kij), 1 ≤ i, j ≤ 3, we note that K is exactly the upper right-hand 3 × 3 block of the row-reduced matrix above.

We use the system (****), in terms of columns, to define 3 polynomials, as follows:

(1) From column 1 of (****), we have X^2 = k11·1 + k21·X + k31·Y, so we define the polynomial p by

p(x,y) := x^2 − (k11 + k21·x + k31·y).

Specifically, X^2 = (−1/6)·1 + 1·X + 0·Y, so p(x,y) := x^2 − x + 1/6.

(2) From column 2 of (****), we have XY = k12·1 + k22·X + k32·Y, so we define the polynomial q by

q(x,y) := xy − (k12 + k22·x + k32·y).

Specifically, XY = (1/12)(−3 − √3)·1 + (1/2)·X + (1/6)(3 + √3)·Y, and

q(x,y) := xy − (1/12)(−3 − √3) − (1/2)x − (1/6)(3 + √3)y.

(3) From column 3 of (****), we have Y^2 = k13·1 + k23·X + k33·Y, so we define the polynomial r by

r(x,y) := y^2 − (k13 + k23·x + k33·y).

Specifically, Y^2 = (1/12)(−2 − √3)·1 + (1/(2√3))·X + 1·Y, and

r(x,y) := y^2 − (1/12)(−2 − √3) − (1/(2√3))x − y.

Now, using Mathematica's command Solve[{p == 0, q == 0, r == 0}, {x, y}], we find that p, q, and r have exactly 3 common roots, as follows:

x1 = (1/6)(3 − √3), y1 = 1/2, i.e., x1 ≈ 0.211325, y1 = 0.5

x2 = (1/6)(3 + √3), y2 = (1/6)(3 − √6), i.e., x2 ≈ 0.788675, y2 ≈ 0.0917517

x3 = (1/6)(3 + √3), y3 = (1/6)(3 + √6), i.e., x3 ≈ 0.788675, y3 ≈ 0.908248

Note that all 3 points are in S.
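This step can be reproduced with the following sketch, using the explicit p, q, r above:

p = x^2 - x + 1/6;
q = x y - (1/12) (-3 - Sqrt[3]) - x/2 - (1/6) (3 + Sqrt[3]) y;
r = y^2 - (1/12) (-2 - Sqrt[3]) - x/(2 Sqrt[3]) - y;
FullSimplify[Solve[{p == 0, q == 0, r == 0}, {x, y}]]
(* returns the three common roots listed above *)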

Step 4:

Next we want to find the weights w1, w2, and w3. Applying Q(p) := Σ_{1≤i≤3} wi·p(xi, yi) = ∫∫_S p(x,y) dx dy to the monomials 1, x, y, the points and weights must satisfy V·w = v, where

V =
| 1   1   1  |
| x1  x2  x3 |
| y1  y2  y3 |

and v = (β00, β10, β01)^T = (1, 1/2, 1/2)^T. We first check that V is invertible, which holds because its determinant is √2/3 (≠ 0). Then the weights are given by w = V^(−1)·v, which gives us w1 = 1/2, w2 = 1/4, and w3 = 1/4. Since we now know the weights and the points, we can check whether Q(p) ≡ Σ_{1≤i≤3} wi·p(xi, yi) is exact whenever p(x,y) is a polynomial with deg p ≤ 2. The rule does check, and it is also exact for x^3, x^2y, and y^3. If we plot the points, we can see that they lie in the unit square [0,1] × [0,1], as follows:

[plot omitted: the three cubature points inside the unit square]

We have now shown that there is a three-point rule that is exact for p(x,y) whenever deg p ≤ 2, and this rule has the fewest points with this property.
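The exactness claim can be verified by comparing the rule against the moments of S, βij = 1/((i+1)(j+1)), for all monomials of degree ≤ 2; this sketch does exactly that:

pts = {{(3 - Sqrt[3])/6, 1/2},
       {(3 + Sqrt[3])/6, (3 - Sqrt[6])/6},
       {(3 + Sqrt[3])/6, (3 + Sqrt[6])/6}};
ws = {1/2, 1/4, 1/4};
cub[i_, j_] := Sum[ws[[k]] pts[[k, 1]]^i pts[[k, 2]]^j, {k, 3}];
Table[FullSimplify[cub[i, j] - 1/((i + 1) (j + 1))], {i, 0, 2}, {j, 0, 2 - i}]
(* every entry is 0 *)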

We can try this approach for other regions R (instead of S) and for other values of n. In general, a rule that is exact up to degree 2n must have at least size M(n) points [5], where size M(n) = (1/2)(n+1)(n+2). Except for a few regions with n = 1, 2, 3 [1], it is unknown whether a rule with as few as size M(n) points exists with all points inside the region. For example, with n = 3, size M(3) = 10. For the unit disk as the region, it is impossible to find a 10-point degree 6 rule with all 10 points inside the disk, and it is unknown what the minimum number of points is for an "inside" rule of degree 6 [3].

We now use the above method to build a minimal degree 2 rule for the unit disk. In the bb block, let a = b = d = 0 (their exact values, since these odd moments of the disk vanish) and let c be a variable. To make CC Hankel, we obtain the condition π^2 − 64c^2 = 0, so c = ±π/8. With c = π/8 we use the method above and get points x1 = −1/2, y1 = 0; x2 = 1/2, y2 = −1/√2; and x3 = 1/2, y3 = 1/√2, and weights w1 = π/2, w2 = π/4, and w3 = π/4. When we plot the points, we can see that all of them lie in the unit disk, and also that the weights add up to π, the area of the disk.

[plot omitted: the three cubature points inside the unit disk]
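A sketch checking the disk rule against the moments of the unit disk (computed here with Boole to restrict integration to x^2 + y^2 ≤ 1):

pts = {{-1/2, 0}, {1/2, -1/Sqrt[2]}, {1/2, 1/Sqrt[2]}};
ws = {Pi/2, Pi/4, Pi/4};
mom[i_, j_] := Integrate[x^i y^j Boole[x^2 + y^2 <= 1], {x, -1, 1}, {y, -1, 1}];
Table[Simplify[Sum[ws[[k]] pts[[k, 1]]^i pts[[k, 2]]^j, {k, 3}] - mom[i, j]],
  {i, 0, 2}, {j, 0, 2 - i}]
(* every entry is 0, and Total[ws] equals Pi, the area of the disk *)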

REFERENCES:

1. R. Cools and P. Rabinowitz, Monomial cubature rules since "Stroud": a compilation, J. Comput. Appl. Math. 48 (1993), 309-326.

2. R. Curto and L. Fialkow, Recursiveness, positivity, and truncated moment problems, Houston J. Math. 17 (1991), 603-635.

3. C. Easwaran, L. Fialkow, and S. Petrovic, Can a minimal degree 6 cubature rule for the disk have all points inside?, J. Comput. Appl. Math., to appear.

4. J. F. Epperson, An Introduction to Numerical Methods and Analysis, John Wiley & Sons, New York, 2002.

5. L. Fialkow and S. Petrovic, A moment matrix approach to multivariable cubature, Integral Equations and Operator Theory 52 (2005), 85-124.

6. S. L. Salas and E. Hille, Calculus Part 1, Part 2, John Wiley & Sons, New York, 1982.
