


Statistics 550 Notes 10

Reading: Section 2.1.

Take-home midterm: I will e-mail it to you by the morning of Saturday, October 14th. It will be due Monday, October 23rd by 5 p.m.

I. Method of Moments

Suppose $X_1, \ldots, X_n$ iid from $P_\theta$, where $\theta$ is d-dimensional.

Let $\mu_1(\theta), \ldots, \mu_d(\theta)$ denote the first d moments of the population we are sampling from (assuming that they exist),

$\mu_j(\theta) = E_\theta(X_1^j), \quad j = 1, \ldots, d.$

Define the jth sample moment $\hat{\mu}_j$ by

$\hat{\mu}_j = \frac{1}{n}\sum_{i=1}^n X_i^j, \quad j = 1, \ldots, d.$
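For concreteness, the jth sample moment is just the average of the jth powers of the observations. A small R helper (the function name here is ours, for illustration only) computes it:

# jth sample moment: average of the jth powers of the observations
sample.moment <- function(x, j) mean(x^j)

# example: first and second sample moments of a small sample
x <- c(0.5, 1.2, 0.3, 2.1, 0.9)
sample.moment(x, 1)   # the sample mean
sample.moment(x, 2)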

The function

$g(\theta) = (\hat{\mu}_1 - \mu_1(\theta), \ldots, \hat{\mu}_d - \mu_d(\theta))$

is an estimating equation for which

$E_\theta[\hat{\mu}_j - \mu_j(\theta)] = 0, \quad j = 1, \ldots, d.$

For many models, $E_\theta[g(\theta')] = 0$ only when $\theta' = \theta$, so that $g(\theta) = 0$ is a valid estimating equation.

Suppose $\theta \mapsto (\mu_1(\theta), \ldots, \mu_d(\theta))$ is a 1-1 continuous function from $\Theta$ to $\mathbb{R}^d$. Then the estimating equation estimate of $\theta$ based on $g$ is the $\hat{\theta}$ that solves $g(\hat{\theta}) = 0$, i.e.,

$(\mu_1(\hat{\theta}), \ldots, \mu_d(\hat{\theta})) = (\hat{\mu}_1, \ldots, \hat{\mu}_d).$

Example 4: $X_1, \ldots, X_n$ iid Gamma$(p, \lambda)$.

$f(x; p, \lambda) = \frac{\lambda^p}{\Gamma(p)} x^{p-1} e^{-\lambda x}$ for $x > 0$ (see Section B.2.2 of Bickel and Doksum).

The first two moments of the gamma distribution are

$\mu_1 = \frac{p}{\lambda}, \qquad \mu_2 = \frac{p(p+1)}{\lambda^2}$ (see Exercise B.2.3, page 526).

The method of moments estimator solves

$\bar{X} = \frac{p}{\lambda}, \qquad \frac{1}{n}\sum_{i=1}^n X_i^2 = \frac{p(p+1)}{\lambda^2},$

which yields

$\hat{\lambda} = \frac{\bar{X}}{\hat{\sigma}^2}$ and $\hat{p} = \frac{\bar{X}^2}{\hat{\sigma}^2}$, where $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n X_i^2 - \bar{X}^2$.
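As an illustration (this code is ours, not part of the notes), the following R sketch computes the method of moments estimates for the Gamma$(p, \lambda)$ model from a data vector; in R's rgamma, shape corresponds to p and rate to $\lambda$ in the parameterization above.

# method of moments estimates for the Gamma(p, lambda) model
gamma.mom <- function(x) {
  xbar <- mean(x)
  sigma2.hat <- mean(x^2) - xbar^2            # (1/n) * sum(x^2) - xbar^2
  c(p.hat = xbar^2 / sigma2.hat, lambda.hat = xbar / sigma2.hat)
}

# check on simulated data with known parameters
set.seed(1)
x <- rgamma(1000, shape = 2, rate = 0.5)      # true p = 2, lambda = 0.5
gamma.mom(x)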

Example: The gamma model is frequently used for describing precipitation levels. In a study of the natural variability of rainfall, the rainfall of summer storms was measured by a network of rain gauges in southern Illinois for the years 1960-1964. 227 measurements were taken.

[Figure: histogram of the 227 rainfall measurements]

For these data, [pic], so that the method of moments estimates are

[pic]

The following plot shows the Gamma$(\hat{p}, \hat{\lambda})$ density plotted on the histogram. In order to make the visual comparison easy, the density was normalized to have a total area equal to the total area under the histogram, which is the number of observations times the bin width of the histogram, or 227 * 0.2 = 45.4.

[Figure: histogram of the rainfall measurements with the fitted Gamma density overlaid]
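The normalization can be reproduced with a short R sketch (ours; the data and parameter values below are stand-ins, not the actual rainfall measurements or estimates): draw a frequency histogram with bin width 0.2 and overlay the fitted density multiplied by n times the bin width.

# overlay a Gamma density scaled by n * binwidth (here 227 * 0.2 = 45.4)
set.seed(2)
x <- rgamma(227, shape = 0.4, rate = 1.7)     # stand-in for the rainfall data
p.hat <- 0.4; lambda.hat <- 1.7               # stand-in parameter estimates
hist(x, breaks = seq(0, ceiling(max(x) / 0.2) * 0.2, by = 0.2),
     main = "", xlab = "rainfall")
curve(227 * 0.2 * dgamma(t, shape = p.hat, rate = lambda.hat),
      from = 0, to = max(x), add = TRUE, xname = "t")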

Qualitatively, the fit of the gamma model to the data looks reasonable; we will examine methods for assessing the goodness of fit of a model to data in Chapter 4.

Large sample motivation for method of moments:

A reasonable requirement for a point estimator is that it should converge to the true parameter value as we collect more and more information.

Suppose $X_1, X_2, \ldots$ iid $P_\theta$.

A point estimator $h(X_1, \ldots, X_n)$ of a parameter $q(\theta)$ is consistent if $h(X_1, \ldots, X_n) \xrightarrow{P} q(\theta)$ as $n \to \infty$ for all $\theta \in \Theta$.

Definition of convergence in probability (A.14.1, page 466): $h(X_1, \ldots, X_n) \xrightarrow{P} q(\theta)$ means that for all $\epsilon > 0$,

$\lim_{n \to \infty} P_\theta\left(|h(X_1, \ldots, X_n) - q(\theta)| \geq \epsilon\right) = 0.$

Under certain regularity conditions, the method of moments estimator is consistent. We give a proof for a special case.

Let $g(\theta) = (\mu_1(\theta), \ldots, \mu_d(\theta))$. By the assumptions in formulating the method of moments, $g$ is a 1-1 continuous function from $\Theta$ to $\mathbb{R}^d$. The method of moments estimator solves

$g(\hat{\theta}) = (\hat{\mu}_1, \ldots, \hat{\mu}_d).$

When $(\hat{\mu}_1, \ldots, \hat{\mu}_d)$ falls in $g$'s range, then

$\hat{\theta} = g^{-1}(\hat{\mu}_1, \ldots, \hat{\mu}_d)$. We prove the method of moments estimator is consistent when $(\hat{\mu}_1, \ldots, \hat{\mu}_d)$ falls in $g$'s range with probability one and $g^{-1}$ is continuous.

Sketch of Proof: The method of moments estimator solves

$\hat{\theta} = g^{-1}(\hat{\mu}_1, \ldots, \hat{\mu}_d).$

By the law of large numbers,

$(\hat{\mu}_1, \ldots, \hat{\mu}_d) \xrightarrow{P} (\mu_1(\theta), \ldots, \mu_d(\theta)).$

Since $g^{-1}$ is assumed to be continuous, convergence in probability is preserved under the mapping (A.14.8, page 467), so

$\hat{\theta} = g^{-1}(\hat{\mu}_1, \ldots, \hat{\mu}_d) \xrightarrow{P} g^{-1}(\mu_1(\theta), \ldots, \mu_d(\theta)) = \theta.$
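A small simulation (ours, not part of the notes) makes the proof concrete for the Gamma model of Example 4: as n grows, the sample moments converge to the population moments and the method of moments estimates converge to the true $(p, \lambda)$.

# consistency of the Gamma method of moments estimates, by simulation
set.seed(3)
p.true <- 2; lambda.true <- 0.5
for (n in c(100, 1000, 10000, 100000)) {
  x <- rgamma(n, shape = p.true, rate = lambda.true)
  sigma2.hat <- mean(x^2) - mean(x)^2
  cat("n =", n,
      " p.hat =", round(mean(x)^2 / sigma2.hat, 3),
      " lambda.hat =", round(mean(x) / sigma2.hat, 3), "\n")
}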

Comments on method of moments:

(1) Instead of using the first d moments, we could use higher-order moments (or other functions of the data – see Problem 2.1.13), leading to different estimating equations. But the resulting method of moments estimator depends on which moments we choose.

Example: $X_1, \ldots, X_n$ iid Poisson($\lambda$). The first moment is

$\mu_1 = E_\lambda(X_1) = \lambda$. Thus, the method of moments estimator based on the first moment is $\hat{\lambda}_1 = \bar{X}$.

We could also consider using the second moment to form a method of moments estimator:

$\mu_2 = E_\lambda(X_1^2) = \mathrm{Var}_\lambda(X_1) + [E_\lambda(X_1)]^2 = \lambda + \lambda^2.$

The method of moments estimator based on the second moment solves

$\hat{\lambda}_2 + \hat{\lambda}_2^2 = \frac{1}{n}\sum_{i=1}^n X_i^2.$

Solving this quadratic equation (by taking the positive root), we find that

$\hat{\lambda}_2 = \frac{-1 + \sqrt{1 + \frac{4}{n}\sum_{i=1}^n X_i^2}}{2}.$

The two method of moments estimators are different.

For example, for the data

> rpois(10,1)

[1] 2 3 0 1 2 1 3 1 2 1,

the method of moments estimator based on the first moment is 1.6 and the method of moments estimator based on the second moment is approximately 1.410.
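Both estimates can be computed for any data vector with a short R sketch (ours, applying the formulas above):

# Poisson method of moments estimators from the first and second moments
x <- c(2, 3, 0, 1, 2, 1, 3, 1, 2, 1)
lambda.hat1 <- mean(x)                          # first-moment estimator
m2 <- mean(x^2)                                 # second sample moment
lambda.hat2 <- (-1 + sqrt(1 + 4 * m2)) / 2      # positive root of lambda^2 + lambda = m2
c(lambda.hat1, lambda.hat2)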

(2) The method of moments does not use all the information that is available.

$X_1, \ldots, X_n$ iid Uniform$(0, \theta)$.

The first moment is $\mu_1 = \theta/2$, so the method of moments estimator based on the first moment is $\hat{\theta} = 2\bar{X}$. If $\max(X_1, \ldots, X_n) > 2\bar{X}$, we know that $\theta \geq \max(X_1, \ldots, X_n) > \hat{\theta}$, so the estimate is certainly too small.
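A quick simulation (ours) shows how often the first-moment estimate $2\bar{X}$ falls below the sample maximum for samples of size 10, in which case the estimate is certainly too small:

# Uniform(0, theta): 2 * mean(x) can be smaller than max(x),
# even though theta must be at least max(x)
set.seed(4)
theta <- 1
too.small <- replicate(10000, {
  x <- runif(10, min = 0, max = theta)
  2 * mean(x) < max(x)
})
mean(too.small)   # fraction of samples where 2 * mean(x) < max(x)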

II. Minimum Contrast heuristic

Minimum contrast heuristic: Choose a contrast function $\rho(X, \theta)$ that measures the "discrepancy" between the data X and the parameter vector $\theta$. The range of the contrast function is typically taken to be the real numbers greater than or equal to zero, and the smaller the value of the contrast function, the more "plausible" $\theta$ is based on the data X.

Let $\theta_0$ denote the true parameter. Define the population discrepancy $D(\theta_0, \theta)$ as the expected value of the discrepancy $\rho(X, \theta)$:

$D(\theta_0, \theta) = E_{\theta_0}[\rho(X, \theta)] \qquad (1.1)$

In order for $\rho$ to be a valid contrast function, we require that $D(\theta_0, \theta)$ is uniquely minimized for $\theta = \theta_0$, i.e.,

$D(\theta_0, \theta) > D(\theta_0, \theta_0)$ for all $\theta \neq \theta_0$.

Thus, $\theta_0$ is the minimizer of $D(\theta_0, \theta)$. Although we don't know $D(\theta_0, \theta)$, the contrast function $\rho(X, \theta)$ is an unbiased estimate of $D(\theta_0, \theta)$ (see (1.1)). The minimum contrast heuristic is to estimate $\theta_0$ by minimizing $\rho(X, \theta)$, i.e.,

$\hat{\theta} = \arg\min_{\theta \in \Theta} \rho(X, \theta).$

Example 1: Suppose $X_1, \ldots, X_n$ iid Bernoulli(p), $0 \leq p \leq 1$. The following is an example of a contrast function and an associated estimate:

"Least Squares": $\rho(X, p) = \sum_{i=1}^n (X_i - p)^2$.

The population discrepancy is

$D(p_0, p) = E_{p_0}\left[\sum_{i=1}^n (X_i - p)^2\right] = n\left[p_0(1 - p_0) + (p_0 - p)^2\right].$

We have

$\frac{\partial}{\partial p} D(p_0, p) = 2n(p - p_0) = 0 \iff p = p_0,$

and it can be verified by the second derivative test that

$p = p_0$ is the unique minimizer of $D(p_0, p)$.

Thus, $\rho(X, p) = \sum_{i=1}^n (X_i - p)^2$ is a valid contrast function.

The associated estimate is

$\hat{p} = \arg\min_{0 \leq p \leq 1} \sum_{i=1}^n (X_i - p)^2 = \bar{X}.$
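To see the minimum contrast heuristic numerically (a sketch of ours, not from the notes), we can minimize the least squares contrast with R's optimize and check that the minimizer agrees with the sample mean:

# minimize the least squares contrast rho(X, p) = sum((x - p)^2) over p in [0, 1]
x <- c(1, 0, 0, 1, 1, 1, 0, 1, 0, 1)               # a Bernoulli sample
rho <- function(p, x) sum((x - p)^2)
opt <- optimize(rho, interval = c(0, 1), x = x)
c(minimizer = opt$minimum, sample.mean = mean(x))  # the two agree (up to tolerance)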

The following is an example of a function that is not a contrast function:

[pic]

[pic]

For [pic], we find that [pic] is minimized at about p = 0.57.

[Figure: plot of the population discrepancy for this function, minimized at about p = 0.57]

Least squares methods for estimating regressions can be viewed as minimum contrast estimates (Example 2.1.1 and Section 2.2.1).

III. The Plug-in Principle (Chapter 2.1.2)

Cox and Lewis (1966) reported 799 waiting times (in seconds) between successive pulses along a nerve fiber.

[Figure: histogram of the 799 waiting times between pulses]

Let $X_1, \ldots, X_{799}$ be the 799 waiting times.

"Nonparametric" model for nerve firings:

$X_1, \ldots, X_{799}$ iid from a distribution with CDF F – no further restrictions on F.

How do we estimate parameters such as the mean of F, the variance of F, or the skewness of F?

Estimating F: A natural estimate of F is the empirical CDF.

The empirical CDF $\hat{F}_n$ for a sample of size n is the CDF that puts mass 1/n at each data point $X_i$. Formally,

$\hat{F}_n(x) = \frac{\#\{i : X_i \leq x\}}{n} = \frac{1}{n}\sum_{i=1}^n \mathbf{1}(X_i \leq x).$

The empirical CDF is a consistent estimate of F as $n \to \infty$ in a strong sense:

Glivenko-Cantelli Theorem: $\sup_x |\hat{F}_n(x) - F(x)| \to 0$ almost surely as $n \to \infty$.
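In R, the empirical CDF is available through ecdf. The following sketch (ours) computes $\sup_x |\hat{F}_n(x) - F(x)|$ for simulated exponential data and shows it shrinking as n grows, in the spirit of the Glivenko-Cantelli theorem:

# sup_x |Fhat_n(x) - F(x)| for Exponential(1) data
set.seed(5)
for (n in c(100, 1000, 10000)) {
  x <- rexp(n, rate = 1)
  Fhat <- ecdf(x)
  xs <- sort(x)
  # the supremum is attained at (or just before) the jump points
  D <- max(pmax(abs(Fhat(xs) - pexp(xs)),
                abs((seq_len(n) - 1) / n - pexp(xs))))
  cat("n =", n, " sup |Fhat - F| =", round(D, 4), "\n")
}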

Plug-in principle:

Consider a parameter $\theta$ that can be written as a function of F, i.e., $\theta = T(F)$.

The plug-in estimator of $\theta$ is $\hat{\theta} = T(\hat{F}_n)$.

Example 1:

(1) The mean. Let $\theta = T(F) = \int x \, dF(x)$. The plug-in estimator is $T(\hat{F}_n) = \frac{1}{n}\sum_{i=1}^n X_i = \bar{X}$, the sample mean. For the nerve firing data, [pic].

(2) The variance. Let $\theta = T(F) = \mathrm{Var}_F(X) = \int \left(x - \int u \, dF(u)\right)^2 dF(x)$.

The plug-in estimator is

$T(\hat{F}_n) = \frac{1}{n}\sum_{i=1}^n (X_i - \bar{X})^2.$

For the nerve firings data, [pic].
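In R (a sketch of ours), the plug-in estimates are the sample mean and the variance with divisor n; note that R's var() uses divisor n - 1, so it must be rescaled to give the plug-in value.

# plug-in estimates of the mean and the variance (divisor n, not n - 1)
plugin.mean <- function(x) mean(x)
plugin.var  <- function(x) mean((x - mean(x))^2)   # = var(x) * (length(x) - 1) / length(x)

set.seed(6)
x <- rexp(799, rate = 4)    # stand-in for the 799 nerve-firing waiting times
c(plugin.mean(x), plugin.var(x))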

Comments on plug-in estimates:

(1) The plug-in estimator is a good estimator for the nonparametric model in which nothing is assumed about the cdf F.

(2) The plug-in estimator is generally consistent.

(3) However, for more restrictive parametric models, we can often obtain estimators with better risk functions by utilizing the specific parametric structure.

(4) Plug-in estimators are often valuable as preliminary estimates in algorithms that search for more efficient estimates.
