


Statistics 550 Notes 7

Reading: Section 1.5, 1.6.1

I. The Rao-Blackwell Theorem

Convex functions: A real-valued function $g$ defined on an open interval $I$ is convex if for any $x, y \in I$ and $0 \le \alpha \le 1$,

$$g(\alpha x + (1-\alpha)y) \le \alpha g(x) + (1-\alpha)g(y).$$

$g$ is strictly convex if the inequality is strict whenever $x \ne y$ and $0 < \alpha < 1$.

If $g''$ exists, then $g$ is convex if and only if $g'' \ge 0$ on $I$.

A convex function lies above all its tangent lines.

Convexity of loss functions:

For point estimation:

▪ squared error loss $l(\theta, d) = (\theta - d)^2$ is strictly convex in $d$.

▪ absolute error loss $l(\theta, d) = |\theta - d|$ is convex but not strictly convex in $d$.

▪ zero-one loss function

$$l(\theta, d) = \begin{cases} 0 & \text{if } |d - \theta| \le c \\ 1 & \text{if } |d - \theta| > c \end{cases}$$

is nonconvex in $d$.

Jensen’s Inequality: (Appendix B.9)

Let $X$ be a random variable. (i) If $g$ is convex in an open interval $I$ and $P(X \in I) = 1$ and $E|X| < \infty$, then

$$E[g(X)] \ge g(E[X]).$$

(ii) If $g$ is strictly convex, then $E[g(X)] > g(E[X])$ unless $X$ equals a constant with probability one.

Proof of (i): Let $L(x) = a + bx$ be a tangent line to $g(x)$ at the point $E[X]$. Write $L(x) = g(E[X]) + b(x - E[X])$. By the convexity of $g$, $g(x) \ge L(x)$ for all $x \in I$. Since expectations preserve inequalities,

$$E[g(X)] \ge E[L(X)] = g(E[X]) + b(E[X] - E[X]) = g(E[X]),$$

as was to be shown.
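As a quick numerical sketch of Jensen's inequality (the choices $g(x) = x^2$ and $X \sim$ Uniform$(0,1)$ are illustrative, not from the notes), a Monte Carlo estimate of $E[g(X)]$ should exceed $g(E[X])$:

```python
import random

# Monte Carlo check of Jensen's inequality for the strictly convex
# function g(x) = x^2: E[g(X)] should exceed g(E[X]) unless X is
# constant. Illustrative choice: X ~ Uniform(0, 1).
random.seed(0)
n = 100_000
xs = [random.random() for _ in range(n)]

mean_x = sum(xs) / n                  # estimates E[X] = 1/2
mean_g = sum(x * x for x in xs) / n   # estimates E[X^2] = 1/3

# E[X^2] = 1/3 > (E[X])^2 = 1/4, as Jensen predicts.
print(mean_g, mean_x ** 2)
```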

Rao-Blackwell Theorem: Let $T(X)$ be a sufficient statistic. Let $\delta(X)$ be a point estimate of $q(\theta)$ and assume that the loss function $l(\theta, d)$ is strictly convex in $d$. Also assume that $R(\theta, \delta) = E_\theta[l(\theta, \delta(X))] < \infty$. Let $\eta(T) = E[\delta(X) \mid T]$. Then $R(\theta, \eta) < R(\theta, \delta)$ unless $\delta(X) = \eta(T(X))$ with probability one.

Proof: Fix $\theta$. Apply Jensen's inequality with $g(d) = l(\theta, d)$ and let $X^*$ have the conditional distribution of $\delta(X)$ for a particular choice of $T = t$.

By Jensen's inequality,

$$l(\theta, \eta(t)) = g(E[X^*]) < E[g(X^*)] = E[l(\theta, \delta(X)) \mid T = t] \quad (0.1)$$

unless $\delta(X)$ equals $\eta(t)$ with probability one given $T = t$, in which case (0.1) is an equality. Taking the expectation over $T$ on both sides of this inequality yields $R(\theta, \eta) < R(\theta, \delta)$ unless $\delta(X) = \eta(T(X))$ with probability one, in which case $R(\theta, \eta) = R(\theta, \delta)$.

Comments:

(1) Sufficiency ensures that $\eta(T) = E[\delta(X) \mid T]$ is an estimator (i.e., it depends only on $X$ and not on $\theta$).

(2) If the loss is convex rather than strictly convex, we get $\le$ in (0.1), and hence $R(\theta, \eta) \le R(\theta, \delta)$.

(3) The theorem is not true without convexity of the loss function.

Consequence of Rao-Blackwell theorem: For convex loss functions, we can dispense with randomized estimators.

A randomized estimator randomly chooses the estimate $\delta^*(X)$, where the distribution of $\delta^*(X)$ given $X = x$ is known. A randomized estimator can be obtained as an estimator $\delta(X, U)$, where $X$ and $U$ are independent and $U$ is uniformly distributed on $(0,1)$. This is achieved by observing $X = x$ and then using $U$ to construct the distribution of $\delta^*(x)$. For the data $(X, U)$, $X$ is sufficient. Thus, by the Rao-Blackwell Theorem, the nonrandomized estimator $E[\delta(X, U) \mid X]$ dominates $\delta(X, U)$ for strictly convex loss functions.
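The Rao-Blackwell improvement can be sketched numerically. The setup below is an illustrative choice, not from the notes: $X_1, \ldots, X_n$ iid Bernoulli($\theta$) with squared error loss, starting from the crude estimator $\delta(X) = X_1$; conditioning on the sufficient statistic $T = \sum_i X_i$ gives $\eta(T) = E[X_1 \mid T] = T/n = \bar{X}$, which should have strictly smaller risk.

```python
import random

# Monte Carlo comparison of delta(X) = X_1 with its Rao-Blackwellized
# version eta(T) = Xbar for X_1,...,X_n iid Bernoulli(theta).
random.seed(1)
theta, n, reps = 0.3, 10, 20_000

mse_delta = mse_eta = 0.0
for _ in range(reps):
    x = [1 if random.random() < theta else 0 for _ in range(n)]
    mse_delta += (x[0] - theta) ** 2          # risk of X_1
    mse_eta += (sum(x) / n - theta) ** 2      # risk of Xbar
mse_delta /= reps
mse_eta /= reps

# Theory: risk of X_1 is theta(1-theta) = 0.21;
# risk of Xbar is theta(1-theta)/n = 0.021.
print(mse_delta, mse_eta)
```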

II. Minimal Sufficiency

For any model, there are many sufficient statistics.

Example 1: For $X_1, \ldots, X_n$ iid Bernoulli($\theta$), $T_1(X) = (X_1, \ldots, X_n)$ and $T_2(X) = \sum_{i=1}^n X_i$ are both sufficient, but $T_2$ provides a greater reduction of the data.

Definition: A statistic $T(X)$ is minimally sufficient if it is sufficient and it provides a reduction of the data that is at least as great as that of any other sufficient statistic $T'(X)$, in the sense that we can find a transformation $r$ such that $T(X) = r(T'(X))$.

Comments:

(1) To say that we can find a transformation $r$ such that $T(X) = r(T'(X))$ means that if $T'(x) = T'(y)$, then $T(x)$ must equal $T(y)$.

(2) Data reduction in terms of a particular statistic can be thought of as a partition of the sample space. A statistic $T(X)$ partitions the sample space into sets $A_t = \{x : T(x) = t\}$.

If a statistic $T(X)$ is minimally sufficient, then for another sufficient statistic $T'(X)$, which partitions the sample space into sets $B_s = \{x : T'(x) = s\}$, every set $B_s$ must be a subset of some $A_t$. Thus, the partition associated with a minimal sufficient statistic is the coarsest possible partition for a sufficient statistic, and in this sense the minimal sufficient statistic achieves the greatest possible data reduction for a sufficient statistic.

A useful theorem for finding a minimal sufficient statistic is the following:

Theorem 2 (Lehmann and Scheffé, 1950): Suppose $T(X)$ is a sufficient statistic for $\theta$. Also suppose that if for two sample points $x$ and $y$, the ratio $p(x; \theta)/p(y; \theta)$ is constant as a function of $\theta$, then $T(x) = T(y)$. Then $T(X)$ is a minimal sufficient statistic for $\theta$.

Proof: Let $T'(X)$ be any statistic that is sufficient for $\theta$. By the factorization theorem, there exist functions $g'(t, \theta)$ and $h'(x)$ such that $p(x; \theta) = g'(T'(x), \theta)\, h'(x)$. Let $x$ and $y$ be any two sample points with $T'(x) = T'(y)$. Then

$$\frac{p(x; \theta)}{p(y; \theta)} = \frac{g'(T'(x), \theta)\, h'(x)}{g'(T'(y), \theta)\, h'(y)} = \frac{h'(x)}{h'(y)}.$$

Since this ratio does not depend on $\theta$, the assumptions of the theorem imply that $T(x) = T(y)$. Thus, $T(X)$ is at least as coarse a partition of the sample space as $T'(X)$, and consequently $T(X)$ is minimal sufficient.

Example 1 continued: Consider the ratio

$$\frac{p(x; \theta)}{p(y; \theta)} = \frac{\theta^{\sum_{i=1}^n x_i}(1-\theta)^{n - \sum_{i=1}^n x_i}}{\theta^{\sum_{i=1}^n y_i}(1-\theta)^{n - \sum_{i=1}^n y_i}} = \left(\frac{\theta}{1-\theta}\right)^{\sum_{i=1}^n x_i - \sum_{i=1}^n y_i}.$$

This ratio is constant as a function of $\theta$ if $\sum_{i=1}^n x_i = \sum_{i=1}^n y_i$. Since we have shown that $\sum_{i=1}^n X_i$ is a sufficient statistic, it follows from the above sentence and Theorem 2 that $\sum_{i=1}^n X_i$ is a minimal sufficient statistic.

Note that a minimal sufficient statistic is not unique. Any one-to-one function of a minimal sufficient statistic is also a minimal sufficient statistic. For example, $\bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$ is a minimal sufficient statistic for the i.i.d. Bernoulli case.
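The likelihood-ratio criterion for the Bernoulli case can be checked directly (the sample points below are illustrative): whenever two samples have the same sum, the ratio $p(x;\theta)/p(y;\theta)$ is the same for every $\theta$ (here it equals 1).

```python
# Check that the Bernoulli likelihood ratio p(x;theta)/p(y;theta)
# is constant in theta when sum(x) == sum(y).
def bernoulli_pmf(x, theta):
    s = sum(x)
    return theta ** s * (1 - theta) ** (len(x) - s)

x = [1, 0, 1, 0, 0]
y = [0, 1, 0, 0, 1]  # same sum as x
ratios = [bernoulli_pmf(x, th) / bernoulli_pmf(y, th)
          for th in (0.1, 0.3, 0.5, 0.9)]
print(ratios)  # all equal to 1.0
```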

Example 2: Suppose $X_1, \ldots, X_n$ are iid uniform on the interval $(\theta, \theta + 1)$. Then the joint pdf of $X$ is

$$p(x; \theta) = \prod_{i=1}^n 1_{(\theta, \theta+1)}(x_i) = \begin{cases} 1 & \text{if } \theta < \min_i x_i \text{ and } \max_i x_i < \theta + 1 \\ 0 & \text{otherwise.} \end{cases}$$

The statistic $T(X) = (\min_i X_i, \max_i X_i)$ is a sufficient statistic by the factorization theorem with $g(t_1, t_2; \theta) = 1\{\theta < t_1,\ t_2 < \theta + 1\}$ and $h(x) = 1$.

For any two sample points $x$ and $y$, the numerator and denominator of the ratio $p(x; \theta)/p(y; \theta)$ will be positive for the same values of $\theta$ if and only if $\min_i x_i = \min_i y_i$ and $\max_i x_i = \max_i y_i$; if the minima and maxima are equal, then the ratio is constant and in fact equals 1. Thus, $T(X) = (\min_i X_i, \max_i X_i)$ is a minimal sufficient statistic by Theorem 2.

Example 2 is a case in which the dimension of the minimal sufficient statistic (2) does not match the dimension of the parameter (1). There are models in which the dimension of the minimal sufficient statistic is equal to the sample size, e.g., for $X_1, \ldots, X_n$ iid Cauchy($\theta$), the order statistics $(X_{(1)}, \ldots, X_{(n)})$ are minimal sufficient

(Problem 1.5.15).

III. Ancillary Statistics

A statistic $A(X)$ is ancillary if its distribution does not depend on $\theta$.

Example 4: Suppose our model is $X_1, \ldots, X_n$ iid $N(\mu, 1)$. Then $\bar{X}$ is a sufficient statistic, and the vector of residuals $(X_1 - \bar{X}, \ldots, X_n - \bar{X})$ (and hence the sample variance $s^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2$) is an ancillary statistic.

Although ancillary statistics contain no information about $\theta$ when the model is true, ancillary statistics are useful for checking the validity of the model.
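Ancillarity of the sample variance in the $N(\mu, 1)$ model can be illustrated by simulation (an illustrative sketch; the values of $\mu$, $n$, and the number of replications are arbitrary choices): its Monte Carlo mean, which equals 1 for every $\mu$, does not change as $\mu$ moves.

```python
import random
import statistics

# Check that the distribution of the sample variance does not depend
# on mu when X_1,...,X_n are iid N(mu, 1): compare its Monte Carlo
# mean at two very different values of mu.
random.seed(2)
n, reps = 5, 20_000

def mean_sample_variance(mu):
    total = 0.0
    for _ in range(reps):
        xs = [random.gauss(mu, 1) for _ in range(n)]
        total += statistics.variance(xs)  # unbiased sample variance
    return total / reps

m0 = mean_sample_variance(0.0)
m10 = mean_sample_variance(10.0)
print(m0, m10)  # both close to 1
```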

IV. Exponential Families

The binomial and normal models exhibited the interesting feature that there is a natural minimal sufficient statistic whose dimension is independent of the sample size. The exponential family models are a general class of models that exhibit this feature.

The class of exponential family models includes many of the most widely used statistical models (e.g., binomial, normal, gamma, Poisson, multinomial). Exponential family models have an underlying structure with elegant properties that we will discuss.

One-parameter exponential families: The family of distributions of a model $\{P_\theta : \theta \in \Theta\}$ is said to be a one-parameter exponential family if there exist real-valued functions $\eta(\theta)$ and $B(\theta)$ on $\Theta$, and real-valued functions $T(x)$ and $h(x)$, such that the pdf or pmf may be written as

$$p(x; \theta) = h(x)\exp\{\eta(\theta) T(x) - B(\theta)\}. \quad (0.2)$$

Comments:

(1) For an exponential family, the support of the distribution (i.e., $\{x : p(x; \theta) > 0\}$) cannot depend on $\theta$. Thus, $X_1, \ldots, X_n$ iid Uniform$(0, \theta)$ is not an exponential family model.

(2) For an exponential family model, $T(X)$ is a sufficient statistic by the factorization theorem.

(3) $\eta, T, B, h$ are not unique. For example, $\eta$ can be multiplied by a constant $c$ and $T$ can be divided by the same constant $c$.

Examples of one-parameter exponential family models:

(1) Poisson family.

Let $X \sim$ Poisson($\theta$), $\theta > 0$. Then for $x \in \{0, 1, 2, \ldots\}$,

$$p(x; \theta) = \frac{\theta^x e^{-\theta}}{x!} = \frac{1}{x!}\exp\{x\log\theta - \theta\}.$$

This is a one-parameter exponential family with

$$\eta(\theta) = \log\theta, \quad T(x) = x, \quad B(\theta) = \theta, \quad h(x) = \frac{1}{x!}.$$
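The algebraic identity above is easy to verify numerically (the value of $\theta$ is an illustrative choice): the Poisson pmf and its exponential-family form agree term by term.

```python
import math

# Check that theta^x e^{-theta} / x! equals
# h(x) exp(eta(theta) T(x) - B(theta)) with eta = log(theta),
# T(x) = x, B(theta) = theta, h(x) = 1/x!.
theta = 2.5
for x in range(10):
    pmf = theta ** x * math.exp(-theta) / math.factorial(x)
    expfam = (1 / math.factorial(x)) * math.exp(x * math.log(theta) - theta)
    assert math.isclose(pmf, expfam)
print("Poisson pmf matches its exponential-family form")
```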

(2) Binomial family.

Let $X \sim$ Binomial($n, \theta$), $0 < \theta < 1$. Then for $x \in \{0, 1, \ldots, n\}$,

$$p(x; \theta) = \binom{n}{x}\theta^x(1-\theta)^{n-x} = \binom{n}{x}\exp\left\{x\log\frac{\theta}{1-\theta} + n\log(1-\theta)\right\}.$$

This is a one-parameter exponential family with

$$\eta(\theta) = \log\frac{\theta}{1-\theta}, \quad T(x) = x, \quad B(\theta) = -n\log(1-\theta), \quad h(x) = \binom{n}{x}.$$

The family of distributions obtained by taking iid samples from a one-parameter exponential family is itself a one-parameter exponential family.

Specifically, suppose $X \sim p(x; \theta) = h(x)\exp\{\eta(\theta)T(x) - B(\theta)\}$ and $\{P_\theta\}$ is an exponential family; then for $X_1, \ldots, X_m$ iid with common distribution $P_\theta$,

$$p(x_1, \ldots, x_m; \theta) = \left(\prod_{i=1}^m h(x_i)\right)\exp\left\{\eta(\theta)\sum_{i=1}^m T(x_i) - mB(\theta)\right\}.$$

A sufficient statistic is $\sum_{i=1}^m T(X_i)$, and it is one-dimensional whatever the sample size $m$ is.

For $X_1, \ldots, X_n$ iid Poisson($\theta$), the sufficient statistic $\sum_{i=1}^n X_i$ has a Poisson($n\theta$) distribution and hence has an exponential family model. It is generally true that the sufficient statistic of an exponential family model follows an exponential family.

Theorem 1.6.1: Let $\{P_\theta\}$ be a one-parameter exponential family of discrete distributions:

$$p(x; \theta) = h(x)\exp\{\eta(\theta)T(x) - B(\theta)\}.$$

Then the family of the distributions of the statistic $T(X)$ is a one-parameter exponential family of discrete distributions whose pmf may be written

$$p(t; \theta) = h^*(t)\exp\{\eta(\theta)t - B(\theta)\}$$

for suitable $h^*$.

Proof: By definition,

$$P_\theta(T(X) = t) = \sum_{x: T(x) = t} h(x)\exp\{\eta(\theta)T(x) - B(\theta)\} = \exp\{\eta(\theta)t - B(\theta)\}\sum_{x: T(x) = t} h(x).$$

If we let $h^*(t) = \sum_{x: T(x) = t} h(x)$, the result follows.

A similar theorem holds for continuous exponential families.

Canonical exponential families: A useful reparameterization of the exponential family model is to index the family by $\eta = \eta(\theta)$ as the parameter, yielding

$$q(x; \eta) = h(x)\exp\{\eta T(x) - A(\eta)\}, \quad (0.3)$$

where $A(\eta) = \log\int h(x)\exp\{\eta T(x)\}\,dx$ in the continuous case and the integral is replaced by a sum in the discrete case.

If $q(x; \eta)$ is to be a density (or pmf), then $A(\eta)$ must be finite. Let $\mathcal{E} = \{\eta : A(\eta) < \infty\}$. The model given by (0.3) with $\eta$ ranging over $\mathcal{E}$ is called the canonical one-parameter exponential family generated by $T$ and $h$. $\mathcal{E}$ is called the natural parameter space and $T$ is called the natural sufficient statistic. The canonical one-parameter exponential family contains the one-parameter exponential family (0.2) with parameter space $\{\eta(\theta) : \theta \in \Theta\}$, and $\mathcal{E}$ can be thought of as the "biggest" possible parameter space for the exponential family.

Example 1: Let $X \sim$ Poisson($\theta$), $\theta > 0$. Then for $x \in \{0, 1, 2, \ldots\}$,

$$p(x; \theta) = \frac{1}{x!}\exp\{x\log\theta - \theta\}. \quad (0.4)$$

Letting $\eta = \log\theta$, we have

$$q(x; \eta) = \frac{1}{x!}\exp\{\eta x - e^\eta\}.$$

We have

$$A(\eta) = \log\sum_{x=0}^\infty \frac{1}{x!}e^{\eta x} = \log\exp(e^\eta) = e^\eta < \infty \text{ for all } \eta.$$

Thus, $\mathcal{E} = (-\infty, \infty)$.
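The computation $A(\eta) = e^\eta$ can be confirmed numerically by summing the series directly (the value of $\eta$ is an illustrative choice; the series is truncated where the remaining terms are negligible):

```python
import math

# Check that A(eta) = log sum_{x>=0} e^{eta x} / x! equals e^eta
# for the Poisson family in canonical form.
eta = 0.7
partial_sum = sum(math.exp(eta * x) / math.factorial(x) for x in range(60))
A = math.log(partial_sum)
print(A, math.exp(eta))  # nearly identical
```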

Note that if the parameter space were restricted, say to $\Theta = (0, 1)$, then (0.4) would still be a one-parameter exponential family, but it would be a strict subset of the canonical one-parameter exponential family generated by $T$ and $h$ with natural parameter space $\mathcal{E} = (-\infty, \infty)$.

A useful result about exponential families is the following computational shortcut for moments of the natural sufficient statistic:

Theorem 1.6.2: If $X$ is distributed according to (0.3) and $\eta$ is an interior point of $\mathcal{E}$, then the moment-generating function of $T(X)$ exists and is given by

$$M_{T(X)}(s) = E[\exp\{sT(X)\}] = \exp\{A(s + \eta) - A(\eta)\}$$

for $s$ in some neighborhood of 0.

Moreover,

$$E[T(X)] = A'(\eta), \quad \mathrm{Var}[T(X)] = A''(\eta).$$

Proof: This is the proof for the continuous case.

$$E[\exp\{sT(X)\}] = \int e^{sT(x)} h(x)\exp\{\eta T(x) - A(\eta)\}\,dx = \exp\{A(s+\eta) - A(\eta)\}\int h(x)\exp\{(s+\eta)T(x) - A(s+\eta)\}\,dx = \exp\{A(s+\eta) - A(\eta)\},$$

because the last factor, being the integral of a density, is one. The rest of the theorem follows from the moment generating property of $M_{T(X)}$ (see Section A.12 of Bickel and Doksum).

Comment on proof: In order for the moment generating function (MGF) properties to hold, the MGF must exist (be less than infinity) for $s$ in some neighborhood of 0. The proof that the MGF exists for $s$ in some neighborhood of 0 relies on the fact that $\mathcal{E}$ is an interval, which we shall establish in Section 1.6.4.

Example 1 continued: Let $X \sim$ Poisson($\theta$). The natural sufficient statistic is $T(x) = x$, and $\eta = \log\theta$, $A(\eta) = e^\eta$. Thus, using Theorem 1.6.2,

$$E[X] = A'(\eta) = e^\eta = \theta, \quad \mathrm{Var}[X] = A''(\eta) = e^\eta = \theta.$$
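These moment identities can be checked against direct summation of the Poisson pmf (the value of $\theta$ is an illustrative choice):

```python
import math

# Check of Theorem 1.6.2 for the Poisson family: with A(eta) = e^eta
# and eta = log(theta), both A'(eta) and A''(eta) equal theta, so
# E[X] = Var[X] = theta. Compare with direct summation of the pmf.
theta = 1.8

def pmf(x):
    return theta ** x * math.exp(-theta) / math.factorial(x)

mean = sum(x * pmf(x) for x in range(80))
second_moment = sum(x * x * pmf(x) for x in range(80))
var = second_moment - mean ** 2
print(mean, var)  # both close to theta = 1.8
```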

Example 2: Suppose $X_1, \ldots, X_n$ is a sample from a population with pdf

$$p(x; \theta) = \frac{x}{\theta^2}\exp\left\{-\frac{x^2}{2\theta^2}\right\}, \quad x > 0,\ \theta > 0.$$

This is known as the Rayleigh distribution. It is used to model the density of time until failure for certain types of equipment. The data come from an exponential family:

$$p(x_1, \ldots, x_n; \theta) = \left(\prod_{i=1}^n x_i\right)\exp\left\{-\frac{1}{2\theta^2}\sum_{i=1}^n x_i^2 - 2n\log\theta\right\}.$$

Here $\eta = -\frac{1}{2\theta^2}$, $T(x) = \sum_{i=1}^n x_i^2$, $h(x) = \prod_{i=1}^n x_i$, and $A(\eta) = 2n\log\theta = -n\log(-2\eta)$.

Therefore, the natural sufficient statistic $\sum_{i=1}^n X_i^2$ has mean $A'(\eta) = -n/\eta = 2n\theta^2$ and variance $A''(\eta) = n/\eta^2 = 4n\theta^4$.
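A Monte Carlo check of these moments (the values of $\theta$, $n$, and the number of replications are illustrative choices): if $U \sim$ Uniform$(0,1)$, then $\theta\sqrt{-2\log U}$ has the Rayleigh density, so $X^2 = -2\theta^2\log U$ and $T = \sum_i X_i^2$ can be simulated directly.

```python
import math
import random

# Monte Carlo check of E[T] = 2 n theta^2 and Var[T] = 4 n theta^4
# for the Rayleigh natural sufficient statistic T = sum(X_i^2),
# using inverse-CDF sampling X^2 = -2 theta^2 log(U).
random.seed(3)
theta, n, reps = 1.5, 4, 40_000

ts = []
for _ in range(reps):
    t = sum(-2 * theta ** 2 * math.log(random.random()) for _ in range(n))
    ts.append(t)

mean_t = sum(ts) / reps
var_t = sum((t - mean_t) ** 2 for t in ts) / reps
# Theory: 2*n*theta**2 = 18 and 4*n*theta**4 = 81.
print(mean_t, var_t)
```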

Proving that a one-parameter family is not an exponential family

A one-parameter exponential family is a family

$$p(x; \theta) = h(x)\exp\{\eta(\theta)T(x) - B(\theta)\}, \quad \theta \in \Theta.$$

Consider a one-parameter family $\{p(x; \theta) : \theta \in \Theta\}$. If the support of $p(x; \theta)$ is different for different $\theta$, then the family is not an exponential family because $p(x; \theta) > 0$ if and only if $h(x) > 0$, and the latter condition does not depend on $\theta$.

Suppose that the support of $p(x; \theta)$ is the same for all $\theta$. Fix a reference point $\theta_0 \in \Theta$. We can write the pdf or pmf of the family as

$$p(x; \theta) = p(x; \theta_0)\frac{p(x; \theta)}{p(x; \theta_0)}.$$

Furthermore, we can write the pdf or pmf of the family as

$$p(x; \theta) = p(x; \theta_0)\exp\left\{\log\frac{p(x; \theta)}{p(x; \theta_0)}\right\}.$$

In order for this to be an exponential family, we need to be able to write

$$\log\frac{p(x; \theta)}{p(x; \theta_0)} = \eta(\theta)T(x) - B(\theta) \quad (0.5)$$

for some functions $\eta(\theta)$, $T(x)$, $B(\theta)$.

Suppose (0.5) holds, where $\theta_0$ is a fixed reference parameter value. Then for any two sample points $x_1$ and $x_2$,

$$\log\frac{p(x_1; \theta)}{p(x_1; \theta_0)} - \log\frac{p(x_2; \theta)}{p(x_2; \theta_0)} = \eta(\theta)[T(x_1) - T(x_2)]$$ and

for any four sample points $x_1$, $x_2$, $x_3$, $x_4$,

$$\frac{\log\frac{p(x_1; \theta)}{p(x_1; \theta_0)} - \log\frac{p(x_2; \theta)}{p(x_2; \theta_0)}}{\log\frac{p(x_3; \theta)}{p(x_3; \theta_0)} - \log\frac{p(x_4; \theta)}{p(x_4; \theta_0)}} = \frac{T(x_1) - T(x_2)}{T(x_3) - T(x_4)}$$

is constant as a function of $\theta$ (whenever the denominator is nonzero).

Thus, a necessary condition for a one-parameter exponential family is that for any four sample points $x_1$, $x_2$, $x_3$, $x_4$,

$$\frac{\log\frac{p(x_1; \theta)}{p(x_1; \theta_0)} - \log\frac{p(x_2; \theta)}{p(x_2; \theta_0)}}{\log\frac{p(x_3; \theta)}{p(x_3; \theta_0)} - \log\frac{p(x_4; \theta)}{p(x_4; \theta_0)}}$$

must be constant as a function of $\theta$.

Proof that the Cauchy family is not an exponential family:

The Cauchy family is

$$p(x; \theta) = \frac{1}{\pi[1 + (x - \theta)^2]}, \quad -\infty < x < \infty,\ -\infty < \theta < \infty.$$

Thus, for the Cauchy family, taking the reference point $\theta_0 = 0$,

$$\log\frac{p(x; \theta)}{p(x; 0)} = \log\frac{1 + x^2}{1 + (x - \theta)^2}.$$

For any four sample points $x_1, x_2, x_3, x_4$,

$$\frac{\log\frac{p(x_1; \theta)}{p(x_1; 0)} - \log\frac{p(x_2; \theta)}{p(x_2; 0)}}{\log\frac{p(x_3; \theta)}{p(x_3; 0)} - \log\frac{p(x_4; \theta)}{p(x_4; 0)}} = \frac{\log\frac{1 + x_1^2}{1 + (x_1 - \theta)^2} - \log\frac{1 + x_2^2}{1 + (x_2 - \theta)^2}}{\log\frac{1 + x_3^2}{1 + (x_3 - \theta)^2} - \log\frac{1 + x_4^2}{1 + (x_4 - \theta)^2}}.$$

This is not constant as a function of $\theta$, so the Cauchy family is not an exponential family.
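The non-constancy is easy to exhibit numerically (the four sample points and two values of $\theta$ below are arbitrary illustrative choices, with reference point $\theta_0 = 0$):

```python
import math

# For an exponential family the four-point ratio of log density
# ratios would be constant in theta; for the Cauchy family it is not.
def log_ratio(x, theta):
    # log p(x; theta) - log p(x; 0) for the Cauchy(theta) density
    return math.log((1 + x ** 2) / (1 + (x - theta) ** 2))

def four_point_ratio(x1, x2, x3, x4, theta):
    return (log_ratio(x1, theta) - log_ratio(x2, theta)) / (
        log_ratio(x3, theta) - log_ratio(x4, theta))

r1 = four_point_ratio(1.0, 0.5, 2.0, -1.0, theta=1.0)
r2 = four_point_ratio(1.0, 0.5, 2.0, -1.0, theta=3.0)
print(r1, r2)  # clearly different values
```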
