Statistics 550 Notes 2

Reading: Section 1.2.

I will add one office hour, Wed., 9-10.

Prof. Small’s office hours: Tues., 4:45-5:45; Wed., 9-10; Thurs., 4:45-5:45.

TA Dan Yang’s office hours: Tues., 2-3, 431.2 Huntsman Hall.

I. Frequentist vs. subjective probability (Section 1.2)

Model: We toss a coin 3 times. The tosses are iid Bernoulli trials with probability p of landing heads.

What does $p$ mean here?

Mathematical definition – A probability function for a random experiment is a function $P$ defined on the (measurable) events of the sample space $S$ of possible outcomes of the experiment that satisfies:

Axiom 1: For all (measurable) events $E$: $P(E) \geq 0$.

Axiom 2: $P(S) = 1$.

Axiom 3 (countable additivity): For any sequence of mutually exclusive (measurable) events $E_1, E_2, \ldots$, $P\left(\bigcup_{i=1}^{\infty} E_i\right) = \sum_{i=1}^{\infty} P(E_i)$.

For a single coin toss, the axioms imply that $p$ must be between 0 and 1 and the probability of the coin landing tails must be $1 - p$.

Meaning of probability as a model for the real world:

Frequentist probability – In “many” independent coin tosses, the proportion of heads would be about $p$.

▪ The French naturalist Count Buffon (1707-1788) tossed a coin 4040 times. Result: 2048 heads, or relative frequency 2048/4040=0.5069 for heads.

▪ Around 1900, the English statistician Karl Pearson heroically tossed a coin 24,000 times. Result: 12,012 heads, a relative frequency of 0.5005.

▪ While imprisoned by the Germans during World War II, the South African mathematician John Kerrich tossed a coin 10,000 times. Result: 5067 heads, a relative frequency of 0.5067.

Subjective (personal) probability – Probability describes a person’s degree of belief about a statement.

“The probability that the coin lands heads is 0.5”

Subjective interpretation – This represents a person’s personal degree of belief that a toss of the coin will land heads.

Specifically, if offered the chance to make a bet in which the person will win $1 - q$ dollars if the coin lands heads and lose $q$ dollars if the coin lands tails, the maximum $q$ at which the person would play the game is $q = 0.5$.

In general, a person’s subjective probability of an event $A$, $P(A)$, is the value of $q$ for which the person thinks a bet in which she will win $1 - q$ dollars if $A$ occurs and lose $q$ dollars if $A$ does not occur is a fair bet (i.e., has expected profit zero): if $P(A) = q$, the expected profit is $q(1-q) - (1-q)q = 0$.

Subjective probability rejects the view of probability as a physical feature of the world and interprets probability as a statement about an individual’s state of knowledge.

“Coins don’t have probabilities, people have probabilities.” – Persi Diaconis.

An advantage of subjective probability is that it can be applied to things about which we are uncertain but which cannot be envisioned as part of a sequence of repeated trials:

“Will the Philadelphia Eagles win the Super Bowl this year?”

“Was there life on Mars 1 billion years ago?”

“Did Lee Harvey Oswald act alone in assassinating John F. Kennedy?”

“What is the 353rd digit of $\pi$?”

If I say the probability that the 353rd digit of $\pi$ is 5 is 0.2, I mean that I would consider it a fair bet for me to gain 0.8 dollars if the 353rd digit of $\pi$ is 5 and lose 0.2 dollars if the 353rd digit of $\pi$ is not 5.

II. Coherence and the axioms of probability

A rational person should have a “coherent” system of subjective probabilities: a system is said to be incoherent if there exists some set of bets that the bettor is willing to take but under which the bettor loses money no matter what happens.

If a person’s system of probabilities is coherent, then it must satisfy the axioms of probability.

Example:

Proposition 1: If $A$ and $B$ are mutually exclusive events, then $P(A \cup B) = P(A) + P(B)$.

Proof: Let $q_A$, $q_B$ and $q_{A \cup B}$ be a person’s subjective probabilities for these events. Suppose that $q_{A \cup B}$ differs from $q_A + q_B$. Then the person thinks that the following bets are fair: (i) a bet in which the person will win $1 - q_{A \cup B}$ dollars if $A \cup B$ occurs and lose $q_{A \cup B}$ dollars if $A \cup B$ does not occur; (ii) a bet in which the person will win $1 - q_A$ dollars if $A$ occurs and lose $q_A$ dollars if $A$ does not occur; and (iii) a bet in which the person will win $1 - q_B$ dollars if $B$ occurs and lose $q_B$ dollars if $B$ does not occur.

Say $q_A + q_B > q_{A \cup B}$, and let the difference be $d = q_A + q_B - q_{A \cup B} > 0$.

A gambler offers this person the following bets: (a) the person will lose $1 - q_{A \cup B}$ dollars if $A \cup B$ occurs and win $q_{A \cup B}$ dollars if $A \cup B$ does not occur; (b) the person will win $1 - q_A$ dollars if $A$ occurs and lose $q_A$ dollars if $A$ does not occur; (c) the person will win $1 - q_B$ dollars if $B$ occurs and lose $q_B$ dollars if $B$ does not occur.

According to the person’s subjective probabilities, bets (a)-(c) are all fair (each has expected profit zero), so the person is willing to take all of them.

However,

(1) Suppose $A$ occurs. Then the person loses $1 - q_{A \cup B}$ from bet (a), wins $1 - q_A$ from bet (b) and loses $q_B$ from bet (c). The person’s profit is

$$-(1 - q_{A \cup B}) + (1 - q_A) - q_B = q_{A \cup B} - q_A - q_B = -d.$$

(2) Suppose $B$ occurs. The person’s profit is

$$-(1 - q_{A \cup B}) - q_A + (1 - q_B) = q_{A \cup B} - q_A - q_B = -d.$$

(3) Suppose neither $A$ nor $B$ occurs. The person’s profit is

$$q_{A \cup B} - q_A - q_B = -d.$$

Thus, the gambler has put the person in a position in which the person is guaranteed to lose $d$ dollars no matter what happens (when a person accepts a set of bets that guarantees that they will lose no matter what happens, it is said that a Dutch book has been made against them). So it is irrational for the person to assign $q_A + q_B > q_{A \cup B}$. A similar argument can be made that it is irrational to assign $q_A + q_B < q_{A \cup B}$.
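To make the argument concrete, here is a minimal Python sketch checking the guaranteed loss; the particular incoherent assignment $q_A = 0.3$, $q_B = 0.4$, $q_{A \cup B} = 0.5$ (so $d = 0.2$) is my own choice for illustration.

# Check the Dutch book above for one hypothetical incoherent assignment:
# q_A + q_B > q_AuB, so d = q_A + q_B - q_AuB > 0 is the guaranteed loss.
q_A, q_B, q_AuB = 0.3, 0.4, 0.5
d = q_A + q_B - q_AuB  # = 0.2

def profit(outcome):
    # Person's total profit from bets (a)-(c); outcome is 'A', 'B' or
    # 'neither' (A and B are mutually exclusive, so both cannot occur).
    bet_a = -(1 - q_AuB) if outcome in ('A', 'B') else q_AuB
    bet_b = (1 - q_A) if outcome == 'A' else -q_A
    bet_c = (1 - q_B) if outcome == 'B' else -q_B
    return bet_a + bet_b + bet_c

for outcome in ('A', 'B', 'neither'):
    print(outcome, round(profit(outcome), 10))  # always -d = -0.2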

Similar arguments can be made that a rational person’s subjective probabilities should satisfy the other axioms of probability: (1) for an event $E$, $P(E) \geq 0$; (2) $P(S) = 1$, where $S$ is the sample space.

Although, from Proposition 1, it is clear that a rational person’s personal probabilities should obey finite additivity (i.e., if $A_1, \ldots, A_n$ are mutually exclusive events, $P\left(\bigcup_{i=1}^n A_i\right) = \sum_{i=1}^n P(A_i)$), there is some controversy about additivity for a countably infinite sequence of mutually exclusive events (see J. Williamson, “Countable Additivity and Subjective Probability,” The British Journal for the Philosophy of Science, 1999). We assume countable additivity holds for subjective probability.

The mathematical axioms of probability, and hence all results in probability theory, hold for both subjective probabilities and frequentist probabilities -- it is just a matter of how we interpret the probabilities.

In particular, Bayes theorem holds for subjective probability.

Let $B$ be an event and $A_1, A_2, \ldots$ be mutually exclusive and exhaustive events with $P(A_j) > 0$ for all $j$. Then

$$P(A_j \mid B) = \frac{P(B \mid A_j)\, P(A_j)}{\sum_k P(B \mid A_k)\, P(A_k)}.$$

III. The Bayesian Approach to Statistics

Example 1 from Notes 1: Yao Ming’s free throws in the 2007-2008 season and future seasons are iid Bernoulli trials with probability $\theta$ of success. In the 2007-2008 season, Yao made 345 out of the 406 free throws he attempted (85.0%). What inferences can we make about $\theta$?


The Bayesian approach to statistics uses subjective probability: it regards the parameter $\theta$ as a fixed but unknown quantity about which we have beliefs given by a subjective probability distribution, and it uses the data to update those beliefs via Bayes rule.

Prior distribution: Subjective probability distribution about the parameter vector $\theta$ before seeing any data.

Posterior distribution: Updated subjective probability distribution about the parameter vector $\theta$ after seeing the data.

Bernoulli trials example: Suppose that $X_1, \ldots, X_n$ are iid Bernoulli with probability of success $\theta$.

We want our prior distribution for $\theta$ to be a distribution that concentrates on the interval $[0,1]$. A class of distributions that concentrates on $[0,1]$ is the two-parameter beta family.

The beta family of distributions: The density function $b_{r,s}(\theta)$ of the beta distribution with parameters $r > 0$ and $s > 0$ is

$$b_{r,s}(\theta) = \frac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)}\, \theta^{r-1} (1-\theta)^{s-1}, \quad 0 \leq \theta \leq 1.$$

The mean and variance of the Beta(r, s) distribution are $\frac{r}{r+s}$ and $\frac{rs}{(r+s)^2(r+s+1)}$, respectively.
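As a quick sanity check (not part of the original derivation), these formulas can be verified numerically against scipy’s beta distribution; the $(r, s)$ values below are arbitrary.

# Compare scipy's beta mean/variance with the closed-form formulas above.
from scipy.stats import beta

for r, s in [(1, 1), (2, 5), (82, 18)]:
    mean, var = beta.stats(r, s, moments='mv')
    print((r, s), float(mean), r / (r + s),
          float(var), r * s / ((r + s) ** 2 * (r + s + 1)))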

See Appendix B.2 for details.


Suppose our prior distribution for $\theta$ is Beta(r, s) with density $b_{r,s}(\theta)$ and we observe $X_1 = x_1, \ldots, X_n = x_n$.

Using Bayes theorem, our posterior distribution for $\theta$ is

$$\pi(\theta \mid x_1, \ldots, x_n) = \frac{\theta^{\sum_{i=1}^n x_i}(1-\theta)^{n-\sum_{i=1}^n x_i}\, b_{r,s}(\theta)}{\int_0^1 t^{\sum_{i=1}^n x_i}(1-t)^{n-\sum_{i=1}^n x_i}\, b_{r,s}(t)\, dt} \propto \theta^{r+\sum_{i=1}^n x_i - 1}(1-\theta)^{s+n-\sum_{i=1}^n x_i - 1}.$$

The last expression is proportional to the Beta($r+\sum_{i=1}^n x_i$, $s+n-\sum_{i=1}^n x_i$) density, so the posterior density for $\theta$ is Beta($r+\sum_{i=1}^n x_i$, $s+n-\sum_{i=1}^n x_i$).
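Here is a minimal numerical check of this conjugacy calculation: it normalizes likelihood times prior on a grid and compares the result with the closed-form Beta density. The data and prior parameters below are made up for illustration.

# Grid-based check that likelihood x prior, normalized, equals the
# Beta(r + sum x, s + n - sum x) density.
import numpy as np
from scipy.stats import beta

r, s = 2.0, 3.0                       # hypothetical prior parameters
x = np.array([1, 0, 1, 1, 0, 1])      # hypothetical Bernoulli data
n, k = len(x), int(x.sum())

grid = np.linspace(1e-6, 1 - 1e-6, 100001)
dx = grid[1] - grid[0]
unnorm = grid ** k * (1 - grid) ** (n - k) * beta.pdf(grid, r, s)
posterior = unnorm / (unnorm.sum() * dx)      # normalize on the grid

print(np.max(np.abs(posterior - beta.pdf(grid, r + k, s + n - k))))  # ~0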

Families of distributions for which the prior and posterior distributions belong to the same family are called conjugate families.

The posterior mean is thus equal to $\frac{r+\sum_{i=1}^n x_i}{r+s+n}$ and the posterior variance is $\frac{(r+\sum_{i=1}^n x_i)(s+n-\sum_{i=1}^n x_i)}{(r+s+n)^2(r+s+n+1)}$.

Returning to our example in which Yao Ming made 345 out of the 406 free throws he attempted (85.0%) in 2007-2008, if we had a Beta(1,1) [uniform] prior, then the posterior distribution for Yao’s probability of making a free throw in the next season is Beta(345+1, 406-345+1) = Beta(346, 62). The posterior mean is 0.848 and the posterior standard deviation is 0.018.
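A minimal sketch of this calculation using scipy’s beta distribution (the numbers are the ones above):

# Posterior for Yao's free-throw probability under a uniform prior.
from scipy.stats import beta

made, attempts = 345, 406
post = beta(made + 1, attempts - made + 1)   # posterior Beta(346, 62)

print(post.mean())           # 0.848
print(post.std())            # 0.018
print(post.interval(0.95))   # central 95% posterior interval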

A valuable feature of the Bayesian approach is its ability to incorporate prior information.

Returning to our example in which Yao Ming made 345 out of the 406 free throws he attempted (85.0%) in 2007-2008, we could use information from Yao’s previous seasons.

| Season    | Free Throws Made | Free Throws Attempted | FT%   |
| 2002-2003 | 301              | 371                   | 0.811 |
| 2003-2004 | 361              | 446                   | 0.809 |
| 2004-2005 | 389              | 497                   | 0.783 |
| 2005-2006 | 337              | 395                   | 0.853 |
| 2006-2007 | 356              | 413                   | 0.861 |
| Total     | 1744             | 2122                  | 0.822 |

A way to think about the Beta(r, s) prior is the following: Suppose that if we had no informative prior information on $\theta$, we would use a uniform (Beta(1,1)) prior. Then a Beta(r, s) prior means that our prior information is equivalent to seeing $r - 1$ successes and $s - 1$ failures prior to seeing the current data. So, for example, a Beta(82, 18) prior says that our prior information is equivalent to seeing 81 made free throws and 17 missed free throws prior to seeing our data. If we consider all the free throws from seasons prior to 2007-2008 to be equivalent to free throws in 2007-2008, we would use a Beta(1744+1, 2122-1744+1) = Beta(1745, 379) prior. If we consider each free throw in prior seasons to be equivalent to half a free throw of information, we would use a Beta(1744*0.5+1, (2122-1744)*0.5+1) = Beta(873, 190) prior.
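The following sketch compares the posteriors under these three priors, given the 2007-2008 data (345 makes, 61 misses); the labels are my own shorthand for the cases just described.

# Compare posteriors under the three priors discussed above.
from scipy.stats import beta

made, missed = 345, 406 - 345
priors = {'Beta(1,1), uniform': (1, 1),
          'Beta(1745,379), full weight on prior seasons': (1745, 379),
          'Beta(873,190), half weight on prior seasons': (873, 190)}

for name, (r, s) in priors.items():
    post = beta(r + made, s + missed)
    print(f'{name}: posterior mean {post.mean():.3f}, sd {post.std():.4f}')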


When there is more data and prior beliefs are less strong, the prior distribution does not play as strong a role.

The posterior mean is

$$\frac{r+\sum_{i=1}^n x_i}{r+s+n} = \frac{n}{r+s+n}\,\bar{x} + \frac{r+s}{r+s+n}\cdot\frac{r}{r+s}.$$

The posterior mean is a weighted average of the sample mean and the prior mean, with the weight on the sample mean increasing to 1 as $n \to \infty$.

Example 2: A cofferdam protecting a construction site was designed to withstand flows of up to 1870 cubic feet per second (cfs). An engineer wishes to compute the probability that the dam will be overtopped during the upcoming year. Over the previous 25-year period, the annual maximum flood levels of the dam had ranged from 629 to 4720 cfs and 1870 cfs had been exceeded 5 times.

Modeling the 25 years as 25 independent Bernoulli trials with the same probability $\theta$ that the flood level will exceed 1870 cfs, and using a uniform prior distribution for $\theta$ (which corresponds to a Beta(1,1)), the prior density is the Beta(1,1) density and the posterior density is the Beta(1+5, 1+25-5) = Beta(6, 21) density.

[Figure: the Beta(1,1) prior density and the Beta(6,21) posterior density for $\theta$.]
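A minimal sketch of this calculation; note that, treating the upcoming year as another Bernoulli trial with the same $\theta$, the posterior probability that the dam is overtopped next year is the posterior mean of $\theta$.

# Posterior for the exceedance probability: uniform prior, 5 of 25 years.
from scipy.stats import beta

post = beta(1 + 5, 1 + 25 - 5)    # posterior Beta(6, 21)
print(post.mean())                # 6/27 = 0.222, prob. of overtopping next year
print(post.interval(0.95))        # central 95% posterior interval for theta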

IV. Bayesian Inference for the Normal Distribution

Suppose $X_1, \ldots, X_n$ are iid $N(\mu, \sigma^2)$, $\sigma^2$ known, and our prior on $\mu$ is $N(\eta, \tau^2)$.

The posterior distribution is proportional to

$$\pi(\mu \mid x_1, \ldots, x_n) \propto \exp\left\{-\frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \mu)^2\right\} \exp\left\{-\frac{(\mu - \eta)^2}{2\tau^2}\right\}.$$

Thus, the posterior distribution is

$$\mu \mid x_1, \ldots, x_n \sim N\left(\frac{\frac{n}{\sigma^2}\bar{x} + \frac{1}{\tau^2}\eta}{\frac{n}{\sigma^2} + \frac{1}{\tau^2}},\; \frac{1}{\frac{n}{\sigma^2} + \frac{1}{\tau^2}}\right).$$

Note that the posterior mean is a weighted average of the sample mean $\bar{x}$ and the prior mean $\eta$, with the weight on the sample mean increasing to 1 as $n \to \infty$.
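A minimal sketch of this update in Python; the simulated data and the prior parameters $\eta$ and $\tau^2$ below are arbitrary assumptions for illustration.

# Normal-normal update: posterior (mean, variance) for mu.
import numpy as np

def normal_posterior(x, sigma2, eta, tau2):
    # x_i iid N(mu, sigma2) with sigma2 known; prior mu ~ N(eta, tau2).
    n = len(x)
    precision = n / sigma2 + 1 / tau2            # posterior precision
    mean = ((n / sigma2) * np.mean(x) + eta / tau2) / precision
    return mean, 1 / precision

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=50)      # sigma2 = 1 assumed known
print(normal_posterior(x, sigma2=1.0, eta=0.0, tau2=4.0))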

V. Bickel and Doksum’s perspective on Bayesian models.

(a) Bayesian models are useful as a way to generate statistical procedures which incorporate prior information when appropriate.

However, statistical procedures should be evaluated in a frequentist (repeated sampling) way:

For example, for the iid Bernoulli trials example, if we use the posterior mean with a uniform prior distribution to estimate $\theta$, i.e., $\hat{\theta} = \frac{\sum_{i=1}^n X_i + 1}{n+2}$, then we should look at how this estimate would perform in many repetitions of the experiment if the true parameter is $\theta_0$, for various values of $\theta_0$. More to come on this frequentist perspective in Section 1.3.
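Here is a minimal simulation sketch of this frequentist evaluation; the sample size $n = 25$ and the true value $\theta_0 = 0.3$ are arbitrary choices. It compares the mean squared error of the posterior mean under a uniform prior with that of the sample proportion.

# Repeated-sampling evaluation of the Bayes estimate at a fixed theta_0.
import numpy as np

rng = np.random.default_rng(0)
n, reps, theta0 = 25, 100_000, 0.3

x_sums = rng.binomial(n, theta0, size=reps)   # sum of n Bernoulli trials
bayes = (x_sums + 1) / (n + 2)                # posterior mean, Beta(1,1) prior
mle = x_sums / n                              # sample proportion

print('MSE of Bayes estimate:', np.mean((bayes - theta0) ** 2))
print('MSE of sample mean:   ', np.mean((mle - theta0) ** 2))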

(b) We can view the parameter as random and view the Bayesian model as providing a joint probability distribution on the parameter and the data in a frequentist probability sense.

Consider the frequentist probability model $\{P_\theta : \theta \in \Theta\}$ for the probability distribution of the data $X$.

The subjective Bayesian perspective is that there is a true unknown $\theta$ and our goal is to describe our beliefs about $\theta$ after seeing the data $X = x$. This requires specifying a prior distribution $\pi(\theta)$ for our beliefs about $\theta$ before seeing the data; the posterior distribution describes our beliefs about $\theta$ after seeing the data $X = x$.

Bickel and Doksum’s viewpoint is to see a Bayesian model as a model for the joint probability distribution of $(\theta, X)$, where $\theta$ is considered random. Specifically, a Bayesian model consists of a specification of the marginal distribution of $\theta$, which is the prior distribution $\pi(\theta)$, and a specification of the conditional distribution of the data $X$ given $\theta$, i.e., $P_\theta$.

For example, if the data $X_1, \ldots, X_n$ are iid Bernoulli trials with probability $\theta$ of success and the prior distribution for $\theta$ is Beta(r, s), then the joint probability distribution for $(\theta, X_1, \ldots, X_n)$ is generated by:

1. We first generate $\theta$ from a Beta(r, s) distribution.

2. Conditional on $\theta$, we generate $X_1, \ldots, X_n$ as iid Bernoulli trials with probability $\theta$ of success.
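A minimal sketch of this two-stage generation; the values of $r$, $s$ and $n$ below are arbitrary.

# Draw (theta, X_1, ..., X_n) from the joint Bayesian model.
import numpy as np

rng = np.random.default_rng(0)
r, s, n = 2.0, 3.0, 10

theta = rng.beta(r, s)                 # step 1: theta ~ Beta(r, s)
x = rng.binomial(1, theta, size=n)     # step 2: X_i | theta iid Bernoulli(theta)
print(theta, x)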

Although the conceptual basis for regarding $\theta$ as random in a frequentist sense is not clear, this point of view is useful for developing properties of Bayesian procedures.
