Homework 10 Solutions

1. Ross 7.79 p. 418

a) Let $X_1$ and $X_2$ represent the weekly sales for the next 2 weeks; then $(X_1, X_2)$ has a bivariate normal distribution with common mean 40, common standard deviation 6, and correlation 0.6. Thus, $X_1 + X_2$ has a normal distribution with mean 80 and standard deviation $\sqrt{36 + 36 + 2(0.6)(36)} = \sqrt{115.2} \approx 10.73$, and the probability that the total of the next 2 weeks' sales exceeds 90 is

$P(X_1 + X_2 > 90) = P\left(Z > \frac{90 - 80}{10.73}\right) = P(Z > 0.93) \approx 0.176.$

b) If the correlation were 0.2 instead of 0.6, the answer from (a) would decrease. Observe that a smaller correlation would result in a smaller standard deviation for $X_1 + X_2$. A deviation of 10 from the mean is more pronounced when the standard deviation is smaller, so the z-score would increase and the right-tail probability we are seeking would decrease.

c) The standard deviation is now $\sqrt{36 + 36 + 2(0.2)(36)} = \sqrt{86.4} \approx 9.30$, and the probability that the total of the next 2 weeks' sales exceeds 90 is

$P(X_1 + X_2 > 90) = P\left(Z > \frac{90 - 80}{9.30}\right) = P(Z > 1.08) \approx 0.14.$
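As a quick numerical check of parts (a) and (c), here is a sketch using scipy's normal survival function (the means, standard deviations, and correlations are those stated above):

from math import sqrt
from scipy.stats import norm

mu, sd = 40.0, 6.0      # weekly mean and standard deviation
for rho in (0.6, 0.2):  # parts (a) and (c)
    total_sd = sqrt(2 * sd**2 + 2 * rho * sd**2)  # SD of X1 + X2
    p = norm.sf(90, loc=2 * mu, scale=total_sd)   # P(X1 + X2 > 90)
    print(f"rho = {rho}: sd = {total_sd:.2f}, P = {p:.3f}")
# prints P ~ 0.176 for rho = 0.6 and P ~ 0.141 for rho = 0.2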

2.

a) The regression to the mean effect implies that for any test-retest situation, we would expect higher “scores” on the initial test to decrease on a re-test, and similarly, lower “scores” on the initial test to increase on a re-test, regardless of any other influences. In the context of this problem, we would naturally expect higher first readings to go down on the second reading, and lower first readings to go up on the second reading; the patient’s state of tension is irrelevant because the observed outcomes are consistent with the regression to the mean effect.

b) If a large study is performed, and it is found that first readings average 130 mm, second readings average 120 mm, and both readings have a standard deviation of about 15 mm, then this evidence supports the first doctor's claim that patients are more relaxed on the second reading. A difference in the overall means between two readings (or a test and a re-test) cannot be attributed to the regression to the mean effect; see Notes 23, in which the regression to the mean effect occurs in a context in which the test and retest have the same overall means and standard deviations.
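To illustrate why the conclusion in (a) needs no behavioral explanation, here is a small simulation sketch: it draws test–retest pairs from a bivariate normal with identical marginals (mean 130 mm and SD 15 mm as in part (b); the correlation of 0.7 is an illustrative assumption) and shows the extremes regressing toward the mean even though the overall level does not move.

import numpy as np

rng = np.random.default_rng(0)
mu, sd, rho, n = 130.0, 15.0, 0.7, 100_000  # rho = 0.7 is illustrative
cov = [[sd**2, rho * sd**2], [rho * sd**2, sd**2]]
first, second = rng.multivariate_normal([mu, mu], cov, size=n).T

high, low = first > mu + sd, first < mu - sd
print(second[high].mean() - first[high].mean())  # negative: high readings fall
print(second[low].mean() - first[low].mean())    # positive: low readings rise
print(second.mean() - first.mean())              # ~0: no shift in the overall mean

Contrast this with part (b), where the overall mean itself drops by 10 mm, which regression to the mean alone cannot produce.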

3.

i) Ross 8.2 p. 457

a) Let X be the random variable denoting the score on the final exam. Using Markov's inequality, we have

$P(X > 85) \leq \frac{E[X]}{85} = \frac{75}{85} \approx 0.882.$

b) From Chebyshev's inequality, we have

$P(|X - 75| \geq 10) \leq \frac{\operatorname{Var}(X)}{10^2} = \frac{25}{100} = \frac{1}{4}.$

Hence, $P(65 < X < 85) \geq 1 - \frac{1}{4} = \frac{3}{4}.$
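Both bounds can be reproduced exactly with Python's Fraction type; a minimal sketch:

from fractions import Fraction

markov = Fraction(75, 85)            # P(X > 85) <= E[X]/85
chebyshev = 1 - Fraction(25, 10**2)  # P(65 < X < 85) >= 1 - Var(X)/10^2
print(markov, float(markov))         # 15/17, ~0.882
print(chebyshev)                     # 3/4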

c) Let n represent the number of students taking the final exam. Then, the average test score of n students is a random variable $\bar{X}$ with mean 75 and variance 25/n. Also, note that having the class average be within 5 of 75 with probability at least 0.9 is equivalent to having the class average be more than 5 away from 75 with probability at most 0.1; hence, using Chebyshev's inequality again, we have

$P(|\bar{X} - 75| \geq 5) \leq \frac{25/n}{5^2} = \frac{1}{n} \leq 0.1.$

Solving the RHS gives us $n \geq 10$, so we need at least 10 students to have a probability of at least 0.9 that the class average is within 5 of 75.

ii) Ross 8.3 p. 457

Using the CLT, we have

$P(|\bar{X} - 75| \leq 5) \approx P\left(|Z| \leq \frac{5}{\sqrt{25/n}}\right) = P(|Z| \leq \sqrt{n}) = 2\Phi(\sqrt{n}) - 1 \geq 0.9.$

Looking in a normal table, this means $\Phi(\sqrt{n}) \geq 0.95$, i.e., $\sqrt{n} \geq 1.645$, or $n \geq 2.71$; so we need n to be at least 3 under the CLT (note, however, that the CLT is an asymptotic result, so its accuracy when n is small – starting at 3 in this case – can reasonably be questioned).
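The two required sample sizes can also be checked numerically; a short sketch that solves both conditions (norm.ppf supplies the 0.95 quantile):

import math
from scipy.stats import norm

n_cheb = math.ceil(1 / 0.1)  # Chebyshev: 1/n <= 0.1
z = norm.ppf(0.95)           # ~1.645
n_clt = math.ceil(z**2)      # CLT: sqrt(n) >= z
print(n_cheb, n_clt)         # 10 versus 3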

4. Ross 8.7 p. 457

Let $X_i$, $i = 1, \ldots, 100$, represent the lifetime of the i-th lightbulb, so that the $X_i$'s are independent exponentials with mean 5 hours (hence each has variance 25). Then, using the CLT, we have

$P\left(\sum_{i=1}^{100} X_i > 525\right) \approx P\left(Z > \frac{525 - 100(5)}{\sqrt{100(25)}}\right) = P(Z > 0.5) \approx 0.3085.$
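Because a sum of i.i.d. exponentials is exactly gamma distributed, the CLT answer can be compared against the exact tail; a sketch assuming, as above, 100 exponential lifetimes with mean 5 hours:

from scipy.stats import norm, gamma

print(norm.sf(525, loc=500, scale=50))  # CLT approximation: ~0.3085
print(gamma.sf(525, a=100, scale=5))    # exact Gamma(shape 100, scale 5) tail: ~0.30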

5. Ross 8.8 p. 457

Let $X_i$ be defined as above, and let $U_i$, $i = 1, \ldots, 99$, be the time needed to replace the i-th lightbulb (note that once all bulbs have failed, we stop, so we do not include the time needed to replace the very last lightbulb). So, the probability we are looking for is

$P\left(\sum_{i=1}^{100} X_i + \sum_{i=1}^{99} U_i \leq 550\right).$

Now, since the replacement time is uniformly distributed over (0, 0.5), we have:

$E[U_i] = 0.25, \qquad \operatorname{Var}(U_i) = \frac{(0.5)^2}{12} = \frac{1}{48}.$

Hence, using the CLT,

$P\left(\sum_{i=1}^{100} X_i + \sum_{i=1}^{99} U_i \leq 550\right) \approx P\left(Z \leq \frac{550 - [100(5) + 99(0.25)]}{\sqrt{100(25) + 99/48}}\right) = P(Z \leq 0.50) \approx 0.6915.$
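A numerical sketch of the same calculation (the replacement-time moments come from E[U] = 0.25 and Var(U) = (0.5)^2/12 for a Uniform(0, 0.5)):

from math import sqrt
from scipy.stats import norm

mean = 100 * 5 + 99 * 0.25           # 524.75 hours
var = 100 * 25 + 99 * (0.5**2 / 12)  # 2500 + 99/48
print(norm.cdf(550, loc=mean, scale=sqrt(var)))  # ~0.69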

6. Ross 8.13 p. 458

a) Let $\bar{X}$ denote the average test score for the class of size 25. Then, by the CLT,

$P(\bar{X} > 80) \approx P\left(Z > \frac{80 - 74}{14/\sqrt{25}}\right) = P(Z > 2.14) \approx 0.0162.$

b) Let $\bar{Y}$ denote the average test score for the class of size 64. Then,

$P(\bar{Y} > 80) \approx P\left(Z > \frac{80 - 74}{14/\sqrt{64}}\right) = P(Z > 3.43) \approx 0.0003.$

c) Since $\bar{Y} - \bar{X}$ is approximately normal with mean 0 and variance $\frac{14^2}{64} + \frac{14^2}{25} = 10.9025$, we have

$P(\bar{Y} - \bar{X} > 2.2) \approx P\left(Z > \frac{2.2}{\sqrt{10.9025}}\right) = P(Z > 0.67) \approx 0.2514.$

d) Same as (c), by the symmetry of the difference about 0: $P(\bar{X} - \bar{Y} > 2.2) \approx 0.2514.$
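All four answers can be reproduced with scipy; a sketch using the mean 74 and standard deviation 14 from above:

from math import sqrt
from scipy.stats import norm

mu, sd = 74.0, 14.0
se25, se64 = sd / sqrt(25), sd / sqrt(64)  # standard errors of the two averages
print(norm.sf(80, loc=mu, scale=se25))     # (a) ~0.016
print(norm.sf(80, loc=mu, scale=se64))     # (b) ~0.0003
se_diff = sqrt(se25**2 + se64**2)          # SD of the difference of the averages
print(norm.sf(2.2, loc=0, scale=se_diff))  # (c) and (d) ~0.25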

7. Ross TE 8.9 p. 460

We would expect the proportion of heads on the remaining 900 tosses to be 0.5 (so the expected proportion of heads on the 1000 tosses would be (100 + 450)/1000 = 0.55), as the remaining tosses are independent of the first 100 tosses. The strong law of large numbers guarantees that, in the long run, the overall proportion of heads converges to 0.5; however, it does not suggest that the remaining 900 tosses will behave in a manner designed to bring the proportion of heads in these 1000 tosses back to 0.5 (for that to happen, the 900 remaining tosses would need to yield 400/900 ≈ 44.4% heads).
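The two proportions quoted above are easy to verify:

expected_heads = 100 + 0.5 * 900  # heads already seen plus expected remaining heads
print(expected_heads / 1000)      # 0.55
print(400 / 900)                  # ~0.4444: the heads rate the remaining 900 tosses
                                  # would need for an overall proportion of 0.5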

8.

a) Since Z is an indicator variable, $E[Z] = P(Z = 1)$. The probability $P(Z = 1)$ is the probability that the random point falls inside the object, but this is just the proportion of the unit square that is covered by the object; i.e.,

$E[Z] = P(Z = 1) = A,$

where A denotes the area of the object.

b) This is the essence of Monte Carlo integration: suppose we have a complicated shape S with unknown area A which can be contained entirely within a simple shape S* with known area A* (S* chosen so that its area is simple to compute). If we pick n points randomly from within S*, we can then approximate the area A as the fraction of those points that fall within S multiplied by A*. In the context of this problem, S* is the unit square, so A* = 1; hence, we’d estimate A using the fraction of the n points that fall within the object.

c) Let $Z_i$, $i = 1, \ldots, n$, be the indicator of the event that the i-th of our n independent random points falls within the object, and let $\hat{A} = \frac{1}{n}\sum_{i=1}^{n} Z_i$. The $Z_i$'s are indicators with mean A and variance $A(1 - A)$. Hence, using the CLT, we have

$P(|\hat{A} - A| \leq 0.01) \approx 2\Phi\left(\frac{0.01\sqrt{n}}{\sqrt{A(1-A)}}\right) - 1.$

For this probability to equal 0.99, we require

$\frac{0.01\sqrt{n}}{\sqrt{A(1-A)}} \geq 2.576, \quad \text{i.e.,} \quad n \geq \left(\frac{2.576}{0.01}\right)^2 A(1-A) = 66{,}357.76\,A(1-A).$

This is unsatisfying: the approximate sample size as it stands depends upon the very value that we're estimating. However, observe that $A(1-A)$ has a maximum value of 0.25 (when $A = 1/2$), so the most conservative sample size would be $66{,}357.76 \times 0.25 = 16{,}589.44$. Rounding up, we conclude that using 16,590 points would be sufficient for $P(|\hat{A} - A| \leq 0.01)$ to be at least 0.99 no matter what the true value of A is.
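A minimal Monte Carlo sketch of parts (b) and (c): the quarter disk $x^2 + y^2 \leq 1$ is a hypothetical stand-in for the object (its true area, $\pi/4$, lets us check the accuracy guarantee), and n = 16,590 is the conservative sample size derived above.

import math
import numpy as np

rng = np.random.default_rng(42)
n = 16_590                           # conservative sample size from part (c)
x, y = rng.random(n), rng.random(n)  # n uniform random points in the unit square
inside = x**2 + y**2 <= 1.0          # hypothetical object: quarter disk, area pi/4
A_hat = inside.mean()                # fraction of points inside, times A* = 1
print(A_hat, abs(A_hat - math.pi / 4))  # error should be <= 0.01 w.p. ~0.99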
