


Rothman KJ, Greenland S, Lash TL. Chapter 10: Precision and statistics in epidemiologic studies. Modern Epidemiology: 2008.

- Sampling error = random error due to the process of selecting specific study subjects.

- All epidemiologic studies are viewed as a figurative sample of possible people who could have been included in the study.

- A measure of random variation is the variance of the process, i.e., the mean squared deviation from the mean (its square root is the standard deviation). The statistical precision of a measurement or process is often taken to be the inverse of the variance; precision is the opposite of random error. A study has improved statistical efficiency if it can estimate the same quantity with higher precision.
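The idea that precision is the inverse of the variance can be seen by simulation: treat each study as a figurative sample from a super-population and watch the variance of the estimate shrink (and precision grow) as the sample size rises. A minimal Python sketch, with all numbers illustrative and the function name `mean_of_sample` my own:

```python
import random
import statistics

random.seed(0)

def mean_of_sample(n):
    # One figurative "study": draw n subjects from a super-population
    # with true mean 0 and SD 1, and estimate the mean.
    return statistics.fmean(random.gauss(0, 1) for _ in range(n))

for n in (25, 100, 400):
    estimates = [mean_of_sample(n) for _ in range(2000)]
    var = statistics.variance(estimates)
    # Precision = 1/variance; quadrupling n roughly quadruples precision.
    print(f"n={n:4d}  variance of estimate={var:.4f}  precision={1/var:.0f}")
```

The variance of the estimates falls roughly as 1/n, so the larger study estimates the same quantity with higher precision, i.e., greater statistical efficiency.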

- Significance and hypothesis testing

- Null hypothesis – formulated as a hypothesis of no association between two variables in the super-population; this is the test hypothesis.

- If p ≥ alpha, that does not mean there is no difference between the two study groups – describing the observed groups does not require statistical inference.

- If p ≥ alpha, that does not mean that there is no difference between groups of the super-population – it means only that one cannot reject the null hypothesis that the super-population groups are the same.

- Conversely, the p-value is P(|T| ≥ t | H0), the probability under the null of a test statistic at least as extreme as the one observed, whereas the probability of making a type I error is P(|T| ≥ t* | H0), where t* is the critical value. For a standard normal variate at alpha = 0.05, t* = 1.96 and P(type I error) = 0.05, by definition.
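The distinction between an observed p-value and the fixed type I error rate can be checked numerically. A small Python sketch using only the standard library (the helper name `two_sided_p` is mine), writing the standard normal CDF via `math.erf`:

```python
import math

def two_sided_p(z):
    # P(|Z| >= z) for a standard normal Z: 2 * (1 - Phi(z)),
    # where Phi is the standard normal CDF expressed via erf.
    phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return 2 * (1 - phi)

# The observed statistic t gives the p-value; the critical value t*
# fixes the type I error rate.  At alpha = 0.05, t* = 1.96 and
# P(|Z| >= 1.96 | H0) is about 0.05 by construction.
print(two_sided_p(1.96))   # approximately 0.0500
print(two_sided_p(2.50))   # the p-value for an observed z of 2.5
```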

- The probability of a type I error must account for the means by which the hypothesis was rejected – especially with respect to previous comparisons. So if P(type I error) = 0.05 = alpha for each test, then the chance of making no type I error at all over k repetitions is (1 − alpha)^k. The chance of making one or more type I errors is the complement of making zero type I errors, i.e., 1 − (1 − alpha)^k. By the time 4 comparisons have been performed, there is almost a 20% chance that 1 or more comparisons will falsely reject the null hypothesis.
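The multiplicity arithmetic above (assuming independent tests) can be verified directly; a short Python sketch, with the function name `familywise_error` my own:

```python
def familywise_error(alpha, k):
    # Chance of at least one false rejection among k independent tests,
    # each run at level alpha: 1 - (1 - alpha)^k.
    return 1 - (1 - alpha) ** k

for k in (1, 2, 4, 10):
    print(k, round(familywise_error(0.05, k), 3))
# k = 4 gives about 0.185, i.e., nearly a 20% chance of one or more
# false rejections of the null hypothesis.
```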

- Appropriateness of statistical testing

- Type I and type II errors arise when investigators dichotomize study results into categories “significant” and “non-significant”. This is unnecessary and degrades study information.

- Statistical significance testing has roots in industrial and agricultural decision-making. However, for public health, making a qualitative decision on the basis of a single study is inappropriate. Meta-analyses often show that “non-significant” findings may actually represent a real effect – epidemiologic knowledge is an accretion of previous findings.

- Using statistical significance as the primary basis for inference is misleading, since examining the confidence interval of an imprecise study readily shows that the data are compatible with a wide range of hypotheses, of which the null hypothesis is only one.

- A small p-value may be obtained from a small effect (in a large study), while a large p-value may accompany a large effect (in a small study). Yet the latter is often offered as evidence against a large effect in standard significance testing.

- Also, an association may be "confirmed" by one study and "refuted" by another on the basis of statistical hypothesis testing, even when the data in both instances are maximally compatible with the same association.


- Confidence intervals

- Compute p-values for a broad range of possible test hypotheses. The interval of parameter values for which the p-value exceeds alpha represents the range of parameters compatible with the data, where "compatible" means the data offer insufficient evidence to reject that hypothesized parameter value.
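This test-inversion view of a confidence interval can be sketched by brute force: test every hypothesized parameter value on a grid and keep those the data cannot reject at level alpha. A minimal Python illustration for a normal estimate with known standard error (all numbers and names illustrative):

```python
import math

def two_sided_p(z):
    # P(|Z| >= |z|) under H0, via the standard normal CDF (erf form).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

estimate, se, alpha = 1.2, 0.5, 0.05

# Test each hypothesized value h on a grid of step 0.01; keep those
# with p-value above alpha.  The kept values form the (1-alpha) CI.
compatible = [h / 100 for h in range(-100, 400)
              if two_sided_p((estimate - h / 100) / se) > alpha]
print(min(compatible), max(compatible))
# The endpoints approximate the Wald limits 1.2 +/- 1.96 * 0.5.
```

The grid makes explicit that a confidence interval is not a single verdict about the null but the whole set of hypotheses the data cannot distinguish from the estimate.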

- The confidence level of a CI is 1-alpha.

- Using the Wald approximation: the standardized statistic is t* = (t − t̄) / SE(t̄), where t̄ is the point estimate and t the hypothesized parameter value. Also 1 − alpha = P(−z ≤ (t − t̄)/SE(t̄) ≤ z), where z is the alpha-level critical value; inverting this gives the Wald confidence limits t̄ ± z · SE(t̄).
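As a concrete illustration of Wald limits, one might compute a 95% confidence interval for a risk ratio from hypothetical 2×2 counts, working on the log scale where the normal approximation is better. The counts here are invented and the standard-error expression is the usual delta-method formula for a log risk ratio:

```python
import math

# Hypothetical cohort counts: a cases among N1 exposed,
# b cases among N0 unexposed (all numbers invented).
a, N1, b, N0 = 30, 100, 15, 100

log_rr = math.log((a / N1) / (b / N0))
# Delta-method SE of the log risk ratio.
se = math.sqrt(1/a - 1/N1 + 1/b - 1/N0)

z = 1.96  # critical value for a 95% interval (alpha = 0.05)
lo = math.exp(log_rr - z * se)
hi = math.exp(log_rr + z * se)
print(f"RR = {math.exp(log_rr):.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The interval is built as estimate ± z·SE on the log scale and then exponentiated, exactly the Wald inversion described above.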
