Principles of sample size calculation - The EQUATOR Network

Principles of sample size calculation

Jonathan Cook (with thanks to Doug Altman)

Centre for Statistics in Medicine, NDORMS, University of Oxford

EQUATOR – OUCAGS training course, 24 October 2015

Outline

- Principles of study design
- Principles of study sample size calculation
- How to determine the sample size
- How to calculate in practice
- Summary


Study design – general principles

Play of chance could mislead

- The more subtle the question, the more precise the evaluation needs to be (i.e. more information, that is, more data)

We need to be clear about our question

- What exactly are we interested in?
- How precisely do we want to know it?

Study (including sample size) should be fit for purpose

- Relevant
- Sufficient for the intended analysis


Study size – how big?

Fundamental aspect of study design

- How many participants are needed?

Ethically and scientifically important

- Legitimate experimentation
- Adds to knowledge

Impact upon study conduct (e.g. 100 versus 2000 participants)

- Management of the project
- Timeframe
- Cost


Principles of sample size calculation

Aim

- We wish to compare the outcome between the treatments and determine whether there is a difference between them

Typical approach for RCT sample size calculation

- Choose the key (primary) outcome and base the calculation on it
- Choose a sample size large enough to give reassurance that we will be able to detect a meaningful difference in the primary outcome
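As an illustration of this standard approach (a sketch, not part of the slides): for a binary primary outcome, the usual normal-approximation formula gives the per-group sample size from the significance level, the target power, and the assumed event rates in the two groups. The 60% versus 70% event rates below are illustrative assumptions.

```python
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for detecting a difference between two
    proportions with a two-sided test (normal approximation)."""
    nd = NormalDist()                       # standard normal
    z_alpha = nd.inv_cdf(1 - alpha / 2)     # e.g. 1.96 for alpha = 0.05
    z_beta = nd.inv_cdf(power)              # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)                     # round up to a whole participant

# Illustrative assumption: 60% event rate on control, hoped-for 70% on treatment
print(n_per_group(0.60, 0.70))  # 354 per group
```

Note that a smaller assumed difference (e.g. 60% versus 65%) demands a larger sample size, which is the "more subtle question, more data" point made earlier.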

Main alternative approach

- Seek to estimate a quantity with a given precision

Same principles apply to all types of study

- What we are looking for may well differ
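The precision-based alternative can be sketched the same way (my illustration, not from the slides): choose n so that a confidence interval for the quantity of interest has a given half-width. Here the quantity is a single proportion; the assumed 50% prevalence and ±5 percentage-point margin are illustrative.

```python
import math
from statistics import NormalDist

def n_for_precision(p: float, margin: float, conf: float = 0.95) -> int:
    """Sample size so that a confidence interval for a proportion p
    has half-width <= margin (normal approximation)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # 1.96 for 95% confidence
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Illustrative assumption: expect ~50% prevalence, estimate it to within +/- 5%
print(n_for_precision(0.50, 0.05))  # 385
```

Using p = 0.5 is the conservative choice, since p(1 − p) is largest there.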


Reaching the wrong conclusion (1)

What can go wrong: we may conclude that there is a difference in outcome between the active and control groups, when in fact no such difference exists

Technically called a Type I error

- more usefully called a false-positive result

The probability of making such an error is designated α, commonly known as the significance level

The risk of a false-positive conclusion (Type I error) does not decrease as the sample size increases
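A quick simulation (my sketch, not part of the slides) illustrates the last point: when the two groups are truly identical, a test at the 5% significance level rejects about 5% of the time whatever the sample size.

```python
import math
import random
from statistics import NormalDist

def type1_rate(n: int, sims: int = 4000, alpha: float = 0.05, seed: int = 1) -> float:
    """Simulate two groups drawn from the SAME normal distribution and
    count how often a two-sided z-test (known SD = 1) falsely rejects."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    rejections = 0
    for _ in range(sims):
        mean_a = sum(rng.gauss(0, 1) for _ in range(n)) / n
        mean_b = sum(rng.gauss(0, 1) for _ in range(n)) / n
        z = (mean_a - mean_b) / math.sqrt(2 / n)  # SE of a difference in means
        if abs(z) > z_crit:
            rejections += 1
    return rejections / sims

# Roughly 5% false positives at both small and large n
print(type1_rate(20), type1_rate(200))
```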


Reaching the wrong conclusion (2)

We may conclude that there is no evidence of a difference in outcomes between the active and control groups, when in fact there is such a difference

Technically called a Type II error

- more usefully called a false-negative result

The probability of making such an error is often designated β (1 − β is commonly known as the statistical power)

Risk of missing an important difference (Type II error) decreases as the sample size increases
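To make this relationship concrete (an illustrative sketch, not from the slides): for a fixed true difference, the power of the test grows with the per-group sample size. The 60% versus 70% event rates are assumed for illustration.

```python
import math
from statistics import NormalDist

def power_two_proportions(p1: float, p2: float, n: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided test comparing two proportions
    with n participants per group (normal approximation)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    se = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n)  # SE of the difference
    return nd.cdf(abs(p1 - p2) / se - z_alpha)

# Power rises as the per-group sample size grows
for n in (100, 200, 354):
    print(n, round(power_two_proportions(0.60, 0.70, n), 2))
```

With these assumed rates, power reaches the conventional 80% at roughly 354 per group, matching the sample-size formula run in reverse.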


Type I and Type II errors

                                 There really is       There really is
                                 a difference          no difference

Statistically significant        OK                    Type I error
                                                       (false positive)

Statistically non-significant    Type II error         OK
                                 (false negative;
                                  1 - power)
