
Management Dynamics Volume 12 No. 4, 2003


RESEARCH NOTE

Practical significance (effect sizes) versus or in combination with statistical significance (p-values)

S.M. Ellis H.S. Steyn

Potchefstroom University for CHE

ABSTRACT

Statistical significance tests have a tendency to yield small p-values (indicating significance) as the size of the data set increases. The effect size is independent of sample size and is a measure of practical significance. It can be understood as an effect large enough to be important in practice, and is described for differences in means as well as for the relationship in two-way frequency tables and for a multiple regression fit.

INTRODUCTION

An advantage of drawing a random sample is that it enables one to study the properties of a population within the time and money available. In such cases statistical significance tests (e.g. t-tests) are used to show that a result (e.g. the difference between two means) is significant. The p-value is the criterion for this: it gives the probability of obtaining the observed value, or a larger one, under the assumption that the null hypothesis (e.g. no difference between the means) is true. A small p-value (e.g. smaller than 0.05) is considered sufficient evidence that the result is statistically significant. Statistical significance does not necessarily imply that the result is important in practice, because these tests tend to yield small p-values (indicating significance) as the size of the data set increases.
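The dependence of the p-value on sample size can be illustrated with a short sketch (a hypothetical helper written for this note, not from the article): a fixed mean difference of one IQ point, trivially small in practice, eventually becomes "statistically significant" once the samples are large enough.

```python
import math

def two_sample_z_pvalue(diff, sd1, sd2, n):
    """Two-sided p-value for a two-sample z-test with equal group sizes n."""
    z = diff / math.sqrt(sd1**2 / n + sd2**2 / n)
    # Standard normal CDF via the error function: Phi(z) = 0.5*(1 + erf(z/sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# The same difference of 1 IQ point, with the same standard deviations,
# yields ever smaller p-values as the group size n grows.
for n in (50, 500, 5000, 50000):
    print(n, round(two_sample_z_pvalue(1.0, 10.0, 12.0, n), 4))
```

For small n the difference is "not significant"; for very large n it is, even though the difference itself has not changed. This is the motivation for reporting effect sizes alongside p-values.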

In many cases researchers are forced to consider their obtained results as a subpopulation of the target population owing to a weak response to the planned random sample. In other cases data obtained from convenience sampling are erroneously analysed as if they were obtained by random sampling. Such data should be considered as small populations, for which statistical inference and p-values are not relevant: statistical inference draws conclusions about the population from which a random sample was drawn, using the descriptive measures that have been calculated. Instead of reporting only descriptive statistics in these cases, effect sizes can be determined. Practical significance can be understood as a difference large enough to have an effect in practice.

Many different effect sizes exist (see Rosenthal, 1991 and Steyn, 1999), but here we discuss only those most frequently used, i.e. for the difference between means and for relationships in two-way frequency (contingency) tables and in multiple regression.

EFFECT SIZE FOR THE DIFFERENCE BETWEEN MEANS

Consider the following example of testing the difference in mean IQ between two random samples of size 200 from different populations. With means and standard deviations of 110 ± 10 and 107 ± 12, a test statistic of

z = (110 − 107) / √(10²/200 + 12²/200) = 2.72
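The calculation can be verified with a minimal sketch. The normal-approximation p-value is standard; the effect-size line uses the larger of the two standard deviations as denominator, which is an assumption about the convention intended here, since the text is truncated before the formula is given.

```python
import math

# Two-sample z-test for the IQ example: n = 200 per group,
# means 110 and 107, standard deviations 10 and 12.
n = 200
mean1, sd1 = 110.0, 10.0
mean2, sd2 = 107.0, 12.0

# z = (mean1 - mean2) / sqrt(sd1^2/n + sd2^2/n)
z = (mean1 - mean2) / math.sqrt(sd1**2 / n + sd2**2 / n)

# Two-sided p-value from the standard normal CDF.
p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Effect size for the difference in means (assumed convention:
# absolute difference divided by the larger standard deviation).
d = abs(mean1 - mean2) / max(sd1, sd2)

print(round(z, 2))  # 2.72
print(round(p, 3))  # 0.007
print(round(d, 2))  # 0.25
```

The result illustrates the article's point: the difference is statistically significant (p = 0.007), yet the effect size of 0.25 is small, so the practical importance of a 3-point IQ difference is questionable.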

with p = 0.007 is obtained. It is apparent that the difference in mean IQs is statistically significant (p < 0.05).