
10.2.12.1.  Post hoc procedures and Type I (α) and Type II error rates 2
The Type I error rate and the statistical power of a test are linked, so there is always a trade-off: if a test is conservative (the probability of a Type I error is small) then it is likely to lack statistical power (the probability of a Type II error will be high). It is therefore important that multiple comparison procedures control the Type I error rate without a substantial loss in power. If a test is too conservative then we are likely to miss differences between means that are, in reality, meaningful.
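To see why this control matters, consider three group means, which give three pairwise comparisons. If each comparison is tested at α = .05, the chance of at least one Type I error somewhere in that family is 1 − (1 − .05)³ ≈ .14 rather than .05. A Bonferroni-style correction restores control by testing each comparison at .05/3 ≈ .017, but that stricter criterion is exactly what costs such procedures statistical power.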
The least-significant difference (LSD) pairwise comparison makes no attempt to control the Type I error and is equivalent to performing multiple t-tests on the data. The only difference is that the LSD requires the overall ANOVA to be significant. The Student–Newman–Keuls (SNK) procedure is also a very liberal test and lacks control over the familywise error rate. Bonferroni's and Tukey's tests both control the Type I error rate very well but are conservative tests (they lack statistical power). Of the two, Bonferroni has more power when the number of comparisons is small, whereas Tukey is more powerful when testing large numbers of means. Tukey generally has greater power than Dunn and Scheffé. The Ryan, Einot, Gabriel and Welsch Q procedure (REGWQ) has good power and tight control of the Type I error rate. In fact, when you want to test all pairs of means this procedure is probably the best. However, when group sizes are different this procedure should not be used.

10.2.12.2.  Post hoc procedures and violations of test assumptions 2
Most research on post hoc tests has looked at whether the test performs well when the group
sizes are different (an unbalanced design), when the population variances are very different, and
when data are not normally distributed. The good news is that most multiple comparison proce-
dures perform relatively well under small deviations from normality. The bad news is that they
perform badly when group sizes are unequal and when population variances are different.
Hochberg’s GT2 and Gabriel’s pairwise test procedure were designed to cope with situa-
tions in which sample sizes are different. Gabriel’s procedure is generally more powerful but
can become too liberal when the sample sizes are very different. Also, Hochberg’s GT2 is very
unreliable when the population variances are different and so should be used only when you
are sure that this is not the case. There are several multiple comparison procedures that have
been specially designed for situations in which population variances differ. SPSS provides
four options for this situation: Tamhane’s T2, Dunnett’s T3, Games–Howell and Dunnett’s
C. Tamhane’s T2 is conservative and Dunnett’s T3 and C keep very tight Type I error control.
The Games–Howell procedure is the most powerful but can be liberal when sample sizes are
small. However, Games–Howell is also accurate when sample sizes are unequal.

10.2.12.3.  Summary of post hoc procedures 2

The choice of comparison procedure will depend on the exact situation you have and whether
it is more important for you to keep strict control over the familywise error rate or to have
greater statistical power. However, some general guidelines can be drawn (Toothaker, 1993).
When you have equal sample sizes and you are confident that your population variances are
similar then use REGWQ or Tukey as both have good power and tight control over the Type
I error rate. Bonferroni is generally conservative, but if you want guaranteed control over
the Type I error rate then this is the test to use. If sample sizes are slightly different then use
Gabriel’s procedure because it has greater power, but if sample sizes are very different use
Hochberg’s GT2. If there is any doubt that the population variances are equal then use the
Games–Howell procedure because this generally seems to offer the best performance. I rec-
ommend running the Games–Howell procedure in addition to any other tests you might select
because of the uncertainty of knowing whether the population variances are equivalent.
Although these general guidelines provide a convention to follow, be aware of the other
procedures available and when they might be useful to use (e.g. Dunnett’s test is the only
multiple comparison that allows you to test means against a control mean).

CRAMMING SAM’s Tips Post hoc tests

• After an ANOVA you need a further analysis to find out which groups differ.
• When you have no specific hypotheses before the experiment use post hoc tests.
• When you have equal sample sizes and group variances are similar use REGWQ or Tukey.
• If you want guaranteed control over the Type I error rate then use Bonferroni.
• If sample sizes are slightly different then use Gabriel's, but if sample sizes are very different use Hochberg's GT2.
• If there is any doubt that group variances are equal then use the Games–Howell procedure.

10.3.  Running one-way ANOVA on SPSS 2

Hopefully you should all have some appreciation for the theory behind ANOVA, so let’s
put that theory into practice by conducting an ANOVA test on the Viagra data. As with the
independent t-test we need to enter the data into the data editor using a coding variable to
specify to which of the three groups the data belong. So, the data must be entered in two
columns (one called dose which specifies how much Viagra the participant was given and
one called libido which indicates the person’s libido over the following week). The data
are in the file Viagra.sav but I recommend entering them by hand to gain practice in data
entry. I have coded the grouping variable so that 1 = placebo, 2 = low dose and 3 = high
dose (see section 3.4.2.3).
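If you like to work from a syntax window rather than the Variable View, the value labels for the coding variable can be attached with a short command. This is only a minimal sketch, assuming the variable name dose used above:

  VALUE LABELS dose
    1 'Placebo'
    2 'Low Dose'
    3 'High Dose'.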
To conduct one-way ANOVA we have to access the main dialog box by selecting Analyze ➔ Compare Means ➔ One-Way ANOVA… (Figure 10.9). This dialog box has a space in which you can list one or more dependent variables and a second space to specify a grouping variable, or factor. Factor is another term for independent variable and should not be confused with the factors that we will come across when we learn about factor analysis. For the Viagra data we need to select only libido from the variables list and drag it to the box labelled Dependent List (or click on the arrow button). Then select the grouping variable dose and drag it to the box labelled Factor (or click on the arrow button).
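The same analysis can also be specified with SPSS's ONEWAY command. The sketch below is a minimal example using the variable names above; as far as I know these are the keywords ONEWAY accepts (GH requests the Games–Howell procedure recommended in section 10.2.12.3):

  * One-way ANOVA of libido by dose, the syntax equivalent of the dialog box selections.
  ONEWAY libido BY dose
    /STATISTICS=DESCRIPTIVES HOMOGENEITY
    /POSTHOC=TUKEY GH ALPHA(0.05).

The /STATISTICS subcommand adds descriptive statistics and Levene's test of homogeneity of variance, and /POSTHOC requests Tukey and Games–Howell follow-up tests at the .05 level.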
One thing that I dislike about SPSS is that in various procedures, such as one-way
ANOVA, the program encourages the user to carry out multiple tests, which as we have
seen is not a good thing. For example, in this procedure you are allowed to specify several
dependent variables on which to conduct several ANOVAs. In reality, if you had measured
several dependent variables (say you had measured not just libido but physiological arousal
and anxiety too) it would be preferable to analyse these data using MANOVA rather than
treating each dependent measure separately (see Chapter 16).