Statistics for Psychologists: An Intermediate Course

Brian S. Everitt

To Connie and Vera
Everitt, Brian.
Statistics for psychologists: an intermediate course / Brian S. Everitt.
p. cm.
Includes bibliographical references and index.
ISBN 0-8058-3836-8 (alk. paper)
1. Psychology - Statistical methods. I. Title.
Contents

Preface
1. Statistics in Psychology: Data, Models, and a Little History
2. Graphical Methods of Displaying Data
Appendix A: Statistical Glossary
References
Index
Preface
Psychologists (and even those training to be psychologists) need little persuading that a knowledge of statistics is important in both the design of psychological studies and in the analysis of data from such studies. As a result, almost all undergraduate psychology students are exposed to an introductory statistics course during their first year of study at college or university. The details of the topics covered in such a course will no doubt vary from place to place, but they will almost certainly include a discussion of descriptive statistics, correlation, simple regression, basic inference and significance tests, p values, and confidence intervals. In addition, in an introductory statistics course, some nonparametric tests may be described and the analysis of variance may begin to be explored. Laudable and necessary as such courses are, they represent only the first step in equipping students with enough knowledge of statistics to ensure that they have a reasonable chance of progressing to become proficient, creative, influential, and even, perhaps, useful psychologists. Consequently, in their second or third years (and possibly in later postgraduate courses), many psychology students will be encouraged to learn more about statistics and, equally important, how to apply statistical methods in a sensible fashion. It is to these students that this book is aimed, and it is hoped that the following features of the text will help it reach its target.
1
Statistics in Psychology: Data, Models, and a Little History

1.1. INTRODUCTION
Many statistical pitfalls lie in wait for the unwary. Indeed statistics is perhaps more open to misuse than any other subject, particularly by the nonspecialist. The misleading average, the graph with fiddled axes, the inappropriate p-value and the linear regression fitted to nonlinear data are just four examples of horror stories which are part of statistical folklore.
Christopher Chatfield
Most investigations will involve little clear separation between each of these four components, but for now it will be helpful to try to indicate, in general terms, the unique parts of each.
1.2.1. The
1.2.2.
TABLE 1.1
Frequencies and Percentages of "True" Responses in a Test of Knowledge of p Values

[The statement-by-statement frequencies and percentages are not cleanly recoverable from this extract.]
So should the p value be abandoned completely? Many statisticians would answer yes, but I think a more sensible response, at least for psychologists, would be a resounding maybe. Such values should rarely be used in a purely confirmatory way, but in an exploratory fashion they can be useful in giving some informal guidance on the possible existence of an interesting effect, even when the required assumptions of whatever test is being used are known to be invalid. It is often possible to assess whether a p value is likely to be an underestimate or overestimate and whether the result is clear one way or the other. In this text, both p values and confidence intervals will be used; purely from a pragmatic point of view, the former are needed by psychology students because they remain of central importance in the bulk of the psychological literature.
1.2.3.

20 = [person's initial score],                        (1.1)
24 = [person's initial score] + [improvement].        (1.2)

The improvement can be found by simply subtracting the first score from the second.
Such a model is, of course, very naive, because it assumes that verbal ability can be measured exactly. A more realistic representation of the two scores, which allows for a possible measurement error, is
with the two scores represented as x1 and x2. Now the parameters are multiplied rather than added to give the observed scores, x1 and x2. Here δ might be estimated by dividing x2 by x1.
A further possibility is that there is a limit, λ, to improvement, and studying the dictionary improves performance on the verbal ability test by some proportion of the child's possible improvement, λ - γ. A suitable model would be

x1 = γ + ε1,                (1.7)
x2 = γ + (λ - γ)δ + ε2.     (1.8)

With this model there is no way to estimate δ from the data unless a value of λ is given or assumed.
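As a small numerical illustration (ours, not the book's; the specific values γ = 20, λ = 40, δ = 0.5 and the error standard deviation are assumptions), the model of Eqs. (1.7) and (1.8) can be simulated and δ recovered once a value of λ is supplied:

```python
import random

def simulate_scores(gamma, lam, delta, sd, n, seed=0):
    """Simulate n pairs of scores from the model of Eqs. (1.7) and (1.8):
    x1 = gamma + e1,  x2 = gamma + (lam - gamma) * delta + e2,
    with independent normal measurement errors e1, e2 (standard deviation sd)."""
    rng = random.Random(seed)
    return [(gamma + rng.gauss(0, sd),
             gamma + (lam - gamma) * delta + rng.gauss(0, sd))
            for _ in range(n)]

def estimate_delta(pairs, lam):
    """Estimate delta by rearranging Eq. (1.8), with gamma estimated by the
    mean initial score; as the text notes, lam must be given or assumed."""
    gamma_hat = sum(x1 for x1, _ in pairs) / len(pairs)
    x2_bar = sum(x2 for _, x2 in pairs) / len(pairs)
    return (x2_bar - gamma_hat) / (lam - gamma_hat)

pairs = simulate_scores(gamma=20, lam=40, delta=0.5, sd=1.0, n=500)
print(round(estimate_delta(pairs, lam=40), 2))  # close to the true value, 0.5
```

Rerunning the sketch with a different assumed λ shows how strongly the estimate of δ depends on that choice, which is exactly the point made above.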
One of the principal uses of statistical models is to attempt to explain variation in measurements. This variation may be the result of a variety of factors, including variation from the measurement system, variation caused by environmental conditions that change over the course of a study, variation from individual to individual (or experimental unit to experimental unit), and so on. The decision about an appropriate model should be largely based on the investigator's prior knowledge of an area. In many situations, however, additive, linear models, such as those given in Eqs. (1.1) and (1.2), are invoked by default, because such models allow many powerful and informative statistical techniques to be applied to the data. Analysis of variance techniques (Chapters 3-5) and regression analysis (Chapter 6), for example, use such models, and in recent years generalized linear models (Chapter 10) have evolved that allow analogous models to be applied to a wide variety of data types.
Formulating an appropriate model can be a difficult problem. The general principles of model formulation are covered in detail in books on scientific method, but they include the need to collaborate with appropriate experts, to incorporate as much background theory as possible, and so on. Apart from those formulated entirely on a priori theoretical grounds, most models are, to some extent at least, based on an initial examination of the data, although completely empirical models are rare. The more usual intermediate case arises when a class of models is entertained a priori, but the initial data analysis is crucial in selecting a subset of models from the class. In a regression analysis, for example, the general approach is determined a priori, but a scatter diagram and a histogram (see Chapter 2) will be of crucial importance in indicating the shape of the relationship and in checking assumptions such as normality.
The formulation of a preliminary model from an initial examination of the data is the first step in the iterative formulation-criticism cycle of model building. This can produce some problems, because formulating a model and testing it on the same data is not generally considered good science. It is always preferable to confirm whether a derived model is sensible by testing it on new data. When data are difficult or expensive to obtain, however, then some model modification and assessment of fit on the original data is almost inevitable. Investigators need to be aware of the possible dangers of such a process.
The most important principle to have in mind when testing models on data is that of parsimony; that is, the best model is one that provides an adequate fit to the data with the fewest parameters. This is often known as Occam's razor, which for those with a classical education is reproduced here in its original form: Entia non sunt multiplicanda praeter necessitatem.
1.3. TYPES OF STUDY

It is said that when Gertrude Stein lay dying, she roused briefly and asked her assembled friends
1.3.1. Surveys

Survey methods are based on the simple discovery that "asking questions is a remarkably efficient way to obtain information from and about people" (Schuman and Kalton, 1985, p. 635). Surveys involve an exchange of information between researcher and respondent; the researcher identifies topics of interest, and the respondent provides knowledge or opinion about these topics. Depending upon the length and content of the survey as well as the facilities available, this exchange can be accomplished by means of written questionnaires, in-person interviews, or telephone conversations.

Surveys conducted by psychologists are usually designed to elicit information about the respondents' opinions, beliefs, attitudes and values. Some examples of data collected in surveys and their analysis are given in several later chapters.
1.3.2. Observational Studies
1.3.3. Quasi-Experiments
1.3.4. Experiments
Experimental studies include most of the work psychologists carry out on animals and many of the investigations performed on human subjects in laboratories. The essential feature of an experiment is the large degree of control in the hands of the experimenters; in designed experiments, the experimenter deliberately changes the levels of the experimental factors to induce variation in the measured quantities, to lead to a better understanding of the relationship between experimental factors and the response variable. And, in particular, experimenters control the manner in which subjects are allocated to the different levels of the experimental factors. In a comparison of a new treatment with one used previously, for example, the researcher would have control over the scheme for allocating subjects to treatments. The manner in which this control is exercised is of vital importance if the results of
statement that the individual has an IQ of 151. The comment that a woman is tall is not as accurate as specifying that her height is 1.88 m. Certain characteristics of interest are more amenable to precise measurement than others. With the use of an accurate thermometer, a subject's temperature can be measured very precisely. Quantifying the level of anxiety or depression of a psychiatric patient, or assessing the degree of pain of a migraine sufferer, is, however, a far more difficult task. Measurement scales may be classified into a hierarchy ranging from categorical, through ordinal to interval, and finally to ratio scales. Each of these will now be considered in more detail.
1.4.1. Nominal or Categorical Measurements

Nominal or categorical measurements allow patients to be classified with respect to some characteristic. Examples of such measurements are marital status, sex, and blood group.

The properties of a nominal scale are as follows.

1. Data categories are mutually exclusive (an individual can belong to only one category).
2. Data categories have no logical order; numbers may be assigned to categories, but merely as convenient labels.

A nominal scale classifies without the categories being ordered. Techniques particularly suitable for analyzing this type of data are described in Chapters 9 and 10.
Interval Scales

The third level in the measurement scale hierarchy is the interval scale. Such scales possess all the properties of an ordinal scale, plus the additional property that equal differences between category levels, on any part of the scale, reflect equal differences in the characteristic being measured. An example of such a scale is temperature on the Celsius or Fahrenheit scale; the difference between temperatures of 80°F and 90°F is the same as between temperatures of 30°F and 40°F. An important point to make about interval scales is that the zero point is simply another point on the scale; it does not represent the starting point of the scale, nor the total absence of the characteristic being measured. The properties of an interval scale are as follows.
Ratio Scales

The highest level in the hierarchy of measurement scales is the ratio scale. This type of scale has one property in addition to those listed for interval scales, namely the possession of a true zero point that represents the absence of the characteristic being measured. Consequently, statements can be made both about differences on the scale and the ratio of points on the scale. An example is weight, where not only is the difference between 100 kg and 50 kg the same as between 75 kg and 25 kg, but an object weighing 100 kg can be said to be twice as heavy as one weighing 50 kg. This is not true of, say, temperature on the Celsius or Fahrenheit scales, where a reading of 100° does not represent twice the warmth of a temperature of 50°. If, however, the temperatures were measured on the Kelvin scale, which does have a true zero point, the statement about the ratio could be made.
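The Kelvin point is easy to check numerically; this small illustration is ours, not the text's:

```python
def celsius_to_kelvin(c):
    """Convert Celsius to Kelvin, a scale with a true zero point."""
    return c + 273.15

# 100 C is not "twice as warm" as 50 C: on the Kelvin scale the ratio
# is nowhere near 2, which is why ratio statements need a ratio scale.
ratio = celsius_to_kelvin(100) / celsius_to_kelvin(50)
print(round(ratio, 3))  # 1.155
```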
CHAPTER 1
14
4. Equal differences in the characteristic are represented by equal differences in the numbers assigned to the categories.
5. The zero point represents an absence of the characteristic being measured.
and argued for the use of mathematical and statistical ideas in the study of psychological processes.

A long-standing tradition in scientific psychology is the application of John Stuart Mill's experimental "method of difference" to the study of psychological problems. Groups of subjects are compared that differ with respect to the experimental treatment but are otherwise the same in all respects. Any difference in outcome can therefore be attributed to the treatment. Control procedures such as randomization or matching on potentially confounding variables help bolster the assumption that the groups are the same in every way except the treatment conditions.
The experimental tradition in psychology has long been wedded to a particular statistical technique, namely the analysis of variance (ANOVA). The principles of experimental design and the analysis of variance were developed primarily by Fisher in the 1920s, but they took time to be fully appreciated by psychologists, who continued to analyze their experimental data with a mixture of graphical and simple statistical methods until well into the 1930s. According to Lovie (1979), the earliest paper that had ANOVA in its title was by Gaskill and Cox (1937). Other early uses of the technique are reported in Crutchfield (1938) and in Crutchfield and Tolman (1940).
Several of these early psychological papers, although paying lip service to the use of Fisher's analysis of variance techniques, relied heavily on more informal strategies of inference in interpreting experimental results. The year 1940, however, saw a dramatic increase in the use of analysis of variance in the psychological literature, and by 1943 the review paper of Garrett and Zubin was able to cite over 40 studies using an analysis of variance or covariance in psychology and education. Since then, of course, the analysis of variance in all its guises has become the main technique used in experimental psychology. An examination of 2 years of issues of the British Journal of Psychology, for example, showed that over 50% of the papers contain one or the other application of the analysis of variance.

Analysis of variance techniques are covered in Chapters 3, 4, and 5, and they are then shown to be equivalent to the multiple regression model introduced in Chapter 6.
e-mailing everybody, exploring the delights of the Internet, and in every other way displaying their computer literacy on the current generation of PCs, to imagine just what life was like for the statistician and the user of statistics in the days of simple electronic or mechanical calculators, or even earlier, when large volumes of numerical tables were the only arithmetical aids available. It is a salutary exercise (of the "young people today hardly know they are born" variety) to illustrate with a little anecdote what things were like in the past.

Godfrey Thomson was an educational psychologist during the 1930s and 1940s. Professor A. E. Maxwell (personal communication) tells the story of Dr. Thomson's approach to the arithmetical problems faced when performing a factor analysis by hand. According to Maxwell, Godfrey Thomson and his wife would, early in the evening, place themselves on either side of their sitting room fire, Dr. Thomson equipped with several pencils and much paper, and Mrs. Thomson with a copy of Barlow's Multiplication Tables. For several hours the conversation would consist of little more than "What's 613.23 multiplied by 714.62?" and "438134.44"; "What's 904.72 divided by 986.31?" and "0.91728"; and so on.
Nowadays, increasingly sophisticated and comprehensive statistics packages are available that allow investigators easy access to an enormous variety of statistical techniques. This is not without considerable potential for performing very poor and often misleading analyses (a potential that many psychologists have grasped with apparent enthusiasm), but it would be foolish to underestimate the advantages of statistical software to users of statistics such as psychologists. In this text it will be assumed that readers will be carrying out their own analyses by using one or other of the many statistical packages now available, and we shall even end each chapter with limited computer hints for carrying out the reported analyses by using SPSS or S-PLUS. One major benefit of assuming that readers will undertake analyses by using a suitable package is that details of the arithmetic behind the methods will only rarely have to be given; consequently, descriptions of arithmetical calculations will be noticeable largely by their absence, although some tables will contain a little arithmetic where this is deemed helpful in making a particular point. Some tables will also contain a little mathematical nomenclature and, occasionally, even some equations, which in a number of places will use vectors and matrices. Readers who find this too upsetting to even contemplate should pass speedily by the offending material, regarding it merely as a "black box" and taking heart that understanding its input and output will, in most cases, be sufficient to undertake the corresponding analyses and understand their results.
1.8. SAMPLE SIZE DETERMINATION
finding the type of subjects needed, and the cost of recruiting subjects. However, when the question is addressed statistically, a particular type of approach is generally used, in which these factors are largely ignored, at least in the first instance. Statistically, sample size determination involves

identifying the response variable of most interest,
specifying the appropriate statistical test to be used,
setting a significance level,
estimating the likely variability in the chosen response,
choosing the power the researcher would like to achieve (for those readers who have forgotten, power is simply the probability of rejecting the null hypothesis when it is false), and
committing to a magnitude of effect that the researcher would like to investigate. Typically the concept of a psychologically relevant difference is used; that is, the difference one would not like to miss declaring to be statistically significant.
These pieces of information then provide the basis of a statistical calculation of the required sample size. For example, for a study intending to compare the mean values of two treatments by means of a z test at the α significance level, and where the standard deviation of the response variable is known to be σ, the formula for the number of subjects required in each group is

n = 2(z_{α/2} + z_β)² σ² / Δ²,    (1.9)

where Δ is the treatment difference deemed psychologically relevant, β = 1 - power, and

z_{α/2} is the value of the normal distribution that cuts off an upper tail probability of α/2; that is, if α = 0.05, then z_{α/2} = 1.96; and
z_β is the value of the normal distribution that cuts off an upper tail probability of β; that is, if power = 0.8, so that β = 0.2, then z_β = 0.84.

For example, with σ = 1 and Δ = 1,

n = 2 × (1.96 + 0.84)² × 1 / 1 = 15.68.    (1.10)

So, essentially, 16 observations would be needed in each group.

Readers are encouraged to put some other numerical values into Eq. (1.9) to get a feel for how the formula works, but its general characteristics are that n is an increasing function of σ and a decreasing function of Δ, both of which are intuitively understandable.
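Equation (1.9) is straightforward to reproduce in code. The sketch below is our own (the function name and its defaults are not from the text); it obtains z_{α/2} and z_β from the standard normal quantile function and rounds up to the next whole subject:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Subjects needed in each group for a two-group z test, Eq. (1.9):
    n = 2 * (z_{alpha/2} + z_beta)**2 * sigma**2 / delta**2."""
    z = NormalDist()
    z_alpha2 = z.inv_cdf(1 - alpha / 2)  # 1.96 when alpha = 0.05
    z_beta = z.inv_cdf(power)            # 0.84 when power = 0.8 (beta = 0.2)
    n = 2 * (z_alpha2 + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return ceil(n)

# Reproduces Eq. (1.10): sigma = 1 and delta = 1 give n = 15.68,
# so 16 observations per group.
print(sample_size_per_group(delta=1, sigma=1))  # 16
```

Trying other values confirms the behavior noted in the text: doubling σ quadruples n, and halving Δ does the same.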
1.9. SUMMARY

1. Statistical principles are central to most aspects of a psychological investigation.
2. Data and their associated statistical analyses form the evidential parts of psychological arguments.
3. Significance testing is far from the be-all and end-all of statistical analyses, but it does still matter, because evidence that can be discounted as an artifact of sampling will not be particularly persuasive. However, p values should not be taken too seriously; confidence intervals are often more informative.
4. A good statistical analysis should highlight those aspects of the data that are relevant to the psychological arguments, do so clearly and fairly, and be resistant to criticisms.
5. Experiments lead to the clearest conclusions about causal relationships.
6. Variable type often determines the most appropriate method of analysis.
EXERCISES
1.1. A well-known professor of experimental psychology once told the author,
1.2. The Pepsi-Cola Company carried out research to determine whether people tended to prefer Pepsi Cola to Coca Cola. Participants were asked to taste two glasses of cola and then state which they preferred. The two glasses were not labeled as Pepsi or Coke for obvious reasons. Instead, the Coke glass was labeled Q and the Pepsi glass was labeled M. The results showed that more than half chose Pepsi over Coke (Huck and Sandler, 1979, p. 11). Are there any alternative explanations for the observed difference, other than the taste of the two drinks? Explain how you would carry out a study to assess any alternative explanation you think possible.
1.3. Suppose you develop a headache while working for hours at your computer (this is probably a purely hypothetical possibility, but use your imagination). You stop, go into another room, and take two aspirins. After about 15 minutes your headache has gone and you return to work. Can you infer a definite causal relationship between taking the aspirin and curing the headache?
2
Graphical Methods of Displaying Data

2.1. INTRODUCTION

A good graph is quiet and lets the data tell their story clearly and completely.
TABLE 2.1
Crime Rates for Drinkers and Abstainers

Crime      Drinkers  Abstainers
Arson          6.6       6.4
Rape          11.7       9.2
Violence      20.6      16.3
Stealing      50.3      44.6
Fraud         10.8      23.5
of five different types of crime. In the pie charts for drinkers and abstainers (see Figure 2.1), the sections of the circle have areas proportional to the observed percentages. In the corresponding bar charts (see Figure 2.2), percentages are represented by rectangles of appropriate size placed along a horizontal axis.

Despite their widespread popularity, both the general and scientific use of pie charts has been severely criticized. For example, Tufte (1983) comments that "tables are preferable to graphics for many small data sets. A table is nearly always better than a dumb pie chart; the only worse design than a pie chart is several of them ... pie charts should never be used." A similar lack of affection is shown by Bertin (1981), who declares that pie charts are completely useless, and more recently by Wainer (1997), who claims that pie charts are the least useful of all graphical forms.

An alternative display that is always more useful than the pie chart (and often preferable to a bar chart) is the dot plot. To illustrate, we first use an example from Cleveland (1994). Figure 2.3 shows a pie chart of 10 percentages. Figure 2.4 shows
FIG. 2.2. [Bar charts of the crime percentages for drinkers and abstainers: arson, rape, violence, stealing, fraud.]
the alternative dot plot representation of the same values. Pattern perception is far more efficient for the dot plot than for the pie chart. In the former it is far easier to see a number of properties of the data that are either not apparent at all in the pie chart or only barely noticeable. First, the percentages have a distribution with two modes (a bimodal distribution); odd-numbered bands lie around the value 8% and even-numbered bands around 12%. Furthermore, the shape of the pattern for the odd values as the band number increases is the same as the shape for the even values; each even value is shifted with respect to the preceding odd value by approximately 4%.

Dot plots for the crime data in Table 2.1 (see Figure 2.5) are also more informative than the pie charts in Figure 2.1, but a more exciting application of the dot plot is provided in Carr (1998). The diagram, shown in Figure 2.6, gives a particular contrast of brain mass and body mass for 62 species of animal. Labels and dots are grouped into small units of five to reduce the chances of matching error. In addition, the grouping encourages informative interpretation of the graph. Also note the thought-provoking title.
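As a rough sketch of the idea (ours, not the book's; the half-character-per-percent scaling is an arbitrary choice, and a real dot plot would of course be drawn graphically), the Table 2.1 percentages can be rendered as a text dot plot, with categories ordered by value as is conventional for this display:

```python
# Percentages of five crimes for drinkers and abstainers (Table 2.1).
crime_rates = {
    "Arson": (6.6, 6.4),
    "Rape": (11.7, 9.2),
    "Violence": (20.6, 16.3),
    "Stealing": (50.3, 44.6),
    "Fraud": (10.8, 23.5),
}

def dot_plot_lines(values, chars_per_pct=0.5):
    """Return one labeled text line per category, with a dot ('o')
    placed proportionally to the value; categories sorted by value."""
    lines = []
    for label, value in sorted(values.items(), key=lambda kv: kv[1]):
        offset = int(round(value * chars_per_pct))
        lines.append(f"{label:<9}|" + " " * offset + "o")
    return lines

drinkers = {crime: pair[0] for crime, pair in crime_rates.items()}
print("\n".join(dot_plot_lines(drinkers)))
```

Even in this crude form, the ordering and common baseline make the comparison across crimes immediate, which is exactly the advantage the dot plot holds over the pie chart.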
2.3. HISTOGRAMS, STEM-AND-LEAF PLOTS, AND BOX PLOTS
The data given in Table 2.2 show the heights and ages of both partners in a sample of 100 married couples. Assessing general features of the data is difficult with the data tabulated in this way, and identifying any potentially interesting patterns is virtually impossible. A number of graphical displays of the data can help. For example, simple histograms of the heights of husbands, the heights of wives, and their ages.
FIG. 2.5. [Dot plots of the crime percentages for drinkers and abstainers.]
FIG. 2.6. "Intelligence?" [Dot plot contrasting brain mass and body mass for 62 species of animal, with labels and dots grouped in fives; from Carr (1998).]
TABLE 2.2
Heights and Ages of Married Couples

Columns: Husband's Age (years); Husband's Height (mm); Wife's Age (years); Wife's Height (mm); Husband's Age at First Marriage.

[The 100 rows of data are not cleanly recoverable from this extract.]
[Figure: histograms of heights (mm) for the 100 married couples.]
Display 2.1
Stem-and-Leaf Plots

Choose a suitable pair of adjacent digits in the heights data: the tens digit and the units digit. Next, split each data value between the two digits. For example, the value 98 would be split as follows:

Data value   Split   Stem   Leaf
98           9/8     9      8
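The splitting rule of Display 2.1 translates directly into code. The sketch below is our own (it takes the stem to be all digits except the last, so the 9/8 split of the value 98 comes out as stem 9, leaf 8):

```python
from collections import defaultdict

def stem_and_leaf(data):
    """Split each value into a stem (all digits but the last) and a
    leaf (the units digit), as in Display 2.1, collecting sorted leaves."""
    groups = defaultdict(list)
    for value in sorted(data):
        stem, leaf = divmod(value, 10)
        groups[stem].append(leaf)
    return [f"{stem} : " + "".join(str(leaf) for leaf in leaves)
            for stem, leaves in sorted(groups.items())]

# The value 98 splits as 9/8: stem 9, leaf 8.
for row in stem_and_leaf([98, 91, 95, 104, 102, 110]):
    print(row)
```

Applied to heights in millimeters, the same function reproduces displays like the one shown below for the wives' heights, with stems 14 through 17.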
Our first scatterplot, given in Figure 2.11(a), shows age of husband against age of wife for the data in Table 2.2. As might be expected, the plot indicates a strong correlation for the two ages. Adding the line y = x to the plot, see Figure 2.11(b), highlights that there are a greater number of couples in which the husband is older than his wife than there are those in which the reverse is true. Finally, in Figure 2.11(c) the bivariate scatter of the two age variables is framed with the observations on each. Plotting marginal and joint distributions together in this way is usually good data analysis practice; a further possibility for achieving this goal is shown in Figure 2.12.
Low: 1410
14 : 24
14 : 99
15 : 0011123344
15 : 5555666667777777888888899999999
16 : 00000000111111112222333444444
16 : 555666666777778899
17 : 001234
17 : 6
-0 :
-0 : 32
0 : 22011333444
0 : 566666777778888999
1 : 0000011111222222222333333344444
1 : 55566666677788999999
2 : 0011233444
2 : 5667889
Display 2.2
Constructing a Box Plot

The plot is based on the five-number summary of a data set: 1, minimum; 2, lower quartile; 3, median; 4, upper quartile; 5, maximum.

The distance between the upper and lower quartiles, UQ - LQ, is the interquartile range (IQR), a measure of the spread of a distribution that is quick to compute and, unlike the range, is not badly affected by outliers.

The median and the upper and lower quartiles can be used to define rather arbitrary but still useful limits, L and U, to help identify possible outliers in the data:

U = UQ + 1.5 × IQR,
L = LQ - 1.5 × IQR.

Observations outside the limits L and U are regarded as potential outliers and identified separately on the box plot (known as outside values), which is constructed as follows. First, a "box" with ends at the lower and upper quartiles is drawn. A horizontal line (or some other feature) is used to indicate the position of the median in the box. Next, lines are drawn from each end of the box to the most remote observations that are not outside observations as defined above. The resulting diagram schematically represents the body of the data minus the extreme observations. Finally, the outside observations are incorporated into the final diagram by representing them individually in some way (lines, stars, etc.).
[Diagram: a schematic box plot annotated with the upper and lower quartiles, the median, the upper adjacent value, and outside values.]
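The quantities in Display 2.2 are easy to compute. In this sketch (ours; the quartiles are taken as the medians of the lower and upper halves of the sorted data, one of several common conventions, so results may differ slightly from other software) the fences L and U and the outside values are returned along with the five-number summary:

```python
def median(xs):
    """Median of a list of numbers."""
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

def box_plot_summary(data):
    """Five-number summary plus the fences of Display 2.2:
    U = UQ + 1.5 * IQR,  L = LQ - 1.5 * IQR,  IQR = UQ - LQ."""
    xs = sorted(data)
    n = len(xs)
    lower_half, upper_half = xs[: n // 2], xs[(n + 1) // 2:]
    lq, uq = median(lower_half), median(upper_half)
    iqr = uq - lq
    low, up = lq - 1.5 * iqr, uq + 1.5 * iqr
    outside = [x for x in xs if x < low or x > up]
    return {"min": xs[0], "LQ": lq, "median": median(xs), "UQ": uq,
            "max": xs[-1], "IQR": iqr, "L": low, "U": up, "outside": outside}

# Ten illustrative heights (mm); 1380 falls below the lower fence.
summary = box_plot_summary(
    [1380, 1540, 1560, 1580, 1600, 1610, 1640, 1660, 1700, 1760])
print(summary["outside"])  # [1380]
```

The outside values are exactly the observations a box plot would mark individually; everything else falls within the whiskers.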
FIG. 2.10. [Panels: husbands, wives.]
FIG. 2.11. [Scatterplots of husband's age against wife's age, ages 20 to 60: (a) basic plot; (b) with the line y = x added; (c) framed with the marginal distributions.]
FIG. 2.12. [Scatterplot; horizontal axis: husband's height, 1550-1850 mm.]
FIG. 2.14.
FIG. 2.15.
FIG. 2.16.
TABLE 2.3
Visual Acuity and Lens Strength

Columns: Subject (1-20); left-eye and right-eye responses at lens strengths 6/6, 6/18, 6/36, and 6/60.

[The individual values are not cleanly recoverable from this extract.]
of this pattern of associations for the analysis of these data will be taken up in Chapter 5.
FIG. 2.17. [Figure residue.]
what is generally known as a bubble plot. Here two variables are used to form a scatterplot in the usual way, and then values of a third variable are represented by circles with radii proportional to the values and centered on the appropriate point in the scatterplot. The bubble plot for the ice cream data is shown in Figure 2.18. The diagram illustrates that price and temperature are largely unrelated and, of more interest, demonstrates that consumption remains largely constant as price varies and temperature varies below 50°F. Above this temperature, sales of ice cream increase with temperature but remain largely independent of price. The maximum consumption corresponds to the lowest price and highest temperature. One slightly odd observation is indicated in the plot; this corresponds to a month with low consumption, despite low price and moderate temperature.
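A bubble plot of this kind can be sketched in Python with matplotlib (an illustration, not the book's software); temperature and consumption form the scatterplot, and the third variable, price, is mapped to marker size. The scaling constant is arbitrary, and marker area rather than radius is used here for simplicity.

```python
# Bubble plot sketch: first six periods of the ice cream data (Table 2.4).
import matplotlib
matplotlib.use("Agg")  # draw off-screen
import matplotlib.pyplot as plt

temperature = [41, 56, 63, 68, 69, 65]
consumption = [0.386, 0.374, 0.393, 0.425, 0.406, 0.344]
price = [0.270, 0.282, 0.277, 0.280, 0.280, 0.262]

# Map the third variable (price) to visible marker areas.
sizes = [3000 * p for p in price]

fig, ax = plt.subplots()
ax.scatter(temperature, consumption, s=sizes, alpha=0.5)
ax.set_xlabel("Mean temperature (F)")
ax.set_ylabel("Consumption (pints per capita)")
fig.savefig("bubble.png")
```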
2.6.2.
As a further example of the enhancement of scatterplots, Figure 2.19 shows a plot of the height and weight of a sample of individuals with their gender indicated. Also included in the diagram are circles centered on each plotted point; the radii of these circles represent a change in pulse rate.
TABLE 2.4
Ice Cream Data

Period  Y (Pints per Capita)  Price  Temperature (°F)
1       0.386                 0.270  41
2       0.374                 0.282  56
3       0.393                 0.277  63
4       0.425                 0.280  68
5       0.406                 0.280  69
6       0.344                 0.262  65
7       0.327                 0.275  61
8       0.288                 0.267  47
9       0.269                 0.265  32
10      0.256                 0.277  24
11      0.286                 0.282  28
12      0.298                 0.270  26
13      0.329                 0.272  32
14      0.318                 0.287  40
15      0.381                 0.277  55
16      0.381                 0.287  63
17      0.470                 0.280  72
18      0.443                 0.277  72
19      0.386                 0.277  67
20      0.342                 0.277  60
21      0.319                 0.292  44
22      0.307                 0.287  40
23      0.284                 0.277  32
24      0.326                 0.285  27
25      0.309                 0.282  28
26      0.359                 0.265  33
27      0.376                 0.265  41
28      0.416                 0.265  52
29      0.437                 0.268  64
30      0.548                 0.260  71
FIG. 2.18. [Bubble plot residue: one odd observation marked; temperature axis 30-70.]
2.6.3.
Bickel, Hammel, and O'Connell (1975) analyzed the relationship between admission rate and the proportion of women applying to the various academic departments at the University of California at Berkeley. The scatterplot of percentage of women applicants against percentage of applicants admitted is shown in Figure 2.21; boxes, the sizes of which indicate the relative number of applicants,
FIG. 2.19. Scatterplot of height and weight for male (M) and female (F) participants; the radii of the circles represent a change in pulse rate.
enhance the plot. The negative correlation indicated by the scatterplot is due almost exclusively to a trend for the large departments. If only a simple scatterplot had been used here, vital information about the relationship would have been lost.
FIG. 2.20. Scatterplot of height and weight with additional information about sex, smoking status, and change in pulse rate: M, male smoker; F, female smoker; m, male nonsmoker; f, female nonsmoker; I indicates a decrease in pulse rate.
the panel at the top of the figure is known as the given panel; the panels below are dependence panels. Each rectangle in the given panel specifies a range of values of the husband's age. On a corresponding dependence panel, the husband's height is plotted against the wife's height, for those couples in which the husband's age is in the appropriate interval. For age intervals to be matched to dependence panels, the latter are examined in order from left to right in the bottom row and again from left to right in subsequent rows. Here the coplot implies that the relationship between the two height measures is roughly the same for each of the six age intervals.
Coplots are a particular example of a more general form of graphical display known as trellis graphics, whereby any graphical display, not simply a scatterplot, can be shown for particular intervals of some variable. For example, Figure 2.23 shows a bubble plot of height of husband, height of wife, and age of wife (circles), conditional on husband's age for the data in Table 2.2. We leave interpretation of this plot to the reader!
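The conditioning idea behind a coplot can be sketched numerically (a Python illustration, not the book's software): split the couples into intervals of husband's age and summarize the height-height relationship within each resulting panel. The interval boundaries and records below are hypothetical.

```python
# Conditioning sketch: per-panel correlations, one panel per age interval.
from math import sqrt

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

def coplot_panels(records, breaks):
    """Group (age, husband_height, wife_height) records into age intervals
    [breaks[i], breaks[i+1]) and compute the within-panel correlation."""
    panels = {}
    for lo, hi in zip(breaks, breaks[1:]):
        sub = [(h, w) for age, h, w in records if lo <= age < hi]
        if len(sub) >= 3:
            hs, ws = zip(*sub)
            panels[(lo, hi)] = pearson_r(hs, ws)
    return panels
```

Roughly equal correlations across panels correspond to the coplot impression that the height-height relationship is the same in every age interval.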
FIG. 2.21. Scatterplot of the percentage of female applicants versus percentage of applicants admitted for 85 departments at the University of California at Berkeley (legend: "Number of applicants ≤ 40"). (Reproduced with permission from Bickel et al., 1975.)
The normal distribution has density

f(x) = (1/(σ√(2π))) exp[−(x − μ)²/(2σ²)],

where μ is the mean of the distribution and σ² is its variance. Three examples of normal distributions are shown in Figure 2.24.
[Coplot and trellis figure residue: husband's height plotted against wife's height (scales 1400-1900 mm), conditional on husband's age.]
In general, graphical displays of the kind described in previous sections are extremely useful in the examination of data; indeed, they are almost essential both in the initial phase of data exploration and in the interpretation of the results from more formal statistical procedures, as will be seen in later chapters. Unfortunately, it is relatively easy to mislead the unwary with graphical material, and not all graphical displays are as honest as they should be! For example, consider the plot of the death rate per million from cancer of the breast, for several periods over the past three decades, shown in Figure 2.27. The rate appears to show a rather alarming increase. However, when the data are replotted with the vertical scale beginning at zero, as shown in Figure 2.28, the increase in the breast cancer death rate is altogether less startling. This example illustrates that undue exaggeration or compression of the scales is best avoided when one is drawing graphs (unless, of course, you are actually in the business of deceiving your audience).
A very common distortion introduced into the graphics most popular with newspapers, television, and the media in general is when both dimensions of a two-dimensional figure or icon are varied simultaneously in response to changes
Display 2.3
A probability plot involves plotting the ordered sample values against the quantiles of a standard normal distribution, defined as Φ⁻¹(pᵢ), where usually

pᵢ = (i − 0.5)/n.

If the data arise from a normal distribution, the plot should be approximately linear.

FIG. 2.24. Normal distributions (mean = 5, sd = 1; mean = 10, sd = 5; mean = 2, sd = 0.5).
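The probability-plot construction of Display 2.3 can be sketched in a few lines of Python (for illustration; the book's packages produce these plots directly). `NormalDist` is in the standard library from Python 3.8 onward.

```python
# Probability plot coordinates: ordered observations against the normal
# quantiles at p_i = (i - 0.5)/n, as in Display 2.3.
from statistics import NormalDist

def probability_plot_points(data):
    """Return (normal quantile, ordered observation) pairs."""
    xs = sorted(data)
    n = len(xs)
    ps = [(i - 0.5) / n for i in range(1, n + 1)]
    qs = [NormalDist().inv_cdf(p) for p in ps]
    return list(zip(qs, xs))
```

For data from a normal distribution, the returned points fall close to a straight line when plotted.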
FIG. 2.27. Death rates from cancer of the breast where the y axis does not include the origin.

FIG. 2.28. Death rates from cancer of the breast where the y axis does include the origin.

in a single variable. The examples shown in Figure 2.29, both taken from Tufte (1983), illustrate this point. Tufte quantifies the distortion with what he calls the lie factor of a graphical display, which is defined as

lie factor = (size of effect shown in graphic)/(size of effect in data).    (2.2)
FIG. 2.29. [Figure residue: "IN THE BARREL...": light crude leaving Saudi Arabia on Jan. 1, $14.55.]
Lie factor values close to unity show that the graphic is probably representing the underlying numbers reasonably accurately. The lie factor for the oil barrels in Figure 2.29 is 9.4 because a 454% increase is depicted as 4280%. The lie factor for the shrinking doctors is 2.8.
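The lie factor of Eq. (2.2) is a one-line calculation; the sketch below (Python, for illustration) reproduces the oil-barrel figure of 9.4.

```python
# Tufte's lie factor: the size of the effect shown in the graphic
# divided by the size of the effect in the data, Eq. (2.2).
def lie_factor(effect_in_graphic, effect_in_data):
    return effect_in_graphic / effect_in_data

# The oil-barrel example: a 454% increase in the data drawn as a
# 4280% increase in the graphic.
factor = lie_factor(4280, 454)
```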
A further example, given by Cleveland (1994) and reproduced here in Figure 2.30, demonstrates that even the manner in which a simple scatterplot is drawn can lead to misperceptions about data. The example concerns the way in which judgment about the correlation of two variables made on the basis of looking at their scatterplot can be distorted by enlarging the area in which the points are plotted. The coefficient of correlation in the right-hand diagram in Figure 2.30 appears greater.
Some suggestions for avoiding graphical distortion, taken from Tufte (1983), are as follows.
1. The representation of numbers, as physically measured on the surface of the graphic itself, should be directly proportional to the numerical quantities represented.
2. Clear, detailed, and thorough labeling should be used to defeat graphical
distortion and ambiguity. Write out explanations of the data on the graphic
itself. Label important events in the data.
FIG. 2.30. [Scatterplot residue from Cleveland (1994): the same data plotted in a small and an enlarged plotting region.]
3. To be truthful and revealing, data graphics must bear on the heart of quantitative thinking.
Of course, not all poor graphics are deliberately meant to deceive. They are just not as informative as they might be. An example is shown in Figure 2.31, which originally appeared in Vetter (1980); its aim is to display the percentages of degrees awarded to women in several disciplines of science and technology during three time periods. At first glance the labels suggest that the graph is a standard divided bar chart, with the length of the bottom division of each bar showing the percentage for doctorates, the length of the middle division showing the percentage for master's degrees, and the top division showing the percentage for bachelor's degrees. In fact, a little reflection shows that this is not correct, because it would imply that in most cases the percentage of bachelor's degrees given to women is generally lower than the percentage of doctorates. A closer examination of the diagram reveals that the three values of the data for each discipline during each time period are determined by the three adjacent vertical dotted lines. The top end of the left-hand line indicates the value for doctorates, the top end of the middle line indicates the value for master's degrees, and the top end of the right-hand line indicates the position for bachelor's degrees.
Cleveland (1994) discusses other problems with the diagram in Figure 2.31 and also points out that its manner of construction makes it hard to connect visually the three values of a particular type of degree for a particular discipline, that is, to see change through time. Figure 2.32 shows the same data as in Figure 2.31, replotted by Cleveland in a bid to achieve greater clarity. It is now clear how the data are
FIG. 2.31. [Figure residue from Vetter (1980): dotted-line chart of percentages of degrees awarded to women in engineering, agricultural sciences, and other science disciplines.]
FIG. 2.32. Percentage of degrees earned by women for three degrees (bachelor's degree, master's degree, and doctorate), three time periods, and nine disciplines. The three points for each discipline and each degree indicate the periods 1959-1960, 1969-1970, and 1976-1977. (Reproduced with permission from Cleveland, 1994.)
FIG. 2.33. [Figure residue: calculated joint temperature (°F), 60-80.]
The data for no failures are not plotted in Figure 2.33. The engineers involved believed that these data were irrelevant to the issue of dependence. They were mistaken, as shown by the plot in Figure 2.34, which includes all the data. Here a pattern does emerge, and a dependence of failure on temperature is revealed.
To end the chapter on a less somber note, and to show that misperception and miscommunication are certainly not confined to statistical graphics, see Figure 2.35.
2.10. SUMMARY
1. Graphical displays are an essential feature in the analysis of empirical data.
SOFTWARE HINTS
SPSS
Pie charts, bar charts, and the like are easily constructed from the Graph menu. You enter the data you want to use in the chart, select the type of chart you want from the Graph menu, define how the chart should appear, and then click OK. For example, the first steps in producing a simple bar chart would be as follows.
1. Enter the data you want to use to create the chart.
2. Click Graph, then click Bar. When you do this you will see the Bar Charts dialog box appear, and the required chart can be constructed.
3. Once the initial chart has been created, it can be edited and refined very simply by double-clicking the chart and then making the required changes.
S-PLUS
In S-PLUS, there is a powerful graphical user interface whereby a huge variety of graphs can be plotted by using either the Graph menu or the two-dimensional (2D) or three-dimensional (3D) graphics palettes. For example, the necessary steps to produce a scatterplot are as follows.
[Detailed information about the pie function, or any of the other functions mentioned above, is available by using the help command; for example, help(pie).]
Coplots and trellis graphics are currently only routinely available in S-PLUS. They can be implemented by use of the coplot function, or by using the 2D or 3D graphical palettes with the conditioning mode on. Details are given in Everitt and Rabe-Hesketh (2001).
EXERCISES
2.1. According to Cleveland (1994), "The histogram is a widely used graphical method that is at least a century old. But maturity and ubiquity do not guarantee the efficiency of a tool. The histogram is a poor method." Do you agree with Cleveland? Give your reasons.

2.2. Shortly after metric units of length were officially introduced in Australia, each of a group of 44 students was asked to guess, to the nearest meter, the width of the lecture hall in which they were sitting. Another group of 69 students in the same room was asked to guess the width in feet, to the nearest foot. (The true width of the hall was 13.1 m or 43.0 ft.)
Guesses in meters:
8 9 10 10 10 10 10 10 11 11 11 11 12 12 13 13 13 14 14 14 15 15 15 15 15 15 15 15 16 16 16 17 17 17 17 18 18 20 22 25 27 35 38 40

Guesses in feet:
24 25 27 30 30 30 30 30 30 32 32 33 34 34 34 35 35 36 36 36 37 37 40 40 40 40 40 40 40 40 40 41 41 42 42 42 42 43 43 44 44 44 45 45 45 45 45 45 46 46 47 48 48 50 50 50 51 54 54 54 55 55 60 60 63 70 75 80 94
TABLE 2.5

State          1      2     3    4      5     6     7
Alabama        3615   3624  2.1  69.05  15.1  41.3  20
California     21198  5114  1.1  71.71  10.3  62.6  20
Iowa           2861   4628  0.5  72.56  2.3   59.0  140
Mississippi    2341   3098  2.4  68.09  12.5  41.0  50
New Hampshire  812    4281  0.7  71.23  3.3   57.6  174
Ohio           10735  4561  0.8  70.82  7.4   53.2  124
Oregon         2284   4660  0.6  72.13  4.2   60.0  44
Pennsylvania   11860  4449  1.0  70.43  6.1   50.2  126
South Dakota   681    4167  0.5  72.08  1.7   52.3  172
Vermont        472    3907  0.6  71.64  5.5   57.1  168

Note. Variables are as follows: 1, population (×1000); 2, average per capita income ($); 3, illiteracy rate (% population); 4, life expectancy (years); 5, homicide rate (per 1000); 6, percentage of high school graduates; 7, average number of days per year below freezing.
[Figure residue: traffic deaths before and after stricter enforcement, scale 275-300, years 1955-1956.]
TABLE 2.6

               Age Group
Country        25-34  35-44  45-54  55-64  65-74
Canada         22     27     31     34     24
Israel         9      19     10     14     27
Japan          22     19     21     31     49
Austria        29     40     52     53     69
France         16     25     36     47     56
Germany        28     35     41     49     52
Hungary        48     65     84     81     107
Italy          7      8      11     18     27
Netherlands    8      11     18     20     28
Poland         26     29     36     32     28
Spain          4      7      10     16     22
Sweden         28     41     46     51     35
Switzerland    22     34     41     50     41
UK             10     13     15     17     22
USA            20     22     28     33     37
2.5. Mortality rates per 100,000 from male suicides for a number of age groups and a number of countries are shown in Table 2.6. Construct side-by-side box plots for the data from different age groups, and comment on what the graphic says about the data.
Analysis of Variance I:
The One-way Design
3.1. INTRODUCTION
In a study of fecundity of fruit flies, per diem fecundity (number of eggs laid per female per day for the first 14 days of life) for 25 females of each of three genetic lines of the fruit fly Drosophila melanogaster was recorded. The results are shown in Table 3.1. The lines labeled RS and SS were selectively bred for resistance and for susceptibility to the pesticide DDT, and the NS line was used as a nonselected control strain. Of interest here is whether the data give any evidence of a difference in fecundity of the three strains.
In this study, the effect of a single independent factor (genetic strain) on a response variable (per diem fecundity) is of interest. The data arise from what is generally known as a one-way design. The general question addressed in such a data set is, Do the populations giving rise to the different levels of the independent factor have different mean values? What statistical technique is appropriate for addressing this question?
TABLE 3.1
Fecundity of Fruitflies

Resistant (RS)  Susceptible (SS)  Nonselected (NS)
12.8            38.4              35.4
21.6            32.9              27.4
14.8            48.5              19.3
23.1            20.9              41.8
34.6            11.6              20.3
19.7            22.3              37.6
22.6            30.2              36.9
29.6            33.4              37.3
16.4            26.7              28.2
20.3            39.0              23.4
29.3            12.8              33.7
14.9            14.6              29.2
27.3            12.2              41.7
22.4            23.1              22.6
27.5            29.4              40.4
20.3            16.0              34.4
38.7            20.1              30.4
26.4            23.3              14.9
23.7            22.9              51.8
26.1            22.5              33.8
29.5            15.1              37.9
38.6            31.0              29.5
44.4            16.9              42.4
23.2            16.1              36.6
23.6            10.8              47.4
Display 3.1

The null hypothesis of interest is

H0: μ1 = μ2 = ... = μk.

Suppose the hypothesis is investigated by using a series of t tests, one for each pair of means. The total number of t tests needed is N = k(k − 1)/2. Suppose that each t test is performed at significance level α, so that for each of the tests,

Pr(rejecting the equality of the two means given that they are equal) = α.

Consequently,

Pr(accepting the equality of the two means when they are equal) = 1 − α.

Therefore,

Pr(rejecting the equality of at least one pair of means when H0 is true) = 1 − (1 − α)^N (P, say).

For particular values of k and for α = 0.05, this result leads to the following numerical values:

k   N   P
2   1   1 − (0.95)^1 = 0.05
3   3   1 − (0.95)^3 = 0.14
4   6   1 − (0.95)^6 = 0.26
10  45  1 − (0.95)^45 = 0.90

The probability of falsely rejecting the null hypothesis quickly increases above the nominal significance level of .05. It is clear that such an approach is very likely to lead to misleading conclusions. Investigators unwise enough to apply the procedure would be led to claim more statistically significant results than their data justified.
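The calculation in Display 3.1 is easily reproduced (Python sketch, for illustration): with k groups there are N = k(k − 1)/2 pairwise tests, and the chance of at least one false rejection at level α is 1 − (1 − α)^N.

```python
# Familywise error rate for a series of pairwise t tests, as in Display 3.1.
def familywise_error(k, alpha=0.05):
    n_tests = k * (k - 1) // 2
    return n_tests, 1 - (1 - alpha) ** n_tests
```

For example, `familywise_error(10)` gives 45 tests and a familywise error rate of about 0.90, matching the last row of the table above.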
is a partitioning of the total variance in a set of data into a number of component parts, so that the relative contributions of identifiable sources of variation to the total variation in measured responses can be determined. But how does this separation or partitioning of variance help in assessing differences between means?
For the one-way design, the answer to this question can be found by considering two sample variances, one of which measures variation between the observations within the groups, and the other of which measures the variation between the group means. If the populations corresponding to the various levels of the independent variable have the same mean for the response variable, both sample variances estimate the same population value. However, if the population means differ, variation between the sample means will be greater than that between observations within groups, and therefore the two sample variances will be estimating different population values. Consequently, testing for the equality of the group means requires a test of the equality of the two variances. The appropriate procedure is an F test. (This explanation of how an F test for the equality of two variances leads to a test of the equality of a set of means is not specific to the one-way design; it applies to all ANOVA designs, which should be remembered in later chapters where the explanation is not repeated in detail.)
An alternative way of viewing the hypothesis test associated with the one-way design is that two alternative statistical models are being compared. In one the mean and standard deviation are the same in each population; in the other the means are different but the standard deviations are again the same. The F test assesses the plausibility of the first model. If the between group variability is greater than expected (with, say, p < .05), the second model is to be preferred.
A more formal account of both the model and the measures of variation behind the F tests in a one-way ANOVA is given in Display 3.2.
The data collected from a one-way design have to satisfy the following assumptions to make the F test involved strictly valid.
Display 3.2

The model assumes that the subjects in a particular group come from a population with a particular expected or average value for the response variable, with differences between the subjects within a group being accounted for by some type of random variation, usually referred to as "error." So the model can be written as

yij = μi + εij,

where yij represents the jth observation in the ith group, and the εij represent random error terms, assumed to be from a normal distribution with mean zero and variance σ².
The hypothesis of the equality of population means can now be written as

H0: μ1 = μ2 = ... = μk.

Writing each population mean as μi = μ + αi, where μ is the overall mean and αi is the effect of group i, the model becomes

yij = μ + αi + εij,

and the hypothesis becomes

H0: α1 = α2 = ... = αk = 0,

so that under H0 the model assumed for the observations is

yij = μ + εij

as before.
(Continued)
Display 3.2
(Continued)

The total variation in the observations, that is, the sum of squares of deviations of observations from the overall mean of the response variable, Σi Σj (yij − ȳ..)², can be partitioned into that due to differences in group means, the between groups sum of squares, Σi n(ȳi. − ȳ..)², where n is the number of observations in each group, and that due to differences among observations within groups, the within groups sum of squares, Σi Σj (yij − ȳi.)². (Here we have written out the formulas for the various sums of squares in mathematical terms by using the conventional dot notation, in which a dot represents summation over a particular suffix. In future chapters dealing with more complex ANOVA situations, we shall not bother to give the explicit formulas for sums of squares.)
These sums of squares can be converted into between groups and within groups variances by dividing them by their appropriate degrees of freedom (see following text). Under the hypothesis of the equality of population means, both are estimates of σ². Consequently, an F test of the equality of the two variances is also a test of the equality of population means.
The necessary terms for the F test are usually arranged in an analysis of variance table as follows (N = kn is the total number of observations).

Source                 DF     SS    MS            MSR (F)
Between groups         k − 1  BGSS  BGSS/(k − 1)  MSBG/MSWG
Within groups (error)  N − k  WGSS  WGSS/(N − k)
Total                  N − 1
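The variance partition of Display 3.2 can be sketched directly (Python, for illustration) for a balanced one-way design: compute the between and within groups sums of squares, divide by their degrees of freedom, and form the F ratio.

```python
# One-way ANOVA partition for a balanced design, as in Display 3.2.
def one_way_anova(groups):
    """groups: list of equal-sized lists of observations."""
    k = len(groups)
    n = len(groups[0])
    N = k * n
    grand = sum(sum(g) for g in groups) / N
    means = [sum(g) / n for g in groups]
    bgss = sum(n * (m - grand) ** 2 for m in means)                  # between groups SS
    wgss = sum((y - m) ** 2 for g, m in zip(groups, means) for y in g)  # within groups SS
    msbg = bgss / (k - 1)
    mswg = wgss / (N - k)
    return {"BGSS": bgss, "WGSS": wgss, "DF": (k - 1, N - k), "F": msbg / mswg}
```

Applied to the fruit fly data of Table 3.1, this kind of calculation yields the entries of Table 3.2.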
variance of the raw data. (A far better way of testing the normality assumption in an ANOVA involves the use of quantities known as residuals, to be described in Chapter 4.)
The ANOVA table from a one-way analysis of variance of the fruit fly data is shown in Table 3.2. The F test for equality of mean fecundity in the three strains has an associated p value that is very small, and so we can conclude that the three strains do have different mean fecundities.
Having concluded that the strains do differ in their average fecundity does not, of course, necessarily imply that all strain means differ; the equality-of-means hypothesis might be rejected because the mean of one strain differs from the means of the other two strains, which are themselves equal. Discovering more about which particular means differ in a one-way design generally requires the use of what are known as multiple comparison techniques.
TABLE 3.2

Source                  SS       DF  MS      F     p
Between strains         1362.21  2   681.11  8.67  <.001
Within strains (Error)  5659.02  72  78.60

3.5.1. Bonferroni Test
1. The series of t tests contemplated here is carried out only after a significant F value is found in the one-way ANOVA.
2. Each t test here uses the pooled estimate of error variance obtained from the observations in all groups, rather than just the two whose means are being compared. This estimate is simply the within groups mean square as defined in Display 3.2.
3. The problem of inflating the Type I error, as discussed in Section 3.2, is tackled by judging the p value from each t test against a significance level of α/m, where m is the number of t tests performed and α is the size of the Type I error (the significance level) that the investigator wishes to maintain in the testing process. (This is known as the Bonferroni correction.)
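The procedure can be sketched as follows (Python, for illustration). The t critical value for the α/m level must be supplied from tables or software, since the standard library has no t quantile function; it is therefore a parameter here rather than something the sketch computes.

```python
# Bonferroni simultaneous confidence intervals for all pairwise mean
# differences in a balanced one-way design, using the pooled within
# groups mean square (point 2 above).  t_crit is the critical value for
# the alpha/m level with N - k degrees of freedom, supplied by the user.
from math import sqrt
from itertools import combinations

def bonferroni_intervals(means, n, ms_within, t_crit):
    half_width = t_crit * sqrt(2 * ms_within / n)
    out = {}
    for (i, mi), (j, mj) in combinations(enumerate(means), 2):
        diff = mi - mj
        out[(i, j)] = (diff - half_width, diff + half_width)
    return out
```

An interval excluding zero corresponds to a pair of means judged different at the Bonferroni-corrected level.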
Display 3.3
Bonferroni t Tests

In this example N = 75.

TABLE 3.3
Results from the Bonferroni Procedure Used on the Fruit Fly Data

Comparison  Est. of mean difference  Lower bound  Upper bound
NS-RS       8.12                     1.97         14.34*
NS-SS       9.74                     3.60         15.88*
RS-SS       1.63                     -4.52        7.77

*Interval does not contain the value zero.
also shown in graphical form in Figure 3.3. Here it is clear that the mean fecundity of the nonselected strain differs from the means of the other two groups, which do not themselves differ.

3.5.2. Scheffé's Multiple Comparison Procedure
TABLE 3.4
Scheffé's Procedure Applied to Fruit Fly Data

Here the critical value is 2.50, and the confidence intervals for each comparison are

Comparison  Upper bound
NS-RS       14.44
NS-SS       16.01
RS-SS       7.9
FIG. 3.3. Graphical display of Bonferroni multiple comparison results for fruit fly data (simultaneous 95% confidence limits, Bonferroni method; response variable: fecundity).
Display 3.4
Scheffé's Multiple Comparison Procedure

The test statistic used here is once again the t statistic used in the Bonferroni procedure and described in Display 3.3. In this case, however, each observed test statistic is compared with [(k − 1)F(k−1, N−k; α)]^(1/2), where F(k−1, N−k; α) is the F value with k − 1 and N − k degrees of freedom corresponding to significance level α. (More details are given in Maxwell and Delaney, 1990.)
The method can again be used to construct a confidence interval for two means.

FIG. 3.4. Graphical display of Scheffé multiple comparison results for fruit fly data (simultaneous 95% confidence limits, Scheffé method; response variable: fecundity).
TABLE 3.5
Caffeine and Finger-Tapping Data

0 ml  100 ml  200 ml  300 ml
242   248     246     248
245   246     248     250
244   245     250     251
248   247     252     251
247   248     248     248
248   250     250     251
242   247     246     252
244   246     248     249
246   243     245     253
242   244     250     251

Note. The response is the number of finger taps per minute.
using conventional significance levels, whereas the latter require rather stringent significance levels. An additional difference is that when a planned comparison approach is used, an omnibus analysis of variance is not required. The investigator moves straight to the comparisons of most interest. However, there is one caveat: planned comparisons have to be just that, and not the result of hindsight after inspection of the sample means!
Display 3.5
Planned Comparisons

The hypothesis of particular interest in the finger-tapping experiment is

H0: μ0 = 1/3(μ100 + μ200 + μ300).

It can be tested with a t statistic,

t = [x̄0 − 1/3(x̄100 + x̄200 + x̄300)] / √{s²[1/10 + (1/9)(1/10 + 1/10 + 1/10)]},

where x̄0, x̄100, x̄200, and x̄300 are the sample means of the four groups, and s² is once again the error mean square from the one-way ANOVA.
The values of the four sample means are x̄0 = 244.8, x̄100 = 246.4, x̄200 = 248.3, and x̄300 = 250.4. Here s² = 4.40.
The t statistic takes the value −4.66. This is tested as a Student's t with 36 degrees of freedom (the degrees of freedom of the error mean square). The associated p value is very small (p = .000043), and the conclusion is that the tapping mean in the caffeine condition does not equal that when no caffeine is given.
A corresponding 95% confidence interval for the mean difference between the two conditions can be constructed in the usual way to give the result (−5.10, −2.04). More finger tapping occurs when caffeine is given.
An equivalent method of looking at planned comparisons is to first put the hypothesis of interest in a slightly different form from that given earlier:

H0: μ0 − (1/3)μ100 − (1/3)μ200 − (1/3)μ300 = 0.

The estimate of this comparison of the four means (called a contrast because the defining constants, 1, −1/3, −1/3, −1/3, sum to zero), obtained from the sample means, is

244.8 − 1/3 × (246.4 + 248.3 + 250.4) = −3.57,

and the sum of squares corresponding to the contrast is

(−3.57)² / {1/10 × [1² + (1/3)² + (1/3)² + (1/3)²]} = 95.41.

(For more details, including the general case, see Rosenthal and Rosnow, 1985.)
This mean square is tested as usual against the error mean square as an F with 1 and ν degrees of freedom, where ν is the number of degrees of freedom of the error mean square (in this case the value 36). So

F = 95.41/4.40 = 21.684.

The associated p value is .000043, agreeing with that of the t test described above. The two approaches outlined are exactly equivalent, because the calculated F statistic is actually the square of the t statistic (21.68 = [−4.66]²).
The second version of assessing a particular contrast by using the F statistic, and so on, will, however, help in the explanation of orthogonal polynomials to be given in Display 3.6.
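The contrast arithmetic of Display 3.5 can be sketched as follows (Python, for illustration); it reproduces the estimate, sum of squares, and F ratio given above for the caffeine comparison.

```python
# Contrast estimate, sum of squares, and F ratio, as in Display 3.5.
def contrast_test(means, coeffs, n, ms_error):
    """means: group means; coeffs: contrast coefficients summing to zero;
    n: observations per group; ms_error: error mean square."""
    est = sum(c * m for c, m in zip(coeffs, means))
    ss = est ** 2 / (sum(c ** 2 for c in coeffs) / n)
    f = ss / ms_error
    return est, ss, f

# No-caffeine mean against the average of the three caffeine conditions.
means = [244.8, 246.4, 248.3, 250.4]
coeffs = [1, -1/3, -1/3, -1/3]
est, ss, f = contrast_test(means, coeffs, n=10, ms_error=4.40)
```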
[Figure: box plots of the finger-tapping data by amount of caffeine (none, 100 ml, 200 ml, 300 ml).]
Display 3.6
The Use of Orthogonal Polynomials

When the levels of the independent variable form a series of ordered steps, it is often of interest to examine the relationship between the levels and the means of the response variable. In particular, the following questions would be of interest.
1. Do the means of the treatment groups increase in a linear fashion with an increase in the level of the independent variable?
2. Is the trend just linear, or is there any evidence of nonlinearity?
3. If the trend is nonlinear, what degree equation (polynomial) is required?
The simplest way to approach these and similar questions is by the use of orthogonal polynomial contrasts. Essentially, these correspond to particular comparisons among the means, representing linear trend, quadratic trend, and so on. They are defined by a series of coefficients specific for the particular number of levels of the independent variable. The coefficients are available from most statistical tables. A small part of such a table is shown here.

Levels  Trend      Coefficients
3       Linear     -1  0  1
        Quadratic   1 -2  1
4       Linear     -3 -1  1  3
        Quadratic   1 -1 -1  1
        Cubic      -1  3 -3  1
5       Linear     -2 -1  0  1  2
        Quadratic   2 -1 -2 -1  2
        Cubic      -1  2  0 -2  1
        Quartic     1 -4  6 -4  1

(Note that these coefficients are only appropriate for equally spaced levels of the independent variable.)
These coefficients can be used to produce a partition of the between groups sum of squares into single-degree-of-freedom sums of squares corresponding to the trend components. These sums of squares are found by using the approach described for planned comparisons in Display 3.5. For example, the sum of squares corresponding to the linear trend for the caffeine data is found as follows:

SS(linear) = [(-3) × 244.8 + (-1) × 246.4 + 1 × 248.3 + 3 × 250.4]² / {1/10 × [(-3)² + (-1)² + 1² + 3²]} = 174.83.
Source     SS      DF  MS      F      p
Linear     174.83  1   174.83  39.73  <.001
Quadratic  0.62    1   0.62    0.14   .71
Cubic      0.00    1   0.00    0.00   .97
Within     158.50  36  4.40

Note that the sums of squares of the linear, quadratic, and cubic effects add to the between groups sum of squares. Note also that in this example the difference in the means of the four ordered caffeine levels is dominated by the linear effect; departures from linearity have a sum of squares of only (0.62 + 0.00) = 0.62.
TABLE 3.6
WISC Blocks Data

Row Group
  Time: 317 464 525 298 491 196 268 372 370 739 430 410
  EFT:  59 33 49 69 65 26 29 62 31 139 74 31

Corner Group
  Time: 342 222 219 513 295 285 408 543 298 494 317 407
  EFT:  48 23 9 128 44 49 87 43 55 58 113 [one value illegible]
Display 3.7
The Analysis of Covariance Model

In general terms the model is

observation = mean + group effect + covariate effect + error.

More specifically, if yij is used to denote the value of the response variable for the jth individual in the ith group and xij is the value of the covariate for this individual, then the model assumed is

yij = μ + αi + β(xij − x̄) + εij,

where β is the regression coefficient linking response and covariate and x̄ is the grand mean of the covariate values.
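The adjustment in Display 3.7 can be sketched numerically (Python, for illustration): β is estimated from the pooled within-group sums of products, and each group's mean on the response is then adjusted to the grand mean of the covariate. This is a sketch of the adjustment only, not a full ANCOVA significance test.

```python
# ANCOVA adjustment sketch: pooled within-group slope and adjusted means.
def ancova_adjusted_means(groups):
    """groups: list of lists of (x, y) pairs; returns (beta, adjusted means)."""
    grand_x = sum(x for g in groups for x, _ in g) / sum(len(g) for g in groups)
    sxy = sxx = 0.0
    stats = []
    for g in groups:
        mx = sum(x for x, _ in g) / len(g)
        my = sum(y for _, y in g) / len(g)
        sxy += sum((x - mx) * (y - my) for x, y in g)  # within-group cross products
        sxx += sum((x - mx) ** 2 for x, _ in g)        # within-group covariate SS
        stats.append((mx, my))
    beta = sxy / sxx
    adjusted = [my - beta * (mx - grand_x) for mx, my in stats]
    return beta, adjusted
```

With toy data in which y = 2x plus a group offset, the estimated slope is 2 and the adjusted means differ by exactly the offset, the quantity the ANCOVA group test examines.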
FIG. 3.6. [Figure residue: row and corner groups.]
TABLE 3.7
ANCOVA Results from SPSS for WISC Blocks Data

Source            SS      DF  MS      F     p
Field dependence  109991  1   109991  9.41  .006
Group             11743   1   11743   1.00  .33
Error             245531  21  11692

FIG. 3.7. Scatterplot of time against the EFT for the WISC data, showing fitted regressions for each group.
The scatterplot suggests that the slopes of the two regression lines are not too
dissimilar for application of the ANCOVA model outlined in Display 3.7.
Table 3.7 shows the default results from using SPSS for the ANCOVA of the
data in Table 3.6. In Table 3.8 the corresponding results from using S-PLUS for
the analysis are shown. You will notice that the results are similar but not identical:
the test for the covariate differs. The reason for this difference will be made clear
in Chapter 4. However, despite the small difference in the results from the two
packages, the conclusion from each is the same: it appears that field dependence is
predictive of time to completion of the task, but that there is no difference between
the two experimental groups in mean completion time.
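The model in Display 3.7 can be fitted by ordinary least squares with a dummy variable for group. The sketch below uses invented data (not the WISC data) and a hand-rolled Gaussian elimination, so nothing beyond the Python standard library is assumed; with noise-free data the fit recovers the group effect and covariate slope exactly.

```python
# A minimal ANCOVA sketch: y = b0 + b1*group + b2*x, fitted by least squares.
# The data are invented for illustration; b1 is the covariate-adjusted group
# difference and b2 the common within-group slope.
def solve(a, b):
    """Solve a linear system by Gaussian elimination with partial pivoting."""
    m = [row[:] + [v] for row, v in zip(a, b)]
    k = len(m)
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, k):
            f = m[r][i] / m[i][i]
            m[r] = [u - f * v for u, v in zip(m[r], m[i])]
    sol = [0.0] * k
    for i in range(k - 1, -1, -1):
        sol[i] = (m[i][k] - sum(m[i][j] * sol[j] for j in range(i + 1, k))) / m[i][i]
    return sol

def ancova_fit(y, group, x):
    """Return [b0, b1, b2] from the normal equations of the ANCOVA model."""
    X = [[1.0, g, xi] for g, xi in zip(group, x)]
    xtx = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]
    return solve(xtx, xty)

# Noise-free illustration: y = 2 + 3*group + 1*x exactly.
group = [0, 0, 0, 0, 1, 1, 1, 1]
x = [1, 2, 3, 4, 1, 2, 3, 4]
y = [2 + 3 * g + xi for g, xi in zip(group, x)]
b0, b1, b2 = ancova_fit(y, group, x)
print(round(b1, 6), round(b2, 6))  # adjusted group effect 3.0, slope 1.0
```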
TABLE 3.8
ANCOVA of WISC Data by Using S-PLUS

Source             SS       DF   MS       F     p
Field dependence   110263    1   110263   9.43  0.006
Group               11743    1    11743   1.00  0.33
Error              245531   21    11692
Further examples of analysis of covariance are given in the next chapter, and
then in Chapter 6 the model is put in a more general context. However, one point
about the method that should be made here concerns its use with naturally occurring
or intact groups, rather than groups formed by random allocation.

The analysis of covariance was originally intended to be used in investigations
in which randomization had been used to assign patients to treatment groups.
Experimental precision could be increased by removing from the error term in
the ANOVA that part of the residual variability in the response that was linearly
predictable from the covariate. Gradually, however, the technique became more
widely used to test hypotheses that were generally stated in such terms as the
group differences on the response are zero when the group means on the covariate
are made equal, or the group means on the response after adjustment for mean
differences on the covariate are equal. Indeed, some authors (e.g., McNemar,
1962) have suggested that if there is only a small chance difference between the
groups on the covariate, the use of covariance adjustment may not be worth the
effort. Such a comment rules out the very situation for which the ANCOVA was
originally intended, because in the case of groups formed by randomization, any
group differences on the covariate are necessarily the result of chance! Such advice
is clearly unsound because more powerful tests of group differences result from
the decrease in experimental error achieved when analysis of covariance is used
in association with random allocation.
In fact, it is the use of analysis of covariance in an attempt to undo built-in
differences among intact groups that causes concern. For example, Figure 3.8
shows a plot of reaction time and age for psychiatric patients belonging to two
distinct diagnostic groups. An ANCOVA with reaction time as response and age
as covariate might lead to the conclusion that reaction time does not differ in the
two groups. In other words, given that two patients, one from each group, are of
approximately the same age, their reaction times are also likely to be similar. Is
such a conclusion sensible? An examination of Figure 3.8 clearly shows that it
is not, because the ages of the two groups do not overlap and an ANOVA has
FIG. 3.8. Plot of reaction time against age for two groups of psychiatric patients.
The data shown in Table 3.9 were obtained from a study reported by Novince
(1977), which was concerned with improving the social skills of college females
and reducing their anxiety in heterosexual encounters. There were three groups in
the study: a control group, a behavioral rehearsal group, and a behavioral rehearsal
plus cognitive restructuring group. The values of the following four dependent
variables were recorded for each subject in the study:

anxiety: physiological anxiety in a series of heterosexual encounters;
social skills: measure of social skills in social interactions;
appropriateness;
assertiveness.

Between group differences in this example could be assessed by separate one-way
analyses of variance on each of the four dependent variables. An alternative
TABLE 3.9
Social Skills Data from Novince (1977)
[Scores on anxiety, social skills, appropriateness, and assertiveness for the control, behavioral rehearsal, and behavioral rehearsal plus cognitive restructuring groups; the column structure of the table could not be recovered from this copy.]
and, in many situations involving multiple dependent variables, a preferable procedure is to consider the four dependent variables simultaneously. This is particularly sensible if the variables are correlated and believed to share a common
conceptual meaning. In other words, the dependent variables considered together
make sense as a group. Consequently, the real question of interest is, Does the
set of variables as a whole indicate any between group differences? To answer
this question requires the use of a relatively complex technique known as multivariate analysis of variance, or MANOVA. The ideas behind this approach are
introduced most simply by considering first the two-group situation and the multivariate analog of Student's t test for testing the equality of two means, namely
Hotelling's T² test. The test is described in Display 3.8 (the little adventure in
matrix algebra is unavoidable, I'm afraid), and its application to the two experimental groups in Table 3.9 is detailed in Table 3.10. The conclusion to be drawn
from the T² test is that the mean vectors of the two experimental groups do not
differ.

It might be thought that the results from Hotelling's T² test would simply reflect
those of a series of univariate t tests, in the sense that if no differences are found
by the separate t tests, then Hotelling's T² test will also lead to the conclusion that
the population multivariate mean vectors do not differ, whereas if any significant
difference is found for the separate variables, the T² statistic will also be significant.
In fact, this is not necessarily the case (if it were, the T² test would be a waste
of time); it is possible to have no significant differences for each variable tested
separately but a significant T² value, and vice versa. A full explanation of the
differences between a univariate and multivariate test for a situation involving just
two variables is given in Display 3.9.
When a set of dependent variables is to be compared in more than two groups,
the multivariate analog of the one-way ANOVA is used. Without delving into the
technical details, let us say that what the procedure attempts to do is to combine the dependent variables in some optimal way, taking into consideration the
correlations between them and deciding what unique information each variable
provides. A number of such composite variables giving maximum separation between groups are derived, and these form the basis of the comparisons made.
Unfortunately, in the multivariate situation, when there are more than two groups
to be compared, no single test statistic can be derived that is always the most
powerful for detecting all types of departures from the null hypothesis of the
equality of the mean vectors of the groups. A number of different test statistics
have been suggested that may give different results when used on the same set
of data, although the resulting conclusion from each is often the same. (When
only two groups are involved, all the proposed test statistics are equivalent to
Hotelling's T².) Details and formulas for the most commonly used MANOVA
test statistics are given in Stevens (1992), for example, and in the glossary in Appendix A. (The various test statistics are eventually transformed into F statistics
Display 3.8
Hotelling's T² Test

If there are p dependent variables, the null hypothesis is that the p means of the first
population equal the corresponding means of the second population.

By introduction of some vector and matrix nomenclature, the null hypothesis can be
written as

H0: μ1 = μ2,

where μ1 and μ2 contain the population means of the dependent variables in the two
groups; that is, they are the population mean vectors.

The test statistic is

T² = [n1 n2 / (n1 + n2)] D²,

where n1 and n2 are the numbers of observations in the two groups and D² is defined
as

D² = (x̄1 - x̄2)' S⁻¹ (x̄1 - x̄2),

where x̄1 and x̄2 are the sample mean vectors in the two groups and S is the estimate
of the assumed common covariance matrix, calculated as

S = [(n1 - 1)S1 + (n2 - 1)S2] / (n1 + n2 - 2),

where S1 and S2 are the covariance matrices of the two groups.

Under the null hypothesis,

F = (n1 + n2 - p - 1) T² / [(n1 + n2 - 2) p]

has a Fisher F distribution with p and n1 + n2 - p - 1 degrees of freedom.
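For readers who like to see the matrix algebra of Display 3.8 in action, here is a minimal pure-Python sketch. The two small groups of bivariate observations are invented for illustration; the p = 1 case, where T² collapses to the square of the two-sample t statistic, is the subject of Exercise 3.7.

```python
# Hotelling's T^2 for two independent groups, following Display 3.8.
def mean_vec(data):
    n, p = len(data), len(data[0])
    return [sum(row[j] for row in data) / n for j in range(p)]

def cov(data):
    n, p = len(data), len(data[0])
    m = mean_vec(data)
    return [[sum((row[i] - m[i]) * (row[j] - m[j]) for row in data) / (n - 1)
             for j in range(p)] for i in range(p)]

def solve(a, b):
    """Gaussian elimination with partial pivoting (solves a @ x = b)."""
    mat = [row[:] + [v] for row, v in zip(a, b)]
    k = len(mat)
    for i in range(k):
        piv = max(range(i, k), key=lambda r: abs(mat[r][i]))
        mat[i], mat[piv] = mat[piv], mat[i]
        for r in range(i + 1, k):
            f = mat[r][i] / mat[i][i]
            mat[r] = [u - f * v for u, v in zip(mat[r], mat[i])]
    x = [0.0] * k
    for i in range(k - 1, -1, -1):
        x[i] = (mat[i][k] - sum(mat[i][j] * x[j] for j in range(i + 1, k))) / mat[i][i]
    return x

def hotelling_t2(g1, g2):
    n1, n2, p = len(g1), len(g2), len(g1[0])
    d = [a - b for a, b in zip(mean_vec(g1), mean_vec(g2))]
    s1, s2 = cov(g1), cov(g2)
    pooled = [[((n1 - 1) * s1[i][j] + (n2 - 1) * s2[i][j]) / (n1 + n2 - 2)
               for j in range(p)] for i in range(p)]
    d2 = sum(di * wi for di, wi in zip(d, solve(pooled, d)))  # D^2 = d' S^-1 d
    t2 = (n1 * n2 / (n1 + n2)) * d2
    f = (n1 + n2 - p - 1) * t2 / ((n1 + n2 - 2) * p)          # F(p, n1+n2-p-1)
    return t2, f

# Invented bivariate data (NOT the Novince data).
g1 = [[1.0, 2.0], [2.0, 4.0], [3.0, 3.0], [2.0, 5.0]]
g2 = [[3.0, 1.0], [4.0, 2.0], [5.0, 1.0], [4.0, 0.0]]
t2, f = hotelling_t2(g1, g2)
```

With p = 1 the same function reproduces the square of the usual independent-samples t statistic, which is one way of attacking Exercise 3.7 numerically.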
TABLE 3.10
Calculation of Hotelling's T² for the Two Experimental Groups in Table 3.9
[The mean vectors, covariance matrix entries, and resulting T² value could not be recovered from this copy.]
Display 3.9
Univariate and Multivariate Tests for Equality of Means for Two Variables

Suppose we have a sample of n observations on two variables x1 and x2, and we wish
to test whether the population means of the two variables, μ1 and μ2, are both zero.

Assume the mean and standard deviation of the x1 observations are x̄1 and s1,
respectively, and of the x2 observations, x̄2 and s2.

If we test separately whether each mean takes the value zero, then we would use two
t tests. For example, to test μ1 = 0 against μ1 ≠ 0 the appropriate test statistic is

t = (x̄1 - 0) / (s1/√n).

The hypothesis μ1 = 0 would be rejected at the α percent level of significance if
t < -t_{n-1}(1 - α/2) or t > t_{n-1}(1 - α/2); that is, if x̄1 fell outside the interval
[-s1 t_{n-1}(1 - α/2)/√n, s1 t_{n-1}(1 - α/2)/√n], where t_{n-1}(1 - α/2) is the
100(1 - α/2) percent point of the t distribution with n - 1 degrees of freedom. Thus
the hypothesis would not be rejected if x̄1 fell within this interval.
(Continued)
Display 3.9
(Continued)

Similarly, the hypothesis μ2 = 0 for the variable x2 would not be rejected if the
mean, x̄2, of the observations fell within a corresponding interval with s2 substituted
for s1.

The multivariate hypothesis [μ1, μ2] = [0, 0] would therefore not be rejected if both
these conditions were satisfied.

If we were to plot the point (x̄1, x̄2) against rectangular axes, the area within which
the point could lie and the multivariate hypothesis not be rejected is given by the
rectangle ABCD of the diagram below, where AB and DC are of length
2s1 t_{n-1}(1 - α/2)/√n, while AD and BC are of length 2s2 t_{n-1}(1 - α/2)/√n.

Thus a sample that gave the means (x̄1, x̄2) represented by the point P would lead to
TABLE 3.11
Multivariate Tests for the Data in Table 3.9

Test Name   Value     Corresponding F Value   DF1    DF2     p
Pillai      0.67980   3.60443                 8.00   56.00   .002
Hotelling   1.57723   5.12600                 8.00   52.00   <.001
Wilks       0.36906   4.36109                 8.00   54.00   <.001
3.10. SUMMARY

1. A one-way ANOVA is used to compare the means of k populations when
k ≥ 3.
2. The null and alternative hypotheses are H0: μ1 = μ2 = ··· = μk and H1: not
all of the means are equal.
3. ... variances.
4. A significant F test should be followed by one or other multiple comparison
test to assess which particular means differ, although care is needed in the
application of such tests.
5. When the levels of the independent variable form an ordered scale, the use
of orthogonal polynomials is often informative.
6. Including variables that are linearly related to the response variable as covariates increases the precision of the study, in the sense of leading to a more
powerful F test for the equality of means hypothesis.
7. The examples in this chapter have all involved the same number of subjects
in each group. This is not a necessary condition for a one-way ANOVA,
although it is helpful because it makes departures from the assumptions of
normality and homogeneity of variance even less critical than usual. Some of
the formulas in displays require minor amendments for unequal group sizes.
8. In the one-way ANOVA model considered in this chapter, the levels of the
independent variable have been regarded as fixed; that is, the levels used are
those of specific interest. An alternative model in which the levels are considered as a random sample from some population of possible levels might
also have been considered. However, because such models are usually of
greater interest in the case of more complex designs, their description will
be left until later chapters.
9. Several dependent variables can be treated simultaneously by MANOVA;
this approach is only really sensible if the variables can truly be regarded,
in some sense, as a unit.
COMPUTER HINTS
SPSS
In SPSS the first steps in conducting a one-way ANOVA are as follows.

1. Click Statistics, click General Linear Model,
S-PLUS
In S-PLUS, a simple one-way ANOVA can be conducted by using the following
steps.

1. Click Statistics, click ANOVA, and click Fixed Effects; then the ANOVA
EXERCISES

3.1. Reproduce all the results given in this chapter by using your favorite piece
of statistical software. (This exercise should be repeated in subsequent chapters,
and those readers finding any differences between their results and mine are invited
to e-mail me at b.everitt@iop.kcl.ac.uk.)
3.2. The data given in Table 3.12 were collected in an investigation described
by Kapor (1981), in which the effect of knee-joint angle on the efficiency of
cycling was studied. Efficiency was measured in terms of the distance pedalled on
an ergocycle until exhaustion. The experimenter selected three knee-joint angles
TABLE 3.12
The Effect of Knee-Joint Angle on the Efficiency of Cycling: Total Distance Covered (km)

Angle 50°:  8.4,  7.0, 3.0, 8.0,  7.8,  3.3, 4.3, 3.6,  8.0, 6.8
Angle 70°: 10.6,  7.5, 5.1, 5.6, 10.2, 11.0, 6.8, 9.4, 10.4, 8.8
Angle 90°:  3.2,  4.2, 3.1, 6.9,  7.2,  3.5, 3.1, 4.5,  3.8, 3.6
of particular interest: 50°, 70°, and 90°. Thirty subjects were available for the
experiment and 10 subjects were randomly allocated to each angle. The drag of
the ergocycle was kept constant at 14.7 N, and subjects were instructed to pedal at
a constant speed of 20 km/h.

3.4. The data in Table 3.13 were collected in an investigation of maternal behavior of laboratory rats. The response variable was the time (in seconds) required
for the mother to retrieve the pup to the nest, after being moved a fixed distance
away. In the study, pups of different ages were used.
TABLE 3.13
Age of Pup and Retrieval Time (Seconds)

5 Days:  15, 10, 25, 15, 20, 18
20 Days: 30, 15, 20, 25, 23, 20
35 Days: 40, 35, 50, 43, 45, 40
TABLE 3.14
Anxiety Scores and Ages for Patients Treated by Three Methods

Method 1
Age (years):     27    32    23    28    30    35    32    21    26    27
Initial Anxiety: 30.2  35.3  32.4  31.9  28.4  30.5  34.8  32.5  33.0  29.9
Final Anxiety:   32.0  34.8  36.0  34.2  30.3  33.2  35.0  34.0  34.2  31.1

Method 2
Age (years):     29    29    31    36    23    26    22    20    28    32
Initial Anxiety: 32.6  33.0  31.7  34.0  29.9  32.2  31.0  32.0  33.0  31.1
Final Anxiety:   31.5  32.9  34.3  32.8  32.5  32.9  33.1  30.4  32.6  32.8

Method 3
Age (years):     33    35    21    28    27    23    25    26    27    29
Initial Anxiety: 29.9  30.0  29.0  30.0  30.0  29.6  32.0  31.0  30.1  31.0
Final Anxiety:   34.1  34.1  33.2  33.0  34.1  31.0  34.0  34.0  35.0  36.0
methods. The ages of the patients (who were all women) were also recorded.
Patients were randomly assigned
to the three methods.
1. Carry out a one-wayANOVA on the anxiety scores on discharge.
2. Construct some scatterplots to assess informally whether the relationship
between anxiety score on discharge and anxiety score prior to the operation is the same in each treatment group, and whether the relationship is
linear.
3.7. Show that when only a single variable is involved, the test statistic for
Hotelling's T² given in Display 3.8 is equivalent to the test statistic used in an
independent samples t test.

3.8. Apply Hotelling's T² test to each pair of groups in the data in Table 3.9.
Use the three T² values and the Bonferroni correction procedure to assess which
groups differ.
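The Bonferroni procedure referred to in Exercise 3.8 amounts to multiplying each of the three p values by the number of comparisons (capped at 1) before comparing them with the chosen significance level, or, equivalently, testing each comparison at α/3. A one-line sketch:

```python
# Bonferroni adjustment for a family of p values (the p values below are
# invented for illustration).
def bonferroni(p_values):
    k = len(p_values)
    return [min(1.0, p * k) for p in p_values]

print([round(p, 4) for p in bonferroni([0.01, 0.04, 0.20])])  # [0.03, 0.12, 0.6]
```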
4.1. INTRODUCTION

Many experiments in psychology involve the simultaneous study of the effects of
two or more factors on a response variable. Such an arrangement is usually referred
to as a factorial design. Consider, for example, the data shown in Table 4.1, taken
from Lindman (1974), which gives the improvement scores made by patients with
one of two diagnoses, being treated with different tranquillizers. The questions
of interest about these data concern the equality or otherwise of the average improvement in the two diagnostic categories, and similarly for the different drugs.
Because these questions are the same as those considered in the previous chapter,
the reader might enquire why we do not simply apply a one-way ANOVA separately to the data for diagnostic categories and for drugs. The answer is that such
analyses would omit an aspect of a factorial design that is often very important, as
we shall demonstrate in the next section.
CHAPTER 4
TABLE 4.1
Improvement Scores for Two Psychiatric Categories and Three Tranquillizer Drugs
[Three observations per cell for categories A1 and A2 and drugs B1, B2, and B3; the cell arrangement of the values could not be recovered from this copy.]
TABLE 4.2
ANOVA of Improvement Scores in Table 4.1

Categories Only
Source       SS      DF   MS     F
Categories    20.05   1   20.05  1.07
Error        298.90  16   18.68

Drugs Only
Source       SS      DF   MS     F
Drugs         44.11   2   22.05  1.20
Error        274.83  15   18.32

All Six Groups
Source       SS      DF   MS     F
Groups       212.28   5   42.45  4.77
Error        106.67  12    8.89
tranquillizer drug means. Nevertheless, the F test for the six groups together is
significant, indicating a difference between the six means that has not been detected
by the separate one-way analyses of variance. A clue to the cause of the problem is
provided by considering the degrees of freedom associated with each of the three
between groups sums of squares. That corresponding to psychiatric categories has
a single degree of freedom and that for drugs has two degrees of freedom, making
a total of three degrees of freedom for the separate analyses. However, the between
groups sum of squares for the six groups combined has five degrees of freedom.
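A small invented example makes the point concrete: in the 2 × 3 layout below, every row mean and every column mean equals the grand mean, so each separate one-way analysis finds a between groups sum of squares of zero, yet the six cell means clearly differ.

```python
# Invented 2 x 3 layout (NOT the Lindman data) in which all of the
# between-cell variation is interaction: the separate one-way analyses for
# rows and for columns see nothing, but the six cells differ.
cells = {                      # (category, drug) -> observations
    (0, 0): [4, 6], (0, 1): [8, 10], (0, 2): [6, 8],
    (1, 0): [8, 10], (1, 1): [4, 6], (1, 2): [6, 8],
}

def between_ss(groups):
    """Between-groups sum of squares for a dict of groups of observations."""
    allv = [v for g in groups.values() for v in g]
    grand = sum(allv) / len(allv)
    return sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups.values())

rows = {i: [v for (r, _), g in cells.items() if r == i for v in g] for i in (0, 1)}
cols = {j: [v for (_, c), g in cells.items() if c == j for v in g] for j in (0, 1, 2)}
print(between_ss(rows), between_ss(cols), between_ss(cells))  # 0.0 0.0 32.0
```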
y_ijk = μ + α_i + β_j + γ_ij + ε_ijk,

where μ is the grand mean, α_i and β_j are the main effects of the two factors,
γ_ij is their interaction effect, and ε_ijk is a random error term.
Display 4.1
(Continued)

The hypotheses of interest are

H0(1): α1 = α2 = ··· = αa = 0,
H0(2): β1 = β2 = ··· = βb = 0,
H0(3): γ11 = γ12 = ··· = γab = 0,

where the parameters are constrained so that Σi αi = 0, Σj βj = 0, and
Σi γij = Σj γij = 0.

The analysis of variance table for the design is as follows.

Source           SS     DF              MS                      F
A                ASS    a - 1           ASS/(a - 1)             MSA/error MS
B                BSS    b - 1           BSS/(b - 1)             MSB/error MS
AB               ABSS   (a - 1)(b - 1)  ABSS/[(a - 1)(b - 1)]   MSAB/error MS
Error
(within cells)   WCSS   ab(n - 1)       WCSS/[ab(n - 1)]

The total variation in the observations can be partitioned into parts due to each
factor and a part due to their interaction as shown.
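The partition just described can be verified numerically. The sketch below computes each sum of squares for a small invented balanced layout and checks that the pieces add up to the total.

```python
# Sum-of-squares partition for a balanced two-way design, as in Display 4.1.
def two_way_ss(data):
    """data[i][j] is the list of n observations in cell (i, j)."""
    a, b = len(data), len(data[0])
    n = len(data[0][0])
    allv = [v for row in data for cell in row for v in cell]
    grand = sum(allv) / len(allv)
    row_m = [sum(v for cell in row for v in cell) / (b * n) for row in data]
    col_m = [sum(v for row in data for v in row[j]) / (a * n) for j in range(b)]
    cell_m = [[sum(c) / n for c in row] for row in data]
    ass = b * n * sum((m - grand) ** 2 for m in row_m)
    bss = a * n * sum((m - grand) ** 2 for m in col_m)
    abss = n * sum((cell_m[i][j] - row_m[i] - col_m[j] + grand) ** 2
                   for i in range(a) for j in range(b))
    wcss = sum((v - cell_m[i][j]) ** 2
               for i in range(a) for j in range(b) for v in data[i][j])
    total = sum((v - grand) ** 2 for v in allv)
    return ass, bss, abss, wcss, total

data = [[[8, 4], [10, 6], [15, 9]],      # invented balanced 2 x 3 design,
        [[8, 10], [0, 4], [6, 14]]]      # n = 2 observations per cell
ass, bss, abss, wcss, total = two_way_ss(data)
assert abs(ass + bss + abss + wcss - total) < 1e-9
```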
The results of applying the model described in Display 4.1 to the psychiatric data
in Table 4.1 are shown in Table 4.3. The significant interaction term now explains
the previously confusing results of the separate one-way analyses of variance. (The
detailed interpretation of interaction effects will be discussed later.)
TABLE 4.3
Two-Way ANOVA of Improvement Scores in Table 4.1

Source   SS    DF   MS    F      p
A         18    1   18    2.04   .18
B         48    2   24    2.72   .11
A × B    144    2   72    8.15   .006
Error    106   12    8.83
TABLE 4.4
dl
d2
45
36
29
46
40
23 43
30
37 110
88
29 72
23
92
61
49
124
Dnrg
31
D1
82
D2
43
22
44
D3
D4
4.3.1.
30
36
31
33
45
63
76
45
71
66
62
35
31
40
d3
21
18
38
23
25
24
22
56
102
71
38
Rat Data
To provide a further example of a two-way ANOVA, the data in Table 4.4 will be
used. These data arise from an experiment in which rats were randomly assigned
to receive each combination of one of four drugs and one of three diets for a week,
and then their time to negotiate a standard maze was recorded; four rats were used
for each diet-drug combination. The ANOVA table for the data and the cell means
and standard deviations are shown in Table 4.5. The observed standard deviations
in this table shed some doubt on the acceptability of the homogeneity assumption,
a problem we shall conveniently ignore here (but see Exercise 4.1). The F tests in
Table 4.5 show that there is evidence of a difference in drug means and diet means,
but no evidence of a drug × diet interaction. It appears that what is generally
known as a main effects model provides an adequate description of these data. In
other words, diet and drug (the main effects) act additively on time to run the maze.

The results of both the Bonferroni and Scheffé multiple comparison procedures
for diet and drug are given in Table 4.6. All the results are displayed graphically
in Figure 4.1. Both multiple comparison procedures give very similar results for
this example. The mean time to run the maze appears to differ for diets 1 and 3
and for diets 2 and 3, but not for diets 1 and 2. For the drugs, the average time is
different for drug 1 and drug 2 and also for drugs 2 and 3.

When the interaction effect is nonsignificant, as here, the interaction mean square
becomes a further estimate of the error variance, σ². Consequently, there is the
possibility of pooling the error sum of squares and interaction sum of squares to
provide a possibly improved error mean square based on a larger number of degrees
of freedom. In some cases the use of a pooled error mean square will provide more
powerful tests of the main effects. The results of applying this procedure to the
rat data are shown in Table 4.7. Here the results are very similar to those given in
Table 4.5.

Although pooling is often recommended, it can be dangerous for a number of reasons. In particular, cases may arise in which the test for interaction
is nonsignificant, but in which there is in fact an interaction. As a result, the
pooled mean square may be larger than the original error mean square (this happens for the rats data), and the difference may be large enough for the increase
in degrees of freedom to be more than offset. The net result is that the experimenter loses rather than gains power. Pooling is really only acceptable when there
are good a priori reasons for assuming no interaction, and the decision to use a
pooled error term is based on considerations that are independent of the observed
data.
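The pooling arithmetic is simple enough to show directly, using the interaction and error lines of Table 4.5:

```python
# Pooling the interaction and error terms, as in Table 4.7.
ss_interaction, df_interaction = 2501.38, 6
ss_error, df_error = 8007.25, 36
ms_pooled = (ss_interaction + ss_error) / (df_interaction + df_error)
f_diet = 5165.06 / ms_pooled          # diet mean square from Table 4.5
print(round(ms_pooled, 2), round(f_diet, 2))  # 250.21 20.64, as in Table 4.7
```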
With only four observations in each cell of the table, it is clearly not possible
to use, say, normal probability plots to assess the normality requirement needed
within each cell by the ANOVA F tests used in Table 4.5. In general, testing the
normality assumption in an ANOVA should be done not on the raw data, but on the
deviations between the observed values and the predicted or fitted values from the
model applied to the data. These predicted values from a main effects model for
the rat data are shown in Table 4.8; the terms are calculated from

fitted value = row mean + column mean - grand mean.            (4.1)
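Equation (4.1) can be applied directly to the cell means of Table 4.5 (for a balanced design the marginal means computed from the cell means equal those computed from the raw data). The sketch below reproduces the entries of Table 4.8 up to rounding.

```python
# Fitted values for the main effects model, from the cell means of Table 4.5
# (rows = drugs D1-D4, columns = diets d1-d3).
cell_means = [[41.25, 32.00, 21.00],
              [88.00, 81.50, 33.50],
              [56.75, 37.50, 23.50],
              [61.00, 66.75, 32.50]]

grand = sum(sum(r) for r in cell_means) / 12
drug_m = [sum(r) / 3 for r in cell_means]
diet_m = [sum(r[j] for r in cell_means) / 4 for j in range(3)]
fitted = [[round(drug_m[i] + diet_m[j] - grand, 2) for j in range(3)]
          for i in range(4)]
print(fitted[0][0], fitted[1][2])  # 45.23 47.35 (Table 4.8 gives 45.22, 47.35)
```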
TABLE 4.5
ANOVA Table and Cell Means and Standard Deviations for the Rat Data

Source        SS         DF   MS        F      p
Diet          10330.13    2   5165.06   23.22  <.001
Drug           9212.06    3   3070.69   13.82  <.001
Diet × Drug    2501.38    6    416.90    1.87  .11
Error          8007.25   36    222.42

          d1      d2      d3
D1 Mean   41.25   32.00   21.00
   SD      6.95    7.53    2.16
D2 Mean   88.00   81.50   33.50
   SD     16.08   33.63    4.65
D3 Mean   56.75   37.50   23.50
   SD     15.67    5.69    1.29
D4 Mean   61.00   66.75   32.50
   SD     11.28   27.09    2.65
4.3.2. Slimming Data

Our final example of the analysis of a two-way design involves the data shown in
Table 4.9. These data arise from an investigation into types of slimming regime. In
TABLE 4.6
Multiple Comparison Test Results for Rat Data

Diet         Estimate   Std Error   Lower Bound   Upper Bound
Bonferroni
d1-d2          7.31       5.27        -5.93          20.6
d1-d3         34.10       5.27        20.90          47.4b
d2-d3         26.80       5.27        13.60          40.1b
Scheffé
d1-d2          7.31       5.27        -6.15          20.8
d1-d3         34.10       5.27        20.70          47.6b
d2-d3         26.80       5.27        13.30          40.3b

Drug
Bonferroni
D1-D2        -36.20       6.09       -53.20         -19.30b
D1-D3         -7.83       6.09       -24.80           9.17
D1-D4        -22.00       6.09       -39.00          -5.00b
D2-D3         28.40       6.09        11.40          45.40b
D2-D4         14.20       6.09        -2.75          31.20
D3-D4        -14.20       6.09       -31.20           2.83
Scheffé
D1-D2        -36.20       6.09       -54.10         -18.40b
D1-D3         -7.83       6.09       -25.70          10.00
D1-D4        -22.00       6.09       -39.90          -4.15b
D2-D3         28.40       6.09        10.60          46.30b
D2-D4         14.20       6.09        -3.60          32.10
D3-D4        -14.20       6.09       -32.00           3.69

Note. b denotes an interval that does not contain zero.
this case, the two factor variables are treatment, with two levels, namely whether
or not a slimming manual was given, and status, also with two levels, novice and
experienced slimmers.
FIG. 4.1. Multiple comparison results for rat data: (a) Bonferroni tests for diets; (b) Scheffé tests for diets; (c) Bonferroni tests for drugs; (d) Scheffé tests for drugs. Each panel shows simultaneous 95% confidence limits; response variable: time.
The ANOVA table and the means and standard deviations for these data are
shown in Table 4.10. Here the main effects of treatment and status are significant,
but in addition there is a significant interaction between treatment and status. The
presence of a significant interaction means that some care is needed in arriving
at a sensible interpretation of the results. A plot of the four cell means as shown
in Figure 4.3 is generally helpful in drawing conclusions about the significant
TABLE 4.7
Pooling Error and Interaction Terms in the ANOVA of the Rat Data

Source         SS         DF   MS        F      p
Diet           10330.13    2   5165.06   20.64  <.0001
Drugs           9212.06    3   3070.69   12.27  <.0001
Pooled error   10508.63   42    250.21
TABLE 4.8
Fitted Values for Main Effects Model on Rat Data

      d1      d2      d3
D1    45.22   37.92   11.10
D2    81.48   74.17   47.35
D3    53.06   45.75   18.94
D4    67.23   59.92   33.10
TABLE 4.9
Slimming Data

Type of Slimmer
Manual Given?   Novice   Experienced
[Observed responses: -2.85, -1.98, -2.12, -4.44, -2.42, 0.00, -2.14, -0.84, 0.00, -1.64, -2.40, -2.15, 0.00, -8.11, -9.40, -3.50; the assignment of values to the four cells could not be recovered from this copy.]
TABLE 4.10
Analysis of Variance Table for the Slimming Data

Source          SS     DF   MS     F     p
Condition (C)   21.83   1   21.83  7.04  .021
Status (S)      25.53   1   25.53  8.24  .014
C × S           20.95   1   20.95  6.76  .023
Error           37.19  12    3.10

                Novice   Experienced
No manual
  Mean          -1.74    -1.50
  SD             1.22     1.08
Manual
  Mean          -6.36    -1.55
  SD             2.84     …
FIG. 4.3. Plot of the four cell means for the slimming data: novice and experienced slimmers, with and without a manual.
1. No manual-no
TABLE 4.11
[Observations for each combination of biofeedback (present, absent), diet (absent, present), and drug (X, Y, Z); the cell arrangement of the values could not be recovered from this copy.]
Display 4.2
In general terms the model is

observation = mean + main effects + first-order interactions
              + second-order interaction + error.

More specifically, let y_ijkl represent the lth observation in the kth level of factor C
(with c levels), the jth level of factor B (with b levels), and the ith level of factor A
(with a levels). Assume that there are n subjects per cell, and that the factor levels
are of specific interest so that A, B, and C have fixed effects.

The linear model for the observations in this case is

y_ijkl = μ + α_i + β_j + γ_k + δ_ij + θ_ik + φ_jk + ω_ijk + ε_ijkl.

Here α_i, β_j, and γ_k represent main effects of A, B, and C, respectively; δ_ij,
θ_ik, and φ_jk represent first-order interactions, AB, AC, and BC; ω_ijk represents
the second-order interaction, ABC; and ε_ijkl are random error terms assumed to be
normally distributed with zero mean and variance σ². (Once again the parameters
have to be constrained in some way to make the model acceptable; see Maxwell and
Delaney, 1990, for details.)
(Continued)
TABLE 4.12
ANOVA for the Three-Factor Design

Source                  SS     DF   MS       F      p
Diet                    5202    1   5202.0   33.20  <.001
Biofeed                 2048    1   2048.0   13.07  <.001
Drug                    3675    2   1837.5   11.73  <.001
Diet × biofeed            32    1     32.0    0.20  .66
Diet × drug              903    2    451.5    2.88  .06
Biofeed × drug           259    2    129.5    0.83  .44
Diet × biofeed × drug   1075    2    537.5    3.42  .04
Error                   9400   60    156.67
1. A model is called a fixed effects model if all the factors in the design have
fixed effects.
2. A model is called a random effects model if all the factors in the design have
random effects.
3. A model is called a mixed effects model if some of the factors in the design
Display 4.3
Random Effects Model for a Two-Way Design with Factors A and B

Let y_ijk represent the kth observation in the jth level of factor B (with b levels)
and the ith level of factor A (with a levels). The model is

y_ijk = μ + α_i + β_j + γ_ij + ε_ijk,

where now the α_i are assumed to be random variables from a normal distribution
with mean zero and variance σ_α². Similarly, β_j and γ_ij are assumed to be random
variables from normal distributions with zero means and variances σ_β² and σ_γ²,
respectively. As usual, the ε_ijk are error terms randomly sampled from a normal
distribution with mean zero and variance σ².

The hypotheses of interest in the random effects model are

H0(1): σ_α² = 0,
H0(2): σ_β² = 0,
H0(3): σ_γ² = 0.

The calculation of the sums of squares and mean squares remains the same as for
the fixed effects model. Now, however, the various mean squares provide estimators
of combinations of the variances of the parameters: (a) the A mean square is an
estimator of σ² + nσ_γ² + nbσ_α²; (b) the B mean square, of σ² + nσ_γ² + naσ_β²;
(c) the AB mean square, of σ² + nσ_γ²; and (d) the error mean square, as usual, of σ².

If H0(3) is true, then the AB mean square and the error mean square both estimate σ².
Consequently, the ratio of the AB mean square to the error mean square can be
tested as an F statistic to provide a test of the hypothesis.

If H0(1) is true, the A mean square and the AB mean square both estimate
σ² + nσ_γ², so the ratio of the A mean square to the AB mean square provides an
F test of the hypothesis.

If H0(2) is true, the B mean square and the AB mean square both estimate
σ² + nσ_γ², so the ratio of the B mean square to the AB mean square provides an
F test of the hypothesis.

Estimators of the variances σ_α², σ_β², and σ_γ² can be obtained from

σ̂_α² = (MSA - MSAB)/(nb),
σ̂_β² = (MSB - MSAB)/(na),
σ̂_γ² = (MSAB - error MS)/n.
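The estimators at the end of the display can be evaluated with the rat data mean squares; the drugs component reproduces the 221.15 quoted in the note to Table 4.13.

```python
# Variance component estimators from Display 4.3, using the rat data mean
# squares (n = 4 rats per cell, a = 3 diets, b = 4 drugs).
n, a, b = 4, 3, 4
ms_a, ms_b, ms_ab, ms_error = 5165.06, 3070.69, 416.90, 222.42  # diets, drugs, AB, error

var_a = (ms_a - ms_ab) / (n * b)    # diets component
var_b = (ms_b - ms_ab) / (n * a)    # drugs component
var_ab = (ms_ab - ms_error) / n     # interaction component
print(round(var_b, 2))  # 221.15, as in the note to Table 4.13
```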
TABLE 4.13
ANOVA of Rat Data by Using a Random Effects Model

Source          SS         DF   MS        F      p
Diets           10330.13    2   5165.06   12.39  <.0001
Drugs            9212.06    3   3070.69    7.37  <.001
Diets × drugs    2501.38    6    416.90    1.87  .11
Error            8007.25   36    222.42

Note. In the random effects model the main effects are tested against the
interaction mean square; the estimator of the drugs variance component is
(3070.69 - 416.90)/12 = 221.15.
TABLE 4.14
Study of Australian Aboriginal and White Children
[Observations for 32 cells, classified by origin (A, N), sex (M, F), grade (F0-F3), and learner type (SL, AL), could not be reliably recovered from this copy.]
TABLE 4.15
Data Obtained in a Study of the Effectof Postnatal Depressionof the Motheron a
sa of
Mother
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
Depression
Child
IQ
PS
VS
QS
ND
103
124
56
65
67
52
50
42
61
63
46
46
52
ND
ND
ND
D
ND
ND
ND
ND
ND
ND
D
D
ND
D
ND
ND
ND
ND
ND
ND
ND
D
ND
D
ND
ND
ND
D
ND
D
ND
34
ND
ND
35
36
37
38
39
ND
ND
ND
ND
ND
124
104
96
92
124
99
92
116
99
22
81
117
100
89
l25
l27
112
48
139
118
107
106
129
117
123
118
58
55
46
61
58
22
47
68
54
48
64
64
50
23
68
58
45
46
63
43
64
64
64
61
55
43
68
50
51
50
38
53
47
41
66
68
57
20
75
64
50
22
41
63
53
46
67
71
64
25
64
61
58
53
67
63
56
61
60
64
66
64
48
52
66
60
16
120
l27
103
71
67
55
55
58
57
70
71
41
54
64
65
41
58
45
84
109
43
46
117
102
141
124
110
98
50
44
46
37
77
61
47
48
53
68
52
43
61
52
69
58
52
48
50
63
59
48
HuSband's
Psychiatric History
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
History
History
No history
History
No history
No history
History
No history
No history
No history
No history
History
No history
No history
No history
History
No history
History
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
(Continued)
119
TABLE 4.15
(Continued)
sex of
Mother
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
Depression
Child
IQ
PS
ND
ND
ND
ND
ND
ND
ND
ND
ND
ND
ND
ND
ND
ND
ND
ND
ND
ND
ND
ND
ND
D
ND
ND
D
ND
ND
ND
ND
ND
ND
118
117
115
119
117
92
65
60
52
63
63
45
53
59
78
ND
ND
D
ND
ND
ND
ND
ND
ND
101
119
144
119
127
113
l27
103
128
86
112
115
117
99
110
139
117
96
111
118
126
126
89
66
67
61
60
57
70
45
62
59
48
58
54
70
64
134
93
115
99
99
122
106
52
54
58
66
69
49
56
74
47
55
50
58
55
49
124
66
100
114
43
61
102
VS
QS
57
61
48
55
58
59
55
52
59
65
62
67
58
68
48
64
67
62
56
42
48
65
67
54
63
54
59
50
65
45
51
58
68
48
61
72
57
45
60
28
46
60
59
62
41
47
66
51
47
50
62
67
55
36
50
59
45
61
48
60
54
54
58
56
56
47
74
59
68
48
55
60
62
64
66
36
49
68
44
64
Husband's
Psychiatric
History
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
No his~~ry
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
No history
History
No history
No history
History
No history
No history
No history
(Continued)
CHAPTER 4
TABLE 4.15
(Continued)
Sex of
Mother
Depression
80
81
82
83
ND
ND
ND
ND
ND
ND
ND
D
D
ND
ND
ND
D
D
ND
51
84
85
86
87
88
89
90
91
92
93
94
Child
48
62
62
48
65
58
65
50
IQ
PS
VS
QS
121
119
108
110
127
118
107
123
102
110
114
118
101
121
114
64
63
66
50
60
50
48
66
61
66
58
53
52
59
58
64
52
47
44
64
62
61
60
56
71
56
54
68
52
47
56
63
56
62
55
47
Husband's
Psychiatric History
No history
No history
No history
No history
No history
No history
No history
No history
No history
History
No history
No history
No history
No history
No history
Note. ND, not depressed; D, depressed; PS, perceptual score; VS, verbal score; QS, quantitative score.
Total
ND
75(70)
4(8)
79
D
9(14)
6(2)
15
Total
84
10
94
TABLE 4.16
ANOVA of Postnatal Depression Data

Source             DF   SS        MS        F      p
Order 1
Mother             1    1711.40   1711.40   6.82   .01
Husband            1    1323.29   1323.29   5.27   .02
Mother x Husband   1    2332.21   2332.21   9.29   .003
Error              90   22583.58  250.93
Order 2
Husband            1    2526.44   2526.44   10.07  .002
Mother             1    508.26    508.26    2.03   .16
Husband x Mother   1    2332.21   2332.21   9.29   .003
Error              90   22583.58  250.93
[Figure: interaction plot for the postnatal depression data, showing mother's depression by husband's psychiatric history (no history vs. history).]
Table 4.18 shows the default SPSS analysis of variance table for these data, which, inappropriately for the reasons given above, uses Type III sums of squares. Look, for example, at the origin line in this table and compare it with that in Table 4.17. The difference is large; adjusting the origin effect for the large number of mostly nonsignificant interactions has dramatically reduced the origin sum of squares and the associated F test value. Only the values in the lines corresponding to the origin x sex x grade x type effect and the error term are common to Tables 4.17 and 4.18.
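The order dependence of Type I (sequential) sums of squares is easy to demonstrate directly. The sketch below uses an invented unbalanced 2 x 2 design and computes the sequential sum of squares for factor A entered first versus entered after factor B, by comparing residual sums of squares of nested least-squares fits:

```python
# Sketch: sequential (Type I) sums of squares depend on fitting order
# when the design is unbalanced. The data here are invented.
import numpy as np

rng = np.random.default_rng(0)

# Unbalanced 2 x 2 design: unequal numbers of observations per cell
a = np.repeat([0, 0, 1, 1], [8, 3, 4, 9])   # factor A level for each observation
b = np.repeat([0, 1, 0, 1], [8, 3, 4, 9])   # factor B level for each observation
y = 10 + 2 * a + 3 * b + rng.normal(0, 1, a.size)

def rss(X, y):
    """Residual sum of squares from a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ beta) ** 2))

ones = np.ones_like(y)
ss_a_first = rss(np.column_stack([ones]), y) - rss(np.column_stack([ones, a]), y)
ss_a_after_b = (rss(np.column_stack([ones, b]), y)
                - rss(np.column_stack([ones, a, b]), y))
print(ss_a_first, ss_a_after_b)  # the two sums of squares differ
```

In a balanced design the two values would coincide; the unequal cell counts make the factor columns correlated, which is why the order of fitting matters.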
TABLE 4.17
Analysis of Data on Australian Children: Order 1 Sums of Squares

Source                        DF   SS        MS       F      p
Origin                        1    2637.69   2637.69  13.68  <.001
Sex                           1    342.09    342.09   1.77   .19
Grade                         3    1827.77   609.26   3.16   .03
Type                          1    129.57    129.57   0.67   .41
Origin x Sex                  1    144.44    144.44   0.75   .39
Origin x Grade                3    3152.79   1050.93  5.45   .001
Sex x Grade                   3    1998.91   666.30   3.46   .019
Origin x Type                 1    3.12      3.12     0.02   .90
Sex x Type                    1    76.49     76.49    0.40   .53
Grade x Type                  3    2171.45   723.82   3.75   .013
Origin x Sex x Grade          3    221.54    73.85    0.38   .77
Origin x Sex x Type           1    824.03    824.03   4.27   .04
Origin x Grade x Type         3    895.69    298.56   1.55   .21
Sex x Grade x Type            3    378.47    126.16   0.65   .58
Origin x Sex x Grade x Type   3    349.60    116.53   0.60   .61
Error                         122  23527.22  192.85
TABLE 4.18
Default ANOVA Table Given by SPSS for Data on Australian School Children

Source                        DF   SS        MS       F      p
Origin                        1    228.72    228.72   1.19   .28
Sex                           1    349.53    349.53   1.81   .18
Grade                         3    1019.43   339.81   1.76   .16
Type                          1    177.04    177.04   0.92   .34
Origin x Sex                  1    3.82      3.82     0.02   .89
Origin x Grade                3    1361.06   453.69   2.35   .08
Origin x Type                 1    53.21     53.21    0.28   .60
Sex x Type                    1    0.86      0.86     0.00   .95
Grade x Type                  3    2038.98   679.66   3.52   .02
Sex x Grade                   3    1612.12   537.37   2.79   .04
Origin x Sex x Grade          3    34.87     11.62    0.06   .98
Origin x Grade x Type         3    980.47    326.83   1.69   .17
Sex x Grade x Type            3    412.46    137.49   0.71   .55
Origin x Sex x Grade x Type   3    349.60    116.53   0.60   .61
Error                         122  23527.22  192.85
4.7. ANALYSIS OF COVARIANCE IN FACTORIAL DESIGNS
Analysis of covariance in the context of a one-way design was introduced in Chapter 3. The technique can be generalized to factorial designs, and in this section an example of its use in a 3 x 3 two-way design will be illustrated by using data from Howell (1992). The data are given in Table 4.19 and arise from an experiment in which subjects performed either a pattern recognition task, a cognitive task,
TABLE 4.19
NS
Errors
Distract
10
83
94
12 8 9
107133123
DS
Errors
Distract
AS
Errors
Distract
Cognitive Task
12 7 1 4
101 75138
8
10 9
86112117
11
8 108 10 8
130 111 102 120
118
134
97
4
8 1 1 1 6 1 7
5
6 9 6 6
94 138127126
124 100 103 120 91 138
11
10
7 1 6
88
118
8
9
1
9 7 1 6 1 9
1 1 2 2 1 2 1 8
8 1 0
64 135 130 106 l 2 3 117 124 141 95
98
95
103
134
119123
NS
Errors
Distract
DS
Errors
Distract
AS
Errors
Distract
27 34 19 20 56 35 23 37
4 30 4 42 34 19 49
126 154 113 87 125 130 103 139 85 131 98 107 107 96 143
48 29 34 6
113 100 114 74
18 63 9 54 28 71 60 54 51 25 49
76 162 80 118 99 146 132 135 111 106 96
34 65 55 33 42 54
108 191 112
98
128
145
76
107
128
142
61 75
38
21 44
61 51 32 47
144 131 110 132
Driving Simulation
NS
Errors
Distract
DS
Errors
Distract
14
15 2 2
5
96112114137125168102
110
7
0
93
102
108
0 17
16 9 14
0 1 2 1 7
100 123131
15 3 9
15 13
109 111 137 106 117 101 116
1 1 14 4
5 1 6
3
103
139
101
102
99 116 81 103 78
5 1 1
AS
Errors
Distract
3
130
83
0
0
91 109
92
2
0
6
4
1
106 99 109 136
102
119
84 114
67
68
127
smoking, smoked during or just before the task; delayed smoking, not smoked for 3 hours; and nonsmoking, which consisted of nonsmokers. The dependent variable was the number of errors on the task. Each subject was also given a distractibility score; higher scores indicate a greater possibility of being distracted.
In his analysis of these data, Howell gives the ANOVA table shown in Table 4.20. This table arises from using SPSS and selecting UNIQUE sums of squares (the default). The arguments against such sums of squares are the same here as in the previous section, and a preferable analysis of these data involves Type I sums of squares, with perhaps the covariate ordered first and the group x task interaction ordered last. The results of this approach are shown in Table 4.21. There is strong evidence of a group x task interaction, which again we leave as an exercise for the reader to investigate in more detail. (It is this conflict between Type I and Type III sums of squares that led to the different analyses of covariance results given by SPSS and S-PLUS on the WISC data reported in the previous chapter.)
TABLE 4.20
ANCOVA of Pattern Recognition Experiment by Using Type III Sums of Squares

Source                DF   SS        MS        F       p
Covariate (distract)  1    4644.88   4644.88   64.93   <.001
Group                 2    563.26    281.63    3.94    .022
Task                  2    23870.49  11935.24  166.84  <.001
Group x Task          4    1626.51   406.63    5.68    <.001
Error                 125  8942.32   71.54
TABLE 4.21
ANCOVA of Pattern Recognition Data by Using Type I Sums of Squares with the Covariate Ordered First

Source         DF   SS        MS        F       p
Distract       1    10450.43  10450.43  146.08  <.001
Task           2    23726.78  11863.39  165.83  <.001
Group          2    585.88    292.94    4.10    .02
Group x Task   4    1626.51   406.63    5.68    <.001
Error          125  8942.32   71.54
4.8. MULTIVARIATE ANALYSIS OF VARIANCE FOR FACTORIAL DESIGNS
                PS     VS     QS
Depressed       50.9   54.8   51.9
Not depressed   53.3   57.1   55.3
The children of depressed mothers have lower average scores on each of the
three cognitive variables.
TABLE 4.22
MANOVA of Cognitive Variables from a Postnatal Depression Study

Source                   Pillai Statistic  Approx. F  DF1  DF2  p
Mother                   0.09              2.83       3    84   .04
Husband                  0.05              1.45       3    84   .23
Sex                      0.05              1.42       3    84   .24
Mother x Husband         0.07              2.25       3    84   .09
Mother x Sex             0.01              0.31       3    84   .82
Husband x Sex            0.002             0.07       3    84   .98
Mother x Husband x Sex   0.04              1.11       3    84   .35
4.9. SUMMARY
1. Factorial designs allow interactions between factors to be studied.
2. The factors in a factorial design can have random or fixed effects. The latter are generally more applicable to psychological experiments, but random effects will be of more importance in the next chapter and in Chapter 7.
3. Planned comparison and multiple comparison tests can be applied to factorial designs in a similar way to that described for a one-way design in Chapter 3.
4. Unbalanced factorial designs require care in their analyses. It is important to be aware of what procedure for calculating sums of squares is being employed by whatever statistical software is to be used.
COMPUTER HINTS
Essentially, the same hints apply as those given in Chapter 3. The most important thing to keep a look out for is the differences in how the various packages deal with unbalanced factorial designs. Those such as SPSS that give Type III sums of squares as their default should be used with caution, and then only after specifically requesting Type I sums of squares and then considering main effects before interactions, first-order interactions before second-order interactions, and so on. When S-PLUS is used for the analysis of variance of unbalanced designs, only Type I sums of squares are provided.
EXERCISES
4.1. Reexamine and, if you consider it necessary, reanalyze the rat data in the light of the differing cell standard deviations and the nonnormality of the residuals as reported in the text.
4.2. Using the slimming data in Table 4.9, construct the appropriate 95% confidence interval for the difference in weight change between novice and experienced slimmers when members of both groups have access to a slimming manual.
4.3. In the drug, biofeed, and diet example, carry out separate two-way analyses of variance of the data corresponding to each drug. What conclusions do you reach from your analysis?
Display 4.4
ANCOVA Model for a Two-Way Design with Factors A and B
In general terms the model is
yijk = mu + alpha_i + gamma_j + (alpha gamma)_ij + beta(xijk - xbar) + epsilon_ijk,
where yijk is the kth observed value of the response variable in the ijth cell of the design and xijk is the corresponding value of the covariate, which has grand mean xbar. The regression coefficient linking the response and covariate variables is beta. The other terms correspond to those described in Display 4.1.
The adjusted value of an observation is given by
adjusted value = observed response value + estimated regression coefficient x (grand mean of covariate - observed covariate value).
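The adjusted-value formula can be sketched in a few lines of code; the responses, covariate values, and estimated regression coefficient below are all invented for illustration:

```python
# A minimal sketch of the adjusted-value formula in Display 4.4.
# All of the numbers here are hypothetical.
import numpy as np

y = np.array([12.0, 7.0, 14.0, 9.0])      # observed responses (invented)
x = np.array([101.0, 75.0, 138.0, 86.0])  # observed covariate values (invented)
beta_hat = 0.1                            # estimated regression coefficient (assumed)

# adjusted value = response + beta_hat * (covariate grand mean - covariate)
adjusted = y + beta_hat * (x.mean() - x)
print(adjusted)  # adjusted values: 11.9, 9.5, 10.2, 10.4
```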
TABLE 4.23
Blood Pressure, Family History, and Smoker Status
Smoking Status
Family History
Yes
No
Nonsmoker
Exsmoker
Current Smoker
125
156
103
129
110
128
135
114
110
91
136
105
125
103
110
114
107
135
120
123
113
134
140
120
115
110
128
105
90
165
145
120
140
125
123
108
113
160
TABLE 4.24
Data from a Trial of Estrogen Patches in the Treatment of Postnatal Depression
Baseline 2
Depression Score
24
24
18
27
17
15
20
28
16
26
19
20
22
15
10
12
5
5
9
11
13
6
18
27
19
25
19
21
21
25
25
15
27
27
15
28
18
20
21
24
25
22
26
7
8
2
6
11
5
11
6
6
10
Baseline
Treatment
Placebo
Active
18
25
24
19
22
21
21
26
20
10
mother's depression. Do these plots give you any concerns about the results of the analysis reported in Table 4.16? If so, reanalyze the data as you see fit.
4.6. Consider different orders of main effects in the multivariate analysis described in Section 4.7.
4.7. Plot some suitable graphs to aid the interpretation of the origin x sex x type interaction found in the analysis of the data on Australian school children.
4.8. Graphically explore the relationship between errors and the distraction measure in the smoking and performance data in Table 4.19. Do your graphs cause you any concern about the validity of the ANCOVA reported in Section 4.7?
4.9. Use the formula given in Display 4.4 to calculate the adjusted values of the observations for the pattern recognition experiment. Using these values, plot a suitable diagram to aid in the interpretation of the significant group x task interaction.
4.10. The data in Table 4.23 (taken from Boniface, 1995) were collected during a survey of systolic blood pressure of individuals classed according to smoking
Analysis of Repeated
Measure Designs
5.1. INTRODUCTION
Many studies undertaken in the behavioral sciences and related disciplines involve recording the value of a response variable for each subject under more than one condition, on more than one occasion, or both. The subjects may often be arranged in different groups, either those naturally occurring such as gender, or those formed by random allocation, for example, when competing therapies are assessed. Such a design is generally referred to as involving repeated measures. Three examples will help in getting a clearer picture of a repeated measures type of study.
1. Visual acuity data. This example is one already introduced in Chapter 2, involving response times of subjects using their right and left eyes when light was flashed through lenses of different powers. The data are given in Table 2.3. Here there are two within subject factors, eye and lens strength.
2. Field dependence and a reverse Stroop task. Subjects selected randomly from a large group of potential subjects identified as having field-independent or field-dependent cognitive style were required to read two types of words (color and form names) under three cue conditions: normal, congruent, and incongruent.
CHAPTER 5
TABLE 5.1
Field Independence and a Reverse Stroop Task
h Color
bl Form
Subject
196
Cl
0 c2 (C)
176
Cl
0 c2 (C) c3 0
a1 (Field Independent)
1 219
206
191
2
175 186 183
3
166
165
190
4
210 5185 182 171 187 179
6
182 171 175 183
174 187 168
7
8
185
186
9
189
10
191 208 192
11
162 168 163
12
170 162
182
c3 0
169
185 159
201177
183
187 169
141
147
146
150
184
185
210
178
167
153
173
168
135
142
I60
187
161
183
140
145
151
a2 (Field Dependent)
267
216
277
235
165
400
183
13
14
15
16
17
18
19
20
21
22
23
24
216
150
223
162
163 172
159
237
205
164
140
150
214
404
215
179
159
233
177
190
186
271
165
379
140
146
184
1 4 4 165 156
192 189 170
143
150
238
207 228225
217
205 230208
211
144
155
187
139
151
156
163
148
177
163
The dependent variable was the time in milliseconds taken to read the stimulus words. The data are shown in Table 5.1. Here there are two within subjects factors, type and cue, and one between subjects factor, cognitive style.
3. Alcohol dependence and salsolinol excretion. Two groups of subjects, one with severe dependence and one with moderate dependence on alcohol, had their salsolinol excretion levels (in millimoles) recorded on four consecutive days (for those readers without the necessary expertise in chemistry, salsolinol is an alkaloid
TABLE 5.2
Salsolinol Excretion Rates (mmol) for Moderately and Severely Dependent Alcoholic Patients

Day
Subject
0.40
7.80
0.70
0.90
2.10
0.32
0.69
6.34
2.33
1.80
1.12
3.91
0.73
0.63
3.20
0.70
1.01
0.66
2.45
3.86
2
0.70
1.85
4.20
1.60
1.30
1.20
1.30
0.40
3
1.00
3.60
7.30
1.40
0.70
2.60
4.40
1.10
4
1.40
2.60
5.40
7.10
0.70
1.80
2.80
8.10
5.2. ANALYSIS OF VARIANCE FOR REPEATED MEASURE DESIGNS
The structure of the visual acuity study and the field-dependence study is, superficially at least, similar to that of the factorial designs of Chapter 4, and it might be imagined that the analysis of variance procedures described there could again be applied here. This would be to overlook the fundamental difference that in a repeated measures design the same subjects are observed under the levels of at least some of the factors of interest, rather than a different sample as required in a factorial design. It is possible, however, to use relatively straightforward ANOVA procedures for repeated measures data if three particular assumptions about the observations are valid. They are as follows.
1. Normality: the data arise from populations with normal distributions.
2. Homogeneity of variance: the variances of the assumed normal distributions are required to be equal.
3. Sphericity: the variances of the differences between all pairs of repeated measurements are equal. This condition implies that the correlations between pairs of repeated measures are also equal, the so-called compound symmetry pattern.
Sphericity must also hold in all levels of the between subjects part of the design for the analyses to be described in Subsections 5.2.1 and 5.2.2 to be strictly valid.
5.2.1. Repeated Measures Model for Visual Acuity Data
Source                  SS       DF   MS      F      p
Eye                     52.90    1    52.90   0.61   .44
Error (eye)             1650.60  19   86.87
Lens strength           571.85   3    190.62  12.99  <.001
Error (strength)        836.15   57   14.67
Eye x Strength          43.75    3    14.58   1.51   .22
Error (eye x strength)  551.75   57   9.68
Display 5.1
The model for the observations is
yijk = mu + alpha_j + beta_k + gamma_jk + tau_i + (tau alpha)_ij + (tau beta)_ik + (tau gamma)_ijk + epsilon_ijk,
where alpha_j represents the effect of the jth level of factor A (with a levels), beta_k is the effect of the kth level of factor B (with b levels), and gamma_jk the interaction effect of the two factors. The term tau_i is a constant associated with subject i, and (tau alpha)_ij, (tau beta)_ik, and (tau gamma)_ijk represent interaction effects of subject i with each factor and their interaction. As usual, the epsilon_ijk represent random error terms. Finally, we assume that there are n subjects in the study (see Maxwell and Delaney, 1990, for full details).
The factor A (alpha_j), factor B (beta_k), and the AB interaction (gamma_jk) terms are assumed to be fixed effects, but the subject and error terms are assumed to be random variables from normal distributions with zero means and variances specific to each term. This is an example of a mixed model. Correlations between the repeated measures arise from the subject terms because these apply to each of the measurements made on the subject.
The analysis of variance table is as follows:

Source                   SS     DF                     MS                              MSR (F)
A                        ASS    a - 1                  AMS = ASS/(a - 1)               AMS/EMS1
A x Subjects (error 1)   ESS1   (a - 1)(n - 1)         EMS1 = ESS1/[(a - 1)(n - 1)]
B                        BSS    b - 1                  BMS = BSS/(b - 1)               BMS/EMS2
B x Subjects (error 2)   ESS2   (b - 1)(n - 1)         EMS2 = ESS2/[(b - 1)(n - 1)]
AB                       ABSS   (a - 1)(b - 1)         ABMS = ABSS/[(a - 1)(b - 1)]    ABMS/EMS3
AB x Subjects (error 3)  ESS3   (a - 1)(b - 1)(n - 1)  EMS3 = ESS3/[(a - 1)(b - 1)(n - 1)]

Here it is useful to give specific expressions both for the various sums of squares above and for the terms that the corresponding mean squares are estimating (the expected mean squares), although the information looks a little daunting!
(Continued)
Display 5.1
(Continued)
Here S represents subjects, the sigma terms are variances of the various random effects in the model, identified by an obvious correspondence to these terms, and the theta terms are A, B, and AB effects that are nonzero unless the corresponding null hypothesis that the alpha_j, beta_k, or gamma_jk terms are zero is true.
F tests of the various effects involve different error terms essentially for the same reason as discussed in Chapter 4, Section 4.5, in respect to a simple random effects model. For example, under the hypothesis of no factor A effect, the mean square for A and the mean square for A x Subjects are both estimators of the same variance. If, however, the factor effects alpha_j are not zero, then theta_A is greater than zero and the mean square for A estimates a larger variance than the mean square for A x Subjects. Thus the ratio of these two mean squares provides an F test of the hypothesis that the populations corresponding to the levels of A have the same mean. (Maxwell and Delaney, 1990, give an extended discussion of this point.)
The means for the four lens strengths are as follows.

6/60     6/36     6/18     6/6
112.93   115.05   115.10   118.23
Display 5.2
Repeated Measures ANOVA Model for a Three-Factor Design (A, B, and C) with Repeated Measures on Two Factors (A and B)
Let yijkl represent the observation on the ith person for factor A level j, factor B level k, and factor C level l. The model for the observations contains main effect terms for A, B, and C, all of their interactions, a subject term, and subject x factor interaction terms, together with the usual random error term (see Maxwell and Delaney, 1990, for full details).
The analysis of variance table has the following structure:

Source                                     SS      MSR (F)
Between subjects
  C                                        CSS     CMS/EMS1
  Subjects within C (error 1)              ESS1
Within subjects
  A                                        ASS     AMS/EMS2
  C x A                                    CASS    CAMS/EMS2
  C x A x Subjects within C (error 2)      ESS2
  B                                        BSS     BMS/EMS3
  C x B                                    CBSS    CBMS/EMS3
  C x B x Subjects within C (error 3)      ESS3
  A x B                                    ABSS    ABMS/EMS4
  C x A x B                                CABSS   CABMS/EMS4
  C x A x B x Subjects within C (error 4)  ESS4
TABLE 5.4
ANOVA for Reverse Stroop Experiment

Source               SS         DF  MS        F      p
Group                18906.25   1   18906.25  2.56   .12
Error                162420.08  22  7382.73
Type                 25760.25   1   25760.25  12.99  .0016
Group x Type         3061.78    1   3061.78   1.54   .23
Error                43622.30   22  1982.83
Cue                  5697.04    2   2848.52   22.60  <.0001
Group x Cue          292.62     2   146.31    1.16   .32
Error                5545.00    44  126.02
Type x Cue           345.37     2   172.69    2.66   .08
Group x Type x Cue   90.51      2   45.26     0.70   .50
Error                2860.79    44  65.02
hypothesis when it is true; that is, increasing the size of the Type I error over the nominal value set by the experimenter. This will lead to investigators claiming a greater number of significant results than are actually justified by the data.
Sphericity implies that the variances of the differences between any pair of the repeated measurements are the same. In terms of the covariance matrix of the repeated measures, Sigma (see the glossary in Appendix A), it requires the compound symmetry pattern; that is, the variances on the main diagonal must equal one another, and the covariances off the main diagonal must also all be the same. Specifically, the covariance matrix must have the following form:

        ( sigma^2      rho sigma^2  ...  rho sigma^2 )
Sigma = ( rho sigma^2  sigma^2      ...  rho sigma^2 )    (5.1)
        ( ...                                        )
        ( rho sigma^2  rho sigma^2  ...  sigma^2     )

where sigma^2 is the assumed common variance and rho is the assumed common correlation coefficient of the repeated measures. This pattern must hold in each level of the between subject factors: quite an assumption!
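The connection between compound symmetry and sphericity can be checked numerically: under the pattern in Equation (5.1), every pairwise difference of the repeated measures has variance 2 sigma^2 (1 - rho). A small sketch:

```python
# Sketch: under compound symmetry, the variance of every pairwise difference
# of the repeated measures equals 2 * sigma^2 * (1 - rho), which is exactly
# what the sphericity condition requires.
import numpy as np
from itertools import combinations

p, sigma2, rho = 4, 10.0, 0.5
Sigma = np.full((p, p), rho * sigma2)  # off-diagonal covariances
np.fill_diagonal(Sigma, sigma2)        # common variance on the diagonal

# Var(X_i - X_j) = Sigma[i,i] + Sigma[j,j] - 2 * Sigma[i,j]
diff_vars = [Sigma[i, i] + Sigma[j, j] - 2 * Sigma[i, j]
             for i, j in combinations(range(p), 2)]
print(diff_vars)  # all equal to 2 * sigma2 * (1 - rho) = 10.0
```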
As we shall see in Chapter 7, departures from the sphericity assumption are very likely in the case of longitudinal data, but perhaps less likely to be so much of a problem in experimental situations in which the levels of the within subject factors are given in a random order to each subject in the study. But where departures are suspected, what can be done?
5.4. CORRECTION FACTORS IN THE ANALYSIS OF VARIANCE OF REPEATED MEASURE DESIGNS
Box (1954) and Greenhouse and Geisser (1959) considered the effects of departures from the sphericity assumption in a repeated measures ANOVA. They demonstrated that the extent to which a set of repeated measures data deviates from the sphericity assumption can be summarized in terms of a parameter, epsilon, which is a function of the covariances and variances of the repeated measures. (In some ways it is a pity that epsilon has become the established way to represent this correction term, as there is some danger of confusion with all the epsilons occurring as error terms in models. Try not to become too confused!) Furthermore, an estimate of this parameter can be used to decrease the degrees of freedom of F tests for the within subjects effects, to account for departure from sphericity. In this way larger F values will be needed to claim statistical significance than when the correction is not used, and so the increased risk of falsely rejecting the null hypothesis is removed. For those readers who enjoy wallowing in gory mathematical details, the formula for epsilon is given in Display 5.3. When sphericity holds, epsilon takes its maximum value of one and the F tests need not be amended. The minimum value of epsilon is 1/(p - 1), where p is the number of repeated measures. Some authors, for example, Greenhouse and Geisser (1959), have suggested using this lower bound in all cases so as to avoid the need to estimate epsilon from the data (see below). Such an approach is, however, very conservative. That is, it will too often fail to reject the null hypothesis when it is false. The procedure is not recommended, particularly now that most software packages will estimate epsilon routinely. In fact, there have been two suggestions for such estimation, both of which are usually provided by major statistical software.
1. Greenhouse and Geisser (1959) suggest estimating epsilon by substituting the sample variances and covariances of the repeated measures into the formula given in Display 5.3.
2. Huynh and Feldt (1976) suggest taking the estimate of epsilon to be min(1, a/b), where a = ng(p - 1)e - 2 and b = (p - 1)[g(n - 1) - (p - 1)e], where e is the Greenhouse and Geisser estimate, n is the number of subjects in each group, and g is the number of groups.
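Both estimates can be sketched as follows. The Greenhouse-Geisser value is computed here from the usual form of the Box formula, written in terms of the elements of the covariance matrix S of the p repeated measures (the same summary quantities that appear in Table 5.6); the check on the Huynh-Feldt adjustment uses the Stroop values n = 12, g = 2, p = 6:

```python
# Sketch of the Box/Greenhouse-Geisser epsilon from a covariance matrix S,
# plus the Huynh-Feldt adjustment. Function names are our own.
import numpy as np

def gg_epsilon(S):
    """Greenhouse-Geisser estimate from the covariance matrix S (p x p)."""
    p = S.shape[0]
    mean_diag = np.mean(np.diag(S))   # mean of the diagonal elements
    mean_all = np.mean(S)             # mean of all elements
    row_means = np.mean(S, axis=1)    # row means
    num = p ** 2 * (mean_diag - mean_all) ** 2
    den = (p - 1) * (np.sum(S ** 2) - 2 * p * np.sum(row_means ** 2)
                     + p ** 2 * mean_all ** 2)
    return num / den

def hf_epsilon(eps_hat, n, g, p):
    """Huynh-Feldt adjustment; n subjects per group, g groups."""
    a = n * g * (p - 1) * eps_hat - 2
    b = (p - 1) * (g * (n - 1) - (p - 1) * eps_hat)
    return min(1.0, a / b)

# Under compound symmetry, epsilon is exactly 1
S = 0.5 * 10.0 * np.ones((4, 4))
np.fill_diagonal(S, 10.0)
print(round(gg_epsilon(S), 3))                 # 1.0
print(round(hf_epsilon(0.2768, 12, 2, 6), 3))  # 0.303
```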
5.5. MULTIVARIATE ANALYSIS OF VARIANCE FOR REPEATED MEASURE DESIGNS
TABLE 5.5
Field-Dependent
Variance
Field-Independent
Variance
C1-C2
68.52
138.08
147.61
116.02
296.79
123.36
178.00
123.54
304.27
330.81
311.42
544.63
16.08
47.00
66.27
C1-C3
C1-C4
C1-C5
C1-C6
C2-C3
C2-C4
C2-C5
C2-C6
C3-C4
C3-C5
C3-C6
C4-C5
C4-C6
C5-C6
478.36
344.99
2673.36
2595.45
3107.48
778.27
2866.99
2768.73
3364.93
2461.15
2240.99
2500.79
6 S4
1
150.97
103.66
c 4 c2
c3
CS
C6
185.00
190.82
160.01
156.46
161.77
192.09
156.46192.09
172.23
161.77
270.14
256.27
164.09
375.36
343.82
497.64
c1
c2
C3
c4
5105.30
1698.36
4698.27
4708.48
4708.48
4790.02 1443.91
4324.00
c3
4698.27
4324.00 1569.944636.24
c 4 1443.911698.36
964.79 1569.94
c 5 1603.66
1847.93
1790.64 1186.02
1044.64
C6 1309.361595.73
1664.55
1193.64
1138.00
1003.73
c2
CS
C6
1847.93
1603.66
1790.64
1044.64
1595.73
1309.36
1664.55
1003.73
1138.00
TABLE 5.6
Calculating Correction Factors
The mean of the covariance matrices in Table 5.5 is the matrix S, say, given by
cz
Cl
c4
c3
c5
929.59
1001.31
2450.70
c 1 2427.10
2637.22
2490.42
2272.66882.72 800.18
c2
2427.10
750.73
.32 871.082505.71
c 3 2272.66
2450.70
800.18 632.44
c4
929.59
1.0887
689.55657.39
c 5882.72 1001.31
975.32
657.39 721.14 740.91
9.55 914.32
C6 750.73 890.36
Mean of diagonal elements of S is 1638.76
Mean of all the elements in S is 1231.68
Sum of squares of all the elements in S is 72484646
The row means of S are
Q
Cl
1603.97
c3
c4
c5
C6
890.364
845.64
C6
1722.72
Sum of the squaredrow means of S is 10232297
The Huynh-Feldt estimate is min(1, a/b), where
a = 12 x 2 x 5 x 0.2768 - 2 = 31.22,
b = 5 x (2 x 11 - 5 x 0.2768) = 103.08,
so that the estimate is 0.303.
TABLE 5.7
ANOVA With and Without Correction Factors for Stroop Data

Source         SS         DF   MS        F      p
Group (G)      17578.34   1    17578.34  2.38   .14
Error          162581.49  22   7390.07
Condition (C)  28842.19   5    5768.42   11.81  <.00001
G x C          4736.28    5    947.26    1.94   .094
Error          53735.10   110  488.50
Note. Repeated measures are assumed to arise from a single within subject factor with six levels. For these data the Greenhouse and Geisser estimate of the correction factor is 0.2768. The adjusted degrees of freedom and the revised p values for the within subject tests are as follows. Condition: df1 = 0.2768 x 5 = 1.38; df2 = 0.2768 x 110 = 30.45; p = .0063. Group x Condition: df1 = 1.38; df2 = 30.45; p = .171. The Huynh-Feldt estimate of epsilon is 0.3029, so that the following is the case. Condition: df1 = 0.3029 x 5 = 1.51; df2 = 0.3029 x 110 = 33.32; p = .00041. Group x Condition: df1 = 1.51; df2 = 33.32; p = .168.
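The correction itself is simply a rescaling of the within subject degrees of freedom by the estimated correction factor, as a minimal sketch shows (using the Greenhouse-Geisser value 0.2768 quoted for the Stroop data):

```python
# Sketch: correcting within subject degrees of freedom by a factor epsilon.
eps = 0.2768                 # Greenhouse-Geisser estimate for the Stroop data
df1, df2 = 5, 110            # uncorrected condition and error dfs
adj_df1, adj_df2 = eps * df1, eps * df2
print(round(adj_df1, 2), round(adj_df2, 2))  # 1.38 30.45
```

The adjusted p values are then obtained by referring the unchanged F statistics to an F distribution with these smaller degrees of freedom.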
Display 5.4
Multivariate Test for Equality of Means of Levels of a Within Subject Factor A with a Levels
The null hypothesis is that in the population the means of the a levels of A are equal; that is,
H0: mu1 = mu2 = ... = mua,
where mu1, mu2, ..., mua are the population means of the levels of A.
This is equivalent to
H0: mu1 - mu2 = 0; mu2 - mu3 = 0; ...; mu(a-1) - mua = 0
(see Exercise 5.5).
Assessing whether these differences are simultaneously equal to zero gives the required multivariate test of H0. Hotelling's T^2 is given by
T^2 = n dbar' Sd^(-1) dbar,
where n is the sample size, dbar is the vector of means of the differences between the repeated measurements, and Sd is the covariance matrix of these differences.
Under H0, F = [(n - p + 1)/((n - 1)(p - 1))] T^2 has an F distribution with p - 1 and n - p + 1 degrees of freedom, where p is the number of repeated measures.
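A sketch of the calculation in Display 5.4, with invented data: successive differences of the p repeated measures are formed, Hotelling's T^2 is computed from their mean vector and covariance matrix, and T^2 is then converted to an F statistic:

```python
# Sketch of the Hotelling T^2 test of Display 5.4; the data are invented.
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 4
X = rng.normal(0, 1, (n, p))           # n subjects, p repeated measures

D = X[:, :-1] - X[:, 1:]               # successive differences, p - 1 columns
dbar = D.mean(axis=0)                  # mean difference vector
Sd = np.cov(D, rowvar=False)           # covariance matrix of the differences

T2 = n * dbar @ np.linalg.solve(Sd, dbar)
F = (n - p + 1) / ((n - 1) * (p - 1)) * T2   # F on (p - 1, n - p + 1) dfs
print(round(F, 3))
```

The resulting F would be referred to an F distribution with p - 1 and n - p + 1 degrees of freedom.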
TABLE 5.8
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
C1-C2   C2-C3   C3-C4   C4-C5   C5-C6
-15
-8
-13
-3
1
16
-8
8
6
c344
4
-22
16
4
-19
1
-12
-16
-5
-8
-55
-55
-15
25
-22
31
7
-1
-7
-1
-1
0
10
19
0
-4
18
-53
-16
4
4
28
-12
-22
-5
-40
-21
-1
sa=
43
38
23
38
-11
-11
20
32
28
40
33
28
117
110
25
165
47
40
2
16
31
12
67
48
-14
-6
-8
-8
-4
-3
-3
7
-6
-4
-1
-6
-5
-26
-22
0
-9
-6
-12
-5
-4
-6
-25
-25
-18
-10
-6
-18
-4
-4
-24
-4
-16
7
-17
-9
-3
2
-22
-22
-19
-7
-18
-3
-11
-12
dbar' = [-1.42, -9.33, 40.88, -8.00, -10.92].
Sd =
(  261.91  -228.88   -20.53    12.83    36.90 )
( -228.88   442.23  -208.65    34.61    54.52 )
(  -20.53  -208.65  1595.51  -143.13   100.10 )
(   12.83    34.61  -143.13    65.64   -14.70 )
(   36.90    54.52   100.10   -14.70    81.73 )
Display 5.5
Multivariate Test for an A x B Interaction in a Repeated Measures Design with Within Subject Factor A (at a Levels) and Between Subjects Factor B (at b Levels)
interaction.
Keselman, Keselman, and Lix (1995) give some detailed comparisons of univariate and multivariate approaches to the analysis of repeated measure designs and, for balanced designs (between subjects factors), recommend the degrees of freedom adjusted univariate F test, as they suggest that the multivariate approach is affected more by departures from normality.
5.6. MEASURING RELIABILITY FOR QUANTITATIVE VARIABLES: THE INTRACLASS CORRELATION COEFFICIENT
One of the most important issues in designing experiments is the question of the reliability of the measurements. The concept of the reliability of a quantitative variable can best be introduced by use of statistical models that relate the unknown true values of a variable to the corresponding observed values. This is
SI =
S*=
(
(
68.52
-26.89
-22.56
-26.89
123.36
-138.08
-22.56
-138.08
330.81
-17.73
11.44 -17.54
-17.73
0.02 -26.23
11.44
-17.54
-26.23
101.14
16.08 -17.67
101.14 -17.67
66.27 o . o
478.36
-455.82
119.82
10.18
-42.09
-455.82
778.27
-186.21
60.95
168.20
119.82
-186.21
2461.15
-140.85
85.18
10.18
60.95 -140.85
-42.09
168.21
85.18
61.54
-7.11
-7.11
103.66
'
'
2
0
Display 5.6
Models for the Reliability of Quantitative Variables
Let x represent the observed value of some variable of interest for a particular individual. If the observation was made a second time, say some days later, it would almost certainly differ to some degree from the first recording. A possible model for x is
x = t + epsilon,
where t is the underlying true value of the variable for the individual and epsilon is the measurement error.
Assume that t has a distribution with mean mu and variance sigma_t^2. In addition, assume that epsilon has a distribution with mean zero and variance sigma_e^2, and that t and epsilon are independent of each other; that is, the size of the measurement error does not depend on the size of the true value.
A consequence of this model is that variability in the observed scores is a combination of true score variance and error variance; that is,
sigma_x^2 = sigma_t^2 + sigma_e^2.
The reliability, R, of the measurements is defined as the ratio of the true score variance to the observed score variance,
R = sigma_t^2 / (sigma_t^2 + sigma_e^2).
(Continued)
Display 5.6
(Continued)
The intraclass correlation coefficient for this situation is given by
R = sigma_t^2 / (sigma_t^2 + sigma_r^2 + sigma_e^2).
An analysis of variance of the raters' scores for each subject leads to the following ANOVA table:

Source    DF              MS
Subjects  n - 1           SMS
Raters    r - 1           RMS
Error     (n - 1)(r - 1)  EMS

Equating the three mean squares to their expected values leads to the following estimators of the variance terms:
sigma_t^2 = (SMS - EMS) / r,
sigma_r^2 = (RMS - EMS) / n,
sigma_e^2 = EMS.
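These estimators translate directly into code. The sketch below wraps them in a small helper (a name of our own choosing) and evaluates it with the mean squares reported later in Table 5.11 for the synchronized swimming data (n = 40 swimmers, r = 5 judges):

```python
# Sketch: variance-component estimates and the intraclass correlation from
# the subjects-by-raters ANOVA mean squares. Checked with the Table 5.11
# values: SMS = 13.37, RMS = 4.77, EMS = 1.05, n = 40, r = 5.
def intraclass(SMS, RMS, EMS, n, r):
    """Estimate the variance components and the intraclass correlation R."""
    sigma_t2 = (SMS - EMS) / r   # true-score (subject) variance
    sigma_r2 = (RMS - EMS) / n   # rater variance
    sigma_e2 = EMS               # error variance
    R = sigma_t2 / (sigma_t2 + sigma_r2 + sigma_e2)
    return sigma_t2, sigma_r2, sigma_e2, R

print(intraclass(13.37, 4.77, 1.05, n=40, r=5))  # sigma_t2 about 2.46, R about 0.68
```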
TABLE 5.10
Judges' Scores for 40 Competitors in a Synchronized Swimming Competition
Swimmer
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
Judge I
33.1
26.2
31.2
27.0
28.4
28.1
27.0
25.1
31.2
30.1
29.0
27.0
31.2
32.3
29.5
29.2
32.3
27.3
26.4
27.3
27.3
29.5
28.4
31.2
30.1
31.2
26.2
27.3
29.2
29.5
28.1
31.2
28. I
24.0
27.0
27.5
27.3
3 1.2
27.0
31.2
Judge 2
32.0
29.2
30.1
21.9
25.3
28.1
28.1
27.3
29.2
30.1
28.1
27.0
33.1
31.2
28.4
29.2
31.2
30.1
27.3
26.7
28.1
28.1
29.5
29.5
31.2
31.2
28.1
27.3
26.4
27.3
27.3
31.2
27.3
28. l
29.0
27.5
29.5
30.1
27.5
29.5
Judge 3
31.431.2
25.328.4
29.230.1
28.127.3
26.225.6
28.428.1
27.028. I
27.326.2
30.131.2
30.128.1
27.029.2
25.327.3
29.231.2
32.3
28.430.3
28.129.2
31.229.2
29.229.2
26.427.3
26.426.4
26.428.4
26.427.3
27.528.4
27.329.2
29.228.1
30.331.2
26.226.2
28.127.0
27.327.3
28.129.2
29.229.2
28.431.2
28.427.3
25.326.4
28.127.3
25.324.5
28.126.2
29.227.3
27.327.3
28.430.1
Judge 4
31.2
27.3
31.2
24.7
26.7
32.0
28.1
27.5
32.0
28.6
29.0
26.4
30.3
31.2
30.3
30.9
29.5
29.2
28.1
26.4
27.5
28.4
28.6
31.2
31.2
31.2
25.9
28.1
27.3
28.4
28.1
31.2
28.4
25.1
26.4
25.6
27.5
30.1
27.0
28.4
Judge 5
31.2
ratings. The relationship between the intraclass correlation coefficient and Pearson's coefficient is given explicitly in Display 5.7.
Sample sizes required in reliability studies concerned with estimating the intraclass correlation coefficient are discussed by Donner and Eliasziw (1987), and the same authors (Eliasziw and Donner, 1987) also consider the question of the optimal choice of r and n that minimizes the overall cost of a reliability study. Their conclusion is that an increase in r for fixed n provides more information than an increase in n for fixed r.
[Figure: pairwise scatterplots of the judges' scores for the 40 swimmers; axes run from approximately 26 to 32.]
TABLE 5.11
ANOVA and Calculation of Intraclass Correlation Coefficient
for Synchronized Swimming Judgments

ANOVA Table
Source     SS       DF    MS
Swimmers   521.27   39    13.37
Judges     19.08    4     4.77
Error      163.96   156   1.05

Note. The required estimates of the three variance terms in the model are

sigma-hat(swimmers)^2 = (13.37 - 1.05)/5 = 2.46,
sigma-hat(judges)^2 = (4.77 - 1.05)/40 = 0.093,
sigma-hat^2 = 1.05,

so that the estimated intraclass correlation coefficient is

R = 2.46/(2.46 + 0.093 + 1.05) = 0.68.
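The variance-component arithmetic of Table 5.11 is easy to check numerically; the short Python sketch below (variable names are illustrative) reproduces the calculation from the ANOVA mean squares.

```python
# Variance components and the intraclass correlation from the ANOVA
# mean squares of Table 5.11 (40 swimmers each rated by 5 judges).
ms_swimmers, ms_judges, ms_error = 13.37, 4.77, 1.05
n_swimmers, n_judges = 40, 5

var_swimmers = (ms_swimmers - ms_error) / n_judges   # swimmer variance component
var_judges = (ms_judges - ms_error) / n_swimmers     # judge variance component
var_error = ms_error                                 # error variance component

icc = var_swimmers / (var_swimmers + var_judges + var_error)
print(round(var_judges, 3))  # 0.093, as in Table 5.11
print(round(icc, 2))         # 0.68
```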
Display 5.7
Relationship Between the Intraclass Correlation Coefficient, R, and the Product Moment
Correlation Coefficient, R12, in the Case of Two Raters

R = 2 R12 s1 s2 / [s1^2 + s2^2 + (xbar1 - xbar2)^2],

where xbar1 and xbar2 are the mean values of the scores of the two raters, and s1^2 and s2^2 are
the corresponding variances.
5.7. SUMMARY
1. Repeated measures data arise frequently in many areas of psychological
research and require special care in their analysis.
2. Univariate analysis of variance of repeated measures designs depends on the
assumption of sphericity. This assumption is unlikely to be valid in most
situations.
3. When sphericity does not hold, either a correction factor approach or a
MANOVA approach can be used (other possibilities are described by
Goldstein, 1995).
COMPUTER HINTS
SPSS
In SPSS the basic steps to conduct a repeated measures analysis are as follows.
Click Statistics, click General Linear Model, and then click GLM-Repeated Measures to get the GLM-Repeated Measures Define Variable(s) dialog box.
Specify the names of the within subject factors, and the number of levels of these factors. Click Add.
Click Define and move the names of the repeated measure variables to the Within Subjects Variables box.
Both univariate tests, with and without correction factors, and multivariate tests can be obtained. A test of sphericity is also given.
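Readers who want to check the univariate repeated measures calculations outside a package can do so directly; the sketch below (illustrative, not part of SPSS or S-PLUS) computes the one-way repeated measures F statistic and the Greenhouse-Geisser correction factor from first principles.

```python
import numpy as np

def rm_anova(X):
    """One-way repeated measures ANOVA for an (n subjects x k conditions)
    data matrix, with the Greenhouse-Geisser sphericity correction factor.
    Returns the F statistic, the uncorrected dfs, and the GG epsilon."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    # Sums of squares for the subjects-by-conditions decomposition.
    ss_subj = k * ((X.mean(axis=1) - grand) ** 2).sum()
    ss_cond = n * ((X.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((X - grand) ** 2).sum()
    ss_err = ss_total - ss_subj - ss_cond
    df_cond, df_err = k - 1, (n - 1) * (k - 1)
    F = (ss_cond / df_cond) / (ss_err / df_err)
    # Greenhouse-Geisser epsilon from the double-centred covariance matrix.
    S = np.cov(X, rowvar=False)
    D = S - S.mean(axis=0) - S.mean(axis=1)[:, None] + S.mean()
    eps = np.trace(D) ** 2 / ((k - 1) * (D * D).sum())
    return F, df_cond, df_err, eps

# Example: 5 subjects measured under 3 conditions (illustrative data).
scores = np.array([[8., 7., 5.],
                   [9., 5., 6.],
                   [6., 2., 3.],
                   [5., 3., 2.],
                   [7., 6., 4.]])
F, df1, df2, eps = rm_anova(scores)
```

The corrected test multiplies both degrees of freedom by epsilon before looking up the p value, exactly as described in the chapter.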
TABLE 5.12
Skin Resistance Data

[Skin resistance measurements for 16 subjects; the data columns were garbled in extraction and are omitted.]
TABLE 5.13
Data from Quitting Smoking Experiment

[Scores before and after treatment for male and female subjects under three treatments (Taper, Intermediate, Aversion); the data columns were garbled in extraction and are omitted.]
S-PLUS
In S-PLUS, the main function for analyzing repeated measures is lme, but because this is perhaps more applicable to longitudinal studies than to the examples covered in this chapter, we shall not say more about it until Chapter 7.
EXERCISES
5.1. Apply both the correction factor approach and the MANOVA approach to the Stroop data, now keeping both within subject factors. How do the results compare with the simple univariate ANOVA results given in the text? Do you think any observations should be removed prior to the analysis of these data?
5.2. Reanalyze the visual acuity data by using orthogonal polynomial contrasts.
Linear and Multiple Regression Analysis
6.1. INTRODUCTION
In Table 6.1 a small set of data appears giving the average vocabulary size of children at various ages. Is it possible (or indeed sensible) to try to use these data to construct a model for predicting the vocabulary size of children older than 6, and how should we go about it? Such questions serve to introduce one of the most widely used of statistical techniques, regression analysis. (It has to be admitted that the method is also often misused.) In very general terms, regression analysis involves the development and use of statistical techniques designed to reflect the way in which variation in an observed random variable changes with changing circumstances. More specifically, the aim of a regression analysis is to derive an equation relating a dependent and an explanatory variable, or, more commonly, several explanatory variables. The derived equation may sometimes be used solely for prediction, but more often its primary purpose is as a way of establishing the relative importance of the explanatory variable(s) in determining the response variable, that is, in establishing a useful model to describe the data. (Incidentally, the term regression was first introduced by Galton in the 19th century to characterize a tendency toward mediocrity, that is, to being more average, observed in the offspring of parents.)
CHAPTER 6
162
TABLE 6.1
The Average Oral Vocabulary Size of Children
at Various Ages

Age (Years)   Number of Words
1.0           3
1.5           22
2.0           272
2.5           446
3.0           896
3.5           1222
4.0           1540
4.5           1870
5.0           2072
6.0           2562
Display 6.1
Simple Linear Regression Model
The basic idea of simple linear regression is that the mean values of a response variable y lie on a straight line when plotted against values of an explanatory variable, x.
The variance of y for a given value of x is assumed to be independent of x.
More specifically, the model can be written as

y_i = alpha + beta * x_i + epsilon_i,

where y_1, y_2, ..., y_n are the n observed values of the response variable, and x_1, x_2, ..., x_n are the corresponding values of the explanatory variable.
The epsilon_i, i = 1, ..., n are known as residual or error terms and measure how much an observed value, y_i, differs from the value predicted by the model, namely alpha + beta * x_i.
The two parameters of the model, alpha and beta, are the intercept and slope of the line.
Estimators of alpha and beta are found by minimizing the sum of the squared deviations of observed and predicted values, a procedure known as least squares. The resulting estimators are

beta-hat = sum_{i=1..n} (x_i - xbar)(y_i - ybar) / sum_{i=1..n} (x_i - xbar)^2,
alpha-hat = ybar - beta-hat * xbar.

The variance sigma^2 of the error terms is estimated by the residual mean square,

s^2 = sum_{i=1..n} (y_i - yhat_i)^2 / (n - 2).

(Continued)
Display 6.1
(Continued)
The regression sum of squares is RGSS = sum_{i=1..n} (yhat_i - ybar)^2, and the residual sum of squares is RSS = sum_{i=1..n} (y_i - yhat_i)^2. The relevant terms are usually arranged in an analysis of variance table as follows.

Source      SS     DF      MS           F
Regression  RGSS   1       RGSS/1       RGMS/RSMS
Residual    RSS    n - 2   RSS/(n - 2)

The F statistic provides a test of the hypothesis that the slope parameter, beta, is zero.
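The least-squares formulas of Display 6.1 are easy to verify numerically; the Python sketch below applies them to the vocabulary data of Table 6.1 and recovers the estimates reported in Table 6.2.

```python
import numpy as np

# Least-squares estimates for the vocabulary data of Table 6.1,
# following the formulas of Display 6.1.
age = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 6.0])
words = np.array([3, 22, 272, 446, 896, 1222, 1540, 1870, 2072, 2562], dtype=float)

n = len(age)
beta = ((age - age.mean()) * (words - words.mean())).sum() / ((age - age.mean()) ** 2).sum()
alpha = words.mean() - beta * age.mean()

fitted = alpha + beta * age
s2 = ((words - fitted) ** 2).sum() / (n - 2)  # residual mean square

print(round(alpha, 2), round(beta, 2))  # -763.86 561.93, as in Table 6.2
```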
TABLE 6.2
Results of Fitting a Simple Linear Regression Model to Vocabulary Data

Parameter Estimates
Coefficient   Estimate   SE
Intercept     -763.86    88.25
Slope         561.93     24.29

ANOVA Table
Source      SS          DF   MS          F
Regression  7294087.0   1    7294087.0   535.19
Residual    109032.2    8    13629.0
LINEAR AND MULTIPLE REGRESSION ANALYSIS
Display 6.2
Using the Simple Linear Regression Model for Prediction

The fitted model allows the vocabulary size of a child of any age, x0, to be predicted as yhat = alpha-hat + beta-hat * x0. The standard error of the prediction of a new observation is

SE(yhat) = s * sqrt(1 + 1/n + (x0 - xbar)^2 / sum_{i=1..n} (x_i - xbar)^2),

and an approximate 95% confidence interval is given by yhat plus or minus t(0.975, n - 2) * SE(yhat).
TABLE 6.3
Predictions and Their Confidence Intervals for the Vocabulary Data

Age    Predicted Vocabulary Size   SE of Prediction   95% CI
5.5    2326.7                      133.58             (2018, 2635)
7.0    3169.6                      151.88             (2819, 3520)
10.0   4855.4                      203.66             (4385, 5326)
20.0   10474.7                     423.72             (9496, 11453)
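The prediction standard error of Display 6.2 can be checked against Table 6.3 directly; the sketch below does so for age 5.5 (the value 2.306 is the 97.5% point of the t distribution with 8 degrees of freedom).

```python
import numpy as np

# Predicted new observation and its standard error for the vocabulary data,
# following Display 6.2 and checked against the first row of Table 6.3.
age = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 6.0])
words = np.array([3, 22, 272, 446, 896, 1222, 1540, 1870, 2072, 2562], dtype=float)
n = len(age)
sxx = ((age - age.mean()) ** 2).sum()
beta = ((age - age.mean()) * (words - words.mean())).sum() / sxx
alpha = words.mean() - beta * age.mean()
s = np.sqrt(((words - (alpha + beta * age)) ** 2).sum() / (n - 2))

x0 = 5.5
pred = alpha + beta * x0
se = s * np.sqrt(1 + 1 / n + (x0 - age.mean()) ** 2 / sxx)  # SE of a new observation
ci = (pred - 2.306 * se, pred + 2.306 * se)                  # approximate 95% CI
```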
Display 6.3
Simple Linear Regression Model with Zero Intercept

The model is y_i = beta * x_i + epsilon_i. The least-squares estimator of the slope is now

beta-hat = sum_{i=1..n} x_i y_i / sum_{i=1..n} x_i^2,

with estimated variance s^2 / sum_{i=1..n} x_i^2, where s^2 is the residual mean square from the relevant analysis of variance table.
The predictions in Table 6.3 assume that vocabulary size continues to increase into future ages at the same rate as observed between ages 1 and 6, namely between approximately 513 and 610 words per year. This assumption is clearly false, because the rate of vocabulary acquisition will gradually decrease. Extrapolation outside the range of the observed values of the explanatory variable is known in general to be a risky business, and this particular example is no exception.
Now you might remember that in Chapter 1 it was remarked that if you do not believe in a model you should not perform operations and analyses that assume it is true. Bearing this warning in mind, is the simple linear model fitted to the vocabulary data really believable? A little thought shows that it is not. The estimated vocabulary size at age zero, that is, the estimated intercept on the y axis, is -763.86, with an approximate 95% confidence interval of (-940, -587). This is clearly a silly interval for a value that is known a priori to be zero. It reflects the inappropriate nature of the simple linear regression model for these data. An apparently more suitable model would be one in which the intercept is constrained to be zero. Estimation for such a model is described in Display 6.3.
r_i = y_i - yhat_i.   (6.2)

1. A histogram or stem-and-leaf plot of the residuals can be useful in checking for symmetry and specifically for normality of the error terms in the regression model.
2. Plotting the residuals against the corresponding values of the explanatory variable. Any sign of curvature in the plot might suggest that, say, a quadratic term in the explanatory variable should be included in the model.
FIG. 6.3. Idealized residual plots.
1. Figure 6.3(a) is what is looked for to confirm that the fitted model is appropriate.
2. Figure 6.3(b) suggests that the assumption of constant variance is not justified, so that some transformation of the response variable before fitting might be sensible.
TABLE 6.4
Residuals from Fitting a Simple Linear Regression to the Vocabulary Data

Age   Observed   Predicted   Residual
1.0   3          -201.93     204.93
1.5   22         79.03       -57.03
2.0   272        360.00      -87.99
2.5   446        640.96      -194.96
3.0   896        921.93      -25.92
3.5   1222       1202.89     19.11
4.0   1540       1483.86     56.15
4.5   1870       1764.82     105.19
5.0   2072       2045.79     26.22
6.0   2562       2607.72     -45.70
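The residuals of Equation (6.2) for the vocabulary data can be reproduced in a few lines; the first entry below matches the first row of Table 6.4.

```python
import numpy as np

# Residuals from the simple linear fit to the vocabulary data (Table 6.4).
age = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 6.0])
words = np.array([3, 22, 272, 446, 896, 1222, 1540, 1870, 2072, 2562], dtype=float)
beta = ((age - age.mean()) * (words - words.mean())).sum() / ((age - age.mean()) ** 2).sum()
alpha = words.mean() - beta * age.mean()
residuals = words - (alpha + beta * age)
print(round(residuals[0], 2))  # 204.93: observed 3 minus predicted -201.93
```

With an intercept in the model the residuals necessarily sum to zero, a useful sanity check on any hand calculation.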
3. Figure 6.3(c) implies that the model requires a quadratic term in the explanatory variable.
Display 6.4
Multiple RegressionModel
The multiple regression model is

y_i = beta_0 + beta_1 x_i1 + beta_2 x_i2 + ... + beta_p x_ip + epsilon_i.

The regression coefficients beta_1, beta_2, ..., beta_p give the amount of change in the response variable associated with a unit change in the corresponding explanatory variable, conditional on the other explanatory variables in the model remaining unchanged.
The explanatory variables are strictly assumed to be fixed; that is, they are not random variables. In practice, where this is rarely the case, the results from a multiple regression analysis are interpreted as being conditional on the observed values of the explanatory variables.
The linear in multiple linear regression refers to the parameters rather than the explanatory variables, so the model remains linear if, for example, a quadratic term for one of these variables is included. (An example of a nonlinear model is y = beta_1 exp(beta_2 x1) + beta_3 exp(beta_4 x2).)
The residual terms in the model, epsilon_i, i = 1, ..., n, are assumed to have a normal distribution with mean zero and variance sigma^2. This implies that, for given values of the explanatory variables, the response variable is normally distributed with a mean that is a linear function of the explanatory variables and a variance that is not dependent on these variables.
The aim of multiple regression is to arrive at a set of values for the regression coefficients that makes the values of the response variable predicted from the model as close as possible to the observed values.
As in the simple linear regression model, the least-squares procedure is used to estimate the parameters in the multiple regression model.
The resulting estimators are most conveniently written with the help of some matrices and vectors. The result might look complicated, particularly if you are not very familiar with matrix algebra, but you can take my word that the result looks even more horrendous when written without the use of matrices and vectors!
So, introducing a vector y = [y_1, y_2, ..., y_n] and an n x (p + 1) matrix X given by

(Continued)
(Continued)
Display 6.4
(Continued)
the least-squares estimator of the vector of regression coefficients is beta-hat = (X'X)^{-1} X'y.
These matrix manipulations are easily performed on a computer, but you must ensure that there are no linear relationships between the explanatory variables, for example, that one variable is the sum of several others; otherwise, your regression software will complain. (See Section 6.6.)
(More details of the model in matrix form and the least-squares estimation process are given in Rawlings, 1988.)
The variation in the response variable can be partitioned into a part due to regression on the explanatory variables and a residual, as for simple linear regression. These can be arranged in an analysis of variance table as follows.

Source      SS     DF          MS               F
Regression  RGSS   p           RGSS/p           RGMS/RSMS
Residual    RSS    n - p - 1   RSS/(n - p - 1)

The covariance matrix of the estimated regression coefficients is

V(beta-hat) = s^2 (X'X)^{-1},

where s^2 is the residual mean square. The diagonal elements of this matrix give the variances of the estimated regression coefficients and the off-diagonal elements their covariances.
A measure of the fit of the model is provided by the multiple correlation coefficient, R, defined as the correlation between the observed values of the response variable, y_1, ..., y_n, and the values predicted by the fitted model, that is,

yhat_i = beta-hat_0 + beta-hat_1 x_i1 + ... + beta-hat_p x_ip.
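The matrix formula of Display 6.4 can be illustrated numerically. In the sketch below the design matrix adds a quadratic age term to the vocabulary data purely as an illustration of the mechanics; no such model is fitted in the text.

```python
import numpy as np

# Matrix-form least squares, beta-hat = (X'X)^{-1} X'y, from Display 6.4.
# Illustrated on the vocabulary data with an (illustrative) quadratic age term.
age = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 6.0])
words = np.array([3, 22, 272, 446, 896, 1222, 1540, 1870, 2072, 2562], dtype=float)

X = np.column_stack([np.ones_like(age), age, age ** 2])  # n x (p + 1) design matrix
beta_hat = np.linalg.solve(X.T @ X, X.T @ words)          # solves (X'X) b = X'y

fitted = X @ beta_hat
# Multiple correlation coefficient R: correlation of observed and fitted values.
R = np.corrcoef(words, fitted)[0, 1]
```

Solving the normal equations with `solve` avoids forming the explicit inverse, which is the numerically preferable route in practice.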
6.3.1. A Simple Example of Multiple Regression
As a gentle introduction to the topic, the multiple linear regression model will be applied to the data on ice cream sales introduced in Chapter 2 (see Table 2.4). Here the response variable is the consumption of ice cream measured over thirty 4-week periods. The explanatory variables believed to influence consumption are average price and mean temperature in the 4-week periods.
A scatterplot matrix of the three variables (see Figure 6.5) suggests that temperature is of most importance in determining consumption. The results of applying the multiple regression model to these data are shown in Table 6.5. The F value of 23.27 with 2 and 27 degrees of freedom has an associated p value that is very small. Clearly, the hypothesis that both regression coefficients are zero is not tenable.
For the ice cream data, the multiple correlation coefficient (defined in Display 6.4) takes the value 0.80, implying that the two explanatory variables, price and temperature, account for 64% of the variability in consumption. The negative regression coefficient for price indicates that, for a given temperature, consumption decreases with increasing price. The positive coefficient for temperature implies that, for a given price, consumption increases with increasing temperature. The sizes of the two regression coefficients might appear to imply that price is of more importance than temperature in predicting consumption. But this is an illusion produced by the different scales of the two variables. The raw regression coefficients should not be used to judge the relative importance of the explanatory variables, although the standardized values of these coefficients can, partially at least, be used in this way. The standardized values might be obtained by applying the regression model to the values of the response variable and explanatory variables standardized by dividing by their respective standard deviations. In such an analysis, each regression coefficient represents the change in the standardized response variable associated with a change of one standard deviation unit in the explanatory variable, again conditional on the other explanatory variables remaining constant. The standardized regression coefficients can, however, be found without undertaking this further analysis, simply by multiplying the raw regression coefficient by the standard deviation of the appropriate explanatory variable and dividing by the standard deviation of the response variable. In the ice cream example the relevant standard deviations are as follows.

Consumption: 0.06579,
Price: 0.00834,
Temperature: 16.422.

The standardized regression coefficients become as shown.
[Figure 6.5: scatterplot matrix of ice cream consumption, price, and temperature.]
TABLE 6.5
Multiple Regression for the Ice Cream Consumption Data

Parameter Estimates
Parameter            Estimate   SE
beta_1 (price)       -1.4018    0.9251
beta_2 (temperature)  0.0030    0.0005

ANOVA Table
Source      SS        DF   MS        F
Regression  0.07943   2    0.03972   23.27
Residual    0.04609   27   0.00171

Standardized coefficients:
Price: (-1.4018 x 0.00834)/0.06579 = -0.1777.
Temperature: (0.00303 x 16.422)/0.06579 = 0.7563.
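The standardized coefficients above amount to a one-line calculation each; the sketch below reproduces them from the raw coefficients and the standard deviations quoted in the text.

```python
# Standardized regression coefficients for the ice cream data, computed as
# raw coefficient x SD(explanatory variable) / SD(response variable).
sd_consumption, sd_price, sd_temperature = 0.06579, 0.00834, 16.422
b_price, b_temperature = -1.4018, 0.00303

std_price = b_price * sd_price / sd_consumption
std_temperature = b_temperature * sd_temperature / sd_consumption
print(round(std_price, 4), round(std_temperature, 4))  # -0.1777 0.7563
```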
TABLE 6.6
Multiple Regression for the Ice Cream Data: The Effect of Order

(a) Price is used first; that is, the model fitted is

consumption = beta_0 + beta_1 x price.

Source      SS        DF   MS        F
Regression  0.00846   1    0.00846   2.02
Residual    0.1171    28   0.00418

Temperature is now added to the model; that is, the following new model is fitted:

consumption = beta_0 + beta_1 x price + beta_2 x temperature.

(b) Temperature is used first; that is, the model fitted is

consumption = beta_0 + beta_1 x temperature.

The parameter estimates are beta-hat_0 = 0.21 and beta-hat_1 = 0.0031. The multiple correlation is 0.78, and the ANOVA table is

Source      SS        DF   MS         F
Regression  0.07551   1    0.07551    42.28
Residual    0.05001   28   0.001786

(c) Price is now added to the model to give the results shown in Table 6.5.
6.3.2. An Example of Multiple Linear Regression in Which One of the Explanatory Variables Is Categorical

The data shown in Table 6.7 are taken from a study investigating a new method of measuring body composition, and give the body fat percentage, age, and sex for 20 normal adults between 23 and 61 years old. The question of interest is how percentage fat, age, and sex are related.
The data in Table 6.7 include the categorical variable, sex. Can such an explanatory variable be included in a multiple regression model, and, if so, how? In fact, it is quite legitimate to include a categorical variable such as sex in a multiple regression model.
TABLE 6.7
Human Fatness, Age, and Sex

Age   Percentage Fat   Sex
23    9.5              0
23    27.9             1
27    7.8              0
27    17.8             0
39    31.4             1
41    25.9             1
45    27.4             0
49    25.2             1
50    31.1             1
53    34.7             1
53    42.0             1
54    20.0             0
54    29.1             1
56    32.5             1
57    30.3             1
57    21.0             0
58    33.0             1
58    33.8             1
60    33.8             1
61    41.1             1
1. For men, for whom sex is coded as zero and so sex x age is also zero, the fitted model is (6.5)
Display 6.5
Possible Models That Might Be Considered for the Human Fat Data in Table 6.7

The first model that might be considered is the simple linear regression model relating percentage fat to age:

%fat = beta_0 + beta_1 x age.

After such a model has been fitted, a further question of interest might be, Allowing for the effect of age, does a person's sex have any bearing on their percentage of fatness? The appropriate model would be

%fat = beta_0 + beta_1 x age + beta_2 x sex,

where sex is a dummy variable that codes men as zero and women as one.
The model now describes the situation shown in the diagram below, namely two parallel lines with a vertical separation of beta_2. Because in this case the effect of age on fatness is the same for both sexes, or, equivalently, the effect of a person's sex is the same for all ages, the model assumes that there is no interaction between age and sex.

[Diagram: two parallel lines of %fat plotted against age.]

Suppose now that a model is required that does allow for the possibility of an age x sex interaction effect on the percentage of fatness. Such a model must include a new variable, defined as the product of the variables age and sex. Therefore, the new model becomes

%fat = beta_0 + beta_1 x age + beta_2 x sex + beta_3 x age x sex.

(Continued)
Display 6.5
(Continued)

However, for women, sex = 1 and so sex x age = age, and the model becomes

%fat = (beta_0 + beta_2) + (beta_1 + beta_3) x age;

that is, a line with intercept beta_0 + beta_2 and slope beta_1 + beta_3.
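The dummy-variable algebra of Display 6.5 is easy to confirm numerically. In the sketch below the data are synthetic (generated from known coefficients, not the Table 6.7 measurements), so the fit recovers the separate intercepts and slopes exactly.

```python
import numpy as np

# Dummy variable with interaction: %fat = b0 + b1*age + b2*sex + b3*age*sex.
# Synthetic, noise-free data so the least-squares fit is exact.
ages = np.array([23., 30., 40., 50., 60., 25., 35., 45., 55., 61.])
sex = np.array([0., 0., 0., 0., 0., 1., 1., 1., 1., 1.])  # 0 = male, 1 = female
b0, b1, b2, b3 = 5.0, 0.3, 10.0, 0.1  # illustrative "true" coefficients
fat = b0 + b1 * ages + b2 * sex + b3 * ages * sex

X = np.column_stack([np.ones_like(ages), ages, sex, ages * sex])
est = np.linalg.lstsq(X, fat, rcond=None)[0]

# Men (sex = 0): intercept b0, slope b1.
# Women (sex = 1): intercept b0 + b2, slope b1 + b3, as derived in Display 6.5.
```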
TABLE 6.8

[Parameter estimates for the interaction model garbled in extraction.]

Source      SS        DF   MS       F
Regression  1176.18   3    392.06   17.22
Residual    364.28    16   22.77
FIG. 6.6. Plot of model for human fat data, which includes a sex x age interaction.
2. For women, for whom sex is coded as one and so sex x age is simply age, the fitted model is

%fat = 3.47   (6.7)
The fitted model actually represents separate simple linear regressions with different intercepts and different slopes for men and women. Figure 6.6 shows a plot of the fitted model together with the original data.
The interaction effect is clearly relatively small, and it seems that a model including only sex and age might describe the data adequately. In such a model the estimated regression coefficient for sex (11.567) is simply the estimated difference between the percentage fat of men and that of women, which is assumed to be the same at all ages.
(Categorical explanatory variables with more than two categories have to be recoded in terms of a series of binary variables, dummy variables, before being used in a regression analysis, but we shall not give such an example until Chapter 10.)
6.3.3.

In Table 6.9, data for 47 states of the USA are given. These data will be used to investigate how the crime rate in 1960 depended on the other variables listed. The data originate from the Uniform Crime Report of the FBI and other government sources.
Again it is useful to begin our investigation of these data by examining the scatterplot matrix of the data; it is shown in Figure 6.7. The scatterplots involving crime rate (the top row of Figure 6.7) indicate that some, at least, of the explanatory variables are predictive of crime rate. One disturbing feature of the plot is the very strong relationship between police expenditure in 1959 and in 1960. The reasons that such a strongly correlated pair of explanatory variables can cause problems for multiple regression will be taken up in Section 6.6. Here we shall preempt one of the suggestions to be made in that section for dealing with the problem, by simply dropping expenditure in 1960 from consideration.
The results of fitting the multiple regression model to the crime rate data are given in Table 6.10. The global hypothesis that all the regression coefficients are zero is overwhelmingly rejected. The square of the multiple correlation coefficient is 0.75, indicating that the 12 explanatory variables account for 75% of the variability in the crime rates of the 47 states.
The overall test that all regression coefficients in a multiple regression are zero is seldom of great interest. In most applications it will be rejected, because it is unlikely that all the explanatory variables chosen for study will be unrelated to the response variable. The investigator is far more likely to be interested in the question of whether some subset of the explanatory variables exists that might be as successful as the full set in explaining the variation in the response variable. If using a particular (small) number of explanatory variables results in a model that fits the data only marginally worse than a much larger set, then a more parsimonious description of the data is achieved (see Chapter 1). How can the most important explanatory variables be identified? Readers might look again at the results for the crime rate data in Table 6.10 and imagine that the answer to this question is relatively straightforward; namely, select those variables for which the corresponding t statistic is significant and drop the remainder. Unfortunately, such a simple approach is only of limited value, because of the relationships between the explanatory variables. For this reason, regression coefficients and their associated standard errors are estimated conditional on the other variables in the model. If a variable is removed from a model, the regression coefficients of the remaining variables (and their standard errors) have to be reestimated from a further analysis. (Of course, if the explanatory variables happened to be orthogonal to one another, there would be no problem and the t statistics could be used in selecting the most important explanatory variables. This is, however, of little consequence in most practical applications of multiple regression.)
TABLE 6.9
Crime Rate Data for 47 States of the USA

[The rows of data for the 47 states (variables: crime rate R, Age, S, Ed, Ex0, Ex1, LF, M, N, NW, U1, U2, W, and X) were garbled in extraction and are omitted.]
6.4. SELECTING SUBSETS OF EXPLANATORY VARIABLES

In this section we shall consider two possible approaches to the problem of identifying a subset of explanatory variables likely to be almost as informative as the complete set of variables for predicting the response. The first method is known as all possible subsets regression, and the second is known as automatic model selection.
[Figure 6.7: scatterplot matrix of the crime rate data.]
TABLE 6.10
Multiple Regression Results for Crime Rate Data (Ex0 not Used)

Estimated Coefficients, SEs, and t Values
Parameter     Estimate   SE       t       p
(Intercept)   -739.89    155.55   -4.76   <.001
Age           1.09       0.43     2.53    .02
S             -8.16      15.20    -0.54   .59
Ed            1.63       0.65     2.50    .02
Ex1           1.03       0.32     3.19    <.001
LF            0.01       0.15     0.03    .97
M             0.19       0.21     0.88    .39
N             -0.016     0.13     -0.13   .90
NW            -0.002     0.07     -0.03   .97
U1            -0.62      0.44     -1.39   .17
U2            1.94       0.87     2.24    .03
W             0.15       0.11     1.37    .18
X             0.82       0.24     3.41    <.01
Display 6.6
Mallows Ck Statistic

Mallows Ck statistic is defined as

Ck = (RSSk / s^2) - (n - 2p),

where RSSk is the residual sum of squares from the multiple regression model with a set of k explanatory variables, p = k + 1 is the number of parameters in that model (including the intercept), and s^2 is the estimate of sigma^2 obtained from the model that includes all the explanatory variables under consideration.
If Ck is plotted against k, the subsets of variables worth considering in searching for a parsimonious model are those lying close to the line Ck = k.
In this plot the value of k is (roughly) the contribution to Ck from the variance of the estimated parameters, whereas the remaining Ck - k is (roughly) the contribution from the bias of the model.
This feature makes the plot a useful device for a broad assessment of the Ck values of a range of models.
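Mallows Ck is straightforward to compute once the residual sums of squares are available. The sketch below uses illustrative synthetic data (not the crime rates); it also demonstrates the identity that the full model always has Ck equal to its own number of parameters.

```python
import numpy as np
from itertools import combinations

def mallows_ck(X_full, y, cols):
    """Mallows Ck for the model using the predictor columns in `cols`,
    with sigma^2 estimated from the full model (Display 6.6)."""
    n, p_full = X_full.shape
    def rss(sel):
        X = np.column_stack([np.ones(n)] + [X_full[:, j] for j in sel])
        resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
        return (resid ** 2).sum()
    s2 = rss(range(p_full)) / (n - p_full - 1)  # full-model estimate of sigma^2
    k = len(cols) + 1                            # parameters, incl. intercept
    return rss(cols) / s2 - (n - 2 * k)

# Synthetic illustration: only the first of three candidates matters.
rng = np.random.default_rng(1)
Xf = rng.normal(size=(40, 3))
y = 2.0 + 1.5 * Xf[:, 0] + rng.normal(size=40)
ck_values = {cols: mallows_ck(Xf, y, cols)
             for size in (1, 2, 3) for cols in combinations(range(3), size)}
```

Subsets that omit the genuinely important predictor show a large bias contribution and hence a Ck far above the line.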
TABLE 6.11
Results of All Subsets Regression for Crime Rate Data Using Only
Age, Ed, and Ex1

Subset Size   Variables in Model   Ck
1             Ex1                  9.15
1             Ed                   41.10
1             Age                  50.14
2             Age, Ex1             3.02
2             Ed, Ex1              11.14
2             Age, Ed              42.23
3             Age, Ed, Ex1         4.00
[Figure 6.8: plot of Ck against subset size for the subsets of Age, Ed, and Ex1.]

6.4.2. Automatic Model Selection
TABLE 6.12
Some of the Results of All Subsets Regression for Crime Rate Data Using All 12
Explanatory Variables

Model   Ck      Subset Size
1       33.50   2
2       67.89   2
3       79.03   2
4       80.36   2
5       88.41   2
6       89.80   2
7       90.30   2
8       90.38   2
9       93.58   2
10      93.61   2
11      20.28   3
12      23.56   3
13      30.06   3
14      30.89   3
15      31.38   3
16      32.59   3
17      33.56   3
18      34.91   3
19      35.46   3
20      35.48   3
21      10.88   4
22      12.19   4
23      15.95   4
24      16.82   4
25      18.90   4
26      20.40   4
27      21.33   4
28      21.36   4
29      22.19   4
30      22.27   4
31      8.40    5
32      9.86    5
33      10.40   5
34      11.19   5
35      12.39   5
36      12.47   5
37      12.65   5
38      12.67   5
39      12.85   5

[The "Variables in Model" column was lost in extraction.]

(Continued)
TABLE 6.12
(Continued)

Model   Ck      Subset Size
40      12.87   5
41      5.65    6
42      5.91    6
43      8.30    6
44      9.21    6
45      9.94    6
46      10.07   6
47      10.22   6
48      10.37   6
49      11.95   6
50      14.30   6

[The "Variables in Model" column was lost in extraction.]
[Figure 6.9: plot of Ck against subset size for all subsets of the 12 explanatory variables.]
Display 6.7

F = (SSresidual,k - SSresidual,k+1) / [SSresidual,k+1 / (n - k - 2)],

where SSresidual,k - SSresidual,k+1 is the decrease in the residual sum of squares when a variable is added to an existing k-variable model (forward selection), or the increase in the residual sum of squares when a variable is removed from an existing (k + 1)-variable model (backward elimination), and SSresidual,k+1 is the residual sum of squares for the model including k + 1 explanatory variables.
The calculated F value is then compared with a preset term known as the F-to-enter (forward selection) or the F-to-remove (backward elimination). In the former, calculated F values greater than the F-to-enter lead to the addition of the candidate variable to the current model. In the latter, a calculated F value less than the F-to-remove leads to discarding the candidate variable from the model.
Application of both forward and stepwise regression to the crime rate data results in the selection of five explanatory variables: Ex1, X, Ed, Age, and U2. Using the backward elimination technique produces these five variables plus W. Table 6.13 shows some of the typical results given by the forward selection procedure implemented in a package such as SPSS. The chosen subsets largely agree with the selection made by all subsets regression.
It appears that one suitable model for the crime rate data is that which includes Age, Ex1, Ed, U2, and X. The parameter estimates, and so on, for a multiple regression model including only these variables are shown in Table 6.14. Notice that the estimated regression coefficients and their standard errors have changed from the values given in Table 6.10, calculated for a model including the original 12 explanatory variables. The five chosen variables account for 71% of the variability of the crime rates (the original 12 explanatory variables accounted for 75%). Perhaps the most curious feature of the final model is that crime rate increases with increasing police expenditure (conditional on the other four explanatory variables). Perhaps this reflects a mechanism whereby, as crime increases, the police are given more resources?
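The forward selection procedure of Display 6.7 can be sketched directly; the data and the F-to-enter value of 4.0 below are illustrative only, not taken from the crime rate example.

```python
import numpy as np

def rss(X, y):
    """Residual sum of squares for a least-squares fit with an intercept."""
    Xd = np.column_stack([np.ones(len(y)), X]) if X is not None else np.ones((len(y), 1))
    beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
    return ((y - Xd @ beta) ** 2).sum()

def forward_select(X, y, f_to_enter=4.0):
    """Forward selection using the partial F statistic of Display 6.7."""
    n, p = X.shape
    chosen, remaining = [], list(range(p))
    rss_cur = rss(None, y)  # intercept-only model
    while remaining:
        best = None
        for j in remaining:
            rss_new = rss(X[:, chosen + [j]], y)
            df = n - len(chosen) - 2  # residual df of the enlarged model
            F = (rss_cur - rss_new) / (rss_new / df)
            if best is None or F > best[1]:
                best = (j, F, rss_new)
        if best[1] < f_to_enter:
            break  # no candidate passes the F-to-enter criterion
        chosen.append(best[0])
        remaining.remove(best[0])
        rss_cur = best[2]
    return chosen

# Synthetic data in which only columns 0 and 2 matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.5 * rng.normal(size=60)
chosen = forward_select(X, y)
```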
TABLE 6.13
Forward Selection Results for the Crime Rate Data

Model Summary
Model 1: Constant, Ex1
Model 2: Constant, Ex1, X
Model 3: Constant, Ex1, X, Ed
Model 4: Constant, Ex1, X, Ed, Age
Model 5: Constant, Ex1, X, Ed, Age, U2

Model   Source      SS          DF   MS          F        R       R^2
1       Regression  30586.257   1    30586.257   36.009   0.667   0.445
        Residual    38223.020   45   849.400
        Total       68809.277   46
2       Regression  38187.893   2    19093.947   27.436   0.745   0.555
        Residual    30621.383   44   695.941
        Total       68809.277   46
3       Regression  43886.746   3    14628.915   25.240   0.799   0.638
        Residual    24922.530   43   579.594
        Total       68809.277   46
4       Regression  46124.540   4    11531.135   21.349   0.819   0.670
        Residual    22684.737   42   540.113
        Total       68809.277   46
5       Regression  48500.406   5    9700.081    19.58    0.840   0.705
        Residual    20308.871   41   495.338
        Total       68809.277   46

Note. No further variables are judged to contribute significantly to the prediction of crime rate.
Model summary abbreviations are defined in Table 6.9.
We have, in Section 6.2, already described the use of residuals for this purpose, but in this section we shall go into a little more detail and introduce several other useful regression diagnostics that are now available. These diagnostics are methods for identifying and understanding differences between a model and the data to which it is fitted. Some differences between the data and the model may be the result of isolated observations; one, or a few, observations may be outliers, or may differ in some unexpected way from the rest of the data. Other differences may be systematic; for example, a term may be missing in a linear model.
A number of the most useful regression diagnostics are described in Display 6.8. We shall now examine the use of two of these diagnostics on the final model
TABLE 6.14
Multiple Regression Results for the Final Model Selected for Crime Rate Data

Parameter   Estimate   SE      t       p
Intercept   -528.86    99.62   -5.31   <.001
Age         1.02       0.37    2.76    .01
Ex1         1.30       0.16    8.12    <.001
Ed          2.04       0.50    4.11    <.001
U2          0.99       0.45    2.19    .03
X           0.65       0.16    4.18    <.001
selected for the crime rate data, and Figure 6.10 shows plots of the standardized and deletion residuals against fitted values. States 11, 19, and 20 might perhaps be seen as outliers because they fall outside the boundary (-2, 2). There is also perhaps some evidence that the variance of crime rate is not constant as required. (Exercise 6.6 invites readers to consider more diagnostic plots for these data.)
6.6. MULTICOLLINEARITY

One of the problems that often occurs when multiple regression is used in practice is multicollinearity. The term is used to describe situations in which there are moderate to high correlations among some or all of the explanatory variables. Multicollinearity gives rise to a number of difficulties when multiple regression is applied.

1. It severely limits the size of the multiple correlation coefficient R because the explanatory variables are largely attempting to explain much of the same variability in the response variable (see, e.g., Dizney and Gromen, 1967).
2. It makes determining the importance of a given explanatory variable difficult because the effects of the explanatory variables are confounded as a result of their intercorrelations.
3. It increases the variances of the regression coefficients, making the use of the fitted model for prediction less stable. The parameter estimates become unreliable.
CHAPTER 6
Display 6.8
Regression Diagnostics

In a multiple regression the predicted values of the response variable can be written in matrix form as
ŷ = Hy,
where
H = X(X'X)⁻¹X',
and X is the matrix introduced in Display 6.4. The diagonal elements h_ii of H (the hat matrix) measure the leverage of each observation.
The standardized residual is
r_i = (y_i - ŷ_i) / (s √(1 - h_ii)),
where s² is the estimate of the residual variance.
The deletion residual is
r_i,del = (y_i - ŷ_i) / (s_(i) √(1 - h_ii)),
where s_(i)² is the residual variance estimated after omitting observation i.
Cook's distance,
D_i = r_i² h_ii / [(q + 1)(1 - h_ii)],
where q is the number of explanatory variables, measures the influence of observation i on the estimated regression coefficients.
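The quantities in Display 6.8 are straightforward to compute directly. The sketch below, using NumPy on a small invented data set (not the crime rate data), is one possible implementation; the function name and the simulated data are ours, not the book's.

```python
# Sketch of the Display 6.8 diagnostics with NumPy; the data here are
# invented for illustration only.
import numpy as np

def regression_diagnostics(X, y):
    """Return hat-matrix diagonals, standardized residuals,
    deletion residuals, and Cook's distances for y regressed on X."""
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])      # add intercept column
    H = X1 @ np.linalg.inv(X1.T @ X1) @ X1.T   # hat matrix
    h = np.diag(H)
    resid = y - H @ y
    q = X1.shape[1] - 1                        # number of explanatory variables
    s2 = resid @ resid / (n - q - 1)           # residual variance estimate
    r = resid / np.sqrt(s2 * (1 - h))          # standardized residuals
    # s_(i)^2: residual variance with observation i deleted
    s2_del = ((n - q - 1) * s2 - resid**2 / (1 - h)) / (n - q - 2)
    r_del = resid / np.sqrt(s2_del * (1 - h))  # deletion residuals
    D = r**2 * h / ((q + 1) * (1 - h))         # Cook's distances
    return h, r, r_del, D

rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = 2 + 3 * x + rng.normal(size=20)
h, r, r_del, D = regression_diagnostics(x, y)
print(np.round(h.sum(), 3))   # trace of H equals q + 1 -> 2.0
```

As a quick check, the hat-matrix diagonals always sum to the number of fitted parameters, which is why the trace printed above equals 2.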
FIG. 6.10. Diagnostic plots for final model selected for crime rate data: (a) standardized residuals against fitted values and (b) deletion residuals against predicted values.
factors (VIFs) of the explanatory variables. The variance inflation factor VIF_j for the jth variable is given by
VIF_j = 1 / (1 - R_j²),
where R_j² is the square of the multiple correlation coefficient from the regression of the jth explanatory variable on the remaining explanatory variables. The variance inflation factor of an explanatory variable indicates the strength of the linear relationship between the variable and the remaining explanatory variables. A rough rule of thumb is that variance inflation factors greater than 10 give some cause for concern.
Returning to the crime rate data, the variance inflation factors of each of the original 13 explanatory variables are shown in Table 6.15. Here it is clear that attempting to use both Ex0 and Ex1 in a regression model would have led to problems.
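The VIF definition above can be sketched in a few lines of code. The following fragment is our illustration, not part of the text; the toy variables are constructed so that two of them are nearly collinear, mimicking the Ex0/Ex1 situation.

```python
# Hedged sketch: variance inflation factors, VIF_j = 1/(1 - R_j^2),
# computed by regressing each explanatory variable on the others.
# The data are invented; x2 is almost a copy of x1.
import numpy as np

def vif(X):
    """X: n x p matrix of explanatory variables (no intercept column)."""
    n, p = X.shape
    out = []
    for j in range(p):
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        fitted = A @ coef
        ss_res = np.sum((X[:, j] - fitted) ** 2)
        ss_tot = np.sum((X[:, j] - X[:, j].mean()) ** 2)
        r2 = 1 - ss_res / ss_tot
        out.append(1 / (1 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
x1 = rng.normal(size=50)
x2 = x1 + rng.normal(scale=0.05, size=50)   # almost collinear with x1
x3 = rng.normal(size=50)
v = vif(np.column_stack([x1, x2, x3]))
print(v[0] > 10, v[1] > 10, v[2] < 2)   # -> True True True
```

The near-duplicate pair produces VIFs far above the rule-of-thumb threshold of 10, while the independent variable stays near 1.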
How can multicollinearity be combated? One way is to combine in some way the explanatory variables that are highly correlated; in the crime rate example we could perhaps have taken the mean of Ex0 and Ex1. An alternative is simply to select one of the set of correlated variables (this is the approach used in the analysis of the crime rate data reported previously, where only Ex1 was used). Two more complex
TABLE 6.15
VIFs for the Original 13 Explanatory Variables in the Crime Rate Data

Variable   VIF
Age        2.70
S          4.76
Ed         5.00
Ex0        100.00
Ex1        100.00
LF         3.57
M          3.57
N          2.32
NW         4.17
U1         5.88
U2         5.00
W          10.00
X          8.33
y_ij = μ + α_i + ε_ij,   (6.9)
where the parameters are constrained so that
Σ_{i=1}^{k} α_i = 0,   (6.10)
that is,
α_k = -α_1 - α_2 - ... - α_{k-1}.   (6.11)
(Other constraints could be used to deal with the overparameterization problem; see Exercise 6.5.)
How can this be put into an equivalent form to the multiple regression model
given in Display 6.4? The answer is provided in Display 6.9, using an example in
which there are k = 3 groups. Display 6.10 uses the fruit fly data (see Table 3.1)
to show how the multiple regression model in Display 6.4 can be used to find
the relevant sums of squares in a one-way ANOVA of the data. Notice that the parameter estimates given in Display 6.10 are those to be expected when the model is as specified in Eqs. (6.9) and (6.10), where μ is the overall mean of the number of eggs laid, and the estimates of the α_i are simply the deviations of the corresponding group mean from the overall mean.
Moving on now to the two-way analysis of variance, we find that the simplest way of illustrating the equivalence of the model given in Display 4.1 to the
multiple regression model is to use an example in which each factor has only two
Display 6.9
Multiple Regression Model for a One-way Design with Three Groups
Introduce two variables x1 and x2, defined below, to label the group to which an observation belongs.

Group   x1   x2
1       1    0
2       0    1
3       -1   -1

With this coding, and using the constraint Σ_{i=1}^{3} α_i = 0 so that α_3 = -α_1 - α_2, the model in Eq. (6.9) can be written as
y = μ + α_1 x1 + α_2 x2 + ε.
This is exactly the same form as the multiple regression model in Display 6.4.
Display 6.10
One-way ANOVA of Fruit Fly Data, Using a Multiple Regression Approach

Define two variables as specified in Display 6.9.
Regress the number of eggs laid on x1 and x2.
The analysis of variance table from the regression is as follows.

Source       SS     DF   MS      F
Regression   1362   2    681.1   8.7
Residual     5659   72   78.6

This is exactly the same as the one-way analysis of variance given in Table 3.2.
The estimates of the regression coefficients from the analysis are
μ̂ = 27.42,  α̂_1 = -2.16,  α̂_2 = -3.79.
The estimates of the α_i are simply the differences between each group mean and the grand mean.
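The recipe of Displays 6.9 and 6.10 can be checked numerically. The sketch below uses three invented groups (not the fruit fly data) and verifies that the regression sum of squares from the coded variables equals the usual between-groups sum of squares; the code and variable names are ours.

```python
# One-way ANOVA recovered by regressing on the two (-1, 0, 1) coded
# variables of Display 6.9.  The three groups are invented data.
import numpy as np

groups = [np.array([3.0, 4.0, 5.0, 4.0]),
          np.array([6.0, 7.0, 5.0, 6.0]),
          np.array([9.0, 8.0, 10.0, 9.0])]
y = np.concatenate(groups)
codes = {0: (1, 0), 1: (0, 1), 2: (-1, -1)}   # Display 6.9 coding
x = np.array([codes[g] for g, grp in enumerate(groups) for _ in grp])

X = np.column_stack([np.ones(len(y)), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
ss_regression = np.sum((fitted - y.mean()) ** 2)

# Direct between-groups sum of squares for comparison
ss_between = sum(len(g) * (g.mean() - y.mean()) ** 2 for g in groups)
print(round(ss_regression, 2), round(ss_between, 2))   # -> 50.67 50.67
```

With this coding the intercept estimate is the grand mean and the coefficients are the group-mean deviations, just as noted in Display 6.10.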
levels. Display 6.11 gives the details, and Display 6.12 describes a numerical example of performing a two-way ANOVA for a balanced design using multiple regression. Notice that in this case the estimates of the parameters in the multiple regression model do not change when other explanatory variables are added to the model. The explanatory variables for a balanced two-way design are orthogonal (uncorrelated). But now consider what happens when the multiple regression
Display 6.11
Multiple Regression Model for a 2 x 2 Factorial Design (Factor A at Levels A1 and A2,
Factor B at Levels B1 and B2)
The constraints imply that the parameters in the model satisfy the following equations:
α_1 = -α_2,  β_1 = -β_2,  γ_1j = -γ_2j,  γ_i1 = -γ_i2.
Coding x1 = 1 for level A1 and x1 = -1 for level A2, x2 = 1 for level B1 and x2 = -1 for level B2, and x3 = x1 x2, the model can be written as
y_ijk = μ + α_1 x1 + β_1 x2 + γ_11 x3 + ε_ijk.
Display 6.12
Analysis of a Balanced Two-way Design by Using Multiple Regression
Consider the following data, which have four observations in each cell.

      B1            B2
A1    23 25 27 29   26 32 30 31
A2    22 23 21 21   37 38 40 35
Introduce the three variables x1, x2, and x3 as defined in Display 6.11 and perform a multiple regression, first entering the variables in the order x1, followed by x2, followed by x3. This leads to the following series of results.

Step 1: x1 entered. The analysis of variance table from the regression is as follows.

Source       SS       DF   MS
Regression   12.25    1    12.25
Residual     580.75   14   41.48

The regression sum of squares gives the between levels of A sum of squares that would be obtained in a two-way ANOVA of these data. The estimates of the regression coefficients at this stage are
μ̂ = 28.75,  α̂_1 = -0.875.
Step 2: x1 and x2 entered. The analysis of variance table from the regression is as follows.

Source       SS       DF   MS
Regression   392.50   2    196.25
Residual     200.50   13   15.42

The difference in the regression sum of squares between steps 1 and 2, that is, 380.25, gives the sum of squares corresponding to factor B that would be obtained in a conventional analysis of variance of these data. The estimates of the regression coefficients at this stage are
μ̂ = 28.75,  α̂_1 = -0.875,  β̂_1 = -4.875.
(Continued)
Display 6.12
(Continued)
Step 3: x1, x2, and x3 entered. The analysis of variance table for this final regression is as follows.

Source       SS       DF   MS
Regression   536.50   3    178.83
Residual     56.50    12   4.71

The difference in the regression sum of squares between steps 2 and 3, that is, 144.00, gives the sum of squares corresponding to the A x B interaction in an analysis of variance of these data.
The residual sum of squares in the final table corresponds to the error sum of squares in the usual ANOVA table.
The estimates of the regression coefficients at this final stage are
μ̂ = 28.75,  α̂_1 = -0.875,  β̂_1 = -4.875,  γ̂_11 = 3.000.
Note that the estimates of the regression coefficients do not change as extra variables are brought into the model.
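The sequence of results in Display 6.12 can be reproduced directly, because the data are given in full. The following sketch (Python rather than the S-PLUS used elsewhere in the book) recovers the three sequential sums of squares.

```python
# The sequential sums of squares of Display 6.12, reproduced numerically.
# Data and coding are those of the display (x3 = x1 * x2 for the
# interaction); the use of Python/NumPy is our addition.
import numpy as np

y = np.array([23, 25, 27, 29,   # A1, B1
              26, 32, 30, 31,   # A1, B2
              22, 23, 21, 21,   # A2, B1
              37, 38, 40, 35],  # A2, B2
             dtype=float)
x1 = np.repeat([1, 1, -1, -1], 4)        # +1 for A1, -1 for A2
x2 = np.repeat([1, -1, 1, -1], 4)        # +1 for B1, -1 for B2
x3 = x1 * x2                             # interaction variable

def regression_ss(y, *columns):
    X = np.column_stack([np.ones(len(y))] + list(columns))
    fitted = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return np.sum((fitted - y.mean()) ** 2)

ss1 = regression_ss(y, x1)                 # A
ss2 = regression_ss(y, x1, x2)             # A then B
ss3 = regression_ss(y, x1, x2, x3)         # A, B, then A x B
print(round(ss1, 2), round(ss2 - ss1, 2), round(ss3 - ss2, 2))
# -> 12.25 380.25 144.0
```

The three printed increments match the 12.25, 380.25, and 144.00 of the display, and because the design is balanced the same numbers result whatever the entry order.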
6.8. SUMMARY
1. Multiple regression is used to assess the relationship between a set of explanatory variables and a continuous response variable.
2. The response variable is assumed to be normally distributed with a mean that is a linear function of the explanatory variables and a variance that is independent of the explanatory variables.
3. The explanatory variables are strictly assumed to be fixed. In practice, where this is almost never the case, the results of the multiple regression are to be interpreted conditionally on the observed values of these variables.
Display 6.13
Analysis of an Unbalanced Two-way Design by Using Multiple Regression
In this case consider the following data.

      B1                        B2
A1    23 25 27 29 30 27 23 25   26 32 30 31
A2    22 23 21 21 19 23 17      37 38 40 35 39 35 38 41 32 36 40 41 38
Three variables x1, x2, and x3 are defined as specified in Display 6.11 and used in a multiple regression of these data, with variables entered first in the order x1, followed by x2, followed by x3. This leads to the following series of results.

Step 1: x1 entered. The regression ANOVA table is as follows.

Source       SS        DF   MS
Regression   149.63    1    149.63
Residual     1505.87   30   50.19
The regression sum of squares gives the A sum of squares for an unbalanced design
(see Chapter 4).
The estimates of the regression coefficients at this stage are
μ̂ = 29.567,  α̂_1 = -2.233.
Step 2: x1 and x2 entered. The regression ANOVA table is as follows.

Source       SS        DF   MS
Regression   1180.85   2    590.42
Residual     474.65    29   16.37
(Continued)
Display 6.13
(Continued)
The increase in the regression sum of squares, that is, 1031.23, is the sum of squares that is due to B, conditional on A already being in the model, that is, B|A as encountered in Chapter 4.
The estimates of the regression coefficients at this stage are
μ̂ = 29.667,  α̂_1 = -0.341,  β̂_1 = -5.977.
Step 3: x1, x2, and x3 entered. The regression ANOVA table is as follows.

Source       SS        DF   MS
Regression   1474.25   3    491.42
Residual     181.25    28   6.47
The increase in the regression sum of squares, that is, 293.39, is the sum of squares that is due to the interaction of A and B, conditional on A and B, that is, AB|A,B.
The estimates of the regression coefficients at this stage are
μ̂ = 28.606,  α̂_1 = -0.667,  β̂_1 = -5.115,  γ̂_11 = 3.302.
Now enter the variables in the order x2 followed by x1 (adding x3 after x2 and x1 will give the same results as step 3 above).

Step 1: x2 entered. The regression ANOVA table is as follows.

Source       SS        DF   MS
Regression   1177.70   1    1177.70
Residual     477.80    30   15.93
The estimates of the regression coefficients at this stage are
μ̂ = 29.745,  β̂_1 = -6.078.
Step 2: x2 and x1 entered. The regression ANOVA table is as follows.

Source       SS        DF   MS
Regression   1180.85   2    590.42
Residual     474.65    29   16.37
(Continued)
Display 6.13
(Continued)
The estimates of the regression coefficients at this stage are
μ̂ = 29.667,  α̂_1 = -0.341,  β̂_1 = -5.977.
Note how the regression estimates for a variable alter depending on the stage at which the variable is entered into the model.
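The order dependence just noted can be demonstrated numerically with the unbalanced data of Display 6.13. In the sketch below (our code, not the book's; the cell layout follows our reading of the display), the sum of squares attributed to factor A shrinks dramatically once B is already in the model.

```python
# Order dependence of sequential sums of squares in an unbalanced
# two-way design, as in Display 6.13.
import numpy as np

a1b1 = [23, 25, 27, 29, 30, 27, 23, 25]
a1b2 = [26, 32, 30, 31]
a2b1 = [22, 23, 21, 21, 19, 23, 17]
a2b2 = [37, 38, 40, 35, 39, 35, 38, 41, 32, 36, 40, 41, 38]
y = np.array(a1b1 + a1b2 + a2b1 + a2b2, dtype=float)
x1 = np.array([1] * 12 + [-1] * 20)                      # A1 = +1, A2 = -1
x2 = np.array([1] * 8 + [-1] * 4 + [1] * 7 + [-1] * 13)  # B1 = +1, B2 = -1

def regression_ss(y, *columns):
    X = np.column_stack([np.ones(len(y))] + list(columns))
    fitted = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return np.sum((fitted - y.mean()) ** 2)

ss_a = regression_ss(y, x1)                              # A entered first
ss_a_given_b = regression_ss(y, x2, x1) - regression_ss(y, x2)  # A after B
print(round(ss_a, 2), round(ss_a_given_b, 2))
```

A entered first accounts for about 149.6, but entered after B it accounts for only about 3.2; in a balanced design the two numbers would be identical.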
4. It may be possible to find a more parsimonious model for the data, that is,
one with fewer explanatory variables, by using all subsets regression or one
of the stepping methods. Care is required when the latter is used as implemented in a statistical package.
5. An extremely important aspect of a regression analysis is the inspection of a
number of regression diagnostics in a bid to identify any departures from
assumptions, outliers, and so on.
6. The multiple linear regression model and the analysis of variance models
described in earlier chapters are equivalent.
COMPUTER HINTS
SPSS
To conduct a multiple regression analysis with one set of explanatory variables,
use the following basic steps.
1. Click Statistics, click Regression, and click Linear.
2. Click the dependent variable from the variable list and move it to the
Dependent box.
3. Hold down the ctrl key, and click on the explanatory variables and move
them to the Independent box.
4. Click Statistics, and click Descriptives.
S-PLUS
In S-PLUS, regression can be used by means of the Statistics menu as follows.
1. Click Statistics, click Regression, and click Linear to get the Multiple
Regression dialog box.
2. Select dependent variable and explanatory variables, or define the regression
of interest in the Formula box.
3. Click on the Plot tag to select residual plots, and so on.
Multiple regression can also be applied by using the command language, with the lm function and a formula specifying what is to be regressed on what. For example, if the crime in the USA data in the text are stored in a data frame, crime, with variable names as in the text, the regression can be carried out by calling lm with the appropriate formula.
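For readers working in Python rather than S-PLUS, a rough analogue of such an lm call can be sketched with NumPy's least squares. The variable names follow the final model of Table 6.14, but the data values below are randomly generated placeholders, not the real crime data.

```python
# Rough Python analogue of an lm fit; data values are invented
# placeholders, and the sample size is chosen arbitrarily.
import numpy as np

rng = np.random.default_rng(2)
n = 47
age, ex1, ed, u2 = (rng.normal(size=n) for _ in range(4))
crime = 5 + age + 2 * ex1 + rng.normal(size=n)   # fake response values

X = np.column_stack([np.ones(n), age, ex1, ed, u2])
beta, *_ = np.linalg.lstsq(X, crime, rcond=None)
print(beta.shape)   # -> (5,)
```

The coefficient vector contains the intercept followed by one slope per explanatory variable, mirroring what lm would report.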
EXERCISES
6.1. Explore other possible models for the vocabulary data. Possibilities are to
include a quadratic age term or to model the log of vocabulary size.
6.2. Examine the four data sets shown in Table 6.16. Also show that the estimates of the slope and intercept parameters in a simple linear regression are the
TABLE 6.16
Four Hypothetical Data Sets, Each Containing 11 Observations for Two Variables

Observation   x (Sets 1-3)   y (Set 1)   y (Set 2)   y (Set 3)   x (Set 4)   y (Set 4)
1             10.0           8.04        9.14        7.46        8.0         6.58
2             8.0            6.95        8.14        6.77        8.0         5.76
3             13.0           7.58        8.74        12.74       8.0         7.71
4             9.0            8.81        8.77        7.11        8.0         8.84
5             11.0           8.33        9.26        7.81        8.0         8.47
6             14.0           9.96        8.10        8.84        8.0         7.04
7             6.0            7.24        6.13        6.08        8.0         5.25
8             4.0            4.26        3.10        5.39        19.0        12.50
9             12.0           10.84       9.13        8.15        8.0         5.56
10            7.0            4.82        7.26        6.42        8.0         7.91
11            5.0            5.68        4.74        5.73        8.0         6.89
TABLE 6.17
Memory Retention

t (min)   p
1         0.84
5         0.71
15        0.61
30        0.56
60        0.54
120       0.47
240       0.45
480       0.38
720       0.36
1440      0.26
2880      0.20
5760      0.16
10080     0.08
TABLE 6.18
Marriage and Divorce Rates per 1000 per Year for 14 Countries

Marriage Rate   Divorce Rate
5.6             2.0
6.0             3.0
5.1             2.9
5.0             1.9
6.7             2.0
6.3             2.4
5.4             0.4
6.1             1.9
4.9             2.2
6.8             1.3
5.2             2.2
6.8             2.0
6.1             2.9
9.7             4.8
TABLE 6.19
Quality of Children's Testimonies

Age   Gender   Location   Coherence   Maturity   Delay   Prosecute   Quality
5-6   Male     3          3.81        3.62       45      No          34.11
5-6   Female   2          1.63        1.61       27      Yes         36.59
5-6   Male     1          3.54        3.63       102     No          37.23
5-6   Female   2          4.21        4.11       39      No          39.65
5-6   Male     3          3.30        3.12       41      No          42.07
5-6   Female   3          2.32        2.13       70      Yes         44.91
5-6   Female   4          4.51        4.31       72      No          45.23
5-6   Female   2          3.18        3.08       41      No          47.53
8-9   Male     3          3.02        3.00       71      No          54.64
8-9   Female   2          2.77        2.71       56      Yes         57.87
8-9   Male     3          3.35        3.07       88      Yes         57.07
5-6   Male     1          2.66        2.72       13      No          45.81
5-6   Female   3          4.70        4.98       29      No          49.38
5-6   Male     2          4.31        4.21       39      Yes         49.53
8-9   Female   2          2.16        2.91       10      No          67.08
8-9   Male     4          1.89        1.87       15      Yes         83.15
8-9   Female   2          1.94        1.99       46      Yes         80.67
8-9   Female   3          2.86        2.93       57      No          78.47
5-6   Female   4          3.11        3.01       26      Yes         77.59
8-9   Male     2          2.90        2.87       14      No          76.28
8-9   Male     4          2.41        2.38       45      No          59.64
8-9   Male     4          2.32        2.33       19      Yes         68.44
5-6   Male     4          2.78        2.79       9       Yes         65.07
same for each set. Find the value of the multiple correlation coefficient for each
data set. (This example illustrates the dangers of blindly fitting a regression model
without the use of some type of regression diagnostic.)
6.3. Plot some suitable diagnostic graphics for the four data sets in Table 6.16.
6.4. The data shown in Table 6.17 give the average percentage of memory retention, p, measured against passing time, t (minutes). The measurements were
taken five times during the first hour after subjects memorized a list of disconnected
items, and then at various times up to a week later. Plot the data (after a suitable
transformation if necessary) and investigate the relationship between retention and
time by using a suitable regression model.
6.5. As mentioned in Chapter 3, ANOVA models are usually presented in overparameterized form. For example, in a one-way analysis of variance, the constraint Σ_{i=1}^{k} α_i = 0 is often introduced to overcome the problem. However, the overparameterization in the one-way ANOVA can also be dealt with by setting one of the α_i equal to zero. Carry out a multiple regression of the fruit fly data that is equivalent to the ANOVA model with α_3 = 0. In terms of group means, what are the parameter estimates in this case?
6.6. For the final model used for the crime rate data, examine plots of the
diagonal elements of the hat matrix against observation number, and produce a
similar index plot for the values of Cook's distance statistic. Do these plots suggest
any amendments to the analysis?
6.7. Table 6.18 shows marriage and divorce rates (per 1000 population per year) for 14 countries. Derive the linear regression equation of divorce rate on marriage rate and show the fitted line on a scatterplot of the data. On the basis of the regression line, predict the divorce rate for a country with a marriage rate of 8 per 1000 and also for a country with a marriage rate of 14 per 1000. How much conviction do you have in each prediction?
6.8. The data shown in Table 6.19 are based on those collected in a study of
the quality of statements elicited from young children. The variables are statement
quality; child's gender, age, and maturity; how coherently the child gave evidence;
the delay between witnessing the incident and recounting it; the location of the
interview (the child's home, school, a formal interviewing room, or an interview
room specially constructed for children); and whether or not the case proceeded
to prosecution. Carry out a complete regression analysis on these data to see how
statement quality depends on the other variables, including selecting the best subset
of explanatory variables and examining residuals and other regression diagnostics.
Pay careful attention to how the categorical explanatory variables with more than
two categories are coded.
Analysis of Longitudinal
Data
7.1.
INTRODUCTION
CHAPTER 7
on one very simple approach and one more complex, regression-based modeling procedure.
However, first the question of why the observations for each occasion should not be separately analyzed has to be addressed. When two groups of subjects are being compared, as with the salsolinol data in Chapter 5 (see Table 5.2), this occasion-by-occasion approach would involve a series of t tests. When more than two groups are present, a series of one-way analyses of variance would be required. (Alternatively, some distribution-free equivalent might be used; see Chapter 8.) The procedure is straightforward but has a number of serious flaws and weaknesses. The first is that the series of tests performed are not independent of one another, making interpretation difficult. For example, a succession of marginally significant group mean differences might be collectively compelling if the repeated measurements are only weakly correlated, but much less convincing if there is a pattern of strong correlations. The occasion-by-occasion procedure also assumes that each repeat measurement is of separate interest in its own right. This is unlikely in general, because the real concern is likely to involve something more global; in particular, the series of separate tests provides little information about the longitudinal development of the mean response profiles. In addition, the separate significance tests do not give an overall answer to whether or not there is a group difference and, of particular importance, do not provide a useful estimate of the overall treatment effect.
7.2. RESPONSE FEATURE ANALYSIS: THE USE OF SUMMARY MEASURES

A relatively straightforward approach to the analysis of longitudinal data is to first transform the T (say) repeated measurements for each subject into a single number considered to capture some important aspect of a subject's response profile; that is, for each subject, calculate S given by
S = f(x_1, x_2, ..., x_T),
where x_1, x_2, ..., x_T are the repeated measures for the subject and f represents the chosen summary function. The chosen summary measure S has to be decided on before the analysis of the data begins and should, of course, be relevant to the particular questions that are of interest in the study. Commonly used summary measures are (1) overall mean, (2) maximum (minimum) value, (3) time to maximum (minimum) response, (4) slope of regression line of response on time, and (5) time to reach a particular value (e.g., a fixed percentage of baseline).
Having identified a suitable summary measure, the analysis of differences in the levels of the between subject factor(s) is reduced to a simple t test (two groups) or an analysis of variance (more than two groups or more than a single between subject factor).
day 1?
2. How should the missing observations be dealt with?
We shall leave aside the first of these questions for the moment and concentrate on only the nine postdiet recordings. The simplest answer to the missing values problem is just to remove rats with such values from the analysis. This would leave 11 of the original 16 rats for analysis. This approach may be simple but it is not good! Using it in this case would mean an almost 33% reduction in sample size, clearly very unsatisfactory.
An alternative to simply removing rats with any missing values is to calculate the chosen summary measure from the available measurements on a rat. In this way, rats with different numbers of repeated measurements can all contribute to the analysis. Adopting this approach, and again using the mean as the chosen summary measure, leads to the values in Table 7.3 on which to base the analysis. The results of a one-way analysis of variance of these measures, and the Bonferroni
TABLE 7.1

Group 1 (moderate)
Subject   Average Response
1         0.06
2         0.19
3         0.19
4         -0.02
5         -0.08
6         0.17

Group 2 (severe)
Subject   Average Response
1         -0.05
2         0.28
3         0.52
4         0.20
5         0.05
6         0.41
7         0.37
8         0.06

Means and Standard Deviations
Group      n   Mean     SD
Moderate   6   0.0858   0.1186
Severe     8   0.2301   0.1980

Note. Log to base 10 is used. The 95% confidence interval for the difference between the two groups is
multiple comparison tests, are given in Table 7.4. (The Scheffé procedure gives almost identical results for these data.) Clearly the mean bodyweight of Group 1 differs from the mean bodyweight of Groups 2 and 3, but the means for the latter two do not differ.
(It should be noted here that when the number of available repeated measures differs from subject to subject in a longitudinal study, the calculated summary
TABLE 7.2
Bodyweights of Rats (grams)

Group   Day
50
57
64
262
258
240
240
266
243
261
270
265
238
264
274
276
284
282
273
456
415
462
612
507
543
553
550
272
247
268
273
278
219
218
245
269
215
280
28l
284
278
478
4%
472
628
525
559
548
IS
29
22
240
225
250
230
250
255
260
265
275
255
415
420
445
560
465
525
525
510
255
230
260
250
NA
255
265
270
275
270
268
428
440
262
265
270
275
265
268
273
271
274
265
443
460
455
591
493
255
1
1
43
1
1
36
245
2
2
2
2
3
3
3
3
255
255
270
260
260
425
430
450
565
475
530
530
520
NA
580
485
533
540
NA
NA
270
438
448
455
590
481
535
545
530
540
546
538
NA
278
276
265
442
458
45 1
595
493
525
538
535
NA
274
468
484
466
618
518
544
555
553
NA
measures are likely to have differing precisions. The analysis should then, ideally, take this into account by using some form of weighting. This was not attempted in the rat data example because it involves techniques outside the scope of this text. However, for those readers courageous enough to tackle a little mathematics, a description of a statistically more satisfactory procedure is given in Gornbein, Lazaro, and Little, 1992.)
Having addressed the missing value problem, we now need to consider if and how to incorporate the prediet bodyweight recording (known generally as a baseline measurement) into an analysis. Such a pretreatment value can be used in association with the response feature approach in a number of ways. For example, if the average response over time is the chosen summary (as in the two examples above), there are three possible methods of analysis (Frison and Pocock, 1992).
1. POST: an analysis that uses only the mean of the posttreatment measurements, ignoring the baseline value.
2. CHANGE: an analysis of the difference between the posttreatment mean and the baseline value.
3. ANCOVA: an analysis of covariance of the posttreatment mean, using the baseline value as a covariate.
TABLE 7.3

Group     Rat   Ave. Bodyweight
Group 1   1     262.9
Group 1   2     239.2
Group 1   3     261.1
Group 1   4     266.9
Group 1   5     270.6
Group 1   6     276.0
Group 1   7     272.1
Group 1   8     267.8
Group 2   1     444.1
Group 2   2     457.4
Group 2   3     456.9
Group 2   4     593.9
Group 3   1     495.4
Group 3   2     537.7
Group 3   3     542.7
Group 3   4     534.7
TABLE 7.4
One-way ANOVA and Bonferroni Comparisons for the Rat Data

Source   DF   p
Group    2    <.001
Error    13

Comparison   Estimate   SE     Lower Bound   Upper Bound
g1-g2        -223.0     22.4   -285          -161
g1-g3        -263.0     22.4   -324          -201
g2-g3        -39.5      25.9   -111          31.4
FIG. 7.2. Power curves comparing POST, CHANGE, and ANCOVA (rho = 0.6, alpha = 0.05); sample size on the horizontal axis.
not the results of the different diets. A telling question that might be asked here is, why were the three diets tried on animals some of which were markedly lighter than the rest?
The response feature method has a number of advantages for the analysis of longitudinal data. Three given by Matthews (1993) are as follows.
TABLE 7.5
CHANGE and ANCOVA for the Rat Data

CHANGE
Source   SS     DF   MS
Group    1098   2    549.1
Error    1659   13   127.6

ANCOVA
Source         SS         DF   MS         F         p
Pretreatment   254389.0   1    254389.0   1972.01   <.0001
Group          912        2    456.0
Error          1550       12   129
FIG. 7.3. Power curves comparing POST, CHANGE, and ANCOVA (rho = 0.9, alpha = 0.05); sample size on the horizontal axis.
7.3. RANDOM EFFECTS MODELS FOR LONGITUDINAL DATA

A detailed analysis of longitudinal data requires consideration of models that represent both the level and the slope of a group's profile of repeated measurements, and that also account adequately for the observed pattern of dependences in those measurements. Such models are now widely available, but confusingly they are often referred to by different names, including, for example, random growth curve models, multilevel models, random effects models, and hierarchical models. Such models have witnessed a huge increase in interest in recent years, and there is now a considerable literature surrounding them. Here we shall try to give only the flavor of the possibilities, by examining one relatively straightforward example. Fuller accounts of this increasingly important area of statistics are available in Brown and Prescott (1999) and Everitt and Pickles (2000).
Alzheimer's
Three graphical displays of the data in Table 7.6 are shown in Figures 7.4, 7.5, and 7.6. In the first, the raw data for each patient are plotted, and they are labeled with the group to which they have been assigned. There is considerable variability
TABLE 7.6

Group   Visit
l
1
1
1
1
U)
15
12
14
13
13
12
5
5
7
4
14
7
LO
7
11
7
16
9
6
14
12
9
12
11
3
17
1
1
l
l
11
11
17
2
12
l5
1
1
10
16
0
7
l
11
16
1
1
1
1
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
2
10
16
15
1
1
2
7
2
7
19
7
13
9
6
11
7
8
3
4
l1
1
6
0
3
13
5
12
7
I8
10
11
10
7
8
9
14
12
4
7
1
7
14
6
14
8
16
10
14
12
8
l2
18
10
3
10
11
10
3
7
3
l8
6
9
4
4
15
14
9
3
13
11
220
12
11
19
3
18
15
10
10
7
18
10
1
1
1
1
10
9
9
10
2
7
3
19
l5
16
7
13
143
13
3
8
3
10
7
7
0
6
2
5
8
5
5
9
5
9
9
0
4
10
7
8
6
6
5
12
10
8
17
15
21
14
9
14
12
19
7
17
I5
4
9
4
5
6
18
16
21
l5
12
16
14
22
8
18
16
5
10
22
l8
22
17
9
16
7
16
19
17
19
10
20
9
19
21
FIG. 7.4. Raw data for each patient plotted against visit.
FIG. 7.5.
FIG. 7.6. Box plots of the data in Table 7.6.
y_ijk = β_0 + β_1 Visit_j + β_2 Group_i + ε_ijk.   (7.2)
FIG. 7.7. Scatterplot matrix of the cognitive scores on the five visits of the lecithin trial, showing treatment group (0, placebo; 1, lecithin).
TABLE 7.7

Parameter         Estimate   SE      t       p
β_0 (Intercept)   6.93       0.802   8.640   <.001
β_1 (Visit)       0.494      0.224   2.199   .029
β_2 (Group)       3.056      0.636   4.803   <.001
that this model takes no account of the real structure of the data; in particular, the observations made at each visit on a subject are assumed independent of each other.
The results of fitting the model specified in Eq. (7.2) to the data in Table 7.6 are given in Table 7.7. The results suggest both a significant Visit effect and a significant Group effect.
Figure 7.7 clearly demonstrates that the repeated measurements in the lecithin trial have varying degrees of association, so that the model in Eq. (7.2) is quite unrealistic. How can the model be improved? One clue is provided by the plot of the confidence intervals for each patient's estimated intercept and slope in a regression of his or her cognitive score on Visit, shown in Figure 7.8. This suggests that certainly the intercepts and possibly also the slopes of the patients vary widely. A suitable way to model such variability is to introduce random effect terms, and in Display 7.1 a model where this is done for the intercepts only is described. The implications of this model for the relationship between the repeated measurements are also described in Display 7.1. The results given there show that although introduced in a somewhat different manner, the model in Display 7.1 is essentially nothing more than the repeated measures ANOVA model introduced in Chapter 5, but now written in regression terms for the multiple response variables, that is, the repeated measures. In particular, we see that the model in Display 7.1 allows only for a compound symmetry pattern of correlations for the repeated measurements.
The results of fitting the model in Display 7.1 to the lecithin data are given in Table 7.8. Note that the standard error for the Group regression coefficient is greater in the random effects model than in the multiple regression model (see Table 7.7), and the reverse is the case for the regression coefficient of Visit. This arises because the model used to obtain the results in Table 7.7,
FIG. 7.8. Confidence intervals for each patient's estimated intercept and slope from a regression of cognitive score on Visit.
Eq. (7.2), ignores the repeated measures aspect of the data and incorrectly combines the between group and the within group variations in the residual standard error.
The model in Display 7.1, implying as it does the compound symmetry of the correlations between the cognitive scores on the five visits, is unlikely to capture the true correlation pattern suggested by Figure 7.7. One obvious extension of the model is to allow for a possible random effect of slope as well as of intercept, and such a model is described in Display 7.2. Note that this model allows for a more complex correlation pattern between the repeated measurements. The results of fitting this new model are given in Table 7.9.
Random effects models can be compared in a variety of ways, one of which
is described in Display 7.3. For the two random effects models considered above,
the AIC (Akaike information criterion) values are as follows.
Random intercepts only, AIC = 1282;
Random intercepts and random slopes, AIC = 1197.
Display 7.1
Simple Random Effects Model for the Lecithin Trial Data

The model is an extension of the multiple regression model in Eq. (7.2) to include a random intercept term for each subject:
y_ijk = (β_0 + a_k) + β_1 Visit_j + β_2 Group_i + ε_ijk,
where the a_k are random effects that model the shift in the intercept for each subject, which, because there is a fixed change for visit, are preserved for all values of visit.
The a_k are assumed to be normally distributed with mean zero and variance σ_a².
All other terms in the model are as in Eq. (7.2).
Between subject variability in the intercepts is now modeled explicitly.
The model can be written in matrix notation as
y_k = X_k β + Z a_k + ε_k,
where β' = [β_0, β_1, β_2], Z is a vector of five ones, X_k is the 5 x 3 matrix whose jth row is (1, j, Group_k), and ε_k' = [ε_1k, ..., ε_5k].
The presence of the random effect that is common to each visit's observations for subject k allows the repeated measurements to have a covariance matrix of the following form:
Σ = Z σ_a² Z' + σ² I.
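The compound symmetry claim at the end of Display 7.1 can be verified numerically: with Z a vector of ones, every pair of visits has the same correlation. The variance values in the sketch below are arbitrary illustrations, not estimates from the lecithin data.

```python
# Covariance implied by the random intercept model:
# Sigma = Z sigma_a^2 Z' + sigma^2 I, giving equal correlations.
import numpy as np

sigma_a2, sigma2, T = 4.0, 1.0, 5      # illustrative variances, 5 visits
Z = np.ones((T, 1))
Sigma = sigma_a2 * Z @ Z.T + sigma2 * np.eye(T)
d = np.sqrt(np.diag(Sigma))
corr = Sigma / np.outer(d, d)
print(np.round(corr[0, 1], 3), np.round(corr[2, 4], 3))   # -> 0.8 0.8
```

Every off-diagonal correlation equals sigma_a^2 / (sigma_a^2 + sigma^2), here 4/5, whatever the pair of visits, which is exactly the compound symmetry pattern.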
TABLE 7.8
Results of Fitting the Model Described in Display 7.1 to the Data in Table 7.6

Parameter         Estimate   SE      t      p
β_0 (Intercept)   6.927      0.916   7.56   <.001
β_1 (Visit)       0.494      0.133   3.70   <.001
β_2 (Group)       3.055      1.205   2.54   .015
Display 7.2
Random Intercept, Random Slope Model for the Lecithin Trial Data

The model in this case is
y_ijk = (β_0 + a_k) + (β_1 + b_k) Visit_j + β_2 Group_i + ε_ijk.
A further random effect has been added compared with the model outlined in Display 7.1. The term b_k shifts the slope for each subject.
The b_k are assumed to be normally distributed with mean zero and variance σ_b².
The random effects are not necessarily assumed to be independent. A covariance parameter, σ_ab, is allowed in the model.
All other terms in the model are as in Display 7.1.
Between subject variability in both the intercepts and slopes is now modeled explicitly.
Again, the model can be written in matrix notation as
y_k = X_k β + Z b_k + ε_k,
where now Z has two columns, a column of ones and a column of the visit values, and
b_k' = [a_k, b_k].
The random effects are assumed to be from a bivariate normal distribution with both means zero and covariance matrix, Ψ, given by
Ψ = [σ_a²  σ_ab; σ_ab  σ_b²].
The model implies the following covariance matrix for the repeated measures:
Σ = Z Ψ Z' + σ² I.
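By contrast, the covariance matrix implied by Display 7.2 no longer has equal correlations. The sketch below builds Sigma = Z Psi Z' + sigma^2 I for illustrative (not estimated) parameter values and shows that the correlations now differ across visit pairs.

```python
# Covariance implied by the random intercept and slope model.
# Psi entries are arbitrary illustrations, not the lecithin estimates.
import numpy as np

T = 5
Z = np.column_stack([np.ones(T), np.arange(1, T + 1)])  # ones, visit number
Psi = np.array([[4.0, 0.5],
                [0.5, 0.3]])   # var(a), cov(a, b); cov(a, b), var(b)
sigma2 = 1.0
Sigma = Z @ Psi @ Z.T + sigma2 * np.eye(T)
d = np.sqrt(np.diag(Sigma))
corr = Sigma / np.outer(d, d)
print(round(float(corr[0, 1]), 3), round(float(corr[3, 4]), 3))
# -> 0.849 0.933
```

Adjacent early visits and adjacent late visits now have different correlations, so this model can accommodate the increasing association visible in Figure 7.7 that compound symmetry cannot.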
TABLE 7.9
Results of Fitting the Model Described in Display 7.2 to the Data in Table 7.6

Parameter         Estimate   SE      t       p
β_0 (Intercept)   6.591      1.107   5.954   <.001
β_1 (Visit)       0.494      0.226   2.185   .030
β_2 (Group)       3.774      1.203   3.137   .003

σ̂_a = 6.22,  σ̂_b = 1.43,  σ̂_ab = -0.77,  σ̂ = 1.76.
Display 7.3
Comparing Random Effects Models for Longitudinal Data

Models can be compared by using Akaike's information criterion (AIC), given by
AIC = -2 log L + 2 n_p,
where log L is the maximized log-likelihood of the model and n_p is the number of parameters; models with lower AIC values are to be preferred.
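The AIC comparison is simple arithmetic once the maximized log-likelihoods are known. In the sketch below the log-likelihood values and parameter counts are stand-ins chosen only to reproduce the two AIC values quoted in the text; they are not the actual lecithin fits.

```python
# AIC as in Display 7.3: -2 log L + 2 n_p; smaller is better.
# Log-likelihoods and parameter counts are illustrative stand-ins.
def aic(log_likelihood, n_parameters):
    return -2.0 * log_likelihood + 2.0 * n_parameters

simple = aic(-636.0, 5)    # e.g., random intercepts only
richer = aic(-591.5, 7)    # e.g., random intercepts and slopes
print(simple, richer, richer < simple)   # -> 1282.0 1197.0 True
```

The extra variance and covariance parameters of the richer model are worth their cost here, exactly the conclusion drawn in the text from the values 1282 and 1197.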
It appears that the model in Display 7.2 is to be preferred for these data. This conclusion is reinforced by examination of Figures 7.9 and 7.10, the first of which shows the fitted regressions for each individual by using the random intercept model, and the second of which shows the corresponding predictions from the random intercepts and slopes model. Clearly, the first model does not capture the varying slopes of the regressions of an individual's cognitive score on the covariate visit. The number of words recalled in the lecithin treated group is about 4 more than in the placebo group; the 95% confidence interval is (1.37, 6.18).
As with the multiple regression model considered in Chapter 6, the fitting of random effects models to longitudinal data has to be followed by an examination of residuals to check for violations of assumptions. The situation is more complex now, however, because of the nature of the data: several residuals are available for each subject. Thus here the plot of standardized residuals versus fitted values from the random intercepts and slopes model given in Figure 7.11 includes a panel for each subject in the study. The plot does not appear to contain any
particularly disturbing features that might give cause for concern with the fitted model.
It may also be worth looking at the distributional properties of the estimated random effects from the model (such estimates are available from suitable random effects software such as that available in S-PLUS). Normal probability plots of the random intercept and random slope terms for the second model fitted to the lecithin data are shown in Figure 7.12. Again, there seem to be no particularly worrying departures from linearity in these plots.
7.4. SUMMARY
1. Longitudinal data are common in many areas, including the behavioral sciences, and they require particular care in their analysis.
2. The response feature, summary measure approach provides a simple, but not simplistic, procedure for analysis that can accommodate irregularly spaced observations and missing values.
3. Pretreatment values, if available, can be incorporated into a summary measures analysis in a number of ways, of which being used as a covariate in an ANCOVA is preferable.
4. The summary measure approach to the analysis of longitudinal data cannot give answers to questions about the development of the response over time or about whether this development differs in the levels of any between subject factors.
5. A more detailed analysis of longitudinal data requires the use of suitable models such as the random effects models briefly described in the chapter. Such models have been extensively developed over the past 5 years or so and are now routinely available in a number of statistical software packages.
6. In this chapter the models have been considered only for continuous, normally distributed responses, but they can be extended to deal with other types of response variable, for example, those that are categorical (see Brown and Prescott, 1999, for details).
COMPUTER HINTS
S-PLUS
In S-PLUS random effects models are available through the Statistics menu,
by
means of the following route.
1. Click Statistics, click Mixed effects, and click Linear, and the Mixed Effects dialogbox appears.
2. Specify data set and details
of the model thatis to be fitted.
lecit.fit1 <- lme(Cognitive ~ Visit + Group, method = "ML", random = ~1 | Subject, data = lecit)
lecit.fit2 <- lme(Cognitive ~ Visit + Group, method = "ML", random = ~Visit | Subject, data = lecit)
Many graphics are available for examining fits, andso on. Several have been
used in the text. For example, to obtain the plots of predicted values from the
two models fitted (see Figures 7.9 and 7.10), we can use the following commands.
plot(augPred(lecit.fit1), aspect = "xy", grid = T),
plot(augPred(lecit.fit2), aspect = "xy", grid = T).
EXERCISES
7.1. Apply the summary measure approach to the lecithin data in Table 7.6,
using as your summary measure the maximum number of words remembered
on
any occasion.
7.2. Fit a random effects model with a random intercept and a random slope, and suitable fixed effects to the rat data in Table 7.2. Take care over coding the

7.3. The data in Table 7.10 arise from a trial of estrogen patches in the treatment of postnatal depression. Women who had suffered an episode of postnatal depression were randomly allocated to two groups; the members of one group received an estrogen patch, and the members of the other group received a dummy patch (the placebo). The dependent variable was a composite measure of depression, which was recorded on two occasions prior to randomization and for each of 4 months posttreatment. A number of observations are missing (indicated by −9).
TABLE 7.10
Estrogen Patch Trial Data
Visit
Baseline
Group
17 18
26 25
0
0
26
0
0
0
200
160
0
280
0
1
1
1
1
1
1
1
1
1
1
19
24
19
18
27
16
17
15
27
18
21
27
25
21
27
28
24
24
21
14
15
17
20
18
28
21 6
13
14
15
9
7
8
I1
15
17
18
-9
-9
13
4
9
9
-9
8
7
-9
23
10
11
13
27
24
15
17
21
18
17
17 14
12
28
24
19
18
1723
19
13
26
22
28
24
12
12
17
12
14
16
5
19
15
9
15
10
13
l1
7
8
14
12
9
10
12
14
6
12
2
2
6
7.5. Fit a model to the lecithin data that includes random effects for both the intercept and slope of the regression of cognitive score on Visit and also includes fixed effects for Group and Group × Visit interaction.

7.6. For the random intercept and random slope model for the lecithin data described in Display 7.2, find the terms in the implied covariance matrix of the repeated measures, explicitly in terms of the parameters σ, σ_a, σ_b, and σ_ab.
Distribution-Free
and Computationally
Intensive Methods
8.1.
INTRODUCTION
The statistical procedures described in the previous chapters have relied in the main for their validity on an underlying assumption of distributional normality. How well these procedures operate outside the confines of this normality constraint varies from setting to setting. Many people believe that, in most situations, normal based methods are sufficiently powerful to make consideration of alternatives unnecessary. They may have a point; t tests and F tests, for example, have been shown to be relatively robust to departures from the normality assumption. Nevertheless, alternatives to normal based methods have been proposed, and it is important for psychologists to be aware of these, because many are widely quoted in the psychological literature. One class of tests that do not rely on the normality assumptions are usually referred to as nonparametric or distribution free. (The two labels are almost synonymous, and we shall use the latter here.) Distribution-free procedures are generally based on the ranks of the raw data and are usually valid over a large class of distributions for the data, although they do often assume distributional symmetry. Although slightly less efficient than their normal theory competitors when the underlying populations are normal, they are
CHAPTER 8
8.2. THE
WILCOXON-MANN-WHITNEY
TEST AND WILCOXON'S SIGNED
RANKS TEST
First, let us deal with the question of what's in a name. The statistical literature refers to two equivalent tests, formulated in different ways, as the Wilcoxon rank sum test and the Mann-Whitney test. The two names arise because of the independent development of the two equivalent tests by Wilcoxon (1945) and by Mann and Whitney (1947). In both cases, the authors' aim was to come up with a distribution-free alternative to the independent samples t test.
The main points to remember about the Wilcoxon-Mann-Whitney test are as follows.
Display 8.1
Wilcoxon-Mann-Whitney Test
Interest lies in testing the null hypothesis that two populations have the same probability distribution, but the common distribution is not specified.
The alternative hypothesis is that the population distributions differ in location.
We assume that a sample of n1 observations, x1, x2, ..., x_n1, are available from the first population and a sample of n2 observations, y1, y2, ..., y_n2, from the second population.
The combined sample of n1 + n2 observations are ranked, and the test statistic, S, is the sum of the ranks of the observations from one of the samples (generally the lower of the two rank sums is used).
If both samples come from the same population, a mixture of low, medium, and high ranks is to be expected in each. If, however, the alternative hypothesis is true, one of the samples would be expected to have lower ranks than the other.
For small samples, tables giving p values are available in Hollander and Wolfe (1999), although the distribution of the test statistic can also now be found directly by using the permutational approach described in Section 8.6.
A large sample approximation is also available for the Wilcoxon-Mann-Whitney test, which is suitable when n1 and n2 are both greater than 15. Under the null hypothesis the statistic z given by
  z = [S − n1(n1 + n2 + 1)/2] / [n1 n2 (n1 + n2 + 1)/12]^{1/2}
has approximately a standard normal distribution.
As our first illustration of the application of the test, we shall use the data shown
in Table 8.1, adapted from Howell (1992). These data give the number of recent
stressful life events reported by a group of cardiac patients in a local hospital and
a control group of orthopedic patients in the same hospital. The result of applying
the Wilcoxon-Mann-Whitney test to these data is a rank sum statistic of 21 with
an associated p value of 0.13. There is no evidence that the number of stressful
life events suffered by the two types of patient have different distributions.
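The rank sum and its large-sample z of Display 8.1 are straightforward to compute. A short Python sketch (the book's own computations use S-PLUS; the sample data here are illustrative, not the Table 8.1 values):

```python
import math

def midranks(values):
    """Ranks, with tied values receiving the average (mid) rank."""
    return [sum(w < v for w in values) + (sum(w == v for w in values) + 1) / 2
            for v in values]

def wmw(x, y):
    """Rank sum S for sample x, and the large-sample z of Display 8.1."""
    n1, n2 = len(x), len(y)
    ranks = midranks(list(x) + list(y))  # rank the combined sample
    S = sum(ranks[:n1])
    mean = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return S, (S - mean) / sd

# Illustrative data: every x observation below every y observation.
S, z = wmw([1, 2, 3], [4, 5, 6])
```

For samples this small the z approximation is poor, of course; an exact p value would come from tables or the permutational approach of Section 8.6.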
To further illustrate the use of the Wilcoxon-Mann-Whitney test, we shall
apply it to the data shown in Table 8.2. These data arise from a study in which
the relationship between child-rearing practices and customs related to illness in
several nonliterate cultures was examined. On the basis of ethnographical reports,
39 societies were each given a rating for the degree of typical oral socialization anxiety, a concept derived from psychoanalytic theory relating to the severity and
rapidity of oral socialization in child rearing. For each of the societies, a judgment
TABLE 8.1
Cardiac Patients (C): 32, 8, 7, 29, 5, 0
Orthopedic Patients (O): 1, 2, 4, 3, 6

Observation   0   1     2     3     4     5   6     7   8   29   32
Rank          1   2(O)  3(O)  4(O)  5(O)  6   7(O)  8   9   10   11

Note. Sum of O ranks is 21.
TABLE 8.2
was also made (by an independent set of judges) of whether oral explanations of illness were present. Interest centers on assessing whether oral explanations of illness are more likely to be present in societies with high levels of oral socialization anxiety. This translates into whether oral socialization anxiety scores differ in location in the two types of societies. The result of the normal approximation test described in Display 8.1 is a Z value of −3.35 and an associated p value of .0008.
Display 8.2
Estimating the Location Difference for the Wilcoxon-Mann-Whitney Test

An estimate of the location difference, Δ, is given by the median of the n1 × n2 differences between pairs of observations, one observation taken from each sample.
To construct a confidence interval, first find C_α = n1 n2 / 2 + 1 − w_{α/2}, where w_{α/2} is the upper α/2 point of the null distribution of the test statistic.
The 1 − α confidence interval (Δ_L, Δ_U) is then found from the values in the C_α (Δ_L) and the n1 n2 + 1 − C_α (Δ_U) positions in the ordered sample differences.
TABLE 8.3
Estimate of Location Difference and ConfidenceInterval Construction for Oral
Socialization Data
Differences Between Pairs of Observations, One Observation from Each Sample
(ordered array of all pairwise differences)
z = [T+ − n(n + 1)/4] / [n(n + 1)(2n + 1)/24]^{1/2}
TABLE 8.4

Subject   Electrode 1   Electrode 2
1         500           107
2         660           600
3         400           370
4         250           140
5         72            300
6         135           84
7         27            50
8         100           180
9         105           180
10        90            290
11        200           45
12        15            200
13        160           400
14        250           310
15        170           1000
16        66            48
Display 8.4
Estimator and Confidence Interval Associated with Wilcoxon's Signed-Rank Statistic

An estimator of the treatment effect, θ, is the median of the n(n + 1)/2 averages of pairs of the n differences on which the statistic T+ is based, that is, the averages (z_i + z_j)/2 for i ≤ j = 1, ..., n.
For a symmetric two-sided confidence interval for θ, with confidence level 1 − α, we first find the upper α/2 percentile point, t_{α/2}, of the null distribution of T+ from Table A.4 in Hollander and Wolfe (1999) or from a permutational computation.
Then set C_α = n(n + 1)/2 + 1 − t_{α/2}.
The lower and upper limits of the required confidence interval are now found as the values in the C_α and n(n + 1)/2 + 1 − C_α positions of the ordered averages of pairs of differences.
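Both T+ and the median-of-pair-averages estimator are easy to compute directly. A small Python sketch (illustrative; the book's own computations use S-PLUS, and `diffs` here is any list of paired differences):

```python
from statistics import median

def signed_rank(diffs):
    """Wilcoxon's T+ (sum of the ranks of the positive differences) and
    the Hodges-Lehmann style estimate: the median of the n(n+1)/2
    averages (z_i + z_j)/2 of pairs of differences."""
    nz = [d for d in diffs if d != 0]          # zero differences are dropped
    abs_d = [abs(d) for d in nz]
    # midranks of the absolute differences
    ranks = [sum(a < x for a in abs_d) + (sum(a == x for a in abs_d) + 1) / 2
             for x in abs_d]
    t_plus = sum(r for r, d in zip(ranks, nz) if d > 0)
    n = len(diffs)
    est = median((diffs[i] + diffs[j]) / 2 for i in range(n) for j in range(i, n))
    return t_plus, est
```

The O(n²) enumeration of pair averages is exactly the computation that Table 8.5 carries out by hand for the electrode data.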
TABLE 8.5
Estimation and Confidence Interval Construction for Electrode Data
Display 8.5
Kruskal-Wallis Test

The null hypothesis is that the k populations being compared have the same distribution.
For the Kruskal-Wallis test to be performed, the observations are first ranked without regard to group membership, and then the sums of the ranks of the observations in each group are calculated. These sums will be denoted by R1, R2, ..., Rk.
If the null hypothesis is true, we would expect the Rj's to be more or less equal, apart from differences caused by the different sample sizes.
A measure of the degree to which the Rj's differ from one another is given by
  H = [12 / (N(N + 1))] Σ_{j=1}^{k} R_j²/n_j − 3(N + 1),
where N = Σ_{j=1}^{k} n_j.
Under the null hypothesis the statistic H has a chi-squared distribution with k − 1 degrees of freedom.
the one-way ANOVA procedure described in Chapter 3. Details of the Kruskal-Wallis method for one-way designs with three or more groups are given in Display 8.5. (When the number of groups is two, the Kruskal-Wallis test is equivalent to the Wilcoxon-Mann-Whitney test.)
To illustrate the use of the Kruskal-Wallis method, we apply it to the data shown in Table 8.6. These data arise from an investigation of the possible beneficial effects of the pretherapy training of clients on the process and outcome of counseling and psychotherapy. Sauber (1971) investigated four different approaches to pretherapy training.

Control (no treatment),
Therapeutic reading (TR) (indirect learning),
Vicarious therapy pretraining (VTP) (videotaped, vicarious learning), and
Group, role induction interview (RII) (direct learning).

Nine clients were assigned to each of these four conditions, and a measure of psychotherapeutic attraction was eventually given to each client. Applying the Kruskal-Wallis procedure to these data gives a chi-squared test statistic of 4.26, which with 3 degrees of freedom has an associated p value of .23. There is no evidence of a difference between the four pretherapy regimes.
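The H statistic of Display 8.5 can be computed with a few lines of Python (an illustrative translation; the book's own computations use S-PLUS):

```python
def kruskal_wallis(groups):
    """H statistic of Display 8.5; groups is a list of lists of scores."""
    pooled = [v for g in groups for v in g]
    N = len(pooled)

    def rank(v):  # midrank of v in the pooled sample
        return sum(w < v for w in pooled) + (sum(w == v for w in pooled) + 1) / 2

    return (12 / (N * (N + 1))
            * sum(sum(rank(v) for v in g) ** 2 / len(g) for g in groups)
            - 3 * (N + 1))
```

With k groups, H is referred to a chi-squared distribution with k − 1 degrees of freedom; applied to the four columns of Table 8.6 this is the calculation that yields the chi-squared value of 4.26 quoted above.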
(Hollander and Wolfe, 1999, describe some distribution-free analogs
to the
multiple comparison procedures discussed in Chapter 3, for identifying which
TABLE 8.6
Psychotherapeutic Attraction Scores for Four Experimental Conditions

Control   Reading (TR)   Videotape (VTP)   Group (RII)
0         0              0                 1
1         6              5                 5
3         7              8                 12
3         9              9                 13
5         11             11                19
10        13             13                22
13        20             16                25
17        20             17                27
26        24             20                29
Display 8.6
Friedman Test

The observations for each subject are ranked across the conditions, and the sums of the ranks for each of the k conditions, R1, R2, ..., Rk, are found. The test statistic is
  S = [12 / (nk(k + 1))] Σ_{j=1}^{k} R_j² − 3n(k + 1).
Under the null hypothesis that in the population the conditions have the same distribution, S has a chi-squared distribution with k − 1 degrees of freedom.
particular groups in the one-way design differ, after obtaining a significant result from the Kruskal-Wallis test.)
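The within-subject ranking that the Friedman test requires can be sketched in a few lines of Python (illustrative only; the statistic form is the standard Friedman chi-squared statistic):

```python
def friedman(scores):
    """Friedman chi-squared statistic S; scores[i] holds the k condition
    values for subject i, ranked within each subject."""
    n, k = len(scores), len(scores[0])
    R = [0.0] * k  # rank sum for each condition
    for row in scores:
        for j, v in enumerate(row):
            # midrank of v within this subject's row
            R[j] += sum(w < v for w in row) + (sum(w == v for w in row) + 1) / 2
    return 12 / (n * k * (k + 1)) * sum(r * r for r in R) - 3 * n * (k + 1)
```

When every subject orders the conditions identically, S reaches its maximum n(k − 1); when the subjects' orderings cancel out, S is zero.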
TABLE 8.7
Percentage of Consonants Correctly Identified under Each of Seven Conditions
Subject
2
3
4
AL
AC
LC
ALC
1.1
1.1
13.0
36.9
33.3
28.6
23.8
52.4
42.9
34.5
40.5
34.5
33.3
33.3
35.7
31.0
41.7
44.0
33.3
46.4
83.3
77.3
81.0
69.0
98.8
63.0
81.0
76.1
65.5
35.7
13.0
42.9
31.0
34.5
41.7
37.0
33.3
45.2
32.1
46.4
33.3
69.0
95.2
89.3
70.2
86.9
67.9
86.9
85.7
81.0
95.2
91.7
95.2
5
11.9
6 47.4 42.90 46.4
78.6
7
5.0
4.0
8
9
0
1.1
10
2.4
11
12
0
0
13
0
14
1.1
15
1.1
16
17
0
0
18
40.5
22.6
57.1
27.4
20.2
29.8
27.4
26.2
29.8
21.4
32.1
28.6
28.6
36.9
27.4
41.7
22.6
42.9
38.0
31.0
38.0
21.4
33.3
23.8
29.8
33.3
26.1
35.7
44.0
32.1
41.7
25.0
40.5
42.9
34.5
39.3
35.7
31.0
44.0
45.2
%A
n.4
73.8
91.7
85.7
71.4
92.9
59.5
82.1
72.6
78.6
95.2
89.3
95.2
To illustrate the use of the Friedman method, we apply it to the data shown in Table 8.7, taken from a study reported by Nicholls and Ling (1982), into the effectiveness of a system using hand cues in the teaching of language to severely hearing-impaired children. In particular, they considered syllables presented to hearing-impaired children under the following seven conditions.
A: audition,
L: lip reading,
AL: audition and lip reading,
C: cued speech,
AC: audition and cued speech,
LC: lip reading and cued speech, and
ALC: audition, lip reading, and cued speech.
8.5. DISTRIBUTION-FREE CORRELATION
AND REGRESSION

A frequent requirement in many psychological studies is to measure the correlation between two variables observed on a sample of subjects. In Table 8.8, for example, examination scores (out of 75) and corresponding exam completion times (in seconds) are given for 10 students. A scatterplot of the data is shown in Figure 8.2. The usual Pearson's product moment correlation coefficient, r, takes the value 0.50, and a t test that the population correlation coefficient, ρ, takes the value zero gives t = 1.63. (Readers are reminded that the test statistic here is r√[(n − 2)/(1 − r²)], which under the null hypothesis that ρ = 0 has a t distribution with n − 2 degrees of freedom.) Dealing with Pearson's coefficient involves
TABLE 8.8
Examination Scores and Time to Complete Examination

Score   Time (s)
49      2160
70      2063
55      2013
52      2000
61      1420
65      1934
57      1519
71      2735
69      2329
44      1590
FIG. 8.2. Scatterplot of examination scores and completion times.
the assumption that the data arise from a bivariate normal distribution (see Everitt and Wykes, 1999, for a definition). But how can we test for independence when we are not willing to make this assumption? There are a number of possibilities.
8.5.1. Kendall's Tau
8.5.2. Spearman's Rank Correlation
Display 8.7
Kendall's Tau

Consider the ranks of three subjects on two variables:
  Variable 1: 1 2 3
  Variable 2: 1 3 2
When the subjects are listed in the order of their ranks on variable 1, there is an inversion of the ranks on variable 2 (rank 3 appears before rank 2).
Kendall's tau statistic can now be defined as
  τ = 1 − 2I / [n(n − 1)/2],
where I is the number of inversions in the data.
A significance test for the null hypothesis that the population correlation is zero is provided by z given by
  z = τ / {[2(2n + 5)] / [9n(n − 1)]}^{1/2},
which is tested as a standard normal.
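The inversion count on which tau is based can be sketched directly in Python (an illustrative translation; ties are assumed absent, as in Display 8.7):

```python
import math

def kendall_tau(x, y):
    """Kendall's tau via the inversion count I: list the subjects in the
    order of their x-ranks and count out-of-order pairs on y."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    ys = [y[i] for i in order]
    I = sum(ys[i] > ys[j] for i in range(n) for j in range(i + 1, n))
    tau = 1 - 2 * I / (n * (n - 1) / 2)
    z = tau / math.sqrt(2 * (2 * n + 5) / (9 * n * (n - 1)))
    return tau, z
```

With no inversions tau is 1; with every pair inverted it is −1, so the statistic behaves like a correlation coefficient computed on orderings alone.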
Display 8.8
Spearman's Correlation Coefficient for Ranked Data

Assume that n observations on two variables are available. Within each variable the observations are ranked.
Spearman's correlation coefficient is defined as
  r_s = 1 − 6 Σ_{j=1}^{n} D_j² / [n(n² − 1)],
where D_j is the difference in the ranks assigned to subject j on the two variables. (This formula arises simply from applying the usual Pearson correlation coefficient formula to the ranked data.)
A significance test for the null hypothesis that the population correlation is zero is given by
  z = (n − 1)^{1/2} r_s,
which is tested as a standard normal.
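The rank-difference formula of Display 8.8 translates directly into Python (illustrative; midranks are used so that tied observations are handled):

```python
import math

def spearman(x, y):
    """Spearman's r_s from rank differences, plus the large-sample z."""
    def midranks(v):
        return [sum(w < a for w in v) + (sum(w == a for w in v) + 1) / 2
                for a in v]
    rx, ry = midranks(x), midranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    rs = 1 - 6 * d2 / (n * (n * n - 1))
    return rs, math.sqrt(n - 1) * rs
```

Because only the ranks enter the formula, any order-preserving transformation of either variable leaves r_s unchanged, which is the invariance property noted in the chapter summary.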
z test is 1.24 with an associated p value of 0.22. Again, the conclusion is that the two variables are independent.
8.5.3. Distribution-Free Simple
Linear Regression

Linear regression, as described in Chapter 6, is one of the most commonly used of statistical procedures. Estimation and testing of regression coefficients depends largely on the assumption that the error terms in the model are normally distributed. It is, however, possible to use a distribution-free approach to simple regression where there is a single explanatory variable, based on a method suggested by Theil (1950). Details are given in Display 8.9.
We shall use the distribution-free regression procedure on the data introduced in Chapter 6 (see Table 6.1), giving the average number of words known by children
Display 8.9
Distribution-Free Simple Linear Regression

The null hypothesis of interest is that the slope of the regression line between two variables x and y is zero.
We have a sample of n observations on the two variables, (x_j, y_j), j = 1, ..., n, listed in increasing order of x.
The test statistic, C, is given by
  C = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} c(y_j − y_i),
where the function c is zero if y_j = y_i, takes the value 1 if y_j > y_i, and the value −1 if y_j < y_i.
The p value of the test statistic can be found from Table A.30 in Hollander and Wolfe (1999).
A large sample approximation uses the statistic z given by
  z = C / [n(n − 1)(2n + 5)/18]^{1/2},
which is tested as a standard normal.
An estimator of the slope is the median of the n(n − 1)/2 pairwise slopes S_ij = (y_j − y_i)/(x_j − x_i), i < j.
TABLE 8.9
Distribution-Free Estimation for Regression Coefficient of Vocabulary
Scores on Age
S_ij Values (see Display 8.9) for the Vocabulary Data
38.00000
533.42857
600.00000
624.00000
900.00000
652.00000
648.00000
404.00000
269.00000
517.25000
607.20000
633.33333
776.00000
644.00000
566.66667
461.33333
Note. The estimates of the slope and the intercept are obtained from these values as
  β̂ = median(S_ij);
  α̂ = median(vocabulary score_j − β̂ × age_j).
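The two medians in the note above can be computed in a few lines of Python (an illustrative sketch; the book's own computations use S-PLUS):

```python
from statistics import median

def theil_line(x, y):
    """Theil's distribution-free fit: slope = median of the pairwise
    slopes S_ij, intercept = median residual at that slope."""
    n = len(x)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(n) for j in range(i + 1, n) if x[j] != x[i]]
    beta = median(slopes)
    alpha = median(yi - beta * xi for xi, yi in zip(x, y))
    return alpha, beta
```

Because both estimates are medians, a single wild observation moves the fitted line far less than it would move the least squares line, which is the contrast Figure 8.3 illustrates.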
FIG. 8.3. Scatterplot of vocabulary scores at different ages, showing fitted least squares and distribution-free regressions.
introduced in the 1930s by Fisher and Pitman, but initially they were largely
of theoretical rather than practical interest because of the lack of the computer
technology required to undertake the extensive computation often needed in their
application. However, with each increase in computer speed and power, the permutation approach is being applied to a wider and wider variety of problems, and with today's more powerful generation of personal computers, it is often faster to calculate a p value for an exact permutation test than to look up an asymptotic approximation in a book of tables. Additionally, the statistician (or the psychologist) is not limited by the availability of tables but is free to choose a test statistic exactly matched to testing a particular null hypothesis against a particular alternative. Significance levels are then, so to speak, computed on the fly (Good, 1994).
The stages in a general permutation test are as follows.

1. Choose a test statistic S.
2. Compute S for the original set of observations.
3. Obtain the permutation distribution of S by repeatedly rearranging the observations. When two or more samples are involved (e.g., when the difference between groups is assessed), all the observations are combined into a single large sample before they are rearranged.
4. Obtain the p value as the proportion of the permutation distribution that is as extreme as, or more extreme than, the value of S for the original observations; p values obtained in this way are referred to as exact.
121, 118, 110,
34, 22, 12.
The null hypothesis is that there is no difference between the treatments in their effect on the dependent variable. The alternative is that the first treatment results in higher values of the dependent variable. (The author is aware that the result in this case is pretty clear without any test!)
The first step in a permutation test is to choose a test statistic that discriminates between the hypothesis and the alternative. An obvious candidate is the sum of the observations for the first treatment group. If the alternative hypothesis is true, this sum ought to be larger than the sum of the observations in the second treatment group. If the null hypothesis is true, then the sum of the observations in each group should be approximately the same. One sum might be smaller or larger than the other by chance, but the two should not be very different.
The value of the chosen test statistic for the observed values is 121 + 118 + 110 = 349. To generate the necessary permutation distribution, remember that under the null hypothesis the labels treatment 1 and treatment 2 provide no information about the test statistic, as the observations are expected to have almost the same values in each of the two treatment groups. Consequently, to permute the observations, simply reassign the six labels, three treatment 1 and three treatment 2, to the six observations; for example, treatment 1: 121, 118, 34; treatment 2: 110, 12, 22; and so on. Repeat the process until all the possible 20 distinct arrangements have been tabulated as shown in Table 8.10.
From the results given in Table 8.10, it is seen that the sum of the observations in the original treatment 1 group, that is, 349, is equalled only once and never exceeded in the distinct random relabelings. If chance alone is operating, then such an extreme value has a 1 in 20 chance of occurring, that is, 5%. Therefore at the conventional .05 significance level, the test leads to the rejection of the null hypothesis in favor of the alternative.
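Enumerating the 20 relabelings by hand is exactly what `itertools.combinations` automates. A short Python sketch of the exact one-sided test (illustrative; the second-group values are those used in Table 8.10):

```python
from itertools import combinations

def exact_perm_p(group1, group2):
    """One-sided exact permutation test: the proportion of distinct
    relabelings whose first-group sum is at least the observed sum."""
    pooled = group1 + group2
    observed = sum(group1)
    sums = [sum(c) for c in combinations(pooled, len(group1))]
    return sum(s >= observed for s in sums) / len(sums)

p = exact_perm_p([121, 118, 110], [34, 22, 12])
```

With three observations per group there are C(6, 3) = 20 relabelings, and only the observed one reaches 349, giving the exact p value of .05 found above.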
The Wilcoxon-Mann-Whitney test described in the previous section is also an example of a permutation test, but one applied to the ranks of the observations rather than their actual values. Originally the advantage of this procedure was that, because ranks always take the same values (1, 2, etc.), previously tabulated distributions could be used to derive p values, at least for small samples. Consequently, lengthy computations were avoided. However, it is now relatively simple
TABLE 8.10
Permutation Distribution of Example with Three Observations in Each Group

First Group Obs.      Second Group Obs.     Sum of First Group
121, 118, 110         34, 22, 12            349
121, 118, 34          110, 22, 12           273
121, 110, 34          118, 22, 12           265
118, 110, 34          121, 22, 12           262
121, 118, 22          110, 34, 12           261
121, 110, 22          118, 34, 12           253
121, 118, 12          110, 34, 22           251
118, 110, 22          121, 34, 12           250
121, 110, 12          118, 34, 22           243
118, 110, 12          121, 34, 22           240
121, 34, 22           118, 110, 12          177
118, 34, 22           121, 110, 12          174
121, 34, 12           118, 110, 22          167
110, 34, 22           121, 118, 12          166
118, 34, 12           121, 110, 22          164
110, 34, 12           121, 118, 22          156
121, 22, 12           118, 110, 34          155
118, 22, 12           121, 110, 34          152
110, 22, 12           121, 118, 34          144
34, 22, 12            121, 118, 110         68
TABLE 8.11
Data from a Study of Organization and Memory

Training Group
No Training Group
TABLE 8.12
Permutational Distribution of the Wilcoxon-Mann-Whitney Rank Sum Statistic
Applied to the SO2 Scores
20 21 22 23 24
27 26 25
7
9 1 1 4 1 6 1 8 1 9 2 0
39 3834
37 36 35
40
17
9
5
3
2
1
1
+2 + 1 + 1)/(252) = 0.028.
that had received training (more details of the study are given in Robertson, 1991). The permutational distribution of the Wilcoxon-Mann-Whitney rank sum statistic is shown in Table 8.12. As can be seen, this leads to an exact p value for the test of a group difference on the SO2 scores of .028. There is evidence of a training effect.
4. Obtain the upper α-percentage point of the bootstrap distribution and accept or reject the null hypothesis according to whether S for the original observations is smaller or larger than this value.
5. Alternatively, construct a confidence interval for the test statistic by using the bootstrap distribution (see the example given later).

Unlike a permutation test, the bootstrap does not provide exact p values. In addition, it is generally less powerful than a permutation test. However, it may be possible to use a bootstrap procedure when no other statistical method is applicable. Full details of the bootstrap, including many fascinating examples of its application, are given by Efron and Tibshirani (1993). Here, we merely illustrate its use in two particular examples, first in the construction of confidence intervals for the differences in means and in medians of the groups involved in the WISC data introduced in Chapter 3 (see Table 3.6), and then in regression, using the vocabulary scores data used earlier in this chapter and given in Chapter 6 (see Table 6.1).
The results for the WISC example are summarized in Table 8.13. The confidence interval for the mean difference obtained by using the bootstrap is seen to be narrower than the corresponding interval obtained conventionally with the t statistic.
TABLE 8.13
Construction of Confidence Intervals for the WISC Data by Using the Bootstrap

The procedure is based on drawing random samples of 16 observations with replacement from each of the row and corner groups.
The random samples are found by labeling the observations in each group with the integers 1, 2, ..., 16 and selecting random samples of these integers (with replacement), using an appropriate computer algorithm (these are generally available in most statistical software packages).
The mean difference and median difference is calculated for each bootstrap sample.
In this study, 1,000 bootstrap samples were used, resulting in 1,000 mean and median differences.
The first five bootstrap samples in terms of the integers 1, 2, ..., 16 were as follows.

1, 2, 2, 6, 7, 7, 8, 9, 11, 11, 13, 13, 14, 15, 16, 16
1, 2, 3, 3, 5, 6, 7, 10, 11, 11, 12, 13, 14, 15, 16
1, 1, 3, 3, 3, 4, 5, 6, 8, 8, 9, 14, 14, 15, 16, 16
1, 2, 4, 4, 4, 5, 5, 5, 6, 10, 11, 11, 12, 12, 12, 14
1, 2, 3, 3, 5, 5, 6, 6, 9, 9, 12, 13, 14, 15, 15, 16

A rough 95% confidence interval can be derived by taking the 25th and 975th largest of the replicates, leading to the following intervals: mean (16.06, 207.81), median (7.5, 193.0).
The 95% confidence interval for the mean difference derived in the usual way, assuming normality of the population distributions and homogeneity of variance, is (11.65, 211.59).
259
!lm
100
0
100
200
300
l o o 0 bootstrap
300
I"
100
100
200
Histogramofmediandifferences
samples of the WlSC data.
FIG. 8.5.
for l o o 0 bootstrap
Histograms of the mean and median differences obtained from the bootstrap samples areshown in Figures 8.4 and 8.5.
The bootstrap results for the regression of vocabulary scores on age are summarized in Table 8.14. The bootstrap confidence interval of (498.14, 617.03) is wider than the interval given in Chapter 6 based on the assumption of normality, (513.35, 610.51). The bootstrap results are represented graphically in Figures 8.6 and 8.7. We see that the regression coefficients calculated from the bootstrap samples show a minor degree of skewness, and that the regression coefficient calculated from the observed data is perhaps a little biased compared to the mean of the bootstrap distribution of 568.35.
TABLE 8.14
BootstrapResults for Regression of Vocabulary Scores on Age
Observed Value
6.42
Bias
SE
Mean
561.93
8.8. SUMMARY
1. Distribution-free tests are useful alternatives to parametric approaches, when the sample size is small and therefore evidence for any distributional assumption is not available empirically.
2. Such tests generally operate on ranks and so are invariant under transformations of the data that preserve order. They use only the ordinal property of the raw data.
COMPUTER HINTS
SPSS
Many distribution-free tests are available in SPSS; for example, to apply the Wilcoxon-Mann-Whitney test, we would use the following steps.
Wilcoxon-Mann-Whitney test, we would use the following steps.
1. Click on Statistics, click on Nonparametric Tests, and then click on 2
Independent Samples.
2. Move the names of the relevant dependent variables to the Test Variable
List.
CHAPTER 8
262
3. Move the name of the grouping variable to the Grouping Variable box.
4. Ensure thatMann-Whitney U is checked.
5. Click on OK.
For related samples, after clicking on Nonparametric Tests, we would click 2 Related Samples and ensure that Wilcoxon is checked in the Test Type dialog box.
S-PLUS
Various distribution-free tests described in the text are available from the Statistics menu. The following steps access the relevant dialog boxes.
All these distribution-free tests are also available by a particular function in the command language; relevant functions are wilcox.test, kruskal.test, and friedman.test. For example, to apply the Wilcoxon-Mann-Whitney test to the data in two vectors, x1 and x2, the command would be

wilcox.test(x1, x2)

and for the paired situation, wilcox.test(y1, y2, paired=T) gives Wilcoxon's signed-rank test.
The density, cumulative probability, quantiles, and random generation for the distribution of the Wilcoxon-Mann-Whitney rank sum statistic are also readily available by using dwilcox, pwilcox, qwilcox, and rwilcox, and these can be very useful in particular applications.
The outer function is extremely helpful if distribution-free confidence intervals are required. For example, in the electrode example in the text, if the data for each electrode are stored in vectors, electrode1 and electrode2, then the required sums
EXERCISES
8.1. The data in Table 8.15 were obtained in a study reported by Hollander and Wolfe (1999). A measure of depression was recorded for each patient on both the first and second visit after initiation of therapy. Use the Wilcoxon signed rank test to assess whether the depression scores have changed over the two visits and construct a confidence interval for the difference.
TABLE 8.15
Depression Scores
Patient
1
2
3
4
5
6
7
8
9
Visit 1
Visit 2
1.83
0.50
1.62
2.48
1.68
1.88
1.55
3.06
1.30
0.88
0.65
0.60
2.05
1.06
1.29
1.06
3.14
1.29
TABLE 8.16
Pair i
1
2
3
4
5
6
7
8
9
10
11
12
13
Twin X_i
Twin Y_i
277
169
157
139
108
213
232
229
114
232
161
149
128
256
118
137
144
146
221
184
188
97
231
114
187
230
8.2. The data in Table 8.16 give the test scores of 13 dizygous (nonidentical) male twins (the data are taken from Hollander and Wolfe, 1999). Test the hypothesis of independence versus the alternative that the twins' scores are positively correlated.
8.3. Reanalyze the SO2 data given in Table 8.11 by using a permutational approach, taking as the test statistic the sum of the scores in the group that had received training.
TABLE 8.17
Psychotherapy
Patient
Before
After
TABLE 8.18
s:c
48
42
44
35
36
28
30
13
22
24
3.32
3.74
3.70
3.43
3.65
4.50
3.85
4.15
5.06
4.21
suitable distribution-free test to determine whether there has been any change in effect.
8.7. The data in Table 8.18 show the value of an index known as the psychomotor expressiveness factor (PEF) and the ratio of striatum to cerebellum
9

Analysis of Categorical
Data I: Contingency Tables
and the Chi-Square Test
9.1. INTRODUCTION
Categorical data occur frequently in the social and behavioral sciences, where information about marital status, sex, occupation, ethnicity, and so on is frequently of interest. In such cases the measurement scale consists of a set of categories. Two specific examples are as follows.
CHAPTER 9
268
9.2. THE
TWO-DIMENSIONAL
CONTINGENCY TABLE
A two-dimensional contingency table is formed from cross-classifying two categorical variables and recording how many members of the sample fall in each cell of the cross-classification. An example of a 5 x 2 contingency table is given in Table 9.1, and Table 9.2 shows a 3 x 3 contingency table. The main question asked
TABLE 9.1
Two-Dimensional Contingency Table: Psychiatric Patients by
Diagnosis and Whether Their Treatment Prescribed Drugs
Diagnosis
Drugs
No Drugs
Schizophrenic
Affective disorder
Neurosis
Personality disorder
Special symptoms
105
12
18
8
2
41
19
52
13
ANALYSIS OF CATEGORICAL
DATA
269
TABLE 9.2
Incidence of Cerebral Tumors

Site    A     B     C     Total
I       23    9     6     38
II      21    4     3     28
III     34    24    17    75
Total   78    37    26    141
TABLE 9.3
Estimated Expected Values Under the Hypothesis of Independence
for the Diagnosis and Drugs Data in Table 9.1

Diagnosis    Drugs    No Drugs
schizophrenic
4.77disorder
Af'fective
Neurosis
Personality
disorder
33.72
Special
4.43
74.51
9.23
24.40
65.28
8.57
38.49
12.60
about such tables is whether the two variables forming the table are independent; the chi-square test of this hypothesis is given in Display 9.1. (Contingency tables formed from more than two variables will be discussed in the next chapter.)
Applying the test described in Display 9.1 to the data in Table 9.1 gives the estimated expected values shown in Table 9.3 and a chi-square value of 84.19 with 4 degrees of freedom. The associated p value is very small. Clearly, diagnosis and treatment with drugs are not independent. For Table 9.2 the chi-square statistic takes the value 7.84, which with 4 degrees of freedom leads to a p value of .098. Here there is no evidence against the independence of site and type of tumor.
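The chi-square test can be sketched in Python with scipy.stats; the 3 x 3 table below is the cerebral tumor data of Table 9.2, assuming those counts have been transcribed correctly.

```python
# Chi-square test of independence for the cerebral tumor data of
# Table 9.2 (site by type of tumor), assuming correct transcription.
from scipy.stats import chi2_contingency

table = [[23, 9, 6],
         [21, 4, 3],
         [34, 24, 17]]

x2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(x2, 2), round(p, 3), dof)
```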
Display 9.1
Testing for Independence in an r x c Contingency Table
Suppose a sample of n individuals has been cross-classified with respect to two categorical variables, one with r categories and one with c categories, to form an r x c two-dimensional contingency table.
Here n_ij represents the number of observations in the ijth cell of the table, n_i. represents the total number of observations in the ith row of the table, and n_.j represents the total number of observations in the jth column of the table; both n_i. and n_.j are usually termed marginal totals.
The null hypothesis to be tested is that the two variables are independent. This hypothesis can be formulated more formally as

H0: p_ij = p_i. x p_.j,

where, in the population from which the n observations have been sampled, p_ij is the probability of an observation being in the ijth cell, p_i. is the probability of being in the ith category of the row variable, and p_.j is the probability of being in the jth category of the column variable. (The hypothesis is just a reflection of the elementary rule of probability that, for two independent events A and B, the probability of A and B is simply the product of the probabilities of A and of B.)
In a sample of n individuals we would, if the variables are independent, expect n p_i. p_.j individuals in the ijth cell. Using the obvious estimators for p_i. and p_.j, we can estimate this expected value to give

E_ij = n x (n_i./n) x (n_.j/n) = n_i. n_.j / n.
TABLE 9.4
Three 2 x 2 Contingency Tables

(1) Schizophrenia and gender

Diagnosis        Male    Female    Total
Schizophrenia    43      32        75
Other            15      32        47
Total            58      64        122

(2) Rape verdicts

Verdict       Fault Alleged    Fault Not Alleged    Total
Guilty        153              24                   177
Not guilty    105              76                   181
Total         258              100                  358

(3) Suicidal feelings

Suicidal Feelings    Diagnosis 1    Diagnosis 2    Total
Yes                  2              6              8
No                   18             14             32
Total                20             20             40
Here the verdict is classified against whether the defense alleged that the victim was somehow partially at fault for the rape.
Display 9.2
2 x 2 Contingency Tables

The general form of a 2 x 2 contingency table is as follows.

                            Variable 2
Variable 1    Category 1    Category 2    Total
Category 1    a             b             a + b
Category 2    c             d             c + d
Total         a + c         b + d         n = a + b + c + d

The proportions in category 1 of variable 1 for the two categories of variable 2 are estimated as

p̂1 = a/(a + c),   p̂2 = b/(b + d).
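The difference in the two proportions and its 95% confidence interval can be sketched as follows; the cells a, b, c, d are taken from the schizophrenia-by-gender table, assuming those counts are transcribed correctly.

```python
# 95% confidence interval for a difference of two proportions,
# following the notation of Display 9.2; a, b, c, d are the cells of
# the schizophrenia-by-gender table (assumed correctly transcribed).
import math

a, b = 43, 32      # schizophrenia: male, female
c, d = 15, 32      # other diagnosis: male, female

p1 = a / (a + c)   # proportion schizophrenic among males
p2 = b / (b + d)   # proportion schizophrenic among females
diff = p1 - p2

se = math.sqrt(p1 * (1 - p1) / (a + c) + p2 * (1 - p2) / (b + d))
ci = (diff - 1.96 * se, diff + 1.96 * se)
print(diff, ci)
```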
TABLE 9.5
Results of Analyzing the Three 2 x 2 Contingency Tables in Table 9.4

1. For the schizophrenia and gender data, X² = 7.49 with an associated p value of .0062. The 95% confidence interval for the difference in the probability of being diagnosed schizophrenic for men and for women is (0.01, 0.41).
2. For the rape data, X² = 35.93 with an associated p value that is very small. The 95% confidence interval for the difference in the probability of being found guilty when the defense does not suggest that the rape is partially the fault of the victim is (0.25, 0.46); i.e., the defendant is more likely to be found guilty.
3. For the suicidal feelings data, X² = 2.5 with an associated p value of .11. The 95% confidence interval for the difference in the probability of having suicidal feelings in the two diagnostic categories is (-0.44, 0.04).
ANALYSIS OF CATEGORICAL
DATA
273
We can illustrate the use of Fisher's exact test on the data on suicidal feelings in Table 9.4, because this table has some small expected values (see Section 9.4 for more comments). The p value from applying the test is .235, indicating that diagnosis and suicidal feelings are not associated.
To illustrate McNemar's test, we use the data shown in Table 9.6. For these data the test statistic takes the value 1.29, which is clearly not significant, and we can conclude that depersonalization is not associated with prognosis where endogenous depressed patients are concerned.
Display 9.3
Yates's Continuity Correction
Display 9.4
Fisher's Exact Test for 2 x 2 Contingency Tables

P(a, b, c, d) = (a + b)!(a + c)!(c + d)!(b + d)! / (a!b!c!d!n!),

where a!, read "a factorial," is the shorthand method of writing the product of a and all the integers less than it. (By definition, 0! is one.)
Fisher's test uses this formula to find the probability of the observed arrangement of frequencies, and of every other arrangement giving as much or more evidence of an association between the two variables, keeping in mind that the marginal totals are regarded as fixed.
The sum of the probabilities of all such tables is the relevant p value for Fisher's test.
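Fisher's exact test is available in scipy.stats; the sketch below assumes the suicidal feelings table of Table 9.4 has cells 2, 6, 18, and 14.

```python
# Fisher's exact test for a 2 x 2 table with small expected values;
# the cells assume the suicidal feelings data are 2, 6, 18, 14.
from scipy.stats import fisher_exact

table = [[2, 6],
         [18, 14]]

odds_ratio, p = fisher_exact(table, alternative="two-sided")
print(round(p, 3))
```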
Display 9.5
McNemar's Test for Paired Samples

The observations on the matched pairs can be arranged as a 2 x 2 table of counts:

             A present    A absent
A present    a            b
A absent     c            d

The test statistic is

X² = (b - c)²/(b + c),

which, if there is no difference between the matched samples, has a chi-squared distribution with a single degree of freedom.
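Because only the discordant counts b and c enter McNemar's statistic, the calculation is direct; the values below assume the discordant cells of Table 9.6 are 5 and 2.

```python
# McNemar's test from Display 9.5: only the discordant pair counts
# b and c enter the statistic; b = 5 and c = 2 are assumed to be the
# discordant cells of the Table 9.6 depersonalization data.
b, c = 5, 2
x2 = (b - c) ** 2 / (b + c)
print(round(x2, 2))
```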
TABLE 9.6
Recovery of 23 Matched Pairs of Depressed Patients

                       Depersonalized Patients
Not Depersonalized
Patients               Recovered    Not Recovered    Total
Recovered              14           5                19
Not recovered          2            2                4
Total                  16           7                23
9.3. BEYONDTHECHI-SQUARETEST:
FURTHER EXPLORATION OF
CONTINGENCY TABLES BY USING
RESIDUALS AND CORRESPONDENCE
ANALYSIS
A statistical significance test is, as implied in Chapter 1, often a crude and blunt instrument. This is particularly true in the case of the chi-square test for independence in the analysis of contingency tables, and after a significant value of the test statistic is found, it is usually advisable to investigate in more detail why the null hypothesis of independence fails to fit. Here we shall look at two approaches, the first involving suitably chosen residuals and the second attempting to represent the association in a contingency table graphically.
9.3.1. The Use of Residuals in the Analysis of Contingency Tables

The most obvious residuals are the differences between the observed frequencies and the estimated expected values,

r_ij = n_ij - E_ij,   i = 1, ..., r,   j = 1, ..., c.   (9.1)
These are more useful when standardized by dividing by the square root of the corresponding expected value,

e_ij = (n_ij - E_ij)/sqrt(E_ij).

These terms are usually known as standardized residuals and are such that the chi-squared test statistic is given by X² = Σ e_ij². When the variables forming the contingency table are independent, the adjusted residuals are approximately normally distributed with mean zero and standard deviation one. Consequently, values outside (-1.96, 1.96) indicate those cells departing most from independence.
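The residuals can be computed directly; this sketch uses the cerebral tumor counts of Table 9.2 (assumed transcribed correctly) rather than the diagnosis data.

```python
# Standardized and adjusted residuals for an r x c table, following
# the formulas in the text; the counts are the Table 9.2 tumor data.
import numpy as np

n = np.array([[23, 9, 6],
              [21, 4, 3],
              [34, 24, 17]], dtype=float)

row = n.sum(axis=1, keepdims=True)
col = n.sum(axis=0, keepdims=True)
total = n.sum()

E = row @ col / total                    # expected values n_i. n_.j / n
std_resid = (n - E) / np.sqrt(E)         # standardized residuals
adj = std_resid / np.sqrt((1 - row / total) * (1 - col / total))

# the chi-squared statistic is the sum of squared standardized residuals
x2 = (std_resid ** 2).sum()
print(round(x2, 2))
```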
Table 9.7 shows the adjusted residuals for the data in Table 9.1. Here the values of the adjusted residuals demonstrate that all cells in the table contribute to the departure from independence.
TABLE 9.7
Adjusted Residuals for the Data in Table 9.1
Drugs
Diagnosis
schizophrenic
-78.28
Affective disorder
Neurosis 11.21
Personality43.23
disorder
Special symptoms
13.64
No Drugs
151.56
8.56
-21.69
-83.70
-26.41
-4.42
9.3.2.
Correspondence Analysis
Display 9.6
Correspondence Analysis
TABLE 9.8
Cross-Classification of Eye Color and Hair Color

             Hair Color
Eye Color    Fair    Red    Medium    Dark    Black
Blue         326     38     241       110     3
Light        688     116    584       188     4
Medium       343     84     909       412     26
Dark         98      48     403       681     85
TABLE 9.9
Derived Coordinatesfrom Correspondence Analysisfor
Hair Color-Eye Color Data
Category    Coord 1    Coord 2
0.44
0.40
-0.04
-0.70
0.09
0.17
-0.24
0.14
0.54
0.23
0.04
-0.59
-1.08
0.17
0.05
-0.21
0.1 1
0.27
be expected with, for example, fair hair being associated with blue and light eyes, and so on.
The second example of a correspondence analysis involves the data shown in Table 9.10, collected in a survey in which people in the UK were asked which of a number of characteristics could be applied to people in the UK and in other European countries. Here the two-dimensional correspondence diagram is shown in Figure 9.2. It appears that the respondents judged, for example, the French to be stylish and sexy, the Germans efficient, and the British boring; well, they do say you can prove anything with statistics!
TABLE 9.10
What Do People in the UK Think About Themselves and Their Partners
in the European Community?

Characteristic
French (F)
Spanish (S)
Italian (I)
British (B)
Irish (Ir)
Dutch (D)
German (G)
37
7
30
9
1
5
4
29
14
12
14
7
4
48
21
8
19
4
2
1
19
9
10
6
16
2
12
10
27
20
27
30
15
3
10
7
7
12
3
2
9
8
3
12
2
10
0
2
9.5
8
7
13
13
11
6
3
5
26
5
24
41
23
13
16
11
12
10
29
25
22
1 28
1 8 38
1
1
27
4
8
FIG. 9.2. Correspondence analysis diagram for the data in Table 9.10.

9.4. SPARSE DATA
TABLE 9.11
Firefighters' Entrance Exam Results

               Ethnic Group of Entrant
Test Result    White    Black    Hispanic    Asian    Total
Pass           5        2        2           0        9
No show        0        0        1           1        2
Fail           0        3        2           4        9
Total          5        5        5           5        20
TABLE 9.12
Reference Set for the Firefighter Example
TABLE 9.13
Two Tables in the Reference Set for the Firefighter Data
9
2
9
20
9
20
Note. X² = 14.67, and the exact probability is .00108, for (a). X² = 9.778 (not larger than the observed X² value), and so it does not contribute to the exact p value, for (b).
TABLE 9.14
Reference Set for a Hypothetical 6 x 6 Contingency Table

x11    x12    x13    x14    x15    x16
x21    x22    x23    x24    x25    x26
x31    x32    x33    x34    x35    x36
x41    x42    x43    x44    x45    x46
x51    x52    x53    x54    x55    x56
x61    x62    x63    x64    x65    x66

7  5  7  7  12  4  4  34
is also a member of the reference set. Its X² value is 9.778, which is less than that of the observed table, and so its probability does not contribute to the exact p value. The exact p value calculated in this way is .0398. The asymptotic p value associated with the observed X² value is .07265. Here the exact approach leads to a different conclusion, namely that the test result is not independent of race.
The real problem in calculating exact p values for contingency tables is computational. For example, the number of tables in the reference set for Table 9.14 is 1.6 billion! Fortunately, methods and associated software are now available that make this approach a practical possibility (see StatXact, Cytel Software Corporation; pralay@cytel.com).
TABLE 9.15
Calculation of the Odds Ratio and Its Confidence Interval for the Schizophrenia
and Gender Data

ψ̂ = (43 x 52)/(32 x 15) = 4.66.

The odds in favor of being diagnosed schizophrenic among males is nearly five times the corresponding odds for females.
The estimated variance of the logarithm of the estimated odds ratio is 1/43 + 1/52 + 1/32 + 1/15, and the resulting 95% confidence interval for the logarithm of the odds ratio is (0.80, 2.29).
Display 9.7
The Odds Ratio

The general 2 x 2 contingency table met in Display 9.2 can be summarized in terms of the probabilities of an observation being in each of the four cells of the table.

                        Variable 2
Variable 1    Category 1    Category 2
Category 1    p11           p12
Category 2    p21           p22

The ratios p11/p12 and p21/p22 are known as odds. The first is the corresponding odds for category 1 of variable 1, and the second is the corresponding odds for category 2 of variable 1. (Odds will be familiar to those readers who like the occasional flutter on the Derby, Epsom or Kentucky!)
A possible measure for the degree of dependence of the two variables forming a 2 x 2 contingency table is the so-called odds ratio, given by

ψ = (p11/p12)/(p21/p22).

Note that ψ may take any value between zero and infinity, with a value of 1 corresponding to independence (why?).
The odds ratio, ψ, has a number of desirable properties for representing dependence among categorical variables that other competing measures do not have. These properties include the following.

1. ψ remains unchanged if the rows and columns are interchanged.
2. If the levels of a variable are changed (i.e., listing category 2 before category 1), ψ becomes 1/ψ.
3. Multiplying either row by a constant or either column by a constant leaves ψ unchanged.

The odds ratio is estimated from the four frequencies a, b, c, and d in an observed 2 x 2 contingency table (see Display 9.2) as

ψ̂ = (a x d)/(b x c).

The variance of log ψ̂ is estimated as 1/a + 1/b + 1/c + 1/d, so that an approximate 95% confidence interval for log ψ is

log ψ̂ ± 1.96 x sqrt(1/a + 1/b + 1/c + 1/d).
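A sketch of the estimation in Display 9.7; the four cell counts are illustrative values only.

```python
# Odds ratio and approximate 95% confidence interval, following
# Display 9.7; a, b, c, d are arbitrary illustrative cell counts.
import math

a, b, c, d = 43, 32, 15, 32

psi_hat = (a * d) / (b * c)
log_psi = math.log(psi_hat)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)

# interval on the log scale, then back-transformed
ci_log = (log_psi - 1.96 * se, log_psi + 1.96 * se)
ci = (math.exp(ci_log[0]), math.exp(ci_log[1]))
print(round(psi_hat, 2), ci)
```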
TABLE 9.16
Classification of 149 Multiple Sclerosis Patients by Two Raters

           Rater 2
Rater 1    A     B     C     D     Total
A          38    5     0     1     44
B          33    11    3     0     47
C          10    14    5     6     35
D          3     7     3     10    23
Total      84    37    11    17    149
149 patients into four classes: (1) A, certainly suffering from multiple sclerosis; (2) B, probably suffering from multiple sclerosis; (3) C, possibly suffering from multiple sclerosis; and (4) D, doubtful, unlikely, or definitely not suffering from multiple sclerosis.
One intuitively reasonable index of agreement for the two raters is the proportion P0 of patients that they classify into the same category; for the data in Table 9.16,
TABLE 9.17
Two Hypothetical Data Sets, Each of Which Shows 66%
Agreement Between the Two Observers
Observer B
Observer A
Data Set 1
1
2
Total
l
1
8
1
2
8
64
8
24
5
1
30
13
20
7
40
80
10
Data Set 2
1
2
3
Total
Total
3
1
8
1
10
3
10
80
10
100
40
30
22
30
30
100
Consequently, P_c is given by
The kappa coefficient is defined as

kappa = (P0 - Pc)/(1 - Pc).   (9.10)
For the data in Table 9.16,

Pc = (1/149)(44/149 x 84 + 47/149 x 37 + 35/149 x 11 + 23/149 x 17) = 0.2797.   (9.11)
(9.12)
where p_ij is the proportion of observations in the ijth cell of the table of counts of agreements and disagreements for the two observers, p_i. and p_.j are the row and column marginal proportions, and r is the number of rows and columns in the table.
Landis and Koch (1977). They are as follows.

Kappa        Strength of Agreement
0.00         Poor
0.01-0.20    Slight
0.21-0.40    Fair
0.41-0.60    Moderate
0.61-0.80    Substantial
0.81-1.00    Perfect
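The kappa calculation can be sketched as follows; the 2 x 2 agreement table used is the one given later in Exercise 9.4.

```python
# The kappa coefficient of Equation (9.10): chance-corrected
# agreement between two raters; the 2 x 2 table of counts is the
# yes/no agreement table of Exercise 9.4.
import numpy as np

counts = np.array([[15, 5],
                   [5, 35]], dtype=float)

n = counts.sum()
p0 = np.trace(counts) / n                  # observed agreement P0
row = counts.sum(axis=1) / n
col = counts.sum(axis=0) / n
pc = (row * col).sum()                     # chance agreement Pc
kappa = (p0 - pc) / (1 - pc)
print(round(kappa, 3))
```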
9.7. SUMMARY

1. Categorical data occur frequently in psychological studies.
2. The chi-square statistic can be used to assess the independence or otherwise of two categorical variables.
3. When a significant chi-square value for a contingency table has been obtained, the reasons for the departure from independence have to be examined in more detail by the use of adjusted residuals, correspondence analysis, or both.
4. Sparse data can be a problem when the X² statistic is used. This is a problem that can be overcome by computing exact p values.
COMPUTER HINTS
SPSS
The most common test used on categorical data, the chi-square test for independence in a two-dimensional contingency table, can be applied as follows.
1. Click on Statistics, click on Summarize, and then click onCrosstabs.
2. Specify the row variable in the Rows box and the column variable in the
Columns box.
3. Now click on Statisticsand select Chi-square.
4. To get observed and expected values, click on Continue, click on Cells,
and then click on Expected and Total and ensure that Observed is also
checked.
5. Click on Continue and then OK.
S-PLUS
Various tests described in the text can be accessed from the Statistics menu as
follows.
1. Click on Statistics, click on Compare Samples, and click on Counts and
Proportions. Then,
(a) Click on Fisher's exact test for the Fisher test dialog box,
(b) Click on McNemar's test for the McNemar test dialog box,
(c) Click on Chi-square test for the chi-square test of independence.
These tests can also be used in the command line approach, with the relevant functions being chisq.test, fisher.test, and mcnemar.test. For example, for a contingency table in which the frequencies were stored in a matrix X, the chi-square test of independence could be applied with the command chisq.test(X). To apply the test without Yates's continuity correction, the command is chisq.test(X,correct=F).
EXERCISES
9.1. Table 9.18 shows a cross-classification of gender and belief in the afterlife for a sample of Americans. Test whether the two classifications are independent, and construct a confidence interval for the difference in the proportion of men and the proportion of women who believe in the afterlife.
the results are statistically significant. The government, however, lost the case because of applying the statistic when the expected count in two cells was less than five; the defendant argued that the software used to analyze the data printed out a warning that the chi-square test might not be valid. Use an exact test to settle the question.
TABLE 9.18
Gender and Belief in the Afterlife

          Belief in the Afterlife
Gender    Yes    No
Female    435    147
Male      375    134
TABLE 9.19
Applications for Membership at the Lansdowne Swimming Club

                           Black Applicants    White Applicants
Accepted for membership    1                   379
Rejected for membership    5                   0
Total applicants           6                   379
TABLE 9.20
Six-Point Scale
Janitor's Ratings
Banker's Ratings
0
0
0
0
0
3
0
0
1
0
0
0
25
4
4
11
0
8
27
13
1
1
0
8
0
6
21
48
4
3
0
0
TABLE 9.21
Berkeley College Applications

              Males                  Females
Department    Admitted    Refused    Admitted    Refused
A             512         313        89          19
B             353         207        17          8
C             120         205        202         391
D             138         279        131         244
E             53          138        94          299
F             22          351        24          317
One set of data collected is shown in Table 9.20. Investigate the level of agreement between the two raters by using the kappa coefficient.
9.4. Consider the following table of agreement for two raters rating a binary
response.
           Rater 2
Rater 1    Yes    No    Total
Yes        15     5     20
No         5      35    40
Total      20     40    60
TABLE 9.22
Mothers' Ratings of Their Children in Two Consecutive Years

          Year 2
Year 1    No    Yes    Total
No        49    17     66
Yes       31    52     83
Total     80    69     149
Show that the kappa statistic for these data is identical to both their intraclass correlation coefficient and their product moment correlation coefficient.
9.5. Find the standardized residuals and the adjusted residuals for the incidence of cerebral tumors in Table 9.2. Comment on your results in the light of the nonsignificant chi-square statistic for these data.
9.6. The data in Table 9.21 classify applicants to the University of California at Berkeley by their gender, the department applied for, and whether or not they were successful in their application.
9.7. Table 9.22 shows the results of mothers rating their children, in two consecutive years, as to whether or not they were doing well at school. Carry out the appropriate test of whether there has been a change in the mothers' ratings over the 2 years.
10

Analysis of Categorical
Data II: Log-Linear Models
and Logistic Regression
1 0 . 1 . INTRODUCTION
The two-dimensional contingency tables that were the subject of the previous chapter will have been familiar to most readers. Cross-classifications of more than two categorical variables, however, will not usually have been encountered in an introductory statistics course. It is such tables and their analysis that are the subject of this chapter. Two examples of multidimensional contingency tables, one resulting from cross-classifying three categorical variables and one from cross-classifying five such variables, appear in Tables 10.1 and 10.2. Both tables will be examined in more detail later in this chapter. To begin our account of how to deal with this type of data, we shall look at three-dimensional tables.
10.2. THREE-DIMENSIONAL
CONTINGENCY TABLES
The analysis of three-dimensional contingency tables poses entirely new conceptual problems compared with the analysis of two-dimensional tables. However,
the extension from tables of
three dimensions to those of four or more, although
CHAPTER 10
294
TABLE 10.1
Cross-Classificationof Method of Suicide by Age and Sex
Method
A@
(Years)
StX
455
121 10-40398
399
Male 41-70
6
93
r70
95
15 10-40 259
450 Female 41-70
r 7 0 154
4, W
Male
Male
Female
Female
316
14 33
55
124
82
26
40
71
75
38
185
38
60
10
TABLE 10.2
Danish Do-It-Yourself
Accommodation Type
Apartment
Tenure
work
Answer
Yes Rent
Skilled
15
13
No
Yes Own
Age <30
No
Unskilled
Yes
Rent
No
Yes Own
No
Yes RentOffice
21
own
23
No
Yes
No
18
415
5
17
34
2
3
30
25
61
8
476
House
31-45
46+
34
28
35
6
9
561
211
15
19
3
0
1
1029
17
0
~ 3 0 3145
10
2
6
3
13
52
31
13
16
56
12
4416
23
951
22
19
40
2512
5 102 191
1
54
2
19
2
46+
49
11
TABLE 10.3
Death Penalty Verdicts by Race of Defendant and Race of Victim

                                            Death Penalty    Not Death Penalty
White victim
  White defendant found guilty of murder    190              1320
  Black defendant found guilty of murder    110              520
Black victim
  White defendant found guilty of murder    0                90
  Black defendant found guilty of murder    60               970
TABLE 10.4
Infant Survival by Clinic and Amount of Prenatal Care

Place Where      Died                       Survived
Care Received    Less Care    More Care     Less Care    More Care
Clinic A         3            4             176          293
Clinic B         17           2             197          23

Collapsed over clinics:

            Amount of Prenatal Care
            Less    More    Total
Died        20      6       26
Survived    373     316     689
Total       393     322     715
The reason for these different conclusions will become apparent later. However, these examples should make it clear why consideration of two-dimensional tables resulting from collapsing a three-dimensional table is not a sufficient procedure for analyzing the latter.
Only a single hypothesis, namely that of the independence of the two variables involved, is of interest in a two-dimensional table. However, the situation is more complex for a three-dimensional table, and several hypotheses about the three variables may have to be assessed. For example, an investigator may wish to test that some variables are independent of some others, or that a particular variable is independent of the remainder. More specifically, the following hypotheses may be of interest in a three-dimensional table.

1. There is mutual independence of the three variables; that is, none of the variables are related.
2. There is partial independence; that is, an association exists between two of the variables, both of which are independent of the third.
3. There is conditional independence; that is, two of the variables are independent in each level of the third, but each may be associated with the third variable (this is the situation that holds in the case of the clinic data discussed above).
X²_L = 2 Σ observed x ln(observed/expected).   (10.1)
Display 10.1
Testing the Mutual Independence Hypothesis in a Three-Dimensional Contingency Table

Using an obvious extension of the nomenclature introduced in Chapter 9 for a two-dimensional table, we can formulate the hypothesis of mutual independence as

H0: p_ijk = p_i.. x p_.j. x p_..k,

where p_ijk represents the probability of an observation being in the ijkth cell of the table and p_i.., p_.j., and p_..k are the marginal probabilities of belonging to the ith, jth, and kth categories of the three variables.
The estimated expected values under this hypothesis, when there is a sample of n observations cross-classified, are

E_ijk = n x p̂_i.. x p̂_.j. x p̂_..k,

where p̂_i.., p̂_.j., and p̂_..k are estimates of the corresponding probabilities.
The intuitive (and fortunately also the maximum likelihood) estimators of the marginal probabilities are

p̂_i.. = n_i../n,   p̂_.j. = n_.j./n,   p̂_..k = n_..k/n,

where n_i.., n_.j., and n_..k are single-variable marginal totals for each variable, obtained by summing the observed frequencies over the other two variables.
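The estimated expected values of Display 10.1, and the likelihood ratio statistic of Equation (10.1), can be sketched with numpy; the 2 x 2 x 2 counts are arbitrary illustrative values.

```python
# Expected values under mutual independence in a three-dimensional
# table (Display 10.1): E_ijk = n * p_i.. * p_.j. * p_..k.
# The 2 x 2 x 2 counts are arbitrary illustrative values.
import numpy as np

n_ijk = np.array([[[10, 20], [30, 40]],
                  [[15, 25], [35, 45]]], dtype=float)
n = n_ijk.sum()

p_i = n_ijk.sum(axis=(1, 2)) / n   # marginal probabilities, variable 1
p_j = n_ijk.sum(axis=(0, 2)) / n   # variable 2
p_k = n_ijk.sum(axis=(0, 1)) / n   # variable 3

E = n * np.einsum("i,j,k->ijk", p_i, p_j, p_k)

# likelihood ratio statistic of Equation (10.1)
x2_l = 2 * (n_ijk * np.log(n_ijk / E)).sum()
print(round(x2_l, 3))
```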
Display 10.2
Testing the Partial Independence Hypothesis in a Three-Dimensional Contingency Table

With the use of the same nomenclature as before, the hypothesis of partial independence can be written in two equivalent forms as

H0: p_ijk = p_ij. x p_..k   and   p_.jk = p_.j. x p_..k.

The maximum likelihood estimators of the probabilities involved are

p̂_ij. = n_ij./n   and   p̂_..k = n_..k/n,

where n_ij. represents the two-variable marginal totals obtained by summing the observed frequencies over the third variable.
Using these probability estimates leads to the following estimated expected values under this hypothesis:

E_ijk = n_ij. x n_..k / n.
TABLE 10.5
Testing Mutual Independence
for the Suicide Data
For the suicide data, the estimated expected value, E111, under the hypothesis of mutual independence is obtained as E111 = n x p̂1.. x p̂.1. x p̂..1; the full set of estimated expected values is as follows.
Sex
Male
Male
Male
1
371.9
556.9
186.5
2
51.3
730.0
76.9
244.4
25.7
1040
41-70
r70
Female
Female
Female
212.7
318.5
106.6
29.4
44.0
14.7
Method
3
4
5
487.5
85.5 69.659.6
128.0 104.2
89.3
42.9
29.9
278.8
417.5
139.8
48.9
73.2
24.5
6
34.9
34.1
39.8
17.2
20.0
51.0
59.6
X² = 747.37, X²_L = 790.30.
The mutual independence hypothesis has 27 degrees of freedom.
Note that the single-variable marginal totals of the estimated expected values under the hypothesis of mutual independence are equal to the corresponding marginal totals of the observed values; for example, E1.. = n1.. .
how this hypothesis is formulated and the calculation of the estimated expected values. In Table 10.6 the partial independence hypothesis is tested for the suicide data. Clearly, even this more complicated hypothesis is not adequate for these data. A comparison of the observed values with the estimates of the values to be expected under this partial independence hypothesis shows that women in all age groups are underrepresented in the use of guns, knives, or explosives (explosives!) to perform the tragic task. (A more detailed account of how best to compare observed and expected values is given later.)
Finally, consider the hypothesis of no second-order relationship between the variables in the suicide data. This allows each pair of variables to be associated, but it constrains the degree and direction of the association to be the same in each level of the third variable. Details of testing the hypothesis for the suicide data are given in Table 10.7. Note that, in this case, estimated expected values cannot be found directly from any set of marginal totals. They are found from the iterative procedure referred to earlier. (The technical reason for requiring an iterative process is that, in this case, the maximum likelihood equations from
TABLE 10.6
Testing the Partial Independence Hypothesis for the Suicide Data

Here, we wish to test whether method of suicide is independent of sex, and that age and sex are also unrelated. However, we wish to allow an association between age and method.
The estimated expected value E111 under this hypothesis is found as E111 = n11. x n..1 / n.
Sex
Male
Male
Male
418.0
540.1
157.1
2
3
349.9
86.5
793.3
60.4
318.7
7.0
10-40
41-70
>70
Female
Female
Female
239.0
308.9
89.9
49.5
34.6
4.0
Methods
4
107.5
123.4
25.5
200.1
453.7
182.3
61.5
70.6
14.6
5
60.44
77.6
40.7
6
103.1
90.3
15.3
34.6
44.4
23.3
59.0
51.7
8.7
Note that in this case the marginal totals n_ij. and E_ij. are equal; for example, n11. = 398 + 259 = 657.
X² = 485.3, X²_L = 520.4.
which estimates arise have no explicit solution. The equations have to be solved iteratively.) Both test statistics are nonsignificant, demonstrating that this particular hypothesis is acceptable for the suicide data. Further comments on this result will be given in the next section.
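The iterative procedure referred to in the text is iterative proportional fitting; a minimal sketch with arbitrary 2 x 2 x 2 counts (not the suicide data) is as follows.

```python
# Iterative proportional fitting for the no-second-order-relationship
# model: expected values are repeatedly scaled to match each two-way
# margin of the observed table in turn. Counts are illustrative only.
import numpy as np

obs = np.array([[[10, 20], [30, 40]],
                [[15, 25], [35, 45]]], dtype=float)

E = np.ones_like(obs)
for _ in range(50):                                       # cycle to convergence
    E *= (obs.sum(axis=2) / E.sum(axis=2))[:, :, None]    # fix [12] margin
    E *= (obs.sum(axis=1) / E.sum(axis=1))[:, None, :]    # fix [13] margin
    E *= (obs.sum(axis=0) / E.sum(axis=0))[None, :, :]    # fix [23] margin

# each two-way margin of E now matches the observed margin
print(np.allclose(E.sum(axis=2), obs.sum(axis=2)))
```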
TABLE 10.7
Testing the No Second-Order Relationship Hypothesis for the Suicide Data

The hypothesis of interest is that the association between any two of the variables does not differ in either degree or direction in each level of the remaining variable.
More specifically, this means that the odds ratios corresponding to the 2 x 2 tables that arise from the cross-classification of pairs of categories of two of the variables are the same in all levels of the remaining variable. (Details are given in Everitt, 1992.)
Estimates of expected values under this hypothesis cannot be found from simple calculations on marginal totals of observed frequencies as in Displays 10.1 and 10.2. Instead, the required estimates have to be obtained iteratively by using a procedure described in Everitt (1992).
The estimated expected values derived from this iterative procedure are as follows.
Age
1040
41-70
r70
Sex
Male
Male
Male
1
410.9
379.4
99.7
2
122.7
77.6
8.7
1040
41-70
>70
Female
Female
Female
246.1
496.6
147.3
13.3
17.4
2.3
Method
3
4
5
439.2 156.4 56.8
819.9 166.384.751.1
308.9
33.4 24.1
110.8
427.1
192.1
12.6
27.7
6.6
38.2
70.9
39.9
6
122.0
13.3
40.0
57.3
10.7
X² = 15.40, X²_L = 14.90.
The degrees of freedom for this hypothesis are 10.
used for contingency tables can be introduced most simply (if a little clumsily) in terms of a two-dimensional table; details are given in Display 10.3. The model introduced there is analogous to the model used in a two-way analysis of variance (see Chapter 4), but it differs in a number of aspects.

1. The data now consist of counts, rather than a score for each subject on some dependent variable.
2. The model does not distinguish between independent and dependent variables. All variables are treated alike as response variables whose mutual associations are to be explored.
3. Whereas a linear combination of parameters is used in the analysis of variance and regression models of previous chapters, in multiway tables the natural model is multiplicative and hence logarithms are used to obtain a model in which parameters are combined additively.
4. In previous chapters the underlying distribution assumed for the data was the normal; with frequency data the appropriate distribution is binomial or multinomial (see the glossary in Appendix A).
Display 10.3
Log-Linear Model for a Two-Dimensional Contingency Table with r Rows and c Columns

Again consider the general independence model of previous chapters, that is,

F_ij = n p_i. p_.j.

When logarithms are taken, the following linear model for the expected frequencies is arrived at:

ln F_ij = ln F_i. + ln F_.j - ln n.

By some simple algebra (it really is simple, but see Everitt, 1992, for details), the model can be rewritten in the form

ln F_ij = u + u_1(i) + u_2(j).

The form of the model is now very similar to those used in the analysis of variance (see Chapters 3 and 4). Consequently, ANOVA terms are used for the parameters, and u is said to represent an overall mean effect, u_1(i) is the main effect of category i of the row variable, and u_2(j) is the main effect of the jth category of the column variable.
The main effect parameters are defined as deviations of row or column means of log frequencies from the overall mean of the log frequencies. Therefore, again using an obvious dot notation, u_1(i) is the mean of the log frequencies in row i minus u, and u_2(j) is the mean of the log frequencies in column j minus u.
The values taken by the main effect parameters in this model simply reflect differences between the row or column marginal totals and so are of little concern
ln F_ij = u + u_1(i) + u_2(j) + u_12(ij),

where the parameters u_12(ij) model the association between the two variables.
This is known as the saturated model for a two-dimensional contingency table because the number of parameters in the model is equal to the number of independent cells in the table (see Everitt, 1992, for details). Estimated expected values under this model would simply be the observed frequencies themselves, and the model provides a perfect fit to the observed data but, of course, no simplification in description of the data.
The interaction parameters u_12(ij) are related to odds ratios (see Exercise 10.7).
Now consider how the log-linear model in Display 10.3 has to be extended to be suitable for a three-way table. The saturated model will now have to contain main effect parameters for each variable, parameters to represent the possible associations between each pair of variables, and finally parameters to represent the possible second-order relationship between the three variables. The model is

ln F_ijk = u + u_1(i) + u_2(j) + u_3(k) + u_12(ij) + u_13(ik) + u_23(jk) + u_123(ijk).   (10.2)
304.
CHAPTER 1 0
Models derived from the saturated model by omitting terms can each be assessed for fit. However, it is important to note that, in general, attention must be restricted to what are known as hierarchical models. These are such that, whenever a higher-order effect is included in a model, the lower-order effects composed from variables in the higher effect are also included. For example, if the terms u123 are included, so also must be the terms u12, u13, u23, u1, u2, and u3. Therefore, models such as
ln F_ijk = u + u2(j) + u3(k) + u123(ijk)   (10.3)
are not permissible. (This restriction to hierarchical models arises from the constraints imposed by the maximum likelihood estimation procedures used in fitting log-linear models, details of which are too technical to be included in this text. In practice, the restriction is of little consequence, because most tables can be described by a series of hierarchical models.)
Each model that can be derived from the saturated model for a three-dimensional table is equivalent to a particular hypothesis about the variables forming the table; the equivalence is illustrated in Display 10.4. Particular points to note about the material in this display are as follows.
1. The first three models are of no consequence in the analysis of a three-dimensional table. Model 4 is known as the minimal model for such a table.
2. The fitted marginals or bracket notation is frequently used to specify the series of models fitted when a multidimensional contingency table is examined. The notation reflects the fact noted earlier that, when testing particular hypotheses about a multiway table (or fitting particular models), certain marginal totals of the estimated expected values are constrained to be equal to the corresponding marginals of the observed values. (This arises because of the form of the maximum likelihood equations.) The terms used to specify a model with the bracket notation are the marginals fixed by the model.
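To make the fitting of an unsaturated model concrete, the sketch below (Python; the function name is ours, not from the text) fits the mutual-independence model [1], [2], [3] to a three-way table. The fitted marginals are the three one-way totals, so each expected count is the product of the corresponding one-way totals divided by N squared, and fit is judged by the likelihood-ratio statistic.

```python
import math

def mutual_independence_g2(table):
    """Likelihood-ratio statistic G^2 and degrees of freedom for the
    mutual-independence model [1],[2],[3] fitted to a three-way table
    of counts indexed as table[i][j][k]."""
    I, J, K = len(table), len(table[0]), len(table[0][0])
    N = sum(table[i][j][k] for i in range(I) for j in range(J) for k in range(K))
    # One-way marginal totals -- the marginals fixed by this model.
    ti = [sum(table[i][j][k] for j in range(J) for k in range(K)) for i in range(I)]
    tj = [sum(table[i][j][k] for i in range(I) for k in range(K)) for j in range(J)]
    tk = [sum(table[i][j][k] for i in range(I) for j in range(J)) for k in range(K)]
    g2 = 0.0
    for i in range(I):
        for j in range(J):
            for k in range(K):
                e = ti[i] * tj[j] * tk[k] / N ** 2   # expected count under independence
                o = table[i][j][k]
                if o > 0:
                    g2 += 2 * o * math.log(o / e)
    df = I * J * K - 1 - (I - 1) - (J - 1) - (K - 1)
    return g2, df
```

For a 2 × 2 × 2 table whose counts are exactly products of marginal effects, G² is zero on 4 degrees of freedom; any genuine association inflates it.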
ANALYSIS OF CATEGORICAL DATA
Display 10.4
Hierarchical Models for a General Three-Dimensional Contingency Table

Model                                                                               Fitted Marginals
1. ln F_ijk = u
2. ln F_ijk = u + u1(i)                                                             [1]
3. ln F_ijk = u + u1(i) + u2(j)                                                     [1], [2]
4. ln F_ijk = u + u1(i) + u2(j) + u3(k)                                             [1], [2], [3]
5. ln F_ijk = u + u1(i) + u2(j) + u3(k) + u12(ij)                                   [12], [3]
6. ln F_ijk = u + u1(i) + u2(j) + u3(k) + u12(ij) + u13(ik)                         [12], [13]
7. ln F_ijk = u + u1(i) + u2(j) + u3(k) + u12(ij) + u13(ik) + u23(jk)               [12], [13], [23]
8. ln F_ijk = u + u1(i) + u2(j) + u3(k) + u12(ij) + u13(ik) + u23(jk) + u123(ijk)   [123]
The residuals for the final model selected for the suicide data are given in Table 10.10. All of the residuals are small, suggesting that the chosen model does give an adequate representation of the observed frequencies. (A far fuller account of log-linear models is given in Agresti, 1996.)
TABLE 10.8
Log-Linear Models for Suicide Data

The goodness of fit of a series of log-linear models is given below (variable 1 is method, variable 2 is age, and variable 3 is sex).

Model              DF    X²     P
[1], [2], [3]      27    790.3  <.001
[12], [3]          17    520.4  <.001
[13], [2]          22    424.6  <.001
[23], [1]          25    658.9  <.001
[12], [13]         12    154.7  <.001
[12], [23]         15    389.0  <.001
[12], [13], [23]   10    14.9   .14
The difference in the likelihood ratio statistic for two possible models can be used to choose between them. For example, the mutual independence model, [1], [2], [3], and the model that allows an association between variables 1 and 2, [12], [3], have X² = 790.3 with 27 degrees of freedom and X² = 520.4 with 17 degrees of freedom, respectively. The hypothesis that the extra parameters in the more complex model are zero, that is, H0: u12(ij) = 0 for all i and j (u12 = 0 for short), is tested by the difference in the two likelihood ratio statistics, with degrees of freedom equal to the difference in the degrees of freedom of the two models. Here this leads to a value of 269.9 with 10 degrees of freedom. The result is highly significant, and the second model provides a significantly improved fit compared with the mutual independence model.
A useful way of judging a series of log-linear models is by means of an analog of the square of the multiple correlation coefficient as used in multiple regression (see Chapter 6). The measure L is defined as

L = [X²(baseline model) − X²(model of interest)] / X²(baseline model).
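The calculation of L and of the likelihood-ratio difference can be sketched in a few lines (Python; values taken from Table 10.8, with mutual independence as the baseline):

```python
def L_statistic(g2_baseline, g2_model):
    """Analog of R^2 for log-linear models: proportional reduction in
    the likelihood-ratio statistic relative to a baseline model."""
    return (g2_baseline - g2_model) / g2_baseline

# Baseline [1],[2],[3] (X^2 = 790.3, 27 df) vs. [12],[3] (X^2 = 520.4, 17 df).
improvement = 790.3 - 520.4        # likelihood-ratio difference on 27 - 17 = 10 df
L = L_statistic(790.3, 520.4)      # about one third of the baseline lack of fit removed
```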
10.4. LOGISTIC REGRESSION FOR A BINARY RESPONSE VARIABLE
In many multidimensional contingency tables (I am tempted to say in almost all), there is one variable that can properly be considered a response, and it is the relationship of this variable to the remainder that is of most interest. Situations in which the response variable has two categories are the most common, and it is these that are considered in this section. When these situations are introduced
TABLE 10.9
Parameter Estimates in the Final Model Selected for the Suicide Data

The final model selected for the suicide data is that of no second-order relationship between the three variables, that is, model 7 in Display 10.4.
The estimated main effect parameters are as follows.

Method     Estimate   Estimate/SE
Solid       1.33       34.45
Gas        -1.27      -11.75
Hanging     1.55       41.90
Gun        -0.64       -8.28
Jump       -0.42       -7.00
Other      -0.55       -7.64

Age
10-40       0.56       16.77
41-70       0.25        7.23
>70        -0.81      -16.23

Sex
Male        0.41       15.33
Female     -0.41      -15.33

The estimated interaction parameters for method and age, and the ratios of these estimates to their standard errors (in parentheses), are as follows.

              Age
Method     10-40         41-70         >70
Solid     -0.0 (-0.5)   -0.1 (-1.0)    0.1 (1.1)
Gas        0.5 (4.9)     0.1 (1.0)    -0.6 (-3.6)
Hanging   -0.6 (-14.2)   0.1 (1.7)     0.5 (9.6)
Gun       -0.0 (-0.4)    0.1 (1.3)    -0.1 (-0.6)
Jump      -0.2 (-2.5)   -0.3 (-3.4)    0.5 (4.8)
Other      0.3 (4.2)     0.0 (0.4)    -0.4 (-3.0)

The estimated interaction parameters for method and sex are as follows.

              Method
Sex      Solid         Gas          Hanging      Gun          Jump        Other
Male    -0.4 (-13.1)   0.4 (5.3)    0.0 (0.3)    0.6 (8.4)   -0.5 (-8.6)   0.1 (2.2)
Female   0.4 (13.1)   -0.4 (-5.3)  -0.0 (-0.3)  -0.6 (-8.4)   0.5 (8.6)   -0.1 (-2.2)

The estimated interaction parameters for age and sex are as follows.

           Age
Sex      10-40         41-70        >70
Male     0.3 (11.5)   -0.1 (-4.5)  -0.2 (-6.8)
Female  -0.3 (-11.5)   0.1 (4.5)    0.2 (6.8)
TABLE 10.10
Residuals for the Final Model Selected for the Suicide Data

                       Method
Sex     Age      Solid   Gas    Hanging   Gun    Jump   Other
Male    10-40    -0.6    1.0    -0.7      0.8    -0.9    0.6
Male    41-70    -0.2    0.5     0.8     -0.8    -0.9    0.4
Male    >70       0.5   -1.5    -1.1      1.1     1.8    0.1
Female  10-40    -0.5   -0.1     0.2     -0.2     0.1   -0.0
Female  41-70    -0.1    0.4     0.4      0.3    -0.3    0.0
Female  >70      -0.3   -0.3     0.2     -0.3     0.4   -0.2
(10.5)
this probability would be the expected response, and the corresponding observed response would be the proportion of individuals in the sample categorized as cases.
TABLE 10.11
GHQ Data

GHQ Score   Sex   No. of Cases   No. of Noncases
0           F     4              80
1           F     4              29
2           F     8              15
3           F     6              3
4           F     4              2
5           F     6              1
6           F     3              1
7           F     2              0
8           F     3              0
9           F     2              0
10          F     1              0
0           M     1              36
1           M     2              25
2           M     2              8
3           M     1              4
4           M     3              1
5           M     3              1
6           M     2              1
7           M     4              2
8           M     3              1
9           M     2              0
10          M     2              0
p = β0 + β1Sex + β2GHQ,   (10.6)

and

observed response = p + error.   (10.7)

To simplify things for the moment, let's ignore gender and fit the model

p = β0 + β1GHQ.   (10.8)
TABLE 10.12
Linear Regression Model for GHQ Data with GHQ Score as the Single
Explanatory Variable

Parameter        Estimate   SE      Estimate/SE
β0 (intercept)   0.136      0.065   2.085
β1 (GHQ)         0.096      0.011   8.698

Observation No.   Predicted Prob.   Observed Prob.
1                 0.136             0.041
2                 0.233             0.100
3                 0.328             0.303
4                 0.425             0.500
5                 0.521             0.700
6                 0.617             0.818
7                 0.713             0.714
8                 0.809             0.750
9                 0.905             0.857
10                1.001             1.000
11                1.097             1.000
Display 10.5
A = ln[P/(l -P)].
A=In~p/~l-P~l=Bo+Blxl+~~z+~~~+Bqxq,
P=
exp(B0 + BA + + &x,)
+exp(h + + +p,xg)
*
~ I X I
There are a number ofsummary statistics that measure the discrepancy between
the
observed proportions ofsuccess (i.e., the one category of the response variable,
say), andthe fitted proportionsfrom thelogistic model for thetrue success
probability p. The most commonis known as the deviance, D,and it is given by
i=l
where yi is the number ofsuccess in the ith category ofthe observations, ni is the
total number of responsesin the ith category, andYi = nisi where fii is the
predicted success probabilityfor this category. (We are assuming here that the raw
Danish
data have been collectedinto categories as in the GHQ data and the
do-it-yourself data in the text. Whenthis is not so and the data consist of the original
zeros and onesfor theresponse, thenthe deviance cumof be used as a measure offit;
see Collett, 1991,for anexplanation of whynot.)
The deviance is distributed as chi-squared and differencesin deviance valuescan be
used to assess competing models;see the examples in the text.
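The two pieces of machinery above, the inverse logit and the deviance for grouped data, can be sketched directly (Python; function names are ours):

```python
import math

def inv_logit(x):
    """P = exp(x) / (1 + exp(x)), the inverse of the logit ln[P/(1-P)]."""
    return math.exp(x) / (1.0 + math.exp(x))

def deviance(y, n, p_hat):
    """Deviance D for grouped binary data: y[i] successes out of n[i]
    trials in category i, with fitted success probabilities p_hat[i].
    Terms with zero counts contribute nothing (the 0*log(0) convention)."""
    d = 0.0
    for yi, ni, pi in zip(y, n, p_hat):
        fi = ni * pi                              # fitted number of successes
        if yi > 0:
            d += 2 * yi * math.log(yi / fi)
        if ni - yi > 0:
            d += 2 * (ni - yi) * math.log((ni - yi) / (ni - fi))
    return d
```

When the fitted probabilities equal the observed proportions exactly, the deviance is zero; any discrepancy makes it positive.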
The estimated regression coefficient for GHQ score in the logistic regression is 0.74. Thus the log(odds) of being a case increases by 0.74 for a unit increase in GHQ score. An approximate 95% confidence interval for the regression coefficient is

0.736 ± 1.96 × 0.095.   (10.9)
TABLE 10.13
Logistic Regression Model for GHQ Data with GHQ Score
as the Single Explanatory Variable

Parameter        Estimate   SE      Estimate/SE
β0 (intercept)   -2.711     0.272   -9.950
β1 (GHQ)         0.736      0.095   7.783

Observation No.   Predicted Prob.   Observed Prob.
1                 0.062             0.041
2                 0.122             0.100
3                 0.224             0.303
4                 0.377             0.500
5                 0.558             0.700
6                 0.725             0.818
7                 0.846             0.714
8                 0.920             0.750
9                 0.960             0.857
10                0.980             1.000
11                0.991             1.000
However, such results, given as they are in terms of log(odds), are not immediately helpful. Things become better if we translate everything back to odds by exponentiating the various terms. So, exp(0.74) = 2.08 represents the increase in the odds of being a case when the GHQ score increases by one. The corresponding confidence interval is [exp(0.56), exp(0.92)] = [1.75, 2.51].
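The back-transformation is a one-liner; a minimal sketch using the estimate and standard error from Table 10.13 (small rounding differences from the values quoted in the text are to be expected):

```python
import math

# Estimate and standard error for the GHQ coefficient (Table 10.13).
beta, se = 0.736, 0.095

lo, hi = beta - 1.96 * se, beta + 1.96 * se   # 95% CI on the log(odds) scale
odds_ratio = math.exp(beta)                   # multiplicative effect on the odds
ci_odds = (math.exp(lo), math.exp(hi))        # CI on the odds scale
```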
Now let us consider a further simple model for the GHQ data, namely a logistic regression for caseness with only gender as an explanatory variable. The results of fitting such a model are shown in Table 10.14. Again, the estimated regression coefficient for gender, -0.04, represents the change in log(odds) (here a decrease) as the explanatory variable increases by one. For the dummy variable coding sex, such a change implies that the observation arises from a man rather than a woman. Transferring back to odds, we have a value of 0.96 and a 95% confidence interval of (0.55, 1.70). Because this interval contains the value one, which would indicate the independence of gender and caseness (see Chapter 9), it appears that gender does not predict caseness for these data.
If we now look at the 2 x 2 table of caseness and gender for the GHQ data, that is,

            Sex
            Female   Male
Case        43       25
Not case    131      79
FIG. 10.1. Fitted linear and logistic regression models for the probability of being a case as a function of the GHQ score for the data in Table 10.11.
TABLE 10.14
Logistic Regression Model for GHQ Data with Gender as
the Single Explanatory Variable

Parameter        Estimate   SE      Estimate/SE
β0 (intercept)   -1.114     0.176   -6.338
β1 (sex)         -0.037     0.289   -0.127
we see that the odds ratio is (131 x 25)/(43 x 79) = 0.96, the same result as given by the logistic regression. For a single binary explanatory variable, the estimated regression coefficient is simply the log of the odds ratio from the 2 x 2 table. (Readers are encouraged to fit a model that also includes other variables as explanatory variables. If they do, they will notice that the estimated regression coefficient for sex is no longer simply the log of the odds ratio from the 2 x 2 table above, because of the conditional nature of these regression coefficients when other variables are included in the fitted model.)
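The equivalence is easy to check numerically; a small sketch using the counts from the 2 x 2 table above (the helper function is ours):

```python
import math

def odds_ratio_2x2(a, b, c, d):
    """Odds ratio (a*d)/(b*c) for a 2x2 table laid out as
        [[a, b],
         [c, d]]."""
    return (a * d) / (b * c)

# Caseness x sex table for the GHQ data: rows case/not case, columns female/male.
or_sex = odds_ratio_2x2(25, 43, 79, 131)   # same product as (131*25)/(43*79)
coef = math.log(or_sex)                    # logistic regression coefficient for sex
```

The log of this odds ratio reproduces the coefficient -0.037 reported in Table 10.14.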
As a further, more complex illustration of the use of logistic regression, the method is applied to the data shown in Table 10.2. The data come from a sample of employed men aged between 18 and 67 years who were asked whether, in the preceding year, they had carried out work in their home that they would have previously employed a craftsman to do. The response variable here is the answer (yes/no) to that question. There are four categorical explanatory variables: (1) age: under 30, 31-45, and over 45; (2) accommodation type: apartment or house; (3) tenure: rent or own; and (4) work of respondent: skilled, unskilled, or office.
To begin, let us consider only the single explanatory variable, work of respondent, which is a categorical variable with three categories. In Chapter 6, when the use of multiple regression with categorical explanatory variables was discussed, it was mentioned that although such variables could be used, some care was needed in how to deal with them when they have more than two categories, as here. Simply coding the categories as, say, 1, 2, and 3 for the work variable would really not do, because it would imply equal intervals along the scale. The proper way to handle a nominal variable with k > 2 categories is to recode it as a number of dummy variables. Thus we will recode work in terms of two dummy variables, work1 and work2, defined as follows.
Work        Work1   Work2
Skilled     0       0
Unskilled   1       0
Office      0       1
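The same recoding can be written as a small helper (ours, not the text's), with the first level acting as the reference category:

```python
def dummy_code(category, levels):
    """Indicator (dummy) coding of a k-level nominal variable as k-1
    binary variables; the first level is the reference category."""
    return [1 if category == lev else 0 for lev in levels[1:]]

levels = ["skilled", "unskilled", "office"]
work1, work2 = dummy_code("unskilled", levels)   # -> 1, 0
```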
The results of the logistic regression model of the form logit(p) = β0 + β1work1 + β2work2 are given in Table 10.15. A cross-classification of work
TABLE 10.15
Logistic Regression for Danish Do-It-Yourself Data with
Work as the Single Explanatory Variable

Parameter        Estimate   SE      Estimate/SE
β0 (intercept)   0.706      0.112   6.300
β1 (work1)       -0.835     0.147   -5.695
β2 (work2)       -0.237     0.134   -1.768
TABLE 10.16
Cross-Classification of Work Against the Response

                       Work
Response   Skilled   Unskilled   Office   Total
No         119       239         301      659
Yes        241       210         481      932
Total      360       449         782      1591
against the response is shown in Table 10.16. From this table we can extract the following pair of 2 x 2 tables:
Response   Skilled   Unskilled
No         119       239
Yes        241       210

odds ratio = (119 x 210)/(241 x 239) = 0.434;   ln(0.434) = -0.835.

Response   Skilled   Office
No         119       301
Yes        241       481

odds ratio = (119 x 481)/(241 x 301) = 0.789;   ln(0.789) = -0.237.
We see that the coding used produces estimates for the regression coefficients work1 and work2 that are equal to the log(odds ratios) from comparing skilled with unskilled workers and skilled with office workers. (Readers are encouraged to repeat this exercise, using the age variable also represented as two dummy variables.)
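The correspondence can be verified directly from the cell counts of Table 10.16 (a sketch in Python; the dictionary layout is ours):

```python
import math

# Work-by-response counts from Table 10.16, as (no, yes) pairs.
counts = {"skilled": (119, 241), "unskilled": (239, 210), "office": (301, 481)}

def log_odds(no, yes):
    """Log odds of a yes response within one work category."""
    return math.log(yes / no)

# With skilled as the reference category, the intercept is the skilled
# log odds and the dummy-variable coefficients are differences in log odds.
b0 = log_odds(*counts["skilled"])
b_work1 = log_odds(*counts["unskilled"]) - b0
b_work2 = log_odds(*counts["office"]) - b0
```

The three values reproduce the estimates 0.706, -0.835, and -0.237 of Table 10.15.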
The results from fitting the logistic regression model with all four explanatory variables to the Danish do-it-yourself data are shown in Table 10.17. Work has been recoded as work1 and work2 as shown above, and age has been similarly coded in terms of two dummy variables, age1 and age2. So the model fitted is logit(p) = β0 + β1work1 + β2work2 + β3age1 + β4age2 + β5tenure + β6type. Note that the coefficients for work1 and work2 are similar, but not identical, to those in Table 10.15. They can, however, still be interpreted as log(odds ratios), but only after the effects of the other three explanatory variables are taken into account.
The results in Table 10.17 appear to imply that work, age, and tenure are the three most important explanatory variables for predicting the probability of answering yes to the question posed in the survey. However, the same caveats apply to logistic regression as were issued in the case of multiple linear regression in
TABLE 10.17
Results from Fitting the Logistic Model to Danish Do-It-Yourself
Data by Using All Four Observed Explanatory Variables

Parameter        Estimate   SE      Estimate/SE
β0 (intercept)   0.306      0.154   1.984
β1 (work1)       -0.763     0.152   -5.019
β2 (work2)       -0.306     0.141   -2.168
β3 (age1)        -0.113     0.137   -0.825
β4 (age2)        -0.438     0.141   -3.106
β5 (tenure)      1.017      0.138   7.368
β6 (type)        -0.003     0.147   -0.017

logit(p̂) = 0.30 + 1.01tenure - 0.76work1 - 0.31work2 - 0.11age1 - 0.43age2.   (10.10)
TABLE 10.18
Comparing Logistic Models for the Danish Do-It-Yourself Data
We will start with a model including only tenure as an explanatory variable and then add explanatory variables in an order suggested by the t statistics from Table 10.17.
Differences in the deviance values for the various models can be used to assess the effect of
adding the new variable to the existing model. These differences can be tested as chi-squares
with degrees of freedom equal to the difference in the degrees of freedom of the two models that
are being compared.
We can arrange the calculations in what is sometimes known as an analysis of deviance table.

Model                         Deviance   DF   Deviance Diff.   DF Diff.   P
Tenure                        72.59      34
Tenure + Work                 40.61      32   31.98            2          <.0001
Tenure + Work + Age           29.67      30   10.94            2          .004
Tenure + Work + Age + Type    29.67      29   0.00             1          1.00

In fitting these models, work is entered as the two dummy variables work1 and work2, and similarly age is entered as age1 and age2.
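For the two-degree-of-freedom comparisons in Table 10.18, the chi-squared tail probability has a closed form (a chi-squared variable with 2 df is exponential with mean 2), so the p values can be checked by hand; a sketch using the deviance differences quoted in the table:

```python
import math

def chi2_sf_2df(x):
    """Survival function P(X > x) for a chi-squared variable with
    2 degrees of freedom (exponential with mean 2)."""
    return math.exp(-x / 2.0)

p_work = chi2_sf_2df(31.98)   # adding Work to the tenure-only model
p_age = chi2_sf_2df(10.94)    # adding Age to Tenure + Work
```

For other degrees of freedom a chi-squared routine from a statistics library would be needed; this shortcut applies to 2 df only.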
and Figure 10.2(b) shows a normal probability plot of the Pearson residuals. These plots give no obvious cause for concern, so we can now try to interpret our fitted model. It appears that, conditional on work and age, the probability of a positive response is far greater for respondents who own their home as opposed to those who rent. And conditional on age and tenure, unskilled and office workers tend to have a lower probability of responding yes than skilled workers. Finally, it appears that the two younger age groups do not differ in their probability of giving a positive response, and that this probability is greater than that in the oldest age group.
10.5. THE GENERALIZED LINEAR MODEL
In Chapter 6 we showed that the models used in the analysis of variance and those used in multiple linear regression are equivalent versions of a linear model in which an observed response is expressed as a linear function of explanatory variables plus some random disturbance term, often referred to as the "error," even though in many cases it may not have anything to do with measurement error. Therefore, the general form of such models, as outlined in several previous chapters, is

y = β0 + β1x1 + β2x2 + … + βqxq + error.   (10.12)
TABLE 10.19
Parameter Estimates and Standard Errors for the Final Model
Selected for Danish Do-It-Yourself Data

Parameter   Estimate   SE      Estimate/SE
Intercept   0.304      0.135   2.252
Tenure      1.014      0.114   8.868
Work1       -0.763     0.152   -5.019
Work2       -0.305     0.141   -2.169
Age1        -0.113     0.137   -0.826
Age2        -0.436     0.140   -3.116
Observed
Cell
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
0.54
0.54
0.58
0.55
0.40
0.55
0.47
0.58
0.71
0.55
0.25
0.47
0.79
0.77
0.71
0.79
0.77
0.71
0.39
0.36
0.29
0.39
0.36
0.29
0.83
0.75
0.50
0.82
0.73
0.81
0.33
0.37
0.44
0.40
0.19
0.30
0.40
0.00
1.00
0.72
0.63
0.49
0.55
0.55
0.34
0.47
0.45
0.48
0.64
0.61
0.53
0.64
0.61
0.53
0.50
0.47
0.39
0.50
0.47
0.39
33
28
15
62
14
8
6
4
2
68
77
43
51
27
34
73
16
23
5
2
3
32
83
100
55
42
61
47
29
23
(Continued)
TABLE 10.19
(Continued)
Cell
0.67
0.71
0.33
0.74
0.72
0.63
31
32
33
34
35
36
0.73
0.71
0.64
0.73
0.71
0.71
12
7
3
73
267
163
Display 10.6
Diagnostics for Logistic Regression

• The first diagnostics for a logistic regression are the Pearson residuals, defined as

  X_i = (y_i − n_i p̂_i) / sqrt[n_i p̂_i (1 − p̂_i)].

• The deviance residuals are defined as

  d_i = sgn(y_i − ŷ_i) sqrt{2[ y_i ln(y_i/ŷ_i) + (n_i − y_i) ln((n_i − y_i)/(n_i − ŷ_i)) ]},

  where sgn(y_i − ŷ_i) is the function that makes d_i positive when y_i ≥ ŷ_i and negative when y_i < ŷ_i. Plotting the deviance residuals against the fitted values can be useful for indicating problems with models. Like the Pearson residuals, the deviance residuals are approximately normally distributed.
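The Pearson residuals for grouped binary data are a one-line computation; a minimal sketch (function name ours):

```python
import math

def pearson_residuals(y, n, p_hat):
    """Pearson residuals for grouped binary data:
    (y_i - n_i*p_i) / sqrt(n_i * p_i * (1 - p_i))."""
    return [(yi - ni * pi) / math.sqrt(ni * pi * (1 - pi))
            for yi, ni, pi in zip(y, n, p_hat)]
```

A residual of zero means the model reproduces that category exactly; values much beyond about two in absolute size flag poorly fitted categories.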
Specification of the model is completed by assuming some specific distribution for the error terms. In the case of ANOVA and multiple regression models, for example, the assumed distribution is normal with mean zero and a constant variance σ².
Now consider the log-linear and the logistic regression models introduced in this chapter. How might these models be put into a form similar to the models used in the analysis of variance and multiple regression? The answer is by a relatively simple
10.6. SUMMARY
COMPUTER HINTS
SPSS
The types of analyses described in this chapter can be accessed from the Statistics menu; for example, to undertake a logistic regression by using forward selection to choose a subset of explanatory variables, we can use the following.

1. Click on Statistics, click on Regression, and then click on Logistic.
2. Move the relevant binary dependent variable into the Dependent variable box.
5. Click on OK.
S-PLUS
In S-PLUS, log-linear analysis and logistic regression can be accessed by means of the Statistics menu, as follows.
EXERCISES
10.1. The data in Table 10.20 were obtained from a study of the relationship between car size and car accident injuries. Accidents were classified according to their type, severity, and whether or not the driver was ejected. Using severity as the response variable, derive and interpret a suitable logistic model for these data.
10.2. The data shown in Table 10.21 arise from a study in which a sample of 1008 people were asked to compare two detergents, brand M and brand X. In
TABLE 10.20
Car Accident Data

                                            Number Hurt
Car Weight   Driver Ejected   Accident Type   Severely   Not Severely
Small        No               Collision       150        350
Small        No               Rollover        112        60
Small        Yes              Collision       23         26
Small        Yes              Rollover        80         19
Standard     No               Collision       1022       1878
Standard     No               Rollover        404        148
Standard     Yes              Collision       161        111
Standard     Yes              Rollover        265        22
TABLE 10.21
Comparisons of Detergents Data

                                  Previous User of M       Not Previous User of M
Water Softness   Brand Preferred   High Temp.   Low Temp.   High Temp.   Low Temp.
Soft             X                 19           57          29           63
Soft             M                 29           49          27           53
Medium           X                 23           47          33           66
Medium           M                 47           55          23           50
Hard             X                 24           37          42           68
Hard             M                 43           52          30           42
TABLE 10.22
Data for Class of Statistics Students
Student
I
2
3
4
5
6
7
9
10
11
12
13
14
15
16
17
18
19
20
yi
Test Score
Grade in Course
0
0
1
0
1
525
533
545
582
581
576
572
609
559
543
576
525
574
582
574
471
595
557
557
584
1
1
1
1
1
1
1
1
1
1
0
0
1
C
B
C
D
B
A
C
D
B
D
C
B
B
C
A
A
A
TABLE 10.23
Menstruation of Girls in Warsaw

Age (Years)   No. Menstruating   Total No. of Girls
11.08         2                  120
11.33         5                  88
11.58         10                 105
11.83         17                 111
12.08         16                 100
12.33         29                 93
12.58         39                 100
12.83         51                 108
13.08         47                 99
13.33         67                 106
13.58         81                 117
14.08         79                 98
14.33         90                 97
14.58         93                 100
15.08         117                122
15.33         107                111
15.58         92                 94
17.58         1049               1049
10.6. The data in Table 10.23 relate to a sample of girls in Warsaw, the response variable indicating whether or not the girl has begun menstruation, and the explanatory variable age in years (measured to the month). Plot the estimated probability of menstruation as a function of age, and show the linear and logistic regression fits to the data on the plot.
10.7. Fit a logistic regression model to the GHQ data that includes main effects for both gender and GHQ score and an interaction between the two variables.
10.8. Examine both the death penalty data (Table 10.3) and the infant survival data (Table 10.4) by fitting suitable log-linear models, and suggest what it is that leads to the spurious results when the data are aggregated over a particular variable.
Appendix A
Statistical Glossary
2. Freund, J. E., and Williams, F. J. (1966). Dictionary/outline of basic statistics. New York: Dover.
3. Everitt, B. S. (1998). The Cambridge dictionary of statistics. Cambridge: Cambridge University Press.
4. Everitt, B. S., and Wykes, T. (1999). A dictionary of statistics for psychologists. London: Arnold.
A
Acceptance region: The set of values of a test statistic for which the null hypothesis is accepted. Suppose, for example, a z test is being used to test that the mean of a population is 10 against the alternative that it is not 10. If the significance level chosen is .05, then the acceptance region consists of values of z between -1.96 and 1.96.

Additive effect: A term used when the effect of administering two treatments together is the sum of their separate effects. See also additive model.

Additive model: A model in which the explanatory variables have an additive effect on the response variable. So, for example, if variable A has an effect of size a on some response measure and variable B one of size b on the same response, then in an assumed additive model for A and B, their combined effect would be a + b.

Alpha (α): The probability of a Type I error. See also significance level.

Alternative hypothesis: The hypothesis against which the null hypothesis is tested.
B
Balanced design: A term applied to any experimental design in which the same number of observations is taken for each combination of the experimental factors.
STATISTICAL GLOSSARY
329
FIG. A.1. [figure: distribution of reaction times]
APPENDIX A
330
Binary variable: Observations that occur in one of two possible states, which are often labeled 0 and 1. Such data are frequently encountered in psychological investigations; commonly occurring examples include improved/not improved and depressed/not depressed.

Binomial distribution: The probability distribution of the number of successes, x, in n independent trials, each with the same probability of success, p:

P(x) = [n! / (x!(n − x)!)] p^x (1 − p)^(n−x).
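The binomial probability above is a direct translation into code (a minimal sketch; `math.comb` supplies the binomial coefficient):

```python
import math

def binomial_prob(x, n, p):
    """P(X = x) = C(n, x) * p**x * (1-p)**(n-x) for a binomial variable."""
    return math.comb(n, x) * p ** x * (1 - p) ** (n - x)
```

Summing over x = 0, …, n returns 1, a quick sanity check on any implementation.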
STATISTICAL GLOSSARY
33 1
where ȳ1 is the sample mean of the y variable for those individuals for whom x = 1, ȳ0 is the sample mean of the y variable for those individuals having x = 0, s_y is the standard deviation of the y values, p is the proportion of individuals with x = 1, and q = 1 − p is the proportion of individuals with x = 0. Finally, u is the ordinate (height) of a normal distribution with mean zero and standard deviation one, at the point of division between the p and q proportions of the curve. See also point-biserial correlation.

Bivariate data: Data in which the subjects each have measurements on two variables.

Box's test: A test for assessing the equality of the variances in a number of populations that is less sensitive to departures from normality than Bartlett's test. See also Hartley's test.
C
Ceiling effect: A term used to describe what happens when many subjects in a study have scores on a variable that are at or near the possible upper limit (ceiling). Such an effect may cause problems for some types of analysis because it reduces the possible amount of variation in the variable. The converse, or floor effect, causes similar problems.
Central tendency: A property of the distribution of a variable usually measured by statistics such as the mean, median, and mode.

Change scores: Scores obtained by subtracting a posttreatment score on some variable from the corresponding pretreatment, baseline value.

Chi-squared distribution: The probability distribution of the sum of squares of a number of independent normal variables with means zero and standard deviations one. This distribution arises in many areas of statistics, for example, in assessing the goodness of fit of models, particularly those fitted to contingency tables.

Coefficient of determination: The square of the correlation coefficient between two variables x and y. Gives the proportion of the variation in one variable that is accounted for by the other. For example, a correlation of 0.8 implies that 64% of the variance of y is accounted for by x.

Coefficient of variation: A measure of spread for a set of data, defined as the standard deviation divided by the mean.
332
APPENDIX A
Confidence interval: A range of values, calculated from the sample observations, that are believed, with a particular probability, to contain the true parameter value. A 95% confidence interval, for example, implies that if the estimation process were repeated again and again, then 95% of the calculated intervals would be expected to contain the true parameter value. Note that the stated probability level refers to properties of the interval and not to the parameter itself, which is not considered a random variable.

Conservative and nonconservative tests: Terms usually encountered in discussions of multiple comparison tests. Nonconservative tests provide poor control over the per-experiment error rate. Conservative tests, in contrast, may limit the per-comparison error rate to unnecessarily low values, and tend to have low power unless the sample size is large.
STATISTICAL GLOSSARY
333
where (x1, y1), (x2, y2), …, (xn, yn) are the n sample values of the two variables of interest. The coefficient takes values between -1 and 1, with the sign indicating the direction of the relationship and the numerical magnitude its strength. Values of -1 or 1 indicate that the sample values fall on a straight line. A value of zero indicates the lack of any linear relationship between the two variables.

Correlation matrix: A square, symmetric matrix with rows and columns corresponding to variables, in which the off-diagonal elements are the correlation coefficients between pairs of variables, and elements on the main diagonal are unity.

Covariance: For a sample of n pairs of observations, (x1, y1), (x2, y2), …, (xn, yn), the statistic given by

c_xy = (1/n) Σ_{i=1}^{n} (x_i − x̄)(y_i − ȳ).
APPENDIX A
334
where σ² is the variance of the total scores and σ_i² is the variance of the set of 0,1 scores representing correct and incorrect answers on item i.

Cross-validation: The division of data into two approximately equal-sized subsets, one of which is used to estimate the parameters in some model of interest and the other to assess whether the model with these parameter values fits adequately.

Cumulative frequency distribution: A listing of the sample values of a variable, together with the number of observations less than or equal to each value.
DF (df): Abbreviation for degrees of freedom.
Diagonal matrix: A square matrix whose off-diagonal elements are all zero.
For example,
Dichotomous variable: Synonym for binary variable.
336
APPENDIX
Data = Smooth + Rough,

where Smooth is the underlying regularity or pattern in the data. The objective of the exploratory approach is to separate the Smooth from the Rough with minimal use of formal mathematics or statistical methods. See also initial data analysis.
Eyeball test: Informal assessment of data simply by inspection and mental calculation, allied with experience of the particular area from which the data arise.

F

Factor: A term used in a variety of ways in statistics, but most commonly to refer to a categorical variable, with a small number of levels, under investigation in an experiment as a possible source of variation. This is essentially simply a categorical explanatory variable.

Familywise error rate: The probability of making any error in a given family of inferences. See also per-comparison error rate and per-experiment error rate.
F distribution: The probability distribution of the ratio of two independent random variables, each having a chi-squared distribution, divided by their respective degrees of freedom.

Fisher's z transformation: A transformation, z = ½ ln[(1 + r)/(1 − r)], of a sample correlation coefficient r. The statistic z has mean ½ ln[(1 + ρ)/(1 − ρ)], where ρ is the population correlation value, and variance 1/(n − 3), where n is the sample size. The transformation may be used to test hypotheses and to construct confidence intervals for ρ.
Fishing expedition: Synonym for data dredging.
Fitted value: Usually used to refer to the value of the response variable as predicted by some estimated model.
Follow-up: The process of locating research subjects or patients to determine
whether or not some outcome of interest has occurred.
IQ Score Class Limits: 75-79, 80-84, 85-89, 90-94, 95-99, 100-104, 105-109, 110-114, ≥115
Observed Frequency: 5, 9, 10, 1, 4, …
F test: A test for the equality of the variances of two populations having normal distributions.

G

Gambler's fallacy: The belief that if an event has not happened for a long time, it is bound to occur soon.

Goodness-of-fit statistics: Measures of agreement between a set of sample values and the corresponding values predicted from some model of interest.

Grand mean: Mean of all the values in a grouped data set, irrespective of group.
Graphical methods: A generic term for those techniques in which the results are given in the form of a graph, diagram, or some other form of visual display.

H

Harmonic mean: The mean H of a set of observations x1, …, xn defined through the reciprocals of the observations:

1/H = (1/n) Σ_{i=1}^{n} 1/x_i.

Hartley's test: A simple test of the equality of variances of a number of populations. The test statistic is the ratio of the largest to the smallest sample variances.

Hawthorne effect: A term used for the effect that might be produced in an experiment simply from the awareness by the subjects that they are participating in some form of scientific investigation. The name comes from a study of industrial efficiency at the Hawthorne Plant in Chicago in the 1920s.
Hello-goodbye effect: A phenomenon originally described in psychotherapy research, but one that may arise whenever a subject is assessed on two occasions with some intervention between the visits. Before an intervention, a person may present himself or herself in as bad a light as possible, thereby hoping to qualify for treatment and impressing staff with the seriousness of his or her problems. At the end of the study the person may want to please the staff with his or her improvement, and so may minimize any problems. The result is to make it appear that there has been some improvement when none has occurred, or to magnify the effects that did occur.

Histogram: A graphical representation of a set of observations in which class frequencies are represented by the areas of rectangles centered on the class interval. If the latter are all equal, the heights of the rectangles are also proportional to the observed frequencies.
Hypothesis testing: A general term for the procedure of assessing whether sample data are consistent or otherwise with statements made about the population. See also null hypothesis, alternative hypothesis, composite hypothesis, significance test, significance level, Type I error, and Type II error.

I

IDA: Abbreviation for initial data analysis.

calculating simple summary statistics, and constructing appropriate graphs.

Interaction: A term applied when two (or more) explanatory variables do not act independently on a response variable. See also additive effect.

Interval estimation:

Interval variable:

Interviewer bias: The bias that may occur in surveys of human populations as a direct result of the action of the interviewer. The bias can arise for a variety of reasons, including failure to contact the right persons and systematic errors in recording the answers received from the respondent.
J
J-shaped distribution: An extremely asymmetrical distribution with its maximum frequency in the initial class and a declining frequency elsewhere. An example is shown in Figure A.3.
K
Kurtosis: The extent to which the peak of a unimodal frequency distribution departs from the shape of a normal distribution, by either being more pointed (leptokurtic) or flatter (platykurtic). It is usually measured for a probability distribution as

μ4/μ2² − 3,

where μ4 is the fourth central moment of the distribution, and μ2 is its variance. (Corresponding functions of the sample moments are used for frequency distributions.) For a normal distribution, this index takes the value zero (other distributions with zero kurtosis are called mesokurtic); for one that is leptokurtic it is positive, and for a platykurtic curve it is negative.
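The sample analogue of the index above is easy to compute directly; the data values below are hypothetical:

```python
# Kurtosis as m4 / m2^2 - 3, using sample central moments
# (zero for a normal distribution; negative => platykurtic).
def kurtosis(xs):
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n  # second central moment (variance)
    m4 = sum((x - mean) ** 4 for x in xs) / n  # fourth central moment
    return m4 / m2 ** 2 - 3

print(kurtosis([1, 2, 3, 4, 5]))  # -1.3, flatter than normal (platykurtic)
```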
L
Large sample method: Any statistical method based on an approximation to a normal distribution or other probability distribution that becomes more accurate as sample size increases.
FIG. A.3. An example of a J-shaped distribution (horizontal axis: reaction time).
Least-squares estimation: A method used for estimating parameters, particularly in regression analysis, by minimizing the difference between the observed response and the value predicted by the model. For example, if the expected value of a response variable y is of the form E(y) = α + βx, then estimates of the parameters α and β may be obtained from n pairs of sample values (x1, y1), (x2, y2), ..., (xn, yn) by minimizing S given by

S = Σ_{i=1}^n (y_i − α − βx_i)²,

to give

β̂ = Σ_{i=1}^n (x_i − x̄)(y_i − ȳ) / Σ_{i=1}^n (x_i − x̄)²,   α̂ = ȳ − β̂x̄.
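A minimal sketch of the least-squares estimates for simple linear regression, using hypothetical (x, y) pairs that lie exactly on the line y = 1 + 2x:

```python
# Least-squares estimates for E(y) = alpha + beta * x:
#   beta-hat  = sum (x_i - xbar)(y_i - ybar) / sum (x_i - xbar)^2
#   alpha-hat = ybar - beta-hat * xbar
def least_squares(xs, ys):
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    beta = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))
    alpha = ybar - beta * xbar
    return alpha, beta

alpha, beta = least_squares([1, 2, 3, 4], [3, 5, 7, 9])
print(alpha, beta)  # recovers intercept 1 and slope 2 exactly
```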
Lower triangular matrix: A matrix in which all the elements above the main diagonal are zero; the first two rows of a 4 × 4 example are

L = ( 1 0 0 0
      2 3 0 0
      … )
M
Matched pairs t test: A Student's t test for the equality of the means of two populations, when the observations arise as paired samples. The test is based on the differences between the observations of the matched pairs. The test statistic is given by

t = d̄ / (s_d/√n),

where n is the sample size, d̄ is the mean of the differences, and s_d is their standard deviation. If the null hypothesis of the equality of the population means is true, then t has a Student's t distribution with n − 1 degrees of freedom.
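A sketch of the matched pairs t statistic above, with hypothetical before/after scores for four subjects:

```python
# Matched pairs t statistic: t = dbar / (s_d / sqrt(n)),
# based on the within-pair differences (scores are hypothetical).
import math

def paired_t(before, after):
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    dbar = sum(diffs) / n
    sd = math.sqrt(sum((d - dbar) ** 2 for d in diffs) / (n - 1))
    return dbar / (sd / math.sqrt(n))  # t with n - 1 degrees of freedom

t = paired_t([10, 12, 9, 11], [13, 14, 10, 15])
print(t)
```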
Matching: The process of making a study group and a comparison group comparable with respect to extraneous factors. It is often used in retrospective studies in the selection of cases and controls to control variation in a response variable that is due to sources other than those immediately under investigation. Several kinds of matching can be identified, the most common of which is when each case is individually matched with a control subject on the matching variables, such as age, sex, occupation, and so on. See also paired samples.
Mean vector: A vector containing the mean values of each variable in a set of multivariate data.
Measurement error: Errors in reading, calculating, or recording a numerical value. This is the difference between observed values of a variable recorded under similar conditions and some underlying true value.
Measures of association: Numerical indices quantifying the strength of the statistical dependence of two or more qualitative variables.
Most powerful test: A test of a null hypothesis that has greater power than any other test for a given alternative hypothesis.
Multilevel models: Models for data that are organized hierarchically, for example, children within families, that allow for the possibility that measurements made on children from the same family are likely to be correlated.
Multinomial distribution: A generalization of the binomial distribution to situations in which r outcomes can occur on each of n trials, where r > 2. Specifically, the distribution is given by

P(n₁, n₂, ..., n_r) = n!/(n₁! n₂! ⋯ n_r!) p₁^{n₁} p₂^{n₂} ⋯ p_r^{n_r},

where n_i is the number of trials with outcome i, and p_i is the probability of outcome i on a single trial.
Multiple comparison tests: Procedures for detailed examination of the differences between a set of means, usually after a general hypothesis that they are all equal has been rejected. No single technique is best in all situations, and a major distinction between techniques is how they control the possible inflation of the Type I error.
Multivariate analysis: A generic term for the many methods of analysis important in investigating multivariate data.
Multivariate analysis of variance: A procedure for testing the equality of the mean vectors of more than two populations. The technique is directly analogous to the analysis of variance of univariate data, except that the groups are compared on q response variables simultaneously. In the univariate case, F tests are used to assess the hypotheses of interest. In the multivariate case, no single test statistic can be constructed that is optimal in all situations. The most widely used of the available test statistics is Wilks' lambda, which is based on three matrices, W (the within groups matrix of sums of squares and cross products), T (the total matrix of sums of squares and cross products), and B (the between groups matrix of sums of squares and cross products), defined as follows:

W = Σ_{i=1}^g Σ_{j=1}^{n_i} (x_ij − x̄_i)(x_ij − x̄_i)′,
B = Σ_{i=1}^g n_i (x̄_i − x̄)(x̄_i − x̄)′,

where x_ij, i = 1, ..., g, j = 1, ..., n_i, represent the jth multivariate observation in the ith group, g is the number of groups, and n_i is the number of observations in the ith group. The mean vector of the ith group is represented by x̄_i and the mean vector of all the observations by x̄. These matrices satisfy the equation

T = W + B.

Wilks' lambda is given by the ratio of the determinants of W and T, that is,

Λ = |W| / |T|.
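A numerical sketch of the W, B, T decomposition above, using two small hypothetical groups of bivariate observations:

```python
# Wilks' lambda = |W| / |T|, where T = W + B
# (group data are hypothetical, three bivariate observations per group).
import numpy as np

groups = [np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 5.0]]),
          np.array([[4.0, 6.0], [5.0, 7.0], [6.0, 9.0]])]

stacked = np.vstack(groups)
grand_mean = stacked.mean(axis=0)

# within-groups sums of squares and cross products
W = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)
# between-groups sums of squares and cross products
B = sum(len(g) * np.outer(g.mean(axis=0) - grand_mean,
                          g.mean(axis=0) - grand_mean) for g in groups)
# total sums of squares and cross products
T = (stacked - grand_mean).T @ (stacked - grand_mean)

assert np.allclose(T, W + B)                 # the identity T = W + B
wilks = np.linalg.det(W) / np.linalg.det(T)  # small values => groups differ
print(wilks)
```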
Multivariate normal distribution: The probability distribution of a set of variables x′ = [x₁, x₂, ..., x_q] given by

f(x) = (2π)^{−q/2} |Σ|^{−1/2} exp{−½(x − μ)′Σ⁻¹(x − μ)},

where μ is the mean vector of the variables and Σ is their variance-covariance matrix. This distribution is assumed by multivariate analysis procedures such as multivariate analysis of variance.
N
Newman-Keuls test: A multiple comparison test used to investigate in more detail the differences between a set of means, as indicated by a significant F test in an analysis of variance.
Nominal significance level: The significance level of a test when its assumptions are valid.
Nonorthogonal designs: Analysis of variance designs with two or more factors in which the number of observations in each cell is not equal.
Normal distribution: A probability distribution of a random variable, x, that is assumed by many statistical methods. It is specifically given by

f(x) = (1/(σ√(2π))) exp{−(x − μ)²/(2σ²)},

where μ and σ² are, respectively, the mean and variance of x. This distribution is bell shaped.
Null distribution: The probability distribution of a test statistic when the null
hypothesis is true.
O
One-sided test: A significance test for which the alternative hypothesis is directional; for example, that one population mean is greater than another. The choice between a one-sided and a two-sided test must be made before any test statistic is calculated.
Orthogonal: A term that occurs in several areas of statistics with different meanings in each case. It is most commonly encountered in relation to two variables or two linear functions of a set of variables to indicate statistical independence. It literally means at right angles.
Orthogonal contrasts: Sets of linear functions of either parameters or statistics in which the defining coefficients satisfy a particular relationship. Specifically, if c1 and c2 are two contrasts of a set of m parameters β₁, ..., β_m such that

c₁ = a₁₁β₁ + a₁₂β₂ + ⋯ + a₁ₘβₘ,
c₂ = a₂₁β₁ + a₂₂β₂ + ⋯ + a₂ₘβₘ,

they are orthogonal if Σ_{i=1}^m a₁ᵢa₂ᵢ = 0. If, in addition, Σ_{i=1}^m a₁ᵢ² = 1 and Σ_{i=1}^m a₂ᵢ² = 1, then the contrasts are said to be orthonormal.
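A quick check of the orthogonality condition above, for two hypothetical contrasts of m = 3 parameters:

```python
# Two contrasts are orthogonal when the sum of products of their
# defining coefficients is zero (coefficient vectors are hypothetical).
c1 = [1.0, -1.0, 0.0]    # compares parameter 1 with parameter 2
c2 = [0.5, 0.5, -1.0]    # compares the average of 1 and 2 with parameter 3

dot = sum(a * b for a, b in zip(c1, c2))
print(dot)  # 0.0 => c1 and c2 are orthogonal
```

Dividing each vector by the square root of its sum of squared coefficients would make the pair orthonormal as well.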
Orthogonal matrix: A square matrix that is such that multiplying the matrix by its transpose results in an identity matrix.
Outlier: An observation that appears to deviate markedly from the other members of the sample in which it occurs. In the set of systolic blood pressures (125, 128, 130, 131, 198), for example, 198 might be considered an outlier. Such extreme observations may be reflecting some abnormality in the measured characteristic of a patient, or they may result from an error in measurement or recording.
P
Paired samples: Two samples of observations with the characteristic feature that each observation in one sample has one and only one matching observation in the other sample. There are several ways in which such samples can arise.
Partial correlation: The correlation between a pair of variables after adjusting for the effect of a third. It can be calculated from the simple correlation coefficients of each pair of variables as

r₁₂.₃ = (r₁₂ − r₁₃r₂₃) / √((1 − r₁₃²)(1 − r₂₃²)).
Point-biserial correlation: A special case of Pearson's product moment correlation coefficient used when one variable is continuous (y) and the other is a binary variable (x) representing a natural dichotomy.

Poisson distribution: The probability distribution of the number of occurrences, x, of some random event in an interval of time or space, given by

P(x) = e^{−λ}λ^x / x!,  x = 0, 1, 2, ...,

where λ is the mean of the distribution.
Population: In statistics this term is used for any finite or infinite collection of units, which are often people but may be, for example, institutions or events. See also sample.
Power: The probability of rejecting the null hypothesis when it is false. Power gives a method of discriminating between competing tests of the same hypothesis; the test with the higher power is preferred. It is also the basis of procedures for estimating the sample size needed to detect an effect of a particular magnitude.
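As a sketch of the sample-size idea above, the power of a one-sided z test of H0: μ = 0 against a specific alternative μ = δ can be computed directly when σ is known (all numbers below are hypothetical):

```python
# Power of a one-sided z test with known sigma: under the alternative,
# the standardized mean is shifted by delta / (sigma / sqrt(n)).
import math

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def z_test_power(delta, sigma, n):
    crit = 1.6448536269514722  # upper 5% point of the standard normal
    shift = delta / (sigma / math.sqrt(n))
    return 1 - normal_cdf(crit - shift)

power = z_test_power(delta=0.5, sigma=1.0, n=25)
print(power)  # roughly 0.80 for this hypothetical effect size
```

Increasing n increases the shift, and hence the power; sample-size procedures simply invert this relationship.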
Probability: The quantitative expression of the chance that an event will occur. This can be defined in a variety of ways, of which the most common is still that involving long-term relative frequency:

P(A) = (number of times A occurs) / (total number of trials),

for a large number of trials.
p value: The probability of the observed data (or data showing a more extreme departure from the null hypothesis) when the null hypothesis is true. See also misinterpretation of p values, significance test, and significance level.
Q
Quasi-experiment: A term used for studies that resemble experiments but are weak on some of the characteristics, particularly that manipulation of subjects to groups is not under the investigator's control. For example, if interest centered on the health effects of a natural disaster, those who experience the disaster can be compared with those who do not, but subjects cannot be deliberately assigned (randomly or not) to the two groups. See also experimental design.
R
Randomization tests: Procedures for determining statistical significance directly from data without recourse to some particular sampling distribution. The data are divided (permuted) repeatedly between treatments, and for each division (permutation) the relevant test statistic (for example, a t or F) is calculated to determine the proportion of the data permutations that provide as large a test statistic as that associated with the observed data. If that proportion is smaller than some significance level α, the results are significant at the α level.
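A small randomization test in the spirit of the entry above: the difference in group means is recomputed over random permutations of the pooled data (the group values are hypothetical, and random permutations stand in for a full enumeration):

```python
# Randomization test for a difference in means: shuffle the pooled data,
# re-split into two groups, and count permutations with a statistic
# at least as large as the observed one.
import random

random.seed(1)
a = [12.0, 15.0, 14.0, 16.0]
b = [9.0, 8.0, 11.0, 10.0]

observed = sum(a) / len(a) - sum(b) / len(b)
pooled = a + b

count = 0
n_perm = 2000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = sum(pooled[:4]) / 4 - sum(pooled[4:]) / 4
    if diff >= observed:  # one-sided: as large as the observed statistic
        count += 1

p_value = count / n_perm
print(observed, p_value)
```

Here only the split that puts the four largest values in the first group matches the observed difference, so the estimated p value is close to 1/70, the exact enumeration result.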
Random sample: Either a set of n independent and identically distributed random variables, or a sample of n individuals selected from a population in such a way that each sample of the same size is equally likely.
Random variable: A variable, the values of which occur according to some specified probability distribution.
Random variation: The variation in a data set unexplained by identifiable sources.
Range: The difference between the largest and smallest observations in a data set. This is often used as an easy-to-calculate measure of the dispersion in a set of observations, but it is not recommended for this task because of its sensitivity to outliers.
Ranking: The process of sorting a set of variable values into either ascending
or descending order.
S
Sampling error: The difference between the sample result and the population characteristic being estimated. In practice, the sampling error can rarely be determined because the population characteristic is not usually known. With appropriate sampling procedures, however, it can be kept small and the investigator can determine its probable limits of magnitude. See also standard error.
Sampling variation: The variation shown by different samples of the same size
from the same population.
Saturated model: A model that contains all main effects and all possible interactions between factors. Because such a model contains the same number of parameters as observations, it results in a perfect fit for a data set.
Scatter diagram: A two-dimensional plot of a sample of bivariate observations. The diagram is an important aid in assessing what type of relationship links the two variables.
Standard score: A variable transformed to have zero mean and unit variance.

Student's t-test: A significance test for assessing hypotheses about population means. The most commonly used version tests the equality of the means of two populations, for which the test statistic is

t = (x̄₁ − x̄₂) / (s√(1/n₁ + 1/n₂)),

where x̄₁ and x̄₂ are the means of samples of size n₁ and n₂ taken from each population, and s² is an estimate of the assumed common variance given by

s² = ((n₁ − 1)s₁² + (n₂ − 1)s₂²) / (n₁ + n₂ − 2),

where s₁² and s₂² are the two sample variances.
Symmetric matrix: A square matrix that is symmetrical about its leading diagonal; that is, a matrix with elements a_ij such that a_ij = a_ji. In statistics, correlation matrices and covariance matrices are of this form.
T
Test statistic: A statistic used to assess a particular hypothesis in relation to some population. The essential requirement of such a statistic is a known distribution when the null hypothesis is true.
Tolerance: A term used in stepwise regression for the proportion of the sum of squares about the mean of an explanatory variable not accounted for by other variables already included in the regression equation. Small values indicate possible multicollinearity problems.
Trace of a matrix: The sum of the elements on the main diagonal of a square matrix, usually denoted as tr(A). So, for example, a 2 × 2 matrix A with diagonal elements 3 and 1 has tr(A) = 4.
U
Univariate data: Data involving a single measurement on each subject or patient.
U-shaped distribution: A probability distribution or frequency distribution shaped more or less like a letter U, though it is not necessarily symmetrical. Such a distribution has its greatest frequencies at the two extremes of the range of the variable.
V
Variance: In a population, the second moment about the mean. An unbiased estimator of the population value is provided by s² given by

s² = Σ_{i=1}^n (x_i − x̄)² / (n − 1),

where n is the sample size and x̄ is the sample mean.
W
Wilcoxon's signed rank test: A distribution-free method for testing the difference between two populations using matched samples. The test is based on the absolute differences of the pairs of observations in the two samples, ranked according to size, with each rank being given the sign of the difference. The test statistic is the sum of the positive ranks.
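A direct sketch of the signed rank statistic just described: rank the absolute paired differences, attach the signs, and sum the positive ranks (the paired differences are hypothetical, and ties in the ranks are ignored for simplicity):

```python
# Wilcoxon signed rank statistic W+ = sum of ranks of positive differences,
# where ranks are assigned to the absolute differences (smallest = 1).
diffs = [4.0, -1.0, 3.0, -2.0, 5.0]  # hypothetical paired differences

order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
w_plus = 0.0
for rank0, i in enumerate(order):
    if diffs[i] > 0:
        w_plus += rank0 + 1  # ranks start at 1

print(w_plus)  # 12.0 for these differences
```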
z test: A test for assessing hypotheses about population means when their variances are known. For example, for testing that the means of two populations are equal, that is, H0: μ₁ = μ₂, when the variance of each population is known to be σ², the test statistic is

z = (x̄₁ − x̄₂) / (σ√(1/n₁ + 1/n₂)),

where x̄₁ and x̄₂ are the means of samples of size n₁ and n₂ from the two populations. If H0 is true, then z has a standard normal distribution. See also Student's t tests.
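A sketch of the two-sample z statistic above, with hypothetical samples and an assumed known common standard deviation σ:

```python
# Two-sample z statistic with known sigma:
#   z = (xbar1 - xbar2) / (sigma * sqrt(1/n1 + 1/n2))
import math

def two_sample_z(x1, x2, sigma):
    n1, n2 = len(x1), len(x2)
    xbar1 = sum(x1) / n1
    xbar2 = sum(x2) / n2
    return (xbar1 - xbar2) / (sigma * math.sqrt(1 / n1 + 1 / n2))

z = two_sample_z([5.1, 4.9, 5.3, 5.5], [4.2, 4.4, 4.0, 4.6], sigma=0.5)
print(z)  # compare with the standard normal to obtain a p value
```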
Appendix B
Answers to Selected Exercises
CHAPTER 1
1.2. One alternative explanation is the systematic bias that may be produced by always using the letter Q for Coke and the letter M for Pepsi. In fact, when the Coca-Cola company conducted another study in which Coke was put into both glasses, one labeled M and the other labeled Q, the results showed that a majority of people chose the glass labeled M in preference to the glass labeled Q.
1.4. The quotations are from the following people: 1, Florence Nightingale; 2, Lloyd George; 3, Joseph Stalin; 4, W. H. Auden; 5, Mr. Justice Streatfield; 6, Logan Pearsall Smith.
CHAPTER 2
2.3. The graph in Figure 2.34 commits the cardinal sin of quoting data out of context.
FIG. B1. Plot of the data for Exercise 3.2 (horizontal axis: knee-joint angle in degrees).
CHAPTER 3
3.2. Here the simple plot of the data shown in Figure B1 indicates that the 90° group appears to contain two outliers, and that the observations in the 50° group seem to split into two relatively distinct classes. Such features of the data would, in general, have to be investigated further before any formal analysis was undertaken. The ANOVA table for the data is as follows.
Source           SS       DF   MS      F       p
Between angles    90.56    2   45.28   11.52   .002
Within angles    106.16   27    3.93
Source            SS      DF   MS     F      p
Between methods    8.53    2   4.27   2.11   .14
Within methods    54.66   27   2.02
FIG. B2. Plot of the data (horizontal axis: initial anxiety score).

The model fitted is

y_ijk = μ + α_i + β₁(x_ij − x̄) + β₂(z_ij − z̄) + ε_ijk,

where x and z are initial anxiety and age, respectively. The ANCOVA results include F = 4.72 (p = .039) and F = 0.54 (p = .468).
CHAPTER 4
4.1. Try a logarithmic transformation.
4.2. The 95% confidence interval is given by

(x̄₁ − x̄₂) ± t₁₂ s √((1/n₁) + (1/n₂)),

where x̄₁ and x̄₂ are the mean weight losses for novice and experienced slimmers both using a manual, s is the square root of the error mean square from the ANOVA of the data, t₁₂ is the appropriate value of Student's t with 12 degrees of freedom, and n₁ and n₂ are the sample sizes in the two groups. Applying this formula to the numerical values gives

[−6.36 − (−1.55)] ± 2.18 s √((1/n₁) + (1/n₂)).
The ANOVA tables for the three separate analyses are as follows.

SS        DF   MS       F      p
294.00     1   294.00   2.32   .14
864.00     1   864.00   6.81   .02
384.00     1   384.00   3.03   .10
2538.00   20   126.90

SS        DF   MS        F       p
3037.50    1   3037.50   21.01   <.001
181.50     1   181.50    1.26    .28
541.50     1   541.50    3.74    .07
2892.00   20   144.60

SS        DF   MS        F       p
2773.50    1   2773.50   13.97   .001
1261.50    1   1261.50   6.36    .02
181.50     1   181.50    0.91    .35
3970.00   20   198.50
The separate analyses give nonsignificant results for the diet × biofeed interaction, although for drugs X and Y, the interaction terms approach significance at the 5% level. The three-way analysis given in the text demonstrates that the nature of this interaction is different for each drug.
4.11. The main effect parameters are estimated by the difference between a row (column) mean and the grand mean. The interaction parameter corresponding to a particular cell is estimated as (cell mean − row mean − column mean + grand mean).
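A numerical sketch of these estimates for a hypothetical 2 × 2 table of cell means:

```python
# Main effect estimates: (row or column mean) - grand mean.
# Interaction estimates: cell - row mean - column mean + grand mean.
# (The cell means are hypothetical.)
cells = [[10.0, 14.0],
         [12.0, 20.0]]

grand = sum(sum(row) for row in cells) / 4
row_means = [sum(row) / 2 for row in cells]
col_means = [sum(cells[i][j] for i in range(2)) / 2 for j in range(2)]

row_effects = [m - grand for m in row_means]
col_effects = [m - grand for m in col_means]
interaction = [[cells[i][j] - row_means[i] - col_means[j] + grand
                for j in range(2)] for i in range(2)]
print(row_effects, col_effects, interaction)
```

Note that the interaction estimates sum to zero across each row and column, as the model constraints require.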
4.12. An ANCOVA of the data gives the following results.

Source                SS       DF   MS      F      p
Pretreatment 1          8.65    1    8.65   0.66   .43
Pretreatment 2         35.69    1   35.69   2.72   .12
Between treatments     27.57    1   27.57   2.10   .17
Error                 222.66   17   13.10
CHAPTER 5
5.3. The ANOVA table for the data is as follows.

Source                SS           DF   MS
Between subjects      1399155.00   15   93277.00
Between electrodes     281575.43    4   70393.86
Error                 1342723.37   60   22378.72

Thus, using the nomenclature introduced in the text, we have the variance component estimates 14179.66 (subjects), 3000.95 (electrodes), and 22378.72 (error). So the estimate of the intraclass correlation coefficient is

R = 14179.66 / (14179.66 + 3000.95 + 22378.72) = 0.36.

Repeating the analysis with one subject omitted gives the following table.

Source                SS          DF   MS
Between subjects      852509.55   14   60893.54
Between electrodes    120224.61    4   30056.15
Error                 639109.39   56   11412.67

The intraclass correlation coefficient is now 0.439.
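The variance-component arithmetic in 5.3 can be sketched from the mean squares of the first table; the formulas below follow the usual two-way random effects decomposition (n = 16 subjects, k = 5 electrodes), which is assumed to match the nomenclature of the text:

```python
# Variance components from mean squares, then the intraclass correlation
#   var_subjects = (MS_subjects - MS_error) / k
#   var_electrodes = (MS_electrodes - MS_error) / n
#   ICC = var_subjects / (var_subjects + var_electrodes + MS_error)
ms_subjects, ms_electrodes, ms_error = 93277.00, 70393.86, 22378.72
n, k = 16, 5

var_subjects = (ms_subjects - ms_error) / k
var_electrodes = (ms_electrodes - ms_error) / n
icc = var_subjects / (var_subjects + var_electrodes + ms_error)
print(round(var_subjects, 2), round(icc, 3))
```

Applying the same formulas to the second table (n = 15) reproduces the quoted value of 0.439.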
CHAPTER 6
6.1. Figure B3 shows plots of the data with the following models fitted: (a) log(vocabulary size) = β₀ + β₁age; (b) vocabulary size = β₀ + β₁age + β₂age²; and (c) vocabulary size = β₀ + β₁age + β₂age² + β₃age³. It appears that the cubic model provides the best fit. Examine the residuals.
1
2
Group
2
3
1 0 0
0 1 0
CHAPTER 7
7.1. Using the maximum score as the summary measure for the lecithin trial data gives the following results.

Fixed effects
Parameter       Estimate   SE     t       p
Intercept       8.21       0.72   11.35   <.0001
Group           −2.37      0.72   −3.28   .002
Visit           0.58       0.12   4.73    <.0001
Group × Visit   1.30       0.12   10.68   <.0001

Random effects
σ̂_a = 4.54, σ̂_b = 0.61, σ̂_ab = −0.47, σ̂_c = 1.76
CHAPTER 8
8.1. The signed rank statistic takes the value 40 with n = 9. The associated p
value is 0.0391.
N      95% Confidence Interval
200    1.039, 7.887
400    1.012, 9.415
800    1.012, 8.790
1000   1.091, 9.980
8.7. Pearson's correlation coefficient is −0.717, with t = −2.91, DF = 8, p = .020 for testing that the population correlation is zero.
Kendall's tau is −0.600, with z = −2.415, p = .016 for testing for independence.
Spearman's correlation coefficient is −0.806, with z = −2.44, p = .015 for testing for independence.
CHAPTER 9
9.1. The chi-squared statistic is 0.111, which with a single degree of freedom has an associated p value of 0.739. The required confidence interval is (−0.042, 0.064).
CHAPTER 10
10.1. An investigation of a number of logistic regression models, including ones containing interactions between the three explanatory variables, shows that a model including each of the explanatory variables provides an adequate fit to the data. The fitted model is

logit(p) = β₀ + 0.3367(weight) + 1.030(ejection) + 1.639(type),

where weight is 0 for small and 1 for standard, ejection is 0 for not ejected and 1 for ejected, and type is 0 for collision and 1 for rollover. The 95% confidence intervals for the conditional odds ratios of each explanatory variable are: weight, 1.18, 1.66; ejection, 2.31, 3.40; type, 4.38, 6.06. Thus, for example, the odds of being severely hurt in a rollover accident are between 4.38 and 6.06 times those for a collision.
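The conditional odds ratios quoted in 10.1 are simply the exponentiated logistic regression coefficients; a quick check of that arithmetic:

```python
# Odds ratio for a binary explanatory variable = exp(coefficient).
# Coefficients are those quoted in the fitted model above.
import math

coefs = {"weight": 0.3367, "ejection": 1.030, "type": 1.639}
odds_ratios = {name: math.exp(b) for name, b in coefs.items()}
print(odds_ratios)
```

Each point estimate lies inside the corresponding 95% confidence interval quoted in the answer (for example, exp(1.639) is about 5.2, between 4.38 and 6.06).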
10.2. With four variables there are a host of models to deal with. A good way to begin is to compare the following three models: (a) the main effects model, that is, Brand + Prev + Soft + Temp; (b) all first-order interactions, that is, model (a) above + Brand.Prev + Brand.Soft + Brand.Temp + Prev.Soft + Prev.Temp + Soft.Temp; (c) all second-order interactions, that is, model (b) above + Brand.Prev.Soft + Brand.Prev.Temp + Brand.Soft.Temp + Prev.Soft.Temp. The likelihood ratio goodness-of-fit statistics for these models are as follows.

Model   LR Statistic   DF
A       42.93          18
B       9.85           9
C       0.74           2
FIG. B5. Linear and logistic models fitted to the probability of menstruation data (probability against age in years).
10.8. The plot of probability of menstruation with the fitted linear and logistic regressions is shown in Figure B5. The estimated parameters (etc.) for the two models are as follows.

Linear
Parameter   Estimate   SE      Estimate/SE
Intercept   −2.07      0.257   −8.07
Age         0.20       0.019   10.32

Logistic
Parameter   Estimate   SE      Estimate/SE
Intercept   −19.78     0.848   −23.32
Age         1.52       0.065   23.38
Index
Assumptions of F test, 66
Avoiding graphical distortion, 53-54

B

Backward selection, see variable selection methods in regression
Balanced designs, 116
Bar chart, 22-24
Baseline measurement, 213
Benchmarks for evaluation of kappa values, 288
Between groups sum of squares, 68
Between subject factor, 134
Bimodal distribution, 25
Bonferroni correction, 72
Bonferroni test, 72-73
Bootstrap distribution, 258
Bootstrap, 257-260
Box plots, 31-35
Bracket notation for log linear models, 304
Bubble plot, 40-41
C

Calculating correction factors for repeated measures, 147
Case-control study, 9
Categorical data, 267
Challenger space shuttle, 55, 57
Chance agreement, 285
Change score analysis for repeated measures, 215-218
Chi-squared test, 269, 270
Compound symmetry, 139-141
Computationally intensive methods, 253-260
Computers and statistical software, 16-17
Concomitant variables, 80
Conditional independence, 296
Conditioning plots, see Coplots
Confidence interval for Wilcoxon signed-rank test, 244
Confidence interval for Wilcoxon-Mann-Whitney test, 241
Confidence intervals, 5
Contingency tables,
  chi-squared test for independence in, 270
  residuals for, 275-277
  r x c table, 270
  three-dimensional, 293-300
  two-by-two, 271-275
  two-dimensional, 268-275
Cook's distance, 194
Coplots, 44-46
Correction factors for repeated measures, 142-143
  Greenhouse and Geisser correction factor, 142
  Huynh and Feldt correction factor, 143
Correlation coefficients
  Kendall's tau, 250
  Pearson's product moment, 249
  Spearman's rank, 250-252
Correspondence analysis, 277-280
Covariance matrix, 141
  pooled within-group, 87, 88
D

Data sets,
  alcohol dependence and salsolinol excretion, 135
  anxiety scores during wisdom teeth extraction, 95
  applications for Berkeley college, 291
  blood pressure and biofeedback, 110
  blood pressure, family history and smoking, 130
  body weight of rats, 213
  brain mass and body mass for 62 species of animal, 27
  caffeine and finger tapping, 76
  car accident data, 323
  cerebral tumours, 269
  children's school performance, 292
  consonants correctly identified, 248
  crime in the USA, 182-183
  crime rates for drinkers and abstainers, 23
  Danish do-it-yourself, 294
  days away from school, 117
  depression scores, 263
  detergents data, 324
  diagnosis and drugs, 268
  estrogen patches in treatment of postnatal depression, 131
  fecundity of fruit flies, 64
  field dependence and a reverse Stroop task, 134
  firefighters entrance exam, 281
  gender and belief in the afterlife, 290
  GHQ data, 309
  hair colour and eye colour, 278
  height, weight, sex and pulse rate, 44, 45
  heights and ages of married couples, 28-30
  human fatness, 177
  ice-cream sales, 42
  improvement scores, 98
  information about 10 states in the USA, 60
  janitor's and banker's ratings of socio-economic status, 291
  knee joint angle and efficiency of cycling, 93
  length guesses, 60
  marriage and divorce rates, 206
  maternal behaviour in rats, 94
  measurements for schizophrenic patients, 265
membership of W o w n e Swimming Club,
290
  memory retention, 206
  menstruation, 325
  mortality rates from male suicides, 61
  oral vocabulary size of children at various ages, 162
  organization and memory, 256
  pairs of depressed patients, 271
postnatal depression and childs cognitive
development, 118
proportion of d e p s in science and
engineering, 55.56
psychotherapeutic attractionscorn, 247
quality of childrens testimonies,207
quitting smoking experiment,159
racial equality and the death penalty,
295
rat data. 101
scores in a synchronized swimming
competition, 154
sex and diagnosis,271
skin resistanceand electrndetype, 243
slimming data. 107
smoking and performance. 126
social skills data,85
statistics students, 324
stressful life events, 240
suicidal feelings,271
suicide by age andsex, 294
survival o f infants and amountof care, 2%
test scoresof dizygous twins,264
time to completion of examination scores.249
treatment of Alzheimers disease,220
university admission rates,46
verdict in rape cases,271
visits to emergency m m ,265
visual acuity and lens strength,40
what peoplethink about the European
community,279
WISC blncks data, 80
Deletion residual, 194
Dependence panel, 45
Deviance, 311
Deviance residual, 319
Dichotomous variable, 271
Dot notation, 68
Dot plot, 23-25
Draughtsman's plot, synonymous with scatterplot matrix
Dummy variable, 178-180
E
Enhancing scatterplots, 40-46
Exact p values, 281
Expected mean squares, 138
Experiments, 10
Explanatory variable, 14
Exponential distribution, 47
F
Factor variable, 14
Factorial design, 97
Fisher's exact test, 273, 274
Fitted marginals, 304
Fitted values, 102, 163, 168
Five-number summary, 32
Forward selection, see variable selection methods in regression
Friedman test, 247-249
G
H
Hat matrix, 194
Histograms, 25-31
Homogeneity of variance, 137
Hotelling's T² test, 147-150
Huynh and Feldt correction factor, see correction factors
Hypergeometric distribution, 274
K
Kappa statistic, 283-288
Kappa statistic, large sample variance, 287
Kelvin scale, 13
Kruskal-Wallis test, 245-247
L
Latent variable, 128
Latin square, 112
Least squares estimation, 163
Lie factor, 52
Likelihood ratio statistic, 297
Locally weighted regression, 253
Logistic regression, 306-317
Logistic transformation, 311
Longitudinal data, 209
Longitudinal designs, 136
M
Method of difference, 15
Misjudgement of size of correlation, 54
Missing observations, 211
Models, 6-8
analysis of covariance, 81
in data analysis, 6-8
fixed effects, 112-116
for contingency tables, 300-306
hierarchical, 219, 304
linear, 67
logistic, 306-317
log-linear, 302-303
main effect, 102
minimal, 304
mixed effects, 113, 138
multiple linear regression, 171-172
multiplicative, 301
one way analysis of variance, 67
random effects, 112-116, 219-233
saturated, 303
Multicollinearity, 193-197
Multidimensional contingency tables, 293
Multilevel modelling, 66, 219
Multiple comparison techniques, 70-75
Multiple correlation coefficient, 172
Multiple regression and analysis of variance, equivalence of, 197-201
Multivariate analysis of variance, see analysis of variance
Mutual independence, 296
N
O
Observational studies, 9
Occam's razor, 8
P
Parsimony, 8
Partial F test, 187
Partial independence, 296
Partitioning of variance, 66
Pearson residuals, 319
Permutation distribution, 241
Permutation tests, 253-257
Pie chart, 22-24
Pillai statistic, 128
Planned comparisons, 75-76
Pooling sums of squares, 102
Post hoc comparisons, 76
Power, 18
Power curves, 216-218
Prediction using simple linear regression, 165
Predictor variables, 14
Probability of falsely rejecting the null hypothesis, 65
Probability plotting, 46-48
normal probability plotting, 49
Prospective study, 9
Psychologically relevant difference, 18
p value, 2, 4-6
Q
Quasi-experiments, 10
R
raison d'être of statistics and statisticians, 2
Random allocation, 11
Random intercepts and slopes model, 227
Random intercepts model, 226
Ratio scales, 13
Regression, 161-169
automatic model selection in, 187-191
all subsets, 183-187
diagnostics, 191-193
distribution-free, 252-253
generalized linear, 317-321
logistic, 306-317
multiple linear, 169-183
residuals, 167-169
simple linear, 162-169
with zero intercept, 161-167
Regression to the mean, 215
Relationship between intraclass correlation coefficient and product moment correlation coefficient, 157
Residuals
adjusted, 276
in contingency tables, 275-277
deletion, 194
standardized, 194
Response feature analysis, 210-218
Response variable, 14
Risk factor, 9
S
Simple effects, 106
Sparse data in contingency tables, 280-283
Sphericity, 137, 139-141
Spline smoothers, 253
S-PLUS
ANOVA, 92
aov, 93
Bootstrap, 263
chisq.test, 289
Chi-square test, 289
Compare samples, 262
Coplots, 59
Counts and proportions, 289
fisher.test, 289
Fisher's exact test, 289
Friedman Rank Test, 262
friedman.test, 262
glm, 322
graph menu, 59
graphics palettes, 59
help, 59
kruskal.test, 262
Kruskal-Wallis Rank Test, 262
lm, 205
lme, 234
MANOVA, 92
mcnemar.test, 289
Mixed effects, 233
Multiple Comparisons, 92
outer, 263
pie, 59
Regression, 204
Regression, logistic, 322
Regression, log-linear, 322
Resample, 263
Signed Rank Test, 262
trellis graphics, 59
wilcox.test, 262
Wilcoxon Rank Test, 262
SPSS
bar charts, 58
Chi-square, 289
Crosstabs, 289
General linear model, 91
GLM-Multivariate, 92
GLM-Repeated measures, 158
graph menu, 58
Mann-Whitney U, 262
Nonparametric tests, 261
Regression, 204
Regression, logistic, 322
Standard normal distribution, 47
Standardized regression coefficients, 173-175
Standardized residual, 194
Stem-and-leaf plots, 32-33
Student's t-tests, 63-65
Summary measures, use of, 210-219
Surveys, 9
T
U
Unbalanced designs, 116-126
Unique sums of squares, 121-122
V
W
Weighted kappa, 288
Wilcoxon's signed ranks test, 243-245
Wilcoxon-Mann-Whitney test, 238-242
Within groups sum of squares, 68
Within subject factor, 133