` The blocking variable cannot be manipulated; subjects are instead
grouped (blocked or stratified) on it before random assignment
` Its effect on the dependent variable can be examined
Crossover Design
` Involves the exposure of the same subjects to more than one
experimental treatment
` This type of within-subjects design has the advantage of ensuring the
highest possible equivalence among subjects exposed to different
conditions: the groups being compared are equal with respect to age,
weight, health, and so on, because they are composed of the same
people
` Subjects must be randomly assigned to different orderings of
treatments; random ordering can be used to rule out ordering effects
` Although crossover designs are extremely powerful, they are
inappropriate for certain research questions because of the problem
of carryover effects
` When subjects are exposed to two different treatments or conditions,
they may be influenced in the second condition by their experience in
the first condition
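The random assignment of subjects to treatment orderings described above can be sketched in Python. This is a minimal illustration: the subject IDs, treatment labels, and round-robin balancing scheme are assumptions for the example, not part of the source.

```python
import itertools
import random

def assign_orderings(subjects, treatments, seed=0):
    """Randomly assign subjects to counterbalanced orderings of treatments.

    Subjects are shuffled, then dealt round-robin across every possible
    ordering (permutation) of the treatments, so each ordering is used
    about equally often -- one way to guard against ordering effects.
    """
    rng = random.Random(seed)
    orderings = list(itertools.permutations(treatments))
    pool = list(subjects)
    rng.shuffle(pool)
    return {subj: orderings[i % len(orderings)] for i, subj in enumerate(pool)}

# 12 hypothetical subjects, two treatments A and B (orderings: A->B or B->A).
plan = assign_orderings(range(12), ["A", "B"], seed=42)
```

With two treatments and twelve subjects, the round-robin dealing guarantees six subjects per ordering; which subjects land in which ordering is random.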
Summary of commonly used designs:

Pretest-posttest (before-after) design [pretest: YES; comparison: BETWEEN]
    Data collection both before and after the intervention; appropriate
    for measuring change. Can determine differences between groups
    (experimental) and change within groups (quasi-experimental).

Factorial design [pretest: OPTIONAL; comparison: BETWEEN]
    Experimental manipulation of more than one independent variable.
    Permits a test of main effects for each manipulated variable and of
    interaction effects for combinations of manipulated variables.

Randomized block design [pretest: OPTIONAL; comparison: BETWEEN]
    Random assignment to groups within different levels of a blocking
    variable that is not under experimental control (e.g., gender).

Crossover design [pretest: OPTIONAL; comparison: WITHIN]
    Subjects are exposed to all treatments but are randomly assigned to
    different orderings of treatments; subjects serve as their own
    controls.
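The main effects and interaction effects that a factorial design permits can be illustrated with a small 2x2 example. The factors (drug dose, diet) and the cell means below are hypothetical numbers chosen for the sketch, not data from the source.

```python
# Hypothetical 2x2 factorial: factor A = drug dose (low/high),
# factor B = diet (standard/special). Cell values are mean outcomes
# (e.g., mean weight loss in kg) for each treatment combination.
cells = {
    ("low", "standard"): 1.0, ("low", "special"): 2.0,
    ("high", "standard"): 3.0, ("high", "special"): 6.0,
}

def main_effect_dose(cells):
    """Average outcome at high dose minus average outcome at low dose."""
    high = (cells[("high", "standard")] + cells[("high", "special")]) / 2
    low = (cells[("low", "standard")] + cells[("low", "special")]) / 2
    return high - low

def main_effect_diet(cells):
    """Average outcome on the special diet minus average on the standard diet."""
    special = (cells[("low", "special")] + cells[("high", "special")]) / 2
    standard = (cells[("low", "standard")] + cells[("high", "standard")]) / 2
    return special - standard

def interaction(cells):
    """Difference of differences: does the dose effect depend on the diet?"""
    dose_effect_special = cells[("high", "special")] - cells[("low", "special")]
    dose_effect_standard = cells[("high", "standard")] - cells[("low", "standard")]
    return dose_effect_special - dose_effect_standard
```

Here the dose effect is larger under the special diet (4.0) than under the standard diet (2.0), so the nonzero interaction shows the two manipulated variables combining beyond their separate main effects.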
Strengths of Experiments
` True experiments are the most powerful method available for testing hypotheses of
cause-and-effect relationships between variables
` Experimental design is considered the gold standard for intervention studies because
it yields the highest-quality evidence regarding intervention effects
` Through randomization and the use of a comparison condition, experimenters come
as close as possible to attaining the "ideal" counterfactual
` Experiments offer greater corroboration than any other research approach that, if
the independent variable (e.g., diet, drug dosage, teaching approach) is manipulated,
certain consequences in the dependent variable (e.g., weight loss, recovery of
health, learning) may be expected to ensue
` The great strength of experiments, then, lies in the confidence with which causal
relationships can be inferred
` Through the controls imposed by manipulation, comparison, and (especially)
randomization, alternative explanations to a causal interpretation can often be ruled
out or discredited
` This is especially likely to be the case if the intervention was developed on the basis
of a sound theoretical rationale. It is because of these strengths that meta-analyses
of RCTs, which integrate evidence from multiple studies using experimental
designs, are at the pinnacle of almost all evidence hierarchies for questions relating
to causes
Limitations of Experiments
` There are often constraints that make an experimental approach impractical
or impossible
` Experiments are sometimes criticized for their artificiality
` Part of the difficulty lies in the requirements for randomization and (for
most experiments) comparable treatment within groups; in ordinary life,
the way we interact with people is not random
` Another aspect of experiments that is sometimes considered artificial is the
focus on only a handful of variables while holding all else constant
` This requirement has been criticized as being reductionist and as artificially
constraining human experience
` Experiments that are undertaken without a guiding theoretical framework
are sometimes criticized for being able to establish a causal connection
between an independent and dependent variable without offering any causal
explanation for why the intervention resulted in the observed outcomes
` A problem with experiments conducted in clinical settings is that it is often
clinical staff, rather than researchers, who administer an intervention;
therefore it can sometimes be difficult to determine whether subjects in the
experimental group actually received the treatment and whether those in the
control group did not.
` It may be difficult to maintain the integrity of the intervention and control
conditions if the study period extends over time.
` Moreover, clinical studies are usually conducted in environments over which
researchers have little control, and control is a critical factor in experimental
research.
` McGuire and her colleagues (2000) describe some issues relating to the
challenges of testing interventions in clinical settings.
` Sometimes a problem emerges if subjects themselves have discretion about
participation in the treatment
` One such issue is the Hawthorne effect, which is a kind of placebo effect
that is caused by people's expectations
Quasi-Experiments
` The term comparison group is sometimes used in lieu of control group to refer to the
groups against which outcomes in the treatment group are evaluated.
` Campbell and Stanley (1963) called the nonequivalent control group after-only
design preexperimental rather than quasi-experimental because of its fundamental
weakness. Shadish, Cook, and Campbell (2002), in their more recent book
on causal inference, did not use this label but simply called it a weaker quasi-
experimental design.
` An improvement upon the standard before-after nonequivalent control
group design, in which groups thought to be similar are compared, is to use
matching to ensure that the groups are, in fact, equivalent on at least
some key variables related to the outcomes.
` Propensity matching is a more sophisticated method of matching that can be
used by researchers with sufficient statistical sophistication.
` This method involves creating a propensity score that captures the conditional
probability of exposure to a treatment given various preintervention
characteristics. Experimental and comparison group members can then be
matched on this score.
` Both conventional and propensity matching are most easily implemented
when there is a large pool of potential comparison group subjects from
which good matches to treatment group subjects can be selected.
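The matching step described above can be sketched as greedy 1:1 nearest-neighbor matching on the propensity score. The scores, subject IDs, and the caliper value below are illustrative assumptions; the scores are taken as already estimated (e.g., by a logistic regression of treatment exposure on preintervention characteristics).

```python
def match_on_propensity(treated, comparison, caliper=0.1):
    """Greedy 1:1 nearest-neighbor matching on propensity scores.

    treated / comparison map subject IDs to propensity scores (the
    estimated probability of treatment exposure given preintervention
    characteristics, assumed precomputed). Returns a dict mapping each
    matched treated ID to its comparison ID; a treated subject is left
    unmatched when no remaining comparison score is within `caliper`.
    """
    available = dict(comparison)
    matches = {}
    # Match the hardest-to-match (highest-score) treated subjects first.
    for tid, score in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        cid = min(available, key=lambda c: abs(available[c] - score))
        if abs(available[cid] - score) <= caliper:
            matches[tid] = cid
            del available[cid]
    return matches

# Hypothetical scores for two treated and three comparison subjects.
pairs = match_on_propensity({"t1": 0.80, "t2": 0.35},
                            {"c1": 0.78, "c2": 0.40, "c3": 0.10})
```

This also illustrates the point in the notes that matching works best with a large comparison pool: with only three candidates here, one treated subject would go unmatched if its nearest candidate fell outside the caliper.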
` With a historical comparison group, comparison data are gathered from
or about a group of people before the implementation of the intervention.
Time Series Designs
` In the designs just described, a control group was used but randomization
was not; some quasi-experiments have neither.
` This one-group pretest-posttest design could be modified so that at least some
alternative explanations for change in nurses' turnover rate could be ruled out.
` Time series design (also called "interrupted time series design")
` Information is collected over an extended period, and an intervention is
introduced during the period.
` Although this design does not eliminate all problems of interpreting changes
in turnover rate, the extended time period strengthens the ability to
attribute change to the intervention.
` Permits us to rule out the possibility that the data reflect unstable
measurements of resignation at only two points in time.
` A particularly powerful quasi-experimental design results when the time series
and nonequivalent control group designs are combined.
` Single-subject experiments (N-of-1 studies) are a particular application of a
time series approach sometimes used in clinical studies; these studies use time
series designs to gather information about an intervention based on the
responses of a single patient under controlled conditions.
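The logic of an interrupted time series can be sketched with a naive before/after contrast, continuing the notes' turnover example. The monthly rates and intervention point are hypothetical, and real analyses model level and slope changes with segmented regression rather than comparing segment means.

```python
def interrupted_series_effect(series, intervention_index):
    """Naive effect estimate for an interrupted time series: the mean
    outcome after the intervention minus the mean outcome before it.
    (Fuller analyses use segmented regression to model changes in both
    level and slope; this sketch only contrasts the segment means.)
    """
    pre = series[:intervention_index]
    post = series[intervention_index:]
    return sum(post) / len(post) - sum(pre) / len(pre)

# Hypothetical monthly turnover rates (%); intervention begins at month 7,
# i.e., index 6 of the series.
rates = [12, 11, 13, 12, 12, 12, 8, 7, 8, 7, 8, 7]
effect = interrupted_series_effect(rates, intervention_index=6)
```

Because many pre- and post-intervention points are averaged, a single unstable measurement cannot produce the whole apparent effect, which is the advantage over a two-point before/after comparison.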
Other Quasi-Experimental Designs

Regression discontinuity design
` Involves systematic assignment of subjects to groups based on cutoff scores on a
preintervention measure; this design is considered attractive from an ethical
standpoint and merits consideration
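Cutoff-based assignment can be sketched as follows; the direction of the cutoff rule (lower scores indicate greater need and receive the intervention) and the example scores are assumptions for the illustration.

```python
def assign_by_cutoff(scores, cutoff):
    """Regression-discontinuity-style assignment: subjects whose
    preintervention score falls below the cutoff receive the intervention
    (assumed here: lower score = greater need), and the rest form the
    comparison group. Assignment is fully determined by the known cutoff
    rule, which is what makes the design ethically attractive.
    """
    treatment = sorted(s for s, v in scores.items() if v < cutoff)
    comparison = sorted(s for s, v in scores.items() if v >= cutoff)
    return treatment, comparison

# Hypothetical baseline symptom scores; lower = greater need.
groups = assign_by_cutoff({"p1": 30, "p2": 55, "p3": 42, "p4": 70}, cutoff=50)
```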
Partially randomized patient preference (PRPP) design
` has clear advantages in terms of persuading potential subjects to participate in a
study because those with a strong preference get to choose their treatment
condition.
` Those without a strong preference are randomized, but those with a preference are
given the condition they prefer and are followed up as part of the study.
` Can yield valuable information about the kinds of people who prefer one condition
over another, but the evidence of the effectiveness of the treatment is weak
because people who elected a certain treatment likely differ from those who
elected the alternative, and these preintervention differences, rather than the
alternative treatments, could account for any observed differences in outcome at
the end of the study.
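The PRPP allocation rule above can be sketched in a few lines. The subject IDs, arm labels, and preference dictionary are hypothetical.

```python
import random

def prpp_assign(subjects, preferences, arms=("A", "B"), seed=0):
    """Partially randomized patient preference (PRPP) assignment sketch.

    `preferences` maps a subject to a preferred arm, or omits the subject
    if they have no strong preference. Subjects with a preference receive
    the arm they prefer; only the rest are randomized. All subjects are
    followed up as part of the study.
    """
    rng = random.Random(seed)
    assignment = {}
    for subj in subjects:
        pref = preferences.get(subj)
        assignment[subj] = pref if pref is not None else rng.choice(arms)
    return assignment

# Hypothetical cohort: s1 and s3 have strong preferences, s2 and s4 do not.
alloc = prpp_assign(["s1", "s2", "s3", "s4"], {"s1": "A", "s3": "B"})
```

Note how the weakness described above is visible in the code: the preferring subjects select themselves into arms, so only the no-preference subjects form a randomized comparison.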