
Windhu Purnomo

S3 IKM Unair
2009

Learning Outcomes

By the end of this session, students should:

 Understand the principles of critical appraisal and why you should undertake it
 Be able to appraise published research and judge its reliability
 Be able to assess the relevance of published research to your own work

What is critical appraisal ?

Critical appraisal is the assessment of evidence by systematically reviewing its relevance, validity and results in specific situations (Chambers, 1998)

Critical appraisal is:

Balanced assessment of the benefits and strengths of research against its flaws and weaknesses
Assessment of the research process and results
Consideration of quantitative and qualitative aspects of research
To be undertaken by all health professionals as part of their work

 Vast and expanding literature.
 Limited time to read.
 Different reasons to read mean different strategies.
 Keeping up to date.
 Answering specific clinical questions.
 Pursuing a research interest.

 Keeping up to date.
 Skimming the main journals and summary bulletins.
 Answering specific clinical questions.
 Finding good quality literature on a subject.
 Pursuing a research interest.
 Extensive literature searching.

 What kind of reports do I want?
 How much detail do I need?
 How comprehensive do I need to be?
 How far back should I search?

 The answers to these questions should flow from the reasons for reading.

Key Steps to Effective Critical Appraisal

Three broad questions:

Are the results valid?
What are the results?
How will these results help me work with my patients?

11 items

1. What is the research question?
2. What is the study type?
3. What are the outcome factors and how are they measured?
4. What are the study factors and how are they measured?
5. What important confounders are considered?
6. What are the sampling frame and sampling method?
7. In an experimental study, how were the subjects assigned to groups?
   In a longitudinal study, how many reached final follow-up?
   In a case-control study, are the controls appropriate? (Etc.)
8. Are statistical tests considered?
9. Are the results clinically/socially significant?
10. Is the study ethical?
11. What conclusions did the authors reach about the study question?

 Title, authorship
 Abstract
 Introduction
 Methods
 Results
 Discussion
 Conclusion & recommendation
 References

 Describing the outcome
 Describing the relationship between risk factor and outcome
 Clear and communicative
 12-16 words


 Complete names of all the authors
 Name of institution

 English and Bahasa Indonesia
 Overview summary of the work
 Highlights of the results (objective, method)
 General statement of significant findings
 Around 200-250 words
 Key words (3-8 key words)


Rationale (background)
 Magnitude of the problem
 Impact of the outcome
 Differences between previous results (risk factors and outcomes)
 To identify specific risk factors and outcomes for a specific area or population
 Literature review: relevant and up to date

 Aim of the study
 Benefit

 Study design
 Population and time
 Sample (inclusion & exclusion criteria), sample size, sampling method
 Variable identification
 Predictor and dependent variables
 Data collection method
 Data measurement method
 Statistical method
 Ethics, informed consent


 General characteristics of the data (textual, tabular, graphical)

Discussion

 Important aspects
 Meaning (implications of results): significance, comparison with other studies
 Answers the research problem and aim of the study
 Acknowledgement
 References


THE EPIDEMIOLOGIC STUDY

Controlled assignment – experimental studies:
 Randomized assignment – clinical trials
 Not randomized assignment – community trials

Uncontrolled assignment – observational studies:
 Sampling with regard to disease or effect – cross-sectional and/or retrospective studies:
  exposure or characteristic at time of study – cross-sectional studies
  history of exposure or characteristic (prior to time of study) – retrospective studies
 Sampling with regard to exposure or characteristic (cause) – prospective studies

Fig.1. The epidemiologic study


EPIDEMIOLOGICAL STUDIES

Cross-sectional: a whole population, or a random sample from the population, is examined at one point in time; the present risk factor (present/absent) and present disease status (disease/no disease) are determined together.

Case-control: cases of disease and matched controls are selected in the present; the past history of the risk factor (present/absent) is traced back in each group.

Cohort: a whole population, or a random sample from the population, is classified in the present as exposed/at risk or not exposed; each group is followed into the future for development of disease (disease/no disease).

Fig.2. Comparison of analytic study designs

 Population
 Inclusion & exclusion criteria
 Minimal sample size
 Sample selection methods
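One common way to justify a minimal sample size, sketched here for estimating a single proportion (the formula is the standard normal-approximation one; the prevalence and precision values are assumed for illustration):

```python
import math

def sample_size_proportion(p, d, z=1.96):
    """Minimal sample size to estimate a proportion p
    with absolute precision +/- d at ~95% confidence (z = 1.96)."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

# e.g. expected prevalence 30%, desired precision +/- 5%
print(sample_size_proportion(p=0.30, d=0.05))  # 323
```

When p is unknown, taking p = 0.5 gives the most conservative (largest) n.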

SAMPLING METHOD PROCEDURES

1. Determination of the objective of the study
2. Determination of the sampling population
3. Sample size estimation
4. Determination of the sampling method
5. Determination of the subjects/sample

SAMPLING METHODS

Non-random:
• Purposive/judgemental
• Quota
• Accidental/convenience

Random:
• Simple random
• Systematic random
• Simple stratified random
• Proportional stratified random
• Cluster random
• Multistage sampling

Source: Bachtiar A, 2000
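A minimal sketch of the two simplest random methods above (the sampling frame of 100 subjects and the sample size of 10 are hypothetical):

```python
import random

frame = list(range(1, 101))  # hypothetical sampling frame of 100 subjects
n = 10                       # desired sample size

# Simple random sampling: every subject has an equal chance of selection
simple = random.sample(frame, k=n)

# Systematic random sampling: a random start, then every k-th subject
k = len(frame) // n              # sampling interval
start = random.randrange(k)      # random start within the first interval
systematic = frame[start::k]

print(len(simple), len(systematic))  # 10 10
```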


 Is it of interest?
 Why was it done?
 How was it done?
 What has been found?
 What are the implications?
 What else is of interest?


 Is it of interest?
 Title, abstract, source.
 Why was it done?
 Introduction.
▪ Should end with a clear statement of the purpose of the study.
▪ The absence of such a statement can imply that the authors had no clear idea of what they were trying to find out.
▪ Or they didn’t find anything but wanted to publish!

 How was it done?
 Methods.
▪ Brief but should include enough detail to enable one to judge quality.
▪ Must include who was studied and how they were recruited.
▪ Basic demographics must be there.
▪ An important guide to the quality of the paper.


 What has it found?
 Results.
▪ The data should be there – not just statistics.
▪ Are the aims in the introduction addressed in the results?
▪ Look for illogical sequences, bland statements of results.
▪ Flaws and inconsistencies?
▪ All research has some flaws – the impact of the flaws needs to be assessed.

 What are the implications?
 Abstract / discussion.
▪ The whole use of research is how far the results can be generalised.
▪ All authors will tend to think their work is more important.
▪ What is new here?
▪ What does it mean for health care?
▪ Is it relevant to my patients?


 What else is of interest?
 Introduction / discussion.
▪ Useful references?
▪ Important or novel ideas?
▪ Even if the results are discounted it doesn’t mean there is nothing of value.

 The degree to which a variable actually represents what it is supposed to represent
 Best way to assess: comparison with a reference standard
 Threatened by: systematic error (bias), contributed by:
  Observer
  Subject
  Instrument


Precision:
 DEFINITION: the degree to which a variable has nearly the same value when measured several times
 BEST WAY TO ASSESS: comparison among repeated measures
 THREATENED BY: random error (variance), contributed by:
  - Observer
  - Subject
  - Instrument

 The basic types of error may be divided into:
  Random (chance) error
  Systematic error
 Random error is the by-chance error that makes observed values differ from the true value. This may occur through sampling variability or random fluctuation of the event of interest.
 Systematic error, or bias, is any difference between the true value and the observed value due to all causes other than random fluctuation and sampling variability. This type of error is generally more important, and harder to detect, e.g. over-estimation of the body weight of every subject by 0.1 kilogram resulting from an inaccurate weighing scale.
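The weighing-scale example can be simulated in a few lines (a sketch; the 0.5 kg random spread and the number of readings are assumed for illustration). Random error averages away over repeated measurements; the systematic 0.1 kg shift does not:

```python
import random

random.seed(1)
true_weight = 60.0  # true body weight in kg (hypothetical subject)

# Random error: readings scatter around the true value
readings = [true_weight + random.gauss(0, 0.5) for _ in range(1000)]

# Systematic error (bias): the inaccurate scale adds 0.1 kg to every reading
biased = [w + 0.1 for w in readings]

mean_ok = sum(readings) / len(readings)
mean_biased = sum(biased) / len(biased)
print(round(mean_biased - mean_ok, 3))  # 0.1 — the bias survives averaging
```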


What one gets from a study!?

OBSERVED VALUE = FACT + DISTORTION

Distortion arises from systematic error (bias) and random error (chance):
 Inherent differences between groups: selection bias, allocation bias, confounding
 Differences in handling & evaluation between groups: information bias

Can be solved by: proper study designs & analysis, quality control, and statistical testing

Figure 3. Schematic presentation of common bias & error found in epidemiologic studies
 Selection bias
 Information bias
 Confounding bias


Population → Sample

Methods of sample selection:
- Random
- Systematic
- Multistage
- Purposive
- etc.

Methods of selection:
 Source of data?
 Instrument and media (reagent)?
 Executors?


Selection: subjects are representative of the target population; most of the cases are included in the sample

Information: standardized data collection methods (diagnostics, questionnaire, human resources, etc.)

Confounding: identify all potential confounding factors; analyze all potential confounding factors

Evaluation of quality of measurement:

Repeatability (random and systematic components)
 Subject (biological) variation
 Observer (measurement) variation:
  within observer (tends to be random)
  between observers (tends to be systematic)

Validity
 Sensitivity (ability to identify true positives)
 Specificity (ability to exclude true negatives)
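Sensitivity and specificity follow directly from a 2×2 table of test result against true status (the counts below are invented for illustration):

```python
# Hypothetical screening-test 2x2 table (all counts assumed for illustration)
tp, fp = 90, 30    # test positive: diseased / non-diseased
fn, tn = 10, 170   # test negative: diseased / non-diseased

sensitivity = tp / (tp + fn)  # proportion of true positives identified
specificity = tn / (tn + fp)  # proportion of true negatives excluded

print(sensitivity, specificity)  # 0.9 0.85
```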


Stages of a study and the errors that threaten each:

1. Sample patients – sampling bias, chance (external validity)
2. Assemble patients into groups – selection bias, chance
3. Make measurements – measurement bias, chance
4. Analyze results – confounding bias
5. Draw conclusions from the sample and generalize to other patients

Source: Amri Z, 2005
 Sampling – chance
 Assemble into groups – selection bias & chance
 Make measurements – measurement bias
 Analyze results – confounding
 Conclusions from sample – generalization?


FINDINGS IN THE STUDY → Inference #1 (internal validity) → TRUTH IN THE STUDY → Inference #2 (external validity) → TRUTH IN THE UNIVERSE

Figure 4. The two inferences involved in drawing conclusions from the findings of a study and applying them to the universe outside

Drawing conclusions:  TRUTH IN THE UNIVERSE ← infer ← TRUTH IN THE STUDY ← infer ← FINDINGS IN THE STUDY

Designing and implementing:  RESEARCH QUESTION → design → STUDY PLAN → implement → ACTUAL STUDY

External validity links the research question to the study plan; internal validity links the study plan to the actual study.

Figure 5. The process of designing and implementing a research project sets the stage for the process of drawing conclusions from it


CRITICAL APPRAISAL OF SURVEY & CASE-CONTROL STUDIES


21
 Is it of interest? / title, abstract
 Why was it done? / introduction
 How was it done? / methods
 What has it found? / results
 What are the implications? / abstract, discussion
 What else is of interest? / introduction, discussion


 Statistical significance
 The play of chance
 The logic of statistical tests
 Confidence intervals

 Bias
 Confounding

Relative Risk   Confidence Interval   Comment
1.2             0.1 – 9               Not significant, imprecise result
1.2             0.9 – 1.4             Not significant, precise result
1.2             1.1 – 1.3             Significant, precise result
4               1.1 – 8               Significant, imprecise result
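How such a confidence interval arises can be sketched from a 2×2 cohort table (the counts are invented; the log-scale standard error is the standard Katz method):

```python
import math

# Hypothetical 2x2 cohort table (all counts assumed for illustration)
a, b = 40, 60    # exposed: diseased / not diseased
c, d = 20, 80    # unexposed: diseased / not diseased

rr = (a / (a + b)) / (c / (c + d))  # relative risk = 2.0

# 95% CI computed on the log scale (standard error of log RR)
se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

print(round(rr, 2), round(lo, 2), round(hi, 2))
```

Here the interval excludes 1, so the result would be reported as statistically significant; a smaller study with the same relative risk would give a wider interval that could cross 1.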


Mother’s knowledge (independent variable) → Malnutrition (dependent variable)

Family income (confounding variable)
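The diagram above can be made concrete with a stratified analysis (all counts below are invented for illustration): within each income stratum the knowledge–malnutrition association vanishes, yet the crude table suggests one.

```python
# Invented counts illustrating confounding of the knowledge-malnutrition
# association by family income. Tuple: (malnourished among low-knowledge,
# low-knowledge total, malnourished among high-knowledge, high-knowledge total)
strata = {
    "low income":  (24, 40, 6, 10),
    "high income": (2, 10, 8, 40),
}

# Within each income stratum the relative risk is 1.0 (no association)
for income, (a, n1, c, n0) in strata.items():
    print(income, round((a / n1) / (c / n0), 2))  # 1.0 in both strata

# The crude (collapsed) table invents a spurious association
a = sum(s[0] for s in strata.values()); n1 = sum(s[1] for s in strata.values())
c = sum(s[2] for s in strata.values()); n0 = sum(s[3] for s in strata.values())
print("crude", round((a / n1) / (c / n0), 2))  # 1.86
```

This is why the appraisal checklist asks whether important confounders were considered: stratified or adjusted analysis can remove an association that the crude table creates.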

 Are the aims clearly stated?
 Was the sample size justified?
 Are the measurements likely to be valid and reliable?
 Are the statistical methods described?
 Do the numbers add up?
 Was statistical significance assessed?
 What do the main findings mean?
 How are null findings interpreted?
 Are important effects overlooked?
 How do the results compare with previous reports?
 What implications does the study have for your practice?


 The essential questions:
  Who was studied?
  How was the sample obtained?
  What was the response rate?

 The detailed questions:
  Are the aims clearly stated?
  Is the design appropriate to the stated objective?
  Was the sample size justified?
  Are the measurements likely to be valid and reliable?
  Are the statistical methods described?
  Can the results be generalized?


 Conduct:
  Did untoward events occur during the study?

 Analysis:
  Were the basic data adequately described?
  Do the numbers add up?
  Was statistical significance assessed?
  Were the findings serendipitous?

 Interpretation:
  What do the main findings mean?
  How could selection bias arise?
  How are null findings interpreted?
  Are important effects overlooked?
  Can the results be generalised?
  How do the results compare with previous reports?
  What implications does the study have for your practice?

 The essential questions:
  How were the cases obtained?
  Is the control group appropriate?
  Were data collected in the same way for cases and controls?


 The detailed questions:
  Are the aims clearly stated?
  Is the method appropriate to the aims?
  Was the sample size justified?
  Are the measurements likely to be valid and reliable?
  Are the statistical methods described?

 Conduct:
  Did untoward events occur during the study?

 Analysis:
  Were the basic data adequately described?
  Do the numbers add up?
  Was there data-dredging?
  Was statistical significance assessed?


 Interpretation:
  What do the main findings mean?
  Where are the biases?
  Could there be confounding?
  How are null findings interpreted?
  Are important effects overlooked?
  How do the results compare with previous reports?
  What implications does the study have for your practice?
