S3 IKM Unair
2009
Learning Outcomes
What is critical appraisal?
Critical appraisal is the systematic assessment of research evidence to judge its trustworthiness, value, and relevance in a particular context.
Vast and expanding literature.
Limited time to read.
Different reasons to read mean different strategies:
- Keeping up to date.
- Answering specific clinical questions.
- Pursuing a research interest.
Keeping up to date: skimming the main journals and summary bulletins.
Answering specific clinical questions: finding good-quality literature on the subject.
Pursuing a research interest: extensive literature searching.
What kind of reports do I want?
How much detail do I need?
How comprehensive do I need to be?
How far back should I search?
11 items
Title, authorship
Abstract
Introduction
Methods
Results
Discussion
Conclusion & recommendation
References
Describing the outcome.
Describing the relationship between risk factor and outcome.
Clear and communicative.
12-16 words.
In English and Bahasa Indonesia.
Overview summary of the work.
Highlights of the results (objective, method).
General statement of significant findings.
Around 200-250 words.
Key words (3-8 key words).
Rationale (background):
- Magnitude of the problem.
- Impact of the outcome.
- Differences between previous results (risk factors and outcomes).
To identify specific risk factors and outcomes for a specific area or population.
Relevant, up-to-date literature review.
Study design
Population and time
Sample (inclusion & exclusion criteria),
sample size, sampling method
Variable identification:
predictor and dependent variables
Data collection method
Data measurement method
Statistical method
Ethics, informed consent
Discussion
Important aspects
Meaning (implication of results):
significance, comparison with other
studies
Answer the research problem and aim of the
study
Acknowledgement
References
EPIDEMIOLOGICAL STUDIES
Population
Inclusion & exclusion criteria
Minimal sample size
Sample selection methods
SAMPLING METHOD PROCEDURES
1. Determination of the objective of the study
2. Determination of the sampling population
3. Sample size estimation
4. Determination of the sampling method
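The sample-size estimation step is often based on the standard formula for estimating a single proportion, n = z²p(1−p)/d². A minimal Python sketch; the function name and the example prevalence/precision values are illustrative, not from the source:

```python
import math

def sample_size_proportion(p, d, z=1.96):
    """Minimum sample size to estimate a proportion p with
    absolute precision d at ~95% confidence (z = 1.96)."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# Worst-case expected prevalence 50%, desired precision +/-5%:
print(sample_size_proportion(0.5, 0.05))  # 385
```

Using p = 0.5 maximizes p(1−p) and hence gives the most conservative (largest) sample size.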
SAMPLING METHODS
Non-random:
• Purposive/judgemental
• Quota
• Accidental/convenience
Random:
• Simple random
• Systematic random
• Simple stratified random
• Proportional stratified random
• Cluster stratified random
• Multistage sampling
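The contrast between several of these methods can be sketched with Python's standard library. The sampling frame of 100 subjects and the two strata below are hypothetical:

```python
import random

random.seed(0)
population = list(range(1, 101))  # hypothetical sampling frame of 100 subjects

# Simple random sampling: every subject has an equal chance of selection.
simple = random.sample(population, 10)

# Systematic random sampling: every k-th subject after a random start.
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# Proportional stratified random sampling: each stratum is sampled in
# proportion to its size (two hypothetical strata of 60 and 40 subjects).
strata = {"urban": population[:60], "rural": population[60:]}
stratified = [subject
              for stratum in strata.values()
              for subject in random.sample(stratum, len(stratum) // 10)]

print(len(simple), len(systematic), len(stratified))  # 10 10 10
```

All three draws have the same size; they differ in how selection probability is spread across the frame.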
Is it of interest?
Why was it done?
How was it done?
What has been found?
What are the implications?
What else is of interest?
Is it of interest?
Title, abstract, source.
Why was it done?
Introduction.
▪ Should end with a clear statement of the purpose of the study.
▪ The absence of such a statement can imply that the authors had no
clear idea of what they were trying to find out.
▪ Or they didn’t find anything but wanted to publish!
How was it done?
Methods.
▪ Brief but should include enough detail to enable one to
judge quality.
▪ Must include who was studied and how they were
recruited.
▪ Basic demographics must be there.
▪ An important guide to the quality of the paper.
What are the implications?
Abstract / discussion.
▪ The value of research lies in how far the results can be
generalised.
▪ All authors will tend to think their work is more
important than it is.
▪ What is new here?
▪ What does it mean for health care?
▪ Is it relevant to my patients?
The degree to which a variable actually
represents what it is supposed to represent.
Best way to assess: comparison with a
reference standard.
Threatened by: systematic error (bias),
contributed by:
- Observer
- Subject
- Instrument
The basic types of error may be divided into:
- Random (chance) error
- Systematic error
Random error is by-chance error that makes observed
values differ from the true value. It may arise through
sampling variability or random fluctuation of the event of
interest.
Systematic error, or bias, is any difference between the true
value and the observed value due to causes other than
random fluctuation and sampling variability. This type of
error is generally more important and harder to detect, e.g.
over-estimating every subject's body weight by 0.1
kilogram because of an inaccurate weighing scale.
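The distinction can be illustrated with a small simulation mirroring the weighing-scale example; the 0.5 kg noise level and sample size are assumptions for illustration:

```python
import random
import statistics

random.seed(1)
true_weight = 70.0

# Random error: readings scatter around the true value; averaging many
# readings brings the mean close to the truth.
readings = [true_weight + random.gauss(0, 0.5) for _ in range(1000)]

# Systematic error (bias): a miscalibrated scale adds 0.1 kg to EVERY
# reading, so averaging cannot remove it.
biased = [w + 0.1 for w in readings]

print(round(statistics.mean(readings), 1))  # ~70.0: random error averages out
print(round(statistics.mean(biased) - statistics.mean(readings), 2))  # 0.1
```

More observations shrink random error but leave systematic error untouched, which is why bias is the more dangerous of the two.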
Figure 3. Schematic presentation of common bias & error found in epidemiologic studies
Selection bias
Information bias
Confounding bias
Population → Sample
Methods of sample selection:
- Random
- Systematic
- Multistage
- Purposive
- etc.
Methods of selection :
Source of data ?
Instrument and media (reagent) ?
Executors ?
Evaluation of quality of measurement:
Repeatability
- Subject (biological) variation: random
- Observer (measurement) variation:
  - Within-observer variation (tends to be random)
  - Between-observer variation (tends to be systematic)
Validity
- Sensitivity (ability to identify true positives)
- Specificity (ability to exclude true negatives)
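Sensitivity and specificity follow directly from a 2×2 table of test result against true status. A minimal sketch; the counts are hypothetical:

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity: proportion of true positives the test identifies.
    Specificity: proportion of true negatives the test excludes."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical screening test against a reference standard:
# 90 true positives, 10 false negatives, 80 true negatives, 20 false positives.
sens, spec = sensitivity_specificity(tp=90, fp=20, fn=10, tn=80)
print(sens, spec)  # 0.9 0.8
```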
1. Sampling patients – chance (external validity)
2. Assembling patients into groups – selection bias, chance
3. Making measurements – measurement bias, chance
4. Analyzing results – confounding, bias
5. Generalizing conclusions from the sample to other patients
Source: Amri Z, 2005
Sampling – chance
Assembling into groups – selection bias &
chance
Making measurements – measurement bias
Analyzing results – confounding
Conclusions from the sample – generalization?
EXTERNAL VALIDITY
INTERNAL VALIDITY
Drawing conclusions:
FINDINGS IN THE STUDY – (infer: internal validity) → TRUTH IN THE STUDY – (infer: external validity) → TRUTH IN THE UNIVERSE
CRITICAL APPRAISAL:
SURVEY & CASE-CONTROL STUDY
Is it of interest? / title, abstract
Why was it done? / introduction
How was it done? / methods
What has it found? / results
What are the implications? / abstract, discussion
What else is of interest? / introduction, discussion
Statistical significance
The play of chance
The logic of statistical tests
Confidence intervals
Bias
Confounding
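The "play of chance" and the logic of a statistical test can be shown with a small permutation test: how often would shuffled (exposure-irrelevant) groupings produce a difference at least as large as the one observed? The data below are invented for illustration:

```python
import random

random.seed(7)

exposed = [12, 15, 14, 16, 13, 17]    # hypothetical outcome scores
unexposed = [11, 12, 10, 13, 12, 11]
observed = sum(exposed) / 6 - sum(unexposed) / 6  # difference in means: 3.0

# Under the null hypothesis the group labels are arbitrary: shuffle them
# many times and count how often chance alone matches the observed gap.
pooled = exposed + unexposed
extreme = 0
n_perm = 10_000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = sum(pooled[:6]) / 6 - sum(pooled[6:]) / 6
    if diff >= observed:
        extreme += 1

p_value = extreme / n_perm  # small p: the gap is unlikely to be chance alone
print(observed, p_value)
```

A small p-value argues against chance as the explanation, but says nothing about bias or confounding, which must be judged separately.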
Relative Risk | Confidence Interval | Comment
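A relative risk is read together with its confidence interval: if the 95% CI excludes 1, the association is statistically significant at the 5% level. A sketch using the common log-RR standard error (Katz method); the 2×2 counts are hypothetical:

```python
import math

def relative_risk_ci(a, b, c, d, z=1.96):
    """Relative risk from a 2x2 table with its approximate 95% CI.
    Exposed: a diseased, b healthy; unexposed: c diseased, d healthy."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

rr, lo, hi = relative_risk_ci(a=30, b=70, c=15, d=85)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 2.0 1.15 3.48
```

Here the CI (1.15-3.48) lies wholly above 1, so the doubled risk would be judged statistically significant.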
Family income
(confounding variable)
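Stratifying on the suspected confounder shows whether an apparent (crude) effect survives. In this invented example, family income creates a crude relative risk of 1.5 even though there is no effect within either income stratum:

```python
def relative_risk(a, b, c, d):
    """RR from a 2x2 table: exposed a/b (diseased/healthy), unexposed c/d."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical strata of family income (the suspected confounder).
low_income = dict(a=40, b=60, c=20, d=30)    # RR = 1.0 within stratum
high_income = dict(a=5, b=45, c=10, d=90)    # RR = 1.0 within stratum

# Collapsing the strata mixes groups with different baseline risks:
crude = relative_risk(40 + 5, 60 + 45, 20 + 10, 30 + 90)

print(relative_risk(**low_income), relative_risk(**high_income))  # 1.0 1.0
print(round(crude, 2))  # 1.5 -- an artefact of confounding by income
```

Whenever crude and stratum-specific estimates disagree like this, the stratified (or adjusted) estimates are the ones to report.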
Are the aims clearly stated ?
Was the sample size justified ?
Are the measurements likely to be valid and reliable ?
Are the statistical methods described ?
Do the numbers add up ?
Was statistical significance assessed ?
What do the main findings mean ?
How are null findings interpreted ?
Are important effects overlooked ?
How do the results compare with previous reports?
What implications does the study have for your practice ?
The detailed questions
Are the aims clearly stated ?
Is the design appropriate to the stated objective?
Was the sample size justified?
Are the measurements likely to be valid and
reliable ?
Are the statistical methods described ?
Could the results be generalized ?
Conduct :
Did untoward events occur during the study ?
Analysis :
Were the basic data adequately described?
Do the numbers add up ?
Was the statistical significance assessed ?
Were the findings serendipitous?
Interpretation :
What do the main findings mean ?
How could selection bias arise ?
How are null findings interpreted ?
Are important effects overlooked?
Can the results be generalised ?
How do the results compare with previous reports ?
What implications does the study have for your
practice ?
The essential questions
How were the cases obtained ?
Is the control group appropriate ?
Were data collected in the same way for cases
and controls ?
Conduct :
Did untoward events occur during the study?
Analysis :
Were the basic data adequately described?
Do the numbers add up ?
Was there data-dredging ?
Was statistical significance assessed ?
Interpretation :
What do the main findings mean ?
Where are the biases?
Could there be confounding ?
How are null findings interpreted ?
Are important effects overlooked ?
How do the results compare with previous reports ?
What implications does the study have for your
practice ?