
Otolaryngol Clin N Am 40 (2007) 1261–1274

Teaching Evidence-Based Medicine in Otolaryngology
Jeffrey C. Liu, MD, Michael G. Stewart, MD, MPH*
Department of Otorhinolaryngology, Weill Cornell Medical College, 1305 York Avenue,
5th Floor, New York, NY 10021, USA

The term evidence-based medicine (EBM) has been used by many educa-
tors and physicians to describe the need for applying the ever-enlarging
body of knowledge to everyday care. Practicing EBM is a structured process
that integrates the best available literature with patient desires and clinical
experience to direct patient care decisions. The goal of this article is to pro-
vide a framework for teaching EBM in otolaryngology, particularly in a res-
ident training program.
This article begins by defining EBM: what it is and what it is not. Next,
potential barriers to EBM teaching are identified, and examples of
structured learning environments that provide effective frameworks for
learning are discussed. Key concepts to consider when critically evaluating
papers are then addressed. Finally, the article reviews an otolaryngologic example
of applying an evidence-based review to address a clinical question. Exam-
ples and literature from multiple disciplines are cited.

Defining the scope of the problem


EBM has been defined as the ‘‘conscientious, explicit, and judicious use
of current best evidence in making decisions about the care of individual
patients'' [1]. It requires the integration of the best available evidence with
physician clinical experience. EBM is not just about the evidence for treatment
of a medical problem; it goes beyond that to refer to the practice of applying
the literature to specific and individual patient care decisions. Eddy [2] calls
this process ‘‘Evidence-based individual decision making,’’ which is perhaps
a good alternate name for EBM.

* Corresponding author.
E-mail address: mgs2002@med.cornell.edu (M.G. Stewart).

0030-6665/07/$ - see front matter © 2007 Elsevier Inc. All rights reserved.
doi:10.1016/j.otc.2007.07.006  oto.theclinics.com

Evidence-based guidelines [2] are systematic reviews in which experts
rigorously evaluate the best available evidence concerning a particular
treatment problem and synthesize it into a summary. Although these
guidelines can be very useful, EBM as defined
for this article refers to more than just knowing the evidence or formulating
a guideline. It refers to providing evidence-based treatment to individual
patients.
The major benefit of practicing EBM is that it ensures the practice of up-
to-date medicine. Every clinician knows that the treatment and management
of diseases can change, and management guidelines can be updated as new
information becomes available. Sometimes, in a paradigm shift, previously
accepted treatments are found to be ineffective or even harmful. For exam-
ple, hormone replacement therapy in postmenopausal women was long
thought to have positive effects on health. However, recent evidence has
shown that this therapy may increase the risk of stroke [3], which has dras-
tically changed the indications for its use. Thus, EBM constantly incorpo-
rates the best available evidence into patient care decisions, leading to the
practice of the best possible medicine.
Practicing up-to-date medicine has additional advantages, such as when
the contemporary treatment is less invasive with similar or improved out-
comes. For example, many previously open vascular procedures, such as
aortic aneurysm repair, have been replaced with less invasive interventional
procedures with good results. In otolaryngology, endoscopic sinus surgery
has almost entirely replaced most open sinus surgery with excellent results.
In addition, when EBM is practiced and applied to many patients by many
physicians, the overall result should be better population outcomes and re-
duced overall complications.
Although many physicians understand the value of EBM in improving
practice, misconceptions still exist. A common misconception is that EBM
limits the physician’s ability to practice, or that it is ‘‘cookbook’’ medicine.
There is concern that the evidence will limit the physician’s freedom in the
decision-making process and restrict medical care to only following guide-
lines and algorithms. This is inaccurate. In EBM, the evidence is only one
component of treating the individual patient. Clinical experience and patient
preferences must also be combined with the literature to make clinical
decisions. Indeed, some have likened EBM to a three-legged stool, with
evidence forming only one of the legs [4]. Patient preferences and clinical
judgment are the other two legs; removal of any one leg will cause the stool
to fall over. Thus, published evidence should not limit the physician, but
rather should inform him or her as part of the clinical decision process [1].
Some are also concerned that practicing EBM means using only evidence
from randomized control trials (RCTs) in clinical practice, as RCTs are con-
sidered the highest level of evidence. Using the ‘‘best available evidence’’ is
not the same as using ‘‘only the best’’ evidence. Although it is true that
RCTs provide a highly rigorous and systematic evaluation of a treatment’s
effectiveness, not all problems are amenable to such evaluation. A mock sys-
tematic review of RCTs on the effectiveness of parachutes in preventing ma-
jor trauma [5], for example, was one author’s tongue-in-cheek method of
expressing that RCTs cannot be performed to answer all clinical questions.
The judicious review of other available types of evidence (cohort studies,
case control studies, case reports, and even expert opinion) can provide
significant guidance in clinical decision making. A related misconception about EBM is
that ‘‘lack of evidence’’ means ‘‘lack of effectiveness.’’ This is also untrue.
Many effective treatments have not been studied systematically, and exter-
nal evidence cannot answer every question.
The ability to practice EBM is not acquired instantaneously. It is developed over
time, and learning should begin with resident physicians in training. Indeed,
EBM involves appraisal of one’s own practice experience and assessing
whether it could be improved, which is part of practice-based learning
and improvement, a core competency for residency training. However, unlike
taking a good history or performing a thorough physical exam, the process
of acquiring and critically analyzing literature to inform clinical
decision making is not uniformly well taught in medical schools. In one
study, Caspi and colleagues [6] evaluated the perceived competence and per-
formance of graduating medical students in EBM techniques across the
United States. They first evaluated the respondents’ self-assessment of com-
petency in critically evaluating evidence, and then tested their actual skills
using an online instrument. The study found that although respondents
felt competent about their ability to evaluate problems across multiple com-
petencies, the average score of 55% correct on the assessment instrument
suggested otherwise. This is further evidence that residency training pro-
grams need to refine the skills of graduating medical students as they pro-
ceed toward becoming active physicians.
Residency programs have always provided residents with strong training
in the ‘‘clinical judgment’’ and ‘‘physician experience’’ aspects of medical
care, as well as the art of integrating patient preference and expectation
into decision making. However, searching, identifying, and appraising the
best evidence are often relegated to monthly ‘‘journal club’’ conferences
and maybe one or two didactic lectures, often from a nonphysician (ie,
a statistician). Teaching the process of EBM to residents should be a key
part of their training.

Barriers to teaching
Teaching EBM to residents remains a significant challenge because bar-
riers exist at multiple levels. These barriers can be divided into three broad
categories: institutional, resident related, and attending staff related. Much
of this section is derived from an excellent study by Bhandari and colleagues
[7]. This study used structured interviews to assess surgical residents'
perspectives on learning EBM at a major teaching institution. Some of the
barriers identified in their study are detailed below.
Two major institutional barriers to teaching EBM are the availability of
resource materials and the time needed to obtain them. The digital age has
improved the availability of resources. In the past, obtaining the
evidence required going to the library and spending time searching, retriev-
ing, and photocopying. Today, electronic resources are immediately avail-
able through online MEDLINE searches and digital PDF (portable
document format) documents. At the present authors’ institution, multiple
computers are connected to the Internet on all in-patient floors, which al-
lows for quick literature searches to be performed when clinical questions
arise during routine patient care activities. In addition, there are hundreds
of journals available online through the institution’s library that can be ac-
cessed from these locations. Some facilities also have institutional subscrip-
tions to online resources that present evidence-based reviews of available
clinical evidence on a topical basis [8]. The digital availability of these re-
sources for residents who are under time pressure is critical for the incorpo-
ration of published evidence into their clinical practice. An institutional
commitment to having these resources easily available dramatically reduces
this barrier to the physician’s practice of EBM.
Another institutional barrier identified was staff shortages. Again, per-
forming literature searches and evaluating the evidence take time. When
there is a shortage of staff (for example, in an outpatient clinic), less time
is available to the physician to gather and review evidence. Instead, there
is a pressing need to complete the clinical tasks needed to get patients
seen. Thus, adequate staff coverage is one of the resources needed to provide
time for the physician to pursue the reading, review, and incorporation of
the literature into his or her practice.
Multiple barriers at the resident level were identified in Bhandari and
colleagues' study [7]. Some residents lacked the motivation or desire to analyze
and apply published literature to their practice. Heavy call requirements and
fatigue on some rotations often influenced this feeling. Some residents also
felt that EBM training would not be useful because they sensed resistance or
apathy on the part of the attending staff. It was also suggested that discus-
sions with faculty surgeons about evidence that may conflict with current
practice might result in a backlash or repercussions. In general, resident at-
titudes were strongly shaped by the faculty surgeons.
At the attending level, a lack of EBM knowledge and practice amongst
attending surgeons was noted as a potential barrier [7]. Since patients are
ultimately the responsibility of the attending physician, lack of interest by
the faculty in EBM provides no motivation for residents to acquire these
skills. In addition, a lack of expertise or experience with EBM at the faculty
level limits the capability to teach these skills to residents. Finally, even
worse than a lack of knowledge or interest, negative attitudes by faculty
concerning applying evidence toward patient care can be a serious barrier.

Taken together, many barriers can exist to the teaching of EBM in resi-
dency. These barriers lie in three areas: resident, attending, and institutional.
Strategies and mechanisms to overcome institutional barriers to teaching
EBM are beyond the scope of this article. However, assuming that resources
are available, the ability to provide good EBM teaching is mostly limited by
attending and resident physician factors.
The ability to provide EBM teaching to residents necessarily begins with
a commitment by attending surgeons. Many of the factors identified in
Bhandari and colleagues' study [7] stem from a lack of background or commitment
amongst attending physicians. Establishing a fundamental knowledge base
in EBM and widespread acceptance of its value are important steps to over-
come these key barriers to teaching EBM. In an upcoming section, struc-
tured strategies that have been reported by training programs to teach
EBM are discussed.

The process of delivering evidence-based medicine


To discuss how to teach EBM, the process of delivering EBM must first
be understood. This process has been described elsewhere in greater depth
[9], but in summary the practice of EBM is a five-step process (Fig. 1).
1. Formulate a focused, clinically pertinent question.
2. Search for and retrieve appropriate literature.
3. Critically review and grade this literature.
4. Summarize and formulate recommendations from the best available
evidence.
5. Integrate the recommendations from step 4 with the physician's experience
   and patient factors to determine optimal care.
By definition, EBM includes only clinical research on human subjects. Al-
though bench research is important, it is not graded or included as evidence
until it is translated into human subject research.

Clinically pertinent question


Given all the complexity of modern medicine, it is surprising that the first
step of formulating a focused clinical question might be the most difficult
step. Many clinical questions arise from a specific patient’s presentation.
It is important then to formulate questions that are as direct and specific
as possible. For example, a question like ‘‘What is the best method of per-
forming a tonsillectomy?’’ can be difficult to answer as it is very broad and
could refer to multiple scenarios. One helpful strategy when formulating
specific questions is to consider the acronym PICO: Patient, Intervention,
Comparison, Outcome [4]. To refine the previous question, ‘‘In patients
aged 4 to 7 years old with recurrent tonsillitis (patient), does cold surgical
technique (intervention) have a greater reduction in postoperative pain
and return to regular oral intake (outcome) when compared with electrocautery
technique (comparison)?'' This focused question lends itself to literature
searches, and gives the best chance for a successful answer.

Fig. 1. Five steps of the evidence-based medicine process. (In the flowchart,
steps III and IV may alternatively be combined by critically evaluating
published evidence-based reviews or guidelines.)
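For illustration, the PICO elements can be represented as a small data structure. The following Python sketch is purely didactic; the class name, its fields, and the wording template are inventions of this example rather than part of any EBM software:

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """A focused clinical question in Patient-Intervention-Comparison-Outcome form."""
    patient: str
    intervention: str
    comparison: str
    outcome: str

    def as_text(self) -> str:
        # Assemble the four elements into a single answerable question.
        return (f"In {self.patient}, does {self.intervention} "
                f"improve {self.outcome} compared with {self.comparison}?")

# The tonsillectomy example from the text, restated in PICO form.
q = PICOQuestion(
    patient="patients aged 4 to 7 years with recurrent tonsillitis",
    intervention="cold surgical technique",
    comparison="electrocautery technique",
    outcome="postoperative pain and return to regular oral intake",
)
print(q.as_text())
```

Writing the question down in this skeletal form makes it obvious when one of the four elements is missing, which is usually why a search fails.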

Literature search
The second step is the literature search. MEDLINE, a computerized database
of most of the world's biomedical literature, is maintained by the National
Library of Medicine. It can be searched free of charge in the United States
through the PubMed Web site at www.pubmed.gov. Executing a search of the
literature requires practice and experience.
There are definitely techniques to improve the effectiveness of literature
searching, and a detailed discussion of literature searching is beyond the
scope of this article. Others have addressed this question in more detail
[10]. The third and fourth steps, critical literature review and summary,
are covered later in this article.
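For clinicians or residents who wish to script such searches, MEDLINE can also be queried programmatically through the National Library of Medicine's E-utilities interface. The sketch below only constructs a search URL and does not contact the server; the helper function and the query terms are illustrative assumptions, not a recommended search strategy:

```python
from urllib.parse import urlencode

# Base endpoint for the NCBI E-utilities esearch service (PubMed's programmatic search).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_query(terms, filters=()):
    """Join search terms and optional filters with AND, then build the esearch URL."""
    query = " AND ".join(list(terms) + list(filters))
    return ESEARCH + "?" + urlencode({"db": "pubmed", "term": query, "retmax": 20})

url = build_pubmed_query(
    ["tonsillectomy", "postoperative pain"],
    filters=["randomized controlled trial[pt]"],  # publication-type filter
)
print(url)
```

The returned URL, when fetched, yields an XML list of matching PubMed identifiers; refining the terms in the PICO question refines the query string in the same way.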
Although it is important for physicians to understand the concepts behind
comprehensive searching, and to perform literature searches themselves when
needed, in reality this process can be lengthy and tedious. Therefore, the
clinician successfully practicing EBM should be able to identify and use
available resources to assist in EBM practice. In many cases, the steps of lit-
erature search, critical review, and evidence synthesis have already been per-
formed and published. The clinician can then use these summaries, either as
the definitive review or as a key starting point. Evidence-based reviews and
working group or committee recommendations are typically performed by
large groups of experts (often a multidisciplinary group), and undergo
peer review. The time saved by using these resources can be significant, es-
pecially for the busy clinician. However, because these articles require faith
that the process was thorough and unbiased, it is important that clinicians
still understand the skills needed to perform these reviews. In addition,
evidence-based reviews have not been performed for some questions, and for
others only one or two articles are available, again pointing out the
clinician's need to be able to review and synthesize evidence independently.
Although UpToDate [8] and the Cochrane Library [11] are respected resources
for EBM reviews on selected topics, there is no consolidated source for
otolaryngologic evidence-based reviews. The Agency for Healthcare Research
and Quality (www.ahrq.gov) has some reviews of otolaryngology topics, such
as its review of acute bacterial sinusitis [12].
Reviews such as these are sometimes also published in otolaryngology jour-
nals [13,14].
There are some problems with relying on existing evidence-based reviews.
They usually only tackle problems of general interest, and many questions
that challenge a clinician may not have ‘‘made the radar’’ for evaluation.
Some organizations that perform evidence-based reviews make the decision
to only review and report on the highest-level evidence (eg, RCTs). The phi-
losophy is that compiling and summarizing lower-quality evidence is not
worth the time, and efforts should be directed at addressing questions
with higher-level evidence. This creates a problem in several areas
(particularly surgical questions) where there may be a paucity of randomized
or controlled trials, and therefore the evidence that is available is not
systematically reviewed. Also, some databases are proprietary and expensive,
putting them out of reach for clinicians not in a large organization.
The final step, integration of the best available evidence with clinical
judgment and the patient’s wishes, is the ‘‘balancing the three-legged stool’’
concept discussed earlier. It is important to remember that the evidence was
gathered from populations of patients, but is being applied to the care of an
individual patient.

Structured methods for teaching evidence-based medicine


Taken together, the structured activities for teaching EBM in residency
are designed to teach the necessary skills of this five-step process. Three
structured systems for teaching EBM (journal club, professor rounds, and
open evidence-based curriculum) are major paradigms for teaching
residents EBM. Journal clubs are a popular forum for teaching EBM,
being used by 95% of internal medicine programs [15] and 73% of family
medicine programs [16]. The varying strategies and structures used in
journal clubs have been described in detail elsewhere [17]. Most journal
clubs are structured around reviewing a single landmark or recent publi-
cation in depth. The research methodology, statistical analysis, premises,
and conclusions are among the items reviewed to help evaluate the qual-
ity of the paper. The paper is also reviewed in the greater context of the
practice of medicine to determine its applicability. Discussion of all these
aspects allows residents to discover how to perform critical analysis more
effectively. In this way, the journal club emphasizes skills in critical
analysis of the literature. Although a useful exercise, it is limited because the discussions
usually occur in a general context and not in reference to a specific pa-
tient. In addition, studies should be assigned specific levels of evidence
based on methodology in each journal club, a step that does not uni-
formly occur.
In open evidence-based curricula [17], a clinical scenario is presented that
is usually based on an actual patient. Residents then perform literature and
evidence reviews to answer a clinically related question regarding appropri-
ate management of the presented patient. Doing so allows for many other
steps to occur that go beyond the traditional journal club. The critical ap-
praisal of the literature is necessary to determine the quality of the evidence,
which is similar to a journal club. However, open evidence-based curricula
differ in that participants engage in the literature search process,
summarization, and integration with clinical experiences. The ensuing
discussion revolves not only around the quality of the evidence, but also
around the search method and different styles of integrating evidence with
clinical experience. Thus,
the incorporation of these additional steps provides a powerful teaching
strategy.
Finally, professor rounds offer a structured way of teaching EBM [18].
In this format, a patient is presented to an attending physician who is
unfamiliar with the case. The diagnosis, workup, and treatment are then
discussed. When questions arise regarding any part of this process, they
are refined into an answerable form. In the interval
between sessions, residents divide the questions and explore the literature.
These results are synthesized and presented at the following session. Like
the open curricula, the emphasis is less on strict critical appraisal of
the literature and more on directing that literature toward the care of
a single patient. The review of the literature and the discussion of the
evidence as it applies to an individual case encompass the broader aspects
of the EBM process.
These three strategies are all slightly different in their method of teaching
EBM. Journal club puts a heavy focus on critical journal review, a key part
of being able to deliver EBM. The other two activities, open curriculum and
professor rounds, emphasize the integration of the literature toward a single
patient.

Grading the evidence


Critical review of the literature
The goal in this section of the article is to briefly highlight the key con-
cepts and ideas to consider when reviewing a publication. These include un-
derstanding the study type, parameters of the patient population, measures
and instruments used, and finally some basic statistics. The concept of num-
ber needed to treat (NNT) is also an important statistical tool to evaluate
the relative value of an intervention. Critical literature review is an impor-
tant skill, and a thorough review is beyond the scope of this article. A
good resource to consider is the Journal of the American Medical
Association's running series titled ''Users' Guides to the Medical
Literature,'' by the
Evidence-Based Medicine Working Group. These articles discuss various
ways to analyze and incorporate medical literature into one’s practice. A
good beginning is number XX: ‘‘Integrating research evidence with the
care of the individual patient’’ [19].
In EBM, the fundamental parameter in grading evidence is evaluation of
the study methodology. All evidence is not equal, and the ‘‘levels’’ of evi-
dence are based on the quality of the study methodology. For experimental,
comparative, or observational studies, papers are assigned into one of five
general levels:

1. RCT
2. Cohort studies
3. Case control studies
4. Case series
5. Expert opinion

The grading of evidence begins with categorization of each study. Table 1
[20] shows the different types of studies and how they are graded. Table 2
shows how to grade the conclusions drawn from the review of all the indi-
vidual studies. Although the relative strength of study designs could be de-
bated, the underlying concept is the following: prospective studies are better
than retrospective, randomized design is better than nonrandomized, and
controlled is better than noncontrolled. Again, only human-subject studies
are graded as part of EBM.
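For concreteness, the mapping from study levels to recommendation grades (summarized in Table 2) can be sketched in code. This is a teaching illustration under simplifying assumptions: the function name and the extrapolated flag are inventions of this example, and Table 2's criterion of "troublingly inconsistent" results for Grade D is omitted for brevity:

```python
def grade_recommendation(levels, extrapolated=False):
    """Assign an evidence grade (A-D) from the levels (1-5) of the supporting
    studies, following the scheme in Table 2.  'extrapolated' means the
    evidence is being applied to a situation it did not directly address."""
    if not levels or any(lv not in range(1, 6) for lv in levels):
        raise ValueError("levels must be a non-empty collection of integers 1-5")
    worst = max(levels)                  # the weakest supporting study
    if worst == 5:
        return "D"                       # Level 5 evidence
    if worst == 4 or (extrapolated and worst in (2, 3)):
        return "C"                       # Level 4, or extrapolated Level 2/3
    if worst in (2, 3) or (extrapolated and worst == 1):
        return "B"                       # consistent Level 2/3, or extrapolated Level 1
    return "A"                           # consistent Level 1 studies

print(grade_recommendation([1, 1]))      # consistent RCTs
print(grade_recommendation([4]))         # a single case series
```

The point of the exercise is that the grade is driven by the weakest link in the supporting evidence, not the strongest.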
Understanding the study type is the first key step to critically evaluating
a paper. The next step is to determine whether there are problems with study
design, enrollment, analysis, or interpretation, all of which may cause bias
or confounding. Many types of bias exist, such as lead time bias, selection
bias, and so forth. A full discussion of different types of bias is beyond
the scope of this article, but the EBM clinician should be familiar with these
concepts and recognize studies that have the potential for bias.
Another important factor to consider in critically evaluating literature is
the outcomes that were assessed and the techniques used to measure them.
For example, if the study evaluated quality of life as an outcome, did the
authors use a previously validated instrument or a proprietary one? What
did the authors consider the ''gold standard'' for diagnosis? These are
important considerations that should be evaluated before study results can
be translated into clinical practice.

Table 1
Levels of evidence

Level 1a
  Therapy/prevention, etiology/harm: SR (with homogeneity) of RCTs
  Prognosis: SR (with homogeneity) of inception cohort studies; CDR
    validated in different populations
  Economic and decision analyses: SR (with homogeneity) of Level 1
    economic studies

Level 1b
  Therapy/prevention, etiology/harm: Individual RCT (with narrow
    confidence interval)
  Prognosis: Individual inception cohort study with >80% follow-up; CDR
    validated in a single population
  Economic and decision analyses: Analysis based on clinically sensible
    costs or alternatives; systematic review(s) of the evidence; and
    including multiway sensitivity analyses

Level 1c
  Therapy/prevention, etiology/harm: All or none
  Prognosis: All or none case series
  Economic and decision analyses: Absolute better-value or worse-value
    analyses

Level 2a
  Therapy/prevention, etiology/harm: SR (with homogeneity) of cohort studies
  Prognosis: SR (with homogeneity) of either retrospective cohort studies
    or untreated control groups in RCTs
  Economic and decision analyses: SR (with homogeneity) of Level >2
    economic studies

Level 2b
  Therapy/prevention, etiology/harm: Individual cohort study (including
    low quality RCT; eg, <80% follow-up)
  Prognosis: Retrospective cohort study or follow-up of untreated control
    patients in an RCT; derivation of CDR or validated on split sample only
  Economic and decision analyses: Analysis based on clinically sensible
    costs or alternatives; limited review(s) of the evidence or single
    studies; and including multiway sensitivity analyses

Level 2c
  Therapy/prevention, etiology/harm: ''Outcomes'' research; ecological
    studies
  Prognosis: ''Outcomes'' research
  Economic and decision analyses: Audit or outcomes research

Level 3a
  Therapy/prevention, etiology/harm: SR (with homogeneity) of case control
    studies
  Economic and decision analyses: SR (with homogeneity) of Level 3b and
    better studies

Level 3b
  Therapy/prevention, etiology/harm: Individual case control study
  Economic and decision analyses: Analysis based on limited alternatives or
    costs, poor quality estimates of data, but including sensitivity
    analyses incorporating clinically sensible variations

Level 4
  Therapy/prevention, etiology/harm: Case series (and poor quality cohort
    and case control studies)
  Prognosis: Case series (and poor quality prognostic cohort studies)
  Economic and decision analyses: Analysis with no sensitivity analysis

Level 5
  Therapy/prevention, etiology/harm: Expert opinion without explicit
    critical appraisal, or based on physiology, bench research, or
    ''first principles''
  Prognosis: Expert opinion without explicit critical appraisal, or based
    on physiology, bench research, or ''first principles''
  Economic and decision analyses: Expert opinion without explicit critical
    appraisal, or based on economic theory or ''first principles''

Abbreviations: CDR, clinical decision rule; MA, meta analysis; RCT,
randomized control trial; SR, systematic review.
Adapted from Phillips B, Ball C, Sackett D, et al. Levels of evidence and
grades of recommendation. Oxford Centre for Evidence-Based Medicine, 2001.
Available at: http://www.cebm.net/levels_of_evidence.asp.

Table 2
Evidence-based grading system
Grade A  Consistent Level 1 studies
Grade B  Consistent Level 2 or 3 studies or extrapolations from Level 1
         studies
Grade C  Level 4 studies or extrapolations from Level 2 or 3 studies
Grade D  Level 5 evidence or troublingly inconsistent or inconclusive
         studies of any level
Adapted from Phillips B, Ball C, Sackett D, et al. Levels of evidence and
grades of recommendation. Oxford Centre for Evidence-Based Medicine, 2001.
Available at: http://www.cebm.net/levels_of_evidence.asp.
Finally, the physician should understand the potential difference between
statistical significance and clinical significance. Statistical significance mea-
sures the probability that the results seen might have been a result of chance
and, in many ways, statistical significance is a function of the sample size. It
is possible to find a statistically significant difference that is not clinically
significant. For example, after use of an oral appliance, patients with sleep
apnea might have a mean apnea-hypopnea index of 46, versus 49 in a non-
treated group. Even if that difference achieves statistical significance (eg,
P<.05, probably only achieved with a very large sample size), that just
means that the difference seen was likely not due to chance. From a clinical
standpoint, both groups still have severe sleep apnea, and a difference of 3
has no clinical significance. So the clinician should assess the clinical signif-
icance of results as well as statistical significance. This helps determine
whether the results are clinically useful.
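The oral appliance example can be made concrete with a quick calculation. In the sketch below, the standard deviation and sample size are invented purely for illustration; it shows that a 3-point difference in mean apnea-hypopnea index can be highly statistically significant with large samples even though both groups remain severely apneic:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical trial: mean AHI 46 (treated) versus 49 (untreated),
# with an assumed standard deviation of 10 and 200 patients per group.
mean_treated, mean_untreated, sd, n = 46.0, 49.0, 10.0, 200

# Two-sample z test for a difference in means (equal variances assumed).
z = (mean_untreated - mean_treated) / (sd * sqrt(2.0 / n))
p = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value

print(f"z = {z:.2f}, P = {p:.4f}")       # well below .05
print("both groups still severe (AHI > 30):",
      mean_treated > 30 and mean_untreated > 30)
```

Shrink the sample to 20 per group and the same 3-point difference is no longer statistically significant; the clinical meaning of the difference has not changed either way.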
Another measure of the clinical impact of an intervention is the NNT. In
an RCT, the NNT is the number of patients who would need to be treated with
a particular therapy to prevent one additional negative outcome. This is
a helpful statistical calculation for understanding the relative magnitude
of differences in outcome, and it gives a sense of the population impact of
changes in treatment.
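The NNT is simply the reciprocal of the absolute difference in event (or success) rates, conventionally rounded up to a whole patient. A minimal sketch, using the peritonsillar abscess success rates discussed later in this article (93.7% versus 91.6%) as a worked example:

```python
from math import ceil

def number_needed_to_treat(success_new, success_old):
    """NNT = 1 / absolute difference in success (or risk) rates,
    rounded up to a whole patient by convention."""
    arr = success_new - success_old          # absolute improvement
    if arr <= 0:
        raise ValueError("new treatment shows no absolute benefit")
    return ceil(1.0 / arr)

# Incision and drainage (93.7% success) versus needle aspiration (91.6%).
print(number_needed_to_treat(0.937, 0.916))   # 48 patients
```

A large NNT for a similar-risk intervention, as here, signals that the extra benefit per patient is small, which is exactly the population-level perspective the NNT is meant to provide.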

Example evidence reviewdperitonsillar abscess


In this last section, a fictional resident learning experience is described to
highlight the EBM learning process.
An otherwise healthy 22-year-old male presents to the resident clinic with
painful swallowing, mild trismus, ‘‘hot-potato’’ voice, and unilateral neck
pain for the last 7 days. The junior resident in the clinic evaluates the patient
and makes a presumptive diagnosis of peritonsillar abscess (PTA). The at-
tending physician recommends treating the patient with needle aspiration.
However, another resident notes that he routinely performs transoral
incision and drainage (I&D) of PTAs because he thinks that it is more
successful and definitive than needle aspiration alone. The attending faculty
physician suggests that the resident team perform a literature search to look
for evidence on each potential treatment.
An online literature search using MEDLINE identifies a published evi-
dence-based review of PTA treatment by Johnson and colleagues [21].
The resident team reviews this article: three published RCTs (Level 1
evidence) have compared I&D with needle aspiration. None of these studies
provided statistical analysis, and the reported
success rates were nearly the same in all studies. The review has pooled the
data from those studies, and the success rate was approximately 93.7% for
I&D and 91.6% for needle aspiration; the difference was statistically
insignificant. An NNT analysis from the review showed that 48 patients
would need to undergo I&D to spare one patient a treatment failure from
needle aspiration. Overall, the grade of this evidence was C, even though
there were some limited RCTs.
After the findings are discussed, the participants agree that the evidence is
appropriate for their patient, and that either needle aspiration or I&D
would be appropriate treatment. Both options are presented to the patient,
who then relates his significant fear of needles and expresses his preference
for I&D. This is successfully performed.
This example illustrates the EBM process. A clinical question was identi-
fied from the resident clinic and presented to the supervising faculty. A lit-
erature review quickly identified a published review that addressed the
desired question, which was then reviewed in depth. This information was
then weighed against the context of the individual patient. Although the
less invasive option (needle aspiration) was offered first, the patient's
preference led to an I&D being performed. Although this example does not
follow one of the formal structured teaching strategies discussed
previously, it shows how adequate resources, sufficient time in a resident
clinic, and attending motivation allow the EBM process to occur when
clinical questions arise, all while resulting in good patient care.

Summary
EBM is rapidly becoming a cornerstone of good medical practice. It brings
the best available evidence, together with the experience of clinicians and
experts, to bear on the clinical decisions practitioners face every day.
Practicing good EBM is a structured intellectual process. Although EBM is
taught in medical schools, graduates' skills and abilities in practicing it
are generally inadequate; it falls to residency training to develop and
refine these skills.
EBM should always begin with a question about an individual patient,
which is restructured into a readily answerable question. An appropriate re-
view of the literature is then performed through database searches and crit-
ical review of journal articles. Reviewing published evidence-based
guidelines and reviews is a possible alternative. This evidence is summarized
and ultimately combined with one's clinical experience and patient factors
to render an appropriate treatment decision.
Teaching this process faces many potential barriers at the institutional,
resident, and attending physician levels; however, EBM can be successfully
taught in a residency training program. EBM is probably best taught by ex-
ample, from faculty physicians. The residents in training do not need to see
every step in the process every time, but they should see the end result:
knowledge of the best available evidence, and integration of that knowledge
into the care of individual patients.

References
[1] Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: what it is and what it
isn’t. BMJ 1996;312(7023):71–2.
[2] Eddy DM. Evidence-based medicine: a unified approach. Health Aff (Millwood) 2005;24(1):
9–17.
[3] Anderson GL, Limacher M, Assaf AR, et al. Effects of conjugated equine estrogen in post-
menopausal women with hysterectomy: the Women’s Health Initiative randomized con-
trolled trial. JAMA 2004;291(14):1701–12.
[4] Bailey BJ, Johnson Jonas T, Newlands Shawn D. Head & neck surgery–otolaryngology. 4th
edition. Philadelphia: Lippincott Williams & Wilkins; 2006. p. 33–40.
[5] Smith GC, Pell JP. Parachute use to prevent death and major trauma related to gravitational
challenge: systematic review of randomised controlled trials. BMJ 2003;327(7429):1459–61.
[6] Caspi O, McKnight P, Kruse L, et al. Evidence-based medicine: discrepancy between per-
ceived competence and actual performance among graduating medical students. Med Teach
2006;28(4):318–25.
[7] Bhandari M, Montori V, Devereaux PJ, et al. Challenges to the practice of evidence-based
medicine during residents’ surgical training: a qualitative study using grounded theory.
Acad Med 2003;78(11):1183–90.
[8] UpToDate. [Electronic online resource]. Available at: http://www.utdol.com. Accessed
June, 2007.
[9] Sackett DL. Evidence-based medicine: how to practice and teach EBM. 2nd edition. New
York: Churchill Livingstone; 2000.
[10] Stewart MG, Kuppersmith RB, Moore AS. Searching the medical literature on the internet.
Otolaryngol Clin North Am 2002;35(6):1163–74, v–vi.
[11] The Cochrane Library. Wiley InterScience. Available at: http://www.thecochranelibrary.
com/. Accessed June, 2007.
[12] Diagnosis and treatment of acute bacterial rhinosinusitis. Summary, evidence report/tech-
nology assessment: Number 9, March 1999. Agency for Health Care Policy and Research.
Available at: http://www.ahrq.gov/clinic/epcsums/sinussum.htm. Accessed June, 2007.
[13] Benninger MS, Sedory Holzer SE, Lau J. Diagnosis and treatment of uncomplicated acute
bacterial rhinosinusitis: summary of the Agency for Health Care Policy and Research evi-
dence-based report. Otolaryngol Head Neck Surg 2000;122(1):1–7.
[14] Rosenfeld RM, Casselbrant ML, Hannley MT. Implications of the AHRQ evidence report
on acute otitis media. Otolaryngol Head Neck Surg 2001;125(5):440–8 [discussion: 439].
[15] Sidorov J. How are internal medicine residency journal clubs organized, and what makes
them successful? Arch Intern Med 1995;155(11):1193–7.
[16] Heiligman RM, Wollitzer AO. A survey of journal clubs in U.S. family practice residencies.
J Med Educ 1987;62(11):928–31.
[17] Green ML. Evidence-based medicine training in graduate medical education: past, present
and future. J Eval Clin Pract 2000;6(2):121–38.
[18] Haines SJ, Nicholas JS. Teaching evidence-based medicine to surgical subspecialty residents.
J Am Coll Surg 2003;197(2):285–9.
[19] McAlister FA, Straus SE, Guyatt GH, et al. Users’ guides to the medical literature: XX.
Integrating research evidence with the care of the individual patient. Evidence-Based Med-
icine Working Group. JAMA 2000;283(21):2829–36.
[20] CEBM (Center for Evidence Based Medicine). "Levels of Evidence." Available at: http://
www.cebm.net/index.aspx?o=1047. Accessed June, 2007.
[21] Johnson RF, Stewart MG, Wright CC. An evidence-based review of the treatment of peri-
tonsillar abscess. Otolaryngol Head Neck Surg 2003;128(3):332–43.
