EBM Studyguide
LEARNING OBJECTIVES:
1. Understand the principles of evidence-based medicine (EBM) & develop insight into
relevance in clinical setting.
2. Develop knowledge of the hierarchy of evidence in EBM
3. Examine barriers to the adoption of EBM
4. Examine economic analysis as an example of the breadth of EBM
OBJECTIVE 1: Understand the principles of evidence-based medicine (EBM) & develop insight
into relevance in clinical setting.
What is EBM?: method of critical thinking in clinical setting that permits better decision making
& drives better care & outcomes.
USES
● Reducing emphasis on unsystematic clinical practice
● Reducing variations in treatment/procedures
● Post-grad education
● Model for continuous self-directed, problem-based, lifelong learning
PRINCIPLES
● Not all evidence is equal
● Hierarchy of evidence to guide clinical decision-making
● Evidence alone is never enough to make a good decision
● Decision makers must balance risks and benefits of alternative management strategies in the context of patient values and preferences
CATEGORIES OF QUESTIONS
Clinical Finding how to properly gather and interpret findings from the history and physical examination.
Etiology/Risk how to identify causes or risk factors for disease(including iatrogenic harms).
Clinical Manifestation knowing how often and when a disease causes its clinical manifestations and how to use
this knowledge in classifying our patients' illnesses.
Differential Diagnosis when considering the possible causes of our patients’ clinical problems, how to select
those that are likely, serious, and responsive to treatment.
Diagnostic Test how to select and interpret diagnostic tests, in order to confirm or exclude a diagnosis,
based on considering their precision, accuracy, acceptability, safety, expense, etc.
Prognosis how to estimate our patient’s likely clinical course over time and anticipate likely
complications of the disorder.
Therapy how to select treatments to offer our patients that do more good than harm and that are
worth the effort and costs of using them.
Meaning how to empathize with your patients’ situations, appreciate the meaning they find in the
experience, and understand how this meaning influences their healing.
Improvement how to keep up-to-date, improve your clinical and other skills, and run a better, more
efficient clinical care system.
Background Question: 3 components (RVC)
● 1) Question Root, 2) Verb, 3) Condition
● Ex: How is hydrocephalus diagnosed? What causes swine flu?
STUDY DESIGN HIERARCHY: Best study designs for type of clinical question
● Clinical Exam/Diagnostic Testing:
prospective, blind comparison to gold
standard
● Therapy: RCT
Meta-Analysis: pool results from individual studies into large study; “take systematic review 1 step further”
● Pro: synthesize many small studies; help validate evidence
● Con: trials must be similar enough to combine; original studies are subject to bias
HIERARCHY of STRENGTH of Evidence for Prevention/Treatment Decisions
(strongest @ top; weakest @ bottom)
N-of-1 Randomized Trial [single patient is entire trial 🡪 determine best care for that specific pt]
Systematic Review of randomized trials
Single Randomized trial (1 randomized trial is better than review of an observational one)
Systematic Review of observational trials (addressing pt-impt outcomes)
Single observational study (addressing pt-impt outcomes)
Physiologic studies
Unsystematic Clinical observations
BIAS
Cause systematic variations in process/result; [starred from PPT are bolded; those in FA highlighted]
Attention Bias (Hawthorne Effect): people change behavior when they know they are being watched 🡪 results not generalizable
Attrition Bias: unequal loss of participants from diff groups in a trial 🡪 trial ends unbalanced
Channeling Bias: drug therapies w/ similar indications assigned to groups of pts w/ varying baseline prognoses; a type of selection bias in
observational studies
Chronological Bias: treatment/disease definitions change over time; thus time becomes confounding variable
Detection Bias: systematic error in assessing outcomes (ex: put subject in wrong group, or misclassify intervention); reduce via blinding
Differential Verification Bias: Measurement bias in which results of diagnostic test affect whether gold standard procedure is used to verify
test result; cohort studies susceptible to this (b/c not ethical to give each pt a gold standard test)
● Causes too high sensitivity and too low specificity
Lead Time Bias: early detection of disease is interpreted as increased survival
Length Time Bias: screening test detects disease w/ long latency period while those w/ rapid onset become symptomatic earlier
Publication Bias: studies w/ + results are more commonly published (but this can be bad if there are a lot of studies disproving a
theory/treatment that are not published)
Recall Bias: awareness of disorders alters recall by subjects; common in retrospective studies
Reporting Bias: only some trial outcomes reported; if an entire study is not published, synonymous to publication bias
Sampling Bias: methods used to sample pop are such that a certain part of pop is more likely to be selected; type of selection bias
Selection Bias: nonrandom sampling/treatment allocation to subjects 🡪 pop. studied is not representative of target pop.
Spectrum Bias: tests do not perform equally in all subjects (ex: CXR detects a large pneumothorax in sick pts but not small ones in healthy pts)
[Diagram in original grouped the biases by when they occur: start of study, as study proceeds, completion of study]
COST ANALYSIS
How does a treatment/test fare compared to the opportunity costs associated w/ alternative?
Value = Quality/Cost
In this, we are examining CLINICAL QUESTIONS (which arise from work w/ patients) as opposed to research
questions (which launch lit reviews, study designs, etc.). Clinical questions help us make clinical decisions;
research questions advance medical knowledge.
1 Systems: Medical info resources integrated into EMR; Beaumont integrates UpToDate; support clinical decision-making
2 Summaries: Current best evidence summarized clearly & succinctly; **UpToDate, DynaMed, Clinical Practice Guidelines
● Do not confuse w/ textbooks that provide background info
● UpToDate: larger database; 2-tier graded sys (some topics may lack grades b/c imposed retrospectively); “updated daily”
● DynaMed: smaller database; 3-tier graded sys
3 Synopses of Syntheses: 1-page summary of systematic review w/ expert commentary (AKA critical appraisal); DARE, ACP Journal Club
● Exactly the same as synopses of studies but w/ systematic reviews
4 Syntheses: Systematic reviews & meta-analyses; comprehensive/reproducible review of specific topic based on exhaustive lit
searches; Cochrane, PubMed Health
5 Synopses of Studies: 1-page summary of individual study w/ expert commentary (AKA critical appraisal); articles selected for critical
appraisal based on relevance & interest; Evidence-Based Journals, ACP Journal Club
6 Studies: PubMed Clinical Queries (diff than regular PubMed)
SORT (Strength of Recommendation Taxonomy): uses study quality (1-3) & strength of
recommendation (A-C) to evaluate grade of literature
Summary
1. ASK: Use PICO to help translate clinical cases into searchable question [see S1, objective 1]
2. ACQUIRE:
● Start your search using resources at the top of the 6S Pyramid and work your way down
to discover new or rare clinical studies
● You can also use ACCESSSS or Trip to search across these resources
3. APPRAISE: For critiquing the evidence you find, use JAMAevidence and the SORT tool
S4-6: Screening & Diagnosis
LEARNING OBJECTIVES:
Lead Time: time btwn detecting disease via screening & time of diagnosing disease based on
symptoms; ideally want this to be as long as possible
Screening Hazards:
● False-positive results may cause unnecessary anxiety, expense, and even a risk of hazardous
intervention in unaffected individuals (ex: breast self-exam-RCT showed no survival benefit,
50% unnecessary biopsies done)
● False-negative results may reassure and delay diagnosis of people who, in fact, have a
disease
**Randomized Controlled Trial (RCT): best b/c lowest risk of bias for screening; prospective
● Compare disease-specific cumulative mortality rates btwn those randomized to screening vs. no screening
● Pro: Eliminates confounding & lead time bias
● Con: expensive; time-consuming (b/c prospective so have to wait for disease to develop);
ethical concerns; tech changes
OBJECTIVE 3: Explain common BIASES that occur in screening and diagnostic trials
Lead Time Bias: early detection of disease is interpreted as increased survival
● Solution: RCT study design
Length Time Bias: screening initiative more likely to detect slow-growing disease
● Solution: count all outcomes regardless of method detected (not just screening)
Volunteer Bias: pop. not representative of general target pop. (if no randomization in selection, volunteers for study
likely to be in better health than general pop.)
● Solution: count all outcomes regardless of group
Selection Bias: is there diagnostic dilemma? (so not just choosing pt w/ obvious shingles or other disease)
Verification Bias: if the sample used to assess a diagnostic test is restricted to those who have the
condition measured, the sensitivity of the measurement can be overestimated (ex: pts w/ low probability of
pulmonary embolism on V/Q scan did not undergo the first standard test, angiography; they were monitored
[second standard], and only pts w/ high probability of PE underwent pulmonary angiogram)
Pre-Test Probability: probability of target condition being present before results of diagnostic
test known; (AKA you think the pt has the disease based on your clinical knowledge [but before
doing a test])
Post-Test Probability: probability of target condition being present after results of diagnostic
test are known; compare to thresholds [in diagram below]
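The move from pre-test to post-test probability is usually made through a test's likelihood ratio (a standard EBM relation, assumed here since the notes do not spell it out). A minimal sketch:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert pre-test probability to post-test probability
    using the test's likelihood ratio."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)  # probability -> odds
    post_odds = pre_odds * likelihood_ratio         # apply the test result
    return post_odds / (1 + post_odds)              # odds -> probability

# e.g. 30% pre-test probability, positive test w/ LR+ of 6 (made-up numbers)
print(round(post_test_probability(0.30, 6), 3))  # prints 0.72
```

The post-test probability is then compared to the test/treatment thresholds in the diagram below.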
Prognostic Factor: pt characteristic that confer increased or decreased risk of an outcome from disease
Validity:
● Was the sample representative (demographics, SES, gender, etc.)
o Solution: report any filters passed before entering study (ex: was
treatment at 1º, 2º, or 3º care facility); describe which patients were
included and excluded
● Were patients sufficiently homogeneous w/ respect to prognostic risk?
o We expect that outcome should apply to each member of group
o Requirement: subjects should be @ similar point in disease process
● Were outcome criteria objective & unbiased?
o As the process of determining an outcome grows to be more
subjective, it is important to blind to prognostic factors
● Threat to validity increases w/ increasing ratio of “lost to follow-up” : outcomes
Importance:
● How likely are outcomes over time? 🡪 to determine this, need to use measures that relate
events to time (ex: survival rate, median survival, survival curve [% of original sample who have not
yet had outcome of interest])
o Kaplan Meier Survival Estimates: provides plot of probability of survival over time
● How precise are estimates of likelihood 🡪 use confidence intervals [range within which it is likely
that the true mean lies]
o For Survival Curve: as sample size decreases over time, precision of estimate decreases 🡪
width of confidence interval increases
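The Kaplan-Meier estimate mentioned above multiplies, at each event time, the fraction of at-risk patients who survive that time. A minimal sketch with toy data (not from the slides):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates.
    times:  follow-up time for each subject
    events: 1 if the outcome occurred at that time, 0 if censored
    Returns (time, survival probability) at each event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t, deaths, censored = data[i][0], 0, 0
        while i < len(data) and data[i][0] == t:  # group tied times
            deaths += data[i][1]
            censored += 1 - data[i][1]
            i += 1
        if deaths:
            surv *= 1 - deaths / n_at_risk  # S(t) = product of (1 - d_i/n_i)
            curve.append((t, surv))
        n_at_risk -= deaths + censored      # remove events and censored pts
    return curve

# 5 toy pts: events at t=2 and t=5, censoring at t=3, 4, 6
print(kaplan_meier([2, 3, 4, 5, 6], [1, 0, 0, 1, 0]))
```

The widening confidence interval over time falls out of the shrinking `n_at_risk` at later event times.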
Applicability:
● Were the study patients and their management similar to mine?
o Possible Threats: uneven application of therapies to diff subgroups or uneven application
over time
● Follow-up sufficiently long?
o Possible Threat: impt outcomes outlast study duration
● Can I use results of study in management of my practice?
o Look to see if effect of prognostic factor crosses decision threshold
S8: Quality Improvement (QI)
LEARNING OBJECTIVES:
QI Studies: done in real world clinical environment; data from usual clinical documentation; expedited review;
context-dependent + complex + iterative; data observational/anecdotal
● Unit of Analysis: if QI targeting clinicians, then clinicians should be unit of analysis
OBJECTIVE 2: Discuss quality improvement initiatives and interventions to improve the care
and outcomes of patients
RANDOMIZED Designs
Individual-patient
randomized control trials
Cluster randomized trials Randomly assigning groups of pt
NON-RANDOMIZED Designs
Stepped Wedge Sequential rollout of QI intervention to clinicians/organizations over a number of
periods, so that by end of study all participants have received intervention
● Can randomize order in which participants receive intervention
● Data collected/outcomes measured @ each point a new group of participants
(“step”) receive intervention
Run Chart: data in chart over time w/ median; help determine common cause vs. special cause variation; should
have 16-25 points NOT on median (so can identify if any special causes)
● Run: series of points that do not cross median; points on median do not count as run
● Successful intervention will create smaller than expected # of runs
Alternating Points Rule At least 14 points consecutively alternate 🡪 indicate special cause in way data was
sampled (maybe 2 or more sources)
Number of Runs Rule Count # of runs 🡪 compare to “Test for Number of Runs Above & Below Median”
(this is a chart that is on slide 65 of the PPT)
Special Cause exists if: # of runs < lower limit; indicate >1 average
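The Number of Runs Rule above can be sketched as a simple count of median crossings (toy data; the comparison table from slide 65 is not reproduced here):

```python
def count_runs(points, median):
    """Count runs on a run chart: a run is a series of consecutive points
    on the same side of the median; points ON the median are skipped."""
    sides = [1 if p > median else -1 for p in points if p != median]
    runs = 1 if sides else 0
    for prev, cur in zip(sides, sides[1:]):
        if cur != prev:       # crossing the median starts a new run
            runs += 1
    return runs

data = [3, 5, 4, 6, 7, 5, 2, 3, 8, 9]  # made-up chart values
print(count_runs(data, median=5))       # prints 4
```

The count would then be compared to the tabulated lower/upper limits for that number of off-median points; fewer runs than the lower limit signals a special cause.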
OBJECTIVE 3: Identify problems with the quality and safety of patient care
Potential Harms:
● Prematurely adopt inadequately proven intervention 🡪 increase cost & result in harm
OBJECTIVE 5: Identify the considerations in appraising and using an article about quality
improvement
ASSESSING QI ARTICLE
Validity:
● Was data quality acceptable?
● Prognostic balance maintained as study progressed?
● Prognostic balance maintained when study completed?
● Extent of blinding?
o Pt unaware of the group they are in
o Investigator/caregiver unaware of group pt is in
o Double-Blind Study: both groups unaware
● Groups treated equally? (aside from experimental intervention)
Project Design:
● Aims of QI clearly stated?
● Definitions/measurement systems reported for all impt data?
Data Collection:
● Staff trained & quality assurance maintained?
Data Management:
● Review/report missing & outlier data?
Data Analysis:
● Number of participants initially approached, participated, dropped out all reported?
Results:
● Estimate of treatment effect?
● Intervention exportable to my site?
● Follow-up complete & sufficiently long?
S10-12: Harm
LEARNING OBJECTIVES:
OBJECTIVE 1: Summarize methodology, strength, & weakness of case-control & cohort study.
Design Goal of Harm Study: make exposed & unexposed similar in all respects other than
exposure and to maintain balance throughout
OBSERVATIONAL STUDIES
Case-Control Study: compare group of people w/ disease to group w/o disease 🡪 assess if prior
exposure/risk had impact on disease state
Odds Ratio (OR) = ad/bc
Ex: pts w/ COPD had higher odds of smoking history than those w/o COPD
Validity assessment:
● Were cases & controls similar w/ respect to indication or circumstance that would lead to exposure?
● Methods of determining exposure similar for cases & controls?
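The OR = ad/bc formula above, with hypothetical COPD/smoking counts (not from the slides):

```python
def odds_ratio(a, b, c, d):
    """OR = ad/bc for a 2x2 case-control table:
    a = cases exposed,    b = controls exposed,
    c = cases unexposed,  d = controls unexposed"""
    return (a * d) / (b * c)

# hypothetical counts: 60/40 COPD cases with/without smoking history,
# 20/80 controls with/without smoking history
print(odds_ratio(60, 20, 40, 80))  # prints 6.0
```

An OR of 6 would mean the odds of a smoking history are six times higher among cases than among controls.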
Survival Analysis: set of methods used for analyzing data when outcome variable is time until
occurrence of an event of interest (ex: death, hospitalization, disease, surgical revision); use
Kaplan-Meier & Cox Proportional Hazards Regression to analyze
● How long before I am better?; How long am I going to live?
● Survival Function: probability of surviving (AKA not experiencing event) @ time point
● Hazard Function: probability of experiencing event @ time point
S13-15: Systematic Review &
Meta-Analysis
LEARNING OBJECTIVES:
1. Explain the value and importance of systematic reviews in learning and practicing EBM
2. Identify the differences between narrative reviews and systematic reviews
3. Learn criteria to critically appraise systematic reviews
4. Recognize elements of a forest plot in a meta-analysis and interpret results displayed
Systematic Review: clearly stated set of objectives w/ pre-defined eligibility criteria for studies;
explicit/reproducible methodology; search identifies all studies meeting eligibility criteria &
assesses their validity
● Qualitative type: summarizes all studies but does not statistically combine them; looks at
demographics, characteristics of health problem/intervention, characteristics of study,
differences btwn studies (“heterogeneity”)
● Quantitative type (AKA Meta-Analysis): statistically combine data from diff studies
o 2 Stages: 1) Summary statistic (OR, RR, etc.) calculated for each study, 2)
Summary effect calculated for all studies
● Pros: save time; resolve uncertainty when original studies disagree; increase precision of
results; more generalizable results;
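Stage 2 above (calculating a summary effect across studies) is commonly done by inverse-variance weighting under a fixed-effect model; a sketch with made-up study estimates (the notes do not specify the pooling method, so this is an assumed illustration):

```python
import math

def inverse_variance_pool(effects, std_errors):
    """Fixed-effect meta-analysis: combine per-study effect estimates
    (e.g. log odds ratios) into one summary effect, weighting each
    study by the inverse of its variance."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)  # 95% CI
    return pooled, ci

# three hypothetical studies: log-OR estimates and their standard errors
pooled, ci = inverse_variance_pool([0.5, 0.3, 0.8], [0.2, 0.1, 0.4])
print(round(pooled, 3), tuple(round(x, 3) for x in ci))
```

Note how the most precise study (smallest standard error) dominates the pooled estimate, which is why the summary effect sits closest to 0.3; this also shows why pooling increases precision (narrower CI than any single study).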
OBJECTIVE 2: Identify the differences between narrative reviews and systematic reviews
Narrative Review: discusses 1 or more aspects of disease, prognosis, management; good for
background questions
OBJECTIVE 3: Learn criteria to critically appraise systematic reviews
Did Review explicitly address clearly focused question? (include PICO components)
● Inclusion & exclusion criteria should clearly map question so only include relevant studies
Was search for relevant studies detailed & exhaustive?
● Need to define exact search strategy (ex: terms used), platforms used, dates searched; specify if include non-English
studies or unpublished literature
● 2 Phases of Search: 1) screen titles/abstracts; 2) screen full articles
o Specify reason for exclusion
Publication Bias: studies w/ + results are more commonly published (but this can be bad if there are a lot of studies
disproving a theory/treatment that are not published)
Were 1º studies of high methodological quality?
● Assess validity [see table right]
● Assess & report bias
Were selection & assessments of studies reproducible? [includes data extraction & blinding researchers]
● Data: any info from a study; more than 1 person should extract data from every study to minimize error/bias
Heterogeneity: differences btwn studies; [ex: clinical (participants, intervention, outcome); methodological (study
design/risk of bias in study); statistical (in effect size)]