EBM Studyguide


S1: Intro to EBM

LEARNING OBJECTIVES:

1. Understand the principles of evidence-based medicine (EBM) & develop insight into
relevance in clinical setting.
2. Develop knowledge of the hierarchy of evidence in EBM
3. Examine barriers to the adoption of EBM
4. Examine economic analysis as an example of the breadth of EBM

OBJECTIVE 1: Understand the principles of evidence-based medicine (EBM) & develop insight
into relevance in clinical setting.

What is EBM?: method of critical thinking in clinical setting that permits better decision making
& drives better care & outcomes.

USES:
● Reducing emphasis on unsystematic clinical practice
● Reducing variations in treatment/procedures
● Post-grad education
● Model for continuous self-directed, problem-based, lifelong learning

PRINCIPLES:
● Not all evidence is equal
● Hierarchy of evidence to guide clinical decision-making
● Evidence alone is never enough to make a good decision
● Decision makers must balance risks and benefits of alternative management strategies in the context of patient values and preferences

Ask 🡪 Acquire 🡪 Appraise 🡪 Apply 🡪 Assess

CATEGORIES OF QUESTIONS
Clinical Finding: how to properly gather and interpret findings from the history and physical examination.
Etiology/Risk: how to identify causes or risk factors for disease (including iatrogenic harms).
Clinical Manifestation: knowing how often and when a disease causes its clinical manifestations and how to use this knowledge in classifying our patients' illnesses.
Differential Diagnosis: when considering the possible causes of our patients' clinical problems, how to select those that are likely, serious, and responsive to treatment.
Diagnostic Test: how to select and interpret diagnostic tests, in order to confirm or exclude a diagnosis, based on considering their precision, accuracy, acceptability, safety, expense, etc.
Prognosis: how to estimate our patient's likely clinical course over time and anticipate likely complications of the disorder.
Therapy: how to select treatments to offer our patients that do more good than harm and that are worth the efforts and costs of using them.
Meaning: how to empathize with your patients' situations, appreciate the meaning they find in the experience, and understand how this meaning influences their healing.
Improvement: how to keep up-to-date, improve your clinical and other skills, and run a better, more efficient clinical care system.
Background Question: 3 components (RVC)
● 1) Question Root, 2) Verb, 3) Condition
● Ex: How is hydrocephalus diagnosed? What causes swine flu?

**Foreground Question: AKA PICO analysis
● Question related to specific pt/pop. (ex: how effective is CT scan compared w/ US in diagnosing appendicitis in adult pt?)
● Patient/Problem; Intervention (exposure or diagnosis); Comparison/Control; Outcome

OBJECTIVE 2: Develop knowledge of the hierarchy of evidence in EBM

STUDY DESIGN HIERARCHY: Best study designs for type of clinical question
● Clinical Exam/Diagnostic Testing: prospective, blind comparison to gold standard
● Prognosis: cohort > case control > case series
● Therapy: RCT
● Etiology/Harm/Prevention: RCT > cohort > case control > case series
● Cost: economic analysis


Case Report/Series: observations in a series of pt; call attention to unusual associations & unique observations
● Pro: preliminary observation of problem, new/rare diagnosis, low cost, can lead to further studies
● Con: no control/comparison; no statistical validity; no hypothesis tested (b/c not planned); limited
scientific merit

Meta-Analysis: pool results from individual studies into large study; “take systematic review 1 step further”
● Pro: synthesize many small studies; help validate evidence
● Con: trials must be similar enough to combine; original studies are subject to bias
HIERARCHY of STRENGTH of Evidence for Prevention/Treatment Decisions
(strongest @ top; weakest @ bottom)
N-of-1 Randomized Trial [single patient is entire trial 🡪 determine best care for that specific pt]
Systematic Review of randomized trials
Single Randomized trial (1 randomized trial is better than review of an observational one)
Systematic Review of observational trials (addressing pt-impt outcomes)
Single observational study (addressing pt-impt outcomes)
Physiologic studies
Unsystematic Clinical observations

OBJECTIVE 3: Examine barriers to the adoption of EBM

Barriers to adoption of EBM


● Volume of info (“info overload”)
● Knowledge of how to access relevant info and not having enough time to do it
● Inability to convert knowledge to real practice
● Unintended consequences of adopting particular practices
● All that is published is not correct
● Perception that EBM drives cookbook medicine
● Intersection with patient values

Why Studies Mislead:


● Every clinical question has a set of possible answers 🡪 some more likely to be correct than others (in context relevant to situation at hand)
● Research only provides an estimate of underlying truth
● Random Error + Bias

BIAS
Cause systematic variations in process/result; [starred from PPT are bolded; those in FA highlighted]
Attention Bias (Hawthorne Effect): people change behavior when they know they are being watched 🡪 results not generalizable
Attrition Bias: unequal loss of participants from diff groups in a trial 🡪 trial ends unbalanced
Channeling Bias: drug therapies w/ similar indications assigned to groups of pt w/ varying baseline prognoses; a type of selection bias in observational studies
Chronological Bias: treatment/disease definitions change over time; thus time becomes confounding variable
Detection Bias: systematic error in assessing outcomes (ex: put subject in wrong group, or misclassify intervention); reduce via blinding
Differential Verification Bias: Measurement bias in which results of diagnostic test affect whether gold standard procedure is used to verify
test result; cohort studies susceptible to this (b/c not ethical to give each pt a gold standard test)
● Causes too high sensitivity and too low specificity
Lead Time Bias: early detection of disease is interpreted as increased survival
Length Time Bias: screening test detects disease w/ long latency period while those w/ rapid onset become symptomatic earlier
Publication Bias: studies w/ + results are more commonly published (but this can be bad if there are a lot of studies disproving a
theory/treatment that are not published)
Recall Bias: awareness of disorders alters recall by subjects; common in retrospective studies
Reporting Bias: only some trial outcomes reported; if an entire study is not published, synonymous to publication bias
Sampling Bias: methods used to sample pop are such that a certain part of pop is more likely to be selected; type of selection bias
Selection Bias: nonrandom sampling/treatment allocation to subjects 🡪 pop. studied is not representative of target pop.
Spectrum Bias: tests do not perform equally in all subjects (ex: CXR detects large pneumothorax in sick pt but not small ones in healthy pt)

WAYS TO MITIGATE BIAS @ DIFFERENT STAGES OF STUDY: start of study; as study proceeds; completion of study

OBJECTIVE 4: Examine economic analysis as an example of the breadth of EBM

COST ANALYSIS
How does a treatment/test fare compared to the opportunity costs associated w/ alternative?
Value = Quality/Cost

3 Perspectives: patient, clinician, population

Cost Benefit Analysis: measured in $; examine cost per incremental gain

Cost Effectiveness Analysis: cost to achieve single domain of clinical outcome

Cost Utility Analysis: cost vs. overall benefit

Multidimensional Indices: QALY, DALY
● Take into account both quantity & quality of remaining life w/ weights on diff health states.

Incremental Cost-Effectiveness Ratio (ICER) = (Cost of Tx A - Cost of Tx B) / (Outcome w/ Tx A - Outcome w/ Tx B)

Cost-Effectiveness Ratio = (Cost of Tx) / (Outcome w/ Tx)
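The ICER arithmetic above can be sketched in a few lines of Python (a minimal illustration; the costs and QALY figures are invented, not from the slides):

```python
def icer(cost_new, outcome_new, cost_old, outcome_old):
    """Incremental cost-effectiveness ratio: extra dollars spent
    per extra unit of outcome (e.g., per QALY gained)."""
    return (cost_new - cost_old) / (outcome_new - outcome_old)

# Hypothetical example: new tx costs $50,000 and yields 4.0 QALYs;
# old tx costs $20,000 and yields 2.5 QALYs.
print(icer(50_000, 4.0, 20_000, 2.5))  # 20000.0 dollars per QALY gained
```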
S2: Locating Best Available Evidence
LEARNING OBJECTIVES:

1. Identify the hierarchy of evidence & pre-appraised resources


2. Identify appropriate tools for critically appraising the literature

OBJECTIVE 1: Identify the hierarchy of evidence & pre-appraisal information resources.


HIERARCHY OF (Pre-Appraised) EVIDENCE
As you go farther up the pyramid, resources are more relevant to clinical settings. Since we don’t have EPIC access, start @ summaries.

In this, we are examining CLINICAL QUESTIONS (arise from work w/ patients) as opposed to research questions (which launch lit reviews, study designs, etc.). This will help us make clinical decisions, as opposed to advancing medical knowledge (which is what research questions do).
1 Systems: medical info resources integrated into EMR; Beaumont integrates UpToDate; support clinical decision-making
2 Summaries: current best evidence summarized clearly/succinctly; **UpToDate, DynaMed, Clinical Practice Guidelines
● Do not confuse w/ textbooks that provide background info
● UpToDate: larger database; 2-tier graded system (some topics may lack grades b/c grading imposed retrospectively); "updated daily"
● DynaMed: smaller database; 3-tier graded system used throughout
3 Synopses of Syntheses: 1-page summary of systematic review w/ expert commentary (AKA critical appraisal); DARE, ACP Journal Club
● Exactly same as synopses of studies but w/ systematic reviews
4 Syntheses: systematic reviews & meta-analyses; comprehensive/reproducible review of specific topic based on exhaustive lit searches; Cochrane; PubMed Health
5 Synopses of Studies: 1-page summary of individual study w/ expert commentary (AKA critical appraisal); articles selected for critical appraisal based on relevance & interest; Evidence-Based Journals, ACP Journal Club
6 Studies: PubMed Clinical Queries (diff than regular PubMed)

Quick Resources: McMaster; TRIP


OBJECTIVE 2: Identify appropriate tools for critically appraising the literature

Tools for evaluating literature:


1) JAMAevidence
● Online resource solely dedicated to learning, teaching, and practicing EBM
● Includes:
o Full text of Users’ Guides to the Medical Literature
o Worksheets to help you critically appraise the literature
o Calculators & nomograms
o Question wizard to construct clinical questions

2) SORT (Strength of Recommendation Taxonomy): use study quality (1-3) & strength of
recommendation (A-C) to evaluate grade of literature

Summary
1. ASK: Use PICO to help translate clinical cases into searchable question [see S1, objective 1]
2. ACQUIRE:
● Start your search using resources at the top of the 6S Pyramid and work your way down
to discover new or rare clinical studies
● You can also use ACCESSSS or Trip to search across these resources
3. APPRAISE: For critiquing the evidence you find, use JAMAevidence and the SORT tool
S4-6: Screening & Diagnosis
LEARNING OBJECTIVES:

1. Describe the features of a DISEASE that make it amenable to SCREENING


2. Identify characteristics of a good SCREENING TEST
3. Explain common BIASES that occur in screening and diagnostic trials
4. Understand the DIAGNOSIS PROCESS
5. Explain PRE-TEST and POST-TEST probability

OBJECTIVE 1: Describe the features of a DISEASE that make it amenable to SCREENING

Pre-Clinical Phase: time between onset of disease & manifestation of symptoms

Clinical Phase: after patient starts to have symptoms

Lead Time: time btwn detecting disease via screening & time of diagnosing disease based on
symptoms; ideally want this to be as long as possible

SCREENING: identification of disease that pt may have increased risk of developing, before any symptoms present; ideally done during the pre-clinical (asymptomatic) phase
● Screen if: negative impact to health; identifiable asymptomatic period; serious/treatable [AKA only screen if we can do something about the disease]; improved outcomes w/ early intervention

DIAGNOSIS: identification of disease that pt has by analysis of history, presence of symptoms, evaluation of test results, & investigation
OBJECTIVE 2: Identify characteristics of a good SCREENING TEST

Screening Hazards:
● False-positive results may cause unnecessary anxiety, expense, and even a risk of hazardous intervention in unaffected individuals (ex: breast self-exam RCT showed no survival benefit & 50% unnecessary biopsies done)
● False-negative results may reassure and delay diagnosis of people who, in fact, have a
disease

Compare to REFERENCE STANDARD: necessary comparison to decide whether a new test works or not; determines validity. Characteristics of ideal reference standard:
● Test that defines whether pt has disease or not
● Ideally 100% sensitive & 100% specific for disease in question
● Applied to all patients in study
● Identifies all cases of disease that are of pathological significance
● Does not identify any cases that are not pathologically significant
These requirements are not realistic, so usually the reference standard is the best test currently
available.

STUDY DESIGNS for Screening Tests:

**Randomized Control Trial (RCT): best b/c lowest risk of bias for screening; prospective
● Compare disease-specific cumulative mortality rate btwn those randomized to screening & those not screened
● Pro: eliminates confounding & lead time bias
● Con: expensive; time-consuming (b/c prospective, so have to wait for disease to develop); ethical concerns; tech changes

Observational Studies: retrospective
● Types:
o Cohort: compare disease-specific cumulative mortality rate btwn those who
choose to be screened
o Case Control: compare screening history btwn those w/ advanced disease/death
& healthy
o Ecological: compare screening patterns & disease experience (incidence +
mortality) btwn pop.’s
● Pro: cheap; fast
● Con: confounding bias due to health awareness; lead-time & length-time bias;
retrospective; difficult to determine appropriate control group

OBJECTIVE 3: Explain common BIASES that occur in screening and diagnostic trials
Lead Time Bias: early detection of disease is interpreted as increased survival
● Solution: RCT study design

Length Time Bias: screening initiative more likely to detect slow-growing disease
● Solution: count all outcomes regardless of method detected (not just screening)

Volunteer Bias: pop. not representative of general target pop. (if no randomization in selection, volunteers for study
likely to be in better health than general pop.)
● Solution: count all outcomes regardless of group

Detection/Observer Bias: systematic difference btwn groups in how outcomes determined


● Solution: Blinding of outcome assessors reduce this bias (reduces risk that knowledge of which
intervention was received, rather than intervention itself, affects outcome of measurement)

Selection Bias: is there diagnostic dilemma? (so not just choosing pt w/ obvious shingles or other disease)

Verification Bias: if the sample used to assess a diagnostic test is restricted only to those who have the condition, the sensitivity of the test can be overestimated (ex: pts w/ low probability of pulmonary embolism on V/Q scan did not undergo the first standard test, angiography, & were instead monitored [second standard]; only pts w/ high probability of PE underwent pulmonary angiogram)

OBJECTIVE 4: Understand the DIAGNOSIS PROCESS

Pattern Recognition? 🡪 if not possible, move to probabilistic approach 🡪 determine pre-test probability 🡪 calculate LR 🡪 determine post-test probability 🡪 compare to threshold 🡪 act
OBJECTIVE 5: Explain PRE-TEST and POST-TEST probability

Probabilistic Approach: use if pattern recognition fails

Pre-Test Probability: probability of target condition being present before results of diagnostic
test known; (AKA you think the pt has the disease based on your clinical knowledge [but before
doing a test])

Post-Test Probability: probability of target condition being present after results of diagnostic
test are known; compare to thresholds [in diagram below]

LIKELIHOOD RATIO (LR)
Likelihood Ratio (LR): used to assess value of performance of a diagnostic test
● LR >1: test result associated w/ disease
● LR~1: little significance b/c pre-test & post-test probabilities are same
● LR<1: test result associated w/ absence of disease

LR = (%probability of test result in pt w/ disease) / (%probability of test result in pt w/o disease)
● Test result can be + or –

LR+ = (%probability true +) / (%probability false +) = [sensitivity/(1-specificity)]


● >10 = large shift [useful diagnostic test]; (high value rules in disease)

LR– = (%probability false –) / (%probability true –) = [(1-sensitivity)/specificity]


● <0.1 = large shift [useful diagnostic test]; (low value rules out disease)

Use pre-test probability + LR to determine post-test probability using Fagan’s Nomogram.
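The LR formulas and the lookup that Fagan's nomogram does graphically can be reproduced numerically; a minimal sketch (the 90%/90% test characteristics and 25% pre-test probability are invented example numbers):

```python
def lr_positive(sensitivity, specificity):
    # LR+ = sensitivity / (1 - specificity)
    return sensitivity / (1 - specificity)

def lr_negative(sensitivity, specificity):
    # LR- = (1 - sensitivity) / specificity
    return (1 - sensitivity) / specificity

def post_test_probability(pre_test_p, lr):
    """What Fagan's nomogram does graphically:
    probability -> odds, multiply by LR, odds -> probability."""
    pre_odds = pre_test_p / (1 - pre_test_p)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# Hypothetical test w/ 90% sensitivity & 90% specificity: LR+ = 9
# A 25% pre-test probability becomes ~75% post-test.
print(post_test_probability(0.25, lr_positive(0.90, 0.90)))
```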
S7: Prognosis
Risk Factor: pt characteristic associated w/ development of disease in the first place

Prognostic Factor: pt characteristic that confer increased or decreased risk of an outcome from disease

Associations: determine relationship between a factor & an outcome


● Measured via Odds Ratio, Likelihood Ratio; Relative Risk


Validity:
● Was the sample representative (demographics, SES, gender, etc.)
o Solution: report any filters passed before entering study (ex: was
treatment at 1º, 2º, or 3º care facility); describe which patients were
included and excluded
● Were patients sufficiently homogeneous w/ respect to prognostic risk?
o We expect that outcome should apply to each member of group
o Requirement: subjects should be @ similar point in disease process
● Were outcome criteria objective & unbiased?
o As the process of determining an outcome grows to be more
subjective, it is important to blind to prognostic factors
● Threat to validity increases w/ increasing ratio of pts lost to follow-up relative to outcomes observed
Importance:
● How likely are outcomes over time? 🡪 to determine this, use measures that relate events to time (ex: survival rate, median survival, survival curve [% of original sample who have not yet had outcome of interest])
o Kaplan-Meier Survival Estimates: provide plot of probability of survival over time
● How precise are estimates of likelihood? 🡪 use confidence intervals [range within which it is likely that the true mean lies]
o For survival curve: as sample size decreases over time, precision of estimate decreases 🡪 width of confidence interval increases

Accuracy vs. Reliability:
● Accuracy: hits target but not necessarily reproducible
● Reliability: reproducible but does not necessarily hit target

Applicability:
● Were the study patients and their management similar to mine?
o Possible Threats: uneven application of therapies to diff subgroups or uneven application
over time
● Follow-up sufficiently long?
o Possible Threat: impt outcomes outlast study duration
● Can I use results of study in management of my practice?
o Look to see if effect of prognostic factor crosses decision threshold
S8: Quality Improvement (QI)
LEARNING OBJECTIVES:

1. Identify characteristics of quality improvement interventions


2. Discuss quality improvement initiatives and interventions to improve the care and outcomes
of patients
3. Identify problems with the quality and safety of patient care
4. Distinguish between traditional clinical research and quality improvement interventions
5. Identify the considerations in appraising and using an article about quality improvement

OBJECTIVE 1: Identify characteristics of quality improvement interventions

Need for QI: goal is to change clinician behavior


● Motivation: >9% of hospitalized pt harmed by adverse events; 100,000 preventable deaths/yr

QI Studies: done in real world clinical environment; data from usual clinical documentation; expedited review;
context-dependent + complex + iterative; data observational/anecdotal
● Unit of Analysis: if QI targeting clinicians, then clinicians should be unit of analysis

OBJECTIVE 2: Discuss quality improvement initiatives and interventions to improve the care
and outcomes of patients

RANDOMIZED Designs
Individual-patient
randomized control trials
Cluster randomized trials Randomly assigning groups of pt
NON-RANDOMIZED Designs
Stepped Wedge: sequential rollout of QI intervention to clinicians/organizations over a number of periods, so that by end of study all participants have received intervention; basically a cross-over study; most rigorous non-randomized design
● Can randomize order in which participants receive intervention
● Data collected/outcomes measured @ each point a new group of participants ("step") receives intervention
Time Series Data collected/Outcomes measured @ multiple points before & after intro of QI
intervention; no control
● Points Before QI: allow estimation of underlying trend and secular effects
● Points After QI: allow measurement of intervention effect
Controlled Before-After Outcomes measured before & after intro of QI intervention both in a group that
receives the intervention & in a control group that does not
Uncontrolled Before-After Outcomes measured before & after intro of QI intervention in same study setting; no
control; least rigorous
Variation: always present in anything being measured
● Controlled (stable) Variation: predictable within well-defined limit, but impossible to predict where any specific
result will lie within those limits
o Cause: way that the process & system have been designed
o If eliminate special causes of variation (ex: snowstorm) 🡪 improvement depends on management
action
● Uncontrolled (unstable) Variation: due to special causes 🡪 behavior changes unpredictably
● Common Cause Variation: each input contributes random, small variation (ex: flipping coin); cannot treat data
points individually (b/c cannot predict what next point will be based on current point)

Run Chart: data charted over time w/ median; helps determine common cause vs. special cause variation; should have 16-25 points NOT on median (so can identify any special causes)
● Run: series of consecutive points on the same side of the median; points on the median do not count toward a run
● Successful intervention will create smaller than expected # of runs

STATISTICAL RULES for determining SPECIAL CAUSE in RUN CHARTS
Done when looking at a chart of data over time; also special cause if 1 point beyond 3 standard deviations

Trends Rule: special cause exists if consecutive points increase/decrease:
● <21 data points: 6 consecutive points
● 21-199 data points: 7 consecutive points
● >200 data points: 8 consecutive points
● "Omit entirely any points that repeat preceding value"

Clump of 8 Rule: special cause exists if run of 8 points all above/below median; indicates 2 averages in data & that process has changed in some way
● Special cause may not have occurred @ beginning of run

Alternating Points Rule: at least 14 consecutive points alternate 🡪 indicates special cause in way data was sampled (maybe 2 or more sources)

Number of Runs Rule: count # of runs 🡪 compare to "Test for Number of Runs Above & Below Median" table (slide 65 of the PPT)
● Special cause exists if # of runs < lower limit (indicates >1 average)
● Sampling issue suggested if # of runs > upper limit
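The run-counting step of the Number of Runs Rule is easy to automate; a small sketch (the data values and median are invented):

```python
def count_runs(points, median):
    """Count runs above/below the median. Points exactly on the
    median are omitted, per the run chart rules above."""
    sides = [p > median for p in points if p != median]
    if not sides:
        return 0
    # a new run starts every time the data crosses the median
    return 1 + sum(1 for prev, cur in zip(sides, sides[1:]) if cur != prev)

# Invented data: below, below | above, above, above | below, below | above
print(count_runs([3, 4, 6, 7, 5, 2, 1, 6], 4.5))  # 4
```

The count would then be compared against the lower/upper limits in the runs table for that number of data points.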

OBJECTIVE 3: Identify problems with the quality and safety of patient care
Potential Harms:
● Prematurely adopt inadequately proven intervention 🡪 increase cost & result in harm

OBJECTIVE 4: Distinguish between traditional clinical research and quality improvement


interventions
Traditional CLINICAL Research:
● Well-controlled
● Data collected specifically for study
● Need informed consent
● Attempt to standardize intervention
● Emphasis on randomized trials

QI INTERVENTIONS:
● In real world clinical environment
● Data collected from usual clinical documentation
● Expedited review
● Context-dependent, complex, iterative
● Data more likely to be observational/anecdotal

OBJECTIVE 5: Identify the considerations in appraising and using an article about quality
improvement
ASSESSING QI ARTICLE
Validity:
● Was data quality acceptable?
● Prognostic balance maintained as study progressed?
● Prognostic balance maintained when study completed?
● Extent of blinding?
o Pt unaware of the group they are in
o Investigator/caregiver unaware of group pt is in
o Double blind study: both unaware
● Groups treated equally? (aside from experimental intervention)
Project Design:
● Aims of QI clearly stated?
● Definitions/measurement systems reported for all impt data?
Data Collection:
● Staff trained & quality assurance maintained?
Data Management:
● Review/report missing & outlier data?
Data Analysis:
● Number of participants initially approached, participated, & dropped out all reported?
Results:
● Estimate of treatment effect?
● Intervention exportable to my site?
● Follow-up complete & sufficiently long?
S10-12: Harm
LEARNING OBJECTIVES:

1. Summarize methodology, strengths, and weaknesses of a case-control & cohort study.


2. Describe principle of confounding and identify possible confounders in a provided example
3. Define, calculate, and interpret RR, OR, & confidence intervals.

OBJECTIVE 1: Summarize methodology, strength, & weakness of case-control & cohort study.

Design Goal of Harm Study: make exposed & unexposed similar in all respects other than
exposure and to maintain balance throughout
OBSERVATIONAL STUDIES
Case-Control Study: compare group of people w/ disease to group w/o disease 🡪 assess if prior exposure/risk had impact on disease state
● Odds Ratio (OR) = ad/bc
● Ex: pt w/ COPD had higher odds of smoking history than those w/o COPD
● Validity assessment:
o Were cases & controls similar w/ respect to indication or circumstance that would lead to exposure?
o Methods of determining exposure similar for cases & controls?

Cohort Study: compare group w/ given exposure/risk to group w/o exposure 🡪 assess if exposure/risk associated w/ development of disease
● Can be prospective or retrospective
● Relative Risk (RR) = [a/(a+b)] / [c/(c+d)]
● Ex: smokers had higher risk of developing COPD than non-smokers
● Validity assessment:
o Were pt similar for prognostic factors known to be associated w/ outcome (or was statistical adjustment done)?
o Methods of detecting outcomes similar?
o Follow-up sufficient?
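The OR and RR formulas map directly onto the standard 2x2 table; a minimal sketch (the cell counts are invented):

```python
def odds_ratio(a, b, c, d):
    """2x2 table: a = exposed w/ disease, b = exposed w/o disease,
    c = unexposed w/ disease, d = unexposed w/o disease. OR = ad/bc."""
    return (a * d) / (b * c)

def relative_risk(a, b, c, d):
    # risk in exposed divided by risk in unexposed
    return (a / (a + b)) / (c / (c + d))

# Invented cohort: 30/100 exposed and 10/100 unexposed develop disease
print(relative_risk(30, 70, 10, 90))  # ~3.0
print(odds_ratio(30, 70, 10, 90))     # ~3.86 (close to RR only when disease is rare)
```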

OBJECTIVE 2: Describe principle of confounding and identify possible confounders.


Consider relevant forms of bias to improve validity:

Recall Bias: awareness of disorders alters recall by subjects; common in retrospective studies

Interviewer Bias: interviewer's knowledge of subject's status systematically influences how interview information is elicited or recorded

Detection/Observer Bias: systematic difference btwn groups in how outcomes determined
● Solution: blinding of outcome assessors reduces this bias (reduces risk that knowledge of which intervention was received, rather than intervention itself, affects outcome measurement)
OBJECTIVE 3: Define, calculate, and interpret RR, OR, & confidence intervals.

How strong is association btwn exposure & outcome?


OR > 1: Exposure is positively related to disease
OR = 1: Exposure is not related to disease
OR < 1: Exposure is negatively related to disease

OR approximates RR when disease is rare (b ≈ a+b & d ≈ c+d)

Odds = chance for event : chance against event (different than probability)

RR >1: Risk in exposed > Risk non-exposed

RR=1: Risk in exposed = Risk non-exposed

RR<1: Risk in exposed < Risk non-exposed

How precise is association btwn exposure & outcome?


CI: range of values that likely include true value of pop.
● CI = 95%, then if trial were repeated 100 times the
result would likely fall within the boundaries of CI 95
times
● Provides info about strength/precision & direction
of effect

Larger Effect size 🡪 smaller CI

Larger Sample size 🡪 smaller CI

More variability in distribution 🡪 larger CI

Better than p-values b/c p-values only provide 1 estimate for the true value while the CI gives a range
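For a 2x2 table, the CI for an odds ratio is conventionally computed on the log scale: exp(ln(OR) ± z·sqrt(1/a + 1/b + 1/c + 1/d)). This formula is standard epidemiology, not from the slides, and the counts below are invented:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio (ad/bc) with its 95% CI via the log method."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

or_, lower, upper = odds_ratio_ci(30, 70, 10, 90)
# lower bound > 1 here, so this (invented) association is statistically significant
print(round(or_, 2), round(lower, 2), round(upper, 2))  # 3.86 1.77 8.42
```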

Survival Analysis: set of methods used for analyzing data when outcome variable is time until
occurrence of an event of interest (ex: death, hospitalization, disease, surgical revision); use
Kaplan-Meier & Cox Proportional Hazards Regression to analyze
● How long before I am better?; How long am I going to live?
● Survival Function: probability of surviving (AKA not experiencing event) @ time point
● Hazard Function: probability of experiencing event @ time point
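The survival function above can be sketched as a bare-bones Kaplan-Meier estimator, assuming no tied event times (the follow-up data are invented):

```python
def kaplan_meier(times, events):
    """times: follow-up times sorted ascending; events: 1 = event
    occurred, 0 = censored. Assumes no tied event times.
    Returns (time, survival probability) pairs."""
    at_risk = len(times)
    survival = 1.0
    curve = []
    for t, event in zip(times, events):
        if event:
            # survival drops only at event times, by (n-1)/n of those at risk
            survival *= (at_risk - 1) / at_risk
        curve.append((t, survival))
        at_risk -= 1  # censored subjects leave the risk set w/o a drop
    return curve

# 3 pts: event @ t=1, censored @ t=2, event @ t=3
print(kaplan_meier([1, 2, 3], [1, 0, 1]))
```

Note how the censored patient at t=2 does not lower the curve but does shrink the risk set, which is exactly why censoring needs this machinery instead of a simple proportion.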
S13-15: Systematic Review &
Meta-Analysis
LEARNING OBJECTIVES:

1. Explain the value and importance of systematic reviews in learning and practicing EBM
2. Identify the differences between narrative reviews and systematic reviews
3. Learn criteria to critically appraise systematic reviews
4. Recognize elements of a forest plot in a meta-analysis and interpret results displayed

OBJECTIVE 1: Explain the value and importance of systematic reviews in EBM.

Systematic Review: clearly stated set of objectives w/ pre-defined eligibility criteria for studies;
explicit/reproducible methodology; search identifies all studies meeting eligibility criteria &
assesses their validity
● Qualitative type: summarizes all studies but does not statistically combine them; looks at
demographics, characteristics of health problem/intervention, characteristic of study,
differences btwn studies (“heterogeneity”)
● Quantitative type (AKA Meta-Analysis): statistically combine data from diff studies
o 2 Stages: 1) Summary statistic (OR, RR, etc.) calculated for each study, 2)
Summary effect calculated for all studies
● Pros: save time; resolve uncertainty when original studies disagree; increase precision of
results; more generalizable results;

OBJECTIVE 2: Identify the differences between narrative reviews and systematic reviews

Narrative Review: discusses 1 or more aspects of disease, prognosis, management; good for
background questions
OBJECTIVE 3: Learn criteria to critically appraise systematic reviews
Did Review explicitly address clearly focused question? (include PICO components)
● Inclusion & exclusion criteria should clearly map question so only include relevant studies
Was search for relevant studies detailed & exhaustive?
● Need to define exact search strategy (ex: terms used), platforms used, dates searched; specify if include non-English
studies or unpublished literature
● 2 Phases of Search: 1) screen titles/abstracts; 2) screen full articles
o Specify reason for exclusion

Publication Bias: studies w/ + results are more commonly published (but this can be bad if there are a lot of studies
disproving a theory/treatment that are not published)
Were 1º studies of high methodological quality?
● Assess validity [see table right]
● Assess & report bias

Were selection & assessments of studies reproducible? [includes data extraction & blinding researchers]
● Data: any info from a study; more than 1 person should extract data from every study to minimize error/bias

OBJECTIVE 4: Recognize elements of a forest plot in a meta-analysis and interpret results


FOREST PLOT
● Each study is 1 horizontal line on figure; line spans CI
● Diamond: overall effect estimate (AKA best estimate of intervention)
o If diamond crosses line of no effect, then results not statistically significant
● Results presented as outcome measure (OR, RR, etc.)
● Weight: 1/variance; higher quality study has bigger weight
● Line of No Effect: intervention has no effect on outcome
● Video explaining: https://www.youtube.com/watch?v=py-L8DvJmDc&feature=youtu.be

Heterogeneity: differences btwn studies; [ex: clinical (participants, intervention, outcome); methodological (study
design/risk of bias in study); statistical (in effect size)]
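The "weight = 1/variance" idea is how a fixed-effect meta-analysis pools studies; a minimal sketch (the effect sizes, e.g., log odds ratios, and variances are invented):

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling: each study's weight is
    1/variance, so more precise (usually larger) studies count more."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1 / sum(weights)  # pooled estimate is more precise
    return pooled, pooled_variance

# Two invented studies: the second is more precise (smaller variance),
# so it pulls the pooled estimate toward its own effect size (0.7)
print(pooled_effect([0.5, 0.7], [0.04, 0.01]))
```

On a forest plot, the returned pooled estimate and its variance are what the diamond and its width represent.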
