Epidemiology Lecture
Describe key differences between meta-analyses and pooled analyses, and recognize examples of each.
Both combine information from similar studies to achieve a more reliable result.
Meta-analysis: the separate results (e.g., relative risks) of individual studies are combined by stratification methods that weight each result by its appropriate importance. EXAMPLE: Does passive smoking cause lung cancer? Combine the RRs calculated from 19 case-control studies.
Pooled analysis: combines the original person-specific data from 2+ studies and reanalyzes the enlarged data set.
- Allows for uniform risk factor definitions (e.g., high vs. low fat intake)
Uses:
1. When studies with the same hypothesis appear to give contradictory results
2. Identifying small effects on disease that are clinically relevant
To be valid:
1. All studies must
   a. Test the same exposure/outcome hypothesis
   b. Define E and O in similar ways
2. Include all well-designed and well-conducted relevant studies
   a. Clear exclusion criteria must be described in order to exclude studies with flaws
Valid study: will produce overall results that are close to the truth (based on its design, methods, and procedures).
Internal validity: 3 main factors need to be ruled out or accounted for:
1. Selection bias
2. Information bias
3. Confounding
External validity: can the findings be generalized to individuals outside the target population?
1. A study must be internally valid to be externally valid
2. Hard to generalize from a narrowly selected population
Bias: systematic error that results in an incorrect or invalid estimate of the measure of association.
Selection bias: arises when the selection of study subjects depends on both exposure and disease; it can increase or decrease the magnitude of the association. Subjects are selected in such a way that their exposure is NOT representative of the true level of exposure in the population.
1. Diagnostic bias: occurs before individuals are selected for a study, when determination of the diagnosis is influenced by one's exposure history (hospital-based case-control).
   a. Avoid by: selecting cases from a well-defined population, not a hospital
2. Surveillance (detection) bias: occurs when the exposure is associated with increased monitoring for early signs of the disease, resulting in increased ascertainment of disease in exposed subjects compared to unexposed.
   a. Avoid by: selecting cases and controls from the same well-defined population (equal chance of being screened)
3. Nonparticipation bias: in case-control and retrospective cohort studies, those who do not participate differ with respect to exposure and disease from those who do.
   a. Avoid by: achieving a high participation rate for cases and controls
4. Self-selection/volunteer bias: in a retrospective cohort study, refusal or agreement to participate is related to both E and D.
   a. Avoid by: not informing participants of the hypothesis
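The inverse-variance weighting used in meta-analysis above can be sketched in a few lines. This is a minimal illustration with made-up RRs and standard errors, not data from the 19 passive-smoking studies cited:

```python
# Hypothetical meta-analysis sketch: combining relative risks (RRs) from
# several studies by inverse-variance weighting on the log scale.
# The (RR, SE of log RR) pairs below are invented for illustration only.
import math

studies = [(1.4, 0.20), (1.1, 0.15), (1.6, 0.30)]

weights = [1 / se**2 for _, se in studies]   # weight = 1 / variance of log RR
log_rrs = [math.log(rr) for rr, _ in studies]

# Weighted average of the log RRs, then back-transform
pooled_log_rr = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
pooled_rr = math.exp(pooled_log_rr)

se_pooled = 1 / math.sqrt(sum(weights))      # SE of the pooled log RR
ci_low = math.exp(pooled_log_rr - 1.96 * se_pooled)
ci_high = math.exp(pooled_log_rr + 1.96 * se_pooled)

print(f"Pooled RR = {pooled_rr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

Note how the study with the smallest standard error (most precise estimate) carries the most weight, which is what "weight each result by its appropriate importance" means in practice.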
3. Define selection bias, recognize different sources of selection bias, and discuss ways of avoiding each.
4. Define information bias, recognize different types of information bias (recall and interviewer bias), and discuss ways of avoiding each.
6. Define confounding, be able to identify confounding (by applying the 2 criteria for confounding), and describe/recognize the major ways of controlling for confounding.
5. Loss to follow-up: a concern for prospective (and retrospective) cohort studies.
   a. Avoid by: minimizing subjects lost to follow-up and avoiding different rates of follow-up by exposure status
6. Intervention studies: selection bias is unlikely, but noncompliance and selective dropout may result in bias if those who do not comply or are lost to follow-up differ with respect to which treatment they are taking and their risk of disease.
   a. Avoid by: minimizing subjects lost to follow-up and monitoring compliance
Information (observation) bias: arises when measurement of exposure or disease among some subjects is incorrect.
- Can be limited by careful study design (using objective criteria for defining and measuring exposures)
- Once information bias has occurred, it can't be corrected in the analysis, and the resulting estimate of association will be wrong (like selection bias)
1. Misclassification bias: the method or process of measuring exposure or outcome leads to a change in the magnitude of the association
2. Recall bias: inaccurate recall of past exposures (case-control and retrospective cohort studies). Differential misclassification: cases report past exposure differently from controls. Nondifferential misclassification: both cases and controls report inaccurately.
   a. Avoid by: verifying interview information against other reliable sources
3. Interviewer (observer) bias: the investigator knows a) the study hypothesis and b) the exposure under study while assessing outcomes (cohort), or the outcome under study while assessing exposure (case-control).
   a. Avoid by:
      i. Blinding interviewers to disease status when assessing exposure (case-control) or to exposure status when assessing disease (cohort)
      ii. Double blinding in intervention studies (participants and investigators blind to treatment status)
      iii. Having 2+ independent observers assess the same E or O
4. Surveillance (detection) bias: exposure associated with increased monitoring for early signs of disease leads to increased ascertainment of disease in exposed subjects (cohort). (Appears as selection bias in case-control studies.)
   a. Avoid by: similar degrees of monitoring between exposed and nonexposed, examining the association by stage of disease, studying diseases that become symptomatic rapidly
Is the degree of misclassification or error in measuring E or D the same among the comparison groups?
Nondifferential (random) misclassification: occurs with the same frequency in all study groups (cases/controls, exposed/unexposed). Study results are biased toward the null (RR = 1.0).
Differential (non-random) misclassification: occurs with different frequency between comparison groups, creating bias that may increase or decrease estimates of the association. Study results are biased toward or away from the null (RR = 1.0).
Confounding: when an apparent association is explained by a third (confounding) factor; can lead to over- or underestimation of the true association between E and D.
Conceptual definition of a confounder: a third factor that is associated with E and is an independent risk factor for disease (among the nonexposed). NOT an intermediate factor in the causal chain leading to disease.
Empirical definition: a potential confounder is an actual confounder when controlling for the variable changes the association between E and D by > 10%.
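The empirical (>10% change) definition can be checked with arithmetic on a small hypothetical cohort. The counts below are invented for illustration; within each stratum of the potential confounder C the RR is 2.0, but collapsing over C inflates the crude RR:

```python
# Hypothetical cohort data for the >10% change check. Each stratum of the
# potential confounder C holds:
# (exposed cases, exposed total, unexposed cases, unexposed total)
strata = {
    "C=0": (10, 100, 10, 200),
    "C=1": (60, 200, 15, 100),
}

# Crude RR: collapse over the confounder
a = sum(s[0] for s in strata.values())   # exposed cases
n1 = sum(s[1] for s in strata.values())  # exposed total
c = sum(s[2] for s in strata.values())   # unexposed cases
n0 = sum(s[3] for s in strata.values())  # unexposed total
crude_rr = (a / n1) / (c / n0)

# Mantel-Haenszel adjusted RR (a standard stratified estimator)
num = sum(ai * n0i / (n1i + n0i) for ai, n1i, ci, n0i in strata.values())
den = sum(ci * n1i / (n1i + n0i) for ai, n1i, ci, n0i in strata.values())
mh_rr = num / den

change = abs(crude_rr - mh_rr) / mh_rr
verdict = "confounder" if change > 0.10 else "not a confounder"
print(f"Crude RR = {crude_rr:.2f}, adjusted RR = {mh_rr:.2f}, "
      f"change = {change:.0%} -> {verdict}")
```

Here the crude RR (2.8) differs from the adjusted RR (2.0) by 40%, well beyond the 10% threshold, so C meets the empirical definition of a confounder.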
7. Recognize which threats to validity are avoided by randomization, double-blind procedures, and matching.
Methods of controlling confounding:
1. Design of study:
   a. Random assignment of exposure (intervention)
   b. Matching on potential confounders (case-control)
   c. Restricting subjects to one level of the confounding factor
2. Analysis of data:
   a. Stratification of analysis: examine the association within each subcategory of the confounding variable (one at a time)
   b. Statistical control: adjusting for multiple potentially confounding factors simultaneously with statistical modeling
      i. Continuous outcome: ANOVA, multiple linear regression
      ii. Yes/no outcome: multiple logistic regression
Randomization avoids: confounding
Double-blind procedures avoid: information bias
Matching avoids: confounding
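Why random assignment of exposure controls confounding can be shown with a small simulation (invented numbers, not study data): when exposure is assigned at random, a risk factor's prevalence balances across arms, so it cannot be associated with exposure, and the first criterion for confounding fails.

```python
# Sketch: random assignment balances a hypothetical confounder
# (e.g., smoking, present in 30% of subjects) across treatment arms.
import random

random.seed(0)
n = 10_000
# True = subject carries the potential confounder
subjects = [random.random() < 0.30 for _ in range(n)]

# Randomly assign exposure, ignoring the confounder entirely
assignment = [random.random() < 0.5 for _ in subjects]

n_treated = sum(assignment)
treated_prev = sum(c for c, t in zip(subjects, assignment) if t) / n_treated
control_prev = sum(c for c, t in zip(subjects, assignment) if not t) / (n - n_treated)

print(f"Confounder prevalence: treated {treated_prev:.3f}, "
      f"control {control_prev:.3f}")
```

The two prevalences come out nearly identical, and the balance holds for unmeasured factors too, which is what makes randomization stronger than matching or statistical adjustment (those handle only the confounders you know to measure).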