strong evidence:
lowest possible random sampling error
based on a good design
high validity: validity of the study and of the measured association, not the validity of the measure of events or exposure
Internal Validity
do the observed results accurately reflect the true association?
*if a study lacks internal validity, external validity is irrelevant
*we do not compromise internal validity in an effort to achieve external validity (generalizability)
External validity (generalizability)
to whom can results be applied?
requires internal validity
Will be achieved by a sample that represents the target population
observing an association
Validity is:
having fewer errors
error = measured value - true value
sources of error:
chance (random sampling error)
bias
systematic error in selection of participants and/ or measurement
Random sampling error
random
sample variation, sample-to-sample differences
p value shows how likely the observed results might be due to chance
-best way to minimize random sampling error is to increase sample size
P value
NOT an error
calculated as a guide for rejection or acceptance of null hypothesis
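The effect of sample size on random sampling error can be sketched with a small simulation; the population prevalence of 0.5 and the sample sizes are made-up values for illustration:

```python
import random
import statistics

random.seed(0)

def sampling_error_spread(sample_size, n_samples=500):
    """Draw many samples from the same population and measure how much
    the sample estimates vary from sample to sample (random sampling error)."""
    true_prevalence = 0.5  # hypothetical population value
    means = [
        statistics.mean(random.random() < true_prevalence for _ in range(sample_size))
        for _ in range(n_samples)
    ]
    return statistics.stdev(means)

# Larger samples -> estimates cluster more tightly around the true value.
print(sampling_error_spread(25))   # more spread
print(sampling_error_spread(400))  # less spread
```

The spread of estimates shrinks roughly with the square root of the sample size, which is why increasing sample size is the main defence against chance (but not against bias).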
Bias
refers to a systematic error in the design or conduct of a study
when bias occurs in a study, the observed association between the exposure and outcome will be different from the true association
confidence interval
reflects the precision of our estimate, i.e. the range within which the true value plausibly lies given random error (it says nothing about bias)
large sample size -> smaller (narrower) confidence interval
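For a proportion, the usual normal-approximation formula makes the sample-size effect on interval width concrete; the proportion 0.3 and the sample sizes below are hypothetical:

```python
import math

def ci_halfwidth(p_hat, n, z=1.96):
    """Approximate 95% CI half-width for a proportion
    (normal approximation: z * sqrt(p(1-p)/n))."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Same observed proportion, larger sample -> narrower interval.
print(round(ci_halfwidth(0.3, 100), 3))   # 0.09
print(round(ci_halfwidth(0.3, 1000), 3))  # 0.028
```

Tenfold more participants shrinks the interval by about a factor of sqrt(10), matching the note that a large sample gives a smaller confidence interval.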
Selection bias
refers to a systematic study error in the way participants are selected or retained in a study
occurs when individuals have different probabilities of being included or retained in the study according to the exposure and/or outcome
if you see the word recruit -----> selection bias
*PARTICIPATION DIFFERS ON EXPOSURE AND DISEASE
Types of selection bias:
Inappropriate control selection (control-selection bias)
Differential participation (case-control, cohort)
Differential loss to follow up (cohort, experimental)
Volunteer bias
Volunteers are more health-conscious or from a different socio-economic group
Differential exposure, e.g. "Effect of interventions for enhancing physical activity in older adults"
Non-response bias
non-responders differ from responders, e.g. those suffering from the disease or those holding a particular belief
differential outcome
membership bias
Healthy worker effect, e.g. service in Vietnam appeared to reduce mortality rates (those fit for service are healthier than the general population to begin with)
Loss to follow-up bias
in clinical trials or longitudinal studies, the sickest usually leave the study early
Reducing selection bias
little or nothing can be done to fix selection bias once it has occurred
CANNOT be reduced by increasing sample size
information bias
interviewer asks too many questions - answers become inaccurate, e.g. measurement error
Confounding bias
occurs when the exposed group and the unexposed group are not exchangeable
Defining a confounding variable:
Causally associated with the outcome (a true risk factor)
causally or non-causally associated with the exposure
not an intermediate in the causal pathway between exposure and outcome
how to identify a confounding variable
literature review of comparable studies
consult experts
statistical tests
How to deal with confounding, at design stage:
restriction
limit study inclusion criteria
matching
produce case and control groups that have similar characteristics
randomization
How to deal with confounding, at the analysis stage:
standardization
age standardization is in fact 'adjustment' for age
stratified analysis
include the confounding factor in a multivariate regression model
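A stratified analysis can be sketched as follows; the counts are hypothetical and constructed so that age confounds the exposure-outcome association (older people are both more exposed and at higher baseline risk):

```python
def risk_ratio(exp_cases, exp_total, unexp_cases, unexp_total):
    """Risk ratio: risk in exposed divided by risk in unexposed."""
    return (exp_cases / exp_total) / (unexp_cases / unexp_total)

# Hypothetical counts per age stratum:
# (exposed_cases, exposed_total, unexposed_cases, unexposed_total)
strata = {
    "young": (5, 50, 20, 200),    # risk 0.1 in both groups
    "old":   (80, 200, 20, 50),   # risk 0.4 in both groups
}

# Within each age stratum the exposure has no effect:
for name, (a, n1, c, n0) in strata.items():
    print(name, risk_ratio(a, n1, c, n0))  # 1.0 in both strata

# Pooling the strata (crude analysis) makes exposure look harmful:
a = sum(s[0] for s in strata.values())
n1 = sum(s[1] for s in strata.values())
c = sum(s[2] for s in strata.values())
n0 = sum(s[3] for s in strata.values())
print(risk_ratio(a, n1, c, n0))  # crude RR ~ 2.1, a confounded estimate
```

The crude risk ratio differs from the stratum-specific ones, which is the signature of confounding; the stratum-specific (adjusted) estimates are the valid ones.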
So the big 3 threats to validity:
CBC
Chance
Bias - selection/information
confounding
low P-value vs High P-value
low = less likely the observed results are due to chance (it does not rule out bias or confounding)