Reliability
Estimates that evaluate the stability of measures, the internal consistency of measurement instruments, and the interrater reliability of instrument scores.
Validity
The extent to which the interpretations of the results of a test are warranted, which depends on the particular use the test is intended to serve.
Reliability estimates
used to evaluate:
The stability of measures administered at different times to the same individuals or using the same standard.
The equivalence of sets of items from the same test or of different observers scoring a behaviour or event using the same instrument.
Reliability coefficients
Range from 0.00 to 1.00; higher values indicate higher reliability.
Stability
(test-retest reliability)
Estimated by administering a test at 2 different points in time to the same individuals and determining the correlation, or strength of association, between the two sets of scores.
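A minimal sketch of this in plain Python, using hypothetical scores; the Pearson correlation between the two administrations serves as the test-retest reliability estimate:

```python
# Test-retest reliability: correlation between two administrations of the
# same test to the same individuals (hypothetical scores).
time1 = [12, 15, 9, 20, 14, 17]
time2 = [13, 14, 10, 19, 15, 18]

n = len(time1)
mean1, mean2 = sum(time1) / n, sum(time2) / n
cov = sum((x - mean1) * (y - mean2) for x, y in zip(time1, time2))
sd1 = sum((x - mean1) ** 2 for x in time1) ** 0.5
sd2 = sum((y - mean2) ** 2 for y in time2) ** 0.5
r = cov / (sd1 * sd2)
print(round(r, 2))  # values near 1.0 indicate high stability
```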
Internal Consistency
gives an estimate of the equivalence of sets of items from the same test.
Cronbach's alpha is the most widely used measure of internal consistency.
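A small illustrative sketch of the alpha calculation with hypothetical respondents and items, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

```python
# Cronbach's alpha for a set of items (rows = respondents, columns = items).
# Hypothetical 4-item scale scored by 5 respondents.
scores = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [3, 3, 2, 3],
    [5, 4, 5, 4],
]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

k = len(scores[0])                                  # number of items
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])  # variance of total scores
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # higher alpha = greater internal consistency
```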
Interrater Reliability
Establishes the equivalence of ratings obtained with an instrument when used by different observers.
Cohen's kappa is commonly used.
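A brief sketch of the kappa calculation with hypothetical ratings from two observers, where kappa = (observed agreement - expected agreement) / (1 - expected agreement):

```python
# Cohen's kappa for two observers rating the same subjects (hypothetical labels).
rater1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
rater2 = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes"]

n = len(rater1)
categories = set(rater1) | set(rater2)
p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
# Expected agreement if the two raters assigned categories independently
p_expected = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in categories)
kappa = (p_observed - p_expected) / (1 - p_expected)
print(round(kappa, 2))  # 1 = perfect agreement, 0 = chance-level agreement
```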
Exposure Measurement (4 doses)
Available dose:
cumulative vs current
Administered dose:
the amount that comes into contact with the body
Absorbed dose:
the amount that enters the body
Active dose:
the amount that actually affects the target organ
Ratio
relationship between 2 numbers
numerator: NOT NECESSARILY INCLUDED in the denominator, e.g., the (binary) sex ratio
Proportion
relationship between 2 numbers
numerator: HAS TO BE INCLUDED in the denominator
a proportion always ranges between 0 and 1
Calculating the odds
In a population of 100, 25 are diabetic. What are the odds of being diabetic?
Odds: the probability of an event occurring relative to it not occurring.
25/75 = 0.33 (25 diabetic vs. 75 non-diabetic)
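The same worked example as a few lines of Python, contrasting the odds with the corresponding proportion:

```python
# Odds of being diabetic in a population of 100 with 25 diabetics.
cases, total = 25, 100
odds = cases / (total - cases)      # event vs. non-event
probability = cases / total         # for comparison: a proportion
print(round(odds, 2), probability)  # 0.33 vs 0.25
```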
Calculating the rate
speed of occurrence of an event over time
numerator: # of events observed for a given time
denominator: population in which the events occur
e.g., 2 events in 100 people during the period: 2/100 = 0.02
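The same example as a short sketch, treating the denominator as the population observed over the period:

```python
# Rate: events per unit of population-time (figures from the card above).
events = 2
population_observed = 100       # e.g., 100 people followed over the period
rate = events / population_observed
print(rate)                     # 0.02, i.e., 2 per 100
```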
Measuring Prevalence
prevalence rate: the proportion of the population that has a given disease or other attribute at a specified time.
2 types:
point prevalence rate
Period prevalence rate
Point Prevalence rate
PR: # existing cases at a specified point in time / total population at that point in time
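An illustrative calculation with hypothetical counts:

```python
# Point prevalence: existing cases at one point in time / total population at that time.
existing_cases = 40              # hypothetical count of current cases
population = 1_000               # hypothetical total population at that point
point_prevalence = existing_cases / population
print(point_prevalence * 1_000)  # 40 per 1,000 at the specified point in time
```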
Incidence rate
the proportion of the population at risk that develops a given disease or other attribute during a specific time period.
IR: # new events during a specified time period / population at risk
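A small sketch with hypothetical counts, emphasizing that the denominator is the population at risk:

```python
# Incidence: new cases during a period / population at risk during that period.
new_cases = 15                       # hypothetical new cases over the period
population_at_risk = 960             # excludes those who already have the disease
incidence = new_cases / population_at_risk
print(round(incidence * 1_000, 1))   # about 15.6 new cases per 1,000 at risk
```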
Incidence vs prevalence
Incidence:
measures frequency of disease onset
what is new
Prevalence
measures population disease status
what exists
Both may be expressed per any power of 10:
per 100, 1,000, 10,000
Relative risk
tells us how many times as likely it is that someone who is ‘exposed’ to something will experience a particular health outcome compared to someone who is not exposed
Tells us about the strength of an association
Can be calculated using any measure of disease occurrence (prevalence or incidence rate).
Calculation of relative risk
RR = risk (or rate) of the outcome in the exposed / risk (or rate) of the outcome in the unexposed
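A short sketch of the calculation from a hypothetical 2x2 table, using cumulative incidence (risk) as the measure of occurrence:

```python
# Relative risk from a 2x2 table (hypothetical counts).
#                 disease   no disease
# exposed            30          70
# unexposed          10          90
risk_exposed = 30 / (30 + 70)     # 0.30
risk_unexposed = 10 / (10 + 90)   # 0.10
relative_risk = risk_exposed / risk_unexposed
print(relative_risk)  # 3.0 -> the exposed are 3 times as likely to develop the disease
```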
Random error
error due to chance
Systematic error
error due to an unrecognized (non-chance) source
*A measurement can have both random and systematic error.
precision vs accuracy
A measurement scale/tool with high precision is reliable; one with high accuracy is valid.
High precision is _____; high accuracy is _____
reliable; valid
Insufficient Precision
Possible causes:
The measurement tool is not precise enough (a ruler marked in centimetres is not precise enough when meaningful differences are in millimetres)
Two (independent) interviewers rate the same person differently using the same scale (inadequate training?)
The same interviewer rates the same person differently
Sources of measurement error
Interviewer or observer:
record abstracting (random error)
biased overestimation or underestimation
Participants:
recall (random or systematic)
Misclassification of participants
2 types:
Non-differential (the same in all study groups)
Usually weakens associations – i.e. brings effect estimates (RR, OR, AR) closer to the null value (but not always)
Differential (different in different study groups)
Effect estimates may change in any direction, depending on the particular error
Non-differential Misclassification of 10%
10% of all exposed cases and exposed controls are misclassified as unexposed & vice versa (10% of unexposed cases and unexposed controls are misclassified as exposed)
brings the OR, RR, and AR closer to the null
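A numeric illustration with a hypothetical 2x2 table, showing how the same 10% error in cases and controls pulls the odds ratio toward 1:

```python
# Non-differential misclassification of exposure: the same 10% error in cases and controls.
# Hypothetical true 2x2 table (cases/controls x exposed/unexposed).
a, b = 60, 40   # cases: exposed, unexposed
c, d = 40, 60   # controls: exposed, unexposed

def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

def misclassify(exposed, unexposed, frac):
    # Swap `frac` of each exposure group into the other category.
    return (exposed - frac * exposed + frac * unexposed,
            unexposed - frac * unexposed + frac * exposed)

a2, b2 = misclassify(a, b, 0.10)   # cases
c2, d2 = misclassify(c, d, 0.10)   # controls (same error rate: non-differential)
print(round(odds_ratio(a, b, c, d), 2))      # true OR = 2.25
print(round(odds_ratio(a2, b2, c2, d2), 2))  # observed OR ~ 1.91, pulled toward 1
```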
Differential Misclassification
20% of unexposed cases (but not controls) are misclassified as exposed
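A companion illustration using the same hypothetical table, showing how an error that affects only the cases can move the odds ratio away from the null:

```python
# Differential misclassification: 20% of unexposed CASES (but not controls)
# are misclassified as exposed (same hypothetical table as above).
a, b = 60, 40   # cases: exposed, unexposed (true)
c, d = 40, 60   # controls: exposed, unexposed (true)

moved = 0.20 * b                # 8 unexposed cases recorded as exposed
a_obs, b_obs = a + moved, b - moved
print(round((a * d) / (b * c), 2))          # true OR = 2.25
print(round((a_obs * d) / (b_obs * c), 2))  # observed OR ~ 3.19, moved away from the null
```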
Reducing measurement error
Little (or nothing) can be done to fix information bias once it has occurred.
Information bias must be avoided through careful study design and conduct.
Information bias cannot be “controlled” in the analysis.
Case control vs cohort
Case-control: groups compared are cases vs. controls (defined by outcome).
Cohort: groups compared are exposed vs. unexposed (defined by exposure).