epidemiology
the branch of research dedicated to exploring the frequency & determinants of disease or other health outcomes in populations
incidence
a descriptive epidemiological estimate that is concerned with how many persons have ONSET of a condition during a given span of time
cumulative incidence
the number of new cases of a disease during a specified time period divided by the total number of people at risk; the proportion of new cases of a disease in a population
prevalence
defined as a proportion of a total population of people who have a particular health-related condition
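The two definitions above reduce to simple proportions; a minimal sketch with made-up counts (all numbers are illustrative, not from the cards):

```python
# Illustrative (made-up) counts, not real data.
new_cases = 30            # onsets during the follow-up period
at_risk_at_start = 1000   # disease-free people at the start of the period
existing_cases = 120      # everyone with the condition at one point in time
population = 2000         # total population at that same point

# Cumulative incidence: new cases / population at risk (proportion of NEW cases)
cumulative_incidence = new_cases / at_risk_at_start

# Prevalence: existing cases / total population (proportion of ALL current cases)
prevalence = existing_cases / population

print(cumulative_incidence)  # 0.03
print(prevalence)            # 0.06
```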
bias
Any influence that may interfere with the valid relationship between variables, potentially resulting in misleading interpretation of outcomes
systematic error
a form of measurement error, where error is constant across trials (e.g., mis-calibrated scale)
random error
a measurement error that occurs by chance, potentially increasing or decreasing the true score value to varying degrees (e.g., person misreading & recording an incorrect measurement)
methods to reduce measurement error
‒ Standardized assessment
‒ Methods of informing, training, & ensuring accuracy among raters
‒ Repeated measures
pretrial bias
‒ Study design
• Data collection methods
• Data collection protocols (e.g., training, blinding)
‒ Recruitment
during trial bias
Data collection & recording
post trial bias
Data analysis
‒ Publication
reliability coefficient
the ratio of the variance of the true score to the total variance observed on an assessment
Interpretation: ranges from 0 to 1.0 (1.0 = no error)
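In classical test theory terms, the ratio above can be computed directly; a minimal sketch with assumed (made-up) variance components:

```python
# Classical test theory: observed variance = true-score variance + error variance.
true_score_variance = 8.0   # assumed (illustrative) value
error_variance = 2.0        # assumed (illustrative) value

total_variance = true_score_variance + error_variance
reliability = true_score_variance / total_variance

print(reliability)  # 0.8  (1.0 would mean a measurement with no error)
```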
test-retest reliability
measures the ability of an assessment to remain stable over time in what it aims to measure
‒ Estimated from a single assessment when data are gathered from the same group of subjects on two or more occasions within a short time frame
split-half reliability
when investigators divide the items of a questionnaire into two smaller questionnaires (usually dividing it into odd and even items, or first half-last half) & then correlate the scores obtained from the two halves of the assessment to test the reliability of the entire measure
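The odd/even split described above can be sketched in a few lines. The item scores are hypothetical, and the final Spearman-Brown step (a standard correction that projects the half-test correlation up to full-length reliability) is an addition not stated on the card:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical responses: rows = subjects, columns = 6 questionnaire items.
scores = [
    [4, 5, 4, 4, 5, 4],
    [2, 3, 2, 2, 3, 3],
    [5, 5, 4, 5, 5, 5],
    [1, 2, 1, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
]

odd_half = [sum(row[0::2]) for row in scores]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in scores]  # items 2, 4, 6

r_halves = pearson(odd_half, even_half)
# Spearman-Brown correction: estimate full-length reliability from the half-test r.
split_half_reliability = 2 * r_halves / (1 + r_halves)
```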
unidimensional
constructs that are expected to have a single underlying dimension (e.g., weight)
multidimensional
constructs that consist of two or more underlying dimensions (e.g., quality of life)
parallel forms reliability
involves administration of the alternative forms to subjects at the same time
internal consistency
the extent to which the items that make up an assessment covary or correlate with each other (i.e., homogeneity)
Cronbach’s alpha
Can be used with nominal & ordinal data
• Alpha is the average of all split-half reliabilities for the items that make up the assessment
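Alpha can be sketched from its standard computational formula, k / (k - 1) * (1 - sum of item variances / total-score variance); the response data below are hypothetical:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(item_scores[0])
    item_variances = [pvariance([row[i] for row in item_scores]) for i in range(k)]
    total_variance = pvariance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

# Hypothetical responses: rows = subjects, columns = 6 items.
responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 3, 2, 2, 3, 3],
    [5, 5, 4, 5, 5, 5],
    [1, 2, 1, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
]

alpha = cronbach_alpha(responses)  # high alpha => items covary strongly
```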
item-to-total correlatons
Each item is correlated to the total test score
• Advantage over alpha: allows an assessment developer to identify individual items that are inconsistent with the total score & contribute error to the assessment
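One way to sketch this, with hypothetical data; the "corrected" variant used here (dropping each item from the total before correlating, so an item is not correlated with itself) is a common refinement not stated on the card:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical responses: rows = subjects, columns = 6 items.
responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 3, 2, 2, 3, 3],
    [5, 5, 4, 5, 5, 5],
    [1, 2, 1, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
]

# Corrected item-to-total: correlate each item with the total of the REMAINING items.
item_total_r = []
for i in range(len(responses[0])):
    item = [row[i] for row in responses]
    rest_total = [sum(row) - row[i] for row in responses]
    item_total_r.append(pearson(item, rest_total))

# A low (or negative) r flags an item inconsistent with the rest of the assessment.
```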
interrater reliability
the ratings of more than one rater on a single assessment of a single subject are compared to estimate the ability of the assessment to be rated consistently across users
assessment validity
A measure derived from an assessment represents the underlying construct that the assessment is designed to measure
‒ Validates an interpretation of the scores the assessment yields
face validity
an assessment has the appearance of measuring an underlying construct
‒ Weakest evidence of validity
content validity
the adequacy with which an assessment captures the domain or universe of the construct it aims to measure
‒ For example: self-care
criterion validity
involves collecting objective evidence about the validity of an assessment
− Refers to the ability of an assessment to produce results that concur with or predict a known criterion assessment or known variable (results closely match those of another existing assessment performed on the same criterion)
concurrent validity
an approach to establishing criterion validity that refers to evidence that the assessment under development or investigation concurs or covaries with the result of another assessment that is known to measure the intended construct
• ‘gold standard’
predictive validity
an approach to establishing criterion validity that involves generating evidence that an assessment is a predictor of a future criterion
construct validity
the capacity of an assessment to measure the intended underlying construct
‒ It is the ultimate objective of all forms of empirically assessing validity
known groups method
identifying subjects who are demonstrated to differ on the characteristic the assessment aims to measure
discriminant analysis
evaluate the ability of the measure derived from the assessment to correctly classify the subjects into their known groups
convergence
two measures intended to capture the same underlying trait should be highly correlated
divergence
different traits show patterns of association that discriminate between the traits
factor analytic method
an approach to demonstrating construct validity that examines a set of items that make up an assessment & determines whether there are one or more clusters of items
sensitivity
test’s ability to obtain a positive test when the target condition is present (true positive rate)
• Detection of presence of condition
specificity
test’s ability to obtain a negative test when the condition is absent (true negative rate)
• No detection when condition is not present
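Both rates come straight from a 2x2 table of test results against true condition status; a minimal sketch with made-up counts:

```python
# Made-up counts from a hypothetical 2x2 diagnostic table.
true_positives = 45   # condition present, test positive
false_negatives = 5   # condition present, test negative (missed cases)
true_negatives = 90   # condition absent, test negative
false_positives = 10  # condition absent, test positive (false alarms)

sensitivity = true_positives / (true_positives + false_negatives)  # true positive rate
specificity = true_negatives / (true_negatives + false_positives)  # true negative rate

print(sensitivity)  # 0.9
print(specificity)  # 0.9
```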
norm referencing
interpretation of a score based on its value relative to a standardized score
‒ Standardized according to statistical norms established within groups of people (sample dependent)
criterion referencing
interpretation of a score regarding what is considered adequate or acceptable performance based on a defined criterion
standardized scores
t-scores or z-scores
‒ The number of standard deviations that a given value is above or below the mean of the distribution
‒ T-scores: typically a 0-100 scale with mean = 50
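A minimal sketch of converting raw scores to z-scores and then to t-scores; the raw data are made up, and the transform 50 + 10z assumes the conventional mean-50, SD-10 t-score scaling:

```python
from statistics import mean, pstdev

raw_scores = [10, 12, 14, 16, 18]            # hypothetical raw assessment scores
m, sd = mean(raw_scores), pstdev(raw_scores)

z_scores = [(x - m) / sd for x in raw_scores]  # standard deviations from the mean
t_scores = [50 + 10 * z for z in z_scores]     # rescaled: mean = 50, SD = 10

print(z_scores[2])  # 0.0  (the middle score sits exactly at the mean)
```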
standard error of measurement
a reliability measure of response stability, estimating the standard error in a set of repeated scores
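SEM is conventionally computed from the score standard deviation and the reliability coefficient as SEM = SD * sqrt(1 - reliability); a minimal sketch with assumed values:

```python
# Assumed (illustrative) values for an assessment.
sd = 10.0           # standard deviation of observed scores
reliability = 0.75  # reliability coefficient (0 to 1.0)

# Higher reliability shrinks the expected error around a single observed score.
sem = sd * (1 - reliability) ** 0.5
print(sem)  # 5.0
```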