Predictive Validity and Prognosis Studies
Variable that is presumed to predict an outcome of interest
Intervention studies
presumed cause of or influence upon a measured effect
Levels of independent variables
number of forms the independent variable takes in a study
Single-factor (one-way)
effect of one variable
Two-factor (two-way)
effect of 2 variables as well as their interaction with one another
Factorial design
number of independent variables or factors included
Dependent variable
variable hypothesized to be caused or predicted by the independent variable
The dependent variable is the
outcome of interest
Extraneous variables
confound the relationship between the independent and dependent variable
Extraneous variables need to be
anticipated and controlled for if possible
Discrete variables are the same as
categorical variables
Continuous variables are the same as
scale variables
Discrete variables
can assume only distinct values
discrete variables can be
dichotomous or polytomous
A Likert scale is a
discrete variable
Continuous variable
theoretically can assume infinitely finer degrees of measurement depending upon the instrument utilized
Measurement
method of assigning quantitative or qualitative values to variables
Utilizing measurements requires
clearly defined rules that are consistently applied during a study
Nominal
classification without value placed on the category and no ranking or order
Nominal data can use
names or numerals
Ordinal
classification with order, but without equal intervals between levels
Ordinal data can
use numbers to label categories, but arithmetic on those numbers is not meaningful
Interval
order and interval distance are known, but there is no true zero (the origin is arbitrary)
With interval data you can do
addition and subtraction
Ratio
order, interval distance, and a true zero (origin) are all known
With ratio data you can
add, subtract, multiply, and divide
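A small sketch of why ratios are meaningful only on a ratio scale. Temperature in Celsius is an interval measure (arbitrary zero), while mass in kilograms is a ratio measure (true zero); the unit conversions below are standard, and the numbers are only illustrative.

```python
def c_to_f(c):
    """Celsius to Fahrenheit: an interval-scale conversion (shifts the origin)."""
    return c * 9 / 5 + 32

def kg_to_lb(kg):
    """Kilograms to pounds: a ratio-scale conversion (pure rescaling, zero stays zero)."""
    return kg * 2.20462

# "20 is twice 10" survives a ratio-scale unit change...
print(kg_to_lb(20) / kg_to_lb(10))  # 2.0
# ...but not an interval-scale one, because the origin shifts.
print(c_to_f(20) / c_to_f(10))      # 68 / 50 = 1.36
```

This is why addition and subtraction are valid for interval data, but multiplication and division require ratio data.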
Measures are a combination of
true value (signal) and error (noise)
Sources of error for reliability
errors made by examiners, subject variability, instrumentation flaws or failures
Categories of measurement reliability
instrument and rater
Instrument reliability
test-retest, internal consistency (constructs)
Rater reliability
intra-rater (within a rater), inter-rater (between or among raters)
Measurement validity
the degree to which a measure captures what it is intended to measure
reliability is a… but… condition for validity
Necessary, not sufficient
Types of measurement validity
face, content, construct, criterion related
Construct validity types
convergent and discriminant
Criterion related validity types
concurrent and predictive validity
Face validity
does the measurement appear, on the face of it, to assess what is intended
Face validity is
all or nothing
Face validity is addressed from the standpoint of
the tester and the patient or family member
face validity is
rather subjective
Content validity
extent to which a measurement is judged to reflect the meaningful elements of a variable
content validity is judged by
content experts or people with experience with the variable
content validity is usually only pertinent to
multidimensional measurements
Examples of things that content validity would be assessed in
disability measures, functional measures, self-report tools, knowledge assessment
Construct validity
validity of abstract concepts that underlie the measure
construct validity is achieved via
operational definitions, logical arguments, theoretical arguments, and research evidence
convergent validity
comparison of scores between two similar instruments expected to produce similar results
discriminant validity
differentiation among characteristics or levels of the same characteristic
Criterion validity
extent to which one measure is systematically related to other measures or outcomes
Criterion validity requires
direct comparison of index measure with a standard (criterion) measure or with known outcome
Concurrent validity
ability of an index measure to capture an outcome similar to that of another measure
To ascertain concurrent validity you compare
the index measure to a criterion measure obtained at approximately the same time
Predictive validity
the ability of an index measure to predict a future outcome
To ascertain predictive validity you compare
the index measure to the criterion measure that was obtained at a later point in time
Responsiveness to change
ability of a measure (instrument) to detect change in the phenomenon of interest
Responsiveness to change requires a fit
between the instrument and the operational definition of the variable
Responsiveness to change depends upon
number of values on the measurement scale
Standard error of measurement (SEM)
extent to which observed scores are dispersed around the true score
Minimal Detectable Change (MDC)
smallest amount of change that exceeds measurement error, i.e., that can be taken as a true difference
If a measurement has less error and is more reliable, it will have
a lower MDC
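The SEM and MDC relationship above can be sketched numerically. These are the standard formulas (SEM = SD × √(1 − r), where r is a test-retest reliability coefficient such as an ICC, and MDC95 = 1.96 × √2 × SEM); the SD and ICC values below are made up for illustration.

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SEM = SD * sqrt(1 - r),
    where r is a test-retest reliability coefficient (e.g., an ICC)."""
    return sd * math.sqrt(1.0 - reliability)

def mdc95(sd: float, reliability: float) -> float:
    """Minimal detectable change at 95% confidence:
    MDC95 = 1.96 * sqrt(2) * SEM (sqrt(2) accounts for error in two measurements)."""
    return 1.96 * math.sqrt(2.0) * sem(sd, reliability)

# Hypothetical scale: SD = 10 points, test-retest ICC = 0.90
print(round(sem(10, 0.90), 2))    # 3.16
print(round(mdc95(10, 0.90), 2))  # 8.77
```

Raising the reliability (e.g., ICC = 0.95) shrinks the SEM and therefore the MDC, which is exactly the point of the card above: less error means a smaller change can be detected as real.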
All measurement tools have
error
Floor effect
failure of a measure to detect lower scores for patients whose status has declined
Ceiling effect
failure of a measure to detect higher scores for patients whose status has improved