Clinical and Research Applications
Reliability
the extent to which a measured value can be obtained consistently w/ repeated assessments in constant conditions
concept of ____ is used to estimate reliability
variance
reliability coefficient categories
relative reliability
absolute reliability
relative reliability
coefficient reflects true variance as a proportion of the total variance in a set of scores
relative reliability is useful for ____ comparisons
group
ICC
intraclass correlation coefficient
the closer the ICC/kappa is to ___, the ____ reliable the measure is at identifying a true difference
1.0
more
does relative reliability have units of measure
no
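The cards above can be illustrated with a short sketch of how an ICC might be computed. This is a minimal, from-scratch implementation of one common form, ICC(2,1) (two-way random effects, single measures), built from its ANOVA mean squares; the function name and the example data are hypothetical, chosen only for illustration.

```python
# Sketch: ICC(2,1) computed from ANOVA mean squares. Example data are hypothetical.

def icc_2_1(scores):
    """scores: one row per subject, one column per rater."""
    n = len(scores)          # number of subjects
    k = len(scores[0])       # number of raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between raters
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters in perfect agreement -> ICC of 1.0 (the "more reliable" end)
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

Note the result is unitless, consistent with the card above: relative reliability has no units of measure.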
absolute reliability
indicates how much of a measured value may be due to error
absolute reliability is useful for ____ comparisons
individual
SEM
standard error of measurement
absolute reliability uses ___ to find the range where a person’s true score will fall
SEM
does absolute reliability use units of measurement
yes
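The SEM cards above can be sketched numerically. This uses the standard formula SEM = SD × √(1 − r), where r is a reliability coefficient such as the ICC, and builds the range where a person's true score is expected to fall; the SD, ICC, and observed score are hypothetical example values.

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

def true_score_range(observed, sd, reliability, z=1.96):
    """Range (95% by default) w/in which a person's true score is expected to fall."""
    e = z * sem(sd, reliability)
    return observed - e, observed + e

# Hypothetical example: sample SD = 10, ICC = 0.91 -> SEM ~ 3.0
print(sem(10, 0.91))                   # ~3.0, in the units of the measurement
print(true_score_range(50, 10, 0.91))  # roughly (44.12, 55.88)
```

Unlike the ICC, the SEM carries the units of the original measurement, which is why it supports individual comparisons.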
types of reliability
test-retest
rater reliability
alternate forms
internal consistency
test-retest reliability
ability of an instrument to measure performance consistently
carryover
practice or learning on initial trial that alters performance on subsequent trials
carryover can be neutralized w/ _____
pre-trials
testing effects
act of measurement changes the outcome
systematic error
predictable, unidirectional error
____ and ____ can show up as systematic error
carryover
testing effects
intra-rater reliability
stability of data recorded by 1 rater across trials
inter-rater reliability
stability of data recorded by 2 or more raters
______ reliability of each rater should be established before _____ reliability
intra-rater
inter-rater
if _____ reliability has not been established, we can’t assume other raters would obtain similar results
inter-rater
change scores
difference btwn pre vs post test
regression toward the mean
tendency for extreme scores to fall closer to the mean on retesting
RTM
regression toward the mean
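Regression toward the mean can be demonstrated with a small simulation. This sketch (all numbers hypothetical) gives each subject a true score plus random measurement error, selects the extreme high scorers on a first test, and shows their retest mean falling back toward the population mean.

```python
import random

random.seed(1)

# Hypothetical population: true ability ~ N(100, 10), measurement error ~ N(0, 10)
true_scores = [random.gauss(100, 10) for _ in range(2000)]
test1 = [t + random.gauss(0, 10) for t in true_scores]
test2 = [t + random.gauss(0, 10) for t in true_scores]

# Select the top 10% of scorers on the first test...
top = sorted(range(2000), key=lambda i: test1[i])[-200:]
mean1 = sum(test1[i] for i in top) / 200
mean2 = sum(test2[i] for i in top) / 200

# ...on retest their mean falls back toward the population mean of 100,
# even though nothing about the subjects changed
print(round(mean1, 1), round(mean2, 1))
```

This is why extreme pre-test scores inflate apparent change scores: part of the pre/post difference is RTM, not a real effect.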
validity
The confidence we have that our measurement tools are giving us accurate info about a relevant construct so that we can apply results in a meaningful way
the extent to which a test measures what it is intended to measure
questions addressed by validity
is a test capable of discriminating among individuals w/ or w/o certain traits, diagnoses, or conditions?
can the test evaluate the magnitude or quality of a variable or the degree of change from 1 time to another?
can we make useful & accurate predictions about a pts future function or status based on the outcome of a test?
_____ relates to consistency of a measurement
_____ relates to alignment of the measurement w/ a targeted construct
reliability
validity
the 3 C’s
content validity
criterion-related validity
construct validity
content validity
refers to the adequacy w/ which a test's items sample the complete universe of content
requirements of content validity
the items must adequately represent the full scope of the construct being studied
the number of items that address each component should reflect the relative importance of that component
the test should not contain irrelevant items
face validity
the implication that an instrument appears to test what it is intended to test
criterion-related validity
ability of a test/measure to align w/ results on an external criterion
are the 2 tests in agreement?
types of criterion-related validity
concurrent validity
predictive validity
concurrent validity
target & criterion test scores obtained at approximately the same time to reflect the same incident of behavior
sensitivity
true positive
specificity
true negative
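The sensitivity/specificity cards can be made concrete with a 2x2 table. Sensitivity is the true-positive rate, TP / (TP + FN); specificity is the true-negative rate, TN / (TN + FP). The counts below are hypothetical.

```python
# Sketch: sensitivity & specificity from hypothetical 2x2 counts
tp, fn = 90, 10   # condition present: true positives, false negatives
tn, fp = 80, 20   # condition absent: true negatives, false positives

sensitivity = tp / (tp + fn)   # proportion of those w/ the condition who test positive
specificity = tn / (tn + fp)   # proportion of those w/o the condition who test negative

print(sensitivity)  # 0.9
print(specificity)  # 0.8
```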
construct validity
reflects the ability of an instrument to measure the theoretical dimensions of a construct
norm referencing
validity of a test depends on the evidence supporting its use for a specified purpose
norm-referenced test
a standardized assessment designed to compare & rank individuals w/in a defined population
external validity
can the results be generalized to other persons, settings, or times?
internal validity
is there evidence of a causal relationship btwn independent & dependent variables?
threats to internal validity
hx
maturation
attrition
testing
instrumentation
regression to the mean
selection
hx
unanticipated events that occur during a study that affect DV
maturation
changes in the DV due to normal development or the simple passage of time
attrition
differential loss of subjects across groups; dropouts occur for reasons specific to the experimental situation
controlling for threats to internal validity
random assignment & control groups
blinding subjects & investigators
threats to construct validity
operational definitions
comprehensive measurements
subgroup differences
time frame
multiple tx interactions
experimental bias
operational definitions
results of a study can only be interpreted w/in the contexts of these definitions
experimental bias
subjects' behavior changes bc they are being studied; investigators influence how subjects respond
threats to external validity
influence of selection
adherence
influence of settings
ecological validity
influence of hx
adherence
compliance w/ protocol
ecological validity
generalization to real world
strategies to control for subject variability
homogeneous samples
random assignment
blocking variables
matching
repeated measures
per-protocol analysis
only includes those subjects who complied w/ the trial's protocol
non-completers are removed from the analysis
intention to treat analysis
data are analyzed according to original random assignments, regardless of the tx subjects actually received or whether they dropped out or were non-compliant
ITT
intention to treat
____ may make the experimental group look more successful. ____ may underestimate a tx effect
per-protocol analysis
intention to treat analysis
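The per-protocol vs ITT contrast above can be sketched on toy trial data (all records hypothetical). ITT averages every subject by original assignment; per-protocol drops the non-completers first, which here inflates the apparent treatment mean.

```python
# Sketch: per-protocol vs intention-to-treat means on hypothetical trial records.
subjects = [
    {"arm": "tx", "adhered": True,  "score": 10},
    {"arm": "tx", "adhered": True,  "score": 8},
    {"arm": "tx", "adhered": False, "score": 3},   # non-completer
    {"arm": "control", "adhered": True, "score": 5},
    {"arm": "control", "adhered": True, "score": 4},
]

def mean(xs):
    return sum(xs) / len(xs)

# ITT: analyze everyone by original random assignment
itt_tx = mean([s["score"] for s in subjects if s["arm"] == "tx"])

# Per-protocol: non-completers are removed before analysis
pp_tx = mean([s["score"] for s in subjects if s["arm"] == "tx" and s["adhered"]])

print(itt_tx, pp_tx)  # 7.0 9.0
```

The per-protocol mean (9.0) makes the experimental arm look more successful than the ITT mean (7.0), matching the last card.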