reliability
the extent to which a measured value can be obtained consistently during repeated assessment of unchanging behavior; can be conceptualized as reproducibility or dependability; consistent responses under steady conditions; estimates how much of a measured value is attributable to an accurate reading and how much to error
classical measurement theory
an observed score can be thought of as a fixed true score (which is unknown) plus or minus an unknown error component
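A minimal simulation of this idea, assuming normally distributed random error around a fixed true score (all values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

true_score = 170.0             # fixed, unknown true value (e.g., height in cm)
error = rng.normal(0, 0.5, 5)  # random error component for 5 repeated trials
observed = true_score + error  # observed score = true score +/- error

print(observed)         # each trial differs slightly from 170.0
print(observed.mean())  # averaging trials pulls the estimate toward the true score
```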
measurement error
any difference between the true value and the observed value
systematic errors
a type of measurement error; predictable, constant errors in the same direction; not considered a statistical problem for reliability, but can hurt validity; example: a tape measure that is incorrectly marked
random errors
a type of measurement error; unpredictable errors due to chance or variability; affect reliability by moving values further from the true value; over- and underestimates should occur with equal frequency over the long run, so averaging trials helps; examples: errors due to fatigue, mechanical inaccuracies, a patient moving during a height measurement
the individual taking the measurement, the measuring instrument, variability of the characteristic being measured
List the three general sources of error within a measurement system.
variance
a measure of the variability among scores within a sample
greater
A larger variance indicates a LESSER/GREATER dispersion of scores.
relative
RELATIVE/ABSOLUTE reliability coefficients reflect true variance as a proportion of total variance.
general reliability ratio (coefficient)
true score variance / (true score variance + error variance)
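A hedged sketch of this ratio with made-up variance components:

```python
# general reliability coefficient = true score variance / (true + error variance)
true_variance = 9.0   # hypothetical variance of the true scores
error_variance = 1.0  # hypothetical variance of the measurement error

reliability = true_variance / (true_variance + error_variance)
print(reliability)  # 0.9 -> 90% of observed variance reflects true differences
```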
true
T/F: 1.00 indicates perfect reliability.
intraclass correlation coefficient (ICC), Kappa coefficients
List the two most common relative reliability coefficients.
absolute
RELATIVE/ABSOLUTE reliability indicates how much of an actual measured value is likely due to error.
standard error of measurement
What is the most common index of absolute reliability?
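The standard error of measurement is commonly estimated from the sample standard deviation and a reliability coefficient such as the ICC; a sketch of that common formula, with illustrative numbers:

```python
import math

sd = 4.0    # standard deviation of the observed scores (illustrative)
icc = 0.90  # relative reliability coefficient (illustrative)

sem = sd * math.sqrt(1 - icc)  # SEM = SD * sqrt(1 - reliability)
print(round(sem, 2))  # ~1.26, expressed in the same units as the measurement
```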
ICC
What relative reliability coefficient is used for continuous scales?
Kappa coefficients
What relative reliability coefficient is used for categorical scales?
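A small illustration of Cohen's kappa for a categorical rating made by two raters; this hand-rolled version applies the standard kappa formula (observed agreement corrected for chance agreement) to made-up ratings:

```python
import numpy as np

# Two raters classify the same 10 patients as "impaired" (1) or "not" (0)
rater_a = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])
rater_b = np.array([1, 0, 0, 1, 0, 0, 1, 1, 1, 1])

p_observed = np.mean(rater_a == rater_b)  # proportion of actual agreement

# Chance agreement: probability both say 1 plus probability both say 0
p_chance = (rater_a.mean() * rater_b.mean()
            + (1 - rater_a.mean()) * (1 - rater_b.mean()))

kappa = (p_observed - p_chance) / (1 - p_chance)
print(round(kappa, 2))  # agreement beyond what chance alone would produce
```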
number/timing of trials
Which of the following considerations on reliability are PTs most concerned about in our setting: subject characteristics, training and skill of examiners, setting, or number/timing of trials?
false
T/F: Reliability is an all-or-none phenomenon.
test-retest, rater, alternate forms, internal consistency
List the four types of relative reliability.
test-retest, rater
List the two types of relative reliability most used in PT.
test-retest reliability
used to establish that an instrument is capable of measuring an unchanging variable with consistency; one sample is measured at least two times, keeping all testing conditions as constant as possible
carryover effects
effects of learning and practice that can affect the second measurement in test-retest reliability
testing effects
another consideration that can affect the second measurement in test-retest reliability, such as fatigue
true
T/F: If scores are reliable, they should be similar.
ICC
ICC/KAPPA are used for quantitative measures when considering test-retest reliability.
kappa
ICC/KAPPA are used for categorical data when considering test-retest reliability.
intra, inter
List the two types of rater reliability.
rater reliability
to establish this type of reliability, we must assume the instrument and response variable are stable, so differences between scores can be attributed to ________ error
intrarater reliability
the stability of the data recorded by one individual; best established with two or more recordings; essentially the same as test-retest reliability when rater skill is relevant to the accuracy of the test; should be established FIRST
interrater reliability
concerns variation between two or more raters who measure the same characteristic; when it is not established, it limits generalizability of study results; best assessed when all raters assess the exact same trial, simultaneously and independently
alternate forms
when multiple versions of a measurement instrument are considered equivalent; achieved by giving both versions of the test to the same group in the same sitting and correlating the results
correlation
Most reliability coefficients are based on _________ metrics.
correlation
the degree of association between two sets of data; it cannot tell us the extent of agreement between the two sets
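A quick illustration of why association is not agreement, using hypothetical ratings where one rater scores a constant 5 points higher than the other:

```python
import numpy as np

rater_a = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
rater_b = rater_a + 5.0  # systematic offset: the two raters never actually agree

r = np.corrcoef(rater_a, rater_b)[0, 1]
print(r)                            # 1.0 -> perfect association (correlation)
print(np.mean(rater_b - rater_a))   # 5.0 -> yet a constant 5-point disagreement
```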
internal consistency
generally applicable to surveys, questionnaires, written examinations, and interviews; reflects the extent to which items homogeneously measure various aspects of the same characteristic
internal consistency
commonly tested by evaluating the correlation between each item and the summative score, Cronbach's alpha, and split-half reliability
Cronbach's alpha
relative reliability index
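A sketch of Cronbach's alpha using the standard formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score), applied to a made-up 4-item questionnaire:

```python
import numpy as np

# rows = respondents, columns = items of a hypothetical 4-item questionnaire
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])

k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)      # variance of each item
total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scores

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(round(alpha, 2))  # closer to 1.0 -> items measure the characteristic more homogeneously
```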
split-half reliability
combining two sets of items testing the same content into one long instrument with redundant halves; the halves are then scored separately and the results correlated
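A sketch of split-half reliability with illustrative half-scores; the Spearman-Brown correction applied at the end is a commonly used step (not stated on the card) to estimate the reliability of the full-length instrument from the correlation of its halves:

```python
import numpy as np

# Total scores on each redundant half of a hypothetical instrument
half_1 = np.array([20, 25, 18, 30, 22, 27])
half_2 = np.array([19, 26, 17, 29, 24, 26])

r_halves = np.corrcoef(half_1, half_2)[0, 1]

# Spearman-Brown correction: estimated reliability of the full-length test
r_full = (2 * r_halves) / (1 + r_halves)
print(round(r_halves, 2), round(r_full, 2))
```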
change
When evaluating _______, we need to have confidence that the instrument is reliable so we can assume that the observed difference represents true _______.
change scores
measure the difference between the first and next measures (ex: pretest to posttest)
regression to the mean
extreme scores can reflect substantial error, and tend to move closer to the expected average score when retested; extreme scores are a concern when classifying subjects based on score
minimum detectable change
the amount of change in a variable that must be achieved to reflect some true difference (outside of measurement error)
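One common formulation (often reported as MDC95) builds the MDC from the SEM, a z-value for the desired confidence, and sqrt(2) for the two measurement occasions; a sketch with illustrative numbers:

```python
import math

sd = 4.0    # standard deviation of scores (illustrative)
icc = 0.90  # test-retest reliability coefficient (illustrative)

sem = sd * math.sqrt(1 - icc)       # standard error of measurement
mdc95 = 1.96 * sem * math.sqrt(2)   # 95% confidence, two measurement occasions
print(round(mdc95, 2))  # change smaller than this may be measurement error only
```

Because the SEM shrinks as reliability rises, a more reliable measure yields a smaller MDC.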
smaller
The greater the reliability, the SMALLER/LARGER the MDC.
smaller
MDC is generally SMALLER/LARGER than MCID.
MCID
minimal clinically important difference; the amount of change that is considered meaningful