Reliability
refers to the consistency of the data collected with a given tool
observed score = true score + measurement error
reliability coefficient (r_tt)
r_tt = true-score variance / observed-score variance
→ the shared variance between the true score and the observed score
>0.90 → excellent reliability
>0.70 → good reliability
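A minimal sketch of the variance-ratio definition above, assuming simulated data where the true scores and errors are known (only possible in a simulation), using numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

true_scores = rng.normal(loc=50, scale=10, size=1000)  # latent true scores (known only in simulation)
errors = rng.normal(loc=0, scale=5, size=1000)         # random measurement error
observed = true_scores + errors                        # observed = true + error

r_tt = true_scores.var() / observed.var()
print(f"r_tt ≈ {r_tt:.2f}")  # expected ≈ 10^2 / (10^2 + 5^2) = 0.80 → good reliability
```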
experiments assessing reliability
Test-Retest Agreement
Inter-Rater Agreement
Agreement between methods
e.g.:
Pearson's Correlation Coefficient
Intraclass Correlation Coefficient
Cronbach's Alpha
test-retest
correlation between repeated applications over time
equivalent forms, split-half, internal consistency
correlation across different variations of the instrument
inter-rater reliability
correlation from different observers
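A minimal sketch (hypothetical scores, using numpy) of test-retest reliability as Pearson's r between two administrations; the same computation applies to two raters or two alternative instruments:

```python
import numpy as np

time1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])   # first administration (hypothetical)
time2 = np.array([13, 14, 10, 19, 18, 10, 15, 17])  # second administration (hypothetical)

r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")
```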
test-retest: choosing the interval
too short an interval → recall of the previous responses (inflates agreement)
too long an interval → real change in the subject's state (deflates agreement)
intraclass correlation coefficient (ICC)
ICC = n*(MS_patients - MS_error) / [n*MS_patients + k*MS_repetitions + (n*k - n - k)*MS_error]
where n = number of patients, k = number of repetitions (raters), and the MS terms are the mean squares from a two-way ANOVA (see the sketch after the variance components below)
variability between patients
each patient is different
variability within patients
measurement error
residual variability
scores vary due to other causes
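A minimal sketch of the ICC formula above, assuming a hypothetical patients × repetitions matrix, with the mean squares estimated from a two-way ANOVA decomposition:

```python
import numpy as np

# rows = patients, columns = repetitions (hypothetical ratings)
x = np.array([[9.0, 10.0, 8.0],
              [6.0, 7.0, 6.0],
              [8.0, 8.0, 9.0],
              [7.0, 6.0, 7.0],
              [10.0, 9.0, 10.0]])
n, k = x.shape
grand = x.mean()

ss_patients = k * ((x.mean(axis=1) - grand) ** 2).sum()      # variability between patients
ss_reps = n * ((x.mean(axis=0) - grand) ** 2).sum()          # variability between repetitions
ss_error = ((x - grand) ** 2).sum() - ss_patients - ss_reps  # residual variability

ms_patients = ss_patients / (n - 1)
ms_reps = ss_reps / (k - 1)
ms_error = ss_error / ((n - 1) * (k - 1))

icc = n * (ms_patients - ms_error) / (
    n * ms_patients + k * ms_reps + (n * k - n - k) * ms_error)
print(f"ICC ≈ {icc:.2f}")
```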
equivalent forms reliability
the agreement between scores when using two alternative instruments
eliminates learning effects and memory bias
difficult to construct truly equivalent versions
parallelism assumption
scores on the two forms should have the same variance and, ideally, the same mean
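A minimal sketch (hypothetical scores, using numpy) of equivalent-forms reliability, with a rough check of the parallelism assumption:

```python
import numpy as np

form_a = np.array([21, 18, 25, 30, 17, 22, 27, 19])  # scores on form A (hypothetical)
form_b = np.array([20, 19, 24, 29, 18, 23, 26, 21])  # scores on form B (hypothetical)

r_forms = np.corrcoef(form_a, form_b)[0, 1]
print(f"equivalent-forms r = {r_forms:.2f}")
print(f"means: {form_a.mean():.1f} vs {form_b.mean():.1f}")                 # should be similar
print(f"variances: {form_a.var(ddof=1):.1f} vs {form_b.var(ddof=1):.1f}")   # should be similar
```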
split-half
dividing the questionnaire into two halves
the instrument must have a relatively large number of items
eliminates the learning effect
difficult to construct two truly parallel halves
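A minimal sketch (simulated item scores) of split-half reliability: correlate the two half-scores, then correct to full length with the Spearman-Brown formula (x = 2):

```python
import numpy as np

rng = np.random.default_rng(1)
true = rng.normal(size=50)                                    # latent trait, 50 subjects
items = true[:, None] + rng.normal(scale=1.0, size=(50, 10))  # 10 correlated items (simulated)

odd_half = items[:, 0::2].sum(axis=1)    # items 1, 3, 5, ...
even_half = items[:, 1::2].sum(axis=1)   # items 2, 4, 6, ...

r_half = np.corrcoef(odd_half, even_half)[0, 1]
r_full = 2 * r_half / (1 + r_half)       # Spearman-Brown correction to full length
print(f"half-test r = {r_half:.2f}, corrected full-length r = {r_full:.2f}")
```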
internal consistency
the extent to which items on a psychometric scale are correlated with each other
Cronbach’s α is a conservative (lower-bound) estimate of reliability
Cronbach’s α
α = k/(k-1) * (1 - sum of item variances / variance of the total score), with k = number of items
scales developed to maximize reliability tend to be long
item selection using Cronbach’s α
if Cronbach's α does not decrease when an item is omitted, that item is a candidate for elimination from the scale
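A minimal sketch (simulated item matrix, using numpy) of Cronbach’s α and the "α if item deleted" screen used for item selection:

```python
import numpy as np

def cronbach_alpha(items):
    """items: subjects x items matrix of scores."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the total score
    return k / (k - 1) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(2)
true = rng.normal(size=100)
data = true[:, None] + rng.normal(scale=1.0, size=(100, 8))  # 8 correlated items (simulated)

alpha = cronbach_alpha(data)
print(f"α = {alpha:.2f}")
for j in range(data.shape[1]):
    alpha_wo = cronbach_alpha(np.delete(data, j, axis=1))    # α with item j removed
    flag = " ← candidate for elimination" if alpha_wo >= alpha else ""
    print(f"α without item {j + 1}: {alpha_wo:.2f}{flag}")
```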
the Spearman-Brown formula
predicts the gain in α that is expected by increasing the number of items
predicted α = x*α / (x*α + (1 - α)), where x is the factor by which the number of items is increased
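A minimal sketch of the Spearman-Brown prediction above, with x as the lengthening factor:

```python
def spearman_brown(alpha, x):
    """Predicted reliability when the scale is lengthened by a factor x."""
    return x * alpha / (x * alpha + (1 - alpha))

print(round(spearman_brown(0.70, 2), 2))  # doubling a scale with α = 0.70 → ≈ 0.82
```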
internal consistency method
based on the principle that each question in the questionnaire is a parallel form of the other questions
→ the overall reliability of the questionnaire is estimated from the average correlation between all possible pairs of items
items must be homogeneous
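A minimal sketch (simulated items) of this average-correlation view, computing the standardized form of α from the mean correlation over all item pairs:

```python
import numpy as np

def standardized_alpha(items):
    corr = np.corrcoef(items, rowvar=False)       # item x item correlation matrix
    k = corr.shape[0]
    r_bar = corr[np.triu_indices(k, k=1)].mean()  # average correlation over all item pairs
    return k * r_bar / (1 + (k - 1) * r_bar)

rng = np.random.default_rng(3)
true = rng.normal(size=100)
items = true[:, None] + rng.normal(scale=1.0, size=(100, 6))  # 6 correlated items (simulated)
print(f"standardized α = {standardized_alpha(items):.2f}")
```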
construction of a reliable questionnaire
Construct simple, precise, concrete questions to reduce ambiguity
Increase the length of the scale → random error averages out as the number of items increases (Spearman-Brown)
Increase the homogeneity of items (statistical packages suggest which questions to eliminate to increase reliability)
Control the administration of the questionnaire, which must be done under standard conditions and according to standard procedures