reliability
the precision of an instrument in providing consistent measures of the relevant underlying construct
classical test theory
observed scores, true scores, measurement error
CTT assumptions
observed scores are determined by true scores and measurement error
measurement error is random
CTT formula
observed score = true score + error
variability and measurement error (relating to SEM)
measurement error harms the ability to use observed differences among people as indicators of true differences between them
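A minimal sketch (not from the source) of the CTT model above: observed scores are simulated as true scores plus random error, and the error weakens how well observed differences track true differences. The variable names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_people = 1000
true_scores = rng.normal(loc=50, scale=10, size=n_people)  # T: true scores
error = rng.normal(loc=0, scale=5, size=n_people)          # E: random error, independent of T
observed = true_scores + error                              # X = T + E

# With more error variance this correlation drops, so observed differences
# among people become weaker indicators of their true differences.
print(np.corrcoef(observed, true_scores)[0, 1])
```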
4 ways of conceptualizing reliability
internal consistency, test-retest, alternate-forms, interrater
standard error of measurement
estimate of the amount of error inherent in a child's obtained score
represents the standard deviation of the distribution of scores that would be obtained over repeated testings
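A small sketch of the usual CTT formula for the standard error of measurement, SEM = SD × sqrt(1 − reliability); the SD and reliability values are made up for illustration.

```python
import math

sd = 15.0           # standard deviation of obtained scores (e.g., an IQ-style scale)
reliability = 0.90  # reliability estimate of the test

# SEM: standard deviation of the distribution of scores a person would obtain
# over hypothetical repeated testings.
sem = sd * math.sqrt(1 - reliability)
print(round(sem, 2))  # ~4.74
```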
3 methods of estimating reliability
alternate forms (parallel), test-retest, internal consistency
types of reliability
test-retest, interrater, parallel forms, internal consistency
parallel
2 versions of the test
compute correlation between scores from the 2 forms
test-retest
given the same test twice
compute correlation between scores from the 2 administrations
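Both estimates above reduce to a Pearson correlation between two columns of scores: two forms (parallel) or two administrations (test-retest). A hedged sketch with made-up data:

```python
import numpy as np

# Scores for the same people from form A and form B (or time 1 and time 2).
form_a = np.array([12, 15, 9, 20, 17, 11, 14, 18])
form_b = np.array([13, 14, 10, 19, 18, 10, 15, 17])

# The reliability estimate is the correlation between the two score columns.
reliability_estimate = np.corrcoef(form_a, form_b)[0, 1]
print(round(reliability_estimate, 3))
```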
internal consistency
split half - 1 test administration, 2 subtest scores (even/odd)
cronbach's alpha (raw and standardized) and omega
reflects how strongly the item scores correlate with one another
computed with SPSS (see the split-half sketch below)
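Alpha and omega are usually computed in software such as SPSS; as a rough illustration of the split-half idea above, this sketch (assumed, simulated item data) correlates odd- and even-item subtest scores and applies the Spearman-Brown correction for the halved test length.

```python
import numpy as np

rng = np.random.default_rng(1)
# items: rows = examinees, columns = item scores (simulated; shared factor adds consistency)
items = rng.normal(size=(200, 10)) + rng.normal(size=(200, 1))

odd_half = items[:, 0::2].sum(axis=1)   # subtest score from odd-numbered items
even_half = items[:, 1::2].sum(axis=1)  # subtest score from even-numbered items

r_halves = np.corrcoef(odd_half, even_half)[0, 1]
# Spearman-Brown correction: estimate reliability of the full-length test.
split_half_reliability = 2 * r_halves / (1 + r_halves)
print(round(split_half_reliability, 3))
```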
steps for calculating cronbach's alpha
compute inter-item covariances
compute average inter-item correlation
compute alpha
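A sketch that follows the three steps above on an assumed item-score matrix: the inter-item covariance matrix gives raw alpha, and the average inter-item correlation gives standardized alpha.

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = examinees, columns = item scores."""
    k = items.shape[1]
    cov = np.cov(items, rowvar=False)           # step 1: inter-item covariance matrix
    item_vars = np.diag(cov)
    total_var = cov.sum()                       # variance of the total score
    raw_alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    corr = np.corrcoef(items, rowvar=False)     # step 2: average inter-item correlation
    r_bar = corr[~np.eye(k, dtype=bool)].mean()
    std_alpha = (k * r_bar) / (1 + (k - 1) * r_bar)  # step 3: standardized alpha
    return raw_alpha, std_alpha

rng = np.random.default_rng(2)
data = rng.normal(size=(150, 8)) + rng.normal(size=(150, 1))  # simulated item scores
print(cronbach_alpha(data))
```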
improving reliability
using more items (increase test length)
identifying/selecting items that are consistent with each other (internal consistency)
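The payoff of adding items can be projected with the Spearman-Brown prophecy formula, a standard CTT result not stated in the source: r_new = n·r / (1 + (n − 1)·r), where n is the factor by which test length changes. The numbers below are hypothetical.

```python
def spearman_brown(reliability, length_factor):
    """Projected reliability after multiplying test length by length_factor."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Doubling a test whose current reliability is .70:
print(round(spearman_brown(0.70, 2), 3))  # ~0.824
```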
point estimate
best guess of an individual's true score
confidence interval
range of scores in which the true score likely falls
reflects the precision of the point estimate
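A sketch of building a confidence interval around an obtained score using the SEM (obtained score ± z × SEM); the score, SD, and reliability are hypothetical.

```python
import math

observed_score = 104
sd = 15.0
reliability = 0.91
z = 1.96  # 95% confidence

sem = sd * math.sqrt(1 - reliability)
lower = observed_score - z * sem
upper = observed_score + z * sem
# The true score likely falls within this range; a narrower band means a more precise point estimate.
print(f"95% CI: {lower:.1f} to {upper:.1f}")
```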
validity
extent to which inferences made from a test are appropriate, meaningful, and useful
content validity
extent to which the test contains content appropriate to what it is intended to measure
criterion types
predictive and concurrent
predictive criterion validity
reflects future abilities
concurrent criterion validity
reflects ability in the current setting
construct validity types
convergent and discriminant
convergent construct validity
relates to what it should
discriminant construct validity
does not relate to what it should not
threats to validity
construct underrepresentation and construct-irrelevant content
consequences of testing
facet of validity
guard against adverse consequences
actual and potential consequences of test use
test score interpretation and test score use
also called consequential validity
consequential validity
match between intended consequences and actual consequences of use
nomological network
network of constructs, behaviors, or properties associated with a construct
methods for examining construct validity
focused examinations
unsystematic examination of sets of correlations
multitrait-multimethod (MTMM) matrix
quantifying construct validity
focused examinations
looks at selected correlations
unsystematic examination of sets of correlations
scans many correlations broadly
MTMM
structured comparison of traits and methods
quantifying construct validity
uses CFA to numerically estimate validity
test bias
systematic error in a test or assessment process that results in differential performance between groups
validity convergent
extent to which a measure strongly correlates with other measures of the same or similar constructs
validity discriminant
the measure is not correlated with other measures it theoretically should not be related to
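A small numeric sketch of the convergent/discriminant logic: a new anxiety measure (hypothetical, simulated data) should correlate strongly with an established anxiety measure and only weakly with an unrelated construct such as height.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
anxiety_true = rng.normal(size=n)

new_anxiety_scale = anxiety_true + rng.normal(scale=0.5, size=n)          # measure under validation
established_anxiety_scale = anxiety_true + rng.normal(scale=0.5, size=n)  # same construct
height = rng.normal(size=n)                                               # unrelated construct

convergent_r = np.corrcoef(new_anxiety_scale, established_anxiety_scale)[0, 1]  # expect high
discriminant_r = np.corrcoef(new_anxiety_scale, height)[0, 1]                   # expect near zero
print(round(convergent_r, 2), round(discriminant_r, 2))
```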
validity consequential
assesses social and ethical consequences of using a test
reliability vs validity
R: consistency, are the results repeatable
V: accuracy, does it measure what it should