extent to which a measured value can be obtained consistently with repeated assessment in constant conditions
reliability
without sufficient ____, cannot have confidence in data
reliability
concept of ____ is used to estimate reliability
variance
two types of reliability
relative and absolute
relative reliability: coefficient that reflects _____ as a proportion of the total variance in a set of scores
true variance
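In classical test theory terms, a relative reliability coefficient expresses true (between-subject) variance as a proportion of total observed variance. A minimal sketch of that defining ratio, assuming the standard decomposition of observed variance into true and error components:

```latex
% Relative reliability coefficient (e.g., an ICC):
% proportion of total variance that is true variance rather than error
r_{xx} = \frac{\sigma^2_{\mathrm{true}}}{\sigma^2_{\mathrm{true}} + \sigma^2_{\mathrm{error}}}
```

Because it is a ratio of variances, the coefficient is unitless, which is why it can be compared across different tests.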
type of reliability that is good for group comparison
relative reliability
type of reliability that is good for comparing reliability of different measures
relative reliability
relative reliability = ____ unit of measurement
no
relative reliability lets us compare _____
different tests
absolute reliability: indicates how much of a measured value may be due to ____
error
type of reliability useful for looking at individuals
absolute reliability
with absolute reliability, use _____ to find the range where a person's true score will fall
SEM
absolute reliability = ____ unit of measurement
includes
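A minimal sketch of how the SEM is typically computed from a reliability coefficient and used to bound a person's true score, assuming the standard formula SEM = SD × √(1 − r) and a 95% confidence level; all numbers here are hypothetical:

```python
import math

# Hypothetical values, for illustration only
observed_score = 72.0   # one person's measured score
sd = 8.0                # standard deviation of scores in the group
reliability = 0.90      # e.g., a test-retest coefficient or ICC

# Standard error of measurement: SEM = SD * sqrt(1 - reliability)
sem = sd * math.sqrt(1 - reliability)

# 95% band in which the person's true score is expected to fall
lower = observed_score - 1.96 * sem
upper = observed_score + 1.96 * sem
print(f"SEM = {sem:.2f}; true score likely within [{lower:.2f}, {upper:.2f}]")
```

Note that the SEM carries the same units as the measurement itself, which is what makes absolute reliability useful for interpreting individual scores.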
reliability exists in a specific context: things relevant to application
the ____ of the groups
raters' ___ and ____
number of ____
population, skill, training, trials
reliability helps reduce _____ variance; there are many sources of ____, and some can be controlled for to improve ____
unexplained, error, reliability
4 types of reliability
test-retest, rater, alternate forms, internal consistency
assesses the measurement instrument's consistency
test-retest reliability
assesses the consistency of one or more raters
rater reliability
compares two forms of an instrument
alternate forms reliability
extent to which the items of a multi-item test are successful in measuring various aspects of the same characteristic
internal consistency reliability
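Internal consistency is commonly quantified with Cronbach's alpha, which rises as the items of a scale covary. A minimal sketch, assuming a respondents-by-items score matrix with made-up data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering 4 items of one scale
scores = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```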
test-retest reliability
most meaningful for measures that don’t rely on ____
____ between tests must be considered
neutralize carryover with ____
testing effects: act of ___ changes outcome, can show up as ____ error (predictable, unidirectional)
raters, interval, pre-trials, measurement, systematic
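A simple relative estimate of test-retest reliability is the correlation between two administrations of the same instrument, given an appropriate interval between tests. A minimal sketch with hypothetical scores, using a Pearson correlation (an ICC is often preferred in practice):

```python
import numpy as np

# Hypothetical scores for the same subjects at two sessions
test1 = np.array([10.0, 14.0, 9.0, 16.0, 12.0, 11.0])
test2 = np.array([11.0, 13.0, 10.0, 15.0, 12.0, 12.0])

# Pearson correlation between sessions as a simple
# relative-reliability estimate (unitless)
r = np.corrcoef(test1, test2)[0, 1]
print(f"test-retest r = {r:.2f}")
```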
two types of rater reliability
intra and inter
stability of data recorded by one rater across trials
intra-rater reliability
intra-rater reliability
possibility of _____ when _____ criteria are used to rate responses
in instances where the rater's ____ is relevant to the accuracy of the test, it's essentially the same as _____
bias, subjective, skill, test-retest
stability of data recorded by two or more raters
inter-rater
____-rater should be established before ____-rater
intra, inter
inter-rater reliability
best when all raters measure the _____
simultaneously yet ____
same response, independently
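When two raters score the same responses independently on a categorical scale, agreement is often summarized with Cohen's kappa, which corrects raw agreement for chance. A minimal sketch with made-up ratings:

```python
import numpy as np

# Hypothetical categorical ratings of 8 responses by two raters
rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 1])
rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1])

# Observed agreement: proportion of responses rated identically
p_o = np.mean(rater_a == rater_b)

# Expected chance agreement from each rater's marginal proportions
p_e = sum(np.mean(rater_a == c) * np.mean(rater_b == c)
          for c in np.unique(np.concatenate([rater_a, rater_b])))

kappa = (p_o - p_e) / (1 - p_e)
print(f"observed = {p_o:.2f}, chance = {p_e:.2f}, kappa = {kappa:.2f}")
```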
change scores: difference between ____ and ____ test
pre, post
____ in measurement is a necessary precondition for being able to interpret change scores
reliability
tendency for extreme scores to fall closer to the mean on retesting
regression toward the mean
measures with strong reliability are less likely to have _____
regression toward the mean
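The link between reliability and regression toward the mean falls out of the true-score-plus-error model: the larger the error component, the further extreme observed scores drift back toward the mean on retest. A minimal simulation sketch, assuming normally distributed true scores and measurement error:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_scores = rng.normal(100, 10, n)  # latent true scores

def retest_shift(error_sd: float) -> float:
    """Mean change on retest for the top 5% of first-test scorers."""
    t1 = true_scores + rng.normal(0, error_sd, n)  # first measurement
    t2 = true_scores + rng.normal(0, error_sd, n)  # second measurement
    top = t1 >= np.quantile(t1, 0.95)              # extreme scorers on test 1
    return float((t2[top] - t1[top]).mean())

# More error (lower reliability) -> bigger average drop on retest
print(f"reliable measure (error SD 2):    {retest_shift(2.0):+.2f}")
print(f"unreliable measure (error SD 10): {retest_shift(10.0):+.2f}")
```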
maximizing reliability
____ measurement protocols
train ____
____ and improve instruments
take ____ measurements
choose a sample with ____ or ____ in scores
____ testing
standardize, raters, calibrate, multiple, range, variance, pilot