Measurement Error
Refers to the inherent uncertainty associated with any measurement, even after care has been taken to minimize preventable mistakes.
Variance
The standard deviation squared (SD²), useful in describing sources of test score variability.
True Variance
Variance attributable to true differences in the characteristic being measured.
Error Variance
Variance from irrelevant, random sources
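In classical test theory the three quantities above are related by a simple sum, and reliability is defined as the proportion of total variance that is true variance. This is the standard textbook formulation rather than anything unique to these cards:

```latex
\sigma^2 = \sigma^2_{tr} + \sigma^2_{e},
\qquad
\text{reliability} = \frac{\sigma^2_{tr}}{\sigma^2} = 1 - \frac{\sigma^2_{e}}{\sigma^2}
```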
Random Error
Consists of unpredictable fluctuations and inconsistencies of other variables in the measurement process.
Systematic Error
Errors that do not cancel each other out, because they influence scores in a consistent way.
Bias
The degree to which a measure predictably overestimates or underestimates a quantity.
Test Environment
Room temperature, level of lighting, and amount of ventilation and noise.
Testtaker variables
Pressing emotions, physical discomfort, lack of sleep, or effects from drugs or medications.
Examiner-related variables
(potential sources of error variance) The examiner’s physical appearance or demeanor.
Test-retest reliability
An estimate obtained by correlating pairs of scores from the same people on two different administrations of the same test; also known as the coefficient of stability.
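As a quick illustration (all scores below are made up), the coefficient of stability is just the Pearson r between the two sets of scores:

```python
import numpy as np

# Hypothetical scores for the same ten testtakers on two administrations of the same test
time_1 = np.array([12, 15, 9, 20, 18, 11, 14, 17, 10, 16])
time_2 = np.array([13, 14, 10, 19, 17, 12, 15, 18, 9, 15])

# Test-retest reliability estimate: Pearson r between the two administrations
coefficient_of_stability = np.corrcoef(time_1, time_2)[0, 1]
print(f"Coefficient of stability: {coefficient_of_stability:.2f}")
```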
Coefficient of Equivalence
The degree of the relationship between various forms of a test can be evaluated by means of an alternate-forms or parallel forms coefficient of reliability.
Parallel Forms
When, for each form of the test, the means and the variances of the observed test scores are equal.
Parallel-forms reliability
An estimate of the extent to which item sampling and other errors have affected test scores on versions of the same test when, for each form of the test, the means and variances of the observed test scores are equal.
Alternate forms
Different versions of a test that have been constructed so as to be parallel.
Alternate-forms reliability
An estimate of the extent to which these different forms of the same test have been affected by item sampling error or other error.
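A minimal sketch of the same idea for two forms, with made-up scores: check whether the forms look parallel (similar means and variances), then correlate them to get the coefficient of equivalence. A formal test of the equal-means-and-variances condition is a separate matter:

```python
import numpy as np

# Hypothetical scores of the same testtakers on Form A and Form B of a test
form_a = np.array([22, 31, 27, 35, 18, 29, 24, 33, 26, 30])
form_b = np.array([23, 30, 26, 34, 20, 28, 25, 32, 27, 29])

# Rough check of the parallel-forms condition: roughly equal means and variances
print(f"Means: {form_a.mean():.1f} vs. {form_b.mean():.1f}")
print(f"Variances: {form_a.var(ddof=1):.1f} vs. {form_b.var(ddof=1):.1f}")

# Coefficient of equivalence: correlation between scores on the two forms
coefficient_of_equivalence = np.corrcoef(form_a, form_b)[0, 1]
print(f"Coefficient of equivalence: {coefficient_of_equivalence:.2f}")
```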
Split-half reliability
is obtained by correlating two pairs of scores obtained from equivalent halves of a single test administered once.
Divide the test into equivalent halves
Calculate the Pearson r between scores on the two halves of the test
Adjust the half-test reliability using the Spearman-Brown formula
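A sketch of those three steps, assuming a made-up matrix of dichotomously scored items and using an odd-even split (with random data like this the resulting coefficient will be low; real data would come from an actual administration):

```python
import numpy as np

# Hypothetical item-response matrix: rows = testtakers, columns = items (1 = correct, 0 = incorrect)
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(50, 20))

# Step 1: divide the test into equivalent halves (odd-even split by item position)
odd_half = responses[:, 0::2].sum(axis=1)
even_half = responses[:, 1::2].sum(axis=1)

# Step 2: Pearson r between scores on the two halves
r_halves = np.corrcoef(odd_half, even_half)[0, 1]

# Step 3: adjust the half-test correlation with the Spearman-Brown formula
r_split_half = (2 * r_halves) / (1 + r_halves)
print(f"Half-test r = {r_halves:.2f}, Spearman-Brown adjusted = {r_split_half:.2f}")
```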
Odd-even reliability
Split a test by assigning odd-numbered items to one half of the test and even-numbered items to the other half.
Spearman-Brown
Allows a test developer or user to estimate internal consistency reliability from a correlation between two halves of the test.
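The general form of the formula, where r_xy is the reliability of the existing test and n is the factor by which the test is lengthened or shortened; the split-half case uses n = 2 with the half-test correlation:

```latex
r_{SB} = \frac{n\, r_{xy}}{1 + (n - 1)\, r_{xy}}
\qquad\text{(split-half case, } n = 2\text{):}\quad
r_{SB} = \frac{2\, r_{hh}}{1 + r_{hh}}
```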
Cronbach’s Alpha
Used to measure the internal consistency of a test; computed from the number of items and the ratio of the summed item variances to the variance of total test scores.
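A minimal sketch of the usual computation, with made-up ratings (again, random data will give a low value; real item scores would come from an actual administration):

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Coefficient alpha for a testtaker-by-item matrix of scores."""
    k = item_scores.shape[1]                              # number of items
    item_variances = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 1-to-5 ratings from 30 testtakers on 8 items
rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=(30, 8))
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")
```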
Inter-scorer reliability
is the degree of agreement or consistency between two or more scorers with regard to a particular measure.
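One common way to quantify agreement between two scorers assigning categorical ratings is Cohen's kappa (a correlation coefficient is typical when scores are continuous). A minimal sketch with made-up ratings:

```python
from collections import Counter

# Hypothetical ratings assigned by two scorers to the same twelve responses
scorer_a = ["pass", "fail", "pass", "pass", "fail", "pass", "pass", "fail", "pass", "fail", "pass", "pass"]
scorer_b = ["pass", "fail", "pass", "fail", "fail", "pass", "pass", "fail", "pass", "pass", "pass", "pass"]

n = len(scorer_a)
# Observed agreement: proportion of responses given the same rating by both scorers
p_observed = sum(a == b for a, b in zip(scorer_a, scorer_b)) / n

# Chance agreement, based on each scorer's marginal rating frequencies
counts_a, counts_b = Counter(scorer_a), Counter(scorer_b)
p_chance = sum((counts_a[c] / n) * (counts_b[c] / n) for c in counts_a | counts_b)

# Cohen's kappa: observed agreement corrected for chance agreement
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"Observed agreement = {p_observed:.2f}, kappa = {kappa:.2f}")
```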
Internal Consistency
The extent to which the items on a scale relate to one another.
Homogeneous
Describes a test that is functionally uniform throughout, measuring a single factor or trait.
Heterogeneous
Describes a test composed of items that measure more than one trait; for such a test, an estimate of internal consistency might be low relative to a more appropriate estimate of test-retest reliability.
Dynamic Characteristic
a trait, state, or ability presumed to be ever-changing as a function of situational and cognitive experience
Static Characteristic
A trait, state, or ability presumed to be relatively unchanging.
Speed test
Contains items of a uniform (typically low) level of difficulty so that, when given generous time limits, all testtakers should be able to complete all the test items correctly; in practice, the time limit is set so that few, if any, testtakers can finish.
Power test
A test with a time limit long enough to allow testtakers to attempt all items, but with some items so difficult that no testtaker is able to obtain a perfect score.
Criterion-referenced tests
is designed to provide an indication of where a testtaker stands with respect to some variable or criterion, such as an educational or a vocational objective.