Vocabulary flashcards covering key terms and concepts related to Classical Test Theory and Item Response Theory.
Classical Test Theory (CTT)
A foundational psychometric framework that proposes observed scores consist of true scores and random errors, expressed as X = T + E.
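The decomposition X = T + E can be illustrated with a minimal simulation sketch (all numbers below are made up for illustration): a fixed true score plus zero-mean random error, so repeated observed scores average out to the true score.

```python
import random

random.seed(0)

# CTT decomposition X = T + E for one examinee tested many times:
# a fixed true score T, plus fresh zero-mean random error E each time.
T = 70.0
observations = []
for _ in range(10_000):
    E = random.gauss(0, 5)   # random error, mean 0, SD 5
    X = T + E                # observed score
    observations.append(X)

mean_X = sum(observations) / len(observations)
# Errors cancel on average, so the mean observed score approaches T.
print(round(mean_X, 1))
```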
Observed Score
The total score obtained from a test, which includes both true score and error.
True Score
The hypothetical score that would be obtained with no measurement errors, representing the actual ability level.
Error Score
Random fluctuations in measurement resulting from various sources, such as test-taker factors or test administration conditions.
Reliability
The consistency of measurement, often quantified as a coefficient from 0 to 1, indicating the proportion of score variance attributable to true score variance.
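The "proportion of variance" idea can be sketched directly: with simulated true scores and independent errors (illustrative variances, not from any real test), reliability is approximately var(T) / var(X).

```python
import random

random.seed(1)

# Reliability ~ var(T) / var(X): the share of observed-score variance
# that comes from true-score variance (illustrative values).
n = 10_000
true_scores = [random.gauss(50, 10) for _ in range(n)]    # var(T) ~ 100
observed = [t + random.gauss(0, 5) for t in true_scores]  # var(E) ~ 25

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

reliability = var(true_scores) / var(observed)
print(round(reliability, 2))   # theoretical value: 100 / 125 = 0.80
```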
Validity
The degree to which a test measures what it claims to measure, ensuring accurate assessment of the intended construct.
Levels of Measurement
Four levels (Nominal, Ordinal, Interval, Ratio) proposed by Stanley Stevens, indicating how precisely variables are recorded in research.
Item Response Theory (IRT)
A modern psychometric approach focusing on the relationship between individual item responses and the latent traits they measure.
Item Characteristic Curves (ICC)
Mathematical functions representing the relationship between ability and item performance, indicating probability of correct responses.
Difficulty Parameter (b)
The ability level at which a person has a 50% chance of answering an item correctly (in models without a guessing parameter; when a guessing parameter c is included, the probability at θ = b is (1 + c)/2).
Discrimination Parameter (a)
A measure of how well an item differentiates between varying ability levels.
Guessing Parameter (c)
The probability that a low-ability examinee answers an item correctly by chance.
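The three parameters above combine in the three-parameter logistic (3PL) item characteristic curve, P(θ) = c + (1 − c) / (1 + e^(−a(θ − b))). A minimal sketch with illustrative parameter values:

```python
import math

def icc_3pl(theta, a, b, c):
    """3PL item characteristic curve: probability of a correct response
    at ability theta, given discrimination a, difficulty b, and
    guessing parameter c."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# At theta == b the logistic term equals 0.5, so P = c + (1 - c)/2.
p_at_b = icc_3pl(theta=1.0, a=1.2, b=1.0, c=0.2)
print(round(p_at_b, 2))  # 0.2 + 0.8/2 = 0.6
```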
Unidimensionality
The test must measure a single latent trait; all items should assess the same underlying construct.
Local Independence
Responses to different items are statistically independent after controlling for ability level.
Parameter Invariance
The property that item parameters remain the same regardless of which group of examinees takes the items, and that ability estimates do not depend on which particular items are administered.
Criterion-Keyed Method
A test development approach comparing responses between distinct groups to identify discriminative items.
Factor Analytic Method
An approach that uses statistical techniques to identify underlying factors from a large set of test items.
Theoretical/Rational Method
Constructing tests based on psychological theories to measure specific constructs without relying on empirical relationships.
Sample Dependency
Limitation in CTT where item parameters depend on the sample in which they were calculated.
Equating
The process of ensuring scores from different tests or forms are comparable.
Adaptive Testing
Testing that adjusts the difficulty of questions based on the examinee's ability level, often utilizing IRT.
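One common selection rule in IRT-based adaptive testing is to administer the item with maximum Fisher information at the current ability estimate; under the 2PL model, I(θ) = a² · P(θ) · (1 − P(θ)). A minimal sketch with a hypothetical four-item bank (the (a, b) values are made up):

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1 / (1 + math.exp(-a * (theta - b)))

def information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1 - p)

# Hypothetical item bank: (discrimination a, difficulty b) pairs.
bank = [(1.0, -1.5), (1.2, 0.0), (0.8, 0.5), (1.5, 2.0)]
theta_hat = 0.3  # current ability estimate

# Select the most informative item: difficulty near theta_hat wins.
best = max(bank, key=lambda item: information(theta_hat, *item))
print(best)
```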