Assumption 6
Testing should be fair and unbiased.
Tests
Tools that assess knowledge or skills.
Test Publishers
Organizations creating standardized assessment instruments.
Fairness-related Problems
Issues affecting test equity despite guidelines.
Test Manual Guidelines
Instructions for proper test administration and use.
Test User
Individual who administers or interprets a test, often with test takers from diverse backgrounds.
Assumption 7
Testing benefits society by ensuring merit-based decisions.
Nepotism
Hiring based on personal relationships, not merit.
Educational Diagnosis
Instruments identifying learning difficulties in students.
Neuropsychological Impairments
Conditions affecting cognitive functions and behavior.
Military Screening
Assessing recruits for key variables efficiently.
Norm-Referenced Testing
Evaluating scores relative to a comparison group.
Norm
Typical performance standards for defined test groups.
Normative Sample
Group analyzed to evaluate individual test performance.
Norming
Process of establishing norms for test scores.
Race Norming
Norming based on racial or ethnic backgrounds.
User Norms
Statistics from a specific test-taker group.
Standardization
Administering tests to establish consistent norms.
Standard Error of Measurement
Estimates deviation of observed scores from true scores.
Standard Error of Estimate
Error in predicting one variable from another.
Standard Error of Mean
Measure of error in sample means.
Standard Error of Difference
Estimates significance of score differences.
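The standard-error cards above can be made concrete with the textbook formula for the standard error of measurement, SEM = SD × √(1 − r), where r is the test's reliability. This formula is not stated on the cards; the function name and the numbers below are illustrative.

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1 - reliability)

# Illustrative: an IQ-style scale with SD = 15 and reliability .91
print(round(sem(15, 0.91), 1))  # 4.5
```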
Sampling
Selecting a representative portion of a population.
Stratified Sampling
Including diverse subgroups to prevent bias.
Stratified Random Sampling
Stratified sampling in which members within each subgroup are selected at random.
Purposive Sampling
Arbitrary selection believed to represent the population.
Convenience Sampling
Using readily available participants for a study.
Percentiles
Dividing distribution into 100 equal parts.
Descriptive Statistics
Summarizing data using central tendency and variability.
15th Percentile
Score at or below which 15% fall.
Raw Score
Actual number of correct answers on a test.
Percentage Correct
Correct answers divided by total items, multiplied by 100.
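The raw score, percentage correct, and percentile cards describe simple arithmetic that can be sketched directly; the function names and sample scores below are illustrative, not from the deck.

```python
def percentage_correct(raw_score: int, total_items: int) -> float:
    """Correct answers divided by total items, multiplied by 100."""
    return raw_score / total_items * 100

def percentile_rank(score: float, scores: list[float]) -> float:
    """Percentage of scores in the group at or below the given score."""
    return sum(s <= score for s in scores) / len(scores) * 100

print(percentage_correct(45, 60))                        # 75.0
print(round(percentile_rank(7, [3, 5, 7, 7, 9, 10]), 1))  # 66.7
```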
PMET Assumptions
Assumptions regarding performance measurement and evaluation techniques.
Age-Equivalent Scores
Scores indicating average performance for specific ages.
Mental Age
Age corresponding to a child's intellectual ability.
IQ Calculation
Mental age divided by chronological age, multiplied by 100.
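The ratio-IQ formula on this card can be sketched as a one-line function (the function name and example ages are illustrative):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Ratio IQ: mental age / chronological age * 100."""
    return mental_age / chronological_age * 100

# A child with a mental age of 10 and a chronological age of 8
print(ratio_iq(10, 8))  # 125.0
```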
Mean IQ Score
Average IQ score set at 100.
IQ Standard Deviation
Standard deviation of approximately 16 for IQ scores.
Tracking
Children maintain relative performance levels over time.
Grade Norms
Average performance indicators for specific school grades.
Mean or Median Score
Average score calculated for each grade level.
Developmental Norms
Norms based on traits affected by age or grade.
National Norms
Normative sample representative of the national population.
Subgroup Norms
Norms segmented by specific selection criteria.
National Anchor Norms
Norms allowing scores on one test to be equated to another via a common anchor test.
Equipercentile Method
Calculates equivalency of scores based on percentiles.
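A toy sketch of the equipercentile idea: find the percentile rank of a score in one distribution, then return the score at that same percentile in the other. Real equipercentile equating uses smoothed distributions; this simplified version, with invented data, only illustrates the principle.

```python
import math

def percentile_rank(score: float, dist: list[float]) -> float:
    """Fraction of the distribution at or below the given score."""
    return sum(s <= score for s in dist) / len(dist)

def equipercentile_equivalent(score_a: float, dist_a: list[float],
                              dist_b: list[float]) -> float:
    """Score in dist_b whose percentile rank matches score_a's in dist_a."""
    pr = percentile_rank(score_a, dist_a)
    sorted_b = sorted(dist_b)
    idx = max(0, math.ceil(pr * len(sorted_b)) - 1)
    return sorted_b[idx]

# A score of 5 on test A (scores 1-10) matches the 50th percentile,
# which corresponds to a score of 50 on test B (scores 10-100).
print(equipercentile_equivalent(5, list(range(1, 11)), list(range(10, 101, 10))))  # 50
```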
Chronological Age
Actual age of a child in years.
Skewed Data
Data distribution where values are not symmetrical.
Convenient Gauge
Easy comparison of student performance within grades.
Performance Measurement
Assessing abilities through standardized testing.
Test Manual
Document detailing test administration and norms.
Selection Criteria
Factors used to choose children for testing.
Local Norms
Performance benchmarks based on local population data.
Test Revision
Adapting tests to fit local contexts and norms.
Fixed Reference Group
Group used for scoring future test administrations.
Score Distribution
Spread of scores from a fixed reference group.
PMET Norming
Using historical norms for future test scaling.
Criterion-Referenced Evaluation
Assessment based on specific performance standards.
Standard Definition
Criterion used for making judgments in evaluations.
Weighted General Average
Average considering different subject weightings in scores.
Passing Criteria
Minimum scores required to pass an evaluation.
Content Area Focus
Emphasis on scores related to specific subjects.
Socioeconomic Level
Economic status influencing educational opportunities.
Geographic Region
Location affecting educational performance and norms.
Criterion-Referenced Testing
Measures individual performance against a standard.
Reliability Coefficient
Proportion indicating true score variance ratio.
Good Reliability Estimate
0.70 to 0.80 is acceptable for research.
Clinical Reliability Standard
A reliability of .90 may still be insufficient in clinical settings.
Classical Test Theory
Scores reflect true ability plus measurement error.
Observed Score Formula
X = T + E, where X is the observed score, T the true score, and E the error.
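The observed-score formula, and the later card stating that the mean of repeated observations estimates the true score, can be simulated. The true score, error spread, and seed below are invented for illustration.

```python
import random

random.seed(0)

TRUE_SCORE = 100  # T: the test taker's actual ability level (assumed)

def observe() -> float:
    """One observed score X = T + E, with random error E of mean 0."""
    error = random.gauss(0, 5)
    return TRUE_SCORE + error

# The mean of many repeated observations converges on the true score.
scores = [observe() for _ in range(10_000)]
mean = sum(scores) / len(scores)
print(round(mean))  # 100 (standard error of the mean is only about 0.05)
```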
True Score
The actual ability level of the test taker.
Measurement Error
Factors affecting measurement beyond the variable measured.
Random Error
Unpredictable fluctuations causing measurement inconsistencies.
Systematic Error
Constant error that can be predicted and fixed.
True Variance
Variance from genuine differences in ability.
Error Variance
Variance from irrelevant, random measurement sources.
Variance Definition
Standard deviation squared, indicating score variability.
Sampling Theory
Bell-shaped distribution of random measurement errors.
Mean of Observations
Estimates true score from repeated test applications.
Proportion of True Variance
Higher proportion indicates more reliable test scores.
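The cards on true variance, error variance, and the proportion of true variance describe the classical reliability ratio, r = true variance / (true variance + error variance). A minimal sketch, with invented variance components:

```python
def reliability(true_var: float, error_var: float) -> float:
    """Reliability as the proportion of total variance that is true variance."""
    return true_var / (true_var + error_var)

# Illustrative: 80 units of true variance, 20 of error variance
print(reliability(80, 20))  # 0.8
```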
Consistency of Test Scores
Stable true differences yield consistent scores.
Noise in Testing
Random error that varies without a discernible pattern.
Predictable Error
Systematic error that can be anticipated and corrected.
Test Score Variability
Sources of variability include true and error variance.
Measurement Process Factors
All influences on measurement besides the target variable.
Overweight Scale Error
Scale inaccurately adds 7 pounds to all weights.
Relative Standings
Order of weights remains unchanged despite errors.
Item Sampling
Variation among test items within and between tests.
Test Administration Errors
Influences affecting testtaker's performance during testing.
Test Environment Factors
Conditions like temperature and noise affecting test results.
Testtaker Variables
Personal issues impacting test performance, like stress.
Examiner-Related Variables
Influences from the examiner's presence or demeanor.
Scoring Errors
Mistakes in scoring due to technical glitches or bias.
Methodological Error
Flaws in research design affecting data integrity.
Test-Retest Reliability
Consistency of scores from same test over time.
Coefficient of Stability
Reliability measure when testing interval exceeds six months.
Carryover Effect
Previous test influences scores on subsequent tests.
Subjectivity in Scoring
Scorer's bias affecting the reliability of results.
Ambiguous Wording
Unclear questions leading to misinterpretation by respondents.
Intervening Factors
External influences affecting scores between test administrations.