Stanford–Binet Intelligence Quotient (IQ)
Core Definition
- IQ (Intelligence Quotient) is a numerical score intended to indicate a person’s cognitive ability relative to the population.
- Originated with the Stanford–Binet Intelligence Scale, an early and influential intelligence test.
Canonical Formula
- \( \text{IQ} = \frac{\text{Mental Age}}{\text{Chronological Age}} \times 100 \)
- Mental Age (MA): The level of intellectual functioning a test taker displays, expressed as the age at which that performance is typical.
- Chronological Age (CA): The test taker’s actual, biological age in years, months, and days.
- Multiplying by 100 sets the average IQ at 100 when mental and chronological ages are equal; see the code sketch after this list.
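As a minimal illustration, the ratio formula can be written as a small Python function. The name `ratio_iq` and the float-valued ages are assumptions made for this sketch, not part of the historical test.

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Ratio IQ per the classic Stanford-Binet formula: (MA / CA) * 100.

    Ages are in years; fractional values (e.g., 10.5) are fine.
    """
    if chronological_age <= 0:
        raise ValueError("chronological_age must be positive")
    return (mental_age / chronological_age) * 100
```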
How to Interpret
- \( \text{IQ} > 100 \) ⟶ Performance is above the norm for the person’s age.
- \( \text{IQ} < 100 \) ⟶ Performance is below the norm.
- Example: If a 10-year-old performs at the level typical for 12-year-olds,
  \( \text{IQ} = \frac{12}{10} \times 100 = 120 \); see the usage sketch after this list.
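Using the hypothetical `ratio_iq` helper sketched above, the worked example reads as follows.

```python
# Worked example from above: a 10-year-old testing at a 12-year-old level.
print(ratio_iq(mental_age=12, chronological_age=10))  # 120.0

# Performance exactly at age level yields the baseline score of 100.
print(ratio_iq(mental_age=10, chronological_age=10))  # 100.0
```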
Historical Context & Significance
- Developed in the early 20th century by Alfred Binet (with Théodore Simon) in France; Lewis Terman’s revision at Stanford University produced the Stanford–Binet scale.
- Pioneered objective, standardized assessment of cognitive ability.
- Influential in educational placement, military screening, and psychological research.
Limitations & Ethical Considerations
- Over-reliance on a single score can obscure multiple intelligences and socio-cultural factors.
- Potential for misuse in labeling or discriminating against individuals/groups.
- Modern IQ tests instead use age-normed deviation scoring (typically mean 100, standard deviation 15) rather than the original ratio formula; a sketch of the conversion follows this list.
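For contrast with the ratio formula, here is a hedged sketch of deviation scoring: the raw score is expressed as a z-score within a same-age norming sample, then rescaled to a mean of 100 (SD of 15 here; real tests fix these parameters during standardization). The function name and the sample data are hypothetical.

```python
from statistics import mean, stdev

def deviation_iq(raw_score: float, norm_scores: list[float],
                 sd_units: float = 15.0) -> float:
    """Deviation IQ: 100 + sd_units * z, where z locates the raw score
    within the distribution of same-age norming scores."""
    z = (raw_score - mean(norm_scores)) / stdev(norm_scores)
    return 100 + sd_units * z

# Hypothetical norming sample for one age group.
norms = [40, 45, 50, 50, 55, 55, 60, 65]
print(round(deviation_iq(58, norms)))  # ~110: above the sample mean, so IQ > 100
```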
Connections to Later Material
- Forms the conceptual basis for contemporary intelligence tests (e.g., Wechsler scales).
- Opens discussion on nature vs. nurture, test validity, and cultural fairness in assessment.