AP Psych Unit XI


Module 60 Introduction to Intelligence

Is Intelligence One General Ability?

Charles Spearman (1863–1945) believed we have one general intelligence (often shortened to g) that is at the heart of all our intelligent behavior. He granted that people often have special, outstanding abilities. But he noted that those who score high in one area, such as verbal intelligence, typically score higher than average in other areas, such as spatial or reasoning ability. Spearman’s belief stemmed in part from his work with factor analysis, a statistical procedure that identifies clusters of related items. This idea of a general mental capacity expressed by a single intelligence score was controversial in Spearman’s day, and so it remains.

One of Spearman’s early opponents was L. L. Thurstone (1887–1955). Thurstone gave 56 different tests to people and mathematically identified seven clusters of primary mental abilities (word fluency, verbal comprehension, spatial ability, perceptual speed, numerical ability, inductive reasoning, and memory). Thurstone did not rank people on a single scale of general aptitude. But when other investigators studied these profiles, they detected a persistent tendency: Those who excelled in one of the seven clusters generally scored well on the others. So, the investigators concluded, there was still some evidence of a g factor. 

Several distinct abilities tend to cluster together and to correlate enough to define a general intelligence factor. Distinct brain networks enable distinct abilities, with g explained by their coordinated activity. The result is a chorus of actions orchestrated through the best possible distribution of mental resources.

Gardner’s Multiple Intelligences

Howard Gardner has identified eight relatively independent intelligences:

  • Verbal-linguistic intelligence: Being capable of learning new languages and understanding how to use language to achieve goals.

  • Logical-mathematical intelligence: Being able to analyze problems logically and solve abstract problems.

  • Spatial-visual intelligence: Being able to use visual aids to arrive at a solution.

  • Bodily-kinesthetic intelligence: Being capable of using the entire body and engaging in movement to skillfully address a challenge.

  • Musical intelligence: Being able to produce and analyze pitch, rhythm and sound. 

  • Interpersonal intelligence: Being able to detect and explore the intentions, moods and desires of others.

  • Intrapersonal intelligence: Being able to fully understand oneself and to effectively regulate one’s own life and emotions. 

  • Naturalist intelligence: Being able to recognize and classify the various plant and environmental species in one’s surroundings.

Gardner has also proposed a ninth possible intelligence—existential intelligence—the ability “to ponder large questions about life, death, existence.” Gardner views these intelligence domains as multiple abilities that come in different packages. Brain damage, for example, may destroy one ability but leave others intact.

Sternberg’s Three Intelligences

Robert Sternberg (1985, 2011) agrees with Gardner that there is more to success than traditional intelligence and that we have multiple intelligences. But Sternberg’s triarchic theory proposes three, not eight or nine, intelligences:

  • Analytical (academic problem-solving) intelligence is assessed by intelligence tests, which present well-defined problems having a single right answer. Such tests predict school grades reasonably well and vocational success more modestly.

  • Creative intelligence is demonstrated in innovative smarts: the ability to adapt to new situations and generate novel ideas.

  • Practical intelligence is required for everyday tasks that may be poorly defined and may have multiple solutions.

With support from the U.S. College Board, Sternberg (2015) and his collaborators have developed new measures of creativity and practical thinking. Compared to older measures, these more comprehensive assessments improve prediction of American students’ first-year college grades.

Gardner and Sternberg differ in some areas, but they agree on two important points: Multiple abilities can contribute to life success, and differing varieties of giftedness bring both spice to life and challenges for education. Trained to appreciate such variety, many teachers have applied multiple intelligence theories in their classrooms.

Criticisms of Multiple Intelligence Theories

Research using factor analysis confirms that there is a general intelligence factor: g matters. It predicts performance on various complex tasks and in various jobs. And extremely high cognitive ability scores predict exceptional achievements, such as doctoral degrees and publications.

Even so, “success” is not a one-ingredient recipe. High intelligence may help you get into a good college and ultimately a desired profession, but it won’t make you successful once there. Success is a combination of talent and grit: Those who become highly successful tend also to be conscientious, well-connected, and doggedly energetic. K. Anders Ericsson and others report a 10-year rule: A common ingredient of expert performance in chess, dance, sports, computer programming, music, and medicine is “about 10 years of intense, daily practice.” The recipe for success is a gift of nature plus a whole lot of nurture.

Emotional Intelligence

Some psychologists have further explored our social intelligence—the know-how involved in understanding social situations and managing ourselves successfully. Psychologist Edward Thorndike first proposed the concept in 1920. A critical part of social intelligence, emotional intelligence, consists of four abilities:

  • Perceiving emotions: recognizing them in faces, music, and stories, and identifying one’s own emotions

  • Understanding emotions: predicting them and how they may change and blend

  • Managing emotions: knowing how to express them in varied situations, and how to manage others’ emotions

  • Using emotions: enabling adaptive or creative thinking

Emotionally intelligent people are both socially aware and self-aware. Thus, they more often succeed in relationship, career, and parenting situations than do academically smarter but less emotionally intelligent people. They also tend to be happy and healthy. Aware of these benefits, some schools have adopted programs that seek to increase teachers’ and students’ emotional intelligence. Some scholars, however, are concerned that emotional intelligence stretches the intelligence concept too far. Howard Gardner includes interpersonal and intrapersonal intelligences as two of his multiple intelligences. However, he notes that we should respect emotional sensitivity, creativity, and motivation as important but different. Stretch intelligence to include everything we prize, and the word will lose its meaning.

Comparing Theories of Intelligence

Spearman’s General Intelligence (g)

  • Summary: A basic intelligence predicts our abilities in varied academic areas.

  • Strengths: Different abilities, such as verbal and spatial, do have some tendency to correlate.

  • Other Considerations: Human abilities are too diverse to be encapsulated by a single general intelligence factor.

Thurstone’s Primary Mental Abilities

  • Summary: Our intelligence may be broken down into seven distinct factors.

  • Strengths: A single g score is not as informative as scores for seven primary mental abilities.

  • Other Considerations: Even Thurstone’s seven mental abilities show a tendency to cluster, suggesting an underlying g factor.

Gardner’s Multiple Intelligences

  • Summary: Our abilities are best classified into eight or nine independent intelligences, which include a broad range of skills beyond traditional school smarts.

  • Strengths: Intelligence is more than just verbal and mathematical skills. Other abilities are equally important to our human adaptability.

  • Other Considerations: Should all our abilities be considered intelligences? Shouldn’t some be called less vital talents?

Sternberg’s Triarchic Theory

  • Summary: Our intelligence is best classified into three areas that predict real-world success: analytical, creative, and practical.

  • Strengths: These three domains can be reliably measured.

  • Other Considerations: These three domains may be less independent than Sternberg thought and may actually share an underlying g factor.

Emotional Intelligence

  • Summary: Social intelligence is an important indicator of life success. Emotional intelligence is a key aspect, consisting of perceiving, understanding, managing, and using emotions.

  • Strengths: These four components predict social success and emotional well-being.

  • Other Considerations: Does this stretch the concept of intelligence too far?

Module 61 Assessing Intelligence

Early and Modern Tests of Mental Abilities

Some societies emphasize the collective welfare of the family, community, and society. Other societies focus on promoting individual opportunity. Plato, a pioneer of the individualist tradition, wrote more than 2000 years ago in The Republic that “no two persons are born exactly alike; but each differs from the other in natural endowments, one being suited for one occupation and the other for another.” As heirs to Plato’s individualism, people in Western societies have pondered how and why individuals differ in mental ability.

Francis Galton: Presuming Hereditary Genius

Western attempts to assess such differences began in earnest with English scientist Francis Galton (1822–1911), who was fascinated with measuring human traits. When his cousin Charles Darwin proposed that nature selects successful traits through the survival of the fittest, Galton wondered if it might be possible to measure “natural ability” and to encourage those of high ability to mate with one another. At the 1884 London Health Exhibition, more than 10,000 visitors received his assessment of their “intellectual strengths” based on such things as reaction time, sensory acuity, muscular power, and body proportions. But alas, well-regarded adults and students did not outscore others. Nor did the measures correlate with each other.

Although Galton’s quest for a simple intelligence measure failed, he gave us some statistical techniques that we still use. And his persistent belief in the inheritance of genius illustrates an important lesson from both the history of intelligence research and the history of science: Although science itself strives for objectivity, individual scientists are affected by their own assumptions and attitudes.

Alfred Binet: Predicting School Achievement

Modern intelligence testing traces its birth to early twentieth-century France, where a new law required all children to attend school. French officials knew that some children, including many newcomers to Paris, would struggle and need special classes. To minimize bias, France’s minister of public education gave psychologist Alfred Binet the task of designing fair tests. Binet and his student, Théodore Simon, began by assuming that all children follow the same course of intellectual development but that some develop more rapidly. A “dull” child should score much like a typical younger child, and a “bright” child like a typical older child. Thus, their goal became measuring each child’s mental age, the level of performance typically associated with a certain chronological age. 

To measure mental age, Binet and Simon theorized that mental aptitude, like athletic aptitude, is a general capacity that shows up in various ways. They tested a variety of reasoning and problem-solving questions on Binet’s two daughters, and then on “bright” and “backward” Parisian schoolchildren. Items answered correctly could then predict how well other French children would handle their schoolwork.

Binet and Simon made no assumptions concerning why a particular child was slow, average, or precocious. Binet personally leaned toward an environmental explanation. To raise the capacities of low-scoring children, he recommended “mental orthopedics” that would help develop their attention span and self-discipline. He believed his intelligence test did not measure inborn intelligence as a scale measures weight. Rather, it had a single practical purpose: to identify French schoolchildren needing special attention. Binet hoped his test would be used to improve children’s education, but he also feared it would be used to label children and limit their opportunities.

Lewis Terman: Measuring Innate Intelligence

Binet’s fears were realized soon after his death in 1911, when others adapted his tests for use as a numerical measure of inherited intelligence. Stanford University professor Lewis Terman (1877–1956) tried the Paris-developed questions and age norms with California schoolchildren. Adapting some of Binet’s original items, Terman extended the upper end of the test’s range from teenagers to “superior adults.” He also gave his revision the name today’s version retains—the Stanford-Binet. For Terman, intelligence tests revealed the intelligence with which a person was born.

From such tests, German psychologist William Stern derived the famous term intelligence quotient, or IQ. The IQ was simply a person’s mental age divided by chronological age and multiplied by 100 to get rid of the decimal point. Thus, an average child, whose mental and chronological ages are the same, has an IQ of 100. But an 8-year-old who answers questions as would a typical 10-year-old has an IQ of 125:

IQ = (mental age of 10 ÷ chronological age of 8) × 100 = 125
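Stern’s ratio formula is simple arithmetic. A minimal Python sketch (the function name is our own, for illustration):

```python
def ratio_iq(mental_age, chronological_age):
    """Stern's original ratio IQ: mental age divided by chronological age, times 100."""
    return mental_age / chronological_age * 100

# The 8-year-old performing like a typical 10-year-old:
print(ratio_iq(10, 8))  # 125.0

# An average child, whose mental and chronological ages match:
print(ratio_iq(6, 6))   # 100.0
```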

The original IQ formula worked fairly well for children but not for adults. Most current intelligence tests, including the Stanford-Binet, no longer compute an IQ in this manner. Instead, they assign a score that represents a test-taker’s performance relative to the average performance (which is arbitrarily set at 100) of others the same age. Intelligence test scores for most people—about 68 percent—fall between 85 and 115. Terman promoted the widespread use of intelligence testing to “take account of the inequalities of children in original endowment” by assessing their “vocational fitness.” He sympathized with Francis Galton’s eugenics, and he envisioned that the use of intelligence tests would “ultimately result in curtailing the reproduction of feeble-mindedness and in the elimination of an enormous amount of crime, pauperism, and industrial inefficiency.”

With Terman’s help, the U.S. government developed new tests to evaluate both newly arriving immigrants and World War I army recruits— the world’s first mass administration of an intelligence test. To some psychologists, the results indicated the inferiority of people not sharing their Anglo-Saxon heritage. Even Terman came to appreciate that test scores reflected not only people’s innate mental abilities but also their education, native language, and familiarity with the culture assumed by the test. Abuses of the early intelligence tests, such as in immigrant screening, serve to remind us that science can be value-laden. Behind a screen of scientific objectivity, ideology sometimes lurks.

David Wechsler: Testing Separate Strengths

Psychologist David Wechsler created what is now the most widely used individual intelligence test, the Wechsler Adult Intelligence Scale (WAIS), together with a version for school-age children (the Wechsler Intelligence Scale for Children [WISC]) and another for preschool children. The latest (2008) edition of the WAIS consists of 15 subtests, including these:

  • Similarities—reasoning the commonality of two objects or concepts 

  • Vocabulary—naming pictured objects, or defining words

  • Block Design—visual abstract processing

  • Letter-Number Sequencing—on hearing a series of numbers and letters, repeating the numbers in ascending order, and then the letters in alphabetical order

Standardization

The number of questions you answer correctly on an intelligence test would reveal almost nothing. To know how well you performed, you would need some basis for comparison. That’s why test-makers give new tests to a representative sample of people. The scores from this pretested group become the basis for future comparisons. If you then take the test following the same procedures, your score will be meaningful when compared with others. This process is called standardization.

If we construct a graph of test-takers' scores, the scores typically form a bell-shaped pattern called the bell curve, or normal curve. No matter what attributes we measure, the curve’s highest point is the average score. On an intelligence test, we give this average score a value of 100. Moving out from the average toward either extreme, we find fewer and fewer people. For both the Stanford-Binet and Wechsler tests, a person’s score indicates whether that person’s performance fell above or below the average. A performance higher than all but 2.5 percent of all scores earns an intelligence score of 130. A performance lower than 97.5 percent of all scores earns an intelligence score of 70.
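Assuming scores follow a normal curve with mean 100 and standard deviation 15 (the convention on Wechsler-style scales; the text does not state the standard deviation, so this is an assumption), the percentages above can be checked with Python’s standard library:

```python
from statistics import NormalDist

scores = NormalDist(mu=100, sigma=15)  # assumed Wechsler-style scale

# Roughly 68 percent of scores fall between 85 and 115 (one standard deviation).
within_one_sd = scores.cdf(115) - scores.cdf(85)
print(round(within_one_sd, 3))  # 0.683

# A score of 130 sits two standard deviations above the mean,
# outscoring roughly 97-98 percent of test-takers.
print(round(scores.cdf(130), 3))  # 0.977
```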

To keep the average score near 100, the Stanford-Binet and Wechsler scales are periodically restandardized. If you compared the performance of the most recent standardization sample with that of the 1930s sample, do you suppose you would find rising or declining test performance? Amazingly, intelligence test performance has improved. This worldwide phenomenon is called the Flynn effect, in honor of New Zealand researcher James Flynn, who first calculated its magnitude. The average person’s 1920 intelligence test performance, scored against today’s norms, would have been only about 76. Although there have been some regional reversals, the historic increase is now widely accepted as an important phenomenon.

The Flynn effect’s cause has been a psychological mystery. Flynn attributes the performance increase to our need to develop new mental skills to cope with modern environments. Regardless of what combination of factors explains the rise in intelligence test scores, the phenomenon counters a concern of some who see intelligence as inherited—that the higher twentieth-century birthrates among those with lower scores would shove human intelligence scores downward.

Reliability

Knowing where you stand in comparison to a standardization group still won’t say much about your intelligence unless the test has reliability. A reliable test yields consistent scores across testing occasions and test forms. To check a test’s reliability, researchers test people many times. They may split the test in half and compare scores on the two halves, test with alternative forms of the test, or retest with the same test. The higher the correlation between the two sets of scores, the higher the test’s reliability. The tests we have considered are very reliable after early childhood. When retested, people’s scores generally match their first score closely.

Validity

High reliability does not ensure a test’s validity—the extent to which the test actually measures or predicts what it promises. Tests that tap the pertinent behavior, or criterion, have content validity. The road test for a driver’s license has content validity because it samples the tasks a driver routinely faces. Course exams have content validity if they assess one’s mastery of course material. 

Aptitude test scores do predict school grades, but their predictive power peaks in the early school years and weakens later. Academic aptitude test scores are reasonably good predictors of achievement for children ages 6 to 12, where the correlation between intelligence score and school performance is about +0.6. Intelligence scores correlate even more closely with scores on achievement tests. The SAT, used in the United States as a college entrance exam, has been less successful in predicting first-year college grades. By the time we get to the Graduate Record Examination, the correlation with graduate school performance is a more modest but still significant +0.4.

Why does the predictive power of aptitude scores diminish as students move up the educational ladder? Consider: A selective college admits only students with high aptitude scores, which restricts the range of scores, and within that narrowed range the scores lose much of their predictive power. This will be true even if the test has excellent predictive validity with a more diverse sample of students. Likewise, modern grade inflation has produced less diverse high school grades. With their diminished range, high school grades now predict college grades no better than SAT scores do. So, when we validate a measure using a wide range of scores but then use it with a restricted range of scores, it loses much of its predictive validity.

Module 62 The Dynamics of Intelligence

Aging and Intelligence

With the results of cross-sectional studies showing that older adults give fewer correct answers on intelligence tests than do younger adults, WAIS creator David Wechsler concluded that “the decline of mental ability with age is part of the general [aging] process of the organism as a whole.” For a long time, this rather dismal view went unchallenged. Many corporations established mandatory retirement policies, assuming the companies would benefit by replacing aging workers with younger, more capable employees. 

After colleges in the 1920s began giving intelligence tests to entering students, several psychologists saw their chance to study intelligence longitudinally. They retested the same cohort over a period of years. What they found was a surprise: Until late in life, intelligence remained stable. On some tests, scores even increased, due partly to experience with the tests. The more optimistic results from longitudinal studies challenged the presumption that intelligence sharply declines with age. So the answers to our age-and-intelligence questions depend on what we assess and how we assess it. Crystallized intelligence—our accumulated knowledge as reflected in vocabulary and analogies tests—increases up to old age. Fluid intelligence—our ability to reason speedily and abstractly, as when solving novel logic problems—decreases beginning in the twenties and thirties, slowly up to age 75 or so, then more rapidly, especially after age 85.

With age, we lose and we win. Fluid intelligence may decline, but older adults’ social reasoning skills increase, as shown by an ability to take multiple perspectives, to appreciate knowledge limits, and to offer helpful wisdom in times of social conflict. Decisions also become less distorted by negative emotions such as anxiety, depression, and anger. These cognitive differences also help explain why mathematicians and scientists produce much of their most creative work during their late twenties or early thirties, when fluid intelligence is at its peak. In contrast, authors, historians, and philosophers tend to produce their best work in their forties, fifties, and beyond—after accumulating more knowledge. 

Stability Over the Life Span

For most children, casual observation and intelligence tests before age 3 only modestly predict their future aptitudes. Even Albert Einstein was once thought “slow”—as he was in learning to talk. Only later in childhood do children’s intelligence test scores begin to predict their adolescent and adult scores. The consistency of scores over time increases with the age of the child. By age 11, the stability becomes impressive, as Ian Deary and his colleagues discovered. Their remarkable longitudinal studies were made possible by their country, Scotland. On June 1, 1932, essentially every child in the country born in 1921 took an intelligence test. The aim was to identify working-class children who would benefit from further education. Patricia Whalley, the wife of Deary’s co-worker, Lawrence Whalley, discovered the test results on dusty storeroom shelves at the Scottish Council for Research in Education, not far from Deary’s Edinburgh University office. “This will change our lives,” Deary replied when Whalley told him the news.

And so it has, with dozens of studies of the stability and the predictive capacity of these early test results. For example, when the intelligence test administered to 11-year-old Scots in 1932 was readministered to 542 survivors as turn-of-the-millennium 80-year-olds, the correlation between the two sets of scores was striking. Ditto when 106 survivors were retested at age 90. Another study that followed Scots born in 1936 from ages 11 to 70 confirmed the remarkable stability of intelligence, independent of life circumstance.

Children and adults who are more intelligent also tend to live healthier and longer lives. Why might this be the case? Deary reports four possible explanations:

1. Intelligence facilitates more education, better jobs, and a healthier environment.

2. Intelligence encourages healthy living: less smoking, better diet, more exercise.

3. Prenatal events or early childhood illnesses might have influenced both intelligence and health.

4. A “well-wired body,” as evidenced by fast reaction speeds, perhaps fosters both intelligence and longevity.

The Low Extreme

Intellectual disability is a developmental condition that is apparent before age 18, sometimes with a known physical cause. To be diagnosed with an intellectual disability, a person must meet two criteria. The first is low intellectual functioning as reflected in a low intelligence test score. Guidelines specify test score performance that is in the lowest 3 percent of the general population, or about 70 or below. The second criterion is that the person must have difficulty adapting to the normal demands of independent living, as expressed in three areas, or skills: conceptual (language, reading, and concepts of money, time, and number), social (interpersonal skills, being socially responsible, following basic rules and laws, avoiding being victimized), and practical (health and personal care, occupational skill, and travel). In mild forms, intellectual disability, like normal intelligence, results from a combination of genetic and environmental factors.

When, thanks to the Flynn effect, intelligence tests are periodically restandardized, the intellectual-disability test score boundary can shift. Depending on when they were tested, two people with the same ability level could be classified differently. After one restandardization, individuals who scored near 70 on earlier tests suddenly lost about 6 test score points. As this has occurred, more people have become eligible for special education and financial assistance.

For some, intelligence test scores can mean life or death. In the United States, the Flynn effect means fewer Americans are now eligible for execution: The U.S. Supreme Court ruled in 2002 that the execution of people with an intellectual disability is “cruel and unusual punishment.”  In 2014, the U.S. Supreme Court recognized the imprecision and arbitrariness of a fixed cutoff score of 70, and required states with death row inmates who have scored just above 70 to consider other evidence.

The High Extreme

Children whose intelligence test scores indicate extraordinary academic gifts mostly thrive. In one famous project begun in 1921, Lewis Terman studied more than 1500 California schoolchildren with IQ scores over 135. These high-scoring children (later called the “Termites”), like those in later studies, were healthy, well-adjusted, and unusually successful academically. When restudied over the next seven decades, most people in Terman’s group had attained high levels of education. Many were doctors, lawyers, professors, scientists, and writers, though none were Nobel Prize winners. Other studies have followed the lives of precocious youths who had aced the math SAT at age 13—by scoring in the top 1 percent of their age group. By their fifties, these 1650 math whizzes had secured 681 patents.

One of psychology’s whiz kids was Jean Piaget, who by age 15 was publishing scientific articles on mollusks and who went on to become the twentieth century’s most famous developmental psychologist. Is there a gifted education program at your school? “Gifted child” programs tend to segregate high-scoring children in special classes, giving them academic enrichment not available to their peers. Critics note that tracking by aptitude sometimes creates a self-fulfilling prophecy: Implicitly labeling some children as “ungifted” and denying them opportunities for enriched education can widen the achievement gap between ability groups. Because minority and low-income youth are more often placed in lower academic groups, tracking can also promote segregation and prejudice— hardly, note critics, a healthy preparation for working and living in a multicultural society.

Critics and proponents of gifted education do, however, agree on this: Children have differing gifts, whether at math, verbal reasoning, art, or social leadership. Some children exhibit exceptional potential or talent in a given domain. Educating children as if all were alike is as naive as assuming that giftedness is something, like blue eyes, that you either have or do not have.  By providing appropriate placement suited to each child’s talents, we can promote both equity and excellence for all.

Module 63  Studying Genetic and Environmental Influences on Intelligence

Twin and Adoption Studies

Do people who share the same genes also share mental abilities? As you can see from Figure 63.1, which summarizes many studies, the answer is clearly Yes. Consider:

  • The intelligence test scores of identical twins raised together are nearly as similar as those of the same person taking the same test twice. Estimates of the heritability of intelligence range from 50 to 80 percent. Identical twins also exhibit substantial similarity (and heritability) in specific talents, such as music, math, and sports. Heredity accounts for more than half the variation in the national math and science exam scores of British 16-year-olds.

  • Scans reveal that identical twins’ brains have similar gray- and white-matter volume, and the areas associated with verbal and spatial intelligence are virtually the same. Their brains also show similar activity while doing mental tasks.

  • When 200 researchers pooled their data on 126,559 people, all of the gene variations analyzed accounted for only about 2 percent of the differences in educational achievement. And using a new genetic method, a follow-up British study recently found genes that predicted 9 percent of the variation in school achievement at age 16. This much seems clear: Intelligence is polygenetic, involving many genes; what matters for intelligence is the combination of many genes.

Other evidence points to environment effects:

  • Where environments vary widely, as they do among children of less-educated parents, environmental differences are more predictive of intelligence scores.

  • Adoption enhances the intelligence scores of mistreated or neglected children. So does adoption from poverty into a middle-class home. In one large Swedish study, children adopted into wealthier families with more educated parents had IQ scores averaging 4.4 points higher than their not-adopted biological siblings.

  • The intelligence scores of “virtual twins”—same-age, unrelated siblings adopted as infants and raised together—correlate +0.28. This suggests a modest influence of their shared environment.

Seeking to disentangle genes and environment, researchers have also compared the intelligence test scores of adopted children with those of (a) their biological parents and (b) their adoptive parents. Over time, adopted children accumulate experience in their differing adoptive families. So, would you expect the family-environment effect to grow with age and the genetic-legacy effect to shrink? In fact, the opposite is true: Adopted children’s intelligence scores resemble those of their biological parents much more than their adoptive parents. Genetic influences become more apparent as we accumulate life experience. Identical twins’ similarities, for example, continue or increase into their eighties. Thus the heritability of general intelligence increases from about 30 percent in early childhood to well over 50 percent in adulthood. Similarly, adopted children’s verbal ability scores over time become more like those of their biological parents. Who would have guessed?

Early Environmental Influences

Nowhere is the intertwining of biology and experience more apparent than in the most hopeless human environments, such as J. McVicker Hunt observed in a destitute Iranian orphanage. The typical child Hunt observed there could not sit up unassisted at age 2 or walk at age 4. The little care the infants received was not in response to their crying, cooing, or other behaviors, so the children developed little sense of personal control over their environment. They were instead becoming passive “glum lumps.” Extreme deprivation was bludgeoning native intelligence— a finding confirmed by studies of children raised in poorly run orphanages in Romania and elsewhere.

Aware of both the dramatic effects of early experiences and the impact of early intervention, Hunt began a training program for the Iranian caregivers, teaching them to play language-fostering games with 11 infants. They imitated the babies’ babbling, engaged them in vocal follow-the-leader, and, finally, they taught the infants sounds from the Persian language. The results were dramatic. By 22 months of age, the infants could name more than 50 objects and body parts, and so charmed visitors that most were adopted. Hunt’s findings are an extreme case of a more general finding: Poor environmental conditions can depress cognitive development. 

Poverty-related stresses also impede cognitive performance: impoverished people's worries and distractions consume cognitive bandwidth and can diminish their thinking capacity. So, extreme conditions can slow normal brain development. Is the reverse also true? Will an "enriched" environment give children a superior intellect? Most experts are doubtful. There is no environmental recipe for fast-forwarding a normal infant into a genius. All babies should have normal exposure to sights, sounds, and speech.

Early Intervention

Later in childhood, schooling is one intervention that pays intelligence score dividends. Schooling and intelligence interact, and both enhance later income. Hunt was a strong believer in the ability of education to boost children's chances for success by developing their cognitive and social skills. Indeed, his 1961 book, Intelligence and Experience, helped launch Project Head Start in 1965, a U.S. government-funded preschool program that has served more than 30 million children, most of whom come from families below the poverty level. Does it succeed? Studies suggest that Head Start increases school readiness and gives a modest boost to later health and high school completion rates. Generally, however, aptitude benefits fade out over time.

More encouraging results come from intensive post-babyhood preschool programs. Across a number of experiments, intelligence scores also rise with nutritional supplements to pregnant mothers and newborns (3.5 points), with quality preschool experiences (4 points), and with interactive reading programs (6 points).

What we accomplish with our intelligence depends also on our own beliefs and motivation. One analysis of 72,431 undergraduates found that study motivation and study skills rivaled aptitude and previous grades as predictors of academic achievement. Motivation can even affect intelligence test performance.

Growth Mindset

These observations would not surprise psychologist Carol Dweck. She reports that believing intelligence is changeable, not fixed, fosters a growth mindset, a focus on learning and growing. Dweck teaches young teens that the brain is like a muscle, growing stronger with use as neural connections grow. Receiving praise for effort and for tackling challenges, rather than for being smart or accomplished, helps teens understand the link between hard work and success. They also become more resilient when others frustrate them.

Indeed, superior achievements in fields from sports to science to music arise from the combination of ability, opportunity, and disciplined effort. More than 300 studies confirm that ability + opportunity + motivation = success. Twelve-year-olds with extremely high intelligence test scores are much more likely than the average person to later earn doctorates and hold patents. High school students' math proficiency and college students' grades reflect their aptitude but also their self-discipline, their belief in the power of effort, and a curious, "hungry mind." And consider: South Asian–Americans won all nine U.S. national spelling bee contests between 2008 and 2016, an incredible achievement likely influenced by a cultural belief that strong effort will bring success.

One downside, caution some researchers, is a possible overemphasis on positive thinking's power to lift people out of disadvantage. Doing so comes at a social cost: blaming struggling individuals for their circumstances.

Module 64 Group Differences and the Question of Bias

Gender Similarities and Differences

In worldwide studies, men estimate their own intelligence higher than do women. Yet compared with the anatomical and physiological differences between men and women, our intelligence differences are minor. So far as g is concerned, boys and girls, and men and women, are the same. Girls outpace boys in spelling, verbal fluency, locating objects, detecting emotions, and sensitivity to touch, taste, and color. Boys outperform girls in tests of spatial ability and complex math problems, though in math computation and overall math performance, boys and girls hardly differ. Males' mental ability scores also vary more than females'. Thus, boys worldwide outnumber girls at both the low extreme and the high extreme. Boys, for example, are more often found in special education classes, but also among those scoring very high on the SAT math test.

The most reliable male edge appears in spatial ability tests, such as those requiring speedy mental rotation of three-dimensional objects. From an evolutionary perspective, those same skills would have helped our ancestral fathers track prey and make their way home. The survival of our ancestral mothers may have benefited more from a keen memory for the location of edible plants—a legacy that lives on today in women's superior memory for objects and their location. Evolutionary psychologist Steven Pinker has argued that biology affects gender differences in life priorities, in risk-taking, and in math reasoning and spatial abilities. Such differences are observed across cultures, stable over time, influenced by prenatal hormones, and observed in genetic boys raised as girls.

But social influences also construct gender. Researchers report that culturally influenced preferences help explain why American women, more than men, avoid math-intensive vocations. Social expectations and divergent opportunities also shape boys' and girls' interests and abilities. In Asia and Russia, teen girls have outperformed boys in an international science exam; in North America and Britain, boys have scored higher. More gender-equal cultures, such as Sweden and Iceland, exhibit little of the gender math gap found in gender-unequal cultures, such as Turkey and Korea. Since the 1970s, as gender equity has increased in the United States, the boy-to-girl ratio among 12- to 14-year-olds with very high SAT math scores has declined from 13-to-1 to 3-to-1.

Racial and Ethnic Similarities and Differences

Fueling the group-differences debate are two other disturbing but scientifically agreed-upon facts:

  • Racial and ethnic groups differ in their average intelligence test scores.

  • High-scoring people are more likely to attain high levels of education and income.

There are many group differences in average intelligence test scores. New Zealanders of European descent outscore native Maori New Zealanders. And White Americans have outscored Black Americans. This Black-White difference has diminished somewhat in recent years, especially among children. Such group differences provide little basis for judging individuals. We have seen that heredity contributes to individual differences in intelligence. But group differences in a heritable trait may be entirely environmental. Consider one of nature’s experiments: Allow some children to grow up hearing their culture’s dominant language, while others, born deaf, do not. Then give both groups an intelligence test rooted in the dominant language, and those with expertise in that language will score higher. Although individual performance differences may be substantially genetic, the group difference is not.

Genetics research reveals that under the skin, we humans are remarkably alike. Moreover, looks can deceive. Light-skinned Europeans and dark-skinned Africans are genetically closer than are dark-skinned Africans and dark-skinned Aboriginal Australians. Race is not a neatly defined biological category. Many social scientists see race primarily as a social construction without well-defined physical boundaries, as each race blends seamlessly into the race of its geographical neighbors. With increasingly mixed ancestries, more and more people defy neat racial categorization and self-identify as multiracial.

The intelligence test performance of today's better-fed, better-educated, and more test-prepared population exceeds that of the 1930s population—by a greater margin than the average White person's score today exceeds the average Black person's. When Blacks and Whites have or receive the same pertinent knowledge, they exhibit similar information-processing skill. Schools and culture matter.

Countries whose economies create a large wealth gap between rich and poor tend also to have a large rich-versus-poor gap in intelligence test scores. Moreover, educational policies such as kindergarten attendance, school discipline, and instructional time per year predict national differences in intelligence and knowledge tests. Math achievement, aptitude test differences, and especially grades may reflect conscientiousness more than competence. Asian students, who have outperformed North American students on such tests, have also spent 30 percent more time in school and much more time in and out of school studying math. Women in college and university similarly outperform equally able men, thanks partly to their greater conscientiousness.

Consider the striking results of studies that reveal a shrinking White/Black aptitude gap over time and with more equal education. One national study looked back over the mental test performances of White and Black young adults after graduation from college. From eighth grade through the early high school years, the average aptitude score of the White students increased, while that of the Black students decreased—creating a gap that reached its widest point at about the time that high school students like you take college admissions tests. But during college, the Black students’ scores increased “more than four times as much” as those of their White counterparts, thus greatly decreasing the aptitude gap.

The Question of Bias

If one assumes that race is a meaningful concept, the debate over racial differences in intelligence divides into three camps:

  • There are genetically disposed racial differences in intelligence.

  • There are socially influenced racial differences in intelligence.

  • There are racial differences in test scores because the tests are inappropriate or biased.

We have considered group differences from the first and second perspectives. Let's turn now to the third: Are intelligence tests biased? The answer depends on which of two very different definitions of bias we use.

Two Meanings of Bias

The scientific meaning of bias hinges on a test's validity—on whether it predicts future behavior equally well for all groups of test-takers, or only for some. In this statistical meaning of the term, the near-consensus among psychologists has been that the major U.S. aptitude tests are not biased. The tests' predictive validity is roughly the same for women and men, for various races, and for rich and poor. If an intelligence test score of 95 predicts slightly below-average grades, that rough prediction usually applies equally to all.

But we can also consider a test biased if it detects not only innate differences in intelligence but also performance differences caused by cultural experiences. This in fact happened to Eastern European immigrants in the early 1900s. Lacking the experience to answer questions about their new culture, many were classified as "feebleminded." In this popular sense, intelligence tests are biased. They measure your developed abilities, which reflect your education and experiences. You may have read examples of intelligence test items that make culture-specific assumptions. Such items bias the test. In such cases, tests can be a vehicle for discrimination, consigning potentially capable children to unchallenging classes and dead-end jobs.

Defenders of the existing aptitude tests have noted that racial group differences persist on nonverbal items, such as counting digits backward. Moreover, blaming the test for a group's lower scores is like blaming a messenger for bad news. If unequal past experiences predict unequal future achievements, a valid aptitude test will detect such inequalities. Still, expectations and attitudes can influence perceptions and behaviors, and can thereby introduce bias into an intelligence test. This is true for the person taking the test as well.

Test-Takers’ Expectations

When Steven Spencer and his colleagues gave a difficult math test to equally capable men and women, women did not do as well—except when they had been led to expect that women usually do as well as men on the test. Expecting to struggle with the math affected their performance. There was a “threat in the air.” Spencer observed this self-fulfilling stereotype threat when Black students performed worse after being reminded of their race just before taking verbal aptitude tests. Follow-up experiments have confirmed that negatively stereotyped minorities and women may have unrealized academic and professional potential. If, when taking an intelligence test or performing a work-related task, you are worried that your group or “type” often doesn’t do well, your self-doubts and self-monitoring may hijack your working memory and impair your performance. For such reasons, stereotype threat may impair attention, performance, and learning. Remove the threat and stereotyped minorities often perform better.

Stereotype threat helps explain why Blacks have scored higher when tested by Blacks than when tested by Whites. It implies a possible effect of non-Black teachers having lower expectations for Black students than do Black teachers. And it gives us insight into why women have scored higher on math tests with no male test-takers present, and why women’s online chess performance drops sharply when they think they are playing a male opponent.

Steele concludes that telling students they probably won't succeed functions as a stereotype that can erode performance. Over time, such students may detach their self-esteem from academics and look for recognition elsewhere. Indeed, as African-American male students progress from eighth to twelfth grade, a growing disconnect appears between their grades and their self-esteem, and they tend to underachieve. When challenged to believe in their potential, increase their sense of belonging, or focus on intelligence as malleable, disadvantaged university students have likewise earned markedly higher grades and have had lower dropout rates.

What, then, can we realistically conclude about aptitude tests and bias? The tests are not biased in the scientific sense of failing to make valid statistical predictions for different groups. But they are indeed biased in one sense—sensitivity to performance differences caused by cultural experience. Are the tests discriminatory? Again, the answer can be Yes or No. In one sense, Yes, their purpose is to discriminate. In another sense, No, their purpose is to reduce discrimination by decreasing reliance on subjective criteria for school and job placement. Civil service aptitude tests, for example, were devised to discriminate more fairly and objectively by reducing the political, racial, ethnic, and gender discrimination that preceded their use. Banning aptitude tests would lead those who decide on jobs and admissions to rely more on other considerations, such as personal opinion.

Our rationality, practical intelligence, and emotional intelligence matter, too, as do other forms of creativity, talent, and character. The point to remember: There are many ways of being successful; our differences are variations of human adaptability. Life’s great achievements result not only from “can do” abilities but also from “will do” motivation. Competence + Diligence → Accomplishment.