
CHAPTER 4
Development of the Brain
Neil V. Watson, Simon Fraser University
S. Marc Breedlove, Michigan State University

Overcoming Blindness

As a 3-year-old, Michael May was injured by a chemical explosion that destroyed his left eye and damaged the surface of his right eye so badly that he was blind. He could tell whether it was day or night, but otherwise he couldn't see anything. An early attempt to restore his sight with corneal transplants failed, but Michael seemed undaunted. He learned to play Ping-Pong using his hearing alone (but only on the table at his parents' house, where he learned to interpret the sound cues). Michael also enjoyed riding a bicycle, but his parents made him stop after he crashed first his brother's bike and then his sister's. As an adult, Michael became a champion skier, marrying his instructor and raising two sons. He also started his own company, making equipment to help blind people navigate on their own. Then, when Michael was 46, technical advances made it possible to restore vision in his right eye. As soon as the bandages were removed, he could see his wife's blue eyes and blond hair. But he could not recognize her face unless she spoke to him, or recognize three-dimensional objects like cubes or spheres unless they were moving. Michael could still ski, but he found that he had to close his eyes to avoid falling over. On the slopes, seeing was more distracting than helpful. The doctors could tell that images were focusing properly on Michael's retina, so why was his vision so poor?

Picture, if you can, the number of neurons in the mature human brain—more than 80 billion (von Bartheld, 2018). There are many types of neurons, each forming a vast array of hundreds or thousands of connections. Hundreds of trillions of synapses are formed to guide our thoughts and behavior! Yet each of us began as a single microscopic cell, the fertilized egg. How can one cell divide and grow to form one of the most complicated machines on Earth, perhaps in the universe? No one can answer that question in detail, but in this chapter we'll learn the basic principles at work in the extraordinary developing brain.

4.1 Growth and Development of the Brain Are Orderly Processes

The Road Ahead
In the first part of the chapter, we describe brains in terms of their progress from a single fertilized egg to a machine containing billions of neurons with an incredible number of connections. Learning this material should allow you to:
4.1.1 List the six stages of cellular processes needed for brain development.
4.1.2 Describe the two phases of brain development characterized by the loss of structures.
4.1.3 Identify the basic mechanism that directs each developing neuron to take on the appropriate structure and function.
4.1.4 Discuss the significance of generating new neurons in adulthood.

Within 12 hours after a human egg is fertilized, the single cell begins dividing, forming a small mass of homogeneous cells, like a cluster of grapes, that is a mere 200 micrometers in diameter. Within a week, the emerging human embryo shows three distinct cell layers (FIGURE 4.1A)—the beginnings of all the tissues of the body. The nervous system develops from the outermost layer, called the ectoderm (from the Greek ektos, "out," and derma, "skin"). As the cell layers thicken, a groove forms along the midline and deepens to become the neural groove. At the head end of this groove, a thickened collection of cells forms (FIGURE 4.1B).

FIGURE 4.1 Development of the Nervous System in the Human Embryo and Fetus
The sides of the neural groove come together to form the neural tube (FIGURE 4.1C), the beginning of the central nervous system (CNS). Cells in the neural crest, a column of ectodermal cells settling on either side of the neural tube, will migrate to form the peripheral nervous system. At the anterior part of the neural tube, three subdivisions become apparent, corresponding to the future forebrain (cortical regions, thalamus, and hypothalamus), midbrain, and hindbrain (cerebellum, pons, and medulla) (FIGURE 4.1D and E), which were discussed in Chapter 1. The interior of the neural tube becomes the fluid-filled cerebral ventricles of the brain, the central canal of the spinal cord, and the passages that connect them. By the end of the eighth week, the human embryo shows the rudimentary beginnings of most body organs. The rapid development of the brain is reflected in the fact that by this time the head is half the total size of the embryo! The developing human is called an embryo during the first 10 weeks after fertilization and a fetus thereafter.

Development of the nervous system can be divided into six distinct stages

From a cellular viewpoint it is useful to consider brain development as a sequence of six distinct stages. The six stages proceed at different rates and times in different parts of the nervous system. Some of the stages may overlap even within a region:
1. Neurogenesis, the mitotic division of nonneuronal cells to produce neurons
2. Cell migration, the massive movements of nerve cells or their precursors to establish distinct nerve cell populations (nuclei in the CNS, layers of the cerebral cortex, and so on)
3. Cell differentiation, the refining of cells into distinctive types of neurons or glial cells
4. Synaptogenesis, the establishment of synaptic connections as axons and dendrites grow
5. Neuronal cell death, the selective death of many nerve cells that begins well before birth
6. Synapse rearrangement, the loss of some synapses and the development of others, to refine synaptic connections, which extends throughout our lifespan

Humans are unique among primates in showing dramatic brain growth after birth, as illustrated by FIGURE 4.2.

FIGURE 4.2 Fetal-like Rapid Development of the Brain outside the Womb

Cell proliferation produces cells that become neurons or glia

The production of neurons is called neurogenesis. Neurons themselves do not divide, but the cells that will give rise to neurons begin as a single layer of cells along the inner surface of the neural tube. These cells divide in a process called mitosis, which takes place within the ventricular zone inside the neural tube (FIGURE 4.3A). Eventually, some cells leave the ventricular zone and begin transforming into either neurons or glial cells. As the nervous system grows, cell migration follows, as the cells move over relatively long distances to fill out the brain (FIGURE 4.3B).

FIGURE 4.3 The Six Stages of Neural Development

Newly arrived cells in the brain bear no more resemblance to mature nerve cells than they do to the cells of other organs. Once the cells reach their destinations, however, they alter their gene expression, now using those particular subsets of genes that are needed to produce proteins that will turn them into neurons.
This process of cell differentiation enables cells to acquire the distinctive appearance and functions of neurons characteristic of their particular regions (FIGURE 4.3C). Once they take on the characteristics of neurons, they begin making synaptic connections with one another, in the process of synaptogenesis (FIGURE 4.3D).

The particular fate of a differentiating cell depends on where that cell happens to be and what the cell's neighbors are doing. Cells in the developing brain are constantly sending chemical signals to one another, each shaping gene expression in their neighbors, thereby controlling their development. This is the hallmark of vertebrate development: cells sort themselves out via cell-cell interactions, taking on fates that are appropriate in the context of what neighboring cells are doing. When the negotiations are all over, if things go properly, a new person is formed with all the types of cells in the brain that they need to live.

This system of cell-cell interactions determining how brain cells develop has an important consequence: If cells that have not yet differentiated extensively can be obtained and placed in a particular brain region, they may differentiate in an appropriate way and become properly integrated. Such undifferentiated cells, called stem cells, are present throughout embryonic tissues, so they can be gathered from umbilical cord blood, miscarried embryos, or unused embryos produced during in vitro fertilization. It may even be possible someday to take cells from adult tissue and, by treating them with various factors in a dish, transform them into stem cells (Cyranoski, 2018). It is hoped that placing stem cells in areas of brain degeneration, such as loss of myelin in multiple sclerosis (see Chapter 2) or loss of dopaminergic neurons in Parkinson's disease (see Chapter 5), might reverse such degeneration as the implanted cells differentiate to fill in for the missing components (Y. Cha et al., 2023).

In the adult brain, newly born neurons aid learning

At birth, mammals have already produced most of the neurons they will ever have. The postnatal increase of human brain weight (see Figure 4.2) is due primarily to growth in the size of neurons, branching of dendrites, elaboration of synapses, increase in myelin, and addition of glial cells. But research has shown that we are also capable of adult neurogenesis, the generation of new neurons in adulthood, especially in the dentate gyrus of the hippocampal formation (FIGURE 4.4) (Anacker et al., 2018). By one estimate, 700 new neurons are produced every day in the adult human hippocampus (Spalding et al., 2013).

FIGURE 4.4 Neurogenesis in the Dentate Gyrus

Indeed, although the new neurons acquired in adulthood represent just a tiny minority of the total, there's reason to think they matter (Snyder, 2019). In experimental animals, the birth and/or survival of new neurons is enhanced by factors like exercise, environmental enrichment, and training (Cope and Gould, 2019). Neurogenesis appears to enhance various forms of hippocampus-dependent learning, such as spatial memory and fear conditioning, in some (but not all) studies (Hernández-Mercado and Zepeda, 2022). Mice with a genetic manipulation that turns off neurogenesis in the brains of adults showed a marked impairment in spatial learning with little effect on other behaviors (C. L. Zhang et al., 2008). So by studying this chapter, you may be giving your brain a few more neurons to use on exam day! (You're welcome.)
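How tiny a minority are these adult-born cells? A back-of-envelope calculation (ours, not the chapter's) puts the ~700-per-day estimate in perspective. The granule cell count used below is a commonly cited ballpark for the human dentate gyrus, not a figure from this book, so treat the result as an order-of-magnitude sketch.

```python
# Back-of-envelope: how big a "tiny minority" is adult neurogenesis?
# The ~700/day figure is from Spalding et al. (2013); the granule cell
# total (~30 million across both dentate gyri) is an assumed ballpark.

new_neurons_per_day = 700
granule_cells_total = 30_000_000  # assumption, both hippocampi combined

per_year = new_neurons_per_day * 365
turnover = per_year / granule_cells_total

print(f"~{per_year:,} new granule cells per year")     # ~255,500
print(f"~{turnover:.1%} of the population per year")   # on the order of 1%
```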
Physical exercise also boosts neurogenesis in rats—an effect that can be blocked by stressors such as social isolation (Lauretta et al., 2022)—so invest in exercise and a network of friends too.

The death of many neurons is a normal part of development

As strange as it may seem, cell death is a crucial phase of brain development (FIGURE 4.3E). This developmental stage is not unique to the nervous system. Naturally occurring cell death, also called apoptosis (from the Greek apo, "away from," and ptosis, "act of falling"), is evident as a kind of sculpting process in the emergence of other tissues in both animals and plants.

The number of neurons that die during early development is quite large. In some regions of the brain and spinal cord, most of the young nerve cells die during prenatal development. Viktor Hamburger (1900–2001) first described naturally occurring neuronal cell death in chicks, in which nearly half the originally produced spinal motor neurons die before the chick hatches. A similar loss of spinal motor neurons was later reported in developing humans (FIGURE 4.5).

FIGURE 4.5 Many Neurons Die during Normal Early Development

These cells are not dying because of a defect. Rather, they die as a consequence of complex interactions with surrounding cells, so they are actively "committing suicide." Your chromosomes carry death genes—genes that are expressed only when a cell undergoes apoptosis (Montero et al., 2022). Genetically interfering with death genes in fetal mice causes the mice to grow brains that are too large to fit in the skull (W. R. Kim et al., 2007), so we can see how vital it is that some cells die.

Neurons compete for connections to target structures (other nerve cells or end organs, such as muscle). Cells that make adequate synapses remain; those without a place to form synaptic connections die. Apparently the cells compete not just for synaptic sites, but for a chemical that the target structure makes and releases. Neurons that receive enough of the chemical survive; those that do not, die. Such target-derived chemicals are called neurotrophic factors (or simply trophic factors) because they act as if they "feed" the neurons to help them survive (in Greek, trophe means "nourishment"). That same competition for neurotrophic factors probably also controls the next phase of development, when many of the synapses formed are retracted, as we'll see next (FIGURE 4.6).

FIGURE 4.6 A Model for the Action of Neurotrophic Factors

How's It Going?
1. What six stages of cellular processes take place in the developing brain?
2. What is cell differentiation, and what guides this process in each cell in the developing brain?
3. What two classes of brain structures undergo loss during development?
4. What are neurotrophic factors, and what role do they play in brain development?

Food for Thought
The idea that many neurons normally die in development surprised many neuroscientists. Why might natural selection favor such a seemingly wasteful process?

4.2 An Explosion of Synapse Formation Is Followed by Synapse Rearrangement

The Road Ahead
Now we describe the lifelong process by which synapses are lost and gained in the brain. Learning this material should allow you to:
4.2.1 Describe the process of synapse rearrangement, and offer evidence that a net loss of synapses may be adaptive.
4.2.2 Describe studies showing that visual experience early in life is required to develop normal vision.
Before birth and after, neurons in the human cortex grow ever longer and more elaborate dendrites, each jammed with synapses. As we noted earlier, this massive increase in dendrites and synapses is responsible for most of the increase in brain size after birth (FIGURE 4.7). But just as not all the neurons produced by a developing individual are kept into adulthood, some of the synapses formed early in development are later retracted. Some original synapses are lost, and many, many new synapses are formed (FIGURE 4.3F). This synapse rearrangement, or synaptic remodeling, typically takes place after the period of cell death.

FIGURE 4.7 Cerebral Cortex Tissue in the Early Development of Humans

For example, as we learned already, about half of the spinal motor neurons that initially form later die (see Figure 4.5). By the end of the cell death period, each surviving motor neuron innervates many muscle fibers, and every muscle fiber is innervated by several motor neurons. But later the surviving motor neurons retract many of their axon collaterals, until each muscle fiber comes to be innervated by only one motor neuron. Again, which synaptic connections are retained, and which new connections are formed, is thought to depend on competition for trophic factors during development (see Figure 4.6) and/or competition between synapses that we will discuss shortly. Similar events have been documented in several neural regions, including the cerebellum, the brainstem, the visual cortex, and the autonomic nervous system.

In human cerebral cortex there is a net loss of synapses from late childhood until mid-adolescence (FIGURE 4.8). Further synaptic remodeling through puberty is evident in thinning of the cortical gray matter as pruning of dendrites and axon terminals progresses, which is not complete until 20 years of age or so. The thinning process continues in a caudal–rostral (posterior–anterior) direction during maturation (FIGURE 4.9), so prefrontal cortex matures last (Walhovd et al., 2017). Since prefrontal cortex is important for inhibiting behavior (see Chapter 14), this delayed brain maturation may contribute to teenagers' impulsivity and relative lack of control (Albaugh et al., 2021). In Chapter 8 we'll discuss brain development in response to hormones at puberty.

FIGURE 4.8 The Postnatal Development of Synapses in Human Cortex

FIGURE 4.9 Synapse Rearrangement in the Developing Human Brain

What determines which synapses are kept and which are lost? Although we don't know all the factors, one important influence is neural activity. One theory is that active synapses take up some neurotrophic factor that maintains the synapse, while inactive synapses get too little trophic factor to remain stable (see Figure 4.6). Intellectual stimulation probably contributes, as suggested by the fact that teenagers with the highest IQ show an especially prolonged period of cortical thinning (J. E. Schmitt et al., 2019).
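The competition just described is easy to caricature in code. The sketch below is our illustration of the idea in Figure 4.6, not a model from the chapter; every name and parameter is a hypothetical choice. Synapses draw on a fixed pool of trophic factor in proportion to their activity, and any synapse whose running supply falls below a threshold is retracted, leaving more factor for the survivors.

```python
import random

# Toy simulation of synapse competition for a limited neurotrophic factor.
# All parameters are assumptions for illustration only.

N_SYNAPSES = 10
FACTOR_POOL = 2.0   # trophic factor released by the target per round (assumed)
THRESHOLD = 0.5     # running supply below which a synapse retracts (assumed)

random.seed(1)
activity = [random.random() for _ in range(N_SYNAPSES)]  # firing propensity
supply = [1.0] * N_SYNAPSES
alive = [True] * N_SYNAPSES

for _ in range(50):
    total_activity = sum(a for a, ok in zip(activity, alive) if ok)
    if total_activity == 0:
        break
    for i in range(N_SYNAPSES):
        if alive[i]:
            # More-active synapses take up a larger share of the fixed pool.
            uptake = FACTOR_POOL * activity[i] / total_activity
            supply[i] = 0.9 * supply[i] + uptake  # leaky accumulation
            alive[i] = supply[i] >= THRESHOLD     # starved synapses retract

print(f"{sum(alive)} of {N_SYNAPSES} synapses survive")
```

Running this repeatedly shows the "rich get richer" dynamic: once a weakly active synapse retracts, its share of the pool is redistributed to the remaining, more active inputs.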
Retaining too many synapses can impair intellectual development

One syndrome indicates that the loss of synapses actually helps the brain function better. The most frequent inherited cause of intellectual disability, a significant limitation in intellectual functioning and adaptive behavior, is fragile X syndrome (FIGURE 4.10). Researchers have found that some people are born with an X chromosome that is prone to breaking because the DNA at one site is unstable (Berry-Kravis, 2022). People with this condition often have a modified facial appearance, including elongation of the face, large prominent ears, and a prominent chin. A wide range of cognitive effects—from mild to severe impairment—is associated with the syndrome. Cortical neurons from the brains of people with fragile X syndrome, as well as mice genetically engineered to have this syndrome, possess an excess of small, immature dendritic spines (Bagni and Greenough, 2005). These findings suggest that the syndrome affects mental development by blocking the normal elimination of synapses after birth (see Figure 4.9).

FIGURE 4.10 Too Many Synapses? A man with fragile X syndrome.

We'll run into lifelong synaptic rearrangement repeatedly in this book. In Chapter 5 we'll see that regions of cerebral cortex that process touch information undergo synapse rearrangement throughout life, and in Chapter 13 we'll find that learning and memory happen through the strengthening and weakening of existing synapses or through synapse rearrangement. Next we'll consider another example of synaptic rearrangement, in the visual system, where experience is crucial for proper development.

Visual deprivation can lead to blindness

Some people do not see forms clearly with one of their eyes, even though the eye is intact and a sharp image is focused on the retina. Such impairments of vision are known as amblyopia (from the Greek amblys, "dull," and ops, "eye"). Some people with this disorder have an eye that is turned inward (cross-eyed) or outward. Children born with such a misalignment see a double image rather than a single fused image. By the time an untreated person reaches the age of 7 or 8, pattern vision in the deviated eye is almost completely suppressed. If the eyes are realigned during childhood, the person learns to fuse the two images and has good depth perception. But if realignment is done in adulthood, it's too late to restore acute vision to the turned eye.

Much of what we know about the causes of amblyopia comes from visual deprivation experiments with lab animals, in which the eyelids are reversibly sutured shut or the animal is fitted with frosted contact lenses, thereby preventing focused images on the retina. These experiments have revealed startling changes related to disuse of the visual system in early life. Binocular deprivation, depriving animals of sight in both eyes, produces structural changes in visual cortical neurons: a loss of dendritic spines and a reduction in synapses. If such deprivation is maintained for several weeks during development, when the animal's eyes are opened, it will be blind. Although light enters its eyes, and the cells of the eyes send messages to the brain, the brain seems to ignore the messages, and the animal is unable to detect visual stimuli. If the deprivation lasts long enough, the animal is never able to recover eyesight. Thus, early visual experience is crucial for the proper development of vision, and there is a sensitive period during which these manipulations of experience can exert long-lasting effects on the system (FIGURE 4.11). These effects are most extensive during the early period of synaptic development in the visual cortex. After the sensitive period, the manipulations have little or no effect.

FIGURE 4.11 Brain Development in the Visual Cortex of Cats
Depriving only one eye of light during the developmental sensitive period—monocular deprivation—produces profound structural and functional changes in the thalamus and visual cortex and permanently impairs vision in the deprived eye. The effect of visual deprivation can be illustrated graphically by an ocular dominance histogram, which portrays the strength of response of brain neurons to stimuli presented to either the left or the right eye. Normally, most cortical neurons are excited equally by light presented to either eye (FIGURE 4.12A). Keeping one eye closed or covered in development results in a striking shift from the normal graph; most cortical neurons now respond only to input from the nondeprived eye (FIGURE 4.12B). In cats the critical period for this effect is the first 4 months of life. In rhesus monkeys the sensitive period extends to age 6 months. After these ages, visual deprivation has little effect.

FIGURE 4.12 Ocular Dominance Histograms

During early development, synapses are rearranged in the visual cortex, and axons representing input from each eye "compete" for synaptic places. Active, effective synapses predominate over inactive synapses. Thus, if one eye is "silenced," synapses carrying information from that eye are retracted while synapses driven by the other eye are maintained. Donald O. Hebb (1949) proposed that effective synapses (those that successfully drive the postsynaptic cell) might grow stronger at the expense of ineffective synapses. Thus, synapses that grow stronger or weaker depending on their effectiveness in driving their target cell are known as Hebbian synapses (FIGURE 4.12D). In Chapter 13 we will learn that a particular neurotransmitter receptor, the NMDA receptor, causes synapses to act like Hebbian synapses and likely plays a role in learning and memory (see Figure 13.22).

Researchers offer a similar explanation for amblyopia produced by misalignment of the eyes. This human condition can be produced by surgically causing the eyes to diverge in kittens. The ocular dominance histogram of these animals reveals that the normal binocular sensitivity of visual cortical cells is greatly reduced (FIGURE 4.12C). A much larger proportion of visual cortical cells is excited exclusively by either the right or the left eye in these animals than in control animals. The reason for this effect is that after surgery, visual stimuli falling on the misaligned eyes no longer provide simultaneous, convergent input to the cells of the visual cortex.

Neurotrophic factors may be playing a role in experience-driven synapse rearrangement. For example, if the postsynaptic cells are making a limited supply of a neurotrophic factor, and if active synapses take up more of the factor than inactive synapses do, then perhaps the inactive axons retract for lack of neurotrophic factor. Brain-derived neurotrophic factor (BDNF) has been implicated as a neurotrophic factor being competed for in the mammalian visual cortex (Sansevero et al., 2019). So perhaps ineffective synapses wither for lack of neurotrophic support.
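Hebb's rule is often summarized as "cells that fire together wire together," and a minimal simulation shows how it can produce the ocular dominance shift of Figure 4.12B. The sketch below is our illustration, not a model from the chapter; the learning rule, the normalization step, and all parameter values are assumptions.

```python
# Toy Hebbian model of ocular dominance: one cortical cell receives one
# synapse from each eye. A synapse strengthens when presynaptic activity
# coincides with postsynaptic firing, and the two weights are normalized
# so they compete for a fixed total strength (a crude stand-in for a
# limited pool of trophic support).

w = {"left": 0.5, "right": 0.5}
LEARNING_RATE = 0.01  # assumed

def trial(left_active: bool, right_active: bool) -> None:
    pre = {"left": float(left_active), "right": float(right_active)}
    post = w["left"] * pre["left"] + w["right"] * pre["right"]
    for eye in w:
        w[eye] += LEARNING_RATE * pre[eye] * post  # Hebb: fire together, wire together
    total = w["left"] + w["right"]
    for eye in w:
        w[eye] /= total                            # competitive normalization

# Monocular deprivation: only the right eye ever drives the cell.
for _ in range(1000):
    trial(left_active=False, right_active=True)

print(w)  # right-eye weight approaches 1; the "silenced" eye loses out
```

Because the update is multiplied by presynaptic activity, the deprived eye's synapse never strengthens, and normalization steadily hands its share of the total weight to the open eye, just as in the deprived kitten's cortex.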
Early exposure to visual patterns helps fine-tune connections in the visual system

Human disorders have also proven that early experience is crucial for vision. Babies born with cataracts (cloudy lenses) in industrialized countries usually have them removed a few months after birth and will have good vision. But if a child grows up with the cataracts in place, removing them in adulthood is much less effective; the adults acquire the use of vision slowly (Ostrovsky et al., 2009) and to only a limited extent. Early visual experience is known to be especially crucial for learning to perceive faces, because infants with cataracts that occlude vision for just the first 6 months of life are impaired at recognizing faces even 9 years later (Le Grand et al., 2001). These experience-dependent effects are probably mediated by synapse rearrangement within the visual cortex like that seen in kittens.

Why does Michael May, whom we met at the start of the chapter, have such poor vision despite the clear images entering his eye? When similar injuries happen to adults, the surgery to let light back into the eye restores normal vision. But, like a kitten growing up with opaque contact lenses, Michael was deprived of form vision, beginning while he was a child and lasting for 40 years. So, synaptic connections within his developing visual cortex were not strengthened by the patterns of light moving across the retina, and in the absence of patterned stimulation, synapses between the eye and the brain apparently languished and disappeared.

In one sense, Michael was lucky that his blindness came as late as it did. He had normal form vision for the first 3½ years of his life, and that stimulation may have been sufficient to maintain some synapses that would otherwise have been lost. These residual synapses are probably what allow him to make any sense whatsoever of his vision. Yet, even after more than a decade of visual experience as an adult, Michael still had problems distinguishing three-dimensional objects or faces (FIGURE 4.13) (Huber et al., 2015). Other people who gain vision for the first time as adults have similar difficulties recognizing objects and faces (Sikl et al., 2013).

FIGURE 4.13 What Do I See?

Learning to See: Michael May finds it particularly difficult to recognize faces by sight alone.

How's It Going?
1. What is the evidence that early visual experience is important for being able to see?
2. What's an ocular dominance histogram, and what manipulation(s) can alter it in cats?
3. What is a Hebbian synapse, and how might such synapses affect development of the visual system?

Food for Thought
Why might natural selection favor sensitive periods for synapse rearrangement, leaving such rearrangement much less extensive in adults?

4.3 Experience Can Affect Brain Development by Altering Gene Expression

The Road Ahead
In this section we discuss how experience can affect the way genes work. Learning this material should allow you to:
4.3.1 Describe how the environment can regulate gene expression and how experience and genes interact to affect behavior.
4.3.2 Define epigenetic changes, and explain how they might play a role in the interaction of genes and the environment.
4.3.3 Describe the mechanism by which rodent dams can alter their offspring's lifelong stress response.

Many factors shape the form, arrangements, and connections of the developing brain. One influence is genes, which direct the production of every protein the cell can make. An individual who has inherited an altered gene will make an altered protein, which will affect every cell that uses that protein. Thus, every neuronal structure, and therefore every behavior, can be altered by changes in the relevant gene(s). But we'll find that genes do not work by themselves.
Genotype is fixed at birth, but phenotype changes throughout life

Two terms help illustrate the interaction of genes with the environment. The sum of all the intrinsic, genetic information that an individual has is its genotype. The sum of all the anatomical, physiological, and behavioral characteristics that make up an individual is its phenotype. Your genotype was determined at the moment of fertilization and remains the same throughout your life. But your phenotype changes constantly, as you grow up and grow old and even, in a tiny way, as you take each breath. In other words, phenotype is determined by the interaction of genotype and extrinsic factors, including experience. Thus, as we'll see, individuals who have identical genotypes do not have identical phenotypes, because they have not received identical experiences. And since their nervous system phenotypes are somewhat different, they do not behave exactly the same.

Hundreds of different genetic disorders affect the metabolism of proteins, carbohydrates, or lipids, having a profound impact on the developing brain. The genetic defect usually results in the absence of a particular enzyme that controls a critical biochemical step in the synthesis or breakdown of a vital body product. An example is phenylketonuria (PKU), a heritable disorder of protein metabolism that at one time resulted in many people with intellectual disability. About 1 out of 100 persons is a carrier; 1 in 10,000 births produces an affected infant. The basic defect is the absence of an enzyme necessary to metabolize phenylalanine, an amino acid that is present in many foods. As a result, the brain is damaged by an enormous buildup of phenylalanine, which becomes toxic. The discovery of PKU marked the first time that an inherited error of metabolism was associated with intellectual disability.

These days, the level of phenylalanine in the blood is measured in children a few days after birth. Early detection is important because brain impairment can be prevented simply by reducing phenylalanine in the diet. Such dietary control of PKU is critical during the early years of life (L. Bernstein et al., 2017). Note this important example of the interaction of genes and the environment in PKU: the dysfunctional gene causes intellectual disability only in the presence of dietary phenylalanine. Reducing phenylalanine consumption reduces or prevents this effect of the gene.

PKU illustrates one reason why, despite the importance of genes for nervous system development, understanding the genotype alone could never enable an understanding of the developing brain. Knowing that a baby is born with PKU doesn't tell you anything about how that child's brain will develop unless you also know something about the child's diet. Another reason why genes alone cannot tell the whole story is that experience can affect the activity of genes, as we discuss next.

Be Careful What You Eat: Millie Lonergan, who has phenylketonuria, eats fruit and protein-free rice and pasta (without cheese) for a diet low in phenylalanine.

Experience regulates gene expression in the developing and mature brain

Genetically identical animals, called clones, used to be known mainly in science fiction and horror films. But life imitates art. In pigs, genetically identical clones show as much variation in behavior and temperament as do normal siblings (G. S.
Archer et al., 2003), and researchers know that genetically identical mice raised in different laboratories, or in varying conditions within the same laboratory, can behave very differently on a variety of tests. If genes are so important to the developing nervous system, how can genetically identical individuals differ in their behavior?

Recall that although nearly all of the cells in your body have a complete copy of your genotype, each cell uses only a small subset of those genes at any one time. We mentioned earlier that when a cell uses a particular gene to make a particular protein, we say the cell has expressed that gene. Epigenetics is the study of factors that affect gene expression without making any changes in the nucleotide sequence of the genes. The same protein is produced, but the amount of protein can vary considerably, which leads to variation in brain development. We'll consider two factors that may affect gene expression next.

RESEARCHERS AT WORK
Maternal Care Affects Mouse Behaviors

FIGURE 4.14 Epigenetic Effects on Mouse Behavior

Gene expression in the brain can be affected by mothering

One particular influence of mothering on gene expression has been well documented. Methylation is a chemical modification of DNA that does not affect the nucleotide sequence of a gene but makes that gene less likely to be expressed. Rodent pups provided with inattentive mothers, or subjected to interruptions in maternal care, secrete more glucocorticoids in response to stress as adults (T. Y. Zhang and Meaney, 2010). Poor maternal care produces this heightened stress hormone response by inducing methylation of the glucocorticoid receptor gene in the brain, making the pups hyperresponsive to stress for the rest of their lives (FIGURE 4.15).

FIGURE 4.15 Early Experience Imprints Genes to Affect the Stress Response in Adulthood

A similar mechanism may apply to humans, because this same gene is also more likely to be methylated in the postmortem brains of suicide victims than of controls, but only in those victims who were also abused as children. Suicide victims who did not suffer childhood abuse were no more likely to have the gene methylated than were controls (McGowan et al., 2009). These results suggest that methylation of the gene in abused children may make them hyperresponsive to stress as adults—a condition that may lead them to take their own lives. This is a powerful demonstration of epigenetic influences on behavior.

Taken together, these studies lead to the conclusion that the incredible capabilities of the adult brain are due not only to the inheritance of genes provided us by natural selection, but also to the effect of the environment and experience that determines where and when those genes are expressed in the brain, especially in development. Thus, the tremendous, fetal-like development of the human brain after birth (see Figure 4.2) is molded by experience and social guidance.

We wish we could tell you that once your brain has been sharpened by experience (including what you gain by reading this book), you will remain brilliant forever. Sadly, development continues relentlessly toward old age. Just as our faces and bodies weaken and fade, the brain also declines, the depressing topic that concludes this chapter.

How's It Going?
1. Compare changes in genotype and phenotype in an individual during development and aging.
2. How does PKU illustrate an interaction between genes and the environment?
3.
Describe two demonstrations of epigenetic effects on development.

Food for Thought
Would pharmacological tools to methylate particular genes in particular tissues be clinically useful?

4.4 The Brain Continues to Change as We Grow Older

The Road Ahead
In the final part of this chapter, we consider the aging brain. Learning this material should allow you to:
4.4.1 Describe the current model of processes underlying the degeneration seen in Alzheimer's disease.
4.4.2 Critique the amyloid hypothesis of Alzheimer's.
4.4.3 Discuss how brain imaging may help us monitor the progression of Alzheimer's.

The passage of time brings us an accumulation of joys and sorrows—perhaps riches and fame—and a progressive decline in many of our abilities. Although slower responses seem inevitable with aging, many of our cognitive abilities show little change during the adult years, until we reach an advanced age. What happens to brain structure from adolescence to the day when we all become a little forgetful and walk more hesitantly?

Memory impairment correlates with hippocampal shrinkage during aging

Starting at about 40 years of age, human brain volume steadily declines, alas. The extent of brain shrinkage varies considerably across people as they age (FIGURE 4.16A), which may explain individual differences in behavioral effects of aging. Numerous studies using brain scans in aging people have asked if shrinkage of any particular brain regions correlates with memory decline. When effects of sex, age, IQ, and overall brain atrophy are eliminated statistically, only the volume of the hippocampal formation, known to be important for memory (see Chapter 13), consistently correlates with memory loss (Persson and Andersson, 2022). The loss in hippocampal volume typically begins after age 60 (FIGURE 4.16B), although again there is a lot of variability across people.

FIGURE 4.16 Well, That's Kinda Depressing!

PET scans of elderly people reveal that cerebral metabolism normally remains almost constant as we age. This stability is in marked contrast to the dramatic decline of brain activity in Alzheimer's disease (see Figure 1.20F), which we will consider next.

Alzheimer's disease is associated with a decline in cerebral metabolism

The population of elderly people in the United States is increasing dramatically. Most people reaching an advanced age lead happy, productive lives, although at a slower pace than they did in their earlier years. In a growing number of elderly people, however, age has brought a particular agony: the disorder called Alzheimer's disease, named after Alois Alzheimer (1864–1915), the neurologist who first described a type of dementia (drastic failure of cognitive ability, including memory failure and loss of orientation). Nearly 6 million Americans suffer from Alzheimer's disease, and the progressive aging of our population means that these ranks will continue to swell (Alzheimer's Association, 2019). This disorder is found worldwide with almost no geographic differences.

The frequency of Alzheimer's increases with aging up to age 85–90, but people who reach that age without symptoms become increasingly less likely ever to develop them (Y. Zhao et al., 2018). This last finding indicates that Alzheimer's is in fact a disease, and not simply the result of wear and tear in the brain. The fact that remaining physically and mentally active reduces the risk of developing Alzheimer's disease (Gallagher et al., 2019) also refutes the notion that brains simply "wear out" with age.
Extensive use of the brain makes Alzheimer's less likely. There is growing evidence that inflammation—swelling and fluid accumulation—also plays a role in Alzheimer's (Naik and Fuchs, 2022).

Alzheimer's disease begins as a loss of memory of recent events. Eventually this memory impairment becomes all-encompassing, so extensive that people with Alzheimer's cannot maintain any form of conversation, because both context and prior information are rapidly lost. They cannot answer simple questions such as "What year is it?" or "Where are you now?" Cognitive decline is progressive and relentless. In time, people with Alzheimer's become disoriented and easily lost even in familiar surroundings.

Observations of the brains of people with Alzheimer's reveal striking cortical atrophy (shrinkage), especially in the frontal, temporal, and parietal areas. PET scans show marked reduction of metabolism in posterior parietal cortex and some portions of the temporal lobe (see Figure 1.20F) (Teipel et al., 2016). The brains of individuals with Alzheimer's also reveal progressive changes at the cellular level (FIGURE 4.17): Strange patches termed amyloid plaques appear in cortex, the hippocampus, and associated limbic system sites. The plaques are formed by the buildup of a substance called beta-amyloid (Selkoe and Hardy, 2016), which is how amyloid plaques got their name. Some cells show abnormalities called neurofibrillary tangles, which are abnormal whorls of neurofilaments that form a tangled array inside the cell. The number of neurofibrillary tangles is directly related to the magnitude of cognitive impairment, and they are probably a secondary response to amyloid plaques. People with Alzheimer's gradually lose many neurons in the basal forebrain, which make the transmitter acetylcholine (ACh). Drugs that boost ACh signaling may reduce some of the symptoms of Alzheimer's for a time.

FIGURE 4.17 People with Alzheimer's Show Structural Changes in the Brain

The four genes that have been shown to increase the risk of Alzheimer's are all associated with beta-amyloid processing (Riek and Eisenberg, 2016), further implicating the substance in the disease. Perhaps measuring beta-amyloid will help future Alzheimer's research, as Signs & Symptoms discusses next.

SIGNS & SYMPTOMS
Imaging Alzheimer's Plaques

At present, the only surefire diagnosis for Alzheimer's is postmortem examination of the brain revealing amyloid plaques and neurofibrillary tangles. But one innovative approach is to inject dyes that have an affinity for beta-amyloid or Tau (Sheikh-Bahaei et al., 2018). Then a PET scan determines how much of the dye accumulates in the brain. The brain of virtually every person diagnosed with Alzheimer's accumulates such dyes, as do the brains of many elderly people showing mild cognitive impairment (FIGURE 4.18). Research confirms that levels of Tau indeed predicted who would develop Alzheimer's and correlated with decline in memory over a 2-year period (R. Smith et al., 2023). One important implication of this finding is that now it will be easier to track the effectiveness of various therapies for Alzheimer's.

FIGURE 4.18 Imaging Tau Tangles and Amyloid Plaques in the Brain

Drugs that interfere with enzymes that favor beta-amyloid and Tau production have been developed (Karran and De Strooper, 2022) but have at best modest effects on slowing the disease (Van Dyck et al., 2023).
Thus, there is increasing skepticism about whether beta-amyloid actually causes the symptoms of Alzheimer's, or whether the accumulating amyloid is the result of something the brain is doing to avoid the symptoms of Alzheimer's (Makin, 2018). In that case, interfering with amyloid production would not help, and it might even exacerbate the disease. Supporting that gloomy idea, several manipulations in mice that successfully interfered with amyloid accumulation provided no behavioral benefit.

In the meantime, and in keeping with the repeated theme of this chapter—that genes and experience interact—there is good evidence that physical activity (Ngandu et al., 2015), mental activity (Gates and Sachdev, 2014), and adequate sleep (Gelber et al., 2015) can postpone the appearance of Alzheimer's disease (Dhana et al., 2022). So unless you want to pin your hopes on medical miracles in the future, the best way to avoid suffering from Alzheimer's is to remain physically and mentally active. Perhaps you should consider a career in neuroscience research …

How's It Going?
1. What is Alzheimer's disease, and how is it diagnosed?
2. Why is beta-amyloid a suspected cause of Alzheimer's?
3. Although genes clearly influence the risk of Alzheimer's, what environmental factors can postpone its onset?

Food for Thought
Because Alzheimer's is a very debilitating disease, it seems like evolution should have selected heavily against the disease. Why then are cases becoming more frequent?

RECOMMENDED READING
Barresi, M. J. F., and Gilbert, S. F. (2019). Developmental Biology (12th ed.). Sunderland, MA: Oxford University Press/Sinauer.
Bianchi, L. (2022). The Developing Brain and Its Connections. Boca Raton, FL: CRC Press.
Breedlove, S. M. (2017). Foundations of Neural Development. Sunderland, MA: Oxford University Press/Sinauer.
Hiesinger, P. R. (2021). The Self-Assembling Brain: How Neural Networks Grow Smarter. Princeton, NJ: Princeton University Press.

VISUAL SUMMARY
You should be able to relate each summary to the adjacent illustration, including structures and processes. The online version of this Visual Summary includes links to figures, animations, and activities that will help you consolidate the material.

LIST OF KEY TERMS
adult neurogenesis, Alzheimer's disease, amblyopia, amyloid plaques, beta-amyloid, binocular deprivation, cell-cell interactions, cell death, cell differentiation, cell migration, clones, dementia, ectoderm, embryo, epigenetics, fetus, forebrain, fragile X syndrome, gene expression, genotype, Hebbian synapses, hindbrain, intellectual disability, methylation, midbrain, mitosis, monocular deprivation, neural tube, neurofibrillary tangles, neurogenesis, neurotrophic factors, ocular dominance histogram, phenotype, phenylketonuria (PKU), sensitive period, stem cells, synapse rearrangement, synaptogenesis, ventricular zone

CHAPTER 5
The Sensorimotor System
Neil V. Watson, Simon Fraser University
S. Marc Breedlove, Michigan State University

What's Hot? What's Not?

Ashlyn B. has never felt pain. While most babies cry after the arduous birth process, newborn Ashlyn calmly stared out from her blankets. Later, she developed a terrible diaper rash, and it didn't seem to bother her at all. That seemed strange to her mother, although the doctors dismissed it. But when Ashlyn's teeth came in, she nearly chewed off part of her tongue!
When she reached up to her eye and deeply scratched her cornea, an injury that should have been excruciating, her parents found out only when the eye swelled and grew bloodshot. Soon her mother had to wrap Ashlyn's hands to keep her from biting them and rubbing her face raw (Heckert, 2012). Despite feeling no pain—in fact because she feels no pain—Ashlyn's daily life is full of peril. For instance, as a teenager, she was stirring noodles in boiling water when the spoon slipped in, and Ashlyn reflexively reached in to retrieve it. With wonderful support from her family, Ashlyn has learned to think carefully about what she does to avoid injury because, although she doesn't feel pain, she can be damaged, just like everyone else, and that could lead to disability or death. How did Ashlyn come to have this dangerous "gift" of feeling no pain? What's going on inside her so that experiences that would bring us agony cause her no discomfort at all? What can she teach us about the neuroscience of pain, and about the importance of pain for survival?

Every individual of every species is immersed in a sea of environmental cues that signal opportunities and hazards. Molecules in the air are sensed as odors—of food, or mates, or smoke. Vibrations traveling through air are perceived as sounds, ranging from infant cries to the roar of a predator (or a waterfall). Light reflected from surfaces is used to create a visual representation of the world. We open this chapter by considering basic principles of sensory processing, using the sense of touch to illustrate some of the major concepts. We then take a closer look at an unpleasant but crucial sense: pain. To conclude the chapter, we turn our attention to the integration of sensory inputs that guide our movement: the streamlined motor system that allows us to interact with our environment.

5.1 Sensory Processing and the Somatosensory System

The Road Ahead
The first portion of the chapter covers the general principles that apply to all sensory systems, using touch as an example. Reading this material should allow you to:
5.1.1 Understand the concepts of labeled lines and sensory transduction.
5.1.2 Describe several different types of receptors in the skin and the stimuli they detect.
5.1.3 Relate the concepts of receptive fields and sensory adaptation.
5.1.4 Describe the neural pathway for the system reporting touch information from the body.

Because species differ in the environmental features they must sense for survival, evolution has endowed each species with its own unique set of capabilities. Bats are specially equipped to detect their own ultrasonic cries, which we humans cannot hear. Some snakes have infrared-sensing organs in their faces that allow them to "see" heat sources (like a warm, tasty mouse) in the dark. Some of the impressive array of sensory modalities that animals possess are listed in TABLE 5.1.

TABLE 5.1 Classification of Sensory Systems
System type   Modality           Sensed stimuli
Mechanical    Touch              Contact with body surface
              Pain               Tissue damage
              Hearing            Sound vibrations in air or water
              Vestibular         Head movement and orientation
              Joint              Position and movement
              Muscle             Tension
Light         Vision             Photons, from light sources or reflected from surfaces
Thermal       Cold               Decrease in skin temperature
              Warmth             Increase in skin temperature
Chemical      Smell              Odorant chemicals in air
              Taste              Substances in contact with the tongue or other taste receptors
              Vomeronasal        Pheromones in air or water
Electrical    Electroreception   Differences in density of electrical currents
Magnetic      Magnetoreception   Magnetic fields for orientation

Receptor cells detect various forms of energy

All animals have sensory organs containing receptor cells that sense some forms of energy—called stimuli—but not others. So in a way, receptor cells act as filters, ignoring the environmental background and converting the key stimuli into the language of the nervous system: electrical signals. Information from sensory receptors floods the brain in an unending barrage of action potentials traveling along millions of axons, and our brains must make sense of it all. What type of stimulus was that? Where did it come from? How intense was it?

Of course, different kinds of energy—light, sound, touch, and so on—need different sensory organs to convert them into neural activity, just as taking a photograph requires a camera, not a microphone. There is tremendous diversity in sensory organs across the animal kingdom; for example, the eye is just one type of sensory organ, yet it is found in a dazzling array of sizes, shapes, and forms, reflecting the varying survival needs of different animals. Likewise, the specific auditory abilities of species reflect their unique ecological pressures (FIGURE 5.1).

FIGURE 5.1 Do You Hear What I Hear?

Although the end product of sensory receptors—action potentials—is the same for all the different sensory modalities, the brain recognizes the modalities as separate and distinct because the action potentials for each sense are carried in separate dedicated nerves. This is the concept of labeled lines: particular neurons that are, right from the outset, labeled for distinctive sensory experiences. Action potentials in one line signal a sound, activity in another line signals a smell, and activity in other lines signals touch. And there are labeled lines within general sensory categories too; for example, we can distinguish different types of touch because our skin contains a variety of receptors and uses some lines to signal light touch, others to signal vibration, and yet other lines to signal stretching of the skin (FIGURE 5.2).

FIGURE 5.2 Labeled Lines

Receptor cells convert sensory signals into electrical activity

The structure of a receptor cell determines the particular kind of energy or chemical to which it will respond. And although a wide variety of cellular mechanisms are used to detect different stimuli, the outcome is always the same: an electrical change in the receptor, called a receptor potential (or generator potential), that resembles the excitatory postsynaptic potentials we discussed in Chapter 3. Converting the signal in this way—from environmental stimuli into action potentials that our brain can understand—is called sensory transduction.
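To make the transduction sequence concrete, here is a minimal sketch (our illustration, not anything from the book): a stimulus produces a graded receptor potential proportional to its strength, and action potentials result only if that potential crosses threshold. The gain, threshold, and rate values are arbitrary assumptions.

```python
# Minimal sketch of sensory transduction: stimulus -> graded receptor
# potential -> action potentials. Parameter values are invented for
# illustration; real receptors differ by modality.

GAIN = 0.8        # mV of receptor potential per unit of stimulus energy (assumed)
THRESHOLD = 10.0  # mV needed to trigger action potentials (assumed)
MAX_RATE = 500    # firing-rate ceiling in spikes/s; neurons cannot fire faster

def receptor_potential(stimulus: float) -> float:
    """Graded potential, directly proportional to stimulus strength."""
    return GAIN * stimulus

def firing_rate(stimulus: float) -> float:
    """Spikes/s: zero below threshold, then increasing up to a ceiling."""
    v = receptor_potential(stimulus)
    if v < THRESHOLD:
        return 0.0
    return min(MAX_RATE, 20.0 * (v - THRESHOLD))  # assumed 20 spikes/s per mV

for s in (5, 15, 40, 100):
    print(f"stimulus {s:>3} -> {firing_rate(s):5.0f} spikes/s")
```

Note that the rate saturates at the ceiling: a single neuron can only represent a limited range of intensities, a problem we return to later in this section.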
Our skin contains a rich array of receptors that transduce different forms of energy to provide our sense of touch. But touch is not just touch. Careful studies of skin sensations reveal qualitatively different sensory experiences: pressure, vibration, tickle, "pins and needles," and more-complex dimensions, such as smoothness or wetness—all recorded by the receptors in the skin (FIGURE 5.3), then transmitted along separate axons to the brain.

FIGURE 5.3 Receptors in Skin

One skin receptor provides a clear example of the process of sensory transduction—the Pacinian corpuscle (or lamellated corpuscle) (A. Zimmerman et al., 2014), a tiny onion-like structure embedded in the innermost layer of the skin that selectively responds to vibration and pressure. Acting as a filter, the corpuscle allows only vibrations of more than about 200 cycles per second to stimulate the sensory nerve ending inside it; this type of stimulation is what's created when we feel a texture against our skin (see Figure 5.3). By stretching the membrane of the sensory nerve ending, stimuli cause mechanically gated sodium channels to pop open, creating a graded receptor potential (FIGURE 5.4). The amplitude (size) of this receptor potential is directly proportional to the strength of the stimulus that was received. If the receptor potential exceeds threshold, action potentials are generated that travel via sensory nerves to the spinal cord.

FIGURE 5.4 The Structure and Function of the Pacinian Corpuscle

Other dimensions of the sense of touch are mediated by their own unique sensory receptors. In contrast to the texture sensitivity of Pacinian corpuscles, Meissner's corpuscles (also known as tactile corpuscles) and Merkel's discs mediate most of our ability to perceive the forms of objects we touch. While Merkel's discs are especially responsive to edges and to isolated points on a surface, the more numerous Meissner's corpuscles seem to respond to changes in stimuli, allowing them to detect localized movement between the skin and a surface (Heidenreich et al., 2011). Ruffini corpuscles, which are only sparsely distributed in the skin (Deflorio et al., 2022), detect stretching of patches of the skin when we move fingers or limbs (Johansson and Flanagan, 2009). Finally, pain, itch, heat, and cold stimuli are detected by free nerve endings in the skin (see Figure 5.3), which we'll return to a little later in the chapter. All of these sensory receptors are found in their highest concentrations in regions of the skin where our sense of touch is finest (fingertips, tongue, and lips).

How's It Going?
1. Discuss the relationship between the ecology of a species and its sensory capabilities.
2. What are labeled lines? What do they transmit?
3. Give a general explanation of sensory transduction. What is a receptor potential?
4. Identify and describe four sensory receptors found in the skin.

Sensory information processing is selective and analytical

Many people assume that sensory systems simply capture an accurate snapshot of stimulation and transmit it to the brain—in other words, that the sensory systems provide an uncolored window on the world. But neuroscientists realize that the sensory organs and pathways convey only limited—even distorted—information to the brain. A good deal of selection and analysis takes place along sensory pathways, before the information ever reaches the brain.
So the brain ultimately receives a highly filtered representation of the external world, in which stimuli that are critical for survival are strongly emphasized at the expense of less important stimuli. This processing and filtering is seen in several aspects of sensory transduction, including stimulus coding and processing across receptive fields, as well as in adaptation and active suppression by the brain.

Sensory events are encoded as streams of action potentials

We've already seen that the nervous system uses labeled lines to identify the type of stimulus that is encountered. But how do sensory neurons tell the brain about the intensity or location of a stimulus? Because the action potentials produced by a sensory neuron always have the same size and duration (see Chapter 3), the intensity of a sensory stimulus must be encoded in the number and frequency of the action potentials, the pattern in which clusters of action potentials occur, and so on.

We can respond to amazingly small differences in stimulus intensity, over a wide range of intensities. Although a single sensory receptor neuron could simply encode the intensity of a stimulus in the frequency of action potentials that the cell produces, only a very limited range of intensities could be represented this way, because neurons can fire only so fast (up to 1200 action potentials per second, and probably less in most neurons). Some sensory systems solve this problem by employing multiple sensory receptor cells, each specializing in just one part of the overall range of intensities, to cover the whole range. As the strength of a stimulus increases, additional sensory neurons sensitive to the higher intensities are "recruited"; thus, intensity of a stimulus can be represented by the number and thresholds of activated cells (a toy sketch of this scheme appears at the end of this subsection).

The position of a stimulus, either outside or inside the body, is likewise an important piece of information. Some sensory systems—the somatosensory system ("body sensation" system), for example—reveal this information by the position of receptors on the sensory surface. Thanks to labeled lines that uniquely convey spatial information, we can directly encode which patch of skin that darn mosquito is biting, in order to know exactly where to aim the slap. Similarly, in the visual system an object's spatial location determines which receptors in the eye are stimulated. In bilateral receptor systems—the two eyes, two ears, and two nostrils—differences in stimulation of the left and right receptors are encoded, providing the brain with additional cues to the location of the stimulus (this type of processing is discussed in more detail in Chapters 6 and 7).

Neurons at all levels of the touch pathways—from the surface sheet of receptors all the way up to the cerebral cortex—are arranged in an orderly, maplike manner. The map at each level is not exact, but it does reflect both spatial positions and receptor density. More cells are allocated to the spatial representation of sensitive, densely innervated sites, like the lips, than to sites that are less sensitive, such as the skin on your back. Each cell in the sensory map thus preferentially responds to a particular type of stimulus occurring in a particular place, as we'll see next.
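Before turning to receptive fields, here is the recruitment scheme in code. This sketch is our own illustration under assumed thresholds, not an account of any real receptor population: each receptor covers one slice of the intensity range, so the set of recruited cells, not any single firing rate, encodes intensity.

```python
# Range fractionation: three hypothetical receptor types, each tuned to
# part of the intensity range. Intensity is encoded by WHICH cells are
# recruited, not just by how fast any one cell fires. Thresholds are
# arbitrary assumptions.

RECEPTORS = {"low": 1.0, "mid": 10.0, "high": 100.0}  # firing thresholds

def recruited(intensity: float) -> list[str]:
    """Names of the receptor types activated at this intensity."""
    return [name for name, threshold in RECEPTORS.items() if intensity >= threshold]

for stimulus in (0.5, 5.0, 50.0, 500.0):
    print(f"intensity {stimulus:>6}: {recruited(stimulus)}")
```

A weak stimulus recruits only the low-threshold cell; stronger stimuli recruit progressively more cells, so the brain can read intensity from the roster of active lines even though each individual neuron saturates.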
The position of a stimulus, either outside or inside the body, is likewise an important piece of information. Some sensory systems—the somatosensory system (“body sensation” system), for example—reveal this information by the position of receptors on the sensory surface. Thanks to labeled lines that uniquely convey spatial information, we can directly encode which patch of skin that darn mosquito is biting, in order to know exactly where to aim the slap. Similarly, in the visual system an object’s spatial location determines which receptors in the eye are stimulated. In bilateral receptor systems—the two eyes, two ears, and two nostrils—differences in stimulation of the left and right receptors are encoded, providing the brain with additional cues to the location of the stimulus (this type of processing is discussed in more detail in Chapters 6 and 7).

Neurons at all levels of the touch pathways—from the surface sheet of receptors all the way up to the cerebral cortex—are arranged in an orderly, maplike manner. The map at each level is not exact, but it does reflect both spatial positions and receptor density. More cells are allocated to the spatial representation of sensitive, densely innervated sites, like the lips, than to sites that are less sensitive, such as the skin on your back. Each cell in the sensory map thus preferentially responds to a particular type of stimulus occurring in a particular place, as we’ll see next.

Sensory neurons respond to stimuli falling in their receptive fields

The receptive field of a sensory neuron consists of a region of space in which a stimulus will alter that neuron’s firing rate. To determine this receptive field, investigators record the neuron’s electrical responses to a variety of stimuli to see what makes the activity of that cell change from its resting rate. For example, which patch of skin must we stimulate to change the activity of one particular touch receptor? Experiments show that these somatosensory receptive fields are shaped like doughnuts, with either an excitatory center and an inhibitory surround (FIGURE 5.5) or an inhibitory center and an excitatory surround. Such somatosensory receptive fields make it easier to detect edges on the objects we feel. Receptive fields differ in size and shape and in the quality of stimulation that activates them. For example, some neurons respond preferentially to light touch, while others fire most rapidly in response to painful stimuli, and still others respond to cooling.

FIGURE 5.5 Identifying Somatosensory Receptive Fields
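Why should a center-surround arrangement help us feel edges? A one-dimensional sketch makes the point. In the toy model below (the pressure profile and the weights are invented for illustration), each output neuron is excited by its own patch of skin and inhibited by the neighboring patches:

```python
# Sketch of edge enhancement by center-surround receptive fields. Each output
# neuron is excited by "its" patch of skin and inhibited by the neighboring
# patches (an excitatory center with an inhibitory surround).

skin = [0, 0, 0, 0, 1, 1, 1, 1]  # pressure along a strip of skin: an edge

def center_surround(x, i):
    left = x[i - 1] if i > 0 else x[i]
    right = x[i + 1] if i < len(x) - 1 else x[i]
    return x[i] - 0.5 * (left + right)   # center minus surround

responses = [center_surround(skin, i) for i in range(len(skin))]
print(responses)   # zero everywhere except -0.5 and +0.5 flanking the edge
```

Uniform pressure excites a neuron's center and surround about equally, so the two cancel; only where the pressure changes abruptly, at an edge, do strong responses emerge, so the edge stands out in the neural representation.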
Experiments tracing sensory information along the pathway from the receptor cell to the brain show that neurons at every level will respond to particular stimuli, so each of these cells has its own receptive field. But as each successive neuron performs additional processing, the receptive fields change considerably, as we will see later in this chapter and in Chapters 6 and 7.

Receptors may show adaptation to unchanging stimuli

Sensory adaptation is the progressive decrease in a receptor’s response to sustained stimulation (FIGURE 5.6). This process allows us to ignore unimportant events. By not noticing the touch of our clothes on our skin, the buzz of overhead lights, and other stimuli that are unchanging, our sensory systems avoid overload and can remain vigilant for critical events. Neuroscientists distinguish between phasic receptors, which display this sort of adaptation, and tonic receptors, which show little or no adaptation and thus can signal the duration of a stimulus. (As each of us knows all too well, pain sensors are often tonic receptors, maintaining a high level of activity to help us avoid further injury.)

FIGURE 5.6 Sensory Adaptation

The process of adaptation illustrates the principle we referred to earlier in our discussion of selection and analysis: sensory systems often shift away from accurate portrayal of the external world. In some mechanical receptors, such as the Pacinian corpuscle described earlier, adaptation develops from the elasticity of the receptor cell itself. When the corpuscle (which is a separate, accessory structure) is removed, the uncovered sensory nerve fiber does not adapt, but continues discharging action potentials in response to a constant stimulus.

Sometimes we need receptors to be quiet

We’ve already noted that survival depends more on sensitivity to important changes than on exact reporting of stimuli. To maintain such sensitivity, we need to suppress unneeded or unimportant sensory activity. While adaptation is one important way in which sensory activity is controlled, we are also equipped with two additional suppression systems. One way to suppress sensory activity is simply to physically prevent the stimuli from reaching the sensors. Closing the eyelids provides this function in the visual system; in the auditory system, tiny middle-ear muscles reduce the intensity of sounds that reach the inner ear. A second kind of suppression of sensory inputs is entirely neural in nature. In many sensory pathways, reciprocal neural connections descend from the brain to synapse on lower sensory levels, where they can then inhibit activity in the ascending sensory axons. This central modulation of sensory information, whereby the brain actively controls the information it receives, is a feature of many sensory and pain pathways. Such modulation helps the brain attend to some stimuli more than others.

How’s It Going?
1. In general terms, explain how a sensory event is encoded in action potentials in sensory fibers.
2. Why do some receptor cells respond only to strong stimuli?
3. Describe receptive fields and how scientists detect them.
4. Name and briefly describe a couple of processes that change a sensory neuron’s response to stimuli.

Successive levels of the CNS process sensory information

Sensory information travels from the sensory surface to the highest levels of the brain, and each sensory system—such as touch, vision, or hearing—has its own distinctive pathway from the periphery to successively higher levels of the spinal cord and/or brain. For example, the somatosensory touch receptors that we’ve been discussing send their axons—eventually bundled into sensory nerves—from the skin to the dorsal (rear) part of the spinal cord. On entering the dorsal spinal cord, the somatosensory projections ascend as part of the spinal cord’s dorsal column system (FIGURE 5.7). These axons go all the way up to the brainstem, where they synapse onto neurons that project contralaterally (to the opposite side) and then go to the thalamus. From there, the incoming sensory information is directed to the cortex. At all levels, the inputs are organized according to a somatosensory map in which the body surface is divided into discrete bands. Each band, called a dermatome (from the Greek derma, “skin,” and tome, “part” or “segment”), is the strip of skin that is innervated by a particular spinal dorsal root (FIGURE 5.8). This maplike organization of sensory inputs is a feature of several sensory systems, including touch, vision, and hearing.

FIGURE 5.7 Somatosensory Pathways

FIGURE 5.8 Dermatomes

Each station in a sensory pathway accomplishes a basic aspect of information processing. For example, painful stimulation of the finger leads to reflexive withdrawal of the hand, which is mediated by spinal circuits before we even feel any pain. At the brainstem level, other circuits turn the head toward the source of pain. Eventually, sensory pathways reach the cerebral cortex, where the most complex aspects of sensory processing take place, perhaps consciously identifying the source of the pain (darn, another sliver!) and planning a response (where did I leave those tweezers?). For most senses, information reaches the thalamus before being relayed to the cortex (FIGURE 5.9). Information about each sensory modality is sent to a separate division of the thalamus. One way for the brain to suppress particular stimuli is for the cortex to direct the thalamus to emphasize some sensory information and suppress other information.

FIGURE 5.9 Levels of Sensory Processing

Sensory cortex is highly organized

For each sensory modality, researchers have identified a region designated as primary sensory cortex—primary somatosensory cortex, primary auditory cortex, and so on—that is generally the initial destination of sensory inputs to the cortex.
However, other cortical regions may receive and process the same information, often in collaboration with the primary sensory cortex; sensibly enough (pardon the pun), we call these regions nonprimary sensory cortex (see Figure 5.9). Each cortical sensory region processes different aspects of our perceptual experiences.

Primary somatosensory cortex (also called somatosensory 1 or S1) of each hemisphere is located in the postcentral gyrus, the long strip of tissue that lies just posterior to the central sulcus dividing the parietal lobe from the frontal lobe (FIGURE 5.10A). S1 receives touch information from the opposite side of the body. The cells in S1 are arranged as a map of the body (FIGURE 5.10B), but it is a very unusual, distorted map: the size of each region on the map is proportional to the density of sensory receptors found in that region of the skin. Parts of the body where we are especially sensitive to touch (like the hand and fingers) have large representations in S1 compared with less sensitive areas (like the shoulder). This proportional mapping is illustrated in the strange-looking character in FIGURE 5.10C, called a sensory homunculus, in whom the size of each body part reflects the proportion of S1 devoted to that part.

FIGURE 5.10 Representation of the Body Surface in Somatosensory Cortex

Sensory brain regions influence one another and change over time

Often the use of one sensory system influences perception from another sensory system. For example, humans detect a visual signal more accurately if it is accompanied by a sound from the same part of space (Hillyard et al., 2016). Many sensory areas in the brain—called association areas—process a mixture of inputs from different modalities. Some “visual” cells, for instance, also respond to auditory or touch stimuli. The convergence of information from different sensory systems on these polymodal neurons allows different sensory systems to interact (Emery and Wood, 2019). And for a few people, a stimulus in one sensory modality may evoke an additional perception in another sensory modality, as when seeing a number evokes a color, or music literally becomes a matter of taste, where each note has both a sound and a flavor (Schwartzman et al., 2019). This condition is known as synesthesia.

At one time, most researchers thought that sensory regions of cortex were fixed early in life. Now, however, we know that cortical maps are highly plastic, changing considerably as a result of experience. For example, two artists born without arms who used their toes extensively had distinct maps of each toe, unlike control participants (Dempsey-Jones et al., 2019). Professional musicians who play stringed instruments have expanded cortical representations of their left fingers, presumably because they use these fingers to depress the strings for precisely the right notes (Hirano et al., 2020). Brain imaging also reveals cortical reorganization in people who lose a hand in adulthood (FIGURE 5.11). One man received a transplanted hand (from a deceased accident victim) 35 years after losing his own. Despite the length of time that had passed, his brain reorganized in just a few months to receive sensation from the transplanted hand in the appropriate part of S1 (Frey et al., 2008). Some changes in cortical maps occur after weeks or months of use or disuse; they may arise from the growth of new synapses and dendrites or from the loss of others.

FIGURE 5.11 Plasticity in Somatosensory Cortex
How’s It Going?
1. Name the main somatosensory (touch) pathway to the brain, describe its organization, and name its main components.
2. Where is the primary somatosensory cortex located? How is it organized?
3. Discuss interactions between sensory modalities—for example, effects of auditory inputs on visual perception.

Food for Thought
Most of us have short, fine hairs on our arms that cannot possibly provide any insulation from cold. What are they good for?

5.2 Pain: The Body’s Emergency Signaling System

The Road Ahead
This next section describes the system bringing us the unpleasant but adaptive sensation of pain. Studying this material should enable you to:
5.2.1 List and describe the three separate components of pain experience.
5.2.2 Describe the neuronal receptor cells that detect painful stimuli and the molecular receptor proteins they use.
5.2.3 Trace the neuronal pathway that transmits pain information from the periphery to the brain, as well as the neuronal pathway by which the brain can modulate pain.
5.2.4 Discuss the various methods for controlling pain, including advantages and disadvantages of each.

One important aspect of body sensation is a mixed blessing. The International Association for the Study of Pain defines pain as “an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage.” Pain forcefully guides our behavior in several ways that minimize the risk to our bodies (Melzack and Katz, 2013). Immediate, short-term pain causes us to withdraw from the source, often reflexively, thus preventing further damage. Longer-lasting pain encourages behaviors, such as sleep, inactivity, grooming, feeding, and drinking, that promote recuperation. And pain-related social communication—grimacing, groaning, shrieking, and the rest of the miserable lineup—provides a warning to kin and elicits caregiving behaviors from them, including grooming, defending, and feeding.

A Painless Life? Unable to feel pain, Ashlyn must be more vigilant than other people to avoid injury.

Learning, experience, emotion, and culture all affect our perception of pain in striking ways, and these factors may strongly influence people’s descriptions of pain, ranging from an apparent absence of pain in badly injured soldiers and athletes to the anguish of a child with a paper cut. A widely used quantitative measure of pain perception—the McGill Pain Questionnaire (Main, 2016)—asks people to select words that tap three different dimensions of pain:
1. The sensory-discriminative dimension (e.g., throbbing, gnawing, shooting)
2. The motivational-affective (emotional) dimension (e.g., tiring, sickening, fearful)
3. An overall cognitive-evaluative dimension (e.g., no pain, mild, excruciating)

Researchers found that people use different constellations of descriptors for various forms of pain: tooth pain is described differently from arthritic pain, which in turn is described differently from menstrual pain. This more detailed analysis provides better information for the diagnosis and treatment of illness.

A discrete pain pathway projects from body to brain

Most tissues of the body (but not all) contain receptors specialized for detecting painful stimuli. These receptors are particularly well studied in the skin; in this section we discuss some features of these receptors, along with the peripheral and CNS pathways that mediate pain.
Peripheral receptors get the initial message

When tissue is injured, the affected cells release chemicals that activate nearby pain receptors, called nociceptors: free nerve endings specialized to detect damage. These chemicals also cause inflammation (FIGURE 5.12). Many different substances in injured tissue—serotonin, histamine, and various enzymes and peptides, to name just a few—can stimulate these nociceptors. Different free nerve endings respond to different stimuli, such as pain and/or changes in temperature.

FIGURE 5.12 Peripheral Mediation of Pain

Identification of the nociceptor that detects physical damage was aided through careful study of the family of a boy in Pakistan who died in tragic circumstances—performing dangerous pranks because he could feel no pain. Scientists isolated a mutation in a gene (called SCN9A) that appears to be responsible for his congenital insensitivity to pain (CIP). Children with CIP require constant monitoring to prevent them from poking out their eyes or pulling out their teeth (Oppenheim, 2006). The SCN9A gene encodes a voltage-gated sodium channel expressed in free nerve endings that serve as nociceptors (Xue et al., 2021). Without that channel, pain fibers cannot produce the action potentials that ordinarily signal tissue damage. No one knows why nociceptors use this particular sodium channel while other neurons use others. Ashlyn, whom we met at the start of the chapter, has a mutation in this same gene. Interestingly, this same sodium channel is also required for fibers reporting odors, so Ashlyn and other people with her condition cannot detect odors (Heckert, 2012).

Some free nerve endings detect temperature changes. Studies of capsaicin, the chemical that makes chili peppers spicy hot, helped reveal the receptor that signals sudden increases in temperature (this action is the reason spicy food seems to burn) (C. Moore et al., 2018). This receptor, with the not-so-spicy name transient receptor potential vanilloid type 1 (TRPV1, or just vanilloid receptor 1), belongs to a larger family of proteins called transient receptor potential (TRP) ion channels. Mice lacking the gene for TRPV1 still respond to mechanosensory pain, but not to mild heat or capsaicin (Fischer et al., 2020). TRPV1’s normal job is to report a rise in temperature to warn us of danger, so chili peppers cleverly evolved capsaicin to ward off mammalian predators—by falsely signaling burning heat. A related receptor, transient receptor potential type M3 (TRPM3), detects even higher temperatures than does TRPV1, but it does not respond to capsaicin (Vriens and Voets, 2018). TRPM3 receptors are found on A delta (Aδ) fibers, which are large-diameter, myelinated axons. Because of the relatively large axon diameter and myelination, action potentials in these fibers reach the spinal cord very quickly. In contrast, the nerve fibers that possess TRPV1 receptors are thin, unmyelinated fibers called C fibers. So, when you burn your hand on that hot pan, the initial sharp pain you feel is conducted by the fat Aδ fibers activated by their TRPM3 receptors, and the long-lasting dull ache that follows arises from slower C fibers and their TRPV1 receptors.
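A rough calculation shows why these two waves of pain feel so distinct. The sketch below assumes textbook-range conduction velocities (roughly 20 m/s for myelinated Aδ fibers and 1 m/s for unmyelinated C fibers; both vary considerably in reality) and a hand-to-spinal-cord distance of about 1 meter:

```python
# Back-of-the-envelope timing for "first" (sharp) vs "second" (dull) pain
# from the hand. Velocities are typical textbook values assumed here
# (A-delta, myelinated: ~20 m/s; C fibers, unmyelinated: ~1 m/s), and the
# hand-to-spinal-cord path is taken to be about 1 meter.

DISTANCE_M = 1.0
V_A_DELTA = 20.0   # m/s
V_C = 1.0          # m/s

t_sharp = DISTANCE_M / V_A_DELTA
t_dull = DISTANCE_M / V_C
print(f"A-delta signal arrives after ~{t_sharp * 1000:.0f} ms")
print(f"C-fiber signal arrives after ~{t_dull * 1000:.0f} ms")
print(f"lag between sharp pain and dull ache: ~{t_dull - t_sharp:.2f} s")
```

On these assumptions, the Aδ signal arrives in about 50 milliseconds, while the C-fiber signal takes about a second: ample time to feel a sharp jab first and the burning ache afterward.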
Other members of the TRP family of receptors detect coolness (Kashio and Tominaga, 2022) as well as constituents of spices like oregano, cloves, garlic, and wasabi (Rosenbaum et al., 2022), but their relation to pain receptors remains a delicious mystery (sorry). Stimulating your TRPV1 receptor too much can be hazardous to your health, as we see in Signs & Symptoms.

SIGNS & SYMPTOMS
A Professional Eater Meets His Match

The 34-year-old man was a professional eater, entering contests to see how quickly he could down huge quantities of food. He’d been moderately successful in this pursuit, but a chili pepper contest proved to be too much. After eating an entire “Carolina Reaper” pepper, purposely bred to be 6 times hotter than a habanero pepper, the man suffered dry heaves and pain in his neck, followed by a series of thunderclap headaches: excruciating, sudden-onset headaches that peak in a minute before subsiding, only to return (Boddhula et al., 2018). MRI scans of the man’s brain showed no abnormalities, but a CAT scan of blood vessels revealed that several arteries supplying his brain had narrowed to a remarkable extent (FIGURE 5.13A), which may have caused the headaches. Over the next few days, the man suffered several more thunderclap headaches lasting a few seconds. Once the headaches had stopped, the CAT scan showed that the arteries supplying his brain had expanded to a more normal size (FIGURE 5.13B). The gentleman may have gotten off lightly: people have suffered severe, even fatal, heart attacks after eating superhot chili peppers (N. Davis, 2018).

FIGURE 5.13 Thunderclap Headache

Special neural pathways carry pain information to the brain

Nerve fibers carrying information about pain and temperature send their axons into the dorsal horns of the spinal cord, where they synapse onto spinal neurons that project across the midline to the opposite side and then up toward the thalamus of the brain, forming the anterolateral system (or spinothalamic system) (FIGURE 5.14). This projection is distinct from the somatosensory system that we discussed earlier (the dorsal column system; see Figure 5.7), but as in that system, each hemisphere receives its inputs from the contralateral side of the body. Within the spinal cord, the arriving pain fibers release the excitatory transmitter glutamate along with a peptide, substance P, that selectively boosts pain signals and remodels pain pathway neurons (Zieglgänsberger, 2019). Mice lacking substance P cannot feel intense pain, but they still feel mild pain (Hökfelt et al., 2001).

FIGURE 5.14 Ascending Pain Pathways in the CNS Pain sensation travels from its origin to the brain via the anterolateral (spinothalamic) system, crossing the midline in the spinal cord.

Pain information is eventually integrated in the cingulate cortex, part of the limbic system we mentioned in Chapter 1 (see Figure 1.14B). The extent of activation in the cingulate (as well as in somatosensory) cortex correlates with how much discomfort different people report in response to the same mildly painful stimulus (Lee et al., 2022). Different subregions of the cingulate cortex seem to mediate emotional versus sensory aspects of pain; one part of the cingulate cortex becomes active even when we just empathize with a loved one experiencing pain (T. Singer et al., 2004).

Sometimes pain persists long after the injury that started it has healed. This neuropathic pain is a disagreeable example of neuroplasticity, in which neurons continue to signal pain, and indeed amplify the pain signal, in the absence of any tissue damage (Ghazisaeidi et al., 2023). In a striking example of neuropathic pain, people with phantom limb pain experience great pain that seems to come from an amputated limb.
It is notoriously difficult to treat. One approach that has had some success uses a mirror to trick the brain into believing it is controlling the missing limb (FIGURE 5.15) (Ramachandran and Rogers-Ramachandran, 2000); apparently, visual feedback (even if false) allows the brain to recalibrate the pain signal.

FIGURE 5.15 Using a Visual Illusion to Relieve Phantom Limb Pain

How’s It Going?
1. Define pain. Why should pain be viewed as a positive adaptation?
2. Provide a general explanation of the way pain receptors work. How do pain receptors differ from touch receptors?
3. Name and distinguish between the two sizes of fibers that carry pain information from the periphery to the spinal cord.
4. Sketch the pain pathways from the periphery to the cortex.

Pain control can be difficult

Throughout history, suffering humans have sought remedies to reduce their experience of pain. It’s not easy; even cutting nerves may provide only temporary relief, until the pain system finds a way to restore its signal to the brain. A classic model of pain transmission, called the gate control theory, hypothesizes that spinal “gates”—modulation sites at which pain can be facilitated or blocked—control the signal that gets through to the brain (Melzack and Wall, 1965). If this theory is right, effective pain relief may depend on finding ways to keep the gates closed, cutting off the pain signal. Popular strategies for analgesia (absence of pain; from the Greek an, “not,” and algesis, “feeling of pain”) fall into four general categories, which we’ll discuss next.

Life Saver Naloxone is sometimes called the “Lazarus drug” for its ability to revive people dying of a narcotic overdose in less than a minute.

Analgesic drugs are highly effective

The opiates (opium-related drugs, like morphine) have been known for centuries to relieve pain. Along with brain-derived painkillers such as the endorphins and other endogenous opioids, opiate drugs bind to specific receptors in the brain to reduce pain (see Chapter 4). Researchers have found that this action is especially pronounced in the brainstem region called the periaqueductal gray (see Figure 5.14); one possibility is that the brainstem system activates the pain-gating mechanism of the spinal cord via descending projections, thereby blocking the transmission of pain signals. Similar benefits can be obtained by (carefully!) injecting opiates directly into the fluid-filled spaces of the spinal column; this is called an epidural or intrathecal injection. Although people sometimes become addicted to painkillers, that is usually not true of people who are using them to treat severe pain; in fact, the danger of addiction from the use of morphine to relieve surgical pain has been vastly exaggerated and is estimated to be less than 1 percent. Unfortunately, those few who do become addicted face a very real danger of death by overdose (Volkow et al., 2018); an opioid epidemic has been made worse by the development of extremely potent opioids such as OxyContin and fentanyl, resulting in more and more deaths (see Figure 3.9). If given in time, opioid antagonists like naloxone (Narcan) can save the lives of people who overdose on opioids, so more and more public safety officers carry the drug. Of course, there are other painkilling drugs, but none are as effective as the opioids.
Over-the-counter medications like aspirin and acetaminophen (Tylenol) act via non-opioid mechanisms (especially by inhibiting the enzymes COX-1 and COX-2) to reduce pain and inflammation. Cannabis reduces pain by stimulating endogenous cannabinoid receptors (CB receptors) in the spinal cord, brain, and free nerve endings (Clapper et al., 2010). One woman with a genetic mutation that produces high levels of endogenous cannabinoids does not perceive pain (Habib et al., 2019).

Electrical stimulation can sometimes relieve pain

In transcutaneous electrical nerve stimulation (TENS), mild electrical stimulation is applied to nerves around the injury site to relieve pain. The exact mechanism of this pain relief is not clear, but one possibility is that TENS closes the spinal “gate” for pain that Melzack and Wall (1965) described. Recall, for example, the last time you stubbed your toe. In addition to expelling a string of expletives, you may have vigorously rubbed the injured area, bringing a little relief. TENS is a more efficient way of stimulating those adjacent nerves, and it may bring dramatic relief lasting for hours (Vance et al., 2014). We know that TENS acts at least in part by releasing endogenous opioids, because administration of the opioid antagonist naloxone partially blocks this analgesic action (Gonçalves et al., 2014).

Placebos effectively control pain in some people, but not all

In some people, simply believing that they are receiving a proven treatment can effectively relieve pain. In a classic example of this placebo effect, when participants who had just had their wisdom teeth extracted were given morphine or a placebo, fully a third of those receiving the placebo experienced pain relief (J. D. Levine et al., 1978). But when the placebo was coadministered with a drug that blocks opioid receptors (naloxone), the participants did not experience the benefits of the placebo effect. This latter finding strongly implies that placebos work by activating the brain’s endogenous opioid system. In fact, functional brain imaging indicates that opioids and placebos activate the same brain regions. For reasons unknown, some people consistently experience relief from placebos while others do not (FIGURE 5.16).

FIGURE 5.16 Placebos Affect Opioid Systems in the Brain

Activation of endogenous opioids relieves pain

Although the ancient pain-relieving technique of acupuncture remains very popular, only a minority of people using acupuncture achieve lasting relief from chronic pain. In those people for whom acupuncture is effective, a release of endorphins may be an important part of the process, since treatment with naloxone often blocks acupuncture’s effectiveness (Staud and Price, 2006). Acupuncture thus resembles placebos in this regard. Although many rules govern needle placement in acupuncture, systematic research indicates that the placement of the needles actually has little to do with its effects on pain (Linde et al., 2009). The expectation that the needles will relieve pain appears to be the important factor, presumably inducing a release of endogenous opioids. Likewise, stressful life events can produce significant analgesia; for example, tales abound of gravely wounded soldiers who feel no pain for some time after their injuries occur (Bowman, 1997).
Research in animals indicates that stress activates both an opioid-dependent form of analgesia, which can be blocked by naloxone, and another, non-opioid analgesia system that has not yet been characterized (but may rely on endocannabinoids) (A. G. Hohmann et al., 2005). These endogenous analgesic systems allow a wounded individual to fight or escape rather than be overwhelmed with pain.

Pain relief remains a major challenge for neuroscience research. Chronic pain can have dramatic effects on the brain: for example, the prefrontal cortex in people with chronic back pain shrinks much faster than normal, as if these people were rapidly aging (Apkarian et al., 2004). The wide range of pain relief strategies (summarized in TABLE 5.2), some of which reflect desperation in the face of great anguish, testifies to the elusive nature of pain. As we learn more about how the brain controls pain, we can hope for better, safer analgesics in the future.

TABLE 5.2 Types of Pain Relief

PSYCHOGENIC
Placebo: May activate endorphin-mediated pain control system
Hypnosis: Alters brain’s perception of pain
Stress: Uses both opioid and non-opioid mechanisms
Cognitive (learning, coping strategies): May activate endorphin-mediated pain control system

PHARMACOLOGICAL
Opiates: Bind to opioid receptors in periaqueductal gray and spinal cord
Spinal block: Blocks pain signals in spinal cord
Anti-inflammatory drugs: Block chemical inflammatory signals at the site of injury (see Figure 5.12)
Cannabinoids: Act in nociceptor endings, spinal cord, and brain

STIMULATION
TENS/mechanical: Acts on large fibers; blocks or alters pain signal to brain
Acupuncture: Activates endogenous opioids and/or placebo-like effect, possibly modulating activity of peripheral pain pathways
Central gray: Electrically activates endorphin-mediated pain control systems, blocking pain signal in spinal cord

How’s It Going?
1. What is the most effective pharmacological method of pain control? How and where do these drugs work in the brain?
2. How is TENS thought to work to control pain?
3. Compare and contrast placebos and acupuncture for pain. Discuss the possibility that they act on the same neural system.

FOOD FOR THOUGHT
If you had the chance to start life all over and could also choose to never feel any pain, would you choose that?

5.3 Movement and the Motor System

The Road Ahead
This chapter concludes with the system that enables the brain to move the body, allowing us to interact with the world. Learning this information means you can:
5.3.1 Discuss the importance of motor planning and sensory feedback in controlling behavior.
5.3.2 Trace the pathways by which the brain sends commands to individual muscles.
5.3.3 Distinguish between the two main types of sensory feedback from muscles to the nervous system.
5.3.4 Discuss the interaction of various cortical and subcortical brain regions regulating behavior.
5.3.5 Describe the behavioral symptoms and underlying pathology of two major motor disorders.

Our apparently effortless adult motor abilities—such as reaching out and picking up an object, walking across the room, sipping a cup of coffee—require complex muscular systems with constant feedback from the body. Our survey of motor control starts with a discussion of a theoretical framework for motor feedback, followed by a tour of the anatomy and pathology of movement. When you think about it, all behavior must involve movements—contractions of muscles that provide our sole means of interacting with the world around us.
Early discoveries suggested that reflexes—simple, unvarying, and unlearned responses to sensory stimuli such as touch, pressure, and pain—might be the basic units of behavior. It was thought that more-complex behaviors, or acts, such as getting dressed, walking, or speaking a sentence, might result from simply connecting together different reflexes, the sensation from one reflex triggering the next. The flaws of this perspective soon became apparent: for most acts, we have a plan in which several units (arm movements, leg movements, speech sounds) are placed in a larger pattern (the intended complete act), and they are not always produced in the same (or even the correct) order. So, researchers realized that acts require a motor plan (or motor program), a complex set of commands to muscles that is established before an act occurs. Feedback from movements informs and fine-tunes the motor program as the execution is unfolding, but the basic sequence of movements is planned. Examples of behaviors that exhibit this kind of internal plan range from highly skilled acts, such as piano playing, to the simple escape behaviors of animals such as crayfish.

Researchers can track the simple movements that make up an act by recording the electrical activity of muscles as they contract—a technique called electromyography (EMG)—and the moment-to-moment positions of the body. The EMG recordings in FIGURE 5.17 show that a person pulling a lever will adjust their legs just before moving their arm—an example of motor planning. Motor plans resemble engineering concepts that are applied to the operation of machines. In designing machines, engineers commonly have two goals: (1) accuracy, to prevent or minimize error; and (2) speed, to complete a task quickly and efficiently. Improvements in one goal usually come at some cost to the other; in other words, there is a trade-off between speed and accuracy, a trade-off that is also apparent in motor planning by the nervous system.

FIGURE 5.17 Electromyography

The neuromuscular system consists of the muscles of the body plus a collection of brain mechanisms and nerves that prepare and execute motor plans and obtain feedback information from the sensory system for use in error correction. The system is organized in a distinct hierarchy:
1. The skeletal system and the muscles attached to it determine which movements are possible.
2. The spinal cord controls skeletal muscles in response to motor commands from the brain or, in the case of simple reflexes, in direct response to sensory inputs.
3. The brainstem integrates motor commands from higher levels of the brain and transmits them to the spinal cord. It also relays sensory information about the body from the spinal cord to the forebrain.
4. Some of the main commands for action are initiated in the primary motor cortex.
5. Areas adjacent to the primary motor cortex, nonprimary motor cortex, provide an additional source of motor commands, acting indirectly via primary motor cortex and through direct connections to lower levels of the motor hierarchy. At the very top of the movement hierarchy is the prefrontal cortex, which is crucial to the formulation of behavioral plans.
6. Other brain regions—the cerebellum and basal ganglia, via the thalamus—modulate the activities of the other parts of the control system.

Through the remainder of the chapter we’ll look at the elements of this hierarchy, as outlined in FIGURE 5.18, in a bit more detail.

FIGURE 5.18 The Hierarchy of Movement Control
How’s It Going?
1. Distinguish among reflexes, movements, and acts.
2. Discuss the importance of sensory feedback for the control of movements. How are speed and accuracy related, in the context of movement control?
3. What is a motor plan?
4. Identify the six major levels of the motor control hierarchy.

Muscles and the skeleton work together to move the body

Our skeleton, like those of other species with bones, is articulated with joints that vary in their planes of movement—ranging from “universal” joints, like the hip or shoulder, to joints that act more like hinges and move mostly in one direction, such as the elbow or knee. Around a joint, different muscles, connected to the bones by tendons, are arranged in a reciprocal fashion such that when one muscle group contracts, it stretches the other group; that is, the muscles are antagonists. Some groups of muscles, called synergists, may work together to move a limb in one direction. A simple example of muscle action around a joint is shown in FIGURE 5.19. The movement of a limb is determined by the degree and rate of contraction in some muscles and relaxation in others, or we can lock a limb in position by contracting opposing muscles at the same time.

FIGURE 5.19 The Arrangement of Muscles around the Elbow

The muscles that we use for movement of the skeleton are called skeletal muscles. Because they have a striped appearance on microscopic examination, due to overlapping layers of contractile proteins called myosin and actin, skeletal muscles are said to be made of striate muscle. (Smooth muscle, which has a different appearance and is found in visceral organs and blood vessels, is not generally involved in voluntary behavior, so we will not concern ourselves with it here.) Contraction of the muscle increases the overlap of the actin and myosin filaments within muscle fibers, and as these filaments slide past each other, the muscle fiber shortens. Most muscles consist of a specific mixture of two types of fibers: slow-twitch fibers that contract with relatively low intensity but fatigue slowly, and fast-twitch fibers that contract strongly but fatigue quickly. Through training, endurance athletes enhance the slow-twitch properties of their muscles (Plotkin et al., 2022).

Muscles contract because motor neurons (or motoneurons) of the spinal cord and brainstem (see Figure 1.7 and Figure 1.8) send action potentials along their axons and axon collaterals to terminate at specialized synapses, called neuromuscular junctions, that are found on muscle fibers (FIGURE 5.20). The production of an action potential by a motor neuron triggers a release of the neurotransmitter acetylcholine (ACh) at all of the motor neuron’s axon terminals. The motor neuron, together with all of the muscle fibers it innervates, is known as a motor unit; the fibers respond to the release of ACh by triggering their own action potentials that cause actin and myosin to produce contraction (see FIGURE 5.20).

FIGURE 5.20 The Innervation of Muscle

Some large motor units—where motor neurons innervate thigh muscles, for example—may control hundreds or thousands of muscle fibers. But muscles that require more precise control—muscles of the face, for example—tend to have much smaller motor units, with each motor neuron controlling only a few muscle fibers. Many people experience “jumping nerves” in the eyelids when they’re fatigued (from studying neuroscience, maybe).
This tiny but incredibly annoying twitch, called a fasciculation, is actually a misfiring facial motor unit. A fasciculation in the thigh, in contrast, produces a much larger twitch. Within the spinal cord, motor neurons tend to have large cell bodies and very widespread dendritic fields, because they receive and integrate inputs from so many different sources—incoming sensory inputs as well as descending signals from the brain—that form thousands of synapses onto the motor neurons. Virtually all motor neuron axons are myelinated, so their action potentials reach their target muscles quickly. In a somewhat dramatic turn of phrase, classical neuroscience referred to motor neurons as the final common pathway: the sole route through which the spinal cord and brain can control our bodies.

Sensory feedback from muscles, tendons, and joints regulates movement

To produce rapid coordinated movements of the body, the brain and spinal cord continually monitor the state of the muscles, the positions of the limbs, and the instructions being issued by the motor centers. This collection of information about body movements and positions is called proprioception (from the Latin proprius, “own,” and recipere, “to receive”). Let’s consider two proprioceptors—muscle spindles and Golgi tendon organs—that monitor muscle length and muscle tension.

The muscle spindle is basically a capsule, buried amid the other fibers of the muscle, that contains a special kind of muscle fiber called an intrafusal fiber (from the Latin intra, “within,” and fusus, “spindle”) (FIGURE 5.21A). When a muscle is stretched beyond its relaxed state, so it is lengthened—imagine someone handing you a heavy book, causing your arm to bend downward and lengthening the biceps muscle—sensory endings within the spindle become excited and trigger action potentials in sensory nerves. This proprioceptive signal informs the spinal cord and brain about the extent and rate of change in the length of the muscle, and therefore about the load being imposed. Interestingly, a special motor neuron controls the length of the intrafusal fiber, adjusting it according to the movements being planned by the brain—in a sense, calibrating the muscle spindle to the expected limb position.

FIGURE 5.21 Muscle Receptors

While muscle spindles respond primarily to length, the other proprioceptive receptors for muscle—Golgi tendon organs—are especially sensitive to the tension of the muscle as it shortens. Loads that are strong enough to stretch the tough tendon are sensed by the nerve endings of the Golgi tendon organ that weave through the tendon (FIGURE 5.21B). It takes a pretty strong load to stretch a tendon to this degree, so it makes sense that the primary function of Golgi tendon organs is to monitor the force of muscle contractions, providing a second source of sensory information about the muscles (FIGURE 5.22). This arrangement makes the Golgi tendon organs useful in another important way: they detect overloads that threaten to tear muscles and tendons, and they can cause a reflexive relaxation of the affected muscles, protecting them (and causing you to drop that book). A familiar example of a reflex driven by muscle spindles is the knee-jerk (or patellar) reflex that we discussed in Chapter 2 (see Figure 2.15).

FIGURE 5.22 Activation of Muscle Receptors

Classic studies in physiology emphasized the importance of information from muscle spindles and Golgi tendon organs for controlling movement.
Mott (1895) and Sherrington (1898) showed that severing the sensory fibers from a monkey’s arm muscles causes the monkey to stop using the affected limb, even if the connections from motor neurons to the muscles are preserved. The arm dangles, apparently useless. But if the good arm is restrained, the animal soon learns to use the affected arm, and indeed it can become quite dexterous (Taub et al., 2014). Monkeys manage to do this by guiding their movements with visual feedback about how the arm is moving. In fact, we all supplement our proprioceptive information with feedback from other sensory channels, like vision.

The spinal cord mediates “automatic” responses and receives inputs from the brain

To really understand the physiology of movement, we need to understand how the “final common pathway” is controlled by the CNS. The lowest level of this hierarchy is the spinal cord, where relatively simple circuits produce reflexive behavioral responses to sensory stimuli. A straightforward example is the stretch reflex, illustrated in FIGURE 5.23, which can be elicited by stretching any muscle. In this case, dropping a load into the outstretched hand causes a sudden stretch of the biceps muscle, which is detected by muscle spindles. In the spinal cord, the incoming sensory information from the spindles has two immediate effects: it stimulates motor neurons of the biceps, causing a contraction, and it simultaneously inhibits the antagonistic motor neurons that connect to the triceps muscle on the back of the arm. The reflex thus generates a compensatory movement to bring the hand and arm back to their intended position. Not all spinal circuits are quite this simple; for example, the rhythmic movements of walking are governed by spinal circuits that may involve many neurons across multiple spinal segments.

FIGURE 5.23 The Stretch Reflex Circuit
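The reciprocal logic of the stretch reflex, excitation of the stretched muscle paired with inhibition of its antagonist, is simple enough to capture in a few lines of code. The sketch below is a toy illustration only; the weights (+1.0 and -0.8) are invented, and a real reflex arc involves populations of spindles, motor neurons, and interneurons.

```python
# Toy version of the stretch reflex wiring just described: the spindle signal
# from a stretched biceps excites the biceps' own motor neurons and, via an
# inhibitory interneuron, suppresses the antagonist triceps.

def stretch_reflex(spindle_signal: float):
    """Return (biceps drive, triceps drive) for a given biceps stretch."""
    biceps = +1.0 * spindle_signal    # monosynaptic excitation
    triceps = -0.8 * spindle_signal   # disynaptic inhibition (invented weight)
    return biceps, triceps

for stretch in (0.0, 0.5, 1.0):      # e.g., a load dropped into the hand
    b, t = stretch_reflex(stretch)
    print(f"stretch {stretch:.1f} -> biceps drive {b:+.1f}, triceps drive {t:+.1f}")
```

The opposite signs of the two outputs are the essential point: the same spindle signal that drives the biceps to contract also tells the triceps to relax, so the two antagonists do not fight each other.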
Although muscles of the head are controlled directly by the brain, via the cranial nerves (see Figure 1.7), the muscles of the rest of the body are ultimately controlled by commands from the brain to the spinal cord. The brain sends these commands through two major pathways: the pyramidal system and the extrapyramidal system. The pyramidal system (or corticospinal system) consists of neuronal cell bodies within the frontal cortex and their axons, which pass through the brainstem, forming the pyramidal tract to the spinal cord (FIGURE 5.24A). In a cross section of the medulla, the tract is a wedge-shaped anterior protuberance (pyramid) on each side of the midline. Because the left and right pyramidal tracts each cross over to the other side, the right cortex controls the left side of the body while the left cortex controls the right. Lesions anywhere in the pyramidal tract will cause paralysis in the muscles controlled by the damaged neurons. Many of the axons of the pyramidal tract originate from neurons in the primary motor cortex (M1), which consists mainly of the precentral gyrus, just anterior to the central sulcus (FIGURE 5.24B). We will return to the topic of motor cortex a little later.

FIGURE 5.24 The Pyramidal System and Primary Motor Cortex

Many other axon pathways run from the forebrain to the brainstem and spinal cord. Because these tracts are outside the pyramids of the medulla, they and their connections are lumped together as the extrapyramidal system. In general, lesions of the extrapyramidal system do not prevent the movement of individual joints and limbs, but they do interfere with spinal reflexes, usually exaggerating them, and they interfere with systems that regulate and fine-tune motor behavior. Many of these extrapyramidal projections pass to the spinal cord via specialized motor regions (the reticular formation and red nucleus) of the midbrain and brainstem; as we’ll see shortly, the basal ganglia are an important point of origin for extrapyramidal projections.

Spinal injuries due to vehicular accidents, violence, falls, and sports injuries are all too common, and they often cause heartbreaking disabilities. Because the spinal cord carries all of the instructions from the brain to the muscles, an injury that completely severs the cord results in immediate and permanent paralysis below the level of injury. Depending on the extent of destruction of the spinal cord below the injury site, spinal reflexes may or may not be lost as well (in fact, reflexes may become stronger because of the loss of descending inhibition from the brain). Over 250,000 people in the United States have spinal cord injuries (Richards et al., 2017), and thousands more occur each year, mostly in young people. Although much remains to be discovered, the hope of reconnecting the injured spinal cord no longer seems farfetched.

Motor cortex plans and executes movements—and more

The primary motor cortex of humans—M1—is a major source of the axons forming the pyramidal tract. Like S1, the primary somatosensory cortex that we discussed earlier in the chapter, M1 occupies a single large cortical gyrus: the precentral gyrus, located immediately in front of the central sulcus (M1 is thus a part of the frontal lobe; see Figure 5.24B). And like S1, M1 is organized as a map of the contralateral side of the body. So, electrical stimulation of a discrete region of the left M1 will cause movement in the corresponding region of the right side of the body. Once again, the map is distorted, in the sense that the parts of the body that we control most precisely—hands, lips, tongue—are overrepresented in M1. FIGURE 5.24C shows the motor homunculus, a figure drawn using the body proportions represented in M1. But although the M1 map helps us understand the basic organization of motor cortex, recent research indicates that the map is really an oversimplification. The mapping of individual body regions in M1 isn’t nearly as clear-cut and discrete as traditional M1 maps suggest. In fact, there is a fair bit of intermingling of body regions in the map, because many body parts coordinate with one another across regions of M1 (Card and Gharbawie, 2020).

By recording from M1 neurons in monkeys making arm movements, we can eavesdrop on the commands originating there (FIGURE 5.25). Many M1 cells change their firing rates according to the direction of the movement, but for any one cell, discharge rates are highest in one particular direction. Only by averaging the activity of hundreds of M1 neurons at once can we predict the direction of arm movements with reasonable accuracy. But of course, millions of M1 neurons are available, so in principle a larger sampling would provide a more accurate prediction.

FIGURE 5.25 Directional Tuning of Motor Cortex Cells
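This kind of "population vector" readout is easy to sketch. In the toy model below (the cosine tuning curves, baseline, and tuning depth are invented values; the approach is in the spirit of classic directional-tuning studies), each simulated cell votes for its preferred direction in proportion to how far its firing departs from baseline, and the summed vote approximately recovers the movement direction:

```python
# Sketch of population-vector decoding of movement direction from
# direction-tuned cells. All parameters are invented for illustration.
import math
import random

random.seed(1)
N = 200
prefs = [random.uniform(0.0, 2.0 * math.pi) for _ in range(N)]  # preferred dirs

BASELINE, DEPTH = 20.0, 15.0   # spikes/s

def rate(move_dir: float, pref: float) -> float:
    """Cosine tuning: maximal when movement matches the preferred direction."""
    return BASELINE + DEPTH * math.cos(move_dir - pref)

def decode(move_dir: float) -> float:
    """Each cell 'votes' for its preferred direction, weighted by how far its
    rate departs from baseline; the vector sum points near the true direction."""
    x = sum((rate(move_dir, p) - BASELINE) * math.cos(p) for p in prefs)
    y = sum((rate(move_dir, p) - BASELINE) * math.sin(p) for p in prefs)
    return math.atan2(y, x)

true_dir = math.radians(70)
print(f"true {math.degrees(true_dir):.0f} deg, "
      f"decoded {math.degrees(decode(true_dir)):.1f} deg from {N} cells")
```

With a few hundred broadly tuned cells with random preferred directions, the decoded direction lands within a few degrees of the true one; adding more cells shrinks the error further, which is the principle behind the larger-sampling argument above.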
Motor representations in M1 are not static; they change as a result of training. For example, M1 is wider in piano players, especially in the hand area, than in nonmusicians. The younger the musician was at the start of musical training, the larger the gyrus is in adulthood (Chieffo et al., 2016), so this expansion of M1 seems to be a response to the experience of musical training. Studies using transcranial magnetic stimulation (TMS) (see Chapter 1) to noninvasively stimulate cortical neurons have shown that the movements produced by a patch of M1 may change with repeated use or as a result of motor learning. In rats, this cortical plasticity associated with motor learning has been directly observed by means of sophisticated mapping of the motor cortex before and after extended training of a new skill (Monfils et al., 2005) (FIGURE 5.26).

FIGURE 5.26 Motor Learning Causes Remapping of Motor Cortex

Just anterior to M1 are cortical regions, collectively known as nonprimary motor cortex, that make additional crucial contributions to motor control. Nonprimary motor systems can contribute to behavior directly, through communication with lower levels of the motor hierarchy in the brainstem and spinal cord, as well as indirectly, through M1. The traditional account of nonprimary motor cortex emphasizes two main regions: the supplementary motor area (SMA), which lies mainly on the medial aspect of the hemisphere, and the premotor cortex, which is anterior to the primary motor cortex (FIGURE 5.27).

FIGURE 5.27 Human Motor Cortical Areas

The SMA seems important for the initiation of movement sequences, especially when they’re being executed according to an internal preprogrammed plan (Zimnik et al., 2019). In contrast, the premotor cortex seems to be activated when motor sequences are guided by external events (Svoboda and Li, 2018). However, evidence is mounting that premotor cortex is not a single system, but really a mosaic of different units controlling groups of motor behaviors that cluster together into major categories: defensive movements, feeding behavior, and so on (Graziano, 2006; Graziano and Aflalo, 2007). This organization suggests that motor and premotor areas mostly map behaviors, rather than mapping specific movements, as in M1.

Strokes or other injuries in motor areas of the cortex result in plegia (paralysis) or paresis (weakness) of voluntary movements, usually on the contralateral side of the body (hemiplegia or hemiparesis). Damage to regions outside purely motor zones of the cerebral cortex, such as some regions of parietal or frontal association cortex, produces more-complicated changes in motor control, such as apraxia (from the Greek a, “not,” and praxis, “action”), the inability to carry out complex movements even though paralysis or weakness is not evident and language comprehension and motivation are intact. There are several subtypes of apraxia, but in general it’s as though the person with apraxia is unable to work out the sequence of movements required to perform a desired behavior—a high-level motor-programming problem.

How’s It Going?
1. Describe the arrangement of muscles and joints that allows movement.
2. Briefly describe the main components of a motor unit.
3. Define proprioception. Explain how two specialized sensors in muscle provide feedback about the muscle’s current state.
4. Provide a summary of the path taken by motor fibers innervating the skeletal musculature—from the level of the brain, through the spinal cord, to the muscle targets.
5. Where is primary motor cortex located, and how is it organized?
6. Distinguish between the pyramidal and extrapyramidal systems.
7. What are some of the contributions of nonprimary motor cortex?
RESEARCHERS AT WORK
Mirror Neurons in Premotor Cortex Track Movements in Others

A subregion of premotor cortex (called F5) contains a population of remarkable neurons that seem to fulfill two functions. These neurons fire shortly before a monkey makes a very particular movement of the hand and arm to reach for an object; different neurons fire during different reaching movements. The data thus suggest that these neurons trigger specific movements. But these neurons also seem to fire whenever the monkey sees another monkey (or a human) make that same movement (FIGURE 5.28). These cells are called mirror neurons because they fire as though the monkey were imagining doing the same thing as the other individual. Mirror neurons are also found in adult humans (Mukamel et al., 2010) and children (Lepage and Theoret, 2006), both in the premotor cortex and in other cortical locations. Because the activity of these neurons suggests that they are important in the understanding of other individuals’ actions (Rizzolatti and Craighero, 2004), an intriguing notion is that mirror neurons could be part of a neural system for empathy. Thus, there has been a great deal of speculation about the function of mirror neurons in the imitating behavior of human infants, the evolution of language, and other behavior (Gallese and Sinigaglia, 2011). Some have speculated that people with autism spectrum disorder, which is characterized by a failure to anticipate other people’s thinking and actions, may have a deficit in mirror neuron activity (Vivanti and Rogers, 2014). Note, however, that the specific functions ascribed to mirror neurons remain somewhat controversial (Albertini et al., 2021).

FIGURE 5.28 Mirror Neurons

Extrapyramidal systems regulate and fine-tune motor commands

Extrapyramidal projections—the motor fibers outside the pyramidal tracts—are especially important in the modulation and ongoing control of movement. Two of the most important sources of extrapyramidal fibers are the basal ganglia and the cerebellum. As we saw in Chapter 1, the basal ganglia are a group of several interconnected forebrain nuclei (especially the caudate nucleus, putamen, and globus pallidus), with strong inputs from the substantia nigra and the subthalamic nucleus. The basal ganglia receive inputs from wide expanses of the cortex, forming a loop from the cortex through the basal ganglia and thalamus and back to the cortex (FIGURE 5.29). The basal ganglia help control the amplitude and direction of movement, and changes in activity in regions of the basal ganglia appear to be important for the initiation of movement. Much of the motor function of the basal ganglia appears to be the modulation of activity started by other brain circuits, such as the motor pathways of the cortex (see Figure 5.24). The basal ganglia are especially important for movements performed from memory, in contrast to those guided by sensory control.

FIGURE 5.29 Subcortical Systems Involved in Movement

Inputs to the cerebellum come both from sensory sources and from other brain motor systems. Sensory inputs include the muscle and joint receptors and the vestibular, somatosensory, visual, and auditory systems. Both pyramidal and nonpyramidal pathways contribute inputs to the cerebellum and in turn receive outputs from its deep nuclei (the output of the cerebellar cortex itself, carried by its Purkinje cells, is entirely inhibitory).
The cerebellum helps establish and fine-tune neural programs for skilled movements, especially the kinds of rapid, repeated movements that become automatic. Remarkably, some people appear to be born without a cerebellum, yet they develop normal motor skills (FIGURE 5.30), presumably because of the great plasticity of the brain during development. In addition to its role in motor function, the cerebellum is also crucial for some types of learning (Hull, 2020; Ernst et al., 2019), as we’ll discuss in more detail in Chapter 13.

FIGURE 5.30 A Woman without a Cerebellum

Damage to extrapyramidal systems impairs movement

Different constellations of symptoms are associated with damage to the various extrapyramidal motor structures. The exact consequences of cerebellar damage depend on the part of the cerebellum that has been damaged, but common motor symptoms include characteristic abnormalities of gait and posture, especially ataxia (loss of coordination) of the legs. Other cerebellar lesions may cause decomposition of movement (in which gestures are broken up into individual segments instead of being executed smoothly) or difficulties with gaze and visual tracking of objects.

Two diseases that target the basal ganglia reveal important aspects of extrapyramidal contributions to motor control. People with Parkinson’s disease show progressive degeneration of dopamine-containing cells in the substantia nigra. Loss of these neurons, which project to the caudate nucleus and putamen, is associated with a cluster of symptoms that are all too familiar: slow movement, tremors of the hands and face while at rest, a rigid bearing, and diminished facial expressions. People who have Parkinson’s show few spontaneous actions and have great difficulty in all motor efforts, no matter how routine. In advanced cases, deep brain stimulation (DBS) in the basal ganglia can relieve symptoms (Antonini et al., 2022; FIGURE 5.31).

FIGURE 5.31 Deep Brain Stimulation to Treat Parkinson’s Disease

Whereas damage to the basal ganglia in Parkinson’s disease reduces movement, other kinds of basal ganglia disorders cause the opposite: excessive movement. The first symptoms of Huntington’s disease are subtle behavioral changes: clumsiness, and twitches in the fingers and face. Subtlety is rapidly lost as the illness progresses; a continuing stream of involuntary jerks engulfs the entire body. Aimless movements of the eyes, jerky leg movements, and writhing of the body make even routine activity a major challenge, exacerbated in later stages of the disease by intellectual deterioration. The neuroanatomical basis of this disorder is widespread destruction of the basal ganglia, including the caudate nucleus and the putamen (rather than just the substantia nigra, whose degeneration underlies the reduced movement of Parkinson’s). Although much remains to be discovered, there is more reason than ever to look forward to the introduction of effective treatments for motor disorders. Scientists are learning more and more about what goes wrong in Parkinson’s and Huntington’s diseases, and their continuing research efforts may pave the way to new therapies.

How’s It Going?
1. What are mirror neurons, and what is their significance?
2. What does the fact that some people born without a cerebellum show no deficits tell you about neuroplasticity?
3. What are the symptoms of Parkinson’s disease, and what brain changes cause it?
4. What are the symptoms of Huntington’s disease, and what brain changes cause it?
FOOD FOR THOUGHT Children of people with Huntington’s disease have a fifty-fifty chance of inheriting the gene causing it. If you had a parent with Huntington’s, would you want to take the test to see if you carried the disease? RECOMMENDED READING Ballantyne, J. C., Fishman, S. M., and Rathmell, J. P. (Eds.). (2018). Bonica’s Management of Pain (5th ed.). Philadelphia, PA: Lippincott. Cole, J. (2016). Losing Touch: A Man without His Body. Oxford, UK: Oxford University Press. Cytowic, R. E. (2018). Synesthesia. Cambridge, MA: MIT Press. Purves, D., Augustine, G. J., Fitzpatrick, D., Hall, W., et al. (Eds.). (2017). Neuroscience (6th ed.). Sunderland, MA: Oxford University Press/Sinauer. (See Unit III: “Movement and Its Central Control,” Chapters 16–21.) Subramanian, S. (2021). How to Feel: The Science and Meaning of Touch. New York, NY: Columbia University Press. Turk, D. C., and Gatchel, R. J. (Eds.). (2018). Psychological Approaches to Pain Management: A Practitioner’s Handbook (3rd ed.). New York, NY: Guilford Press. Walsh, R. A., de Bie, R. M., and Fox, S. H. (2017). Movement Disorders: What Do I Do Now? (2nd ed.). New York, NY: Oxford University Press. Wolfe, J. M., Kluender, K. R., Levi, D. M., Bartoshuk, L. M., et al. (2021). Sensation & Perception (6th ed.). Sunderland, MA: Oxford University Press/Sinauer. VISUAL SUMMARY You should be able to relate each summary item to the adjacent illustration, including structures and processes. The online version of this Visual Summary includes links to figures, animations, and activities that will help you consolidate the material. LIST OF KEY TERMS acetylcholine (ACh), acupuncture, A delta (Aδ) fibers, analgesia, antagonists, anterolateral system, apraxia, ataxia, basal ganglia, central modulation of sensory information, cerebellum, C fibers, cingulate cortex, decomposition of movement, deep brain stimulation (DBS), dermatome, dorsal column system, electromyography (EMG), endorphins, extrapyramidal system, final common pathway, free nerve endings, Golgi tendon organs, Huntington’s disease, intrafusal fiber, labeled lines, Meissner’s corpuscles, Merkel’s discs, mirror neurons, motor neurons, motor plan, movements, muscle spindle, naloxone, neuromuscular junctions, neuropathic pain, nociceptors, nonprimary motor cortex, nonprimary sensory cortex, Pacinian corpuscle, pain, paresis, Parkinson’s disease, phasic receptors, placebo effect, plegia, polymodal neurons, precentral gyrus, premotor cortex, primary motor cortex, primary sensory cortex, primary somatosensory cortex, proprioception, pyramidal system, receptive field, receptor cells, receptor potential, reflexes, Ruffini corpuscles, sensory adaptation, sensory transduction, somatosensory system, stimuli, stretch reflex, substance P, substantia nigra, supplementary motor area (SMA), synergists, synesthesia, thalamus, threshold, tonic receptors, transcutaneous electrical nerve stimulation (TENS), transient receptor potential type M3 (TRPM3) CHAPTER 6 Hearing, Balance, Taste, and Smell Neil V. Watson Simon Fraser University S. Marc Breedlove Michigan State University Hold the Phone It’s like a classic horror movie scene: a scientist using amazing technology to reanimate parts of dead bodies, seeking out Nature’s secrets. But when Georg von Békésy started experimenting with cadavers in the 1920s, he was not trying to create life. He was interested in a more practical question: Why are human ears so much more sensitive than most microphones?
Békésy, an engineer, thought that learning how the human ear works could help him to design better microphones for his employer, the Hungarian phone company. He gathered cadavers from local hospitals and devised a clever dissection that would reveal the inner ear without destroying it. (His work was not always appreciated by his fellow engineers; they didn’t like finding their drill press full of human bone dust in the morning.) Bringing his background in physics to bear, Békésy devised exquisitely precise physical models and biophysical experiments that let him measure extremely brief, minuscule movements in the inner ear. His subsequent discoveries provided us with the key to understanding how we translate a stream of auditory data—sounds—into neural activity that the brain can understand. In the end, Békésy did not come up with a better microphone, but his discoveries have helped restore hearing to thousands of people who once were deaf, as we’ll see in this chapter. Your existence is the direct result of the keen senses possessed by your distant ancestors—senses that enabled them to find food and mates and to avoid predators and other dangers long enough to reproduce. In this chapter we consider several of the amazing sensory systems that we use to monitor important signals from distant sources, especially sounds (by audition) and smells (by olfaction). We’ll discuss related systems for detecting position and movement of the body (the vestibular system, related to the auditory system) and tastes of foods (the gustatory or taste sense, which like olfaction is a chemical sense). We begin with hearing, because the auditory system evolved from special mechanical receptors related to the touch system that we discussed in Chapter 5. 6.1 Hearing: Pressure Waves in the Air Are Perceived as Sound The Road Ahead The first part of the chapter is concerned with the structure and function of the ear, especially the inner ear, which gives us our sense of hearing. After reading this section, you should be able to: 6.1.1 Explain how the external ear and middle ear capture and concentrate sound energy and convey it to the inner ear. 6.1.2 Sketch the anatomy of the middle and inner ears, highlighting the location of sensorineural components. 6.1.3 Explain how vibrations travel through the cochlea and how they are converted into neural activity. 6.1.4 Describe the process by which the organ of Corti encodes the frequencies of sounds. 6.1.5 Summarize the neural projections between the cochlea and brain. 6.1.6 Identify the principal auditory pathways and structures of the brain, and describe the integration of signals from the left and right ears. 6.1.7 Describe the orderly map of frequencies found at each level of the auditory system. The Ears Have It The external ears, or pinnae, of mammals come in a variety of shapes, each adapted to a particular ecological niche. Many mammals can move their ears to direct them toward a particular sound. In such cases, the brain must account for the position of the ears to judge where a particular sound came from. (Fennec fox [top left]; whispering bat [top right]; sea otter [bottom left]; chimpanzee [bottom right].) Hearing is vital for the survival of most species. There are animals that don’t use vision, like blind cave fish, but so far we don’t know of any vertebrate animals that don’t use hearing to detect sound in air and/or water.
Humans can produce and perceive an impressive variety of vocalizations—from barely audible murmurs to soaring flights of song—but we especially rely on speech sounds for our social relations and to transmit knowledge to others. Across the animal kingdom, species produce and perceive sounds in wildly different ways, shaped by their unique evolutionary history. Birds sing and crickets chirp in order to attract mates, while monkeys grunt and screech and burble to signal comfort, danger, and pleasure. Owls and bats exploit the directional property of sound to locate prey and avoid obstacles in the dark, because unlike light, sound can be detected in the darkest night, or even around a corner. How does energy transmitted through air become the speech, music, and other sounds we hear? Your auditory system detects changes in the vibration of air molecules that are caused by sound sources: it senses both the intensity of sounds, measured in decibels (dB) and perceived as loudness, and their frequency, measured in cycles per second, or hertz (Hz), and perceived as pitch. BOX 6.1 describes some of the basic properties of sound that are relevant to our discussion of hearing. The outer ear directs sound into the inner parts of the ear, where the mechanical force of sound is transduced into neural activity: the action potentials that inform the brain. Your ears are incredibly sensitive organs; in fact, one of the main jobs of your powers of attention is to filter out the constant barrage of unimportant little noises that your ears detect (see Chapter 14). BOX 6.1 The Basics of Sound We perceive a repetitive pattern of local increases and decreases in air pressure as sound. Usually this oscillation is caused by a vibrating object, such as a loudspeaker or a person’s larynx during speaking. A single alternation of compression and expansion of air is called one cycle. The figure illustrates the oscillations in pressure produced by a vibrating loudspeaker. Because the sound produced by the loudspeaker here has only one frequency of vibration, it is called a pure tone and can be represented by a sine wave. A pure tone is described physically in terms of two measures: Amplitude Also called intensity, this is usually measured as sound pressure in dynes per square centimeter (dyn/cm²). Our perception of amplitude is termed loudness, expressed as decibels (dB). The decibel scale is logarithmic: 0 dB is set near the threshold of human hearing, a whisper is about 20 dB, and a departing jetliner, at up to 140 dB, can be a trillion times as intense as that whisper. Frequency This is the number of cycles per second, measured in hertz (Hz). So, the A above middle C on a piano has a frequency of 440 Hz. Our perception of frequency is termed pitch. Most sounds are more complicated than a pure tone. For example, a sound made by a musical instrument contains a fundamental frequency and harmonics. The fundamental is the basic frequency, and the harmonics are multiples of the fundamental. For example, if the fundamental is 440 Hz, the harmonics are 880 Hz, 1320 Hz, 1760 Hz, and so on. When different instruments play the same note, the notes differ in the relative intensities of the various harmonics, and there are subtle qualitative differences between instruments in the way they commence, shape, and sustain the sound; these differences are what give each instrument its characteristic voice, or timbre.
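To make the decibel arithmetic concrete, here is a minimal Python sketch (illustrative only: the 20-micropascal reference pressure for dB SPL is a standard acoustics convention, not a value given in this chapter):

```python
import math

P_REF = 20e-6  # pascals; conventional reference pressure for dB SPL

def spl_db(pressure_pa):
    """Convert a sound pressure in pascals to decibels SPL."""
    return 20 * math.log10(pressure_pa / P_REF)

def intensity_ratio(db_a, db_b):
    """How many times more intense sound A is than sound B;
    intensity grows by a factor of 10 for every 10 dB."""
    return 10 ** ((db_a - db_b) / 10)

print(intensity_ratio(140, 20))  # jetliner vs. whisper: 1e12

# Harmonics are integer multiples of the fundamental frequency.
fundamental = 440  # Hz (the A above middle C)
print([fundamental * n for n in range(1, 5)])  # [440, 880, 1320, 1760]
```

Because each 10 dB step is a tenfold change in intensity, the 120 dB gap between a whisper and a jetliner works out to a factor of 10¹².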
The external ear captures, focuses, and filters sound The oddly shaped fleshy objects that most people call ears are properly known as pinnae (singular pinna). Aside from their occasional utility as handles and jewelry hangers, the pinnae funnel sound waves into the second part of the external ear: the ear canal (or auditory canal). The pinna is a distinctly mammalian characteristic, and mammals show a wide array of ear shapes and sizes. Few humans can move their ears—and even then only enough to look silly—but many other mammals deftly shape and swivel their pinnae to help locate the source of a sound. Animals with exceptional auditory localization abilities, such as bats, may have especially mobile ears. The “ridges and valleys” of the pinna modify the character of sound that reaches the middle ear. Some frequencies of sound are enhanced; others are suppressed. For example, the shape of the human ear especially increases the reception of sounds between 2000 and 5000 Hz—a frequency range that is important for speech perception. The shape of the external ear—and, in many species, the direction in which it is being pointed—provides additional cues about the direction and distance of the source of a sound, as we will discuss later in this chapter. The middle ear concentrates sound energies A collection of tiny structures made of membrane, muscle, and bone—essentially a tiny biological microphone—links the ear canal to the neural receptor cells of the inner ear (FIGURE 6.1A). This middle ear (FIGURE 6.1B) consists of the taut tympanic membrane (eardrum) sealing the end of the ear canal plus a chain of tiny bones, called ossicles, that mechanically couple the tympanic membrane to the inner ear at a specialized patch of membrane called the oval window. These ossicles, the smallest bones in the body, are called the malleus (Latin for “hammer”), the incus (“anvil”), and the stapes (“stirrup”). A Touching Moment Helen Keller, who was both blind and deaf, said, “Blindness deprives you of contact with things; deafness deprives you of contact with people”—a poignant reminder of the importance of speech for our social lives. Here, Keller (center, accompanied by her aide and interpreter, Polly Thompson) communicates with U.S. President Dwight Eisenhower by feeling Eisenhower’s face as he speaks and makes facial expressions. Rather than living in sensory and social isolation, Keller honed her intact senses to such a degree that she was able to become a noted teacher and writer. Sound waves in the air strike the tympanic membrane and cause it to vibrate at the same frequency as the sound; as a result, the ossicles start moving too. Because of how they are attached to the eardrum, the ossicles concentrate and amplify the vibrations, focusing the pressures collected from the relatively large tympanic membrane onto the small oval window. This amplification is crucial for converting vibrations in air into movements of fluid in the inner ear, as we’ll see shortly. The middle ear is equipped with the equivalent of a volume control, which helps protect against the damaging forces of extremely loud noises. Two tiny muscles—the tensor tympani and the stapedius (see Figure 6.1B)—attach to the ends of the chain of ossicles. Within 200 milliseconds of the arrival of a loud sound, the brain signals the muscles to contract, which stiffens the chain of ossicles and reduces the effectiveness of the sounds. Interestingly, the middle-ear muscles also activate just before we produce self-made sounds like speech or coughing, which is why we don’t perceive our own sounds as distractingly loud (Schneider and Mooney, 2018).
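The amplification provided by the ossicles is largely a matter of geometry: roughly the same force collected over the large tympanic membrane is delivered to the much smaller oval window, and pressure is force divided by area. The sketch below uses typical values quoted in general physiology references; the specific areas and lever ratio are assumptions, not numbers given in this chapter:

```python
import math

TYMPANIC_AREA_MM2 = 55.0    # assumed effective eardrum area
OVAL_WINDOW_AREA_MM2 = 3.2  # assumed stapes footplate area
LEVER_RATIO = 1.3           # assumed mechanical advantage of the ossicles

# Concentrating the same force onto a smaller area multiplies pressure.
pressure_gain = (TYMPANIC_AREA_MM2 / OVAL_WINDOW_AREA_MM2) * LEVER_RATIO
gain_db = 20 * math.log10(pressure_gain)  # pressure ratios use 20*log10

print(f"pressure gain ~{pressure_gain:.0f}x (~{gain_db:.0f} dB)")
# pressure gain ~22x (~27 dB)
```

A gain of roughly this size is what lets airborne sound drive the much denser fluid of the inner ear instead of simply reflecting off it.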
The cochlea converts vibrational energy into neural activity The part of the inner ear that ultimately converts vibrations from sound into neural activity—the coiled, fluid-filled cochlea (from the Greek kochlos, “snail”)—is a marvel of miniaturization (FIGURES 6.1C and D). In an adult human, the cochlea measures only about 9 millimeters in diameter at its widest point—roughly the size of a pea. Fully unrolled, the human cochlea would be about 35–40 millimeters long. The cochlea is a spiral of three parallel canals: (1) the scala vestibuli (also called the vestibular canal), (2) the scala media (middle canal), and (3) the scala tympani (tympanic canal). The scala media contains the receptor system, called the organ of Corti, that converts vibration (from sound) into neural activity (see Figure 6.1D). It consists of three main structures: (1) the auditory sensory cells, called hair cells (FIGURE 6.1E), which bridge between the basilar membrane and the overlying tectorial membrane; (2) an elaborate framework of supporting cells; and (3) the auditory nerve terminals that transmit neural signals to and from the brain. FIGURE 6.1 External and Internal Structures of the Human Ear When the ossicles transmit vibrations from the tympanic membrane to the oval window, waves or ripples are created in the fluid of the scala vestibuli, which in turn cause the basilar membrane to ripple, like shaking out a rug. A crucial feature of the basilar membrane is that it is tapered—it’s much wider at the apex of the cochlea than at the base. Thanks to this taper, each successive location along the basilar membrane shows its strongest response to a different frequency of sound. High frequencies have their greatest effects near the base, where the basilar membrane is narrow and comparatively stiff; low-frequency sounds produce a larger response near the apex, where the basilar membrane is wider, floppier, and has special properties that accentuate low frequencies (Sasmal and Grosh, 2019). RESEARCHERS AT WORK Georg von Békésy and the Cochlear Wave The discovery of the mechanics of the basilar membrane garnered a Nobel Prize for Georg von Békésy in 1961 (FIGURE 6.2). FIGURE 6.2 Deformation of the Basilar Membrane Encodes Sound Frequencies
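The position-to-frequency mapping that Békésy uncovered is often summarized in the hearing literature with the Greenwood (1990) frequency-position function. The sketch below uses Greenwood’s human constants; the function comes from that broader literature rather than from this chapter, so treat the output as approximate:

```python
def greenwood_hz(position):
    """Approximate characteristic frequency (Hz) on the human basilar
    membrane; position runs from 0.0 (apex) to 1.0 (base)."""
    A, a, k = 165.4, 2.1, 0.88  # Greenwood's (1990) human parameters
    return A * (10 ** (a * position) - k)

for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{pos:.2f} of the way from apex to base: ~{greenwood_hz(pos):,.0f} Hz")
# apex (0.0) -> ~20 Hz: wide, floppy membrane, low frequencies
# base (1.0) -> ~20,700 Hz: narrow, stiff membrane, high frequencies
```

Note that the map is logarithmic: each step along the membrane multiplies frequency rather than adding to it, which matches the orderly tonotopic maps described later in this chapter.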
The hair cells transduce movements of the basilar membrane into electrical signals The rippling of the basilar membrane is converted into neural activity through the actions of the hair cells, arranged along the length of the organ of Corti. Each hair cell features a sloping brush of minuscule hairs called stereocilia (singular stereocilium) on its upper surface. In Figure 6.1D you’ll notice that, although the bases of hair cells are embedded in the basilar membrane, the stereocilia nestle into hollows in the tectorial membrane that lies above. The hair cells—and especially the stereocilia themselves—thus form a mechanical bridge, spanning between the two membranes, that is forced to bend when sounds cause the basilar membrane to ripple. Even a tiny bend of the stereocilia produces a large and rapid depolarization of the hair cells. This depolarization results from the operation of a special type of large and nonselective ion channel found on stereocilia. Like spring-loaded trapdoors, these channels are mechanically popped open as stereocilia bend (Hudspeth, 2014), allowing an inrush of potassium (K⁺) and calcium (Ca²⁺) ions. Just as we saw in neurons (in Chapter 2), this depolarization leads to a rapid influx of Ca²⁺ at the base of the hair cell, which in turn causes synaptic vesicles there to fuse with the presynaptic membrane and release neurotransmitter, stimulating adjacent nerve fibers. The stereocilia channels snap shut again in a fraction of a millisecond as the hair cell sways back. This ability to rapidly switch on and off allows hair cells to accurately track the rapid oscillations of the basilar membrane with exquisite sensitivity. In the human cochlea, the hair cells are organized into a single row of about 3500 inner hair cells (IHCs, called inner because they are closer to the central axis of the coiled cochlea) and about 12,000 outer hair cells (OHCs) in three rows (see Figure 6.1D). Fibers of the vestibulocochlear nerve (cranial nerve VIII) contact the bases of the hair cells (see Figure 6.1E). Some of these fibers do indeed convey sound information to the brain, but the neural connections of the cochlea are a little more complicated than this. In fact, there are four kinds of neural connections with hair cells, each relying on a different neurotransmitter (Goutman et al., 2015), as you can see in FIGURE 6.3. FIGURE 6.3 Auditory Nerve Fibers and Synapses in the Organ of Corti The fibers are distinguished as follows: 1. IHC afferents convey to the brain the action potentials that provide the perception of sounds. IHC afferents make up about 95 percent of the fibers leading to the brain. 2. IHC efferents lead from the brain to the IHCs. They allow the brain to control the responsiveness of IHCs. 3. OHC afferents convey information to the brain about the mechanical state of the basilar membrane, but not the perception of sounds themselves. 4. OHC efferents from the brain enable it to activate a remarkable property of OHCs: the ability to change their length almost instantaneously (He et al., 2014). Through this electromechanical action, the brain continually modifies the stiffness of regions of the basilar membrane, resulting in both sharpened tuning and pronounced amplification (Hudspeth, 2014). Evidence is mounting that a complementary process also modifies the local stiffness of the tectorial membrane (see Figure 6.3A), further improving the tuning and amplification of the organ of Corti (Sellon et al., 2019). Now that the inner ear has transduced the vibrations from sound into trains of action potentials, the auditory signals must leave the cochlea and enter the brain. Auditory signals run from cochlea to cortex On each side of your head, about 30,000–50,000 auditory axons from the cochlea make up the auditory part of the vestibulocochlear nerve (cranial nerve VIII), and most of these afferent fibers carry information from the IHCs (each of which stimulates several nerve fibers) to the brain. If we record from these IHC afferents, we find that each one has a maximum sensitivity to sound of a particular frequency but will also respond to neighboring frequencies if the sound is loud enough. For example, the auditory neuron whose responses are shown in red in FIGURE 6.4 has its best frequency at 1200 Hz (1.2 kHz)—that is, it is sensitive to even a very weak tone at 1200 Hz—but for sounds that are 20 dB louder, the cell will respond to frequencies from 500 to 1800 Hz.
We call this the cell’s tuning curve. If the brain received a signal from only one such fiber, it would not be able to tell whether the stimulus was a weak tone of 1200 Hz or a stronger tone of 500 or 1800 Hz, or any frequency in between. Instead, the brain analyzes the activity from thousands of such units simultaneously to calculate the intensity and frequency of each sound. FIGURE 6.4 Tuning Curves of Auditory Nerve Cells The inputs from the auditory nerves are distributed to both sides of the brain via the ascending network shown in FIGURE 6.5. First, the auditory nerve fibers terminate in the (sensibly named) cochlear nuclei, where some initial processing occurs. Output from the cochlear nuclei primarily projects to the superior olivary nuclei, each of which receives inputs from both right and left cochlear nuclei. This bilateral input makes the superior olivary nucleus the first brain site at which binaural (two-ear) processing occurs. As you might expect, this mechanism plays a key role in localizing sounds by comparing the two ears, as we’ll discuss shortly. FIGURE 6.5 Auditory Pathways of the Human Brain The superior olivary nuclei pass information derived from both ears to the inferior colliculi, which are the primary auditory centers of the midbrain. Outputs of the inferior colliculi go to the medial geniculate nuclei of the thalamus. Outputs from the medial geniculate nuclei extend to several auditory cortical areas. The neurons within all levels of the auditory system, from cochlea to auditory cortex, display tonotopic organization; that is, they are arrayed to form an orderly map of sound frequencies (topos is Greek for “place”) from low frequency (sounds that we perceive as lower pitched, or “bass”) to high frequency (perceived as higher pitched, or “treble”) (Saenz and Langers, 2014). Furthermore, at the higher levels of the system, auditory neurons are not only excited by specific frequencies but also inhibited by neighboring frequencies, resulting in much sharper tuning of the frequency responses of these cells. This precision helps us discriminate tiny differences in the frequencies of sounds. Brain-imaging studies in humans confirm that many sounds (tones, noises, and so on) activate the primary auditory cortex (A1), which is located on the upper surface of the temporal lobes. Speech sounds produce similar activation but additionally activate other, more specialized auditory areas (FIGURE 6.6). Interestingly, when hearing people use their visual systems to try to lip-read—that is, try to figure out what someone is saying solely by watching their lips—a similar pattern of activation of auditory cortex is observed (Bourguignon et al., 2020). This finding suggests that the auditory cortex integrates other, nonauditory, information with sounds. FIGURE 6.6 Responses of the Human Auditory Cortex to Random Sounds versus Speech How’s It Going? 1. Identify the major components of the external ear. What does the external ear do? 2. Identify the three ossicles, and explain their function. To what structures do the ossicles connect, and how is their action moderated? 3. Provide a brief description of the organ of Corti, naming the components that are most important for the perception of sound. 4. Explain how the movement of hair cells transduces sound waves into action potentials. Compare and contrast the functions of inner hair cells and outer hair cells. 5.
Sketch the major anatomical components of the auditory projections in the brain. Where does binaural processing first occur? What is tonotopic organization? What kind of processing does auditory cortex perform? FOOD FOR THOUGHT Why do you suppose that the tuning curves of auditory neurons are relatively broad? Wouldn’t it make more sense for each auditory neuron to focus on a single specific frequency? 6.2 Specialized Neural Systems Extract Information from Auditory Signals The Road Ahead Higher levels of the auditory system process different features of the sounds captured by the ears. After reading this section, you should be able to: 6.2.1 Explain the relationship between frequency and pitch, and discuss the ranges of frequencies perceived by humans and other species. 6.2.2 Describe the two major ways in which frequency information is encoded by the cochlea. 6.2.3 Explain the principal features of sound that the nervous system uses for sound localization. 6.2.4 Discuss the functions of auditory cortex, from an ecological perspective. 6.2.5 Evaluate the importance of experience in the development and tuning of the auditory system, throughout the lifespan. 6.2.6 Describe the relationship between musical experience and the development of auditory skills in music and other domains. At least when we’re young, most of us can hear sounds ranging from 20 Hz to about 20,000 Hz, and within this range we can distinguish between sounds that differ by just a few hertz. Our ability to discern many frequencies simultaneously, and accurately identify where in the world they are coming from, helps us to define the spaces and sound emitters around us—acoustical objects as varied as insects and tubas—and identify the ones that are important for our daily lives. The pitch of sounds is encoded in two complementary ways Differences in frequency are important for our sense of pitch, but pitch and frequency are not synonymous. Frequency describes a physical property of sounds (see Box 6.1), but pitch relates solely to our subjective perception of those sounds. This is an important distinction because frequency is not the sole determinant of perceived pitch; at some frequencies, higher-intensity sounds may seem higher pitched, and changes in pitch do not precisely parallel changes in frequency. How do we distinguish pitches? Two signals from the cochlea appear to inform the brain about the pitch of sounds: 1. According to place coding theory, the pitch of a sound is determined by the location of activated hair cells along the length of the basilar membrane, as we discussed in this chapter’s Researchers at Work feature. So, activation of receptors near the base of the cochlea (which is narrow and stiff and responds to high frequencies) signals treble, and activation of receptors nearer the apex (which is wide and floppy and responds to low frequencies) signals bass. This is another example of a labeled line system, which we discussed in the context of the sense of touch in Chapter 5—here, each neuron fires in response to a favorite frequency. 2. A complementary account called temporal coding theory proposes that the frequency of a sound is encoded in the rate of firing of auditory neurons. According to this model, the frequency of action potentials produced by the neuron is directly related to the number of cycles per second (i.e., hertz) of the sound. For example, a 500 Hz sound might cause some auditory neurons to fire 500 action potentials per second. Encoding sound frequency within volleys of action potentials averaged across a number of neurons with similar tunings provides the brain with a reliable additional source of pitch information. Experimental evidence indicates that we rely on both of these processes to discriminate the pitch of sounds. Temporal coding is most evident at lower frequencies, up to about 4000 Hz: auditory neurons can fire a maximum of only about 1000 action potentials per second, but to some extent they can encode sound frequencies that are multiples of the action potential frequency. Beyond about 4000 Hz, however, this encoding becomes impossible, and pitch discrimination relies on place coding of pitch along the basilar membrane.
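A toy simulation can show why temporal coding tops out where it does and how a volley across several neurons extends it. Everything here is a cartoon with assumed round numbers, not data from this chapter:

```python
TONE_HZ = 4000       # stimulus frequency
MAX_RATE_HZ = 1000   # assumed ceiling on a single neuron's firing rate

# No single neuron can fire on every cycle of a 4000 Hz tone, but a
# small group, each phase-locked to every 4th cycle, can jointly
# mark every cycle (the "volley" idea).
n_neurons = TONE_HZ // MAX_RATE_HZ  # 4 neurons needed

cycles = range(12)                  # first 12 cycles of the tone
for neuron in range(n_neurons):
    spikes = [c for c in cycles if c % n_neurons == neuron]
    print(f"neuron {neuron} fires on cycles {spikes}")

# Pooled across the group, every cycle carries a spike, so the
# population volley follows the tone's frequency even though no
# single neuron does.
```

Past roughly 4000 Hz even this pooling fails to keep up reliably, which is where place coding takes over.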
Mammalian species employ a huge range of frequencies in their vocalizations, from infrasound (less than 20 Hz) in elephants and whales to ultrasound (greater than 20,000 Hz) in bats and porpoises and many other species (the ghost-faced bat emits vocalizations at an incredible 160,000 Hz). These sounds have been shaped by evolution to serve special purposes. For example, many species of bats analyze the reflected echoes of their ultrasonic vocalizations to navigate and hunt in the dark. At the other end of the spectrum, homing pigeons seem to use infrasound cues to establish a navigational map, and they will become disoriented if exposed to a jet’s sonic boom or if atmospheric conditions prevent them from perceiving natural infrasound sources (Hagstrum, 2019). Elephants emit ultra-low-frequency alarm calls that are so powerful that they travel partly through the ground and are detected seismically by other elephants (Herbst et al., 2012) and yet are so nuanced that the elephants can distinguish human-related threats from bee-related threats (yes, elephants are scared of bees; Soltis et al., 2014) and can use their “rumbles” to identify potential mates (Stoeger and Baotic, 2017). Brainstem systems compare the ears to localize sounds Being able to quickly identify where a sound is coming from—whether it is the crack of a twig under a predator’s foot, or the sweet tones of a would-be mate—is a matter of great evolutionary significance. So it’s no surprise that we are remarkably good at locating a sound source (our accuracy is about ±1 degree horizontally around the head, and many other animals do even better). The auditory system accomplishes this feat by analyzing two kinds of binaural cues that signal the location of a sound source: 1. Interaural intensity differences (IIDs) result from comparison of the intensity of the sound—the physical property that we perceive as loudness—at the left and right ears (interaural means “between the two ears”). Depending on the species—and the placement and characteristics of their pinnae—intensity differences occur because one ear is pointed more directly toward the sound source or because the head casts a sound shadow (FIGURE 6.7A), preventing sounds originating on one side (called off-axis sounds) from reaching both ears with equal loudness. The head shadow (or sound shadow) effect is most pronounced for higher-frequency sounds (FIGURE 6.7B). 2. Interaural temporal differences (ITDs) are differences between the two ears in the time of arrival of sounds. They arise because one ear is always a little closer to an off-axis sound than the other ear is. Two kinds of temporal (time) differences are present in a sound: onset disparity, which is the difference between the two ears in hearing the beginning of the sound; and ongoing phase disparity, which is the continuing mismatch between the two ears in the time of arrival of all the peaks and troughs that make up the sound wave, as illustrated in FIGURE 6.7C. FIGURE 6.7 Cues for Binaural Hearing
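The size of an interaural time difference follows from simple geometry. A common back-of-the-envelope model treats the head as a sphere (the Woodworth approximation from the sound-localization literature); the head radius and formula below are assumptions borrowed from that literature, not values given in this chapter:

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def itd_seconds(azimuth_deg):
    """Interaural time difference for a distant source at the given
    azimuth (0 = straight ahead, 90 = directly off one ear)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 15, 45, 90):
    print(f"{az:>2} deg: ITD ~{itd_seconds(az) * 1e6:.0f} microseconds")
# 0 deg -> 0; 90 deg -> ~660 microseconds
```

Even the largest possible ITD is well under a millisecond, so the binaural circuits of the brainstem must resolve differences of mere tens of microseconds.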
Both types of cues are used to figure out where a sound is coming from; researchers call this the duplex theory of sound localization. At low frequencies, no matter where sounds occur horizontally around the head, there are virtually no intensity differences between the ears (see Figure 6.7B). So for these frequencies, differences in times of arrival are the principal cues used for sound localization (and at very low frequencies, neither cue is much help; this is why you can place the subwoofer of an audio system anywhere you want within a room). At higher frequencies, however, the sound shadow cast by the head causes significant intensity differences between the ears. Of course, we can’t perceive exactly which types of processing we’re relying on for any given sound; in general, we are aware of the results of neural processing but not the processing itself. The structure of the external ear provides yet another localization cue. As we mentioned earlier, the hills and valleys of the external ear selectively reinforce some frequencies in a complex sound and diminish others. This process is known as spectral filtering, and the frequencies that are altered depend on the angle at which the sound strikes the external ear (Zonooz et al., 2019). That angle varies, of course, depending on the vertical localization (or elevation) of a sound source. The relationship between spectral cues and elevation is learned and calibrated during development (van der Heijden et al., 2019). The various binaural and spectral cues used for sound localization converge and are integrated in the inferior colliculus (Slee and Young, 2014). The auditory cortex processes complex sound In some sensory areas of the brain, lesions cause the loss of basic perceptions. For example, lesions of visual cortex result in blind spots, as we will discuss in Chapter 7. But the auditory cortex is different: researchers have long known that simple pure tones can be heard even after the entire auditory cortex has been surgically removed (Neff and Casseday, 1977). So if the auditory cortex is not involved in basic auditory perception, then what does it do? The auditory cortex seems to be specialized for the detection of more complex, “biologically relevant” sounds, of the sort we mentioned earlier—vocalizations of animals, footsteps, snaps, crackles, and pops—containing many frequencies and complex patterns (Theunissen and Elie, 2014). In other words, the auditory cortex evolved to process the complex soundscape of everyday life, not simple tones. The unique capabilities of the auditory cortex result from a sensitivity that is fine-tuned by experience as we grow (Chang and Kanold, 2021). Human infants have diverse hearing capabilities at birth, but their hearing for complex speech sounds in particular becomes more precise and rapid through exposure to the speech of their families and other people. Newborns can distinguish all the different sounds that are made in any human language.
But as they develop, they get better and better at distinguishing sounds in the language(s) they hear around them, and worse at distinguishing speech sounds unique to other languages. Similarly, early experience with binaural hearing, compared with equivalent monaural (one-eared) hearing, is important for developing the ability to localize sound sources, but if it occurs early enough, the auditory system can learn to use other cues to compensate for the loss of hearing in one ear (Kumpik and King, 2019). Studies with both humans and lab animals confirm that throughout life, experience with tasks that employ complex auditory stimuli—like discriminating between multiple pitches or, in the case of humans, modified speech samples—can cause a rapid retuning of auditory neurons (FIGURE 6.8) (Holdgraf et al., 2016). Sounds that are biologically urgent, such as the distress cries of an infant, reportedly cause this auditory retuning and learning to occur especially quickly (Schiavo et al., 2020). Later in life, aging takes a steady toll on our hearing. With the passage of time, the responsiveness of auditory cortex neurons gradually declines, and it becomes harder to distinguish between sounds that occur simultaneously (Overton and Recanzone, 2016; Recanzone, 2018). This is one reason why grandparents can find it so difficult to follow a conversation in a noisy restaurant. FIGURE 6.8 Long-Term Retention of a Trained Shift in the Tuning of an Auditory Receptive Field Music also shapes the responses of auditory cortex. It might not surprise you to learn that the music-relevant areas of the auditory cortex of trained musicians are structurally different from the same regions in nonmusicians, and also more responsive. After all, when two people differ in any skill, their brains must be different in some way, and maybe people born with brains that are more responsive to musical sounds are also more likely to train in music (Wesseldijk et al., 2021). The surprising part is that the extent to which a musician’s brain is extra sensitive to music is correlated with the age at which they began their serious training in music: the earlier the training began, and the more intensive it was, the larger the difference in auditory cortex in adulthood (Herholz and Zatorre, 2012; Habibi et al., 2020). Kids who receive intensive musical education also show enhanced speech perception later in life (Weiss and Bidelman, 2015; Intartaglia et al., 2017). Findings like these show that early musical training alters the functioning of auditory cortex in an enduring manner. By adulthood, the portion of primary auditory cortex where music is first processed, called Heschl’s gyrus, is much larger and more responsive in professional musicians than in nonmusicians, and more than twice as strongly activated by music (P. Schneider et al., 2002). Even in older adults, piano training increases cortical thickness in multiple regions of auditory cortex, including Heschl’s gyrus (Worschech et al., 2022). The cortical processing of music is also believed to be influenced by the brain’s mesolimbic reward system (see Chapter 3), attaching a reward value to music that is new and pleasurable to us (Salimpoor et al., 2015; Gold et al., 2019). So, to what extent is music perception inborn? Some people show a lifelong inability to discern tunes or sing, called amusia.
Amusia is associated with subtly abnormal connectivity between primary auditory cortex and regions of the right frontal lobe known to participate in pitch discrimination (FIGURE 6.9) (Loui et al., 2009; Chen et al., 2015). The result is an inability to consciously access pitch information, even though cortical pitch-processing systems are intact (Zendel et al., 2015). Interestingly, studies of people with amusia indicate that when listening to music, we process pitch and rhythm quite separately, and pitch perception seems to be heritable, suggesting a genetic component (Peretz, 2016). If you’re worried about your own ability to carry a tune, the National Institutes of Health (NIH) provides an online test of pitch perception. FIGURE 6.9 Brain Connections in People with Amusia FOOD FOR THOUGHT If you were completely deaf in one ear, would you still be able to localize sound sources? How? 6.3 Hearing Loss Is a Widespread Problem The Road Ahead Next we consider the main causes of auditory dysfunction. After reading this section, you should be able to: 6.3.1 Define and distinguish between hearing loss and deafness. 6.3.2 Describe and contrast the three major categories of hearing loss. 6.3.3 Identify potentially harmful noise intensities, and discuss the ways in which noise damages the auditory system. 6.3.4 Summarize and evaluate methods for treating each form of hearing loss. Disorders of hearing include hearing loss (defined as a moderate to severe decrease in sensitivity to sound) and deafness (defined as hearing loss so profound that speech cannot be perceived even with the use of hearing aids). Bilateral hearing loss affects about 40 million people in the USA alone (Goman and Lin, 2016), and it is estimated that by 2050 about 1 in 4 people worldwide will have hearing problems, many of them preventable (World Health Organization, 2021). By now, you may have anticipated that there are three main kinds of problems that can prevent sound waves in the air from being transformed into conscious auditory perceptions: problems with sound waves reaching the cochlea, trouble converting those sound waves into action potentials, or dysfunction of the brain regions that process sound information (FIGURE 6.10): 1. Before anything even happens in the nervous system, the ear may fail to convert the sound vibrations in air into waves of fluid within the cochlea. This form of hearing loss, called conduction deafness (FIGURE 6.10A), often comes about when the ossicles of the middle ear become fused together and vibrations of the eardrum can no longer be conveyed to the oval window of the cochlea. 2. Even if vibrations are successfully conducted to the cochlea, the sensory apparatus of the cochlea—the organ of Corti, and the hair cells it contains—may fail to convert the ripples created in the basilar membrane into the volleys of action potentials that ordinarily inform the brain about sounds. This form of hearing loss, termed sensorineural deafness (FIGURE 6.10B), is most often the result of permanent damage or destruction of hair cells by any of a variety of causes (FIGURE 6.11). Some people are born with genetic abnormalities that interfere with the function of hair cells; researchers hope that gene therapies will someday help reverse genetic hearing loss (Akil and Lustig, 2019; Shubina-Oleinik et al., 2021).
Many more people acquire sensorineural deafness during their lives as a result of being exposed to extremely loud sounds—overamplified music, nearby gunshots, and industrial noise are important examples—or because of medical problems such as infections and adverse drug effects (certain antibiotics, such as streptomycin, are particularly ototoxic). If you don’t think it can happen to you, think again. Anyone listening to something for more than 5 hours per week at 89 dB or louder is already exceeding workplace limits for hearing safety (SCENIHR, 2008), yet many personal music players and music at concerts and clubs exceed 100 dB. Fortunately, earplugs are available that attenuate all frequencies equally, making concerts a little quieter without muffling the music. Various sound sources are compared in FIGURE 6.12; if you are concerned about your own exposure, excellent sound level meter apps for smartphones are available at little or no cost (including one from the National Institute for Occupational Safety and Health [NIOSH]). Long-term exposure to loud sounds can cause lasting hearing problems ranging from a persistent ringing in the ears, called tinnitus (Zenner et al., 2017), to a permanent profound loss of hearing for the frequencies being listened to at such high volumes. 3. For the action potentials sent from the cochlea to be of any use, the auditory areas of the brain must process and interpret them in meaningful ways. Central deafness (FIGURE 6.10C) occurs when auditory brain areas are damaged by, for example, strokes, tumors, or traumatic injuries. As you might expect from our earlier discussion of auditory processing in the brain, this type of deafness almost never involves a simple loss of auditory sensitivity. Afflicted individuals can often hear a normal range of pure tones but are impaired in the perception of complex, behaviorally relevant sounds. An example in humans is word deafness: selective trouble with speech sounds despite normal speech production and normal hearing for nonverbal sounds. Another example of central deafness is cortical deafness, a rare syndrome involving bilateral lesions of auditory cortex, causing a more complete impairment marked by difficulty recognizing almost all complex sounds, whether verbal or nonverbal. Although there are few treatments available for central deafness, we can use electronic prostheses to restore the auditory stimulation that is missing in conduction or sensorineural deafness. We discuss these approaches in Signs & Symptoms, next. FIGURE 6.10 Types of Hearing Loss FIGURE 6.11 The Destructive Effects of Loud Noise FIGURE 6.12 How Loud Is Too Loud? SIGNS & SYMPTOMS Restoring Auditory Stimulation in Deafness People with conduction deafness use hearing aids that employ electronic amplification to deliver louder sounds to the impaired—but still functional—auditory system. Surgery can sometimes free up fused ossicles, or they can be replaced with Teflon prosthetics, restoring the transmission of vibrations from the eardrum to the cochlea (Young and Ng, 2022). But sensorineural deafness presents a much thornier problem because neural elements have been destroyed (or were absent from birth). Can new hair cells be grown? Although fishes and amphibians produce new hair cells throughout life, mammals have traditionally been viewed as incapable of regenerating hair cells. This conclusion may have been too hasty, however (Géléoc and Holt, 2014).
Using several different strategies, researchers have succeeded in inducing the birth of new hair cells in cochlear tissues of lab animals (Li et al., 2015), so there is reason to hope that an effective restorative therapy for deafness may be available someday. For now, treatments for sensorineural deafness focus on the use of prostheses. Implantable devices called cochlear implants can detect sounds and then directly stimulate the auditory nerve fibers of the cochlea, bypassing the ossicles and hair cells altogether and offering partial restoration of hearing even in cases of complete sensorineural deafness (FIGURE 6.13). You may have had doubts about the value of Békésy’s work with cadavers that we described at the start of this chapter. If so, consider this: the cochlear implants that have brought hearing to thousands of deaf people work by reproducing the phenomena Békésy discovered. In other words, the device sends information about low frequencies to electrodes stimulating nerves at the apex of the cochlea and sends information about high frequencies to electrodes stimulating nerves at the base. As you might predict from our discussion of the importance of experience in shaping auditory responsiveness, the earlier in life these devices are implanted, the better the person will be able to understand complex sounds, especially speech (Geers et al., 2017). So in a sense, the success of these implants is due to the extreme plasticity of the young brain. FIGURE 6.13 Cochlear Implants Provide Hearing in Some Deaf People How’s It Going? 1. Compare and contrast the two important signals about pitch that the brain receives from the cochlea: place coding and temporal coding. How do they work together to give us our sense of pitch? 2. Discuss the sensory capabilities of different species as adaptations shaped by natural selection. 3. Provide an account of sound localization, identifying the several sources of information that we use to determine the source of a sound. 4. Discuss the types of processing that are performed by primary auditory cortex. Is experience with sound important for development of cortical auditory systems? 5. Name and describe the three major forms of deafness. FOOD FOR THOUGHT How could technology of the future help people overcome central deafness? 6.4 Balance: The Inner Ear Senses the Position and Movement of the Head The Road Ahead In the next section we look at the inner ear system that gives us our sense of balance. After reading this section, you should be able to: 6.4.1 Describe the anatomical features of the vestibular system. 6.4.2 Explain how accelerations and changes in the position of the head are transduced into sequences of action potentials. 6.4.3 Describe the neural projections from the vestibular system to the brainstem, and summarize their functional importance. 6.4.4 Discuss some of the consequences of vestibular dysfunction or abnormal vestibular stimulation. Without our sense of balance, it would be a challenge to simply stand on two feet. When you use an elevator, you clearly sense that your body is rising or falling, despite the sameness of your surroundings. When you turn your head, take a tight curve in your car, or bounce through the seas in a boat, your continual awareness of motion allows you to plan movements and anticipate changes in perception due to movement of your head. And of course, too much of this sort of stimulation can make you lose your lunch.
Like hearing, our sense of balance is the product of the inner ear, relying on several small structures that adjoin the cochlea and are known collectively as the vestibular system (from the Latin vestibulum, “entrance hall,” reflecting the fact that the system lies in hollow spaces in the temporal bone). In fact, it is generally accepted that the auditory organ evolved from the vestibular system, although the ossicles probably evolved from parts of the jaw. The most obvious components of the vestibular system are the three fluid-filled semicircular canals, plus two bulbs called the saccule and the utricle that are located near the ends of the semicircular canals (FIGURE 6.14A). Notice that the three canals are oriented in the three different planes in which the head can rotate (FIGURE 6.14B)—nodding up and down (technically known as pitch), shaking from side to side (yaw), and tilting left or right (roll). FIGURE 6.14 Structures of the Vestibular System The receptors of the vestibular system are hair cells—just like those in the cochlea—whose bending ultimately produces action potentials. The cilia of these hair cells are embedded in a gelatinous mass inside an enlarged chamber called the ampulla (plural ampullae) that lies at the base of each semicircular canal (see Figure 6.14B). Movement of the head in one axis sets up a flow of the fluid in the semicircular canal that lies in the same plane, bending the stereocilia in that particular ampulla and signaling the brain that the head has moved. Working together, the three semicircular canals accurately track the movement of the head. The utricle and saccule each contain an otolithic membrane (a gelatinous sheet studded with tiny crystals; otolith literally means “ear stone”) that, thanks to its mass, lags slightly when the head moves. This bends the stereocilia of nearby hair cells, stimulating them to track straight-line acceleration and deceleration—the final signals that the brain needs to calculate the position and movement of the body in three-dimensional space. Axons leading from these hair cells to the brain make up the vestibular part of the vestibulocochlear nerve (cranial nerve VIII). Vestibular information is crucial for planning body movements, maintaining balance against gravity, and smoothly directing sensory organs like the eyes and ears toward specific locations, even when our bodies are themselves in motion. So, it’s no surprise that the nerve pathways from the vestibular system have strong connections to brain regions responsible for the planning and control of movement. On entering the brainstem, many of the vestibular fibers terminate in the vestibular nuclei, while some fibers project directly to the cerebellum to aid in motor programming there. Outputs from the vestibular nuclei project in a complex manner to motor areas throughout the brain, including motor nuclei of the eye muscles, the thalamus, and the cerebral cortex.
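To see why three canals suffice to report any head rotation, it helps to treat each canal as sensing rotation in one plane. The sketch below idealizes the canals as mutually perpendicular sensors; real canals are only roughly orthogonal, so this is a geometric simplification, not anatomy from this chapter:

```python
# Idealized axes: x = roll, y = pitch, z = yaw.
CANAL_AXES = {
    "roll-plane canal":  (1.0, 0.0, 0.0),
    "pitch-plane canal": (0.0, 1.0, 0.0),
    "yaw-plane canal":   (0.0, 0.0, 1.0),
}

def canal_responses(omega):
    """Project a head angular-velocity vector (deg/s) onto each
    canal's axis; each canal reports rotation in its own plane."""
    return {name: sum(w * a for w, a in zip(omega, axis))
            for name, axis in CANAL_AXES.items()}

# An oblique movement (nodding while tilting) drives two canals at
# once; the brain reconstructs one oblique rotation from the trio.
print(canal_responses((10.0, 30.0, 0.0)))
# {'roll-plane canal': 10.0, 'pitch-plane canal': 30.0, 'yaw-plane canal': 0.0}
```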
Some forms of vestibular excitation produce motion sickness There is one aspect of vestibular activation that many of us would gladly do without. Too much strong vestibular stimulation—think of boats and roller coasters—can produce the misery of motion sickness. Motion sickness is caused by movements of the body that we cannot control. For example, passengers in a car are more likely to experience motion sickness than is the driver; it remains to be seen how this will affect the occupants and design of self-driving cars (Buchheit et al., 2022). Why do we experience motion sickness? According to the sensory conflict theory, we feel bad when we receive contradictory sensory messages, especially a discrepancy between vestibular and visual information (Keshavarz and Golding, 2022). According to one early hypothesis, discrepancies in sensory information might ordinarily signal the neurological impact of toxins, triggering dizziness and vomiting to get rid of accidentally ingested poisons. However, there is little objective evidence to support this “poison hypothesis” of motion sickness, so its evolutionary origins remain a mystery (Oman, 2012). The observation that virtual reality devices frequently induce motion sickness, and that people who tend to sway when standing are more susceptible to this sickness, has been interpreted as evidence that motion sickness actually results from postural instability rather than sensory conflict (Munafo et al., 2017). When an airplane bounces around in turbulence, the vestibular system signals that various changes in direction and accelerations are occurring, but as far as the visual system is concerned, nothing is happening; the plane’s interior is a constant. For passengers, the worst effect of this may be some motion sickness, but pilots are trained to be wary of a second effect of this mismatch. In conditions of very low visibility, an acceleration of the plane may be misinterpreted as a climb (an upward tilt of the plane) (MacNeilage et al., 2007; Sánchez-Tena et al., 2018), a compelling phenomenon called the somatogravic illusion (or false-climb illusion). Both acceleration and climb will press you back into your seat, so pilots are trained not to reflexively dive the plane (which could result in disaster), but instead to rely on their instruments—rather than their vestibular systems—to determine whether the plane is climbing or accelerating. How’s It Going? 1. Use a diagram to explain how the general layout of the vestibular system allows it to track movement in three axes. Where are the receptors for head movement located? Do they resemble other types of sensory receptors? 2. Where are the vestibular nuclei located? What nerve provides inputs to these nuclei? 3. How does vestibular sensitivity affect your everyday activities? 4. Discuss the role of the vestibular system in motion sickness. FOOD FOR THOUGHT Propose an amusement park ride that capitalizes on the somatogravic illusion. How would your ride affect the vestibular system, and why would people pay for that? 6.5 Taste: Chemicals in Foods Are Perceived as Tastes The Road Ahead We now turn our attention to the specialized sensors that give us our sense of taste. After reading this section, you should be able to: 6.5.1 Describe the structure, function, and distribution of the papillae on the tongue. 6.5.2 Summarize the structure of taste buds, and discuss their relationship to papillae. 6.5.3 Describe the basic tastes and the distribution of taste sensitivity across the surface of the tongue. 6.5.4 Describe the specialized cellular mechanisms through which taste cells transduce each of the major tastes. 6.5.5 Trace the neural projection of gustatory information to the brainstem and higher-order systems. Delicious foods, poisons, dangerous adversaries, and fertile mates—these are just a few of the sources of chemical signals in the environment. Being able to detect these signals is vital for survival and reproduction throughout the animal kingdom.
Most people derive great pleasure from eating delicious food, and because we recognize many substances by their distinct flavors, we tend to think that we can discriminate many tastes. In reality, though, humans detect only a small number of basic tastes; the huge variety of sensations aroused by different foods are actually flavors rather than simple tastes, and they rely on the sense of smell as well as taste. (To appreciate the importance of smell to flavor, block your nose while eating first a little bit of raw potato and then some apple: without the sense of smell, it’s difficult to tell them apart!) Scientists are in broad agreement that we possess at least five basic tastes: salty, sour, sweet, bitter, and umami. (Umami, Japanese for “delicious taste,” is the term for the savory, meaty taste that is characteristic of gravy or soy sauce.) These tastes are determined genetically, as we will see shortly, but there is considerable genetic variation across the globe in both the strength and pleasurable qualities of the basic tastes (Pirastu et al., 2016). Further, the hunt continues for additional basic tastes. For example, studies suggest that humans and other animals may possess a primary fat taste (Besnard et al., 2016; Hichami et al., 2022); another candidate taste, called kokumi, is described as the full-bodied, thick, mouth-filling quality of some foods (S. C. Brennan et al., 2014). But no matter how many basic tastes we are eventually shown to possess, it is clear that evolution shaped them to help us find nutritious food and avoid toxins. Tastes excite specialized receptor cells on the tongue Many people think that the many little bumps on their tongues are taste buds, but they aren’t. They are actually papillae (singular papilla) (FIGURE 6.15), tiny lumps of tissue that increase the surface area of the tongue. FIGURE 6.15 A Cross Section of the Tongue There are three kinds of papillae—circumvallate, foliate, and fungiform papillae—occurring in different locations on the tongue (FIGURE 6.16). Taste buds, each consisting of a cluster of 50–150 taste receptor cells (FIGURE 6.16B), are found buried within the walls of the papillae (a single papilla may house several such taste buds; see Figure 6.15). Fine fibers, called microvilli, extend from the taste receptor cells into a tiny pore, where they come into contact with substances that can be tasted, called tastants. Each taste cell is sensitive to just one of the five basic tastes, and with a lifespan of only 10–14 days, taste cells are constantly being replaced. But as our various personal experiences with hot drinks, frozen flagpoles, or spicy foods tell us, taste is not the only sensory capability of the tongue. It is also exquisitely sensitive to pain, touch, and temperature. FIGURE 6.16 Taste Buds and Taste Receptor Cells You may have seen maps of the tongue indicating that each taste is perceived mainly in one region (sweet at the tip of the tongue, bitter at the back, and so on), but these maps are based on an enduring myth. All five basic tastes can be perceived anywhere on the tongue where there are taste receptors (Chandrashekar et al., 2006). Those areas do not differ greatly in the strength of taste sensations that they mediate (FIGURE 6.16D). The five basic tastes are signaled by specific sensors on taste cells The tastes salty and sour are evoked when taste cells are stimulated by simple ions acting via ion channels in the membranes of the taste cells.
Sweet, bitter, and umami tastes are perceived by specialized receptor molecules—metabotropic G protein–coupled receptors (GPCRs), as we discussed in Chapter 3 (see Figure 3.2)—that use second messengers to change the activity of the taste cell (Behrens and Meyerhof, 2019; Liszt et al., 2022).

Salty
Taste cells apparently sense salt (NaCl) in several different ways, which are not yet completely understood. As you might guess, one kind of salt sensor simply relies on sodium (Na⁺) channels, just like the ones we have seen in previous chapters. In this case, sodium ions (Na⁺) from salty food enter taste cells via sodium channels in the cell membrane, causing a depolarization of the cell and release of neurotransmitter. We know that this is a crucial mechanism for perceiving saltiness, because blocking the sodium channels with a drug reduces salt sensitivity—though it does not eliminate it (Chandrashekar et al., 2010). This primary salt-sensing system also seems to be responsible for the appetizing qualities of moderate concentrations of salt in food. A second salt receptor probably responds to multiple cations—Na⁺, K⁺, and Ca²⁺ (Rhyu et al., 2021)—perhaps accounting for taste differences between culinary salts from varying sources, such as sea salts. And although most previous research on the salt taste has focused on Na⁺ perception, it turns out that taste cells also detect the other ion that is liberated when salt dissolves: chloride (Cl⁻). This parallel salt-sensing system seems to mediate the aversive properties of high concentrations of salt. Because drugs that block chloride-selective ion channels have little effect on the Cl⁻ sensitivity of the tongue, researchers believe that Cl⁻ transduction by taste cells involves a different, as-yet-unknown mechanism (Roebber et al., 2019). Depolarization of the salt-sensitive taste cells ultimately causes them to release neurotransmitters that stimulate afferent neurons, which relay saltiness information to the brain.

Sour
Acids in food taste sour—the more acidic the food, the more sour it tastes—and after a long search, researchers have homed in on the primary mechanisms that detect sour tastants. The property that all acids share is that they release protons (H⁺; also called hydrogen ions). It seems that all sour-sensitive taste cells share an inward flow of protons that depolarizes the cell (Bushman et al., 2015). Researchers eventually discovered that these taste cells express a new kind of ion channel, called OTOP1, that is exquisitely selective for protons: OTOP1 channels are 100,000 times more permeable to protons than to Na⁺ ions, and they completely block most other ions (Tu et al., 2018; Teng et al., 2019). An inrush of protons from sour foods via OTOP1 sour receptors thus directly depolarizes sour taste cells, which signal sourness to the brain's gustatory systems. This OTOP1-dependent activity in the gustatory pathways accurately encodes the acidity (sourness) of food, and it is absent in mice with the Otop1 gene knocked out (Turner and Liman, 2022). Interestingly, sour receptors also detect the sensation and taste of carbonation in drinks (Chandrashekar et al., 2009) and prompt thirsty animals to drink (Zocchi et al., 2017).

The receptors for sweet, bitter, and umami tastes are metabotropic GPCRs (see Figure 3.2): when bound by tastant molecules arriving at the taste cell's surface, the receptor activates a second-messenger system within the cell.
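To make the two transduction styles concrete, here is a minimal sketch in Python. It is not a model of the real signaling chemistry (every function name, step, and number is an illustrative assumption), but it captures the logical difference just described: salty and sour tastants act directly through ion channels, whereas sweet, bitter, and umami tastants act indirectly, through a GPCR and a second messenger.

# Toy contrast between direct (ionotropic) and second-messenger (GPCR)
# taste transduction. All names and values are illustrative, not
# measured biophysical quantities.

def ionotropic_response(ion_influx_mv: float) -> float:
    """Salty/sour style: ions from the tastant enter the cell through
    membrane channels and depolarize it directly, in a single step."""
    return ion_influx_mv

def gpcr_response(tastant_bound: bool) -> float:
    """Sweet/bitter/umami style: the tastant never enters the cell.
    Binding a surface GPCR activates a G protein, which produces a
    second messenger that changes the cell's activity from inside."""
    if not tastant_bound:
        return 0.0
    second_messenger = 5.0                       # receptor -> G protein -> messenger
    depolarization_mv = 3.0 * second_messenger   # messenger opens channels
    return depolarization_mv

print(ionotropic_response(10.0))  # 10.0 -- direct depolarization
print(gpcr_response(True))        # 15.0 -- indirect, amplified cascade

Note the built-in amplification of the GPCR route: one bound tastant molecule can generate many second-messenger molecules, one reason this design suits tastes, like bitter, where high sensitivity matters.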
These receptors are made up of simpler proteins belonging to two families—designated T1R and T2R—that are combined in various ways (Ahmad and Dalziel, 2020), as we will see next.

Sweet
When two members of the T1R family—T1R2 and T1R3—combine (heterodimerize), they make a receptor that selectively detects sweet tastants (Nelson et al., 2001; Yoshida and Ninomiya, 2016). Mice engineered to lack either T1R2 or T1R3 are insensitive to sweet tastes (Zhao et al., 2003). And if you've spent any time around cats, you may be aware that they couldn't care less about sweets. It turns out that in all cats, from tabbies to tigers, the gene that encodes T1R2 is disabled, so their sweet receptors don't work (X. Li et al., 2009).

Bitter
In nature, bitter tastes often signal the presence of toxins, so it's not surprising that a high sensitivity to different kinds of bitter tastes has evolved, although individuals vary significantly in their bitter taste sensitivity (FIGURE 6.17). Members of the T2R family of receptor proteins appear to function as bitter receptors (Chandrashekar et al., 2000; Behrens and Meyerhof, 2018). The T2R family has about 30 members, and this large number may reflect the wide variety of bitter substances encountered in the environment, as well as the adaptive importance of being able to detect and avoid them. Furthermore, each bitter-sensing taste cell produces most or all of the different types of T2R bitter receptors, so each bitter-sensing taste cell is very broadly tuned and will respond to just about any bitter-tasting substance (Brasser et al., 2005). That's just what you'd expect in a system that evolved to detect toxins.

FIGURE 6.17 It's All a Matter of Taste Buds

Umami
The fifth basic taste, umami—the meaty, savory flavor—is detected by at least two kinds of receptors. One of these is a variant of the metabotropic glutamate receptor (Yasumatsu et al., 2015) and most likely responds to the amino acid glutamate, which is found in high concentrations in meats, cheeses, kombu, and other savory foodstuffs (that's why MSG—monosodium glutamate—is used as a "flavor enhancer"). The second probable umami receptor, a heterodimer of T1R1 and T1R3 proteins, responds to most of the dietary amino acids (Nelson et al., 2002; Ahmad and Dalziel, 2020). Given this receptor's similarity to the T1R2+T1R3 sweet receptor, there is reason to suppose that receptors for things that taste good may have shared evolutionary origins. Consider the taste abilities of birds that, just like their house cat enemies, lack the T1R2 gene and thus ordinarily can't taste sweet. Instead, in those birds that rely on nectar to survive, evolution repurposed the T1R1+T1R3 umami receptor into a new class of taste receptor (Toda et al., 2021). It's impossible to know exactly what taste sensation it produces in the birds' brains, but it evidently signals the presence of delicious sugars—an evolutionary work-around that has allowed nectar-feeding birds to thrive and spread.

Researchers have also discovered that these taste receptor proteins are expressed in numerous tissues of the body—not just the tongue (FIGURE 6.18). These extra-oral taste receptors serve widely varying functions unrelated to conventional taste, such as the control of appetite, digestion, and immune responses.
FIGURE 6.18 Body Tissues Expressing Taste Receptors

Taste information is transmitted to several parts of the brain

Taste projections of the gustatory system (from the Latin gustare, "to taste") extend from the tongue to several brainstem nuclei, then to the thalamus, and ultimately to gustatory regions of the somatosensory cortex (FIGURE 6.19). Because there are only five basic tastes, and because each taste cell detects just one of the five, the encoding of taste perception could be quite straightforward, with the brain simply monitoring which specific axons are active in order to determine which tastes are present (Chandrashekar et al., 2006). In such a simple arrangement—as we noted earlier, it is sometimes called a labeled-line system—there is no need to analyze complex patterns of activity across multiple kinds of taste receptors (called pattern coding). Experimental evidence seemingly supports the conclusion that taste is a labeled-line system: selectively inactivating taste cells that express receptors for just one of the five tastes tends to completely eradicate sensitivity to that one taste while leaving the other four tastes mostly unaffected (e.g., Huang et al., 2006). However, the same manipulation can also be viewed as knocking out one-fifth of any pattern of activity that would normally be present. Furthermore, it's hard to see how a purely labeled-line system would allow us to discriminate between different types of salty tastes, or different forms of sweet. Interestingly, research using electron microscopy to study neural connections of taste cells has revealed that a minority of gustatory neurons receive inputs from more than one type of taste cell, so it's possible that the taste system could employ some degree of pattern coding to detect tastes (Wilson et al., 2022). The resolution of this debate will require new technical developments and further experimentation.

FIGURE 6.19 Anatomy and Main Pathways of the Human Gustatory System

How's It Going?
1. What are the five basic tastes?
2. Generate a map of the human tongue, showing how sensitive each region is to the five basic tastes.
3. Compare and contrast taste buds and papillae.
4. Identify the cellular mechanisms underlying each of the five tastes. Discuss the evolution of taste sensitivity: How do these five tastes help us survive?

FOOD FOR THOUGHT
It seems that "taste" receptors are found throughout the bodies of many animals; how might that have come about? Do you suppose they first evolved to sense chemicals in the internal environment, or the external environment?

6.6 Smell: Chemicals in the Air Elicit Odor Sensations

The Road Ahead
Finally, we turn our attention to the specialized sensory system that samples chemicals in the air: our sense of smell. After reading this section, you should be able to:
6.6.1 Describe the main structures of the olfactory system, with a focus on the cells and projections of the olfactory epithelium.
6.6.2 Explain the process of olfactory transduction, and discuss the function and variety of olfactory receptors that have been discovered.
6.6.3 Trace the projection route of olfactory information, and the main olfactory structures, from the olfactory epithelium to the cortex.
6.6.4 Compare and contrast human olfactory capabilities with those of other species.
6.6.5 Describe the structure and function of the vomeronasal system, and weigh the evidence for and against the idea that humans detect pheromones.
As for all the other senses, species differences in olfaction—odor perception—reflect the evolutionary importance of various smells for survival and reproduction (Bear et al., 2016). Cats and mice, dogs and rabbits—all have a sharper sense of smell than humans, although as we will see later, the old view that humans have poor olfactory acuity has little foundation. Birds, however, have only basic olfactory abilities, and dolphins don't have functional olfactory receptors at all. Our ability to perceive a large number of different odors is what produces the complex array of flavors that we normally think of as tastes.

Surveys of olfaction in large populations of healthy people reveal surprising variation in odor sensitivity, ranging from fairly widespread anosmia (odor blindness, in varying degrees; Hofmann et al., 2016) to olfactory supersensitivity, with olfactory performance slightly better among women than men, on average (Sorokowski et al., 2019). A high incidence of anosmia is also among the many miseries inflicted by COVID-19: the virus infects olfactory cells of the nose, causing an inflammatory immune response that damages the nearby olfactory receptor cells. This results in a loss of smell that lasts for only a few weeks in most people, but much longer in a few others. In fact, it's possible that some people will never regain their sense of smell, suggesting that in a minority of people, COVID-19 infection damages the olfactory parts of the brain, which, unlike the cells in the nose, do not regenerate (Sukel, 2021). Early evidence suggests that COVID-19 infection can cause widespread changes in brain structure, especially in regions that are functionally related to the olfactory system (Douaud et al., 2022). Understanding the impact of COVID-19 on the nervous system will be an urgent priority for neuroscientists this decade.

The sense of smell starts with receptor neurons in the nose

In humans (and most other mammals), a sheet of cells called the olfactory epithelium lines part of the nasal cavities. Within the 5–10 square centimeters of olfactory epithelium that we possess, three types of cells are found (FIGURE 6.20): supporting cells, basal cells, and about 6 million olfactory receptor neurons. For comparison, dogs have 100–300 million olfactory receptor neurons, which explains their ability to detect odors at extremely low concentrations—as low as 2 parts per trillion (King, 2013), which is like tasting a pinch of sugar dissolved in a billion cups of tea!

FIGURE 6.20 The Human Olfactory System

Each olfactory receptor cell is a complete neuron, with a long, slender apical dendrite that divides into branches (cilia) that extend into the moist mucosal surface. Substances that we can smell from the air that we inhale or sniff, called odorants, dissolve into the mucosal layer and interact with receptors studding the dendritic cilia of the olfactory neurons (Mohrhardt et al., 2018). Like the metabotropic receptors found on neurons, the olfactory receptor proteins are a variety of G protein–coupled receptors (GPCRs), employing a second-messenger system to respond to the presence of odorants. However, despite these similarities, olfactory neurons differ from the neurons of the brain in several ways. One way in which olfactory neurons are distinct from their cousins in the brain relates to the production of receptor proteins: there is an incredible diversity of olfactory receptor protein subtypes.
So, while there may be up to a dozen or so subtypes of receptors for a given neurotransmitter in the brain, there are hundreds or even thousands of subtypes within the family of odorant receptors, depending on the species under study. The Nobel Prize–winning discovery of the genes encoding this odorant receptor superfamily (Buck and Axel, 1991) provided one of the most important advances in the history of olfaction research. Mice have about 2 million olfactory receptor neurons, each of which expresses only one of about 1000 different receptor proteins. These receptor proteins can be divided into four different subfamilies of about 250 receptors each (Mori et al., 1999). Within each subfamily, members have similar structure and presumably recognize chemically similar odorants. Receptors of different subfamilies are expressed in separate bands of olfactory neurons within the olfactory epithelium (FIGURE 6.21) (Coleman et al., 2019). By comparison, humans make a total of about 400 different kinds of functional olfactory receptor proteins. That's still a large number, but in our case, it looks like hundreds of additional olfactory receptor protein genes became nonfunctional during our evolution (Olender et al., 2008), suggesting that the substances those receptors detected ceased to be important to our ancestors' survival and reproduction.

Although estimates of the number of odors that modern humans can distinguish vary wildly—ranging from hundreds of thousands to hundreds of billions (Bushdid et al., 2014; Gerkin and Castro, 2015)—the widely held belief that we humans have a poor sense of smell relative to other animals has been overstated historically (McGann, 2017). Whatever turns out to be the actual number of odors humans can distinguish, our ability to discriminate thousands, millions, or perhaps billions of odors using just 400 kinds of functional olfactory receptors indicates that we must recognize most odorants by their activation of a characteristic combination of different kinds of receptor molecules (Duchamp-Viret et al., 1999), an example of pattern coding (the toy calculation below shows how far such combinations can go). In addition, any two people will differ by about 30 percent in the makeup of their olfactory receptors (Mainland et al., 2014), so to some extent we each live in our own, personalized olfactory world (Trimmer et al., 2019). And in a curious parallel to the discovery of taste receptors throughout the body, it turns out that some of the tongue's taste cells possess functional olfactory receptor proteins (Malik et al., 2019), perhaps reflecting the great importance of flavors to our species.

FIGURE 6.21 Different Kinds of Olfactory Receptor Molecules on the Olfactory Epithelium

Another big difference between olfactory neurons and brain neurons is that olfactory neurons die and are replaced in adulthood (Lledo and Valley, 2018). This regenerative capacity is most likely an adaptation to the hazardous environment that olfactory neurons inhabit. If an olfactory neuron is killed—say, by the virus that gave you that darn head cold, or by a whiff of something toxic while you were cleaning out the shed, or by some other misadventure—an adjacent basal cell will soon differentiate into a neuron and begin extending a dendrite and an axon (Leung et al., 2007).
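How far can combinations of 400 receptor types go? Here is a toy calculation in Python; the assumption that a typical odorant activates about 10 receptor types is purely illustrative, not a measured figure.

# Combinatorial (pattern) coding: count the distinct receptor
# combinations available if each odorant activates a characteristic
# subset of the ~400 functional human receptor types. The subset size
# of 10 is a hypothetical, illustrative choice.
from math import comb

n_receptor_types = 400   # functional human olfactory receptor proteins
pattern_size = 10        # assumed number of receptor types per odorant

print(comb(n_receptor_types, pattern_size))  # about 2.6 x 10**19

Even this conservative scheme yields on the order of 10^19 possible activation patterns, vastly more than even the highest estimates of the number of discriminable odors, which is why pattern coding can achieve so much with so few receptor types.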
Each olfactory neuron extends a fine, unmyelinated axon into the nearby olfactory bulb of the brain, where it terminates on one specific glomerulus—a spherical clump of neurons (from the Latin glomus, "ball")—out of the thousands of glomeruli that exist in the olfactory bulb. Each glomerulus receives inputs exclusively from olfactory neurons that are expressing the same type of olfactory receptor protein (see Figure 6.20). No one knows exactly how the extending axon knows where to go to find its specific glomerulus, or how it knows where to form synapses within the glomerulus after it arrives. One possibility is that olfactory receptor proteins that are found on the axons of these cells (as well as on the dendrites) guide the axons to their corresponding glomeruli (Barnea et al., 2004; Francia and Lodovichi, 2021). But whatever the exact mechanisms of neuroplasticity in these cells may be, better understanding of the process of olfactory neurogenesis may someday help us develop methods for restoring damaged regions of the brain and spinal cord.

Olfactory information projects from the olfactory bulbs to several brain regions

Having received information from multiple olfactory neurons all expressing the same type of olfactory receptor protein, the glomerulus then actively tunes and sharpens the neural activity associated with the corresponding odorants. The glomeruli are organized within the olfactory bulb according to an orderly, topographic map of smells, with neighboring glomeruli receiving inputs from receptors that are closely related. And, as Figure 6.21 shows, the spatial organization of glomeruli within the olfactory bulbs reflects the segregation of the four receptor protein subfamilies in the olfactory epithelium. This glomerular organization is established during a critical period in early life, after which it becomes fixed (Tsai and Barnea, 2014), resulting in an "olfactotopic" map that is maintained within the olfactory projections throughout the brain.

Olfactory information is conveyed to the brain via the axons of mitral cells (see Figure 6.20), which extend from the glomeruli in the olfactory bulbs to various regions of the forebrain; smell is the only sensory modality that synapses directly in the cortex rather than having to pass through the thalamus. Important targets for olfactory inputs include the hypothalamus, the amygdala, and the prepyriform cortex (FIGURE 6.22). These limbic structures are closely involved in memory and emotion, which may help explain the potency of odors in evoking nostalgic memories of long ago (Hackländer et al., 2019).

FIGURE 6.22 Components of the Brain's Olfactory System

Many vertebrates possess a vomeronasal system

Though many perfumers have tried to create one, there is no perfume for humans that is as alluring as the natural scents that other species use to find possible mates. The majority of terrestrial vertebrates—mammals, amphibians, and reptiles—possess a secondary chemical detection system that is specialized for detecting such pheromones. The system is called the vomeronasal system (FIGURE 6.23), and its receptors are found in the vomeronasal organ (VNO), near the olfactory epithelium.

FIGURE 6.23 The Vomeronasal System

In rodents, the sensory neurons of the VNO express two major families of GPCR vomeronasal receptors—V1R and V2R—that encode hundreds of different types of receptors (Tirindelli, 2021).
These receptors are extremely sensitive, able to detect very low levels of the pheromone signals—such as sex hormone metabolites and signals of genetic relatedness—that are released by other individuals (Isogai et al., 2011; Ihara et al., 2013). For example, hamsters and mice can distinguish relatives from nonrelatives just by smell, allowing these animals to optimize their reproductive activities. From the VNO, information is transmitted to the accessory olfactory bulb (adjacent to the main olfactory bulb), which projects to the medial amygdala and hypothalamus, structures that play crucial roles in governing emotional and sexual behaviors and in regulating hormone secretion. In parallel, dedicated mechanisms in olfactory cortex activate fear and stress responses to predator odor signals, helping the animal to avoid their toothy source (Kondoh et al., 2016).

Do humans communicate via pheromones?

Studies reporting pheromone-like phenomena in humans attract plenty of media attention because of the apparent link to our evolutionary past. Well-known examples include the report that simple exposure to each other's bodily odors can shift women's menstrual cycles (Stern and McClintock, 1998) and a report that exposure to female tears causes reductions in testosterone and sexual arousal in men (Gelstein et al., 2011). However, the VNO is either vestigial or absent in humans, and almost all of our V1R and V2R receptor genes have become nonfunctional over evolutionary time (Lübke and Pause, 2015). So, if humans do communicate through pheromones, it is most likely accomplished using the main olfactory epithelium, and not the VNO. In mice, receptors in the main olfactory epithelium called TAARs, for trace amine–associated receptors, reportedly respond to sex-specific pheromones instead of odorants (Liberles and Buck, 2006; Dewan, 2021), and mice with their TAAR genes knocked out stop reacting to certain urinary odor signals, even in the urine of predators (Dewan et al., 2013). Thus, the old notion that the olfactory epithelium detects odors while the VNO detects pheromones is an oversimplification, even in rodents. And because TAARs have also been found in the human main olfactory epithelium, behavioral evidence that humans respond to pheromones despite their minimal or absent VNOs isn't necessarily paradoxical. If rodents can detect pheromones through the main olfactory epithelium, using TAARs or other yet-unknown mechanisms, then perhaps we can too. Whatever the details of the mechanism may be, accumulating evidence confirms that odor is an ecologically important channel for human social behavior. Determining whether this constitutes pheromonal communication represents a challenge for future neuroscientists (Calvi et al., 2020; Wyatt, 2020).

How's It Going?
1. Discuss odor sensitivity in humans. How do we compare with other species?
2. Provide a brief sketch of the olfactory epithelium, showing the major cell types and their relationships to the brain.
3. Discuss the genetics of odor receptor proteins, as well as their spatial organization in the nose and olfactory bulbs. What is a glomerulus?
4. Which regions of the brain receive strong olfactory inputs? What is the significance of this arrangement for an animal's behavior?
5. Discuss the structures and receptors associated with pheromone sensitivity, and speculate about the ecological importance of pheromone sensitivity in humans and other animals. Are humans sensitive to pheromones?
FOOD FOR THOUGHT We’ve seen that, depending on the species, the sense of smell relies on hundreds—or even thousands—of different types of olfactory receptors. Why is this necessary when other senses, such as vision and hearing, need only a few different receptors? RECOMMENDED READING Doty, R. L. (2015). Handbook of Olfaction and Gustation (3rd ed.). New York, NY: Wiley-Blackwell. Hawkes, C. H. (2018). Smell and Taste Disorders. Cambridge, UK: Cambridge University Press. Horowitz, S. S. (2012). The Universal Sense: How Hearing Shapes the Mind. London, UK: Bloomsbury. Lass, N. J., and Donai, J. J. (2021). Hearing Science Fundamentals (2nd ed.). San Diego, CA: Plural. McGee, H. (2020). Nose Dive: A Field Guide to the World’s Smells. New York, NY: Penguin. Musiek, F. E., and Baran, J. A. (2018). The Auditory System: Anatomy, Physiology, and Clinical Correlates (2nd ed.). San Diego, CA: Plural. Wolfe, J. M., Kluender, K. R., Levi, D. M., Bartoshuk, L. M., et al. (2021). Sensation & Perception (6th ed.). Sunderland, MA: Oxford University Press/Sinauer. Wyatt, T. D. (2014). Pheromones and Animal Behavior: Chemical Signals and Signatures (2nd ed.). Cambridge, UK: Cambridge University Press. VISUAL SUMMARY You should be able to relate each summary to the adjacent illustration, including structures and processes. You should be able to relate each summary to the adjacent illustration, including structures and processes. The online version of this Visual Summary includes links to figures, animations, and activities that will help you consolidate the material. Visual Summary Chapter 6 View larger image LIST OF KEY TERMS Amplitude ampulla amusia anosmia basilar membrane Central deafness cochlea cochlear implants cochlear nuclei conduction deafness cortical deafness deafness decibels (dB) ear canal flavors Frequency fundamental glomerulus gustatory system hair cells harmonics hearing loss hertz (Hz) inferior colliculi infrasound inner ear inner hair cells (IHCs Interaural intensity differences (IIDs) Interaural temporal differences (ITDs) medial geniculate nuclei middle ear motion sickness olfaction olfactory bulb olfactory epithelium olfactory receptor neurons organ of Corti ossicles outer hair cells (OHCs) oval window papillae pheromones pinnae place coding theory primary auditory cortex (A1) pure tone scala media scala tympani scala vestibuli semicircular canals sensorineural deafness spectral filtering stereocilia superior olivary nuclei T1R T2R TAARs taste buds tastes tectorial membrane temporal coding theory timbre tinnitus tonotopic organization transduced tympanic membrane ultrasound umami vestibular nuclei vestibular system vestibulocochlear nerve vomeronasal organ (VNO) word deafness CHAPTER 7 Vision From Eye to Brain Neil V. Watson Simon Fraser University S. Marc Breedlove Michigan State University When Seeing Isn’t Seeing The young woman got in the shower unaware that the nearby water heater was malfunctioning, filling the room with deadly, odorless carbon monoxide gas. Her partner found her unconscious on the floor and called for an ambulance to rush her to the emergency room. When she regained consciousness, “D.F.” seemed to have gotten off lightly, avoiding what could have been a fatal accident. She could understand the doctors’ questions and reply sensibly, move all her limbs, and perceive touch on her skin. But something was wrong with her sight. D.F. had lost the ability to identify things that she viewed. Even the faces of family members had become unfamiliar. 
More than a decade later, D.F. still could not recognize faces or commonplace objects, yet she was not entirely blind. If you showed her a flashlight, she could tell you that it was made of shiny aluminum with some red plastic, but she didn't recognize it ("Is it a kitchen utensil?"). Yet if you asked her to pick it up, without telling her what it was, her hand moved directly to grasp the flashlight exactly as one normally does. Shown a slot in a piece of plastic, D.F. could not tell you whether the slot was oriented vertically, horizontally, or diagonally; but if you handed her a disk and asked her to put it through the slot, she invariably turned the disk so that it went smoothly through (Ganel and Goodale, 2019). Could D.F. see or not?

Many species rely on vision to find food and mates, avoid predators, and locate shelter. However, the sheer volume of visual information poses a serious problem. Viewing the world has been compared to drinking from a waterfall. How does the visual system avoid being overwhelmed by the flood of information entering the eyes? One answer is that each species evolved visual capabilities that are tailored to that species' particular lifestyle. Most nocturnal species have better night vision than do animals that are active during the day, like us. Most rodent species, such as rats and mice, which live in tunnels and close quarters, have poor vision for distant objects, while daytime hunters like hawks have incredibly keen distance vision. Birds and bees can detect ultraviolet light, allowing them to see patterns in flowers that we cannot. But even within our limits of sight, we humans process a remarkable amount of visual information, which keeps about one-third of our cerebral cortex busy analyzing it.

7.1 The Vision Pathway Extends from the Eye to the Brain

The Road Ahead
To begin this chapter, you'll learn how light entering the eye affects the firing of neurons and how that visual information reaches the brain. By the end of this section, you should be able to:
7.1.1 Describe how a visual scene is projected onto the back of the eyes.
7.1.2 Identify the major types of neurons there, which detect and analyze light.
7.1.3 Explain how we are able to detect visual images over a very broad range of illumination.
7.1.4 Describe the orderly mapping of information from a visual scene projecting into the brain.

The eye is an elaborate structure that captures light at the front and projects detailed images of the external world onto a layer of neurons at the back. That layer of neurons, called the retina, turns the light into neural signals in a process called transduction. So, good vision requires an accurate optical image focused on the retina. In other words, light from a point on a target object must end up as a point of light—rather than a blur—on the retina. To produce this sharply focused optical image, the eye has many of the features of a camera, starting with the transparent outer layer of the eye, called the cornea (FIGURE 7.1). Light travels in a straight line until it encounters a change in the density of the medium, such as when it moves from air into water, which causes light rays to bend. This bending of light rays, called refraction, is the basis of such instruments as eyeglasses, telescopes, and microscopes. The curvature of the cornea, which does not change shape, refracts light rays and is primarily responsible for focusing images on the retina.
Light passing through the cornea is further refracted by the lens, which changes its shape to fine-tune that image on the retina.

FIGURE 7.1 Structures of the Human Eye

The change in the shape of the lens is controlled by the ciliary muscles inside the eye. Contraction of the ciliary muscles alters the focal distance of the eye, causing nearer images to come into focus on the retina; this process is called accommodation. As mammals age, their lenses become less elastic and therefore less able to bring nearby objects into focus (we call this farsightedness). Aging humans correct this problem either by holding books and menus farther away from their eyes, or by wearing reading glasses. In contrast, the most common vision problem in young people is myopia (nearsightedness), which is difficulty seeing distant objects. Myopia develops if the eyeball is too long, causing the cornea and lens to focus images in front of the retina rather than on it. Distance vision can be restored in such cases by lenses that correct refraction of the visual image so that it is on the retina (FIGURE 7.2).

FIGURE 7.2 Focusing Images on the Retina

If you've ever played around with a magnifying glass, you've probably noticed that if you hold the lens at arm's length, you can see a clearly focused image of a distant scene through the glass but that scene is upside down and reversed. Like a magnifying glass, the biconvex (bulging on both sides) shape of the lens of the eye causes the visual scene that is focused on the retina to be upside down and reversed compared with the real world (see Figure 7.1).

Movement of the eyes is controlled by the extraocular muscles, three pairs of muscles that extend from the outside of the eyeball to the eye socket of the skull. Fixing your gaze on still or moving targets requires delicate control of these muscles to anchor the visual image on the retina. Let's talk about how that sharply focused visual image is processed in the retina.

Visual processing begins in the retina

The first stages of visual information processing occur in the retina, the receptive surface inside the back of the eye. The retina is only 200–300 micrometers thick—as thick as 2–3 sheets of paper—but it contains several types of cells in distinct layers (FIGURE 7.3A,B). Sensory neurons that detect light are called photoreceptors. There are two types of photoreceptors in the retina, called rods and cones, reflecting their respective shapes (FIGURE 7.3B,C). Cones come in several different varieties, which respond differently to light of varying wavelengths, providing us with color vision (as described later in the chapter). Rods respond to visible light of almost any wavelength.

FIGURE 7.3 Anatomy of the Retina

Both rod and cone photoreceptors release neurotransmitter molecules into synapses on the bipolar cells, controlling their activity. The bipolar cells, in turn, connect with ganglion cells. The axons of the ganglion cells form the optic nerve, which carries information to the brain. Two additional types of cells—horizontal cells and amacrine cells—are especially significant in interactions within the retina. The horizontal cells make contacts among the receptor cells and bipolar cells; the amacrine cells contact both the bipolar cells and the ganglion cells. Interestingly, the rods, cones, bipolar cells, and horizontal cells generate only graded, local potentials; they do not produce action potentials.
Unlike most neurons, these cells affect each other through the graded release of neurotransmitters in response to graded changes in electrical potentials. The ganglion cells, on the other hand, conduct action potentials in the same way that most other neurons do. From the receptor cells to the ganglion cells, enormous amounts of data converge and are compressed; the human eye contains about 100 million rods and 4 million cones, but there are only 1 million ganglion cells to transmit all that information to the brain. Thus, a great deal of information processing is done inside the eye, as the input of over 100 million photoreceptors is compressed into the action potentials of 1 million ganglion cell axons.

The two different populations of photoreceptors (rods and cones) provide input to two different functional systems in the retina. A rod-based system, called the scotopic system (from the Greek skotos, "darkness," and ops, "eye"), is very sensitive and thus works especially well in low light—we use rod vision to detect objects in dim light—but it is insensitive to color. That's why in the darkness of night, when only our rods can detect light, we can't tell colors apart. There is a lot of convergence in the scotopic system because the information from many rods converges onto each ganglion cell. The other system uses cones, which are less sensitive than rods (i.e., they have a higher threshold before they respond), and therefore it requires more light to function. This photopic system (which, like the term photon, gets its name from the Greek phos, "light") shows differential sensitivity to wavelengths, enabling our color vision. Compared with the scotopic system, the photopic system has less convergence, with some ganglion cells reporting information from only a single cone. At moderate levels of illumination, both the rods and the cones function, and some ganglion cells receive input from both types of receptors. TABLE 7.1 summarizes the characteristics of the photopic and scotopic systems.

TABLE 7.1 Properties of the Human Photopic and Scotopic Visual Systems

Property | Photopic system | Scotopic system
Receptors | Cones | Rods
Approximate number of receptors per eye | 4 million | 100 million
Photopigments | Three classes of cone opsins; the basis of color vision | Rhodopsin
Sensitivity | Low; needs relatively strong stimulation; used for day vision | High; can be stimulated by weak light intensity; used for night vision
Location in retina | Concentrated in and near fovea; present less densely throughout the retina | Outside fovea
Receptive-field size and visual acuity | Small in fovea, so acuity is high; larger outside fovea | Larger, so acuity is lower
Response time | Relatively rapid | Slow

Photoreceptors respond to light by releasing less neurotransmitter

Rods and cones owe their extraordinary sensitivity to their unusual structure and biochemistry (FIGURE 7.3B–D). Each of these cells contains a stack of discs, which is where light particles are detected. Because light is reflected in many directions by the various parts of the eye, only a fraction of the light that strikes the cornea actually reaches the retina. The stacking of the discs increases the probability that one of them will capture those light particles that make it to the retina (a quick calculation below shows how effective such stacking can be). The light particles, called photons, that strike the discs are captured by special photopigment receptor molecules. In the rods this photopigment is rhodopsin (from the Greek rhodon, "rose," and opsis, "vision"). Cones use similar photopigments, as we'll see later.
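Here is that calculation as a short Python sketch. The per-disc capture probability is a made-up illustrative value; only the logic matters: many unreliable absorbers stacked in series yield near-certain capture.

# Why rods and cones stack their photopigment-bearing discs: even if a
# single disc rarely absorbs a passing photon, hundreds of discs in
# series make capture nearly certain. p_per_disc is hypothetical.

p_per_disc = 0.01   # assumed chance that any one disc captures the photon
n_discs = 1000      # rough order of magnitude for a rod outer segment

p_capture = 1 - (1 - p_per_disc) ** n_discs
print(round(p_capture, 5))   # 0.99996 -- capture is almost guaranteed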
Curiously enough, photoreceptors in the dark continually release neurotransmitter onto bipolar cells. When light hits photopigment in the photoreceptor, it triggers a cascade of chemical reactions that hyperpolarize the cell, causing the cell to release less neurotransmitter onto bipolar cells (FIGURE 7.4). It may seem puzzling that light causes photoreceptors to release less neurotransmitter, but remember that the visual system responds to changes in light. Either an increase or a decrease in the intensity of light can stimulate the visual system, and hyperpolarization is just as much a neural signal as depolarization is.

FIGURE 7.4 Light Hyperpolarizes Photoreceptors

This change of potential in photoreceptors is the initial electrical signal in the visual pathway. Stimulation of rhodopsin by light hyperpolarizes the rods, just as light stimulation of the cone pigments hyperpolarizes them. For both rods and cones, the size of the hyperpolarizing photoreceptor potential determines how much less transmitter will be released (see Figure 7.4). Another important feature of photoreceptors is that their sensitivity to light is constantly changing, as discussed next.

Different mechanisms enable the eyes to work over a wide range of light intensities

Our visual system must respond to stimuli of vastly different intensities: a very bright light is about 10 billion times as intense as the weakest lights we can see. One way the visual system deals with this large range of intensities is by adjusting the size of the pupil, which is an opening in the colorful disc called the iris (see Figure 7.1). In Chapter 1 we mentioned that dilation (opening) of the pupils is controlled by the sympathetic division of the autonomic system and that constriction is triggered by the parasympathetic division. Because usually both divisions are active, pupil size reflects a balance of influences. Drugs that block acetylcholine transmission in the parasympathetic synapses onto muscles controlling the iris relax them, opening the pupil widely. One drug with this effect—belladonna—got its name (Italian for "beautiful woman") because in Renaissance Europe it was thought to make a woman more beautiful by giving her the wide-open pupils of an attentive person. Other drugs, such as morphine, constrict the pupils.

In bright light, the pupil contracts quickly to admit only about one-sixteenth as much light as when illumination is dim (FIGURE 7.5). Although rapid, the 16-fold difference in light controlled by the pupil doesn't come close to accounting for the billion-fold range of visual sensitivity (FIGURE 7.6). Another mechanism for handling different light intensities is range fractionation, the handling of different intensities by different receptors—some with low thresholds (rods) and others with high thresholds (cones) (see Table 7.1). But the main reason we can see over such a vast range of light is photoreceptor adaptation: each photoreceptor constantly adjusts its sensitivity to match the average level of ambient illumination, over a tremendous range. Thus, the visual system is concerned with differences, or changes, in brightness—not with the absolute level of illumination (a minimal model below illustrates this sliding sensitivity).

FIGURE 7.5 The Iris Controls the Size of the Pupil Opening
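The following Python sketch is a minimal model of that sliding sensitivity, assuming a simple saturating response centered on the ambient light level (a Naka-Rushton-style function; the exact form and constants are illustrative, not fitted to real photoreceptor data).

# Toy photoreceptor adaptation: the response depends on intensity
# RELATIVE to the ambient level, so the same stimulus/ambient ratios
# produce the same responses whether the scene is dim or bright.

def response(intensity: float, ambient: float) -> float:
    """Return a 0..1 response; 0.5 at the ambient level, saturating above it."""
    ratio = intensity / ambient
    return ratio / (ratio + 1.0)

for ambient in (1.0, 1000.0):          # a dim room vs. broad daylight
    for rel in (0.1, 1.0, 10.0):       # stimulus at 0.1x, 1x, 10x ambient
        stimulus = rel * ambient
        print(ambient, stimulus, round(response(stimulus, ambient), 2))
# At either ambient level the outputs are 0.09, 0.5, 0.91: the
# operating window simply slides to track the prevailing illumination.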
FIGURE 7.6 The Wide Range of Sensitivity to Light Intensity

At any given time, a photoreceptor operates over a range of intensities of about a hundred-fold; that is, it is completely depolarized by a stimulus about one-tenth the ambient level of illumination, and a light 10 times more intense than the ambient level will completely hyperpolarize it. The receptors constantly shift their whole range of response to work around the prevailing level of illumination. Further adaptation, controlled by neural circuits, occurs in the brain.

Acuity is best in foveal vision

The whole area that you can see without moving your head or eyes is called your visual field. Visual acuity, commonly known as the sharpness of vision, is especially fine in the center of the visual field and falls off rapidly toward the periphery. That's why when we want to look at something closely, we center our gaze on the object of interest. The fine structure of the retina explains why our acuity is best in the center of the visual field, called the fovea (FIGURE 7.7A). Notice how much more densely packed cones are in the fovea, where acuity is highest (FIGURE 7.7B), than in other parts of the retina. The fovea has an especially dense concentration of cones, absorbing so much light that the region looks dark in the photo (see Figure 7.7A). That is one reason visual acuity is so high in this region. People differ in their concentrations of cones (Legras et al., 2018), and this variation may be related to individual differences in visual acuity. Species differences in visual acuity also reflect the density of cones in the fovea. For example, hawks, whose acuity is much greater than that of humans, have much narrower and more densely packed cones in the fovea than we do. Acuity is reduced in the periphery of the retina in part because both rods and cones are larger there.

FIGURE 7.7 Densities of Retinal Receptors and Visual Acuity

The rods show a different distribution from the cones: they are absent in the fovea but more numerous than cones in the periphery of the retina (see Figure 7.7A). This is why, if you want to see a dim star, you do best to search for it a little off to the side of your center of gaze. Not only are rods more sensitive than cones to dim light, but as we mentioned earlier, input from many rods converges on each ganglion cell in the scotopic system, further increasing the system's sensitivity to weak stimuli. But that greater convergence of rods comes at the cost of diminished acuity compared with the fovea. Rods provide high sensitivity with limited acuity; cones provide high acuity with limited sensitivity. Thus, fine vision requires good lighting.

In addition to the tight packing of cones, another reason acuity is greater in the fovea is that in this region of the retina light reaches the cones directly, without having to pass through other layers of cells and blood vessels (FIGURE 7.8). In the rest of the retina, many light particles hit those upper layers without reaching the photoreceptors. This is why the surface of the retina is depressed at the fovea (see Figure 7.1A), giving the structure its name (fovea means "pit" in Latin).

FIGURE 7.8 An Unobstructed View

The optic disc, to the nasal side of the fovea, is where blood vessels and ganglion cell axons leave the eye (see Figure 7.7A). There are no photoreceptors at the optic disc, so there is a blind spot here that we normally do not notice.
You can locate your blind spot, and experience firsthand some of its interesting features, with the help of FIGURE 7.9. The blind spot is much bigger than we usually appreciate; it is about 10 times larger than the image of a full moon, yet we typically don't even notice it! Again, brain systems "fill in" the missing information so that we perceive an uninterrupted visual scene.

FIGURE 7.9 Experiencing the Blind Spot

Before we consider how information is processed at different levels of the visual system, we need to describe the pathway from the eye to the cortex.

How's It Going?
1. Describe how structures of the eye refract light to focus an image on the retina.
2. How do the photopic and scotopic visual systems differ?
3. How are we able to discriminate differences in light over such a wide range of illumination?
4. Why is our vision so much more acute at the fovea than it is elsewhere?

Neural signals travel from the retina to several brain regions

The ganglion cells in each eye produce action potentials that are conducted along their axons to send visual information to the brain. These axons make up the optic nerve (also known as cranial nerve II), which brings visual information into the brain, eventually reaching the occipital cortex at the back of the brain. In vertebrates, many of the axons of each optic nerve cross to the opposite cerebral hemisphere. The optic nerves cross the midline at the optic chiasm (named for the Greek letter χ [chi] because of its crossover shape). Proportionally more axons cross the midline in prey animals, such as rabbits, that have laterally placed eyes with little overlap in their fields of vision (FIGURE 7.10). This arrangement gives prey animals an especially wide field of view (good for spotting threats) at the cost of poor depth perception (which predators gain by comparing the overlapping visual fields of their front-facing eyes).

FIGURE 7.10 Visual Fields

In humans, axons from the half of the retina toward your nose (the nasal hemiretina) cross over to the opposite side of the brain. The half of the retina toward your temple (the temporal hemiretina) projects its axons to its own side of the brain. The result of these projections is that the right hemisphere of the brain "sees" the left side of the visual field, and the left hemisphere "sees" the right side of the visual field (FIGURE 7.11); the short sketch below restates this crossing rule. After the axons of the retinal ganglion cells pass the optic chiasm, they are known collectively as the optic tract.

FIGURE 7.11 Visual Pathways in the Human Brain The right visual field, which falls on parts of both retinas, projects to the left cerebral hemisphere. Similarly, the left visual field projects to both eyes and then to the right cerebral hemisphere.

A minority of retinal ganglion cells send their optic tract axons to the superior colliculus in the midbrain (see Figure 7.11, step 4a), which coordinates rapid movements of the eyes toward a target and controls the pupil's response to light levels. But most axons of the optic tract terminate on cells in the lateral geniculate nucleus (LGN) of the thalamus (step 4b). Axons of the LGN neurons form the optic radiations (step 5), which terminate in the primary visual cortex (V1) of the occipital cortex at the back of the brain (step 6).
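The crossing rule is compact enough to state as a two-line Python function. This sketch simply restates the anatomy described above; the function name and labels are ours, for illustration only.

# Optic chiasm routing: nasal-hemiretina axons cross the midline;
# temporal-hemiretina axons stay on their own side.

def target_hemisphere(eye: str, hemiretina: str) -> str:
    if hemiretina == "nasal":                    # nasal fibers cross
        return "right" if eye == "left" else "left"
    return eye                                   # temporal fibers do not

# The left visual field falls on the left eye's nasal hemiretina and
# the right eye's temporal hemiretina...
print(target_hemisphere("left", "nasal"))      # -> right
print(target_hemisphere("right", "temporal"))  # -> right
# ...so the whole left visual field ends up in the right hemisphere,
# exactly as FIGURE 7.11 shows.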
The primary visual cortex is sometimes called striate cortex because cross sections of brain tissue from this region feature a prominent stripe, or striation, corresponding to convergent binocular ("two-eyed") inputs. This binocular input to layer IV of the primary visual cortex is important for depth perception. As Figure 7.11 shows, the visual cortex in the right cerebral hemisphere receives its input from the left half of the visual field, and the visual cortex in the left hemisphere receives its input from the right half of the visual field.

In addition to the primary visual cortex (V1), numerous surrounding regions of the cortex are largely visual in function. These visual cortical areas outside the striate cortex are sometimes called extrastriate cortex. Working in parallel, these cortical regions process different aspects of visual perception, such as form, color, location, and movement, as we'll discuss later in this chapter. The striate cortex, as well as most extrastriate regions, contains a topographic projection of the retinas, which means there's a topographic projection of the visual field, discussed next.

The retina projects to the brain in a topographic fashion

The retina represents a two-dimensional map of the visual field. As this information courses through the brain, the point-to-point correspondence between neighboring parts of visual space is maintained, forming a maplike projection (see Figure 7.11). Much of this topographic projection of visual space is devoted to the foveal region (FIGURE 7.12A). Human V1 is located mainly on the medial surface of the cortex (FIGURE 7.12B; see also Figure 7.11). About half of the human V1 is devoted to the fovea and the retinal region just around the fovea, even though this represents a tiny fraction of the total retina. This disproportionate representation does not mean that our spatial perception is distorted. Rather, this representation makes possible the great acuity in the central part of the visual field. In other words, another reason why our vision is so much more acute in the foveal region is that we devote proportionally more gray matter to analyzing information from that region.

FIGURE 7.12 Location of the Primary Visual Cortex

Because of the orderly mapping of the visual field (known as retinotopic mapping) at the various levels of the visual system, damage to parts of the visual system can be diagnosed from perceptual defects within the visual field. And if we know the site of injury in the visual pathway, we can predict the location of such a perceptual gap, or scotoma, in the visual field. Although the word scotoma comes from the Greek skotos, meaning "darkness," a scotoma is not perceived as a dark patch in the visual field; rather, it is a spot where nothing can be perceived, and usually rigorous testing is required to demonstrate its existence. As with the blind spots we all have, people may not be aware of scotomas that arise. Within a scotoma, a person cannot consciously perceive visual cues, but some visual discrimination in this region may still be possible; this paradoxical phenomenon has been called blindsight. People with blindsight say they cannot see, but when asked to guess whether a stimulus is present, they're correct more often than could be expected by chance alone, or they may walk down a corridor strewn with objects without running into them (De Gelder et al., 2008).

How's It Going?
1. Describe the path of information from the left visual field to the right side of the brain.
2. Name the structures that carry information from the eye to the brain.
3. Why is the proportion of primary visual cortex devoted to the fovea so large compared with other parts of the retina?

FOOD FOR THOUGHT
Why don't the ganglion cell axons that make up the optic disc exit the eyeball outside the visual field and thereby avoid causing the blind spot?

7.2 Neurons at Different Levels of the Visual System Have Very Different Receptive Fields

The Road Ahead
This next section describes how neurons in the retina and brain respond to light that enters the eye. Reading this section should enable you to:
7.2.1 Describe the kinds of light stimuli that best excite or inhibit neurons in the retina, LGN, and striate and extrastriate cortex.
7.2.2 Understand why our perception of light and dark is not a simple function of how much light strikes the eye.
7.2.3 Explain how simple receptive fields of the retina can be combined to produce more-complex receptive fields in V1.
7.2.4 Contrast hierarchical models of visual processing with a spatial-frequency model.
7.2.5 Identify extrastriate brain regions specialized to detect complex forms and motion.

As we noted in Chapter 5, the receptive field of a sensory cell consists of the stimulus features that excite or inhibit the cell. Understanding the receptive fields of cells in the visual system begins with the response of photoreceptors. At rest, both rod and cone photoreceptors steadily release the synaptic neurotransmitter glutamate. Light always hyperpolarizes the photoreceptors, causing them to release less glutamate. But the responses of the bipolar cells that receive this glutamate differ, depending on the type of glutamate receptor they possess. One group of bipolar cells consists of on-center bipolar cells. Glutamate is inhibitory to this type of cell, so light on the on-center bipolar cell's receptive field (which would cause the photoreceptor to release less glutamate) would excite this bipolar cell (think of taking the brakes off a system) (FIGURE 7.13A). The second group consists of off-center bipolar cells. Glutamate is excitatory to off-center bipolar cells, so shining light on this cell's receptive field (which causes the photoreceptor to release less glutamate) would inhibit this bipolar cell. It's called an off-center bipolar cell because turning off a light in the center of its receptive field excites it (FIGURE 7.13B).

FIGURE 7.13 Connections of Cones to Bipolar Cells

Bipolar cells also release glutamate, which always depolarizes ganglion cells. Therefore, when light is turned on, on-center bipolar cells depolarize (excite) on-center ganglion cells; when light is turned off, off-center bipolar cells depolarize (excite) off-center ganglion cells (see Figure 7.13). The stimulated on-center and off-center ganglion cells then fire nerve impulses and report "light" or "dark" to higher visual centers.

Neurons in the retina and the LGN have concentric receptive fields

Recordings from single ganglion cells show that in addition to the on- or off-center portion we've just discussed, their receptive fields also include a ring around that center, which is called a surround because it surrounds the central patch. Thus the entire receptive field of a bipolar cell is concentric, consisting of a roughly circular central area and the ringlike area surrounding it.
Through various retinal connections, the photoreceptors in the central area and those in the ring surrounding it tend to have opposite effects on the next cells in the circuit. Thus, both bipolar cells and ganglion cells have two basic types of retinal receptive fields: on-center/off-surround and off-center/on-surround. These antagonistic effects of the center and its surround explain why uniform illumination of the entire receptive field has little effect on ganglion cell activity, compared with a well-placed small spot of light within the cell's receptive field. Neurons in the LGN, which are stimulated by retinal ganglion cells, also have these concentric on-center/off-surround or off-center/on-surround receptive fields, as shown in FIGURE 7.14.

FIGURE 7.14 Receptive Fields of Retinal Cells

To understand why the effect of light falling on the surround of a firing ganglion cell is opposite to the effect of light falling in the center, we need the concept of lateral inhibition, in which sensory receptor cells inhibit the reporting of information from neighboring receptor cells. As illustrated in FIGURE 7.15, the bipolar cells that relay information from photoreceptors to ganglion cells also inhibit one another. So when one bipolar cell is active, it inhibits its neighbors.

FIGURE 7.15 Lateral Inhibition in the Retina

Because of this lateral inhibition, the ganglion cells stimulated by the right-hand edge of each dark band in FIGURE 7.16A are inhibited by the neighboring photoreceptors stimulated by the lighter band next door. Thus, ganglion cells stimulated by the right edge of each bar report receiving less light than they actually do (i.e., that edge looks darker to us). Conversely, the left edge of each bar appears lighter than the rest of the bar (a toy simulation of this effect appears after the review questions below).

FIGURE 7.16 The Effect of Context on the Perception of Brightness

In FIGURE 7.16B, the two indicated patches, which clearly differ in the brightness we perceive, actually reflect the same amount of light. If you use your pinkie to cover the edge where the two tiles meet, you'll see that the two patches are the same shade of gray. How are such puzzling effects produced? Although the contrast effect in Figure 7.16A is determined, at least in part, by lateral inhibition among adjacent retinal cells, the entire areas indicated in Figure 7.16B, not just the edges, appear different, so the effect must be produced higher in the visual system. One explanation is that we are accustomed to light sources coming from overhead (such as the sun, or a room light), so our brain assumes that the upper patch must actually be darker than the lower patch, because the upper one should be receiving more light than the lower one. The important point is that our visual experience is not a simple reporting of the physical properties of light. Rather, our perception of light versus dark is created by the brain in response to many factors, including surrounding stimuli (FIGURE 7.16C). Let's consider receptive fields of the visual system in Researchers at Work, next.

How's It Going?
1. Given that all photoreceptors are hyperpolarized by light, how can the same photoreceptor excite some bipolar cells while inhibiting others?
2. What is a receptive field, and what two kinds of receptive fields are displayed by retinal ganglion cells?
3. Describe lateral inhibition in the retina and how it can sharpen our vision yet make us susceptible to the optical illusion we experience in Figure 7.16A.
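To see how lateral inhibition produces the edge effects just described (and probed in question 3 above), consider this toy one-dimensional simulation in Python. Each "cell" subtracts a fixed fraction of its two neighbors' stimulation; the weight of 0.1 is an arbitrary illustrative value, not a physiological measurement.

# Toy 1-D lateral inhibition: each cell's output is its own input minus
# a fraction of its neighbors' inputs. Edge cells reuse their own value
# for the missing neighbor, a simplification for illustration.

def lateral_inhibition(stimulus, w=0.1):
    out = []
    for i, center in enumerate(stimulus):
        left = stimulus[i - 1] if i > 0 else center
        right = stimulus[i + 1] if i < len(stimulus) - 1 else center
        out.append(center - w * (left + right))
    return out

edge = [10, 10, 10, 10, 50, 50, 50, 50]   # a dark band meeting a light band
print(lateral_inhibition(edge))
# [8.0, 8.0, 8.0, 4.0, 44.0, 40.0, 40.0, 40.0]
# The cell just inside the dark band (4.0) signals less light than its
# dark-band neighbors, and the cell just inside the light band (44.0)
# signals more: the same exaggeration at edges we perceive in
# Figure 7.16A.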
RESEARCHERS AT WORK

Neurons in the Visual Cortex Have Varied Receptive Fields

Neurons from the LGN send their axons to cells in the primary visual cortex (V1), but the spots of light that are effective stimuli for LGN cells (FIGURE 7.17A; see also Figure 7.14A) are not very effective for cortical cells. In 1959, David Hubel and Torsten Wiesel reported that visual cortical cells require more-specific, elongated stimuli than those that activate LGN cells and ganglion cells.

FIGURE 7.17 Receptive Fields of Cells at Various Levels in the Cat Visual System

Hubel and Wiesel categorized cortical cells according to the types of stimuli that produced maximum responses. So-called simple cortical cells respond best to an edge or a bar that has a particular width and a particular orientation and location in the visual field (FIGURE 7.17B). These cells are therefore sometimes called bar detectors or edge detectors. Like the simple cells, complex cortical cells have elongated receptive fields, but they also require movement of the stimulus to make them respond actively. For some of these cells, any movement in their field is sufficient; others are more demanding, requiring motion in a specific direction (FIGURE 7.17C).

Spatial-frequency analysis is unintuitive but efficient

Hubel and Wiesel’s theoretical model of visual analysis can be described as hierarchical; that is, more-complex receptive fields are built up from inputs of simpler ones. For example, a simple cortical cell can be thought of as receiving input from a row of LGN cells with circular receptive fields (FIGURE 7.18A), and a complex cortical cell can be thought of as receiving input from a row of simple cortical cells (FIGURE 7.18B).

FIGURE 7.18 Simple Receptive Fields Can Combine to Make Complex Receptive Fields

Other theorists extrapolated from this hierarchical model, suggesting that higher-order circuits of cells could detect any possible form. Thus it was suggested that, by integration of enough successive levels of analysis, a neuron might respond only to a person’s grandmother, and such hypothetical “grandmother cells” were frequently mentioned in the literature. According to this view, whenever such a cell was excited, up would pop a mental picture of one’s grandmother. This hypothesis was offered as a possible explanation for facial recognition.

Critics soon pointed out both theoretical and empirical problems with the hierarchical model (Barwich, 2019). For one thing, a hierarchical system like this would require a vast number of cells—perhaps more neurons than the cortex possesses—to account for all the visual objects we might ever encounter. Although some neurons are indeed activated by the sight of very specific faces (e.g., “Halle Berry neurons” are activated by photos of that actress) in both humans (Pedreira et al., 2010) and monkeys (Freiwald et al., 2009), these neurons do not respond to specific features of the face, as we would expect if they were built up from feature detectors. Rather, they respond only when the whole face or most of the face is presented (Freiwald et al., 2009).

Confronted with the inadequacies of the hierarchical model, scientists proposed an alternative account of vision, known as the spatial-frequency model, that is more powerful but less intuitive. This model proposes that the visual system analyzes the number of cycles of light-dark (or color) patches in any stimulus (Gardner and Merriam, 2021). Some cycles are narrow, others broad.
Some cycles of light-dark are oriented vertically, others horizontally, and others somewhere in between. If cortical neurons are indeed optimized to detect light-dark cycles, then they should respond to repeating bars of light, as in the examples shown in FIGURE 7.19, even better than to a single bar of light. And that’s precisely what researchers found.

FIGURE 7.19 Examples of Spatial Frequencies in Vision. The brain combines information about all these spatial frequencies to give us our perception of the scene.

The idea that the visual system processes spatial-frequency information was revolutionary because it led to entirely different conceptions of how the visual system might work. The idea suggests that, rather than specifically detecting such seminaturalistic features as bars and edges, the system is breaking down complex stimuli into their individual spatial-frequency components (Kauffmann et al., 2015; Trouilloud et al., 2023). In such a system, we might require a view of the whole face, which includes the low-frequency components, for recognition. This could explain why “Halle Berry neurons” do not respond to small portions of a face, because such snippets contain only high-frequency components. The spatial-frequency approach has proven useful in the analysis of many aspects of human pattern vision, and it provides the basis of high-definition television (HDTV).

Neurons in the visual cortex beyond area V1 have complex receptive fields and help identify forms

Area V1 represents only a small fraction of the total amount of cortex devoted to vision. From area V1, axons extend to cortical areas involved in the perception of form: V2, V4, and the inferior temporal area (FIGURE 7.20A–C). The receptive fields of the cells in many of these extrastriate visual areas are even more complex than those of area V1.

FIGURE 7.20 Main Visual Areas in Monkey and Human Brains

The visual areas of the human brain (FIGURE 7.20D) have been less thoroughly mapped than those of the monkey brain, and mainly by neuroimaging (the spatial resolution of which is not as fine as that of the electrophysiological recording used in the monkey brain), but the general layout appears similar among primates, especially for V1. An astonishing proportion of primate cortex analyzes visual information. The areas that are largely or entirely visual in function occupy over half of the surface of the macaque cortex (Van Essen et al., 2001) and about 30 percent of human cortex (Tootell et al., 2003; Rapan et al., 2022). We will discuss only a few of the main visual cortical areas and their functions.

Area V2 is adjacent to V1, and many of its cells have receptive fields similar to those of V1 cells. Many V2 cells can respond to illusory contours, such as the boundaries of the upright triangle in FIGURE 7.21. Clearly, such cells respond to complex relations among the parts of their receptive fields.

FIGURE 7.21 A Geometric Figure with “Illusory” or “Subjective” Contours

Area V4 cells generally give their strongest responses to the frequency gratings that we discussed earlier (see Figure 7.19). However, some V4 cells give even stronger responses to concentric and radial stimuli, such as those in FIGURE 7.22 (Roe et al., 2012). Area V4 also has many cells that respond most strongly to color differences, as we will see later when we discuss color vision.

FIGURE 7.22 Complex Stimuli Evoke Strong Responses in Visual Cortex
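Returning to the Researchers at Work box above, both of its central ideas can be made concrete in a few lines. The sketch below is our own toy model, not anything from the text: the difference-of-Gaussians filter, the field sizes, and the grating period are all invented. It first builds an oriented “simple cell” receptive field by summing a vertical row of circular center-surround fields, as in Figure 7.18A, and then shows that a grating whose light-dark cycle matches the field drives the model cell harder than a single bar, the signature of spatial-frequency tuning.

```python
import numpy as np

def center_surround(y, x, cy, cx, sc=1.0, ss=2.0):
    """One circular, LGN-like field: an excitatory center Gaussian minus
    a broader inhibitory surround Gaussian (difference of Gaussians)."""
    r2 = (y - cy) ** 2 + (x - cx) ** 2
    center = np.exp(-r2 / (2 * sc**2)) / (2 * np.pi * sc**2)
    surround = np.exp(-r2 / (2 * ss**2)) / (2 * np.pi * ss**2)
    return center - surround

size = 32
y, x = np.mgrid[0:size, 0:size]

# Hierarchical step (Figure 7.18A): a "simple cell" receptive field made
# by summing a vertical row of circular center-surround fields.
rf = sum(center_surround(y, x, cy, size // 2) for cy in range(6, 26, 4))

def response(stimulus):
    """Net drive to the model cell: receptive field times stimulus contrast."""
    return float((rf * stimulus).sum())

# A single bright vertical bar lying over the excitatory strip...
bar = np.where(abs(x - size // 2) <= 1, 1.0, 0.0)

# ...versus a grating whose light-dark cycle (period 8 pixels, chosen to
# put dark bars over the inhibitory flanks) covers the whole field.
grating = np.cos(2 * np.pi * (x - size // 2) / 8)

print("single bar response:", round(response(bar), 2))
print("grating response:   ", round(response(grating), 2))
# The matched grating drives the model cell harder than the lone bar;
# that preference for repeating bars is spatial-frequency tuning.
```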
The prefrontal cortex also contains a restricted region of neurons that are activated by faces (Dinh et al., 2018). These findings indicate that a pathway mediating visual recognition extends from V1 through temporal cortex to the prefrontal cortex. Later we’ll see that this pathway was damaged in D.F., the woman described at the start of the chapter.

Visual perception of motion is analyzed by a special system that includes cortical area V5

In area V5 (also called the middle temporal area, or area MT; see Figure 7.20C) of monkeys, neurons respond to moving visual stimuli, indicating that they are specialized for the perception of motion and its direction. Imaging studies show that moving stimuli also evoke responses in human area V5. Experimental lesions of area V5 in monkeys trained to report the direction of perceived motion impaired their performance, at least temporarily. Conversely, electrically stimulating an area of V5 that normally responds to stimuli moving up caused monkeys to report that dots on the screen were moving up even when they were actually moving to the right. In other words, the electrical stimulation appeared to alter the monkeys’ experience of visual motion (Zeki, 2015).

One striking report described a woman who had lost the ability to perceive motion after a stroke damaged her area V5 (Zihl and Heywood, 2015). The woman was unable to perceive continuous motion and saw only separate, successive still images. This impairment led to many problems in daily life. She had difficulty crossing streets, because she could not follow the positions of automobiles in motion: “When I’m looking at the car at first, it seems far away. But then when I want to cross the road, suddenly the car is very near.” She also had difficulty following conversations, because she could not see the movements of speakers’ lips. Except for her inability to perceive motion, her visual perception appeared normal.

How’s It Going?

1. How could information from LGN neurons with simple concentric receptive fields be combined in a cortical cell such that it would respond best to a line of light?
2. How could information from simple cortical cells be combined in another cortical cell so that it would respond best to a moving line of light?
3. Describe the spatial-frequency hypothesis of vision.
4. What are some examples of visual receptive fields outside of V1 that respond to very complex stimuli?
5. What kind of stimuli affect the firing of neurons in V5 (also called area MT)?

Food for Thought

The now rejected hypothesis that there might be “grandmother cells” has been described as a valuable “failure in science” (Barwich, 2019). What do you think that means, and what does it say about the scientific enterprise?

7.3 Color Vision Depends on Integrating Information from the Retinal Cones

The Road Ahead

For most people, color is a striking aspect of vision. In this portion of the chapter, you’ll learn how our brain uses input from three different types of cones to construct our experience of color. By the end of this section, you should:

7.3.1 Know the physical properties of light that make objects appear colored.
7.3.2 Realize that our perception of color does not simply reflect those properties of light.
7.3.3 Be able to discuss the two major theories of how the visual system can detect colors.
7.3.4 Understand how comparing the activity of two or more types of cones informs us about color.
7.3.5 Understand why men are more likely than women to have difficulty distinguishing colors.

For most of us, the visible world has distinguishable hues: blue, green, yellow, red, and their intermediates. These hues appear different because every photon vibrates as it travels across space, behaving like a sinusoidal wave. Photons vary in the frequency of vibration and therefore in the wavelength (the distance between two adjacent peaks of the wave) of the light, and we can detect some of these differences, perceiving faster-vibrating (thus shorter-wavelength) photons as blue and green, and slower-vibrating (longer-wavelength) photons as more orange and red. The human visual system responds only to photons whose wavelengths lie within a very narrow section of the total electromagnetic range, from about 400 to 700 nanometers (nm) (FIGURE 7.23). Wavelengths outside this narrow range aren’t considered to be visible light, and instead have names like X-rays or radio waves. The color of an object depends on which wavelengths of light it absorbs versus which wavelengths it reflects. Our eyes detect the reflected wavelengths to distinguish different colors, as illustrated in FIGURE 7.24.

FIGURE 7.23 The Wavelengths of Light

FIGURE 7.24 Colored Objects Reflect Different Wavelengths of Light

There are three dimensions of color perception:

1. Brightness, which varies from dark to light
2. Hue, which varies continuously through blue, green, yellow, orange, and red (and is what most people mean when they use the term color)
3. Saturation, which varies from rich, full colors to gray; for example, rich red through pink to gray as saturation decreases

It is important to understand that the perception of a particular hue is not a simple function of the wavelength of light. For example, a patch reflecting light of a particular wavelength is perceived as various different hues, depending on several factors, including the intensity of illumination, prior exposure to a different stimulus, and the surrounding field. As illumination fades, the blues in a painting or a rug appear more prominent and the reds appear duller, even though the wavelength distribution in the light reflecting off those objects has not changed. In addition, the hue perceived at a particular point is strongly affected by the pattern of wavelengths and intensities in other parts of the visual field, as FIGURE 7.25 illustrates. To understand how the visual system creates our experience of color, we must understand how cone photoreceptors work.

FIGURE 7.25 Color Perception

Color perception requires receptor cells that differ in their sensitivities to different wavelengths

The first stage of color detection is accomplished by different types of cone photoreceptors. On the basis of observations of mixing pigments and lights, scientists at the start of the nineteenth century predicted that there would be three separate kinds of receptors in the retina. Endorsed in 1852 by the great physiologist-physicist-psychologist Hermann von Helmholtz, this trichromatic hypothesis (from the Greek tri, “three,” and chroma, “color”) was widely accepted. Helmholtz predicted that blue-sensitive, green-sensitive, and red-sensitive receptors would be found, that each would be sharply tuned to its part of the spectrum, and that each type would have a separate path to the brain. The color of an object would be recognized, then, on the basis of which color receptor(s) were activated.
This system would be like the mechanisms for discriminating touch and temperature on the basis of which skin receptors and labeled neural lines are activated (see Chapter 5).

Later in the nineteenth century, physiologist Ewald Hering proposed an alternative explanation. He argued, on the basis of visual experience, that there are four unique hues (blue, green, yellow, red) and three opposed pairs of colors—blue versus yellow, green versus red, and black versus white—and that three physiological processes with opposed positive and negative values must therefore be the basis of color vision. As we will see, both this opponent-process hypothesis and the trichromatic hypothesis are encompassed in current color vision theory, but neither of the old hypotheses is sufficient by itself.

Measurements of photopigments in cones have borne out the trichromatic hypothesis in part. Each cone of the human retina has one of three classes of pigments (each pigment has a name, but we’ll just refer to them as opsins). The response of the cone depends on which wavelength of light its pigment absorbs to start the process depicted in Figure 7.4. These pigments do not, however, have the narrow spectral distributions that Helmholtz predicted. Despite what you may have heard in other classes (or even read in other textbooks!), the human visual system does not have receptors that are sensitive to only narrow parts of the visible spectrum, such as “red” cones and “green” cones; instead, the receptor pigments exhibit broad sensitivities that substantially overlap. In fact, two of the three retinal cone pigments show some response to light of almost any wavelength. The pigments have different peaks of sensitivity, but even the peaks are not as far apart as Helmholtz predicted, and those peaks don’t always correspond to a particular color. As FIGURE 7.26 shows, the cone pigment peaks occur at about 420 nm (in the part of the spectrum where we usually see violet under daylight conditions), about 530 nm (where most of us see green), and about 560 nm (where most of us see yellow-green). Despite Helmholtz’s prediction, none of the curves peak in the long-wavelength part of the spectrum, where most of us see red (about 630 nm).

FIGURE 7.26 Spectral Sensitivities of Human Photopigments

Under ordinary conditions, almost any object, no matter what color it is, stimulates at least two kinds of cones, thus ensuring high visual acuity and good perception of form. It is the subsequent processing performed by the nervous system, comparing the differences in activation across cones, that extracts color information about the light falling on the retina. Thus, certain ganglion cells and certain neurons at higher stations in the visual system are color-specific, even though the photoreceptors are not. In a similar manner, photoreceptors are not form-specific (they respond to single points of light), but form is detected later in the system, by comparison of the outputs of different receptors. Because the cones are not color detectors, the simplest name for each derives from its peak area of wavelength sensitivity: short (S) for the receptor with peak sensitivity at about 420 nm, medium (M) for 530 nm, and long (L) for 560 nm (see Figure 7.26). There are typically twice as many L as M receptors, but the ratio varies across individuals (FIGURE 7.27A,B).
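To see how comparing activation across cones can signal wavelength, consider the hedged sketch below, our own illustration rather than anything from the text. It stands in for the broad, overlapping sensitivity curves of Figure 7.26 with simple Gaussians (the peak wavelengths come from the text; the shared curve width and the intensity units are invented) and shows that dimming a light scales all three cone responses equally, so comparisons between cone types still identify the wavelength.

```python
import numpy as np

# Gaussian stand-ins for the broad, overlapping cone sensitivities of
# Figure 7.26. Peak wavelengths are from the text; the shared width and
# the intensity units are invented for illustration.
PEAKS = {"S": 420.0, "M": 530.0, "L": 560.0}
WIDTH = 60.0  # nm

def cone_responses(wavelength, intensity=1.0):
    return {name: intensity * np.exp(-((wavelength - peak) / WIDTH) ** 2)
            for name, peak in PEAKS.items()}

for wavelength, intensity in [(500, 1.0), (600, 1.0), (600, 0.25)]:
    r = cone_responses(wavelength, intensity)
    print(f"{wavelength} nm at intensity {intensity:<4}: "
          + "  ".join(f"{name}={value:.3f}" for name, value in r.items())
          + f"   L/M ratio = {r['L'] / r['M']:.2f}")
# Dimming the 600 nm light shrinks every cone's response, but the L:M
# comparison is unchanged. No single cone can signal wavelength (a weak
# preferred light and a strong non-preferred light look alike to it);
# the comparison across cone types can.
```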
There are far fewer S cones, which explains why acuity is much lower with short-wavelength illumination (blue light) than in the other parts of the visible spectrum. In some insects, including bees, the short-wavelength receptors respond to ultraviolet wavelengths that we humans cannot see. This ability permits bees to see color patterns in flowers that are invisible to us (FIGURE 7.28). Most birds have not three but four different types of cones, and they can also detect ultraviolet light (Tedore and Nilsson, 2019).

FIGURE 7.27 Distributions of Various Types of Cones in Three Humans

FIGURE 7.28 How Flowers Look to the Birds and the Bees

The genes for wavelength-sensitive pigments in the retina have been analyzed, and the similarities in structure of the three genes suggest that they are all derived from a common ancestral gene (Carvalho et al., 2017). In addition, the genes for the medium- and long-wavelength pigments occupy adjacent positions on the X chromosome and are much more similar to each other than either is to the gene for the short-wavelength pigment on chromosome 7. Probably our primate ancestors had only one photopigment gene on the X chromosome, which became duplicated. Then mutations caused the two genes to become more and more different, until their responses to various wavelengths of light were no longer the same. Thus, our ancestors went from having only two cone pigments (one on the X chromosome and the S pigment on chromosome 7) to three, with an associated improvement in color vision. This evolution of a third photopigment may have happened recently (in evolutionary terms); for example, most South American monkeys have only a single longer-wavelength pigment.

The fact that the genes for the M and L pigments are on the X chromosome also explains why defects of red-green color vision are much more frequent in human males (about 8 percent) than females (about 0.5 percent). Because males have only one X chromosome, a mutation in the gene for either the M or L pigment can impair color vision (FIGURE 7.27C). But if a female has a defective photopigment gene on one of her X chromosomes, the normal copy on her other X chromosome can compensate; a female is affected only if both of her X chromosomes carry a defective gene, which is far less likely. Even though the term color blindness is commonly used, most people with impaired color vision are able to distinguish some hues. Complete color blindness can be caused by brain lesions or by the congenital absence of any cones, but that is extremely rare in people. Likewise, even those mammalian species with weak color vision can discriminate some colors, as we discuss in Signs & Symptoms, next.

SIGNS & SYMPTOMS

Most Mammalian Species Have Some Color Vision

Animals exhibit different degrees of color vision. Many species of birds, fishes, and insects have excellent color vision. Humans and Old World monkeys also have an excellent ability to discriminate wavelengths. Many other mammals (e.g., cats) cannot discriminate wavelengths very well, but most mammals have at least some degree of color vision. Although only certain primates have good trichromatic color vision (vision based on three classes of cone photopigments), most mammalian species have at least dichromatic color vision (based on two classes of cone pigments). Most so-called color-blind (actually color-deficient) people have dichromatic vision and can distinguish short-wavelength stimuli (blue) from long-wavelength stimuli (not blue) (FIGURE 7.29).
FIGURE 7.29 Simulating Color Blindness. (A) The photograph on the right has been adjusted to simulate the experience of the most common form of color blindness in humans, which is the absence of cones sensitive to medium-wavelength light (M cones). For such individuals, the world’s colors consist of blue (detected by the short-wavelength photopigment encoded on chromosome 7) and not blue (detected by the long-wavelength photopigment encoded on the X chromosome). (B) In a typical test for color vision, dichromats may have a difficult time detecting the numerals displayed in figures like this.

When a gene carrying a third photopigment was introduced into photoreceptors of adult male squirrel monkeys with such dichromatic vision, they soon displayed excellent trichromatic vision (Mancuso et al., 2009). Likewise, introducing photopigment genes in mice enabled them to discriminate colors they normally cannot see (G. H. Jacobs et al., 2007), so it may be possible to correct dichromatic vision in humans.

Among those species of South American monkeys that are generally dichromats, some females are actually trichromatic. Why? Because the gene encoding one photopigment is on the X chromosome. Since females have two X chromosomes, if the two chromosomes carry different genes for the photopigment, then the female possesses a total of three different kinds of cones and therefore has trichromatic vision. Interestingly, a woman may carry slightly different genes for one of the photopigments on her two X chromosomes (Bosten, 2022) and therefore have four different kinds of cones. Such “tetrachromats” tend to be very good at discriminating colors and very sensitive to clashing colors. It’s interesting to speculate on whether such women have a different experience of, for example, green than those of us who are trichromats and dichromats. We will take up the question of our subjective experience of color again in Chapter 14.

Some retinal ganglion cells and LGN cells show spectral opponency

Monkeys discriminate colors about as well as humans do. Recordings made from monkeys reveal that most ganglion cells and LGN cells are excited and fire in response to some wavelengths and are inhibited by other wavelengths. FIGURE 7.30A shows the response of one such LGN cell as a light centered on its receptive field changes from one wavelength to another. Firing is stimulated by wavelengths above 600 nm, where the L cones are most sensitive; it is inhibited at shorter wavelengths, where the L cones are less sensitive than the M cones. A cell exhibiting this response pattern is therefore called a plus L/minus M cell (+L/–M). This is an example of a spectrally opponent cell (or color-opponent cell), because two regions of the spectrum have opposite effects on the cell’s rate of firing, conforming to Hering’s opponent-process hypothesis.

FIGURE 7.30 Responses by the Four Main Types of Spectrally Opponent Cells in Monkey LGN

Each spectrally opponent ganglion cell receives input from two or three different kinds of cones through bipolar cells. The connections from at least one type of cone are excitatory, and those from at least one other type are inhibitory. The spectrally opponent ganglion cells thus record the difference in stimulation of different types of cones. For example, a +M/–L cell responds to the difference in the excitation of M and L cones. The peaks of the sensitivity curves of the M and L cones are not very different (see Figure 7.26).
However, whereas the M-minus-L difference curve (FIGURE 7.30B) shows a clear peak at about 500 nm (in the green part of the spectrum), the L-minus-M difference function (see Figure 7.30A) shows a peak at about 650 nm (in the red part of the spectrum). Thus, +M/–L and +L/–M cells yield distinctly different neural response curves. LGN cells that are excited by the L and M cells but inhibited by S cells—that is, +(L+M)/−S cells—peak in the red range (FIGURE 7.30C), while cells excited by S but inhibited by L and M—that is, +S/−(L+M) cells—peak in the blue-violet range (FIGURE 7.30D).

Spectrally opponent neurons are the second stage in the system for color perception, but they still cannot be called color cells, because (1) they send their outputs into many higher circuits—for detection of form, depth, and movement, as well as hue; and (2) their peak wavelength sensitivities do not correspond precisely to the wavelengths that we see as the principal hues. The brightness detectors receive stimulation from both M and L cones (+M/+L); the darkness detectors are inhibited by those same cones (–M/–L).

In the monkey LGN, 70–80 percent of the cells are spectrally opponent; in the cat, very few spectrally opponent cells are found—only about 1 percent. This difference explains why monkeys so easily distinguish between colors and why it’s so difficult to train cats to discriminate even large differences in color.

Some visual cortical cells and regions appear to be specialized for color perception

In the cortex, spectral information is used for various kinds of information processing. Forms are segregated from their background by differences in color or intensity (or both). The most important role that color plays in our perception is to denote which parts of a complex image belong to one object and which belong to another. Some animals use displays of brightly colored body parts to call attention to themselves, but color can also be used as camouflage.

Some spectrally opponent cortical cells contribute to the perception of color. These cells are not just responding to the differences between two types of cones, as retinal ganglion cells and LGN cells do. Rather, they are responding to differences in colors that we perceive; in other words, they are perceptually opponent, as predicted by Hering: red versus green, blue versus yellow, and black versus white (R. L. De Valois and De Valois, 1993). The spectral responses of these cells correspond to the wavelengths of the principal hues specified by human observers, and their characteristics also help explain other color phenomena. Visual cortical region V4 is particularly rich in color-sensitive cells; each of these cells truly responds best to a particular hue, including the four that Hering postulated (blue, green, yellow, red). V4 cells respond best if the color outside the receptive field is different from the color preferred inside the receptive field (Zhang et al., 2023).

How’s It Going?

1. What two main hypotheses were developed to explain our ability to discriminate colors? Which aspects of the visual system appear to match each hypothesis?
2. Describe some examples in which our perception of color is not simply the detection of particular wavelengths of light.
3. Why do we label cones as S cones, M cones, and L cones rather than blue, green, and red cones?
4. Why are men more likely than women to have difficulty distinguishing some colors?

FOOD FOR THOUGHT

Why is it a good idea to make life rafts yellow if they are to be detected on a blue sea?
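Before leaving color vision, the toy cone model from earlier in this section can be extended one stage to illustrate spectral opponency. This is again our own hedged sketch: the Gaussian cone curves and the equal-weight sums and differences are invented stand-ins, so the peaks it finds only roughly echo the real curves in Figure 7.30.

```python
import numpy as np

# Continuing the toy cone model from earlier (same invented Gaussians):
# spectrally opponent cells modeled as sums and differences of cone signals.
PEAKS = {"S": 420.0, "M": 530.0, "L": 560.0}
WIDTH = 60.0

def cones(wl):
    return {name: np.exp(-((wl - peak) / WIDTH) ** 2)
            for name, peak in PEAKS.items()}

def opponent_cells(wl):
    c = cones(wl)
    return {
        "+L/-M": c["L"] - c["M"],
        "+M/-L": c["M"] - c["L"],
        "+(L+M)/-S": (c["L"] + c["M"]) / 2 - c["S"],
        "+S/-(L+M)": c["S"] - (c["L"] + c["M"]) / 2,
    }

wavelengths = range(400, 701, 10)
for name in opponent_cells(500):
    best = max(wavelengths, key=lambda wl: opponent_cells(wl)[name])
    print(f"{name:>10} is excited most strongly near {best} nm")
# As in Figure 7.30, each cell type is excited in one part of the spectrum
# and inhibited in another; here the +M/-L cell peaks near 500 nm and the
# +L/-M cell at longer wavelengths. (The real cells' peaks differ somewhat,
# since these cone curves are only rough stand-ins.)
```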
7.4 What versus Where: Cortical Visual Areas Are Organized into Two Streams

The Road Ahead

Vision is so crucial for us primates that we devote lots of brain space to analyzing visual stimuli and work hard to correct vision deficiencies. After reading this final section of the chapter, you should be able to:

7.4.1 Identify the major streams of visual processing that deal with what a stimulus is, and where it is.
7.4.2 Understand why D.F. can use vision to guide her movements but not to recognize objects.
7.4.3 Describe the underlying causes of nearsightedness and how it can be avoided.
7.4.4 Understand the role of visual experience in sharpening vision, especially in children.

Over 40 years ago, scientists proposed that monkeys have two main visual processing streams, both originating in primary visual cortex: a ventral processing stream responsible for visually identifying objects, and a dorsal stream responsible for appreciating the spatial location of objects and for visually guiding our interactions with them (FIGURE 7.31). They called these processing streams, respectively, the what and where streams (Pitcher and Ungerleider, 2021).

FIGURE 7.31 Parallel Processing Pathways in the Visual System

PET studies, as well as studies of people with brain lesions, indicate that the human brain possesses what and where visual processing streams similar to those that have been found in monkeys. In Chapter 15, we’ll discuss a portion of the ventral stream, the fusiform gyrus, that is specialized to identify faces.

Discovery of these separate visual cortical streams helps us understand cases like that of patient D.F., described at the start of this chapter. Recall that, as a result of carbon monoxide poisoning, D.F. lost the ability to perceive faces and objects but retained the ability to reach and grasp objects under visual control. D.F.’s visual ventral (what) stream appears to have been devastated, but her dorsal (where) stream seems unimpaired. An opposite kind of dissociation had already been reported: people with damage to the dorsal parietal cortex often have optic ataxia, a specific difficulty in using vision to reach for and grasp objects, despite being able to identify objects correctly (R. A. Andersen et al., 2014). We’ll discuss optic ataxia again in Chapter 14.

MRIs of D.F.’s brain (FIGURE 7.32A) reveal diffuse damage concentrated in the ventrolateral occipital cortex. Throughout the brain there is evidence of atrophy, indicated by shrunken gyri and enlarged sulci. FIGURE 7.32B shows the area activated in fMRI recordings when healthy participants viewed pictures of objects; it corresponds to D.F.’s lateral occipital lesion. When D.F. reached for and grasped objects, her fMRI activation in the parietal lobe was similar to that of control participants, indicating that her dorsal stream is largely intact. D.F.’s intact dorsal pathway not only tells her where objects are, but also guides her movements to use these objects properly (Ganel and Goodale, 2019).

FIGURE 7.32 Object Recognition Centers in the Brain

It is still puzzling that one part of D.F. knows exactly how to grasp a pencil held in front of her, yet another part of her—the part that talks to you—has no idea whether the object she’s holding is a pencil, a ruler, or a bouquet of flowers. This condition is reminiscent of the cortical damage that causes blindsight, mentioned earlier: people with such damage report being unable to see, but they show evidence that they can.
In Chapter 14 we’ll learn about other people who can see only one thing at a time, or who can see faces but cannot identify to whom they belong. Imagining what such disjointed visual experience must be like helps us appreciate how effortlessly our brains usually bind together information with our marvelous sense of sight.

Visual neuroscience can be applied to alleviate some visual deficiencies

Vision is so important that many investigators have sought ways to prevent its impairment, to improve inadequate vision, and to restore sight to the blind. In the United States, half a million people are blind. Recent medical advances have reduced some causes of blindness but have increased blindness from other causes. For example, medical advances permit people with diabetes to live longer, but because we don’t know how to prevent blindness associated with diabetes, there are more people alive today with diabetes-induced blindness. In the discussion that follows, we will first consider ways of avoiding the impairment of vision. Then we take up ways of improving an impaired visual system.

Reducing visual impairment

Studies of the development of vision show that the incidence of myopia (nearsightedness) can be reduced. Myopia develops if the eyeball grows too long, causing the eye to focus images in front of the retina rather than on the retina (see Figure 7.2). As a result, distant objects appear blurred. It is now clear that the reason some children develop myopia is that certain environmental factors cause the eyeball to grow excessively. Previously it was thought that the modern habit of looking closely at nearby objects (books, computer screens, and so on) might be responsible for myopia, but mounting evidence suggests that indoor lighting may be to blame (Lagrèze and Schaeffel, 2017). Before civilization, most people spent the bulk of their time outdoors, looking at objects illuminated by sunlight. But with the advent of indoor lighting, we’ve come to spend a lot of time looking at things with light that, while containing many wavelengths, does not exactly match the composition of sunlight.

Several studies found that children with myopia spend less time outdoors than do other children, but that correlation could be caused by genes that favor both myopia and indoor activities, like reading. Indeed, the advent of public schools in various nations was accompanied by increased rates of myopia. However, one of these studies focused on people of Chinese origin who lived in either Singapore, where crowded conditions mean that people spend little time outdoors, or Sydney, Australia. Even though these populations should be genetically similar, and the length of school days is similar, 30 percent of the Chinese children living in Singapore, who averaged only 30 minutes a day outdoors, were myopic, versus only 3 percent of those living in Sydney, who averaged 2 hours a day outdoors (Rose et al., 2008). What’s more, in these populations myopia correlates much more strongly with time spent indoors than with time spent reading. Another study found that children required to spend their recesses outdoors were less likely to develop myopia (P. C. Wu et al., 2018). One study found that second graders during the COVID-19 shutdown spent less time outdoors than the previous cohort of second graders, and also grew longer eyeballs than the children the year before (FIGURE 7.33).

FIGURE 7.33 Pandemic-Augmented Myopia

Of course, too much sunlight can be a bad thing, especially for our skin.
So almost all children in Australia wear hats to shield their faces when outdoors, yet they still benefit from being outdoors in terms of avoiding myopia. Likewise, there’s no evidence that wearing sunglasses blocks the benefit of light from the sun. The next challenge will be to determine what it is about indoor lighting, as opposed to sunlight, that encourages the eyeball to grow excessively in children, leading to myopia.

Hey There, You with the Stars over Your Eye. As a treatment for amblyopia, this girl is wearing a patch over her “good” eye—the one she has been relying on while ignoring information from her other, “weak” eye. Increased visual experience through the weak eye will strengthen its influence on the cortex.

Exercising vision

The misalignment of the two eyes (lazy eye) can lead to a condition called amblyopia, in which acuity is poor in one eye, even though the eye and retina are normal. If the two eyes are not aligned properly during the first few years of life, the primary visual cortex of the child tends to suppress the information arriving from one eye, and that eye becomes functionally blind (see Figure 4.12B). Studies of the development of vision in children and other animals show that most cases of amblyopia are avoidable. The balance of the eye muscles can be surgically adjusted to bring the two eyes into better alignment. Alternatively, if the weak eye is given regular practice, with the good eye covered, vision can be preserved in both eyes. Attempts to alleviate amblyopia by training alone, however, have produced mixed results. The optimal treatment appears to be a combination of surgical correction, eye patches, and visual exercises (Pediatric Eye Disease Investigator Group, 2005).

SIGNS & SYMPTOMS

Macular Degeneration Is the Leading Cause of Vision Loss as We Age

Macular degeneration is a visual impairment caused by damage to the retina. The most common type, “dry” macular degeneration, is caused by atrophy of the retinal pigmented epithelium (see Figure 7.3), resulting in death of the overlying photoreceptors. In the more severe “wet” macular degeneration, abnormal growth of retinal capillaries leads to detachment of the retina and/or death of photoreceptors. The damage is mostly restricted to the fovea, but because visual acuity is poor for the rest of the retina, vision is quite impaired (FIGURE 7.34). An NIH-conducted trial (the Age-Related Eye Disease Study [AREDS]) found that supplemental vitamins and antioxidants could slow the disease slightly (Lin et al., 2022).

FIGURE 7.34 Simulating Visual Experience with Macular Degeneration

How’s It Going?

1. What are the two main streams of visual processing in the cortex, and what aspects of vision does each stream support?
2. Describe D.F.’s symptoms, and relate them to the brain damage revealed by MRIs.
3. What is the evidence that indoor lighting may cause myopia in children?
4. Why does degeneration of the fovea impair acuity in the whole visual field?

FOOD FOR THOUGHT

Why might natural selection have evolved a specific visual network to analyze faces?

RECOMMENDED READING

DeSalle, R., and Bachor, H. (2020). A Natural History of Color: The Science behind What We See and How We See It. New York, NY: Penguin Books.

Gregory, R. L. (2015). Eye and Brain: The Psychology of Seeing (5th ed.). Princeton, NJ: Princeton University Press.

Kreiman, G. (2021). Biological and Computer Vision. Cambridge, UK: Cambridge University Press.
Masland, R. (2020). We Know It When We See It: What the Neurobiology of Vision Tells Us about How We Think. New York, NY: Basic Books.

Purves, D., and Lotto, R. B. (2011). Why We See What We Do Redux: A Wholly Empirical Theory of Vision. Sunderland, MA: Oxford University Press/Sinauer.

Wolfe, J. M., Kluender, K. R., Levi, D. M., Bartoshuk, L. M., et al. (2021). Sensation & Perception (6th ed.). Sunderland, MA: Oxford University Press/Sinauer.

VISUAL SUMMARY

You should be able to relate each summary to the adjacent illustration, including structures and processes. The online version of this Visual Summary includes links to figures, animations, and activities that will help you consolidate the material.

LIST OF KEY TERMS

accommodation, amacrine cells, amblyopia, binocular, bipolar cells, blindsight, blind spot, brightness, ciliary muscles, complex cortical cells, cones, convergence, cornea, extraocular muscles, extrastriate cortex, fovea, ganglion cells, horizontal cells, hue, iris, lateral geniculate nucleus (LGN), lateral inhibition, lens, macular degeneration, myopia, occipital cortex, off-center bipolar cells, off-center ganglion cells, off-center/on-surround, on-center bipolar cells, on-center ganglion cells, on-center/off-surround, opponent-process hypothesis, optic ataxia, optic chiasm, optic disc, optic nerve, optic radiations, optic tract, photopic system, photoreceptor adaptation, photoreceptors, primary visual cortex (V1), pupil, range fractionation, receptive field, refraction, retina, rhodopsin, rods, saturation, scotoma, scotopic system, simple cortical cells, spatial-frequency model, spectrally opponent cell, topographic projection, transduction, trichromatic hypothesis, visual acuity, visual field, wavelength
