CHAPTER 6 Hearing, Balance, Taste, and Smell Neil V. Watson Simon Fraser University S. Marc Breedlove Michigan State University Hold the Phone It’s like a classic horror movie scene: a scientist using amazing technology to reanimate parts of dead bodies, seeking out Nature’s secrets. But when Georg von Békésy started experimenting with cadavers in the 1920s, he was not trying to create life. He was interested in a more practical question: Why are human ears so much more sensitive than most microphones? Békésy, an engineer, thought that learning how the human ear works could help him to design better microphones for his employer, the Hungarian phone company. He gathered cadavers from local hospitals and devised a clever dissection that would reveal the inner ear without destroying it. (His work was not always appreciated by his fellow engineers; they didn’t like finding their drill press full of human bone dust in the morning.) Bringing his background in physics to bear, Békésy devised exquisitely precise physical models and biophysical experiments that let him measure extremely brief, minuscule movements in the inner ear. His subsequent discoveries provided us with the key to understanding how we translate a stream of auditory data—sounds—into neural activity that the brain can understand. In the end, Békésy did not come up with a better microphone, but his discoveries have helped restore hearing to thousands of people who once were deaf, as we’ll see in this chapter. Your existence is the direct result of the keen senses possessed by your distant ancestors—senses that enabled them to find food and mates and to avoid predators and other dangers long enough to reproduce. In this chapter we consider several of the amazing sensory systems that we use to monitor important signals from distant sources, especially sounds (by audition) and smells (by olfaction). We’ll discuss related systems for detecting position and movement of the body (the vestibular system, related to the auditory system) and tastes of foods (the gustatory or taste sense, which like olfaction is a chemical sense). We begin with hearing, because the auditory system evolved from special mechanical receptors related to the touch system that we discussed in Chapter 5. 6.1 Hearing: Pressure Waves in the Air Are Perceived as Sound The Road Ahead The first part of the chapter is concerned with the structure and function of the ear, especially the inner ear, which gives us our sense of hearing. After reading this section, you should be able to: 6.1.1 Explain how the external ear and middle ear capture and concentrate sound energy and convey it to the inner ear. 6.1.2 Sketch the anatomy of the middle and inner ears, highlighting the location of sensorineural components. 6.1.3 Explain how vibrations travel through the cochlea and how they are converted into neural activity. 6.1.4 Describe the process by which the organ of Corti encodes the frequencies of sounds. 6.1.5 Summarize the neural projections between the cochlea and brain. 6.1.6 Identify the principal auditory pathways and structures of the brain, and describe the integration of signals from the left and right ears. 6.1.7 Describe the orderly map of frequencies found at each level of the auditory system. The Ears Have It The external ears, or pinnae, of mammals come in a variety of shapes, each adapted to a particular ecological niche. Many mammals can move their ears to direct them toward a particular sound. 
In such cases, the brain must account for the position of the ears to judge where a particular sound came from. (Fennec fox [top left]; whispering bat [top right]; sea otter [bottom left]; chimpanzee [bottom right].) Hearing is vital for the survival of most species. There are animals that don’t use vision, like blind cave fish, but so far we don’t know of any vertebrate animals that don’t use hearing to detect sound in air and/or water. Humans can produce and perceive an impressive variety of vocalizations—from barely audible murmurs to soaring flights of song—but we especially rely on speech sounds for our social relations and to transmit knowledge to others. Across the animal kingdom, species produce and perceive sounds in wildly different ways, shaped by their unique evolutionary history. Birds sing and crickets chirp in order to attract mates, while monkeys grunt and screech and burble to signal comfort, danger, and pleasure. Owls and bats exploit the directional property of sound to locate prey and avoid obstacles in the dark, because unlike light, sound can be detected in the darkest night, or even around a corner. How does energy transmitted through air become the speech, music, and other sounds we hear? Your auditory system detects changes in the vibration of air molecules that are caused by sound sources: it senses both the intensity of sounds, measured in decibels (dB) and perceived as loudness, and their frequency, measured in cycles per second, or hertz (Hz), and perceived as pitch. BOX 6.1 describes some of the basic properties of sound that are relevant to our discussion of hearing. The outer ear directs sound into the inner parts of the ear, where the mechanical force of sound is transduced into neural activity: the action potentials that inform the brain. Your ears are incredibly sensitive organs; in fact, one of the main jobs of your powers of attention is to filter out the constant barrage of unimportant little noises that your ears detect (see Chapter 14). BOX 6.1 The Basics of Sound We perceive a repetitive pattern of local increases and decreases in air pressure as sound. Usually this oscillation is caused by a vibrating object, such as a loudspeaker or a person’s larynx during speaking. A single alternation of compression and expansion of air is called one cycle. The figure illustrates the oscillations in pressure produced by a vibrating loudspeaker. Because the sound produced by the loudspeaker here has only one frequency of vibration, it is called a pure tone and can be represented by a sine wave. A pure tone is described physically in terms of two measures: Amplitude Also called intensity, this is usually measured as sound pressure in dynes per square centimeter (dyn/cm²). Our perception of amplitude is termed loudness, expressed as decibels (dB). The decibel scale is logarithmic: zero decibels corresponds roughly to the threshold of human hearing, a whisper is about 20 dB, and a departing jetliner can be more than a million times as intense as that whisper, at up to 140 dB. Frequency This is the number of cycles per second, measured in hertz (Hz). So, the A above middle C on a piano has a frequency of 440 Hz. Our perception of frequency is termed pitch. Most sounds are more complicated than a pure tone. For example, a sound made by a musical instrument contains a fundamental frequency and harmonics. The fundamental is the basic frequency, and the harmonics are multiples of the fundamental. For example, if the fundamental is 440 Hz, the harmonics are 880 Hz, 1320 Hz, 1760 Hz, and so on.
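To make the logarithmic decibel scale and the harmonic series concrete, here is a minimal illustrative sketch (not part of the original box); it assumes the standard 20-micropascal reference pressure used for sound pressure level, which is equivalent to 0.0002 dyn/cm².

```python
import math

P_REF = 20e-6  # reference pressure in pascals (0.0002 dyn/cm^2), defined as 0 dB

def spl_db(pressure_pa):
    """Convert a sound pressure (in pascals) to decibels of sound pressure level."""
    return 20 * math.log10(pressure_pa / P_REF)

def harmonics(fundamental_hz, n=3):
    """Return the first n harmonics (integer multiples above the fundamental)."""
    return [k * fundamental_hz for k in range(2, n + 2)]

print(round(spl_db(20e-6)))  # 0 dB: roughly the threshold of hearing
print(round(spl_db(2e-4)))   # 20 dB: about a whisper
print(round(spl_db(200.0)))  # 140 dB: a million times the whisper's pressure
print(harmonics(440))        # [880, 1320, 1760] Hz, as in the example above
```

Because the scale is logarithmic, doubling the sound pressure adds only about 6 dB (20 × log10 2 ≈ 6), which is how the enormous range of pressures the ear handles is compressed into manageable numbers.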
When different instruments play the same note, the notes differ in the relative intensities of the various harmonics, and there are subtle qualitative differences between instruments in the way they commence, shape, and sustain the sound; these differences are what give each instrument its characteristic voice, or timbre. The external ear captures, focuses, and filters sound The oddly shaped fleshy objects that most people call ears are properly known as pinnae (singular pinna). Aside from their occasional utility as handles and jewelry hangers, the pinnae funnel sound waves into the second part of the external ear: the ear canal (or auditory canal). The pinna is a distinctly mammalian characteristic, and mammals show a wide array of ear shapes and sizes. Few humans can move their ears—and even then only enough to look silly—but many other mammals deftly shape and swivel their pinnae to help locate the source of a sound. Animals with exceptional auditory localization abilities, such as bats, may have especially mobile ears. The “ridges and valleys” of the pinna modify the character of sound that reaches the middle ear. Some frequencies of sound are enhanced; others are suppressed. For example, the shape of the human ear especially increases the reception of sounds between 2000 and 5000 Hz—a frequency range that is important for speech perception. The shape of the external ear—and, in many species, the direction in which it is being pointed—provides additional cues about the direction and distance of the source of a sound, as we will discuss later in this chapter. The middle ear concentrates sound energies A collection of tiny structures made of membrane, muscle, and bone—essentially a tiny biological microphone—links the ear canal to the neural receptor cells of the inner ear (FIGURE 6.1A). This middle ear (FIGURE 6.1B) consists of the taut tympanic membrane (eardrum) sealing the end of the ear canal plus a chain of tiny bones, called ossicles, that mechanically couple the tympanic membrane to the inner ear at a specialized patch of membrane called the oval window. These ossicles, the smallest bones in the body, are called the malleus (Latin for “hammer”), the incus (“anvil”), and the stapes (“stirrup”). A Touching Moment Helen Keller, who was both blind and deaf, said, “Blindness deprives you of contact with things; deafness deprives you of contact with people”—a poignant reminder of the importance of speech for our social lives. Here, Keller (center, accompanied by her aide and interpreter, Polly Thompson) communicates with U.S. President Dwight Eisenhower by feeling Eisenhower’s face as he speaks and makes facial expressions. Rather than living in sensory and social isolation, Keller honed her intact senses to such a degree that she was able to become a noted teacher and writer. Sound waves in the air strike the tympanic membrane and cause it to vibrate with the same frequency as the sound; as a result, the ossicles start moving too. Because of how they are attached to the eardrum, the ossicles concentrate and amplify the vibrations, focusing the pressures collected from the relatively large tympanic membrane onto the small oval window. This amplification is crucial for converting vibrations in air into movements of fluid in the inner ear, as we’ll see shortly. The middle ear is equipped with the equivalent of a volume control, which helps protect against the damaging forces of extremely loud noises.
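To get a feel for how much the ossicles amplify sound pressure on its way to the oval window, here is a rough back-of-the-envelope sketch; the area and lever values are typical approximations drawn from general physiology sources, not measurements given in this chapter.

```python
import math

# Approximate values, for illustration only
tympanic_area_mm2 = 55.0    # effective vibrating area of the tympanic membrane
oval_window_area_mm2 = 3.2  # area of the stapes footplate at the oval window
lever_ratio = 1.3           # mechanical advantage of the ossicular chain

pressure_gain = (tympanic_area_mm2 / oval_window_area_mm2) * lever_ratio
gain_db = 20 * math.log10(pressure_gain)

print(round(pressure_gain))  # ~22-fold increase in pressure
print(round(gain_db))        # ~27 dB of amplification
```

Roughly this much gain is needed because the fluid of the inner ear is far harder to set in motion than air; without the middle ear's concentrating action, most sound energy would simply bounce off the oval window. With that sense of scale in hand, we can return to the middle ear's built-in volume control.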
Two tiny muscles—the tensor tympani and the stapedius (see Figure 6.1B)—attach to the ends of the chain of ossicles. Within 200 milliseconds of the arrival of a loud sound, the brain signals the muscles to contract, which stiffens the chain of ossicles and reduces the effectiveness of the sounds. Interestingly, the middle-ear muscles activate just before we produce self-made sounds like speech or coughing, which is why we don’t perceive our own sounds as distractingly loud (Schneider and Mooney, 2018). The cochlea converts vibrational energy into neural activity The part of the inner ear that ultimately converts vibrations from sound into neural activity—the coiled, fluid-filled cochlea (from the Greek kochlos, “snail”)—is a marvel of miniaturization (FIGURES 6.1C and D). In an adult human, the cochlea measures only about 9 millimeters in diameter at its widest point—roughly the size of a pea. Fully unrolled, the human cochlea would be about 35–40 millimeters long. The cochlea is a spiral of three parallel canals: (1) the scala vestibuli (also called the vestibular canal), (2) the scala media (middle canal), and (3) the scala tympani (tympanic canal). The scala media contains the receptor system, called the organ of Corti, that converts vibration (from sound) into neural activity (see Figure 6.1D). It consists of three main structures: (1) the auditory sensory cells, called hair cells (FIGURE 6.1E), which bridge between the basilar membrane and the overlying tectorial membrane; (2) an elaborate framework of supporting cells; and (3) the auditory nerve terminals that transmit neural signals to and from the brain. FIGU R E 6 . 1 External and Internal Structures of the Human Ear View larger image When the ossicles transmit vibrations from the tympanic membrane to the oval window, waves or ripples are created in the fluid of the scala vestibuli, which in turn cause the basilar membrane to ripple, like shaking out a rug. A crucial feature of the basilar membrane is that it is tapered—it’s much wider at the apex of the cochlea than at the base. Thanks to this taper, each successive location along the basilar membrane shows its strongest response to a different frequency of sound. High frequencies have their greatest effects near the base, where the basilar membrane is narrow and comparatively stiff; low-frequency sounds produce a larger response near the apex, where the basilar membrane is wider, floppier, and has special properties that accentuate low frequencies (Sasmal and Grosh, 2019). RESEARCHERS AT WORK Georg von Békésy and the Cochlear Wave The discovery of the mechanics of the basilar membrane garnered a Nobel Prize for Georg von Békésy in 1961 (FIGURE 6.2). FIGU R E 6 . 2 Deformation of the Basilar Membrane Encodes Sound Frequencies View larger image The hair cells transduce movements of the basilar membrane into electrical signals The rippling of the basilar membrane is converted into neural activity through the actions of the hair cells, arranged along the length of the organ of Corti. Each hair cell features a sloping brush of minuscule hairs called stereocilia (singular stereocilium) on its upper surface. In Figure 6.1D you’ll notice that, although the bases of hair cells are embedded in the basilar membrane, the stereocilia nestle into hollows in the tectorial membrane that lies above. The hair cells—and especially the stereocilia themselves—thus form a mechanical bridge, spanning between the two membranes, that is forced to bend when sounds cause the basilar membrane to ripple. 
Even a tiny bend of the stereocilia produces a large and rapid depolarization of the hair cells. This depolarization results from the operation of a special type of large and nonselective ion channel found on stereocilia. Like spring-loaded trapdoors, these channels are mechanically popped open as stereocilia bend (Hudspeth, 2014), allowing an inrush of potassium (K⁺) and calcium (Ca²⁺) ions. Just as we saw in neurons (in Chapter 2), this depolarization leads to a rapid influx of Ca²⁺ at the base of the hair cell, which in turn causes synaptic vesicles there to fuse with the presynaptic membrane and release neurotransmitter, stimulating adjacent nerve fibers. The stereocilia channels snap shut again in a fraction of a millisecond as the hair cell sways back. This ability to rapidly switch on and off allows hair cells to accurately track the rapid oscillations of the basilar membrane with exquisite sensitivity. In the human cochlea, the hair cells are organized into a single row of about 3500 inner hair cells (IHCs, called inner because they are closer to the central axis of the coiled cochlea) and about 12,000 outer hair cells (OHCs) in three rows (see Figure 6.1D). Fibers of the vestibulocochlear nerve (cranial nerve VIII) contact the bases of the hair cells (see Figure 6.1E). Some of these fibers do indeed convey sound information to the brain, but the neural connections of the cochlea are a little more complicated than this. In fact, there are four kinds of neural connections with hair cells, each relying on a different neurotransmitter (Goutman et al., 2015), as you can see in FIGURE 6.3. FIGURE 6.3 Auditory Nerve Fibers and Synapses in the Organ of Corti The fibers are distinguished as follows: 1. IHC afferents convey to the brain the action potentials that provide the perception of sounds. IHC afferents make up about 95 percent of the fibers leading to the brain. 2. IHC efferents lead from the brain to the IHCs. They allow the brain to control the responsiveness of IHCs. 3. OHC afferents convey information to the brain about the mechanical state of the basilar membrane, but not the perception of sounds themselves. 4. OHC efferents from the brain enable it to activate a remarkable property of OHCs: the ability to change their length almost instantaneously (He et al., 2014). Through this electromechanical action, the brain continually modifies the stiffness of regions of the basilar membrane, resulting in both sharpened tuning and pronounced amplification (Hudspeth, 2014). Evidence is mounting that a complementary process also modifies the local stiffness of the tectorial membrane (see Figure 6.3A), further improving the tuning and amplification of the organ of Corti (Sellon et al., 2019). Now that the inner ear has transduced the vibrations from sound into trains of action potentials, the auditory signals must leave the cochlea and enter the brain. Auditory signals run from cochlea to cortex On each side of your head, about 30,000–50,000 auditory axons from the cochlea make up the auditory part of the vestibulocochlear nerve (cranial nerve VIII), and most of these afferent fibers carry information from the IHCs (each of which stimulates several nerve fibers) to the brain. If we record from these IHC afferents, we find that each one has a maximum sensitivity to sound of a particular frequency but will also respond to neighboring frequencies if the sound is loud enough.
For example, the auditory neuron whose responses are shown in red in FIGURE 6.4 has its best frequency at 1200 Hz (1.2 kHz)—that is, it is sensitive to even a very weak tone at 1200 Hz—but for sounds that are 20 dB louder, the cell will respond to frequencies from 500 to 1800 Hz. We call this the cell’s tuning curve. If the brain received a signal from only one such fiber, it would not be able to tell whether the stimulus was a weak tone of 1200 Hz or a stronger tone of 500 or 1800 Hz, or any frequency in between. Instead, the brain analyzes the activity from thousands of such units simultaneously to calculate the intensity and frequency of each sound. FIGU R E 6 . 4 Tuning Curves of Auditory Nerve Cells View larger image The inputs from the auditory nerves are distributed to both sides of the brain via the ascending network shown in FIGURE 6.5. First, the auditory nerve fibers terminate in the (sensibly named) cochlear nuclei, where some initial processing occurs. Output from the cochlear nuclei primarily projects to the superior olivary nuclei, each of which receives inputs from both right and left cochlear nuclei. This bilateral input makes the superior olivary nucleus the first brain site at which binaural (two-ear) processing occurs. As you might expect, this mechanism plays a key role in localizing sounds by comparing the two ears, as we’ll discuss shortly. FIGU R E 6 . 5 Auditory Pathways of the Human Brain View larger image The superior olivary nuclei pass information derived from both ears to the inferior colliculi, which are the primary auditory centers of the midbrain. Outputs of the inferior colliculi go to the medial geniculate nuclei of the thalamus. Outputs from the medial geniculate nuclei extend to several auditory cortical areas. The neurons within all levels of the auditory system, from cochlea to auditory cortex, display tonotopic organization; that is, they are arrayed to form an orderly map of sound frequencies (topos is Greek for “place”) from low frequency (sounds that we perceive as lower pitched or “bass”) to high frequency (perceived as higher pitched or “treble”) (Saenz and Langers, 2014). Furthermore, at the higher levels of the system, auditory neurons are not only excited by specific frequencies, but also inhibited by neighboring frequencies, resulting in much sharper tuning of the frequency responses of these cells. This precision helps us discriminate tiny differences in the frequencies of sounds. Brain-imaging studies in humans confirm that many sounds (tones, noises, and so on) activate the primary auditory cortex (A1), which is located on the upper surface of the temporal lobes. Speech sounds produce similar activation but additionally activate other, more specialized auditory areas (FIGURE 6.6). Interestingly, when hearing people use their visual systems to try to lip-read—that is, try to figure out what someone is saying solely by watching their lips—a similar pattern of activation of auditory cortex is observed (Bourguignon et al., 2020). This finding suggests that the auditory cortex integrates other, nonauditory, information with sounds. FIGU R E 6 . 6 Responses of the Human Auditory Cortex to Random Sounds versus Speech View larger image How’s It Going? 1. Identify the major components of the external ear. What does the external ear do? 2. Identify the three ossicles, and explain their function. To what structures do the ossicles connect, and how is their action moderated? 3. 
Provide a brief description of the organ of Corti, naming the components that are most important for the perception of sound. 4. Explain how the movement of hair cells transduces sound waves into action potentials. Compare and contrast the functions of inner hair cells and outer hair cells. 5. Sketch the major anatomical components of the auditory projections in the brain. Where does binaural processing first occur? What is tonotopic organization? What kind of processing does auditory cortex perform? FOOD FOR THOUGHT Why do you suppose that the tuning curves of auditory neurons are relatively broad? Wouldn’t it make more sense for each auditory neuron to focus on a single specific frequency? 6.2 Specialized Neural Systems Extract Information from Auditory Signals The Road Ahead Higher levels of the auditory system process different features of the sounds captured by the ears. After reading this section, you should be able to: 6.2.1 Explain the relationship between frequency and pitch, and discuss the ranges of frequencies perceived by humans and other species. 6.2.2 Describe the two major ways in which frequency information is encoded by the cochlea. 6.2.3 Explain the principal features of sound that the nervous system uses for sound localization. 6.2.4 Discuss the functions of auditory cortex, from an ecological perspective. 6.2.5 Evaluate the importance of experience in the development and tuning of the auditory system, throughout the life-span. 6.2.6 Describe the relationship between musical experience and the development of auditory skills in music and other domains. At least when we’re young, most of us can hear sounds ranging from 20 Hz to about 20,000 Hz, and within this range we can distinguish between sounds that differ by just a few hertz. Our ability to discern many frequencies simultaneously, and accurately identify where in the world they are coming from, helps us to define the spaces and sound emitters around us—acoustical objects as varied as insects and tubas—and identify the ones that are important for our daily lives. The pitch of sounds is encoded in two complementary ways Differences in frequency are important for our sense of pitch, but pitch and frequency are not synonymous. Frequency describes a physical property of sounds (see Box 6.1), but pitch relates solely to our subjective perception of those sounds. This is an important distinction because frequency is not the sole determinant of perceived pitch; at some frequencies, higher-intensity sounds may seem higher pitched, and changes in pitch do not precisely parallel changes in frequency. How do we distinguish pitches? Two signals from the cochlea appear to inform the brain about the pitch of sounds: 1. According to place coding theory, the pitch of a sound is determined by the location of activated hair cells along the length of the basilar membrane, as we discussed in this chapter’s Researchers at Work feature. So, activation of receptors near the base of the cochlea (which is narrow and stiff and responds to high frequencies) signals treble, and activation of receptors nearer the apex (which is wide and floppy and responds to low frequencies) signals bass. This is another example of a labeled line system, which we discussed in the context of the sense of touch in Chapter 5—here, each neuron fires in response to a favorite frequency. 2. A complementary account called temporal coding theory proposes that the frequency of a sound is encoded in the rate of firing of auditory neurons. 
According to this model, the frequency of action potentials produced by the neuron is directly related to the number of cycles per second (i.e., hertz) of the sound. For example, a 500 Hz sound might cause some auditory neurons to fire 500 action potentials per second. Encoding sound frequency within volleys of action potentials averaged across a number of neurons with similar tunings provides the brain with a reliable additional source of pitch information. Experimental evidence indicates that we rely on both of these processes to discriminate the pitch of sounds. Temporal coding is most evident at lower frequencies, up to about 4000 Hz: auditory neurons can fire a maximum of only about 1000 action potentials per second, but to some extent they can encode sound frequencies that are multiples of the action potential frequency. Beyond about 4000 Hz, however, this encoding becomes impossible, and pitch discrimination relies on place coding of pitch along the basilar membrane. Mammalian species employ a huge range of frequencies in their vocalizations, from infrasound (less than 20 Hz) in elephants and whales to ultrasound (greater than 20,000 Hz) in bats and porpoises and many other species (the ghost-faced bat emits vocalizations at an incredible 160,000 Hz). These sounds have been shaped by evolution to serve special purposes. For example, many species of bats analyze the reflected echoes of their ultrasonic vocalizations to navigate and hunt in the dark. At the other end of the spectrum, homing pigeons seem to use infrasound cues to establish a navigational map, and they will become disoriented if exposed to a jet’s sonic boom or if atmospheric conditions prevent them from perceiving natural infrasound sources (Hagstrum, 2019). Elephants emit ultra-low-frequency alarm calls that are so powerful that they travel partly through the ground and are detected seismically by other elephants (Herbst et al., 2012) and yet are so nuanced that the elephants can distinguish human-related threats from bee-related threats (yes, elephants are scared of bees; Soltis et al., 2014) and can use their “rumbles” to identify potential mates (Stoeger and Baotic, 2017). Brainstem systems compare the ears to localize sounds Being able to quickly identify where a sound is coming from— whether it is the crack of a twig under a predator’s foot, or the sweet tones of a would-be mate—is a matter of great evolutionary significance. So it’s no surprise that we are remarkably good at locating a sound source (our accuracy is about ±1 degree horizontally around the head, and many other animals do even better). The auditory system accomplishes this feat by analyzing two kinds of binaural cues that signal the location of a sound source: 1. Interaural intensity differences (IIDs) result from comparison of the intensity of the sound—the physical property that we perceive as loudness—at the left and right ears (interaural means “between the two ears”). Depending on the species—and the placement and characteristics of their pinnae—intensity differences occur because one ear is pointed more directly toward the sound source or because the head casts a sound shadow (FIGURE 6.7A), preventing sounds originating on one side (called off-axis sounds) from reaching both ears with equal loudness. The head shadow (or sound shadow) effect is most pronounced for higherfrequency sounds (FIGURE 6.7B). 2. Interaural temporal differences (ITDs) are differences between the two ears in the time of arrival of sounds. 
They arise because one ear is always a little closer to an off-axis sound than the other ear is. Two kinds of temporal (time) differences are present in a sound: onset disparity, which is the difference between the two ears in hearing the beginning of the sound; and ongoing phase disparity, which is the continuing mismatch between the two ears in the time of arrival of all the peaks and troughs that make up the sound wave, as illustrated in FIGURE 6.7C. FIGU R E 6 . 7 Cues for Binaural Hearing View larger image Both types of cues are used to figure out where a sound is coming from; researchers call this the duplex theory of sound localization. At low frequencies, no matter where sounds occur horizontally around the head, there are virtually no intensity differences between the ears (see Figure 6.7B). So for these frequencies, differences in times of arrival are the principal cues used for sound localization (and at very low frequencies, neither cue is much help; this is why you can place the subwoofer of an audio system anywhere you want within a room). At higher frequencies, however, the sound shadow cast by the head causes significant intensity differences between the ears. Of course, we can’t perceive exactly which types of processing we’re relying on for any given sound; in general, we are aware of the results of neural processing but not the processing itself. The structure of the external ear provides yet another localization cue. As we mentioned earlier, the hills and valleys of the external ear selectively reinforce some frequencies in a complex sound and diminish others. This process is known as spectral filtering, and the frequencies that are altered depend on the angle at which the sound strikes the external ear (Zonooz et al., 2019). That angle varies, of course, depending on the vertical localization (or elevation) of a sound source. The relationship between spectral cues and elevation is learned and calibrated during development (van der Heijden et al., 2019). The various binaural and spectral cues used for sound localization converge and are integrated in the inferior colliculus (Slee and Young, 2014). The auditory cortex processes complex sound In some sensory areas of the brain, lesions cause the loss of basic perceptions. For example, lesions of visual cortex result in blind spots, as we will discuss in Chapter 7. But the auditory cortex is different: researchers have long known that simple pure tones can be heard even after the entire auditory cortex has been surgically removed (Neff and Casseday, 1977). So if the auditory cortex is not involved in basic auditory perception, then what does it do? The auditory cortex seems to be specialized for the detection of morecomplex “biologically relevant” sounds, of the sort we mentioned earlier—vocalizations of animals, footsteps, snaps, crackles, and pops —containing many frequencies and complex patterns (Theunissen and Elie, 2014). In other words, the auditory cortex evolved to process the complex soundscape of everyday life, not simple tones. The unique capabilities of the auditory cortex result from a sensitivity that is fine-tuned by experience as we grow (Chang and Kanold, 2021). Human infants have diverse hearing capabilities at birth, but their hearing for complex speech sounds in particular becomes more precise and rapid through exposure to the speech of their families and other people. Newborns can distinguish all the different sounds that are made in any human language. 
But as they develop, they get better and better at distinguishing sounds in the language(s) they hear around them, and worse at distinguishing speech sounds unique to other languages. Similarly, early experience with binaural hearing, compared with equivalent monaural (oneeared) hearing, is important for developing the ability to localize sound sources, but if it occurs early enough, the auditory system can learn to use other cues to compensate for the loss of hearing in one ear (Kumpik and King, 2019). Studies with both humans and lab animals confirm that throughout life, experience with tasks that employ complex auditory stimuli—like discriminating between multiple pitches or, in the case of humans, modified speech samples —can cause a rapid retuning of auditory neurons (FIGURE 6.8) (Holdgraf et al., 2016). Sounds that are biologically urgent, such as the distress cries of an infant, reportedly cause this auditory retuning and learning to occur especially quickly (Schiavo et al., 2020). Later in life, aging takes a steady toll on our hearing. With the passage of time, the responsiveness of auditory cortex neurons gradually declines, and it becomes harder to distinguish between sounds that occur simultaneously (Overton and Recanzone, 2016; Recanzone, 2018). This is one reason why grandparents can find it so difficult to follow a conversation in a noisy restaurant. FIGU R E 6 . 8 Long-Term Retention of a Trained Shift in the Tuning of an Auditory Receptive Field View larger image Music also shapes the responses of auditory cortex. It might not surprise you to learn that the music-relevant areas of the auditory cortex of trained musicians are structurally different from the same regions in nonmusicians, and also more responsive. After all, when two people differ in any skill, their brains must be different in some way, and maybe people born with brains that are more responsive to musical sounds are also more likely to train in music (Wesseldijk et al., 2021). The surprising part is that the extent to which a musician’s brain is extra sensitive to music is correlated with the age at which they began their serious training in music: the earlier the training began, and the more intensive it was, the larger the difference in auditory cortex in adulthood (Herholz and Zatorre, 2012; Habibi et al., 2020). Kids who receive intensive musical education also show enhanced speech perception later in life (Weiss and Bidelman, 2015; Intartaglia et al., 2017). Findings like these show that early musical training alters the functioning of auditory cortex in an enduring manner. By adulthood, the portion of primary auditory cortex where music is first processed, called Heschl’s gyrus, is much larger and more responsive in professional musicians than in nonmusicians, and more than twice as strongly activated by music (P. Schneider et al., 2002). Even in older adults, piano training increases cortical thickness in multiple regions of auditory cortex, including Heschl’s gyrus (Worschech et al., 2022). The cortical processing of music is also believed to be influenced by the brain’s mesolimbic reward system (see Chapter 3), attaching a reward value to music that is new and pleasurable to us (Salimpoor et al., 2015; Gold et al., 2019). So, to what extent is music perception inborn? Some people show a lifelong inability to discern tunes or sing, called amusia. 
Amusia is associated with subtly abnormal connectivity between primary auditory cortex and regions of the right frontal lobe known to participate in pitch discrimination (FIGURE 6.9) (Loui et al., 2009; Chen et al., 2015). The result is an inability to consciously access pitch information, even though cortical pitch-processing systems are intact (Zendel et al., 2015). Interestingly, studies of people with amusia indicate that when listening to music, we process pitch and rhythm quite separately, and pitch perception seems to be heritable, suggesting a genetic component (Peretz, 2016). If you’re worried about your own ability to carry a tune, the National Institutes of Health (NIH) provides an online test of pitch perception. FIGU R E 6 . 9 Brain Connections in People with Amusia View larger image FOOD FOR THOUGHT If you were completely deaf in one ear, would you still be able to localize sound sources? How? 6.3 Hearing Loss Is a Widespread Problem The Road Ahead Next we consider the main causes of auditory dysfunction. After reading this section, you should be able to: 6.3.1 Define and distinguish between hearing loss and deafness. 6.3.2 Describe and contrast the three major categories of hearing loss. 6.3.3 Identify potentially harmful noise intensities, and discuss the ways in which noise damages the auditory system. 6.3.4 Summarize and evaluate methods for treating each form of hearing loss. Disorders of hearing include hearing loss (defined as a moderate to severe decrease in sensitivity to sound) and deafness (defined as hearing loss so profound that speech cannot be perceived even with the use of hearing aids). Bilateral hearing loss affects about 40 million people in the USA alone (Goman and Lin, 2016), and it is estimated that by 2050 about 1 in 4 people worldwide will have hearing problems, many of them preventable (World Health Organization, 2021). By now, you may have anticipated that there are three main kinds of problems that can prevent sound waves in the air from being transformed into conscious auditory perceptions: problems with sound waves reaching the cochlea, trouble converting those sound waves into action potentials, or dysfunction of the brain regions that process sound information (FIGURE 6.10): 1. Before anything even happens in the nervous system, the ear may fail to convert the sound vibrations in air into waves of fluid within the cochlea. This form of hearing loss, called conduction deafness (FIGURE 6.10A), often comes about when the ossicles of the middle ear become fused together and vibrations of the eardrum can no longer be conveyed to the oval window of the cochlea. 2. Even if vibrations are successfully conducted to the cochlea, the sensory apparatus of the cochlea—the organ of Corti, and the hair cells it contains—may fail to convert the ripples created in the basilar membrane into the volleys of action potentials that ordinarily inform the brain about sounds. This form of hearing loss, termed sensorineural deafness (FIGURE 6.10B), is most often the result of permanent damage or destruction of hair cells by any of a variety of causes (FIGURE 6.11). Some people are born with genetic abnormalities that interfere with the function of hair cells; researchers hope that gene therapies will someday help reverse genetic hearing loss (Akil and Lustig, 2019; ShubinaOleinik et al., 2021). 
Many more people acquire sensorineural deafness during their lives as a result of being exposed to extremely loud sounds—overamplified music, nearby gunshots, and industrial noise are important examples—or because of medical problems such as infections and adverse drug effects (certain antibiotics, such as streptomycin, are particularly ototoxic). If you don’t think it can happen to you, think again. Anyone listening to something for more than 5 hours per week at 89 dB or louder is already exceeding workplace limits for hearing safety (SCENIHR, 2008), yet many personal music players and music at concerts and clubs exceed 100 dB. Fortunately, earplugs are available that attenuate all frequencies equally, making concerts a little quieter without muffling the music. Various sound sources are compared in FIGURE 6.12; if you are concerned about your own exposure, excellent sound level meter apps for smartphones are available at little or no cost ( including one from the National Institute for Occupational Safety and Health [NIOSH]). Long-term exposure to loud sounds can cause lasting hearing problems ranging from a persistent ringing in the ears, called tinnitus (Zenner et al., 2017), to a permanent profound loss of hearing for the frequencies being listened to at such high volumes. 3. For the action potentials sent from the cochlea to be of any use, the auditory areas of the brain must process and interpret them in meaningful ways. Central deafness (FIGURE 6.10C) occurs when auditory brain areas are damaged by, for example, strokes, tumors, or traumatic injuries. As you might expect from our earlier discussion of auditory processing in the brain, this type of deafness almost never involves a simple loss of auditory sensitivity. Afflicted individuals can often hear a normal range of pure tones but are impaired in the perception of complex, behaviorally relevant sounds. An example in humans is word deafness: selective trouble with speech sounds despite normal speech and normal hearing for nonverbal sounds. Another example of central deafness is cortical deafness, a rare syndrome involving bilateral lesions of auditory cortex, causing a more complete impairment marked by difficulty recognizing almost all complex sounds, whether verbal or nonverbal. Although there are few treatments available for central deafness, we can use electronic prostheses to restore the auditory stimulation that is missing in conduction or sensorineural deafness. We discuss these approaches in Signs & Symptoms, next. FIGU R E 6 . 1 0 Types of Hearing Loss View larger image FIGU R E 6 . 11 The Destructive Effects of Loud Noise View larger image FIGU R E 6 . 1 2 How Loud Is Too Loud? View larger image SIGNS & SYMPTOMS Restoring Auditory Stimulation in Deafness People with conduction deafness use hearing aids that employ electronic amplification to deliver louder sounds to the impaired —but still functional—auditory system. Surgery can sometimes free up fused ossicles, or they can be replaced with Teflon prosthetics, restoring the transmission of vibrations from the eardrum to the cochlea (Young and Ng, 2022). But sensorineural deafness presents a much thornier problem because neural elements have been destroyed (or were absent from birth). Can new hair cells be grown? Although fishes and amphibians produce new hair cells throughout life, mammals have traditionally been viewed as incapable of regenerating hair cells. This conclusion may have been too hasty, however (Géléoc and Holt, 2014). 
Using several different strategies, researchers have succeeded in inducing the birth of new hair cells in cochlear tissues of lab animals (Li et al., 2015), so there is reason to hope that an effective restorative therapy for deafness may be available someday. For now, treatments for sensorineural deafness focus on the use of prostheses. Implantable devices called cochlear implants can detect sounds and then directly stimulate the auditory nerve fibers of the cochlea, bypassing the ossicles and hair cells altogether and offering partial restoration of hearing even in cases of complete sensorineural deafness (FIGURE 6.13). You may have had doubts about the value of Békésy’s work with cadavers that we described at the start of this chapter. If so, consider this: the cochlear implants that have brought hearing to thousands of deaf people work by reproducing the phenomena Békésy discovered. In other words, the device sends information about low frequencies to electrodes stimulating nerves at the apex of the cochlea and sends information about high frequencies to electrodes stimulating nerves at the base. As you might predict from our discussion of the importance of experience in shaping auditory responsiveness, the earlier in life these devices are implanted, the better the person will be able to understand complex sounds, especially speech (Geers et al., 2017). So in a sense, the success of these implants is due to the extreme plasticity of the young brain. FIGU R E 6 . 1 3 Cochlear Implants Provide Hearing in Some Deaf People View larger image How’s It Going? 1. Compare and contrast the two important signals about pitch that the brain receives from the cochlea: place coding and temporal coding. How do they work together to give us our sense of pitch? 2. Discuss the sensory capabilities of different species as adaptations shaped by natural selection. 3. Provide an account of sound localization, identifying the several sources of information that we use to determine the source of a sound. 4. Discuss the types of processing that are performed by primary auditory cortex. Is experience with sound important for development of cortical auditory systems? 5. Name and describe the three major forms of deafness. FOOD FOR THOUGHT How could technology of the future help people overcome central deafness? 6.4 Balance: The Inner Ear Senses the Position and Movement of the Head The Road Ahead In the next section we look at the inner ear system that gives us our sense of balance. After reading this section, you should be able to: 6.4.1 Describe the anatomical features of the vestibular system. 6.4.2 Explain how accelerations and changes in the position of the head are transduced into sequences of action potentials. 6.4.3 Describe the neural projections from the vestibular system to the brainstem, and summarize their functional importance. 6.4.4 Discuss some of the consequences of vestibular dysfunction or abnormal vestibular stimulation. Without our sense of balance, it would be a challenge to simply stand on two feet. When you use an elevator, you clearly sense that your body is rising or falling, despite the sameness of your surroundings. When you turn your head, take a tight curve in your car, or bounce through the seas in a boat, your continual awareness of motion allows you to plan movements and anticipate changes in perception due to movement of your head. And of course, too much of this sort of stimulation can make you lose your lunch. 
Like hearing, our sense of balance is the product of the inner ear, relying on several small structures that adjoin the cochlea and are known collectively as the vestibular system (from the Latin vestibulum, “entrance hall,” reflecting the fact that the system lies in hollow spaces in the temporal bone). In fact, it is generally accepted that the auditory organ evolved from the vestibular system, although the ossicles probably evolved from parts of the jaw. The most obvious components of the vestibular system are the three fluid-filled semicircular canals, plus two bulbs called the saccule and the utricle that are located near the ends of the semicircular canals (FIGURE 6.14A). Notice that the three canals are oriented in the three different planes in which the head can rotate (FIGURE 6.14B)—nodding up and down (technically known as pitch), shaking from side to side (yaw), and tilting left or right (roll). FIGU R E 6 . 1 4 Structures of the Vestibular System View larger image The receptors of the vestibular system are hair cells—just like those in the cochlea—whose bending ultimately produces action potentials. The cilia of these hair cells are embedded in a gelatinous mass inside an enlarged chamber called the ampulla (plural ampullae) that lies at the base of each semicircular canal (see Figure 6.14B). Movement of the head in one axis sets up a flow of the fluid in the semicircular canal that lies in the same plane, bending the stereocilia in that particular ampulla and signaling the brain that the head has moved. Working together, the three semicircular canals accurately track the movement of the head. The utricle and saccule each contain an otolithic membrane (a gelatinous sheet studded with tiny crystals; otolith literally means “ear stone”) that, thanks to its mass, lags slightly when the head moves. This bends the stereocilia of nearby hair cells, stimulating them to track straight-line acceleration and deceleration—the final signals that the brain needs to calculate the position and movement of the body in three-dimensional space. Axons leading from these hair cells to the brain make up the vestibular part of the vestibulocochlear nerve (cranial nerve VIII). Vestibular information is crucial for planning body movements, maintaining balance against gravity, and smoothly directing sensory organs like the eyes and ears toward specific locations, even when our bodies are themselves in motion. So, it’s no surprise that the nerve pathways from the vestibular system have strong connections to brain regions responsible for the planning and control of movement. On entering the brainstem, many of the vestibular fibers terminate in the vestibular nuclei, while some fibers project directly to the cerebellum to aid in motor programming there. Outputs from the vestibular nuclei project in a complex manner to motor areas throughout the brain, including motor nuclei of the eye muscles, the thalamus, and the cerebral cortex. Some forms of vestibular excitation produce motion sickness There is one aspect of vestibular activation that many of us would gladly do without. Too much strong vestibular stimulation—think of boats and roller coasters—can produce the misery of motion sickness. Motion sickness is caused by movements of the body that we cannot control. For example, passengers in a car are more likely to experience motion sickness than is the driver; it remains to be seen how this will affect the occupants and design of self-driving cars (Buchheit et al., 2022). 
Why do we experience motion sickness? According to the sensory conflict theory, we feel bad when we receive contradictory sensory messages, especially a discrepancy between vestibular and visual information (Keshavarz and Golding, 2022). According to one early hypothesis, discrepancies in sensory information might ordinarily signal the neurological impact of toxins, triggering dizziness and vomiting to get rid of accidentally ingested poisons. However, there is little objective evidence to support the “poison hypothesis” of motion sickness, so its evolutionary origins remain a mystery (Oman, 2012). The observation that virtual reality devices frequently induce motion sickness, and that people who tend to sway when standing are more susceptible to this sickness, has been interpreted as evidence that motion sickness actually results from postural instability rather than sensory conflict (Munafo et al., 2017). When an airplane bounces around in turbulence, the vestibular system signals that various changes in direction and accelerations are occurring, but as far as the visual system is concerned, nothing is happening; the plane’s interior is a constant. For passengers, the worst effect of this may be some motion sickness, but pilots are trained to be wary of a second effect of this mismatch. In conditions of very low visibility, an acceleration of the plane may be misinterpreted as a climb (an upward tilt of the plane) (MacNeilage et al., 2007; Sánchez-Tena et al., 2018), a compelling phenomenon called the somatogravic illusion (or false-climb illusion). Both acceleration and climb will press you back in your seat, so pilots are trained not to reflexively dive the plane (which could result in disaster), but instead to rely on their instruments—rather than their vestibular systems—to determine whether the plane is climbing or accelerating. How’s It Going? 1. Use a diagram to explain how the general layout of the vestibular system allows it to track movement in three axes. Where are the receptors for head movement located? Do they resemble other types of sensory receptors? 2. Where are the vestibular nuclei located? What nerve provides inputs to these nuclei? 3. How does vestibular sensitivity affect your everyday activities? 4. Discuss the role of the vestibular system in motion sickness. FOOD FOR THOUGHT Propose an amusement park ride that capitalizes on the somatogravic illusion. How would your ride affect the vestibular system, and why would people pay for that? 6.5 Taste: Chemicals in Foods Are Perceived as Tastes The Road Ahead We now turn our attention to the specialized sensors that gives us our sense of taste. After reading this section, you should be able to: 6.5.1 Describe the structure, function, and distribution of the papillae on the tongue. 6.5.2 Summarize the structure of taste buds, and discuss their relationship to papillae. 6.5.3 Describe the basic tastes and the distribution of taste sensitivity across the surface of the tongue. 6.5.4 Describe the specialized cellular mechanisms through which taste cells transduce each of the major tastes. 6.5.5 Trace the neural projection of gustatory information to the brainstem and higher-order systems. Delicious foods, poisons, dangerous adversaries, and fertile mates— these are just a few of the sources of chemical signals in the environment. Being able to detect these signals is vital for survival and reproduction throughout the animal kingdom. 
Most people derive great pleasure from eating delicious food, and because we recognize many substances by their distinct flavors, we tend to think that we can discriminate many tastes. In reality, though, humans detect only a small number of basic tastes; the huge variety of sensations aroused by different foods are actually flavors rather than simple tastes, and they rely on the sense of smell as well as taste. (To appreciate the importance of smell to flavor, block your nose while eating first a little bit of raw potato and then some apple: without the sense of smell, it’s difficult to tell them apart!) Scientists are in broad agreement that we possess at least five basic tastes: salty, sour, sweet, bitter, and umami. (Umami, Japanese for “delicious taste,” is the term for the savory, meaty taste that is characteristic of gravy or soy sauce.) These tastes are determined genetically, as we will see shortly, but there is considerable genetic variation across the globe in both the strength and pleasurable qualities of the basic tastes (Pirastu et al., 2016). Further, the hunt continues for additional basic tastes. For example, studies suggest that humans and other animals may possess a primary fat taste (Besnard et al., 2016; Hichami et al., 2022); another candidate taste, called kokumi, is described as the full-bodied, thick, mouth-filling quality of some foods (S. C. Brennan et al., 2014). But no matter how many basic tastes we are eventually shown to possess, it is clear that evolution shaped them to help us find nutritious food and avoid toxins. Tastes excite specialized receptor cells on the tongue Many people think that the many little bumps on their tongues are taste buds, but they aren’t. They are actually papillae (singular papilla) (FIGURE 6.15), tiny lumps of tissue that increase the surface area of the tongue. FIGU R E 6 . 1 5 A Cross Section of the Tongue View larger image There are three kinds of papillae—circumvallate, foliate, and fungiform papillae—occurring in different locations on the tongue (FIGURE 6.16). Taste buds, each consisting of a cluster of 50–150 taste receptor cells (FIGURE 6.16B), are found buried within the walls of the papillae (a single papilla may house several such taste buds; see Figure 6.15). Fine fibers, called microvilli, extend from the taste receptor cells into a tiny pore, where they come into contact with substances that can be tasted, called tastants. Each taste cell is sensitive to just one of the five basic tastes, and with a life-span of only 10–14 days, taste cells are constantly being replaced. But as our various personal experiences with hot drinks, frozen flagpoles, or spicy foods tell us, taste is not the only sensory capability of the tongue. It is also exquisitely sensitive to pain, touch, and temperature. FIGU R E 6 . 1 6 Taste Buds and Taste Receptor Cells View larger image You may have seen maps of the tongue indicating that each taste is perceived mainly in one region (sweet at the tip of the tongue, bitter at the back, and so on), but these maps are based on an enduring myth. All five basic tastes can be perceived anywhere on the tongue where there are taste receptors (Chandrashekar et al., 2006). Those areas do not differ greatly in the strength of taste sensations that they mediate (FIGURE 6.16D). The five basic tastes are signaled by specific sensors on taste cells The tastes salty and sour are evoked when taste cells are stimulated by simple ions acting via ion channels in the membranes of the taste cells. 
Sweet, bitter, and umami tastes are perceived by specialized receptor molecules—metabotropic G protein–coupled receptors (GPCRs), as we discussed in Chapter 3 (see Figure 3.2)—that use second messengers to change the activity of the taste cell (Behrens and Meyerhof, 2019; Liszt et al., 2022). Salty Taste cells apparently sense salt (NaCl) in several different ways, which are not yet completely understood. As you might guess, one kind of salt sensor simply relies on sodium (Na⁺) channels, just like the ones we have seen in previous chapters. In this case, sodium ions (Na⁺) from salty food enter taste cells via sodium channels in the cell membrane, causing a depolarization of the cell and release of neurotransmitter. We know that this is a crucial mechanism for perceiving saltiness, because blocking the sodium channels with a drug reduces salt sensitivity—though it does not eliminate it (Chandrashekar et al., 2010). This primary salt-sensing system also seems to be responsible for the appetizing qualities of moderate concentrations of salt in food. A second salt receptor probably responds to multiple cations: Na⁺, K⁺, and Ca²⁺ (Rhyu et al., 2021), perhaps accounting for taste differences between culinary salts from varying sources, such as sea salts. And although most previous research on the salt taste has focused on Na⁺ perception, it turns out that taste cells also detect the other ion that is liberated when salt dissolves: chloride (Cl⁻). This parallel salt-sensing system seems to mediate the aversive properties of high concentrations of salt. Because drugs that block chloride-selective ion channels have little effect on the Cl⁻ sensitivity of the tongue, researchers believe that Cl⁻ transduction by taste cells involves a different, as-yet-unknown mechanism (Roebber et al., 2019). Depolarization of the salt-sensitive taste cells ultimately causes them to release neurotransmitters that stimulate afferent neurons that relay saltiness information to the brain. Sour Acids in food taste sour—the more acidic the food, the more sour it tastes—and after a long search, researchers have homed in on the primary mechanisms that detect sour tastants. The property that all acids share is that they release protons (H⁺; also called hydrogen ions). It seems that all sour-sensitive taste cells share an inward flow of protons that depolarizes the cell (Bushman et al., 2015). Researchers eventually discovered that these taste cells express a new kind of ion channel, called OTOP1, that is exquisitely selective for protons: OTOP1 channels are 100,000 times more permeable to protons than Na⁺ ions, and they completely block most other ions (Tu et al., 2018; Teng et al., 2019). An inrush of protons from sour foods via OTOP1 sour receptors thus directly depolarizes sour taste cells, which signal sourness to the brain’s gustatory systems. This OTOP1-dependent activity in the gustatory pathways accurately encodes the acidity (sourness) of food, and it is absent in mice with the Otop1 gene knocked out (Turner and Liman, 2022). Interestingly, sour receptors also detect the sensation and taste of carbonation in drinks (Chandrashekar et al., 2009) and prompt thirsty animals to drink (Zocchi et al., 2017). The receptors for sweet, bitter, and umami tastes are metabotropic GPCRs (see Figure 3.2): when bound by tastant molecules arriving at the taste cell’s surface, the receptor activates a second-messenger system within the cell.
These receptors are made up of simpler proteins belonging to two families—designated T1R and T2R—that are combined in various ways (Ahmad and Dalziel, 2020), as we will see next.

Sweet

When two members of the T1R family—T1R2 and T1R3—combine (heterodimerize), they make a receptor that selectively detects sweet tastants (Nelson et al., 2001; Yoshida and Ninomiya, 2016). Mice engineered to lack either T1R2 or T1R3 are insensitive to sweet tastes (Zhao et al., 2003). And if you’ve spent any time around cats, you may be aware that they couldn’t care less about sweets. It turns out that in all cats, from tabbies to tigers, the gene that encodes T1R2 is disabled, so their sweet receptors don’t work (X. Li et al., 2009).

Bitter

In nature, bitter tastes often signal the presence of toxins, so it’s not surprising that a high sensitivity to different kinds of bitter tastes has evolved, although individuals vary significantly in their bitter taste sensitivity (FIGURE 6.17). Members of the T2R family of receptor proteins appear to function as bitter receptors (Chandrashekar et al., 2000; Behrens and Meyerhof, 2018). The T2R family has about 30 members, and this large number may reflect the wide variety of bitter substances encountered in the environment, as well as the adaptive importance of being able to detect and avoid them. Furthermore, each bitter-sensing taste cell produces most or all of the different types of T2R bitter receptors, so each bitter-sensing taste cell is very broadly tuned and will respond to just about any bitter-tasting substance (Brasser et al., 2005). That’s just what you’d expect in a system that evolved to detect toxins.

FIGURE 6.17 It’s All a Matter of Taste Buds

Umami

The fifth basic taste, umami—the meaty, savory flavor—is detected by at least two kinds of receptors. One of these is a variant of the metabotropic glutamate receptor (Yasumatsu et al., 2015) and most likely responds to the amino acid glutamate, which is found in high concentrations in meats, cheeses, kombu, and other savory foodstuffs (that’s why MSG—monosodium glutamate—is used as a “flavor enhancer”). The second probable umami receptor, a heterodimer of T1R1 and T1R3 proteins, responds to most of the dietary amino acids (Nelson et al., 2002; Ahmad and Dalziel, 2020). Given this receptor’s similarity to the T1R2+T1R3 sweet receptor, there is reason to suppose that receptors for things that taste good may have shared evolutionary origins. Consider the taste abilities of birds that, just like their house cat enemies, lack the T1R2 gene and thus ordinarily can’t taste sweet. Instead, in those birds that rely on nectar to survive, evolution repurposed the T1R1+T1R3 umami receptor into a new class of taste receptor (Toda et al., 2021). It’s impossible to know exactly what taste sensation it produces in the birds’ brains, but it evidently signals the presence of delicious sugars—an evolutionary work-around that has allowed nectar-feeding birds to thrive and spread.

Researchers have also discovered that these taste receptor proteins are expressed in numerous tissues of the body—not just the tongue (FIGURE 6.18). These extra-oral taste receptors serve widely varying functions unrelated to conventional taste, such as the control of appetite, digestion, and immune responses.

FIGURE 6.18 Body Tissues Expressing Taste Receptors
Taste information is transmitted to several parts of the brain

Taste projections of the gustatory system (from the Latin gustare, “to taste”) extend from the tongue to several brainstem nuclei, then to the thalamus, and ultimately to gustatory regions of the somatosensory cortex (FIGURE 6.19). Because there are only five basic tastes, and because each taste cell detects just one of the five, the encoding of taste perception could be quite straightforward, with the brain simply monitoring which specific axons are active in order to determine which tastes are present (Chandrashekar et al., 2006). In such a simple arrangement—as we noted earlier, it is sometimes called a labeled-line system—there is no need to analyze complex patterns of activity across multiple kinds of taste receptors (called pattern coding). Experimental evidence seemingly supports the conclusion that taste is a labeled-line system: selectively inactivating taste cells that express receptors for just one of the five tastes tends to completely eradicate sensitivity to that one taste while leaving the other four tastes mostly unaffected (e.g., Huang et al., 2006). However, the same manipulation can also be viewed as knocking out one-fifth of any pattern of activity that would normally be present. Furthermore, it’s hard to see how a purely labeled-line system would allow us to discriminate between different types of salty tastes, or different forms of sweet. Interestingly, research using electron microscopy to study neural connections of taste cells has revealed that a minority of gustatory neurons receive inputs from more than one type of taste cell, so it’s possible that the taste system could employ some degree of pattern coding to detect tastes (Wilson et al., 2022). The resolution of this debate will require new technical developments and further experimentation.

FIGURE 6.19 Anatomy and Main Pathways of the Human Gustatory System

How’s It Going?
1. What are the five basic tastes?
2. Generate a map of the human tongue, showing how sensitive each region is to the five basic tastes.
3. Compare and contrast taste buds and papillae.
4. Identify the cellular mechanisms underlying each of the five tastes. Discuss the evolution of taste sensitivity: How do these five tastes help us survive?

FOOD FOR THOUGHT
It seems that “taste” receptors are found throughout the bodies of many animals; how might that have come about? Do you suppose they first evolved to sense chemicals in the internal environment, or the external environment?

6.6 Smell: Chemicals in the Air Elicit Odor Sensations

The Road Ahead
Finally we turn our attention to the specialized sensory system that samples chemicals in the air: our sense of smell. After reading this section, you should be able to:
6.6.1 Describe the main structures of the olfactory system, with a focus on the cells and projections of the olfactory epithelium.
6.6.2 Explain the process of olfactory transduction, and discuss the function and variety of olfactory receptors that have been discovered.
6.6.3 Trace the projection route of olfactory information, and the main olfactory structures, from the olfactory epithelium to the cortex.
6.6.4 Compare and contrast human olfactory capabilities with those of other species.
6.6.5 Describe the structure and function of the vomeronasal system, and weigh the evidence for and against the idea that humans detect pheromones.
As for all the other senses, species differences in olfaction—odor perception—reflect the evolutionary importance of various smells for survival and reproduction (Bear et al., 2016). Cats and mice, dogs and rabbits—all have a sharper sense of smell than humans, although as we will see later, the old view that humans have poor olfactory acuity has little foundation. Birds, however, have only basic olfactory abilities, and dolphins don’t have functional olfactory receptors at all. Our ability to perceive a large number of different odors is what produces the complex array of flavors that we normally think of as tastes. Surveys of olfaction in large populations of healthy people reveal surprising variation in odor sensitivity, ranging from fairly widespread anosmia (odor blindness, in varying degrees; Hofmann et al., 2016) to olfactory supersensitivity, with olfactory performance slightly better among women than men, on average (Sorokowski et al., 2019). A high incidence of anosmia is also among the many miseries inflicted by COVID-19: the virus infects olfactory cells of the nose, causing an inflammatory immune response that damages the nearby olfactory receptor cells. This results in a loss of smell that lasts for only a few weeks in most people, but much longer in a few others. In fact, it’s possible that some people will never regain their sense of smell, suggesting that in a minority of people, COVID-19 infection damages the olfactory parts of the brain, which, unlike the cells in the nose, do not regenerate (Sukel, 2021). Early evidence suggests that COVID-19 infection can cause widespread changes in brain structure, especially in regions that are functionally related to the olfactory system (Douaud et al., 2022). Understanding the impact of COVID-19 on the nervous system will be an urgent priority for neuroscientists this decade.

The sense of smell starts with receptor neurons in the nose

In humans (and most other mammals), a sheet of cells called the olfactory epithelium lines part of the nasal cavities. Within the 5–10 square centimeters of olfactory epithelium that we possess, three types of cells are found (FIGURE 6.20): supporting cells, basal cells, and about 6 million olfactory receptor neurons. For comparison, dogs have 100–300 million olfactory receptor neurons, which explains their ability to detect odors at extremely low concentrations—as low as 2 parts per trillion (King, 2013), which is like tasting a pinch of sugar dissolved in a billion cups of tea!

FIGURE 6.20 The Human Olfactory System

Each olfactory receptor cell is a complete neuron, with a long, slender apical dendrite that divides into branches (cilia) that extend into the moist mucosal surface. Substances that we can smell from the air that we inhale or sniff, called odorants, dissolve into the mucosal layer and interact with receptors studding the dendritic cilia of the olfactory neurons (Mohrhardt et al., 2018). Like the metabotropic receptors found on neurons, the olfactory receptor proteins are a variety of G protein–coupled receptors (GPCRs), employing a second-messenger system to respond to the presence of odorants. However, despite these similarities, olfactory neurons differ from the neurons of the brain in several ways. One way in which olfactory neurons are distinct from their cousins in the brain relates to the production of receptor proteins: there is an incredible diversity of olfactory receptor protein subtypes.
So, while there may be up to a dozen or so subtypes of receptors for a given neurotransmitter in the brain, there are hundreds or even thousands of subtypes within the family of odorant receptors, depending on the species under study. The Nobel Prize–winning discovery of the genes encoding this odorant receptor superfamily (Buck and Axel, 1991) provided one of the most important advances in the history of olfaction research. Mice have about 2 million olfactory receptor neurons, each of which expresses only one of about 1000 different receptor proteins. These receptor proteins can be divided into four different subfamilies of about 250 receptors each (Mori et al., 1999). Within each subfamily, members have similar structure and presumably recognize chemically similar odorants. Receptors of different subfamilies are expressed in separate bands of olfactory neurons within the olfactory epithelium (FIGURE 6.21) (Coleman et al., 2019). By comparison, humans make a total of about 400 different kinds of functional olfactory receptor proteins. That’s still a large number, but in our case, it looks like hundreds of additional olfactory receptor protein genes became nonfunctional during our evolution (Olender et al., 2008), suggesting that the substances those receptors detected ceased to be important to our ancestors’ survival and reproduction. Although estimates of the number of odors that modern humans can distinguish vary wildly—ranging from hundreds of thousands to hundreds of billions (Bushdid et al., 2014; Gerkin and Castro, 2015)—the widely held belief that we humans have a poor sense of smell relative to other animals has been overstated historically (McGann, 2017). Whatever turns out to be the actual number of odors humans can distinguish, our ability to discriminate thousands, millions, or perhaps billions of odors using just 400 kinds of functional olfactory receptors indicates that we must recognize most odorants by their activation of a characteristic combination of different kinds of receptor molecules (Duchamp-Viret et al., 1999), an example of pattern coding. In addition, any two people will differ by about 30 percent in the makeup of their olfactory receptors (Mainland et al., 2014), so to some extent we each live in our own, personalized olfactory world (Trimmer et al., 2019). And in a curious parallel to the discovery of taste receptors throughout the body, it turns out that some of the tongue’s taste cells possess functional olfactory receptor proteins (Malik et al., 2019), perhaps reflecting the great importance of flavors to our species.

FIGURE 6.21 Different Kinds of Olfactory Receptor Molecules on the Olfactory Epithelium

Another big difference between olfactory neurons and brain neurons is that olfactory neurons die and are replaced in adulthood (Lledo and Valley, 2018). This regenerative capacity is most likely an adaptation to the hazardous environment that olfactory neurons inhabit. If an olfactory neuron is killed—say, by the virus that gave you that darn head cold, or by a whiff of something toxic while you were cleaning out the shed, or by some other misadventure—an adjacent basal cell will soon differentiate into a neuron and begin extending a dendrite and an axon (Leung et al., 2007).
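To make the contrast between the labeled-line scheme proposed for taste and the combinatorial (pattern) coding just described for smell more concrete, the following short Python sketch is a minimal, hypothetical illustration. The receptor names echo those discussed in this chapter, but the odorant response profiles and the all-or-none readout are invented for teaching purposes; they are not data from the studies cited above.

```python
# Hypothetical sketch only: contrasts a labeled-line code (taste) with a
# combinatorial pattern code (smell). Values are invented for illustration.

# Labeled-line idea (taste): each receptor class signals exactly one quality,
# so identifying the active "line" identifies the taste directly.
labeled_lines = {
    "T1R2+T1R3": "sweet",
    "T2R family": "bitter",
    "T1R1+T1R3": "umami",
    "OTOP1": "sour",
    "Na+ channel": "salty",
}
print(labeled_lines["OTOP1"])  # one active line -> one percept ("sour")

# Pattern-coding idea (smell): each odorant activates a characteristic
# combination of broadly tuned receptor types (1 = that receptor type
# responds to the odorant, 0 = it does not).
odorant_patterns = {
    "rose":    (1, 1, 0, 0),
    "lemon":   (1, 0, 1, 0),
    "smoke":   (0, 1, 1, 1),
    "vanilla": (1, 0, 0, 1),
    "skunk":   (0, 1, 0, 1),
}

# Five odorants are discriminable with only four receptor types because each
# produces a distinct pattern, even though no receptor is specific to any one.
assert len(set(odorant_patterns.values())) == len(odorant_patterns)

# With ~400 functional human receptor types, even an all-or-none readout
# allows 2**400 possible activation patterns, vastly more than any current
# estimate of the number of odors humans can actually distinguish.
print(f"Patterns available from 400 binary receptor types: {2**400:.2e}")
```

Real olfactory receptor responses are graded rather than all-or-none, and downstream circuits must contend with noise, so this back-of-the-envelope figure is only an upper bound on what such a combinatorial scheme could represent; the point is simply that a few hundred receptor types are more than sufficient in principle.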
Each olfactory neuron extends a fine, unmyelinated axon into the nearby olfactory bulb of the brain, where it terminates on one specific glomerulus—a spherical clump of neurons (from the Latin glomus, “ball”)—out of the thousands of glomeruli that exist in the olfactory bulb. Each glomerulus receives inputs exclusively from olfactory neurons that are expressing the same type of olfactory receptor protein (see Figure 6.20). No one knows exactly how the extending axon knows where to go to find its specific glomerulus, or how it knows where to form synapses within the glomerulus after it arrives. One possibility is that olfactory receptor proteins that are found on the axons of these cells (as well as on the dendrites) guide the axons to their corresponding glomeruli (Barnea et al., 2004; Francia and Lodovichi, 2021). But whatever may be the exact mechanisms of neuroplasticity in these cells, better understanding of the process of olfactory neurogenesis may someday help us develop methods for restoring damaged regions of the brain and spinal cord.

Olfactory information projects from the olfactory bulbs to several brain regions

Having received information from multiple olfactory neurons all expressing the same type of olfactory receptor protein, the glomerulus then actively tunes and sharpens the neural activity associated with the corresponding odorants. The glomeruli are organized within the olfactory bulb according to an orderly, topographic map of smells, with neighboring glomeruli receiving inputs from receptors that are closely related. And, as Figure 6.21 shows, the spatial organization of glomeruli within the olfactory bulbs reflects the segregation of the four receptor protein subfamilies in the olfactory epithelium. This glomerular organization is established during a critical period in early life, after which it becomes fixed (Tsai and Barnea, 2014), resulting in an “olfactotopic” map that is maintained within the olfactory projections throughout the brain. Olfactory information is conveyed to the brain via the axons of mitral cells (see Figure 6.20), which extend from the glomeruli in the olfactory bulbs to various regions of the forebrain; smell is the only sensory modality that synapses directly in the cortex rather than having to pass through the thalamus. Important targets for olfactory inputs include the hypothalamus, the amygdala, and the prepyriform cortex (FIGURE 6.22). These limbic structures are closely involved in memory and emotion, which may help explain the potency of odors in evoking nostalgic memories of long ago (Hackländer et al., 2019).

FIGURE 6.22 Components of the Brain’s Olfactory System

Many vertebrates possess a vomeronasal system

Though many perfumers have tried to create one, there is no perfume for humans that is as alluring as the natural scents that other species use to find possible mates. The majority of terrestrial vertebrates—mammals, amphibians, and reptiles—possess a secondary chemical detection system that is specialized for detecting such pheromones. The system is called the vomeronasal system (FIGURE 6.23), and its receptors are found in the vomeronasal organ (VNO), near the olfactory epithelium.

FIGURE 6.23 The Vomeronasal System

In rodents, the sensory neurons of the VNO express two major families of GPCR vomeronasal receptors—V1R and V2R—that encode hundreds of different types of receptors (Tirindelli, 2021).
These receptors are extremely sensitive, able to detect very low levels of the pheromone signals—such as sex hormone metabolites and signals of genetic relatedness—that are released by other individuals (Isogai et al., 2011; Ihara et al., 2013). For example, hamsters and mice can distinguish relatives from nonrelatives just by smell, allowing these animals to optimize their reproductive activities. From the VNO, information is transmitted to the accessory olfactory bulb (adjacent to the main olfactory bulb), which projects to the medial amygdala and hypothalamus, structures that play crucial roles in governing emotional and sexual behaviors and in regulating hormone secretion. In parallel, dedicated mechanisms in olfactory cortex activate fear and stress responses to predator odor signals, helping the animal to avoid their toothy source (Kondoh et al., 2016).

Do humans communicate via pheromones?

Studies reporting pheromone-like phenomena in humans attract plenty of media attention because of the apparent link to our evolutionary past. Well-known examples include the report that simple exposure to each other’s bodily odors can shift women’s menstrual cycles (Stern and McClintock, 1998) and a report that exposure to female tears causes reductions in testosterone and sexual arousal in men (Gelstein et al., 2011). However, the VNO is either vestigial or absent in humans, and almost all of our V1R and V2R receptor genes have become nonfunctional over evolutionary time (Lübke and Pause, 2015). So, if humans do communicate through pheromones, it is most likely accomplished using the main olfactory epithelium, and not the VNO. In mice, receptors in the main olfactory epithelium called TAARs, for trace amine–associated receptors, reportedly respond to sex-specific pheromones instead of odorants (Liberles and Buck, 2006; Dewan, 2021), and mice with their TAAR genes knocked out stop reacting to certain urinary odor signals, even in the urine of predators (Dewan et al., 2013). Thus, the old notion that the olfactory epithelium detects odors while the VNO detects pheromones is an oversimplification, even in rodents. And because TAARs have also been found in the human main olfactory epithelium, behavioral evidence that humans respond to pheromones despite their minimal or absent VNOs isn’t necessarily paradoxical. If rodents can detect pheromones through the main olfactory epithelium, using TAARs or other yet-unknown mechanisms, then perhaps we can too. Whatever the details of the mechanism may be, accumulating evidence confirms that odor is an ecologically important channel for human social behavior. Determining whether this constitutes pheromonal communication represents a challenge for future neuroscientists (Calvi et al., 2020; Wyatt, 2020).

How’s It Going?
1. Discuss odor sensitivity in humans. How do we compare with other species?
2. Provide a brief sketch of the olfactory epithelium, showing the major cell types and their relationships to the brain.
3. Discuss the genetics of odor receptor proteins, as well as their spatial organization in the nose and olfactory bulbs. What is a glomerulus?
4. Which regions of the brain receive strong olfactory inputs? What is the significance of this arrangement for an animal’s behavior?
5. Discuss the structures and receptors associated with pheromone sensitivity, and speculate about the ecological importance of pheromone sensitivity in humans and other animals. Are humans sensitive to pheromones?
FOOD FOR THOUGHT
We’ve seen that, depending on the species, the sense of smell relies on hundreds—or even thousands—of different types of olfactory receptors. Why is this necessary when other senses, such as vision and hearing, need only a few different receptors?

RECOMMENDED READING
Doty, R. L. (2015). Handbook of Olfaction and Gustation (3rd ed.). New York, NY: Wiley-Blackwell.
Hawkes, C. H. (2018). Smell and Taste Disorders. Cambridge, UK: Cambridge University Press.
Horowitz, S. S. (2012). The Universal Sense: How Hearing Shapes the Mind. London, UK: Bloomsbury.
Lass, N. J., and Donai, J. J. (2021). Hearing Science Fundamentals (2nd ed.). San Diego, CA: Plural.
McGee, H. (2020). Nose Dive: A Field Guide to the World’s Smells. New York, NY: Penguin.
Musiek, F. E., and Baran, J. A. (2018). The Auditory System: Anatomy, Physiology, and Clinical Correlates (2nd ed.). San Diego, CA: Plural.
Wolfe, J. M., Kluender, K. R., Levi, D. M., Bartoshuk, L. M., et al. (2021). Sensation & Perception (6th ed.). Sunderland, MA: Oxford University Press/Sinauer.
Wyatt, T. D. (2014). Pheromones and Animal Behavior: Chemical Signals and Signatures (2nd ed.). Cambridge, UK: Cambridge University Press.

VISUAL SUMMARY
You should be able to relate each summary to the adjacent illustration, including structures and processes. The online version of this Visual Summary includes links to figures, animations, and activities that will help you consolidate the material.
Visual Summary Chapter 6

LIST OF KEY TERMS
amplitude, ampulla, amusia, anosmia, basilar membrane, central deafness, cochlea, cochlear implants, cochlear nuclei, conduction deafness, cortical deafness, deafness, decibels (dB), ear canal, flavors, frequency, fundamental, glomerulus, gustatory system, hair cells, harmonics, hearing loss, hertz (Hz), inferior colliculi, infrasound, inner ear, inner hair cells (IHCs), interaural intensity differences (IIDs), interaural temporal differences (ITDs), medial geniculate nuclei, middle ear, motion sickness, olfaction, olfactory bulb, olfactory epithelium, olfactory receptor neurons, organ of Corti, ossicles, outer hair cells (OHCs), oval window, papillae, pheromones, pinnae, place coding theory, primary auditory cortex (A1), pure tone, scala media, scala tympani, scala vestibuli, semicircular canals, sensorineural deafness, spectral filtering, stereocilia, superior olivary nuclei, T1R, T2R, TAARs, taste buds, tastes