
CH 1-3 Exam Prep

CHAPTER 1
Structure and Function: Neuroanatomy and Research Methods
Neil V. Watson, Simon Fraser University
S. Marc Breedlove, Michigan State University

Electrical Storm

Sam had been feeling a bit odd all day; he thought perhaps he was coming down with a bug. But when he collapsed unconscious to the floor of the lunchroom at work and began twitching and jerking, it was clear that he had a much bigger problem than the flu. Sam was having a seizure, a type of uncontrollable convulsion that he'd never had before. By the time Sam arrived at the hospital, the seizure had stopped, and although he was confused and slow to respond to commands, he didn't seem to be in distress. But when Sam smiled at Dr. Cheng, the attending neurologist, and offered to shake her hand, Dr. Cheng ordered immediate brain scans: Sam could offer only half a smile, because only the left side of his face was working, and he was unable to grip Dr. Cheng's hand at all. How can an understanding of the pathways between brain and body provide clues about Sam's problem?

We now know quite a bit about the neural organization of basic functions, but the control of complex cognition remains a tantalizing mystery. However, the advent of sophisticated brain-imaging technology has invigorated the search for answers to fundamental questions about brain organization: Does each brain region control a specific behavior, or is the pattern of connections within the brain more important? Do some regions of the brain act as general-purpose information processors? Is everybody's brain organized in the same way?

Almost everything about us—our thoughts, feelings, and behavior, however serious or silly—is the product of a knobbly three-pound organ that, despite its unremarkable appearance, is the most complicated object in the known universe. In this chapter we'll have a look at the structure of the brain, first at the cellular level and then zooming out to survey the brain's larger-scale anatomical landscape. We'll also take a brief look at the elements of research design that all neuroscientists must consider when designing their studies, and some of the amazing technologies that allow researchers to probe the structure and function of the mind's machine.

1.1 The Nervous System Is Made of Specialized Cells

The Road Ahead
The first part of the chapter is concerned with the cells of the nervous system. By the end of the section, you should be able to:
1.1.1 Name and describe the general functions of the four main parts of a neuron.
1.1.2 Classify neurons according to both structure and function.
1.1.3 Outline the key components of a synapse and the major steps in neurotransmission.
1.1.4 Describe the four principal types of glial cells and their basic functions.

All of your organs and muscles are in continual two-way communication with the nervous system, which, like all other living tissue, is made up of highly specialized cells. The most important of these are the neurons (or nerve cells), arranged into the circuits that underlie all forms of behavior, from simple reflexes to complex cognition. Each neuron receives inputs from many other cells, integrates those inputs, and then distributes the processed information to other neurons. Your brain contains 80–90 billion of these tiny cellular computers (Herculano-Houzel, 2012; von Bartheld, 2018), working together to process vast amounts of information with apparent ease.
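The "tiny cellular computer" idea (collect inputs, integrate them, pass a result on) can be caricatured in a few lines of code. The sketch below is purely conceptual, not a biophysical model; the class name, weights, and threshold value are invented for illustration.

```python
class ToyNeuron:
    """Caricature of receive -> integrate -> distribute; not real biophysics."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold   # how much summed input it takes to "fire"
        self.inputs = []

    def receive(self, signal, weight=1.0):
        # Collect weighted signals arriving from other cells.
        self.inputs.append(signal * weight)

    def integrate_and_fire(self):
        # Combine the collected inputs and decide whether to send a signal on.
        total = sum(self.inputs)
        self.inputs.clear()
        return total >= self.threshold   # True = a signal is passed to other cells

n = ToyNeuron(threshold=1.0)
n.receive(0.4)
n.receive(0.7)
print(n.integrate_and_fire())   # True: the combined input crosses the threshold
```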
The human brain also contains a nearly equivalent number of glial cells (sometimes called just glia) (von Bartheld et al., 2016), providing various support functions and also directly participating in information processing. But because neurons are larger and produce readily measured electrical signals, we know much more about them than about glial cells.

An important early controversy in neuroscience concerned the functional independence of individual neurons: Was each neuron a discrete component? Or did cells of the nervous system become fused together, perhaps during growth and development, into larger functional units? Through painstaking study of the fine details of individual neurons, the celebrated Spanish anatomist Santiago Ramón y Cajal (1852–1934) was able to show that although neurons come very close together, they are not quite continuous with one another. Cajal and his supporters established what came to be known as the neuron doctrine: (1) that the neurons and other cells of the brain are structurally, metabolically, and functionally independent, and (2) that information is transmitted from neuron to neuron across tiny gaps, later named synapses.

It's impossible to determine exactly how many synapses there are in the brain, but scientists estimate that each of us may have as many as 10¹⁵ (a quadrillion) synapses. That's a number too huge for most of us to comprehend: if you gathered a quadrillion grains of sand, each a millimeter in diameter, they would fill a cube that is longer on each side than an American football field—well over a million cubic yards, or 750,000 cubic meters (in more familiar units, that's about 260 million gallons of sand, or 750 million liters)! These vast networks of connections are responsible for all of our achievements.

Nineteenth-Century Drawings of Neurons
Santiago Ramón y Cajal created detailed drawings of the many types of neurons found in the brain (labeled here by Cajal with 15 lowercase letters). Based on his studies of neurons, Cajal and his collaborators proposed that neurons are discrete cells that communicate via tiny contacts, which were later named synapses.

The neuron has four principal divisions

Although neurons come in hundreds of different shapes and sizes, they all share certain features. Like the cells of other body tissues, each neuron has a prominent nucleus, containing chromosomes made up of DNA strands within which the cell's genes are encoded. The neuron also contains the standard assortment of cellular organelles needed for basic functions like energy production (the mitochondria) and translation of genetic instructions (via ribosomes) into specialized proteins essential for the structure and functioning of the neuron (consult the Appendix if you need a refresher on cell biology). But neurons also share a set of unique, highly specialized components that let them collect input signals from multiple sources, process and combine this information, and distribute the results of this processing to other cells. These information-processing features, illustrated in FIGURE 1.1, can be viewed as belonging to four functional zones:

1. Input zone: At cellular extensions called dendrites (from the Greek dendron, "tree"), neurons receive information via synapses from other neurons. Some neurons have dendrites that are elaborately branched, providing room for many synapses. Dendrites may be covered in dendritic spines, small projections from the surface of the dendrite that add more space for synapses.
2. Integration zone: In addition to receiving additional synaptic inputs, the neuron's cell body (or soma, plural somata) integrates (combines) the information that has been received to determine whether or not to send a signal of its own.

3. Conduction zone: A single extension, the axon (or nerve fiber), carries the neuron's own electrical signals away from the cell body. In some mammalian neurons, this conduction is often aided by an insulating sheath of a substance called myelin, which we will discuss a little later. Toward its end, the axon may split into multiple branches called axon collaterals.

4. Output zone: Specialized swellings at the ends of the axon, called axon terminals (or synaptic boutons), transmit the neuron's signals across synapses to other cells.

FIGURE 1.1 The Major Parts of the Neuron

Neurons exhibit tremendous diversity in shapes and sizes, reflecting their differing processing functions. For example, motor neurons (also called motoneurons)—the neurons that trigger movements—are large with long axons reaching out to synapse on muscles, causing muscular contractions. Sensory neurons convey information from sense organs to the brain, taking many different shapes depending on whether they signal the presence of light or sound or touch and so on. Most of the neurons in the brain are interneurons that receive information from other neurons, process it, and pass the integrated information to other neurons. Interneurons make up the hugely intricate networks and circuits that perform the complex functions of the brain. The axons of interneurons may measure only a few micrometers (μm; a micrometer [or micron] is a millionth of a meter), while motor neurons and sensory neurons may have axons a meter or more in length (!), conveying information to and from the most distant parts of the body. In general, larger neurons tend to have more-complex inputs and outputs, cover greater distances, and/or convey information more rapidly than smaller neurons. The relative sizes of neural structures that we will be discussing throughout the book are illustrated in FIGURE 1.2.

FIGURE 1.2 Sizes of Some Neural Structures and the Units of Measure and Magnification Used in Studying Them

In addition to size, neuroscientists classify neurons into several general categories of shape, each specialized for a particular kind of information processing (FIGURE 1.3):

1. Multipolar neurons have many dendrites and a single axon. They are the most common type of neuron.

2. Bipolar neurons have a single dendrite at one end of the cell and a single axon at the other end. Bipolar neurons are especially common in sensory systems, such as vision.

3. Unipolar neurons (also called monopolar neurons) have a single extension (or process), typically identified as an axon for its entire length. At one end are the branching dendritelike input zone and integration zone, from which the axon arises directly, leading away to the distant output zone with its axon terminals. The cell body branches off the axon partway along its length. Unipolar neurons transmit touch and pain information from the body into the spinal cord.

FIGURE 1.3 Neurons Are Classified into Three Principal Types

In all three types of neurons, the dendrites comprise the input zone. In multipolar and bipolar neurons, the cell body also receives synaptic inputs, so it is also part of the input zone. We'll discuss some of the techniques used to visualize neurons later in the chapter.
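As a rough, back-of-envelope check of two of the scales mentioned in this section (the estimate of roughly 10¹⁵ synapses, and the millimeter-to-meter range of neural structures in Figure 1.2), the short calculation below redoes the sand-grain comparison. The packing assumptions are mine, added for illustration, and the result is only meant to land in the same ballpark as the text's figure of roughly 750,000 cubic meters.

```python
import math

# Unit ladder used throughout the chapter: 1 m = 1e3 mm = 1e6 micrometers = 1e9 nm
print(f"1 m = {1e3:g} mm = {1e6:g} um = {1e9:g} nm")

synapses = 1e15                                  # "a quadrillion" synapses
grain_volume_mm3 = math.pi / 6 * 1.0**3          # sphere 1 mm in diameter
bulk_mm3 = synapses * grain_volume_mm3 / 0.64    # assume ~64% packing (air between grains)
bulk_m3 = bulk_mm3 * 1e-9                        # 1 mm^3 = 1e-9 m^3
side_m = bulk_m3 ** (1 / 3)

print(f"Pile of sand: ~{bulk_m3:,.0f} m^3, i.e., a cube ~{side_m:.0f} m on a side")
# ~800,000 m^3 and a cube roughly 90+ m per side -- the same order of magnitude
# as the text's ~750,000 cubic meters, a football-field-scale block of sand.
```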
Information is transmitted through synapses

A neuron's dendrites reflect the complexity of the inputs that are received. Some simple neurons have just a couple of short dendritic branches, while others have huge and complex dendritic trees (or arbors) receiving many thousands of synaptic contacts from other neurons. At each synapse, information is transmitted from an axon terminal of a presynaptic neuron to the receptive surface of a postsynaptic neuron (FIGURE 1.4A).

FIGURE 1.4 Synapses

A synapse can be divided into three principal components (FIGURE 1.4B):

1. The specialized presynaptic membrane of the axon terminal of the presynaptic (i.e., transmitting) neuron

2. The synaptic cleft, a gap of about 20–40 nanometers (nm; billionths of a meter) that separates the presynaptic and postsynaptic neurons

3. The specialized postsynaptic membrane on the dendrite or cell body of the postsynaptic (i.e., receiving) neuron

Presynaptic axon terminals contain many tiny hollow spheres called synaptic vesicles. Each synaptic vesicle contains molecules of neurotransmitter, the special chemical with which a presynaptic neuron communicates with postsynaptic cells. This communication starts when, in response to electrical activity in the axon, synaptic vesicles fuse to the presynaptic membrane and then rupture, releasing their payload of neurotransmitter molecules into the synaptic cleft (see Figures 1.4B and C). After crossing the cleft, the released neurotransmitter molecules interact with matching neurotransmitter receptors that stud the postsynaptic membrane. The receptors capture and react to molecules of the neurotransmitter, altering the level of excitation of the postsynaptic neuron. This action affects the likelihood that the postsynaptic neuron will in turn release its own neurotransmitter from its axon terminals. Molecules of neurotransmitter generally do not enter the postsynaptic neuron; they simply bind to the outside of the receptors briefly to induce a response, and then detach and diffuse away.

The configuration of synapses on a neuron's dendrites and cell body is constantly changing—synapses and receptors come and go, dendrites change their shapes, dendritic spines wax and wane—in response to new patterns of synaptic activity and the formation of new neural circuits. We use the general term neuroplasticity to refer to this capacity of neurons to continually remodel their connections with other neurons.

The axon integrates and then transmits information

Most neurons feature a distinctive cone-shaped enlargement on the cell body called an axon hillock ("little hill"), from which the neuron's axon projects away. As we will see in Chapter 2, the axon hillock has unique properties that allow it to gather and integrate the information arriving from the synapses on the dendrites and cell body. The result of this process of integration determines when the neuron will produce neural signals of its own, encoded in a stream of electrical impulses that race down the axon toward the targets that the neuron is said to innervate.

The axon is a hollow tube, and various important substances, such as enzymes and structural proteins, are conveyed through the interior of the axon from the cell body, where they are produced, to the axon terminals, where they are used. This axonal transport works in both directions: anterograde transport moves materials toward the axon terminals, and retrograde transport moves used materials back to the cell body for recycling.
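To get a feel for how different these two jobs are in speed, here is a rough comparison for a 1-meter axon. The speeds used are typical order-of-magnitude values that are not given in this passage (electrical conduction in large myelinated axons on the order of 100 m/s; fast axonal transport on the order of a few hundred millimeters per day), so treat the output only as an illustration of the contrast.

```python
# Rough comparison of the axon's two roles over a 1-meter axon.
# The speeds below are illustrative, approximate values, not from the text.
axon_length_m = 1.0

conduction_speed_m_per_s = 100.0     # fast myelinated conduction, ~100 m/s (approx.)
fast_transport_mm_per_day = 400.0    # fast axonal transport, ~400 mm/day (approx.)

signal_time_ms = axon_length_m / conduction_speed_m_per_s * 1000
transport_time_days = (axon_length_m * 1000) / fast_transport_mm_per_day

print(f"Electrical signal ('wire'): ~{signal_time_ms:.0f} ms")
print(f"Axonal transport ('pipe'):  ~{transport_time_days:.1f} days")
```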
So, it's important to understand that the axon has two quite different functions: the rapid transmission of electrical signals along the outer membrane of the axon (like a wire), and the much slower transportation of substances within the axon, to and from the axon terminals (like a pipe).

Glial cells protect and assist neurons

Early neuroscientists had little regard for glial cells, viewing them as a mere filler holding the neurons together (in Greek glia means "glue"). But we now know that glial cells are much more important than that. Glial cells directly affect neuronal processes by providing neurons with raw materials, chemical signals, and specialized structural components. Although they number in the billions, there are just four main types of glial cells in the human nervous system (Barres et al., 2015) (FIGURE 1.5).

Two of these four types of glia—oligodendrocytes and Schwann cells—wrap around successive segments of axons to insulate them with a fatty substance called myelin. These myelin sheaths give an axon the appearance of a string of elongated slender beads. Between adjacent beads, small uninsulated patches of axonal membrane, called nodes of Ranvier, remain exposed (FIGURE 1.5A). Within the brain and spinal cord, myelination is provided by the oligodendrocytes, each cell typically supplying myelin beads to several nearby axons (also illustrated in Figure 1.5A). In the rest of the body, Schwann cells do the ensheathing, with each Schwann cell wrapping itself around a segment of one axon to provide a single bead of myelin. But whether it is provided by oligodendrocytes or by Schwann cells, myelination has the same result: a large increase in the speed with which electrical signals pass down the axon, jumping from one node of Ranvier to the next. (In Chapter 2 we discuss the diverse abnormalities that arise when the myelin insulation is compromised in the disease multiple sclerosis [MS].)

FIGURE 1.5 Glial Cells

The other two types of glial cells—astrocytes and microglial cells—perform more diverse functions in the brain. Astrocytes (from the Greek astron, "star," for their usual shape) weave around and between neurons with tentacle-like extensions (FIGURE 1.5B). Some astrocytes stretch between neurons and fine blood vessels, controlling local blood flow to increase the amount of blood reaching more-active brain regions (Schummers et al., 2008). Astrocytes help to form the tough outer membranes that swaddle the brain, and they also secrete chemical signals that affect synaptic transmission and the formation of synapses (Perea et al., 2009; Eroglu and Barres, 2010). In contrast, microglial cells (or microglia) are tiny and mobile (FIGURE 1.5C). Their primary job appears to be to contain and clean up sites of injury (S. A. Wolf et al., 2017). However, astrocytes and microglia may also worsen some problems, such as harmful swelling (edema) following brain injury, and degenerative processes like Alzheimer's disease (Chapter 4) and Parkinson's disease (Chapter 11) (W. S. Chung et al., 2015; Liddelow et al., 2017).

Supported and influenced by glial cells, and sharing information through synapses, neurons form the vast ensembles of information-processing circuits that give the brain its visible form. These major divisions are our next topic.

How's It Going?
1. What are the four "zones" common to all neurons, and what are their functions?
2. Compare and contrast axonal signal transmission and axonal transport.
3. Describe the three main components of the synapse.
What are some of the specialized structures found on each side of the synapse?
4. What are the names and general functions of the four types of glial cells?
5. What effect does myelin have on axon function?

FOOD FOR THOUGHT
If the axon works in some ways like a wire, and in some ways like a pipe, what do you think might happen if either function was compromised?

1.2 The Nervous System Extends throughout the Body

The Road Ahead
The second part of the chapter surveys the major components of the nervous system. By the end of the section, you should be able to:
1.2.1 Explain what a nerve is, and distinguish between somatic and autonomic nerves.
1.2.2 Identify the cranial and spinal nerves by name and function.
1.2.3 Describe the general functions of the two major divisions of the autonomic nervous system.
1.2.4 Name the main anatomical structures that make up the two cerebral hemispheres.
1.2.5 Summarize the anatomical conventions used to describe locations and directions within the nervous system, and distinguish between gray matter and white matter.
1.2.6 Describe the fetal development of the brain, and catalog the major adult brain divisions that arise from each fetal region.

Neuronal cell bodies, dendrites, axons, and glial cells mass together to form the tissues that define the gross neuroanatomy of the nervous system—the neural structures that are visible to the unaided eye (in this context gross means "large," not "yucky," but you may feel otherwise). The gross view of the entire human nervous system reveals the basic division between the central nervous system (CNS; consisting of the brain and spinal cord) and the peripheral nervous system (everything else; FIGURE 1.6). Let's take a closer look at the anatomical organization of these systems.

FIGURE 1.6 The Central and Peripheral Nervous Systems

The peripheral nervous system has two divisions

The peripheral nervous system consists of nerves—collections of axons bundled together—that extend throughout the body. Some nerves, called motor nerves, transmit information from the spinal cord and brain to muscles and glands; others, called sensory nerves, convey information from the body to the CNS. The various nerves of the body are divided into two distinct systems:

1. The somatic nervous system, which consists of nerves that interconnect the brain and the major muscles and sensory systems of the body

2. The autonomic nervous system, made up of the nerves that connect primarily to the viscera (internal organs)

The somatic nervous system

Taking its name from the Greek word for "body"—soma—the somatic nervous system is the main pathway through which the brain controls movement and receives sensory information from the body and from the sensory organs of the head. The nerves that make up the somatic nervous system form two anatomical groups: the cranial nerves and the spinal nerves.

We each have 12 pairs (left and right) of cranial nerves that arise from the brain and innervate the head, neck, and visceral organs directly, without ever joining the spinal cord. Each cranial nerve is known both by a Roman numeral and by a name; in many cases (not all, welcome to neuroanatomy!) the name reveals the nerve's main functions. As you can see in FIGURE 1.7, some of these nerves are exclusively sensory input pathways: the olfactory (I) nerves transmit information about smell, the optic (II) nerves carry visual information from the eyes, and the vestibulocochlear (VIII) nerves convey information about hearing and balance.
Five pairs of cranial nerves are exclusively motor output pathways from the brain: the oculomotor (III), trochlear (IV), and abducens (VI) nerves innervate muscles to move the eyes; the spinal accessory (XI) nerves control neck muscles; and the hypoglossal (XII) nerves control the tongue. The remaining cranial nerves have both sensory and motor functions. The trigeminal (V) nerves, for example, transmit facial sensation through some axons but control the chewing muscles through other axons. The facial (VII) nerves control facial muscles and receive some taste sensation, and the glossopharyngeal (IX) nerves receive additional taste sensations and sensations from the throat and also control the muscles there. The vagus (X) nerve extends far from the head, innervating visceral organs including the heart, liver, and intestines, among others. Its long, convoluted route is the reason for its name, which is Latin for "wandering." The vagus is the primary route by which the brain both controls and receives information from visceral organs, and it participates in such varied functions as sweating, digestion, and heart rate (Shaffer et al., 2014).

FIGURE 1.7 The Cranial Nerves

Along the length of the spinal cord, 31 pairs of spinal nerves—again, one member of each pair serves each side of the body—emerge through regularly spaced openings along both sides of the backbone (FIGURE 1.8). Each spinal nerve is made up of a group of motor axons, projecting from the ventral (front) part of the spinal cord to the organs and muscles, and a group of sensory axons that enter the dorsal (rear) part of the spinal cord. Spinal nerves are named according to the segments of the spinal cord to which they are connected. There are 8 cervical (neck), 12 thoracic (torso), 5 lumbar (lower back), 5 sacral (pelvic), and 1 coccygeal (bottom) spinal segments. The name of each spinal nerve reflects the position of the spinal cord segment to which it is connected; for example, the nerve connected to the twelfth thoracic segment is called T12, the nerve connected to the seventh cervical segment is called C7, and so on. After leaving the spinal cord, axons from the spinal nerves spread out in the body and may merge with axons from different spinal nerves to form the various peripheral nerves.

FIGURE 1.8 The Spinal Cord and Spinal Nerves

The autonomic nervous system

Although it is "autonomous" in the sense that we have little conscious, voluntary control over its actions, the autonomic nervous system is the brain's main system for controlling the organs of the body. The activity of our organs is determined by a balance between the two major divisions of the autonomic nervous system—called the sympathetic and parasympathetic nervous systems—that act more or less in opposition to each other (FIGURE 1.9).

FIGURE 1.9 The Autonomic Nervous System

Axons of the sympathetic nervous system exit from the middle parts of the spinal cord, travel a short distance, and then innervate the sympathetic ganglia (small clusters of neurons found outside the CNS), which run in two chains along the spinal column, one on each side (see Figure 1.9 left). Axons from the sympathetic ganglia then spread throughout the body, innervating all the major organ systems. In general, sympathetic activity prepares the body for immediate action: blood pressure increases, the pupils of the eyes widen, the heart quickens, and so on.
This set of reactions is sometimes called the fight-or-flight response. In contrast to the effects of sympathetic activity, the parasympathetic nervous system generally helps the body to relax, recuperate, and prepare for future action—sometimes called the rest-and-digest response. Anatomically, nerves of the parasympathetic system originate in the brainstem (above the sympathetic nerves) and in the sacral spinal cord (below the sympathetic nerves), which explains the name: the Greek para means "around" (see Figure 1.9 right). Compared with sympathetic nerves, parasympathetic nerves travel a longer distance before terminating in parasympathetic ganglia, clusters of neurons that are usually located close to the organs they serve.

The sympathetic and parasympathetic systems have very different effects on individual organs because the organs receive different neurotransmitters from the two opposing systems (norepinephrine from sympathetic nerves and acetylcholine from parasympathetic nerves; see Chapter 2 and Figure 11.19). The balance between the two systems determines the state of the internal organs at any given moment. So, for example, when parasympathetic activity predominates, heart rate slows, blood pressure drops, and digestive processes are activated. As the brain causes the balance of autonomic activity to become predominantly sympathetic, opposite effects are seen: increased heart rate and blood pressure, inhibited digestion, and so on. This tension between parasympathetic and sympathetic activity ensures that the individual is appropriately prepared for current circumstances.

The central nervous system consists of the brain and spinal cord

The spinal cord funnels sensory information from the body up to the brain and conveys the brain's motor commands out to the body. The spinal cord also contains circuits that perform local processing and control simple units of behavior, such as reflexes. We will discuss other aspects of the spinal cord in later chapters, so for now let's focus on the anatomy of the executive portion of the CNS: the brain.

Anatomical conventions for describing the anatomy of the brain

Because the nervous system is a three-dimensional structure, two-dimensional illustrations and diagrams cannot represent it completely. Anatomists use standard terminology to help identify structures, locations, and directions in the brain. It's a bit of a chore, but learning the anatomical lingo now will make later discussions of brain organization much easier to follow.

Scientists (and brain-imaging machines) typically visualize the brain in one of three main planes to obtain a two-dimensional section from this three-dimensional object. The plane that divides the brain into right and left portions is called the sagittal plane. The plane that divides front (anterior) from back (posterior) is called the coronal plane (also known as the frontal plane), and the horizontal plane divides between upper and lower parts.

In addition to the three planes of dissection, locations in the nervous system are described using directional terms. Medial means "toward the middle," contrasting with lateral, which means "toward the side." Ipsilateral means "on the same side," as opposed to contralateral, which means "on the opposite side." These terms are all relative, as are the terms superior ("above"), inferior ("below"), and basal ("toward the bottom") as illustrated in FIGURE 1.10.
Locations toward the front of the brain are anterior or rostral, locations toward the rear are posterior or caudal (from the Latin cauda, "tail"). Proximal means "near" and distal means "far" or "toward the end of a limb." And a nerve or pathway is afferent if it carries information into a region that we're interested in, and it's efferent if it carries information away from the region of interest (a handy way to remember this is that efferents exit but afferents arrive, relative to the region of interest). Lastly (phew!), dorsal means "toward the back," and ventral means "toward the belly."

FIGURE 1.10 Terms for Describing Anatomical Locations

The outer surface of the brain

On average, the human brain weighs only 1400 grams (about 3 pounds), accounting for just 2 percent of the average body weight. Put your two fists together and you get a sense of the size of the two cerebral hemispheres—smaller than most people expect. But what the brain lacks in size and weight it makes up for in intricacy. One obvious complication of the brain is its lumpy, convoluted surface—the result of elaborate folding of a thick sheet of tissue, mostly the dendrites, cell bodies, and axonal projections of neurons, called the cerebral cortex (also called neocortex in mammals).

Most people have heard brain tissue referred to as gray matter. When you cut into a brain, you see that the outer layers of the cortex have a darker grayish shade (see Figure 1.10). This is because they contain a preponderance of neuronal cell bodies and dendrites. In contrast, the underlying white matter gets its snowy appearance from the whitish fatty myelin that insulates many axons. So, a simple view is that gray matter mostly receives and processes information, while white matter mostly transmits information.

The folding of the cortex creates ridges of tissue, called gyri (singular gyrus), that are separated from each other by crevices called sulci (singular sulcus). Folding up the tissue in this way greatly increases the amount of cortex that can be crammed into the confines of the skull, and about two-thirds of the cerebral cortex is hidden within the cortical convolutions. The pattern of folding is not random; in fact, it is similar enough between brains that we can name the various gyri and sulci and group them together into lobes. Using a combination of landmarks and anatomical conventions, neuroscientists distinguish four major cortical regions of each cerebral hemisphere: the frontal, parietal, temporal, and occipital lobes (FIGURE 1.11). In some cases, the boundaries between adjacent lobes are very clear; for example, the Sylvian fissure (or lateral sulcus) divides the temporal lobe from other regions of the hemisphere. The central sulcus provides a distinct landmark dividing the frontal and parietal lobes. The physical boundaries between the occipital lobe and the temporal and parietal lobes are less obvious, but the lobes are quite different with regard to the functions they perform.

FIGURE 1.11 The Human Brain Has Four Distinct Lobes

The cortex is the seat of complex cognition. Depending on the specific regions affected, cortical damage can cause symptoms ranging from impairments of movement or body sensation; through speech errors, memory problems, and personality changes; to many kinds of visual impairments.
In people with undamaged brains, the four lobes of the cortex are continually communicating and collaborating in order to produce the seamless control of complex behavior that distinguishes us as individuals. Furthermore, hundreds of millions of axons connect the left and right hemispheres via the corpus callosum, allowing the brain to act as a single entity during complex processing. Some life-sustaining functions—heart rate and respiration, reflexes, balance, and the like—are governed by lower, subcortical brain regions.

We can identify certain general categories of processing that are particularly associated with specific cortical lobes. For example, the frontal lobes are important for movement planning and high-level cognition. In front of the central sulcus, the precentral gyrus (see Figure 1.11) of the frontal lobe is crucial for motor control and is thus referred to as primary motor cortex. In contrast, the parietal lobes receive sensory information from the body and participate in spatial cognition. The strip of parietal cortex located just posterior to the central sulcus, called the postcentral gyrus (see Figure 1.11), mediates the sense of touch, so it is often referred to as the primary somatosensory cortex. As we will see in later chapters, both gyri exhibit somatotopic organization, which means that they precisely map the various parts of the contralateral side of the body. The occipital lobes are crucial for vision, and the temporal lobes receive auditory inputs and participate in language and memory functions. But beyond the generalities mentioned here, each lobe of the brain also performs a wide variety of other high-level functions. These will be major topics in later chapters.

Development of subdivisions within the brain

It can be difficult to understand the origin of some of the regional names applied to the adult human brain. For example, part of the brain closest to the back of the head is anatomically identified as part of the forebrain. Why? The key to understanding this confusing terminology is to consider how the gross anatomy of the brain develops early in life.

In a very young embryo of any vertebrate, the CNS looks like a tube (Chapter 4). The walls of this neural tube are made of cells, and the interior is filled with fluid. A few weeks after conception, the human neural tube begins to show three separate swellings at the head end (FIGURE 1.12A): the forebrain, the midbrain, and the hindbrain; the remainder of the neural tube eventually forms the spinal cord. By about 50 days, the fetal forebrain features two clear subdivisions. At the very front is the telencephalon (from the Greek encephalon, "brain"), which will become the cerebral hemispheres (consisting of cortex plus some deeper structures). The other part of the forebrain is the diencephalon, which will go on to become the thalamus and the hypothalamus, two of the many subcortical structures of the forebrain.

FIGURE 1.12 Divisions of the Human Nervous System in the Embryo and the Adult

Similarly, the hindbrain further develops into several large structures: the cerebellum, pons, and medulla. The term brainstem usually refers to the midbrain, pons, and medulla combined (some scientists include the diencephalon too). FIGURES 1.12B and C show the positions of these structures and their relative sizes in the adult human brain. Even when the brain achieves its adult form, it is still a fluid-filled tube, but a tube of very complicated shape.
The interior of this complicated tube forms the cerebral ventricles that we'll describe later. The main sections of the brain can be subdivided in turn. We can work our way from the largest, most general divisions of the nervous system on the left of the schematic in Figure 1.12B to more-specific ones on the right. Within and between the major brain regions are collections of neurons called nuclei (singular nucleus) and bundles of axons called tracts. Recall that outside the CNS, collections of neurons are called ganglia, and bundles of axons are called nerves. Unfortunately, the word nucleus can mean either "a collection of neurons in the CNS" or "the spherical DNA-containing organelle within a single cell." You must rely on the context to understand which meaning is intended. Because brain tracts and nuclei are the same in different individuals, and often the same in different species, they have names too (many, many names). You are probably more interested in the functions of all these parts of the brain than in their names, but as we noted earlier, each region serves more than one function, and our knowledge of the functional organization of the brain is continually being updated with new research findings. So, with that caution in mind, we'll briefly survey the functions of specific brain structures next, leaving the detailed discussion for later chapters.

How's It Going?
1. Name and briefly describe the major divisions of the peripheral nervous system. What general function does each part perform?
2. Briefly sketch and describe the anatomical organization of the cranial nerves. How many nerves are there? Now do the same for the spinal nerves.
3. Give some examples of how each division of the autonomic nervous system affects organs of the body.
4. Why does the cortex look so lumpy on the outside?
5. What is special about the pre- and postcentral gyri?
6. Review the fetal development of the brain, and the major divisions of the brain that arise from the earlier fetal form of the nervous system.

FOOD FOR THOUGHT
We mammals evolved separate autonomic systems to ready us for "fight-or-flight" versus "rest-and-digest" situations, so this division must be adaptive in some ways. Why do you think this organization might have been beneficial in ancestral species, and how might it complicate our modern lives?

1.3 The Brain Shows Regional Specialization of Functions

The Road Ahead
Functional neuroanatomy connects behaviors to brain regions. By the end of this section, you should be able to:
1.3.1 Describe the cellular organization of the cortex.
1.3.2 Identify the major components of the basal ganglia and limbic system, and state some of the behavioral functions of each.
1.3.3 Name the major divisions of the brainstem and midbrain, and identify key functions performed by each.

Vertebrates are bilaterally symmetrical: our bodies have mirror-image left and right sides. The brain is no exception, and almost all the structures of the brain also come in twos. One important principle of the vertebrate brain is that each side of the brain generally controls the contralateral side of the body. The right side of the brain thus controls movement of the left side of the body and receives sensory information from the left. Likewise, the left side of the brain monitors and controls the right side of the body. In Chapter 15 we'll learn about how the two cerebral hemispheres interact, but for now let's review the various components of the brain and their functions.
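One way to keep the developmental vocabulary from the previous section straight is to lay the divisions out as a nested structure, working from the most general level to more specific ones, as in the left-to-right scheme of Figure 1.12B. The grouping below is a simplified sketch based only on the structures named in this chapter, not an exhaustive anatomical catalog.

```python
# Simplified sketch of the major CNS divisions described in this chapter.
brain_divisions = {
    "forebrain": {
        "telencephalon": ["cerebral cortex", "other deep structures of the hemispheres"],
        "diencephalon": ["thalamus", "hypothalamus"],
    },
    "midbrain": ["tectum (superior and inferior colliculi)", "tegmentum"],
    "hindbrain": ["cerebellum", "pons", "medulla"],
    "spinal cord": [],
}

# The term "brainstem" usually refers to the midbrain, pons, and medulla combined.
brainstem = ["midbrain", "pons", "medulla"]

for division, parts in brain_divisions.items():
    print(division, "->", parts)
```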
The cerebral cortex performs complex cognitive processing

Neuroscientists are only just beginning to understand how the structures and functions of the cerebral cortex accomplish the feats of human cognition. If the human cortex were unfolded, it would occupy an area of about 2000 square centimeters (315 square inches)—more than 3 times the area of this book's front cover. How are all those millions of cells arranged?

Cortical neurons make up six distinct layers, as shown in FIGURE 1.13. Each cortical layer has a unique appearance because it consists of either a band of similar neurons, or a particular pattern of dendrites or axons. For example, the outermost layer, layer I, is distinct because it has few cell bodies, while layers V and VI stand out because of their many neurons with large cell bodies. The most prominent kind of neuron in the cerebral cortex—the pyramidal cell—usually has its pyramid-shaped cell body in layer III or V.

FIGURE 1.13 Layers of the Cerebral Cortex

In the cerebral cortex, neurons may be organized into regular columns, perpendicular to the layers, that seem to serve as information-processing units (Horton and Adams, 2005). These cortical columns extend through the entire thickness of the cortex, from the white matter to the surface. Within each column, most of the synaptic interconnections of neurons are vertical, although there are some horizontal connections as well (Mountcastle, 1979; Sakmann, 2017).

Important nuclei are hidden beneath the cerebral cortex

Buried within the cerebral hemispheres are several large gray matter structures, richly connected to each other and to other brain regions and contributing to a wide variety of behaviors. One prominent cluster—the basal ganglia, consisting primarily of the caudate nucleus, the putamen, and the globus pallidus (FIGURE 1.14A)—plays a critical role in the control of movement (see Chapter 5).

FIGURE 1.14 Two Important Brain Systems

Curving through each hemisphere, alongside the basal ganglia, lies a loose network of structures called the limbic system (identified in FIGURE 1.14B) that is involved in emotion and learning. The amygdala is a limbic structure involved in emotional regulation (see Chapter 11), odor perception (see Chapter 6), and aspects of memory (see Chapter 13). The hippocampus and fornix are also important for learning and memory (see Chapter 13). A strip of cortex atop the corpus callosum in each hemisphere, called the cingulate gyrus, is implicated in many cognitive functions, including directing attention (see Chapter 14), and the olfactory bulb processes the sense of smell. Other limbic structures near the base of the brain, especially the hypothalamus, help to govern motivated behaviors, like sex and aggression, and to regulate the hormonal systems of the body.

Toward the medial (middle) and basal (bottom) aspects of the forebrain are found the thalamus and the hypothalamus (the latter means simply "under thalamus"). You can see both the hypothalamus and thalamus in Figure 1.14B and FIGURE 1.15A. The thalamus is the brain's traffic router, directing virtually all incoming sensory information to the appropriate regions of the cortex for further processing, and receiving instructions back from the cortex about which sensory information is to be transmitted.
The small but mighty hypothalamus has a much different role: it is packed with discrete nuclei involved in many vital functions, such as hunger, thirst, temperature regulation, sex, and many more. Furthermore, because the hypothalamus also controls the pituitary gland, it serves as the brain's main interface with the hormonal systems of the body. We'll encounter the hypothalamus again in several later chapters.

FIGURE 1.15 Midline and Basal Structures of the Brain

The midbrain has sensory and motor components

Compared with the forebrain and hindbrain, the midbrain doesn't encompass a lot of tissue, but that doesn't mean its components are unimportant. The top part of the midbrain, called the tectum (from the Latin for "roof," because it's atop the midbrain), features two pairs of bumps—one pair in each hemisphere—with specific roles in sensory processing. The more-rostral bumps are called the superior colliculi (singular colliculus), and they have specific roles in visual processing. The more-caudal bumps, called the inferior colliculi (see Figure 1.15A), process information about sound.

The main body of the midbrain is called the tegmentum, and it also contains several important structures. The substantia nigra is in many ways a part of the basal ganglia, and loss of its neurons (which normally release the neurotransmitter dopamine within the forebrain) leads to Parkinson's disease, discussed in Chapter 5. The periaqueductal gray is a midbrain structure implicated in the perception of pain (see Chapter 5). The reticular formation (reticular means "netlike") is a loose collection of neurons that are important in a variety of behaviors, including sleep and arousal (see Chapter 10). Multiple large tracts of nerve fibers run in, out, and through the midbrain to connect the brain to the spinal cord and the rest of the body.

The brainstem controls vital body functions

The views in FIGURE 1.15 clearly show the hemispheres of the cerebellum ("little brain"), tucked up under the posterior cortex and attached to the dorsal brainstem. Like the cerebral cortex, the cerebellum is highly convoluted, but it is made up of a simpler three-layered tissue instead of the six layers found in the cerebral cortex. The cerebellum has long been known to be crucial for motor coordination and control, but we now know that it also participates in certain aspects of cognition, including learning. The adjacent pons (from the Latin word for "bridge") contains many nerve fibers and important motor control and sensory nuclei; it is the point of origin for several cranial nerves. The reticular formation, which we first saw in the midbrain, stretches down through the pons and ends in the medulla.

The medulla marks the transition from the brain to the spinal cord. In addition to conveying all of the major motor and sensory fibers to and from the body, the medulla contains nuclei that drive such essential processes as respiration and heart rate, so brainstem injuries are often lethal. And like other parts of the brainstem, the medulla gives rise to several cranial nerves.

Behaviors and cognitive processes depend on networks of brain regions

In order to understand the neural origins of our most complex behaviors and experiences—thought, language, music—scientists are working to discover how different brain regions with distinct functions collaborate in larger-scale networks.
This applies to functional units as small as the individual cortical columns we mentioned earlier and to huge assemblages of millions of cells making up substantial parts of cortical lobes. Cortical regions communicate with one another via tracts of axons looping through the underlying white matter. Some of these connections are short pathways to nearby cortical regions; others travel longer distances through and between the two cerebral hemispheres and subcortical structures like the basal ganglia. Progress in describing the "connectome" of the human brain (Glasser, Coalson et al., 2016)—the network map that completely describes the functional connections within and between brain regions, based on huge datasets of human and nonhuman animal neuroanatomical findings (van Essen and Glasser, 2018; Suárez et al., 2020)—is rapidly transforming the field of behavioral neuroscience.

How's It Going?
1. How are the cells of the cerebral cortex organized?
2. Name the major components of the basal ganglia and the limbic system. What behaviors especially rely on these systems?
3. What functions are served by the thalamus and hypothalamus?
4. Name and describe the general functions of the major components of the midbrain and hindbrain.
5. Why are injuries to the medulla often fatal?

FOOD FOR THOUGHT
How can we reconcile the emerging emphasis on the connectome—the view that complex behavior depends on activity in diffuse networks of brain structures—with the established evidence that specific brain regions have specific functions?

1.4 Specialized Support Systems Protect and Nourish the Brain

The Road Ahead
Next, we consider the specialized structures and fluids that support the operations of the brain. By the end of this section, you should be able to:
1.4.1 Name and describe the meninges, ventricular system, and glymphatic system, and review their clinical significance.
1.4.2 Give an outline of the vascular supply of the brain, and note the signs and symptoms of stroke.

The brain is relatively soft and easily damaged. It also needs a steady and substantial supply of fuel to maintain normal functioning, and thus keep us alive. Fortunately, the brain is equipped with systems to protect and cushion it and to provide a continual source of energy, nutrients, and important chemicals.

The brain floats within layers of membranes

Within the bony skull and vertebrae, the brain and spinal cord are swaddled by three protective membranes called meninges (see Figure 1.8). Between a tough outer sheet called the dura mater (in Latin, literally "tough mother") and the delicate pia mater ("tender mother") that adheres tightly to the surface of the brain, a webby substance called the arachnoid ("spiderweb-like") creates a reservoir called the subarachnoid space that suspends the brain in a bath of a watery liquid called cerebrospinal fluid (CSF). The meninges can become inflamed by infections, termed meningitis, or distorted by a hemorrhage; either situation is a medical emergency because the brain is squeezed and impaired. Tumors called meningiomas can form in the meninges and are technically benign, because they don't spread, but any mass that takes up space in the enclosed cranium is far from harmless.

The brain relies on two fluids for survival

The brain essentially floats in cerebrospinal fluid within the subarachnoid space, cushioning it from minor blows to the head.
But CSF has additional important roles, passing into the substance of the brain, conveying nutrients and signaling chemicals, and picking up waste matter for later clearance. Inside the brain is a series of chambers called the cerebral ventricles, which are filled with CSF (FIGURE 1.16). These chambers comprise the ventricular system. Each hemisphere of the brain contains a lateral ventricle extending into all four lobes of the hemisphere. The lateral ventricles are lined with a specialized membrane called the choroid plexus, which produces CSF by filtering blood. The CSF flows from the lateral ventricles into a midline third ventricle (so named because it follows the two lateral ventricles) and continues down a narrow passage (the cerebral aqueduct) to the fourth ventricle, which lies between the cerebellum and the pons. Just below the cerebellum, three small openings allow CSF to exit the ventricular system and circulate over the outer surface of the brain and spinal cord. The CSF is absorbed back into the circulatory system through large veins beneath the top of the skull. A problem that blocks the flow of CSF through the ventricular system may result in hydrocephalus, a ballooning of the ventricles as they accumulate fluid, resulting in symptoms that vary depending on which brain structures are compromised by the expanding ventricles.

FIGURE 1.16 The Cerebral Ventricles

The brain was long thought to lack the lymphatic drainage system found in other tissues, but the recently discovered glymphatic system (the name reflects the involvement of glial cells) provides for drainage of waste-bearing CSF-derived fluids from the brain (FIGURE 1.17), as well as the distribution of various nutrients, immune system components, and signaling substances (Jessen et al., 2015; Mestre et al., 2020), and may provide a route to get certain drugs into the brain (Naseri Kouzehgarani et al., 2021). Curiously, glymphatic clearance occurs primarily while we sleep, and it is impaired in people who have certain sleep disorders, such as sleep apnea (Chong et al., 2022; Lee et al., 2022). Researchers think that glymphatic clearance prevents the accumulation of substances that damage neurons, and they are working to understand how this system may protect against neurological problems like Alzheimer's disease, stroke, and multiple sclerosis (M. K. Rasmussen et al., 2018).

FIGURE 1.17 The Glymphatic System

The second crucial fluid for the brain is, of course, blood. Without a lavish supply of oxygen- and nutrient-rich blood, the tissue of the brain would swiftly die. That's because brain tissue is unusually needy: it accounts for only 2 percent of the average human body's weight but consumes more than 20 percent of the body's energy at rest. So the brain is critically dependent on a set of large blood vessels. Blood arrives in the brain via two pairs of arteries: the carotid arteries in the neck, and the vertebral arteries that ascend within each side of the vertebrae of the neck, fusing to form the basilar artery inside the skull. These arteries give rise to a set of three pairs of cerebral arteries that supply the cortex, plus a number of smaller vessels that penetrate and supply other regions of the brain. Fine vessels and capillaries branching off from the arteries deliver nutrients and other substances to brain cells and remove waste products.
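(A rough way to appreciate why the brain needs such a lavish blood supply, using only the two percentages given above: if roughly 2 percent of the body's mass consumes more than 20 percent of its resting energy, then gram for gram, brain tissue is burning energy at about 20 ÷ 2 = 10 times the body's average resting rate.)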
In contrast to capillaries in the rest of the body, capillaries in the brain are highly resistant to the passage of large molecules across their walls and into neighboring neurons. This blood-brain barrier probably evolved to help protect the brain from infections and blood-borne toxins, but it also makes the delivery of drugs to the brain more difficult. Interruption of blood flow to the brain causes stroke, which we consider in Signs & Symptoms, next.

SIGNS & SYMPTOMS
Stroke
The general term stroke applies to a situation in which a clot, a narrowing, or a rupture interrupts the supply of blood to a particular brain region, causing the affected region to stop functioning or die (FIGURE 1.18). Although the exact effects of stroke depend on the region of the brain that is affected, the five most common warning signs are sudden numbness or weakness, altered vision, dizziness, severe headache, and confusion or difficulty speaking. Effective treatments are available to help restore blood flow and minimize the long-term damage of a stroke, but only if the victim is treated immediately (Albers et al., 2018). Some people experience temporary stroke-like symptoms lasting for a few minutes. Caused by a brief interruption of blood supply to some part of the brain, this transient ischemic attack (from the Greek ischemia, "interrupted blood"), or TIA, is a serious warning sign that a major stroke may be imminent, and it should be treated as a medical emergency.

FIGURE 1.18 Stroke

How's It Going?
1. Name the three meninges, and describe how they're organized. Identify one special characteristic of each.
2. What is CSF? What function does it serve, where does it come from, and where does it go?
3. Describe the ventricular system of the brain.
4. What is the blood-brain barrier?
5. What are the two types of stroke, and what are some common symptoms of a stroke?

FOOD FOR THOUGHT
Why do we have two circulating fluid systems in the brain—blood versus CSF? Couldn't blood do it all?

1.5 Scientists Have Devised Clever Techniques for Studying the Structure and Function of the Nervous System

The Road Ahead
Our focus now turns to experimental approaches researchers use to probe the nervous system. By the end of this section, you should be able to:
1.5.1 Distinguish between invasive and noninvasive experimental techniques.
1.5.2 Review the techniques for studying the detailed, cellular structure and function of the brain, and review their application in various types of studies.
1.5.3 Summarize the major brain-imaging technologies used to study living human brains, highlighting their differing uses and limitations.

Because it is both fantastically complex and somewhat inaccessible, the brain poses special challenges when it comes to asking research questions and designing experiments to answer them. Researchers have long sought methods that would allow them to study the detailed structure and function of the nervous system, from mapping the molecular components of the various cells that make up the brain to tracking the moment-by-moment activity of large neural networks during the execution of complex behaviors. These techniques vary in terms of their invasiveness: to study the brain at the cellular level, we generally need to work with postmortem ("after death") tissue samples or biopsies, whereas the larger-scale activity of the brain can be studied using less-invasive functional-imaging technologies.
Histological techniques let us view the cells of the nervous system in varying ways

Over the last 150 years or so, technical advances in histology—the study of the composition of body tissues—have made it possible to selectively stain different parts of neurons and glia. Nowadays, scientists use specialized staining procedures to study the numbers, shapes, distribution, and interconnections of neurons within targeted regions of the brain. We can group these techniques based on the types of experiments they enable.

Regional cell counts
Using Nissl stains, scientists can visualize all of the cell bodies in a tissue section, making it possible to measure the size and number of cell bodies in a particular region (FIGURE 1.19A).

FIGURE 1.19 Histological Methods for Studying Neurons

Individual cell shapes
Mysteriously, and in contrast to Nissl stains, Golgi stains label only a small minority of neurons in a sample, but the affected cells are stained very completely, revealing fine details of cell structure such as the branches of dendrites and axons. With the Golgi method (and similar techniques, such as filling cells with fluorescent dye), individual neurons stand out in sharp contrast to their unstained neighbors, allowing researchers to study the types and precise shapes of neurons in a brain region of interest (FIGURE 1.19B).

Expression of cellular products
Often, neuroscientists would like to know the distribution of neurons that exhibit a specific property. In autoradiography, for example, animals are treated with radioactive versions of experimental drugs, and then thin slices of the brain are placed alongside photographic film. Radioactivity from the drug "exposes" the film—just like photons in a film camera—so the brain essentially takes a picture of itself, highlighting the specific brain regions where the radioactive drug has become selectively concentrated. An alternative way to visualize cells that have an attribute in common—termed immunohistochemistry (IHC) (FIGURE 1.19C)—involves creating antibodies against a protein of interest (we can create antibodies to almost any protein). Equipped with colorful labels, these antibodies can selectively seek out and attach themselves to their target proteins within neurons in a brain slice, revealing the distribution of only those neurons that make the target protein. A related procedure called in situ hybridization goes a step further and, using radioactively labeled lengths of nucleic acid (RNA or DNA, see the Appendix), labels only those neurons in which a gene of interest has been turned on.

Interconnections between neurons
Many research questions are more concerned with the pattern of connections between neurons than with their cellular structure (FIGURE 1.19D). To accomplish this goal, scientists have developed many sorts of tract tracers, substances that are taken up by neurons and transported over the routes of their axons. Some tract tracers can even jump across synapses, or work their way backward through the length of the neural pathway, leaving visible molecules of label all along the way. In "Brainbow" experiments, inserted genes cause neurons to express fluorescent proteins in hundreds of different hues (FIGURE 1.19E), powerfully aiding the study of brain development and the interconnections of neurons (Cai et al., 2013; Shen et al., 2020).

Brain-imaging techniques reveal the structure and function of the living brain

How can we study intact and functioning brains?
Unlike older, more invasive techniques, modern brain-imaging technology lets us study the brains of healthy participants, revealing both structural details and the ongoing patterns of brain activity associated with specific behaviors. Computerized axial tomography In computerized axial tomography (CAT or CT scans), X-rays are used to generate images by moving an X-ray source in steps around the head. At each point, detectors on the opposite side of the head measure the amount of X-ray radiation that is absorbed; this value is proportional to the density of the tissue the X-rays passed through. When this process is repeated from many angles, the results are mathematically combined into a computer-generated anatomical image of the brain based on density (FIGURE 1.20A). CT scans are medium-resolution images, useful for visualizing problems such as strokes, tumors, or cortical shrinkage. FIGU R E 1 . 2 0 Visualizing the Living Human Brain View larger image Sam, whom we met at the beginning of the chapter, had developed a meningioma that was pressing on and deforming the motor cortex on the left side of his brain. The tumor impaired the functioning of regions of the motor cortex responsible for voluntary control of the muscles of Sam’s right arm and the right side of his face, producing his alarming symptoms. Fortunately, his emergency CT scan pinpointed the meningioma, and following surgical removal of the tumor, Sam experienced immediate improvement; he has been healthy ever since. Magnetic resonance imaging Using magnetic fields and radio waves instead of X-rays, magnetic resonance imaging (MRI) provides higher-resolution images than CT, and less exposure to potentially harmful X-rays. For an MRI image of the brain, the person’s head is first placed in an extremely powerful magnetic field that causes all the protons in the brain to line up in parallel, instead of in their usual random orientations (protons are found in the nuclei of atoms; in body tissues, most protons are found within water molecules). Next, the protons are knocked over by a powerful pulse of radio waves. When this pulse is turned off, the protons relax back to their original configuration, emitting radio waves as they go. Detectors surrounding the head measure this emitted energy, which differs for tissues of varying densities. A computer then compiles the densitybased information to create a detailed cross-sectional view of the brain (FIGURE 1.20B) that reveals the size and shape of distinct brain regions. MRI images can also reveal subtle changes in the brain, such as the local loss of myelin that is characteristic of multiple sclerosis. A variant of MRI, called diffusion tensor imaging (DTI), exploits a signal associated with the diffusion of water within axons in order to visualize axonal fiber tracts within the brain (FIGURE 1.20C). This kind of research, generally known as tractography, is helping us to learn how networks of brain structures —the connectome—work together in various forms of complex cognition and consciousness. Functional brain imaging With its ability to noninvasively image localized changes in the brain’s activity, rather than details of its structure, functional MRI (fMRI) revolutionized cognitive neuroscience in the twenty-first century. 
Offering both reasonable speed (temporal resolution) and sharpness (spatial resolution) at the gross anatomical level, fMRI uses rapidly oscillating magnetic fields to detect regional changes in brain metabolism, particularly patterns of oxygen use and blood flow in the most active regions of the brain. Scientists can use fMRI data to create “difference images” of the specific activity of different parts of the brain while people engage in various experimental tasks. Although fMRI cannot resolve the fine cellular structure of the brain and is too slow to track rapid, moment-by-moment changes in the activity of networks of neurons, fMRI combined with conventional anatomical MRI has revealed many important clues about how networks of brain structures collaborate on complex cognitive processes (FIGURE 1.20D). An additional consideration is that, as in other imaging techniques, fMRI imagery is not photographic: it is created by a computer, based on mathematical models. There is concern among researchers that the computer algorithms used in this process may sometimes be misleading (Eklund et al., 2016; Poldrack et al., 2017). Like fMRI, positron emission tomography (PET) depicts the brain’s activity during behavioral tasks. Short-lived radioactive chemicals are injected into the bloodstream, and radiation detectors encircling the head map the destination of these chemicals in the brain. A particularly effective strategy is to inject radioactively labeled glucose (“blood sugar”) while the person is engaged in a cognitive task of interest to the researcher. Because the radioactive glucose is selectively taken up and used by the most active parts of the brain, a color-coded portrait of brain activity during the task can be created (FIGURE 1.20E) (P. E. Roland, 1993; Chiaravalloti et al., 2019). PET can’t match the spatial resolution of fMRI, and because the radioactive tracer takes time to accumulate, it is also slower and less able to track quick changes in brain activity. We can also use light to track cortical activity during behavior. Light in the near-infrared range (i.e., with wavelengths of 700–1000 nm) passes easily through tissue and can be transmitted through the skull a short distance into the cortex. In functional near-infrared spectroscopy (fNIRS; sometimes called optical imaging), detectors pick up reflections of this light as it bounces back out through the scalp—shifts in the wavelengths of the reflected light are associated with local changes in blood flow and cortical activity during ongoing behavior (FIGURE 1.20F). Although the spatial resolution of fNIRS is less than with fMRI or PET, it offers several advantages: it is noninvasive, fast, and the apparatus is relatively inexpensive and compact, enabling many more laboratories to use brain imaging in their research (Chen et al., 2020; Devezas, 2021). Magnetic stimulation and mapping It is a simple matter to pass magnetic fields into the brain. However, it is technically more challenging to project magnetic fields in a highly focused and precise manner. In transcranial magnetic stimulation (TMS) (FIGURE 1.21), focal magnetic currents are used to briefly stimulate the cortex of alert people directly, without any lasting physical alterations or surgery. Using TMS lets experimenters map cortical surfaces by either activating or muting discrete regions while simultaneously tracking any resulting changes in behavior, and it can be powerfully combined with functional brain-imaging techniques like PET (Tremblay et al., 2020). FIGURE 1.21 Transcranial Magnetic Stimulation
Not only can magnets stimulate neurons, but neurons also act as tiny electromagnets themselves! In magnetoencephalography (MEG), a large array of ultrasensitive detectors measures the minuscule magnetic fields produced by the electrical activity of cortical neurons. This information is used to construct real-time maps of brain activity during ongoing cognitive processing (FIGURE 1.22). Because MEG can track quick, moment-by-moment changes in brain activity, it is excellent for studying the rapidly shifting patterns of brain activity in cortical circuits that fMRI is too slow to track (Baillet, 2017; Gross, 2019). FIGURE 1.22 Animal Magnetism Now let’s look a little more closely at a process scientists use to distinguish the brain activity underlying a specific behavior from the background activity of the busy conscious brain. How’s It Going? 1. Compare and contrast the main methods for producing still images of the structure of the brain. What do you think are some of the advantages and disadvantages of each method? 2. Compare and contrast the main functional-imaging technologies used for visualizing the activity of brain regions. What do you think are some of the advantages and disadvantages of each method? 3. Describe the process that neuroscientists can use to isolate brain activity associated with a specific behavior, as visualized by functional-imaging techniques. FOOD FOR THOUGHT If you could grant neuroscientists a “perfect” technology for studying the human brain, what features would it have? What practical limitations would you have to overcome to make your brain tech a reality? RESEARCHERS AT WORK Subtractive analysis isolates specific brain activity Modern brain imaging provides dramatic pictures showing the particular brain regions that are activated during specific cognitive processes; there are many such images in this book. But if you do a PET scan of a healthy person, you find that almost all of the brain is active at any given moment (showing that the old notion that “we use only 10 percent of our brain” is nonsense). How do researchers obtain these highly specific images of brain activity? In order to associate specific brain regions with particular cognitive operations, researchers developed a sort of algebraic technique, in which activity during one behavioral condition is subtracted from activity during a different condition. So, for example, the data from a control PET scan made while a person was gazing at a blank wall might be subtracted from the data from a PET scan collected while that person studied a complex visual stimulus. Averaged over enough trials, the specific regions that are almost always active during the processing task become apparent, even though, on casual inspection, a single experimental scan might not look much different from a single control scan (FIGURE 1.23). It is important to keep in mind that although functional brain images seem unambiguous and easy to label, they are computer-generated composites—not actual brain images—and thus only as accurate as the assumptions and algorithms with which they are created (Racine et al., 2005; Poldrack et al., 2017). FIGURE 1.23 Isolating Specific Brain Activity
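The subtraction logic just described can be made concrete with a short sketch. The code below is only an illustration: the array sizes, trial counts, and the function name difference_image are invented for this example and do not come from any particular imaging package, and real PET and fMRI analyses add many further steps (motion correction, spatial normalization, statistical thresholding).

```python
import numpy as np

def difference_image(task_scans, control_scans):
    """Average the activity maps for each condition across trials,
    then subtract the control average from the task average."""
    task_mean = np.mean(task_scans, axis=0)        # mean over task trials
    control_mean = np.mean(control_scans, axis=0)  # mean over control trials
    return task_mean - control_mean                # voxels more active during the task

# Hypothetical data: 50 trials per condition, each scan a 64 x 64 map of activity values.
rng = np.random.default_rng(0)
control = rng.normal(100, 10, size=(50, 64, 64))   # busy baseline activity everywhere
task = rng.normal(100, 10, size=(50, 64, 64))
task[:, 20:30, 20:30] += 15                        # one small region works harder during the task

diff = difference_image(task, control)
print(round(diff[25, 25], 1), round(diff[5, 5], 1))  # large inside the "active" region, near zero elsewhere
```

Averaged over many trials, the task-related region stands out in the difference image even though any single task scan looks much like any single control scan, which is exactly the point of the subtraction procedure.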
1.6 Careful Research Design Is Essential for Progress in Behavioral Neuroscience The Road Ahead In the final section of the chapter, we turn our attention to factors that neuroscientists consider in designing research: both the formal layout of experiments and the theoretical considerations on which research questions are based. Studying this section should prepare you to: 1.6.1 Describe and distinguish between correlational studies and experimental studies and explain how scientists rely on all three types of studies to develop research programs. 1.6.2 Discuss the major theoretical perspectives that inform research in behavioral neuroscience. 1.6.3 Discuss important issues that modern behavioral neuroscience must contend with, such as the use of animals in research, and the replication crisis in behavioral research. 1.6.4 Explain the different levels of analysis that behavioral neuroscientists use, and describe how they may relate to one another. The complexity of human behavior, and the organ by which it is produced, necessitate the use of indirect means to manipulate behavior and the activity of the brain. This complexity also contributes to an emerging crisis in behavioral neuroscience: difficulty in replicating many influential earlier findings (De Boeck and Jeon, 2018). These complications underscore the importance of careful research design based on detailed observation, precise control of experimental variables, and selection of appropriate research participants. A new emphasis on rigorous and transparent experimentation is driving the rapid evolution of research methodology across the behavioral sciences. Three types of study designs probe brain-behavior relationships Behavioral neuroscientists use three general types of studies for research. In an experiment employing somatic intervention (FIGURE 1.24A), we alter a structure or function of the brain or body to see how this alteration changes behavior. In this sort of experiment, the physical alteration is an independent variable (a general term used to describe the manipulation in an experiment), and the behavioral effect is the dependent variable (a general term used to describe the measured consequence of an experimental manipulation). Some examples of somatic intervention experiments we’ll explore include (1) administering a hormone to some animals, but not others, and comparing their sexual behavior; (2) electrically stimulating a specific brain region and measuring alterations in movement; and (3) destroying a specific region in the brain and observing subsequent changes in sleep patterns. In each case, the behavioral measurements follow the bodily intervention; furthermore, in each case the behavioral measurements are compared with those of a control group. In a within-participants experiment, the control group is simply the same individuals, tested before the somatic intervention occurs. In a between-participants experiment, the experimental group of individuals is compared with a different group of individuals who are treated identically in every way except that they don’t receive the somatic intervention. FIGURE 1.24 Three Main Approaches to Studying the Neuroscience of Behavior The approach opposite to somatic intervention is behavioral intervention (FIGURE 1.24B). In this approach the scientist alters or controls the behavior of an organism and looks for resulting changes in body structure or function.
Here, behavior is the independent variable, and change in the body is the dependent variable. A few examples include (1) allowing adults of each sex to interact and then measuring their hormone levels, (2) having a person perform a cognitive task while in a brain scanner and then measuring changes in activity in specific regions of the brain, and (3) training an animal to fear a previously neutral stimulus and then observing electrical changes in the brain that may encode the newly learned association. As with somatic intervention, these experimental approaches may employ either within-group or between-groups designs. The third type of study is correlation (FIGURE 1.24C), which measures how closely changes in one variable are associated with changes in another variable. Two examples of correlational studies include (1) observing the extent to which memory ability is associated with the size of a certain brain structure and (2) noting that increases in a certain hormone are accompanied by increases in aggressive behavior. Note that while this type of study tells us if the measured variables are associated in some way, it can’t tell us which causes the other. We can’t tell, for example, whether the hormones cause the aggression or aggression increases the hormones. But even though it can’t establish causality, correlational research can help researchers identify which things are linked, directly or indirectly, and thus it helps us to develop hypotheses that can be tested experimentally using behavioral and somatic interventions. Combining these three approaches yields the circle diagram of FIGURE 1.24D, showing how the three types of studies complement each other. It also underscores that the effects of brain and behavior are reciprocal: each affects the other in an ongoing cycle. Animal research is an essential part of life sciences research, including behavioral neuroscience Human beings’ involvement and concern with other species predates recorded history; early humans had to study animal behavior and physiology in order to escape some species and hunt others. To study the biological bases of behavior inevitably requires research on animals of other species, as well as on human beings. Psychology students usually underestimate the contributions of animal research to psychology because the most widely used introductory psychology textbooks often present major findings from animal research as if they were obtained with human participants (Domjan and Purdy, 1995). A vocal minority of people believe that research with animals, even if it does lead to lasting benefits, is unethical. Others argue that animal research is acceptable only when it produces immediate and measurable benefits. The potential cost in taking this perspective lies in the fact that we have no way of predicting which experiments will lead to a breakthrough. The whole point of studying the unknown is that it is unknown; there is a long history of chance observation, based on the steady accumulation of basic knowledge, leading to unexpected benefits. There’s no denying that animal research can cause stress and discomfort, and researchers have a strong ethical obligation to hold pain and stress to the absolute minimum levels possible. 
Animal research has itself provided us with the drugs and techniques that make most research painless for lab animals, while also leading to improved veterinary care for our animal companions (Sunstein and Nussbaum, 2004), and researchers are ethically bound to continually refine lab practices, with animal well-being a primary concern. Researchers are also bound by animal protection legislation and are subject to continual administrative oversight to ensure adherence to nationally mandated animal care policies that emphasize the use of as few animals as possible without jeopardizing research integrity, as well as the use of the simplest species that can answer the questions under study. As human beings with the full range of emotions and empathetic feelings toward animals, we all wish there were an alternative to the use of animals in research. But if we want to understand how the nervous system works, we have to actually study it, in detail. The life sciences would slow to a crawl without the basic knowledge derived from studying animals. How do similarities and differences among people and animals fit into behavioral neuroscience? Each person is in some ways like all other people, in some ways like some other people, and in some ways like no other person. As shown in FIGURE 1.25, we can extend this observation to the much broader range of animal life. The electrical messages used by neurons (see Chapter 2) are essentially the same in a jellyfish, a cockroach, and a human being, and many species employ identical hormones. These characteristics are said to be conserved, meaning that they first arose in a shared ancestor. But mere similarity of a feature between species does not guarantee that the feature came from a common ancestral species. Our eyes resemble those of octopuses, but certain key differences reveal that their eyes and our eyes evolved independently. FIGU R E 1 . 2 5 We Are All Alike, and We Are All Different View larger image With respect to each biological property, researchers must determine how animals are identical and how they are different. When we seek animal models for studying human behavior or biological processes, we ask the following question: Does the proposed animal model really have some things in common with the process at work in humans? In later chapters we will see many cases in which it does, but even within the same species, individuals differ from one another: cat from cat, blue jay from blue jay, and person from person. Behavioral neuroscience seeks to understand individual differences as well as similarities. Animal Research Is Crucial In studying something as complicated as the mammalian brain, there is often no option but to study the brains of lab animals, the vast majority of which are rats and mice. Much of the research you will read about in this book has relied on studies of animals. Very high standards of care are provided to research animals, as a result of extensive legislative requirements, the need to protect the integrity of research, and the researchers’ ethical and empathetic concern for their research subjects. View larger image Behavioral neuroscientists use several levels of analysis A final consideration that researchers must weigh in designing experiments is the level of complexity at which to work. Even the most complex behavior could, in principle, be understood at the level of cellular activity or even lower, at the level of biochemistry and molecular interactions. 
This idea, that we can understand complex systems by dissecting their simpler constituent parts, is known as reductionism. But we wouldn’t get very far if, for example, we actually set out to explain the use of grammar in terms of chemical reactions; the behavior is so complex that an explanation at the molecular level would involve a vast amount of data. So instead, the reductionist approach aims to identify levels of analysis that are just simple enough that they allow us to make rapid progress on the more complex phenomena under study. Finding explanations for behavior often requires several levels of analysis, ranging from social interactions, to brain systems, to circuits and single nerve cells and their even simpler, molecular constituents (FIGURE 1.26). FIGU R E 1 . 2 6 Levels of Analysis in Behavioral Neuroscience View larger image Naturally, different problems are carried to different levels of analysis, and fruitful work is often being done simultaneously by different researchers working at different levels. For example, in their research on visual perception, some cognitive psychologists carefully analyze behavior. They try to determine how the eyes move while looking at a visual pattern, or how the contrast among parts of the pattern determines its visibility. Meanwhile, other behavioral neuroscientists study the differences in visual abilities among species and try to determine the adaptive significance of these differences. For example, how is the presence (or absence) of color vision related to the lifestyle of a species? At the same time, other investigators trace out brain structures and networks involved in different visual tasks. Still other scientists try to understand the electrical and chemical events that occur in the eye and brain during vision. Some people doubt whether our “merely human” brains will ever be able to understand something as complicated as the human brain, to a level where we can fully explain mysterious properties like consciousness, identity, or the perception of free will. Nevertheless, the gains we are making in understanding how the brain works—the subject of this book—bring us closer to that goal every day. How’s It Going? 1. What are the three general forms of research studies in behavioral neuroscience? What is the issue of “causality”? How do the three research perspectives inform and shape one another? 2. Define independent variable, dependent variable, control group, within-participants experiment, and between-participants experiment. 3. Consider both sides of the debate over animal research, weighing the pros and cons of the “for” and “against” positions. How do you think animal use should be regulated? 4. What is the general principle behind reductionism? How does this influence the level of analysis at which a researcher works? For that matter, what is meant by “level of analysis”? FOOD FOR THOUGHT Researchers have grown concerned that so many studies of human behavior use young adult samples drawn from undergraduate participant pools at universities. What are some of the potential problems with this practice? Under what circumstances is it less likely to be a problem? RECOMMENDED READING Bausell, R. B. (2015). The Design and Conduct of Meaningful Experiments Involving Human Participants: 25 Scientific Principles. New York, NY: Oxford University Press. Blumenfeld, H. (2021). Neuroanatomy through Clinical Cases (3rd ed.). Sunderland, MA: Oxford University Press/Sinauer. Felten, D. L., O’Banion, M. K., and Maida, M. E. (2021). 
Netter’s Atlas of Neuroscience (4th ed.). New York, NY: Elsevier. Huettel, S. A., Song, A. W., and McCarthy, G. (2014). Functional Magnetic Resonance Imaging (3rd ed.). Sunderland, MA: Oxford University Press/Sinauer. Kaiser, M. (2020). Changing Connectomes: Evolution, Development, and Dynamics in Network Neuroscience. Cambridge, MA: MIT Press. Stamatakis, E. A., Orfanidou, E., and Papanicolau, A. C. (Eds.). (2017). The Oxford Handbook of Functional Brain Imaging in Neuropsychology and Cognitive Neurosciences. Oxford, UK: Oxford University Press. Swanson, L. W., Newman, E., Araque, A., and Dubinsky, J. M. (2017). The Beautiful Brain: The Drawings of Santiago Ramón y Cajal. New York, NY: Abrams. Vanderah, T., and Gould, D. J. (2020). Nolte’s The Human Brain: An Introduction to Its Functional Anatomy (8th ed.). New York, NY: Elsevier. VISUAL SUMMARY You should be able to relate each summary to the adjacent illustration, including structures and processes. The online version of this Visual Summary includes links to figures, animations, and activities that will help you consolidate the material. Visual Summary Chapter 1 View larger image LIST OF KEY TERMS afferent amygdala anterior arachnoid Astrocytes autonomic nervous system autoradiography axon axonal transport axon collaterals axon hillock axon terminals basal basal ganglia behavioral intervention between-participants experiment Bipolar neurons blood-brain barrier brainstem causality cell body central nervous system (CNS central sulcus cerebellum cerebral arteries cerebral cortex cerebral hemispheres cerebrospinal fluid (CSF) cervical choroid plexus cingulate gyrus coccygeal computerized axial tomography (CAT or CT scans) Conduction zone conserved contralateral control group coronal plane corpus callosum correlation cortical columns cranial nerves dendrites dependent variable diencephalon diffusion tensor imaging (DTI) distal dorsal dura mater efferent forebrain fornix fourth ventricle frontal functional MRI (fMRI) functional near-infrared spectroscopy (fNIRS glial cells glymphatic system Golgi stains gray matter gross neuroanatomy gyri hindbrain hippocampus histology horizontal plane hydrocephalus hypothalamus immunohistochemistry (IHC) independent variable inferior inferior colliculi innervate Input zone in situ hybridization Integration zone interneurons Ipsilateral lateral lateral ventricle levels of analysis limbic system lumbar magnetic resonance imaging (MRI) magnetoencephalography (MEG) Medial medulla meninges meningiomas meningitis microglial cells midbrain motor nerves motor neurons Multipolar neurons myelin nerves neural tube neurons neuroplasticity neurotransmitter neurotransmitter receptors Nissl stains nodes of Ranvier nuclei occipital lobes olfactory bulb oligodendrocytes Output zone parasympathetic nervous system parietal periaqueductal gray peripheral nervous system pia mater pons positron emission tomography (PET) posterior postsynaptic postsynaptic membrane precentral gyrus presynaptic presynaptic membrane Proximal pyramidal cell reductionism reticular formation sacral sagittal plane Schwann cells sensory nerves Sensory neurons somatic intervention somatic nervous system spinal nerves stroke substantia nigra sulci superior superior colliculi Sylvian fissure sympathetic nervous system synapses synaptic cleft synaptic vesicles tectum tegmentum telencephalon temporal thalamus third ventricle thoracic tracts tract tracers transcranial magnetic stimulation (TMS) transient ischemic attack Unipolar neurons ventral ventricular 
system white matter within-participants experiment CHAPTER 2 Neurophysiology The Generation, Transmission, and Integration of Neural Signals Neil V. Watson Simon Fraser University S. Marc Breedlove Michigan State University Grab the Bull by the Brains Perhaps the most dramatic neuroscience demonstration in history occurred in 1964 when Yale professor José Delgado strolled into an arena in Spain to face an enraged bull trained to attack humans. Armed only with a remote control, Delgado watched the massive bull paw the earth, lower its head, and charge right at him. Just before the bull reached him, Delgado pressed a button on the remote control that caused a wire, called an electrode, in the bull’s brain to deliver a tiny trickle of electricity. The bull stopped cold. When Delgado electrically stimulated another part of the bull’s brain, the animal turned to the right and calmly trotted away. Repeated stimulations rendered the bull docile for several minutes (Marzullo, 2017). Other bulls responded differently to brain stimulation, depending on the brain region targeted. One animal produced a single “moo” for every button press—a hundred times in a row. Delgado also electrically stimulated electrodes in the brains of people, in an attempt to pinpoint the cause of a neurological disorder. Depending on which part of the brain was stimulated, patients might suddenly become anxious or angry (Delgado, 1969). In this #MeToo era, probably the creepiest response elicited by one brain stimulation site was in women who suddenly began flirting with their male interviewer, showing romantic interest. Yet as soon as the electrical stimulation of their brains stopped, the women returned to their usual reserved behavior. Why does a tiny bit of electrical stimulation in the brain produce such profound changes in mood and behavior? Because neurons normally use electrical signals to sum up vast amounts of information. When Delgado electrically stimulated the brain, he was triggering those normal electrical signals in a very abnormal way, with sometimes startling results. To understand how even a tiny electrical charge to the brain can so dramatically affect the mind, we need to understand how electrical signaling works in the brain. Neurophysiology is the study of the specialized life processes that allow neurons to use chemical and electrical signals to process and transmit information. In this chapter we’ll study the electrical processes at work within a neuron; in Chapter 3 we’ll look at the chemical signals that pass between neurons. We’ll see that brain function is an alternating series of electrical signals within neurons and of chemical signals between neurons. For example, a doctor may use a small rubber mallet to strike just below your knee and watch your leg kick upward in what is known as the knee-jerk reflex. Simple as it appears, a lot happens during this test. First, sensory neurons in the muscle detect the hammer tap and send a rapid electrical signal along their axons to your spinal cord. That rapid electrical signal along the axons from knee to spinal cord is called an action potential, which is a major topic in this chapter. We’ll discuss the knee-jerk reflex in detail later in the chapter. Hold It Right There! Dr. José Delgado stops a bull in the middle of a charge, using the remote control in his hand. 
2.1 Electrical Signals Are the Vocabulary of the Nervous System The Road Ahead The first section of this chapter explains how neurons use electrical forces to process information. Reading this material should enable you to: 2.1.1 Identify the two physical forces that make neurons more negatively charged inside than outside. 2.1.2 Understand the changes in a neuron’s membrane that produce a large electrical signal called an action potential. 2.1.3 Explain the changes in channels and movement of ions that underlie the action potential. 2.1.4 Understand how the action potential spreads down the length of an axon. 2.1.5 Understand how each neuron uses these electrical signals to integrate information from other neurons. Like all living cells, neurons are more negative on the inside than on the outside, so we say they are polarized, meaning there is a difference in electrical charge between the inside and outside of the cell. Let’s consider a neuron at rest, neither perturbed by other neurons nor producing its own signals. Of the many ions (electrically charged molecules) that a neuron contains, a majority are anions (“ANN-eye-ons”; negatively charged ions), including large protein anions that cannot exit the cell. The rest are cations (“CAT-eye-ons”; positively charged ions). (It may help you to remember that the letter t, which occurs in the word cation, is shaped a bit like a plus sign, +.) All of these ions are dissolved in the intracellular fluid inside the cell and the extracellular fluid outside the cell membrane. If we insert a fine microelectrode into the interior of a neuron and place another electrode in the extracellular fluid and take a reading (as in FIGURE 2.1), we find that the inside of the neuron is more negative than the fluid around it. Specifically, a neuron at rest exhibits a characteristic resting potential (an electrical difference across the membrane) of about –50 to –80 thousandths of a volt, or millivolts (mV) (the negative sign indicates that the cell’s interior is more negative than the outside). To understand how this negative membrane potential comes about, we have to consider some special properties of the cell membrane, as well as two forces that drive ions across it. FIGURE 2.1 Measuring the Resting Potential The cell membrane is a double layer of fatty molecules studded with many sorts of specialized proteins. One important type of membrane-spanning protein is the ion channel, a tubelike pore that allows ions of a specific type to pass through the membrane (FIGURE 2.2). As we’ll see later, some types of ion channels are gated: they can open and close rapidly in response to various influences. But some ion channels stay open all the time, and the cell membrane of a neuron contains many such channels that selectively allow potassium ions (K+) to cross the membrane, but not sodium ions (Na+). Because it is studded with these K+ channels, we say that the cell membrane of a neuron exhibits selective permeability, allowing some things to pass through, but not others. The membrane allows K+ ions, but not Na+ ions, to enter or exit the cell fairly freely. FIGURE 2.2 The Distribution of Ions Inside and Outside a Neuron The resting potential of the neuron reflects a balancing act between two opposing processes that drive K+ ions in and out of the neuron.
The first of these is diffusion (FIGURE 2.3A), which is the tendency for molecules of a substance to spread from regions of high concentration to regions of low concentration. For example, when placed in a glass of water, the molecules in a drop of food coloring will tend to spread from the drop out into the rest of the water, where they are less concentrated. So we say that molecules tend to “move down their concentration gradient” until they are evenly distributed. If a selectively permeable membrane divides the fluid, particles that can pass through the membrane, such as K+, will diffuse across until they are equally concentrated on both sides. Other ions, unable to cross the membrane, will remain concentrated on one side (FIGURE 2.3B). FIGURE 2.3 Ionic Forces Underlying Electrical Signaling in Neurons The second force at work is electrostatic pressure, which arises from the distribution of electrical charges rather than the distribution of molecules. Charged particles exert electrical force on one another: like charges repel, and opposite charges attract (FIGURE 2.3C). Positively charged cations like K+ are thus attracted to the negatively charged interior of the cell; conversely, anions are repelled by the cell interior and so tend to exit to the extracellular fluid. Now let’s consider the situation across a neuron’s cell membrane. Much of the energy consumed by a neuron goes into operating specialized membrane proteins called sodium-potassium pumps that pump three Na+ ions out of the cell for every two K+ ions pumped in (FIGURE 2.4A). This action results in a buildup of K+ ions inside the cell (and reduces Na+ inside the cell), but as we explained earlier, the membrane is selectively permeable to K+ ions (but not Na+ ions). Therefore, K+ ions can leave the interior, moving down their concentration gradient and causing a net buildup of negative charges inside the cell (FIGURE 2.4B). As negative charge builds up inside the cell, it begins to exert electrostatic pressure to pull positively charged K+ ions back inside. Eventually the opposing forces exerted by the K+ concentration gradient and by electrostatic pressure reach the equilibrium potential, the electrical charge that exactly balances the concentration gradient: any further movement of K+ ions into the cell (drawn by electrostatic attraction) is matched by the flow of K+ ions out of the cell (moving down their concentration gradient). This point approximates the cell’s resting potential of about –65 mV (values may range between –50 and –80 mV), as FIGURE 2.4C depicts. FIGURE 2.4 The Ionic Basis of the Resting Potential The resting potential of a neuron provides a baseline level of polarization found in all cells. But unlike most other cells, neurons routinely undergo a brief but radical change in polarization, sending an electrical signal from one end of the neuron to the other, as we’ll discuss next. A threshold amount of depolarization triggers an action potential Action potentials are very brief but large changes in the resting membrane potential that arise in the initial segment of the axon, just after the axon hillock (the cone-shaped region where the axon emerges from the cell body; see Figure 1.4A), and then move rapidly down the axon.
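Before turning to how those action potentials are triggered, it is worth seeing where resting-potential numbers like –65 mV come from. The equilibrium potential described above can be estimated with the Nernst equation, a standard result from electrochemistry that the chapter itself does not present; the concentration values below are illustrative (roughly those measured in the squid giant axon), not figures taken from this text.

```latex
E_{\mathrm{K}} \;=\; \frac{RT}{zF}\,\ln\frac{[\mathrm{K}^{+}]_{\mathrm{outside}}}{[\mathrm{K}^{+}]_{\mathrm{inside}}}
\;\approx\; 26.7\ \mathrm{mV} \times \ln\frac{20\ \mathrm{mM}}{400\ \mathrm{mM}} \;\approx\; -80\ \mathrm{mV}
```

Here R is the gas constant, T the absolute temperature, F Faraday’s constant, and z the ion’s charge (+1 for K+); at body temperature RT/F is about 26.7 mV. The result, about –80 mV, sits within the –50 to –80 mV range of resting potentials quoted earlier; the exact value depends on the concentrations on each side of the membrane, which differ across cell types.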
The information that a neuron sends to other cells is encoded in patterns of these action potentials, so we need to understand their properties—where they come from, how they race down the axon, and how they send information across synapses to other cells. Let’s turn first to the creation of the action potential. Two concepts are central to understanding how action potentials are triggered. Hyperpolarization is an increase in membrane potential (i.e., the neuron becomes even more negative on the inside, relative to the outside). So if the neuron already has a resting potential of, say, –65 mV, hyperpolarization makes it even farther from zero, maybe –70 mV. Depolarization is the reverse, referring to a decrease in membrane potential. The depolarization of a neuron from a resting potential of –65 mV to, say, –50 mV makes the inside of the neuron more like the outside. In other words, depolarization of a neuron brings its membrane potential closer to zero. Let’s use an apparatus to apply hyperpolarizing and depolarizing stimuli to a neuron, via electrodes. (Later we’ll talk about how synapses produce similar hyperpolarizations and depolarizations.) Applying a hyperpolarizing stimulus to the membrane produces an immediate response that passively mirrors the stimulus pulse (FIGURES 2.5A and B). The greater the stimulus, the greater the response, so these changes in the neuron’s potential are graded responses. FIGU R E 2 . 5 The Effects of Hyperpolarizing and Depolarizing Stimuli on a Neuron View larger image If we measured the membrane response at locations farther and farther away from the stimulus location, we would see another way in which the membrane response seems passive. Like the ripples spreading from a pebble dropped in a pond, these graded local potentials across the membrane get smaller as they spread away from the point of stimulation (see Figure 2.5B bottom). Up to a point, the application of depolarizing pulses to the membrane follows the same pattern as for hyperpolarizing stimuli, producing local, graded responses. However, the situation changes suddenly if the stimulus depolarizes the axon to –40 mV or so (the exact value varies slightly among neurons). At this point, known as the threshold, a sudden and brief (0.5- to 2.0- millisecond) response—the action potential, sometimes referred to as a spike because of its shape—is provoked (FIGURE 2.5C). An action potential is a rapid reversal of the membrane potential that momentarily makes the inside of the neuron positive with respect to the outside. Unlike the passive graded potentials that we have been discussing, the action potential is actively reproduced (or propagated) down the axon, through mechanisms that we’ll discuss shortly. Applying strong stimuli to produce depolarizations that far exceed the neuron’s threshold reveals another important property of action potentials: larger depolarizations do not produce larger action potentials. In other words, the size (or amplitude) of the action potential is independent of stimulus size. This characteristic is referred to as the all-or-none property of the action potential: either it fires at its full amplitude, or it doesn’t fire at all. The action potential does not diminish as it spreads down the axon (see Figure 2.5C). Neurons encode information by changes in the number of action potentials rather than in their amplitude. With stronger stimuli, more action potentials are produced, but the size of each action potential remains the same (see Figure 2.5C, right). 
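The all-or-none principle and the idea that stimulus strength is encoded in the number of action potentials, not their size, can be summarized in a few lines of code. This is a conceptual sketch only; the threshold, spike amplitude, and the mapping from stimulus strength to spike count are made-up illustrative values, not measurements.

```python
THRESHOLD_MV = -40    # approximate threshold mentioned in the text
REST_MV = -65         # typical resting potential
SPIKE_PEAK_MV = 40    # peak of the action potential, the same for every spike

def response_to(depolarized_to_mv, stimulus_strength):
    """Return (spike_amplitude_mv, spike_count) for a stimulus.

    Below threshold: only a graded local potential, no spikes.
    At or above threshold: full-size spikes; stronger stimuli change
    how MANY spikes occur, never how BIG each spike is.
    """
    if depolarized_to_mv < THRESHOLD_MV:
        return 0, 0                               # graded response only, no action potential
    spike_count = max(1, int(stimulus_strength))  # illustrative: stronger stimulus -> more spikes
    return SPIKE_PEAK_MV - REST_MV, spike_count   # amplitude is identical every time

print(response_to(-50, 3))   # subthreshold: (0, 0)
print(response_to(-38, 1))   # just past threshold: (105, 1)
print(response_to(-30, 5))   # strong stimulus: (105, 5) -- same amplitude, more spikes
```

However strong the stimulus, the first value returned never changes; only the count does, which is the coding scheme described above.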
A closer look at the form of the action potential shows that the return to baseline membrane potential is not simple. Many axons exhibit small potential changes immediately following the spike; these changes, called afterpotentials (see Figure 2.5C), reflect the movement of ions in and out of the cell, which we take up next. Ionic mechanisms underlie the action potential What events explain the action potential? The action potential is created by the sudden movement of Na+ ions into the axon (Nicholls et al., 2021). At its peak, the action potential reaches about +40 mV, approaching the equilibrium potential for Na+, when the concentration gradient pushing Na+ ions into the cell would be exactly balanced by the positive charge pushing them out. The action potential thus involves a rapid shift in membrane properties, switching suddenly from the potassium-dependent resting state to a primarily sodium-dependent active state and then swiftly returning to the resting state. This shift is accomplished through the actions of a very special kind of ion channel: the voltage-gated Na+ channel. Like other ion channels, this channel is a tubular, membrane-spanning protein, but its central Na+-selective pore is gated. The gate is ordinarily closed. But if we electrically stimulate the neuron, or if synapses affect the neuron in ways we’ll describe later, then the axon may be depolarized. If the axon is depolarized enough to reach threshold levels, the channel’s shape changes, opening the “gate” to allow Na+ ions through for a short while. This tiny protein molecule, the voltage-gated Na+ channel, is really a quite complicated machine. It monitors the axon’s membrane potential, and at threshold the channel changes its shape to open the pore, shutting down again just a millisecond later. The channel then “remembers” that it was recently open and refuses to open again for a short time. These properties of the voltage-gated Na+ channel are responsible for the characteristics of the action potential. You might wonder whether the repeated inrush of Na+ ions would allow them to build up, affecting the cell’s resting potential. In fact, relatively few Na+ ions need to enter to change the membrane potential, and the outgoing K+ ions quickly restore the resting potential. In the long run, the sodium-potassium pump enforces the concentrations of ions that maintain the resting potential. The voltage-gated Na+ channels stay open for a little less than a millisecond, and then they automatically close again. By this time, the membrane potential has shot up to about +40 mV. Positive charges inside the nerve cell start to push K+ ions out, aided by the opening of additional voltage-gated K+ channels that let lots of K+ ions rush out quickly, restoring the resting potential. Consider what happens when a patch of axonal membrane depolarizes. As long as the depolarization is below threshold, Na+ channels remain closed. But when the depolarization reaches threshold, a few Na+ channels open at first, allowing a few ions to start entering the neuron. The positive charges of those ions depolarize the membrane even further, opening still more Na+ channels. Thus, the process accelerates until the barriers are removed and Na+ ions rush in (FIGURE 2.6). FIGURE 2.6 Voltage-Gated Sodium Channels Produce the Action Potential Applying very strong stimuli reveals another important property of axonal membranes.
As we bombard the axon with ever-stronger stimuli, an upper limit to the frequency of action potentials becomes apparent at about 1200 spikes per second. (Many neurons have even slower maximum rates of response.) Similarly, applying pairs of stimuli that are spaced closer and closer together reveals a related phenomenon: beyond a certain point, only the first stimulus is able to elicit an action potential. The axonal membrane is said to be refractory (unresponsive) to the second stimulus. Refractoriness has two phases: During the absolute refractory phase, a brief period immediately following the production of an action potential, no amount of stimulation can induce another action potential, because the voltage-gated Na+ channels can’t respond (in Figure 2.6, see the brackets above the graph, as well as step 4). The absolute phase is followed by a period of reduced sensitivity, the relative refractory phase, during which only strong stimulation can depolarize the axon to threshold to produce another action potential. The neuron is relatively refractory because K+ ions are still flowing out, so the cell is temporarily hyperpolarized after firing an action potential (see Figure 2.6, step 4). The overall duration of the refractory phase is what determines a neuron’s maximal rate of firing. In general, the transmission of action potentials is limited to axons. Cell bodies and dendrites usually have few voltage-gated Na+ channels, so they do not conduct action potentials. The ion channels on the cell body and dendrites are stimulated chemically at synapses, as we’ll discuss later in this chapter. Because the axon has many such channels, an action potential that occurs at the origin of the axon regenerates itself down the length of the axon, as we discuss next. How’s It Going? 1. What does it mean if a neuron is depolarized or hyperpolarized, and which action brings the cell closer to threshold? 2. Describe how the polarity of a neuron changes during the phases of an action potential. 3. How does the flow of ions account for that sequence of changes in electrical potential? 4. What mechanisms underlie the two phases of the refractory period? Action potentials are actively propagated along the axon Now that we’ve explored how voltage-gated channels underlie action potentials, we can turn to the question of how action potentials spread down the axon—another function for which voltage-gated channels are crucial. Consider an experimental setup like the one pictured in FIGURE 2.7: recording electrodes are positioned along the length of the axon, allowing us to record an action potential at various points on the axon. Recordings like this show that an action potential begun at the axon hillock spreads in a sort of chain reaction down the length of the axon. FIGURE 2.7 Propagation of the Action Potential How does the action potential travel? It is important to understand that the action potential is regenerated along the length of the axon. Remember, the action potential is a spike of depolarizing electrical activity (with a peak of about +40 mV), so it strongly depolarizes the next adjacent axon segment. Because this adjacent axon segment is similarly covered with voltage-gated Na+ channels, the depolarization immediately creates a new action potential, which in turn depolarizes the next patch of membrane, which generates yet another action potential, and so on all down the length of the axon (FIGURE 2.8A).
An analogy is the spread of fire along a row of closely spaced match heads. When one match is lit, its heat is enough to ignite the next match, and so on along the row. Voltage-gated Na+ channels open when the axon is depolarized to threshold. In turn, the influx of Na+ ions—the movement of positive charges into the axon—depolarizes the adjacent segment of axonal membrane and therefore opens new gates for the movement of Na+ ions. FIGURE 2.8 Conduction along Unmyelinated versus Myelinated Axons The axon normally conducts action potentials in only one direction—from the axon hillock toward the axon terminals—because, as it progresses along the axon, the action potential leaves in its wake a stretch of refractory membrane (see Figure 2.8A). The action potential does not spread back over the axon hillock and the cell body and dendrites, because the membranes there have too few voltage-gated Na+ channels to be able to produce an action potential. Many of the characteristics of action potentials have been whimsically likened to the action of a toilet, as BOX 2.1 explores. BOX 2.1 How Is an Axon Like a Toilet? It might help you to remember basic facts about action potentials if you consider how much they resemble a flushing toilet. For example, if you gently push the lever on a toilet, nothing much happens. As you gradually increase the force you apply to the lever, you will eventually find the threshold—the amount of force that is just enough to trigger a flush (step 1 of the Figure). Likewise, the neuron has a threshold—the amount of depolarization that is just enough to trigger an action potential at the start of an axon. Once you’re past the threshold, it doesn’t matter how hard you pushed the (traditional style) toilet lever; the flush will always be the same. Similarly, once the neuron is pushed past threshold, the action potential will be the same. This is the all-or-none property of action potentials (step 2). After a toilet has flushed, it takes a while (about a minute) before it can flush again (step 3). Likewise, after a neuron fires, it takes a while (about a millisecond) before it can fire again. This is the neuron’s refractory period. Why do neurons have a refractory period for action potentials? As you may remember, the voltage-gated sodium channels always slam shut for a while after they’ve opened, no matter what the membrane potential is. Until that time is up, the sodium channels won’t open again. If the situation is urgent, we can flush our toilet about 60 times per hour or fire our neurons about 1000 times per second. (If things are really urgent, we might do both.) Also notice that when a properly working toilet flushes, the water always goes in the same direction (no one wants a toilet that sometimes flushes backward!) (step 4). Likewise, the neuron’s action potential goes in only one direction down the axon, from the end attached to the cell body to the axon terminals. Of course, neurons are different from toilets in many ways. The outflow of a toilet goes only to the single sewer line leaving a house, but an action potential may flow down many axon branches, to communicate with hundreds of other neurons. Voltage-gated sodium channels on the axon branches ensure that the action potential is just as large in each branch, so it’s not diminished by spreading out among branches.
A toilet has only one lever, but each neuron has hundreds or thousands of synapses, and by producing different sorts of local, graded potentials, some synapses make the neuron more likely to reach threshold, while others make it less likely. If we record the speed of action potentials along axons that differ in diameter, we see that conduction velocity varies with the diameter of the axon. Larger axons allow the depolarization to spread faster through the interior. In mammals, the conduction velocity in large fibers may be as fast as 150 meters per second. (We will discuss axon diameter and conduction velocity again in Chapter 5.) Although not as fast as the speed of light (as was once believed), neural conduction can nevertheless be very fast: over 300 miles per hour. This relatively high rate of conduction ensures rapid sensory and motor processing. The fastest conduction velocities require more than just large axons. Myelin sheathing also greatly speeds conduction. As we described in Chapter 1, the myelin sheath is provided by glial cells. This sheath surrounding the axon is interrupted by nodes of Ranvier, small gaps spaced about every millimeter along the axon (see Figure 1.5A). Because the myelin insulation resists the flow of ions across the membrane, the action potential “jumps” from node to node. This process is called saltatory conduction (from the Latin saltare, “to jump”) (FIGURE 2.8B). The evolution of rapid saltatory conduction in vertebrates has given them a major behavioral advantage over the invertebrates, whose axons are unmyelinated and thus slower in conduction. Multiple sclerosis (MS) is a disease in which myelin is compromised, with highly variable effects on brain function, as described in Signs & Symptoms, next. SIGNS & SYMPTOMS Multiple Sclerosis Most cases of multiple sclerosis (MS) are due to the body’s immune system generating antibodies that attack one or more molecules in myelin. If the myelin is damaged enough, then saltatory conduction of action potentials is disrupted, throwing off the brain’s timing in coordinating behavior and interpreting sensory input (FIGURE 2.9). MS can present with a bewildering variety of motor and/or sensory symptoms, depending on where, exactly, myelin damage occurs. For many people, the first symptoms they notice are blurred vision or poor color perception. For others, the first symptoms are unexpected tingling sensations, or a difficulty coordinating their walking, with the feeling of stiff legs. Eventually, almost all MS patients experience fatigue. The various symptoms wax and wane for reasons no one understands. While MS symptoms generally worsen over time, the rate of change varies a great deal across patients, making it impossible to predict how the disease might progress. FIGURE 2.9 MS Impairs Axonal Conduction There is currently no cure for MS, but there are medicines to manage the symptoms, mostly by interfering with the immune system to curtail myelin damage. It’s also important to get physical therapy to learn how to maintain normal activity despite the symptoms. Smoking increases the risk of MS. It is also more common in women than in men. Symptoms tend to be reduced during pregnancy when estrogen levels are high (Voskuhl and Momtazee, 2017), so there is growing interest in whether hormones might offer an effective treatment. How’s It Going? 1. How is the action potential propagated along the axon? 2.
What factor causes saltatory conduction, and why does it speed propagation of the action potential? 3. Why do action potentials move only away from the cell body? 4. What is the underlying cause of multiple sclerosis, and what are some symptoms of the disease? Synapses cause local changes in the postsynaptic membrane potential When the action potential reaches the end of an axon, it causes the axon to release a chemical, called a neurotransmitter (or transmitter), into the synapse. We will discuss the many different types of transmitters in detail in Chapter 4. For now, what you need to know is that when an axon releases neurotransmitter molecules into a synapse, they briefly alter the membrane potential of the other cell. Because information is moving from the axon to the target cell on the other side of the synapse, we say the axon is from the presynaptic cell, and the target neuron on the other side of the synapse is the postsynaptic cell. The brief changes in the membrane potential of the postsynaptic cell in response to neurotransmitter are called, naturally enough, postsynaptic potentials. A given neuron, receiving synapses from many other cells, is subject to hundreds or thousands of postsynaptic potentials. When added together, this massive array of local potentials determines whether the axon hillock’s membrane potential will reach threshold and therefore trigger an action potential. The nervous system employs electrical synapses too, but the vast majority of synapses use neurotransmitters to produce postsynaptic potentials. We can study postsynaptic potentials with a setup like that shown in FIGURE 2.10. This setup allows us to compare the effects of excitatory versus inhibitory synapses on the local membrane potential of a postsynaptic cell. The responses of the presynaptic and postsynaptic cells are shown on similar graphs in Figure 2.10 for easy comparison of their timing. It is important to remember that excitatory and inhibitory neurons get their names from their actions on postsynaptic neurons, not from their effects on behavior. FIGURE 2.10 Recording Postsynaptic Potentials Stimulation of the excitatory presynaptic neuron (red in Figure 2.10) causes it to produce an all-or-none action potential that spreads to the end of the axon, releasing transmitter. After a brief delay, the postsynaptic cell (yellow) displays a small local depolarization, as channels open to let positive ions in. This postsynaptic membrane depolarization is known as an excitatory postsynaptic potential (EPSP) because it pushes the postsynaptic cell a little closer to the threshold for an action potential. The action potential of the inhibitory presynaptic neuron (blue in Figure 2.10) looks exactly like that of the excitatory presynaptic neuron; all neurons use the same kind of action potential. But the effect on the postsynaptic side is quite different. When the inhibitory presynaptic neuron is activated, the postsynaptic membrane potential becomes even more negative, or hyperpolarized. This hyperpolarization moves the cell membrane potential away from threshold—it decreases the probability that the neuron will fire an action potential—so it is called an inhibitory postsynaptic potential (IPSP). Usually IPSPs result from the opening of channels that permit chloride ions (Cl−) to enter the cell. Because Cl− ions are much more concentrated outside the cell than inside (see Figure 2.2), they rush into the cell, making its membrane potential more negative.
What determines whether a synapse excites or inhibits the postsynaptic cell? One factor is the particular neurotransmitter released by the presynaptic cell. Some transmitters typically generate an EPSP in the postsynaptic cells; others typically generate an IPSP. But the same neurotransmitter can be excitatory at one synapse and inhibitory at another, depending on what sort of receptor the postsynaptic cell possesses. So in the end, whether a neuron fires an action potential at any given moment is decided by the balance between the number of excitatory and the number of inhibitory signals that it is receiving, and it receives many signals of both types at all times.

Now that you know more about the parts of neurons and how they communicate, we summarize the differences between axons (which send information via action potentials) and dendrites (which receive information from synapses) in TABLE 2.1.

TABLE 2.1 Comparing Axons and Dendrites
Property | Axon | Dendrite
Size | Thin, uniform | Thick, variable
Number per neuron | One (but may have branches) | Many
Information flow | Away from cell body | Into cell body
Voltage changes | All-or-none | Graded, variable

Spatial summation and temporal summation integrate synaptic inputs

Synaptic transmission is an impressive process, but complex behavior requires more than the simple arrival of signals across synapses. Neurons must also be able to integrate the messages they receive. In other words, they perform information processing—by using a sort of neural algebra, in which each nerve cell adds and subtracts the many inputs it receives from other neurons. As we’ll see next, this is possible because of the characteristics of synaptic inputs, the way in which the neuron integrates the postsynaptic potentials, and the trigger mechanism that determines whether a neuron will fire an action potential.

We’ve seen that postsynaptic potentials are caused by transmitter chemicals that can be either depolarizing (excitatory) or hyperpolarizing (inhibitory). From their points of origin on the dendrites and cell body, these graded EPSPs and IPSPs spread passively over the postsynaptic neuron, decreasing in strength over time and distance. Whether the postsynaptic neuron will fire depends on whether a depolarization exceeding threshold reaches the axon hillock, triggering an action potential. If many EPSPs are received, the axon may reach threshold and fire. But if both EPSPs and IPSPs arrive at the axon hillock, they partially cancel each other. Thus, the net effect is the difference between the two: the neuron subtracts the IPSPs from the EPSPs.

Simple arithmetic, right? Well, yes, summed EPSPs and IPSPs do tend to cancel each other out. But because postsynaptic potentials spread passively and dissipate as they cross the cell membrane, the resulting sum is also influenced by distance. For example, EPSPs from synapses close to the axon hillock will produce a larger effect there than will EPSPs from farther away. The summation of potentials originating from different physical locations across the cell body is called spatial summation. Only if the overall sum of all the potentials—both EPSPs and IPSPs—is sufficient to depolarize the axon hillock to threshold will an action potential be triggered (FIGURE 2.11A). Usually it takes excitatory messages from many presynaptic neurons to cause a postsynaptic neuron to fire an action potential.
FIGURE 2.11 Spatial versus Temporal Summation
View larger image

Postsynaptic effects that are not absolutely simultaneous can also be summed, because the postsynaptic potentials last a few milliseconds before fading away. The closer they are in time, the greater is the overlap and the more complete is the summation, which in this case is called temporal summation. Temporal summation is easily understood if you imagine a neuron with only one input. If EPSPs arrive one right after the other, they sum, and the postsynaptic cell eventually reaches threshold and produces an action potential (FIGURE 2.11B). But these graded potentials fade quickly, so if too much time passes between successive EPSPs, they will never sum and no action potentials will be triggered.

It should now be clear that although action potentials are all-or-none phenomena, the postsynaptic effect they produce is graded in size and determined by the processing of many inputs occurring close together in time. The membrane potential at the axon hillock thus reflects the moment-to-moment integration of all the neuron’s inputs, which the axon hillock encodes into action potentials.

Dendrites add to the story of neuronal integration. A vast number of synaptic inputs, arrayed across the dendrites and cell body, can induce postsynaptic potentials. So dendrites expand the receptive surface of the neuron and increase the amount of input the neuron can handle. All other things being equal, the farther out on a dendrite a potential occurs, the less effect it should have at the axon, because the potential decreases in size as it passively spreads. When the potential arises at a dendritic spine (see Figure 1.4), its effect is even smaller because it has to spread down the shaft of the spine. Thus, information arriving at various parts of the neuron is weighted, in terms of the distance to the axon hillock and the path resistance along the way.

TABLE 2.2 summarizes the many properties of action potentials, EPSPs, and IPSPs, noting the similarities and differences among the three kinds of neural potentials.

TABLE 2.2 Characteristics of Electrical Signals of Nerve Cells
Type of signal | Action potential | Excitatory postsynaptic potential (EPSP) | Inhibitory postsynaptic potential (IPSP)
Signaling role | Conduction along an axon | Transmission between neurons | Transmission between neurons
Typical duration (ms) | 1–2 | 10–100 | 10–100
Amplitude | Overshooting, 100 mV | Depolarizing, from less than 1 to more than 20 mV | Hyperpolarizing, from less than 1 to about 15 mV
Character | All-or-none, digital | Graded, analog | Graded, analog
Mode of propagation | Actively propagated, regenerative | Local, passive spread | Local, passive spread
Ion channel opening | First Na+, then K+, in different channels | Na+, K+ | Cl–, K+
Channel sensitive to | Voltage (depolarization) | Chemical (neurotransmitter) | Chemical (neurotransmitter)
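The role of timing in temporal summation can be made concrete with a small simulation. In the Python sketch below, each EPSP adds a fixed bump of depolarization that then decays exponentially; the amplitude, time constant, and threshold are invented round numbers chosen only to echo the ranges in Table 2.2, not measured values.

```python
import math

# Toy model of temporal summation at the axon hillock.
# Assumed illustrative values: resting potential -65 mV, threshold -50 mV,
# each EPSP adds 3 mV of depolarization that decays with a ~15 ms time constant.

RESTING_MV, THRESHOLD_MV = -65.0, -50.0
EPSP_MV, TAU_MS = 3.0, 15.0

def peak_after_train(interval_ms, n_epsps=10):
    """Membrane potential just after the last EPSP in a regularly spaced train."""
    depolarization = 0.0
    for _ in range(n_epsps):
        depolarization = depolarization * math.exp(-interval_ms / TAU_MS) + EPSP_MV
    return RESTING_MV + depolarization

for interval in [2.0, 10.0, 50.0]:
    v = peak_after_train(interval)
    outcome = "reaches threshold" if v >= THRESHOLD_MV else "decays too much between EPSPs"
    print(f"EPSPs every {interval:4.1f} ms -> peak of about {v:6.1f} mV ({outcome})")
```

With the same ten EPSPs, only the closely spaced train sums enough to reach threshold, which is the essence of temporal summation; spatial summation works the same way except that the attenuation comes from distance rather than time.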
How’s It Going?
1. What are EPSPs and IPSPs?
2. Compare and contrast spatial summation versus temporal summation.
3. Discuss the electrical properties of a neuron that allow it to process information.
4. Where does information enter a neuron, and how does a neuron send information to other cells?

FOOD FOR THOUGHT
Invertebrates do not produce myelin, so they cannot use that method to increase conduction velocity. Without myelin, how could natural selection speed the conduction of axons in important circuits, say for escape behaviors?

2.2 Synaptic Transmission Requires a Sequence of Events

The Road Ahead
This portion of the chapter explains how neurons release chemicals to signal one another. After reading this material, you should be able to:
2.2.1 Identify the sequence of steps that take place when one neuron releases a chemical signal to affect another.
2.2.2 Understand how a variety of chemical signals enables a diversity of neuronal responses to other neurons.
2.2.3 Identify the interactions between neurons and muscles that underlie a simple reflex.

The steps that take place during chemical synaptic transmission are summarized in FIGURE 2.12:
1. The action potential arrives at the presynaptic axon terminal.
2. Voltage-gated calcium channels in the membrane of the axon terminal open, allowing calcium ions (Ca2+) to enter.
3. Ca2+ causes synaptic vesicles filled with neurotransmitter to fuse with the presynaptic membrane and rupture, releasing the transmitter molecules into the synaptic cleft.
4. Transmitter molecules bind to special receptor molecules in the postsynaptic membrane, leading—directly or indirectly—to the opening of ion channels in the postsynaptic membrane. The resulting flow of ions creates a local EPSP or IPSP in the postsynaptic neuron.
5. The IPSPs and EPSPs in the postsynaptic cell spread toward the axon hillock. (If the sum of all the EPSPs and IPSPs ultimately depolarizes the axon hillock enough to reach threshold, an action potential will arise.)
6. Synaptic transmission is rapidly stopped, so the message is brief and accurately reflects the activity of the presynaptic cell.
7. Synaptic transmitter may also activate presynaptic receptors, as a way of monitoring the extent of transmitter release.

FIGURE 2.12 Steps in Transmission at a Chemical Synapse
View larger image

Let’s look at these seven steps in a little more detail.

Action potentials cause the release of transmitter molecules into the synaptic cleft

When an action potential reaches a presynaptic terminal, it causes hundreds of synaptic vesicles near the presynaptic membrane to fuse with the membrane and discharge their contents—molecules of neurotransmitter—into the synaptic cleft (the space between the presynaptic and postsynaptic membranes). The key event in this process is an influx of calcium ions (Ca2+), rather than K+ or Na+, into the axon terminal. These ions enter through voltage-gated Ca2+ channels opening in response to the arrival of an action potential. Synaptic delay is the time needed for Ca2+ to enter the terminal, for the vesicles to fuse with the membrane, for the transmitter to diffuse across the synaptic cleft, and for transmitter molecules to interact with their receptors before the postsynaptic cell responds. The presynaptic terminal normally produces and stores enough transmitter to ensure that it is ready for activity, and although intense activity of the neuron can temporarily deplete the terminal of vesicles, additional filled vesicles are soon produced to replace those that were discharged. The rate of making the transmitter is regulated by enzymes that are manufactured in the neuronal cell body and transported down the axons to the terminals.

Receptor molecules recognize transmitters

The action of a key in a lock is a good analogy for the action of a transmitter on a receptor protein. Just as a particular key can open a door, a molecule of the correct shape, called a ligand (see Chapter 3), can fit into a receptor protein and activate or block it.
So, for example, at synapses where the transmitter is acetylcholine (ACh), the ACh fits into areas called ligand-binding sites in neurotransmitter receptor molecules embedded in the postsynaptic membrane (FIGURE 2.13).

FIGURE 2.13 A Nicotinic Acetylcholine Receptor
View larger image

The nature of the postsynaptic receptors at a synapse determines the effect of the transmitter (see Chapter 3). For example, ACh can function as either an inhibitory or an excitatory neurotransmitter, at different synapses. At excitatory synapses, binding of ACh to one type of receptor opens channels for Na+ and K+ ions. At inhibitory synapses, ACh may act on another type of receptor to open channels that allow Cl– ions to enter, thereby hyperpolarizing the membrane (i.e., making it more negative and so less likely to fire an action potential).

The lock-and-key analogy is strengthened by the observation that various chemicals can fit onto receptor proteins and block the entrance of the key. Some of the preparations used in this research sound like the ingredients for a witches’ brew. As an example, consider some potent poisons that block ACh receptors: curare and bungarotoxin. Curare is an arrowhead poison used by indigenous South Americans. Extracted from a plant, it greatly increases the efficiency of hunting: if the hunter hits any part of the prey, the arrow’s poison soon blocks ACh receptors on muscles, paralyzing the animal. Bungarotoxin, another blocker of ACh receptors, is found in the venom of a snake (Bungarus multicinctus) native to China and Southeast Asia. The chemical nicotine, found in tobacco products, mimics the action of ACh at some synapses, increasing alertness and heart rate. Molecules such as nicotine that act like transmitters at a receptor are called agonists (from the Greek agon, “contest” or “struggle”) of that transmitter. Conversely, molecules that interfere with or prevent the action of a transmitter, like curare, are called antagonists.

Just as there are master keys that fit many different locks, there are submaster keys that fit a certain group of locks, as well as keys that each fit only a single lock. Similarly, each chemical transmitter binds to several different receptor molecules. ACh acts on at least four subtypes of cholinergic receptors. Nicotinic cholinergic receptors—yes, the subtype on which nicotine exerts its effects—are found at synapses on muscles and in autonomic ganglia; it is the blockade of these receptors that causes the paralysis brought on by curare and bungarotoxin. Most nicotinic sites are excitatory, but there are also inhibitory nicotinic synapses. The many “flavors” of receptors for each transmitter have evolved to enable a variety of actions in the nervous system.

The nicotinic ACh receptor resembles a lopsided dumbbell with a tube running down its central axis (see Figure 2.13). The handle of the dumbbell spans the cell membrane, with two sites on the outside that fit ACh molecules (Delgado-Vélez et al., 2021). For the channel to open, ACh molecules must occupy both binding sites. Receptors for some of the synaptic transmitter molecules that we will consider in later chapters, such as gamma-aminobutyric acid (GABA), glycine, and glutamate, are similar. In Chapter 3, we’ll learn about another common type of neurotransmitter receptor that alters the internal chemistry of the postsynaptic cell to either open separate ion channels or trigger longer-lasting changes (see Figure 3.2).
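Two of the points just made, namely that the nicotinic channel opens only when both binding sites are occupied by an agonist and that antagonists like curare occupy a site without activating it, can be captured in a few lines of logic. The sketch below is a deliberately simplified toy model (real receptors also involve binding affinities, kinetics, and desensitization), offered only to make the lock-and-key idea concrete.

```python
# Toy model of a nicotinic ACh receptor with two ligand-binding sites.
# Each site may be empty (None), occupied by an agonist (ACh, nicotine), or occupied
# by an antagonist (curare, bungarotoxin). The pore opens only if BOTH sites hold an agonist.

AGONISTS = {"ACh", "nicotine"}            # activate the receptor, mimicking the transmitter
ANTAGONISTS = {"curare", "bungarotoxin"}  # occupy a binding site without activating it

def channel_open(site1, site2):
    """Return True only when both binding sites are occupied by an agonist."""
    return site1 in AGONISTS and site2 in AGONISTS

print(channel_open("ACh", "ACh"))       # True:  normal transmission at the muscle
print(channel_open("ACh", None))        # False: one site empty, the channel stays shut
print(channel_open("nicotine", "ACh"))  # True:  an agonist can substitute for ACh
print(channel_open("ACh", "curare"))    # False: the antagonist blocks the second site
```

The last line is, in miniature, why curare paralyzes: with one site blocked, no amount of ACh at the other site can open the channel.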
The coordination of different transmitter systems of the brain is incredibly complex. Each subtype of neurotransmitter receptor has a unique pattern of distribution within the brain. Different receptor systems become active at different times in fetal life. The number of any given type of receptor remains plastic in adulthood: not only are there seasonal variations, but many kinds of receptors show a regular daily variation of 50 percent or more in number, affecting the sensitivity of cells to that particular transmitter. Similarly, the numbers of some receptors have been found to vary with the use of drugs. We’ll learn more about these properties of neurotransmitter receptors in Chapter 3. The action of synaptic transmitters is stopped rapidly When a chemical transmitter such as ACh is released into the synaptic cleft, its postsynaptic action is not only prompt but usually very brief as well. It is important that each activation of the synapse be brief in order to maximize how much information can be transmitted. Think of it this way: the worst doorbell in the world is one that, when the button is pushed, rings forever. Such a doorbell would be able to transmit only one piece of information, and only once. But a doorbell that could ring as fast as a thousand times per minute would be able to send a lot of information—Morse code maybe. Likewise, a synapse can signal over a thousand times per second, potentially sending a lot of information (but not by Morse code). Two processes bring transmitter effects to a prompt halt: 1. Degradation Transmitter molecules can be rapidly broken down and thus inactivated by special enzymes—a process known as degradation (step 6a in Figure 2.12). For example, the enzyme that inactivates ACh is acetylcholinesterase (AChE). AChE breaks down ACh very rapidly into products that are recycled (at least in part) to make more ACh in the axon terminal. 2. Reuptake Alternatively, transmitter molecules may be swiftly cleared from the synaptic cleft by being absorbed back into the axon terminal that released them—a process known as reuptake (step 6b in Figure 2.12). Norepinephrine, dopamine, and serotonin are examples of transmitters whose activity is terminated mainly by reuptake. In these cases, special receptors for the transmitter, called transporters, are located on the presynaptic axon terminal and bring the transmitter back inside. Once taken up into the presynaptic terminal, transmitter molecules may be repackaged into newly formed synaptic vesicles to await re-release, conserving the resources for making new transmitter molecules. Malfunction of reuptake mechanisms is suspected as the cause of some kinds of mental illness, such as depression (see Chapter 12). Neural circuits underlie reflexes For simplicity, so far we have focused on the classic axo-dendritic synapses (from axon to dendrite) and axo-somatic synapses (from axon to cell body, or soma). But many nonclassic forms of chemical synapses exist in the nervous system. As the name implies, axo-axonic synapses form on axons, often near the axon terminal, allowing the presynaptic neuron to strongly facilitate or inhibit the activity of the postsynaptic axon terminal. Similarly, neurons may form dendro-dendritic synapses, allowing coordination of their activities (FIGURE 2.14). FIGU R E 2 . 1 4 Different Types of Synaptic Connections In reality, neurons typically summate input from hundreds or even thousands of synapses. 
View larger image Now that we know more about the electrical signaling that takes place within each neuron and the neurotransmitter signaling that goes on between neurons, we can revisit the knee-jerk reflex that we discussed at the start of the chapter (FIGURE 2.15). Note that this reflex is extremely fast: only about 40 milliseconds elapse between the hammer tap and the start of the kick. Several factors account for this speed: (1) both the sensory and the motor axons involved are myelinated and of large diameter, so they conduct action potentials rapidly; (2) the sensory cells synapse directly on the motor neurons; and (3) both the central synapse and the neuromuscular junction are fast synapses. FIGU R E 2 . 1 5 The Knee-Jerk Reflex View larger image This reflex serves as an example of neural processing—electrical signaling within each neuron, alternating with chemical signaling between neurons. Chapter 3 will explain how drugs can interfere with the chemical signaling between neurons. For the final part of this chapter, let’s see how scientists exploit the electrical signaling within neurons to learn more about brain function. How’s It Going? 1. Recount the seven steps in synaptic transmission, including processes that end the signal. 2. What ion must enter the axon terminal to trigger neurotransmitter release? 3. What are agonists and antagonists? 4. Describe how information is processed within neurons by electrical signals yet communicated to other neurons by chemical signals. Food for Thought So far we have talked about only a tiny minority of all the neurotransmitters at work in the brain. Speculate about why natural selection would favor the evolution of so many transmitters. How might evolution of that variety of signals benefit mental function? 2.3 EEGs Measure Gross Electrical Activity of the Human Brain The Road Ahead The final section of the chapter explains how we can exploit electrical signals to monitor and understand brain function. Reading this section should allow you to: 2.3.1 Understand how electroencephalograms (EEGs) work. 2.3.2 Explain the logic of event-related potentials. 2.3.3 Understand the electrical activity underlying the brain disorder called epilepsy. 2.3.4 Appreciate how neurosurgeons discovered important “maps” by electrically stimulating the brain. The electrical activity of millions of cells working together combines to produce electrical potentials large enough that we can detect them with electrodes applied to the surface of the scalp. Recordings of these spontaneous brain potentials (or brain waves), called electroencephalograms (EEGs) (FIGURE 2.16A), can provide useful information about the activity of brain regions during behavioral processes (O’Connell and Kelly, 2021). As we will see in Chapter 10, EEG recordings can distinguish whether a person is asleep or awake. In many countries, EEG activity determines whether someone is legally dead. FIGU R E 2 . 1 6 Gross Potentials of the Human Nervous System View larger image Event-related potentials (ERPs) are EEG responses to a single stimulus, such as a flash of light or a loud sound. Typically, many ERP responses to the same stimulus are averaged to obtain a reliable estimate of brain activity (FIGURE 2.16B). ERPs have very distinctive characteristics of wave shape and time delay (or latency) that reflect the type of stimulus, the state of the participant, and the site of recording (Luck, 2014). 
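The averaging logic behind ERPs is easy to demonstrate with simulated data. In the NumPy sketch below, a small stimulus-locked response is buried in much larger background EEG activity on every trial; the waveform shape, noise level, and trial count are invented for illustration only. Because the noise is random with respect to the stimulus, it averages toward zero across trials while the time-locked response survives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 single-trial EEG epochs of 300 samples each (illustrative numbers).
# Every trial contains the same small stimulus-locked response buried in larger noise.
n_trials, n_samples = 200, 300
t = np.arange(n_samples)
true_erp = 2.0 * np.exp(-((t - 120) ** 2) / (2 * 20.0 ** 2))            # ~2 uV bump near sample 120
trials = true_erp + rng.normal(scale=10.0, size=(n_trials, n_samples))  # ~10 uV background activity

average = trials.mean(axis=0)  # averaging the time-locked epochs is the key step

print(f"Single trial at the response peak:        {trials[0, 120]:6.1f} uV (buried in noise)")
print(f"{n_trials}-trial average at the response peak: {average[120]:6.1f} uV "
      f"(true value {true_erp[120]:.1f} uV)")
```

Averaging N trials shrinks the random noise by roughly a factor of the square root of N, which is why ERP studies routinely collect dozens or hundreds of repetitions of the same stimulus.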
ERPs can also be used to detect hearing problems in babies, evident as reduced or absent ERPs in response to sounds. In Chapter 14 we’ll learn how ERPs are used to study subtler psychological processes, such as attention. EEG recordings can also provide vital information for diagnosing seizure disorders, as we discuss next. Electrical storms in the brain can cause seizures Since the dawn of civilization, people have pondered the causes of epilepsy, a disorder in which seizures lasting for a few seconds or minutes may produce dramatic behavioral changes such as alterations or loss of consciousness and rhythmic convulsions of the body. Worldwide, over 40 million people suffer from epilepsy (GBD 2016 Epilepsy Collaborators, 2019), which we now know to be a disorder of electrical potentials in the brain. In the normal, active brain, electrical activity tends to be desynchronized; that is, different brain regions carry on their functions more or less independently. In contrast, during a seizure there is widespread synchronization of electrical activity: broad stretches of the brain start firing in simultaneous waves, which are evident in the EEGs as an atypical “spike-and-wave” pattern of brain activity. Many factors, such as trauma, injury, or metabolic problems, can predispose brain tissue to produce such synchronized activity, which, once begun, may readily spread from one brain region to others. There are several major categories of seizure disorders. The most severe, with loss of consciousness and rhythmic convulsions, are called tonic-clonic seizures (formerly known as grand mal seizures) and are accompanied by synchronized EEG activity all over the brain (FIGURE 2.17A). In the more subtle simple partial seizures (or absence attacks, formerly known as petit mal seizures), the characteristic spike-and-wave EEG activity is evident for 5–15 seconds at a time (FIGURE 2.17B), sometimes occurring many times per day. The person is unaware of the environment during these periods and later cannot recall events that occurred during the episodes. Behaviorally, people experiencing simple partial seizures show no unusual muscle activity; they just stop what they’re doing and seem to stare into space. FIGU R E 2 . 1 7 Seizure Disorders View larger image Complex partial seizures do not involve the entire brain and thus can produce a wide variety of symptoms, often preceded by an unusual sensation, or aura. In one example, a woman felt an unusual sensation in the abdomen, a sense of foreboding, and tingling in both hands before the seizure spread. At the height of the episode, she was unresponsive and rocked her body back and forth while speaking nonsensically, twisting her left arm, and looking toward the right. Of course, her seemingly random set of behavioral symptoms actually reflected the functions of the particular brain regions activated by the seizure (FIGURE 2.17C); others experiencing seizures would produce a completely different set of behaviors. In some individuals, complex partial seizures may be provoked by stimuli like loud noises or flashing lights. Many seizure disorders can be effectively controlled with the aid of antiepileptic drugs. Although these drugs have a wide variety of neural targets, they tend to selectively reduce the excitability of neurons (Park et al., 2019). There is anticipation that cannabis products such as cannabidiol may control seizures, but so far there have been few controlled trials to determine whether it actually works (Kirkpatrick and O’Callaghan, 2022). 
For now, about a third of cases of epilepsy are not controlled by medication. If the seizures are severe, or happen very often, they may be life-threatening. Individuals living with severe epilepsy may resort to the drastic step of having parts of the brain removed, as we’ll see next. RESEARCHERS AT WORK Surgical Probing of the Brain Revealed a Map of the Body Typically, the electrical activity causing seizures begins in one part of the brain and then spreads to others. So in the twentieth century, neurosurgeons began taking drastic measures to help people with severe epilepsy that did not respond to medication: surgical removal of the part of the brain where the seizures begin. The trick, of course, is to remove the part of the brain where the seizures begin, and only that part. Otherwise the patient might take the risks of surgery and still have seizures, or might have impairment of a vital function, such as verbal or memory skills, if healthy tissue is removed. One way to locate the origin of the seizures is to compare EEG readings from different places on the skull (see Figure 2.17). But this approach gives only a rough idea of where the seizures begin, and it is problematic because the recording must be made when a seizure is actually starting. To improve the success rate of such surgeries, Canadian neurosurgeon Wilder Penfield developed a procedure that, nearly a century later, is still compelling (Foerster and Penfield, 1930). Using only local anesthesia to deaden the pain of cutting the scalp and opening up the skull, Penfield had patients remain awake and alert as he exposed their brain. Then he used electrodes to deliver a tiny electrical stimulation to the surface of the cortex, asking the patient to report the results. One strategy for people whose epileptic seizures were preceded by an aura was to try to find the point where stimulation recreated the aura. In one famous case of a woman whose seizures were preceded by the smell of burnt toast, Penfield was able to find a spot where stimulation caused her to smell burnt toast and, presuming that region was the origin of the seizures, surgically removed it. (The brain itself has no pain receptors, so cutting the cortex didn’t hurt.) Using this refined technique, Penfield was able to cure about half of his patients, and seizures were reduced in another 25 percent. Later, José Delgado also stimulated patients’ brains to seek the origin of seizures, as we discussed at the start of the chapter. Delgado altered the technique by implanting several electrodes temporarily, so the patient could walk around while doctors electrically stimulated different brain regions and observed the results. In Chapter 5 and Chapter 12 we’ll learn that today electrodes are sometimes implanted in the brain as a treatment for other disorders. In his pioneering work, Penfield did more than help his patients. He also made major discoveries about the organization of the human cortex (FIGURE 2.18). By carefully recording the effects of stimulating different regions of the brain, he found that stimulation of occipital cortex often caused the patient to “see” flashes of light. Stimulating another region might cause the person’s thumb to tingle, while stimulation elsewhere might cause the patient’s leg to move. Through these studies, Penfield confirmed that each side of the cortex receives information from, and sends commands to, the opposite side of the body. 
He found that stimulating the postcentral gyrus of the parietal cortex caused patients to experience sensations on various parts of the body in a way that was consistent from one person to another (see Figure 5.9). Just across the central sulcus from each site, in the precentral gyrus, stimulations caused that same part of the body to move (see Figure 5.22). These “maps” of how the various parts of the body are laid out on the cortex (Jasper and Penfield, 1954) have been reproduced in countless textbooks, providing the basis of what is called the homunculus, the “little man” drawn on the surface of the cortex to depict Penfield’s maps, as we will discuss further in Chapter 5. FIGU R E 2 . 1 8 Mapping the Human Brain View larger image These groundbreaking observations taught us that brain function is organized in a map that reproduces body parts. We also learned that the map is distorted, in the sense that parts of the body that are especially sensitive to touch, such as the lips or fingers, are monitored by a relatively large area of cortex compared with, say, the backs of the legs. Penfield’s studies also stimulated a rich store of speculation about the relationship between the workings of the brain and the mind. In a minority of patients, electrical stimulation in some sites would sometimes elicit a memory of, for example, sitting on the porch step and hearing a relative’s voice, or hearing a snatch of music. As we saw at the start of this chapter, other researchers would find that electrical stimulation of the brain could make people think they were attracted to their examiner or could make an angry, murderous bull peaceful and calm. In another case, electrical stimulation of one part of her brain caused a young woman to find whatever was happening around her to be humorous (Fried et al., 1998). These startling observations—showing that electrical stimulation of the brain triggers mental processes—remain a cornerstone of neuroscience, and a tantalizing demonstration that our mind is a result of physical processes at work in the machine we call the brain. Brain Stimulation Surgeon Wilder Penfield electrically stimulating the surface of the exposed brain in an awake patient. View larger image How’s It Going? 1. What are EEGs and ERPs? How have these techniques been useful? 2. What are the three main categories of epileptic seizures, and what are the main characteristics of each type? 3. Describe Penfield’s surgical procedure and what it revealed about organization of the brain. FOOD FOR THOUGHT Being awake and alert while a surgeon opens up your skull and removes part of your brain is a very “meta” experience. If you had to have such surgery and could choose to be awake or anesthetized, which would you choose, and why? RECOMMENDED READING Augustine, G. J., Groh, J. M., Huetell, S. A., et al. (2017). Neuroscience (7th ed.). New York, NY: Oxford University Press. Kandel, E. R., Koester, J. D., Mack, S. H., and Siegelbaum, S. A. (Eds.). (2021). Principles of Neural Science (6th ed.). New York, NY: McGraw-Hill. Nicholls, J. G., Martin, A. R., Fuchs, P. A., et al. (2021). From Neuron to Brain (6th ed.). Sunderland, MA: Oxford University Press/Sinauer. Prat, C. (2022). The Neuroscience of You: How Every Brain Is Different and How to Understand Yours. New York, NY: Penguin/Random House. Sapolsky, R. M. (2018). Behave: The Biology of Humans at Our Best and Worst. New York, NY: Penguin Books. 
VISUAL SUMMARY You should be able to relate each summary to the adjacent illustration, including structures and processes. The online version of this Visual Summary includes links to figures, animations, and activities that will help you consolidate the material. Visual Summary Chapter 2 View larger image LIST OF KEY TERMS absolute refractory phase acetylcholine (ACh) acetylcholinesterase (AChE) action potential afterpotentials agonists all-or-none property anions antagonists aura axo-axonic synapses axo-dendritic synapses axon hillock axo-somatic synapses Bungarotoxin calcium ions (Ca ) cations 2+ cell membrane chloride ions (Cl ) cholinergic Complex partial seizures conduction velocity Curare degradation dendro-dendritic synapses Depolarization diffusion electroencephalograms (EEGs) electrostatic pressure epilepsy equilibrium potential Event-related potentials (ERPs) excitatory postsynaptic potential (EPSP) extracellular fluid Hyperpolarization inhibitory postsynaptic potential (IPSP) intracellular fluid ion channel – ions knee-jerk reflex ligand local potentials microelectrode millivolts (mV) Multiple sclerosis (MS) Myelin Neurophysiology neurotransmitter neurotransmitter receptor nodes of Ranvier postsynaptic postsynaptic potentials potassium ions (K ) presynaptic refractory relative refractory phase resting potential reuptake saltatory conduction + seizures selective permeability simple partial seizures sodium ions (Na ) sodium-potassium pumps spatial summation synaptic cleft Synaptic delay synaptic vesicles temporal summation threshold tonic-clonic seizures transporters voltage-gated Na channel + + CHAPTER 3 The Chemistry of Behavior Neurotransmitters and Neuropharmacology Neil V. Watson Simon Fraser University S. Marc Breedlove Michigan State University Living the Dream As the twentieth century began, scientists knew that neurons were important for brain function, but no one understood how neurons communicated with one another. What happened at all those newly discovered synapses between neurons? Did sparks of electricity pass from cell to cell? Or was some unknown chemical substance involved? Some scientists, nicknamed “sparks,” favored the idea that electrical signals crossed synapses; other scientists, the “soups,” thought neurons released a chemical that flowed across synapses. Otto Loewi was so consumed with the question of neural communication that he even dreamed about it. One night he suddenly awoke, having dreamed of an experiment that could finally answer whether the “soups” or the “sparks” were right. He made a few notes and went back to sleep, only to discover the next day that he couldn’t make any sense of his scribblings from the night before. So when he had the same dream again the following night, he got up and went straight to the lab to do the experiment while it was still fresh in his mind. The result was a discovery that would revolutionize the study of the brain. Throughout the ages, people have experimented with exogenous substances (substances from outside the body) to try to change the functioning of their bodies and brains. Our ancestors sipped, swallowed, and smoked their way to euphoria, calmness, pain relief, and hallucination. They discovered deadly poisons in frogs, miraculous antibiotics in mold, powerful painkillers in poppies, and all the rest of a vast catalog of helpful and harmful substances. By studying the physiological actions of these substances, modern scientists have been able to unlock many mysteries of brain function. 
The preceding chapters showed us that the brain is an electrochemical system. Today we know that, in general, each neuron electrically processes information received through many synapses and then releases a chemical to pass the result of that information processing to the next cell. Specifically, a presynaptic neuron releases an endogenous substance (a substance from inside the body), a chemical called a neurotransmitter. The neurotransmitter then communicates with the postsynaptic cell. As you might have guessed, most drugs that affect behavior do so by meddling with this chemical communication process at millions, or even billions, of synapses. So, we open our discussion with a detailed look at the sequence of electrochemical events in synaptic transmission. 3.1 Synaptic Transmission Is a Complex Electrochemical Process The Road Ahead We begin with a detailed look at the sequence of electrochemical events in synaptic transmission. After reading this section, you should be able to: 3.1.1 Review the neuronal processes leading to the release of neurotransmitter into a synapse. 3.1.2 Explain how receptors capture, recognize, and respond to molecules of neurotransmitter. 3.1.3 Give a general overview of the concept of receptor subtypes and how their existence adds complexity to neural signaling. As we learned in Chapter 2, the typical neuron integrates a variety of inputs and, if sufficiently excited (that is, depolarized), fires a distinctive, brief electrical signal called an action potential that rapidly sweeps down the axon toward the axon terminals, each of which forms the presynaptic side of a synapse. FIGURE 3.1 recaps the events that occur upon the arrival of the action potential. First, because the arrival of the action potential strongly depolarizes the axon terminal, voltage-gated calcium (Ca ) channels in the terminal membrane open. The resulting inflow of Ca ions drives the migration of synaptic vesicles to the nearby presynaptic membrane, where specialized proteins on the walls of the vesicles and corresponding proteins on the synaptic membrane interact and cause the vesicles to release their cargo of molecules of neurotransmitter (or just transmitter) into the synaptic cleft (a process called exocytosis). In Chapter 2 we also saw that following their diffusion across the cleft, neurotransmitter molecules briefly bind to their corresponding neurotransmitter receptors— protein molecules embedded in the postsynaptic membrane that recognize a specific transmitter—which then mediate a response on the postsynaptic side. The neurotransmitter molecules must then be cleared from the synapse, being either (1) broken down by enzymes into simpler chemicals—sometimes with the help of nearby glial cells —or (2) brought back into the presynaptic terminal in a process called reuptake (see Figure 3.1). Reuptake of transmitters relies on specialized proteins, called transporters, that bind molecules of neurotransmitter and conduct them back inside the presynaptic terminal. Once inside, the neurotransmitter molecules can be recycled. 2+ 2+ FIGU R E 3 . 1 A Review of Synaptic Activity EPSP, excitatory postsynaptic potential; IPSP, inhibitory postsynaptic potential. View larger image Neurotransmitter receptors are very selective about the substances that they will respond to: as we saw in Chapter 2, the action of transmitters on receptors is often likened to a key opening a lock. 
Nevertheless, the various neurotransmitter receptors can all be categorized as belonging to one of two general kinds: ionotropic receptors or metabotropic receptors. An ionotropic receptor is really just a fancy ion channel; when bound by a neurotransmitter molecule, an ionotropic receptor quickly changes shape, opening (or closing) its integral ion channel (FIGURE 3.2). The opening (or closing) of channels in the postsynaptic membrane allows more (or fewer) of the channels’ favored ions to flow into or out of the postsynaptic neuron, thus changing the local membrane potential. If the change in the postsynaptic membrane potential is a depolarization, bringing the cell closer to its threshold for producing an action potential, we call it an excitatory synapse. Conversely, if the result is a local hyperpolarization of the postsynaptic cell, making it less likely to produce an action potential, we call it an inhibitory synapse. FIGU R E 3 . 2 The Versatility of Neurotransmitters View larger image But not all receptors contain an ion channel. Receptors belonging to the other major category—the metabotropic receptors—don’t pass ions or other substances through the cell membrane. Instead, they provide a link across the cell membrane to complicated chemical machinery—called G proteins—inside the postsynaptic neuron (see Figure 3.2). When activated, metabotropic receptors alter the inner workings of the postsynaptic cell, using internal chemical signals called second messengers that are activated by the G proteins. This two-step signaling process can cause changes in excitability of the postsynaptic cell, or it can cause other, slower but larger-scale responses. For example, the metabotropic receptor may kick off a chain of chemical reactions that affect gene expression (the use of genes to produce proteins; see the Appendix). Changes in gene expression can have many lasting effects, such as changing the excitability of the postsynaptic neuron, remodeling its connections to other cells, or stimulating the production of more receptors and signaling chemicals. Together, the two major families of transmitter receptors allow not only rapid responses where timing is crucial, but also more complex, integrative, slower behaviors such as emotional responses, social behavior, and so on. Receptors add an important layer of complexity in neural signaling, because any given transmitter may affect various kinds of receptors that differ from one another in structure. This diversity of receptor subtypes is true for both metabotropic receptors and ionotropic receptors. In mammals and almost all other organisms, a gene “superfamily” encodes hundreds of different kinds of G protein–coupled receptors (GPCRs), including the many and various types of metabotropic neurotransmitter receptors that we’ll be seeing again later in the book. In the case of ionotropic receptors, a diverse group of genes encodes the various protein subunits that combine to make up the ion channels at the core of the receptors. The characteristics of each subtype of ionotropic receptor —the specific neurotransmitter it recognizes and the type of ions that it selectively conducts—are determined by the unique combination of subunits that make up the receptor. So, the specific response of any postsynaptic neuron to molecules of neurotransmitter is determined by the particular subtypes of receptors present on the postsynaptic membrane. 
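A compact way to hold onto the idea that the receptor subtype, not the transmitter, determines the postsynaptic response is to imagine the synapse as looking up the (transmitter, receptor) pairing. The toy table below mixes examples discussed in Chapter 2 and later in this chapter; it is a didactic sketch only, not a complete or authoritative catalog, and the effect descriptions are simplified.

```python
# Toy lookup: the same transmitter can have different effects depending on which
# receptor subtype the postsynaptic membrane happens to express.
# (transmitter, receptor subtype) -> (receptor class, simplified postsynaptic effect)
RECEPTOR_EFFECTS = {
    ("glutamate", "AMPA"):       ("ionotropic",   "fast excitation (EPSP)"),
    ("glutamate", "mGluR"):      ("metabotropic", "slower modulation via G proteins"),
    ("GABA",      "GABA_A"):     ("ionotropic",   "fast inhibition (Cl- influx, IPSP)"),
    ("GABA",      "GABA_B"):     ("metabotropic", "slower inhibition"),
    ("ACh",       "nicotinic"):  ("ionotropic",   "fast excitation (e.g., at muscle)"),
    ("ACh",       "muscarinic"): ("metabotropic", "slower excitation or inhibition"),
}

def postsynaptic_response(transmitter, receptor):
    receptor_class, effect = RECEPTOR_EFFECTS[(transmitter, receptor)]
    return f"{transmitter} at a {receptor} receptor ({receptor_class}): {effect}"

print(postsynaptic_response("ACh", "nicotinic"))
print(postsynaptic_response("ACh", "muscarinic"))  # same transmitter, different outcome
```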
Furthermore, the various subtypes of receptors for any particular neurotransmitter may vary widely in their anatomical distribution within the brain (e.g., Beliveau et al., 2017). Shortly we’ll look at the ways in which psychoactive drugs exploit this complexity, but first let’s see how Otto Loewi made his dream come true. RESEARCHERS AT WORK The First Transmitter to Be Discovered Was Acetylcholine Our chapter began with the classic story of Otto Loewi’s dream of an experiment that would reveal the first neurotransmitter. After forgetting the details of the dream the first night, Loewi was quick to get out of bed when the dream returned the following night. He went straight to the lab to conduct the experiment depicted in FIGURE 3.3. Loewi’s first step was to electrically stimulate the vagus nerve in a frog, which he knew would cause its heart to slow down. The question was, Why did the heart slow down? Had electricity jumped from the vagus nerve to the heart, as the “sparks” believed? Or were the “soups” right? Had the nerve released a chemical to slow the heart? FIGU R E 3 . 3 “Soups” versus “Sparks”: The First Neurotransmitter View larger image Loewi’s critical, dream-inspired experiment was to collect the fluid that surrounded the slowing heart. Then he applied that fluid to the heart of another frog. If the first heart had been slowed by electrical signals from the vagus, then the fluid should have no effect on the second heart. But if activation of the vagus caused it to release a chemical that slowed the beating of the heart, then the fluid from the first heart should alter the beating of the second. In fact, the transferred fluid caused the second heart to slow down, providing Loewi with conclusive evidence of chemical neurotransmission (and a nice shiny Nobel Prize, in 1936). The neurotransmitter was later chemically identified as acetylcholine (ACh for short). The “soups” were vindicated. Otto Loewi Following the Nazi annexation of Austria in 1938, Loewi, then a professor at the University of Graz, was forced to flee to the United States, where he resided for the rest of his life. He was photographed in the summer of 1955 by his associate, Dr. Morris Rockstein, at the Woods Hole Marine Biological Laboratory. View larger image How’s It Going? 1. Distinguish between endogenous and exogenous substances, and give a few examples of each. 2. Review the sequence of events that occurs when an action potential arrives at the axon terminal and causes a release of neurotransmitter. Use the following terms in your answer: exocytosis, receptors, ionotropic, metabotropic, reuptake. 3. Describe how the first neurotransmitter was discovered. Why do you think we selected this discovery for this chapter’s Researchers at Work feature? FOOD FOR THOUGHT It seems that evolution favored the emergence of multiple different subtypes of receptors for each of the major neurotransmitters—how and why do you think this occurred? 3.2 Neurotransmitters Differ in Their Chemical Composition and Anatomical Distribution The Road Ahead Next we survey the major families of neurotransmitters and their distribution in the brain. After reading this section, you should be able to: 3.2.1 Identify general properties shared by most neurotransmitters, summarizing the criteria for establishing that a substance acts as a neurotransmitter. 3.2.2 Name the major neurotransmitters, and briefly describe the chemical families of transmitters to which they belong. 
3.2.3 Trace the anatomical distribution of the major transmitters in the brain.
3.2.4 Briefly review the major functions associated with each of the classical transmitters.

In the years since Loewi’s discovery, neuroscientists have agreed on some basic principles for deciding whether a brain chemical qualifies as a classical neurotransmitter. We can conclude that a candidate substance is a transmitter if it meets the following qualifications:
- It can be synthesized by presynaptic neurons and stored in axon terminals.
- It is released when action potentials reach the terminals.
- It is recognized by specific receptors located on the postsynaptic membrane.
- It causes changes in the postsynaptic cell.
- Blocking its release interferes with the ability of the presynaptic cell to affect the postsynaptic cell.

The brain contains many different chemicals that meet these criteria and are therefore considered classic neurotransmitters. Considering the rate at which these substances are being discovered, it would not be surprising if there turned out to be several hundred different neurotransmitters at work at synapses throughout the nervous system. But for now, let’s content ourselves with a look at a few of the best-known neurotransmitter systems in the brain.

TABLE 3.1 summarizes the major neurotransmitters and the chemical families to which they belong. Amino acid neurotransmitters and peptide neurotransmitters (or neuropeptides), as their names suggest, are based on single amino acid molecules or on short chains of amino acids (called peptides), respectively. A different family—the amine neurotransmitters—includes some of the best-known classical transmitters, such as acetylcholine, dopamine, and serotonin. The search for new transmitters has occasionally yielded surprises like the gasotransmitters, gas molecules that dissolve in water (and thus are also called soluble gases) that diffuse between neurons to alter ongoing processes (and defy several of the criteria for classic neurotransmitters that we just listed!).

TABLE 3.1 Some Synaptic Transmitters and Families of Transmitters
Family and subfamily | Transmitter(s)
AMINO ACIDS | Gamma-aminobutyric acid (GABA), glutamate, glycine, histamine
AMINES
  Quaternary amines | Acetylcholine (ACh)
  Monoamines | Catecholamines: Norepinephrine (NE), epinephrine (adrenaline), dopamine (DA); Indoleamines: Serotonin (5-hydroxytryptamine [5-HT]), melatonin
NEUROPEPTIDES
  Opioid peptides | Enkephalins: Met-enkephalin, leu-enkephalin; Endorphins: Beta-endorphin; Dynorphins: Dynorphin A
  Other neuropeptides | Oxytocin, substance P, cholecystokinin (CCK), vasopressin, neuropeptide Y (NPY), hypothalamic releasing hormones
GASES | Nitric oxide, carbon monoxide

The most abundant neurotransmitters are amino acids

A majority of synapses in the brain rely on amino acid transmitters for communication. Of this group, the two best studied are glutamate, the most widespread excitatory transmitter in the brain, and gamma-aminobutyric acid (GABA), the most widespread inhibitory transmitter. Both have wide-ranging effects at synapses throughout the central nervous system, and from an evolutionary perspective they are among the most ancient transmitters.

Glutamate

Glutamatergic (i.e., glutamate-using) synapses employ three subtypes of ionotropic receptors—AMPA, kainate, and NMDA receptors—named for the compounds that selectively activate them. Activation of AMPA receptors, the most plentiful receptors in the brain, has rapid excitatory effects.
NMDA receptors have unique characteristics that suggest they play a central role in memory formation (discussed in Chapter 13). Several additional subtypes of glutamate receptors are metabotropic—which explains their collective name, mGluR—and therefore act more slowly via second-messenger signaling pathways.

GABA

Among the subtypes of receptors for GABA, the GABA_A receptors have received decades of special scrutiny because of their relationship to anxiety relief. GABA_A receptors are ionotropic; when activated, they allow more Cl– ions to flow into the postsynaptic cell, resulting in a rapid-onset local hyperpolarization that inhibits the cell’s activity. Compounds that mimic this action of GABA_A tend to be effective calming agents because they produce a widespread decrease in neural activity. In fact, drugs belonging to the family of benzodiazepines—examples include Xanax (alprazolam) and Ativan (lorazepam)—potently activate GABA_A receptors, thereby decreasing the excitability of neurons and acting to reduce anxiety and panic attacks, block epileptic seizures, and aid relaxation and sleep onset. GABA_B receptors, in contrast, are metabotropic receptors with slower postsynaptic effects (Fritzius et al., 2022); GABA_B-selective drugs may help treat diverse chronic problems such as pain and mood disorders (Pin and Bettler, 2016; Nieto et al., 2022).

Four classical neurotransmitters modulate brain activity

It is possible to stain brain tissue in such a way that only the neurons that make a particular neurotransmitter end up being labeled (see Box 1.1). Studies of brain sections stained in this way have shown that transmitters are found in complex networks of neurons that extend throughout the brain. In FIGURE 3.4, this complicated anatomy is depicted for acetylcholine plus just three of the most famous classical amine transmitters: dopamine, serotonin, and norepinephrine (amines are nitrogen-containing compounds related to ammonia, often derived from an amino acid). Acetylcholine and the amine transmitters have been implicated in many categories of behavior and pathology, so these transmitter systems are major targets for drug development. We’ll encounter them throughout the book. And despite its gnarly appearance, Figure 3.4 is actually a simplification; there are many more transmitters at work than the four we have shown here, and they are arranged in much more complicated networks. Furthermore, we now know that some neurons make and release more than one type of transmitter—a phenomenon known as neurotransmitter co-localization.

FIGURE 3.4 Neurotransmitter Pathways in the Brain
View larger image

A key point to remember is that each of these classical neurotransmitters is carried by a different set of axons, and those axons project to different brain regions. Each type of neurotransmitter is thus talking to a distinct set of brain targets, and there may be overlap as two different transmitters arrive at the same target. How those targets respond depends on which neurotransmitter is being released and which kinds of receptors the target neurons possess (TABLE 3.2).
TABLE 3.2 The Bewildering Multiplicity of Transmitter Receptor Subtypes
Transmitter | Known receptor subtypes | Function
Glutamate | AMPA, kainate, and NMDA receptors (ionotropic); mGluRs (metabotropic glutamate receptors) | Glutamate is the most abundant of all neurotransmitters and the most important excitatory transmitter. Glutamate receptors are crucial for excitatory signals, and NMDA receptors are especially implicated in learning and memory.
Gamma-aminobutyric acid (GABA) | GABA_A (ionotropic) | GABA receptors mediate most of the brain’s inhibitory activity, balancing the excitatory actions of glutamate. GABA_A receptors are inhibitory in many brain regions, reducing excitability and preventing seizure activity.
Gamma-aminobutyric acid (GABA) | GABA_B (metabotropic) | GABA_B receptors are also inhibitory, by a different mechanism.
Acetylcholine (ACh) | Muscarinic receptors (metabotropic) | Both types of receptors are involved in cholinergic transmission in the cortex.
Acetylcholine (ACh) | Nicotinic receptors (ionotropic) | Nicotinic receptors are crucial for muscle contraction.
Dopamine (DA) | D1 through D5 receptors (all metabotropic) | DA receptors are found throughout the forebrain. DA receptors are involved in complex behaviors, including motor function, reward, and higher cognition.
Norepinephrine (NE) | α1, α2, β1, β2, and β3 receptors (all metabotropic) | NE has multiple effects in visceral organs, important in the sympathetic nervous system and fight-or-flight responses. In the brain, NE transmission provides an alerting and arousing function.
Serotonin | 5-HT1 receptor family (5 members) | Different subtypes differ in their distribution in the brain.
Serotonin | 5-HT2 receptor family (3 members) | 5-HT2 receptors may be involved in mood, sleep, and higher cognition.
Serotonin | 5-HT3 through 5-HT7 receptors (all but one subtype [5-HT3] metabotropic) | 5-HT3 receptors are particularly involved in nausea.
Miscellaneous peptides | Many specific receptors for peptides such as opioids (delta, kappa, and mu receptors), cholecystokinin (CCK), neurotensin, neuropeptide Y (NPY), and dozens more (all metabotropic) | Peptide transmitters have many different functions, depending on their anatomical localization. Some important examples include the control of feeding, sexual behaviors, and social functions.

Acetylcholine

We now know that the transmitter discovered by Otto Loewi, acetylcholine (ACh), plays a major role in neurotransmission in the forebrain. Many cholinergic (ACh-containing) neurons are found in nuclei within the basal forebrain. These cholinergic cells project widely in the brain, to sites such as the cerebral cortex, amygdala, and hippocampus (see Figure 3.4). Widespread loss of cholinergic neurons is associated with Alzheimer’s disease, and experimental disruption of cholinergic pathways in rats interferes with learning and memory. There are two families of ACh receptors in the peripheral and central nervous systems: nicotinic and muscarinic receptors (named for drugs that selectively bind to them). The quick-acting, ionotropic nicotinic receptors are excitatory and are the receptors that drive muscle contractions, so drugs that block nicotinic acetylcholine receptors cause widespread paralysis. The slower metabotropic muscarinic receptors can be either excitatory or inhibitory and are crucial in widely varying cognitive processes.

Dopamine

Out of more than 80 billion neurons in the human brain, only about a million synthesize dopamine (DA), but they are critically important for many aspects of behavior. Figure 3.4 shows the paths of the major dopaminergic projections. One of these projections is called the mesostriatal pathway because it originates in the midbrain (mesencephalon) around the substantia nigra and projects axons to the basal ganglia (aka the striatum).
There aren’t all that many neurons in the system—hundreds of thousands—but keep in mind that a single axon can divide to supply thousands of synapses. Those synapses play a crucial role in motor control. When people lose a significant number of mesostriatal dopaminergic neurons, from either exposure to toxins or just old age, they develop the profound movement problems of Parkinson’s disease (described in Chapter 5), including tremors. Several subtypes of DA receptors have been discovered and numbered D1, D2, D3, D4, and D5, in the order of their discovery; these receptors vary widely in their actions, anatomical distributions, and behavioral functions (see Table 3.2).

A second major dopaminergic projection, called the mesolimbocortical pathway, originates in a midbrain region called the ventral tegmental area (VTA) (see Figure 3.4) and projects to various locations in the limbic system (see Chapter 2) and cortex. The mesolimbocortical system appears to be especially important for the processing of reward; it’s probably where feelings of pleasure arise. Thus, it makes sense that the mesolimbocortical dopamine system is important for learning that is shaped by positive reinforcement (which usually involves a reward; see Chapter 13), especially via the D2 dopamine receptor subtype (Lerner et al., 2021). Abnormalities in the mesolimbocortical pathway are associated with some of the symptoms of schizophrenia, as we discuss in Chapter 12. At the end of this chapter we’ll look at the role of this pathway in addictive behaviors.

Serotonin

Scarce as dopaminergic neurons are, there are even fewer serotonergic neurons in the human brain—just 200,000 or so. Nevertheless, wide expanses of the brain are innervated by serotonergic fibers, originating from neurons sprinkled along the midline of the midbrain and brainstem in the raphe nuclei (raphe is pronounced “rafay” and is Latin for “seam”) (see Figure 3.4). Serotonin (5-HT, short for its chemical name, 5-hydroxytryptamine) participates in the control of all sorts of behaviors: mood, vision, sexual behavior, anxiety, sleep, and many other functions. As we’ll see a little later, drugs that increase serotonergic activity are often prescribed for depression and anxiety. There are at least 14 different subtypes of 5-HT receptors, with diverse modes of action and neuroanatomical distribution. The precise behavioral actions of serotonergic drugs therefore depend on which of the many 5-HT receptor subtypes are affected (see Table 3.2) (Ślifirski et al., 2021; Johnson et al., 2019).

Norepinephrine

As Figure 3.4 shows, many of the brain’s noradrenergic neurons—given this name because norepinephrine (NE) is also known as noradrenaline—have their cell bodies in two regions of the brainstem and midbrain: the locus coeruleus (“blue spot”) and the lateral tegmental area. Noradrenergic axons from these regions project broadly throughout the cerebrum, including the cerebral cortex, limbic system, and thalamic nuclei, and interact with five subtypes of NE receptors, all of which are metabotropic (see Table 3.2). They participate in the control of behaviors ranging from alertness to mood to sexual behavior (and many more).

Many peptides function as neurotransmitters

Peptides are very important signaling chemicals both in the brain and in the other organs of the body.
Here are just a few examples of peptides that act as neurotransmitters:

The opioid peptides, a group of endogenous substances with actions that resemble those of opiate drugs like morphine: some key opioids are met-enkephalin, leu-enkephalin, beta-endorphin, and dynorphin. As with morphine, these peptides act as analgesics (painkillers) and have rewarding properties.

A diverse group of peptides originally discovered in the periphery—especially in the organs of the gut (which explains some of their names)—are also made by neurons in the spinal cord and brain. In the nervous system, gut peptides can act as synaptic transmitters, and they are often co-localized with classical transmitters. Examples include vasoactive intestinal polypeptide (VIP), substance P, cholecystokinin (CCK), neuropeptide Y (NPY), and ghrelin (see Chapter 9).

Various peptide hormones, such as oxytocin, orexin, and vasopressin, are produced by the hypothalamus and pituitary. These peptides are involved in an astonishing variety of functions, ranging from basic housekeeping like urine production through to higher-level functions like memory, pair-bonding (see Chapter 8), and social processes.

Some neurotransmitters are gases
As we mentioned earlier, neurons sometimes use certain water-soluble gas molecules to communicate information; the best studied of these is nitric oxide (not to be confused with "laughing gas," which is nitrous oxide). Carbon monoxide also serves as a transmitter in some cells. Although we call them gasotransmitters, these substances are different from traditional neurotransmitters in at least three important ways:

1. Gasotransmitters are produced in cellular locations other than the axon terminals, especially in the dendrites, and are not held in vesicles; the substance simply dissolves in cellular fluids and diffuses out of the neuron as it is produced (so no, despite the name, it's not released as tiny bubbles).

2. No receptors in the membrane of the target cell are involved. Instead, the gasotransmitter diffuses into the target cell to trigger second messengers inside.

3. Most important, these gases can function as retrograde transmitters: by diffusing from the postsynaptic neuron back to the presynaptic neuron, a gasotransmitter conveys information that is used to physically change the synapse. This process may be crucial for memory formation (see Chapter 13).

In addition to their functions in the brain, gasotransmitters are involved in functions as diverse as hair growth and penile erection (Melis and Argiolas, 2021). Although we have surveyed the major neurotransmission mechanisms of the brain, our list is by no means exhaustive; scientists are actively studying numerous other probable neurotransmitters. For example, some endogenous free fatty acids act via receptors to change the activity of neurons; a little later we'll see how one such compound, anandamide, acts as a neurotransmitter in the pathway through which cannabis affects the brain. Research into the specific functions of neurotransmitters is a promising avenue for developing useful new treatments, so let's turn our attention to those chemicals from outside the body that affect the brain: drugs and toxins.

How's It Going?
1. Identify the criteria that are used to establish whether a substance in the brain can be considered a neurotransmitter. Briefly discuss why each one is important.
2. Name and briefly describe each of the major categories of neurotransmitters.
3. What are the main excitatory and inhibitory amino acid transmitters of the brain, and what importance do they have?
4. Name and describe the anatomical organization of the four major "classical" amine neurotransmitters. What are some of the functions in which each transmitter has been implicated?
5. What is a peptide? Where do peptide transmitters come from—give a few examples—and what are some functions they perform?
6. Discuss the ways in which the gasotransmitters resemble and differ from traditional amine transmitters.

FOOD FOR THOUGHT
With so many different transmitters continually released by millions and millions of neurons, why isn't the brain in constant confusion? Isn't the brain a soup of different transmitter molecules, all jumbled up?

3.3 Drugs Fit Like Keys into Molecular Locks

The Road Ahead
The effects of drugs depend on the types of receptors they interact with. After reading this section, you should be able to:
3.3.1 Distinguish between agonist, antagonist, and partial agonist drug actions.
3.3.2 Explain the concepts of binding affinity and efficacy, and discuss the general relationship between a drug's dose and its effects.
3.3.3 Summarize the routes of administration of drugs and the ways in which the brain and body adapt to the presence of drugs over time.

In everyday English, we use the term drug in different ways. One common meaning is "a medicine used in the treatment of a disease" (as in prescription drug or over-the-counter drug). Many psychoactive drugs—compounds that alter the function of the brain and thereby affect conscious experiences—fall into this category, and they may be useful in psychiatric settings. Some psychoactive drugs are used recreationally, with varying degrees of risk to the user; these are sometimes referred to as drugs of abuse, although such substances may also have therapeutic value. Some psychoactive drugs affect the brain by altering enzyme action or modifying other internal cellular processes, but as you may have guessed, most of the drugs that are of interest in neuroscience act via receptors. Recall from Chapter 2 that any substance that binds to a receptor is termed a ligand. The natural ligands for receptors are molecules of neurotransmitters, of course, but many (not all) drugs that affect the brain are also receptor ligands. The actions of drugs on ionotropic and metabotropic receptors are illustrated in FIGURE 3.5.

Drugs that mimic or potentiate the actions of a transmitter are called agonists. A substance that mimics the normal action of a neurotransmitter on its receptors by binding to the receptors and activating them is thus a receptor agonist. Similarly, drugs that reduce the normal actions of a neurotransmitter system are called antagonists. Drugs classified as receptor antagonists bind to receptors but do not activate them—instead, they block the receptors from being activated by their normal neurotransmitter. For that reason, drugs with this action are sometimes called receptor blockers. Some important drugs, called partial agonists, are useful because they produce only a middling response. And still other drugs have agonistic or antagonistic effects that are not directly receptor mediated, such as drugs that alter the synthesis or release of transmitter by presynaptic neurons, or drugs that adjust ("modulate") the sensitivity of receptors.
FIGURE 3.5 Examples of Agonistic and Antagonistic Actions of Drugs on Receptors

Many drugs—caffeine, opium, nicotine, and cocaine are just a few examples—originally evolved in plants, often as a defense against being eaten. Other modern drugs are synthetic and designed to target specific transmitter systems. For example, benzodiazepine antianxiety drugs like lorazepam (trade name Ativan) enhance GABA neurotransmission; classic antipsychotics like haloperidol (trade name Haldol) block dopamine receptors; selective serotonin reuptake inhibitors like fluoxetine (Prozac) are antidepressants. So to understand how drugs work, we must perform analyses at many levels—from molecules to anatomical systems to behavioral effects and experiences.

Earlier we said that a given neurotransmitter interacts with a variety of different subtypes of receptors. To get a sense of just how diverse the transmitter receptor subtypes can be, take another look at Table 3.2. For example, there are more than a dozen different subtypes of serotonin receptors. Some are inhibitory, some excitatory, some ionotropic, some metabotropic; in fact, the only thing they all really share is that they normally respond to serotonin. They even differ in their anatomical distribution within the brain. This division of transmitter receptors into multiple subtypes presents us with an opportunity because, although the natural transmitter will act on all its receptor subtypes, we humans can craftily design drugs that fit into only one or a few receptor subtypes. Selectively activating or blocking specific subtypes of receptors can produce diverse effects, some of which are beneficial. For example, treating someone with large doses of the neurotransmitter serotonin would necessarily activate all of the different subtypes of serotonin receptors in their brain, producing a debilitating welter of different effects. But drugs that selectively block 5-HT3 receptors while ignoring other subtypes of serotonin receptors produce a powerful and specific anti-nausea effect that brings relief to people undergoing cancer chemotherapy.

Drugs are administered and eliminated in many different ways
Drug molecules aren't magic bullets; they don't somehow know where to go to find particular receptor molecules. Instead, drug molecules just spread widely throughout the body, binding to their selective receptors when they happen to encounter them. This binding triggers a chain of cellular events, but it is usually very brief, and when the drug (or transmitter) breaks away from the receptor, the receptor resumes its unbound shape and functioning. The amount of a drug that gets to the brain, and how fast it gets there and starts exerting its effects, depends in part on the drug's route of administration. Some routes, such as smoking or intravenous injection, rapidly ramp up the amount of drug that is bioavailable: free to act on the target tissue, and thus not in use elsewhere or in the process of being eliminated. With other routes, such as ingestion (swallowing), the concentration of drug builds up more slowly over longer periods of time. The duration of a drug effect also depends on how the drug is metabolized and excreted from the body—via the kidneys, liver, lungs, or other routes. In some cases, the metabolites of drugs are themselves active; this biotransformation of drugs can produce substances with beneficial or harmful actions.
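The time course described above, with faster routes producing quicker, higher peaks of bioavailable drug that are then cleared from the body, can be made concrete with a small numerical example. The following is a minimal sketch, not taken from the text, of a hypothetical one-compartment model with first-order absorption and elimination; the dose, rate constants, and route labels are illustrative assumptions rather than measured values.

```python
# Hypothetical one-compartment pharmacokinetic sketch (illustrative values only).
# Faster absorption (e.g., inhalation) gives an earlier, higher peak of
# bioavailable drug; slower absorption (e.g., ingestion) gives a later, flatter
# peak. Elimination is modeled as simple first-order decay.

import math

def bioavailable(t_hours, dose_mg, k_abs, k_elim):
    """Amount of drug in circulation at time t for first-order absorption
    (k_abs, per hour) and first-order elimination (k_elim, per hour)."""
    if abs(k_abs - k_elim) < 1e-9:  # limiting case when the two rates are equal
        return dose_mg * k_abs * t_hours * math.exp(-k_elim * t_hours)
    return (dose_mg * k_abs / (k_abs - k_elim)) * (
        math.exp(-k_elim * t_hours) - math.exp(-k_abs * t_hours)
    )

routes = {                # assumed absorption rate constants (per hour)
    "ingestion": 0.5,     # slow to moderate
    "inhalation": 6.0,    # moderate to fast
}

for route, k_abs in routes.items():
    levels = [bioavailable(t, dose_mg=100, k_abs=k_abs, k_elim=0.3) for t in range(13)]
    peak_hour = max(range(len(levels)), key=lambda i: levels[i])
    print(f"{route:10s} peak ~ hour {peak_hour}, level ~ {levels[peak_hour]:.0f} (arbitrary units)")
```

Under these assumed numbers, the fast route peaks within about an hour while the slow route peaks hours later at a lower level, which is the same contrast summarized qualitatively in Table 3.3 below.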
The factors that affect the movement of a drug into, through, and out of the body are collectively referred to as pharmacokinetics.

The effects of a drug depend on its dose
The tuning of drug molecules to receptor subtypes is not absolutely specific. In reality, a particular drug will generally bind strongly to one kind of receptor, more weakly to a few other types, and not at all to many others. This chemical attraction is known as binding affinity (or simply affinity). At low doses, when relatively few drug molecules are in circulation, drugs will preferentially bind to their highest-affinity receptors. At higher doses, enough molecules of the drug are available to bind both the highest-affinity receptors and some of the lower-affinity receptors. It's interesting to note that neurotransmitter molecules are low-affinity ligands: they bind only comparatively weakly to their receptors, so they can rapidly detach a moment later, allowing the synapse to reset in preparation for the next presynaptic signal.

Once it is bound to a receptor, the extent to which a drug molecule activates the receptor is termed its efficacy (or intrinsic activity). As you might guess, agonists have high efficacy: they tend to activate the receptors they bind to. Conversely, antagonists have low or no efficacy (see Figure 3.5). Partial agonists, unsurprisingly, have appreciable but submaximal efficacy. So it is a combination of affinity and efficacy—where it binds and what it does—that determines the overall action of a drug. For example, the classic antipsychotic drugs tend to have high affinity and low efficacy at the D2 subtype of dopamine receptors; in other words, they are D2 blockers. It's a topic we'll revisit in Chapter 12.

Giving larger doses of a drug ultimately increases the proportion of receptors that are bound and affected by the drug. Within certain limits, this increase in receptor binding also increases the response to the drug; in other words, greater doses tend to produce greater effects. When plotted as a graph, the relationship between drug doses and observed effects is called a dose-response curve (DRC), typically taking the sigmoidal shape shown in FIGURE 3.6A. Careful analysis of DRCs reveals many aspects of a drug's activity, such as useful and safe dose ranges (FIGURE 3.6B), and it is one of the main tools for understanding the functional relationships between drugs and their targets.

FIGURE 3.6 The Dose-Response Curve

Humans have devised a variety of ingenious techniques for introducing substances into the body; these are summarized in TABLE 3.3. In Chapter 1 we described how tight junctions between the cells of the walls of blood vessels create a blood-brain barrier that inhibits the movement of larger molecules out of the bloodstream and into the brain. This barrier poses a major challenge for neuropharmacology because many compounds that could be useful for research or clinical treatment have molecular sizes too large to cross the blood-brain barrier and enter the brain. To a limited extent this problem can be circumvented by administering the drugs directly into the brain, but that is a drastic step. Alternatively, some drugs can take advantage of active transport systems that normally move nutrients out of the bloodstream and into the brain.
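Before turning to routes of administration (TABLE 3.3), the ideas of affinity, efficacy, and the dose-response curve can be illustrated numerically. The sketch below is not from the text; it assumes a standard Hill-equation model of receptor occupancy, and the receptor subtypes, dissociation constants, and efficacy values are hypothetical, chosen only to show why low doses favor the highest-affinity receptor and why a partial agonist plateaus below a full agonist.

```python
# Illustrative dose-response sketch using a Hill equation (assumed parameters).
# occupancy = dose / (dose + Kd)   -> governed by binding affinity (Kd)
# response  = efficacy * occupancy -> efficacy scales what binding accomplishes

def occupancy(dose, kd):
    """Fraction of receptors occupied at a given dose (Hill coefficient of 1)."""
    return dose / (dose + kd)

# Hypothetical receptor subtypes: a high-affinity target (low Kd) and a
# lower-affinity "off-target" subtype (high Kd), in arbitrary dose units.
KD_HIGH_AFFINITY = 1.0
KD_LOW_AFFINITY = 50.0

# Hypothetical efficacies: full agonist activates fully, partial agonist only
# partially, antagonist binds but does not activate at all.
EFFICACY = {"full agonist": 1.0, "partial agonist": 0.4, "antagonist": 0.0}

for dose in (0.5, 5, 50, 500):
    occ_high = occupancy(dose, KD_HIGH_AFFINITY)
    occ_low = occupancy(dose, KD_LOW_AFFINITY)
    responses = ", ".join(
        f"{name} response {eff * occ_high:.2f}" for name, eff in EFFICACY.items()
    )
    print(
        f"dose {dose:>5}: high-affinity occupancy {occ_high:.2f}, "
        f"low-affinity occupancy {occ_low:.2f}, {responses}"
    )
```

Plotted against the logarithm of dose, the full-agonist responses trace the sigmoidal shape of Figure 3.6A, the partial agonist levels off at a lower maximum even though it occupies the same receptors, and the low-affinity subtype is barely occupied until doses become large.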
TABLE 3.3 The Relationship between Routes of Administration and Effects of Drugs

INGESTION (tablets and capsules, syrups, infusions and teas, suppositories). Many sorts of drugs and remedies: ingestion depends on absorption by the gut, which is somewhat slower than most other routes and affected by digestive factors such as acidity of the stomach and the presence of food. Typical speed of effects: slow to moderate.

INHALATION (smoking; nasal absorption, or snorting; inhaled gases, powders, and sprays). Nicotine, cocaine, organic solvents such as airplane glue and gasoline, other drugs of abuse, and a variety of prescription drugs and hormone treatments: inhalation methods take advantage of the rich vascularization of the nose and lungs to convey drugs directly into the bloodstream. Typical speed of effects: moderate to fast.

PERIPHERAL INJECTION (subcutaneous, intramuscular, intraperitoneal [abdominal], intravenous). Many drugs: subcutaneous (under the skin) injections tend to have the slowest effects because they must diffuse into nearby tissue in order to reach the bloodstream; intravenous injections have very rapid effects because the drug is placed directly into circulation. Typical speed of effects: moderate to fast.

CENTRAL INJECTION (intracerebroventricular [into the ventricular system], intrathecal [into the cerebrospinal fluid of the spine], epidural [under the dura mater], intracerebral [directly into a brain region]). Central methods involve injection directly into the central nervous system and are used in order to circumvent the blood-brain barrier, to rule out peripheral effects, or to directly affect a discrete brain location. Typical speed of effects: fast to very fast.

Repeated treatments may reduce the effectiveness of drugs
Our bodies are well equipped to maintain a constant internal environment, optimized for cellular activities, and to counteract physiological challenges. Drug treatments create sudden changes in the body's internal environment, and our bodies may try to adapt to this challenge by developing drug tolerance, where a drug's effectiveness diminishes with repeated treatments. As tolerance to a drug develops, successively larger and larger doses of a drug are needed to produce the same effect. Drug tolerance can develop in several different ways. Some drugs provoke metabolic tolerance, in which the body (especially metabolic organs, like the liver with its specialized enzymes) becomes more effective at eliminating the drug from the bloodstream before it can have an effect. Alternatively, the target tissue may change its sensitivity to the drug—a phenomenon called functional tolerance. Although we tend to think of receptors as being relatively constant features on cell membranes, in reality their numbers can rapidly wax and wane as new receptors are manufactured and old ones are recycled (e.g., Roth et al., 2020). So one important way in which a cell can develop functional tolerance is by changing how many receptors it has on its surface. For example, after repeated doses of an agonist drug, neurons may downregulate their receptors (decrease the number of receptors available to the drug), thereby becoming less sensitive and countering the drug effect. If the drug is an antagonist, target neurons may instead upregulate (increase) the number of receptors, to become more sensitive and thus counteract the drug effect.
Indeed, continual modification of receptor densities is a key feature of synapses and is crucial for neurotransmission and plasticity (Choquet and Triller, 2013). Tolerance to a particular drug often generalizes to other drugs of the same chemical class; this effect is termed cross-tolerance. For example, people who have developed tolerance to heroin tend to exhibit a degree of tolerance to all the other drugs in the opioid category, including codeine, morphine, and methadone. This phenomenon occurs because all those drugs act on the same family of receptors. And paradoxically, there are some circumstances in which response to a drug can become stronger with repeated treatments, rather than weaker: this phenomenon is termed sensitization and is thought to contribute to cravings that people with addictions may experience (Samaha et al., 2021), a topic that we will return to later in the chapter.

How's It Going?
1. Briefly explain agonist and antagonist actions of a ligand, with respect to effects on receptors.
2. What are receptor subtypes? What is their significance for drug development?
3. Briefly explain how dose-response curves are calculated and why they are useful to pharmacologists. Distinguish between a drug's binding affinity and its efficacy.
4. Provide a review of the different ways in which drugs can be administered, in particular noting some considerations that are taken into account when deciding on a route of administration.
5. How does repeated exposure to a drug alter its effects? (Hint: Use the words tolerance and regulate in your answer.)

FOOD FOR THOUGHT
Drugs vary widely in their recommended doses—some are dosed in tablets of less than 1 milligram; pills of other drugs may contain hundreds of milligrams. What factors are responsible for this variability, and how do they determine dosing both individually and in combination?

3.4 Drugs Affect Each Stage of Neural Conduction and Synaptic Transmission

The Road Ahead
Drugs affect many stages of synaptic transmission. After reading this section, you should be able to:
3.4.1 Summarize the ways in which drugs alter presynaptic processes, with examples.
3.4.2 Summarize drug effects on postsynaptic processes, with examples.
3.4.3 Define autoreceptors and explain their function, using caffeine as an example.
3.4.4 Review the processes that terminate transmitter action at synapses.

As the saying goes, it takes two to tango. Synaptic transmission involves a complicated choreography of the two participating neurons, and drugs that affect the brain and behavior may act on either side of the synapse. Let's consider these two sites of action in turn.

Some drugs alter presynaptic processes
One of the ways that a drug may alter synaptic transmission is by affecting the presynaptic neuron, changing the system that converts an electrical signal (an action potential) into a chemical signal (secretion of neurotransmitter). As FIGURE 3.7 illustrates, the most common presynaptic drug effects can be grouped into three main categories: effects on transmitter production, effects on transmitter release, and effects on transmitter clearance.

FIGURE 3.7 Drug Effects on Presynaptic Mechanisms, with Examples

Transmitter production
In order for the presynaptic neuron to produce neurotransmitter, a steady supply of raw materials and enzymes must arrive at the axon terminals and carry out the needed reactions. Drugs are available that alter this process in various ways (FIGURE 3.7A).
For example, a drug may inhibit an enzyme that neurons need in order to synthesize a particular neurotransmitter, resulting in depletion of that transmitter. Alternatively, drugs that block axonal transport prevent raw materials from reaching the axon terminals in the first place, which could also cause the presynaptic terminals to run out of neurotransmitter. In both cases, affected presynaptic neurons are prevented from having their usual effects on postsynaptic neurons, with sometimes profound effects on behavior. A third class of drug (e.g., reserpine) doesn't prevent the production of transmitter but instead interferes with the cell's ability to store the transmitter in synaptic vesicles for later release. The effect on behavior may be complicated, depending on how much transmitter can still reach the postsynaptic cell.

Transmitter release
As we saw in Chapter 2, transmitter is released when action potentials arrive at the axon terminal and trigger an inflow of calcium ions. But a number of drugs and toxins can block those action potentials from ever arriving. For example, local anesthetics such as procaine (trade name Novocain) block sodium channels and thus block the action potentials (see Chapter 2) that nearby neurons would've used to report pain from, say, that tooth your dentist is working on. Tetrodotoxin, the potent toxin that makes puffer fish a dangerous delicacy, likewise prevents axons from firing action potentials, but it shuts down synaptic transmission throughout the body, with deadly results. And drugs called calcium channel blockers do exactly as their name suggests, blocking the calcium influx that normally drives the release of transmitter into the synapse (FIGURE 3.7B). The active ingredient in Botox—botulinum toxin—specifically blocks ACh release from axon terminals near the injection site (Rossetto et al., 2021). The resulting local paralysis of underlying muscles reduces wrinkling of the overlying skin, but it may also interfere with the ability to make typical facial expressions.

A different way to alter transmitter release is to modify the systems that the neuron normally uses to monitor and regulate its own transmitter release. For example, presynaptic neurons often use autoreceptors to monitor how much transmitter they have released; it's a kind of feedback system. Drugs that stimulate these autoreceptors provide a false feedback signal, prompting the presynaptic cell to release less transmitter. Drugs that instead block autoreceptors prevent the presynaptic neuron from receiving its normal feedback, tricking the cell into releasing more transmitter than usual. Many of us exploit this action every day, with our morning coffee. Worldwide, we drink more than 2.2 billion cups of coffee every day, and the caffeine we get from all that coffee blocks a type of autoreceptor called the adenosine receptor. Adenosine, which is classified as a neuromodulator, is co-released with the neuron's transmitter and acts to reduce further transmitter release. So, by blocking presynaptic adenosine receptors, caffeine increases the amount of neurotransmitter released, resulting in the enhanced alertness for which coffee is renowned (McLellan et al., 2016). Interestingly, coffee is associated with a distinctive pattern of enhanced functional connectivity between certain brain regions, with implications for alertness and mood (Magalhães et al., 2021).
Whether coffee directly improves your memory for material that you study—the hope of countless students (Sharif et al., 2021)—is a more nuanced question that has not yet been settled (Borota et al., 2014; Aust and Stahl, 2020).

Transmitter clearance
After action potentials have arrived at the axon terminals and prompted a release of transmitter substance, the transmitter is rapidly cleared from the synapse by several processes (FIGURE 3.7C). Obviously, getting rid of the used transmitter is an important step, because until it is gone, new releases of transmitter from the presynaptic side won't be able to have much extra effect. However, researchers think that under certain circumstances, neurons may be too good at clearing the used transmitter and that a significant lack of transmitter in certain synapses may contribute to disorders such as depression. As we'll see shortly, some important psychiatric drugs, called reuptake inhibitors, work by blocking the presynaptic system that normally reabsorbs transmitter molecules after their release; this blocking action allows transmitter molecules to stay a bit longer in the synaptic cleft, having a greater effect on the postsynaptic cell. Other drugs achieve a similar result by blocking the enzymes that normally break up molecules of neurotransmitter into inactive metabolites, again allowing the transmitter to accumulate, having a greater effect on the postsynaptic cell.

Some drugs alter postsynaptic processes
An alternate way for drugs to change synaptic transmission is by altering the postsynaptic systems that respond to the released neurotransmitter. As illustrated in FIGURE 3.8, there are two major classes of postsynaptic drug actions: (1) direct effects on transmitter receptors and (2) effects on cellular processes within the postsynaptic neuron.

FIGURE 3.8 Drug Effects on Postsynaptic Mechanisms

Activity at postsynaptic transmitter receptors
As we discussed earlier in the chapter, selective receptor antagonists bind directly to postsynaptic receptors and block them from being activated by their neurotransmitter (FIGURE 3.8A). The results may be immediate and dramatic. Curare, for example, blocks the nicotinic ACh receptors found on muscles, resulting in immediate paralysis of all skeletal muscles, including those used for breathing (which is why curare is an effective arrow poison). Selective receptor agonists bind to specific receptors and activate them, mimicking the natural neurotransmitter at those receptors. These drugs are often very potent, with effects that vary depending on the particular types of receptors activated. LSD is an example, producing psychedelic visual experiences through strong stimulation of a subtype of serotonin receptors (namely, 5-HT2A receptors) found in visual cortex.

Postsynaptic intracellular processes
When they bind to their matching receptors on postsynaptic membranes, neurotransmitters can stimulate a variety of changes within the postsynaptic cells, such as the activation of second messengers, the activation of genes, and the production of various proteins. These intracellular processes present additional targets for drug action (FIGURE 3.8B). For example, some drugs induce the postsynaptic cell to upregulate its receptors, thus changing the sensitivity of the synapse. Other drugs cause a down-regulation in receptor density. Some drugs, like lithium chloride, directly alter second-messenger systems, with widespread effects in the brain.
Future research will probably focus on drugs to selectively activate, alter, or block targeted genes within the DNA of neurons. These genomic effects could produce profound long-term changes in the structure and function of neurons.

Food for Thought
Imagine you work in a drug discovery lab, searching for useful new drugs that alter serotonergic neurotransmission. What are some of the synaptic mechanisms that you would be trying to manipulate with candidate drugs? How could you figure out whether a drug's action is presynaptic or postsynaptic?

3.5 Some Neuroactive Drugs Can Ease Mental Illness and Pain

The Road Ahead
In the next section we look at the major categories of neuroactive drugs used for clinical purposes. After reading this section, you should be able to:
3.5.1 Summarize the two major types of antipsychotic medications, and review their pharmacological actions.
3.5.2 Discuss the major types of actions of drugs for treating depression and anxiety, with examples.
3.5.3 Review the discovery of opiates, and their major actions in the brain.

Mental disorders have tormented people through the centuries. Historical accounts of sorcery, strange visions, and possession by demons undoubtedly reflect early misunderstandings of the symptoms of severe mental illness (the topic of Chapter 12) rather than supernatural events. But where the historical response to psychiatric illness was to lock away those afflicted, neuroscience breakthroughs of the last 70 years have revolutionized psychiatry and liberated millions from the purgatory of institutionalized care. In the sections that follow, we will briefly review some of the major categories of psychoactive drugs, based on how they affect behavior.

The Antipsychotic Revolution The introduction of antipsychotic drugs relieved the symptoms of millions of people with schizophrenia who had previously required hospitalization in psychiatric institutions like this one. Antipsychotics dramatically curb the striking hallucinations and delusions that may accompany schizophrenia.

Antipsychotics relieve symptoms of schizophrenia
It's hard to believe now, but prior to the 1950s about half of all hospital beds were taken up by psychiatric patients (Menninger, 1948), and owing to its debilitating nature, most of these were people experiencing the delusions and hallucinations of schizophrenia. This awful situation was suddenly and dramatically improved by the development of a family of drugs now called first-generation antipsychotics (or neuroleptics). The first of these drugs, chlorpromazine (Thorazine), and successors like haloperidol (Haldol) and loxapine (Loxitane) all share one crucial feature: they act as selective antagonists of dopamine D2 receptors in the brain. These drugs are so good at relieving the so-called positive symptoms of schizophrenia—emergent symptoms and behaviors that were previously absent, such as hallucinations and delusions—that a dopaminergic model of the disease became dominant (see Chapter 12). More recently, second-generation antipsychotics have been developed that have both dopaminergic and additional, nondopaminergic actions, especially the blockade of certain serotonin receptors.
These drugs may be helpful in relieving symptoms that are resistant to first-generation antipsychotics—especially the negative symptoms that involve impairment or loss of a behavior, such as social withdrawal and blunted emotional responses—but early hopes that these drugs would be safer, better tolerated, and more effective than the first-generation antipsychotics have not been borne out (Kane and Correll, 2010; Solmi et al., 2017). Growing evidence that schizophrenia also involves transmitters other than the classic targets has since prompted an intense research effort aimed at developing third-generation antipsychotics, with novel targets like glutamate and oxytocin, but effective treatments remain elusive (O'Tuathaigh et al., 2017; Wigton et al., 2021). So, although there has been progress in its treatment, schizophrenia remains a difficult, multifaceted disease and a major health problem, with no clear consensus on an optimal treatment strategy (Correll et al., 2017).

Antidepressants reduce chronic mood problems
Disturbances of mood called affective disorders are among the most common of all psychiatric complaints, affecting 2–6 percent of the population in any given year (depending on the country) or about 300 million people worldwide (Global Burden of Disease Collaborative Network, 2021). In contrast to the antipsychotic drugs, which reduce synaptic activity by blocking receptors, effective antidepressant drugs generally act to increase synaptic transmission. Some of the earliest antidepressants were the monoamine oxidase (MAO) inhibitors, which, as their name suggests, block the enzyme responsible for breaking down monoamine transmitters such as dopamine, serotonin, and norepinephrine. This action allows transmitter molecules to accumulate in the synapses (see Figure 3.7, step 9), with an associated improvement in mood. A second generation of drugs, called the tricyclic antidepressants (an example is imipramine), likewise promote an accumulation of synaptic transmitter, by blocking the reuptake of transmitter molecules into the presynaptic terminal (see Figure 3.7, step 8). More recent generations of antidepressants also increase synaptic transmitter availability, but they focus on specific transmitters: selective serotonin reuptake inhibitors (SSRIs) like fluoxetine (Prozac) and citalopram (Celexa) are so named because they act specifically to block reuptake at serotonergic synapses. Related compounds called serotonin-norepinephrine reuptake inhibitors (SNRIs), like venlafaxine (Effexor), promote the accumulation of both serotonin and norepinephrine, by blocking reuptake of both, and have therapeutic applications that extend beyond depression to include anxiety disorders.

Anxiolytics combat anxiety
Severe anxiety, in the form of panic attacks, phobias (irrational strong fears of specific things, like flying, or spiders), and generalized anxiety, can spiral out of control and become disabling; many millions of people are affected by anxiety disorders (see Chapter 12). Anything that reduces or depresses the excitability of neurons tends to counter these states, which explains some of the historical popularity of depressants like alcohol and opium. Unfortunately, these substances have a strong potential for intoxication and addiction, so they are not suitable for therapeutic use. Barbiturate drugs, such as phenobarbital, were originally developed to reduce anxiety, promote sleep, and avoid epileptic seizures.
They are still used occasionally for those purposes, but they are also addictive and easy to overdose on, often fatally, as illustrated in Figure 3.6B. Since the 1970s the most widely prescribed anxiolytics (antianxiety drugs) have been the benzodiazepines, which are both safer and more specific than the barbiturates (see Figure 3.6B), although they still carry some risk of addiction. Members of this class of drug, such as alprazolam (trade name Xanax) and lorazepam (Ativan), bind to specific sites on GABA-A receptors and enhance the activity of GABA (R. W. Olsen, 2018). Because GABA-A receptors are inhibitory, benzodiazepines help GABA to produce larger inhibitory postsynaptic potentials than GABA would produce alone. The net effect is a reduction in the excitability of neurons. The GABA-A receptor is large and complex and contains multiple binding sites through which drugs may act (Masiulis et al., 2019). Hormones that interact with GABA receptors, as well as drugs that subtly alter serotonergic neurotransmission, are examples of newer anxiolytics (Belelli et al., 2018). Antidepressant drugs are often effective anxiolytics too. Considering the high prevalence and enormous social costs of anxiety disorders (all the more evident during the COVID-19 pandemic: Liyanage et al., 2021; Bera et al., 2022), the discovery of new anxiolytics with ever more refined actions is an urgent matter.

Opioids have powerful painkilling effects
Opium, extracted from poppy flower seedpods, has been used by humans since at least the Stone Age. Morphine, the major active substance in opium, is a very effective analgesic (painkiller) that has brought relief from severe pain to many millions of people (see Chapter 5). Unfortunately, because it produces powerful feelings of euphoria, morphine also has a strong potential for addiction, as do close relatives like heroin (diacetylmorphine) and opioid painkillers like oxycodone (OxyContin) and fentanyl, a synthetic opioid that is 30 to 40 times stronger than heroin. Accidental opioid overdose is a rapidly growing epidemic: the fatal fentanyl overdoses of musicians Prince and Tom Petty were just two of many thousands every year (FIGURE 3.9; Hedegaard et al., 2020).

FIGURE 3.9 An Epidemic of Overdose Deaths

The Source of Opium and Morphine The opium poppy has a distinctive flower and seedpod. The bitter flavor and brain actions of opium may provide the poppy plant a defense against being eaten.

Opiates like morphine, heroin, and codeine bind to specific receptors—opioid receptors—that are concentrated in various regions of the brain. An area within the midbrain called the periaqueductal gray (FIGURE 3.10A) contains a very high density of opioid receptors and is an especially important target because it is here that opiates exert much of their painkilling effects (see Chapter 5).

FIGURE 3.10 Recreational Drugs Bind to Varying Brain Sites

As we mentioned earlier in the chapter, we now know that the brain makes its own morphine-like compounds, called endogenous opioids. Researchers have identified three major families of these potent peptides: the enkephalins (from the Greek en, "in," and kephale, "head"); the endorphins (a contraction of "endogenous morphine"); and the dynorphins (from "dynamic [i.e., fast] endorphins") (see Table 3.1). There are also three main kinds of opioid receptors—delta (δ), kappa (κ), and mu (μ)—all of which are metabotropic receptors (see Table 3.2).
Powerful drugs that block opioid receptors, such as naloxone (Narcan injection or nasal spray) and naltrexone (Revia tablets or Vivitrol injection), can rapidly reverse the effects of opioids and rescue people from overdose. Opioid antagonists also block the rewarding aspects of drugs like heroin, so they can be helpful for treating addiction, as we discuss at the end of this chapter.

Food for Thought
Isn't it curious that our brains contain receptors that also just happen to respond to substances that occur in Nature? For example, why do our brains have receptors that are strongly stimulated by opium-containing poppy sap? How might you explain this and other similar relationships between exogenous substances and brain receptors?

3.6 Some Neuroactive Drugs Alter Conscious Experience

The Road Ahead
In addition to treating illness, humans seek out drugs for recreational use. After reading this section, you should be able to:
3.6.1 Identify the main active ingredients in cannabis, describe their sites of action in the brain, and discuss the effects of cannabinoids on behavior.
3.6.2 Compare and contrast the main families of stimulant drugs, and explain how they interact with neurons, their major effects, and risks they pose to human health.
3.6.3 Review the modes of action of alcohol in the brain, and discuss the prevalence and treatment of alcohol use disorders.
3.6.4 Define psychedelic drugs, and summarize the ways in which the major psychedelics act on the brain, their major behavioral effects, and their potential therapeutic applications.

Whether they are seeking pleasure, vigor, spiritual experiences, or simply to satisfy curiosity, people have a long history of chemically tinkering with their conscious experience. Some of the most familiar types of drugs that modify consciousness include cannabinoids, stimulants, alcohol, and psychedelics.

Cannabinoids have many effects
Cannabis and related preparations, such as hashish, are derived from the Cannabis sativa plant, which has been widely cultivated and used by human societies for thousands of years (Russo et al., 2008). (The common alternative name marijuana is considered by some to have dubious, possibly racist origins; see Halperin, 2018.) Typically administered by smoking, by vaping, or via edible products such as cookies and candies, cannabis contains dozens of active ingredients, the best known of which are the compounds delta-9-tetrahydrocannabinol (THC), which is thought to produce the feeling of being "high" most closely associated with cannabis use, and cannabidiol (CBD), which appears to have anxiolytic effects as well as other medicinal actions (Boggs et al., 2018; Freeman et al., 2019). Cannabis use usually produces pleasant relaxation and mood alteration, although the drug can occasionally cause stimulation and paranoia instead.

Relaxation Cannabis laws are being relaxed in many jurisdictions. Legal access through licensed shops, like this one in California, acknowledges existing widespread use of cannabis for recreational and medicinal purposes and is expected to reduce criminal activity, but health risks remain for adolescents and heavy users.

Occasional use of cannabis seems to be mostly harmless, but as with other substances, heavy use can be harmful. For example, persistent heavy use (that is, substantial daily or near-daily use) may be associated with respiratory problems, addiction, cognitive decline, and psychiatric disorders (Kroon et al., 2020).
Adolescents who use cannabis appear to be at greater risk of subsequently developing schizophrenia (Marconi et al., 2016; H. J. Jones et al., 2018). However, it remains to be determined whether the cannabis use causes the illness, or conversely whether adolescents who are already experiencing symptoms of mental illness are more drawn to cannabis use (J. Bourque et al., 2017). As with opioids and benzodiazepines, researchers identified specific cannabinoid receptors—two subtypes, named CB1 and CB2—that mediate the effects of compounds like THC (Li et al., 2021). Cannabinoid receptors are found in the substantia nigra, the hippocampus, the cerebellar cortex, and the cerebral cortex (FIGURE 3.10B), and we now know that the brain makes several THC-like endogenous ligands (accordingly termed endocannabinoids) that bind to these receptors. The most studied endocannabinoid is anandamide (from the Sanskrit ananda, "bliss"), which produces some of the most familiar physiological and psychological effects of cannabis use, such as elevated mood, pain relief, lowered blood pressure, relief from nausea, improvements in the eye disease glaucoma, and so on. The cannabinoid signaling pathway is thus the target of an intense research effort aimed at developing drugs with some of the specific beneficial effects of cannabis.

The documented use of cannabis for recreational and medicinal purposes spans over 6000 years, but for most of the twentieth century it was subject to widespread legal prohibition. More recently, however, the sale and use of cannabis products are increasingly being legalized in various U.S. states, nationwide in Canada, and to varying extents in other countries, and possession of cannabis for recreational or medical purposes has been "decriminalized" (i.e., tolerated while technically illegal) in many additional jurisdictions. It seems likely that the relaxation of cannabis laws will continue, so a fuller understanding of both the beneficial and adverse effects of cannabis, and potential for cannabis addiction, is a high priority for researchers (Zehra et al., 2018; Freeman et al., 2019).

Stimulants increase neural activity
Proper functioning of the nervous system involves a fine balance between excitatory and inhibitory influences. A stimulant is a drug that tips the balance toward the excitatory side, with an overall alerting, activating effect. People use many different naturally occurring and synthetic stimulants; familiar examples include nicotine, caffeine, amphetamine, and cocaine. Some stimulants act directly by increasing excitatory synaptic potentials. Others act by blocking normal inhibitory processes: we've already seen that caffeine acts as a stimulant by blocking presynaptic adenosine receptors that monitor and inhibit transmitter release. Interestingly, people with attention deficit hyperactivity disorder (ADHD) often find that stimulants like methylphenidate (Ritalin) help them to focus, possibly because of changes in synaptic activity in the frontal lobes (Faraone, 2018). The stimulants thus form a large class of drugs that are exceptionally diverse in their behavioral effects, neurobiological modes of action, and potential for both benefit and harm.

Nicotine
Tobacco is native to the Americas, where European explorers first encountered smoking; these explorers brought tobacco back to Europe with them. Tobacco use became much more widespread following technological innovations that made it easier to smoke, in the form of cigarettes.
Delivered to the large surface of the lungs, the nicotine from conventional or e-cigarettes enters the blood and brain much more rapidly than does nicotine from other tobacco products. Nicotine acts as a stimulant, increasing heart rate, blood pressure, digestive action, and alertness. In the short run, these effects make tobacco use pleasurable. But these alterations of body function, quite apart from the effects of tobacco tar on the lungs, make prolonged exposure to nicotine unhealthful. Smoking and nicotine exposure during development—even prenatally, via the mother—can have a lasting impact on physiology and cognitive development, due to long-lasting modifications of neural function (Yuan et al., 2015; Rauschert et al., 2019). The nicotinic ACh receptors didn't get their name by coincidence; it is through these receptors that the nicotine from tobacco exerts most of its effects in the body. We noted earlier that nicotinic receptors drive the contraction of skeletal muscles, and the activation of various visceral organs, but they are also found in high concentrations in the brain, including the cortex. This is one way in which nicotine enhances some aspects of cognitive performance. Nicotine also acts directly on nicotinic receptors within the ventral tegmental area to exert its rewarding/addicting effects (Durand-de Cuttoli et al., 2018). (We will discuss the ventral tegmental area in more detail when we discuss positive reward models later in this chapter.)

Cocaine
For hundreds of years, people in Bolivia, Colombia, and Peru have used the leaves of the coca shrub—either chewed or brewed as a tea—to increase endurance, alleviate hunger, and promote a sense of well-being. The use of coca leaves in this manner does not seem to cause problems. But processing and purifying an extract from this plant produces a much more potent and dangerous drug: cocaine. First isolated in 1859, cocaine was added to beverages (such as Coca-Cola) and tonics for its stimulant qualities, and subsequently it was used as a local anesthetic (it is in the same chemical family as procaine) and as an antidepressant. But people soon discovered that the rapid hit resulting from snorting cocaine (see Table 3.3) has a stimulant effect that is powerful and pleasurable. Cocaine exerts its stimulant effects by blocking the reuptake of monoamine transmitters—especially dopamine and norepinephrine. This action causes transmitters to accumulate in synapses throughout much of the brain (FIGURE 3.10C), therefore boosting their effects. Crack, a smokable form of cocaine that appeared in the mid-1980s, enters the blood and the brain even more rapidly and thus is even more addictive than cocaine powder. But however it is consumed, cocaine is highly addictive, and heavy cocaine use raises the risk of serious side effects like stroke, psychosis, loss of gray matter in the frontal lobes, and severe mood disturbances (Franklin et al., 2002; Crunelle et al., 2014). Cocaine causes changes in the structure and function of many regions of the brain (Hanlon et al., 2013), which contribute to high rates of relapse in people attempting to quit cocaine use. People who use cocaine along with other substances run the added risk of dual dependence, in which the interaction of two (or more) drugs produces another addictive state. For example, cocaine metabolized in the presence of ethanol (alcohol) yields an active metabolite called cocaethylene, to which the user may develop an additional addiction (Y. Liu et al., 2018).
Amphetamine
The synthetic stimulant amphetamine ("speed") and its more potent relatives, like methamphetamine ("meth"), have a mode of action that superficially resembles that of cocaine, inducing an accumulation of the synaptic transmitters norepinephrine and dopamine. However, the mechanics of amphetamine's actions, involving two steps, are quite different from those of cocaine. First, amphetamine acts within axon terminals to cause a larger-than-normal release of neurotransmitter when the synapse is activated. Second, amphetamine then interferes with the clearance of the released transmitter by blocking its reuptake and metabolic breakdown. The result is that the affected synapses become unnaturally potent, having strong effects on behavior. Over the short term, amphetamine causes increased vigor and stamina, wakefulness, decreased appetite, and feelings of euphoria. For these reasons, amphetamine has historically been used in military applications and other settings where intense sustained effort is required. However, the quality of the work being performed may suffer, and the costs of amphetamine use soon outweigh the benefits. Addiction and tolerance to amphetamine and methamphetamine develop rapidly, requiring ever-larger doses that lead to sleeplessness, severe weight loss, and general deterioration of mental and physical condition. Prolonged use of amphetamine or methamphetamine may lead to symptoms that resemble those of schizophrenia: compulsive, agitated behavior and irrational suspiciousness. Users may neglect their diet and basic hygiene, aging rapidly. Users also experience a variety of peripheral effects, like high blood pressure, tremor, dizziness, sweating, rapid breathing, and nausea. And worst of all, people who are addicted to meth often display symptoms of brain damage long after they quit using the drug (Moratalla et al., 2017; Wu et al., 2018). As we'll discuss a little later in the chapter, increased activation of the mesolimbocortical dopaminergic reward system of the brain appears to be crucial for the rewarding aspects of drug use.

Amphetamine-like stimulants called cathinones are released when the African shrub khat (or qat, pronounced "cot") is chewed. Many types of synthetic cathinones—known collectively as "bath salts"—have been developed and marketed in recent years (Baumann et al., 2018). These designer drugs, especially mephedrone ("plant food" or "meow meow"), soared in popularity in the early twenty-first century, despite the potential for damaging effects on the brain, muscular system, and kidneys (White, 2016; Poyatos et al., 2022).

Alcohol acts as both a stimulant and a depressant
The most widely consumed psychoactive drug, alcohol, is easily produced by the fermentation of fruit or grains. Taken in moderation—one drink per day for women, up to two per day for men—alcohol may do limited harm. Whether or not moderate consumption also has any health benefits, as suggested in some research on the topic (Sabia et al., 2018; Krittanawong et al., 2022), as opposed to varying degrees of health costs (Angebrandt et al., 2022; Biddinger et al., 2022), remains quite controversial, and some public health guidelines now take the position that there is no completely safe level of alcohol consumption (Paradis et al., 2023). There is no controversy, however, about the impact of heavy alcohol consumption: it is very damaging to the brain and body and linked to a multitude of serious diseases (O'Keefe et al., 2014; Bell et al., 2017).
Alcohol has a biphasic effect on the nervous system: at first it acts as a stimulant, and then it has a more prolonged depressant phase. (Remember, the word depressant relates to a depression or inhibition of neural activity, not an effect on mood.) This complex action is thought to be the result of alcohol's effects on several different neurotransmitter systems, such as glutamate and GABA (Roberto and Varodayan, 2017). Like the anxiety-reducing benzodiazepines we discussed earlier, alcohol inhibits neural excitability in multiple brain regions via an action on GABA receptors, resulting in the social disinhibition, poor motor control, and sensory disturbances that we call drunkenness (Harrison et al., 2017). Alcohol additionally activates dopamine-mediated reward systems of the brain, accounting for some of the pleasurable aspects of drinking. Chronic heavy alcohol consumption damages or destroys nerve cells in many regions of the brain. Heavy drinking by expectant mothers can cause grievous permanent damage to the developing fetus, termed fetal alcohol spectrum disorder (FASD), which in the most severe cases is characterized by facial deformities and stunted brain growth, sometimes including the absence of the corpus callosum that normally connects the two hemispheres of the brain (FIGURE 3.11). Furthermore, although it was previously thought that low levels of alcohol consumption were safe during pregnancy, as few as one or two drinks per week may be associated with lower fetal weight and preterm birth (Mamluk et al., 2017), changes in craniofacial development (Muggli et al., 2017), and later behavioral problems (Murray et al., 2016). Such findings, combined with a paucity of information on the impact of light drinking on other aspects of fetal health, such as cognitive function, and clear evidence of fetal risks from alcohol in studies with lab animals (Madarnas et al., 2020), encourage an abundance of caution: the Centers for Disease Control and Prevention (CDC) currently advises that there is no known safe level of alcohol use in pregnancy (CDC, 2022).

FIGURE 3.11 Abnormal Brain Development in Fetal Alcohol Spectrum Disorder

Chronic heavy drinking is associated with pathological changes in white matter pathways, along with decreased volumes of regions including frontal, temporal, and cingulate cortex, hippocampus, diencephalon, and cerebellum (Nutt et al., 2021), resulting in a host of behavioral symptoms including cognitive decline, memory impairment, and movement disorders. Happily, some of the anatomical changes associated with problematic use of alcohol may be reversible with abstinence (Parvaz et al., 2022). In humans recovering from alcohol use disorder, MRI studies show increased volumes of gray matter throughout multiple cortical regions, along with improvements in subcortical sites like the thalamus, amygdala, and cerebellum (Cardenas et al., 2007; Muller and Meyerhoff, 2021) (FIGURE 3.12), along with at least partial reversal of alcohol-induced atrophy of white matter pathways (Zahr and Pfefferbaum, 2017). But even in the absence of clear-cut alcohol use disorder, periodic binge drinking—defined as four (female) or five (male) or more drinks on a single occasion (SAMHSA, 2020)—can harm the brains and alter the behavior of young people in ways that last into adulthood (Crews et al., 2016; Lees et al., 2020). Extreme alcohol bingeing can depress breathing enough to kill, as happens every year to a few college students.
Many additional college students die from alcohol-related accidents (Hingson et al., 2017).

FIGURE 3.12 Immediate Changes in Brain Volume in Those Recovering from Alcoholism

Psychedelic drugs alter sensory perceptions
Humans have long prized psychedelics (also called hallucinogens or entheogens), substances that produce powerful sensory alterations, often finding the resultant experiences to have deep spiritual or psychological meaning. The effects of LSD (lysergic acid diethylamide, or simply acid) and related substances like mescaline (from the peyote cactus) and psilocybin ("magic mushrooms") are predominantly visual, producing bizarre and mysterious sensory experiences. The older name for these drugs, hallucinogens, is really a misnomer: a hallucination is a novel perception that takes place in the absence of sensory stimulation (hearing voices, or seeing something that isn't there), but drugs in this category mostly alter or distort existing perceptions (mainly visual in nature). Users may see fantastic images (FIGURE 3.13), often with intense colors, but typically they are aware that these strangely altered perceptions are not real events.

FIGURE 3.13 Perceptual Alterations with LSD

Psychedelic drugs are diverse in their neural actions. Whereas muscarine affects the ACh system, mescaline acts via noradrenergic and serotonergic systems. The herb Salvia divinorum is unusual among psychedelics because it acts on the opioid kappa receptor. But research with LSD and related drugs suggests that perhaps the most important shared neural action of many psychedelics is the stimulation of serotonin receptors. Discovered by Albert Hofmann in the 1940s, LSD structurally resembles serotonin. Even in tiny doses, LSD strongly activates serotonin 5-HT2A receptors that are found in especially heavy concentrations in the visual cortex. Other psychedelics, such as mescaline and psilocybin, share this action. People treated with psilocybin also show neuroplastic changes and altered activity in brain regions including prefrontal cortex, amygdala, and anterior cingulate cortex (Mertens et al., 2020; Aleksandrova and Phillips, 2021), perhaps accounting for some of the drug's emotional, mystical qualities. In addition to their impressive perceptual effects, LSD, psilocybin, and other psychedelics can produce mood changes, introspective states, and feelings of creativity that have led to renewed interest in the possibility of using psychedelics to treat specific psychiatric disorders, including depression, anxiety, and obsessive-compulsive disorder (Kyzar et al., 2017). Recent reports suggest that people who microdose psychedelics like LSD and psilocybin—taking very low doses on a regular basis—may experience significant improvements of mood and cognition (Rootman et al., 2021), but it's possible a placebo effect accounts for some effects of microdosing (Szigeti et al., 2021).

The Father of LSD Albert Hofmann discovered LSD by accidentally taking some in 1943, and he devoted the rest of his career to studying it. A prohibited drug in most jurisdictions, LSD is distributed on colorful blotter paper. This example, picturing Hofmann and the LSD molecule, is made up of 1036 individual doses, or "hits."

Ketamine (known as Special K) is already in widespread use in medical settings as a component of anesthesia but also has pronounced psychedelic properties.
Acting principally (but not exclusively) to block NMDA receptors, ketamine increases activity in the prefrontal cortex and hippocampus (Zorumski et al., 2016; Ago et al., 2019) and produces feelings of depersonalization and detachment from reality. While high doses produce transient psychedelic effects and occasional psychotic symptoms in volunteers, low doses have a potent and rapid effect on depression that may help ease symptoms in resistant cases (Carlson et al., 2013; Williams and Schatzberg, 2016).

Ecstasy is the street name for the psychedelic amphetamine derivative MDMA (3,4-methylenedioxymethamphetamine). Like LSD, MDMA stimulates 5-HT2A receptors in visual cortex, but it also changes the levels of serotonin, dopamine, and certain hormones, such as prolactin and oxytocin, that have been associated with prosocial feelings and behaviors. Exactly how these physiological actions of MDMA account for its subjective effects—positive emotions, empathy, euphoria, a sense of well-being, and colorful visual phenomena—remains uncertain.

Complications due to psychedelic use are quite varied. The major psychedelics seem to have comparatively low addiction potential. LSD has relatively few negative side effects (although some users report long-lasting visual changes). Long-term frequent MDMA use may cause problems with mood and cognitive performance and long-lasting changes in patterns of brain activation (Montgomery and Roberts, 2022). However, shorter-term MDMA treatment is also being investigated as a possible treatment for persistent posttraumatic stress disorder (Smith et al., 2022). The neural actions, recreational properties, and possible psychiatric uses of some of the major psychedelics are summarized in TABLE 3.4.

TABLE 3.4 Possible Clinical Applications for Psychedelics

Psilocybin/psilocin (Psilocybe mushroom) (according to archaeological evidence, used in prehistory)
Action in brain: A partial agonist of 5-HT receptors, especially the 5-HT2A receptors that occur in high density in visual cortex. Modifies activity of frontal and occipital cortex.
Recreational use: Users of "shrooms" often report spiritual experiences and feelings of transcendence, along with intense visual experiences and alterations in the perception of time. The exact effects are strongly influenced by the expectations and surroundings of the user.
Possible clinical application: Recent studies suggest that psilocybin—administered in controlled settings—can offer substantial and enduring improvements in the symptoms of obsessive-compulsive disorder (OCD), cluster headache (a type of migraine), treatment-resistant depression, and debilitating anxiety and anguish (as in a sample of terminal cancer patients) (Schindler et al., 2015; Carhart-Harris et al., 2016; Agin-Liebes et al., 2020).

Lysergic acid diethylamide (LSD) (1938)
Action in brain: Activates many subtypes of monoamine receptors, especially DA and 5-HT receptors, resulting in heightened activity in many cortical regions, especially frontal, cingulate, and occipital cortex.
Recreational use: "Acid" produces pronounced perceptual changes that resemble hallucinations. Intense colors in geometric patterns, novel visual objects, and an altered sense of time are common.
Possible clinical application: LSD may be an effective treatment for alcohol use disorder and other addictions and may also be an effective treatment for some types of debilitating anxiety (Gasser et al., 2014; Bogenschutz and Johnson, 2016).

Ketamine (1962)
Action in brain: Has widespread effects in the brain, especially blockade of NMDA receptors, and stimulates opioid and ACh receptors.
Recreational use: "Special K" creates a detached, trancelike state, in keeping with its routine medical use as an anesthetic. It may also produce psychedelic perceptual alterations.
Possible clinical application: Recent experiments have revealed that relatively low doses of ketamine have a potent antidepressant effect even in cases that resist other types of treatments (Lima et al., 2022).

3,4-Methylenedioxymethamphetamine (MDMA) (1912/1970s)
Action in brain: Stimulates release of monoamine transmitters and the prosocial hormone oxytocin.
Recreational use: Users of "Ecstasy" experience intense visual phenomena, empathy, strongly prosocial feelings, and euphoria.
Possible clinical application: MDMA treatment may reduce symptoms of posttraumatic stress disorder (PTSD), especially in combination with conventional psychotherapy, but concerns remain regarding drug safety (Smith et al., 2021).

How's It Going?

1. Compare and contrast the three major categories of presynaptic effects of psychoactive drugs. Give examples of each kind of action. (Hint: The words production, release, and clearance will be important for your discussion.)
2. Compare and contrast the main postsynaptic actions of psychotropic drugs, with examples. Be sure to distinguish between actions at receptors and actions within the postsynaptic neuron.
3. At least four general categories of psychoactive drugs are used to relieve disorders. Describe these categories, and give some examples of each class of drugs. Be sure to discuss the modes of action of the drugs you cite.
4. Identify and discuss the major categories of drugs that people use to alter their consciousness. In what ways are the major categories similar, and in what ways do they differ? What are some of the threats to health that these compounds present?
5. Discuss the renewed scientific interest in the therapeutic use of psychedelics. How might they help in psychiatric disorders?

Food for Thought

Psychedelic drugs were quickly outlawed in the twentieth century, amid widespread fear of their effects. Why do you think this happened, and why does this prohibition now seem to be easing? More generally, do you think the experiences created by psychedelics can reveal anything about the nature of consciousness?

3.7 Substance Use Disorders Are Global Social Problems

The Road Ahead

In the final section of the chapter we turn to the urgent public health problem posed by substance use disorders. By the end of this section, you should be able to:

3.7.1 Provide a formal definition and diagnostic considerations for substance use disorders.
3.7.2 Summarize the major theoretical models of disordered substance use and dependency.
3.7.3 Describe some individual differences that affect susceptibility to addiction.
3.7.4 Review leading categories of treatments for addiction, briefly explaining the logic of each.

The habitual use of drugs to alter consciousness can be costly to the user and to society.
Governments try to minimize these costs by controlling (or preventing) the production and distribution of designated drugs, but the division of drugs into licit and illicit categories is largely a matter of historical accident. Some classes of drugs—the opioids, for example—span both categories, being both useful medicines and harmful recreational drugs. And some substances, like tobacco, are legal only because they have been cultivated for centuries and are backed by powerful economic interests. In terms of illness, death, lost productivity, and sheer human misery, some of the legal drugs may be the worst offenders. Just one example reveals the extent of the problem: in the United States each year, more men and women die of smoking-related lung cancer than of colon and breast cancers combined (U.S. Cancer Statistics Working Group, 2021). In addition to the personal impact of so much illness and early death, there are dire social costs: huge expenses for medical and social services; millions of hours lost in the workplace; elevated rates of crime associated with illicit drugs; and scores of children who are damaged by their parents’ substance use disorders, in the uterine environment as well as in the childhood home. Males are more likely than females to engage in disordered substance use, but it is unclear whether this sex difference is related to biological differences between the sexes or to differences in social influences on males versus females. For medical purposes, addiction is defined as “substance use disorder” (SUD) in the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5; American Psychiatric Association, 2013). SUD can take multiple forms, and it varies in severity from mild to severe. The DSM-5 criteria for a diagnosis of alcohol use disorder appear in TABLE 3.5; about 7 percent of adults in the USA (approximately 17 million people) currently meet these criteria (SAMHSA, 2018). Identical criteria are used for diagnosis of all other types of substance use disorders, including opioids like heroin, stimulants like cocaine and methamphetamine, tobacco, cannabis, psychedelics, and so on. Almost everyone will meet some of the criteria some of the time; a diagnosis of a specific substance use disorder requires more-sustained problems and a pattern of use that interferes with normal daily functioning. A mild substance use disorder is diagnosed if two or three of the listed criteria are met. People meeting four or five criteria are classified as having moderate substance use disorder, and severe substance use disorder is diagnosed in cases where six or more of the criteria are met. In the discussion that follows, we will look at some of the most prevalent perspectives on addiction: what it is, where it comes from, how it can be treated. Some of these models stem from social forces; others are more deeply rooted in scientific observations and theories. But for any model of substance use disorders, the challenge is to come up with a single account that can explain the addicting power of substances as seemingly dissimilar as cocaine (a stimulant), heroin (an analgesic and euphoriant), and alcohol (largely a sedative). We will focus primarily on addiction to cocaine, the opioid drugs (such as morphine and heroin), nicotine, and alcohol because these substances have been studied the most thoroughly. It is estimated that more than 20 million people in the United States alone have disordered use of one or more substances (SAMHSA, 2017). 
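To make the count-based severity thresholds described above concrete, here is a minimal sketch in Python. The function name, input format, and example value are illustrative assumptions for this text only, not a clinical instrument and not part of DSM-5 itself.

```python
# Illustrative sketch of the DSM-5 count-based severity rule for substance
# use disorder described above. Not a clinical tool; the function name and
# input convention are assumptions made for this example.

def sud_severity(num_criteria_met):
    """Map the number of DSM-5 criteria met (0-11) to a severity label."""
    if not 0 <= num_criteria_met <= 11:
        raise ValueError("DSM-5 lists 11 criteria, so the count must be 0-11")
    if num_criteria_met <= 1:
        return "no diagnosis"   # fewer than two criteria: no SUD diagnosis
    if num_criteria_met <= 3:
        return "mild"           # two or three criteria
    if num_criteria_met <= 5:
        return "moderate"       # four or five criteria
    return "severe"             # six or more criteria

# Example: a person meeting 4 of the 11 criteria within a 12-month period
print(sud_severity(4))  # -> "moderate"
```

Of course, an actual diagnosis also requires that the pattern of use cause clinically significant impairment or distress over a 12-month period (see TABLE 3.5), something no simple count of criteria can capture.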
Worldwide, the number of people with disordered substance use is probably in the hundreds of millions. Sex differences are evident in multiple aspects of disordered substance use and dependence (Becker et al., 2017), but although historically men have been more likely than women to engage in problematic substance use, rates in adults are now roughly equivalent between the sexes (SAMHSA, 2021).

TABLE 3.5 DSM-5 Diagnostic Criteria for Alcohol Use Disorder

A problematic pattern of alcohol use leading to clinically significant impairment or distress, as manifested by at least two of the following, occurring within a 12-month period:
1. Alcohol is often taken in larger amounts or over a longer period than was intended.
2. There is a persistent desire or unsuccessful efforts to cut down or control alcohol use.
3. A great deal of time is spent in activities necessary to obtain alcohol, use alcohol, or recover from its effects.
4. Craving, or a strong desire or urge to use alcohol.
5. Recurrent alcohol use resulting in a failure to fulfill major role obligations at work, school, or home.
6. Continued alcohol use despite having persistent or recurrent social or interpersonal problems caused or exacerbated by the effects of alcohol.
7. Important social, occupational, or recreational activities are given up or reduced because of alcohol use.
8. Recurrent alcohol use in situations in which it is physically hazardous.
9. Alcohol use is continued despite knowledge of having a persistent or recurrent physical or psychological problem that is likely to have been caused or exacerbated by alcohol.
10. Tolerance, as defined by either of the following:
a. A need for markedly increased amounts of alcohol to achieve intoxication or desired effect.
b. A markedly diminished effect with continued use of the same amount of alcohol.
11. Withdrawal, as manifested by either of the following:
a. The characteristic withdrawal syndrome for alcohol [listed elsewhere in DSM-5 as an alcohol-induced disorder].
b. Alcohol (or a closely related substance, such as a benzodiazepine) is taken to relieve or avoid withdrawal symptoms.

Source: Reprinted with permission from the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition © 2013. American Psychiatric Association. All rights reserved.

Feeding the Monkey Hollywood superstar Robert Downey Jr. has candidly described his personal descent into multiple substance use disorders, and his eventual road to recovery, in media interviews and articles.

Competing models of substance use disorders have been proposed

Any comprehensive model of substance use disorder has to answer several difficult questions: What social and environmental factors trigger disordered use of substances, and what factors sustain continuing misuse? What physiological mechanisms make a substance rewarding? What is addiction, physiologically and behaviorally, and why is it so hard to quit? Four major models attempt to answer at least some of these questions:

1. The moral model blames disordered substance use on weakness of character and a lack of self-control. Proponents of this view may apply exhortation, peer pressure, and/or religious intervention in an attempt to curb abusive practices. These approaches have historically had limited success, probably because they aren't founded in a scientific framework that addresses the neurobiological roots of addiction.
The temperance movement that commenced in the early 1800s did seem to reduce alcohol consumption for a time, but despite good intentions, high hopes, and multibillion-dollar budgets, there remains little evidence that modern morality-based campaigns—Project D.A.R.E., for example—have a substantial effect on rates of problematic substance use (S. L. West and O'Neil, 2004; Vincus et al., 2010).

2. The disease model takes the view that people with substance use disorders require medical treatment rather than moral exhortation or punishment. The problem is that substance use disorder is not like any other disease we know about. We generally reserve the term disease for cases involving a physical abnormality, and no such condition has been found in the case of drug addiction (although some people are genetically more susceptible to addiction than others). Furthermore, the disease model offers no clue about how addiction initially arises. Nevertheless, this model continues to appeal to many, and much research is focused on looking for pathological states that create addiction after initial exposure to a drug.

3. The physical dependence model argues that people keep taking drugs in order to avoid unpleasant withdrawal symptoms. The specific withdrawal symptoms depend on the drug, but they are often the opposite of the effects produced by the drug itself. For example, withdrawal from morphine causes irritability, a racing heart, and waves of goose bumps (that's where the term cold turkey comes from—the skin looks like the skin of a plucked turkey). And of course, the opposite of the euphoria caused by many drugs is dysphoria: strongly negative feelings that can be rapidly relieved by administration of the withdrawn drug. So the model does a good job of explaining why people with addictions will go to great lengths to obtain the drug they are addicted to, but it has an important shortcoming: the model is mute on how the addiction becomes established in the first place. Why do some people, but not all, commence disordered substance use before any physical dependence (tolerance) has developed? And how is it that some people can become addicted to some drugs even in the absence of clear physical withdrawal symptoms? A striking example is cocaine, which is powerfully addictive and produces intense drug craving, yet cocaine withdrawal is not accompanied by the shaking and vomiting and other physical symptoms that are seen during withdrawal from equally addictive substances like heroin.

4. The positive reward model proposes that people may start taking a drug, and become addicted to it, because the drug provides powerful reinforcement. Behavioral tests of the motivation of animals to obtain drugs, such as the self-administration apparatus illustrated in FIGURE 3.14, allow researchers to precisely quantify the rewarding properties and addiction potential of various drugs. We can infer that the more lever presses animals will perform for a single dose, or the smaller the dose that will support the lever-pressing behavior, the more rewarding and addictive the drug must be (a hypothetical version of this comparison is sketched below). For example, it turns out that animals will self-administer doses of morphine that are so low that no signs of physical dependence ever develop. Animals will also furiously press a lever to self-administer tiny doses of cocaine and other stimulants (Pickens and Thompson, 1968; Mereu et al., 2020). In fact, cocaine supports some of the highest rates of lever pressing ever recorded.

FIGURE 3.14 Experimental Setup for Drug Self-Administration
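The inference behind the positive reward model, that more lever presses for a given dose, or responding sustained at smaller and smaller doses, implies greater reward value, can be illustrated with a brief sketch. Everything in the snippet below is hypothetical: the drug labels, dose-response numbers, response threshold, and helper function are invented for illustration and are not drawn from any actual experiment or analysis software.

```python
# Hypothetical self-administration data: mean lever presses per session at
# several unit doses (mg/kg per infusion). Values are invented for
# illustration only.
hypothetical_data = {
    "drug A": {0.01: 5, 0.05: 40, 0.25: 220, 1.0: 310},
    "drug B": {0.01: 0, 0.05: 2, 0.25: 35, 1.0: 90},
}

# Arbitrary cutoff taken here to mean "responding is sustained" at that dose.
RESPONSE_THRESHOLD = 20

def reward_summary(dose_response):
    """Return (smallest dose that sustains responding, peak response rate)."""
    supported = [dose for dose, presses in dose_response.items()
                 if presses >= RESPONSE_THRESHOLD]
    min_supported_dose = min(supported) if supported else None
    peak_rate = max(dose_response.values())
    return min_supported_dose, peak_rate

for drug, curve in hypothetical_data.items():
    min_dose, peak = reward_summary(curve)
    print(f"{drug}: responding sustained down to {min_dose} mg/kg per infusion; "
          f"peak rate {peak} presses per session")
```

By the logic described in the text, the drug that sustains responding at the lower unit dose and drives the higher peak response rate (drug A in this made-up comparison) would be judged the more rewarding, and potentially the more addictive, of the two.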
Experiments using drug self-administration suggest that, by itself, the physical dependence model is inadequate to explain drug addiction, although physical dependence and tolerance may contribute to drug hunger. The more comprehensive view of drug self-administration interprets it as a behavior controlled by a powerful pattern of positive and negative rewards (a variant of operant conditioning theory; see Chapter 13), without the need to implicate a disease process. Many—but not all—addictive drugs cause the release of dopamine in the nucleus accumbens, just as occurs with more conventional rewards, such as food, sex, and gambling (Volkow et al., 2017; Samaha et al., 2021). As we mentioned previously, dopamine released from axons originating in the ventral tegmental area (VTA), part of the mesolimbocortical dopaminergic pathway illustrated in Figure 3.4, has been widely implicated in the perception of reward (FIGURE 3.15). If the dopaminergic pathway from the VTA to the nucleus accumbens serves as a reward system for a wide variety of experiences, then the addictive power of drugs may come from their extra strong stimulation of this pathway. When the drug activates this system, providing an abnormally powerful reward, the user learns to associate the drug-taking behavior with that pleasure and begins seeking out drugs more and more until life's other pleasures fade into the background. If natural activities like conversation, food, and even sex no longer provide appreciable reward, people with addictions may seek drugs as the only source of pleasure available to them.

FIGURE 3.15 A Neural Pathway Implicated in Drug Abuse

Tucked deep within the folds of the frontal cortex, the insula (Latin for "island"; FIGURE 3.16) also appears to play an important role in addiction, craving, and pleasure. Addiction to numerous substances is associated with abnormalities of the insula (Mackey et al., 2019). For example, people with damage to the left insula reportedly lose their urge to smoke tobacco and are able to effortlessly quit smoking (Abdolahi et al., 2017), and noninvasive stimulation of the insula using rTMS (repetitive transcranial magnetic stimulation; see Chapter 2) significantly reduces nicotine craving and helps people to quit smoking (Zangen et al., 2021). Via its rich interconnections with prefrontal and sensory cortex and the VTA, the insula seems to interact with several large brain networks to mediate multiple features of addiction (Droutman et al., 2015; Ibrahim et al., 2019). Interestingly, changes in the insula are reportedly evident in people whose compulsive use of social media resembles addiction (Turel et al., 2018).

FIGURE 3.16 The Insula and Addiction

Not everyone who uses an addictive drug becomes addicted, of course. For example, most hospitalized patients treated with opioids for pain relief do not develop opioid addiction after the pain has resolved. However, modern prescription painkillers (like fentanyl) are highly effective at activating the dopamine reward system, so the use of these drugs outside of medical contexts carries a high risk of addiction. The individual and environmental factors that account for differential susceptibility are the subject of active investigation (Karch, 2006).
Some of the major risk factors include biological factors (being male, heritable tendencies to addiction; Karlsson Linnér et al., 2021), poor family life, personality factors (poor emotional control), and environmental factors (living in a neighborhood with high rates of addiction). Simply returning to a neighborhood where drugs were previously used can trigger drug craving in an addicted person; this cue-induced drug use is thought to rely on long-lasting associations that involve remodeling of the brain's reward circuitry (J. Wang et al., 2018; Wright and Dong, 2021). Let's conclude the chapter by considering strategies to treat addiction in Signs and Symptoms, next.

SIGNS & SYMPTOMS

Medical Interventions for Substance Use Disorders

Some people can overcome their dependence on substances by themselves. For example, many ex-smokers and ex-alcoholics managed to quit on their own. Other people with addictions greatly benefit from professional counseling, or they turn to informal social interventions like the faith-based 12-step program developed by Alcoholics Anonymous in the 1930s. However, overcoming addiction may require stronger measures in some cases, especially for the most powerfully addictive substances. Medical strategies for loosening the grip of addiction can be grouped into several categories:

Lessening the discomfort of withdrawal and drug craving. Benzodiazepines and other sedatives, antinausea medications, and drugs that promote sleep all help reduce withdrawal symptoms. Other medications help reduce uncomfortable cravings for the problematic substance; for example, acamprosate (trade name Campral) eases alcohol-associated withdrawal symptoms. Preliminary evidence indicates that noninvasive stimulation of prefrontal cortex using rTMS (repetitive transcranial magnetic stimulation; see Chapter 2) can reduce drug hunger and relapse rates in people with addictions (Diana et al., 2017).

Providing alternatives to the addictive drug. Agonist or partial agonist analogs of the addictive drug weakly activate the same mechanisms as the addictive drug, helping to wean the individual off it. For example, the opioid receptor agonist methadone reduces heroin appetite; nicotine patches work in a similar fashion to reduce cravings for cigarettes.

Directly blocking the actions of the addictive drug. Specific receptor antagonists can prevent a problematic drug from interacting with its receptors. For example, the opioid receptor antagonists naloxone (Narcan) and naltrexone (Revia and Vivitrol) block the actions of opioid drugs like heroin and fentanyl, but they also may produce harsh withdrawal symptoms. Naltrexone is also approved by the FDA for the medical treatment of people with alcohol use disorder (Mason and Heyser, 2021); it reportedly blocks feelings of euphoria, suggesting that in these individuals alcohol causes a release of endogenous opioids that brings pleasure.

Altering metabolism of the addictive drug. Changing the breakdown of a drug can reduce or reverse its rewarding properties. Disulfiram (Antabuse) changes alcohol metabolism such that a nausea-inducing metabolite (acetaldehyde) accumulates.

Blocking the brain's reward circuitry. When a person takes medicine (e.g., dopamine receptor blockers) to blunt the activity of the mesolimbocortical dopamine reward system, the addictive drugs lose their pleasurable qualities (but at the cost of a general loss of pleasurable feelings, called anhedonia).
Immunization to render the drug ineffective. Vaccination is constantly in the news nowadays, thanks to COVID-19, so everyone knows that we can create vaccines that induce the immune system to attack infectious agents. Researchers are using a similar approach to develop vaccines against drugs such as cocaine, heroin, and methamphetamine (Nguyen et al., 2017; Truong and Kosten, 2021). Here the strategy is to prompt the individual's immune system to produce antibodies that remove the targeted drugs from circulation before they have a chance to reach the brain (FIGURE 3.17).

FIGURE 3.17 The Needle and the Damage Undone

The economic impact of substance use disorders—the costs of law enforcement, medical care, and lost productivity—exceeds $700 billion per year in the USA alone (National Institute on Drug Abuse, 2017). The social costs of disordered drug use and addiction, furthermore, are incalculable. Unfortunately, no single approach so far appears to be uniformly effective, and rates of relapse remain high. Research breakthroughs are therefore badly needed.

How's It Going?

1. Define substance use disorder. How prevalent is disordered substance use in the population?
2. Summarize the major models of substance use disorders and addiction, highlighting the strengths and shortcomings of each perspective.
3. Describe an experimental setup for measuring the rewarding properties of a drug.
4. Provide a survey of the anatomical system that mediates reward. What happens when this system is activated? What are some triggers that can activate the system, and how does activity of the reward system relate to drug addiction?
5. Provide a thorough overview of medical approaches and interventions in substance use disorders.

FOOD FOR THOUGHT

In popular media it is commonplace to see the word addiction used to describe non-drug habits: "food addiction," "sex addiction," "gambling addiction," and so on. Do you think these strong appetites are accurately called addictions, or does addiction to a substance differ in some specific ways?

RECOMMENDED READING

Advokat, C. D., Comaty, J. E., and Julien, R. M. (2018). Julien's Primer of Drug Action (14th ed.). New York, NY: Worth.
Erickson, C. K. (2018). The Science of Addiction: From Neurobiology to Treatment (2nd ed.). New York, NY: Norton.
Karch, S. B., and Drummer, O. (2015). Karch's Pathology of Drug Abuse (5th ed.). Boca Raton, FL: CRC Press.
Meyer, J. S., and Quenzer, L. F. (2018). Psychopharmacology: Drugs, the Brain, and Behavior (3rd ed.). Sunderland, MA: Oxford University Press/Sinauer.
Nestler, E. J., Kenny, P. J., Russo, S. J., and Shaefer, A. (2020). Molecular Neuropharmacology (4th ed.). New York, NY: McGraw-Hill.
Nutt, D. (2020). Drink?: The New Science of Alcohol and Health. New York, NY: Hachette Go.
Nutt, D. (2022). Drugs without the Hot Air: Making Sense of Legal and Illegal Drugs (2nd ed.). Cambridge, UK: UIT Cambridge Press.
Thombs, D. L., and Osborn, C. J. (2019). Introduction to Addictive Behaviors (5th ed.). New York, NY: Guilford Press.

VISUAL SUMMARY

You should be able to relate each summary to the adjacent illustration, including structures and processes. The online version of this Visual Summary includes links to figures, animations, and activities that will help you consolidate the material.
Visual Summary Chapter 3

LIST OF KEY TERMS

acetylcholine (ACh), agonists, amine neurotransmitters, amino acid neurotransmitters, amphetamine, analgesic, anandamide, antagonists, antidepressant, anxiolytics, autoreceptors, barbiturate, basal forebrain, benzodiazepines, binding affinity, bioavailable, biotransformation, blood-brain barrier, caffeine, cannabidiol (CBD), cannabinoid receptors, Cannabis, cholinergic, cocaine, co-localization, cross-tolerance, delta-9-tetrahydrocannabinol (THC), depressants, dopamine (DA), dopaminergic, dose-response curve (DRC), downregulate, drug tolerance, dysphoria, efficacy, endocannabinoids, endogenous, endogenous opioids, excitatory synapse, exogenous, fetal alcohol spectrum disorder (FASD), first-generation antipsychotics, functional tolerance, gamma-aminobutyric acid (GABA), gasotransmitters, glutamate, G protein–coupled receptors (GPCRs), heroin, inhibitory synapse, insula, ionotropic receptor, khat, lateral tegmental area, ligand, local anesthetics, locus coeruleus, LSD, MDMA, metabolic tolerance, metabotropic receptors, monoamine oxidase (MAO), morphine, neurotransmitter, neurotransmitter receptors, nicotine, noradrenergic, norepinephrine (NE), nucleus accumbens, opioid peptides, opioid receptors, opium, partial agonists, peptide neurotransmitters, periaqueductal gray, pharmacokinetics, postsynaptic, presynaptic, psychedelics, raphe nuclei, receptor subtypes, retrograde transmitters, reuptake, second-generation antipsychotics, selective serotonin reuptake inhibitors (SSRIs), sensitization, serotonergic, serotonin (5-HT), serotonin-norepinephrine reuptake inhibitors (SNRIs), stimulant, substantia nigra, synapse, transporters, tricyclic antidepressants, upregulate, ventral tegmental area (VTA), withdrawal symptoms