CHAPTER 14
Attention and Higher Cognition
Neil V. Watson
Simon Fraser University
S. Marc Breedlove
Michigan State University
Attention to Details
Some heart conditions can launch blood clots that block
the arteries supplying the brain, causing several strokes
within a short span of time. What made Parminder’s
case so exceptional was that in the span of a couple of
months she had two strokes that were mirror images of
each other—they damaged identical regions of the left
and right parietal lobes.
Parminder’s unlikely brain lesions produced equally
unlikely symptoms. A few weeks after her second
stroke, Parminder had regained many of her intellectual
powers—she could converse normally and remember
things. Her visual fields were apparently normal too, but
her visual perception was anything but normal.
Parminder had lost the ability to perceive more than one
thing at a time. For example, she could see her
husband’s face just fine, but she couldn’t judge whether
he had glasses on or not. It turned out that she could
see the glasses or she could see the face, but she
couldn’t perceive them both at the same time. When
shown a drawing of several overlapping items, she
could perceive and name only one at a time.
Furthermore, she couldn’t understand where the objects
she saw were located. It was as if Parminder was lost in
space, able to pay attention to only one object or detail
at a time, each object apparently alone in a world of its own. What
could explain Parminder’s symptoms?
What is attention? William James, the great American psychologist,
wrote in 1890:
Everyone knows what attention is. It is the taking possession by the mind, in clear
and vivid form, of one out of what seem several simultaneously possible objects or
trains of thought. Focalization, concentration, of consciousness are of its essence. It
implies withdrawal from some things in order to deal effectively with others, and is a
condition which has a real opposite in the confused, dazed, scatterbrained state.
Clearly, James understood that attention can be effortful, improves
perception, and acts as a filter. This continual shifting of our focus
from one interesting stimulus to the next lies at the heart of our
innermost conscious experiences, our awareness of the world around
us, and our place in it. So, we open this chapter by exploring the
behavioral and neural dimensions of attention before turning to the
more general question of our conscious experience of the world.
14.1 Attention Focuses Cognitive
Processing on Specific Targets or
Information
The Road Ahead
The first part of this chapter concerns the consequences of
attention processes: the ways attention filters the world
and affects our processing of information. At the
conclusion of this section, you should be able to:
14.1.1 Provide a general definition of attention, and
distinguish between overt and covert forms of
attention, with examples.
14.1.2 Describe the limitations of our attention,
situations in which our attention may be
overextended, and the behavioral
manifestations of our limited attention.
14.1.3 Speculate about the ways in which evolution
may have shaped attention.
14.1.4 Distinguish between voluntary and reflexive
attention, and describe general experimental
designs for studying each.
14.1.5 Describe the use of focused attention to search
the world for particular objects (using either a
feature search or a conjunction search), and
discuss the significance of the “binding
problem.”
Despite taking delight in pretending otherwise, the average 5-year-old knows exactly what it means when an exasperated parent shouts,
“Pay attention!” We all share an intuitive understanding of the term
attention, but it is tricky to formally define. In general, attention
(or selective attention) is the process by which we select or focus on
one or more specific stimuli—either external phenomena or internal
thoughts—for enhanced processing and analysis. It is the selective
quality of attention that distinguishes it from the related concept of
vigilance, the global level of alertness of the individual. Most of the
time we direct our eyes and our attention to the same target, a
process known as overt attention. For example, as you read this
sentence, it is both the center of your visual gaze and (we hope) the
main item that your brain has selected for attention. But if we choose
to, we can also shift the focus of our visual attention covertly,
keeping our eyes fixed on one location while “secretly” scrutinizing
something in peripheral vision (Helmholtz, 1962; original work
published in 1894). Remember that teacher who, even when looking
out the window, somehow knew instantly when someone checked
their phone? That’s an example of what is known as covert
attention (FIGURE 14.1).
FIGURE 14.1 Covert Attention
Selective attention isn’t restricted to visual stimuli. Imagine yourself
chatting with an old friend at a noisy party. Despite the background
noise, you would probably find it relatively easy to focus on what
your friend was saying, even if speaking quietly, because paying close
attention to a friend enhances your processing of their speech and
helps filter out distracters. This phenomenon—the ability to “tune in”
to one voice and “tune out” everything else—is known as the
cocktail party effect, and it nicely illustrates how attention acts to
focus cognitive processing resources on a particular target. If your
attention drifts to a different stimulus—for example, if you start
eavesdropping on a more interesting conversation nearby—it
becomes almost impossible to simultaneously follow what your
friend is saying. (The term cocktail party effect is also sometimes
used in the special case where a highly salient word, such as one’s
own name, captures attention in a noisy environment.)
There are limits on attention
The powers of attention that help you to easily chat with a friend in a
noisy room normally rely on cues in several different sensory
modalities, such as where their speech sounds are coming from, the
movements of their lips while speaking, and their unique tone of
voice. But what if we restrict our attention to just one type of
stimulus?
In shadowing experiments, participants must focus their attention
on just one out of two or more simultaneous streams of stimuli. In a
classic example, Cherry (1953) presented different streams of speech
simultaneously to people’s left and right ears via headphones—the
technique is called dichotic listening—and asked them to focus their
attention on one ear or the other and report what they heard.
Participants were able to accurately report what they heard in the
attended ear, but they could report very little about what was said in
the nonattended ear, aside from simple characteristics, such as the
sex of the speaker. In fact, if a shadowing task is difficult enough,
people may fail to detect even their own names in the unattended ear
most of the time (Röer and Cowan, 2021)!
Similar restrictions of attention can be seen in other sensory
modalities, such as musical notes (Zendel and Alain, 2009) and
visual stimuli. Participants closely attending to one complex visual
event against a background of other moving stimuli—dancers
weaving through a basketball game, for example—may show
inattentional blindness: a surprising failure to perceive
nonattended stimuli. And the unperceived stimuli can be things that
you might think impossible to miss, like a gorilla strolling across the
screen out of the blue (Simons and Chabris, 1999; Simons and
Jensen, 2009). Even highly trained experts can have this problem. In
one study, over 80 percent of radiologists screening CT scans for
lung cancer didn’t notice a seemingly obvious image of a gorilla
inserted into one of the scans (Drew et al., 2013) (you can see an
example on the website). Inattentional blindness even occurs when
the nonattended stimulus could have life-or-death consequences for
the observer; for example, a significant fraction of police officers and
trainees will fail to notice a gun placed in plain view during a
simulated traffic stop (Simons and Schlosser, 2017).
In general, divided-attention tasks—in which a person is asked to
process two or more simultaneous stimuli—confirm that attention is
a limited resource and that it’s very difficult to effectively attend to
more than one thing at a time, even if we feel like we are
multitasking just fine (Bonnel and Prinzmetal, 1998; Konishi et al.,
2020). So, our limited selective attention generally acts like an
attentional spotlight (see Figure 14.1), shifting around the
environment, highlighting stimuli for enhanced processing. It’s an
adaptation that we share with many other species because, like us,
they are confronted with the problem of extracting important signals
from a noisy background (Bee and Micheyl, 2008). Birds and bats,
for example, must isolate critical vocalizations from a cacophony of
calls and other noises in the environment—their version of the
cocktail party problem (Lewicki et al., 2014). Having a single
attentional spotlight helps us focus cognitive resources and
behavioral responses toward the most important things in the
environment at any given moment (the smell of smoke, the voice of a
potential mate, a glimpse of a big spotted cat), while ignoring
extraneous information.
By acting as a filter, attention narrows our focus and directs our
cognitive resources toward only the most important stimuli around
us, thereby protecting the brain from being overwhelmed by the
world. But the details of this attentional bottleneck have been
elusive. Initial research gave evidence of an early-selection model of
attention, in which unattended information is filtered out right away,
at the level of the initial sensory input, as in the shadowing
experiments we just described (Broadbent, 1958). But others noted
that important but unattended stimuli (such as your name) may
undergo substantial unconscious processing, right up to the level of
semantic meaning and awareness, before suddenly capturing
attention (Röer and Cowan, 2021), thus illustrating a late-selection
model of attention. Many contemporary models of attention now
combine both early- and late-selection mechanisms, and debate
continues over their relative importance.
Gorillas in the Midst Who could miss the gorilla in the video from which this still is taken?
Most people do, if they are concentrating on some other task, such as counting the number
of times people in white shirts touch a ball that is being passed around.
If we view attention as a limited resource, then we only have enough
of it to do one complex task at a time, or a few very simple ones. This
view implies that attention is continually rebalanced between
early and late selection, depending on perceptual load: the
processing demands imposed by the task at hand. When we focus on
a very complex stimulus, the load on our perceptual processing
resources is so great that there is nothing left over, and extra stimuli
are excluded right from the outset: an early-selection process (S.
Murphy et al., 2017).
Attention is deployed in several
different ways
We’ve seen that through willpower we can direct our attention to
specific stimuli without moving our eyes or otherwise reorienting.
Early experiments on this phenomenon employed sustained-attention tasks, like the one depicted in Figure 14.1, where a single
stimulus location must be held in the attentional spotlight for an
extended period. Although these tasks are useful for studying basic
phenomena, other important questions about attention require
another approach. For example, how do we shift our attention
around? How does attention enhance the processing of stimuli, and
which brain regions are involved? To answer these questions,
researchers devised clever tasks that employ stimulus cuing to
control attention, which revealed two general categories of attention,
as we’ll discuss next.
The kind of attention that we have been discussing thus far is what
researchers call voluntary attention (or endogenous attention).
As the name implies, voluntary shifts of attention come from within;
they are the conscious, top-down directing of our attention toward
specific aspects of the environment, according to our interests and
goals. FIGURE 14.2 features the symbolic cuing task (or spatial
cuing task), developed by Michael Posner (2016) and used
extensively to study voluntary attention. Studies using cuing tasks
have confirmed that consciously directing your attention to the
correct location or stimulus improves processing speed and accuracy.
Conversely, directing your attention to an incorrect location or
stimulus impairs processing efficiency.
FIGURE 14.2 Measuring the Effects of Voluntary Shifts of Attention
How much does it help to shift your attention to a location before a
stimulus occurs there? Posner’s symbolic cuing task allows us to
quantify how voluntary attention benefits processing. In a symbolic
cuing task, participants stare at a point in the center of a computer
screen and must press a key as soon as a specific target (the
stimulus) appears on the screen; this technique thus measures
reaction time. The stimulus is preceded by a cue that briefly
flashes on the screen, hinting where the stimulus will appear. Most of
the time, as in FIGURE 14.2A, the participant is provided with a
valid cue; for example, a rightward arrow flashes on the screen
moments before the stimulus appears on the right side of the screen.
In a few trials, like the one in FIGURE 14.2B, the arrow points the
wrong way and thus provides an invalid cue. And in “neutral” control
trials (FIGURE 14.2C), the cue doesn’t provide any hint at all. Both
the cue and the stimulus are on the screen so briefly that participants
don’t have time to shift their gaze (and in any case, the researchers
monitor their eyes to ensure they stare at the fixation point).
Averaged over many trials, the reaction-time data (FIGURE 14.2D)
clearly show that people swiftly learn to use cues to predict stimulus
location, shifting their attention without shifting their gaze, in
anticipation of the appearance of the target stimulus. Compared with
neutral trials, processing is significantly faster for validly cued trials,
and participants pay a price on those few trials in which the invalid
cue misdirects their attention to the
wrong side of the display. Many variants of the symbolic cuing
paradigm have been developed—varying the timing of the stimuli,
altering their complexity, requiring a choice between different
responses—all of which can affect reaction time, which we discuss
next.
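To make the benefit-and-cost logic concrete, here is a minimal simulation sketch in Python. The baseline, benefit, and cost values are illustrative assumptions chosen only to reproduce the qualitative pattern in Figure 14.2D; they are not parameters from Posner's experiments.

```python
import random

# Illustrative (assumed) parameters: a baseline reaction time plus the
# benefit of a valid cue and the cost of an invalid cue, all in milliseconds.
BASELINE_MS = 300      # mean RT on neutral trials (assumption)
VALID_BENEFIT_MS = 30  # speedup when attention was pre-shifted to the target (assumption)
INVALID_COST_MS = 40   # slowdown when attention must be redirected (assumption)
NOISE_SD_MS = 40       # trial-to-trial variability (assumption)

def simulate_trial(cue_type: str) -> float:
    """Return a simulated reaction time (ms) for one cuing trial."""
    mean = BASELINE_MS
    if cue_type == "valid":
        mean -= VALID_BENEFIT_MS
    elif cue_type == "invalid":
        mean += INVALID_COST_MS
    return random.gauss(mean, NOISE_SD_MS)

def mean_rt(cue_type: str, n_trials: int = 1000) -> float:
    """Average many simulated trials of one cue type."""
    return sum(simulate_trial(cue_type) for _ in range(n_trials)) / n_trials

for cue in ("valid", "neutral", "invalid"):
    print(f"{cue:>8}: {mean_rt(cue):6.1f} ms")
# Expected ordering, as in Figure 14.2D: valid < neutral < invalid
```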
Some types of stimuli just grab our
attention
There is a second way in which we pay attention to the world,
involving more than just consciously steering our attentional
spotlight around. Flashes, bangs, sudden movements—any striking
or important change—can instantly snatch our attention away from
whatever we’re doing, unless we are very focused. Drop your glass in
a restaurant, and every conversation stops, every head in the place
swivels, seeking the source of the sound (you, embarrassingly). This
sort of involuntary reorientation toward a sudden or important event
is an example of reflexive attention (or exogenous attention). It is
considered to be a bottom-up process, because attention is being
seized by sensory inputs from lower levels of the nervous system,
rather than being directed by voluntary, conscious top-down
processes of the forebrain.
RESEARCHERS AT WORK
Reaction Times Reflect Brain Processing, from Input to
Output
Reaction-time measures are a mainstay of cognitive
neuroscience research. In tests of simple reaction time,
participants make a single response—for example, pressing a
button—in response to an experimental stimulus (the
appearance of a target, the solution to a problem, a tone, or
whatever the experiment is testing). In tests of choice reaction
time, the situation is slightly more complicated: a person is
presented with alternatives and has to choose among them
(e.g., correct versus incorrect, same versus different) by
pressing one of two or more buttons.
Reaction times in an uncomplicated choice reaction-time
test, in which the participant indicates whether two stimuli are
the same or different, average about 300–350 milliseconds
(ms). The delay between stimulus and response varies
depending on the amount of neural processing required
between input and output. The neural systems involved in this
sort of task, and the timing of events in the response circuit, are
illustrated in FIGURE 14.3. Brain activity proceeds from the
primary visual cortex (V1) through a ventral visual object
identification pathway (see Chapter 7) to prefrontal cortex, and
then through premotor and primary motor cortex, down to the
spinal motor neurons and out to the finger muscles. In the
sequence shown in the figure—proceeding from the
presentation of visual stimuli to a discrimination response—
notice that it takes about 110 ms for the sensory system to
recognize the stimulus (somewhere in the inferior temporal
lobe), about 35 ms more for that information to reach the
prefrontal cortex, and then about 30 ms more to determine
which button to push. After that, it takes another 75 ms or so for
the movement to be executed (i.e., 75 ms of time elapses
between the moment the signal from the prefrontal cortex
arrives in premotor cortex and the moment the finger pushes
the button). It is fascinating to think that something like this
sequence of neural events happens over and over in more-
complicated behaviors, such as recognizing a long-lost friend or
composing an opera.
FIGURE 14.3 A Reaction-Time Circuit in the Brain LGN, lateral geniculate
nucleus; V1, primary visual cortex; V2 and V4, extrastriate visual areas.
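As a rough check on the arithmetic above, the short Python sketch below simply adds up the approximate stage latencies quoted in the text. The stage labels are paraphrases of the steps in Figure 14.3, and the closing comment about unitemized processing is our assumption, not a claim from the figure.

```python
# Approximate stage latencies quoted in the text (milliseconds)
stages = {
    "stimulus recognition (through inferior temporal cortex)": 110,
    "transfer to prefrontal cortex": 35,
    "response selection in prefrontal cortex": 30,
    "movement execution (premotor cortex to button press)": 75,
}

total = sum(stages.values())
for name, ms in stages.items():
    print(f"{ms:>4} ms  {name}")
print(f"{total:>4} ms  total of the itemized stages")
# Observed choice reaction times average about 300-350 ms; the remaining few
# tens of milliseconds presumably reflect processing not itemized above
# (an assumption, not stated in the figure).
```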
Researchers study reflexive attention using a different kind of cuing
task, called peripheral spatial cuing. In this task, instead of a
meaningful symbol like an arrow, the cue that is presented is a
simple sensory stimulus, such as a flash of light, occurring in the
location to which attention is to be drawn. Research with this type
of simple cuing confirmed that valid reflexive cues enhance the
processing of subsequent stimuli at the same location, but only when
the target stimulus closely follows the cue. At longer intervals
between the cue and target, starting at about 200 ms, a curious
phenomenon is observed: detection of stimuli at the location where
the valid cue occurred is actually impaired (Satel et al., 2019). It’s as
though attention has moved on from where the cue occurred and is
reluctant to return to that location. This inhibition of return
probably evolved because it prevented reflexive attention from
settling on unimportant stimuli for more than an instant, an effective
strategy in animals foraging for food or scanning the world for
threats.
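The reversal from facilitation to inhibition of return can be summarized as a simple function of the cue-target interval (stimulus onset asynchrony, or SOA). The Python sketch below is a toy model with assumed millisecond values; it is meant only to show the qualitative time course, not to reproduce data from Satel et al. (2019).

```python
def reflexive_cuing_effect_ms(soa_ms: float) -> float:
    """Illustrative cued-minus-uncued RT difference (ms) at a given cue-target SOA.

    Negative values mean faster responses at the cued location (facilitation);
    positive values mean slower responses there (inhibition of return).
    All parameter values are assumptions chosen to show the qualitative pattern.
    """
    if soa_ms < 200:
        return -25.0 * (1 - soa_ms / 200)    # early facilitation fades toward ~200 ms
    return min(20.0, 0.1 * (soa_ms - 200))   # later: a growing, then plateauing, IOR cost

for soa in (50, 100, 150, 200, 300, 500, 800):
    print(f"SOA {soa:>3} ms: cuing effect {reflexive_cuing_effect_ms(soa):+6.1f} ms")
```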
Normally, reflexive and voluntary attention work together to direct
cognitive activities (FIGURE 14.4), probably relying on somewhat
overlapping neural mechanisms. Anyone who has watched a squirrel
at work has seen that twitchy interplay. When it comes to single-mindedly searching for tasty morsels (an example of voluntary
attention), a squirrel has few rivals. But even slight noises and
movements (cues that reflexively capture attention) cause the
squirrel to stop and scan its surroundings—a sensible precaution if,
like a squirrel, you are yourself a tasty morsel. So, it’s no surprise
that emotional cues—a sudden gasp from a companion, for example
—can likewise reflexively capture attention and augment sensory
processing (Carretié, 2014). And effective cues for reflexive attention
may involve multiple sensory modalities: a sudden sound coming
from a particular location, for example, can improve the visual
processing of a stimulus that appears there (Hillyard et al., 2016;
Störmer, 2019).
FIGURE 14.4 Voluntary and Reflexive Attention Are Complementary
Attention helps us to search for
specific objects in a cluttered world
Another familiar way that we use attention is in visual search:
systematically scanning the world to locate a specific object among
many—your car in a parking lot, for example, or your friend’s face in
a crowd. If the sought-after item varies in just one key attribute, the
task can be pretty easy—searching for your red car among a bunch of
silver and black ones, for example. In a simple feature search like
this (FIGURE 14.5A), the sought-after item “pops out”
immediately, no matter how many distracters are present (Joseph et
al., 1997). Effortful voluntary attention isn’t needed.
FIGURE 14.5 Visual Search
More commonly, however, we must use a conjunction search—
searching for an item on the basis of a combination of two or more
features, such as size and color (FIGURE 14.5B and C). This can
become very difficult when, for example, you must simultaneously
consider the hair, nose, eyes, and smile of your friend’s face in a
crowd—and the bigger the crowd grows, the harder the task becomes
(unless your friend waves, thereby reflexively grabbing your
attention—phew!).
Experimental results (FIGURE 14.5D) confirm what you probably
already know intuitively: conjunction searches can be relatively slow
and laborious, involving a large cognitive effort. That’s because your
brain has to deal with what is known as the binding problem (A.
M. Treisman, 1996; Hitch et al., 2020), which is this: How do we
know which different stimulus features—colors, shapes, sounds, etc.,
each processed by different regions of the brain—are bound together
in a single object? And if those objects appear only infrequently—say,
weapons in luggage or tumors in X-rays—they may go undetected
with alarmingly high frequency, even by highly trained screeners
(Wolfe et al., 2013). Our natural tendency is to let our attention
wander between the various sources of stimuli in our environment.
Where’s Waldo? Puzzles like the “Where’s Waldo?” series are classic examples of
conjunction searches: you can find Waldo only if you search for the right combination of
striped sweater, hat, glasses, and slightly goofy expression. Imagine how much easier it
would be to find Waldo if everyone else on the beach were wearing green! In that case,
finding Waldo would be a feature search (the only person not wearing green) and he would
“pop out” in the picture … but that wouldn’t be any fun.
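The contrast between pop-out and conjunction search is often summarized as a difference in how search time scales with display size: roughly flat for a parallel feature search, but climbing with the number of items for a serial, self-terminating conjunction search. The Python sketch below illustrates that scaling with assumed timing values, not estimates from Treisman's experiments.

```python
def feature_search_rt_ms(set_size: int, base_ms: float = 450.0) -> float:
    """Parallel 'pop-out' search: RT is roughly flat no matter how many distracters."""
    return base_ms

def conjunction_search_rt_ms(set_size: int, base_ms: float = 450.0,
                             ms_per_item: float = 25.0) -> float:
    """Serial, self-terminating search: on average about half the items are inspected."""
    return base_ms + ms_per_item * set_size / 2

for n in (4, 8, 16, 32):
    print(f"set size {n:>2}: feature {feature_search_rt_ms(n):5.0f} ms, "
          f"conjunction {conjunction_search_rt_ms(n):5.0f} ms")
# As in Figure 14.5D, conjunction-search time climbs with set size while
# feature-search time stays essentially flat.
```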
To uncover finer details of attentional mechanisms, neuroscientists
adopt two complementary experimental perspectives on attention.
First, we can look at the consequences of attention in the
brain, asking how neural systems are affected by selective attention
to enhance the processing of stimuli. So, in these studies, the
question is, What are the neural targets of attention? Second, we can
try to uncover the mechanisms of attention, the brain regions that
produce and control attention, shifting it between different stimuli in
different sensory modalities. Here, the question is, What are the
neural sources of attention? In selecting experimental techniques to
address these different objectives, researchers must juggle the need
for good temporal resolution—the ability to track changes in the
brain that occur very quickly—with the need for excellent spatial
resolution, the ability to observe the detailed structure of the brain.
In general, electrophysiological approaches offer the speed (temporal
resolution) necessary to distinguish the consequences of attention
from the mechanisms that direct it, while brain-imaging techniques
like fMRI offer the anatomical detail (spatial resolution) to figure out
where these neural actions are taking place. This speed-versusaccuracy trade-off permeates the research that we discuss in the
following section.
How’s It Going?
1. How do you define attention? Distinguish between overt
and covert attention, giving examples of each. What is
the attentional spotlight?
2. What is inattentional blindness, and under what
circumstances might it occur?
3. How do early-selection effects of attention differ from
late-selection effects? What single aspect of a stimulus
may determine whether early or late selection occurs?
4. Summarize Posner’s symbolic cuing task. What did this
task reveal?
5. Compare and contrast voluntary attention and reflexive
attention, and identify the principal ways in which they
differ. What is inhibition of return, and does it relate to
voluntary attention or to reflexive attention?
6. While conducting a visual search for something, we
sometimes experience “pop-out.” What is it? Is pop-out
more closely associated with feature search or with
conjunction search, and how do those differ?
7. Distinguish between temporal resolution and spatial
resolution as they apply to brain-imaging techniques.
How are they related?
FOOD FOR THOUGHT
Self-help articles and books promise to help us get better at
multitasking. What does the science of attention tell us about
this goal?
14.2 Targets of Attention:
Attention Alters the Functioning of
Many Brain Regions
The Road Ahead
The next section turns to the impact of attention on brain
processes. Once you have finished studying this section,
you should be able to:
14.2.1 Describe how and why scientists use the
electrical activity of the brain to study attention.
14.2.2 Name and describe the main components seen
in event-related potentials (ERPs) as they relate
to auditory versus visual attention, and
particularly compare the auditory N1 effect and
the visual P1 effect.
14.2.3 Describe the electrophysiological phenomena
associated with visual search tasks.
14.2.4 Describe experimental evidence that selective
attention to stimuli enhances neural activity in
the brain regions processing those stimuli.
Recording electrical activity directly from the neurons of people’s
brains would be a way to obtain excellent temporal and excellent
spatial resolution, but of course we can’t just stick recording
electrodes directly into the brains of healthy participants. Instead, we
must find noninvasive ways to assess brain activity.
When many cortical neurons work together on a specific task, their
activity becomes synchronized to some degree. You might think this
would be easy to see in a standard EEG recording (i.e., an
electroencephalogram, where the brain’s electrical activity is
recorded from the scalp, as we described in Chapter 2), but it isn’t.
Because of variation in the firing of the neurons, not to mention
regional differences in the timing of brain activity, a real-time EEG
recorded during an attention task looks surprisingly random. So
instead, researchers record participants doing a task (FIGURE
14.6A) over and over again, and they average all the EEGs recorded
during those repeated trials (FIGURE 14.6B). Over enough trials,
the random variation averages out, and what’s left is the overall
electrical activity specifically associated with task performance
(FIGURE 14.6C). This averaged activity, called the event-related
potential (ERP) (Helfrich and Knight, 2019), tracks regional
changes in brain activity much faster than brain-imaging techniques
like fMRI do. For this reason, ERP has become the favorite tool of
neuroscientists studying moment-to-moment consequences of
attention in the brain.
FIGURE 14.6 Event-Related Potentials
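The signal-averaging logic behind the ERP is easy to demonstrate in a simulation: a small, time-locked waveform buried in much larger background EEG noise on every individual trial emerges cleanly once enough trials are averaged, because the random noise cancels while the time-locked component does not. This Python sketch uses made-up amplitudes and latencies (loosely P1- and N1-like), not real recording parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500                       # sampling rate in Hz (assumed)
t = np.arange(0, 0.6, 1 / fs)  # 600 ms epoch, time-locked to stimulus onset

def erp_template(t):
    """Assumed ERP shape: a small positive P1-like wave, then a larger negative N1-like wave."""
    p1 = 2.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.015 ** 2))   # ~100 ms, +2 microvolts
    n1 = -4.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.020 ** 2))  # ~150 ms, -4 microvolts
    return p1 + n1

def simulate_trial():
    """One epoch = the time-locked component plus much larger random background EEG."""
    return erp_template(t) + rng.normal(0, 20.0, size=t.size)

for n_trials in (1, 10, 100, 1000):
    avg = np.mean([simulate_trial() for _ in range(n_trials)], axis=0)
    residual_noise = np.std(avg - erp_template(t))
    print(f"{n_trials:>5} trials: residual noise ~ {residual_noise:4.1f} microvolts")
# The residual noise shrinks roughly with the square root of the number of trials,
# leaving the event-related potential visible in the average.
```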
Distinctive patterns of brain electrical
activity mark shifts of attention
Consciously directing your attention to a particular auditory stimulus
—for example, shadowing one ear, as we described earlier—has a
predictable effect on the ERP. Between about 100 and 150 ms after
the onset of a sound stimulus, two large waves are seen in the ERP
from the auditory cortex: an initial positive-going wave called P1,
immediately followed by a larger negative-going wave called N1 (see
Figure 14.6C). The N1 wave reflects an important aspect of auditory
attention: it is much larger following a stimulus that is being
attended to than it is for the very same stimulus presented at the
same ear but not attended to (Hillyard et al., 1973). Because the only
thing that changes between conditions is the participants’ attention
to the stimuli, this auditory N1 effect must be a result of selective
attention somehow acting on neural mechanisms to enhance
processing of that particular sound. Auditory attention may also
affect much later ERP components, such as the wave called P3 (or
auditory P300) (see Figure 14.6C). Changes in late-occurring
components like P3 are tricky to interpret, because they can be
associated with multiple different cognitive operations, ranging from
memory access to reactions to unexpected events (Wessel and Aron,
2017). Nevertheless, some researchers believe that P3 is especially
sensitive to higher-order processing of the stimulus—qualities like
the underlying meaning of the stimulus, unexpected language,
identity of the speaker, and other cognitive processes—in which case
the P3 effect provides an example of a late-selection effect of
attention. Researchers are debating whether P3 therefore is
(Mashour et al., 2020) or is not (Pitts et al., 2014) an
electrophysiological marker of consciousness.
What about effects of attention on ERPs from visual stimuli?
Because the neural systems involved in visual perception are
different from those involved in audition, voluntary visual attention
causes its own distinctive changes in the ERP. We can study these
visual effects by collecting ERP data over occipital cortex—the
primary visual area of the brain—while a participant performs a
symbolic cuing task, as depicted in FIGURE 14.7. On valid trials
(remember, this is when the target appears where expected, in the
location indicated by the cue, as in FIGURE 14.7A), electrodes over
occipital cortex show a substantial enhancement of the ERP
component P1, the positive wave that occurs about 70–100 ms after
stimulus onset, often carrying over into an enhancement of the N1
component immediately afterward (FIGURE 14.7C). A similar
effect on P1 is evident when attention is instead oriented reflexively
to a flash or sound (McDonald et al., 2005), but only when the
interval between the cue and the appearance of the target is brief. At
longer intervals, the P1 effect may actually be reduced, as an
electrophysiological manifestation of the inhibition of return (Tang
et al., 2021; but see also Satel et al., 2019). And for invalid trials
(FIGURE 14.7B), where attention is being directed elsewhere, the
visual P1 effect isn’t evident at all, even though the visual stimulus
is identical and in the same location as in the validly cued trials.
Interestingly, the P1 effect is evident only in visual tasks involving
manipulations of spatial attention (where is the target?)—not other
features, like color, orientation, or more complex properties that
would be characteristic of late-selection tasks.
FIGURE 14.7 ERP Changes in Voluntary Visual Attention
What happens to ERPs during visual search tasks, where we are
directing attention so as to find a particular target in an array and
ignore distracters? Under these conditions, a subcomponent of N2
(see Figure 14.6), called N2pc, is triggered at occipitotemporal sites
contralateral to the visual target (Luck and Hillyard, 1994; Hickey et
al., 2009).
The neural mechanisms of visual attention may be quite plastic. For
example, extensive experience with action video games, which
heavily rely on visual attention, is associated with neural changes (S.
Tanaka et al., 2013; Kowalczyk et al., 2018) and corresponding
enhancements of longer-latency ERP components (Mishra et al.,
2011; Palaus et al., 2017). Possible trade-offs for all this gaming,
however, may include impaired social and emotional function (no,
we’re not kidding: K. Bailey and West, 2013; Yan et al., 2021). And of
course, some people could be drawn to gaming simply because they
are already good at visuospatial processing.
Attention affects the activity of
neurons
PET and fMRI operate too slowly to track the rapid changes in brain
activity that occur in reaction-time tests. Instead, researchers have
used “sustained-attention tasks” to confirm that attention enhances
activity in brain regions that process key aspects of the target
stimulus. In these experiments, participants are asked to pay close
and lasting attention to one particular aspect of a complex stimulus—
just the faces in a complex scene, or changes in the pattern of
selected dots within an array, for example. Concurrent fMRI
generally confirms that attention somehow acts directly on neurons,
boosting the activity of those brain regions that process whichever
stimulus characteristic was targeted. So, in these particular
examples, enhancement is seen in the cortical fusiform face area
during attention to faces (O’Craven et al., 1999), or in the subcortical
superior colliculus and lateral geniculate (important for spatial
processing of visual stimuli) during attention to spatial arrays
(Schneider and Kastner, 2009).
In Chapter 7 we discussed the distinctive receptive fields of visual
neurons and how stimuli falling within these fields can excite or
inhibit the neurons, causing them to produce more or fewer action
potentials. In an important early study, Moran and Desimone (1985)
recorded the activity of individual neurons in visual cortex while
attention was shifted within each cortical cell’s receptive field. Using
a system of rewards, the researchers trained monkeys to covertly
attend to one spatial location or another while recordings were made
from single neurons in visual cortex. A display was presented that
included the cell’s most preferred stimulus, as well as an ineffective
stimulus (one that, by itself, did not affect the cell’s firing) a short
distance away but still within the cell’s receptive field. As long as
attention was covertly directed at the preferred stimulus, the cell
responded by producing many action potentials (FIGURE 14.8).
But when the monkey’s attention was shifted elsewhere within the
cell’s receptive field, even though the animal’s gaze had not shifted,
that same stimulus provoked far fewer action potentials from the
neuron. Only the shift in attention could account for this sort of
modulation of the cell’s excitability. Subsequent work has confirmed
that attention can also remold the receptive fields of neurons in a
variety of ways (Womelsdorf et al., 2008; Speed et al., 2020).
FIGURE 14.8 Effect of Selective Attention on the Activity of Single Visual Neurons
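One compact way to express the Moran and Desimone finding is as a change in a neuron's firing rate that depends only on where covert attention is directed, with the retinal stimulus held constant. The Python sketch below simulates Poisson spike counts under two assumed firing rates; the rates and the modulation index are illustrative, not values from the original recordings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed mean firing rates (spikes/s) for the same preferred stimulus inside the
# receptive field, depending only on where covert attention is directed.
RATE_ATTENDED = 40.0    # attention on the preferred stimulus (illustrative)
RATE_UNATTENDED = 12.0  # attention elsewhere within the receptive field (illustrative)
WINDOW_S = 0.5          # spike-counting window per trial
N_TRIALS = 200

def spike_counts(rate_hz: float) -> np.ndarray:
    """Simulated spike counts per trial, assuming Poisson firing."""
    return rng.poisson(rate_hz * WINDOW_S, size=N_TRIALS)

attended = spike_counts(RATE_ATTENDED)
unattended = spike_counts(RATE_UNATTENDED)
print(f"attended:   {attended.mean():5.1f} spikes per {WINDOW_S} s window")
print(f"unattended: {unattended.mean():5.1f} spikes per {WINDOW_S} s window")
modulation = (attended.mean() - unattended.mean()) / (attended.mean() + unattended.mean())
print(f"attentional modulation index: {modulation:.2f}")
```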
How’s It Going?
1. Define EEG and ERP, and explain how ERPs are
measured. Why is the ERP a favored technique in
cognitive neuroscience?
2. Match each of the following ERP phenomena—N1, P1,
P3, N2pc—with one of these terms: pop-out, early
selection, auditory attention, late selection, visual
attention, distractors.
3. Describe an experimental procedure that can
demonstrate the effects of selective attention on the
activity of an individual neuron.
FOOD FOR THOUGHT
How might psychotherapists exploit the ability of attention to
alter the activity of neurons?
14.3 Sources of Attention: A
Network of Brain Sites Creates
and Directs Attention
The Road Ahead
In the section that follows, we turn our attention to the
anatomy of attention: the network of cortical and
subcortical sites that govern voluntary and reflexive
attention. After studying this material, you should be able
to:
14.3.1 Discuss the functions of the principal
subcortical sites—the superior colliculus and
the pulvinar nucleus—that are associated with
shifts of visual attention.
14.3.2 Summarize the dorsal frontoparietal network
believed to govern voluntary attention,
illustrating this action with examples of
research.
14.3.3 Summarize the right temporoparietal network
associated with reflexive shifts of attention, and
again provide relevant research examples.
14.3.4 Describe some of the most striking forms of
attentional disorders and some medical
approaches to treat them.
Whether attention comes reflexively, from the bottom up, or is
controlled voluntarily, from the top down, it strongly affects neural
processing in the brain, thereby augmenting electrophysiological
activity. That doesn’t mean that the sources of the different forms of
attention are identical, however, or even that they are similar—just
that their consequences are somewhat comparable. So let’s turn to
some of the details of the brain mechanisms that are the source of
attention.
Two subcortical systems guide shifts
of attention
Subcortical structures can be difficult to study because, deep in the
center of the brain and skull, their activity cannot be measured with
EEG/ERP and other noninvasive techniques. Our knowledge of their
roles in attention thus comes mostly from work with animals.
Single-cell recordings from individual neurons have implicated the
superior colliculus, a midbrain structure (FIGURE 14.9), in
controlling the movement of the eyes toward objects of attention,
especially in overt forms of attention (Wurtz et al., 1982; Zhaoping,
2016). When the same eye movements are made but attention is
directed elsewhere, increased firing of the superior colliculus
neurons does not occur. And in people with lesions in just one of the
two superior colliculi, inhibition of return was reduced for visual
stimuli on the affected side (Sapir et al., 1999). So it seems that the
superior colliculus helps direct our gaze to attended objects, and it
ensures that we don’t return to them too soon after our gaze has
moved on. The superior colliculus may also help direct the covert
attentional spotlight: for example, monkeys in which the superior
colliculus has been temporarily inactivated lose the ability to use
selective attention cues (arrows, flashes, etc.) until the inactivation
ends (Krauzlis et al., 2013).
FIGURE 14.9 Subcortical Sites Implicated in Visual Attention
The pulvinar nucleus, or just pulvinar, making up the posterior
quarter of the human thalamus (see Figure 14.9), is heavily involved
in visual processing, with widespread interconnections between
lower visual pathways, the superior colliculus, and many cortical
areas. The pulvinar nucleus is important for the orienting and
shifting of attention. Monkeys whose pulvinar nuclei are inactivated
with drugs, and humans with strokes affecting the pulvinar nuclei,
may have great difficulty orienting covert attention toward visual
targets (D. L. Robinson and Petersen, 1992; Kraft et al., 2015). The
pulvinar nucleus is also needed to filter out and ignore distracting
stimuli while we’re engaged in covert attention tasks, and in general
it coordinates activity in larger-scale cortical networks according to
attentional demands (Saalmann et al., 2012; Green et al., 2017). In
humans, attention tasks with larger numbers of distracters induce
greater activation of the pulvinar nucleus (M. S. Buchsbaum et al.,
2006), confirming the importance of this nucleus for attention to key
stimuli.
Several cortical areas are crucial for
generating and directing attention
The extensive connections between subcortical mechanisms of
attention and the parietal lobes, along with observations from
clinical cases that we will discuss shortly, point to a special role of the
parietal lobes for attention control. Research indicates that two
integrated networks—dorsal frontoparietal and right
temporoparietal—work together to continually select and shift
between objects of interest, in coordination with subcortical
mechanisms of attention.
A dorsal frontoparietal network for voluntary
(“top-down”) control of attention
In monkeys, recordings from single cells show that a region called
the lateral intraparietal area, or just LIP, is crucial for voluntary
attention. LIP neurons increase their firing rate when attention—not
gaze—is directed to particular locations, and it doesn’t matter
whether the voluntary attention is being directed toward visual or
auditory targets (Shomstein and Gottlieb, 2016). So it’s the top-down
steering of the attentional spotlight that is important to LIP neurons,
not the sensory characteristics of the stimuli.
The human equivalent of this system is a region around the
intraparietal sulcus (IPS) (FIGURE 14.10) that behaves much
like the monkey LIP. For example, on tasks designed so that covert
attention can be sustained long enough to make fMRI images, IPS
activity is enhanced while participants are actively steering their
attention (Corbetta and Shulman, 1998; Hutchinson, 2019). And
when researchers used TMS (transcranial magnetic stimulation; see
Chapter 2) to temporarily inhibit the functioning of the IPS,
participants found it difficult to voluntarily shift their attention
between targets (Koch et al., 2005).
FIGURE 14.10 Cortical Regions Implicated in the Top-Level Control of Attention
People with damage to a frontal lobe region called the frontal eye
field (FEF) (see Figure 14.10) struggle to prevent their gaze from
being drawn away toward peripheral distracters while they’re
performing a voluntary attention task. Neurons of the FEF appear to
be crucial for ensuring that our gaze is directed among stimuli
according to cognitive goals rather than eye-catching characteristics
of the stimuli. In effect, the FEF ensures that cognitively controlled
top-down attention gets priority. It’s no surprise, then, that the FEF
is closely connected to the superior colliculus, which, as we discussed
earlier, is important for planned eye movements.
Functional brain imaging can be used to identify changes in neural
activity during top-down attentional processing tasks (e.g.,
Hopfinger et al., 2010). FIGURE 14.11 shows patterns of activation
while voluntary attention is shifting in response to a symbolic cue.
Enhanced activity is evident in the vicinity of the frontal eye fields
(dorsolateral frontal cortex) and, simultaneously, in the IPS.
Electrophysiological studies of this network indicate that the
attentional control–related activity is first seen in the frontal and
parietal components, followed by anticipatory activation of visual
cortex (if the expected stimulus is visual) or auditory cortex (if the
stimulus is a sound) (McDonald and Green, 2008; Green et al.,
2011). Taken together, these studies support the view that a dorsal
frontoparietal network provides top-down (voluntary) control of
attention.
FIGURE 14.11 The Frontoparietal Attention Network
A right temporoparietal network for reflexive
(“bottom-up”) shifts of attention
A second attention system, located at the border of the temporal and
parietal lobes of the right hemisphere—and named, a little
unimaginatively, the temporoparietal junction (TPJ) (see
Figure 14.10)—is involved in reflexive steering of attention toward
novel or unexpected stimuli (flashes, color changes, and so on).
Neuroimaging studies (FIGURE 14.12) confirm that if a relevant
stimulus suddenly appears in an unexpected location, there’s a spike
in activity of the TPJ of the right hemisphere, regardless of whether
the stimulus itself occurs in the left or right side of the world
(Igelström and Graziano, 2017; Krall et al., 2015). Interestingly, the
TPJ system receives direct input from the visual cortex, presumably
providing direct access for information about visual stimuli. The TPJ
also has strong connections with the ventral frontal cortex, a region
that is involved in working memory (see Chapter 13). Because
working memory tracks sensory inputs over short time frames, this
system may specialize in analyzing novelty by comparing present
stimuli with those of the recent past. Overall, the ventral TPJ system
seems to act as an alerting signal, or “circuit breaker,” overriding our
current attentional priority if something new and unexpected
happens.
FIGURE 14.12 The Right Temporoparietal System for Reflexive Attention
Ultimately, the dorsal and ventral attention-control networks need to
interact extensively and function as a single interactive system.
Researchers think that the more dorsal stream of processing is
responsible for voluntary attention, enhancing neural processing of
stimuli and interacting with the pulvinar nucleus and superior
colliculus to steer the attentional spotlight around. At the same time,
the right-sided temporoparietal system scans the environment for
novel salient stimuli (which then draw reflexive attention), rapidly
reassigning attention as interesting stimuli pop up. This basic model
seems to apply across sensory modalities, including both visual and
auditory stimuli (Brunetti et al., 2008; Walther et al., 2010).
Brain disorders can cause specific
impairments of attention
One way to learn about attention systems in the brain is to carefully
analyze the behavioral consequences of damage to specific regions of
the brain. Research on people with attentional disorders shows that
damage of cortical or subcortical attention mechanisms can
dramatically alter our ability to understand and interact with the
environment.
Right-hemisphere lesions
We’ve discussed evidence that the right hemisphere normally plays a
special role in attention (see Figure 14.12). Unfortunately, it is not
uncommon for people to suffer strokes or other types of brain
damage to this part of the brain. The result—hemispatial neglect
—is an extraordinary attention syndrome in which the person tends
to completely disregard the left side of the world. People and objects
to the left of the person’s midline may be completely ignored, as if
unseen, even though the person’s vision is otherwise normal.
Someone with neglect may fail to dress the left side of their body, will
not notice visitors if they approach from the left, and may fail to eat
the food on the left side of their dinner plate. If touched lightly on
both hands at the same moment, the person may notice only the
right-hand touch—a symptom called simultaneous extinction
(despite the name, this is unrelated to extinction in classical
conditioning that you may have learned about previously). People
with this problem may even deny ownership of their left arm or leg
—“My sister must’ve left that arm in my bed; wasn’t that an awful
thing to do?!”—despite normal sensory function and otherwise intact
intellectual capabilities.
It is as if the normally balanced competition for attention between
the two sides has become skewed and now the input from the right
side of the world overrules or extinguishes the input from the left.
Lesions in people with hemispatial neglect (FIGURE 14.13A) neatly
overlap the frontoparietal attention network that we discussed
earlier (shown again in FIGURE 14.13B). This overlap suggests
that hemispatial neglect is a disorder of attention itself, and not a
problem with processing spatial relationships, as was once thought
(Mesulam, 2000; Bartolomeo, 2021). With time, hemispatial neglect
can significantly improve (although simultaneous extinction often
persists), and targeted therapies may help. For example, researchers
are experimenting with the use of special prism glasses to shift vision
to the right during intense physical therapy, in order to recalibrate
the visual attention system (Barrett et al., 2012; O’Shea et al., 2017).
FIGURE 14.13 Brain Damage in Hemispatial Neglect
Diagnostic Test for Hemispatial Neglect When asked to duplicate drawings of common
symmetrical objects, people with hemispatial neglect ignore the left side of the model that
they’re copying.
Bilateral lesions
Parminder, whom we met at the beginning of the chapter, had
bilateral lesions of the parietal lobe regions that are implicated in the
attention network. Although it is rare, bilateral parietal damage can
result in a dramatic disorder called Bálint’s syndrome, made up of
three principal symptoms. First, people with Bálint’s syndrome have
great difficulty steering their visual gaze appropriately (a symptom
called oculomotor apraxia). Second, they are unable to accurately
reach for objects using visual guidance (optic ataxia). And third—the
most striking symptom—people with Bálint’s syndrome show a
profound restriction of attention, to the point that only one object or
feature can be consciously observed at any moment. This problem,
called simultagnosia, can be likened to an extreme narrowing of
the attentional spotlight, to the point that it can’t encompass more
than one object at a time. Hold up a comb or a pencil, and Parminder
has no trouble identifying the object. But hold up both the comb and
the pencil, and she can identify only one or the other. It’s as though
she is simply unable to consciously experience more than one visual
object at a time, despite having little or no loss of vision. Bálint’s
syndrome thus illustrates the coordination of attention and
awareness with mechanisms that orient us within our environment.
SIGNS & SYMPTOMS
Difficulty with Sustained Attention Can Sometimes Be
Relieved with Stimulants
At least 5 percent of all children are diagnosed with attention
deficit hyperactivity disorder (ADHD), characterized as
difficulty directing sustained attention to a task or activity, along
with a higher degree of impulsivity than in other children of the
same age. About three-fourths of those diagnosed are male.
Estimating the prevalence of ADHD (FIGURE 14.14) is
complicated and very controversial; for example, there is
significant variation in ADHD diagnosis and medication between
different (sometimes neighboring) states within the USA, raising
questions about the reliability of current diagnostic practices
(Fulton et al., 2009). Nevertheless, researchers have identified
several neurological changes associated with this disorder.
Affected children tend to have slightly reduced overall brain
volumes (about 3–4 percent smaller than in unaffected
children), with reductions especially evident in the cerebellum
and the frontal lobes, and effective treatments tend to enhance
frontal activity (Arnsten, 2006; Spencer et al., 2015). As we will
discuss later in this chapter, frontal lobe function is
important for myriad complex cognitive processes, including the
inhibition of impulsive behavior. (But
remember, correlational studies like these say nothing about
causation; we don’t know whether the brain differences cause,
or are caused by, the behavior.)
FIGURE 14.14 Prevalence of ADHD in the United States
In addition to structural changes, ADHD has been associated
with abnormalities in connectivity between brain regions, such
as within the default mode network, a neural system implicated
in conscious reflection that we will discuss shortly (Cao et al.,
2014). In fact, individual differences in the ability to sustain
attention can be predicted with high accuracy from the strength
of sets of brain connections (Rosenberg et al., 2016, 2017),
even in the resting state, when the individual is not working on
any particular task. Children with ADHD may have abnormal
activity levels in some specific brain systems, such as the
system that signals the rewarding aspects of activities
(Furukawa et al., 2014). Based on a model of ADHD that
implicates impairments in dopamine and norepinephrine
neurotransmission, some researchers advocate treating these
children with stimulant drugs like methylphenidate (Ritalin),
which inhibits the synaptic reuptake of dopamine and
norepinephrine, or with selective norepinephrine reuptake
inhibitors like atomoxetine (Strattera) (Schwartz and Correll,
2014). Stimulant treatment often improves the focus and
performance of children with ADHD within traditional school
settings, but this treatment remains controversial because of the
significant risk of side effects. Furthermore, stimulants improve
focus in everybody, not just people with ADHD, which raises
doubts about the orthodox view that impaired neurotransmission
is the sole cause of ADHD (del Campo et al., 2013). An
emerging alternative view is that what is diagnosed as ADHD
may simply be an extreme on a continuum of normal behavior.
Allowing kids diagnosed with ADHD to fidget and engage in
more intense physical activity effectively reduces their
symptoms and improves task performance (Hartanto et al.,
2016; Den Heijer et al., 2017).
How’s It Going?
1. Identify two subcortical structures that are implicated in
the control of attention. What functions do they perform?
2. What is the general name for the cortical system
responsible for conscious shifts of attention? What are
its components, and what happens when those
components are damaged?
3. Name the cortical system implicated in reflexive shifts of
attention. Which specific regions are part of this system,
and what happens if they are damaged?
FOOD FOR THOUGHT
Evidence suggests that the right TPJ provides a circuit breaker
to force attention to shift from a current target; in a modern
setting, how might that function be a help or a hindrance?
14.4 Consciousness, Thought,
and Decision-making Are
Mysterious Products of the Brain
The Road Ahead
The final part of the chapter looks at the most enigmatic,
top-level product of the brain—consciousness—and its
relationships with attention, reflection, and the executive
processes that direct thoughts and feelings. After reading
this section, you should be able to:
14.4.1 Provide a reasonable definition of
consciousness, and name the neural networks
and structures that, when activated, may play a
special role in coordinating conscious states.
14.4.2 Discuss the relationship between
consciousness as experienced by healthy
people and the diminished levels of
consciousness experienced by people in comas
and minimally conscious states.
14.4.3 Discuss the impediments to the scientific study
of consciousness, distinguishing between the
“easy” and “hard” problems of consciousness,
and how free will relates to the study of
consciousness.
14.4.4 Provide an overview of the organization and
function of the frontal lobes, and especially
prefrontal cortex, as they relate to high-level
cognition and executive functions.
There can be no denying the close relationship between attention
and consciousness; indeed, attention is the foundation on which
consciousness is built. Whenever we are conscious, we’re attending
to something, be it internal or external. William James (1890) tried
to capture the relationship between attention and consciousness
when he wrote, “My experience is what I agree to attend to. Only
those items which I notice shape my mind—without selective
interest, experience is an utter chaos.” We all experience
consciousness, so we know what it is, but that experience is so
personal, so subjective, that it’s difficult to come up with an objective
definition. Perhaps a reasonable attempt is to say that
consciousness is the state of being aware that we are conscious
and that we can perceive what’s going on in our minds and all
around us. However, “what’s going on in our minds and all around
us” covers an awful lot of ground. It includes our perception of time
passing, our sense of being aware, our recollection of events that
happened in the past, and our imaginings about what might happen
in the future. Add to that our belief that we employ free will to direct
our attention and make decisions, and we have a concept of immense
scope.
Which brain regions are active when
we are conscious?
Despite definitional complexities, consciousness is an active area of
neuroscience research. So far, there are numerous competing
theoretical models of consciousness, and a growing body of
neuroscientific data. One approach is to look for patterns of
synchronized activity in neural networks as people engage in
conscious, inwardly focused thought. Using fMRI, researchers have
identified a large circuit of brain regions—collectively called the
default mode network, consisting of parts of the frontal,
temporal, and parietal lobes—that seems to be selectively activated
when we are at our most introspective and reflective, and relatively
deactivated during behavior directed toward external goals (Raichle,
2015). In some ways, you could think of it as a daydream network, or
perhaps as a “making-sense” network that we use for reflecting on
daily events and integrating them with our personal memories and
knowledge of the world (Yeshurun et al., 2021). Researchers think
that dysfunction within the default mode network contributes to the
symptoms of various cognitive problems, such as ADHD, autism
spectrum disorder, schizophrenia, and dementia (Whitfield-Gabrieli
and Ford, 2012; Sato et al., 2015). Monkeys and lab rats have circuits
that resemble the human default mode network on structural and
functional grounds, raising the possibility that some nonhuman
species may likewise engage in self-reflection or other introspective
mental activity (Mantini et al., 2011; Sierakowiak et al., 2015). Some
of the basic elements of human consciousness that researchers agree
on are identified in TABLE 14.1, which also lists other species that
may have comparable capacities and experiences.
TABLE 14.1 Elements of Consciousness in Humans and Other Animals

Theory of mind
  Definition: Insight into the mental lives of others; understanding that other individuals act on their own unique beliefs, knowledge, and desires
  Other species: Only chimpanzees, so far

Mirror recognition
  Definition: Ability to recognize the self as depicted in a mirror
  Other species: All great apes; dolphins; magpies; some elephants

Imitation
  Definition: Ability to copy the actions of others; thought to be a stepping-stone to awareness and empathy
  Other species: Many species, including cephalopods like the octopus

Empathy and emotion
  Definition: Possession of complex emotions and the ability to imagine the feelings of other individuals
  Other species: Most mammals, ranging from primates and dolphins to hippos and rodents; most vertebrates able to experience pleasure (and other basic emotions)

Tool use
  Definition: Ability to employ found objects to achieve intermediate and/or ultimate goals
  Other species: Chimps and other primates; other mammals such as elephants, otters, and dolphins; birds such as crows and gulls

Language
  Definition: Use of a system of arbitrary symbols, with specific meanings and strict grammar, to convey concrete or abstract information to any other individual that has learned the same language
  Other species: Generally considered to be an exclusively human ability, with controversy over the extent to which the great apes can acquire language skills

Metacognition
  Definition: “Thinking about thinking”: the ability to consider the contents of one’s own thoughts and cognitions
  Other species: Nonhuman primates; dolphins
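At their simplest, the “synchronized activity” analyses mentioned above boil down to correlating the fMRI time courses of different brain regions: regions whose signals rise and fall together are treated as parts of one functional network. The short sketch below illustrates that idea with simulated data; the region names and signal values are invented for illustration and are not drawn from the studies cited here.

# A minimal sketch of a functional-connectivity analysis: pairwise correlations
# between simulated BOLD time series for three hypothetical default-mode regions.
# The data are random placeholders, not real measurements.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 200

# A shared slow fluctuation plus region-specific noise makes the regions "synchronized."
shared = rng.normal(size=n_timepoints)
regions = ["medial_prefrontal", "posterior_cingulate", "lateral_parietal"]
bold = {name: shared + 0.7 * rng.normal(size=n_timepoints) for name in regions}

# Regions whose time courses correlate strongly are grouped into one functional network.
for i, a in enumerate(regions):
    for b in regions[i + 1:]:
        r = np.corrcoef(bold[a], bold[b])[0, 1]
        print(f"{a} <-> {b}: r = {r:.2f}")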
A practical alternative approach has been to study consciousness by
focusing on people who lack it—people in comas or other states of
reduced consciousness. Maps of brain activity in such people—or
more precisely, maps of deactivated areas (Tsuchiya and Adolphs,
2007; Lemaire et al., 2021)—suggest that consciousness depends on
a specific frontoparietal network (FIGURE 14.15) that includes
much of the cortical attention network we’ve been discussing, along
with regions of medial frontal and cingulate cortices. Some
researchers have proposed that the claustrum (FIGURE 14.16)—a
slender sheet of neurons buried within the white matter of the
forebrain lateral to the basal ganglia—may play a role in the
experience of being conscious (Crick and Koch, 2005; Smith et al.,
2020), by virtue of its remarkable reciprocal connections with
virtually every area of cortex, suggestive of a network hub. A sudden
change in consciousness has been reported after electrical
stimulation of the claustrum in some studies (Koubeissi et al., 2014;
Quraishi et al., 2017) but not all (Bickel and Parvizi, 2019), and it
remains to be determined exactly what role the claustrum plays in
coordinating the various cognitive elements of the networks
underlying consciousness (Atilgan et al., 2022). Some consciousness
researchers argue that further searching for specific neural correlates
of consciousness is unlikely to solve the problem anyway, because
the enormous variability of conscious experiences means that large
and shifting collections of brain regions—sometimes almost the
entire brain—will participate (Seth, 2021).
FIGURE 14.15 The Unconscious Brain
FIGURE 14.16 Consciousness Controller?
So is clinical unconsciousness really the inverse of consciousness?
Perhaps it’s not that simple. For one thing, some people in a
persistent vegetative state (a very deep coma) can be instructed to
use two different forms of mental imagery to create distinct “yes” and
“no” patterns of activity on fMRI and then to use this mental activity
to answer questions (FIGURE 14.17) (Monti et al., 2010;
Fernández-Espejo and Owen, 2013). Most people in a vegetative
state don’t respond to questions and outside stimulation, but others
in minimally conscious states may have considerable cognitive
activity and awareness with little or no overt behavior to indicate
that they are aware of their surroundings (Gosseries et al., 2014;
Sinai et al., 2017). So, it’s increasingly clear that we can’t simply view
a coma as an exact inverse of what we experience as consciousness.
In any case, there seems to be more to consciousness than just being
awake, aware, and attending. How can we identify and study the
additional dimensions of consciousness?
FIGURE 14.17 Communication in “Unconscious” People
Some aspects of consciousness are
easier to study than others
Most of the activity of the central nervous system is unconscious.
Scientists call unconscious brain functions cognitively
impenetrable: they involve basic neural processing operations that
cannot be experienced through introspection. For example, we see
whole objects and hear whole words and can’t really imagine what
the primitive sensory precursors of those perceptions would feel like.
Sweet food tastes sweet, and we can’t mentally break it down any
further. But those simpler mechanisms, operating below the surface
of awareness, are the foundation that conscious experiences are built
on.
In principle, then, we might someday develop technology that would
let us directly reconstruct people’s conscious experience—read their
minds—by decoding the primitive neural activity and assembling
identifiable patterns from it. This is sometimes called the easy
problem of consciousness: understanding how particular
patterns of neural activity create specific conscious experiences. Of
course, it’s almost a joke to call this problem “easy,” but at least we
can fairly say that, someday, the necessary technology and
knowledge may be available to accomplish the task of eavesdropping
on large networks of neurons, in real time.
Present-day technology offers a glimpse of that possible future. For
example, if participants are repeatedly scanned while viewing several
distinctive scenes, a computer can eventually learn to identify which
of the scenes the participant is viewing on each trial, solely on the
basis of the pattern of brain activation (FIGURE 14.18A). Of
course, this outcome relies on having the participants repeatedly
view the same static images—hardly a normal state of consciousness.
A much more complex problem is to reconstruct conscious
experience from neural activity during a person’s first exposure to a
stimulus. This sort of reconstruction has been accomplished for
visual stimuli of varying complexity, including letters, shapes, and
uncluttered photos, and shows some promise for reconstructing
remembered scenes (FIGURE 14.18B) (Miyawaki et al., 2008;
Shen et al., 2019). We are still a very long way from directly
capturing the rich ongoing stream of conscious experience, but at
least it’s conceivable.
FIGURE 14.18 Easy and Hard Problems of Consciousness
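In computational terms, the scene-identification experiments described above amount to training a pattern classifier on voxel activations and testing whether it can label new trials correctly. The sketch below shows that general recipe on simulated data; the voxel counts, trial numbers, and choice of classifier are illustrative assumptions, not details taken from the studies cited.

# A minimal sketch of fMRI "decoding": learn to identify which of several scenes
# a participant is viewing from the spatial pattern of voxel activation.
# All data here are simulated; real studies use preprocessed BOLD patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_voxels, n_trials_per_scene, n_scenes = 500, 40, 3

# Each scene evokes its own (noisy) activation pattern across the voxels.
templates = rng.normal(size=(n_scenes, n_voxels))
X = np.vstack([
    template + rng.normal(scale=2.0, size=(n_trials_per_scene, n_voxels))
    for template in templates
])
y = np.repeat(np.arange(n_scenes), n_trials_per_scene)  # scene label for each trial

# Cross-validated accuracy: how often the classifier names the viewed scene correctly.
accuracy = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"Decoding accuracy: {accuracy:.2f} (chance = {1 / n_scenes:.2f})")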
Alas, there is also the hard problem of consciousness, and it
may prove impossible to crack. How can we understand the brain
processes that produce people’s subjective experiences of their
conscious perceptions? To use a simple example, everyone with
normal vision will agree that a ripe tomato is “red.” That’s the label
that children all learn to apply to the particular pattern of
information, entering consciousness from the color-processing areas
of visual cortex, that is provoked by looking at something like a
tomato. But that doesn’t mean that your friend’s internal personal
experience of “red” is the same as yours. These purely subjective
experiences of perceptions are referred to as qualia (singular quale).
Because they are subjective and impossible to communicate to others
—how can your friend know if “redness” feels the same in your mind
as it does in theirs?—qualia may prove impossible to study
(FIGURE 14.18C). At this point, anyway, we are unable to even
conceive of a technology that would make it possible.
Our subjective experience of consciousness is closely tied up with the
notion of free will: the belief that our conscious self is
unconstrained in deciding our actions and that, at any
given moment, given exactly the same circumstances, we could have
chosen to engage in a different behavior. After centuries of
argument, there’s still no agreement on whether we actually have
free will, but most people behave as though there are always options,
and in any event, there must be a neural substrate for the universal
feeling of having free will. When participants intend to act (push a
button, say), there is selective activation of motor areas, parietal
regions including the IPS (which we implicated earlier in top-down
attention), and dorsal prefrontal cortex (Zapparoli et al., 2017; Si et
al., 2021), suggesting that these regions are important for our
feelings of control over our behavior. However, the conscious
experience of intention may come relatively late in the process of
deciding what to do. Classic research (Libet, 1985), using EEG and a
precise timer, found that an EEG component signaling movement
preparation was evident 200 ms before participants consciously
decided to move. Although controversy initially surrounded this
work, later confirmatory research using fMRI (Soon et al., 2008)
(FIGURE 14.19) found, astonishingly, that brain activity associated
with making a decision was evident in fMRI scans as much as 5–10
seconds before participants were consciously aware of making a
choice!
FIGURE 14.19 Reading the Future
These results are sometimes interpreted as meaning we can have no
free will, because our brain decides to push a button before our
conscious self has decided. But even if you are not aware that your
brain has decided to push the left button, it was still your brain that
made that decision, not someone else’s brain. Your conscious self
may be late to join the party when making a decision, but that
doesn’t tell us whether your brain was truly free to choose the left
button rather than the right (or to take the blue pill versus the red
pill, Neo [Google it, youngsters]).
In any event, the earliest indications of the decision-making process
are found in prefrontal cortex. Such involvement of prefrontal
systems in most aspects of attention and consciousness, regardless of
sensory modality or emotional tone, suggests that the prefrontal
cortex is the main source of goal-driven behaviors (Badre and Nee,
2018), as we discuss next.
A flexible frontal system plans and
monitors our behavior
How do we translate our inner thoughts into behavior? Careful
analysis of impairments in people with localized brain damage, along
with functional imaging studies in healthy people, shows that a
network of anterior forebrain sites dominated by the frontal lobes—
but including several other cortical and subcortical sites—is crucial
for executive function, the suite of high-level cognitive processes
that control and organize lower-level cognitive functions in line with
our thoughts and feelings (Alvarez and Emory, 2006; Yuan and Raz,
2014). Some scientists liken executive function to a “supervisory
system” that analyzes important stimuli, weighs competing ideas and
hypotheses, and governs the creation of suitable “plans” for future
action by drawing on cognitive processes like working memory,
attention, feedback utilization, and so on. Executive function
involves multiple interrelated processes, especially (1) smooth task
switching between different cognitive operations, (2) continual
updating of the cognitive plan based on new information and the
contents of working memory, and (3) timely inhibition of thoughts
and behaviors that would compromise the plan (A. Diamond, 2013).
Additional high-level functions that researchers attribute to
executive function include behavioral sequencing, prioritizing of
actions, suppression of interfering inputs, and monitoring of ongoing
performance (Friedman and Robbins, 2022; Menon and D’Esposito,
2022). A closely related account proposes that the crucial function of
the frontal network is hierarchical cognitive control: the ability to
direct shorter-term actions while simultaneously keeping longer-term goals in mind (Koechlin et al., 2003; Badre and Nee, 2018).
Accordingly, a person with executive dysfunction due to frontal
lesions who is given a simple set of errands may be unable to
complete them without numerous false starts, backtracking, and
confusion (Shallice and Burgess, 1991; Rabinovici et al., 2015).
Several of the most widely studied tests of executive functions are
described in TABLE 14.2.
TABLE 14.2 Tests of Executive Functions

Wisconsin Card Sorting Test (WCST) (Weigl, 1941; Heaton et al., 1993)
  Procedure: Sort cards into piles on the basis of the number, color, or shape of symbols on the card face. Every 10 cards, discover and shift to a new sorting rule (see Figure 14.21).
  Scoring: Errors in sorting; perseveration in the old sorting rule after a rule change
  Functions sampled: Task switching and abstract reasoning

Controlled Oral Word Association Test (COWAT) (also known as the Verbal Fluency Task; Benton and Hamsher, 1976)
  Procedure: Say as many words as possible that start with a specific letter (F, A, or S) in 60 seconds.
  Scoring: Total number of unique words uttered for all three starting letters
  Functions sampled: Verbal fluency, updating, working memory

Stroop Test of Color-Word Interference (Stroop, 1935; MacLeod, 1991)
  Procedure: Read aloud as quickly as possible color names that are printed in the congruent ink color (e.g., the word BLUE printed in blue) or an incongruent ink color (e.g., the word BLUE printed in red).
  Scoring: Time and total errors
  Functions sampled: Response inhibition
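To make the logic of the WCST concrete, the toy simulation below contrasts a flexible sorter with one who perseverates on the old rule after each unannounced rule change. It is a simplified sketch for illustration only (the card structure, feedback, and scoring of the real test are richer than this), and every name in the code is invented.

# Toy illustration of the WCST logic: the sorting rule changes every 10 cards,
# and a "perseverating" player who keeps using the old rule piles up errors
# after each switch. Not the clinical administration or scoring procedure.
import random

DIMENSIONS = ["number", "color", "shape"]

def run_wcst(n_cards=60, switch_every=10, perseverative=True, seed=0):
    """Count sorting errors for a simulated player; highly simplified."""
    rng = random.Random(seed)
    true_rule = rng.choice(DIMENSIONS)
    player_rule = rng.choice(DIMENSIONS)
    errors = 0
    for card in range(n_cards):
        # The examiner silently changes the sorting rule every `switch_every` cards.
        if card > 0 and card % switch_every == 0:
            true_rule = rng.choice([d for d in DIMENSIONS if d != true_rule])
        if player_rule != true_rule:
            errors += 1
            # A flexible player adopts the new rule after an error (simplified:
            # real participants must discover it by trial and error);
            # a perseverating player keeps sorting by the old rule.
            if not perseverative:
                player_rule = true_rule
    return errors

print("Errors for a flexible sorter:     ", run_wcst(perseverative=False))
print("Errors for a perseverating sorter:", run_wcst(perseverative=True))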
We have touched on some frontal lobe functions in earlier chapters—
things like movement control, working memory, language,
psychopathology—but this mass of cortex also underlies other, more
mysterious intellectual characteristics. Perhaps it reflects a bit of
vanity about our species, but the large size of our frontal lobes—
about one-third of the entire cortical surface (FIGURE 14.20A)—
also led to the long-standing view that the frontal cortex is the seat of
intelligence and abstract thinking. The remarkable story of Phineas
Gage, one of the most famous case studies in the history of
neuroscience, underscores the subtlety and complexity of behaviors
governed by the frontal lobes. Like Gage, people with discrete frontal
lesions express various unusual emotional, motor, and cognitive
changes. Widespread frontal damage may be associated with a
persistent strange apathy, broken by bouts of euphoria (an exalted
sense of well-being). Ordinary social conventions are readily cast
aside by impulsive behavior. Concern for the past or the future may
be absent, and forgetfulness is shown in many tasks requiring
sustained attention. In fact, some people with frontal damage even
forget their own warnings to “remember.” However, standard IQ test
performance often shows only slight changes after prefrontal injury
or stroke.
FIGURE 14.20 The Prefrontal Cortex
On the basis of both structure and function, researchers distinguish
between several major divisions of the human frontal lobes. The
posterior portion of the frontal cortex includes motor and premotor
regions (see Chapter 5). The anterior portion, usually referred to as
prefrontal cortex, is immensely interconnected with the rest of the
brain (Fuster, 1990; Mega and Cummings, 1994). It was prefrontal
cortex that was surgically disrupted in frontal lobotomy—the
notorious, now-discredited treatment for psychiatric disorders that
we discussed in Chapter 12. As shown in FIGURE 14.20B,
neuroscientists subdivide prefrontal cortex into multiple specific
regions on both anatomical and functional grounds; these regions
include the dorsolateral, ventrolateral, ventromedial, and
orbitofrontal cortex (Catani, 2019).
Dorsolateral prefrontal cortex is closely associated with executive
control, as it is crucial for working memory (holding information in
mind while using it to solve problems) and task switching. People
with lesions that include the dorsolateral prefrontal cortex may thus
struggle with top-down conscious switching from one task to a new
one, as in the Wisconsin Card Sorting Test (FIGURE 14.21), and
tend to perseverate (continue beyond a reasonable degree) in any
activity (B. Milner, 1963; Alvarez and Emory, 2006). Similarly,
lesions in the frontal lobes may cause motor perseveration—
repeating a simple movement over and over again—despite
diminished overall levels of spontaneous motor activity. Along with
movement of the head and eyes, facial expression of emotions may
be greatly reduced. People with prefrontal lesions often have an
inability to plan future acts and use foresight, as in the famous case
of Phineas Gage, who survived an accident that extensively damaged
his orbitofrontal and ventromedial prefrontal cortex. Their social
skills may decline, especially the ability to inhibit inappropriate
behaviors, and they may be unable to stay focused on any but short-term projects. They may agonize over even simple decisions. Some of
the clinical features of damage to the subdivisions of prefrontal
cortex are summarized in TABLE 14.3.
FIGURE 14.21 The Wisconsin Card Sorting Test (WCST)
Phineas Gage
Phineas P. Gage was a sober, polite, and capable member of a rail-laying
crew, responsible for placing the charges used to blast rock from new rail beds. That’s Gage
on the left, holding a meter-long steel tamping rod. Perhaps the images on the right can
help you guess why there appears to be something wrong with the left side of his face. In a
horrific accident in 1848, a premature detonation blew that rod right through Gage’s skull,
on the trajectory shown in red, severely damaging both frontal lobes, especially in the
orbitofrontal regions. Amazingly, Gage could speak shortly after the accident, and he walked
up the stairs to a doctor’s office, although no one expected him to live (Macmillan and Lena,
2010). In fact, Gage survived another 12 years, but he was definitely a changed man, so
rude and aimless, and his powers of attention so badly impaired, “that his friends and
acquaintances said that he was ‘no longer Gage.’” The historical account of Gage’s case,
and anatomical reconstruction of his injury, are consistent with modern cases of people with
prefrontal damage that includes orbitofrontal cortex (Wallis, 2007; de Schotten et al., 2015).
TABLE 14.3 Regional Prefrontal Syndromes

Dorsolateral damage: dysexecutive syndrome
  Characteristics: Diminished judgment, planning, insight, and temporal organization; reduced cognitive focus; motor-programming deficits (possibly including aphasia and apraxia); diminished self-care

Orbitofrontal damage: disinhibited syndrome
  Characteristics: Stimulus-driven behavior; diminished social insight; distractibility; emotional lability

Mediofrontal damage: apathetic syndrome
  Characteristics: Diminished spontaneity; diminished verbal output; diminished motor behavior; urinary incontinence; lower-extremity weakness and sensory loss; diminished spontaneous prosody; increased response latency
In monkeys, distinct populations of orbitofrontal cortical neurons
become especially active when the animal has to make an uncertain
decision that may provide a reward (Matsumoto et al., 2022),
suggesting that orbitofrontal cortex helps control reward-directed
behaviors. In general, orbitofrontal cortex—and especially the
ventromedial prefrontal cortex—seems important for learning about
rewarding behaviors, and it is believed to play a special role in
anticipating the values of different choices, along with other
operations that guide decisions about how to behave (Hiser and
Koenigs, 2018; Knudsen and Wallis, 2022). In humans performing
tasks in which some stimuli have more reward value than others, the
level of activation in prefrontal cortex correlates with how rewarding
the stimulus is (Gottfried et al., 2003; Kokmotou et al., 2017). This
relationship seems to be a significant factor in gambling behavior
and, more generally, is important for our decision-making processes,
as we discuss next.
We make decisions using a frontal
network that weighs risk and benefit
The waiter has brought over the dessert trolley, and it’s decision
time: do you go with the certain delight of the chocolate cake, or do
you succumb to the glistening allure of the sticky toffee pudding? Or,
do you allow yourself only a cup of black coffee, for the sake of your
waistline? What happens in the brain when we make everyday
decisions?
In the lab, researchers usually evaluate decision-making by using
monetary rewards (instead of desserts, darn it) because money is
convenient: you can vary how much money is at stake, how great a
reward is offered, and so on, to accurately gauge how we really make
economic decisions. These studies show that most of us are very
averse to loss and risk: we are more sensitive to losing a certain
amount of money than we are to gaining that amount. In other
words, losing $20 makes us feel a lot worse than gaining $20 makes
us feel good. From a strictly logical point of view, the value of money,
whether lost or gained, should be exactly the same. Our tendency to
overemphasize loss is just one of several ways in which people fail to
act rationally in the marketplace.
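One standard way to formalize this asymmetry, borrowed from behavioral economics rather than from the chapter itself, is a value function in which losses are weighted more heavily than equivalent gains. The parameter values in the sketch below are conventional illustrative defaults, not estimates from the studies discussed here.

# Illustrative sketch of loss aversion: a prospect-theory-style value function in
# which a loss "feels" larger than a gain of the same size. Parameter values are
# common textbook defaults used only for illustration.
def subjective_value(x, alpha=0.88, lam=2.25):
    """Subjective value of a monetary gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

gain, loss = subjective_value(20), subjective_value(-20)
print(f"Gaining $20 feels like +{gain:.1f}; losing $20 feels like {loss:.1f}")

# A 50/50 gamble to win or lose $20 therefore has a negative expected subjective
# value, which is why most people turn such bets down.
print(f"Expected subjective value of the 50/50 bet: {(gain + loss) / 2:.1f}")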
Neuroeconomics is the study of brain mechanisms at work during
economic decision-making, and our attention to environmental
factors and our evaluation of rewards have a tremendous impact on these
decisions. In general, findings from human and animal research
suggest that two main systems underlie decision processes (Kable
and Glimcher, 2009). The first, consisting of the orbitofrontal and
ventromedial prefrontal cortex (including the anterior cingulate
cortex) plus the dopamine-based reward system of the brain (see
Chapter 3), is a valuation system, a network that ranks choices on
the basis of their perceived worth and potential reward (Clairis and
Pessiglione, 2022; Ballesta et al., 2020). Impressively, using an
optogenetic technique (see Chapter 2) to selectively activate neurons
that express dopamine receptor D2 in the nucleus accumbens—a
central forebrain component of the brain’s reward system—can
instantaneously turn a risk-preferring rat into a risk-averse rat
(Zalocusky et al., 2016)! Presumably, the activated cells cause the
valuation system to devalue the reward relative to risk; a similar
dopamine-dependent process appears to participate in human
monetary decisions too (Ojala et al., 2018).
The second system involves dorsolateral prefrontal cortex, dorsal
anterior cingulate cortex, and parietal regions (like the LIP or IPS
discussed earlier in the chapter), and it is thought to be a choice
system, sifting through the valuated alternatives and producing the
conscious decision.
Neuroeconomics research is confirming that the prefrontal cortex
normally inhibits impulsive decision-making as a way to avoid loss
(Tom et al., 2007; Muhlert and Lawrence, 2015). As people are faced
with more and more uncertainty, the prefrontal cortex becomes more
and more active (Hsu et al., 2005; Huettel et al., 2006), and the
dorsal cingulate cortex may improve decisions by delaying action
until full processing of a complex decision can be completed (Sheth
et al., 2012; Heilbronner and Hayden, 2016). Likewise, when people
have made wrong, costly decisions that they regret, thereby changing
their future decision-making—called the sunk cost fallacy—activity
increases in a network including the amygdala, cingulate, and
orbitofrontal cortex (FIGURE 14.22) (Coricelli et al., 2005; Haller
and Schwabe, 2014), reflecting the person’s growing aversion to loss.
FIGURE 14.22 A Costly Decision
We may never have a full understanding of the deepest secrets of
consciousness, or an answer to the question of whether we actually
make decisions based on the free will that our brain seems to
perceive as an element of consciousness (Haggard, 2017). But that
doesn’t prevent us from marveling that our consciousness has
become so self-aware that it can study itself to a high degree. Perhaps
it’s best to allow ourselves at least one or two mysteries, if only for
the sake of art. Would life seem as rich if we could predict other
people’s behavior, or even our own, with perfect accuracy?
How’s It Going?
1. Define consciousness (or at least try!).
2. Discuss unconsciousness in states like deep coma. Is
this unconsciousness the inverse of consciousness?
3. Contrast the easy and hard problems of consciousness,
giving examples of each. What are qualia?
4. Name the main subdivisions of the prefrontal cortex.
How do they differ in function?
5. What are some of the main symptoms of frontal lobe
lesions?
6. What are the two main neural systems that are thought
to operate in the process of decision-making, as
identified in neuroeconomics research?
FOOD FOR THOUGHT
Emerging technology—high-resolution cortical recording and AI-based image reconstruction, for example—may soon allow us to
record and play back the sensory content of people’s thoughts,
dreams, and memories. How will this affect our lives, both pro
and con?
RECOMMENDED READING
Buzsáki, G. (2021). The Brain from Inside Out. New York, NY:
Oxford University Press.
D’Esposito, M., and Grafman, J. H. (Eds.). (2019). The Frontal
Lobes (Handbook of Clinical Neurology, Volume 163).
Cambridge, MA: Elsevier.
Gazzaniga, M. S. (2018). The Consciousness Instinct:
Unraveling the Mystery of How the Brain Makes the Mind. New
York, NY: Farrar, Straus and Giroux.
Glimcher, P. W., and Fehr, E. (2013). Neuroeconomics: Decision
Making and the Brain (2nd ed.). San Diego, CA: Academic
Press.
Goldberg, E. (2017). Executive Functions in Health and
Disease. New York, NY: Academic Press.
Harley, T. A. (2021). The Science of Consciousness. Cambridge,
UK: Cambridge University Press.
Hopfinger, J. B., and Slotnick, S. (Eds.). (2021). The Cognitive
Neuroscience of Attention: Current Debates and Research.
Abingdon-on-Thames, UK: Routledge.
Koch, C. (2020). The Feeling of Life Itself: Why Consciousness
Is Widespread but Can’t Be Computed. Cambridge, MA: MIT
Press.
Laureys, S., and Tononi, G. (Eds.). (2015). The Neurology of
Consciousness: Cognitive Neuroscience and Neuropathology
(2nd ed.). New York, NY: Academic Press.
Nobre, K., and Kastner, S. (2018). The Oxford Handbook of
Attention. Oxford, UK: Oxford University Press.
Owen, A. (2017). Into the Gray Zone: A Neuroscientist Explores
the Border between Life and Death. New York, NY: Scribner.
Sapolsky, R. M. (2023). Determined: A Science of Life Without
Free Will. New York, NY: Penguin.
VISUAL SUMMARY
You should be able to relate each summary to the adjacent
illustration, including structures and processes. The online
version of this Visual Summary includes links to figures,
animations, and activities that will help you consolidate the
material.
Visual Summary Chapter 14
LIST OF KEY TERMS
attention
attentional bottleneck
attentional spotlight
attention deficit hyperactivity disorder (ADHD)
auditory N1 effect
binding problem
Bálint’s syndrome
cocktail party effect
cognitively impenetrable
conjunction search
consciousness
covert attention
default mode network
divided-attention tasks
easy problem of consciousness
event-related potential (ERP)
executive function
feature search
free will
frontal eye field (FEF)
hard problem of consciousness
hemispatial neglect
inattentional blindness
inhibition of return
intraparietal sulcus (IPS)
lateral intraparietal area
neuroeconomics
overt attention
P3 effect
perceptual load
peripheral spatial cuing
perseverate
prefrontal cortex
pulvinar nucleus
qualia
reaction time
reflexive attention
shadowing
simultagnosia
spatial resolution
superior colliculus
sustained-attention tasks
symbolic cuing
temporal resolution
temporoparietal junction (TPJ)
vigilance
visual P1 effect
voluntary attention