Chapter 5: Speech Perception
Study Questions
When you hear a word like raspberry, which parts of your brain underlie the hierarchical processing of that word's sound structure?
Extends from the dorsal surface of the STG down around its lateral surface and into the upper lip of the STS
What evidence suggests that the early cortical stages of speech perception involve not only the left hemisphere but also the right?
”Asymmetric sampling in time”
Left hemisphere is dominant for processing rapid auditory variation in the 20-80 ms range, which is ideal for registering and classifying fine-grained distinctions at the phonemic level
Right hemisphere is dominant for processing longer-duration auditory patterns in the 150-300 ms range, which is ideal for tracking speech input at the syllabic level
Why was the Dual Stream Model originally motivated by classic findings about double dissociations between comprehension and repetition?
Initial evidence for separate streams comes from neuropsychology, since brain-damaged patients exhibit double dissociation between, on the one hand, the ability to comprehend utterances and, on the other hand, the ability to repeat utterances or closely monitor their phonological makeup
In both cognitive and neural terms, how does the ventral stream contribute to your perception of a word like raspberry?
”What” pathway
From sound to meaning
Allows the listener to understand the conceptual content of utterances
Lexical interface
Combinatorial network
In both cognitive and neural terms, how does the dorsal stream contribute to your perception of a word like raspberry?
”How” pathway
From sound to action
Allows the listener to link speech perception with speech production
Sensorimotor interface
Articulatory network
Why has there been a great deal of controversy over whether the articulatory network facilitates speech perception?
Gregory Hickok believes that while the articulatory network might modulate the perception of speech in various ways, it is probably not a necessary resource for comprehension
Other researchers argue that the articulatory network does make a nontrivial functional contribution to receptive speech processing
Summary and Key Points
Hierarchically Organized Processing
The dorsal STG carries out fairly simple spectrotemporal analyses
The mid-to-posterior lateral STG represents the subphonemic features and feature combinations
The mid-to-posterior STS represents individual phonemes and the sequential phonological structures of whole words
Bottom-up triggered by acoustic stimuli
Top-down influenced by prior knowledge and expectations
Segmentation and identification of phonological structures that have different durations is facilitated by the entrainment of electrophysiological oscillations that have correspondingly different timescales
Bilaterally Organized Processing
Both hemispheres are recruited in different ways
”Asymmetric sampling in time"
Left hemisphere is dominant for processing rapid auditory variation in the 20-80 ms range, which is ideal for registering and classifying fine-grained distinctions at the phonemic level
Right hemisphere is dominant for processing longer-duration auditory patterns in the 150-300 ms range, which is ideal for tracking speech input at the syllabic level
Dual Stream Model
After the early cortical stages of speech perception have been completed, further processing proceeds along two separate pathways
Ventral stream: leads into brain regions that are involved in comprehending utterances
Dorsal stream: leads into brain regions that are involved in converting the auditory representations of words into matching articulatory codes
Where does initial evidence for separate streams come from?
Neuropsychology
Ventral Stream
”What” pathway
From sound to meaning
Allows the listener to understand the conceptual content of utterances
Functional-anatomical components
Lexical interface: relay station that maps the sound structures of words onto the corresponding semantic structures
Depends on the pMTG and pITG in both hemispheres (leftward bias)
Combinatorial network: a system for integrating the semantic and grammatical aspects of phrases and sentences
Depends on the lateral ATL (predominantly in the left hemisphere)
Dorsal Stream
”How” pathway
From sound to action
Allows the listener to link speech perception with speech production
Supports not only the overt imitation and repetition of heard utterances, but also covert auditory-verbal STM and some aspects of speech perception
Functional-anatomical components
Sensorimotor interface: relay station that maps the sound structures of words onto the corresponding motor representations
Depends on area Spt in the left hemisphere
Articulatory network: underlies the production of utterances
Depends on a variety of regions in the left posterior frontal lobe
Classic Wernicke-Lichtheim-Geschwind “House” Model
Two pathways project from the center for the “sound images” of words (A)
One pathway leads to the center for word meanings (B)
Another pathway leads to the center for speech production (M)
Early Cortical Stages of Speech Perception
Processing extends from the dorsal STG down around the mid-lateral STG and into the upper lip of the mid-posterior STS
Functional organization is both hierarchical and bilateral
Language has left hemisphere dominance but both hemispheres are important
Hierarchical Organization
Hypothetical neural network for processing the hierarchical organization of the “tonal scream” of the rhesus monkey
The circuitry for processing human speech may be similar, but scaled up in complexity
Top of Hierarchy
The upper neuron serves as a “tonal scream detector” by integrating the inputs from neurons T1 and T2
Middle of Hierarchy
Neurons T1 and T2 integrate the auditory features comprising the early (T1) and late (T2) phases of a tonal scream, specifically by firing only if all three of the appropriate lower-level FM neurons fire
Bottom of Hierarchy
The first three FM (“frequency modulated”) neurons detect the FM components of the early phase of a tonal scream, and the other three detect the FM components of the late phase
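The logic of this hierarchy can be made concrete with a small illustration. Below is a minimal sketch (my own toy Python example, not from the textbook) in which each higher-level unit fires only if all of its lower-level inputs fire, mirroring the FM → T1/T2 → "tonal scream detector" arrangement described above.
```python
# Toy sketch of the feedforward hierarchy described above: each higher-level unit
# fires only if ALL of its lower-level inputs fire (an AND-like conjunction).
# The six FM detectors and two phase integrators mirror the description; the
# feature values themselves are illustrative.

def and_unit(inputs):
    """A unit that fires (True) only if every one of its input units fired."""
    return all(inputs)

def tonal_scream_detector(fm_early, fm_late):
    """fm_early, fm_late: three booleans each, one per FM-component detector."""
    t1 = and_unit(fm_early)      # middle level: early phase of the call
    t2 = and_unit(fm_late)       # middle level: late phase of the call
    return and_unit([t1, t2])    # top level: whole-call detector

# All six FM detectors fire -> the top-level unit fires.
print(tonal_scream_detector([True, True, True], [True, True, True]))   # True
# One late-phase component is missing -> the call is not recognized.
print(tonal_scream_detector([True, True, True], [True, False, True]))  # False
```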
Putative Hierarchy in the Human Brain
The dorsal STG carries out fairly simple spectrotemporal analyses
The mid-posterior lateral STG represents subphonemic features and feature combinations
The mid-posterior STS represents individual phonemes and the sequential phonological structures of whole words
Mesgarani et al.’s (2014) ECoG Study: Hierarchical Organization
Presented subjects with hundreds of naturally spoken sentences
Found that individual electrodes over the left mid-lateral STG responded selectively to certain categories of speech sounds
Some electrodes responded more strongly to particular categories than to others (e.g., plosives, fricatives, nasals)
Some electrodes were also sensitive to a critical cue for distinguishing between vowels
Suggests that this cortical region represents subphonemic features and feature combinations in a topographic yet widely distributed fashion
Chang et al.’s (2010) ECoG Study: Hierarchical Organization
Created a continuum of 14 speech sounds by incrementally increasing F2
These sounds were perceived as belonging to three categories: /ba/, /da/, /ga/
The spatial topography of neural responses in the mid-lateral STG was highly distributed and complex for each sound category, but still distinct
The region was not yet recognizing the consonants as distinct phonemes, but it was distinguishing their different places of articulation
Further evidence that this region represents subphonemic features and feature combinations
Liebenthal et al.’s (2015) fMRI Study: Hierarchical Organization
Left mid/post-STS activation in contrast between phonetic vs. nonphonetic discrimination
Suggests that this region is where individual phonemes are explicitly recognized
Two sounds of the same category are hard to distinguish
Performance is good when the two sounds come from separate categories, even if they are only two steps apart on the continuum (see the toy sketch below)
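A toy numeric illustration of this categorical-perception pattern, assuming a hypothetical 14-step continuum and a made-up logistic identification function (the numbers are not from the study): within-category pairs receive nearly identical labels and are therefore predicted to be hard to discriminate, while pairs straddling the boundary are easy.
```python
# Toy illustration of categorical perception along a hypothetical 14-step continuum.
# Identification follows a made-up steep logistic; a pair's discriminability is
# approximated by how differently its two steps tend to be labeled.
import math

def p_category_b(step, boundary=7.5, slope=2.0):
    """Probability of labeling a continuum step as the second category (logistic)."""
    return 1.0 / (1.0 + math.exp(-slope * (step - boundary)))

def predicted_discrimination(step_i, step_j):
    """Crude prediction: same-label pairs are hard, differently labeled pairs are easy."""
    return abs(p_category_b(step_i) - p_category_b(step_j))

# Two steps apart but within one category: labels barely differ -> poor discrimination.
print(round(predicted_discrimination(2, 4), 3))
# Two steps apart but straddling the boundary: labels flip -> good discrimination.
print(round(predicted_discrimination(7, 9), 3))
```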
Okada & Hickok's (2006) fMRI Study: Hierarchical Organization
Found mid/post-STS activation when high-neighborhood-density words (i.e., those with many similar-sounding associates, like cat) were contrasted against low-neighborhood-density words (i.e., those with few similar-sounding associates, like spinach)
Suggests that this region represents the pool of phonological associates that are automatically and unconsciously activated in a bottom-up manner during the process of auditory word recognition
Phonological forms of words reside in these areas
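For concreteness, here is a small sketch of how phonological neighborhood density is typically computed: a word's neighbors are the lexicon entries that differ from it by a single phoneme substitution, addition, or deletion. The toy phonemic lexicon below is invented for illustration.
```python
# Sketch of computing phonological neighborhood density: a word's neighbors are the
# lexicon entries that differ from it by exactly one phoneme (substitution, addition,
# or deletion). The toy phonemic lexicon is invented, not a real database.

def differs_by_one_phoneme(a, b):
    """True if phoneme sequences a and b differ by exactly one edit."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    shorter, longer = sorted([a, b], key=len)
    return any(shorter == longer[:i] + longer[i + 1:] for i in range(len(longer)))

def neighborhood_density(word, lexicon):
    return sum(differs_by_one_phoneme(word, other) for other in lexicon if other != word)

lexicon = [("k", "ae", "t"), ("b", "ae", "t"), ("k", "ah", "t"), ("k", "ae", "p"),
           ("k", "ae", "t", "s"), ("s", "p", "ih", "n", "ih", "ch")]
print(neighborhood_density(("k", "ae", "t"), lexicon))                   # "cat": 4 neighbors
print(neighborhood_density(("s", "p", "ih", "n", "ih", "ch"), lexicon))  # "spinach": 0
```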
How is hierarchical organization not only bottom-up but also top-down?
Influenced by prior knowledge and expectations
Phase of certain electrophysiological oscillations becomes entrained to (i.e., is brought into alignment with) the phase of certain speech rhythms
Theta oscillations (4 Hz range) correlate roughly with syllables
Gamma oscillations (30-70 Hz range) correlate roughly with subphonemic features and whole phonemes (i.e., the finer-grained structures)
Increasing entrainment leads to better perception
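A rough sketch of how such entrainment can be quantified, assuming Python with numpy/scipy and purely synthetic signals: instantaneous phase is estimated with the Hilbert transform, and alignment is summarized by the phase-locking value (PLV).
```python
# Rough sketch of quantifying entrainment: compare the instantaneous phase of a
# synthetic "speech envelope" (4 Hz, roughly syllabic rate) with a synthetic
# "cortical oscillation" using the Hilbert transform and the phase-locking value.
import numpy as np
from scipy.signal import hilbert

fs = 1000                                   # sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)                 # 5 seconds of signal
envelope = np.sin(2 * np.pi * 4 * t)        # stand-in for the 4 Hz speech envelope

def plv(x, y):
    """Phase-locking value: 1 = perfectly entrained, near 0 = unrelated phases."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

entrained = np.sin(2 * np.pi * 4 * t + 0.5)   # same frequency, fixed phase lag
unrelated = np.sin(2 * np.pi * 4.7 * t)       # slightly different frequency, drifting phase

print(round(float(plv(envelope, entrained)), 2))   # close to 1
print(round(float(plv(envelope, unrelated)), 2))   # much lower
```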
Wada Procedures
Performed a word-picture matching task with phonemic, semantic, and unrelated distractors
Overall performance was quite good regardless of whether the left or the right hemisphere was anesthetized
Distractors
Target word: “bear”
Phonological: sounds like the target word ("pear")
Semantic: same category as the target word ("moose")
Unrelated: (“grape”)
“Word Deafness”
Disorder in which speech perception is impaired, despite intact hearing and sometimes even intact recognition of nonspeech sounds
Usually requires bilateral lesions to the middle and posterior portions of the STG and underlying white matter
While often sparing Heschl’s gyrus
Very rare - almost always requires two strokes (one in each hemisphere)
“Asymmetric Sampling in Time” (AST) Hypothesis
LH dominance for rapid changes of ~20-80 ms
Ideal for processing very brief aspects of speech (e.g., cues for place of articulation)
RH dominance for slower changes of ~150-300 ms
Better for processing longer aspects of speech (e.g., syllabic structure)
Support
At rest, LH dominance for gamma (40 Hz) oscillations, and RH dominance for theta (4 Hz) oscillations
During speech perception, gamma entrainment to short phonological features is stronger in the LH, whereas theta entrainment to longer ones is stronger in the RH
Arsenault & Buchsbaum’s (2015) fMRI Study: Bilateral Organization
Place distinctions, with rapidly changing cues, should rely on the left hemisphere
Manner distinctions, with slowly changing cues, should rely on the right hemisphere
Voicing distinctions should rely on both hemispheres (more right than left)
What is the direction for the processing pathway for hierarchical organization?
Bidirectional
A Double Dissociation Between Comprehension and Repetition: Initial Evidence for Separate Processing Streams
Impaired comprehension but intact repetition = transcortical sensory aphasia
Intact comprehension but impaired repetition = conduction aphasia or logopenic variant PPA
Impaired comprehension but intact phoneme discrimination (monitoring)
Intact comprehension but impaired phoneme discrimination (monitoring)
Phonological representations feed either the motor-articulatory system (for repetition) or the lexical-semantic system (for recognition/comprehension)
Ventral “What” Stream: From Sound to Meaning
Lexical interface (bilateral pMTG & pITS/pITG) maps phonological structures onto semantic structures
Combinatorial network (left aMTG & aITS): contributes to integration of sentence meaning
What 7 language tasks were tested at the electrode-pair sites over the left pMTG/pITG (lexical interface)?
Syllable discrimination
Word and sentence repetition
Sentence comprehension
Spontaneous speech
Oral reading of words
Oral reading of paragraphs
Oral object naming
At the 29 sites where stimulation induced transcortical sensory aphasia (TSA), which abilities were intact vs. impaired?
Intact syllable discrimination
Intact word and sentence repetition
Impaired sentence comprehension
Fluent but paraphasic production at 19 of 29 critical sites
What is suggested at the 10 sites where TSA was induced but with intact naming?
Impaired mapping of sound to meaning but normal mapping of meaning to sound
Dronkers et al. (2004) Study
Stroke patients with LH lesions
The most severe and pervasive comprehension deficits were associated with left pMTG lesions
Consistent with the Dual Stream Model's claim that this region serves as an intermediary between the phonological and semantic structures of words
Bonilha et al. (2017) Study
The posterior aspect of the left MTG possibly plays the role of bridging speech perception with subsequent comprehension
Main task: noun-picture matching
Control task: object-picture matching
Patients with widely distributed LH lesions
Split brain
ATL Localizer
Passive listening to sentences contrasted against passive listening to noun lists
Semantic Task
Detect semantic anomalies (e.g. the infant was spilling some carpet on the milk)
Syntactic Task
Detect syntactic anomalies (e.g., the plumber with the glasses were installing the sink)
Combinatorial Network in Left ATL
Only 20% of trials contained anomalies, and the correct sentences were identical across the two tasks
Only a few voxels in the ATL had a task preference (semantic over syntactic)
Narain et al. (2003) Study: Combinatorial Network in Left ATL
Subjects heard two types of intelligible speech (A & B) and two matching types of unintelligible speech (C & D)
Conjunction analysis based on the subtraction paradigm (i.e., regions activated by both intelligible-minus-unintelligible contrasts)
Davis and Johnsrude’s (2003) fMRI Study: Combinatorial Network in Left ATL
Normal speech
Partly distorted speech
Three degrees of distortion: low, medium, high
Three types of distortion: vocoded, segmented, embedded in background noise
Completely distorted speech
Signal-correlated noise (SCN)
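As an illustration of the vocoded-distortion manipulation, here is a rough noise-vocoding sketch (band edges, filter settings, and the demo signal are all invented): the signal is split into frequency bands, each band's amplitude envelope is extracted, and the envelope is used to modulate band-limited noise.
```python
# Rough sketch of noise-vocoding, one of the distortion types listed above: filter the
# signal into a few frequency bands, extract each band's amplitude envelope, and use it
# to modulate band-limited noise. Band edges and filter settings are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(signal, low, high, fs):
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

def noise_vocode(signal, fs, band_edges=(100, 500, 1500, 4000)):
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal)
    for low, high in zip(band_edges[:-1], band_edges[1:]):
        band = bandpass(signal, low, high, fs)
        env = np.abs(hilbert(band))                            # amplitude envelope of the band
        carrier = bandpass(rng.standard_normal(len(signal)), low, high, fs)
        out += env * carrier                                   # envelope-modulated noise
    return out

# Demo on a synthetic two-tone signal; real use would load recorded speech instead.
fs = 16000
t = np.arange(0, 1, 1 / fs)
speech_like = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
print(noise_vocode(speech_like, fs).shape)
```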
Abrams et al.’s (2013) fMRI Study: Combinatorial Network in Left ATL
Multivariate pattern analysis
Two conditions: normal speech and rotated speech
Two analyses: subtraction and MVPA
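A schematic sketch of why MVPA can detect what subtraction misses, using simulated data and assuming scikit-learn is available: the two conditions differ only in their fine-grained voxel pattern, so the average (subtraction-style) difference is near zero while cross-validated decoding succeeds.
```python
# Schematic contrast between a subtraction analysis (per-voxel mean difference) and
# MVPA (decoding the condition from the whole voxel pattern), on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50
labels = np.repeat([0, 1], n_trials // 2)        # 0 = normal speech, 1 = rotated speech

# Conditions differ in their fine-grained spatial pattern, not in overall signal level.
pattern = rng.standard_normal(n_voxels) * 0.5    # zero-mean condition-specific pattern
data = rng.standard_normal((n_trials, n_voxels))
data[labels == 1] += pattern

# Subtraction-style summary: average difference across all voxels (close to zero here).
univariate_diff = data[labels == 1].mean() - data[labels == 0].mean()
print("mean univariate difference:", round(float(univariate_diff), 3))

# MVPA: cross-validated decoding of condition from the multi-voxel pattern.
accuracy = cross_val_score(LogisticRegression(max_iter=1000), data, labels, cv=5)
print("decoding accuracy:", round(float(accuracy.mean()), 2))
```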
The Dorsal “How” Stream: From Sound to Action
Sensorimotor interface (left Spt): maps phonological structures onto motor representations
Articulatory network (left posterior frontal lobe): essential for speech production
Sensorimotor Interface in Left Area Spt
Area Spt resides in the posterior portion of the planum temporale (PT) which straddles at least four different cytoarchitectonic fields
Exhibits both auditory and motor-related response properties
Sensorimotor Interface in Left Area Spt Trial
3 seconds of auditory stimulation (speech or tune)
Followed by 15 seconds of covert rehearsal (speech or humming) of the heard stimulus
Followed by 3 seconds of auditory stimulation (speech or tune)
Followed by 15 seconds of rest
Area Spt was engaged during auditory stimulation and covert rehearsal
Mid STG was only engaged during auditory stimulation
What is area Spt a sensorimotor network for?
Just vocal sounds/actions
Conduction Aphasia: Sensorimotor Interface in Left Area Spt
Results from damage to the left supramarginal gyrus and inferiorly adjacent tissue, including area Spt
Comprehension is mostly intact because the lesion spares the ventral stream
Phonemic paraphasias are rampant (especially for long, complex, and low-frequency words) because the motor programming of words can no longer be guided, via area Spt, by the sound-based representations that specify the auditory “targets” of production
Repetition is severely impaired because it depends critically on area Spt, the neural relay station that translates what one hears into how to say it
Logopenic Progressive Aphasia: Sensorimotor Interface in Left Area Spt
Atrophy generally includes area Spt
Similar set of symptoms but milder
Auditory-Verbal Short-Term Memory (STM) aka the “Phonological Loop”
The perception of an utterance activates sound-based representations in the phonological network
These representations are kept “alive” by means of corresponding subvocal motor processes in the articulatory network
This reverberatory cycle is mediated by the sensorimotor interface
Thus, auditory–verbal STM depends on the dorsal stream
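A toy simulation of this reverberatory cycle, under the invented assumptions that the phonological trace decays exponentially with a time constant of about 2 seconds and that subvocal rehearsal fully refreshes it at a fixed interval.
```python
# Toy simulation of the phonological loop: a sound-based trace decays unless subvocal
# rehearsal periodically refreshes it. The decay time constant and rehearsal interval
# are invented values, chosen only to make the contrast visible.
import math

def trace_strength(t_seconds, rehearsal_interval=None, decay_tau=2.0):
    """Trace strength at time t, optionally refreshed every `rehearsal_interval` seconds."""
    if rehearsal_interval is None:
        elapsed = t_seconds                       # no rehearsal: decay from the start
    else:
        elapsed = t_seconds % rehearsal_interval  # time elapsed since the last refresh
    return math.exp(-elapsed / decay_tau)

for t in [0, 2, 4, 6]:
    print(f"{t} s  no rehearsal: {trace_strength(t):.2f}   "
          f"rehearsal every 1.5 s: {trace_strength(t, rehearsal_interval=1.5):.2f}")
```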
Although the articulatory network might modulate speech perception in various ways, why is it probably not a necessary resource for comprehension?
Large left frontal lesions severely impair production but not comprehension
Deactivating the entire left hemisphere leads to the same outcome
The failure to develop speech production does not preclude normal receptive speech development
Infants as young as 1-month-old exhibit sophisticated speech perception ability, including categorical perception, well before they acquire the ability to speak
Chapter 6: Speech Production
Study Questions
How do the first three stages of the Lemma Model - specifically, lexical concept retrieval, lemma retrieval, and phonological code retrieval - appear to be organized in the left temporal lobe, and what is some relevant evidence?
Lexical concept retrieval
Concrete nouns and action verbs reside in the ATLs bilaterally but with leftward bias
Resolution of conflicts between coactivated lexical concepts may depend on the left IFG
Lemma retrieval
Concrete nouns reside in the varied sectors of the left MTG and ITG
Action verbs reside in the left ventral prefrontal cortex (including the IFG)
Phonological code retrieval
Left posterior STG
What exactly are lemmas, and why does syllabification occur after phonological code retrieval?
Lemma: an abstract word node that not only intervenes between semantics and phonology, but also points to morphosyntactic features like grammatical category
Each retrieved segment in the phonological code spreads activation to all the syllabic gestures in which it partakes
Syllabification occurs only after the code is retrieved because syllable structure is computed "on the fly" in a context-sensitive way and can transcend morpheme and word boundaries, so it is not stored as part of the code itself
How does the syllabary in the Lemma Model relate to the Speech Sound Map in the DIVA Model?
Syllabary in the Lemma Model
The theory does not take into account evidence that area Spt serves as an auditory-motor interface that relays signals back and forth between the sound-based representations of words in the left posterior STG and the corresponding motor-based representations of the same words in the left frontal lobe
The theory predicts that cortical areas linked with phonetic encoding, such as left BA44, should be sensitive to syllable frequency, but several studies suggest that this may not be the case
Speech Sound Map in the DIVA Model
A repository of acquired speech sound representations (phonemes, syllables, or syllable sequences) that serve as the starting point for articulation, and that reside in the left ventral premotor cortex, which for present purposes is assumed to include not only the rostral portion of the ventral precentral gyrus, but also neighboring regions in the posterior IFG and anterior superior insula
What is apraxia of speech, how is it accommodated by both the Lemma Model and the DIVA Model, and why are its lesion correlates controversial?
Apraxia affects the speech motor planning operations that are called phonetic encoding in the Lemma Model
Some fMRI studies with healthy subjects, and some lesion studies with AOS patients, have implicated the anterior insula instead of the pIFG/vPMC
But these findings have been challenged, and there is ongoing debate about this whole issue
What is the purpose of the Initiation Map in the DIVA Model, and where does it reside in the brain?
A module that sends a “go” signal to prepared speech motor commands
Resides in the SMA bilaterally, with modulatory influences from the basal ganglia
In the DIVA Model, what kinds of processing interactions occur between the Auditory Target Map, the Auditory State Map, and the Auditory Error Map?
During speech production, the Speech Sound Map not only sends feedforward instructions to the Articulator Map, but also sends an anticipatory message to the Auditory Target Map, indicating how the utterance should ideally sound
The acoustic signals of the actual utterance are represented in the Auditory State Map, and those signals are matched against the target representation by the Auditory Error Map
If the utterance was produced correctly, the error map does not generate any output, but if it was produced incorrectly, the error map alerts the Feedback Control Map, which then sends corrective motor commands to the Articulator Map
What are the peripheral mechanisms of speech production, and what disorders are caused by damage to them?
Vocal tract representations in the primary motor cortex project to brainstem nuclei via the corticobulbar pathway
These brainstem nuclei contain 12 sets of cranial nerves that innervate the head and neck
The cells in the primary motor cortex are sometimes called upper motor neurons, and those constituting the cranial nerves are sometimes called lower motor neurons
The cranial nerves not only transmit outgoing motor signals to the organs comprising the vocal apparatus, but also carry incoming sensory signals from the very same organs
To what extent do the neural substrates of speech production appear to overlap with the neural substrates of speech perception?
Summary and Key Points
According to the lemma model, word production consists of what two main subsystems?
Lexical selection: identifying the most appropriate word in the mental lexicon
Form encoding: preparing the word’s articulatory shape
Lexical Concept Retrieval and Selection (Lexical Selection Processing Stage)
Converting the thought one wishes to express into the most appropriate lexical concept → semantic structure
Neural correlates are still poorly understood
Lexical concepts encoded by concrete nouns and action verbs may reside in the ATLs bilaterally but with leftward bias
Resolution of conflicts between coactivated lexical concepts may depend on the left IFG
Lemma Retrieval and Selection (Lexical Selection Processing Stage)
Mapping the selected lexical concept onto the corresponding lemma
When the target word is a concrete noun, this process may be subserved by varied sectors of the left MTG and ITG
When the target word is an action verb, the critical brain regions may be the left ventral prefrontal cortex (including the IFG) and the left inferior parietal lobule and mid/posterior MTG
The resolution of conflicts between coactivated lemmas may depend on the left IFG
Phonological Code Retrieval (Form Encoding Subsystem Processing Stage)
Incrementally spelling out the segmental phonological representation of the target word
Left posterior STG
Frequency effects occur
Syllabification (Form Encoding Subsystem Processing Stage)
Determining the syllabic structure of the target word
Incremental and context-sensitive process
Takes place “on the fly”
Sometimes transcends morpheme and word boundaries
Most likely implemented in left BA44
Phonetic Encoding (Form Encoding Subsystem Processing Stage)
Taking as input the syllabified target word and generating as output an articulatory score that specifies in a goal-oriented manner the speech motor tasks to be accomplished (e.g., lip closure)
Associated with several neighboring left frontal regions (BA44, anterior part of the ventral precentral gyrus, and the anterior superior insula)
If the preceding process of syllabification yields units that match precompiled programs in the syllabary, they are retrieved and concatenated
Otherwise, the phonetic form of the target word must be computed on the basis of the segmental and suprasegmental information accessed earlier
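The retrieve-or-compute logic of phonetic encoding can be sketched as a simple lookup with a fallback; the stored syllable entries below are hypothetical placeholders, not actual motor programs.
```python
# Sketch of the retrieve-or-compute logic in phonetic encoding: syllables that match
# precompiled programs in the syllabary are retrieved; anything else is assembled from
# its segments on the fly. The stored entries are hypothetical placeholders.

SYLLABARY = {
    "ber": "stored-motor-program<ber>",   # hypothetical high-frequency syllables
    "ri": "stored-motor-program<ri>",
}

def phonetic_encode(syllabified_word):
    articulatory_score = []
    for syllable in syllabified_word:
        if syllable in SYLLABARY:
            articulatory_score.append(SYLLABARY[syllable])                      # retrieve
        else:
            articulatory_score.append("computed<" + "+".join(syllable) + ">")   # assemble
    return articulatory_score

# e.g., a rough syllabification of "raspberry"
print(phonetic_encode(["raz", "ber", "ri"]))
```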
Lemma Model Challenges
The ATLs may lack the connectivity necessary to subserve lexical concepts
Based on data from brain-damaged patients who make semantic errors that are restricted to either oral output or written output, some researchers have questioned the plausibility of a representational level for amodal lemmas
The theory does not take into account evidence that area Spt serves as an auditory-motor interface that relays signals back and forth between the sound-based representations of words in the left posterior STG and the corresponding motor-based representations of the same words in the left frontal lobe
The theory predicts that cortical areas linked with phonetic encoding, such as left BA44, should be sensitive to syllable frequency, but several studies suggest that this may not be the case
The theory assumes that processing flows sequentially from stage to stage, but even though there is substantial support for this, there is also mounting evidence that multiple levels of the architecture (and, correspondingly, multiple regions of the brain) are often activated either simultaneously or at several different time points during spoken word production
Where does the DIVA Model begin?
Where the Lemma Model leaves off
Phonetic encoding and articulation
According to the DIVA Model, the architecture that supports speech motor control consists of what two main systems?
Feedforward control: activating motor commands for articulatory gestures and transmitting them to the vocal apparatus via subcortical nuclei
Feedback control: using auditory and somatosensory input from self-produced speech to recognize errors and send corrective instructions to the articulatory component
Feedforward Control System
Maps
Speech Sound
Articulator
Initiation
During speech production, activation of a particular unit in the Speech Sound Map engages the corresponding vocal tract motor commands in the Articulator Map, and those commands are released by a “go” signal from the Initiation Map.
Speech Sound Map
A repository of acquired speech sound representations (phonemes, syllables, or syllable sequences) that serve as the starting point for articulation, and that reside in the left ventral premotor cortex, which for present purposes is assumed to include not only the rostral portion of the ventral precentral gyrus, but also neighboring regions in the posterior IFG and anterior superior insula
Articulator Map
A set of nodes that represent the major components of the vocal tract (larynx, lips, jaw, and tongue), that specify the time series of movements necessary to produce a particular utterance, and that have a rough somatotopic organization in the ventral primary motor cortex bilaterally
Initiation Map
A module that sends a “go” signal to prepared speech motor commands, and that resides in the SMA bilaterally, with modulatory influences from the basal ganglia
Auditory Feedback Circuit
Maps
Auditory Target
Auditory State
Auditory Error
Feedback Control
During speech production, the Speech Sound Map not only sends feedforward instructions to the Articulator Map, but also sends an anticipatory message to the Auditory Target Map, indicating how the utterance should ideally sound
The acoustic signals of the actual utterance are represented in the Auditory State Map, and those signals are matched against the target representation by the Auditory Error Map
If the utterance was produced correctly, the error map does not generate any output, but if it was produced incorrectly, the error map alerts the Feedback Control Map, which then sends corrective motor commands to the Articulator Map
Auditory Target Map
A module that subserves auditory target representations (i.e., acoustic expectations) during speech production, and that resides in the posterior STG bilaterally
Auditory State Map
A module that represents speech-related auditory input (including self- generated utterances), and that resides in Heschl’s gyrus as well as anteriorly and posteriorly adjacent superior temporal areas bilaterally
Auditory Error Map
A module that computes discrepancies between the anticipated and the actual sounds of self-generated utterances, and that resides in the posterior STG and planum temporale bilaterally
Feedback Control Map
A module that adjusts or updates articulatory commands in light of sensory feedback, and that resides in the right posterior IFG
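The interaction among these maps amounts to an error-driven control loop, sketched below in schematic form (this is not the published DIVA mathematics; the formant values, tolerance, and gain are invented for illustration).
```python
# Schematic error-driven control loop for the auditory feedback circuit: the error map
# compares the auditory target with the sensed auditory state and, only when they
# diverge, drives a corrective command back toward the Articulator Map. The formant
# values, tolerance, and gain are invented; this is not the published DIVA mathematics.
import numpy as np

def auditory_error_map(target, state, tolerance=5.0):
    """Return the error signal, or None if the utterance matched its target closely enough."""
    error = target - state
    return error if np.linalg.norm(error) > tolerance else None

def feedback_control_map(error, gain=0.5):
    """Translate an auditory error into a corrective articulatory command."""
    return gain * error

target_formants = np.array([500.0, 1500.0])   # intended F1/F2 for the current sound (Hz)
actual_formants = np.array([540.0, 1450.0])   # what the Auditory State Map registered

error = auditory_error_map(target_formants, actual_formants)
if error is None:
    print("no correction needed")
else:
    print("corrective command toward the Articulator Map:", feedback_control_map(error))
```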
Somatosensory Feedback Circuit
Maps
Target
State
Error
Somatosensory Target Map
A module that subserves somatosensory target representations (i.e., tactile and proprioceptive expectations) during speech production, and occupies posterior parts of the ventral postcentral gyrus bilaterally
Somatosensory State Map
A module that processes tactile and proprioceptive feedback during speech production, and is represented in a somatotopic manner in anterior parts of the ventral postcentral gyrus bilaterally
Somatosensory Error Map
A module that computes discrepancies between the anticipated and the actual tactile and proprioceptive sensations associated with speech production, and that depends on several cortical regions bilaterally: posterior parts of the ventral postcentral gyrus; anterior parts of the adjacent supramarginal gyrus; and posterior parts of the insula
What anticipatory message does the Speech Sound Map send to the Somatosensory Target Map?
Indicates how the utterance should ideally feel in the vocal tract
The tactile and proprioceptive signals of the actual utterance are represented in the Somatosensory State Map, and those signals are matched against the target representation by the Somatosensory Error Map
If the utterance was produced correctly, the error map does not generate any output, but if it was produced incorrectly, the error map alerts the Feedback Control Map, which then sends corrective motor commands to the Articulator Map
How do vocal tract representations in the primary motor cortex get projected to the brainstem nuclei?
Via the corticobulbar pathway
How many cranial nerves do the brainstem nuclei contain?
12 sets that innervate the head and neck
Upper and lower motor neurons
Upper: cells in the primary motor cortex
Lower: cells constituting the cranial nerves
What types of signals do the cranial nerves transmit?
Outgoing motor signals to the organs comprising the vocal apparatus
Carry incoming sensory signals from the very same organs
Whole Lemma Model
Conceptual preparation (conceptual focusing, perspective-taking) → lexical concept → (lexical selection) → lemma → (form encoding: retrieving morphemic phonological codes) → phonological codes → (prosodification, syllabification) → phonological word → (phonetic encoding) → articulatory score
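The flow above can also be sketched as a chain of functions whose outputs mirror the named representations; every representation here is a stubbed placeholder rather than a real linguistic structure.
```python
# Schematic chain of functions mirroring the named stages and representations above.
# Every representation is a stubbed placeholder, not a real linguistic structure.

def conceptual_preparation(thought):
    # conceptual focusing and perspective-taking yield a lexical concept
    return "RASPBERRY"

def lexical_selection(lexical_concept):
    # the lexical concept maps onto a lemma plus morphosyntactic features
    return {"lemma": "raspberry", "category": "noun"}

def retrieve_phonological_code(lemma):
    # incremental segmental spell-out of the word form
    return ["r", "a", "z", "b", "e", "r", "i"]

def prosodify_and_syllabify(segments):
    # syllable structure is computed on the fly from the segments
    return ["raz", "ber", "ri"]

def phonetic_encoding(phonological_word):
    # the syllabified word becomes an articulatory score of speech motor tasks
    return ["gesture<" + syllable + ">" for syllable in phonological_word]

concept = conceptual_preparation("that red fruit")
lemma = lexical_selection(concept)
segments = retrieve_phonological_code(lemma["lemma"])
phonological_word = prosodify_and_syllabify(segments)
print(phonetic_encoding(phonological_word))
```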
What does the lemma model relay?
Sound structure and phonology
Lexical Concept
The thought one wishes to express, converted into a meaning encoded by a word of one's language
What influences lexical concept retrieval and selection?
Cross-linguistic variation: messages must be "tuned" to the target language
Perspective: subjective construal; considering the addressee's state of mind
What are lexical concepts connected to/bind together/conjoin?
All the multifarious semantic features that constitute the conceptual content of the words
How are several semantically related lexical concepts activated?
Typically co-activated in parallel with one (the target) ultimately being selected
Lemma
Abstract word node that not only intervenes between semantics and phonology but also points to various morphosyntactic features like grammatical category (e.g., noun), nominal gender/class (e.g., feminine), and verbal transitivity (e.g., intransitive)