midterm psych-ling
modularity
Language - highly modular system
Modularity
General
language is a different system from vision
Specific
within language there are different systems (phonology, syntax, morphology etc)
cognition
process that takes place in the human brain
computational theory of the mind
•Mind: Analogous to software
•Brain: Analogous to hardware
3 levels of understanding / Marr's three levels of analysis of information processing
Computational Level (What/Why)
What problem is being solved? → Understanding and producing language.
Why is this important? → Language enables communication, expression, and thought.
This level looks at the abstract goal of language processing
Theoretical linguistics
Algorithmic Level (How)
How does the brain process language?
This level describes the mental representations and rules used to understand and generate speech.
real time processing
Implementation Level (Physical Realization)
How does the brain (or a computer) physically carry out language processing?
In humans:
Brain regions like Broca’s area (speech production) and Wernicke’s area (language comprehension) are involved.
Neural circuits process phonemes, words, and sentences.
patterns of neural activation
relevant brain structures
the unification problem
how to relate the three levels of understanding
temporal and spatial summation
Temporal: the sum of several impulses from the same presynaptic neuron can create an action potential in the postsynaptic neuron
Spatial: impulses arriving from several neurons onto one neuron; their sum reaches the threshold for an action potential
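A minimal Python sketch (not from the course; the units, threshold and decay values are made up) of how sub-threshold impulses can add up to reach the firing threshold:

```python
# Minimal sketch of summation (hypothetical units; threshold and decay are invented).

THRESHOLD = 1.0   # arbitrary firing threshold
DECAY = 0.8       # fraction of the membrane potential left after each time step

def temporal_summation(epsps):
    """Impulses from ONE presynaptic neuron arriving close together in time."""
    potential = 0.0
    for amplitude in epsps:
        potential = potential * DECAY + amplitude   # old charge decays, new input adds
        if potential >= THRESHOLD:
            return True    # action potential fired
    return False

def spatial_summation(simultaneous_epsps):
    """Impulses from SEVERAL presynaptic neurons arriving at the same moment."""
    return sum(simultaneous_epsps) >= THRESHOLD

print(temporal_summation([0.5]))              # False: a single impulse stays sub-threshold
print(temporal_summation([0.5, 0.5, 0.5]))    # True: impulses close in time add up
print(spatial_summation([0.4, 0.4, 0.4]))     # True: simultaneous inputs add up
```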
refractory period
the amount of time it takes for an excitable membrane to be ready for a second stimulus once it returns to its resting state following an excitation.
fasciculus
a bundle of nerve fibers (axons) with a common destination; a neural pathway
glial cells
provide myelin
organize growth
absorb dead neurons
Cerebellum
•The cerebellum is involved in the coordination of voluntary motor movement, balance and equilibrium and muscle tone. It is located just above the brain stem and toward the back of the brain. It is relatively well protected from trauma compared to the frontal and temporal lobes and brain stem.
•Functions:
•Coordination of voluntary movement
•Balance and equilibrium
•Some memory for reflex motor acts.
Thalamus
•a routing station for all incoming sensory impulses except those of smell, transmitting them to higher (cerebral) nerve centers.
•connects various brain centers with others. Thus the thalamus is a major integrative complex, enabling sensory stimuli to evoke appropriate physical reactions as well as to affect emotions.
Neocortex
•Neocortex: The newer portion of the cerebral cortex that serves as the center of higher mental functions for humans.
•Contains some 100 billion cells, each with 1,000 to 10,000 synapses (connections), and has roughly 100 million meters of wiring, all packed into a structure the size and thickness of a formal dinner napkin.
Lobes and functions
Frontal lobe
decision making
problem-solving
planning
Parietal lobe
reception and processing of the sensory information
spatial processing
Occipital lobe
concerned with vision
Temporal lobe
memory
emotion
hearing
language
object recognition
areas in the brain
Brodmann's areas
Gestalt Laws
proximity
similarity
good continuation
common fate
David Marr
familiar objects are configurations of simple components
Biederman
Geons
Long-term potentiation (LTP)
Language processing requires the brain to form and strengthen neural connections to recognize words, interpret meaning, and produce speech. LTP plays a role in this by reinforcing frequently used neural pathways, making communication more efficient.
LTP occurs at the synapses, which are the junctions where two neurons communicate. When a neuron is repeatedly stimulated by another neuron, the synaptic strength between them increases, meaning the postsynaptic neuron (the receiving neuron) becomes more responsive to the same signal in the future.
Increase in Synaptic Strength: The post-synaptic neuron becomes more sensitive to the pre-synaptic neuron's signal.
Reduced Activation Threshold: With more AMPA receptors and other modifications, the post-synaptic neuron requires less input to reach the activation threshold, increasing the likelihood of firing.
Persistent Changes in Synaptic Structure: Over time, dendritic spines (small protrusions on the post-synaptic neuron) may grow or change shape, physically altering the synaptic structure, making communication between neurons more efficient.
SAME STIMULUS CAUSES MORE DEPOLARIZATION, BUT THAT PLATEAUS OVER TIME
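A toy sketch (illustrative only; the weight ceiling and learning rate are invented) of how repeated stimulation strengthens a synapse so that the same input yields a bigger response, which eventually plateaus:

```python
# Toy LTP sketch: repeated pairing increases synaptic weight, so the SAME presynaptic
# input produces a larger postsynaptic response, but the growth levels off.

W_MAX = 2.0          # ceiling on synaptic strength (invented)
RATE = 0.3           # how fast the synapse strengthens per pairing (invented)

weight = 0.5
stimulus = 1.0       # identical presynaptic input on every trial

for trial in range(1, 9):
    response = weight * stimulus                 # size of the postsynaptic depolarization
    weight += RATE * (W_MAX - weight)            # strengthening slows as it nears the ceiling
    print(f"trial {trial}: response = {response:.2f}")
```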
multi-modal perception
different systems interact - McGurk effect
human language capacity
translating acoustic words into meaning and back
Language is domain specific
specific language impairment
idiot savant
Williams syndrome
brain imaging research
aphasia
memory trace
Definition:
A memory trace (engram) is the neural representation of a memory, stored through physical and chemical changes in the brain.
Key Processes:
Encoding – Sensory input is processed and stored in neural circuits (hippocampus, cortex).
Storage – Strengthened through Long-Term Potentiation (LTP); memories move from short-term (hippocampus) to long-term (cortex).
Retrieval – The stored memory trace is reactivated when recalling information.
How Memory Traces Are Strengthened:
✅ Repetition & Practice – Strengthens neural connections.
✅ Association – Linking new info to known concepts enhances retention.
✅ Emotion – Strong emotional events create deeper memory traces.
✅ Sleep & Consolidation – Reinforces learning and memory stability.
Forgetting & Weakening of Memory Traces:
❌ Lack of Use – "Use it or lose it" (synapses weaken over time).
❌ Interference – New learning can overwrite old memories.
❌ Brain Damage – Conditions like Alzheimer’s disrupt memory traces.
Nativist view
Humans are innately predisposed to acquire language, due to a genetic program
Empiricist view
Tabula rasa - nurture, learning comes from experience
3 problems for learning from observation
too many encodings
False encodings
Abstract meanings
Universal Grammar Hypothesis
All languages have the same basic, core properties
These properties are innately available to children
Language-specific input (Polish vs Dutch)
Dissociation between language and general intelligence
•Intact general intelligence - language impaired:
–Aphasia patients (later in this course)
–SLI, specific language impairment (later in this course)
•Intact language – cognitive retardation:
–Williams syndrome (later in this course)
–savant syndrome (later in this course)
Critical period
If a child does not acquire a first language by approximately the age of puberty, they will never be able to acquire it as a mother tongue.
Reason: Brain loses its plasticity with age
Evidence: Multiple studies with immigrants and my personal experience…
Language is NOT imitation, why?
The phrase "Language learning is NOT imitation" suggests that language acquisition is more than just copying words and sentences. It highlights that:
Children Create New Sentences
If language learning were purely imitation, children would only repeat what they hear.
However, they often form new, grammatically correct sentences they’ve never heard before.
Errors Show Rule-Based Learning
Children make predictable errors like "I goed to the park" instead of "I went".
This shows they are applying learned grammar rules, not just imitating adults.
Chomsky’s Universal Grammar
Noam Chomsky argued that humans have an innate ability for language learning.
This explains why children can acquire complex structures without direct teaching.
Context, Meaning & Interaction Matter
Language learning involves understanding meaning and context, not just repeating words.
Social interaction, problem-solving, and cognitive processing play a key role.
Poverty of stimulus
Positive and negative evidence
1. Positive Evidence (Exposure to Correct Forms)
Definition: Positive evidence refers to the correct language input that learners hear or read.
How it works: It provides examples of what is possible in the language, helping learners identify correct structures.
Examples:
A child hears: “She is running fast” → Learns correct verb usage.
A language learner sees: "I have eaten breakfast." → Learns correct past participle structure.
Key point: Positive evidence helps learners form hypotheses about how the language works, but it does not directly tell them what is incorrect.
2. Negative Evidence (Indication of Errors)
Definition: Negative evidence provides learners with information about what is incorrect in a language.
Types of Negative Evidence:
Explicit Correction (Direct Feedback)
Someone corrects an error directly.
Example:
Child: "He goed to the park."
Parent: "No, we say ‘He went to the park.’"
Implicit Correction (Indirect Cues) – Also called Reformulation or Recasting
The error is not explicitly corrected, but the correct form is modeled.
Example:
Child: "He goed to the park."
Parent: "Oh yes, he went to the park!" (Without directly saying it was wrong)
positive: example of overt approval (you hypothesize, someone provides evidence)
negative: example that does not fit your theory (you assume SVO structure but then you hear John Mary likes), overt disapproval
Statistical learning
Child is innately sensitive to the statistical regularities in the input (not only linguistic)
The child is sensitive to how often (on average) it is a sunny day, how often (on average) a visual image of a dog is accompanied by a sound string /DOG/, etc
Sensitivity to information
Information comes from things / events that are not typical (a sudden appearance of a kangaroo in this class, an unusual co-occurrence of consonants, etc.)
A learner is particularly sensitive to such unusual events and interprets them as meaningful.
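A sketch of the kind of statistic such a learner could track: transitional probabilities between adjacent syllables, plus the surprisal (information) of each transition. The syllable stream below is invented for illustration, loosely in the spirit of Saffran-style segmentation studies:

```python
# Low-probability transitions are the informative, "unusual" events (e.g. word boundaries).

import math
from collections import Counter

stream = "bi da ku pa do ti bi da ku go la bu pa do ti bi da ku".split()

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transitional_probability(a, b):
    """P(b | a): how often syllable a is followed by syllable b."""
    return pair_counts[(a, b)] / first_counts[a]

def surprisal(a, b):
    """Less expected transitions carry more information (in bits)."""
    return -math.log2(transitional_probability(a, b))

print(transitional_probability("bi", "da"), surprisal("bi", "da"))  # 1.0, 0.0 bits: word-internal
print(transitional_probability("ku", "pa"), surprisal("ku", "pa"))  # 0.5, 1.0 bits: across a boundary
```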
psycholinguistic experimentation: categorization
off-line
Usually interested in speakers’ knowledge of certain rules, or the availability of a correct interpretation, or ability to process structures in general.
It is not concerned with how difficult it is, or how fast/slow the underlying process is.
Often used with children and brain damaged patients.
repetition task
Can experimental subjects repeat the sentences equally well?
Do they replace the initial structure?
picture selection task
Often used to see if the subjects get the right interpretation.
Truth Value Judgment Task (TVJT)
you show one picture at a time and then ask if it's true or false
Kermit the frog says sentences, a child gives him ice cream when he’s right, shoe when wrong
Sentence completion Task
used to see if subject has access to a particular morpheme
This bear walks. This bear …
on-line
Interested in the time course of a certain process
Sometimes interested in the localization – in brain imaging
Used to see when the reaction takes place (e.g. how long it takes to detect anomaly)
CMLD (cross-modal lexical decision)
Cross-modal: Visual and auditory presentation
Lexical decision: Is something a word?
reaction time in comprehension (primary and secondary task)
common pool of resources for both the primary and secondary task - if one takes too much, there isn't enough left for the other
Comparison 1st vs 2nd language acquisition
Broca's area and syntactic complexity (slide 86, class 5)
Brain and language research methods
Static and dynamic
Static Methods
Brain tumors
Penetrating wounds
Aphasia
all identify the relevance of a particular (damaged) area for a particular (linguistic) function
advantages: well-understood
disadvantages: we can't compare with the same patient prior to the lesion; the lesion is much larger than the area we're interested in
CT (CAT) scan - computed tomography
CT scan
Computed tomography
3D image
The X-ray beam passes through the head
cross-sectional images
only structure
Advantages:
very common, cheap, easily available
Disadvantages:
doesn’t show the activity, only a still image
Dynamic methods
Haemodynamic:
PET Scan
fMRI
OT
Electrophysiological
EEG
MEG
EEG
Electroencephalography
Electrical sensors placed on the scalp measuring the electrical activity (firing) of neurons
ERP - event related potential
Advantages:
noninvasive
Excellent TEMPORAL RESOLUTION
Disadvantages:
false information from echoes
bad spatial resolution
ERPs
ELAN - early left anterior negativity - first signal of early language access, ~160 ms
P200 - Linked to phonetic processing and early word recognition.
N400 - Negative, Linked to meaning (semantics) and expectation violations.
If a word doesn’t fit the expected meaning, your brain says: “Wait, that doesn’t make sense!” → N400 appears.
“She spread the butter on the bread.” → Low N400
“She spread the butter on the socks.” → High N400
P600 - Related to syntax (grammar) processing and reanalysis.
If a sentence has weird grammar or structural ambiguity, the brain struggles → P600 appears.
"The girl enjoys the movie." → Low P600 (correct grammar).
"The girl enjoy the movie." → High P600 (grammatical error detected).
Garden Path Sentences - Reanalysis Needed
Attributes:
polarity - (N vs P)
latency (100 vs 400ms)
distribution over the scalp
iEEG
intracerebral EEG
very high both temporal and spatial resolution
used only for strictly clinical purposes
only 5 to 9 electrodes in the region
MEG
Magnetoencephalography
Neurons communicate using electrical signals → these signals create tiny magnetic fields
The key principle: Whenever an electric current flows (neuronal activity), a magnetic field is generated.
MEG uses highly sensitive sensors to detect these tiny magnetic signals.
HIGH TEMPORAL RESOLUTION (not BOLD - blood oxygen level-dependent)
GOOD SPATIAL RESOLUTION (number 3 on the list)
Advantages
noninvasive and silent
Disadvantages
Expensive
difficult to maintain
requires isolation from noise, vibration and magnetic fields
PET
Positron Emission Tomography
A radioactive glucose tracer is injected, as the brain uses glucose in its activity. As the tracer breaks down it emits positrons that collide with electrons, producing gamma rays that are picked up by the PET scanner
Regions that are more active consume more glucose
More activity = brighter colors (red, yellow)
Less activity = darker colors (blue, purple)
Reasonable spatial resolution (2nd place)
Poor temporal resolution (the worst of the dynamic methods)
tracks NEUROTRANSMITTERS
Disadvantages
it’s literally a radioactive injection
New experimental conditions can be introduced only once every 10-15 minutes
Subjects cannot be given mixed stimuli
MRI
Magnetic resonance imaging
A huge magnetic field is introduced, forcing the hydrogen atoms in the body to align in the same direction; then a radio pulse is sent, knocking them out of that alignment. When the pulse stops, the hydrogen atoms release a radio signal that is detected by the machine
Greater detail than a CT scan
not dynamic!
BOLD imaging
Blood oxygen level dependent
fMRI
The brain sends oxygen-rich blood to the regions working hardest.
Example: If you’re reading, Wernicke’s area (language comprehension) will show increased oxygen use.
Oxygenated blood = Stronger MRI signal
Deoxygenated blood = Weaker MRI signal
Advantages:
safe, noninvasive
no special prep
number 1! on SPATIAL RESOLUTION list
Disadvantages
expensive
cannot be used with patients with metallic devices
cannot be used with uncooperative patients
claustrophobic
loud
TEMPORAL RESOLUTION - slower (seconds) but more detailed, number 4/5
Block design vs odd-ball design
Both are experimental paradigms used in fMRI and ERP studies to measure brain activity under different conditions.
Block design: stimuli are grouped into blocks (e.g., 30 seconds of one task, then 30 seconds of another).
Odd-ball design: rare, unexpected stimuli (the "oddballs") are mixed into a stream of frequent stimuli.
Block example:
Seq. 1: wrote letters, ate apples, sang songs, etc.
Seq. 2: wrote sang, ate rang, bring swam, etc.
Odd-ball example:
wrote letters, ate apples, bring swam, sang songs
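A small sketch of how the two paradigms order stimuli, reusing the items from the examples above (the grouping and shuffling logic is only illustrative, not an actual experiment script):

```python
# Block design vs odd-ball design: same items, different ordering logic.

import random

standards = ["wrote letters", "ate apples", "sang songs"]    # frequent / well-formed items
oddballs  = ["wrote sang", "ate rang", "bring swam"]         # rare / violation items

# Block design: whole blocks of one condition, then whole blocks of the other.
block_sequence = standards * 3 + oddballs * 3

# Odd-ball design: a few rare oddballs scattered inside a stream of standards.
oddball_sequence = standards * 5 + [oddballs[0]]
random.shuffle(oddball_sequence)

print(block_sequence)
print(oddball_sequence)
```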
OT
Optical Topography
BOLD-based: infrared light diffuses differently through oxygenated and deoxygenated blood
Moderate/slow TEMPORAL RESOLUTION (like every BOLD-based method)
Moderate 4/5 SPATIAL RESOLUTION (only works near the surface)
Advantages:
safe
silent
portable
can be used with babies
Disadvantages:
only cortex/near the surface
main properties of language
Creativity
Structure
Meaning
Reference
Interpersonality
creativity
Language allows us to generate infinite new sentences from a limited set of words & rules.
You’ve probably never seen the sentence "Unicorns bake cookies on Mars" before, but you immediately understand it!
structure
Language follows systematic rules for combining sounds, words, and sentences.
English has word order rules: "The cat chased the dog" ≠ "The dog chased the cat" (same words, different meaning).
Meaning
Words and sentences carry meaning, allowing us to convey thoughts, emotions, and information.
"Love", "run", and "freedom" have meanings that exist beyond just their sound or letters.
Reference
Words stand for things in the real or imaginary world, even if they aren’t present.
The word "apple" 🍏 refers to an actual apple, even if none are in sight. We can also talk about unicorns, which don’t exist!
Interpersonality
Language is used in social interactions to express ideas, influence others, and build relationships.
Saying "Can you pass the salt?" isn’t just about salt—it’s a polite request in a conversation.
Phonetics
Speech is a translation of physical stimuli (acoustic waves) into something cognitively interpretable.
Phonetics deals with the human capacity to process sound, to separate it from the background noise, and to parse it into units interpretable by the next system – phonology.
Phonetics – intermediate position between physics and linguistics.
Variability of the acoustic wave (male vs female)
Separation from the background
Language-specific (what's relevant for one language doesn't have to be relevant for another)
Parameters: place and manner of articulation
Phonology
unit: phoneme
First purely linguistically meaningful level (distinctive phonemes worm vs warm)
Morphology
Morpheme - the smallest meaningful unit of speech
Free (stand-alone) and bound (attaches itself) morphemes, functional morphemes
Rules of morphological computation (-ed at the end, not the beginning)
Syntax
Unit: phrase
phrases organized hierarchically
Syntactic computations are followed unconsciously by native speaker
INTUITION! of native speakers
Transformations
D - structure, S - structure
Deep structure is SAD - simple, active, declarative (S,V,O, not a question)
First, you access the words from the lexicon (words and morphemes), which requires efficient use of LTM and WM
Syntax: you put words together in phrases
this forms the D-structure
then you transform it into an S-structure
There could be wrong retrieval or wrong computations
Target S-structure: who was chased by the dog?
D-structure: the dog chased who
Semantics
Lexical and compositional
Lexical: word meaning, problem: almost impossible to give a finite definition (except biological terms)
Compositional: how we derive the meaning of sentences from individual words. The meaning of the whole is determined by the meaning of the parts + the structure.
Truth conditions
Discourse, speaker-internal vs conversation internal knowledge
In discourse, meaning is not just about individual words or sentences—it’s about context, shared knowledge, and how speakers interact.
Speaker-internal knowledge = What a speaker knows before entering a conversation (background knowledge, beliefs, assumptions).
Conversation-internal knowledge = What speakers learn and negotiate during a conversation (new information, shared understanding).
Speaker internal example:
Before starting a conversation, you already know that "Paris is the capital of France" or that "Water is wet."
If someone says "I just came back from Paris!", you already internally know what Paris is and what that might mean (travel, sightseeing, French culture, etc.).
Conversation-internal example
If someone says, "I just came back from Paris, and it was raining the whole time!", you didn’t know about the rain until they said it—this is conversation-internal knowledge.
If they say, "My flight was delayed because of a strike," you just learned something new, which changes the course of the conversation.
If you say "the bride was young" out of the blue, it makes sense grammatically but not in terms of conversation-internal discourse. ("I went to a wedding; the bride was young" is okay.)
BRIDGING
Bridging
Bridging is the process of linking new information to previously mentioned information in a discourse.
Involves lexical semantics, Lexical semantics = The study of word meanings and their relationships.
Since bridging requires linking new information to previous information, it depends heavily on lexical meaning and relationships between words.
Example: "I bought a new car. The vehicle is red."
Bridging inference: "The vehicle" refers to "the car".
This works because "car" and "vehicle" are semantically related (synonyms).
Pragmatics
The use of language
What’s on TV tonight? --- Nothing
Do you have time? --- Yes.
She walks on thin ice.
John kicked the bucket.
Normal, expected interpretation appears to be impaired in right brain damaged patients.
Relevant area for memory
prefrontal cortex and hippocampus
Model for memory
sensory input → sensory memory → (attention) short-term memory (maintenance rehearsal) → (encoding/retrieval) long-term memory
Memory stores (function, capacity and storage time)
Sensory memory:
Sensory memory makes sure that the information is held in a buffer, long enough to be processed if necessary.
large capacity
very short (0.1-0.5 seconds)
all sensory modules have their own kind of sensory memory
Iconic memory
Sperling
Short term memory
Working Memory
small capacity (limited memory span)
short duration (several seconds)
active
7±2 items
word length effect
Long term memory
ACTIVATION
determines speed and accuracy of access
memories are activated when associated with present concepts —→ necessary to bring the element to the threshold
POWER LAW
large capacity
long lived
passive (does not require continuous effort to stay there)
Baddeley’s theory of working memory
phonological loop
1-2 s
verbal thoughts
visuospatial sketchpad
visual thoughts
what does "p" look like when turned upside down?
episodic buffer
navigates
central executive
connects to LTM
loop and sketchpad independent = possible to do double tasks
power law
The Power Law of Practice states that the more we practice something, the faster and more efficient we become, but improvements slow down over time.
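A quick numeric sketch using the common textbook form RT = a + b·N^(-c); the parameter values below are invented for illustration:

```python
# Power law of practice: big early gains, then diminishing returns.

A, B, C = 300.0, 700.0, 0.5   # asymptote (ms), initial gain (ms), rate of improvement (invented)

def reaction_time(n):
    """Predicted reaction time after n practice trials."""
    return A + B * n ** (-C)

for n in (1, 10, 100, 1000):
    print(n, round(reaction_time(n)), "ms")   # improvement slows as n grows
```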
long term potentiation
LTP
Long Term Potentiation
"Neurons that fire together, wire together."
Step 1: A neuron sends a signal (Action Potential).
Step 2: If this happens repeatedly, the receiving neuron becomes more responsive.
Step 3: The synapse strengthens—making future signals easier & faster!
increase of synapse surface
more ion channels
more transmitter vesicles
also: new synapses
Prefrontal and hippocampal regions show decreased activation as participants become more practiced.
Base level of activation
The base level of activation is how "ready" a memory is to be retrieved.
High activation = Memory is easy to recall.
Low activation = Memory is harder to access.
Every memory has a baseline activation level.
Frequently used memories stay highly active
Infrequent memories require extra effort to retrieve.
When we comprehend language (listening/reading), the external stimulus (sound waves or written text) provides the activation energy to trigger word recognition.
when we produce language (speaking/writing), we have to generate the activation internally, starting from our thoughts.
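One standard way to formalize this is an ACT-R-style base-level learning equation, B = ln(Σ t^(-d)) over the times t since each past use. The sketch below is an illustration with invented numbers, not necessarily the model used in the course:

```python
# Base-level activation: higher when a memory has been used often and recently.

import math

def base_level(times_since_use, d=0.5):
    """ACT-R-style base-level learning; d is a decay parameter."""
    return math.log(sum(t ** -d for t in times_since_use))

frequent_and_recent = [1, 3, 5, 10, 20]    # e.g. seconds since each past retrieval (invented)
rare_and_old        = [500, 900]

print(base_level(frequent_and_recent))  # higher -> easy, fast recall
print(base_level(rare_and_old))         # lower  -> needs extra activation (effort or external input)
```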
memory trace
the neural representation of a memory in the brain. It’s basically the physical "footprint" left in your brain when you experience something.
Strengthened through Long-Term Potentiation (LTP) → The more a memory is used, the stronger the trace.
Stored in neural networks → A combination of synaptic connections & patterns of activation.
The distance of the memory trace to the threshold is measurable (reaction time)
Primed lexical decision
Prime and target
The Primed Lexical Decision Task (LDT) is an experiment where participants:
See a prime word first (e.g., doctor).
See a target word and must decide ASAP if it’s a real word (nurse or blork).
Reaction times are measured to see if the prime influenced word recognition.
If the prime is related to the target → Faster response
If the prime is unrelated → Slower response
If the target is a non-word → Slowest response
Cross-modal lexical priming
Cross-Modal Lexical Priming is when a word in one modality (e.g., spoken) influences how quickly we recognize a word in another modality (e.g., visual).
You hear the word "bank" (auditory).
A second later, you see the written word "money" or "river" (visual).
If you recognize one faster than the other, it tells us which meaning was activated in your brain first!
David Swinney’s experiment
David Swinney’s experiment showed that when we hear an ambiguous word, our brain briefly activates all possible meanings, even if context makes one more likely.
Step 1: Participants listened to a sentence containing an ambiguous word.
Step 2: Right after hearing the ambiguous word, they were shown a visual word on a screen.
Step 3: They had to do a lexical decision task (decide if the visual word was a real word).
Step 4: Reaction times were measured.
Example Sentence Used in the Experiment:
"The government building had a large bug in the office."
The word "bug" can mean:
Insect (literal meaning)
Listening device (spy bug)
A general problem
What happened?
Right after "bug", participants responded equally fast to words related to both meanings ("ant" and "spy").
This showed that both meanings of "bug" were activated at first, even though context favored only one.
What happened a second later?
After a slight delay (200-300ms), only the contextually appropriate meaning ("spy") remained active, while the unrelated meaning ("ant") faded.
What Does This Mean for Language Processing?
Lexical access is automatic → When we hear a word, our brain activates all possible meanings instantly.
Context takes time to narrow down meaning → After a short delay, only the relevant meaning stays active.
Cross-modal priming proves that lexical access happens in real-time across different sensory modes (hearing + vision).
Spreading activation
Spreading Activation is the process where activating one concept in memory "spreads" and activates related concepts.
Our mental lexicon (word storage in the brain) is organized like a network, where words are connected by meaning, sound, or experience.
When one word is activated, nearby related words also get partially activated, making it easier to retrieve them.
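A toy sketch of spreading activation over a small invented lexical network (the network and decay value are made up):

```python
# Spreading activation: activating one concept passes a decaying fraction of its
# activation to related concepts, which are then easier (faster) to retrieve.

network = {
    "doctor":   ["nurse", "hospital"],
    "nurse":    ["doctor", "hospital", "uniform"],
    "hospital": ["doctor", "nurse"],
    "uniform":  ["nurse"],
    "river":    ["bank"],
}

def spread(start, decay=0.5, depth=2):
    """Return partial activations of concepts reachable from the start node."""
    activation = {start: 1.0}
    frontier = [start]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for neighbour in network.get(node, []):
                passed = activation[node] * decay
                if passed > activation.get(neighbour, 0.0):
                    activation[neighbour] = passed
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return activation

print(spread("doctor"))   # "nurse" and "hospital" are pre-activated -> primed, recognized faster
```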
Factors influencing memories
Elaborative Processing
when we actively connect new information to things we already know. This makes memories stronger, richer, and easier to retrieve later.
The use of mental imagery
Personal and emotional relevance
Sentence superiority
Sentence Superiority Effect = Words are processed more easily & accurately when they appear in a meaningful sentence, rather than in isolation or in random word sequences.
Context boosts word recognition.
Sentences create expectations, helping predict upcoming words.
Our brain is wired to process language holistically, not just as individual words.
Flashbulb memory
Flashbulb Memory = A highly detailed, vivid memory of an emotionally significant event.
Feels like a mental "snapshot" of the moment.
Often involves major public or personal events.
People remember not just the event, but also where they were, who they were with, what they were doing, and how they felt.
declarative memory
General knowledge about the world
knowledge of language, notably words (mental lexicon)
irregular verbs
Procedural memory
procedures
skills
regular verbs
Fun fact about occipital lobe
Same activation when looking at something as when just imagining it
4 lobes and relevant fissures and 1 thing that connects 2 things
Malfunction in the monitoring
stuttering (an oversensitive, overdeveloped monitoring system)
is the acoustic signal a good enough representation of the thought?
vicious cycle hypothesis
threshold of what’s acceptable is too high
distraction beneficial
Production side of the model
Conceptualizer
prepares a pre-linguistic, conceptual, structured message
information about individuals, their properties and events
adding thematic roles (agent, theme), not syntax yet!
Monitoring – Before speaking, you check if your message makes sense.
the end result is a pre-verbal message that goes into the formulator
Formulator
Grammatical Encoding – Selecting words (lemmas) and structuring them grammatically. = Deep structure is built from the lexicon
Surface Structure – Arranging the sentence correctly (word order, syntax).
Argument structure→ argument structure of a verb means how many specific arguments it can take (1 - John is jumping, 2 - John ate an apple, 3 - John gave an apple to Mary), also what kind of arguments (thematic roles, agent, theme etc)
Projection principle → reducing uncertainty
Phonological Encoding – Adding sounds to words (how they should be pronounced).
Sound structure
Articulator
Phonetic Plan → Muscle Movements – Your brain sends signals to your tongue, lips, and vocal cords to produce sound. (cortical homunculus)
Overt Speech – Finally, you say "I’m hungry!" out loud.
Lemma vs Lexeme
Lemma is the "base" of a word that carries its meaning and grammatical properties.
It does NOT include morphological inflections (e.g., tense, plural, conjugation).
It’s like the mental blueprint of a word before you actually say or write it.
Example:
The lemma for "run" is the base form RUN (verb) 🏃♀
But it does NOT specify if it’s "ran" (past), "runs" (third person), or "running" (progressive).
What is a Lexeme?
A lexeme is a word family—the base word plus all its inflected forms.
It includes all morphological variations of a word.
This is what we think of when we look up a word in a dictionary.
Example:
The lexeme RUN includes:
Run (base form)
Runs (third-person singular)
Running (progressive form)
Ran (past tense)
First, the brain selects the lemma (choosing the base word & grammar).
Then, the lexeme is activated (choosing the right form for pronunciation).
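A sketch of the lemma/lexeme distinction as a data structure (an invented representation, just to make the two-stage selection concrete):

```python
# Lemma = meaning + grammatical properties; lexeme = the word family of inflected forms.

lemma_RUN = {"base": "run", "category": "verb", "meaning": "move fast on foot"}

lexeme_RUN = {               # all morphological variants of the word
    "base": "run",
    "3sg": "runs",
    "progressive": "running",
    "past": "ran",
}

# Production order (per the notes): first select the lemma (meaning + grammar),
# then activate the right form of the lexeme for pronunciation.
selected_form = lexeme_RUN["past"]
print(lemma_RUN["category"], selected_form)   # verb ran
```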
ToT
Tip of the tongue: accompanied by a strong feeling of knowing the intended meaning and grammatical characteristics of the message
This phenomenon supports the idea of multiple stage model of lexicalization: A word is a complex system that has several faces: meaning, grammar, sound (and orthography when written.)
Garrett's model
First vs second language acquisition
L1 acquisition:
Happens Naturally – No formal instruction is needed.
Critical Period Hypothesis (CPH) – If a child isn’t exposed to language by early childhood, they may never fully acquire it (e.g., Genie case study).
Universal Stages – Babies go through cooing, babbling, one-word, two-word, and full-sentence stages.
Mostly left hemisphere:
Broca’s Area (speech production)
Wernicke’s Area (comprehension)
Superior Temporal Gyrus (auditory processing)
Highly automatic (no cognitive effort required)
Low WM load (fast & efficient retrieval)
Stronger, more efficient connections
Unsupervised
L2 acquisition
More Effort Required – Learning is influenced by age, motivation, environment, and L1 influence.
Interference from L1 – Grammar and pronunciation rules from the first language can carry over.
More Variability – Some people achieve native-like fluency, others never fully do.
More distributed activation:
Left hemisphere (if fluent)
Right hemisphere (if learned late)
More reliance on Prefrontal Cortex (working memory)
Requires more cognitive control, esp. in early learners
Higher WM load (more effortful retrieval)
Weaker, more effortful connection
Critical period hypothesis
L1 must be acquired before puberty for full fluency.
L2 is harder to acquire after puberty because brain plasticity decreases.
Comprehension side model
Relevant areas:
Wernicke’s area
Auditory cortex
Key Components:
Phonetic String → Parsed Speech – The sounds of language are analyzed and understood.
Connection to Mental Lexicon – You match words to their meaning.
Connection to Conceptualizer – You interpret speech in context.
Example:
Your friend hears you say "I’m hungry."
Their auditory system processes the phonetic string.
Their speech comprehension system recognizes words and meaning.
Their conceptualizer figures out how to respond: "Let’s get pizza!"
Self-Monitoring: You can also hear yourself talk and correct errors if needed.
Parsed speech is the structured version of spoken language after the brain has identified words, syntax, and meaning.
First, speech is just a raw sound wave (phonetic string).
Then, the brain processes it to recognize words & sentence structure.
Finally, it turns into parsed speech—ready for comprehension.
The model includes feedback loops to monitor and correct speech errors.
How It Works:
Before you speak, Monitoring checks if the message makes sense.
After you speak, you hear yourself and can correct mistakes.
If you say, "I’m thirsty—uh, I mean hungry!", that’s your speech comprehension system catching a mistake and fixing it in real-time
Categorical perception
Voiced vs unvoiced
voiced - first vocal cords activated, then lips released
unvoiced - first lips released, then vocal cords activated
Understanding speech
Differentiation of speech from other sounds
Recognizing words
Activating their syntactic and semantic properties
Building their grammatical structure
Interpreting this structure
Proposed parsing algorithms
Wait and see
parallelism
conservative guessing
Wait and See
Slow but accurate – Avoids premature errors.
Works well for ambiguous sentences.
Prefers to avoid making incorrect assumptions.
Example Sentence:
"The horse raced past the barn fell."
Wait-and-See Strategy: Does NOT immediately assume "raced" is the main verb.
Waits for more words to confirm whether "raced" is a verb or a reduced relative clause.
Delays commitment until full information is available.
Parallel parsing
The brain considers ALL possible interpretations simultaneously and waits to see which one is correct.
Keeps multiple syntactic structures active at the same time
Efficient for complex or ambiguous sentences.
Used in computational models of language processing.
Example Sentence:
"I saw the man with the telescope."
Parallel Parsing: Brain keeps BOTH possible meanings active:
1. I used a telescope to see the man.
2. The man had a telescope.
Waits for disambiguation later in the sentence.
Advantage: More flexibility, handles ambiguity well.
Disadvantage: Requires more cognitive effort & memory to keep multiple structures in mind.
Conservative guessing
The brain makes an immediate decision based on early input and sticks with it—sometimes leading to errors.
Fastest parsing strategy – Prioritizes efficiency over accuracy.
Leads to "garden path" effects when a sentence is misleading.
Uses probabilistic cues & past experience to make quick guesses.
Example Sentence:
"The old man the boats."
Conservative Guessing: Immediately assumes "old" is an adjective (not a noun).
WRONG! The sentence actually means "Old people are the ones who man the boats."
Brain needs to backtrack & reanalyze the sentence.
Advantage: Super fast processing.
Disadvantage: More errors in ambiguous or tricky sentences.
Incremental parsing
Incremental Parsing = The brain processes sentences word-by-word as they arrive, without waiting for the full sentence.
Super fast – We don’t wait until the sentence ends to start making sense of it.
Efficient for everyday speech – Allows real-time conversation & prediction.
BUT… it can lead to parsing errors!
Garden path
A Garden Path Sentence is a sentence that initially leads the reader to an incorrect interpretation, requiring reanalysis to understand the correct meaning.
Happens because of Incremental Parsing → The brain processes words as they come in, making predictions.
When the prediction is wrong, the brain has to backtrack & repair the structure (Later Repair/Reanalysis).
Causes a moment of confusion!
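A very schematic contrast (no real grammar; the "analyses" are just labels) between conservative guessing and parallel parsing on the garden-path sentence above:

```python
# Toy illustration: commit early and reanalyze vs keep both readings alive.

WORDS = ["The", "horse", "raced", "past", "the", "barn", "fell"]

def conservative(words):
    """Commit early to the most likely analysis; backtrack when later input contradicts it."""
    analysis = "raced = main verb"                        # the quick, high-probability guess
    for w in words:
        if w == "fell" and analysis == "raced = main verb":
            print("surprise at 'fell' -> reanalysis")     # the garden-path moment
            analysis = "raced = reduced relative clause"
    return analysis

def parallel(words):
    """Keep both analyses alive; drop whichever one later input rules out."""
    analyses = {"raced = main verb", "raced = reduced relative clause"}
    for w in words:
        if w == "fell":
            analyses.discard("raced = main verb")         # only the relative-clause reading survives
    return analyses

print(conservative(WORDS))
print(parallel(WORDS))
```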