lcc FINAL STUDY GUIDE

261 Terms

1
New cards

What are Chomsky’s 3 theories about language acquisition?

  1. I-language (internal language)

  2. Universal Grammar

  3. Innatism

2
New cards

What is I-language?

mental representation of a person's knowledge of their language

3
New cards

What is universal grammar?

  • all humans are born with an innate set of grammatical principles shared across all languages

  • acts like a blueprint for language learning, explaining why children can learn complex languages quickly and uniformly, despite limited input

4
New cards

What is innatism?

  • certain ideas, knowledge, or capacities are inborn rather than acquired through experience

  • language ability is hardwired into the human brain

5
New cards

What are the two domains language is divided into?

Faculty of Language

  • FLB (broad-sense) → sensory-motor and conceptual systems

  • FLN (narrow-sense) → recursion (computation)

6
New cards

What is faculty of language?

the biological capacity humans have to acquire, understand, and use language

  • It's a term often associated with Noam Chomsky and his theories on how language is rooted in human cognition

Language is not just speech or communication — it is a cognitive system composed of:

  • A computational core (syntax/recursion)

  • Interfaces with thought and sensory-motor systems

  • Only humans seem to possess the full package (FLN), though parts (FLB) are present in other animals.

RIGHT DIAGRAM:

  1. Syntactic Rules and Lexical Representations (Blue Box)

    • Internal grammar system that creates structured sentences

    • Combines with vocabulary (lexical items)

  2. Internal Conceptual-Intentional Interface (Orange Box)

    • Deals with meaning, reasoning, and intention

    • Where thoughts are formulated before being put into language

  3. External Sensory-Motor Interface (Red Box)

    • How language is perceived and produced (e.g., speech, sign, hearing)

    • Converts linguistic structures into physical signals (and vice versa)

→ Arrows show bidirectional flow: We understand language by mapping sounds/gestures back to structured meanings, and we produce language by mapping thoughts into words and sound

7
New cards

What is evidence for and against language being innate to humans? (3)

Pro:

  • poverty of stimulus (speakers know what is wrong without exposure to it)

    • people have a sense of what is and isn’t correct with little input and corrections

Con:

  • it is not falsifiable (explanations from linguists are post-hoc, after they happen)

    • Falsifiability means a theory can be proven wrong by evidence. A theory that's not falsifiable is not scientific in the strict Popperian sense.

    • after the fact — meaning:

      • They don’t predict what we should observe.

      • They explain things post-hoc (after they happen), making them impossible to truly test or refute

  • linguistic changes vs genetics

    • fast-changing languages incompatible with a slow-changing genetic hardwire

    • Languages evolve quickly — new words, grammar shifts, entire languages appear and disappear over centuries.

    • Genes evolve slowly — over thousands to millions of years.

    • This mismatch raises a challenge:

      • How can a genetically fixed language faculty (like FLN or Universal Grammar) account for the huge diversity and rapid evolution of languages?

      • Wouldn’t our genetic hardwiring lag behind?

If language were mostly hardwired, it should remain fairly stable across cultures and time.

8
New cards

What are the 3 hypotheses for the origins of the faculty of language Broad-sense? (FLB)

  1. FLB is homologous to animal communication

  2. FLB is an adaptation, and only present in humans

  3. FLB is shared with other animals, but FLN (narrow-sense) is uniquely human

9
New cards

What is Faculty of Language in the Broad Sense (FLB)?

  • Includes general cognitive abilities that aren’t specific to language but are involved in it, like memory, pattern recognition, and the ability to learn from experience.

  • Some animals might share parts of FLB (like communication or pattern recognition skills), but not in the same way humans do.

  • Includes sensory-motor systems (speech, hearing, gesture) and conceptual-intentional systems (thought, meaning, planning)

  • Shared with other species – but only humans combine them with recursion to create full language

10
New cards

What is Faculty of Language in the Narrow Sense (FLN)?

  • Refers to specific features unique to human language, like recursion (the ability to embed ideas within ideas) and the complex grammatical structures we can create

  • Core computational mechanism that builds infinite structures from finite means

11
New cards

What are 3 claims about language?

  • Language as a thought tool: The primary role of language is to express thoughts, not to communicate. Communication is secondary.

  • No comparison with animals: Human language is considered qualitatively different from animal communication; it's not just a more complex version.

  • Species studies caution: Studies in other species may reveal mechanisms (e.g., working memory, pattern recognition) also used in language—but these mechanisms are not exclusive to language.

12
New cards

What does the speech discrimination experiment with monkeys (tamarins) show regarding Faculty of language: Sensory-motor interface?

  • Goal: To test whether tamarins can discriminate between languages and speakers.

  • Conditions:

    • Forward speech (normal)

    • Backward speech (unnatural)

  • Tamarins can discriminate between languages when speech is played forward, even though they don't have full language abilities.

  • This implies that some sensory-motor processing mechanisms (like detecting rhythm or sound patterns) are shared across species, even if full language is not.

  • It supports the idea that some mechanisms used in language are not exclusive to it

13
New cards

What are two animal examples of the Faculty of Language: Conceptual-Intentional interface?

  • vervet monkeys use vocalizations to communicate specific meanings—though in a limited, non-linguistic way

  • honeybees use the waggle dance to encode and communicate information (ex. food location) without language

14
New cards

Describe how vervet monkeys are an example for Faculty of Language: Conceptual - Intentional interface

monkeys have different calls for predators → snake alarm is different from eagle alarm

  • issue: vocalization is limited because they lack fine vocal muscle control, and communication is indexical (an instinctive survival response)

    • calls are only produced in the presence of the thing they refer to

  • These vocalizations are instinctive and referential (they point to specific dangers), but they:

    • Lack syntax (no structure or combination rules)

    • Are not generative (you can’t build new meanings by combining them)

    • Do not reflect intentional communication like in humans (ex. expressing thoughts or questions)

Vervet calls show that animals may map sounds to meanings, but not in a linguistic way.

  • This supports the idea that full language depends on a more complex conceptual-intentional interface unique to humans

This slide demonstrates that some precursors to language (like sound-meaning associations) exist in other species. However, only humans use a flexible, abstract, and compositional system tied to internal thought and intentionality—what we call language.

15
New cards

What is the Conceptual - Intentional interface? → faculty of language

  • The cognitive side of language: where thoughts, intentions, and reasoning occur.

  • It connects mental concepts with the linguistic system.

  • In humans, this interface enables us to turn abstract thoughts into structured language.

16
New cards

Describe how honeybees are an example for Faculty of Language: Conceptual - Intentional interface

  • When a forager bee finds a flower source, it returns to the hive and performs a waggle dance.

  • The dance communicates:

    • Direction of the flower (relative to the sun)

    • Distance (via duration of the waggle)

  • This is a symbolic communication system—it maps internal representations (location of food) to external signals (dance patterns).

  • animals can encode and convey conceptual information.

  • Limitations of the waggle dance:

    • Is rigid and limited in scope (only about food)

    • Doesn’t involve syntax or recursive structure

    • Isn’t generative (bees can’t create new messages beyond what’s hard-coded)

17
New cards

What makes language uniquely human? (3)

1. Discrete Infinite Elements (words)

  • Human language consists of units (ex. words) that are:

    • Discrete: clearly separable

    • Infinite in potential: can be combined endlessly

  • This enables humans to express an unlimited number of ideas using a finite vocabulary.

2. Syntactic Organization (Computation)

  • These elements are not just listed — they’re structured by rules (syntax).

  • The mind applies computational operations (like recursion, hierarchy) to build complex expressions.

  • This syntactic system is what separates language from other forms of animal communication.

3. Symbolism (Lexicon)

  • Language uses arbitrary symbols (words) to refer to concepts.

  • These are stored in the lexicon — our mental dictionary.

  • Symbolic reference is more flexible and abstract than fixed calls or signals seen in animals.

18
New cards

Describe recursion in the Faculty of Language (FLN) → unique human component

  • Recursion is the ability to embed structures within structures (ex. sentences within sentences).

  • It’s the computational mechanism at the heart of human syntax.

  • This is what makes the Faculty of Language in the Narrow Sense (FLN) distinct from broader communication systems seen in animals

Recursion is:

  • A defining feature of human language

  • What allows us to generate infinite new expressions from finite elements

  • Considered absent in non-human species, even those with symbolic or meaningful signals
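
The idea of recursion above can be sketched computationally. A minimal, hypothetical Python illustration (the function name and carrier phrase are my own, not from the course) of how one embedding rule, applied to its own output, yields unboundedly many sentences:

```python
# Sketch only: linguistic recursion lets a clause contain another
# clause of the same type, so a finite rule generates infinitely
# many expressions from finite elements.

def embed(sentence: str, depth: int) -> str:
    """Recursively nest a clause inside a reporting frame."""
    if depth == 0:
        return sentence
    # The rule applies to its own output -> recursion
    return "Mary thinks that " + embed(sentence, depth - 1)

print(embed("the starling was tired", 0))
# the starling was tired
print(embed("the starling was tired", 2))
# Mary thinks that Mary thinks that the starling was tired
```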

19
New cards

Describe recursion in the honeybee waggle dance and vocalization in vervet monkeys

  • The bee waggle dance and vervet monkey calls lack recursion.

  • They convey fixed messages (e.g., food direction or predator type) but can’t combine or nest them in flexible ways.

20
New cards

How is the Pirahã language, spoken in the Amazon, a challenge to the universality of recursion?

  • Pirahã lacks recursion in its external language (e-language, ex. actual spoken language).

  • Culture shapes grammar — the Pirahã worldview (focus on the present, rejection of abstraction) may restrict language structure.

  • Raises the question: might recursion exist in the internal language (I-language, the mental system), even if it's not expressed?

Recursion (the ability to embed phrases within phrases) is seen as a core component of the Faculty of Language in the Narrow sense (FLN) and is often claimed to be unique to humans

emphasizes a key tension in linguistics:

  • Is recursion truly universal in human language?

  • Or can culture override an innate computational capacity?

21
New cards

Describe how European Starlings were used to try and show recursion

  • Starlings were trained to discriminate between patterns of artificial sounds with different grammatical structures.

  • Two types of sound patterns were tested:

    • (AB)ⁿ (e.g., ab, abab, ababab) → simple repetition

    • AⁿBⁿ (e.g., aabb, aaabbb) → more complex nested patterns, suggesting recursion

Setup (left image)

  • Birds heard sound patterns from a speaker and had to choose the correct response port to get food.

  • Their choices revealed whether they could distinguish between pattern types.

Results (middle graphs)

  • The birds learned (AB)ⁿ patterns more easily (simpler repetition).

  • There was some limited success with AⁿBⁿ patterns, suggesting possible sensitivity to structured sequences.

Human Comparison (bottom examples)

  • AB structure = “The starling was tired.” (simple sentence)

  • AⁿBⁿ structure = “The starling [that the cats want] was tired.” (nested/recursive relative clause)

This human sentence involves recursive embedding, which is much more complex than AB repetition. It requires memory, hierarchy, and grammar rules—all tightly linked to syntactic recursion.

  • While starlings show some pattern learning, the complexity and flexibility of human recursive syntax (like in embedded clauses) still appears qualitatively different.

  • This supports the view that recursion, as used in human language, remains a uniquely human capacity.

*The birds could also have counted or relied on acoustic similarity → they may simply have learned the surface patterns
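
The two pattern types in the study can be sketched in Python (an illustrative reconstruction, not the experimenters' code): (AB)ⁿ is a simple repeating, finite-state pattern, while AⁿBⁿ requires matching counts of a's and b's, the kind of nested dependency associated with recursion.

```python
# Sketch: why (AB)^n is "easy" and A^n B^n is "hard".
# (AB)^n is a regular pattern; A^n B^n needs counting/nesting.
import re

def is_ab_n(s: str) -> bool:
    """Matches ab, abab, ababab, ... (simple repetition)."""
    return re.fullmatch(r"(ab)+", s) is not None

def is_a_n_b_n(s: str) -> bool:
    """Matches ab, aabb, aaabbb, ... (equal counts, nested)."""
    n = len(s) // 2
    return n > 0 and s == "a" * n + "b" * n

print(is_ab_n("ababab"), is_a_n_b_n("ababab"))  # True False
print(is_ab_n("aaabbb"), is_a_n_b_n("aaabbb"))  # False True
```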

22
New cards

What are 3 uniquely human components of language?

  • recursion

  • merge

  • lexicon

23
New cards

Describe merge in the Faculty of Language (FLN) → unique human component

Merge is a fundamental operation in syntax that:

  • Combines two elements (like words or phrases) into a new syntactic unit

  • Is the building block of syntactic structure

Example:

  • [ate] + [apple] → [ate apple]

  • This forms hierarchical structure, not just a linear string

Aspects:

  1. Creates syntactic objects (like sentences or phrases)

  2. Allows for syntactic dependencies (ex. subject-verb agreement, word order)

  3. Supports hierarchical sentence structure

Integration with Language System

  • Merge operates within the syntactic module (blue box)

  • It interfaces with:

    • Conceptual-intentional system (meaning/thoughts)

    • Sensory-motor system (spoken/written/sign language)
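
Merge as described above can be sketched as a tiny Python function (the nested-tuple representation is my own illustrative choice, not the course's notation): combining two elements yields a new object that can itself be merged again, which is what produces hierarchy rather than a flat string.

```python
# Sketch: Merge combines two syntactic objects into a new unit;
# reapplying it to its own output builds hierarchical structure.

def merge(a, b):
    """Combine two syntactic objects into one new object."""
    return (a, b)

vp = merge("ate", "apple")   # [ate apple]
s = merge("John", vp)        # [John [ate apple]] -> hierarchy
print(s)                     # ('John', ('ate', 'apple'))
```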

24
New cards

Describe the 4 aspects of lexicon in the Faculty of Language (FLN) → unique human component

  • The lexicon is the internal mental store of words (lexical items).

  • Each lexical item includes:

    • Phonological form (how it sounds)

    • Syntactic features (how it combines with other words)

    • Semantic content (meaning)

It’s not just a list—it’s a rich, structured system crucial for producing and understanding language.

Aspects:

  1. Highly complex and specific

    • Words carry detailed grammatical and conceptual information, unlike animal calls which are typically fixed and limited.

  2. Infinite

    • Humans can create an unlimited number of new words or meanings using existing elements (e.g., compound words, neologisms, metaphors).

  3. Mind-dependent entities

    • Lexical meaning is not just referential—it often reflects mental representations, emotions, or abstract concepts.

  4. Enables interpretation of the world

    • Words are tools for categorizing and reasoning about experience, not just for communication.

System Interaction (diagram)

  • The lexicon is housed in the syntactic system (blue box).

  • It interacts with:

    • The conceptual-intentional system (orange): to encode thoughts into words

    • The sensory-motor system (red): to express those words in speech/sign

25
New cards

Describe chimps and Lexicon → unique human component of Language (4)

While chimps can:

  • Recognize objects (ex. apple, knife)

  • Use tools

  • Learn some symbolic associations (via signs or lexigrams)

They lack a lexicon in the human linguistic sense:

1. Highly complex and specific

  • Human words encode rich syntactic and semantic features.

  • Chimps recognize objects but don’t assign roles or relationships linguistically.

2. Infinite

  • Humans can endlessly create and combine words (ex. “knife apple cutter,” “pre-cuttable apple”).

  • Chimps show no evidence of productive, generative word creation.

3. Mind-dependent entities

  • Human words can refer to things that don’t exist (ex. “unicorn,” “justice”).

  • Chimps communicate only about immediate, concrete entities.

4. World interpretation

  • Language lets humans categorize, explain, and narrate the world.

  • Chimps can act on the world but do not show evidence of representing it symbolically in structured ways.

🐒 Visual Message of the Slide

  • The chimp with an apple and knife suggests tool use and object recognition.

  • But it cannot label, describe, or reflect on the experience using language.

26
New cards

Neural architecture

(flashcard image: neural architecture diagram)
27
New cards

What are the two FLB interfaces and FLN, the 3 major components of language?

  1. Sensory-Motor Interface (FLB)

  • How language is externalized (speech, sign)

  • Includes:

    • Speech discrimination (Ramus et al. 2000 – tamarins)

    • Vocalization (Owren & Bernacki, 1988 – vervet monkeys)

2. Conceptual-Intentional Interface (FLB)

  • Interface between language and thought

  • Includes:

    • Vocalization again (ex. vervet monkey calls as meaningful signals)

3. Computation Core (FLN)

  • Syntax and recursion

  • Merge as the core operation (Berwick et al., 2013)

  • Enables hierarchical structure

28
New cards

What are the 3 components of evolution?

  • Variation: Individuals differ in traits

  • Heredity: Traits are passed on genetically

  • Differential reproduction: Traits that aid survival and reproduction get passed on more easily

Implication for language:

  • Language-related traits (like vocal learning or symbol processing) may have evolved gradually due to adaptive advantages (e.g., better cooperation, teaching, mate selection)

29
New cards

Describe Vocalization and the Singing Ape Hypothesis → evolution

  • Early vocalizations were:

    • Taught to offspring (culturally transmitted)

    • Used for social bonding or display

  • Over time, this behavior:

    • Became subject to sexual selection (more complex/appealing vocalizers reproduced more)

    • Led to the emergence of a “singing ape”: proto-humans with richer vocal abilities

  • 🧬 Result:

    • Vocal learning became an adaptation, biologically supported and passed on genetically

    • Feedback loop: cultural teaching → selective advantage → biological adaptation → more teaching

🔁 Key Mechanism: Baldwin Effect

  • Learned behavior (like vocal imitation) leads to biological adaptation over time due to selection pressures.

30
New cards

Describe: vocal calls to language

Vocal Calls → Language (L)

  • M = Common ancestor shared by humans and other primates

  • From primitive vocal calls, two lineages evolved:

    • One remained with non-linguistic vocalizations

    • One (L) led to language

  • Millions of years of evolution separate simple vocal calls from modern language

  • Language (L) is now a biological adaptation:

    • It likely began as learned, flexible vocal behavior

    • Selected for because of social, sexual, or survival advantages

31
New cards

How was Wilhelm Wundt influential to the study of language evolution?

1879 – Wilhelm Wundt: psychological research


  • Founded the first psychological lab in Leipzig

  • Marked the beginning of experimental psychology

  • 🧪 Psychology provides:

    • Methods to study cognition, learning, and communication

    • Insight into mental processes behind language: memory, attention, intention, social interaction

  • 💬 Applied to language evolution:

    • Helps explain how the mind supports language (e.g., theory of mind, symbol use, imitation)

    • Complements biological and linguistic approaches

32
New cards

What impact did the Société de Linguistique de Paris (1866) have? → Language Evolution: Linguistics

🏛 Société de Linguistique de Paris (1866):

  • Banned all discussion of language origins.

  • Language origin was seen as unscientific due to lack of data.

  • Acted as a gatekeeper for what could be published in linguistics

33
New cards

Describe the modern view on evolutionary linguistics

🧠 Modern View (Chomsky & Hauser)

“It is not easy to imagine a course of selection...”
— Chomsky (1981)
🔹 Language’s complexity makes its evolution hard to reconstruct via natural selection alone.

“FLN may have evolved for non-language reasons...”
— Hauser et al. (2002)
🔹 FLN (e.g. recursion) might have evolved originally for other cognitive domains like:

  • Numerical reasoning

  • Navigation

  • Social cognition

🔍 Key Insight

Language-specific abilities (like Merge) may be by-products or exaptations, not direct adaptations for communication.

34
New cards

Describe comparative methods used in linguistics for animals → language evolution

Goal: Understand what aspects of language are uniquely human by comparing humans with other species.

🐵 Approach:

  • Study non-human animals for:

    • Communication systems (vervet monkey alarm calls)

    • Vocal learning (songbirds, parrots)

    • Cognitive capacities (chimpanzees using symbols or gestures)

🧠 What We Learn:

  • Shared traits (FLB): perception, memory, vocal imitation, social learning

  • Unique traits (FLN): recursive syntax, Merge, hierarchical structure

chat:

  • Comparative linguistics studies the evolution and relationships between languages by:

    • Comparing vocabulary

    • Analyzing grammar and sound patterns

    • Reconstructing language families

🗣 Example: Romance Languages

  • The diagram maps Romance subfamilies:

    • Ibero-Romance (e.g., Spanish, Portuguese)

    • Western Romance, Eastern Romance, Island Romance, etc.

  • Shows language descent and divergence from Latin

🧠 Why It Matters for Language Evolution:

  • Reveals how languages change over time

  • Helps trace linguistic ancestry

  • Offers clues about universal patterns and cognitive constraints

35
New cards

Describe the linguist revival 1990 (3)

🧬 Language as a gradual adaptation shaped by natural selection
→ Not a sudden mutation, but evolved like other complex traits.

🧠 Key Ideas:

  • Preadaptation:

    • Language built on earlier cognitive mechanisms (ex. memory, imitation)

  • Modularity:

    • The mind has specialized modules (ex. vision, language, social reasoning)

  • Grammar module:

    • Treated as a potential unit of selection—it evolved because it helped survival/reproduction.

🏛 SPANDREL metaphor:

  • Not all features are directly selected for—some (like certain aspects of grammar) may be by-products (spandrels) of other adaptations.

36
New cards

Describe the cognitive niche and the three cognitive mechanisms hominids evolved to use (3)

  • A niche is the environment an organism constructs

  • The cognitive niche is a niche built using brainpower instead of claws or speed

🧍‍♂ Hominids evolved to:

  • Use cognitive mechanisms to dominate their environment

    • 🧠 Intelligence → understanding of the world

    • 🗣 Language → communicating that knowledge

    • 🤝 Sociality → collaborating with others

      • reciprocal altruism: collaborating with unrelated humans to pass on knowledge

chat:

🧠 Intelligence (understanding of the world)

  • Domains: physics, biology, geometry, navigation, psychology

  • Leads to tool creation and strategic manipulation of the environment

🤝 Sociality (collaborating with others)

  • Humans cooperate even with unrelated humans

  • Based on reciprocal altruism:

    • Short-term decrease in one’s own fitness to benefit others → future reward

    • Builds trust, coordination, social bonds

🗣 Language (communicating knowledge)

  • Combines limited elements (words) to generate unlimited messages

  • Essential for:

    • Teaching

    • Planning

    • Coordinating social and technical tasks

37
New cards

The cognitive niche

🧬 Human Mind = Cognitive Niche

  • The cognitive niche is a human-specific strategy for survival and adaptation.

  • It's constructed by the brain using:

    • 🧠 Intelligence (understanding the world)

    • 🗣 Language (communicating knowledge)

    • 🤝 Sociality (collaboration and cooperation)

These are adaptations:

  • Evolved through natural selection

  • Gave humans a flexible, knowledge-based survival strategy

  • Contrast with animals relying on physical traits (claws, speed)

📌 Summary

The human mind itself is an adaptation shaped by and shaping the cognitive niche.

38
New cards

What are the 3 computational models?

  • Machine learning

  • Multi-agent modeling

  • Evolutionary computation

39
New cards

Describe computational modeling

🔁 Three interacting systems:

  1. Individual learning
    → Learners adapt to language based on biases and exposure
    → (Linked to machine learning models)

  2. Cultural transmission
    → Language is passed between generations through imitation, teaching, and communication
    → Drives changes in structure over time

  3. Biological evolution
    → Determines learning capacities, vocal abilities, and cognitive traits
    → Language structure can affect fitness (e.g., cooperation, success)

🧩 Interactions:

  • Learning biases influence which structures are easier to acquire

  • Cultural evolution favors learnable, expressive systems

  • Biological evolution shapes learning mechanisms

  • Language itself feeds back into fitness and selection

🛠 Tools:

  • Machine learning = simulates individual learning

  • Multi-agent modeling = simulates populations and cultural dynamics

  • Evolutionary computation = simulates long-term genetic change

40
New cards

Describe the computational model machine learning (2)

🧠 Individual Learning

  • Learners acquire language by observing others

🧪 Key Learning Models:

  1. Instance-based learning (Batali, 1999)

    • Learners compare new information to stored information

    • Enables pattern recognition and generalization

  2. Network models (Christiansen & Dale, 2003)

    • Use neural networks to simulate how language is learned and processed

    • Capture emergent structure from repeated exposure
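As an illustration, here is a toy instance-based learner — in the spirit of the approach, not Batali’s actual model. The `similarity` function, `InstanceLearner` class, and verb forms are all invented for this sketch; a new word form is classified by comparing it to stored instances:

```python
# Toy instance-based learner: stores (form, category) pairs and
# classifies a new form by its most similar stored instance (1-NN).
def similarity(a, b):
    # crude similarity: count matching characters at the same position
    return sum(x == y for x, y in zip(a, b))

class InstanceLearner:
    def __init__(self):
        self.memory = []  # stored (form, category) instances

    def learn(self, form, category):
        self.memory.append((form, category))

    def classify(self, form):
        # compare the new form against every stored instance
        best_form, best_cat = max(self.memory,
                                  key=lambda fc: similarity(form, fc[0]))
        return best_cat

learner = InstanceLearner()
learner.learn("walked", "past")
learner.learn("walks", "present")
learner.learn("jumped", "past")
learner.learn("jumps", "present")

print(learner.classify("talked"))  # generalizes to "past"
```

The learner never abstracts an explicit “-ed = past” rule; generalization falls out of comparing new input to stored exemplars, which is the core idea of instance-based learning.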

41
New cards

Describe the computational model multi-agent modeling & the experiment for it

Ex: Kirby, 2000

🧩 What it is:

  • A simulation where multiple agents interact and learn from each other over time.

  • Each agent has a set of meanings and signals (words/strings) that can be shared.

🧠 Key Setup:

  • 5 proper nouns (ex. agents or objects)

  • 5 action verbs

  • Agents start with random associations

  • Through interaction and feedback, they converge on shared meanings (ex. "word A" = "agent X")

🔄 How it models language evolution:

  • Simulates cultural transmission

  • Captures how shared structure (like grammar or vocabulary) emerges without centralized control

  • Shows how language stabilizes over generations
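A minimal naming-game-style sketch (inspired by, but far simpler than, Kirby’s 2000 model — the `Agent` class, signal pool, and “hearer adopts the speaker’s word” update rule are assumptions for illustration) shows how agents starting from random word–meaning pairings can converge on a shared lexicon with no central control:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

MEANINGS = ["M1", "M2", "M3", "M4", "M5"]   # e.g. 5 proper nouns
SIGNALS = [f"w{i}" for i in range(20)]      # pool of arbitrary word forms

class Agent:
    def __init__(self):
        # each agent starts with a private, random meaning -> signal mapping
        self.lexicon = {m: random.choice(SIGNALS) for m in MEANINGS}

def interact(speaker, hearer):
    meaning = random.choice(MEANINGS)
    signal = speaker.lexicon[meaning]
    if hearer.lexicon[meaning] != signal:
        # failed communication: the hearer adopts the speaker's word
        hearer.lexicon[meaning] = signal

def shared_fraction(agents):
    # fraction of meanings the whole population agrees on
    agree = sum(len({a.lexicon[m] for a in agents}) == 1 for m in MEANINGS)
    return agree / len(MEANINGS)

agents = [Agent() for _ in range(10)]
for _ in range(2000):
    speaker, hearer = random.sample(agents, 2)
    interact(speaker, hearer)

print(shared_fraction(agents))  # rises toward 1.0 as a shared lexicon emerges
```

Consensus emerges purely from repeated pairwise interactions — the cultural-transmission point the card makes.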

42
New cards

Describe the computational model evolutionary computation

What It Models:

  • Biological evolution of agents

  • Agents have genetically-determined traits (e.g., learning bias, communication ability)

  • Traits are passed on through genetic transmission and selected via natural selection

🔁 Evolutionary Process:

  • Some traits give agents a fitness advantage (e.g., better communication = more success)

  • These traits are selected for over generations

  • Goal: simulate how language-related biological traits evolve
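The selection loop can be sketched with a toy genetic algorithm. The single-number “communicative ability” genome, the mutation rate, and the fitness-proportional selection rule are illustrative assumptions, not a model from the literature:

```python
import random

random.seed(1)

POP, GENERATIONS, MUT = 50, 100, 0.05

# each agent's "genome" is one number: its communicative ability in [0, 1]
population = [random.random() for _ in range(POP)]

def next_generation(pop):
    new = []
    for _ in range(POP):
        # fitness-proportional (roulette-wheel) selection:
        # better communicators are more likely to reproduce
        parent = random.choices(pop, weights=pop, k=1)[0]
        # offspring inherit the parent's trait with a small mutation
        child = min(1.0, max(0.0, parent + random.gauss(0, MUT)))
        new.append(child)
    return new

start = sum(population) / POP
for _ in range(GENERATIONS):
    population = next_generation(population)
end = sum(population) / POP

print(round(start, 2), "->", round(end, 2))  # mean ability rises under selection
```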

43
New cards

What’s a criticism of the computational model evolutionary computation?

Bickerton, 2007:

Many models suffer from the “streetlight effect”
They only search for answers where it’s easiest to look, not necessarily where the truth is.

Lack of realism: Often oversimplified or detached from actual human biology/social dynamics

44
New cards

What are the 3 types of symbols → consensus

🖼 Icon

  • Similarity between the token (representation of object) and the object it represents

  • Example: Picture of a field resembles the actual field

🔗 Index

  • Token is physically or temporally linked to what it refers to

  • Direct or causal connection

  • Example: Monkey alarm call → indicates nearby predator, smoke → fire

🅰 Symbol

  • Arbitrary but socially agreed connection → no natural link, relies on shared understanding

  • formal or agreed upon connection between token & object

  • Example:

    • 🍎 = “apple” in English

    • “Apfel” (German), “manzana” (Spanish), “mela” (Italian)

45
New cards

What are the 3 key developments of symbols → consensus

  1. Relation of symbols with each other

    • Language isn’t just words → it's how words relate to each other

  2. Increase in symbolic complexity

    • Growing structure = syntax

    • Allows infinite combinations of finite words

  3. Auditory memory expansion

    • Needed to track sequences of sounds

    • Important for understanding longer phrases or nested sentences

46
New cards

How is white sclera an example of a preadaptation unique to humans compared to other primates, and why does it matter for language?

White sclera (visible eye-whites)

  • Enhances eye-gaze visibility and joint attention

🧠 Why it matters for language:

  • Supports nonverbal communication
    → Easier to follow others' gaze and intentions

  • Enables shared attention
    → Crucial for teaching, pointing, and symbol grounding

    • Great apes prefer head movement over eye gaze

    • Babies (12-18 mo) prefer eye gaze over head movement

🪜 Preadaptation defined:

A biological trait that evolved for another function but facilitated language evolution.

47
New cards

What are 3 preadaptations in humans?

  • white sclera → joint attention, tracking eye-gaze

  • ToM → represent others as agent and intentional beings

  • mirror neurons → empathy

48
New cards

How is representing others as agents a preadaptation for humans and why does it matter for language?

🧍‍♂👁 Theory of Mind (ToM)

  • The ability to represent others as intentional beings with thoughts and goals

  • Essential for communication, cooperation, and language

🧒 Developmental Milestones:

  • 9–12 months:

    • Follows others’ gaze

    • Understands pointing

    • Perception of agents

  • 4 years old:

    • Passes false belief tests (e.g., Sally-Anne task)

    • Can represent what someone else knows/thinks even if it’s wrong

🧩 Why it matters for language:

  • Enables shared attention, joint action, and symbol use

  • A cognitive prerequisite for understanding intentions behind words

49
New cards

How is mirror neurons a preadaptation for humans and why does it matter for language?

🔁 What are Mirror Neurons?

  • Neurons that fire both when an individual performs an action and when they observe the same action in another.

🧍‍♂🧍‍♂ Key Functions:

  • Action imitation

  • Learning by observation

  • Understanding others’ intentions

🧠💬 Why they matter for language:

  • Enable social learning of communicative gestures

  • Support the recognition of communicative intent

  • Possible foundation for gesture-based or spoken language evolution

📌 Core Idea:

Mirror neurons help link action, perception, and intention — a crucial foundation for communicative behavior.

Brain responses are based on neural population activity, not individual neurons — so interpretations must consider broader network effects

50
New cards

Describe the gesture hypothesis

🖐 Gesture Hypothesis

  • Animal calls show limited vocal control.

  • Tool use led to gestures evolving into vocal forms

  • Humans have advanced imitation skills

  • Vocal sounds were recruited to accompany gestures → supports idea that gesture and vocalization co-evolved

51
New cards

Describe the speech hypothesis

🗣 Speech Hypothesis

  • Gestures needed line of sight and daylight (visually limited)

  • Phonetic gestures evolved from chewing/sucking/swallowing

  • Vocal calls became holistic words

52
New cards

Describe verbal dyspraxia in the KE family

🧑👩‍👦‍👦 1991: KE Family

  • Language impairment (grammar comprehension + production)

  • Caused by mutations in FOXP2 gene

53
New cards

Apart from the KE family in which animals is verbal dyspraxia present? (3)

🐀 FOXP2 Across Species

  • Present in rats, monkeys, and apes

    • Apes differ from humans by 2 amino acid changes

54
New cards

What does FOXP2 do? → language and genes

  • Transcription factor gene

  • Turns other genes on/off

  • Crucial in developing neural circuits for language

55
New cards

Describe right handedness in humans and chimps → language dominance (3)

🧠 Brain-Language Relationship

  • 95% of right-handed people:

    → Left cerebral dominance for language

  • 75% of left-handed people:
    → Also left hemisphere dominant for language

🧍 Handedness Prevalence

  • 90% of humans are right-handed

  • 50% of chimps show right-hand preference

🗣 Language & Gesture

  • Right-handedness → better word generation

  • Birds, frogs, mammals:
    Vocalization processed in left hemisphere

56
New cards

Which side of the body does the left hemisphere control?

The right side of the body

57
New cards

SUMMARY

🧬 Biology: Language as Adaptation

  • Darwinian view: Language evolved via natural selection

📚 Linguistics Perspective

  • Language = gradual adaptation

  • Fits into the cognitive niche:

    • Natural selection

    • Intelligence

    • Language and Sociality

🤖 Computational Models

  • Machine learning

  • Multi-agent modeling

  • Evolutionary computation

🧠 Consensus

  • Language uses symbols and preadaptations (ToM, mirror neurons, joint attention)

Controversies

  • Speech vs. Gesture: Which came first?

  • Language and Genetics: FOXP2 gene

58
New cards

What are the two types of joint attention?

  • RJA (Responding Joint Attention): Following others' gaze or gestures

    • sharing a common point of interest

    • helps learning and language acquisition

    • in infants and chimps

  • IJA (Initiating Joint Attention): Actively directing another's attention

    • shows intent to share interest or pleasure

    • develops later

    • needs a deeper understanding of others’ attention and intentions

    • limited use in chimps

59
New cards

Describe the social cognitive model

Social cognition:

  • Develops around 9–12 months

  • Foundation for developing joint attention (RJA & IJA)

  • RJA and IJA are correlated in development

🔹 Key Study (Brooks & Meltzoff, 2005):

  • At 10–11 months, infants show social awareness of the meaning of looking

  • Look longer when adult’s eyes are open vs. closed ➝ evidence of social interpretation

60
New cards

What are the 2 attention-system models?

  • Posterior attention system → supports RJA (Responding Joint Attention)

  • Anterior attention system → supports IJA (Initiating Joint Attention)

🔹 Development Timeline:

  • 3–6 months: Reflexive attention (RJA), basic attention control

  • 7–9 months: Beginning of intentional control (IJA), faster processing

  • 10–18 months: Integrated self–other attention → emergence of full joint attention abilities

61
New cards

Describe the social cognitive model?

🔹 Key Points:

  • Social cognition develops around 9–12 months and is necessary for the development of joint attention

  • RJA and IJA develop in parallel → correlated growth

  • Responding joint attention (RJA) at 6 months already supports language learning

    • Yet, social cognition is not fully developed at this age

62
New cards

  • Grey matter (cerebral cortex): involved in processing information

  • White matter: responsible for communication between brain regions

  • Brain folds:

    • Gyrus = raised ridge

    • Sulcus = groove or fold

63
New cards
term image

Cortex:

  • Made of gray matter

  • Crucial for cognition (thinking, attention, memory)

🔹 Major Brain Regions:

  • Frontal lobe: decision-making, planning, voluntary movement

  • Parietal lobe: sensory integration, spatial orientation

  • Temporal lobe: auditory processing, language

  • Occipital lobe: visual processing

  • Cerebellum: coordination and balance

  • Brain stem & spinal cord: basic bodily functions, communication between brain and body

64
New cards

Attention-Systems Model: Brodmann Areas (1909)

  • Cytoarchitectonic (cell-structure) map of the cortex that applies across mammals

  • Divides the cortex into 43 areas (numbered 1–52, with gaps)

  • Areas 12–16 and 48–51 are not found in humans

  • Used to localize brain functions (e.g., attention, vision, language)

65
New cards

Attention-systems mechanisms

66
New cards

What are 5 aspects of the Attention-Systems Model?

  • RJA ≠ IJA, but they are interactive mechanisms of attention

  • Experience influences attention system interaction

  • IJA demands more connection between frontal and posterior attention systems than RJA

  • Autistic individuals have poor connection between their frontal and posterior attention systems

  • Blind children have different frontal neural topography for social cognition

67
New cards

How is language processing a selective mechanism? (2)

Language Acquisition

  • Joint attention between adult, infant, and object facilitates language learning

Language Processing

  • Selective mechanism:

    • Drives goal-directed processing

    • Helps filter out distractors

68
New cards

Describe Attention in Language Processing: Automatic vs. Controlled

Automatic: Cognitive Psychologists

  • No interference

  • No attention needed

Controlled: Early Psycholinguists

  • Language is modular

  • Computation is automatic

69
New cards

Describe the components and stimulus types of Event-Related Brain Potentials (ERP)

ERP Components (ex. N100, P200, N400, P600):

  • Identity features:

    • Polarity: Positive (+) or Negative (–)

    • Latency: Time in milliseconds after stimulus

    • Scalp distribution: Where the activity is recorded

Stimulus types:

  • Visual (ex. word on screen)

  • Auditory (ex. spoken word)

70
New cards

Describe key characteristics of Event-Related Brain Potentials (ERP)

ERP – Mismatch Negativity (MMN)

  • Mismatch Negativity (MMN) = ERP response to a deviant stimulus among standard ones

Key characteristics:

  • Occurs at ~100ms

  • Detected over centro-frontal regions

  • Pre-attentive: does not require conscious attention

Function:

  • Reflects automatic redirection of attention to unexpected or deviant events
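The standard/deviant ("oddball") design that elicits the MMN can be sketched as a toy stimulus generator. The parameter values are illustrative, not taken from any particular study:

```python
import random

def oddball_sequence(n_trials: int = 200, deviant_prob: float = 0.1,
                     seed: int = 0) -> list:
    """Mostly 'standard' stimuli with rare 'deviant' ones mixed in.

    The MMN is the negative deflection (~100 ms) that deviants evoke
    relative to standards, even without attention to the stream.
    """
    rng = random.Random(seed)
    return ["deviant" if rng.random() < deviant_prob else "standard"
            for _ in range(n_trials)]

seq = oddball_sequence()
assert seq.count("standard") > seq.count("deviant")  # deviants must stay rare
```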

71
New cards

Describe mismatch negativity (MMN) in Event related brain potentials (ERP)

AUTOMATIC!

Syntactic Violation Detection → Study: Pulvermüller et al. (2008)

Finding:

  • Mismatch Negativity (MMN) occurs automatically in response to syntactic violations

  • Even when participants were watching movies + hearing tones (i.e., not focused on the language task)

Conclusion:

  • Syntactic processing can be automatic, detected without attention


🧪 Study: Pulvermüller et al. (2008) 🧠 What did they study?

  • Whether the brain automatically detects syntactic violations, even when you're not paying attention to the language.

  • Participants were distracted (watching a silent movie + listening to tones).

  • Meanwhile, grammatical and ungrammatical sentences were played in the background.

🎯 Key Finding:

  • The brain showed a Mismatch Negativity (MMN) in response to syntactic violations.

  • MMN is an ERP component that usually reflects pre-attentive change detection — the brain’s way of noticing when something violates a rule or expectation.

MMN showed up even when participants were not focused on the language at all.

🔍 Conclusion:

Syntactic processing can happen automatically.
The brain can detect grammatical errors without conscious attention.
🚫 This differs from semantic processing (like the N400), which does require a certain level of input clarity or attention (as we saw earlier with degraded speech).

72
New cards

Describe event related brain potentials (ERP) N400 & Semantic Processing

N400 = ERP component related to semantic priming (semantic access)
Example: "Dog–cat" is processed faster than "dog–fish"
🔁 Automatic Spreading Activation (ASA)

Semantics in speech seems to be controlled:

  • Auditory sentences degraded in different levels (increasing degradation from S0 to S3)

  • N400 response in the less degraded states (S0–S2)

  • No N400 in the most degraded speech (S3)
    ⇒ Semantic processing not fully automatic, depends on input quality


🧠 What is the N400?

  • The N400 is an event-related potential (ERP) — a kind of brainwave measured by EEG.

  • It shows up about 400 milliseconds after you hear or read a word.

  • It reflects semantic processing — how your brain reacts to the meaning of words.

💡 Classic N400 effect: Semantic Priming

  • When two words are semantically related (like “dog–cat”), the brain processes the second word faster and with less effort.

  • This causes a smaller N400.

  • If the words are unrelated (like “dog–fish”), the N400 is larger, showing that your brain had to work harder to integrate the unexpected or unrelated meaning.

Why?
→ This supports the idea of Automatic Spreading Activation (ASA):

  • When you hear “dog,” it activates related concepts in your mental lexicon, like “cat,” “bark,” or “bone.”

  • If the next word is one of those, it’s already activated, so it’s easier to process.
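Automatic Spreading Activation can be sketched as a toy lexicon graph. The word sets below are invented for illustration; real lexical networks are far larger:

```python
# Hypothetical mental lexicon: each word pre-activates its neighbours
LEXICON = {
    "dog": {"cat", "bark", "bone"},
    "cat": {"dog", "meow", "fur"},
}

def n400_amplitude(prime: str, target: str) -> str:
    """Smaller N400 when the prime has already activated the target."""
    return "small" if target in LEXICON.get(prime, set()) else "large"

assert n400_amplitude("dog", "cat") == "small"   # related pair, primed
assert n400_amplitude("dog", "fish") == "large"  # unrelated, more effort
```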

🎧 What happens in spoken language, especially degraded speech?

Researchers tested how semantic processing (and the N400) behaves when speech quality drops — like when audio is muffled, noisy, or distorted.

They created 4 levels of degradation, from S0 (clear) to S3 (very degraded).

🔍 Key Finding:

  • In S0 to S2 (where speech is still fairly understandable), the N400 effect is present. This means:

    • Even if the speech isn’t perfect, the brain still accesses meaning and shows semantic priming (smaller N400 for related words).

  • In S3 (heavily degraded speech):

    • The N400 disappears → no evidence of semantic access.

    • The brain can’t process the meaning if the signal is too poor.

🧠 Interpretation:

Semantic processing is not fully automatic — it depends on the quality of the input.

  • This challenges older models that assumed semantic access happens automatically, no matter what.

  • Instead, if auditory input is too unclear, your brain may not even get far enough to do semantic processing.

  • This shows a kind of adaptive efficiency: the brain doesn’t waste resources trying to extract meaning when the signal is unintelligible.

73
New cards

Describe ELAN and P600 – Syntactic Reanalysis → event related brain potentials (ERP)

ELAN = automatic

P600 = controlled

ELAN (Early Left Anterior Negativity):

  • Appears ~200ms

  • Reflects early automatic syntactic structure building

  • Shown when sentence violates expected word category

P600:

  • Appears ~600ms

  • Reflects controlled syntactic reanalysis or repair

  • Linked to conscious syntactic processing (e.g., garden-path sentences)


🧠 ELAN vs. P600 — Two Stages of Syntactic Processing

These are ERP components observed during sentence processing, especially in response to syntactic violations.

🔹 ELAN (Early Left Anterior Negativity)

  • When: ~200 milliseconds after the critical word

  • Where: Left frontal regions of the brain

  • What it reflects:
    🔄 Automatic, fast parsing of syntax
    🧱 Structure building — e.g., identifying whether the incoming word fits the expected grammatical structure

Triggered by:

  • Violations of syntactic category:

    • e.g., “The man sang the book.” ← Verb in place of a noun → ELAN response

  • The brain expected a noun, got a verb → automatic detection

Summary:

  • Unconscious

  • Early

  • Structure-sensitive

  • Automatic parsing

🔹 P600

  • When: ~600 milliseconds after the word

  • Where: Parietal regions (often posterior)

  • What it reflects:
    🧠 Controlled, conscious processing
    🔁 Reanalysis or repair of syntactic structures

Triggered by:

  • Complex or ungrammatical sentences that require reinterpretation

  • e.g., Garden-path sentences:

    • “The horse raced past the barn fell.” ← Unexpected verb “fell” forces reanalysis

  • Also triggered by agreement violations:

    • “The key to the cabinets are rusty.”

Summary:

  • Conscious

  • Later

  • Involves reanalysis

  • Effortful syntactic repair

🧩 Final Insight:

These components show that syntax is processed in stages:

  • First, the brain automatically checks word category (ELAN).

  • If something goes wrong or gets confusing, the brain may consciously repair or reinterpret the sentence (P600).
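The two-stage idea can be sketched as a toy checker. The function and its inputs are hypothetical; real parsing is far richer than a single category comparison:

```python
def erp_events(expected_category: str, actual_category: str,
               needs_reanalysis: bool) -> list:
    """Stage 1: automatic word-category check (ELAN, ~200 ms).
    Stage 2: controlled repair if the parse breaks down (P600, ~600 ms)."""
    events = []
    if actual_category != expected_category:
        events.append("ELAN")
    if needs_reanalysis:
        events.append("P600")
    return events

# "The man sang the book." -> word-category violation
assert erp_events("noun", "verb", needs_reanalysis=False) == ["ELAN"]
# Garden path ("The horse raced past the barn fell.") -> repair only
assert erp_events("verb", "verb", needs_reanalysis=True) == ["P600"]
```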

<p class=""><strong>ELAN = automatic</strong></p><p class=""><strong>p600 = controlled</strong></p><p class=""></p><p class=""><strong>ELAN (Early Left Anterior Negativity):</strong></p><ul><li><p class="">Appears ~200ms</p></li><li><p class="">Reflects early <strong>automatic</strong> syntactic structure building</p></li><li><p class="">Shown when sentence violates expected word category</p></li></ul><p></p><p class=""><strong>P600:</strong></p><ul><li><p class="">Appears ~600ms</p></li><li><p class="">Reflects <strong>controlled</strong> syntactic reanalysis or repair</p></li><li><p class="">Linked to <strong>conscious syntactic processing</strong> (e.g., garden-path sentences)</p></li></ul><div data-type="horizontalRule"><hr></div><p><span data-name="brain" data-type="emoji">🧠</span> ELAN vs. P600 — Two Stages of Syntactic Processing </p><p>These are <strong>ERP components</strong> observed during <strong>sentence processing</strong>, especially in response to <strong>syntactic violations</strong>.</p><p> </p><p><span data-name="small_blue_diamond" data-type="emoji">🔹</span> <strong>ELAN (Early Left Anterior Negativity)</strong> </p><ul><li><p><strong>When:</strong> ~200 milliseconds after the critical word</p></li><li><p><strong>Where:</strong> Left frontal regions of the brain</p></li><li><p><strong>What it reflects:</strong><br><span data-name="arrows_counterclockwise" data-type="emoji">🔄</span> <strong>Automatic</strong>, fast parsing of syntax<br><span data-name="bricks" data-type="emoji">🧱</span> <strong>Structure building</strong> — e.g., identifying whether the incoming word fits the expected grammatical structure</p></li></ul><p> <span data-name="warning" data-type="emoji">⚠</span> Triggered by: </p><ul><li><p>Violations of <strong>syntactic category</strong>:</p><ul><li><p>e.g., <em>“The man </em><strong><em>sang</em></strong><em> the book.”</em> ← Verb in place of a noun → ELAN response</p></li></ul></li><li><p>The brain expected a 
<strong>noun</strong>, got a <strong>verb</strong> → automatic detection</p></li></ul><p> <span data-name="plus" data-type="emoji">➕</span> Summary: </p><ul><li><p>Unconscious</p></li><li><p>Early</p></li><li><p>Structure-sensitive</p></li><li><p>Automatic parsing</p></li></ul><p> </p><p><span data-name="small_blue_diamond" data-type="emoji">🔹</span> <strong>P600</strong> </p><ul><li><p><strong>When:</strong> ~600 milliseconds after the word</p></li><li><p><strong>Where:</strong> Parietal regions (often posterior)</p></li><li><p><strong>What it reflects:</strong><br><span data-name="brain" data-type="emoji">🧠</span> <strong>Controlled, conscious</strong> processing<br><span data-name="repeat" data-type="emoji">🔁</span> <strong>Reanalysis or repair</strong> of syntactic structures</p></li></ul><p> <span data-name="warning" data-type="emoji">⚠</span> Triggered by: </p><ul><li><p>Complex or <strong>ungrammatical</strong> sentences that require reinterpretation</p></li><li><p>e.g., <strong>Garden-path sentences</strong>:</p><ul><li><p><em>“The horse raced past the barn </em><strong><em>fell</em></strong><em>.”</em> ← Unexpected verb “fell” forces reanalysis</p></li></ul></li><li><p>Also triggered by <strong>agreement violations</strong>:</p><ul><li><p><em>“The key to the cabinets </em><strong><em>are</em></strong><em> rusty.”</em></p></li></ul></li></ul><p> <span data-name="plus" data-type="emoji">➕</span> Summary: </p><ul><li><p>Conscious</p></li><li><p>Later</p></li><li><p>Involves reanalysis</p></li><li><p>Effortful syntactic repair</p></li></ul><p> </p><p><span data-name="jigsaw" data-type="emoji">🧩</span> Final Insight: </p><p>These components show that <strong>syntax is processed in stages</strong>:</p><p> </p><ul><li><p>First, the brain <strong>automatically checks word category</strong> (ELAN).</p></li><li><p>If something goes wrong or gets confusing, the brain may <strong>consciously repair or reinterpret</strong> the sentence (P600).</p></li></ul><p></p>
74
New cards

Describe the criticism of ELAN being an artifact or real effect

Criticism by Steinhauer & Drury (2012):

  • ELAN might be an artifact of:

    • Implicit learning or experimental strategy

    • Experimental design choices

🧪 Challenges the idea that ELAN reflects a true early syntactic processing mechanism


Criticism: Is ELAN a real marker of automatic syntax processing?

Steinhauer & Drury (2012) challenged the interpretation of ELAN as a clear sign of early syntactic structure building by pointing to several concerns.

🔍 1. Experimental Design Artifacts

They argue that some studies claiming to find ELAN may have unintentionally:

  • Used highly artificial or repetitive stimuli

  • Created expectation effects due to predictable structures

  • Repeated sentence types so often that participants developed strategies

🔁 Result: The observed ELAN might reflect task-related effects or learned regularities, not spontaneous syntactic parsing.

🔍 2. Implicit Learning or Strategy

  • Participants may implicitly learn patterns during the experiment (even if they’re not aware of it).

  • This could lead to early ERP effects that mimic ELAN — but are actually driven by attention, working memory, or prediction, not real-time structure building.

🧠 In other words:

"Maybe participants aren't automatically parsing syntax — maybe they're just good at noticing patterns we accidentally trained them on."

🧪 3. Reproducibility and Variability

  • ELAN findings are not always robust or replicable across labs or languages.

  • The timing, scalp distribution, and presence of ELAN vary a lot between studies.

  • This inconsistency weakens the claim that ELAN is a universal and reliable marker of early syntactic processing.

🔄 Summary of Steinhauer & Drury’s Argument

  • Design-driven artifact: ELAN may arise from how the experiment is structured, not from natural syntax processing

  • Implicit strategy use: Participants may unconsciously "game" the system, leading to misleading ERP signals

  • Poor replicability: ELAN isn’t consistently found, raising questions about its theoretical significance

  • Not conclusively syntax-specific: The ELAN might reflect other processes (like early attention or expectation mismatch)

🧠 Implication:

This critique does not disprove that early syntactic processing exists — but it warns against over-interpreting ELAN as definitive evidence for it.

75
New cards

What are the three attention networks?

  • Alerting → Maintains the state of focus

  • Orienting → Selects relevant info

  • Executive → Voluntary control of attention


1. Alerting

  • Function: Maintains readiness and a high state of sensitivity to incoming stimuli.

  • Like your brain's “stay awake and be ready” signal.

  • Involves arousal and vigilance.

  • Neurotransmitter: Norepinephrine

  • Brain areas involved: Right frontal and parietal cortex; thalamus

Example: You're waiting for the traffic light to turn green — your alerting system keeps you ready to respond.

2. Orienting

  • Function: Directs attention to specific stimuli or locations — selects what's relevant.

  • Like a spotlight that shifts your attention based on external cues.

  • Can be overt (eye movement) or covert (attention shift without moving eyes).

  • Neurotransmitter: Acetylcholine

  • Brain areas involved: Superior parietal lobe, frontal eye fields, superior colliculus

Example: You hear your name across a noisy room and immediately focus on that direction — that's orienting.

3. Executive (Control)

  • Function: Manages voluntary, goal-directed attention — especially during conflict or decision-making.

  • Involves inhibition, task-switching, and error monitoring.

  • Neurotransmitter: Dopamine

  • Brain areas involved: Anterior cingulate cortex (ACC), lateral prefrontal cortex

Example: You’re doing a math test, ignoring distracting noises — your executive system helps you stay on task.

76
New cards
term image
77
New cards
term image
78
New cards

Describe selective attention & the anterior temporal lobe, what are the 2 violations?

Selective attention targets the Anterior Temporal Lobe

  • Focuses on language processing and meaning

Violations:

  • Syntactic violation: "A tailor in the city were altering the gown."

    • Triggers syntactic reanalysis → often associated with ERP markers like P600.

  • Semantic violation: "That coffee at the diner was pouring the waiter."

    • Triggers semantic integration failure → often linked to N400 response.

Brain Activation:

  • BA 38 (Brodmann Area 38)

    • Activated during semantic and syntactic processing

    • Color-coded:

      • Yellow = Both tasks

      • Blue = Semantic

      • Red = Syntactic

🧠 What does BA 38 do in these cases?

  • It’s activated in both types of violations.

  • Suggests it plays a role in integrating meaning and structure — not just isolated semantic or syntactic processing.

  • Selective attention may amplify ATL activity when the brain tries to resolve conflicts or detect anomalies in meaning or grammar.

79
New cards

Describe Attention Impairment & Language in Parkinson’s disease (3)

  • Neurodegenerative disease affects attention and memory networks

  • Difficulty in keeping focus on words and retrieving them

  • Unable to identify phonetic and semantic errors in sentences

80
New cards

Describe attention impairment & language in specific language impairment and dyslexia

Specific Language Impairment (SLI)

  • Developmental language disorder

  • Impacts phonological, semantic, syntactic processing

SLI → sustained attention deficit

  • SLI children are worse at sustaining attention than non-SLI children

  • IQ controlled for

Dyslexia

  • Visual attention deficit predicts poor reading

🧩 Language and attention share neural networks

81
New cards

Describe the fixations of eye tracking regarding attention and reading (3)

  • Saccades (eye motions) → spatial attention shifts

  • Fixations depend on:

    • Word frequency

    • Grammatical category

    • Phonological complexity
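The fixation factors above can be sketched as a toy model: fixation durations shorten for high-frequency words and lengthen for phonologically complex ones. This is purely illustrative; the function name, coefficients, and baseline are invented, not fitted values from eye-tracking research.

```python
# Toy fixation-duration sketch: rarer, more complex words get longer
# fixations (all coefficients are invented for illustration).
import math

def fixation_ms(word_frequency_per_million, phonological_complexity,
                base_ms=200, freq_coef=25, complexity_coef=15):
    """Higher frequency -> shorter fixation; higher complexity -> longer."""
    return (base_ms
            - freq_coef * math.log10(max(word_frequency_per_million, 1))
            + complexity_coef * phonological_complexity)

print(fixation_ms(1000, 1))  # frequent, simple word: short fixation
print(fixation_ms(1, 4))     # rare, complex word: long fixation
```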

82
New cards

Describe the two models showing how attention is influenced by linguistic information

🧩 Serial Processing

  • Word-by-word

  • Attention shift occurs after processing is completed

  • E-Z Reader model

🧠 Parallel Processing

  • Processes several words at once

  • Simultaneous integration of information

  • SWIFT model
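The serial/parallel contrast can be sketched in a few lines of code: a serial reader shifts attention only after finishing the current word, while a parallel reader integrates several words inside an attentional window at once. This is a schematic illustration of the two model families, not an implementation of E-Z Reader or SWIFT; the timing numbers and window size are invented.

```python
# Minimal contrast of serial vs. parallel attention allocation in reading
# (schematic sketch; timing values are invented for illustration).

def serial_reading(words, time_per_word=0.25):
    """Serial (E-Z Reader-style): attention shifts word by word,
    only after the current word's processing is completed."""
    timeline, t = [], 0.0
    for w in words:
        t += time_per_word          # finish processing this word...
        timeline.append((w, t))     # ...then attention moves on
    return timeline

def parallel_reading(words, window=3, time_per_window=0.4):
    """Parallel (SWIFT-style): several words inside an attentional
    window are processed and integrated simultaneously."""
    timeline, t = [], 0.0
    for i in range(0, len(words), window):
        t += time_per_window
        timeline.append((tuple(words[i:i + window]), t))
    return timeline

sentence = "the quick brown fox jumps".split()
print(serial_reading(sentence))    # one (word, time) entry per word
print(parallel_reading(sentence))  # grouped entries, words processed together
```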

83
New cards

SUMMARY

1. Language Acquisition

  • RJA = Responding Joint Attention

  • IJA = Initiating Joint Attention

  • Social-Cognitive Model:

    • Social cognition → RJA & IJA

  • Attention-Systems Model:

    • RJA + IJA → supports social cognition

    • Involves frontal and posterior attention systems

2. Language Processing

  • Automatic vs. Controlled Processing

  • Selective Attention

  • ERP Components:

    • MMN: automatic

    • N400: controlled, semantic processing

    • P600: controlled, syntactic reanalysis

  • fMRI findings:

    • Brain areas for attention: alerting, orienting, executive

    • Selective attention → Temporal lobe (BA 38)

  • Impairments:

    • Parkinson’s → attention/language deficit

    • SLI and Dyslexia → affect visual/sustained attention

  • Reading:

    • Serial vs. Parallel models (E-Z Reader vs. SWIFT)

84
New cards

Describe the 3 components of the multi-store model of memory (Atkinson & Shiffrin, 1968)

🧠 1. Sensory Register (SR)

  • Briefly stores raw sensory input (mainly visual)

  • Other modalities (auditory, tactile) are not well understood here

  • Rapid decay unless attended to

🔄 2. Short-Term Store (STS) / Short-Term Memory

  • Receives info from SR and long-term store (LTM)

  • Acts as a workspace for:

    • Reasoning

    • Comprehension

  • Information is fragile – quickly decays or is lost unless rehearsed

  • Key for language understanding and production

📚 3. Long-Term Store (LTM)

  • More stable, but modifiable

  • Stores semantic, procedural, and episodic memories

  • Info is retrieved to STS when needed

  • Less info decay over time
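The flow between the three stores can be sketched as a toy simulation: input enters the sensory register, attention moves it to the short-term store, rehearsal encodes it into long-term memory, and unattended or unrehearsed traces decay. The class and method names are invented for illustration; this is a schematic of the model's information flow, not anything from Atkinson & Shiffrin's paper.

```python
# Toy sketch of the Atkinson & Shiffrin multi-store model
# (illustrative only; names and mechanics are simplified assumptions).

class MultiStoreMemory:
    def __init__(self):
        self.sensory_register = []   # rapid decay unless attended
        self.short_term = []         # fragile workspace, lost unless rehearsed
        self.long_term = set()       # more stable, modifiable store

    def perceive(self, stimulus):
        """Raw sensory input enters the sensory register."""
        self.sensory_register.append(stimulus)

    def attend(self, stimulus):
        """Attention moves an item from the SR to the short-term store."""
        if stimulus in self.sensory_register:
            self.sensory_register.remove(stimulus)
            self.short_term.append(stimulus)

    def rehearse(self, stimulus):
        """Rehearsal keeps an STS item alive and encodes it into LTM."""
        if stimulus in self.short_term:
            self.long_term.add(stimulus)

    def decay(self):
        """Unattended SR traces and unrehearsed STS traces are lost."""
        self.sensory_register.clear()
        self.short_term = [s for s in self.short_term if s in self.long_term]

memory = MultiStoreMemory()
memory.perceive("phone number")
memory.attend("phone number")
memory.rehearse("phone number")
memory.decay()
print("phone number" in memory.long_term)  # True: rehearsed into LTM
```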

85
New cards

Explain the support for the Short-Term Memory / Long-Term Memory distinction (2)

  1. Patients with learning difficulties had unaffected STM, suggesting that STM and LTM are separate systems.

  2. Conduction aphasia patients

    • STM impaired, LTM intact

    • Impoverished speech repetition, but patients could still paraphrase, implying LTM supports semantic content even if STM fails.

86
New cards

Explain the paradox of the Short-Term Memory / Long-Term Memory distinction

  • STM ≠ Working Memory (WM) (Atkinson & Shiffrin, 1968)

  • But later research shows:

    • WM = a system of interacting components supporting complex cognitive tasks (phonological loop, central executive).

    • Impaired STM affects complex cognition (cognitive impairment), suggesting it is more deeply involved than initially thought.

87
New cards

What are the 3 components of working memory? → Baddeley and Hitch 1974

  1. Central Executive

    • The control system that supervises attention, planning, and coordination.

    • It directs information to the two “slave systems” below.

  2. Phonological Loop

    • Deals with verbal and auditory information (e.g., language, speech, sounds).

    • Key for language acquisition and inner speech ("talking to yourself").

  3. Visuospatial Sketchpad

    • Handles visual and spatial information (e.g., mentally rotating an object, navigation).

88
New cards

Describe how the experiment with healthy adults supported the working memory model by Baddeley and Hitch 1974

This image summarizes experimental support for Baddeley and Hitch's Working Memory Model, particularly from research with healthy adults. The table shows how different factors affect tasks like verbal reasoning, comprehension, long-term storage (LTS), and recency effects.

Key Findings from the Table: Memory Load

  • 1–3 items: No effect on reasoning or comprehension.

  • 6 items: Causes performance decrements in both.

Phonemic Similarity

  • Words that sound similar impair:

    • Verbal reasoning and comprehension (→ Decrement).

    • But enhance long-term storage.

Articulatory Suppression

  • Inhibiting internal speech (e.g., saying “the, the, the…” while doing a task):

    • Leads to decrements in reasoning and LTS performance.

    • Not studied for comprehension.

Interpretation:

  • These findings support the idea that working memory has limited capacity and relies heavily on verbal rehearsal.

  • The phonological loop is sensitive to sound-based interference.

  • Articulatory suppression disrupts the phonological loop, which is critical for verbal tasks.

89
New cards

Describe the 2 components of the phonological loop in the working memory by Baddeley and Hitch 1974

It's responsible for temporarily storing and manipulating verbal and auditory information.

1. Storage System

  • Holds memory traces (e.g., sounds, words).

  • Information decays quickly (a few seconds).

  • Language independent: stores sounds even if they aren't fully understood.

2. Subvocal Rehearsal System

  • Refreshes the decaying memory traces through silent repetition (like repeating a phone number to yourself).

  • Language dependent: relies on previously learned phonological and linguistic knowledge to rehearse.

Summary:

  • The phonological loop allows us to retain verbal info briefly.

  • It's crucial for language acquisition, reading, and learning new words, especially in early development.
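The two components above can be sketched as a toy decay-and-rehearsal function: traces in the store are lost within seconds unless the rehearsal system can cycle through the whole list fast enough to refresh them. The ~2 s trace lifetime, the per-word articulation time, and the function name are all assumptions made for illustration; it also shows why longer articulation times hurt recall.

```python
# Toy decay-and-rehearsal sketch of the phonological loop
# (illustrative; trace lifetime and timing values are assumptions).

def recall_after(words, seconds_elapsed, rehearse=True,
                 trace_lifetime=2.0, seconds_per_word=0.5):
    """Return the words still recallable after a delay.

    Without rehearsal, traces older than `trace_lifetime` are lost.
    With rehearsal, each cycle through the list refreshes the traces,
    but only if the whole list can be articulated within the lifetime.
    """
    if not rehearse:
        return [] if seconds_elapsed > trace_lifetime else list(words)
    cycle_time = len(words) * seconds_per_word
    return list(words) if cycle_time <= trace_lifetime else []

short = ["cat", "sun", "map", "pen"]      # 4 * 0.5 s = 2.0 s cycle: refreshable
long_ = ["refrigerator", "university", "helicopter", "opportunity"]
print(recall_after(short, 10))                        # all recalled via rehearsal
print(recall_after(long_, 10, seconds_per_word=0.9))  # []: cycle too slow, traces decay
print(recall_after(short, 10, rehearse=False))        # []: no rehearsal, rapid decay
```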

90
New cards

Describe the 3 aspects of phonological loop → working memory model

🔊 Retention depends on acoustic or phonological features

🔁 Similarity effect

  • It's harder to recall similar-sounding words (e.g., man, mad, mat...) than dissimilar ones.

  • Confusion arises because of overlapping phonological traces.

🧠 Similarity in meaning helps learning for long-term memory, not immediate recall

  • Words with related meanings are easier to learn over time, but this doesn't help with immediate recall.

📏 Word-length effect

  • Short words are recalled more easily than long words:

    • Shorter words require less rehearsal time and are less prone to decay.

    • Longer words increase risk of forgetting or rehearsal errors.

91
New cards
term image

🟨 Storage Component

  • BA 44 (part of Broca's area):

    • Involved in semantic processing and temporary storage of verbal material.

    • Linked to phonological short-term memory.

🔁 Subvocal Rehearsal Component

  • BA 6:

    • Associated with motor planning.

    • Helps in coordinating silent articulation or subvocal rehearsal.

  • BA 40:

    • Involved in phonological processing, particularly reading and interpreting sound-based input.

92
New cards

Describe the 3 core functions of the phonological loop → WMM

  1. Sentence Comprehension

  • Example: Patient PV showed difficulty with long sentences.

    • Suggests the phonological loop is crucial for holding and manipulating verbal information over short durations.

  2. Facilitates Language Acquisition & Learning

  • Measured via non-word repetition tasks:

    • Impaired in SLI (Specific Language Impairment) children.

    • Predicts vocabulary growth in normal children

    • Correlates with second language (L2) learning success.

  3. Subvocalization (Silent verbal rehearsal)

  • Supports:

    • Action control

    • Cognitive switching

    • Strategic control

93
New cards

Describe Deafness and the Sign Loop

Study Overview:

  • Participants: 24 users of American Sign Language (ASL).

  • Task: Recall visually presented signs.

    • Conditions: Suppressed (no rehearsal) vs. not-suppressed (rehearsal allowed).

    • Stimuli: similar vs dissimilar signs

Key Findings:

  • Worse recall in the suppressed condition → shows importance of rehearsal.

  • A similarity effect was found:

    • Similar signs were harder to remember.

      • Mirrors the phonological similarity effect seen in hearing individuals.

94
New cards

Describe the 2 components of the sign loop

  • Buffer:

    • Temporarily stores manual signs (just like phonological loop stores sound-based info).

    • Holds visual-spatial linguistic information.

  • Rehearsal Process:

    • Actively refreshes the buffer, keeping sign information available.

    • Involves covert or overt manual rehearsal (similar to subvocal rehearsal in speech).

  • Refreshing of Information:

    • The loop allows maintenance of signs over short periods, enabling tasks like sign-based sentence repetition, learning new signs, or recalling signed sequences.

95
New cards

Describe the visuospatial sketchpad of the WMM

What is the Visuospatial Sketchpad?

  • A temporary store used for:

    • Visual information (shapes, colors)

    • Spatial information (locations, directions)

    • Kinesthetic (movement-based) input

  • Controlled and coordinated by the Central Executive.

🔁 Functions

  • Storage and manipulation of visual/spatial data

  • Enables mental imagery, navigation, and tracking movement

  • Supports problem-solving in tasks like mental rotation or spatial planning

🧪 Key Evidence (Baddeley et al., 1973)

Spatial tracking (following a moving dot) disrupts visual memory but not verbal memory → shows a distinct system from the phonological loop.

96
New cards

How does Williams Syndrome (WS) show a distinctive pattern in visuospatial sketchpad functioning?

Difficulty in comprehending spatial syntax (e.g., understanding "the cat is under the table")

🧬 Williams Syndrome: Cognitive Profile

  • Unusual pattern of learning difficulties

  • Preserved verbal skills

  • Impaired visual processing

📊 Graph Explanation

  • Compares spatial vs non-spatial sentence comprehension

  • Three groups:

    • TD = Typically Developing children

    • WS = Williams Syndrome individuals

    • MLD = Mild Learning Disability

Key Result:

  • WS group performs much worse than TD and MLD on spatial syntax tasks, but is closer in non-spatial syntax.

  • Indicates a specific visuospatial processing deficit, not general cognitive impairment.

97
New cards

Describe 5 functions of the central executive and its cognitive significance → WMM

  • Attentional control of working memory: Directs attention and prioritizes tasks.

  • Coordinates the phonological loop and visuospatial sketchpad.

  • Combines short-term storage and active processing.

  • Influences language comprehension capacity by managing competing information.

  • The main factor in individual differences in working memory span.

  • Most important for general cognition, since it integrates and regulates mental resources → multitasking

Located in the Frontal Lobe → Crucial for high-level executive functions

98
New cards

Describe the episodic buffer in the WMM

🧩 Episodic Buffer Overview

  • Bridges the visuo-spatial sketchpad, phonological loop, and long-term memory (LTM).

  • Controlled by the Central Executive.

🧠 Key Functions

  • Integrates visual and verbal information into a multidimensional representation in LTM

  • Provides extra storage capacity beyond the phonological loop and sketchpad.

    • Retention of prose passages: Requires binding of sequential information (words, meaning, sentence structure).

    • Amnesic patients: Show impaired performance due to limited use of the episodic buffer.

99
New cards

What are 5 key characteristics of the episodic buffer → WMM

  • Limited capacity: Can hold only a finite amount of information at once.

  • Controlled by the Central Executive: Does not operate independently.

  • Stores information: Temporarily integrates and holds information from multiple sources.

  • Multimodal integration: Combines visual, spatial, verbal, and long-term memory inputs into coherent episodes.

  • Foundation of conscious awareness: Enables us to be aware of bound, meaningful experiences (remembering a movie scene with dialogue, actions, and emotions).

100
New cards
term image

🧠 Core Components:

  1. Central Executive

    • Oversees attention, coordination, and control of the system.

    • Manages integration and switching between modalities.

  2. Visuo-spatial Sketchpad

    • Handles visual, spatial, and haptic (touch-related) information.

    • Includes subtypes like color, shape, kinesthetic (movement), and tactile data.

  3. Phonological Loop

    • Processes auditory and verbal information: speech, sound, sign language, lip reading, and music.

  4. Episodic Buffer

    • Integrates information from all modalities (visual, spatial, auditory, etc.)

    • Incorporates smell and taste (new additions)

    • Links working memory with long-term memory to form unified experiences.

  5. Articulation (ArtiC)

    • A new label here possibly indicating articulatory control, responsible for subvocal rehearsal.