L6: Acquisition of linguistic sounds


20 Terms

1
New cards

Define a baby/newborn, infant, toddler

  • Baby/newborn: 0-2 months

  • Infant: 3-12 months

  • Toddler: 12-36 months

2
New cards

What is high amplitude sucking (HAS)?

👶 Target Age: Newborns to 3 months old

🧪 What it Measures:

  • Auditory discrimination and interest in sounds

  • Changes in sucking rate (amplitude or frequency) in response to auditory stimuli

How it Works:

  1. Infant sucks on a specially designed pacifier connected to a pressure transducer.

  2. Sucking controls sound playback.

  3. When infants hear a novel or interesting sound, sucking rate increases.

  4. When they habituate (get used to the sound), sucking rate decreases.

📊 Applications:

  • Detects infants’ ability to discriminate phonemes, rhythms, and even speech vs. non-speech sounds.

3
New cards

Describe the headturn paradigm

👶 Age Range: 4 to 12 months

  • Infants must be able to hold their head steady

🎯 Purpose: To test whether infants can detect changes in sound, especially phonetic contrasts that may or may not be present in their native language.

🧪 How It Works:

  1. Infant sits on caregiver's lap facing forward.

  2. A sound plays repeatedly from one side (e.g., ba ba ba...).

  3. Suddenly, the sound changes (e.g., to da da da).

  4. If the infant detects the change, they will typically turn their head toward the new sound source.

  5. Head turns are recorded using hidden cameras and rewarded (e.g., with a light or animated toy).

🔍 What It Measures:

  • Discrimination of phonemes

  • Sensitivity to sound contrasts

  • Perceptual narrowing: Infants lose ability to distinguish unfamiliar phonemes (usually by ~10–12 months)

4
New cards

Describe the 3 foundational cognitive mechanisms of language acquisition

1. General Acoustic Perception

  • Present in infants from birth

  • Ability to perceive a broad range of sound features (e.g., pitch, duration, rhythm)

  • Shared with many animal species

  • Basis for distinguishing phonemes (like "b" vs. "p")

2. Computational Abilities

  • Seen in non-human animals (e.g., monkeys, rats)

  • Includes:

    • Statistical learning (tracking patterns or frequencies)

    • Sequence processing

  • Important for detecting word boundaries and grammar rules

3. Social Interaction

  • Particularly emphasized in humans and social mammals

  • Necessary for:

    • Joint attention

    • Turn-taking

    • Intentional communication

  • Supports motivation and contextual understanding

Together, these mechanisms illustrate how biological predispositions and social environments interact in the emergence of language

5
New cards

Describe the animal studies with tamarins and rats on Acoustic Perception in Animals

  • Humans can’t hear everything → range 20 Hz – 20,000 Hz

  • Humans are tuned to hear acoustic features critical to natural language

🔹 Animal Studies

  • Tamarins:

    • Responded more strongly to forward speech than backward speech

    • Show sensitivity to prosody and rhythm in human language

    • Graph shows more tamarins successfully responded (white bars) in the forward language condition

  • Rats:

    • Showed the ability to discriminate syllable patterns

    • Suggests basic acoustic pattern detection exists even in non-primate mammals

🔹 Conclusion

  • Some animals can perceive linguistically relevant acoustic properties, even without language.

  • Suggests evolutionary precursors to language may lie in general auditory processing abilities.

6
New cards

Describe the results of the study done on Chinchillas and Humans for categorical perception → acoustic perception in animals

Study: Kuhl & Miller (1975)

🧪 Experiment Overview

  • Subjects: Chinchillas and English-speaking humans

  • Stimuli: Voice onset time (VOT) continuum between /d/ and /t/ sounds

  • Task: Determine the phoneme boundary (where /d/ becomes /t/)

📊 Graph Interpretation

  • X-axis: VOT in milliseconds (msec)

  • Y-axis: % labeled as /d/

  • Phoneme boundary: At 50% /d/ labeling

    • Chinchillas: ~33.5 msec

    • Humans: ~35.2 msec

🧩 Key Finding

  • Chinchillas showed similar categorical perception of VOT contrasts as humans.

  • Suggests that categorical perception is not uniquely human, but may reflect general auditory processing abilities present in mammals.

This evidence supports the idea that basic building blocks of speech perception may have evolved from shared auditory processing mechanisms in animals.
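
The 50% crossover described above can be located numerically. A minimal sketch, using invented labeling percentages (not Kuhl & Miller's actual data) and linear interpolation between adjacent VOT steps:

```python
# Minimal sketch: find the VOT at which /d/ labeling crosses 50%,
# i.e. the phoneme boundary, by linear interpolation.
# The data points below are illustrative, not the study's.

def phoneme_boundary(vots, pct_d, criterion=50.0):
    """Return the VOT (ms) where %-/d/ labeling falls through the criterion."""
    for (v0, p0), (v1, p1) in zip(zip(vots, pct_d), zip(vots[1:], pct_d[1:])):
        if p0 >= criterion >= p1:  # labeling crosses 50% on this segment
            return v0 + (p0 - criterion) * (v1 - v0) / (p0 - p1)
    return None  # labeling never crosses the criterion

vots  = [0, 10, 20, 30, 40, 50, 60]   # voice onset time in ms
pct_d = [100, 98, 90, 62, 20, 5, 1]   # illustrative % labeled as /d/
boundary = phoneme_boundary(vots, pct_d)  # falls between 30 and 40 ms
```

Applied to chinchilla and human labeling curves separately, the same computation yields the two nearby boundaries (~33.5 vs. ~35.2 ms) the card reports.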

7
New cards

What are 3 things infants must learn for language → acoustic perception in humans

👶 What Infants Must Learn

  1. Distinguish phonemes that carry meaning (e.g., /k/ vs. /b/).

  2. Recognize allophones: Learn that /kɑː/ and /kɑʊ/ can be variations of the same phoneme.

  3. Adapt to variations in: Sex, age, dialect, speech rate, phonetic context

Phonemes: Smallest sound units in a language (e.g., /kɑʊ/ vs. /bɑʊ/).

Phones / Phonetic Units: Variations of phonemes influenced by accent or context

  • Example:

    • UK pronunciation: /kɑː/

    • US pronunciation: /kɑʊ/
      These are allophones—different sounds that don’t change meaning in context.

🧬 Innate Capacity

  • Humans are born with the ability to discriminate phonetic sounds.

  • This skill is refined with experience and exposure to a native language.

8
New cards

Describe the impact of environment input on early auditory and language development (infants)

  • Even with limited auditory exposure, infants develop:

    • General auditory mechanisms (infants)

    • Specialized mechanisms (adults)

👶 Prenatal Sensitivity (30 weeks pregnancy)

  • Fetuses at 30 weeks can hear airborne sounds.

  • These sounds lead to:

    • Heart rate acceleration

    • Body movements

📌 Implication:

  • The fetus is already responding to sound patterns before birth.

  • Language learning starts in utero, laying a foundation for postnatal language acquisition.

9
New cards

Describe the experiment on infants for rhythmic perception

  • 2-day-old infants exposed to Spanish and English.

  • Measurement: Duration of bursts per second over an 18-minute session, divided into 3 six-minute periods.

  • Findings:

    • Native language stimuli led to sustained or increased sucking activity, indicating more interest or familiarity.

    • Foreign language stimuli led to a decline in activity, especially toward the final 6-minute period.

  • Conclusion: Infants are born with a sensitivity to the rhythmic patterns of their native language, supporting the idea of early auditory learning or prenatal exposure → can differentiate between native and foreign languages based on rhythm

10
New cards

Describe experiment 1 (English-Japanese) and experiment 2 (English-Dutch) done with 5-day-old French babies → rhythmic perception

  • Experiment 1 (English-Japanese):

    • Languages belong to different rhythmic classes (stress-timed vs. mora-timed).

    • Experimental group (different language and speaker): a significant increase in sucking rate when language changed.

    • Control group (same language, different speaker): no significant change.

    • → Indicates infants can detect rhythmic differences across languages.

  • Experiment 2 (English-Dutch):

    • Both languages are stress-timed (same rhythmic class).

    • No significant change in sucking rates after the language switch.

    • → Infants cannot discriminate between languages of the same rhythmic class at 5 days old.

Key Point:

Discrimination between rhythmic classes is present at birth, but discrimination within a rhythmic class only emerges after ~4 months.

11
New cards

Describe experiment 3 on rhythmic (English+Dutch → Italian+Spanish) and non-rhythmic (English+Italian → Dutch+Spanish) groups done with 5-day-old French babies → rhythmic perception

Key Experimental Design:

  • Rhythmic group:

    • Language switch from English + Dutch → Italian + Spanish.

    • All languages in this condition differ rhythmically (e.g., stress-timed to syllable-timed/mora-timed).

    • Result: Significant increase in sucking rate, indicating discrimination based on rhythmic class.

  • Non-rhythmic group:

    • Language switch from English + Italian → Dutch + Spanish.

    • The languages do not differ in rhythmic class.

    • Result: No significant increase in sucking, indicating no discrimination when rhythmic contrast is absent.

Conclusion:

Newborns are sensitive to rhythmic properties of language even before they acquire linguistic experience. This sensitivity enables them to differentiate between rhythmic classes of languages — a foundational mechanism in early language acquisition.

12
New cards

What are the 6 core mechanisms involved in statistical learning in humans?

📈 Core Mechanisms Involved:

  • Regularity: Recognizing recurring patterns in sounds.

  • Probability: Estimating which sounds are likely to follow others.

  • Frequency distribution: Mapping how often sounds occur.

  • Categorical learning: Sorting input into meaningful categories (e.g. phonemes).

  • Transitional probability: Predicting upcoming sounds based on prior context.

  • Stress pattern: Using rhythm/stress to identify word boundaries.

🧠 Key Takeaways on Statistical Learning:

  • Infants learn language by tracking statistical regularities in the speech stream.

  • Infants are sensitive to the frequency distribution of sounds in language.

Hebbian learning principle: “Cells that fire together, wire together” → Neural connections are strengthened by repeated co-activation, supporting learning
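
Transitional probability, one of the core mechanisms above, can be made concrete with a short sketch. The three-syllable "words" below are invented for illustration, in the spirit of Saffran-style segmentation stimuli:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next syllable | current syllable) for every adjacent pair."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# A stream built from three invented "words"; within-word transitions are
# perfectly predictable, while transitions across word boundaries vary.
words = [["bi", "da", "ku"], ["pa", "go", "la"], ["tu", "pi", "ro"]]
stream = [syll for i in [0, 1, 2, 0, 2, 1] for syll in words[i]]

tp = transitional_probabilities(stream)
# TP is high within a word (e.g. bi -> da) and lower across word
# boundaries (e.g. ku -> pa), the cue infants can use to segment words.
```

Tracking exactly this statistic over the speech stream is what lets learners posit word boundaries where transitional probability dips.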

13
New cards

Describe categorical learning on 6-month-old infants

🧠 Categorical Learning via Statistical Input Study setup:

  • 👶 24 infants (6 months old)

  • Tested on distinguishing the speech sounds [da] and [ta]

  • Familiarization used a continuum of speech stimuli between [da] and [ta]

🔍 Two Conditions:

  1. Bimodal distribution (green line)

    • High exposure at endpoints of the continuum

    • Leads to better categorization of [da] vs [ta]

  2. Unimodal distribution (blue line)

    • High exposure at center of the continuum

    • Blurs category boundaries

📊 Outcome (Right graph):

  • Infants in the bimodal condition showed greater discrimination:

    • Longer looking time in "alternating" trials (novelty preference)

  • Infants in the unimodal condition treated sounds as more similar (less categorization)

🧩 Conclusion: Statistical learning → Categorical perception
Infants use frequency distributions to learn phoneme categories.
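
The distributional idea can also be sketched computationally: feed a tiny two-cluster learner exposure tokens drawn from a bimodal versus a unimodal frequency profile over an 8-step [da]-[ta] continuum (the counts are invented, not the study's data), and bimodal input yields more widely separated category centers:

```python
def two_means(xs, iters=20):
    """Tiny 1-D two-cluster learner: alternate assignment and mean update."""
    lo, hi = min(xs), max(xs)
    for _ in range(iters):
        near_lo = [x for x in xs if abs(x - lo) <= abs(x - hi)]
        near_hi = [x for x in xs if abs(x - lo) > abs(x - hi)]
        lo, hi = sum(near_lo) / len(near_lo), sum(near_hi) / len(near_hi)
    return lo, hi

def tokens(counts):
    """Expand per-step exposure counts into individual stimulus tokens."""
    return [step for step, n in enumerate(counts, start=1) for _ in range(n)]

# Illustrative exposure counts over continuum steps 1..8:
bimodal  = tokens([4, 16, 8, 2, 2, 8, 16, 4])   # mass near the endpoints
unimodal = tokens([2, 4, 8, 16, 16, 8, 4, 2])   # mass in the middle

lo_b, hi_b = two_means(bimodal)
lo_u, hi_u = two_means(unimodal)
# Bimodal exposure pushes the two learned category centers farther apart,
# mirroring the sharper [da]/[ta] discrimination in the bimodal condition.
```

The wider separation under bimodal exposure is the computational analogue of the infants' novelty preference in alternating trials.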

14
New cards

Describe the magnet effect in 6-month-olds → categorical learning

🧪 Experiment Setup

  • Infants: 6-month-olds

  • Method: Head-turn paradigm

  • Stimuli: Vowel sounds in the acoustic space

  • Focus: American English /i/ ("fee") and Swedish /y/ ("fy")

🗣 Main Concept

  • Infants are better at discriminating foreign vowel sounds than native prototypes because:

    • Prototypes act like magnets, pulling nearby sounds toward them perceptually.

    • This makes variations of the native sound harder to distinguish.

📊 Left Graph (Vowel Space):

  • Shows the acoustic distribution of stimuli.

  • Clusters around American /i/ (e.g. "fee") and Swedish /y/ (e.g. "fy").

📈 Right Graphs: Results

A) American Infants:

  • Better at discriminating Swedish /y/ (open circles)

  • Worse at discriminating native /i/ — magnet effect compresses the perception space.

B) Swedish Infants:

  • Opposite pattern: Better at discriminating American /i/ than their native /y/

🧠 Conclusion: Exposure to a language shapes auditory perception early — native vowel prototypes "magnetically" attract similar sounds, reducing discrimination.

15
New cards

Describe language specialization for consonants, using the classic /l/ vs. /r/ discrimination study

🧪 Task:

  • Infants tested on ability to distinguish between the English consonants /l/ and /r/.

  • Measured via percent correct discrimination using head-turn paradigm.

📉 Key Findings:

  • 6–8 months:

    • Both American and Japanese infants show similar ability to discriminate /l/ and /r/.

    • Suggests universal phonetic sensitivity early in development.

  • 10–12 months:

    • American infants improve their discrimination — exposure reinforces native language contrasts.

    • Japanese infants decline — due to lack of /l/-/r/ distinction in Japanese, perception narrows.

📌 Takeaway:

Language experience during infancy leads to phonetic tuning — infants lose sensitivity to non-native contrasts by their first year.

This phenomenon is often referred to as “perceptual narrowing” or “native language neural commitment”.

16
New cards

Describe how Japanese adults can significantly improve their /r/–/l/ discrimination with targeted training

🧠 Study Summary: /r/–/l/ Perception in Japanese Adults

  • Training duration: 15–22.5 hours

  • Groups:

    • Trained group (N=9)

    • Control group (N=7)

📊 Results:

Trained Group:

  • Pre-training: ~65% correct

  • Post-training: ↑ to ~82% correct

  • 3 months later: Performance maintained (~78%)

Control Group:

  • No substantial change across time points.

Key Insight:

Perceptual learning is possible in adulthood, even for difficult non-native contrasts (like /r/ vs. /l/), but requires focused training.

This supports the idea that native language neural commitment is not irreversible, though it becomes less flexible with age.

17
New cards

What are the 2 learning mechanisms of Native Language Neural Commitment (NLNC)?

NLNC refers to the process by which the brain becomes tuned to the native language through early experience. This tuning:

  • Helps encode and process familiar input efficiently.

  • Makes learning a second language (L2) more difficult if it deviates from L1.

📌 Mechanisms:

  • Statistical learning: Infants track the frequency of sounds, syllables, and word patterns.

  • Prosodic learning: Sensitivity to rhythm, intonation, and stress patterns.

🌍 Developmental Path:

  • Early in life: Brain is universally open to all language contrasts.

  • With age: Brain commits to native patterns, becoming language-specific.

🚧 Consequences for L2 Learning:

  • L1–L2 similarity: More similarity = easier learning.

  • Age of acquisition: Earlier = better.

  • L2 exposure/use: More input = better outcomes.

18
New cards

What are the 4 key factors influencing second language (L2) learning outcomes?

  1. Linguistic Levels Matter

    • Outcomes differ across levels (ex. phonology vs. syntax).

    • Ex. A learner might master L2 grammar but retain a foreign accent.

  2. Proficiency Affects Performance

    • At high proficiency, L2 comprehension ≈ L1 comprehension.

    • Syntactic complexity is equally manageable in L1 and L2 for advanced learners

  3. Explicit Training Can Improve Pronunciation

    • Especially useful for reducing foreign accent

  4. Motivation Drives Success

    • Motivated learners show better L2 acquisition and maintenance

Together, these findings emphasize that neural commitment is not destiny—with the right conditions (ex. training, proficiency, and motivation), adult learners can achieve native-like outcomes in certain domains.

19
New cards

Describe the role of social interaction in early language learning

👶 Participants:

  • 9-month-old infants

  • Learning Mandarin Chinese phonetic sounds

🧪 Experimental Setup:

a) Exposure Phase
Two conditions:

  • Live exposure: Native Mandarin speaker interacts in person with infant.

  • Audiovisual/TV exposure: Infants watch or hear the same material without a live speaker.

b) Testing Phase

  • Head-turn paradigm (HTP) used to measure phonetic discrimination (e.g., distinguishing Mandarin sounds).

📊 Results:

  • Live Mandarin exposure group showed significantly higher phonetic learning (above chance level).

  • TV and Audio exposure groups performed at chance level: no learning occurred.

🔍 Interpretation:

  • Social interaction is essential for language learning in infancy.

  • Passive exposure (even with the same content) is not sufficient.

  • Supports the idea that language learning is socially gated—humans need real social engagement to tune their phonetic system.

<p><span data-name="baby" data-type="emoji">👶</span> Participants: </p><ul><li><p class="">9-month-old infants</p></li><li><p class="">Learning <strong>Mandarin Chinese</strong> phonetic sounds</p></li></ul><p></p><p> <span data-name="test_tube" data-type="emoji">🧪</span> Experimental Setup: </p><p class=""><strong>a) Exposure Phase</strong><br>Two conditions:</p><ul><li><p class=""><strong>Live exposure</strong>: Native Mandarin speaker interacts in person with infant.</p></li><li><p class=""><strong>Audiovisual/TV exposure</strong>: Infants watch or hear the same material without a live speaker.</p></li></ul><p class=""><strong>b) Testing Phase</strong></p><ul><li><p class=""><strong>Head-turn paradigm (HTP)</strong> used to measure <strong>phonetic discrimination</strong> (e.g., distinguishing Mandarin sounds).</p></li></ul><p></p><p><span data-name="bar_chart" data-type="emoji">📊</span> Results (c): </p><ul><li><p class=""><strong>Live Mandarin exposure group</strong> showed <strong>significantly higher</strong> phonetic learning (above chance level).</p></li><li><p class=""><strong>TV and Audio exposure groups</strong> performed at <strong>chance level</strong>—<strong>no learning</strong> occurred.</p></li></ul><p></p><p><span data-name="mag" data-type="emoji">🔍</span> Interpretation: </p><ul><li><p class=""><strong>Social interaction is essential</strong> for language learning in infancy.</p></li><li><p class="">Passive exposure (even with the same content) is <strong>not sufficient</strong>.</p></li><li><p class="">Supports the idea that <strong>language learning is socially gated</strong>—humans need real social engagement to tune their phonetic system.</p></li></ul><p class=""></p>
20
New cards

SUMMARY

🍼 Age Categories:

  • Baby: 0–2 months

  • Infant: 3–12 months

  • Toddler: 12–36 months

🔬 Experimental Paradigms:

  • HAS (High Amplitude Sucking) – Used with babies (0–3 months)

  • Head-turn Paradigm (HTP) – Used with infants from 4–12 months

🧠 Core Components of Language Acquisition:

  1. Acoustic Perception (0–6 months)

    • Perception of phonetic units

    • Sensitivity to environmental input

    • Rhythmic perception

  2. Statistical Learning (6–11 months)

    • Categorical learning (e.g., phoneme boundaries)

    • Transitional probabilities and stress patterns

  3. Specialization (~11 months)

    • Consonant perception

    • Native Language Neural Commitment (NLNC)

💡 Additional Notes:

  • Social interaction is a key catalyst throughout development.

  • Many perceptual and statistical mechanisms are not exclusive to humans.

  • The learning process moves from universal sensitivity to language-specific specialization.