Sound & Sign in Linguistics 111

Unit 2: Sound & Sign

Class 06: Sound & Sign
Linguistics 111

Announcements

  • Housekeeping: - Nothing was due before class today

    • RQ4 due next Monday at 11:59am ET

    • RQ5 due next Wednesday at 11:59am ET

    • Grades are generally released one week after the assignment is due


Recap: Why do we study language?

1. The Mind Creates Language
  • Understanding Yourself: - Sights and sounds become language only when they reach the human mind. The perceptual and cognitive systems actively process raw sensory data into meaningful linguistic units.

    • Linguistics is a cognitive science, employing standard scientific methods like observation, hypothesis testing, and data analysis to understand mental processes.

    • Linguists analyze the hidden mental rules and structures that underlie language, often referring to these as "universal grammar" or "innate linguistic capacities" that permit humans to acquire and use language.


2. Language Shapes Society
  • Connect with Others: - Language is omnipresent, profoundly influencing how we think, interact, and communicate. It is the primary vehicle for transmitting culture and knowledge.

    • It reflects and establishes social connections, acting as a marker of group identity, community membership, and social stratification. Language choices, such as dialect or register, can also contribute significantly to an individual's identity and how they are perceived by others.


3. All Language is Good Language
  • Promote Justice: - Natural forms of speech or signing that deviate from a perceived standard are often labeled incorrect, leading to linguistic discrimination. These judgments are typically based on social prejudices rather than linguistic facts.

    • This discrimination divides people, often along lines of race, gender, region, and socioeconomic class, reinforcing stereotypes and inequality.

    • From a linguist's scientific perspective, every form of natural language (i.e., any language developed and used by a human community) is equally valid and systematically structured. There is no linguistic basis for deeming one language or dialect superior to another.


Course Structure

Overview
  1. Doing Language Justice - Introduction to the concept of language and its societal roles.

    • Cornerstones:

      • Language & Mind: How language is represented and processed in the brain.

      • Language & Society: The role of language in social interaction and cultural identity.

      • Language & Justice: Combating linguistic discrimination and promoting language diversity.

  2. Sign and Sound - The fundamental building blocks of language, whether spoken or signed.

    • Topics:

      • Phonetics: The physical production and perception of speech sounds and signs.

      • Phonology: The mental organization and patterning of sounds/signs in a language system.

      • Speech Perception: How the brain interprets acoustic signals as meaningful language.

  3. Root, Word, and Phrase - The structure of linguistic meaning and sentence formation.

    • Topics:

      • Morphology: The study of word structure and how words are formed from smaller units (morphemes).

      • Syntax: The rules governing how words combine to form phrases, clauses, and sentences.

      • Semantics: The study of meaning in language, from individual words to sentences.

      • Pragmatics: The study of how context influences the interpretation of meaning.

  4. Language in Action - Capstone studies exploring language diversity and application.

    • Topics:

      • Languages of the World: Overview of linguistic typology and diversity.

      • Multilingualism: The acquisition and use of multiple languages.

      • Signed Languages: Examination of their linguistic structure and cultural significance.

      • Linguistics at UMich: Career paths and research opportunities within the field.


Explanation of Course Structure
  • This order is typical in introductory linguistics courses, following a bottom-up approach.

  • It starts with the smallest meaningful units (sounds/signs) and systematically builds up to complex sentence structures and meanings, reflecting the hierarchical nature of language.


Language is Hierarchical

Concept of Hierarchy
  • Language is structured hierarchically, meaning that smaller units combine in specific ways to form larger, more complex structures.

    • This hierarchy ranges from:

      • Sounds/Signs (phonemes/phones in spoken language; parameters, sometimes called cheremes, in signed languages)

      • Morphemes/Words (the smallest meaningful units)

      • Sentences (structured combinations of words)

      • Meaning (derived from the combination and context of these units)

  • The internal structure of language is not random but forms a highly organized, hierarchical, and systematic network, which allows for infinite expression from a finite set of elements.


Sound & Sign Overview

Key Areas of Study
  1. Phonetics: - Study of the physical properties of speech sounds and signs. This includes their production (articulatory phonetics), their acoustic transmission (acoustic phonetics), and their perception (auditory phonetics).

    • Focused on the anatomical structures involved in sound production (e.g., vocal cords, tongue, lips) and perception (e.g., ear, auditory cortex).

  2. Phonology: - Study of how the mind organizes sounds and their interactions within a specific language system. It deals with the abstract mental representations of sounds.

  3. Speech Perception: - Study of how language sounds are interpreted and understood by listeners, converting complex acoustic signals into recognizable linguistic units.


Phonetics

Definition and Focus
  • Phonetics is the scientific study of the physical aspects of speech sounds and their signed language equivalents in human communication.

  • Key Types:

    • Articulatory Phonetics: Deals with how speech sounds are produced by speakers using the vocal apparatus (e.g., examining tongue position, jaw height, airflow).

    • Acoustic Phonetics: Analyzes the physical properties of those sounds as they travel through the air, often using instruments like spectrograms to visualize features like frequency, intensity, and duration.


Speech Sounds
  • Defined as the smallest discernible units of language (phones in spoken language, often represented by the International Phonetic Alphabet or IPA symbols for precision). These are the concrete, physical realizations.

  • In signed languages, phonetics studies the minimal units of signs, often referred to as parameters, such as handshape, movement, location, and orientation.


Types of Sounds
  1. Segmental: - Discrete, individual units of speech that can be arranged in sequence, forming the 'segments' of an utterance. These primarily include consonants and vowels, which form the core building blocks of syllables.

  2. Suprasegmental: - Properties that apply to units larger than a single segment, such as syllables, words, or even phrases. These include features like:

    • Stress: The relative emphasis given to certain syllables in a word (e.g., RE-cord vs. re-CORD).

    • Tone: The use of pitch to distinguish lexical or grammatical meaning (common in Mandarin Chinese, for example).

    • Intonation: The rise and fall of pitch in speech over phrases or sentences, conveying grammatical information or speaker attitude (e.g., distinguishing a question from a statement).

    • Length: The duration of a sound, which can be phonemically distinctive in some languages.


Goals of Phonetics Unit

  • Focus on making rigorous cross-linguistic comparisons of speech sounds, identifying both universal patterns and language-specific variations.

  • Aim to establish a universal phonetic alphabet (the IPA) that provides a consistent, unambiguous system for transcribing every sound found in human language, regardless of the language.

    • All natural languages share the same basic human anatomy for speech production, facilitating a common framework for phonetic description and comparison.

  • The need for a standardized system like the IPA is crucial to accurately document and analyze speech sounds across languages, overcoming the inconsistencies of orthography (spelling).


Phonology

Definition and Focus
  • Phonology is the study of the systematic organization and distribution of sounds in languages. It focuses on the abstract, mental system governing how sounds function to convey meaning.

  • It examines how the mind perceives, categorizes, and organizes sounds, exploring the underlying patterns and rules rather than just their physical properties.


Rules and Representation
  • Phonology primarily studies the rules that map our abstract mental sound representations (phonemes) onto the concrete acoustic realities (allophones) that we produce and hear.

  • Example of sound variation: - The word "little" can be pronounced differently (e.g., with a 't' sound [lɪtl̩] or a flap [lɪɾl̩] in American English) without changing its meaning. These variations are often predictable based on phonetic context.

    • However, changing a sound can significantly alter the meaning, as illustrated by the stark contrast in minimal pairs like "cheat" [tʃit] vs. "cheap" [tʃip], which differ only in the final [t] vs. [p]. This difference highlights the role of phonemes in distinguishing words.

    • Another example is the aspiration of voiceless stops in English: the /p/ in "pin" [pʰɪn] is aspirated, while the /p/ in "spin" [spɪn] is unaspirated. This is a predictable phonological rule.
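The aspiration pattern described above is rule-governed, which means it can be sketched as a simple mapping from phonemes to allophones. The following is a deliberately simplified illustration (the real English rule also depends on syllable stress, which is omitted here); the function name and transcription scheme are invented for this sketch.

```python
# Toy sketch of one part of the English aspiration rule:
# a voiceless stop /p t k/ surfaces as aspirated word-initially,
# but stays unaspirated after /s/. Transcriptions are simplified
# lists of IPA symbols, one segment per list element.

ASPIRATION_MARK = "\u02b0"  # superscript h, as in [pʰ]
VOICELESS_STOPS = {"p", "t", "k"}

def apply_aspiration(phonemes):
    """Map a phonemic transcription to surface allophones."""
    surface = []
    for i, seg in enumerate(phonemes):
        if seg in VOICELESS_STOPS and i == 0:
            # Word-initial voiceless stop: aspirated allophone.
            surface.append(seg + ASPIRATION_MARK)
        else:
            # Elsewhere (e.g., after /s/): unaspirated.
            surface.append(seg)
    return surface

print("".join(apply_aspiration(["p", "ɪ", "n"])))       # pʰɪn
print("".join(apply_aspiration(["s", "p", "ɪ", "n"])))  # spɪn
```

The point of the sketch is that speakers never memorize which words have [pʰ] and which have [p]; the choice of allophone follows automatically from context.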


Phonemic and Allophonic Distinction
  • Phoneme: - A mental representation of a sound. It is the smallest unit of sound that can distinguish one word from another in a particular language, existing as an abstract category in a speaker's mind. For example, /æ/ (as in "cat") is a phoneme in English.

  • Allophones: - The different acoustic or articulatory realizations of a single phoneme. Allophones are context-dependent variations of a phoneme that do not distinguish meaning in a given language. For instance, the nasalized vowel [æ̃] (as in "man" before /n/) and the oral vowel [æ] (as in "cat" before /t/) are allophones of the phoneme /æ/ in English.

    • Changes between allophones may not alter the semantic meaning of a word, but they can affect the naturalness or 'accent' of the pronunciation and may signal regional or social variations.


Concept of Minimal Pairs
  • Minimal pairs are two words that differ by only a single sound (a single phoneme) yet have completely different meanings.

  • Example: "cat" /kæt/ vs. "cot" /kɒt/ illustrates the phonemic distinction between /æ/ and /ɒ/. Another example is "pat" /pæt/ vs. "bat" /bæt/, demonstrating the phonemic contrast between voiceless /p/ and voiced /b/.
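The definition of a minimal pair is precise enough to check mechanically: two transcriptions of equal length that differ in exactly one segment. Here is a minimal sketch of that check; the function name and the segment-list representation are assumptions of this illustration, not a standard linguistic tool.

```python
def is_minimal_pair(a, b):
    """True if two phonemic transcriptions (lists of phoneme
    symbols, one segment per element) have the same length and
    differ in exactly one segment."""
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

print(is_minimal_pair(["k", "æ", "t"], ["k", "ɒ", "t"]))       # True: cat vs. cot
print(is_minimal_pair(["p", "æ", "t"], ["b", "æ", "t"]))       # True: pat vs. bat
print(is_minimal_pair(["k", "æ", "t"], ["k", "æ", "t", "s"]))  # False: cat vs. cats
```

Note that the comparison must run over phonemes, not spelling: "cough" and "cuff" are a minimal pair phonemically even though their orthographies differ in more than one letter.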


Speech Perception

Definition and Process
  • Speech perception is the study of how speech sounds are processed, perceived, and ultimately understood by the listener, involving complex cognitive processes that convert acoustic signals into linguistic messages.

  • Evidence shows the mind actively creates structure from linguistic input, often imposing order and meaning even when the acoustic signal itself is ambiguous or incomplete. This involves both bottom-up (data-driven) and top-down (knowledge-driven) processing.


Common Observations
  1. Mismatch between acoustic input and auditory perception: The perceived speech can differ significantly from the raw acoustic signal due to phenomena like coarticulation (sounds influencing their neighbors) or auditory illusions. For example, the phoneme /s/ in "spat" is acoustically different from a standalone /s/, yet our mind perceives it consistently as the same sound category. The McGurk effect, where visual input influences auditory perception (e.g., audio of /ba/ paired with video of a speaker mouthing /ga/ is often heard as /da/), is another strong illustration.

  2. Discrimination of sounds often occurs categorically: Listeners tend to perceive speech sounds not as a continuous spectrum, but as discrete categories. For instance, a range of acoustic variations for /p/ and /b/ are perceived as either distinct /p/ or distinct /b/, with a sharp boundary between them, affecting processing and comprehension. This categorical perception aids in rapidly interpreting the speech stream.
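Categorical perception can be pictured as a continuous acoustic cue being forced through a sharp category boundary. The sketch below uses voice onset time (VOT) for the /b/–/p/ contrast; the 25 ms boundary is an illustrative round number for English, not a measured constant, and real listeners show a steep but not perfectly step-like response.

```python
# Toy model of categorical perception: a continuous cue
# (voice onset time in milliseconds) maps onto discrete
# phoneme categories with a sharp boundary between them.

BOUNDARY_MS = 25  # illustrative boundary value, not a measured constant

def categorize(vot_ms):
    """Classify a stop consonant as /b/ or /p/ from its VOT."""
    return "p" if vot_ms >= BOUNDARY_MS else "b"

# A smooth continuum of stimuli yields an abrupt categorical split:
print([categorize(v) for v in range(0, 60, 10)])
# ['b', 'b', 'b', 'p', 'p', 'p']
```

The key observation is that stimuli at 0 ms and 20 ms sound like "the same /b/", while 20 ms and 30 ms, an equally small acoustic step, sound like different phonemes.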

  3. Multiple cues, including visual and contextual hints, impact speech perception: The brain integrates information from various sources. Alongside auditory signals, visual cues (e.g., lip movements), semantic context, and even expectations play a crucial role in how we perceive speech. This can lead to potential auditory illusions, where our perception is altered by non-auditory information, as seen in the McGurk effect.


Language Shapes Society

Sociolinguistic Aspects
  • Phonetics and phonology are critical in understanding linguistic variation (e.g., dialects, accents) and how they function in society.

  • They help distinguish language varieties not just geographically, but also socially, and carry meanings related to region, ethnicity, age, gender, and social status. For example, specific vowel pronunciations can immediately signal a speaker's origin or social group.

  • Variations in sound/sign influence important socio-cultural domains such as education (e.g., impact on literacy instruction), medicine (e.g., speech pathology and therapy for articulation disorders), and general societal navigation (e.g., forming judgments based on accent).

  • These fields are also important for language documentation and revitalization efforts, providing the tools to accurately record and teach endangered languages.


Example of Style-Shifting
  • Reference to a Key and Peele sketch showcasing how pronunciation changes (e.g., code-switching or style-shifting) can dramatically affect social perception and comprehension in different contexts.


All Language is Good Language

Attitudes towards Language Use
  • Attitudes about specific phonetic and phonological features (e.g., certain vowel shifts, consonant deletions, or intonation patterns) often shape beliefs about 'correctness' and 'appropriateness' in speech.

    • Examples include notions of 'slurred' speech, 'grating' pronunciations, or 'uneducated' accents, which are often tied to social biases rather than linguistic facts.

  • These perceptions are typically unscientific, biased, and harmful, contributing to linguistic discrimination against speakers of non-standard varieties. Linguistic evidence is essential to combat these stereotypes and biases surrounding language use related to gender, race, class, and regional origin.


Reflection
  • Students were asked to rate performances of speakers in a compare-and-contrast format, highlighting how subjective judgments based on phonetic/phonological features can influence perceptions of competence, intelligence, or trustworthiness.