
Chapter 11 - Language: Structure, Comprehension, and Music Connections

What is Language?

  • Language is a communication system using sounds or symbols to express feelings, thoughts, ideas, and experiences.
  • Animal communication is more rigid and limited compared to human language.
  • Human language is characterized by creativity, allowing for the creation of new and unique sentences.
  • Key properties of human language:
    • Hierarchical structure: small components combine to form larger units (words, phrases, sentences, stories).
    • Rule-based nature: components can be arranged in permissible ways but not others.
  • Language is primarily used for communication.
  • The need to communicate with language is universal across all people.
    • Deaf children invent sign language in environments where nobody speaks or uses sign language.
    • All humans with normal capacities develop language and learn its complex rules, often unconsciously.

Universality of Language

  • Language is universal across cultures; over 5,000 different languages exist worldwide.
  • Isolated cultures, like those in New Guinea, have developed numerous and diverse languages.
  • Language development is similar across cultures.
    • Babbling starts at approximately 7 months.
    • First meaningful words appear around the first birthday.
    • First multiword utterances occur around age 2.
  • Languages are unique in their specific words, sounds, and rules, yet they share common features.
    • All languages have nouns and verbs.
    • All languages include systems for negation, questions, and references to past and present.

Studying Language

  • The scientific study of language began with Paul Broca (1861) and Carl Wernicke (1874).
    • Broca proposed that Broca's area, in the frontal lobe, is responsible for language production.
    • Wernicke proposed that Wernicke's area, in the temporal lobe, is responsible for language comprehension.
  • Behavioral research on language in the 1950s was influenced by behaviorism.
    • B. F. Skinner (1957) proposed in Verbal Behavior that language is learned through reinforcement.
    • Children are rewarded for correct language use and punished for incorrect language use.
  • Noam Chomsky (1957) argued in Syntactic Structures that human language is genetically coded.
    • Humans are programmed to acquire and use language, similar to being programmed to walk.
    • Chomsky believed the underlying basis of all languages is similar.
    • He disagreed with behaviorism and viewed studying language as a way to study the mind.

Chomsky vs. Behaviorism

  • Chomsky criticized Skinner’s behaviorist view of language in 1959.
    • Children produce sentences they have never heard and that have never been reinforced.
    • Example: "I hate you, Mommy."
  • Chomsky's criticism of behaviorism was crucial in the cognitive revolution.
  • Psycholinguistics: the psychological study of language.

The Goals of Psycholinguistics

  • Discover the psychological processes by which humans acquire and process language.
  • Four major concerns of psycholinguistics:
    • Comprehension: Understanding spoken and written language.
    • Representation: How language is represented in the mind.
    • Speech production: How people produce language.
    • Acquisition: How people learn language.
  • The focus is restricted to comprehension and representation.

Understanding Words: A Few Complications

  • Lexicon: all the words we know, our “mental dictionary.”
  • Semantics: the meaning of language; lexical semantics is the meaning of words.
  • Determining word meaning is more complex than simply looking it up in our lexicon.

Not All Words Are Created Equal: Differences in Frequency

  • Word frequency: how often a word appears in a language.
    • Example: "home" occurs 547 times per million words; "hike" occurs 4 times per million words.
  • Word frequency effect: we respond more rapidly to high-frequency words.
  • Lexical decision task: deciding quickly whether letter strings are words or nonwords.
    • Slower responding to low-frequency words.
  • Eye movements measured during reading show longer fixations on low-frequency words.
  • Rayner and Duffy (1986): low-frequency words had longer first fixation durations and total gaze durations.
    • Longer fixations may be needed to access the meaning of low-frequency words.
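The per-million figures above are simply normalized corpus counts. A minimal sketch of that arithmetic, using invented counts from a hypothetical two-million-token corpus (the function name and the raw counts are assumptions for illustration, not real corpus data):

```python
# Hypothetical sketch: how word frequencies like "547 per million" are
# computed, by normalizing raw corpus counts per million tokens.
# The counts below are invented for illustration.

def per_million(count, total_tokens):
    """Normalize a raw count to occurrences per million tokens."""
    return count / total_tokens * 1_000_000

total = 2_000_000  # pretend corpus size in tokens
counts = {"home": 1094, "hike": 8}  # invented raw counts

for word, n in counts.items():
    print(word, per_million(n, total))  # home 547.0, hike 4.0
```

A lexical decision or eye-tracking study would then compare response times for words at the high and low ends of this scale.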

The Pronunciation of Words Is Variable

  • People pronounce words differently (accents, speed).
  • Relaxed pronunciation in natural speech.
    • Example: "Did you" vs. "Dijoo".
    • 50 different ways to pronounce "the" (Waldrop, 1988).
  • Context helps understand variable pronunciations.
  • Pollack and Pickett (1964): Words are harder to understand when taken out of context.
    • Participants could only identify half the words from their own conversations when presented in isolation.

No Silences Between Words in Normal Conversation

  • Words spoken in sentences are usually not separated by silence.
  • Physical energy record shows no breaks between words.
  • Saffran and coworkers (2008): Infants are sensitive to statistical regularities in the speech signal which aids speech segmentation.
  • Knowing a language helps in distinguishing individual words.
  • Meaning is responsible for organizing sounds into words.
    • "Jamie’s mother said, ‘Be a big girl…’" vs. "The thing Big Earl loved…"
    • "I scream" vs. "ice cream".
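The statistical regularities Saffran and coworkers studied can be captured by transitional probabilities: the probability that one syllable follows another is high within a word and low across a word boundary. A minimal sketch of this idea, using invented syllables and made-up "words" (everything in the stream is an assumption for illustration):

```python
# Minimal sketch of the statistical-learning principle behind speech
# segmentation: transitional probabilities between syllables are high
# within words and low across word boundaries. Syllables are invented.

import random
from collections import Counter

random.seed(0)

# Three made-up "words," concatenated in random order with no pauses,
# mimicking a continuous speech stream.
words = [["ba", "do", "ku"], ["ti", "me", "la"], ["pi", "go", "ra"]]
stream = []
for _ in range(200):
    stream.extend(random.choice(words))

syllable_counts = Counter(stream)
pair_counts = Counter(zip(stream, stream[1:]))

def transitional_probability(x, y):
    """P(y follows x) = count of pair (x, y) / count of x."""
    return pair_counts[(x, y)] / syllable_counts[x]

# Within a word, the next syllable is fully predictable...
print(transitional_probability("ba", "do"))  # 1.0
# ...but across a word boundary, any of the three words may follow.
print(transitional_probability("ku", "ti"))  # roughly 1/3
```

Dips in transitional probability mark candidate word boundaries, which is one way a listener who knows no words could begin to segment the stream.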

Factors Affecting Word Understanding

  • Word frequency.
  • Context.
  • Statistical regularities of language.
  • Knowledge of word meanings.
  • Knowledge achieved through learning/experience with language is crucial.

Understanding Ambiguous Words

  • Lexical ambiguity: Words can have multiple meanings.
    • Example: "bug" (insect, listening device, annoying).
  • Context determines the applicable definition.
    • "My mother is bugging me" implies annoyance.

Accessing Multiple Meanings

  • Tanenhaus and coworkers (1979) showed that people briefly access multiple meanings of ambiguous words before context takes over.
  • Priming: one stimulus makes responding to a related stimulus easier.
  • Repetition priming: priming that occurs when the same word is repeated.
  • Lexical priming: priming involving word meanings.
    • Example: "rose" followed by "flower" vs. "cloud" followed by "flower".

Tanenhaus Experiment

  • Noun-noun condition: word presented as a noun followed by a noun probe stimulus.
  • Verb-noun condition: word presented as a verb followed by a noun probe stimulus.
  • Example: "She held the rose" (noun) followed by "flower".
  • Control condition: "She held a post" followed by "flower".
  • Reaction time: Time elapsed between the end of the sentence and when the participant began saying the probe word.

Findings of the Lexical Priming Experiment

  • Rose (flower) resulted in a 37 msec faster response to "flower" than in the control condition.
  • Priming occurred even when "rose" was used as a verb (They all rose) followed by the probe "flower".
  • The "flower" meaning of "rose" is activated immediately after hearing "rose", whether it is a noun or verb.
  • After a 200 msec delay, priming still occurred for the noun condition but not for the verb condition.

Frequency Influences Which Meanings Are Activated

  • Meaning dominance: How frequently different meanings occur.
  • Biased dominance: One meaning occurs more often.
    • Example: "tin" (metal vs. container).
  • Balanced dominance: Meanings are equally likely.
    • Example: "cast" (members of a play vs. plaster cast).

Biased vs Balanced Dominance

  • Balanced dominance (e.g., cast): both meanings are activated, resulting in longer fixation times.
    • The cast worked into the night.
  • Biased dominance (e.g., tin): only the dominant meaning is activated, resulting in fast access.
    • The tin was bright and shiny.
  • Context influences meaning accessibility.
  • Less frequent meaning of tin: context increases activation of the less frequent meaning, resulting in slower access.
    • The miners went to the store and saw that they had beans in a tin.
  • When the context supports only the dominant meaning of tin, access is fast and the sentence reads more quickly.
    • The miners went under the mountain to look for tin.

Factors Influencing Meaning Access

  • Frequency of a word.
  • Context of the sentence.
  • Combination of meaning dominance and context.

Understanding Sentences

  • Sentences create context for dealing with pronunciation variability, perceiving words, and determining ambiguous word meanings.
  • Syntax: sentence structure.
  • Syntax involves discovering cues that languages provide in order to show how words in a sentence relate to one another (Traxler, 2012).
  • Meaning unfolds over time as we hear a sentence.
  • Parsing: Grouping words into phrases to determine meaning.

Parsing: Making Sense of Sentences

  • Understanding a sentence involves understanding each word and parsing words into phrases.
  • Example: After the musician played the piano…
    • Possible continuations: she left the stage, she bowed to the audience, the crowd cheered wildly.
    • Incorrect continuation: was wheeled off of the stage.
  • Garden path sentences: Sentences that appear to mean one thing initially but end up meaning something else, illustrating temporary ambiguity.
  • Garden path example: After the musician played the piano was wheeled off of the stage.

The Garden Path Model of Parsing

  • Early proposal to explain parsing and garden path sentences.
  • Proposed by Lynn Frazier (1979, 1987).
  • Grouping of words into phrases is governed by processing mechanisms called heuristics.
    • Heuristics: Rules applied rapidly to make decisions about sentence structure.
    • Fast but sometimes result in wrong decisions.
  • After the musician played the piano was wheeled off the stage illustrates that the initial parse can be incorrect and require reconsideration.
  • Parsing rules are based on syntax (structural characteristic of language).

Principle of Late Closure

  • Parsing mechanism assumes a new word is part of the current phrase.
  • Each new word is added to the current phrase for as long as possible.
    • After the musician played the piano was wheeled off the stage illustrates late closure leading to incorrect parsing.
  • The sentence must be re-parsed as [After the musician played] [the piano was wheeled off the stage].

Constraint-Based Approach to Parsing

  • Information in addition to syntax participates in processing as a person reads or hears a sentence.
  • Information contained in the words of a sentence, and in the context within which a sentence occurs, is used to make predictions about how the sentence should be parsed (Kuperberg & Jaeger, 2015).
  • Influence of word meaning:
    • The defendant examined by the lawyer was unclear.
    • The evidence examined by the lawyer was unclear.

Influence of Word Meaning (Cont.)

  • "The defendant examined" has two possible interpretations (the defendant could be examining something or being examined); "The evidence examined" has only one, because evidence can only be examined.
  • Another example:
    • The dog buried in the sand was hidden.
    • The treasure buried in the sand was hidden.

Influence of Story Context

  • The horse raced past the barn fell is a famous garden path sentence; it is confusing without context.
  • Adding context makes parsing easier:
    • There were two jockeys who decided to race their horses. One raced his horse along the path that went past the garden. The other raced his horse along the path that went past the barn. The horse raced past the barn fell.

Influence of Scene Context

  • Visual world paradigm: Investigates how observing objects in a scene influences sentence interpretation.
  • Tanenhaus et al. (1995) measured eye movements as participants saw objects on a table.
    • Instructions: "Place the apple on the towel in the box."
  • The ambiguous sentence leads participants to look at the wrong place first.
  • Non-ambiguous sentence (Move the apple that’s on the towel to the box) leads to immediate focus on the box.

Influence of Memory Load and Prior Experience with Language

  • Consider the following sentences:
    • The senator who spotted the reporter shouted
    • The senator who the reporter spotted shouted
  • These illustrate subject-relative and object-relative constructions.
  • Subject-relative construction:
    • Main clause: The senator shouted.
    • Embedded clause: The senator spotted the reporter.
  • Object-relative construction:
    • Main clause: The senator shouted.
    • Embedded clause: The reporter spotted the senator.
  • High memory load in object-relative clauses slows processing.
    • Object-relative constructions are less prevalent in English.
  • Experience with subject-relative constructions makes them easier to understand and anticipate.

Prediction

  • People make predictions about what is likely to happen next in a sentence.
  • Incorrect predictions can lead down the garden path.
  • Most predictions are correct.
  • Correct predictions help us deal with the rapid pace of language.
  • Prediction is important when language is degraded (poor connection, noisy environment, foreign accent).
  • Altmann and Kamide (1999) measured eye movements to show that participants make predictions as they hear a sentence.

Prediction: Examples from Altmann and Kamide

  • Example: "The boy will move the cake" vs. "The boy will eat the cake".
  • Eye movements toward the target object (cake) occurred before hearing the word cake in the "eat" sentence.
  • "Eat" leads to the prediction that "cake" will be the next word.

Understanding Text and Stories

  • Stories are more than the sum of individual sentences; relationships exist between sentences.
  • Inferences: Determining what the text means by using our knowledge to go beyond the information provided by the text.

Making Inferences

  • An early demonstration of inference in language was an experiment by John Bransford and Marcia Johnson (1973).
  • Bransford and Johnson had participants read passages and then tested them to determine what they remembered.
  • Participants often "remembered" inferred information as though it had actually been presented in the passage.
  • People use a similar creative process to make a number of different types of inferences as they read a text.
  • Narrative: text in which a story progresses from one event to another, although stories can also include flashbacks to events that happened earlier.
  • Coherence: the representation of the text in a person's mind that creates clear relations between parts of the text and between parts of the text and the main topic of the story.
  • Coherence can be created by a number of different types of inference.

Types of Inference

  • Anaphoric inference: Inferring that pronouns refer to previously mentioned nouns.
    • Example: "Riffifi, the famous poodle, won the dog show. She has now won the last three shows she has entered."
    • Another example of anaphoric inference: "… we really love to … go down to our ranch. … I take the kids out and we fish. And then, of course, we grill them." (Stevens, 2002)
  • Instrument inference: Inferring the tools or methods used.
    • Example: William Shakespeare wrote Hamlet at his desk (using a quill pen).
  • Causal inference: Inferring that events in one clause or sentence were caused by events in a previous sentence.
    • Example: Sharon took an aspirin. Her headache went away.
  • Inferences create connections for coherence and involve creativity.
  • Reading involves transformation of words and sentences into a meaningful story.

Situation Models

  • Mental representations formed as stories are read.
  • Representations of people, objects, locations, and events in the story.
  • The runner jumped over the hurdle probably brings up an image of a runner on a track, jumping a hurdle.
  • Situation model: Simulates perceptual and motor characteristics of objects and actions in a story.
  • Stanfield and Zwaan (2001): Participants responded faster when picture orientation matched the sentence.
    • He hammered the nail into the wall (horizontal nail picture).
    • He hammered the nail into the floor (vertical nail picture).

Simulation of Object Shape

  • Sentences: The ranger saw the eagle in the sky vs. The ranger saw the eagle in its nest.
  • Pictures: Eagle with wings outstretched vs. eagle with wings folded.
  • Zwaan et al. (2002): Reaction times were faster when the picture matched the situation described in the sentence.

Accessing World Knowledge during Reading

  • Metusalem et al. (2012) measured ERP while participants read stories.
  • N400 wave: Larger response when a word is unexpected.
    • Concert Scenario:
      • The band was very popular and Joe was sure the concert would be sold out. Amazingly, he was able to get a seat down in front. He couldn’t believe how close he was when he saw the group walk out onto the (stage/guitar/barn) and start playing.
  • Findings: stage (expected word, small N400), guitar (event-related word, medium N400), barn (event-unrelated word, large N400).
  • Knowledge about situations is continually accessed during reading.
  • Knowledge is accessed rapidly (within fractions of a second).

Action and Reading

  • Situation model includes motor characteristics of objects in a story.
  • Reading about a movement elicits a mental simulation of that movement.
  • Hauk et al. (2004) showed overlapping brain activation for action words and related movements.
    • Leg words/movements vs. arm words/movements.
  • The overall conclusion from research on how people comprehend stories is that understanding a text or story is a creative and dynamic process.

Having Conversations

  • Conversation: two or more people talking with one another; it is the most common form of language production.
  • Easy with familiar people, difficult with new acquaintances.
  • Awareness of the other person’s knowledge is important.

The Given–New Contract

  • Speakers construct sentences with given (known) and new (unfamiliar) information.
  • Example:
    • Ed was given an alligator for his birthday.
    • The alligator was his favorite present.
  • Haviland and Clark (1974) demonstrated that the second sentence of a pair takes longer to comprehend when its given information is not directly stated in the preceding sentence.
  • Collaboration is central to language understanding (Clark, 1996).

Common Ground: Taking the Other Person into Account

  • Mental knowledge and beliefs shared among conversational parties.
  • Each person accumulates information about the topic and about what the other person knows.
  • Conversations go more smoothly if you know as much as possible about the other person.
  • Doctors who communicate well use lay terminology with patients who have limited medical knowledge.
  • Researchers study how people establish common ground during a conversation.
  • This is often done by analyzing transcripts of conversations and determining how the participants reconstruct the events they are talking about.
  • People do not often speak in full sentences.

Referential Communication Task

  • Two people exchanging information in a conversation.
  • In Stellman and Brennan's experiment, A (the director) and B (the matcher) had identical sets of 12 cards with pictures of abstract geometrical objects.
  • A arranged the cards in a specific order, and B's task was to arrange her cards in the same order.
  • Over the course of the task, the descriptions of the cards became more brief.
  • The participants came to know each other's references and could refer to the cards by the names they had created.
  • This process of creating common ground results in entrainment: the synchronization of the two partners' descriptions.
  • Conversation depends on understanding what the other person knows.

Syntactic Coordination

  • People using similar grammatical constructions during conversation.
  • Bock (1990) illustrated with a bank robber and his lookout copying syntactic form.
  • Syntactic priming: Hearing a statement with a particular syntactic construction increases the chances that a sentence will be produced with the same construction.
  • Holly Branigan showed that participants were likely to match the syntax on 78 percent of trials when given a priming statement.

Syntactic Priming: Branigan et al. (2000)

  • Two people participate in a conversation behind a screen.
  • Person A, working with the experimenter, makes a priming statement.
    • "The girl gave the book to the boy" or "The girl gave the boy the book".
  • Person B (the participant) then describes a card; whether B's description matches the syntactic construction of A's priming statement indicates whether syntactic priming has taken place.

Conversation Summary

  • Conversations are dynamic, and several processes make them easier.
  • On the semantic side, people take other people’s knowledge into account and help establish common ground if necessary.
  • On the syntactic side, people coordinate or align the syntactic form of their statements.
  • People plan what to say while simultaneously understanding the other person’s input.
  • Theory of mind, the ability to understand what other people feel, think, or believe, plays a central role in conversations.
  • Meaning can be obtained through a person’s gestures, facial expressions, tone of voice, and other things that provide cues to meaning.
  • Each person needs to anticipate when it is appropriate to enter the conversation, a process called "turn-taking."

Music and Language: Similarities and Differences

  • Connections between music and language extend beyond song and speech to music and language in general.

Emotion

  • Emotion is a central player in both.
  • Music has been called the “language of emotion,” and people often state that emotion is one of the main reasons they listen to music.
  • Emotion in language is often created by prosody—the pattern of intonation and rhythm in spoken language (Banziger & Scherer, 2005; Heffner & Slevc, 2015).
  • Orators and actors create emotions by varying the pitch of their voice and the cadence of their words, speaking softly to express tenderness or loudly to emphasize a point or to capture the audience’s attention.
  • Emojis can help convey emotion in written language, standing in for the prosodic cues that readers cannot hear.

Structured Sequences

  • Similarity: Both combine elements to create structured sequences.
  • Differences: Notes are combined based on sound, while words are combined based on meaning.
  • No analogues for nouns and verbs in music; music has no "who did what to whom".
  • Structured sequences are organized into phrases and are governed by syntax—rules for arranging these components (Deutsch, 2010).

Expectations in Music and Language

  • Listeners make predictions in both language and music.
  • Music is organized around a central tone, the tonic, which defines the key of the composition.

Return to Tonic

  • The key provides a framework within which a listener generates expectations about what might be coming next. One common expectation is that a song beginning on the tonic will end on the tonic, as occurs in "Twinkle, Twinkle, Little Star."

Patel and Musical Syntax

  • Aniruddh Patel (1998) had participants listen to musical phrases and judge acceptability of target chords.
  • Chords used (indicated by the arrow above the music):
    • "In key" chord.
    • "Nearby key" chord.
    • "Distant key" chord.
  • P600 occurs to violations of musical syntax.
  • Results: out-of-key chords received lower grammaticality ratings and elicited a larger P600 ERP response.
  • Demonstrates that music, like language, has a syntax that influences how we react to it.

Do Music and Language Overlap in the Brain?

  • Studying patients with brain damage has been a standard way of determining whether music and language depend on the same brain mechanisms.

Stroke Patients

  • Patel et al. (2008) studied stroke patients with Broca’s aphasia (difficulty understanding complex syntax).
  • Language task: understanding syntactically complex sentences.
  • Music task: detecting off-key chords in a sequence of chords.
  • Findings: there was a connection between poor performance on the language task and poor performance on the music task, supporting shared brain mechanisms for music and language.

Congenital Amusia

  • Patients who are born having problems with music perception—a condition called congenital amusia—have severe problems with tasks such as discriminating between simple melodies or recognizing common tunes. Yet these individuals often have normal language abilities (Patel, 2013).

Robert Slevc

  • Robert Slevc and coworkers (2016) tested a 64-year-old woman who had Broca’s aphasia caused by a stroke. She had trouble comprehending complex sentences and had great difficulty putting words together into meaningful thoughts. Yet she was able to detect out-of-key chords in sequences like those presented by Patel (Figure 11.16a).

From a Neuroimaging Perspective

  • Neuroimaging evidence indicates that different brain areas can be involved in music and language (Fedorenko et al., 2012).
  • Broca’s area, involved in language syntax, is also activated by music (Fitch & Martins, 2014; Koelsch, 2005, 2011; Peretz & Zatorre, 2005).
  • There is evidence that music and language activation can occur within an area but involve different neural networks (Figure 11.20) (Peretz et al., 2015).