What does brain scanning data reveal about reading and listening to speech? (Spitsyna, 2006)
overlap in the left temporal lobe
What did the Spitsyna (2006) researchers suggest?
there is a convergence region in the brain needed for processing language, and processing in this region comes after unimodal processing of the original sensory input
What do the Spitsyna (2006) data suggest?
the cognitive mechanisms underlying the processing of language are the same for both modalities (spoken vs. written)
What have more recent brain scans shown?
numerous networks of brain regions within and across the hemispheres are needed to perform the complex cognitive tasks involved in language, and some of these networks are used by both language modalities (spoken and written). In contrast, others are specific to the modality of the linguistic task
Cognitive neuropsychology researchers have provided many cases of individuals who can what?
speak normally but have difficulty reading, read but have speech problems, write but not speak fluently, etc.
The neuropsychological model proposed by Ellis and Young (1996) was based on what?
case studies with brain-damaged patients
What does the Ellis and Young (1996) model highlight?
while there are many cognitive components shared by written and spoken languages, there are also many independent cognitive modules that are distinct to the modality being used
What does speech perception involve?
decoding of the auditory speech signal into some form of abstract representation, then converting these representations into words, then combining the words into sentences, and finally, integrating the sentences into some form of meaningful discourse
What is the first challenging step in speech perception?
distinguishing speech sounds from the auditory noise they are embedded in and identifying these phonemes
The auditory receptor system in the ears is relatively ____?
slow
The segmentation (separation) of spoken words in the speech signal is helped by what?
some consistent linguistic aspects of a spoken language
What does speech perception benefit from?
supplementing the auditory sensory input with parallel visual input provided by the speaker in producing these speech sounds
What is the McGurk effect?
an effect that occurs when the visual information provided by the speaker (e.g., their lip movements) is incongruent with the speech sound being heard, changing what the listener perceives
What does the McGurk effect show?
how visual information from the speaker can influence the processing of the auditory input provided by the speaker during speech perception
What did O’Rourke and Holcomb (2002) compare?
word recognition for words with early ‘acoustic recognition’ points with recognition for words with late ‘acoustic recognition’ points
What did O’Rourke and Holcomb (2002) find?
word recognition was faster for early recognition point words
What do the O’Rourke and Holcomb (2002) results suggest?
the recognition of spoken words is very efficient even when relying on a slower serial processing auditory system
What did Fuller (2003) record?
the frequencies of different types of verbal hesitations during different types of discourse, finding that the relative frequencies of specific verbal hesitations varied as a function of the discourse context
What do the Fuller (2003) findings suggest?
verbal hesitations were being used by the speaker to help the listener in some way even if the speaker was unaware they were doing so
What did Fox Tree (2001) show?
word monitoring in a sentence is slower when the verbal hesitation “uh” is removed from the auditorily presented sentence
What are prosodic cues?
rhythm, stress, intonation
What did Nygaard (2009) ask participants to do?
say sentences containing made-up adjectives to convey a specific meaning to a listener (e.g., “blicket” means “tall”)
What did Nygaard (2009) find that participants did?
use specific prosodic cues (e.g., loudness, pitch) to convey specific meanings for these novel words
What does Levelt’s (1999) model of speech production suggest?
speech production is complex: the speaker moves from a conceptual representation of what they want to say, to choosing the correct word to convey that meaning, to breaking the word into its constituent morphemes, converting the morphemes to phonemes, converting the phonemes to their phonetic attributes, and then finally articulating the word.
Reading a word involves both ____ input and ____ input
orthographic, phonological
What does Coltheart’s (2001) model of reading show?
the multiple routes to reading aloud a written word
During reading, the eyes move forward from one fixation point (usually near the center of a written word) to another word throughout the written text, and this eye-movement is called ____
a saccade
What does Reichle’s (2003) E-Z Reader model highlight?
how complex eye-movements are to control while reading
What did Ashby (2003) reveal?
the fixation durations of highly skilled readers varied as a function of word predictability, and they made fewer regressions
What did the findings of Neuman (2014) NOT show?
that the training could develop reading skills in infants, despite the parents displaying great confidence in the training program’s effectiveness
Writers have to rely on wording of sentences to convey meaning without use of ____ cues
paralinguistic
Kaufer (1986) used the think-aloud methodology to compare what?
the cognitive processes used by expert and average writers to construct written sentences
What does the think-aloud procedure require participants to do?
verbalize their thoughts aloud while they are performing a cognitive task