Auditory Processing - Sound Localization - Key Concepts


1

Sound Localization: 3 components

ITD: interaural time difference

ILD: interaural level difference (level = intensity)
HRTF: head related transfer function

  • “interaural” cues (ITD and ILD) require input from both ears

2

ITD

only works for LOW frequencies

  • because when the wavelength is shorter than the distance between the two ears, the same phase of the wave can arrive at both ears = phase ambiguity, aliasing/mismatch (see the numeric sketch below)

    • the brain can’t match up corresponding parts of the waveform

  • coincidence detectors: delay lines feed an array of neurons, each with a different preferred interaural delay (the Jeffress model)
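
A minimal numeric sketch of why ITD only works at low frequencies (the 0.21 m interaural path and 343 m/s speed of sound are assumed round values): once more than one full cycle fits between the ears, the phase difference no longer picks out a unique direction.

```python
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 °C
EAR_DISTANCE = 0.21      # m, assumed approximate adult interaural path

# Largest possible ITD: sound arriving from directly beside one ear
max_itd_ms = EAR_DISTANCE / SPEED_OF_SOUND * 1e3
print(f"max ITD ≈ {max_itd_ms:.2f} ms")

for freq in [200, 800, 2000, 4000]:          # Hz
    wavelength = SPEED_OF_SOUND / freq       # m
    cycles_between_ears = EAR_DISTANCE / wavelength
    # More than one full cycle between the ears -> the same phase arrives
    # at both ears for multiple candidate directions: phase ambiguity.
    print(f"{freq:>4} Hz: wavelength {wavelength:.3f} m, "
          f"{cycles_between_ears:.2f} cycles, "
          f"ambiguous={cycles_between_ears > 1.0}")
```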

3

ILD

only works for HIGH frequencies

  • sound is louder in the ear nearer to the source because that ear points toward it, and ALSO because the skull’s acoustic shadow blocks intensity reaching the far ear

  • low frequencies bend (diffract) around the skull, so the head shadow, and with it the ILD, is smaller or absent

Interaural level difference is computed (in the lateral superior olive of the brainstem) by

convergence/summation: the input from one ear is excitatory, the input from the other is inhibitory. For the cell to fire, the net signal needs to be positive (see the sketch below).
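
A toy sketch of that excitatory/inhibitory convergence (illustrative levels only, not a physiological model):

```python
def ild_unit(level_ipsi_db, level_contra_db, threshold=0.0):
    """Toy excitatory-inhibitory convergence: the near (ipsilateral) ear
    excites the cell, the far (contralateral) ear inhibits it; the cell
    fires only when the net signal is positive."""
    net = level_ipsi_db - level_contra_db    # excitation minus inhibition
    return net > threshold, net

# Source off to one side: near ear louder (head shadow attenuates far ear)
fires, net = ild_unit(level_ipsi_db=65.0, level_contra_db=58.0)
print(f"ILD = {net:+.1f} dB -> fires: {fires}")

# Source straight ahead: equal levels, net signal zero -> no firing
fires, net = ild_unit(level_ipsi_db=60.0, level_contra_db=60.0)
print(f"ILD = {net:+.1f} dB -> fires: {fires}")
```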

4

HRTF (head related transfer function)

used for: sound in the “cone of confusion” (where front and back sources produce the same ITD and ILD) and for elevation

❖ Sound is filtered differently depending on the surface it hits: some frequencies are absorbed, some reflected, etc. (see the sketch after this card)

❖ Body and head position are known

❖ Frequency content of the sound changes in predictable manner (“the brain is looking for frequency bands in the signal that correspond to particular known directions of sound”)

❖ e.g. frequency amplitudes are a function of head and body position

adaptable: when a prosthesis is put on the pinna to change the known sound–ear interactions, at first only left vs. right is detected accurately, but after a while the new relationships are relearned
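
A minimal sketch of the HRTF idea with made-up impulse responses (real HRTFs are measured per listener and per direction): each direction filters the sound differently, so spectral shape becomes a direction cue.

```python
import numpy as np

rng = np.random.default_rng(0)
source = rng.standard_normal(4096)          # broadband source sound

# Hypothetical direction-specific impulse responses. These toy filters
# just show that each direction reshapes the spectrum differently.
hrir = {
    "front": np.array([1.0, 0.6, 0.3]),     # passes more low frequencies
    "back":  np.array([1.0, -0.6, 0.3]),    # passes more high frequencies
}

for direction, h in hrir.items():
    ear_signal = np.convolve(source, h)     # direction acts as a filter
    power = np.abs(np.fft.rfft(ear_signal, n=2048)) ** 2
    low, high = power[:512].sum(), power[512:].sum()
    # The brain can compare this spectral shape to learned templates
    # ("frequency bands that correspond to known directions").
    print(f"{direction}: high/low band energy ratio = {high / low:.2f}")
```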

5

Application of the binaural correlator model for ITD
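
A minimal sketch of the binaural correlator applied to ITD (made-up signal parameters): cross-correlate the two ear signals, and read the ITD off the best-matching lag; each lag acts as one coincidence detector’s preferred delay.

```python
import numpy as np

fs = 48000                                   # Hz, assumed sample rate
true_itd_s = 0.0004                          # the left ear lags by 0.4 ms
t = np.arange(0, 0.02, 1 / fs)

right = np.sin(2 * np.pi * 500 * t)          # 500 Hz: low enough for ITD
left = np.sin(2 * np.pi * 500 * (t - true_itd_s))   # delayed copy

# Each candidate lag plays the role of one coincidence detector with its
# own preferred interaural delay; the best-matching lag "fires" most.
max_lag = int(0.001 * fs)                    # search window: +/- 1 ms
lags = np.arange(-max_lag, max_lag + 1)
corr = [np.dot(right[max_lag:-max_lag],
               np.roll(left, -lag)[max_lag:-max_lag]) for lag in lags]

best_lag = lags[int(np.argmax(corr))]
print(f"estimated ITD: {best_lag / fs * 1e3:+.2f} ms "
      f"(true: {true_itd_s * 1e3:+.2f} ms)")
```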

6

tonotopic mapping in A1

❖ fMRI shows multiple tonotopic maps, like multiple retinotopic maps in vision

❖ Also shows relationship between tuning width and preferred frequency

—> vocal frequencies have narrower tuning! like the central visual field: high acuity and a bias/preference for behaviorally important input

7

auditory perception: grouping

similarities with Gestalt principles of visual grouping

similarity in frequency (range), timing (temporal proximity), good continuation (sound restoration, e.g., a tone heard as continuing through gaps filled with noise)

8

multisensory integration: vision and sound

goes all the way back to early visual cortex

e.g. perceiving a double flash when a single flash is paired with two tones

  • stronger effect when flash is in periphery

  • using the more reliable input: for timing, audition > vision

    • for spatial resolution vision > audition

McGurk effect: “occurs when a person hears an auditory syllable that is paired with visual information of a different, incongruent syllable. The brain automatically and unconsciously tries to reconcile the conflicting information, resulting in the perception of a third, illusory sound” through integration

—> if audition is uncertain, we rely on vision to determine what we heard (see the reliability-weighting sketch below)
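
A minimal sketch of “using the more reliable input” (standard inverse-variance cue combination; all numbers are illustrative): each modality’s estimate is weighted by its reliability, so vision dominates spatial judgments and audition dominates timing.

```python
def integrate(est_a, var_a, est_v, var_v):
    """Reliability-weighted (inverse-variance) combination of an auditory
    and a visual estimate: the less variable cue gets the larger weight."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    return w_a * est_a + (1 - w_a) * est_v

# Spatial location (deg): vision is more precise, so it dominates (~0.38)
print(integrate(est_a=10.0, var_a=25.0, est_v=0.0, var_v=1.0))

# Timing (ms): audition is more precise, so it dominates (~0.50)
print(integrate(est_a=0.0, var_a=1.0, est_v=50.0, var_v=100.0))
```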
