How do we localise sound?
1) Binaural cues require comparison of signals in left and right ears and are vital for signalling location of a sound in azimuth (left-right plane). There are two main ones:
- Interaural time differences (ITDs)
- Interaural level differences (ILDs)
2) Monaural cues work with one ear and can help localise the elevation (up-down plane) and distance of a sound. There are two main ones:
- Filter properties of the pinna (outer ear)
- Intensity & reverberation
Describe ITDs as a cue for localisation of sound
The relative time at which a sound arrives at the two ears depends on its location in the azimuth.
- If the sound source is straight ahead, the distance to each ear is the same and there is no difference in time
- When the source is positioned to one side, the sound will reach the nearer ear first
- Relies on phase-locking (neurones firing at a consistent point in the sound wave's cycle), as precise signalling of timing is required
- Most useful for low frequency or abrupt onset sounds (due to phase-locking)
What is the range of ITDs encountered based on?
- Speed of sound (how fast the sound travels; roughly constant at ~343 m/s in air)
- Distance between two ears (people with smaller heads would encounter a smaller range and vice versa)
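The two factors above can be put into numbers with a minimal sketch, assuming a simplified geometry in which the extra path to the far ear is d·sin(θ) for a source at azimuth θ (ignoring the head's curvature) and an illustrative inter-ear distance of 0.20 m:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, roughly constant in air at room temperature
EAR_DISTANCE = 0.20     # m, assumed inter-ear distance (smaller heads -> smaller range)

def itd_seconds(azimuth_deg: float) -> float:
    """ITD for a source at the given azimuth (0 deg = straight ahead)."""
    # Extra path length to the far ear, approximated as d * sin(theta)
    return EAR_DISTANCE * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

print(f"ITD at  0 deg: {itd_seconds(0) * 1e6:6.1f} us")   # straight ahead: no difference
print(f"ITD at 90 deg: {itd_seconds(90) * 1e6:6.1f} us")  # directly to one side: maximum
```

Under these assumptions the full range of ITDs spans only about 0-583 microseconds, and shrinks in proportion to head size, which is why such precise phase-locked timing is needed.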
Describe ILDs as a cue for localisation of sound
- The relative sound pressure level reaching the two ears also depends on the location of the source in azimuth
- A reduction in sound level occurs when the sound is off to one side, due to an acoustic shadow created by the head
- This reduction only occurs for high frequency sounds: the longer the wavelength, the more the sound diffracts (bends) around objects such as the head, so low frequency sounds cast little or no acoustic shadow
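The frequency dependence of the head shadow can be sketched by comparing wavelength (λ = c / f) against head width; the 0.2 m head width and the "wavelength shorter than the head" cutoff are illustrative assumptions, not exact anatomy:

```python
# Low-frequency sounds have wavelengths larger than the head, so they
# diffract around it and cast little shadow; high-frequency sounds have
# shorter wavelengths and are blocked, producing a usable ILD.
SPEED_OF_SOUND = 343.0  # m/s in air
HEAD_WIDTH = 0.2        # m, assumed

for freq_hz in (200, 500, 2000, 8000):
    wavelength = SPEED_OF_SOUND / freq_hz
    shadowed = wavelength < HEAD_WIDTH  # crude rule of thumb
    print(f"{freq_hz:5d} Hz -> wavelength {wavelength:5.3f} m, "
          f"head shadow (ILD useful): {shadowed}")
```

On this rough rule, ILDs only become informative above roughly 1.5-2 kHz, which is exactly where ITDs (a low-frequency cue) start to fail.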
Describe the physiology of binaural processing
Processing of ITDs and ILDs starts within the brainstem in the superior olivary complex.
Binaural localisation cues are processed by different types of neurones located in different parts of the superior olive:
- The lateral superior olive (LSO) contains neurones that are sensitive to ILDs
- The medial superior olive (MSO) contains neurones that are sensitive to ITDs
Describe the strengths and weaknesses of binaural cues
Positives:
- ITDs and ILDs provide complementary information about azimuth location (ITDs for low frequency, ILDs for high frequency)
Negatives:
- Provides ambiguous information about elevation of sound
- Tells us nothing about distance of sound
- For ITDs it can be difficult to tell if the sound is in front or behind within the azimuth
- A cone of confusion exists = a cone-shaped region of space in which every point produces identical ITDs and ILDs, so binaural cues alone cannot tell where on the cone a sound is coming from
Describe how the filter properties of the pinnae help us localise sound
- When sound bounces off the different parts of our outer ear, the relative intensity of the different frequency components changes.
- This filtering depends on the shape of our ear (each ear filters the frequency content of complex sounds in a slightly different way) as well as on where the sound is coming from (its elevation)
- Our brain learns how our own ear shapes incoming sound, so the resulting spectral pattern tells us whether the sound is coming from above or below
Describe how we can use relative intensity and reverberation to help us localise sound
We use these cues to help us tell how far a sound is from us
Relative intensity:
- Sound intensity decreases with distance, so closer objects will tend to have greater amplitudes than further ones
Reverberation:
- The way in which sound reflects off objects provides a cue to distance
- Multiple reflections combine to produce a persistent sound called reverberation
- The distance of a source alters the relative intensity and timing of direct (sounds that come straight from the source) and reverberant sounds
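The relative intensity cue can be made concrete with a minimal sketch, assuming an idealised point source in a free field, where intensity follows the inverse-square law and level therefore drops about 6 dB per doubling of distance:

```python
import math

def level_drop_db(near_m: float, far_m: float) -> float:
    """Level difference (dB) between two distances, inverse-square model."""
    # Intensity ~ 1/r^2, so level difference = 10*log10((far/near)^2)
    return 20 * math.log10(far_m / near_m)

print(f"1 m -> 2 m: {level_drop_db(1, 2):.1f} dB quieter")
print(f"1 m -> 4 m: {level_drop_db(1, 4):.1f} dB quieter")
```

Real rooms are not free fields: reverberant energy stays roughly constant across the room, so the direct-to-reverberant ratio falls with distance, providing the second distance cue above.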
How can we localise sounds within rooms when there are reverberations
The precedence effect:
- When a sound reverberates, our sense of where it is coming from is dominated by the initial wave front (similar sounds arriving in quick succession from different locations are localised according to the direction of the first-arriving sound)
- This only works within a particular range: as long as the reflections arrive within 10-20 ms of the direct sound, only a single sound is perceived
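A hedged sketch of that fusion window, using assumed path lengths (a 3 m direct path and a 6 m wall reflection) to check whether a reflection arrives within the ~20 ms precedence window:

```python
SPEED_OF_SOUND = 343.0  # m/s in air
FUSION_WINDOW_MS = 20.0  # upper end of the 10-20 ms range from the card

def reflection_delay_ms(direct_m: float, reflected_m: float) -> float:
    """Delay of a reflected copy relative to the direct sound, in ms."""
    return (reflected_m - direct_m) / SPEED_OF_SOUND * 1000

# Assumed example: direct path 3 m, reflection off a wall travels 6 m total
delay = reflection_delay_ms(3.0, 6.0)
print(f"reflection delay: {delay:.1f} ms")
print(f"fused as one sound (precedence effect): {delay < FUSION_WINDOW_MS}")
```

With these assumed distances the reflection lags by under 9 ms, so it is fused with the direct sound and localised to the direct sound's direction.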
Describe what is meant by auditory scene analysis
- Natural environments often contain multiple sound sources
- The auditory system needs to make sense of the mixture of component sounds that makes up the auditory scene
- It needs to segregate (keep apart) the components of the sound that come from different sound sources
- Also need to group the components of the sound that come from the same sound source
What are the strategies for auditory grouping/segregation?
1) Spectral grouping = combining different frequency sound components that occur at the same time
2) Sequential grouping = combining sequences of sounds over time
Outline spectral grouping
Frequency components that occur together are more likely to be grouped into a single sound, as they have probably been caused by the same sound-producing event.
However, for complex sounds, our brains also use:
- Harmonicity: if a component is mistuned to other components it will be heard as a separate sound. The components that are tuned harmonics will be heard as the same sound.
- Common frequency change: frequency components that change together tend to group together
Outline sequential grouping
- Grouping of sounds over time is known as auditory stream segregation.
- Tendency to either group or segregate sequences depends on a number of factors including:
1) Similarity of pitch and/or timbre
2) Temporal proximity
3) Continuity
Outline similarity of pitch and timbre as a factor of sequential grouping
Pitch:
- Sounds with similar pitch are often produced by the same source
- Increasing frequency difference (altering the pitch) promotes stream segregation
Timbre:
- Sound source often have distinct timbre (e.g. musical instruments) providing a good cue for stream segregation
- Sounds that have similar sources will have similar harmonics so will be grouped together
- If the same pitch (i.e. same frequency) is used but the notes are played on different instruments there will be different harmonic properties (distinct timbres) so the components will be segregated
When a cycle of sounds is matched in pitch and timbre, a combined melody and rhythm is created.
Outline temporal proximity as a factor of sequential grouping
- Sounds that occur in rapid progression tend to be produced by the same source
- Increasing presentation rate (decreasing the time between sounds) promotes stream segregation
Outline continuity as a factor of sequential grouping
- Sounds that stay constant or change smoothly are often produced by the same source
- Such sounds are perceived as continuous even when interrupted by noise
- Continuity can be applied to speech sounds (phonemic restoration): interrupting speech with silence makes it difficult to understand, but filling the gaps with noise restores intelligibility
A sound arrives at your ears with zero interaural time difference. Where could it have been located in space?
- Immediately in front of you
- Immediately behind you
- Immediately above your head
(anywhere on the median plane, equidistant from both ears)
Artist Vincent van Gogh famously severed part of his left ear. What aspect of his ability to localise sound do you think was most likely to be affected and why?
Elevation - the filter properties of the pinna (sound bouncing off different parts of the outer ear alters the relative intensity of different frequency components, in a way that depends on the source's elevation). Losing part of the pinna would disrupt this filtering and so impair his ability to judge elevation.