sound
air vibrations between 20 and 20,000 Hz are perceived as “this”
pitch
loudness
timbre
nature of the stimulus
sound is categorized along physical and perceptual dimensions
timbre
complexity of a sound that consists of the fundamental (lowest) frequency plus all other associated vibrations
overtones are the frequency components above the fundamental (harmonics are overtones at integer multiples of the fundamental)
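As a rough illustration of the idea (not from the deck): a complex tone can be modeled as a sine wave at the fundamental plus sines at integer multiples of it; the amplitude values below are made up, and only their relative pattern matters for timbre.

```python
import numpy as np

def complex_tone(f0, harmonic_amps, duration=1.0, sr=44100):
    """Sum a fundamental f0 with harmonics at integer multiples of f0.

    harmonic_amps[0] scales the fundamental, harmonic_amps[1] the 2nd
    harmonic (2*f0), and so on. The pitch is set by f0; the amplitude
    pattern of the harmonics is what shapes the timbre.
    """
    t = np.arange(int(sr * duration)) / sr
    tone = sum(amp * np.sin(2 * np.pi * (n + 1) * f0 * t)
               for n, amp in enumerate(harmonic_amps))
    return tone / np.max(np.abs(tone))  # normalize

# Same pitch (220 Hz), two different (made-up) harmonic profiles = two timbres
tone_a = complex_tone(220, [1.0, 0.0, 0.5, 0.0, 0.3])   # mostly odd harmonics
tone_b = complex_tone(220, [1.0, 0.8, 0.6, 0.4, 0.2])   # all harmonics present
```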
hearing in nature
elephants hear low frequencies that humans don’t
cats hear high frequencies that humans don’t
17th century aids for hearing
“ear trumpets” had a wider end that collected sound and a tapering end placed in the ear
a strategy similar to one adopted by animals
19th century aids for hearing
elegantly concealed hearing aids and bone-conduction devices were developed
20th century
before radar, distant enemy movements were detected by sound
outer ear
pinna (the external ear flap) and auditory canal
folds in the pinna direct sound reflections to the canal
middle ear
the tympanic membrane and ossicles in an air-filled chamber
inner ear
cochlea
fluid filled
role of middle ear
alternating compressions and rarefactions push and pull the tympanic membrane
lever action of the ossicles causes the stapes footplate to push in at the oval window
role of ossicles
usually there is a loss of amplitude going from air (middle ear) to fluid (inner ear), but “these” help to overcome it
role of tympanic membrane
pressure at footplate is amplified relative to “this”
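A back-of-the-envelope version of that amplification, using rough textbook-style values assumed here (effective tympanic membrane area ≈ 55 mm², stapes footplate ≈ 3.2 mm², ossicular lever ratio ≈ 1.3); exact figures vary by source.

```python
import math

# Assumed approximate values; they differ somewhat between sources.
tympanic_area_mm2 = 55.0   # effective area of the tympanic membrane
footplate_area_mm2 = 3.2   # area of the stapes footplate at the oval window
lever_ratio = 1.3          # mechanical advantage of the ossicular lever

# Roughly the same force spread over a much smaller area raises the pressure,
# and the lever action of the ossicles multiplies it further.
pressure_gain = (tympanic_area_mm2 / footplate_area_mm2) * lever_ratio
gain_db = 20 * math.log10(pressure_gain)

print(f"pressure gain ~{pressure_gain:.0f}x (~{gain_db:.0f} dB)")
# ~22x (~27 dB), which offsets the amplitude lost at the air-to-fluid boundary
```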
structure of inner ear: cochlea
bony and coiled
3 fluid-filled chambers (‘scalae’) with a flexible membrane at each ‘end’ of the fluid path
oval window at the stapes footplate
round window at the end of the scala tympani
basilar membrane extends the entire length
hole at the apex (helicotrema)
3 fluid-filled chambers
scala vestibuli and scala tympani
perilymph (low K+)
scala media
endolymph (high K+)
organ of corti
sits on basilar membrane
where ‘mechanoelectric transduction’ takes place
location of auditory receptors: hair cells
inner hair cells
1 row of ~3500
perception of pitch and timbre
absolutely required for hearing
outer hair cells
3 rows of ~12,000
amplify and modulate
not needed for “hearing”
loss leads to lower sensitivity
hair cell structure
stereocilia at the apex
transduction occurs here
base: cell body where neurotransmitter is released
innervated by dendrites of spiral ganglion cells
frequency detection begins
different frequencies are coded by the cochlea based on the rate of action potentials produced in the auditory nerve AND on where along the cochlea the responding hair cells are located
temporal code (low frequencies)
based on the number/timing of action potentials the hair cells drive in the auditory nerve
fewer/more widely spaced = lower frequency
more/closely spaced = higher frequency
limited by the maximum number of APs the auditory nerve can generate (~1000/sec)
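A toy sketch of why that ceiling matters, taking the ~1000 APs/sec figure from the card at face value: a single fiber can only follow a sound cycle-by-cycle up to roughly that frequency, so higher frequencies need the place code (real fibers also pool their timing across a population, which extends the range somewhat).

```python
MAX_SINGLE_FIBER_RATE_HZ = 1000  # approximate ceiling from the card above

def temporal_code_possible(freq_hz, max_rate=MAX_SINGLE_FIBER_RATE_HZ):
    """Could a single fiber fire once per cycle of this frequency?"""
    return freq_hz <= max_rate

for f in (100, 440, 1000, 3000, 10000):
    mode = "temporal code" if temporal_code_possible(f) else "place code needed"
    print(f"{f:>6} Hz -> {mode}")
```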
place code (all but lowest frequencies)
higher frequencies cause vibration at the stiff and narrow base
relatively lower frequencies vibrate the wider and floppier apex
vibration causes the hair cells in that location to fire
place code along the basilar membrane
as the stapes moves in and out, it causes the perilymph to flow
this generates a traveling wave
at 3000 Hz, the fluid and membrane movement end abruptly about halfway between the base and apex
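A sketch of that frequency-to-place mapping using the Greenwood function with commonly cited human parameters (A = 165.4, a = 2.1, k = 0.88, position measured as a fraction of the membrane's length from the apex); this is an approximation layered on top of the deck, not part of it.

```python
import math

# Greenwood place-frequency function (approximate human parameters).
A, a, k = 165.4, 2.1, 0.88

def best_frequency(x_from_apex):
    """Characteristic frequency (Hz) at fractional position x (0 = apex, 1 = base)."""
    return A * (10 ** (a * x_from_apex) - k)

def place_of(freq_hz):
    """Invert the mapping: fractional distance from the apex where freq_hz peaks."""
    return math.log10(freq_hz / A + k) / a

for f in (100, 1000, 3000, 10000):
    print(f"{f:>6} Hz peaks about {place_of(f):.2f} of the way from apex to base")
# Low frequencies peak near the wide, floppy apex; high frequencies near the
# stiff, narrow base, consistent with the place code described above.
```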
stereocilia of hair cells
when sound causes the basilar membrane to move up and down, hair cell stereocilia will move correspondingly
waves of motion bend the stereocilia of the inner/outer hair cells back and forth
stereocilia of inner hair cells
stereocilia attached to tectorial membrane by fine filaments
stereocilia of outer hair cells
stereocilia in contact with tectorial membrane
cilia of hair cells
stereocilia from frog hair cell, where tip links attach to insertional plaques
transduction by inner hair cells
signals to the brain are mostly carried by “this”
channels to open
stretching of the tip links causes ___
influx of K+ and Ca++
these ions depolarize the cell, resulting in release of neurotransmitter by the hair cell (transduction by IHCs)
firing in the cochlear nerve
more neurotransmitter release leads to more ___
(transduction by IHCs)
hair cells form synapses with
dendrites of bipolar cells of the vestibulocochlear nerve (cranial nerve VIII)
IHCs (release)
glutamate is released by
OHCs (efferent input)
ACh is released onto “these” by efferent fibers from the brain
input to the cochlear nerve
~95% of its fibers carry input from inner hair cells, ~5% from outer hair cells
role of OHCs
like IHCs, they depolarize due to vibrations of basilar membrane
the voltage changes cause “them” to change length, amplifying basilar membrane movement
sensitivity to both weak and loud sounds is adjusted via efferent feedback from the brain
encoding sound: location
vertical sound localization can be acquired from reflections off the pinna
the timbre of sounds differs depending on how the pinna is oriented
delays in direct vs reflected path
“spectral filtering”
spectral filtering
sounds of different frequencies bounce off the folds differently and in varied directions depending on the orientation of the head
interaural intensity difference
for higher frequencies
ears experience different level of sound
the “head shadow” effect
if the wavelength is greater than or equal to the width of the head (i.e., low frequencies), the sound diffracts around the head and is heard with almost equal intensity at the opposite ear
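A quick sanity check on where that cutoff falls, assuming a speed of sound of about 343 m/s and a head width of roughly 0.18 m (both assumed, illustrative values):

```python
SPEED_OF_SOUND_MPS = 343.0  # m/s in air at room temperature
HEAD_WIDTH_M = 0.18         # rough adult head width (assumed)

def wavelength_m(freq_hz):
    return SPEED_OF_SOUND_MPS / freq_hz

for f in (200, 500, 1000, 2000, 5000):
    lam = wavelength_m(f)
    if lam < HEAD_WIDTH_M:
        verdict = "head shadow -> intensity difference is usable"
    else:
        verdict = "diffracts around the head -> intensity difference is weak"
    print(f"{f:>5} Hz: wavelength {lam:.2f} m -> {verdict}")
# The crossover sits near 343 / 0.18 ~ 1900 Hz, which is why interaural
# intensity differences mainly help with higher frequencies.
```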
interaural time difference
for all frequencies
onset disparity
phase disparity
onset disparity
works only when a sound is first heard
phase disparity
differences in the timing (phase) of compressions and rarefactions arriving at each ear
encoding location for lower frequencies
localization from the left or right can be acquired from phase disparity in sound arriving at the two ears
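For a feel of the time scales involved, here is a sketch using the Woodworth spherical-head approximation, ITD ≈ (r/c)(θ + sin θ), with an assumed head radius of 0.09 m; this model is a common idealization, not something stated in the deck.

```python
import math

SPEED_OF_SOUND_MPS = 343.0
HEAD_RADIUS_M = 0.09  # assumed, idealized spherical head

def itd_seconds(azimuth_deg):
    """Woodworth approximation of the interaural time difference.

    Azimuth 0 deg = straight ahead, 90 deg = directly to one side.
    """
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND_MPS) * (theta + math.sin(theta))

for az in (0, 15, 45, 90):
    print(f"azimuth {az:>2} deg -> ITD ~{itd_seconds(az) * 1e6:.0f} microseconds")
# Even at 90 deg the difference is only ~650 microseconds, so the onset and
# phase disparities the auditory system resolves are well under a millisecond.
```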
audio pathway to the brain
from the auditory nerve to the brain
tonotopy
topographically organized mapping of different frequencies in auditory brain regions
cells in primary auditory cortex
sensitive to temporal and directional characteristics
primary auditory cortex
mostly areas 41 and 42
not necessary for sound detection
more important for hearing “biological sounds” (ex: speech) than for pure tones
association areas
receive input from primary auditory cortex
ex: Wernicke’s
streams
flow of information goes in two basic directions
anterior/ventral
posterior/dorsal
anteroventral stream
decodes what the sounds are (pitch)
‘what’
recognition and meaning
posterodorsal stream
decodes location
‘where’
conduction deafness
usually involves a middle ear problem that blocks sound vibrations from reaching the inner ear
sound cannot be processed at the initial (outer/middle ear) stage
no flow of vibration through the ossicles
sensorineural deafness
there is a problem with the structures, especially the cochlea, that convert sound vibrations into neural activity and project it to the brain
deficit at the cochlea: no neural signal flows onward
central deafness
damage to auditory brain structures can affect hearing in various ways
no flow to the brain
extreme acoustic trauma
repeated trauma can cause permanent and profound hearing loss or deafness