RECAP: Do organisms have any direct experience of the world?
No —> what we perceive is determined by action potentials sent by the PNS, or the peripheral nervous system (ex. nerves that detect touch, pain, etc.)
ex. even though photoreceptors stem from CNS tissue/are anatomically part of the CNS, they have the same role as other peripheral sensory receptors: they take information from the world and turn it into neural signals.
Perception is limited by peripheral mechanisms
for example, the human eye can only detect visible light (between 400-700 nm), but we CANNOT see infrared or UV light
these wavelengths of light undeniably exist, but they DON’T factor into our perception of the world
What is the computation problem in neuroscience?
There is too much information in the world to be encoded by peripheral sensory circuits.
For example, the world contains a LOT of visual information. If the eye were a camera, we would get 10^10 bits of raw data
We have 100 million photoreceptors available to capture/encode a LOT of data, and 100 million V1 neurons ready to receive/process it….but we have just 1 million optic nerve fibers, and because of the refractory period, they can only transmit so many spikes per second.
This creates an information bottleneck —> there is 100-1000x more data than the optic nerve is capable of carrying. The retina can’t possibly send ALL that information (in its raw state, at least) downstream to the brain.
Just like computers perform file compression (especially for things like videos), the retina will filter, compress, and TRANSFORM the visual scene - or visual info - BEFORE passing it on to the brain
this relates to why different retinal ganglion cells specialize in different things/there’s a division of labor going on (ex. the midget retinal ganglion cells for color and detail, the parasol retinal ganglion cells for motion/high temporal resolution)
TLDR: sensory peripheral circuits are EXQUISITELY made (evolved) to overcome ENORMOUS information processing demands
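The bottleneck arithmetic above can be sanity-checked with a quick back-of-envelope calculation (a sketch in Python; treating the 10^10 bits figure as a per-second rate, and the ~100 spikes/s per fiber and ~1 bit per spike values, are my assumptions, not from the notes):

```python
# Back-of-envelope estimate of the retinal information bottleneck,
# using the approximate figures from the notes above.

raw_bits_per_sec = 1e10        # rough raw visual data rate entering the eye (assumed per-second)
optic_nerve_fibers = 1e6       # ~1 million retinal ganglion cell axons
max_spikes_per_fiber = 100     # refractory period caps firing rate (assumed value)
bits_per_spike = 1             # crude assumption: one bit per spike

capacity = optic_nerve_fibers * max_spikes_per_fiber * bits_per_spike
compression_needed = raw_bits_per_sec / capacity

print(f"Optic nerve capacity: ~{capacity:.0e} bits/s")
print(f"Required compression: ~{compression_needed:.0f}x")   # ~100x
```

With these assumed numbers the required compression lands at ~100x, the low end of the 100-1000x range quoted above.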
What is the primary visual pathway?
retina (the first cells functionally, though not always spatially, are the photoreceptors; the last are the retinal ganglion cells, whose axons make up the optic nerve)
LGN (think LG TV) or lateral geniculate nucleus of the thalamus
primary visual cortex/V1 (the FIRST cortical step for visual information outside of the retina)
extrastriate cortices (cortical areas beyond V1): made for higher-order visual processing
What does it mean for this primary pathway to experience a PARTIAL decussation at the optic chiasm?
All information from the LEFT visual field (for BOTH eyes) goes to the right hemisphere of the brain, and vice versa
so imagine both eyes split down the middle - the left halves of BOTH eyes correspond to the RIGHT brain hemisphere
What do some of the other (non-primary) pathways do? What are they responsible for?
retina to pretectum: this controls pupil size - your pupils will dilate in low light conditions
retina to superior colliculus: rapid gaze orienting towards objects with sudden motion, or objects that are looming over you
retina to suprachiasmatic nucleus or SCN: circadian rhythm (24 hour biological clock that is entrained to the light/dark cycle of the natural world)
In monkeys, how much of cerebral cortex is involved in vision/visual information processing?
30 regions of cortex (over 50%) in monkeys
we see BILLIONS of neurons and extensive connections between these neurons, with both feedforward and feedback connections
feedforward: info goes from upstream (think presynaptic) to downstream (postsynaptic)
feedback: vice versa
What is the difference between the cornea and the lens?
cornea (OUTSIDE): primary refractive element of the eye
FIXED shape/curvature that cannot change
provides most of the eye’s focusing power
lens (INSIDE): flexible, elastic, responsible for fine-tuning focus (ex. when you have objects at different distances from each other)
What are the basics of visual transduction? Are photoreceptors depolarized, or hyperpolarized?
light is focused on the retina and strikes photoreceptors at the BACK of the eye
absorption of light by photopigments sets off a cascade of events that leads to a change in membrane potential
most sensory receptors will depolarize in response to a certain stimulus, but photoreceptors are unique because they HYPERPOLARIZE (more negative) when activated
How is the retina organized?
fovea
tiny pit in the center of the retina that is specialized for high acuity vision
periphery of retina
here, light passes through several layers of cells (first the retinal ganglion cells, then bipolar and other cells) before finally reaching the photoreceptors, the sensory receptors that light activates
The fovea is special because here, these layers have been “pushed aside” to allow photoreceptors to have DIRECT exposure to light (unlike the periphery where the light has to travel beyond the ganglion + bipolar + others first)
Also, in the fovea, bipolar cells get input from a single photoreceptor (cone), while in the periphery of the retina, bipolar cells get input from MULTIPLE photoreceptors
What is the signal flow in the retina? What separates retinal ganglion cells from photoreceptors and bipolar cells?
photoreceptors, which create a vertical pathway to…
bipolar cells, which create a vertical pathway to…
retinal ganglion cells
Retinal ganglion cells are special because they are the only spiking/output cells in the retina; their axons form the fibers of the optic nerve.
the retinal ganglion cells are the ONLY cells in the retina that communicate via action potentials
What are the two classes of specialized photoreceptors?
cones
concentrated in the fovea, responsible for HIGH acuity vision there
mediate vision under WELL-LIT conditions
three cone types create trichromatic color vision (red, green, blue) because each cone type is only/most strongly activated by certain wavelengths of light
short or S-cones: blue light
medium or M-cones: green light
long or L-cones: red light
higher temporal resolution than rods (!! this gets confusing because the magnocellular layers of the LGN are associated with rods, which are known to respond more slowly, but actually the M layers get input from BOTH rods and cones)
rods
very sensitive to light, can respond to a SINGLE photon
mediate vision under low-light conditions
unlike foveal cones, which have 1:1 relationships with bipolar cells (as in, 1 bipolar cell gets input from 1 cone), MANY rods converge onto a SINGLE bipolar cell, which then feeds into a SINGLE retinal ganglion cell
we don’t have multiple different types of rods (like how different cones are activated by different wavelengths of light/colors) —> rods do MONOCHROMATIC vision
only one possible rod photopigment (rhodopsin), which is HIGHLY sensitive to light but does NOT differentiate between colors
This means that cones have high spatial resolution, while rods do not.
for cones we see fine spatial detail and the brain can tell exactly which cone was activated. This leads to sharp, detailed vision. For rods, we have low spatial resolution (fine details are lost because the brain is averaging over a larger area of the retina)
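The sensitivity-vs-resolution trade-off from rod convergence can be illustrated with a toy simulation (all numbers made up): averaging many noisy rod signals in one bipolar cell shrinks the noise by roughly the square root of the pool size, at the cost of losing the location information.

```python
import random

random.seed(0)

# Toy model: a dim flash delivers a tiny signal buried in noise.
# Each receptor reports signal + independent noise (arbitrary units).
signal = 0.1
noise_sd = 1.0

def receptor_response():
    return signal + random.gauss(0, noise_sd)

# Cone pathway: one cone -> one bipolar cell (foveal 1:1 wiring).
cone_output = receptor_response()

# Rod pathway: many rods converge onto one bipolar cell, which averages them.
n_rods = 100
rod_pool_output = sum(receptor_response() for _ in range(n_rods)) / n_rods

# Averaging shrinks the noise by ~sqrt(n_rods), so the pooled signal is far
# more reliable -- at the cost of not knowing WHICH rod saw the light
# (lower spatial resolution).
```

The same averaging that makes the rod pathway sensitive enough to report single photons is exactly what blurs away fine spatial detail.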
Which photoreceptors are responsible for low-light conditions?
In the dark, we throw away color + fast temporal information (encoded by the cones) to increase light sensitivity.
rods just have one possible photopigment/cannot differentiate between different light wavelengths, so that’s why everything looks black and white in low light conditions
Which respond more slowly to changes in light: rods or cones?
Rods; their membrane potential changes much more slowly. However, rods DO produce MUCH bigger hyperpolarizations/negative changes in membrane potential in RESPONSE to light than cones do —> useful in low-light conditions.
BUT cones are still better at responding to smaller flickers in light, or rapid motion, than rods, even though rods ultimately have the bigger response/more sensitivity
What is the difference in response type to a brief pulse of light between cones and rods?
cones: small but QUICK biphasic (up, then down) response to a brief pulse of light
rods: BIGGER but slower MONOPHASIC response
What does it mean for color perception to be based on relative activity?
The brain doesn’t read absolute activation of a single cone type - it compares how strongly each type of cone is “firing” (via its respective retinal ganglion cell) relative to the other types of cones.
if the L-cones (for long wavelengths like red) are being more stimulated than the M (green) or S (blue) cones, we perceive that color as RED.
if all cone types are stimulated equally relative to each other, we see that color as WHITE
this works with different light sources because objects will reflect DIFFERENT parts of the light spectrum depending on what light is shining on them
for example, a banana might look MORE yellow under harsh fluorescent lighting than softer natural lighting, but we will STILL perceive the banana as yellow AS LONG AS the relative pattern of cone activation (strong L and M, weak S for yellow) stays the same
so cone activation is contrasted, and that gives us our perception of color
even as the SPECTRAL content of light might change, we could still sense the color of objects based on how the brain actually encodes color
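A tiny sketch of the relative-activity idea (the activation numbers are invented for illustration): scaling all three cone signals by the same illumination factor leaves their proportions, and therefore the perceived color, unchanged.

```python
# Toy illustration: color read out from RELATIVE cone activity.
# Activation values are made up for illustration.

def normalize(lms):
    total = sum(lms)
    return tuple(round(v / total, 3) for v in lms)

# Banana under soft daylight (L, M, S cone activations):
daylight = (0.60, 0.35, 0.05)
# Same banana under harsh fluorescent light: everything is scaled up,
# but the PROPORTIONS of L, M, S activation are preserved.
fluorescent = tuple(2.5 * v for v in daylight)

print(normalize(daylight))      # (0.6, 0.35, 0.05)
print(normalize(fluorescent))   # identical ratios -> same perceived color
```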
Why is it important for the eye to be movable?
the eye is NOT like a camera which has pixels that all share the same properties/has uniform spatial resolution - as we know, the area with the highest or best spatial resolution is the fovea
this is because cones are concentrated in the fovea
meanwhile, rods are plentiful in the periphery (sometimes the rods/periphery of the retina is useful for star gazing, since you can catch dimmer stars out of the corner of your eye thanks to the rods’ greater sensitivity to light)
the eye needs to be movable so that we can see MORE than one area, or aspect of something, with high quality/high acuity vision
What is visual angle?
visual angle - how large something appears on the retina
larger or closer objects have a higher visual angle —> either (or both) distance and size could impact it
we can draw two lines from the eye (one to the top of an object and one to the bottom); the angle between these lines where they cross at the eye (think of what a decussation looks like) is the visual angle
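The geometry can be written as a one-line formula (the standard visual-angle formula; the function name is mine):

```python
import math

def visual_angle_deg(size, distance):
    """Visual angle subtended by an object of a given size at a given
    distance (same units), using theta = 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

# Larger objects and closer objects both increase visual angle:
near = visual_angle_deg(1.0, 10.0)   # 1 m object at 10 m
far = visual_angle_deg(1.0, 100.0)   # same object, ten times farther away
big = visual_angle_deg(2.0, 10.0)    # bigger object at the same distance
```

This is why either size or distance (or both) can change how large something appears on the retina.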
How are cones arranged in the fovea? For every degree of visual angle, how many cones are there?
in the fovea, cones are packed tightly in a regular grid, and the angular separation between neighboring cones is about 0.0084 degrees of visual angle
so for every degree of visual angle, we have 120 cones sampling the scene
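The 120-cones-per-degree figure follows directly from the 0.0084-degree spacing (a quick arithmetic check):

```python
# One cone every 0.0084 degrees of visual angle implies roughly 120 cones
# sampling each degree of the scene.
cone_spacing_deg = 0.0084
cones_per_degree = 1 / cone_spacing_deg

print(round(cones_per_degree))   # 119, i.e. roughly 120 cones per degree
```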
What is the Nyquist theorem? How does it relate to spatial frequency (define this first)?
spatial frequency: how often visual features, like light-dark stripes, edges, or patterns, repeat per unit of visual angle (represented in degrees)
spatial frequency represented in cycles per degree (so one cycle = one light-dark pair)
for example, let’s say we have a black and white stripe pattern that repeats ten times per degree. That means we have 10 CYCLES per degree.
fine details relate to a high spatial frequency (many, many cycles per degree of visual angle), while broad-low detail patterns relate to a low spatial frequency (where we don’t see as many cycles per degree)
The retina can only transmit a LIMITED range of spatial frequencies before blurring or undersampling occurs. Foveal cones are already PACKED and CREATED to capture the highest spatial frequencies optics can provide (60 cycles per degree)
the Nyquist theorem tells us how densely a signal must be sampled to represent it without losing information. The retina samples the visual scene with its discrete grid of cones.
NYQUIST THEOREM: SAMPLING FREQUENCY MUST BE AT LEAST TWICE THE FREQUENCY PRESENT
if you think of the signal like a sine wave, we have to take at LEAST two samples per cycle of the wave
Why is the amplitude of the sin wave (cycle) lower in what is captured by the eye than what actually exists? What is affected by blur - low or high spatial frequencies?
Let’s say that the amplitude of the sin wave represents energy, or the strength of the signal. The peaks and troughs of the wave (high points versus low points) represent CONTRAST.
The amplitude captured by the eye is squished down due to blurring (reduction of strength in the signal)
if the peaks and troughs get squeezed closer together vertically (smaller amplitude), that shows a reduction in contrast (blur)
blurring doesn’t affect low spatial frequencies (coarse details), but it DOES affect high spatial frequencies where contrast becomes important
How can we write out the Nyquist theorem, which allows us to perfectly reconstruct the visual signal (without undersampling)?
f(sample) must be greater than or equal to 2 * f(max)
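Applying the theorem to the foveal numbers above (a minimal sketch; the function name is mine): sampling at 120 cones per degree supports reconstruction up to the ~60 cycles/degree that the eye's optics deliver.

```python
# Nyquist check for foveal cone sampling, using the figures above.

cones_per_degree = 120             # sampling frequency (samples per degree)
nyquist_limit = cones_per_degree / 2

def can_reconstruct(spatial_freq_cpd, sample_freq):
    """Nyquist criterion: f_sample >= 2 * f_max."""
    return sample_freq >= 2 * spatial_freq_cpd

print(nyquist_limit)                           # 60.0 cycles per degree
print(can_reconstruct(60, cones_per_degree))   # True: right at the limit
print(can_reconstruct(80, cones_per_degree))   # False: would be undersampled
```

So the cone grid is matched to the optics: it samples just densely enough to capture the highest spatial frequencies the cornea and lens can actually pass.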
How do photoreceptors deal with the large range of light intensities in the world, which far exceed what a single neuron can encode at a time? What do threshold, coding range, and saturation mean?
Humans see across about 10 orders of magnitude of light intensity, from starlight (dim) to bright sunlight.
however, each photoreceptor can only reliably encode a limited range of input intensities
threshold (on the x-axis): the minimum light intensity NEEDED to evoke a response, or get the neuron to change from its baseline activity (where it is considered “silent”)
CODING range (on x-axis): the range of intensities over which the neuron can change its output in a meaningful way. Basically, this is the span of intensities for which the neuron will have a graded response
for photoreceptors, this is a change in membrane potential, and for other neurons (like retinal ganglion cells), this is a change in their firing rate/spikes per second
the bigger the light intensity, the BIGGER the response of the neuron. So for photoreceptors, that means stronger hyperpolarization, and for downstream neurons that means faster firing rates (for ON-center cells at least)
Think of CODING range as the USEFUL span where the neuron encodes light intensity differences
saturation: this is where the neuron’s output MAXES out and any further increases in light intensity don’t change the resulting signal
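A toy response curve can make threshold, coding range, and saturation concrete (a sketch with made-up parameter values, not a real photoreceptor model):

```python
import math

def photoreceptor_response(intensity, threshold=1.0, coding_range=2.0, r_max=1.0):
    """Toy saturating response curve over log intensity (made-up model).
    Below threshold: ~silent. Within the coding range: graded response.
    Far above it: saturated at r_max."""
    if intensity <= 0:
        return 0.0
    x = (math.log10(intensity) - math.log10(threshold)) / coding_range
    return r_max / (1 + math.exp(-6 * (x - 0.5)))

# Below threshold: near zero.  Mid coding range: graded.  Very bright: saturated.
dim = photoreceptor_response(0.1)
mid = photoreceptor_response(10.0)
bright = photoreceptor_response(1e6)
print(round(dim, 3), round(mid, 3), round(bright, 3))   # 0.002 0.5 1.0
```

Threshold sets where the curve lifts off zero, the coding range sets how much of the x-axis the graded part occupies, and saturation is the flat ceiling at r_max.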
What does it mean for there to be a trade-off between the slope (y/x) and coding range (on x-axis)? What do those terms even mean?
Coding range is the span of light intensities over which the neuron can respond meaningfully. A wide coding range allows for the neuron to encode a LARGE range of light intensities, but each change in intensity leads to a smaller change in output (amount of hyperpolarization or amount of change in firing rate).
BUT, if we have a SMALLER coding range, the neuron will be really sensitive to small changes in light intensity (however, that will just be for a limited range of light intensities)
Steep slope —> limited coding range (smaller amount of x-axis or denominator, since slope = y/x) BUT more sensitive to changes in light intensity
we also reach saturation faster
Gradual slope —> bigger coding range, but less sensitive to changes in light intensity
Talk about coding range (x-axis) vs dynamic range (y-axis) - what is the difference between the two?
coding range - the span of light intensities over which the neuron can meaningfully change its response
dynamic range - the min/max of the neuron’s response (how much can the neuron hyperpolarize, or change its firing rate? what is the range?)
Why is the ideal to have population coding (neurons that have different, NON-OVERLAPPING coding ranges/ranges of light intensity, or sections of the x-axis)?
If all neurons had the same or similar coding ranges, we would have redundancy (many neurons signaling for the same group of light intensities). BUT we want to detect as many light intensities as possible.
this is why we are looking for softer, more gradual slopes —> even if this means that neurons will be less sensitive to changes in light intensity, because the change in output per light intensity will be smaller, we have a much wider coding range. PLUS, we don’t hit saturation until later.
What would happen if we had a population of neurons and reduced the coding range (range of light intensities for which the neuron can meaningfully respond to) for each neuron? What could we do to compensate - increase/decrease dynamic range? increase the # of neurons?
We would not want to increase or decrease dynamic range - increasing might increase the output that the neuron is capable of generating per change in light intensity, but it doesn’t actually change or compensate for the DECREASED scope of light intensities that the neurons are capable of responding to.
a better idea would be to increase the number of neurons in the population, if we can’t change the slope. That way, we could still cover a bigger range of light intensities (directly affect coding range).
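A sketch of the population-coding idea (made-up logistic response curves): each neuron covers only a narrow slice of log intensity, but staggering the neurons' coding ranges lets the summed population keep discriminating across many log units.

```python
import math

def sigmoid_response(intensity, center):
    """Toy response curve centered on a given log10 intensity (made-up model)."""
    return 1 / (1 + math.exp(-3 * (math.log10(intensity) - center)))

# A population whose coding ranges are staggered along the intensity axis:
centers = [-2, 0, 2, 4]   # log10 intensity at which each neuron is half-maximal

def population_readout(intensity):
    """Summed population output keeps changing across ~6 log units,
    even though each neuron only covers a narrow slice."""
    return sum(sigmoid_response(intensity, c) for c in centers)

# Any single neuron saturates quickly; the population keeps discriminating:
low = population_readout(1e-3)
mid = population_readout(1e1)
high = population_readout(1e5)
```

A single neuron with center -2 gives nearly identical output at 1e1 and 1e5 (saturated), while the population readout still separates them cleanly.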
How does light/dark adaptation work?
When you move from darkness to light, photoreceptors adjust their sensitivity downwards.
this means that BOTH threshold (light intensity at which neuron changes its activity from baseline) and saturation (light intensity at which the neuron no longer changes its activity - the activity has been maxed out) are ADJUSTABLE.
by adjusting saturation upwards, the neuron is now more sensitive to changes around the new, brighter light intensity (instead of tiny variations in light that were really only relevant in the dark)
without that upward or downward adjustment of saturation (adaptation), allowing us to hit a higher or lower range of light intensities, the average light level would either SATURATE neurons in bright light, or leave them silent (unactivated/still at baseline) in dim light where the little things matter
So basically: saturation goes up thanks to adaptation —> neuron’s output doesn’t get maxed out by a HIGH average light level —> we can now detect changes (variations) in light intensity within a new environment (the intensity range has shifted)
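One way to sketch "subtracting out the average light level" (a made-up model, not real photoreceptor machinery): make the response depend only on intensity relative to the adapted mean, so equal contrasts give equal responses at any background level.

```python
import math

def adapted_response(intensity, ambient_mean):
    """Toy adaptation model: the response depends on intensity RELATIVE to
    the adapted mean level (i.e. contrast), not on raw intensity."""
    contrast = math.log10(intensity / ambient_mean)
    return 1 / (1 + math.exp(-4 * contrast))

# The same "2x brighter than background" stimulus gives the same response
# whether the background is dim starlight or bright sunlight:
dim_bg = adapted_response(2 * 1e-2, 1e-2)
bright_bg = adapted_response(2 * 1e4, 1e4)
```

Because the mean level is divided out, the neuron's limited output range is always spent on the deviations that matter in the current environment.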
Can both cones and rods adapt to the level of ambient (environmental) light in the room?
Yes, BUT rods have a lower threshold (can respond to much dimmer light) than cones (which are better for bright, quick flashes of light than rods since they have high temporal resolution BUT are less sensitive to light intensity)
cones have a much smaller range in light sensitivities that they respond to
What does adaptation actually allow us to do?
Adaptation allows the retina to respond to relative changes, or contrast, as opposed to RAW brightness. By adapting, we are basically SUBTRACTING OUT the average light level in the environment.
—> we are compressing the huge range of light intensities into a narrower band of signals that the brain can use, which decreases the information processing burden. Neurons are using their limited output range EFFECTIVELY.