Motion Perception Notes
Why Do Some Animals Freeze When They Sense Danger?
Survival Mechanism: Predators detect movement easily, so freezing helps prey avoid detection.
Camouflage Effect: Stationary objects blend into the background, making them harder to spot.
How Do Films Create Movement From Still Pictures?
Phi Phenomenon: Our brain perceives a series of still images as continuous motion.
Persistence of Vision: The visual system retains an image briefly after it disappears, smoothing transitions between frames.
What’s Special About the Movement of Human and Animal Bodies?
Biological Motion: Recognizing motion patterns unique to living beings (e.g., walking, running, gesturing).
Point-Light Walkers: Experiments using lights on joints to analyze movement perception.
Functions of Motion Perception
1. Motion Perception & Survival
Akinetopsia (Motion Blindness):
Case of L.M. (Zihl et al., 1983, 1991) – Couldn’t perceive motion after a stroke.
Struggled with basic tasks like pouring tea and crossing streets.
Demonstrates how crucial motion perception is for daily life.
Attentional Capture:
Motion attracts attention automatically (e.g., dodging a flying baseball).
Helps detect predators or prey.
2. Detecting and Perceiving Objects Through Motion
Motion Reveals Objects:
Camouflaged animals become visible when they move.
Seeing someone waving in a crowd helps in recognition.
Movement Aids Object Recognition:
Viewing an object from different angles improves shape perception.
Example: A moving horse reveals its 3D structure better than a still image.
Grouping Elements Together:
Example: A flock of birds moving together is perceived as one unit.
Biological Motion:
Motion of arms, legs, and body parts forms a coordinated pattern.
3. Perceiving Events
Event Segmentation:
We break continuous actions into meaningful events (e.g., ordering coffee, paying, leaving).
Event Boundaries: When one action ends and another begins.
Zacks et al. (2009): Motion changes (e.g., hand acceleration) signal event boundaries.
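A toy sketch of this boundary-detection idea (my own illustration in Python, not the statistical method Zacks et al. actually used): flag the time steps where movement speed changes abruptly.
```python
import numpy as np

def event_boundaries(positions, threshold=0.5):
    """Toy event segmentation: flag time points where speed changes
    sharply, mimicking the finding that motion changes (e.g., hand
    acceleration) tend to mark event boundaries."""
    speed = np.abs(np.diff(positions))   # frame-to-frame movement
    accel = np.abs(np.diff(speed))       # change in movement
    return np.where(accel > threshold)[0] + 1

# A hand at rest, a constant-velocity reach, then rest again:
hand_position = np.array([0, 0, 0, 2, 4, 6, 8, 8, 8], dtype=float)
print(event_boundaries(hand_position))   # [2 6]: the reach's start and stop
```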
4. Motion in Social Perception
Understanding Intentions Through Movement:
Koul et al. (2019): Observers correctly judged whether a hand movement was for drinking or pouring.
Emotions in Motion: Body movements convey emotions like happiness, aggression, or fear.
Heider & Simmel (1944) – Geometric Shapes & Social Perception
People assigned human traits to moving triangles and circles.
Shows how motion influences social interpretation.
Point-Light Walkers (Johansson, 1973, 1975):
Lights on joints reveal movement patterns.
Helps identify social interactions even without facial cues.
Centelles et al. (2013): People with autism had difficulty distinguishing social vs. non-social movements.
5. Motion & Action
Navigating Through the Environment:
Walking and avoiding obstacles depend on motion perception.
Detecting moving objects prevents collisions.
Sports & Movement:
Watching and playing sports rely heavily on tracking motion.
Example: Following a football pass or a baseball double play.
Grasping and Manipulating Objects:
Reaching for a bottle involves tracking movement to coordinate actions.
Key Takeaways
Motion perception is essential for survival, detecting threats, and recognizing objects.
Motion helps segment experiences into meaningful events.
Social interactions rely on movement cues for understanding emotions and intentions.
Sports, daily activities, and even storytelling in film depend on accurate motion perception.
Studying Motion Perception
Key Question: When do we perceive motion?
Real Motion: Perception of actual movement (e.g., a car driving by, people walking).
Illusory Motion: Perception of movement when no real motion occurs. Three types:
Apparent Motion: Alternating stimuli in different locations create the illusion of movement (e.g., movies, moving signs).
Induced Motion: A large moving object makes a smaller stationary object appear to move (e.g., clouds moving past the moon).
Motion Aftereffects: Viewing continuous motion causes stationary objects to appear to move in the opposite direction (e.g., waterfall illusion).
Comparing Real and Apparent Motion
Historically considered separate, but research shows similarities.
Larsen et al. (2006) fMRI Study:
Real motion and apparent motion activate similar brain areas.
The brain responds as if motion occurs even when no stimulus is present.
Suggests shared neural mechanisms for both types of motion.
Two Real-Life Motion Perception Situations
Following an Object with the Eyes (Figure 8.8a)
The object’s image stays in the same place on the retina, yet motion is perceived.
Indicates motion perception isn't solely based on retinal image movement.
Looking Straight Ahead While an Object Moves (Figure 8.8b)
The object’s image moves across the retina, activating a sequence of receptors.
Suggests retinal image movement contributes to motion perception.
The Ecological Approach to Motion Perception
Gibson’s Theory (1950, 1966, 1979):
Perception relies on environmental information, not just retinal images.
Optic Array: The structured pattern of surfaces, textures, and contours in the environment.
Movement of the observer causes changes in the optic array, providing motion cues.
Local Disturbance in the Optic Array:
When an object moves, it covers and uncovers background elements.
Example:
Figure 8.8a: Jeremy moves, Maria follows with her eyes → His image stays in place on her retina, but the background changes → Maria perceives motion.
Figure 8.8b: Maria keeps her eyes still, Jeremy moves → His image moves across her retina → The background still changes in the same way, signaling motion.
Global Optic Flow:
When the entire visual field moves together, no motion is perceived.
Example:
Figure 8.8c: Maria moves her eyes across a stationary scene → Everything shifts in her view at once → No single object is perceived as moving.
This tells the brain Maria is moving, not the environment.
Key Takeaway:
Motion is perceived when one part of a scene moves relative to the rest.
No motion is perceived when everything moves together or stays still.
While Gibson’s theory explains much, additional factors must be considered for a full understanding of motion perception.
The Corollary Discharge and Motion Perception
Alternative to Gibson’s Approach:
Focuses on neural signals from the eye to the brain.
Explains why we don’t perceive motion blur when moving our eyes.
Corollary Discharge Theory (CDT) – Three Signals:
Image Displacement Signal (IDS) – When an image moves across the retina.
Motor Signal (MS) – Sent from the brain to the eye muscles to move the eyes.
Corollary Discharge Signal (CDS) – A copy of the motor signal sent to a brain structure called the comparator.
How the Comparator Works:
Motion is perceived if the comparator receives only one signal (either IDS or CDS).
No motion is perceived if both signals arrive together—they cancel each other out.
Explaining the Situations in Figure 8.8:
Figure 8.9a: Maria follows Jeremy with her eyes → CDS present, no IDS → Motion is perceived.
Figure 8.9b: Maria keeps eyes still, Jeremy moves → IDS present, no CDS → Motion is perceived.
Figure 8.9c: Maria scans the room → CDS and IDS both present → No motion is perceived.
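The comparator's logic reduces to an exclusive-or: motion is signaled when exactly one of the two inputs arrives. A minimal Python sketch (my illustration, not a model from the text):
```python
def comparator(ids_present: bool, cds_present: bool) -> bool:
    """Corollary discharge comparator: signals motion when exactly one
    input (IDS or CDS) arrives; simultaneous inputs cancel out."""
    return ids_present != cds_present   # exclusive-or

# The three Figure 8.9 situations:
print(comparator(ids_present=False, cds_present=True))   # 8.9a: eyes track Jeremy -> motion
print(comparator(ids_present=True,  cds_present=False))  # 8.9b: eyes still, Jeremy moves -> motion
print(comparator(ids_present=True,  cds_present=True))   # 8.9c: eyes scan the room -> no motion
```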
Physiological Approach to Motion Perception:
Another explanation focuses on how retinal receptors detect movement.
Introduces the Reichardt detector, a neural circuit that helps detect motion by tracking stimulation across receptors.
The Reichardt Detector
Explaining Motion Perception (Figure 8.9b)
When the eye is stationary and an object moves across the retina.
Werner Reichardt (1961, 1987) proposed a neural circuit to explain this process.
How the Reichardt Detector Works:
Consists of two neurons (A & B) connected to an output unit.
A delay unit slows signals from neuron A before they reach the output unit.
The output unit multiplies signals from A and B to generate a movement signal.
Motion Detection Process:
Object (e.g., Jeremy) moves left to right → Activates neuron A first.
Signal from neuron A is delayed while Jeremy moves further.
Jeremy reaches neuron B, activating it.
If timing is right, the delayed signal from A and the new signal from B reach the output unit simultaneously, triggering a movement signal → Perception of motion.
Direction Sensitivity:
Left to right motion is detected because the delay ensures synchronization.
Right to left motion is NOT detected because the signals don’t align at the output unit.
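A minimal simulation of the delay-and-multiply circuit just described (an illustrative Python sketch; real Reichardt models use continuous temporal filters rather than a fixed discrete delay):
```python
import numpy as np

def reichardt_response(receptor_a, receptor_b, delay):
    """Delay receptor A's signal, multiply with receptor B's, and sum.
    A large value means the stimulus arrived at B 'delay' steps after
    A, i.e. A-to-B (here, left-to-right) motion at the tuned speed."""
    delayed_a = np.roll(receptor_a, delay)
    delayed_a[:delay] = 0   # nothing emerges before the delay elapses
    return float(np.sum(delayed_a * receptor_b))

t = np.arange(10)
a = (t == 2).astype(float)             # A fires at t = 2
b_rightward = (t == 5).astype(float)   # rightward: B fires 3 steps later
b_leftward = (t == 0).astype(float)    # leftward: B fired before A

print(reichardt_response(a, b_rightward, delay=3))  # 1.0 -> movement signal
print(reichardt_response(a, b_leftward, delay=3))   # 0.0 -> no signal
```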
Neural Basis of Motion Perception:
More complex versions of this circuit exist in amphibians, rodents, primates, and humans.
These direction-selective neurons respond to motion in specific directions.
Beyond Single-Neuron Responses to Motion
Issue with single neurons & motion perception:
A single neuron’s response can be ambiguous due to the aperture problem (Park & Tadin, 2018).
Example: A neuron detecting a moving vertical pole might misinterpret motion if viewed through a small receptive field.
Aperture Problem (Figure 8.15):
A neuron’s receptive field acts like a small window, seeing only part of the scene.
If a woman moves a pole horizontally → neuron detects rightward movement.
If she carries the pole up a staircase (diagonal motion) → the neuron still detects only rightward movement, ignoring the upward component.
Demonstration: Moving a pencil behind a small hole (Figure 8.16) shows the illusion of pure horizontal motion, even when the actual motion is diagonal.
Solutions to the Aperture Problem
Detecting object endpoints
Some striate cortex neurons respond to the ends of moving objects (Pack et al., 2003).
Example: If part of a moving pencil tip is visible, motion is correctly perceived as diagonal rather than just horizontal.
Pooling information from multiple neurons
MT neurons initially respond incorrectly (basing motion on edge orientation).
After 140 ms, MT neurons adjust to the actual motion direction by combining signals from many neurons (Pack & Born, 2001).
This process corrects misinterpretations from single neurons.
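The pooling step can be made concrete with a little vector algebra. In this toy sketch (my own illustration, not Pack & Born's analysis), each aperture-limited cell reports only the motion component along its edge's normal, and combining two such constraints recovers the true velocity:
```python
import numpy as np

def edge_normal(edge_deg):
    """Unit vector perpendicular to an edge at the given orientation."""
    t = np.radians(edge_deg)
    return np.array([np.sin(t), -np.cos(t)])

true_v = np.array([1.0, 1.0])   # diagonal motion: rightward and upward

# A cell seeing only a vertical edge reports pure rightward motion:
n1 = edge_normal(90)            # vertical edge -> normal points rightward
s1 = np.dot(true_v, n1)
print(s1 * n1)                  # [1. 0.] - the upward component is lost

# Pooling a second orientation's constraint (n . v = s) fixes this:
n2 = edge_normal(45)
s2 = np.dot(true_v, n2)
v = np.linalg.solve(np.vstack([n1, n2]), np.array([s1, s2]))
print(v)                        # [1. 1.] - the true diagonal velocity
```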
Conclusion
Motion perception is complex due to the aperture problem.
The visual system solves this by:
Using striate cortex neurons to detect endpoints of moving objects.
Using MT cortex neurons to combine multiple signals for accurate motion detection.
These mechanisms allow us to perceive true motion direction despite individual neuron limitations.
Motion and the Human Body (Section 8.8) – Point Form Summary
Apparent Motion of the Body
Apparent motion: Perception of movement between two stationary stimuli shown in different locations in rapid succession.
Shortest path constraint: Movement is usually perceived along the shortest path between two stimuli.
Shiffrar & Freyd (1990, 1993) experiment:
Alternating images of a woman’s hand in different positions (front and back of head).
Fast alternation (5+ times/sec) → Hand appears to move through the head (Figure 8.18b).
Slow alternation → Hand is perceived as moving around the head (Figure 8.18c).
Findings:
The visual system needs time to process complex movements.
The perception of human body movement is different from that of inanimate objects (e.g., boards).
Brain activation (Stevens et al., 2000):
Both types of motion activated the parietal cortex (motion perception).
Motor cortex was only activated when motion was biologically possible (movement around the head).
Suggests a connection between action perception and motor processing (ties to mirror neurons).
Biological Motion & Point-Light Walkers
Point-light walkers: Small lights on human joints filmed while moving.
Stationary person → Lights appear random.
Moving person → Lights are immediately perceived as a walking person.
Biological motion: Self-produced motion from living beings (walking, running, etc.).
We are highly skilled at recognizing it due to frequent exposure.
Neural basis for biological motion perception (Grossman & Blake, 2001):
Compared brain activity for:
Point-light walkers (Figure 8.19a).
Scrambled motion (Figure 8.19b).
Superior Temporal Sulcus (STS) was more active for biological motion.
Other Brain Areas Involved in Biological Motion
Superior Temporal Sulcus (STS) → Key region for biological motion perception.
Other activated areas:
Fusiform Face Area (FFA) (Grossman & Blake, 2002).
Mirror neuron areas in the Prefrontal Cortex (PFC) (Saygin et al., 2004).
Network of brain regions specialized for biological motion (Pelphrey et al., 2003, 2005).
Table 8.1 – Brain Regions & Motion Perception:
V1 (Striate Cortex) → Detects motion direction in small receptive fields.
MT (Middle Temporal) Cortex → Processes object motion direction/speed.
MST (Medial Superior Temporal) Cortex → Processes optic flow, moving object location, reaching for moving objects.
STS (Superior Temporal Sulcus) → Specialized for biological motion (humans/animals).
Disrupting STS Impairs Biological Motion Perception
Grossman et al. (2005) experiment:
Task: Identify biological motion vs. scrambled motion.
TMS (Transcranial Magnetic Stimulation) applied to STS → Reduced biological motion perception.
TMS applied to MT (motion-sensitive area) → No effect.
Conclusion: STS is necessary for perceiving biological motion.
Further evidence:
Patients with STS damage struggle with biological motion perception (Battelli et al., 2003).
TMS on PFC also disrupts biological motion perception (van Kemenade et al., 2012).
Final takeaway: Biological motion is a specialized type of motion with dedicated brain processing areas.
Representational Momentum
Freyd’s Experiment:
Showed participants a person in midair and tested perception of time-forward vs. time-backward images.
Findings: Participants anticipated the downward motion, making time-forward judgments more difficult.
Representational momentum: Perceived motion continues in the observer’s mind (David & Senior, 2000; Freyd, 1983).
fMRI Study (Kourtzi & Kanwisher, 2000):
MT and MST cortex respond to both real and implied motion.
Greater response to implied motion than to static images.
Motion Aftereffect (Winawer et al., 2008):
Viewing implied motion in pictures causes perception of motion in the opposite direction.
Motion’s Importance in Perception
Motion is crucial for:
Locating objects.
Avoiding dangers.
Interacting with the environment.
Gaining information.
Motion appears throughout perception studies:
Chapter 6: Visual attention (eye movement, scanning scenes).
Chapter 7: Taking action (walking, grasping, observing others move).
Chapter 8: Perceiving movement.
Future chapters: Depth perception (Ch. 10), moving sounds (Ch. 12), motion on skin (Ch. 15).
Developmental Dimension: Infants & Biological Motion
Biological motion perception develops with age.
Evidence for innate biological motion perception:
Newborn chicks (Vallortigara et al., 2005): Preferred biological motion without prior experience.
Newborn humans (Simion et al., 2008): Preferred biological motion displays over random motion.
Two mechanisms in infants:
Reflex-like mechanism at birth: Helps react to caregivers.
Second mechanism (emerging at ~3 months): Improves with experience and enhances social interaction.
Research shows perception declines at 1–2 months before improving again at 3 months and beyond.
Conclusion
Biological motion perception is distinct from other motion types.
Specific brain areas (STS, FFA, PFC) are dedicated to processing biological motion.
Perception of motion is essential for survival and social interaction.
Key Questions About Color Perception
Why does mixing yellow and blue paint create green?
Why do colors look the same indoors and outdoors?
Does everyone perceive color the same way?
The Importance of Color in Our Lives
Color is an integral part of our environment and influences many aspects of daily life:
Practical applications – Traffic lights, clothing choices, paintings.
Psychological impact – People have favorite colors (e.g., blue is most favored; Terwogt & Hoeksma, 1994).
Emotional associations – Colors are linked to feelings:
Purple → Rage
Red → Embarrassment
Green → Envy
Blue → Sadness (Terwogt & Hoeksma, 1994; Valdez & Mehrabian, 1994).
Cultural meanings –
Red → Danger
Purple → Royalty
Green → Ecology
Despite our constant interaction with color, we often take it for granted—until we lose the ability to see it.
Case Study: Mr. I.’s Loss of Color Vision
Mr. I., a painter, became color blind at age 65 after a car accident caused a cortical injury.
He wrote to neurologist Oliver Sacks (1986), describing his distress:
“My dog is gray. Tomato juice is black. Color TV is a hodge-podge.”
Psychological impact:
His once vibrant paintings became meaningless.
Food (now gray) became unappetizing.
Sunsets (once red and warm) appeared as black streaks in the sky (Sacks, 1995).
Diagnosis: Cerebral Achromatopsia (color blindness due to brain injury).
Born vs. Acquired Color Blindness
Cerebral achromatopsia – Occurs after a brain injury in individuals who previously had normal color vision.
Genetic color blindness (partial or total) – Present from birth due to the absence of cone receptors.
Key differences:
Born color-blind individuals do not feel the same emotional loss because they never experienced normal color vision.
However, they report similar effects, such as the darkening of reds.
Shared struggles:
Distinguishing objects (e.g., a brown dog was visible on a light road but blended into foliage in the background).
Adapting to a Colorless World
Over time, Mr. I. adjusted to his condition and embraced black-and-white art.
His experience underscores the profound role color plays in perception, emotion, and daily life.
Research on color blindness provides further insights (Heywood et al., 1991; Nordby, 1990; Young et al., 1980; Zeki, 1990).
Beyond Aesthetics: The Functional Role of Color
Color enhances our visual world, but it also serves critical practical and biological functions beyond beauty.
Functions of Color Perception
Color perception plays a critical role in both natural and human-made environments, serving as a signaling system and enhancing perceptual organization (Smithson, 2015).
1. Color as a Signaling System
Natural examples:
A banana turns yellow when ripe.
Red fruit stands out against green foliage, aiding foraging.
Human-made examples:
A red traffic light signals "stop".
Color-coded labels help in product recognition.
2. Color and Perceptual Organization
Perceptual organization (discussed in Chapter 5) refers to how we group similar elements and distinguish objects from backgrounds.
Example: Monkeys foraging for fruit
A monkey with color vision can easily detect red fruit in green leaves (Figure 9.1a).
A color-blind monkey struggles to differentiate the fruit (Figure 9.1b).
Evolutionary perspective:
The ability to detect colorful food may have driven the evolution of primate color vision (Mollon, 1989, 1997; Sumner & Mollon, 2000; Walls, 1942).
Color-blind humans face similar challenges; Knut Nordby (1990), a vision scientist with total color blindness, described having to feel for berries instead of seeing them.
3. Color and Object Recognition
Color improves recognition speed and accuracy (Tanaka & Presnell, 1999).
Observers recognized a yellow banana faster than a purple banana.
Beyond single objects:
Color helps us recognize entire scenes (Gegenfurtner & Rieger, 2000).
It allows for quick scene perception (Castelhano & Henderson, 2008).
4. Color and Emotional Perception
Color influences how we perceive facial expressions (Thorstenson et al., 2019).
A face appears angrier when red.
A face appears more disgusted when green (Figure 9.3).
Next Steps: Understanding How We Perceive Color
How does our nervous system process color?
What is the relationship between light and color?
How do the two main theories of color vision explain our perception?
Color and Light
The relationship between color and light was famously explored by Isaac Newton (1642–1727), whose prism experiments laid the foundation for our understanding of color perception.
1. Newton’s Prism Experiment
Newton directed sunlight through a hole in a window shade, allowing a narrow beam to pass through Prism 1 (Figure 9.4a).
The prism separated white light into a spectrum of colors (Figure 9.4b).
He disproved the idea that the prism added color to light. Instead, he concluded that white light is composed of different colors, which the prism merely separates.
A second prism did not change the color of individual beams, proving that spectrum colors are pure and not mixtures of other colors.
Key Observations
Different colors bend by different amounts when passing through a prism.
Red bends the least; violet bends the most.
Color is linked to the physical properties of light, later identified as wavelengths.
2. The Visible Spectrum and Wavelengths
Scientists later confirmed that color perception is based on wavelengths of light:
Violet → 400–450 nm
Blue → 450–490 nm
Green → 500–575 nm
Yellow → 575–590 nm
Orange → 590–620 nm
Red → 620–700 nm
3. Reflectance and Transmission
The colors we see in objects depend on the wavelengths of light they reflect into our eyes.
Chromatic colors (e.g., red, blue, green) result from selective reflection—some wavelengths are reflected more than others.
Achromatic colors (white, gray, black) occur when all wavelengths are reflected equally.
Example (Figure 9.5a & b):
A red paper reflects long wavelengths and absorbs the rest.
A white paper reflects all wavelengths equally.
For transparent materials (e.g., glass, liquids), color is determined by selective transmission—only certain wavelengths pass through (Figure 9.5c).
Cranberry juice transmits red light.
Limeade transmits green light.
4. Color Mixing: Subtractive vs. Additive
Mixing Paints (Subtractive Color Mixing)
When mixing paints, each absorbs (subtracts) certain wavelengths, leaving only common reflected wavelengths.
Example: Mixing blue and yellow paint produces green (Figure 9.7a & b).
Blue paint absorbs long wavelengths, reflecting short & medium.
Yellow paint absorbs short wavelengths, reflecting medium & long.
The only shared reflected wavelengths are in the green spectrum → we see green.
Mixing Lights (Additive Color Mixing)
Light mixing is different because all projected wavelengths add together.
Example: Mixing blue and yellow light produces white (Figure 9.8 & 9.9).
Blue light → contains short wavelengths.
Yellow light → contains medium and long wavelengths.
Combined, they contain short + medium + long wavelengths, which our brain perceives as white.
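A coarse numerical sketch of the two mixing rules (my illustration; real spectra are continuous, and the multiply-reflectances rule for paint is only an approximation). Three wavelength bands stand in for short, medium, and long wavelengths:
```python
import numpy as np

bands = ("short", "medium", "long")

# Reflectance per band (fraction of light reflected):
blue_paint   = np.array([0.9, 0.6, 0.0])   # reflects short & medium, absorbs long
yellow_paint = np.array([0.0, 0.7, 0.9])   # reflects medium & long, absorbs short

# Subtractive mixing (paints): each pigment absorbs its share, so the
# mixture reflects roughly the product of the two reflectances.
print(dict(zip(bands, blue_paint * yellow_paint)))   # only the medium (green) band survives

# Additive mixing (lights): projected wavelengths simply sum.
blue_light   = np.array([1.0, 0.0, 0.0])             # short wavelengths
yellow_light = np.array([0.0, 1.0, 1.0])             # medium + long wavelengths
print(dict(zip(bands, blue_light + yellow_light)))   # short+medium+long -> seen as white
```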
5. Summary of Wavelength & Color
✔ Objects reflect or transmit specific wavelengths, which determine their color.
✔ Mixing paints results in fewer wavelengths being reflected (subtractive mixing).
✔ Mixing lights results in more wavelengths reaching the eye (additive mixing).
✔ Color perception is influenced by background context and lighting conditions.
9.3 Perceptual Dimensions of Color
Newton’s Seven Colors
Described visible spectrum with seven colors: red, orange, yellow, green, blue, indigo, violet.
Likely influenced by mysticism (seven musical notes, days of the week, known planets, deadly sins).
Modern science excludes indigo (hard to distinguish from blue and violet).
Nonspectral Colors & Color Differentiation
Nonspectral colors (e.g., magenta) are mixtures of other colors.
Humans can differentiate around 2.3 million colors (Linhares et al., 2008).
Three Perceptual Dimensions of Color
Hue – Color type (e.g., red, blue, green).
Saturation – Intensity of color (more white added = less saturation).
Value (Lightness) – Light-to-dark dimension (darker = lower value).
Color Solid (Munsell Color System)
Hues arranged around the circumference.
Saturation increases outward from the center.
Value/lightness represented by height (lighter at top, darker at bottom).
Next Focus: Connection between color vision and cone receptors in the retina.
Trichromacy of Color Vision - Key Points
Historical Background
Newton’s Prism Experiment (1704): Proposed that different light components stimulate the retina differently, leading to color perception.
Thomas Young (1802): Suggested color vision is based on three principal colors (red, yellow, blue).
James Clerk Maxwell & Hermann von Helmholtz (1855-1860): Provided experimental evidence, leading to the Young-Helmholtz Trichromatic Theory.
Color-Matching Experiments
Conducted by Maxwell to prove trichromacy.
Used a reference color (single wavelength) and a comparison field (mixing wavelengths).
Key findings:
At least three wavelengths were needed to match any reference color.
Two wavelengths were insufficient.
Never needed four wavelengths.
Conclusion: Color vision depends on three types of cone receptors with different spectral sensitivities.
Physiological Evidence of Trichromacy
1963-1964: Discovery of three cone types using microspectrophotometry.
Cone absorption spectra:
S-cones (short) – 419 nm
M-cones (middle) – 531 nm
L-cones (long) – 558 nm
Adaptive optical imaging: Allowed visualization of cone distribution in the retina.
How Cone Responses Relate to Color Perception
Different wavelengths stimulate different cones to varying degrees.
Example responses:
Blue light → Strong S-cone activation, weaker M & L.
Yellow light → Strong M & L activation, weak S.
White light → Equal activation of all three cones.
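To make this pattern idea concrete, here is a toy computation (my own Gaussian approximation; real cone fundamentals are asymmetric and broader) of the S, M, L response triplet for two wavelengths:
```python
import numpy as np

def cone_responses(wavelength_nm, peaks=(419.0, 531.0, 558.0), width=60.0):
    """Toy S, M, L cone activations: Gaussian sensitivity curves centered
    on the absorption peaks listed above (illustration only)."""
    return np.exp(-((wavelength_nm - np.array(peaks)) ** 2) / (2 * width ** 2))

for nm in (450, 580):
    s, m, l = cone_responses(nm)
    print(f"{nm} nm -> S={s:.2f}  M={m:.2f}  L={l:.2f}")
# 450 nm: S responds most; 580 nm: M and L respond strongly, S barely.
# It is this pattern across the three cones, not any single cone's
# output, that the visual system uses to signal color.
```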
Metamerism & Color Matching
Metamerism: Two physically different lights appearing identical due to identical cone activation patterns.
Example:
620 nm (red) + 530 nm (green) mix = Same cone activation as 580 nm (yellow) → Perceived as identical.
Color Vision Deficiencies
Monochromacy (One Pigment - Color Blindness):
Rare (10 in 1 million).
Only rods function → Vision in shades of gray.
Can simulate this by dark adaptation.
Dichromacy (Two Pigments - Partial Color Blindness):
Can perceive color but confuses some hues.
Needs only two wavelengths to match any reference color.
Tested using Ishihara plates.
9.5 on the Opponency of Color Vision:
Opponent-Process Theory:
Proposed by Ewald Hering, this theory suggests that color perception is based on opponent pairs: red–green, blue–yellow, and black–white.
Colors across from each other on the color circle are complementary, meaning they cancel each other out to produce gray or white.
Behavioral Evidence for Opponency:
Phenomenological Evidence: Observations of color experiences, including hue scaling experiments, showed that colors are perceived as combinations of four primary colors (red, green, blue, yellow).
Psychophysical Evidence: Hue cancellation experiments (Hurvich & Jameson, 1957) quantified opponent processes by measuring how much of an opposing color was needed to cancel out a given hue.
Physiological Evidence:
Opponent Neurons: Discovered in the lateral geniculate nucleus (LGN) and visual cortex, these neurons respond in opposite ways to different wavelengths of light.
Types of Opponent Neurons:
Single-opponent cells: Respond to large areas of color.
Double-opponent cells: Respond to color patterns and edges, linking color perception to form recognition.
Neural Mechanisms: Opponent cells process signals from cone receptors, with excitatory and inhibitory inputs determining responses to different wavelengths.
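One common textbook scheme for building opponent channels from cone signals can be sketched as follows (illustrative weights of my own choosing, not the actual weights of real LGN neurons):
```python
def opponent_channels(s, m, l):
    """Toy opponent coding of cone signals:
       red-green   ~ L - M         (positive -> 'redder')
       blue-yellow ~ S - (L + M)/2 (positive -> 'bluer')
       achromatic  ~ L + M         (overall brightness)"""
    return {"red-green": l - m,
            "blue-yellow": s - (l + m) / 2,
            "achromatic": l + m}

# Long-wavelength light (L strong): red-green channel goes positive.
print(opponent_channels(s=0.05, m=0.45, l=0.90))
# Short-wavelength light (S strong): blue-yellow channel goes positive.
print(opponent_channels(s=0.90, m=0.30, l=0.20))
```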
Questioning Unique Hues:
Research has challenged the idea that red, green, blue, and yellow are uniquely perceived hues.
New experiments using different primary colors (orange, lime, purple, teal) produced similar results, suggesting that perception is more complex than initially thought.
Opponent neurons may not directly map to specific colors but instead process differences between cone responses, helping to differentiate overlapping wavelengths.
Color and Form Integration:
Some opponent neurons detect edges and shapes using color differences rather than brightness, reinforcing the idea that color perception is integrated with form processing.
Cortical Mechanisms of Color Perception:
Question: Is there one specialized area in the cortex for color processing, like the FFA, EBA, and PPA for faces, bodies, and places?
Semir Zeki (1983-1990) popularized the idea of a "color center" in V4 based on color-responsive neurons.
However, later research suggests color processing is distributed across multiple cortical areas.
Study by Lafer-Sousa et al. (2016):
Participants watched 3-second video clips while brain scans were done.
Findings:
Color areas are sandwiched between areas that process faces and places.
Figure 9.28a: Data from one hemisphere of an individual shows the proximity of these areas.
Figure 9.28b: Combined data from multiple participants highlights this adjacency.
Brain Damage & Double Dissociation:
Patient D.F.: Impaired object perception but normal color perception.
Other patient: Impaired color perception but normal form perception.
This illustrates that color and form are independently processed in the brain.
Achromatopsia and Prosopagnosia:
72% of achromatopsia patients also have prosopagnosia (face recognition issues).
Despite being independent, color processing is closely related to face and place processing due to their proximity in the cortex.
Cortex Function:
Color processing is both separate from other functions and closely related to them.
Color can influence perceptual organization, attention, and motion perception.
Current Knowledge Gap:
Despite data on how neurons respond to different wavelengths, we still don’t fully understand how signals from the three types of cones create our perception of color.
Color Constancy: Perception of an object’s color remains relatively constant despite changes in lighting conditions.
Interaction of Illumination and Reflection:
Sunlight has a balanced wavelength distribution.
Incandescent bulbs emit more long-wavelength (yellowish) light.
LED bulbs emit more short-wavelength (bluish) light.
Chromatic Adaptation:
Prolonged exposure to a specific color reduces sensitivity to that color.
Example: Adapting to red light reduces long-wavelength cone sensitivity, altering color perception.
Uchikawa et al. (1989) Experiment:
Three conditions tested color constancy under different illumination settings.
Chromatic adaptation resulted in partial color constancy.
Environmental Adaptation:
Seasonal changes affect perceived color in a scene.
Green-rich summer scenes decrease green perception, making other colors stand out.
Memory Color:
Prior knowledge of object colors influences perception.
Familiar objects appear more saturated than unfamiliar objects of the same wavelength.
Illumination Consideration:
The visual system compensates for lighting changes, unlike cameras.
Color constancy improves with diverse surrounding colors, binocular vision, and 3D viewing.
#TheDress Phenomenon (2015):
Viral image where people perceived the same dress in different colors.
Surveys showed a split between blue/black and white/gold perceptions.
Highlighted differences in individual visual processing and color interpretation.
Lightness Perception and Color Vision - Study Notes
Lightness Perception and Shadows
Role of the Penumbra
The penumbra (fuzzy shadow edge) provides the visual system with cues that indicate a shadow.
When the penumbra is covered, the shadowed area appears as a change in reflectance rather than illumination.
Key Concept: Lightness constancy occurs when the penumbra is visible but fails when it is masked.
Orientation of Surfaces and Perception of Lightness
Perceiving Lightness at a Corner (Demonstration)
A folded index card with one side illuminated and the other shadowed is perceived as a single material with different illumination.
Viewing it through a hole removes orientation and illumination cues, making the shadowed area appear as a reflectance change (gray rather than white).
Key Concept: Lightness constancy depends on visual information about surface orientation and lighting.
Example: Statue of St. Mary
At night, the dark area above Mary's arms seemed possibly blue (like her sash) or in shadow.
Daylight confirmed it was a shadow.
Key Concept: Shadows can sometimes be confused for differences in material color.
Color Perception and Wavelengths
Color Perception is Constructed
Wavelengths themselves are colorless; color is created by the nervous system's response.
Under dim lighting (dark adaptation), color perception diminishes, demonstrating that cones are responsible for color vision.
Newton’s View: Colors do not exist in light; they are sensations caused by the interaction of light with the nervous system.
Color Vision Across Species
Example: Honeybees can see ultraviolet light that humans cannot.
Key Concept: Different species experience different color perceptions based on their visual system.
Comparison with Other Senses
Just as wavelengths don’t inherently have color, sounds don’t have pitch, and molecules don’t have taste—perception is constructed by the nervous system.
Infant Color Vision
Development of Color Perception
Color vision is limited at birth due to underdeveloped cone receptors.
By 3-4 months, infants develop appreciable color vision.
Bornstein’s Experiment (1976)
Used habituation to test whether infants categorize colors like adults.
Infants recognized a 480-nm light (blue) as different from a 510-nm light (green) but did not distinguish 510-nm from 540-nm (both green).
Key Finding: 4-month-old infants categorize colors similarly to adults.
Franklin & Davies’ Experiment (2004)
Used the novelty-preference procedure: Infants looked longer at a new color when it crossed a categorical boundary (e.g., green to blue).
Supported the idea that infants have adult-like color categories before acquiring language.
Skelton et al. (2017) - Further Findings
Infants distinguish blue, green, purple, yellow, and red categories.
Color perception continues to develop into teenage years.
Key Concept: Early color categorization is based on cone receptor mechanisms, not language.
Key Questions to Consider
How do we perceive depth from a two-dimensional retinal image?
Why is depth perception better with two eyes than one?
Why don’t people appear to shrink as they move farther away?
The Connection Between Depth and Size
Depth and size perception are closely linked in vision.
Example (Figure 10.1):
A man appears small on a chair but is actually on a distant platform.
The illusion occurs because the brain misinterprets depth, leading to a misjudgment of size.
This demonstrates how size perception depends on correctly perceiving depth.
Perceiving Depth: The Challenge
The image projected onto the retina is two-dimensional.
Despite this, we can perceive depth in a three-dimensional world.
Example (Figure 10.2):
Light from a tree and a house falls onto different points on the retina.
The retinal image alone does not indicate how far the objects are.
The brain interprets additional depth cues to resolve this ambiguity.
Cue Approach to Depth Perception
Depth perception relies on visual cues that correlate with distance in the real world.
Key Depth Cue: Occlusion
When one object partially blocks another, the blocked object must be farther away.
Example: A tree blocking part of a house signals that the tree is in front.
Three Major Depth Cue Categories
Oculomotor Cues
Based on eye muscle movements and focus adjustments.
Monocular Cues (not detailed in the passage but implied)
Depth cues that work with one eye, such as occlusion, perspective, and shading.
Binocular Cues (related to why two eyes improve depth perception)
Differences in images from each eye help determine depth.
10.1 Perceiving Depth
Depth perception allows us to judge distances in a three-dimensional world despite the two-dimensional image on the retina.
Example: An illusion can make a person appear much smaller due to a misperception of their distance from the camera.
Depth and size perception are closely related; errors in depth perception can lead to errors in size perception.
The brain uses various optical and environmental cues to determine depth and size.
Cue approach to depth perception:
Identifies depth information in the retinal image.
Example: Occlusion—when one object partially covers another, the covered object is perceived as farther away.
Learned through experience and eventually becomes automatic.
10.2 Oculomotor Cues (Based on eye movement and muscle tension)
Convergence: Inward movement of eyes when focusing on close objects.
Accommodation: The lens changes shape to focus on objects at different distances.
Demonstration:
Hold a finger at arm’s length and slowly move it toward your nose.
Feel your eyes move inward (convergence) and the tension as the lens adjusts (accommodation).
These cues are most effective for objects within arm’s length.
10.3 Monocular Cues (Work with one eye)
Pictorial Cues (Provide depth information in images)
Occlusion: When one object partially covers another, the covered object appears farther away.
Relative Height:
Objects below the horizon that are higher in the visual field (closer to the horizon) appear farther away.
Objects above the horizon (e.g., clouds) that are lower in the visual field appear farther away.
Familiar Size:
We judge distance based on our prior knowledge of an object’s size.
Example: If a dime, quarter, and half-dollar appear the same size in a picture, we assume the dime is closer.
Relative Size:
When two objects are the same actual size, the one that appears smaller is assumed to be farther away.
Example: Motorcycles and telephone poles in a landscape image.
Perspective Convergence:
Parallel lines appear to converge as they recede into the distance (e.g., railroad tracks).
Used in Renaissance paintings to create depth.
Atmospheric Perspective:
Distant objects appear hazier and have a slight blue tint due to the scattering of light in the atmosphere.
Example: Far-off mountains look bluish and less sharp.
Texture Gradient:
Objects in a uniform pattern appear closer together as they recede, creating a sense of depth.
Example: A field of flowers where those in the distance seem more densely packed.
Shadows and Depth Perception
Shadows as Cues for Object Location:
Shadows provide information about the position of objects.
Example: Spheres on a checkerboard.
Without shadows: unclear if spheres are on the surface or floating.
With shadows: left spheres on the surface, right spheres floating above.
Shadows enhance 3D perception:
Shadows define contours (e.g., in mountains) and make objects appear three-dimensional.
Example: Mountains look 3D in the morning (shadows present), flat at noon (no shadows).
Motion-Produced Cues
Motion Parallax:
Occurs when nearby objects move faster than distant objects as the observer moves.
Example: Looking out the window of a moving car:
Nearby objects (e.g., trees) move quickly.
Distant objects (e.g., houses) move slowly.
Mechanism:
The image of near objects moves a greater distance across the retina compared to distant objects.
Use in animals and technology:
Locusts use motion parallax to gauge distance to prey (Wallace, 1959).
Motion parallax helps robots detect obstacles (Srinivasan & Venkatesh, 1997).
Also used in cartoons and video games to create depth.
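The mechanism above reduces to one line of geometry: for an object straight out to the side of the observer, the image's angular speed is the observer's speed divided by the object's distance. A small sketch (my illustration, with made-up numbers):
```python
import math

def angular_speed_deg(observer_speed_mps, distance_m):
    """Motion parallax at closest approach: d(theta)/dt = v / d
    (radians per second), converted to degrees."""
    return math.degrees(observer_speed_mps / distance_m)

for d in (5, 50, 500):   # roadside tree, nearby house, distant hill (m)
    print(f"object {d:>3} m away sweeps {angular_speed_deg(30, d):7.2f} deg/s at 30 m/s")
```
Nearby objects sweep across the retina hundreds of times faster than distant ones, which is why they appear to rush past a moving car while the hills barely move.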
Deletion and Accretion:
Deletion: As the observer moves, objects become covered.
Accretion: Objects become uncovered when the observer moves in the opposite direction.
Example:
Right hand covered by left hand as head moves left (deletion).
Left hand uncovers right hand as head moves right (accretion).
Use: This gives cues about object distance.
Application: Important in how we perceive depth in the environment.
Integrating Monocular Depth Cues
Monocular Depth Cues:
Provide "best guess" depth information.
Can be uninformative in some situations.
Example: Relative height works well on a flat plane, but may not help in 3D scenes.
Monocular Cues Work Over Different Distances:
Close Range (0–2 meters):
Accommodation, Convergence.
Medium Range (2–20 meters):
Motion parallax, Deletion & Accretion.
Long Range (Above 20 meters):
Atmospheric perspective, Relative height, Texture gradients.
Whole Range:
Occlusion, Relative size.
Combining Cues:
Depth perception is a combination of cues, no single cue provides perfect depth information.
Relative Depth: Some cues give relative depth (e.g., occlusion, relative height).
Actual Depth: Some cues contribute to more precise depth (e.g., relative size, motion parallax).
Summary of Depth Cues
Relative Depth Cues (Table 10.1a):
Occlusion: Works across all ranges (0–20+ meters).
Deletion & Accretion: Effective from 2–20 meters.
Relative Height: Works from 2–20 meters and above.
Atmospheric Perspective: Useful only above 20 meters.
Actual Depth Cues (Table 10.1b):
Relative Size: Useful across all distances (0–20+ meters).
Texture Gradients: Effective from 2–20 meters and above.
Motion Parallax: Effective for 0–20 meters.
Accommodation & Convergence: Only useful at very close distances (0–2 meters).
Binocular Depth Information
Monocular Cues: You can perceive depth with one eye, but closing one eye reduces depth information.
Two-Eyed Depth Perception: Involves differences between images formed on the left and right eyes.
Demonstration: Two Eyes, Two Viewpoints
Close your right eye and hold a finger at arm's length.
Place another finger a foot away to cover the first one.
Switch between eyes: the position of the front finger appears to change when switching from one eye to the other.
Stereoscopic Vision
Stereoscopic Depth Perception: Created by differences in the images seen by both eyes (binocular disparity).
Monocular vs. Stereoscopic Vision: Monocular uses cues like occlusion and relative size, while stereoscopic vision gives a richer depth experience.
Susan Barry’s Experience with Stereovision
Background: Susan had cross-eye issues as a child (strabismus), leading to monocular depth perception.
Overcoming Monocular Vision: After eye exercises, she gained stereovision, which significantly enhanced her depth perception.
Example: First experience with stereoscopic depth was with a steering wheel, which seemed to "float" when she used stereovision.
Stereoscopic Depth in Movies
Standard Movies: Use monocular cues (occlusion, relative height, shadows) for depth perception.
3D Movies: Use two cameras to simulate stereoscopic vision, projecting images separately for each eye using polarized light.
Binocular Disparity
Definition: Difference in the images on the left and right retinas, forming the basis of stereoscopic vision.
Corresponding Retinal Points
Corresponding Points: Areas that would overlap if both eyes were aligned.
Horopter: The surface where images of objects fall on corresponding points.
Noncorresponding Points and Absolute Disparity
Noncorresponding Points: Objects not on the horopter; their images fall on noncorresponding points on each retina.
Absolute Disparity: The angle between where an object should fall on the retina and where it actually does.
Crossed vs. Uncrossed Disparity
Crossed Disparity: When an object is closer than the point of fixation (left eye sees it to the right, right eye sees it to the left).
Uncrossed Disparity: When an object is farther than the point of fixation (left eye sees it to the left, right eye sees it to the right).
Distance and Absolute Disparity
Greater Disparity: Indicates greater distance from the horopter.
Disparity Creates Stereopsis (Perceptual Depth)
Stereopsis: The perception of depth due to binocular disparity, demonstrated with random-dot stereograms.
Random-Dot Stereograms: Patterns that have no pictorial depth cues, but when viewed with each eye separately, create the perception of depth due to disparity.
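Absolute disparity for an object straight ahead can be approximated as the difference between the vergence angles of the object and of the fixation point. A small-angle sketch (my illustration; the sign convention is chosen so that crossed disparity comes out positive):
```python
import math

def disparity_deg(fixation_m, object_m, interocular_m=0.065):
    """Small-angle absolute disparity between a fixated point and
    another object straight ahead: I/d_object - I/d_fixation."""
    return math.degrees(interocular_m / object_m - interocular_m / fixation_m)

print(f"{disparity_deg(2.0, 1.5):+.2f} deg")  # nearer than fixation -> crossed (+)
print(f"{disparity_deg(2.0, 3.0):+.2f} deg")  # farther than fixation -> uncrossed (-)
print(f"{disparity_deg(2.0, 2.0):+.2f} deg")  # on the horopter -> zero disparity
```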
The Correspondence Problem
Problem: How the brain matches corresponding points from the left and right retinas to calculate depth.
Challenge: It’s easy to match features in the real world, but much harder with random-dot patterns like in stereograms.
Neural Mechanisms
Disparity Tuning: Neurons respond to specific disparities between the images of the left and right eyes (disparity tuning curves).
Conclusion
The ability to perceive depth through stereoscopic vision is based on binocular disparity, with various neural mechanisms involved in solving the correspondence problem.
Binocular Depth Cells
Binocular Depth Cells (Disparity-Selective Cells): Neurons that respond to different amounts of binocular disparity.
Discovered in V1 (Primary Visual Cortex): Research in the 1960s and 1970s revealed these cells that fire based on disparity between the images in the left and right eyes.
Disparity Tuning Curve: Shows how these neurons respond best to a specific amount of disparity (e.g., about 1 degree).
Stimulus–Physiology Relationship
Stimulus–Physiology Relationship (B): How the amount of disparity in the visual stimulus corresponds to the firing of binocular depth cells.
Stimulus–Perception Relationship
Stimulus–Perception Relationship (A): The connection between binocular disparity and the perception of depth.
Physiology–Perception Relationship
Physiology–Perception Relationship (C): Demonstrates the connection between binocular depth cells and depth perception.
Selective Rearing Experiment
Blake and Hirsch (1975):
Reared cats alternating vision between eyes during the first 6 months.
Result: Few binocular neurons and poor depth perception performance.
Conclusion: Disparity-selective neurons are responsible for stereopsis.
Research Outside V1
Disparity-Selective Neurons Found Beyond V1: Neurons sensitive to disparity are also found in other brain areas, such as area MT.
Microstimulation Experiment
DeAngelis (1998):
Stimulated disparity-selective neurons in area MT of monkeys.
Result: Stimulation shifted the monkey’s depth perception to match the induced disparity.
Conclusion: Stimulating disparity-selective neurons affects depth perception, demonstrating the physiology-perception relationship.
Summary of Findings
Disparity-Selective Neurons: Provide the physiological basis for stereopsis and depth perception.
Experiments (Selective Rearing & Microstimulation): Both confirm that these neurons are critical for depth perception.
Depth Perception Across Species
Humans: Use multiple depth cues, including binocular disparity.
Cats, Monkeys, Humans: Have frontal eyes with overlapping visual fields, allowing the use of binocular disparity for depth perception.
Other Animals with Binocular Disparity:
Owls, Horses, Insects: Also use binocular disparity to perceive depth.
Praying Mantis:
Large overlapping eye fields.
Experiment with prisms showed mantis' striking response was based on the degree of disparity.
Further experiments (mantis cinema) confirmed findings, showing the mantis perceives depth based on disparity (Nityananda et al., 2016).
Lateral Eyes and Limited Disparity
Animals with Lateral Eyes (e.g., Rabbits, Pigeons):
Have less overlap between visual fields, limiting disparity use to a small area.
Rabbits: Wide panoramic view but stereoscopic depth occurs only in the overlap.
Pigeons: Small overlap area around the beak, useful for pecking at grain, with some depth perception at this point.
Motion Parallax in Insects
Motion Parallax: Primary method for insects to judge depth.
Locusts: Sway their heads side to side to create motion parallax, using the degree of head sway to estimate distance based on how much objects move across the retina.
Echolocation in Bats
Bats: Use echolocation to sense depth.
Echolocation: Sends out sound pulses and measures the time it takes for the echoes to return to determine the distance of objects.
Can sense obstacles in complete darkness using this method (Griffin, 1944).
Echo timing helps bats avoid objects, similar to sonar used in WWII for underwater object detection.
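The echo arithmetic is simple enough to state directly (assuming a speed of sound of roughly 343 m/s in air; the exact value varies with temperature):
```python
def echo_distance_m(round_trip_s, speed_of_sound=343.0):
    """The pulse travels out and back, so distance is half of
    speed x round-trip time."""
    return speed_of_sound * round_trip_s / 2

print(f"{echo_distance_m(0.010):.2f} m")  # a 10 ms echo -> about 1.7 m
print(f"{echo_distance_m(0.100):.2f} m")  # a 100 ms echo -> about 17 m
```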
Summary of Depth Perception Across Species
Various Methods: Animals use different depth cues (binocular disparity, motion parallax, echolocation) depending on their anatomy and needs.
Adaptations: Some animals rely on specific types of cues, such as lateral-eyed animals sacrificing disparity for a wider field of view, and bats relying on echolocation in the dark.
Size and Depth Perception: Perception of size and depth are related. Misjudging distance can affect perceived size (e.g., small box mistaken for a truck).
Holway and Boring Experiment (1941): Observers matched the size of a comparison circle to test circles viewed at different distances.
Visual Angle: The angle an object subtends at the observer's eye, determined by both object size and distance.
Experiment Results: With depth cues, observers judged size accurately; without depth cues, they relied on retinal image size (visual angle).
Size Constancy: Objects appear to maintain a consistent size despite changes in distance. For example, trees in the distance appear the same size even though they take up less of the field of view.
Size Constancy Demonstration: Estimating someone's height at different distances shows size perception remains stable.
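Visual angle has a simple closed form, which makes the Holway and Boring logic easy to check numerically (a sketch with assumed numbers, e.g., a 1.8 m person):
```python
import math

def visual_angle_deg(object_size_m, distance_m):
    """Angle an object subtends at the eye: 2 * atan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(object_size_m / (2 * distance_m)))

for d in (2, 10, 50):
    print(f"1.8 m person at {d:>2} m -> {visual_angle_deg(1.8, d):5.2f} deg")
# The visual angle (retinal size) shrinks with distance, yet with good
# depth cues perceived size stays roughly constant (size constancy).
```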
10.8 Illusions of Depth and Size
Visual Illusions Overview:
Fascinate people by showing how the visual system can be tricked.
Examples:
Illusions of Lightness (Chevreul, Mach Bands) – small changes in lightness seen near borders.
Attentional Effects (Change Blindness) – two alternating scenes appear similar despite differences.
Illusions of Motion – stationary objects perceived as moving.
The Müller-Lyer Illusion:
Description: Right vertical line appears longer than the left, although both are the same length.
Explanations:
Misapplied Size Constancy Scaling (Gregory, 1966):
Size constancy adjusts for distance, but in 2D, it distorts perceptions.
The fins on lines make them look like parts of corners, affecting depth perception.
Inside corners (line on right) appear farther, making the line appear longer.
Demonstration: Lines seem different when compared to 3D representations (like books).
Conflicting Cues Theory (Day, 1989):
Perceived length is based on actual line length and overall figure length.
Outward fins cause the vertical line to appear longer.
Applies to other Müller-Lyer variations (dots, dumbbells).
Other Size Illusions:
The Ponzo Illusion:
Converging railroad tracks cause top animal to appear larger even though both are the same size.
Explanation: Depth cues from tracks make the top object seem farther, adjusting perceived size.
The Ames Room:
Two people of equal size appear different in size depending on their location in a distorted room.
Explanation (Size-Distance Scaling): The room’s shape causes the perceived distance and visual angle to differ.
Alternative Explanation (Relative Size): Perception based on how much space each person fills in the room.
The Moon Illusion:
Description: The moon appears larger on the horizon than when higher in the sky.
Explanation (Apparent Distance Theory):
The horizon moon is perceived farther due to depth cues from the terrain, making it appear larger.
S = R × D (Size-Distance Scaling): Larger perceived distance on the horizon, despite identical visual angle.
Other Theories:
Angular Size Contrast Theory: Larger surrounding sky makes the elevated moon appear smaller.
Factors Involved:
Atmospheric perspective, color (redness increases perceived size), oculomotor factors (convergence).
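Plugging toy numbers into S = R × D shows how the apparent distance account works: with retinal size R fixed (the moon's visual angle is about the same in both positions), a larger perceived distance D yields a larger perceived size S (illustrative units, not measured values):
```python
def perceived_size(retinal_size, perceived_distance):
    """Size-distance scaling: S = R x D."""
    return retinal_size * perceived_distance

R = 0.5   # same visual angle in both sky positions (arbitrary units)
print(perceived_size(R, perceived_distance=10))  # horizon moon: judged farther -> 5.0
print(perceived_size(R, perceived_distance=6))   # elevated moon: judged nearer -> 3.0
```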
Key Takeaways:
Illusions of Depth and Size demonstrate how depth cues and visual context alter size perception.
Theories like size-distance scaling and conflicting cues explain many of these illusions.
Complex illusions (e.g., Moon, Ames Room) involve multiple factors influencing perception.
Infant Visual Acuity: Development of Depth Perception
Infants develop the ability to perceive depth through different kinds of depth information over the first year of life. Here's a breakdown of the development of key types of depth perception:
1. Optic Flow and Initial Depth Perception
Optic flow: By 3 days old, infants can respond to optic flow (movement patterns indicating depth) by moving their heads backward when the flow is from front to back.
Blink response: By 3 weeks, infants begin to blink in response to approaching objects, indicating an early response to depth cues.
2. Binocular Disparity
Binocular disparity refers to the difference between the views seen by each eye, used to perceive depth.
Binocular fixation: Between 1 and 2 months, infants start to show some ability to fixate on objects binocularly (with both eyes aligned), though it's imprecise.
Development: The ability to use disparity information begins to emerge between 3 to 6 months. Infants around 3 months of age can follow moving stereograms (random-dot patterns that appear as 3D to those who can perceive disparity).
Research by Robert Fox (1980) showed that infants between 3 and 6 months can follow objects using binocular disparity, establishing when infants can perceive depth from binocular disparity.
3. Pictorial Cues
Pictorial cues are visual clues used to interpret depth from two-dimensional images, like overlap or shading.
Familiar size: By around 5–7 months, infants begin to use knowledge of the size of familiar objects to gauge distance. For example, an object that looks larger in the visual field is perceived as closer.
Cast shadows: Shadows provide depth information about the relative position of objects. Infants around 7 months are sensitive to cast shadows, perceiving objects casting shadows as nearer.
Key Experiments and Findings:
Granrud et al. (1985): Infants at 7 months use familiar size to perceive depth, while 5-month-olds do not.
Yonas & Granrud (2006): Infants at 7 months begin to perceive depth from cast shadows, showing an ability to distinguish shadows from other markings.
Conclusion:
Infant depth perception evolves progressively from involuntary responses (such as head movements and blinking) at a few weeks old to more complex uses of binocular disparity and pictorial depth cues by 6 to 7 months. Over time, as infants gain more experience interacting with their environment, they become able to integrate multiple depth cues, but it takes years for these cues to fully coordinate and integrate into the adult experience of depth perception.