Second Half Lectures

Lecture 10 - 10/16/24

Space

  1. Object distance and size

    1. Perceived size ∝ size on retina × perceived distance

    2. Size constancy

      1. The image of an object on the retina gets smaller as the object gets further away

      2. Vision relies on many cues to judge distance or depth

      3. Given that distance, an estimate of size can be recovered

    3. Emmert's Law

      1. The apparent size of an afterimage is directly proportional to the perceived distance of the surface on which you see it
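
The size–distance relation above (and Emmert's law) can be sketched numerically. This is a toy Python illustration; the unit scaling constant is an assumed simplification:

```python
# Toy illustration of "perceived size ∝ retinal size × perceived distance"
# and of Emmert's law. The unit scaling constant is an assumption.

def retinal_size(physical_size, distance):
    """Retinal (angular) size shrinks with distance (small-angle approx.)."""
    return physical_size / distance

def perceived_size(retinal, perceived_distance):
    """Perceived size = retinal size × perceived distance."""
    return retinal * perceived_distance

# A 2 m object at 10 m vs 20 m: retinal size halves,
# but scaling by the (correctly) perceived distance restores 2 m.
print(perceived_size(retinal_size(2.0, 10.0), 10.0))  # 2.0
print(perceived_size(retinal_size(2.0, 20.0), 20.0))  # 2.0

# Emmert's law: an afterimage has a FIXED retinal size, so its
# perceived size grows with the perceived distance of the surface.
afterimage = retinal_size(2.0, 10.0)
print(perceived_size(afterimage, 30.0) > perceived_size(afterimage, 10.0))  # True
```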

    4. Size constancy illusions

      1. Left line appears closer and the right one farther because of the implied distance to inner and outer corners when the angles are taken as perspective cues (as they are on the right)

      2. Automatic processing of depth cues triggers inappropriate distance scaling of perceived size on the left

      3. Top line looks longer because of its apparently greater distance

      4. Automatic processing of depth cues triggers inappropriate distance scaling of perceived size on the left

      5. Over-application of distance scaling to apparent size

    5. Moon Illusion

      1. Visual phenomenon where the moon appears significantly larger when it is near the horizon than when it is high in the sky, even though its actual size and distance from Earth remain the same

        1. Apparent Distance Hypothesis

        2. Relative Size Hypothesis

        3. Atmospheric Effects

    6. Quiz 

      1. Q. If people are asked to judge the size of a traffic light, their estimate is very bad. What does this tell us about size perception?

        1. Size constancy does not exist for all objects, and thus it is not a useful cue.

        2. The size of an object on our retina is the most reliable cue for computing the size of an object.

        3. Perceived size cannot be a reliable cue if we do not have a good estimate of the distance.

          1. Size perception relies heavily on contextual information, including distance cues. When we view a traffic light, without a clear sense of its distance from us, it becomes challenging for our visual system to accurately judge its true size. This emphasizes the role of distance cues in helping us interpret the size of objects correctly; without them, our size estimates tend to be inaccurate

        4. We make fewer size-judgment errors with objects that are above the ground compared to those that are on the ground.

    7. Extrapolating from 2D retinal image

      1. The brain's ability to interpret a flat, 2D image captured by the retina and perceive a 3D world from it.

        1. The mind uses cues and prior knowledge to build a coherent understanding of the spatial relationships, depths, and distances of objects in our environment

      2. The retina records a 2D projection of light and it lacks depth information

        1. Several visual cues it uses:

          1. Monocular depth cues

            1. Information available from one eye, like perspective, texture gradient, and occlusion (one object blocking part of another)

          2. Binocular cues

            1. Arise from slight difference between images each eye sees (binocular disparity) and help provide depth perception

          3. Motion parallax

            1. Objects closer to us appear to move faster across our field of view than objects that are farther away

          4. Prior knowledge and expectations

            1. Brain uses stored information from previous experiences to fill in gaps and resolve ambiguities

  2. Pictorial cues

    1. Cues that support depth perception in flat, static images

      1. Occlusion

        1. Closer objects block farther objects

        2. T-Junctions indicate which objects are in front of which others

          1. But not by how much

      2. Linear perspective

        1. Lines parallel in the real world will appear to converge in 2D

        2. Buildings with right angles and parallel lines

        3. Geometry requires that parallel lines converge in the distance

        4. Introduced into painting by the Romans

      3. Height in field

        1. Assuming a flat, level ground plane

        2. Height in field corresponds to distance

        3. Horizon is at eye level

      4. Known size (object knowledge)

        1. If object is familiar and has a typical size

        2. Then reverse the relation and recover distance to the object

        3. Useful cue but often overruled

        4. Very Big Kid

          1. Mistaken judgment of distance overrules knowledge of size of children

      5. Texture gradients

        1. Assume stuff on ground is uniform in size

        2. Change in size must be due to change in distance

        3. Doesn’t require a level ground plane, but one helps

      6. Atmospheric perspective

        1. Distant surfaces have less contrast because the intervening atmosphere superimposes a haze

        2. The farther the distance, the more the haze, the lower the contrast

      7. Shadows

        1. The effects of light - shadows, shading, highlights - may be discounted in order to recover the surface reflectance but they are not just thrown out

        2. They also tell us about depth

        3. Shadows tell us about the relative placement of objects

        4. And about the relief of the surface on which they fall

        5. If a shadow is in contact with the object that casts it, then the object rests on the shadowed surface

        6. If there is a gap, the object floats over the surface

        7. The distance between the object and the shadow it casts indicates the distance between the object and the shadowed surface

      8. Quiz:

        1. Which of the following pictorial cues is NOT metric?

          1. Linear perspective

            1. Relies on geometric rules (converging parallel lines)

          2. Occlusion

            1. Tells us which objects are in front of others but not by how much

            2. Provides qualitative information, not exact measurements

          3. Height in field

            1. Assumes a flat ground plane, with height corresponding to distance

          4. Atmospheric perspective

            1. Uses contrast reduction due to haze as an indicator of distance

      9. Why do these cues work?

        1. These cues arise from depth differences between objects in the 3D world

          1. Unlikely to arise by chance

          2. They are highly informative and trustworthy

        2. Not so in 2D pictures, of course, where our visual system is tricked into attributing 3D depth as the cause of these cues

      10. How do these cues work?

        1. Not laws or real rules that oblige a fixed interpretation

          1. Not fixed rules

            1. Depth cues don’t function as absolute laws and instead offer suggestions or probabilistic interpretations about spatial relationships

              1. Ie:

                1. Occlusion

                  1. Suggests relative depth

                2. Linear perspective

                  1. Suggests that converging lines meet at a point in the distance but assumes parallel lines in 3D

        2. Each cue “suggests” a small number of possibilities from most likely to least

        3. Choose an interpretation compatible with greatest number of cues in local region

        4. Even if there are too few cues to have much certainty

        5. Sudoku puzzle of vision

          1. Constraint satisfaction process

            1. The brain “fills in” missing information about depth, distance, and object relationships by respecting the constraints provided by multiple visual cues
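
This constraint-satisfaction idea can be sketched as a toy vote among cues; the cue names, candidate interpretations, and voting rule below are invented for illustration:

```python
# Toy constraint satisfaction: each cue "suggests" a set of compatible
# depth interpretations; we pick the interpretation compatible with the
# greatest number of cues in the local region. All names are invented.

cues = {
    "occlusion":        {"A_in_front", "A_behind"},  # qualitative: ambiguous alone
    "height_in_field":  {"A_in_front"},              # A is lower in the image
    "texture_gradient": {"A_in_front"},
}

candidates = {"A_in_front", "A_behind"}

def best_interpretation(cues, candidates):
    """Return the candidate consistent with the most cues."""
    return max(candidates, key=lambda c: sum(c in opts for opts in cues.values()))

print(best_interpretation(cues, candidates))  # A_in_front (3 cues vs 1)
```

Even a few cues usually single out one interpretation, which matches the note that only a small number of cues need to be checked within a local region.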

        6. What if the inference is wrong?

          1. Optical illusions occur

            1. The brain’s assumptions based on typical cue interactions fail

            2. Ie:

              1. Ames room

                1. Creates a distorted perception of size due to conflicting depth cues

              2. Confetti illusion

                1. Misinterprets color and shading due to contextual influences

          2. Errors in this “puzzle-solving” are a natural consequence of the probabilistic and heuristic-based nature of perception

  3. Special Point of View Effects

    1. From one point of view, the cues suggest one form

    2. But this is only seen from that one view

    3. These cues may even suggest a form that is impossible

    4. This demonstrates that these cues are interpreted locally

    5. Ames Room:

      1. Perspective cues to depth override known size

    6. Beuchet chair

      1. Exemplifies how visual perception prioritizes local consistency in interpreting cues over global spatial coherence

        1. Underscored that depth and form perception rely heavily on the observer’s viewpoint and the alignment of visual information

      2. An example of how perspective can override spatial inconsistencies

    7. Quiz:

      1. Recall the Ames room display in which two people stand on opposite sides of the room. One person appears smaller because he or she

        1. Has a larger visual angle, so size constancy causes him to appear smaller

        2. Has a smaller visual angle than the other person but appears to be at the same distance

          1. The visual angle of the person in the far corner is smaller because they are farther away

            1. Usually a smaller visual angle would indicate greater distance, but since the brain incorrectly perceives them to be at the same distance as the nearer person, it interprets the smaller visual angle as the person being smaller

        3. Has the same visual angle as the other person but is actually much further away

        4. Perceived distance change but his visual angle does not

  4. Depth Reversals

    1. When a single 2D image or scene supports two equally plausible 3D interpretations and perception alternates between them

    2. In previous examples, size and depth were misjudged

    3. Here, in each of these figures, there are two possible shapes that differ only in the sign of the depth

    4. There is no misjudgment; both are consistent with the image

    5. When the perceived object reverses, convex becomes concave and vice versa, front and back are exchanged

    6. Shadow/Shading Reversals

      1. Convex (bump) versus concave (dent) depends on where the light is coming from

      2. Light assumed from above in room

      3. Or above head? Turn head upside down

    7. General Convex / Concave reversals

      1. Maid of the Mist draws close to the bottom of Niagara Falls

      2. Or about to be swept over the falls?

    8. Mach’s folded Card

      1. An illusion that highlights how depth reversals, when applied to real 3D objects, can create striking and unexpected perceptual effects, particularly when motion is introduced

      2. Nothing else happens in flat images

      3. But a depth reversal for a real object has truly surprising effects when it moves or you move

      4. These reveal the elaborate construction underlying our perception of each object

      5. Demo:

        1. Note changes in surface material and shadow

        2. Note following motion as you move your head

    9. Action at a distance

      1. Because the depth is reversed, the shape must appear to move in the same direction you do, to account for the changes in your view of the object

      2. Phenomenon that occurs during depth reversals in real 3D objects

      3. The term captures how the perceived motion of the object adapts in a way that seems to defy normal physical constraints

    10. The depth reversal effect

      1. When the perceived depth of an object flips, the brain assumes a new 3D configuration for the object

      2. The brain then adjusts how the object's position and movement are interpreted

    11. Perceived motion of the object:

      1. If you move left or right, the reversed object appears to shift its orientation or “follow” you, even though it is stationary or moving independently

      2. This occurs because the brain recalculates how the object must behave to match the new, reversed depth perception.

      3. The perceived motion seems to act at a distance, beyond the object’s immediate physical influence

  5. Constraints and sudoku puzzles

    1. Cue:

      1. A local, informative part of an image

      2. Each cue gives rise to a set of possibilities

      3. Each possibility constrains the assignment of surfaces and edges around the cue

      4. The final interpretation of the image is the one that is most compatible with “all” the cues

      5. This is called constraint satisfaction and it is also the method for solving crossword puzzles and sudoku

      6. In these puzzles:

        1. The final answer must build from starting cues and satisfy all the rules

      7. How many cues are checked before finding the best interpretation?

        1. Usually only a few are enough to find a good solution

      8. Evidence is that only a few are checked within a local region

      9. Allows many different regions to be analyzed in parallel

      10. Allows rapid determination of depth over whole scene

      11. Cost:

        1. Inconsistencies across regions not noticed

Lecture 11 - 10/21/24


  1. One Minute Quiz

    1. Perceived size ∝ size on retina × perceived distance

      1. Q: Size constancy?

        1. The image of an object on the retina gets smaller as the object gets farther away

        2. Vision relies on many cues to judge distance or depth

        3. And given that distance, an estimate of size can be recovered

      2. Constraints and sudoku puzzles

        1. How many cues are checked before finding the best interpretation?

          1. Usually, only a few are enough to find a good solution

          2. Evidence is that only a few are checked within a local region

          3. Allows many different regions to be analyzed in parallel

          4. Allows rapid determination of depth over the whole scene

          5. Cost: inconsistencies across regions not noticed

  2. Binocular Vision

    1. Accommodation

      1. Tis the process that only works for the short distance

      2. Bring the target into your focus

      3. The lens is stretched or relaxed

      4. The visual system can sense how your lens is modulated through its muscles

        1. As the lens relaxes or stretches, the system can estimate distance

          1. It translates this into a distance for the object

      5. Problem: can only work for a short distance

      6. Emmetropia: 

        1. Perfect

      7. Myopia:

        1. Nearsightedness

        2. Focal point falls in front of the retina

        3. Can see near but not far

      8. Hyperopia:

        1. Far sightedness

        2. Can see far away, not near

        3. Focal point falls behind the retina; corrective lenses fix this by bringing the focal point onto the retina

    2. Convergence

      1. Only works at short distances

      2. Not much use beyond 1m

      3. Both eyes move inwards or outwards

      4. Angle of convergence (large)

        1. Close objects

      5. Angle of convergence (small)

        1. Far objects

      6. Primarily useful for near events
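
Why convergence only helps up close follows from the geometry: the angle between the two lines of sight flattens out rapidly with distance. A small sketch, assuming a 6.5 cm interocular distance and a target straight ahead:

```python
import math

def convergence_angle_deg(distance_m, ipd_m=0.065):
    """Angle between the two eyes' lines of sight when fixating a point
    straight ahead at the given distance (6.5 cm IPD is an assumption)."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

for d in (0.25, 1.0, 10.0):
    print(f"{d:5.2f} m -> {convergence_angle_deg(d):5.2f} deg")
# The angle is large at 25 cm but nearly flat beyond ~1 m, so small
# changes in angle no longer signal distance reliably.
```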

    3. Motion Parallax

      1. As we move, we create relative motion of the objects around us

      2. Distant objects move least, close objects move most

      3. Imagine you are driving: objects far away appear to move less, while closer objects move the most

        1. Closer objects move more quickly than objects far away

        2. If the object is moving faster, it is closer

        3. Objects closer than your point of fixation appear to move the opposite way

      4. This relative motion is seen as depth

      5. But we can see depth between two objects without moving our heads (parallax) and without moving our eyes back and forth between them (convergence and accommodation)
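
The “closer moves faster” rule is just geometry: for an object directly abeam of a moving observer, angular speed is observer speed divided by distance. A toy calculation with assumed values:

```python
def angular_speed(observer_speed, distance):
    """Angular velocity (rad/s) of a stationary object directly abeam of
    a moving observer: v / d, so closer objects sweep across the retina
    faster. The "directly abeam" geometry is a simplifying assumption."""
    return observer_speed / distance

# Driving at 20 m/s: a roadside sign 5 m away vs a hill 500 m away.
print(angular_speed(20, 5), angular_speed(20, 500))  # 4.0 0.04
# The 100x ratio in angular speed is the motion-parallax depth signal.
```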

    4. Stereoscopic vision

      1. Depth information from binocular disparity

      2. Found in many predatory and/or arboreal species

      3. Daily life:

        1. Important cue for estimating close distances

      4. Geometry of binocular disparity

        1. Corresponding retinal points

          1. Same position on each retina with respect to the point of fixation (fovea)

        2. Horopter

          1. Locus of all points in 3d space that fall on corresponding retinal points

            1. Vieth-Müller circle

            2. Zero disparity (single image)

              1. Objects on the horopter are seen as single images

          2. What about objects off the horopter?

            1. Fall on different retinal points

              1. Ie: finger test

      5. Binocular Disparity

        1. A tiny difference

        2. Perceived depth increases with disparity

          1. Crossed or uncrossed

        3. Fixating on a:

          1. A falls on corresponding points in the two retinas: zero disparity

          2. B falls in different points: has disparity

        4. Disparity = θr − θl

          1. As perceived depth increases, disparity increases
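
The disparity formula can be made concrete with simple vergence geometry; the interocular distance and the fixation/target depths below are assumed values:

```python
import math

def vergence_angle(distance_m, ipd_m=0.065):
    """Angle (radians) a point straight ahead subtends at the two eyes
    (6.5 cm interocular distance is an assumption)."""
    return 2 * math.atan((ipd_m / 2) / distance_m)

def disparity(fixation_m, target_m):
    """Binocular disparity = target's vergence angle minus fixation's.
    Zero on the horopter; magnitude grows with depth from fixation."""
    return vergence_angle(target_m) - vergence_angle(fixation_m)

# Fixating at 1 m, a target at 0.9 m (nearer) vs 1.1 m (farther):
print(disparity(1.0, 0.9))  # positive: crossed disparity (nearer)
print(disparity(1.0, 1.1))  # negative: uncrossed disparity (farther)
print(disparity(1.0, 1.0))  # 0.0: on the horopter
```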

    5. Stereograms

      1. Stereoscope

        1. What they see is not a 3D image

          1. It’s a 2D image with forced depth perception

            1. Overlaying two images so the brain thinks there is depth

          2. 3D glasses

      2. Free fusion w/o optical aids

        1. Stereo methods

          1. To view the 3D image you must converge your eyes at a different plane of depth from the picture

            1. This superimposes the picture on itself with an offset

        2. Free fusion

          1. Uncrossed (parallel) fusion:

            1. Focus behind the actual image

          2. Crossed fusion:

            1. Focus in front of actual image

      3. Autostereograms (“magic eye”)

        1. Single image with repeating 2d patterns

          1. Generally use uncrossed fusion

      4. Stereopsis: depth from binocular disparity

        1. How does it work?

          1. In depth perception, we can skip shape analysis and combine the left and right images directly

        2. Bela Julesz

          1. Random dot stereograms

          2. If depth can still be perceived without recognizable shapes, stereopsis cannot require prior shape analysis

          3. Otherwise-featureless random black and white pixels, essentially the same texture in each eye

          4. Some dots, however, are shifted laterally with respect to the others

          5. If fixating on background dots, they fall on corresponding points in both eyes

          6. But the dots of displaced square do not

          7. They have a disparity in the two eyes views

            1. Therefore, the square is seen in depth

        3. Puzzle of random dot stereogram

          1. How to match up images in left and right eyes?

            1. Correspondence problem

              1. Suggests stereopsis occurs at a coarser scale

              2. Kaufman & Pitblado (1965): letter stereograms

                1. Individual elements are not identical

                2. Yet stereoscopic depth is perceived

    6. Physiology of disparity

      1. ~10% of the population

      2. Amblyopia (lazy eye)

        1. Blurred non-foveal image suppressed by the visual system

        2. Often results from strabismus (misalignment of eyes)

      3. Binocular critical period

        1. Best treated before age 5

      4. Summary of physiology

        1. Binocular cells have receptive fields in each eye

        2. Range of different separations

    7. Stereoblindness

      1. If you see the number 1, you have depth perception

      2. Problems learning to read

      3. 3D space is compressed into a 2D image

      4. 3–4 months of age is when infants develop a sense of 3D space

      5. “Fixing My Gaze” book

    8. Binocular Rivalry

      1. When input to two eyes is completely different

      2. Competition between eyes

        1. Best guess about the world given inconsistent retinal images

      3. During binocular rivalry, all or part of one image appears totally suppressed from consciousness


Lecture 12 - 10/23/24

  1. Motion

    1. Directly measured not inferred

      1. Evidence for “motion detectors”

        1. Motion after effects

        2. Kinematograms, motion then shape

      2. The waterfall illusion

        1. First described by Aristotle, c. 350 BC

          1. Variant of the motion aftereffect (MAE)

      3. Motion Aftereffect

        1. Motion is experienced on the test even though the pattern is not seen to move anywhere

        2. Motion cannot be only an inference based on noticing change in location

        3. Motion Aftereffect (MAE)

          1. Reveals the properties of motion detectors in the visual cortex

      4. Kinematogram

        1. Perceive shape defined by motion

        2. Therefore motion cannot depend only on first seeing shape and then tracking it to infer motion

      5. Is motion directly measured, or inferred from the position change of an object?

        1. Motion Aftereffects

          1. Saw motion when there was no change in position of object

        2. Kinematogram

          1. Saw motion when there was no noticeable object

        3. Directly measured by what?

    2. Physiology of “motion detectors”

      1. Reichardt Detector

      2. Response of a directionally selective cell in striate cortex
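
A Reichardt detector correlates the signal at one location with a time-delayed signal from a neighboring location. A minimal discrete-time toy sketch (the one-sample delay and unit spacing are simplifying assumptions):

```python
# Minimal Reichardt-style motion detector over 1-D frames: multiply the
# current signal at position x with the DELAYED signal from x-1 (rightward
# subunit), mirror for leftward, and take the opponent difference.

def reichardt(frames):
    """Sum of opponent correlations over a sequence of 1-D frames.
    Positive output = net rightward motion, negative = leftward."""
    out = 0
    for prev, cur in zip(frames, frames[1:]):
        for x in range(1, len(cur)):
            right = prev[x - 1] * cur[x]  # delayed left neighbor × current
            left = prev[x] * cur[x - 1]   # delayed right neighbor × current
            out += right - left
    return out

# A bright dot stepping rightward, then the same dot stepping leftward:
rightward = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
leftward = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
print(reichardt(rightward), reichardt(leftward))  # 2 -2
```

Note the detector responds identically to a smooth drift and to discrete jumps of the right size and timing, which is why apparent motion (movies, alternating dots) looks like real motion.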

      3. Consequences of motion detectors

        1. Can’t distinguish continuous motion from a discrete jump over time and space within a short range

        2. “Apparent motion”

          1. Perceived smooth motion from rapidly alternating stationary targets

          2. Underlies many “motion” percepts

          3. In movies (fps)

          4. Correspondence Problem

            1. Which way are the dot pairs moving?

          5. Areas in cortex specialized for motion: MT (V5), MST

          6. MT (V5)

            1. All cells respond to motion

            2. Many respond to “global” motion, independently of local directions

            3. Large receptive fields

          7. MST (V5a)

            1. Most respond to motion

            2. Very large receptive fields; can extend into both hemifields

            3. Selective for global patterns like expansion, shift, rotation, contraction

    3. What is motion good for?

      1. Recover the third dimension

        1. Aka structure from motion (kinetic depth effect)

        2. Parallax: motion of our head or body reveals depth because closer objects move faster

      2. Summoning attention

        1. Motion captures your attention

      3. Form from motion

        1. Extract form from motion

        2. As soon as the shapes start moving, you can separate the shapes

      4. Break camouflage, segment objects from background

    4. Motion blindness

      1. Akinetopsia

      2. Motion Blind Patient LM - Ellen

      3. Damage in MT Area

      4. Sees motion as still images

      5. Can’t tell when to stop filling a cup

      6. Can't read facial expressions

      7. Moving objects don’t attract her attention

      8. Motion Blindness in normals

        1. Strobe environment demo

          1. Trouble catching balls

          2. Holding posture

          3. Judging relative location of moving and steady objects

    5. Aperture problem

      1. The direction of motion of a straight line is ambiguous

      2. The visible displacement of the line can arise from an infinite set of possible physical motions

      3. What does this have to do with motion detectors?

        1. Receptive fields act as small windows hiding the end points

      4. How to resolve this ambiguity?

        1. One solution is to rely on local 2D features that don’t require 3D interpretation

          1. Line endings and corners

          2. End stopped cells respond to line endings and corners

      5. Motion measured within small local receptive fields is often different from actual (global) motion of object

        1. But some parts of object give unambiguous cues to direction

      6. Barber Pole Illusion

      7. Whose terminator is it?

        1. Does the line ending belong to the line?

      8. End stop to the rescue

        1. Terminator motion disambiguates line or edge motion

        2. End-stopped V1 neurons respond selectively to the endpoints of contours

        3. The direction of the end of the line is more important than the orientation of the line

        4. Effectiveness of terminator depends on who it belongs to

  2. Summary

    1. Directly measured not shape based

    2. Motion detector subunits offset in space and time

    3. Motion useful for seeing shape, depth, drawing attention, breaking camouflage

    4. Damage to MT causes motion blindness

    5. Aperture problem, local motion differs from global motion


Lecture 13 - 10/28/24


One Minute Quiz:

  1. Stereoscopic Vision

    1. Our ability to see depth comes from binocular disparity

  2. Physiological Mechanism of Stereopsis

    1. Binocular cells have receptive fields in each eye

    2. These come with a range of different separations between the receptive field centers (relative to the fovea of each eye)

    3. These cells are selective for disparity

    4. The separation defines the preferred disparity for that cell


  1. Color I

    1. Perceiving Color

      1. Why is it important?

        1. Ex: Finding Fruit, in Birds – finding mates

      2. Photoreceptor Sensitivities

        1. Determined 

        2. X axis is wavelength

          1. Blue – High energies

            1. “S” cones

            2. Short wavelength spectrums

            3. Mostly blue cones

            4. Respond to the blue cool spectrum

            5. Humans:

              1. Smallest number of cones

          2. Red – Low energies

            1. “L” cones

            2. Long wavelength cones

            3. Mostly Red cones

            4. Responds to the warm red spectrum

          3. Green - Mid Wave Length

            1. “M” cones

            2. Mid wave length

            3. Centered around greenish-yellow cones

        3. Y axis is relative sensitivity

      3. Why 3 cones?

        1. Principle of Univariance

          1. A single receptor type gives the same response to many different wavelengths; its output confounds wavelength and intensity

        2. If we had 1 cone, it would be hard to distinguish colors – we would not have the sensation of color and would mostly see light and dark

          1. By combining the 3 different type of receptors we can perceive the color

        3. The cone types have different sensitivities and distributions

          1. Not all colors look the same

      4. Limitations

        1. Perceived color varies with ratio of responses of the three cones

        2. There is an infinite set of wavelength distributions, many combinations

          1. But, we only have 3 cones, or 3 values, one for each cone

        3. Non-invertible: can’t recover the wavelength distribution

          1. It is impossible to recover the original wavelength spectrum

        4. Metamer: different spectra, same color

          1. Natural Light vs LED

          2. Same sensation of the color, but different spectra

          3. “This is a unique term”

        5. Invertible Code:

          1. If a signal, say the price of a car in dollars, is converted into a new code, say the price in euros, the new code is invertible if we can recover the original value (dollars) from the new value (euros)

        6. Metamer:

          1. Two lights that have same perceived color but different spectra
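
Metamers follow directly from collapsing a whole spectrum onto just three cone responses. A toy demonstration; the sensitivity curves and spectra are invented for illustration, not real cone data:

```python
# Toy metamer demo: a cone's output is a single number (univariance),
# the dot product of the light's spectrum with the cone's sensitivity.
# These sensitivity curves over 4 wavelength bands are made up.

S_CONE = [0.8, 0.6, 0.3, 0.1]
M_CONE = [0.3, 0.8, 0.4, 0.9]
L_CONE = [0.1, 0.4, 0.5, 0.8]

def cone_responses(spectrum):
    dot = lambda sens: sum(s * p for s, p in zip(sens, spectrum))
    return [dot(S_CONE), dot(M_CONE), dot(L_CONE)]

# Two physically different lights (broadband vs two narrow bands)...
flat_light = [1.0, 1.0, 1.0, 1.0]
two_band_light = [2.0, 0.0, 0.0, 2.0]

# ...that produce identical cone responses: a metamer pair.
a = cone_responses(flat_light)
b = cone_responses(two_band_light)
print(all(abs(x - y) < 1e-9 for x, y in zip(a, b)))  # True
```

Because the 4-band spectrum is compressed to 3 numbers, the code is non-invertible: the original spectrum cannot be recovered from the cone responses.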

    2. Color Coding: trichromacy vs. opponency

      1. Trichromatic Theory

        1. 3 primaries are enough

        2. Population coding

        3. Three photoreceptors primaries

          1. L,M, and S Cones

        4. Thomas Young’s theory (1802)

          1. “As it is almost impossible to conceive each sensitive point of the retina to contain an infinite number of particles, each capable of vibrating in perfect unison with every possible undulation, it becomes necessary to suppose the number limited, for instance, to the three principal colours, red, green [yellow], and blue”

        5. Consequences of Trichromacy

          1. Infinite # of spectra can activate cones similarly

          2. Metamers: different spectra that are perceptually indistinguishable

        6. Evidence for Trichromacy

          1. Color Matching Experiment

            1. Two primaries are not enough, 4 are too many

          2. Direct recording from photoreceptors in retina

            1. Single receptor drawn into microelectrode

            2. Measure stimulation by light beam

      2. Opponent process theory

        1. 4 primaries + B/W

        2. Opponent organization

        3. Opponent coding from retina to brain

          1. Retinal ganglion cells and LGN

        4. Hering’s Opponent process theory (1892)

          1. Retinal ganglion cells (& LGN)

          2. There are a group of cells that have the opponent component characteristics

          3. If they are more excited by greenish light and less by reddish, they will respond more when there is more green, and vice versa

          4. Chromatically opponent but not spatially

          5. Non-opponent: spatially opponent but not chromatically

          6. The same cells carry color and luminance information; the spatial organization of color differs from that of luminance

        5. Evidence for color opponency

          1. Unique Hues

            1. Certain color combinations don’t exist

              1. We have reddish-orange, blue-green

              2. But no red-green or yellowish-blue

          2. Hue Cancellation

            1. Adjust blue light to cancel out yellow

              1. Blue-yellow seen as white

              2. No blue-yellow mix

            2. Adjust red light to cancel out green

              1. Red-green combination seen as yellow

              2. No reddish-green mix

          3. Negative afterimages

            1. Boundaries create stronger afterimages

            2. Seeing opposite colors

            3. The cell is tired

          4. Physiological evidence: Opponency

        6. Two-stage Model of color coding

          1. Three photoreceptors primaries

            1. L,M, and S cones

          2. Opponent coding retina to brain

            1. Retinal ganglion cells & LGN

          3. How do we build opponency

            1. Combinations of L,M, and S

          4. Building red- Green opponency

            1. Interplay of excitation and inhibition

            2. L-M or M-L

          5. Building blue-yellow opponency

            1. Interplay of excitation and inhibition

            2. S-(M+L) or (L+M)-S
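
The two opponent combinations above can be written out directly. A minimal sketch of the two-stage model; the unit weights are an assumed simplification:

```python
# Stage 1: cone responses (L, M, S). Stage 2: recombine into opponent
# channels. Unit weights are a simplifying assumption for illustration.

def opponent_channels(L, M, S):
    return {
        "red_green": L - M,          # + = reddish, - = greenish
        "blue_yellow": S - (L + M),  # + = bluish,  - = yellowish
        "luminance": L + M,          # non-opponent light/dark signal
    }

# A long-wavelength ("reddish") light excites L most:
print(opponent_channels(L=0.8, M=0.3, S=0.1))
# red_green comes out positive (reddish), blue_yellow negative (yellowish).
```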


    3. Color Mixing: additive vs subtractive

      1. Huge range of possible wavelength combinations

      2. How can we get a particular color?

        1. Color mixing

      3. Additive for lights

        1. Tells us about response of visual system

      4. Subtractive for paints

        1. Tells us about the physics of the stimulus

      5. Subtractive color mixing (CMYK)

        1. Start with white light

        2. Combination of pigments

        3. Light subtracted from pigments

          1. Cyan + magenta = blue

        4. Printing, optical filters

      6. Subtractive Mixing

        1. Blue absorbs red, reflects some green and lots of blue

        2. Yellow absorbs blue, reflects some green and lots of yellow

        3. Mix the two and only green survives

        4. Another approach to color mixing

          1. Newton’s prism experiments

          2. Color as a combination of lights, not pigments

      7. Additive color mixing (RGB)

        1. Start without light

        2. Combination of lights

        3. Combination color depends on cone properties

        4. 3 primaries sufficient

        5. RGB add together to produce every possible color

          1. Red + green = yellow

        6. TV/Computer monitors

        7. Using Subtractive color, additively

          1. Pointillists, like color TV phosphors, used additive color: their small spots of paint do not overlap

          2. In contrast, typical paintings use subtractive color
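The two mixing rules above can be contrasted with a toy three-band spectrum. All numbers here are illustrative, not measured reflectances: lights add energy band by band, while pigments multiply reflectances, so only wavelengths both pigments reflect survive.

```python
# Toy 3-band spectrum [red, green, blue]; values are 0..1 intensities.

def additive_mix(light_a, light_b):
    """Lights add: total energy in each band is the sum (clipped to 1)."""
    return [min(1.0, a + b) for a, b in zip(light_a, light_b)]

def subtractive_mix(pigment_a, pigment_b):
    """Pigments filter white light: each band keeps the product of the
    two reflectances, so only bands both pigments reflect survive."""
    return [a * b for a, b in zip(pigment_a, pigment_b)]

red_light   = [1.0, 0.0, 0.0]
green_light = [0.0, 1.0, 0.0]
print(additive_mix(red_light, green_light))      # [1.0, 1.0, 0.0] = yellow

blue_paint   = [0.0, 0.4, 0.9]   # reflects mostly blue, some green
yellow_paint = [0.9, 0.4, 0.0]   # reflects red+green (yellow), no blue
print(subtractive_mix(blue_paint, yellow_paint)) # only the green band survives
```

This is why red + green lights give yellow on a monitor, while blue + yellow paints give green on paper.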

    4. Summary

      1. Perceiving color: 3 cones, 3 dimensions, metamer

      2. Coding color

        1. Trichromacy + opponency

        2. Opponent pathways

        3. Non-opponent

      3. Color Mixing:

        1. Additive, 3 dimensions

        2. Subtractive, complex

      4. Bring a favorite colored item to next class


Lecture 14 - 10/30/24


  1. One minute quiz

    1. Opponent Process Theory

      1. Evidence

        1. Unique hues

        2. Hue cancellation

        3. Negative afterimages

        4. Physiological evidence LGN

      2. 4 primaries + B/W

      3. Opponent organization


Color Lecture 1

  1. Color constancy & contexts

    1. Surface properties and Illumination

      1. Light from surface = illumination x reflectance

    2. Color constancy

      1. Even if illumination changes, we still perceive the same color

      2. Perceived color largely unaffected by illumination

    3. Color constancy & illumination

      1. One more example of a more general problem

        1. Light from surface = illumination x reflectance

        2. Is surface color due to light or paint?
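The illumination × reflectance relation can be sketched to show why constancy is possible at all: the band-by-band ratio between two surfaces does not depend on the illuminant. A minimal sketch with hypothetical three-band values:

```python
def light_from_surface(illumination, reflectance):
    # Per band: light reaching the eye = illumination x reflectance
    return [i * r for i, r in zip(illumination, reflectance)]

# Hypothetical 3-band values [R, G, B]
daylight = [1.0, 1.0, 1.0]
tungsten = [1.0, 0.8, 0.5]   # reddish illuminant
paper    = [0.9, 0.9, 0.9]
grass    = [0.2, 0.8, 0.2]

for illum in (daylight, tungsten):
    a = light_from_surface(illum, paper)
    b = light_from_surface(illum, grass)
    # Band-by-band ratio between the two surfaces is the same
    # under both illuminants -> a cue the visual system can use
    print([round(x / y, 2) for x, y in zip(a, b)])
```

Comparing surfaces (rather than reading absolute light levels) is one way the visual system could factor out the illuminant.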

    4. Assumptions about shadows

      1. Shadow darkens surfaces without changing colors

      2. Luminance change, but not hue change

      3. Shadows have fuzzy edges

      4. Signals “light”, not “paint”

    5. Color Contrasts

      1. Surrounding context may affect how you perceive color - it does matter

    6. Color assimilation

      1. The color changes based on how it is interlaced

  2. Color blindness

    1. Neural basis of color deficiency

      1. 3 types of cone receptors

      2. Maximal sensitivity to different wavelengths of light

        1. Short (S) wavelengths - Blue

        2. Medium (M) wavelengths - Green

        3. Long (L) wavelengths - Red

          1. Lose R or G cones (1/20 males, 1/400 females)

          2. Lose B cones (very rare)

          3. Lose two cones (cone monochromatic, rare)

          4. Lose all cones (rod monochromatic, rare)

      3. Abnormal or missing cones

        1. Most often the M or L type (“red-green” color deficiency)

        2. Lose one kind of cone and your color vision is now only two-dimensional

        3. What does it look like?

    2. Testing for color vision deficiency

      1. Ishihara color test: pseudoisochromatic plate

        1. What's the number?

        2. Color is the only cue

        3. Specific profiles for different color deficiencies

        4. No other visual cues to help distinguish numbers

    3. Evolution of color receptors

      1. Chromosome 3 for rods

      2. Chromosome 7 for S (blue) cones

      3. X chromosome for L (red) and M (green) cones

      4. Most recent mutation

      5. Unstable

    4. Distribution of color blindness

      1. 8.0% of European males

        1. Lower percentages elsewhere

      2. 0.0% of Old World primates

    5. Genetics of red-green color blindness

      1. If mother is a carrier

        1. No daughters are color blind

        2. Half of daughters are carriers

        3. Half of sons are color blind

      2. If Father is color blind and mother is carrier

        1. Half of daughters are color blind

      3. If mother is color blind

        1. All sons are color blind
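The patterns above follow from X-linked recessive inheritance and can be checked by enumerating the equally likely offspring. A sketch, where "Xc" marks an X chromosome carrying the color-blind allele:

```python
from itertools import product

def offspring(mother_alleles, father_alleles):
    """Enumerate equally likely children for an X-linked recessive trait.
    Returns (sex, alleles, color_blind) for each combination."""
    kids = []
    for m, f in product(mother_alleles, father_alleles):
        sex = "son" if f == "Y" else "daughter"
        # Color blind only if every X chromosome carries the allele
        color_blind = all(a == "Xc" for a in (m, f) if a != "Y")
        kids.append((sex, (m, f), color_blind))
    return kids

# Carrier mother x unaffected father:
kids = offspring(["Xc", "X"], ["X", "Y"])
sons = [k for k in kids if k[0] == "son"]
daughters = [k for k in kids if k[0] == "daughter"]
print(sum(k[2] for k in sons), "of", len(sons), "sons color blind")        # 1 of 2
print(sum(k[2] for k in daughters), "of", len(daughters), "daughters CB")  # 0 of 2
```

Swapping in a color-blind father (`["Xc", "Y"]`) or a color-blind mother (`["Xc", "Xc"]`) reproduces the other two cases in the outline.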

    6. Island of colorblindness

      1. In the pacific atoll of pingelap

        1. Achromatopsia is caused by a genetic mutation

        2. Complete or partial absence of cones

        3. Vision that is dominated by rods instead of cones

        4. Poor visual acuity and sunlight is very painful

  3. Cortical achromatopsia

    1. Damage to area V8 (very rare)

    2. Cerebral achromatopsia

    3. Despite normal cone function

    4. Lose subjective experience of color

    5. World is like a black and white movie

    6. Color lost but not shape or motion based on color

  4. Demo - achromatopsia

    1. How?

      1. Using a low-pressure sodium lamp

      2. Emits only at ~589 nm, so light reflected from any surface contains only that one wavelength

      3. The ratio of cone responses will be identical for every surface and they will all look the same color

      4. They will vary only in total response, that is, in luminance but not color
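The demo's logic can be sketched with made-up cone sensitivities at the lamp's single emission line: under one wavelength, every surface scales all three cones by the same factor, so cone-response ratios (hue) are identical everywhere and only luminance varies.

```python
# Hypothetical cone sensitivities at a single wavelength
# (e.g. a low-pressure sodium lamp's line); made-up numbers.
sens = {"L": 0.75, "M": 0.40, "S": 0.01}

def cone_responses(reflectance):
    """Under monochromatic light, a surface scales ALL cones equally."""
    return {cone: reflectance * s for cone, s in sens.items()}

bright = cone_responses(0.9)   # light surface
dark   = cone_responses(0.2)   # dark surface

# The L:M ratio is identical for every surface -> same hue everywhere;
# surfaces differ only in luminance, so the scene looks monochrome.
print(bright["L"] / bright["M"], dark["L"] / dark["M"])
```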

  5. Perceiving the world through color alone

    1. Lose spatial acuity, motion, and depth more than pattern

    2. Motion slows down


Lecture 15 - 11/4/24

  1. Exam 2 Review Location and Time TBD


Pop quiz:

  1. The way a television generates the color images is based on:

    1. Additive color mixture

    2. Color contrast

    3. Simultaneous color contrast

    4. Subtractive color mixture


Ecological Perception and Action

  1. Overview

    1. Optic Flow

      1. Gibson et al. claim that optical flow:

        1. Specifies exact direction of travel (heading)

        2. Specifies distance of surfaces

        3. Supplies information for postural control

        4. Indicates exact time of contact

      2. Focus of expansion (FOE):

        1. Central point from which motion seems to emanate

        2. Only stationary point in the optic flow field

        3. SPECIFIES EXACT DIRECTION OF TRAVEL

        4. Gibson’s analysis of flying during World War II

          1. How do pilots make successful high-speed landings?

            1. They use optic flow

            2. You don't have to remember anything or hold a cognitive representation in memory; all the information is outside, and the optic flow tells you how to behave/operate

    2. Optic Flow as a motion cue

      1. Lack of optic flow signals that you are stationary

      2. Optic flow can trigger perception of self motion

        1. Vection (self-motion illusions)

        2. If you are stationary outside, and optic flow is moving clockwise, you will feel like you are moving counter clockwise

      3. Vection Illusion

        1. Optic flow in periphery overrides vestibular input

          1. Dominance of vision over vestibular information

          2. Perception of self motion

    3. Visual motion control of posture

      1. “Swinging room” experiment

        1. Stationary floor, but moveable walls and ceiling

        2. Lee & Aronson (1974)

        3. The floor is not moving, but they are moving the walls and ceiling

          1. If you were not using optic flow and relied only on vestibular cues, you would know that you are not moving

          2. For infants

            1. Visual information is more important

          3. Adults

            1. Less sensitive than infants

            2. When standing on a narrow beam

              1. When the room moves slightly, they will lose their balance

      2. Moving toward vs moving away

        1. Even more evident in children

        2. Quiz:

          1. Answer: A

            1. Why:

              1. Optic Flow from forward motion of room

              2. Interpreted as backward sway

              3. Compensate by swaying forward

              4. You wanted to compensate your moving backwards, so you stabilize your posture by moving forward

    4. Time to contact (Time to collision)

      1. How do you know when something is approaching?

      2. Scientists propose that we know when objects are approaching the observer via tau

        1. Time to collision (tau) = S/V (distance over closing velocity)

        2. Time to collision can be estimated without knowing distance or velocity separately

        3. Tau = 100 / (percent change of visual angle per second)

          1. Can measure visual angle and its rate of change

          2. But size, distance, and speed of the object are unknown

          3. Compute time to collision (tau)

          4. Only need to know the change of the visual angle per second

      3. Expanding patterns

        1. Larger arrows mean faster expansion

          1. Sec = unit of time

      4. Quiz:

        1. Answer:

          1. 20 months

          2. Why:

            1. 100 / ( % change in visual angle / months)
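The quiz arithmetic is just the tau formula with the rate expressed per month. A sketch, assuming the visual angle grew 5% per month, which is the expansion rate implied by the 20-month answer:

```python
def tau(percent_change_per_unit_time):
    """Time to contact from the relative expansion rate alone:
    tau = theta / (d theta / dt) = 100 / (% change per unit time).
    No distance, size, or speed information is needed."""
    return 100.0 / percent_change_per_unit_time

# 5 %/month expansion rate (the rate implied by the quiz answer):
print(tau(5))   # 20.0 -> 20 months to contact
```

The same function works for any time unit: a target expanding 10% per second gives tau = 10 seconds.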

      5. Possible uses for TAU

        1. Heading for a soccer ball

        2. Streamlining during a dive

        3. Quiz:

          1. A gannet plunges directly down toward the ocean, hoping to catch a fish. The gannet should ensure that the fish is lined up with

            1. The gannet’s line of sight

            2. The optic array

            3. The focus of expansion

            4. The direction of the ocean waves

            5. Answer:

              1. C – the Focus of Expansion

              2. Why:

                1. Pilots use the same cue to figure out where they will land

                2. The focus of expansion does not move or change, it is the point that you are moving towards

      6. Maintaining collision path

        1. How do you catch a fly ball?

          1. Run so that ball looks like its moving in a straight line

          2. “Linear optical tracking”

            1. Keep at fixed line of sight

            2. Symmetrical looming

        2. Problem with catching a ball

          1. About 100 msec between retina and response of higher visual areas

          2. Ball is moving at 100 mph

          3. In the time the brain can register the balls location, it has moved 15 ft

          4. For moving objects, visual system acts as if it extrapolates where it will be

          5. Predicts the present

          6. Only for moving  objects

          7. Doesn’t work for flashed objects
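The "15 ft" figure above follows from a quick unit conversion: at 100 mph, a ball covers roughly 15 feet in the ~100 ms of neural latency.

```python
# How far does a 100 mph ball travel during ~100 ms of neural latency?
MPH_TO_FT_PER_S = 5280 / 3600        # feet per second in one mph

speed_ft_s  = 100 * MPH_TO_FT_PER_S  # ~146.7 ft/s
latency_s   = 0.100                  # retina -> higher visual areas
distance_ft = speed_ft_s * latency_s
print(round(distance_ft, 1))         # ~14.7 ft, i.e. roughly 15 ft
```

This is the gap the visual system closes by extrapolating moving objects forward, as the flash-lag effect below demonstrates.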

    5. Flash lag effect

      1. Moving objects seen ahead of flashed bar

      2. Moving object extrapolated forward to expected location

      3. Visual system seems to predict where moving object will be

        1. “Predicting the present”

      4. Only for moving objects

        1. Not for flashed images

        2. “Flash-lag” effect

      5. https://upload.wikimedia.org/wikipedia/commons/6/60/Flash_lag.gif

    6. Blindsight

      1. Visually guided actions can occur without conscious vision

      2. Blindsight after lesion of the striate cortex

      3. Patient reports no awareness of target

      4. But when asked to point or grasp target, does so

        1. They cannot see the object, they can point or grasp the target

          1. Cant tell you what it is, or why they moved

      5. Separate visual input to the where/action pathway bypassing LGN and striate cortex

        1. A working hypothesis

        2. We see what we see and we know what we are seeing

      6. No input to striate cortex and awareness

    7. Locomotion and visual development

      1. Role of locomotion in human depth perception?

      2. Gibson & Walk (1960): Visual cliff

        1. Glass gives illusion of depth

        2. Measure willingness to go to “deep” side

      3. The Visual Cliff

        1. Avoidance of visual cliff

        2. Tied to locomotion

          1. Early in locomotion: no avoidance

          2. More crawling experience: refusal to “deep” side

        3. Depth perception emerges from interaction with world

      4. Testing a causal role of locomotion

        1. Held & Hein (1963): Kitten carousel

          1. “Active” kitten pulled “passive” kitten around enclosure

          2. Same visual stimulation, but only one moved actively

        2. Active kitten showed normal visual development

        3. Passive kitten never developed depth perception

    8. Summary:

      1. Optic Flow

        1. Optic flow offers rich source of information

        2. Optic flow as a motion cue

        3. Visual motion control of posture

        4. Impressions of self motion

        5. Can be used to determine time to contact

      2. Flash Lag effect

        1. Moving object extrapolated forward to expected location?

      3. Blindsight

        1. Visually guided actions can occur without conscious vision

        2. There is another vision pathway

      4. Role of locomotion in human depth perception


Lecture 16 - 11/6/24


Development & Social Perception 


Two fundamental aspects of emotional expressions:

  1. Producing facial expressions

    1. Emerge Early in a predictable Sequence

      1. Basic reactions

        1. As early as birth

      2. Simple emotions

        1. 2-3 months

      3. Social emotions

        1. 1 year

      4. Complex emotions

        1. 3 years

    2. Many expressions appear before social learning is possible

    3. Nature vs Nurture: Evidence from blind individuals

      1. Blind individuals show same victory/pride expressions as sighted individuals

      2. Many expressions are innate, not learned through observation

    4. Cross-Species Expressions

      1. Emotional expressions across primates suggest evolutionary origins of expressions

  1. Perceiving Facial Expressions

    1. Early Development of emotion recognition

      1. Social smiling

        1. 1-3 months

      2. Respond to other’s expressions

        1. 2-4 months

    2. Universal Emotions: Paul Ekman’s Research

      1. Emotions are found across all cultures, including isolated societies

    3. Cultural Universality: The Fore Study

      1. The Fore tribe in Papua New Guinea is an isolated society with minimal outside contact

      2. Could accurately identify basic emotions, suggesting that emotional expressions are not culturally learned; instead they’re universal

  2. Face Orientation and Emotional Recognition

    1. We are better at processing information in upright faces

  3. Summary:

    1. Production of expressions:

      1. Emerges very early in development

      2. Present in congenitally blind individuals

      3. Shared across species

    2. Perception of Expressions

      1. Develops in early infancy

      2. Universal across cultures

      3. Specialized processing for upright faces


The Physiology of Emotion Perception

  1. Speed of emotional processing

    1. The brain processes emotions remarkably fast

    2. Emotional valence (positive/negative) detected in just 15ms

      1. Faster than conscious awareness

      2. Equivalent to a single video frame

    3. Why fast?

      1. Threat detection is evolutionarily advantageous

  2. Neural Pathway: A Dual Route

    1. Standard Visual Route

      1. Through visual cortex 

      2. Detailed processing

      3. Conscious awareness

    2. “Quick Route”

      1. Direct pathway to amygdala

      2. Bypasses detailed processing

      3. Enables rapid emotional responses

      4. Evolutionary older pathway

  3. The amygdala

    1. Triggers the body’s response to perceived threats and plays a key role in processing emotions, particularly fear

  4. Research spotlight: Vuilleumier et al (2003)

    1. Strong amygdala response to brief, blurry fearful faces (low SF)

    2. Minimal activation in face recognition areas (FFA)

    3. Supports existence of “quick route”

  5. Asymmetry in Emotional Processing

    1. Fixate the nose of each face in turn

    2. Which appears happier?

    3. The left side of faces carries more emotional weight

      1. Why?

        1. The right hemisphere receives left-visual-field input, and the right amygdala is more sensitive to emotional content

  6. Quiz:

    1. Which is not evidence of a quick route for emotional processing

      1. We can process emotional content even with unclear visual stimuli

      2. The amygdala responds strongly to brief, blurry fearful faces that barely activate the fusiform face area (FFA)

      3. Recognition of emotional expressions is strongly dependent on face orientation

      4. Emotional responses can occur before conscious awareness of the stimulus

  7. Summary:

    1. Ultra-rapid emotion processing (15ms)

    2. Dual route system

      1. Standard visual pathway

      2. Direct amygdala pathway

      3. Right hemisphere advantage


Mirror Neurons: Perceiving Other Minds

  1. Social perception beyond visual processing

    1. We don’t just see actions, we understand intentions

  2. Early development of social understanding

    1. Infants prefer helpers over hinderers, suggesting an innate social perception system

  3. What are Mirror Neurons?

    1. Neurons that fire when:

      1. We perform an action

      2. We see others perform an action

  4. Mirror neuron system in action

    1. Mirroring creates an internal simulation of other’s actions

    2. Seeing someone smile → activates our own smile muscles

    3. Watching someone reach → activates our reaching neurons

  5. The simulation theory

    1. How mirror neurons help us understand others:

      1. Observe action/emotion

      2. Activate corresponding neural circuits

      3. Simulate experience internally

      4. Understand other’s intentions/feelings

    2. Example:

      1. We don’t just see a curved mouth, we see a happy person


Gestures, Gaze, and Congruence in Social Communication

  1. Non-Verbal Communication

    1. Three Key Components

      1. Gaze Patterns

      2. Gestures

      3. Behavioral Congruence

    2. Can be:

      1. Conscious or unconscious

      2. Cultural or universal

      3. Learned or innate

    3. Conscious, meaningful gestures are often culture specific

      1. Ie: Indian Nod

    4. Unconscious gestures are universal across cultures

      1. Defensive and victory poses

      2. Present in blind individuals, so occurs without learning or sight

  2. Basic Properties of Gaze

    1. Head and gaze direction indicates focus of attention

    2. Sclera (eye whites) help with gaze detection

    3. Crucial for social interaction

  3. Shared Attention

    1. The ability to follow and share attention with others by tracking their gaze

    2. Developmental milestone

      1. Present as early as 1 month

      2. Critical for social learning and cognitive development

    3. Research evidence

      1. Infants preferentially look in the direction others are gazing

    4. Experiment:

      1. In the following experiment:

        1. Infants looked at the display of the face and then saw two rectangles

        2. Experimenters observed how often the infants looked at left or right rectangle

        3. Even at 1 month, infants showed shared attention

        4. They looked first and longer where the face had just looked

  4. Eye contact in Social interaction

    1. Extraordinarily sensitive to the direction of people’s gazes

    2. Eye contact initiates social engagement

    3. Reveals interests, scrutiny, dominance, and deception

  5. Congruence in Social Interaction

    1. Reciprocal behaviors in social settings

      1. Yawning

      2. Laughing

      3. Unconscious reproduction of postures and gestures

  6. The rocking Chair Study

    1. Unintentional synchronization

      1. People sitting in rocking chairs naturally match each other’s movements without trying

    2. Visual attention matters

      1. The more directly people can see each other, the stronger the synchronization becomes


Gait and Gender

  1. Takeaway:

    1. Male and female gaits differ significantly

    2. We can judge gender based on gait alone


Role of culture in perception

  1. Pictorial depth cue

    1. Hudson (1960) study: Schooled Children vs unschooled tribal children

    2. Is the spear closer to the elephant or antelope?

    3. Tribal children had difficulty interpreting depth cues in 2D pictures

  2. Carpentered World Hypothesis

    1. Our environment shapes our perception

    2. Modern environments have many straight lines and edges

    3. African Zulus were less affected by the Muller-Lyer illusion

    4. Suggest reduced susceptibility due to less exposure to “carpentered” environments

  3. Size Constancy Study

    1. Size Constancy

      1. Size of object is constant under different conditions

      2. Pygmy living in dense forest had difficulty perceiving size/distance of objects in open spaces

      3. Example:

        1. A pygmy youth mistook distant buffalo for insects


Western vs Eastern Cultures

  1. Analytic thought (Western)

    1. Origin: Greek Philosophy

    2. Determinants:

      1. Relatively mild climates

      2. Economy does not require strong social ties

    3. World view:

      1. Things exist by themselves and can be defined by their attributes (context independent, object-oriented)

      2. These patterns are observed in Euro-American Societies

  2. Holistic Thought (Eastern)

    1. Origin: East Asian Philosophy (Taoism, Buddhism, East Asian Animism)

    2. Determinants

      1. Frequently changing climate

      2. Economy requires strong social ties

    3. Worldview:

      1. Things are interrelated

      2. Various factors are involved in an event (context dependent, context sensitive)

      3. These patterns are observed in Chinese, Japanese, and Korean Cultures

  3. Self Concepts

    1. Independent View

      1. In North America, people tend to conceptualize “the self” as an entity detached from others and its context

    2. Interdependent View

      1. In East Asia, people tend to conceptualize “the self” as a relational and contextual existence

  4. Attention to Object vs Field

    1. Western Participants

      1. Focused on focal objects

      2. Better object recognition when background changes

    2. Eastern Participants

      1. Focused on background/context

      2. Worse object recognition when background changes

  5. Change blindness across cultures

    1. Results:

      1. Japanese participants detected more changes in contextual/background elements, while Americans detected more changes in focal objects

  6. Point of View (Picture Taking Task)

    1. Picture produced by American Participants

    2. Picture Produced by Japanese Participants

    3. Results

      1. Americans take photos with larger face-to-frame ratios (close-ups), while Japanese take photos with more background and smaller face-to-frame ratios


Own-Race Bias

  1. Face Recognition Bias

    1. Own Race Bias

      1. People have more difficulty differentiating and remembering faces of another race compared to faces of their own race

    2. Explained by Contact Hypothesis

      1. Frequency of exposure to different racial groups affects recognition ability

    3. Sometimes called the “cross-race effect” or “other-race effect”


Summary:

  1. Facial Expressions

    1. Developmental sequence

    2. Universal & Innate Emotions

  2. Emotional Physiology

    1. Rapid Processing

    2. Dual Route System

    3. Right Hemisphere Bias

  3. Mirror Neurons

    1. Internal Simulation

  4. Gesture, Gaze, & Congruence

  5. Gait & Gender

  6. Cultural Perception

    1. Depth Perception variations

    2. Carpentered World Hypothesis

    3. Size constancy

  7. East vs West

    1. Object vs context focus

    2. Differing viewing styles

  8. Own Race Bias

    1. Better own-race recognition




Lecture 17 - 11/11/24


  1. Selection in Space

    1. Cueing Experiment

      1. Posner Cueing Experiment

        1. Peripheral Cue:

          1. Validly cued trial because the target X was on the same side as the cue

          2. Exogenous cueing / involuntary Attention

        2. Symbolic cue:

          1. Invalidly cued trial because the target X was on the opposite side

            1. The red cue captured our attention and made us focus on the red square side, but when the X was presented on the other side, we had to shift our focus to the other side

            2. Endogenous Cueing / Voluntary Attention


    2. Both cue types can facilitate performance for stimuli presented at the cued location

      1. Ie: faster reaction time for validly cued targets compared with invalidly cued targets
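The cueing benefit is usually summarized as a "validity effect": mean RT on invalid trials minus mean RT on valid trials. A sketch with hypothetical reaction times (the numbers are illustrative, not data from Posner's study):

```python
from statistics import mean

# Hypothetical reaction times (ms) from a Posner cueing task
valid_rts   = [285, 290, 300, 295, 280]   # target at the cued location
invalid_rts = [340, 355, 330, 345, 350]   # target at the uncued location

validity_effect = mean(invalid_rts) - mean(valid_rts)
print(f"validity effect: {validity_effect:.0f} ms")  # positive -> cueing helped
```

A positive validity effect is the signature result for both peripheral (exogenous) and symbolic (endogenous) cues.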

  1. Metaphors for Selective Attention

    1. “Spotlight” model:

      1. Attention is restricted in space and moves from one point to the next

        1. Areas within the spotlight received extra processing

    2. “Zoom Lens” model:

      1. The attended region can grow or shrink depending on the size of the area to be processed

  2. Dichotic Listening Experiments & Theories of Selective Attention

    1. Supports early selection model

    2. Dichotic Listening Experiment

      1. Different messages are presented to the two ears

      2. Pay attention to the message presented to one ear (attended message)

      3. Repeat the attended message out loud (shadowing)

      4. Ignore the message presented to the other ear (unattended message)

      5. Ignored message does not reach awareness

    3. Led to early selection model of attention

      1. Also called a bottleneck because the filter restricts information flow

  3. The cocktail party effect

    1. In Gray & Wedderburn’s experiment, about ⅓ of the participants detected their names presented to the unattended ear

    2. This phenomenon of hearing distinctive messages that are not being attended is called the cocktail party effect

  4. Dear Aunt Jane Experiment

    1. “Dear Aunt Jane” experiment

      1. Attend to and shadow the message presented to the left ear

      2. Participants reported hearing the message “Dear Aunt Jane”, which starts in the left ear, jumps to the right ear, then goes back to the left

  5. The late selection Model

    1. Dear Aunt Jane EXP. & Cocktail Party Problem challenge the Early Selection Model

    2. Model proposes that all sensory information is processed to a certain degree for meaning before attention selects what to focus on

    3. Contrast with early selection model, which suggests that information is filtered out at an early stage, based primarily on physical characteristics like volume or pitch, before deeper processing for meaning occurs

  6. Visual Search

    1. Feature Search

      1. Search for a target defined by a single attribute

        1. Such as a salient color or orientation

      2. The efficiency of visual search

        1. Is the average increase in RT for each item added to the display

          1. Measured in terms of search slope, or ms/item

          2. The larger the search slope (more ms/item), the less efficient the search

      3. Feature search is efficient

        1. RT is not influenced by set size

          1. Ie: search slope is flat

    2. Conjunction Search

      1. Search for a target defined by the presence of two or more attributes

      2. Conjunction search is inefficient

        1. RT increases as the set size increases

    3. Comparison

      1. Feature Search

        1. Efficient

        2. RT not influenced by set size

        3. Parallel processing

          1. Simultaneously processing stimuli

      2. Conjunction Search

        1. Inefficient

        2. RT increases as set size increases

        3. Serial processing

          1. Attending and processing one item at a time

    4. Quiz:

      1. Which of the following would take the longest?

        1. Finding a green triangle among a mix of 40 yellow squares and green circles

          1. Differs from two features

          2. Conjunction search

          3. Large set size

        2. Finding a green triangle among a mix of 40 green squares and yellow triangles

          1. Challenging conjunction search with a large set size and high similarity between the distractors and the target

          2. Differences from distractors by both colors and shape

          3. Conjunction search

          4. Large set size

        3. Finding a green triangle among a mix of 20 yellow squares and yellow triangles

          1. Differs from distractors by a single feature

          2. Feature search

          3. Smaller set size

        4. Finding a green triangle among a mix of 20 green squares and yellow triangles

          1. Difference from distractors by two features

          2. Conjunction search

          3. Smaller set size

      2. Quiz Answer: B

  7. Feature Integration Theory & Illusory Conjunctions

    1. The binding problem

      1. Perceiving the vertical red bar moving to the right

      2. Color, motion, and orientation are represented by separate neurons

      3. How do we combine these features when perceiving the bar?

    2. Feature integration theory

      1. What color was the #1

      2. What color was the small triangle? Large triangle?

      3. What color was the large circle?

    3. Illusory Conjunctions

      1. Support the idea that some features are represented independently and must be correctly bound together with attention

  1. Attentional Limits in Time

    1. Rapid Serial Visual Presentation (RSVP) Task & Attentional Blink

      1. Attentional blink

        1. The difficulty in perceiving and responding to the second of two target stimuli in an RSVP stream of distracting stimuli

          1. ATTENTION IS LIMITED

        2. Green and Bavelier (2008)

          1. Reported that people who play first-person shooter video games have a reduced attentional blink

          2. This suggests that visual attention performance can be improved with practice

        3. Marvin Chun’s fishing metaphor for attentional blink

          1. You can see all the stuff in the river as it drifts by

            1. Ie: the boot and the fish

          2. You commit to netting fish number 1 (f1). 

            1. The boot drifts by and fish 2 (f2) appears

          3. Because you are tied up with f1, you will not be able to capture f2, and that second fish swims away

            1. Detected perhaps, but uncollected

  2. The Physiological Basis of Attention

    1. Brain Areas

      1. Attention to a specific part of the visual field causes neurons coding those locations to have increased activity

      2. Attention can enhance the neural processing of a specific object

      3. The activity of the FFA (“face area”) or PPA (“place area”) is modulated as a function of attended object type (face or house)

      4. Activation in the visual cortex measured with fMRI

    2. Single Cell

      1. Attention

        1. Increases the firing rate of a single cell in the monkey parietal cortex

        2. Different ways in which attention could alter the activity of a single cell

        3. Attention shifts the receptive fields of cells in the monkey parietal cortex

        4. Note the change in the receptive field map when the monkey directs attention to diamond (a) or circle (b)

    3. Quiz

      1. Recall our discussion of the cueing experiment and our ability to attend to specific objects. Based on what you have learned, which of the following do you think would be correct?

          1. People would be equally fast on condition 1 and 2

          2. People would be faster on Condition 1

            1. We are attending to the left target and expect the x to appear in the left target

              1. Faster reaction times

          3. People would be faster on Condition 2

            1. Attention must shift to the target’s exact location, causing a delay

          4. People would fail on both conditions

      2. Quiz Answer: B

  3. Disorders of Visual Attention

    2. Neglect

      1. They are not blind, their vision is perfect

        1. But because of the right parietal lobe damage, they have a hard time processing the left visual field

      2. The inability to attend to or respond to stimuli in the contralesional visual field

        1. Typically, neglect of the left visual field after damage to the right parietal lobe

        2. Not restricted to vision:

          1. auditory/somatosensory domains

          2. Different from hemianopia (loss of V1)

        1. 5 “slices” through the brain of a patient with neglect (MRI viewed as though from above). The damage, shown here in yellow, includes the right parietal and frontal lobes. The patient neglects the left side of space

      3. Object copying task:

        1. A patient with neglect often omits one side of the object

      4. Line bisection task:

        1. A patient with neglect may miss the lines on the left side of the image following damage to the right parietal lobe

      5. The neglect can be relative to the object, not to the whole scene

        1. Step 1: The patient neglected the left side of the barbell

        2. Step 2: The barbell was rotated through 180 degrees

        3. Step 3: The neglect rotated with the object in this example of object-centered neglect

    3. Extinction

      1. The inability to perceive a stimulus on one side of the point of fixation in the presence of another stimulus, typically in a comparable position in the other visual field

        1. Milder form of neglect

    4. Balint’s Syndrome

      1. Thought to be related to neglect (more severe)

      2. It involves:

        1. The inability to perceive the visual field as a whole (simultanagnosia)

        2. Difficulty in fixating the eyes (oculomotor apraxia)

        3. Inability to move the hand to a specific object by using vision (optic ataxia)

        4. Lesions to both sides of the brain (typically posterior parietal cortex)

  4. Summary:

    1. Inattentional Blindness

    2. Cueing Experiment

    3. “Spotlight” and “Zoom lens” models of attention

    4. Early & Late Selection Models

    5. Visual Search

    6. FIT & Illusory Conjunctions

    7. RSVP and Attentional Blink

    8. Attention in the brain: Visual areas and single cells

    9. Disorders of Attention


Concepts:

  1. External Attention

    1. Attention to stimuli in the world

  2. Internal Attention

    1. Our ability to attend to one line of thought as opposed to another or to select one response over another

  3. Overt Attention

    1. Involves directing attention from one place/object to another by moving the eyes

  4. Covert Attention

    1. Involves directing attention without moving the eyes

  5. Voluntary vs Involuntary Attention


Attention is Selective and Limited


Lecture 18- 11/13/24


Exam 2 Information:

  1. 11/20 Wednesday 8:30-9:30 - 70 minutes (come 5 minutes early)

  2. Lecture 10 - Lecture 18

  3. 50% MC 50% SA

  4. Review Nov 18 (Monday) @ Metcalf AUD @ 5:30

    1. Post questions by Nov 17 11:59


Review:

Q: Feature Integration Theory

  1. The Binding Problem

    1. Perceiving the vertical red bar moving to the right

      1. Color, motion, and orientation are represented by separate neurons

      2. How do we combine these features when perceiving the bar?

  2. Feature integration theory

    1. FIT suggests that the solution to the binding problem is attention

  3. Illusory Conjunctions

    1. Support the idea that some features are represented independently and must be correctly bound together with attention

Q: RSVP and Attentional Blink

  1. Bad performance:

    1. Attentional blink

      1. You can catch first target but miss the second target

      2. The difficulty in perceiving and responding to the second of two target stimuli amid an RSVP stream of distracting stimuli

        1. ATTENTION IS LIMITED

      3. Fishing metaphor

        1. You miss the second fish after catching the first fish if the second one comes at a critical time point

Q: Brain Areas

  1. Attention to a specific part of the visual field causes neurons coding those locations to have increased activity

  2. Attention can enhance the neural processing of a specific object

  3. The activity of the FFA or PPA is modulated as a function of attended object type


Q: neural mechanisms of attention

  1. Different ways in which attention could alter the activity of a single cell

    1. Enhancement

      1. Attention increases the overall firing rate of a neuron in response to its preferred stimulus

    2. Sharper tuning

      1. Reduces response to stimuli that deviate slightly from the preferred feature, enhancing specificity

    3. Altered tuning

      1. Attention shifts the preferred stimulus or feature of a neuron, modifying what the neuron responds to most strongly


Notes:  

  1. Conscious perception limited by attention and memory

    1. Motion-Induced Blindness

      1. Attention for awareness

        1. Things may fade out of your attention, disappearing from awareness 

          1. ie: concentrate on a book, and you are aware of little else

        2. Attention acts as a gateway to conscious perception determining what enters awareness and what is filtered out

          1. Objects or events not attended to may “disappear” from awareness even if they are within the visual field

        3. Attention limits the scope of conscious perception to a manageable subset of the available sensory information

    2. Inattentional blindness

      1. Failure to notice a fully visible and unexpected object because attention is focused elsewhere

        1. Gorilla experiment

      2. Demonstrated that without attention, even significant stimuli can go unnoticed

    3. Change Blindness

      1. Difficulty detecting changes between two images or scenes when the change occurs during a visual interruption or distraction

        1. Ie: a person talking to a stranger fails to notice when the stranger is replaced by someone else if the swap occurs during a brief interruption

      2. Our brains retain a general gist of scenes rather than detailed representations, relying on attention to detect changes

      3. Changes are often missed unless they are directly attended to or highly salient

    4. Only attended items enter visual short-term memory

      1. Sensory memory (Iconic memory)

        1. Only lasts for 200-500 msec

        2. A kind of photographic memory (no limit)

      2. ADD ATTENTION →

      3. Short term memory (working memory)

        1. Lasts over many seconds

        2. Very limited capacity

      4. ADD REHEARSAL →

      5. Long-term memory

        1. Capacity and duration unlimited

  2. Limited Capacity of visual working memory

    1. You can remember only up to 4 items

    2. Conscious perception is limited

      1. Because your visual attention and memory are limited

    3. All these results show that you are only aware of things that you select for your attention and short-term memory

    4. What about unattended things?

      1. Most of them will be decayed, forgotten, and discarded, so you cannot use them

  3. The fate of unseen stimuli

    1. A stimulus below an individual's threshold for conscious perception is registered and processed without our awareness

      1. Subliminal perception

        1. Stimuli may appear for only a single frame

          1. Too short to consciously pick up

        2. Things we don’t notice influence us too

          1. Republican ad shows Al Gore and then “RATS” appears for one frame (1/30 of a second)

    2. Despite your limited conscious perception

      1. Many individual items go unseen

      2. Your experience of visual scenes is nevertheless much richer

    3. Influence by subliminal perception

    4. Invisible stimulus can attract attention

  4. Understanding visual scenes

    1. Gist

      1. Fast visual scene understanding, even when the image is blurred

      2. You can recognize a scene within 20 msec

    2. Spatial layout

      1. Refers to the structural and geometric arrangement of elements within a scene

      2. Overall characteristics of a scene

        1. Openness

        2. Depth

        3. geometry

      3. Conveys the global structure of the scene

    3. How can perceiving scenes be so fast?

      1. Two different components of a visual scene

        1. Low spatial frequency

          1. coarse

        2. High spatial frequency

          1. Fine

      2. Global information about a whole scene relies on the low-spatial frequency component

      3. The visual system can quickly analyze this information while we are not aware of it at all
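The coarse/fine split described above can be sketched in a few lines. A minimal toy demo (the random array standing in for a scene's luminance and the box blur standing in for a proper low-pass filter are illustrative assumptions, not the lecture's actual stimuli):

```python
import numpy as np

def box_blur(img, k):
    """Crude low-pass filter: average each pixel over a (2k+1)x(2k+1)
    neighborhood (wrap-around edges; good enough for a toy demo)."""
    acc = np.zeros_like(img)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (2 * k + 1) ** 2

rng = np.random.default_rng(0)
image = rng.random((64, 64))        # toy luminance "scene"

low_sf = box_blur(image, 3)         # coarse layout: low spatial frequencies
high_sf = image - low_sf            # fine detail: high spatial frequencies

# The two components sum exactly back to the original image
assert np.allclose(low_sf + high_sf, image)

# Blurring removes most local contrast, so the low-SF map is much "flatter"
assert high_sf.std() > low_sf.std()
```

Because the high-SF component is defined as the residual of the blur, any image splits cleanly into coarse layout plus fine detail, which is the decomposition fast scene (gist) perception is thought to exploit.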

    4. Guided search by global information of a scene

      1. Gist of a scene

      2. Spatial layout of a scene

    5. Ensemble representations

      1. Knowledge about the properties of a group of objects

        1. Mean size

        2. Approximate number

        3. Centroid

        4. Mean emotion
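The ensemble statistics listed above are cheap to compute, which is part of what makes them efficient summaries. A toy sketch, assuming hypothetical object positions and sizes:

```python
import numpy as np

# Toy "scene": four objects, each with an (x, y) position and a size
# (hypothetical values, purely for illustration)
positions = np.array([[1.0, 2.0], [3.0, 6.0], [5.0, 4.0], [7.0, 8.0]])
sizes = np.array([2.0, 4.0, 3.0, 3.0])

mean_size = sizes.mean()            # ensemble "mean size"
approx_number = len(sizes)          # "approximate number" (exact here)
centroid = positions.mean(axis=0)   # "centroid" of the group

print(mean_size)      # 3.0
print(approx_number)  # 4
print(centroid)       # [4. 5.]
```

Three numbers summarize the whole group, so the visual system need not attend to and store every individual object, consistent with the limited-capacity point above.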

      2. They are about “groups” of similar objects

      3. They are useful because natural scenes often contain many similar objects

        1. Redundancy and Regularity

      4. Ensemble Representations are efficient and economical

        1. Given the limited capacity of attention and memory

        2. Given the remarkable ability to group things together

          1. By similarity

          2. By distance

        3. You use ensemble representations everyday

        4. Global information makes your visual experience of a scene rich and vivid

          1. Without explicit effort, you may know about the spatial layout of the scene

          2. You recognize that this is an outdoor, man-made, navigable scene

          3. You know about groups of similar objects 

            1. Buildings, cars, or people

          4. You may not need to attend to and remember every single element of this scene in order to understand the scene

    6. Memory for scenes

      1. Memory for scenes is amazingly good

        1. Participants were shown 10,000 images for 5 seconds each

        2. They were about 90% correct when quizzed 2 days later

      2. Because you can understand visual scenes fast and efficiently

      3. Because you already have so much knowledge about scenes in your long-term memory

    7. Neural basis for scene perception

      1. PPA (Parahippocampal place area)

      2. Retrosplenial complex (RSC)

      3. Complementary function of the PPA and RSC

        1. PPA treats each view of a panoramic scene as a different image

          1. Viewpoint specific representation

        2. RSC treats different views of panorama as the same stimulus

      4. Together, they enable both specific and integrative representations of scenes across viewpoints