Presentation Information Exam 2: PSYC 453

Functional Magnetic Resonance Imaging of Synesthesia (Nunn et al., 2002)

Purpose: To investigate how synesthetes experience color sensations when hearing words.

Main Question: Do synesthetic color experiences activate the same brain regions as real color perception?

Methods

Auditory Task

Participants listened to spoken words & pure tones (like beeps)

Hearing the words alone activated color areas, providing strong evidence for cross-wiring

Visual Color Localizer Task

Participants looked at colored & grayscale patterns (called Mondrians)

This allowed researchers to find each person's brain area (V4/V8) responsible for seeing color

Training study

Non-synesthetes practiced word-color pairs (Monday=blue) to see if their brains would eventually respond like synesthetes

fMRI was used to track which brain regions showed increased activity during these tasks


Functional Magnetic Resonance Imaging of Synesthesia (Nunn et al., 2002) (cont.)

Results for Synesthetes

Their V4/V8 region lit up!

There was NO activation in early visual regions (V1/V2); they weren't just imagining pictures

Activation was left-lateralized, on the language-dominant side of the brain. This makes sense because it was triggered by words.

Results for Non-Synesthetes

NO V4/V8 activation even AFTER learning word-color pairs

Their language and memory areas worked harder, but not their visual color cortex

Synesthesia is NOT learned

Authors’ Conclusion: Synesthetic color experiences are genuine perceptual events mediated by the visual cortex, not associative memory.

Answer to Research Question: Yes, synesthetic color experiences activate the same cortical areas used in real color perception.


Hearing Colors, Tasting Shapes (Vilayanur S. Ramachandran & Edward M. Hubbard) Pop-out/Segregation

Main question: Is synesthesia a genuine sensory experience?

Conclusions

Synesthesia is real and measurable

Likely results from neural cross-activation between adjacent sensory areas

Offers a model for understanding perception, creativity, and cognition

Methods: Pop-out/Segregation

Experiment: Contrasting background elements allow visual segregation

Ex. Created shapes using 2s among 5s; asked, “What shape is it?”

Results: Pop-out/Segregation

In synesthetes, the shape formed by the numbers was correctly reported ~90% of the time

The physical color and the associated color of the number are indistinguishable, unless the physical color is intense

Conclusion: Induced colors are genuinely sensory


Hearing Colors, Tasting Shapes (Vilayanur S. Ramachandran & Edward M. Hubbard) Crowding Test

Experiment: Identification of numbers in the periphery with/without crowding (different numbers surround the number of interest)

Results: Crowding Test

“Invisible” number induces synesthesia

Used color to deduce the number without perceiving the number itself

Conclusion: The number is processed in the fusiform gyrus and evokes color before the crowding effect occurs


Hearing Colors, Tasting Shapes (Vilayanur S. Ramachandran & Edward M. Hubbard) Other Behavioral Experiments

Contrast Test: Does reducing the contrast of the number and background change induced color?

Roman numeral test: Do Roman numerals of the same numerical concept also induce color?

Large number made of smaller numbers: Seeing the “forest or the trees”?

Results

Contrast test: Weaker synesthetic color as contrast is decreased (although the number is still perfectly visible)

Conclusion: viewing the number does not guarantee seeing the color

Roman numeral test: No color induced by Roman numerals

Conclusion: visual form, not concept, triggers the response

Large number made of smaller numbers: Synesthetes can switch colors depending on focus

Conclusion: how attention categorizes the visual input matters (not appearance alone)


Hearing Colors, Tasting Shapes (Vilayanur S. Ramachandran & Edward M. Hubbard) fMRI

Results: Neuroimaging

Synesthetes show activation in V4 (color processing) even when numbers are black

Overlapping activation for number and color areas

Different activation patterns support multiple types of synesthesia

Ex. Color-blind “Martian” colors

Supports the cross-activation hypothesis: Color area (V4) and number area (fusiform gyrus) are adjacent


Preverbal Infants’ Sensitivity to Synaesthetic Cross-Modality Correspondences (Walker et al., 2010)

Methods

Pitch-Height

Congruent: Higher pitch, Higher ball

Incongruent: Higher pitch, lower ball

Pitch-Sharpness

Congruent: Higher pitch, More sharpness

Incongruent: Higher pitch, Less sharpness

16 infants (8 male, 8 female) completed the Pitch-Height experiment

16 different infants (8 male, 8 female) completed the Pitch-Sharpness experiment

Each infant observed 6 animations (3 congruent, 3 incongruent) in a mixed order

Note: 8 of the infants viewed a congruent animation first while the other 8 viewed an incongruent animation first

Eye fixations and total length of time looking at the animations were recorded

Summary

Overall, infants looked longer at the congruent conditions in both experiments

When the modalities matched, infants maintained their preferential looking longer

Serves as evidence of infants’ unlearned/innate sensitivity to the connections of auditory pitch with visuospatial height and sharpness

Conclusion

Provides the strongest indication that synesthetic cross-modality correspondences are an unlearned aspect of perception


Synesthetic Colors Determined by Having Colored Refrigerator Magnets in Childhood (Witthoft & Winawer, 2006)

Summary

AED showed highly consistent letter–color associations across sessions.

Her synesthetic colors matched the hues of the childhood colored magnet set.

Every 6th letter shared a similar hue pattern from the magnet layout.

Synesthetic colors can transfer to new alphabets (like Cyrillic) based on visual or phonetic similarity.

Case and font tests: uppercase and standard fonts produced more saturated colors.

Lightness constancy illusions: synesthetic colors changed with shadow context, like real colors.

Conclusions

AED’s consistent and automatic color associations meet criteria for true synesthesia.

Early visual processing underlies synesthetic color perception.

The specific color pairings were likely learned from childhood experience (magnet exposure).

Demonstrates that synesthesia can be experience-shaped, blending learning and perception.

Synesthetic activation arises from overlapping neural representations and multiple representational levels (visual, phonetic, conceptual).


Differential Processing of Objects under Various Viewing Conditions in the Human Lateral Occipital Complex (Kalanit Grill-Spector et al. 1999)

Lateral Occipital Complex (LOC)

More sensitive to viewing objects than textures/noise

Faces included in objects

Adaptation Duration Procedure: Methods

Patterns of 32 stimuli with noise and texture changes

Patterns contain 1-32 different images

Results:

LOC activation fell to ~48% of the non-repeated level for repeated images

LOC adapted more strongly to repeated stimuli than other visual areas

V4 activation fell to ~56%

Size Invariance Experiment: Methods

Repeatedly shown image with noise/texture interspersed, with some being sized differently

Also shown images of a similar category (Semantic Control)

Shown different objects but with different sizes

Results:

Images of different sizes still showed similar adaptation

Different response to similar control - not adapting to category

Different sizes of different images showed similar responses - not responding to size of object


Differential Processing of Objects under Various Viewing Conditions in the Human Lateral Occipital Complex (Kalanit Grill-Spector et al. 1999) (cont.)

Face Experiment 1: Methods

Shown photos with faces in different conditions

Translation, Rotation, Illumination

Also shown different faces and scrambled faces (noise)

Results:

More adaptation to translation than to rotation or illumination

Stronger invariance to some changes than others

Face Experiment 2: Methods

Very similar to Experiment 1, but with different parameters varied

Size, Translation, and rotation

Also shown scrambled faces (noise)

Results:

Translation and Size show more adaptation than Rotation

Further confirms rotational invariance

Face and Car Experiment: Methods

Shown faces and cars with different changes (and noise)

Translation, Rotation

Also shown different objects

Results:

Faces generally have higher activation than cars

After weighting, showed similar adaptation to both

LOC not only specialized for faces


Differential Processing of Objects under Various Viewing Conditions in the Human Lateral Occipital Complex (Kalanit Grill-Spector et al. 1999) (cont. 2)

Summary

Adaptation: fMRI shows a reliable way to measure adaptation to test invariance in LOC

LOC Invariance: LOC shown to be invariant to size and position changes. Partially invariant to illumination and rotation changes.

Subsection of LOC - Posterior Fusiform (PF/LOa)

Showed stronger invariance than the rest of LOC

Conclusions

The LOC shows invariance to size and translation changes to objects

Adaptation to the object itself, not only faces (Face and Car) or the category of object (Size Invariance)

Partial invariance to rotation and illumination changes (Face 1, 2, and Car)

Invariance not consistent throughout LOC

PF/LOa shows stronger invariance than other areas in LOC; the LOC's representation of an object is not fully 3D


Activation of the middle fusiform 'face area' increases with expertise in recognizing novel objects

Question: Is this region face-specific or does it respond to any category of visual expertise?

Hypothesis: The FFA is not exclusively face-specific, but can be tuned to other object categories with training and expertise.

5 adults trained over ~7 hours (4+ days) to become Greeble experts, 6 novices (no training) for comparison.

Training goal:

Learn to categorize Greebles by family and individual identity.

Criterion: equally fast at both → indicates expert-level configural processing.

Experimental Design

fMRI scanning conducted across 6 sessions: Before training, 3 times during training, twice after reaching expertise.

Tasks during fMRI:

Sequential-matching task: judge if two images are the same or different (upright vs inverted; Greebles vs faces).

Passive-viewing task: simply view faces, Greebles, or objects.

Region of Interest (ROI): middle and anterior fusiform gyri (face-selective areas).


Activation of the middle fusiform 'face area' increases with expertise in recognizing novel objects (cont.)

Key Findings: fMRI Results

Before training:

Right middle fusiform gyrus activated strongly for faces, weakly for Greebles.

After expertise:

Activation for upright Greebles increased significantly—almost equal to that for faces.

Main effect:

FFA activation increased with training (F(1,4) = 88.9, p < 0.005).

Right hemisphere dominance

fMRI Results Continued: Spatial Correlation

Used correlation maps to test if activation moved or increased in magnitude.

Found that Greeble activation became spatially aligned with the FFA face-activation pattern.

The activation did not shift → it increased in intensity within the same region. → Indicates FFA recruitment for new object expertise


Activation of the middle fusiform 'face area' increases with expertise in recognizing novel objects (cont. 2)

Behavioral Results

Accuracy improved with training (error < 13% throughout).

Response times for identifying individual Greebles dropped from 1150 ms → 450 ms.

Novices were faster at family-level than individual-level recognition.

Experts perform with equal speed at family & individual level → people looked at relationships between items rather than individual features (similar to faces).

Performance improvements did not account for increased FFA activation (no performance confound).

Passive Viewing (Experts vs. Novices)

During passive viewing:

Experts: FFA activated for both faces and Greebles.

Novices: FFA activated only for faces.

Confirms that expertise alone (not attention or task demands) drives fusiform activation.


Activation of the middle fusiform 'face area' increases with expertise in recognizing novel objects (conclusions)

Summary

Training with novel objects (Greebles) increased activation in the right middle fusiform gyrus.

The same region that responds to faces became responsive to Greebles after expertise developed.

No comparable change for faces → not a general practice effect.

Conclusion: The “face area” is not face-exclusive; it’s tuned by experience and expertise.

Key Takeaways & Broader Implications

Challenges the modular view (that faces use a unique brain module)

FFA likely supports subordinate-level recognition (distinguishing individuals within a category).

Expertise induces configural processing—the same mechanism used for faces.

Suggests FFA is a general substrate for fine-grained visual discrimination, refined by experience.

FFA activity scales with expertise → not strictly for faces.

Right middle fusiform gyrus is the critical area for expert visual recognition.


The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception Kanwisher, McDermott, & Chun (1997) (Part 1)

Methods Part 1 - Part I: Determine Region of Interest (ROI)

20 subjects under 40 years

Passive viewing task

Facial perception is automatic, so viewing images passively (just looking) rather than completing a defined task best simulates how we naturally perceive faces without intentional control

Subjects told to fixate on dot and simply look at image when it appeared

Control Condition: Non-face stimuli (common objects)

Experimental Condition: Face stimuli

Results Part I

One region was consistently more activated for faces than objects in 12 of 15 subjects with usable data

Area in the right fusiform gyrus called the FFA

Activation was predominantly right-lateralized (in right-handed individuals)

Other regions such as the middle temporal gyrus/superior temporal sulcus activated for some but not all participants when viewing faces

Higher activation for objects than faces in the parahippocampal region

Led to hypothesized specialization of the area for some aspect of face perception


The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception Kanwisher, McDermott, & Chun (1997) (Part 2)

Methods Part 2: 5 subjects from Part 1 with clear activation in FFA (ROI) tested on two different stimulus conditions

Condition 1: Intact two-tone faces vs scrambled two-tone faces

Passive viewing task

Subjects told to fixate on dot and otherwise simply look at image

Scrambled condition scrambled facial regions to become unrecognizable as a face

Preserved mean luminance (light intensity) and low-level features

Test whether response was from low-level features in faces but not the faces themselves

Condition 2: Front view photos of faces vs front view photos of houses

Passive viewing task

Subjects told to fixate on dot and otherwise simply look at image

Test whether face area was involved in distinguishing between typical examples of things rather than faces themselves

Results Part II

Clear higher activation found for face than non-face stimuli in each subject’s respective previously determined ROI (averaged activation from each voxel within ROI)

Ratio of average percentage of signal change across ROIs during face epochs (time periods) vs non-face epochs ranged from 2.8 to 6.6, indicates high degree of selectivity in face ROIs

ROI in fusiform gyrus responded more strongly to: Faces than objects, Intact faces than scrambled two-tone faces, Faces than houses


The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception Kanwisher, McDermott, & Chun (1997) (Part 3)

Methods Part 3

5 subjects from Part I with clear activation in fusiform face area (ROI) tested on two different stimulus conditions (2 subjects overlapping with Part II)

Condition 1: Three-quarter-view images of faces (hair tucked away) vs images of human hands

Passive viewing task

Subjects told to fixate on dot and otherwise simply look at image

Test if response would generalize to other viewpoints

Test if hair and other external features were involved in recognizing faces

Test if area would respond to any part of human, not just faces

Condition 2: Same stimuli (¾ faces and human hands)

Performing “1-back” task asking subjects to search for consecutive repetitions of identical stimuli (not passive viewing task)

Press button when repetition detected

Blank grey field shown between each image of either face or hand

If the task is more difficult for hands than faces, hands require more attention, meaning a stronger response in the ROI for faces can't be attributed to attention

Results Part III

Clear higher signal intensities in previously determined ROI for face than non-face stimuli

Ratio of average percentage of signal change across ROIs during face epochs (time periods) vs non-face epochs ranged from 2.7 to 4.5, indicates high degree of selectivity in face ROIs

Subjects reported higher difficulty in identifying repetition with hands than faces

ROI in the fusiform gyrus responded more strongly to: Faces than objects, Faces than hands in both passive viewing and 1-back tests


The Fusiform Face Area: A Module in Human Extrastriate Cortex Specialized for Face Perception Kanwisher, McDermott, & Chun (1997) (Conclusions)

Summary of Findings

ROI in fusiform gyrus found in subjects with higher activation viewing faces than viewing objects

Fusiform Face Area (FFA)

Middle temporal gyrus and superior temporal sulcus had higher activation viewing faces than objects in some, but not all subjects

Parahippocampal area had higher activation viewing objects than faces

Established ROI (FFA) showed higher activation when viewing: Intact faces than scrambled faces, Front view faces than front view houses, Three-quarter-view faces than human hands in passive viewing task, Three-quarter-view faces than human hands in 1-back task

Conclusions

Area found that responds more strongly to faces than objects: Fusiform Face Area

Fusiform Face Area is specialized for face perception

Not for mean luminance or low-level features (intact vs scrambled)

Not for typical examples of stimuli (front-view faces vs front-view houses)

Not for human-like object (¾ faces vs human hands)

Not for higher attention/task difficulty (1-back task as reported higher difficulty for repeated hands than faces)

Responds to images of faces taken from different viewpoints (part III)

Parahippocampal area showing higher activation for objects than faces insinuates distinct brain areas for perceiving faces vs objects

MTG and STS may play secondary roles in face perception

Tasks ruled out the FFA being specialized for other aspects of perception, attributing its purpose solely to face perception


A Cortical Representation of the Local Visual Environment: Russell Epstein & Nancy Kanwisher (1998)

Is there a specialized brain area that helps us perceive visual environments?

A specific region in the parahippocampal cortex could selectively respond to visual scenes/places

Parahippocampal Place Area (PPA)

Methods: How they looked for PPA (fMRI)

Design: A block design; images from a specific category (e.g., objects) were shown for a 16-second block

Sets of 4 blocks (of varying categories)

Followed by “fixation period” → allow BOLD signal to return to baseline

Lasted 5 minutes 36 seconds total per participant

Methods: Finding the ROI

Tasks:

Passive Viewing - told to just observe images

1-Back Task - asked to press a button on repeated images, to test whether the PPA response is due to attention

Functional Localizer: Avoids circular reasoning

Used to define PPA functionally

Used data from Experiment 1 to define the ROI in Experiment 2 (and vice versa); a separate data set defined the ROI in Experiment 3

Consistency of brain scans → PPA is anatomically distinct


A Cortical Representation of the Local Visual Environment: Russell Epstein & Nancy Kanwisher (1998) Experiment 1

Experiment #1: Existence of “Place Area”

2 Conditions: Intact and Scrambled

Compared fMRI activity across 4 categories: Faces, Objects, Houses, and Scenes

Intact—Scrambled: served as control, shows PPA responds to high-level meaning of “category”

Findings:

PPA responds strongly to scenes

Weak response to Houses & Objects

Minimal response to faces

No significant difference in activity between two tasks (passive vs. 1-back)

High I-S (intact minus scrambled) value for scenes → PPA responds to scenes, not visual complexity or low-level features


A Cortical Representation of the Local Visual Environment: Russell Epstein & Nancy Kanwisher (1998) Experiment 2

Experiment #2: Defining a “Scene”

8 Conditions:

“Multiple Object” Hypothesis: PPA will respond strongly to furnished rooms and object arrays, but weakly to empty rooms

Spatial Layout Hypothesis: PPA will respond strongly to furnished rooms and empty rooms, but weakly to object arrays

PPA response to empty and furnished rooms was strong

Response to object arrays was low

PPA responds to spatial layout, not multiple objects

No significant difference in activity between 2 tasks (passive vs. 1-back)


A Cortical Representation of the Local Visual Environment: Russell Epstein & Nancy Kanwisher (1998) Experiment 3

Experiment #3: Does the Layout need to Make Sense?

5 Conditions: Faces only, Objects only, Fractured + rearranged layout, Fractured layout, and Intact layout

Findings:

PPA responded strongly to intact layout and fractured layout

Response dropped for the Fractured + Rearranged images (no coherence)

PPA is factoring in geometric coherence.

No significant difference between two tasks


A Cortical Representation of the Local Visual Environment: Russell Epstein & Nancy Kanwisher (1998) Conclusions

Selectivity: The PPA responds selectively to visual scenes (Exp. 1)

Automaticity: Response is automatic and not due to attention (1-back task)

Core Factor: PPA’s response is driven by the spatial layout of the scene, not the objects within (Exp. 2)

Coherence: Spatial layout must be coherent (e.g., form a 3D room) to cause full response (Exp. 3)

Conclusions

Primary function of PPA: represent places by encoding the geometry of local environment, crucial for navigation.

Shows evidence of brain modularity: just as faces are “specialized” in the FFA, the PPA “specializes” in place processing.

Spatial layout is the key component of why the PPA responds to places (or scenes)


Distributed and Overlapping Representations of Faces and Objects in Ventral Temporal Cortex Haxby et al.

Conclusion

Each object category (faces, houses, tools, etc.) evoked a distinct pattern of neural activity in ventral temporal cortex.

Category-specific information remained even when regions that responded maximally were excluded, showing that representations are distributed and overlapping.

Within “category-selective” regions (e.g., FFA, PPA), responses to other categories were still distinct, suggesting no truly isolated modules.

The pattern of weak (nonmaximal) responses also carried category-related information, not just low-level visual features.

The consistent spatial organization of these patterns across individuals indicates a topographic structure for object representation (Object Form Topography).


Face perception in monkeys reared with no exposure to faces Yoichi Sugita (2008)

Subjects:

Control: 1 two-year-old male Japanese monkey and 3 two-year-old female Japanese monkeys, raised normally by mothers

Test: 5 female and 5 male Japanese monkeys, separated from mothers

Raised by human caregivers wearing face masks, in an environment with colorful toys, imitation flowers, and 2 hours of daily playtime with a caregiver

Separated into 6-, 12-, and 24-month deprivation groups

Training: One month before testing, monkeys were trained to look at screen for 5 seconds

Post-Deprivation Period: Exposed to faces for first time for 1 month

Conditions: 4 monkeys saw monkey faces, 6 monkeys exposed to human faces (caregiver without masks)

Monkeys later moved to room with other monkeys for normal social interaction

Visual Stimuli

40 grayscale face images: 20 human faces, 20 monkey faces

5 human and 5 monkey faces used to create modified versions to test configural and featural processing

Non-face images (20): Houses, tools, vehicles → things monkeys had never seen before

Preferential Looking Procedure

Stimuli presented in pairs from three possible combinations:

Human face vs. non-face object

Monkey face vs. non-face object

Human face vs. monkey face

Visual Paired Comparison Task (VPC): Conducted one day after the preferential looking test, tested recognition memory


Face perception in monkeys reared with no exposure to faces Yoichi Sugita (2008) Results & Conclusions

Good at processing faces even if they were never exposed to faces

During Deprivation

Preferential looking:

Deprived: Preferred faces over objects but showed no species bias

Control: Preferred monkey over human faces

Meaning: deprived monkeys recognized faces as special

VPC:

Deprived: recognized novel vs. familiar faces for both humans and monkeys & featural and spacing manipulations

Control: discriminated monkey faces but not human faces

Meaning: born able to recognize faces and detect subtle differences, but early experience fine-tunes which faces they become experts in

After Deprivation

6 monkeys saw human faces vs. 4 monkeys saw monkey faces. Tested again after 1 month.

Human-exposed monkeys: Looked longer at human faces

Monkey-exposed monkeys: Looked longer at monkey faces

Preference stable after a year, even after daily exposure to both species

Conclusion

Face processing does not require visual experience with faces to exist (innate)

Brain already has general processing system

But we become experts in faces we are exposed to


The effect of viewpoint on body representation in the extrastriate body area (EBA) Chan et al. 2004

Methods: Participants & Design

N = 10 healthy adults (5F)

2×2 factorial: Viewpoint (Egocentric vs Allocentric) × Identity (Self vs Other).

1-back repetition-detection task to maintain attention.

Separate localizer scans to functionally define left/right EBA (bodies > faces/tools/scenes)

Stimuli: Each participant photographed (20 poses), head excluded.

Each pose shot in both egocentric (from eye level) and allocentric (~1 m away) views.


The effect of viewpoint on body representation in the extrastriate body area (EBA) Chan et al. 2004 - Results

Right EBA preference for allocentric views suggests an early role in social vision (processing others' bodies).

EBA did not encode identity (Self ≈ Other), implying representation of body form/posture, not who it is.

Modulation by viewpoint was modest compared to strong body-selective responses overall.

Hemispheric asymmetry aligns with broader patterns (right: view-dependent; left: more abstract).

Right EBA: Allocentric > Egocentric; not sensitive to identity.

Left EBA: No reliable effects of viewpoint or identity.

Whole-brain: Egocentric engages left SPL; allocentric engages occipito-parahippocampal regions; 'Other' engages anterior STS.

Overall: EBA encodes body appearance early; identity likely represented elsewhere (e.g., STS/ventral temporal).


The effect of viewpoint on body representation in the extrastriate body area (EBA) Chan et al. 2004 - Conclusions

Q1 (Viewpoint): Yes—EBA (right) is modulated by viewpoint, preferring allocentric (others-like) views.

Q2 (Identity): No—EBA does not distinguish self vs familiar others under these task conditions.

Conclusion: EBA plays an early, perceptual role in parsing human bodies rather than encoding identity.

Implication: Self/other distinctions likely arise downstream (e.g., STS, parietal systems).


Representation of Manipulable Man-Made Objects in the Dorsal Stream (Chao & Martin, 2000)

Methods

Participants

10 right-handed, neurologically normal adults

Stimuli & Tasks

Black-and-white photos: tools, animals, faces, houses

Control: phase-scrambled versions of the same images

Experiment 1: Viewed images at fixation, 2 images/second, 1728 total stimuli

Experiment 2: Silently named tools/animals, 2 seconds per image, 360 total stimuli (matched for word difficulty and frequency)


Representation of Manipulable Man-Made Objects in the Dorsal Stream (Chao & Martin, 2000) Conclusions

Viewing and naming tools selectively activated the left ventral premotor cortex (BA 6) and the left posterior parietal cortex (BA 40)

Conclusions:

The study extended the distributed network of recognizing tools: fusiform gyrus, middle temporal gyrus, left posterior parietal cortex, left ventral premotor cortex (right-handed individuals).

This network of ventral and dorsal sites is active whenever we recognize manipulable objects


A Real-World Size Organization of Object Responses in Occipitotemporal Cortex (Konkle & Oliva, 2012)

Overall Results

The brain does differentiate neural representations based upon the real-world size of an object.

Organized from big to small in a medial-to-lateral pattern and superior-to-inferior pattern

1. Areas maintain their real-world size preference despite discrepancies in retinal image size

The relationships between projection location and the object are largely based upon their true size rather than retinal projection

2. Size areas are activated during visual imagery.

Projection sites reflect stored pre-existing visual knowledge on objects

3. Projections aren’t affected by an object’s conceptual size: no strong connections to abstract concepts of true size.

Size preferences in the ventral cortex are more likely connected to information from the object category, its visual form, or commonalities among like-objects.


A Real-World Size Organization of Object Responses in Occipitotemporal Cortex (Konkle & Oliva, 2012) Introduction

Study Background: Objects are often grouped together in specific representation patterns across the ventral-temporal cortex.

Research Study/Purpose: Does the brain differentiate neural representation locations based on the real-world size of an object regardless of retinal projection size?

Results: Yes, the brain does have a preference in object size! Object sizes are grouped in a medial-to-lateral organization across the ventral-temporal cortex
