Week 9 - Social Interaction


Card 1

What is social binding versus visual input?

Social binding and visual input refer to different levels of processing in human perception, particularly in how we understand social scenarios. 

  • Visual input refers to the raw, low-level sensory information received by the eyes (e.g., colors, shapes, movements, and spatial arrangements of objects and people). The brain's visual system processes this information rapidly to identify basic features and patterns.

  • Social binding is a proposed mechanism for a faster, more efficient way the brain interprets this visual input in a social context. It is the process by which the visual system automatically groups individual people into meaningful social units or events, based on cues like proximity or mutual gaze. 

Card 2

Why are social interactions prioritised?

Social interactions are prioritized during vision primarily because of their critical evolutionary and social significance for human survival and functioning.

The visual system has evolved specialized, automatic mechanisms to rapidly detect, attend to, and process social information, even over non-social stimuli or when attention is directed elsewhere.

  • Survival Advantage: Rapidly identifying social interactions (e.g., potential threats, allies, or mates) provided a significant evolutionary advantage. The ability to quickly understand others' actions, goals, and intentions is fundamental for navigating a complex social world effectively.

  • Automatic Processing: The brain processes socially relevant stimuli (like faces, eyes, and body movements) in dedicated visual regions, often automatically and without conscious effort. This allows for the efficient gating of vital social cues to conscious awareness, even when visual inputs are ambiguous or competing for attention.

  • Facilitating Social Cognition: Prioritizing social cues, such as joint attention (following someone's gaze to a shared object), acts as a fundamental building block for higher-level social cognitive processes like empathy, communication, and "theory of mind" (inferring mental states in others).

  • Coordination and Learning: Visual social cues are essential for coordinating behavior with others (e.g., during joint tasks or conversations) and are vital for social learning and skill development, particularly in childhood.

  • Behavioral Benefits: Strong social connections fostered through these interactions are linked to improved mental well-being, better physical health outcomes, and cognitive benefits such as enhanced memory and attention. 

Card 3

How did Vestner et al. (2019) test spatial distortion (facing vs. facing away) using a visual search task?

This task measured how quickly participants could find a target pair among distractors, suggesting how efficiently the stimuli were processed. 

  • Procedure:

    • Participants initiated a trial by holding down the spacebar, which presented four stimulus pairings (dyads of people or objects like arrows/cameras) in the four quadrants of the screen.

    • Three quadrants contained distractor pairs, and one contained the target pair.

    • The target could be a "front-to-front" (facing) dyad or a "back-to-back" (non-facing) dyad. The distractors typically consisted of pairs all facing the same direction (e.g., both left or both right).

    • Participants were instructed to release the spacebar as soon as they found the target.

    • Releasing the spacebar caused the stimuli to disappear, and participants then indicated the target's location by pressing a corresponding keyboard key.

  • Measurement: The primary measure was the reaction time (RT) from stimulus onset until the spacebar was released. Faster RTs indicated easier detection (see the analysis sketch below).

  • Key Finding: Participants found facing (front-to-front) target pairs faster than back-to-back target pairs when hidden amongst distractors that faced the same direction. 
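
A minimal sketch of how this RT contrast might be analyzed, assuming a hypothetical per-trial file with participant, condition ("facing" vs. "back_to_back"), and rt columns; the paper's actual pipeline (outlier trimming, ANOVA structure) is not reproduced here:

```python
# Compare search times for facing vs. back-to-back target pairs.
# The file name and column names are hypothetical placeholders.
import pandas as pd
from scipy import stats

trials = pd.read_csv("rt_data.csv")  # columns: participant, condition, rt (ms)

# Mean RT per participant in each condition
means = trials.groupby(["participant", "condition"])["rt"].mean().unstack()

# Paired t-test: the search advantage predicts facing < back_to_back
t, p = stats.ttest_rel(means["facing"], means["back_to_back"])
print(f"facing: {means['facing'].mean():.0f} ms, "
      f"back-to-back: {means['back_to_back'].mean():.0f} ms, "
      f"t = {t:.2f}, p = {p:.3f}")
```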

Card 4

How can perceived distance between people be affected by psychosocial factors?

Psychosocial factors significantly influence the perceived distance between individuals, which can differ from actual physical proximity. This falls under proxemics, the study of how people use and perceive interpersonal space, and involves several key factors:

  • Cultural Background: Cultural norms heavily dictate appropriate personal space. People from "contact cultures" (e.g., in Latin America or the Middle East) typically feel comfortable standing closer to others than people from "non-contact cultures" (e.g., in North America or Northern Europe), and these differences can lead to misinterpretations of distance.

  • Relationship Type: The perceived distance is largely dependent on the nature of the relationship. We naturally perceive less distance between ourselves and close friends, family members, or romantic partners compared to strangers or authority figures.

  • Emotional State: Current emotions can alter perceived distance. Feelings of anxiety, fear, or discomfort often increase the perceived distance as individuals seek more space, while positive emotions like happiness or excitement may decrease it.

  • Personality and Comfort Levels: Introverted individuals or those with a higher need for personal space generally perceive a greater distance as necessary for comfort than extroverts or those comfortable with close proximity.

  • Power Dynamics/Status: Hierarchical relationships affect perceived distance. Individuals in positions of authority or of higher social status often maintain more space from subordinates, which emphasizes the power differential. 

Card 5

What has been found about people facing vs. facing away and perceived distance?

Studies have found that people generally perceive individuals (or avatars) who are facing toward them as physically closer than those who are facing away, even when the actual distance is the same. 

This effect has been observed in both real-life and virtual reality settings, and several potential explanations for it have been explored: 

  • Action Tendencies: People maintain a larger actual distance from a person's front than their back in social interactions. This perception bias (seeing a facing person as closer) may serve an adaptive, self-preservation function by enhancing the apparent danger or potential for interaction, thus promoting quicker reactions or appropriate social distance behavior.

  • Attention Guidance: Observers tend to focus their attention on the front of a person or object. This closer focus of attention may result in shorter distance estimates compared to when their attention is directed to the back of a person facing away.

  • Social vs. Lower-level Processing: The effect seems to be driven by general body orientation (front vs. back) rather than specific social cues like eye gaze, which might suggest lower-level visual processing is involved. However, some research suggests the "social interaction hypothesis," where face-to-face dyads are processed as a single social unit, compressing the perceived distance between them.

  • Emotional/Anxiety Factors: Subjective factors influence distance perception. For instance, people with high social anxiety tend to perceive strangers as being closer than they actually are, which in turn predicts their preference for a greater physical distance. 

Card 6

How did Vestner et al. (2019) test spatial distortion (facing vs. facing away) using a spatial memory task?

The primary task involved participants judging or remembering the interpersonal distance between two individuals (dyads). 

  1. Stimuli: Participants were shown pairs of human silhouettes, arranged either face-to-face (facing) or back-to-back (facing away/non-facing).

  2. Procedure: In one version, participants were shown a dyad, followed by a blank interval, and then the same dyad again at a different location. They were asked to judge whether the inter-individual distance had changed from the initial presentation.

  3. Spatial Distortion Measure: The researchers found that facing dyads were remembered as being physically closer together than back-to-back dyads, consistent with a spatial memory distortion or "distance compression effect" (sketched below). This effect suggests that facing dyads are processed as an integrated social unit, compressing the perceived spatial representation between them.
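
The compression measure itself is simple: the signed error between remembered and actual inter-person distance, split by condition. A sketch with simulated values (the study's actual stimuli, units, and statistics are not reproduced):

```python
# Negative mean error = dyad remembered as closer than it really was.
import numpy as np

rng = np.random.default_rng(0)
actual = 100.0                                 # true inter-person gap (a.u.)
facing = actual + rng.normal(-8, 10, 40)       # remembered gaps, facing dyads
back_to_back = actual + rng.normal(0, 10, 40)  # remembered gaps, non-facing

print("facing error:      ", (facing - actual).mean())
print("back-to-back error:", (back_to_back - actual).mean())
```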

The inclusion of color, which was irrelevant to the main distance judgment task, was tested in a surprise memory task following the main experiment. This surprise test examined whether participants had enhanced memory for various features (including color) when they had previously processed the individuals as an "interacting" (facing) social unit.

  • Memory Retention: The results showed that memory retention of both group-relevant and group-irrelevant features (like color) was enhanced when recalling interacting partners. This indicates that processing a dyad as a social interaction leads to better overall feature binding and memory for the elements that form that social unit. 

Card 7

What did Vestner et al. (2019)’s results tell us about people in pairs?

Vestner et al. (2019)'s research indicates that pairs of people facing each other are detected faster in visual search tasks compared to pairs arranged back-to-back.

This "search advantage" stems from a general attention orienting mechanism guided by low-level directional cues, rather than a specialized system for processing social interactions.

  • Participants were better at remembering all details of interacting (facing) dyads than of non-facing dyads.

  • BUT they were better at discriminating correct from incorrect pairings when the foils came from a different pair only for facing dyads.

  • This suggests pairs are remembered together when they are seen as interacting.

Card 8

What is the two-body inversion effect?

The two-body inversion effect shows that our visual system processes pairs of facing bodies as single, structured units, much like individual bodies: facing pairs become harder to recognize when inverted, whereas bodies facing away are processed piece-by-piece and are less affected by inversion.

This effect reveals that spatial relationships between bodies (specifically facing each other) trigger specialized configural processing, treating the dyad as a whole, thus showing sensitivity to upright, interacting body configurations.

Card 9

How does empathy increase focus on people in dynamic scenes?

Empathy increases focus on people in dynamic scenes by triggering an automatic and sustained redirection of attention toward social and emotional cues, which is driven by a shared neural representation of others' experiences.

This enhances a person's ability to interpret and respond to the social environment.

  • Activation of Shared Neural Circuits: When observing others, particularly in emotionally charged or dynamic situations, the same brain networks responsible for experiencing those actions or emotions firsthand (e.g., the anterior insula and frontal operculum) are activated in the observer. This "mirroring" mechanism makes the observed person's experience personally relevant, which in turn captures attention.

  • Prioritized Social Attention: Individuals high in trait empathy (as measured by the Empathy Quotient or the Interpersonal Reactivity Index) show an attentional bias toward human figures and emotional faces compared to non-social stimuli. This suggests that social cues are given priority in visual processing.

  • Sustained Processing: High-empathy individuals not only initially fixate on social and emotional information but also maintain their attention for a longer duration. They are less likely to quickly disengage their gaze from an emotional face or a painful scene, allowing for more elaborate and sustained processing of the social information over time.

  • Information Integration: Arriving at a full empathic response in a complex, dynamic scene may require gathering multiple relevant cues (facial expressions, body posture, context) and integrating this information over time. This need for deeper social understanding motivates increased and sustained focus.

  • Motivational Significance: Empathy helps to assign motivational significance to social stimuli. Cues that predict an important social outcome (e.g., a partner's pain) elicit a stronger brain response related to attentive processing, effectively guiding attention according to perceived social needs and goals. 

Card 10

Why are interacting humans considered “special objects”?

Interacting humans are considered "special objects" of visual attention primarily due to their social salience and the rich, complex information their interactions convey.

Visual attention is a limited resource, and the brain prioritizes stimuli that are most relevant for survival and social functioning.

Interactions may be viewed and processed differently from independent agents:

  • Facing dyads:

    • Are recognized as human more quickly and show a greater inversion effect (Papeo et al., 2017; Papeo & Abassi, 2019).

    • Are processed faster and remembered better (Vestner et al., 2019).

    • Are found and processed more quickly in visual search (Papeo et al., 2019), though this may not be specific to human dyads (e.g., Vestner et al., 2020, 2022).

    • Draw more visual attention (Stagg et al., 2014; Skripkauskaite et al., 2022; Daughters et al., 2025).

Card 11

Very briefly, how does MRI work? (recap)

  • MRI machine applies powerful magnetic field.

  • Protons become oriented parallel to field.

  • Radio frequency (RF) pulse perturbs them.

  • MRI measures how long it takes protons to return to “normal state” (by detecting energy released).

  • Takes longer in some tissues than others

    • So they look different in the images (see the relaxation equation below)
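
For reference, the "return to normal state" above is textbook relaxation. A standard expression for longitudinal (T1) recovery after the RF pulse:

```latex
% Longitudinal (T1) recovery of magnetization after an RF pulse.
% Tissues differ in T1 (and T2), so they recover at different rates
% and therefore appear with different intensities in the image.
M_z(t) = M_0\left(1 - e^{-t/T_1}\right)
```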

Card 12

Very briefly, how does fMRI work? (recap)

  • Differences for functional imaging:

    • Focus on oxygenated vs deoxygenated blood.

      • More active brain area → more oxygen flowing through blood.

      • Oxygenated blood is less magnetic → bigger MR signal.

    • Measure time-course

      • Take image of brain every 1–3 sec

      • Look at changes over time

  • Collect an entire scan of the brain (in slices) every ~2 seconds (see the sketch below)
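
As a sketch of how this time-course logic is used in analysis, the expected BOLD signal for a task block is commonly modeled by convolving a stimulus boxcar with a canonical haemodynamic response function (HRF). The double-gamma parameters below follow common SPM-style defaults and are assumptions, not taken from the lecture:

```python
# Build a predicted BOLD time course: stimulus boxcar convolved with HRF.
import numpy as np
from scipy.stats import gamma

tr = 2.0                                 # one whole-brain volume every ~2 s
t = np.arange(0, 30, tr)                 # HRF support in seconds
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0  # peak ~6 s, late undershoot
hrf /= hrf.sum()

boxcar = np.zeros(120)                   # 120 volumes = 4 min at TR = 2 s
boxcar[10:20] = 1                        # one 20-s stimulus block
predicted_bold = np.convolve(boxcar, hrf)[:len(boxcar)]
print(predicted_bold.round(3))
```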

Card 13

What is a brain region of interest selective for social interaction?

The posterior superior temporal sulcus (pSTS) is a key brain region that shows robust and unique selectivity specifically for processing social interactions.

This selectivity is distinct from other social functions like face perception or theory of mind.

Card 14

Does modality impact the responsiveness of the pSTS?

  • Any region that is sensitive to auditory interactions (the pSTS?) should respond more to conversations than narrations (across languages?).

  • Should also be sensitive to the coherence of the conversation (at least in the understood language).

  • pSTS is:

    • Sensitive to the number of speakers (our big contrast; see the GLM sketch below)

    • Sensitive to coherence of interactions (our tighter contrast)

  • Not unequivocal evidence, but

    • Suggests the visually defined “interaction” pSTS area is also sensitive to interactive cues in the auditory modality.
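
A "contrast" here is a weighted comparison of regressors in a general linear model (GLM). A minimal, simulated sketch of the conversation-versus-narration comparison (regressor names and effect sizes are invented for illustration):

```python
# Fit a GLM to one voxel's simulated time course, then test the
# contrast conversation - narration. All values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 200
conversation = rng.integers(0, 2, n).astype(float)  # 1 during conversation blocks
narration = 1.0 - conversation                      # 1 during narration blocks
X = np.column_stack([conversation, narration])      # cell-means design matrix

# Simulated voxel that responds more strongly to conversations
y = 0.8 * conversation + 0.3 * narration + rng.normal(0, 1, n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # GLM fit
c = np.array([1.0, -1.0])                     # contrast: conversation - narration
print("contrast estimate:", c @ beta)         # > 0 -> prefers conversations
```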

Card 15

What are some key components of interactions?

Card 16

Describe the ‘visual social brain’.

The "visual social brain" refers to specialized brain networks, particularly in the posterior superior temporal sulcus (pSTS) and occipitotemporal cortex (EBA), that process the visual cues of social interactions, like body movements, facial expressions, and eye gaze, to understand others' intentions, emotions, and mental states (Theory of Mind).

It's a key part of the broader "social brain," integrating visual perception with complex social cognition to make sense of dynamic social scenes and predict behavior, forming a unique pathway for social perception distinct from object recognition.

Card 17

What did Abassi and Papeo (2020, 2021) find about the EBA?

Abassi and Papeo (2020, 2021) found that the extrastriate body area (EBA) is more uniquely engaged by static, face-to-face human interactions than the posterior superior temporal sulcus (pSTS). This suggests the EBA plays a crucial role in the initial visual detection of social cues, while the pSTS, though also involved, may be more sensitive to dynamic social interactions. The two regions work together in processing social scenes, with the EBA often showing stronger sensitivity to body-specific interaction cues.

Card 18

What has multivariate regression analysis revealed about the pSTS and EBA?

Multivariate regression analysis in functional neuroimaging studies has revealed that the posterior superior temporal sulcus (pSTS), extrastriate body area (EBA), and middle temporal visual area (MT) are highly interconnected and collectively contribute to specialized visual processing, such as body and motion perception.

Specialized Roles in Perception:

  • The pSTS is strongly associated with processing human actions and social cues, such as the direction a person is walking or their intentions, and maintains representations invariant to size or viewpoint changes.

  • The EBA primarily encodes the facing direction of a body (e.g., whether a person in a point-light display is facing towards or away from the viewer; see the decoding sketch below).

  • The MT (or MT+) is critically involved in general motion processing and encodes the direction of movement (e.g., walking direction in point-light displays).
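
A minimal sketch of the multivariate (pattern-based) logic, here using a classifier to decode facing direction from simulated voxel patterns. Real analyses use measured BOLD patterns, designs matched to the study, and sometimes regression rather than classification; everything below is illustrative:

```python
# Decode facing direction (towards vs. away) from simulated voxel patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_voxels = 80, 50
labels = np.repeat([0, 1], n_trials // 2)   # 0 = facing away, 1 = facing towards
patterns = rng.normal(0, 1, (n_trials, n_voxels))
patterns[labels == 1, :5] += 0.8            # weak condition signal in a few voxels

# Above-chance cross-validated accuracy = the region carries that information
acc = cross_val_score(LogisticRegression(max_iter=1000), patterns, labels, cv=5)
print("mean decoding accuracy:", acc.mean())
```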

Card 19

What is the recent proposal of a 3rd visual stream?

A recent proposal by David Pitcher and the late Leslie Ungerleider introduces a third visual stream in the primate brain, specialized for dynamic social perception.

This stream runs from early visual cortex into the superior temporal sulcus (STS) via motion-selective areas (V5/MT), complementing the two established "what" and "where" (or "how") pathways.

This new model updates the influential "two visual pathways" model (ventral "what" stream for object recognition and dorsal "where"/"how" stream for spatial processing and action) that has dominated neuroscience for decades.

While some researchers view it as a distinct pathway within the broader ventral stream, the body of evidence points towards it being a functionally and anatomically independent visual stream in its own right.

Card 20

Can we confirm the role of the EBA?

Current research confirms that the Extrastriate Body Area (EBA) plays a key role in coding the spatial relationships of body parts.

It primarily focuses on processing the visual perception of the human body and its parts, as well as integrating this information with motor actions and proprioception.

Card 21

What has fMRI found about hemispheric differences between the left and right EBA?

Functional magnetic resonance imaging (fMRI) studies have found that, although the extrastriate body area (EBA) is present in both hemispheres, the two sides are functionally specialized: the right EBA shows a general dominance in overall activation and in processing self-other identity, while the left EBA shows greater sensitivity to cues related to social interactions and potential action understanding.

Card 22

What is fMRI guided TMS?

fMRI-guided TMS (transcranial magnetic stimulation) is a personalized approach that uses functional MRI to pinpoint the exact brain regions or networks involved in a function or a patient's symptoms, and then uses TMS magnetic pulses to modulate activity in those specific areas. Unlike traditional TMS, which relies on general scalp or anatomical landmarks, this individualized targeting aims to improve network balance and function in conditions like depression, anxiety, or TBI.

It's a powerful tool for research and treatment, allowing scientists to see how stimulating one area affects the whole brain network in real-time.

Card 23

What did TMS to the left EBA show?

TMS to the left extrastriate body area (EBA) eliminated the two-body inversion effect (2BIE) for face-to-face dyads, showing that the left EBA causally supports the processing of social interactions: it is involved not just in representing individual bodies, but in detecting the 'unit' formed by interacting people.

This confirms the left EBA is crucial for holistic processing of social configurations, beyond just body parts.

  • TMS effects are site specific

  • TMS effects are category specific

  • Left EBA is causally necessary for encoding facing human dyads.

Card 24

How are social interactions perceived in early childhood?

  • Infants differentiate facing vs non-facing dyads very early (at least by 6 months).

  • By 14 months, selectively attend to interactions.

  • fNIRS evidence that the dmPFC (and STC and vmPFC) preferentially processes social interactions in infants (6–13 months).

  • Suggests sensitivity to interactive information develops early

Card 25

Do children and adults attend to social interactions differently?

  • For both children (aged 8–11) and adults:

  • Humans capture attention before other scene elements

  • Interactors capture attention faster

  • And hold attention for longer

  • Interaction understanding develops across childhood

Card 27

How is social interaction perception developed in the brain?

Social interaction perception develops in the brain through a network of regions, primarily the posterior superior temporal sulcus (pSTS), which selectively processes dynamic social cues like body movement, alongside areas for faces (fusiform gyrus), bodies (EBA), motion (MT+), and higher-level understanding (temporoparietal junction - TPJ).

This processing starts early in life, with the pSTS becoming increasingly specialized to interpret complex interactions (like cooperation vs. conflict) as we develop, integrating visual input to form a rich understanding of others' intentions and relationships.

  • Right pSTS is more selective than left.

  • Adults are more selective to interactions in the pSTS than children.

  • This difference is more pronounced in the left hemisphere.

  • Children show no interaction selectivity in the left hemisphere pSTS

Card 28

Explain how STS selectivity changes across development.

The superior temporal sulcus (STS) demonstrates early face-selective responses in infancy that become more focal, bilateral, and strongly tuned to complex social information (like social interaction and dynamic expressions) throughout childhood and adolescence.

Card 29

Do adults process interactions differently?

  • Interactions are processed “automatically”/perceptually in adulthood

  • Children need a network that involves social cognitive regions

  • Understanding interactions becomes more “perceptual” and “automatic” across development

Card 30

Is the pSTS sensitive to the content of interactions?

The posterior superior temporal sulcus (pSTS) is highly sensitive to the content of interactions, representing information about intentions, emotional tone (cooperating/competing), and the goals of social exchanges, not just basic body movement.

It helps differentiate helping vs. hindering actions, processes emotional context, and integrates visual and auditory cues to understand the "what" and "why" of social interactions, making it crucial for navigating our social world.

  • The pSTS responds differently to competitive vs. cooperative interactions

  • Interaction type can be decoded in the pSTS

Card 31

Give an overall summary.

  • Interactive information is present as early as age 3

  • The pSTS becomes more “tuned” and more selective across early development

  • One mechanism of “tuning” may include network changes

  • Interaction responses in STS are supported by connectivity to perceptual areas in adults and mentalising regions in children

  • May reflect different strategies for understanding interactions

  • Near ‘automatic’ inference from visual information in adults?

  • Children may need to use social reasoning to understand (many) observed interactions