top-down attention w/ example
using deliberate, conscious decision to pay attention to something
observer guided, endogenous attention
incurs switch costs when changing tasks bc of the assumption of limited information-processing capacity
“I want to eat a burrito” “I want to study for this test”
types of top-down attention
sustained attention/vigilance: focusing on one thing for entire duration
divided attention: multi-tasking, shown not to work well
drop in performance after switching tasks
selective attention: focus on one input while ignoring other inputs
have limited info processing resources, must prioritize what to process
bottom-up attention w/ example & significance
stimulus-guided automatic attention (exogenous attention)
ex: ambulance down the street
significance: important for survival, many cues have specialized processing regions in brain
fearful stimuli, our name, addictive stimuli
Inverted U: Yerkes Dodson Law
optimal arousal for best attention is medium level
depicts inverted U in graph: performance drops when there is too little arousal (bored) or too much (stressed)
perfect balance in between where still being challenged
optimal levels of norepinephrine
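The inverted U can be sketched as a downward-opening quadratic (a minimal Python illustration; the exact curve shape, peak location, and numbers are assumptions for the sketch, not a canonical formula from the law itself):

```python
# Illustrative only: the Yerkes-Dodson law has no single canonical equation;
# a downward-opening quadratic is a common way to sketch the inverted U.
def performance(arousal, optimal=0.5, peak=1.0, k=4.0):
    """Hypothetical performance as a function of arousal (both scaled 0..1)."""
    return peak - k * (arousal - optimal) ** 2

low = performance(0.1)   # under-aroused: bored
med = performance(0.5)   # medium arousal: optimal
high = performance(0.9)  # over-aroused: stressed
assert med > low and med > high  # the curve peaks at medium arousal
```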
spatial neglect definition & example
damage to R hemisphere, causes deficits in spatial attention & cannot attend to info in opposite side of lesion (L side)
ex: woman only drawing right side of daisies, no attention to left-sided details; she can still see the left visual field (neglect is an attention deficit, not blindness), but it's out of her awareness so it doesn't matter to her
affects all senses, not just vision: left side of world out of awareness (half of imaginations, memories, eats from one side of plate, reads only words on right side)
broadbent’s early selection filter model & dichotic listening tasks
filter info at level of perception, before info is processed for meaning
filter between sensory info (u can hear it) & semantic meaning processed
dichotic listening tasks:
ppl are played 2 messages at same time, one in each ear, & only have to recite (shadow) the message from one ear (the other left unattended)
don’t remember content of message they don’t have to recite bc they filter out sensory info before meaning processed
only remember physical details like the gender of the unattended voice, details that don't require deep processing
evidence against early selection filter model
unattended info can break thru certain situations
ex: cocktail party, u hear ur name thru dialogue of another party
the unattended info must have been processed for meaning for ur name to break thru
ex: shock/shadowing tasks: participants first get shocked paired w/ certain words, then those words are played in the unattended ear they aren't supposed to listen to
when a word that was paired w/ shock plays, their skin conductance increases, so it was processed for meaning
attenuator model
early filter attenuates/reduces rather than blocks unattended material, filter moves depending on info importance
meaningful info will not get blocked & make it thru— like hearing ur name in other conversation
helps explain cocktail party example
late selection filter models & example
we process input at the level of meaning, then select what we want to process for attention
seen thru stroop tasks
controlled task: naming the color of the ink, hard when word & ink mismatch (incongruent trial)
automatic task: reading the color word itself, straightforward when word & ink match (congruent trial)
people slower in incongruent trial than congruent trials bc meaning of words is processed automatically, supporting late selection filter model
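The congruent/incongruent distinction can be sketched in a few lines (the RT numbers are made-up illustrative averages chosen only to show the typical incongruent slowdown, not real data):

```python
def trial_type(word, ink):
    """Classify a Stroop trial; the task is to name the ink color."""
    return "congruent" if word.lower() == ink.lower() else "incongruent"

# Typical pattern (hypothetical mean RTs): reading the word is automatic,
# so a mismatching word interferes with naming the ink color.
mean_rt_ms = {"congruent": 650, "incongruent": 780}
assert trial_type("red", "red") == "congruent"
assert trial_type("red", "blue") == "incongruent"
assert mean_rt_ms["incongruent"] > mean_rt_ms["congruent"]
```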
load theory & examples
attentional filtering/selection can occur at diff points of processing
where filter is depends on how much of ur resources are being taken up
easy task, low load: you select ur attention at later level, (late selection filter model), process all info (unattended & attended) at same level of meaning
hard task, heavy load: you select ur attention at early level, filter out irrelevant info at level of perception so u retain ur needed resources for important attention
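Load theory's prediction about where the filter sits can be sketched as a simple conditional (the 0.5 load threshold is an arbitrary value for illustration, not a measured constant):

```python
# Sketch of load theory: the locus of the attentional filter depends on
# how much perceptual capacity the current task consumes.
def filter_locus(perceptual_load):
    """perceptual_load scaled 0..1; the 0.5 cutoff is purely illustrative."""
    if perceptual_load > 0.5:
        return "early (filtered at perception)"   # hard task: no spare capacity
    return "late (filtered after meaning)"        # easy task: distractors get processed

assert filter_locus(0.9).startswith("early")
assert filter_locus(0.2).startswith("late")
```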
central resource capacity view & applied example
there’s one resource pool where all attention resources are allocated, doesn’t matter how distracting task is/from what senses the task pulls
ex: driving simulation: listening to the radio (a diff sense) is predicted to be just as distracting as a visual distraction bc both pull from the same single pool
ex 2: listening to an audio book while solving a complex math problem is a difficult task even tho they use diff senses
multiple resource capacity view & example applied
attentional load depends on match between relevant & irrelevant (distracting) info
if pulling from same sense: will be more taxing on senses & will be more distracted
if pulling from diff senses: will be less taxing & have minimal effects on loads— makes multi-tasking hypothetically possible
ex: listen to people speaking in scanner
low load: note if its quiet or loud voice speaking
high load: pick out the 2-syllable words from a spoken list of mono- & trisyllabic words
a visual distraction was added; it had no effect on performance at either load bc it comes from a diff sense & doesn't compete for the same resources
change blindness & example
failure to notice changes in scenes when change occurs during brief disruption
ex: door study— person asks for directions then switches, half of ppl don’t notice diff person
ex 2: continuity errors in old tv (Wizard of Oz)
ex 3: flicker technique paradigm: 2 images alternate repeatedly w/ a blank in between, most ppl are very slow to notice the difference
inattentional blindness & example
failing to notice smthing new/unexpected in environment you’re attending to
ex: moon walking bear video
deer jumps in front of car but you don’t notice even though you’re looking at road
activating attention
attention selectively focuses on a specific region of space, like a spotlight
shifting attention involves disengaging & reorienting to another space
posner cueing task— explain SOA phenomena & inhibition of return
SOA: time btwn start of cue & start of target
short SOA: faster valid trials, attention is already there
longer SOA: slower valid trials; you've alr looked away from that spot & are attending elsewhere (inhibition of return)
IOR: demonstrates that at longer delays btwn cue & target, attention is inhibited from returning to a recently attended location if nothing has shown up there
we will move our eyes away to look for possible new areas
ex: helps when looking for a candy bar in an aisle: we efficiently scan without checking the same place twice
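The Posner cueing pattern can be sketched as a function of SOA (the 300 ms crossover point is a rough textbook-style value used as an assumption here, not an exact constant):

```python
# Sketch of the Posner cueing RT pattern across cue-target SOAs.
def faster_trial_type(soa_ms, ior_onset_ms=300):
    """Return which trial type has shorter RTs at this SOA."""
    if soa_ms < ior_onset_ms:
        return "valid"    # attention is still at the cued location: facilitation
    return "invalid"      # inhibition of return: cued location is now suppressed

assert faster_trial_type(100) == "valid"    # short SOA: valid trials faster
assert faster_trial_type(800) == "invalid"  # long SOA: IOR, invalid trials faster
```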
definition of pre attention phase & what ideas r encapsulated in this for integrating features
bottom-up, automatic processing of features
explains pop-out effect: time taken to find an item that differs by one distinct feature is independent of the # of items present, but only for features processed automatically in V1
it just pops out 😂
feature search: search for an object that differs by 1 feature is automatic & relies on bottom-up processing
focused attention phase definition & ideas encapsulated
object features are integrated to guide search thru top-down, voluntary attention
conjunction search: search for object that differs from distractors based on many features
as # of distractors increase— reaction time is longer
ex: where’s Waldo
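The set-size effect separating the two search types can be sketched with a simple linear RT model (the base RT and ms-per-item slope are hypothetical values for illustration, not measured ones):

```python
# Sketch of visual-search RT as a function of display set size.
def search_rt_ms(set_size, search="feature", base=450):
    # Feature (pop-out) search: flat slope, RT independent of set size.
    # Conjunction search: RT grows with each added distractor (serial-like scan).
    slope = 0 if search == "feature" else 30  # ms per item (hypothetical)
    return base + slope * set_size

assert search_rt_ms(4) == search_rt_ms(32)                        # pop-out: flat
assert search_rt_ms(32, "conjunction") > search_rt_ms(4, "conjunction")
```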
overt visual attention
attending to smthing w/ eye movement
covert visual attention
attending to smthing w/o eye movement
cultural differences relating to attention in East Asian students vs Western students looking at image
east asian: pay more attention to background & contextual details
western: pay more attention to central/focal objects
why is multi-tasking/divided attention proven to not fully work
divided attention requires switching tasks, a process that isn’t automatic
task switching → drop in performance; mental sets (ways of thinking) need to change