conceptual knowledge
knowledge that enables us to recognize objects and events and to make inferences about their properties
concept
mental representation of a class or individual
categorization
the process by which objects are placed in categories; a category includes all possible examples of a particular concept
what do categories allow us to do?
enable us to understand individual cases not previously encountered; without them we'd be constantly mystified/helpless
what are categories?
pointers to knowledge that provide lots of general information from which we can focus on unique characteristics
definitional approach to categorization
determine category membership based on whether the object meets the definition of the category
major problem with definitional approach
not all (commonly agreed) category members actually meet the definition—e.g., "chairs"
Wittgenstein's Family Resemblance
Category members resemble one another in various (but not all) ways; led to the prototype idea, in which membership is judged by reference to a typical category member
prototype approach to categorization
membership in a category is determined by comparing the object to a prototype that represents the category
prototypes are usually
highly typical members (ex. robin for "bird" instead of penguin)
strong relationship between prototypicality and family resemblance
Rosch and Mervis: participants listed common features of various objects; highly typical category members shared many features with other members, while atypical members shared few
typicality effects
exemplars that are more average or normal for a given category are likely to be listed first when people are asked to name exemplars of that category, and are more rapidly verified as category members
exemplar approach to categorization
the approach to categorization in which members of a category are judged against exemplars, examples of members of the category that the person has encountered in the past
Advantages of exemplar approach
- Easily takes atypical cases into account by remembering unique cases instead of always using an "average" (for example, some birds don't fly)
- Easily deals with variable categories, which have no obvious average: "games" includes lots of different examples (video games, football, chess)
- Can be used along with prototypes (see the sketch below)
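The prototype/exemplar contrast above is essentially computational, so here is a minimal sketch of the two comparison rules. The feature vectors, bird names, and distance-based similarity measure are illustrative assumptions, not from the lecture:

```python
# Sketch: prototype vs. exemplar categorization (illustrative features/instances)
import numpy as np

# Features: [has wings, flies, sings, swims]
BIRDS_SEEN = {                      # exemplars encountered in the past
    "robin":   np.array([1.0, 1.0, 1.0, 0.0]),
    "sparrow": np.array([1.0, 1.0, 1.0, 0.0]),
    "penguin": np.array([1.0, 0.0, 0.0, 1.0]),
}

def prototype_similarity(item):
    """Similarity to the category prototype (the averaged 'typical' member)."""
    prototype = np.mean(list(BIRDS_SEEN.values()), axis=0)
    return -np.linalg.norm(item - prototype)          # higher = more similar

def exemplar_similarity(item):
    """Similarity to the single closest stored exemplar."""
    return max(-np.linalg.norm(item - ex) for ex in BIRDS_SEEN.values())

new_penguin = np.array([1.0, 0.0, 0.0, 1.0])
print(prototype_similarity(new_penguin))   # penalized: far from the "average bird"
print(exemplar_similarity(new_penguin))    # ~0: matches a remembered penguin exactly
```

The exemplar rule handles the atypical case gracefully because a specific remembered penguin matches the new one, while the prototype rule penalizes it for being far from the "average bird."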
categories are hierarchically organized
Categories are hierarchically organized from more general (superordinate, e.g., "furniture") through basic (e.g., "table") to more specific (subordinate, e.g., "kitchen table")
Evidence that Basic-Level Is Special
-Going above basic level results in a large loss of information
-Going below basic level results in little gain of information
semantic networks
Maybe concepts are arranged in (semantic) networks that represent how they are organized in the mind
Hierarchical semantic network model
a model of semantic memory organized in terms of nodes and links; shared properties are stored at the highest relevant node to achieve cognitive economy
cognitive economy
shared properties are only stored at higher-level nodes
Hierarchical semantic network model prediction
verifying properties should take longer the more "nodes" must be traversed (longer "distance"); e.g., "a canary is an animal" should take longer to verify than "a canary is a bird"
spreading activation
when concept presented, relevant node activated; when node activated, activity spreads among all connected links (semantically related concepts, properties, etc.)
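Because the hierarchical network, cognitive economy, and spreading activation cards describe a concrete data structure and traversal process, a small sketch may help. The node names, property lists, and the 1/(1+distance) activation decay are illustrative assumptions, not the model's actual parameters:

```python
# Sketch: hierarchical semantic network with cognitive economy and spreading activation
from collections import deque

ISA = {"canary": "bird", "ostrich": "bird", "bird": "animal", "pig": "animal"}
PROPERTIES = {
    "animal":  {"has skin", "eats"},
    "bird":    {"has wings", "can fly"},      # stored once, inherited by all birds
    "canary":  {"is yellow", "can sing"},
    "ostrich": {"can't fly"},                 # exception stored locally
}

def verify(concept, prop):
    """Return the number of ISA links traversed to find the property
    (the model predicts longer RTs for larger distances), or None."""
    node, distance = concept, 0
    while node is not None:
        if prop in PROPERTIES.get(node, set()):
            return distance
        node = ISA.get(node)
        distance += 1
    return None

def spread_activation(start):
    """Breadth-first spread from an activated node; nearer (more related)
    nodes receive more activation, which is why related words are primed."""
    links = {}
    for child, parent in ISA.items():
        links.setdefault(child, set()).add(parent)
        links.setdefault(parent, set()).add(child)
    activation, level, frontier = {start: 1.0}, {start: 0}, deque([start])
    while frontier:
        node = frontier.popleft()
        for neighbor in links.get(node, ()):
            if neighbor not in activation:
                level[neighbor] = level[node] + 1
                activation[neighbor] = 1.0 / (1 + level[neighbor])  # decays with distance
                frontier.append(neighbor)
    return activation

print(verify("canary", "can sing"))   # 0 links -> fastest to verify
print(verify("canary", "has skin"))   # 2 links -> slowest to verify
print(spread_activation("canary"))    # "bird" more active than "pig"
```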
"lexical decision" priming task
present pairs of letter strings (words or nonwords); when both are words, some pairs are related and some aren't. Result: faster RTs for related word pairs, which shows spreading activation (modern versions often present a single word, then a second string that is either related or unrelated)
criticisms of Hierarchical semantic network model (Collins and Quillian)
Cannot explain some typicality effects (some members are verified faster even when the node distance is the same);
little evidence for cognitive economy/inheritance (e.g., "has wings" also seems to be stored at the canary node);
some sentence-verification results are problematic ("a pig is an animal" is verified faster than "a pig is a mammal," even though "mammal" should be the closer node)
connectionist approach to categorization
connectionism is the approach to creating computer models for representing cognitive processes (aka parallel distributed processing)
linked units
Input units: activated by stimulation from environment
Hidden units: receive input from input units
Output units: receive input from hidden units
connection weights
determine how much activation one unit can pass on to another unit
How do connectionist networks learn?
- don't have knowledge "programmed in"
- so...typically begin with equal or random connection weights
- then, "train" network over many trials
- a stimulus is presented to elicit a network response; the correct response is provided and an error signal is generated
Back-propagation
process wherein the error signal is transmitted back through the circuit and connection weights are adjusted to reduce it; the process repeats until the error signal is zero (or minimal)
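Since these cards describe an algorithm (forward activation through linked units, then back-propagation of an error signal), here is a minimal runnable sketch. The feature set, category labels, layer sizes, learning rate, and number of training trials are illustrative assumptions, not the lecture's model:

```python
# Sketch: tiny connectionist network trained by back-propagation (illustrative)
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Input units are activated by stimulus features: [has wings, has fins, flies, swims]
X = np.array([
    [1, 0, 1, 0],   # robin
    [1, 0, 1, 0],   # canary
    [1, 0, 0, 1],   # penguin (atypical bird)
    [0, 1, 0, 1],   # trout
    [0, 1, 0, 1],   # shark
], dtype=float)
y = np.array([[1], [1], [1], [0], [0]], dtype=float)  # 1 = bird, 0 = fish

# Start with random connection weights -- no knowledge "programmed in"
W1, b1 = rng.normal(size=(4, 3)), np.zeros((1, 3))    # input -> hidden
W2, b2 = rng.normal(size=(3, 1)), np.zeros((1, 1))    # hidden -> output
lr = 0.5

for _ in range(10000):                     # "train" the network over many trials
    hidden = sigmoid(X @ W1 + b1)          # hidden units receive input-unit activation
    output = sigmoid(hidden @ W2 + b2)     # output units receive hidden-unit activation
    error = y - output                     # error signal: correct minus actual response

    # Back-propagation: send the error signal back through the circuit
    # and nudge each connection weight to reduce it
    d_out = error * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 += lr * hidden.T @ d_out
    b2 += lr * d_out.sum(0, keepdims=True)
    W1 += lr * X.T @ d_hid
    b1 += lr * d_hid.sum(0, keepdims=True)

print(np.round(output.ravel(), 2))         # approaches [1, 1, 1, 0, 0] as error shrinks
```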
promise of the connectionist approach
Much success in simulating many cognitive processes, and seems analogous to real brains/neurons: can explain generalization of learning and exhibits graceful degradation (disruption of performance occurs gradually as parts of the system are damaged)
how are concepts represented in the brain?
maybe various brain areas specialized to process information about different categories
sensory-functional hypothesis
derived from the finding that some brain-damaged individuals have trouble categorizing animals but not artifacts (or the reverse); perhaps we categorize animals by sensory information and artifacts by function, but the evidence is inconsistent
semantic category approach
proposes that there are specific neural circuits in the brain for some specific categories; not on sensory vs functional basis
multiple factors approach (property cluster)
examines how concepts are differentiated from each other as a function of various kinds of properties, rather than identifying specific brain areas/networks for existing concepts; asks how properties cluster within various categories; "crowding" of shared properties may differentiate some concepts
Embodied approach to categorization
our knowledge of concepts is based on reactivation of sensory and motor processes that occur when we interact with the object
Mirror neurons (example of perception/action connections)
fire when we do a task or when we observe another doing that same task
Semantic somatotopy
correspondence between words related to specific parts of the body and the location of brain activity associated with that part of the body; however, the corresponding impairments are not always observed after damage
Hub & Spoke model
proposal that different areas of the brain are connected to the anterior temporal lobe; some patients with anterior temporal lobe damage have semantic dementia
trouble with identifying all objects, not just particular categories
the ATL integrates info from more specialized category areas
Pobric et al. (2010)
- Used TMS to stimulate ATL or parietal (spoke)
- When TMS impaired ATL, trouble naming both artifacts and living things
- When TMS impaired certain parietal area, only trouble naming artifacts
Mental imagery
experiencing a sensory impression without sensory input; have imagery in various sense modalities (hearing, taste, etc.)
Visual imagery
"seeing" in absence of visual stimulus; another important form of cognition--nonverbal
Wundt and Imagery
proposed 3 basic elements of consciousness: sensations, feelings, and images
Imageless thought debate
debate about whether thought is possible without mental images
Imageless thought debate - Aristotle
claimed thought impossible w/out images
Imageless thought debate - Galton
noted some w/lousy visual imagery think just fine
Imageless thought debate - Smallwood
good evidence for imageless thought, uses random prompts for participants to note ongoing mental processes; sometimes imageless thought reported
Paivio (1963, 1965)—early cognitive paradigm used paired-associate learning
study pairs of words; first word used as recall cue at test; varied whether words were concrete or abstract
imagery easier w/concrete (vs. abstract) words;
result is that memory was better for concrete words
"Conceptual-peg" hypothesis
concrete words allow forming visual images that other words, items, etc. can "hang onto"
Mental rotation
vary the angle of a comparison shape relative to a target and measure RT to judge whether they are the same shape
Result: RT increased linearly w/angle (to max--180°)
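The "linear" result can be written as a simple model (a and b are hypothetical placeholders, not values reported in the study):

$$RT(\theta) \approx a + b\,\theta, \qquad 0^\circ \le \theta \le 180^\circ$$

where a is the baseline response time and b is the extra time per degree of mental rotation.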
Do imagery and perception share the same mechanisms?
If so, perhaps we "scan" visual images in similar fashion to how we "scan" (actually) seen objects
Kosslyn (1973)—classic study
Memorize picture (boat), create an image of it
Participants first focused on one part of the boat, then asked to "look" for another part—measured RT for verifying
Result: Longer RT to check "longer distances" in image
Early support for similar imagery/perception mechanisms
Lea (1975)
More distractions when scanning longer distances may have increased reaction time
Kosslyn et al. (1978)
Island with 7 locations, 21 trips
It took longer to scan between greater distances
Visual imagery is spatial (like perception)
"imagery debate"
Is imagery spatial or propositional?
Pylyshyn (1973)
Spatial representation is an epiphenomenon
Accompanies real mechanism but is not actually a part of it
Proposed that imagery is propositional
Can be represented by abstract symbols
Pylyshyn (2003)
Kosslyn's results can be explained by participants unconsciously using real-world knowledge
Tacit-knowledge explanation
Finke and Pinker (1982)
First, briefly presented display w/four dots
Then, second display w/arrow appears
Participants judge whether arrow points to dots previously seen
Not instructed to use visual imagery
No time to memorize, no (prior) tacit knowledge
Key result: Longer RT when greater distance between arrow and (previous) dot
supports mental spatial/"traveling" imagery idea
Relationship between viewing distance and ability to perceive details
When viewing a small object next to a large object,
it is quicker to detect details on the larger object (it fills more of the visual field)
Kosslyn (1978): tried this with imagery
Imagine two animals (e.g., rabbit/elephant or rabbit/fly)
"Zoom in" or "out" until larger animal fills visual field
Then...(critical task):
Ask questions about rabbit features (e.g., have whiskers?)
Result: faster RTs when rabbit "large" vs. "small"
Also... "mental-walk" task w/single imagined animals
"Zoom in" in imagination until animal fills visual field, estimate distance
Result: Move closer to small animals than to large animals
Again, supports the idea that images are spatial, like perception
Do perception and imagery interact?
perhaps share same/similar mechanisms
Perky (1910)
projected faint images, participants imagined same object
- reported images closely resembled projected ones
- participants didn't realize projected images were present
- so...images and actual visual stimuli seem confusable/similar
Farah (1985)
- participants initially imagined "H" or "T", then either H or T actually presented (briefly), accuracy calculated
- Result: better performance when prior image matched actual stimulus
- again suggests that imagery and actual perception closely related
"imagery debate" resolution
Not quite: still possible to alternatively explain many results as using tacit knowledge, etc.
For example: "Zoom-in/zoom-out" studies
BUT: perhaps neuroimaging approaches will aid progress
Kreiman et al. (2000)
Record individual neuron responses to perceiving vs. imagining object—same neurons respond
Le Bihan et al. (1993)—fMRI study
Both real & imagined (visual) stimuli activate similar areas in visual cortex
what do some studies suggest?
perception vs. imagery differences (i.e., only partial overlap)
Ganis and coworkers (2004)--fMRI
again, compare real vs. imagined visual stimuli
very similar activation for both in front & middle brain
BUT: much stronger activation for real stimuli in visual cortex (back of brain)
Amedi et al. (2005)—fMRI
as usual, various similar activations for real vs. imagined
BUT: w/imagined, less activity for other sensory areas
consistent w/imagery being more fragile, need to minimize interference
But...fMRI correlations w/imagery don't definitively show causal relationship (even though very suggestive)
So...would be helpful to manipulate relevant brain areas
Kosslyn et al (1999):
Used transcranial magnetic stimulation (TMS)
- TMS applied to visual brain area during both perception and imagery task
Results:
- RT slower for both tasks when TMS applied to visual area, no effect for either task when applied to control brain area
- Suggests visual area brain activity plays causal role for both perception and imagery
Farah (2000): M.G.S.--patient w/removed right occipital lobe
visual brain area damage decreased both perceptual visual field and size of image visual field
so...in "mental walk", horse filled up imagined visual field from further away
again, strong perception/imagery relationship
Bisiach & Luzzatti (1978): patient w/unilateral neglect
Depending on whether patient imagined familiar location from one end or the other, "left" side was neglected
again, strong perception/imagery link
Guariglia et al (1993)
patient with unilateral neglect, but only with images (!)—perception OK
Farah et al. (1998)
patient "R.M."—damage to occipital and parietal lobes
could recognize and draw objects presented to him
but—couldn't draw from memory (which requires imagery)
Behrmann et al. (1994)
C.K.—patient w/visual agnosia
couldn't visually recognize real objects, but could draw them from imagery
what to make of neuropsychological results
much evidence suggesting same/shared mechanisms for perception and imagery
but—also good evidence that perception and imagery are dissociable, suggesting separate mechanisms
Behrmann et al. (1994)—suggested solution
Perception and imagery mechanisms partially overlap
Visual perception involves bottom-up processing; located at lower and higher visual centers
Imagery is a top-down process; located at higher visual centers (only)
Behrmann application to C.K., R.M., and M.G.S.'s dissociation patterns
- C.K.: lower-level visual damage left (higher-level) imagery OK
- R.M.: higher-level damage impaired imagery but not visual perception
- More trouble explaining M.G.S., who still had an imagery problem despite having only lower visual center damage
Chalmers and Reisberg (1985)
Had participants create mental images of ambiguous figures
Result: difficult to flip from one interpretation to another while holding only a mental image of the figure
practical uses of imagery
tool to improve memory
method of loci - placing images at locations
Visualizing items to be remembered in different locations in a mental image of a (familiar) spatial layout
Pegword technique--Associating to-be-remembered words w/images
similar to method of loci, but use standard words rather than locations (e.g., one-bun; two-shoe, three-tree, etc.)
form a visual image of each to-be-remembered word along with the "pegword" from your standard list (e.g., to remember "milk" as item one, imagine a bun soaked in milk)
Harvey et al. (2005)—several results
- Groups either imagined favorite food or favorite vacation.
- Result: Food craving increased for food imagining group
- Later, food-imagining participants imagined either nonfood visual images or nonfood auditory images
- Food craving decreased in both groups, more for nonfood visual group
- Consistent w/ Baddeley's WM model (i.e., more interference w/visual nonfood imagery)
Kemps & Tiggemann (2013)
Participants looked at phone app w/random visual dots whenever felt food craving
Food cravings, actual consumption went down
Note: error in text—the random dots interfere w/visuospatial sketchpad, not phonological loop
problem
an obstacle between a present state and a goal; not immediately obvious how to get around the obstacle, so the problem is (obviously) difficult
Key Gestalt problem-solving framework
-First, ascertain how problem is represented in mind
-To solve, generally need to restructure problem (i.e., change problem representation)
-Classic example: Kohler's "circle problem"
importance of insight in solving problems
Insight: sudden realization of problem solution
Often requires restructuring the problem
if insight occurs
shouldn't experience much "warning" prior to insight/solution; also...noninsight problems should yield "warning"
Metcalfe and Wiebe (1987)
- Insight: triangle problem, chain problem
- Noninsight: algebra
- Warmth judgments every 15 seconds
- Insight problems solved suddenly
- Noninsight problems solved gradually
Candle problem
Mount candle on wall so it can burn but not drip on floor
Two-string problem
Given a chair and pliers, how to connect two strings hanging too far apart to reach both at once?
Duncker's (1945) candle problem
Only c. 50% solved; much better (c. 90%) when box empty (matches separate) rather than matches in box
Maier's (1931) two-strings problem
Less than 50% solved; much better (c. 80%) when "hint" given ("accidentally" hit strings)
Central problem
Fixation—focus on aspect of problem that prevents arriving at (different) solution
common form of central problem
Functional fixedness--restricting use of an object to familiar functions
functional fixedness in candle problem
seeing boxes as containers inhibited using them as supports
functional fixedness in two-string problem
usual function inhibits seeing them as possible weights
mental set
A preconceived notion about how to approach a problem
Based on past experiences with similar problems
another (unhelpful) mental set
Applying past solution approaches to (even very similar) additional problems may inhibit use of better solutions
Water-jug problem (Luchins, 1942)
three jugs, hold different quantities of water
task: obtain desired amount by pouring water back and forth
Result: the (successful) method for earlier problems carried over to final problems, even though the latter had simpler solutions
Information-Processing Approach
Influential modern information-processing approach:
Newell and Simon (1972)
Models problem solving as a search (for solution)
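Newell and Simon's search framing can be made concrete with a small sketch; a water-jug puzzle (like Luchins') is a natural state space. The jug capacities, goal amount, and breadth-first strategy below are illustrative assumptions, not Luchins' original problems:

```python
# Sketch: problem solving as search through a state space (illustrative jug puzzle)
from collections import deque

def solve(capacities, goal):
    """Breadth-first search from the all-empty state to any state that
    contains the goal amount; returns the sequence of states on the path."""
    start = (0,) * len(capacities)
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if goal in state:                         # goal state reached
            path = []
            while state is not None:              # walk back to the start state
                path.append(state)
                state = parent[state]
            return path[::-1]
        successors = []
        for i in range(len(capacities)):
            # operator 1: fill jug i from the tap; operator 2: empty jug i
            for amount in (capacities[i], 0):
                successors.append(state[:i] + (amount,) + state[i + 1:])
            # operator 3: pour jug i into jug j until i is empty or j is full
            for j in range(len(capacities)):
                if i == j:
                    continue
                amounts = list(state)
                poured = min(amounts[i], capacities[j] - amounts[j])
                amounts[i] -= poured
                amounts[j] += poured
                successors.append(tuple(amounts))
        for nxt in successors:
            if nxt not in parent:                  # avoid revisiting states
                parent[nxt] = state
                queue.append(nxt)
    return None

# e.g., with jugs of (hypothetical) capacities 8, 5, and 3, measure out 4 units
print(solve(capacities=(8, 5, 3), goal=4))
```

Each state is the amount of water in each jug; operators (fill, empty, pour) generate new states, and the search stops when a state contains the goal amount.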