recurring themes in cogsci
computation, levels of analysis/explanation, tacit knowledge, unconscious processing, modularity, innateness, rationality, dual processes
computational approach
how the brain uses algorithms to solve functional problems, like the rules of binary addition in a calculator
David Marr's three levels of analysis/explanation
1. functional = problem the capacity is supposed to solve (language - mapping from sounds to meanings; face recog - mapping from faces to names; perception - inverse optics)
2. algorithmic = procedures that enable the problem to be solved (syntax trees, feature representations, bayesian updating)
3. physical = the neural/chemical substrates in which the procedures are implemented (different parts of the brain, the optic system)
approach a problem integrating all these levels
tacit knowledge
things you know but can't readily articulate
ex. contracted "is" - we know it can only be contracted on the left side of a phrase, but we couldn't state the rule on the spot
unconscious processing
things your mind does without your awareness
ex. bayes rule, syntax trees, reading emotions, making decisions, moral judgement
modularity
functional specialization within the mind/brain
analogy: organs of the body
visual, auditory, language processing, cheater detection, emotional, face processing, and other mechanisms
unconscious processing + modularity : blindsight
there are multiple pathways for vision, not all conscious
even though the patient lost conscious vision, some unconscious visual pathways still function
innateness
where do the contents of the mind come from?
role in infant cognition (intuitive physics) and language (NSL)
rationality
reasoning correctly, making good (or bad?) decisions
kahneman and tversky
two programs:
1. heuristics and biases program = people are bad at logical reasoning, probability, and statistics, so we make judgements w/ heuristics: shortcuts that are fast/easy/error prone
2. neuroeconomics program = the brain contains sophisticated mechanisms for rapidly and accurately doing logical reasoning, probability, and statistics
dual process models
fast, automatic, effortless processing
versus
slow, controlled, effortful processing
stroop task
moral judgement - trolley problem
stroop task
tests dual process models
easier to say the ink color when it matches what the word actually says
when they are mismatched, we must put effort into processing the ink color, not the words themselves
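to make the congruent/incongruent manipulation concrete, here's a minimal python sketch of a stroop trial generator (my own illustration, not the actual task code; names like make_trial are hypothetical):

```python
import random

# toy stroop trial generator: the dual-process prediction is that naming the
# ink color is slower and more error-prone on incongruent trials
COLORS = ["red", "green", "blue", "yellow"]

def make_trial(congruent: bool) -> dict:
    word = random.choice(COLORS)
    ink = word if congruent else random.choice([c for c in COLORS if c != word])
    return {"word": word, "ink": ink, "congruent": congruent}

for t in (make_trial(True), make_trial(False)):
    kind = "congruent" if t["congruent"] else "incongruent"
    print(f"say the INK color: '{t['word'].upper()}' printed in {t['ink']} ({kind})")
```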
functional level of visual perception
the inverse optics problem
how does the brain see?
the eye doesn't see. the brain sees. the eye just transmits.
the "inner picture" theory of perception
says the eyes are a window to external reality that recreate the world in the head like a camera --> light reflects onto the lens and creates an image
it's an intuitive take but not sufficient
because who is perceiving the inner picture? a little man? it's an infinite regress problem
our visual system isn't exactly like a camera: perception isn't a direct window to objective reality, it's an active, inferred construction by the brain about the nature of the world
functions
mappings from inputs to outputs
well-specified = one-to-one mapping from input to output
underspecified = one input maps to many possible outputs, so you have to sort between multiple candidates; inverse optics is like this
what kind of problem is perception?
an iterated inference problem, underdetermined/underspecified
steps of the perception function from a 2d retinal input to a 3d representation of a scene
sensory input --> color, shading, texture, contours --> shapes separated from the background --> 3d representation of scene --> conceptual representation: "living room"
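a stub-only python sketch of that pipeline (my own framing; the stage functions are hypothetical placeholders, just to show that each stage's output is the next stage's input):

```python
# perception as iterated inference: composed stages from 2d input to concept

def extract_features(retinal_image):   # color, shading, texture, contours
    return {"contours": [], "shading": []}

def segment_figures(features):         # shapes separated from the background
    return ["figure", "ground"]

def build_3d_scene(figures):           # 3d representation of the scene
    return {"objects": figures, "layout": "3d"}

def categorize(scene):                 # conceptual representation
    return "living room"

def perceive(retinal_image):
    return categorize(build_3d_scene(segment_figures(extract_features(retinal_image))))

print(perceive("2d pattern of light on the retina"))  # -> "living room"
```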
probabilistic inference
prior probabilities + evidence + assumptions = inference
many ordinary inferences involve hidden assumptions that we use w/out thinking, they make reference to possible hypotheses
the perceptual inference problem
how do we get from the input (a 2d pattern of light on the retina) to the output (a 3d representation of reality)?
this problem is underdetermined: there are multiple scenes that could lead to the same retinal representation
only way to solve is with ASSUMPTIONS
who distinguished forward and inverse optics
steven pinker
forward: 3d --> 2d
inverse: 2d --> 3d
input: retinal image
output: specification of the objects in the world and what they are made of
inverse optics has no solution, but our brain does it so easily. how? with assumptions, brain filling in missing info
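a toy python illustration (mine, not from pinker) of why the inverse problem has no unique solution: forward projection is many-to-one, so inverting it without assumptions is underdetermined:

```python
# forward optics: a pinhole projection maps 3d points onto a 2d image plane

def project(x, y, z, focal=1.0):
    """project a 3d point (x, y, z) to 2d image coordinates."""
    return (focal * x / z, focal * y / z)

# every point along one line of sight lands on the same retinal coordinate,
# so the 2d image alone cannot tell these scenes apart
for z in (1.0, 2.0, 5.0):
    print(project(1.0 * z, 0.5 * z, z))  # prints (1.0, 0.5) every time
```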
inferring shape from shading
assumption: there is a single overhead light source
makes predictable lighting patterns
given this assumption, things lighter on top and darker on bottom are bubble-like
and things darker on top and lighter on the bottom are hole-like
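a minimal sketch of how the overhead-light assumption could be applied (my own toy rule, not an actual vision algorithm):

```python
# classify a shaded patch under the assumption of a single overhead light source

def classify_patch(top_luminance: float, bottom_luminance: float) -> str:
    if top_luminance > bottom_luminance:
        return "convex (bubble-like): lit on top, shaded on the bottom"
    if top_luminance < bottom_luminance:
        return "concave (hole-like): shaded on top, lit on the bottom"
    return "ambiguous"

print(classify_patch(0.8, 0.3))  # convex
print(classify_patch(0.3, 0.8))  # concave
```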
inferring color from shading
infer that shadows from a single overhead light source make things darker
the color I perceive in shadow isn't accurate, so I'll adjust --> color appears lighter
the perception of apparent motion
when the motion of an intermittently seen object is ambiguous, the visual system resolves confusion by applying some tricks that reflect a built-in knowledge of properties of the physical world
ramachandran and anstis
straight line assumption
things tend to move in a straight line
linear motion in preference to perceiving abrupt changes in direction
think of this preference as built in knowledge of the relative probabilities of two competing hypotheses about motion
ex. moving dots see saw video
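a toy sketch (mine) of the straight-line preference as a comparison between two motion hypotheses; the one with the smaller turn angle behaves like the higher-prior hypothesis:

```python
import math

def turn_angle(p0, p1, p2):
    """absolute change in direction (radians) at p1 along the path p0 -> p1 -> p2."""
    a1 = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    a2 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    return abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))

p0, p1 = (0, 0), (1, 0)
candidates = {"straight continuation": (2, 0), "abrupt turn": (1, 2)}
for name, p2 in candidates.items():
    print(f"{name}: turn = {turn_angle(p0, p1, p2):.2f} rad")
# the visual system resolves the ambiguity as if the low-turn path were more probable
```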
rigidity assumption
all points on a moving object are assumed to move in synchrony
aka one point on an object moving means they all are
ex. leopard
correspondences suggesting that the leopard's spots can fly off in different directions aren't even considered
ex. yellow dot video
occlusion assumption
follows from the first two put together
a moving object will progressively cover and uncover portions of a persisting background
ex. dots appear to jump w/ the square; the brain perceives the square sliding over a persisting background
optical illusion explanation
Helmholtz's Principle of Unconscious Inference
we perceive objects and events that under normal conditions would be most likely to produce the received sensory stimulation
brain hates coincidences
the algorithmic piece of inverse optics
bayes rule
who created bayes theorem?
thomas bayes (1702-1761)
english mathematician and presbyterian minister
famous for "An Essay towards solving a Problem in the Doctrine of Chances"
bayes rule
tells us how to reason about an uncertain world
tells us how to update our beliefs on the basis of new evidence
a way to formalize both prior knowledge and learning
assign probabilities to some hypotheses --> see some new data --> update the probabilities assigned to these hypotheses
P(hi|d) = P(d|hi)P(hi) / Σj P(d|hj)P(hj)
background assumptions of bayes rule
1. degrees of belief: probabilities - our belief in a hypothesis h can be expressed as a real number from 0-1 (0 completely false, 1 completely true)
2. learners represent probability distributions and use these probabilities to represent uncertainty in inference
H
set of hypotheses under consideration - these hypotheses are mutually exclusive and jointly exhaustive
P(hi|d)
posterior probability of hypothesis hi given the observed data d
P(hi)
prior probability of hypothesis hi - how much we believe in hi before observing any data
P(d|hi)
likelihood of data given hi - probability with which we would expect to observe the data d if hi were true
Σj P(d|hj)P(hj)
normalizing term - ensures that the probabilities assigned to all hypotheses sum to one. Data that leads us to strongly believe in one hypothesis must decrease our degree of belief in the other hypotheses
total probability of the evidence (P(d))
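a worked python example of the update (the coin hypotheses and numbers are mine, purely to show the mechanics):

```python
# bayes rule over a small hypothesis set H: P(h|d) = P(d|h)P(h) / sum_j P(d|hj)P(hj)

def bayes_update(priors: dict, likelihoods: dict) -> dict:
    total = sum(likelihoods[h] * priors[h] for h in priors)  # P(d), the normalizer
    return {h: likelihoods[h] * priors[h] / total for h in priors}

# two mutually exclusive, jointly exhaustive hypotheses about a coin
priors = {"fair": 0.9, "two-headed": 0.1}
# likelihood of observing three heads in a row under each hypothesis
likelihoods = {"fair": 0.5 ** 3, "two-headed": 1.0}

print(bayes_update(priors, likelihoods))
# {'fair': ~0.53, 'two-headed': ~0.47}: belief in "two-headed" rises,
# and belief in "fair" must fall so the probabilities still sum to one
```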
perceptual inference is...
iterative
blindsight
some w/ damage to the primary visual cortex have "blindsight": cortical blindness with preserved subcortical processing
blindsight CAN detect: simple shapes, orientation of lines, movement, color, emotional expressions and postures, objects appearing/disappearing
blindsight CANNOT detect: name or gender of the person whose emotions they have perceived
blindsight process
retina --> lateral geniculate nucleus (initial superficial processing) --> SKIPS visual cortex (complex processing w/ conscious awareness) --> other cortical regions (eg motor cortex)
how does blindsight relate to inverse optics as a whole?
there are multiple visual pathways
major lobes/cortexes of the brain
frontal lobe, parietal lobe, occipital lobe, temporal lobe, cerebellum, brain stem
vocab to refer to vague area of the brain
side view:
superior/dorsal = top
anterior/rostral = front
inferior/ventral = bottom
posterior/caudal = back
front view:
lateral = away from midline
medial = towards midline
where is the damage to the brain in cases of blindsight?
the primary visual cortex (V1) in the occipital lobe
where does the assumption that light comes from overhead come from? is it innate or learned from experience?
Hershberger (1970) chick study
method: chicks raised with only light from below, then trained to peck for food in concave or convex marks in a cage only lit from the sides so there was no shadow distinction
then they were presented with photos of shadowed convex or concave shapes
hypothesis: the chicks would prefer the photographs shadowed as they would be with light coming from below, consistent with their experience
results: for both shape preferences they preferred the photos consistent with light coming from above, even though they had never experienced it (innateness)
way to test assumptions about where light comes from
shape perception through shade
concave: dark on top, light on bottom
convex: light on top, dark on bottom
why care about face perception?
it's a useful exemplar of a fundamental principle of mind and brain organization: modularity
ex. important to know who's in your tribe and who's not
what is face processing?
a specialized process (or module) that operates quickly and automatically under the right conditions
prosopagnosia
face blindness
a disorder of the brain stemming from lesions to the fusiform face area (FFA, located in the inferior temporal lobe) that produces an inability to recognize faces
it is a selective deficit in processing faces, despite otherwise intact visual perception
and it is NOT a stored memory deficit
why do we study Sugita (2008)
direct and clean manipulation of experience
results indicate innately specified, specialized processing, but also limits on that specialization and clear roles for experience
results also indicate the cost of experience (sensitive period)
Sugita (2008) summary
indicator: looking time
control: monkeys growing up in normal monkey populations
experimental group: 6, 12, and 24 mo monkeys all deprived of any faces for their lives thus far
method: deprived monkeys of each age group then exposed for 1 mo to only monkey faces or only human faces
results:
before exposure period, control monkeys preferred monkey faces to objects and humans, while deprived monkeys preferred faces of either type to objects
after exposure period, monkeys exposed to monkey faces are like control monkeys, while monkeys exposed to human faces prefer human faces over both objects and monkey faces
after exposure period (features and variation): monkeys exposed to humans can discriminate variations in human faces but all monkeys look alike, while monkeys exposed to monkeys can discriminate variations in monkey faces but all humans look alike
what have we learned from Sugita (2008)?
we don't know how exactly faces are processed, we don't know what's innately-specified about face processing, we don't know exactly what the adaptive function of the innately-specified face processing is
BUT
we do have evidence with the powerful idea of an innate, computational template that is fine-tuned via experience with the world
where do we (and alison gopnik) fall on the nature/nurture continuum of thinking
on the continuum between nativist (completely innate) and empiricist/anti-nativist (pure learning and experience) we fall in hybrid/designed to learn middle
designed to learn: children have innate learning apparatuses which make learning possible, our learning is then tuned by experience
fundamental cogsci idea: the brain is a computer designed by evolution and programmed/fine tuned by experience
infant cognition
another underdetermined problem where the input cannot suffice for the output
poverty of the stimulus argument
babies balance the scale with innate knowledge of arithmetic, principles of intuitive physics, theory of mind (reasoning about other minds), and statistics (probabilistic reasoning)
arithmetic
5 month olds
question: do babies distinguish mathematically possible and impossible outcomes?
method:
1 + 1 = 1 or 2
1 figurine placed in case, screen went up, babies could see a hand add a figurine behind the screen then leave empty-handed, screen fell to reveal either 1 or 2 figures (2 is the expected outcome and 1 impossible in this scenario)
2 -1 = 1 or 2
2 figurines placed in case, screen went up and infants could see a hand removing a figurine, screen fell to either reveal 1 (expected) or 2 (impossible) figurines remaining
results: in both cases, 5 mos were surprised at the impossible outcomes and not surprised at the expected outcomes, so yes, they can do this simple math to distinguish expected/impossible outcomes
the violation of expectation method
a way to test what babies are paying attention to
babies look longer at novel or unexpected events than at more predictable ones
explanation: infants (1) possess the expectation, (2) detect the violation, (3) respond with increased attention
support: EEG shows a brain-wave pattern when an error is detected - it's called error-related negativity (ERN), and its timing is consistent with violation-of-expectation looking times
principles of intuitive physics
infants interpret the world according to general and intuitive physical principles
1. continuity: 2.5 mos, if an object is placed behind an occluder, it does not cease to exist (aka object permanence); is also at play in the baby arithmetic study; another study showed that 2.5 mos detect when this is violated (demos: the duck for covering violations, the princess for the straight-line assumption)
2. solidity: 2.5 mos, two objects cannot occupy the same space at the same time; entails that solid objects cant pass through other solid objects (ex. babies surprised when the truck appeared to pass through the wall)
3. cohesion: 3 mos, objects are single, integrated entities (ex. 8 mos fail to represent/track the movement of split snails, but can w/ whole snails) (entails rigidity: when parts of an object move, the whole object does)
4. contact: 6-7 mos, an object cannot exert force on another object from a distance (aka no telekinesis)
5. gravity (aka support): 4.5 mos, objects must be supported from below (ex. floating truck)
theory of mind
15-18 months
reasoning about other minds
gopnik broccoli experiment - 14 mos can't offer food based on others' demonstrated preferences, but 18 mos can (experimenters acted like they liked either crackers or broccoli; the baby likes crackers, but older babies can give broccoli to those w/ that preference even if it's different from their own)
false belief tasks - standard task (asking which box Sally will look in when Ann moves her marble w/out Sally seeing) shows the change to be able to reason about other minds to be at 4 years of age
BUT the modified false belief task shows this in 15 mo infants - used VoE, includes a TB condition where the experimenter sees the melon move AND a FB condition where the experimenter doesn't see it move; babies looked longer when the experimenter reached for the box that was inconsistent with what she had (or hadn't) seen
statistics
8-11 months
probabilistic reasoning
11 mo infant study: baby can't see the contents of a box from which balls (white or red) are being pulled. what assumptions are they making about the population of the box based on the samples being drawn?
expected condition - all red balls are pulled, box opens to reveal mostly red balls
unexpected condition - all white balls pulled, box opens to reveal mostly red balls
control condition - experimenter drew sample from pocket, and infants did not look longer at the "unexpected" outcome
suggests that babies are sensitive to random vs nonrandom sampling (statistical knowledge + reasoning about other minds)
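a back-of-the-envelope python version of the logic (my counts, not the study's exact stimuli):

```python
# how surprising is an all-white sample from a mostly-red box, if sampling is random?

def prob_all_one_color(n_target: int, n_other: int, draws: int) -> float:
    """probability of drawing `draws` target-color balls in a row (with replacement)."""
    p = n_target / (n_target + n_other)
    return p ** draws

print(prob_all_one_color(n_target=10, n_other=70, draws=4))  # ~0.0002: unexpected
print(prob_all_one_color(n_target=70, n_other=10, draws=4))  # ~0.59: expected
```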
difference between standard fb task and modified task
standard: sally and ann, relies on language/verbal communication, supports this ability starts at around 4 yrs
modified: melon toy, relies on looking times (VoE), supports this ability in younger children (15 mos)
theory of mind + statistical knowledge example
more snickers than kitkats in a bowl --> Jordan always grabs snickers, Alex always grabs a kitkat --> who prefers what? --> Alex likes kitkats: there are fewer of them and he always gets one, so he's choosing purposefully; we don't know about Jordan because it's statistically likely he'd grab snickers by chance
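the underlying likelihood logic in miniature (hypothetical counts):

```python
# how likely is each person's streak under pure chance?
n_snickers, n_kitkats = 15, 5
p_snickers = n_snickers / (n_snickers + n_kitkats)  # 0.75
p_kitkat = 1 - p_snickers                           # 0.25

grabs = 4  # each person grabs 4 times, always the same candy
print("jordan always snickers, by chance:", p_snickers ** grabs)  # ~0.32: plausible
print("alex always kitkat, by chance:", p_kitkat ** grabs)        # ~0.004: unlikely
# alex's streak is poorly explained by chance, so we infer a preference;
# jordan's streak is about what chance predicts, so we can't conclude much
```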
capacity to reason probabilistically
denison and xu's argument about babies' early emerging ability to make sophisticated (inductive) inferences under uncertainty
though adults often overlook this ability
gopnik: children learn like scientists do, by making hypotheses, experimenting, and updating their beliefs with data
example of babies' sensitivity to random v nonrandom sampling
babies need to make generalizations about limited data in order to make plausible assumptions
this means they need to be sensitive to where data comes from: random (generalizable) or nonrandom (nongeneralizable)
method: 15 mos, baby sees three squeaky blue balls pulled from a box of mostly blue balls, and generalizes that squeaky quality to the yellow balls --> the same 3 blue squeaky balls are then pulled from a box of mostly yellow balls (not statistically plausible as random sampling); the baby then does NOT generalize the squeaks to the yellow balls, because they know the blue ones were selectively sampled and thus probably different --> when just 1 squeaky blue ball was sampled (less data), babies still generalized to the yellows because this is still statistically probable
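the sample-size logic in miniature (hypothetical proportions, not the study's):

```python
# chance of randomly drawing k blue balls in a row from a mostly-yellow box
p_blue = 0.1  # assume the box is ~90% yellow

for k in (1, 3):
    print(f"P({k} blue in a row | random sampling) = {p_blue ** k:.3f}")
# 1 blue (0.100) is compatible with random sampling, so babies generalize;
# 3 blue in a row (0.001) points to selective sampling, so they don't
```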
Stahl and Feigenson (2015) 11 mos gravity and solidity infant study
object choice key findings: when the truck moved in a knowledge-consistent manner, infants preferred to move on to a new toy; but when the truck moved in a knowledge-inconsistent way, the baby preferred to continue exploring the truck
action choice: babies who saw the truck pass through the solid wall chose to bang it against something to test solidity, whereas babies who saw the truck float above the pedestal were more likely to drop the object to test the principle of gravity
SHOWS THAT: babies are flexible, explanatory learners - they preferentially explore objects and deliberately test hypotheses after learning new and surprising info
allows for intuitive physics while making room for active learning
babies make rational causal inferences from probabilistic (even limited) data and decide what to do based on their observations
example of babies using limited data to make rational inferences about what to do
question: when baby is confronted with a not working toy, do they reason about it being their fault or the toys?
method: baby sees toy working for grad student but not for prof --> hands his own non-working toy off to mom (he thinks he's doing something wrong)
baby sees prof and gsi each succeed once on the toy --> tosses his non-working one and grabs a new one (he thinks something is wrong w/ the toy)
David Marr's 3 levels of explanation applied to language
functional: mapping from sounds to meanings
algorithmic: syntax trees
physical: brain areas for language (motor cortex, broca's area, primary auditory cortex, and wernicke's area)
Noam Chomsky
hugely influential linguist who pioneered chomskyan cognitivism: language as an internal-to-mind computational system
result of the cognitive revolution: a response to behaviorism, holding that you can study observable behavior AND internal mental processes
he used the poverty of the stimulus argument for language
held a hybrid position between nativism and associationism (the view that language is entirely a matter of learned associations): nature (UG) and nurture (learning)
associationism
john locke: humans are born as blank slates and then shaped by experience
BF Skinner adopted this idea
study of human language = study of verbal behavior
verbal behavior is learned like other behaviors through stimulus-response associations: regularly experience A followed by B, and an association forms between them, so that when you experience As your mind produces representations of Bs (ex. rooster crowing --> sun comes up --> feeling warm)
instrumental conditioning: action is followed by reward, increasing the likelihood that a child will repeat that behavior in the same situation (ex. call the furry thing a cat --> reward --> knows to call that furry thing a cat again)
all in all what is in the head: associations between verbal actions and situations (and a minimal learning apparatus to enable learning of associations) and how did it get there: associationistic learning (particularly instrumental learning)
problems with associationism
it can't explain these language properties:
1. stimulus independence = people routinely utter sentences in contexts that are independent of the contexts in which the sentences were learned --> associationism says the context of learning and context of utterance must be identical
2. novelty = many of the sentences a person knows and can say have never been encountered before (ex. the wug test --> kids could generalize grammatical rules to made up words. ex2. - children make novel grammatical errors in their spoken language output that they have never heard before like we go-ed to the store is an overgeneralization of the -ed suffix)
3. productivity: a language is productive if there is no upper bound on the number of sentences it can express - language has infinite generative capacity (made possible by complementizers like THAT)
4. systematicity: a person who knows the language reliably knows groups of expressions at a time (ex. if joe knows the meaning of "the cat is on the mat" then he also knows the meaning of "the mat is on the cat")
chomskyan cognitivism
the poverty of the stimulus argument: language capacities are underdetermined by the stimuli children receive
so there must be universal restrictive principles that guide learning
what's in the head: focus on syntactic processing and how it can be understood in terms of a phrase structure grammar composed of abstract combinatoric phrase structure rules (syntax trees)
how did it get in the head: it's innate - part of the Universal Grammar
bridge from sound to meaning
syntax (between phonology and semantics)
syntax and semantics are distinct: sentences can be syntactically sound and semantically meaningless (colorless green ideas sleep furiously)
phrase structure grammars
sets of rules that employ abstract categories and can be used to generate valid sentences in a language (include rules, lexicon, and a tree)
they have the following features:
1. hierarchical = a sentence is broken down into abstract constituents which are broken down into further constituents
2. combinatoric = the rules are defined over elements that can be recombined in open-ended ways
3. recursive = allow for repeated application of certain rules that can operate on the output of other rules, making possible sentences of unbounded length (ex. that....that...that)
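a toy recursive generator for such a grammar (my own fragment; the rules and lexicon are illustrative, not the course's exact grammar). note how CP -> C S re-invokes S, which is the recursion that makes the system productive:

```python
import random

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"], ["Det", "N", "PP"]],
    "VP": [["V", "NP"], ["V", "CP"]],
    "CP": [["C", "S"]],          # recursion: a sentence embedded inside a sentence
    "PP": [["P", "NP"]],
}
LEXICON = {
    "Det": ["the", "a"], "N": ["cat", "mat", "dog"],
    "V": ["saw", "said", "thinks"], "C": ["that"], "P": ["on"],
}

def generate(symbol: str = "S", depth: int = 0) -> str:
    if symbol in LEXICON:
        return random.choice(LEXICON[symbol])
    rules = GRAMMAR[symbol]
    if depth > 3:  # cap recursion so the unbounded grammar yields a finite sentence
        rules = [r for r in rules if "CP" not in r and "PP" not in r] or rules
    return " ".join(generate(s, depth + 1) for s in random.choice(rules))

print(generate())  # e.g. "the dog said that a cat saw the mat"
```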
lexical ambiguity
words have more than one possible meaning
ex. I saw bats
phrase structure does not change
resolved with context and defining the word clearly
structural ambiguity
sentences have more than one possible meaning
does not come from the meanings of individual words, but rather ambiguous grammatical structure
resolved by phrase structure trees to disambiguate the sentences
usually has to do with where the prepositional phrase (PP) is attached (you want it on the same level as whatever other phrase it goes with, per interpretation)
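to see the two structures explicitly, here's a sketch using nltk's chart parser (assumes the nltk package is installed; the mini-grammar is mine). each tree it prints corresponds to one attachment site for the PP:

```python
import nltk  # no corpus downloads needed for CFG parsing

grammar = nltk.CFG.fromstring("""
S -> NP VP
VP -> V NP | V NP PP
NP -> Pro | Det N | Det N PP
PP -> P NP
Pro -> 'I'
Det -> 'the'
N -> 'man' | 'telescope'
V -> 'saw'
P -> 'with'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("I saw the man with the telescope".split()):
    print(tree)  # one parse: I used the telescope; the other: the man has it
```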
universal grammar
the initial state of the human mind that allows it to acquire languages and the term also refers to more specific theories about what that state is/contains
it is the system of categories, mechanisms, and constraints shared by all human languages
chomsky called it the essence of all human languages
ug as an innate language acquisition device: when you know a language, you know principles/properties that apply to all languages (innate) and parameters that vary from one language to another (learned)
jukebox analogy: some variations on what songs play, but there is a preset menu
NOTE: most recent chomskyan theory excludes phrase structure but we still follow early chomsky in class
can the cognitivist account explain those four features of language
1. novelty = yes, if we have predictable, flexible phrase structure rules we can use them to generate completely new meanings and sentences (ex. the wug test, generalizing rules onto made up words)
2. systematicity = yes, with phrase structure rules made of combinatory syntactic elements, you can slot in different concepts w/out changing the grammar
3. productivity = yes, cognitivism includes rules that are recursive (complementizer phrases)
4. stimulus independence = yes, phrase structure is independent of any stimuli
chomsky's two arguments for the innateness of language
1. argument for universality - individuals in a speech community have all developed essentially the same language - universality within and across speech communities - all languages employ basic structural rules - language is present in all human societies, everyone of normal intelligence develops language, and all languages have around the same complexity (other activities, like cooking, do not share these qualities)
2. poverty of the stimulus argument - language is underdetermined by evidence available to children - leads us to believe there are rules that are innate - combinatory and formulated using highly abstract categories (ex. knowing assertions and questions are systematically related - move aux verb to front of sentence)
NSL timeline
pre1970s: children isolated within family using homesigns
1977/1981: new schools open in managua for the deaf; children then developed their own gestural language to communicate and passed it on (and enriched it) over time to younger children
2004: 800 deaf signers of NSL
Senghas et al (2004)
every language consists of a finite set of parts that can be recombined in infinite ways and are organized in a principled and hierarchical fashion
language is both discrete and combinatorial
these principles make infinite expressions possible within a finite system
NSL: description of motion in NSL is a lens into the introduction of segmented, linear, and hierarchical info in a communication system --> motion events are made of a manner (ex. rolling) and a path (ex. descending, down a hill) --> simultaneous vs separate signs
study groups: cohort 1 entered pre-1984, cohort 2 entered 1984-1993, cohort 3 entered post-1993, plus a group (g) of hearing individuals
results: g and cohort 1 showed the most simultaneous signing, while cohorts 2 and 3 mostly produced sequential/combinatoric patterning with an ABA (roll-descend-roll) structure
what can we learn from NSL?
1. language is created by kids, not taught or learned solely by imitation, they build on and develop concepts
2. humans' learning abilities have a predisposition for creating linear sequencing (sequential combos appear even when simultaneous models are available)
ABSL (Al-Sayyid Bedouin Sign Language)
another sign language community that developed with no instruction
important in that its phrase structure is different than the Arabic spoken dialects in the area as well as Israeli Sign Language used in surrounding areas
they do subject-object-verb (woman apple give) instead of subject-verb-object
discretely neurally localized
some parts of the brain are primarily responsible for language processing
evidence comes from language disorders
motor cortex, broca's area, primary auditory cortex, wernicke's area
independence of form and meaning
independence of syntax and semantics as well as the independence of linguistic ability from other aspects of cognition receive support from double dissociations of meaning and form in neuropsychological literature
broca's area
green, left frontal lobe, syntactic region, lesions produce speech that is agrammatical and lacks fluency but is still semantically meaningful (broca's aphasia)
ex. can describe content of a picture but in a clunky and disconnected manner
wernicke's area
yellow, left temporal lobe, semantic region, lesions produce complex/grammatical speech w/out coherent meaning and difficulties understanding others' speech, syntax w/out semantics (wernicke's aphasia)
ex. man describing events fluently but the meaning is word salad
"thank you so much, I hope the world lasts for you"
motor cortex
frontal lobe, blue, lesions (if they happen to affect language) produce difficulty coordinating mouth, lips, and tongue to produce speech
sort of associated with broca's area
primary auditory cortex
temporal lobe, purple, lesion produce cortical deafness (cannot hear sounds)
sort of associated with wernicke's area
williams syndrome
congenital disorder characterized by distinct cognitive organization and impairments but completely intact language capabilities
good storytellers
separation of language from other areas of the brain
Hickok et al (2002)
is the brain's organization for language truly based on the functions of hearing and speaking? (aka are broca's and wernicke's areas related to the cortexes near them?)
test this with sign language users
sign language users w/ the corresponding lesions also experience deficits resembling broca's/wernicke's aphasia
suggests that the organization of the brain for language is not affected by the way it is perceived/produced
it cant just be based on speaking/hearing because people communicate in other ways
the neural organization of sign language is more tied to that of spoken language than visual-spatial processing
also supports modularity - separate module including all language, separate from other processing systems
four key claims about language processing
1. language processing is mandatory: it operates irrespective of conscious initiation or control
2. it is fast and incremental: we understand speech and text as it unfolds; EEG shows we detect semantic/context violations almost immediately ("he spread the warm bread with socks"), which shows we process as we go and don't wait for the whole picture
3. most of the fast/incremental processes are not available to conscious awareness: can we catch the brain in the middle of semantic word sense disambiguation? in competent speakers, looking up a word also pulls up related words (and their multiple meanings, like bug --> spy) --> when primed w/ semantically related words, people identify possible matches faster
4. language processing must deal with ambiguous input: hard for our brains since we interpret as we go (leading to things like garden path sentences); the brain skates over lexical ambiguity and uses other cues, like visual context, to resolve structural ambiguity
Richard Samuels
philosopher who attempted to define innateness in cognitive science
goal not to come up with one definition, because this is generally impossible
more of a conceptual analysis of possible definitions and what qualities are important
want a definition that is not too narrow but also not too broad
definition of innate as meaning present at birth
corollary: if a trait emerges sometime after birth, then it is not innate
counterexample: teeth/beards - these emerge after birth, but we still think of them as innate
so this definition is too narrow
definition of innate as meaning a product of internal causes
corollary: if a trait is a product of an external cause, then it is not innate
counterexample: teeth/beards - their development is influenced by external causes (oxygen, nutrients) but we still think of them as innate
so this definition is too narrow
callback to hybrid position of nature and nurture
definition of innate as meaning genetically determined
but what does it mean to say a trait is genetically determined?
1. caused by one's genes alone: teeth/beards require certain outside factors to develop, so this account claims they are not genetically determined (too narrow)
2. having high heritability (variations in the trait are explained by variation in the genes): having a head is innate but there's no variation in this trait (making heritability undefined in this case) (again, too narrow)
definition of innate as meaning reliably develops across most normal environments
all people develop teeth across most normal environments, so this seems to work...but our belief that the sky is blue also develops in most normal environments and this is not innate... so this definition is too broad (but probably the closest, better to be too broad than too narrow)
module
a mental computer that is specialized to do just one kind of mental job
the brain as a swiss army knife, modules as graphics card on a computer or the organs of the body
ex. visual perception - we don't have to consciously solve the problem of inverse optics in our central computer; it's like there's a fast, specialized computer that does it while the main computer and other modules do other things
types of modules
1. visual mechanisms
2. auditory mechanisms
3. language processing mechanisms
4. cheater detection mechanisms
5. emotional mechanisms
6. face processing mechanisms
7. many others?
difference between modular systems and central cognition
modular systems = fast, automatic, spontaneous (ex. facial processing, language processing)
central cognition = slow, effortful, general purpose (ex. complex math or riddles)
how much of the mind is modular?
John Locke: none
Popular view: only some elements of the mind, especially perception, are modular (class position)
Many evolutionary psychologists: your mind is entirely modular
characteristic features of modular systems
1. their operation is mandatory: outside of conscious control (language processing, processing a die as a 3d shape, seeing the bowl of veggies as a shape, versus a complex calculation)
2. their operation is fast: we understand as things unfold
3. they are domain-specific: aka operate on a limited/specialized set of inputs (ex. the wason selection task is hard with numbers but easy with a social scenario because we have a module specialized for cheater-detection)
4. they are informationally encapsulated: the module can't access info in other systems (ex. the muller-lyer illusion - we can't help but see the lines as different lengths; central cognition knows they are the same, but the visual module can't access this info)
5. they are inaccessible to other mental systems: aka the reverse of informational encapsulation; other mental systems can't access info in modules (ex. inability to articulate certain grammatical rules on the spot because central cognition can't access the module that holds them)
6. some are discretely neurally localized (like language and the FFA), but most modules are not (domain specificity does not imply this quality)
how many of the modular features does a system need to have to be considered modular?
none! but the more a system has, the more modular it is