Markov Chain
A system describing a sequence of distinct states where the probability of each transition depends entirely on the state that came before; maybe this is how thoughts and languages are structured!
Dots represent states
Arrows represent transitions between those states
Numbers represent the likelihood of each transition
[probability of moving to the next state depends only on the current state, not on the full sequence of events that preceded it]
These are usually n-gram models, where the system looks at the last n-1 words to predict the next.
Bigram: based on 1 word before.
Trigram: based on 2 words before.
(Predictive Text)
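The bigram idea above can be sketched in a few lines of Python. This is a minimal illustration, not a real predictive-text system; the training sentence and function names are made up for the example.

```python
from collections import defaultdict

def train_bigram(text):
    """Count how often each word follows each other word,
    then normalize the counts into transition probabilities."""
    words = text.split()
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return {
        prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
        for prev, nxts in counts.items()
    }

def predict(model, word):
    """Return the most likely next word given ONLY the current word
    (the Markov assumption: no earlier context is consulted)."""
    if word not in model:
        return None
    return max(model[word], key=model[word].get)

model = train_bigram("the dog chased the cat and the dog barked")
print(predict(model, "the"))  # "dog" follows "the" 2 of 3 times
```

Note that `predict` never looks further back than one word, which is exactly why the long-distance-dependency problem below arises.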
Problems:
“Colorless green ideas sleep furiously”
Syntactically well-formed sentences or thoughts can be assigned a transition probability of ZERO (the model has never seen those word pairs)
Can’t capture long-distance dependencies
Don’t “know” what a sentence means → they don’t track consistency, referents, or long-range context; they’re shallow
Long distance dependency - A relationship between words separated by intervening material. Markov chains can’t capture these, because the future state depends only on the current state, not on any states before it.
“If the dog that chased the cat is hungry, it will bark.”
(To interpret “it”, you need to recall “the dog”, not “the cat”, even though several words intervene. A first-order Markov chain, which only looks one step back, cannot do this.)
Better story: Thought has hierarchical structure
Distributed Representation
Representations or concepts are distributed throughout the brain, rather than having a discrete location; On this view, concepts are anti-localized.
Core property of Connectionism
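The contrast between a distributed code and a one-concept-one-unit (“localist”) code can be shown concretely. This is a toy sketch; the three concepts and the feature labels (furry, alive, has-wheels) are invented for illustration.

```python
# Localist: each concept gets its own dedicated unit.
localist = {
    "dog": [1, 0, 0],
    "cat": [0, 1, 0],
    "car": [0, 0, 1],
}

# Distributed: each concept is a pattern of activation over ALL units
# (hypothetical features: furry, alive, has-wheels).
distributed = {
    "dog": [0.9, 1.0, 0.0],
    "cat": [0.8, 1.0, 0.0],
    "car": [0.0, 0.0, 1.0],
}

def overlap(a, b):
    """Shared activation between two patterns."""
    return sum(x * y for x, y in zip(a, b))

# Similar concepts overlap in a distributed code...
print(overlap(distributed["dog"], distributed["cat"]))  # 1.72
# ...but localist codes for distinct concepts never overlap.
print(overlap(localist["dog"], localist["cat"]))        # 0
```

Similarity between concepts falls out of the representation for free in the distributed case, which is part of why connectionists find it attractive as an account of concepts.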
Modularity
The idea that the mind is made up of independent systems - modules - that are specialized for certain computations.
Fixed neural architecture, Mandatory operation, Limited central accessibility, Domain specificity, Information encapsulation
What makes a module? (five)
Fixed neural architecture
One of the conditions of determining whether something is a module: More or less specialized parts of the brain for operation of the particular system.
Does the mind have a standard / localizable architecture at the implementational level?
Example: facial recognition area in the brain (maybe good evidence)
Mandatory operation
One of the conditions of determining whether something is a module: Happens automatically, not under our conscious control.
Example: You can’t hear someone speaking English to you as anything but English; you can’t “turn it off” and hear meaningless noise instead
Limited central accessibility
One of the conditions of determining whether something is a module: A system is inaccessible if the representations it computes prior to producing its output are inaccessible to consciousness.
Example: We have no conscious access to the processes that are occurring within a module.
Domain specificity
One of the conditions of determining whether something is a module: Does it have a specific task or goal it is trying to accomplish? Does it have a particular subject matter?
Example: Fusiform Face Area (FFA): Area of visual processing that is specifically specialized in facial recognition = helping us recognize and understand other people’s faces
Information encapsulation
One of the conditions of determining whether something is a module: The module cannot access information stored elsewhere in the mind; the information a module uses is encapsulated within the module, so its processing has access only to the information inside that module.
Bottom-Up Processing
Information flowing from senses to perception.
Top-Down Processing
Information flowing from world knowledge to perception (color constancy, reading typos)
Cognitive Penetrability
A challenge to modularity: a perceptual system is cognitively penetrable when its operations are directly sensitive to an agent’s beliefs, desires, intentions, and other non-perceptual states
When mental representations in central cognition affect what we perceive; extremely debated
Connectionism
The mind is a network - a neural network - consisting of nodes and weighted connections between those nodes
Architecture of the mind resembles the architecture of the brain
Strengths:
Learning
Graceful degradation
An attractive account of concepts
Weaknesses:
One-off learning
No obvious explanation of systematicity and productivity
Are they just capturing mere associations, rather than mastering abstract rules?
Parallel processing, Distributed representations rather than discrete symbols, Statistical properties rather than logical rules
What are the three key properties of connectionism? (three)
Jennifer Aniston Neuron
Neurons in the medial temporal lobe that fire selectively to images of Jennifer Aniston
Neurons can change their associations; the way activation spreads across connected neurons is known as cascading activation - hierarchical layers of interconnected neurons, where single action potentials produced by individual neurons activate the neurons they connect to
Cascading Activation
A process in connectionist models (like neural networks) where the activation of one unit (or node) spreads to other connected units, which in turn may activate others, creating a ripple or chain reaction of activation through the network.
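The ripple of activation described above can be sketched as a forward pass through layers. This is a bare-bones illustration, not a trained network; the weights and the 0.5 firing threshold are made up for the example.

```python
def cascade(weights, inputs):
    """Spread activation through successive layers of a network.

    weights[k][j][i] connects unit i of layer k to unit j of layer k+1.
    Activating the input units triggers connected units downstream,
    which in turn activate further units: a chain reaction.
    """
    activation = inputs
    for layer in weights:
        activation = [
            1.0 if sum(a * w for a, w in zip(activation, row)) >= 0.5 else 0.0
            for row in layer
        ]
    return activation

# Two input units feeding a hidden layer, feeding one output unit.
weights = [
    [[1.0, 0.0], [0.0, 0.2]],  # hidden layer: unit 0 driven by input 0
    [[1.0, 1.0]],              # output unit driven by both hidden units
]
print(cascade(weights, [1.0, 0.0]))  # input 0 fires -> hidden 0 -> output fires
print(cascade(weights, [0.0, 1.0]))  # weak connection: nothing downstream fires
```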
Wason Selection task
Test for logical reasoning with conditional statements (if then statements), where we are bad at it when the subject matter is abstract.
However, we improve task performance drastically when the subject matter is about certain social settings or behaviors (when someone is breaking a rule or cheating)
Hypothesized we evolved to have a cheater detection module
What influences how good we are at reasoning with conditional statements - Social contexts and perspective taking
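The logic of the task can be made explicit in code. A sketch of the normatively correct answer, using the classic abstract version of the task (cards E, K, 4, 7 and the rule “if a card has a vowel on one side, it has an even number on the other”); the helper names are invented.

```python
def cards_to_flip(cards, is_p, is_not_q):
    """Return the cards that must be turned over to test 'if P then Q'.

    Only two kinds of card can falsify the rule: a P card (its hidden
    side might be not-Q) and a not-Q card (its hidden side might be P).
    Q cards and not-P cards are irrelevant, yet people flip them often.
    """
    return [c for c in cards if is_p(c) or is_not_q(c)]

# Classic abstract version: 'if vowel, then even number'.
cards = ["E", "K", "4", "7"]
is_vowel = lambda c: c in "AEIOU"
is_odd = lambda c: c.isdigit() and int(c) % 2 == 1

print(cards_to_flip(cards, is_vowel, is_odd))  # ['E', '7']
```

Most people correctly flip E but wrongly flip 4 instead of 7; in the cheating-detection framing (e.g., “if drinking beer, then over the legal age”) the same logical structure is solved easily.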
Cultural Intelligence Hypothesis
Answers the question = why are humans so intelligent? Social cognition preceded general intelligence.
Proposes that humans evolved advanced cognitive abilities primarily to navigate and learn from complex social and cultural environments
In the social domain, humans outperform other primates not because they simply have more general intelligence, but because of specialized social-cognitive skills
Compositionality
Complex expressions are built from smaller expressions, or constituents; the meaning of a complex expression is a function of the meaning of the constituents and the way they are combined
Property of Language of Thought (LOT)
Thought is a formal system
Three properties: Logical structure, syntactic processes, compositionality
Productivity
The ability to entertain a potential infinity of thoughts
We know we have it because we can conjure up thoughts nobody has ever thought before → potential infinity of sentences
(Bluey gave a flower to Greeny’s child’s child’s child..)
[We can run the same rules over the same constituents infinitely]
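Running the same rule over the same constituents indefinitely is easy to show in code. A toy sketch using the deck’s own Greeny example; the function name is invented.

```python
def possessive_chain(n):
    """Apply the possessive rule n times: the same rule, recursively,
    yields a potential infinity of distinct expressions."""
    return "Greeny" + "'s child" * n

print(possessive_chain(3))  # Greeny's child's child's child
```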
Systematicity
Interrelations between the thoughts we can have and the ways in which we can think.
Certain thoughts are intrinsically connected to certain other thoughts
How do we know we have it? - We see systematicity in natural language acquisition
(Example: understanding “Bluey MIMMED (jumped over) Greeny” also tells us how to understand “Pinky mimmed Goldy”)
Computational Theory of Mind
A mind is a computational system, and cognition is computations performed over representations (manipulating symbols that have a specific structure)
Right level of explanation
A bridge between brain and behavior
Extension to non-human minds
Cognitive Architecture
A proposal about the kinds of structures that make up a mind and how they work together to give rise to cognitive behavior —> a blueprint for intelligence.
Language of Thought
“Our best theories make use of mental language, so we have good reason to accept that it exists.”
Proposed by philosopher Jerry Fodor, it suggests that thinking occurs in a mental language that operates similarly to formal, symbolic language
Thoughts are built from symbols according to rules, which can be manipulated algorithmically for reasoning, learning, and decision-making
(e.g., preserves relationships such as subject-verb-object)
(Does a good job explaining a wide range of cognitive behavior such as : natural language, reasoning, planning, decision-making, perception, and animal cognition)
Compositionality
Complex expressions are built from smaller expressions, or constituents
The meaning of a complex expression is a function of the meaning of the constituents and the way they are combined
Smaller constituents are concepts, and larger constituents are propositions
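A toy sketch of this compose-from-constituents idea, using the deck’s own made-up verb “mimmed”; the lexicon entries and the tuple encoding of propositions are invented for illustration.

```python
# Each constituent has a meaning; a combination rule builds the meaning
# of the whole proposition from the meanings of its parts.
lexicon = {
    "Bluey": "BLUEY",
    "Greeny": "GREENY",
    "mimmed": lambda subj, obj: ("MIM", subj, obj),  # verb meaning = a relation
}

def meaning(subject, verb, obj):
    """Subject-verb-object combination rule."""
    return lexicon[verb](lexicon[subject], lexicon[obj])

print(meaning("Bluey", "mimmed", "Greeny"))  # ('MIM', 'BLUEY', 'GREENY')
print(meaning("Greeny", "mimmed", "Bluey"))  # ('MIM', 'GREENY', 'BLUEY')
```

Because the same constituents recombine under the same rule, anyone who can represent the first proposition can represent the second: compositionality underwrites systematicity.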
Logical Structure
(A property of Language of Thought)
LoT expressions have a logical form, where they contain analogues of logical operators
Analogues of logical operators = or, if-then, not, some, all, etc.
Logical structure is typically modeled using predicate calculus (e.g., “John jumped and Mary climbed” becomes Jump(john) ∧ Climb(mary))
Analogues of logical operators = Counterparts or equivalents of logic functions that appear in other fields like language, mathematics, and programming
Syntactic Processes
(A property of Language of Thought)
Computations manipulate symbols in virtue of their formal syntactic properties, not their semantic properties
Meaning doesn’t figure into the rules of composition — only form
If John jumped, then Mary climbed
(a) John jumped → we can infer that Mary climbed (the rule fires on the form of the symbols alone)
(b) John didn’t jump → nothing follows about Mary
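The inference in (a) can be implemented as a purely formal rule that never consults what the symbols mean. A minimal sketch; the tuple encoding and function name are invented for illustration.

```python
def modus_ponens(conditional, fact):
    """From ('IF', p, q) and p, derive q - matching on shape alone.

    The rule compares symbol structures; the meanings of 'John' or
    'jumped' never figure into it.
    """
    op, antecedent, consequent = conditional
    if op == "IF" and fact == antecedent:
        return consequent
    return None  # no rule applies; nothing follows

rule = ("IF", "John jumped", "Mary climbed")
print(modus_ponens(rule, "John jumped"))       # Mary climbed
print(modus_ponens(rule, "John didn't jump"))  # None: nothing follows
```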
Argument from Systematicity of Thinking
There are systematic relations among inferences you can draw, like inferring that Greeny has a flower from “Greeny and Bluey have flowers”
compositional, logical form, syntactic rules, practice, productivity, systematicity
LoT is _____________, has ____________, and follows _______________; and is supported by arguments from ________, ____________, and _____________
Computations over structured representations
Refers to the idea that the mind processes information by manipulating internally stored symbols that have a specific structure
Computation: Following rules
Representations: Organized in ways that preserve relationships and meanings