The End of Cognitive Science


32 Terms

1

Markov Chain

A system describing a sequence of distinct states where the probability of each transition depends only on the state immediately before it; maybe this is how thoughts and language are structured!

  • Dots represent states

  • Arrows represent transitions between those states

  • Numbers represent the likelihood of each transition

[probability of moving to the next state depends only on the current state, not on the full sequence of events that preceded it]

These are usually n-gram models, where the system looks at the last n-1 words to predict the next.

  • Bigram: based on 1 word before.

  • Trigram: based on 2 words before.

(Predictive Text)

Problems:

  • “Colorless green ideas sleep furiously”

    • Syntactically well-formed sentences or thoughts can have a probability of transition of ZERO

    • Can’t capture long-distance dependencies

  • Don’t “know” what a sentence means → they don’t track consistency, referents, or long-range context; they’re shallow

  • Long distance dependency - a relationship between words separated by intervening material; since a Markov chain’s future state depends only on the current state, it cannot capture these. (

    “If the dog that chased the cat is hungry, it will bark.”

    To interpret “it”, you need to recall “the dog”, not “the cat”—even though several words intervene. A first-order Markov Chain, which only looks one step back, cannot do this)

Better story: Thought has hierarchical structure
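The n-gram idea can be made concrete. A minimal sketch of a bigram ("last 1 word") Markov model in Python; the corpus and names are invented for illustration:

```python
import random
from collections import defaultdict

def train_bigram(tokens):
    """Count word-to-word transitions, then normalize into probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return {
        prev: {w: c / sum(nxts.values()) for w, c in nxts.items()}
        for prev, nxts in counts.items()
    }

def generate(model, start, length=5):
    """Sample a sequence; each next word depends ONLY on the current word."""
    words = [start]
    for _ in range(length):
        nxts = model.get(words[-1])
        if not nxts:
            break
        words.append(random.choices(list(nxts), weights=list(nxts.values()))[0])
    return " ".join(words)

model = train_bigram("the dog chased the cat and the cat ran".split())
print(model["the"])  # {'dog': 0.333..., 'cat': 0.666...}
```

Because each choice looks only one word back, such a model can produce locally plausible text but can never enforce agreement across a gap, which is exactly the long-distance-dependency problem.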

2

Distributed Representation

Representations or concepts are distributed throughout the brain, rather than having a discrete location; On this view, concepts are anti-localized.

  • Core property of Connectionism

3

Modularity

The idea that the mind is made up of independent systems - modules - that are specialized for certain computations.

4

Fixed neural architecture, Mandatory operation, Limited central accessibility, Domain specificity, Information encapsulation

What makes a module? (five)

5

Fixed neural architecture

One of the conditions of determining whether something is a module: More or less specialized parts of the brain for operation of the particular system.

Does the mind have a standard / localizable architecture at the implementational level?

Example: facial recognition area in the brain (maybe good evidence)

6

Mandatory operation

One of the conditions of determining whether something is a module: Happens automatically, not under our conscious control.

Example: You can’t hear someone speaking English to you as mere noise; you can’t “turn comprehension off” and hear gibberish instead

7

Limited central accessibility

One of the conditions of determining whether something is a module: A system is inaccessible if the representations it computes prior to producing its output are inaccessible to consciousness.

Example: We have no conscious access to the processes that are occurring within a module.

8

Domain specificity

One of the conditions of determining whether something is a module: Does it have a specific task or goal it is trying to accomplish? Does it have a particular subject matter?

Example: Fusiform Face Area (FFA): Area of visual processing that is specifically specialized in facial recognition = helping us recognize and understand other people’s faces

9

Information encapsulation

One of the conditions of determining whether something is a module: The module cannot access information stored elsewhere; some of the information these modules use is encapsulated within the module, and processing has access only to the information within that module.

10

Bottom-Up Processing

Information flowing from senses to perception.

11

Top-Down Processing

Information flowing from world knowledge to perception (color constancy, reading typos)

12

Cognitive Penetrability

A challenge to modularity: a perceptual system is cognitively penetrable when its operations are directly sensitive to an agent’s beliefs, desires, intentions, and other non-perceptual states

  • When mental representations in central cognition affect what we perceive; extremely debated

13

Connectionism

The mind is a network - a neural network - consisting of nodes and weighted connections between those nodes

  • Architecture of the mind resembles the architecture of the brain

Strengths:

  • Learning

  • Graceful degradation

  • An attractive account of concepts

Weaknesses:

  • One-off (one-shot) learning is difficult

  • No obvious explanation of systematicity and productivity

  • Are they just capturing mere associations, rather than mastering abstract rules?

14

Parallel processing, Distributed representations rather than discrete symbols, Statistical properties rather than logical rules

What are the three key properties of connectionism? (three)

15

Jennifer Aniston Neuron

Neurons in the medial temporal lobe that fire selectively to images of Jennifer Aniston

  • Neurons can change their associations; activation spreads along the connections between neurons, known as Cascading activation - hierarchical layers of interconnected neurons, each firing individual action potentials that activate the next layer

16

Cascading Activation

A process in connectionist models (like neural networks) where the activation of one unit (or node) spreads to other connected units, which in turn may activate others, creating a ripple or chain reaction of activation through the network.
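A toy sketch of this spreading process; the network, weights, and node names are made up for illustration:

```python
def spread(weights, activation, steps=2):
    """One step at a time, add to each target node the weighted
    activation of its source node, clipping activation at 1.0."""
    for _ in range(steps):
        nxt = dict(activation)
        for (src, dst), w in weights.items():
            nxt[dst] = min(1.0, nxt.get(dst, 0.0) + w * activation.get(src, 0.0))
        activation = nxt
    return activation

# Hypothetical mini-network: a face percept activates related concepts in turn
weights = {("face", "person"): 0.8, ("person", "name"): 0.5}
print(spread(weights, {"face": 1.0}))  # activation ripples: face -> person -> name
```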

17

Wason Selection task

Test for logical reasoning with conditional statements (if-then statements); we are bad at it when the subject matter is abstract.

However, we improve task performance drastically when the subject matter is about certain social settings or behaviors (when someone is breaking a rule or cheating)

  • Hypothesized we evolved to have a cheater detection module

  • What influences how good we are at reasoning with conditional statements - Social contexts and perspective taking
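The logic of the correct choice can be sketched in a few lines: a card is worth turning only if its visible face could falsify “if P then Q”, i.e. it shows P or shows not-Q. The cards below are the classic vowel/number version; the setup is illustrative:

```python
def cards_to_flip(cards, is_p, is_not_q):
    """A card can falsify "if P then Q" only if it shows P
    (its back might be not-Q) or shows not-Q (its back might be P)."""
    return [c for c in cards if is_p(c) or is_not_q(c)]

# Abstract rule: "if a card shows a vowel, its other side shows an even number"
flips = cards_to_flip(
    ["A", "K", "4", "7"],
    is_p=lambda c: c in "AEIOU",                    # vowel visible
    is_not_q=lambda c: c.isdigit() and int(c) % 2,  # odd number visible
)
print(flips)  # ['A', '7'] -- many people wrongly pick A and 4 instead
```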

18

Cultural Intelligence Hypothesis

Answers the question of why humans are so intelligent: social cognition preceded general intelligence.

Proposes that humans evolved advanced cognitive abilities primarily to navigate and learn from complex social and cultural environments

  • Humans outperform other primates in the social domain specifically, not because they simply have more general intelligence

19

Compositionality

Complex expressions are built from smaller expressions, or constituents; the meaning of a complex expression is a function of the meaning of the constituents and the way they are combined

  • Property of Language of Thought (LOT)

20

Property of Language of Thought (LOT)

Thought is a formal system

  • Three properties: Logical structure, syntactic processes, compositionality

21

Productivity

The ability to entertain a potential infinity of thoughts

  • We know we have it because we can conjure up thoughts nobody has ever thought before → potential infinity of sentences

(Bluey gave a flower to Greeny’s child’s child’s child..)

[We can run the same rules over the same constituents infinitely]
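The “same rules over the same constituents” point can be sketched as a recursive rule (names taken from the example in this card):

```python
def possessive_chain(depth):
    """Apply one rule -- NP -> NP + "'s child" -- over and over:
    finitely many rules, unboundedly many grammatical phrases."""
    if depth == 0:
        return "Greeny"
    return possessive_chain(depth - 1) + "'s child"

for d in range(4):
    print("Bluey gave a flower to " + possessive_chain(d))
```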

22

Systematicity

Interrelations between the thoughts we can have and the ways in which we can think.

  • Certain thoughts are intrinsically connected to certain other thoughts

  • How do we know we have it? - We see systematicity in natural language acquisition

(Example: Bluey MIMMED (jumped over) Greeny, can tell us information about how Pinky mimmed Goldy)

23

Computational Theory of Mind

A mind is a computational system, and cognition is computations performed over representations (manipulating symbols that have a specific structure)

  • Right level of explanation

  • A bridge between brain and behavior

  • Extension to non-human minds

24

Cognitive Architecture

A proposal about the kinds of structures that make up a mind and how they work together to give rise to cognitive behavior —> a blueprint for intelligence.

25

Language of Thought

“Our best theories make use of mental language, so we have good reason to accept that it exists.”

Proposed by philosopher Jerry Fodor, it suggests that thinking occurs in a mental language that operates similarly to formal, symbolic language

  • Thoughts are built from symbols according to rules, which can be manipulated algorithmically for reasoning, learning, and decision-making

(e.g. preserves relationships such as subject-verb-object)

(Does a good job explaining a wide range of cognitive behavior such as : natural language, reasoning, planning, decision-making, perception, and animal cognition)

26

Compositionality

Complex expressions are built from smaller expressions, or constituents

  • The meaning of a complex expression is a function of the meaning of the constituents and the way they are combined

Smaller constituents are concepts, and larger constituents are propositions
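A toy sketch of a compositional meaning function; the lexicon and notation are invented for illustration. The meaning of the whole is computed from the meanings of the parts plus their mode of combination:

```python
# Toy lexicon: atomic meanings for names, a function for the verb
meanings = {
    "Bluey": "BLUEY",
    "Greeny": "GREENY",
    "jumped-over": lambda subj, obj: f"JUMP({subj}, {obj})",
}

def interpret(subject, verb, obj):
    """Subject-verb-object combination: apply the verb's meaning
    to the meanings of its arguments, in that structural order."""
    return meanings[verb](meanings[subject], meanings[obj])

print(interpret("Bluey", "jumped-over", "Greeny"))  # JUMP(BLUEY, GREENY)
print(interpret("Greeny", "jumped-over", "Bluey"))  # JUMP(GREENY, BLUEY)
```

Swapping constituents systematically swaps the resulting proposition, which is also the systematicity point: anyone who can compose the first thought can compose the second.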

27

Logical Structure

(A property of Language of Thought)

LoT expressions have a logical form, where they contain analogues of logical operators

  • Analogues of logical operators = or, if-then, not, some, all, etc.

Logical structure is typically modeled using predicate calculus (John jumped and Mary climbed)

Analogues of logical operators = Counterparts or equivalents of logic functions that appear in other fields like language, mathematics, and programming

28

Syntactic Processes

(A property of Language of Thought)

Computations manipulate symbols in virtue of their formal syntactic properties, not their semantic properties

  • Meaning doesn’t figure into the rules of composition — only form

If John jumped, then Mary climbed

(a) John jumped

(b) John didn’t jump

29

Argument from Systematicity of Thinking

There are systematic relations among the inferences you can draw, like inferring that Greeny has a flower from “Greeny and Bluey have flowers.”

30

compositional, logical form, syntactic rules, practice, productivity, systematicity

LoT is _____________, has ____________, and follows _______________; and is supported by arguments from ________, ____________, and _____________

31

Computations over structured representations

Refers to the idea that the mind processes information by manipulating internally stored symbols that have a specific structure

  • Computation: Following rules

  • Representations: Organized in ways that preserve relationships and meanings
