PHIL 351 - 2.1


Classical vs. nonclassical views of cognitive architecture

Last updated 4:06 PM on 4/15/26

15 Terms

1

LOT and Cognitive Architecture

  • The propositional content of a thought is the meaning of a LOT sentence.

  • The propositional attitude is the subject’s ‘computational relation’ to the LOT sentence, i.e. the causal or functional role the sentence plays in the subject’s mental economy

    • Ex. S thinks that P if and only if S is cognitively related to a sentence (in S’s internal LOT) that means P

    • Ex. Amy believes that grass is green if and only if Amy stands in the belief-relation to a LOT sentence that means grass is green.


Motivation 

  • Cognition has a language-like structure 

  • The rules for concatenating symbols are recursive → increasingly complex sentences 

  • The meaning of a complex LOT sentence is a systematic function of the semantic values of the constituent elements and their syntactic organization (compositionality)

2

ANNs - History 

  1. McCulloch and Pitts (1943) 

  • Simple binary threshold model 

  • Suggested what neurons do + how synapses affect neural activity 

  • Could compute basic logical operations (proof that neural nets can perform Turing-computable tasks, ex. AND/OR/NOT) 

  • Did not answer how neural networks evolve + learn

  2. Rosenblatt (1958)

  • 2 layer net called ‘perceptron’ (no hidden layer), just input + output 

  • ‘Neurons that fire together, wire together!’

  • Gave suggestions for training two-layered networks using error correction techniques 

  • Suggested an answer to how neural networks evolve + learn! 


  3. Minsky and Papert (1969)

  • Showed Rosenblatt’s perceptron couldn't compute XOR 

  • Showed it is not obvious how to train multi-layered networks
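
The threshold units and the XOR limitation above can be sketched directly. A minimal illustration (the weights and thresholds are my own choices for the example, not taken from the 1943 paper):

```python
# A McCulloch-Pitts binary threshold unit; weights and thresholds
# below are illustrative choices.

def mp_neuron(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Basic logical operations, each computed by a single threshold unit:
AND = lambda x, y: mp_neuron([x, y], [1, 1], threshold=2)
OR  = lambda x, y: mp_neuron([x, y], [1, 1], threshold=1)
NOT = lambda x:    mp_neuron([x], [-1], threshold=0)

# XOR, by contrast, is not linearly separable: no single choice of
# weights and threshold makes mp_neuron compute it, which is Minsky
# and Papert's point against the two-layer perceptron.
```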

3

Connectionism

A modern take on computation → a system of many simple units trained to execute certain functions! 

  • Many simple units/nodes that interact via connections with varying ‘weights’ (excitatory/inhibitory)

    • Pattern of activation across nodes. Can be categorized/used for perceptual discrimination tasks 

    • Engages in representational activity!

  • Nodes are interconnected across adjacent layers

  • Memory distributed in connection weights (no central controller), rather than a local representation

    • Short term memory stored in unit’s changing state of activation 

    • Long term memory stored in strength of connection weights between units 

  • The only activity in the system is the changing activation level of units, in response to cumulative signal from connected units 

    • Sending signal: Simple function of activation level (representable numerically)

    • Receiving signal: Complex function of signal sent + the excitatory/inhibitory weight of the connection w/ the unit that sent signal 

  • Many more connections in network than units

  • Pattern of weighted connection among units determines the character of a given network (ex. particular pattern of activation given a certain input)
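
The send/receive behaviour described above can be sketched numerically. In this illustration (all weights and activation levels are made-up numbers, not from any particular model), a unit's new activation is a simple squashing function of the cumulative weighted signal from its senders:

```python
import math

def receive(incoming_activations, weights):
    """Cumulative input: each incoming signal scaled by its
    excitatory (+) or inhibitory (-) connection weight."""
    return sum(a * w for a, w in zip(incoming_activations, weights))

def activation(net_input):
    """Squash the cumulative signal into a 0-1 activation level."""
    return 1.0 / (1.0 + math.exp(-net_input))

# Three sending units with mixed excitatory/inhibitory connections.
# Long-term memory lives in the connection weights; short-term memory
# in the unit's changing activation state.
senders = [0.9, 0.2, 0.7]
weights = [0.5, -1.2, 0.8]
new_state = activation(receive(senders, weights))
```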

4

Why connectionism?

  1. Provides a model of cognition that is closer to the biological structure of the brain (neural networks + parallel processing)

  2. Offers superior learning capabilities from environmental data (learns from environment + graceful degradation)

  3. Good at ambiguous tasks/handling patterns! Ex. filling in gaps, finding similarities, recognizing repeated instances 

  • Patterns that networks are trained to respond to are drawn from massive amounts of info, involving large numbers of weakly correlated variables 

5

Machine Learning Training

  • Networks are trained to execute tasks (mapping functions) via training sets (exposure to examples) 

  • Examples are precategorized/pre-labelled → network gets better over time after using learning algorithm 

Error propagation:

  • Compare actual output with desired output; adjust connections based on feedback to reduce discrepancy. Gradual error reduction over time!
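
The compare-and-adjust cycle above can be sketched as a perceptron-style error-correction rule on a pre-labelled training set (here learning AND; the learning rate and epoch count are arbitrary illustrative choices):

```python
# Perceptron-style error correction: compare actual output with desired
# output, then nudge weights to reduce the discrepancy.

def predict(weights, bias, x):
    """Threshold unit: fire iff the weighted sum plus bias is non-negative."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias >= 0 else 0

def train(examples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in examples:          # examples are pre-categorized
            error = target - predict(weights, bias, x)   # desired - actual
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error               # gradual error reduction
    return weights, bias

AND_SET = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(AND_SET)
```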

6

Learning algorithm and Error Propagation

Training technique, automated procedure for adjusting connection weights/activation thresholds until desired results are reached! 

  • Gradually nudge connection weights towards target response (ex. backpropagation of error) 

  • Once sufficiently trained on the training set, the network can generalize and perform on novel data 
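
The gradual nudging described above can be sketched as backpropagation of error on a tiny network with one hidden layer, trained on XOR (the task a single-layer perceptron cannot compute). The 2-2-1 architecture, learning rate, and epoch count are illustrative assumptions, not a canonical recipe:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# 2-2-1 net: w[i][j] connects input i to hidden unit j, v[j] connects
# hidden j to the output; bh/bo are biases (lists, so train_step can
# mutate them in place).
w = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
v = [random.uniform(-1, 1) for _ in range(2)]
bh = [random.uniform(-1, 1) for _ in range(2)]
bo = [random.uniform(-1, 1)]

def forward(x):
    h = [sigmoid(x[0] * w[0][j] + x[1] * w[1][j] + bh[j]) for j in range(2)]
    return h, sigmoid(h[0] * v[0] + h[1] * v[1] + bo[0])

def train_step(x, target, lr=0.5):
    h, out = forward(x)
    delta_out = (out - target) * out * (1 - out)   # error signal at output
    for j in range(2):
        # Distribute blame backward through connection v[j]:
        delta_h = delta_out * v[j] * h[j] * (1 - h[j])
        v[j] -= lr * delta_out * h[j]              # nudge toward target
        bh[j] -= lr * delta_h
        w[0][j] -= lr * delta_h * x[0]
        w[1][j] -= lr * delta_h * x[1]
    bo[0] -= lr * delta_out

XOR_SET = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in XOR_SET)

before = total_error()
for _ in range(2000):
    for x, t in XOR_SET:
        train_step(x, t)
after = total_error()   # error gradually reduced over training
```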

7

Advantages of Error Propagation/Supervised Learning

  • Graceful Degradation: Damage to node = slight decrease in performance, not collapse of whole system 

  • Fault Tolerance: System can continue to operate in the event of failing component(s) 

  • Sensitive to pattern differences undetectable by humans 

  • Resilient to noise/ambiguity/damage 

  • Spontaneous generalization to novel cases

8

Concerns of Error Propagation/Supervised Learning

  1. Predicts forms of neural connections not found in mammalian brains 

  2. Problems with scaling up to more complex tasks 

  • Catastrophic forgetting: System matures up to a point; when trained on novel data, performance dramatically declines + does not recover

  3. Require huge quantities of training data and repetition to learn very simple tasks; no one-shot learning (as seen in humans) 

  4. Unobvious how to train multilayer networks → with more hidden layers, it is hard to distribute blame for the error signal 

  5. Requires pre-labelled + pre-categorized training data 

  • The ANN is not learning how to classify data on its own; all answers are provided to the ANN 

  • Can cause discrepancies between desired output + actual output 

  6. If a system already possesses the target mapping (has the answer through the data set), is it really learning?

  • Connectionism was presumed to be an alternative to hard-coded knowledge!

9

Respects in which ANNs and LOT differ

  1. LOT: Symbolic, rule-based system (top-down)

  • Knowledge represented in a propositional, language format (similar to human thought) 

  • Good at structured tasks

  • Step-by-step reasoning (sequential processing) 

  • Explicit programming of rules/structured dataset → learns from fewer examples

  • Semantic constituents → combinatorial structure allows different organizations to carry content 

    • Ex. someone who can think “John loves Mary” can also think “Mary loves John” 

  2. ANNs: Connectionist, learning-based systems (bottom-up) 

  • Knowledge represented through numerical weights distributed across layers of nodes 

  • Knowledge distributed through the network!

  • Parallel processing, good for unstructured data (pattern recognition) 

  • Require massive datasets 

  • Lack combinatorial structure 

10

The argument from systematicity for the claim that the human mind has a LOT structure

Human mind is combinatorial! Key feature: 

  • Semantic constituents + structure of propositional contents are isomorphic to the syntactic constituents + organization of the representational vehicles that carry its contents

    • Hence why someone can think John loves Mary and also Mary loves John 

  • ANNs lack combinatorial structure 

Fodor’s argument against systematicity in ANNs 

  1. If human thought is systematic, then it must have vehicles whose structure maps on to the structure of their contents.

  2. If ANNs are good models of human thought, then human thought cannot have vehicles whose structure maps on to the structure of their contents.

  3. Human thought is systematic.

  4. Human thought must have vehicles whose structure maps on to the structure of their contents.

  5. Artificial neural networks are not good models of human thought.

11

ANN advocate’s reply to Fodor’s argument for systematicity

Radical response: Deny that human thought is systematic

ANN advocate’s argument against systematicity in humans

  1. If human thought is systematic, then it must have vehicles whose structure maps on to the structure of their contents.

  2. If ANNs are good models of human thought, then human thought cannot have vehicles whose structure maps on to the structure of their contents.

  3. ANNs are good models of human thought.

  4. Human thought cannot have vehicles whose structure maps on to the structure of their contents.

  5. Human thought is not systematic

ANN theorists wishing to explain propositionally structured thought face a dilemma:

  1. either an ANN can be trained to display systematicity or it cannot. 

  2. If it cannot, it is hopeless as a general account of human cognition. 

  3. If it can, it simply vindicates the classical approach.

12

Theoretical morals Smith has about AI and its failings (general)

Recap of Smith’s Project: Arguing for the distinction between 2 kinds of intelligence

  1. The sort we exemplify 

  • Sees this as ‘genuine intelligence’ → manifests the capacity for judgment as opposed to just reckoning (calculative reasoning) like computers

  2. The sort modern computers exemplify 

AI systems need to deal with reality as it actually is, not the way we think it is!

  • We represent the world through thoughts/language → but the world is not inherently split into neat objects, as we may perceive it. Not complete representations of reality!

  • He finds ANNs more compatible with his view of reality (can get a more objective understanding of world)

13

1st Wave AI Failures

  1. Neurological 

  • Architecture of digital computer is much different from the human brain 

  • Individual brain operations are much slower than a digital computer’s!

  2. Perceptual 

  • Classicism rests on the false assumption that perception = recovering info about objects from retinal stimulation 

  3. Epistemological 

  • AI can know a label but not the referent in the real world (ex. the symbol ‘dog’ is a node in a database, not an object with weight/fur/life) 

  • Inability to generate genuine knowledge or understand the world, despite being able to manipulate symbols and follow rules

  4. Ontological 

  • We exist in a ‘non-conceptual’ world

  • 1st wave AI thinks that the world comes in neat, ontologically discrete objects. This misconception explains its ultimate inadequacy 

  • We cannot assume an intelligent system that we build will parse reality in the same way we do

If the system is intelligent, it should make its own sense of things, including constructing its own conceptual schemes

14

Theoretical morals Smith draws from 1st wave AI

  1. The good: Representational Mandate

  • Proper functioning of world-directed system must be governed by normative criteria applying to its mechanical operations

  • Operations framed in terms of situations + states of affairs of the world that the system is representing/reasoning about, which situations and states of affairs will generally not be within effective (causal) reach

  2. The bad: Deepest failing = AI took objects for granted 

  • Rested on naive conception of the world outside the mind

    • Naive to assume that the world comes chopped into neat, ontologically discrete objects

  • Misconception of world in GOFAI: interpret/filter the world through abstractions or idealizations

    • Highlights some aspects of what is represented

    • Minimizes/distorts others, and ignores/abstracts in-the-world detail

15

Theoretical morals Smith draws from 2nd wave AI

  1. The good: Yields valuable philosophical insights 

  • Suggests a rich + ineffable web of statistical relatedness that weaves the world together into an integrated “subconceptual” whole 

  • However, reality itself surpasses an intelligent being’s capacity to conceptualize or verbally articulate it 

  • Intelligent systems must come to grips with the world on a conceptual level


  2. The bad: Makes a similar error to GOFAI; fails to explain how a system comes to understand what it is representing/talking about 

  • To achieve understanding (and genuine intelligence), Smith suggests the system would need to be capable of assuming a stance of deference to the world 

    • Must have its own stance, not just act in accordance with human deference

To do this, it will have to know: 

  1. That there is a world 

  2. That its representations are about that world 

  3. That it + its representations must defer to the world that they represent