L8 Language and modeling


21 Terms

1
New cards

What are 2 aspects of human cognition?

  • study how information is acquired, represented, and processed in the human brain

  • identify the processes and mechanisms involved in learning and processing

2
New cards

What are 2 aspects of computational modeling?

  • simulate a cognitive process via computational tools and techniques

  • use the model to explain the observed human behavior

3
New cards

What are models of natural language?

Probabilistic machines (trained to predict the most likely next word); humans are not, since humans know what they want to deliver before producing it

→ example: ChatGPT
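The "predict the most likely next word" idea can be sketched with a toy bigram model (the corpus and everything else here are invented for illustration; real systems like ChatGPT use neural networks trained on vastly more data):

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on vastly larger text collections.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word (bigrams).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```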

4
New cards

What is fast mapping?

Children can establish a connection between a novel word and a novel object: with two objects in front of them, one of which they already know, they hand the researcher the “dax” (a made-up word) when asked

  • young children easily map novel words to novel objects in familiar context

5
New cards

How does research work? (3)

  • you have a cognitive process → ex. learning new words

  • you have empirical findings → ex. how do children behave in specific situations

  • ultimate goal is to build a theoretical account to explain the empirical findings

    • what type of info do children use for learning words

*different paradigm approaches

6
New cards

How does input map to output?

input/info/observation → BLACK BOX → output/knowledge/behavior

7
New cards

Theory model

  • environment (stimuli perceived by brain) → cognitive process → behavior (sometimes behavior leads back to environment and affects/changes it)

*the theory is interested in describing the cognitive process and predicting the behavior, not really interested in manipulating the environment

8
New cards

Running an experiment diagram

control the stimuli (environment) and observe the behavior

9
New cards

Building a computational model diagram

simulates stimuli and implements cognitive process and generates behavior → ex. i think humans rely on associations, so i build…

10
New cards

How do humans categorize objects?

categorization model:

  • features, shape, color…

  • also have to explain how model gives priority to one feature over another, what happens when new objects are introduced, are new categories created?…

11
New cards

What are 4 common modeling frameworks?

  • Symbolic: first generation of models influenced by early AI

    • shared the same underlying approach: symbols and logical rules

  • Connectionist: inspired by the architecture of human brain

    • neural network models, limited and hard to train, small dataset

  • Probabilistic: following the success of statistical machine learning techniques

    • statistical models, machine learning, dominant models until neural models came back

  • Neural: a resurrection of old connectionist models

    • deep learning

12
New cards

Describe 2 aspects of symbolic modeling

  • explicit formalization of the representation and processing of knowledge through a symbol processing system

  • representation of knowledge: a set of symbols and their propositional relation

    • symbol of apple = APPLE (01)

    • symbol of cat = CAT (02)

    • symbols are arbitrary, we have to know what properties each symbol has, use propositions to add value to the symbols (01: color = red, animate = False, category = food)
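The symbol-plus-propositions representation can be sketched as follows (the arbitrary IDs are written `O1`/`O2`, and the cat's property values are invented for illustration):

```python
# Symbols are arbitrary tokens; propositions attach properties to them.
# The cat's property values below are invented for illustration.
symbols = {"O1": "APPLE", "O2": "CAT"}

propositions = {
    "O1": {"color": "red",  "animate": False, "category": "food"},
    "O2": {"color": "gray", "animate": True,  "category": "animal"},
}

def lookup(symbol_id, prop):
    """Read a property of a symbol via its propositions."""
    return propositions[symbol_id][prop]

print(lookup("O1", "category"))  # food
```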

13
New cards

Describe 3 aspects of the learning and processing mechanism → a symbolic approach

  • Processing and updating knowledge via general rules or schemas and under certain constraints

  • Applied rules must always be true

  • Each rule is augmented by a list of exceptions → tokens for which the rule is not applicable
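The rule-plus-exceptions idea can be sketched with the English past tense (the exception list here is a small illustrative sample):

```python
# General rule: form the past tense by adding "-ed".
# The rule is augmented by a list of exceptions: tokens for which
# the rule is not applicable. This list is a small illustrative sample.
exceptions = {"go": "went", "bring": "brought", "sing": "sang"}

def past_tense(verb):
    if verb in exceptions:      # exception: the general rule does not apply
        return exceptions[verb]
    return verb + "ed"          # general rule applies to everything else

print(past_tense("walk"))   # walked
print(past_tense("bring"))  # brought
```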

14
New cards

Describe symbolic modeling using the English Past-Tense example from language

  • context-free grammar (CFGs)

    • a symbolic formalism for representing grammatical knowledge of language
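A toy CFG can be sketched as a mapping from non-terminals to expansion rules (the grammar and vocabulary here are invented for illustration):

```python
import random

# A toy context-free grammar: each non-terminal maps to a list of
# possible expansions. Grammar and vocabulary are invented.
grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["cat"], ["dog"]],
    "V":  [["chased"], ["saw"]],
}

def generate(symbol):
    """Recursively expand a symbol into a list of terminal words."""
    if symbol not in grammar:                  # terminal: emit as-is
        return [symbol]
    expansion = random.choice(grammar[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate("S")))  # e.g. "the cat chased the dog"
```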

15
New cards

How do you deal with uncertainty? → Probabilistic Modeling

uncertainty:

  • two tables and two chairs create the shape of a stool → you have categories for tables and chairs, but now you have another object that doesn’t really fit the already created categories

    • the stool shape is 65% chair and 35% table → follows a certain template

to deal with uncertainty, apply probability theory on previous exposure to data:

  • Representation of knowledge

    • Weighted information units that reflect bias or confidence based on previous observations

  • Learning mechanism

    • Principled algorithms for weighting and combining evidence to form hypotheses that explain data best

*Often in combination with techniques and formalisms from other frameworks
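Combining weighted evidence to pick the best hypothesis can be sketched with Bayes' rule on the stool example (the priors and likelihoods are made-up illustrative numbers):

```python
# Bayes' rule combines prior bias with new evidence about an
# ambiguous object. Priors and likelihoods are made-up numbers.
priors = {"chair": 0.5, "table": 0.5}          # confidence before looking
likelihood = {"chair": 0.65, "table": 0.35}    # P(observed shape | category)

# posterior ∝ prior × likelihood, then normalize so it sums to 1.
unnormalized = {c: priors[c] * likelihood[c] for c in priors}
total = sum(unnormalized.values())
posterior = {c: p / total for c, p in unnormalized.items()}

print(posterior)  # "chair" is the more probable category
```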

16
New cards

How does adding probabilities to symbolic models work?

  • a symbolic rule-based representation

    • each rule is augmented with a probability value indicating its applicability

  • english past-tense formation rules

*estimating probabilities comes from analyzing datasets

*how do we use the probabilities → we have to know how to combine the pieces of evidence, which are relevant…

  • overgeneralizing, bayesian rule (how to combine different pieces of info)
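Estimating rule probabilities from a dataset can be sketched by counting how many verbs each rule accounts for (the dataset is a made-up sample; real estimates come from large corpora):

```python
from collections import Counter

# Toy dataset of (verb, past-tense) pairs.
data = [("walk", "walked"), ("jump", "jumped"), ("talk", "talked"),
        ("go", "went"), ("bring", "brought")]

# Estimate each rule's probability as the fraction of verbs it covers.
counts = Counter("add -ed" if past == verb + "ed" else "irregular"
                 for verb, past in data)
total = sum(counts.values())
probs = {rule: n / total for rule, n in counts.items()}

print(probs)  # {'add -ed': 0.6, 'irregular': 0.4}
```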

17
New cards

What are 4 aspects of connectionist modeling?

  • human brain as inspiration → neuron (nucleus, input-output signal channels)

  • strength of neural network models → combining different elements

  • representation of knowledge: distributed activation patterns

  • input pattern → hidden layers → output pattern (behavior)

    • how is the symbol “apple” turned into a distributed representation in a computational model?

    • learning happens by connection weights changing over time to reduce error
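The weight-update idea can be sketched with a single connection trained by the delta rule (learning rate, input, and target are illustrative values):

```python
# One connection weight adjusted over time to reduce prediction error
# (the delta rule). Learning rate, input, and target are illustrative.
w = 0.0          # initial connection weight
lr = 0.1         # learning rate
x, target = 1.0, 1.0

for _ in range(50):
    output = w * x              # forward pass through the connection
    error = target - output     # how wrong the prediction was
    w += lr * error * x         # nudge the weight to shrink the error

print(round(w, 3))  # close to 1.0 after training
```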

18
New cards

Explain distributed input representation (5)

  • Distribute Features: Features (shape, color, edible, animate) are spread across multiple nodes in a model, forming a distributed representation. These features work together to represent input data.

  • Input to Model: For tasks like predicting the English past tense, input words (e.g., "walk", "bring") are processed into distributed features, represented as nodes (circles in diagrams). Each node holds a part of the word's meaning.

  • Model Processing: The model uses phonemes as input (e.g., "w", "a", "k", "l") and generates outputs like past-tense forms by processing these features through nodes. The result is a set of features corresponding to word categories.

  • Feedback from Weights: As the model processes data, it adjusts weights based on feedback (errors), improving predictions over time.

  • New Word Generation: By distributing features and adjusting weights, the model generates new words or modifies existing ones based on input and learned patterns.

Key Points about the Concept:

  • Distributed Representations: These models rely on the idea that information, whether it's input or output, is not isolated but represented across multiple factors or features. For instance, when categorizing an object, the shape, color, and other attributes come together to form a holistic understanding of what the object is.

  • Neural Networks for Predicting Language Patterns: These models are widely used in language processing tasks such as predicting verb tenses based on phonemes.
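A distributed feature representation can be sketched as activation patterns over shared feature nodes (the features and activation values are invented for illustration):

```python
# A distributed representation: each object is a pattern of activation
# across shared feature nodes, not a single atomic symbol.
# Features and activation values are invented for illustration.
features = ["round", "red", "edible", "animate"]
apple = [1.0, 1.0, 1.0, 0.0]    # activation pattern for "apple"
cat   = [0.0, 0.0, 0.0, 1.0]    # activation pattern for "cat"

def overlap(a, b):
    """Dot product: objects with shared active features score higher."""
    return sum(x * y for x, y in zip(a, b))

print(overlap(apple, cat))    # 0.0 — no shared active features
print(overlap(apple, apple))  # 3.0 — full self-overlap
```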

19
New cards

What is deep learning? (2)

  • a resurrection of the connectionist models due to higher computational power and better training techniques

  • adding more hidden layers increases model’s power in learning abstract complex structures, hence “deep” learning

  • ex. image processing models

    • certain parts of the model become specialized for certain features (shape, human vs. cat face)

    • model recognizes which features are important

20
New cards

What are 3 common modeling frameworks?

  • Symbolic: first generation of models influenced by early AI

    • Symbolic systems of knowledge representation

    • Logic-based inference techniques

  • Probabilistic: following the success of statistical machine learning techniques

    • Combining descriptive power of symbolic models with the flexibility of the connectionist models

  • Neural: inspired by architecture of human brain

    • Distributional representation of knowledge

    • parallel processing of data

21
New cards

What are 4 things using computational methods to model cognitive processes enable us to do?

Using computational methods for modeling cognitive processes enables us to

  • study these processes through simulation

  • evaluate the plausibility of existing theories of cognition

  • explain the observed human behavior during a specific process

  • predict behavioral patterns that have not yet been experimentally investigated