Cognition Lectures 10 and 11: Semantics + Large Models


20 Terms

1. Semantic Categories

Hierarchical tree of concepts

2. Semantic Categories

  • Showing people movies (a dynamic stimulus)

  • Can assign everything in a frame to a category

3. Semantic Categories (cont.)

Estimating the semantic tuning of each voxel in the brain

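The estimation step can be sketched as a voxel-wise encoding model: regress each voxel's response on the category features labeled in each movie frame. All data, sizes, and the choice of ridge regression here are illustrative assumptions, not the lectures' exact method.

```python
import numpy as np

# Hypothetical sketch: recover each voxel's "semantic tuning" by regressing
# its response on binary category labels assigned to each movie frame.
rng = np.random.default_rng(0)
n_frames, n_categories, n_voxels = 200, 5, 3

X = rng.integers(0, 2, size=(n_frames, n_categories)).astype(float)  # categories per frame
true_tuning = rng.normal(size=(n_categories, n_voxels))              # ground truth (unknown in practice)
Y = X @ true_tuning + 0.1 * rng.normal(size=(n_frames, n_voxels))    # noisy voxel responses

# Ridge regression: tuning = (X'X + lam*I)^-1 X'Y, one column of weights per voxel
lam = 1.0
tuning = np.linalg.solve(X.T @ X + lam * np.eye(n_categories), X.T @ Y)
```

Each column of `tuning` is one voxel's estimated weight on each semantic category.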
4. Key takeaway

  • Category information is widely distributed across many areas of the cortex

  • Stimulus-selective regions (e.g. PPA) are “selectivity peaks” in this distribution

5. Computational Modeling

Approximating the mind with math

  • Key idea: emulating the code and operations of mental representation in an alternate, artificial medium

6. Perceptron

The first age of the neural network

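A minimal perceptron can be sketched in a few lines: a single threshold unit trained with the classic error-correction rule. The AND task and all values here are illustrative, not from the lectures.

```python
import numpy as np

# Minimal perceptron sketch (toy data): learn logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
for _ in range(10):                     # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)      # threshold unit
        w += (yi - pred) * xi           # perceptron learning rule
        b += (yi - pred)

preds = [int(w @ xi + b > 0) for xi in X]  # [0, 0, 0, 1] after convergence
```

The rule only nudges weights when the prediction is wrong, which is why a single-layer perceptron can learn linearly separable tasks like AND but not XOR.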
7. Neocognitron

Hierarchical network structure, similar to that of the human brain

8. Modern Neural Networks (2012-now)

Larger version of earlier networks (more layers ~ “deep”)

  • The key breakthroughs that enabled modern neural networks were more layers and more training data

  • The first breakthrough happened in visual categorization, the most explored domain up to the 2010s

9. Towards Human-Scale Categorization

The first neural network that could distinguish between thousands of items

10. Usefulness of Deep Neural Networks (DNNs)

Good predictor of human behavior

  • Greatly improved ability to predict performance, judgments, and error patterns compared to, e.g., HMAX

11. Representational Similarity Analysis (RSA)

Quantifying the joint similarity structure of a set of items

*silly picture on slides

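The computation behind RSA can be sketched with a toy representational dissimilarity matrix (RDM): one minus the pairwise correlation between items' response patterns. The patterns here are random placeholders, not real neural or model data.

```python
import numpy as np

# Toy RSA sketch: 4 items, each evoking a 10-channel response pattern.
rng = np.random.default_rng(0)
patterns = rng.normal(size=(4, 10))

# RDM: 1 - Pearson correlation between every pair of item patterns.
rdm = 1.0 - np.corrcoef(patterns)
# Two systems (e.g., a DNN layer and a brain region) are then compared by
# correlating the off-diagonal entries of their respective RDMs.
```

The RDM abstracts away the measurement format, which is what lets a model layer and a cortical region be compared in a common similarity space.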
12. Usefulness of Deep Neural Networks (DNNs)

Good model for neural representations across (visual) cortex

  • Greatly improved ability to predict performance, judgments, & error patterns compared to e.g., HMAX

  • They start capturing things in meaning space rather than just what they look like

13. Usefulness of Modern Neural Networks

  • The representations in many of these networks are similar to the endpoint of computation in the brain

  • We can build models that predict what humans are doing

14. Are DNNs Good Models for the Visual System?

Sort of, maybe?

  • Large improvements in both categorization performance and prediction of IT/LO neural responses

15. Categories as Collections of Features

Slow vs. fast, big vs. small, furry vs. smooth, etc.

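The feature-collection view can be sketched by representing each item as a vector of feature values and categorizing a new item by its nearest neighbor in feature space. The items, feature values, and nearest-neighbor rule are all illustrative assumptions.

```python
import numpy as np

# Items as hypothetical feature vectors: (speed, size, furriness), each in [0, 1].
features = {
    "rabbit":  np.array([0.9, 0.1, 0.9]),   # fast, small, furry
    "turtle":  np.array([0.1, 0.2, 0.0]),   # slow, small, smooth
    "cheetah": np.array([1.0, 0.5, 0.7]),   # fast, medium, furry
}

# Categorize a new item by the closest stored item in feature space.
new_item = np.array([0.95, 0.15, 0.8])
label = min(features, key=lambda k: np.linalg.norm(features[k] - new_item))
```

Here `new_item` sits closest to "rabbit", illustrating how a bundle of feature values, rather than any single defining feature, determines category membership.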
16. How Categorization Works: A Theory

Categories as multidimensional Twizzlers

  • When an image hits our retina, it doesn't look like it should

  • The visual system turns A into B

17. Are DNNs Good Models for the Visual System?

Increasingly yes; potentially because they untangle manifolds

  • High categorization performance and good prediction of IT/LO neural responses

18. Emulating Complex Behavior

DeepFakes: mimicking appearance, voice, mannerisms, etc.

  • AI systems don't care about solving the problem you give them; they just give the answer they were trained to give

19. Word Embedding Models

Capturing semantic relationships between concepts

Syntactic context

  • “The bear was no larger than the bull”

  • “Told him to ride the bull, but not the bear”

  • “Traded two bull pelts for one bear rug”


Goal

  • Given an input word (“bear”)

  • Predict next word in the sentence

Result

  • Words that co-occur often are encoded similarly in the hidden layer
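The co-occurrence intuition can be sketched without training a network: count each word's context words in a small window and compare the resulting vectors. This is a toy stand-in for the hidden-layer similarity a predictive embedding model learns; the window size and cosine measure are illustrative choices.

```python
import numpy as np

# Toy corpus from the slide's "bear"/"bull" examples.
corpus = [
    "the bear was no larger than the bull",
    "told him to ride the bull but not the bear",
    "traded two bull pelts for one bear rug",
]
vocab = sorted(set(" ".join(corpus).split()))
idx = {w: i for i, w in enumerate(vocab)}

# Count co-occurrences within a +/-2-word window; each word's row is its vector.
counts = np.zeros((len(vocab), len(vocab)))
for sentence in corpus:
    words = sentence.split()
    for pos, w in enumerate(words):
        for c in words[max(0, pos - 2):pos] + words[pos + 1:pos + 3]:
            counts[idx[w], idx[c]] += 1

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "bear" and "bull" appear in similar contexts, so their vectors align.
sim = cosine(counts[idx["bear"]], counts[idx["bull"]])
```

Because "bear" and "bull" keep appearing in the same kinds of contexts, their count vectors overlap and their cosine similarity is well above zero, which is the pattern an embedding model's hidden layer compresses into dense vectors.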

20. Emulating Complex Cognitive Behavior

ChatGPT/Conversational Agents: Talking to the void (and it talks back)

  • Realistic conversations

  • Breadth of fact-based “knowledge”

  • Can convincingly sound like a real person

    • I.e. passes the “Turing test”

  • Can tell you how to build a nuclear bomb