3- Neural Networks and Artificial Intelligence


25 Terms

1

The term "deep neural networks" refers to

neural networks that have multiple hidden layers

2

Cichy and Kaiser (2019) argue that deep neural networks have several benefits, including

all of the above

3

The Turing test refers to 

a test of how well machines can imitate human behavior

4

Jones and Bergen (2025) found that large language models like GPT-4.5 were often mistaken for humans, especially...

when the experimenters encouraged the model to adopt a humanlike persona

5

Potential concerns with generative artificial intelligence include

all of the above

6

When defining the depth of a neural network, which layers are counted?

The Hidden and Output layers (representing transformations of input)

7

What key non-algorithmic advance allowed deep learning networks, which were conceptually created in the 1960s-1980s, to become practical only in recent years?

Hardware and computational advances (e.g., faster processing units)

8

The fundamental objective during the training phase of a neural network model is to

Minimize the error between the model's prediction and the true output
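
A minimal sketch of that objective, assuming a mean-squared-error measure (the function name and numbers are illustrative, not from the lecture):

```python
# Illustrative sketch: the error that training tries to minimize,
# here measured as mean squared error between predictions and true outputs.
def mean_squared_error(predictions, targets):
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

# A smaller value means the model's predictions are closer to the true outputs.
print(round(mean_squared_error([0.9, 0.2, 0.4], [1.0, 0.0, 0.5]), 3))  # 0.02
```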

9

Backpropagation is the influential learning algorithm used to adjust connection weights. This process specifically involves

Dividing the error signal up amongst the network nodes ("blame assignment")
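
A hedged, single-node sketch of that blame assignment (all values made up for illustration): each weight's share of the error is the error signal times that weight's input, and the weight is nudged to reduce the error.

```python
# Toy blame assignment for one linear node with a squared-error objective.
inputs = [0.5, 0.8]        # activations arriving at this node
weights = [0.3, -0.2]      # current connection weights
target = 1.0
learning_rate = 0.1

prediction = sum(x * w for x, w in zip(inputs, weights))  # forward pass
error = prediction - target                               # how wrong the node was

# Each weight's "blame" is error * its input; adjust the weight to shrink the error.
weights = [w - learning_rate * error * x for x, w in zip(inputs, weights)]
print(weights)  # both weights move in the direction that reduces the error
```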

10

The vanishing gradient problem is a difficulty encountered when training very deep neural networks, primarily affecting the assignment of blame (weight adjustment) to which part of the network?

The early (initial) hidden layers
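
A rough illustration of why those early layers learn slowly: the error signal reaching a layer is, roughly, a product of per-layer derivatives, and multiplying many values below 1 drives it toward zero (the 0.25 below is the maximum slope of a sigmoid, used only as an example):

```python
# Toy illustration of the vanishing gradient: if each layer passes back a
# derivative of about 0.25, the error signal shrinks geometrically with depth.
layer_derivative = 0.25
gradient = 1.0
for depth in range(1, 11):
    gradient *= layer_derivative
print(f"gradient reaching the earliest layer after 10 layers: {gradient:.2e}")
# ~9.5e-07: the early (initial) hidden layers receive almost no blame to learn from.
```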

11

In the structure of a neural network model, the model's "knowledge" or acquired representation of the input is established and stored within the

Connection weights (analogous to synapses in the brain)

12

Large Language Models (LLMs), such as those powering modern AI chatbots, are fundamentally trained on a massive scale primarily to perform which task?

Predict the upcoming words in a sequence
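
As a toy illustration only (real LLMs use neural networks trained on enormous text collections, not simple counts), next-word prediction can be mimicked by counting which word follows which in a corpus:

```python
from collections import Counter, defaultdict

# Count which word follows each word in a tiny example corpus.
corpus = "the cat sat on the mat the cat slept".split()
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

# Predict the most likely word after "the".
print(following["the"].most_common(1))  # [('cat', 2)]
```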

13

The summary mentions that strong biases are baked into models. This is primarily a result of

The large, uncurated data sets used for training the LLMs

14

A node in a neural network "fires" (becomes active) only when its summed input signal exceeds a certain numerical value. This mechanism is most analogous to a biological neuron reaching its

Firing threshold
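
A minimal sketch of that thresholding behavior (the threshold value is assumed for illustration):

```python
# A node "fires" only when its summed input exceeds the threshold,
# analogous to a biological neuron reaching its firing threshold.
def fires(summed_input, threshold=0.5):
    return summed_input > threshold

print(fires(0.7))  # True: the node becomes active
print(fires(0.3))  # False: the node stays silent
```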

15

Which type of connection, explicitly mentioned as a possibility alongside feedforward and feedback, refers to signals passed within the same layer of nodes in a neural network?

Lateral

16

The use of artificial neural networks in cognitive science may provide teleological explanations. What does a teleological explanation focus on?

Explaining the purpose or end goal (why) of a cognitive process

17

The challenge of determining if LLMs are truly "intelligent," despite passing benchmark tests of human-like performance, is conceptually related to the limitations of defining intelligence using the

Turing Test

18

The output layer of a neural network is directly responsible for

Generating the model’s final prediction or decision

19

If an input signal passes through three hidden layers before reaching the output layer, how many transformations has the original input gone through?

4 (one transformation at each of the three hidden layers, plus one at the output layer)

20

The activation of a specific node in a hidden layer is calculated as the

Sum of input nodes multiplied by their respective connection weights
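
A short sketch of that calculation with made-up numbers:

```python
# Activation of one hidden node: each input node's activity is multiplied by
# its connection weight to this node, and the products are summed.
input_activations = [0.2, 0.9, 0.5]
connection_weights = [0.4, -0.1, 0.7]

activation = sum(a * w for a, w in zip(input_activations, connection_weights))
print(round(activation, 2))  # 0.2*0.4 + 0.9*(-0.1) + 0.5*0.7 = 0.34
```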

21

According to the material (Kanwisher et al., 2023), one insight provided by deep networks about how the mind works is the ability to ask "why" questions. This suggests using neural networks to understand the

Evolutionarily adaptive purpose of a cognitive function

22

A connection that passes a signal from a later layer (e.g., a hidden layer) back to an earlier layer (e.g., an input layer) is known as a

Feedback connection

23

The success and impressive capability of current Large Language Models (LLMs) is heavily reliant on

Training on massive, often web-scraped, data sets.

24

The connection weights in a neural network are explicitly analogous to which component in a biological brain?

Synapses

25

A "deep network" is generally defined in the lecture as a network having

Three or more layers (excluding the input layer)