The term "deep neural networks" refers to
neural networks that have multiple hidden layers
Cichy and Kaiser (2019) argue that deep neural networks have several benefits, including
all of the above
The Turing test refers to
a test of how well machines can imitate human behavior
Jones and Bergen (2025) found that large language models like GPT-4.5 were often mistaken for humans, especially...
when the experimenters encouraged the model to adopt a humanlike persona
Potential concerns with generative artificial intelligence include
all of the above
When defining the depth of a neural network, which layers are counted?
The hidden and output layers (representing transformations of the input)
What key non-algorithmic advance allowed deep learning networks, which were conceptually developed in the 1960s-1980s, to succeed only recently?
Hardware and computational advances (e.g., faster processing units)
The fundamental objective during the training phase of a neural network model is to
Minimize the error between the model's prediction and the true output
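For concreteness, here is a minimal Python sketch (not from the lecture; the weight, training example, and learning rate are all invented) of gradient descent nudging a one-weight model toward the true output:

```python
# Gradient descent shrinking the squared error between a one-weight
# model's prediction and the true output.
w = 0.0                      # initial connection weight
x, y_true = 2.0, 8.0         # one training example (hypothetical values)
lr = 0.05                    # learning rate

for step in range(20):
    y_pred = w * x           # model's prediction
    error = y_pred - y_true  # prediction error
    grad = 2 * error * x     # gradient of squared error w.r.t. w
    w -= lr * grad           # adjust weight to reduce the error

print(round(w, 3))  # approaches 4.0, where (w*x - y_true)**2 is minimized
```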
Backpropagation is the influential learning algorithm used to adjust connection weights. This process specifically involves
Dividing the error signal among the network's nodes ("blame assignment")
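A hedged sketch of how that blame assignment works via the chain rule, on a tiny invented network with two hidden nodes (all weights and values are hypothetical):

```python
import math

x = 1.0
w_h = [0.5, -0.3]          # input -> hidden weights
w_o = [0.8, 0.2]           # hidden -> output weights
y_true = 1.0

sig = lambda z: 1 / (1 + math.exp(-z))

h = [sig(w * x) for w in w_h]               # hidden activations
y = sum(wo * hi for wo, hi in zip(w_o, h))  # output (linear for simplicity)

# The output error is divided among the hidden nodes in proportion
# to their connection weights -- each node's share of the "blame":
d_y = y - y_true
blame_h = [wo * d_y for wo in w_o]
grad_w_h = [b * hi * (1 - hi) * x for b, hi in zip(blame_h, h)]

print(d_y, blame_h, grad_w_h)
```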
The vanishing gradient problem is a difficulty encountered when training very deep neural networks, primarily affecting the assignment of blame (weight adjustment) to which part of the network?
The early (initial) hidden layers
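A quick illustration of why the earliest layers get the least blame: the sigmoid's derivative never exceeds 0.25, so a gradient multiplied by it once per layer shrinks rapidly on its way back from the output (layer count and activation choice are illustrative assumptions):

```python
# The sigmoid derivative is at most 0.25, so a gradient repeatedly
# multiplied by it vanishes as it travels back toward early layers.
max_sigmoid_deriv = 0.25

grad = 1.0  # error signal at the output layer
for layer in range(10, 0, -1):
    grad *= max_sigmoid_deriv  # best case per sigmoid layer
    print(f"layer {layer}: gradient <= {grad:.2e}")
# The earliest layers receive almost no signal to adjust their weights.
```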
In the structure of a neural network model, the model's "knowledge" or acquired representation of the input is established and stored within the
Connection weights (analogous to synapses in the brain)
Large Language Models (LLMs), such as those powering modern AI chatbots, are fundamentally trained on a massive scale primarily to perform which task?
Predict the upcoming words in a sequence
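As a toy stand-in for that training task, the sketch below predicts the next word from simple bigram counts rather than a neural network (the corpus is invented):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1                     # count which word follows which

def predict(word):
    # Return the most frequent continuation seen in training.
    return nxt[word].most_common(1)[0][0]

print(predict("the"))  # "cat" (seen twice after "the")
```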
The summary mentions that strong biases are baked into models. This is primarily a result of
The large, uncurated data sets used for training the LLMs
A node in a neural network "fires" (becomes active) only when its summed input signal exceeds a certain numerical value. This mechanism is most analogous to a biological neuron reaching its
Firing threshold
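A minimal sketch of that mechanism as a step activation (the threshold value is arbitrary):

```python
# The node fires only when its summed input exceeds the threshold,
# analogous to a biological neuron's firing threshold.
def fires(summed_input, threshold=1.0):
    return 1 if summed_input > threshold else 0

print(fires(0.4))   # 0 -> below threshold, node stays silent
print(fires(1.7))   # 1 -> threshold exceeded, node fires
```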
Which type of connection, explicitly mentioned as a possibility alongside feedforward and feedback, refers to signals passed within the same layer of nodes in a neural network?
Lateral
The use of artificial neural networks in cognitive science may provide teleological explanations. What does a teleological explanation focus on?
Explaining the purpose or end goal (why) of a cognitive process
The challenge of determining if LLMs are truly "intelligent," despite passing benchmark tests of human-like performance, is conceptually related to the limitations of defining intelligence using the
Turing Test
The output layer of a neural network is directly responsible for
Generating the model’s final prediction or decision
If an input signal passes through three hidden layers before reaching the output layer, how many transformations has the original input gone through?
4 (one transformation per hidden layer, plus a final one at the output layer)
The activation of a specific node in a hidden layer is calculated as the
Sum of the input nodes' activations, each multiplied by its respective connection weight
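A short sketch of that weighted sum for a single hidden node (the input activations and weights are made up):

```python
# Each incoming activation is scaled by its connection weight,
# then the products are added to give the node's summed input.
inputs  = [0.9, 0.1, 0.5]      # activations of the input nodes
weights = [0.4, -0.2, 0.7]     # connection weights into this hidden node

summed_input = sum(i * w for i, w in zip(inputs, weights))
print(round(summed_input, 2))  # 0.36 - 0.02 + 0.35 = 0.69
```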
According to the material (Kanwisher et al., 2023), one insight provided by deep networks about how the mind works is the ability to ask "why" questions. This suggests using neural networks to understand the
Evolutionarily adaptive purpose of a cognitive function
A connection that passes a signal from a later layer (e.g., a hidden layer) back to an earlier layer (e.g., an input layer) is known as a
Feedback connection
The success and impressive capability of current Large Language Models (LLMs) is heavily reliant on
Training on massive, often web-scraped, data sets
The connection weights in a neural network are explicitly analogous to which component in a biological brain?
Synapses
A "deep network" is generally defined in the lecture as a network having
Three or more layers (excluding the input layer)