Deep Learning (DL)
A family of algorithms within the connectionist paradigm that uses artificial neural networks to enable machines to reach human-like performance in complex tasks
"Deep" in DL
Technically refers to the number of "hidden" layers in a feed-forward neural network
Empiricism + DL
DL is a radical form of empiricism; it assumes that concepts and knowledge are products of experience and data rather than predefined rules
Early Artificial Neural Networks vs Modern DL
Early artificial neural networks (ANNs) in the 1980s were designed by psychologists to study human cognition; modern DL is primarily driven by engineering and application goals, though its results are increasingly relevant to cognitive science
Backpropagation
An efficient mathematical rule for adjusting the connections (weights) between units based on the error between the desired output and the actual output
Stochastic Gradient Descent (SGD)
The modern refinement of backpropagation where error gradients are estimated over random subsets (batches) of data, making training more efficient for deep models
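The two cards above can be illustrated together. Below is a minimal sketch of SGD, assuming a toy linear model trained on synthetic data (the model, learning rate, and batch size are illustrative choices, not from the cards): error gradients are estimated on random batches and the weights are stepped against them.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=200)
y = 3.0 * X + 0.5 + rng.normal(0, 0.05, size=200)  # true w = 3.0, b = 0.5

w, b = 0.0, 0.0
lr, batch_size = 0.1, 20

for epoch in range(200):
    idx = rng.permutation(len(X))          # reshuffle the data each epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch], y[batch]
        err = (w * xb + b) - yb            # actual output minus desired output
        # gradients of the mean squared error, estimated on this random batch
        grad_w = 2 * np.mean(err * xb)
        grad_b = 2 * np.mean(err)
        w -= lr * grad_w                   # step against the gradient
        b -= lr * grad_b

print(w, b)                                # should approach 3.0 and 0.5
```

The same update rule scales to deep networks, where backpropagation supplies the gradients for every layer's weights rather than just two scalars.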
Convolutional Neural Networks (CNNs)
Architectures specialized for vision; they process input hierarchically, with early layers extracting low-level features (such as edges) and higher layers responding to whole objects
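A minimal sketch of the convolution operation CNNs are built on, in plain NumPy; a single hand-set edge-detection kernel stands in for the learned low-level feature detectors of an early layer (the image and kernel here are illustrative, not from the cards).

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # dot product of the kernel with each image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# An image with a vertical dark-to-bright boundary down the middle
image = np.zeros((5, 6))
image[:, 3:] = 1.0

# A vertical-edge kernel: responds only where intensity changes left-to-right
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

fmap = conv2d(image, kernel)
print(fmap)  # nonzero only along the boundary column
```

In a real CNN, many such kernels are learned from data, and deeper layers apply further convolutions to these feature maps, building up from edges to textures to objects.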
Recurrent Neural Networks (RNNs)
Architectures for sequential data such as language; they maintain an internal state that carries information from earlier inputs forward through the sequence
Long Short-Term Memory (LSTM)
Use "gates" to solve the problem of retaining information over many time steps, allowing the processing of complex sentences
Variational Autoencoders
A deep learning correlate of the free-energy principle in the brain, using Bayesian inference to predict incoming sensory data
Pure Vision
The theory (often associated with David Marr) that the goal of vision is to create a detailed 3D model of the world through hierarchical feature extraction and strictly feed-forward processing
4E Cognition Challenge
Contemporary theories argue cognition is Embodied, Embedded, Enactive, and Extended, rejecting internal representations in favor of action-oriented interaction with the environment
The DL "Revenge" for Computationalism
DL models (like AlexNet) are "shamelessly pure": disembodied, inactive, and static. Yet they routinely outperform humans in object recognition
Rule-Free Learning
DL models challenge the rationalist view (e.g., Chomsky, Pinker) that language requires innate, predefined rules
Syntactic Competence
Modern RNNs can learn complex linguistic structures, such as subject-verb agreement over long distances and syntactic island constraints, purely from exposure to data
Past Tense Debate
Early connectionist models were criticized for being unrealistic (e.g., the "Wickelfeature" problem), but modern DL versions have obviated many of these criticisms, showing how irregular verbs can be learned without algebraic rules