Perconti and Plebe (Machine Learning)


Last updated 5:44 AM on 4/10/26

16 Terms

1
New cards

Deep Learning (DL)

A family of algorithms within the connectionist paradigm that uses artificial neural networks to enable machines to reach human-like performance in complex tasks

2
New cards

"Deep" in DL

Technically refers to the number of "hidden" layers in a feed-forward neural network
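As an illustration (not part of the original cards), a minimal NumPy sketch of a feed-forward network whose "depth" counts its hidden layers; all sizes and weights here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, hidden_weights, w_out):
    """Pass x through each hidden layer, then a linear output layer."""
    h = x
    for w in hidden_weights:          # one weight matrix per hidden layer
        h = relu(h @ w)
    return h @ w_out

# Two hidden layers -> a "depth-2" network in the card's sense.
hidden_weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8))]
w_out = rng.normal(size=(8, 1))
x = rng.normal(size=(1, 4))
y = forward(x, hidden_weights, w_out)
print(y.shape)
```

Adding a matrix to `hidden_weights` makes the network one layer "deeper" without changing anything else.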

3
New cards

Empiricism + DL

DL is a radical form of empiricism; it assumes that concepts and knowledge are products of experience and data rather than predefined rules

4
New cards

Early Artificial Neural Networks vs Modern DL

Early artificial neural networks (ANNs) in the 1980s were designed by psychologists to study human cognition - Modern DL is primarily driven by engineering and application goals, though its results are increasingly relevant to cognitive science

5
New cards

Backpropagation

An efficient mathematical rule for adapting the connections (weights) between units based on the error between the desired output and the actual output
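A minimal sketch of that rule (illustrative, not from the paper): a one-hidden-layer network where the output error is propagated back through each layer to obtain weight gradients, and one small update step reduces the error.

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.normal(size=(1, 3))
t = np.array([[1.0]])                 # desired output
w1 = rng.normal(size=(3, 4)) * 0.5
w2 = rng.normal(size=(4, 1)) * 0.5

def loss(w1, w2):
    h = np.tanh(x @ w1)
    y = h @ w2
    return float(((y - t) ** 2).sum()), h, y

before, h, y = loss(w1, w2)

# Backpropagation: chain rule applied layer by layer, from output to input.
dy = 2.0 * (y - t)                    # dL/dy
dw2 = h.T @ dy                        # gradient for output weights
dh = dy @ w2.T                        # error sent back to the hidden layer
dw1 = x.T @ (dh * (1.0 - h ** 2))     # through the tanh nonlinearity

lr = 0.01                             # small step along the negative gradient
w1 -= lr * dw1
w2 -= lr * dw2
after, _, _ = loss(w1, w2)
print(after < before)
```

The update moves each weight opposite its error gradient, which is exactly the "adapting the connections" described above.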

6
New cards

Stochastic Gradient Descent (SGD)

The modern refinement of backpropagation where error gradients are estimated over random subsets (batches) of data, making training more efficient for deep models
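A sketch of the batching idea on synthetic linear-regression data (the data, step size, and batch size are illustrative): each step estimates the gradient from a random mini-batch rather than the full dataset.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: y = X @ w_true + small noise.
X = rng.normal(size=(200, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)

w = np.zeros(3)
lr, batch_size = 0.05, 16

for step in range(400):
    # SGD: gradient estimated on a random subset (batch) of the data.
    idx = rng.choice(len(X), size=batch_size, replace=False)
    xb, yb = X[idx], y[idx]
    grad = 2.0 * xb.T @ (xb @ w - yb) / batch_size
    w -= lr * grad

print(w)  # noisy per-step gradients, but w still converges near w_true
```

Each step touches only 16 of the 200 examples, which is what makes the method cheap enough for deep models and large datasets.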

7
New cards

Convolutional Neural Networks (CNNs)

Architectures specialized for vision - They use hierarchical processing where early layers extract low-level features and higher layers extract complex objects
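The "low-level feature" idea can be sketched with a single hand-written convolution (illustrative; real CNNs learn their kernels): a vertical-edge kernel slid over a toy image responds only where the edge is.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: dark left half, bright right half.
image = np.zeros((5, 6))
image[:, 3:] = 1.0
vertical_edge = np.array([[-1.0, 1.0]])   # a low-level feature detector

response = conv2d(image, vertical_edge)
print(response)  # nonzero only along the dark/bright boundary
```

Stacking such layers, with later kernels reading earlier feature maps, gives the low-level-to-complex hierarchy the card describes.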

8
New cards

Recurrent Neural Networks (RNNs)

Architectures with recurrent (feedback) connections that process sequences one element at a time, making them suited to language processing

9
New cards

Long Short-Term Memory (LSTM)

Use "gates" to solve the problem of retaining information over many time steps, allowing the processing of complex sentences

10
New cards

Variational Autoencoders

A deep learning correlate of the free-energy principle in the brain, using Bayesian inference to predict incoming sensory data
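A forward-pass sketch of the VAE machinery (illustrative shapes and weights, no training loop): the encoder proposes a Gaussian posterior over latent causes, a sample is drawn via the reparameterization trick, and the decoder predicts the sensory input back; the objective trades reconstruction error against a KL term, mirroring the free-energy idea.

```python
import numpy as np

rng = np.random.default_rng(4)

n_x, n_z = 6, 2
W_mu = rng.normal(size=(n_z, n_x)) * 0.1
W_logvar = rng.normal(size=(n_z, n_x)) * 0.1
W_dec = rng.normal(size=(n_x, n_z)) * 0.1

x = rng.normal(size=n_x)                   # "incoming sensory data"

# Encoder: parameters of the approximate posterior q(z|x).
mu = W_mu @ x
logvar = W_logvar @ x

# Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
eps = rng.normal(size=n_z)
z = mu + np.exp(0.5 * logvar) * eps

# Decoder: prediction of the sensory data from the latent cause.
x_hat = W_dec @ z

# Negative ELBO = reconstruction error + KL(q(z|x) || N(0, I)).
recon = 0.5 * np.sum((x - x_hat) ** 2)
kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
neg_elbo = recon + kl
print(neg_elbo)
```

Minimizing `neg_elbo` by gradient descent on the weights is what makes the model's predictions of `x` improve, the Bayesian-inference reading given in the card.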

11
New cards

Pure Vision

The theory (often associated with David Marr) that the goal of vision is to create a detailed 3D model of the world through hierarchical feature extraction and strictly feed-forward processing

12
New cards

4E Cognition Challenge

Contemporary theories argue cognition is Embodied, Embedded, Enactive, and Extended, rejecting internal representations in favor of action-oriented interaction with the environment

13
New cards

The DL "Revenge" for Computationalism

DL models (like AlexNet) are "shamelessly pure"—they are disembodied, inactive, and static—yet they routinely outperform humans in object recognition

14
New cards

Rule-Free Learning

DL models challenge the rationalist view (e.g., Chomsky, Pinker) that language requires innate, predefined rules

15
New cards

Syntactic Competence

Modern RNNs can learn complex linguistic structures, such as subject-verb agreement over long distances and syntactic island constraints, purely from exposure to data

16
New cards

Past Tense Debate

Early connectionist models were criticized for being unrealistic (e.g., the "Wickelfeature" problem), but modern DL versions have obviated many of these criticisms, showing how irregular verbs can be learned without algebraic rules