L2: Rise of neural networks and deep learning


1

Training

  1. Input data into neural networks

  2. Choose algorithms

  3. Feed data and adjust parameters

  4. Evaluate generalisation to real data
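
A minimal sketch of this pipeline, assuming scikit-learn and its built-in iris dataset (neither is named in the notes):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 1. Input data (a held-out split is kept for step 4)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2. Choose the algorithm: a small multilayer perceptron
model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)

# 3. Feed data and adjust parameters (weights are updated during fit)
model.fit(X_train, y_train)

# 4. Evaluate generalisation on data the network has never seen
print("test accuracy:", model.score(X_test, y_test))
```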

2

Prediction

  1. Receive data

  2. Generate predictions based on patterns
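
A tiny sketch of this idea, fitting a linear trend to past points with NumPy and predicting the next one (the model choice and numbers are illustrative):

```python
import numpy as np

# Received data: the pattern is an upward trend
past_x = np.array([0, 1, 2, 3, 4])
past_y = np.array([1.0, 2.1, 2.9, 4.2, 5.0])

# Learn the pattern (slope and intercept of a straight line)
slope, intercept = np.polyfit(past_x, past_y, deg=1)

# Generate a prediction for an unseen input based on that pattern
print("predicted y at x=5:", slope * 5 + intercept)
```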

3

Structure of a deep network

  • Hidden layer: Columns of middle neurons

  • Deep network: Machine learning model with multiple hidden layers of interconnected nodes between input and output layers
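
A sketch of that structure in plain NumPy, with one weight matrix connecting each layer to the next (the layer sizes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# 4 inputs -> two hidden layers of 8 neurons each -> 3 outputs
layer_sizes = [4, 8, 8, 3]

# One weight matrix and bias vector per connection between layers
weights = [rng.standard_normal((n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

print([w.shape for w in weights])  # [(8, 4), (8, 8), (3, 8)]
```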

4

Hyperparameters

  • Define the structure of the neural network

  • E.g. number of hidden layers, neurons per layer
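
A minimal sketch of hyperparameters as settings fixed before training (the names and values are illustrative):

```python
# Hyperparameters are chosen by the designer, not learned from data
hyperparams = {
    "num_hidden_layers": 2,    # how many columns of middle neurons
    "neurons_per_layer": 16,   # width of each hidden layer
}

# Build the layer sizes from the hyperparameters (4 inputs, 3 outputs assumed)
layer_sizes = ([4]
               + [hyperparams["neurons_per_layer"]] * hyperparams["num_hidden_layers"]
               + [3])
print(layer_sizes)  # [4, 16, 16, 3]
```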

5

Forward propagation

  • Synapse takes value from input and multiplies it by its weight

  • Neuron adds output of all synapses and applies activation function

  • Synapse weight → Connection strength
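
A sketch of forward propagation through a single layer: each synapse multiplies its input by its weight, and each neuron sums its synapses and applies an activation function (ReLU is an assumed choice):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

x = np.array([0.5, -1.0, 2.0])       # input values
W = np.array([[0.1, 0.4, -0.2],      # synapse weights into two neurons
              [0.7, -0.3, 0.5]])
b = np.array([0.0, 0.1])             # biases

z = W @ x + b   # each synapse: input * weight; each neuron: sum of its synapses
a = relu(z)     # activation function applied to the summed input
print(a)
```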

6

Trained network

  • Gives correct score for all training data

  • Weights = Trainable parameters

  • Neural network is like a machine with knobs (e.g. synthesizer)

    • Weights = knobs

    • Training = turning the knobs until the output sounds right

  • Number of trainable parameters can be enormous (500B+ in the largest models)
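
A sketch of counting the "knobs" for one illustrative architecture (the layer sizes are not from the notes):

```python
# Every weight and every bias is one trainable parameter (one knob)
layer_sizes = [784, 512, 512, 10]

num_params = sum(n_in * n_out + n_out            # weight matrix + bias vector
                 for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))
print(num_params)  # 669,706 knobs for this small network; large models have 500B+
```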

7

How do you train large neural networks?

  1. Initialise with random weights

  2. Measure the error

    1. J (the cost function) = how wrong the model is

  3. Minimise error → Adjust parameters so J gets smaller
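
A sketch of steps 1 and 2 for a one-weight model, with J taken to be mean squared error (the notes don't name a specific loss):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
target = np.array([2.0, 4.0, 6.0])     # the data follows y = 2x

# 1. Initialise with a random weight
w = np.random.default_rng(0).standard_normal()

# 2. Measure the error: J tells us how wrong the model currently is
J = np.mean((w * x - target) ** 2)
print("initial error J:", J)

# 3. Minimising means adjusting w until J is small (see the cards below)
```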

8

Naive (Brute-force) approach

  • Try every knob position

  • Find where J is lowest

  • IMPOSSIBLE: the number of combinations explodes with the number of weights (see the sketch below)
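
A quick sketch of why this explodes, assuming each knob is tried at only 10 positions (all numbers illustrative):

```python
positions_per_weight = 10.0

for num_weights in [2, 10, 100]:
    combinations = positions_per_weight ** num_weights
    print(f"{num_weights} weights -> {combinations:.2e} settings to try")

# A real network has millions (or billions) of weights, far beyond any computer.
```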

9

Minimisation: Backpropagation

  • Use backpropagation and update weights with gradient descent

  • Works for deep networks
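
A sketch of the gradient-descent update itself, for a one-weight model whose gradient can be written by hand (the learning rate and data are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
target = np.array([2.0, 4.0, 6.0])     # true relationship is y = 2x
w = -1.5                               # a poor starting weight
learning_rate = 0.05

for step in range(100):
    error = w * x - target
    grad = np.mean(2 * error * x)      # slope of J with respect to w
    w -= learning_rate * grad          # step in the downhill direction
print("learned weight:", round(w, 3))  # close to the true value 2.0
```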

10

How to do backpropagation?

  1. Randomly pick a value for w

  2. Pick two more values around that point

  3. Compute gradient (slope of error)

  4. Update weights in downhill direction

  5. Repeat until error is near 0
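
A sketch of these steps, estimating the slope numerically from two points around w rather than using true backpropagation (data and step sizes are illustrative):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
target = np.array([2.0, 4.0, 6.0])

def J(w):                              # error of a one-weight model
    return np.mean((w * x - target) ** 2)

w = np.random.default_rng(1).standard_normal()   # 1. randomly pick a value for w
delta, lr = 1e-4, 0.05

while J(w) > 1e-6:                     # 5. repeat until the error is near 0
    # 2./3. evaluate J at two points around w and compute the slope of the error
    slope = (J(w + delta) - J(w - delta)) / (2 * delta)
    w -= lr * slope                    # 4. update w in the downhill direction
print("trained weight:", round(w, 3))
```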

11

Prediction

Learning to predict the future from the past
