6 DL: Neural Networks

14 Terms

1

You can explain the difference between AI, ML, and DL (AI > ML > DL)

  • AI is the general term

  • ML is a subfield that learns from data

  • DL is a subset of ML that uses deep neural networks for complex patterns.

2

You know what Latent means in DL (Latent Space)

  • Something hidden

  • Present, but not directly observable

3

You understand what latent space means in DL (Latent Space)

A compressed representation of the most important features in the data, not directly visible but meaningful to the model.

4

You know what encoding and decoding mean (Encoding/Decoding)

Encoding = compressing data into latent features.
Decoding = generating usable output again from the latent features.
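
A minimal sketch of this idea as a tiny tf.keras autoencoder; the layer sizes, names, and loss are illustrative assumptions, not part of the card:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical sizes: 784 input features compressed to 32 latent features
inputs = tf.keras.Input(shape=(784,))
latent = layers.Dense(32, activation="relu", name="encode")(inputs)       # encoding: compress into latent features
outputs = layers.Dense(784, activation="sigmoid", name="decode")(latent)  # decoding: generate usable output again

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
```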

5

You understand how artificial neurons work (Artificial Neuron)

An artificial neuron receives inputs, weighs them, sums them, and uses an activation function to decide whether it “fires”.
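
A minimal sketch of a single neuron in Python/NumPy; the weights, bias, and choice of a sigmoid activation are illustrative assumptions:

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through a sigmoid activation."""
    z = np.dot(inputs, weights) + bias   # weigh and sum the inputs
    return 1.0 / (1.0 + np.exp(-z))      # activation decides how strongly the neuron "fires"

# Example with three inputs and illustrative weights
print(artificial_neuron(np.array([0.5, 0.2, 0.1]),
                        np.array([0.4, -0.6, 0.9]),
                        bias=0.1))
```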

6

You can describe the architecture of a neural network (Neural Network Layers)

  1. Input Layer

    • Takes the raw input data (e.g., pixel values, features).

    • One neuron per input feature.

    • No computation here — just forwards the data.

  2. Hidden Layers

    • Perform most of the learning.

    • Each neuron:

      • Computes a weighted sum of its inputs

      • Adds a bias

      • Applies an activation function (e.g., ReLU, Sigmoid)

    • There can be one or many hidden layers (hence "deep" learning).

    • The more layers → the more complex patterns the network can learn.

  3. Output Layer

    • Produces the final prediction:

      • Regression → 1 neuron (e.g., predicted price)

      • Binary classification → 1 neuron with sigmoid

      • Multi-class classification → N neurons with softmax
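
A minimal sketch of this three-part architecture in tf.keras; the feature count, layer widths, and three output classes are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative assumption: 20 input features, 3 output classes
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),            # input layer: one value per feature, no computation
    layers.Dense(64, activation="relu"),    # hidden layer: weighted sum + bias + ReLU
    layers.Dense(64, activation="relu"),    # a second hidden layer makes the network "deep"
    layers.Dense(3, activation="softmax"),  # output layer: N neurons with softmax for multi-class
])
model.summary()
```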

7

You know the difference between theoretical and practical models (Universal Approximator)

Theoretical:

  • A single (very wide) hidden layer
    with non-linear activations can, in theory, approximate any function.

Practical:

  • Computationally very hard

  • Does not generalize well to unseen data

8

You understand training and inferencing in the training cycle of a neural network (Training vs Inferencing)

Training = adjusting the weights using data.
Inferencing = making predictions with a trained model.
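
A minimal sketch separating the two phases in tf.keras; the model shape and random data are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

# Illustrative model and random data, just to contrast the two phases
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x_train = np.random.rand(100, 4)
y_train = np.random.rand(100, 1)

# Training: adjust the weights using data
model.fit(x_train, y_train, epochs=5, verbose=0)

# Inferencing: make predictions with the trained model (weights are not updated)
predictions = model.predict(np.random.rand(3, 4))
```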

9

You know the learning cycle of a neural network (Learning Cycle)

1. Forward Pass

  • Run a batch of training samples through the network

2. Calculate the error (or loss)

  • Supervised learning = we need an error to guide our search for good weight settings

3. Use the optimizer

  • Figure out how to adjust the weights a little for that last layer

4. Go backwards (one layer at a time)

  • Figure out how to adjust, go back one layer, …

  • Repeat all the way back to the first layer
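
A minimal sketch of one such cycle with tf.keras and GradientTape; the model, random batch, loss, and optimizer choices are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

# Illustrative model, loss, and optimizer for a single learning-cycle step
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

x_batch = np.random.rand(32, 4).astype("float32")  # one batch of training samples
y_batch = np.random.rand(32, 1).astype("float32")

with tf.GradientTape() as tape:
    predictions = model(x_batch, training=True)  # 1. forward pass
    loss = loss_fn(y_batch, predictions)         # 2. calculate the error (loss)

# 3.-4. gradients flow backwards layer by layer; the optimizer nudges the weights
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```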

10

You can explain the forward and backward pass (Forward & Backpropagation)

Forward pass:

  • Data through the network -> output.
     Calculate the loss.

Backward pass:

  • Propagate the error back through the network to adjust the weights (backpropagation).

11

You know what an epoch is (Epoch)

  • An epoch is one full round in which the entire training dataset passes through the model once.

  • Represents one iteration over the entire dataset.

12

You know how training is performed in batches (Batches)

  • We cannot pass the entire dataset into the neural network at once, so we divide the dataset into a number of batches.
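
A minimal sketch of batching with tf.data; the dataset size, batch size of 32, and epoch count are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 4).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

# Split the 1000 samples into batches of 32; one pass over all batches = one epoch
dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(1000).batch(32)

for epoch in range(3):                # 3 epochs = 3 passes over the whole dataset
    for x_batch, y_batch in dataset:  # each step sees one batch, not the full dataset
        pass                          # forward pass, loss, and weight update would go here
```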

13

You understand the use of dropout as regularization (Dropout)

Dropout temporarily switches off random neurons during training to prevent overfitting.
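
In tf.keras this is a layer; the dropout rate of 0.5 and the surrounding layers are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),                    # during training, randomly zeroes 50% of the previous layer's outputs
    layers.Dense(3, activation="softmax"),
])
# Dropout is only active during training; at inference time all neurons are used.
```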

14

You can work with tf.keras Sequential and Functional API (Keras APIs)

Sequential API = build the model layer by layer.
Functional API = more flexible; allows multiple inputs/outputs or complex architectures.
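
A minimal side-by-side sketch of the two APIs; the layer sizes and the extra output head are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Sequential API: stack layers one after the other
seq_model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Functional API: wire layers as a graph, which also allows multiple outputs
inputs = tf.keras.Input(shape=(20,))
x = layers.Dense(64, activation="relu")(inputs)
main_out = layers.Dense(1, activation="sigmoid", name="main")(x)
aux_out = layers.Dense(1, name="aux")(x)  # a second output head, which Sequential cannot express
func_model = Model(inputs=inputs, outputs=[main_out, aux_out])
```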