L2_tools

Description and Tags

Flashcards for reviewing the tools of deep learning, focusing on TensorFlow, Keras, and related concepts.


16 Terms

1

What are the primary deep learning frameworks mentioned?

TensorFlow, Keras, and PyTorch are the primary deep learning frameworks. TensorFlow is developed by Google and is known for its scalability and production readiness. Keras is a high-level API that runs on top of TensorFlow (though it supports other backends as well) and is designed for rapid experimentation and ease of use. PyTorch, developed by Facebook, is favored for its dynamic computation graph and research flexibility.

2

What is the de-facto language for deep learning?

Python is the de-facto language for deep learning due to its simplicity, extensive libraries (such as NumPy, SciPy, Pandas, Matplotlib, and scikit-learn), and frameworks specifically designed for deep learning (like TensorFlow, Keras, and PyTorch). Its large community and extensive documentation make it easy for researchers and developers to prototype and deploy deep learning models.

3

What is the underlying language for framework backends?

The underlying languages for deep learning framework backends are primarily C++ and CUDA. C++ is used for its performance and compatibility across different hardware platforms. CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model developed by NVIDIA, used to leverage the power of GPUs for accelerating computations.

4

What is the core object in TensorFlow?

The core object in TensorFlow is the Tensor, which is a multidimensional array. Tensors are similar to NumPy arrays but can run on GPUs or TPUs, making them suitable for deep learning computations. Tensors can have various data types, including float32, int32, and string.
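
As an illustration (a minimal sketch, assuming TensorFlow 2.x imported as tf; the values are arbitrary), a tensor can be created from a nested Python list and inspected:

import tensorflow as tf

t = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
print(t.shape)    # (2, 3)
print(t.dtype)    # float32
print(t.numpy())  # convert to a NumPy array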

5

In TensorFlow, what should be used for storing constant values such as input data?

In TensorFlow, tf.constant should be used for storing constant values such as input data. Constant tensors are immutable, meaning their values cannot be changed once created, which makes them suitable for holding fixed data like input datasets or pre-defined numerical constants used throughout the model.
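
For example (a small sketch; the values are made up), input data can be wrapped in a constant tensor, which cannot be modified afterwards:

import tensorflow as tf

x = tf.constant([[0.5, 1.5], [2.5, 3.5]])  # fixed input data
# x[0, 0] = 9.0 would raise an error: constant tensors do not support item assignment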

6

In TensorFlow, what should be used for values that will be updated, such as model weights?

In TensorFlow, tf.Variable should be used for values that need to be updated during training, such as model weights and biases. Unlike tf.constant, tf.Variable allows you to change the stored value. They hold stateful values that persist across multiple executions of a graph. Variables must be initialized before use, and they can be updated using methods like .assign().
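
A minimal sketch (the shape is arbitrary) of creating and updating a variable:

import tensorflow as tf

w = tf.Variable(tf.zeros([3, 2]))    # e.g. a weight matrix, initialized to zeros
w.assign(tf.ones([3, 2]))            # replace the stored value
w.assign_add(0.1 * tf.ones([3, 2]))  # in-place increment, as an optimizer update would do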

7

How are element-wise addition and multiplication performed in TensorFlow?

Element-wise addition and multiplication are performed with the + and * operators, which are equivalent to tf.add and tf.multiply and operate on corresponding elements of tensors with compatible shapes.
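
For example (a short sketch with illustrative values):

import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[10.0, 20.0], [30.0, 40.0]])
print(a + b)  # element-wise addition, equivalent to tf.add(a, b)
print(a * b)  # element-wise multiplication, equivalent to tf.multiply(a, b)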

8

How is matrix multiplication performed in TensorFlow?

Matrix multiplication is performed with the @ operator, which is equivalent to tf.matmul.
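
For example (a short sketch with illustrative shapes):

import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # shape (2, 2)
b = tf.constant([[5.0], [6.0]])            # shape (2, 1)
print(a @ b)                               # shape (2, 1), equivalent to tf.matmul(a, b)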

9

What is the purpose of tf.math.reduce_sum in TensorFlow?

To compute the sum of elements across specified axes of a tensor.
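
A small sketch (values are illustrative) showing the effect of the axis argument:

import tensorflow as tf

x = tf.constant([[1, 2, 3], [4, 5, 6]])
print(tf.math.reduce_sum(x))          # 21, sum over all elements
print(tf.math.reduce_sum(x, axis=0))  # [5 7 9], sum down each column
print(tf.math.reduce_sum(x, axis=1))  # [6 15], sum across each row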

10

How does broadcasting work in TensorFlow?

TensorFlow's broadcasting mechanism is similar to NumPy's, allowing operations between tensors with different shapes under certain conditions. Specifically:

  1. Dimension Compatibility: Two dimensions are compatible when they are equal or one of them is 1.

  2. Broadcasting Rules: If the numbers of dimensions of two tensors differ, the tensor with fewer dimensions is padded with ones on its leading (left) side.

  3. Computation: When performing an operation, TensorFlow stretches the tensor with dimension 1 to match the size of the other tensor along that dimension. This stretching doesn't involve copying data, thus it's memory efficient.

For example, adding a scalar to a matrix involves broadcasting the scalar to every element in the matrix. Similarly, operations between tensors of shapes (m, n) and (1, n) or (m, 1) are possible due to broadcasting.

Broadcasting allows for more concise code and efficient computations when dealing with tensors of different shapes that have a clear mathematical relationship.
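
For example (a minimal sketch; the shapes are chosen only to illustrate the rules):

import tensorflow as tf

m = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])  # shape (2, 3)
row = tf.constant([[10.0, 20.0, 30.0]])              # shape (1, 3)
print(m + row)   # row is broadcast across both rows of m
print(m * 2.0)   # the scalar is broadcast to every element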

11

How are images represented in deep learning?

In deep learning, images are typically represented by a 3D tensor with the shape [height, width, channel]. Height and width denote the dimensions of the image in pixels, while the channel represents color depth. Common representations include RGB (3 channels) or grayscale (1 channel).

12

What dimension is added during minibatch gradient descent?

During minibatch gradient descent, a fourth dimension is added to the image tensor to represent the batch size. This transforms the shape from [height, width, channel] for a single image to [batch, height, width, channel] for a batch of images. This allows the model to process multiple images in parallel, improving training efficiency. Each batch consists of multiple independent samples that the model processes simultaneously to compute the gradient of the loss function.
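
As a sketch (the image size and batch size are arbitrary), stacking individual [height, width, channel] images yields a [batch, height, width, channel] tensor:

import tensorflow as tf

img1 = tf.zeros([224, 224, 3])  # one RGB image
img2 = tf.zeros([224, 224, 3])
batch = tf.stack([img1, img2])  # shape (2, 224, 224, 3)
print(batch.shape)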

13

What do tf.expand_dims and tf.squeeze do?

tf.expand_dims is used to increase the rank of a tensor by inserting a new dimension of size one at the specified axis. This is useful for aligning tensors with different shapes or preparing data for operations that require a specific number of dimensions. tf.squeeze removes dimensions of size one from the shape of a tensor. It is helpful for simplifying tensors and removing unnecessary dimensions that do not contribute to the data.
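
A minimal sketch (the image size is arbitrary) showing both operations:

import tensorflow as tf

img = tf.zeros([224, 224, 3])           # shape (224, 224, 3)
batched = tf.expand_dims(img, axis=0)   # shape (1, 224, 224, 3)
restored = tf.squeeze(batched, axis=0)  # back to shape (224, 224, 3)
print(batched.shape, restored.shape)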

14

How can you force TensorFlow to run computations on the CPU?

You can force TensorFlow to run computations on the CPU by using tf.device('CPU:0') within a with statement. This ensures that all operations within that block are executed on the CPU, which is useful for debugging or when certain operations are not supported on GPUs. For example:

import tensorflow as tf

with tf.device('CPU:0'):
    # Perform TensorFlow operations here
    result = tf.matmul(tensor1, tensor2)

This code explicitly tells TensorFlow to use the CPU for the matrix multiplication operation.

15

What does tf.GradientTape() do?

tf.GradientTape() is a context in TensorFlow that records all operations performed inside it for automatic differentiation. This is particularly useful for computing gradients of a computation with respect to some inputs, typically variables. You start a tf.GradientTape() context, perform operations involving tensors and variables that you want to differentiate, and then use the tape to compute the gradients of a target (e.g., a loss function) with respect to some sources (e.g., model parameters). This is a fundamental tool for training neural networks using backpropagation.
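
A minimal sketch (the function and value are illustrative) of computing a gradient with tf.GradientTape:

import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 2.0 * x     # y = x^2 + 2x
grad = tape.gradient(y, x)   # dy/dx = 2x + 2, i.e. 8.0 at x = 3
print(grad)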

16

Name three components included in the Keras framework.

Keras includes modular components for building and training neural networks:

  1. Layers: define the network structure (core, convolutional, recurrent layers).

  2. Models: organize layers into a network (Sequential, Functional API).

  3. Optimizers: update weights during training (Adam, SGD, RMSprop).

  4. Losses: quantify prediction errors (crossentropy, MSE).

  5. Metrics: evaluate performance (accuracy, precision, recall).

  6. Callbacks: utilities applied during training (EarlyStopping, ModelCheckpoint, TensorBoard).

  7. Datasets: pre-loaded datasets for benchmarking.

  8. Applications: pre-trained models for transfer learning.
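
A small sketch (layer sizes and hyperparameters are arbitrary) combining several of these components in a Sequential model:

import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(64, activation='relu'),  # a core layer
    keras.layers.Dense(1)                       # output layer
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss=keras.losses.MeanSquaredError(),
              metrics=[keras.metrics.MeanAbsoluteError()])
model.summary()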