Artificial Intelligence
The ability of a computer to perform tasks commonly associated with intelligent beings
Machine Learning
Study of algorithms that learn from examples and experience instead of hard-coded rules, and make predictions on new data
Deep Learning
Subfield of machine learning focused on learning data representations as successive layers of increasingly meaningful representations
Types of Machine Learning
Supervised Learning: Labeled Data
Unsupervised Learning: Unlabeled
Self-Supervised: Labels can be inferred or generated automatically from the data
Reinforcement Learning: Reward based
Generalization
The quality of an ML model as measured on new, unseen data
Regularization
Any method to prevent overfitting in a model
Neuron
Computes a weighted sum (linear combination) of its inputs, then applies an activation function to determine its output
Neural Network
Collection of Neurons organized by layers
Input Layer
One or more hidden layers
Output layer
Output Activation
Typically ReLU between layers
At the output layer, the activation depends on the type of task:
Multi Label Classification → Sigmoid
Multi Class Classification → Softmax
Regression → Linear
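As a sketch, the three output activations above can be written in plain Python (these helper names are illustrative, not taken from any particular framework):

```python
import math

def sigmoid(x):
    # Multi-label: each output is an independent probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    # Multi-class: outputs form a probability distribution summing to 1.
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def linear(x):
    # Regression: pass the raw score through unchanged.
    return x
```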
Backpropagation
The algorithm for computing the gradients used by gradient descent in a neural network
Dropout
Randomly setting a fraction (the dropout rate) of input units to 0 at each update during training
Prevents overfitting
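A minimal sketch of inverted dropout in plain Python (the function name and list-based representation are assumptions for illustration):

```python
import random

def dropout(inputs, rate, training=True):
    """Zero each unit with probability `rate` and scale survivors by
    1/(1-rate) so the expected sum is unchanged. A no-op at inference."""
    if not training or rate == 0.0:
        return list(inputs)
    keep = 1.0 - rate
    return [x / keep if random.random() < keep else 0.0 for x in inputs]
```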
Flatten
Flattens the input into a vector
Used if the input has multiple dimensions such as image data
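A sketch of flattening using nested Python lists to stand in for a multi-dimensional tensor (real frameworks do this with a reshape):

```python
def flatten(tensor):
    """Recursively flatten a nested list (e.g. an H x W x C image)
    into a single 1-D list of values."""
    if isinstance(tensor, list):
        return [x for item in tensor for x in flatten(item)]
    return [tensor]
```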
Computer Vision
Giving computers the ability to understand visual information
Understanding Images
Slide a bounding window over the image and extract a feature map for each window; repeat until all windows have been covered
Model Parallelism
Splitting the model itself across multiple GPUs
Training Parallelism
Splitting the training data up and training each set on its own GPU
Locks
Used to secure resources to execute a part of a job
A lock request starts the critical section and an unlock request ends it
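A minimal sketch of a lock-protected critical section using Python's threading module (the shared counter is a hypothetical resource):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:          # lock acquisition starts the critical section
            counter += 1    # shared-resource update is now atomic
                            # leaving the `with` block releases the lock

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock, the four threads would interleave the read-modify-write and lose updates.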
Resource Conflict
Two jobs have a resource conflict if they share required resources
Resource Contention
Two jobs contend for a resource when one job requests a resource that the other job already has
Fairness
An infinite execution is fair to a task if the task is executed repeatedly during this execution
If a task is able to run, it shouldn't be starved by the scheduler
Weak Fairness
An infinite execution sequence is weakly fair to a task if the task is repeatedly either executed or disabled
Consensus
Each process starts with an initial value known only to itself. All processes must reach decisions that agree with one another
Leader Election
Elect a unique node as the leader; every node either is the leader or follows it
Mixed Jobs
Scheduler handling both aperiodic and periodic tasks
Optimal Aperiodic Job Scheduling Algorithm
Minimizes either response time of the aperiodic job at the head of the queue or average response time of all aperiodic jobs
Background Scheduling
Having the aperiodic jobs execute only when there is no periodic job ready to execute
Simple Periodic Server
Tp = (ps, es)
es - Execution budget of the server
us = es / ps - The size of the server
Runs aperiodic jobs until the aperiodic queue is empty or until Tp has executed for es time units
Cannot run again until the next period, which replenishes the execution budget
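A toy time-stepped simulation of the simple periodic server above (the function name, arrival format, and unit-granularity time steps are assumptions for illustration):

```python
def run_simple_server(ps, es, arrivals, horizon):
    """Sketch of a simple periodic server Tp = (ps, es).

    `arrivals` maps arrival time -> units of aperiodic work.
    The budget is refilled to `es` at each period boundary; the server
    serves one unit per tick only while backlogged with budget left.
    Returns 1/0 per time step indicating whether work was served.
    """
    budget, queue, served = 0, 0, []
    for t in range(horizon):
        if t % ps == 0:
            budget = es               # replenish at the start of each period
        queue += arrivals.get(t, 0)
        if queue > 0 and budget > 0:  # eligible: backlogged, budget > 0
            queue -= 1
            budget -= 1
            served.append(1)
        else:
            served.append(0)
    return served
```

With ps = 5, es = 2 and three units arriving at t = 0, two units are served immediately, then the third must wait for the replenishment at t = 5.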
Backlogged
Whenever the aperiodic job queue is nonempty or the server is executing a job
Idle
Whenever the server is not backlogged
Eligible
It is backlogged and has an execution budget greater than 0
Consuming
Server consumes the execution budget at the rate of one time unit per unit of execution
Bandwidth Preserving Servers
Scheduler tracks consumption of the server's execution budget and suspends the server when the budget is exhausted or the server becomes idle
Replenishes execution budget at the appropriate replenishment times
Only eligible for execution when it is backlogged and its execution budget is nonzero
Deferrable Servers
The execution budget is consumed at the rate of one time unit per unit of execution
Execution budget is set to es at time instants k·ps for k ≥ 0
Unused execution cannot be carried over
Total Bandwidth Server (TBS)
Allocate a fixed maximum percentage Us of the processor to serving aperiodic jobs
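A sketch of the standard TBS deadline-assignment rule, d_k = max(r_k, d_{k-1}) + e_k/Us, which keeps the server's demand within the Us fraction of the processor (the function name and job format are illustrative):

```python
def tbs_deadlines(jobs, Us):
    """Assign deadlines under a Total Bandwidth Server with utilization Us.

    `jobs` is a list of (release_time, execution_time) pairs in arrival
    order. Each job's deadline starts from the later of its release and
    the previous job's deadline, pushed out by exec_time / Us.
    """
    deadlines, d_prev = [], 0.0
    for r, e in jobs:
        d = max(r, d_prev) + e / Us
        deadlines.append(d)
        d_prev = d
    return deadlines
```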
RM vs EDF
RM is rate monotonic: the task with the shortest period gets the highest (fixed) priority
EDF is earliest deadline first: the job whose absolute deadline is closest runs first
Constant Bandwidth Server
Schedules servers, not jobs, using EDF
Good for tasks with probabilistic resource requirements
Greedy Reclamation of Unused Bandwidth (GRUB)
A scheduling scheme that combines EDF + CBS
If a server is inactive, other servers can reclaim its bandwidth to expand their utilization
Bandwidth Inheritance
A task inside a critical section inherits the bandwidth of another task, borrowing resources from that task's server
G-EDF Bound
Us = Qs/Ts ≤ M
Utilization is the execution / period
Must be less than the number of cores
M-CBS (Multi CBS)
Send the highest-utilization jobs to dedicated processors and use EDF for the rest
This is because EDF does not perform well on multiprocessors
Multiprocessor System
Tightly coupled and usually has shared memory
Distributed System
Loosely coupled. Costly to keep global status. One scheduler per processor
Partitioning
Each task is fixed to one processor, so uniprocessor scheduling algorithms can be used
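Partitioning is commonly done with a bin-packing heuristic; a first-fit sketch over task utilizations (the function name and utilization cap are assumptions, with cap = 1.0 standing in for a schedulability bound):

```python
def first_fit_partition(utilizations, num_cores, cap=1.0):
    """Assign each task to the first core whose total utilization
    stays <= cap. Returns one list of task indices per core, or
    None if some task cannot be placed on any core."""
    loads = [0.0] * num_cores
    assignment = [[] for _ in range(num_cores)]
    for i, u in enumerate(utilizations):
        for core in range(num_cores):
            if loads[core] + u <= cap:
                loads[core] += u
                assignment[core].append(i)
                break
        else:
            return None   # task i does not fit on any core
    return assignment
```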
Degree of Migration
No Migration (Partitioned Scheduling)
Restricted Migration
Full Migration
P, R, F
Priority Schemes
Static → RM
Job Level Dynamic → EDF
Unrestricted Dynamic → LLF
S, J, U
Multiprocessor Priority Ceiling Protocol (MPCP)
Assumes that tasks and resources are statically bound to processors
The host processor is called the synchronization processor for that resource
Use RPC like calls to get access to the resource
Blocking Time
Local Blocking Time: Due to contention for resources on local processor
Local preemption delay: Due to preemption of a task by global critical sections that belong to remote tasks but execute on the task's processor
Remote blocking time: Due to contention with some lower priority tasks for remote resources on the synchronization processor
Remote preemption delay: Due to preemptions by higher priority global critical sections on synchronization processors
Deferred blocking time: Due to the suspended execution of local higher priority tasks
End to End Scheduling
Assume that each subtask only accesses local resources
Turns into a set of uniprocessor scheduling problems
Guarantee that this total meets a deadline
Direct Synchronization Protocol
As soon as a subtask finishes executing, it sends a signal downstream in the task chain
Pros:
Global clock synchronization not needed
Yields shortest average end to end response times
Cons:
Often results in bursty release times of downstream subtasks
Difficult to assign priorities to subtasks on different processors
Phase Modification Protocol
Maintains a minimum temporal distance between the release times of jobs in sibling subtasks of a task chain
Pros:
Easy to implement when clocks are synchronized
End-to-end response time is the sum of the subtask response times
Cons:
Requires global synchronization
Difficult to get tight bounds on maximum response times
Modified PM Protocol
Combines synchronization messages of DS with phase modification of PM
Pros:
Do not need synchronized clocks
Cons:
Successor tasks may no longer behave like periodic tasks when an upstream job overruns its WCET
Release Guard Protocol
Sends a signal as soon as a job completes. Releases the next subtask as soon as it receives the first signal. The inter-release time of any two consecutive jobs is never less than the period of the subtask, unless releasing a job early doesn't affect anything
Pros:
Work conserving
No global clock
Cons:
None
Worst Case Execution Time
Uses measurements
Experimental
No formal verification
Bounding Execution Time
Static and uses analysis tools
Determines cycles automatically
Guaranteed WCET
Implicit Path Enumeration (IPET)
Uses integer linear programming, which is slow but easy to implement
Enumerates the possible execution paths through the code