Reduce the costs of ML workflows with preemptible VMs and GPUs

10 Terms

1

Preemptible VMs

Compute Engine instances that run for at most 24 hours, carry no availability guarantees, and are priced substantially lower than standard VMs.
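As a sketch of how such an instance is created, a single `gcloud` invocation suffices; the instance name, zone, and machine type below are placeholders, not values from this deck:

```shell
# Create a preemptible VM. Compute Engine may reclaim it at any time,
# and it shuts down automatically after 24 hours.
gcloud compute instances create my-preemptible-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-4 \
    --preemptible
```

The `--preemptible` flag is the only difference from creating a standard VM; everything else about the instance behaves normally until it is reclaimed.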

2

Google Kubernetes Engine (GKE)

A managed Google Cloud service for creating and operating Kubernetes clusters; its node pools can be backed by preemptible VMs to run ML workflows at lower cost.

3

Kubeflow

An open-source project for deploying machine learning workflows on Kubernetes.

4

Kubeflow Pipelines

A feature of Kubeflow that allows users to build and deploy scalable ML workflows based on Docker containers.
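A minimal pipeline-definition sketch, assuming the v1 Kubeflow Pipelines SDK (`kfp<2`); the image name and pipeline name are hypothetical. It is a declarative spec rather than a runnable job, shown only to illustrate how a containerized step is marked for preemptible execution:

```python
import kfp.dsl as dsl
from kfp import gcp

@dsl.pipeline(
    name="preemptible-train",
    description="Training step scheduled onto a preemptible node pool.",
)
def train_pipeline():
    # One pipeline step backed by a Docker container (hypothetical image).
    train = dsl.ContainerOp(
        name="train",
        image="gcr.io/my-project/trainer:latest",
        command=["python", "train.py"],
    )
    train.set_gpu_limit(1)                        # request one GPU
    train.apply(gcp.use_preemptible_nodepool())   # schedule on preemptible nodes
    train.set_retry(5)                            # re-run the step if preempted
```

`set_retry` matters here: because preemptible nodes can disappear mid-step, the step should be retried (and, per the Idempotency card below, safe to repeat).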

5

Cost Reduction

The primary benefit of using preemptible VMs in ML workflows, especially for jobs with flexible completion times.

6

Node Pool

A group of preemptible, GPU-enabled instances in a GKE cluster used for running ML workloads.
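A sketch of adding such a pool to an existing cluster; the cluster name, zone, and GPU type are placeholders (the accelerator type must be one available in the chosen zone):

```shell
# Add a preemptible, GPU-enabled node pool to an existing GKE cluster.
gcloud container node-pools create preemptible-gpu-pool \
    --cluster=my-ml-cluster \
    --zone=us-central1-a \
    --preemptible \
    --accelerator=type=nvidia-tesla-k80,count=1
```

Keeping GPU workloads in a dedicated preemptible pool lets the rest of the cluster stay on standard nodes.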

7

Idempotency

A property that ensures preemptible steps can either be repeated without side effects or can checkpoint work to resume after interruption.
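A minimal sketch of the checkpointing half of this property, using only the standard library (the function and file names are illustrative): progress is persisted after every epoch, so a preempted run restarts and continues where it left off, and a completed run does no work when repeated.

```python
import json
import os
import tempfile

def train_step(total_epochs, checkpoint_path):
    """Resumable training loop: the last finished epoch is persisted,
    so rerunning after a preemption never repeats completed work."""
    start = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            start = json.load(f)["epoch"]
    for epoch in range(start, total_epochs):
        # ... one epoch of actual training would happen here ...
        with open(checkpoint_path, "w") as f:
            json.dump({"epoch": epoch + 1}, f)
    return total_epochs - start  # epochs actually run in this invocation

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
first = train_step(5, ckpt)   # fresh run: performs all 5 epochs
second = train_step(5, ckpt)  # rerun: checkpoint says done, 0 epochs repeated
```

Calling `train_step` twice is safe: the second call reads the checkpoint and returns immediately, which is exactly what a retried pipeline step needs.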

8

Tensor2Tensor

An open-source library of deep-learning models and datasets, used as the example training workload running on preemptible VMs in a Kubeflow pipeline.

9

Stackdriver Monitoring

Part of Google Cloud's operations suite (formerly Stackdriver), used to inspect logs for both running and terminated pipeline operations in Kubeflow.
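A sketch of pulling those logs from the command line with `gcloud`; the cluster name in the filter is a placeholder. Because logs are shipped off the node, they remain readable even after a preempted pod's node is gone:

```shell
# Read recent container logs from pipeline steps, including pods whose
# preemptible nodes have already been terminated.
gcloud logging read \
    'resource.type="k8s_container" AND resource.labels.cluster_name="my-ml-cluster"' \
    --limit=20
```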

10

Autoscale

A node-pool feature that automatically adjusts the number of instances to match the workload; a pool that can scale to zero when idle further reduces costs.
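A sketch of enabling this on an existing pool (cluster, zone, and pool names are placeholders); `--min-nodes=0` is what lets the GPU pool disappear entirely between jobs:

```shell
# Enable autoscaling on an existing node pool, allowing scale-to-zero.
gcloud container clusters update my-ml-cluster \
    --zone=us-central1-a \
    --node-pool=preemptible-gpu-pool \
    --enable-autoscaling --min-nodes=0 --max-nodes=5
```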