Parallelism
Executing multiple operations at the same time using multiple processing units
Concurrency
Structuring a system so multiple tasks overlap in time (not necessarily simultaneously executing)
Data Parallelism
Same operation applied across many data items in parallel
Task Parallelism
Different tasks/functions run in parallel
Pipeline Parallelism
Work split into stages that operate concurrently like an assembly line
Coarse-Grained Parallelism
Large tasks with minimal communication overhead
Fine-Grained Parallelism
Very small tasks requiring frequent synchronization
Speedup
Ratio of sequential time to parallel time
Amdahl’s Law
Limits speedup based on the fraction of work that cannot be parallelized
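Amdahl's Law can be written as S(n) = 1 / ((1 − p) + p/n), where p is the parallelizable fraction and n the number of processors. A minimal Python sketch (the helper name `amdahl_speedup` is illustrative, not a standard API):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup S(n) = 1 / ((1 - p) + p / n) for parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# With 90% of the work parallelizable, 10 processors give only ~5.26x,
# and the ceiling as n grows is 1 / (1 - p) = 10x.
print(amdahl_speedup(0.9, 10))
```

Note the ceiling: no matter how many processors are added, speedup never exceeds 1 / (1 − p).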
CPU-Bound Task
Task limited by computation rather than waiting on I/O
I/O-Bound Task
Task limited by waiting on I/O (disk, network, etc.) rather than computation
Thread
Lightweight unit of execution sharing memory within a process
GIL (Global Interpreter Lock)
CPython mutex allowing only one thread to run Python bytecode at a time
Critical Section
Code accessing shared mutable state requiring exclusive access
Race Condition
Incorrect behavior caused by timing-dependent access to shared data
Deadlock
Threads blocked forever, each holding a resource while waiting for one held by another (circular wait)
Starvation
A thread makes no progress because it is perpetually denied CPU time or resources
Livelock
Threads keep responding to each other without making progress
Mutex/Lock
Synchronization tool allowing one thread at a time into a critical section
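The race-condition and mutex cards can be illustrated together with a minimal `threading.Lock` sketch: the unsynchronized read-modify-write `counter += 1` is a critical section, and the lock makes it atomic:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:  # critical section: read-modify-write must be exclusive
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; without it, updates can be lost
```

Removing the `with lock:` line reintroduces the race condition (timing-dependent lost updates).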
Thread Pool
Fixed set of reusable worker threads to reduce creation overhead
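A thread pool in Python is most easily sketched with `concurrent.futures.ThreadPoolExecutor`; here `work` is a stand-in for an I/O-bound call:

```python
from concurrent.futures import ThreadPoolExecutor

def work(n: int) -> int:
    return n * n  # stand-in for an I/O-bound call (e.g. a network fetch)

# Four reusable worker threads handle all five tasks; no per-task thread creation.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, range(5)))
print(results)  # [0, 1, 4, 9, 16]
```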
queue.Queue
Thread-safe FIFO used for communication in producer–consumer patterns
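A minimal producer–consumer sketch using `queue.Queue`, with `None` as a hand-rolled sentinel to signal end of work:

```python
import queue
import threading

q: queue.Queue = queue.Queue()
results = []

def producer() -> None:
    for i in range(5):
        q.put(i)
    q.put(None)  # sentinel: tells the consumer no more items are coming

def consumer() -> None:
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item * 2)  # stand-in for real processing

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 2, 4, 6, 8]
```

`queue.Queue` handles the locking internally, so neither thread needs an explicit mutex.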
Process
Independent execution unit with its own memory space
Process vs Thread
Processes isolate memory and avoid GIL limits but are heavier than threads
Multiprocessing in Python
Uses multiple processes for true parallelism on CPU-bound work
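A minimal CPU-bound sketch with `concurrent.futures.ProcessPoolExecutor`; `cpu_heavy` is a stand-in for real computation, and the `__main__` guard is required on platforms that spawn worker processes:

```python
from concurrent.futures import ProcessPoolExecutor

def cpu_heavy(n: int) -> int:
    # Stand-in for CPU-bound work; it runs in a separate process,
    # so the parent's GIL does not serialize it.
    return sum(i * i for i in range(n))

if __name__ == "__main__":  # required where workers are spawned (Windows/macOS)
    with ProcessPoolExecutor(max_workers=2) as pool:
        results = list(pool.map(cpu_heavy, [10, 100]))
    print(results)  # [285, 328350]
```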
IPC (Inter-Process Communication)
Mechanisms that allow otherwise-isolated processes to communicate and exchange data
multiprocessing.Queue
Process-safe FIFO queue for sending Python objects between processes
Process Pool
Pool of processes handling tasks in parallel without repeated process creation
Scheduling
OS mechanism deciding which ready task runs next
FCFS Scheduling
Runs tasks in arrival order (suffers convoy effect)
SJF Scheduling
Chooses shortest next CPU burst (optimal waiting time but risks starvation)
Round Robin Scheduling
Time-slice based scheduling giving each task a fair quantum
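Round robin can be sketched as a tiny simulation (the helper `round_robin` and its inputs are hypothetical): each task runs for at most one quantum, then rejoins the back of the ready queue until its burst is exhausted:

```python
from collections import deque

def round_robin(bursts: dict, quantum: int) -> list:
    """Return the order in which tasks receive the CPU under round robin."""
    ready = deque(bursts.items())  # (task name, remaining burst time)
    order = []
    while ready:
        name, remaining = ready.popleft()
        order.append(name)          # task runs for up to one quantum
        remaining -= quantum
        if remaining > 0:
            ready.append((name, remaining))  # unfinished: back of the queue
    return order

print(round_robin({"A": 3, "B": 5, "C": 2}, quantum=2))  # ['A', 'B', 'C', 'A', 'B', 'B']
```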
Priority Scheduling
Chooses highest-priority task (may starve low-priority tasks without aging)
Context Switch
Saving and restoring task state when switching CPU execution
Virtual Memory
Abstraction giving each process its own address space regardless of physical RAM
Heap
Memory region for dynamic allocation
Stack
Per-thread memory for function calls and local variables
Shared Memory (threads)
Threads access the same address space directly
User-Level Threads
Threads managed in user space by a library
Kernel-Level Threads
Threads managed by the OS and scheduled independently
Hyperthreading
Hardware technique where one physical core exposes multiple logical processors to the OS
NUMA
Architecture where memory access cost depends on which CPU node owns the memory
GPU Parallelism
Massively parallel architecture optimized for SIMD workloads
Producer-Consumer Pattern
Producers generate work items consumed by workers
Readers-Writers Problem
Multiple readers can run concurrently but writers need exclusive access
Dining Philosophers Problem
Classic synchronization challenge illustrating deadlock risks
Boss-Worker Pattern
Controller thread assigns tasks to worker threads
Monte Carlo Simulation
Random sampling technique easily parallelized across independent trials
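A minimal Monte Carlo sketch estimating π by sampling points in the unit square (the helper name `estimate_pi` is illustrative). Because every trial is independent, the loop parallelizes trivially: split `trials` across workers and average their estimates:

```python
import random

def estimate_pi(trials: int, seed: int = 0) -> float:
    # Each trial is independent, which is why Monte Carlo methods
    # parallelize easily across processes or machines.
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / trials

print(estimate_pi(100_000))  # ~3.14
```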
Recursion
Function calling itself with a base case and recursive step
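The classic minimal example of a base case plus a recursive step is factorial:

```python
def factorial(n: int) -> int:
    if n <= 1:                       # base case: stops the recursion
        return 1
    return n * factorial(n - 1)      # recursive step on a smaller input

print(factorial(5))  # 120
```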