Parallelism & Concurrency


48 Terms

1. Parallelism
Executing multiple operations at the same time using multiple processing units

2. Concurrency
Structuring a system so multiple tasks overlap in time (not necessarily simultaneously executing)

3. Data Parallelism
Same operation applied across many data items in parallel

4. Task Parallelism
Different tasks/functions run in parallel

5. Pipeline Parallelism
Work split into stages that operate concurrently like an assembly line

6. Coarse-Grained Parallelism
Large tasks with minimal communication overhead

7. Fine-Grained Parallelism
Very small tasks requiring frequent synchronization

8. Speedup
Ratio of sequential time to parallel time

9. Amdahl’s Law
Limits speedup based on the fraction of work that cannot be parallelized
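As a sketch, Amdahl's Law can be written as a small Python function (the helper name `amdahl_speedup` is illustrative, not from the original deck):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Maximum speedup when a fraction p of the work is
    parallelizable across n processing units."""
    return 1.0 / ((1.0 - p) + p / n)

# With 90% of the work parallelizable, 10 units give about 5.26x,
# and no number of units can ever exceed 1 / (1 - p) = 10x.
speedup_10 = amdahl_speedup(0.9, 10)
```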

10. CPU-Bound Task
Task limited by computation rather than waiting on I/O

11. I/O-Bound Task
Task limited by waiting on I/O such as disk or network access

12. Thread
Lightweight unit of execution sharing memory within a process

13. GIL (Global Interpreter Lock)
CPython mutex allowing only one thread to run Python bytecode at a time

14. Critical Section
Code accessing shared mutable state requiring exclusive access

15. Race Condition
Incorrect behavior caused by timing-dependent access to shared data
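The classic race is a lost update. It can be shown deterministically by writing out one bad interleaving of two read-modify-write sequences by hand (a sketch of the interleaving, not real thread scheduling):

```python
counter = 0

# Interleaving where both "threads" read before either writes:
a = counter          # thread A reads 0
b = counter          # thread B reads 0
counter = a + 1      # thread A writes 1
counter = b + 1      # thread B writes 1, overwriting A's update

# Two increments happened, but counter == 1: one update was lost.
```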

16. Deadlock
Threads each hold a resource while waiting on one another, so none can ever proceed
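The textbook two-lock deadlock (thread A takes lock 1 then lock 2, thread B takes lock 2 then lock 1) is avoided by agreeing on a global lock order; a minimal sketch:

```python
import threading

lock1, lock2 = threading.Lock(), threading.Lock()
results = []

def worker(name):
    # Both threads acquire locks in the same global order
    # (lock1 before lock2), so a hold-and-wait cycle cannot form.
    with lock1:
        with lock2:
            results.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both workers finish; with inconsistent lock ordering they could hang forever.
```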

17. Starvation
A thread never gets CPU or resources

18. Livelock
Threads keep responding to each other without making progress

19. Mutex/Lock
Synchronization tool allowing one thread at a time into a critical section
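In Python this is `threading.Lock`; a minimal sketch guarding a shared counter:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:       # only one thread at a time in the critical section
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no increments are lost
```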

20. Thread Pool
Fixed set of reusable worker threads to reduce creation overhead
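Python's standard library provides this as `concurrent.futures.ThreadPoolExecutor`; a sketch mapping work over a fixed set of workers:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# Four reusable workers handle all ten tasks; no per-task thread creation.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(10)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

`pool.map` returns results in input order even though tasks may complete out of order.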

21. queue.Queue
Thread-safe FIFO used for communication in producer–consumer patterns
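A minimal producer–consumer sketch with `queue.Queue`, using a `None` sentinel to signal shutdown:

```python
import queue
import threading

q = queue.Queue()
consumed = []

def producer():
    for item in range(5):
        q.put(item)
    q.put(None)          # sentinel: tells the consumer to stop

def consumer():
    while True:
        item = q.get()
        if item is None:
            break
        consumed.append(item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(consumed)  # [0, 1, 2, 3, 4] - FIFO order is preserved
```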

22. Process
Independent execution unit with its own memory space

23. Process vs Thread
Processes isolate memory and avoid GIL limits but are heavier than threads

24. Multiprocessing in Python
Uses multiple processes for true parallelism on CPU-bound work

25. IPC (Inter-Process Communication)
Mechanisms that allow otherwise isolated processes to communicate and exchange data

26. multiprocessing.Queue
Process-safe FIFO queue for sending Python objects between processes

27. Process Pool
Pool of processes handling tasks in parallel without repeated process creation

28. Scheduling
OS mechanism deciding which ready task runs next

29. FCFS Scheduling
Runs tasks in arrival order (suffers from the convoy effect)

30. SJF Scheduling
Chooses the shortest next CPU burst (minimizes average waiting time but risks starvation)
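The difference between FCFS and SJF shows up in average waiting time; a sketch simulating both policies for jobs that all arrive at time 0:

```python
def average_waiting_time(burst_times):
    """Average time each job waits before starting, given a run order."""
    waits, clock = [], 0
    for burst in burst_times:
        waits.append(clock)   # this job waited until 'clock' to start
        clock += burst
    return sum(waits) / len(waits)

bursts = [24, 3, 3]                             # arrival order
fcfs = average_waiting_time(bursts)             # (0 + 24 + 27) / 3 = 17
sjf = average_waiting_time(sorted(bursts))      # (0 + 3 + 6) / 3 = 3
```

The long job at the front of the FCFS queue makes the short jobs wait (the convoy effect); SJF runs them first.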

31. Round Robin Scheduling
Time-slice based scheduling giving each task a fair quantum

32. Priority Scheduling
Chooses highest-priority task (may starve low-priority tasks without aging)

33. Context Switch
Saving and restoring task state when switching CPU execution

34. Virtual Memory
Abstraction giving each process its own address space regardless of physical RAM

35. Heap
Memory region for dynamic allocation

36. Stack
Per-thread memory for function calls and local variables

37. Shared Memory (threads)
Threads access the same address space directly

38. User-Level Threads
Threads managed in user space by a library

39. Kernel-Level Threads
Threads managed by the OS and scheduled independently

40. Hyperthreading
Hardware technique where one core exposes multiple logical threads

41. NUMA
Architecture where memory access cost depends on which CPU node owns the memory

42. GPU Parallelism
Massively parallel architecture optimized for SIMD workloads

43. Producer-Consumer Pattern
Producers generate work items consumed by workers

44. Readers-Writers Problem
Multiple readers can run concurrently but writers need exclusive access

45. Dining Philosophers Problem
Classic synchronization challenge illustrating deadlock risks
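One standard fix is to order the forks globally so the circular wait can never form; a minimal sketch with three philosophers who each eat once:

```python
import threading

N = 3
forks = [threading.Lock() for _ in range(N)]
meals = []

def philosopher(i):
    # Resource-ordering solution: always pick up the lower-numbered
    # fork first, which breaks the circular-wait condition for deadlock.
    first, second = sorted((i, (i + 1) % N))
    with forks[first]:
        with forks[second]:
            meals.append(i)        # "eat"

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All philosophers eventually eat; if each grabbed the left fork
# first instead, all three could block forever.
```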

46. Boss-Worker Pattern
Controller thread assigns tasks to worker threads

47. Monte Carlo Simulation
Random sampling technique easily parallelized across independent trials
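The classic example is estimating pi by random sampling; because every trial is independent, batches of samples could be farmed out to separate workers and averaged. A sequential sketch:

```python
import random

def estimate_pi(samples, seed):
    """Estimate pi by sampling points in the unit square and counting
    how many land inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

print(estimate_pi(100_000, seed=42))  # roughly 3.14
```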

48. Recursion
Function calling itself with a base case and recursive step
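A minimal example showing both parts, the base case and the recursive step:

```python
def factorial(n):
    if n <= 1:                        # base case: stops the recursion
        return 1
    return n * factorial(n - 1)       # recursive step: smaller subproblem

print(factorial(5))  # 120
```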