parallelism concurrency final


115 Terms

1. parallelism
tasks executing at the exact same time

2. concurrency
tasks making progress during overlapping time intervals

3. parallelism requirement
the need for multiple cores to run tasks simultaneously

4. data parallelism
applying the same operation to different chunks of data concurrently

5. fine-grained parallelism
many small tasks requiring frequent synchronization

6. speedup
ratio of sequential runtime to parallel runtime (Ts/Tp)

7. amdahl’s law
the sequential portion limits the maximum achievable speedup
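Combining the two cards above: with serial fraction f and n workers, Amdahl’s law bounds speedup at 1 / (f + (1 − f)/n). A minimal sketch in Python (the function name is illustrative, not from the course):

```python
def amdahl_speedup(serial_fraction, n_workers):
    """Maximum speedup predicted by Amdahl's law for a program whose
    non-parallelizable fraction is serial_fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# With a 10% serial portion, even unlimited cores cap speedup at 1/0.10 = 10x.
print(amdahl_speedup(0.10, 4))      # modest gain on 4 cores
print(amdahl_speedup(0.10, 1000))   # approaches the 10x ceiling
```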

8. free lunch is over
hardware clock speeds no longer increase, requiring software-level parallelism

9. thread barrier
synchronization point where threads wait until all arrive
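A sketch of a barrier using Python’s threading.Barrier (worker names and the results list are illustrative):

```python
import threading

results = []
barrier = threading.Barrier(3)  # all 3 threads must arrive before any proceeds

def phase_worker(name):
    results.append(f"{name} before")
    barrier.wait()              # block here until every worker has arrived
    results.append(f"{name} after")

threads = [threading.Thread(target=phase_worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Every "before" entry precedes every "after" entry, in any interleaving.
```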

10. global interpreter lock
CPython's mechanism preventing true parallel execution of CPU-bound threads

11. multiprocessing
enabling true parallelism for CPU-bound tasks
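A minimal multiprocessing sketch, assuming a CPU-bound helper of our own invention (sum_of_squares); the demo sits behind a __main__ guard, as multiprocessing requires on spawn-based platforms:

```python
import multiprocessing as mp

def sum_of_squares(n, out, idx):
    # CPU-bound loop; run as separate processes, these can occupy separate
    # cores, since each process has its own interpreter and its own GIL.
    total = 0
    for i in range(n):
        total += i * i
    out[idx] = total

def run_demo():
    out = mp.Array('q', 2)   # shared array of signed 64-bit ints
    procs = [mp.Process(target=sum_of_squares, args=(100_000, out, i))
             for i in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return list(out)

if __name__ == "__main__":
    run_demo()
```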

12. CPU-bound task
computation-heavy task such as complex simulations

13. shared memory
threads within a process share memory and global variables

14. critical section
code requiring exclusive access to shared resources

15. lock
mechanism ensuring only one thread enters a critical section at a time
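A sketch of a lock guarding a critical section, using threading.Lock (the shared counter is an illustrative example, not course code):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:        # only one thread at a time in the critical section
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 40000 with the lock; without it, updates can be lost
```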

16. classes
fundamental OOP structures important for the course

17. start()
method that begins thread execution by calling run() internally

18. threading timer
schedules a function to run after a delay
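A small threading.Timer sketch (the callback and Event are illustrative):

```python
import threading

fired = threading.Event()

def callback():
    fired.set()

t = threading.Timer(0.1, callback)  # run callback() after ~0.1 seconds
t.start()
fired.wait(timeout=5)               # block until the timer has fired
```

Calling t.cancel() before the delay elapses would stop the callback from ever running.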

19. runnable
state where a thread is ready to run but not yet scheduled

20. deadlock
threads waiting indefinitely for resources held by each other

21. threads
best suited for I/O-bound workloads

22. GIL impact
prevents true parallelism for CPU-bound threaded tasks

23. heisenbug
bug that changes or disappears when observed

24. thread-safe
correct behavior under concurrent execution

25. shared mutable state
the primary cause of thread-safety issues

26. context switch
events such as blocking I/O that cause the OS to switch threads

27. thread context
stored CPU state including PC, registers, and stack pointer

28. thrashing
excessive context switching causing performance degradation

29. thread pool
collection of reusable worker threads

30. executor map
method submitting multiple tasks concurrently
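A sketch of concurrent.futures.ThreadPoolExecutor.map (fetch_length is an illustrative stand-in for real I/O-bound work):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_length(word):
    # stand-in for an I/O-bound call such as a network request
    return len(word)

with ThreadPoolExecutor(max_workers=3) as pool:
    # map() submits every task to the pool and yields results in input order
    lengths = list(pool.map(fetch_length, ["one", "three", "seven"]))
```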

31. shared memory risk
race conditions from unsynchronized access

32. queue FIFO
retrieves items in first-in, first-out order

33. queue thread safety
built-in thread-safe design using internal locking

34. queue sentinel
special value (None) used to signal completion
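The queue cards above can be sketched with Python’s thread-safe queue.Queue and a None sentinel (names are illustrative):

```python
import queue
import threading

q = queue.Queue()          # thread-safe FIFO with internal locking
consumed = []

def consumer():
    while True:
        item = q.get()
        if item is None:   # sentinel: the producer is done
            break
        consumed.append(item)

t = threading.Thread(target=consumer)
t.start()
for item in [1, 2, 3]:
    q.put(item)
q.put(None)                # signal completion
t.join()
```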

35. semaphore count
the number of available permits

36. binary semaphore
semaphore restricted to values 0 or 1

37. with semaphore
automatically acquires and releases a permit
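A sketch of a semaphore used as a context manager, capping concurrency at two threads (the peak-tracking variables are illustrative instrumentation):

```python
import threading
import time

sem = threading.Semaphore(2)   # at most 2 permits available
active = 0                     # threads currently inside the guarded section
peak = 0                       # highest concurrency observed
guard = threading.Lock()

def worker():
    global active, peak
    with sem:                  # acquire a permit on entry, release on exit
        with guard:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)       # simulate work
        with guard:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# peak never exceeds the semaphore's 2 permits
```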

38. thread barrier
synchronizing threads at a checkpoint

39. process memory
isolated private memory space

40. process vs thread memory
processes isolate memory; threads share it

41. multiprocessing
Python interface for creating and managing processes

42. process ID
unique identifier of a running process

43. IPC necessity
required because processes cannot share memory directly

44. GIL workaround
using processes for true CPU parallelism

45. IPC challenge
higher overhead and complexity than threads

46. process-safe queue
multiprocessing.Queue for inter-process communication

47. pipe
pair of connection endpoints for two-way communication
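A multiprocessing.Pipe sketch (the child function is illustrative; the demo sits behind a __main__ guard, as multiprocessing requires):

```python
import multiprocessing as mp

def child(conn):
    # receive a request on our endpoint, send a reply, then close it
    msg = conn.recv()
    conn.send(msg.upper())
    conn.close()

def run_demo():
    parent_conn, child_conn = mp.Pipe()   # two connected endpoints
    p = mp.Process(target=child, args=(child_conn,))
    p.start()
    parent_conn.send("hello")             # parent -> child
    reply = parent_conn.recv()            # child -> parent
    p.join()
    return reply

if __name__ == "__main__":
    run_demo()
```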

48. pipe use case
direct parent–child process communication

49. manager shared state
proxy-based sharing via a manager process

50. shared_memory
low-overhead shared buffers without pickling

51. mp value
shared integer accessed via .value

52. pool map
parallel mapping of a function across data

53. map vs apply_async
synchronous vs asynchronous task submission

54. apply_async result
retrieving results via result.get()
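A sketch contrasting Pool.map with apply_async (the cube helper is illustrative; the demo sits behind a __main__ guard):

```python
import multiprocessing as mp

def cube(n):
    return n ** 3

def run_demo():
    with mp.Pool(processes=2) as pool:
        # map() blocks until every result is ready, preserving input order
        mapped = pool.map(cube, [1, 2, 3])
        # apply_async() returns immediately; fetch the value with .get()
        async_result = pool.apply_async(cube, (4,))
        return mapped, async_result.get()

if __name__ == "__main__":
    run_demo()
```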

55. close() and join()
stop task submission and wait for tasks to finish

56. process advantage
true multi-core parallelism for CPU-bound tasks

57. process drawback
higher memory and communication overhead

58. process synchronization primitives
must be created via multiprocessing module or Manager

59. throughput
number of completed tasks per unit time

60. convoy effect
short jobs stuck behind long ones in FCFS scheduling

61. SJF drawback
requires predicting CPU burst length

62. round robin
fixed time slice per process

63. priority scheduling starvation
low-priority tasks might never run

64. aging
increasing process priority to prevent starvation

65. MLFQ
scheduler allowing processes to move between queues

66. context switch
saving and restoring CPU state to run different tasks

67. context elements
PC, registers, stack pointer, and process state

68. preemptive multitasking
OS interrupts running tasks to schedule others

69. virtual memory
strong process isolation

70. private stack
each thread’s separate stack for local variables

71. cache coherence
keeping shared memory values consistent across CPU caches

72. false sharing
separate variables sharing a cache line and causing contention

73. hyperthreading
duplicating some execution resources to expose multiple logical cores

74. NUMA
memory access time depends on memory’s physical location

75. GPU advantage
thousands of SIMD cores for massively parallel workloads

76. advisory file locking
cooperative locking where processes must check locks

77. mandatory file locking
OS-enforced file locking

78. Boss
distributes tasks to workers

79. Worker
executes assigned tasks independently

80. boss-worker queue
boss sends tasks through a shared queue

81. producer
creates data or tasks

82. consumer
processes data or tasks

83. producer-consumer benefit
decouples production rate from consumption rate

84. bounded buffer
buffer with fixed maximum capacity

85. server
listens for client requests and responds

86. client
sends requests and receives responses

87. reader-writer problem
multiple readers allowed; writers require exclusivity

88. writer starvation
writers blocked indefinitely when readers dominate

89. chopsticks
resource used in the dining philosophers problem

90. dining philosophers issue
risk of deadlock if all hold one chopstick

91. sleeping barber: barber state
sleeps when no customers are present

92. sleeping barber: full shop
arriving customer leaves if no waiting seats

93. cigarette smokers agent
places two ingredients on the table

94. cigarette smokers challenge
complex signaling to avoid deadlock or starvation

95. monte carlo
numerical estimation via repeated random sampling
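The classic Monte Carlo sketch, estimating pi by random sampling (function name and seed are illustrative):

```python
import random

def estimate_pi(samples, seed=0):
    # Draw random points in the unit square; the fraction landing inside
    # the quarter circle of radius 1 approximates pi/4.
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(100_000))  # estimate near pi; improves with more samples
```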

96. GIL vs C#
C# has no GIL, allowing true parallel threads

97. task parallel library
core C# concurrency abstraction for async and parallel tasks

98. csharp thread class
represents an OS-managed thread

99. parameterized threading
passing an object to Thread.Start()

100. join()
waits for a thread to finish execution