Parallelism & Concurrency Final

1

According to the course overview, which of the following is NOT listed as a main learning objective for successful graduates?

Master advanced C# memory management techniques.

2

What specific version of Python is highly recommended for use in this course according to the reading?

The latest version (3.13.x, as mentioned in the reading)

3

What is the fundamental difference between parallelism and concurrency as defined in the text?

Parallelism involves tasks literally executing at the same time, while concurrency involves tasks making progress in overlapping time periods.

4

Which concept, parallelism or concurrency, strictly requires multiple processing units (e.g., multiple cores) to be achieved?

Parallelism

5

Applying the same image filter to different blocks of a large image simultaneously is an example of which type of parallelism?

Data Parallelism

6

Which characteristic best describes fine-grained parallelism according to the reading?

Small tasks executing concurrently with frequent communication and synchronization.

7

How is "Speedup" defined in the context of parallel performance?

The ratio of sequential execution time to parallel execution time (Ts / Tp).

8

What is the main implication of Amdahl's Law regarding parallel computing?

The inherently sequential portion of a program limits the maximum achievable speedup.
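
For reference, Amdahl's Law can be written as S(N) = 1 / ((1 - p) + p/N), where p is the parallelizable fraction of the program and N is the number of processors. A minimal illustrative sketch (not from the course materials; the values of p and N are arbitrary examples):

```python
# Illustrative only: Amdahl's Law, S(N) = 1 / ((1 - p) + p / N),
# where p is the parallelizable fraction and N the number of processors.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

if __name__ == "__main__":
    # Even with effectively unlimited cores, a 90%-parallel program
    # cannot exceed 10x speedup because of its sequential 10%.
    for n in (2, 4, 16, 1_000_000):
        print(f"p=0.9, N={n}: speedup = {amdahl_speedup(0.9, n):.2f}")
```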

9

What does the phrase "The Free Lunch is Over" signify in the context of computing performance?

Clock speeds are no longer increasing significantly, requiring software to be explicitly parallelized for performance gains on multi-core processors.

10

What is the primary effect of the Global Interpreter Lock (GIL) in CPython?

It simplifies memory management but prevents true parallelism for CPU-bound tasks using the threading module.

11

Based on the implications of the GIL, when is using Python's multiprocessing module generally preferred over the threading module?

For CPU-bound tasks where true parallelism on multiple cores is needed to improve performance.

12

Which of the following tasks is most likely to be CPU-bound?

Performing complex mathematical calculations for a physics simulation.

13

What is a key characteristic of threads created within the same process, as described in the text?

They share the same memory space and resources like global variables.

14

What is a "critical section" in the context of concurrent programming?

A segment of code where a shared resource is accessed, requiring exclusive access to prevent race conditions.

15

What is the primary purpose of using a threading.Lock in Python?

To ensure only one thread can execute a specific block of code (critical section) at a time, preventing race conditions.
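
A minimal sketch of this idea (illustrative only; the counter variable and thread/iteration counts are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # The 'with' block is the critical section: only one thread
        # can hold the lock (and update counter) at a time.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 every run; without the lock the result can vary
```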

16

Besides lists and dictionaries, what other fundamental Python concept is explicitly mentioned in Section 1.1 as important to review for this course?

Python Classes (OOP)

17

When creating a thread using a class that inherits from threading.Thread, how should the thread's main logic execution be initiated according to the text?

By calling the start() method, which internally calls the run() method in a new thread.
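
A minimal sketch of the subclassing approach (illustrative; the Worker class name is made up):

```python
import threading

class Worker(threading.Thread):
    def __init__(self, name):
        super().__init__()   # must call the parent initializer
        self.name = name

    def run(self):
        # run() holds the thread's main logic; it executes in the new
        # thread when start() is called -- never call run() directly.
        print(f"{self.name} is running")

w = Worker("worker-1")
w.start()   # spawns the thread, which then invokes run()
w.join()
```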

18

Which function within the threading module is specifically designed to schedule a target function to be executed once after a specific delay (in seconds) has passed?

threading.Timer(delay, target_function)
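
A short illustrative sketch (the remind function and the 3-second delay are arbitrary choices):

```python
import threading

def remind(message):
    print(f"Reminder: {message}")

# Schedule remind() to run once, 3 seconds from now, in its own thread.
timer = threading.Timer(3.0, remind, args=("time to save your work",))
timer.start()
# timer.cancel() would abort it if called before the delay expires.
timer.join()
```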

19

According to the described thread lifecycle, what state is a thread in after its start() method has been called but before the operating system scheduler has actually allocated CPU time for it to execute?

Runnable (or Ready)

20

What term describes the specific situation where two or more threads are blocked forever, each waiting for a lock or resource that is held by another thread in the same waiting group?

Deadlock

21

Python threads are generally best suited for which type of tasks?

I/O-bound tasks (like network requests or disk access).

22

What is the primary impact of the Global Interpreter Lock (GIL) in CPython on CPU-bound tasks using threads?

It prevents true parallel execution on multiple cores, limiting speedup.

23

What term describes bugs in multithreaded programs that seem to disappear or change when you try to observe them (e.g., using a debugger or print statements)?

Heisenbugs

24

Code is considered "thread-safe" if it:

Functions correctly and produces predictable results even when executed concurrently by multiple threads.

25

What is identified as the most common source of thread safety problems?

Shared mutable state (data that can be changed and accessed by multiple threads).

26

Which of the following events can trigger a context switch?

A thread making a blocking system call (like waiting for I/O)

27

What information is typically saved as part of a thread's "context" during a context switch?

Program Counter (PC), CPU registers, and stack pointer.

28

What is a potential negative consequence of excessive context switching, sometimes referred to as "thrashing"?

Degraded performance as the system spends more time switching than executing code.

29

What is a primary benefit of using a thread pool instead of creating new threads for every task?

It reduces the overhead of thread creation and destruction by reusing existing worker threads.

30

In the ThreadPoolExecutor Example, how are multiple download tasks submitted to the pool to run concurrently?

Using executor.map(download_page, urls), which maps the function to the iterable of URLs.
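
A minimal sketch in the same spirit (illustrative; the URLs are placeholders and the example needs network access):

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

urls = ["https://example.com", "https://www.python.org"]  # placeholder URLs

def download_page(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return url, len(resp.read())

# map() submits one task per URL; worker threads overlap the network waits.
with ThreadPoolExecutor(max_workers=4) as executor:
    for url, size in executor.map(download_page, urls):
        print(f"{url}: {size} bytes")
```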

31

Why did the CPU-bound factorial calculation example run significantly slower with the thread pool compared to the sequential version in CPython?

The overhead of thread management and context switching, combined with the GIL preventing parallel execution, outweighed any benefits.

32

What is the main danger of using shared memory directly for inter-thread communication without proper synchronization mechanisms like locks?

High potential for race conditions leading to corrupted data and unpredictable results.

33

What fundamental principle does the standard queue.Queue data structure follow for element retrieval?

First-In, First-Out (FIFO)

34

What is a major advantage of using queue.Queue for inter-thread communication highlighted in the text?

It is inherently thread-safe, handling internal locking automatically.

35

In the Queue examples, what is the purpose of the producer thread putting None into the queue?

To act as a sentinel value, signaling to the consumer(s) that no more valid items will be produced.
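
A minimal sketch of the sentinel idea (illustrative; the item count is arbitrary):

```python
import queue
import threading

q = queue.Queue()          # thread-safe FIFO; no explicit locks needed

def producer():
    for item in range(5):
        q.put(item)
    q.put(None)            # sentinel: no more valid items will be produced

def consumer():
    while True:
        item = q.get()
        if item is None:   # sentinel received -> stop consuming
            break
        print(f"consumed {item}")

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```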

36

What does the internal counter of a counting semaphore typically represent?

The number of available "permits" or "slots" for accessing a shared resource.

37

What is the key functional difference between a binary semaphore and a counting semaphore?

A binary semaphore's counter is limited to 0 and 1 (acting like a mutex/lock), while a counting semaphore's counter can be any non-negative integer.

38

In the Semaphore example, using the "with semaphore:" statement provides a convenient way to:

Automatically call semaphore.acquire() before entering the block and semaphore.release() upon exiting the block (ensuring release even if errors occur).
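
A short illustrative sketch (the permit count of 3 and the sleep are arbitrary):

```python
import threading
import time

# At most 3 threads may hold a "permit" at once
# (e.g., simulating a pool of 3 database connections).
semaphore = threading.Semaphore(3)

def use_resource(worker_id):
    with semaphore:              # acquire() on entry, release() on exit, even on error
        print(f"worker {worker_id} using the resource")
        time.sleep(0.5)
    print(f"worker {worker_id} done")

threads = [threading.Thread(target=use_resource, args=(i,)) for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```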

39

What is the main function of a threading.Barrier synchronization primitive?

To make a group of threads wait until all participating threads have reached a specific synchronization point before any are allowed to proceed.

40

What happens when a thread calls the barrier.wait() method on a threading.Barrier object?

The thread blocks until the specified number of 'parties' (threads) required by the barrier have all called wait().
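
A minimal sketch (illustrative; the three parties and the random sleeps are arbitrary):

```python
import threading
import random
import time

# All 3 threads must reach the barrier before any of them continues.
barrier = threading.Barrier(3)

def phase_worker(worker_id):
    time.sleep(random.random())            # simulate uneven work in phase 1
    print(f"worker {worker_id} finished phase 1, waiting at barrier")
    barrier.wait()                         # blocks until all 3 parties have called wait()
    print(f"worker {worker_id} starting phase 2")

threads = [threading.Thread(target=phase_worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```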

41

What is a key characteristic of a process as defined in Section 5.1?

It has its own private, isolated memory space.

42

How does memory usage differ fundamentally between processes and threads within the same application?

Processes have independent memory spaces, while threads share the same memory space.

43

Which Python module provides the primary interface for creating and managing processes?

multiprocessing

44

What does the os.getpid() function return?

The unique Process ID (PID) of the currently executing process.
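
A minimal sketch combining os.getpid() with process creation (illustrative only):

```python
import multiprocessing as mp
import os

def report():
    # Each process has its own PID and its own isolated memory space.
    print(f"child PID: {os.getpid()}, parent PID: {os.getppid()}")

if __name__ == "__main__":           # required on platforms that spawn new interpreters
    print(f"main PID: {os.getpid()}")
    p = mp.Process(target=report)
    p.start()
    p.join()
```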

45

Why is Inter-Process Communication (IPC) necessary?

Because processes have separate, isolated memory spaces and cannot directly share variables.

46

What significant limitation of CPython's threading model can be overcome by using multiple processes?

The Global Interpreter Lock (GIL), allowing true parallelism for CPU-bound tasks.

47

What is mentioned as a challenge associated with using Inter-Process Communication (IPC)?

Higher overhead and complexity compared to inter-thread communication via shared memory.

48

Which specific queue class from the reading is designed to be process-safe for communication between different processes?

multiprocessing.Queue

49

What does the mp.Pipe() function return when called?

A pair of connection objects representing the two ends of the pipe.
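
A minimal sketch (illustrative; the messages exchanged are arbitrary):

```python
import multiprocessing as mp

def child(conn):
    conn.send("hello from the child")    # send a picklable object through the pipe
    print("child received:", conn.recv())
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = mp.Pipe()  # returns the two ends of the pipe
    p = mp.Process(target=child, args=(child_conn,))
    p.start()
    print("parent received:", parent_conn.recv())
    parent_conn.send("hello from the parent")
    p.join()
```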

50

For which scenario is mp.Pipe() typically best suited according to the text?

Direct communication between two closely related processes (e.g., parent/child).

51

How does mp.Manager() facilitate safe sharing of objects like lists and dictionaries between processes?

By using a separate server process to manage the actual objects and providing proxies to worker processes.
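
A minimal sketch (illustrative; the list/dict contents are arbitrary):

```python
import multiprocessing as mp

def add_result(shared_list, shared_dict, n):
    # Workers receive proxies; the real list/dict live in the manager's
    # server process, which serializes access to them.
    shared_list.append(n * n)
    shared_dict[n] = n * n

if __name__ == "__main__":
    with mp.Manager() as manager:
        results = manager.list()
        table = manager.dict()
        procs = [mp.Process(target=add_result, args=(results, table, i)) for i in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(list(results), dict(table))
```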

52

What is the primary advantage of using multiprocessing.shared_memory (introduced in Python 3.8)?

It allows direct memory access without pickling, offering lower overhead, especially for large numerical data.
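
A minimal sketch using a raw byte buffer rather than NumPy (illustrative only; the buffer size and stored value are arbitrary):

```python
from multiprocessing import shared_memory, Process

def worker(name):
    shm = shared_memory.SharedMemory(name=name)   # attach to the existing block
    shm.buf[0] = 42                                # write directly, no pickling
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=16)
    p = Process(target=worker, args=(shm.name,))
    p.start()
    p.join()
    print(shm.buf[0])   # 42, written by the child process
    shm.close()
    shm.unlink()        # free the block once no process needs it
```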

53

If shared_int = mp.Value('i', 10) creates a shared integer, how do you access or modify its underlying value?

By using the .value attribute (e.g., shared_int.value += 1).
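
A minimal sketch (illustrative; it also uses the Value object's built-in lock to make the read-modify-write safe):

```python
import multiprocessing as mp

def increment(shared_int, n):
    for _ in range(n):
        with shared_int.get_lock():      # Value carries an internal lock for updates
            shared_int.value += 1        # the raw number lives behind the .value attribute

if __name__ == "__main__":
    shared_int = mp.Value('i', 10)       # 'i' = C int, initial value 10
    procs = [mp.Process(target=increment, args=(shared_int, 1000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(shared_int.value)              # 10 + 4*1000 = 4010
```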

54

What does the map() method of a multiprocessing.Pool achieve?

It applies a given function to each item in an iterable, distributing the work among the pool's worker processes and collecting results.

55

What is the key difference in execution behavior between pool.map() and pool.apply_async()?

map() is synchronous (blocks until all results are ready), while apply_async() is asynchronous (returns immediately with a result object).

56

When using pool.apply_async with a callback function, what is the purpose of calling pool.close() followed by pool.join()?

To signal that no more tasks will be submitted and then wait for all submitted tasks in the pool to complete their execution.
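
A minimal sketch contrasting the two calls (illustrative; the square function, callback, and worker counts are arbitrary):

```python
import multiprocessing as mp

def square(n):
    return n * n

def on_result(result):
    print("callback got:", result)

if __name__ == "__main__":
    # map(): blocks until every result is ready, preserves input order.
    with mp.Pool(processes=4) as pool:
        print(pool.map(square, range(8)))

    # apply_async(): returns immediately; the callback runs as each result arrives.
    pool = mp.Pool(processes=4)
    for n in range(8):
        pool.apply_async(square, args=(n,), callback=on_result)
    pool.close()   # no more tasks will be submitted
    pool.join()    # wait for all submitted tasks to finish
```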

57

What is stated as the most significant reason to choose processes over threads in Python, especially for performance-critical applications?

To overcome the GIL limitation for CPU-bound tasks and achieve true parallelism across multiple cores.

58

Compared to threads, what is a noted drawback or challenge when using processes?

Higher resource overhead (memory, context switching) and more complex Inter-Process Communication (IPC).

59

Where must process-safe synchronization primitives like multiprocessing.Lock, multiprocessing.Semaphore, etc., be imported or created from to work correctly across process boundaries?

The multiprocessing module or a multiprocessing.Manager object.

60

What is "throughput" in the context of OS scheduling goals?

Completing as many processes/threads as possible per unit of time.

61

What is the "convoy effect" associated with First-Come, First-Served (FCFS) scheduling?

Short processes getting stuck waiting behind long processes that arrived earlier.

62

What is a major drawback of the Shortest-Job-First (SJF) scheduling algorithm?

It requires knowing the length of the next CPU burst, which is usually difficult to predict accurately.

63

The Round Robin (RR) scheduling algorithm assigns each process a small, fixed unit of CPU time known as a:

Time slice or quantum.

64

What potential problem is particularly associated with Priority Scheduling algorithms?

Starvation of low-priority processes if there's a continual supply of high-priority processes.

65

What is the purpose of the "aging" technique sometimes used in priority scheduling?

To prevent starvation by gradually increasing the priority of processes that have been waiting for a long time.

66

What key feature distinguishes Multilevel Feedback Queue scheduling from basic Multilevel Queue scheduling?

Processes can migrate between different queues based on their execution behavior (e.g., CPU usage, waiting time).

67

What is context switching in an operating system?

The mechanism of saving the state (context) of the currently running process/thread and restoring the state of another one to allow the CPU to switch between them.

68

Which pieces of information are typically part of the saved "context" of a process or thread during a context switch?

Program Counter (PC), CPU registers, stack pointer, and process state.

69

What is the key difference between preemptive and non-preemptive (cooperative) multitasking?

In preemptive multitasking, the OS can forcibly interrupt a running process to switch to another; in non-preemptive, processes must voluntarily yield control.

70

What is a major benefit provided by the virtual memory abstraction?

Process isolation, preventing processes from interfering with each other's memory spaces.

71

While threads within a single process share resources like the heap and code segment, what does each thread typically possess individually?

Its own private stack used for local variables, function parameters, and return addresses.

72

What is the main purpose of cache coherence protocols in modern multi-core processors?

To ensure that all cores maintain a consistent and correct view of shared memory data, despite having their own private caches.

73

What is "false sharing" in the context of multi-core processor caches?

When different threads modify logically distinct variables that happen to reside on the same physical cache line, causing unnecessary cache coherence traffic.

74

How does Hyperthreading (or Simultaneous Multithreading - SMT) enhance CPU performance?

A single physical core duplicates some internal execution resources (like registers) to appear as multiple logical cores to the OS, allowing faster switching between threads to hide latency.

75

What is the defining characteristic of a Non-Uniform Memory Access (NUMA) computer architecture?

The time required for a processor to access a particular piece of memory depends on the physical location of that memory relative to the processor's node (local vs. remote access times differ).

76

What architectural feature makes Graphics Processing Units (GPUs) particularly effective for certain parallel computing tasks?

A massively parallel design with thousands of smaller cores optimized for SIMD (Single Instruction, Multiple Data) operations.

77

What primarily limits the parallel I/O performance of traditional spinning Hard Disk Drives (HDDs)?

The physical movement time of the read/write head (seek time) required to access different data locations, making concurrent random access inefficient.

78

What distinguishes advisory file locking from mandatory file locking?

Advisory locking relies on processes cooperating to explicitly check for and respect locks, while mandatory locking is enforced by the operating system kernel itself.

79

What key benefit does file system journaling provide for modern operating systems?

Improved data integrity and faster file system recovery after unexpected system crashes or power failures by logging metadata changes before applying them.

80

In the Boss-Worker pattern, what is the primary responsibility of the "boss"?

Distributing tasks to worker threads and potentially managing them.

81

What is the main role of the "worker" threads in the Boss-Worker pattern?

To receive tasks from the boss and execute them independently.

82

In the provided Boss-Worker code example, how does the boss typically distribute tasks to the workers?

By putting task descriptions into a shared queue.Queue.

83

What do "producer" threads do in the Producer-Consumer pattern?

Generate data or tasks and place them into a shared buffer or queue.

84

What is the role of "consumer" threads in the Producer-Consumer pattern?

Retrieve data or tasks from a shared buffer or queue and process them.

85

What is a primary benefit or purpose of the Producer-Consumer pattern?

Decoupling the rate of data generation (production) from the rate of data processing (consumption).

86

What distinguishes the Bounded Buffer pattern from the basic Producer-Consumer pattern?

The shared buffer has a fixed, maximum size.

87

Under what condition will a producer block (wait) in the Bounded Buffer pattern?

When the shared buffer is full.

88

Under what condition will a consumer block (wait) in the Bounded Buffer pattern?

When the shared buffer is empty.

89

What is the typical role of the "server" in the Client-Server pattern?

To listen for incoming client connections, process client requests, and send back responses.

90

What do "clients" typically do in the Client-Server pattern?

Initiate connections to the server, send requests for services or data, and receive responses.

91

What is the main synchronization challenge in the Reader-Writer problem?

Allowing multiple reader threads to access a shared resource concurrently while ensuring that writer threads have exclusive access.

92

In reader-priority solutions to the Reader-Writer problem, which group (readers or writers) might potentially face starvation if there is a continuous stream of incoming readers?

Writers

93

In the classic Dining Philosophers problem, what must each philosopher acquire simultaneously to be able to eat?

Two chopsticks (specifically, the ones immediately to their left and right).

94

What classic concurrency issue is primarily illustrated by the Dining Philosophers problem if not implemented carefully?

Deadlock, where each philosopher acquires one resource (their left chopstick) and waits indefinitely for the other resource held by their neighbor.

95

In the Sleeping Barber problem scenario, what does the barber do when there are no customers waiting in the shop chairs and none currently being served?

Goes to sleep in the barber chair until a customer arrives and wakes him.

96

What happens to a potential customer arriving at the barber shop described in the Sleeping Barber problem when all the waiting room chairs are already occupied?

The customer gives up and leaves the shop immediately without getting a haircut.

97

In the Cigarette Smokers problem, what does the "agent" entity provide in each cycle?

Two out of the three distinct ingredients (e.g., tobacco and paper, but lacking matches) required to make a cigarette.

98

What core concurrency challenge is highlighted by the synchronization requirements in the Cigarette Smokers problem?

Coordinating complex conditions to ensure the correct smoker (the one with the third ingredient) is signaled and avoids deadlock or starvation.

99

What is the fundamental technique employed in Monte Carlo simulations?

Using repeated random sampling of inputs or processes to obtain numerical results or statistical estimates
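
A minimal sketch estimating pi by random sampling, parallelized with a process pool (illustrative; the sample and worker counts are arbitrary):

```python
import multiprocessing as mp
import random

def count_hits(samples):
    # Randomly sample points in the unit square; count those inside the quarter circle.
    hits = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    total = 4_000_000
    workers = 4
    with mp.Pool(workers) as pool:
        hits = sum(pool.map(count_hits, [total // workers] * workers))
    print("pi estimate:", 4 * hits / total)
```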

100

What is the most significant difference regarding the Global Interpreter Lock (GIL) when comparing CPython threading to C#/.NET threading?

C#/.NET does not have a GIL, enabling threads to execute truly in parallel on multiple CPU cores for CPU-bound work.