Critical section

1. Understanding the Critical Section Problem
  • Critical Section: A specific segment of code within a program where shared resources (like variables, files, or hardware) are accessed. To prevent data inconsistencies, only one process should be allowed to execute its critical section at any given time.

  • Race Condition: Occurs when multiple processes attempt to access and modify the same shared data concurrently. The final outcome becomes dependent on the unpredictable order in which these processes execute, leading to potentially incorrect results.

  • Synchronization: The process of coordinating the execution of multiple concurrent processes or threads to ensure data consistency and avoid race conditions when they access shared resources.
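The lost-update race can be made concrete. The sketch below simulates, step by step, one possible interleaving of two "processes" that each increment a shared counter without synchronization; the interleaving is spelled out by hand (an illustration, not real concurrency) so the incorrect result is reproducible.

```python
# A deterministic simulation of a race condition (lost update). The
# interleaving of the two "processes" is fixed by hand so the bug
# always shows.

shared_counter = 0

def simulate_lost_update():
    global shared_counter
    shared_counter = 0
    # Both processes read the shared value before either writes it back.
    read_by_p1 = shared_counter          # P1 reads 0
    read_by_p2 = shared_counter          # P2 reads 0, interleaved before P1's write
    shared_counter = read_by_p1 + 1      # P1 writes 1
    shared_counter = read_by_p2 + 1      # P2 also writes 1: P1's increment is lost
    return shared_counter

print(simulate_lost_update())  # 1, although two increments ran; the expected value is 2
```

With real threads the same read-modify-write interleaving can occur nondeterministically, which is why the final outcome depends on execution order.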

2. Requirements for a Robust Solution to the Critical Section Problem

Any solution to the critical section problem must satisfy these three conditions:

  1. Mutual Exclusion: If one process is executing in its critical section, no other process is allowed to execute in its critical section.

  2. Progress: If no process is executing in its critical section, and one or more processes want to enter their critical section, then only those processes not executing in their remainder section can participate in deciding which process will enter its critical section next. This decision cannot be postponed indefinitely.

  3. Bounded Waiting: There must be a limit on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. This prevents starvation.

3. Methods for Handling Critical Sections
3.1 Software Solutions
  • Mutex Locks (Binary Semaphores):

    • A simple synchronization primitive that provides mutual exclusion.

    • A process acquires the lock before entering its critical section and releases it upon exiting.

    • Only one process can hold the mutex at a time.
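The acquire/enter/release pattern above can be sketched with Python's threading.Lock standing in for the mutex (the worker name and iteration count are illustrative):

```python
import threading

# A minimal sketch of mutex-based mutual exclusion. threading.Lock plays
# the role of the mutex guarding the shared counter.

counter = 0                      # shared data
mutex = threading.Lock()

def increment_worker(iterations):
    global counter
    for _ in range(iterations):
        mutex.acquire()          # entry section: acquire before the critical section
        counter += 1             # critical section: access the shared resource
        mutex.release()          # exit section: release upon leaving

threads = [threading.Thread(target=increment_worker, args=(100_000,))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 400000: with the mutex held, no increment is lost
```

Because only one thread can hold the mutex at a time, every read-modify-write on the counter completes before the next one begins.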

  • Semaphores:

    • A more generalized synchronization tool that acts as an integer variable.

    • Accessed only through two atomic operations:

      • wait() (also known as P() or down()): Decrements the semaphore value. If the value becomes negative, the process waits.

      • signal() (also known as V() or up()): Increments the semaphore value. If there are waiting processes, one is woken up.

    • Types of Semaphores:

      • Binary Semaphore: Can only have values 0 or 1, essentially acting as a mutex.

      • Counting Semaphore: Can have any non-negative integer value, used to control access to resources with multiple identical instances.
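A counting semaphore can be sketched with Python's threading.Semaphore, where acquire() and release() play the roles of wait() and signal(). The pool size of 3 and the bookkeeping variables are assumptions for illustration:

```python
import threading
import time

# A sketch of a counting semaphore guarding 3 identical resource instances.
# acquire() corresponds to wait()/P(); release() corresponds to signal()/V().

pool = threading.Semaphore(3)    # 3 identical resource instances
state = threading.Lock()         # protects the bookkeeping below
active = 0                       # workers currently holding a resource
peak = 0                         # most workers ever holding one simultaneously

def use_resource():
    global active, peak
    pool.acquire()               # wait(): blocks once all 3 instances are taken
    with state:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)             # pretend to use the resource
    with state:
        active -= 1
    pool.release()               # signal(): return the instance, waking a waiter

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)  # never exceeds 3
```

Ten workers compete, but the semaphore's count caps concurrent holders at the number of resource instances.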

  • Peterson's Solution:

    • A classic software-based solution for two processes that ensures mutual exclusion, progress, and bounded waiting.

    • Uses a flag array (to indicate if a process wants to enter its critical section) and a turn variable (to indicate whose turn it is).
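Peterson's flag array and turn variable can be sketched as below. This assumes CPython's GIL makes plain loads and stores effectively sequentially consistent across threads; on real hardware the algorithm additionally needs memory barriers.

```python
import sys
import threading

# A sketch of Peterson's solution for two threads (processes 0 and 1).
sys.setswitchinterval(5e-5)   # switch threads often so the busy-wait hands over quickly

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # which process must yield when both want in
counter = 0             # shared data the critical section protects

def process(i):
    global turn, counter
    other = 1 - i
    for _ in range(1_000):
        # entry section
        flag[i] = True
        turn = other                      # give priority to the other process
        while flag[other] and turn == other:
            pass                          # busy-wait until it is safe to enter
        counter += 1                      # critical section
        flag[i] = False                   # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 2000: no increment was lost, so mutual exclusion held
```

Setting turn to the other process before waiting is what yields both progress and bounded waiting: whichever process wrote turn last is the one that defers.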

3.2 Hardware Solutions
  • Hardware instructions can provide atomic operations that simplify critical section implementation.

  • TestAndSet() Instruction:

    • Atomically tests a memory word's value and sets it to true.

    • boolean TestAndSet (boolean *target): Returns the original target value and sets target to true in a single, indivisible operation.

  • Swap() Instruction:

    • Atomically swaps the contents of two memory words.

    • void Swap (boolean *a, boolean *b): Swaps the values of a and b atomically.
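Locking with Swap() can be sketched in the same spirit as the TestAndSet() case, again with a hidden lock (_atomic) standing in for the hardware's atomicity:

```python
import sys
import threading

# A sketch of a Swap()-based spinlock. Each process repeatedly swaps its
# local key with the shared lock word until it pulls out False.
sys.setswitchinterval(5e-5)

_atomic = threading.Lock()
lock_word = [False]              # shared lock word: False means free

def swap(a, b):
    """Exchange a[0] and b[0] in one indivisible step."""
    with _atomic:
        a[0], b[0] = b[0], a[0]

counter = 0

def worker():
    global counter
    key = [True]                 # each process keeps a local key
    for _ in range(5_000):
        key[0] = True
        while key[0]:            # keep swapping until we receive False
            swap(lock_word, key)
        counter += 1             # critical section
        lock_word[0] = False     # release the lock

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000
```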

3.3 Operating System Supported Solutions
  • Monitors:

    • A high-level synchronization construct that encapsulates shared data and the procedures that operate on that data.

    • Ensures that only one process can be active within the monitor at any given time, providing mutual exclusion implicitly.

    • Uses condition variables along with wait() and signal() operations to allow processes to wait for certain conditions to be met and to signal other waiting processes.
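A monitor can be sketched in Python with a class whose methods all run under one lock (the implicit mutual exclusion) and whose condition variables supply wait() and signal() (notify() in Python). The bounded buffer below is a classic monitor example; the capacity of 2 is an assumption for illustration.

```python
import threading

# A sketch of a monitor: a bounded buffer whose shared data is only
# reachable through put() and get(), each of which holds the monitor lock.

class BoundedBuffer:
    def __init__(self, capacity=2):
        self._lock = threading.Lock()                  # the monitor lock
        self._not_full = threading.Condition(self._lock)
        self._not_empty = threading.Condition(self._lock)
        self._items = []
        self._capacity = capacity

    def put(self, item):
        with self._lock:                               # one thread active inside
            while len(self._items) == self._capacity:
                self._not_full.wait()                  # wait(): sleep until signalled
            self._items.append(item)
            self._not_empty.notify()                   # signal(): wake one waiter

    def get(self):
        with self._lock:
            while not self._items:
                self._not_empty.wait()
            item = self._items.pop(0)
            self._not_full.notify()
            return item

buf = BoundedBuffer()
results = []
consumer = threading.Thread(
    target=lambda: results.extend(buf.get() for _ in range(5)))
consumer.start()
for i in range(5):
    buf.put(i)
consumer.join()
print(results)  # [0, 1, 2, 3, 4]
```

Note the while loops around wait(): a woken process rechecks its condition before proceeding, since another process may have run inside the monitor in the meantime.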