
Chapter 6: Synchronization

Overview

  • Focus on the synchronization of processes in operating systems.

  • Explore the critical-section problem, possible solutions, and tools for synchronization.

Objectives

  • Introduce process synchronization concepts.

  • Examine the critical-section problem, ensuring shared data consistency.

  • Discuss both software and hardware solutions for synchronization.

  • Review tools used to handle synchronization issues.

Background

  • Concurrent Execution of Processes: A process may be interrupted at any point and resume later; when several processes access shared data concurrently, such interruptions can leave the data in an inconsistent state.

  • Producer-Consumer Example:

    • An integer counter, initialized to 0, tracks the number of full buffers.

    • The producer increments counter after filling a buffer, and the consumer decrements it after emptying one.

Producer and Consumer Algorithms

Producer Code

while (true) {
    /* produce an item in next produced */
    while (counter == BUFFER_SIZE) ;  /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Consumer Code

while (true) {
    while (counter == 0) ;  /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
}

Race Condition

  • A Race Condition occurs when multiple processes access shared variables concurrently, producing inconsistent results.

  • Example Execution Interleaving with counter = 5:

    • counter++ and counter-- each compile to several machine instructions (load, modify, store); if these instructions interleave, the final value of counter may be 4, 5, or 6 instead of the correct 5.

  • To avoid this, the processes must be synchronized so that only one at a time manipulates the shared variable.
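The bad interleaving can be made concrete. The sketch below is a hypothetical single-threaded model: r1 and r2 stand for the producer's and consumer's CPU registers, and the comments trace the values for a starting counter of 5.

```c
/* Model of one problematic interleaving of counter++ (producer) and
   counter-- (consumer). r1 and r2 stand for each process's register. */
int interleave(int counter) {
    int r1, r2;
    r1 = counter;   /* producer: load counter    (r1 = 5) */
    r1 = r1 + 1;    /* producer: increment       (r1 = 6) */
    r2 = counter;   /* consumer: load counter    (r2 = 5) */
    r2 = r2 - 1;    /* consumer: decrement       (r2 = 4) */
    counter = r1;   /* producer: store           (counter = 6) */
    counter = r2;   /* consumer: store           (counter = 4) */
    return counter; /* one produce plus one consume should have left 5 */
}
```

Starting from counter = 5, this ordering ends at 4; reversing the two stores would end at 6.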

Critical Section Problem

  • Definition: Each process has a critical section, a segment of code in which it accesses data shared with other processes. When one process is executing in its critical section, no other process may be executing in its critical section.

  • A protocol must be designed to manage this access, with an entry section and an exit section surrounding the critical section.

Critical Section Structure

  • General structure of a process includes:

    • Entry section (request permission)

    • Critical section (manipulate shared resources)

    • Exit section (release permission)

    • Remainder section (perform other tasks)

Requirements for a Solution to the Critical-Section Problem

  1. Mutual Exclusion: Only one process can be in its critical section at a time.

  2. Progress: If no process is in its critical section and some processes wish to enter, only processes not executing in their remainder sections take part in deciding which enters next, and this decision cannot be postponed indefinitely.

  3. Bounded Waiting: After a process requests entry to its critical section, there must be a bound on the number of times other processes are allowed to enter their critical sections before that request is granted.

Handling Critical Sections in OS

  • Preemptive Kernel: A process may be preempted while running in kernel mode; this increases the risk of race conditions on shared kernel data.

  • Non-preemptive Kernel: A process running in kernel mode runs until it exits kernel mode, blocks, or voluntarily yields the CPU; kernel data structures are therefore essentially free from race conditions.

Peterson’s Solution (Software-based)

  • A classic two-process software solution that uses two shared variables: turn (indicating whose turn it is to enter) and a flag array (indicating whether each process is ready to enter).

  • Algorithm: Each process sets its flag before entering the critical section and yields to the other process by setting turn to the other's index.

    • Ensures mutual exclusion, progress, and bounded waiting are all satisfied (though modern architectures may reorder memory accesses, so it is not guaranteed to work there without memory barriers).
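The algorithm can be sketched as entry and exit routines for process i (with j = 1 - i). This is a minimal illustration; as noted above, real hardware would additionally require memory barriers.

```c
#include <stdbool.h>

/* Shared state for the two processes, 0 and 1. */
static volatile int turn;
static volatile bool flag[2];

/* Entry section for process i (the other process is j = 1 - i). */
void peterson_enter(int i) {
    int j = 1 - i;
    flag[i] = true;   /* announce intent to enter the critical section */
    turn = j;         /* yield priority to the other process */
    while (flag[j] && turn == j)
        ;             /* busy-wait while the other is interested and has the turn */
}

/* Exit section for process i. */
void peterson_exit(int i) {
    flag[i] = false;  /* no longer interested */
}
```

If the other process is not interested (its flag is false), the entry loop exits immediately; if both are interested, turn breaks the tie in favor of whoever set it last's opponent.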

Synchronization Hardware

  • Many systems provide hardware-level support in the form of atomic (uninterruptible) instructions, such as test-and-set, that can be used to manage critical sections and avoid race conditions.

  • The common approach is to build a lock from such an instruction, so that access to critical sections is serialized without interference.
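A typical example of such support is the test_and_set instruction. The sketch below shows its effect in plain C; on real hardware the entire body executes atomically, which is what the plain-C version cannot guarantee.

```c
#include <stdbool.h>

/* Effect of the atomic test_and_set instruction, written as plain C
   for illustration (the hardware executes this body indivisibly). */
bool test_and_set(bool *target) {
    bool rv = *target;  /* remember the old value */
    *target = true;     /* set the lock unconditionally */
    return rv;          /* false means the caller just acquired the lock */
}

static bool lock = false;  /* shared; false means unlocked */

void enter_critical(void) {
    while (test_and_set(&lock))
        ;   /* spin until test_and_set returns false (lock acquired) */
}

void leave_critical(void) {
    lock = false;
}
```

The first caller sees false and proceeds; any later caller sees true and spins until the holder resets the lock.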

Mutex Locks

  • A straightforward locking mechanism to protect critical sections:

    • acquire() and release() functions encapsulate the lock mechanism; both must execute atomically, and because acquire() loops until the lock is free, this kind of mutex requires busy waiting and is called a spinlock.
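A spinlock-style acquire()/release() pair can be sketched with an atomic compare-and-swap. This version uses the GCC/Clang builtin __sync_bool_compare_and_swap; the mutex_t type here is a hypothetical name for the sketch, not a standard type.

```c
/* Minimal spinlock sketch: available == 1 means unlocked. */
typedef struct { volatile int available; } mutex_t;

void acquire(mutex_t *m) {
    /* Atomically change available from 1 to 0; the GCC/Clang builtin
       returns true only if the swap succeeded. */
    while (!__sync_bool_compare_and_swap(&m->available, 1, 0))
        ;   /* busy wait -- this loop is why the lock is a "spinlock" */
}

void release(mutex_t *m) {
    m->available = 1;   /* a full implementation would also make this atomic */
}
```

Spinlocks waste CPU cycles while waiting, but avoid a context switch, so they suit very short critical sections on multiprocessors.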

Semaphores

  • A more sophisticated synchronization tool, involving an integer variable with atomic operations: wait(S) and signal(S).

  • Two types: Counting Semaphore (value may range over an unrestricted domain) and Binary Semaphore (value restricted to 0 or 1, behaving like a mutex lock).
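The classic busy-waiting definitions of wait() and signal() can be sketched as follows. This is for illustration only: as written the operations are not atomic, which a real implementation must guarantee.

```c
/* Classic semaphore definition (busy-waiting form). NOTE: in a real
   implementation wait() and signal() must each execute atomically. */
typedef struct { volatile int value; } semaphore;

void sem_wait_busy(semaphore *S) {
    while (S->value <= 0)
        ;          /* busy wait until a resource is available */
    S->value--;    /* claim one resource */
}

void sem_signal(semaphore *S) {
    S->value++;    /* release one resource */
}
```

A counting semaphore is initialized to the number of available resources; a binary semaphore is initialized to 1.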

Semaphore Usage Example

  • In a producer-consumer scenario, the two processes can coordinate through a shared semaphore initialized to 0:

    • P1 calls signal on the semaphore after producing.

    • P2 calls wait on the semaphore before consuming, so it cannot proceed until P1 has produced.
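A runnable sketch of this coordination using POSIX semaphores and threads is below. BUFFER_SIZE and N_ITEMS are arbitrary values chosen for the demo; empty_slots and full_slots count free and filled buffer slots, so the producer blocks when the buffer is full and the consumer blocks when it is empty.

```c
#include <pthread.h>
#include <semaphore.h>

#define BUFFER_SIZE 5
#define N_ITEMS 20            /* hypothetical item count for this demo */

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;
static sem_t empty_slots;     /* free slots, initialized to BUFFER_SIZE */
static sem_t full_slots;      /* filled slots, initialized to 0 */
static long consumed_sum = 0; /* touched only by the consumer thread */

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= N_ITEMS; item++) {
        sem_wait(&empty_slots);          /* block while the buffer is full */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&full_slots);           /* signal: one more item available */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        sem_wait(&full_slots);           /* block while the buffer is empty */
        consumed_sum += buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&empty_slots);          /* signal: one more free slot */
    }
    return NULL;
}

long run_demo(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUFFER_SIZE);
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return consumed_sum;  /* 1 + 2 + ... + N_ITEMS */
}
```

With a single producer and a single consumer, in and out are each touched by only one thread, so the two counting semaphores alone are enough; multiple producers or consumers would additionally need a mutex around the buffer updates.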

Semaphore Implementation without Busy Waiting

  • Each semaphore maintains a waiting queue to handle process blocking without busy waiting.

  • Operations include adding processes to wait when semaphores are not available and waking a process when resources become available.
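This blocking implementation can be modeled as below. block() and wakeup() are stand-ins for the real scheduler primitives, the fixed-size FIFO list is a simplification, and a negative value records how many processes are currently blocked on the semaphore.

```c
/* Simplified model of a blocking semaphore. block()/wakeup() stand in
   for the scheduler calls that suspend and resume a process. */
#define MAX_WAITERS 16

typedef struct {
    int value;               /* negative: number of blocked processes */
    int list[MAX_WAITERS];   /* FIFO queue of blocked process IDs */
    int head, tail;
} ksem;

static void block(int pid)  { (void)pid; /* scheduler would suspend pid */ }
static void wakeup(int pid) { (void)pid; /* scheduler would resume pid  */ }

void ksem_wait(ksem *S, int pid) {
    S->value--;
    if (S->value < 0) {                        /* no resource available */
        S->list[S->tail++ % MAX_WAITERS] = pid; /* enqueue this process */
        block(pid);                             /* and put it to sleep  */
    }
}

void ksem_signal(ksem *S) {
    S->value++;
    if (S->value <= 0) {                       /* someone is blocked */
        int pid = S->list[S->head++ % MAX_WAITERS];
        wakeup(pid);                           /* wake the oldest waiter */
    }
}
```

Unlike the busy-waiting version, a blocked process consumes no CPU time; the trade-off is the cost of the context switches on block() and wakeup().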

Problems with Semaphores

  • Incorrect use of semaphore operations can lead to deadlock (e.g. two processes acquiring two semaphores in opposite orders, each waiting indefinitely for the other) and starvation (a process blocked indefinitely in a semaphore's waiting queue).

  • Proper sequencing and careful code management are crucial to prevent synchronization problems.