L28_Ch11c

Chapter 11: I/O Management and Disk Scheduling

Overview

  • This chapter discusses I/O management and disk scheduling, focusing on RAID (Redundant Array of Independent Disks) and disk cache management.

11.6 RAID

Definition and Characteristics

  • RAID consists of seven levels (0 through 6) that combine multiple disk drives into a single logical unit.

  • Key characteristics include:

    • Data is distributed across the physical drives using a technique called striping.

    • Redundant disk capacity used for parity information ensures recoverability if a disk fails.
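The parity idea can be sketched with byte-wise XOR. This is a minimal illustration, assuming a hypothetical four-disk array (three data strips plus one parity strip); the strip contents are made up:

```python
from functools import reduce

# Hypothetical array: three data strips plus one parity strip.
data_strips = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]

# The parity strip is the byte-wise XOR of all data strips.
def xor_strips(strips):
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

parity = xor_strips(data_strips)

# If one data strip is lost, XOR-ing the survivors with the parity
# strip reconstructs it exactly.
surviving = [data_strips[0], data_strips[2], parity]
recovered = xor_strips(surviving)
assert recovered == data_strips[1]
```

Because XOR is its own inverse, the same operation computes parity and recovers a missing strip, which is why a single redundant disk suffices for the parity-based RAID levels.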

Historical Context

  • The term RAID was introduced in a 1988 paper by researchers at the University of California, Berkeley (Patterson, Gibson, and Katz).

  • The paper defined a set of disk configurations for improving data access performance and providing redundancy.

Benefits

  • Simultaneous access to data on multiple drives improves I/O performance.

  • Provides redundancy through stored parity, enabling recovery from disk failures.

RAID Levels

Level 0: Non-redundant

  • Data is distributed across all disks without redundancy.

  • Achieves high performance, but no data protection.
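Striping can be sketched as a simple round-robin mapping of logical strips to disks. The disk count and strip numbering here are illustrative:

```python
# Minimal sketch of RAID 0 striping, assuming N disks and round-robin
# placement of fixed-size strips.
N_DISKS = 4

def locate(logical_strip):
    """Map a logical strip number to (disk index, strip offset on that disk)."""
    return logical_strip % N_DISKS, logical_strip // N_DISKS

# Consecutive logical strips land on consecutive disks, so a large
# sequential request can be serviced by all N disks in parallel.
assert [locate(i)[0] for i in range(8)] == [0, 1, 2, 3, 0, 1, 2, 3]
```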

Level 1: Mirrored

  • Mirrors all data: each data disk is paired with a second disk holding an identical copy.

  • No parity-related write penalty; if a drive fails, its mirror continues to serve requests, and reads can be satisfied by either copy.

  • High cost, since it doubles the required storage capacity.

Level 2: Redundancy through Hamming Code

  • Implements parallel access and data striping.

  • Utilizes Hamming code for error correction; effective in high-error environments.

Level 3: Bit-interleaved Parity

  • Requires one redundant disk for the entire array.

  • Provides parallel access, enabling high data transfer rates.

Level 4: Block-level Parity

  • Computes a bit-by-bit parity strip over the corresponding strips on each data disk and stores it on a dedicated parity disk.

  • Incurs a write penalty on small write requests: each write must read the old data and parity, then update both the data strip and the parity strip.
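The small-write penalty can be illustrated with the usual read-modify-write parity update. This is a sketch; `new_parity` is a hypothetical helper, not part of any real controller API:

```python
# RAID 4/5 small-write penalty: updating one data strip requires reading
# the old data and old parity, then writing the new data and recomputed
# parity -- two reads plus two writes per logical write.
def new_parity(old_parity, old_data, new_data):
    # Byte-wise: P' = P xor D_old xor D_new
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

old_data = b"\x10"
new_data = b"\x01"
old_parity = b"\xff"
assert new_parity(old_parity, old_data, new_data) == b"\xee"
```

The trick is that the parity can be refreshed from just the one changed strip, without re-reading every other disk in the stripe.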

Level 5: Distributed Block Parity

  • Distributes parity across all disks, enhancing data availability.

  • Loss of one disk does not lead to data loss.
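The rotation of parity across disks can be sketched as follows. Real controllers use various layouts; this shows one common pattern, and the disk count is illustrative:

```python
# Sketch of rotating parity placement in RAID 5. With N disks, each
# stripe stores its parity on a different disk, so no single disk
# becomes a write bottleneck (as the dedicated parity disk does in RAID 4).
N = 4

def parity_disk(stripe):
    # One common layout: parity walks backward from the last disk.
    return (N - 1) - (stripe % N)

assert [parity_disk(s) for s in range(4)] == [3, 2, 1, 0]
```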

Level 6: Dual Redundancy

  • Utilizes two parity calculations for extremely high data availability.

  • Each write impacts two parity blocks, resulting in a write penalty.

RAID Levels Overview

| Level | Description | Disks Required | Data Availability | Large I/O Transfer Capacity | Small I/O Request Rate |
|-------|-------------|----------------|-------------------|-----------------------------|------------------------|
| 0 | Nonredundant | N | Lower than single disk | Very high | Very high for both read and write |
| 1 | Mirrored | 2N | Higher than RAID 2, 3, 4, or 5; lower than RAID 6 | Higher than single disk for read; similar for write | Up to 2x single disk for read; similar for write |
| 2 | Redundancy via Hamming code | N + m | Much higher than single disk | Highest of all listed alternatives | Approximately 2x single disk |
| 3 | Bit-interleaved parity | N + 1 | Much higher than single disk | Highest of all listed alternatives | Approximately 2x single disk |
| 4 | Block-interleaved parity | N + 1 | Much higher than single disk; comparable to RAID 2, 3, or 5 | Similar to RAID 0 for read; significantly lower for write | Similar to RAID 0 for read; significantly lower for write |
| 5 | Block-interleaved distributed parity | N + 1 | Much higher than single disk | Similar to RAID 0 for read; lower for write | Similar to RAID 0 for read; generally lower for write |
| 6 | Block-interleaved dual distributed parity | N + 2 | Highest of all listed alternatives | Similar to RAID 0 for read; lower than RAID 5 for write | Similar to RAID 0 for read; lower than RAID 5 for write |

11.7 Disk Cache

Definition

  • Cache memory is a smaller, faster memory that sits between main memory and the processor and reduces average memory access time.

  • A disk cache is a buffer in main memory for disk sectors; it holds copies of some of the sectors on the disk for quick access.

Functionality

  • Upon an I/O request, the system checks the disk cache first.

  • If the requested sector is found in the cache, it is accessed directly; otherwise, it is read from the disk into the cache.
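This lookup flow can be sketched as a read-through cache. All names here are illustrative, not a real OS API:

```python
# Minimal read-through disk cache sketch.
cache = {}                      # sector number -> sector contents

def read_sector_from_disk(sector):
    # Stand-in for a real (slow) disk read.
    return f"<contents of sector {sector}>"

def read_sector(sector):
    if sector in cache:         # cache hit: serve directly from memory
        return cache[sector]
    data = read_sector_from_disk(sector)   # cache miss: go to the disk
    cache[sector] = data        # keep a copy for future requests
    return data

read_sector(7)                  # first request: miss, fetched from disk
read_sector(7)                  # second request: hit, served from cache
```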

Replacement Strategies

  • Least Recently Used (LRU): Replaces the block that has been in the cache the longest without being referenced.

  • Least Frequently Used (LFU): Replaces the block with the fewest accesses; maintains a counter for each block.
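An LRU block cache of the kind described above can be sketched with Python's `OrderedDict`. This is a minimal illustration, not how any particular OS implements it:

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-size block cache with least-recently-used replacement."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # least recently used block first

    def access(self, block, data):
        if block in self.blocks:
            self.blocks.move_to_end(block)        # mark most recently used
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)   # evict the LRU block
            self.blocks[block] = data

c = LRUCache(2)
c.access("A", 1)
c.access("B", 2)
c.access("A", 1)                 # "A" becomes most recently used
c.access("C", 3)                 # evicts "B", the least recently used
assert list(c.blocks) == ["A", "C"]
```

An LFU variant would instead keep a reference counter per block and evict the block with the smallest count.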

Performance Evaluation

  • The LRU strategy is often effective in practice; its performance can be assessed by plotting miss rate against cache size, as illustrated in Figures 11.10 and 11.11.

Summary of Disk Cache Management

  • The disk cache improves I/O performance by keeping frequently used disk sectors in main memory, where they can be accessed far faster than on disk.

  • LRU and LFU are critical algorithms for managing cache blocks efficiently.