Chapter 6 - Memory

6.1 Introduction

  • Memory is central to the functionality of stored-program computers.
  • Previous chapters discussed components of memory and their interactions with different Instruction Set Architectures (ISAs).
  • Focus of this chapter: Memory organization.
  • Importance of understanding memory organization for system performance analysis.

Outline

  • Types of memory and the memory hierarchy
  • Cache memory
  • Virtual memory

6.2 Types of Memory

  • Main Memory Types: Two primary types exist:
    • Random Access Memory (RAM)
    • Read-Only Memory (ROM)
  • Types of RAM:
    • Dynamic RAM (DRAM):
    • Composed of capacitors that leak charge over time, requiring periodic refreshing to maintain data.
    • Considered inexpensive due to its simpler design.
    • Static RAM (SRAM):
    • Utilizes circuits similar to D flip-flops.
    • Faster than DRAM and does not require refreshing, making it suitable for cache memory.
  • ROM:
    • Retains data without refreshing.
    • Used for storing permanent or semi-permanent data that remains even when the system is turned off.

6.3 The Memory Hierarchy

  • General principle: Faster memory is more costly than slower memory.
  • Memory is organized hierarchically for optimal performance at minimal cost:
    • Small and fast storage (Registers) in CPU.
    • Main memory (larger and slower) accessed via data bus.
    • Disk and tape drives for larger, nearly permanent storage (more distant from CPU).
  • Access Times for Various Memory Types:
    • Registers: 0.3ns - 2ns
    • Level 1 Cache: 3ns - 8ns
    • Level 2 Cache: 6ns - 20ns
    • Main Memory: 30ns - 70ns
    • Solid-State Disk: 35μs - 100μs
    • Fixed Hard Disks: 3ms - 15ms; Removable Disks: 100ms - 5s
    • Optical Disks (e.g., Jukeboxes): 10s - 3m
    • Magnetic Tapes (robotic libraries) and USB Flash Drives occupy the slowest, highest-capacity end of the hierarchy.
  • Key Concepts:
    • Registers: Storage locations on the processor.
    • Virtual Memory: Uses hard disk space to extend the apparent size of addressable RAM.
    • Cache Memory: Provides quick access speeds.
  • Data Access Process:
    • CPU requests data starting from cache, moving to main memory, and finally disk if not found in previous layers.
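The lookup order above (cache, then main memory, then disk) can be sketched as a small simulation. The layer contents and access times below are illustrative assumptions, not figures for any particular machine.

```python
# Hypothetical sketch of the hierarchical data access process:
# each level is searched in turn, accumulating its access time.
LAYERS = [
    ("L1 cache",    {0x1000},                  5),           # ns (illustrative)
    ("main memory", {0x1000, 0x2000},          50),
    ("disk",        {0x1000, 0x2000, 0x3000},  10_000_000),
]

def access(address):
    """Search each level in turn; return (level found, total time spent in ns)."""
    total = 0
    for name, contents, time_ns in LAYERS:
        total += time_ns
        if address in contents:
            return name, total
    raise KeyError("address not mapped")

print(access(0x1000))  # hit in L1: ("L1 cache", 5)
print(access(0x3000))  # misses L1 and main memory, found on disk
```

The key point the sketch illustrates is that a miss at every fast level makes the slowest level's access time dominate the total.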

6.4 Cache Memory

  • Purpose: To speed up data access by storing recently used data closer to the CPU compared to main memory.
  • Characteristics: Although much smaller than main memory, cache memory has significantly lower access times.
  • Types of Cache:
    • Direct Mapped Cache
    • Simplest scheme. Memory block X maps to cache block Y = X mod N (where N = number of cache blocks).
    • Fully Associative Cache
    • Allows a block to be placed in any cache location; no block needs to be evicted until the cache is full.
    • Set Associative Cache
    • Combines direct mapped and fully associative approaches. Memory mapping occurs to a set of several blocks, not just one.

6.4.1 Direct Mapped Cache

  • Mapping Scheme: Binary main memory address is divided into:
    • Offset: Identifies address within a block.
    • Block Field: Selects a unique cache block.
    • Tag Field: Remaining bits used for identification.
  • Example 6.1:
    • Byte-addressable main memory with 4 blocks and cache with 2 blocks of 4 bytes each.
    • Mapping explanation based on block and offset fields illustrated, demonstrating how memory addresses convert to cache locations.
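The field breakdown in Example 6.1 can be worked out in code. With a 16-byte byte-addressable memory (4 blocks of 4 bytes) and a 2-block cache, a 4-bit address splits into a 2-bit offset, a 1-bit block field, and a 1-bit tag; the widths below follow from those sizes.

```python
# Direct mapped address split for Example 6.1's parameters.
OFFSET_BITS = 2   # 4 bytes per block -> 2 offset bits
BLOCK_BITS  = 1   # 2 cache blocks   -> 1 block bit
# Tag = remaining high-order bit of the 4-bit address.

def split_address(addr):
    offset = addr & 0b11                   # low 2 bits: byte within the block
    block  = (addr >> OFFSET_BITS) & 0b1   # next bit selects the cache block
    tag    = addr >> (OFFSET_BITS + BLOCK_BITS)
    return tag, block, offset

# Address 0b1101 (decimal 13) -> tag 1, cache block 1, offset 1
print(split_address(0b1101))
```

Note the block field is exactly the "X mod N" computation from Section 6.4: taking the low bits of the block number is the same as taking the block number modulo the number of cache blocks when that number is a power of two.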

6.4.2 Fully Associative Cache

  • Characteristics: Blocks can reside anywhere in cache.
  • Accessing Data: Concurrent search for tags in all cache blocks.
  • Challenges: Requires costly hardware and algorithms for block eviction (victim block selection).
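Victim selection needs a replacement policy; least recently used (LRU) is a common choice. A minimal sketch, with an arbitrarily chosen 4-block capacity:

```python
from collections import OrderedDict

# LRU victim selection for a fully associative cache (capacity is an
# illustrative assumption). The OrderedDict's insertion order doubles
# as the recency order: the front entry is the least recently used.
CAPACITY = 4
cache = OrderedDict()   # tag -> block data

def reference(tag):
    if tag in cache:
        cache.move_to_end(tag)                 # hit: mark most recently used
        return "hit"
    if len(cache) >= CAPACITY:
        cache.popitem(last=False)              # evict the LRU victim
    cache[tag] = "block data"
    return "miss"

for t in [1, 2, 3, 4, 1, 5]:   # referencing 5 evicts 2, the LRU block
    reference(t)
print(list(cache))             # [3, 4, 1, 5]
```

In hardware this recency tracking is what makes fully associative caches costly: every block's tag must be compared in parallel, and recency state must be updated on every access.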

6.4.3 Set Associative Cache

  • Structure: Divided into sets, each containing multiple cache blocks.
  • Memory Reference Breakdown: Divided into Tag, Set, and Offset fields.
  • Example 6.5:
    • 2-Way Set Associative example with specific memory and cache details, illustrating how bits are distributed among tag, offset, and set.
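The bit distribution for a set associative cache follows directly from the cache geometry. The concrete sizes below are illustrative assumptions, not the figures from Example 6.5.

```python
import math

# Field widths for an N-way set associative cache: the set field indexes
# a set rather than a single block, so it uses log2(blocks / ways) bits.
def field_widths(mem_words, words_per_block, num_blocks, ways):
    addr_bits   = int(math.log2(mem_words))
    offset_bits = int(math.log2(words_per_block))
    set_bits    = int(math.log2(num_blocks // ways))
    tag_bits    = addr_bits - set_bits - offset_bits
    return tag_bits, set_bits, offset_bits

# Assumed example: 2^14-word memory, 8 words/block, 16 blocks, 2-way.
# 16 / 2 = 8 sets -> 3 set bits; 3 offset bits; 14 - 3 - 3 = 8 tag bits.
print(field_widths(2**14, 8, 16, 2))  # (8, 3, 3)
```

Setting `ways=1` reproduces the direct mapped breakdown, and `ways=num_blocks` gives the fully associative one (zero set bits), which is why set associative mapping is described as a compromise between the two.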

6.5 Virtual Memory

  • Concept: Enhances system performance by extending memory capacity without requiring more physical RAM; disk portions serve as supplemental memory.
  • Paging: Divides main memory into fixed-size page frames and the virtual address space into equal-sized pages; pages are loaded from disk into frames as needed.
  • Address Translation:
    • Physical Address: Actual address in memory.
    • Virtual Address: Program-generated address mapped to physical addresses.
  • Page Fault and Memory Fragmentation:
    • Page fault: Occurs when a referenced page is not resident in main memory and must be retrieved from disk.
    • Fragmentation can occur due to the paging process, creating unusable address spaces.

6.5.1 Page Table

  • Maintains data about each page’s location in memory/disk, with one table per active process.
  • Address Translation Fields:
    • Partitioning into Page Field (identifies page location) and Offset Field (indicates address within page).
  • Valid Bit: Indicates page’s memory presence; if zero, necessitates disk retrieval (page fault).
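The page/offset split and valid-bit check can be sketched as follows. The page size and page table contents are invented for illustration.

```python
# Virtual-to-physical translation through a per-process page table,
# assuming 4-word pages (so 2 offset bits). All entries are invented.
PAGE_SIZE = 4
page_table = {           # page number -> (valid bit, frame number)
    0: (1, 2),
    1: (0, None),        # valid bit 0: page is on disk, not in memory
    2: (1, 0),
}

def translate(virtual_addr):
    page   = virtual_addr // PAGE_SIZE   # page field: which page
    offset = virtual_addr % PAGE_SIZE    # offset field: address within page
    valid, frame = page_table[page]
    if not valid:
        raise RuntimeError("page fault: fetch page %d from disk" % page)
    return frame * PAGE_SIZE + offset    # physical = frame base + offset

print(translate(1))   # page 0 maps to frame 2 -> physical address 9
```

Referencing an address in page 1 here raises the page-fault path, mirroring the valid-bit behavior described above.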

6.5.2 Example Calculations

  • EAT Calculation: Effective Access Time (EAT) includes various memory access times and fault probabilities.
  • Page Replacement: When main memory is full, a resident page must be evicted to make room for the page being brought in from disk.
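The EAT formula weights a normal memory access and a page-fault access by their probabilities. The numbers below are illustrative assumptions, not a worked example from the text.

```python
# Effective Access Time: a weighted average over the two outcomes of
# a memory reference (ordinary access vs. page fault).
def eat(mem_time_ns, fault_time_ns, fault_rate):
    return (1 - fault_rate) * mem_time_ns + fault_rate * fault_time_ns

# Assumed: 50 ns memory access, 8 ms fault service time,
# one page fault per million references.
print(eat(50, 8_000_000, 1e-6))  # ~58 ns
```

Even a one-in-a-million fault rate adds 8 ns to every access on average here, which is why page-fault rates must be kept extremely low for virtual memory to be practical.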

6.5.3 Segmentation in Virtual Memory

  • Differs from paging: memory is divided into variable-length segments whose boundaries are determined by the programmer or compiler.
  • Each process has a segment table; a logical address (segment number plus offset) is translated by looking up the segment's base address, and variable segment sizes introduce their own fragmentation issues.
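Segmented translation adds the offset to a per-segment base address, after checking the offset against the segment's length. The segment table entries below are invented for illustration.

```python
# Segment table: segment number -> (base address, limit/length).
# All values are illustrative assumptions.
segment_table = {
    0: (1000, 200),
    1: (3000, 500),
}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                       # offset past segment end
        raise RuntimeError("segmentation violation")
    return base + offset                      # physical = base + offset

print(translate(1, 42))  # 3000 + 42 = 3042
```

Unlike the paging sketch, the limit check is essential here: because segments vary in length, an out-of-range offset cannot be caught by the address split alone.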

6.5.4 Fragmentation Types

  • Internal Fragmentation: Occurs when portions of allocated pages are unused, leading to wasted memory.
  • External Fragmentation: Occurs when free memory is broken over time into non-contiguous holes too small to satisfy new allocation requests.

6.6 A Real-World Example

  • Pentium Architecture: Supports combinations of paging and segmentation. Multiple cache levels (L1, L2) and TLB structure.
  • Cache Configuration:
    • L1 cache (instruction and data cache) next to processor; L2 cache serves intermediary.
    • Cache designs influence system performance significantly.

Conclusion

  • Memory organization is crucial for performance; consists of a hierarchy with fast-to-slow memory levels.
  • Cache improves speed, while virtual memory provides capacity by utilizing disk space.
  • Cache mapping methodologies include direct, associative, and set associative designs.
  • Fragmentation must be addressed in both paging (internal) and segmentation (external).