CHAPTER 9: MAIN MEMORY

COURSE CONTENT

  • Background

  • Swapping

  • Contiguous Memory Allocation

  • Fragmentation (Internal & External)

  • Non-Contiguous Memory Allocation

  • Segmentation

  • Paging

  • Structure of the Page Table, Address Translation

OBJECTIVES

  • Provide detailed descriptions of various ways of organizing memory hardware.

  • Discuss various memory-management techniques, including paging and segmentation.

  • Describe the Intel Pentium architecture that supports both pure segmentation and segmentation with paging.

BACKGROUND

  • A program must be brought from disk into memory and placed within a process to be executed.

  • Programs can be created in machine language, assembly language, or high-level language.

  • Main memory and registers are the only storage entities directly accessible by the CPU.

  • The CPU fetches instructions from main memory based on the program counter's value.

  • A typical instruction cycle involves fetching the instruction from memory, decoding it, possibly fetching operands from memory, and possibly storing the result back to memory.

BACKGROUND (CONT.)

  • The memory unit only recognizes:

    • Address + read requests (e.g., load memory location 20010 into register 8).

    • Address + data and write requests (e.g., store content of register 6 into memory location 1090).

  • Memory units do not know how addresses are generated.

  • Register access typically occurs within one CPU clock cycle; main memory access can take multiple cycles and may cause stalls.

  • Cache memory sits between main memory and CPU registers to improve access speed.

MEMORY PROTECTION

  • A base register (the smallest legal physical address of a program in memory) and a limit register (size of the program) define program boundaries in memory.

  • Together, these registers define the range of legal addresses for the process.

  • The CPU must ensure every memory access in user mode is within the limits defined by the base and limit registers.
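The base/limit check above can be sketched in Python (illustrative values; in reality this comparison is done by hardware on every user-mode access):

```python
def is_legal_access(address, base, limit):
    """A user-mode access is legal only if it falls inside
    [base, base + limit); anything else traps to the OS."""
    return base <= address < base + limit

# Illustrative values: a process loaded at base 300040 with limit 120900
# may touch addresses 300040 through 420939 inclusive.
```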

HARDWARE ADDRESS PROTECTION

  • Figure: the CPU compares every address against the base and limit registers; any access outside that range traps to the OS as an addressing error.

ADDRESS BINDING

  • Programs stored on disk must be loaded into memory before execution, generally as binary executable files.

  • It is convenient to think of a process's first address as 0000, although in practice the first physical address is rarely 0.

  • Addresses are represented differently throughout program execution:

    • Source program addresses are generally symbolic (e.g., variable "count").

    • Compilers bind symbolic addresses to relocatable addresses (e.g., "14 bytes from the beginning").

    • Linker/loader binds relocatable to absolute addresses (e.g., 74014).
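The final binding step is simple arithmetic; a minimal sketch (not how a real loader is structured):

```python
def bind_absolute(relocatable_offset, load_address):
    """Loader binding: a relocatable address ("N bytes from the start
    of the module") becomes load_address + N once the module's
    position in memory is known."""
    return load_address + relocatable_offset

# "14 bytes from the beginning" with the module loaded at 74000
# binds to absolute address 74014.
```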

BINDING OF INSTRUCTIONS AND DATA TO MEMORY

  • Address binding can occur at three times:

    • Compile time: When memory location is known in advance; absolute code is generated.

    • Load time: When memory location isn't known until load; relocatable code is created.

    • Execution time: Where binding occurs during runtime if the process can be moved; requires hardware support for address maps (e.g., base and limit registers).

MULTISTEP PROCESSING OF A USER PROGRAM

  • Consists of several stages: source program -> compiler -> object module -> linkage editor -> loader -> execution in memory.

LOGICAL VS. PHYSICAL ADDRESS SPACE

  • Distinction between logical (generated by the CPU) and physical addresses (seen by memory unit) is crucial in memory management.

  • They remain the same in compile-time and load-time binding but differ in execution-time binding.

  • The logical address space is the set of all logical addresses generated by a program; the physical address space is the set of physical memory locations corresponding to them.

MEMORY-MANAGEMENT UNIT (MMU)

  • The MMU maps virtual addresses to physical addresses at runtime, allowing user programs to deal only with logical addresses, hiding real physical addresses.
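A minimal sketch of the simplest MMU scheme, in which a single relocation (base) register is added to every CPU-generated address (register value below is illustrative):

```python
def mmu_translate(logical_address, relocation_register):
    """Simplest MMU: the relocation register is added to every logical
    address before it reaches the memory unit. The user program never
    sees the resulting physical address."""
    return logical_address + relocation_register

# With the relocation register set to 14000, logical address 346
# is sent to memory as physical address 14346.
```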

DYNAMIC LINKING

  • Static linking combines system libraries and program code before execution.

  • Dynamic linking defers linking until execution time: a small stub locates the memory-resident library routine, replaces itself with the routine's address, and jumps to it.

  • The OS checks whether the routine is already in another process's memory space, so a single in-memory copy of a library can be shared.

DYNAMIC LOADING

  • Allows loading of routines into memory only when called, improving memory utilization, and prevents loading of unused routines.

  • Useful for handling infrequent cases such as error routines; no special OS support is required, since dynamic loading is implemented through program design.

SWAPPING

  • A process can be swapped temporarily out of memory to a backing store and later brought back into memory to continue execution.

  • Swapping lets the total memory demand of all processes exceed the physical memory available.

  • A backing store is needed, ideally providing quick access to all memory images.

  • "Roll out, roll in" is a swapping variant for priority-based scheduling: a lower-priority process is swapped out so a higher-priority process can be loaded and executed.

SCHEMATIC VIEW OF SWAPPING

  • Illustrates the process of swapping processes between user space, backing store, and main memory.

CONTEXT SWITCH TIME INCLUDING SWAPPING

  • If the next process to run is not in memory, context-switch time can be very long: swapping a 100MB process at a 50MB/sec transfer rate takes 2000 ms each way, or 4000 ms total for swap out plus swap in.
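The figure quoted above follows directly from the transfer rate; a quick sketch of the arithmetic:

```python
def swap_time_ms(process_mb, rate_mb_per_s):
    """One-way transfer time in milliseconds, and the total for a full
    swap (old process out, new process in)."""
    one_way = process_mb / rate_mb_per_s * 1000
    return one_way, 2 * one_way

# 100MB at 50MB/sec: 2000 ms per direction, 4000 ms for the full swap.
```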

  • Swap time can be reduced if the OS knows how much memory is actually in use; processes can inform the OS of changing memory requirements via system calls.

CONTIGUOUS ALLOCATION

  • Main memory is divided into partitions for OS and user processes.

  • Each process is contained within a single contiguous memory section; efficiency is crucial due to limited resources.

MULTIPLE-PARTITION ALLOCATION

  • With variable-partition allocation, partitions are sized to fit each process's needs; the OS keeps track of holes (blocks of free memory) of various sizes scattered throughout memory.

DYNAMIC STORAGE-ALLOCATION PROBLEM

  • Strategies for satisfying requests from free holes:

    • First-fit: Allocate the first adequate hole.

    • Best-fit: Allocate the smallest adequate hole.

    • Worst-fit: Allocate the largest hole.

  • First-fit and best-fit usually outperform worst-fit in both speed and storage utilization.
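The three strategies can be sketched over a list of hole sizes (a simplified model: a real allocator would also split the chosen hole and coalesce adjacent free blocks):

```python
def first_fit(holes, request):
    """Return the index of the first hole big enough, or None."""
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    """Return the index of the smallest adequate hole, or None."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(fits)[1] if fits else None

def worst_fit(holes, request):
    """Return the index of the largest hole, if adequate, or None."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(fits)[1] if fits else None

# For holes [100, 500, 200, 300, 600] and a request of 212:
# first-fit picks the 500 hole, best-fit 300, worst-fit 600.
```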

FRAGMENTATION

  • External Fragmentation: enough total free memory exists to satisfy a request, but it is not contiguous. By the 50-percent rule, for every N allocated blocks roughly another 0.5N are lost to fragmentation, so as much as one-third of memory may be unusable.

  • Internal Fragmentation: Resulting from allocated memory being larger than needed, contributing to unused space within partitions.

FRAGMENTATION (CONT.)

  • Compaction can reduce external fragmentation by consolidating free memory blocks but requires dynamic relocation and cannot occur during I/O operations.

NON-CONTIGUOUS ALLOCATION

  • Allows programs to reside in multiple parts of memory, necessitating hardware support.

  • Techniques include segmentation and paging.

SEGMENTATION

  • A memory-management scheme that reflects the user’s view of memory, where programs consist of several segments (logical units).

  • Allows segments to reside in varied parts of memory, avoiding contiguous allocation limits.

USER'S VIEW OF A PROGRAM

  • Illustrates how different segments (e.g., main program, stack, symbol tables) comprise the logical address space of a program.

LOGICAL AND PHYSICAL MEMORY

  • Displays the correlation between logical address space segments and their physical memory addresses, showing the division between program parts.

SEGMENTATION ARCHITECTURE

  • Logical addresses consist of two components: segment number and offset, which need mapping to a physical address through a segment table.

  • Each segment-table entry holds a base (the segment's starting physical address) and a limit (its length); a segment-table base register and a segment-table length register locate the table itself in memory.

SEGMENTATION HARDWARE

  • Illustrates the mechanism of segment mapping using the base and limit registers with the CPU to access physical memory.

EXAMPLE OF SEGMENTATION

  • Example demonstrates how logical addresses are translated to physical addresses based on segments and offsets, ensuring accurate memory access.
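Segment translation can be sketched as a table lookup plus a limit check (the segment values below are illustrative, in the style of the textbook example):

```python
def segment_translate(segment, offset, segment_table):
    """segment_table maps segment number -> (base, limit).
    The offset must be below the limit, or the access traps."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

# With segment 2 at base 4300, limit 400: byte 53 of segment 2
# maps to physical address 4353, while byte 1222 would trap.
```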

PAGING

  • Paging permits a process's physical address space to be non-contiguous: physical memory is divided into fixed-size frames and logical memory into pages of the same size. This eliminates external fragmentation, although internal fragmentation can still occur in a process's last page.

ADDRESS TRANSLATION SCHEME

  • A logical address is split into a page number (an index into the page table) and a page offset. For a logical address space of 2^m bytes and a page size of 2^n bytes, the page number occupies the high-order m - n bits and the offset the low-order n bits.

PAGING HARDWARE

  • Illustrates the translation from logical addresses to physical addresses through the page table mechanism, aiding efficient memory access.

PAGING MODEL OF LOGICAL AND PHYSICAL MEMORY

  • Shows how the pages of a process's logical memory map, via the page table, onto frames scattered through physical memory.

PAGING EXAMPLE

  • Worked example translating logical addresses to physical addresses for a small memory, e.g., 4-byte pages with 32 bytes (8 frames) of physical memory.
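The translation itself can be sketched as a divide plus a table lookup (the page-table contents below are illustrative):

```python
def page_translate(logical_address, page_table, page_size):
    """Split the logical address into page number and offset, look up
    the frame holding that page, and recombine."""
    page_number, offset = divmod(logical_address, page_size)
    frame = page_table[page_number]
    return frame * page_size + offset

# With 4-byte pages and page table [5, 6, 1, 2]: logical address 13
# is page 3, offset 1 -> frame 2 -> physical address 9.
```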

ALLOCATING FRAMES TO A NEW PROCESS

  • Demonstrates before and after states of memory allocation showing how frames can be assigned efficiently to new processes.

INTERNAL FRAGMENTATION

  • Internal fragmentation is the unused tail of a process's last page: if the process size is not a multiple of the page size, the remainder of the final frame is wasted. On average, half a page per process is lost.
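A sketch of the calculation:

```python
def internal_fragmentation(process_size, page_size):
    """Wasted bytes in the last frame: frames are allocated whole, so
    any remainder leaves the tail of the final frame unused."""
    remainder = process_size % page_size
    return 0 if remainder == 0 else page_size - remainder

# A 72,766-byte process with 2,048-byte pages needs 36 frames and
# wastes 2048 - 1086 = 962 bytes in the last one.
```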

STRUCTURE OF THE PAGE TABLE

  • Works through the sizes involved in paging: for example, a 32-bit logical address with 4KB pages requires 2^20 page-table entries, and at 4 bytes per entry the page table alone occupies 4MB per process.
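The sizing calculation for a flat (single-level) page table can be sketched as:

```python
def page_table_stats(address_bits, page_size_bytes, entry_bytes):
    """Entry count and total size of a flat page table: the offset
    consumes log2(page size) bits; the remaining bits index the table."""
    offset_bits = page_size_bytes.bit_length() - 1  # log2 for powers of two
    entries = 2 ** (address_bits - offset_bits)
    return entries, entries * entry_bytes

# 32-bit addresses, 4KB pages, 4-byte entries:
# 2**20 entries, a 4MB table per process.
```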