In-Depth Notes on Main Memory and Addressing in Operating Systems

Main Memory Overview

  • Programs must be loaded from disk into memory to be executed.
  • The CPU can directly access only main memory and its registers; register access is very fast (typically one CPU cycle).
  • Main-memory access takes many cycles and can stall the CPU; caches placed between main memory and the registers alleviate the latency.

Memory Protection

  • Ensures processes access only their allocated address space.
  • Achieved using a pair of base and limit registers that define the legal address range; the CPU checks every user-mode memory access against them (sketched below).
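  • A minimal sketch in C of that check, assuming illustrative base/limit values and treating an out-of-range reference as a trap that terminates the process:

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Hypothetical base/limit pair loaded by the OS for the running process. */
        static const uint32_t base  = 300040u;   /* smallest legal address        */
        static const uint32_t limit = 120900u;   /* size of the process's range   */

        /* Every memory reference is checked; an out-of-range access traps to the OS. */
        static uint32_t check_access(uint32_t addr)
        {
            if (addr < base || addr >= base + limit) {
                fprintf(stderr, "trap: addressing error at %u\n", addr);
                exit(EXIT_FAILURE);              /* the OS would terminate the process */
            }
            return addr;                         /* legal: pass the address through    */
        }

        int main(void)
        {
            printf("ok: %u\n", check_access(base + 100));   /* inside the range */
            check_access(500000u);                          /* outside: traps   */
            return 0;
        }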

Address Binding

  • Programs on disk, waiting to be brought into memory for execution, form an input queue.
  • Addresses during different program phases:
    • Source code: symbolic address representation.
    • Compiled code: relocatable addresses (e.g., “14 bytes from the start”).
    • Linker/loader: binds relocatable addresses to absolute physical addresses.

Instruction and Data Binding to Memory

  • Address binding occurs at three stages:
    • Compile time: if the memory location is known in advance, absolute code is generated; it must be recompiled if the starting location changes.
    • Load time: if the location is not known at compile time, relocatable code is generated and final binding is delayed until load time.
    • Execution time: binding is delayed until run time, so the process can be moved during execution; requires hardware support (e.g., an MMU with base/limit registers).

Logical vs Physical Address Space

  • Logical address: generated by the CPU (virtual address).
  • Physical address: the address actually seen by the memory unit.
  • Mapping occurs via a memory-management unit (MMU).

Memory-Management Unit (MMU)

  • Maps virtual addresses to physical addresses during execution.
  • In the simplest scheme, the value in a relocation register is added to every address generated by a user process before it is sent to memory (sketched below).
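  • A minimal sketch of that dynamic relocation, assuming an illustrative relocation-register value of 14000:

        #include <stdint.h>
        #include <stdio.h>

        static const uint32_t relocation_register = 14000u;  /* illustrative value */

        /* The MMU adds the relocation register to each CPU-generated (logical) address. */
        static uint32_t mmu_translate(uint32_t logical)
        {
            return logical + relocation_register;
        }

        int main(void)
        {
            /* Logical address 346 maps to physical address 14346. */
            printf("logical %u -> physical %u\n", 346u, mmu_translate(346u));
            return 0;
        }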

Dynamic Loading and Linking

  • Dynamic Loading: Routine loaded only when called.
    • Results in better memory utilization.
  • Dynamic Linking: linking is postponed until execution time; a small stub locates the needed library routine (loading it if necessary) and replaces itself with the routine's address (see the sketch below).
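  • The POSIX dl* interface illustrates the idea: the library is not mapped until dlopen is called, and the routine's address is resolved at run time by dlsym, much as a stub would. A minimal sketch, assuming a Linux/glibc system where the math library is available as "libm.so.6":

        /* Build with: cc demo.c -ldl   (the -ldl flag is needed on Linux/glibc) */
        #include <dlfcn.h>
        #include <stdio.h>

        int main(void)
        {
            /* The math library is loaded only now, not at program start-up. */
            void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* name is Linux-specific */
            if (!handle) {
                fprintf(stderr, "dlopen failed: %s\n", dlerror());
                return 1;
            }

            /* Resolve the address of cos() at run time. */
            double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
            if (!cosine) {
                fprintf(stderr, "dlsym failed: %s\n", dlerror());
                dlclose(handle);
                return 1;
            }

            printf("cos(0.0) = %f\n", cosine(0.0));
            dlclose(handle);
            return 0;
        }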

Contiguous Allocation

  • Early method of memory allocation: resident OS in low memory, user processes in high memory.
  • Relocation and limit registers protect user processes from one another: every logical address must be less than the limit register, and the MMU maps it dynamically by adding the relocation register.

Variable Partition

  • Memory is allocated in variable-sized partitions sized to each process's needs; unallocated memory forms holes of various sizes.
  • When a process finishes, its partition becomes a hole that can be merged with adjacent free partitions (see the coalescing sketch below).
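  • A minimal sketch of hole coalescing over a free list kept sorted by address; the hole structure and sizes are illustrative:

        #include <stdio.h>

        #define MAX_HOLES 16

        /* A hole: a free partition described by its start address and size. */
        struct hole { unsigned start, size; };

        static struct hole holes[MAX_HOLES];
        static int nholes = 0;

        /* Merge any holes that touch (hole[i] ends exactly where hole[i+1] begins).
           Assumes the list is kept sorted by start address. */
        static void coalesce(void)
        {
            for (int i = 0; i + 1 < nholes; ) {
                if (holes[i].start + holes[i].size == holes[i + 1].start) {
                    holes[i].size += holes[i + 1].size;          /* absorb the neighbour */
                    for (int j = i + 1; j + 1 < nholes; j++)     /* close the gap        */
                        holes[j] = holes[j + 1];
                    nholes--;
                } else {
                    i++;
                }
            }
        }

        int main(void)
        {
            /* Two adjacent free partitions left behind by finished processes. */
            holes[nholes++] = (struct hole){ .start = 100, .size = 50 };
            holes[nholes++] = (struct hole){ .start = 150, .size = 30 };
            coalesce();
            printf("holes after coalescing: %d (start %u, size %u)\n",
                   nholes, holes[0].start, holes[0].size);
            return 0;
        }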

Dynamic Storage Allocation Problem

  • How to satisfy a request of size n from the list of free holes; common strategies:
    • First-fit.
    • Best-fit.
    • Worst-fit.
  • First-fit and best-fit are generally better than worst-fit in both speed and storage utilization (all three are sketched below).
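  • A minimal sketch of the three strategies over the same array of hole sizes; each helper returns the index of the chosen hole, or -1 if the request cannot be satisfied (the sizes are illustrative):

        #include <stdio.h>

        /* Choose a hole for a request of `req` KB from holes[0..n-1]. */

        static int first_fit(const unsigned *holes, int n, unsigned req)
        {
            for (int i = 0; i < n; i++)
                if (holes[i] >= req) return i;       /* first hole big enough */
            return -1;
        }

        static int best_fit(const unsigned *holes, int n, unsigned req)
        {
            int best = -1;
            for (int i = 0; i < n; i++)
                if (holes[i] >= req && (best < 0 || holes[i] < holes[best]))
                    best = i;                        /* smallest adequate hole */
            return best;
        }

        static int worst_fit(const unsigned *holes, int n, unsigned req)
        {
            int worst = -1;
            for (int i = 0; i < n; i++)
                if (holes[i] >= req && (worst < 0 || holes[i] > holes[worst]))
                    worst = i;                       /* largest adequate hole */
            return worst;
        }

        int main(void)
        {
            unsigned holes[] = { 100, 500, 200, 300, 600 };  /* hole sizes in KB */
            unsigned req = 212;
            printf("first-fit: hole %d\n", first_fit(holes, 5, req));  /* 500 KB hole */
            printf("best-fit:  hole %d\n", best_fit(holes, 5, req));   /* 300 KB hole */
            printf("worst-fit: hole %d\n", worst_fit(holes, 5, req));  /* 600 KB hole */
            return 0;
        }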

Fragmentation

  • External Fragmentation: Available memory in total but not contiguous.
  • Internal Fragmentation: allocated memory is larger than requested; the leftover space inside the allocated block goes unused.
  • Solutions include memory compaction (only feasible with dynamic relocation).

Paging

  • Allows noncontiguous physical address space, aiding in reducing external fragmentation.
  • Divides memory into fixed-sized blocks called frames.
  • Logical memory divided into pages, with a page table facilitating address mapping.
  • Paging can still cause internal fragmentation in a process's last page, since frames are fixed-sized.

Address Translation in Paging

  • Logical address consists of a page number and page offset.
  • The page number indexes the page table, which holds the base address (frame number) of that page in physical memory; the frame base plus the offset gives the physical address (sketched below).
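  • A minimal sketch of the translation, assuming a 4 KB page size and a small flat page table mapping page numbers to frame numbers (the table contents are illustrative):

        #include <stdint.h>
        #include <stdio.h>

        #define PAGE_SIZE 4096u                       /* assumed page/frame size */
        #define NUM_PAGES 8

        /* Illustrative page table: page_table[p] is the frame holding page p. */
        static const uint32_t page_table[NUM_PAGES] = { 5, 6, 1, 2, 7, 0, 3, 4 };

        static uint32_t translate(uint32_t logical)
        {
            uint32_t page   = logical / PAGE_SIZE;    /* page number: high-order bits */
            uint32_t offset = logical % PAGE_SIZE;    /* page offset: low-order bits  */
            uint32_t frame  = page_table[page];       /* look up the frame number     */
            return frame * PAGE_SIZE + offset;        /* frame base + offset          */
        }

        int main(void)
        {
            uint32_t logical = 2u * PAGE_SIZE + 123u; /* page 2, offset 123 */
            printf("logical %u -> physical %u\n", logical, translate(logical));
            return 0;
        }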

TLB (Translation Lookaside Buffer)

  • A small, fast hardware cache of recent page-table translations, used to speed up address translation.
  • A TLB hit avoids the extra page-table memory access; the hit ratio and replacement policy largely determine the effective access time (worked example below).
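  • As a worked illustration (assumed numbers, and ignoring the TLB lookup time itself): with a 10 ns memory access and a 99% hit ratio, a hit costs one memory access and a miss costs two (page-table entry plus data), so the effective access time is 0.99 × 10 + 0.01 × 20 = 10.1 ns. The same arithmetic in C:

        #include <stdio.h>

        int main(void)
        {
            /* Illustrative numbers: 10 ns per memory access, 99% TLB hit ratio.
               A hit costs one memory access; a miss costs two (page table + data). */
            double mem_ns = 10.0, hit_ratio = 0.99;
            double eat = hit_ratio * mem_ns + (1.0 - hit_ratio) * (2.0 * mem_ns);
            printf("effective access time = %.2f ns\n", eat);   /* 10.10 ns */
            return 0;
        }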

Memory Protection Techniques

  • Protection bits associated with frames for access rights.
  • A valid-invalid bit in each page-table entry indicates whether the page belongs to the process's logical address space (sketched below).
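  • A minimal sketch of a page-table entry carrying a valid-invalid bit and a write-protection bit, checked on every translation (the field names are illustrative):

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Illustrative page-table entry: frame number plus protection metadata. */
        struct pte {
            uint32_t frame;     /* frame number holding the page                     */
            bool     valid;     /* valid-invalid bit: page is in the logical space   */
            bool     writable;  /* protection bit: write access allowed              */
        };

        /* Check the entry before completing a translation; trap on a violation. */
        static uint32_t access_page(const struct pte *e, bool is_write)
        {
            if (!e->valid) {
                fprintf(stderr, "trap: invalid page reference\n");
                exit(EXIT_FAILURE);
            }
            if (is_write && !e->writable) {
                fprintf(stderr, "trap: memory-protection violation (write)\n");
                exit(EXIT_FAILURE);
            }
            return e->frame;
        }

        int main(void)
        {
            struct pte code_page = { .frame = 3, .valid = true, .writable = false };
            printf("read of code page -> frame %u\n", access_page(&code_page, false));
            access_page(&code_page, true);   /* write to a read-only page: traps */
            return 0;
        }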

Shared Pages

  • Reentrant (read-only) code can be shared among processes by mapping the same frames into each process's page table; each process keeps its own private copy of data.

Page Table Management

  • Large page tables necessitate hierarchical or hashed page-table strategies to reduce memory overhead (a two-level split is sketched below).
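  • A minimal sketch of how a hierarchical (two-level) scheme splits a 32-bit logical address, assuming 4 KB pages: 10 bits index the outer page table, 10 bits index the inner page table, and 12 bits are the offset (the split is an assumption for illustration):

        #include <stdint.h>
        #include <stdio.h>

        /* Assumed 32-bit split for 4 KB pages: 10 + 10 + 12 bits. */
        #define OFFSET_BITS 12
        #define INNER_BITS  10

        int main(void)
        {
            uint32_t logical = 0x12345678u;

            uint32_t offset = logical & ((1u << OFFSET_BITS) - 1);
            uint32_t inner  = (logical >> OFFSET_BITS) & ((1u << INNER_BITS) - 1);
            uint32_t outer  = logical >> (OFFSET_BITS + INNER_BITS);

            /* `outer` indexes the outer page table, whose entry points to an inner
               page table; `inner` indexes that table to find the frame number. */
            printf("outer index = %u, inner index = %u, offset = %u\n",
                   outer, inner, offset);
            return 0;
        }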

Swapping

  • A process can be swapped temporarily out of memory to a backing store and later brought back for continued execution, so the total address space of all processes can exceed physical memory.
  • Whether the process must return to the same physical addresses depends on the address-binding method; with execution-time binding it can be placed elsewhere.

Context Switching with Swapping

  • Context-switch time becomes very high when the next process to run is not in memory: a resident process must be swapped out and the target swapped in, an overhead dominated by backing-store transfer time.