Memory Management


1
New cards

Memory

Memory is a large array of words or bytes, each with its own address, serving as a repository of quickly accessible data shared by the CPU and I/O devices.

2
New cards

Volatility of Main Memory

Main memory is volatile, meaning its contents are lost during system failure or power loss.

3
New cards

Key Activities of the Operating System in Memory Management

The OS keeps track of used memory parts, decides which processes to load next when space is available, and handles memory allocation and deallocation.

4
New cards

Bringing a Program into Memory

A program initially resides on disk as a binary executable file and must be loaded into memory by the operating system to be executed.

5
New cards

Multiple Processes in Memory

To improve CPU utilization and speed, operating systems often keep multiple processes in memory at the same time, requiring effective memory management schemes.

6
New cards

User Process Protection in Physical Memory

Most systems allow user processes to reside in any part of physical memory and provide separate memory areas to prevent one process from interfering with another.

7
New cards

Address Binding

Address binding is the process of mapping symbolic addresses in source programs to actual memory addresses; the resulting addresses may be relocatable or absolute.

8
New cards

Compile-time Binding

Compile-time binding occurs when the process's memory location is known at compile time, so the compiler can generate absolute code; if the starting location changes, the code must be recompiled.

9
New cards

Load-time Binding

Load-time binding occurs when the memory location is not known at compile time; the compiler generates relocatable code, and final binding is delayed until the program is loaded.

10
New cards

Execution-time Binding

Execution-time binding delays binding until run time, allowing a process to be moved between memory segments during execution; it requires hardware support such as an MMU.

11
New cards

Role of the Compiler in Address Binding

The compiler binds symbolic addresses to relocatable addresses during program compilation.

12
New cards

Loader/Linkage Editor in Address Binding

The loader or linkage editor binds relocatable addresses to absolute addresses when loading the program into memory.

13
New cards

Logical Address Space

A logical address is generated by the CPU (also called a virtual address), while a physical address is the address seen by the memory hardware; the logical address space is the set of all logical addresses a program generates.

14
New cards

Difference Between Logical and Physical Addresses

In compile-time and load-time binding, logical and physical addresses are the same; in execution-time binding, they differ, with the MMU performing run-time address translation.

15
New cards

Memory Management Unit (MMU)

The MMU performs run-time mapping from virtual (logical) addresses to physical addresses, using a relocation register to add the base address to each virtual address.
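
A minimal C sketch of this mapping (the register values are illustrative; a real MMU performs the check and addition in hardware):

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative register contents, not from any real system. */
static const unsigned long RELOCATION = 14000; /* smallest physical address */
static const unsigned long LIMIT      = 3000;  /* size of the logical space */

/* Translate a CPU-generated logical address to a physical address. */
unsigned long translate(unsigned long logical) {
    if (logical >= LIMIT) {            /* protection check against the limit */
        fprintf(stderr, "trap: addressing error (%lu)\n", logical);
        exit(EXIT_FAILURE);
    }
    return logical + RELOCATION;       /* relocation register adds the base */
}

int main(void) {
    printf("logical 346 -> physical %lu\n", translate(346)); /* prints 14346 */
    return 0;
}
```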

16
New cards

Dynamic Loading

Dynamic loading is a technique where a routine is loaded into memory only when it is called, optimizing memory usage by loading necessary code on demand.

17
New cards

Dynamic Linking

Dynamic linking delays the linking process until execution time, often used for system libraries, allowing shared use and efficient memory utilization.

18
New cards

Purpose of a Stub in System Libraries

A stub is a small piece of code that locates and executes the appropriate memory-resident library routine, so that each user program need not include its own copy of the system libraries.

19
New cards

Swapping in Operating Systems

Swapping involves temporarily moving a process from memory to a backing store (a large disk) and later bringing it back into memory for continued execution, helping manage memory resources.

20
New cards

Backing Store in Swapping

The backing store is a fast, large disk that stores copies of all memory images for all users and provides direct access for swapping processes in and out of memory.

21
New cards

CPU scheduler

Selects a process for execution.

22
New cards

Dispatcher

Checks if the process is in memory; if not, it may swap out other processes to make space.

23
New cards

Swap time

The major part of swap time is transfer time, which is proportional to the amount of memory being swapped.

24
New cards

Compile or load time binding

With compile-time or load-time binding, a swapped-out process must be swapped back into the same memory space it occupied before; execution-time binding allows it to be placed elsewhere.

25
New cards

Problem with swapped-out processes

A process with pending I/O cannot safely be swapped out, because the device may transfer data into memory the process no longer occupies; this is mitigated by latching the job in memory during I/O or by performing I/O only into operating-system buffers.

26
New cards

Contiguous memory allocation

Involves dividing main memory into partitions where each process resides in a single, continuous memory location, typically used in older systems.

27
New cards

Mapping and protection in contiguous allocation

Achieved using relocation and limit registers; the relocation register holds the smallest physical address, and the limit register defines the range of legal logical addresses.

28
New cards

Fixed partitions

Fixed partitions have a set size; each holds one process, so the number of partitions bounds the degree of multiprogramming.

29
New cards

Variable-sized partitions

Tracked by the OS, which manages allocated and free holes, allowing more flexible memory allocation.

30
New cards

Dynamic storage-allocation problem

Involves how to satisfy a memory request of size 'n' from a list of free holes, using strategies like first-fit, best-fit, or worst-fit.

31
New cards

First-fit strategy

Allocates the first sufficiently large hole.

32
New cards

Best-fit strategy

Allocates the smallest suitable hole.

33
New cards

Worst-fit strategy

Allocates the largest hole, aiming to leave the largest leftover space.
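
A minimal C sketch contrasting the three strategies on a single request; the hole sizes and the 212 KB request are illustrative:

```c
#include <stdio.h>

#define NHOLES 5
static int holes[NHOLES] = {100, 500, 200, 300, 600}; /* free hole sizes, KB */

/* Each function returns the index of the chosen hole, or -1 if none fits. */
int first_fit(int req) {
    for (int i = 0; i < NHOLES; i++)
        if (holes[i] >= req) return i;              /* first hole big enough */
    return -1;
}

int best_fit(int req) {
    int best = -1;
    for (int i = 0; i < NHOLES; i++)
        if (holes[i] >= req && (best < 0 || holes[i] < holes[best]))
            best = i;                               /* smallest hole that fits */
    return best;
}

int worst_fit(int req) {
    int worst = -1;
    for (int i = 0; i < NHOLES; i++)
        if (holes[i] >= req && (worst < 0 || holes[i] > holes[worst]))
            worst = i;                              /* largest hole that fits */
    return worst;
}

int main(void) {
    int req = 212;
    printf("first-fit: hole %d, best-fit: hole %d, worst-fit: hole %d\n",
           first_fit(req), best_fit(req), worst_fit(req)); /* prints 1, 3, 4 */
    return 0;
}
```

Simulations show first-fit and best-fit usually beat worst-fit in both speed and storage utilization, with first-fit generally the faster of the two.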

34
New cards

External fragmentation

Occurs when enough total memory exists for a request but is not contiguous, making it impossible to allocate memory despite sufficient total free space.

35
New cards

Internal fragmentation

Happens when allocated memory exceeds the requested size, leaving unused space within a partition.

36
New cards

Compaction

Shuffles memory contents to consolidate free memory into one large block, but it requires dynamic relocation and incurs a cost proportional to the amount of memory moved.

37
New cards

Paging

Eliminates external fragmentation by dividing memory into fixed-size pages, allowing non-contiguous allocation of physical memory.

38
New cards

Physical memory in paging system

Divided into fixed-sized blocks called frames.

39
New cards

Logical memory in paging system

Divided into fixed-sized blocks called pages, with page size equal to frame size.

40
New cards

Page table

Used to translate logical addresses to physical addresses by storing the base address of each page in physical memory.

41
New cards

Address translation in paging system

The CPU-generated address is divided into a page number and a page offset; the page number is used as an index into the page table to find the base address, which is then combined with the offset to form the physical address.
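
A minimal C sketch of the split, assuming 4 KB pages (so the low 12 bits are the offset) and a made-up page table:

```c
#include <stdio.h>

#define OFFSET_BITS 12u
#define PAGE_SIZE   (1u << OFFSET_BITS)       /* 4096-byte pages */

/* Illustrative page table: page_table[p] is the frame holding page p. */
static unsigned page_table[] = {5, 6, 1, 2};

unsigned translate(unsigned logical) {
    unsigned page   = logical >> OFFSET_BITS;     /* high bits: page number */
    unsigned offset = logical & (PAGE_SIZE - 1);  /* low bits: page offset */
    unsigned frame  = page_table[page];           /* index into page table */
    return (frame << OFFSET_BITS) | offset;       /* frame base + offset */
}

int main(void) {
    unsigned logical = 1 * PAGE_SIZE + 123;       /* page 1, offset 123 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    /* page 1 maps to frame 6: physical = 6 * 4096 + 123 = 24699 */
    return 0;
}
```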

42
New cards

Page table base register (PTBR)

Points to the location of the page table in main memory.

43
New cards

Page table length register (PTLR)

Indicates the size of the page table.

44
New cards

Two-memory-access problem

Accessing data requires two memory accesses: one for the page table and one for the data.

45
New cards

Translation Look-Aside Buffer (TLB)

A TLB is a special hardware cache that performs parallel searches for page numbers, providing quick access to frame numbers and reducing address translation time.

46
New cards

TLB hits and misses

On a TLB hit, the frame number is retrieved quickly. On a miss, the frame number is obtained from the page table in memory, and the TLB may update its entries using algorithms like LRU.

47
New cards

Address Space Identifiers (ASIDs)

ASIDs identify different processes within TLB entries, allowing the TLB to hold entries for multiple processes simultaneously and avoiding the need to flush the TLB during context switches.

48
New cards

Effective Access Time (EAT)

EAT is the average memory-access time, accounting for the TLB hit ratio, the TLB lookup time, and the memory access time: EAT = hit ratio × (TLB lookup time + memory access time) + (1 − hit ratio) × (TLB lookup time + 2 × memory access time).
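
For instance, assuming a 20 ns TLB lookup, a 100 ns memory access, and an 80% hit ratio (illustrative numbers): EAT = 0.80 × (20 + 100) + 0.20 × (20 + 100 + 100) = 0.80 × 120 + 0.20 × 220 = 140 ns.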

49
New cards

TLB size and system performance

A larger TLB generally increases the hit ratio, leading to a lower EAT and improved system performance.

50
New cards

Memory protection in paging

Memory protection is implemented by associating a protection bit with each frame and using valid-invalid bits in the page table to indicate whether a page is in the process's logical address space and in memory.

51
New cards

Valid and invalid bits in the page table

A valid bit indicates the page is in the process's logical address space and in memory; an invalid bit indicates the page is either outside the address space or currently on disk.

52
New cards

Frame size as a power of 2

Having the frame (and page) size be a power of 2 makes translation a simple bit split: the high-order bits of a logical address are the page number and the low-order bits are the page offset, so no division is needed.

53
New cards

Internal fragmentation in paging

Internal fragmentation occurs because fixed-sized frames may not be fully utilized by processes, leaving unused space within frames.
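
For example, with a 2,048-byte page size, a 72,766-byte process needs 35 full pages plus 1,086 bytes; it is allocated 36 frames, and the unused 2,048 − 1,086 = 962 bytes in the last frame are internal fragmentation.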

54
New cards

Operating systems using paging

Unix, Linux, and Windows.

55
New cards

Additional bits in access permissions

Additional bits can be used for access permissions like read-write or read-only. The Memory Management Unit (MMU) checks accesses against these bits and traps to the OS on illegal attempts.

56
New cards

Challenges of large logical address spaces

Large logical address spaces (e.g., 2^32 to 2^64) can cause page tables to become very large, making management and access more complex.

57
New cards

Hierarchical paging

Hierarchical paging divides the page table into smaller pieces, typically in a two-level scheme where the page table is paged. Address translation involves indexing into an outer page table and then into an inner page table, with caching helping maintain performance.
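
A minimal C sketch of the index computation, assuming a 32-bit address, 4 KB pages, and a 10/10/12 bit split (a common textbook layout, not tied to a specific system):

```c
#include <stdio.h>

#define OFFSET_BITS 12u      /* 4 KB pages */
#define INNER_BITS  10u      /* inner page-table index */
#define INDEX_MASK  0x3FFu   /* low 10 bits */

void split(unsigned addr) {
    unsigned offset = addr & 0xFFFu;                        /* low 12 bits */
    unsigned inner  = (addr >> OFFSET_BITS) & INDEX_MASK;   /* next 10 bits */
    unsigned outer  = addr >> (OFFSET_BITS + INNER_BITS);   /* top 10 bits */
    printf("outer=%u inner=%u offset=%u\n", outer, inner, offset);
}

int main(void) {
    split(0x00ABCDEFu);   /* prints outer=2 inner=700 offset=3567 */
    return 0;
}
```

The outer index selects an entry in the outer page table, which points to an inner page table; the inner index selects the frame, and the offset is appended.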

58
New cards

Multi-level paging

Multi-level paging involves multiple levels of page tables, requiring multiple memory accesses for address translation. Caching these tables helps maintain reasonable performance despite the multiple accesses.

59
New cards

Hashed page tables

Hashed page tables are used for address spaces larger than 32 bits. Virtual page numbers are hashed into a table containing chains of elements, each with the virtual page number, mapped frame, and a pointer to the next element.

60
New cards

Clustered page tables

Clustered page tables are similar to hashed page tables but organize entries in clusters for improved access.

61
New cards

Inverted page tables

Use a single table for all processes, with one entry per real page of memory, containing the virtual address and process ID.
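
A minimal C sketch of a lookup, with made-up table contents; the index of the matching entry is the frame number:

```c
#include <stdio.h>

struct ipt_entry { int pid; unsigned vpage; };

/* One entry per physical frame; the array index is the frame number. */
#define NFRAMES 4
static struct ipt_entry ipt[NFRAMES] = {
    {1, 7}, {2, 0}, {1, 3}, {2, 9}
};

/* Search by (process id, virtual page); -1 means not resident (page fault). */
int lookup(int pid, unsigned vpage) {
    for (int f = 0; f < NFRAMES; f++)
        if (ipt[f].pid == pid && ipt[f].vpage == vpage)
            return f;
    return -1;
}

int main(void) {
    printf("pid 1, page 3 -> frame %d\n", lookup(1, 3));   /* prints 2 */
    printf("pid 2, page 5 -> frame %d\n", lookup(2, 5));   /* prints -1 */
    return 0;
}
```

The linear scan is the search-time cost mentioned in the next card; hashing the (pid, page) pair avoids it.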

62
New cards

Trade-offs of inverted page tables

They reduce the memory needed to store page tables but increase search time, since the table may need to be searched on every reference; hashing mitigates the search cost.

63
New cards

Reentrant code sharing in paging

One copy of read-only code can be shared among processes, while each process maintains separate copies of private code and data.

64
New cards

Segmentation in memory management

A memory management scheme that presents memory as a collection of logical segments (e.g., main program, procedures, variables, stack).

65
New cards

Structure of a logical address in segmentation

A two-tuple, <segment-number, offset>, that maps to a physical address via a segment table.

66
New cards

Purpose of the segment table in segmentation

Maps two-dimensional logical addresses to one-dimensional physical addresses, with each entry containing a base and a limit.
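
A minimal C sketch with an illustrative segment table; the offset is checked against the segment's limit before being added to its base:

```c
#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned base, limit; };

/* Illustrative segment table (base and limit in bytes). */
#define NSEGS 3
static struct segment seg_table[NSEGS] = {
    {1400, 1000},   /* segment 0 */
    {6300,  400},   /* segment 1 */
    {4300, 1100},   /* segment 2 */
};

unsigned translate(unsigned s, unsigned offset) {
    if (s >= NSEGS || offset >= seg_table[s].limit) {  /* legality checks */
        fprintf(stderr, "trap: addressing error\n");
        exit(EXIT_FAILURE);
    }
    return seg_table[s].base + offset;   /* one-dimensional physical address */
}

int main(void) {
    printf("(segment 2, offset 53) -> %u\n", translate(2, 53)); /* 4353 */
    return 0;
}
```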

67
New cards

Segment-table base register (STBR)

Points to the segment table's location.

68
New cards

Segment-table length register (STLR)

Indicates the number of segments used by a program, aiding in checking the legality of segment numbers.

69
New cards

Memory allocation strategies in segmentation

Can use first-fit or best-fit strategies, which may lead to external fragmentation.

70
New cards

Drawbacks of memory allocation in segmentation

External fragmentation makes it difficult to find contiguous free space for segments.

71
New cards

Memory protection in segmentation

Provided by associating a validation bit and read/write/execute privileges with each segment table entry.

72
New cards

Virtual memory

Separates user logical memory from physical memory, allowing programs to execute even if only partially in memory.

73
New cards

Primary purpose of virtual memory

Overcomes physical memory limitations.

74
New cards

Advantages of virtual memory

Programs are not constrained by physical memory size, enabling more concurrent programs and reducing I/O for loading/swapping.

75
New cards

Demand paging

A memory management scheme that brings a page into memory only when it is needed.

76
New cards

Benefits of demand paging

Reduces I/O and memory requirements, providing faster response and allowing memory sharing among more users.

77
New cards

Hardware support for demand paging

Requires a valid-invalid bit in the page table to indicate if a page is in memory.

78
New cards

Page request when page is in memory

The Memory Management Unit (MMU) proceeds as usual without interruption.

79
New cards

Page request when page is not in memory

The MMU generates a page-fault trap to the operating system to handle the situation.

80
New cards

Steps involved in handling a page fault

1. Check whether the memory reference is valid. 2. If invalid, terminate the process. 3. If valid, find a free frame. 4. Read the desired page from disk into that frame. 5. Update the page table and set the valid bit. 6. Restart the interrupted instruction.

81
New cards

Pure demand paging

Pure demand paging is a scheme where a page is only brought into memory when it is required, with a process starting with no pages in memory and page faults occurring until all needed pages are loaded.

82
New cards

Page fault handling

The CPU needs to restart instructions after a page fault regardless of whether the fault occurred during instruction fetch or operand fetch to ensure correct program execution.

83
New cards

Page replacement

Page replacement occurs when a process needs a page but there are no free frames; the OS selects an unused page in memory to swap out, enabling large virtual memory on smaller physical memory.

84
New cards

Frame-allocation algorithm

A frame-allocation algorithm determines how many frames to allocate to each process, preventing over-allocation of memory during demand paging.

85
New cards

Page replacement algorithms

Page replacement algorithms aim to minimize the page-fault rate by selecting which pages to replace based on specific strategies, evaluated using reference strings.

86
New cards

FIFO page replacement algorithm

FIFO replaces the oldest page in memory. It can suffer from Belady's anomaly, where increasing the number of frames results in more page faults.
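
A minimal C sketch that counts FIFO faults for a reference string; the string below is the classic one for which four frames fault more than three, exhibiting Belady's anomaly:

```c
#include <stdio.h>

/* Count page faults for a reference string under FIFO replacement. */
int fifo_faults(const int *refs, int n, int nframes) {
    int frames[16], next = 0, faults = 0;
    for (int f = 0; f < nframes; f++) frames[f] = -1;   /* empty frames */
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < nframes; f++)
            if (frames[f] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = refs[i];           /* evict the oldest page */
            next = (next + 1) % nframes;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    printf("3 frames: %d faults\n", fifo_faults(refs, 12, 3));  /* 9 */
    printf("4 frames: %d faults\n", fifo_faults(refs, 12, 4));  /* 10 */
    return 0;
}
```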

87
New cards

Optimal (OPT) page replacement algorithm

OPT replaces the page that will not be used for the longest time in the future. It has the lowest page fault rate but is difficult to implement because it requires future knowledge of reference strings.

88
New cards

Least Recently Used (LRU) page replacement algorithm

LRU replaces the page that has not been used for the longest time. It generally performs well and does not suffer from Belady's anomaly, but requires hardware assistance to track recency.
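
A minimal C sketch of the counter (timestamp) form of LRU: each reference records a time of use, and the victim is the frame used longest ago:

```c
#include <stdio.h>

int lru_faults(const int *refs, int n, int nframes) {
    int page[16], last_used[16], faults = 0;
    for (int f = 0; f < nframes; f++) { page[f] = -1; last_used[f] = -1; }
    for (int t = 0; t < n; t++) {
        int hit = -1, victim = 0;
        for (int f = 0; f < nframes; f++) {
            if (page[f] == refs[t]) hit = f;
            if (last_used[f] < last_used[victim]) victim = f; /* oldest use */
        }
        if (hit >= 0) {
            last_used[hit] = t;          /* hit: record the time of use */
        } else {
            page[victim] = refs[t];      /* miss: replace least recently used */
            last_used[victim] = t;
            faults++;
        }
    }
    return faults;
}

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1};
    printf("LRU, 3 frames: %d faults\n", lru_faults(refs, 20, 3)); /* 12 */
    return 0;
}
```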

89
New cards

LRU approximation using hardware

LRU can be approximated with a reference bit that the hardware sets whenever a page is referenced. The algorithm replaces a page whose reference bit is 0, although among such pages the exact order of use is unknown.

90
New cards

Additional-reference-bits algorithm

It keeps a history of reference bits over time, often in an 8-bit shift register, and replaces the page with the smallest value.
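
For example, interpreting each shift register as an unsigned byte, a page with history 11000100 (196) was used more recently than one with 01110111 (119), so the page holding 01110111 would be replaced first.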

91
New cards

Second chance algorithm

A FIFO-based algorithm that uses the reference bit; if the page to be replaced has its reference bit set, the bit is cleared, and the page is given a second chance.
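
A minimal C sketch of the clock form of the algorithm: the hand sweeps, clearing set reference bits, until it finds a page whose bit is already 0 (all values illustrative):

```c
#include <stdio.h>

#define NFRAMES 4
static int page[NFRAMES]    = {10, 11, 12, 13}; /* resident pages (made up) */
static int ref_bit[NFRAMES] = { 1,  0,  1,  1}; /* set on each reference */
static int hand = 0;                            /* the clock hand */

/* Pick a victim frame, giving recently referenced pages a second chance. */
int choose_victim(void) {
    for (;;) {
        if (ref_bit[hand] == 0) {    /* not recently used: evict this one */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        ref_bit[hand] = 0;           /* clear the bit: second chance */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void) {
    int v = choose_victim();
    printf("evict frame %d (page %d)\n", v, page[v]); /* frame 1, page 11 */
    return 0;
}
```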

92
New cards

Enhanced second-chance algorithm

It incorporates the modify bit to prioritize replacement of pages that haven't been used or modified.

93
New cards

Page Buffering Algorithm

This algorithm maintains a pool of free frames. On a page fault, a victim frame is selected, but the desired page is read into a frame from the free pool before the victim is written out, letting the process restart sooner; the victim is later written to disk and its frame is returned to the free pool.

94
New cards

Frame Allocation among processes

Frames must be allocated among processes based on total available frames and minimum requirements; too few frames increase page faults. Allocation schemes include fixed (equal or proportional) and priority-based, where higher priority processes get more frames.
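
For example, under proportional allocation each process of size s_i receives a_i = (s_i / S) × m frames, where S is the total size of all processes and m the number of free frames; with m = 62 and two processes of 10 and 127 pages, the allocations come to roughly 10/137 × 62 ≈ 4 and 127/137 × 62 ≈ 57 frames.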

95
New cards

Global replacement

Global replacement allows processes to select replacement frames from all system frames, potentially increasing throughput but reducing process control over page faults.

96
New cards

Local replacement

Local replacement restricts processes to their own allocated frames, keeping the number constant.

97
New cards

Thrashing in memory management

Thrashing occurs when a process spends more time paging than executing, leading to a significant decrease in performance.

98
New cards

Thrashing

Occurs when a process has insufficient frames, leading to a high page-fault rate, low CPU utilization, and excessive paging, which hampers overall system productivity.

99
New cards

Scenario leading to thrashing

When the OS increases multiprogramming due to low CPU utilization, processes start page faulting and stealing frames from others, causing more page faults and a cycle of repeated thrashing.

100
New cards

Reducing thrashing

Decreasing the degree of multiprogramming, or using local (or priority) replacement algorithms, which prevent a thrashing process from stealing frames from other processes and so limit the damage.