Chapter 9: Memory Management in Operating Systems

98 Terms

1
New cards

What is required for a program to run in memory?

A program must be brought from disk into memory and placed within a process for it to run.

2
New cards

What are the only storage units that the CPU can access directly?

Main memory and registers.

3
New cards

What does the memory unit see during operations?

A stream of addresses with read requests or addresses with data and write requests.

4
New cards

How does register access compare to main memory access in terms of speed?

Register access is done in one CPU clock or less, while main memory can take many cycles, causing a stall.

5
New cards

What component sits between main memory and CPU registers?

Cache.

6
New cards

Why is memory protection necessary?

To ensure that a process can access only those addresses in its address space.

7
New cards

How can memory protection be provided?

By using a pair of base and limit registers that define the logical address space of a process.

8
New cards

What must the CPU check for every memory access in user mode?

The CPU must check that the access is between the base and limit for that user.
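This per-access check can be sketched in Python (the register values are hypothetical; real hardware performs the comparison on every user-mode access and traps to the OS on failure):

```python
BASE, LIMIT = 300040, 120900  # hypothetical base/limit register contents

def check_access(addr):
    """Allow the access only if BASE <= addr < BASE + LIMIT,
    otherwise trap (modeled here as an exception)."""
    if not (BASE <= addr < BASE + LIMIT):
        raise MemoryError(f"trap: addressing error at {addr}")
    return addr

check_access(300040)   # lowest legal address for this process
check_access(420939)   # highest legal address (BASE + LIMIT - 1)
```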

9
New cards

What type of instructions are privileged regarding memory protection?

Instructions for loading the base and limit registers.

10
New cards

What is the input queue in the context of address binding?

Programs on disk, ready to be brought into memory to execute.

11
New cards

What is a disadvantage of loading the first user process at physical address 0000?

It is inconvenient to require that the first user process always start at physical address 0000; binding should be deferrable so a process can be loaded anywhere.

12
New cards

What are the different representations of addresses at various stages of a program's life?

Source code addresses are usually symbolic, compiled code addresses bind to relocatable addresses, and linkers or loaders bind relocatable addresses to absolute addresses.

13
New cards

What are the three stages of address binding for instructions and data?

Compile time, load time, and execution time.

14
New cards

What happens at compile time in address binding?

If the memory location is known a priori, absolute code can be generated; recompilation is necessary if the starting location changes.

15
New cards

What is generated at load time if the memory location is not known at compile time?

Relocatable code.

16
New cards

What occurs during execution time in address binding?

Binding is delayed until run time if the process can be moved during execution from one memory segment to another.

17
New cards

What hardware support is needed for address maps?

Base and limit registers.

18
New cards

What is the purpose of the page table in memory management?

To manage the mapping of virtual addresses to physical addresses.

19
New cards

What are the two types of memory allocation discussed?

Contiguous Memory Allocation and Paging.

20
New cards

What architectures are used as examples in memory management?

Intel 32 and 64-bit Architectures and ARMv8 Architecture.

21
New cards

What is the main objective of Chapter 9 in the Operating System Concepts book?

To provide a detailed description of various ways of organizing memory hardware and discuss memory-management techniques.

22
New cards

What is the central concept of memory management regarding address spaces?

The concept of a logical address space that is bound to a separate physical address space.

23
New cards

What is a logical address?

An address generated by the CPU, also referred to as a virtual address.

24
New cards

What is a physical address?

The address seen by the memory unit.

25
New cards

When are logical and physical addresses the same?

They are the same in compile-time and load-time address-binding schemes.

26
New cards

How do logical and physical addresses differ in execution-time address-binding schemes?

They differ: the CPU works with logical (virtual) addresses, and the MMU translates each one to a physical address at run time.

27
New cards

What is the logical address space?

The set of all logical addresses generated by a program.

28
New cards

What is the physical address space?

The set of all physical addresses corresponding to the logical addresses generated by a program.

29
New cards

What does the Memory-Management Unit (MMU) do?

It is a hardware device that maps virtual addresses to physical addresses at run time.

30
New cards

What is the role of the relocation register in memory management?

The value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
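As a rough sketch (with hypothetical register values), the MMU's dynamic relocation checks each logical address against the limit register, then adds the relocation value before the address reaches memory:

```python
RELOCATION, LIMIT = 14000, 3000  # hypothetical register contents

def mmu_translate(logical):
    """Trap if the logical address is outside [0, LIMIT); otherwise
    add the relocation value to form the physical address."""
    if not (0 <= logical < LIMIT):
        raise MemoryError("trap: addressing error")
    return logical + RELOCATION

physical = mmu_translate(346)  # logical 346 -> physical 14346
```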

31
New cards

What type of addresses does a user program deal with?

A user program deals with logical addresses and never sees the real physical addresses.

32
New cards

When does execution-time binding occur?

Execution-time binding occurs when a reference is made to a location in memory.

33
New cards

What happens to logical addresses during execution-time binding?

Logical addresses are bound to physical addresses.

34
New cards

What is the simplest dynamic relocation scheme?

It is a generalization of the base-register scheme, where the base register is now called the relocation register.

35
New cards

What is the function of the relocation register in memory management?

The value in the relocation register is added to every address generated by a user process at the time it is sent to memory.

36
New cards

What is dynamic loading in operating systems?

Dynamic loading allows a program to execute without needing the entire program in memory; routines are loaded only when called.

37
New cards

What are the benefits of dynamic loading?

Better memory-space utilization since unused routines are never loaded, and all routines are kept on disk in relocatable load format.

38
New cards

What is dynamic linking?

Dynamic linking postpones linking until execution time, using a stub to locate the appropriate memory-resident library routine.

39
New cards

What happens if a routine is not in a process's memory address during dynamic linking?

The operating system adds the routine to the address space if it is not present.

40
New cards

What is the difference between static linking and dynamic linking?

Static linking combines system libraries and program code into the binary program image, while dynamic linking occurs at execution time.

41
New cards

What is contiguous allocation in memory management?

Contiguous allocation is a method where main memory is divided into two partitions: one for the resident operating system and one for user processes.

42
New cards

How does contiguous allocation protect user processes?

Relocation registers are used to protect user processes from each other and from changing operating-system code and data.

43
New cards

What do the base and limit registers do in contiguous allocation?

The base register contains the smallest physical address, and the limit register defines the range of logical addresses allowed.

44
New cards

What is the purpose of the MMU in memory management?

The Memory Management Unit (MMU) maps logical addresses dynamically.

45
New cards

What is variable partition allocation?

Variable partition allocation sizes each partition to the needs of the process it holds, for efficiency; over time this leaves holes of various sizes scattered throughout memory.

46
New cards

What is a 'hole' in the context of variable partition allocation?

A hole is a block of available memory that can accommodate a process when it arrives.

47
New cards

What happens when a process exits in variable partition allocation?

The partition is freed, and adjacent free partitions are combined.

48
New cards

What information does the operating system maintain regarding memory partitions?

The operating system maintains information about allocated partitions and free partitions (holes).

49
New cards

What is the dynamic storage-allocation problem?

The problem of how to satisfy a request of size n from a list of free holes.

50
New cards

What is the significance of dynamic loading for large code bases?

It is useful when large amounts of code are needed to handle infrequently occurring cases.

51
New cards

What is the role of the operating system in dynamic loading?

The OS can provide libraries to implement dynamic loading.

52
New cards

What is a stub in the context of dynamic linking?

A small piece of code used to locate the appropriate memory-resident library routine.

53
New cards

Why might versioning be needed in dynamic linking?

Versioning may be needed to manage updates and compatibility of system libraries.

54
New cards

How does contiguous allocation affect the degree of multiprogramming?

The degree of multiprogramming is limited by the number of partitions available.

55
New cards

What is the main memory structure in contiguous allocation?

Main memory is usually divided into a low memory area for the operating system and a high memory area for user processes.

56
New cards

What is the first-fit memory allocation strategy?

Allocate the first hole that is big enough.

57
New cards

What is the best-fit memory allocation strategy?

Allocate the smallest hole that is big enough; must search the entire list unless ordered by size.

58
New cards

What is the worst-fit memory allocation strategy?

Allocate the largest hole; must also search the entire list.

59
New cards

Which memory allocation strategies are better in terms of speed and storage utilization?

First-fit and best-fit are better than worst-fit.
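The three strategies can be compared on a toy hole list (sizes made up for illustration); each function returns the index of the chosen hole, or None if no hole fits:

```python
def first_fit(holes, n):
    # allocate the first hole that is big enough
    return next((i for i, h in enumerate(holes) if h >= n), None)

def best_fit(holes, n):
    # smallest hole that is big enough; must scan the whole list
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    # largest hole; must also scan the whole list
    fits = [(h, i) for i, h in enumerate(holes) if h >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]
# for a request of size 212:
#   first-fit picks the 500-byte hole, best-fit the 300-byte hole,
#   worst-fit the 600-byte hole
```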

60
New cards

What is external fragmentation?

Total memory space exists to satisfy a request, but it is not contiguous.

61
New cards

What is internal fragmentation?

Allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition but not being used.

62
New cards

What does the first fit analysis reveal about fragmentation?

Given N allocated blocks, another 0.5 N blocks are lost to fragmentation on average; that is, one-third of memory may be unusable. This is known as the 50-percent rule.

63
New cards

How can external fragmentation be reduced?

By compaction, which involves shuffling memory contents to place all free memory together in one large block.

64
New cards

What is required for compaction to be possible?

Relocation must be dynamic and done at execution time.

65
New cards

What is the I/O problem related to compaction?

A job doing I/O cannot be moved during compaction. Solutions: latch the job in memory while it is involved in I/O, or do I/O only into OS buffers.

66
New cards

What is paging in memory management?

A method where the physical address space of a process can be noncontiguous, avoiding external fragmentation and the problem of varying sized memory chunks.

67
New cards

How is physical memory divided in paging?

Into fixed-sized blocks called frames, with sizes that are powers of 2, between 512 bytes and 16 Mbytes.

68
New cards

How is logical memory divided in paging?

Into blocks of the same size called pages.

69
New cards

What is the purpose of a page table in paging?

To translate logical addresses to physical addresses.

70
New cards

What is the address translation scheme in paging?

The address generated by the CPU is divided into a page number (used as an index into a page table) and a page offset (combined with the base address to define the physical memory address).

71
New cards

What are the components of a logical address in paging?

For a logical address space of size 2^m and a page size of 2^n, the address is split into a page number p (the high-order m − n bits) and a page offset d (the low-order n bits).
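A minimal sketch of this split and the resulting translation, using a toy 4-byte page size (n = 2) and a made-up page table:

```python
def split_address(addr, n):
    """Low-order n bits are the offset d; the remaining bits are the page number p."""
    return addr >> n, addr & ((1 << n) - 1)

def translate(addr, page_table, n):
    """Physical address = (frame number << n) | offset."""
    p, d = split_address(addr, n)
    return (page_table[p] << n) | d

page_table = [5, 6, 1, 2]  # hypothetical page -> frame mapping
# logical 3  = page 0, offset 3 -> frame 5 -> physical 23
# logical 13 = page 3, offset 1 -> frame 2 -> physical 9
```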

72
New cards

What is the internal fragmentation in paging?

The unused space within a page when the allocated memory is larger than the requested memory.

73
New cards

How is internal fragmentation calculated in a paging example?

For a page size of 2,048 bytes and a process size of 72,766 bytes, the internal fragmentation is 2,048 - 1,086 = 962 bytes.
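The arithmetic behind this example, worked out step by step:

```python
page_size = 2048
process_size = 72766

full_pages, remainder = divmod(process_size, page_size)
# 35 full pages plus a remainder of 1,086 bytes
frames_needed = full_pages + (1 if remainder else 0)    # 36 frames
internal_frag = (page_size - remainder) if remainder else 0
# the last frame wastes 2,048 - 1,086 = 962 bytes
```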

74
New cards

What is the worst-case fragmentation scenario in paging?

One frame minus 1 byte: a process needing one byte more than a multiple of the page size wastes almost an entire frame.

75
New cards

What is the average fragmentation in paging?

1/2 of the frame size.

76
New cards

Are small frame sizes desirable in paging?

Small frame sizes reduce internal fragmentation, but each page consumes a page-table entry, so page sizes have actually grown over time.

77
New cards

What is the purpose of a page table entry in memory management?

Each entry tracks the mapping of one page, but the entries themselves consume memory, which argues against very small page sizes.

78
New cards

What are the two page sizes supported by Solaris?

8 KB and 4 MB.

79
New cards

What is the process of swapping in operating systems?

Temporarily moving a process out of memory to a backing store and bringing it back for execution.

80
New cards

What is a backing store in the context of swapping?

A fast disk large enough to accommodate copies of all memory images for users, providing direct access to these images.

81
New cards

What is the 'roll out, roll in' swapping variant?

A method used in priority-based scheduling where lower-priority processes are swapped out to allow higher-priority processes to execute.

82
New cards

What is the relationship between swap time and memory size?

Total transfer time is directly proportional to the amount of memory swapped.

83
New cards

What does the system maintain to manage processes in memory?

A ready queue of processes that are ready to run and have their memory images on disk.

84
New cards

Does a swapped-out process need to return to the same physical address?

It depends on the address binding method used.

85
New cards

What happens to pending I/O operations during swapping?

Pending I/O cannot occur as it would affect the wrong process.

86
New cards

What is double buffering in the context of swapping?

Transferring I/O to kernel space before sending it to the I/O device, which adds overhead.

87
New cards

When is standard swapping typically not used in modern operating systems?

It is not used unless free memory is extremely low.

88
New cards

What is the context switch time when swapping is involved?

It can be very high if the next process is not in memory, requiring swapping of processes.

89
New cards

How long can a context switch take when swapping a 100MB process at a transfer rate of 50MB/sec?

Total context switch time can be 4000 ms (4 seconds) for swapping out and in.
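As a quick check, the figure follows directly from the transfer rate (ignoring disk latency):

```python
process_mb = 100       # size of the process being swapped
rate_mb_per_s = 50     # disk transfer rate

swap_out_s = process_mb / rate_mb_per_s     # 2 s to write the old process out
swap_in_s = process_mb / rate_mb_per_s      # 2 s to read the next process in
total_ms = (swap_out_s + swap_in_s) * 1000  # 4000 ms of context-switch overhead
```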

90
New cards

What system calls can reduce the size of memory swapped?

request_memory() and release_memory() to inform the OS of memory usage.

91
New cards

Why is swapping not typically supported on mobile systems?

Due to limited flash memory space, limited write cycles, and poor throughput between flash memory and CPU.

92
New cards

How does iOS manage low memory conditions?

It asks apps to voluntarily relinquish allocated memory, allowing read-only data to be reloaded from flash if needed.

93
New cards

What does Android do when free memory is low?

It terminates apps but first writes the application state to flash for fast restart.

94
New cards

What memory management method is supported by both iOS and Android?

Paging.

95
New cards

What is a major drawback of swapping in modern operating systems?

It can significantly increase context switch times and overall system latency.

96
New cards

What is the impact of swapping on system performance?

Swapping can degrade performance due to high context switch times and the overhead of managing memory.

97
New cards

What is the significance of a threshold in memory allocation?

Swapping is started if memory allocation exceeds a certain threshold and disabled when below it.

98
New cards

What is the effect of modified versions of swapping found in systems like UNIX and Windows?

They optimize memory management by only swapping when free memory is critically low.