Chapter 3: Memory Management

100 Terms

1
New cards

Memory management

during execution, programs, together with the data they access, must be present in main memory

2
New cards

Address Binding

processes on the disk that are waiting to be brought into memory for execution form the input queue

input queue → select a process → loads into memory → execute and accesses instructions and data from memory → process terminates → memory space is declared available

3
New cards

Most systems allow a user process

to reside in any part of the physical memory

4
New cards

in most cases, a user program will go through

several steps during compile time, load time, and execution time before being executed

*addresses may be represented in different ways during these steps

5
New cards

source program

addresses are generally symbolic

6
New cards

compiler

typically binds symbolic addresses to relocatable addresses

7
New cards

Linkage editor or loader

binds the relocatable addresses to absolute addresses

8
New cards

Binding

each binding is a mapping from one address space to another

9
New cards

Binding instructions during compile time

if we know at compile time where the process will reside in memory then absolute code can be generated

10
New cards

Binding instructions at load time

if not known at compile time, compiler must generate relocatable code

11
New cards

Binding instructions at execution time

if process can be moved during its execution from one memory segment to another, then binding must be delayed until run time

12
New cards

logical address

address generated by CPU

13
New cards

physical address

address seen by the memory unit, that is, the one loaded into the memory address register of the memory

14
New cards

compile time and load time address binding methods generate

identical logical and physical addresses

15
New cards

execution time binding scheme generates

different logical and physical addresses
the logical address generated in this scheme is usually referred to as a virtual address

16
New cards

logical address space

set of all logical addresses generated by a program

17
New cards

physical address space

set of all physical addresses corresponding to these logical addresses

18
New cards

memory management unit

completes run time mapping from virtual to physical addresses

19
New cards

dynamic relocation

hardware first compares the virtual address with the limit register to make sure it is in range (raising an exception if not), then adds the relocation register (base) to the virtual address to get the physical address
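A minimal sketch of this base/limit translation; the register values (14000 and 3000) are hypothetical examples, not from the source:

```python
# Sketch of dynamic relocation with relocation (base) and limit registers.
# The register values are hypothetical examples.
RELOCATION_REGISTER = 14000  # physical start of the process's memory
LIMIT_REGISTER = 3000        # size of the process's logical address space

def relocate(virtual_address):
    """Check the address against the limit, then add the base."""
    if not 0 <= virtual_address < LIMIT_REGISTER:
        raise MemoryError("addressing error: virtual address out of range")
    return virtual_address + RELOCATION_REGISTER
```

With these values, virtual address 346 maps to physical address 14346, while an address of 3000 or more raises an exception.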

20
New cards

virtual address

An address that corresponds to a location in virtual space and is translated by address mapping to a physical address when memory is accessed.

is divided into 2 parts: page # and offset

21
New cards

page number

used to index page table
equals virtual address divided by page size

22
New cards

offset

equals virtual address mod the page size

23
New cards

page table entry gives

the frame number for the page

24
New cards

frame number is combined with offset to obtain

physical address
(frame number * frame size) + offset = physical address
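The page-number, offset, and physical-address arithmetic of the last few cards can be sketched as follows; the page size and page-table contents are hypothetical example values:

```python
# Sketch of the page/offset arithmetic above. PAGE_SIZE and the page
# table contents are hypothetical example values.
PAGE_SIZE = 1024                 # bytes per page (frames are the same size)
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number

def to_physical(virtual_address):
    page_number = virtual_address // PAGE_SIZE  # indexes the page table
    offset = virtual_address % PAGE_SIZE        # displacement within the page
    frame_number = page_table[page_number]      # from the page table entry
    return (frame_number * PAGE_SIZE) + offset  # physical address
```

For example, virtual address 1031 lies in page 1 at offset 7, so it maps to frame 2 and physical address 2055.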

25
New cards

Dynamic loading

routine is not loaded until it is called

26
New cards

size of process is limited to physical memory so dynamic loading helps

obtain better memory utilization

27
New cards

all routines are kept on a disk (dynamic loading)

in a relocatable load form

28
New cards

main program loaded into (dynamic loading)

memory and executed

29
New cards

when routine needs to call another routine (dynamic loading)

calling routine checks to see if called routine has been loaded

30
New cards

if called routine has not been loaded (dynamic loading)

the relocatable linking loader is called to load the desired routine into memory and to update the program's address tables to reflect the change; control is then passed to the newly loaded routine

31
New cards

dynamic loading advantages

unused routine never loaded

32
New cards

swapping

a process can temporarily be swapped out of memory to a backing store and brought back into memory for continued execution

33
New cards

memory space where swapping happens

normally swapped back into the same memory space it previously occupied, BUT this depends on the address-binding method
*if binding is done at assembly or load time, the process cannot easily be moved to a different location
if binding is done at execution time, it can be swapped into a different location, since physical addresses are computed at execution time

34
New cards

backing store

fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images
maintains a ready queue consisting of all processes whose memory images are on the backing store

35
New cards

dispatcher

checks whether next process in queue is in memory and swaps in desired process

36
New cards

whenever cpu scheduler decides to execute a process

calls the dispatcher

37
New cards

swap time

takes time to swap (context-switch time)
fairly high
execution time should be higher than swap time
major part of swap time is transfer time
total transfer time is directly proportional to the amount of memory swapped

38
New cards

factors that affect swapping

process must be completely idle in order to be swapped
process may be waiting for an I/O operation
if I/O is asynchronously accessing the user memory for I/O buffers, then process cannot be swapped

39
New cards

divide memory into (memory allocation)

several fixed sized partitions

40
New cards

each partition may contain (memory allocation)

exactly one process

41
New cards

when a partition is free (memory allocation)

a process is selected from input queue and is loaded into the free partition

42
New cards

when process terminates (memory allocation)

partition becomes available for another process

43
New cards

processes in ready queue (memory allocation)

require different amounts of memory (partitions)

44
New cards

if partitions are too big or small (memory allocation)

merge or split contiguous free partitions to accommodate a process

45
New cards

dynamic storage allocation problem

how to satisfy a request of size n from a list of free partitions in main memory

46
New cards

dynamic storage allocation problem solution

1. first fit
2. best fit
3. worst fit

47
New cards

first fit (dynamic storage allocation problem solution)

1. allocate first hole that is big enough
2. searching can start at beginning or where previous first-fit search ended
3. stop when find hole that is big enough

48
New cards

best-fit (dynamic storage allocation problem solution)

1. allocate smallest hole that is big enough
2. must search entire list, unless the list is ordered by size
3. this strategy produces smallest leftover hole

49
New cards

worst-fit (dynamic storage allocation problem solution)

1. allocate largest hole
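The three strategies can be sketched as functions over a list of hole sizes; the hole list and request size in the example are hypothetical:

```python
# Sketches of the three placement strategies. Each takes a list of hole
# sizes and a request size n, returning the index of the chosen hole
# (or None if no hole is big enough).
def first_fit(holes, n):
    for i, size in enumerate(holes):
        if size >= n:          # stop at the first hole that is big enough
            return i
    return None

def best_fit(holes, n):
    fits = [i for i, size in enumerate(holes) if size >= n]
    # smallest hole that is big enough -> smallest leftover hole
    return min(fits, key=lambda i: holes[i]) if fits else None

def worst_fit(holes, n):
    if not holes or max(holes) < n:
        return None
    return max(range(len(holes)), key=lambda i: holes[i])  # largest hole
```

For holes of sizes [100, 500, 200, 300, 600] and a request of 212, first fit picks the 500-byte hole, best fit the 300-byte hole, and worst fit the 600-byte hole.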

50
New cards

fragmentation

as processes are loaded and removed from memory, the free memory space is broken into little pieces which results in fragmentation

51
New cards

External Fragmentation

enough total memory to satisfy request, but available spaces are not contiguous
storage is fragmented into a large number of small holes
can be severe in worst case

52
New cards

Internal Fragmentation

occurs when memory blocks assigned to process are bigger than what the process actually needs
some portion of memory is left unused and cannot be used by another process
solution: use best-fit approach

53
New cards

compaction (solution to external fragmentation)

shuffle the memory contents so as to place all free memory together in one large block

54
New cards

problems with compaction

not always possible
if relocation is static and done at assembly or load time, compaction cannot be done
possible only if relocation is dynamic and done at execution time

55
New cards

Paging (solution to external fragmentation)

memory management scheme that permits the physical address space of a process to be non-contiguous

avoids problem of fitting memory chunks of varying sizes onto the backing store

56
New cards

break physical memory into (paging)

fixed blocks called frames

57
New cards

break logical memory into (paging)

blocks of same size called pages

58
New cards

when process is to be executed (paging)

pages loaded into any available memory frames from backing store

59
New cards

backing store divided into (paging)

fixed-sized blocks that are of the same size as the memory frames

60
New cards

frames and pages are

same size

61
New cards

every address in cpu is divided into

1. page number (p): used as an index into page table
2. page offset (d): displacement within the page

62
New cards

page table

contains the base address of each page in physical memory

63
New cards

base address is combined with

page offset to define the physical memory address

64
New cards

multilevel paging

divide pages into smaller sizes and use a two-level paging algorithm in which page table itself is also paged

65
New cards

Page Table Entries

contains info about frame number, present/absent, protection, reference, caching bit, dirty

66
New cards

frame number (page table entries)

denotes frame where page is present in main memory
number of bits depends on the number of frames in the main memory

67
New cards

present/absent or valid/invalid (page table entries)

whether the page is present in the main memory or not

68
New cards

protection (read/write) (page table entries)

specifies permissions for read or write operations on page
0 = read only, 1 = read and write

69
New cards

reference (page table entries)

whether page has been referenced in last clock cycle or not
set to 1 when the page is accessed

70
New cards

caching bit

enables or disables caching of the page
when fresh data is needed, caching must be disabled so as to avoid fetching stale data from the cache
when caching is disabled, the bit is set to 1

71
New cards

dirty (page table entries)

whether page has been modified or not
helps in avoiding unnecessary writes to the secondary memory when a page is being replaced by another page
1 = modified
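The fields described in the preceding cards can be pictured as bits packed into a single entry; the bit layout below is a hypothetical example (real layouts are hardware-specific):

```python
# Sketch of decoding the page-table-entry fields described above.
# The bit layout is a hypothetical example; real layouts vary by hardware.
def decode_pte(entry):
    return {
        "frame_number":   entry >> 5,         # remaining high bits
        "present":        (entry >> 4) & 1,   # 1 = page is in main memory
        "protection":     (entry >> 3) & 1,   # 0 = read only, 1 = read/write
        "referenced":     (entry >> 2) & 1,   # 1 = accessed in last clock cycle
        "cache_disabled": (entry >> 1) & 1,   # 1 = caching disabled for this page
        "dirty":          entry & 1,          # 1 = page has been modified
    }
```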

72
New cards

Address Translation with a 2-level Page Table

components: P1, P2, offset
- P1: gives index to first level page table
- P2: gives index to second level page table
- offset = offset
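A sketch of that split, assuming 10-bit P1, 10-bit P2, and a 12-bit offset (the classic 32-bit x86 layout; the widths are an assumption, not from the source):

```python
# Sketch of splitting a virtual address for a two-level page table.
# The field widths (10/10/12 bits) are an assumption.
P1_BITS, P2_BITS, OFFSET_BITS = 10, 10, 12

def split(virtual_address):
    offset = virtual_address & ((1 << OFFSET_BITS) - 1)          # low 12 bits
    p2 = (virtual_address >> OFFSET_BITS) & ((1 << P2_BITS) - 1) # middle 10 bits
    p1 = virtual_address >> (OFFSET_BITS + P2_BITS)              # high 10 bits
    return p1, p2, offset
```

P1 then selects an entry in the first-level table, which points at a second-level table that P2 indexes; the offset is appended unchanged to the resulting frame number.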

73
New cards

virtual memory

storage scheme where secondary memory can be treated as though it is part of main memory; large processes can be stored, and only the part that is actually needed for execution is loaded into actual main memory

74
New cards

benefits of virtual memory

1. no longer constrained by size of physical memory
2. more programs can be run at same time with no increase in response time or turnaround time
3. less I/O needed to load/swap each user program into memory, so each user program will run faster

75
New cards

demand paging

when pages are loaded only as they are needed. pages are only loaded when they are demanded during program execution. pages that are never accessed are thus never loaded into physical memory

76
New cards

when want to execute process (demand paging)

swap it into memory and use lazy swapper: only swap part that we need

77
New cards

Present/Absent/Valid/Invalid bit

set to valid: page is both legal and in memory
set to invalid: page is either not in logical address space of the process or is valid but currently on the disk

78
New cards

page fault

when page is not present in main memory/marked as invalid

79
New cards

if page fault occurs on the instruction fetch

can restart by fetching the instructions again

80
New cards

if page fault occurs while fetching an operand

must fetch and decode instructions again and fetch the operand

81
New cards

1. determine whether process is (procedure for handling page faults)

valid/invalid

82
New cards

2. if process invalid (procedure for handling page faults)

terminate process

83
New cards

3. valid but not yet brought in that page (procedure for handling page faults)

page it in

84
New cards

4. find free frame (procedure for handling page faults)

by taking one from free-frame list

85
New cards

5. schedule a disk operation to (procedure for handling page faults)

read desired page into newly allocated frame

86
New cards

6. when disk reads is complete (procedure for handling page faults)

we modify the internal table kept with the process and the page table to indicate that the page is now in memory

87
New cards

7. restart instruction (procedure for handling page faults)

that was interrupted by the trap

88
New cards

process can now access (procedure for handling page faults)

page as though it had always been in memory
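The page-fault handling steps above can be sketched as one handler; every name here is hypothetical:

```python
# Sketch of the page-fault handling steps above; all names are hypothetical.
class Disk:
    """Stub backing store: pretends to read a page into a frame."""
    def read_page(self, page_number, frame):
        pass  # a real kernel would schedule a disk I/O here

def handle_page_fault(page_number, valid_pages, page_table, free_frames, disk):
    # 1-2. An invalid reference terminates the process.
    if page_number not in valid_pages:
        raise RuntimeError("invalid reference: terminate process")
    # 3-4. Legal but not yet in memory: take a frame from the free-frame list.
    frame = free_frames.pop()
    # 5. Schedule a disk read of the desired page into that frame.
    disk.read_page(page_number, frame)
    # 6. Mark the page as present in the page table.
    page_table[page_number] = frame
    # 7. The trapped instruction is then restarted; the process can now
    #    access the page as though it had always been in memory.
    return frame
```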

89
New cards

if no page faults

effective access time will equal the memory access time

90
New cards

probability of page fault ( 0 <= p <= 1)

effective access time = ((1 - p) x ma) + (p x page fault time)

((1-p) x ma) : time spent for normal memory access

(p x page fault time): time spent for handling page fault
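The formula as a function; the access and fault-service times in the example are hypothetical values:

```python
# The effective-access-time formula above.
def effective_access_time(p, ma, page_fault_time):
    """p: page-fault probability (0 <= p <= 1); times in the same unit."""
    return ((1 - p) * ma) + (p * page_fault_time)
```

For example, with ma = 200 ns, a page-fault service time of 8,000,000 ns, and p = 0.001, the effective access time is about 8,199.8 ns, roughly 40 times the plain memory access time.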

91
New cards

page replacement

technique of swapping pages out of memory when there are no more free frames available, in order to make room for other pages which need to be loaded into physical memory

92
New cards

1. find the location of (page replacement)

page to be loaded on the disk

93
New cards

2. if there is a free frame (page replacement)

use that frame and load the page into that frame

94
New cards

3. No free frame (page replacement)

use a page replacement algorithm to select a victim frame

95
New cards

4. write a victim frame to disk and (page replacement)

change page and frame tables accordingly

96
New cards

5. restart (page replacement)

user process
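The replacement steps, together with the dirty-bit optimization of the cards that follow, can be sketched as follows; all names are hypothetical, and FIFO victim selection is chosen only for simplicity:

```python
# Sketch of frame acquisition during page replacement; names are hypothetical.
def get_frame(free_frames, frames_in_use, dirty, write_out):
    # 2. If there is a free frame, use it.
    if free_frames:
        return free_frames.pop()
    # 3. Otherwise, select a victim frame (FIFO choice here, for simplicity).
    victim = frames_in_use.pop(0)
    # 4. Write the victim to disk only if its dirty (modify) bit is set;
    #    an unmodified page need not be written back.
    if dirty.get(victim):
        write_out(victim)
    return victim
```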

97
New cards

when no frames are free

two page transfers required (one in and one out)
this doubles page-fault service time and increases the effective access time

98
New cards

can reduce the increase of effective access time by

making use of dirty bit

99
New cards

if modify bit is set

page has been modified and need to write that to disk

100
New cards

modify bit is not set

don't need to overwrite this page on the disk