Memory management
during execution, a program and the data it accesses must be present in main memory
Address Binding
processes on the disk that are waiting to be brought into memory for execution form the input queue
input queue → select a process → loads into memory → execute and accesses instructions and data from memory → process terminates → memory space is declared available
Most systems allow a user process
to reside in any part of the physical memory
in most cases a user program will go through
several steps (at compile time, load time, and execution time) before being executed
*addresses may be represented in different ways during these steps
source program
addresses are generally symbolic
compiler
typically binds symbolic addresses to relocatable addresses
Linkage editor or loader
binds the relocatable addresses to absolute addresses
Binding
each binding is a mapping from one address space to another
Binding instructions during compile time
if we know at compile time where the process will reside in memory then absolute code can be generated
Binding instructions at load time
if not known at compile time, compiler must generate relocatable code
Binding instructions at execution time
if process can be moved during its execution from one memory segment to another, then binding must be delayed until run time
logical address
address generated by CPU
physical address
address seen by the memory unit, that is, the one loaded into the memory-address register of the memory
compile time and load time address binding methods generate
identical logical and physical addresses
execution time binding scheme generates
different logical and physical addresses
logical addresses generated at execution time are usually referred to as virtual addresses
logical address space
set of all logical addresses generated by a program
physical address space
set of all physical addresses corresponding to these logical addresses
memory management unit
performs the run-time mapping from virtual to physical addresses
dynamic relocation
hardware first compares the virtual address with the limit register to make sure it is in range, raising an exception if it is not, then adds the relocation register (base) to it to obtain the physical address
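A minimal sketch of dynamic relocation with base and limit registers; the register values here are illustrative assumptions, not taken from the notes.

```python
RELOCATION_REGISTER = 14000   # base: where the process starts in physical memory (assumed)
LIMIT_REGISTER = 3000         # size of the process's logical address space (assumed)

def translate(virtual_address):
    """Map a virtual address to a physical one, trapping if out of range."""
    if not (0 <= virtual_address < LIMIT_REGISTER):
        raise MemoryError(f"addressing error: {virtual_address} out of range")
    return virtual_address + RELOCATION_REGISTER

print(translate(100))   # 14100
```

Note that the limit check is applied to the virtual address before relocation, so the process can never address memory outside its own partition.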
virtual address
An address that corresponds to a location in virtual space and is translated by address mapping to a physical address when memory is accessed.
is divided into 2 parts: page # and offset
page number
used to index page table
equals virtual address divided by page size
offset
equals virtual address mod the page size
page table entry gives
the frame number for the page
frame number is combined with offset to obtain
physical address
(frame number * frame size) + offset = physical address
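The page-number/offset arithmetic above can be worked through in a short sketch; the page size (4096 bytes) and the page-table contents are assumptions for illustration.

```python
PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 7}   # page number -> frame number (hypothetical)

def to_physical(virtual_address):
    page_number = virtual_address // PAGE_SIZE   # virtual address / page size
    offset = virtual_address % PAGE_SIZE         # virtual address mod page size
    frame_number = page_table[page_number]       # page table gives the frame
    return frame_number * PAGE_SIZE + offset     # (frame# * frame size) + offset

print(to_physical(4100))   # page 1, offset 4 -> frame 2 -> 8196
```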
Dynamic loading
routine is not loaded until it is called
the size of a process is limited by the size of physical memory, so dynamic loading helps
obtain better memory utilization
all routines are kept on a disk (dynamic loading)
in a relocatable load form
main program loaded into (dynamic loading)
memory and executed
when routine needs to call another routine (dynamic loading)
calling routine checks to see if called routine has been loaded
if called routine has not been loaded (dynamic loading)
the relocatable linking loader is called to load the desired routine into memory and update the program's address tables to reflect the change; control is then passed to the newly loaded routine
dynamic loading advantages
unused routine never loaded
swapping
process can temporarily be swapped out of memory to a backing store and brought back into memory for continued execution
memory space where swapping happens
swapped back into the same memory space it was initially occupying, BUT this depends on the address-binding method
*if binding is done at assembly or load time, the process cannot easily be moved to a different location
if binding is done at execution time, the process can be swapped into a different memory space, since physical addresses are computed at execution time
backing store
fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images
the system maintains a ready queue consisting of all processes whose memory images are on the backing store
dispatcher
checks whether next process in queue is in memory and swaps in desired process
whenever cpu scheduler decides to execute a process
calls the dispatcher
swap time
the time it takes to swap a process; part of context-switch time and fairly high
execution time should be higher than swap time
major part of swap time is transfer time
total transfer time is directly proportional to the amount of memory swapped
factors that affect swapping
process must be completely idle in order to be swapped
process may be waiting for an I/O operation
if I/O is asynchronously accessing the user memory for I/O buffers, then process cannot be swapped
divide memory into (memory allocation)
several fixed sized partitions
each partition may contain (memory allocation)
exactly one process
when a partition is free (memory allocation)
a process is selected from input queue and is loaded into the free partition
when process terminates (memory allocation)
partition becomes available for another process
processes in ready queue (memory allocation)
require different amounts of memory (partitions)
if partitions are too big or small (memory allocation)
merge or split contiguous free partitions to accommodate a process
dynamic storage allocation problem
how to satisfy a request of size n from a list of free partitions in main memory
dynamic storage allocation problem solution
1. first fit
2. best fit
3. worst fit
first fit (dynamic storage allocation problem solution)
1. allocate first hole that is big enough
2. searching can start at beginning or where previous first-fit search ended
3. stop when find hole that is big enough
best-fit (dynamic storage allocation problem solution)
1. allocate smallest hole that is big enough
2. must search entire list, unless the list is ordered by size
3. this strategy produces smallest leftover hole
worst-fit (dynamic storage allocation problem solution)
1. allocate largest hole
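The three placement strategies can be sketched over a list of free holes; the hole sizes and the request size are illustrative assumptions.

```python
def first_fit(holes, n):
    """Return the index of the first hole big enough, scanning from the start."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Return the index of the smallest hole big enough (smallest leftover)."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, n):
    """Return the index of the largest hole."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))   # 1: the 500 hole is the first big enough
print(best_fit(holes, 212))    # 3: the 300 hole leaves the smallest leftover
print(worst_fit(holes, 212))   # 4: the 600 hole is the largest
```

Without an ordered list, best-fit and worst-fit must examine every hole, while first-fit can stop at the first match.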
fragmentation
as processes are loaded and removed from memory, the free memory space is broken into little pieces which results in fragmentation
External Fragmentation
enough total memory to satisfy request, but available spaces are not contiguous
storage is fragmented into a large number of small holes
can be severe in worst case
Internal Fragmentation
occurs when memory blocks assigned to process are bigger than what the process actually needs
some portion of memory is left unused and cannot be used by another process
solution: use best-fit approach
compaction (solution to external fragmentation)
shuffle the memory contents so as to place all free memory together in one large block
problems with compaction
not always possible
if relocation is static and done at assembly or load time, compaction cannot be done
compaction is possible only if relocation is dynamic and done at execution time
Paging (solution to external fragmentation)
memory management scheme that permits the physical address space of a process to be non-contiguous
avoids problem of fitting memory chunks of varying sizes onto the backing store
break physical memory into (paging)
fixed-sized blocks called frames
break logical memory into (paging)
blocks of same size called pages
when process is to be executed (paging)
pages loaded into any available memory frames from backing store
backing store divided into (paging)
fixed-sized blocks that are of the same size as the memory frames
frames and pages are
same size
every address in cpu is divided into
1. page number (p): used as an index into page table
2. page offset (d): displacement within the page
page table
contains the base address of each page in physical memory
base address is combined with
page offset to define the physical memory address
multilevel paging
divide the page table into smaller pieces and use a two-level paging algorithm in which the page table itself is also paged
Page Table Entries
contains the frame number and the present/absent, protection, reference, caching, and dirty bits
frame number (page table entries)
denotes frame where page is present in main memory
number of bits depends on the number of frames in the main memory
present/absent or valid/invalid (page table entries)
whether the page is present in main memory or not
protection (read/write) (page table entries)
specifies permissions for read or write operations on page
0 = read only, 1 = read and write
reference (page table entries)
whether page has been referenced in last clock cycle or not
set to 1 when the page is accessed
caching bit
enables/disables caching of the page
when fresh data is needed, caching must be disabled so as to avoid fetching stale data from the cache
when caching is disabled, the bit is set to 1
dirty (page table entries)
whether page has been modified or not
helps in avoiding unnecessary writes to the secondary memory when a page is being replaced by another page
1 = modified
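The entry fields above can be sketched as bit flags packed into a single integer; the bit positions and widths here are assumptions for illustration, since real hardware defines its own layout.

```python
PRESENT   = 1 << 0   # present/absent (valid/invalid)
PROTECT   = 1 << 1   # 0 = read only, 1 = read/write
REFERENCE = 1 << 2   # set when the page is accessed
NO_CACHE  = 1 << 3   # set when caching is disabled
DIRTY     = 1 << 4   # set when the page has been modified
FRAME_SHIFT = 5      # frame number stored in the remaining high bits (assumed layout)

def make_pte(frame_number, flags=0):
    """Pack a frame number and flag bits into one page-table entry."""
    return (frame_number << FRAME_SHIFT) | flags

def frame_of(pte):
    """Extract the frame number from a packed entry."""
    return pte >> FRAME_SHIFT

pte = make_pte(42, PRESENT | DIRTY)
print(frame_of(pte))       # 42
print(bool(pte & DIRTY))   # True
```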
Address Translation with a 2-level Page Table
components: P1, P2, offset
- P1: gives index to first level page table
- P2: gives index to second level page table
- offset = offset
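The P1/P2/offset translation can be sketched as two table lookups; the bit widths and the table contents are assumptions for illustration.

```python
OFFSET_BITS = 12   # 4 KiB pages (assumed)
P2_BITS = 10       # width of the second-level index (assumed)

# hypothetical tables: P1 = 3 selects an inner table, where P2 = 7 maps to frame 99
outer_table = {3: {7: 99}}

def translate_two_level(virtual_address):
    offset = virtual_address & ((1 << OFFSET_BITS) - 1)            # offset = offset
    p2 = (virtual_address >> OFFSET_BITS) & ((1 << P2_BITS) - 1)   # second-level index
    p1 = virtual_address >> (OFFSET_BITS + P2_BITS)                # first-level index
    frame = outer_table[p1][p2]    # two lookups instead of one
    return (frame << OFFSET_BITS) | offset

va = (3 << (OFFSET_BITS + P2_BITS)) | (7 << OFFSET_BITS) | 5
print(translate_two_level(va))   # (99 << 12) | 5 = 405509
```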
virtual memory
storage scheme in which secondary memory can be treated as though it were part of main memory, so large processes can be stored; only the part that is actually needed for execution is loaded into the actual main memory
benefits of virtual memory
1. no longer constrained by size of physical memory
2. more programs can be run at same time with no increase in response time or turnaround time
3. less I/O needed to load/swap each user program into memory, so each user program will run faster
demand paging
pages are loaded only as they are needed, i.e., only when they are demanded during program execution. pages that are never accessed are thus never loaded into physical memory
when want to execute process (demand paging)
swap it into memory using a lazy swapper: swap in only the parts that we need
Present/Absent/Valid/Invalid bit
set to valid: page is both legal and in memory
set to invalid: page is either not in logical address space of the process or is valid but currently on the disk
page fault
when page is not present in main memory/marked as invalid
if page fault occurs on the instruction fetch
can restart by fetching the instruction again
if page fault occurs while fetching an operand
must fetch and decode the instruction again, then fetch the operand
1. determine whether the reference is (procedure for handling page faults)
valid or invalid
2. if the reference is invalid (procedure for handling page faults)
terminate the process
3. if the reference is valid but the page has not yet been brought in (procedure for handling page faults)
page it in
4. find free frame (procedure for handling page faults)
by taking one from free-frame list
5. schedule a disk operation to (procedure for handling page faults)
read desired page into newly allocated frame
6. when the disk read is complete (procedure for handling page faults)
we modify the internal table kept with the process and the page table to indicate that the page is now in memory
7. restart instruction (procedure for handling page faults)
that was interrupted by the trap
process can now access (procedure for handling page faults)
page as though it had always been in memory
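The fault-handling steps above can be sketched as follows; the valid-page set, free-frame list, and simulated disk read are illustrative stand-ins.

```python
valid_pages = {0, 1, 2, 3}   # pages in the process's logical address space (assumed)
page_table = {}              # page number -> frame number (empty: everything on disk)
free_frames = [8, 9]         # free-frame list (assumed)

def handle_page_fault(page):
    if page not in valid_pages:
        raise RuntimeError("invalid reference: terminate process")   # step 2
    frame = free_frames.pop()                                        # step 4: find a free frame
    # step 5: schedule a disk read of `page` into `frame` (simulated here)
    page_table[page] = frame                                         # step 6: update the tables
    return frame   # step 7: restart the faulting instruction

handle_page_fault(1)
print(page_table)   # {1: 9}
```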
if no page faults occur
effective access time will equal the Memory Access time
probability of page fault ( 0 <= p <= 1)
effective access time = ((1-p) x ma)+ (p x page fault time)
((1-p) x ma) : time spent for normal memory access
(p x page fault time): time spent for handling page fault
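The formula above can be checked with concrete numbers; the memory-access time (200 ns) and page-fault service time (8 ms) are assumed values, not from the notes.

```python
def effective_access_time(p, ma_ns, fault_ns):
    """((1 - p) x ma) + (p x page fault time), all in nanoseconds."""
    return (1 - p) * ma_ns + p * fault_ns

ma = 200            # normal memory access: 200 ns (assumed)
fault = 8_000_000   # page-fault service time: 8 ms in ns (assumed)
print(effective_access_time(0.0, ma, fault))     # 200.0: no faults, EAT = ma
print(effective_access_time(0.001, ma, fault))   # 8199.8: 1 fault per 1000 accesses
```

Even one fault in a thousand accesses inflates the effective access time by a factor of about 40, which is why the page-fault rate must be kept very low.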
page replacement
technique of swapping pages out of memory when there are no more free frames available, in order to make room for other pages which need to be loaded into physical memory
1. find the location of (page replacement)
the desired page on the disk
2. if there is a free frame (page replacement)
use that frame and load the page into that frame
3. No free frame (page replacement)
use a page replacement algorithm to select a victim frame
4. write the victim frame to disk and (page replacement)
change page and frame tables accordingly
5. restart (page replacement)
user process
when no frames are free
two page transfers required (one in and one out)
this doubles page-fault service time and increases the effective access time
can reduce the increase of effective access time by
making use of dirty bit
if modify bit is set
page has been modified and need to write that to disk
if modify bit is not set
don't need to overwrite this page on the disk