T2

40 Terms

1
New cards

What is the best-fit strategy for memory allocation?

  • To improve CPU utilization by allowing multiple programs to reside in memory

  • Find the smallest free chunk that fits the request

  • To enable protection between processes

  • To support interactivity for programmers

Find the smallest free chunk that fits the request
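
For illustration only (not part of the card set), a minimal C sketch of a best-fit scan over a free list; the node layout and the names free_node, free_head, and best_fit are assumptions made for the example. Worst-fit and first-fit differ only in the selection rule applied during the same walk.

    /* Best-fit sketch: walk the free list and remember the smallest
       chunk that is still large enough to satisfy the request. */
    #include <stddef.h>

    struct free_node {
        size_t size;                 /* bytes available in this chunk */
        struct free_node *next;
    };

    static struct free_node *free_head;      /* head of the free list */

    struct free_node *best_fit(size_t request) {
        struct free_node *best = NULL;
        for (struct free_node *cur = free_head; cur != NULL; cur = cur->next) {
            if (cur->size >= request &&
                (best == NULL || cur->size < best->size))
                best = cur;          /* smallest fitting chunk seen so far */
        }
        return best;                 /* NULL if nothing fits */
    }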

2
New cards

What is the worst-fit strategy for memory allocation?

  • Find the largest free chunk and split it

  • The running program's view of memory

  • The hardware's view of memory

  • The operating system's view of memory

Find the largest free chunk and split it

3
New cards

What is the advantage of the first-fit strategy for memory allocation?

  • To allow them to grow in opposite directions

  • To improve performance

  • To simplify memory management

  • It has low search overhead to find a free chunk

It has low search overhead to find a free chunk

4
New cards

In the next-fit strategy, what does the allocator keep track of?

  • To provide a private, potentially large address space for each process

  • The location where it last searched for a free chunk

  • To implement memory protection

  • To improve CPU utilization

The location where it last searched for a free chunk
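
As a hedged sketch of the same idea (the names free_node, free_head, and last_pos are illustrative), next-fit resumes the scan where the previous search left off instead of rewinding to the head of the list:

    /* Next-fit sketch: keep a cursor into the free list and start each
       search from it, wrapping around to the head at the end of the list. */
    #include <stddef.h>

    struct free_node {
        size_t size;
        struct free_node *next;
    };

    static struct free_node *free_head;   /* head of the free list */
    static struct free_node *last_pos;    /* where the previous search stopped */

    struct free_node *next_fit(size_t request) {
        struct free_node *start = last_pos ? last_pos : free_head;
        struct free_node *cur = start;
        if (cur == NULL)
            return NULL;
        do {
            if (cur->size >= request) {
                last_pos = cur->next ? cur->next : free_head;  /* remember spot */
                return cur;
            }
            cur = cur->next ? cur->next : free_head;           /* wrap around */
        } while (cur != start);
        return NULL;                                           /* nothing fits */
    }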

5
New cards

What is the purpose of the magic number stored in the header before an allocated chunk?

  • How to provide protection between processes

  • How to efficiently and flexibly virtualize memory

  • How to implement virtual machines on top of physical hardware

  • To provide integrity checking for the allocated chunk

To provide integrity checking for the allocated chunk
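
A small C sketch of that header idea (the struct layout and MAGIC value are illustrative, in the style of the OSTEP allocator discussion): the allocator writes the chunk size and a magic number just before the pointer it returns, so free() can step back to recover the size and sanity-check the chunk.

    /* Header sketch: metadata sits immediately before the pointer handed to
       the caller; free() walks back one header to find the size and magic. */
    #include <assert.h>
    #include <stddef.h>

    #define MAGIC 0x1234567

    typedef struct {
        size_t size;   /* size of the allocated region */
        int magic;     /* integrity check */
    } header_t;

    /* Given the pointer the caller received, recover the header and
       verify it has not been corrupted. */
    size_t chunk_size(void *ptr) {
        header_t *hptr = (header_t *)ptr - 1;   /* header is just before ptr */
        assert(hptr->magic == MAGIC);           /* integrity check */
        return hptr->size;
    }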

6
New cards

What is a potential disadvantage of the best-fit strategy for heap memory allocation?

  • It may lead to external fragmentation

  • It has high search overhead to find the best-fit chunk

  • It does not support coalescing of free chunks

  • It escalates memory between two sized chunks

It has high search overhead to find the best-fit chunk

7
New cards

In the first-fit strategy, how is the search for a free chunk typically performed?

  • By searching the free list from the beginning

  • By searching the free list from the last satisfied location

  • By using a tree data structure to find the best-fit chunk

  • By sorting the free list by address or size

By searching the free list from the beginning

8
New cards

Which of the following is not a goal of a virtual memory system?

Transparency

Efficiency

Protection

User Control

User Control

9
New cards

What does transparency in virtual memory refer to?

Making the virtualization visible to the running program

Making the virtualization invisible to the running program

Implementing protection between processes

Increasing the efficiency of virtualization

Making the virtualization invisible to the running program

10
New cards

What is the purpose of the heap in the address space?

To store program code

To store local variables and function call information

To store dynamically allocated memory

To store operating system data

To store dynamically allocated memory.

11
New cards

Which of the following is not a component of a process's address space?

Code

Stack

Heap

Cache

Cache

12
New cards

What is the purpose of the stack in the address space?

To store program code

To store dynamically allocated memory

To store local variables and function call information

To store operating system data

To store local variables and function call information

13
New cards

Why must the ability to update base/bounds registers be a privileged operation?

To prevent processes from allocating too much memory

To prevent processes from modifying other processes' memory space

To increase performance by offloading translation to kernel

To prevent fragmentation

To prevent processes from modifying other processes' memory space

14
New cards

Where is the size of an allocated memory block typically stored?

In a separate lookup table

In metadata just before the returned memory pointer

In metadata after the allocated block

In a global variable

In metadata just before the returned memory pointer

15
New cards

Which protection configuration would be typical for a code segment?

Read-Execute

Read-Write

Read-Write-Execute

Write-Execute

Read-Execute

16
New cards

How many TLB misses are incurred for the first 5 iterations of the following loop of memory accesses?

This loop has never run before.

• The TLB can store 16,536 entries.

• Page size is 1k (1024 bytes)

int a[1024];
for (int i = 0; i < 1024; i++) {
    count += a[i];
}

5

17
New cards

What is the purpose of the valid bit in a PTE?

A. To support sparse address spaces

B. To indicate if the page content is up to date

C. To save memory by not allocating frames for unused pages

D. Both A and C

Both A and C

18
New cards

Which of the following is NOT a problem that paging tries to solve?

External fragmentation

Internal fragmentation

Sparse address space usage

All of the above are problems that paging tries to solve

Internal fragmentation

19
New cards

Which of the following is NOT a typical step in the address translation process with paging?

Extract the VPN from the virtual address

Use the VPN to index into the page table and retrieve the PTE

Check the valid and protection bits in the PTE

Concatenate the VPN with the offset to form the physical address

Concatenate the VPN with the offset to form the physical address
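
To make the real steps concrete, a hedged C sketch assuming 1 KB pages and 32-bit addresses; pte_lookup() is a hypothetical stand-in for the page-table walk (an identity mapping here so the sketch compiles). The offset passes through unchanged, which is why concatenating the VPN with the offset is not one of the steps.

    /* Translation sketch: VPN indexes the page table, offset passes through. */
    #include <stdint.h>

    #define OFFSET_BITS 10u                        /* log2 of the 1024-byte page */
    #define PAGE_SIZE   (1u << OFFSET_BITS)

    /* Hypothetical page-table lookup; identity mapping used as a stub. */
    static uint32_t pte_lookup(uint32_t vpn) { return vpn; }

    uint32_t translate(uint32_t vaddr) {
        uint32_t vpn    = vaddr >> OFFSET_BITS;    /* 1. extract the VPN */
        uint32_t offset = vaddr & (PAGE_SIZE - 1); /* 2. extract the offset */
        uint32_t pfn    = pte_lookup(vpn);         /* 3. look up the PFN (real code
                                                         also checks the valid and
                                                         protection bits) */
        return (pfn << OFFSET_BITS) | offset;      /* 4. PFN + offset = physical addr */
    }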

20
New cards

Which of the following is NOT a typical use for the accessed or reference bit in a PTE?

To track which pages are popular and should be kept in memory

To indicate if a page has been swapped out to disk

To assist in page replacement decisions

All of the above are typical uses for the accessed bit

To indicate if a page has been swapped out to disk

21
New cards

What is the principle of temporal locality in virtual memory systems?

Programs tend to access certain pages frequently

Programs tend to access memory randomly

Programs tend to access memory in close proximity

Programs tend to access different parts of memory at different times

Programs tend to access certain pages frequently

22
New cards

What is the purpose of the use bit (or reference bit) in a page table entry?

To indicate if the page has been modified

To indicate if the page is currently in physical memory

To track whether the page has been recently accessed

To indicate the access privileges for the page

To track whether the page has been recently accessed

23
New cards

What is the purpose of the clock algorithm in page replacement?

To approximate a stack algorithm

To approximate the Least Recently Used (LRU) policy

To implement the FIFO policy

To implement the Random policy

To approximate the Least Recently Used (LRU) policy
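
A compact C sketch of the clock mechanism (the frame count, array name, and hand variable are assumptions for the example):

    /* Clock sketch: sweep frames in a circle; a set use bit buys the page a
       second chance (bit is cleared), the first clear use bit marks the victim. */
    #include <stddef.h>

    #define NFRAMES 64

    static int use_bit[NFRAMES];    /* set on each access to the frame's page */
    static size_t hand;             /* the clock hand */

    size_t pick_victim(void) {
        for (;;) {
            if (use_bit[hand] == 0) {              /* not recently used: evict */
                size_t victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            use_bit[hand] = 0;                     /* give a second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }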

24
New cards

What is the purpose of the modified bit (or dirty bit) in a page table entry?

To indicate if the page has been modified while in memory

To track whether the page has been recently accessed

To indicate if the page is currently in physical memory

To indicate whether the virtual address is valid

To indicate if the page has been modified while in memory

25
New cards

What is the main difference between demand paging and pre-paging?

Pre-paging loads pages before they are requested

Pre-paging uses less memory

Pre-paging is always more efficient

Pre-paging only works with a TLB

Pre-paging loads pages before they are requested

26
New cards

What is thrashing in virtual memory systems?

A situation where the system constantly pages due to memory oversubscription, severely impacting overall performance

A situation where the system runs out of swap space, causing a system crash

A situation where the system experiences a high rate of TLB misses, severely impacting overall system performance

A situation where the system runs out of physical memory causing a system crash

A situation where the system constantly pages due to memory oversubscription, severely impacting overall performance

27
New cards

What is the purpose of the out-of-memory killer in some operating systems?

To prevent thrashing by killing memory-intensive processes.

To decide which pages to replace

To decide when to write pages out to disk

To decide when to prefetch pages into memory

To prevent thrashing by killing memory-intensive processes.

28
New cards

What is Belady's anomaly?

A situation where increasing the cache size decreases the hit rate for certain policies

A situation where the optimal policy performs worse than other policies

A situation where thrashing occurs

A situation where the system runs out of swap space

A situation where increasing the cache size decreases the hit rate for certain policies
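
A commonly cited illustration, assuming FIFO replacement (which lacks the stack property): for the reference string 1 2 3 4 1 2 5 1 2 3 4 5, FIFO incurs 9 page faults with 3 frames but 10 page faults with 4 frames.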

29
New cards

What is the stack property in page replacement policies?

When the set of pages in memory with n frames is always a subset of the pages that would be in memory within n+1 frames for the same reference string

A property that ensures the hit rate improves or stays the same when the cache size increases

When the set of pages in memory with n frames is not correlated with the pages that would be in memory within n+1 frames for the same reference string

When the set of pages in memory with n frames is always a subset of the pages that would be in memory within n+1 frames for the same reference string.

30
New cards

What metrics can be used to evaluate TLB performance?

TLB Hit Rate, TLB Miss Rate

TLB Miss Rate, Spatial Locality

TLB Hit Rate, Temporal Locality

Spatial Locality, Temporal Locality

TLB Hit Rate, TLB Miss Rate

31
New cards

What would be an accurate alternative acronym for the TLB?

PFC - Physical Frame Cache

MEME - Memory Extensions for Mapping Exceptions

ATC - Address Translation Cache

DSC - Disk Swapping Cache

ATC - Address Translation Cache

32
New cards

What is the primary advantage of a software-managed TLB over a hardware-managed TLB?

Higher performance

Less memory usage

Greater flexibility in replacement policies

Simpler implementation

Greater flexibility in replacement policies

33
New cards

Which of the following is NOT a common TLB replacement policy?

Least Recently Used (LRU)

First-In First-Out (FIFO)

Random

Most Recently Used (MRU)

Most Recently Used (MRU)

34
New cards

Which of the following is NOT a reason for the success of TLBs?

They take advantage of spatial locality

They take advantage of temporal locality

They reduce the need for page table accesses

They eliminate the need for virtual memory

They eliminate the need for virtual memory

35
New cards

Assuming an empty cache, the first memory requests are always

Conflict misses

Capacity misses

Compulsory misses

Coherence misses

Compulsory misses

36
New cards

What is the purpose of the "offset" field in a virtual address?

To identify the virtual page number

To identify the physical frame number

To identify the byte within the page

To identify the process identifier

To identify the byte within the page

37
New cards

Define thrashing in virtual memory systems and explain its consequences (4 pts).

38
New cards

Briefly describe the roles of the following bits in a PTE (1 pt ea.): protection bits, present bit, dirty bit, accessed/reference bit

39
New cards

Given the following reference string and 3 frames of memory, how many page faults are incurred using the optimal (MIN) replacement algorithm (2 pts)?

0 1 2 3 4 0 1 2 3 4 3 2 1 1 0

9
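
One way to arrive at 9, assuming the intended policy is optimal (MIN) replacement with 3 frames: the first five references 0 1 2 3 4 all fault; 0 and 1 then hit because they were kept in preference to 2 and 3; the returning 2 and 3 fault; 4, 3, and 2 hit; and the later 1 and the final 0 each fault once more, giving 9 faults over 15 references.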

40
New cards

In a 32-bit virtual address space with 1024 byte page size, find (1 pt ea.):

VPN address space size (in bits): 22

Offset address space size (in bits): 10

Total number of pages: 4194304

Total size of memory in bytes: 4294967296
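
The arithmetic behind these answers: a 1024-byte page needs log2(1024) = 10 offset bits; the remaining 32 - 10 = 22 bits form the VPN; there are 2^22 = 4,194,304 virtual pages; and the full 32-bit space covers 2^32 = 4,294,967,296 bytes.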