Paging suffers from the problem of external fragmentation.
FALSE: Paging avoids external fragmentation and the need for compaction.
In the execution-time address-binding scheme, the logical and physical address spaces differ.
TRUE: "Thus, in the execution-time address-binding scheme, the logical and physical address spaces differ."
The Translation Look-aside Buffer (TLB) is a special, small, fast-lookup hardware cache used with page tables.
TRUE: "The standard solution to this problem is to use a special, small, fast-lookup hardware cache called a translation look-aside buffer (TLB). The TLB is associative, high-speed memory."
Segmentation allows the physical address space of a process to be noncontiguous.
TRUE: "Segmentation permits the physical address space of a process to be noncontiguous."
The base register holds the smallest legal physical memory address, and the limit register specifies the size of the range.
TRUE: "The base register holds the smallest legal physical memory address; the limit register specifies the size of the range."
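The range check these two registers enable can be sketched in a few lines of Python (illustrative only; the function name and example values are ours, not the textbook's):

```python
def is_legal(address: int, base: int, limit: int) -> bool:
    """A logical access is legal only when base <= address < base + limit;
    anything outside that range traps to the operating system."""
    return base <= address < base + limit

# With an arbitrary base of 300000 and limit of 120000,
# addresses 300000..419999 are legal and 420000 is not.
```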
The CPU can directly access both main memory and disk storage during program execution.
FALSE: "Main memory and the registers built into the processor itself are the only general-purpose storage that the CPU can access directly."
Dynamic loading requires special support from the operating system in order to function.
FALSE: "Dynamic loading does not require special support from the operating system. It is the responsibility of the users to design their programs to take advantage of such a method."
If the TLB does not support separate ASIDs, it can retain entries for multiple processes simultaneously across context switches without being flushed.
FALSE: "If the TLB does not support separate ASIDs, then every time a new page table is selected (for instance, with each context switch), the TLB must be flushed (or erased) to ensure that the next executing process does not use the wrong translation information."
In paging, physical memory is divided into fixed-sized blocks called frames, and logical memory is divided into blocks of the same size called pages.
TRUE: "The basic method for implementing paging involves breaking physical memory into fixed-sized blocks called frames and breaking logical memory into blocks of the same size called pages."
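The page/frame mapping can be illustrated with a short Python sketch (a hypothetical example; the 4096-location page size and the page-table contents are arbitrary choices of ours):

```python
PAGE_SIZE = 4096  # arbitrary page size for the illustration

def translate(logical_addr: int, page_table: dict[int, int]) -> int:
    """Split a logical address into (page, offset), look up the frame,
    and rebuild the physical address."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]          # raises KeyError for an unmapped page
    return frame * PAGE_SIZE + offset

# e.g. with page 0 mapped to frame 5, logical 10 becomes 5*4096 + 10
```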
In compile-time address binding, if the starting location of a process changes after compilation, the code does not need to be recompiled.
FALSE: "If, at some later time, the starting location changes, then it will be necessary to recompile this code."
Copy-on-write allows a parent and child process to initially share the same pages; a copy of a shared page is created only if either process writes to it.
TRUE: "Copy-on-write...works by allowing the parent and child processes initially to share the same pages. These shared pages are marked as copy-on-write pages, meaning that if either process writes to a shared page, a copy of the shared page is created."
The LRU page-replacement algorithm replaces the page that has not been used for the longest period of time.
TRUE: "LRU replacement associates with each page the time of that page's last use. When a page must be replaced, LRU chooses the page that has not been used for the longest period of time."
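A minimal LRU fault counter, sketched in Python (our own illustration, not an OS implementation), makes the "longest-unused page goes first" rule concrete:

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults under LRU with the given number of frames."""
    cache = OrderedDict()            # resident pages, least recently used first
    faults = 0
    for page in refs:
        if page in cache:
            cache.move_to_end(page)  # a hit refreshes the page's recency
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False)  # evict the least recently used page
            cache[page] = None
    return faults
```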
A process is considered to be thrashing when it is spending more time paging than executing.
TRUE: "This high paging activity is called thrashing. A process is thrashing if it is spending more time paging than executing."
Global page replacement allows a process to select a replacement frame only from its own set of allocated frames.
FALSE: "Global replacement allows a process to select a replacement frame from the set of all frames, even if that frame is currently allocated to some other process... Local replacement requires that each process select from only its own set of allocated frames."
Belady's anomaly refers to the phenomenon where, for some page-replacement algorithms, increasing the number of frames can result in more page faults.
TRUE: "Belady's anomaly: for some page-replacement algorithms, the page-fault rate may increase as the number of allocated frames increases."
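The anomaly is easy to reproduce with a small FIFO simulation (an illustrative sketch; the reference string is the classic one used to demonstrate the effect):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    queue, resident, faults = deque(), set(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(queue) == frames:
                resident.remove(queue.popleft())  # evict the oldest arrival
            queue.append(page)
            resident.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# More frames, yet MORE faults: 9 faults with 3 frames, 10 with 4.
```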
The working-set model uses a parameter Δ (delta) to define the working-set window — the set of pages in the most recent Δ page references.
TRUE: "The working-set model is based on the assumption of locality. This model uses a parameter, Δ, to define the working-set window. The idea is to examine the most recent Δ page references. The set of pages in the most recent Δ page references is the working set."
The optimal page-replacement algorithm is practical and widely used in real operating systems because it guarantees the lowest page-fault rate.
FALSE: "Unfortunately, the optimal page-replacement algorithm is difficult to implement, because it requires future knowledge of the reference string... As a result, the optimal algorithm is used mainly for comparison studies."
A page fault occurs when a process accesses a page that is marked as invalid in the page table.
TRUE: "Access to a page marked invalid causes a page fault."
With demand paging, all pages of a process must be loaded into physical memory before the process begins execution.
FALSE: "With demand-paged virtual memory, pages are loaded only when they are demanded during program execution. Pages that are never accessed are thus never loaded into physical memory."
Using a local replacement algorithm completely prevents the performance degradation caused by thrashing on the system as a whole.
FALSE: "We can limit the effects of thrashing by using a local replacement algorithm... However, the problem is not entirely solved. If processes are thrashing, they will be in the queue for the paging device most of the time... the effective access time will increase even for a process that is not thrashing."
A safe state guarantees that the system can allocate resources to each thread (up to its maximum) in some order and still avoid deadlock.
TRUE: The existence of a safe sequence makes a state 'safe.' A safe state is never a deadlocked state.
In livelock, threads are blocked (suspended) while waiting for each other, just as they are in deadlock.
FALSE: In livelock the threads are NOT blocked — they keep retrying. They aren't making progress, but they are still actively running.
The most common approach used by operating systems such as Linux and Windows to deal with deadlocks is to simply ignore the problem.
TRUE: Ignoring deadlock (the ostrich approach) is the default in most real operating systems because deadlocks occur infrequently and the cost of prevention/avoidance is high.
To prevent circular wait, a thread may request resource Rj after holding resource Ri only if F(Rj) < F(Ri), where F is the ordering function.
FALSE: The ordering requires F(Rj) > F(Ri) (increasing order), not F(Rj) < F(Ri) as the statement claims.
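The increasing-order rule can be sketched with ordinary Python locks (a hypothetical illustration; the ordering function F here is simply a dictionary we supply):

```python
import threading

def acquire_all(locks, F):
    """Acquire every lock in increasing F order. Because no thread ever
    requests a lower-numbered lock while holding a higher-numbered one,
    a circular wait cannot form."""
    for lock in sorted(locks, key=lambda l: F[l]):
        lock.acquire()

def release_all(locks):
    for lock in locks:
        lock.release()
```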
Deadlocks can be prevented by denying the mutual-exclusion condition, because all resources can be made sharable.
FALSE: Some resources (e.g., mutex locks) are intrinsically nonsharable, so denying mutual exclusion is not a general solution.
If a resource-allocation graph contains a cycle, a deadlock has definitely occurred, regardless of the number of instances per resource type.
FALSE: A cycle guarantees deadlock ONLY when every resource type has exactly one instance. With multiple instances, a cycle is necessary but not sufficient.
The no-preemption deadlock prevention protocol is generally applicable to resources such as CPU registers and database transactions, but not to mutex locks and semaphores.
TRUE: Preemption requires saving/restoring state — feasible for CPU registers and DB transactions, but not for mutex locks or semaphores.
Under the normal mode of operation, a thread must request a resource before using it and must release the resource after using it.
TRUE: The three-step sequence is: Request → Use → Release. This is the fundamental resource-use protocol.
A deadlock occurs when at least one thread in a set is blocked and cannot proceed, even if other threads in the set are still running.
FALSE: A set of threads is in a deadlocked state when every thread in the set is waiting for an event that can be caused only by another thread in the set.
Every unsafe state is a deadlocked state.
FALSE: Unsafe ≠ deadlock. An unsafe state may LEAD to deadlock, but a system can be unsafe without currently being deadlocked.
The Scenario:
Partitions: 100 MB, 170 MB, 40 MB, 205 MB, 300 MB, 185 MB (ordered P1 to P6)
Process Queue: 200 MB, 15 MB, 185 MB, 75 MB, 175 MB, 80 MB (ordered J1 to J6)
The Worst-Fit algorithm allocates the largest available hole. If you use Worst-Fit for the first process J1 (200 MB), it will be placed in P5 (300 MB). What is the main disadvantage of this specific move in this scenario?
A) It creates a tiny hole of 5 MB that is useless for most processes.
B) It uses up the only partition large enough to potentially hold a future 250 MB process.
C) It prevents J3 (185 MB) from being placed if P5 and P6 are already full.
D) It is slower to execute than First-Fit because it must always search the entire list.
B) It uses up the only partition large enough to potentially hold a future 250 MB process.
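A small worst-fit sketch (illustrative Python; the function name and the used-set bookkeeping are ours) confirms that J1 consumes P5:

```python
def worst_fit(partitions, used, size):
    """Return the index of the LARGEST unused partition that fits, else None."""
    free = [i for i, cap in enumerate(partitions) if i not in used and cap >= size]
    if not free:
        return None
    i = max(free, key=lambda i: partitions[i])
    used.add(i)
    return i

partitions = [100, 170, 40, 205, 300, 185]   # P1..P6 from the scenario
# J1 (200 MB) lands in index 4 (P5, 300 MB) -- the only partition that
# could also have held a hypothetical future 250 MB request.
```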
An operating system's deadlock-detection algorithm has just triggered, confirming that four processes are currently in a deadlocked state. According to the standard methods for recovery, which of the following approaches could the system take to break the deadlock?
A) Implementing a "Hold and Wait" policy to ensure no new resources are requested.
B) Forcing a context switch to a higher-priority process that is not part of the deadlock.
C) Aborting one or more processes involved in the circular wait or preempting resources from them.
D) Increasing the number of resource instances available until the cycle is naturally broken.
C) Aborting one or more processes involved in the circular wait or preempting resources from them.
The Scenario:
Partitions: 100 MB, 170 MB, 40 MB, 205 MB, 300 MB, 185 MB (ordered P1 to P6)
Process Queue: 200 MB, 15 MB, 185 MB, 75 MB, 175 MB, 80 MB (ordered J1 to J6)
In the First-Fit algorithm, the OS allocates the first hole that is big enough. After J1 (200 MB) is placed in P4 (205 MB) and J2 (15 MB) is placed in P1 (100 MB), where will J3 (185 MB) be placed?
A) P2 (170 MB)
B) P5 (300 MB)
C) P6 (185 MB)
D) The request cannot be satisfied.
B) P5 (300 MB)
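A first-fit sketch in Python (our own illustration, not from the text) reproduces the placements described in the question:

```python
def first_fit(partitions, used, size):
    """Return the first unused partition index large enough, else None."""
    for i, cap in enumerate(partitions):
        if i not in used and cap >= size:
            used.add(i)
            return i
    return None

partitions = [100, 170, 40, 205, 300, 185]   # P1..P6 from the scenario
# J1 (200) -> P4 (index 3), J2 (15) -> P1 (index 0), J3 (185) -> P5 (index 4)
```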
In a system resource-allocation graph, Thread T1 is currently waiting for an instance of Resource R1. However, all instances of R1 are currently being held by Thread T2.
How would the relationship from T1 to R1 be represented in the graph?
A) An assignment edge, denoted by R1 -> T1.
B) A request edge, denoted by T1 -> R1
C) A request edge, denoted by R1 -> T1.
D) An assignment edge, denoted by T1 -> R1.
B) A request edge, denoted by T1 -> R1
The Scenario:
Partitions: 100 MB, 170 MB, 40 MB, 205 MB, 300 MB, 185 MB (ordered P1 to P6)
Process Queue: 200 MB, 15 MB, 185 MB, 75 MB, 175 MB, 80 MB (ordered J1 to J6)
The Best-Fit algorithm allocates the smallest hole that is big enough, searching the entire list (unless ordered). Which of the following is a consequence of using Best-Fit for this specific sequence?
A) J1 (200 MB) is placed in P5, leaving a 100 MB hole.
B) J3 (185 MB) is placed perfectly into P6, leaving 0 MB of internal fragmentation for that partition.
C) Every single process in the queue (J1 through J6) is successfully allocated.
D) J5 (175 MB) fails to be allocated because all remaining holes are too small.
B) J3 (185 MB) is placed perfectly into P6, leaving 0 MB of internal fragmentation for that partition.
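The same scenario can be checked with a short best-fit sketch (illustrative Python; names are ours):

```python
def best_fit(partitions, used, size):
    """Return the index of the SMALLEST unused partition that fits, else None."""
    free = [i for i, cap in enumerate(partitions) if i not in used and cap >= size]
    if not free:
        return None
    i = min(free, key=lambda i: partitions[i])
    used.add(i)
    return i

partitions = [100, 170, 40, 205, 300, 185]   # P1..P6 from the scenario
# J1 (200) -> P4 (205), J2 (15) -> P3 (40), J3 (185) -> P6 (185, an exact fit)
```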
You have a two-dimensional Java array int[][] A = new int[100][100], where each integer takes up one memory location.
Base Address: A[0][0] is located at address 200.
Page Size: 200 locations per page.
Process: The code resides in Page 0 (addresses 0-199).
Frames: The system has 3 page frames.
Frame 1 is occupied by the process (Page 0).
Frames 2 and 3 are initially empty.
Replacement: LRU (Least Recently Used) is used for the two available frames.
The Question : How does the choice of nested loop structure affect the number of page faults generated during array initialization?
I. Loop A (Column-Major):
for (int j = 0; j < 100; j++) for (int i = 0; i < 100; i++) A[i][j] = 0;
II. Loop B (Row-Major):
for (int i = 0; i < 100; i++) for (int j = 0; j < 100; j++) A[i][j] = 0;
Which of the following is true regarding the page faults generated?
B) Loop A generates 5,000 page faults (one for every two accesses, since each 200-location page holds two 100-element rows), while Loop B generates only 50 page faults.
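Because each 200-location page holds two 100-element rows, a column-major sweep faults on every second access, while a row-major sweep faults once per page. A short LRU simulation of the stated setup (illustrative Python; names are ours) counts the faults for each loop order:

```python
from collections import OrderedDict

PAGE_SIZE, BASE, N = 200, 200, 100        # parameters given in the question

def faults(accesses, frames=2):
    """Count LRU page faults for the data pages touched by A[i][j] accesses."""
    cache, count = OrderedDict(), 0
    for i, j in accesses:
        page = (BASE + N * i + j) // PAGE_SIZE   # row-major storage of A
        if page in cache:
            cache.move_to_end(page)              # refresh recency on a hit
        else:
            count += 1
            if len(cache) == frames:
                cache.popitem(last=False)        # evict least recently used
            cache[page] = None
    return count

loop_a = [(i, j) for j in range(N) for i in range(N)]   # column-major
loop_b = [(i, j) for i in range(N) for j in range(N)]   # row-major
```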
A system is using a resource-allocation graph to track state. Thread T3 requests an instance of resource type R2. At that exact moment, one instance of R2 is available. The operating system grants the request.
According to the rules of resource-allocation graphs, what happens to the graph immediately after the request is granted?
A) The request edge T3 -> R2 is deleted, and no new edge is created until the thread finishes.
B) A new assignment edge T3 -> R2 is added alongside the existing request edge.
C) The request edge T3 -> R2 is instantaneously transformed into an assignment edge R2 -> T3.
D) The dots (instances) inside the R2 rectangle are removed to show the resource is gone.
C) The request edge T3 -> R2 is instantaneously transformed into an assignment edge R2 -> T3.
An operating system designer is tasked with implementing a deadlock prevention strategy. They review the four necessary conditions for deadlock: Mutual Exclusion, Hold and Wait, No Preemption, and Circular Wait.
The designer concludes that while multiple conditions can technically be attacked, eliminating the circular wait is the most practical approach for a general-purpose system. Which of the following best explains why the other three conditions are often considered "impractical" to eliminate?
A) Eliminating Mutual Exclusion is impossible because some resources (like a printer or a mutex) are inherently non-sharable by nature.
B) Eliminating Hold and Wait requires processes to request all resources at once, which leads to low resource utilization and possible starvation.
C) Eliminating No Preemption is difficult because the state of some resources (like a partially updated database) cannot easily be "saved and restored" later.
D) All of the above.
D) All of the above.
You are analyzing a process that has m available frames in physical memory. The process executes a sequence of memory accesses defined by a page-reference string of length p, which contains n distinct page numbers. Consider the worst-case scenario where the page-replacement algorithm makes the least efficient decisions possible.
What is the absolute upper bound on the number of page faults for this process?
A) n
B) p
C) p - m
D) n x m
B) p
A system administrator is evaluating different page-replacement algorithms to minimize page faults. They are specifically looking for "stack algorithms" that are immune to Belady's Anomaly (where increasing the number of frames actually increases the number of page faults).
Which of the following correctly ranks the algorithms from highest page-fault rate ("Bad") to lowest ("Perfect") and accurately identifies their susceptibility to Belady's Anomaly?
A) Ranking: FIFO → Second-Chance → LRU → Optimal
Belady's Anomaly: Only FIFO and Second-Chance suffer from it
B) Ranking: Optimal → LRU → Second-Chance → FIFO
Belady's Anomaly: All four algorithms can suffer from it depending on the reference string.
C) Ranking: Second-Chance → FIFO → LRU → Optimal
Belady's Anomaly: Only LRU is immune because it is a stack algorithm.
D) Ranking: FIFO → LRU → Second-Chance → Optimal
Belady's Anomaly: Only Optimal and LRU suffer from it
A) Ranking: FIFO → Second-Chance → LRU → Optimal
Belady's Anomaly: Only FIFO and Second-Chance suffer from it.
You are analyzing a process that has m available frames in physical memory. The process executes a sequence of memory accesses defined by a page-reference string of length p, which contains n distinct page numbers.
Regardless of which page-replacement algorithm you choose (FIFO, LRU, or Optimal), what is the absolute lower bound on the number of page faults that must occur?
A) 0
B) m
C) n
D) p
C) n