Why is linked list allocation bad for random access
Must traverse pointers from beginning to reach a block
Why is linked list allocation unreliable
Broken pointer loses rest of file
Why is indexed allocation better
Direct access to blocks through index
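The contrast above can be sketched with dictionaries standing in for disk blocks (a hypothetical model, not any real file system's layout): linked allocation must chase one pointer per block, while an index block maps a logical block number straight to its location.

```python
def linked_read(blocks, next_ptr, start, n):
    # Reaching logical block n means following n pointers from the start.
    b = start
    for _ in range(n):
        b = next_ptr[b]
    return blocks[b]

def indexed_read(blocks, index, n):
    # The index block gives direct access: one lookup, no traversal.
    return blocks[index[n]]
```

Note that a single corrupted `next_ptr` entry strands every block after it, which is also why linked allocation is unreliable.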
Why is contiguous allocation fast
Block location computed directly using offset
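A minimal sketch of that computation (block and offset values are made up for illustration):

```python
def block_address(start_block, block_size, byte_offset):
    # Contiguous layout: the block holding any byte offset is pure
    # arithmetic, so random access needs no traversal at all.
    return start_block + byte_offset // block_size
```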
Why is contiguous allocation hard to grow
Needs contiguous free space
Why do flat namespaces cause collisions
All files share one directory
Why does opening a file require multiple disk reads
Each directory in path must be resolved step by step
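Path resolution can be sketched as a loop over components, using nested dictionaries as a stand-in for on-disk directories (hypothetical structure, not a real on-disk format); each iteration corresponds to one potential disk read.

```python
def resolve(root, path):
    # One directory lookup per path component, resolved left to right.
    node = root
    for name in path.strip("/").split("/"):
        node = node["entries"][name]
    return node
```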
Why are directories needed
They map human names to file metadata
Why are file systems data structures
They consist of directories, inodes, blocks, and metadata
Why are transactions needed
Prevent inconsistent state from partial updates
What happens before commit on crash
Transaction is rolled back
What happens after commit on crash
Log is replayed to restore changes
Why must commit be atomic
System must clearly know if transaction happened
Why must log be written before data
Ensures recovery information exists before changes
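The write-ahead ordering can be sketched as follows, with a plain list as the log and a dictionary as the data store (a toy model; real journals write durably to disk at each step):

```python
def commit(log, store, txn_id, writes):
    # Write-ahead rule: every change is logged before data is touched.
    for key, value in writes.items():
        log.append(("write", txn_id, key, value))
    log.append(("commit", txn_id))   # the single atomic commit point
    # Only after the commit record exists are the changes applied.
    for key, value in writes.items():
        store[key] = value
```

On a crash before the commit record, recovery discards the partial log; after it, recovery replays the logged writes.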
Why is journaling better than fsck
Replays exact operations instead of guessing
Why is logging not enough
Does not prevent concurrent transaction conflicts
Why is locking needed
Prevents transactions from interfering with each other
Difference between consistency and recovery
Consistency is a valid state; recovery restores it after a crash
Difference between durability and consistency
Durability ensures persistence; consistency ensures correctness
Why does demand paging improve performance
Avoids loading unused pages and reduces disk I/O
Why does locality make paging work
Programs reuse same and nearby data
What triggers a page fault
Access to page not present in memory
What happens during a page fault
OS loads the page, updates the page table, and resumes execution
Why are page faults expensive
They may require disk access
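The fault path above can be sketched with dictionaries for memory, disk, and the page table (a simplified model that assumes a free frame is always available):

```python
def read_page(vpn, page_table, memory, disk):
    # Page fault path: the page is absent, so it must come from disk.
    if vpn not in page_table:
        frame = len(memory)          # assume a free frame is available
        memory[frame] = disk[vpn]    # slow disk read: the expensive part
        page_table[vpn] = frame      # update the page table, then resume
    return memory[page_table[vpn]]
```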
Why is TLB needed
Avoids repeated slow page table lookups
What happens on TLB miss
Lookup page table then update TLB
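Both the hit and miss paths fit in a short sketch (dictionaries stand in for the TLB and page table; a real TLB is a small hardware cache with an eviction policy, omitted here):

```python
def translate(vaddr, tlb, page_table, page_size=4096):
    vpn, offset = divmod(vaddr, page_size)
    if vpn in tlb:                   # TLB hit: skip the page-table walk
        frame = tlb[vpn]
    else:                            # TLB miss: walk the table, cache it
        frame = page_table[vpn]
        tlb[vpn] = frame
    return frame * page_size + offset
```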
Why can FIFO have Belady's anomaly
Ignores usage and may evict useful pages
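Belady's anomaly can be demonstrated directly: with the classic reference string below, FIFO produces more faults with four frames than with three.

```python
def fifo_faults(num_frames, refs):
    # Count page faults under FIFO replacement.
    frames, fifo, faults = set(), [], 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.discard(fifo.pop(0))
            frames.add(page)
            fifo.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(3, refs))  # 9 faults
print(fifo_faults(4, refs))  # 10 faults: more frames, yet more faults
```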
Why does LRU perform better
Keeps recently used pages in memory
Why does thrashing occur
Working set larger than memory
Why is RAID 0 unsafe
No redundancy so any failure loses data
Why is RAID 1 reliable
Data is duplicated across disks
Why is RAID 1 expensive
Requires double storage space
How does parity allow recovery
Missing data computed using XOR of remaining data
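XOR recovery is short enough to show in full (block values are arbitrary example integers):

```python
from functools import reduce
from operator import xor

def recover(surviving):
    # XOR of every surviving block (data + parity) equals the lost block.
    return reduce(xor, surviving)

data = [0b1010, 0b0110, 0b1100]
parity = reduce(xor, data)   # the parity block for this stripe
```

Losing any one block of the stripe, the XOR of the rest reconstructs it.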
Why is parity disk a bottleneck
All writes must update shared parity
Why are small writes expensive in RAID
Require reading old data and parity then writing both
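The read-modify-write arithmetic behind the small-write penalty, as a sketch:

```python
def small_write(old_data, old_parity, new_data):
    # Small-write penalty: read old data and old parity (two reads),
    # then write new data and new parity (two writes).
    new_parity = old_parity ^ old_data ^ new_data
    return new_data, new_parity
```

The XOR cancels the old data's contribution to parity and substitutes the new data's, without touching any other block in the stripe.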
Why is RAID 5 better than RAID 4
Parity distributed reduces bottleneck
What is cache consistency problem
Different clients may have outdated file copies
How does NFS handle consistency
Frequently checks with server and writes on close
How does AFS handle consistency
Uses cached copy and server invalidation
Why is NFS less scalable
Frequent server communication increases load
Why is AFS more scalable
Less communication with server
What tradeoff does AFS make
Scalability at cost of immediate consistency
Why can't we have perfect consistency and scalability
Strong consistency requires communication which reduces scalability
Why is FAT32 like linked list allocation
Clusters linked through FAT entries
How does FAT32 read a file
Follow cluster chain from starting cluster to EOF
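Chain-following can be sketched with a dictionary as the FAT; `EOC` here is a placeholder sentinel, not FAT32's actual end-of-chain value, and the cluster numbers are made up.

```python
EOC = -1  # stands in for FAT32's end-of-chain marker

def read_chain(fat, start_cluster):
    # Reading a file means following FAT entries cluster by cluster;
    # reaching byte N requires visiting every cluster before it.
    chain, cluster = [], start_cluster
    while cluster != EOC:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

fat = {2: 5, 5: 7, 7: EOC}
print(read_chain(fat, 2))  # [2, 5, 7]
```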
Why is FAT32 bad for random access
Must traverse cluster chain
Why is FAT pointer corruption dangerous
Breaks rest of file chain
What is first step to read FAT32 file
Find starting cluster from directory entry