Q: What is deadlock?
A:
Deadlock: A permanent blocking of a set of processes that either compete for system resources or communicate with each other. Each process in the set is blocked waiting for an event that can only be triggered by another blocked process in the set.
Q: What are the four necessary conditions for deadlock?
A:
Mutual Exclusion: Only one process can use a resource at a time.
Hold and Wait: A process holds resources while waiting for others.
No Preemption: Resources cannot be forcibly taken from a process.
Circular Wait: A circular chain of processes exists, each holding a resource needed by the next.
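A minimal sketch (not from the text) of how the four conditions can arise together: two POSIX threads each hold one mutex and wait for the other's, producing a circular wait.

```c
/* Minimal two-lock deadlock sketch (illustrative, not from the text).
 * Each thread holds one mutex and waits for the other, satisfying
 * mutual exclusion, hold-and-wait, no preemption, and circular wait. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *thread1(void *arg) {
    pthread_mutex_lock(&lock_a);      /* holds A ... */
    sleep(1);                         /* widen the window so the race shows */
    pthread_mutex_lock(&lock_b);      /* ... waits for B (held by thread2) */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *thread2(void *arg) {
    pthread_mutex_lock(&lock_b);      /* holds B ... */
    sleep(1);
    pthread_mutex_lock(&lock_a);      /* ... waits for A (held by thread1) */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL);           /* never returns: both threads are deadlocked */
    pthread_join(t2, NULL);
    puts("not reached");
    return 0;
}
```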
Q: What are the two categories of resources, and how do they differ?
A:
Reusable Resources: Can be safely used by only one process at a time and are not depleted (e.g., processors, memory, files).
Consumable Resources: Created (produced) and destroyed (consumed) by processes (e.g., messages, interrupts).
Q: What is a resource allocation graph, and how is it used to detect deadlock?
A:
Resource Allocation Graph: A directed graph that shows resource allocation and requests between processes and resources.
Deadlock Detection: A cycle in the graph indicates a potential deadlock.
Q: What are the two main methods for deadlock prevention?
A:
Indirect Methods: Prevent one of the three necessary conditions (mutual exclusion, hold and wait, no preemption).
Direct Methods: Prevent circular wait by imposing a linear ordering of resource requests.
Q: What is deadlock avoidance, and how does it differ from prevention?
A:
Deadlock Avoidance: Dynamically analyzes resource requests to ensure the system remains in a safe state.
Difference: Avoidance allows more concurrency than prevention by making runtime decisions rather than imposing strict rules.
Q: What is the Banker’s Algorithm, and how does it work?
A:
Banker’s Algorithm: A deadlock avoidance strategy that ensures the system remains in a safe state by simulating resource allocation before granting requests.
Steps:
Check that the request does not exceed the process's declared claim or the currently available resources.
Pretend to allocate resources.
Check if the resulting state is safe.
If safe, grant the request; otherwise, block the process.
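A sketch of the safety check and request handling; the claim/allocation matrices and sizes below are made-up example data, not figures from the text.

```c
/* Banker's algorithm sketch: grant a request only if the resulting state is safe. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NPROC 4
#define NRES  3

/* Claim (maximum demand), current allocation, and available vector (assumed data). */
static int claim[NPROC][NRES] = {{3,2,2},{6,1,3},{3,1,4},{4,2,2}};
static int alloc[NPROC][NRES] = {{1,0,0},{5,1,1},{2,1,1},{0,0,2}};
static int avail[NRES]        = {1,1,2};

/* Returns true if the state is safe: some order lets every process finish. */
static bool is_safe(void) {
    int work[NRES];
    bool done[NPROC] = {false};
    memcpy(work, avail, sizeof work);

    for (;;) {
        bool progressed = false;
        for (int p = 0; p < NPROC; p++) {
            if (done[p]) continue;
            bool can_finish = true;
            for (int r = 0; r < NRES; r++)
                if (claim[p][r] - alloc[p][r] > work[r]) { can_finish = false; break; }
            if (can_finish) {                      /* p can run to completion ... */
                for (int r = 0; r < NRES; r++) work[r] += alloc[p][r];  /* ... and return its resources */
                done[p] = true;
                progressed = true;
            }
        }
        if (!progressed) break;
    }
    for (int p = 0; p < NPROC; p++)
        if (!done[p]) return false;
    return true;
}

/* Grant a request only if pretending to allocate it leaves the state safe.
 * A full version would also reject requests exceeding the declared claim. */
static bool request(int p, const int req[NRES]) {
    for (int r = 0; r < NRES; r++)
        if (req[r] > avail[r]) return false;      /* cannot grant now: must wait */
    for (int r = 0; r < NRES; r++) {              /* pretend to allocate */
        avail[r] -= req[r]; alloc[p][r] += req[r];
    }
    if (is_safe()) return true;                   /* keep the allocation */
    for (int r = 0; r < NRES; r++) {              /* roll back: state would be unsafe */
        avail[r] += req[r]; alloc[p][r] -= req[r];
    }
    return false;
}

int main(void) {
    int req[NRES] = {1, 0, 1};
    printf("request %s\n", request(0, req) ? "granted" : "denied");
    return 0;
}
```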
Q: What is a safe state in the context of deadlock avoidance?
A:
Safe State: A system state where there is at least one sequence of resource allocations that allows all processes to complete without deadlock.
Q: How does deadlock detection work, and what are its advantages and disadvantages?
A:
Deadlock Detection: Periodically checks for cycles in the resource allocation graph.
Advantages:
Early detection of deadlock.
Relatively simple algorithm, since system state changes are incremental.
Disadvantages:
Frequent checks consume CPU time.
Recovery from deadlock can be costly.
Q: What are the main strategies for recovering from deadlock?
A:
Abort All Deadlocked Processes: Terminate all processes involved in the deadlock.
Rollback Processes: Restore processes to a previous checkpoint and restart them.
Preempt Resources: Forcibly take resources from processes to break the deadlock.
Kill Processes One by One: Abort processes incrementally until the deadlock is resolved.
Q: What is starvation, and how does it differ from deadlock?
A:
Starvation: A process is indefinitely delayed because other processes are always given priority for resources.
Difference: In starvation, the process is not blocked permanently (like in deadlock) but is repeatedly denied access to resources.
Q: What is the dining philosophers problem, and what does it illustrate?
A:
Problem: Five philosophers alternate between thinking and eating, requiring two forks to eat. The challenge is to prevent deadlock and starvation.
Illustrates: Issues of mutual exclusion, deadlock, and starvation in concurrent systems.
Q: What are some solutions to the dining philosophers problem?
A:
Semaphores: Use semaphores to control access to forks.
Monitors: Use monitors to enforce mutual exclusion and synchronization.
Limit Philosophers: Allow only four philosophers to sit at the table at a time.
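A sketch in C of the semaphore solution combined with the "at most four seated" rule; the fork and room semaphore names and the loop counts are illustrative.

```c
/* Dining philosophers: one semaphore per fork plus a "room" semaphore that
 * admits at most four philosophers at once, preventing circular wait. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5

static sem_t fork_sem[N];   /* one binary semaphore per fork */
static sem_t room;          /* admits at most N-1 philosophers to the table */

static void *philosopher(void *arg) {
    long i = (long)arg;
    for (int round = 0; round < 3; round++) {
        /* think */
        sem_wait(&room);                    /* at most four may try to eat */
        sem_wait(&fork_sem[i]);             /* left fork */
        sem_wait(&fork_sem[(i + 1) % N]);   /* right fork */
        printf("philosopher %ld eats\n", i);
        sem_post(&fork_sem[(i + 1) % N]);
        sem_post(&fork_sem[i]);
        sem_post(&room);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    sem_init(&room, 0, N - 1);
    for (int i = 0; i < N; i++) sem_init(&fork_sem[i], 0, 1);
    for (long i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}
```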
Q: What concurrency mechanisms are provided by UNIX and Linux?
A:
Pipes: Circular buffers for interprocess communication (IPC).
Messages: Blocks of bytes with an accompanying type, used for IPC via message queues.
Shared Memory: Fastest form of IPC, where multiple processes share a common block of memory.
Signals: Software interrupts used to notify processes of asynchronous events.
Semaphores: Used for synchronization, with operations such as semWait and semSignal.
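A minimal pipe example (assumed, not from the text): the parent writes a message that the child reads from the other end of the pipe.

```c
/* Minimal pipe IPC sketch: parent writes, child reads. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                     /* child: reader */
        close(fd[1]);                      /* close unused write end */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child read: %s\n", buf); }
        close(fd[0]);
        return 0;
    }

    close(fd[0]);                          /* parent: writer */
    const char *msg = "hello via pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}
```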
Q: How do real-time (RT) signals in Linux differ from standard signals?
A:
Priority Order: RT signals are delivered in priority order.
Queuing: Multiple RT signals can be queued.
Value Passing: RT signals can send a value (integer or pointer) along with the signal.
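A minimal Linux-oriented sketch (assumed example) of queuing and value passing: two SIGRTMIN signals are queued with sigqueue and received in order with sigwaitinfo, each carrying an integer payload.

```c
/* Real-time signal sketch: queued delivery with a value attached. */
#define _POSIX_C_SOURCE 199309L
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGRTMIN);
    sigprocmask(SIG_BLOCK, &set, NULL);     /* block so the signals queue up */

    /* Two RT signals to ourselves, each carrying an integer value; standard
     * signals of the same number would have been collapsed into one. */
    sigqueue(getpid(), SIGRTMIN, (union sigval){ .sival_int = 1 });
    sigqueue(getpid(), SIGRTMIN, (union sigval){ .sival_int = 2 });

    siginfo_t info;
    for (int i = 0; i < 2; i++) {
        sigwaitinfo(&set, &info);           /* both delivered, in order */
        printf("received value %d\n", info.si_value.sival_int);
    }
    return 0;
}
```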
Q: What is a spinlock, and when is it used?
A:
Spinlock: A lock that causes a thread to wait in a loop (spin) while checking if the lock is available.
Use: Effective for very short critical sections where the wait time is expected to be minimal.
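A user-space test-and-set spinlock sketch using C11 atomics; this is illustrative, not the kernel's spinlock implementation.

```c
/* Minimal test-and-set spinlock protecting a shared counter. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter;

static void spin_lock(void) {
    /* Spin until the previous value was "clear", i.e. we acquired the lock. */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;  /* busy-wait: sensible only when the critical section is very short */
}

static void spin_unlock(void) {
    atomic_flag_clear_explicit(&lock, memory_order_release);
}

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        spin_lock();
        counter++;              /* short critical section */
        spin_unlock();
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```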
Q: What are atomic operations, and why are they important in concurrency?
A:
Atomic Operations: Operations that execute without interruption and cannot be partially completed.
Importance: Ensure that critical sections of code are executed without interference, preventing race conditions.
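A short sketch of an atomic counter with C11 <stdatomic.h>; a plain non-atomic increment here could lose updates under contention.

```c
/* Atomic increment: fetch-add is a single indivisible read-modify-write,
 * so concurrent increments are never lost. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_long hits = 0;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++)
        atomic_fetch_add(&hits, 1);   /* executes without interruption */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("hits = %ld (expected 400000)\n", atomic_load(&hits));
    return 0;
}
```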
Q: What are memory barriers, and why are they used?
A:
Memory Barriers: Instructions that enforce the order of memory operations (loads and stores).
Use: Prevent the compiler and processor from reordering instructions, ensuring correct synchronization.
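A C11 fence-based sketch of the idea (illustrative; the Linux kernel uses its own barrier macros, which are not shown here): a release fence keeps the data store ordered before the flag store, and an acquire fence keeps the data load after the flag load.

```c
/* Memory-ordering sketch: publish data through a flag using explicit fences. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int payload;                 /* plain data guarded by the flag */
static atomic_int ready = 0;

static void *producer(void *arg) {
    payload = 42;                                            /* 1: write data */
    atomic_thread_fence(memory_order_release);               /* barrier: data store stays before */
    atomic_store_explicit(&ready, 1, memory_order_relaxed);  /* 2: publish flag */
    return NULL;
}

static void *consumer(void *arg) {
    while (atomic_load_explicit(&ready, memory_order_relaxed) == 0)
        ;                                                    /* wait for the flag */
    atomic_thread_fence(memory_order_acquire);               /* barrier: data load stays after */
    printf("payload = %d\n", payload);                       /* guaranteed to see 42 */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```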
Q: What is RCU, and how does it work?
A:
RCU (Read-Copy-Update): A synchronization mechanism that allows many readers to access shared data concurrently with a writer, without requiring readers to take locks.
Workflow:
Writer: Creates a copy of the data, updates it, and atomically updates the pointer to the new copy.
Readers: Access either the old or new version of the data, ensuring consistency.
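A greatly simplified illustration of the copy/update/publish step using C11 atomics; real Linux RCU (rcu_read_lock(), synchronize_rcu(), and so on) also handles grace periods and reclamation, which are omitted here.

```c
/* Simplified read-copy-update idea: readers dereference a published pointer,
 * the writer updates a private copy and then atomically swings the pointer. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct config { int threshold; int limit; };

static _Atomic(struct config *) current;

static void *reader(void *arg) {
    /* Readers see either the old version or the new one, but always a
     * consistent snapshot, and never block. */
    struct config *c = atomic_load_explicit(&current, memory_order_acquire);
    printf("reader sees threshold=%d limit=%d\n", c->threshold, c->limit);
    return NULL;
}

static void writer_update(int new_threshold) {
    struct config *old  = atomic_load_explicit(&current, memory_order_acquire);
    struct config *copy = malloc(sizeof *copy);
    *copy = *old;                      /* copy ... */
    copy->threshold = new_threshold;   /* ... update the copy ... */
    atomic_store_explicit(&current, copy, memory_order_release);  /* ... publish */
    /* Real RCU would now wait for a grace period (all pre-existing readers
     * to finish) before freeing `old`; that step is omitted here. */
}

int main(void) {
    struct config *init = malloc(sizeof *init);
    init->threshold = 10; init->limit = 100;
    atomic_store(&current, init);

    pthread_t r;
    pthread_create(&r, NULL, reader, NULL);
    writer_update(20);
    pthread_join(r, NULL);
    return 0;
}
```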
Q: What is priority inversion, and how can it be resolved?
A:
Priority Inversion: A lower-priority process holds a resource needed by a higher-priority process, causing the higher-priority process to wait.
Resolution: Use priority inheritance, where the lower-priority process temporarily inherits the higher priority.
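A POSIX sketch (assumed example) of requesting priority inheritance on a mutex; support for PTHREAD_PRIO_INHERIT depends on the platform (Linux with glibc provides it).

```c
/* Priority-inheritance mutex: a low-priority holder is temporarily boosted
 * to the priority of the highest-priority thread blocked on the mutex. */
#define _XOPEN_SOURCE 700
#include <pthread.h>
#include <stdio.h>

int main(void) {
    pthread_mutexattr_t attr;
    pthread_mutex_t m;

    pthread_mutexattr_init(&attr);
    if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT) != 0) {
        puts("priority inheritance not supported here");
        return 1;
    }
    pthread_mutex_init(&m, &attr);

    pthread_mutex_lock(&m);
    /* ... critical section shared by high- and low-priority threads ... */
    pthread_mutex_unlock(&m);

    pthread_mutex_destroy(&m);
    pthread_mutexattr_destroy(&attr);
    return 0;
}
```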
Q: What is the difference between deadlock and livelock?
A:
Deadlock: Processes are permanently blocked, waiting for resources held by each other.
Livelock: Processes are not blocked but are constantly changing states in response to each other, making no progress.
Q: How can the "no preemption" condition be prevented to avoid deadlock?
A:
Solution: Allow resources to be forcibly taken from a process (preemption).
Example: If a process is denied a resource, it must release all currently held resources and request them again.
Q: How can the "circular wait" condition be prevented to avoid deadlock?
A:
Solution: Impose a linear ordering of resource types and require processes to request resources in that order.
Example: If a process holds resource R1, it can only request resources with a higher order (e.g., R2, R3).
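A small lock-ordering sketch (assumed example): both threads acquire the locks in the same global order, so no circular wait can form.

```c
/* Circular-wait prevention: every thread takes lock_a before lock_b. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;  /* order 1 */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;  /* order 2 */

static void do_work(const char *who) {
    pthread_mutex_lock(&lock_a);        /* always the lower-ordered lock first */
    pthread_mutex_lock(&lock_b);
    printf("%s holds both locks\n", who);
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}

static void *t1(void *arg) { do_work("thread 1"); return NULL; }
static void *t2(void *arg) { do_work("thread 2"); return NULL; }

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, t1, NULL);
    pthread_create(&b, NULL, t2, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```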
Q: What are the steps in the deadlock detection algorithm?
A:
Mark processes with no allocated resources.
Initialize a temporary vector W to the available resources.
Find an unmarked process whose requests can be satisfied by W.
If found, mark the process and add its allocated resources to W.
Repeat until no more processes can be marked.
Unmarked processes are deadlocked.
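A sketch of this marking algorithm; the allocation and request matrices below are made-up example data, not figures from the text.

```c
/* Deadlock detection by marking: unmarked processes at the end are deadlocked. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NPROC 3
#define NRES  2

static int alloc[NPROC][NRES]   = {{0,0},{1,0},{0,1}};
static int request[NPROC][NRES] = {{1,1},{0,1},{1,0}};
static int avail[NRES]          = {0,0};

int main(void) {
    bool marked[NPROC] = {false};
    int w[NRES];
    memcpy(w, avail, sizeof w);          /* W starts as the available vector */

    /* Step 1: mark processes holding no resources at all. */
    for (int p = 0; p < NPROC; p++) {
        bool holds_nothing = true;
        for (int r = 0; r < NRES; r++)
            if (alloc[p][r] != 0) holds_nothing = false;
        if (holds_nothing) marked[p] = true;
    }

    /* Steps 2-4: repeatedly mark any unmarked process whose request fits in W,
     * returning its allocation to W. */
    bool progressed = true;
    while (progressed) {
        progressed = false;
        for (int p = 0; p < NPROC; p++) {
            if (marked[p]) continue;
            bool fits = true;
            for (int r = 0; r < NRES; r++)
                if (request[p][r] > w[r]) { fits = false; break; }
            if (fits) {
                for (int r = 0; r < NRES; r++) w[r] += alloc[p][r];
                marked[p] = true;
                progressed = true;
            }
        }
    }

    /* Step 5: anything still unmarked is deadlocked. */
    for (int p = 0; p < NPROC; p++)
        if (!marked[p]) printf("process %d is deadlocked\n", p);
    return 0;
}
```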
Q: What are the two approaches to terminating processes for deadlock recovery?
A:
Abort All Deadlocked Processes: Terminate all processes involved in the deadlock.
Abort Processes Incrementally: Terminate processes one by one until the deadlock is resolved.
Q: How does resource preemption work for deadlock recovery?
A:
Resource Preemption: Forcibly take resources from processes to break the deadlock.
Steps:
Select a victim process.
Preempt its resources.
Roll back the process to a previous state.
Reallocate resources to other processes.
Q: How does deadlock manifest in real-world systems?
A:
Example: Traffic deadlock at a four-way intersection, where each car is waiting for another to move.
Solution: Traffic rules (e.g., yielding to the right) prevent deadlock in most cases.
Q: How is deadlock handled in database systems?
A:
Detection: The database system periodically checks for cycles in the wait-for graph.
Recovery: The system rolls back one or more transactions to break the deadlock.
Q: What are the challenges of deadlock in distributed systems?
A:
Challenges:
No global state information.
Communication delays make detection harder.
Solutions: Use distributed algorithms like the Chandy-Misra-Haas algorithm for deadlock detection.
Q: What are the key takeaways from Chapter 6?
A:
Deadlock is a permanent blocking of processes due to resource contention.
Four Conditions: Mutual exclusion, hold and wait, no preemption, and circular wait.
Strategies: Prevention, avoidance, detection, and recovery.
Classic Problems: Dining philosophers problem illustrates deadlock and starvation.
Real-World Systems: Deadlock can occur in traffic, databases, and distributed systems.