Deadlock
This situation happens when two or more processes each wait for a resource held by another process, so none can proceed, creating a cycle of dependency
Four necessary conditions for deadlock
Mutual Exclusion
Hold and wait
No Preemption
Circular wait
Circular Wait
The circular wait condition is a scenario in which a set of processes are waiting for resources in a circular chain. Each process holds a resource and is waiting to acquire a resource held by the next process in the chain, thus creating a closed loop.
Mutual Exclusion
is a condition where at least one resource must be held in a non-shareable mode, meaning that only one process can use the resource at any given time.
Hold and Wait
The hold and wait condition occurs when a process is holding at least one resource while waiting to acquire additional resources that are currently being held by other processes, potentially leading to a deadlock situation.
No Preemption
is a condition where resources cannot be forcibly taken from a process holding them, meaning that a process must voluntarily release its resources.
Deadlock Prevention
In order to prevent deadlock we essentially have to avoid one of the deadlock conditions which are: mutual exclusion, hold and wait, no preemption and circular wait.
Deadlock Avoidance
Compared to deadlock prevention, deadlock avoidance consists of implementing methods such as the Banker's Algorithm or the Resource Allocation Graph.
Banker’s Algorithm
Helps with deadlock avoidance by ensuring that a system will only grant resource requests if doing so keeps the system in a safe state. It helps in avoiding situations that can lead to deadlock.
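The safety check at the heart of the Banker's Algorithm can be sketched as follows. This is a minimal illustration using the classic textbook example of five processes and three resource types; the matrices are hypothetical, not from the source.

```python
# Sketch of the Banker's Algorithm safety check (hypothetical data).
# available[j] = free instances of resource j
# alloc[i][j]  = instances of j currently held by process i
# need[i][j]   = instances of j process i may still request (Max - Allocation)

def is_safe(available, alloc, need):
    n = len(alloc)                      # number of processes
    work = list(available)              # resources free right now
    finished = [False] * n
    safe_sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # Process i can finish if its remaining need fits in `work`
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # When i finishes, it releases everything it holds
                work = [work[j] + alloc[i][j] for j in range(len(work))]
                finished[i] = True
                safe_sequence.append(i)
                progress = True
    return all(finished), safe_sequence

available = [3, 3, 2]
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need  = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe, seq = is_safe(available, alloc, need)
print(safe, seq)  # True, safe sequence [1, 3, 4, 0, 2]
```

A resource request is granted only if the state after the hypothetical grant still passes this check; otherwise the process waits.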
Resource Allocation
The process of distributing available system resources to processes or users based on their requirements, ensuring efficient and fair utilization
How does OS handle resource allocation?
Process scheduling
Memory Management
Resource Requests
Granting Resources
Concurrency Control
Deadlock Avoidance
Priority Handling
Process Scheduling
The OS uses scheduling algorithms to allocate CPU time to processes
Memory Management
It manages primary and secondary memory, allocating memory segments or pages to processes while ensuring no overlap
Resource Requests
Processes request resources using system calls, and the OS checks availability
Granting resources
The OS grants resources if they are available, or queues the process if the resource is currently occupied
Concurrency control
It ensures the safe sharing of resources among processes using synchronization mechanisms such as locks and semaphores
Priority Handling
Resources may be allocated based on priority, ensuring critical tasks are served first
Safe State
A state where the system can allocate resources to all processes without causing a deadlock
Unsafe state
A state where resource allocation may lead to a deadlock if not managed carefully
Max Matrix in Resource Allocation
Represents the maximum number of resources a process may request
Need Matrix in Resource Allocation
Represents the remaining resources a process needs to complete, calculated as Max - Allocation
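The Max - Allocation calculation is element-wise, one entry per process and resource type. A minimal sketch with hypothetical values:

```python
# Need = Max - Allocation, computed element-wise (hypothetical values).
max_demand = [[7, 5, 3], [3, 2, 2]]   # maximum each process may request
allocation = [[0, 1, 0], [2, 0, 0]]   # what each process currently holds
need = [[m - a for m, a in zip(max_row, alloc_row)]
        for max_row, alloc_row in zip(max_demand, allocation)]
print(need)  # [[7, 4, 3], [1, 2, 2]]
```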
Logical Memory Address
Generated by the CPU, seen by the process
Physical Memory Address
Actual location in memory, seen by hardware
How is the mapping between logical and physical address achieved?
It is achieved using the Memory Management Unit, which essentially translates logical addresses to physical addresses dynamically
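The translation can be sketched as splitting the logical address into a page number and an offset, then looking up the frame in a page table. The page size and table entries below are hypothetical.

```python
# Minimal sketch of MMU-style paging translation (assumed 4 KB pages).
PAGE_SIZE = 4096
page_table = {0: 5, 1: 9, 2: 3}   # hypothetical page -> frame mapping

def translate(logical_address):
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page]      # a missing entry corresponds to a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 9 -> 36868
```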
Thrashing
A condition where excessive paging occurs, leading to high paging overhead and low CPU utilization. It's caused by insufficient memory and a high multiprogramming level. It can be avoided by using a working-set model or increasing physical memory
Translation Lookaside Buffer (TLB)
The role of the TLB in the OS is to cache page table entries, reducing address translation time and improving memory access speed
Segmentation
Divides memory into variable-sized segments based on logical units like functions
Segmentation vs Paging
Paging uses fixed-size blocks, while segmentation uses variable-sized logical divisions
Advantages of Segmentation
Logical grouping, better protection
Disadvantages of Segmentation
External fragmentation, complexity in management
External Fragmentation
Occurs when free memory is scattered in small, unusable blocks. It can be mitigated by compaction or paging.
Internal Fragmentation
Occurs when allocated memory has unused space within blocks. It’s different from external fragmentation because internal happens inside allocated spaces; external happens outside
Frame Allocation
Divides physical memory into fixed-size frames and assigns them to processes
PAA (Page Allocation Algorithm)
The PAA decides how many frames each process receives based on demand or predefined policies
Pure Paging
Divides memory into fixed-size pages and maps them to frames. No contiguous allocation required
Benefits of Pure Paging
Eliminates external fragmentation, simplifies allocation
Challenges of Pure Paging
Overhead of maintaining page tables, internal fragmentation within pages
Reference String
A reference string is a sequence of memory page requests made by a process during execution
How does a page fault occur?
A page fault occurs when a process tries to access a page that is not loaded in physical memory
Page fault impact on system performance
Increased Latency: A page fault requires a disk I/O operation, which is much slower than memory access
Reduced Throughput: Frequent page faults slow down the entire system
Thrashing: If page faults occur too frequently, the system may spend more time swapping pages than executing processes, causing severe performance degradation
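Page faults can be counted by replaying a reference string against a fixed number of frames. The sketch below assumes FIFO replacement (one common policy among several) and hypothetical values:

```python
from collections import deque

# Counting page faults with FIFO replacement on a reference string
# (3 frames; reference string values are hypothetical).
def fifo_page_faults(reference_string, num_frames):
    frames = deque()
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1               # page not resident: a page fault
            if len(frames) == num_frames:
                frames.popleft()      # evict the oldest page
            frames.append(page)
    return faults

print(fifo_page_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))  # 7
```

With more frames the fault count typically drops, which is why frame allocation policy matters for performance.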
Inverted Page Table
A memory management structure that contains one entry per physical frame in memory, instead of one entry per virtual page in the logical address space. By contrast, a traditional page table has one entry for each virtual page of a process, mapping it directly to a physical frame
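The frame-indexed structure can be sketched as a list where the index is the frame number and each entry records which (process, page) pair occupies it. Lookup searches the table rather than indexing by page; data below is hypothetical.

```python
# Sketch of an inverted page table: one entry per physical frame
# (hypothetical contents; index = frame number).
inverted_table = [
    ("p1", 0),                # frame 0 holds page 0 of process p1
    ("p2", 3),                # frame 1 holds page 3 of process p2
    ("p1", 2),                # frame 2 holds page 2 of process p1
]

def frame_of(pid, page):
    for frame, entry in enumerate(inverted_table):
        if entry == (pid, page):
            return frame
    return None               # not resident: a page fault would occur

print(frame_of("p1", 2))  # 2
```

The space saving (one table for the whole machine) comes at the cost of this search, which real systems speed up with hashing.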
Sequential Access
In sequential access, data in the file is accessed in order, from beginning to end
Direct Access
In direct access, data in the file can be accessed randomly by jumping directly to any block
Sequential Access impact on file access time and system performance
Access time is linear because you must traverse each block in sequence.
Performance is efficient for files that are read/written from start to finish.
Direct Access impact on file access time and system performance
Access time is constant for a given block since you can directly fetch it.
Better performance for random access but requires more system resources to manage block locations.
Impact on System Performance
Sequential access minimizes disk seek time and works well for sequentially processed files.
Direct access improves performance for random access but may involve more overhead in managing file pointers.
Buffer
A buffer is a temporary storage area in memory used to hold data during input/output operations. It acts as a middleman between two components that operate at different speeds, like the CPU and the disk
How does a buffer work?
When reading data:
The OS reads data from a slow device (like a disk) into the buffer.
The CPU processes the data from the buffer, allowing the disk and CPU to work independently.
When writing data:
The OS stores data in the buffer before writing it to the disk.
The CPU doesn’t need to wait for the slower disk to complete the write operation.
Different types of buffers
Input Buffer
Output Buffer
Double Buffer
Circular Buffer
Buffer Pool
Input buffer
Stores data read from input devices or storage before the CPU processes it.
Example: When reading a file from a disk, data is first placed in the input buffer.
Output buffer
Temporarily stores data that the CPU wants to send to an output device or disk.
Example: When saving a file, the data is written to the output buffer before being sent to the disk.
Double buffer
Uses two buffers:
One buffer is being filled with data.
The other is being processed simultaneously.
Advantage: Reduces wait time, as I/O and CPU operations can overlap.
Circular buffer
A buffer organized in a circular manner, where the end connects to the beginning.
Common in real-time systems for continuous data streams (e.g., audio or video processing).
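The wrap-around behavior can be sketched with read and write indices that advance modulo the capacity. This is a minimal illustration, not production code:

```python
# Minimal fixed-size circular buffer: indices wrap around to the
# beginning when they reach the end of the underlying array.
class CircularBuffer:
    def __init__(self, capacity):
        self.data = [None] * capacity
        self.capacity = capacity
        self.read = 0
        self.write = 0
        self.count = 0

    def put(self, item):
        if self.count == self.capacity:
            raise BufferError("buffer full")
        self.data[self.write] = item
        self.write = (self.write + 1) % self.capacity  # wrap around
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("buffer empty")
        item = self.data[self.read]
        self.read = (self.read + 1) % self.capacity    # wrap around
        self.count -= 1
        return item

buf = CircularBuffer(3)
for x in (1, 2, 3):
    buf.put(x)
print(buf.get(), buf.get())  # 1 2
buf.put(4)                   # reuses the slot freed by get()
print(buf.get(), buf.get())  # 3 4
```

Because slots are reused in place, producers and consumers of a continuous stream never need to allocate or shift data.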
Buffer pool
A collection of reusable buffers shared by multiple processes or threads.
Example: Database systems often use buffer pools to improve efficiency.
Memory Allocation Strategies
First-Fit
Best-Fit
Worst-Fit
First Fit
The OS scans memory from the beginning and allocates the first block that is large enough for the process.
How It Works:
If a process needs 5 KB and the first free block of 10 KB is found, it is allocated 5 KB, leaving 5 KB unused.
Advantages:
Simple and fast.
Disadvantages:
Creates small unused memory blocks (fragmentation).
Best Fit
The OS searches the entire list of free blocks and allocates the smallest block that fits the process's size.
How It Works:
If a process needs 5 KB, the OS allocates a block of exactly 5 KB or the smallest block greater than 5 KB.
Advantages:
Reduces fragmentation.
Disadvantages:
Slower due to searching the entire list.
Worst Fit
The OS allocates the largest available block to the process.
How It Works:
If a process needs 5 KB and the largest block is 50 KB, it will allocate 5 KB from that block, leaving 45 KB unused.
Advantages:
Leaves larger blocks available for future processes.
Disadvantages:
Often results in poor space utilization.
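The three placement strategies can be compared on the same free list. A minimal sketch (block sizes in KB, values hypothetical) that returns the index of the chosen block:

```python
# Comparing first-fit, best-fit, and worst-fit on one free list
# (block sizes in KB; values are hypothetical).
def first_fit(blocks, size):
    for i, b in enumerate(blocks):
        if b >= size:
            return i              # stop at the first block that fits
    return None

def best_fit(blocks, size):
    candidates = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return min(candidates)[1] if candidates else None  # smallest fit

def worst_fit(blocks, size):
    candidates = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return max(candidates)[1] if candidates else None  # largest block

free_blocks = [10, 4, 20, 18, 7]
print(first_fit(free_blocks, 5))  # 0: first block >= 5 KB
print(best_fit(free_blocks, 5))   # 4: smallest block that fits (7 KB)
print(worst_fit(free_blocks, 5))  # 2: largest block (20 KB)
```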
How does the choice of memory allocation strategy impact system performance?
First-Fit:
Performance: Fast because it stops searching as soon as a suitable block is found.
Fragmentation: May lead to small leftover spaces (external fragmentation), which wastes memory.
Best-Fit:
Performance: Slower because it scans the entire list to find the smallest suitable block.
Fragmentation: Reduces fragmentation but leaves many very small, unusable blocks.
Worst-Fit:
Performance: Similar to best-fit, as it may also require scanning the entire list.
Fragmentation: Often results in large leftover blocks, which can still be useful for future processes.
Seek Time
Seek time is the time taken by the disk’s read/write head to move to the track where the desired data is located.
LOOK
LOOK: Moves in one direction, servicing requests until the end, then reverses. Efficient for moderate loads.
CLOOK
CLOOK: Similar, but it only services requests in one direction, moving to the last request in that direction and then jumping back to the start. Preferable when fairness and reduced starvation are priorities.
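The difference in service order can be sketched as follows, assuming a head at track 50 that initially moves toward higher track numbers (the request queue is hypothetical):

```python
# Sketch of LOOK vs C-LOOK service order
# (head at track 50, initially moving toward higher tracks).
def look(requests, head):
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down          # sweep up, then reverse direction

def clook(requests, head):
    up = sorted(r for r in requests if r >= head)
    down = sorted(r for r in requests if r < head)
    return up + down          # jump back to the lowest pending request

queue = [82, 170, 43, 140, 24, 16, 190]
print(look(queue, 50))   # [82, 140, 170, 190, 43, 24, 16]
print(clook(queue, 50))  # [82, 140, 170, 190, 16, 24, 43]
```

Note how C-LOOK services the low tracks in ascending order after the jump, giving requests at both ends of the disk more uniform waiting times.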
What are the different RAID levels, and how do they impact performance and redundancy?
RAID 0: High performance, no redundancy.
RAID 1: Full redundancy, slower writes.
RAID 5: Good balance between performance and redundancy using parity data.
Discuss the advantages and disadvantages of RAID 0
Advantages: High performance, full use of storage capacity (no redundancy overhead).
Disadvantages: Data loss if one disk fails.
Discuss the advantages and disadvantages of RAID 1
Advantages: Full redundancy, simple recovery.
Disadvantages: Expensive, storage space halved.
Discuss the advantages and disadvantages of RAID 5
Advantages: Efficient storage, good read performance, fault-tolerant.
Disadvantages: Slower writes due to parity calculations.
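The parity that RAID 5 distributes across disks is a simple XOR of the data blocks, which is why any single lost block can be rebuilt from the survivors. A minimal sketch with hypothetical block contents:

```python
# XOR parity as used by RAID 5: the parity block is the XOR of the
# data blocks, so a single lost block is recoverable from the rest.
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

d0 = b"\x01\x02\x03"
d1 = b"\x0f\x0e\x0d"
parity = xor_blocks(d0, d1)

# Simulate losing d1: recover it from d0 and the parity block.
recovered = xor_blocks(d0, parity)
print(recovered == d1)  # True
```

The write penalty mentioned above comes from having to read old data and old parity and recompute this XOR on every small write.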
What is protection in an operating system?
Protection in an operating system refers to mechanisms that control access to the resources of a computer system, such as files, memory segments, and the CPU. It ensures that processes and users access these resources in accordance with the system's policies.
Why is protection important in an operating system?
Preventing intentional and unintentional violations of access rights.
Ensuring programs use resources as intended and stated in policies.
Detecting latent errors between subsystems to improve reliability
How does the operating system enforce security policies to protect resources?
Authentication: Verifies user identity through credentials like passwords, biometrics, or tokens.
Authorization: Assigns permissions to users and processes based on roles or access control lists (ACLs).
Encryption: Protects data in storage and transit to prevent unauthorized access.
Auditing and Logging: Tracks activities to detect and respond to security breaches.
Resource Isolation: Uses mechanisms like virtual memory and user/kernel mode to separate processes and prevent unauthorized access.
Firewalls and Filters: Restrict network traffic to enforce communication policies.
Principle of Least Privilege: Limits access rights to the minimum necessary for tasks.
What are privacy principles in the context of operating systems, and how do they ensure user data privacy?
Privacy principles in operating systems are guidelines and mechanisms designed to protect user data from unauthorized access or disclosure.
They include:
Ensuring confidentiality by restricting access to sensitive data.
Implementing secure authentication to confirm user identities.
Encrypting data to prevent unauthorized reading, even if it is accessed
What are the main privacy challenges faced by modern operating systems?
Data Breaches: Unauthorized access to sensitive user data.
Tracking and Profiling: Operating systems may collect excessive user activity data for analytics or advertising.
Insecure Permissions: Applications or processes may access more data than necessary, violating privacy.
Malware and Spyware: Malicious software can exploit system vulnerabilities to steal personal information.
Weak Encryption: Insufficient encryption of stored or transmitted data increases privacy risks.
Shared Resources: Shared memory or files may lead to unintended data leaks between processes.
User Unawareness: Lack of transparency or unclear privacy policies regarding data collection and usage.
Third-Party Applications: External apps may exploit OS permissions to access sensitive data.