OS Final Revision

71 Terms

1
New cards

Deadlock

A situation in which two or more processes are each waiting for a resource held by another, so none of them can proceed, creating a cycle of dependency.

2
New cards

Four necessary conditions needed for deadlock

  • Mutual Exclusion

  • Hold and wait

  • No Preemption

  • Circular wait

3
New cards

Circular Wait

The circular wait condition is a scenario in which a set of processes are waiting for resources in a circular chain. Each process holds a resource and is waiting to acquire a resource held by the next process in the chain, thus creating a closed loop.
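
The closed loop described above can be sketched as a cycle check on a wait-for graph. A minimal Python sketch, where the graph, the process names, and the assumption that each process waits on at most one other are all hypothetical:

```python
# Hypothetical wait-for graph: each process maps to the process it waits on.
# A cycle in this graph is exactly the circular-wait condition.

def has_circular_wait(wait_for):
    """Detect a cycle in a wait-for graph given as {proc: waited_on_proc}."""
    for start in wait_for:
        seen = set()
        node = start
        while node in wait_for:      # follow the chain of waits
            if node in seen:         # revisited a node: closed loop
                return True
            seen.add(node)
            node = wait_for[node]
    return False

# P1 waits on P2, P2 on P3, P3 on P1 -> circular wait
print(has_circular_wait({"P1": "P2", "P2": "P3", "P3": "P1"}))  # True
print(has_circular_wait({"P1": "P2", "P2": "P3"}))              # False
```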

4
New cards

Mutual Exclusion

A condition where at least one resource is held in a non-shareable mode, meaning that only one process can use the resource at any given time.

5
New cards

Hold and Wait

The hold and wait condition occurs when a process is holding at least one resource while waiting to acquire additional resources that are currently being held by other processes, potentially leading to a deadlock situation.

6
New cards

No Preemption

A condition where resources cannot be forcibly taken from the process holding them; a process must release its resources voluntarily.

7
New cards

Deadlock Prevention

Deadlock is prevented by ensuring that at least one of the four necessary conditions cannot hold: mutual exclusion, hold and wait, no preemption, or circular wait.

8
New cards

Deadlock Avoidance

Unlike deadlock prevention, deadlock avoidance grants resources only when it is safe to do so, using methods such as the Banker’s algorithm and the Resource Allocation Graph.

9
New cards

Banker’s Algorithm

Supports deadlock avoidance by granting a resource request only if doing so keeps the system in a safe state, thereby avoiding situations that can lead to deadlock.
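
A minimal Python sketch of the safety check at the core of the Banker’s algorithm; the matrices and resource counts below are the classic textbook example, and the function name is made up:

```python
def is_safe(available, max_need, allocation):
    """Banker's safety check: True if some order lets every process finish."""
    n = len(allocation)
    work = list(available)
    # Need = Max - Allocation, element-wise.
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Process i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)

# Classic example: 5 processes, 3 resource types.
available  = [3, 3, 2]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))  # True (safe state)
```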

10
New cards

Resource Allocation

The process of distributing available system resources to processes or users based on their requirements, ensuring efficient and fair utilization

11
New cards

How does the OS handle resource allocation?

Process scheduling

Memory Management

Resource requests

Granting Resources

Concurrency Control

Deadlock Avoidance

Priority Handling

12
New cards

Process Scheduling

The OS uses scheduling algorithms to allocate CPU time to processes

13
New cards

Memory Management

It manages primary and secondary memory, allocating memory segments or pages to processes while ensuring no overlap

14
New cards

Resource Requests

Processes request resources using system calls, and the OS checks availability

15
New cards

Granting resources

The OS grants resources if they are available, or queues the process if the resource is currently occupied

16
New cards

Concurrency control

It ensures the safe sharing of resources among processes, using synchronization mechanisms such as locks and semaphores

17
New cards

Priority Handling

Resources may be allocated based on priority, ensuring critical tasks are served first

18
New cards

Safe State

A state where the system can allocate resources to all processes without causing a deadlock

19
New cards

Unsafe state

A state where resource allocation may lead to a deadlock if not managed carefully

20
New cards

Max Matrix in Resource Allocation

Represents the maximum number of resources a process may request

21
New cards

Need Matrix in Resource Allocation

Represents the remaining resources a process needs to complete, calculated as Max − Allocation
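
The Max − Allocation relation can be sketched in a few lines of Python (all matrix values are hypothetical):

```python
# Need = Max - Allocation, computed element-wise per process and resource type.
max_need   = [[7, 5, 3], [3, 2, 2]]
allocation = [[0, 1, 0], [2, 0, 0]]

need = [[m - a for m, a in zip(max_row, alloc_row)]
        for max_row, alloc_row in zip(max_need, allocation)]
print(need)  # [[7, 4, 3], [1, 2, 2]]
```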

22
New cards

Logical Memory Address

Generated by the CPU and seen by the process

23
New cards

Physical Memory Address

Actual location in memory, seen by hardware

24
New cards

How is the mapping between logical and physical address achieved?

It is achieved using the Memory Management Unit, which essentially translates logical addresses to physical addresses dynamically
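
The translation step can be sketched in Python; the page size, page-table contents, and function name are assumptions, and a real MMU does this in hardware:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

# Hypothetical per-process page table: page number -> frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_addr):
    """Translate a logical address to a physical one, as an MMU would."""
    page = logical_addr // PAGE_SIZE   # which page the address falls in
    offset = logical_addr % PAGE_SIZE  # position within that page
    frame = page_table[page]           # missing entry -> page fault in a real OS
    return frame * PAGE_SIZE + offset

print(translate(8200))  # page 2, offset 8 -> frame 7 -> 28680
```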

25
New cards

Thrashing

A condition where excessive paging occurs, leading to high paging overhead and low performance. It is caused by insufficient memory and a high level of multiprogramming, and can be avoided by using a working-set model or increasing physical memory

26
New cards

Translation Lookaside Buffer (TLB)

The TLB caches page table entries, reducing address translation time and improving memory access speed
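
A toy Python sketch of a TLB as a small cache consulted before the page table; the capacity, eviction policy, and table contents are all made up for illustration:

```python
# Hypothetical page table and a tiny TLB in front of it.
page_table = {0: 5, 1: 2, 2: 7, 3: 1}
tlb = {}        # page -> frame, limited capacity
TLB_SIZE = 2

def lookup(page):
    """Return (frame, hit?) -- consult the TLB before walking the page table."""
    if page in tlb:
        return tlb[page], True         # TLB hit: fast path
    frame = page_table[page]           # TLB miss: slower page-table walk
    if len(tlb) >= TLB_SIZE:
        tlb.pop(next(iter(tlb)))       # evict the oldest cached entry
    tlb[page] = frame
    return frame, False

print(lookup(0))  # (5, False) -- miss, result is cached
print(lookup(0))  # (5, True)  -- hit
```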

27
New cards

Segmentation

Divides memory into variable-sized segments based on logical units like functions

28
New cards

Segmentation vs Paging

Paging uses fixed-size blocks; segmentation uses variable-sized logical divisions

29
New cards

Advantages of Segmentation

Logical grouping, better protection

30
New cards

Disadvantages of Segmentation

External fragmentation, complexity in management

31
New cards

External Fragmentation

Occurs when free memory is scattered in small, unusable blocks. It can be mitigated by compaction or paging.

32
New cards

Internal Fragmentation

Occurs when allocated memory has unused space within blocks. It’s different from external fragmentation because internal happens inside allocated spaces; external happens outside

33
New cards

Frame Allocation

Divides physical memory into fixed-size frames and assigns them to processes

34
New cards

PAA (Page Allocation Algorithm)

The PAA decides how many frames each process receives based on demand or predefined policies

35
New cards

Pure Paging

Divides memory into fixed-size pages and maps them to frames. No contiguous allocation required

36
New cards

Benefits of Pure Paging

Eliminates external fragmentation, simplifies allocation

37
New cards

Challenges of Pure Paging

Overhead of maintaining page tables, internal fragmentation within pages

38
New cards

Reference String

A reference string is a sequence of memory page requests made by a process during execution

39
New cards

How does a page fault occur?

A page fault occurs when a process tries to access a page that is not currently loaded in physical memory

40
New cards

Page fault impact on system performance

Increased Latency: A page fault requires a disk I/O operation, which is much slower than memory access

Reduced Throughput: Frequent page faults slow down the entire system

Thrashing: If page faults occur too frequently, the system may spend more time swapping pages than executing processes, causing severe performance degradation
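
Page-fault frequency depends on the replacement policy and the number of frames. A minimal Python sketch counting faults under FIFO replacement; the reference string is the classic example that also exhibits Belady's anomaly (more frames, more faults):

```python
from collections import deque

def count_page_faults(reference_string, num_frames):
    """Count page faults for FIFO replacement over a reference string."""
    frames = deque()
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()       # evict the page loaded longest ago
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_page_faults(refs, 3))  # 9
print(count_page_faults(refs, 4))  # 10 (Belady's anomaly)
```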

41
New cards

Inverted Page Table

A memory management structure that contains one entry for each physical frame in memory, instead of one entry per virtual page in the logical address space. By contrast, a traditional page table has one entry for each virtual page of a process, mapping it directly to a physical frame

42
New cards

Sequential Access

In sequential access, data in the file is accessed in order, from beginning to end

43
New cards

Direct Access

In direct access, data in the file can be accessed randomly by jumping directly to any block

44
New cards

Sequential Access impact on file access time and system performance

  • Access time is linear because you must traverse each block in sequence.

  • Performance is efficient for files that are read/written from start to finish.

45
New cards

Direct Access impact on file access time and system performance

  • Access time is constant for a given block since you can directly fetch it.

  • Better performance for random access but requires more system resources to manage block locations.

46
New cards

Impact on System Performance

  • Sequential access minimizes disk seek time and works well for sequentially processed files.

  • Direct access improves performance for random access but may involve more overhead in managing file pointers.

47
New cards

Buffer

A buffer is a temporary storage area in memory used to hold data during input/output operations. It acts as a middleman between two components that operate at different speeds, like the CPU and the disk

48
New cards

How does a buffer work?

  1. When reading data:

    • The OS reads data from a slow device (like a disk) into the buffer.

    • The CPU processes the data from the buffer, allowing the disk and CPU to work independently.

  2. When writing data:

    • The OS stores data in the buffer before writing it to the disk.

    • The CPU doesn’t need to wait for the slower disk to complete the write operation.

49
New cards

Different types of buffers

Input Buffer

Output Buffer

Double Buffer

Circular Buffer

Buffer Pool

50
New cards

Input buffer

  • Stores data read from input devices or storage before the CPU processes it.

  • Example: When reading a file from a disk, data is first placed in the input buffer.

51
New cards

Output buffer

  • Temporarily stores data that the CPU wants to send to an output device or disk.

  • Example: When saving a file, the data is written to the output buffer before being sent to the disk.

52
New cards

Double buffer

  • Uses two buffers:

    • One buffer is being filled with data.

    • The other is being processed simultaneously.

  • Advantage: Reduces wait time, as I/O and CPU operations can overlap.
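
The role swap can be sketched without real devices or threads; in practice filling and draining overlap in time, which this sequential Python sketch (with made-up data) does not show:

```python
# Two buffers alternate roles: one is filled while the other is drained.
buf_a, buf_b = [], []
filling, ready = buf_a, buf_b
consumed = []
for chunk in (["a", "b"], ["c", "d"], ["e"]):
    filling.extend(chunk)               # device side fills one buffer
    filling, ready = ready, filling     # the two buffers swap roles
    consumed.append(list(ready))        # CPU side drains the other
    ready.clear()                       # emptied, ready for the next swap
print(consumed)  # [['a', 'b'], ['c', 'd'], ['e']]
```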

53
New cards

Circular buffer

  • A buffer organized in a circular manner, where the end connects to the beginning.

  • Common in real-time systems for continuous data streams (e.g., audio or video processing).
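
A minimal Python ring-buffer sketch, where the write position wraps modulo the buffer size (the class and method names are made up):

```python
class CircularBuffer:
    """Fixed-size ring buffer: indices wrap back to the start."""
    def __init__(self, size):
        self.data = [None] * size
        self.size = size
        self.head = 0    # next slot to read from
        self.count = 0   # items currently stored

    def put(self, item):
        if self.count == self.size:
            raise BufferError("buffer full")
        self.data[(self.head + self.count) % self.size] = item
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("buffer empty")
        item = self.data[self.head]
        self.head = (self.head + 1) % self.size
        self.count -= 1
        return item

buf = CircularBuffer(3)
for x in "abc":
    buf.put(x)
print(buf.get(), buf.get())  # a b
buf.put("d")                 # write wraps into the freed slot at the start
print(buf.get(), buf.get())  # c d
```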

54
New cards

Buffer pool

  • A collection of reusable buffers shared by multiple processes or threads.

  • Example: Database systems often use buffer pools to improve efficiency.

55
New cards

Memory Allocation Strategies

First-Fit

Best-Fit

Worst-Fit

56
New cards

First Fit

  • The OS scans memory from the beginning and allocates the first block that is large enough for the process.

  • How It Works:

    • If a process needs 5 KB and the first free block of 10 KB is found, it is allocated 5 KB, leaving 5 KB unused.

  • Advantages:

    • Simple and fast.

  • Disadvantages:

    • Creates small unused memory blocks (fragmentation).

57
New cards

Best Fit

  • The OS searches the entire list of free blocks and allocates the smallest block that fits the process's size.

  • How It Works:

    • If a process needs 5 KB, the OS allocates a block of exactly 5 KB or the smallest block greater than 5 KB.

  • Advantages:

    • Reduces fragmentation.

  • Disadvantages:

    • Slower due to searching the entire list.

58
New cards

Worst Fit

  • The OS allocates the largest available block to the process.

  • How It Works:

    • If a process needs 5 KB and the largest block is 50 KB, it will allocate 5 KB from that block, leaving 45 KB unused.

  • Advantages:

    • Leaves larger blocks available for future processes.

  • Disadvantages:

    • Often results in poor space utilization.

59
New cards

How does the choice of memory allocation strategy impact system performance?

First-Fit:

  • Performance: Fast because it stops searching as soon as a suitable block is found.

  • Fragmentation: May lead to small leftover spaces (external fragmentation), which wastes memory.

Best-Fit:

  • Performance: Slower because it scans the entire list to find the smallest suitable block.

  • Fragmentation: Reduces fragmentation but leaves many very small, unusable blocks.

Worst-Fit:

  • Performance: Similar to best-fit, as it may also require scanning the entire list.

  • Fragmentation: Often results in large leftover blocks, which can still be useful for future processes.
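
The three strategies differ only in which candidate block they pick, which a short Python sketch makes concrete (block sizes and the request are hypothetical):

```python
def allocate(free_blocks, request, strategy):
    """Pick the index of a free block for `request` KB under a given strategy."""
    candidates = [i for i, size in enumerate(free_blocks) if size >= request]
    if not candidates:
        return None                                        # no block fits
    if strategy == "first":
        return candidates[0]                               # first large enough
    if strategy == "best":
        return min(candidates, key=lambda i: free_blocks[i])  # tightest fit
    if strategy == "worst":
        return max(candidates, key=lambda i: free_blocks[i])  # largest block

free = [10, 4, 20, 6]   # free block sizes in KB
for s in ("first", "best", "worst"):
    print(s, "->", allocate(free, 5, s))
# first -> 0 (10 KB), best -> 3 (6 KB), worst -> 2 (20 KB)
```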

60
New cards

Seek Time

Seek time is the time taken by the disk’s read/write head to move to the track where the desired data is located.

61
New cards

LOOK

LOOK: Moves in one direction, servicing requests until the last request in that direction, then reverses. Efficient for moderate loads.

62
New cards

CLOOK

CLOOK: Like LOOK, it services requests in one direction only as far as the last request, but then jumps back to the lowest pending request and sweeps in the same direction again. Preferable when fairness and reduced starvation are priorities.
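
The two sweep orders can be sketched in Python, assuming the head is moving toward higher cylinder numbers; the request list is a common textbook example:

```python
def look_order(head, requests):
    """LOOK (moving up): ascend through requests above the head, then descend."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down

def clook_order(head, requests):
    """C-LOOK (moving up): ascend, jump to the lowest request, ascend again."""
    up = sorted(r for r in requests if r >= head)
    wrap = sorted(r for r in requests if r < head)
    return up + wrap

reqs = [82, 170, 43, 140, 24, 16, 190]
print(look_order(50, reqs))   # [82, 140, 170, 190, 43, 24, 16]
print(clook_order(50, reqs))  # [82, 140, 170, 190, 16, 24, 43]
```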

63
New cards

What are the different RAID levels, and how do they impact performance and redundancy?

  • RAID 0: High performance, no redundancy.

  • RAID 1: Full redundancy, slower writes.

  • RAID 5: Good balance between performance and redundancy using parity data.

64
New cards

Discuss the advantages and disadvantages of RAID 0

  • Advantages: High performance; full storage capacity is usable.

  • Disadvantages: Data loss if one disk fails.

65
New cards

Discuss the advantages and disadvantages of RAID 1

  • Advantages: Full redundancy, simple recovery.

  • Disadvantages: Expensive, storage space halved.

66
New cards

Discuss the advantages and disadvantages of RAID 5

  • Advantages: Efficient storage, good read performance, fault-tolerant.

  • Disadvantages: Slower writes due to parity calculations.

67
New cards

What is protection in an operating system?

Protection in an operating system refers to mechanisms that control access to the resources of a computer system, such as files, memory segments, and the CPU. It ensures that processes and users access these resources in accordance with the system's policies.

68
New cards

Why is protection important in an operating system?

  • Preventing intentional and unintentional violations of access rights.

  • Ensuring programs use resources as intended and stated in policies.

  • Detecting latent errors between subsystems to improve reliability​

69
New cards

How does the operating system enforce security policies to protect resources?

  • Authentication: Verifies user identity through credentials like passwords, biometrics, or tokens.

  • Authorization: Assigns permissions to users and processes based on roles or access control lists (ACLs).

  • Encryption: Protects data in storage and transit to prevent unauthorized access.

  • Auditing and Logging: Tracks activities to detect and respond to security breaches.

  • Resource Isolation: Uses mechanisms like virtual memory and user/kernel mode to separate processes and prevent unauthorized access.

  • Firewalls and Filters: Restrict network traffic to enforce communication policies.

  • Principle of Least Privilege: Limits access rights to the minimum necessary for tasks.

70
New cards

What are privacy principles in the context of operating systems, and how do they ensure user data privacy?

Privacy principles in operating systems are guidelines and mechanisms designed to protect user data from unauthorized access or disclosure.

They include:

  • Ensuring confidentiality by restricting access to sensitive data.

  • Implementing secure authentication to confirm user identities.

  • Encrypting data to prevent unauthorized reading, even if it is accessed

71
New cards

What are the main privacy challenges faced by modern operating systems?

  • Data Breaches: Unauthorized access to sensitive user data.

  • Tracking and Profiling: Operating systems may collect excessive user activity data for analytics or advertising.

  • Insecure Permissions: Applications or processes may access more data than necessary, violating privacy.

  • Malware and Spyware: Malicious software can exploit system vulnerabilities to steal personal information.

  • Weak Encryption: Insufficient encryption of stored or transmitted data increases privacy risks.

  • Shared Resources: Shared memory or files may lead to unintended data leaks between processes.

  • User Unawareness: Lack of transparency or unclear privacy policies regarding data collection and usage.

  • Third-Party Applications: External apps may exploit OS permissions to access sensitive data.