OS Final Revision

71 Terms

1

Deadlock

This situation occurs when two or more processes each wait for a resource held by another process, so none of them can proceed, creating a cycle of dependency

2

Four necessary conditions for deadlock

  • Mutual Exclusion

  • Hold and wait

  • No Preemption

  • Circular wait

3

Circular Wait

The circular wait condition is a scenario in which a set of processes are waiting for resources in a circular chain. Each process holds a resource and is waiting to acquire a resource held by the next process in the chain, thus creating a closed loop.
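
As a rough illustration (not part of the original cards), circular wait corresponds to a cycle in a wait-for graph. The process names and graph shape below are made up:

```python
def has_circular_wait(waits_for):
    """Return True if the wait-for graph contains a cycle (circular wait)."""
    visited, in_stack = set(), set()

    def dfs(proc):
        visited.add(proc)
        in_stack.add(proc)
        for nxt in waits_for.get(proc, []):
            if nxt in in_stack:          # back edge: cycle found
                return True
            if nxt not in visited and dfs(nxt):
                return True
        in_stack.discard(proc)
        return False

    return any(dfs(p) for p in list(waits_for) if p not in visited)

# P1 -> P2 -> P3 -> P1 forms a closed loop, so a circular wait exists.
print(has_circular_wait({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
```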

4

Mutual Exclusion

is a condition where at least one resource must be held in a non-shareable mode, meaning that only one process can use the resource at any given time.

5

Hold and Wait

The hold and wait condition occurs when a process is holding at least one resource while waiting to acquire additional resources that are currently being held by other processes, potentially leading to a deadlock situation.

6

No Preemption

is a condition where resources cannot be forcibly taken from a process holding them, meaning that a process must voluntarily release its resources.

7

Deadlock Prevention

In order to prevent deadlock, we must negate at least one of the four necessary conditions: mutual exclusion, hold and wait, no preemption, or circular wait.

8

Deadlock Avoidance

Compared to deadlock prevention, deadlock avoidance consists of implementing methods such as the Banker’s algorithm or the Resource Allocation Graph, which check each resource request at run time.

9

Banker’s Algorithm

Helps with deadlock avoidance by ensuring that a system will only grant resource requests if doing so keeps the system in a safe state. It helps in avoiding situations that can lead to deadlock.
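
The safety check at the heart of the Banker’s algorithm can be sketched as follows; the matrices are illustrative textbook-style values, not from these cards:

```python
def is_safe(available, max_need, allocation):
    """Return True if some order exists in which all processes can finish."""
    need = [[m - a for m, a in zip(max_need[i], allocation[i])]
            for i in range(len(max_need))]
    work, finished = list(available), [False] * len(max_need)
    progressed = True
    while progressed:
        progressed = False
        for i in range(len(max_need)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe([3, 3, 2], max_need, allocation))  # True: the state is safe
```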

10

Resource Allocation

The process of distributing available system resources to processes or users based on their requirements, ensuring efficient and fair utilization

11

How does OS handle resource allocation?

Process scheduling

Memory Management

Resource Requests

Granting Resources

Concurrency Control

Deadlock Avoidance

Priority Handling

12

Process Scheduling

The OS uses scheduling algorithms to allocate CPU time to processes

13

Memory Management

It manages primary and secondary memory, allocating memory segments or pages to processes while ensuring no overlap

14

Resource Requests

Processes request resources using system calls, and the OS checks availability

15

Granting resources

The OS grants resources if they are available, or queues the process if the resource is currently occupied

16

Concurrency control

It ensures the safe sharing of resources among processes, using synchronization mechanisms such as locks and semaphores to prevent conflicting access

17

Priority Handling

Resources may be allocated based on priority, ensuring critical tasks are served first

18

Safe State

A state where the system can allocate resources to all processes without causing a deadlock

19

Unsafe state

A state where resource allocation may lead to a deadlock if not managed carefully

20

Max Matrix in Resource Allocation

Represents the maximum number of resources a process may request

21

Need Matrix in Resource Allocation

Represents the remaining resources a process still needs to complete, calculated as Need = Max - Allocation
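
The formula on this card is simple elementwise subtraction; a quick sketch with made-up numbers:

```python
# Need = Max - Allocation, computed per process and per resource type.
max_matrix = [[7, 5, 3], [3, 2, 2]]   # illustrative maximum demands
allocation = [[0, 1, 0], [2, 0, 0]]   # illustrative current allocations
need = [[m - a for m, a in zip(mr, ar)]
        for mr, ar in zip(max_matrix, allocation)]
print(need)  # [[7, 4, 3], [1, 2, 2]]
```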

22

Logical Memory Address

Generated by the CPU; seen by the process

23

Physical Memory Address

Actual location in memory, seen by hardware

24

How is the mapping between logical and physical address achieved?

It is achieved using the Memory Management Unit, which essentially translates logical addresses to physical addresses dynamically
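
A minimal sketch of that translation, assuming 4 KiB pages and an illustrative page table:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages

# Illustrative page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 9, 2: 3}

def translate(logical_addr):
    """Split a logical address into (page, offset) and map page -> frame."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]  # a real MMU raises a page fault if missing
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 9 -> 36868
```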

25

Thrashing

A condition where excessive paging occurs, leading to high disk I/O and low CPU utilization. It is caused by insufficient memory and a high degree of multiprogramming. It can be avoided by using the working-set model or by increasing physical memory

26

Translation Lookaside Buffer (TLB)

The TLB caches page table entries, reducing address translation time and improving memory access speed

27

Segmentation

Divides memory into variable-sized segments based on logical units like functions

28

Segmentation vs Paging

Paging uses fixed-size blocks (pages), while segmentation uses variable-sized logical divisions (segments)

29

Advantages of Segmentation

Logical grouping, better protection

30

Disadvantages of Segmentation

External fragmentation, complexity in management

31

External Fragmentation

Occurs when free memory is scattered in small, unusable blocks. It can be mitigated by compaction or paging.

32

Internal Fragmentation

Occurs when allocated memory has unused space within blocks. It’s different from external fragmentation because internal happens inside allocated spaces; external happens outside

33

Frame Allocation

Divides physical memory into fixed-size frames and assigns them to processes

34

PAA (Page Allocation Algorithm)

The PAA decides how many frames each process receives based on demand or predefined policies

35

Pure Paging

Divides memory into fixed-size pages and maps them to frames. No contiguous allocation required

36

Benefits of Pure Paging

Eliminates external fragmentation, simplifies allocation

37

Challenges of Pure Paging

Overhead of maintaining page tables, internal fragmentation within pages

38

Reference String

A reference string is a sequence of memory page requests made by a process during execution

39

How does a page fault occur?

A page fault occurs when a process tries to access a page that is not loaded in physical memory
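
For example, page-fault counts can be obtained by replaying a reference string against a fixed number of frames. This sketch uses FIFO replacement and an illustrative reference string:

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement for a given frame count."""
    frames, order, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1                       # page not resident: fault
            if len(frames) == num_frames:     # evict the oldest page
                frames.discard(order.popleft())
            frames.add(page)
            order.append(page)
    return faults

print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 9
```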

40

Page fault impact on system performance

Increased Latency: A page fault requires a disk I/O operation, which is much slower than memory access

Reduced Throughput: Frequent page faults slow down the entire system

Thrashing: If page faults occur too frequently, the system may spend more time swapping pages than executing processes, causing severe performance degradation

41

Inverted Page Table

A memory management structure that contains one entry for each physical frame in memory, instead of one entry per virtual page in the logical address space. A traditional page table, by contrast, has one entry for each virtual page of a process, mapping it directly to a physical frame
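
A toy sketch of the idea, with made-up processes and frame numbers: the table is keyed by physical frame, so translation must search for the owning entry (which is why real systems pair it with hashing or a TLB).

```python
# Inverted page table: physical frame -> (process id, virtual page).
inverted = {0: ("P1", 4), 1: ("P2", 0), 2: ("P1", 7)}

def lookup(pid, page):
    """Find which frame holds (pid, page), or None if it is not resident."""
    for frame, owner in inverted.items():
        if owner == (pid, page):
            return frame
    return None  # not resident: this would trigger a page fault

print(lookup("P1", 7))  # 2
```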

42

Sequential Access

In sequential access, data in the file is accessed in order, from beginning to end

43

Direct Access

In direct access, data in the file can be accessed randomly by jumping directly to any block

44

Sequential Access impact on file access time and system performance

  • Access time is linear because you must traverse each block in sequence.

  • Performance is efficient for files that are read/written from start to finish.

45

Direct Access impact on file access time and system performance

  • Access time is constant for a given block since you can directly fetch it.

  • Better performance for random access but requires more system resources to manage block locations.

46

Impact on System Performance

  • Sequential access minimizes disk seek time and works well for sequentially processed files.

  • Direct access improves performance for random access but may involve more overhead in managing file pointers.

47

Buffer

A buffer is a temporary storage area in memory used to hold data during input/output operations. It acts as a middleman between two components that operate at different speeds, like the CPU and the disk

48

How does a buffer work?

  1. When reading data:

    • The OS reads data from a slow device (like a disk) into the buffer.

    • The CPU processes the data from the buffer, allowing the disk and CPU to work independently.

  2. When writing data:

    • The OS stores data in the buffer before writing it to the disk.

    • The CPU doesn’t need to wait for the slower disk to complete the write operation.

49

Different types of buffers

Input Buffer

Output Buffer

Double Buffer

Circular Buffer

Buffer Pool

50

Input buffer

  • Stores data read from input devices or storage before the CPU processes it.

  • Example: When reading a file from a disk, data is first placed in the input buffer.

51

Output buffer

  • Temporarily stores data that the CPU wants to send to an output device or disk.

  • Example: When saving a file, the data is written to the output buffer before being sent to the disk.

52

Double buffer

  • Uses two buffers:

    • One buffer is being filled with data.

    • The other is being processed simultaneously.

  • Advantage: Reduces wait time, as I/O and CPU operations can overlap.

53

Circular buffer

  • A buffer organized in a circular manner, where the end connects to the beginning.

  • Common in real-time systems for continuous data streams (e.g., audio or video processing).
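
A minimal ring-buffer sketch (illustrative, not from the cards) showing how the write position wraps back to the start:

```python
class CircularBuffer:
    """Fixed-capacity buffer whose write position wraps around to the start."""
    def __init__(self, capacity):
        self.data = [None] * capacity
        self.head = 0   # next read position
        self.size = 0   # number of items currently stored

    def put(self, item):
        if self.size == len(self.data):
            raise OverflowError("buffer full")
        self.data[(self.head + self.size) % len(self.data)] = item
        self.size += 1

    def get(self):
        if self.size == 0:
            raise IndexError("buffer empty")
        item = self.data[self.head]
        self.head = (self.head + 1) % len(self.data)
        self.size -= 1
        return item

buf = CircularBuffer(3)
for x in "abc":
    buf.put(x)
print(buf.get(), buf.get())  # a b
buf.put("d")  # wraps around into the slot freed by the first get
```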

54

Buffer pool

  • A collection of reusable buffers shared by multiple processes or threads.

  • Example: Database systems often use buffer pools to improve efficiency.

55

Memory Allocation Strategies

First-Fit

Best-Fit

Worst-Fit

56

First Fit

  • The OS scans memory from the beginning and allocates the first block that is large enough for the process.

  • How It Works:

    • If a process needs 5 KB and the first free block of 10 KB is found, it is allocated 5 KB, leaving 5 KB unused.

  • Advantages:

    • Simple and fast.

  • Disadvantages:

    • Creates small unused memory blocks (fragmentation).

57

Best Fit

  • The OS searches the entire list of free blocks and allocates the smallest block that fits the process's size.

  • How It Works:

    • If a process needs 5 KB, the OS allocates a block of exactly 5 KB or the smallest block greater than 5 KB.

  • Advantages:

    • Reduces fragmentation.

  • Disadvantages:

    • Slower due to searching the entire list.

58

Worst Fit

  • The OS allocates the largest available block to the process.

  • How It Works:

    • If a process needs 5 KB and the largest block is 50 KB, it will allocate 5 KB from that block, leaving 45 KB unused.

  • Advantages:

    • Leaves larger blocks available for future processes.

  • Disadvantages:

    • Often results in poor space utilization.
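
The three strategies above can be sketched side by side; the free-block sizes are illustrative, and each function returns the index of the chosen block:

```python
def first_fit(blocks, request):
    """First block large enough, scanning from the start."""
    for i, size in enumerate(blocks):
        if size >= request:
            return i
    return None

def best_fit(blocks, request):
    """Smallest block that still fits the request."""
    candidates = [(size, i) for i, size in enumerate(blocks) if size >= request]
    return min(candidates)[1] if candidates else None

def worst_fit(blocks, request):
    """Largest available block."""
    candidates = [(size, i) for i, size in enumerate(blocks) if size >= request]
    return max(candidates)[1] if candidates else None

free_blocks = [10, 4, 20, 6]  # free block sizes in KB (made up)
print(first_fit(free_blocks, 5))  # 0 (the 10 KB block)
print(best_fit(free_blocks, 5))   # 3 (the 6 KB block)
print(worst_fit(free_blocks, 5))  # 2 (the 20 KB block)
```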

59

How does the choice of memory allocation strategy impact system performance?

First-Fit:

  • Performance: Fast because it stops searching as soon as a suitable block is found.

  • Fragmentation: May lead to small leftover spaces (external fragmentation), which wastes memory.

Best-Fit:

  • Performance: Slower because it scans the entire list to find the smallest suitable block.

  • Fragmentation: Reduces fragmentation but leaves many very small, unusable blocks.

Worst-Fit:

  • Performance: Similar to best-fit, as it may also require scanning the entire list.

  • Fragmentation: Often results in large leftover blocks, which can still be useful for future processes.

60

Seek Time

Seek time is the time taken by the disk’s read/write head to move to the track where the desired data is located.

61

LOOK

LOOK: Moves in one direction, servicing requests until the last request in that direction, then reverses. Efficient for moderate loads.

62

C-LOOK

C-LOOK: Similar to LOOK, but services requests in only one direction; after the last request in that direction, it jumps back to the earliest pending request and continues. Preferable when fairness and reduced starvation are priorities.
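
The two service orders can be sketched as follows, assuming the head starts by moving toward higher cylinder numbers; the request queue is an illustrative example:

```python
def look_order(requests, head):
    """LOOK: ascending sweep above the head, then descend through the rest."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down

def clook_order(requests, head):
    """C-LOOK: same ascending sweep, then jump back to the lowest pending
    request and continue ascending (one-directional service)."""
    up = sorted(r for r in requests if r >= head)
    low = sorted(r for r in requests if r < head)
    return up + low

reqs = [98, 183, 37, 122, 14, 124, 65, 67]
print(look_order(reqs, 53))   # [65, 67, 98, 122, 124, 183, 37, 14]
print(clook_order(reqs, 53))  # [65, 67, 98, 122, 124, 183, 14, 37]
```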

63

What are the different RAID levels, and how do they impact performance and redundancy?

  • RAID 0: High performance, no redundancy.

  • RAID 1: Full redundancy, slower writes.

  • RAID 5: Good balance between performance and redundancy using parity data.

64

Discuss the advantages and disadvantages of RAID 0

  • Advantages: High read/write performance; full storage capacity usable.

  • Disadvantages: Data loss if one disk fails.

65

Discuss the advantages and disadvantages of RAID 1

  • Advantages: Full redundancy, simple recovery.

  • Disadvantages: Expensive, storage space halved.

66

Discuss the advantages and disadvantages of RAID 5

  • Advantages: Efficient storage, good read performance, fault-tolerant.

  • Disadvantages: Slower writes due to parity calculations.
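
The parity idea behind RAID 5 is just XOR across the data blocks of a stripe, which lets any single lost block be rebuilt from the survivors. A toy sketch with made-up byte values:

```python
def xor_blocks(a, b):
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Three data blocks in one stripe (illustrative values).
d0, d1, d2 = b"\x0f\xf0", b"\x33\xcc", b"\x55\xaa"
parity = xor_blocks(xor_blocks(d0, d1), d2)

# Lose d1: rebuild it from the surviving blocks plus parity.
rebuilt = xor_blocks(xor_blocks(d0, d2), parity)
print(rebuilt == d1)  # True
```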

67

What is protection in an operating system?

Protection in an operating system refers to mechanisms that control access to the resources of a computer system, such as files, memory segments, and the CPU. It ensures that processes and users access these resources in accordance with the system's policies.

68

Why is protection important in operating system?

  • Preventing intentional and unintentional violations of access rights.

  • Ensuring programs use resources as intended and stated in policies.

  • Detecting latent errors between subsystems to improve reliability.

69

How does the operating system enforce security policies to protect resources?

  • Authentication: Verifies user identity through credentials like passwords, biometrics, or tokens.

  • Authorization: Assigns permissions to users and processes based on roles or access control lists (ACLs).

  • Encryption: Protects data in storage and transit to prevent unauthorized access.

  • Auditing and Logging: Tracks activities to detect and respond to security breaches.

  • Resource Isolation: Uses mechanisms like virtual memory and user/kernel mode to separate processes and prevent unauthorized access.

  • Firewalls and Filters: Restrict network traffic to enforce communication policies.

  • Principle of Least Privilege: Limits access rights to the minimum necessary for tasks.

70

What are privacy principles in the context of operating systems, and how do they ensure user data privacy?

Privacy principles in operating systems are guidelines and mechanisms designed to protect user data from unauthorized access or disclosure.

They include:

  • Ensuring confidentiality by restricting access to sensitive data.

  • Implementing secure authentication to confirm user identities.

  • Encrypting data to prevent unauthorized reading, even if it is accessed

71

What are the main privacy challenges faced by modern operating systems?

  • Data Breaches: Unauthorized access to sensitive user data.

  • Tracking and Profiling: Operating systems may collect excessive user activity data for analytics or advertising.

  • Insecure Permissions: Applications or processes may access more data than necessary, violating privacy.

  • Malware and Spyware: Malicious software can exploit system vulnerabilities to steal personal information.

  • Weak Encryption: Insufficient encryption of stored or transmitted data increases privacy risks.

  • Shared Resources: Shared memory or files may lead to unintended data leaks between processes.

  • User Unawareness: Lack of transparency or unclear privacy policies regarding data collection and usage.

  • Third-Party Applications: External apps may exploit OS permissions to access sensitive data.

