CSIT-345 Comprehensive Midterm Study Guide

103 Terms

1

Operating System

Software managing computer hardware and software resources.

2

Goals of Operating System

Efficiency, user convenience, and resource management.

3

Components of Computer System

Hardware, operating system, application programs, and users.

4

User View vs System View

User view focuses on convenience and ease of use; system view sees the OS as a resource allocator and control program.

5

Kernel

Core component of an operating system managing resources.

6

Device Controller

Manages device operations and communicates with the CPU.

7

Interrupt

Signal indicating an event requiring immediate attention.

8

Device Driver

Software enabling communication between OS and hardware devices.

9

Interrupt Service Routine (ISR)

Code executed in response to an interrupt.

10

Interrupt Vector

Table mapping interrupts to their corresponding ISRs.

11

Features of Interrupt-Driven OS

The OS sits idle until an event occurs; hardware interrupts and software traps (exceptions) signal events, and a service routine handles each one.

12

Nonmaskable Interrupt

Cannot be ignored or delayed; reserved for critical events such as unrecoverable memory errors.

13

Maskable Interrupt

Can be ignored or delayed by the CPU.

14

Caching

Storing frequently accessed data for faster retrieval.

15

Direct Memory Access (DMA)

Allows devices to access memory without CPU intervention.

16

CPU Core

Basic processing unit of a CPU executing instructions.

17

2 Types of Multiprocessor Systems

Asymmetric multiprocessing (each processor is assigned a specific task, coordinated by a boss processor) and symmetric multiprocessing (each processor performs all tasks).

18

Advantages of Multicore Design

Improved performance and energy efficiency in processing.

19

Graceful Degradation

System maintains functionality despite component failures.

20

Fault Tolerance

Ability to continue operation despite faults.

21

Asymmetric Cluster

Nodes perform different roles; not identical in function.

22

Symmetric Cluster

All nodes perform the same tasks and roles.

23

Mode Bit

Hardware bit distinguishing user mode from kernel (privileged) mode.

24

Program vs Process

Program is static; process is dynamic execution instance.

25

Program Counter

Register tracking the next instruction to execute.

26

Single vs Multithreaded Process

Single requires one program counter; multithreaded needs multiple.

27

Data Migration

Transferring data between storage locations or systems.

28

Cache Coherency

Keeps copies of the same data held in multiple caches consistent, an issue especially in multiprocessor systems.

29

Protection vs Security

Protection controls the access of processes and users to system resources; security defends the system against internal and external attacks.

30

User and Group Control Methods

Access control lists, permissions, and authentication mechanisms.

31

Operating System Services

File management, process management, and device management.

32

I/O Operations vs File System Manipulation

I/O involves data transfer; file manipulation involves data management.

33

Resource Allocation

Assigning resources like CPU, memory, and I/O to processes.

34

System Call

Request for service from the operating system.

35

System Call Sequence

Steps to perform operations like file copying.

36

System Call Interface

Gateway for user programs to request OS services.

37

Parameter Passing Methods

Registers, stack, and block; stack preferred for simplicity.

38

printf() System Call

C library call that formats the text and, on POSIX systems, ultimately issues the write() system call to produce the output.
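
To illustrate the relationship (a minimal sketch assuming a POSIX system, not part of the original cards), printf() is a library routine that ends up issuing the write() system call:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* Library call: formats the text, then internally invokes write() */
    printf("hello from printf\n");

    /* Roughly equivalent direct system call on POSIX systems */
    const char *msg = "hello from write\n";
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}
```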

39

Single Task System

Handles one task at a time.

40

Multitask System

Handles multiple tasks concurrently.

41

Object Files

Compiled code ready for linking into an executable.

42

Linker

Combines object files into a single executable.

43

Loader

Loads executable files into memory for execution.

44

Relocation

Adjusting addresses in code for memory placement.

45

Dynamically Linked Libraries

Shared libraries loaded at runtime for efficiency.

46

OS Development Requirements

Divided into functionality, performance, and usability.

47

OS Design Principle

Simplicity and modularity enhance maintainability.

48

OS Structures

Monolithic, microkernel, layered, and modular designs.

49

Modules

Self-contained components improving code organization.

50

Process Memory Sections

Divided into text, data, heap, and stack.
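
A small C sketch (an assumed example, not from the cards) showing where typical variables live in these sections:

```c
#include <stdlib.h>

int counter = 0;                     /* data section: initialized global */

int main(void) {                     /* the compiled code itself sits in the text section */
    int local = 5;                   /* stack: automatic local variable */
    int *p = malloc(sizeof *p);      /* heap: memory allocated dynamically at runtime */
    *p = local + counter;
    free(p);
    return 0;
}
```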

51

Activation Record

Data structure storing function call information on stack.

52

Process States

New, ready, running, waiting, terminated.

53

Process Control Block (PCB)

Data structure storing process information and state.

54

2 Process Scheduling Objectives

Maximize CPU utilization and minimize response time.

55

Process Scheduler

Allocates CPU to processes based on scheduling algorithms.

56

Scheduling Queues

Ready queue (processes waiting for the CPU) and wait/device queues (processes waiting for I/O or another event).

57

Degree of Multiprogramming

Number of processes in memory simultaneously.

58

CPU Scheduler

Selects which process to execute next.

59

I/O Bound Process

Spends more time waiting for I/O than computing.

60

CPU-Bound Process

Spends more time computing than waiting for I/O.

61

Context Switch

Switching CPU from one process to another.

62

Process Representation

Processes represented by PCB in the system.

63

Resource Allocation for Process

Resources allocated during process creation.

64

Parent Process Termination

Must release resources and terminate child processes.
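
For context, a minimal POSIX sketch (an assumed example) where the parent creates a child with fork() and reaps it with wait() so the child's resources are released:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* create a child process */

    if (pid == 0) {                  /* child: do some work, then exit */
        printf("child %d running\n", getpid());
        exit(0);
    }

    int status;
    wait(&status);                   /* parent: reap the child so no zombie remains */
    printf("parent %d reaped child %d\n", getpid(), pid);
    return 0;
}
```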

65

Independent Process

Operates without relying on other processes.

66

Cooperating Process

Requires synchronization with other processes.

67

Process Cooperation Reasons

Resource sharing, communication, and data consistency.

68

Interprocess Communication Models

Message passing and shared memory for data exchange.

69

Synchronization Issues

Race conditions and deadlocks in shared memory.

70

Producer-Consumer Paradigm

Model in which a producer generates items that a consumer removes from a shared buffer.
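
The classic shared-memory bounded buffer is one way to sketch this (illustration only; it busy-waits and omits synchronization primitives such as semaphores):

```c
#define BUFFER_SIZE 8

int buffer[BUFFER_SIZE];
int in = 0;    /* producer's next free slot */
int out = 0;   /* consumer's next full slot */

/* Producer: place an item, waiting while the buffer is full */
void produce(int item) {
    while (((in + 1) % BUFFER_SIZE) == out)
        ;                                   /* full: busy-wait */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
}

/* Consumer: remove an item, waiting while the buffer is empty */
int consume(void) {
    while (in == out)
        ;                                   /* empty: busy-wait */
    int item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}
```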

71

Indirect Communication Method

Messages are sent to and received from mailboxes (ports) rather than being addressed directly to a named process.

72

Blocking vs Nonblocking Message Passing

Blocking waits for message; nonblocking proceeds immediately.

73

Remote Procedure Calls

Abstracts the procedure-call mechanism across a network, letting a process call a procedure on a remote system as if it were local.

74

Types of Pipes

Anonymous and named pipes for interprocess communication.
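
As a sketch of an anonymous (ordinary) pipe on a POSIX system (an assumed example, not from the cards):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                              /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) return 1;

    if (fork() == 0) {                      /* child writes into the pipe */
        close(fd[0]);
        const char *msg = "hello through the pipe";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        return 0;
    }

    close(fd[1]);                           /* parent reads from the pipe */
    read(fd[0], buf, sizeof(buf));
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}
```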

75

Thread

Basic unit of CPU utilization: has its own ID, program counter, registers, and stack, and shares code, data, and OS resources with other threads of its process.

76

Concurrency in Single vs Multicore

Single core executes tasks sequentially; multicore concurrently.

77

Types of Parallelism

Data and task parallelism for performance improvement.

78

Amdahl's Law

Predicts the maximum speedup from adding cores, limited by the serial fraction of the program.
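
Concretely, speedup <= 1 / (S + (1 - S) / N), where S is the serial fraction and N the number of cores. For example (illustrative numbers), with S = 0.25 and N = 4: speedup <= 1 / (0.25 + 0.75 / 4) ≈ 2.29.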

79

User Threads

Managed by user-level libraries, not OS.

80

Kernel Threads

Managed directly by the operating system.

81

User vs Kernel Thread Relationships

Many-to-one, one-to-one, and many-to-many models (plus the two-level variation).

82

Asynchronous vs Synchronous Threading

Asynchronous allows non-blocking operations; synchronous waits.

83

Create/Join Strategy

Parent thread creates (forks) worker threads, then joins them, waiting for each to finish before continuing.
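
A minimal Pthreads sketch of the create/join pattern (an assumed example, not from the cards):

```c
#include <pthread.h>
#include <stdio.h>

/* Work done by the new thread */
static void *worker(void *arg) {
    int *n = arg;
    printf("worker running with arg %d\n", *n);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int arg = 42;

    pthread_create(&tid, NULL, worker, &arg);   /* create the thread */
    pthread_join(tid, NULL);                    /* join: wait for it to finish */
    printf("worker done\n");
    return 0;
}
```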

84

Implicit vs Explicit Threading

Implicit threading is automatic; explicit requires programmer control.

85

Methods for Implicit Threading

Task pools, thread pools, and futures.

86

Signal

Notification sent to a process to trigger action.

87

Asynchronous Signal

Delivered at any time; does not block execution.

88

Synchronous Signal

Delivered at a specific point in execution.

89

Default Signal Handler

Kernel-defined action taken when a signal is received.
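
For illustration (an assumed POSIX sketch), a process can override the default action for SIGINT with a user-defined handler:

```c
#include <signal.h>
#include <unistd.h>

/* User-defined handler replacing the default action for SIGINT */
static void handle_sigint(int sig) {
    (void)sig;
    write(STDOUT_FILENO, "caught SIGINT\n", 14);   /* async-signal-safe output */
}

int main(void) {
    signal(SIGINT, handle_sigint);   /* install the custom handler */
    for (;;)
        pause();                     /* wait for signals to arrive */
}
```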

90

Signal Delivery

Determined by the target process's signal handler.

91

Thread Cancellation

Terminating a thread before its completion.

92

Asynchronous Cancellation

Immediate thread termination regardless of state.

93

Deferred Cancellation

Thread checks for cancellation request before terminating.
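
A Pthreads sketch of deferred cancellation (an assumed example; deferred mode is the default, and sleep() is a cancellation point):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        sleep(1);            /* cancellation point: a pending cancel is honored here */
        printf("still working\n");
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(2);
    pthread_cancel(tid);     /* request cancellation (deferred by default) */
    pthread_join(tid, NULL);
    printf("worker cancelled\n");
    return 0;
}
```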

94

Lightweight Process

Intermediate structure between user and kernel threads; appears to the thread library as a virtual processor on which user threads can be scheduled.

95

Event Triggering Upcall

Notification to a process on event occurrence.

96

Nonpreemptive Scheduling

A process keeps the CPU until it terminates or voluntarily switches to the waiting state.

97

Preemptive Scheduling

Allows interruption of running processes for others.

98

Dispatcher Job

Performs the context switch, switches to user mode, and jumps to the proper location in the selected program to resume it.

99

Scheduling Algorithms

FCFS, SJF, and Round Robin for process management.

100

Average Wait Time Calculation

Total wait time divided by number of processes.
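
For example (illustrative numbers): under FCFS, if P1, P2, P3 arrive at time 0 with burst times 24, 3, and 3, their wait times are 0, 24, and 27, so the average wait time is (0 + 24 + 27) / 3 = 17.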
