CSIT-345 Comprehensive Midterm Study Guide


103 Terms

1
New cards

Operating System

Software managing computer hardware and software resources.

2
New cards

Goals of Operating System

Efficiency, user convenience, and resource management.

3
New cards

Components of Computer System

Hardware, operating system, application programs, and users.

4
New cards

User View vs System View

User view focuses on convenience and ease of use; system view treats the OS as a resource allocator and control program.

5
New cards

Kernel

Core component of an operating system managing resources.

6
New cards

Device Controller

Manages device operations and communicates with the CPU.

7
New cards

Interrupt

Signal indicating an event requiring immediate attention.

8
New cards

Device Driver

Software enabling communication between OS and hardware devices.

9
New cards

Interrupt Service Routine (ISR)

Code executed in response to an interrupt.

10
New cards

Interrupt Vector

Table mapping interrupts to their corresponding ISRs.

11
New cards

Features of Interrupt-Driven OS

The OS waits for events; nearly all activity is triggered by hardware interrupts or by software traps (exceptions and system calls).

12
New cards

Nonmaskable Interrupt

Cannot be disabled or delayed by the CPU; reserved for critical events such as unrecoverable hardware errors.

13
New cards

Maskable Interrupt

Can be ignored or delayed by the CPU.

14
New cards

Caching

Storing frequently accessed data for faster retrieval.

15
New cards

Direct Memory Access (DMA)

Allows devices to access memory without CPU intervention.

16
New cards

CPU Core

Basic processing unit of a CPU executing instructions.

17
New cards

2 Types of Multiprocessor Systems

Symmetric multiprocessing (every processor performs all tasks) and asymmetric multiprocessing (a boss processor assigns work to worker processors).

18
New cards

Advantages of Multicore Design

Improved performance and energy efficiency in processing.

19
New cards

Graceful Degradation

System continues providing service at a level proportional to the surviving hardware after component failures.

20
New cards

Fault Tolerance

Ability to continue operation despite faults.

21
New cards

Asymmetric Cluster

One node runs the applications while another sits in hot-standby mode, ready to take over if it fails.

22
New cards

Symmetric Cluster

Two or more nodes all run applications and monitor one another.

23
New cards

Mode Bit

Hardware bit distinguishing kernel mode (0) from user mode (1).

24
New cards

Program vs Process

Program is static; process is dynamic execution instance.

25
New cards

Program Counter

Register tracking the next instruction to execute.

26
New cards

Single vs Multithreaded Process

Single requires one program counter; multithreaded needs multiple.

27
New cards

Data Migration

Transferring data between storage locations or systems.

28
New cards

Cache Coherency

Ensures that all cached copies of a data item (for example, in the caches of different CPUs) stay consistent with one another.

29
New cards

Protection vs Security

Protection controls the access of processes and users to system resources; security defends the system against internal and external attacks.

30
New cards

User and Group Control Methods

User IDs and group IDs associated with processes and files, combined with permissions and access control lists, determine what each user or group may access.

31
New cards

Operating System Services

File management, process management, and device management.

32
New cards

I/O Operations vs File System Manipulation

I/O involves data transfer; file manipulation involves data management.

33
New cards

Resource Allocation

Assigning resources like CPU, memory, and I/O to processes.

34
New cards

System Call

Request for service from the operating system.

35
New cards

System Call Sequence

The ordered set of system calls needed to carry out a task; e.g., copying a file means opening the source, creating the destination, reading and writing in a loop, then closing both files.
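
A minimal C sketch of that file-copy sequence, assuming POSIX system calls (open, read, write, close); the filenames src.txt and dst.txt are placeholders, and error handling is kept to a bare minimum.

```c
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>

int main(void) {
    char buf[4096];
    ssize_t n;

    int in  = open("src.txt", O_RDONLY);                           /* acquire input file  */
    int out = open("dst.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644); /* create output file  */
    if (in < 0 || out < 0)
        exit(1);                                                   /* abort on error      */

    while ((n = read(in, buf, sizeof buf)) > 0)                    /* read a block...     */
        write(out, buf, (size_t)n);                                /* ...write it out     */

    close(in);                                                     /* release both files  */
    close(out);
    return 0;                                                      /* normal termination  */
}
```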

36
New cards

System Call Interface

Gateway for user programs to request OS services.

37
New cards

Parameter Passing Methods

Registers, a block (table) in memory whose address is passed in a register, or the stack; the block and stack methods do not limit the number or length of parameters.

38
New cards

printf() System Call

A C library function rather than a system call itself; the library formats and buffers the text, then invokes the write() system call to produce the output.
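
A small illustration, assuming a typical POSIX system: the first line goes through the C library (which eventually calls write() on the program's behalf), while the second issues the write() system call directly on file descriptor 1 (standard output).

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* Library call: formats and buffers the text, then issues write(). */
    printf("hello from printf\n");

    /* Direct system call: bypasses the C library's buffering. */
    const char *msg = "hello from write\n";
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}
```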

39
New cards

Single Task System

Handles one task at a time.

40
New cards

Multitask System

Handles multiple tasks concurrently.

41
New cards

Object Files

Compiled code ready for linking into an executable.

42
New cards

Linker

Combines object files into a single executable.

43
New cards

Loader

Loads executable files into memory for execution.

44
New cards

Relocation

Adjusting addresses in code for memory placement.

45
New cards

Dynamically Linked Libraries

Shared libraries loaded at runtime for efficiency.

46
New cards

OS Development Requirements

Divided into user goals (convenient, easy to learn and use, reliable, fast) and system goals (easy to design, implement, and maintain; flexible and efficient).

47
New cards

OS Design Principle

Separate policy (what will be done) from mechanism (how to do it); simplicity and modularity enhance maintainability.

48
New cards

OS Structures

Monolithic, microkernel, layered, and modular designs.

49
New cards

Modules

Self-contained components improving code organization.

50
New cards

Process Memory Sections

Divided into text, data, heap, and stack.
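
A short C sketch mapping typical variables to these sections; the variable names are illustrative, and exact placement can vary by compiler and platform.

```c
#include <stdio.h>
#include <stdlib.h>

int global_counter = 42;                     /* data section: initialized globals     */

int main(void) {                             /* text section: the compiled code       */
    int local = 7;                           /* stack: locals and function arguments  */
    int *dynamic = malloc(sizeof *dynamic);  /* heap: memory allocated at run time    */

    *dynamic = local + global_counter;
    printf("%d\n", *dynamic);
    free(dynamic);
    return 0;
}
```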

51
New cards

Activation Record

Data structure storing function call information on stack.

52
New cards

Process States

New, ready, running, waiting, terminated.

53
New cards

Process Control Block (PCB)

Data structure storing process information and state.
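
A hypothetical C struct sketching the kind of fields a PCB holds; the field names are illustrative only and not taken from any particular kernel (Linux's real equivalent, task_struct, contains far more).

```c
#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Hypothetical PCB layout for illustration. */
struct pcb {
    int             pid;             /* process identifier                 */
    enum proc_state state;           /* current process state              */
    unsigned long   program_counter; /* saved PC of the next instruction   */
    unsigned long   registers[16];   /* saved CPU register contents        */
    int             priority;        /* CPU-scheduling information         */
    void           *page_table;      /* memory-management information      */
    int             open_files[16];  /* I/O status: open file descriptors  */
    unsigned long   cpu_time_used;   /* accounting information             */
};

int main(void) {
    struct pcb p = { .pid = 1, .state = READY, .priority = 10 };
    printf("pid=%d state=%d priority=%d\n", p.pid, p.state, p.priority);
    return 0;
}
```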

54
New cards

2 Process Scheduling Objectives

Keep the CPU as busy as possible (maximize utilization) and switch processes onto the CPU frequently enough that users can interact with each running program.

55
New cards

Process Scheduler

Allocates CPU to processes based on scheduling algorithms.

56
New cards

Scheduling Queues

Ready queue (processes waiting for the CPU) and wait queues (processes waiting for an event such as I/O completion).

57
New cards

Degree of Multiprogramming

Number of processes in memory simultaneously.

58
New cards

CPU Scheduler

Selects which process to execute next.

59
New cards

I/O Bound Process

Spends more time waiting for I/O than computing.

60
New cards

CPU-Bound Process

Spends more time computing than waiting for I/O.

61
New cards

Context Switch

Switching the CPU from one process to another by saving the state of the current process and loading the saved state of the next.

62
New cards

Process Representation

Processes represented by PCB in the system.

63
New cards

Resource Allocation for Process

Resources allocated during process creation.

64
New cards

Parent Process Termination

Must release resources and terminate child processes.
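
A minimal POSIX sketch of the parent/child relationship: the parent creates a child with fork() and waits for it with waitpid(), collecting the exit status so the child's remaining resources are reclaimed.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* create a child process            */

    if (pid < 0) {
        exit(1);                     /* fork failed                       */
    } else if (pid == 0) {
        printf("child %d running\n", getpid());
        exit(0);                     /* child terminates                  */
    } else {
        int status;
        waitpid(pid, &status, 0);    /* parent waits; the terminated
                                        child is reaped (no zombie)       */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}
```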

65
New cards

Independent Process

Operates without relying on other processes.

66
New cards

Cooperating Process

Can affect or be affected by other processes, typically through shared data, and therefore needs synchronization.

67
New cards

Process Cooperation Reasons

Information sharing, computation speedup, modularity, and convenience.

68
New cards

Interprocess Communication Models

Message passing and shared memory for data exchange.

69
New cards

Synchronization Issues

Race conditions and deadlocks in shared memory.

70
New cards

Producer-Consumer Paradigm

Model for managing shared resources between processes.
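
A compact bounded-buffer sketch using POSIX threads, one producer and one consumer sharing a circular buffer; the buffer size and item count are arbitrary illustrative values.

```c
#include <pthread.h>
#include <stdio.h>

#define BUF_SIZE 8
#define ITEMS    32

static int buffer[BUF_SIZE];
static int count = 0, in = 0, out = 0;
static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        pthread_mutex_lock(&lock);
        while (count == BUF_SIZE)                /* buffer full: wait    */
            pthread_cond_wait(&not_full, &lock);
        buffer[in] = i;                          /* produce an item      */
        in = (in + 1) % BUF_SIZE;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        pthread_mutex_lock(&lock);
        while (count == 0)                       /* buffer empty: wait   */
            pthread_cond_wait(&not_empty, &lock);
        int item = buffer[out];                  /* consume an item      */
        out = (out + 1) % BUF_SIZE;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```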

71
New cards

Indirect Communication Method

Messages are sent to and received from mailboxes (ports) rather than addressed directly to a named process.

72
New cards

Blocking vs Nonblocking Message Passing

A blocking (synchronous) send or receive waits until the message is delivered or available; a nonblocking (asynchronous) call returns immediately.

73
New cards

Remote Procedure Calls

Enables execution of code on remote systems.

74
New cards

Types of Pipes

Ordinary (anonymous) pipes, used between related processes, and named pipes (FIFOs), which any process can open.
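
A minimal sketch of an anonymous pipe on a POSIX system: the parent writes into one end and the child reads from the other; the message text is arbitrary.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char buf[64];

    pipe(fd);                               /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {                      /* child: reader                       */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child read: %s\n", buf);
        close(fd[0]);
    } else {                                /* parent: writer                      */
        close(fd[0]);
        const char *msg = "hello through the pipe";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                         /* reap the child                      */
    }
    return 0;
}
```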

75
New cards

Thread

Basic unit of CPU utilization: it has its own thread ID, program counter, register set, and stack, but shares code, data, and open files with the other threads of its process.

76
New cards

Concurrency in Single vs Multicore

On a single core, threads are interleaved over time (concurrency without parallelism); on multiple cores they can run simultaneously (true parallelism).

77
New cards

Types of Parallelism

Data parallelism distributes subsets of the same data across cores; task parallelism distributes distinct tasks or threads across cores.

78
New cards

Amdahl's Law

Bounds the speedup from adding cores: speedup ≤ 1 / (S + (1 − S)/N), where S is the serial (non-parallelizable) fraction of the program and N is the number of processing cores.
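
Illustrative worked example (assumed numbers): if S = 0.25 of a program is serial and N = 4 cores are used, speedup ≤ 1 / (0.25 + 0.75/4) ≈ 2.29; no matter how many cores are added, the speedup can never exceed 1/S = 4.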

79
New cards

User Threads

Managed by user-level libraries, not OS.

80
New cards

Kernel Threads

Managed directly by the operating system.

81
New cards

User vs Kernel Thread Relationships

Many-to-one, one-to-one, and many-to-many models (plus the two-level variant).

82
New cards

Asynchronous vs Synchronous Threading

Asynchronous allows non-blocking operations; synchronous waits.

83
New cards

Create/Join Strategy

The parent thread creates one or more child threads and then joins (waits for) them before continuing; a common fork-join pattern.
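
A minimal pthread sketch of the create/join pattern: the main thread creates a worker and then blocks in pthread_join() until it finishes; the worker function and its argument value are illustrative.

```c
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    int n = *(int *)arg;
    printf("worker processing %d\n", n);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int value = 42;                             /* illustrative argument       */

    pthread_create(&tid, NULL, worker, &value); /* create: start the worker    */
    pthread_join(tid, NULL);                    /* join: wait for completion   */

    printf("worker finished\n");
    return 0;
}
```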

84
New cards

Implicit vs Explicit Threading

Implicit threading is automatic; explicit requires programmer control.

85
New cards

Methods for Implicit Threading

Task pools, thread pools, and futures.

86
New cards

Signal

Notification sent to a process to trigger action.

87
New cards

Asynchronous Signal

Generated by an event external to the running process (e.g., Ctrl-C or a timer expiring); it can arrive at any point in execution.

88
New cards

Synchronous Signal

Delivered to the same process whose operation caused it, e.g., an illegal memory access or division by zero.

89
New cards

Default Signal Handler

Kernel-defined action taken when a signal is received.
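
A small POSIX sketch of replacing the default action with a user-defined handler; SIGINT is chosen for illustration, so its default (terminate the process) is overridden by a handler that just records the event.

```c
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int signo) {
    (void)signo;
    got_signal = 1;                 /* only async-signal-safe work here   */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handler;
    sigaction(SIGINT, &sa, NULL);   /* override the default SIGINT action */

    printf("press Ctrl-C once to trigger the handler\n");
    while (!got_signal)
        pause();                    /* wait for a signal to arrive        */

    printf("caught SIGINT, exiting cleanly\n");
    return 0;
}
```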

90
New cards

Signal Delivery

A delivered signal is handled either by the kernel's default handler or by a user-defined handler that overrides it; in a multithreaded process it may go to the thread that caused it, to every thread, or to a designated thread.

91
New cards

Thread Cancellation

Terminating a thread before its completion.

92
New cards

Asynchronous Cancellation

Immediate thread termination regardless of state.

93
New cards

Deferred Cancellation

Thread checks for cancellation request before terminating.
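
A pthread sketch of deferred cancellation: pthread_cancel() only requests termination, and the worker actually terminates when it reaches a cancellation point such as pthread_testcancel().

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg) {
    (void)arg;
    unsigned long units = 0;
    for (;;) {
        units++;                /* ...one unit of work...                */
        pthread_testcancel();   /* cancellation point: a pending request
                                   takes effect here                     */
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    sleep(1);                   /* let the worker run briefly            */
    pthread_cancel(tid);        /* request (deferred) cancellation       */
    pthread_join(tid, NULL);    /* wait until the worker actually exits  */

    printf("worker cancelled\n");
    return 0;
}
```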

94
New cards

Lightweight Process

Intermediate structure between user and kernel threads: it appears to the thread library as a virtual processor on which user threads can be scheduled, and each LWP is attached to a kernel thread.

95
New cards

Event Triggering Upcall

The kernel notifies the thread library of an event (e.g., a thread about to block) by making an upcall, which an upcall handler processes on a virtual processor (as in scheduler activations).

96
New cards

Nonpreemptive Scheduling

The running process keeps the CPU until it terminates or switches to the waiting state.

97
New cards

Preemptive Scheduling

Allows interruption of running processes for others.

98
New cards

Dispatcher Job

Gives control of the CPU to the process selected by the scheduler: performs the context switch, switches to user mode, and jumps to the proper location in the user program to resume it.

99
New cards

Scheduling Algorithms

FCFS, SJF, and Round Robin for process management.

100
New cards

Average Wait Time Calculation

Total wait time divided by number of processes.
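
Illustrative worked example (assumed burst times): under FCFS, if P1, P2, and P3 arrive in that order with CPU bursts of 24, 3, and 3 ms, their waiting times are 0, 24, and 27 ms, so the average wait time is (0 + 24 + 27) / 3 = 17 ms.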