OS 101 - Midterms

Page 1: What is an Operating System?

Definition

  • An operating system (OS) is a program that serves as an intermediary between users and computer hardware.

Goals of Operating System

  • Execute User Programs: Facilitates user problem-solving by executing programs.

  • Convenience: Makes the computer system easy to use.

  • Efficiency: Utilizes computer hardware efficiently.

Purposes of an Operating System

  • Controls allocation and utilization of computing system resources among various users and tasks.

  • Provides a simplified interface between computer hardware and programmers, aiding in the coding and debugging of application programs.


Page 2: Purposes and Tasks of Operating System

Tasks of an Operating System

  1. File Management: Provides facilities to create and modify programs and data files using an editor.

  2. Compiler Access: Provides access to a compiler for translating user programs from a high-level language into machine language.

  3. Loader Program: Facilitates moving compiled program code to computer memory for execution.

  4. I/O Programming Routines: Manages details of input/output operations.

Computer System Structure

  • Divided into four components:

    • Hardware: Basic computing resources (CPU, memory, I/O devices).

    • Operating System: Controls and coordinates hardware usage among applications and users.

    • Application Programs: Define resource utilization to solve user computing problems (e.g., word processors, compilers, web browsers, database systems, video games).

    • Users: People, machines, and other computers.


Page 3: Abstract View of Components

  • User: Interacts with application programs.

  • Application Programs: Examples include compilers, web browsers, development kits.

  • Operating System: Facilitates interaction between applications and hardware.

  • Computer Hardware: Comprises CPU, memory, and I/O devices.


Page 4: Storage Structure

Main Memory

  • The only large storage medium directly accessible by the CPU.

  • Characteristics:

    • Random Access: Can access any memory location directly.

    • Volatility: Typically volatile, using Dynamic Random-access Memory (DRAM).

Secondary Storage

  • Serves as an extension of main memory, offering large non-volatile storage capacity.

  • Hard Disk Drives (HDD): Features rigid platters with magnetic materials, logically divided into tracks and sectors. The disk controller manages interactions between the device and the computer.

  • Non-volatile Memory (NVM): Faster than HDDs and becoming popular as performance increases and costs drop.


Page 5: Storage Hierarchy

  • Organized in a hierarchy based on:

    • Speed: Faster storage types are at the top.

    • Cost: Higher speed typically comes with higher costs.

    • Volatility: Ranges from volatile to non-volatile storage.

Caching

  • Copies information into faster storage systems; main memory acts as a cache for secondary storage.

  • Device Drivers: Each device controller has a driver providing a uniform interface between it and the kernel.

Storage Levels

  • Volatile Storage: Registers, cache, and main memory.

  • Non-volatile Storage: Includes non-volatile memory, secondary storage (HDDs, optical disks), and tertiary storage (magnetic tapes).


Page 6: Operating System Services

User-Oriented Services

  • Provides a suite of functions helpful to users, including:

    • User interface

    • Program execution

    • I/O operations

    • File-system manipulation

    • Communications

    • Error detection

System Efficiency Services

  • Ensures effective operation of the system via:

    • Resource allocation

    • Logging

    • Protection and security.


Page 7: User Operating System Interface

  • Graphical User Interface (GUI): Elements include:

    • Icons

    • Buttons

    • Menus

    • Windows

    • Pointers


Page 8: System Calls

Definition

  • A system call is a programmatic way for a program to request a service from the operating system kernel.

  • It allows programs to interact with the OS and provides access to its services via the Application Program Interface (API).

Execution Context

  • Executed in kernel mode and serves as an entry point into the kernel.

Example of System Calls

  • The sequence to copy a file's contents involves the following steps (sketched in C after this list):

    1. Acquire input file name.

    2. Display prompt and accept input.

    3. Open input file (abort if non-existent).

    4. Create output file (abort if exists).

    5. Loop to read from input and write to output until completion.

    6. Close output file and notify the user.
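
  • A minimal sketch of this copy sequence in C using POSIX system calls (open, read, write, close); the file names "input.txt" and "output.txt" are placeholders:

    #include <fcntl.h>     /* open, O_* flags */
    #include <stdio.h>     /* perror */
    #include <stdlib.h>    /* exit */
    #include <unistd.h>    /* read, write, close */

    int main(void)
    {
        char buf[4096];
        ssize_t n;

        /* 1-3. Open the input file; abort if it does not exist. */
        int in = open("input.txt", O_RDONLY);
        if (in < 0) { perror("open input"); exit(1); }

        /* 4. Create the output file; O_EXCL makes the call fail if it already exists. */
        int out = open("output.txt", O_WRONLY | O_CREAT | O_EXCL, 0644);
        if (out < 0) { perror("create output"); exit(1); }

        /* 5. Loop: read from the input and write to the output until end of file. */
        while ((n = read(in, buf, sizeof buf)) > 0)
            write(out, buf, n);

        /* 6. Close both files and notify the user. */
        close(in);
        close(out);
        write(STDOUT_FILENO, "copy complete\n", 14);
        return 0;
    }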


Page 9: Types of System Calls

  1. File System Operations: Managing file manipulation actions (creation, deletion).

  2. Process Control: Managing processes (creation, termination).

  3. Memory Management: Handling memory allocation and deallocation.

  4. Interprocess Communication (IPC): Facilitating communication between processes.

  5. Device Management: Interactions with peripheral devices and their management.


Page 10: System Services

System Program Functionality

  • Provides a convenient environment for program development and execution, classified as follows:

    • File Manipulation: Managing files and directories.

    • Status Information: Obtaining system performance data.

    • Programming Language Support: Includes compilers and loaders.

    • Communications: Mechanisms to connect users and processes.

    • Background Services: Tasks that support system operations between boot and shutdown.


Page 11: Further System Services

File Modification Services

  • Utilize text editors to create and alter files with advanced functionalities.

Programming Language Support

  • Provision of compilers and debugging systems for program execution.

Communication Mechanisms

  • Allow messaging, web browsing, remote login, and file transfers.

Background Services

  • Run continuously, providing essential services like disk checks and process scheduling.


Page 12: View of Operating System Services

User Interfaces

  • Include various types (GUI, command line) serving different user interactions.

Operating System Functions

  • Encompass program execution, I/O operations, and error detection services.

Resource Management

  • Handling allocation and protection aspects while ensuring user access.


Page 13: What is a Process?

Definition

  • A process refers to a program in execution, managed by the operating system.

Process Components

  • Text Section: Contains the program code.

  • Stack: Holds temporary data, parameters, return addresses, and local variables.

  • Data Section: Contains global variables.

  • Heap: Memory dynamically allocated at runtime (the C sketch below shows where each component lives).
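
  • A small illustrative C program (variable names are hypothetical) showing where each component lives:

    #include <stdlib.h>

    int counter = 0;                 /* data section: global variable */

    int square(int x)                /* text section: program code */
    {
        int result = x * x;          /* stack: parameters and local variables */
        return result;
    }

    int main(void)
    {
        int *buffer = malloc(100 * sizeof(int));  /* heap: memory allocated at runtime */
        counter = square(5);
        free(buffer);
        return 0;
    }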


Page 14: Process Details

Characteristics

  • Program (passive on disk) becomes a process (active in memory).

  • Execution initiated via GUI commands or command line.

  • A single program can yield multiple processes.

Process States

  • New: Being created.

  • Running: Instructions actively executed.

  • Waiting: Awaiting an event.

  • Ready: Waiting to be assigned to a processor.

  • Terminated: Finished execution.


Page 15: Process Control Block (PCB)

Information Stored in PCB

  • Process State: Current state of the process (running, waiting).

  • Program Counter: Next instruction to execute.

  • Registers: Contents of CPU registers related to the process.

  • Memory Limits: Memory allocated for this process.

  • I/O Status Information: Devices allocated to the process and its list of open files (see the struct sketch below).
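
  • A simplified sketch of a PCB as a C struct; real kernels (e.g., Linux's task_struct) keep far more fields, and these names and sizes are illustrative assumptions:

    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;                 /* process identifier */
        enum proc_state state;               /* current process state */
        uint64_t        program_counter;     /* next instruction to execute */
        uint64_t        registers[16];       /* saved CPU register contents */
        uint64_t        mem_base, mem_limit; /* memory limits allocated to the process */
        int             open_files[16];      /* I/O status: open file descriptors */
        struct pcb     *next;                /* link for the ready/waiting queues */
    };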

Process Scheduling

  • The scheduler selects processes from the ready queue for CPU execution, maximizing CPU utilization and enabling fast switching between tasks.


Page 16: Process Operations

Process Creation

  • A parent process can create child processes, each identified by a unique process identifier (PID).

  • Resource Sharing: Options vary from sharing all to shared subsets or none.

  • Execution Synchronization: Parents may wait for child processes or continue concurrently.

UNIX System Calls

  • fork(): Creates new processes.

  • exec(): Replaces process memory space with a new program.

  • wait(): Parent waits for the child to terminate (see the sketch after this list).
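
  • A minimal sketch of the fork()/exec()/wait() pattern on UNIX; the program passed to exec ("/bin/ls") is only an example:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();            /* create a child process */

        if (pid < 0) {                 /* fork failed */
            perror("fork");
            exit(1);
        } else if (pid == 0) {         /* child: replace its memory image with a new program */
            execlp("/bin/ls", "ls", (char *)NULL);
            perror("exec");            /* reached only if exec fails */
            exit(1);
        } else {                       /* parent: wait for the child to terminate */
            wait(NULL);
            printf("child %d finished\n", (int)pid);
        }
        return 0;
    }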


Page 17: Process Termination

  • Termination occurs when a process executes its last statement and invokes the exit() system call.

  • Parents can terminate children with abort(); if the parent exits, children may also be terminated.

  • A terminated process whose parent has not yet called wait() is a zombie; if the parent has exited without waiting, the child becomes an orphan.

Interprocess Communication (IPC)

  • Processes may be independent or cooperating (sharing data).

  • Reasons include information sharing, speed enhancement, and modular design.


Page 18: What is a Thread?

Definition

  • A thread is a basic unit of CPU utilization comprising:

    • Thread ID

    • Program Counter

    • Register Set

    • Stack

Shared Resources

  • Threads within the same process share code, data section, and OS resources (e.g., open files).

  • Processes vs. Threads: A traditional (heavyweight) process has a single thread of control; multiple threads let one process perform several tasks at once.


Page 19: Benefits of Threads

Advantages

  • Responsiveness: Allows processes to continue executing even if part is blocked (important for UIs).

  • Resource Sharing: Easier resource sharing than with processes or message passing.

  • Economy: Creating a thread is cheaper than creating a full process, and thread switching has lower overhead than process context switching.

  • Scalability: Better utilization of multicore architectures.


Page 20: Types of Parallelism

  1. Data Parallelism: Operates on subsets of the same data spread across multiple cores.

  2. Task Parallelism: Distributes unique tasks across threads, each performing distinct operations.

User and Kernel Threads

  • User Threads: Managed by user-level thread libraries (e.g., POSIX Pthreads, Windows threads); see the Pthreads sketch below.

  • Kernel Threads: Supported directly by the kernel (e.g., in Windows, Linux).
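
  • A minimal POSIX Pthreads sketch of creating and joining one user thread (compile with -pthread); the worker function is hypothetical:

    #include <pthread.h>
    #include <stdio.h>

    /* Thread function: each new thread starts executing here. */
    static void *worker(void *arg)
    {
        int id = *(int *)arg;
        printf("thread %d running\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        int id = 1;

        /* Create a thread that shares this process's code, data, and open files. */
        pthread_create(&tid, NULL, worker, &id);

        /* Wait for the thread to finish, similar in spirit to wait() for processes. */
        pthread_join(tid, NULL);
        return 0;
    }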


Page 21: Thread Models

Many-to-One

  • Multiple user-level threads are mapped to a single kernel thread; if one thread makes a blocking call, the entire process blocks.

One-to-One

  • Each user thread corresponds to a kernel thread, providing greater concurrency, but creating a kernel thread for every user thread adds overhead and can limit the number of threads.

Two-Level Model

  • Multiplexes many user threads onto kernel threads while also allowing a user thread to be bound to a specific kernel thread.


Page 22: Basic Concepts

CPU Utilization

  • Maximized through multiprogramming, with processes alternating between execution and I/O wait.

Burst Cycle

  • Consists of CPU burst followed by I/O burst; managing these durations is crucial for efficient scheduling.


Page 23: CPU Scheduler

Definition

  • The CPU scheduler selects processes in the ready queue to allocate CPU time, making decisions based on state transitions.

Preemptive vs Non-Preemptive Scheduling

  • Preemptive: The CPU can be taken away from a running process, for example when it switches from running to ready (on an interrupt) or when a process moves from waiting to ready.

  • Non-Preemptive: A process keeps the CPU until it terminates or switches to the waiting state.

Use in Modern Operating Systems

  • Preemptive scheduling algorithms dominate OS designs (Windows, macOS, Linux, UNIX).


Page 24: Dispatcher

Functionality

  • Dispatcher gives CPU control to selected processes, involving:

    • Context Switching

    • Switching to user mode

    • Jumping to the proper location in the user program to resume execution

Dispatch Latency

  • Time taken for one process to stop and another to start.

Scheduling Criteria

  • CPU Utilization: Keeping CPU busy.

  • Throughput: Number of processes completed in a time unit.

  • Turnaround Time: Time from submission to completion.

  • Waiting Time: Time spent in the ready queue.

  • Response Time: Time from request to first response.


Page 25: CPU Scheduling Algorithms

Types of Scheduling Methods

  • Preemptive Scheduling: A running process can be switched out before it finishes, for example when it moves from running to ready or when a waiting process becomes ready.

  • Non-Preemptive Scheduling: Scheduling decisions occur only when a process terminates or switches to the waiting state.

First Come, First Serve (FCFS) Scheduling

  • A non-preemptive method handling processes in arrival order, similar to a line in a store.

  • Mechanics:

    1. Processes enter the ready queue in the order they arrive.

    2. The CPU runs each process to completion before moving to the next.

    3. This continues until the queue is empty (a waiting-time sketch follows this list).
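
  • A short sketch computing average waiting and turnaround time under FCFS, assuming all processes arrive at time 0 (the burst times are made-up values):

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical processes already in arrival order, with CPU burst times. */
        int burst[] = {24, 3, 3};
        int n = sizeof burst / sizeof burst[0];
        int wait = 0, total_wait = 0, total_turnaround = 0;

        for (int i = 0; i < n; i++) {
            total_wait += wait;                  /* each process waits for all earlier ones */
            total_turnaround += wait + burst[i];
            wait += burst[i];
        }

        printf("average waiting time:    %.2f\n", (double)total_wait / n);
        printf("average turnaround time: %.2f\n", (double)total_turnaround / n);
        return 0;
    }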


Page 26: Shortest Job First (SJF) Scheduling

Definition

  • Shortest Job First (SJF) selects the waiting process with the least execution time next.

  • It can be preemptive or non-preemptive and effectively reduces average waiting time.

Implementation Steps

  1. Sort processes by arrival time.

  2. After each process completes, select the process with the smallest burst time from the ready pool.

  3. The ready pool is updated dynamically as new processes arrive.

Characteristics

  • Shortest Job First minimizes average waiting time but may cause starvation of long processes.
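
  • A sketch of non-preemptive SJF assuming every process arrives at time 0 (burst times are made up): sort by burst time, then run in that order and accumulate waiting time.

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_burst(const void *a, const void *b)
    {
        return *(const int *)a - *(const int *)b;
    }

    int main(void)
    {
        int burst[] = {6, 8, 7, 3};               /* hypothetical burst times */
        int n = sizeof burst / sizeof burst[0];

        /* SJF: schedule the shortest job first, so order jobs by burst time. */
        qsort(burst, n, sizeof burst[0], cmp_burst);

        int wait = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += wait;                   /* waiting time = finish time of earlier jobs */
            wait += burst[i];
        }

        printf("average waiting time: %.2f\n", (double)total_wait / n);
        return 0;
    }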


Page 27: Shortest Remaining Time First (SRTF)

Definition

  • Shortest Remaining Time First (SRTF) is a preemptive version of SJF, prioritizing the process with the least remaining time.

Steps to Implement SRTF Algorithm

  1. Input details of all processes: arrival and burst time.

  2. Keep track of remaining execution time in a dedicated array.

  3. Initialize the current time and the variables that track completion metrics (completion, turnaround, and waiting times).

  4. Monitor arrivals at each time unit and update the ready queue accordingly.

Round Robin Scheduling

  • Distributes fixed CPU time slices (quantum) to processes in a queue, rotating through them to promote fairness.
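
  • A sketch of Round Robin with a quantum of 4 time units, assuming all processes arrive at time 0 (burst times are made up): each pass gives every unfinished process at most one quantum.

    #include <stdio.h>

    int main(void)
    {
        int remaining[] = {24, 3, 3};            /* hypothetical remaining burst times */
        int n = sizeof remaining / sizeof remaining[0];
        int quantum = 4, time = 0, done = 0;

        /* Rotate through the queue, giving each process at most one quantum per turn. */
        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0)
                    continue;
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                time += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0) {
                    done++;
                    printf("P%d finishes at time %d\n", i + 1, time);
                }
            }
        }
        return 0;
    }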


Priority Scheduling

Overview

  • Allocates CPU based on process priority; higher priority processes execute first, with equal priority handled by FCFS.

  • Factors determining priority could include memory and time requirements.