An operating system (OS) is a program that serves as an intermediary between users and computer hardware.
Execute User Programs: Facilitates user problem-solving by executing programs.
Convenience: Makes the computer system easy to use.
Efficiency: Utilizes computer hardware efficiently.
Controls allocation and utilization of computing system resources among various users and tasks.
Provides a simplified interface between computer hardware and programmers, aiding in the coding and debugging of application programs.
File Management: Provides facilities to create and modify programs and data files using an editor.
Compiler Access: Access to compiler for translating user programs from high-level to machine language.
Loader Program: Facilitates moving compiled program code to computer memory for execution.
I/O Programming Routines: Manages details of input/output operations.
A computer system can be divided into four components:
Hardware: Basic computing resources (CPU, memory, I/O devices).
Operating System: Controls and coordinates hardware usage among applications and users.
Application Programs: Define resource utilization to solve user computing problems (e.g., word processors, compilers, web browsers, database systems, video games).
Users: People, machines, and other computers.
User: Interacts with application programs.
Application Programs: Examples include compilers, web browsers, development kits.
Operating System: Facilitates interaction between applications and hardware.
Computer Hardware: Comprises CPU, memory, and I/O devices.
Main Memory: The only large storage medium that the CPU can access directly.
Characteristics:
Random Access: Can access any memory location directly.
Volatility: Typically volatile, using Dynamic Random-Access Memory (DRAM).
Secondary Storage: Serves as an extension of main memory, offering large non-volatile storage capacity.
Hard Disk Drives (HDD): Features rigid platters with magnetic materials, logically divided into tracks and sectors. The disk controller manages interactions between the device and the computer.
Non-volatile Memory (NVM): Faster than HDDs and becoming popular as performance increases and costs drop.
Organized in a hierarchy based on:
Speed: Faster storage types are at the top.
Cost: Higher speed typically comes with higher costs.
Volatility: Ranges from volatile to non-volatile storage.
Caching: Temporarily copies information from slower storage into faster storage; main memory itself acts as a cache for secondary storage.
Device Drivers: Each device controller has a driver providing a uniform interface between it and the kernel.
Volatile Storage: Registers, cache, and main memory.
Non-volatile Storage: Includes non-volatile memory, secondary storage (HDDs, optical disks), and tertiary storage (magnetic tapes).
Provides a suite of functions helpful to users, including:
User interface
Program execution
I/O operations
File-system manipulation
Communications
Error detection
Ensures effective operation of the system via:
Resource allocation
Logging
Protection and security.
Graphical User Interface (GUI): Elements include:
Icons
Buttons
Menus
Windows
Pointers
A system call is a programmatic way for a program to request a service from the operating system kernel.
It allows programs to interact with the OS; programs typically request these services through an Application Programming Interface (API).
Executed in kernel mode and serves as an entry point into the kernel.
Sequence to copy file contents involves:
Acquire input file name.
Display prompt and accept input.
Open input file (abort if non-existent).
Create output file (abort if exists).
Loop to read from input and write to output until completion.
Close output file and notify the user.
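The copy sequence above can be sketched in Python; the file names and chunk size here are illustrative, and each step from the list appears as a comment:

```python
import os
import sys
import tempfile

def copy_file(src_path, dst_path, chunk_size=4096):
    """Copy src to dst in fixed-size chunks, mirroring the steps above."""
    try:
        src = open(src_path, "rb")      # open input file (abort if non-existent)
    except FileNotFoundError:
        sys.exit("input file does not exist")
    try:
        dst = open(dst_path, "xb")      # create output file (abort if it exists)
    except FileExistsError:
        src.close()
        sys.exit("output file already exists")
    with src, dst:
        while True:                     # loop: read from input, write to output
            chunk = src.read(chunk_size)
            if not chunk:
                break
            dst.write(chunk)
    print(f"copied {src_path} -> {dst_path}")   # notify the user

# Demonstrate with a throwaway directory:
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "in.txt")
dst = os.path.join(workdir, "out.txt")
with open(src, "wb") as f:
    f.write(b"hello world\n" * 100)
copy_file(src, dst)
with open(dst, "rb") as f:
    copied = f.read()
```

Each user-visible step (open, create, read/write loop, close) ultimately maps to a system call into the kernel.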
File System Operations: Managing file manipulation actions (creation, deletion).
Process Control: Managing processes (creation, termination).
Memory Management: Handling memory allocation and deallocation.
Interprocess Communication (IPC): Facilitating communication between processes.
Device Management: Interactions with peripheral devices and their management.
Provides a convenient environment for program development and execution, classified as follows:
File Manipulation: Managing files and directories.
Status Information: Obtaining system performance data.
Programming Language Support: Includes compilers and loaders.
Communications: Mechanisms to connect users and processes.
Background Services: Tasks that support system operations between boot and shutdown.
Utilize text editors to create and alter files with advanced functionalities.
Provision of compilers and debugging systems for program execution.
Allow messaging, web browsing, remote login, and file transfers.
Run continuously, providing essential services like disk checks and process scheduling.
Include various types (GUI, command line) serving different user interactions.
Encompass program execution, I/O operations, and error detection services.
Handling allocation and protection aspects while ensuring user access.
A process refers to a program in execution, managed by the operating system.
Text Section: Contains the program code.
Stack: Holds temporary data, parameters, return addresses, and local variables.
Data Section: Contains global variables.
Heap: Memory allocated during runtime.
Program (passive on disk) becomes a process (active in memory).
Execution initiated via GUI commands or command line.
A single program can yield multiple processes.
New: Being created.
Running: Instructions actively executed.
Waiting: Awaiting an event.
Ready: Waiting to be assigned to a processor.
Terminated: Finished execution.
Process State: Current state of the process (running, waiting).
Program Counter: Next instruction to execute.
Registers: Contents of CPU registers related to the process.
Memory Limits: Memory allocated for this process.
I/O Status Information: Devices allocated and list of open files.
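The PCB can be pictured as a record holding these fields. The dataclass below is an illustrative sketch only, not a real kernel structure; the field names follow the list above:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative process control block (not a real kernel struct)."""
    pid: int
    state: str = "new"              # new / ready / running / waiting / terminated
    program_counter: int = 0        # address of the next instruction to execute
    registers: dict = field(default_factory=dict)   # saved CPU register contents
    memory_limits: tuple = (0, 0)   # base and limit of the allocated region
    open_files: list = field(default_factory=list)  # I/O status information

pcb = PCB(pid=42)
pcb.state = "ready"                 # state transitions update this field
```

On a context switch, the kernel saves the running process's registers and program counter into its PCB and restores them from the next process's PCB.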
Scheduler selects processes for CPU execution, optimizing its utilization and ensuring swift task switching.
Parent processes can create child processes, managed with unique identifiers (PID).
Resource Sharing: Options vary from sharing all to shared subsets or none.
Execution Synchronization: Parents may wait for child processes or continue concurrently.
fork(): Creates new processes.
exec(): Replaces process memory space with a new program.
wait(): Parent waits for child termination.
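On a POSIX system these calls can be exercised from Python's os module. This is a sketch (os.fork is Unix-only), and the child's exit code 7 is an arbitrary illustration:

```python
import os

pid = os.fork()                  # fork(): duplicate the calling process
if pid == 0:
    # Child process. In a real program this is where exec() would replace
    # the memory image, e.g. os.execvp("ls", ["ls"]); here the child
    # simply terminates with an arbitrary status code.
    os._exit(7)
else:
    # Parent process: wait() blocks until the child terminates.
    _, status = os.waitpid(pid, 0)
    print("child exit code:", os.WEXITSTATUS(status))
```

Until the parent calls waitpid(), the terminated child lingers as a zombie; if the parent exited first, the child would be an orphan and be re-parented (on Linux, typically to init/systemd).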
Termination occurs when a process executes its last statement and invokes the exit() system call.
Parents can terminate children with abort(); if the parent exits, children may also be terminated.
A terminated process whose parent has not yet called wait() is a zombie; a process whose parent exited without waiting for it is an orphan.
Processes may be independent or cooperating (sharing data).
Reasons include information sharing, speed enhancement, and modular design.
A thread is a basic unit of CPU utilization comprising:
Thread ID
Program Counter
Register Set
Stack
Threads within the same process share code, data section, and OS resources (e.g., open files).
Processes vs. Threads: Traditional processes have one thread; multiple threads allow multitasking within a single process.
Responsiveness: Allows processes to continue executing even if part is blocked (important for UIs).
Resource Sharing: Easier resource sharing than with processes or message passing.
Economy: Creating a thread is less costly than creating a full process, and switching between threads has lower overhead than a process context switch.
Scalability: Better utilization of multicore architectures.
Data Parallelism: Operates on subsets of the same data spread across multiple cores.
Task Parallelism: Distributes unique tasks across threads, each performing distinct operations.
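A small sketch of data parallelism: two threads each sum their own slice of the same array, and the partial results are combined afterward. (In CPython the GIL prevents true parallel speedup for CPU-bound work, but the structure is the same with processes or in other languages.)

```python
import threading

data = list(range(100))
partials = [0, 0]                 # one slot per worker thread

def sum_slice(idx, lo, hi):
    # Each thread operates on its own subset of the shared data.
    partials[idx] = sum(data[lo:hi])

threads = [
    threading.Thread(target=sum_slice, args=(0, 0, 50)),
    threading.Thread(target=sum_slice, args=(1, 50, 100)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()                      # wait for both slices to finish

total = partials[0] + partials[1]   # combine the per-thread results
```

Task parallelism would instead give each thread a different function to run (e.g., one parsing input while another writes output).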
User Threads: Managed by user-level libraries (e.g., POSIX Pthreads, Windows threads).
Kernel Threads: Supported directly by the kernel (e.g., in Windows, Linux).
Many-to-One: Multiple user-level threads mapped to a single kernel thread; one blocking system call can block the entire process.
One-to-One: Each user thread corresponds to a kernel thread, providing greater concurrency, but creating a user thread requires creating a kernel thread, which limits how many can exist.
Many-to-Many: Multiplexes many user threads onto a smaller or equal number of kernel threads, combining flexibility with true concurrency.
CPU utilization is maximized through multiprogramming, with processes alternating between CPU execution and I/O wait.
Process execution consists of a cycle of CPU bursts followed by I/O bursts; managing these burst durations is crucial for efficient scheduling.
The CPU scheduler selects processes in the ready queue to allocate CPU time, making decisions based on state transitions.
Preemptive: Scheduling may also occur when a process switches from running to ready (e.g., on an interrupt) or from waiting to ready.
Non-Preemptive: Scheduling occurs only when a process terminates or switches from running to waiting.
Preemptive scheduling algorithms dominate OS designs (Windows, macOS, Linux, UNIX).
The dispatcher gives control of the CPU to the process selected by the CPU scheduler, involving:
Context Switching
Switching to user mode
Jumping to the proper location in the user program to resume it
Dispatch Latency: Time taken for the dispatcher to stop one process and start another.
CPU Utilization: Keeping CPU busy.
Throughput: Number of processes completed in a time unit.
Turnaround Time: Time from submission to completion.
Waiting Time: Time spent in the ready queue.
Response Time: Time from request to first response.
Preemptive Scheduling: A running process can be switched out, e.g., when another process becomes ready.
Non-Preemptive Scheduling: A switch happens only at termination or when a process moves to the waiting state.
A non-preemptive method handling processes in arrival order, similar to a line in a store.
Mechanics:
Processes are served in the order they arrive in the system.
The CPU executes each process to completion before moving to the next.
Continues until the queue is empty.
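The mechanics above can be simulated in a few lines; the process tuples (name, arrival, burst) are illustrative:

```python
def fcfs(processes):
    """FCFS simulation. processes: list of (name, arrival, burst),
    assumed sorted by arrival time.
    Returns {name: (waiting_time, turnaround_time)}."""
    time, result = 0, {}
    for name, arrival, burst in processes:
        time = max(time, arrival)       # CPU may sit idle until the process arrives
        waiting = time - arrival        # time spent in the ready queue
        time += burst                   # run to completion (non-preemptive)
        result[name] = (waiting, time - arrival)  # turnaround = completion - arrival
    return result

results = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)])
```

In this example P2 and P3 wait behind P1 even though P3 needs only one time unit, illustrating FCFS's "convoy" weakness that SJF addresses.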
Shortest Job First (SJF) selects the waiting process with the least execution time next.
It can be preemptive or non-preemptive, and it effectively reduces average waiting time.
Sort processes by arrival time.
After each completion, select the process with the minimum burst time from the ready pool.
Pool is dynamically adjusted throughout execution.
Shortest Job First minimizes average waiting time but may cause starvation.
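The non-preemptive SJF steps above can be sketched as follows; the example workload is illustrative:

```python
def sjf(processes):
    """Non-preemptive SJF. processes: list of (name, arrival, burst).
    Returns {name: waiting_time}."""
    pending = sorted(processes, key=lambda p: p[1])   # sort by arrival time
    time, waiting = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]  # dynamically adjusted pool
        if not ready:
            time = pending[0][1]        # CPU idle: jump ahead to the next arrival
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst first
        pending.remove((name, arrival, burst))
        waiting[name] = time - arrival
        time += burst                   # run the chosen process to completion
    return waiting

waits = sjf([("P1", 0, 6), ("P2", 1, 2), ("P3", 2, 8), ("P4", 3, 3)])
```

Note how P3, the longest job, is repeatedly passed over: with a steady stream of short jobs it could starve indefinitely.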
Shortest Remaining Time First (SRTF) is a preemptive version of SJF, prioritizing the process with the least remaining time.
Input details of all processes: arrival and burst time.
Keep track of remaining execution time in a dedicated array.
Initialize the current time and the structures that track completion metrics.
Monitor arrivals at each time unit and update the ready queue accordingly.
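A unit-by-unit SRTF simulation following the steps above (the workload is a common textbook-style example, used here for illustration):

```python
def srtf(processes):
    """Preemptive SJF (SRTF), simulated one time unit at a time.
    processes: list of (name, arrival, burst).
    Returns {name: completion_time}."""
    remaining = {name: burst for name, _, burst in processes}  # remaining-time array
    arrival = {name: arr for name, arr, _ in processes}
    time, done = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]   # arrivals so far
        if not ready:
            time += 1                    # nothing has arrived yet; CPU idles
            continue
        n = min(ready, key=lambda x: remaining[x])  # least remaining time wins
        remaining[n] -= 1                # run it for exactly one time unit
        time += 1
        if remaining[n] == 0:
            done[n] = time
            del remaining[n]
    return done

done = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
```

Here P1 is preempted at time 1 the moment the shorter P2 arrives, which is exactly what distinguishes SRTF from non-preemptive SJF.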
Distributes fixed CPU time slices (quantum) to processes in a queue, rotating through them to promote fairness.
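A minimal Round Robin sketch, assuming for simplicity that all processes arrive at time 0; the quantum and workload are illustrative:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin. processes: list of (name, burst), all arriving at time 0.
    Returns {name: completion_time}."""
    queue = deque(processes)            # FIFO ready queue
    time, done = 0, {}
    while queue:
        name, burst = queue.popleft()
        run = min(quantum, burst)       # run for one quantum, or less if finishing
        time += run
        if burst > run:
            queue.append((name, burst - run))   # rotate back with remaining burst
        else:
            done[name] = time
    return done

done = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
```

With a quantum of 4, the short jobs P2 and P3 finish early instead of waiting behind P1's long burst; a very small quantum, however, would waste time on context switches.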
Allocates CPU based on process priority; higher priority processes execute first, with equal priority handled by FCFS.
Factors determining priority could include memory and time requirements.
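Non-preemptive priority scheduling can be sketched with a stable sort, which automatically gives FCFS order among equal priorities; the workload and the lower-number-is-higher-priority convention are illustrative:

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling, all arriving at time 0.
    processes: list of (name, burst, priority); lower number = higher priority.
    Returns {name: waiting_time}."""
    # sorted() is stable, so equal priorities keep their list (arrival) order: FCFS.
    order = sorted(processes, key=lambda p: p[2])
    time, waiting = 0, {}
    for name, burst, _ in order:
        waiting[name] = time            # waits for everything scheduled before it
        time += burst
    return waiting

waits = priority_schedule([
    ("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2),
])
```

Like SJF, pure priority scheduling can starve low-priority processes; a common remedy is aging, gradually raising the priority of processes that have waited a long time.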
OS 101 - Midterms