Program Counter (PC)
A CPU register that holds the address of the next instruction to execute.
Stack Pointer (SP)
Points to the top of the process's stack, indicating where to find local variables and return addresses.
PSW/FLAGS Register
Status indicators for the CPU, showing results of last operations, like zero flag and carry flag.
Pseudoparallelism
The illusion of parallel execution on a single CPU, created by rapidly switching between tasks so they appear to run concurrently.
Multiprogramming
Running multiple programs in memory by switching between them to maximize CPU usage.
Copy-on-Write
A memory management technique where memory is shared until a write operation occurs, then a copy is made.
Context Switch
The process of saving the state of a running process and loading the state of another process.
Critical Region (Critical Section)
Code that accesses shared resources, which must not be executed by more than one process at a time.
Mutual Exclusion
Ensures that only one process can enter a critical region of code at a time.
Race Conditions
Situations where the result depends on the relative timing or interleaving of concurrent operations, often producing unpredictable results.
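A minimal sketch of a race, assuming POSIX threads (not part of the original deck): two threads increment a shared counter with no synchronization, so updates can be lost depending on how the increments interleave.

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;            /* shared, unprotected */

static void *bump(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        counter++;                  /* read-modify-write: not atomic */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 2000000, but lost updates typically make it smaller. */
    printf("counter = %ld\n", counter);
    return 0;
}
```

Compile with -pthread; the final count typically comes out below 2,000,000 because increments interleave and overwrite each other.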
Spin Lock
A locking mechanism where a process repeatedly checks if a lock is available, wasting CPU resources.
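A minimal spin-lock sketch using C11 atomics (illustrative, not Xinu's implementation): the waiter loops on an atomic test-and-set, burning CPU until the flag is cleared.

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

/* Busy-wait until the flag is acquired; consumes CPU the whole time. */
static void spin_lock(void)
{
    while (atomic_flag_test_and_set(&lock)) {
        ;   /* spin */
    }
}

static void spin_unlock(void)
{
    atomic_flag_clear(&lock);
}

int main(void)
{
    spin_lock();
    /* ... critical region: only one thread at a time gets here ... */
    spin_unlock();
    return 0;
}
```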
Semaphore
A synchronization primitive that maintains a count of available resources and manages access to those resources.
Kernel
The core component of an operating system that manages hardware and software interactions.
Jobs of the Kernel
Includes process management, memory management, device management, file system control, CPU scheduling, and handling system calls.
Process vs Program
A program is a static set of instructions; a process is a program in execution with its current state.
What is a Thread?
A lightweight unit of execution within a process, sharing the process's address space and resources but having its own stack, registers, and program counter.
Process State: READY
Indicates a process is ready to be executed by the CPU.
Process State: RUNNING
Indicates a process is currently being executed by the CPU.
Process State: BLOCKED
Indicates a process cannot continue until a certain condition is met.
Transition: READY to RUNNING
Occurs when the scheduler selects a process to execute.
Transition: RUNNING to READY
Happens when a process's time slice expires or a higher priority process arrives.
Transition: RUNNING to BLOCKED
Occurs when a process needs resources that are currently unavailable.
Transition: BLOCKED to READY
Happens when the resource a blocked process was waiting for becomes available.
RUNNING to READY vs RUNNING to BLOCKED
RUNNING to READY indicates a preemption, while RUNNING to BLOCKED indicates resource unavailability.
Context Switch - 6 Steps
Save the running process's registers and stack pointer into its process-table entry; update its state (READY or BLOCKED) and move it to the appropriate queue; select the next process from the ready list; mark that process RUNNING and update currpid; restore its saved registers and stack pointer; resume its execution where it left off.
Problems from Bad-Timed Context Switches
Can lead to race conditions, data corruption, and inconsistent program states.
Process Table (proctab)
An array structure for storing process information, indexed by Process ID.
Accessing Current Running Process
Use proctab[currpid], where currpid is the ID of the currently running process.
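A simplified model of the idea; the field names only approximate Xinu's real struct procent.

```c
#include <stdio.h>

#define NPROC 8                    /* table size; Xinu's is configurable */

struct procent {                   /* simplified process-table entry */
    int  prstate;                  /* current state (READY, RUNNING, ...) */
    int  prprio;                   /* scheduling priority */
    char prname[16];               /* human-readable name */
};

struct procent proctab[NPROC];     /* one entry per process, indexed by PID */
int currpid = 0;                   /* PID of the currently running process */

int main(void)
{
    proctab[currpid].prprio = 20;
    snprintf(proctab[currpid].prname, sizeof proctab[currpid].prname, "main");
    printf("pid %d: prio %d (%s)\n",
           currpid, proctab[currpid].prprio, proctab[currpid].prname);
    return 0;
}
```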
Ready List
A queue of processes that are in the READY state, waiting for CPU access.
Semaphore Structure in Xinu
Contains a count of resources and a queue of waiting processes.
Semaphore wait() Operation
Decrements the semaphore count and blocks the calling process if the resulting count is negative.
Semaphore signal() Operation
Increments the count and, if any process is blocked on the semaphore, moves one waiter from the semaphore queue to the ready list.
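A toy model of a Xinu-style counting semaphore covering the structure plus wait() and signal(); the struct and names are illustrative, and real Xinu blocks and readies processes through the scheduler rather than printing.

```c
#include <stdio.h>

#define NQUEUE 8

/* Simplified semaphore: a count plus a FIFO of waiting PIDs.          */
/* Real Xinu keeps the wait queue in a global queue table.             */
struct sement {
    int count;                 /* negative => that many processes waiting */
    int queue[NQUEUE];         /* PIDs blocked on this semaphore */
    int head, tail;
};

/* wait(): decrement; if the result is negative, the caller would block. */
void sem_wait(struct sement *s, int pid)
{
    s->count--;
    if (s->count < 0) {
        s->queue[s->tail++ % NQUEUE] = pid;     /* enqueue and block pid */
        printf("pid %d blocks (count=%d)\n", pid, s->count);
    }
}

/* signal(): increment; if someone was waiting, make one waiter ready. */
void sem_signal(struct sement *s)
{
    s->count++;
    if (s->count <= 0) {
        int pid = s->queue[s->head++ % NQUEUE]; /* dequeue a waiter */
        printf("pid %d made ready (count=%d)\n", pid, s->count);
    }
}

int main(void)
{
    struct sement s = { .count = 1 };   /* one resource unit available */
    sem_wait(&s, 3);   /* pid 3 acquires the resource */
    sem_wait(&s, 4);   /* pid 4 blocks: count goes to -1 */
    sem_signal(&s);    /* pid 4 is made ready again */
    return 0;
}
```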
Reading Semaphore Count
A nonnegative count is the number of resource units still available; a negative count means that many processes are currently waiting on the semaphore.
Message Passing: receive()
A blocking operation that waits for a message to be ready before proceeding.
Message Passing: recvclr()
A non-blocking receive: returns a waiting message (clearing the message buffer) if one is present, otherwise returns immediately.
Message Passing: recvtime()
A blocking receive with a timeout: returns when a message arrives or when the specified timeout expires, whichever comes first.
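A usage sketch assuming a Xinu build environment; the exact types (umsg32) and return conventions (OK, TIMEOUT) can vary between Xinu versions.

```c
#include <xinu.h>   /* assumes a Xinu source tree and build environment */

/* Usage sketch only; signatures approximate the Xinu message API. */
process consumer(void)
{
    umsg32 msg;

    msg = receive();          /* blocks until some process calls send() */
    kprintf("got %u\n", msg);

    msg = recvclr();          /* returns a waiting message (clearing it), */
                              /* or returns immediately if none is waiting */

    msg = recvtime(100);      /* blocks at most ~100 clock ticks */
    if (msg == TIMEOUT) {     /* no message arrived before the timeout */
        kprintf("timed out\n");
    }
    return OK;
}
```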
Killing a SLEEPING Process
Requires removing it from the sleep queue before freeing its resources.
Killing a READY Process
Requires removing from the ready list before freeing the process entry.
Killing a WAITING Process
Requires removal from the semaphore queue and adjusting the semaphore count.
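A standalone toy of this state-dependent cleanup; the helper names are invented stand-ins for the kernel's real routines.

```c
#include <stdio.h>

enum prstate { PR_READY, PR_SLEEPING, PR_WAITING, PR_RUNNING };

/* Toy stand-ins for the kernel's real cleanup helpers (names invented). */
static void remove_from_sleep_queue(int pid) { printf("unsleep %d\n", pid); }
static void remove_from_ready_list(int pid)  { printf("unready %d\n", pid); }
static void remove_from_sem_queue(int pid, int *scount)
{
    (*scount)++;                        /* one fewer waiter on the semaphore */
    printf("unwait %d\n", pid);
}

/* State-dependent cleanup before the process entry can be freed. */
void kill_cleanup(int pid, enum prstate state, int *scount)
{
    switch (state) {
    case PR_SLEEPING: remove_from_sleep_queue(pid);       break;
    case PR_READY:    remove_from_ready_list(pid);        break;
    case PR_WAITING:  remove_from_sem_queue(pid, scount); break;
    default:          /* RUNNING: reschedule after freeing */ break;
    }
    /* ...now the process-table entry, stack, etc. can be reclaimed... */
}

int main(void)
{
    int scount = -1;                    /* one process waiting on the semaphore */
    kill_cleanup(7, PR_WAITING, &scount);
    printf("scount now %d\n", scount);
    return 0;
}
```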
Why Must We Clean Up Before Killing?
Ensures system integrity and prevents dangling pointers or crashes.
Data Structures Used in Kernel
Includes process tables, ready lists, semaphore queues, and message buffers.
Fields in Process Entry
Contain per-process attributes such as the name, priority, state, stack pointer, and other bookkeeping fields.
Kernel-Level vs User-Level Threads
Kernel-level threads are managed by the OS, while user-level threads are managed by user libraries.
Invariants in Xinu
Conditions that must always hold true for proper system functioning.
Why Invariants Matter
Violating invariants indicates kernel bugs and can lead to process starvation.
Atomic Actions
Operations that must complete fully or not at all, without interruption.
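A small C11 example of an atomic action (assuming C11 atomics, which are not part of the original deck): the fetch-and-add completes as one indivisible step, so concurrent increments cannot be lost.

```c
#include <stdatomic.h>
#include <stdio.h>

static atomic_long hits = 0;   /* counter updated atomically */

void record_hit(void)
{
    atomic_fetch_add(&hits, 1);        /* indivisible read-modify-write */
}

int main(void)
{
    record_hit();
    record_hit();
    printf("hits = %ld\n", atomic_load(&hits));
    return 0;
}
```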
fork() Function
Creates a copy of the parent process, resulting in a child process with its own execution context.
fork() Return Values
Returns 0 in the child, the child's PID (a positive value) in the parent, and -1 to the parent on failure (no child is created).
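A short POSIX C example of handling fork()'s three return cases.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* duplicate the calling process */

    if (pid < 0) {               /* -1: fork failed, no child created */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {       /* 0: we are the child */
        printf("child: my pid is %d\n", getpid());
    } else {                     /* >0: we are the parent; pid is the child's PID */
        printf("parent: created child %d\n", pid);
        waitpid(pid, NULL, 0);   /* reap the child */
    }
    return 0;
}
```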
Complete Process Lifecycle
Describes the states and transitions of a process from creation to termination.
Why Copy-on-Write Improves fork()
It avoids unnecessary copying of data by sharing memory until a write operation is performed.
Spin Lock vs Semaphore Comparison
Spin locks busy-wait and consume CPU, so they suit very short waits; semaphores block the waiting process, making them better for longer waits.
Relationship Between States and Data Structures
Kernel maintains the mapping between process states and their corresponding data structures.
Preemption by Higher Priority Process
Process preemption occurs when a higher-priority process is ready, prompting a context switch.
Why Process Termination Must Check State
Ensures appropriate cleanup based on process state to prevent system inconsistencies.
Quantum in Time-Sharing
Defines the maximum time a process can run before being switched out for fairness.
Priority-Based Scheduling in Xinu
Processes are managed in a priority queue, with higher-priority processes served first.
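A toy priority-ordered ready list (a plain linked list rather than Xinu's key-ordered queue table) showing the idea: insert in priority order, dispatch from the head.

```c
#include <stdio.h>

struct node {
    int pid;
    int prio;
    struct node *next;
};

/* Insert so the list stays sorted by descending priority; the scheduler
 * then always dispatches the head of the list. */
struct node *insert_ready(struct node *head, struct node *p)
{
    if (head == NULL || p->prio > head->prio) {
        p->next = head;
        return p;
    }
    struct node *cur = head;
    while (cur->next != NULL && cur->next->prio >= p->prio) {
        cur = cur->next;
    }
    p->next = cur->next;
    cur->next = p;
    return head;
}

int main(void)
{
    struct node a = {1, 10, NULL}, b = {2, 30, NULL}, c = {3, 20, NULL};
    struct node *ready = NULL;
    ready = insert_ready(ready, &a);
    ready = insert_ready(ready, &b);
    ready = insert_ready(ready, &c);
    for (struct node *n = ready; n != NULL; n = n->next) {
        printf("pid %d (prio %d)\n", n->pid, n->prio);   /* highest first */
    }
    return 0;
}
```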
Why Atomicity Important for Synchronization
Atomicity in operations prevents race conditions and ensures proper execution of critical sections.
Blocking and I/O Operations
While waiting for I/O to complete, a process blocks and yields the CPU so other ready processes can run, keeping the CPU busy.
Why Race Conditions Are Particularly Hard to Debug
Unpredictable nature of race conditions makes them difficult to replicate and resolve.