Exam Notes on Concurrency, Memory Management, and Security

Critical Section Problem

  • The problem of coordinating multiple threads/processes that access shared resources so their updates do not interfere.
  • The main requirement is mutual exclusion: only one thread or process may execute in the critical section at any given time. Solutions should also provide progress and bounded waiting.

Atomic Operations

  • Operations that execute as a single, indivisible step: they either complete entirely or have no effect, and cannot be interrupted partway through.
  • They prevent race conditions in concurrent programming.

CAS (Compare and Swap)

  • It stands for Compare and Swap.
  • It's an atomic instruction that updates the value of a variable only if it currently has a specific expected value.
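  • A minimal sketch of a CAS retry loop, assuming C11 <stdatomic.h> (the counter and helper name are illustrative):

      #include <stdatomic.h>
      #include <stdio.h>

      /* CAS sketch: atomically increment a shared counter.
       * atomic_compare_exchange_strong stores the new value only if the
       * counter still equals `expected`; otherwise it reloads `expected`
       * with the current value and the loop retries. */
      static atomic_int counter = 0;

      static void cas_increment(void) {
          int expected = atomic_load(&counter);
          while (!atomic_compare_exchange_strong(&counter, &expected, expected + 1)) {
              /* another thread won the race; `expected` was refreshed, try again */
          }
      }

      int main(void) {
          cas_increment();
          printf("counter = %d\n", atomic_load(&counter));
          return 0;
      }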

TAS (Test-and-Set)

  • Stands for Test-and-Set.
  • It's an atomic instruction that sets a lock variable to 1 (locked) and returns its previous value in a single step.
  • Commonly used for implementing spinlocks.
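  • A spinlock sketch built on test-and-set, assuming C11 <stdatomic.h>, where atomic_flag_test_and_set plays the role of the TAS instruction:

      #include <stdatomic.h>

      /* Spinlock sketch: atomic_flag_test_and_set sets the flag and returns
       * its old value; the lock is acquired when the old value was clear. */
      typedef struct { atomic_flag flag; } spinlock_t;

      static spinlock_t lock = { ATOMIC_FLAG_INIT };

      static void spin_lock(spinlock_t *l) {
          while (atomic_flag_test_and_set(&l->flag)) {
              /* spin: keep retrying until the old value comes back clear */
          }
      }

      static void spin_unlock(spinlock_t *l) {
          atomic_flag_clear(&l->flag);
      }

      int main(void) {
          spin_lock(&lock);
          /* ... critical section ... */
          spin_unlock(&lock);
          return 0;
      }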

Mutex Lock

  • Stands for mutual exclusion lock.
  • A synchronization primitive that prevents multiple threads from accessing a critical section simultaneously, thus ensuring exclusive access.
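  • A minimal sketch assuming POSIX threads; two threads share one counter and the mutex guards the increment:

      #include <pthread.h>
      #include <stdio.h>

      /* Two threads increment a shared counter; the mutex ensures only one
       * thread is inside the critical section at a time. */
      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static long counter = 0;

      static void *worker(void *arg) {
          (void)arg;
          for (int i = 0; i < 100000; i++) {
              pthread_mutex_lock(&lock);    /* enter critical section */
              counter++;
              pthread_mutex_unlock(&lock);  /* leave critical section */
          }
          return NULL;
      }

      int main(void) {
          pthread_t t1, t2;
          pthread_create(&t1, NULL, worker, NULL);
          pthread_create(&t2, NULL, worker, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          printf("counter = %ld\n", counter);  /* always 200000 with the lock held */
          return 0;
      }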

Semaphore

  • A synchronization tool that uses a counter to control access to shared resources by multiple processes or threads.
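  • A counting-semaphore sketch assuming POSIX <semaphore.h>; the initial count of 3 is an illustrative choice:

      #include <semaphore.h>

      /* The counter starts at 3, so at most three threads can hold a "slot"
       * at once; sem_wait decrements (blocking at zero), sem_post increments. */
      int main(void) {
          sem_t slots;
          sem_init(&slots, 0, 3);   /* 0 = shared between threads of this process */

          sem_wait(&slots);         /* acquire a slot (count 3 -> 2) */
          /* ... use one of the shared resources ... */
          sem_post(&slots);         /* release the slot (count 2 -> 3) */

          sem_destroy(&slots);
          return 0;
      }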

Spinning vs Sleeping

  • Spinning: A thread keeps checking for resource availability in a loop (CPU-intensive).
  • Sleeping: A thread suspends its execution and waits for a signal, reducing CPU usage compared to spinning.

Condition Variable

  • A synchronization primitive that allows threads to wait for a specific condition to become true.
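  • A sketch assuming POSIX threads; the `ready` flag and function names are illustrative:

      #include <pthread.h>
      #include <stdbool.h>

      /* A waiter sleeps until `ready` becomes true. The while loop re-checks
       * the condition to guard against spurious wakeups. */
      static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
      static bool ready = false;

      void wait_until_ready(void) {
          pthread_mutex_lock(&m);
          while (!ready)
              pthread_cond_wait(&cv, &m);   /* atomically releases m while sleeping */
          pthread_mutex_unlock(&m);
      }

      void signal_ready(void) {
          pthread_mutex_lock(&m);
          ready = true;
          pthread_cond_signal(&cv);         /* wake one waiting thread */
          pthread_mutex_unlock(&m);
      }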

Producer-Consumer Model

  • A classical synchronization pattern where producers generate data and consumers consume it, typically using a shared buffer.

Deadlock

  • A situation in which two or more processes are blocked indefinitely because each is waiting for a resource held by another process in the set.

4 Conditions for Deadlock

  • Mutual Exclusion: Only one process can use a resource at a time.
  • Hold and Wait: A process holds allocated resources while waiting for additional resources.
  • No Preemption: A resource can only be released voluntarily by the process holding it.
  • Circular Wait: A set of processes exists in which each process waits for a resource held by the next, forming a cycle.

Mutual Exclusion (in Deadlock)

  • Only one process can use a particular resource at any given time. This is a necessary condition for deadlock to occur.

Bounded Buffer Problem

  • A classical synchronization problem where a fixed-size buffer is shared by producer(s) and consumer(s): producers must wait when the buffer is full, and consumers must wait when it is empty (see the sketch below).
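  • One common sketch uses two counting semaphores plus a mutex (POSIX assumed; BUF_SIZE and the int item type are illustrative choices):

      #include <pthread.h>
      #include <semaphore.h>

      /* `empty` counts free slots, `full` counts filled slots, and the mutex
       * protects the buffer indices. */
      #define BUF_SIZE 8

      static int buffer[BUF_SIZE];
      static int in = 0, out = 0;
      static sem_t empty, full;                 /* init: empty = BUF_SIZE, full = 0 */
      static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

      void produce(int item) {
          sem_wait(&empty);                     /* wait for a free slot */
          pthread_mutex_lock(&mtx);
          buffer[in] = item;
          in = (in + 1) % BUF_SIZE;
          pthread_mutex_unlock(&mtx);
          sem_post(&full);                      /* announce a filled slot */
      }

      int consume(void) {
          sem_wait(&full);                      /* wait for a filled slot */
          pthread_mutex_lock(&mtx);
          int item = buffer[out];
          out = (out + 1) % BUF_SIZE;
          pthread_mutex_unlock(&mtx);
          sem_post(&empty);                     /* announce a free slot */
          return item;
      }

      /* Before use: sem_init(&empty, 0, BUF_SIZE); sem_init(&full, 0, 0); */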

Readers-Writers Problem

  • A synchronization problem where multiple readers can access shared data simultaneously, but writers require exclusive access.
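  • A sketch using a POSIX read-write lock, one of several possible solutions (variable names are illustrative):

      #include <pthread.h>

      /* Many readers may hold the lock at once; a writer gets exclusive access. */
      static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
      static int shared_data = 0;

      int read_value(void) {
          pthread_rwlock_rdlock(&rw);    /* shared (read) lock */
          int v = shared_data;
          pthread_rwlock_unlock(&rw);
          return v;
      }

      void write_value(int v) {
          pthread_rwlock_wrlock(&rw);    /* exclusive (write) lock */
          shared_data = v;
          pthread_rwlock_unlock(&rw);
      }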

Dining Philosophers Problem

  • A classical synchronization problem: philosophers seated around a table each need their two adjacent forks to eat but can pick up only one fork at a time, which can lead to deadlock or starvation. The forks are the shared resources.

Resource Allocation Graph

  • A directed graph representing which resources are allocated to which processes and which resources each process is requesting.
  • It is used to detect deadlocks: a cycle indicates a possible deadlock, and a definite deadlock when every resource has only a single instance.

Banker's Algorithm

  • A deadlock avoidance algorithm that grants a resource request only if doing so leaves the system in a safe state.
  • The system is in a safe state if there is some order in which every process can be granted its maximum remaining need and run to completion (see the safety-check sketch below).
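  • A sketch of the safety check at the heart of the algorithm, with illustrative sizes (N processes, M resource types) and matrix names:

      #include <stdbool.h>
      #include <stdio.h>
      #include <string.h>

      #define N 5   /* number of processes (illustrative) */
      #define M 3   /* number of resource types (illustrative) */

      /* Returns true if some completion order exists in which every process
       * can obtain its remaining need from the currently available resources. */
      bool is_safe(int available[M], int alloc[N][M], int need[N][M]) {
          int work[M];
          bool finished[N] = { false };
          memcpy(work, available, sizeof(work));

          int done = 0;
          while (done < N) {
              bool progressed = false;
              for (int i = 0; i < N; i++) {
                  if (finished[i]) continue;
                  bool can_run = true;
                  for (int j = 0; j < M; j++)
                      if (need[i][j] > work[j]) { can_run = false; break; }
                  if (can_run) {
                      for (int j = 0; j < M; j++)
                          work[j] += alloc[i][j];   /* i finishes, releases its resources */
                      finished[i] = true;
                      progressed = true;
                      done++;
                  }
              }
              if (!progressed) return false;        /* no process can proceed: unsafe */
          }
          return true;                              /* all processes can finish: safe */
      }

      int main(void) {
          /* Illustrative snapshot (values chosen so a safe order exists). */
          int available[M] = { 3, 3, 2 };
          int alloc[N][M]  = { {0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2} };
          int need[N][M]   = { {7,4,3}, {1,2,2}, {6,0,0}, {0,1,1}, {4,3,1} };
          printf("%s\n", is_safe(available, alloc, need) ? "safe" : "unsafe");
          return 0;
      }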

Deadlock Prevention

  • Strategies to prevent deadlock by ensuring that at least one of the four necessary conditions (mutual exclusion, hold and wait, no preemption, circular wait) cannot occur.

Deadlock Avoidance

  • Avoiding deadlock by making resource allocation decisions that ensure the system never enters an unsafe state.

Time Sharing Memory

  • Sharing physical memory among multiple users or processes by letting each use it in turn for a short time slice (time slot), e.g., by swapping one process's memory out and the next one's in.

Static Relocation

  • Adjusting memory addresses at compile time or load time. The addresses are fixed once the process starts executing.

Dynamic Relocation

  • Translating memory addresses at run time, typically by adding a relocation (base) register to each generated address; this allows a process to be moved in memory during execution.

Segmentation Addressing

  • Dividing memory into variable-sized segments, each with a base address and a limit.

Contiguous Memory Allocation

  • Allocating a single, continuous block of memory to a process. This method simplifies memory management but can lead to external fragmentation.

Paging

  • Dividing logical memory into fixed-size pages and physical memory into frames of the same size, and mapping virtual addresses to physical addresses via page tables.

Paging Hardware

  • The Memory Management Unit (MMU), which supports page translation using page tables.

Paging Address Calculation

  • A virtual address is composed of a page number and an offset: Virtual Address = (Page Number × Page Size) + Offset.
  • The page number indexes the page table, which provides the corresponding frame number.
  • The physical address is then: Physical Address = (Frame Number × Page Size) + Offset (see the sketch below).
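  • A small translation sketch assuming a 4 KiB page size and a flat array page table (both illustrative):

      #include <stdint.h>
      #include <stdio.h>

      #define PAGE_SIZE   4096u
      #define OFFSET_BITS 12u                               /* 2^12 = 4096 */

      /* Translate a virtual address using a flat page-number -> frame-number table. */
      uint32_t translate(uint32_t vaddr, const uint32_t *page_table) {
          uint32_t page   = vaddr >> OFFSET_BITS;           /* page number */
          uint32_t offset = vaddr & (PAGE_SIZE - 1);        /* offset within page */
          uint32_t frame  = page_table[page];               /* page table lookup */
          return (frame << OFFSET_BITS) | offset;           /* frame base + offset */
      }

      int main(void) {
          uint32_t page_table[] = { 5, 2, 7 };  /* pages 0..2 map to frames 5, 2, 7 */
          /* Virtual 0x1ABC = page 1, offset 0xABC -> frame 2 -> physical 0x2ABC. */
          printf("0x%X\n", translate(0x1ABC, page_table));
          return 0;
      }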

Inverted Page Table

  • A page table structure with one entry per physical frame, indexed by frame number; each entry records which process and virtual page currently occupy that frame.

Hashed Page Table

  • A page table structure that hashes the virtual page number to locate its entry (with chaining for collisions), mapping it to a physical frame; useful for large, sparse address spaces.

Hierarchical Page Table

  • A multi-level page table structure used to efficiently manage large virtual address spaces.

Translation Lookaside Buffer (TLB)

  • A cache that stores recent virtual-to-physical address translations to speed up memory access.

Swapping

  • Moving entire processes between main memory (RAM) and secondary storage (disk) to manage limited memory resources.

Page Fault

  • An interrupt (trap) that occurs when a program attempts to access a page that is not currently loaded in main memory.

Demand Paging

  • A memory management technique where pages are loaded into memory only when they are needed (on demand), rather than in advance.

Effective Access Time (EAT) in Demand Paging

  • EAT combines the ordinary memory access time with the cost of servicing a page fault, weighted by the page-fault rate p.
  • EAT = (1 - p) × memory access time + p × page-fault service time (worked example below).
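  • A tiny worked example with illustrative numbers (100 ns memory access, 8 ms page-fault service time, p = 0.0001):

      #include <stdio.h>

      int main(void) {
          double mem_ns   = 100.0;      /* memory access time in ns (illustrative) */
          double fault_ns = 8e6;        /* page-fault service time: 8 ms in ns     */
          double p        = 0.0001;     /* page-fault rate (illustrative)          */

          double eat = (1.0 - p) * mem_ns + p * fault_ns;
          printf("EAT = %.2f ns\n", eat);   /* 0.9999*100 + 0.0001*8e6 = 899.99 ns */
          return 0;
      }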

Page Replacement

  • The process of selecting which page in memory to evict (replace) when a new page needs to be loaded and no free frames are available.

Thrashing

  • A state where the system spends more time paging than executing, leading to a significant drop in performance due to constant page faults.

Threats vs Attacks

  • Threats: Potential security risks or dangers.
  • Attacks: Actual attempts to exploit vulnerabilities and compromise a system's security.

Security Attacks

  • Actions taken to compromise the confidentiality, integrity, or availability of a system or its data.