CPSC 351 Final

54 Terms

1. Threads Chapter

2. Thread

Basic unit of CPU utilization; multiple threads allow a process to perform several tasks concurrently. Made up of:

  1. Program counter: keeps track of which instruction to execute next

  2. Register values: store the current working variables

  3. Stack: keeps track of the execution history (function calls and local variables)

In Unix, fork() can duplicate either all the threads of a process or only the calling thread. Calling exec() replaces the entire process, including all of its threads.
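A minimal sketch of creating and joining a POSIX thread (the worker function and its argument are illustrative):

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread has its own stack, program counter, and registers,
       but shares the process's global memory. */
    void *worker(void *arg) {
        int id = *(int *)arg;
        printf("thread %d running\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        int id = 1;
        if (pthread_create(&tid, NULL, worker, &id) != 0) {
            perror("pthread_create");
            return 1;
        }
        pthread_join(tid, NULL);   /* wait for the thread to finish */
        return 0;
    }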

3. Thread vs process

Process: used to group resources together

Thread: entity scheduled for execution on the CPU

4. Pros and cons of multithreading over multiprocessing

+ Continues running even when part of the program is blocked

+ Threads of a process share memory and resources

+ Faster to create and manage

+ Threads can execute in parallel

- Synchronization issues

- Requires defined workflows

- Harder to debug

5. Threading models

One-to-one: each user thread corresponds to a kernel thread

Many-to-one: all user threads share one kernel thread

Many-to-many: N user threads correspond to M kernel threads

6. Signal

Used in Unix to notify a process that a particular event has occurred

  1. Synchronous: resulting from process operations (e.g. illegal memory access)

  2. Asynchronous: generated by outside event (e.g. user pressing CTRL+C)
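A hedged sketch of installing a handler for the asynchronous case (SIGINT from the user pressing CTRL+C):

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Flag set by the handler; sig_atomic_t is safe to write there. */
    static volatile sig_atomic_t got_sigint = 0;

    static void handle_sigint(int sig) {
        (void)sig;
        got_sigint = 1;            /* only async-signal-safe work here */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = handle_sigint;
        sigaction(SIGINT, &sa, NULL);

        while (!got_sigint)
            pause();               /* sleep until a signal arrives */
        printf("caught SIGINT\n");
        return 0;
    }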

7. Signal handling options in multithreaded processes

  • Deliver the signal to the thread to which the signal applies

  • Deliver the signal to every thread in the process

  • Deliver the signal to certain threads in the process

  • Assign a specific thread to receive all signals for the process
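A sketch of the last option: block the signal in every thread, then let one dedicated thread receive it synchronously with sigwait() (a common pattern, not the only one):

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    /* Dedicated signal-handling thread: waits for SIGINT synchronously. */
    static void *signal_thread(void *arg) {
        sigset_t *set = arg;
        int sig;
        sigwait(set, &sig);        /* blocks until a signal in 'set' arrives */
        printf("signal thread received signal %d\n", sig);
        return NULL;
    }

    int main(void) {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGINT);
        /* Block SIGINT here; threads created afterwards inherit the mask,
           so only the dedicated thread ever receives it. */
        pthread_sigmask(SIG_BLOCK, &set, NULL);

        pthread_t tid;
        pthread_create(&tid, NULL, signal_thread, &set);
        pthread_join(tid, NULL);
        return 0;
    }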

8. Thread pool

Keeps a set of pre-created threads idle until work is requested. Useful because reusing a thread has less overhead than creating a new one, and the pool caps the number of threads that can exist at once (see the sketch below)
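A minimal pool sketch with a shared work queue guarded by a mutex and condition variable (the queue size, int-valued tasks, and omitted full-queue handling are simplifying assumptions):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define NUM_WORKERS 4
    #define QUEUE_SIZE  16

    /* Fixed-size task queue; tasks are just ints here for brevity. */
    static int queue[QUEUE_SIZE];
    static int head = 0, tail = 0, count = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

    static void submit(int task) {        /* omits full-queue handling */
        pthread_mutex_lock(&lock);
        queue[tail] = task;
        tail = (tail + 1) % QUEUE_SIZE;
        count++;
        pthread_cond_signal(&not_empty);  /* wake one resting worker */
        pthread_mutex_unlock(&lock);
    }

    static void *worker(void *arg) {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (count == 0)            /* rest until work arrives */
                pthread_cond_wait(&not_empty, &lock);
            int task = queue[head];
            head = (head + 1) % QUEUE_SIZE;
            count--;
            pthread_mutex_unlock(&lock);
            printf("worker handled task %d\n", task);
        }
        return NULL;
    }

    int main(void) {
        pthread_t workers[NUM_WORKERS];
        for (int i = 0; i < NUM_WORKERS; i++)
            pthread_create(&workers[i], NULL, worker, NULL);
        for (int t = 1; t <= 8; t++)
            submit(t);
        sleep(1);              /* crude: give workers time, then exit */
        return 0;
    }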

9. Thread-specific data

Data which belongs exclusively to a single thread, useful when you don’t have control over the thread creation process
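A sketch using the POSIX thread-specific data API (pthread_key_create and friends); the stored values are illustrative:

    #include <pthread.h>
    #include <stdio.h>

    /* One key, but each thread that sets it sees only its own value. */
    static pthread_key_t key;

    static void *worker(void *arg) {
        pthread_setspecific(key, arg);        /* store private data */
        int *mine = pthread_getspecific(key); /* retrieve it later */
        printf("this thread's value: %d\n", *mine);
        return NULL;
    }

    int main(void) {
        pthread_key_create(&key, NULL);
        int a = 1, b = 2;
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, &a);
        pthread_create(&t2, NULL, worker, &b);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        pthread_key_delete(key);
        return 0;
    }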

10. Thread cancellation

Terminating a thread before it has completed its task

  1. Asynchronous cancellation: terminates immediately upon request, may not be able to release allocated resources

  2. Synchronous/deferred cancellation: thread periodically checks whether or not it should exit, allows for releasing resources
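A sketch of deferred cancellation with pthreads; the thread terminates only at cancellation points such as pthread_testcancel() or sleep():

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg) {
        (void)arg;
        /* PTHREAD_CANCEL_DEFERRED is the default type. */
        pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL);
        for (;;) {
            /* ... do one unit of work, release resources cleanly ... */
            pthread_testcancel();  /* explicit cancellation point */
            sleep(1);              /* sleep() is also a cancellation point */
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        sleep(2);
        pthread_cancel(tid);       /* a request, not instant termination */
        pthread_join(tid, NULL);
        printf("worker cancelled\n");
        return 0;
    }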

11. clone()

Linux system call that creates a child process (and can also create a thread); the flags passed to it determine how much the parent and child share:

  • CLONE_FS: file-system information is shared

  • CLONE_VM: memory space is shared

  • CLONE_SIGHAND: signal handlers are shared

  • CLONE_FILES: set of open files is shared
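A hedged usage sketch: the child runs child_fn on its own stack while sharing memory, file-system info, and open files with the parent (the flag combination and stack size are illustrative):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    static int child_fn(void *arg) {
        printf("child running: %s\n", (char *)arg);
        return 0;
    }

    int main(void) {
        const size_t STACK_SIZE = 1024 * 1024;
        char *stack = malloc(STACK_SIZE);   /* child needs its own stack */
        if (!stack) { perror("malloc"); return 1; }

        /* Share memory, fs info, and open files; SIGCHLD lets us wait. */
        int flags = CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD;
        pid_t pid = clone(child_fn, stack + STACK_SIZE, flags, "hello");
        if (pid == -1) { perror("clone"); return 1; }

        waitpid(pid, NULL, 0);
        free(stack);
        return 0;
    }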

12. Process Synchronization Chapter

13. Race condition

Situation in which the outcome of execution depends on the unpredictable order in which several threads access and modify shared data
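A sketch that usually exhibits the race: two threads increment a shared counter without synchronization, so increments get lost (the iteration counts are arbitrary):

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;

    static void *work(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;         /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, work, NULL);
        pthread_create(&t2, NULL, work, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* often < 2000000 */
        return 0;
    }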

14. Process synchronization

Ensures an orderly execution of cooperating processes to maintain data consistency

15. Critical section

Segment of process code in which the process modifies shared resources; causes synchronization issues if two or more threads execute it at the same time

16. Atomic instruction

Operation that can’t be preempted

17. Requirements for solving the critical section problem

  1. Mutual exclusion: only one process may enter a critical section at a time

  2. Progress: no process executing outside of its critical section may block another process from entering theirs

  3. Bounded waiting: process can’t be perpetually barred from entering a critical section by other processes
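A classic two-process software solution meeting all three requirements is Peterson's algorithm; a sketch (it assumes loads and stores are not reordered, which modern hardware only guarantees with atomics or memory fences):

    #include <stdbool.h>

    /* Peterson's algorithm for two processes, i = 0 or 1. */
    static volatile bool flag[2] = {false, false};
    static volatile int turn = 0;

    void enter_critical(int i) {
        int other = 1 - i;
        flag[i] = true;        /* I want to enter */
        turn = other;          /* but yield the tie-break to the other */
        while (flag[other] && turn == other)
            ;                  /* busy-wait */
    }

    void exit_critical(int i) {
        flag[i] = false;       /* allow the other process to proceed */
    }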

18. Parallelism vs concurrency

Parallelism: operations are executed by different CPU cores at the same time

Concurrency: operations are interleaved on the same CPU core, overlapping in time but never executing at the same instant

19. Semaphore

Synchronization constructs that limit how many processes/threads can enter a critical section at the same time

  • Binary semaphore: only allows one process at a time

  • Counting semaphore: allows a maximum of N processes at a time
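A counting-semaphore sketch with POSIX semaphores, initialized to 2 so at most two threads are inside at once (the thread count and IDs are illustrative):

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t sem;

    static void *worker(void *arg) {
        sem_wait(&sem);        /* P(): decrement, or block at zero */
        printf("thread %ld in critical section\n", (long)arg);
        /* ... use the shared resource ... */
        sem_post(&sem);        /* V(): increment, wake a waiter */
        return NULL;
    }

    int main(void) {
        sem_init(&sem, 0, 2);  /* 0 = shared among threads, value = 2 */
        pthread_t t[4];
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&sem);
        return 0;
    }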

20. Deadlock

A circular wait between two or more threads in which each holds a lock that another needs, so none of them can ever proceed
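A minimal way to manufacture one: two threads acquire the same two mutexes in opposite order (the lock names are illustrative):

    #include <pthread.h>

    static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    static void *t1_fn(void *arg) {
        (void)arg;
        pthread_mutex_lock(&A);
        pthread_mutex_lock(&B);    /* blocks if t2 already holds B */
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    static void *t2_fn(void *arg) {
        (void)arg;
        pthread_mutex_lock(&B);
        pthread_mutex_lock(&A);    /* blocks if t1 already holds A */
        pthread_mutex_unlock(&A);
        pthread_mutex_unlock(&B);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, t1_fn, NULL);
        pthread_create(&t2, NULL, t2_fn, NULL);
        pthread_join(t1, NULL);    /* may never return: deadlock */
        pthread_join(t2, NULL);
        return 0;
    }

The standard fix is a global lock order: if every thread acquires A before B, the cycle can't form.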

21. Spinlock vs sleeplock

Spinlock: thread actively checks if mutex is unlocked

Sleeplock: thread sleeps until mutex is unlocked
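A spinlock sketch using C11 atomics; a sleeplock would instead be a pthread_mutex_t, whose lock call puts the waiting thread to sleep:

    #include <stdatomic.h>

    /* The waiter burns CPU cycles re-testing the flag instead of sleeping. */
    static atomic_flag lock = ATOMIC_FLAG_INIT;

    void spin_lock(void) {
        while (atomic_flag_test_and_set(&lock))
            ;                  /* spin until the flag was previously clear */
    }

    void spin_unlock(void) {
        atomic_flag_clear(&lock);
    }

Spinning is wasteful for long waits but cheaper than a sleep/wake cycle when the critical section is very short.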

22. Priority inversion problem

Occurs when a lower-priority process holds a lock that a higher-priority process needs, effectively blocking it. Can be solved by priority inheritance: the low-priority process temporarily inherits the priority of the high-priority process until it releases the lock

23. Memory Chapter

24. Two main issues of memory management

Speed: memory speed is slower than CPU speed, causing a performance bottleneck, can be solved with caching

Protection: processes need to be protected from having their memory accessed/modified by other processes, can be solved using defined memory segments

25. Caching

Uses small but fast memory to temporarily store data/instructions, can have issues with data consistency if not properly implemented

26. Memory protection hardware

Base register: holds the smallest legal memory address accessible by the process

Limit register: specifies the size of the process memory
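The check these registers enable, sketched in C (in reality the MMU performs it in hardware on every access):

    #include <stdio.h>
    #include <stdlib.h>

    /* Legal addresses satisfy base <= addr < base + limit. */
    void check_access(unsigned long addr,
                      unsigned long base, unsigned long limit) {
        if (addr < base || addr >= base + limit) {
            fprintf(stderr, "trap: addressing error at %#lx\n", addr);
            exit(1);           /* the OS would terminate the process */
        }
        /* otherwise the access proceeds */
    }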

27. Address binding

Determines where in memory variables/instructions are stored; addresses can be bound at

  1. Compile time

  2. Load time

  3. Execution time

28. Static vs dynamic linking

Static linking: all libraries are included in the binary at compile time; highly portable but requires more space

Dynamic linking: libraries are loaded at execution time; needs less space, but the system the binary runs on must provide the required libraries

29. Process swapping

Temporarily moves processes from memory to a backing store (e.g. hard disk), allows you to run more processes than the RAM could handle on its own

  1. Backing store: stores memory images of all processes

  2. Roll out, roll in: swaps out a lower-priority process so a higher-priority process can be loaded and executed

30. Fixed vs variable partition schemes

Fixed partition: each process is allocated a fixed partition size

Variable partition: each process is allocated memory from a hole (block of available memory) large enough to accommodate it

31. Variable partition allocation strategies

  1. First-fit: allocates the first hole that is large enough (see the sketch below)

  2. Best-fit: allocates the smallest hole that is large enough

  3. Worst-fit: allocates the largest available hole
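A first-fit sketch over a list of holes (the hole structure is illustrative):

    #include <stddef.h>

    /* A hole: a block of available memory in the variable-partition scheme. */
    struct hole { size_t start, size; };

    /* First-fit: index of the first hole big enough, or -1 if external
       fragmentation leaves no single hole that fits. */
    int first_fit(const struct hole *holes, int n, size_t request) {
        for (int i = 0; i < n; i++)
            if (holes[i].size >= request)
                return i;
        return -1;
    }

    /* Best-fit would scan all n holes and keep the smallest that fits;
       worst-fit would keep the largest. */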

32. Virtual memory

The addresses that processes see and use; the MMU translates them into physical addresses in RAM

33. External vs internal fragmentation

External fragmentation: occurs when there is enough memory to allocate for a process, but there is no individual gap large enough to hold it

Internal fragmentation: occurs when a process is allocated more memory than it requests/needs

34. Compaction

Solves the external fragmentation problem by shuffling memory contents until all free memory is merged into one large block. However, this has high overhead and is possible only if addresses are bound at execution time

35. Paging

Solves the external fragmentation problem by breaking physical memory into fixed-size frames and virtual memory into pages of the same size (a process's last page may still waste space, i.e. internal fragmentation)
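How a virtual address splits into a page number and an offset, as a small worked sketch (the 4 KB page size and the example address are assumptions):

    #include <stdio.h>

    #define PAGE_SIZE 4096u    /* assumed: 4 KB pages = 12-bit offset */

    int main(void) {
        unsigned addr = 20500;                 /* example virtual address */
        unsigned page = addr / PAGE_SIZE;      /* 20500 / 4096 = 5  */
        unsigned offset = addr % PAGE_SIZE;    /* 20500 % 4096 = 20 */
        /* physical address = frame_of(page) * PAGE_SIZE + offset */
        printf("page %u, offset %u\n", page, offset);
        return 0;
    }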

36. Page table

Maps each page to the base address of the frame it occupies in physical memory. Entries are protected by

  • Access permission: frame numbers have associated read-write bits

  • Valid-invalid bit: determines whether a page is/isn’t part of the virtual memory space of the process

37. Page table pointer register

Stores the starting address of the process's page table

38. Translation look-aside buffer (TLB)

Caches a subset of page table entries. Must be flushed after context switches since each process has its own page table; address-space identifiers (ASIDs) reduce the need for this by uniquely identifying each process's entries

TLB hit: page number is in the TLB, returns the corresponding frame number

TLB miss: page number isn't in the TLB, looks up the frame number in the page table
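A worked effective-access-time calculation (the timings and hit ratio are illustrative assumptions): with a 10 ns TLB lookup, 100 ns memory access, and a 90% hit ratio,

  EAT = 0.90 × (10 + 100) + 0.10 × (10 + 100 + 100)
      = 0.90 × 110 + 0.10 × 210
      = 99 + 21 = 120 ns

A miss costs one extra memory access to read the page table entry before the data access itself.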

39. Types of paging

  1. Single-level: one page table, requires much more contiguous memory

  2. Hierarchical: splits the page table into levels (the page number is split into one index per level), requires more memory accesses

  3. Hashed: maps virtual addresses to physical ones using a hash table, longer lookup times due to collisions

  4. Inverted: one system-wide table with one entry per physical frame instead of one page table per process, longer search times and difficulty implementing shared memory

  5. Demand: pages are loaded into physical memory during execution only when needed, however performance can be worse due to page faults

40. Page sharing

Allows multiple processes to map the same physical frame to their pages, useful for pages containing shared re-entrant (read-only) code

41. Page fault

Occurs in demand paging when a referenced page is not loaded into memory (its valid-invalid bit is invalid), handled by:

  1. Validating the reference

  2. Allocating a free frame

  3. Loading the page from disk into the frame

  4. Updating the page table

  5. Restarting the interrupted instruction

42. Thrashing

Occurs in demand paging when a process doesn't have enough frames, leading to constant swapping of pages in and out. A problem because CPU utilization drops; can be avoided by allocating the process more frames or reducing the number of pages it actively needs

43. Copy-on-write (COW)

Enables parent and child processes to share memory until that memory is modified, at which point the target page is copied and assigned to the process that performed the modification

44. Segmentation

Virtual memory is divided into multiple virtual address spaces, such that each virtual address is a pair <segment #, offset>. A segment table stores the base and limit of each segment, and a segmentation fault occurs if offset ≥ limit
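The translation step, sketched in C (the table layout is illustrative; real hardware does this in the MMU):

    #include <stdio.h>
    #include <stdlib.h>

    /* One segment-table entry: where the segment starts and how big it is. */
    struct segment { unsigned base, limit; };

    /* Translate <segment #, offset> into a physical address. */
    unsigned translate(const struct segment *table,
                       unsigned seg, unsigned offset) {
        if (offset >= table[seg].limit) {
            fprintf(stderr, "segmentation fault\n");
            exit(1);           /* hardware trap in reality */
        }
        return table[seg].base + offset;
    }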

45. Process Scheduling Chapter

46. CPU scheduling

Selecting the next process to execute whenever the running process leaves the CPU idle (e.g. by blocking or terminating)

47. Properties of good CPU schedulers

  1. High CPU utilization

  2. Low I/O waiting time

48. CPU/IO burst cycles

CPU burst: time that process spends executing on the CPU

I/O burst: time that process spends waiting for I/O

49. CPU scheduling decisions

Occur when processes:

  1. Switch from running to waiting state

  2. Switch from running to ready state

  3. Switch from waiting to ready state

  4. Terminate

50. Dispatcher module

Gives control of the CPU to the process selected by the scheduler by:

  1. Switching the context

  2. Switching to user mode

  3. Jumping to the proper location to restart a program

51. Dispatch latency

Time that it takes for the dispatcher to stop one process and start another (should be minimized)

52. Process scheduling algorithms

  1. First Come First Served: allocates the CPU to the process that requests it first

  2. Shortest Job First Non-preemptive: allocates the CPU to the process which has the shortest next CPU burst without interruption

  3. Shortest Job First Preemptive: allocates the CPU to the process which has the shortest next CPU burst, even if another process is currently running

  4. Priority Scheduling: allocates the CPU to the process with the highest associated priority

  5. Round Robin: allocates the CPU to each process for a small unit of time (a quantum) in a cycle, until they finish execution

  6. Ideal Fair Scheduler: equally divides CPU time among running processes (if there are N processes, then each process runs for 1/Nth of the CPU time)
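A worked comparison using the classic textbook burst times (all three processes arrive at time 0; P1, P2, P3 have CPU bursts of 24, 3, and 3 ms):

  FCFS (runs P1, P2, P3): waiting times 0, 24, 27 → average (0 + 24 + 27) / 3 = 17 ms

  SJF (runs P2, P3, P1): waiting times 6, 0, 3 → average (6 + 0 + 3) / 3 = 3 ms

SJF provably minimizes average waiting time, but it depends on predicting each process's next CPU burst.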

53. Starvation

Occurs in preemptive priority scheduling when a low priority task is kept perpetually waiting because of higher priority tasks

54. Symmetric vs asymmetric multiprocessing

Asymmetric multiprocessing: one processor handles all scheduling, I/O processing, etc. while all the other processors execute user code, minimizes data sharing issues

Symmetric multiprocessing: each processor is self-scheduling