CPSC 3220 Terms


194 Terms

1. virtualization

Provide an application with the illusion of resources that are not physically present.

2. reliability

A property of a system that does exactly what it is designed to do.

3. availability

The percentage of time that a system is usable.

4. portability

The ability of software to work across multiple hardware platforms.

5. application programming interface (API)

The system call interface provided by an operating system to applications.

6. overhead

The added resource cost of implementing an abstraction versus using the underlying hardware resources directly.

7. response time

The time for a task to complete, from when it starts until it is done.

8. throughput

The rate at which a group of tasks are completed.

9. performance predictability

Whether a system's response time or other performance metric is consistent over time.

10. time-sharing operating system

An operating system designed to support interactive use of the computer.

11. protection

The isolation of potentially misbehaving applications and users so that they do not corrupt other applications or the operating system itself.

12. operating system kernel

The kernel is the lowest level of software running on the system, with full access to all of the capabilities of the hardware.

13. process

The execution of an application program with restricted rights; the abstraction for protection provided by the operating system kernel.

14. executable image

A file containing a sequence of machine instructions and initial data values for a program.

15. process control block

A data structure that stores all the information the operating system needs about a particular process.

16. dual-mode operation

Hardware support for (at least) two privilege levels: one for executing the kernel with complete access to the capabilities of the hardware, and a second for executing user code with restricted rights.

17. privileged instruction

Instruction available in kernel mode but not in user mode.

18. hardware timer

A hardware device that can cause a processor interrupt after some delay, either in time or in instructions executed.

19. interrupt

An asynchronous signal to the processor that some external event has occurred that may require its attention.

20. interrupt handler

A kernel procedure invoked when an interrupt occurs.

21. TOCTOU (time-of-check-to-time-of-use)

A security vulnerability arising when an application can modify the user memory holding a system call parameter (such as a file name) after the kernel checks the validity of the parameter, but before the parameter is used in the actual implementation of the routine.
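
The card describes the kernel-level form; the same check-then-use race also appears at user level. A minimal C sketch, using a hypothetical path /tmp/report: the gap between the access() check and the open() call is the window in which an attacker can swap the file for, say, a symlink to a protected file.

    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "/tmp/report";   /* hypothetical file name */

        /* time of check: is the current file writable by the real user? */
        if (access(path, W_OK) == 0) {
            /* window of vulnerability: another process can replace the
               file (e.g., with a symlink) right here */

            /* time of use: opens whatever the name refers to *now* */
            int fd = open(path, O_WRONLY);
            if (fd >= 0) {
                write(fd, "data\n", 5);
                close(fd);
            }
        }
        return 0;
    }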

22. bootloader

A program stored at a fixed position on disk (or flash RAM) that loads the operating system into memory and starts it executing.

23. host operating system

An operating system that provides the abstraction of a virtual machine, to run another operating system as an application.

24. guest operating system

An operating system running in a virtual machine.

25. shell

A job control system implemented as a user-level process. When a user types a command to the shell, it creates a process to run the command.

26. producer-consumer relationship

The shell can use a pipe to connect two programs together so that the output of one is the input of the other.

27. client-server relationship

Two-way communication between processes, where the client sends requests to the server to do some task, and when the operation is complete, the server replies back to the client.

28. microkernel

An operating system design where the kernel itself is kept small, and instead most of the functionality of a traditional operating system kernel is put into a set of user-level processes, or servers, accessed from user applications via interprocess communication.

29. monolithic kernel

An operating system design where most of the operating system functionality is linked together inside the kernel.

30. parent process

The process creator.

31. child process

The process being created.

32. UNIX fork

Makes a complete copy of the calling (parent) process; the two copies are distinguished by fork's return value, which is 0 in the child and the child's process ID in the parent.

33. UNIX exec

Brings a new executable image into memory and starts it running in place of the calling process.

34. UNIX wait

Pauses the parent until the child finishes, crashes, or is terminated.
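
The three calls above are commonly used together. A minimal sketch of a shell-like parent running /bin/ls (the path and arguments are just for illustration; error handling trimmed):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                 /* copy the parent process */
        if (pid == 0) {
            /* child: fork returned 0; replace this copy with a new image */
            char *argv[] = { "ls", "-l", NULL };
            execv("/bin/ls", argv);
            perror("execv");                /* reached only if exec fails */
            exit(1);
        } else if (pid > 0) {
            /* parent: fork returned the child's pid; wait for it to finish */
            int status;
            waitpid(pid, &status, 0);
            printf("child %d exited\n", (int)pid);
        }
        return 0;
    }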

35. concurrency

Multiple activities that can happen at the same time.

36. thread

A single execution sequence that represents a separately schedulable task.

37. thread scheduler

Software that maps threads to processors by switching between running threads and threads that are ready but not running.

38. cooperative multithreading

Each thread runs without interruption until it explicitly relinquishes control of the processor.

39. preemptive multithreading

The operating system scheduler may switch out a running thread, e.g., on a timer interrupt, without any explicit action by the thread to relinquish control at that point.

40. ready list

The set of threads that are ready to be run but which are not currently running.

41. waiting list

The set of threads that are waiting for a synchronization event or timer expiration to occur before becoming eligible to be run.

42. finished list

The set of threads that are complete but not yet de-allocated.

43. kernel thread

A thread that is implemented inside the operating system kernel.

44. thread context switch

Suspend execution of a currently running thread and resume execution of some other thread.

45. policy-mechanism separation

A system design principle where the implementation of an abstraction is independent of the resource allocation policy of how the abstraction is used.

46. green threads

A thread system implemented entirely at user level, without any reliance on operating system kernel services other than those designed for single-threaded processes.

47. scheduler activations

A multiprocessor scheduling policy where each application is informed of how many processors it has been assigned and whenever the assignment changes.

48. asynchronous I/O

A design pattern for system calls to allow a single-threaded process to make multiple concurrent I/O requests. When the process issues an I/O request, the system call returns immediately. The process later on receives a notification when the I/O completes.

49. event-driven programming

A coding design pattern where a thread spins in a loop; each iteration gets and processes the next I/O event.
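
A minimal sketch of such a loop using POSIX poll(), with standard input as the only event source; each iteration blocks for the next I/O event and processes it:

    #include <poll.h>
    #include <unistd.h>

    int main(void) {
        struct pollfd fds[1];
        fds[0].fd = STDIN_FILENO;           /* the one event source here */
        fds[0].events = POLLIN;

        for (;;) {                          /* the event loop */
            if (poll(fds, 1, -1) < 0)       /* block until an event is ready */
                break;
            if (fds[0].revents & POLLIN) {
                char buf[256];
                ssize_t n = read(fds[0].fd, buf, sizeof buf);
                if (n <= 0)                 /* EOF or error: leave the loop */
                    break;
                write(STDOUT_FILENO, buf, n);   /* "process" the event */
            }
        }
        return 0;
    }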

50. continuation

A data structure used in event-driven programming that keeps track of a task's current state and its next step.

51. data parallel programming

A programming model where the computation is performed in parallel across all items in a data set.

52. mechanism

How to do something.

53. policy

What to do or when to do it (i.e., a decision-making rule).

54. race condition

When the behavior of a program relies on the interleaving of operations of different threads.
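
A minimal sketch of such a race on a shared counter: counter++ is a non-atomic read-modify-write, so the final total depends on how the two threads interleave and is usually less than 2,000,000.

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                       /* shared state, no lock */

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                      /* non-atomic read-modify-write */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("%ld\n", counter);           /* frequently less than 2000000 */
        return 0;
    }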

55. atomic operation

Indivisible operations that cannot be interleaved with or split by other operations.

56. memory barrier

An instruction that prevents the compiler and hardware from reordering memory accesses across the barrier: no accesses before the barrier are moved after it, and no accesses after the barrier are moved before it.

57. synchronization variable

A data structure used for coordinating concurrent access to shared state.

58. state variable

Member variable of a shared object.

59. mutual exclusion

When one thread uses a lock to prevent concurrent access to a shared data structure.

60. lock

A type of synchronization variable used for enforcing atomic, mutually exclusive access to shared data.

61. critical section

A sequence of code that operates on shared state.

62. condition variable

A synchronization variable that enables a thread to efficiently wait for a change to shared state protected by a lock.
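
Tying the last few cards together, a minimal pthreads sketch: the mutex protects the shared state variable, the code between lock and unlock is the critical section, and the condition variable lets a consumer wait efficiently for the state to change.

    #include <pthread.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t item_ready = PTHREAD_COND_INITIALIZER;
    int items = 0;                          /* shared state variable */

    void produce(void) {
        pthread_mutex_lock(&lock);          /* enter critical section */
        items++;
        pthread_cond_signal(&item_ready);   /* wake one waiting thread */
        pthread_mutex_unlock(&lock);        /* leave critical section */
    }

    void consume(void) {
        pthread_mutex_lock(&lock);
        while (items == 0)                  /* always re-check the condition */
            pthread_cond_wait(&item_ready, &lock);  /* atomically release lock and sleep */
        items--;
        pthread_mutex_unlock(&lock);
    }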

63. readers/writers lock

A lock which allows multiple "reader" threads to access shared data concurrently, provided they never modify the shared data, but still provides mutual exclusion whenever a "writer" thread is reading or modifying the shared data.
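
POSIX exposes this directly as pthread_rwlock_t; a minimal sketch:

    #include <pthread.h>

    pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
    int shared_value = 0;

    int read_value(void) {
        pthread_rwlock_rdlock(&rw);         /* many readers may hold this at once */
        int v = shared_value;
        pthread_rwlock_unlock(&rw);
        return v;
    }

    void write_value(int v) {
        pthread_rwlock_wrlock(&rw);         /* exclusive: excludes readers and writers */
        shared_value = v;
        pthread_rwlock_unlock(&rw);
    }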

64. synchronization barrier

A synchronization primitive where n threads operating in parallel check in to the barrier when their work is completed. No thread returns from the barrier until all n check in.
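
POSIX exposes this as pthread_barrier_t; a minimal sketch with n = 4 worker threads:

    #include <pthread.h>

    #define N 4
    pthread_barrier_t barrier;

    void *worker(void *arg) {
        (void)arg;
        /* ... phase 1 of the computation ... */
        pthread_barrier_wait(&barrier);     /* no thread passes until all N check in */
        /* ... phase 2 may assume every thread finished phase 1 ... */
        return NULL;
    }

    int main(void) {
        pthread_t t[N];
        pthread_barrier_init(&barrier, NULL, N);
        for (int i = 0; i < N; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < N; i++)
            pthread_join(t[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }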

65. starvation

The lack of progress for one task, due to resources given to higher priority tasks.

66. atomic read-modify-write instruction

A processor-specific instruction that lets one thread temporarily have exclusive and atomic access to a memory location while the instruction executes. Typically, the instruction (atomically) reads a memory location, does some simple arithmetic operation to the value, and stores the result.

67. spinlock

A lock where a thread waiting for a BUSY lock "spins" in a tight loop until some other thread makes it FREE.

68. serialization

Event A must happen before Event B.

69. mutual exclusion

Events A and B must not happen at the same time.

70. false sharing

Extra inter-processor communication required because a single cache entry contains portions of two different data structures with different sharing patterns.

71. fine-grained locking

A way to increase concurrency by partitioning an object's state into different subsets each protected by a different lock.

72. ownership design pattern

A technique for managing concurrent access to shared objects in which at most one thread owns an object at any time, and therefore the thread can access the shared data without a lock.

73. test and test-and-set

An implementation of a spinlock where the waiting processor waits until the lock is FREE before attempting to acquire it.

74. compare-and-swap

An atomic read-modify-write instruction that first tests the value of a memory location, and if the value has not been changed, sets it to a new value.
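
The previous two cards combine naturally. A minimal sketch of a test-and-test-and-set spinlock built from C11 atomics, where a waiter spins on plain reads until the lock looks FREE and only then attempts the atomic compare-and-swap:

    #include <stdatomic.h>

    #define FREE 0
    #define BUSY 1

    atomic_int lock_word = FREE;

    void spin_acquire(void) {
        for (;;) {
            while (atomic_load(&lock_word) == BUSY)
                ;                           /* test: read-only spin, cache-friendly */
            int expected = FREE;
            /* test-and-set via compare-and-swap: claim the lock only if still FREE */
            if (atomic_compare_exchange_weak(&lock_word, &expected, BUSY))
                return;                     /* we now own the lock */
        }
    }

    void spin_release(void) {
        atomic_store(&lock_word, FREE);
    }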

75. MCS lock

An efficient spinlock implementation (named for Mellor-Crummey and Scott) where each waiting thread spins on a separate memory location.

76. read-copy-update (RCU)

A synchronization abstraction that allows concurrent access to a data structure by multiple readers and a single writer at a time.

77. publish (RCU)

For a read-copy-update lock, a single, atomic memory write that updates a shared object protected by the lock. The write allows new reader threads to observe the new version of the object.

78. grace period (RCU)

For a shared object protected by a read-copy-update lock, the time from when a new version of a shared object is published until the last reader of the old version is guaranteed to be finished.

79. lock-free data structures

A concurrent data structure that guarantees progress for some thread: some method will finish in a finite number of steps, regardless of the state of other threads executing in the data structure.

80. deadlock

A cycle of waiting among a set of threads, where each thread waits for some other thread in the cycle to take some action.

81. bounded resources

A necessary condition for deadlock: there are a finite number of resources that threads can simultaneously use.

82. no preemption

A necessary condition for deadlock to occur: once a thread acquires a resource, its ownership cannot be revoked until the thread acts to release it.

83. wait while holding

A necessary condition for deadlock to occur: a thread holds one resource while waiting for another.

84. circular waiting

A necessary condition for deadlock to occur: there is a set of threads such that each thread is waiting for a resource held by another.

85. lock ordering

A widely used approach to prevent deadlock, where locks are acquired in a pre-determined order.
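
A minimal sketch of the idea for a transfer between two hypothetical accounts: locks are always acquired in a fixed global order (here, by address), so two concurrent transfers in opposite directions cannot wait on each other in a cycle.

    #include <pthread.h>

    struct account {
        pthread_mutex_t lock;
        long balance;
    };

    void transfer(struct account *from, struct account *to, long amount) {
        /* global order: the lock at the lower address is always taken first */
        struct account *first  = (from < to) ? from : to;
        struct account *second = (from < to) ? to : from;

        pthread_mutex_lock(&first->lock);
        pthread_mutex_lock(&second->lock);
        from->balance -= amount;
        to->balance   += amount;
        pthread_mutex_unlock(&second->lock);
        pthread_mutex_unlock(&first->lock);
    }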

86. safe state (deadlock)

A state of an execution such that, regardless of the sequence of future resource requests, there is at least one safe sequence of decisions as to when to satisfy requests such that all pending and future requests are met.

87. unsafe state (deadlock)

A state of an execution such that there is at least one sequence of future resource requests that leads to deadlock no matter what processing order is tried.

88. deadlocked state

The system has at least one deadlock.

89. workload

A set of tasks for some system to perform, along with when each task arrives and how long each task takes to complete.

90. fairness

Partitioning of shared resources between users or applications either equally or balanced according to some desired priorities.

91. compute-bound task

A task that primarily uses the processor and does little I/O.

92. I/O-bound task

A task that primarily does I/O, and does little processing.

93. preemption

When a scheduler takes the processor away from one task and gives it to another.

94. time quantum (a.k.a. time slice)

The length of time that a task is scheduled before being preempted.

95. round robin

A scheduling policy that takes turns running each ready task for a limited period before switching to the next task.

96. short-term scheduler (a.k.a. dispatcher)

Runs after every interrupt to choose which ready task to run next.

97. medium-term scheduler (a.k.a. swapper)

Runs every few seconds, deciding which processes to swap in or out of memory.

98. long-term scheduler (a.k.a. batch job initiator)

Runs on the order of minutes, deciding when to admit new batch jobs into the system.

99. affinity scheduling

A scheduling policy where tasks are preferentially scheduled onto the same processor they had previously been assigned, to improve cache reuse.

100. oblivious scheduling

A scheduling policy where the operating system assigns threads to processors without knowledge of the intent of the parallel application.