D686: Operating Systems for Computer Scientists (chapters 4 & 5)


59 Terms

1

threads

lightweight units of process execution that share the same memory space but run independently

2

register set

collection of registers used to store data and instructions currently being processed by the CPU

3

user thread

a thread that operates exclusively in user mode, managed and scheduled by user-level thread libraries without direct kernel involvement

4

kernel threads

threads that operate in kernel mode and are created, scheduled, and managed directly by the operating system kernel, giving them direct access to system resources

5

user thread

A thread running in user mode.

6

kernel threads

Threads running in kernel mode.

7

thread control block (TCB)

a data structure in an operating system that stores information about a thread, including its identifier, state, CPU information, priority, and pointers to related threads

8

thread ID

unique identifier assigned to a thread by the operating system

9

thread state

the current condition or status of a thread in the system (e.g., running, waiting, blocked)

10

register contents

the current values stored in the CPU registers used by the thread

11

thread priority

the assigned importance level of a thread, determining its scheduling order compared to other threads

12

multicore

refers to a CPU architecture that integrates multiple processing cores onto a single chip, or within a single system, enabling parallel execution of tasks

13

data parallelism

a computational technique where subsets of identical data are distributed across multiple processing cores, each core performing the same operation simultaneously on its subset of data

14

task parallelism

a computational approach where tasks or threads are distributed across multiple processing cores, with each task executing a unique operation independently, often on different or related data sets

15

fork()

a system call that creates a new process by duplicating the existing process

16

exec()

a system call that replaces the current process with a new program

17

signal

a way to alert a process about an event

18

default signal handler

the built-in handler for signals unless a process uses its own

19

pthread_kill()

a function in the POSIX threads (Pthreads) library used to send a signal to a specific thread within the same process

20

asynchronous procedure call (APC)

a mechanism (used in Windows) that lets a thread register a function to be called when the thread receives notification of a particular event; roughly the Windows counterpart of delivering a UNIX signal to a specific thread

21

thread cancellation

ending a thread before it finishes

22

pthread_cancel()

a function that requests the cancellation of a specific thread in Pthreads

23

process synchronization

the coordination of processes to ensure they operate smoothly and efficiently without interfering with each other, especially when accessing shared resources or data

24

mutex

short for mutual exclusion, a mutex is a lock that ensures only one process can access a critical section or resource at a time, preventing race conditions

25

semaphores

synchronization tools used to control access to shared resources by multiple processes

26

starvation

a situation where a process is perpetually denied access to resources because other processes continuously acquire them

27

circular wait

a condition where each process in a set is waiting for a resource held by another process in the same set, contributing to deadlock

28

fair resource allocation

a principle ensuring that all processes have fair access to resources, preventing starvation and ensuring balanced system performance

29

producer-consumer scenario

where one process produces data and another consumes it, requiring synchronization to ensure that the buffer used does not overflow or underflow

30

concurrency

the ability of a system to make progress on multiple processes or threads at the same time, by interleaving their execution on one core or running them in parallel on several, improving performance and responsiveness

31

critical section

A section of code responsible for changing data that must only be executed by one thread or process at a time to avoid a race condition.

32

entry section

The section of code within a process that requests permission to enter its critical section.

33

exit section

The section of code within a process that cleanly exits the critical section.

34

remainder section

Whatever code remains to be processed after the critical and exit sections.

35

preemptive kernel

A type of kernel that allows a process to be preempted while it is running in kernel mode.

36

nonpreemptive kernels

A type of kernel that does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.

37

mutex lock

A mutual exclusion lock; the simplest software tool for assuring mutual exclusion.

38

contended

A term describing the condition of a lock when a thread blocks while trying to acquire it.

39

uncontended

A term describing a lock that is available when a thread attempts to acquire it.

40

busy waiting

A practice that allows a thread or process to use CPU time continuously while waiting for something. An I/O loop in which an I/O thread continuously reads status information while waiting for I/O to complete.

41

spinlock

A locking mechanism in which a waiting thread repeatedly tests ("spins" on) the lock, consuming CPU time, until the lock becomes available.

42

semaphore

An integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().

43

counting semaphore

A semaphore that has a value between 0 and N, to control access to a resource with N instances.

44

binary semaphore

A semaphore of values 0 and 1 that limits access to one resource (acting similarly to a mutex lock).

45

readers-writers problem

A synchronization problem in which one or more processes or threads write data while others only read data.

46

reader-writer lock

A lock appropriate for access to an item by two types of accessors, read-only and read-write.

47

dining-philosophers problem

A classic synchronization problem in which multiple operators (philosophers) try to access multiple items (chopsticks) simultaneously.

48

monitor

a high-level synchronization construct that packages shared data with the procedures that operate on it and guarantees that at most one thread is active inside it at a time, preventing race conditions

49

synchronization

coordinating processes or threads to ensure they work together properly without causing problems

50

initialization

the setup code included in the monitor package and used once when creating the monitor

51

encapsulation

keeping data and methods that use it together in one unit to make it easier to manage and protect

52

monitor entry queue

a place that holds all the threads (or procedures) waiting to enter the monitor

53

monitor procedure

a function defined inside the monitor that can be called from outside it; only one monitor procedure may be active at a time

54

private data

data kept inside the monitor, including private functions, that cannot be accessed from outside the monitor

55

signal()

an operation that resumes one of the processes suspended on the monitor's waiting queue (it has no effect if none are waiting)

56

wait()

an operation that suspends the calling process and places it on a waiting queue inside the monitor until another process signals

57

interprocess communication (IPC)

mechanisms allowing processes to communicate and synchronize their actions

58

race conditions

occur when multiple processes access and modify shared data concurrently, leading to unpredictable outcomes depending on the timing of their execution

59

shared variables

memory locations accessible by multiple processes for reading and writing data
