OS Midterm



116 Terms

1
New cards

data register

small fast data storage location on the CPU (aka buffer register)

2
New cards

address register

specifies the address in memory for the next read or write

3
New cards

PC (program counter)

holds address of next instruction to be fetched

4
New cards

instruction register

stores the fetched instruction

5
New cards

interrupt

allows other modules to interrupt the normal sequencing of the processor

6
New cards

hit ratio

fraction of all memory accesses found in the cache

7
New cards

Principle of locality

memory references by processor tend to cluster.

8
New cards

temporal locality

recently accessed memory addresses tend to be accessed again soon (the same small set of addresses is referenced repeatedly over a short period of time)

9
New cards

spatial locality

memory addresses near recently accessed addresses tend to be requested next (e.g., sequential access)

10
New cards

cache

small, quick-access storage close to the CPU used for repetitively accessed data or instructions; modern CPUs typically organize it into three levels (L1, L2, L3)

11
New cards

memory hierarchy

system of memory levels balancing cost and capacity vs speed. Bigger = slower = cheaper

12
New cards

volatile memory

memory that will be cleared when computer is powered off (ex: RAM)

13
New cards

purpose of interrupts

helpful for handling asynchronous events, multitasking, and error handling

14
New cards

interrupt classes

program (illegal instruction)

timer

I/O

hardware failure

15
New cards

interrupt handler

determines nature of interrupt and performs necessary actions

16
New cards

program flow with and without interrupts

with interrupts, the processor can execute other instructions while a program waits on something slow (like I/O), instead of idling until the operation completes

17
New cards

multiple interrupt handling

Approach 1: Disable interrupts while processing an interrupt

Approach 2: Use a priority scheme

18
New cards

calculation of EAT (Effective Access Time)

Ts = H*T1 + (1-H)*(T1 + T2)

where

Ts = average access time

H = hit ratio

T1 = access time of M1 (cache)

T2 = access time of M2 (main memory)
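
A quick worked example in the same notation (the numbers are illustrative, not from the course): with H = 0.95, T1 = 10 ns, and T2 = 100 ns,

Ts = 0.95*10 + 0.05*(10 + 100) = 9.5 + 5.5 = 15 ns

so a high hit ratio keeps the average access time close to the cache's speed.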

19
New cards

Instruction execution order

the processor fetches the next instruction (at the address in the PC), then executes it; this fetch-execute cycle repeats

20
New cards

Operating System

interface between applications and hardware that controls the execution of programs

21
New cards

basic elements of a computer

  • processor

  • I/O modules

  • main memory

  • system bus

22
New cards

system bus

provides communication between computer components

23
New cards

I/O modules

move data between computer and external environment

  • secondary memory

  • communication equipment

  • terminal

24
New cards

programmed I/O

the I/O module performs the requested action and sets the appropriate bits in the I/O status register; the processor periodically polls the status register (no interrupts are used)

25
New cards

Interrupt-Driven I/O

I/O module interrupts processor when ready to exchange data

26
New cards

Direct Memory Access (DMA)

a module (either a separate module on the system bus or incorporated into an I/O module) transfers blocks of data directly between I/O and main memory without processor involvement; the processor is interrupted only after the block transfer completes

27
New cards

symmetric multiprocessors (SMP)

stand-alone computer system where

  • 2+ processors

  • processors share memory, access to I/O

  • system controlled by one OS

  • high performance/scaling/availability

28
New cards

kernel

the central component of the OS; contains the most frequently used OS functions (and possibly other portions) and manages resources, processes, and memory

29
New cards

turnaround time

total time to execute a process

30
New cards

process switch

switching the processor between processes; requires saving and restoring register contents and other state (aka context switch)

31
New cards

process

Instance of a program in execution; unit of activity that can be executed on a processor

32
New cards

3 components of a process

  • executable program

  • associated data needed by program

  • execution context

33
New cards

execution context

  • internal data OS can supervise/control

  • contents of registers

  • process state, priority, I/O wait status

34
New cards

5 OS management responsibilities

  • process isolation

  • automatic allocation + management

  • modular programming support

  • protection and access control

  • long-term storage

35
New cards

Application Binary Interface (ABI)

defines a standard for binary portability: it specifies the system call interface to the OS and the hardware resources available to a program through the user ISA

36
New cards

Instruction Set Architecture (ISA)

defines the set of machine instructions a CPU can execute; serves as the interface between hardware and software

37
New cards

thread

a lightweight process that shares resources within a process. Dispatchable unit of work; includes a thread context

38
New cards

multithreaded process

a process that contains multiple concurrent threads of execution

39
New cards

multiprogramming

keeping multiple processes in main memory and switching the processor's execution between them

40
New cards

degree of multiprogramming

number of concurrent processes allowed in main memory

41
New cards

goals of an OS

convenience

efficiency

ability to evolve

manage computer resources

42
New cards

multitasking vs parallelism

multitasking executes multiple processes on one CPU by allocating each process slices of CPU time; parallel processing executes multiple processes simultaneously on multiple cores

43
New cards

activities associated with processes

creation, execution, scheduling, resource management

44
New cards

virtual memory

gives a program its own logical address space of relative addresses, translated to physical addresses at run time, so the program can be written without regard to how much physical memory is available

45
New cards

paging

system of fixed size blocks assigned to processes

46
New cards

microkernel architecture

assigns few essential functions to kernel

  • simple implementation

  • flexible

  • good for distributed environment

smaller than monolithic kernels

47
New cards

monolithic kernel

kernel in which all OS components run in a single address space; large and hard to design/maintain, but high performing

48
New cards

signal

mechanism to send message kernel→process

49
New cards

system call

mechanism to send message process→kernel
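
A minimal sketch of what this looks like from a program's point of view on a POSIX system (the message text is illustrative); write() is a library wrapper that traps into the kernel:

/* user-mode code asking the kernel to perform I/O via the write system call */
#include <unistd.h>

int main(void) {
    const char msg[] = "hello from user mode\n";
    /* control transfers to the kernel, which does the work and returns */
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
    return 0;
}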

50
New cards

distributed operating system

provide illusion of

  • single main and secondary memory space

  • unified access facilities

51
New cards

object oriented OS

  • add modular extensions to small kernel

  • easy OS customizability

  • eases development of tools

52
New cards

5 process states

  • new

  • ready

  • running

  • blocked

  • exiting

53
New cards

blocked vs suspended

Blocked

  • waiting on event

  • can run once event happens

Suspended

  • able to run

  • instructed not to run

54
New cards

swapping

moving pages from memory to disk

  • happens when OS runs out of physical memory

55
New cards

dispatcher

small program that switches processor between processes

56
New cards

ready queue

queue that stores processes ready to run (waiting for CPU time)

57
New cards

event queue

queue that manages and processes asynchronous events (ex: timers, I/O)

58
New cards

virtual machine

dedicates one or more cores to a particular process and leaves the processor alone to devote itself entirely to that process

59
New cards

preemption

suspending a running process to allow another process to run

60
New cards

process switch

7 step execution to switch processes

  • save processor context

  • update PCB

  • move PCB to appropriate queue

  • select new process

  • update PCB

  • update memory data structures

  • restore processor context

61
New cards

process image

process’s state at a given moment

  • user-level context

  • register context

  • system level context

62
New cards

process control block (PCB)

data needed by OS to control process

  • identifiers

  • user-visible registers

  • control and status register

  • scheduling

  • privileges

  • resources

  • memory management

63
New cards

role of PCB

  • contain info about process

  • read/modified by every module in OS

  • defines state of OS

  • hard to protect

64
New cards

User Running (process state)

Executing in user mode

65
New cards

Kernel Running (process state)

Executing in kernel mode

66
New cards

ready to run, in memory (process state)

ready to run as soon as the kernel schedules it

67
New cards

asleep in memory (process state)

unable to run until event occurs; process in main memory (blocked state)

68
New cards

ready to run, swapped (process state)

ready to run, but must be swapped into main memory

69
New cards

sleeping, swapped (process state)

process awaiting an event and swapped out to secondary storage (blocked state)

70
New cards

preempted (process state)

able to run, but instructed not to; the process is returning from kernel mode to user mode when the kernel preempts it and performs a process switch to run another process

71
New cards

created/new (process state)

process is newly created and not yet ready to run; the parent has requested the child's creation, but the child has not been allocated space or loaded into main memory yet

72
New cards

zombie (process state)

process no longer exists (it has terminated), but it leaves behind a record for its parent process to collect

73
New cards

I/O bound processes

processes that spend a significant amount of time waiting for I/O responses

74
New cards

CPU bound processes

processes that spend almost all of their time doing computation on the CPU

75
New cards

User vs Kernel mode implementation

user mode requests services from OS through system calls and interrupts

76
New cards

User vs Kernel mode reasoning

  • protection

  • security

  • isolation

  • flexibility

77
New cards

When Kernel mode is used

applications run in user mode until they need privileged services, which they request through system calls and interrupts

78
New cards

process creation steps

  • assign PID

  • allocate space

  • initialize PCB

  • set linkages

  • create/expand other data structures
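
These are the kernel-internal steps; from user code on a Unix-like system they are triggered by a call such as fork(). A minimal, purely illustrative sketch:

/* requesting process creation; the kernel performs the steps above internally */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();        /* kernel assigns a PID, allocates space, builds a PCB */
    if (pid == 0) {
        printf("child: pid %d\n", (int)getpid());
    } else if (pid > 0) {
        waitpid(pid, NULL, 0); /* parent collects the child's exit record */
        printf("parent: created child %d\n", (int)pid);
    }
    return 0;
}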

79
New cards

Trap

an error or exceptional condition generated by the currently running process

also known as an exception or fault

80
New cards

when process switches occur

  • timeout

  • I/O

  • system calls

  • interrupts

81
New cards

User level thread

  • thread management done by application

  • kernel not aware of threads

82
New cards

Kernel level thread

thread management done by kernel

83
New cards

benefits of threads

threads share the memory and resources of their process, and are faster to create, terminate, and switch between than processes

84
New cards

5 components of a thread

  • execution state

  • thread context

  • execution stack

  • storage

  • memory/resource access

85
New cards

thread execution states

  • ready

  • running

  • blocked

86
New cards

thread operations

  • spawn

  • block

  • unblock

  • finish
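
A minimal sketch of spawn and finish using POSIX threads (the thread and function names are illustrative); block/unblock happen implicitly while a thread waits:

/* spawn a thread, let it finish, and wait for it */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    printf("worker running\n");
    return NULL;                              /* finish */
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL); /* spawn */
    pthread_join(tid, NULL);                  /* main blocks until worker finishes */
    return 0;
}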

87
New cards

ULT pros and cons

pros:

  • doesn’t require kernel mode

  • works on any OS

cons:

  • system calls block all threads of a process

  • cannot take advantage of multiprocessing (the kernel assigns the whole process to a single processor)

88
New cards

KLT pros and cons

pros:

  • can run multiple threads in parallel

  • can schedule new thread if thread is blocked

cons:

  • needs kernel mode

  • OS specific

89
New cards

ULT vs KLT applications

ULT: web servers, games, user level applications

KLT: network services, device drivers, background applications

90
New cards

user vs kernel mode

User: most applications run here, restricted access, safer

Kernel: unrestricted access, dangerous

91
New cards

Amdahl’s law

gives the theoretical speedup of a program on multiple processors based on the fraction of the code that can be parallelized; shows that speedup has diminishing returns and does not scale linearly, which helps determine a sensible number of processors
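
The usual statement of the law, in the same style as the EAT formula above, where f is the fraction of the program that can be parallelized and N is the number of processors:

Speedup = 1 / ((1 - f) + f/N)

As N grows, speedup approaches 1/(1 - f), which is why adding processors gives diminishing returns.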

92
New cards

Linux tasks

  • single-threaded process

  • thread

  • kernel tasks

93
New cards

Linux namespaces

separate views that process can have of the system

  • helps create the illusion that a process is the only process on the system

94
New cards

monitor

a synchronization construct implemented at the programming-language level; provides semaphore-like functionality but is easier to control

95
New cards

synchronization

enforce mutual exclusion

achieved by condition variables

  • binary variables that flag suspension or resumption of a process
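
A minimal sketch of a condition variable with POSIX threads (names are illustrative): a consumer suspends until a producer flags that data is ready and resumes it.

/* condition variable: suspend a consumer until a producer signals readiness */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int data_ready = 0;                  /* the flag variable */

static void *consumer(void *arg) {
    pthread_mutex_lock(&lock);
    while (!data_ready)                     /* suspend until signaled */
        pthread_cond_wait(&ready, &lock);
    printf("consumer resumed\n");
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *producer(void *arg) {
    pthread_mutex_lock(&lock);
    data_ready = 1;
    pthread_cond_signal(&ready);            /* resume the waiting consumer */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t c, p;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(c, NULL);
    pthread_join(p, NULL);
    return 0;
}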

96
New cards

message passing

provides both synchronization and communication

has send and receive

both sender and receiver can be blocked

97
New cards

addressing

schemes for specifying processes in send and receive

direct and indirect

98
New cards

readers/writers problem

data area shared among many processes

3 conditions

  • any number of readers

  • 1 writer

  • no reading while a writer is writing
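
A minimal sketch of the classic readers-have-priority solution using POSIX semaphores (function names are illustrative); semaphores are used rather than mutexes because the reader that releases wsem may not be the reader that acquired it:

/* readers/writers: any number of readers, one writer, no reading while writing */
#include <semaphore.h>

static sem_t x;              /* protects read_count */
static sem_t wsem;           /* held by the writer, or on behalf of all readers */
static int read_count = 0;

void init(void)         { sem_init(&x, 0, 1); sem_init(&wsem, 0, 1); }

void reader_enter(void) {
    sem_wait(&x);
    if (++read_count == 1) sem_wait(&wsem);   /* first reader locks out writers */
    sem_post(&x);
}
void reader_exit(void) {
    sem_wait(&x);
    if (--read_count == 0) sem_post(&wsem);   /* last reader admits writers */
    sem_post(&x);
}
void writer_enter(void) { sem_wait(&wsem); }  /* one writer at a time */
void writer_exit(void)  { sem_post(&wsem); }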

99
New cards

race condition

when multiple threads/processes read and write data items; final result depends on order of execution
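
A minimal sketch of a race with POSIX threads (names and counts are illustrative): two threads increment a shared counter without synchronization, so the final value depends on how their reads and writes interleave.

/* two unsynchronized threads race on a shared counter */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;

static void *bump(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                     /* read-modify-write is not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (usually less than 2000000)\n", counter);
    return 0;
}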

100
New cards

mutual exclusion

requirement that, while one process is in a critical section accessing shared resources, no other process may be in a critical section accessing those same resources
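
Continuing the sketch above, mutual exclusion fixes the race by letting only one thread into the critical section at a time (here with a POSIX mutex; names are illustrative).

/* the same counter, with the critical section protected by a mutex */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);     /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);   /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (always 2000000)\n", counter);
    return 0;
}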