cop4610 exam2


84 Terms

1
New cards

Monitor

A lock + condition variables

2
New cards

Lock

provides mutual exclusion to shared data.

a lock provides two operations
-Acquire()
-Release()

3
New cards

Condition Variable

provides a queue for waiting threads inside a critical section

each condition variable
-consists of a queue of threads
-provides three operations
**Wait()
**Signal()
**Broadcast()

4
New cards

Hoare vs. Mesa Monitors

Hoare Monitors (used in most textbooks)
-Signal() transfers the CPU directly to a waiting thread.

Mesa Monitors (used in most real operating systems)
-Signal() only puts a waiting thread on the scheduler's ready queue
-By the time the awakened thread gets the CPU, the waiting condition may no longer be true and must be retested.
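
The Mesa-style "retest in a loop" discipline can be sketched with Python's threading primitives. This is a hypothetical bounded buffer (the class name and capacity are illustrative, not from the course):

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor-style bounded buffer: one lock plus condition variables."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:
            # Mesa semantics: recheck the condition in a while loop, because
            # by the time a signaled thread runs, the buffer may be full again.
            while len(self.items) == self.capacity:
                self.not_full.wait()
            self.items.append(item)
            self.not_empty.notify()

    def get(self):
        with self.lock:
            while not self.items:
                self.not_empty.wait()
            item = self.items.popleft()
            self.not_full.notify()
            return item
```

Under Hoare semantics an `if` would suffice, since Signal() hands the CPU directly to the waiter; Mesa code must always use `while`.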

5
New cards

Semaphores VS Monitors

We can't quite implement monitors with semaphores.

Condition variables only work inside a lock
-using semaphores inside a lock may deadlock

Condition variables have no history, but semaphores do
-Signal() with an empty queue does nothing
**A subsequent Wait() will wait
-V() increments semaphore
**A subsequent P() will not wait
-Wait() will always wait
-P() might not wait
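
The history difference can be demonstrated directly with Python's `threading` module (the timeout values are arbitrary illustration choices):

```python
import threading

# A semaphore "remembers" a release (V) issued before any acquire (P):
sem = threading.Semaphore(0)
sem.release()                      # V() with no waiter: count becomes 1
acquired = sem.acquire(timeout=1)  # P() does not block; it consumes the stored V
# acquired is True

# A condition variable forgets a notify issued before any wait:
cond = threading.Condition()
with cond:
    cond.notify()                  # Signal() with an empty queue does nothing
with cond:
    woke = cond.wait(timeout=0.2)  # a later Wait() still blocks until timeout
# woke is False
```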

6
New cards

Deadlock

Occurs when threads are waiting for resources with circular dependencies.

Deadlock implies starvation

7
New cards

Preemptable Resource

A resource (e.g., the CPU) that can be taken away from its current owner without causing computation to fail
-deadlocks involving preemptable resources can be resolved by reallocating resources

8
New cards

Nonpreemptable Resource

A resource that cannot be taken away from its current owner without causing computation to fail

9
New cards

Starvation

a thread waits indefinitely

10
New cards

Checkpointing

taking snapshots of system states from time to time

11
New cards

Four Conditions for Deadlocks

limited access (lock-protected resources)

no preemption (if someone has the resource, it cannot be taken away)

wait while holding (holding a resource while requesting and waiting for the next resource)

circular chain of requests

12
New cards

Banker's Algorithm

-A thread states its maximum resource needs in advance.

-The OS allocates resources dynamically as needed. A thread waits if granting its request would lead to deadlock.

-A request can be granted if some sequential ordering of threads is deadlock free.
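
The safety check at the heart of the Banker's Algorithm can be sketched as follows, assuming resource vectors are plain lists (variable names are illustrative):

```python
def is_safe(available, max_need, allocation):
    """Banker's safety check: does some sequential ordering of threads exist
    in which each can acquire its remaining need, finish, and release
    everything it holds?"""
    n = len(max_need)
    work = list(available)
    finished = [False] * n
    # Remaining need of each thread = declared maximum - current allocation.
    need = [[m - a for m, a in zip(max_need[i], allocation[i])]
            for i in range(n)]
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(len(work))):
                # Thread i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)
```

A request is granted only if the state after granting it would still pass this check.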

13
New cards

Deadlock Recovery Techniques

  • Scan the resource allocation graph

  • Detect circular chains of requests

  • Recover from the deadlock

14
New cards

Interprocess Communication

Processes communicate among address spaces.
-Byte stream (e.g., a pipe)
-Message passing (send/receive)
-File system (read and write)
-Shared memory

Direct message passing names the receiving process explicitly
-send(P1, message)

15
New cards

16
New cards

17
New cards

18
New cards

19
New cards

System Call

A user process asks the OS to do something on the process's behalf.

20
New cards

Hardware-Supported Mechanisms

Address translation
Dual-mode operation

21
New cards

Software-Supported Mechanisms

Strong Typing
Software Fault Isolation

22
New cards

Steps to switch between kernel and user spaces

-Creates a process and initializes the address space
-Loads the program into the memory
-Initializes translation tables
-Sets the HW pointer to the translation table
-Sets the CPU to user mode
-Jumps to the entry point of the program

23
New cards

Context switching between processes vs. threads

Unlike context switching among threads, to switch among processes
-Need to save and restore pointers to translation tables

To resume process execution
-Kernel reloads old register values
-Sets CPU to user mode
-Jumps to the old program counter

24
New cards

Segment

A logically contiguous memory region

25
New cards

External Fragmentation

memory wasted because the available memory is not contiguous for allocation

26
New cards

Internal Fragmentation

allocated pages are not fully used

27
New cards

Translation Lookaside Buffers

Store recently translated memory addresses for short-term reuse

28
New cards

base and bound translation

  • Each process is loaded into a contiguous region of physical memory
  • Processes are protected from one another
  • Each process "thinks" that it owns a dedicated machine, with memory addresses from 0 to bound.
  • An OS can move a process around
    --By copying bits
    --Changing the base and bound registers
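
Base-and-bound translation fits in a few lines; the register values below are made up for illustration:

```python
def translate(vaddr, base, bound):
    """Base-and-bound translation: each process sees addresses 0..bound-1,
    mapped onto a contiguous physical region starting at base."""
    if vaddr >= bound:
        raise MemoryError("bounds violation: trap to the OS")
    return base + vaddr
```

Moving a process is just copying its memory and updating `base`; the virtual addresses the process uses never change.
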
29
New cards

Segmentation-based Translation

use a table of base-and-bound pairs

30
New cards

Paging-based translation

memory allocation via fixed-size chunks of memory, or pages
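
A minimal sketch of paged translation, assuming 4 KB pages and a dictionary standing in for the page table:

```python
PAGE_SIZE = 4096  # assumed 4 KB pages

def translate(vaddr, page_table):
    """Paging: split the virtual address into a virtual page number (VPN)
    and an offset, look up the physical frame, and recombine."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise KeyError("page fault")  # no mapping: trap to the OS
    frame = page_table[vpn]
    return frame * PAGE_SIZE + offset
```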

31
New cards

Segmented-paging translation

breaks the page table into segments

  • Easier to grow/shrink individual segments
  • Finer control of segment accesses
    ** e.g., read-only for shared code segment
    ** Recall the semantics of fork()…
  • More efficient use of physical space
  • Multiple processes can share the same code segment
  • Memory allocation is still complex
    ** Requires contiguous allocation
32
New cards

paged page tables

two-level tree of page tables

33
New cards

Caching

Storing copies of data at places that can be accessed more quickly.

34
New cards

temporal locality

Recently referenced locations are more likely to be referenced soon.

35
New cards

spatial locality

Referenced locations tend to be clustered.

36
New cards

Cache pollution

Leaving behind cache content with no locality.

37
New cards

Translation Lookaside Buffer (TLB)

  • Tracks frequently used translations
  • Avoids translations in the common case
38
New cards

virtually addressed cache

between the CPU and the translation tables

39
New cards

Physical addressed cache

between the translation tables and the main memory

40
New cards

Design issues of caching

  1. How is a cache entry lookup performed?
  2. Which cache entry should be replaced when the cache is full?
  3. How to maintain consistency between the cache copy and the real data?
41
New cards

Four types of cache misses

  1. Compulsory Misses
  2. Capacity Misses
  3. Competing Cache Misses
  4. Policy Misses
42
New cards

Ways to perform TLB lookups

  1. sequential search of the TLB table
  2. direct mapping - assigns each virtual page to a specific slot in the TLB
  3. set associativity - uses N TLB banks to perform lookups in parallel
  4. fully associative cache- allows looking up all TLB entries in parallel
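
A direct-mapped lookup (option 2) can be sketched as follows; the slot count is an arbitrary illustration value:

```python
TLB_SLOTS = 16  # illustrative size

class DirectMappedTLB:
    """Direct mapping: VPN modulo the slot count chooses the single slot
    where that page's translation may live."""
    def __init__(self):
        self.slots = [None] * TLB_SLOTS   # each slot: (vpn, frame) or None

    def lookup(self, vpn):
        entry = self.slots[vpn % TLB_SLOTS]
        if entry is not None and entry[0] == vpn:
            return entry[1]               # TLB hit
        return None                       # TLB miss: walk the page table

    def insert(self, vpn, frame):
        # A new entry evicts whatever previously occupied the slot.
        self.slots[vpn % TLB_SLOTS] = (vpn, frame)
```

Two pages whose VPNs differ by a multiple of the slot count compete for the same slot, which is exactly what set associativity (option 3) mitigates.
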
43
New cards

write-through caching policy

Immediately propagates updates through the various levels of caching.

44
New cards

write-back caching policy

Delays the propagation until the cached item is replaced.

45
New cards

Demand Paging

Loading a page into memory only when it is actively referenced.

46
New cards

page fault

A referenced page is not in memory.

47
New cards

Transparent

(invisible) mechanisms

  • A process does not know that a page fault happened
  • The OS needs to save the processor state and the faulting instruction so the process can resume transparently
48
New cards

Belady's Anomaly

Adding more frames can cause more page faults.

49
New cards

Thrashing

When the memory is overcommitted.

50
New cards

Working Set

Pages that are referenced within the past T seconds.
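
Given a timestamped reference trace, the working set is a simple filter (a toy model; real kernels approximate this with per-page reference bits):

```python
def working_set(ref_trace, now, tau):
    """Pages referenced within the last tau time units.

    ref_trace: iterable of (timestamp, page) pairs.
    """
    return {page for t, page in ref_trace if now - tau <= t <= now}
```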

51
New cards

Steps to carry out a page fault

  1. Choose a page
  2. If the page has been modified, write its contents to disk
  3. Change the corresponding page table entry and TLB entry
  4. Load new page into memory from disk
  5. Update page table entry
  6. Continue the thread
52
New cards

Page replacement policies

  • Random Replacement
  • FIFO (First In First Out)
  • MIN (Optimal)
  • LRU (Least-recently Used)
  • LFU (Least Frequently Used)
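
A minimal FIFO simulator makes these policies concrete, and also reproduces Belady's Anomaly on the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 (9 faults with 3 frames, 10 faults with 4):

```python
def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with the given frame count."""
    memory, faults = [], 0
    for page in refs:
        if page not in memory:          # page fault
            faults += 1
            if len(memory) == frames:
                memory.pop(0)           # evict the oldest resident page
            memory.append(page)
    return faults
```

LRU and MIN do not exhibit the anomaly: with them, adding frames never increases the fault count.
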
53
New cards

Polling

A CPU repeatedly checks the status of a device.

54
New cards

Interrupt-driven I/Os

A device controller notifies the corresponding device driver when the device is available.

55
New cards

Direct Memory Access

Using an additional controller to perform data movements.

56
New cards

Device Controller

Converting between serial bit stream and a block of bytes.

57
New cards

Device Driver

An OS component that hides the complexity of an I/O device.

58
New cards

Memory-mapped I/O

Making no distinction between device addresses and memory addresses.

59
New cards

Categories of I/O Devices

block device
character device
network device

60
New cards

ways to access a device

Polling
interrupt-driven I/O's
Direct Memory Access (DMA)
Double Buffering

61
New cards

disk arm scheduling policies

  1. FCFS (first come, first serve)
  2. SSTF (shortest seek time first, ie closest to current disk arm position)
  3. SCAN (takes the closest request in the direction of travel, ie elevator)
  4. C-SCAN (services requests in only one direction of travel, then jumps back to the start and repeats)
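
SSTF (policy 2) can be simulated in a few lines; the starting position and request queue below are illustrative values:

```python
def sstf(start, requests):
    """Shortest-seek-time-first: repeatedly service the pending request
    closest to the current head position."""
    pending, order, head = list(requests), [], start
    while pending:
        nxt = min(pending, key=lambda r: abs(r - head))
        pending.remove(nxt)
        order.append(nxt)
        head = nxt
    return order
```

SSTF minimizes each individual seek but can starve requests far from the head, which is what SCAN and C-SCAN address.
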
62
New cards

Disk Platters

coated with magnetic materials for recording

63
New cards

Disk Arm

moves a comb of disk heads

64
New cards

Sector

the smallest unit of disk storage

65
New cards

Cylinder

consists of all tracks with a given disk arm position

66
New cards

Zone Bit Recording

zones near the edge store more information (higher bandwidth)

67
New cards

Track

each disk platter is divided into concentric tracks

68
New cards

Track skew

starting position of each track is slightly skewed

69
New cards

Thermo-calibrations

performed to account for changes of radius due to temperature changes

70
New cards

Seek time

the time to position disk heads (~4 msec on average)

71
New cards

Rotational Latency

the time to rotate the target sector to underneath the head

72
New cards

Transfer Time

the time to transfer bytes

73
New cards

Disk access time

seek time + rotational delay + transfer time

74
New cards

Latency

Seek time + rotational delay

75
New cards

Bandwidth

Bytes transferred / disk access time
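
These formulas can be checked with a hypothetical drive (4 ms average seek, 7200 RPM, 100 MB/s transfer rate; all numbers are made-up illustration values):

```python
# Hypothetical drive parameters (not from the course slides).
seek_ms = 4.0
rotation_ms = (60_000 / 7200) / 2       # half a revolution on average ~ 4.17 ms
transfer_ms = (4096 / 100e6) * 1000     # one 4 KB block ~ 0.04 ms

# Disk access time = seek time + rotational delay + transfer time ~ 8.2 ms
access_ms = seek_ms + rotation_ms + transfer_ms

# Bandwidth = bytes transferred / disk access time
bandwidth = 4096 / (access_ms / 1000)   # ~ 0.5 MB/s for random 4 KB reads
```

Note how random-access bandwidth comes out far below the raw 100 MB/s transfer rate: seek and rotation dominate small transfers.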

76
New cards

Block Device

stores information in fixed-size blocks, each one with its own address
(e.g. disks)

77
New cards

Character Device

delivers or accepts a stream of characters, and individual characters are not addressable
(e.g. keyboards, printers)

78
New cards

Network Device

transmits data packets

79
New cards

Flash Memory

form of solid-state memory similar to ROM

  • Holds data without power supply
80
New cards

NOR Flash

used in cellular phones and PDAs
byte addressable
mostly replaced by DRAM + NAND Flash

81
New cards

Flash Translation Layer (FTL)

Writes the new version of a page elsewhere
Erases the old version later

82
New cards

flash page

The atomic unit in which data is read from flash memory.

1 flash page ~= 1 disk block

83
New cards

flash blocks

consists of 4-64 flash pages
may not be atomic

84
New cards

double buffering

Uses two buffers: while one is being used, the other is being filled.