cis3207 midterm

84 Terms

1
New cards

an instance of a program being executed by an OS

What is a process?

2
New cards

an instance of executing a portion of a program within a process without incurring the overhead of creating and managing separate PCBs

an independently scheduled program unit launched within a process

What is a thread?

3
New cards
  • threads are inside of processes

  • a process can have multiple threads

  • threads share memory while processes do not

What is the difference between a process and a thread?
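
A minimal sketch (my own illustration, not from the cards) of the sharing point: two POSIX threads inside one process update the same global variable, which two separate processes could not do without explicit shared memory.

    /* compile with: gcc -pthread threads.c */
    #include <pthread.h>
    #include <stdio.h>

    static int shared_counter = 0;                 /* visible to every thread in the process */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);             /* shared memory, so updates need locking */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);   /* both threads live inside this one process */
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", shared_counter);  /* 200000: both threads updated the same variable */
        return 0;
    }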

4
New cards

for process p

  • CPU state

  • process state - ready, running, or blocked

  • memory

  • scheduling information - ex. p’s CPU time, real time in the system, priority, deadlines, etc.

  • accounting information - ex. amount of CPU time, memory used, etc.

  • open files

  • other resources - ex. printers, network ports

What are the minimal PCB contents?

5
New cards

Ex. ready, running, or blocked

What is a process “state”?

6
New cards

!!! NOT SURE: What are saved for the process state?

7
New cards

events

  • reading

  • writing

  • open file

  • close file

  • etc.

What can cause a process to change state?

8
New cards

1) the number of CPUs and 2) the type of CPU (virtual or physical); this independence is also what enables concurrency

What do processes provide independence from?

9
New cards

independent development of concurrent tasks

What does a process as an abstraction allow?

10
New cards

no

Does having more processes necessarily mean better performance?

11
New cards

more efficient scheduling

What does having more processes allow?

12
New cards

no

Does having more CPUs than the number of processes help with more performance?

13
New cards

!!! NOT SURE: How to optimize a single application’s performance using multiple CPUs.

14
New cards

CPU state (PCB)

current state of CPU is saved when p is stopped and copied back to CPU when p executes again

15
New cards

Process state (PCB)

ready, running, or blocked

16
New cards

Memory map (PCB)

points to area of p’s memory

17
New cards

Scheduler information (PCB)

ex. p’s CPU time, real time in the system, priority, deadlines, etc.

18
New cards

Account information (PCB)

ex. amount of CPU time, memory used, etc.

19
New cards

Parent (PCB)

the process that created p

20
New cards

Children (PCB)

process(es) created by p

21
New cards

Open files (PCB)

files open for p

22
New cards

Resource control block

for resource r

  • resource description

  • state

  • waiting list (process requests)

23
New cards

Resource description (RCB)

contains description of r’s properties and capabilities

24
New cards

State (RCB)

shows the current availability of r

  • if r is single-unit resource: state indicates if r is free or allocated to some process

  • if r has multiple identical units (ex. pool of identical buffers): state keeps track of how many units are free and how many are allocated

25
New cards

Waiting list (process requests) (RCB)

where a process p’s PCB is moved to from the ready list when p requests r and r is unavailable. p’s PCB is moved back to the ready list when r is available and allocated to p

26
New cards

Thread control block (TCB) contents

  • thread state

  • memory map

  • parent

  • children

  • open files

  • other resources

27
New cards

Thread control block (TCB)

Data structure that holds a separate copy of the dynamically changing information necessary for a thread to execute independently. Replicates only the bare minimum of info per thread but shares code, global data, and open files. More efficient to manage than a process because of this sharing.

28
New cards

Thread state (TCB)

running, ready, or blocked

29
New cards

remains in PCB / shared by all threads

!!! NOT SURE: Memory map (TCB)

30
New cards

remains in PCB / shared by all threads

!!! NOT SURE: Parent (TCB)

31
New cards

remains in PCB / shared by all threads

!!! NOT SURE: Children (TCB)

32
New cards

shared between threads

Open files (TCB)

33
New cards

!!! NOT SURE: Other resources (TCB)

remains in PCB / shared by all threads

34
New cards

implement OS design objectives tailored for supported applications

What do OS policies do?

35
New cards

policy implementations

What are algorithms?

36
New cards

design theories

What are design objectives led by?

37
New cards

evidence

What must design theories be supported by?

38
New cards

via state transitions

How does a computer program transform input data into actions?

39
New cards

observable behavior to demonstrate the required evidence

What is the result of state transitions?

40
New cards
  • data structures and algorithms (legacy curriculums)

  • AI tools are outpacing legacy training productions

  • new paradigm: program behavior observation

  • develop sensitivity to program state changes

How to program:

41
New cards

program behavior observation + develop sensitivity to program state changes

New paradigm of programming?

42
New cards
  • timing measures

  • input data inspection

Learning tools:

43
New cards

use a wall clock / external clock (not the clock on the machine); record your own timings

Timing measures

44
New cards

size, order, special properties, etc.

!!! ASK HOW HE WANTS US TO DETERMINE SIZE, ORDER, PROPERTIES, ETC.: Input data inspection

45
New cards

testing and correctness validation

!!! NOT SURE: Functional analysis

46
New cards

printf or gdb

!!! GET FAMILIAR WITH GDB: Debugging

47
New cards

non-functional: performance, reliability, security

!!! NOT SURE: Behavior analysis

48
New cards
  • creates a child process c of parent process p

  • c runs concurrently with p

  • c and p execute the next instruction following the fork() call

  • c uses the same program counter (pc), CPU, and open files as p

Explain “fork()” call

49
New cards
  • negative value: failed to create child process

  • zero: returned to the newly created child process

  • positive value: returned to parent (or caller). Value is the process ID (PID) of the new child process

What values are returned by fork() and what do they mean?
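
A short sketch (illustrative messages, not from the cards) of how the three fork() return values are typically distinguished:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* both parent and child continue from here */

        if (pid < 0) {                      /* negative: fork failed, no child was created */
            perror("fork");
            exit(1);
        } else if (pid == 0) {              /* zero: we are in the newly created child */
            printf("child: my pid is %d\n", getpid());
        } else {                            /* positive: we are in the parent; pid is the child's PID */
            printf("parent: created child %d\n", pid);
        }
        return 0;
    }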

50
New cards

yes

Do child and parent processes run concurrently

51
New cards

no

Does a child process take parameters?

52
New cards

integer

What kind of value does a child process return?

53
New cards

blocks the calling process until one of its child processes exits or a signal is received. After the child terminates, the parent continues its execution at the instruction following the wait() system call.

Explain “wait()” call
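
A minimal parent/child sketch (illustrative only, with an arbitrary exit status): the parent blocks in wait() until the child exits, then resumes at the next instruction.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                       /* child */
            printf("child running\n");
            exit(42);                         /* child terminates with status 42 */
        }
        int status;
        wait(&status);                        /* parent blocks here until the child exits */
        if (WIFEXITED(status))
            printf("parent resumes, child exit status = %d\n", WEXITSTATUS(status));
        return 0;
    }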

54
New cards
  • allocate initial buffer

  • read input character by character (getchar() in a loop to read chars from stdin)

  • resize the buffer dynamically (realloc())

  • terminate on newline (\n) or end of file (replace \n with \0 to terminate string)

  • return the dynamically allocated string (the caller needs to free the memory)

Outline the read_line() function that reads a terminal command line with unknown length
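
A sketch of the outline above; the function name read_line() comes from the card, but the initial buffer size of 64 and the doubling growth policy are my own assumptions.

    #include <stdio.h>
    #include <stdlib.h>

    char *read_line(void)
    {
        size_t cap = 64;                       /* assumed initial buffer size */
        size_t len = 0;
        char *buf = malloc(cap);
        if (!buf) return NULL;

        int c;
        while ((c = getchar()) != EOF && c != '\n') {   /* read character by character */
            if (len + 1 >= cap) {              /* grow the buffer dynamically when it fills up */
                cap *= 2;
                char *tmp = realloc(buf, cap);
                if (!tmp) { free(buf); return NULL; }
                buf = tmp;
            }
            buf[len++] = (char)c;
        }
        buf[len] = '\0';                       /* newline replaced by the string terminator */
        return buf;                            /* caller must free() the returned string */
    }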

55
New cards
  • read the input line (fgets())

  • tokenize the command line (strtok() with a space " " as the delimiter to extract the 1st token, which is the command)

  • extract the arguments (continue using strtok() to retrieve remaining arguments (tokens))

  • store tokens in an array for further processing

Outline the strtok() use in parsing out the command line command and its arguments
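
A sketch of the parsing steps above; MAX_ARGS and the fixed-size argument array are my assumptions, not part of the card.

    #include <stdio.h>
    #include <string.h>

    #define MAX_ARGS 64                            /* assumed upper bound on tokens */

    int main(void)
    {
        char line[256];
        if (!fgets(line, sizeof line, stdin))      /* read the input line */
            return 1;
        line[strcspn(line, "\n")] = '\0';          /* strip the trailing newline */

        char *argv[MAX_ARGS];
        int argc = 0;
        char *tok = strtok(line, " ");             /* 1st token is the command */
        while (tok && argc < MAX_ARGS - 1) {
            argv[argc++] = tok;
            tok = strtok(NULL, " ");               /* remaining tokens are the arguments */
        }
        argv[argc] = NULL;                         /* exec-style argument arrays end with NULL */

        for (int i = 0; i < argc; i++)
            printf("argv[%d] = %s\n", i, argv[i]);
        return 0;
    }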

56
New cards

!!! NOT SURE: What mask should the “open()” system call use for a file that is to accept appended data items with 0644 permissions?
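
The card is unanswered; a plausible answer is sketched below using the standard POSIX flags for appending, with the file name chosen only for illustration. Whether this matches the exact answer expected in class is an assumption.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* O_APPEND so every write lands at the end; O_CREAT with mode 0644 (rw-r--r--) */
        int fd = open("log.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);  /* "log.txt" is a placeholder */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        write(fd, "appended line\n", 14);
        close(fd);
        return 0;
    }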

57
New cards

creates a copy of a file descriptor

  • uses the lowest-numbered unused descriptor for new descriptor

  • if successful, original and copy file descriptors may be used interchangeably

  • both refer to same open file description so they share same file offset and file status flags

Explain how dup() works

58
New cards

creates a copy of a file descriptor using descriptor number specified by user as opposed to dup() which uses lowest-numbered unused descriptor

Explain how dup2() works
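
A sketch (the file name and message are placeholders) using dup2() to redirect stdout into a file; dup() behaves the same except that it picks the lowest-numbered unused descriptor instead of the one you specify.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        dup2(fd, STDOUT_FILENO);     /* descriptor 1 now refers to the same open file description as fd */
        close(fd);                   /* the duplicate keeps the file open; both shared the same offset */

        printf("this goes to out.txt, not the terminal\n");
        return 0;
    }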

59
New cards

the pipe redirects the standard output (stdout) of cmd1 to the standard input (stdin) of cmd2.

  • shell creates a pipe using pipe() which creates

    • pipefd[0], the read end of the pipe

    • pipefd[1], the write end of the pipe

  • forks the 1st child process (cmd1)

    • shell creates a child process to execute cmd1

    • cmd1’s stdout is redirected to pipefd[1] (write end)

    • child closes pipefd[0] read end

    • child executes cmd1 using exec()

  • forks the 2nd child process (cmd2)

    • shell creates another child process but this one executes cmd2

    • cmd2’s stdin is redirected to pipefd[0] (read end)

    • child closes pipefd[1] write end

    • child executes cmd2 using exec()

  • parent waits for both children to complete then closes both the read and write ends of the pipe

Explain how redirection works for cmd1 | cmd2
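
A sketch of the steps above for one fixed pair of commands (ls | wc is my choice of example; error handling is trimmed):

    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int pipefd[2];
        pipe(pipefd);                          /* pipefd[0] = read end, pipefd[1] = write end */

        if (fork() == 0) {                     /* 1st child: runs cmd1 ("ls") */
            dup2(pipefd[1], STDOUT_FILENO);    /* stdout -> write end of the pipe */
            close(pipefd[0]);                  /* this child does not read from the pipe */
            close(pipefd[1]);
            execlp("ls", "ls", (char *)NULL);
            _exit(1);                          /* only reached if exec fails */
        }

        if (fork() == 0) {                     /* 2nd child: runs cmd2 ("wc") */
            dup2(pipefd[0], STDIN_FILENO);     /* stdin <- read end of the pipe */
            close(pipefd[1]);                  /* this child does not write to the pipe */
            close(pipefd[0]);
            execlp("wc", "wc", (char *)NULL);
            _exit(1);
        }

        close(pipefd[0]);                      /* parent closes both ends... */
        close(pipefd[1]);
        wait(NULL);                            /* ...and waits for both children to finish */
        wait(NULL);
        return 0;
    }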

60
New cards

(NOM) Algorithm Time Complexity

Ta(n)

61
New cards

(NOM) Certificate Time Complexity

Tc(n)

62
New cards

(NOM) Structure Compact Algorithms

Ta(n)/Tc(n) < n^e, e>0

63
New cards

(NOM) Non-Structure Compact Algorithms

Ta(n)/Tc(n) >= n^e, e > 0

64
New cards
  • exploit locality as much as possible

  • must maintain correct results compared to algorithm before

Runtime resource optimization must:

65
New cards

OS for multicore and multiprocessors

What is a foundational change in computer architecture due to unlimited hardware expansion potential?

66
New cards

unlimited hardware expansion potentials

Why is OS for multicore and multiprocessors a foundational change in computer architecture?

67
New cards

needs a theory to meet all non-functional requirements at any scale.

Why does OS for multicore and multiprocessors require a new design theory?

68
New cards

Amdahl’s law

What is the new design theory for OS for multicore and multiprocessors that meets all non-functional requirements in any scale?

69
New cards

predicting estimated processing times using a single vs. P processors for programs solving arbitrary problems

What is the assumption for Amdahl’s law (1967)?

70
New cards

β (beta) = percentage of serial instructions

What is being defined in Amdahl’s law (1967)

71
New cards
SpeedUp(P) = Tseq / Tpar(P) = 1 / (β + (1 - β)/P)

overall speedup = old execution time / new execution time

What is the equation for Amdahl’s law (1967)?

72
New cards

yes

Is SpeedUp(P) a function of a number of processors in Amdahl’s law (1967)?

73
New cards

you can’t do this!

Problem size is absent, how to quantify β for arbitrary programs in Amdahl’s law (1967)?

74
New cards

the ambiguity cannot be resolved

!!! NOT SURE: Scaling P is ambiguous in Amdahl’s law (1967): a) it can mean adding processors while keeping the problem size fixed, or b) adding processors to solve bigger problem sizes.

75
New cards

1967

What year is Amdahl’s law?

76
New cards

Predicting estimated processing times using a single vs. P processors (same as Amdahl’s)

What is the assumption for Gustafson’s law (1988)?

77
New cards

measured Tpar(P) = Tseq’(P) + Tpar’(P)

(the entire measured run time on P processors)

What is given for Gustafson’s law (1988)?

78
New cards

β’ = percentage of sequential portion of the P-processor run = [Tseq’(P)] / [Tseq’(P) + Tpar’(P)]

1 - β’ = parallel portion percentage of the P-processor run

SpeedUp(P) = β’ + (1 - β’)·P

lim (P→∞) SpeedUp(P) = ? Not directly possible

What is being defined in Gustafson’s law (1988)

79
New cards

you cannot take the limit mathematically

What is a problem of Gustafson’s law (1988)?

80
New cards

1988

What year is Gustafson’s law?

81
New cards

Find SpeedUp

β = 0.6

1-β = 0.4

P = 10

Use Amdahl’s law to solve this

1/(β + (1-β)/P) = 1/(0.6 + 0.4/10)=1/0.64 ~ 1.56

Suppose that we want to enhance the processor used for web serving. The new processor is 10 times faster on computation in the web serving application than the original processor. Assuming that the original processor is busy with computation 40% of the time and is waiting for I/O 60% of the time, what is the overall speedup gained by incorporating the faster processor?

82
New cards

Find SpeedUp of HW and SpeedUp of GPU

HW (faster FPSQR hardware):

fraction enhanced = 0.2

unenhanced fraction = 1 - 0.2 = 0.8

enhancement factor = 10

GPU (faster FP instructions):

fraction enhanced = 0.5

unenhanced fraction = 1 - 0.5 = 0.5

enhancement factor = 1.6

SpeedUp(HW) = 1/((1-0.2)+0.2/10) = 1/0.82 ~ 1.22

SpeedUp(GPU) = 1/((1-0.5)+0.5/1.6) = 1/0.8125 ~ 1.23

(the enhanced fraction plays the role of 1 - β in Amdahl’s formula, so the unenhanced fraction takes the place of β)

!!! NOT SURE EQUATION SEEMS OFF: Implementations of floating-point (FP) square root is computationally expensive. Suppose FP square root is responsible for 20% of the execution time of a critical application. One proposal is to enhance the FPSQR hardware and speed up this operation by a factor of 10. The other alternative is try to make all FP instructions that accounts for half of all instructions in the GPU faster by a factor of 1.6. Compare these two design alternatives.

83
New cards

Find β

SpeedUp = 80

P = 100

Use Amdahl’s law

SpeedUp = 1/(β + (1 – β)/100) = 80
⇒ β + (1 – β)/100 = 1/80 = 0.0125 ⇒ 0.99β = 0.0025 ⇒ β ≈ 0.0025
(equivalently, the parallel fraction 1 – β ≈ 0.9975)

Suppose you want to achieve a speedup of 80 with 100 processors. What fraction of the original computation can be sequential?

84
New cards

Gustafson’s law

  • Gustafson measured Tpar(1024) on a 1024-processor machine.
  • β cannot be figured out directly.
  • Proposed β’ instead.