Chapter 2 Processes and Threads (Tanenbaum 4th edition)


100 Terms

1
New cards

What is a process?

An operating system abstraction to represent what is needed to run a program

2
New cards

What does a process consist of?

Sequential Program Execution Stream
(includes state of CPU registers)
Protected resources
(memory state, I/O state)

3
New cards

NB: What is a key point about the execution of a process?

It is strictly sequential - there is no concurrency

4
New cards

Where is the current state of the process held?

In the PCB

5
New cards

How do we multiplex several processes?

Allocate CPU time to processes (using suitable scheduling policies)
Controlled access to non-CPU resources (e.g. memory, I/O)

6
New cards

What is a thread?

An independent sequence of execution within a program/ process (a single program/ process can have multiple threads)
NB: Program and process are used interchangeably

7
New cards

Similarities between a thread and a process?

- Both have a single sequential flow of control with a start and end
- At any time a thread has a single point of execution
- A thread has its own execution stack & program counter
- Sometimes a thread is called a lightweight process

8
New cards

Differences between a thread and a process?

- A thread cannot exist on its own. It exists within a process
- Usually created and/or controlled by a process
- Threads can share a process's resources, including memory and open
files

9
New cards

What is sequential programming?

Traditional activity of constructing a program containing one process using a (sequential) computer language
The program is supposed to execute on a single processor architecture

10
New cards

How is the CPU linked to RAM and I/O devices?

Buses

11
New cards

What is stored in RAM?

Program instructions and data

12
New cards

Where are the program instructions and data stored?

RAM

13
New cards

How is the next instruction of the fetch, decode, execute cycle referenced?

By the current value of the program counter (PC)

14
New cards

What is the execution sequence?

The sequence of values of the PC (program counter)

15
New cards

What is a deterministic execution sequence?

Only one possible sequence of execution
(Sequential programming is deterministic)

16
New cards

How does a sequential program specify its execution sequence?

Strictly - the order of execution must be followed
(NB: modern CPUs don't obey this rule and will alter sequence if it achieves the same result)

17
New cards

How is a program statement compiled?

Into single or multiple machine instructions
(e.g. Q (y = x + 1) is broken into 3 machine instructions as follows:
q1: load the value of x into a CPU register
q2: increment the value in this register by 1
q3: store the value in this register at the address of y)

18
New cards

What is meant by P must precede Q?

Q can only begin after P finishes

19
New cards

What is total ordering?

Single threaded computation - no overlap in the execution of statements
(i.e. The execution sequence at the program level
P; Q; R;
implies the execution sequence at the system level
p, q1, q2, q3, r1, r2, r3)

20
New cards

What are the key features of a sequential program?

- Total ordering
- Deterministic: same input results in same output
- Sequential programming <-> Finding a strict sequence of steps to
achieve the desired end

21
New cards

What is the problem with sequential programming?

It does not apply for many practical problems

22
New cards

What is a parallel program?

A program that doesn't require a total order of its execution: its processes can be carried out in any order, although each individual process could still require total order
(e.g. finding the factors of a number, you could take all numbers less than that number and check if each one was a factor, however the order of checking the numbers does not matter - it can be carried out in parallel)

23
New cards

What is partial order?

When both total and non-total ordering are involved
(splitting up tasks at the start and collecting the results at the end are totally ordered; the middle of the execution is parallel, so we don't know the order - non-total)

24
New cards

What is concurrent programming?

A program containing multiple processes/ threads that can execute in parallel (that can run on a single or multi-processor system)

25
New cards

What does concurrent programming require?

A language that supports it, such as Java Threads and Sockets, or POSIX thread support

26
New cards

Benefits of concurrent programming

- Improve efficiency on multi-CPU hardware
- Improve CPU utilisation (via multi-tasking) on a uni-CPU system (OS)
- Some applications are inherently non-deterministic and concurrent (e.g. a traffic light system is determined by external events - a sensor - and it is impossible to predict the order)

27
New cards

How do we represent a concurrent program?

Using COBEGIN and COEND to bracket the processes

28
New cards

When does a concurrent program end?

Only when all processes in COBEGIN/ COEND section terminate
NB: The executions of the processes may overlap, but don't have to
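(COBEGIN/COEND is pseudocode with no direct Java keyword, but the same behaviour can be sketched with threads: start both processes, then join both before continuing. The class and method names below are illustrative.)

```java
// Sketch: COBEGIN P; Q; COEND expressed with Java threads.
// The program continues only after BOTH threads terminate (the join calls).
public class Cobegin {
    static int result;

    static synchronized void add(int x) { result += x; }

    public static int run() throws InterruptedException {
        result = 0;
        Thread p = new Thread(() -> add(1));   // process P
        Thread q = new Thread(() -> add(2));   // process Q
        p.start(); q.start();                  // COBEGIN: executions may overlap
        p.join();  q.join();                   // COEND: wait for both to finish
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // always 3, whatever the interleaving
    }
}
```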

29
New cards

What is a parallel computer system?

A computer with multi-CPUs/ cores

30
New cards

Where can parallel computation be implemented?

On a parallel computer system

31
New cards

What is maximum parallel computation?

When each task is computed on its own CPU
NB: This is not always possible

32
New cards

Can a uni-CPU system support multi-tasking/ multi thread?

Yes

33
New cards

How does a uni-CPU system support multi-tasking/ multi thread?

Interleaving

34
New cards

How does interleaving work?

- Each process/ thread has its own program counter
- The PC forks to produce many program counters, which later re-join
- In each CPU cycle, a process is non-deterministically chosen (depending on the scheduler) and its next command is loaded and executed
(NB: There may be many different possible paths)

35
New cards

What are the issues with concurrent programming?

- The concurrent processes must interact with each other in order to share resources or exchange data
- Synchronisation (to prevent unacceptable interleaving & outputs)
- Distribution (of processes among processors and processor interaction)

36
New cards

What are the issues with concurrent operation?

- Passing messages between processes
- Safe sharing of resources
- Correct sequencing in the presence of dependencies

37
New cards

When do programs use shared resources?

Sending/ receiving data to/ from the same device (printer etc.)
Using the same file or block of RAM (variable, object attribute etc.) to store data

38
New cards

What is the OS' role in the use of shared resources?

To avoid incorrect operation of programs due to shared resource clashes

39
New cards

Example
A shared resource: printer spooler

We want to ensure that files are printed one after another and
not, say, one page of one file, the next page of a different file.

- Printer spool: a place on a disk organised into a number of
consecutive slots

- Each process that wants to print a file
detects the first unoccupied slot;
stores the file name in the slot.

- The printer manager periodically examines the slots and sends
the files to print.

CLASH:

- Processes P1 and P2 are simultaneously running and both want
to print their files.

- Process P1 detects the first unoccupied slot in the spooler
directory (slot 6 say) and then a context switch occurs

- Process P2 resumes and detects that slot 6 is free, writes its file
name there and then a context switch occurs.

- Process P1 resumes again: it writes its file name to slot 6 (erasing
the record of P2).

- P2's file will never be printed.
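(The spooler clash is a lost-update race. A minimal sketch of the same effect, with an illustrative class name: two threads increment an unprotected counter, and because `counter++` is a read-modify-write sequence, a context switch in the middle can erase the other thread's update, just as P1 erased P2's slot entry.)

```java
// Sketch of a race condition: two threads share 'counter' with no
// synchronisation, so increments can interleave and updates get lost.
public class RaceDemo {
    static int counter;                 // shared resource, unprotected

    public static int run(int perThread) throws InterruptedException {
        counter = 0;
        Runnable work = () -> {
            for (int i = 0; i < perThread; i++) counter++;  // NOT atomic
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join();  b.join();
        // Often less than 2 * perThread: the result depends on execution order
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(100000));
    }
}
```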

40
New cards

What is a race condition?

When two or more processes are reading or writing some shared data and the result depends on the order of execution

41
New cards

When does a race condition occur?

- A process is interrupted during the use of a shared resource
- Another process uses the same shared resource during the interruption

42
New cards

How do we prevent a race condition happening?

Avoid at least one of the conditions:
- A process is interrupted during the use of a shared resource
- Another process uses the same shared resource during the interruption

43
New cards

What is a critical region?

Part of a program where shared memory is accessed

44
New cards

How do we successfully synchronise critical regions?

- Mutual exclusion
- Progress
- Bounded wait

45
New cards

What is mutual exclusion of processes?

No two processes can enter their critical region at the same time

46
New cards

What is progress of a process?

No process running outside the critical region may block other processes

47
New cards

What is the bounded wait of a process?

Each process should enter its critical region eventually

48
New cards

How does the 'disabling interrupts' synchronisation method work?

A processor accessing a shared resource acts as follows:
- Disable all interrupts (hardware level instructions available to processors)
- Access the shared resource
- Re-enable interrupts

49
New cards

Advantages & disadvantages of the 'disabling interrupts' synchronisation method

- Disabling interrupts achieves mutual exclusion

- Disabling interrupts prevents any other processes running even if
it does not need the shared resource
- The strategy is non-preemptive
- So neither progress nor bounded wait is assured

50
New cards

How does the 'access in turns' synchronisation method work?

(A preemptive scheduling policy is needed)

A process busy waits whilst the resource is in use, then uses it, and afterwards resets the 'turn' variable so that other processes can use the resource

(e.g. assume there are only two processes P0 and P1
- The resource is associated with a shared variable "turn" that can
take values 0 or 1
- Assume turn = 0
- If P1 wants to access the resource it will be busy waiting (in the
empty while loop).
- When P0 finishes using the resource it assigns turn = 1 enabling P1 to proceed)
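(The two-process scheme above can be sketched as follows, assuming an illustrative class name. Each thread busy-waits in an empty loop until `turn` equals its id, uses the shared counter, then hands the turn over.)

```java
// Sketch of 'access in turns' for two threads sharing a counter.
public class Turns {
    static volatile int turn;   // 0 => P0 may enter, 1 => P1 may enter
    static int counter;         // shared resource, protected only by 'turn'

    static void worker(int id, int rounds) {
        for (int i = 0; i < rounds; i++) {
            while (turn != id) { /* busy wait: wastes CPU time */ }
            counter++;          // critical region
            turn = 1 - id;      // hand the turn to the other process
        }
    }

    public static int run(int rounds) throws InterruptedException {
        turn = 0; counter = 0;
        Thread p0 = new Thread(() -> worker(0, rounds));
        Thread p1 = new Thread(() -> worker(1, rounds));
        p0.start(); p1.start();
        p0.join();  p1.join();
        return counter;         // exactly 2 * rounds if exclusion held
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(1000));
    }
}
```

Note the strict alternation: neither thread can take two turns in a row, which is exactly the disadvantage described in the next card.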

51
New cards

Disadvantages of the 'access in turns' synchronisation method

If it is not its turn, a process cannot start using the resource even
if the other process is not interested.
So performing the while loop wastes CPU time.

52
New cards

What is the key difference between Peterson's algorithm and the 'access in turns' synchronisation method?

It keeps track of which processes are interested in accessing the critical region

53
New cards

How does Peterson's algorithm work?

- Assume we have only two processes, P0 and P1
- Each process uses a Boolean flag[pid] to express interest in entering their critical section
- Both flag[0] and flag[1] are initialised to false
- To avoid a deadlock, the process that starts the entry protocol
last is allowed to proceed to the critical section first

(Informal justification:
- If only P0 wants to access the resource. Then P0 can proceed to
the critical section because flag[1] = false and vice versa if only P1 wants the resource.
- If both processes want to access the resource, then both flag[0]
and flag[1] are true.
- Suppose that P1 is the last to modify turn, i.e. turn == 0.
- Then P1 will busy-wait while P0 will enter the critical region and vice versa if P0 happens to be the last to set turn.)
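(A sketch of the entry/exit protocol for two threads, with illustrative names. One Java-specific caveat, not part of the original algorithm: the fields are declared `volatile` so that writes by one thread are visible to the other.)

```java
// Sketch of Peterson's algorithm for two threads (ids 0 and 1).
public class Peterson {
    static volatile boolean flag0, flag1;  // flag[pid]: interest in entering
    static volatile int turn;              // who yields when both are interested
    static int counter;                    // shared data protected by the algorithm

    static void setFlag(int id, boolean v) { if (id == 0) flag0 = v; else flag1 = v; }
    static boolean otherFlag(int id)       { return id == 0 ? flag1 : flag0; }

    static void worker(int id, int rounds) {
        for (int i = 0; i < rounds; i++) {
            setFlag(id, true);             // entry protocol: declare interest
            turn = 1 - id;                 // last to set turn lets the other go first
            while (otherFlag(id) && turn == 1 - id) { /* busy wait */ }
            counter++;                     // critical section
            setFlag(id, false);            // exit protocol
        }
    }

    public static int run(int rounds) throws InterruptedException {
        flag0 = flag1 = false; turn = 0; counter = 0;
        Thread p0 = new Thread(() -> worker(0, rounds));
        Thread p1 = new Thread(() -> worker(1, rounds));
        p0.start(); p1.start();
        p0.join();  p1.join();
        return counter;                    // 2 * rounds if mutual exclusion held
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(10000));
    }
}
```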

54
New cards

How is busy waiting different between Peterson's algorithm and the 'access in turns' algorithm?

Although the while loop in Peterson's algorithm performs busy waiting, waiting is performed only if the other process is interested in accessing the resource

55
New cards

Problems with Peterson's algorithm?

- Busy waiting; which is difficult to design, understand, generalise and analyse
- No separation between synchronisation and "normal" variables
- So; they are often error prone and inefficient
- Hard to predict execution order of statements in modern compilers and processors

56
New cards

Motivation for using semaphores?

- Make synchronisation simpler by using a single statement for entry and exit protocols
(e.g. GETSEMAPHORE;
critical section;
RELEASESEMAPHORE;
non-critical section)
- Avoid busy waiting

57
New cards

What do semaphores provide in concurrent programs?

Basic signalling mechanism to implement mutual exclusion and condition synchronisation

58
New cards

What is a semaphore?

A (non-negative) integer representing the quantity of a resource available

59
New cards

What are the only two operations permitted on a semaphore s?

Wait(s) or P(s): an atomic operation that waits for semaphore to become positive, then decrements it
- If s > 0 then s--
- else block the calling process
Signal(s) or V(s): an atomic operation that increments the semaphore, waking up a waiting process, if any
- If processes are blocked then unblock one
- else increment s

60
New cards

How are semaphores implemented?

Using hardware facilities, and in the kernel of the OS by block-waiting (efficient)

61
New cards

How should Wait() and Signal() be implemented?

As synchronised methods
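(A minimal counting-semaphore sketch along these lines, with illustrative names: both operations are synchronised methods, and Wait() block-waits via `wait()`/`notify()` rather than busy-waiting.)

```java
// Sketch: Wait()/Signal() (P/V) as synchronised methods that block-wait.
public class CountingSemaphore {
    private int value;                       // quantity of the resource available

    public CountingSemaphore(int initial) {
        if (initial < 0) throw new IllegalArgumentException("init_val >= 0 required");
        value = initial;
    }

    // Wait(s) / P(s): if s > 0 then s--, else block the calling thread
    public synchronized void acquire() throws InterruptedException {
        while (value == 0) wait();           // block-wait: uses no CPU time
        value--;
    }

    // Signal(s) / V(s): if threads are blocked then unblock one, else s++
    public synchronized void release() {
        value++;
        notify();                            // wake one blocked thread, if any
    }

    public synchronized int available() { return value; }
}
```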

62
New cards

What do Wait() and Signal() do whilst waiting?

They block-wait, using no CPU time
(As opposed to busy-waiting)

63
New cards

What is the type of a semaphore which can only take values 0 or 1?

Binary
Otherwise, it's called general (or counting)

64
New cards

What should the initial value of a semaphore be?

init_val >= 0

65
New cards

How could overflow occur in a semaphore?

By incrementing the value of the semaphore beyond the maximum value its integer type can hold

66
New cards

How many semaphores are required for synchronisation?

One for each resource/ constraint
(i.e. Producer and Consumer problem: one semaphore to test if
buffer is empty, another to test if buffer is full and a binary semaphore for mutual exclusion)
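(The three-semaphore arrangement above can be sketched with the standard `java.util.concurrent.Semaphore`; the class name and buffer size here are illustrative.)

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Semaphore;

// Sketch of Producer-Consumer with three semaphores: 'empty' counts free
// slots, 'full' counts filled slots, and binary 'mutex' guards the buffer.
public class ProducerConsumer {
    static final int CAPACITY = 4;
    static final Queue<Integer> buffer = new ArrayDeque<>();
    static final Semaphore empty = new Semaphore(CAPACITY); // free slots
    static final Semaphore full  = new Semaphore(0);        // filled slots
    static final Semaphore mutex = new Semaphore(1);        // mutual exclusion

    static void produce(int item) throws InterruptedException {
        empty.acquire();         // wait for a free slot
        mutex.acquire();
        buffer.add(item);        // critical region
        mutex.release();
        full.release();          // signal: one more filled slot
    }

    static int consume() throws InterruptedException {
        full.acquire();          // wait for a filled slot
        mutex.acquire();
        int item = buffer.remove();
        mutex.release();
        empty.release();         // signal: one more free slot
        return item;
    }

    public static int run(int n) throws InterruptedException {
        int[] sum = {0};
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= n; i++) produce(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try { for (int i = 0; i < n; i++) sum[0] += consume(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start(); consumer.start();
        producer.join();  consumer.join();
        return sum[0];           // n*(n+1)/2 if no items were lost or duplicated
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(100));
    }
}
```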

67
New cards

Positive aspects of semaphores

- Powerful enough to program all kinds of synchronisation and resource management requirements
- Relatively easy to implement efficiently
- A general semaphore can be simulated by binary ones

68
New cards

Negative aspects of semaphores

It is a low level facility and still error-prone (especially when more than one semaphore is in use at a time)
- Missing a Signal() may lead to deadlock
- Missing a Wait() may lead to a violation of safety requirement,
such as mutual exclusion
- Putting them in wrong order may be dangerous: just swapping
Wait()s can lead to deadlock

69
New cards

How do we overcome the negative aspects of semaphores?

Use monitors instead!

70
New cards

What are the two parts to a monitor?

- A lock for enforcing mutual exclusion
- Condition variable(s) for controlling access to resources (aka guards)

71
New cards

What is a monitor, in object-oriented terms?

A class with:
- Some attributes (data) (to be shared by processes)
- Some synchronised methods used to access the data
- Some variables for condition control (guards)

72
New cards

How can monitor data be accessed?

Only through its methods (data is private - need good public interface)
Entry to a monitor by one process excludes entry by any other processes

73
New cards

How does Java execute a synchronised method?

It associates a lock with every object which:
- Transparently inserts code to acquire the lock before executing a synchronised method
- and code to release the lock when the method returns
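(The lock-per-object behaviour can be seen with a small synchronised counter; the class name is illustrative. Because the intrinsic lock is held for the whole of each method call, increments from different threads cannot interleave mid-update.)

```java
// Every Java object has an intrinsic lock; 'synchronized' acquires it on
// entry and releases it on return, so no updates are lost.
public class SyncCounter {
    private int count;

    public synchronized void increment() { count++; }  // lock held during update
    public synchronized int get()        { return count; }

    public static int run(int threads, int perThread) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return c.get();          // threads * perThread: no lost updates
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(4, 10000));
    }
}
```

Contrast this with the unsynchronised race-condition sketch earlier: the only difference is the `synchronized` keyword.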

74
New cards

What happens when a concurrent thread accesses a locked resource?

It is blocked until the lock is released

75
New cards

How many threads can hold a single lock?

Just one - thus only one can be executing the synchronised method
(However executing synchronised methods of different objects can be concurrent)

76
New cards

Why do we use condition synchronisation?

To synchronise the order of threads entering the critical section

77
New cards

What is condition synchronisation

The property that one process wishing to access shared data must be delayed until some condition holds
(E.g. in the Producer-Consumer problem
- Producer can place an item into the buffer only when it is not full
- Consumer can take an item from the buffer only when it is not empty
regardless of whether any other thread is trying to use the buffer)

78
New cards

What does the Java notifyAll method of a monitor do?

Causes all threads waiting on the resource to re-evaluate their conditions
(For this reason notifyAll is used mostly, as all threads see if they can access the resource - if producer is slower than consumer, consumer is waiting to take resources, so producer notifies all consumers when there is another resource available and all consumer processes check to see if they still need the resource)
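(A monitor version of the Producer-Consumer buffer can be sketched as below, with illustrative names: each guard is re-checked in a `while` loop after waking, and `notifyAll()` makes every waiting thread re-evaluate its condition.)

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Monitor sketch: a bounded buffer whose synchronised methods use guard
// conditions with wait()/notifyAll().
public class BoundedBuffer {
    private final Queue<Integer> items = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public synchronized void put(int item) throws InterruptedException {
        while (items.size() == capacity) wait();  // guard: block while full
        items.add(item);
        notifyAll();                              // waiters re-check their guards
    }

    public synchronized int take() throws InterruptedException {
        while (items.isEmpty()) wait();           // guard: block while empty
        int item = items.remove();
        notifyAll();
        return item;
    }

    public static int run(int n) throws InterruptedException {
        BoundedBuffer buf = new BoundedBuffer(3);
        int[] sum = {0};
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= n; i++) buf.put(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try { for (int i = 0; i < n; i++) sum[0] += buf.take(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start(); consumer.start();
        producer.join();  consumer.join();
        return sum[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(100));
    }
}
```

Unlike the semaphore version, the thread code contains no Wait()/Signal() ordering to get wrong: all of the synchronisation lives inside the monitor class.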

79
New cards

Where is the critical code when using monitors?

Inside the monitor class

80
New cards

What are the benefits of using monitors over semaphores?

- Relieves the user of the responsibility for coding/ ordering Wait() and Signal() operations in the thread class
- The responsibility for ensuring mutual exclusion is moved to the language compiler
- Monitors are more efficient and robust than semaphores

81
New cards

When do we schedule a process?

- when it is created
- when a process exits
- when a process blocks
- when IO interrupt occurs

82
New cards

What are the categories of scheduling algorithms?

- Batch (just run)
- Interactive (preemption)
- Real-time (deadlines)

83
New cards

How do we measure scheduling?

- Turn around time
- throughput
- response time
- average wait times

84
New cards

What is an IO bound process?

A process that spends most of its time waiting for I/O (short CPU bursts, long I/O bursts)

85
New cards

What is a CPU-bound process?

Long CPU bursts, shorter I/O bursts

86
New cards

What are the goals of all scheduling algorithms?

- Fairness in CPU sharing
- Policy enforcement - stated goal is done
- Balance - all parts of system busy

87
New cards

What are the goals of batch systems?

- maximize throughput
- minimize turnaround time
- Maximize CPU utilization

88
New cards

What are the goals of interactive systems?

- Fast response time
- proportionality - meet user expectations while still maximizing throughput

89
New cards

What are the goals of real-time systems?

- Meeting deadlines
- Predictability

90
New cards

What are some batch system scheduling algorithms?

1) First-come first-served
2) Shortest Job First
3) Shortest Remaining Time Next

91
New cards

What is FCFS?

First-come, first-served scheduling for batch systems, where processes are ordered as a queue. Each ready (created, unblocked) process is added to the end of the queue. It is non-preemptive.

92
New cards

What is the main disadvantage of FCFS?

It can hurt I/O bound processes.

93
New cards

What is SJF?

Shortest job first scheduling for batch systems assumes the run-time of a process is known and jobs are ordered based on this. It is non-preemptive.

94
New cards

What is SRTN?

Shortest Remaining Time Next scheduling for batch systems always chooses the process with the shortest remaining time. If a new process with a shorter remaining time becomes ready (created, unblocked), preemption will stop the current process and start the new one. This scheduling algorithm assumes the run-time is known in advance.
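(The effect of FCFS vs SJF on average turnaround time can be checked with a small calculation; the class name and burst times below are illustrative, and all jobs are assumed to arrive at time 0.)

```java
import java.util.Arrays;

// Sketch: average turnaround time for non-preemptive batch scheduling,
// assuming all jobs arrive at time 0 and run-times are known in advance.
public class BatchTurnaround {
    static double averageTurnaround(int[] bursts) {
        int time = 0, total = 0;
        for (int b : bursts) {
            time  += b;       // this job finishes at the cumulative time
            total += time;    // its turnaround = finish time - arrival (0)
        }
        return (double) total / bursts.length;
    }

    public static void main(String[] args) {
        int[] jobs = {8, 4, 2, 1};                    // illustrative burst times
        double fcfs = averageTurnaround(jobs);        // run in submission order
        int[] sorted = jobs.clone();
        Arrays.sort(sorted);                          // SJF: shortest first
        double sjf = averageTurnaround(sorted);
        System.out.println("FCFS avg turnaround = " + fcfs); // (8+12+14+15)/4 = 12.25
        System.out.println("SJF  avg turnaround = " + sjf);  // (1+3+7+15)/4  = 6.5
    }
}
```

Running the short jobs first shrinks the average because fewer jobs wait behind the long one.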

95
New cards

What are some interactive system scheduling algorithms?

1) Round-Robin Scheduling
2) Priority Scheduling
3) Multi-level feedback queues
4) Lottery Scheduling
5) Fair Share Scheduling

96
New cards

What is round-robin scheduling?

An interactive system scheduling algorithm where each process has a quantum, at the expiration of which the CPU is given to another process. This promotes fairness.
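(The quantum-and-requeue behaviour can be sketched with a ready queue; names, burst times and the zero-cost context switch are illustrative simplifications.)

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch: round-robin scheduling of jobs that all arrive at time 0.
// Each job runs for at most one quantum, then goes to the back of the queue.
public class RoundRobin {
    static int runAll(int[] bursts, int quantum) {
        Queue<int[]> ready = new ArrayDeque<>();       // entries: {id, remaining}
        for (int i = 0; i < bursts.length; i++)
            ready.add(new int[]{i, bursts[i]});
        int time = 0;
        while (!ready.isEmpty()) {
            int[] job = ready.remove();
            int slice = Math.min(quantum, job[1]);
            time   += slice;                           // job runs one quantum (or less)
            job[1] -= slice;
            if (job[1] > 0) ready.add(job);            // quantum expired: requeue
        }
        return time;   // with zero switch cost, total time = sum of bursts
    }

    public static void main(String[] args) {
        System.out.println(runAll(new int[]{5, 3, 1}, 2));
    }
}
```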

97
New cards

What are some considerations when determining the length of a quantum?

If the quantum is too short, there are too many context switches, lowering CPU efficiency. Too long, and response time on interactive processes is poor.

98
New cards

Why is having a quantum longer than a CPU burst a good thing in interactive scheduling?

Having a quantum longer than the CPU burst means that the process will be placed into blocking and can complete its I/O bound task while another process is executing, rather than waiting to finish its burst the next time it gets processing time. This increases interactivity.

99
New cards

What is priority scheduling?

An interactive system scheduling algorithm where each process is assigned a priority and the process with the highest priority is allowed to run. These processes' priorities are dynamically assigned. Highest priority processes must not be allowed to run forever - can decrease priority while running and/or use a time quantum

100
New cards

What is Multi-Level Feedback Queue scheduling?

It is a type of priority scheduling where a queue is maintained for each level of priority. Each level is run round-robin, and if a process at one level is preempted, it is demoted to a lower priority.