ITSC 3146 test 2 review

57 Terms

1
New cards

There needs to be an ____ mechanism for data sharing among different processes.

explicit

2
New cards

The sharing mechanism is _____ for threads of the same process.

implicit

3
New cards

The bank example demonstrated that threads must not be _________ when accessing shared data. This means that when one thread is accessing shared data, no other thread should be allowed to _________ that same shared data.

interleaved, access

4
New cards

According to the example in the video, the rules of strict alternation stipulate that while one of the siblings is taking a turn, the other sibling must ____.

wait

5
New cards

Mutual ______ is the idea that when one thread enters its critical region, the other thread cannot enter its critical region. This is a benefit of strict alternation.

exclusion

6
New cards

Busy _____ is a drawback of strict alternation because processes waste CPU cycles unnecessarily by repeatedly checking the turn variable.

waiting

7
New cards

In the code presented for Peterson's solution, a process will indicate that it is leaving the critical region by setting the interested flag to FALSE in the _____ method.

leave_region

8
New cards

In _________, a process does not have to wait unless another process is taking a turn or has expressed interest in taking a turn. In _________, on the other hand, a process must wait until the other process has taken a turn, even if the other process is not interested in taking one.

Peterson's Solution, Strict Alternation

9
New cards

Busy waiting uses CPU cycles because it is constantly checking or _____ the status of critical region entry.

polling

10
New cards

An alternative to busy waiting is to put the thread to _____ instead of having it continually poll the status of critical region entry. This effectively blocks or suspends the thread from _______.

sleep, execution

11
New cards

A ___ is a shared variable that can be in either a locked or an unlocked state.

mutex

12
New cards

With a mutex, a thread or process that wishes to enter the critical region must invoke a ____ operation.

lock

13
New cards

With a mutex, the thread_yield call means that the current thread or process yields the ____ so another process or thread can be scheduled instead.

CPU

14
New cards

In the pthread library you would declare a variable of type ____ to create the global mutex variable.

pthread_mutex_t

15
New cards

The pthread_mutex_lock method takes a _____ to the mutex variable as the parameter.

pointer

16
New cards

In the example in the video the bank balance will always be the same no matter how many times we run the program because we can be sure that threads are accessing the balance variable in a ____ manner by using a mutex variable.

safe

17
New cards

Task decomposition is when multiple threads execute ______ with ______ data whereas data decomposition is when multiple threads execute ________ with ______ data.

different functions, the same; the same function, different

18
New cards

An approach in which a thread executes a function and ______ output data, which is then _____ by another thread executing a different function, is referred to as ____ ______ decomposition.

produces, consumed, data, flow

19
New cards

A way to facilitate ____ ____ decomposition is to use a _____ _____, which must be used in a ______ manner, i.e., its state must always be consistent.

Answer 1:

data

Answer 2:

flow

Answer 3:

shared

Answer 4:

buffer

Answer 5:

safe

20
New cards

The pthread library provides a mechanism, called a ____ _____, that allows _______ based on the _____ or state of data.

Answer 1:

condition

Answer 2:

variable

Answer 3:

synchronization

Answer 4:

condition

21
New cards

The ________wait function is used when a thread wants to wait for a particular condition to occur and takes ______ parameter(s). The ________signal function is used when a thread wants to signal the occurrence of a particular condition and takes ____ parameter(s).

Answer 1:

pthread_cond_wait

Answer 2:

2

Answer 3:

pthread_cond_signal

Answer 4:

1

22
New cards

A condition variable is used in conjunction with a mutex variable. True or False.

True

23
New cards

When the pthread_cond_wait function is called, the system automatically ______ the associated mutex variable and then ______ . When the corresponding pthread_cond_signal function is called, the system _____ and then _____ the associated mutex on its behalf.

Answer 1:

unlocks

Answer 2:

puts the calling thread to sleep

Answer 3:

wakes up the calling thread

Answer 4:

locks

24
New cards

The part of the operating system that is responsible for deciding which job to run at any given time is called the _____. It uses a scheduling ______ to make these decisions.

Answer 1:

scheduler

Answer 2:

algorithm

25
New cards

batch -

interactive -

real-time -

batch - a system where performance can be optimized based on jobs

interactive - a system where performance perceived by the user must be optimized

real-time - a system where predictability rather than importance is most important, as there are specific deadlines for each job

26
New cards

Batch systems have three main objectives:

1. Maximize the number of jobs completed in a given time period. This is known as _______.

2. Minimize the time between the submission and completion of jobs, otherwise known as ______ time.

3. Keep the CPU ______ at all times.

throughput, turnaround, utilized

27
New cards

Interactive systems have two main objectives:

1. Minimize the time between the issuing and the completion of jobs, otherwise known as ______ time.

2. Honor the users' perception that short jobs will take a short amount of time and that long jobs will take longer. This is known as ________.

response, predictability

28
New cards

Real-time systems maintain predictability for the system by meeting the ______ and by considering the ______ set by the system.

Answer 1:

deadlines

Answer 2:

priorities

29
New cards

In _______ scheduling, a job that is currently being executed cannot be interrupted to schedule another job. In _______ scheduling, a job that is currently being executed can be interrupted to schedule another job.

Answer 1:

non-preemptive

Answer 2:

preemptive

30
New cards

First come, first served (FCFS) is a _______ policy that runs jobs in the order that they were added to the _____.

Answer 1:

non-preemptive

Answer 2:

queue

31
New cards

The Shortest-Job-First (SJF) policy will execute the job with the shortest CPU _____ first. In the ____ version, jobs that arrive later will wait even if they have a shorter time than what remains on the currently running job.

Answer 1:

burst

Answer 2:

non-preemptive

32
New cards

The Round Robin scheduling policy is more suitable for _____ systems than First Come First Served (FCFS) or Shortest-Job-First (SJF).

interactive

33
New cards

In a Round Robin system, if the ______ of CPU time is too short, there will be too many _______ switches, which creates too much switching ______.

Answer 1:

quantum

Answer 2:

context

Answer 3:

overhead

34
New cards

Priority-based scheduling will schedule the jobs with the ______ priority first. Priority can be determined by an _______ factor, such as how long the job has been in the system, or by an ______ factor, such as its importance to the user.

Answer 1:

highest

Answer 2:

internal

Answer 3:

external

35
New cards

Multi-level scheduling is used for ______ approaches for mixed-type jobs. This scheduling approach divides jobs into separate ______ that may use different scheduling policies.

Answer 1:

hybrid

Answer 2:

queues

36
New cards

All of the memory resources needed by a process are referred to as _______ _______. For a process to be able to execute, its data must be in _______ _______.

Answer 1:

process

Answer 2:

data

Answer 3:

main

Answer 4:

memory

37
New cards

A mechanism that can be used to allow multiple processes to reside in memory simultaneously is _______ _______.

Answer 1:

memory

Answer 2:

partitioning

38
New cards

When using static, equal sized partitions, a given process is always mapped to the same _______ _______.

physical partition

39
New cards

An advantage of static equal sized partition scheme is its _______. A disadvantage is that it suffers from ____ fragmentation.

Answer 1:

simplicity

Answer 2:

internal

40
New cards

In a scheme with static unequal sized partitions, the system needs to store the ______ address and the _____ in order to enforce protection.

Answer 1:

base

Answer 2:

limit

41
New cards

A scheme that allows dynamic partitions of varying sizes starts with unpartitioned memory, called _____ _____ and then creates partitions dynamically based on process needs.

Answer 1:

free

Answer 2:

space

42
New cards

A possible _____ to keep track of free spaces is to use a linked list.

mechanism

43
New cards

In the first fit approach, the system chooses the ______ free space that is _____ enough to fit the process being placed.

Answer 1:

first

Answer 2:

large

44
New cards

Both the best fit and the worst fit approaches require traversal of the entire free space list. True or False.

True

45
New cards

In general, programs issue _____ _____ addresses and not _____ _____ addresses.

Answer 1:

logical or relative

Answer 2:

physical or absolute

46
New cards

Compaction is an option to reduce _____ fragmentation of main memory. The disadvantage of this approach is that it is ____ to perform repeatedly.

Answer 1:

external

Answer 2:

expensive

47
New cards

Paging is a _____ that allows a process to be split up into multiple parts and allocated to multiple partitions. Here, the main memory is divided into equal sized blocks called _____ ______ and the logical address space of a process is divided into blocks of the same size called ________.

Answer 1:

mechanism

Answer 2:

page

Answer 3:

frames

Answer 4:

pages

48
New cards

Logical pages of a given process must always be allocated to contiguous page frames in the physical main memory. True or False?

False

49
New cards

To keep track of what logical page of which process maps to what physical page frame in the main memory, the Operating Systems uses structures called ______ ______.

Answer 1:

page

Answer 2:

tables

50
New cards

In a paged system, the logical address issued by a process includes two parts, namely, the logical _______ number and the _____ within it. Similarly, the physical address includes the physical _____ _____ number and the ______ within it.

Answer 1:

page

Answer 2:

offset

Answer 3:

page

Answer 4:

frame

Answer 5:

offset

51
New cards

The operating system is responsible for translating the logical address issued by a process into a physical address. While doing so, it can retain the _____ as it is.

Answer 1:

offset

52
New cards

To speed up access to page table entries, systems typically store frequently used page table entries in a special cache called the _______ _______ buffer.

Answer 1:

translation

Answer 2:

look-aside

53
New cards

A process may not use all of its data all the time. An approach in which pages may be brought into the main memory as needed is called ______ ______. When using this approach, if a process requests data on some page and that page is not in the main memory yet, a ____ _____ is said to occur. In a system using this approach, logical addresses, also known as _____ addresses, may be _____ than physical addresses.

Answer 1:

demand

Answer 2:

paging

Answer 3:

page

Answer 4:

fault

Answer 5:

virtual

Answer 6:

longer

54
New cards

The "algorithm" shown below is for the ____ page replacement policy.

if (newly requested page is already in main memory) {
    access is a hit
} else {
    access causes a page fault
    identify page whose next access will be farthest in the future
    replace identified page with newly requested page
}

optimal

55
New cards

The "algorithm" shown below is for the ____ page replacement policy.

if (newly requested page is already in main memory) {
    access is a hit
} else {
    access causes a page fault
    identify page that was brought into main memory the longest time ago
    replace identified page with newly requested page
}

first in first out

56
New cards

The "algorithm" shown below is for the ____ page replacement policy.

if (newly requested page is already in main memory) {
    access is a hit
} else {
    access causes a page fault
    identify page whose previous access was farthest in the past
    replace identified page with newly requested page
}

Least Recently Used

57
New cards

Describe the Modified First In First Out page replacement policy, also known as the Second Chance page replacement policy.

Similar to First In First Out, but before evicting the oldest page it checks whether that page has been referenced since it was brought in. A recently referenced page is given a second chance (it is moved to the back of the queue with its reference bit cleared) instead of being replaced immediately, so pages that are still in use are not the first to be replaced.