CS-245 (Computing Systems) Exam 1 -- Multiprocessing


29 Terms

1

For the activity where we did a very long addition problem, first alone and then later as a team where each person was assigned one column to add (with the results aggregated at the end, including handling ‘carry’):

Was this data parallelism or task parallelism? (long answer)

Data parallelism; data parallelism works by dividing up the data into small slices, and running essentially the same procedural steps on each of the slices, then (when needed) combining the intermediate results into an overall result.

2

For the activity where we did a very long addition problem, first alone and then later as a team where each person was assigned one column to add (with the results aggregated at the end, including handling ‘carry’):

Was this data parallelism or task parallelism?

Data parallelism

3

For the activity where we were counting cards (“single processing” was one person counting and sorting all of the cards; “multiprocessing” was a team where each person had one specific task):

Was this data parallelism or task parallelism? (long answer)

Task parallelism; task parallelism is where you take a complicated process and simplify it by breaking down its independent requirements and performing each of those in a separate process/thread. In this exercise, we broke down the very complicated categorization task into simpler task-focused subsets (“count the face cards”, “count the clubs”, etc) and assigned these simple tasks to separate people. Then, by making one very fast pass through the cards (the “data”) while everyone performed their assigned simple task, we completed the overall task much quicker.

4

For the activity where we were counting cards (“single processing” was one person counting and sorting all of the cards; “multiprocessing” was a team where each person had one specific task):

Was this data parallelism or task parallelism?

Task parallelism

5

What are some of the challenges that need to be overcome when multiprocessing?

The main challenge is coordination. If multiple processes are doing truly separate activities which do not use any of the same data, then coordination is not a problem; but in the more interesting case where the separate processes are performing separate parts of one overall objective, coordination between the parts is essential. Without coordination, we would likely get incorrect results (like the “multiple threads update a shared counter” example which we studied in class).

6

What is a “process”?

A program in execution; the operating system has loaded the program file into memory and set up control structures to supervise its execution (e.g. a program counter pointing to the next instruction to be executed).

7

What are the similarities between processes and threads?

Both are methods for telling the computer to execute several things at the same time.

8

Fill in the blanks:

If a running process executes the fork() system call to create a new process, the original process is referred to as the [1], and the newly-created process is referred to as the [2].

  1. parent

  2. child

9

Fill in the blanks:

A process has one [1], but may have any number of [2].

  1. parent

  2. children

10

Fill in the blank:

As new processes are created, they form a [1] structure, which could be diagrammed with the [2] processes at the top and [3] processes below.

  1. tree

  2. parent

  3. child

11
#include <stdio.h>
#include <unistd.h>

int main() {
    fork();
    fork();
    printf("I hope you get this right!\n");
    return 0;
}

How many times will this program print “I hope you get this right!”?

It will print four times.

Explanation:

The program begins execution as just one process.

When it gets to the first fork(), this creates one new child process (so, counting the original parent there are now two).

Each of those processes proceeds to the next line (the second fork()).

Each one causes a new process to be created -- so now there are a total of four.

Each of those four processes executes the printf() line, resulting in the line being printed 4 times.

12
#include <stdio.h>
#include <pthread.h>

long counter = 0;

void *runner(void *tid) {
    counter++;
    printf("Hello, from thread #%ld!\n", (long)tid);
    pthread_exit(NULL);
}

int main(int argc, char *argv[]) {
    printf("Hello, from main!\n");
    /* Create threads */
    pthread_t tid[5];
    for (long t = 0; t < 5; t++) {
        pthread_create(&tid[t], NULL, runner, (void *)t);
    }
    /* Wait for all threads to exit */
    for (int t = 0; t < 5; t++) {
        pthread_join(tid[t], NULL);
    }
    printf("Done. Counter=%ld\n", counter);
    return 0;
}

When this program is run, what output will it produce? Is it possible to predict what order these lines will print in?

This program will produce the following output:

Hello, from main!

Hello, from thread #0!

Hello, from thread #1!

Hello, from thread #2!

Hello, from thread #3!

Hello, from thread #4!

Done. Counter=5

No, it is not possible to predict what order these lines will come out in, except to say that the first line will be printed first and the last line will be printed last. The order of the other lines (from the threads) is unpredictable, since all of those threads are running at the same time; predicting which one reaches its printf() statement in which order is not possible.

Also, while it is likely that the ‘Counter’ value will be 5 at the end, note that this program has no locking around its critical section (the “counter++;” statement), so it is possible that some updates may get lost resulting in the ‘counter’ value being less than 5 after all the threads are done executing.

13

Race condition

A situation where several processes access and manipulate the same data concurrently, and the outcome depends on the particular order in which the accesses take place.

14

Critical section

The segment of code in which the process may be modifying one or more shared resources (e.g. updating a shared variable).

15

Which of the tools for locking access to a critical section is also capable of “counting”, meaning that it could be used to protect a resource of which there is more than one?

Semaphore

16

CPU operations are [quicker/slower] than I/O operations.

quicker

17

The CPU-I/O burst cycle presents an opportunity for the computer to do [more/less] work in the same amount of time.

more

18

Necessary conditions

All four must be true for deadlock to occur: mutual exclusion, hold and wait, no preemption, circular wait

19

Deadlock avoidance

Avoiding deadlock by carefully allocating resources; giving out resources only when it’s “safe”

20

Deadlock

A state in which there is a set of processes where every process is waiting for an event which only another process in the set can cause

21

Detection and recovery

Track resource allocations (e.g. by constructing a resource graph), and use this information to decide if deadlock has occurred. If it has occurred, take steps to recover

22

Deadlock prevention

Make it impossible for deadlock to occur, by attacking the "four conditions". Structure the system so that it is impossible for at least one of the four conditions to be true.

23

Fill in the blank:

In a multiprocessor system, all CPUs have access to ______.

shared memory

24

Cache

Super-fast memory which is located on the processor chip; helps to speed up memory access by remembering values which were previously retrieved

25

Cache coherency

In systems which have a "cache", the problem of "cache coherency" involves how to make sure that the processor sees the correct values even if memory has been updated by a different processor.

26

What are send(dest, &mptr); and receive(addr, &buff); used for?

To send and receive messages between programs which may be running on the same computer or on different computers.

27

What happens when you use “blocking” send?

The program sending the message “blocks” (pauses and does not continue) until it is sure that the message has been sent.

28

What is an RPC (remote procedure call)?

A program on one computer calls a subroutine which is implemented in a program running on a different computer (using message-passing)

29

What is a “connection-oriented" protocol?

A persistent connection is established between the sender and the receiver, then one or more messages are exchanged, and when the conversation is completely over the connection is broken (similar to the way a telephone conversation works).