Parallel Computing Week 1

28 Terms

1

Moore’s law dictates what?

That every 2 years, the number of transistors on microchips tends to double

2

What is the purpose of parallel computing (i.e., optimizing the software to utilize _______?)

Optimizing the software to utilize the hardware on the system

3

What is a supercomputer?

A computer built from a connection of thousands of processors and cores

4

What is a cluster?

A cluster is the practice of putting many computers together to build a supercomputer.

5

Do clusters work collectively or individually?

They work collectively

6

What is a server, in parallel computing terms?

Huge installations of computers connected via a metered connection

7

What is the most important aspect of supercomputers, clusters, and grid computing (____ is key)?

Accuracy in communication (communication is key)

8

What makes debugging hard when it comes to parallel processing?

Parallel processing involves the use of multiple chips, and therefore many different memory addresses, which makes it difficult to track a single process at any one time

9

True or False: the introduction of parallel processing will always reduce processing time to exactly 1/x of the original for x processors.

False; while processor management helps to offload workloads and reduce task time, splitting a 90-second process across 2 processors will not necessarily make the process take 45 seconds
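
A minimal side sketch in Python (not part of the deck; the workload and sizes are illustrative) of why the split is inexact: process creation, scheduling, and result collection all add overhead, so two workers rarely halve the wall-clock time.

```python
import time
from multiprocessing import Pool

def busy_work(n):
    # CPU-bound toy task: sum of squares up to n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    N = 2_000_000

    # Sequential baseline: one processor does all the work.
    start = time.perf_counter()
    busy_work(N)
    busy_work(N)
    seq = time.perf_counter() - start

    # Two workers: the work is halved, but overhead is added.
    start = time.perf_counter()
    with Pool(processes=2) as pool:
        pool.map(busy_work, [N, N])
    par = time.perf_counter() - start

    # Speedup is typically less than 2x because of process
    # creation, scheduling, and communication costs.
    print(f"sequential: {seq:.2f}s  parallel: {par:.2f}s  "
          f"speedup: {seq / par:.2f}x")
```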

10

What are the 4 taxonomies, in relation to parallel computing?

SISD (single instruction, single data), SIMD (single instruction, multiple [different] data), MISD (multiple instructions, single data), and MIMD (multiple instructions, multiple [different] data)

11

SISD involves:
— instruction(s) ?
— piece(s) of data?
— processor(s) ?

1 instruction, 1 piece of data, 1 processor

12

SIMD involves:
— instruction(s) ?
— piece(s) of data?
— processor(s) ?

1 instruction, n pieces of data, n processors
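
As an illustration outside the deck, NumPy's vectorized operations behave in a SIMD-like way: one operation is applied across many data elements, instead of a scalar loop issuing it once per element (a sketch, assuming NumPy is available).

```python
import numpy as np

data = np.arange(1_000_000, dtype=np.float64)

# SISD style: one instruction stream handles one datum at a time.
scalar_result = [x * 2.0 for x in data]

# SIMD style: a single vectorized multiply is applied across the
# whole array of data elements at once.
vector_result = data * 2.0
```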

13

MISD involves:
— instruction(s) ?
— piece(s) of data?
— processor(s) ?

n instructions, 1 piece of data, n processors

14

MIMD involves:
— instruction(s) ?
— piece(s) of data?
— processor(s) ?

n instructions, n pieces of data, n processors

15

Explain the difference between SPMD and MPMD

SPMD involves one program running on many processors with many pieces of data; MPMD involves several programs running on many processors with many pieces of data
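
A small SPMD-style sketch, assuming Python's multiprocessing (the worker function and chunking are illustrative): every process runs the same program logic on a different slice of the data, and a final master step gathers the partial results.

```python
from multiprocessing import Pool

def worker(chunk):
    # Same program in every process; only the data differs (SPMD).
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(100))
    # Split the data into 4 chunks, one per worker process.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partial_sums = pool.map(worker, chunks)
    # A master step pulls the partial results back together.
    print(sum(partial_sums))  # 4950
```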

16

In the MPMD subdivision of parallel processing, what pulls everything back together post-processing?

The Master computer

17

Why might conflicts occur when using multiple processors with shared memory?

Because no errors are thrown when one processor overwrites memory at addresses allocated to another, conflicting writes can go undetected
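
An illustrative sketch of that failure mode (the counts are arbitrary), using Python's multiprocessing with an unlocked shared counter: updates are silently lost, and no exception is ever raised.

```python
from multiprocessing import Process, Value

def increment(counter, n):
    for _ in range(n):
        # Unprotected read-modify-write on shared memory: two
        # processes can read the same old value, and one update
        # silently overwrites the other.
        counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0, lock=False)  # shared, no lock
    procs = [Process(target=increment, args=(counter, 100_000))
             for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # Expected 200000, but the total is usually lower -- and no
    # error was thrown along the way.
    print(counter.value)
```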

18

True or False: pointer tracking is NOT considered sequential computing.

False; pointer tracking involves sequential processes, thus it is considered a sequential practice

19

True or False: you can parallelize monitor and mouse computing.

False

20

What is distributed memory?

A parallel processing approach where a host computer/master program uses the memory/processors of another computer tethered to it via some sort of connection (typically metered)
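
A toy sketch of the idea, using Python's multiprocessing.Pipe to stand in for the (typically metered) connection between machines: each process keeps its memory private, and data moves only through explicit messages.

```python
from multiprocessing import Pipe, Process

def node(conn):
    # This node's memory is private; the host can only see what
    # is explicitly sent back over the connection.
    local_data = [i * i for i in range(10)]
    conn.send(sum(local_data))
    conn.close()

if __name__ == "__main__":
    host_end, node_end = Pipe()
    p = Process(target=node, args=(node_end,))
    p.start()
    # The host/master program receives the remote result; it has
    # no direct view of the node's local_data.
    print(host_end.recv())  # 285
    p.join()
```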

21

True or False: under the distributed memory methodology of parallel computing, a change to a memory item on one node/processor is not seen from another node/processor.

True

22

Just how scalable is distributed memory?

Distributed memory is very scalable

23

Is the chipset multiprocessor architecture or the symmetric multiprocessor architecture the faster computing method, and why?

Chipset multiprocessing is faster, as memory transfers can stay within the chip (processors do not need to communicate over a bus to access memory; instead they share the same cache)

24

When implementing parallel computing for symmetric multiprocessors, what are the most important things to be cautious of?

Taking care with read and write cycles, and taking care with the priority of processor tasks.
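
A hedged sketch of the first caution, again in Python's multiprocessing: wrapping each read-modify-write in a Lock serializes access to the shared counter, so one processor's write cannot clobber another's.

```python
from multiprocessing import Lock, Process, Value

def increment(counter, lock, n):
    for _ in range(n):
        # The lock serializes each read-modify-write cycle.
        with lock:
            counter.value += 1

if __name__ == "__main__":
    lock = Lock()
    counter = Value("i", 0, lock=False)
    procs = [Process(target=increment, args=(counter, lock, 100_000))
             for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)  # reliably 200000
```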

25

When the operating system is deciding the winner of processor cycle ties (in terms of priority), which processor does it give priority to?

A randomly selected processor

26

True or False: heterogeneous chip designs are built to be hardware agnostic.

True; not only are they agnostic when it comes to hardware and operating system compatibility, but they require custom designs and cannot be bought commercially at the average computer store

27

What is the most expensive processor style for parallel processing?

The cluster is the most expensive when it comes to parallel processing implementation, as it is made from commodity parts

28

How does shared processing differ from distributed processing?

The shared processing/memory methodology keeps processes contained to a single computer, where all processors share the same overarching memory and can see changes made to it