Parallel and Distributed Computing: Module 1, 2, 3


1
New cards

Serial Computing

involves processing instructions one at a time, using only a single processor, without distributing tasks across multiple processors.

2
New cards

Sequential Computing

Another term for serial computing.

3
New cards

Parallel Computing

Uses multiple computer cores to attack several operations at once.

It addresses the slow speeds of serial computing.

4
New cards

Parallel Programming

Enables computers to run processes and perform calculations simultaneously, a method called parallel processing.

5
New cards

Parallel Architecture

Unlike serial computing, it can break a job down into its parts and multitask them.

6
New cards

Parallel Computer Systems

Well suited to modeling and simulating real-world phenomena.

7
New cards

Task Distribution

Example of Parallel Computing.

The process by which a supercomputer splits the whole grid (e.g., a weather model's grid) into sub-grids.

8
New cards

Simultaneous Computation

Example of Parallel Computing.

Thousands of processors work simultaneously on different parts of the grid to calculate the data which is stored at different locations.

9
New cards

Communication between Processors

Example of Parallel Computing.

The main reason for processors to communicate with each other is the fact that the weather for one part of the grid can have an impact on the areas adjacent to it.

10
New cards

Multicore Processors

They consist of multiple processing units, or cores, on a single integrated circuit (IC).

This structure facilitates parallel computing, enhancing performance while potentially reducing power consumption.

11
New cards

Speed and Efficiency

Parallel Computing Benefits.

Allows tasks to be completed faster by dividing them into smaller sub-tasks that can be processed simultaneously by multiple processors or cores.

12
New cards

Handling Large Data Sets (Scalability)

Parallel Computing Benefits.

Essential for processing large data sets that would be impractical or too slow to handle sequentially.

13
New cards

Parallel Computing Models the Real World

Parallel Computing Benefits.

Used to crunch numbers on data points in weather, traffic, finance, industry, agriculture, oceans, ice caps, and healthcare.

14
New cards

Saves Time & Saves Money

Parallel Computing Benefits.

By saving time, parallel computing makes things cheaper: the more efficient use of resources may seem negligible on a small scale, but the savings add up as systems scale.

15
New cards

Solving Complex Problems

Parallel Computing Benefits.

With AI and big data, a single web app may process millions of transactions every second; parallel computing helps achieve this faster.

16
New cards

Leverage Remote Resources

Parallel Computing Benefits.

With parallel processing, multiple computers with several cores each can sift through many times more real-time data than serial computers working on their own.

17
New cards

Fault Tolerance

Parallel Computing Benefits.

Parallel systems can be designed to be fault-tolerant, meaning they can continue to operate even if one or more processors fail.

18
New cards

HP Z8

Marketed as the world's most powerful workstation, it packs in 56 cores of computing power, letting it perform real-time editing of 8K video or run complex 3D simulations.

19
New cards

ILLIAC IV

Developed at the University of Illinois in the 1960s with help from NASA and the U.S. Air Force.

It had 64 processing elements capable of handling 131,072 bits at a time.

20
New cards

Search for Extraterrestrial Intelligence (SETI)

Monitors millions of frequencies all day and night.

It uses parallel computing through the Berkeley Open Infrastructure for Network Computing (BOINC).

21
New cards

Distributed Computing

Originally referred to independent computers interconnected via a network that are capable of collaborating on a task.

22
New cards

Distributed Computing

Networks of interconnected computers that work together to achieve a common goal.

23
New cards

Distributed Computing

These computers are often spread across different locations and connected through a network, such as the internet or a local area network (LAN).

24
New cards

Distributed Computer System

Consists of multiple software components that are on multiple computers but run as a single system.

25
New cards

Local Network

A network where computers are physically close to each other.

26
New cards

Wide Area Network

A network that connects computers that are far apart.

27
New cards

Cloud Computing

Services like AWS, Microsoft Azure, and Google Cloud Platform that rely on distributed computing for scalability.

28
New cards

Von Neumann Architecture

This architecture introduced the concept of storing both data and instructions in the same memory.

29
New cards

Reduced Instruction Set Computing (RISC)

Its processors are characterized by a small, highly optimized set of simple instructions.

30
New cards

Complex Instruction Set Computing (CISC)

Its processors feature a large and complex instruction set, with individual instructions that can perform multiple low-level operations in a single instruction.

31
New cards

Memory Address

Specifies the location in memory where data or instructions are stored or retrieved.

32
New cards

Memory Data

The actual information (either data or instructions) stored in memory.

33
New cards

Control in Memory

Manages the flow of data and instructions between memory and the CPU.

34
New cards

ALU or Arithmetic Logic Unit

Performs arithmetic and logical operations.

35
New cards

PC or Program Counter

Keeps track of the address of the next instruction to be executed.

36
New cards

IR or Instruction Register

Holds the current instruction being executed.

37
New cards

MAR or Memory Address Register

Stores the address of the memory location being accessed.

38
New cards

MDR or Memory Data Register

Temporarily holds data being transferred to or from memory.

39
New cards

CU or Control Unit

Coordinates the activities of the CPU, managing the flow of data and instructions.

40
New cards

Accumulator

A register that stores intermediate results of arithmetic and logic operations.

41
New cards

General Purpose Registers

Used for temporary storage of data during processing.

42
New cards

I/O Bus

A communication pathway connecting the CPU and memory with I/O devices.

43
New cards

I/O Interface

Acts as a bridge between the CPU/memory and the I/O devices.

44
New cards

Flynn's Taxonomy

Classifies computer architectures according to how many instruction streams (processes) and data streams they can process simultaneously.

45
New cards

Instruction Pool

A collection or queue of instructions that are waiting to be fetched, decoded, or executed by the CPU.

46
New cards

Single-Instruction, Single Data (SISD)

A uniprocessor machine that is capable of executing a single instruction, operating on a single data stream.

47
New cards

Single-Instruction, Multiple-Data (SIMD)

A multiprocessor machine capable of executing the same instruction on all the CPUs but operating on different data streams.

48
New cards

Multiple-Instruction, Single-Data (MISD)

A multiprocessor machine capable of executing different instructions on different processors, but all of them operate on the same dataset.

49
New cards

Multiple-Instruction, Multiple-Data (MIMD)

A multiprocessor machine that is capable of executing multiple instructions on multiple data sets. Each processor works independently, running its own program on its own data.

50
New cards

Devices or Systems

Key Components of a Distributed Computing System.

Each of these has its own processing capabilities and may also store and manage its own data.

51
New cards

Network

Key Components of a Distributed Computing System.

It connects the devices or systems in the distributed system, allowing them to communicate and exchange data.

52
New cards

Resource Management

Key Components of a Distributed Computing System.

Used to allocate and manage shared resources such as computing power, storage, and networking.

53
New cards

Client-Server Architecture

Type of Distributed Architecture.

The most common method of software organization on a distributed system.

54
New cards

Three-Tier Architecture

Type of Distributed Architecture.

Client machines remain the first tier you access.

Server machines, on the other hand, are further divided into two tiers: application servers that process the clients' requests and database servers that store the data.

55
New cards

N-Tier Architecture

Type of Distributed Architecture.

Includes several different client-server systems communicating with each other to solve the same problem.

56
New cards

Peer-to-Peer Architecture

Type of Distributed Architecture.

Distributed systems assign equal responsibilities to all networked computers.

57
New cards

Message Passing Interface (MPI)

A standardized and portable message-passing system designed for parallel computing architectures.
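
A minimal sketch of how these pieces fit together, assuming the Python mpi4py binding (the cards do not name a particular language binding; the file name in the comment is only illustrative):

    # Run with something like: mpiexec -n 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD       # default communicator: all launched processes
    rank = comm.Get_rank()      # this process's unique rank within the communicator
    size = comm.Get_size()      # total number of processes in the communicator

    print(f"Hello from rank {rank} of {size}")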

58
New cards

Message passing

The sending of a message to an object, parallel process, subroutine, function, or thread; the message is then used to start another process.
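
A sketch of point-to-point message passing with mpi4py (an assumed binding; run with at least two processes). Rank 0 sends a message that starts work on rank 1; the payload contents are made up for illustration:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        payload = {"task": "process-chunk", "start": 0, "end": 100}
        comm.send(payload, dest=1, tag=11)        # message is sent to rank 1
    elif rank == 1:
        payload = comm.recv(source=0, tag=11)     # blocks until the message arrives
        print("rank 1 received:", payload)        # rank 1 now starts its own processing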

59
New cards

Communicator

Defines a group of processes that can communicate with one another.

60
New cards

Message Passing Paradigm

A basic approach for Inter-Process Communication.

Data is exchanged between the sender and the receiver.

61
New cards

Client Server Paradigm

The server acts as a service provider; the client issues a request and waits for the response from the server.

Here, the server is a passive ("dumb") machine: until the client makes a call, the server does not communicate.
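
A rough sketch of the paradigm using Python sockets (the host, port, messages, and file name are hypothetical). The server stays passive until the client calls it:

    import socket
    import sys

    HOST, PORT = "127.0.0.1", 5000        # hypothetical address and port

    def run_server():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((HOST, PORT))
            srv.listen()
            conn, _ = srv.accept()         # the server waits; it never initiates contact
            with conn:
                request = conn.recv(1024)  # read the client's request
                conn.sendall(b"response to: " + request)

    def run_client():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            cli.sendall(b"hello server")   # the client issues the request...
            print(cli.recv(1024))          # ...and waits for the response

    if __name__ == "__main__":
        # Run "python client_server.py server" in one terminal, then the client in another.
        run_server() if sys.argv[1:] == ["server"] else run_client()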

62
New cards

Peer-to-Peer Paradigm

Direct communication between processes.

There is no client or server; any process can send a request to another and get a response.

63
New cards

Message systems

Acts as an intermediary among independent processes.

It works like a switch through which processes exchange messages asynchronously, in a decoupled manner.

The sender sends a message, which is first dropped into the message system and then forwarded to the message queue associated with the receiver.

64
New cards

Synchronous Message Passing

Also called rendezvous or handshaking.

The sender and receiver have to 'meet' at their respective send/receive operations to transfer data.

65
New cards

Asynchronous Message Passing

The sender does not wait for the receiver to reach its receive operation; rather, it sends off the prepared data and continues its execution.
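
A small sketch contrasting the two styles of this and the previous card with mpi4py (an assumed binding; run with two or more processes). ssend completes only once the matching receive has started, while isend returns immediately and the sender keeps working:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        comm.ssend("sync hello", dest=1, tag=0)         # rendezvous: waits for rank 1's recv
        req = comm.isend("async hello", dest=1, tag=1)  # returns at once with a request handle
        # ... rank 0 can keep computing here while the message is in flight ...
        req.wait()                                      # complete the non-blocking send
    elif rank == 1:
        print(comm.recv(source=0, tag=0))
        print(comm.recv(source=0, tag=1))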

66
New cards

Message Queue System

A common architecture in distributed computing and asynchronous communication.

67
New cards

Message Queue

It acts as an intermediary in a Message Queue System.

It collects incoming messages and holds them temporarily.
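
A sketch of the pattern with Python's standard library, where queue.Queue stands in for a real message broker (all names are illustrative):

    import queue
    import threading

    mq = queue.Queue()                 # the intermediary that holds messages temporarily

    def producer():
        for i in range(3):
            mq.put(f"message {i}")     # the sender drops messages into the queue and moves on

    def consumer():
        for _ in range(3):
            msg = mq.get()             # the receiver pulls messages when it is ready
            print("consumed:", msg)
            mq.task_done()

    threading.Thread(target=producer).start()
    threading.Thread(target=consumer).start()
    mq.join()                          # wait until every queued message has been processed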

68
New cards

Collective Communication

Communication in which data is aggregated from, or disseminated to, multiple processes.

69
New cards

Broadcast

Distributing data from one process to all processes in the group.
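
A minimal broadcast sketch with mpi4py (assumed binding): whatever the root holds ends up on every process.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    data = {"config": "shared settings"} if rank == 0 else None   # only the root has it
    data = comm.bcast(data, root=0)    # after the call, every rank holds the same object
    print(f"rank {rank} has {data}")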

70
New cards

Scatter

Takes an array of elements and distributes the elements in the order of process rank.
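
A minimal scatter sketch with mpi4py (assumed binding): the root's array is split so that rank i receives the i-th element.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    chunks = [[i, i * 10] for i in range(size)] if rank == 0 else None
    local = comm.scatter(chunks, root=0)   # rank i receives chunks[i]
    print(f"rank {rank} got {local}")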

71
New cards

Gather

This is the inverse of MPI Scatter; it takes elements from many processes and gathers them into one single process.
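
A minimal gather sketch with mpi4py (assumed binding): each rank contributes one value and the root collects them in rank order.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    local = rank * rank                     # each process contributes its own value
    gathered = comm.gather(local, root=0)   # the root receives a list ordered by rank
    if rank == 0:
        print("root gathered:", gathered)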


72
New cards

Reduce

Takes an array of elements on each process and returns an array of output elements to the root process; the output contains the reduced result (e.g., a sum).
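
A minimal reduce sketch with mpi4py (assumed binding), using a sum as the reduction operation:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    partial = rank + 1                                # each process's partial result
    total = comm.reduce(partial, op=MPI.SUM, root=0)  # only the root gets the combined sum
    if rank == 0:
        print("sum over all ranks:", total)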

73
New cards

MPI_Allgather function

A collective communication operation in MPI that gathers data from all processes in a group and distributes it to all members of the group.
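
A minimal allgather sketch with mpi4py (assumed binding): like gather, but every rank ends up with the full result.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    everyone = comm.allgather(rank * 2)   # each rank receives the values from all ranks
    print(f"rank {rank} sees {everyone}")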

74
New cards

Barrier Synchronization

Collective Communication Pattern.

All processes in a communicator must reach the barrier before any can proceed.

It enforces a synchronization point.
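
A minimal barrier sketch with mpi4py (assumed binding): ranks arrive at different times, but none proceeds until all have arrived.

    import time
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    time.sleep(rank)    # ranks reach the barrier at different times
    comm.Barrier()      # the synchronization point: everyone waits here
    print(f"rank {rank} passed the barrier")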

75
New cards

Data Movement

Collective Communication Pattern.

Data is distributed or collected among processes.

76
New cards

Collective Operations

Collective Communication Pattern.

One root process collects data from each process in the communicator and operates on that data to compute a result.

77
New cards

Reduce-Scatter

Combines partial results from all processes and then scatters the combined result back across the processes.

78
New cards

Unique Identifier (UID)

A numeric or alphanumeric string that is associated with a single entity within a given system.

79
New cards

Process Rank

Type of Unique Identifier.

Simple integer: Each process is assigned a unique integer value.

80
New cards

Process ID

Type of Unique Identifier.

System-assigned identifier: The operating system assigns each process a unique identifier.

81
New cards

Logical Topology

Type of Unique Identifier.

Hierarchical or grid-based structure: Processes are organized into a logical topology, such as a tree or a grid.

82
New cards

Custom Identifiers

Type of Unique Identifier.

User-defined identifiers: Processes can be assigned unique identifiers based on various criteria, such as location, function, or workload.

83
New cards

Process Identifiers

These identifiers are crucial for tracking, managing, and controlling processes and users on a system.

84
New cards

User ID

A numerical identifier that represents the user associated with the process, used for security and access control purposes.

85
New cards

Process ID

A numerical identifier for a process, used to refer to it in system calls or commands.

86
New cards

Process Group ID

A numerical identifier for a group of processes that are managed together as a single unit.

87
New cards

Session ID

Associates a group of processes with a particular user session.

A unique number that a server assigns to requesting clients.

88
New cards

Session

A temporary connection between a server and a client.

89
New cards

ID

Stands for identifier and is used to identify and track user activity.

90
New cards

Fork-Join Parallelism

A programming model that allows tasks to be split into subtasks (forking) and later combined (joining) after execution.
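
A sketch of the model using Python's concurrent.futures (one possible library choice; the cards do not prescribe one). The fork step splits a sum into chunk sums, and the join step merges the partial results, matching the Fork and Join cards that follow:

    from concurrent.futures import ProcessPoolExecutor

    def chunk_sum(chunk):
        return sum(chunk)

    def fork_join_sum(data, workers=4):
        step = max(1, len(data) // workers)
        chunks = [data[i:i + step] for i in range(0, len(data), step)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            partials = list(pool.map(chunk_sum, chunks))  # fork: subtasks run concurrently
        return sum(partials)                              # join: merge the partial results

    if __name__ == "__main__":
        print(fork_join_sum(list(range(1_000_000))))      # prints 499999500000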

91
New cards

Fork

The step that splits the task into smaller subtasks executed concurrently.

92
New cards

Join

The step that merges the results of the executed subtasks into one result.

93
New cards

Divide and Conquer

The strategy that fork-join parallelism uses.

94
New cards

Fork-Join Parallelism

Its framework supports a style of parallel programming that solves problems by divide and conquer.

95
New cards

Fork-Join Parallelism

It delineates a set of tasks that can be executed simultaneously, beginning at the same starting point, the fork, and continuing until all concurrent tasks are finished, having reached the joining point.

96
New cards

Multithreading

The ability of a program or an operating system to serve more than one user at a time without requiring multiple copies of the program.

97
New cards

Multithreading

A CPU feature that allows programmers to split processes into smaller subtasks called threads that can be executed concurrently.
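
A small sketch with Python's threading module: one process is split into thread subtasks that run concurrently (the worker function is only illustrative).

    import threading

    def worker(name):
        print(f"thread {name} doing its share of the work")

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()     # launch the threads concurrently
    for t in threads:
        t.join()      # wait for every thread to finish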

98
New cards

Context Switching

The state of a thread is saved and the state of another thread is loaded whenever any interrupt (due to I/O or manually set) takes place.

99
New cards

Process

A program being executed.

100
New cards

Thread

Independent units into which a process can be further divided.