Parallel Final Notes


83 Terms

1
New cards

Parallel Computer Architecture

The method of organizing resources to maximize performance and programmability within given limits.

2
New cards

Uniform Memory Access

Model where all processors share physical memory uniformly, with equal access time.

3
New cards

Symmetric Multiprocessor

All processors have equal access to peripheral devices in the system.

4
New cards

Asymmetric Multiprocessor

Only one or a few processors can access peripheral devices in a system.

5
New cards

Peripheral Device

Devices like printers, mice, scanners, and keyboards that can transfer data to or from memory without involving the processor.

6
New cards

Non-Uniform Memory Access

Model where memory access time varies with the location of the memory relative to the accessing processor.

7
New cards

Local Memory

Memory physically distributed among all processors; each processor has its own local memory.

8
New cards

Distributed Memory Multicomputer System

Consists of multiple computers (nodes) interconnected by a message-passing network; each node has its own processor, local memory, and I/O devices.

9
New cards

Pipelining

Technique that divides a task into smaller subtasks and assigns them to different stages (or processors) that work concurrently on a stream of inputs, improving performance and efficiency.
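
A minimal sketch of the idea (all names here are illustrative, not from the notes): two stages connected by queues, each running in its own thread, so different items can occupy different stages at the same time.

```python
# Toy pipeline: stage 1 adds 1, stage 2 doubles; items flow through
# queues so the stages work concurrently on different items.
import queue
import threading

def stage(fn, inbox, outbox):
    while True:
        item = inbox.get()
        if item is None:          # sentinel: shut this stage down
            outbox.put(None)      # pass the sentinel downstream
            break
        outbox.put(fn(item))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=stage, args=(lambda x: x + 1, q1, q2)),
    threading.Thread(target=stage, args=(lambda x: x * 2, q2, q3)),
]
for t in threads:
    t.start()
for item in [1, 2, 3]:
    q1.put(item)
q1.put(None)

results = []
while (item := q3.get()) is not None:
    results.append(item)
for t in threads:
    t.join()
print(results)  # [4, 6, 8]
```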

10
New cards

Parallelism by Multiple Functional Units

The number of functional units that can be utilized efficiently is restricted by data dependencies between neighboring instructions.

11
New cards

Superscalar Processors

Processors in which dependencies are determined dynamically at runtime by hardware; instructions are dispatched to functional units using dynamic scheduling.

12
New cards

Parallelism at Process or Thread Level

System where each core of a multicore processor must obtain a separate flow of control, accessing the same memory and sharing caches, requiring coordination of memory accesses.

13
New cards

Memory System Parallelism

Increasing the number of memory units and communication bandwidth.

14
New cards

Communication Parallelism

Increasing the amount of interconnections between elements and communication bandwidth.

15
New cards

Dataflow Architectures

Architecture that processes data based on availability and dependencies of data rather than sequential order of instructions, hard to build correctly.

16
New cards

Coherence

Writes to a location become visible to all processors in the same order, implemented with a hardware protocol based on the model of memory consistency.

17
New cards

Sequential Consistency

Ensures that the order of operations executed by different processes appears consistent with a single global order of execution and with the program order of each individual process.

18
New cards

ACID Transactions

Atomicity, Consistency, Isolation, and Durability ensure the integrity and reliability of database transactions.

19
New cards

Distributed Shared Memory Systems

Memory architecture where physically separated memory can be addressed as a single shared address space.

20
New cards

Page Based Approach

Uses virtual memory to map pages of shared data to the local memory of each processor.

21
New cards

Shared Variable Approach

Uses routines to access shared variables distributed across processors.

22
New cards

Object Based Approach

Uses objects as units of data distribution and access, with each object having a set of methods that can be performed on processors.

23
New cards

Components of Interconnection Networks

Links, Switches, Network Interfaces.

24
New cards

Direct Connection Networks

Fixed point-to-point connections between neighboring nodes with fixed topology, such as rings, meshes, and cubes.

25
New cards

Indirect Connection Networks

Communication topology can change dynamically based on application demands, such as bus networks, multistage networks, and crossbar switches.

26
New cards

Routing

Determines the path from source to destination and how packets are routed.

27
New cards

Dimension Order Routing

Restricts legal paths so that there is exactly one route from each source to each destination, traversing one dimension at a time.

28
New cards

Deterministic Routing

Route taken by a message is determined exclusively by its source and destination, not by other traffic in the network.

29
New cards

Minimal Routing Algorithm

Selects the shortest route toward the destination of the message.
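
A sketch of dimension-order (XY) routing on a 2D mesh, tying the last few cards together (the function and coordinates are made up for illustration): the message first corrects its X coordinate, then its Y coordinate, so exactly one minimal route exists per source/destination pair.

```python
# Dimension-order (XY) routing on a 2D mesh: move along X until it
# matches the destination, then along Y. Deterministic and minimal.
def xy_route(src, dst):
    x, y = src
    path = [(x, y)]
    while x != dst[0]:              # travel along the X dimension first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:              # then along the Y dimension
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))
# [(0, 0), (1, 0), (2, 0), (2, 1)]
```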

30
New cards

Domain Name System (DNS)

Translates domain names to IP addresses for browsers to load internet resources.

31
New cards

IP Address

Unique ID for a device connected to the internet, allows browsers to interact.

32
New cards

DNS Recursor

Server designed to receive queries from client machines through web browsers, responsible for making additional requests to satisfy the query.

33
New cards

Root Nameserver

First step in translating human-readable host names into IP addresses.

34
New cards

Top Level Domain (TLD) Nameserver

Hosts the last portion of a hostname, such as ".com" in "example.com".

35
New cards

Authoritative Nameserver

Returns the IP address for the requested hostname to the DNS recursor if it has access to the requested record.
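
The resolution chain in the DNS cards above can be modeled as three table lookups (all server names and records below are made-up stand-ins): the recursor asks a root nameserver, then the TLD nameserver, then the authoritative nameserver, and returns the final IP to the client.

```python
# Toy model of recursive DNS resolution; the three dicts stand in for
# the root, TLD, and authoritative nameservers.
ROOT = {"com": "tld-com"}                        # root: TLD -> TLD server
TLD = {"tld-com": {"example.com": "auth-1"}}     # TLD: hostname -> authoritative server
AUTH = {"auth-1": {"example.com": "93.184.216.34"}}  # authoritative: hostname -> IP

def resolve(hostname):
    tld = hostname.rsplit(".", 1)[-1]
    tld_server = ROOT[tld]                       # 1. ask a root nameserver
    auth_server = TLD[tld_server][hostname]      # 2. ask the TLD nameserver
    return AUTH[auth_server][hostname]           # 3. ask the authoritative nameserver

print(resolve("example.com"))  # 93.184.216.34
```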

36
New cards

Transmission Control Protocol (TCP)

Provides reliable, ordered, and error-checked delivery of a stream of bytes between applications running on hosts communicating via an IP network.

37
New cards

User Datagram Protocol (UDP)

Communications protocol used to establish low latency and loss-tolerating connections between applications on the internet, enabling faster transmission.
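
A minimal UDP sketch over localhost using Python's standard socket module: the sender fires a datagram at the receiver's address with no handshake, ordering, or delivery guarantee (unlike TCP), which is what makes it low-latency and loss-tolerating.

```python
# UDP is connectionless: bind a receiver, then send a datagram to it
# directly with no connection setup.
import socket

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # OS picks a free port
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello", addr)          # no handshake, just a datagram

data, _ = recv_sock.recvfrom(1024)
print(data)  # b'hello'
send_sock.close()
recv_sock.close()
```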

38
New cards

Open Systems Interconnection (OSI) Model

Conceptual model enabling diverse communication systems to communicate using standard protocols.

39
New cards

Distributed System

Collection of interconnected computers working together to achieve a common goal, processing and storing data, performing computations, and providing services across multiple machines.

40
New cards

MapReduce

A programming model or pattern within the Hadoop framework used to access big data in the Hadoop File System (HDFS).

41
New cards

Hadoop

A framework that gives companies the ability to store and process huge amounts of data across clusters of machines; it includes a distributed file system (HDFS).

42
New cards

Apache Spark

A multi-language engine for executing data engineering, data science, and machine learning on single node machines or clusters

43
New cards

Map

Splits data into smaller blocks and assigns them to mappers for processing.

44
New cards

Reduce

Map output values with the same key are assigned to a single reducer.

45
New cards

Combine

(Optional) A reducer that runs individually on each mapper server.

46
New cards

Partition

Assigns the key-value pairs produced by the mappers to reducers; it decides which reducer receives each key.
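
The map/partition/reduce phases from the last few cards can be sketched in pure Python as a word count (function names and data are illustrative; a real Hadoop job distributes these phases across nodes):

```python
# Word count via map -> partition -> reduce, all in-process.
from collections import defaultdict

def map_phase(block):                 # Map: text block -> (word, 1) pairs
    return [(word, 1) for word in block.split()]

def partition(pairs, n_reducers):     # Partition: pick a reducer per key
    buckets = defaultdict(list)
    for key, value in pairs:
        buckets[hash(key) % n_reducers].append((key, value))
    return buckets

def reduce_phase(pairs):              # Reduce: sum the values per key
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

blocks = ["big data big", "data big"]
pairs = [p for block in blocks for p in map_phase(block)]
result = {}
for bucket in partition(pairs, 2).values():
    result.update(reduce_phase(bucket))
print(result)  # {'big': 3, 'data': 2}
```

Because the partitioner sends all pairs with the same key to the same bucket, each reducer can total its keys independently.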

47
New cards

YARN

Goal is to split up the functionalities of resource management and job scheduling/monitoring into separate daemons.

48
New cards

Container

Holds physical resources like a disk on a single node, CPU cores, or RAM.

49
New cards

Container Launch Context (CLC)

Contains records of dependencies, security tokens, environment variables.

50
New cards

Application Master

Posts CLC by requesting the container from the node manager.

51
New cards

Node Manager

Takes care of individual nodes in the Hadoop cluster and manages containers related to each node. It is registered to the Resource Manager and sends each node's health status.

52
New cards

Resource Manager

The master daemon of YARN and assigns resources.

53
New cards

Scheduler

Responsible for allocating resources to various applications subject to familiar constraints of capacities, queues, etc.

54
New cards

Applications Manager

Responsible for accepting job submissions, negotiating the first container for executing the application-specific ApplicationMaster, and providing the service for restarting the ApplicationMaster container on failure.

55
New cards

Multi-tenancy

Allows access to multiple data processing engines.

56
New cards

Pipelining Steps

Fetch, decode, execute, write back

57
New cards

Data Parallelism

Increases the amount of data operated on at the same time
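
A minimal data-parallel sketch (names are illustrative): the same operation is applied to many data elements at once by a pool of workers. A thread pool is used here for simplicity; process pools or SIMD hardware apply the same idea.

```python
# Apply one function across many elements concurrently.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

data = [1, 2, 3, 4]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, data))  # one task per element, order preserved
print(results)  # [1, 4, 9, 16]
```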

58
New cards

Processor Parallelism

Increases the number of processors

59
New cards

Atomicity (ACID)

Entire transaction takes place at once, or not at all

60
New cards

Consistency (ACID)

Database must be consistent before and after the transaction

61
New cards

Isolation (ACID)

Multiple transactions occur independently without interference

62
New cards

Durability (ACID)

Changes made by a successful transaction persist even if there is a system failure
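
Atomicity can be demonstrated with Python's built-in sqlite3 module (the table and values are made up for illustration): a transaction that fails partway is rolled back, so none of its changes persist.

```python
# A debit without its matching credit "crashes" mid-transaction;
# the connection context manager rolls the whole transaction back.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
conn.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 0)")
conn.commit()

try:
    with conn:  # one transaction: commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'a'")
        raise RuntimeError("crash before the matching credit")
except RuntimeError:
    pass

balance = conn.execute("SELECT balance FROM accounts WHERE name = 'a'").fetchone()[0]
print(balance)  # 100 -- the partial debit did not persist
```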

63
New cards

Links

A cable of one or more optical fibers or electrical wires that transmits analog signals from one end to the other, from which the original digital information is recovered

64
New cards

Switches

Composed of a set of I/O ports, an internal crossbar connecting inputs to outputs, internal buffers, and control logic to effect the I/O connection at each point in time

65
New cards

Network Interfaces

Formats packets and constructs the routing and control information; may perform end-to-end error checking and flow control

66
New cards

Topology

The pattern to connect the individual switches to other elements like processors, memories, and other switches

67
New cards

Concurrency of Distributed Systems

Distributed systems leverage concurrency and parallelism to improve performance and throughput

68
New cards

Redundancy

Multiple copies of data or services are maintained to ensure availability in case of failures

69
New cards

Fault Tolerance

Mechanisms that include replication and data recovery techniques

70
New cards

Client Server Architecture

Clients request services or resources from central servers and central servers handle data processing and storage

71
New cards

Peer to Peer Architecture

Allows distributed nodes (peers) to act as both clients and servers where peers share resources directly without a central server

72
New cards

Microservices Architecture

Where an application is composed of small, independent services that each focus on a specific function and communicate through APIs; common in cloud-based applications

73
New cards

Distributed Storage Systems

NoSQL, Cassandra, MongoDB, Hadoop Distributed File System (HDFS) for big data all use distributed storage

74
New cards

Content Delivery Network

Uses geographical distribution to reduce latency and improve user experience

75
New cards

Driver

Converts the user’s code into multiple tasks that can be distributed across worker nodes

76
New cards

Executors

Run on the worker nodes and execute the tasks assigned to them

77
New cards

Complex Problems

Often require exponential time to solve, making them impractical for large datasets

78
New cards

P Problems

Problems that are solvable in polynomial time, they have a predictable execution time related to the input size

79
New cards

NP Problems

Problems whose proposed solutions can be verified in polynomial time
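
A sketch of "verifiable in polynomial time" using subset-sum (the function and numbers are illustrative): checking a proposed certificate takes only linear time, even though finding one may take exponential time.

```python
# Verify that the indices in `certificate` pick numbers summing to target.
def verify_subset_sum(numbers, target, certificate):
    return (len(set(certificate)) == len(certificate)        # no repeated index
            and all(0 <= i < len(numbers) for i in certificate)
            and sum(numbers[i] for i in certificate) == target)

nums = [3, 7, 12, 5]
print(verify_subset_sum(nums, 10, [0, 1]))  # True  (3 + 7 == 10)
print(verify_subset_sum(nums, 10, [2]))     # False (12 != 10)
```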

80
New cards

Decision Problems

Involve determining a binary outcome based on the input

81
New cards

Optimization Problems

Seek the best solution from a set of feasible solutions

82
New cards

Nick’s Class

Represents problems efficiently solvable in parallel, emphasizing low-depth circuits

83
New cards

Algorithmic Complexity

Refers to the efficiency of algorithms in terms of time and space requirements; assesses how an algorithm's performance scales as the input size grows