COMP1860 Lecture 9.2 Notes: Parallel Processing

Description and Tags

Flashcards covering symmetric multiprocessing, cache coherence, multithreading, clusters, NUMA, and cloud computing.

23 Terms

1

Symmetric Multiprocessing (SMP)

A system design in which two or more similar processors share main memory under a single operating system, increasing processing power through parallel execution.

2

SMP Key Characteristics

Includes multiple processors of similar capability, shared main memory and I/O facilities, equal memory access time for all processors, symmetric architecture where all processors perform the same tasks, and a single OS to manage all processors.

3

Time-Shared Bus

A simple multiprocessor interconnection mechanism using shared control, address, and data lines.

4

Addressing

Identifies data sources and destinations on a bus.

5

Arbitration

Resolves competing bus access requests.

6

Time-Sharing

Allows one module to control the bus at a time.

7

Cache Coherence Problem

Arises when multiple caches hold copies of the same memory line; if one processor updates its copy, the other caches are left with stale data unless they are notified or invalidated.

8

MESI Protocol

A snoopy cache coherence protocol widely used in x86 architectures; its name comes from the four cache-line states Modified, Exclusive, Shared, and Invalid.

9

Write-back Policy

Writes update only the cache; main memory is updated later, when the modified (dirty) line is evicted.

10

Write-through Policy

Writes update both the cache and main memory simultaneously.
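The difference between the two write policies can be seen by counting main-memory writes. This is a minimal sketch (a single-line cache, not a real cache model; the function name and access stream are illustrative):

```python
def memory_writes(policy, accesses):
    """Count main-memory writes for a stream of ('read'|'write', address)
    accesses on a simplified one-line cache."""
    mem_writes = 0
    cached = None          # address currently held in the single cache line
    dirty = False          # write-back only: line modified since it was loaded
    for op, addr in accesses:
        if cached != addr:                       # miss: replace current line
            if policy == "write-back" and dirty:
                mem_writes += 1                  # flush the dirty line on eviction
            cached, dirty = addr, False
        if op == "write":
            if policy == "write-through":
                mem_writes += 1                  # every write also goes to memory
            else:
                dirty = True                     # defer the memory update
    if policy == "write-back" and dirty:
        mem_writes += 1                          # final flush of a dirty line
    return mem_writes

stream = [("write", 0), ("write", 0), ("write", 0), ("read", 1)]
memory_writes("write-through", stream)  # 3: one memory write per store
memory_writes("write-back", stream)     # 1: one flush when the dirty line is evicted
```

Repeated writes to the same line are where write-back wins: it pays for the traffic once, at eviction time, instead of on every store.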

11

Directory Protocols

Maintain a centralized directory recording which caches hold copies of each memory line, consulted on every relevant access.

12

Snoopy Protocols

Each cache controller monitors (snoops on) the shared bus and reacts to transactions affecting lines it holds.

13

Write-Invalidate Protocol

A write operation invalidates copies in other caches before proceeding, ensuring only one writer at a time.

14

Write-Update Protocol

A write is broadcast to all caches holding the line so they can update their copies, allowing multiple readers and multiple writers.
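The contrast between the two snoopy protocols above can be sketched with caches modeled as plain dictionaries (an illustration only; the names `write`, `P0`, `P1` are made up for the example):

```python
def write(caches, writer, addr, value, protocol):
    """Apply one write under a toy coherence model.
    caches: {cpu: {addr: value}}; protocol: 'invalidate' or 'update'."""
    for cpu, lines in caches.items():
        if cpu == writer:
            lines[addr] = value            # the writer always gets the new value
        elif addr in lines:
            if protocol == "invalidate":
                del lines[addr]            # other copies become Invalid
            else:
                lines[addr] = value        # broadcast the new value to sharers

caches = {"P0": {0x10: 1}, "P1": {0x10: 1}}
write(caches, "P0", 0x10, 2, "invalidate")
# P0 now holds 2; P1's copy is gone and must be re-fetched on its next read.
```

Under `"update"` both caches would instead end up holding the new value, which saves P1 a miss but costs a broadcast on every write.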

15

MESI Cache Line States

Modified, Exclusive, Shared, and Invalid, which indicate the status of a cache line.
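The four states can be connected by a transition function. This is a simplified sketch of common MESI transitions, not a complete protocol (real implementations also exchange bus signals and data; the event names here are assumptions):

```python
def next_state(state, event):
    """Simplified MESI next-state for one cache line.
    state: 'M', 'E', 'S', or 'I'.
    event: a local 'read'/'write', or a snooped 'bus_read'/'bus_write'
    issued by another processor for this line."""
    table = {
        ("I", "read"): "S",       # line filled; conservatively assume other sharers
        ("I", "write"): "M",      # read-for-ownership, then modify
        ("S", "write"): "M",      # invalidate the other sharers first
        ("E", "write"): "M",      # silent upgrade: no other cache holds the line
        ("E", "bus_read"): "S",   # another reader appears
        ("M", "bus_read"): "S",   # supply the data, drop to Shared
        ("M", "bus_write"): "I",  # another writer takes ownership
        ("E", "bus_write"): "I",
        ("S", "bus_write"): "I",
    }
    return table.get((state, event), state)   # anything else leaves the state unchanged

next_state("E", "write")      # 'M' (silent upgrade)
next_state("M", "bus_read")   # 'S'
```

Note the `I → read` entry: a real controller would enter Exclusive when no other cache responds; this sketch always enters Shared for simplicity.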

16

Interleaved Multithreading (Fine-Grained)

The processor handles multiple thread contexts simultaneously, switching to a different thread at each clock cycle.

17

Blocked Multithreading (Coarse-Grained)

Executes instructions from a single thread continuously until a long-latency event occurs, then switches to another thread.
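The difference between the two multithreading styles above is easiest to see as the per-cycle issue schedule. A toy sketch (not a hardware model; `stall_every` stands in for a long-latency event such as a cache miss):

```python
def interleaved(threads, cycles):
    """Fine-grained: switch to the next thread on every clock cycle."""
    return [threads[c % len(threads)] for c in range(cycles)]

def blocked(threads, cycles, stall_every):
    """Coarse-grained: run one thread until a simulated stall, then switch.
    `stall_every` is the assumed number of cycles between stalls."""
    schedule, current, run = [], 0, 0
    for _ in range(cycles):
        schedule.append(threads[current])
        run += 1
        if run == stall_every:                 # long-latency event: thread switch
            current = (current + 1) % len(threads)
            run = 0
    return schedule

interleaved(["T0", "T1"], 6)             # ['T0', 'T1', 'T0', 'T1', 'T0', 'T1']
blocked(["T0", "T1"], 6, stall_every=3)  # ['T0', 'T0', 'T0', 'T1', 'T1', 'T1']
```

SMT (next card) differs from both: rather than picking one thread per cycle, it issues instructions from several threads in the same cycle.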

18

Simultaneous Multithreading (SMT)

Allows multiple threads to issue instructions in the same cycle, maximizing CPU resource utilization.

19

Chip Multiprocessing (Multicore)

Multiple cores are integrated on a single chip, each executing its own thread independently.

20

Clustering

An alternative to SMP, consisting of multiple interconnected computers (nodes) working together.

21

NUMA (Nonuniform Memory Access)

Memory access time depends on which region is accessed; different processors experience different memory latencies.

22

CC-NUMA (Cache-Coherent NUMA)

A NUMA system that maintains cache coherence among processors.

23

Cloud Computing

Delivers flexible, on-demand compute resources at scale.